Sheila May Edmonds
Born: 1 April 1916 in Kingston, Kent, England
Died: 2 September 2002 in Cambridge, England

Sheila Edmonds was the only child of Harold Montagu Edmonds (born in Camberwell, Surrey, about 1884), who trained as a telephone engineer, and his wife, Florence Myra Lilley (born about 1887 in Stapleford, Nottinghamshire), who was a teacher. At Wimbledon High School she excelled in mathematics, and in 1935 she entered Newnham College, Cambridge, to study that subject. Newnham College had opened in 1880, although it grew from a house for women students established in 1871 after two years of planning. Although women could attend classes and take the examinations at the University of Cambridge in Edmonds' time, they could not graduate with a degree; that only became possible in 1947. In Part II of the Mathematical Tripos Edmonds was a Wrangler (placed in the First Class), and in Part III, which she took in the following year, she received a distinction. She then began research under G H Hardy, who by this time was nearing the end of his career. While undertaking research she spent a year at Westfield College, London, and another year at the University of Paris. Edmonds' doctoral thesis, Some multiplication problems, was completed in 1944. Before her thesis was complete she had already published a number of papers which would form part of its work. Two papers appeared in 1942: On the multiplication of series which are infinite in both directions was published in the Journal of the London Mathematical Society, while On the Parseval formulae for Fourier transforms appeared in the Proceedings of the Cambridge Philosophical Society. We should say a little about the mathematics which these papers contain.
In the first of these papers Edmonds looks at the doubly infinite series ∑ a[n], where the sum is over both positive and negative integers. The series is said to converge if the two series, one defined over the positive integers and the other over the negative integers, both converge; in this case the original series is said to have sum A = A' + A'', where A' and A'' are the sums over the positive and negative integers respectively. The product of two such series ∑ a[n] and ∑ b[n] is defined to be ∑ c[n], where c[n] = ∑ a[m]b[n-m], the sum again being over all integers m, both positive and negative. Edmonds proves that even when ∑ a[n] converges with sum A, ∑ b[n] converges with sum B, and ∑ c[n] converges with sum C, the relation C = AB need not hold. In the second of the two papers Edmonds looks at conditions under which the integral from 0 to ∞ of the product of two functions f(x) and g(x) is equal to the integral from 0 to ∞ of the product of the cosine transforms of the two functions. Similar questions are investigated for the sine transforms rather than the cosine transforms. In a series of papers published over the following years Edmonds examined a whole variety of conditions on the functions f and g which give the required equalities. In 1957 Edmonds published Sums of powers of the natural numbers. In it she answered a question which arises very naturally. The formula ∑ n^3 = (∑ n)^2 (*) has been known for over 1000 years (certainly al-Karaji knew it around 1000 AD). Edmonds asks whether there are any other formulae of this type; in particular, what formulae of the form ∑ Cn^p = (∑ n^q)^2 are possible for C a constant and p, q non-negative integers? In Sums of powers of the natural numbers she showed that al-Karaji's formula (*) is the only one that exists. The demands made on lecturers at Cambridge were great, for they had to tutor students in all branches of pure and applied mathematics.
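Al-Karaji's identity (*) is easy to check numerically. The short sketch below is an illustration for the reader, not taken from Edmonds' paper; the function name is invented.

```python
# Check al-Karaji's identity: 1^3 + 2^3 + ... + N^3 = (1 + 2 + ... + N)^2.
def sum_powers(N, p):
    """Sum of n**p for n = 1 .. N."""
    return sum(n**p for n in range(1, N + 1))

for N in (1, 2, 10, 1000):
    assert sum_powers(N, 3) == sum_powers(N, 1) ** 2
```

Edmonds' theorem is the converse direction: no other choice of the constant C and exponents p, q makes ∑ Cn^p = (∑ n^q)^2 hold for all N.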
Edmonds wished to give this aspect of her job 100% of her effort, so after 1957 she published no further papers. Teaching was a task in which she excelled [1]:- ... conveying in a characteristically understated manner a deep enthusiasm for her particular subject areas. Her patience, thoroughness and encouragement set a generation of women on the path to careers in mathematics and related subjects. Edmonds made important contributions in addition to her teaching. She became Vice-Principal of Newnham College, Cambridge, in 1960 and held this post until she retired in 1981. Other positions in which she served included ones in the Mathematical Association and the Examination Syndicate, as well as serving several schools by sitting on their boards of governors. She held positions of influence in Cambridge, such as on the University Faculty Board of Mathematics, which she chaired in 1975 and 1976. As a person she is described in these terms in [1]:- Sheila Edmonds was fundamentally a shy person but her reticent style concealed a great deal of warmth and real concern for her students, colleagues and friends. On formal occasions, she cut an impressive figure but without any hint of pomposity. After her retirement she was content to live quietly with her newly acquired dog until that was made impossible by the onset of Alzheimer's disease. She had no close relatives. Professor John McCutcheon writes [2]:- It is no exaggeration to say that the lectures of Sheila Edmonds were "wonderfully lucid and inspiring". I remember her as one of the very best teachers of my undergraduate career. She came across as a most sympathetic lecturer, clearly anxious to assist students in every way.
Article by: J J O'Connor and E F Robertson
JOC/EFR © October 2003, School of Mathematics and Statistics, University of St Andrews, Scotland
• 1.1 Reading: Lippman and Rasmussen’s Precalculus: An Investigation of Functions: “Chapter 5: Trigonometric Functions of Angles”
Link: Lippman and Rasmussen’s Precalculus: An Investigation of Functions: “Chapter 5: Trigonometric Functions of Angles” (PDF)
Instructions: Read pages 297–301 of Chapter 5 to learn about circles in trigonometry. Note that this reading covers the material in subunits 1.1.1 through 1.1.4.
Terms of Use: The article above is released under a Creative Commons Attribution-Share-Alike License 3.0 (HTML). It is attributed to Lippman & Rasmussen.
• 1.2 Reading: Lippman and Rasmussen’s Precalculus: An Investigation of Functions: “Chapter 5: Trigonometric Functions of Angles”
Link: Lippman and Rasmussen’s Precalculus: An Investigation of Functions: “Chapter 5: Trigonometric Functions of Angles” (PDF)
Instructions: Read pages 307–317 of Chapter 5 to learn about angles in trigonometry. Pay particular attention to the new form of angle measure, the radian. A complete grasp of this concept will serve you well through the remainder of the course. Also note that this reading covers the material in subunits 1.2.1 through 1.2.5.
Terms of Use: The article above is released under a Creative Commons Attribution-Share-Alike License 3.0 (HTML). It is attributed to Lippman & Rasmussen.
• 1.3 Reading: Lippman and Rasmussen’s Precalculus: An Investigation of Functions: “Chapter 5: Trigonometric Functions of Angles”
Link: Lippman and Rasmussen’s Precalculus: An Investigation of Functions: “Chapter 5: Trigonometric Functions of Angles” (PDF)
Instructions: Read pages 321–330 of Chapter 5 to learn about points on circles using sine and cosine.
The unit circle is one of the key concepts in trigonometry, and a complete understanding of how the coordinates from the equation of the circle are used to create the trig functions is fundamental to understanding the derivations of the graphs of the functions and all the useful identities we will study in later sections. Committing the unit circle to memory is a useful skill. This reading covers the material in subunits 1.3.1 through 1.3.4.
Terms of Use: The article above is released under a Creative Commons Attribution-Share-Alike License 3.0 (HTML). It is attributed to Lippman & Rasmussen.
• 1.4 Reading: Lippman and Rasmussen’s Precalculus: An Investigation of Functions: “Chapter 5: Trigonometric Functions of Angles”
Link: Lippman and Rasmussen’s Precalculus: An Investigation of Functions: “Chapter 5: Trigonometric Functions of Angles” (PDF)
Instructions: Read pages 333–338 of Chapter 5 to learn about the other trigonometric functions and some important identities, establishing some relationships between all six of the trigonometric functions. This reading covers the material in subunits 1.4.1 through 1.4.3.
Terms of Use: The article above is released under a Creative Commons Attribution-Share-Alike License 3.0 (HTML). It is attributed to Lippman & Rasmussen.
• 1.5 Reading: Lippman and Rasmussen’s Precalculus: An Investigation of Functions: “Chapter 5: Trigonometric Functions of Angles”
Link: Lippman and Rasmussen’s Precalculus: An Investigation of Functions: “Chapter 5: Trigonometric Functions of Angles” (PDF)
Instructions: Read pages 343–347 of Chapter 5 to learn about the trig functions in the context of right triangles. Note that this reading covers subunits 1.5.1 through 1.5.3.
Terms of Use: The article above is released under a Creative Commons Attribution-Share-Alike License 3.0 (HTML). It is attributed to Lippman & Rasmussen.
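As a quick illustration of the unit-circle definition described above (a sketch for the reader, not part of the assigned reading), any angle t in radians gives a point (cos t, sin t) whose coordinates satisfy the circle equation x^2 + y^2 = 1:

```python
import math

# For any angle t (in radians), the point (cos t, sin t) lies on the
# unit circle, so its coordinates satisfy x**2 + y**2 = 1.
for t in (0.0, math.pi / 6, math.pi / 4, math.pi / 2, 2.0):
    x, y = math.cos(t), math.sin(t)
    assert abs(x**2 + y**2 - 1.0) < 1e-12
```

This relationship is the Pythagorean identity, the first of the identities relating the six trig functions developed in the section 1.4 reading.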
• 2.1 Reading: Lippman and Rasmussen’s Precalculus: An Investigation of Functions: “Chapter 6: Periodic Functions”
Link: Lippman and Rasmussen’s Precalculus: An Investigation of Functions: “Chapter 6: Periodic Functions” (PDF)
Instructions: The graphs of the sinusoidal functions have some important features that help us construct them and make them useful for modeling. Read pages 353–365 to gain an understanding of the properties of these graphs. This reading also covers the topics outlined in subunits 2.1.1 through 2.1.5.
Terms of Use: The article above is released under a Creative Commons Attribution-Share-Alike License 3.0 (HTML). It is attributed to Lippman & Rasmussen.
• 2.2 Reading: Lippman and Rasmussen’s Precalculus: An Investigation of Functions: “Chapter 6: Periodic Functions”
Link: Lippman and Rasmussen’s Precalculus: An Investigation of Functions: “Chapter 6: Periodic Functions” (PDF)
Instructions: Much like the sinusoidal functions, the remaining trig function graphs have some key features that are important to understand. Read pages 369–374 to understand these. This reading selection covers the topics outlined in subunits 2.2.1 through 2.2.4.
Terms of Use: The article above is released under a Creative Commons Attribution-Share-Alike License 3.0 (HTML). It is attributed to Lippman & Rasmussen.
• 2.3 Reading: Lippman and Rasmussen’s Precalculus: An Investigation of Functions: “Chapter 6: Periodic Functions”
Link: Lippman and Rasmussen’s Precalculus: An Investigation of Functions: “Chapter 6: Periodic Functions” (PDF)
Instructions: The inverse trigonometric functions give us some powerful tools for equation solving. Read pages 379–384 to begin to understand them, their graphs, and their relationship to the trig functions. This reading covers the topics outlined in subunits 2.3.1 through 2.3.3.
Terms of Use: The article above is released under a Creative Commons Attribution-Share-Alike License 3.0 (HTML). It is attributed to Lippman & Rasmussen.
• 2.4 Reading: Lippman and Rasmussen’s Precalculus: An Investigation of Functions: “Chapter 6: Periodic Functions”
Link: Lippman and Rasmussen’s Precalculus: An Investigation of Functions: “Chapter 6: Periodic Functions” (PDF)
Instructions: Now that you have an understanding of the inverse trig functions and the domains and ranges of both the trig and inverse trig functions, you can begin solving more complicated equations. Read pages 387–394 to understand how. This reading covers the topics outlined in subunits 2.4.1 and 2.4.2.
Terms of Use: The article above is released under a Creative Commons Attribution-Share-Alike License 3.0 (HTML). It is attributed to Lippman & Rasmussen.
• 2.5 Reading: Lippman and Rasmussen’s Precalculus: An Investigation of Functions: “Chapter 6: Periodic Functions”
Link: Lippman and Rasmussen’s Precalculus: An Investigation of Functions: “Chapter 6: Periodic Functions” (PDF)
Instructions: Trigonometry is very useful for modeling real-world data. Read the selection on pages 397–403 to develop some modeling techniques. Note that this reading covers the topics outlined in subunits 2.5.1 and 2.5.2.
Terms of Use: The article above is released under a Creative Commons Attribution-Share-Alike License 3.0 (HTML). It is attributed to Lippman & Rasmussen.
• 3.1 Reading: Lippman and Rasmussen’s Precalculus: An Investigation of Functions: “Chapter 7: Trigonometric Equations and Identities”
Link: Lippman and Rasmussen’s Precalculus: An Investigation of Functions: “Chapter 7: Trigonometric Equations and Identities” (PDF)
Instructions: Read pages 409–415 to learn some additional techniques for solving trig equations. This reading covers the topics outlined in subunits 3.1.1 and 3.1.2.
Terms of Use: The article above is released under a Creative Commons Attribution-Share-Alike License 3.0 (HTML). It is attributed to Lippman & Rasmussen.
• 3.2 Reading: Lippman and Rasmussen’s Precalculus: An Investigation of Functions: “Chapter 7: Trigonometric Equations and Identities”
Link: Lippman and Rasmussen’s Precalculus: An Investigation of Functions: “Chapter 7: Trigonometric Equations and Identities” (PDF)
Instructions: Add some additional identities to your problem-solving arsenal by reading pages 417–430. This selection also covers the topics outlined in subunits 3.2.1 through 3.2.3.
Terms of Use: The article above is released under a Creative Commons Attribution-Share-Alike License 3.0 (HTML). It is attributed to Lippman & Rasmussen.
• 3.3 Reading: Lippman and Rasmussen’s Precalculus: An Investigation of Functions: “Chapter 7: Trigonometric Equations and Identities”
Link: Lippman and Rasmussen’s Precalculus: An Investigation of Functions: “Chapter 7: Trigonometric Equations and Identities” (PDF)
Instructions: Read pages 431–441 to learn about simplifying trig expressions and solving trig equations involving double angles. This reading also covers the topics outlined in subunits 3.3.1 and 3.3.2.
Terms of Use: The article above is released under a Creative Commons Attribution-Share-Alike License 3.0 (HTML). It is attributed to Lippman & Rasmussen.
• 3.4 Reading: Lippman and Rasmussen’s Precalculus: An Investigation of Functions: “Chapter 7: Trigonometric Equations and Identities”
Link: Lippman and Rasmussen’s Precalculus: An Investigation of Functions: “Chapter 7: Trigonometric Equations and Identities” (PDF)
Instructions: Because real-world phenomena are often modeled with trig functions, it is important to understand how changes to the functions affect the resulting graphs and the phenomena being modeled. To increase your understanding of this, read pages 442–448 of Chapter 7. This selection also covers the topics outlined in subunits 3.4.1 through 3.4.3.
Terms of Use: The article above is released under a Creative Commons Attribution-Share-Alike License 3.0 (HTML). It is attributed to Lippman & Rasmussen.
• 4.1 Reading: Lippman and Rasmussen’s Precalculus: An Investigation of Functions: “Chapter 8: Further Applications of Trigonometry”
Link: Lippman and Rasmussen’s Precalculus: An Investigation of Functions: “Chapter 8: Further Applications of Trigonometry” (PDF)
Instructions: Pages 451–466 introduce the idea of using trigonometric functions in triangles other than right triangles. Read this selection carefully. This selection also covers the topics outlined in subunits 4.1.1 and 4.1.2.
Terms of Use: The article above is released under a Creative Commons Attribution-Share-Alike License 3.0 (HTML). It is attributed to Lippman & Rasmussen.
• 4.2 Reading: Lippman and Rasmussen’s Precalculus: An Investigation of Functions: “Chapter 8: Further Applications of Trigonometry”
Link: Lippman and Rasmussen’s Precalculus: An Investigation of Functions: “Chapter 8: Further Applications of Trigonometry” (PDF)
Instructions: Read the selection from pages 467–475. The selection defines a new system for graphing points and curves based on distances and angles rather than the horizontal and vertical distances used in the Cartesian coordinate system. This reading covers the topics outlined in subunits 4.2.1 through 4.2.3.
Terms of Use: The article above is released under a Creative Commons Attribution-Share-Alike License 3.0 (HTML). It is attributed to Lippman & Rasmussen.
• 4.3 Reading: Lippman and Rasmussen’s Precalculus: An Investigation of Functions: “Chapter 8: Further Applications of Trigonometry”
Link: Lippman and Rasmussen’s Precalculus: An Investigation of Functions: “Chapter 8: Further Applications of Trigonometry” (PDF)
Instructions: Read pages 480–490 of Chapter 8 to learn how polar coordinates and complex numbers are related. This selection also covers the topics outlined in subunits 4.3.1 through 4.3.4.
Terms of Use: The article above is released under a Creative Commons Attribution-Share-Alike License 3.0 (HTML). It is attributed to Lippman & Rasmussen.
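The polar-to-Cartesian conversion underlying the graphing system in the 4.2 reading can be sketched in a few lines. This is an illustration only (the function name is mine); the formulas x = r cos θ and y = r sin θ are the standard conversion.

```python
import math

def polar_to_cartesian(r, theta):
    # A point at distance r from the pole, at angle theta (in radians)
    # from the polar axis, has Cartesian coordinates (x, y).
    return r * math.cos(theta), r * math.sin(theta)

# The point (r = 2, theta = pi/2) sits two units up the y-axis.
x, y = polar_to_cartesian(2.0, math.pi / 2)
```

The same pair of formulas, applied to a complex number written as r(cos θ + i sin θ), is what links polar coordinates to the complex numbers of the 4.3 reading.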
• 4.4 Reading: Lippman and Rasmussen’s Precalculus: An Investigation of Functions: “Chapter 8: Further Applications of Trigonometry”
Link: Lippman and Rasmussen’s Precalculus: An Investigation of Functions: “Chapter 8: Further Applications of Trigonometry” (PDF)
Instructions: Vectors are geometric objects with both distance and direction, and they have numerous applications. Read pages 491–502 from Chapter 8 carefully to understand these applications. This reading selection also covers the topics outlined in subunits 4.4.1 through 4.4.3.
Terms of Use: The article above is released under a Creative Commons Attribution-Share-Alike License 3.0 (HTML). It is attributed to Lippman & Rasmussen.
• 4.5 Reading: Lippman and Rasmussen’s Precalculus: An Investigation of Functions: “Chapter 8: Further Applications of Trigonometry”
Link: Lippman and Rasmussen’s Precalculus: An Investigation of Functions: “Chapter 8: Further Applications of Trigonometry” (PDF)
Instructions: Up until this point in the course, we have been defining functions in terms of two variables: a dependent and an independent variable. Parametric equations give us a new way to define functions, determining the coordinates of a point based on functions of a third variable, often time. Read pages 504–512 to learn about these concepts. This reading also covers the topics outlined for subunits 4.5.1 through 4.5.3.
Terms of Use: The article above is released under a Creative Commons Attribution-Share-Alike License 3.0 (HTML). It is attributed to Lippman & Rasmussen.
Find an Audubon Park, NJ Algebra 2 Tutor

...I promote using some imagination when looking at these topics, especially in physics. When someone understands how a concept works, they can apply it to solve a whole range of problems, and most memorization becomes unnecessary. This approach helps students achieve a higher understanding of these subjects and promotes critical thinking.
16 Subjects: including algebra 2, Spanish, calculus, physics

...Students I tutor are mostly college-age, but range from middle school to adult. As a tutor with a primary focus in math and science, I not only tutor algebra frequently, but also encounter this fundamental math subject every day in my professional life. I conduct research at UPenn and West Chester University on colloidal crystals and hydrodynamic damping.
9 Subjects: including algebra 2, calculus, physics, geometry

...This gave me the opportunity to tutor students in a variety of math subjects, including Discrete Math. I have a bachelor's degree in secondary math education. During my time in college, I took one 3-credit course in Differential Equations.
11 Subjects: including algebra 2, calculus, geometry, algebra 1

...I have been told by students that they enjoy my teaching and tutoring methods because I am able to make math seem practical and relevant to their lives. I have learned through the years how to make math seem easy. I enjoy math a great deal and look forward to working with you. I have taught and tutored Algebra 1 in different capacities for over 5 years, among other subjects.
11 Subjects: including algebra 2, statistics, geometry, algebra 1

...For tutoring, I have flexible hours in the evening during the week and am open for most weekends. I ask for about 1 week's notice to start a new student. I need that for prep and for scheduling purposes.
14 Subjects: including algebra 2, chemistry, physics, geometry
Patent application title: Method for Scheduling Frequency Channels
Inventors: Gerard Marque-Pucheu (Verneuil Sur Seine, FR), Christophe Gruet (Elancourt, FR), Vincent Seguy (Boulogne Billancourt, FR)
Assignees: CASSIDIAN SAS
IPC8 Class: AH04W7204FI
USPC Class: 370281
Class name: Duplex communication over free space frequency division
Publication date: 2013-07-25
Patent application number: 20130188537

The invention concerns a method for scheduling frequency channels implemented in a device for a narrowband radiocommunication system sharing with a broadband radiocommunication system radioelectric transmission cells, each comprising a narrowband base station and a broadband base station, and the same frequency band, the frequency band being in part divided into a given number of frequency blocks, each comprising a given number of carrier frequencies to optionally be allocated to the narrowband base stations. The device comprises a means for associating carrier frequencies with the narrowband base stations and a means for distributing over the frequency band the carrier frequencies associated with the narrowband base stations such that each frequency block comprises at least two distinct groups of carrier frequencies, each associated with a different base station, the two groups of carrier frequencies being selected according to a distribution rule such that interference relating to the emission of the base stations associated with groups of carrier frequencies distributed in the same frequency block has a minimum interfered surface area.
A method for scheduling carrier frequencies for a narrowband radiocommunication system sharing with a broadband radiocommunication system, in one and the same geographical zone, radioelectric transmission cells each comprising a narrowband base station and a broadband base station, and one and the same frequency band, the frequency band being in part divided into a given number of frequency blocks, each comprising a given number of carrier frequencies to optionally be allocated to the narrowband base stations, characterized in that it comprises a step of distributing the carrier frequencies to be allocated to the narrowband base stations over the frequency band such that each frequency block comprises at least two distinct groups of carrier frequencies, each associated with a different narrowband base station, the two groups of carrier frequencies being selected according to a distribution rule such that interference relating to the emission of the narrowband base stations associated with groups of carrier frequencies distributed in one and the same frequency block has a minimum interfered surface area. The method as claimed in claim 1, according to which in the distribution step the at least two selected groups of carrier frequencies are distributed in a frequency block by alternately intercalating each carrier frequency of one group with respectively each carrier frequency of the other group so as to comply with a minimum frequency gap between the carrier frequencies of one and the same group of carrier frequencies. The method as claimed in claim 1 comprising an establishment of a frequency scheduling which associates each narrowband base station of the narrowband radiocommunication system with at least one group of carrier frequencies from among several groups of carrier frequencies distributed per frequency block over the frequency band according to the distribution rule. 
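As a rough illustration of the alternating intercalation described above (this is a minimal sketch of the idea, not the patented procedure; all function names, carrier labels, and frequency values are invented), one group's carriers can be placed on the even channel slots of a block and the other group's on the odd slots, which automatically keeps carriers of the same group separated by at least one slot:

```python
def schedule_block(block_carriers, group_a, group_b):
    """Sketch of alternating intercalation: carriers of group A take the
    even channel slots of a frequency block and carriers of group B the
    odd slots, so two carriers of the same group are always separated by
    at least one slot of the other group (the minimum frequency gap)."""
    even_slots = block_carriers[0::2]
    odd_slots = block_carriers[1::2]
    assert len(group_a) <= len(even_slots) and len(group_b) <= len(odd_slots)
    mapping = {}
    for carriers, slots in ((group_a, even_slots), (group_b, odd_slots)):
        for carrier_id, freq in zip(carriers, slots):
            mapping[carrier_id] = freq
    return mapping

# A block of 8 contiguous 12.5 kHz channels starting at 400 MHz (made-up numbers),
# shared by two narrowband base stations A and B with four carriers each.
block = [400.0 + 0.0125 * k for k in range(8)]
plan = schedule_block(block, ["A1", "A2", "A3", "A4"], ["B1", "B2", "B3", "B4"])
```

In this sketch, consecutive carriers of station A end up 25 kHz apart rather than adjacent, which is the kind of intra-group spacing the minimum-frequency-gap condition requires.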
The method as claimed in claim 1, according to which the narrowband radiocommunication system and the broadband radiocommunication system are radiocommunication systems of FDD type sharing in the same frequency band a first frequency band intended for uplink communications from mobile terminals to base stations of one of the two radiocommunication systems and a second frequency band intended for downlink communications from base stations to mobile terminals of one of the two radiocommunication systems, the distribution of the carrier frequencies by frequency block being carried out in a similar manner in the first frequency band and in the second frequency band. The method as claimed in claim 1, according to which the method comprises the following successive steps: an association step determining a first set of first groups of carrier frequencies, each first group of carrier frequencies of which is associated with one or more narrowband base stations according to reuse rules, and the distribution step bijectively mapping each carrier frequency of one of the groups of the first set with a carrier frequency of a frequency block while complying, on the one hand, with the distribution rule and, on the other hand, with a minimum frequency gap between the carrier frequencies of one and the same group of carrier frequencies bijectively mapped with carrier frequencies of one and the same frequency block.
The method as claimed in claim 5, according to which the distribution step (ER) comprises a first iterative loop (B11) for selecting each frequency block (Bfej) of the frequency band and a second iterative loop (B21) for selecting each carrier frequency (Fen,j) of the selected frequency block, and comprising in the second iterative loop a bijective mapping (φ) of the carrier frequency of the frequency block with a carrier frequency of the first set (A) while complying with the distribution rule (RR3) and the minimum frequency gap between carrier frequencies of one and the same group of carrier frequencies distributed in one and the same frequency block. The method as claimed in claim 6, according to which each second iterative loop comprises a step of bijectively mapping a carrier frequency of a first group of carrier frequencies with the selected carrier frequency of the selected frequency block as soon as another carrier frequency of the first group of carrier frequencies has been bijectively mapped with another carrier frequency of the frequency block selected during a previous second iterative loop. The method as claimed in claim 5, according to which the number of carrier frequencies of each first group of carrier frequencies is at most equal to half of the number of carrier frequencies of a frequency block and the distribution step comprises an iterative loop for selecting each frequency block of the frequency band comprising a selection according to the distribution rule of two first groups of carrier frequencies belonging to the first set and a bijective mapping successively of a carrier frequency of the frequency block with alternately a carrier frequency of one of the first two groups while complying with the minimum frequency gap between carrier frequencies of one and the same first group of carrier frequencies.
The method as claimed in claim 1, according to which the method comprises the following successive steps: the step of distributing a first set of first groups of carrier frequencies, associated respectively with the narrowband base stations, each first group being distributed with at least one other different first group in one and the same virtual frequency block belonging to a set of virtual frequency blocks while complying with a minimum frequency gap between the carrier frequencies of one and the same group and while complying with the distribution rule, the set of virtual frequency blocks comprising a number greater than or equal to the given number of frequency blocks of the frequency band, and an association step for associating each virtual frequency block with a frequency block of the frequency band while complying with carrier frequency reuse rules. A narrowband radiocommunication system sharing with a broadband radiocommunication system, in one and the same geographical zone, radioelectric transmission cells each comprising a narrowband base station and a broadband base station, and one and the same frequency band, the frequency band being in part divided into a given number of frequency blocks, each comprising a given number of carrier frequencies to optionally be allocated to the narrowband base stations, characterized in that the carrier frequencies of the narrowband radiocommunication system allocated to narrowband base stations are distributed over the frequency band such that each frequency block comprises at least two distinct groups of carrier frequencies, each allocated to a different narrowband base station, the two groups of carrier frequencies being selected according to a distribution rule such that interference relating to the emission of the narrowband base stations associated with groups of carrier frequencies distributed in one and the same frequency block has a minimum interfered surface area. 
A narrowband base station of a narrowband radiocommunication system sharing with a broadband radiocommunication system, in one and the same geographical zone, radioelectric transmission cells each comprising a narrowband base station and a broadband base station, and one and the same frequency band, the frequency band being in part divided into a given number of frequency blocks, each comprising a given number of carrier frequencies to optionally be allocated to the narrowband base stations, characterized in that the carrier frequencies allocated to the base station are distributed over the frequency band with other carrier frequencies allocated to other base stations so that each frequency block comprises at least two distinct groups of carrier frequencies, each allocated to a different narrowband base station, the two groups of carrier frequencies being selected according to a distribution rule such that interference relating to the emission of the narrowband base stations associated with groups of carrier frequencies distributed in one and the same frequency block has a minimum interfered surface area. 
A device for scheduling carrier frequencies for a narrowband radiocommunication system sharing with a broadband radiocommunication system, in one and the same geographical zone, radioelectric transmission cells each comprising a narrowband base station and a broadband base station, and one and the same frequency band, the frequency band being in part divided into a given number of frequency blocks, each comprising a given number of carrier frequencies to optionally be allocated to the narrowband base stations, characterized in that it comprises a means for associating carrier frequencies with the narrowband base stations and a means for distributing over the frequency band the carrier frequencies associated with the narrowband base stations such that each frequency block comprises at least two distinct groups of carrier frequencies, each associated with a different narrowband base station, the two groups of carrier frequencies being selected according to a distribution rule such that interference relating to the emission of the narrowband base stations associated with groups of carrier frequencies distributed in one and the same frequency block has a minimum interfered surface area. 
A computer program able to be implemented in a scheduling device so as to schedule the carrier frequencies for a narrowband radiocommunication system sharing with a broadband radiocommunication system, in one and the same geographical zone, radioelectric transmission cells each comprising a narrowband base station and a broadband base station, and one and the same frequency band, the frequency band being in part divided into a given number of frequency blocks, each comprising a given number of carrier frequencies to optionally be allocated to the narrowband base stations, said program being characterized in that it comprises instructions which, when the program is executed in said scheduling device, carry out the steps of the scheduling method as claimed in the claims. 

The present invention relates in a general manner to a method for scheduling frequency channels, also called carrier frequencies, for a narrowband radiocommunication system sharing with a broadband radiocommunication system, in one and the same geographical zone, radioelectric transmission sites and one and the same frequency band. A radiocommunication system SY comprising a first broadband radiocommunication system SY1 and a second narrowband radiocommunication system SY2, which are deployed on the same radioelectric transmission sites in a determined geographical zone, is known. The operator of these sites can thus offer over this same zone, at one and the same time, narrowband services and broadband services. According to the prior art, these two systems operate in separate frequency bands to avoid mutual interference. With reference to FIG. 1, the radiocommunication system SY comprises a plurality of sites, called cells C1 to CC. For a better understanding of FIG. 1, only four cells, C1, C2, C3 and C4, are detailed. 
Each cell Cc, with 1≦c≦C, comprises first and second base stations, respectively BS1,c and BS2,c, and mobile stations MS which communicate with the base stations through the radio resources shared in the respective frequency bands ΔFsy1 for broadband communications and ΔFsy2 for narrowband communications. More particularly, each cell Cc comprises a first base station BS1,c, called a broadband base station in the subsequent description, able to communicate radioelectrically with mobile stations in a broadband radiocommunication network of the first radiocommunication system SY1. Each cell Cc also comprises a second base station BS2,c, called a narrowband base station in the subsequent description, able to communicate radioelectrically with mobile stations in a narrowband radiocommunication network of the second radiocommunication system SY2. The mobile stations present in a cell and operating according to a single one of the two modes of communication, broadband or narrowband, register with one of the two base stations BS1,c or BS2,c according to their mode of operation. Mobile stations operating according to both modes of communication can register with one of the two base stations by choice, or with both base stations. For radiocommunication systems SY1 and SY2 of FDD (Frequency Division Duplex) type, the respective predetermined frequency bands ΔFsy1 and ΔFsy2 each comprise a first frequency band ΔFsy1e, respectively ΔFsy2e, for the emission of communications from the base stations BS1,c or BS2,c to the mobile stations, supplemented with a second frequency band of the same width ΔFsy1r, respectively ΔFsy2r, called the duplex band, for the reception of communications originating from the mobile stations by the base stations BS1,c or BS2,c. 
The first frequency band ΔFsy1e, respectively ΔFsy2e, and the second associated frequency band ΔFsy1r, respectively ΔFsy2r, are shifted by one and the same duplex gap ΔF. 

The broadband radiocommunication system SY1 is for example of the WIMAX ("Worldwide Interoperability for Microwave Access") type based on an air interface according to the IEEE 802.16 standard, more particularly according to the 802.16m standard, or for example of the LTE ("Long Term Evolution") standard, which employs wide frequency bands ΔFsy1e and ΔFsy1r each typically greater than a megahertz, for example 1.25 MHz, 1.4 MHz, 3 MHz, 5 MHz, 10 MHz or 20 MHz. As shown in FIG. 2A, in the broadband radiocommunication system SY1, each predetermined frequency band ΔFsy1e and ΔFsy1r is divided into J frequency blocks, respectively BFe1 to BFeJ and BFr1 to BFrJ, each of bandwidth ΔBF, typically of a few hundred kilohertz, for example ΔBF=180 kHz in the case of a system according to the LTE standard. Each block BFej, BFrj, with 1≦j≦J, comprises N consecutive and regularly distributed carrier frequencies Fj,1 . . . Fj,n . . . Fj,N of channel width δF=ΔFsy1e/(J×N), with 1≦n≦N. For example, in the case of the LTE standard, N is equal to 12 and the interval δF between two consecutive sub-carriers is equal to 15 kHz, so that ΔBF=N×δF=12×15 kHz=180 kHz. Radio resources are allocated to a base station BS1,c for a high data throughput transmission to (or from) a mobile station operating at least in broadband mode. FIG. 2B is an illustration of the radio resources shared by the broadband base stations in a downlink communication channel in the frequency band ΔFsy1e during a time frame TP; the resources are similar in the uplink communication channel (not represented). A communication channel, downlink or uplink, of the LTE broadband system corresponds to the set of resources in the frequency band ΔFsy1e (or ΔFsy1r) during a time frame TP. 
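As a quick check of the numerology quoted above, the block bandwidth ΔBF follows directly from the sub-carrier interval. The sketch below (Python, illustrative variable names; guard bands are ignored, so the block count is an idealized upper bound rather than an exact LTE figure):

```python
DELTA_F = 15_000        # sub-carrier interval deltaF in Hz (15 kHz)
N = 12                  # carrier frequencies per frequency block
BLOCK_BW = N * DELTA_F  # block bandwidth deltaBF = N x deltaF

def num_blocks(band_hz: float) -> int:
    """Number J of frequency blocks fitting in a band (guard bands ignored)."""
    return int(band_hz // BLOCK_BW)

print(BLOCK_BW)          # 180000 Hz, i.e. the 180 kHz quoted in the text
print(num_blocks(10e6))  # 55 idealized blocks in a 10 MHz band
```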
The radio resources are blocks of resources, each block BRj,tp being defined on a frequency block BFej (or BFrj depending on the direction of the channel) during a specific time window tp, called a time pitch, consisting of several symbol times within the meaning of OFDM modulation. A communication channel comprises common sub-channels CNC for synchronization and broadcasting of the system information between the broadband base stations, and transport sub-channels for exchanges of data and of signaling between the base stations and the mobile terminals. The common sub-channels CNC correspond to a set of resource blocks extending over a few contiguous frequency blocks (six in the case of LTE) for a few symbol times and are repeated in part in the time frame TP. The other blocks of resources correspond to the transport sub-channels and are shared between the C base stations BS1,1 to BS1,C of the radiocommunication system SY1 according to a known method for allocating resources, such as frequency reuse according to a specific factor, for example a factor of 3 or a factor of 1, or such as fractional frequency reuse. With reference to FIG. 2B, in the frequency plan, several frequency blocks, for example blocks BFej to BFej+5, comprise a few resource blocks intended for the sub-channels CNC and resource blocks intended for the transport sub-channels. The other frequency blocks comprise resource blocks intended solely for the transport channels, for example the frequency block BFe with reference to FIG. 2B. The narrowband radiocommunication system SY2 is for example a TETRA ("TErrestrial Trunked RAdio") or TETRAPOL system whose channel width δf is of the order of a few kilohertz, for example 10 kHz, 12.5 kHz or 25 kHz, this width δf also being the frequency pitch separating two carrier frequencies. With reference to FIG. 
3A, the uplink and/or downlink communication frequency channel of the narrowband system between a narrowband base station and a mobile terminal corresponds to a carrier frequency fe or fr (represented fe/r in FIG. 3A) of channel width δf. The useful bandwidth δb of the filtered frequency signal is less than the width of the channel δf. For example, for a channel width δf of 10 kHz, the bandwidth δb will be for example 8 kHz. With reference to FIG. 3B, in the narrowband radiocommunication system SY2 of FDD type, the usual distribution of the frequency plan is such that to each cell Cc are allocated two groups of P carrier frequencies fec,1 . . . fec,p . . . fec,P and frc,1 . . . frc,p . . . frc,P of channel width δf, which are respectively distributed over the frequency bands ΔFsy2e and ΔFsy2r. For each frequency band ΔFsy2e and ΔFsy2r, the distribution of the narrowband carrier frequencies allocated to one and the same base station, in one and the same cell Cc, complies with certain constraints between said frequencies. A first constraint, relating to the use of conventional coupling systems, more particularly coupling systems using cavities, for transmitting messages from the base station BS2,c to the mobile terminals present in the cell, requires compliance with a first minimum frequency interval Δfe between the carrier frequencies used in one and the same cell, for example Δfe=150 kHz. A second constraint makes it possible to avoid disturbances related to the use of too-close frequency channels to transmit messages to the base station BS2,c, at one and the same time, by mobile terminals close to the base station BS2,c and mobile terminals far removed from the base station BS2,c. This constraint imposes compliance with a second minimum frequency interval Δfr between said carrier frequencies of one and the same cell, for example Δfr=20 kHz, which may be less than the first interval Δfe. 
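The two minimum-interval constraints above amount to a pairwise spacing check over the carriers of one cell. A minimal sketch, assuming the constraint reduces to a minimum gap between adjacent carriers (the carrier values are invented for illustration):

```python
def complies_with_min_gap(carriers_hz, min_gap_hz):
    """True if every pair of carrier frequencies allocated to one cell is
    separated by at least min_gap_hz (the constraints called deltafe / deltafr)."""
    freqs = sorted(carriers_hz)
    # Checking adjacent sorted carriers suffices: all other pairs are farther apart.
    return all(b - a >= min_gap_hz for a, b in zip(freqs, freqs[1:]))

# Hypothetical carriers for one cell, checked against deltafe = 150 kHz:
cell = [450.00e6, 450.15e6, 450.30e6]
print(complies_with_min_gap(cell, 150e3))                  # True
print(complies_with_min_gap([450.00e6, 450.01e6], 150e3))  # False: only 10 kHz apart
```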
As the frequency channels for uplink communication, in the direction from the mobiles to the base station, correspond, to within the duplex gap, to the frequency channels for downlink communication from the base station to the mobile stations, the minimum gaps between channels related to the constraints of the base station will lie identically, to within a frequency translation, in the other frequency sub-band corresponding to the uplink communications from the mobile stations to the base station. Cells which are geographically sufficiently far apart can have identical carrier frequencies fe, fr, or groups or parts of groups of identical carrier frequencies. The mutual interference of these cells in one and the same frequency channel is very low, the carrier-to-interference ratio determined in each of the cells as a function of the other cell being greater than a specific threshold. Standard allocations such as these, of frequency blocks and of carrier frequencies, are effective when they are applied respectively to a first and a second radiocommunication system, SY1 and SY2, located in distinct geographical zones and/or working on distinct frequency bands ΔFsy1, ΔFsy2. If the communication systems SY1 and SY2 are, according to the invention, located in one and the same geographical zone and share the same emission and reception frequency bands ΔFsye and ΔFsyr, the allocations of carrier frequencies on the one hand, and of frequency blocks, more particularly the transport channels, on the other hand, will produce mutual interference having a very negative effect on the service quality of said communication systems. Indeed, according to a typical exemplary configuration, the carrier frequencies of the narrowband system SY2 have a channel width δf of 10 kHz and the first frequency interval Δfe between two carrier frequencies of one and the same cell Cc is 150 kHz. 
By assuming that each frequency block BFej, BFrj of the broadband system SY1 has a bandwidth ΔBF of 180 kHz for the LTE systems, several frequency blocks, indeed all the frequency blocks potentially used by the broadband base station BS1,c of the cell Cc, can each contain at least one carrier frequency of the narrowband base station BS2,c belonging to the same cell Cc and be interfered with by these carrier frequencies. It is possible to limit this drawback by avoiding allocating a frequency block to a given cell, stated otherwise by neutralizing the block, when its allocation would be liable to create interference at the carrier frequencies of the narrowband system that are allocated in the same given cell or in cells sufficiently near to this given cell to undergo interference. Thus, these interfered frequency blocks become unusable by application of a strategy for sharing the radiocommunication system SY prohibiting the allocation of a frequency block BFej, BFrj to a broadband base station BS1,c if it is interfered with by a carrier frequency of a narrowband base station BS2,c located in the same cell or in a geographically close cell. The application of such a strategy mutually ensures the protection of the frequency blocks of the broadband system. Nonetheless, the number of neutralized frequency blocks may, in the configuration represented hereinabove, very severely reduce the capacity of the broadband communication system. To alleviate this drawback, it is known to use multi-carrier frequency transmitters in the narrowband base stations of the narrowband communication system SY2. Such a transmitter groups together the carrier frequencies allocated to one and the same base station BS2,c into a group of carrier frequencies distributed consecutively over a not very extended frequency band with a small frequency interval Δfe between each carrier frequency, for example Δfe goes from 150 kHz to 20 kHz. 
The carrier frequency group allocated to the base station BS2,c of the cell Cc thus has a frequency bandwidth, for example of 140 kHz in the case of a group of 8 frequencies, that is less than the bandwidth of a frequency block, which in the previous example is 180 kHz. Depending on its position with respect to the frequency blocks, the group of carrier frequencies interferes with only one or two frequency blocks at most. The other frequency blocks, which are not interfered with by this group of frequencies, may potentially be allocated to the broadband base station BS1,c belonging to the cell Cc. However, the carrier frequency groups allocated respectively to the narrowband base stations lying in the cells adjacent to the cell Cc may nonetheless be distributed over the whole of the frequency band of the radiocommunication system SY and thus interfere with several frequency blocks, indeed all the frequency blocks distributed over the frequency bands ΔFsye and ΔFsyr, rendering them unusable for the broadband base station BS1,c of the cell Cc. 

The objective of the invention is to alleviate the drawbacks of the prior art through a method for scheduling carrier frequencies for a narrowband radiocommunication system sharing with a broadband radiocommunication system, in one and the same geographical zone, radioelectric transmission cells each comprising a narrowband base station and a broadband base station, and one and the same frequency band, the frequency band being in part divided into a given number of frequency blocks, each comprising a given number of carrier frequencies to optionally be allocated to the narrowband base stations. 
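The prior-art overlap argument above (a 140 kHz group of 8 carriers at 20 kHz spacing touches at most two 180 kHz blocks, depending on its position) can be checked numerically. This helper and its frequency values are illustrative, not taken from the patent:

```python
def interfered_blocks(group_start_hz, num_carriers, spacing_hz,
                      band_start_hz, block_bw_hz):
    """Indices of the frequency blocks overlapped by a group of consecutive
    carriers spaced spacing_hz apart (illustrative helper)."""
    group_end_hz = group_start_hz + (num_carriers - 1) * spacing_hz
    first = int((group_start_hz - band_start_hz) // block_bw_hz)
    last = int((group_end_hz - band_start_hz) // block_bw_hz)
    return list(range(first, last + 1))

# 8 carriers at 20 kHz spacing span 140 kHz, so at most two 180 kHz blocks are hit:
print(interfered_blocks(400.010e6, 8, 20e3, 400.0e6, 180e3))  # [0]
print(interfered_blocks(400.100e6, 8, 20e3, 400.0e6, 180e3))  # [0, 1]
```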
The method is characterized in that it comprises a step of distributing the carrier frequencies to be allocated to the narrowband base stations over the frequency band such that each frequency block comprises at least two distinct groups of carrier frequencies, each associated with a different narrowband base station, the two groups of carrier frequencies being selected according to a distribution rule such that interference relating to the emission of the narrowband base stations associated with groups of carrier frequencies distributed in one and the same frequency block has a minimum interfered surface area. The method makes it possible to minimize the interference of the carrier frequencies of the narrowband radiocommunication system over the set of frequency blocks of the broadband radiocommunication system which shares in part the same frequency band in the same geographical zone as the narrowband radiocommunication system. According to one characteristic of the invention, in the distribution step the at least two selected groups of carrier frequencies are distributed in a frequency block by alternately intercalating each carrier frequency of one group with respectively each carrier frequency of the other group so as to comply with a minimum frequency gap between the carrier frequencies of one and the same group of carrier frequencies. According to another characteristic of the invention, the method comprises an establishment of a frequency scheduling which associates each narrowband base station of the narrowband radiocommunication system with at least one group of carrier frequencies from among several groups of carrier frequencies distributed per frequency block over the frequency band according to the distribution rule. 
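The alternate-intercalation characteristic can be sketched as a toy model. It assumes exactly two groups per block and carrier positions at the block's regular spacing; names and frequencies are illustrative, not from the patent:

```python
def intercalate(block_carriers, group_a, group_b):
    """Alternately intercalate the carriers of two groups over the carrier
    positions of one frequency block: even positions -> group A, odd -> group B.
    Consecutive carriers of the SAME group then sit two positions apart,
    i.e. twice the carrier spacing, preserving the intra-group minimum gap."""
    plan = {}
    for i, freq in enumerate(block_carriers):
        plan[freq] = group_a if i % 2 == 0 else group_b
    return plan

# 12 positions at 15 kHz spacing: same-group neighbours end up 30 kHz apart,
# above the 20 kHz minimum gap quoted for multi-carrier transmitters.
positions = [400.0e6 + n * 15e3 for n in range(12)]
plan = intercalate(positions, "G1", "G2")
print([plan[f] for f in positions][:4])  # ['G1', 'G2', 'G1', 'G2']
```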
According to a first implementation of the method of the invention, the method comprises the following successive steps: an association step determining a first set of first groups of carrier frequencies, each first group of carrier frequencies of which is associated with one or more narrowband base stations according to reuse rules, and the distribution step bijectively mapping each carrier frequency of one of the groups of the first set with a carrier frequency of a frequency block while complying, on the one hand, with the distribution rule and, on the other hand, with a minimum frequency gap between the carrier frequencies of one and the same group of carrier frequencies bijectively mapped with carrier frequencies of one and the same frequency block. According to one characteristic of the first implementation of the method, the distribution step comprises a first iterative loop for selecting each frequency block of the frequency band and a second iterative loop for selecting each carrier frequency of the selected frequency block, and comprising in the second iterative loop a bijective mapping of the carrier frequency of the frequency block with a carrier frequency of the first set while complying with the distribution rule and the minimum frequency gap between carrier frequencies of one and the same group of carrier frequencies distributed in one and the same frequency block. According to one variant of this characteristic, each second iterative loop comprises a step of bijectively mapping a carrier frequency of a first group of carrier frequencies with the selected carrier frequency of the selected frequency block as soon as another carrier frequency of the first group of carrier frequencies has been bijectively mapped with another carrier frequency of the frequency block selected during a previous second iterative loop. 
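The two nested iterative loops of this first implementation might be sketched as follows. This is only the loop skeleton: the actual distribution rule (minimum interfered surface area) and the reuse rules are simplified here to "pick the next two groups per block and alternate them", so nothing below should be read as the patented algorithm itself:

```python
def distribute(blocks, first_groups):
    """First iterative loop: select each frequency block. Second iterative loop:
    select each carrier position of that block and bijectively map it to one of
    the two first groups chosen for the block. Alternation keeps carriers of one
    and the same group two positions apart (the intra-group minimum gap)."""
    mapping = {}                              # carrier position -> group id
    group_iter = iter(first_groups)
    for block in blocks:                      # first iterative loop (over blocks)
        pair = (next(group_iter), next(group_iter))  # two groups for this block
        for n, freq in enumerate(block):      # second iterative loop (over carriers)
            mapping[freq] = pair[n % 2]       # bijective, alternating assignment
    return mapping

blocks = [[0, 1, 2, 3], [4, 5, 6, 7]]         # two blocks of four carrier positions
print(distribute(blocks, ["G1", "G2", "G3", "G4"]))
# {0: 'G1', 1: 'G2', 2: 'G1', 3: 'G2', 4: 'G3', 5: 'G4', 6: 'G3', 7: 'G4'}
```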
According to another characteristic of the first implementation of the method, the number of carrier frequencies of each first group of carrier frequencies is at most equal to half of the number of carrier frequencies of a frequency block, and the distribution step comprises an iterative loop for selecting each frequency block of the frequency band comprising a selection, according to the distribution rule, of two first groups of carrier frequencies belonging to the first set and a bijective mapping successively of a carrier frequency of the frequency block with alternately a carrier frequency of one of the two first groups while complying with the minimum frequency gap between carrier frequencies of one and the same first group of carrier frequencies. 

According to a second implementation of the method of the invention, the method comprises the following successive steps: the step of distributing a first set of first groups of carrier frequencies, associated respectively with the narrowband base stations, each first group being distributed with at least one other different first group in one and the same virtual frequency block belonging to a set of virtual frequency blocks, while complying with a minimum frequency gap between the carrier frequencies of one and the same group and while complying with the distribution rule, the set of virtual frequency blocks comprising a number of virtual frequency blocks greater than or equal to the given number of frequency blocks of the frequency band, and an association step for associating each virtual frequency block with a frequency block of the frequency band while complying with carrier frequency reuse rules. 
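The two successive steps of this second implementation (distribute the first groups into virtual frequency blocks, then associate each virtual block with a physical block) can be sketched as below. The reuse-rule check is deliberately omitted and all names are illustrative:

```python
def build_virtual_blocks(first_groups, per_block=2):
    """Step 1: distribute the first groups into virtual frequency blocks,
    at least two distinct groups per virtual block (sketch)."""
    return [first_groups[i:i + per_block]
            for i in range(0, len(first_groups), per_block)]

def associate(virtual_blocks, physical_blocks):
    """Step 2: associate each virtual block with a physical frequency block.
    A real implementation would enforce carrier-frequency reuse rules here;
    this sketch simply maps them in order under that assumption."""
    return dict(zip(physical_blocks, virtual_blocks))

vbs = build_virtual_blocks(["G1", "G2", "G3", "G4"])
print(associate(vbs, ["BFe1", "BFe2"]))  # {'BFe1': ['G1', 'G2'], 'BFe2': ['G3', 'G4']}
```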
The invention also relates to a narrowband radiocommunication system sharing with a broadband radiocommunication system, in one and the same geographical zone, radioelectric transmission cells each comprising a narrowband base station and a broadband base station, and one and the same frequency band, the frequency band being in part divided into a given number of frequency blocks, each comprising a given number of carrier frequencies to optionally be allocated to the narrowband base stations. The system is characterized in that the carrier frequencies of the narrowband radiocommunication system allocated to narrowband base stations are distributed over the frequency band such that each frequency block comprises at least two distinct groups of carrier frequencies, each allocated to a different narrowband base station, the two groups of carrier frequencies being selected according to a distribution rule such that interference relating to the emission of the narrowband base stations associated with groups of carrier frequencies distributed in one and the same frequency block has a minimum interfered surface area. The invention also relates to a narrowband base station of a narrowband radiocommunication system sharing with a broadband radiocommunication system, in one and the same geographical zone, radioelectric transmission cells each comprising a narrowband base station and a broadband base station, and one and the same frequency band, the frequency band being in part divided into a given number of frequency blocks, each comprising a given number of carrier frequencies to optionally be allocated to the narrowband base stations. 
The narrowband base station is characterized in that the carrier frequencies allocated to the base station are distributed over the frequency band with other carrier frequencies allocated to other base stations so that each frequency block comprises at least two distinct groups of carrier frequencies, each allocated to a different narrowband base station, the two groups of carrier frequencies being selected according to a distribution rule such that interference relating to the emission of the narrowband base stations associated with groups of carrier frequencies distributed in one and the same frequency block has a minimum interfered surface area. The invention also relates to a device for scheduling carrier frequencies for a narrowband radiocommunication system sharing with a broadband radiocommunication system, in one and the same geographical zone, radioelectric transmission cells each comprising a narrowband base station and a broadband base station, and one and the same frequency band, the frequency band being in part divided into a given number of frequency blocks, each comprising a given number of carrier frequencies to optionally be allocated to the narrowband base stations. The device is characterized in that it comprises a means for associating carrier frequencies with the narrowband base stations and a means for distributing over the frequency band the carrier frequencies associated with the narrowband base stations such that each frequency block comprises at least two distinct groups of carrier frequencies, each associated with a different narrowband base station, the two groups of carrier frequencies being selected according to a distribution rule such that interference relating to the emission of the narrowband base stations associated with groups of carrier frequencies distributed in one and the same frequency block has a minimum interfered surface area. 
Finally, the invention pertains to a computer program able to be implemented in a scheduling device, said program comprising instructions which, when the program is executed in said scheduling device, carry out the scheduling of carrier frequencies, according to the method of the invention, for a narrowband radiocommunication system sharing with a broadband radiocommunication system, in one and the same geographical zone, radioelectric transmission cells each comprising a narrowband base station and a broadband base station, and one and the same frequency band, the frequency band being in part divided into a given number of frequency blocks, each comprising a given number of carrier frequencies to optionally be allocated to the narrowband base stations. Other characteristics and advantages of the present invention will be more clearly apparent on reading the following description of several embodiments of the invention, given by way of nonlimiting examples, with reference to the corresponding appended drawings in which: FIG. 1, already described, schematically shows a radiocommunication system; FIGS. 2A and 2B, already described, show a representation of a usual allocation of frequency channels for a broadband communication system; FIG. 3A, already described, shows a representation of a carrier frequency of a narrowband communication system; FIG. 3B, already described, shows a representation of a usual allocation of frequency channels for a narrowband communication system; FIG. 4 shows a representation of allocation of carrier frequencies for a narrowband radiocommunication system according to the invention; FIG. 5 shows a block diagram of a carrier frequency scheduling device of the radiocommunication system implementing the frequency scheduling method according to the invention; FIGS. 6A, 6B and 6C show respectively three variants of an algorithm for distributing carrier frequencies according to a first embodiment of the method of the invention; and FIG. 
7 shows an algorithm for distributing carrier frequencies according to a second embodiment of the method of the invention. Unless specified otherwise, the various elements appearing in the various figures retain the same references. The radiocommunication system of FDD type according to the invention is fairly similar to the radiocommunication system SY previously described with reference to FIG. 1 and comprises, in one and the same geographical zone, a first broadband radiocommunication system SY1 and a second narrowband radiocommunication system SY2 which are deployed in respective predetermined frequency bands ΔFsy1 and ΔFsy2 overlapping in part or totally and constituting a common frequency band ΔFsy, considered in the subsequent description to be the frequency band of the system SY. The radiocommunication system SY comprises a plurality of cells C1 to CC, each cell Cc, with 1≦c≦C, comprising first and second base stations, respectively BS1,c and BS2,c, and mobile stations MS which communicate with the base stations through the radio resources shared in the common frequency band ΔFsy. More particularly, each cell Cc comprises a first broadband base station BS1,c able to communicate radioelectrically with mobile stations in a broadband radiocommunication network of the first radiocommunication system SY1. Each cell Cc also comprises a second narrowband base station BS2,c able to communicate radioelectrically with mobile stations in a narrowband radiocommunication network of the second radiocommunication system SY2. The frequency band ΔFsy also comprises a first frequency band ΔFsye for the emission of downlink communications from the base stations BS1,c or BS2,c to the mobile stations, supplemented with a second frequency band of the same width ΔFsyr, called the duplex band, for the reception of uplink communications originating from the mobile stations by the base stations BS1,c or BS2,c. These two frequency bands ΔFsye and ΔFsyr are shifted by a duplex gap ΔF. 
More particularly, in downlink communications, the frequency band ΔFsye is formed by the frequency band ΔFsyeBB of the broadband radiocommunication system SY1 overlapping totally or in part the frequency band ΔFsyeNB of the narrowband radiocommunication system SY2. Likewise, in uplink communications, the frequency band ΔFsyr is formed by the frequency band ΔFsyrBB of the broadband radiocommunication system SY1 overlapping totally or in part the frequency band ΔFsyrNB of the narrowband radiocommunication system SY2. Since the frequency scheduling method according to the invention is identical in each of the two frequency bands (ΔFsyeBB, ΔFsyeNB) and (ΔFsyrBB, ΔFsyrNB), only the frequency distribution of the two systems SY1 and SY2 on the first frequency band ΔFsye is described in the subsequent description. As described previously with reference to FIGS. 2A and 2B, the broadband radiocommunication system SY1 is for example of the WIMAX ("Worldwide Interoperability for Microwave Access") type based on an air interface according to the IEEE 802.16 standard, more particularly according to the 802.16m standard, or for example of the LTE ("Long Term Evolution") standard, which employs wide frequency bands ΔFsye and ΔFsyr each typically greater than a megahertz, for example 1.25 MHz, 1.4 MHz, 3 MHz, 5 MHz, 10 MHz or 20 MHz. As shown in FIG. 2A, in the broadband radiocommunication system SY1, the predetermined frequency band ΔFsye is divided into J frequency blocks BFe1 to BFeJ, each of bandwidth ΔBF, typically of a few hundred kilohertz, for example ΔBF=180 kHz in the case of a system according to the LTE standard. Each block BFej, with 1≦j≦J, comprises N consecutive and regularly distributed carrier frequencies Fej,1 . . . Fej,n . . . Fej,N of channel width δF=ΔFsye/(J×N), with 1≦n≦N. For example, in the case of the LTE standard, N is equal to 12 and the interval δF between two consecutive sub-carriers is equal to 15 kHz, so that ΔBF=N×δF=12×15 kHz=180 kHz. 
Radio resources are allocated to a base station BS1,c for a high data throughput transmission to (or from) a mobile station operating at least in broadband mode. FIG. 2B is an illustration of the radio resources shared by the broadband base stations in a downlink communication channel in the frequency band ΔFsye during a time frame TP; the resources are similar in the uplink communication channel (not represented). A communication channel, downlink (or uplink), of the LTE broadband system corresponds to the set of resources in the frequency band ΔFsye during a time frame TP. The radio resources are blocks of resources, each block BRj,tp being defined on a frequency block BFej during a specific time window tp, called a time pitch. A communication channel comprises common sub-channels CNC for synchronization and broadcasting of the system information between the broadband base stations, and transport sub-channels for exchanges of data and of signaling between the base stations and the mobile terminals. The common sub-channels correspond to a set of resource blocks extending over a few contiguous frequency blocks for a few symbol times and are repeated in part in the time frame TP. The other blocks of resources correspond to the transport channels and are shared between the C base stations BS1,1 to BS1,C of the radiocommunication system SY1 according to a known method for allocating resources. In the frequency plan, several frequency blocks, for example blocks BFej to BFej+5 with reference to FIG. 2B, comprise a few resource blocks intended for the sub-channels CNC and resource blocks intended for the transport channels. The other frequency blocks comprise resource blocks intended solely for the transport channels, for example the frequency block BFe with reference to FIG. 2B. The narrowband radiocommunication system SY2 is for example a TETRA ("TErrestrial Trunked RAdio") or TETRAPOL system in which the channel width δf of each carrier frequency is of the order of a few kilohertz. 
To each cell C_c, more particularly to each narrowband base station SB_c, are allocated one or more groups of carrier frequencies Ge_m, with 1≦m≦M, from among M groups of carrier frequencies Ge_1 to Ge_M. Each group of carrier frequencies Ge_m comprises F carrier frequencies fe_m,1 to fe_m,F. The set of carrier frequencies of each group is disjoint from one group to another. One and the same group of carrier frequencies Ge_m may be assigned to several mutually distant cells so as to avoid any frequency interference. According to this configuration of the communication system, only the allocated resource blocks of the broadband communication system which are dedicated to the transport channels interfere with the carrier frequencies of the narrowband communication system that are allocated in the same frequency band. The resource blocks dedicated to the common channels CNC of a frequency block of the broadband communication system have negligible interference on the carrier frequencies of the narrowband communication system located in the same frequency band, the ratio of the mean power of the useful signal of the narrowband system to the mean power of the disturbing signal of the common channels CNC of the broadband system being much greater than the minimum signal-to-noise ratio below which the narrowband communication system is detrimentally affected.
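The assignment of one and the same group to mutually distant cells is the classical frequency-reuse constraint. A minimal sketch of such a check is shown below; it is an illustration under assumptions, not the patent's method, and `reuse_ok`, the coordinates and the reuse distance `d_min` are all invented for the example.

```python
# Minimal sketch (not the patent's method) of the frequency-reuse
# constraint: one and the same group of carrier frequencies may be
# assigned to several cells only if those cells are mutually distant.
import math

def reuse_ok(cells_sharing_group, d_min):
    """cells_sharing_group: list of (x, y) cell-centre positions, in km.
    Returns True if every pair of cells is at least d_min apart."""
    for i, a in enumerate(cells_sharing_group):
        for b in cells_sharing_group[i + 1:]:
            if math.dist(a, b) < d_min:
                return False
    return True

# Two cells 30 km apart may share a group if the reuse distance is 25 km.
print(reuse_ok([(0.0, 0.0), (30.0, 0.0)], d_min=25.0))  # True
print(reuse_ok([(0.0, 0.0), (10.0, 0.0)], d_min=25.0))  # False
```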
Indeed, by assuming that the emission power of the narrowband system of TETRAPOL type is 42 dBm per carrier frequency and that the power of the broadband system of LTE type is 48 dBm over the whole of a 1.080 MHz channel (so-called 1.4 MHz nominal channel), the broadband power density during the emission of just the common channels CNC is about 48 dBm/MHz, since the latter occupy nearly the whole of the emission band (between 62 and 72 carriers of 15 kHz), but it is only 27 dBm in a reception filter of the narrowband communication system having a bandwidth δb of 8 kHz (48 dBm decreased by the ratio between the bandwidths of 1 MHz and of 8 kHz respectively, i.e. 21 dB). Moreover, the emission duration of the common channels is of the order of 5% of the total emission duration of the channels of a broadband system; the mean power of the common channels is therefore reduced by a factor of close to 20, corresponding to the duty ratio of their emission in the time frame, and is consequently 13 dB lower on average, that is to say a power of 14 dBm=27 dBm-13 dB in the band for reception of the disturbing signal by the narrowband communication system. The ratio between the narrowband useful signal and the common-channels disturbing signal thus has a mean value of 28 dB=42 dBm-14 dBm, that is to say much greater than the minimum signal-to-noise ratio below which the narrowband system is detrimentally affected, which in this case is 15 dB.
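The link-budget arithmetic above can be reproduced in a few lines; all the numerical values come from the text, and the sketch simply verifies the 21 dB, 13 dB and 28 dB figures.

```python
# Reproduction of the link-budget arithmetic above (values from the text).
from math import log10

p_nb = 42.0           # narrowband TETRAPOL emission power, dBm per carrier
p_bb = 48.0           # broadband LTE power over the 1.080 MHz channel, dBm

# Bandwidth reduction in the 8 kHz narrowband reception filter:
bw_ratio_db = 10 * log10(1e6 / 8e3)        # about 21 dB
p_in_filter = p_bb - round(bw_ratio_db)    # 48 - 21 = 27 dBm

# Common channels are emitted about 5% of the time, a factor close to 20:
duty_db = 10 * log10(20)                   # about 13 dB
p_mean = p_in_filter - round(duty_db)      # 27 - 13 = 14 dBm

ratio = p_nb - p_mean                      # 42 - 14 = 28 dB
print(ratio)  # 28.0, well above the 15 dB detrimental threshold
```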
On the contrary, if the resource blocks dedicated to the transport channels included in a frequency block are permanently allocated to communications of the broadband base station BS_c, the attenuation due to the duty ratio of the transmission does not apply, and the signal-to-noise ratio for identical propagation conditions is only 15 dB=42 dBm-27 dBm, which is insufficient to avoid interference. The frequency scheduling method according to the invention is implemented in a scheduling device DP when installing and configuring the narrowband base stations SB_1 to SB_C respectively in the cells C_1 to C_C. The carrier frequency scheduling device DP will be described subsequently with reference to FIG. 5. The device DP establishes a frequency scheduling PF for the groups of carrier frequencies to be allocated to the narrowband base stations and distributed in the frequency band ΔFsye so as to minimize the interference between the two communication systems, as a function of the following three distribution rules which characterize the method. According to the first distribution rule RR1, the scheduling device DP distributes in a frequency block BFe_j all or part of the set of carrier frequencies of a group Ge_m, subject to compliance with the second distribution rule RR2 hereinbelow, said frequency block then being considered to be interfered with. According to the second distribution rule RR2, in order to avoid interference between the carrier frequencies of one and the same group Ge_m which is assigned to one or more cells, a minimum frequency gap Δfe must be complied with between successive carrier frequencies belonging to the same group Ge_m and distributed in one and the same frequency block, in accordance with the rules of the state of the art for allocating carrier frequencies in a narrowband communication system.
According to the third distribution rule RR3, a frequency block BFe_j which is interfered with in part only, that is to say some of whose frequencies have not yet been associated with cells of the system, is supplemented with one or more groups of carrier frequencies selected in such a way that the geographical zone interfered with by the emission of the base stations associated with the groups of carrier frequencies distributed in the same frequency block has a minimum interfered surface area. By applying the above rules in the scheduling method, the device DP schedules in one and the same frequency block BFe_j, on the one hand, according to the first distribution rule RR1, the carrier frequencies constituting a group of frequencies Ge_m to be allocated to at least one narrowband base station of a cell C_c, said frequencies being distributed in the block while complying with the constraints of minimum frequency gap according to the second distribution rule RR2, and, on the other hand, according to the third distribution rule RR3, the carrier frequencies constituting one or more other groups of carrier frequencies to be allocated to narrowband base stations of cells which are different from the cell C_c but sufficiently close to the latter. This frequency block BFe_j is consequently completely interfered with by the carrier frequencies to be allocated to narrowband base stations belonging to the cell C_c and to the cells adjacent to C_c, while other frequency blocks of the frequency band of the radiocommunication system are not interfered with by these carrier frequencies and can then be used by the broadband base stations of the cell C_c.
In the frequency blocks that are not interfered with, or only very slightly, by the frequencies of the cell C_c, carrier frequencies of the narrowband base stations of cells geographically distant from the cell C_c can also be distributed without interfering with the broadband communications of the broadband base station of the cell C_c. Once the frequency scheduling PF has been established, which associates each narrowband base station of the narrowband radiocommunication system with at least one group of carrier frequencies from among several carrier frequency groups distributed per frequency block over the frequency band according to the above distribution rules, the device DP transmits the scheduling PF to an operator of the radiocommunication system so that he allocates carrier frequencies of the frequency band to each narrowband base station as scheduled in the frequency scheduling PF. FIG. 4 illustrates an example of distribution, according to the distribution rules RR1, RR2 and RR3, in a frequency block BFe_j, of a first group Ge_m of eight carrier frequencies fe_m,1 to fe_m,8 allocated to a first narrowband base station of a first cell C_c and of a second group Ge_m+1 of eight carrier frequencies fe_m+1,1 to fe_m+1,8 allocated to a second narrowband base station of a second cell C_c+1 adjacent to the first cell C_c. The two base stations belong to a narrowband communication system SY_NB of TETRAPOL type, each frequency of which has a channel width δf of 10 kHz and a bandwidth δb of 8 kHz, with a minimum frequency gap Δfe of 20 kHz between the frequencies of one and the same group. The broadband communication system SY_BB located in the same frequency band as the narrowband communication system SY_NB is of LTE type and possesses a spectral width ΔFsye of 1.4 MHz with a frequency block width ΔBF equal to 180 kHz. As represented in FIG. 4, the two groups of carrier frequencies Ge_m and Ge_m+1 are interleaved by alternately intercalating a carrier frequency of the first group with a carrier frequency of the second group so as to comply with the minimum frequency gap Δfe=20 kHz according to the second distribution rule RR2. In FIG. 4, a frequency block BFe_j according to the LTE system, of a total width of 180 kHz, corresponds to the union of 18 carrier frequencies of the narrowband system, denoted Fe_j,0 to Fe_j,17, each with a channel width δf of 10 kHz. The first group Ge_m allocated to the first cell C_c comprises the odd-index carriers, denoted Fe_j,1 to Fe_j,15, and the second group Ge_m+1 allocated to the second cell C_c+1 comprises the even-index carriers, denoted Fe_j,2 to Fe_j,16. The distribution of each group of carrier frequencies in a frequency block satisfies the constraint of a minimum frequency gap Δfe equal to 20 kHz between two successive frequencies belonging to one and the same group. The two remaining carrier frequencies may be allocated to other cells of the radiocommunication system SY_NB while complying with the distribution rules RR1, RR2 and RR3. It will be noted that if it were necessary to allocate more than eight carrier frequencies to a base station of the narrowband system, it would be possible to do so by separately allocating two groups of eight carrier frequencies belonging to two different frequency blocks, contiguous or not, the non-assignment of the carriers Fe_j,0 and Fe_j,17 in the previous case ensuring that, whatever the case at issue, the constraint of minimum frequency gap is always complied with between the carriers of two groups belonging to different frequency blocks. With reference to FIG.
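The FIG. 4 interleaving described above can be sketched in a few lines; the carrier positions are expressed as offsets within the block, and the gap check mirrors the rule RR2 constraint.

```python
# Sketch of the FIG. 4 interleaving: two groups of eight 10 kHz carriers
# placed on alternating odd/even positions of an 18-carrier, 180 kHz block,
# so that successive carriers of one and the same group are 20 kHz apart.
CHANNEL_WIDTH = 10_000   # δf, Hz
MIN_GAP = 20_000         # Δfe, Hz

carriers = [n * CHANNEL_WIDTH for n in range(18)]     # Fe_j,0 .. Fe_j,17
group_1 = [carriers[n] for n in range(1, 16, 2)]      # odd indices 1..15
group_2 = [carriers[n] for n in range(2, 17, 2)]      # even indices 2..16

def gap_ok(group):
    """Rule RR2: successive carriers of one group at least Δfe apart."""
    return all(b - a >= MIN_GAP for a, b in zip(group, group[1:]))

print(len(group_1), len(group_2))        # 8 8
print(gap_ok(group_1), gap_ok(group_2))  # True True
```

Carriers 0 and 17 remain unassigned, which is what guarantees the minimum gap between groups placed in adjacent blocks.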
5, the frequency scheduling method is implemented in the scheduling device DP, which comprises an association unit UA for associating groups of carrier frequencies with cells of the system SY_NB according to reuse rules RU, a distribution unit UR for distributing the groups of carrier frequencies in frequency blocks according to an algorithm AG and the distribution rules RR1, RR2 and RR3, and a memory ME comprising in particular the frequency scheduling PF for the groups of carrier frequencies to be allocated to the base stations of the narrowband radiocommunication system, distributed per frequency block of the broadband radiocommunication system, the scheduling PF being the result of the scheduling method according to the invention. The units UA, UR and ME of the device DP are represented in the form of functional blocks, most of which ensure functions having a link with the invention, and may correspond to software modules implemented in at least one processor and/or to dedicated and/or programmable hardware modules. The storage unit ME also comprises information on the frequency bands ΔFsy_BB and ΔFsy_NB of the radiocommunication system, the number F of carrier frequencies per group of carrier frequencies to be associated with each cell, the value of the minimum frequency gap Δfe, the number J of frequency blocks distributed in the frequency band ΔFsy_BB of the broadband system and the number N of carrier frequencies per frequency block. The device can also comprise a communication interface for transmitting the frequency scheduling PF to the radiocommunication system SY_NB so that the operator of the system implements the allocations of frequencies per cell according to the scheduling PF. The scheduling device may be for example a server connected via a packet network to the radiocommunication system SY_NB.
The distribution unit UR comprises for example one or more processors controlling the execution of a distribution algorithm AG taking the distribution rules RR1, RR2 and RR3 into account. The association unit UA comprises for example one or more processors controlling the execution of an association algorithm taking the frequency reuse rules RU into account. The memory ME is a recording medium in which programs may be saved; it is connected to the units UR and UA via a bidirectional bus BU and comprises volatile and/or nonvolatile memories such as EEPROM, ROM, PROM, RAM, DRAM or SRAM memories. The algorithms implementing the scheduling method are stored in the memory ME. The method for scheduling the carrier frequencies to be allocated to the narrowband base stations of the radiocommunication system SY_NB is implemented according to several embodiments of the invention, described in greater detail hereinbelow. Each embodiment comprises two main steps: a step EA of associating groups of carrier frequencies with cells of the system, executed by the association unit UA of the device DP, and a step ER of distributing the groups of frequencies in frequency blocks, executed by the distribution unit UR of the device DP. According to the first embodiment, the steps are executed in the order EA then ER. According to the second embodiment, the steps are executed in the reverse order, ER then EA. The step EA of associating groups of carrier frequencies with narrowband base stations consists in associating one and the same group of carrier frequencies with narrowband base stations of the radiocommunication system SY_NB while complying with the reuse rules RU, known to the person skilled in the art and applied to narrowband radiocommunication systems, all the narrowband base stations of the system having to be associated with at least one group of carrier frequencies.
The step ER of distributing the carrier frequencies in frequency blocks consists more particularly in distributing, in the same frequency blocks, the groups of carrier frequencies associated with narrowband base stations for which the surface area interfered with by the emission of said narrowband base stations, associated with groups of carrier frequencies distributed in one and the same frequency block, is a minimum, by applying the distribution rules RR1, RR2 and RR3 according to the invention. The method also comprises, after the execution of the two steps EA and ER, the establishment of a frequency scheduling which associates each narrowband base station of the narrowband radiocommunication system with at least one group of carrier frequencies from among several carrier frequency groups distributed per frequency block over the frequency band according to the distribution rules. According to the first embodiment of the scheduling method, in the association step EA, the association unit UA of the scheduling device DP determines a first set A of first groups of frequencies A_1 to A_M, each first group A_m, with 1≦m≦M, being associated with one or more cells of the radiocommunication system SY_NB according to the frequency reuse rules RU. To each first group of frequencies A_m is assigned a set of F frequencies fa_m,1 to fa_m,F complying with a minimum frequency gap Δfe between the frequencies of the group A_m. Each set of frequencies is disjoint from one first group of frequencies to another. In the case where several first groups of frequencies are allocated to a cell C_c, the set of frequencies corresponding to the union of the sets of frequencies making up the first groups allocated to the cell complies with the minimum frequency gap Δfe.
The reuse rules RU consist in associating one or more first groups of frequencies from among the M first groups with each cell C_c of the radiocommunication system SY_NB, one and the same first group of frequencies possibly being associated with several different cells geographically distant from one another by a given gap avoiding frequency interference between these cells. These reuse constraints involve co-channel interference only. In the distribution step ER, the distribution unit UR of the device DP bijectively maps each carrier frequency of one of the first groups of the set A with a carrier frequency of a frequency block while complying, on the one hand, with the distribution rules and, on the other hand, with a minimum frequency gap between the carrier frequencies of one and the same first group of carrier frequencies bijectively mapped with carrier frequencies of one and the same frequency block. More particularly, the distribution unit UR of the device DP determines a second set Ge of M second groups of carrier frequencies Ge_1 to Ge_M, the carrier frequencies of each group Ge_m being preferably distributed in a single frequency block BFe_j while complying with a minimum frequency gap between the carrier frequencies of one and the same second group included in one and the same frequency block. According to the first embodiment, the distribution step also comprises a bijective mapping φ of the frequencies of each group of the first set A with the frequencies of each group of the second set Ge, thus making it possible to associate the frequencies of the M groups of the set Ge with the cells of the system SY_NB in a manner identical to the association of the frequencies of the M groups of the set A in the association step EA, taking into account the reuse rules RU and the distribution rules RR1, RR2 and RR3. More precisely, the second set Ge of second groups of frequencies Ge_1, . . .
, Ge_M, with 1≦m≦M, is determined such that the first set A=A_1∪ . . . ∪A_M and the second set Ge=Ge_1∪ . . . ∪Ge_M are in bijection. A bijective mapping φ of the frequencies of the first set A=A_1∪ . . . ∪A_M with the frequencies of the second set Ge=Ge_1∪ . . . ∪Ge_M is thus determined while complying with the reuse rules RU and the distribution rules RR1, RR2 and RR3: φ(A)=Ge. According to the first embodiment of the invention, macro-cells M_1, . . . , M_M of the narrowband network are defined such that all the cells comprising a narrowband base station associated with the first group of frequencies A_m constitute the macro-cell M_m, each macro-cell M_m thus comprising all the cells with which the group of frequencies A_m is associated. The various narrowband emitters of each of the macro-cells retain all the characteristics inherited from the corresponding cells, in particular the characteristics of the antenna systems (radiation patterns in particular) and the emission powers. FIGS. 6A, 6B and 6C detail more particularly the frequency distribution step ER according to, respectively, three different iterative algorithms AG1, AG2 and AG3, the main iteration B1 of which corresponds to the processing of each new frequency block of the frequency band ΔFsy_BB. With each iteration in one of these algorithms, that is to say with each new frequency block selected by the distribution unit UR of the device DP according to the invention, bijective mappings of carrier frequencies of the set A with frequencies of the newly selected frequency block, considered to be the frequency block undergoing processing, are executed, the carrier frequencies of the frequency block undergoing processing belonging to the second set Ge. The frequency blocks all of whose frequencies have already been bijectively mapped with carrier frequencies of the set A are considered to be processed. The distribution step ER according to the first algorithm AG=AG1, with reference to FIG.
6A, comprises steps S100 to S108. The algorithm AG1 comprises the first iterative loop B1, making it possible to select each frequency block of the frequency band ΔFsy_BB, and a second iterative loop B2, included in the first loop B1, for selecting in the frequency block BFe_j each carrier frequency Fe_j,n to be bijectively mapped with a frequency fa_m,f of the set A, with 1≦f≦F, while complying with the distribution rules RR1, RR2 and RR3. In step S100, the device DP defines a third set Y comprising the carrier frequencies of the set A which have not yet been processed, that is to say which have not yet been bijectively mapped with a frequency Fe_j,n of the set Ge. The set Y is stored in the memory ME of the device DP and is initially equal to the set A. In step S101, the unit UR executes the first iterative loop B1 and verifies whether the frequency band ΔFsy_BB comprises at least one free frequency block BFe_j, that is to say one not yet processed. If all the frequency blocks have been processed, no frequency block is free and the allocation method stops in step S102. In step S102, if there are still carrier frequencies of the set A that have not been distributed over the frequency band ΔFsy_BB, they are in excess with respect to the frequency band of the broadband system and must therefore be distributed outside of this frequency band; this can be done according to any method known to the person skilled in the art. In this case, the frequency band ΔFsy_BB of the broadband radiocommunication system overlaps only a part of the frequency band ΔFsy_NB of the narrowband radiocommunication system, which is larger. In step S101, if there are still free frequency blocks in the frequency band, the unit UR selects one of them, either successively, by incrementing a variable associated with the index j of the blocks BFe_j, or randomly.
In step S103, the device DP defines an initially empty fourth set X comprising the carrier frequencies of the set A which have already been bijectively mapped with carrier frequencies Fe_j,n of the frequency block BFe_j. With each selection of a new frequency block, the set X is initialized to the empty set. The set X is stored in the memory ME. In step S104, the unit UR executes the second iterative loop B2, verifying whether all the carrier frequencies of the frequency block BFe_j have been processed. If some frequencies of the block BFe_j have not been processed, the unit UR selects one of them, Fe_j,n, either successively, by incrementing a variable associated with the index n of the frequencies Fe_j,n, or randomly. If all the N frequencies Fe_j,1 to Fe_j,N of the frequency block BFe_j have already been selected, the second iterative loop B2 stops and the first loop B1 is iterated again in step S105 so as to select a new frequency block in step S101. During the selection of a new carrier frequency Fe_j,n in the frequency block BFe_j, the unit UR selects in step S106 a carrier frequency fa in the set Y which complies with the two distribution conditions CD1 and CD2 relating to the distribution rules RR1, RR2 and RR3. According to the first condition CD1, relating more particularly to the rules RR1 and RR3, the carrier frequency fa must be selected such that the frequency interference emitted by the macro-cells associated with the frequency fa and with the frequencies of the set X, that is to say the frequencies already distributed in the frequency block BFe_j, corresponds to the smallest interfered surface area SImin. The unit UR determines the interfered surface area by means of frequency propagation prediction procedures known to the person skilled in the art for each frequency of the set Y, and selects the frequency fa associated with the smallest interfered surface area which also complies with the condition CD2.
The condition CD1 restricts the choice of frequencies of the set Y that may be selected. Selectable with regard to the condition CD1 are, on the one hand, the frequencies allocated to first cells to which carrier frequencies already distributed in the block BFe_j are also allocated, the first distribution rule RR1 then being complied with implicitly, and, on the other hand, the frequencies allocated to cells close to these first cells, through compliance with the third distribution rule RR3 according to the minimum interfered surface area. According to the second condition CD2, relating more particularly to the second distribution rule RR2, the frequency fa must be selected in such a way that, for any frequency Fα which belongs to the set X of frequencies distributed in the frequency block BFe_j and is associated, together with the frequency fa, with one and the same cell of a macro-cell, the frequency φ(Fα) of the block of frequencies BFe_j corresponding bijectively to the frequency Fα complies with the constraint of minimum frequency gap Δfe with respect to the frequency Fe_j,n, according to the distribution rule RR2. This condition CD2 makes it possible to verify that the carrier frequencies belonging to the same group of frequencies and distributed in one and the same frequency block are spaced apart by a minimum frequency gap Δfe, so as to avoid any frequency interference between frequencies associated with one and the same cell. In step S107, the unit UR bijectively maps the frequency fa with the frequency Fe_j,n: φ(fa)=Fe_j,n, the frequency fa belonging to the first set A of carrier frequencies and the frequency Fe_j,n belonging to the second set Ge. The mapping is stored in the memory ME of the device DP. The sets X and Y are updated such that the frequency fa is included in the set X (X=X∪{fa}) and is excluded from the set Y (Y=Y-{fa}).
At the end of step S107, the unit UR repeats the second loop B2 in step S108, which loops back to step S104 so as to select a new carrier frequency of the frequency block BFe_j. Once all the frequency blocks have been processed and all the carrier frequencies of the set A have been distributed in the frequency band ΔFsy_BB of the system SY_BB, the device DP establishes a frequency scheduling PF which associates with each cell C_c of the system SY_NB one or more groups of frequencies of the set Ge according to the reuse rules RU, the groups of frequencies being distributed per frequency block according to the distribution rules RR1, RR2 and RR3. The device DP transmits the frequency scheduling PF to the radiocommunication system SY_NB, which will allocate to each narrowband base station the groups of frequencies scheduled in the frequency scheduling PF. The algorithm AG1 provides an optimal definition of a frequency scheduling minimizing the number of frequency blocks interfered with by groups of frequencies allocated to narrowband base stations of adjacent cells of the radiocommunication system SY_NB, while complying with the constraints of frequency spacing between the carrier frequencies of the narrowband system. However, it requires, at each iteration of the second loop B2, in order to satisfy the condition CD1 of step S106, a redetermination of the interfered surface area for each carrier frequency of the set Y. The algorithm AG2, with reference to FIG. 6B, considerably reduces the complexity of the algorithm AG1 by reducing the number of redeterminations of the interfered surface area SI for each frequency of the set Y.
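The loop structure of algorithm AG1 can be sketched as follows. This is an illustrative reconstruction, not the patent's code: `interfered_area`, `cell_of` and `MIN_GAP` are assumed inputs standing in for the propagation predictions, the cell association and Δfe.

```python
# Illustrative sketch of the double loop B1/B2 of algorithm AG1
# (steps S100 to S108). `interfered_area` stands in for the
# propagation-prediction procedures mentioned in the text.
MIN_GAP = 20_000  # minimum frequency gap Δfe, in Hz

def ag1(blocks, set_a, cell_of, interfered_area):
    """blocks: list of lists of block carrier frequencies Fe_j,n (Hz).
    set_a: iterable of carrier frequencies fa of the first set A.
    cell_of: dict mapping each fa to its cell.
    interfered_area: callable giving the interfered surface of a frozenset."""
    phi = {}            # bijective mapping fa -> Fe_j,n (built in S107)
    y = set(set_a)      # set Y: frequencies of A not yet processed (S100)
    for block in blocks:            # loop B1: one new block per pass (S101)
        x = set()                   # set X: frequencies placed here (S103)
        for fe in block:            # loop B2: each carrier of the block (S104)
            def cd2_ok(fa):         # condition CD2 / rule RR2: minimum gap
                return all(abs(phi[fb] - fe) >= MIN_GAP
                           for fb in x if cell_of[fb] == cell_of[fa])
            candidates = [fa for fa in sorted(y) if cd2_ok(fa)]
            if not candidates:
                continue
            # Condition CD1: candidate giving the smallest interfered area.
            fa = min(candidates,
                     key=lambda f: interfered_area(frozenset(x | {f})))
            phi[fa] = fe            # S107: map, then update X and Y
            x.add(fa)
            y.discard(fa)
    return phi
```

With a trivial stub such as `interfered_area = len`, the sketch simply fills blocks greedily; a real implementation would plug in the propagation-prediction procedures.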
Indeed, if during step S106 the set X already contains a frequency Fα belonging to a group A_m of the set A, if the frequencies φ(Fα) and Fe_j,n comply with the minimum gap constraint Δfe and if some carrier frequencies of the group A_m have not yet been processed, for example a frequency fa, then this frequency fa quite obviously satisfies the first condition CD1 of step S106, since the surfaces interfered with by the emission of the frequencies of the set X and of the set X including the frequency fa are equal by construction. The distribution step ER according to the second algorithm AG=AG2, with reference to FIG. 6B, comprises steps S200 to S210. By comparison with the algorithm AG1, the algorithm AG2 also comprises a first iterative loop B1 and a second iterative loop B2. The steps of selecting a frequency block BFe_j (S202, S203 and S205) are similar to the corresponding steps of the algorithm AG1 (respectively S102, S103 and S105); likewise the steps of selecting a carrier frequency Fe_j,n of the frequency block BFe_j (S204 and S208) are similar to the corresponding steps of the algorithm AG1 (respectively S104 and S108) and are therefore not described. In step S200, the distribution unit UR defines an initially empty fourth set Z intended to comprise the carrier frequencies of the set A which have not yet been processed but which belong to groups of frequencies undergoing processing, that is to say groups comprising at least one frequency bijectively mapped with a frequency of the frequency block BFe_j. After the selection of a frequency Fe_j,n of the frequency block BFe_j in step S204, the distribution unit UR verifies, in step S209, whether the set Z comprises a frequency fa complying with a third condition CD3.
According to this condition CD3, the carrier frequency fa must be selected such that, for any frequency Fα belonging to the set X of the frequencies distributed in the frequency block BFe_j and belonging, together with the frequency fa, to one and the same frequency group A_m, the frequency φ(Fα) of the block of frequencies BFe_j corresponding bijectively to the frequency Fα complies with the constraint of minimum frequency gap Δfe with respect to the frequency Fe_j,n, according to the distribution rule RR2. In step S210, the set Z is updated such that the frequency fa is excluded from the set Z. Next, the unit UR executes step S207, which is similar to step S107 of the first algorithm AG1. If, in step S209, the set Z does not comprise any frequency fa complying with the condition CD3, the unit UR executes step S206, which is similar to step S106 of the first algorithm AG1 with the addition of an update of the set Z. In step S206, the unit UR selects a frequency fa complying with the conditions CD1 and CD2 and belonging to a group of frequencies A_m that has not yet been processed. At the end of step S206, the unit UR updates the set Z such that it also comprises all the unprocessed frequencies of the frequency group A_m, that is to say all the frequencies of the group A_m excluding the frequency fa. Next the distribution unit executes step S207. Once all the frequency blocks have been processed and all the carrier frequencies of the set A have been distributed in the frequency band ΔFsy_BB of the system SY_BB, the device DP establishes the frequency scheduling PF and transmits it to the radiocommunication system SY_NB.
According to a third variant, the algorithm AG3 is greatly simplified in the case where the narrowband radiocommunication system is a TETRAPOL system with a channel width and an interval between carriers of 10 kHz, with groups of F=8 carrier frequencies (they could contain nine frequencies, but this is almost never the case in practice) and with a minimum frequency gap Δfe equal to 20 kHz. In that case, the number of carrier frequencies, F=8, of each group of carrier frequencies is at most equal to half of the number of carrier frequencies of a frequency block, N=18. If, after two iterations of the loop B2, the distribution unit selects two carrier frequencies which, on account of the constraint Δfe, belong to different groups of carrier frequencies, then all the carrier frequencies of these two groups will be selected alternately during the following steps. Considering that the fill limit for the test of step S204 is fixed at 16 carrier frequencies instead of the maximum value of 18 carrier frequencies, the algorithm amounts to selecting pairs of groups of frequencies so as to map them bijectively with frequencies of a frequency block according to FIG. 4. The distribution step ER according to the third algorithm AG=AG3, with reference to FIG. 6C, comprises steps S300 to S308. By comparison with the algorithms AG1 and AG2, the algorithm AG3 comprises neither a second iterative loop B2 nor the sets X, Y and Z. In step S300, the distribution unit UR defines a fifth set W comprising the groups of F=8 frequencies of the set A which have not yet been processed by the unit UR. The set W is initially equal to the set A and is stored in the memory ME. In step S301, the unit executes the iterative loop B1 by verifying whether the frequency band ΔFsy_BB comprises at least one free frequency block BFe_j, as in steps S101 and S201 respectively of the algorithms AG1 and AG2.
If all the frequency blocks have been processed, the allocation method stops in step S302, which is similar to steps S102 and S202 respectively of the algorithms AG1 and AG2. In step S301, if there are still free frequency blocks in the frequency band, the unit UR selects one of them and executes step S306. In step S306, the unit UR selects two groups of frequencies A_k and A_p, with indices k≠p, 1≦k≦M and 1≦p≦M, each comprising F=8 carrier frequencies, fa_k,0, . . . , fa_k,F-1 and fa_p,0, . . . , fa_p,F-1 respectively, both belonging to the set W and complying with a fourth condition CD4. According to the condition CD4, relating more particularly to the rules RR1 and RR3, the groups A_k and A_p are selected such that the frequency interference emitted by the macro-cells with which the groups of carrier frequencies A_k and A_p have been associated (in the step EA) corresponds to the smallest interfered surface area SImin. The unit UR determines the interfered surface area by means of frequency propagation prediction procedures known to the person skilled in the art for each pair of groups of carrier frequencies belonging to the set W, and selects the pair (A_k, A_p) of groups of frequencies which is associated with the smallest interfered surface area. At the end of step S306, the unit UR executes step S307 and bijectively maps each frequency fa_k,f of the first frequency group A_k with a frequency of even index Fe_j,2f of the frequency block BFe_j: φ(fa_k,f)=Fe_j,2f, and each frequency fa_p,f of the second frequency group A_p with a frequency of odd index Fe_j,2f+1 of the frequency block BFe_j: φ(fa_p,f)=Fe_j,2f+1, with 0≦f≦F-1, the frequency block BFe_j comprising the frequencies Fe_j,0 to Fe_j,17. The mappings are stored in the memory ME of the device DP. The set W is updated such that the frequency groups A_k and A_p are excluded from the set W (W=W-{A_k, A_p}). At the end of step S307, the unit UR repeats the loop B1 in step S308, which loops back to step S301 so as to select a new frequency block.
Once all the frequency blocks have been processed and all the carrier frequencies of the set A have been distributed in the frequency band ΔFsy of the system SY, the device DP establishes the frequency scheduling PF. This algorithm is slightly sub-optimal since only 16 carrier frequencies are distributed over the 18 available carrier frequencies of the frequency block. It is however simple and fast to execute. As a variant, the algorithm AG3 can again be simplified by noting that each group of carrier frequencies is selected only once throughout the execution of the algorithm and that the fourth condition CD4 may be replaced with the condition CD5, which is: the intersection of the surfaces interfered with by the frequency interference emitted by the macro-cells associated with the groups of frequencies Ak and Ap is a maximum. According to a last simplifying variant of the algorithm AG3, interference matrices well known to the person skilled in the art may be used for each macro-cell. By considering that the row of the interference matrix of the macro-cell containing the group of frequencies Ak gives the percentage of the various cells interfered with by the macro-cell containing this group Ak, and that the corresponding row of the interference matrix for the macro-cell containing the group of frequencies Ap gives the same percentages for that macro-cell, the scalar product of the corresponding row vectors provides a good approximation of the degree of overlap of the surfaces interfered with by these two sets of cells and therefore of the nature of their intersection. In the algorithm AG3, the condition CD4 is then replaced with the following condition: the scalar product of the rows of the interference matrices corresponding to the macro-cells containing the carrier frequencies of the groups Ak and Ap is a maximum.
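The interference-matrix variant lends itself to a very small selection criterion: pick the pair of groups whose matrix rows have the largest scalar product. A toy sketch (the rows below are made-up percentages, and the helper name is ours; real matrices would come from propagation prediction):

```python
def best_pair(rows):
    """Return the pair of group names whose interference-matrix rows
    have the largest dot product, i.e. whose interfered surfaces are
    estimated to overlap the most, together with that dot product."""
    best, best_dot = None, -1.0
    names = sorted(rows)
    for i, k in enumerate(names):
        for p in names[i + 1:]:
            dot = sum(a * b for a, b in zip(rows[k], rows[p]))
            if dot > best_dot:
                best, best_dot = (k, p), dot
    return best, best_dot

rows = {
    "A1": [0.9, 0.1, 0.0],  # fraction of each cell interfered with
    "A2": [0.8, 0.2, 0.0],
    "A3": [0.0, 0.1, 0.9],
}
print(best_pair(rows))  # ('A1', 'A2') overlap the most (dot product ~ 0.74)
```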
These various algorithms AG1, AG2 and AG3 make it possible to realize a first embodiment of the method according to the invention when a prior association of the carrier frequencies of the narrowband system has been established for each cell C of the system SY. According to the second embodiment of the scheduling method, the scheduling device DP defines a first set B of C groups of carrier frequencies B1 to BC associated respectively with the C cells of the radiocommunication system SY, each group comprising different carrier frequencies, called "virtual" frequencies. The virtual carrier frequencies can correspond for example to names of frequencies which will be associated subsequently with carrier frequencies of the frequency band ΔFsy of the radiocommunication system SY. The number of virtual carrier frequencies in a group can vary from one group to another. Each group of the set B is disjoint from the other groups of the set B. The device also defines a set of virtual blocks comprising an infinite number of frequency blocks, called virtual frequency blocks, BFv1, BFv2, . . . , in some of which the virtual carrier frequencies will be distributed in the step ER. Each virtual frequency block BFvh comprises N carrier frequencies Fv1,h to FvN,h. The virtual frequency blocks can correspond for example to names of frequency blocks which will be associated, in the step EA, with the real frequency blocks of the frequency band ΔFsy of the broadband radiocommunication system SY. With reference to FIG. 7, the distribution unit UR of the scheduling device executes the step ER of distributing the virtual frequencies of the set B in virtual frequency blocks as a function of the distribution rules RR1, RR2 and RR3.
Then, the association unit UA of the device DP executes the step EA of associating the virtual frequency blocks, in which the virtual frequencies of the set B have been distributed, with the J real frequency blocks as a function of the frequency reuse rules RU. The algorithm AG=AG4 of the second implementation executed by the distribution unit UR comprises steps S400 to S405, including a first iterative loop B1 for selecting a virtual frequency block BFvh and a second iterative loop B2 for selecting a carrier frequency Fvn,h of this virtual frequency block BFvh. Initially, in step S400, the device defines and stores in the memory ME the first set B and the set of virtual frequency blocks. The device also defines a set Y comprising the virtual carrier frequencies of the set B which have not yet been processed, that is to say which have not yet been bijectively mapped with a frequency of a virtual frequency block. The set Y is stored in the memory ME of the device DP and is initially equal to the set B. In step S401, the distribution unit UR executes the first iterative loop B1 by selecting a virtual frequency block BFvh and by defining an initially empty set X, intended to comprise the virtual carrier frequencies of the set B which have already been bijectively mapped with carrier frequencies Fvn,h of the selected frequency block BFvh. With each selection of a new frequency block, the set X is initialized to the empty set. The set X is stored in the memory ME. Then, in step S402, the unit UR executes the second iterative loop B2, by verifying whether all the carrier frequencies of the virtual frequency block BFvh have been processed. If some frequencies of the virtual frequency block BFvh have not been processed, the unit UR selects one of them, Fvn,h, either in a successive manner by incrementing a variable associated with each index n of the frequencies Fvn,h, or in a random manner.
If all the N frequencies Fv1,h to FvN,h of the virtual frequency block BFvh have already been selected, the second iterative loop B2 stops and the first loop B1 is again iterated in step S401 so as to select a new virtual frequency block. During the selection of a new carrier frequency Fvn,h in the virtual frequency block BFvh, in step S402, the unit UR verifies in step S403 whether there are still virtual carrier frequencies in the set Y. If all the frequencies of the set B have been processed in step S403, that is to say the set Y is empty, the distribution unit UR terminates executing the algorithm AG=AG4 and the association unit UA executes the association step EA, which will be described subsequently. If there are still virtual carrier frequencies in the set Y in step S403, the unit UR selects in step S404 a carrier frequency fb in the set Y which complies with the two distribution conditions CD1 and CD2 relating to the distribution rules RR1, RR2 and RR3 of the invention. According to the first condition CD1, relating more particularly to the rules RR1 and RR3, the carrier frequency fb must be selected such that the frequency interference emitted by the set of cells associated with the frequency fb and with the frequencies of the set X, that is to say the frequencies already distributed in the virtual frequency block BFvh, corresponds to the smallest interfered surface area Slmin.
The unit UR determines the interfered surface area by means of frequency propagation prediction procedures known to the person skilled in the art for each virtual carrier frequency of the set Y, and selects the frequency fb associated with the smallest interfered surface area and which also complies with the second condition CD2. According to the second condition CD2, associated more particularly with the second distribution rule RR2, the frequency fb must be selected in such a way that, for any virtual frequency Fα belonging to the set X of frequencies distributed in the virtual frequency block BFvh and being associated, together with the frequency fb, with one and the same cell, the frequency φ(Fα) of the block of frequencies BFvh corresponding bijectively to the frequency Fα complies with the constraint of minimum frequency gap Δfe with respect to the frequency Fvn,h, according to the distribution rule RR2. This condition CD2 makes it possible to verify that virtual carrier frequencies belonging to the same group of frequencies B and distributed in one and the same frequency block are spaced apart by a minimum frequency gap Me so as to avoid any frequency interference between frequencies associated with one and the same cell. In step S405, the unit UR bijectively maps the frequency fb selected from the set Y with the frequency Fvn,h: φ(fb) = Fvn,h, the frequency fb belonging to the first set B of virtual carrier frequencies and the frequency Fvn,h belonging to the virtual block BFvh. The mapping is stored in the memory ME of the device DP. The sets X and Y are updated such that the frequency fb is included in the set X (X = X ∪ {fb}) and is excluded from the set Y (Y = Y - {fb}).
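The minimum-gap check behind condition CD2 can be sketched with the TETRAPOL numbers quoted earlier (10 kHz carrier spacing, 20 kHz minimum gap); representing block positions as integer slot indices is an assumption of ours:

```python
def satisfies_min_gap(candidate_slot, same_cell_slots,
                      spacing_hz=10_000, min_gap_hz=20_000):
    """True if a candidate slot in a frequency block sits at least
    min_gap_hz away from every slot already holding a frequency of the
    same cell (adjacent slots are spacing_hz apart)."""
    return all(abs(candidate_slot - s) * spacing_hz >= min_gap_hz
               for s in same_cell_slots)

print(satisfies_min_gap(3, [1]))  # True: 2 slots apart = 20 kHz
print(satisfies_min_gap(2, [1]))  # False: 1 slot apart = 10 kHz only
```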
At the end of step S405, the unit UR repeats the second loop B2, which loops back to step S402, so as to select a new carrier frequency of the frequency block BFvh. Once all the carrier frequencies of the set B have been distributed in virtual frequency blocks, the device DP executes the association step EA so as to associate the virtual frequency blocks, in which the virtual carrier frequencies of the set B are distributed, with real frequency blocks of the frequency band ΔFsy of the system SY, while considering the limitation of the frequency resources and the reuse rules RU known from narrowband radiocommunication systems. Several virtual frequency blocks may be associated with one and the same real frequency block of the frequency band. At the end of the step EA, the frequency scheduling PF is determined as a function of the distribution of the carrier frequencies in each real frequency block of the frequency band and the association of each of these carrier frequencies with one or more cells of the communication system SY. A simplifying variant of the algorithm AG4, called algorithm AG5, similar to the algorithm AG3 of the first implementation, is to seek the pairs of cells such that the surface interfered with by the emission of the carrier frequencies associated with the selected pair of cells is the smallest. The association step EA then consists in generating a frequency plan PF of real carrier frequencies by associating real frequency blocks with the virtual frequency blocks using techniques well known to the person skilled in the art. The device DP considers each virtual frequency block as a group and applies the conventional scheduling and frequency reuse rules RU for narrowband systems to associate the virtual frequency blocks with the real frequency blocks. A third step (not represented in FIG.
7) can optionally be applied by considering that the order of the frequencies in a virtual block is defined only insofar as the constraint of minimum spacing between two carrier frequencies associated with one and the same cell is satisfied. After the frequency scheduling has been established, that is to say the association of the real frequencies with a virtual frequency block, the device permutes the frequencies inside this block, with the proviso that the minimum gap constraint is still complied with by the permutation performed. In particular, in the case of the algorithm AG5, this permutation amounts to permuting the roles of the two cells of the selected pair and to seeking which of these two permutations leads to the lowest interference level. Once the distribution of the frequencies of the narrowband system has terminated, the scheduling of the broadband system may be performed, the frequency blocks used in a cell of the broadband system being the frequencies which are not interfered with by the carriers of the narrowband system and which do not interfere with the carriers of the narrowband system. The method according to the invention guarantees an optimal or near-optimal number of available frequency blocks, without interference with the carriers of the narrowband system. The descriptions hereinabove are given merely by way of example to illustrate the invention, and the person skilled in the art will be able to define variants of these embodiments while remaining within the framework of the invention. The invention described here relates to a method, a radiocommunication system consisting of a narrowband radiocommunication system and a broadband radiocommunication system both co-located in part or totally on the same frequency band, a scheduling device and at least one base station of the narrowband radiocommunication system.
According to one embodiment, the steps of the method of the invention are determined by the instructions of a computer program incorporated into the scheduling device DP. The computer program able to be implemented in the scheduling device comprises program instructions which, when said program is executed in the device, whose operation is then controlled by the execution of the program, carry out an allocation of carrier frequencies of the narrowband base station in accordance with the method of the invention. Consequently, the invention also applies to a computer program, in particular a computer program recorded on or in a recording medium readable by a computer and any data processing device suitable for implementing the invention. This program can use any programming language, and be in the form of source code, object code, or of code intermediate between source code and object code, such as in a partially compiled form, or in any other desirable form for implementing the method according to the invention. The program may be downloaded into the device via a communication network such as the Internet. The recording medium may be any entity or any device capable of storing the program. For example, the medium can comprise a storage means on which the computer program according to the invention is recorded, such as a ROM, for example a CD-ROM or a microelectronic circuit ROM, or else a USB key, or a magnetic recording means, for example a hard disk. Inventors: Christophe Gruet (Elancourt, FR) and Gerard Marque-Pucheu (Verneuil-sur-Seine, FR); assignee: CASSIDIAN SAS; class: Frequency division.
Number of solutions of x^2 + xy + y^2 = 27 in Q
How many solutions does the equation $x^2+xy+y^2=27$ have in the set $\mathbb{Q}$?
There are infinitely many rational solutions to this equation. One family of solutions comes from taking y to have numerator –3, and using the continued fraction expansion of $\sqrt3$. That leads to the solutions $(x,y) = (6,-3),\; \bigl(\frac{69}{13},-\frac3{13}\bigr),\; \bigl(\frac{942}{181},-\frac3{181}\bigr),\; \bigl(\frac{13101}{2521},-\frac3{2521}\bigr),\ldots\,.$
How can we show that there exist infinitely many integers $b$ such that $3(4b^2-1)$ is a square?
If $3(4b^2-1) = a^2$, suppose for convenience that $a$ is a multiple of 3, $a = 3c.$ Then $4b^2-1 = 3c^2$, or $\bigl(\frac{2b}c\bigr)^2 - \frac1{c^2} = 3.$ If $c$ is large, then the left side will be close to $\bigl(\frac {2b}c\bigr)^2$ and so $\frac {2b}c$ must be close to $\sqrt3$. So go to the continued fraction calculator and plug in "3" in the square root box. You will then see a list of the convergents for the continued fraction expansion of $\sqrt3$, and you will notice that every fourth item in the list has an even numerator: $\frac21,\;\frac{26}{15},\;\frac{362}{209},\,\ldots \,.$ Let $2p_n/q_n$ be the n'th term in that sequence. Then $p_1=q_1=1$ and the sequence grows by the inductive rules $p_{n+1} = 7p_n + 6q_n$, $q_{n+1} = 8p_n + 7q_n$. You should then be able to show by induction that $4p_n^2-1 = 3q_n^2$. That gives an infinite family of integer solutions to the equation $4x^2-1 = 3y^2$. Going back to the original problem, $3(4x^2-1) = (3y)^2$, so that gives infinitely many integers x such that $3(4x^2-1)$ is a square.
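The recurrence is easy to check mechanically. The sketch below assumes the closed form $x = 3(1+3q_n)/(2p_n)$, $y = -3/p_n$, which follows from applying the quadratic formula to $x^2+xy+y^2=27$ with $y=-3/p_n$ and using $3(4p_n^2-1)=(3q_n)^2$; it reproduces the four solutions listed above:

```python
from fractions import Fraction

def rational_solutions(count):
    """Generate rational points on x^2 + x*y + y^2 = 27 from the
    thread's recurrence p1 = q1 = 1, p' = 7p + 6q, q' = 8p + 7q,
    which preserves the invariant 4p^2 - 1 = 3q^2."""
    p, q = 1, 1
    out = []
    for _ in range(count):
        assert 4 * p * p - 1 == 3 * q * q          # the induction claim
        x = Fraction(3 * (1 + 3 * q), 2 * p)
        y = Fraction(-3, p)
        assert x * x + x * y + y * y == 27         # the point is on the curve
        out.append((x, y))
        p, q = 7 * p + 6 * q, 8 * p + 7 * q
    return out

for x, y in rational_solutions(4):
    print(x, y)   # 6 -3, then 69/13 -3/13, 942/181 -3/181, 13101/2521 -3/2521
```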
Proof of the Cauchy-Schwarz inequality
If $\vec{X}$ and $\vec{Y}$ are vectors in $\Re^n$, then $\lvert \vec{X} \cdot \vec{Y} \rvert \leq \lVert \vec{X} \rVert \lVert \vec{Y} \rVert$. If either $\vec{X}$ or $\vec{Y}$ is the zero vector, the statement holds trivially, so assume that both $\vec{X}$ and $\vec{Y}$ are nonzero. Let $r$ be a scalar and $\vec{Z}$ be defined as $r\vec{X} + \vec{Y}$. Since, for any vector $\vec{V}$, $\vec{V} \cdot \vec{V} = \lVert \vec{V} \rVert^2 \geq 0$ (NOTE: merits own proof), $\displaystyle 0 \leq \vec{Z} \cdot \vec{Z} = (r\vec{X} + \vec{Y}) \cdot (r\vec{X} + \vec{Y})$ $\displaystyle {} = r^2(\vec{X} \cdot \vec{X}) + 2r(\vec{X} \cdot \vec{Y}) + (\vec{Y} \cdot \vec{Y})$ $\displaystyle {} = ar^2 + 2br + c$ where $a = \vec{X} \cdot \vec{X}$, $b = \vec{X} \cdot \vec{Y}$ and $c = \vec{Y} \cdot \vec{Y}$. It can be seen clearly that $p(r) = ar^2 + 2br + c$ is a quadratic polynomial that is non-negative for any $r$. Consequently, the polynomial has two complex roots or a single repeated real root.^[1] Remember that the roots of $p(r)$ are given by the quadratic formula $\displaystyle \frac{-2b \pm \sqrt{4b^2 - 4ac}}{2a}$ In particular, the term $4b^2 - 4ac$ must either be negative, yielding two complex roots, or zero, yielding a single real root. Thus $4b^2 - 4ac \leq 0$ $4b^2 \leq 4ac$ $b^2 \leq ac$ $\lvert b \rvert \leq \sqrt{ac}$ Substituting the values of $a$, $b$ and $c$ into the last of these inequalities, it can be seen that $\sqrt{(\vec{X} \cdot \vec{Y})^2} \leq \sqrt{\vec{X} \cdot \vec{X}}\ \sqrt{\vec{Y} \cdot \vec{Y}}$, that is, $\lvert \vec{X} \cdot \vec{Y} \rvert \leq \lVert \vec{X} \rVert \lVert \vec{Y} \rVert$, which is the original statement. 1. ↑ Intuitively, the graph of $p(r)$ is either 'floating above' the horizontal axis, if it has two complex roots, or tangent if it has one real root.
Since $p(r)$ is non-negative for every $r$, it can't have two real roots because the graph of the function would have to 'pass under' the horizontal axis.
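The inequality, and its equality case for parallel vectors, is easy to spot-check numerically; a small sketch:

```python
import math
import random

def cauchy_schwarz_gap(x, y):
    """||x|| * ||y|| - |x . y|; the Cauchy-Schwarz inequality says
    this is always >= 0, with equality when x and y are parallel."""
    dot = sum(a * b for a, b in zip(x, y))
    norm = lambda v: math.sqrt(sum(a * a for a in v))
    return norm(x) * norm(y) - abs(dot)

random.seed(0)
for _ in range(1000):
    x = [random.uniform(-5, 5) for _ in range(4)]
    y = [random.uniform(-5, 5) for _ in range(4)]
    assert cauchy_schwarz_gap(x, y) >= -1e-12  # holds, up to rounding

print(cauchy_schwarz_gap([1, 2, 3], [2, 4, 6]))  # ~ 0: parallel vectors
```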
Font utilities
10.1.4 Filtering curves
After generating the final pixel coordinates for each curve (see the previous sections), Limn next filters the curves to smooth them. Before this step, all the coordinates are on integer boundaries, which makes the curves rather bumpy and difficult to fit well. To filter a point p, Limn does the following:
1. Computes the sum of the distances of n neighbors (points before and after p) to p. These neighbors are always taken from the original curve, since we don't want a newly filtered point to affect subsequent points as we continue along the curve; that leads to strange results.
2. Multiplies that sum by a weight, and adds the result to p. The weight is one-third by default; you can change this with the `-filter-percent' option, which takes an integer between zero and 100.
Repeatedly filtering a curve leads to even more smoothing, at the expense of fidelity to the original. By default, Limn filters each curve 4 times; you can change this with the `-filter-iterations' option. If the curve has less than five points, filtering is omitted altogether, since such a short curve tends to collapse down to a single point. The most important filtering parameter is the number n of surrounding points which are used to produce the new point. Limn has two different possibilities for this, to keep features from disappearing in the original curve. Let's call these possibilities n and alt_n; typically alt_n is smaller than n. Limn computes the total distance along the curve both coming into and going out of the point p for both n and alt_n surrounding points. Then it computes the angles between the in and out vectors for both. If those two angles differ by more than some threshold (10 degrees by default; you can change it with the `-filter-epsilon' option), then Limn uses alt_n to compute the new point; otherwise, it uses n.
Geometrically, this means that if using n points would result in a much different new point than using alt_n, use the latter, smaller number, thus (hopefully) distorting the curve less. Limn uses 2 for n and 1 for alt_n by default. You can use the options `-filter-surround' and `-filter-alternative-surround' to change them. If the resolution of the input font is not 300dpi, you should scale them proportionately. (For a 1200dpi font, we've had good results with `-filter-surround=12' and `-filter-alternative-surround=6'.)
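The per-point rule can be sketched in a few lines of Python. This is a hypothetical re-implementation for illustration only (it omits the n versus alt_n angle test, uses a single surround value, and clamps at the ends of open curves); Limn's actual code differs:

```python
def filter_curve(points, iterations=4, weight=1/3, surround=2):
    """One smoothing pass moves each point p by `weight` times the sum
    of the displacement vectors from p to its `surround` neighbors on
    each side, always measured on the pass's input curve so that newly
    filtered points do not affect later ones. Curves with fewer than
    five points are returned unfiltered."""
    pts = [(float(x), float(y)) for x, y in points]
    if len(pts) < 5:
        return pts
    for _ in range(iterations):
        new = []
        for i, (px, py) in enumerate(pts):
            dx = dy = 0.0
            for off in range(1, surround + 1):
                for j in (i - off, i + off):
                    j = min(max(j, 0), len(pts) - 1)  # clamp at curve ends
                    dx += pts[j][0] - px
                    dy += pts[j][1] - py
            new.append((px + weight * dx, py + weight * dy))
        pts = new
    return pts

smoothed = filter_curve([(0, 0), (1, 2), (2, 0), (3, 2), (4, 0), (5, 2)])
```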
Sunnyvale, TX Algebra Tutor
Find a Sunnyvale, TX Algebra Tutor
...These will be "graded" and returned to allow the student maximum potential in the topic. Since I hold myself and my students to high standards, I will NOT charge for any lesson that the student is not satisfied with. Under NO circumstances should anyone pay for a service they are not receiving correctly.
16 Subjects: including algebra 1, algebra 2, reading, chemistry
...These classes were designed to teach a method for analyzing the subject and the audience in order to ensure effective communication. I look forward to working with you and easing your public speaking anxieties! I am an active fitness participant, currently working out 6 days a week.
29 Subjects: including algebra 1, English, reading, writing
...I believe a big key to success in math is recognizing and understanding the terminology, as well as finding a way for students to comprehend and even embrace the processes they are attempting to follow. I know that everyone hears how math is the basis for most of what we see and hear, but mathem...
17 Subjects: including algebra 1, algebra 2, English, reading
...I began tutoring my friends in high school and it quickly turned into a job in college while in Ann Arbor! It was then that I realized my deep passion for facilitating learning and when I decided to become more effective at my teaching practices. I then began working for Kaplan Test Prep where I was trained in ACT/SAT and GRE instruction.
21 Subjects: including algebra 1, algebra 2, reading, calculus
...In summary, students from the many public and private secondary schools and also community colleges and universities within or outside of the Dallas Metro Area will benefit academically, become more effective immediate and long-term critical thinkers, experience improved brain performance by impl...
17 Subjects: including algebra 1, algebra 2, chemistry, geometry
Math Forum Discussions - hypergeom() gives Inf
Date: Jun 29, 2012 11:49 AM
Author: kumar vishwajeet
Subject: hypergeom() gives Inf
I am using the hypergeom function in Matlab to find 1F1. The arguments of hypergeom are: hypergeom(1.5,4,973). I get "Inf" for it. So, I tried to write the code for the hypergeom function myself. Here is the code:
total = 0;
a = 1.5; b = 4; z = 973.2763;
for k = 1:103  % theoretically k goes up to infinity
    num = 1;
    den = 1;
    for i = 1:k
        num = num*(a+i-1);
        den = den*(b+i-1);
    end
    total = total + (num/den)*(z^k)/factorial(k);
end
I get Inf when I increase the limit of k to 104. Unfortunately, the series in this case appears to be non-converging, i.e. as I increase the limit of k, the value "total" increases. How should I go about it? Meanwhile, I tried a simple converging series with hypergeom(1,1,2). MATLAB gave me 7.3891. I used the code written above to calculate it myself. I got 6.3891 when the last limit of k was 172, i.e. k = 1:172. As soon as I increased k to 173, I got NaN. So, I want to know: how does the function hypergeom in MATLAB work? Does it also work for diverging series, as in my case?
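Two separate things seem to be going on in the code above, and a short sketch can illustrate both. First, the Kummer series starts at k = 0, so a loop over k = 1:K omits the leading term 1; that is exactly the 7.3891 versus 6.3891 discrepancy, since 1F1(1;1;2) = e^2 ≈ 7.3891. Second, the series for 1F1(1.5; 4; 973) does converge, but its value is on the order of 10^416, far above the double-precision ceiling of about 1.8e308, so the Inf and NaN are overflow artifacts (z^k, factorial(k) and the Pochhammer products all exceed realmax) rather than divergence. One workaround is to accumulate logarithms of the terms, sketched here in Python (an illustration, not Matlab's hypergeom):

```python
import math

def log_hyp1f1(a, b, z, max_terms=5000):
    """Natural log of Kummer's 1F1(a; b; z) for z > 0, summing the
    series in log space (log-sum-exp) so that huge values such as
    1F1(1.5; 4; 973) never overflow a float."""
    logs = []
    for k in range(max_terms):
        # log of term_k = (a)_k / (b)_k * z^k / k!, via log-gammas
        logs.append(k * math.log(z)
                    + math.lgamma(a + k) - math.lgamma(a)
                    - (math.lgamma(b + k) - math.lgamma(b))
                    - math.lgamma(k + 1))
    m = max(logs)
    return m + math.log(sum(math.exp(t - m) for t in logs))

print(log_hyp1f1(1, 1, 2))                     # ~ 2.0, since 1F1(1;1;z) = e^z
print(log_hyp1f1(1.5, 4, 973) / math.log(10))  # ~ 416: the value is ~ 1e416
```

Arbitrary-precision tools (for example mpmath's hyp1f1) can confirm the magnitude; the point is that the sum itself is finite and only its floating-point representation overflows.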
Physics Department, Princeton University
Graduate Courses
MOL 515/PHY 570/EEB 517/CHM 517 Method and Logic in Quantitative Biology. Close reading of published papers illustrating the principles, achievements, and difficulties that lie at the interface of theory and experiment in biology. Two important papers, read in advance by all students, will be considered each week; the emphasis will be on discussion with students as opposed to formal lectures. Topics include: cooperativity, robust adaptation, kinetic proofreading, sequence analysis, clustering, phylogenetics, analysis of fluctuations, and maximum likelihood methods. A general tutorial on Matlab and specific tutorials for the four homework assignments will be available. (Ned S. Wingreen)
PHY 503 Classical Mechanics: Principles and Problem Solving (Half-Term). A graduate-level review of classical mechanics emphasizing problem solving. (Staff)
PHY 504 Electromagnetism: Principles and Problem Solving (Half-Term). A graduate-level review of electromagnetism emphasizing problem solving. (Staff)
PHY 509 Quantum Field Theory. Canonical and path integral quantization of quantum fields, Feynman diagrams, gauge symmetry, elementary processes in quantum electrodynamics. (Staff)
PHY 513 Quantum Mechanics: Principles and Problem Solving (Half-Term). A graduate-level review of quantum mechanics emphasizing problem solving. (Staff)
PHY 514 Statistical Physics: Principles and Problem Solving (Half-Term). A graduate-level review of statistical physics emphasizing problem solving. (Staff)
PHY 525 Introduction to Condensed Matter Physics. Electronic structure of crystals, phonons, transport and magnetic properties, screening in metals, and superconductivity. (Staff)
PHY 540 Selected Topics in Theoretical High-Energy Physics: Strings, Black Holes and Gauge Theories. Conformal field theory, gauge/strings duality, de Sitter space, turbulence. (Staff)
PHY 557 Electronic Methods in Experimental Physics. Experimental techniques with analog and digital electronics. Analog circuits: operational amplifiers, active filters, low-level measurements, phase-lock loops and power supplies. Digital circuits: discrete logic, flip-flops, counters, data transmission, A/D and D/A converters; and FPGA programming and microcontroller-based data acquisition. Students build about 100 circuits from voltage dividers to microcomputers. (Christopher G. Tully, Norman C. Jarosik)
Hialeah Lakes, FL Calculus Tutor
Find a Hialeah Lakes, FL Calculus Tutor
...I try to make Algebra interesting by also showing application of problems in real life. My University B.A degree from the University of California is Computational Mathematics, and I had to take upper division courses in discrete math. Discrete math is the study of mathematical structures and functions that are "discrete" instead of continuous.
48 Subjects: including calculus, chemistry, reading, French
...I have a course for a student who just wants test taking tips and that is 3 weeks at 3 times per week. This is for the student who needs a little math review and more on standardized test taking skills review. For the student who needs more math review I have a 6 - 12 week program at 3 times per week.
30 Subjects: including calculus, geometry, ASVAB, GRE
...Sincerely, Nadeem. Discrete mathematics is the study of mathematical structures that are fundamentally discrete rather than continuous. In contrast to real numbers that have the property of varying "smoothly", the objects studied in discrete mathematics – such as integers, graphs, and statements i...
23 Subjects: including calculus, chemistry, physics, statistics
...We will learn terms like circumference and area of a circle; also, area of a triangle, volume of a cylinder, sphere, and a pyramid. Trigonometric functions and angle derivation will be explained and applied. Geometric proofs are an important aspect of geometry and so these will be extensively explained.
46 Subjects: including calculus, chemistry, reading, English
...During my four years at UM, I spent a considerable amount of time tutoring, both one-on-one and in front of larger groups. I worked for the Academic Resource Center, which provides free tutoring for undergraduate students, for three years tutoring both chemistry and calculus. As a senior, I was nominated for the Excellence in Tutoring Award.
14 Subjects: including calculus, chemistry, geometry, biology
Counterexamples in Origami | Roots of Unity, Scientific American Blog Network Surfaces are complicated. Triangles are simple. That’s an idea behind some methods of creating computer graphics and some advanced mathematics. If we have a surface, we can take a bunch of points on the surface and connect them into triangles to obtain an approximation of the surface. That’s all well and good, but how reliable is the triangulation? How accurately does it reflect the properties of the original surface? For example, as we increase the number of triangles in the approximation of the surface, will the surface area of the triangulated surface get close to the surface area of the original surface? In 1880, mathematician and righteous facial hair maintainer Hermann Schwarz answered this question in the negative by producing a counterexample, a surface and sequence of triangulated approximations for which the surface area of the triangulations gets arbitrarily large and hence doesn’t converge to the surface area of the original surface. Earlier this semester, I had the opportunity to go to an origami workshop put on by the local chapter of the Association for Women in Mathematics. Radhika Gupta, a graduate student here at the University of Utah, showed us how to make this counterexample, dubbed the Schwarz lantern, just by folding paper. To make a Schwarz lantern, we start with a piece of paper. We divide the length of the paper into M parts, so we get M skinny horizontal rectangles. Then each skinny horizontal rectangle gets divided into 2N triangles by placing N points on both the top and bottom at an offset and connecting the points up into triangles. (Note that in the end, the left and right sides will be glued together, so some triangles are drawn so half is on the right side and half is on the left side before gluing.) 
Now we just have to fold the horizontal lines “in” and the diagonal lines “out” and tape the right side to the left side to get a surface that is made of triangular faces but kind of looks like a cylinder. (Caution: this step is kind of hard. I had to get help from workshop leader Gupta and one of my students, who is a talented origami folder. You may also require assistance.) As M and N increase, the triangulation converges to the cylinder in one sense: the distance between any point on the triangulation and a point on the cylinder goes to 0 as the number of triangles increases. But depending on the ratio of N to M, the surface area of the triangulation may not converge to the surface area of the cylinder. If N is a lot larger than M (specifically, if the ratio N/M does not go to 0), the surface area of the triangulation gets arbitrarily large. Another way to think about it is that if N is very large compared to M, then as N and M increase, the cylinder you get when you fold them up gets smaller and smaller. It’s worth asking whether we should be surprised that we can create this counterexample. Should we expect triangulated approximations of our surface to have the same properties as the surface itself? When I was trying to decide whether the Schwarz lantern was mind-blowing or just interesting, I thought about what happens when we ask a similar question about curves in the plane. When we approximate a curved shape in the plane with polygons (shapes with straight sides), will the perimeter of the polygons converge to the perimeter of the curved shape? Not necessarily. Let’s just think about a circle as the shape we’re approximating. If we restrict our approximations to regular convex polygons (your standard issue equilateral triangles, squares, and so on) then we can approximate the circle by increasing the number of sides in the regular polygons. In that case, the perimeter of the polygons does go to the perimeter of the circle as the number of sides increases.
But if we relax the rules, it’s possible to draw a sequence of polygons that seem to converge to the circle but whose perimeters are stubbornly always equal to 4. For a nice explanation of that construction, see Vi Hart’s video Rhapsody on the Proof of Pi=4. Although it’s not a perfect analogy, I think the Schwarz lantern idea is similar to the one in that video, so in some sense its existence isn’t too surprising. For more information on the Schwarz lantern, check out Conan Wu’s blog post about it or its page on Cut the Knot. Wu’s post includes a printable template and instructions for making your own!
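To see the divergence numerically, here is a short Python sketch of the lantern's area (my own addition, not from the post). The closed form follows from the triangle geometry described above, and the variable names `n_around` and `m_bands` are my own; they may be swapped relative to the post's N and M.

```python
import math

def lantern_area(r, h, n_around, m_bands):
    """Total area of the Schwarz lantern triangulation of a cylinder of
    radius r and height h, built from m_bands stacked bands with n_around
    points per ring (alternate rings offset by half a step)."""
    base = 2 * r * math.sin(math.pi / n_around)            # chord across one triangle
    sag = 2 * r * math.sin(math.pi / (2 * n_around)) ** 2  # radial offset of the apex
    slant = math.hypot(h / m_bands, sag)                   # height of each triangle
    return 2 * m_bands * n_around * 0.5 * base * slant

r, h = 1.0, 1.0   # unit cylinder; true lateral area is 2*pi, about 6.283
# Bands and ring points grow together: the area converges to the cylinder's.
print(lantern_area(r, h, 100, 100))       # close to 2*pi
# Bands grow like the cube of the ring count: the area blows up.
print(lantern_area(r, h, 20, 20 ** 3))    # many times the true area
```

Running it with larger and larger arguments shows both behaviors: the balanced refinement settles toward 2πrh, while the band-heavy refinement grows without bound.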
{"url":"http://blogs.scientificamerican.com/roots-of-unity/2013/11/30/counterexamples-in-origami/","timestamp":"2014-04-16T13:54:06Z","content_type":null,"content_length":"97155","record_id":"<urn:uuid:06a378aa-9cb4-44f1-8814-d3de8fea8ee0>","cc-path":"CC-MAIN-2014-15/segments/1397609523429.20/warc/CC-MAIN-20140416005203-00064-ip-10-147-4-33.ec2.internal.warc.gz"}
A parabolic headlight is formed by revolving the parabola y^2 = 15x between the lines y = -3 and y = 3 about its line... - Homework Help - eNotes.com
A parabolic headlight is formed by revolving the parabola y^2 = 15x between the lines y = -3 and y = 3 about its line of symmetry. Where should the headlight bulb be placed for maximum illumination?
Maximum illumination will be achieved if the bulb is placed at the focus of the parabola created by the cross-section of the headlight. If we rearrange the equation we know that: `x=1/15y^2`, therefore, a = 1/15. The focus of a parabola in this form is located at: x = 1/(4a). If we place the vertex of the parabola at the origin (0,0) for simplicity, this gives the focus at the point (1/(4a), 0). Substituting our known value of a we get: x = 1/(4 * 1/15) = 15/4 = 3.75. Therefore, the bulb should be placed 3.75 units to the right of the vertex.
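As a quick check of that arithmetic (my addition, not part of the eNotes answer), exact rational arithmetic reproduces the focus:

```python
from fractions import Fraction

# y^2 = 15x rearranges to x = (1/15) y^2, so the leading coefficient is a = 1/15.
a = Fraction(1, 15)

# For a parabola x = a*y^2 with vertex at the origin, the focus lies
# on the axis of symmetry at x = 1/(4a).
focus_x = 1 / (4 * a)

print(focus_x)   # 15/4, i.e. 3.75 units right of the vertex
```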
{"url":"http://www.enotes.com/homework-help/parabolic-headlight-formed-by-revolving-parabola-434610","timestamp":"2014-04-16T19:01:18Z","content_type":null,"content_length":"26041","record_id":"<urn:uuid:ce3c545b-89da-48c7-bebb-391540216f48>","cc-path":"CC-MAIN-2014-15/segments/1397609524644.38/warc/CC-MAIN-20140416005204-00578-ip-10-147-4-33.ec2.internal.warc.gz"}
Thermal expansion of a tube [Archive] - Mechanical Design Forum 6th Sep '10, 09:07 I would like some help please on calculating the increased size of a tube due to increased temperature. This tube forms part of an assembly inside a pressure vessel and I need to ensure the necessary clearances are in place so this tube can be free to expand as temperature increases. The technical details of the problem are as follows:
Tube Material - PTFE
Coefficient of linear thermal expansion - 10^-5 per deg C
Initial Temperature - 20 deg C
Elevated Temperature - 200 deg C
Change in Temperature - 180 deg C
Tube Outside Diameter - 399mm
Tube Inside Diameter - 356mm
Tube Length - 190mm
Any help in calculating what the tube sizes will be when at 200 deg C would be greatly appreciated.
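No reply survives in this archive, but the calculation itself is one line per dimension: dL = alpha * L * dT. The sketch below is mine, not from the forum; it reads the garbled expansivity line as alpha = 1e-5 per degC, so treat that value as an assumption and check a PTFE datasheet before relying on the numbers (PTFE's real expansivity is larger and temperature-dependent).

```python
# Linear thermal expansion applied to each dimension of the tube.
alpha = 1e-5          # per degC -- my reading of the garbled figure; an assumption
dT = 200 - 20         # degC

def expanded(dim_mm):
    """Dimension after uniform heating: L * (1 + alpha * dT)."""
    return dim_mm * (1 + alpha * dT)

for name, dim in [("OD", 399.0), ("ID", 356.0), ("Length", 190.0)]:
    # A hole in a uniformly heated part grows rather than shrinks, so the ID
    # expands by the same fractional amount as the OD.
    print(f"{name}: {dim} mm -> {expanded(dim):.3f} mm")
```

With these inputs every dimension grows by a factor of 1.0018, e.g. the OD goes from 399 mm to about 399.718 mm.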
{"url":"http://www.mechanicaldesignforum.com/archive/index.php/t-519.html","timestamp":"2014-04-20T23:43:47Z","content_type":null,"content_length":"8857","record_id":"<urn:uuid:ed0fe6d7-7c89-40aa-8100-eab2515b1be2>","cc-path":"CC-MAIN-2014-15/segments/1398223207046.13/warc/CC-MAIN-20140423032007-00373-ip-10-147-4-33.ec2.internal.warc.gz"}
Computer Science 343
Data Structures
Fall 2012
Dr. Stephen Bloch
office 203 Post Hall, phone 877-4483
office hours TTh 3:00-4:00, W 10:00-1:00, W 2:30-4:00, or by appointment
August 25, 2012
Course Description
Expand on topics learned in CSC 172. Examine, implement, and analyze common data structures such as stacks, queues, lists, trees, heaps, and graphs. Understand how to choose an appropriate data structure for a real-world problem and use it in solving such problems.
Subject Matter
This is a course for anybody who’s ever complained about a computer being slow. When faced with a computer program that takes unacceptably long to solve the problem at hand, one option is to buy faster, more expensive computer hardware. But this option has its limits: perhaps your money will run out, or perhaps you’ll upgrade to the fastest computer on the market and still be unsatisfied. Often a better option is to find a more efficient approach to the problem — to re-think the algorithm so that it makes better use of the hardware. A $1,000 personal computer running a good algorithm can often outperform a $1,000,000 supercomputer running a bad (though correct) algorithm. In 1976, Niklaus Wirth (inventor of the Pascal language) wrote a classic computer science textbook entitled Algorithms + Data Structures = Programs. The statement was somewhat oversimplified, but almost forty years later it still carries a lot of truth: a great deal of the work of writing a correct and efficient program consists of choosing algorithms and data structures. Indeed, although Adelphi offers two separate courses CSC 343 (“Data Structures”) and CSC 344 (“Algorithms and Complexity”), the subjects really can’t be separated so easily: many important algorithms only work well with a particular data structure, and many important data structures are motivated by particular algorithms.
Accordingly, I plan to teach the two courses as a seamless whole, addressing both algorithms and data structures as they naturally come up. In this course we’ll learn how to measure the efficiency of an algorithm, independent of language, operating system, or hardware. We’ll survey a variety of techniques for designing efficient algorithms and data structures. Many of these techniques will help you program correctly even when you’re not worried about efficiency. In the second semester, we’ll learn some additional important algorithms and data structures. We’ll learn how to prove that a given algorithm is “as good as it can get,” in the sense that no algorithm, no matter how clever, will ever be better than this one. Finally, we’ll study problems believed to have no efficient solution by computer program, and even some problems which have no computational solution at all, and how we deal with such problems in reality.
Course Learning Goals
By the end of this semester, you should be able to
• write and critique proofs by mathematical induction
• read and write algorithms in pseudocode
• offer constructive critiques of other students’ algorithms, data structures, and proofs
• respond constructively to such critiques of your work
• make good asymptotic estimates (in big-O notation) of the running time and memory consumption of a given program
• capture and analyze empirical data to compare the running time of different programs
• implement and use linked lists, array-based lists, stacks, queues, binary and n-ary trees, self-balancing trees, heaps, and hash tables, including the provision of access and traversal methods for these data structures
• implement and compare a variety of algorithms for a single problem, e.g.
insertion sort, bubble sort, selection sort, Shell sort, merge sort, Quicksort, tree sort, heap sort, bin sort, and bucket sort.
I’ve chosen two textbooks to use throughout the two-semester sequence: they cover a lot of the same topics, but sometimes I like one author’s treatment better than the other’s. One is the “bible” of algorithms and data structures, Introduction to Algorithms by Cormen, Leiserson, Rivest and Stein (MIT Press 2009, ISBN 0262033844). (The “Rivest” author is the R in the RSA cryptosystem, by the way.) It’s a big, thick book, but relatively inexpensive, because there’s not much second-hand market for it: once you’ve bought it, you’ll probably refer to it for the rest of your programming career. If you really don’t want to buy the big hardback, the second edition (and several hundred other books) is free on-line to ACM Student Members. I highly recommend that all CS majors become members of the ACM: at $19/year, it’s an incredible bargain. The second edition is about 90% the same as the current third edition; I’ll try to have a copy of the third edition on reserve at the library. The other textbook is Cliff Shaffer’s Data Structures and Algorithm Analysis (Dover 2012, ISBN 0486485811). You can buy a printed copy, but it’s also available free on-line from http://people.cs.vt.edu/~shaffer/Book/ (in either a Java or a C++ edition). I may also give out other reading assignments by email, on the Web, or in journals. This is a 3-credit class, which means you should budget 6 hours per week outside class for reading and homework. In particular, you’ll need to read about 30 pages per week, on average. Make time in your weekly schedule for this! I assume that everyone in the class has passed CSC-MTH 156 (Discrete Structures) or an equivalent course covering Boolean logic and algebra, graphs and trees, and perhaps recurrence relations.
I assume that you’ve passed pre-calculus: we’ll seldom need derivatives or integrals, but we’ll need logs and exponentials all the time, so refresh your memory of them. I also assume that everyone in the class has passed at least a year of programming courses. I don’t care much about the language, as long as you have written and debugged a number of programs and are familiar with the notions of algorithm, recursion, loop, array, linked list, etc. I’ll put up examples in Java or Scheme syntax, whichever seems more natural for the problem at hand. There will probably be 5 homework assignments, each worth 12% of the semester grade, and a final exam worth 20%. Another 15% of your semester grade will be based on class participation and in-class presentations of homework problems; see below. The remaining 5% of your semester grade will be “brownie points”, which you earn by asking and answering good questions in class, coming to me for help when you need it, etc, and you lose by cheating, being a pain in class, etc. The final exam must be taken at the scheduled time, unless arranged in advance or prevented by a documented medical or family emergency. If you have three or more exams scheduled on the same date, or a religious holiday that conflicts with an exam or assignment due date, please notify me in writing within the first two weeks of the semester in order to receive due consideration. Exams not taken without one of the above excuses will be recorded with a grade of 0. Late homework policy Homework assignments will be accepted late, with a penalty of 25% per 24 hours or portion thereof after they’re due. An hour late is 25% off, 25 hours late is 50% off, etc. Any homework assignment turned in more than four days late will get a zero. Any homework assignment turned in after Dec 6 (the last day of class) will get a zero. (This is so I have, perhaps, time to grade them before the final exam.) There will be several kinds of homework in this class. 
At one extreme are the analysis and “thought” problems on paper, resembling problems in a math class. At the other extreme are programming assignments, which may be written in any language that you and I both know and that runs at Adelphi (e.g. Scheme, Prolog, Java, C, C++). In between are pseudocode assignments: these need to be precise descriptions of an algorithm, like a computer program, but they don’t need to meet the syntactic requirements of a compiler (only a human being will read them) and you can ignore details that aren’t relevant to the problem at hand. For example, in a problem that wasn’t primarily about sorting, you might say “sort table A in increasing order by value” as one line of the algorithm. On the other hand, if the assignment were about sorting, I would expect you to give the details of your sorting algorithm. Each student in the class will present some solutions to homework problems at the board in class, explaining the solution and answering technical questions from me and other students. (This is one reason I don’t want things handed in late: I’d rather see your solution rather than have you copy down and turn in the solution one of your classmates presented.) When I assign a problem, I’ll give it a number of “points” based on difficulty, and you are expected to present 20 “points” worth of problems in the semester. If you want to present a particular problem in class, try to get to class a few minutes early and start writing it up on the board. If you’re not sure of your solution or your presentation, feel free to discuss it with me in my office before the day you want to present it in class. Your grade for in-class presentations will be my assessment of how well you presented the solution and answered questions about it. I’ll also pay attention to who asks those questions. 
Attendance policy
I expect you to be in the classroom at 9:25, and to still be in the classroom at 10:40; if you’re not present, and miss some important material, it’s your problem. If you must miss a class, arrive late, or leave early, please let me know as far in advance as possible. I don’t explicitly grade on attendance, but consistent lateness or absence will impact the “brownie points” part of your grade.
Students with Disabilities
If you have a disability that may impact your ability to carry out assigned course work, and are not enrolled in the Learning Disabilities Program, please contact the staff in the Disability Support Services Office (DSS), University Center, Room 310, (516) 877-3145. DSS@adelphi.edu. DSS will review your concerns and determine, with you, appropriate and necessary accommodations. All information and documentation of disability is confidential. In particular, you may choose whether, and in how much detail, to discuss your disability with the instructor.
The Adelphi University Code of Ethics applies to this course; look it up on the Web at http://academics.adelphi.edu/policies/ethics.php. Assignments in this class are to be done individually, or in teams of two by prior permission; in the latter case, you may not do multiple homeworks with the same partner. You may discuss general approaches to a problem with classmates, but you may not copy large pieces of programs or homework solutions. If you do, all the students involved will be penalized (e.g. I’ll grade the assignment once and divide the points equally among the several people who turned it in). All work on an exam must be entirely the work of the one person whose name is at the top of the page. If I have evidence that one student copied from another on an exam, both students will be penalized; see above.
Student course evaluations During the last two weeks of the class, you will be informed, via email and eCampus, that the University’s online course evaluation form is available for your input. It will no longer be available after the beginning of final-exam week, so make sure you fill it out before then. They don’t show me any of the results until after I’ve turned in semester grades. We really do take this stuff seriously in deciding what to do differently in future courses, so please give detailed feedback (something more specific and useful than “this course sucks!” or “this course rocks!”). This class meets every Tuesday and Thursday from 9:25-10:40 AM in Hagedorn 111, unless we agree to change that. The schedule of topics, reading assignments, and homework assignments will be maintained on the Web at http://www.adelphi.edu/sbloch/class/343/calendar.html The dates are subject to change depending on how classroom discussions actually go. I expect you to have read the reading assignments before the lecture that deals with that topic; this way I can concentrate my time on answering questions and clarifying subtle or difficult points in the textbook, rather than on reading the textbook to you, which will bore both of us. Please read ahead! Last modified: Saturday, August 25th, 2012 9:31:07pm
{"url":"http://home.adelphi.edu/sbloch/class/archive/343/fall2012/syllabus.html","timestamp":"2014-04-19T04:21:08Z","content_type":null,"content_length":"15754","record_id":"<urn:uuid:b30fc7d8-df1b-4aa4-8fd5-0a9107a3e4b8>","cc-path":"CC-MAIN-2014-15/segments/1397609535775.35/warc/CC-MAIN-20140416005215-00602-ip-10-147-4-33.ec2.internal.warc.gz"}
Find the point on the line closest to the origin?
October 26th 2009, 06:13 PM #1 Junior Member Oct 2008
Find the point on the line closest to the origin? The question asks to find the point closest to the origin of the straight line perpendicular to a surface equation previously given. I found the equation for the normal line, but I am not sure how to find the point on the line closest to the origin. Can anyone tell me how to do this? Thanks in advance. Here is the equation of the line...
I've yet to get to multi-variable calculus, but wouldn't you simply be solving an optimization problem using the distance formula?
Parametrize the line as r(t) = <6t + 1, 3t + 2, 2t + 3>. Then the distance from the origin squared of r is given by (6t + 1)^2 + (3t + 2)^2 + (2t + 3)^2. Differentiate by t and set it equal to zero. Then solve for t and plug it into the distance formula.
Okay, so by distance formula do you mean plug t back into the parametrized r(t) equations you have above? Do you know what the final answer should be? Thanks.
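The archived thread breaks off before anyone states the final answer. Carrying out the differentiate-and-solve step it describes (a sketch of my own, not from the thread): setting the derivative of the squared distance to zero is algebraically the same as t* = -(r0 . d)/(d . d), and exact rational arithmetic gives the closest point.

```python
from fractions import Fraction

# Line from the thread: r(t) = (6t+1, 3t+2, 2t+3) = r0 + t*d,
# with base point r0 = (1, 2, 3) and direction d = (6, 3, 2).
r0 = (Fraction(1), Fraction(2), Fraction(3))
d = (Fraction(6), Fraction(3), Fraction(2))

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

# Minimizing |r0 + t*d|^2: the derivative vanishes at t* = -(r0 . d)/(d . d).
t_star = -dot(r0, d) / dot(d, d)
point = tuple(a + t_star * b for a, b in zip(r0, d))

print(t_star)   # -18/49
print(point)    # (-59/49, 44/49, 111/49)
```

This matches the thread's recipe: differentiate (6t+1)^2 + (3t+2)^2 + (2t+3)^2, solve 49t + 18 = 0, and plug t back into r(t).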
{"url":"http://mathhelpforum.com/calculus/110709-find-point-line-closest-origin.html","timestamp":"2014-04-19T11:45:41Z","content_type":null,"content_length":"46617","record_id":"<urn:uuid:f4517150-9284-499a-8359-505ff640fae3>","cc-path":"CC-MAIN-2014-15/segments/1397609537754.12/warc/CC-MAIN-20140416005217-00226-ip-10-147-4-33.ec2.internal.warc.gz"}
Idledale Science Tutor Find an Idledale Science Tutor
...I have learned through personal experience that good study skills and hard work are a strong foundation for success. As a student I have struggled with rushing through study material for exams, and found myself stressed during tests. However, once I developed my study skills, I realized that homework, essays, and studying for exams didn't take as long as it used to.
31 Subjects: including physiology, genetics, nutrition, piano
...The Bilingual Education courses focused on teaching strategies for Second Language Learners (English Language Learners/ELLs). For example, one strategy is SIOP, which is a learning program that stands for Sheltered Instruction Observation Protocol. The focus of this model is examining language objectives. Language acquisition is a process.
13 Subjects: including biology, anatomy, reading, Spanish
...I believe that in order to help someone learn material you must use repetition coupled with positive reinforcement of habits and success when it is achieved. When a student feels that they have succeeded and their efforts are reinforced positively, learning becomes much easier and they are willing to try harder. I am a great student in reading, writing, math, science, and history.
13 Subjects: including physics, physiology, anatomy, biology
...I have a good understanding of vector spaces, linear operators and linear systems. While I was at Fort Valley State University, emphasis was placed on finite-dimensional vector spaces, linear transformations and matrix algebra. Topics included determinants, Gauss method, Gauss-Jordan method, Cr...
15 Subjects: including electrical engineering, physics, chemistry, calculus
...Cognitive techniques can empower learners to anticipate this deficit, alternate between task types, and resist the depletion of attentive energy. No single solution exists for every learner. My goal is to empower you to test yourself and monitor your own progress.
31 Subjects: including psychology, reading, biology, ecology
{"url":"http://www.purplemath.com/idledale_science_tutors.php","timestamp":"2014-04-19T20:13:17Z","content_type":null,"content_length":"23986","record_id":"<urn:uuid:c1ae15cf-a669-4719-8f75-c851ddc0bb94>","cc-path":"CC-MAIN-2014-15/segments/1397609537376.43/warc/CC-MAIN-20140416005217-00471-ip-10-147-4-33.ec2.internal.warc.gz"}
• Students fill in 10 missing products on a 5x5 multiplication grid. • Fill in the squares below by adding the two numbers that meet in the square. Use your finger, pencil, or ruler to help you track the grid. • Students fill in 48 missing products on a 9x9 multiplication grid. • Follow the directions and color the grid to make the shape of a tulip. Grid coloring is great for developing early map skills. • Find the shapes in these 6x6 grids and write down the correct coordinates. Where is the square? • From "acute" to "trapezoid". Review geometry vocabulary with this word search. Answer sheet included. • Students fill in 40 missing products on a 10x10 multiplication grid. • Follow the directions and color the grid to make the shape of a daisy. Grid coloring is great for developing early map skills. • Add positive and negative numbers vertically and horizontally to complete ten nine-square grids. • Fill in the squares below by adding the two numbers that meet in the square. Use your finger, pencil, or ruler to help you track the grid. • Find the shapes in these 4x4 grids and write down the correct coordinates. Where is the circle? • Follow the directions and color the grid to make the shape of an umbrella. Grid coloring is great for developing early map skills. The happy face is in B1. Find the pictures in this simple grid and write down the correct co-ordinates. A great introduction to map-reading! Not really magic-- just math fun! Numbers should be inserted so that rows, columns, and diagonals all add up to the same sum. Solve the two squares 4 different ways. Not really magic-- just math fun! Numbers should be inserted so that rows, columns, and diagonals all add up to the same sum. Solve the two squares 3 different ways. Good practice for giving directions, reading grid maps, and understanding legends. Students determine the rule for input/output tables, and complete the table using the rule. 
One page worksheet practice for reading grid maps and understanding legends. Add the numbers vertically and horizontally to complete the nine nine-square grids. Add the numbers vertically and horizontally to complete the nine nine-square grids. Add the numbers vertically and horizontally to complete the nine nine-square grids. Add the numbers vertically and horizontally to complete the nine nine-square grids. Add the numbers vertically and horizontally to complete the nine nine-square grids. Not really magic-- just math fun! Numbers should be inserted so that rows, columns, and diagonals all add up to the same sum. Solve the two squares 4 different ways. Students determine the rule for input/output tables, and complete the table using the rule. Add three-digit numbers vertically and horizontally to complete nine nine-square grids. Color D3 red. Follow the directions and color the grid to make the shape. Grid coloring is great for developing early map skills, as well as practicing following directions. Students determine the rule for input/output tables, and complete the table using the rule. Students follow color and number patterns to determine the rule for a grid.
{"url":"http://www.abcteach.com/directory/subjects-math-grids-9388-2-1","timestamp":"2014-04-21T04:42:56Z","content_type":null,"content_length":"149203","record_id":"<urn:uuid:1f66dc05-5ea0-46d7-b095-46ca801ad8d9>","cc-path":"CC-MAIN-2014-15/segments/1397609539493.17/warc/CC-MAIN-20140416005219-00341-ip-10-147-4-33.ec2.internal.warc.gz"}
Euler's Method on the TI-86
Given a differential equation, say y' = 3x-y, here is how to make the TI-86 draw an Euler's method solution curve for that differential equation.
1. Put the calculator into Differential Equation mode: Press [2nd] [MODE] and select DifEq.
2. We must translate the differential equation: In place of the function (y in this case), write Q1, and in place of the variable (x in this case), write t. So we get Q1' = 3t-Q1, or as the calculator wants it, Q'1=3t-Q1.
3. Now press [GRAPH][Q'(t)=] and enter our differential equation from the last step on the first line.
4. Press [EXIT][MORE][FORMT] and make sure that, at the bottom of the screen, both Euler and FldOff are selected.
5. Press INITC and enter your initial condition. If the initial condition was y(1)=4, then since t is playing the role of x and Q1 is playing the role of y, we must set tMin=1 and QI1=4.
6. Press AXES and tell the calculator which axes to graph. We will always use x=t and y=Q1 in this class.
7. Finally, press [WIND] to select a suitable window:
1. Set x and y ranges as usual.
2. Set tMin to match what you gave in INITC above. Usually you will want the range for x and t to be the same.
3. tStep sets how often points are actually drawn. I'll pick 0.1 for this example.
4. EStep, at the bottom of the window screen, sets how many Euler steps are made for each point actually plotted. I'll pick 1.
8. Then press [GRAPH].
Last Modified December 7, 1998. Prof. Janeba's Home Page | Send comments or questions to: mjaneba@willamette.edu
Department of Mathematics | Willamette University Home Page
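The same iteration the calculator performs can be reproduced in a few lines of Python (my sketch, not part of the original handout), using the handout's equation y' = 3x - y, initial condition y(1) = 4, and tStep = 0.1 with EStep = 1:

```python
# Euler's method for y' = f(x, y), mirroring the TI-86 setup above.
def euler(f, x0, y0, step, n_steps):
    x, y = x0, y0
    points = [(x, y)]
    for _ in range(n_steps):
        y += step * f(x, y)   # one Euler update: y_{k+1} = y_k + h * f(x_k, y_k)
        x += step
        points.append((x, y))
    return points

# tMin = 1, QI1 = 4, tStep = 0.1; ten steps carries the curve to x = 2.
pts = euler(lambda x, y: 3 * x - y, x0=1.0, y0=4.0, step=0.1, n_steps=10)
for x, y in pts:
    print(f"x = {x:.1f}   y = {y:.4f}")
```

The first update, for example, is y(1.1) = 4 + 0.1*(3*1 - 4) = 3.9, which is what the calculator plots as its first point after the initial condition.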
{"url":"http://www.willamette.edu/~mjaneba/help/TI-86-euler.html","timestamp":"2014-04-21T02:10:56Z","content_type":null,"content_length":"3849","record_id":"<urn:uuid:dc5a9c75-5ca2-4378-9839-6ded1e3f943f>","cc-path":"CC-MAIN-2014-15/segments/1397609539447.23/warc/CC-MAIN-20140416005219-00659-ip-10-147-4-33.ec2.internal.warc.gz"}
A TIME-SUBJECT INDEX FOR "AGAINST ALL ODDS: INSIDE STATISTICS"
Edward R. Mansfield, Box 870226, University of Alabama, Tuscaloosa, AL 35487-0226
KEY WORDS: Teaching, Multimedia, Visual aids.
Against All Odds: Inside Statistics is a collection of twenty-six outstanding half-hour video presentations that depict how statistics is used in society. The series was produced under the guidance of Dr. David Moore of Purdue University. The tapes are available from The Annenberg/CPB Collection, and are also offered free to schools that adopt certain textbooks. Using short excerpts from the thirteen hours of video in the classroom can be an excellent way to introduce students of statistics to real problems. Contained in this paper is a detailed listing of the real world setting, the specific statistical items discussed, and a time index so that instructors can go directly to a desired story on a tape. As an example of how to use a tape in class, consider showing students the data and graphs that were used by engineers to determine if temperature was related to joint scarring for a particular complicated machine. Then show a video of the result of what happened when all the data were not considered: the space shuttle Challenger exploding! The impact on the class can be quite dramatic. Those segments of the series that I think can be used effectively in a classroom setting are highlighted. An H to H H H H rating is used. These are my ratings; yours will vary. The portions of the tapes in which Teresa Amabile, series hostess from Brandeis University, teaches are generally not rated, since she is playing the role of the instructor, the task you will be performing in the classroom. Statistical terms are shown in bold type. The names of the experts used in the stories are given. A VCR and a TV monitor are all that is needed. 
Knowing where specific segments start and end is greatly enhanced if your VCR has a "real time" counter as opposed to one that counts revolutions of the videocassette reel. Each tape in the series contains two 30-minute programs. All times in this paper are measured from the front of a tape, beginning at the first video signal on the tape. This signal is always "1-800-LEARNER", the phone number for information on the tape series. When this first appears, reset your VCR counter to "0:00". The time corresponding to "END OF PROGRAM" indicates the point at which the program credits start. Times on the left indicate the time index at which a segment starts; the times given in brackets show the elapsed time for that particular segment. Note, because of differences in the production runs made for duplicating this series, the "gap time" between the two programs on each tape may vary. Your tapes may have different gap times than mine; if so, note the difference and adjust all times on the even numbered programs accordingly. I am not advocating showing any tape in its entirety as a substitute for live instruction. Because we feel that the tapes are valuable, we indicate on our course syllabi which programs correspond to the lectures. We then make all the tapes in the series available to our students in our video viewing center for "after hours viewing". Although we encourage students to watch the tapes, most do not. Enjoy the series!
Overview of Stories
Details for each story are given in the next section.
PROGRAM 1: What is Statistics?
4:48 Domino's Pizza. H H H
Note: Each of the following stories in Program 1 is presented in more detail in later programs.
Describing Data
13:53 Lightning strikes in Colorado. Histogram.
14:33 Growth hormones; heights of children.
15:26 Manatee deaths in Florida waters. Scatterplot.
16:05 Baseball players' salaries. Correlation.
Producing Data
16:50 Chesapeake Bay pollution.
17:54 Aspirin and heart attacks.
18:44 Frito-Lay Potato Chips.
Sampling. 19:18 Political Polls. 20:35 The Space Shuttle Challenger. Joint probability. 21:24 Casino gambling. Conclusion from Data 21:59 Discrimination within the FBI. 23:13 Duracell Batteries. 24:05 Shakespearean poetry. Salem Witch trials. 25:06 Welfare in Baltimore. 27:06 END OF PROGRAM. Tape continues PROGRAM 2: Picturing Distributions 31:58 Charles Minard's nineteenth-century map/graph. 33:55 Lightning strikes in Colorado. Histograms. H H H 44:02 Scheduling TV Programs. H H H 52:17 Lowering health care costs. Histograms, stem and leaf plots. H /2 End of Tape PROGRAM 3: Describing Distributions 3:30 National salary & wage data. 5:55 Wage inequities in Colorado. H 16:07 Hot dogs' composition. H H 21:00 Musical analysis of urine data. H H Tape continues PROGRAM 4: Normal Distributions 33:50 Baby "bust" problem. H 37:40 Aging of our population. H H H 46:07 Boston Bean Stalk Social Club. H 50:38 Baseball statistics. ".400" hitters? H H H 55:48 Ty Cobb, Ted Williams, and George Brett batting averages. H H End of Tape PROGRAM 5: Normal Calculations 3:21 Standardizing the normal distribution with heights of American women. 7:07 GM Proving grounds. H 11:15 Does a new model car meet emission standards using only n=5 prototypes? 14:10 Cholesterol values. H H 19:50 Sizes of military uniforms. H H 22:46 New army helmets. 24:21 Normal Quantile Plot. Tape continues PROGRAM 6: Time Series 32:17 Driving times to work in a control chart. 34:50 The body's internal clock. Time series. H 38:58 National economic statistics. Seasonal variation. 40:03 Ozone levels in the atmosphere. Seasonal var. and negative trend. H H 41:03 Boston Marathon. Running median. 43:38 Brain's reaction time. H 47:27 Wall Street. Diversification reduces risk. Do stock market cycles exist? H H H End of Tape PROGRAM 7: Models for Growth 3:00 Children's growth rates. Sara's height. Alice in Wonderland grows BIG. H H H 9:20 Linear growth, residual patterns, extrapolation problems. 14:00 Gypsy moths and exponential growth.
H H 20:17 Cartoon: The price of a chess board? H H 23:30 Crude oil production. Use of logarithm. Tape continues PROGRAM 8: Describing Relationships 32:25 Manatees vs. motor boats in Florida. H H H 37:55 Cavities vs. fluoride levels. 39:31 1970 Draft lottery. Median trace. H H H 44:04 Obesity: metabolic rate vs. lean body mass. H End of Tape PROGRAM 9: Correlation 1:36 Taste of chocolate cake and price? 3:45 Correlation illustrated with animated graphics. H H 5:42 Identical twins raised apart. H H H 16:22 Baseball players' salaries. H H 20:53 Education in the 60's. Coleman Report and Fred Mosteller. H Tape continues PROGRAM 10: Multi-dimensional Data Analysis 32:28 Chesapeake Bay pollution. H H 45:07 Chernoff faces, "Trees" and "stars." 47:42 Bellcore graphics. Speech synthesis, 3-D plots and higher, brushing. H H End of Tape PROGRAM 11: The Question of Causation 3:01 Cartoon: Causation, Common response, and coincidence examples. H H H H 5:42 Simpson's Paradox. "City University" H H H H 11:50 "Good" bad experiment with newborns. H H 12:47 The Wynder-Graham study. Smoking causes cancer. H H H Tape continues PROGRAM 12: Experimental Design 32:46 Observational study of lobsters. H 36:14 The Physicians Health Study. Aspirin and heart attacks? H H H H 43:39 Is Ribavirin too good to be true? H H H 47:22 Disposition of domestic violence. Milwaukee, Wisconsin police dept. H 53:22 A fictional experiment to illustrate bad experimental practices. H H H H End of Tape PROGRAM 13: Blocking and Sampling 1:39 Dirty laundry, blocked and treated. 4:45 The perfect strawberry. H 13:28 Undercounting in the national census. H H H 19:45 Shere Hite's Women and Love. 20:48 Frito-Lay potato chips. Sampling. H H H H Tape continues PROGRAM 14: Samples and Surveys 32:03 A stratified national sample. Nice graphics. 34:41 A fish story. H 39:36 Bad interviewer techniques. H H H 41:21 National Opinion Research Center (NORC). (A must-see segment.) H H H H 50:30 Sampling distributions: beads in a bowl.
Precision in estimation. H H H End of Tape PROGRAM 15: What is Probability 4:32 Assessing probabilities of injury or death in everyday life. H 10:50 A magician shows randomness. H 17:49 Traffic control in New York City. Simulation model. H H Tape continues PROGRAM 16: Random Variables 33:36 Cheating on an AP Calculus exam. H 34:33 Space Shuttle Challenger. H H H H 43:02 Points in a professional basketball game. 49:10 Earthquakes in California. H 54:45 Distribution of ice cubes used per drink. End of Tape PROGRAM 17: Binomial Distributions 3:46 Boston Celtics Basketball. Free throws are independent in game situations. H H H 6:38 Stocks and T-bills. Expected rates of return. 9:45 A finance class experiment. H H 16:23 Binomial distribution. 17:22 Sickle cell anemia. H 24:25 Quincunx: Falling balls. H H Tape continues PROGRAM 18: The Sample Mean and Control Charts 33:55 Roulette. 35:04 Interviews with gamblers. H H 40:44 The casino always wins. H H H H 47:03 Frito-Lay Potato Chips. Statistical Process Control. H H H 53:41 Dr. W. Edwards Deming. H H H H End of Tape PROGRAM 19: Confidence Intervals 3:11 Political Polls. Margin of error. H H 6:37 Systolic blood pressure. Confidence interval. 11:35 Duracell batteries. H H 18:25 Rhesus monkeys in medical studies. H 21:21 The feeding behavior of marmosets. Tape continues PROGRAM 20: Significance Tests 34:18 Shakespearean Poetry. H H 49:06 Discrimination within the FBI. H H H End of Tape PROGRAM 21: Inference for One Mean 3:03 The t-distributions: 1908, the Guinness Brewery and William Gosset. 5:55 The National Institute of Standards and Technology. H H 10:33 CI for the mean of PCB concentrations. 13:30 NutraSweet, shelf life of a new cola. H H H 18:19 Paired comparison test of sweetness of cola. 21:08 Autism. H Tape continues PROGRAM 22: Comparing Two Means 33:32 Welfare in Baltimore. H H 45:05 Union Carbide product testing. H H H 51:00 SAT Exams. Can "coaching" help? 55:40 CI for the difference between the means.
H H H End of Tape PROGRAM 23: Inference For Proportions 3:03 Measuring Unemployment Nationwide. The Bureau of Labor Statistics. H 11:58 Safety of City Water. H H H 20:15 The Salem Witch Trials. Tape continues PROGRAM 24: Inference for Two-Way Tables 34:11 Ancient Man. Are Africanus and Robustus different? H H 43:30 Breast Cancer. H H 52:02 Mendel's Peas. H H End of Tape PROGRAM 25: Inference for Relationships 3:32 Are galaxies speeding away from earth? Edwin Hubble's work. H H H H 8:05 Regression using Hubble's original data on 24 galaxies. 14:08 Complications in the Hubble Constant. Rotating 3-D plot illustrates the "Swiss cheese concept" of the universe. H H Tape continues PROGRAM 26: Case Study 35:49 How the drug AZT was tested and got to market. 36:57 Phase 1: Observational Study. 39:22 Phase 2: a double-blind experiment. The Data Safety Monitoring Board. Confirming the data analysis. 51:30 Getting AZT to patients. Statistical process control in manufacturing. 53:07 A patient's perspective. H H H Safety and efficacy of a new drug. End of Tape End of series. Details of the Stories Reset your VCR counter to 0:00 when the first signal appears on the tape. That signal will be "1-800-Learner". Times in left margin = VCR counter times. Times in brackets = elapsed times for that story segment. PROGRAM 1: What is Statistics? 0:00 1-800-Learner. (Reset your VCR counter to "ZERO" when this signal first appears.) A listing of series contributors follows: Annenberg/CPB Project, Symbolics, among others. 0:43 Against All Odds logo. (An animated, rotating scatterplot. Each program begins with this.) 1:15 Teresa Amabile, series hostess. A study on creativity, designed to determine whether or not competitive rewards make a difference, is used to give an overview of the statistical concerns in experimentation. [ 0:32 ] 4:48 STORY: Domino's Pizza. A study is conducted to determine if deep pan pizza will be a successful new product for their market.
Sensory evaluations by professional tasters, consumer reactions, advertising issues, and test marketing are illustrated. Tom Monaghan, Chairman, Taylor Bond, Sales Info, and Margaret Olson-Cox, Marketing Research Director, present the company's concerns. [ 7:24 ] H H H 12:12 Teresa: How does Statistics fit into the decision process? [ 1:03 ] 13:15 Three parts of the puzzle: 1. Describing data, 2. Producing data, 3. Conclusions from data. Note: Each of the following stories in Program 1 is presented in more detail in later programs. Describing Data 13:53 STORY: Lightning strikes in Colorado. A histogram of voluminous data on lightning strikes illustrates when it is most likely to strike. [ 0:40 ] (See Program 2 for details). 14:33 STORY: Growth hormones. Charts of the heights of children can be used to determine if synthetic growth hormones can increase the rate of a child's growth. [ 0:53 ] (See Program 7 for details). 15:26 STORY: Manatee deaths in Florida waters. Comparing the number of motor boat registrations with the number of manatee deaths in a scatterplot indicated a clear positive relationship. Practices were implemented that could help the survival of these water mammals. (See Program 8 for details). [ 0:39 ] 16:05 STORY: Baseball players' salaries. The number of home runs is positively correlated with a player's salary. (See Program 9 for details). [ 0:45 ] Producing Data 16:50 STORY: Chesapeake Bay pollution. Experiments lead to results that will help clean up the bay. [ 1:04 ] (See Program 10 for details). 17:54 STORY: Aspirin and heart attacks. Taking simple aspirin might help reduce the number of heart attack victims. (See Program 12 for details). [ 0:50 ] 18:44 STORY: Frito-Lay Potato Chips. Sampling is used at many stages of production to ensure a high-quality product. [ 0:34 ] (See Programs 13 and 18 for details). 19:18 STORY: Political Polls. Polls were used extensively in the 1988 presidential election to measure public opinion.
(See Program 19 for details). [ 1:17 ] 20:35 STORY: The Space Shuttle Challenger. Although each o-ring joint had a high probability of working properly, the joint probability that all six would work was much lower. [ 0:49 ] (See Program 16 for details). 21:24 STORY: Casino gambling. Although individual bettors will have widely ranging results, the house does a profitable business every day of the year. (See Program 18 for details). [ 0:35 ] Conclusion from Data 21:59 STORY: Discrimination within the FBI. Minority agents were not given opportunities for advancement. (See Program 20 for details). [ 1:14 ] 23:13 STORY: Duracell Batteries. How do they determine if their batteries really do have a longer life? (See Program 19 for details). [ 0:52 ] 24:05 STORY: Shakespearean poetry. Can statistics help determine if a newly found poem was actually written by Shakespeare? (See Program 20 for details). STORY: Salem Witch trials. The accused witches and their accusers lived in different parts of Salem. Was there political persecution? (See Program 23 for details). [ 1:01 ] 25:06 STORY: Welfare in Baltimore. Can women participating in a special training program earn more money than women in the existing program? (See Program 22 for details). [ 0:52 ] 25:58 Teresa: Closing comments about the big picture of Statistics. 27:06 END OF PROGRAM. Tape continues PROGRAM 2: Picturing Distributions 31:26 Against All Odds logo 31:58 Teresa: Charles Minard's nineteenth-century map/graph shows the decline in the size of Napoleon's army. Lesson Objectives: 1. Histogram, 2. Shape of distribution, 3. Center and Spread, 4. Stem plot. 33:55 STORY: Lightning strikes in Colorado. Histograms are used to digest vast amounts of data to help determine when and where lightning is more likely to strike. Raul Lopez, NOAA meteorologist. [ 2:31 ] H H H 36:26 Teresa: Mechanics of constructing a histogram.
[ 2:11 ] 38:37 Raul Lopez: Outliers helped to identify that "first flashes" tend to occur at irregular times due to the geologic structure of the area. [ 2:53 ] 41:30 Teresa: Shapes of distributions and the location of the center. [ 2:32 ] 44:02 STORY: Scheduling TV Programs. A small independent TV station, WSBK, TV 38 Boston, uses statistics for "counter programming", i.e. targeting the audiences that will not be attracted to the major networks during a particular time slot. [ 4:36 ] H H H 48:38 Teresa: Comparing histograms of the ages of TV viewers can help to determine if a sitcom or action series is more likely to attract a certain age group. [ 1:43 ] H H 50:21 Teresa: Important considerations in constructing histograms: equal-width intervals and an appropriate number of classes. [ 1:56 ] 52:17 STORY: Lowering health care costs. Medical examples are used to illustrate histograms having different spreads, and the construction of stem and leaf plots. Peter Van Etten, New England Med. Center. [ 5:19 ] H /2 57:36 END OF PROGRAM. End of Tape PROGRAM 3: Describing Distributions 0:00 1-800-Learner, Program tease, etc. 1:11 Against All Odds logo. 1:43 Teresa: Pay Inequities. Lesson Objectives: 1. Mean & median, 2. Box Plots, 3. IQR & Std. Dev. 3:30 Teresa: National salary & wage data. Graphics used to compare mean & median. Why do salaries for men and women differ? [ 2:25 ] 5:55 STORY: Wage inequities. Concerns in Colorado Springs over comparable pay for men and women in city government. The mayor and city employees discuss the effects. [ 5:29 ] H 11:24 Teresa: Mechanics for calculating mean, median, and five-number summary. [ 4:43 ] 16:07 STORY: Hot dogs' composition. How do you measure the calories in hot dogs? By measuring protein, fat, and water content. Sidney Shifman, Food Res. Labs. [ 2:11 ] H H 18:18 Consumer Reports data used to compare three types of hot dogs by using Box Plots. [ 1:37 ] 19:55 Teresa: Mechanics for calculating IQR.
[ 1:05 ] 21:00 STORY: Musical analysis of urine data. Urine data for a newborn baby is set to music. If a particular metabolite value is out of range, a bad note is played. Prof. Charles Sweeley & Prof. John F. Holland, Biochemists, Mich. St. Univ. [ 3:57 ] H H 24:57 Teresa: Mechanics for calculation of s. 26:58 Summary: Pictures are important! 27:33 END OF PROGRAM. Credits, Logos, etc. Tape continues PROGRAM 4: Normal Distributions 31:55 Against All Odds logo. 32:22 Teresa: Lesson Objectives: 1. Density Curves, 2. Normal Curves, 3. The 68 - 95 - 99.7 Rule, 4. Standardization. 33:50 STORY: Baby "bust" problem. Changes in age distributions over time spell trouble for the Social Security System. Prof. William Hsiao, Harvard School of Public Health. [ 3:50 ] H 37:40 Teresa: Aging of our population. Creating density curves for the US population for 1930 & 2075. [ 2:10 ] 39:50 Location of median & mean on density curves. (Good graphics.) [ 2:10 ] H H H 42:00 Examples of variables that may follow Normal Distributions found in Teresa's "home movies": children's heights, class arrival times, test scores, mpg & weight of cars, etc. [ 2:29 ] 44:29 Effect of changing μ and σ in the Normal Distribution. [ 1:38 ] 46:07 STORY: Boston Bean Stalk Social Club. A woman's height must be in the top 2 to 3% for membership. [ 2:11 ] H 48:18 Teresa: Graphical explanation of the 68 - 95 - 99.7 rule. [ 2:20 ] 50:38 STORY: Baseball statistics. Where are the ".400" hitters today? The mean batting averages remain consistent over time, μ = .260; but σ is getting smaller. (Good baseball clips). Stephen Jay Gould, Ph.D., Harvard [ 5:10 ] H H H 55:48 Teresa: Standardizing the normal distribution using Ty Cobb, Ted Williams, and George Brett batting averages. [ 1:32 ] H H 57:20 Teresa: Closing comments. 57:45 END OF PROGRAM End of Tape PROGRAM 5: Normal Calculations 0:00 1-800-Learner. 1:11 Against All Odds logo. 1:43 Teresa: Sizes of clothes. Lesson Objectives: 1. Relative Frequencies, 2.
Percentile, 3. Quantile Plots. 3:21 Teresa: Standardizing the normal distribution illustrated with heights of American women. [ 1:30 ] 4:51 The standard normal table. Finding probability from z. 7:07 STORY: GM Proving grounds. Tests for nitrogen oxide emissions using a dynamometer. x = grams/mile. Percent of new cars that fail must be less than 40%. Harold Haskew, GM Engineer and Tom Lorenzen, GM Statistician. [ 4:08 ] H 11:15 Teresa: GM must decide if a new model car meets emission standards using only n=5 prototypes. [ 1:09 ] 12:24 STORY continued: Actual test drives. [ 1:21 ] 13:45 Teresa: Calculating the area between two values of cholesterol. [ 0:25 ] 14:10 STORY: Cholesterol values. A campaign is targeted at the borderline risk group; those having readings of 200 to 250. How can these individuals reduce their heart attack risk? William Castelli, MD. [ 4:07 ] H H 18:17 Teresa: What percentage are in the borderline risk group for cholesterol with μ = 213 and σ = 48.4? [ 1:33 ] 19:50 STORY: Sizes of military uniforms. Anthropologists measured many characteristics of US soldiers in order to determine the needed distribution of sizes for new uniforms and equipment. Robert Walker & Bruce Bradtmiller, Anthropologists. [ 2:56 ] H H 22:46 Teresa: Calculating sizes for new helmets. The normal table backwards. [ 1:35 ] 24:21 Teresa: Normal Quantile Plot. How do you determine if a population is normal? [ 2:09 ] 26:30 Teresa: Closing Comments. 27:03 END OF PROGRAM. Tape continues PROGRAM 6: Time Series (add 0:42) 30:20 Against All Odds logo. 30:56 Teresa: Lesson Objectives: 1. Cycles and Trends, 2. Seasonal Variation, 3. Smoothing Data, 4. Seeing Isn't Believing. 32:17 Teresa: Driving times to work shown in a control chart. [ 2:33 ] 34:50 STORY: The body's internal clock. Comparing the body's cycle to the day-night cycle using time series. Charles Czeisler, MD, Brigham & Women's Hosp. [ 2:22 ] H 37:12 Teresa: Graphics illustrating cycles. [ 0:51 ] 38:03 The "light-dark" cycle.
Bright light therapy. [ 0:55 ] 38:58 Teresa: National economic statistics contain seasonal variation. [ 1:05 ] 40:03 Teresa: Ozone levels in the atmosphere. Seasonal variation and negative trend. [ 1:00 ] H H 41:03 Teresa: Boston Marathon. Smoothing out cycles and trends using a running median on winning times in the Boston Marathon. [ 2:35 ] 43:38 STORY: Brain's reaction time. Measuring the time until a surprise reaction while a subject reads unusual sentences. Gregory McCarthy, Ph.D., Psychologist. [ 2:19 ] H 45:57 Teresa: A plot of means for several trials smoothes the data to show the estimated reaction time. [ 1:30 ] 47:27 STORY: Wall Street. Predictions on the floor of the New York Stock Exchange. 48:55 Dr. Burton Malkiel, Yale: Why diversification reduces risk. 50:10 Teresa: Can cycles in the stock market be predicted? Peter Eliades answers "Yes." 51:51 Dr. Burton Malkiel answers "No." [ 7:29 ] H H H 54:56 Teresa: How is a pattern discovered? Does it repeat after we discover it? [ 1:34 ] H 56:30 END OF PROGRAM. End of Tape PROGRAM 7: Models for Growth 0:00 1-800-Learner, etc. 1:09 Against All Odds logo 1:40 Teresa: Flowers growing. Lesson Objectives: 1. Linear Growth, 2. Exponential Growth, 3. Exponential Beats Linear Growth. 3:00 STORY: Children's growth rates. Alice in Wonderland grows BIG. Growth hormones were used for five-year-old Sara when her growth rate was determined to be deficient. Edward Reiter, MD, Bay State Med. Center. [ 4:57 ] H H H 7:57 14-year-old Jason has a similar problem but with different results. John Crigler, MD, Endocrinology Div., Children's Hospital. [ 1:23 ] 9:20 Teresa: Linear growth defined. Ticket sales for a movie. Eye-balling the slope. Residual patterns can suggest curvature. Extrapolation problems. [ 4:40 ] 14:00 STORY: Gypsy moth infestations result from exponential growth. Radiation of adult males may help to reduce the effects of the problem. Chuck Schwalbe, Dir. USDA Otis Lab. and Andrew Liebhold, Ph.D., Entomologist, Univ.
of Mass. [ 5:58 ] H H 19:58 Teresa: Exponential growth: multiply by a fixed amount. [ 0:19 ] 20:17 STORY: (Cartoon) The price of a chess board? "A grain of rice on the first square, double for the next, and so on." [ 1:23 ] H H 21:40 Teresa: Exponential growth will always surpass linear growth. [ 1:50 ] 23:30 Crude oil production for 100 years. Use the logarithm to re-express exponential growth as linear growth. [ 1:10 ] 24:40 Use residuals to "see" the problem. [ 1:50 ] 26:30 Teresa: Summary comments. 27:03 END OF PROGRAM Tape continues PROGRAM 8: Describing Relationships 30:25 Against All Odds logo. 30:57 Teresa: Lesson Objectives: 1. Scatterplot, 2. Categorical Variables, 3. Regression Line. 32:25 STORY: Manatees versus motor boats in Florida waterways. Tom O'Shea, Wildlife Biologist. [ 2:33 ] H H H 34:58 Teresa: Scatterplots of the number of manatee deaths and motor boat registrations. [ 2:57 ] 37:55 Teresa: Cavities vs. fluoride levels. Scatterplot with points identified by cities being either small or large. [ 1:36 ] 39:31 1970 Draft lottery problem. Median trace helps identify the non-random pattern. [ 2:40 ] H H H 42:11 Teresa: Least squares regression line illustrated with graphics. [ 1:53 ] H 44:04 STORY: Obesity problems. Resting metabolic rate versus lean body mass. C. Wayne Callaway, MD, Endocrinologist and Stanley Heshka, Ph.D., St. Luke's Roosevelt Hospital. [ 3:17 ] H 47:21 Teresa: Mechanics for calculating the regression line. [ 3:28 ] 50:49 STORY continued: Predicted values tend to be higher than the actual metabolic rate. The dieting process lowers the metabolic rate. [ 2:55 ] 53:44 Teresa: Cautions to consider in regression. 56:15 END OF PROGRAM. End of Tape PROGRAM 9: Correlation 0:00 1-800-Learner. 1:06 Against All Odds logo 1:36 Teresa: Is there an association between the taste of chocolate cake and its price? 3:45 Correlation illustrated with animated graphics. [ 1:14 ] H H 4:59 Lesson Objectives: 1. Interpreting correlation, 2. Deriving r, 3.
Correlation and regression. 5:42 STORY: Identical twins raised apart. The annual "twins" convention in Minneapolis. Thomas Bouchard & Nancy Segal, Minn. Study of Twins Reared Apart, Prof. David Lykken, Univ. of Minn. [ 4:03 ] H H H 9:45 Teresa: Scatterplot for twins raised apart. [ 1:43 ] 11:28 STORY continued: Explanation of associations. [ 1:30 ] 12:58 Teresa: Mechanics for calculating r. [ 2:47 ] H 15:45 Illustration of r for a parabolic curve. 16:22 STORY: Baseball players' salaries. The "lively ball" controversy. The association between home runs and strikeouts, and between salaries and home runs. Daniel Seligman, Fortune Magazine. [ 3:28 ] H H 18:22 Calculation of r for Home Runs vs. Salary. [ 1:28 ] 19:50 r-square, the percentage of variation of y explained by x. [ 1:03 ] 20:53 STORY: Education in the 60's. A social science study in the '60s, the Coleman Report, compared black and white schools. What variables were related, and how strong were the relationships? Harold Howe, Commissioner of Education, '66-'68 and Fred Mosteller, Harvard Statistician. [ 2:30 ] H 23:23 Teresa: Simplified example of the study. 24:29 STORY continued: Interpretation by the experts involved in the study. [ 3:18 ] H 26:48 Teresa: Closing Comments. 27:27 END OF PROGRAM. Tape continues PROGRAM 10: Multi-dimensional Data Analysis (add 1:10) 30:30 Against All Odds logo 31:02 Teresa: Chesapeake Bay. Lesson Objectives: 1. Review. 2. Multi-dimensional data. 3. Human/Computer Interaction. 32:28 STORY: Chesapeake Bay pollution. How healthy is the Chesapeake Bay? Measurements of organisms are made on samples of bottom mud, and of dissolved oxygen levels in the water. A. Fred Holland, Ph.D. & Anna Shaughnessy, Biologists, and Hal Wilson, Statistician, Versar, Inc., Michael Hirshfield, Maryland Dept. of Natural Resources. [ 9:52 ] H H 42:40 Measuring the effect of a paint factory.
45:07 Teresa: Some multi-dimensional plots: Chernoff faces are used for determining if dollar bills are counterfeit. "Trees" and "stars" are used to display data. [ 2:35 ] 47:42 STORY: Bellcore graphics. Computer and graphical techniques used by Bellcore: Speech synthesis, three-dimensional plots and higher, brushing techniques. Paul Tukey, Bellcore. [ 8:22 ] H H 56:04 Teresa: Closing Comments. 56:48 END OF PROGRAM. End of Tape PROGRAM 11: The Question of Causation 0:00 1-800-Learner, etc. 1:05 Against All Odds logo 1:36 Teresa: Lesson Objectives 1. Analyzing Association. 2. Simpson's Paradox. 3. Causation. 3:01 STORY: (Cartoon) Causation, Common response, and coincidence examples. [ 1:57 ] H H H H 4:55 Teresa: Lurking variables. [ 0:47 ] 5:42 STORY: Example of Simpson's Paradox. "City University" seems to have sex discrimination in admissions to its two professional schools. 6:52 Teresa: Data for the City University problem. 8:11 City University's Business School compared to its Law School. 10:08 Teresa: Interpretation of the Simpson Paradox. [ 4:26 ] H H H H 11:50 Teresa: An intentionally unrealistic experiment with newborn babies used to test the effect of smoking on cancer. [ 0:57 ] H H 12:47 STORY: The Wynder-Graham study. This classic work first suggested that smoking causes cancer. How can causation be established? Richard Overholt, MD, Ernest Wynder, MD, Dwight Harken, MD, Irwin Miller, Statistician, Lawrence Garfinkel, Dietrich Hoffman, MD, all of the American Health Foundation; Prof. Allan Brandt, Harvard, and Donald Shopland, 1964 Surgeon General's Report. (Excellent but lengthy.) [ 13:22 ] H H H 26:09 Teresa: Comments on causation. 27:00 END OF PROGRAM. Tape continues PROGRAM 12: Experimental Design (+0:33) 30:30 Against All Odds logo 31:02 Teresa: Lesson Objectives: 1. Experiment, 2. Confounding, 3. Randomized Comparative Exper. 32:46 STORY: Observational study of the behavior of lobsters. Diane Cowan & Jelle Atema, Boston Univ. Marine Program.
[ 2:41 ] H 35:27 Teresa: Explanation of an experiment. [ 0:47 ] 36:14 STORY: The Physicians Health Study. Could taking aspirin reduce the occurrence of heart attacks? A controlled double-blind experiment using a treatment and a placebo. Ethical concerns are discussed. Bernard Katz, MD, Charles Hennekens, MD, Dir. of Physicians Health Study. 40:10 Teresa: Subjects were randomly placed in two groups to avoid confounding. 41:31 STORY continued: A 47% reduction in heart attacks resulted for those receiving aspirin. [ 7:06 ] H H H H 43:20 Teresa: Two groups must be equivalent in order to avoid bias. [ 0:19 ] 43:39 STORY: Is Ribavirin too good to be true? A study for patients having a pre-AIDS condition yielded biased results because the healthiest patients received the drug while the sickest received the placebo. [ 1:01 ] H H H 44:40 Teresa: The study failed to randomly assign subjects to the two groups. [ 0:35 ] 45:15 Teresa: Illustration of how to randomize. [ 2:07 ] H 47:22 STORY: Disposition of domestic violence. A Milwaukee, Wisconsin police department experiment to determine the best method to handle domestic violence cases. Larry Sherman, Ph.D. [ 6:00 ] H 53:22 STORY: A fictional experiment to illustrate bad experimental practices. [ 2:19 ] H H H H 55:41 Teresa: Recap of good experimental practices. 56:44 END OF PROGRAM. End of Tape PROGRAM 13: Blocking and Sampling 0:00 1-800-Learner. 1:07 Against All Odds logo. 1:39 Teresa: Dirty laundry is blocked (cotton, synthetics) and treated (warm or cold water). Lesson Objectives: 1. Blocking 2. Sampling 3. Census 4:45 STORY: The perfect strawberry. Horticulturists use a randomized complete block design to determine the best berry for market. Olivia Mageau, Horticulturist, & Gene Galletta, Ph.D., Geneticist. 8:43 Teresa: Reasons for blocking. 9:57 STORY continued: The evaluation of the berry data. [ 6:53 ] H 11:38 Teresa: Reasons for multi-factor experiments. [ 1:50 ] 13:28 STORY: Undercounting in the national census.
This illustrates the difficulties of getting an exact count. Barbara Bailar, Statistician, & Peter Bounpane, US Census Bureau. [ 5:08 ] H H H 18:36 Teresa: Why a sample instead of a census? 19:45 Shere Hite's Women and Love, 1987. An example of extremely biased sampling due to voluntary response. [ 2:12 ] 20:48 STORY: Frito-Lay potato chips. Sampling is used at many steps in the production of potato chips to ensure a high-quality product. [ 5:49 ] H H H H 26:37 Teresa: Closing Comments. 27:08 END OF PROGRAM. Tape continues PROGRAM 14: Samples and Surveys 30:45 Against All Odds logo 31:15 Teresa: Lesson Objectives: 1. Stratified Random Sample, 2. Getting It Right, 3. Sampling Distribution. 32:03 Teresa: A stratified national sample. Graphics nicely illustrate the process. [ 2:38 ] 34:41 STORY: A fish story. A survey of fishermen and the types of fish caught is used to determine if some species are endangered. John Witzig, Nat'l Marine Fisheries Service. [ 3:29 ] H 38:10 Teresa: Problems in surveying people. Both the Literary Digest and George Gallup predicted the 1936 presidential election results. Why was Gallup closer? [ 1:26 ] H 39:36 STORY: Examples of bad interviewer techniques. [ 1:45 ] H H H 41:21 STORY: National Opinion Research Center (NORC) sampling procedures. Statistically sound sample selection, careful question design, and skillful interviewing are illustrated. James Davis and Tom Smith, Dirs., General Social Survey, and Leigh Brandon, NORC. (A must-see segment.) [ 9:09 ] H H H H 50:30 Teresa: Sampling distributions illustrated using beads in a bowl. Large n implies more precision in estimation. (Nicely done.) [ 5:57 ] H H H 56:27 Teresa: Closing Comments. 56:51 END OF PROGRAM End of Tape PROGRAM 15: What is Probability 0:00 1-800-LEARNER 1:06 Against All Odds logo. 1:40 Teresa: Lesson Objective: 1. Relative Frequency, 2. Randomness, 3. Sample Space, 4. Probability Rules. 4:32 STORY: Assessing probabilities of injury or death in everyday life.
What are the chances of dying in an automobile accident at some point in your life? Baruch Fischhoff, Carnegie Mellon. [ 4:54 ] H 9:26 Teresa: Relative frequency of heads in many flips of a coin. [ 1:24 ] 10:50 STORY: Persi Diaconis, magician & professor, discusses the concepts of randomness. [ 3:50 ] H 14:40 Teresa: Demonstration of sample spaces and the basic rules of probability. [ 3:09 ] 17:49 STORY: Traffic control in New York City. A probability-based traffic simulation model is used to avoid "spillback" and "gridlock" at key intersections. Mark Yedlin. [ 6:00 ] H H 23:49 Teresa: Simplified example of traffic control used to illustrate basic rules of probability. [ 3:23 ] 27:12 END OF PROGRAM. Tape continues PROGRAM 16: Random Variables 31:38 Against All Odds logo 32:10 Teresa: Lesson Objective: 1. Multiplication Rule, 2. Random Variables, 3. Mean and standard deviation of random variables. 33:36 STORY: In the movie Stand and Deliver, several students are accused of cheating for giving the same wrong answer on an AP Calculus exam. The events were not independent. [ 0:57 ] H 34:33 STORY: Space Shuttle Challenger. Defects in NASA's reliability program contributed to the catastrophic failure. Note: P(one joint fails) = .023. 37:53 Teresa: Using the multiplication rule for finding the joint probability of all 6 field joints not failing. 39:15 STORY continued: The backup o-ring for each field joint did not behave independently of the primary. Cold weather contributed to failure. (The short version in PROGRAM 1 may be more effective). [ 8:29 ] H H H H 43:02 Teresa: Addition rules for disjoint events and independent events. The two types of random variables: discrete and continuous. Points in a professional basketball game. [ 6:08 ] 49:10 STORY: Earthquakes in California. Estimating the probability of occurrence along the San Andreas fault.
Lucille Jones and Kerry Sieh, US Geological Survey. [ 5:35 ] H 54:45 Teresa: Distribution of the number of ice cubes used per drink and the mechanics of calculating the mean and standard deviation. [ 2:49 ] 57:34 END OF PROGRAM. End of Tape PROGRAM 17: Binomial Distributions 0:00 1-800-Learner 1:03 Against All Odds logo 1:33 Teresa: Insurance company concerns. Lesson Objective: 1. Law of Large Numbers, 2. Rules for means and variances of random variables, 3. Binomial distribution, 4. Binomial means and standard dev. 2:22 Teresa: Law of Large Numbers. 3:46 STORY: Boston Celtics Basketball. The myth of "small numbers" illustrated with "streak shooting". Tom Gilovich, Cornell Univ., did research that showed that even free throw shots are independent events in game situations. [ 2:52 ] H H H 6:38 Teresa: Stocks and T-bills. Expected rates of return are used to illustrate the rules for the means of random variables. [ 3:07 ] 9:45 STORY: A finance class experiment. A class at Cretan College learns the reasons for maintaining diversification of investments. [ 4:53 ] H H 14:38 Teresa: Variance implies risk. Rules for variances of random variables. [ 1:43 ] 16:23 Teresa: Binomial distribution. [ 0:59 ] 17:22 STORY: Sickle cell anemia. Example of dichotomous outcomes each having a fixed probability of occurring. Dr. Orah Platt, Boston Children's Hospital, and Dr. Marilyn Gaston, Nat'l Institutes of Health. [ 4:40 ] H 22:02 Teresa: Calculating probabilities of X children having sickle cell anemia in a group of n = 6. [ 2:23 ] 24:25 STORY: Quincunx. Balls falling through rows of pegs illustrate that the binomial can look like a normal distribution. [ 1:19 ] H H 25:44 Teresa illustrates conditions under which the normal approximation works. [ 1:19 ] 27:03 END OF PROGRAM. Tape continues PROGRAM 18: The Sample Mean and Control Charts 32:13 Against All Odds logo 32:44 Teresa: Gambling. Lesson Objective: 1. Sample means, 2. Central limit theorem, 3. Control charts. 33:55 Teresa: Roulette.
Finding the mean winnings when betting on "red" many times. [ 1:09 ] 35:04 STORY: Interviews with gamblers. How and why casinos keep players coming back. Steven Norton & John Belisle, Resorts International. [ 5:40 ] H H 40:44 Teresa: Central limit theorem. Illustrated from the gambler's point of view with 50 $1 bets; then from the casino's side with 1000 and 100,000 $1 bets. [ 6:19 ] H H H H 47:03 STORY: Frito-Lay potato chips. How and why Statistical Process Control is used in the production of chips. Anthony Gallonio and Don Strickert, Quality Assurance. [ 3:09 ] H H H 50:12 Teresa: Control charts. Construction and use of control charts for the salt content of potato chips. 52:39 Decision rules for control charts. [ 3:29 ] 53:41 STORY: Dr. W. Edwards Deming. How Japan reversed its economy by using Dr. Deming's ideas of quality. What American management MUST do to survive. Prof. Robert Hayes, Harvard Business School, and Frank Fagan, SQC Analyst. (A must-see segment.) [ 3:55 ] H H H H 57:36 Teresa: Closing comments. 58:05 END OF PROGRAM. End of Tape PROGRAM 19: Confidence Intervals 0:00 1-800-Learner 1:05 Against All Odds logo. 1:37 Teresa: Lesson Objectives: 1. Confidence Intervals, 2. Trade-off, 3. Sample Size. 3:11 STORY: Political Polls. Polls were used extensively in the 1988 Presidential election. What are polls, and why do their results vary? What is the margin of error? Daniel Yankelovich, public opinion analyst, and Warren Mitofsky, CBS News Election Unit. [ 3:26 ] H H 6:37 Teresa: Systolic blood pressure used to construct a confidence interval. What assumptions are made? Mechanics of finding the margin of error. Meaning of being "95% confident". [ 4:58 ] 11:35 STORY: Duracell batteries. A demonstration of how the lives of batteries are actually measured. Richard Cataldi and Larry Morgan, Ultra Technologies. [ 2:38 ] H H 14:13 Teresa: How the normal table is used to find any degree of confidence. 
What effect does the sample size have on the width of the confidence interval? [ 4:12 ] 18:25 STORY: The use of Rhesus monkeys in medical studies. Sample sizes should be conservative in order not to waste large numbers of animals. Prof. Melinda Novak, U. Mass, and Andrew Petto, New England Primate Research Center. [ 2:56 ] H 21:21 The feeding behavior of marmosets. [ 2:46 ] 24:07 Teresa: Mechanics of calculating the sample size needed given a desired width and amount of confidence. Warnings regarding the use of C.I.'s. [ 3:03 ] 27:10 END OF PROGRAM Tape continues PROGRAM 20: Significance Tests 31:09 Against All Odds logo. 31:39 Teresa: Are seat belts effective? Lesson Objectives: 1. Significance tests, 2. P-value, 3. Statistically significant. Null and alternative hypotheses. 34:18 STORY: Shakespearean Poetry. Testing a hypothesis to determine if a newly found poem was actually written by William Shakespeare. Ron Thisted, Univ of Chicago. [ 6:42 ] H H 41:00 Teresa: Mechanics of a hypothesis test. The calculation and interpretation of the p-value for the Z-distribution. [ 8:06 ] 49:06 STORY: Discrimination within the FBI. Minority agents were not given an opportunity for advancement. A comparison of the points of view of the statistical experts from both sides of the suit is given. Matt Perez, FBI, Gary Lafree, Univ of New Mexico, and Rebecca Klemm, Klemm Analysis Group. [ 6:54 ] H H H 56:00 Teresa: Closing comments. Data must be meaningful to achieve meaningful results. 57:26 END OF PROGRAM End of Tape PROGRAM 21: Inference for One Mean 0:00 1-800-Learner, etc. 1:04 Against All Odds Logo. 1:36 Teresa: Over-specialized gadgets. Lesson Objectives: 1. t-Procedures, 2. t-Distributions, 3. Paired Comparisons. 2:40 Facing the fact that σ is rarely known, z statistics are usually just "over-specialized" gadgets. 3:03 The t-distributions. 1908, the Guinness Brewery and William Gosset. [1:15] 4:18 Comparing the t distributions to the standard normal. 
[1:37] 5:55 STORY: The National Institute of Standards and Technology. Stanley Rasberry and Susannah Schiller, NIST. Customers of NIST purchase materials that they use as "standard" reference materials. NIST certifies that bottles contain what the label says. NIST uses stratified samples of bottles to construct confidence interval estimates for the properties of the materials. [4:38] H H 10:33 Teresa: Construction of a confidence interval for the mean level of PCB concentrations based on a sample of 10 bottles. [2:17] 12:50 Teresa: Paired Comparisons. [0:40] 13:30 STORY: NutraSweet. Taste tests are used to test the shelf life of a new cola that contains this artificial sweetener. Product testing is a constant, on-going process. Thomas Carr, Research Statistician; Suzanne Pecore, Sensory Evaluation; Barry Howler, Application Technology. [4:49] H H H 18:19 Teresa: A paired comparison test of the sweetness of the cola. [2:01] 20:20 Benefits of t-tests. [0:48] 21:08 STORY: Autism. The Vineland test for social development. Dr. Fred Volkmar, Yale Child Study Center. [4:10] H 25:18 Teresa: Confidence interval for Age Equivalent Scores based on the t-distribution. [1:50] 27:08 END OF PROGRAM Tape continues PROGRAM 22: Comparing Two Means 31:40 Against All Odds logo. 32:12 Teresa: Lesson Objectives: 1. Two-sample problems, 2. C.I. for Two Means, 3. Significance Test for two means. 33:32 STORY: Welfare in Baltimore. Can women participating in a special training program, called OPTIONS, earn more money than women in the existing welfare jobs program called WIN? The result played a role in a congressional overhaul of welfare. Daniel Friedlander, M.D.R.C. [5:42] H H 39:14 Teresa: The mechanics of two independent sample tests and confidence intervals. 45:05 STORY: Union Carbide product testing. Is there a difference in the bounce of a new environmentally safe foam and a standard foam? 
Union Carbide tested the new product, Ultra-cell, several ways: support test, combustion test, stretch test, fatigue test, & "bounce" test. Stanley Hager, Research Scientist, and Charles Hendrix, Statistician. [3:45] H H H 48:50 Teresa: The mechanics of the two independent sample t-test. 51:00 STORY: SAT Exams. Can "coaching classes" improve one's performance on the SAT? Donald Powers, E.T.S. and John Katzman, The Princeton Review. 55:40 Teresa: The mechanics of finding a CI for the difference between the means of the coached group and the non-coached group. The 95% CI is ( -15, +91 ). [2:23] H H H 58:03 END OF PROGRAM End of Tape PROGRAM 23: Inference For Proportions 0:00 1-800-Learner, etc. 1:07 Against All Odds Logo 1:38 Teresa: Lesson Objectives: 1. Inference on proportions, 2. CI and Significance Tests, 3. Two proportions. 3:03 STORY: Measuring Unemployment Nationwide. The Bureau of Labor Statistics takes samples of individual households to get data on national unemployment. Subgroup estimates are also of interest, but are available at a loss of precision. Janet Norwood, Director of the Bureau of Labor Statistics, Sen. William Proxmire, Joint Economic Committee, and Lawrence Cahoon, Census Bureau. [4:58] H 8:01 Teresa: Mechanics of constructing a CI for a proportion, and a test of significance. 11:58 STORY: Safety of City Water. Did water from contaminated wells in one side of Woburn, Mass. lead to a higher incidence of health problems than in other parts of the city? Marvin Zelen, Harvard School of Public Health. [4:52] H H H 16:50 Teresa: Mechanics of a CI for the difference of proportions for two independent samples. [3:25] 20:15 STORY: The Salem Witch Trials. The accused witches and their accusers lived in different parts of Salem. Was there political persecution? George Cobb, Mount Holyoke College. 23:54 Teresa: Mechanics of a significance test for proportions of two independent samples. 
27:17 END OF PROGRAM Tape continues PROGRAM 24: Inference for Two-Way Tables 31:20 Against All Odds logo. 31:51 Teresa: Feline illness. Lesson Objectives: 1. Two-Way Tables, 2. Relationships between categorical variables, 3. Chi-square test. 34:11 STORY: Ancient Man. Are two categories of prehistoric creatures, Africanus and Robustus, different? Measurements made from discovered skulls regarding dental "scratches & pits" indicate differences. Dr. Fred Grine, Anthropologist, SUNY-Stony Brook. [4:28] H H 38:39 Teresa: Mechanics of the Chi-square test. [4:51] 43:30 STORY: Breast Cancer. Is there a relationship between the patient's age category and the type of treatment given? Vincent Mor, Brown University, and Alan Weitberg, Roger Williams Cancer Center. [4:23] H H 47:53 Teresa: Calculating a P-value for the Chi-square distribution. 52:02 STORY: Mendel's Peas. Were the results too good to be true? R. A. Fisher said, "Yes"; Prof. Robert Bernstein, scientific historian, says no. The problem was due to "how to classify" borderline items. [4:51] H H 56:53 Teresa: Closing Comments. 57:18 END OF PROGRAM. End of Tape PROGRAM 25: Inference for Relationships 0:00 1-800-Learner, etc. 1:03 Against All Odds logo. 1:38 Teresa: The Big Bang. Lesson Objectives: 1. Inference from linear regression, 2. Simple Linear Regression Model, 3. CI and significance tests. 3:32 STORY: Of astronomical interest: Are galaxies speeding away from Earth? Edwin Hubble's early 20th century work used linear regression, with speed vs. distance, to help establish the "Big Bang" theory. The Hubble constant is the slope. Robert Kirshner, Harvard/Smithsonian Center for Astrophysics. [4:33] H H H H 8:05 Teresa: Regression calculations using Hubble's original data on 24 galaxies. 14:08 STORY: Complications in the Hubble Constant. John Huchra, Astronomer at Harvard, discovered that galaxies were not randomly distributed throughout the universe. A rotating 3-D plot illustrates the "Swiss cheese" concept of the universe. 
[2:42] H H 16:50 Teresa: Confidence intervals and significance tests using the original Hubble data. 20:03 STORY: Data from ultrasound pictures of a fetus are used to determine if birth defects may be present. Prediction intervals are used to spot unusual physical measurements. [3:10] H H 23:13 Teresa: Calculation of confidence bands and prediction intervals for a regression line. [3:47] 27:00 END OF PROGRAM Tape continues PROGRAM 26: Case Study 33:49 Against All Odds logo. 34:22 Teresa: A drug to treat AIDS. Lesson Objectives: See how statistics were applied to a "real problem". 35:49 STORY: How the drug AZT was tested and got to market. 36:57 1985, Phase 1: Observational study. Samuel Broder, Director, National Cancer Institute; David Barry, MD, Burroughs Wellcome Drug Co.; and Robert Schooley, Mass. General Hosp. 39:22 Spring 1986, Phase 2: A randomized controlled double-blind experiment. Summer 1986: The Data Safety Monitoring Board convenes. Gail Rogers, Statistician, and Sandra Lehrman, MD, Burroughs Wellcome. Also Robert Machete, Statistician, Data Safety Monitoring Board. 43:33 Sept. 10, 1986: Emergency meeting. "Is there a statistically significant difference between AZT and placebo?" 47:53 Sept. 11-19, 1986: Confirming the data analysis. Robert O'Neill, Statistician, FDA. 51:30 Fall 1986: Getting AZT to patients. Statistical process control in manufacturing. A. R. Peters, QC, Burroughs Wellcome Co. 53:07 A patient's perspective. [4:55] H H H 58:02 Teresa: Final comments. Safety and efficacy of a new drug. 60:00 END OF PROGRAM. [1:58] End of Tape End of series. The volume of work in this video series is awesome and the quality is outstanding. Our profession would be well served if all college students watched these programs. As professors teaching Statistics classes, we can use excerpts from these tapes in class to introduce students to situations in society where Statistics plays a key role. 
Who decided that aspirin can reduce the number of heart attacks? Does it work for anybody? Have Deming speak to your class. "It's so simple, . . . ." Do cycles exist in the stock market? See two experts go head to head. Can you really trust a NORC survey? Can you trust just any ol' survey? How can sampling result in better-tasting potato chips? Free throws: does making the first improve your chances of making the second? The Boston Celtics know the answer. What factors contributed to the space shuttle Challenger disaster? Inquiring minds want to know! Show them!! It's all in the video series, Against All Odds. Use short segments in your classes. Introduce your students to "real world" problems. You don't have the tapes? Call 1-800-LEARNER
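To make the multiplication-rule segment in Program 16 concrete for a class: taking the guide's figure P(one joint fails) = .023 and assuming, as the pre-launch analysis did, that the six field joints fail independently, the chance that all six hold is (1 - .023)^6, about .87, so roughly a one-in-eight chance of at least one joint failing on any launch. A minimal sketch (the class and method names are ours, not from the series):

```java
// Multiplication rule for independent events, as in the Challenger segment.
// Assumes each of n joints fails independently with probability pFail.
public class MultiplicationRule {

    // P(all n joints hold) = (1 - pFail)^n
    public static double pAllHold(double pFail, int n) {
        return Math.pow(1.0 - pFail, n);
    }

    public static void main(String[] args) {
        double p = pAllHold(0.023, 6);
        System.out.printf("P(all six joints hold)  = %.4f%n", p);     // about 0.8697
        System.out.printf("P(at least one fails)   = %.4f%n", 1 - p); // about 0.1303
    }
}
```

As the follow-on segment notes, the independence assumption itself was wrong (the backup o-rings did not behave independently of the primaries), which is exactly the teaching point.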
(JAVA) out of memory error

OK, I'm implementing a data structure called a "trie" to store a dictionary of words. The text file I'm reading in has over 172K words. This, with a good implementation of a trie, should not take more than 20-22MB of memory. For some reason my implementation gives me an OutOfMemoryError. If I use a small dictionary, say 50K words, everything is fine. Any ideas? BTW, I'm passing a HashMap set to add.

class TrieNode {
    TrieNode[] myLinks;
    boolean myIsWord;
    String myWord;

    /** create a new TrieNode (characters a-z and ' ') */
    public TrieNode() {
        myLinks = new TrieNode[Trie.ALPH];
        myIsWord = false;
        myWord = null;
    }
}

public class Trie {
    TrieNode root;
    public static final int ALPH = 27;

    public Trie() {
        root = new TrieNode();
    }

    /** Add a string to the trie.
     *  @param s The string added to the Trie */
    public void add(String s) {
        TrieNode t = root;
        int limit = s.length();
        for (int k = 0; k < limit; k++) {
            int index = index(s.charAt(k));
            if (t.myLinks[index] == null) {
                t.myLinks[index] = new TrieNode();
            }
            t = t.myLinks[index];
        }
        t.myIsWord = true;
        t.myWord = s;
    }

    /** print every word in the trie, one per line */
    public void printTrie() {
        printTrie(root);
    }

    private void printTrie(TrieNode t) {
        if (t != null) {
            if (t.myIsWord) {
                System.out.println(t.myWord);
            }
            for (int k = 0; k < ALPH; k++) {
                if (t.myLinks[k] != null) {
                    // System.out.println("Descend to child " + letter(k));
                    printTrie(t.myLinks[k]);
                }
            }
        }
    }

    /** determine if a word is in the trie (starting at root)
     *  @param s The string searched for
     *  @return true iff s is in trie (starting at root) */
    public boolean contains(String s) {
        TrieNode t = root;
        int limit = s.length();
        for (int k = 0; k < limit; k++) {
            int index = index(s.charAt(k));
            if (t.myLinks[index] == null) return false;
            t = t.myLinks[index];
        }
        return t.myIsWord;
    }

    // convert all unknown symbols to spaces
    public static int index(char ch) {
        int i = (int) (ch - 'a');
        if ((i < ALPH - 1) && (i >= 0)) return i;
        return ALPH - 1;
    }

    public static char letter(int i) {
        if (i == ALPH - 1) return ' ';
        return (char) (i + 'a');
    }
}

I'm no Java expert, but there
may be a limit on how much memory you can use. Hmm... does anyone know how I can track memory consumption while debugging, for example? I'm using JBuilderX. This seems a lot like an error a professor of mine was receiving on a sample program he showed us. He was showing us an example of a Fibonacci series coded in the most memory-inefficient way possible. The Java runtime would throw an out-of-memory error very quickly after the program began. He increased the stack (or maybe heap) size allocated by the Java runtime system and the program ran longer... I forgot how he did it, but he changed some environment variable... try looking into that. Well, I just successfully read in a little bit over 160K words, and the process took about 4 seconds on a P3 1000MHz with 256MB RAM. I tried with 165K and it failed. I can't seem to find the place to raise the stack size in Borland... but isn't the JSDK default stack size 256MB anyway? try adding "-Xmx256m" to your compile-time command line. (or whatever number you want in MB) >>try adding "-Xmx256m" to your compile-time command line. yeah, I've tried that yesterday and it didn't work. I guess something is messed up with my laptop because everything works fine on my (even older) desktop. Thanks for the help though. does the error get thrown from JBuilder, or from the runtime environment? there could be a bug in the version of the JRE you're running. >>does the error get thrown from JBuilder, or from the runtime environment? JBuilder throws the error... sometimes main also throws a fatal exception and it terminates, but that happened less than 25% of the time. Originally Posted by axon >>yeah, I've tried that yesterday and it didn't work. I guess something is messed up with my laptop because everything works fine on my (even older) desktop. are you running the same JVM on both machines? 
You can also try adding "-Xms256m" as a command line option. (you'd probably want a smaller number than 256 though ;) )
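One way to see whether an -Xmx setting actually took effect, and which JVM you are running on each machine, is to ask the runtime directly. A small sketch (not from the original thread; note that -Xmx/-Xms are options to the java launcher at run time, not compiler flags):

```java
// Prints the JVM version and the heap limits the running VM is using.
// Handy for checking whether a -Xmx/-Xms flag was actually picked up,
// and for comparing the JVM on the laptop vs. the desktop.
public class MemCheck {
    public static void main(String[] args) {
        Runtime rt = Runtime.getRuntime();
        long mb = 1024 * 1024;
        System.out.println("JVM version: " + System.getProperty("java.version"));
        System.out.println("Max heap:    " + rt.maxMemory()   / mb + " MB");
        System.out.println("Total heap:  " + rt.totalMemory() / mb + " MB");
        System.out.println("Free heap:   " + rt.freeMemory()  / mb + " MB");
    }
}
```

Run it with and without the flag (java -Xmx256m MemCheck) and compare the "Max heap" line; if JBuilder is launching the program for you, the flag has to go into the project's run configuration rather than the compiler settings.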
Keyword: Bidiagonal

f08kec  Orthogonal reduction of real general rectangular matrix to bidiagonal form
f08kfc  Generate orthogonal transformation matrices from reduction to bidiagonal form determined by f08kec
f08kgc  Apply orthogonal transformations from reduction to bidiagonal form determined by f08kec
f08ksc  Unitary reduction of complex general rectangular matrix to bidiagonal form
f08ktc  Generate unitary transformation matrices from reduction to bidiagonal form determined by f08ksc
f08kuc  Apply unitary transformations from reduction to bidiagonal form determined by f08ksc
f08lec  Reduction of real rectangular band matrix to upper bidiagonal form
f08lsc  Reduction of complex rectangular band matrix to upper bidiagonal form
f08mdc  Computes the singular value decomposition of a real bidiagonal matrix, optionally computing the singular vectors (divide-and-conquer)
f08mec  SVD of real bidiagonal matrix reduced from real general matrix
f08msc  SVD of real bidiagonal matrix reduced from complex general matrix

© The Numerical Algorithms Group Ltd, Oxford UK. 2012
Pine Brook Math Tutor

Find a Pine Brook Math Tutor

...I enjoyed this course and did very well. I also found that the logic course provided me with a solid base for other subjects, as well as for business, because logical thinking is a requirement for making good decisions. I have the experience, the patience, and the knowledge to effectively tutor this subject.
17 Subjects: including linear algebra, algebra 1, algebra 2, logic

...I assess each student prior to starting sessions, during the session midpoints and the end of semester, which allows the student to not only assess his or her own progress but also to assess the need for making changes as needed and to ensure the student is achieving their goal within the time fram...
12 Subjects: including algebra 1, SAT math, prealgebra, algebra 2

...It is very useful in the fields of Engineering, computer science, and the sciences, that is, Biology, Chemistry, Physics, Astronomy, and Geophysics. It enables the vector space to be extended beyond the conventional 3D space, and with the applications of matrix, determinant, mapping, etc, several...
9 Subjects: including algebra 1, algebra 2, calculus, geometry

...Enrichment instruction for parents who feel that average is just not enough. In addition to tutoring, I also give piano lessons and can help students understand music theory. Whether the student is a beginner or advanced, six years old or seventy-six years old, I can help the student reach his or her musical goals.
30 Subjects: including prealgebra, geometry, reading, statistics

...My teaching and tutoring experience encompasses the following subjects: Language and Composition (Essay coaching), American, World, and European Histories, Math up through Algebra, Spanish up to AP level, Psychology, Biology and Earth Sciences, Sociology. I have also helped various students prepare fo...
33 Subjects: including prealgebra, algebra 1, English, Spanish
Find the speed
January 31st 2008, 03:47 PM #1

Plane A and Plane B flew in opposite directions around the Earth (40,000 km). Plane A covered half its distance at a speed of 2,500 km/h and the other half at a speed of 1,000 km/h. Plane B spent half its time at 2,500 km/h and the other half at 1,000 km/h. How long did it take each plane to complete the flight?

OK, so I found the time of Plane A: half the distance being 20,000 km, if I'm correct, Plane A took 28 hours. Problem is I have no idea how to tell how long it took Plane B. Any help please.

January 31st 2008, 04:56 PM #2

d = vt

Plane A:
First half of trip: d = 20000 km, v = 2500 km/h, so $t = \frac{d}{v} = \frac{20000~km}{2500~km/h} = 8~h$
Second half of trip: d = 20000 km, v = 1000 km/h, so $t = \frac{d}{v} = \frac{20000~km}{1000~km/h} = 20~h$
Thus t = 28 hours, as you say.

Plane B:
Let the total trip time be T.
First half of the trip: t = (1/2)T, v = 2500 km/h, so $d_1 = vt = (2500~km/h) \cdot \left ( \frac{T}{2} \right ) = 1250T$
For the second half of the trip: t = (1/2)T, v = 1000 km/h, so $d_2 = vt = (1000~km/h) \cdot \left ( \frac{T}{2} \right ) = 500T$
And we know the total trip distance is d1 + d2 = 40000 km. Thus $40000 = 1250T + 500T$. Solve for T.
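Carrying the reply's last step through: 40000 = 1750T, so T = 40000/1750, which is about 22.86 hours. Plane B beats Plane A's 28 hours because it spends half its time (not half its distance) at the higher speed, so its time-average speed is (2500 + 1000)/2 = 1750 km/h. A quick check (class and method names are ours, not from the thread):

```java
// Finishes the worked problem: Plane B spends half its TIME at each speed,
// so distance covered is 2500*(T/2) + 1000*(T/2) = 1750*T = 40000 km.
public class PlaneB {
    public static double tripTimeHours() {
        double distance = 40000.0;                  // km, once around the Earth
        double avgSpeed = (2500.0 + 1000.0) / 2.0;  // 1750 km/h, time-average
        return distance / avgSpeed;                 // about 22.857 hours
    }

    public static void main(String[] args) {
        System.out.printf("Plane B takes %.2f hours%n", tripTimeHours());
    }
}
```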
further triangle
November 27th 2006, 12:51 PM #1

I want to check my answers to see if I have finally grasped sine and cosine. Triangle BCF: angle F = 60.6°, side BC = 4500 and BF = 4778.7. No, it's not the same one; it sits side-on to the previous triangle in the problem I posted. Cosine rule.
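For what it's worth, one way to sanity-check numbers like these is the sine rule: if BC = 4500 is opposite the 60.6° angle at F, and BF = 4778.7 is opposite the angle at C, then sin C = BF·sin F / BC, about 0.925, giving C close to 67.7° (or its 112.3° supplement). A rough sketch only; since the original figure isn't shown, which side faces which angle is a guess on our part:

```java
// Sine-rule check for triangle BCF: BC / sin F = BF / sin C.
// Assumes BC is opposite angle F and BF is opposite angle C (a guess,
// since the figure from the earlier thread isn't available).
public class TriangleCheck {
    public static double angleCDegrees() {
        double F = Math.toRadians(60.6);
        double BC = 4500.0, BF = 4778.7;
        double sinC = BF * Math.sin(F) / BC;      // about 0.925
        return Math.toDegrees(Math.asin(sinC));   // about 67.7 (or 180 minus that)
    }

    public static void main(String[] args) {
        System.out.printf("angle C = %.1f degrees%n", angleCDegrees());
    }
}
```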
Space is three-dimensional ... or is it? When we spoke to theoretical physicist David Berman in October this year we found out that in fact, we are all used to living in a curved, multidimensional universe. And a mathematical argument might just explain how those higher dimensions are hidden from view. Kaluza, Klein and their story of a fifth dimension — David Berman explains the concept of dimension and how a mathematical idea suggests that we might well live in five of them. The ten dimensions of string theory — String theory has one very unique consequence that no other theory of physics before has had: it predicts the number of dimensions of space-time. David Berman explains where these other dimensions might be hiding and how we might observe them. How many dimensions are there? – the podcast — You can listen to an interview with David Berman as he tells us how Kaluza, Klein and their fifth dimension might help us understand the ten dimensions of string theory.
Site Swap
How To Write Down A Juggling Pattern: A Guide For The Perplexed

The contents of this document are Copyright (C) Solipsys Ltd, 1996, but you may reproduce and redistribute them freely provided that you make no changes, no charge, that this copyright notice remains attached, and you provide a link to the original.

A Word Of Warning: You're sitting in front of a computer, right? And you're about to learn something about juggling. You may even feel the need to do some. Now you know this might not be a good idea. It could get expensive. We'd just like to make it plain that if you break anything, it's not our fault. We warned you, OK?

Site Swaps are a notation for writing down juggling patterns. This document is a gentle introduction, assuming that you already know how to juggle. If you don't already know how to juggle but would like to learn, you might care to look at the juggling animation and tutorial program Juggle Krazy, or at some of the other web pages we've listed under Further Reading.

This is intended to be an extremely gentle introduction to Site Swaps. You may think it's a bit slow in places, but you can always skip ahead and then come back if you think you've missed something.

For this entire document we're going to assume that we're juggling balls. In principle at least, everything we do with balls we can also do with clubs and rings, so we may as well stick to balls for now. We're also not going to deal with multiplex or synchronous patterns. Everything here will assume that each hand holds, throws and catches at most one ball at a time, and that the hands always take it in turns.

Another thing we're going to ignore, along with clubs, rings, flaming torches and chainsaws, is hand movements. We're going to pretend that the hands always stay on their own side of the body, and that apart from the small movements needed to actually throw and catch, they don't move around at all.
Of course, this means that we're ignoring most of the patterns that any performer (or the public, for that matter!) will think are interesting. It means we won't allow Mills Mess, Burke's Barrage, Rubenstein's Revenge, behind the back throws, or any of the carry type variations. The Site Swap notation by itself doesn't cover these sorts of things. Finally, we're only going to deal with exactly two hands. We expect that documents covering hand movements, multiplex, multi-hand patterns and passing will be developed soon.

Questions and comments on this tutorial/explanation would be very welcome. We want to improve this document so as to make it better and better. Your assistance will be greatly appreciated. Besides, let's be honest. We'd like to sell you something. A good way of getting to grips with the Site Swap notation is to use our juggling animation package Juggle Krazy. You can certainly use the notation without any computer assistance at all, however, so we won't mention Juggle Krazy any more in this document. Err, well, not much anyway.

Lesson 1: The Basic Rhythm

We start by pretending that we're juggling in time with some music. Suppose that it's a nice comfortable rhythm for doing an ordinary three ball cascade. Every person has their own speed, so imagine your own music. Notice that each ball is thrown in turn.

Red, green, blue, red, green, blue, red, green, blue, red, green, blue, red, green, blue, red, green, blue, red, green, blue, 
It is very technical, and if you're really interested, it is covered below in technical notes [2]. So, here we are: • we always juggle to the same piece of music, • more balls means higher, • in fountain and cascade, the balls always take it in turns. Lesson 2: Types of Throw Remember our notes from Lesson 1: • we always juggle to the same piece of music, • more balls means higher, • in fountain and cascade, the balls always take it in turns. Now, it turns out that if you juggle all the balls to the same height, always alternate hands, and juggle at a steady rhythm, then even numbers must be done in a fountain, and odd numbers must be done in a cascade. The reason for this is explained in technical note [3] below, but feel free to read it later (or never!) if you either know this already or are prepared to accept it as given for Now, this next bit is going to seem obvious. We have already seen that if we always juggle to the same piece of music, the different patterns go to different physical heights. We are going to represent each height of throw by a number. The height at which we juggle four, we'll call that "4". The height at which we juggle 7 (we wish!) we'll call "7". (You can safely skip this next paragraph if you want, but only one!) It's very important to know that the physical height a ball goes to is not proportional to the number that we are using to represent the throw. An "8" does not go twice as high as a "4". The number is not the height, it represents the type of throw. For more details, see technical note [2]. This may all seem a little strange, but if you think of a "4" as representing the kind of throw you do when you juggle four balls, you won't go wrong. Now, and you'll feel like we're repeating ourselves here, a juggling pattern can simply be represented by a sequence of numbers that tell you the kinds of throws. For a four ball fountain it's obvious, they're all 4's! So a four ball fountain is simply ... 4 4 4 4 4 4 4 4 4 4 ... 
Similarly, a five ball cascade is simply

... 5 5 5 5 5 5 5 5 5 5 ...

and so on. So, to summarise:

• we always juggle to the same piece of music,
• more balls means higher,
• in fountain and cascade, the balls always take it in turns,

and now

• the different types of throws are represented by numbers.

In the next lesson we'll see that the numbers can be mixed up - they don't all have to be the same!

Lesson 3: Variations

Just to remind you:

• we always juggle to the same piece of music,
• more balls means higher,
• in fountain and cascade, the balls always take it in turns,
• the different types of throws are represented by numbers.

Let's now take a closer look at what happens when we juggle four balls in a fountain. Remember, each ball gets thrown in turn, so whenever a ball is thrown it is then thrown again four beats later. But what if, without warning or provocation, in the middle of a four ball fountain, we throw one of the balls as if it were in a five ball cascade? Well, three things to note.

1. It will change hands.
2. On landing, it will clash with the ball that is thrown immediately after it.
3. It will leave a hole where it used to land.

Have a look at points 2 and 3. The ball we have just thrown as a 5, high and crossing, is going to collide with the next ball's landing, and leave a hole behind. The obvious thing to do, then, is to throw that next ball into the hole that's been left behind. This will both avoid the collision and fill the gap.

So what kind of throw will that be, then? Well, it will cross, and it has to come down earlier than it used to. In fact, it turns out to be a 3! The two balls involved trade landing places. The first one thrown lands when the second one used to land, and the second one comes down early to land when the first one used to. The first comes down a beat late, making it a 5 instead of a 4, and the second comes down a beat early, making it a 3 instead of a 4. The pattern can therefore be described as ... 
... 4 4 4 4 4 4 5 3 4 4 4 4 4 4 ...

In fact, if you get the shareware version of Juggle Krazy and put 4 4 4 4 4 4 5 3 in the text window (note, exactly six lots of 4, and spaces between the numbers) and hit the "Accept" button, you'll see two balls continually swapping places in a four ball fountain.

You can try the same thing with fewer 4's. The only problem is that the swapping of balls happens more often, and doesn't always involve the same two balls. This makes it somewhat harder to see exactly what's going on, but it still works.

Lesson 4: Some Examples

Again the reminder:

• we always juggle to the same piece of music,
• more balls means higher,
• in fountain and cascade, the balls always take it in turns,
• the different types of throws are represented by numbers.

OK. Remember we had this ... 4 4 4 4 4 4 5 3 ... As we said above, the exact number of 4's doesn't really matter. We put six of them in so the swapping is of the same balls every time (yellow and green in the case of Juggle Krazy) and so that it happens at a relaxed rate. As has already been commented on, however, the number of 4's can be changed. Each of the following works ...

• 4 4 4 4 4 4 5 3
• 4 4 4 4 4 5 3
• 4 4 5 3
• 4 5 3
• 5 3

(We'll occasionally leave out the spaces for brevity from now on.)

44444453 we have already seen. 4444453 is just a little bit faster. 4453 has the continual exchange of two specific balls running as fast as it will go. Any fewer 4's and it won't be the same two balls that keep swapping every time.

453 is particularly interesting. The "3" turns out to be done with the same ball every time.

The last pattern in this list, 53, is also special. In this pattern the right hand always does 5's. Always. The right hand, in fact, has no idea that it isn't doing 5 balls. Well, except for the fact that the incoming throws are all coming in low and lobbed instead of descending from a great height. The left hand always does the low throw, and so the left hand thinks it's only doing three balls.
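Claims like "the 3 in 453 is always the same ball" are easy to check with a short simulation (this sketch is not part of the original page, and the function names are my own). The rule is exactly the one above: a throw of value v at beat t schedules that ball's next throw at beat t + v, and a beat with nothing scheduled introduces a fresh ball.

```python
def trace_balls(pattern, beats=60):
    """For a repeating siteswap, return the (throw value, ball id) at each
    beat, plus the number of distinct balls used. (Throws of value 0 would
    need special handling; the patterns here have none.)"""
    schedule = {}        # beat -> id of the ball due to be thrown then
    next_ball = 0
    trace = []
    for t in range(beats):
        if t not in schedule:      # nothing due: a new ball enters the pattern
            schedule[t] = next_ball
            next_ball += 1
        ball = schedule.pop(t)
        v = pattern[t % len(pattern)]
        schedule[t + v] = ball     # this ball is next thrown v beats later
        trace.append((v, ball))
    return trace, next_ball

trace, n_balls = trace_balls([4, 5, 3])
threes = {ball for v, ball in trace if v == 3}
print(n_balls)      # 4 balls in the pattern 453
print(len(threes))  # 1: the 3 is thrown with the same ball every time
```

Running the same trace on [5, 3] shows four balls there too, with the right hand (even beats) doing nothing but 5's, just as described.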
Here are some more examples. Ignore any collisions for now, we'll worry about getting rid of them another time. (optional) See if you can spot a simple rule that tells you how many balls are in the pattern.

• 444633 - the 6 is always the same ball
• 633 - you can do this with bounced 6's
• 555564 - the 6 is always the same ball
• 55564 - the 5's are always the same balls
• 5564 - the 4 is always the same ball
• 64 - this one's really hard!

Lesson 5: Holds, Transfers and Empty Hands

Now, this lesson is almost (but not quite!) completely optional, and this first bit is all you really need to know, although the reasons are interesting. Still ... We know what 3's, 4's, 5's and so on mean. What about 2's, 1's and 0's? It turns out, although the reasons are a little more complicated, that we interpret these as follows.

• A "0" is an empty hand for a beat.
• A "1" is a transfer from one hand to another.
• A "2" is a hold of a ball for one beat.

That's all you actually need to know, and you can now safely skip to the next lesson any time you want to. We'll still go gently, though, because some people find this bit interesting too. So, if you want to know more, here are the reasons why. You'll be getting tired of seeing this now, but for this lesson it is particularly important ...

• we always juggle to the same piece of music, making a throw on every beat.

Now let's go back a little and recall what the numbers mean. We say that a "4" is the kind of throw in a four ball fountain, but why? The idea was that the ball will next be thrown four beats later. That is what the number for a throw means - how many beats later will the ball next be thrown, regardless of the hand. In a three ball cascade each ball gets thrown every third beat, and in a four ball fountain each ball gets thrown every fourth beat. So whenever you write down a number it tells you how many beats the ball will take before it next gets thrown.

So what about a 2?
Well, the 2 means that the ball must next be thrown two beats later, and that will be the same hand again. We can say more than that, though. Since nothing will have happened in that hand between the throw and catch you may as well hold the ball. You can throw it if you like, but it doesn't have much time to get out of the hand, so it won't go very far. You may as well hold onto it.

Try having a look at the pattern 4 4 4 5 5 2 4 4 4 (which we'll meet again below). This is a four ball fountain with a two high flash. The high throws are the 5's, and the 2 is holding a ball while waiting for the others to come down again. You could do a little bob with the 2 if you like, but it's difficult and pretty pointless.

What about a 1? The 1 means that the ball must next be thrown on the very next beat. So if you do a 1 with the left hand, the right hand must throw that same ball on its next throw. That means that the ball must be handed across and not waste any time in the air. It must be a transfer.

And how about a 0? Have a look at the following. They are all possible juggling tricks with 4 balls.

... 4 4 4 4 4 4 4 ...
... 4 4 4 5 3 4 4 4 ...
... 4 4 4 5 5 2 4 4 4 ...
... 4 4 4 5 5 5 1 4 4 4 ...

Can you see what comes next? With the eye of faith, you may agree that the pattern is completed like this ...

... 4 4 4 4 4 4 4 ...
... 4 4 4 5 3 4 4 4 ...
... 4 4 4 5 5 2 4 4 4 ...
... 4 4 4 5 5 5 1 4 4 4 ...
... 4 4 4 5 5 5 5 0 4 4 4 ...

This means that we should be able to juggle this last pattern. But what is it? Looking at the numbers we can see that we're doing a four ball fountain. Then we throw four high, crossing throws. In essence we do a four high flash out of a fountain, momentarily doing as much of a five ball cascade as we can. But now we have no balls left, and by our rule that we make a throw on every beat of the music we have to make some sort of throw, even though we have nothing left to throw. So we say that it's a "0", and that's an empty hand for one beat.
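Incidentally, the "simple rule" teased in Lesson 4 is a well-known siteswap fact (not spelled out on this page): for a valid repeating pattern, the number of balls is the average of the throw values. A quick sketch (the function is mine):

```python
# Well-known siteswap fact: the ball count of a valid repeating pattern
# is the average of its throw values (which is always a whole number).
def ball_count(pattern):
    total = sum(pattern)
    assert total % len(pattern) == 0, "average must be a whole number"
    return total // len(pattern)

print(ball_count([4, 4, 4, 6, 3, 3]))  # 4
print(ball_count([5, 5, 5, 5, 6, 4]))  # 5
print(ball_count([5, 5, 5, 5, 0]))     # 4: the four ball flash just above
```

Note the 0 counts as a throw of height zero, which is exactly why 55550 still averages out to four balls.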
There are two other ways of thinking about a "0", neither of which is very useful, but both of which are very entertaining to technical types. These are mentioned in technical note [4].

Lesson 6: Some More Examples

Here are a few more examples along with their very much more verbose English versions ... Be warned, simply reading these will not necessarily be very enlightening. The best thing to do is run them in super slow motion in Juggle Krazy and compare it with the description ...

• ... 3 3 3 3 4 2 3 3 3 3 ... Juggle a three ball cascade. Then do a fountain-like throw from one hand and pause with the other for a beat. You can wave that ball around if you like, but you don't have a lot of time. Then carry on with three balls.

• ... 3 3 3 3 5 2 2 3 3 3 ... Juggle a three ball cascade. Then do a single high throw and wait for it to come down. If you get the height of the high throw right you'll have a wait of exactly two beats.

• ... 3 3 3 3 5 5 5 0 0 3 3 3 ... Start with a three ball cascade. Do three high throws, leaving you with two empty hands. When the balls all come down you can resume your cascade.

• ... 3 3 3 3 4 4 1 3 3 3 3 3 ... Start with a three ball cascade. Then from one hand do a fountain-like throw, and immediately from the other hand do the same. You have one ball left, so transfer it across. Strangely, you can now carry on with the cascade without breaking rhythm.

• ... 3 3 3 3 4 5 1 4 1 3 3 3 3 ... The most complicated example yet. Do your three ball cascade. Then, when you want to, throw a fountain-like throw, about twice as high as the 3's, from, say, the right hand. Then with the left hand throw a cascade-like throw twice as high again. This is the kind of throw you would do if you were juggling 5 at this rhythm. Both these balls will land in your right hand. The last ball should now be transferred across, right to left. This is then immediately thrown up in a fountain throw.
The first ball thrown has now landed in the right hand, so transfer that across too. Amazingly, you can now carry on with three balls in a cascade as if nothing had happened.

Lesson 7: Inventing Tricks

OK, once again, the reminder:

• we always juggle to the same piece of music,
• more balls means higher,
• in fountain and cascade, the balls always take it in turns,
• the different types of throws are represented by numbers.

By now we hope you can interpret the lists of numbers, so here are some particularly nice ones for you to try ...

Three balls ...

• 4 4 1
• 5 3 1
• 4 5 1 4 1
• 4 5 0

Four balls ...

There are lots more, and now we'll show you how to invent your own. It's easy once you know just one simple trick, although there are several ways of actually going about it. We'll show you our favourite first.

Firstly, choose how long you want the pattern to last. This can be anything from just one throw (boring) to 100 throws (impossible to remember!) For this example we'll choose 5. Remember, this is not the number of balls you're going to be juggling. It's how long the sequence of throws will last. Write down a row of that many (5) x's, each one with a dot underneath - like this.

x x x x x
. . . . .

What we're going to do is to replace each "x" with a number representing a throw. This can be any number from zero on upwards, but if you put 100 then you'd better work on your high throws! Let's start by putting a 4 in the first place.

4 x x x x
. . . . .

As you know, this throw is the kind of throw we do when juggling 4 balls in a fountain, so, in particular, it's next going to be thrown four beats later. Count forwards four places, and replace the dot with a "*". Like this.

4 x x x x
. . . . *

Now we do it again. Replace any "x" with a number, count forwards, and put a "*" on the dot. The only thing is that you're not allowed to put an "*" down unless there's a dot there. For example, you're not allowed to put a 3 in place of the next "x". Let's try a 6 instead.
This gives ...

4 6 x x x
. . * . *

As you count forwards here you fall off the end, so come back on at the beginning and keep counting. The "*" ends up just one place further on from the 6, even though we've counted six places. Now let's put in another number.

4 6 4 x x
. * * . *

We fell off the end again, and again we came back on at the front. Things are getting tight now. The fourth "x" can't be replaced by a 1, 3, 4, 6, 8, 9, etc, so we can pick 0, 2, or any multiple of 5 added to these. Try it. Let's use 5.

4 6 4 5 x
. * * * *

The easy choice is now a 1, and we get this.

4 6 4 5 1
* * * * *

Finished. Put it into Juggle Krazy and see what you get!

This system can be used to invent any number of repeating Site Swap patterns. Try it for yourself, but remember that the larger the numbers, the harder the pattern. There are other things that make a pattern hard too, but that's still not fully understood. In general you simply have to try a pattern for yourself and see if you can do it. See also Inventing Synchronous Patterns.

In the next lesson we'll see how it can be used to work out transitions between Site Swap sequences.

Lesson 8: Transitions

So let's suppose you're doing a three ball cascade and you want to move efficiently into a three ball shower, which in Site Swap is "5 1". Write down the sequences ...

. . . 3 3 3 3 3 3 x x 5 1 5 1 5 1 5 1 . . .
. . . . . . . . . . . . . . . . . . . . . . . . .

From each number count forwards, just as before, and replace the dot with a "*" ...

. . . 3 3 3 3 3 3 x x 5 1 5 1 5 1 5 1 . . .
. . . . . . . . . * * * * * * . * . * * * * * * . . . .

You'll see that there are two numbers missing and two gaps in the second row. Each "x" can go to either dot. Taking the possibilities in the non-obvious order, send the first "x" to the second dot so you replace the first "x" with "5". The second "x" then has to be replaced with "2", giving the sequence

. . . 3 3 3 3 3 3 5 2 5 1 5 1 5 1 5 1 . . .
This means that from the three ball cascade you throw the first ball from the shower, wait, and then do the shower. During the pause you can do a pirouette or something, particularly if you make the pause longer, like this ...

. . . 3 3 3 3 3 3 7 2 2 2 5 1 5 1 5 1 5 1 . . .

This is the transition most people use to go from cascade to shower. The other possibility is to replace the first "x" with "3", sending it to the first dot, and replace the second "x" with a "4", sending it to the second dot. This gives

. . . 3 3 3 3 3 3 3 4 5 1 5 1 5 1 5 1 . . .

This gives a transition that deserves to be more widely known. It goes like this. Do the three ball cascade. Throw a single fountain throw from the left hand, followed immediately by the shower from the right. The shower is suddenly there with no preceding pause. Great for confusing jugglers who don't know about it.

You can use this technique for finding transitions out of a shower too, or between any two Site Swaps.

Have you got this far? Congratulations! Now, what would you like to see here? There's a lot more about Site Swaps that we could tell you, but at this point we would like to know what you would like us to explain.

Technical notes

Note 1: Dwell Time

This has been moved to the Dwell Time page.

Note 2: Physical throw heights

Before reading this technical note you need to read about the Dwell Time. Suppose the Dwell Time is D, usually around 0.7 or so, and suppose the time between beats is T. Then for a Site Swap value of V the actual time from throw to catch is

Z = (V - 2*D) * T

(We multiply the Dwell Time D by 2 because it's two beats between successive throws from the same hand.) We note that if the Dwell Time is close to its usual value of 0.7 then this only works for values of 2 or more.
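The arithmetic in this note is easy to check numerically. Here is a small sketch (function names are mine, not from the page); it uses the height formula H = G*Z*Z/8, which the remainder of the note derives from simple projectile motion.

```python
G = 9.8  # acceleration due to gravity, metres per second squared

def flight_time(v, dwell=0.7, beat=0.25):
    """Throw-to-catch time Z = (V - 2D) * T for siteswap value v."""
    return (v - 2 * dwell) * beat

def throw_height(v, dwell=0.7, beat=0.25):
    """Physical height H = G * Z^2 / 8 (the ball spends Z/2 on the way up)."""
    z = flight_time(v, dwell, beat)
    return G * z * z / 8

# The reality check from the note: 4 throws/catches per second (T = 0.25),
# 5 balls, Dwell Time 0.7.
print(round(throw_height(5), 6))  # 0.99225 metres, just over three feet
```

The same function reproduces the ratios in the table below, e.g. a 4 comes out roughly 2.64 times as high as a 3.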
Remembering that the ball has to go both up and down, so half the time is on the journey down, the physical height H is given by the formula

H = 1/2 G (Z/2)^2 = 1/2 G Z^2/4 = G*Z*Z/8

where G is acceleration due to gravity, roughly 9.8 metres per second squared. As a quick reality check, suppose you do about 4 throws/catches per second with 5 balls, and have a Dwell Time of 0.7.

Z = (V-2*D)*T = (5-1.4)*0.25 = 0.9
H = 9.8*0.9*0.9/8 = 0.99225 metres,

or just over three feet, just about right. Actual physical heights of different throws, and their ratios, are given in the following table.

SiteSwap   Height      Height compared
Value      in metres   with 3
--------   ---------   ---------------
3          0.20        1.00
4          0.52        2.64
5          1.00        5.06
6          1.62        8.27
7          2.40        12.25

As you can see, 4 is about two and a half times as high as 3, 5 is about twice as high as 4 or five times as high as 3, and 6 is about three times as high as 4.

Note 3: Fountains and cascades

If the balls are always going to the same height then they must take it in turns. No ball has a chance to overtake another, so there is no other choice. Also, if you're juggling four balls then every fourth throw will be the same ball. But every fourth throw is with the same hand, assuming the hands always take it in turns, so that means that if you're doing four balls, and they all go to the same height, then each ball must come back to the same hand. You have to do a fountain. The same reason goes through for any even number of balls, and identical reasoning works for odd numbers of balls too, only now the balls have to swap hands, because every (say) fifth throw is with the other hand.

Note 4: Empty hands

Suppose that the Dwell Time is 50%, so that each hand is full for exactly half the time. This means that for every throw each ball will spend one beat in the hand, and the rest of the time in the air. A "5" will spend four beats in the air and one in the hand before being thrown again, a "4" will spend three beats in the air and one in the hand.
Carrying on with this trend, a "1" must spend no time at all in the air. But what about a "0"? It must spend -1 beats in the air. That means that the ball when thrown must go backwards in time by one beat! There are various interpretations of this, and an entire thesis can be written on it. However, one way of looking at it is that the "catch" is actually the creation of a ball/anti-ball pair. The ball is in the hand, the anti-ball is in fact the ball going backwards in time. Then, one beat later, the ball and anti-ball re-combine. The hand once again becomes empty as the two mutually annihilate.

But what about the number of balls? If we do 55550 in the middle of a four ball fountain there are four balls in the air. What about the ball from the ball/anti-ball pair? Doesn't that make 5 balls? No, because the anti-ball is a negative ball, and the balance is retained. Although this all sounds bizarre it in fact has precise analogies in modern physics in which anti-particles are treated as being identical to particles moving backwards in time.

Further Reading

• There's a huge amount of juggling-related material - tutorials, animations, manufacturers' information etc. - available on the Juggling Information Service at http://www.juggling.org/
• The above site appears not to be being maintained, so for more up-to-date information you might want to look instead at http://www.jugglingdb.com/
• There are several books available. My own opinion is that the best is Charlie Dancey's Encyclopedia of Ball Juggling, ISBN 1-898591-13-X. Have a look at http://dspace.dial.pipex.com/dancey/
• (Ahem) Don't forget to check out the animation and tutorial package Juggle Krazy. There's a shareware version you can download and try out.
• And of course don't forget the newsgroup rec.juggling.

This list is far from exhaustive (and must be expanded!), but you can find plenty more places to look by visiting the above addresses.
Send comments or questions to Colin Wright: mailto:SiteSwap@solipsys.co.uk

By snailmail, we can be contacted at: Solipsys Ltd., 4 Brook Street, Port Sunlight Village, Wirral, CH62 5DB.
How is

e^(-3x)*(dy/dx) - e^(-3x)*3y = e^(-3x)*6

equal to

(d/dx)[e^(-3x)*y] = e^(-3x)*6 ?

Are you saying that you do not know how to take the derivative of $e^{-3x}y$ with respect to x?

Last edited by HallsofIvy; August 11th 2012 at 10:23 AM.

No, I'm learning the method of solving linear 1st-order differential equations, and I have the example:

dy/dx + 3y = 2x*e^(-3x)

I found e^(3x) to be the integrating factor, so then I multiplied it through the original equation, giving me:

e^(3x)*(dy/dx) + e^(3x)*3y = 2x

This next step is what is confusing me (and I realize that it is probably something very simple). I'm supposed to use the product rule, where:

u=e^(3x), dv=(dy/dx), du=e^(3x), & v=3y

Then the resulting equation is:

(d/dx)[e^(3x)*y] = 2x

so my question is: Can you show me the individual steps for where

e^(3x)*(dy/dx) + e^(3x)*3y = 2x

becomes

(d/dx)[e^(3x)*y] = 2x ?

Thank you.
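The step being asked about is just the product rule read backwards: d/dx[e^(3x)*y] = e^(3x)*(dy/dx) + 3*e^(3x)*y, so the two terms on the left collapse into a single derivative. A quick numerical sanity check of that identity (this is my addition, not part of the thread; the test function y(x) = sin x is an arbitrary choice):

```python
import math

def lhs(y, dy, x):
    # e^(3x) * y'(x) + 3 * e^(3x) * y(x), the expanded left-hand side
    return math.exp(3 * x) * dy(x) + 3 * math.exp(3 * x) * y(x)

def d_product(y, x, h=1e-6):
    # Central-difference derivative of f(x) = e^(3x) * y(x)
    f = lambda t: math.exp(3 * t) * y(t)
    return (f(x + h) - f(x - h)) / (2 * h)

y, dy = math.sin, math.cos
for x in (0.0, 0.5, 1.3):
    assert abs(lhs(y, dy, x) - d_product(y, x)) < 1e-4
print("product rule identity confirmed numerically")
```

The two sides agree to within the finite-difference error, which is exactly why multiplying by the integrating factor e^(3x) makes the equation integrable.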
Light-path length difference

Hi Foxhound101,

1. The problem statement, all variables and given/known data

Two narrow slits are 0.12 mm apart. Light of wavelength 550 nm illuminates the slits, causing an interference pattern on a screen 1.0 m away. Light from each slit travels to the m=1 maximum on the right side of the central maximum.

Part A - How much farther did the light from the left slit travel than the light from the right slit? Express your answer using two significant figures.

2. Relevant equations

theta[m] = m*(lambda/d)
y[m] = (m*lambda*L)/d

3. The attempt at a solution

I don't understand how to do these problems...

theta[m] = (m*lambda)/d
theta[m] = (1*(5.5*10^-7 m))/(1 m)
theta[m] = 5.5*10^-7

Remember that this is really: The approximation you are using ([itex]\theta=\frac{m\lambda}{d}[/itex]) is fine since the angle is small enough, but remember that this approximation is true if the angle is measured in radians. So the angle you found is [itex]5.5\times 10^{-7}\mbox{ rad}[/itex].

path length difference = d*sin(theta)
r = d*sin(theta)
r = 1 m * sin(5.5*10^-7)
r = 9.599*10^-9 m

This number was calculated with the angle measure set to degrees, not radians.
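For reference, the numbers in this thread can be checked directly (this check is my addition, not part of the thread). With the slit separation d (not the screen distance) and the angle in radians, the m=1 path difference d*sin(theta) comes out to one wavelength, as it must for a first-order maximum:

```python
import math

d = 0.12e-3    # slit separation in metres
lam = 550e-9   # wavelength in metres
m = 1

theta = m * lam / d          # small-angle approximation, in radians
delta = d * math.sin(theta)  # path-length difference between the two slits

print(f"{delta:.2e} m")  # 5.50e-07 m, i.e. exactly one wavelength
```

To two significant figures the answer is 5.5 x 10^-7 m (550 nm); the attempt above went wrong by dividing by the 1 m screen distance instead of d, and then by evaluating the sine in degrees.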
[FOM] Formalization Thesis

Vaughan Pratt pratt at cs.stanford.edu
Fri Dec 28 01:40:16 EST 2007

Timothy Y. Chow wrote:
> Catarina Dutilh wrote:
>> So, unless one can make sense of the idea of 'expressing faithfully' in a
>> precise way, the Formalization Thesis thus stated does not seem to be
>> sufficiently precise.
> Well, can you tweak it to *make* it "sufficiently precise"?

With what is currently known about foundations, I doubt it. Foundations today seems to be where quantum mechanics was in the 1910's---vague shapes are coming into focus, but there is still no precisely stated uncertainty principle permitting a continuous allocation of uncertainty to conjugate pairs, which in the case of foundations I would take to be theories and classes.

The 1920's saw the formulation of Heisenberg's uncertainty principle, that the product of the uncertainties, suitably measured, in such conjugate pairs as position and momentum, time and energy, or any two orthogonal angular momenta, was bounded below by a universal constant. Is there any equally symmetric uncertainty principle for theories and classes? Certainly there is a large and growing body of theorems indicative of some kind of interference between the two. But without a unifying central theorem from which this body can be seen as simply a sampling of its consequences, foundations cannot be said to have as firm a handle on the interference of theories and classes as physics does for its various conjugate pairs of variables.

For me, Catarina's question is not just a good question, it is *the* question the Formalization Thesis needs to answer. Tim's three examples (CH, 2nd incompleteness, formalizations of intuition encountered in specific machine-checkable proofs) purporting to illustrate "faithful expression" seemed only to focus Catarina's question on particular cases, without however answering it at all.

> Does this clarify what I intend by "expressing faithfully"?
No, but an assessment of the degree of faithfulness of each example might help, especially if the examples covered a representative range of degrees of faithfulness.

Vaughan Pratt

More information about the FOM mailing list
Preclinical Assessment of HIV Vaccines and Microbicides by Repeated Low-Dose Virus Challenges

Background

Trials in macaque models play an essential role in the evaluation of biomedical interventions that aim to prevent HIV infection, such as vaccines, microbicides, and systemic chemoprophylaxis. These trials are usually conducted with very high virus challenge doses that result in infection with certainty. However, these high challenge doses do not realistically reflect the low probability of HIV transmission in humans, and thus may rule out preventive interventions that could protect against "real life" exposures. The belief that experiments involving realistically low challenge doses require large numbers of animals has so far prevented the development of alternatives to using high challenge doses.

Methods and Findings

Using statistical power analysis, we investigate how many animals would be needed to conduct preclinical trials using low virus challenge doses. We show that experimental designs in which animals are repeatedly challenged with low doses do not require unfeasibly large numbers of animals to assess vaccine or microbicide success.

Conclusion

Preclinical trials using repeated low-dose challenges represent a promising alternative approach to identify potential preventive interventions.

Citation: Regoes RR, Longini IM Jr, Feinberg MB, Staprans SI (2005) Preclinical Assessment of HIV Vaccines and Microbicides by Repeated Low-Dose Virus Challenges. PLoS Med 2(8): e249. doi:10.1371/

Academic Editor: Marc Lipsitch, Harvard School of Public Health, United States of America

Received: April 11, 2005; Accepted: June 13, 2005; Published: July 19, 2005

Copyright: © 2005 Regoes et al. This is an open-access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
Competing interests: The authors have declared that no competing interests exist.

Abbreviations: ID50, infectious dose at which 50% of the animals become infected

Worldwide approximately 40 million people are infected with HIV, and more than 3 million people died of AIDS last year alone [1]. Unfortunately, numerous obstacles to providing effective antiretroviral treatment to the majority of infected individuals in resource-poor countries exist. The development of a vaccine or other preventive biomedical intervention therefore bears the greatest hope to curb the rampant HIV epidemic [2]. Research on HIV vaccines and prevention relies strongly on preclinical studies in macaque models for the identification and evaluation of potential vaccines or prophylactic treatment strategies [3].

Initially, the goal was to use animal trials to screen for preventive interventions that induce sterilizing immunity (i.e., protection against infection), since this would clearly be the most effective way to contain the AIDS pandemic. Unfortunately, most of the vaccine approaches assessed to date in animal studies have failed to induce sterilizing immunity [4–7], although some prophylactic approaches were found to reduce susceptibility to infection [8–12]. As a result of this shortcoming, vaccine candidates are at present primarily examined with regard to their effects on set point viremia, disease progression, and their general immunogenicity, rather than with regard to the degree of protection against infection they confer. However, the inference as to the degree of sterilizing immunity from the level of immunogenicity is limited by our lack of knowledge about the mechanisms of protection against infection as such [13]. The inability of most vaccine candidates to induce protection against infection in animal studies may be due, at least in part, to unintended consequences of the design of the animal trials, rather than to problems inherent in the vaccination approaches themselves.
In most animal studies that seek to test the efficacy of a given preventive intervention, very high challenge doses are used, typically of approximately 10–100 times the infectious dose at which 50% of the animals become infected (ID[50]). The motivation for using such high challenge doses is mostly practical: the experimenter wants to ascertain infection success in unvaccinated/untreated animals, which can then be compared to the hopefully lower infection success in vaccinated/treated animals.

There are, however, concerns with using high challenge doses. Firstly, the extremely high probability of infection in high-dose challenge studies conflicts with the low transmission rate of HIV per sex act [14–17]. Although it has been argued that transmission rates may be higher under some circumstances (such as during acute infection or when other infections of the genital tract are present) than the estimates obtained from discordant couple studies suggest (e.g., the recent study by Pilcher et al. [18]), transmission of HIV during one sex act surely does not occur with certainty. Secondly, protection against high-dose virus challenges may be more difficult to achieve because the use of high challenge doses makes stochastic extinctions that can play an important role in early control of the infection [19] very unlikely. Thus, standard high-dose challenge studies may rule out preventive intervention strategies that could protect against infections following "real life" exposures.

The problems of using high virus doses in animal studies can be illustrated by the discrepancy between the protection zidovudine (AZT) confers in animals and humans.
Whereas macaques [20,21] and mice [22] were not protected from infection with high challenge doses by zidovudine (a relatively weak antiretroviral drug when used in monotherapy), clinical studies surprisingly showed that two-thirds of perinatal infections (i.e., mother-to-child transmissions during birth) can be prevented by zidovudine administration [23]. It is important to note that the use of zidovudine to prevent perinatal HIV infection is a biomedical intervention aiming to protect from infection, whereas zidovudine is most commonly used as a therapeutic agent after infection. This example suggests that there is a need for experimental designs that allow the assessment of the protection against infection with lower, and thus more realistic, challenge doses.

The belief that experiments involving realistically low challenge doses require unfeasibly large numbers of animals has prevented the development of low-dose challenge models. In this theoretical study, we show that, contrary to this widely held belief, low-dose challenge experiments can be designed such that they do not require large numbers of animals. Using statistical power analysis, we compare two experimental designs (see Figure 1): (i) a single low-dose challenge design in which each animal is challenged only once, and (ii) a repeated low-dose challenge design in which each animal is challenged until it is infected or a predetermined maximum number of challenges is reached. We find that the repeated low-dose challenge design does not require unfeasibly large numbers of animals.

Figure 1. Single and Repeated Low-Dose Challenge Designs. Figure shows designs for single (A) and repeated (B) low-dose challenge designs. Small arrows denote challenges, and white and red symbols denote uninfected and infected animals, respectively.

In the following, we are going to discuss the case of assessing whether a vaccine candidate induces sterilizing immunity.
All the considerations in this article, however, apply equally to other preventive interventions, such as microbicides.

To assess the quality of the single and the repeated low-dose challenge designs, we conducted a statistical power analysis. The statistical power of an experimental design is defined as the probability that an effective vaccine or treatment is correctly determined to be effective. This analysis consists of simulating the experiments, evaluating them, and then repeating this procedure thousands of times to estimate the statistical power of a given experimental design.

Simulation of Single Low-Dose Challenge Experiments

In our simulations of the single low-dose challenge experiments, we assume that we have n unvaccinated control animals and n vaccinated animals. In the control group, we simulate single challenges of each animal with the ID[50] by performing n Bernoulli trials with a probability of success of p[c] = 0.5. The probability of success corresponds to the probability with which an animal becomes infected after a single challenge. (By assuming the same probability p[c] for each animal, we ignore potential between-animal variation of the susceptibility to infection. This assumption will be relaxed below.) The results of these trials can be written as a vector x[c], the entries of which were either zero (uninfected) or one (infected):

x[c] = (x[c,1], x[c,2], ..., x[c,n])     (1)

By summing over the elements of x[c], we obtain the number of infected animals in the control group, ι[c]:

ι[c] = x[c,1] + x[c,2] + ... + x[c,n]     (2)

In the vaccinated group, we simulate single challenges with the ID[50] similarly to the control group by performing Bernoulli trials. However, we assume that, because of vaccination, the probability of infection (or success) in the vaccinated group, p[v], is lower than that in the control animals, p[c].
The relation of p[v] to the effect of the vaccine on the susceptibility of the host, VE[S], is given by p[v] = (1 − VE[S]) p[c] (equation 3). The results of these Bernoulli trials can again be written as a vector x[v], and summing the elements of x[v] yields the number of infected animals in the vaccinated group, ι[v]. The outcome of the simulated experiment can then be summarized in a contingency table, as shown in Table 1. On this contingency table, we perform a standard one-tailed Fisher's exact test [24] to assess whether the fraction of infected animals in the vaccinated group is significantly lower than that in the control group.

Table 1. Contingency Table of a Single Low-Dose Challenge Experiment

Simulation of Repeated Low-Dose Challenge Experiments

In our simulations of the repeated low-dose challenge experiments, we once more assume that we have n unvaccinated control animals and n vaccinated animals. We again simulate challenges of each control animal with the ID[50] by performing Bernoulli trials with a probability of success of p[c] = 0.5. Unlike in the simulations of the single low-dose challenge experiments, however, we now repeatedly challenge each animal until it is infected or until a maximum number of challenges, C[max], has been performed. We assume that the probability of infection p[c] is independent of how often an animal has been challenged before. The results of these repeated Bernoulli trials can be written as two vectors: y[c] = (y[c,1], …, y[c,n]), which contains the number of challenges performed on each animal, and s[c] = (s[c,1], …, s[c,n]), which records whether a given animal is uninfected (zero) or infected (one). By summing over y[c], we obtain the total number of challenges performed in the control group, η[c] = ∑_i y[c,i], and by summing over s[c], the number of infected animals in the control group, ι[c] = ∑_i s[c,i]. To simulate repeated low-dose challenges in the vaccinated group, we perform repeated Bernoulli trials with a probability of success p[v].
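The single-challenge simulation and its evaluation can be sketched in a few lines. The paper's Protocol S1 implements this in R; the following is an illustrative Python analogue (all function names are ours), with the one-tailed Fisher's exact test written out explicitly via the hypergeometric distribution rather than taken from a statistics library:

```python
import random
from math import comb

def fisher_one_tailed(inf_c, n_c, inf_v, n_v):
    """One-tailed Fisher's exact test: P(infected in vaccinated <= inf_v)
    under the hypergeometric null, conditioning on the total number infected."""
    total_inf = inf_c + inf_v
    denom = comb(n_c + n_v, total_inf)
    num = sum(
        comb(n_v, k) * comb(n_c, total_inf - k)
        for k in range(inf_v + 1)
        if total_inf - k <= n_c
    )
    return num / denom

def single_challenge_experiment(n, p_c, ve_s, rng):
    """Challenge each of n control and n vaccinated animals once."""
    p_v = (1 - ve_s) * p_c                              # p_v = (1 - VE_S) * p_c
    inf_c = sum(rng.random() < p_c for _ in range(n))   # Bernoulli trials, control
    inf_v = sum(rng.random() < p_v for _ in range(n))   # Bernoulli trials, vaccinated
    return inf_c, inf_v

rng = random.Random(1)
inf_c, inf_v = single_challenge_experiment(10, 0.5, 0.9, rng)
p_value = fisher_one_tailed(inf_c, 10, inf_v, 10)
print(inf_c, inf_v, round(p_value, 4))
```

Writing the test from the hypergeometric probabilities directly keeps the sketch free of external dependencies; for real analyses an established implementation of Fisher's exact test would be the natural choice.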
For a given vaccine efficacy VE[S], p[v] is determined by equation 3. Analogously to the control group, the results of these repeated Bernoulli trials can be written as two vectors, y[v] and s[v], and summing the elements of these two vectors yields the total number of challenges performed in the vaccinated group, η[v], and the number of infected animals in the vaccinated group, ι[v]. As in the case of the single low-dose challenge design, the outcome of the simulated experiment can be summarized in a contingency table (Table 2). To assess whether the fraction of infected animals in the vaccinated group is significantly lower than that in the control group, we again perform a one-tailed Fisher's exact test [24]. In general, the numbers of challenges, η[c] and η[v], are larger than the number of animals per group, n. This increase in the counts entering the contingency table leads to the increased statistical power of the repeated low-dose challenge design. To analyze the outcome of the simulated repeated low-dose challenge experiments, we chose Fisher's exact test rather than the more obvious Cox proportional hazards model, because the latter depends on large-sample asymptotics, whereas we are interested in cases with small numbers of experimental animals.

Table 2. Contingency Table of a Repeated Low-Dose Challenge Experiment

Heterogeneity in Infection Probabilities

In our mathematical description of challenge experiments, we have assumed that animals within each group have equal infection probabilities, p[c] and p[v] for the control and vaccinated groups, respectively. To simulate potential animal-to-animal variation in susceptibility to infection, we relaxed this assumption and assigned individual infection probabilities to each animal. The individual infection probabilities are drawn from a β-distribution, which is often used as a prior distribution for binomial proportions. The β-distribution has two shape parameters, α and β.
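The repeated-challenge bookkeeping for one group can likewise be sketched compactly. This illustrative Python fragment (function name ours; the paper's own implementation is the R script in Protocol S1) challenges each animal until infection or C[max] and returns the two quantities, η and ι, that enter the contingency table:

```python
import random

def repeated_challenges(n, p_infect, c_max, rng):
    """Challenge each of n animals until infection or c_max challenges.
    Returns (total number of challenges eta, number of infected animals iota)."""
    eta = iota = 0
    for _ in range(n):
        for _ in range(c_max):
            eta += 1
            if rng.random() < p_infect:  # infection independent of history
                iota += 1
                break
    return eta, iota

rng = random.Random(2)
p_c, ve_s, c_max, n = 0.5, 0.9, 20, 6
eta_c, iota_c = repeated_challenges(n, p_c, c_max, rng)
eta_v, iota_v = repeated_challenges(n, (1 - ve_s) * p_c, c_max, rng)

# 2x2 contingency table on challenges: infected vs. uninfected challenges
table = [[iota_c, eta_c - iota_c], [iota_v, eta_v - iota_v]]
print(table)
```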
Its probability density is given by f(x) = x^(α−1) (1 − x)^(β−1) / B(α, β), where B(α, β) denotes the beta function, and its mean and variance are μ = α/(α + β) and σ² = αβ/[(α + β)²(α + β + 1)]. We assume that μ = p[c] in the control group and μ = p[v] = (1 − VE[S])p[c] in the vaccinated group. Further, we assume that the coefficients of variation, CV = σ/μ, of the distributions in the two groups are equal. With these assumptions, we can rewrite the two shape parameters of the β-distribution, α and β, in terms of the infection probability, p, and the coefficient of variation, CV: α = (1 − p)/CV² − p and β = α(1 − p)/p. Hereby, p = p[c] for the control group and p = p[v] = (1 − VE[S])p[c] for the vaccinated group. To incorporate potential heterogeneity in susceptibility into the virtual low-dose challenge experiments, we replaced the probability of success in the Bernoulli trials (see above) with the individual infection probabilities.

Power Analysis

To calculate the statistical power of the single and the repeated low-dose challenge designs, we performed 100,000 such simulated experiments for a given number, n, of animals per group. The statistical power can be estimated as the fraction of simulated experiments in which the vaccine is found to be significantly efficacious (significance level α = 0.05). We estimated the statistical power for numbers of animals per group, n, ranging from one to 20, and for vaccine efficacies VE[S] = 0.67, 0.8, and 0.9. The power analysis outlined above was implemented in the R Language for Statistical Computing [25]. An R-script that performs the power analysis presented here is provided as Protocol S1. For large numbers of animals per group, n, the statistical power can be approximated using asymptotic theory. For the single low-dose challenge design, the power is approximately (e.g., [26], p. 240)

Power ≈ Φ( (p[c] − p[v] − 1/n) / √(p[c](1 − p[c])/n + p[v](1 − p[v])/n) − z[α] ).   (13)

Hereby, Φ denotes the cumulative normal distribution function, and z[α] is the standard normal deviate associated with the one-tailed probability α (the significance level).
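The conversion from (p, CV) to the two shape parameters, and the drawing of individual infection probabilities, might look as follows. This is a Python sketch, not the paper's R code; it assumes a beta distribution parameterized by its mean p and coefficient of variation CV, which requires CV² < (1 − p)/p:

```python
import random

def beta_shape_params(p, cv):
    """Shape parameters of a beta distribution with mean p and
    coefficient of variation cv (requires cv**2 < (1 - p) / p)."""
    alpha = (1 - p) / cv**2 - p
    beta = alpha * (1 - p) / p
    if alpha <= 0 or beta <= 0:
        raise ValueError("cv too large for the requested mean")
    return alpha, beta

alpha, beta = beta_shape_params(0.5, 0.3)

# draw individual infection probabilities for, say, six animals
rng = random.Random(3)
probs = [rng.betavariate(alpha, beta) for _ in range(6)]
print(round(alpha, 3), round(beta, 3), [round(q, 2) for q in probs])
```

The individual probabilities `probs` would then replace the common success probability in the Bernoulli trials sketched earlier.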
Furthermore, p[c] and p[v] denote the infection probabilities of animals in the control and vaccinated groups, respectively, and n the number of animals per group. The term 1/n in the numerator is the continuity correction [27,28]. For the repeated low-dose challenge design, the number of challenges is not the same as the number of animals, n, but is a random variable: the number of challenges for each individual animal follows a geometric distribution truncated at C[max]. The expected numbers of challenges in the control group, ⟨η[c]⟩, and the vaccinated group, ⟨η[v]⟩, are ⟨η[c]⟩ = n(1 − (1 − p[c])^C[max])/p[c] and ⟨η[v]⟩ = n(1 − (1 − p[v])^C[max])/p[v]. Substituting the expected number of challenges for the actual number, we can approximate the statistical power of the repeated low-dose challenge design as

Power ≈ Φ( (p[c] − p[v] − γ) / √(p[c](1 − p[c])/⟨η[c]⟩ + p[v](1 − p[v])/⟨η[v]⟩) − z[α] ).   (17)

Hereby, γ = (1/⟨η[c]⟩ + 1/⟨η[v]⟩)/2 is the continuity correction. For C[max] = 1, equation 17 reduces to equation 13. Because the approximation in equation 17 involves substituting a random variable by its expectation, it is less accurate than the approximation for the power of the single low-dose challenge design in equation 13. The R-script provided as Protocol S1 also contains a function that calculates the statistical power using equation 17.

Single Low-Dose Challenge Design Requires Large Numbers of Animals

How would we measure protection against infection in a low-dose challenge model? The most straightforward design would involve a large number of hosts, some vaccinated and some unvaccinated. After challenge with a low dose, one would determine the fractions of infected hosts in the vaccinated and unvaccinated groups, and assess whether there is a statistically significant difference between the fractions (see Figure 1A). To assess how many animals would be required in a single low-dose challenge experiment, we performed a statistical power analysis (see Methods). The statistical power of an experimental design is defined as the probability that, in an experiment with an effective vaccine, the vaccine is correctly determined to be effective.
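The asymptotic approximation can be evaluated without any simulation. The following Python sketch (function names ours) implements the approximation as we read it from the definitions of the continuity correction γ and the expected challenge numbers ⟨η⟩; the exact algebraic form of the equation is an assumption on our part, and the one-tailed 5% normal deviate is hard-coded as the standard value 1.645:

```python
from math import erf, sqrt

def phi(x):
    """Standard normal cumulative distribution function."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

Z_ALPHA = 1.6448536  # one-tailed 5% standard normal deviate

def expected_challenges(n, p, c_max):
    """Expected total challenges: n times the truncated-geometric mean."""
    return n * (1 - (1 - p) ** c_max) / p

def approx_power_repeated(n, p_c, ve_s, c_max):
    """Normal approximation to the power of the repeated design
    (reduces to the single-challenge approximation for c_max = 1)."""
    p_v = (1 - ve_s) * p_c
    eta_c = expected_challenges(n, p_c, c_max)
    eta_v = expected_challenges(n, p_v, c_max)
    gamma = (1 / eta_c + 1 / eta_v) / 2  # continuity correction
    se = sqrt(p_c * (1 - p_c) / eta_c + p_v * (1 - p_v) / eta_v)
    return phi((p_c - p_v - gamma) / se - Z_ALPHA)

print(round(approx_power_repeated(5, 0.5, 0.9, 1), 3),
      round(approx_power_repeated(5, 0.5, 0.9, 20), 3))
```

For C[max] = 1, the expected number of challenges equals n and γ equals 1/n, so the single-challenge case is recovered automatically.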
Obviously, the power depends on the efficacy of the vaccine (called the "effect size" in the context of power analysis) and on the number of host animals used in the experiment. In the power analysis we performed, we assumed that we had equal numbers of unvaccinated and vaccinated animals, and that all animals within a group were equally susceptible to infection. Lastly, we assumed that the vaccine was "leaky" [29,30], i.e., that the susceptibility of vaccinated animals was lower than that of the unvaccinated control animals by a constant factor. In virtual experiments, we then challenged each (virtual) animal once with a challenge dose of one ID[50], the dose at which, on average, 50% of unvaccinated animals become infected after a single challenge. Using a one-tailed Fisher's exact test, we tested whether the fraction of infected animals in the vaccinated group was significantly lower than in the control group. Performing 100,000 such virtual experiments for a given number, n, of animals per group, we estimated the statistical power as the fraction of virtual experiments that yielded significant results (significance level α = 0.05). The results of this power analysis are shown by the green curves in Figure 2. We calculated the power for vaccine efficacies of 67%, 80%, and 90%. We found that, even for the highest vaccine efficacy of 90%, the single low-dose challenge design required more than 20 animals per group to reach a statistical power of 95%. Thus, the single low-dose challenge design is not feasible, or at least not practical, for assessing the efficacy of a vaccine or other preventive interventions in animals.

Figure 2. Power Analysis for the Repeated Low-Dose Challenge Design and the Single Low-Dose Challenge Design
In our virtual experiments, we set the challenge dose equal to the ID[50], and assumed that the vaccine efficacy was 67% (dotted lines), 80% (dashed lines), or 90% (solid lines).
The graph shows the statistical power of the repeated low-dose challenge design (black lines) and the single low-dose challenge design (green lines) for a given number of animals per group, as determined from 100,000 virtual experiments. If the vaccine is 90% effective, the statistical power of the repeated low-dose challenge design exceeds 95% with only five animals per group, compared to only 15% for the single low-dose challenge design.

Repeated Low-Dose Challenge Design Does Not Require Large Numbers of Animals

We propose an alternative design involving repeated challenges of individual animals with low doses, which circumvents the main disadvantage of the single low-dose challenge design: the large number of host animals required. Repeated challenges effectively "recycle" host animals, thus increasing the statistical power of the experiment. In addition to increasing the statistical power of the experimental design, repeated challenges recapitulate the circumstances of human exposure much more realistically than single challenges do. In this alternative design, the efficacy of a vaccine can be estimated by measuring the difference in the number of challenges needed to infect vaccinated versus unvaccinated hosts (see Figure 1B). To show that this alternative design does not require unfeasibly large numbers of animals, we performed a statistical power analysis (see Methods). As for the single low-dose design, we assumed that we had equal numbers of unvaccinated and vaccinated animals, and that all animals within a group were equally susceptible to infection. We further made the important assumption that the susceptibility of an individual animal was independent of how often the animal had been unsuccessfully challenged previously. This assumption is commonly adopted in statistical models that are used to estimate the transmission rate of HIV [14–17].
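The full power-analysis loop, simulate an experiment, test it, repeat, can be condensed into a short script. This illustrative Python version (function names ours; the paper's own implementation is the R script in Protocol S1) uses a reduced number of replicates for speed:

```python
import random
from math import comb

def fisher_one_tailed(inf_a, tot_a, inf_b, tot_b):
    """One-tailed Fisher's exact test on a 2x2 table: probability that
    group b shows inf_b or fewer successes, conditioning on the margins."""
    total_inf = inf_a + inf_b
    denom = comb(tot_a + tot_b, total_inf)
    num = sum(comb(tot_b, k) * comb(tot_a, total_inf - k)
              for k in range(inf_b + 1) if total_inf - k <= tot_a)
    return num / denom

def repeated_design(n, p, c_max, rng):
    """Return (total challenges, infected animals) for one group."""
    eta = iota = 0
    for _ in range(n):
        for _ in range(c_max):
            eta += 1
            if rng.random() < p:
                iota += 1
                break
    return eta, iota

def mc_power(n, p_c, ve_s, c_max, reps, seed):
    """Fraction of simulated experiments with a significant result."""
    rng = random.Random(seed)
    p_v = (1 - ve_s) * p_c
    hits = 0
    for _ in range(reps):
        eta_c, iota_c = repeated_design(n, p_c, c_max, rng)
        eta_v, iota_v = repeated_design(n, p_v, c_max, rng)
        if fisher_one_tailed(iota_c, eta_c, iota_v, eta_v) < 0.05:
            hits += 1
    return hits / reps

power = mc_power(n=5, p_c=0.5, ve_s=0.9, c_max=20, reps=400, seed=4)
print(power)
```

With a vaccine efficacy of 0.9 and five animals per group, this sketch yields a high estimated power, in line with the repeated-design results described above.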
By making this assumption, we ignored the possibility that an unsuccessful challenge may induce some degree of immunity against subsequent challenges. We would like to emphasize, however, that this assumption is not crucial for our argument unless the degree of induced immunity is very high. Lastly, we again assumed that the vaccine was leaky [29,30]. In virtual experiments, we then challenged the (virtual) animals repeatedly with a challenge dose of one ID[50], allowing a maximum of 20 challenges per animal. Table 3 shows the outcome of one such virtual experiment. We analyzed the outcome of the virtual experiments with a one-tailed Fisher's exact test (see Methods), and again estimated the statistical power by performing 100,000 such virtual experiments for a given number, n, of animals per group.

Table 3. Outcome of One Virtual Repeated Low-Dose Challenge Experiment

Figure 2 shows the statistical power of the repeated low-dose challenge design as a function of the number of animals per group for varying vaccine efficacies (black lines), and compares it to the statistical power of the single low-dose challenge design (green lines). The statistical power achieved with the repeated low-dose challenge design is generally higher than that achieved with the single low-dose challenge design. If the vaccine is 90% effective (VE[S] = 0.9), i.e., it reduces the susceptibility by a factor of ten, as few as five animals per group suffice to achieve more than 95% statistical power. In contrast, in single low-dose challenge experiments with the same number of animals per group, the statistical power is only 15%. Thus, repeated low-dose challenge experiments are expected to require far fewer animals than single low-dose challenge experiments.

How Often Should Virus Challenges Be Repeated?
To investigate how the maximum number of challenges affects the statistical power, we plotted the power against C[max] for trials involving six and 12 animals per group (Figure 3). We found that the power increased with C[max], but for high C[max] the returns diminished considerably. The fewer animals per group, n, the higher the maximum number of challenges, C[max], at which the power effectively saturated. Even for low numbers of animals per group, however, the maximum number of challenges needed to unfold the full potential of the repeated low-dose challenge design was in a feasible range.

Figure 3. Impact of the Maximum Number of Challenges, C[max], on the Statistical Power
For this plot we assumed trials with vaccine efficacies of VE[S] = 0.67 (dotted line), VE[S] = 0.8 (dashed line), and VE[S] = 0.9 (solid line). In (A) we calculated the statistical power for six animals per group, n = 6, and in (B) for 12 animals per group, n = 12.

Impact of Animal-to-Animal Variation in Susceptibility

To study how potential heterogeneity in susceptibility affects the power of low-dose challenge trials, we simulated experiments in which each animal was assigned an individual infection probability (see Methods). In these simulations, the degree of heterogeneity was measured by the coefficient of variation, CV, of the susceptibility distributions. Figure 4A shows susceptibility distributions for three different values of CV.

Figure 4. Impact of Heterogeneity in Susceptibility on the Statistical Power
(A) Susceptibility distributions for different levels of heterogeneity, measured by the coefficient of variation, CV, of the susceptibility distribution. The vaccine is assumed to be 80% effective, VE[S] = 0.8. (B) The statistical power depends on the coefficient of variation, CV, for the repeated low-dose challenge design (black lines) and the single low-dose challenge design (green lines).
For these plots we assumed trials with six and 12 animals per group and vaccine efficacies of VE[S] = 0.67 (dotted lines), VE[S] = 0.8 (dashed lines), and VE[S] = 0.9 (solid lines).

We extended our power analysis by considering the impact of the heterogeneity parameter CV on the statistical power (Figure 4B). We found that the statistical power of the single low-dose challenge design was almost unaffected by animal-to-animal variation in infection probability, whereas, for the repeated low-dose challenge design, the power decreased with increasing heterogeneity. Importantly, however, the power did not decrease linearly with heterogeneity: it remained largely stable in the range 0 < CV < 0.3 and dropped mainly for CV > 0.3. Thus, over a wide range of potential animal-to-animal variation in susceptibility, low-dose challenge designs are sufficiently powered, and the power of repeated low-dose experiments is superior to that of single low-dose challenge experiments.

Preclinical studies assessing the efficacy of potential vaccines, microbicides, or systemic chemoprophylaxis are usually conducted with very high virus challenge doses, which result in infection with certainty. Since these high challenge doses do not reflect the low probability of HIV transmission in humans, vaccines or prophylactic treatment strategies that are effective against "real life" exposures may go undetected in high-dose challenge experiments. For example, zidovudine was found to prevent a large fraction of perinatal HIV infections [23], even though studies in animal models, conducted with high challenge doses, could not establish any protection against infection by zidovudine [20–22]. In this paper, we investigated how efficacy trials of vaccines and preventive treatment could be conducted with low challenge doses in animal models. We showed that the repeated low-dose challenge design is expected to require far fewer experimental animals than commonly believed.
It may therefore be feasible to conduct trials with low challenge doses, which simulate human exposure to HIV more realistically and allow vaccine or treatment efficacy to be assessed more directly and sensitively than high-dose challenge experiments do. Owing to the concerns with high challenge doses, several research groups, including our own, have started to develop low-dose challenge models [31–34]. In these preliminary studies, infection could be achieved by challenging macaques intra-rectally [31], intra-vaginally [32,34], or orally [33]. Since adopting low-dose challenge approaches has far-reaching consequences for the design of efficacy trials of vaccines or preventive treatment in animal models, we would like to discuss how some important aspects of trial design, such as transient infections, the challenge schedule, the route of infection, and the phenotype and dose of the challenge strain, should be dealt with and could be resolved.

Using virus challenge doses that do not give rise to infection with certainty, one has to define carefully what one means by successful infection. This question is of particular importance in the repeated low-dose challenge design, because the efficacy of a preventive intervention is estimated on the basis of the number of challenges needed to infect an individual animal. Low-dose challenges have been observed to give rise to transiently detectable viremia [32–34]. Since transient infection is much more likely to lead to immunization [35], and thus to lower probabilities of infection in subsequent challenges, we suggest counting transient viremia as successful infection and not re-challenging animals that were transiently infected. The time interval between challenges is also an essential parameter in the design of repeated low-dose challenge experiments. In the four ongoing repeated low-dose challenge studies [31–34], different approaches have been taken, with time intervals ranging from hours to a week.
There may be logistical reasons for choosing short time intervals between challenges, but from a statistical standpoint the time intervals should be large enough to allow the identification of the challenge that gives rise to infection. Otherwise, the statistical power of the experimental design will be suboptimal and a beneficial effect of the vaccine candidate may be missed. In parallel to using more realistic, lower challenge doses, other crucial parameters of the experimental infection process, such as the route of transmission and the coreceptor usage of the challenge virus, should also be chosen to be as realistic as possible. Thus, we propose intra-vaginal or intra-rectal challenge in experiments that aim to assess a vaccine or prophylactic treatment against sexual transmission of HIV. Further, we suggest using challenge viruses that utilize CCR5 as coreceptor, such as SHIV-SF162P3 [36], rather than the standard strain SHIV89.6P, which has been found to use mainly CXCR4 [37,38]. These more realistic choices of the route of infection and coreceptor usage will permit the assessment of the efficacy of the preventive intervention in a setting that more accurately reflects human exposure to HIV, and will enable careful investigation of the processes that give rise to infection. The challenge dose in a low-dose challenge study is another parameter of crucial importance. Although the most realistic choice would be a challenge dose that gives rise to infection with a probability of approximately 0.0005–0.10 [14–17], such extremely low doses would require unfeasibly large numbers of repeated challenges per animal.
Moreover, there is substantial variation in transmission rates due to differences in factors such as virus load or the presence of other infections of the genital tract [15–18], and theoretical studies suggest that preventing the transmission events that occur with higher probability would have a disproportionately large effect on controlling the epidemic [39]. To maximize their epidemiological relevance, low-dose challenge experiments should therefore involve challenge doses that reflect transmission probabilities at the upper end of the spectrum. As a compromise between the practicality of high doses and the sensitivity associated with realistically low doses, we propose the ID[50]. The ID[50] can be estimated using well-established nonparametric methods such as Spearman-Kärber [40] or single-parameter methods [41], and software is available, such as the freely distributed package ID50 developed by John Spouge (http://www.ncbi.nlm.nih.gov/CBBresearch/Spouge/Virology/), which allows automated estimation of the ID[50] from data generated in titration experiments. The inability to detect sterilizing immunity in high-dose challenge experiments led to a shift of focus towards indirect effects of vaccine candidates on the pathogenicity of the infection and the infectiousness of the vaccinee. This shift of focus required the development of novel statistical models that allow the estimation of these indirect effects [42,43]. Will the estimation of vaccine efficacy in repeated low-dose challenge studies also require the development of novel statistical techniques? The answer to this question depends on how much the realities of the infection process deviate from our idealized model. There are three potential deviations. First, we assumed in large parts of this study that the susceptibilities to infection were equal for all animals within each group. This is almost certainly not the case.
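The Spearman-Kärber estimator mentioned here has a particularly simple form when the titration series spans 0% to 100% infection: the log ID[50] is the average of the interval midpoints, weighted by the increase in the infected fraction across each interval. An illustrative Python sketch (function name ours, data hypothetical):

```python
def spearman_karber(log_doses, prop_infected):
    """Spearman-Kaerber estimate of the log ID50 from a titration series.
    Assumes monotone data spanning 0% to 100% infection, with log_doses
    sorted ascending and prop_infected the fraction infected at each dose."""
    assert prop_infected[0] == 0.0 and prop_infected[-1] == 1.0
    m = 0.0
    for i in range(len(log_doses) - 1):
        midpoint = (log_doses[i] + log_doses[i + 1]) / 2
        m += midpoint * (prop_infected[i + 1] - prop_infected[i])
    return m

# titration with 10-fold dilution steps (hypothetical data)
log_id50 = spearman_karber([0.0, 1.0, 2.0, 3.0], [0.0, 0.25, 0.75, 1.0])
print(log_id50)
```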
Although we have shown that low-dose challenge experiments are sufficiently powered even if there is substantial animal-to-animal variation in susceptibility, we did not develop the statistical techniques that would allow the estimation of this variation. The extent of animal-to-animal variation in susceptibility can, in principle, be estimated, but this will probably require larger numbers of animals than the estimation of vaccine efficacy. Second, the vaccine may affect the susceptibility of individual animals differently. While we assumed in the present study that the vaccine is leaky, i.e., that the susceptibility is reduced by a constant factor in each animal, other modes of action of a vaccine are possible. In particular, some animals could be completely protected by vaccination, while others may remain completely susceptible. This mode of action is referred to as all-or-none [29,30]. Statistical methods based on maximum likelihood approaches exist that allow the determination of the mode of action of a given vaccine. However, these methods are based on large sample asymptotics, and exact methods will have to be developed to analyze the outcome of low-dose challenge experiments that involve small numbers of animals. Last, it will have to be determined whether the probability of infection changes with the number of challenges performed in a given animal, or, to put it differently, whether the animal has a “memory” of previous challenges. In our analysis, we assumed that the susceptibility of an animal did not change from challenge to challenge. If the probability of infection changes significantly with the number of challenges, however, the development of novel statistical models that take such changes into account will be necessary to adequately estimate vaccine efficacy. 
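The difference between the leaky and the all-or-none modes of action can be made concrete with a small simulation. In this hedged Python sketch (function names ours), both vaccines reduce the first-challenge infection probability identically, but under repeated challenges the leaky vaccine lets almost every animal become infected eventually, whereas the all-or-none vaccine leaves a fixed fraction permanently protected:

```python
import random

def first_infection_challenge(p, c_max, rng):
    """Number of the challenge at which infection occurs, or None if never."""
    for c in range(1, c_max + 1):
        if rng.random() < p:
            return c
    return None

def simulate_group(n, ve_s, p_c, mode, c_max, rng):
    """Leaky: every animal has per-challenge probability (1 - ve_s) * p_c.
    All-or-none: a fraction ve_s is fully protected, the rest fully susceptible."""
    results = []
    for _ in range(n):
        if mode == "leaky":
            p = (1 - ve_s) * p_c
        else:  # all-or-none
            p = 0.0 if rng.random() < ve_s else p_c
        results.append(first_infection_challenge(p, c_max, rng))
    return results

rng = random.Random(5)
leaky = simulate_group(1000, 0.8, 0.5, "leaky", 20, rng)
aon = simulate_group(1000, 0.8, 0.5, "all-or-none", 20, rng)
frac_leaky = sum(r is not None for r in leaky) / 1000
frac_aon = sum(r is not None for r in aon) / 1000
print(round(frac_leaky, 2), round(frac_aon, 2))
```

With 20 challenges allowed, the two modes of action thus produce very different infection fractions even though their single-challenge efficacies coincide, which is why repeated-challenge data can, in principle, discriminate between them.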
In addition to the potential to assess vaccine or microbicide efficacy more sensitively and in a more realistic setting, a low-dose challenge approach may enable us to answer questions that cannot even be asked in high-dose challenge models. Some of the most relevant of these questions relate to the effect of challenges that do not lead to infection. If a low-dose challenge does not give rise to infection, where was the virus blocked? Did the virus fail to establish an infection at all? Or did it replicate transiently, but was cleared by the host's immunity? And, very importantly, is an unsuccessfully challenged animal partially immunized against further challenges, or, alternatively, do unsuccessful challenges facilitate future infection by "seeding" animals with defective proviruses that may recombine with complementing viruses upon subsequent exposures [44]? The answers to these questions would greatly enhance our understanding of HIV transmission and pathogenesis, and thus would provide further guidance toward an effective vaccine or microbicide. Furthermore, by assessing the protection against infection directly, we may be able to discern the specific types and levels of vaccine-induced cellular and humoral immune responses associated with sterilizing immunity [13]. This would provide important benchmarks by which to judge new vaccine candidates, and could also allow retrospective analysis of vaccine candidates evaluated earlier in high-dose challenge studies. In conclusion, the repeated low-dose challenge approach may enable us to assess the potential efficacy of vaccines and prophylactic treatment strategies more realistically and more sensitively than the standard high-dose challenge approach. The increased sensitivity may allow us to more rapidly identify interventions that significantly reduce the transmission of low-dose infections that characterize the natural spread of HIV.

Supporting Information

Protocol S1.
R-Function for the Calculation of the Statistical Power of Low-Dose Challenge Experiments (8 KB TXT). We thank Rustom Antia, Steven Self, Mark Tanaka, and Andrew Yates for discussion. RRR was supported by the Deutsche Forschungsgemeinschaft grant number Re 1618/1–2 and the National Institutes of Health (NIH) grant AI-49334. The support of the NIH National Institute of Allergy and Infectious Disease grant 1 R21 AI54260 is gratefully acknowledged. The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript. Author Contributions RRR performed the analysis. RRR, IML, MBF, and SIS designed the study and wrote the paper. 1. 1. Joint United Nations Programme on HIV/AIDS (2004 December) AIDS epidemic update 2004. Geneva: Joint United Nations Programme on HIV/AIDS. Available: http://www.unaids.org/wad2004/report.html. Accessed 24 June 2005. 2. 2. Garber DA, Silvestri G, Feinberg MB (2004) Prospects for an AIDS vaccine: Three big questions, no easy answers. Lancet 4: 397–413. 3. 3. Staprans SI, Feinberg MB (2004) The roles of nonhuman primates in the preclinical evaluation of candidate AIDS vaccines. Expert Rev Vaccines 3: S5–32. 4. 4. Amara RR, Villinger F, Altman JD, Lydy SL, O'Neil SP (2001) Control of a mucosal challenge and prevention of AIDS by a multiprotein DNA/MVA vaccine. Science 292: 69–74. 5. 5. Barouch DH, Santra S, Schmitz JE, Kuroda MJ, Fu TM, et al. (2000) Control of viremia and prevention of clinical AIDS in rhesus monkeys by cytokine-augmented DNA vaccination. Science 290: 6. 6. Rose NF, Marx PA, Luckay A, Nixon DF, Moretto WJ (2001) An effective AIDS vaccine based on live attenuated vesicular stomatitis virus recombinants. Cell 106: 539–549. 7. 7. Shiver JW, Fu TM, Chen L, Casimiro DR, Davies ME (2002) Replication-incompetent adenoviral vaccine vector elicits effective anti-immunodeficiency-virus immunity. Nature 415: 331–335. 8. 8. 
Patient Summary

Before trials of medicines or vaccines are done in humans, most are tested in animals. There are many controversies about these animal trials, including whether they mimic the human disease accurately. In testing vaccines for HIV, animals are mostly given high doses of the virus, whereas in real life people are often repeatedly exposed to small amounts of the virus. No vaccine that has been tested against HIV prevents infection in animals. It is possible that some of this lack of success may be due to the design of the vaccine trials rather than the vaccine itself.

What Did the Authors Do?
They wanted to look at experimental designs that allowed assessment of protection against infection with lower, and thus more realistic, doses of virus. Previously, researchers had suggested that many animals would be needed for this type of study. The authors wanted to see whether this was correct. They developed a model to test how well single and multiple low-dose experiments performed. They did this by simulating the experiments with doses of virus, assessing the results, and then repeating this procedure 100,000 times to estimate how valid a given experimental design was. Their modeling showed that by repeatedly giving animals low doses of virus, it was possible to use a smaller number of animals than was needed for trials with a single low dose.

What Do These Results Mean?

It may be possible to use these results to plan trials of vaccines in animals that mimic more closely the way that humans are exposed to HIV, and hence the results may be more reliable for humans.

Where Can I Get More Information?

MedlinePlus has a great deal of information on HIV. The Body has information targeted to both patients and health professionals.
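The repeated low-dose design described above can be sketched as a small simulation. The per-exposure infection probabilities, arm size, and exposure cap below are illustrative assumptions, not values from the study, and the function names are ours.

```python
import random

def exposures_until_infection(p_per_exposure, max_exposures, rng):
    """Challenge an animal repeatedly with low doses; each exposure
    infects independently with probability p_per_exposure.
    Returns (number of exposures used, whether infection occurred)."""
    for k in range(1, max_exposures + 1):
        if rng.random() < p_per_exposure:
            return k, True
    return max_exposures, False

def simulate_trial(p_control, p_vaccine, n_per_arm, max_exposures, rng):
    """Infected counts in the control and vaccine arms of one simulated trial."""
    control = sum(exposures_until_infection(p_control, max_exposures, rng)[1]
                  for _ in range(n_per_arm))
    vaccine = sum(exposures_until_infection(p_vaccine, max_exposures, rng)[1]
                  for _ in range(n_per_arm))
    return control, vaccine

rng = random.Random(1)
# Hypothetical per-exposure infection probabilities and arm size.
trials = [simulate_trial(0.30, 0.10, 8, 12, rng) for _ in range(10000)]
mean_control = sum(c for c, v in trials) / len(trials)
mean_vaccine = sum(v for c, v in trials) / len(trials)
print(f"mean infected per 8 controls:  {mean_control:.2f}")
print(f"mean infected per 8 vaccinees: {mean_vaccine:.2f}")
```

Repeating this many times for different arm sizes is how one can estimate, as the authors did, how small a trial can be while still reliably separating the two arms.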
Lake Clarke Shores, FL Math Tutor

Find a Lake Clarke Shores, FL Math Tutor

...I help students improve their public speaking capabilities by focusing on audience analysis, speech preparation, speech delivery, and critique. I specialize in helping foreign nationals improve their English skills. In the past year I have worked with a South Korean, a Lithuanian, a Colombian, a Swiss national and a Brazilian.
17 Subjects: including geometry, accounting, algebra 1, English

...This translated to working with around 700 students each week, helping them improve their level of conversational English. Through this experience, I worked with 7-8 native Korean teachers in planning lessons and carrying them out in the classroom. It was a year of growth and change for the better, to say the least.
15 Subjects: including geometry, writing, literature, algebra 1

...Active engagement with the material is essential for success! If you're having trouble working up enthusiasm for the subject (don't worry, I used to hate math too!), I try my best to make the subject interesting for you. That's because I know from experience that you won't do something you don't want to.
27 Subjects: including logic, linear algebra, discrete math, physics

I have been tutoring in Boca Raton for the last 10 years, and references are available on request. I mainly tutor math, mostly junior high and high school subjects, and also tutor college prep, both ACT and SAT. I have also tutored SSAT.
10 Subjects: including trigonometry, algebra 1, algebra 2, geometry

...College and High School Awards: National Merit Scholarship Finalist, New College Excellence Award, AP Scholar With Distinction, Bright Futures Scholarship. I have experience as a writing tutor at an undergraduate college, working with a number of ESOL students. I have training in working with students on English basics such as grammar and sentence construction.
40 Subjects: including algebra 2, American history, calculus, European history
On the Expected Discounted Penalty Function for a Markov Regime-Switching Insurance Risk Model with Stochastic Premium Income

Discrete Dynamics in Nature and Society, Volume 2013 (2013), Article ID 320146, 9 pages. Research Article.

^1School of Mathematics, Shandong University, Jinan 250100, China
^2School of Insurance, Shandong University of Finance and Economics, Jinan 250014, China

Received 3 December 2012; Accepted 30 January 2013. Academic Editor: Fuyi Xu.

Copyright © 2013 Wenguang Yu. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

We consider a Markovian regime-switching risk model (also called the Markov-modulated risk model) with stochastic premium income, in which the premium income and the claim occurrence are driven by the Markovian regime-switching process. The purpose of this paper is to study the integral equations satisfied by the expected discounted penalty function. In particular, the discount interest force process is also regulated by the Markovian regime-switching process. Applications of the integral equations are given for the Laplace transform of the time of ruin, the deficit at ruin, and the surplus immediately before ruin occurs. For the exponential distribution, explicit expressions for these quantities are obtained. Finally, a numerical example is given to illustrate the effect of the related parameters on these quantities.

1. Introduction

In recent years, ruin theory under regime-switching models has become a popular topic. This model was proposed in Reinhard [1] and Asmussen [2]. Asmussen calls it a Markov-modulated risk model, and it generalizes the classical compound Poisson risk model.
The primary motivation for this generalization is to enhance the flexibility of the model parameter settings for the classical risk process. The examples usually given are weather conditions and epidemic outbreaks, even though seasonality would play a role and can probably not be modeled by a Markovian regime-switching model. Many papers on ruin probabilities and the expected discounted penalty function under the Markovian regime-switching risk model have been published. Some works in this area include Ng and Yang [3], Li and Lu [4], Lu and Li [5], Zhang [6], Zhu and Yang [7, 8], Yu [9], Dong et al. [10], Wei et al. [11], Elliott et al. [12], Ma et al. [13], Dong and Liu [14], Mo and Yang [15], Zhang and Siu [16], Li and Ren [17], and the references therein.

All of the research mentioned above takes only a constant interest force into consideration and does not account for the impact of changes in the external environment. This provides the practical motivation to develop ruin theory with stochastic interest force, which has attracted much attention in the actuarial science literature in recent years. See, for example, Ouyang and Yan [18], Cai [19], Zhao and Liu [20], Zhao et al. [21], and Li and Wu [22]. But these papers consider only the case in which the interest force process, from beginning to end, is described by a single process. Since the risk management of an insurance company is a long-term program, such models cannot capture the feature that interest policies may need to change if the economic or political environment changes. So it is natural to introduce a stochastic interest force regulated by the Markovian regime-switching process in insurance risk analysis. Zhang and Zhao [23] first considered the expected discounted penalty function in a classical risk model in which the discount interest force process was modeled by the Markovian regime-switching process.
Xie and Zou [24] study a compound binomial risk model with a constant dividend barrier under stochastic interest rates. Two types of individual claims, main claims and by-claims, are defined, where every by-claim is induced by the main claim and may be delayed for one time period with a certain probability. In the evaluation of the expected present value of dividends, the interest rates are assumed to follow a Markov chain with finite state space. Inspired by the work of Zhang and Zhao [23], in this paper we generalize the risk model and assume that the claim process, the premium income process, and the stochastic interest force process are independently regulated by the Markovian regime-switching process.

The rest of this paper is organized as follows. In Section 2, the risk model and the stochastic interest force model are introduced. In Section 3, given the initial surplus and the initial environment state, the integral equation for the expected discounted penalty function is derived. In Section 4, for the exponential distribution, we obtain explicit expressions for the expected discounted penalty function. The results are illustrated by numerical examples in Section 5. Section 6 concludes the paper.

2. The Risk Model and the Stochastic Interest Force Model

Throughout the paper, we let be a complete probability space with a filtration satisfying the usual conditions, containing all random variables and stochastic processes in our discussion. Let denote the surplus of an insurance company, described as follows: where is the initial capital, is the amount of the th claim, and is the amount of the th premium. represents the number of claims occurring in , and represents the number of premium arrivals up to time , both of which are described by the Markovian regime-switching process with intensity processes and , respectively. Let the intensity processes and be homogeneous -state and -state Markovian processes, respectively.
The number of claims is assumed to follow a Poisson distribution with parameter , and the corresponding claim amounts have distribution when , for . Similarly, the number of premium arrivals has the Poisson distribution with parameter , and the corresponding premiums have distribution when , for . We further assume that all states of the process communicate, which is also the case for the process . The safety loading condition holds. Furthermore, we assume the processes , , , and are mutually independent.

Let be the rate at which the process leaves the state and the probability that it then goes to ; that is, the intensity of transition from to is given by . Similarly, let be the rate at which the process leaves the state and the probability that it then goes to ; that is, the intensity of transition from to is given by .

The stochastic interest force function governed by the Markovian regime-switching process is defined by (Zhang and Zhao [23]) where is a homogeneous, irreducible, and recurrent Markovian process with finite state space with intensity matrix , where for . As pointed out by Asmussen [2], in health insurance, sojourns of could be certain types of epidemics, or, in automobile insurance, these could be weather types (e.g., icy, foggy, etc.). The state of interest is governed by . When the state of is , the interest force function is where , , and are nonnegative constants, is a standard Wiener process, and is a Poisson process with parameter . Moreover, we also assume that , and are independent of each other. Since stochastic fluctuation of interest cannot be large in reality, without loss of generality we may assume that .

Then the expected discounted penalty function with stochastic discount interest force driven by the Markovian regime-switching process is defined as where is the indicator function, denotes the time of ruin, is the surplus immediately prior to ruin, is the deficit at ruin, and is a nonnegative bounded function on .
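The surplus dynamics of this section can be sketched with a small Monte Carlo simulation: in each environment state, claims and premium arrivals are Poisson with state-dependent rates, and the environment itself switches at an exponential rate. Because the paper's own symbols and parameter values were lost in extraction, every numeric value below is an illustrative assumption (chosen so that premium income exceeds expected claim outflow in both states, i.e., a safety loading holds), and the function name is ours.

```python
import random

def simulate_ruin_time(u, horizon, seed=0):
    """One path of a two-state Markov-modulated surplus process.

    Claims and premium arrivals are Poisson with state-dependent rates;
    claim and premium sizes are exponential.  Returns the ruin time,
    or None if the surplus stays nonnegative up to `horizon`.
    """
    rng = random.Random(seed)
    # state -> (claim rate, mean claim, premium rate, mean premium, switch rate)
    params = {0: (1.0, 1.0, 2.0, 0.8, 0.5),
              1: (1.6, 0.8, 2.0, 0.8, 0.5)}
    t, x, state = 0.0, float(u), 0
    while True:
        lam_c, mu_c, lam_p, mu_p, q = params[state]
        total = lam_c + lam_p + q
        t += rng.expovariate(total)          # time to the next event
        if t >= horizon:
            return None                      # survived the whole horizon
        r = rng.random() * total
        if r < lam_c:                        # a claim arrives
            x -= rng.expovariate(1.0 / mu_c)
            if x < 0:
                return t                     # ruin
        elif r < lam_c + lam_p:              # a premium arrives
            x += rng.expovariate(1.0 / mu_p)
        else:                                # the environment switches
            state = 1 - state

ruin_times = [simulate_ruin_time(5.0, 100.0, seed=s) for s in range(2000)]
ruin_prob = sum(rt is not None for rt in ruin_times) / len(ruin_times)
print(f"estimated ruin probability before t=100: {ruin_prob:.3f}")
```

This is only a path-level sanity check of the model; the paper's contribution is the analytic integral equation for the expected discounted penalty function, which such a simulation can at best approximate.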
We can interpret as the “stochastic discount factor.” The probability of ruin for , , and is . Obviously, can be reduced to if and .

3. The Integral Equation

In this section, we derive the integral equation for the expected discounted penalty function.

Theorem 1. Suppose that the following conditions are satisfied: (1) is continuous with respect to on ; (2) is continuous with respect to ; (3) . Then satisfies the following integral equation: where and are the distributions of the number of premiums and the claim amounts, respectively.

Proof. Consider an infinitesimal time interval , and separate the seven possible cases as follows:
(1) no claim occurs in , no change of claim state in , no premium arrival in , no change of premium state in , and no change of interest state in , denoted by ;
(2) no claim occurs in , no change of claim state in , no premium arrival in , no change of premium state in , and a change of interest state in , denoted by ;
(3) no claim occurs in , a change of claim state in , no premium arrival in , no change of premium state in , and no change of interest state in , denoted by ;
(4) one claim occurs in , no change of claim state in , no premium arrival in , no change of premium state in , and no change of interest state in , denoted by ;
(5) no claim occurs in , no change of claim state in , one premium arrival in , no change of premium state in , and no change of interest state in , denoted by ;
(6) no claim occurs in , no change of claim state in , no premium arrival in , a change of premium state in , and no change of interest state in , denoted by ;
(7) all other events, with total probability .

By conditioning on the occurrence of claims, the change of claim state in , the occurrence of premiums, the change of premium state in , and the change of interest state in , the expected discounted penalty function is equal to the sum over these cases.

First, we consider . We write as , since .
Because has independent and stationary increments, and , , and are Markovian processes, we have the first term.

In the second case, an interest state change occurs; that is, the interest state changes from the state to at the switching time , where lies in . Hence, the interest discount factor at time should be written as , and should be revised as . Therefore, by the same approach, we may obtain the second term.

Now we turn to the third term in formula (10). For , , and , we have the remaining terms. It follows from (10)–(16) that, cancelling , dividing both sides by , and taking the limit, the above equation reduces to (9).

Remark 2. If in (9), then let , and . This result reduces to the special case in which the interest process is described by stochastic interest and the premium process and the claim process are both compound Poisson processes; the corresponding integral equation satisfied by the expected discounted penalty function follows. Specially, if and , that is, in (9), then we have the simpler form.

Remark 3. If , , and in (9), denoting the nonruin probability by , then the integral equation in (9) is equivalent to the following.

4. The Explicit Results for the Exponential Claim Distribution

In this section, we consider the case where the claim amounts and premium sizes are exponentially distributed. We find that, in some specific settings, the expected discounted penalty function can be obtained explicitly. In most cases, it is difficult to obtain a precise expression for if we consider multiple states. Even if we narrow them to two states (i.e., ; at this point, we would get eight coupled equations), it would still be very hard to get an accurate expression for . For the sake of simplicity, only one state will be taken into account, that is, . The purpose of this section is to get the explicit solution to prepare for the numerical calculation of the next section.

If , that is, , and if , then define , which gives the Laplace transform of the time of ruin. Generally speaking, it is not easy to derive an exact expression for .
But in some special cases, such as the exponential distribution, we can obtain an explicit form for the Laplace transform of the time of ruin.

Theorem 4. If , , , , , , and , then for , where , , , . Moreover, when , the alternative form holds.

Proof. By the methods of Yao et al. [25], let . The change of in (9) leads to (24). Taking the derivative with respect to on both sides of (24), we have (25). Differentiating the above equation with respect to again, we arrive at (26); from (24)–(26), we can obtain (27). Since , the corresponding homogeneous equation of the above differential equation with constant coefficients is (28). Its characteristic equation is (29), which has two real characteristic roots. Letting , and noting that is one special solution of (27), hence the general solution follows. Noting that , , and , , so , that is, the boundary condition, which, together with (24), implies the constants, so the result follows. When , (27) is equivalent to (35). With the same argument, we can obtain (36).

Similarly, we can take many other suitable in (7) when . Then the corresponding integral equations can be obtained. In general, it is not easy to derive exact solutions for the equations. However, when and are exponential, explicit expressions for the discounted expectation of the amount of surplus immediately before ruin occurs and for the deficit at ruin can be obtained. The proofs are completely similar to that of Theorem 4, and so they are omitted here.

Remark 5. Let , and denote , which can be considered as the discounted expectation of the surplus immediately before ruin occurs. If , (9) can be re-expressed as (37).

Theorem 6. If , , , and , then (38) holds, where the constants are as stated.

Remark 7. Let , and denote , which can be considered as the discounted expectation of the deficit at ruin. If , (9) can be re-expressed as (40).

Theorem 8. If , , , and , then, for , (41) holds.

5. Numerical Illustrations

Taking into account the importance of the interest rates and for simplicity of discussion, we only consider the effect of stochastic interest on , , and .
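As a cross-check on closed forms like Theorem 4, the Laplace transform of the ruin time can also be estimated by straightforward simulation. The sketch below uses a single environment state (as in Section 4) with exponential claim and premium sizes; all rates are illustrative assumptions, not the paper's values, and the function names are ours.

```python
import math
import random

def ruin_time(u, horizon, rng):
    """One path with a single environment state: Poisson claims and
    Poisson premium income, both with exponential sizes.
    Returns the ruin time, or None if no ruin occurs before `horizon`."""
    lam_claim, mean_claim = 1.5, 1.0   # claim arrival rate / mean size
    lam_prem, mean_prem = 2.0, 1.0     # premium arrival rate / mean size
    t, x = 0.0, float(u)
    while True:
        t += rng.expovariate(lam_claim + lam_prem)
        if t >= horizon:
            return None
        if rng.random() < lam_claim / (lam_claim + lam_prem):
            x -= rng.expovariate(1.0 / mean_claim)   # a claim
            if x < 0:
                return t
        else:
            x += rng.expovariate(1.0 / mean_prem)    # a premium

rng = random.Random(42)
times = [ruin_time(2.0, 200.0, rng) for _ in range(20000)]
# Monte Carlo estimate of the Laplace transform E[e^{-delta*T}; T < infinity]:
phis = []
for delta in (0.0, 0.05, 0.10):
    phi = sum(math.exp(-delta * t) for t in times if t is not None) / len(times)
    phis.append(phi)
    print(f"delta={delta:.2f}: phi = {phi:.3f}")
# At delta = 0 this is just the ruin probability; it shrinks as delta grows,
# matching the qualitative trends reported in the numerical section.
```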
Let us give some data analysis of the theoretical results in formula (21), so that we can see the effect of the stochastic interest factors. We first need to determine the values of the parameters in formula (21). For convenience, we suppose that , , , and . The constant interest force is assumed to be 1.5, 2, and 2.5; the coefficient starts at 0 and ends at 1.5, evenly spaced by the value 0.1; the coefficient takes the values 0, 0.5, and 1; and the parameter is supposed to be 1. Based on formula (21), the above assumptions, and MATLAB, we get the values of under the different combinations of parameter values (see Table 1). From Table 1, we can see the trend of changes of when the other two parameters are kept unchanged:
(i) the is increased steadily with increasing when and are unchanged;
(ii) the is increased with a small decrease of when and are unchanged;
(iii) the is decreased if increases when and are unchanged.
In the same way, we can also get similar conclusions for (38) and (41), so we omit the detailed description here. See Tables 2 and 3.

6. Conclusions

We have generalized the results in Zhang and Zhao [23]. We suppose that the premium income process, the occurrence of the claims, and the interest process are each controlled by the Markov regime-switching process. We not only obtain the integral equations satisfied by the expected discounted penalty function under the stochastic interest force driven by the Markov regime-switching process, but also offer data analysis and direct interpretation based on the interest models for some special cases. These all provide insights into the effect of stochastic interest force on the expected discounted penalty function and show the importance of introducing stochastic interest force.

The author thanks the three anonymous referees for the thoughtful comments and suggestions that greatly improved the presentation of this paper.
This work was supported by the National Natural Science Foundation of China (Grant nos. 11171187 and 10921101), the National Basic Research Program of China (973 Program, Grant no. 2007CB814906), the Natural Science Foundation of Shandong Province (Grant nos. ZR2012AQ013 and ZR2010GL013), and the Humanities and Social Sciences Project of the Ministry of Education of China (Grant nos. 10YJC630092 and 09YJC910004).

References

1. J. M. Reinhard, “On a class of semi-Markov risk models obtained as classical risk models in a Markovian environment,” Astin Bulletin, vol. 14, no. 1, pp. 23–43, 1984.
2. S. Asmussen, “Risk theory in a Markovian environment,” Scandinavian Actuarial Journal, vol. 1989, no. 2, pp. 69–100, 1989.
3. A. C. Y. Ng and H. Yang, “On the joint distribution of surplus before and after ruin under a Markovian regime switching model,” Stochastic Processes and their Applications, vol. 116, no. 2, pp. 244–266, 2006.
4. S. M. Li and Y. Lu, “Moments of the dividend payments and related problems in a Markov-modulated risk model,” North American Actuarial Journal, vol. 11, no. 2, pp. 65–76, 2007.
5. Y. Lu and S. M. Li, “The Markovian regime-switching risk model with a threshold dividend strategy,” Insurance, vol. 44, no. 2, pp. 296–303, 2009.
6. X. Zhang, “On the ruin problem in a Markov-modulated risk model,” Methodology and Computing in Applied Probability, vol. 10, no. 2, pp. 225–238, 2008.
7. J. X. Zhu and H. L. Yang, “Ruin theory for a Markov regime-switching model under a threshold dividend strategy,” Insurance, vol. 42, no. 1, pp. 311–318, 2008.
8. J. X. Zhu and H. L. Yang, “On differentiability of ruin functions under Markov-modulated models,” Stochastic Processes and their Applications, vol. 119, no. 5, pp. 1673–1695, 2009.
9. W. G. Yu, “A m-type risk model with Markov-modulated premium rate,” Journal of Applied Mathematics and Informatics, vol. 27, no. 5-6, pp. 1033–1047, 2009.
10. H.-L. Dong, Z.-T. Hou, and X.-N. Zhang, “The probability of ruin in a kind of Markov-modulated risk model,” Chinese Journal of Engineering Mathematics, vol. 26, no. 3, pp. 381–388, 2009.
11. J. Q. Wei, H. L. Yang, and R. M. Wang, “Optimal threshold dividend strategies under the compound Poisson model with regime switching,” Stochastic Analysis with Financial Applications, vol. 65, pp. 413–429, 2011.
12. R. J. Elliott, T. K. Siu, and H. L. Yang, “Ruin theory in a hidden Markov-modulated risk model,” Stochastic Models, vol. 27, no. 3, pp. 474–489, 2011.
13. X.-M. Ma, K. Luo, G.-M. Wang, and Y.-J. Hu, “Constant barrier strategies in a two-state Markov-modulated dual risk model,” Acta Mathematicae Applicatae Sinica—English Series, vol. 27, no. 4, pp. 679–690, 2011.
14. J. G. Dong and G. X. Liu, “Joint distribution of the supremum, infimum and number of zeros in the Markov-modulated risk model,” Chinese Journal of Applied Probability and Statistics, vol. 27, no. 5, pp. 473–480, 2011.
15. X. Y. Mo and X. Q. Yang, “Path-depict and probabilistic construction of the Markov-modulated risk model,” Acta Mathematicae Applicatae Sinica—Chinese Series, vol. 35, no. 3, pp. 385–395, 2012.
16. X. Zhang and T. K. Siu, “On optimal proportional reinsurance and investment in a Markovian regime-switching economy,” Acta Mathematica Sinica—English Series, vol. 28, no. 1, pp. 67–82, 2012.
17. S. M. Li and J. D. Ren, “The maximum severity of ruin in a perturbed risk process with Markovian arrivals,” Statistics and Probability Letters, 2013.
18. Z. S. Ouyang and Y. Yan, “Some moment's results of present value function of increasing life insurance under random interest rate,” Mathematics in Economics, vol. 20, no. 1, pp. 41–47, 2003.
19. J. Cai, “Ruin probabilities and penalty functions with stochastic rates of interest,” Stochastic Processes and their Applications, vol. 112, no. 1, pp. 53–78, 2004.
20. X. Zhao and J. E. Liu, “A ruin problem for classical risk processes under random interest force,” Applied Mathematics, vol. 20, no. 3, pp. 313–319, 2005.
21. X. Zhao, B. Zhang, and Z. Mao, “Optimal dividend payment strategy under stochastic interest force,” Quality and Quantity, vol. 41, no. 6, pp. 927–936, 2007.
22. J. Z. Li and R. Wu, “Upper bounds for ruin probabilities under stochastic interest rate and optimal investment strategies,” Acta Mathematica Sinica—English Series, vol. 28, no. 7, pp. 1421–1430, 2012.
23. B. Zhang and X. Zhao, “The expected discount penalty function under stochastic interest governed by Markov switching process,” Dynamics of Continuous, Discrete & Impulsive Systems A, vol. 14, pp. 343–348, 2007.
24. J.-H. Xie and W. Zou, “Expected present value of total dividends in a delayed claims risk model under stochastic interest rates,” Insurance, vol. 46, no. 2, pp. 415–422, 2010.
25. D. J. Yao, R. M. Wang, and L. Xu, “On the expected discounted penalty function associated with the time of ruin for a risk model with random income,” Chinese Journal of Applied Probability and Statistics, vol. 24, no. 3, pp. 319–326, 2008.
Find the speed... December 18th 2007, 10:44 AM #1

Here's a problem that I came across the other day; I can't seem to find the answer, though at first glance the question looks relatively simple. There is a 2-mile stretch of road. I drive down the first mile at an average speed of 30 mph. Find what speed I must travel down the remaining mile for my overall average speed to be 60 mph. At first I thought time = distance/speed, so 2/(30+x) = 2/60, but that yields the incorrect answer. Does it have something to do with the harmonic mean? Can someone help? I was stumped!

To drive 2 miles at an average speed of 60 mph takes 1/30 of an hour. You have already driven 1 mile at 30 mph, which took 1/30 hour. So you must drive the final mile at $\infty$ mph.

This is a variation of a Mensa question. I love this one!

Hello, free_to_fly! As Dan pointed out, this is a classic problem . . .

To average 60 mph over the 2-mile road, you must cover the distance in: $\frac{\text{2 miles}}{\text{60 mph}} \:=\:\frac{1}{30}\text{ hours} \:=\:2\text{ minutes}$

You have already driven 1 mile at 30 mph. This took you: $\frac{\text{1 mile}}{\text{30 mph}}\:=\:\frac{1}{30}\text{ hours} \:=\:2\text{ minutes}$

You have already used up all of the allotted time. It is impossible to drive the last mile in 0 minutes.
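The time-budget reasoning in the replies is easy to check numerically. Here is a small Python sketch (not from the thread) that returns the speed required on the remaining distance, or infinity when the allotted time is already spent:

```python
def required_speed(total_miles, target_avg_mph, miles_done, speed_done_mph):
    """Speed needed on the remaining miles to hit the target average,
    using time = distance / speed for each leg of the trip."""
    total_time = total_miles / target_avg_mph   # hours allowed overall
    time_used = miles_done / speed_done_mph     # hours already spent
    time_left = total_time - time_used
    if time_left <= 0:
        return float('inf')                     # no time left: impossible
    return (total_miles - miles_done) / time_left

print(required_speed(2, 60, 1, 30))  # the puzzle: no time remains -> inf
print(required_speed(2, 40, 1, 30))  # a feasible variant: about 60 mph
```

The feasible variant also shows the harmonic-mean behavior the original poster suspected: averaging 40 mph over two miles after a 30 mph first mile requires 60 mph, not 50.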
East Amwell Township, NJ Calculus Tutor

...When done effectively, reading becomes a performance art in which you kinesthetically bring conceptual relationships on dead paper to life. Trigonometry is sometimes introduced using a dull collection of problems, like those asking you to determine the height of a lighthouse based on the length ...
13 Subjects: including calculus, reading, writing, physics

...Thus, I can not only explain linear algebra to students, but also provide examples of how it is used in real-life applications. I am a PhD Computational Scientist, and in the course of my research I have written hundreds of Perl programs to solve computational problems. I have an excellent knowledge of the language, and I have experience training graduate students to program in Perl.
15 Subjects: including calculus, chemistry, statistics, biology

I am a fun, helpful, and experienced tutor for the Sciences (biology and chemistry), Math (geometry, pre-algebra, algebra, and pre-calculus), English/Grammar, and the SATs. For the SAT, I implement a results-driven and rigorous 7-week strategy. PLEASE NOTE: I only take serious SAT students who have...
26 Subjects: including calculus, chemistry, English, reading

...Sharing that with other people has been a great experience, and something I still take joy in doing. I look forward to working with you/your child! Other Hobbies: I am an avid traveler, and enjoy spending time with my family and friends. I am currently a tutor for a student struggling in Algebra I.
26 Subjects: including calculus, chemistry, physics, statistics

...I feel that getting experience teaching students one on one is the best way for me to have an immediate impact. This will especially help to personalize the teaching experience and is an effective way to create a trusting relationship. I am especially personable and I know I have the ability to...
16 Subjects: including calculus, Spanish, physics, algebra 1
Saying Nothing About ERA Estimators

If you follow the sabermetric blog/Twittersphere at all (and if you don't, why on earth are you wasting time here?), I'm sure you can figure out what prompted this post. However, I'm not going to name the metric that has generated discussion about this general topic because this post is not meant to be targeted at anyone, or to be a debunking of a particular metric, or anything other than me expressing my opinion about the construction of ERA estimators. Others have different philosophies and they are welcome to them. This is mine:

First, I find it helpful to classify the inputs and construction of each metric. This is not necessary, but the reason I find it helpful is that the ERA estimators out there are relatively diverse. Compared to the sabermetric metrics that exist for evaluating offense, they are extremely diverse. Almost all offensive rates are built around an estimate of runs created and divided by either outs or plate appearances. Almost all of them start with the traditional results-based batting line. ERA estimators, on the other hand, are all over the place. Some follow the lead of their batting cousins and use a run estimator as their base, but some are regression-based. Some use actual results, while some use batted ball data. Some use batted ball data but decide to combine the four standard categories (flyballs, line drives, groundballs, and popups) in some manner. Some assume that the pitcher has no control over what happens once the ball is put into play. Some have implicit or explicit regression built-in with regard to balls in play. Some limit themselves only to what happens when the ball is not put into play. Some estimate ERA, and some estimate total runs allowed.

You probably don't personally need more than one overall batting metric.
That doesn’t mean there shouldn’t be diversity across the sabermetric community--there's nothing wrong with having a number of intelligently designed choices, but as an individual you don’t need both wOBA and True Average--one will suffice. That is not necessarily the case with ERA estimators--sometimes you might be interested in one that is results-based, sometimes you might be interested in DIPS, sometimes you might want to venture out into the uncertain world of batted ball metrics…even when using a common construction (BsR or LW for example), there is arguably a place for two or three or more different variations based on the inputs. I believe that the most logical place to start with an ERA estimator is estimating runs. That is intentionally written to sound a little silly but it is not a philosophy shared by all developers of these metrics. Some put formulas down on the page that they would never consider using to try to estimate how many runs a team would score. I say that the place to start is with a logical run estimator. Given the team-level nature of the task, that suggests to me the use of Base Runs or another dynamic estimator, but I’m not going to argue too strenuously if you start with linear weights. This is a path which is not necessarily going to minimize your RMSE, or give the best correlation with future ERA. With respect to the latter, if your goal is to provide the best possible estimate of future ERA, your metric is not attempting to measure how well the pitcher actually performed, it’s trying to forecast how well he will perform in the future. Certain constructions will by their nature be less accurate at estimating ERA in the same period. Every step you take down the path from outcome inputs (hits, walks, home runs, etc.) to component-based inputs (ignoring the actual outcomes of balls in play, or looking at batted ball types, etc.) will cost you accuracy when the standard is same period ERA. 
However, one can still use accuracy at predicting same period ERA for methods of similar classes. Beginning the construction of the metric with a model of run scoring avoids some of the problems inherent in using actual pitcher runs allowed. I'm going to gloss over the fact that the number of runs a pitcher allows, regardless of whether it's from a base period or a future period, is always dependent upon his defense and other factors outside of his control. There are still other concerns that do not apply when looking at true team-level data. The way runs are charged to individual pitchers is biased towards pitchers who inherit baserunners at the expense of those who bequeath baserunners. In practice, that means favoring relievers at the expense of starters, although depending on the performance of the relievers who inherit baserunners, individual bequeathers might actually benefit. Thus, whenever an approach detects a reliever ERA advantage, some of it is attributable to the way runs are assigned and not to the actual effectiveness of the pitcher. It might even be possible to increase the accuracy of a metric by giving a bonus to relievers. It is entirely unclear to me what benefit this provides other than lowering RMSE. It doesn't tell you anything about how well the pitchers performed, and it certainly doesn't help you measure "true talent" any better--if that is the objective, an adjustment in the opposite direction could be warranted. Another advantage of modeling runs is that you can easily move between RA and ERA. Most sabermetricians prefer RA because of the biases present in ERA and the distortions created by reconstructing imaginary innings sans errors. It's easy to rescale from RA to ERA by multiplying by a constant like .91. While it's also easy to divide by .91 to go the other way, if the metric has been tailored to match ERA, you've baked the biases of ERA into your metric.
This could potentially be most problematic for a regression-based estimator that uses batted ball data. Even if this bias is small, it's still completely unnecessary. Finally, the issue of dynamism is one that is often misunderstood with respect to ERA estimators. SIERA trumpets its "interactive" nature in its name (which does distinguish it from FIP and other linear methods) but any metric based on the foundation of a dynamic run estimator is by nature interactive. Instead of the interactivity being limited to target categories, though, every event interacts with every other event. Singles interact with triples, walks interact with home runs, doubles interact with triples, home runs interact with outs, outs interact with themselves...you get the idea (and I think that's enough talk of events interacting with themselves). Building your metric around a run estimator does not necessarily restrict you to simply plugging in the numbers in the appropriate place. Suppose you wanted to construct a metric based on batted ball types, strikeouts, and walks. One way to go about it would be to simply go through and estimate singles, doubles, triples, homers, and outs in play based on the percentage of each batted ball type that winds up as each. So, you would end up with equations that might look something like this:

Singles = .057FB + .217GB + .516LD + .017PU

However, if you believe that you have gleaned some other insights into the relationship between events that could improve your metric (such as strikeout pitchers having lower HR/FB rates), you could still build that into your formula for estimated home runs, and plug those into the run estimator. It's more difficult than running a regression, and a more delicate balancing act (at least in terms of developing the formula), but it allows you to stay grounded in a model that estimates runs by taking a first step of, well, estimating runs.
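As a concrete (and deliberately simplified) illustration of this construction, here is a Python sketch. The singles coefficients are the ones from the equation in the text; the coefficients for the other hit types and the linear weights are placeholders invented for illustration, not real league values:

```python
# Per-batted-ball conversion rates. The "single" row is from the text above;
# the other rows are hypothetical placeholders for illustration only.
BATTED_BALL_RATES = {
    "single":   {"FB": 0.057, "GB": 0.217, "LD": 0.516, "PU": 0.017},
    "double":   {"FB": 0.060, "GB": 0.020, "LD": 0.170, "PU": 0.001},
    "home_run": {"FB": 0.100, "GB": 0.000, "LD": 0.030, "PU": 0.000},
}

# Illustrative linear weights (runs per event), not a fitted set.
LINEAR_WEIGHTS = {"single": 0.47, "double": 0.78, "home_run": 1.40,
                  "walk": 0.31, "out": -0.10}

def estimated_events(fb, gb, ld, pu):
    """Turn batted-ball counts into estimated hit-type counts."""
    counts = {"FB": fb, "GB": gb, "LD": ld, "PU": pu}
    return {hit: sum(rate[bb] * n for bb, n in counts.items())
            for hit, rate in BATTED_BALL_RATES.items()}

def estimated_runs(fb, gb, ld, pu, walks, strikeouts):
    """Plug the estimated events into a linear-weights run estimate."""
    events = estimated_events(fb, gb, ld, pu)
    hits = sum(events.values())
    outs = (fb + gb + ld + pu - hits) + strikeouts
    runs = sum(LINEAR_WEIGHTS[hit] * n for hit, n in events.items())
    return runs + LINEAR_WEIGHTS["walk"] * walks + LINEAR_WEIGHTS["out"] * outs
```

The point of the structure, per the text, is that any insight about event interactions (say, a strikeout effect on HR/FB) would be built into `estimated_events` before the linear-weights step, rather than regressed directly against ERA.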
Again, I want to make it clear that I was attempting to explain where I'm coming from when I examine metrics of this type. There is room for legitimate philosophical differences and I'm not trying to state that sabermetricians who deviate from the way I'd do it are engaging in poor practice. It would certainly be possible to develop a lousy metric while basing it on a run estimator and following some of the other suggestions here.
Dundee, IL Math Tutor

...If you are interested in taking home tutoring classes for your kids, and improving their grades, do not hesitate to contact me. Qualification: Masters in Computer Applications. My Approach: I assess the child's learning ability in the first class and then prepare an individual lesson plan. I break down math problems for the child, to make him/her understand in an easy way.
8 Subjects: including algebra 1, algebra 2, geometry, prealgebra

...I was valedictorian in elementary school, achieved AP scores of 5/5 in French Language, French Literature, AB Calculus, and Biology in high school, and graduated from college as J.N. Honors Scholar Cum Laude with a BS in Elementary Education. I am certified to teach grades K-8 in Michigan and grades 1-6 in NY.
19 Subjects: including algebra 1, geometry, precalculus, ACT Math

I am a Software Engineer by profession but like to tutor math courses as a hobby. I am an experienced tutor, who has taught Quantitative Aptitude and Analytic Reasoning. I have coached students on College algebra and trigonometry.
12 Subjects: including differential equations, linear algebra, logic, electrical engineering

...I had public speaking training in law school as well as my master's degree in education. I'm prepared to tutor on public speaking techniques. I have experience as a law school assistant dean in charge of the international programs.
20 Subjects: including SAT math, reading, English, writing

...I consider myself well versed to expert in Linear Algebra. I am a self-taught lover of Python. I use Python to process large sets of data that I generate from simulations at work and produce meaningful results from these.
30 Subjects: including statistics, linear algebra, ACT Math, SAT math
How To Measure Investment Volatility: Capital Asset Pricing Model & Beta

We often hear about stocks being volatile and risky. But what does it really mean? Let's take a look at how it's all measured. "Volatile" is how you'd probably describe someone with a short fuse or a volcano waiting to erupt, but it's also a financial term that helps us describe stocks. Understanding volatility and how to measure it can be frustrating to someone without a degree in finance. But never fear! Here, we'll break down the terms and look at an actual example to help us grasp this deep, yet important topic. Simply put, volatility means variation or changes in price over time. Financial instruments are said to be extremely volatile if prices fluctuate wildly in a short period of time. The greater the volatility, the higher the risk involved. "Risk measures" is the term used to describe the historical predictors of investment risk and volatility. There are actually five principal risk measures: Alpha, Beta, R-Squared, Standard Deviation, and Sharpe Ratio. Fancy terms, if you ask me, but these risk measures exist in order to allow us to compare how well different investments perform and whether they're a fit for our investment needs. You may find that you're really just cut out to stick with conservative savings accounts and large company stocks and you wouldn't be able to sleep at night if you went for an all-out Latin American fund, for example. So how do you make such a decision? While you can determine this via your gut instinct, you can actually turn to some technical ways to do it, but to understand those ways, you may want to wrap your mind around a little math. Now before you turn away because there's a formula ahead, I'd encourage you to give it a chance. It's actually quite interesting (especially if you're new to the numbers behind the subject of risk). Here's where the fun begins.
Measuring Investment Volatility and Risk With The CAPM Formula

There's this thing called the Capital Asset Pricing Model (CAPM), which is just a fancy name for a concept that mathematically illustrates the relationship between an asset's expected return and risk. CAPM is a model that attempts to price or value an individual security or portfolio. This is represented by the formula:

E = F + β (M – F)

• E is the Expected Return of the Capital Asset (whether it be your Gold fund, your Large Cap stock or your Emerging Markets ETF, etc)
• F is the Risk Free Rate (money held in a "safe" account such as Savings that yields a steady interest rate)
• β is the Beta Figure (the market risk)
• M is the Expected Market Return

The CAPM boils down to saying that an investment's expected return should make up for the risk that it presents. According to the CAPM, investors should be compensated for their physical investment (money) plus the risk involved. If the actual return of an asset does not match or exceed the required expected return, then it's probably not a wise investment.

What Is Beta?

Let's learn more about Beta, since this figure is key to the Capital Asset Pricing Model. Beta (β) is an asset's market risk. The general market is said to have a Beta of 1.0. An individual stock's beta is measured in comparison to this baseline of 1.0. You can find a stock's Beta by searching for its ticker symbol or name on Reuters.com. Apple stock's Beta, for example, is 1.35. Therefore, Apple is 0.35 times more volatile than the general market. A stock that has a Beta of 3.0 would be three times as volatile as the market.

CAPM Example: Apple

Apple is one of the most talked-about technology companies now, so I'll use Apple (or AAPL) as our example for calculating CAPM. Recall the formula for CAPM: E = F + β (M – F). Let's use the following figures for our example:

1. F = 3%. Let's say, for example, that 3% is the rate of return on an interest bearing account.
Of course, this value varies according to the interest rate environment, but we set it at 3% for the purposes of this illustration.

2. β = 1.35. According to Reuters, Apple's Beta is 1.35, making it 0.35 times more volatile than the general market.

3. M = 9.4%. From 1900 to 2010, the average total return per year of the Dow Jones Industrial Average was approximately 9.4%.

So the equation looks like this:

E = (3%) + 1.35 (9.4% – 3%)
E = 11.64%

The CAPM of 11.64% tells us that Apple would need to earn 11.64% per year for it to be worth the risk of being selected as an investment. Obviously, if I use a savings vehicle with a higher rate of return or if I expect the market to return better than average results, the CAPM or expected return of the asset will change accordingly. Nonetheless, CAPM is a good method for determining whether or not an investment is worth pursuing.

How to Use the Capital Asset Pricing Model

You can see that CAPM isn't hard to calculate once you understand what the figures mean. But how useful is Capital Asset Pricing when you're deciding how to build a portfolio? As an investor, I may choose to invest in a portfolio of less risky assets if I decide that the CAPM percentage meets my personal comfort level. Perhaps I'm only comfortable investing in stocks that have a CAPM of less than 15%, which would be a generally less risky decision and a vote for relative stability. On the other hand, I could decide to invest in a riskier portfolio and invest a smaller portion of my wealth in cash (such as certificates of deposit or money-market accounts). The ratio of risky assets to risk-free assets here isn't equal, but I can achieve an aggressive return if I pursue one of two approaches:

1. I invest all my wealth in a risky portfolio and let it grow over time.
2. I invest a percentage of my wealth in a risky portfolio and the remainder in cash (stable) vehicles, letting the risky portfolio grow aggressively over time.
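The CAPM arithmetic is simple enough to script. A minimal Python sketch using the same illustrative inputs (3% risk-free rate, Beta of 1.35, 9.4% expected market return):

```python
def capm_expected_return(risk_free, beta, market_return):
    """Capital Asset Pricing Model: E = F + beta * (M - F).

    All rates are decimals, e.g. 0.03 for 3%."""
    return risk_free + beta * (market_return - risk_free)

apple = capm_expected_return(risk_free=0.03, beta=1.35, market_return=0.094)
print(f"{apple:.2%}")  # 11.64%
```

Swapping in a different risk-free rate or market expectation shows directly how sensitive the required return is to those assumptions.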
From a personal point of view, option 2 is more attractive. The risk-free investments (cash-stable vehicles such as savings and CDs) are not correlated to the risky assets of the portfolio, so even if my risky stocks sink one quarter, my core savings will be untouched. Most likely, the risky stocks will rebound if I'm patient and let them grow slowly. In essence, a higher return portfolio of risky stocks PLUS a reserve of cash-stable vehicles can be an efficient plan that will yield both returns and security. Just make sure you don't end up with a passive aggressive portfolio like this one.

Copyright © 2011 The Digerati Life. All Rights Reserved.
From    Steve Samuels <sjsamuels@gmail.com>
To      statalist@hsphsun2.harvard.edu
Subject Re: st: Re: How to delete studentized residuals with absolute values greater than or equal to two after conducting areg procedure?
Date    Thu, 27 Jun 2013 17:34:01 -0400

I highly recommend the very robust mmregress package, by Verardi and Croux (net describe st0173_1, (http://www.stata-journal.com/software/sj10-2)) as the best, indeed, the only way in Stata to reliably identify outliers and high leverage points and to simultaneously fit models that down-weight or eliminate the influence of such points. Neither -qreg- nor -rreg- can downweight or identify high leverage points. Note that diagnostics based on OLS, including studentized residuals, are very sensitive to outliers. They consider changes related to the deletion of one observation at a time. Extreme points pull the fitted regression surface towards themselves. If there are two outlying/high-leverage observations in the same location, each will "mask" the other. -mmregress- is not subject to such masking. For a well-written introduction to these topics, look at Hampel et al. (1986).

Verardi, V., and C. Croux. 2009. Robust regression in Stata. Stata Journal 9, no. 3: 439-453.

Hampel, Frank, Elvezio Ronchetti, Peter Rousseeuw, and Werner Stahel. 1986. Robust Statistics: The Approach Based on Influence Functions (Wiley Series in Probability and Mathematical Statistics). New York: John Wiley and Sons.
On Jun 27, 2013, at 10:36 AM, George_Huang wrote:

Dear David,

Your explanation helps a lot. Do you mean that I should pay attention not only to residuals but also to leverage to identify the potentially unusual or influential observations? If so, Cook's D, DFITS, lvr2plot may be the better commands for us to detect "outliers". Right? You are right. I have panel data from 2006 to 2011, so my coauthor wishes that I can run the regressions including firm or industry fixed effects. However, those regression diagnostics are not workable for areg. My coauthor also suggested that I can run median regressions (qreg) and robust regressions (rreg). He mentioned that these regressions do not allow controlling for firm fixed effects. However, these regressions can be mentioned in the robustness tests section to show that outliers do not affect our analysis.

Thanks and Best,

-----Original Message-----
From: David Hoaglin
Sent: Thursday, June 27, 2013 8:31 PM
To: statalist@hsphsun2.harvard.edu
Subject: Re: st: Re: How to delete studentized residuals with absolute values greater than or equal to two after conducting areg procedure?

Dear George,

Assessing "the robustness of the analysis results" usually involves much more than rerunning the model after removing observations that the model does not fit well. Your coauthor should explain the justification for removing those "outliers." Whenever possible, one should investigate observations that have large residuals. The definition of "studentized residual" is important here. Much of the literature on regression diagnostics defines the studentized residual for observation i as the difference between the observed value of y for observation i and the value of y predicted for observation i by the regression model without observation i, divided by a suitable estimate of the standard deviation of that difference. Some people use the term "jackknife residual."
The reasoning is that an observation that is influential may not have a large residual, because it has distorted the fit. Sometimes two or more observations are jointly influential, so that their individual studentized residuals are not large. If one can detect such behavior (not always an easy task), one then removes the whole group of observations (and tries to understand what is responsible for their behavior). All this is part of careful analysis; nothing is automatic. Earlier you mentioned -reg-, from which you can get the information you need (in postestimation). I have seldom used -areg-, but I am not surprised that it does not give the same detailed information about individual observations. It appears that you have some type of panel data, so the diagnostic process may be more complicated. You may want to tell us more about your data.

I hope this discussion helps.

David Hoaglin

On Thu, Jun 27, 2013 at 2:27 AM, George_Huang <cjhuang168@gmail.com> wrote:
> Dear David and Peter,
> Thanks for both of your suggestions. I want to delete studentized residuals
> that have an absolute value greater than or equal to two to delete outliers
> because I want to test the robustness of the analysis results. This is
> suggested by my coauthor. However, I am more comfortable for deleting the
> outliers by 3 absolute value of studentized residuals as you mentioned. I
> can not find postestimation for studentized residuals after conducting areg
> procedure. If you have further suggestions, please let me know.
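For reference, the jackknife (externally studentized) residual David describes can be computed without literally refitting the model n times, using the leverage values. Here is an illustrative Python sketch for simple one-predictor OLS (not part of the original thread):

```python
import math

def jackknife_residuals(x, y):
    """Externally studentized (jackknife) residuals for simple OLS y = a + b*x.

    t_i = e_i / (s_(i) * sqrt(1 - h_i)), where s_(i) is the residual standard
    deviation of the fit with observation i deleted and h_i is the leverage.
    This closed form matches refitting the model n times, once without each
    observation."""
    n = len(x)
    xbar, ybar = sum(x) / n, sum(y) / n
    sxx = sum((xi - xbar) ** 2 for xi in x)
    b = sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, y)) / sxx
    a = ybar - b * xbar
    e = [yi - (a + b * xi) for xi, yi in zip(x, y)]
    s2 = sum(ei * ei for ei in e) / (n - 2)                    # full-fit variance
    t = []
    for xi, ei in zip(x, e):
        h = 1 / n + (xi - xbar) ** 2 / sxx                     # leverage of point i
        s2_del = ((n - 2) * s2 - ei * ei / (1 - h)) / (n - 3)  # leave-one-out variance
        t.append(ei / math.sqrt(s2_del * (1 - h)))
    return t
```

Because the deleted-fit variance comes from a closed-form identity rather than n refits, this scales to large samples; the same identity generalizes to multiple regression via the hat matrix.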
> Thanks a lot,
> George

*
*   For searches and help try:
*   http://www.stata.com/help.cgi?search
*   http://www.stata.com/support/faqs/resources/statalist-faq/
*   http://www.ats.ucla.edu/stat/stata/
Preliminary Results of Marine Electromagnetic Sounding with a Powerful, Remote Source in Kola Bay off the Barents Sea

International Journal of Geophysics, Volume 2013 (2013), Article ID 160915, 8 pages. Research Article.

^1Kola Science Centre, Polar Geophysical Institute, Russian Academy of Science, Murmansk, 15 Khalturina Street, Murmansk 183010, Russia
^2Geoelectromagnetic Research Centre of Schmidt Institute of Physics of the Earth, Russian Academy of Sciences, P.O. Box 30, Troitsk, Moscow Region 142190, Russia
^3Kurchatov Institute, Moscow, 1 Akademika Kurchatova Square, Moscow 123182, Russia
^4Moscow State University, Moscow, GSP-1 Leninskie Gory, Moscow 119991, Russia
^5Institute of Terrestrial Magnetism, Ionosphere, and Radiowave Propagation, Russian Academy of Sciences, St. Petersburg Branch, 1 Mendeleevskaya Linia, St. Petersburg 199034, Russia

Received 28 September 2012; Accepted 30 December 2012. Academic Editor: Michael S. Zhdanov.

Copyright © 2013 Valery Grigoryev et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

We present an experiment conducted in Kola Bay off the Barents Sea in which new, six-component electromagnetic seafloor receivers were tested. Signals from a powerful, remote super-long wave (SLW) transmitter at several frequencies on the order of tens of Hz were recorded at six sites along a profile across Kola Bay. Although, for technical reasons, not all the components were successfully recorded at every site, the quality of the experimental data was quite satisfactory. The experiment resulted in the successful simulation of an electromagnetic field by the integral equation method.
An initial geoelectric model reflecting the main features of the regional geology produced field values that differed greatly from the experimental ones. However, step-by-step modification of the original model considerably improved the fit of the fields. In this way, specific features of the regional geology, in particular the fault tectonics, could be corrected. These preliminary results open the way to solving the inverse problem with more reliable geological conclusions.

1. Introduction

Recently marine electromagnetic methods have become a valuable tool for seafloor mapping (e.g., [1, 2]). Among them, methods based on the natural (magnetotelluric) field offer advantages over those based on controlled source electromagnetic methods (CSEM) for lower-crust and upper-mantle mapping, while the latter offer advantages for upper-crust mapping. Although in general terms the audiomagnetotelluric method has been successfully applied to upper-crust surveys too, such is not the case for the auroral zone, where the necessary plane-wave condition is strongly violated. A number of types of seafloor electromagnetic receivers measuring four (horizontal electric and magnetic) or five (the same plus vertical magnetic) components of the field, with various types of controlled sources dropped into the sea, have been implemented ([3–8]). However, it is known that measurement of the sixth component (vertical electric) can be of much interest, because it is particularly sensitive to insulating crust structures ([9]). On the other hand, it can be advantageous to use a powerful stationary controlled source for land mapping (e.g., [10]) and it is hoped that employment of such a powerful land-based source will also be beneficial in shelf mapping. In this paper, we present the experiment that made this possibility a reality. The first six-component electromagnetic seafloor receivers have been tested in Kola Bay off the Barents Sea.
Signals from a distant, powerful super-long wave (SLW) transmitter were recorded at six sites along a profile across the bay. The results have been preliminarily interpreted by simulating the electromagnetic field with the integral equation method.

2. Experiment

This pioneering experiment on seafloor measurements of the electromagnetic field emitted by a powerful land-based SLW transmitter was performed in September 2011. The source was a grounded horizontal line current (electric bipole) approximately 60 km long, located on the Kola Peninsula and oriented along a parallel of latitude. The field was emitted at frequencies of 41 Hz, 62 Hz, 82 Hz, and 144 Hz by a 200 A sinusoidal current, switching every fifteen minutes, and was measured by the six-component receivers (Figure 1) sited on the floor of the bay. The amplitude of the antenna current was recorded with reference to GPS time. The receivers were equipped with three orthogonal induction magnetic field sensors with low-noise amplifiers and with three orthogonal electric field sensors. The measured analog signals were fed to a six-channel, 16-bit ADC, where they were converted to digital form and saved on FLASH memory for further analysis. Accuracy of the conversion was ensured by pass-through calibration of the measurement channels using specialized metrology equipment. The main technical characteristics of the receivers are as follows.

Frequency range: 0.01 Hz–200 Hz.
Dynamic range of the measured signals: 72 dB.
Sensitivity of the magnetic channels in the band 0.01 Hz–200 Hz: better than 0.5 pT at a signal-to-noise ratio of 3/1.
Intrinsic noise of the magnetic channels: 1000 fT at … Hz, 100 fT at … Hz, 100 fT at … Hz.
Sensitivity of the electric channels in the band 0.01 Hz–200 Hz: better than 10 nV/m at a signal-to-noise ratio of ….
Operating depth: up to 500 m.

Before starting and after stopping the recording, the high-precision, temperature-stabilized receiver clock was verified against GPS time. This ensured that the data were synchronized with the correct time.
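As an aside on what the extraction of a known-frequency tone from such records involves, a minimal sketch follows; the transmitted frequency is taken from the paper, but the sampling rate, record length, signal amplitude, and noise level are invented for illustration and are not the instrument's actual parameters:

```python
import numpy as np

# Sketch: estimating the amplitude of a tone at a known transmitted
# frequency (41 Hz, as in the paper) from a noisy synthetic record by
# least-squares projection onto the reference sinusoids.
fs = 512.0                         # assumed sampling rate, Hz
f0 = 41.0                          # one of the transmitted frequencies
t = np.arange(0, 60.0, 1.0 / fs)   # one minute of synthetic data
rng = np.random.default_rng(0)
record = 2.5 * np.sin(2 * np.pi * f0 * t + 0.3) + rng.normal(0.0, 1.0, t.size)

# With an integer number of cycles in the window, these projections
# recover the quadrature components of the tone.
c = 2.0 / t.size * np.dot(record, np.cos(2 * np.pi * f0 * t))
s = 2.0 / t.size * np.dot(record, np.sin(2 * np.pi * f0 * t))
amplitude = np.hypot(c, s)         # close to the true amplitude of 2.5
```

In the paper itself the signal was separated from noise by Welch-type spectral analysis; the projection above is only the simplest stand-in for that step.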
During each measurement session, the receiver azimuth, roll, and pitch were recorded on FLASH memory every minute by the receiver orientation unit. The coordinates of the station drop point were fixed by the boat's navigation system. After the measurements, the station was lifted on a command sent from the ship through an acoustic channel, and the data were read out once the station was aboard. The antenna current data and the field values on the seabed were therefore synchronous. The signal was separated from natural and man-made noise by spectral analysis of the records using Welch's method [11]. As a result, the amplitudes of the six components of the field, as well as the phase differences between them, were obtained. The signal-to-noise ratio for all the components exceeded 30 dB. As the receiver orientation was random after descent to the floor, the data had to be converted into a common coordinate system, taking into account the pitch, roll, and yaw of the receiver units. Equation (1) gives the new amplitudes of two orthogonal components in terms of the measured amplitudes, their phase difference, and the rotation angle in one plane; equation (2) gives the new phase difference between them. To present the measurement results, a Cartesian axis system was used with the x-axis oriented along the geographic meridian pointing north and the y-axis oriented along the parallel pointing east. The z-axis was oriented vertically, pointing up. Calculations (1)-(2) were applied consecutively in three planes, taking into account the magnetic declination of 14°56′E in the area of observations when turning in the XY plane. As a result, we assembled the amplitude values of the field components and the phase differences in a single geographic axis system.
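The conversion to a common coordinate system amounts to rotating pairs of orthogonal components, with their phase difference taken into account, through the measured orientation angles. Below is a minimal sketch of one in-plane turn, treating the two components as phasors; this is an assumed parametrization of the transformation, not necessarily the exact form of the paper's equations (1)-(2):

```python
import cmath
import math

def rotate_components(amp_x, amp_y, dphi, alpha):
    """Rotate two orthogonal harmonic components through angle alpha (radians).

    amp_x, amp_y : measured amplitudes of the two components
    dphi         : measured phase difference between them (radians)
    Returns the new amplitudes and the new phase difference.
    """
    # Represent the components as phasors, putting the phase difference on y.
    fx = complex(amp_x, 0.0)
    fy = amp_y * cmath.exp(1j * dphi)
    # Ordinary 2-D rotation applied to the phasors.
    fx_new = fx * math.cos(alpha) + fy * math.sin(alpha)
    fy_new = -fx * math.sin(alpha) + fy * math.cos(alpha)
    return abs(fx_new), abs(fy_new), cmath.phase(fy_new) - cmath.phase(fx_new)

# An in-phase pair behaves like an ordinary 2-D vector under the turn:
ax, ay, d = rotate_components(1.0, 0.0, 0.0, math.pi / 2)
```

Applying such a turn consecutively in three planes, as the paper describes, brings the randomly oriented receiver axes into the geographic system.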
Because of the navigation conditions in Kola Bay, the observation sites were located along the two sides of the waterway at depths of 36 to 85 meters: four on the southern side of the waterway and two on the northern side (Figure 2). The receivers were deployed on the seafloor at least thirty minutes before transmission started and were buoyed to the sea surface thirty minutes after transmission stopped. The horizontal and vertical components of the electromagnetic field were detected at all the observation sites at all the frequencies. Data for the magnetic components were obtained at every site except site 3, where the receiver unit had not been fixed on the bay slope and changed its orientation during the session, resulting in noise from the induction sensors. In view of the unreliability of some electric component measurements at sites 2, 5, and 6, the electric field data for these sites could not be transformed into the geographic axis system; however, data for individual components of the field can still be used for interpretation. Figure 3 presents an example of the reduced electric (a) and magnetic (b) fields at different frequencies at site 1. The frequency dependence is quite natural. To take into account the possible spatial variability of seawater conductivity, we also carried out vertical and horizontal conductivity profiling with an oceanographic probe. The conductivity proved to be the same at all the measurement sites and independent of depth, except for a thin surface layer; it was equal to 3.5 S/m.

3. The Main Features of Regional Geology and the Initial Geoelectric Model

Kola Bay is a submeridional fjord on the coast of the Barents Sea in the northwestern part of the Kola Peninsula. Geological and geoelectric descriptions of the area and of the fjord itself are given in many works ([12–17]).
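The measured seawater conductivity and the working frequencies set the electromagnetic penetration scales through the standard skin-depth formula δ = sqrt(2/(ω μ0 σ)). A back-of-envelope check (the formula is standard; the choice of media to compare is mine):

```python
import math

MU0 = 4.0e-7 * math.pi  # vacuum magnetic permeability, H/m

def skin_depth(sigma, f):
    """Skin depth in metres for conductivity sigma (S/m) at frequency f (Hz)."""
    return math.sqrt(2.0 / (2.0 * math.pi * f * MU0 * sigma))

d_sea = skin_depth(3.5, 41.0)     # measured seawater conductivity
d_crust = skin_depth(1e-4, 41.0)  # resistive upper layer of the initial model
```

At 41 Hz the field is attenuated within a few tens of metres in seawater but penetrates kilometres into the resistive basement, which is why a land-based source can illuminate the upper crust beneath a shallow bay.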
The fjord passes through two regions, the Murmansk Craton and the Kola-Norwegian Province, in an area of resistive Archean granitoids and gneisses (a fragment of the geological map by Mitrofanov [18] is given in Figure 4). The average thickness of the Quaternary sediments (generally glacial and marine) is 50–100 m and reaches 200 m at the entrance of the fjord. Following its cranked course, the fjord is divided into three parts: the northern, middle, and southern bends. The northern part is the deepest, with depths up to 300 m; in the middle part, where the observation profile lies, the depths vary from 35 to 130 m. Kola Bay is further complicated by underwater rapids and by branches in the form of side bays. The formation of such a complex structure is determined by a system of northwest- and northeast-trending faults, glacial exaration, and irregular postglacial uplift. On the basis of the publications cited above, the initial geoelectric model was constructed (Figure 5). The background of the model is a two-layered resistive medium: the conductivity of the upper layer is 10^−4 S/m with a thickness of 2 km, and the conductivity of the lower half-space is 10^−5 S/m. The conductivity of all the faults is the same, 1 S/m (red in Figure 5); the water conductivity is 3.2 S/m (brown in Figure 5); the sediment conductivity is 1 S/m. The widths of the faults range from 2 to 4 km. The sediments follow the curve of the fjord banks, and their thickness changes from 50 m to 200 m at the entrance of the fjord.

4. Method of Simulation

The simulation was based on the 3D integral equation method. The use of this method in electromagnetic geophysics is founded on the following model of the complex conductivity distribution. The whole space is divided into two parts: the normal section and the anomaly.
The normal section (background) is a set of unbounded horizontal homogeneous layers and two homogeneous half-spaces, with the complex conductivity constant in each layer and half-space. The conductivity of the upper half-space, which simulates the air, is zero. The anomaly is a 3D volume in which the conductivity differs from that of the normal section. Notice that in the integral equation method the conductivity has to be specified in the whole space. The magnetic permeability is constant in the whole space. Once the conductivity of the normal section is defined, the electric and magnetic Green's tensors G_E and G_H can be computed, and the following expressions for the electric and magnetic fields may be written ([19]):

E(r) = E^p(r) + ∫_V G_E(r, r' | ω) Δσ(r') E(r') dv',
H(r) = H^p(r) + ∫_V G_H(r, r' | ω) Δσ(r') E(r') dv',   (3)

where ω is the circular frequency, V is the anomalous volume, Δσ(r') is the anomalous conductivity at the point r', and E^p and H^p are the primary electric and magnetic fields. The primary field is the field induced at the given point in the normal section by the same source. If r in the first expression in (3) lies inside V, this expression becomes a 3D singular integral equation of the second kind:

E(r) = E^p(r) + ∫_V G_E(r, r' | ω) Δσ(r') E(r') dv',   r ∈ V.   (4)

Once (4) is solved, its solution is placed under the integrals in (3), and the electric and magnetic fields can then be computed at any point in space. The integral equation (4) is solved by the collocation method. The anomalous volume is divided into a set of rectangular cells V_k, and the electric field is approximated by a constant in each V_k, equal to the field at the center of V_k. In this way we obtain the following system of linear equations:

E(r_k) = E^p(r_k) + Σ_j ∫_{V_j} G_E(r_k, r' | ω) Δσ(r') E(r_j) dv',   k = 1, …, N,   (5)

where r_k is the center of V_k and N is the number of cells. The system (5) is solved by the generalized minimal residual (GMRES) method. We started with a very large anomaly, extending more than 100 km horizontally in both directions and 2 km in the vertical direction, like the one modeled in our previous study ([10]).
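The collocation scheme behind system (5) can be illustrated on a one-dimensional toy analogue of a second-kind integral equation; the kernel and right-hand side below are invented for illustration and have nothing to do with the paper's tensor Green's kernel:

```python
import numpy as np

# Toy 1-D analogue of the collocation discretization: solve
#   u(x) = f(x) + integral_0^1 K(x, t) u(t) dt
# on cell centres, approximating u by a constant in each cell.
n = 50
h = 1.0 / n
x = (np.arange(n) + 0.5) * h                        # cell centres
K = 0.5 * np.exp(-np.abs(x[:, None] - x[None, :]))  # illustrative kernel
f = np.ones(n)                                      # illustrative right-hand side

A = np.eye(n) - K * h   # collocation turns the equation into (I - K h) u = f
u = np.linalg.solve(A, f)
```

In the paper the same idea is applied cell by cell in 3D, and the resulting (much larger) dense system is solved iteratively by GMRES rather than by a direct solve.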
The computational experiments showed that so large an anomaly is not necessary, because all the effects in the resulting field are induced by the local objects. They also showed that the effects of the geometry of the Kola Bay bottom are very strong, so an accurate model of the bottom topography has to be used. This requires cells of very small size: 20 m in the horizontal direction and 5 m in the vertical one. The resulting model consists of 11 million cells. The computations were conducted on the high-performance computer Skif-Tchebyshev at Moscow State University, using the fully parallelized 3D forward modeling software PIE3D from the CEMI consortium of the University of Utah.

5. Preliminary Interpretation

All six components of the field were computed for the initial geoelectric model (Figure 5). Although amplitudes and phases were available at four frequencies, at this primary stage of the investigation we limited ourselves to comparing the experimental and modeled fields by amplitude only, and mainly at the lowest frequency. The reason is that phase is too sensitive a parameter: it is important at the stage of solving the inverse problem, but difficult to employ at the stage of preliminary rough model matching. Moreover, owing to the exposure of the insulating basement around Kola Bay, very thin conductors such as bogs or small rivers (which are practically impossible to take into account) strongly influence the field at high frequencies. Therefore, at the preliminary rough model matching stage, we paid most attention to modeling at the most reliable, lowest frequency (41 Hz). Such an approach follows from the experience of land electromagnetic sounding with the same source ([10]).
When the observed field amplitudes were compared with those computed for the initial geoelectric model (Figure 6), it was apparent that they differed widely both in shape and in level (by up to three orders of magnitude). The initial model therefore had to be corrected considerably. We undertook a step-by-step modification of the original model, varying the normal cross-section, the positions and conductivities of the faults, and the geometry of the Kola Bay alluvial belt. Because the original model was rather large, and correspondingly every step took considerable time, we also tested the possibility of decreasing the horizontal size of the model so that amplitude differences between the "large" and "small" models at the observation sites would not exceed about 1%. We tested a total of about 50 models. One of the best variants is shown in Figures 7 and 8. The fit of the field components is, of course, rather imperfect, but it is obviously far better than in Figure 6. The average relative misfits for the EM-field amplitudes at 41 Hz are presented in Table 1. At frequencies of 62 and 82 Hz, the fit is only slightly worse than that shown in Figure 8 at 41 Hz; it deteriorates appreciably only at the highest frequency, 144 Hz. Thus the model presented in Figure 7 can serve as a good starting point for a rigorous solution of the inverse problem. Although this model is not final, it allows some qualitative geophysical and geological conclusions.
(i) The normal cross-section proved to be an order of magnitude more conductive than previously believed.
(ii) Kola Bay is surrounded by a sedimentary alluvial belt, evidently the result of regional postglacial uplift. The sedimentary belt proved to be asymmetric, in accordance with the normal rightward displacement of the ancient river.
(iii) The conductivities of the faults proved to be essentially different. The larger conductivity of a couple of faults seen in Figure 7 is probably related to contemporary tectonic activity.
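One plausible definition of the average relative misfit quoted for the amplitudes is the mean of |A_obs − A_mod| / A_obs over the observation sites; the paper does not spell its definition out, and the amplitude values below are invented for illustration:

```python
# Average relative amplitude misfit over a set of sites (values invented).
obs = [1.2e-9, 3.4e-9, 8.0e-10, 2.2e-9]   # "observed" amplitudes
mod = [1.0e-9, 3.0e-9, 9.5e-10, 2.5e-9]   # "modeled" amplitudes

misfit = sum(abs(o - m) / o for o, m in zip(obs, mod)) / len(obs)  # ~0.15
```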
6. Conclusions

This pioneering experiment with new six-component electromagnetic seafloor receivers and a distant, powerful SLW transmitter was carried out in Kola Bay off the Barents Sea. The receivers were deployed at six sites along a profile across the bay. Not all six components were successfully measured at all the sites, in particular because of the mechanical unreliability of the electric antennas. Nevertheless, the quality of the experimental data turned out to be quite satisfactory. The data have been preliminarily interpreted by a trial-and-error method, with simulation of the electromagnetic field by the integral equation method. The resulting geoelectric model differs from the initial one, and this difference reveals new features of the regional geology, in particular of the fault tectonics. The resulting model is also an appropriate starting point for a rigorous solution of the inverse problem with a subsequently more comprehensive geological interpretation.

The work was supported by RFBR (Grant 11-05-12015). The authors acknowledge the University of Utah's Consortium for Electromagnetic Modeling and Inversion (CEMI) for providing the 3D forward modeling software PIE3D.

References

1. K. Key, "Marine electromagnetic studies of seafloor resources and tectonics," Surveys in Geophysics, vol. 33, no. 1, pp. 135–167, 2011.
2. K. A. Weitemeyer, S. Constable, and A. M. Tréhu, "A marine electromagnetic survey to detect gas hydrate at Hydrate Ridge, Oregon," Geophysical Journal International, vol. 187, pp. 45–62, 2011.
3. R. L. Evans, M. C. Sinha, S. C. Constable, and M. J. Unsworth, "On the electrical nature of the axial melt zone at 13°N on the East Pacific Rise," Journal of Geophysical Research B, vol. 99, no. 1, pp. 577–588, 1994.
4. S. Constable and C. S. Cox, "Marine controlled-source electromagnetic sounding 2. The PEGASUS experiment," Journal of Geophysical Research B, vol. 101, no. 3, pp.
5519–5530, 1996.
5. L. MacGregor and M. Sinha, "Use of marine controlled-source electromagnetic sounding for sub-basalt exploration," Geophysical Prospecting, vol. 48, no. 6, pp. 1091–1106, 2000.
6. L. MacGregor, M. Sinha, and S. Constable, "Electrical resistivity structure of the Valu Fa Ridge, Lau Basin, from marine controlled-source electromagnetic sounding," Geophysical Journal International, vol. 146, no. 1, pp. 217–236, 2001.
7. S. Ellingsrud, T. Eidesmo, S. Johansen, M. C. Sinha, L. M. MacGregor, and S. Constable, "Remote sensing of hydrocarbon layers by seabed logging (SBL): results from a cruise offshore Angola," Leading Edge, vol. 21, no. 10, pp. 972–982, 2002.
8. M. C. Sinha, P. D. Patel, M. J. Unsworth, T. R. E. Owen, and M. R. G. Maccormack, "An active source electromagnetic sounding system for marine use," Marine Geophysical Researches, vol. 12, no. 1-2, pp. 59–68, 1990.
9. M. N. Berdichevsky, O. N. Zhdanova, and M. S. Zhdanov, Marine Deep Geoelectrics, Nauka, Moscow, Russia, 1989.
10. E. P. Velikhov, V. F. Grigor'ev, M. S. Zhdanov et al., "Electromagnetic sounding of the Kola Peninsula with a powerful extremely low frequency source," Doklady Earth Sciences, vol. 438, no. 1, pp. 711–716, 2011.
11. F. J. Harris, "On the use of windows for harmonic analysis with the discrete Fourier transform," Proceedings of the IEEE, vol. 66, no. 1, pp. 51–83, 1978.
12. E. P. Velikhov, Ed., Geoelectrical Studies with a Powerful Current Source on the Baltic Shield, Nauka, Moscow, Russia, 1989.
13. A. A. Kovtun, S. A. Vagin, I. L. Vardanjants, E. L. Kokvina, and N. I. Uspenskiy, "Magnetotelluric study of the structure of the crust and mantle of the eastern part of the Baltic Shield," Izvestiya—Physics of the Solid Earth, no. 3, pp. 32–36, 1994.
14. A. A. Zhamaletdinov, "Graphite in the Earth's crust and electrical conductivity anomalies," Izvestiya—Physics of the Solid Earth, vol. 32, no. 4, pp. 272–288, 1996.
15. L. L. Vanjan and N. I. Pavlenkova, "Layer of low velocity and high electrical conductivity at the base of the upper crust of the Baltic Shield," Izvestiya—Physics of the Solid Earth, no. 1, pp. 37–45, 2002.
16. E. V. Spiridonov, Paleoseismodislocations on the Coast of the Barents Sea [Ph.D. thesis], MSU, Moscow, Russia, 2007.
17. E. A. Kovalchuk and E. V. Shipilov, "The first data about the structure and lithology of the section of the Kola Fjord sediments," in Proceedings of the International Scientific Conference on the 100th Anniversary of D. G. Panov, pp. 157–160, SSC Academy of Sciences, Rostov-on-Don, Russia, 2009.
18. F. P. Mitrofanov, Ed., Geological Map of the Kola Region, Scale 1:500000, Apatity, 2001.
19. M. S. Zhdanov, Geophysical Inverse Theory and Regularization Problems, Elsevier, Amsterdam, The Netherlands, 2002.
Easy trig problem

Maybe it's me, but this didn't seem easy!?!? I'm getting an answer of sin a = 1; is this correct? Here's a summary of what I did:

You have the cosine of a difference identity:

$\cos(a-b) = \cos a \cos b + \sin a \sin b \quad$ (Eq1)

Use the Law of Sines to write sin b in terms of sin a:

$\sin b = \frac{3}{4}\sin a \quad$ (Eq2)

Use the Pythagorean identity to write cosine in terms of sine:

$\cos a = \sqrt{1-\sin^2 a}$ (Eq3a)
$\cos b = \sqrt{1-\sin^2 b}$ (Eq3b)

Plug (2), (3a), and (3b) into (1), plus the fact that cos(a-b) = 3/4:

$\frac{3}{4} = \sqrt{1-\sin^2 a}\,\sqrt{1-\left(\frac{3}{4}\sin a\right)^2} + \frac{3}{4}\sin^2 a$ (Eq4)

Solved for sin a and I got sin a = 1. I'm sure I did something wrong here.

In general, if $|ac| = B ~,~ |bc| = A ~,~ \cos(a-b) = X$, the method to find the value of $\sin(a)$ is as follows:

First determine which of the angles $a, b$ is larger by comparing their opposite sides; let's say $a > b$. Let $d$ be the other point on line $ab$ such that $|ac| = |cd| = B$. Then we have $\angle bcd = a-b$, and by the cosine formula, $|bd| = \sqrt{A^2 + B^2 - 2ABX}$. Therefore,

$\sin(a) : A = \sin(a-b) : |bd|$

$\sin(a) = \frac{A \sin(a-b)}{\sqrt{A^2 + B^2 - 2ABX}}$
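The closed form in the reply can be sanity-checked numerically: pick two angles, generate the opposite sides via the Law of Sines, and the formula should return sin(a). A quick check (the specific angles are arbitrary):

```python
import math

# Check sin(a) = A*sin(a-b) / sqrt(A^2 + B^2 - 2*A*B*X), with X = cos(a-b),
# where A = |bc| and B = |ac| are the sides opposite angles a and b.
a, b = 1.2, 0.7                    # two angles of a test triangle, a > b
A, B = math.sin(a), math.sin(b)    # Law of Sines with circumdiameter 1
X = math.cos(a - b)

sin_a = A * math.sin(a - b) / math.sqrt(A**2 + B**2 - 2 * A * B * X)
```

sin_a agrees with math.sin(a) to machine precision, consistent with the isosceles construction in the derivation above.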
The "Unusual Episode" and a Second Statistics Course

Jeffrey S. Simonoff
New York University

Journal of Statistics Education v.5, n.1 (1997)

Copyright (c) 1997 by Jeffrey S. Simonoff, all rights reserved. This text may be freely shared among individuals, but it may not be republished in any medium without express written consent from the author and advance notification of the editor.

Key Words: Classroom exercise; Logistic regression; Model building; Survival data.

Dawson (1995) described a dataset giving population at risk and fatalities for an unusual mortality episode (the sinking of the ocean liner Titanic), and discussed experiences in using the dataset in an introductory statistics course. In this paper the same dataset is analyzed from the point of view of the second statistics course. A combination of exploratory analysis using tables of observed survival percentages, model building using logistic regression, and careful thought allows the statistician (and student) to get to the essence of the random process described by the data. The well-known nature of the episode gives the students a chance at determining its character, and the data are complex enough to require sophisticated modeling methods to get at the truth.

1. The "Unusual Episode" Dataset

1 In a recent paper Dawson (1995) discussed a dataset relating to an "unusual episode" of mortality -- the sinking of the ocean liner Titanic after colliding with an iceberg on April 15, 1912. The dataset gives the number at risk and deaths for the passengers and crew of the ship categorized by different characteristics. The first part of the paper focused on the process by which a "correct" version of the dataset was determined. The second part described the author's experience in using the dataset in an early class session of an introductory statistics course (by presenting it with identifying characteristics omitted and asking the students to try to identify what the episode actually was).
2 Dawson (1995) actually presented two similar, but not identical, tables related to the Titanic sinking. Table 1 of that paper referred to passengers only, while Table 2 also included crew. Table 2 was a modified version of Table 1 based on information in the Board of Trade Inquiry Report (1990). This latter table is the basis of the analyses in this paper. The 2201 people at risk are categorized by economic status (first-class passengers, second-class passengers, third-class passengers, or crew), age (child or adult), gender (female or male), and survival (survived or did not survive). The table is thus a 4 x 2 x 2 x 2 contingency table, with one dimension (survival) a natural target variable.

2. The Second Statistics Course

3 Informal discussion of these data in an introductory course can help get students thinking about randomness and exploratory data analysis. The data are also ideal for use in a second course, however, where statistical (regression) model building is covered, as they can be used to show how models can highlight, reinforce, and contradict informal impressions. In this way they can be a powerful tool to help students see both the power and limitations of exploratory data analysis, and the power and limitations of statistical models.

4 During the Spring 1996 semester I used the "unusual episode" data in the second statistics course. The course assumes knowledge of data analysis and statistical inference up through basic regression modeling. The second course is called "Regression and Multivariate Data Analysis," and it is the data analysis portion of the title that is key. All discussion of models, testing, estimation, and diagnostics is directed towards practical issues of understanding and exploring real data.

5 The "unusual episode" data were discussed roughly two-thirds of the way through the semester, just after discussion of logistic regression.
By that time linear regression (including simple and multiple regression, regression diagnostics, model selection, weighted least squares, and regression on time series data) had been thoroughly discussed at an applied level. This leads to discussion of analysis of variance (ANOVA) models, including the use of indicator and effect codings to fit ANOVA models. The students had seen two real data analyses using logistic regression in class.

3. The Handout and Logistic Regression Modeling

6 At the end of class I gave out a nine-page handout entitled "An unusual episode," which I asked the students to read before the next class. The handout, highlights of which follow, described a process by which a logistic regression model can be fit to these data (the entire model selection process was explicitly given in the handout). I also asked the students to write down on a piece of paper what they thought the mortality episode actually was (as all identifying characteristics were omitted from the handout).

7 My goal was to make the analysis as natural as possible by drawing heavily on analogies with least squares regression. The students had seen many analyses of continuous data, but had not seen tables of counts before (at least in any systematic fashion). Instead of histograms of all the variables, we have frequency distributions, because the variables are all categorical. Similarly, cross-classifications take the role of scatter plots.

8 The handout begins with a brief discussion of how cross-classified data can be analyzed as a logistic regression if one of the dimensions is a natural (binary) target variable. Table 1 summarizes the data set. The table is given with entries being the percentages of cell totals that survived, as this allows both the relationship with survival probability and the number at risk for each defined subgroup to be apparent. Summing over the table gives the overall survival rate of 32.3% of the 2201 people at risk.
Table 1: Survival Percentages Separated by Characteristics

                                      Age
  Gender    Economic status    Adult           Child
  Male      High               32.6% of 175    100% of 5
            Medium              8.3% of 168    100% of 11
            Low                16.2% of 462    27.1% of 48
            Other              22.3% of 862    ---
  Female    High               97.2% of 144    100% of 1
            Medium             86.0% of 93     100% of 13
            Low                46.1% of 165    45.2% of 31
            Other              87.0% of 23     ---

9 Table 2 gives the three cross-classifications of economic status, age, and gender, respectively, with survival. The tables highlight the patterns that identify this event as a shipwreck: decreasing survival with decreasing economic status ("Other" has lowest survival rate, but of course it is uncertain that it has lowest status), higher survival for children (and relatively few children at risk), and much higher survival for women (with fewer women than men at risk).

Table 2: Observed Survival Percentages by Variable

  Economic status    Percent survived        Age      Percent survived
  High               62.5% of 325            Child    52.3% of 109
  Medium             41.4% of 285            Adult    31.3% of 2092
  Low                25.2% of 706
  Other              24.0% of 885

  Gender    Percent survived
  Female    73.2% of 470
  Male      21.2% of 1731

10 The categorical nature of the dataset allows easy exploration of the effect of interactions of the variables on survival. Table 3 gives one such interaction table, that of economic status by gender. The interaction effect corresponds to an association with survival that is not explained by the marginal (main) effects alone. This table corrects and clarifies several impressions from Table 2. First, "Other" status is not actually lower than "Low" status in terms of survival probability; rather, more than 97% of the members of this status were male, with associated lower survival probability than females (this overwhelming gender imbalance provides a clue that "Other" status corresponds to the crew). The other pieces of new information in Table 3 are that women of "Low" status fared far worse than women at other status levels, while men of "High" status fared better than men at other status levels.
Table 3: Observed Interaction of Economic Status and Gender on Survival

                          Percent survived
  Economic status    Female           Male
  High               97.2% of 145     34.4% of 180
  Medium             87.7% of 106     14.0% of 179
  Low                45.9% of 196     17.3% of 510
  Other              87.0% of 23      22.3% of 862

11 This exploratory analysis is supported by formal model building. Details are given in Table 4. The main effects and interaction effects are fit using effect codings (see, for example, Hamilton 1992, pp. 99-101). All of the models are hierarchical, in that the presence of an interaction effect in the model implies that the associated main effects are also present. Note that because there were no children in the crew, the interaction between economic status and age (EA) is fit using only two of the effect codings corresponding to pairwise products of those for the main effects, rather than three. The likelihood ratio goodness-of-fit statistic (G²) is given for each model, along with associated degrees of freedom (df) and tail probability (p). The Akaike Information Criterion (AIC), which attempts to provide a tradeoff between goodness-of-fit and parsimony, is given in the last column (it equals G² + 2 x (number of parameters in the model)). The models are given ordered from smallest to largest AIC value within model class (one main effect, two main effects, three main effects, etc.) to make model selection easier.

12 An important point about the construction of G² should be made here. A different representation of the data set from that given in Table 1 is to consider it as a set of 2201 observations, each having a 0/1 (survived/did not survive) response value associated with it. The representations are equivalent with respect to fitted logistic regression coefficients, likelihood ratio and Wald tests of the significance of any individual effects, and maximized log-likelihood. They are different, however, in their implications for goodness-of-fit.
It is inappropriate to use G² to evaluate goodness-of-fit considering each of the 2201 observations as a separate binomial random variable (based on one trial each). The reason for this is that the distribution of G² given the fitted regression coefficients is degenerate in this circumstance, and thus provides no information on goodness-of-fit (McCullagh and Nelder 1989, section 4.4.5). Rather, goodness-of-fit should be evaluated over the set of covariate patterns, which in this case corresponds to the 14 cells in Table 1 with nonzero number at risk.

Table 4: Logistic Regression Fits to Survival Data
(E = Economic Status, A = Age, G = Gender)

  Model                   G²       df    p        AIC
  G                      237.49    12   <.0001   241.49
  E                      491.06    10   <.0001   499.06
  A                      652.40    12   <.0001   656.40
  E, G                   131.42     9   <.0001   141.42
  A, G                   231.60    11   <.0001   237.60
  E, A                   465.48     9   <.0001   475.48
  E, A, G                112.57     8   <.0001   124.57
  E, G, EG                66.24     6   <.0001    82.24
  A, G, AG               215.28    10   <.0001   223.28
  E, A, EA               436.27     7   <.0001   450.27
  E, A, G, EG             45.90     5   <.0001    63.90
  E, A, G, EA             76.91     6   <.0001    92.91
  E, A, G, AG             94.55     7   <.0001   108.55
  E, A, G, EA, EG          1.69     3    .6395    23.69
  E, A, G, EG, AG         37.26     4   <.0001    57.26
  E, A, G, EA, AG         65.02     5   <.0001    83.02
  E, A, G, EA, EG, AG      0.00     2   1.000     24.00

13 According to G², the only two models that fit the table include the two interactions EA and EG, or all three pairwise interactions EA, EG, and AG. We must recognize, however, that the large sample size here means that statistically significant effects might not have great practical importance. Similarly, while the three models with minimum AIC include all of the main effects and two or three interaction effects, the well-known tendency of AIC to lead to overfitted models (Hurvich and Tsai 1989) implies that more care is called for in choosing a model that fits adequately but is parsimonious.

14 One way of doing this is to compare the fitted values for the three models (E, G, EG), (E, A, G, EG), and (E, A, G, EA, EG) (the best-fitting models of their respective classes).
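The G² entries in Table 4 can be reproduced from the data. For the gender-only model, the maximum likelihood fit assigns each cell its gender's marginal survival rate, and the deviance is then summed over the 14 covariate patterns. The survivor counts below are reconstructed from the percentages in Table 1 by rounding to whole persons, which is an assumption on my part:

```python
import math

# Each cell of Table 1: (number at risk, survivors, gender: 0 = male, 1 = female)
cells = [
    (175, 57, 0), (5, 5, 0), (168, 14, 0), (11, 11, 0),
    (462, 75, 0), (48, 13, 0), (862, 192, 0),
    (144, 140, 1), (1, 1, 1), (93, 80, 1), (13, 13, 1),
    (165, 76, 1), (31, 14, 1), (23, 20, 1),
]

# For a single binary factor the ML fit reproduces the marginal rates.
rate = {}
for g in (0, 1):
    n_g = sum(n for n, y, gg in cells if gg == g)
    y_g = sum(y for n, y, gg in cells if gg == g)
    rate[g] = y_g / n_g

def term(count, fitted):
    # One piece of the deviance sum; zero counts contribute nothing.
    return count * math.log(count / fitted) if count > 0 else 0.0

G2 = 2.0 * sum(term(y, n * rate[g]) + term(n - y, n * (1.0 - rate[g]))
               for n, y, g in cells)
```

This yields G² close to the 237.49 in the first row of Table 4, on 14 − 2 = 12 degrees of freedom.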
The fitted values are given in Table 5. Examination of the fitted survival percentages shows that they are very similar for all three models for the adult classes, but differ for the child classes. Because children represent less than 5% of the total population at risk, the simple model E, G, EG seems adequate to describe the important associations with survival in the data. This model also has the advantage of yielding as fitted survival percentages the observed percentages in Table 3, making summary of the model easy.

Table 5: Survival Percentages Separated by Characteristics for Three Models

Model E, G, EG
                                      Age
  Gender    Economic status    Adult           Child
  Male      High               34.4% of 175    34.4% of 5
            Medium             14.0% of 168    14.0% of 11
            Low                17.3% of 462    17.3% of 48
            Other              22.3% of 862    ---
  Female    High               97.2% of 144    97.2% of 1
            Medium             87.7% of 93     87.7% of 13
            Low                45.9% of 165    45.9% of 31
            Other              87.0% of 23     ---

Model E, A, G, EG
                                      Age
  Gender    Economic status    Adult           Child
  Male      High               33.7% of 175    59.4% of 5
            Medium             12.9% of 168    29.9% of 11
            Low                15.5% of 462    34.4% of 48
            Other              22.3% of 862    ---
  Female    High               97.2% of 144    99.0% of 1
            Medium             86.7% of 93     94.9% of 13
            Low                41.9% of 165    67.4% of 31
            Other              87.0% of 23     ---

Model E, A, G, EA, EG
                                      Age
  Gender    Economic status    Adult           Child
  Male      High               32.6% of 175    100% of 5
            Medium              8.3% of 168    100% of 11
            Low                16.8% of 462    22.0% of 48
            Other              22.3% of 862    ---
  Female    High               97.2% of 144    100% of 1
            Medium             86.0% of 93     100% of 13
            Low                44.6% of 165    53.0% of 31
            Other              87.0% of 23     ---

15 Model fitting can also help students determine what the nature of the mortality episode actually is. The comparison in Table 5 reinforces how few children were at risk here, potentially giving a clue to the students that this was not simply an epidemic in some town or city.
The relationship between the fitted survival percentages and the economic status of the people at risk shows that exposure to the mortality agent was apparently higher for poorer people than for richer ones (or, more correctly for this incident, exposure to survival measures was lower). The fitted survival percentages for the two more complicated models highlight that the higher survival rate for children is stronger for boys (compared to men) than for girls (compared to women), perhaps helping to trigger recognition of the "women and children first" rule of the sea. In fact, the nine classes with the highest fitted survival percentages for the model (E, A, G, EA, EG) correspond to either women or children or both.

4. The Results

16 Thirty students turned in their guesses about the nature of the mortality episode at the beginning of the next class. Subsequent tallies demonstrated little prediction success, as only 3 of the 30 correctly identified the episode. Disease (11 of 30, with 4 people specifically saying AIDS), a military engagement (6 of 30), and gunshot (4 of 30) were the most popular choices. Despite this, class discussion was very brief. When the second person to volunteer a guess said "The sinking of the Titanic," the response was positively electric -- all of the students started nodding their heads and saying "That's it." We then spent a few more minutes going over the data and analyses before moving on to new material.

17 One impression I received from the class discussion was that some of the students were unclear on how to obtain the values in Table 4, and how to interpret them. To make this clearer, I added an Appendix to the handout describing how ordinary (weighted) least squares regression can be used to approximate logistic regression fitting (see, e.g., McCullagh and Nelder 1989, pp. 106-107).
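That weighted least squares approximation works by regressing the empirical logits of the group survival proportions on the predictors, with weights n·p̂·(1 − p̂). The sketch below uses hypothetical grouped counts (not the Titanic table) with a single dichotomous predictor, just to illustrate the recipe; because one parameter is fitted per group, the model is saturated and the weighted fit reproduces the observed logits exactly.

```python
import math

# Sketch of the empirical-logit weighted least squares approximation to
# logistic regression (in the spirit of McCullagh and Nelder 1989,
# pp. 106-107).  The counts are HYPOTHETICAL illustration data:
# (group indicator x, number at risk n, number surviving y)
groups = [(0, 200, 120),   # e.g. "not low status"
          (1, 100,  30)]   # e.g. "low status"

# Empirical logit z = log(p/(1-p)) and weight w = n*p*(1-p) per group
z, w, x = [], [], []
for xi, n, y in groups:
    p = y / n
    z.append(math.log(p / (1 - p)))
    w.append(n * p * (1 - p))
    x.append(xi)

# Weighted least squares for z ~ b0 + b1*x: solve the 2x2 normal equations
sw   = sum(w)
swx  = sum(wi * xi for wi, xi in zip(w, x))
swxx = sum(wi * xi * xi for wi, xi in zip(w, x))
swz  = sum(wi * zi for wi, zi in zip(w, z))
swxz = sum(wi * xi * zi for wi, xi, zi in zip(w, x, z))
det = sw * swxx - swx * swx
b0 = (swxx * swz - swx * swxz) / det
b1 = (sw * swxz - swx * swz) / det

# With one parameter per group the fit is saturated, so the fitted
# logits equal the observed ones: b0 = z[0] and b0 + b1 = z[1].
assert abs(b0 - z[0]) < 1e-9 and abs(b0 + b1 - z[1]) < 1e-9
```

With more groups than parameters the fit is no longer exact, and the fitted coefficients approximate the maximum likelihood logistic regression estimates.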
When combined with the simplification of treating economic status as dichotomous (low status/not low status), a best subsets regression program can be used to try to choose the model that best balances goodness-of-fit with parsimony (the model based on economic status, gender, and their interaction is the model of choice). The revised handout is available with this paper.

5. Conclusion

18 The "unusual episode" data provide a very rewarding experience in the second statistics course. At the cost of a little class time, the students see how a combination of exploratory analysis, model building, and careful thought allows the statistician to cut through a maze of numbers to the essence of an unknown process. The data are complex enough to require careful and sophisticated modeling methods, yet easily accessible (because the episode itself is so well known). An alternative approach to the one described here is to analyze the data in front of the class using a computer (assuming that this is possible). Allowing the students to work through the analysis cooperatively could be a very rewarding educational experience, although it would likely also be a time-consuming one.

I would like to thank the referees for helpful comments on an earlier draft of this article.

References

"Report on the Loss of the `Titanic' (S.S.)" (1990), British Board of Trade Inquiry Report (reprint), Gloucester, UK: Allan Sutton Publishing.

Dawson, R. J. M. (1995), "The `Unusual Episode' Data Revisited," Journal of Statistics Education [Online], 3(3). (http://www.amstat.org/publications/jse/v3n3/datasets.dawson.html)

Hamilton, L. C. (1992), Regression With Graphics: A Second Course in Applied Statistics, Belmont, CA: Duxbury.

Hurvich, C. M., and Tsai, C.-L. (1989), "Regression and Time Series Model Selection in Small Samples," Biometrika, 76, 297-307.

McCullagh, P., and Nelder, J. A. (1989), Generalized Linear Models (2nd ed.), London: Chapman and Hall.

Jeffrey S.
Simonoff
Department of Statistics & Operations Research
New York University
44 West 4th Street, Room 8-54
New York, NY 10012-1126

A postscript version of the handout, "An unusual episode", is available.
covariance question
Posted January 30th 2010, 11:56 AM by a Junior Member (joined Sep 2009)

OK, I understand how to get that far, but I don't know where to go from there. I know it's something very simple, but I have to teach this to myself since it is an online course. Thanks in advance.

Reply: For the sake of time, I'm going to let 37.6363 = a and 12.545 = b. You must take the sum
$\frac{1}{10}[(x_1-a)(y_1-b)+(x_2-a)(y_2-b)+...+(x_{11}-a)(y_{11}-b)]$

Original poster: I was about to say I figured it out. Anyhow, thank you for the clarification.
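The sum in the reply is the usual sample covariance with divisor n − 1 (here n = 11 paired observations, hence the 1/10). The thread does not show the actual data, so the sketch below uses made-up lists, chosen so the means come out to the a and b quoted above:

```python
# Sample covariance with divisor n-1, as in the reply above.
# These (x, y) pairs are made up for illustration -- the thread's actual
# data are not shown -- but they are chosen so the means match the
# quoted a = 37.6363... and b = 12.545... values.
xs = [31, 35, 36, 38, 39, 40, 41, 37, 42, 36, 39]
ys = [10, 12, 13, 12, 14, 13, 12, 11, 15, 12, 14]

n = len(xs)
a = sum(xs) / n          # mean of the x values (the "a" in the thread)
b = sum(ys) / n          # mean of the y values (the "b" in the thread)
cov = sum((x - a) * (y - b) for x, y in zip(xs, ys)) / (n - 1)
print(a, b, cov)
```

The same value comes out of `statistics.covariance` (Python 3.10+) or `numpy.cov` with its default `ddof=1`.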
East Camden, NJ Algebra 1 Tutor

...Hello Students! If you need help with mathematics, physics, or engineering, I'd be glad to help out. With dedication, every student succeeds, so don't despair!
14 Subjects: including algebra 1, calculus, physics, geometry

...I have methods to determine the best way a student learns, and am committed to finding a way to teach them effectively using visual, auditory, or kinesthetic strategies, or some combination thereof. I obtained my International Baccalaureate Diploma in July 2012 at Central High School of ...
18 Subjects: including algebra 1, Spanish, reading, biology

...I'm able to bring those same strategies to the students I tutor. My goal is to help my students see that math is a type of puzzle and a game. It doesn't have to be scary or frustrating.
12 Subjects: including algebra 1, English, reading, grammar

...I've tutored students in grades 6-12 in homework preparation and currently teach 7th and 8th grade private school students on Saturday mornings in social and study skills. My 7 years of middle school teaching experience has prepared me to prepare students for the rigors of any standardized test. I know a wide variety of test-taking strategies and the best way to share those with ...
22 Subjects: including algebra 1, English, Spanish, reading

...My range is from basic Bible knowledge such as location of Bible books, to in-depth education on genealogies, prophecy, and fulfillment. Currently I am enrolled in a program of progressive Bible education that I share with others on a weekly basis. Microsoft Outlook is the program I have worked with for more than 7 years.
21 Subjects: including algebra 1, English, writing, reading
reference for perturbation of projection result

Let $A$ and $B$ be matrices of the same dimensions and the same rank. If $P_A$ denotes the projection onto the range space of $A$, then $$ \|P_A - P_B\|_2 \leq \|A - B\|_2 \cdot \min (\|A^\dagger\|_2, \|B^\dagger\|_2). $$ Here $A^\dagger$ denotes the pseudoinverse of a matrix. I believe that this result is established by Golub and Zha in the course of their proof of Theorem 3.6 in "Perturbation Analysis of the Canonical Correlations of Matrix Pairs," but in a manner that's too messy to point to and say "here it is." Unfortunately, this result also doesn't seem to follow readily from the results in Stewart's paper on perturbation theory for pseudoinverses and projections. Is this result clearly established somewhere in the literature?

Tags: linear-algebra, fa.functional-analysis, na.numerical-analysis, numerical-linear-algebra, reference-request
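The claimed bound is easy to spot-check numerically on random instances (a sanity check, not a proof). A sketch with NumPy, using $P_A = A A^\dagger$ for the orthogonal projector onto the range of $A$:

```python
import numpy as np

# Numerical spot-check of the claimed bound
#   ||P_A - P_B||_2 <= ||A - B||_2 * min(||A+||_2, ||B+||_2)
# where P_A = A A+ is the orthogonal projector onto range(A).
# Random full-column-rank A and a small perturbation B keep rank equal.
rng = np.random.default_rng(0)

def proj(M):
    """Orthogonal projector onto the column space of M."""
    return M @ np.linalg.pinv(M)

A = rng.standard_normal((8, 3))            # full column rank w.p. 1
B = A + 0.1 * rng.standard_normal((8, 3))  # mild perturbation, same rank

lhs = np.linalg.norm(proj(A) - proj(B), 2)
rhs = np.linalg.norm(A - B, 2) * min(
    np.linalg.norm(np.linalg.pinv(A), 2),
    np.linalg.norm(np.linalg.pinv(B), 2),
)
print(lhs, rhs)
```

For mild perturbations this agrees with first-order perturbation theory, where the projector change is driven by terms like $(I - P_A) E A^\dagger$ with $E = B - A$.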
To integrate static?
Posted January 19th 2010 by a Senior Member from Atlanta, GA

A curious question occurred to me, and I am not familiar enough with real analysis to answer it. Consider a real function $f(x): [a,b]\to[c,d]$ defined as follows: for each element $x\in[a,b]$, let $f(x)$ be some random element from $[c,d]$ -- a function obviously completely discontinuous. What would the value of $\int_a^b f(x)dx$ be? It would make intuitive sense that the area under the curve is equal to the length of the interval, $b-a$, times the average value of the function on that interval, so the answer would be $(b-a)\frac{c+d}2$. But how to prove this rigorously? Is the function even integrable, and by what definition?

Reply (Laurent): A correct statement could be: let $(f(x))_{x\in[a,b]}$ be a family of independent random variables uniformly distributed on $[c,d]$. What can be said about the (random variable) $\int_a^b f(x)dx$, if this is well-defined? As a matter of fact, I would bet that $f$ is almost-surely not measurable, hence the integral wouldn't make sense. You can try another way, for instance as an approximation: for $n\geq 0$, define $f_n:[0,1]\to [0,1]$ (or with a, b, c, d) to be a step function where the steps have width $\frac{1}{n}$ and independent uniformly distributed heights in $[0,1]$. This won't converge to a function when $n\to\infty$. However, $\int_0^1 f_n(t)dt\to_n \frac{1}{2}$ almost surely by the law of large numbers, so for large $n$ this function $f_n$ nearly satisfies what you said. By the way, there is a name for such an object $f$: it is called a white noise; you can look that up.

Reply (original poster): Laurent: Thank you for the suggestion. I believe the following sketch is much closer to a formal proof. Define $f_n:[0,1]\to[0,1]$ as follows: divide the interval $[0,1]$ into $n$ subintervals $\Delta_i$ of width $\frac1n$. For each of these subintervals, let $f_n(\Delta_i)$ be a random element of $[0,1]$.
Now define $g_n$ by sorting these subintervals from shortest to tallest, so that $g_n(\Delta_{i-1})\leq g_n(\Delta_i)$ for all $i$. It should be plain that $\int_0^1 f_n(x)dx=\int_0^1 g_n(x)dx$ for all $n$.
1. Define $g$ as the limiting function of the $g_n$ ( $g=\lim_{n\to\infty} g_n$), which is $g(x)=x$. Of course, $\int_0^1 g(x)dx=\frac12$. Define $f$ as the limiting function of the $f_n$ ( $f=\lim_{n\to\infty} f_n$).
2. Since $\int_0^1 f_n(x)dx=\int_0^1 g_n(x)dx$ for all $n$, $\int_0^1 f(x)dx=\int_0^1 g(x)dx=\frac12$.
Are steps 1 and 2 valid?

Reply (Laurent): I don't feel like this would be any more proof-like than what I did: you give no argument for the limits, as if they were obvious. Perhaps you should re-read my post. The fact that $g_n$ converges to $x\mapsto x$ almost surely (you need to specify this) is probably true, but the proof is certainly not immediate. On the other hand, when I said that $\int_0^1 f_n(t)\,dt\to\frac{1}{2}$ a.s. by the law of large numbers, that was immediate, since $\int_0^1 f_n(t)\,dt=\frac{1}{n}\sum_{i=1}^n f(\frac{i}{n})$.
Then the fact that $f_n$ converges to a function $f$ should also be justified, and as a matter of fact, like I wrote, you just can't: the sequence $(f_n)_n$ doesn't converge... I gave no justification since this should be clear: indeed, for any $x$, the sequence $(f_n(x))_{n\geq 0}$ is a sequence of independent random variables uniformly distributed on $[0,1]$...
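Laurent's law-of-large-numbers point is easy to see in simulation: the integral of the step function $f_n$ is just the average of $n$ independent uniforms, which concentrates at $\tfrac12$ as $n$ grows. A quick sketch in pure Python with a fixed seed:

```python
import random

# The integral of the step function f_n on [0,1] is the average of its
# n independent Uniform[0,1] step heights, so by the law of large
# numbers it converges almost surely to 1/2 as n grows.
random.seed(0)

def integral_fn(n):
    """Integral of one realization of f_n: mean of n uniform heights."""
    return sum(random.random() for _ in range(n)) / n

for n in (10, 1000, 100_000):
    print(n, integral_fn(n))   # values approach 0.5 as n grows
```

The standard deviation of the average is $1/\sqrt{12 n}$, so for $n = 100{,}000$ the integral is within about $\pm 0.003$ of $\tfrac12$ with overwhelming probability.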
Physics Forums - View Single Post - Trying to determine initial thrust on an object from angular velocity?

Hi all, I built an electric paper airplane launcher and I'm trying to figure out how much force or thrust is being applied to my paper airplane when it's launched. The setup looks like this: the discs are 124mm in diameter, are spinning at approximately 5800 RPM each, and are about 1mm apart (far enough not to touch, but close enough to grab the airplane). The airplane is 7 inches long. I'm trying to figure out how much force is transferred to the airplane. My initial guess was to convert the angular velocity of the discs into linear velocity and multiply that by the mass of the plane... but I don't think that's accurate. First, I'm not sure that two discs spinning in opposite directions at 5800 RPM each equals an angular velocity of 11600 RPM. I'm not sure that I can combine them that way. Second, assuming that I figure out the combined linear velocity of both discs, I am not sure how that is transferred to the airplane. The discs are applying force to the airplane for the total distance of its length, but I'm not sure how that factors into the airplane's initial acceleration. I'm sure there is a lot I'm missing here... just looking for a point in the right direction. Thank you!
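One starting point (not a full answer to the thrust question): the rim speed of each disc, $v = \omega r$, sets an upper bound on the launch speed. Two counter-rotating discs do not add their speeds; at the contact point both surfaces move in the same direction at the same tangential speed. A quick calculation with the numbers from the post:

```python
import math

# Tangential (rim) speed of each launcher disc: v = omega * r.
# The counter-rotating discs both move at this speed at the pinch point,
# so it is an upper bound on the launch speed -- the speeds don't add.
rpm = 5800
radius_m = 0.124 / 2             # 124 mm diameter

omega = rpm * 2 * math.pi / 60   # angular velocity in rad/s
v = omega * radius_m             # rim speed in m/s
print(omega, v)                  # roughly 607 rad/s and 38 m/s
```

The actual force on the plane then depends on friction at the pinch and on how quickly the plane is accelerated over the roughly 7-inch contact length, which is the part that needs a grip/slip model rather than kinematics alone.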
Best Lewis Structure

The Lewis structure that is closest to your structure is determined. The hybridization of the atoms in this idealized Lewis structure is given in the table below.

Hybridization in the Best Lewis Structure

1. A bonding orbital for N1-C3 with 1.9983 electrons
   has 67.52% N 1 character in a sp0.85 hybrid
   has 32.48% C 3 character in a sp2.12 hybrid
2. A bonding orbital for N1-C3 with 1.9861 electrons
   has 76.92% N 1 character in a p3 hybrid
   has 23.08% C 3 character in a p3 hybrid
3. A bonding orbital for N1-C3 with 1.9539 electrons
   has 79.15% N 1 character in a p3 hybrid
   has 20.85% C 3 character in a p3 hybrid
4. A bonding orbital for N1-P4 with 1.9333 electrons
   has 84.12% N 1 character in a sp1.22 hybrid
   has 15.88% P 4 character in a s0.55 p3 d0.96 hybrid
5. A bonding orbital for O2-P4 with 1.9444 electrons
   has 87.71% O 2 character in a s0.86 p3 hybrid
   has 12.29% P 4 character in a s0.63 p3 d1.19 hybrid
6. A bonding orbital for O2-P4 with 1.9950 electrons
   has 78.24% O 2 character in a p3 hybrid
   has 21.76% P 4 character in a p3 d0.12 hybrid
7. A bonding orbital for O2-P4 with 1.9099 electrons
   has 89.11% O 2 character in a s0.36 p3 hybrid
   has 10.89% P 4 character in a s0.86 p3 d2.30 hybrid
16. A lone pair orbital for O2 with 1.9923 electrons
    made from a sp0.51 hybrid
17. A lone pair orbital for C3 with 1.9818 electrons
    made from a sp0.44 hybrid
18. A lone pair orbital for P4 with 1.9806 electrons
    made from a sp0.36 hybrid

With core pairs on: N 1, O 2, C 3, P 4, P 4, P 4, P 4, P 4

Donor Acceptor Interactions in the Best Lewis Structure

The localized orbitals in your best Lewis structure can interact strongly. A filled bonding or lone pair orbital can act as a donor and an empty or filled bonding, antibonding, or lone pair orbital can act as an acceptor. These interactions can strengthen and weaken bonds.
For example, a lone pair donor -> antibonding acceptor orbital interaction will weaken the bond associated with the antibonding orbital. Conversely, an interaction with a bonding pair as the acceptor will strengthen the bond. Strong electron delocalization in your best Lewis structure will also show up as donor-acceptor interactions. Interactions greater than 20 kJ/mol for bonding and lone pair orbitals are listed below.

The interaction of the third bonding donor orbital, 3, for N1-C3 with the second antibonding acceptor orbital, 102, for O2-P4 is 49.4 kJ/mol.
The interaction of bonding donor orbital, 4, for N1-P4 with the antibonding acceptor orbital, 101, for O2-P4 is 47.9 kJ/mol.
The interaction of bonding donor orbital, 4, for N1-P4 with the third antibonding acceptor orbital, 103, for O2-P4 is 208. kJ/mol.
The interaction of bonding donor orbital, 5, for O2-P4 with the antibonding acceptor orbital, 100, for N1-P4 is 24.8 kJ/mol.
The interaction of bonding donor orbital, 5, for O2-P4 with the third antibonding acceptor orbital, 103, for O2-P4 is 227. kJ/mol.
The interaction of the third bonding donor orbital, 7, for O2-P4 with the antibonding acceptor orbital, 100, for N1-P4 is 112. kJ/mol.
The interaction of the third bonding donor orbital, 7, for O2-P4 with the antibonding acceptor orbital, 101, for O2-P4 is 194. kJ/mol.
The interaction of lone pair donor orbital, 17, for C3 with the antibonding acceptor orbital, 100, for N1-P4 is 29.5 kJ/mol.
The interaction of lone pair donor orbital, 18, for P4 with the antibonding acceptor orbital, 97, for N1-C3 is 35.8 kJ/mol.

Molecular Orbital Energies

The orbital energies are given in eV, where 1 eV = 96.49 kJ/mol. Orbitals with very low energy are core 1s orbitals. More antibonding orbitals than you might expect are sometimes listed, because d orbitals are always included for heavy atoms and p orbitals are included for H atoms. Up spins are shown with a ^ and down spins are shown as v.
22 ----- -0.344
21 ----- -1.517
20 ----- -2.444
19 ----- -5.098
18 -^-v- -7.659
17 -^-v- -8.557
16 -^-v- -9.315
15 -^-v- -9.696
14 -^-v- -10.81
13 -^-v- -10.95
12 -^-v- -12.41
11 -^-v- -17.39
10 -^-v- -23.98
 9 -^-v- -25.95
 8 -^-v- -126.1
 7 -^-v- -126.4
 6 -^-v- -126.5
 5 -^-v- -173.9
 4 -^-v- -268.7
 3 -^-v- -379.0
 2 -^-v- -507.8
 1 -^-v- -2071.

Total Electronic Energy

The total electronic energy is a very large number, so by convention the units are given in atomic units, that is, Hartrees (H). One Hartree is 2625.5 kJ/mol. The energy reference is for totally dissociated atoms. In other words, the reference state is a gas consisting of nuclei and electrons all at infinite distance from each other. The electronic energy includes all electric interactions and the kinetic energy of the electrons. This energy does not include translation, rotation, or vibration of the molecule.

Total electronic energy = -509.4846224869 Hartrees
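The unit conversions quoted above (1 eV = 96.49 kJ/mol, 1 Hartree = 2625.5 kJ/mol) make it easy to move between the orbital-energy and total-energy scales, e.g.:

```python
# Convert the energies above between the units used on this page,
# using the conversion factors it quotes.
EV_TO_KJ_PER_MOL = 96.49
HARTREE_TO_KJ_PER_MOL = 2625.5

homo_ev = -7.659                 # orbital 18, the highest occupied level
total_hartree = -509.4846224869  # total electronic energy

homo_kj = homo_ev * EV_TO_KJ_PER_MOL
total_kj = total_hartree * HARTREE_TO_KJ_PER_MOL
print(homo_kj, total_kj)   # about -739 kJ/mol and -1.34e6 kJ/mol
```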
Essington Algebra 1 Tutor

...I also work at the Vanguard school in the summer with Autistic students. I am certified in K-12 Special Education. I work as an Autistic Support teacher at Child Guidance Resource Centers.
10 Subjects: including algebra 1, reading, grammar, special needs

...I look forward to meeting you and will be happy to answer any questions you may have! I have played volleyball for the last ten years, starting my freshman year of high school. For those four years of high school I played club ball as well. I love it and play whenever possible.
10 Subjects: including algebra 1, geometry, ASVAB, GED

...I conduct research at UPenn and West Chester University on colloidal crystals and hydrodynamic damping. Students I tutor are mostly college-age, but range from middle school to adult. As a tutor with multiple years of experience tutoring people in precalculus- and calculus-level courses, tutoring calculus is one of my main focuses.
9 Subjects: including algebra 1, physics, calculus, geometry

...I often ask for feedback, as I find it important to constantly improve in my style of tutoring. I look forward to hearing from you and helping you achieve success in your studies! I have always enjoyed reading and excelled at it throughout school.
13 Subjects: including algebra 1, reading, English, writing

...I hope that we will be a good fit, and I look forward to meeting you! I finished my Bachelor's with a double major in Psychology and Anthropology. I also had the opportunity to study abroad in Australia, where I studied Australian Aboriginals.
34 Subjects: including algebra 1, English, reading, writing
Please help me!!!! Solve the system using elimination:
x + y + z = 1
2x - y + 2z = -5
-x + 2y - z = 4

Reply: Set up the augmented matrix
[ 1  1  1 |  1]
[ 2 -1  2 | -5]
[-1  2 -1 |  4]

Asker: I have to use elimination. I am so confused on how to do these. Can someone please help!!!

Reply: Adding the first and third equations eliminates x and z:
(x + y + z) + (-x + 2y - z) = 1 + 4, so 3y = 5.
Doubling the third equation gives -2x + 4y - 2z = 8; adding it to the second equation, 2x - y + 2z = -5, gives 3y = 3.
Since 3y cannot be both 5 and 3, the equations contradict each other.

Asker: Thanks so much. Could you help me with one more?

Reply: x + y + z = 1 added to 2x - y + 2z = -5 gives 3x + 3z = -4.
Doubling the second equation gives 4x - 2y + 4z = -10; adding -x + 2y - z = 4 gives 3x + 3z = -6.
But 3x + 3z = -4 and 3x + 3z = -6 together would force 0 = 2, which is impossible, so there is no solution to the problem.
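The same conclusion can be checked mechanically: Gaussian elimination on the augmented matrix leaves a row of the form [0 0 0 | c] with c ≠ 0, the signature of an inconsistent system. A small pure-Python sketch:

```python
# Gaussian elimination on the augmented matrix of the system
#   x + y + z = 1,  2x - y + 2z = -5,  -x + 2y - z = 4
# An inconsistent system leaves a row [0 0 0 | c] with c != 0.
M = [[1.0, 1.0, 1.0, 1.0],
     [2.0, -1.0, 2.0, -5.0],
     [-1.0, 2.0, -1.0, 4.0]]

n = 3
row = 0
for col in range(n):
    # find a pivot in this column at or below `row`
    piv = next((r for r in range(row, n) if abs(M[r][col]) > 1e-12), None)
    if piv is None:
        continue
    M[row], M[piv] = M[piv], M[row]
    for r in range(n):
        if r != row and abs(M[r][col]) > 1e-12:
            f = M[r][col] / M[row][col]
            M[r] = [a - f * b for a, b in zip(M[r], M[row])]
    row += 1

inconsistent = any(all(abs(v) < 1e-9 for v in r[:n]) and abs(r[n]) > 1e-9
                   for r in M)
print(inconsistent)   # True: the system has no solution
```

Equivalently, the coefficient matrix has rank 2 while the augmented matrix has rank 3, so no solution exists.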
combinatorics question
Posted May 2012

We have N bottles, m red and n blue balls, with n > N and m > N. We need to know in how many ways it is possible to partition all red and blue balls into bottles. There should be at least one red and one blue ball in each bottle.

Re: combinatorics question
Two essential questions:
1) Are the bottles identical or distinct?
2) Are the balls identical (except for color) or distinct?

Re: combinatorics question

Re: combinatorics question
There is an important concept known as integer partitions. Because "there should be at least one red and one blue ball in each bottle," we are partitioning the numbers $n-N~\&~m-N$. From that webpage, you can see that there is no simple formula or way of doing this problem. Here is an example. Suppose $n=15,~m=10,~\&~N=6$. We think this way. Go ahead and put a blue ball in each bottle, leaving 9 blues. Go ahead and put a red ball in each bottle, leaving 4 reds. Now there are $P(9,6)=26$ ways to partition 9 into 6 or fewer summands, and $P(4,4)=5$ ways to partition 4 into 4 or fewer summands. The product $26\cdot 5=130$ is the number of ways of doing both.

Re: combinatorics question
Thanks a lot
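The partition counts quoted in the reply are easy to verify with the standard recurrence P(n, k) = P(n − k, k) + P(n, k − 1), where P(n, k) counts partitions of n into parts of size at most k (equivalently, by conjugation, into at most k summands):

```python
from functools import lru_cache

# P(n, k): number of partitions of n into parts of size at most k,
# which by conjugation equals partitions of n into at most k summands.
# Recurrence: either some part equals k (-> P(n-k, k)) or all parts
# are at most k-1 (-> P(n, k-1)).
@lru_cache(maxsize=None)
def P(n, k):
    if n == 0:
        return 1
    if n < 0 or k == 0:
        return 0
    return P(n - k, k) + P(n, k - 1)

print(P(9, 6), P(4, 4), P(9, 6) * P(4, 4))   # 26 5 130
```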
Annandale, VA Science Tutor

Dear Prospective Tutee, Get ready to learn, have fun, and gain confidence in your ability to do math -- and raise your grades too! I offer tutoring sessions for all high school math subjects, from pre-algebra to AP calculus. I have helped to significantly improve students' scores and grades (as much as from an F to an A) in high school math subjects for three years now.
15 Subjects: including organic chemistry, chemistry, calculus, geometry

I am a recent graduate from George Mason University. I earned my B.S. in mathematics with a concentration in actuarial science and minors in economics and data analysis. While I typically excel in all areas of mathematics, I have a particular penchant for statistics.
21 Subjects: including physics, Spanish, reading, calculus

...In college, I had a major in Math with a minor in Computer Science. I have a Masters degree in pure mathematics. I have taught, tutored, and graded students' homework in Linear Algebra.
36 Subjects: including astronomy, physical science, statistics, ACT Science

I have been tutoring since 1979 and have an excellent track record of superior results with students. My philosophy is that every student is capable of being a good student if they find a patient teacher who believes in them more than they believe in themselves. Students need to be encouraged by celebrating their small accomplishments as well as the large ones.
12 Subjects: including biology, vocabulary, grammar, psychology

...Statistics is heavily studied in Thermodynamics and I have taken many rigorous courses. I understand all the concepts well and can explain them in a manner in which they make sense. I am completely bilingual in English and Spanish.
23 Subjects: including mechanical engineering, electrical engineering, physics, chemistry
Topic: sparse matrix bug
Posted: Jun 18, 1996 11:50 AM

An interesting little bug I tripped over in Matlab on the Sun today. It is also in the Mac version. Matlab is supposed to work with sparse matrices and full matrices together, leaving the result as a sparse matrix. For example, when A is sparse and B is full, C is supposed to be sparse. This is good. C is still supposed to be sparse. Still good. Consider the following expression though:

C = A(:,ind) - B*(R1\R2);

A and B are sparse. ind is an index vector (full). R1 and R2 are full. C should be sparse, since it is a sum of terms, all of which should be sparse. NOT so. At least not on a Mac running 4.2c1 or on a sparc5. I can fix it by forcing R1 and R2 to be sparse, but this cost me some headaches. When I later tried to use C, which is huge when full, it crashed the sparc20 I was running it on remotely with memory problems. It did not return cleanly. I tried it again on my own sparc5, and it quickly crashed that system too. No out-of-memory message. Just reboot. If you do not believe me, try out the following code. Check out b after it is computed. Full. Not sparse.

a = sprandn(100,50,0.1);
c = sprandn(10,50,0.05);

John D'Errico, derrico@kodak.com
San Bruno Algebra 2 Tutor Find a San Bruno Algebra 2 Tutor ...Simplifying expressions, binomials, powers, factoring, linear equations, and graphing don't have to be difficult topics but if a student misses a key idea on a certain class day, it can quickly snowball. Many times, all it takes to get back on track is some focused review work. The second year ... 11 Subjects: including algebra 2, calculus, geometry, algebra 1 ...I find that many students shy away from the core concepts in math and physics, preferring instead to learn only the specific problems they are assigned. This can result in the student becoming confused when confronted with a new problem. For this reason I first strive to ensure that the student grasps the basic concepts, and I then illustrate the concepts with a variety of examples. 25 Subjects: including algebra 2, physics, calculus, statistics ...I will make good use of private tutoring sessions to help the students with different teaching approaches and supplemented materials.Do you need help to learn numbers, equations, graphs, factorization, and solve word problems etc? I would love to help you to understand the keys to the problems and ways to find the solutions. Do you have difficulty in solving problems in Algebra 2? 10 Subjects: including algebra 2, geometry, Chinese, algebra 1 ...My tutoring methods vary student-by-student, but I specialize in breaking down problems and asking questions to guide the student toward discovering and truly understanding concepts which helps with retention and effective test-taking. I take pride in the success of each and every one of my stud... 17 Subjects: including algebra 2, chemistry, statistics, calculus ...If you are interested in discussing a unique summer curriculum catered to your child. Please e-mail me and I will be happy to talk to you. I am passionate about helping students dramatically improve their academic performance. 26 Subjects: including algebra 2, reading, writing, statistics
{"url":"http://www.purplemath.com/San_Bruno_Algebra_2_tutors.php","timestamp":"2014-04-19T19:39:48Z","content_type":null,"content_length":"24201","record_id":"<urn:uuid:e7ce4942-c94a-4104-9a19-d2d02f4fbf54>","cc-path":"CC-MAIN-2014-15/segments/1397609537376.43/warc/CC-MAIN-20140416005217-00075-ip-10-147-4-33.ec2.internal.warc.gz"}
A First Course in Probability Theory Sheldon Ross 8th Edition Download This book was shared by damian64 on misc category • Author Sheldon M Ross • Category: Mathematics • ISBN 0979570409 Product Description: The 2006 INFORMS Expository Writing Award-winning and best-selling author Sheldon Ross (University of Southern California) teams up with Erol Pekoz (Boston University) to bring you this textbook for undergraduate and graduate students in statistics, mathematics, engineering, finance, and actuarial science. This is a guided tour designed to give familiarity with advanced topics in probability without having to wade through the • Author Sheldon M. Ross • Category: Mathematics • ISBN 0137463146 First Course in Probability, A (5th Edition) Sheldon M. Ross is available to download Product Description: This market leader is written as an elementary introduction to the mathematical theory of probability for students in mathematics, engineering, and the sciences who possess the prerequisite knowledge of elementary calculus. This material is available to download at niSearch.com on Sheldon M. Ross's eBooks. A first course in probability 6th edition by Ross A First Course in Probability SOLUTIONS MANUAL (7th Edition), by Sheldon Ross A First Course in String Theory by Barton Zwiebach ... 8th Edition, by J. Richard Christman Read Online introduction to the elementary concepts of probability. The first part of the course will ... Sheldon Ross, A First Course in Probability, 8th edition, Prentice-Hall ... A Course in Probability Theory, 3rd ed., Academic Press, 2001 Read Online A First Course In Probability 7th Edition by Sheldon M. Ross A First Course in Probability Theory, 6th edition, by S. Ross. A First Course in String Theory, 2004, ... Fundamentals of Corporate Finance 8th edition by Ross Read Online 210332 Applied Probability Sheldon Ross, A First Course in Probability 2006 (7th) ...
An Introduction to the Theory of Numbers 2008 (8th) Cambridge Press 0521722365 David M. Burton, Elementary Number Theory 2007 ... Graph Theory with Applications, Revised Edition 2005 Wiley 0471363243 Read Online Course Title Edition New? Author Publisher Fall Spring Summer ... 51900 A First Course in Probability, ... 53200 Stochastic Processes latest New Sheldon Ross Wiley 15 53900 Probability and Measure ISBN 0-471-00710-2 3rd 1995 Same Billingsley Wiley 15 Read Online 3. A First Course In Probability, 7E, by Sheldon Ross, SM 4. A First Course in Probability, 8th Edition, Sheldon Ross, PEARSON, ISM 5. ... Elements of Information Theory, 1st Edition, Thomas M. Cover, Joy A. Thomas, WILEY, SM 309. Read Online Math 371 Elementary Probability Theory (3) ... Course Textbook: A First Course in Probability, 7th Edition by Sheldon Ross ... Course Textbook: Learning to Teach, 8th Edition by Richard I. Arends ... Read Online ACM/EE 116 Houman Owhadi Suggested Sheldon M. Ross and Erol A. Pekoz A second course in probability theory 979570409 ACM/EE ... De Bedts Required Dieteker, Simone & Van Hooff, Dominique En Bonne Forme 0‐470‐42869‐4 Houghton Mifflin 8th Edition Read Online 87- Stewart's Calculus, 5th edition 88- Basic Probability Theory by Robert B. Ash ... First Course In Probability - Sheldon M. Ross (8th ed) ... An Applied First Course - Bernard Kolman (8th ed) (ISBN 0131437402 Read Online Precalculus, Ron Larson, 8th ed., Brooks Cole, Print version ISBN: 1111476209, RC E-book:1133444644 MATH 2333 ... A First Course in Abstract Algebra, John Fraleigh, 7th ed., Pearson, ISBN: ... Introduction to Probability Models, Sheldon Ross, 10e, Academic Press, ... Read Online
{"url":"http://nisearch.com/files/pdf/a-first-course-in-probability-theory-sheldon-ross-8th-edition-download/2","timestamp":"2014-04-18T20:44:13Z","content_type":null,"content_length":"32615","record_id":"<urn:uuid:05c83c5b-42fc-4b75-bb88-1b11dc5dbb89>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.9/warc/CC-MAIN-20140416005215-00097-ip-10-147-4-33.ec2.internal.warc.gz"}
Algebra. WebMath WebMath is designed to help you solve your math problems. Composed of forms to fill-in and then returns analysis of a problem and, when possible, provides a step-by-step solution. Math.com Homework help Department of Mathematics, IIT Kharagpur - ERNET Free student resources from Discovery Education. Find homework help, games and interactives, and step-by-step help to help students learn and have fun. WebMath - Solve Your Math Problem Solve a Linear Equation Involving One Unknown - powered by WebMath Solve a Linear Equation Involving One Unknown. WebMath Plot an Inequality - powered by WebMath. This page will show you how to plot an inequality. Plotting inequalities can be a bit difficult because entire portions of … Free Student Resources | Digital textbooks and standards. Department of Theoretical and Applied Mathematics. (Akron, OH, USA) The University of Akron : Department of Mathematics Free math lessons and math homework help from basic math to algebra, geometry and beyond. Students, teachers, parents, and everyone can find solutions to their math problems.
{"url":"http://5uv.xax.christmassdecoration.com/","timestamp":"2014-04-18T01:35:41Z","content_type":null,"content_length":"9093","record_id":"<urn:uuid:517ca413-825a-433d-b37f-4b9a495eca30>","cc-path":"CC-MAIN-2014-15/segments/1397609532374.24/warc/CC-MAIN-20140416005212-00203-ip-10-147-4-33.ec2.internal.warc.gz"}
establishing identities with ^3 and ^4 in them i am asked two similar problems. my question is how it was done, as the example in the book i am reading is useless in explaining. the first question is: (sin^3t + cos^3t)/(sint + cost) = 1 - sint cost the example plays out to what i assume is factoring sint + cost out of it to look like (sint + cost)(sin^2t - sint cost + cos^2t)/(sint + cost) my question is, where did the "sin^2t - sint cost + cos^2t" come from? when i factor it, it doesnt look like that. the next problem is (cos^4t - sin^4t)/cos(2t) how do i get rid of the ^4? thanks for any help provided. Re: establishing identities with ^3 and ^4 in them bikeman wrote: where did the "sin^2t - sint cost + cos^2t" come from? when i factor it, it doesnt look like that. How did you factor? What did you get? (To review how to factor differences of squares and sums and differences of cubes, try here). Re: establishing identities with ^3 and ^4 in them I factored wrong. i found that (cos^4t - sin^4t) is the same as saying (cos^2t + sin^2t)(cos^2t - sin^2t), one part of which is a Pythagorean identity. the lower part of the problem, cos(2t), is part of a series of identities which cancels one half of the upper part of the equation, meaning you are left with 1 × 1, which equals 1
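The two factorizations the thread turns on are standard identities: the sum of cubes and the difference of squares, combined with sin²t + cos²t = 1 and the double-angle formula. A worked sketch:

```latex
% Sum of cubes, a^3 + b^3 = (a + b)(a^2 - ab + b^2), with a = \sin t, b = \cos t:
\frac{\sin^3 t + \cos^3 t}{\sin t + \cos t}
  = \frac{(\sin t + \cos t)\,(\sin^2 t - \sin t \cos t + \cos^2 t)}{\sin t + \cos t}
  = 1 - \sin t \cos t .
% Difference of squares, then the double-angle identity \cos 2t = \cos^2 t - \sin^2 t:
\frac{\cos^4 t - \sin^4 t}{\cos 2t}
  = \frac{(\cos^2 t + \sin^2 t)(\cos^2 t - \sin^2 t)}{\cos 2t}
  = \frac{1 \cdot \cos 2t}{\cos 2t}
  = 1 .
```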
{"url":"http://www.purplemath.com/learning/viewtopic.php?f=12&t=1359&p=4123","timestamp":"2014-04-16T07:29:27Z","content_type":null,"content_length":"23586","record_id":"<urn:uuid:53b61c17-73bb-483f-92dc-27db15e5814e>","cc-path":"CC-MAIN-2014-15/segments/1397609521558.37/warc/CC-MAIN-20140416005201-00472-ip-10-147-4-33.ec2.internal.warc.gz"}
Summary: Appendix to A rigidity theorem for the solvable Baumslag-Solitar groups. Daryl Cooper August 18, 1997 The study of properties of a metric space which are preserved by bilipschitz homeomorphism occurs in the study of groups via a word metric. It was also studied in [1] for a certain type of Cantor set embedded in the real line. The Cantor sets concerned are mild generalizations of the original middle-third Cantor set. This Cantor set has the basic property that it is the union of two exact copies of itself, each scaled down in size by a factor of 1/3. The generalization allows finitely many linear scale factors. It is easy to see that the Hausdorff dimension of this type of Cantor set depends only on these scale factors. Now a bilipschitz homeomorphism preserves Hausdorff dimension, and so a natural question is what, if any, further invariants other than Hausdorff dimension there are. An almost complete answer was given in [1], using invariants derived from the Hausdorff measure. This work was generalized to analogous Cantor sets in Euclidean space of dimension n by H. Vuong in his thesis, [2], [3]. In this section, we generalize in a different direction to abstract metric Cantor sets which possess a certain linear self-similarity structure. There is a further generalization to a much wider class of Cantor set, where the self-similarity structure is smooth rather than linear. This will not be dealt with here. The main result is Theorem (0.6), which states that every bilipschitz homeomorphism between
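The remark that the Hausdorff dimension of such a Cantor set depends only on its scale factors can be made concrete with Moran's equation for the similarity dimension: for a set built from copies scaled by factors r_1, …, r_k, the dimension d solves r_1^d + … + r_k^d = 1. The sketch below (a hypothetical helper, assuming the scale factors sum to at most 1 so that d lies in [0, 1], and assuming the similarity and Hausdorff dimensions agree, as they do under the open set condition) solves it by bisection:

```python
# Similarity dimension via Moran's equation: sum(r ** d) == 1.
# For the middle-third Cantor set (two copies scaled by 1/3) this
# recovers the classical value d = log 2 / log 3.
def similarity_dimension(ratios, iters=200):
    lo, hi = 0.0, 1.0                  # assumes sum(ratios) <= 1
    f = lambda d: sum(r ** d for r in ratios) - 1.0
    for _ in range(iters):
        mid = (lo + hi) / 2
        if f(mid) > 0:                 # sum too large: raise the exponent
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

d = similarity_dimension([1/3, 1/3])
print(round(d, 4))  # 0.6309, i.e. log(2)/log(3)
```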
{"url":"http://www.osti.gov/eprints/topicpages/documents/record/114/3704393.html","timestamp":"2014-04-19T19:40:04Z","content_type":null,"content_length":"8621","record_id":"<urn:uuid:51fe8de5-c3c9-4c17-8071-0b48e37a9996>","cc-path":"CC-MAIN-2014-15/segments/1397609537376.43/warc/CC-MAIN-20140416005217-00390-ip-10-147-4-33.ec2.internal.warc.gz"}
Bridgewater, MA ACT Tutor Find a Bridgewater, MA ACT Tutor I'm a semi-retired lawyer, with years of trial experience. As you might expect from a lawyer, I teach primarily by the Socratic method, leading students to find the right answers themselves. I have excelled in every standardized test I have taken: SAT 786M/740V, LSAT 794, National Merit Finalist. 20 Subjects: including ACT Math, reading, English, writing ...I'd also make sure you can apply them to problems. I've been well rated as a calculus tutor. I'm also well rated as a tutor in general because I take tutoring seriously. 47 Subjects: including ACT Math, chemistry, reading, calculus ...I am also the advisor for the high school math club and the advisor of the National Honor Society at a local high school. I have taught: SAT Prep, Pre-Calculus, Trigonometry, Algebra 2 honors, Algebra 2 standard course, Geometry honors & Standard, Algebra 1, MCAS Prep, Pre-Algebra and 4-8th grade... 12 Subjects: including ACT Math, geometry, algebra 1, GED ...I have since taught multiple levels of Latin in secondary schools for three years. As a high school student, I competed in the original oratory category at forensics competitions of both the Catholic Forensics League and National Forensics League. My cleverly written and well-delivered speeches earned me honors at the district, regional, and state level. 43 Subjects: including ACT Math, English, reading, writing ...During high school, I took the AP exams in the following classes: biology, statistics, European history, United States history, and environmental science and did not receive lower than a 4 out of 5 on any of them. I have tutored many students at American University in science, history and econom... 18 Subjects: including ACT Math, chemistry, statistics, biology
{"url":"http://www.purplemath.com/bridgewater_ma_act_tutors.php","timestamp":"2014-04-19T17:05:53Z","content_type":null,"content_length":"23868","record_id":"<urn:uuid:18db9942-a841-433e-b509-76b579d329d5>","cc-path":"CC-MAIN-2014-15/segments/1397609537308.32/warc/CC-MAIN-20140416005217-00334-ip-10-147-4-33.ec2.internal.warc.gz"}
How many ways are there to split a group of 6 boys into two Question Stats: 51% 48% (00:24) based on 47 sessions yufenshi wrote: How many ways are there to split a group of 6 boys into two groups of 3 boys each? (The order of the groups does not matter) The official answer is B, but I don't understand why in this case we need to divide by 2. It seems to me to be the same question as: how many ways to choose 3 people out of 6 people. In that case it would be 20. 1. The number of ways in which mn different items can be divided equally into n groups, each containing m objects, when the order of the groups is important: (mn)!/(m!)^n. 2. The number of ways in which mn different items can be divided equally into n groups, each containing m objects, when the order of the groups is NOT important: (mn)!/((m!)^n * n!). In the original question, as the order is NOT important, we should use the second formula with mn = 6 objects (people), n = 2 groups and m = 3: 6!/((3!)^2 * 2!) = 10. This can be done in another way as well: C(6,3) * C(3,3) / 2! = 10; we are dividing by 2! as there are 2 groups and order doesn't matter. For example, if we choose the group {ABC} then the group {DEF} is left, and we have two groups {ABC} and {DEF}; but we could also choose {DEF} first, in which case the second group would be {ABC}, giving the same two groups: {ABC} and {DEF}. So to get rid of such duplications we should divide by the factorial of the number of groups: 2!. Answer: B. This concept is also discussed at: Hope it helps.
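The count of 10 unordered splits can also be checked by brute force; a short sketch enumerating every way to divide six boys into two groups of three:

```python
from itertools import combinations

# Count ways to split 6 boys into two unordered groups of 3.
boys = ["A", "B", "C", "D", "E", "F"]
splits = set()
for group in combinations(boys, 3):
    rest = tuple(b for b in boys if b not in group)
    # store the pair of groups as a frozenset so group order doesn't matter
    splits.add(frozenset([group, rest]))
print(len(splits))  # 10, i.e. C(6,3) / 2
```

Each of the C(6,3) = 20 chosen triples determines its complement, and the frozenset collapses the two orderings of each pair, leaving 20 / 2 = 10 distinct splits.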
{"url":"http://gmatclub.com/forum/how-many-ways-are-there-to-split-a-group-of-6-boys-into-two-105381.html","timestamp":"2014-04-17T16:02:01Z","content_type":null,"content_length":"190277","record_id":"<urn:uuid:30c89059-5c46-4885-a2cb-1bdad5db450a>","cc-path":"CC-MAIN-2014-15/segments/1397609537271.8/warc/CC-MAIN-20140416005217-00401-ip-10-147-4-33.ec2.internal.warc.gz"}
Argo, IL Math Tutor Find an Argo, IL Math Tutor My tutoring experience ranges from grade school to college levels, up to and including Calculus II and College Physics. I've tutored at Penn State's Learning Center as well as students at home. My passion for education comes through in my teaching methods, as I believe that all students have the a... 34 Subjects: including prealgebra, statistics, SAT math, physics ...Having taught algebra to 7th graders for eleven years, I have extensive experience assessing and evaluating students for what they need to focus on. Skills I have taught/tutored include fractions, decimals, percents, statistics, geometry, algebra equations, graphing, ratios, proportions, and pro... 40 Subjects: including SAT math, ACT Math, algebra 1, algebra 2 Looking to really excel in Algebra, Geometry, Trigonometry, Calculus, Philosophy, Biology, Chemistry, Spanish, Biochemistry, Writing, or the ACT? Come to the alchemist who will help you understand the language each of these disciplines speaks. In many cases, students fail to achieve the grades the... 26 Subjects: including calculus, chemistry, geometry, ACT Math ...I have taught Pre-algebra 1 and 2. I can communicate well at this level and I have a lot of patience, so check me out! I have taught the material covered in precalculus at several colleges. 25 Subjects: including algebra 1, algebra 2, calculus, geometry I will be teaching honors physics and chemistry this year. This summer, I worked for ComEd's "smart grid" education program. I also spent a year doing ACT tutoring at Huntington Learning Center. I am available for tutoring chemistry, physics, earth science, math, and ACT on the weekends 12 Subjects: including precalculus, statistics, algebra 1, algebra 2
{"url":"http://www.purplemath.com/argo_il_math_tutors.php","timestamp":"2014-04-16T07:28:46Z","content_type":null,"content_length":"23561","record_id":"<urn:uuid:8d74e303-f705-466f-8f5c-9b032e5b56ec>","cc-path":"CC-MAIN-2014-15/segments/1397609521558.37/warc/CC-MAIN-20140416005201-00488-ip-10-147-4-33.ec2.internal.warc.gz"}
31 May 2000 Vol. 5, No. 22A THE MATH FORUM INTERNET NEWS - MAY 2000 DISCUSSIONS This special issue of the Math Forum's weekly newsletter highlights recent interesting conversations on Internet math discussion groups. For a full list of these groups with links to topics covered and information on how to subscribe, see: Replies to individual discussions should be addressed to the appropriate group rather than to the newsletter editor. If you are familiar with a site we don't yet catalog, please use our Web form to suggest the link. Your own brief annotation will be much appreciated. ______________________________ + ______________________________ MAY SUGGESTIONS: CALC-REFORM - a mailing list hosted by e-MATH of the American Mathematical Society (AMS) and archived at - Re: precise use of language (25 May 2000) "... One difficulty that many students have with calculus (and other math courses) is that they don't understand the difference between precise mathematical usage of language and ordinary usage of language. Does anyone else have any info to share on how they have attempted to help students understand this difference?" - Ted Stanford ______________________________ + ______________________________ K12.ED.MATH - a moderated list on general math teaching questions, archived by the Math Forum at - Outdoor math activities (5 May 2000) "I am interested in any activities involving math that can be conducted outdoors. My primary age group is middle school but I might be able to adapt activities geared towards other age groups." 
- Tammy Muhs ______________________________ + ______________________________ MATH-TEACH, a list established to facilitate the discussion of teaching mathematics, including conversations about the NCTM standards (although not officially sponsored by or affiliated with the NCTM); open to subscription, unmoderated, and archived by the Math Forum at: - Subject: Tool Box & Academic Intensity (11 May 2000) "Clifford Adelman's study, 'Answers in a Tool Box: Academic Intensity, Attendance Patterns, and Bachelor's Degree Attainment', is a recent study that educators and education policy makers cannot afford to overlook. His goal was to isolate the principal factors in assessing college completion. Based on a long-term database, statistical analyses identified 'the intensity and quality of the secondary school curriculum' as the best predictor." - Alfred Barron The entire text of the Tool Box papers is available at: A shorter Executive Summary is available at: ______________________________ + ______________________________ NUMERACY, for those interested in the discussion of educational issues around adult mathematical literacy, archived at: - Measurement of the classroom (19 May 2000) "We have chosen to work specifically on the Geometry and Measurement strand of the Frameworks. Does anyone have any good ideas for advanced lessons?" - Krystal "I teach math for the building trades at a Latino adult education center in Philadelphia. Both measurement and geometry are major subjects. I have made several variations of a hands-on assignment depending on the students' experience with measuring. One involves measuring the classroom and computing the amount of tile we'd need to re-do the floor, paint needed for the walls, measurement of baseboards, and volume of the room for air conditioning..." 
- MathLyn ______________________________ + ______________________________ SCI.MATH.RESEARCH, a discussion group focused on research-level mathematics that can be read as a Usenet newsgroup or on the Web: - 'Distance' on permutations ??? (2 May 2000) "Does anyone know of a natural 'metric' on (signed) permutations? I ask as 'expatriot physicist' working in molecular biology; we are currently trying to study the evolutionary forces and constraints governing the shuffling of genes within prokaryotic genomes, and their spatial aggregation into clusters of functionally related genes on the chromosome called 'operons'." - Gordon D. Pusch ______________________________ + ______________________________ HISTORIA-MATEMATICA - a virtual forum for scholarly discussion of the history of mathematics in a broad sense, among professionals and non-professionals with a serious interest in the field, archived at - Mathematics as Theater (28 April 2000) - Mathematics in Literature (7 May 2000) ______________________________ + ______________________________ We hope you will find these selections useful, and that you will browse and participate in the discussion group(s) of your choice. CHECK OUT OUR WEB SITE: The Math Forum http://mathforum.org/ Ask Dr. Math http://mathforum.org/dr.math/ Problems of the Week http://mathforum.org/pow/ Mathematics Library http://mathforum.org/library/ Teacher2Teacher http://mathforum.org/t2t/ Discussion Groups http://mathforum.org/discussions/ Join the Math Forum http://mathforum.org/join.forum.html Send comments to the Math Forum Internet Newsletter editors
{"url":"http://mathforum.org/electronic.newsletter/mf.intnews5.22A.html","timestamp":"2014-04-19T20:35:45Z","content_type":null,"content_length":"10237","record_id":"<urn:uuid:61eaa1ee-8b5e-4dba-85b4-18c162ac11ff>","cc-path":"CC-MAIN-2014-15/segments/1397609537376.43/warc/CC-MAIN-20140416005217-00187-ip-10-147-4-33.ec2.internal.warc.gz"}
Great creation scientist: Blaise Pascal (1623–1662) Outstanding scientist and committed Christian Blaise Pascal was born one of three children on 19 June 1623, in the town of Clermont-Ferrand in rural France. Unfortunately, his mother died when he was only three. The family later moved to Paris. Throughout his life, Blaise’s health was extremely poor, but he was blessed with a brilliant mind. Initially his father feared that learning mathematics might overstrain him, but this only served to arouse Blaise’s interest. At 14, Blaise began attending weekly lectures in mathematics. It was from these weekly meetings of mathematicians that the French Academy of Sciences later developed. When only 16 years old, Blaise wrote a paper on conic sections^1 which was acclaimed by his fellow mathematicians as ‘the most powerful and valuable contribution that had been made to mathematical science since the days of Archimedes.’^2 This paper ‘laid the foundation for the modern treatment of conic sections.’^3 Blaise Pascal (1623–1662) Pascal’s calculating machine Blaise Pascal always tried to make his work in science and mathematics of practical use to mankind. While still a teenager, he invented the first machine to do calculations—an arithmetic machine which could add and subtract. This machine involved a set of wheels, each with the numbers zero through to nine on them. The wheels were connected with gears, so that a complete turn of one wheel would move the wheel next to it through one-tenth of a turn. This machine was of great use to his father—a judge in the taxation court—and to others involved in calculations. Although expensive to make and difficult to operate, Pascal’s calculating machine was an essential step in the subsequent development of calculators and computers. 
Christian beliefs In 1646, Pascal joined the Jansenists—a group of Catholics in France who believed as Calvin did on some doctrines, including salvation through God’s love and grace, rather than through good works. Pascal believed that ‘There is a God-shaped vacuum in the heart of every man which cannot be filled by any created thing, but only by God the Creator, made known through Jesus Christ.’^4 Pascal wholeheartedly believed that the events described in the book of Genesis were actual historical events. The Encyclopaedia Britannica states that Pascal believed ‘man’s wretchedness is explicable only as an effect of the Fall’^5 and that ‘For Pascal as for St Paul, Jesus Christ is the second Adam, inconceivable without the first.’^5 Now a committed Christian, Pascal continued his work in science and mathematics. Pascal’s experiments with the barometer proved the now familiar facts that atmospheric pressure (as shown by the height of the mercury in the barometer) decreases as altitude increases, and also changes as the weather changes. Pascal made a valuable contribution to developing both hydrostatics and hydrodynamics.^6 He showed that the ‘pressure applied to a confined liquid is transmitted undiminished through the liquid in all directions, regardless of the area to which the pressure is applied.’ This is known as Pascal’s Law and is the principle behind the hydraulic press, which Pascal designed. During these experiments with fluids, Pascal invented the syringe. Pascal also investigated the cycloid—the curve formed by a point on the circumference of a circle as the circle rolls along a straight line. Pascal’s discovery of many physical and mathematical properties of the cycloid was an important step towards the later development of calculus by others. Theory of Probability Pascal also worked with another mathematician, Fermat, on the Theory of Probability. 
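Pascal's Law, stated above, is what makes the hydraulic press multiply force: the pressure is the same on both pistons, so the output force scales with piston area. A minimal sketch (the function name and the numbers are hypothetical, not from the article):

```python
# Illustrative sketch of Pascal's Law in a hydraulic press.
def output_force(f_in, area_in, area_out):
    pressure = f_in / area_in      # transmitted undiminished (Pascal's Law)
    return pressure * area_out     # larger piston -> larger force

# 100 N applied to a 0.25 m^2 piston drives a 1.0 m^2 piston:
print(output_force(100.0, 0.25, 1.0))  # 400.0 N
```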
Letters between the two ‘show that Pascal and Fermat participated equally in the creation of the theory.’^7 Although their investigations were carried out on various gambling situations, this theory has an immense number of applications. It is the basis of all insurance schemes and it is of great value to many other branches of science such as quantum physics, where the behaviour of particles can be described using probabilities. Pascal invented a simple method now known as Pascal’s Triangle to determine the probability of certain outcomes. Pascal attended parties where gambling was being conducted, and unfortunately became distracted by this lifestyle. However, Pascal had a narrow escape from death in 1654, when the horses pulling his carriage bolted. The horses were killed, but Pascal was unhurt. Convinced that it was God who had saved him, he reassessed how he was living. From then on, ‘From the age of thirty-one to the day of his death, at the age of thirty-nine, he had but one desire: he lived that he might turn the thoughts of men to his Saviour.’^8 At this time of recommitment to God, Pascal wrote: ‘Certainty! Joy! Peace! ‘I forget the world and everything but God! … ‘I submit myself absolutely to Jesus Christ my Redeemer.’^9 Much of Pascal’s last few years was devoted to his religious writings. He wrote a famous series of 18 letters known as the ‘Provincial Letters,’ considered by critics to mark the beginning of modern French prose. Pascal also wrote the outstanding book Pensées (French for ‘thoughts’) in which he argues the case for his Christian beliefs.^10 Pascal recognized that man could not arrive at all knowledge by his own wisdom.
He wrote that ‘Faith tells us what the senses cannot, but it is not contradictory to their findings.’^11 He also recognized that God was more than just the Creator—He was a loving, personal God as well—‘the God of Abraham, the God of Isaac, the God of Jacob, the God of the Christians is a God of love and …’ Pascal’s Wager Pascal is famous for the statement known as Pascal’s Wager in which he applied his thinking in terms of probabilities to the question of salvation. Pascal’s Wager paraphrased is: ‘How can anyone lose who chooses to become a Christian? If, when he dies, there turns out to be no God and his faith was in vain, he has lost nothing—in fact, he has been happier in life than his non-believing friends. If, however, there is a God and a heaven and hell, then he has gained heaven and his skeptical friends will have lost everything in hell.’^13 When approaching his death, Pascal wrote: ‘And so I stretch forth my hands to my Redeemer, who came to earth to suffer and die for me.’^14 Pascal died on 19 August 1662, in Paris. Despite a short life with constant sickness and pain, this devout Christian made outstanding contributions to science, mathematics, and literature. Pascal’s Triangle Pascal’s triangle is constructed very simply—each number in the triangle is the sum of the two number(s) immediately above it. It is very useful for finding the probability of events where there are only two possible outcomes (i.e. binomial). This includes tossing a coin (head or tail) or having a child (boy or girl). For example, if a coin is tossed three times, there are eight (2 × 2 × 2, or 2^3) possible outcomes: HHH, HHT, HTH, THH, TTH, THT, HTT, TTT.
Translated to probabilities, the chances of the possible outcomes are: 3H — 1/8 (one chance in eight); 2H1T — 3/8; 2T1H — 3/8; 3T — 1/8. Looking at Row 4, we can see that for families with four children, one daughter and three sons is four times as common as having four sons and no daughters, while families with two sons and two daughters are six times as common. There is only one chance in 16 (2^4) of a four-child family having all sons or all daughters. And so on.
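The construction described above (each entry is the sum of the two entries above it) and the Row-3 coin-toss probabilities can be sketched as:

```python
from math import comb

# Build row n of Pascal's triangle by summing adjacent entries.
def pascal_row(n):
    row = [1]
    for _ in range(n):
        row = [a + b for a, b in zip([0] + row, row + [0])]
    return row

print(pascal_row(3))                    # [1, 3, 3, 1]
# Row 3 over 2**3 = 8 outcomes gives the coin-toss probabilities:
print([k / 8 for k in pascal_row(3)])   # [0.125, 0.375, 0.375, 0.125]
# The entries are just the binomial coefficients C(n, k):
assert pascal_row(4) == [comb(4, k) for k in range(5)]
```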
Visual Dictionary Online

mathematics [2]

The science that uses deductive reasoning to study the properties of abstract entities such as numbers, space and functions and the relations between them.

factorial: Product of all positive whole numbers less than and equal to a given number. For example, the factorial of 4 is: 4! = 1x2x3x4 = 24.
integral: Result of the integral calculation, used especially to determine an area and to resolve a differential equation.
infinity: Symbol denoting that a value has no upper limit.
fraction: Sign denoting that the number on the left of the slash (numerator) is one part of the number on the right of the slash (denominator).
sum: Sign indicating that several values are to be added together (their sum).
square root of: Sign denoting a number that, when multiplied by itself, gives the number that appears below the bar.
is not an element of: Binary sign denoting that the element on the left is not included in the set on the right.
is an element of: Binary sign denoting that the element on the left is included in the set on the right.
union of two sets: Binary sign denoting that a set is composed of the sum of the elements of two sets.
intersection of two sets: Binary sign denoting that two sets M and N have elements in common.
is included in/is a subset of: Binary sign denoting that the set A on the left is part of the set B on the right.
percent: Sign denoting that the number preceding it is a fraction of 100.
empty set: Sign denoting that a set contains no elements.
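Most of these symbols map directly onto Python operators, which gives a quick way to check the definitions above. This is a small illustrative sketch added here, not part of the dictionary itself:

```python
import math

# factorial: 4! = 1x2x3x4 = 24
assert math.factorial(4) == 24

M = {1, 2, 3}
N = {3, 4}
assert M | N == {1, 2, 3, 4}   # union of two sets
assert M & N == {3}            # intersection of two sets
assert 2 in M and 5 not in M   # is (not) an element of
assert {1, 2} <= M             # is included in / is a subset of
assert M & {9} == set()        # empty set
print("all symbol checks passed")
```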
[SciPy-User] faster nonzero indices
Francesc Alted faltet@pytables....
Wed Oct 21 02:55:22 CDT 2009

On Wednesday 21 October 2009 06:02:53, Felix Schlesinger wrote:
> Is there a faster way to do:
> foo = scipy.nonzero(bar > 1)[0]
> where bar is a 1d ndarray of type 'int32',
> i.e. to get all indices of an array for which a condition is true.
> Since in this case the arrays are quite large and the condition is only
> true for few items, creating a long boolean array and then passing over it
> again to find nonzero entries seems inefficient.

If the number of elements that evaluate the condition to true is effectively small, and you can afford to have a precomputed array with indexes in memory (typically, an `arange()`), you can try with numexpr [1]:

In [1]: import numpy as np
In [2]: import numexpr as ne
In [3]: bar = np.random.randint(0,1e6,1e6).astype('int32')
In [4]: timeit np.where(bar > 999000)[0]
100 loops, best of 3: 12.1 ms per loop
In [5]: idx = np.arange(len(bar))
In [6]: timeit idx[ne.evaluate('where(bar > 999000, 1, 0)').astype('bool')]
100 loops, best of 3: 7.68 ms per loop

which is more than 1.5x faster than the numpy counterpart. Even if you have to compute idx each time, the above approach is faster:

In [7]: timeit np.arange(len(bar))[ne.evaluate('where(bar > 999000, 1, 0)').astype('bool')]
100 loops, best of 3: 11 ms per loop

although in that case, just by a meager 10%.

[1] http://code.google.com/p/numexpr/

Francesc Alted
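With current NumPy there is also `np.flatnonzero`, which fuses the boolean test and the index extraction into one call; a small sketch (the timings quoted in the thread are from 2009 and will differ today):

```python
import numpy as np

rng = np.random.default_rng(0)
bar = rng.integers(0, 1_000_000, size=1_000_000).astype('int32')

# For a 1-d array this is equivalent to np.nonzero(bar > 999000)[0]:
foo = np.flatnonzero(bar > 999_000)
print(foo.size, "indices found")
```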
ACL 2 seminar at U.T. Austin: Toward proof exchange in the Semantic Web In our PAW and TAMI projects, we're making a lot of progress on the practical aspects of proof exchange: in PAW we're working out the nitty gritty details of making an HTTP client (proxy) and server that exchange proofs, and in TAMI, we're working on user interfaces for audit trails and justifications and on integration with a truth maintenance system. It doesn't concern me too much that cwm does some crazy stuff when finding proofs; it's the proof checker that I expect to deploy as part of trusted computing bases and the proof language specification that I hope will complete the Semantic Web standards stack. But N3 proof exchange is no longer a completely hypothetical problem; the first examples of interoperating with InferenceWeb (via a mapping to PML) and with Euler are working. So it's time to take a close look at the proof representation and the proof theory in more detail. My trip to Austin for a research library symposium at the University of Texas gave me a chance to re-connect with Bob Boyer. A while back, I told him about RDF and asked him about Semantic Web logic issues and he showed me the proof checking part of McCune's Robbins Algebras Are Boolean result: Proofs found by programs are always questionable. Our approach to this problem is to have the theorem prover construct a detailed proof object and have a very simple program (written in a high-level language) check that the proof object is correct. The proof checking program is simple enough that it can be scrutinized by humans, and formal verification is probably feasible. In my Jan 2000 notes, that excerpt is followed by... I offer a 500 brownie-point bounty to anybody who converts it to Java and converts the ()'s in the input format to <>'s. 5 points for perl. ;-) Bob got me invited to the ACL2 seminar this week; in my presentation, Toward proof exchange in the Semantic Web. 
I reviewed a bit of Web Architecture and the standardization status of RDF, RDFS, OWL, and SPARQL as background to demonstrating that we're close to collecting that bounty. (Little did I know in 2000 that TimBL would pick up python so that I could avoid Java as well as perl ;-) Matt Kauffman and company gave all sorts of great feedback on my presentation. I had to go back to the Semantic Web Wave diagram a few times to clarify the boundary between research and standardization:
• RDF is fully standardized/ratified
• turtle has the same expressive capability as RDF's XML syntax, but isn't fully ratified, and
• N3 goes beyond the standards in both syntax and expressiveness
One of the people there who knew about RDF and OWL and such really encouraged me to get N3/turtle done, since every time he does any Semantic Web advocacy, the RDF/XML syntax is a deal-killer. I tried to show them my work on a turtle bnf, but what I was looking for was in June mailing list discussion, not in my February bnf2turtle breadcrumbs item. They asked what happens if an identifier is used before it appears in an @forAll directive and I had to admit that I could test what the software does if they wanted to, but I couldn't be sure whether that was by design or not; exactly how quantification and {}s interact in N3 is sort of an open issue, or at least something I'm not quite sure about. Moore noticed that our conjunction introduction (CI) step doesn't result in a formula whose main connective is conjunction; the conjunction gets pushed inside the quantifiers. It's not wrong, but it's not traditional CI either. I asked about ACL2's proof format, and they said what goes in an ACL2 "book" is not so much a proof as a sequence of lemmas and such, but Jared was working on Milawa, a simple proof checker that can be extended with new proof techniques.
I started talking a little after 4pm; different people left at different times, but it wasn't until about 8 that Matt realized he was late for a squash game and headed out. I went back to visit them in the U.T. tower the next day to follow up on ACL2/N3 connections and Milawa. Matt suggested a translation of N3 quantifiers and {}s into ACL2 that doesn't involve quotation. He offered to guide me as I fleshed it out, but I only got as far as installing lisp and ACL2; I was too tired to get into a coding fugue. Jared not only gave me some essential installation clues, but for every technical topic I brought up, he printed out two papers showing different approaches. I sure hope I can find time to follow up on at least some of this stuff.
Statistics question

January 9th 2011, 06:07 PM #1 (Junior Member, joined Aug 2008)

The average amount of time required to fill orders at a drive-up window has been observed to be 120 seconds, with a standard deviation of 10 seconds. Assuming that the required order-fill time follows a symmetric, bell-shaped distribution, which of the following statements is correct regarding a random sample of 1,000 observations?

a) expect to see approx 680 of the order fill-times falling in the interval from 110 seconds to 120 seconds
b) expect to see approx 955 of the order fill-times falling in the interval from 100 seconds to 140 seconds
c) expect to see approx 27 orders with required fill times of less than 90 seconds
d) expect to see approx 27 orders with required fill times in excess of 150 seconds
e) all of the above

January 9th 2011, 06:16 PM #2

I like the sound of b) over a). The first interval is not centred around the mean. For c), compute $\displaystyle 1000\times P\left(Z <\frac{90-120}{10}\right)$ and for d), $\displaystyle 1000\times P\left(Z >\frac{150-120}{10}\right)$.

January 16th 2011, 11:25 PM #3

(a) is about 341, (b) is about 954, and (c) and (d) are each about 1.4 (1000 × 0.00135), nowhere near 27, so (b) is the correct choice.
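The empirical-rule arithmetic in the thread can be checked with only the standard library; `phi` is a helper defined here for illustration, not part of the thread:

```python
from math import erf, sqrt

def phi(z):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

n, mu, sigma = 1000, 120, 10
b = n * (phi((140 - mu) / sigma) - phi((100 - mu) / sigma))  # 100 s to 140 s
c = n * phi((90 - mu) / sigma)                               # under 90 s
print(round(b, 1), round(c, 2))
```

Here `b` comes out near 954.5, matching option b's "approx 955", while `c` is only about 1.35 orders, which rules out options c, d, and e.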
The Monoidal Category of Hilbert Spaces

Next: Conclusions Up: Quantum Quandaries Previous: The *-Category of Hilbert Spaces

4. The Monoidal Category of Hilbert Spaces

An important goal of the enterprise of physics is to describe, not just one physical system at a time, but also how a large complicated system can be built out of smaller simpler ones. The simplest case is a so-called `joint system': a system built out of two separate parts. Our experience with the everyday world leads us to believe that to specify the state of a joint system, it is necessary and sufficient to specify states of its two parts. (Here and in what follows, by `states' we always mean what physicists call `pure states'.) In other words, a state of the joint system is just an ordered pair of states of its parts. One of the more shocking discoveries of the twentieth century is that this is wrong. In both classical and quantum physics, given states of each part we get a state of the joint system. But only in classical physics is every state of the joint system of this form! In quantum physics there are also `entangled' states, which can only be described as superpositions of states of this form. The reason is that in quantum theory, the states of a system are no longer described by a set, but by a Hilbert space. Moreover -- and this is really an extra assumption -- the states of a joint system are described not by the cartesian product of Hilbert spaces, but by their tensor product. Quite generally, we can imagine using objects in any category to describe physical systems, and morphisms between these to describe processes. In order to handle joint systems, this category will need to have some sort of `tensor product' that gives an object describing the joint system built from any two objects. To see this in detail, it pays to go back to the beginning and think about cartesian products. Given two sets, there are various ways to define the set of ordered pairs of their elements [17]. In traditional set theory we arbitrarily choose one approach to ordered pairs and then stick with it.
Apart from issues of convenience or elegance, it does not matter which we choose, so long as it `gets the job done'. In other words, all these approaches are just technical tricks for implementing our goal, which is to make sure that two ordered pairs are equal precisely when their first components agree and their second components agree. It is a bit annoying that the definition of ordered pair cannot get straight to the point and capture the concept without recourse to an arbitrary trick. It is natural to seek an approach that focuses more on the structural role of ordered pairs in mathematics and less on their implementation. This is what category theory provides. The reason traditional set theory arbitrarily chooses a specific implementation of the ordered pair concept is that it seems difficult to speak precisely about ``some thing with a given property'' without exhibiting a particular such thing. The cartesian product of two sets comes equipped with projections onto its two factors. Secretly we know that these pick out the first or second component of any ordered pair in the product. But, our goal is to characterize the product by means of these projections without explicit reference to ordered pairs. For this, the key property of the projections is that given any element of each factor, there is a unique element of the product whose projections are those two elements. Thus, given two sets, we define a cartesian product to be any set equipped with two projection functions having this property. Note that with this definition, the cartesian product is not unique! Wiener's definition of ordered pairs gives one cartesian product of two sets, and the other definitions give others; all of them are isomorphic in a canonical way [27]. All this generalizes painlessly to an arbitrary category. Given two objects, we define a cartesian product (or simply product) to be any object equipped with two morphisms to them, called projections, such that for any object with a pair of morphisms to the two factors, there is a unique morphism from it to the product compatible with the projections. We say a category has binary products if every pair of objects has a product. One can also talk about the product of no objects at all, called a terminal object; a category with binary products and a terminal object is said to have finite products. It turns out that these concepts capture much of our intuition about joint systems in classical physics. In the most stripped-down version of classical physics, the states of a system are described as elements of a mere set. In more elaborate versions, the states of a system form an object in some fancier category, such as the category of topological spaces or manifolds.
But, just like the category of sets, these fancier categories have finite products. To sketch how this works in general, suppose we have any category with finite products. To do physics with this, we think of any of the objects of this category as describing some physical system. It sounds a bit vague to say that a physical system is `described by' some object; concretely, a state of the system is an element of the object. Next, we think of any morphism as a process carrying states of one system to states of another. Then, given two systems that are described by two objects, the joint system is described by their product, and the projections are processes carrying a state of the joint system to the corresponding states of its parts. Calling these projections `processes' may strike the reader as strange, since `discarding information' sounds like a subjective change of our description of the system, rather than an objective physical process like time evolution. However, it is worth noting that in special relativity, time evolution corresponds to a change of coordinates. With this groundwork laid, we can use the definition of `product' to show that a state of a joint system is just an ordered pair of states of each part. First suppose we have states of each part; by the defining property of the product, this pair of states determines a unique state of the joint system, and conversely every state of the joint system arises in this way. However, the situation changes drastically when we switch to quantum theory! The states of a quantum system can still be thought of as forming a set. However, we do not take the product of these sets to be the set of states for a joint quantum system. Instead, we describe states of a system as unit vectors in a Hilbert space, modulo phase. We define the Hilbert space for a joint system to be the tensor product of the Hilbert spaces for its parts. The tensor product of Hilbert spaces is not a cartesian product in the sense defined above, since it does not come equipped with projections having the required universal property. The usual derivation of Bell's inequality assumes that a state of a joint system determines states of its parts [3,8], so violations of Bell's inequality should be seen as an indication that this assumption fails. The Wootters-Zurek argument that `one cannot clone a quantum state' [32] is also based on the fact that the tensor product of Hilbert spaces is not cartesian. To get some sense of this, note that a cartesian product always comes equipped with a diagonal map, sending each state to the pair consisting of that state and itself; such a map duplicates information, just as the projections discard information.
Since the tensor product is not a cartesian product in the sense explained above, what exactly is it? To answer this, we need the definition of a `monoidal category'. Monoidal categories were introduced by Mac Lane [23] in the early 1960s, precisely in order to capture those features common to all categories equipped with a well-behaved but not necessarily cartesian product. Since the definition is a bit long, let us first present it and then discuss it:

Definition. A monoidal category consists of:

(i) a category;

(ii) a functor, the tensor product, assigning to each pair of objects an object and to each pair of morphisms a morphism;

(iii) a unit object;

(iv) natural isomorphisms called the associator, which re-brackets a threefold tensor product, the left unit law, and the right unit law;

such that the following diagrams commute for all objects:

(v) the pentagon built from associators;

(vi) the triangle relating the associator to the left and right unit laws.

This obviously requires some explanation! First, it makes use of some notions we have not explained yet, ruining our otherwise admirably self-contained treatment of category theory. For example, what is the product category on which the tensor product functor is defined? In physics, we think of the tensor product as the operation combining two systems into a joint system. Similarly, in Hilb the tensor product of objects is the usual tensor product of Hilbert spaces. It is a further reflection of the deep structural analogy between quantum theory and the conception of spacetime embodied in general relativity. Turning to clause (iii) in the definition, we see that a monoidal category needs to have a `unit object'; in Hilb this is just the complex numbers. This raises an interesting point of comparison. In classical physics we describe systems using objects in a category with finite products, and a state of the system corresponding to an object is a morphism into it from the terminal object; similarly, in quantum physics a state is a morphism from the unit object into the system's Hilbert space. As discussed in Section 3, such operators are in one-to-one correspondence with vectors in the Hilbert space. Next, let us ponder clause (iv) of the definition of monoidal category. Here we see that the tensor product is associative, but only up to a specified isomorphism, called the `associator'. For example, in Hilb the two ways of bracketing a threefold tensor product give Hilbert spaces that are not literally equal; the associator is the canonical isomorphism between them, given by re-bracketing. Similarly, we do not have literal equality between a Hilbert space and its tensor product with the unit object, only the unit isomorphisms. Moreover, all these isomorphisms are `natural' in a precise sense.
For example, when we say the associator is natural, we mean that for any bounded linear operators between the Hilbert spaces involved, a certain square built from the associator commutes. In other words, composing the top morphism with the right-hand one gives the same result as composing the left-hand one with the bottom one. This compatibility condition expresses the fact that no arbitrary choices are required to define the associator: in particular, it is defined in a basis-independent manner. Similar but simpler `naturality squares' must commute for the left and right unit laws. Finally, what about clauses (v) and (vi) in the definition of monoidal category? These are so-called `coherence laws', which let us manipulate isomorphisms with the same ease as if they were equations. Repeated use of the associator lets us construct an isomorphism from any parenthesization of a tensor product of objects to any other parenthesization -- for example, from one bracketing of a fourfold tensor product to another. There may be many such isomorphisms -- and in this example, the pentagonal diagram in clause (v) shows two. We would like to be sure that all such isomorphisms from one parenthesization to another are equal. In his fundamental paper on monoidal categories, Mac Lane [23] showed that the commuting pentagon in clause (v) guarantees this, not just for a tensor product of four objects, but for arbitrarily many. He also showed that clause (vi) gives a similar guarantee for isomorphisms constructed using the left and right unit laws.

© 2004 John Baez
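For reference, the coherence data of clauses (iv)-(vi) can be written out in symbols. This is the standard Mac Lane formulation, not a quotation from the text above:

```latex
% Associator and unit laws (natural isomorphisms):
%   a_{X,Y,Z} : (X \otimes Y) \otimes Z \to X \otimes (Y \otimes Z)
%   l_X : 1 \otimes X \to X, \qquad r_X : X \otimes 1 \to X
%
% Pentagon identity (clause (v)), for all objects W, X, Y, Z:
a_{W,X,Y\otimes Z} \circ a_{W\otimes X,Y,Z}
  = (\mathrm{id}_W \otimes a_{X,Y,Z}) \circ a_{W,X\otimes Y,Z}
    \circ (a_{W,X,Y} \otimes \mathrm{id}_Z)
%
% Triangle identity (clause (vi)), for all objects X, Y:
(\mathrm{id}_X \otimes l_Y) \circ a_{X,1,Y} = r_X \otimes \mathrm{id}_Y
```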
what is zero
I don't see how this could be nonsensible. It could lead to a new number system that doesn't have nice properties. (e.g. maybe division isn't well-defined) It could require more complicated notation. (maybe we have to represent things like [itex]a + bi + ci^2[/itex]) The definition might not be self-contradictory, but it might lead to a contradiction in the future! These are things that one might worry about when trying to extend a number system. However, there is a nice theorem: Let [itex]F[/itex] be a field, and let [itex]p(x)[/itex] be an irreducible polynomial[itex]^1[/itex] of degree [itex]n[/itex] whose coefficients are in [itex]F[/itex]. Then, there exists a field [itex]F(\alpha)[/itex] that consists of all numbers of the form: [tex]a_0 + a_1 \alpha + a_2 \alpha^2 + ... + a_{n-1} \alpha^{n-1}[/tex] where all of the [itex]a_i[/itex] are elements of [itex]F[/itex], and we have that [itex]p(\alpha) = 0[/itex]. So the mathematics behind the curtain says we're rigorously justified to do this sort of thing. The mathematics behind the curtain also says there's exactly one extension of the real numbers that you can make in this way: the complex numbers. Here's an example of something that yields a system that has less nice properties (though this system is an interesting one): Define [itex]h[/itex] to be a root of [itex]x^2 - 1[/itex], but also require that [itex]h \neq 1[/itex]. Then we consider numbers of the form [itex]a + bh[/itex] where [itex]a, b \in \mathbb{R}[/itex]. (that is, a and b are real numbers) The problem with this system is that division isn't always possible. For instance, what is [itex](1 - h)^{-1}[/itex]? If we try a generic possibility [itex](1 - h)^{-1} = (a + bh)[/itex] and multiply, we find: [tex]\begin{aligned}(a + bh)(1 - h) &= a + bh - ah - bh^2 \\ &= (a - b) + (b - a)h\end{aligned}[/tex] So no matter what we choose for [itex]a[/itex] and [itex]b[/itex], this cannot equal 1, thus [itex]1 - h[/itex] doesn't have a multiplicative inverse; i.e.
we cannot divide by [itex]1 - h[/itex]. So we have to be careful when we decree things like this; the result might not be quite what we expect! [:)] Are you saying that this use of [itex]\alpha[/itex] is a better way to define complex numbers? No, I just wanted to make another example before leaving the comfortable world of the real and complex numbers. [:)] Besides, IMHO, playing around with at least one alternate definition of [itex]\mathbb{C}[/itex] is a good exercise for understanding things. I don't get it. Isn't it already in that form, that is, two factors of that form? Well, if we foil, we get: [tex](a + b\alpha)(c + d\alpha) = ac + (ad + bc)\alpha + bd\alpha^2[/tex] which is not of the form [itex]p + q \alpha[/itex]. Are you talking about the analogy of extending the real numbers into the complex numbers? Yep. It turns out that this is an extremely useful thing to do with finite fields, or with the rational numbers. Is this F[2] the short way of writing "the field of integers mod 2?" Yes. Some other ways of writing it are [itex]GF(2)[/itex] and [itex]\mathbb{Z}_2[/itex]. Is this just a random polynomial that you chose as an example that doesn't have roots in mod 2, or is there something special/conventional about it? There are 4 polynomials of degree 2 over [itex]F_2[/itex]. [itex]x^2 + x + 1[/itex] is the only one that is irreducible. 1: An irreducible polynomial is one whose only factors are multiples of itself and multiples of 1. The field in question is important; e.g. [itex]2x^2 - 4[/itex] is irreducible over the rational numbers, but factors as [itex](2x - 2\sqrt{2})(x + \sqrt{2})[/itex] over the real numbers.
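The failure of division in the a + bh system can be made concrete with a tiny sketch; `Split` is a made-up class name introduced here for numbers a + bh with h² = 1:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Split:
    """Numbers a + b*h with h*h = 1 but h != 1 (an illustrative sketch)."""
    a: float
    b: float

    def __mul__(self, other):
        # (a + bh)(c + dh) = (ac + bd) + (ad + bc)h, using h^2 = 1
        return Split(self.a * other.a + self.b * other.b,
                     self.a * other.b + self.b * other.a)

one_minus_h = Split(1, -1)
one_plus_h = Split(1, 1)
# (1 - h)(1 + h) = 1 - h^2 = 0, so 1 - h is a zero divisor: no choice of
# a + bh can multiply it back to 1, exactly as the thread computes.
print(one_minus_h * one_plus_h)
```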
Triple Integral

October 30th 2006, 06:18 PM #1

#14: f(x,y,z) = z. W is the region in the first octant bounded by the cylinder y^2 + z^2 = 9 and the planes y = x, x = 0 and z = 0.

My first integral has limits between x = 0 and x = y^2 + z^2 - 9. My second integral has limits between z = 0 and z = sqrt(9-y^2). My final integral ranges between y = 0 and y = 3.

I don't think it's right because I end up with a negative answer; I end up with -162/5. What am I doing wrong?

October 30th 2006, 06:44 PM #2 (Global Moderator, joined Nov 2005, New York City)

Sorry, I do not have a 3-d grapher, so accept my hand drawing on the bottom. So we are finding
$\int \int_V \int z \, dV$
which by Fubini's theorem is
$\int_A \int \int_0^{\sqrt{9-y^2}} z \, dz\, dA$
where $A$ is your region. Which is
$\int_0^3 \int_x^3 \int_0^{\sqrt{9-y^2}} z\, dz\, dy\, dx$
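The responder's iterated integral can be sanity-checked numerically. Swapping the outer order (x runs from 0 to y) reduces it to a single integral of y(9 - y^2)/2; this quick sketch is not part of the thread:

```python
# After the inner z-integral gives (9 - y^2)/2, the region 0 <= x <= y <= 3
# lets us write  I = integral_0^3 y * (9 - y^2) / 2 dy,
# whose exact value is 81/8 = 10.125. A midpoint rule recovers it closely.
n = 100_000
h = 3.0 / n
total = 0.0
for k in range(n):
    y = (k + 0.5) * h
    total += y * (9 - y * y) / 2 * h
print(total)
```

The positive value near 10.125 also confirms that the student's negative -162/5 cannot be right, since the integrand z is nonnegative on W.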
Lecture 19: Exam 2 Review

PROFESSOR: The topics for the exam are on the web. You click on schedules and you can see the topics as we cover them at the various lectures. There is no way that I can cover all of this during one exam, and there's no way that I can cover them during this exam review. What I do not cover today can, and probably will, be on the exam. And not all that I cover today will be on the exam. I have to make a choice, and I make my choice. I want to start with a traveling wave in a string. Let the string have tension T, mass per unit length mu, and let the traveling wave have an amplitude A. So here is a traveling wave, and this amplitude is then A. And the speed of propagation is v. And this v is the square root of T divided by mu. That follows from the wave equation, but I'm not going to address that issue now. I want to deal with energy. And first we'll write down the equation for the traveling wave. If this is the x direction, and this is the y direction, then y as a function of x and t would then be the amplitude A that you have there. And then you can write this down in many ways. I will write it down as the sine of k times x minus vt. If you want to write it in terms of k and omega, that's fine too. I have no problems with that. So the idea now is: what is the energy in one wavelength, say? There are two parts to it. There's kinetic energy and there is potential energy. The kinetic energy is not because the wave is moving in this direction. There is no material moving in this direction. The material is just moving up and down. That's all it's doing. So the kinetic energy is due to the motion in the y direction. There is potential energy due to the fact that I have to give this straight wire, or this straight string, a shape. And in order to give it that shape, I have to do work. There's a tension and I have to stretch the string. That's why I have to do work to give it that shape. So it comes in two parts.
And I will only do the kinetic energy part, and I will leave you with the potential energy part. I slice out here a section dx. And so the mass in that section dm is mu times dx. And so the little bit of kinetic energy, K stands for kinetic energy, in this small part is one half times the mass-- one half mv squared-- and v is the speed, the velocity, in the y direction. If the wave is traveling in this direction, this material here is going up. A little later it will move to the right, and so this is going up at this moment in time. So I can write for this, one half times mu times dx. And for vy I can write then, dy dt squared. And so the question now is what is dy dt? Well that's easy, because we have the function there. So dy dt, equals. So first I get A, then out pops a k. Then out pops the-- minus-- v. And then the sine becomes a cosine. Cosine k times x minus vt. But I have to square that. So I have a square here, and I get a square here. So dK can now be written as one half times mu. And I'll leave the dx all the way at the end. And so I'll get A squared. I will get k squared. I will get v squared. And I'll get the cosine squared of k times x minus vt. And then at the very end, I get my dx. And I now want to know how much kinetic energy there is in one wavelength, because the total amount of energy of this string, which is infinitely long, is of course infinitely high. So I'm interested in the amount of kinetic energy for one wavelength. And so that K then is the integral of this whole thing from zero to lambda. So that now is no longer K, but that now is K provided that you realize that it is K, kinetic energy per unit wavelength. So I only do it for one wavelength. All right. So I get 1/2. I'm going to get here a mu. I get an A squared. I'm going to write down for k, two pi divided by lambda. So that becomes four pi squared divided by lambda squared. For v squared, I'm going to write down T divided by mu. Remember, the velocity was the square root of T divided by mu.
And now, I have to do the integral of this cosine squared, dx, between zero and lambda. And you will take my word for it that this is lambda divided by 2. And so I have to multiply. So the integral, that's the only one that I have to do, these are constant, is lambda divided by 2. And if you look now, you lose a mu. Your 4 goes and you lose even one lambda. And so you get A squared times pi squared times T divided by lambda. That is the kinetic energy per wavelength. So I'll leave you with the potential energy. It's a similar derivation, but now you have to deal with work that you have to do. And what comes out is perhaps a bit of a surprise. That the kinetic energy is exactly the same as the potential energy. Not all obvious, but that's the way it works out. So that means that the total energy per wavelength, so now I write down E total, again per wavelength. wl stands for wavelength. Not for Walter Lewin, but for wavelength. The total energy per wavelength is now twice that. So it's 2A squared times pi squared T divided by lambda. Now suppose I am generating this wave. I'm standing somewhere and I'm wiggling this. Then I have to generate this energy for every period that I go through, because in one period I generate one wavelength. So if I divide this by the time for one oscillation, then I have the average power over one oscillation. Now the period of an oscillation we normally call T. But I don't want to have another T, because we already have a T, so I call the period of the oscillation 2 pi divided by omega. Which is the same as the period. And so the power then, the average power that I have to generate if I am driving this travelling wave into the string, the average power is then E total. You have that here. Divided by 2 pi by omega. And this is in watts of course. This is joules per second. Now we turn to standing waves. The situation is very different. So here, I have a standing wave. There are nodes here, and at those nodes the string does not move. 
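As a sanity check on the result K = A²π²T/λ, one can integrate ½μ(dy/dt)² numerically over one wavelength. All parameter values below are arbitrary illustrations, not from the lecture:

```python
from math import cos, pi, sqrt

# Check K_per_wavelength = A^2 * pi^2 * T / lambda by direct numerical
# integration of dK = (1/2) mu (dy/dt)^2 dx over one wavelength.
mu, T, A, lam = 0.02, 50.0, 0.01, 0.5   # kg/m, N, m, m (illustrative values)
k = 2 * pi / lam
v = sqrt(T / mu)
t = 0.0                                  # any instant gives the same integral

n = 10_000
dx = lam / n
K = 0.0
for i in range(n):
    x = (i + 0.5) * dx
    vy = -A * k * v * cos(k * (x - v * t))   # dy/dt for y = A sin(k(x - vt))
    K += 0.5 * mu * vy * vy * dx

print(K, A**2 * pi**2 * T / lam)
```

The two printed numbers agree, confirming that the cos² integral over one wavelength contributes exactly λ/2.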
Let's assume that the maximum displacement is A. Again, the tension is T. Mu is the same. But when it comes to a halt, the maximum displacement in the center here is then A. And the question now is, how much energy is there in a standing wave? This was a traveling wave. This is a standing wave. Let's assume the maximum displacement is A. At this moment in time, when it comes to a halt, there's no kinetic energy. There's no movement in the y direction. So the total energy that is in the wave per wavelength is the same as U maximum. But of course, it's also the same as K maximum. When this thing goes through its equilibrium, when the string is straight, then you have the highest velocity. This comes down and has a velocity in this direction. This comes up and has a velocity in that direction. That is then the maximum kinetic energy that you can have. So E total is U max is K max. And so, we already know what U is. So all we have to say now is that is the same as A squared times pi squared times T divided by lambda. And so you see, for the same amplitude, a traveling wave has twice as much energy per wavelength as a standing wave. And as far as power is concerned, well, you have two waves going through each other. And so you have a reflection from the other side. So you really don't have to do anything anymore to drive the system. You just let it sit. If there's no energy dissipation, the standing wave will just support itself. So you do not have this power that you have to continuously put in, because in the case of a traveling wave you are continuously generating a wave that moves away from you. That is not the case with a standing wave. It's generated at one point in time. It reflects and it maintains itself. Now let's turn to electromagnetic traveling waves. For electromagnetic traveling waves, I take plane wave solutions in their most general form. E as a function of x, y, z, and t can be written as an amplitude, but this is in three directions.
So it has an x component, a y component, and a z component. And then I could write down here say, for instance, the cosine of omega t minus k dot r. It's a dot product. And the meaning of k, k is called the propagation vector, is kx, x roof, plus ky, y roof, plus kz, z roof. The magnitude of k is the square root of kx squared plus ky squared plus kz squared. Lambda is two pi divided by that k, and omega equals k times v. And if it is in a vacuum, then v is the same as c of course. But v then is c divided by n if you have a dielectric. The index of refraction n for a dielectric is then the square root of kappa e times kappa m. For most substances, kappa m is very, very close to 1.000, except for ferromagnetic materials. kappa e itself is frequency dependent. And it can sometimes be extremely strongly dependent on frequency. That's where the dispersion comes in. r equals x, x roof, plus y, y roof, plus z, z roof. That is the position vector. If you wanted to know what the associated magnetic field is, associated with this electric field, then your best bet always is that the curl of E is minus dB dt. So, if you know that whole function, in x, y and z, you can take the curl of E. Sometimes it's time consuming. Sometimes it's fast, depends on the wave. And then you have to do an integral in time to get the B vector. And what comes out of this is actually something that I do remember. There's very little in physics that I remember. This is one of the things that I remember, so I don't have to apply Maxwell's equations every time when I solve these problems. And what I remember is what I wrote down earlier, point by point, on the blackboard. For traveling waves, E is perpendicular to B. E and B are both perpendicular to k. So E and B are perpendicular to k. Therefore, E cross B-- let me give that unit vectors-- E cross B is k roof. The Poynting vector, remember, goes in the direction of k.
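[Editorial aside] The bookkeeping above in one place: given components of k, the magnitude, wavelength, and frequency follow, and in a dielectric the speed drops to c/n with n = √(κE κM). The component values and the dielectric constants below are illustrative assumptions.

```python
import math

c = 2.998e8                      # speed of light in vacuum [m/s]
kx, ky, kz = 3.0, 4.0, 0.0       # illustrative components of k [rad/m]

k = math.sqrt(kx**2 + ky**2 + kz**2)   # magnitude of the propagation vector
lam = 2 * math.pi / k                  # wavelength
kappa_e, kappa_m = 2.25, 1.0           # glass-like dielectric (assumed values)
n = math.sqrt(kappa_e * kappa_m)       # index of refraction
v = c / n                              # wave speed in the medium
omega = k * v                          # dispersion relation omega = k*v
```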
The magnitude of B, at any moment in time in a traveling wave, is the magnitude of E divided by v, which can be c, of course. And then, and this is key and I want to stress that, E and B are in phase with each other. In phase in space and in phase in time. And what that means is the following. The location where E reaches a maximum is also the location where B will reach a maximum. And they reach it at the same moment in time. So if you are somewhere, and you find that the E vector reaches a maximum, then you know that the B vector at that moment will also reach its maximum. If you're somewhere in space where E is 0, you can be sure that B is also 0. That is the case for traveling waves. The Poynting vector, S, is E cross B divided by mu 0. And if there were magnetic material, you would also have a kappa m here, but I will leave that out. For the magnitude of B you can always write down E divided by v. This Poynting vector, of course, is time variable. Because E itself varies with frequency omega. B varies with frequency omega. So the Poynting vector is, of course, time dependent. And so the magnitude of the Poynting vector, you could also write down then, as the E squared value divided by mu zero divided by v. Because the B can then be replaced by E divided by v. Since they're at 90 degrees relative to each other, I can ignore the cross product, because the sine of the angle is then 1. If E is a sinusoidal function, for instance E0 times cosine omega t, then it's clear that you get here the square, E0 squared, times the square of cosine omega t. If you time average that-- this S is variable in time, but if you just want to know what the time average value is-- then the time average value of the cosine squared equals 1/2. And so you can also write down then that S-- I'll write it down here-- S time averaged would then be E0 squared, this is now the amplitude, divided by two mu 0 times v.
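[Editorial aside] A numeric check of that factor of two: averaging S(t) = E0² cos²(ωt)/(μ0 c) over a full period should give E0²/(2 μ0 c). E0 here is an arbitrary illustrative amplitude.

```python
import math

mu0 = 4 * math.pi * 1e-7    # permeability of free space [T*m/A]
c = 2.998e8                 # speed of light [m/s]
E0 = 1000.0                 # illustrative field amplitude [V/m]

# Average S(t) over one period by sampling the phase wt uniformly on [0, 2*pi)
N = 100000
S_avg_num = sum((E0 * math.cos(2 * math.pi * (i + 0.5) / N)) ** 2
                for i in range(N)) / N / (mu0 * c)

S_avg = E0**2 / (2 * mu0 * c)   # closed form: the 1/2 is <cos^2> over a period
```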
And the two comes from the fact that the average value of cosine squared omega t equals one half. And this is then in watts per square meter. So that's the traveling wave. Now I want to go to standing waves. A standing electromagnetic wave, just like the one on the string, has a separation of space and time. Let us suppose we take a specific example. We have here a coordinate system. I will call this x. Call this y. Call this z. Suppose we have linearly polarized radiation, and I'll make it a little bit more difficult than normal: I will linearly polarize it in the yz plane. In other words, the E vector would be like this then, oscillating back and forth in this plane perpendicular to k. So the standing wave, E, as a function of x, y, z, and t, can then be written as this vector. This here would be E0y at its maximum. This would be E0z at its maximum. So that's the one that comes first. So you get E0y in the y direction plus E0z in the z direction. That is the direction of that E vector. And now you get your spatial part, in the x direction, for which you can write down either the sine of kx times x. Or, if you prefer, cosine. I have no problem with that. And then we have here cosine omega t. There is no dependence of E on y and z; these are plane waves. So if you take any plane perpendicular to x, infinite in size, independent of y and z, the E vector will be the same. And so, therefore, k of y is 0 and k of z is also 0. So there's only a k of x. And so k then equals k of x. And so, lambda is two pi divided by this value then. And omega divided by that k, omega divided by this k value-- I write an x but you can leave the x out-- would then be v, or it could be c of course. If you're interested in knowing what the B field is, you know that the curl of E is minus dB dt. The fact that we have here the spatial part, separated from the time part, means that there are locations in x where the E field is always 0, at all moments in time. That's typical for a standing wave.
And in this case, there will be nodal planes. The whole plane perpendicular to the x direction, I will draw one here, and I will draw another one here. The E fields in that whole plane, at all moments in time, will be 0. That's the case when this sine, or the cosine if you've chosen that, happens to be 0. And so these are nodal surfaces. And of course, the anti-nodal surfaces fall right in-between. It is not so easy to draw now for you what you will actually be seeing in terms of this sinusoidal wiggle that goes like this. That is not so easy to do, because the wave will be in the plane that is coming out like this. And it's difficult to show you that in a three dimensional way. I will make an attempt, nevertheless. So it will be in this plane, that is tilting forward, that you'll then have nodes and anti-nodes, and they oscillate like this. Any plane perpendicular to x, at any moment in time, will see exactly the same thing. Now it's interesting to compare this list with the one we had for the traveling wave. Again, with the standing wave, E is perpendicular to B. It is not too useful to talk about direction of propagation, because there are really two waves going through each other. So the whole idea of a k vector is a little bit bizarre. B however, the magnitude of B, its maximum possible value, will be E divided by v. But now comes the big surprise. E and B are 90 degrees out of phase in space and time. So if you have here the nodal planes for E, those are the anti-nodal planes for B. And where you have the anti-nodal planes for E, you would have the nodal planes for B. So they are 90 degrees out of phase in space and in time. So no surprise, of course, that the average value of the Poynting vector is then going to be 0. And that, of course, can also be seen when you think of it that there's a traveling wave in this direction and a traveling wave in this direction.
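[Editorial aside] That the time-averaged Poynting vector vanishes can be checked directly: with E ∝ sin(kx) cos(ωt) and B ∝ cos(kx) sin(ωt), the 90° offsets in space and time, the product averages to zero at every position, not just at the nodes. A small sketch with units dropped:

```python
import math

# E ~ sin(phase_x) cos(wt), B ~ cos(phase_x) sin(wt): 90 degrees out of
# phase in both space and time, as stated above. S ~ E*B.
def S_time_avg(phase_x, samples=100000):
    """Time average of E*B over one full period, at fixed spatial phase."""
    total = 0.0
    for i in range(samples):
        wt = 2 * math.pi * (i + 0.5) / samples
        E = math.sin(phase_x) * math.cos(wt)
        B = math.cos(phase_x) * math.sin(wt)
        total += E * B
    return total / samples

# Zero everywhere along x, within floating-point noise
averages = [S_time_avg(x) for x in (0.3, 1.0, 2.2)]
```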
So there's sort of energy flow in this direction and energy flow in this direction. And the time average will obviously end up to be 0. But if you simply take that E cross B, and you time average it, you will see that immediately. This is a consequence of the 90 degrees out of phase in space and in time. All right. Now I would like to pursue this. I can either clean the blackboard or let me go to the other side. I'm going now to accelerated charges. So here we'll call that the y-axis quite arbitrarily, and here we have the z-axis. And I accelerate a charge q here with acceleration a. And I do that just in the z direction. And I am watching at a certain distance. My position vector is r and I'm watching here. And this angle is theta. If this charge is a positive charge, and there is a sudden acceleration, a little later in time I will see an electric field. There's a traveling wave going in my direction. It's not a plane wave, but there is a traveling wave going in my direction, in the direction r. And so, that E vector then is in this direction. I will return to this. It's in the opposite direction of a perpendicular. This vector, which is the component of a perpendicular to r, is called a perpendicular. If q is positive, then this E vector is in the opposite direction of a perpendicular. The E vector that you experience at time t is then minus q times a perpendicular at time t prime. Because the acceleration took place earlier than when the signal arrives where I am, because it takes time to travel. In other words, t prime equals t minus r over c. This is the time that it takes for the signal to reach me. And so, t prime is earlier than t. And then we have this divided by 4 pi, epsilon zero, r, and I believe there is here a c squared. Yes there is. And this is in volts per meter. Now a can easily be some amplitude times the sine of omega t. In other words, I can oscillate this one up and down harmonically.
Clearly, when I do that, I will get an electric field that will also oscillate here harmonically. You really do not get plane wave solutions, because in any direction that you look, the r vector changes. And you have here the answer of what you see, the E vector. So there's really no such thing as a plane wave solution. If you're far away you can probably approximate it by a plane wave. The associated B field has all the ingredients that we are familiar with. It's perpendicular to E. And the B field must also be perpendicular to the direction of propagation. Which, in this case, is r. And E cross B will also be in this direction. And if you take all that into account, then you can write the B vector as the unit vector in the r direction crossed with E, divided by c, the speed of light, if it is in vacuum, all evaluated at r and t. But that's the connection between E and B. So now comes an interesting point that we have stressed before. And that is that a perpendicular depends on theta; a perpendicular is a times the sine of theta. It is this component. And so you see that the magnitude of the E vector, that you will see when you look at that charge, will depend on where you are in space. If you happen to be at theta 90 degrees, you will see the maximum E possible. If you happen to be in this direction, where theta is 0, you won't see any electric field. And so the magnitude of the E vector has a very strong dependence on direction. And the same is true for the B vector, because the B and the E vector are always married to each other. So they get the same sine theta. So the Poynting vector is now going to be proportional to sine squared theta, because you have an E and you have a B. I would like to summarize for you, in the same way that I did that here, what I would like you to remember. It's simple. I will raise this later again, because I want to work at a height so that you can see what I'm doing.
So in summary, which is very good to remember, is that E is in the plane through r and a. In the plane does not mean into the plane. It's in the plane. So look here, the plane through a and r is the blackboard. And therefore, the E vector is in the blackboard in this case. That is key. E is in the plane through r and a, not into the plane. And E is perpendicular to r. Notice I have that. And the E amplitude is proportional to the magnitude of a perpendicular. And therefore, it is proportional to the sine of theta. E itself is proportional to one over r. Notice that, not one over r squared but one over r. Which is obvious, it has to do with the conservation of energy. If you make your sphere around it, then the energy that flows through the sphere must be the same no matter how large your sphere is. And since that energy comes from the Poynting vector, as an E cross B, E is inversely proportional with r and B is inversely proportional with r. So that energy is conserved. So the Poynting vector will then be inversely proportional to r squared. If you're interested in the total power-- so you're not interested really in the fact that the Poynting vector is a strong function of theta, namely sine squared theta-- but if you integrate the Poynting vector over one sphere that you choose, and you take the local Poynting vectors and you multiply that by the local areas, which becomes an integral, then you can actually come up with the total power. This is now in joules per second. In other words, you tell me what q is. You tell me what a is. You are creating electromagnetic radiation. I will tell you how many joules per second of work you have to do. And that then becomes-- that's called the Larmor result, the Larmor equation. And that becomes q squared times a squared divided by 6 pi epsilon 0, c to the power of 3. Notice there's no r anymore, because clearly it's independent of where you are in space. That's how many joules per second you have to generate.
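[Editorial aside] The Larmor result in numbers, for an illustrative case: an electron with an assumed acceleration. Both the charge and the acceleration value here are chosen for illustration, not taken from the lecture.

```python
import math

eps0 = 8.854e-12      # permittivity of free space [F/m]
c = 2.998e8           # speed of light [m/s]
q = 1.602e-19         # electron charge magnitude [C]
a = 1.0e18            # illustrative acceleration [m/s^2] (assumed)

# Larmor formula: total radiated power, independent of r
P = q**2 * a**2 / (6 * math.pi * eps0 * c**3)

# For a harmonic acceleration a(t) = a0 sin(wt), the time-averaged power
# picks up a factor <sin^2> = 1/2
P_avg_harmonic = P / 2
```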
It's obvious that it is proportional to q squared, because E is proportional with q. So B is also proportional with q. So the Poynting vector will be proportional to q squared. It's also obvious that there is an a squared, because E is proportional to a perpendicular, and thereby to a. And B is also proportional to a. So it's no surprise that you get there upstairs a q squared and an a squared. Now if a is oscillating, if a is a0 sine omega t or cosine omega t, then you can calculate what the mean value of the power is during one oscillation. And then you will get here the average value of sine squared omega t, which then becomes 1/2 of course. This is the energy that you have to generate per second in order to create electromagnetic radiation. It is not the kinetic energy that you have to put in the mass of the charge, the 1/2 m v squared. That's a whole different story. This is the price you pay for creating electromagnetic radiation. Several students have come to my office and asked me, why is it-- apparently I wasn't clear enough-- why is it that when you have Rayleigh scattering, and the light scatters at 90 degree angles, even if you have unpolarized light that you start with, why is it 100% linearly polarized if, and only if, it scatters over 90 degrees? And so that is indeed an important point. So I'll expand on that in a way that is perhaps easier for you to digest. Maybe I went over it too fast when I covered it in class. I demonstrated it, if you remember. I did it with a smoke signal and I did it with a sunset. So we have unpolarized radiation. Let's first agree what is unpolarized radiation. There's a beam of light coming straight at you and it's unpolarized. The first plane wave, I think of it still in terms of a very classic 19th-century idea of plane wave solutions. The first plane wave is linearly polarized like that. The second one is linearly polarized like that.
Then there's one like this, then there's one like this, then there's one like this, one like this, and one like this. It's a zoo of everything; that's unpolarized radiation. OK. I pick one of those right here, this one, and it happens to hit fine dust particles, which have electrons. And these electrons are shaken up and down by that electric field. So what I'm drawing now here is the motion of that electron. The electron is going to be accelerated in a harmonic fashion. And where are you? Well you are here. You're looking, and this angle is theta. The same angle theta that I had there. What will you see? E is in the plane through r and a. So it's in the plane of the blackboard. So radiation comes straight at you. But here is all of a sudden this electron, a bunch of electrons, that go like this. And I'm looking. I happen to be sitting in the plane of the blackboard, because that's 90 degrees, right? When the radiation comes like this, then the blackboard is 90 degrees. Whether you are here, it's 90 degrees. This is 90 degrees. This is 90 degrees. That's also 90 degrees. This is not 90 degrees, but all that is 90 degrees. Radiation comes in like this and I'm looking there. But it is in the plane through r and a, so it is in the blackboard. It is perpendicular to r. That's nice. So it's linearly polarized. Do we agree? Linearly polarized? Now there's another one that comes in. This one comes in. OK. There it is. This one comes in. Starts to shake the electron in this direction. I am still where I was before. I'm here, same location. This angle now is theta. If a second plane wave comes in like this, what do I see here? The E vector is in the plane through r and a. That is the blackboard. E is perpendicular to r. So I see this. Do I see the same strength of the E vector? No, it's much less, because a perpendicular is much less, because theta is much less. But I will still see the E vector in this direction, following my recipe.
And then there's one electromagnetic wave that comes in which happens to be in this direction. Well, in that case I will see nothing. That is tough luck. But in any other direction you will always see the electric vector perpendicular to your line of sight. And that is only true for 90 degrees. In other words, if I made a circle here-- and I have unpolarized light coming in here straight at you-- when you're here and you look down, you will see the E vector like this. When you're here, and you're looking down, you will see an E vector like this. If you look here, and you look in this direction, then you will see the E vector like this. And this is only true if you are at 90 degrees. I can easily make you see that if you change the angle, it is not 100% linearly polarized. And the best way for me to show that is: here are these electrons which are being shaken like this. One comes in. Shakes like this. And let's look now at forward scattering, not 90 degrees. This is 90 degrees. Forward scattering, here's the electron going up and down. You're sitting there in the audience. If this one comes in, you'll see an electric vector like this, generated by this charge which goes like this. Now the next one comes, which goes like this. You'll see an E vector like this. The next one will come in like this. Forward scattering, you will see like this. So you see, if unpolarized light comes in, in the forward direction it remains 100% unpolarized. In any other direction it will be partially polarized, but in the 90 degree direction it is 100% linearly polarized. That is the reason why the sky 90 degrees away from the sun is 100% polarized. And I check that almost every other day to make sure that physics still works. It's great fun. You take your linear polarizer, I always carry a handful with me, you look at the sky 90 degrees from the sun. And indeed, the sky is practically 100% polarized. Quite amazing. I have a movie which is not the greatest movie. I tried to show this to you earlier.
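[Editorial aside] The geometric argument can be turned into the standard dipole-scattering result (not derived in the lecture, but consistent with it): for unpolarized incident light the two scattered intensity components go as 1 and cos²θs, with θs the scattering angle, so the degree of polarization is (1 − cos²θs)/(1 + cos²θs), which is 0 in the forward direction and 1 at 90°.

```python
import math

def degree_of_polarization(theta_s):
    """Degree of linear polarization of Rayleigh-scattered, initially
    unpolarized light, as a function of scattering angle theta_s [rad]."""
    c2 = math.cos(theta_s) ** 2
    # The perpendicular component scatters fully (weight 1); the in-plane
    # component is reduced by cos^2(theta_s), the sin(theta) effect above.
    return (1 - c2) / (1 + c2)

p_forward = degree_of_polarization(0.0)             # 0: stays unpolarized
p_right_angle = degree_of_polarization(math.pi/2)   # 1: fully polarized
p_partial = degree_of_polarization(math.pi/4)       # 1/3: partially polarized
```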
It's not the greatest. It's trying to make you see that when you oscillate charges back and forth, something happens in the E field. That you get kinks in the E field. And students have asked me more than once, why do you only get kinks, and therefore electromagnetic radiation, if you accelerate them? Why don't you get kinks if you simply move them with a constant velocity? And I think the best answer that I can give is the following. Think of the electric field lines as spokes, rigid spokes, which are attached to the charge. So they're attached to the charge. They are rigid, but they are fragile like spaghetti. But they are rigid. And so they move with constant velocity. So all the spaghetti moves with the charge. They all have the same velocity. There is no stress anywhere on the spaghetti, because the whole thing has the same velocity. Now all of a sudden I take the charge and I accelerate it. Now the spaghetti feels the kink. The spaghetti feels, all of a sudden, that it wants to break, because the spaghetti is very brittle. And that break causes a kink in the field line. So that's the best way that I can convince myself why a constant velocity does not cause electromagnetic radiation, but it's really the acceleration. Some students thought when I used the word spaghetti that I meant cooked spaghetti. No, I didn't mean cooked spaghetti. I meant uncooked spaghetti, which is very brittle. Which easily breaks, just like this. And when you all of a sudden accelerate the charges, these field lines can break like uncooked spaghetti. And this movie is making an attempt. It's not the best thing, but we'll make an effort. So Markos, if you can do the honors there, then I will do the honors here. And you may not get much out of it, but at least it is an attempt. Are we going to give it TV? Or are we going to make it completely dark? TV? If you wait a second, then I think we'll get to the part that I want to show. OK, so oscillating charges. So it oscillates, but the speed is constant.
So there's instantaneous fast acceleration, and then the speed remains constant. You see these shells here, which are then representative of the electromagnetic radiation. They're not harmonic yet. We're going to shake them harmonically shortly. And so these lines that you see are the field lines, and they're just here in these shells. They are broken. It's broken spaghetti. Also note that this is like a spherical wave. It has nothing to do with plane wave solutions anymore. You've really got a spherical wave going out. It's not so clear from here that the strength of the wave is very different for the different directions of theta. That's not so clear. So here you have simple harmonic motion. This is about one hertz electromagnetic radiation. Have you ever heard of that? One hertz? We're dealing normally with megahertz and gigahertz. This is very slow motion, but it's simple harmonic. And you begin to see that, indeed, these are electromagnetic waves. These are the E field lines, and they are being distorted in the direction perpendicular to your line of sight. Is it a great attempt? No, but it is an attempt. So I think this is a good moment for a break, perhaps a little bit early, and we will reconvene in five minutes. So you can warm up. OK. I will now tell you what I will not talk about, which is a lot. I will not discuss Fourier analysis. That doesn't mean that it will not be on the test. Make sure you're familiar and comfortable with the examples that I had in the problem sets. I will not cover today the Doppler shift. Make sure you feel comfortable with Doppler shift. It's not a very difficult subject, but it has very far-reaching consequences, as we discussed, including cosmology and black holes. I will not discuss the Fresnel equations, even though they are very crucial. They were at the center of my lecture earlier this week. I will not discuss Snell's law.
I will not discuss today the Brewster angle, but don't be surprised, don't be shocked, if there is a problem related to the Brewster angle. I will not discuss critical angles, total internal reflection, but it may be on your test. And I will not discuss today radiation pressure. That doesn't mean that it will not be on your test. It's simply not possible to cover all of this in the available amount of time. Now of course, I have not purposely left out things that will be on the test. A lot of stuff that I have covered today will be on the test, of course. But there will also be stuff that I cover today that will not be on the test. I want to discuss now one of the bizarre dispersion relations that evolve from the boundary conditions of electromagnetic waves on ideal conductors. And we spent a lot of time on the demonstration whereby we had two parallel conducting plates. And these plates were separated in the x direction by a distance a. This is the y direction. This is the x direction. And this is the z direction. And I want to propagate through these plates, which are infinitely large in the y direction, infinitely large in the z direction. At least very much larger than the wavelength. I want to propagate electromagnetic radiation which is linearly polarized only in the y direction. So that is what I want to do, and that's what I demonstrated also. Well, k-- this is beginning to be boring now-- is kx, x roof, plus ky, y roof, plus kz, z roof. This, by the way, is zero, because there's no dependence of the E field on the y direction. So ky is zero. So k is the square root of kx squared plus kz squared. Omega is k times v. Let's assume this is in vacuum. So omega is k times c. The boundary conditions demand that at x equals 0 and at x equals a, this component must vanish. Because you cannot have an electric field in the surface of an ideal conductor. That was one of the boundary conditions that we derived.
In other words, E of y must become 0 for x equals 0 and x equals a. And as a result of that, you're going to get a standing wave in the x direction, and you're going to get a traveling wave in the z direction. And the boundary conditions demand now, in order to meet this, that k of x is going to be m pi divided by a. Then you have your wave; you substitute this value for kx. If you have the sine of kx times x, this value will then always give you a 0 here and always a 0 there. And so omega will then be c times k. So it's c times the square root of m pi over a squared plus k of z squared. And this is what we call a dispersion relationship. This equation is responsible for a bizarre behavior. And the bizarre behavior then can best be shown in a diagram which we call the omega kz diagram. I will raise this later again, because I want to work over my head so that you can see what I'm doing. So this is kz and this is omega. And I'm going to plot for you this relationship. This line would be omega equals kz times c. That would be non-dispersive. However, this is different because of this. And so now you have here a frequency which is the lowest frequency possible for which radiation can actually go between the plates. And this then is the case for m equals 1, and omega is then c times pi divided by a. And so here the cutoff frequency for omega is pi times c divided by a. And if I plot then that curve, you get something like this. So no frequency below that value can propagate through the gap so to speak, through the opening, because it cannot meet then the necessary condition that the E field becomes 0 here and 0 there. Which is non-negotiable. It has to become zero here and it has to become zero there, because it's an ideal conductor. So k of x must obey this boundary condition. So let's assume that at a particular moment in time we have a frequency going between these two plates.
The frequency of this linearly polarized radiation in the y direction; and let's say that this is the value for omega. That means that the associated value for kz is then this. kz must adjust itself so that kx can remain what it has to be. This line, everywhere on this line, kx is the same value. And the kx value everywhere is pi over a. And so kz is being slaved to become the value that meets this dispersion relation. That's this line. So kz settles there. The phase velocity in the z direction, v phase in the z direction, is omega divided by kz. Well, look what that means. Omega divided by kz. So I can draw a line here. And so, this angle is an indication of the ratio omega divided by kz. This angle is larger than this one, and this one was c, remember? And so you see that that value is always larger than c. You can just see that by looking at the graph. The group velocity in the z direction is d omega dkz. And d omega dkz is the tangent along this line here. So at this point here, the tangent would be like this. I don't want to draw another line, because it becomes too cluttered. But you can see that this slope is smaller than that of the line here which indicates c. And therefore, the group velocity is smaller than c. So this is smaller than c. If you lower omega, and gradually reach your cutoff frequency, below which you can no longer propagate any radiation between the two plates, then you get a situation which becomes even more bizarre: when you reach that point, nothing will go in the z direction anymore. Nothing will come out. I demonstrated that. The group velocity will therefore have to become zero. Well, you can see that, because the tangent on this slope here becomes horizontal. So the group velocity, indeed right here, is zero. But the phase velocity is infinitely high. Think about it, because kz now becomes zero. And so you get an infinitely high phase velocity.
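[Editorial aside] The dispersion relation ω = c√((mπ/a)² + kz²) and the two velocities fit in a short sketch, including the cutoff numbers from the demonstration (10 GHz, i.e. 3 cm waves: a 2 cm gap passes, a 1.5 cm gap sits right at cutoff). The plate separations follow the lecture; the rest is plain arithmetic.

```python
import math

c = 2.998e8   # speed of light [m/s]

def cutoff_frequency(a, m=1):
    """Lowest frequency [Hz] that propagates for mode m: f_c = m*c/(2a)."""
    return m * c / (2 * a)

def velocities(f, a, m=1):
    """Phase and group velocity in z for a frequency f above cutoff."""
    omega = 2 * math.pi * f
    # kz from the dispersion relation omega = c*sqrt((m*pi/a)^2 + kz^2)
    kz = math.sqrt((omega / c) ** 2 - (m * math.pi / a) ** 2)
    v_phase = omega / kz            # larger than c
    v_group = c ** 2 * kz / omega   # d(omega)/d(kz), smaller than c
    return v_phase, v_group

f = 10e9                            # 10 GHz transmitter, as in the demo
fc_2cm = cutoff_frequency(0.020)    # about 7.5 GHz: radiation gets through
fc_15mm = cutoff_frequency(0.015)   # about 10 GHz: right at cutoff
vp, vg = velocities(f, 0.020)
# Note that v_phase * v_group = c^2, so vp > c while vg < c
```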
And I spent quite some time during my lectures to explain to you why that has no physical meaning. I don't want to go over that now. There is no such thing here as resonance frequencies. Often you people think that these are resonances. No, it's not a resonance at all. It is a mode. It means that if you have radiation at this frequency, then in the x direction, kx will adjust itself such that you get a standing wave in the x direction. In this case, you get something like this. Nice little sinusoid. Zero here and zero here. That's for m equals 1, and kz will then be whatever it has to be. And so when you change omega, kz will adjust all the time. If you go to very high frequencies, there is here another cutoff frequency which is twice this number. Because that is when m equals 2. And when kz becomes 0, that becomes twice as high. Then of course there are two different ways that you can propagate radiation in the z direction. Because now you have here the line, and if now your omega is high enough, then you get an intersection with this line, and you get an intersection with that line. So that allows you two different values of kx and two different values of kz. It's not a resonance frequency. Nothing is resonating. It's a mode, but it's not a resonance mode. It's not a normal mode in that sense. Now the most interesting thing for me is, and I stressed that when we discussed this, and I even demonstrated that. We did this with radar, remember? With the 10 gigahertz transmitter, the three-centimeter waves. The most interesting thing is that you can reach omega equals omega c, and you can do that by-- we had a 10 gigahertz transmitter and we made a smaller and smaller and smaller. And when a became less than lambda over 2, that means omega became less than omega c. Then no radiation will propagate through anymore. And so what we did to demonstrate this was, we started out. Lambda was three centimeters.
And we started out with a gap of about two centimeters, and radiation went through. And the moment we got to one and a half centimeters, it stopped abruptly. The interesting thing now is that if your radiation is only linearly polarized in the x direction, there is no such problem. There is no such thing as this crazy dispersion relationship, because an E vector being perpendicular to this plate, and being perpendicular to this plate, is no problem. If the E vector oscillates with frequency omega, always perpendicular to those two plates, the only thing that happens is that nature like crazy adjusts the local rho s's-- the local number of coulombs per square centimeter-- so that you always meet the normal boundary condition. But nothing ever has to become zero. And so therefore, there is no boundary condition that gives rise to such a crazy dispersion relationship. So if you had radiation linearly polarized in this direction, then it would follow this line. Non-dispersive, phase velocity is c. Group velocity is c. So now comes the interesting part. Suppose you manage to get radiation which is unpolarized. And this you try to send through there. And you make a smaller than lambda over 2. That means that the vertical component of the E vector of each one of those waves cannot get through. Only the horizontal component can get through, but without any difficulty. Speed of light. That means you have created linearly polarized light out of unpolarized. I wouldn't say light; in this case it is radar. So if you could manage to get unpolarized radar going in this direction, and you make a smaller than lambda over 2, all components in this direction are killed. Only this component can go through. And so you have created at the other end linearly polarized radiation. And I realized after I gave that lecture how cute that actually is.
That this is a way, in principle, that you can create linearly polarized radiation simply by squeezing it through a very narrow opening, a very narrow tunnel so to speak. The mother of all demonstrations was the sound box. That will go into history as one of the greatest demonstrations ever. This was the x direction, size a. This was the y direction, size b. And this was the z direction, and I gave that size c. Some of you thought that it was d but it really is c. We have sound. And this box is closed on all sides. I'll write down an a here. So the box is closed on all sides. That means if we think of the pressure, there must be pressure anti-nodes at all walls. The particles, the air particles, cannot go beyond the wall. So they can push on the wall, they can suck on the wall, they can push on the wall and suck on the wall. That means anti-nodes in pressure. And there have to be anti-nodes on all walls. So I can write down the pressure p, that is the pressure over and above one atmosphere, as some amplitude p0 times the cosine of kx times x times the cosine of ky times y times the cosine of kz times z times cosine omega t. And the reason why I already picked cosines is because I know that I want the anti-nodes when x equals 0 and when x equals a. I want the anti-nodes when y equals 0 and y equals b, and the same for the z direction. And so the boundary condition demands that kx can only have unique values, discrete values, which is l pi divided by a. ky can only be Mary pi divided by b. And kz can only be Nancy pi divided by c. If not, then the boundary conditions are not met. And then I don't have anti-nodes at all the surfaces. And l, m, and n can be 0 or 1 or 2 or 3. Including 0; if you make one of those 0, there's no sine here. If you have a sine there, you make it 0 and then everything becomes 0. But if you make a cosine 0 then it's just 1. So 0 is allowed. Except they cannot all three be 0. You will see very shortly why.
So omega, which is always k times v-- v is the speed of sound. So omega equals v then times the square root of l pi over a squared plus m pi over b squared plus n pi over c squared. Omega has then unique values. Now we are dealing with resonance frequencies. Completely different from there. There were no resonances there. These are the unique discrete resonances l, m, and n. And now you can see why you cannot make them all three 0. Well, because then you have omega equals 0 and that's not a very interesting thing. So now comes the question, what is the lowest possible frequency, that is, the lowest resonant frequency? Well that depends on which dimension, a, b, or c, is the largest. And so if a, for instance, were 10 centimeters and if b is 20 centimeters and if c is 50 centimeters, which happens to be 0.5 meters, then clearly the lowest frequency is l equals 0, m equals 0, n equals 1. 0,0,1. For a 0,0,1 mode I then get a frequency f, which is omega divided by two pi. And that means I lose all these pi's here, by the way. And so I get v divided by two c. That is my c. Because look, if I make Nancy 1 my pi is gone. So I only have a 1 over c squared. This is 0. So I get 1 over c and the 2 comes from this. So the frequency that is the lowest possible frequency is v divided by 2c, c being now half a meter, and so if v is 344 meters per second then the lowest frequency is 344 hertz. And I proudly demonstrated that to you. Our prediction was accurate to better than one hertz. We had exactly the 344 as the lowest possible frequency. And then of course you can look for higher order frequencies. And that depends then on the dimensions of a and b. Whether the next one is 0,0,2. Or whether the next one is 0,1,1. Or whether it is 1,0,1. And we ranked them all and I showed you eight of those frequencies. And we had a handout, at least on the web; I showed you these wonderful resonance curves that we generated during our lecture. I think it was lecture number 16.
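The mode formula above can be checked with a few lines of code. This is my own sketch (not from the lecture), evaluating f_lmn = (v/2) * sqrt((l/a)^2 + (m/b)^2 + (n/c)^2) with the dimensions quoted for the box:

```python
import math

def box_frequency(l, m, n, a, b, c, v=344.0):
    """Resonance frequency f_lmn of a closed rectangular box.

    Follows from omega = v*k with kx = l*pi/a, ky = m*pi/b, kz = n*pi/c:
    f = omega / (2*pi) = (v/2) * sqrt((l/a)**2 + (m/b)**2 + (n/c)**2).
    """
    if l == m == n == 0:
        raise ValueError("l, m and n cannot all be zero")
    return (v / 2.0) * math.sqrt((l / a) ** 2 + (m / b) ** 2 + (n / c) ** 2)

a, b, c = 0.10, 0.20, 0.50                 # the lecture's box, in meters
lowest = box_frequency(0, 0, 1, a, b, c)   # the 0,0,1 mode: 344.0 Hz
```

Ranking `box_frequency` over small (l, m, n) triples reproduces the ordering of the higher modes that was demonstrated in lecture 16.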
You can still download it from the website. Now comes an interesting question. What would happen if I knock out this panel in front and the panel in back? So I knock out both panels in the z direction. So I make it open in the z direction, open at both sides. What happens now? Well, the pressure now can never be an anti-node at z equals 0 and at z equals c. On the contrary, it is connected with the universe. No pressure differential can ever build up. So the pressure has to become a node now for z equals 0 and z equals c. Well, that we can easily do. We just change the cosine into a sine and nothing else changes. Because look, if now I make kz n pi over c, then you will see for any value of n that you choose this will indeed become a pressure node. You will get 0. And so this now is the right solution when the panels in the z direction have been removed. The only thing is that you cannot allow n to be 0. Because if now you make n zero, then no mode could exist. Because if n is 0, then this would always be 0. So now you have a situation that yes, l can be 0. m can be 0, but n cannot be 0. So what now is the lowest possible frequency? It's the same. 0,0,1. And so the lowest possible frequency when you break out, when you make it a tunnel, is again 344 hertz. And that was quiz number nine. I asked you for this one, which was of course a giveaway if you attended that lecture. You could not possibly have forgotten that. I was almost crying when I showed you this demonstration. So you couldn't have forgotten that I was crying. And then I just wanted to test your insight. That you realized that the anti-nodes become nodes but that nothing changes. At least not in terms of resonance frequency. Of course something changes in the box. Because now you have four walls where you have pressure anti-nodes, and you have two non-walls where you have pressure nodes. But the resonance frequency is the same.
When Markos and I were working on this, we got this crazy idea to ask the question, what would be the resonance frequencies of a sphere? So we have a sphere. And of course, it's not a perfect sphere. It is more like this. And it has a diameter. I call it capital D. And the diameter is about 28 and a half centimeters. Uncertainty is only a few millimeters. And so we got this crazy idea to put a speaker there, just like we did with the box, and to have a microphone inside. And then to see if we increase the frequency of the speaker whether we could predict, and actually confirm, the resonant frequencies of the motion of the air inside this sphere. And so I said to Markos, I know the answer. It's easy. It's a trivial problem. If this were a perfect sphere, then clearly the wall here must become a pressure anti-node. A complete sphere must become a pressure anti-node because there must be spherical symmetry. A sphere is a sphere, right? There must be spherical symmetry. You cannot have an anti-node here and then a node here, because nature doesn't know the difference between upstairs, downstairs, left, and right. So it must be a complete anti-node sphere. But this center must then also be an anti-node, because if the air flows away and pushes onto the wall and then sucks onto the wall it must always come back there. So the center itself must also become a pressure anti-node. Knowing that, I said to Markos, well, that means that lambda must be 28.5 centimeters. Because from anti-node to anti-node to anti-node is one whole wavelength. And so I made the prediction that the frequency f, which of course is the speed of sound divided by lambda, I made the prediction that it would be 340/0.285. And that is something like 1,190 hertz. And I even made the prediction what the second harmonic would be. You must again add a nodal sphere. Sorry, an anti-nodal sphere. Another anti-nodal sphere where the pressure reaches maximum, minimum, maximum, minimum because of the spherical symmetry.
And so that means that now the diameter is twice the wavelength. So the wavelength is now half that value, because from here to here to here is one wavelength. From here to here the same wavelength. And so I predicted that the second harmonic would be twice as high. And so we set it up. And full of expectation, I said to Markos, why don't we just start the system somewhere at 800 hertz? And then we'll find these resonances. And Markos, a little bit more cautious than I am, he said well, let's start a little lower. I said, well why waste our time. He said, well let's start low. Let's start at 45 hertz. That's what he did. You will see here the frequency of the speaker which is mounted there. And you will then see there, if that works, you will see the-- oh boy, Markos, it does work. And you will see there at the bottom the driver which is the speaker. And you will see at the top, you'll see the response of the microphone which is inside. Sorry for that. Thank you. And so we started to slowly increase the frequency which I thought was going to be a complete waste of time. 45 hertz, 50 hertz, 58 hertz. Oh, I forgot to turn on the microphone. Thank you, Markos. There would have been no resonance at all. OK, let's go back to the 45 then. 48, 51, 52, 53, 56, 58, 60, and there is at 61 hertz an unbelievable resonance. I couldn't believe my eyes. I said this is nonsense. This cannot be. It's impossible. How can you have a 60 hertz resonance frequency? What's wrong with my reasoning? Well, we were staring at each other in disbelief, and so we went on. And we said, well. I said to him, the 60 hertz are the lights in the building. That's what you pick up. It's some pickup. It's some crazy pickup. It's not a sound resonance. Whenever you see 60 hertz in the United States, it's pickup. If you see 50 hertz in Europe you know it is some pickup. So I was willing to forget about it. In any case, we went on. And we were trying to find the other frequencies.
So I will quickly now try to go to much higher values. So I go in coarse steps and see where the first one is. I was hoping to see the 1190. And then by now it's 6-- 700s. I will now go a little slower. So watch. You see the frequency at the lower signal is increasing much higher than it was before. And here, you're actually beginning to hear it. And so the 61 made me sick to my stomach, and I was not all too happy with 787 hertz. And so I insisted that we at least start looking for the 1190. And so we started looking for the 1190. And I was beginning to be happy because I think it's coming. I think it's coming up there. I think it's coming up there. And it fell a little short, but not embarrassingly short: 1164. I went home and I couldn't sleep that night. I couldn't sleep. 60 hertz is absurd. Keep in mind that anything that is this size-- what am I doing? I turned off the microphone. Anything that is that size cannot have such a low frequency. I didn't sleep the first night. I didn't sleep the second night. And then the third night, I woke up. And then I remembered when I was your age, when I was a student in college, that one day I had a little flask like this. And I was blowing air past it. And I said to myself, what? Something 10 centimeters in size? You expect resonance frequencies of the order of the speed of sound divided by the length; that is three kilohertz. This is nowhere near three kilohertz. And all of a sudden that image came back to me. And I remember what I did. I said, there has to be an explanation for that. So I went to the library. Looked in books, couldn't find a solution. Went to the library, found an article in the library. And that article told me what the resonance was. This one: only one resonance, no harmonics, provided that you know all dimensions. You have to know D. You have to know L, which in our case L is 99.5 centimeters with an uncertainty of about three millimeters.
And you have to know this d, which in our case is 5.5 centimeters with an uncertainty of maybe one millimeter. I did my homework. I turned the crank. I went to the Hayden library the next day. I found an article, maybe not the same that I found 50 years ago. But I found the article with the same solution. I plugged in the numbers. What do I find? 62 hertz. And I was very happy. I had recovered what I had lost 50 years ago. You are now at about the same age that I was then. And I want to present this to you as a challenge, something that goes a little bit beyond the standard textbooks. If you present me, before December 7, with a proper solution for the 61 hertz, I will generously reward you with extra credit for 8.03. I am not too worried about the 787. The reason for that is, it is not a perfect sphere. There is no doubt in my mind that if that had been a perfect sphere, if this thing had not been there, you would not have seen the 61. That is true. You would not have seen the 787 either. But the opening makes it very difficult to calculate. In fact, it's not even so bad that my 1164 does show up when my prediction was 1190. But I'm not even sure whether that is really the one that I predicted. In any case, it's the 61 that is the bizarre one. And it has no harmonics. There's only one answer. There's only one resonance frequency, no harmonics. I will be here over the weekend to help you prepare for your exam if you need me. I will make half-hour appointments. You have to indicate what you want to discuss with me, and I will then agree on a time with you. I wish you luck. And I'll see you then or Tuesday.
Intersoliton forces in weak-coupling quantum field theories
Rajaraman, R (1977) Intersoliton forces in weak-coupling quantum field theories. In: Physical Review D, 15 (10), 2866-2874.
We offer a procedure for evaluating the forces exerted by solitons of weak-coupling field theories on one another. We illustrate the procedure for the kink and the antikink of the two-dimensional φ^4 theory. To do this, we construct analytically a static solution of the theory which can be interpreted as a kink and an antikink held a distance R apart. This leads to a definition of the potential energy U(R) for the pair, which is seen to have all the expected features. A corresponding evaluation is also done for U(R) between a soliton and an antisoliton of the sine-Gordon theory. When this U(R) is inserted into a nonrelativistic two-body problem for the pair, it yields a set of bound states and phase shifts. These are found to agree with exact results known for the sine-Gordon field theory in those regions where U(R) is expected to be significant, i.e., when R is large compared to the soliton size. We take this agreement as support that our procedure for defining U(R) yields the correct description of the dynamics of well-separated soliton pairs. An important feature of U(R) is that it seems to give strong intersoliton forces when the coupling constant is small, as distinct from the forces between the ordinary quanta of the theory. We suggest that this is a general feature of a class of theories, and emphasize the possible relevance of this feature to real strongly interacting hadrons.
Simple harmonic motion - vibration frequency
I need to calculate the frequency of sinusoidal vibrations in simple harmonic motion. The only known values that I have are the acceleration amplitude and the vibration amplitude. Is there a way of calculating the frequency of vibration without knowing the period of the oscillations?
The equation of motion is: [tex]F = ma = m\ddot{x} = -kx[/tex] The general solution is: [tex]x = A_0\sin(\omega t + \phi)[/tex] where [itex]\omega^2 = k/m[/itex]. [itex]A_0[/itex] is the maximum amplitude of vibration (maximum x). The maximum acceleration occurs when x = ? What is the acceleration at that point? (hint: use the equation of motion to find the maximum value for a). From that you should be able to determine [itex]\omega[/itex].
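For what it's worth, here is a small numerical sketch of where the hints above lead (my own illustration, not part of the original thread): since the maximum acceleration is a_max = ω²·A₀, the frequency follows from the two amplitudes alone.

```python
import math

def shm_frequency(a_max, A0):
    """Frequency of simple harmonic motion from the acceleration
    amplitude a_max and the displacement amplitude A0.

    For x = A0*sin(w*t + phi), the acceleration is a = -w**2 * x, so
    a_max = w**2 * A0, giving w = sqrt(a_max / A0) and f = w / (2*pi)."""
    omega = math.sqrt(a_max / A0)
    return omega / (2.0 * math.pi)

# Example: A0 = 0.02 m and a_max = 19.74 m/s^2 give a frequency near 5 Hz.
```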
Diff for "Teaching/IntegralEquationsFall2013"
Differences between revisions 13 and 15 (spanning 2 versions): revision 13 as of 2013-05-08 03:24:24, revision 15 as of 2013-05-10 14:40:24.
Line 25: deleted "* A Gentle Intro: Calculus/Linear Algebra/Python warm-up", added "* A Gentle Intro: Linear Algebra/Numerics/Python warm-up"
Line 78: deleted "* Leslie Greengard", added "* [[https://cs.nyu.edu/courses/spring12/CSCI-GA.2945-001/index.html|Leslie Greengard]]"

Integral Equations and Fast Methods
Class Time/Location: TBD
Instructor: Andreas Kloeckner
Email: kloeckner@cims.nyu.edu
Office: TBD
Office Hours: TBD
Class Webpage: http://bit.ly/inteq13
Email Listserv: Info page

This class will teach you how (and why!) integral equations let you solve many common types of partial differential equations robustly and quickly. You will also see many fun numerical ideas and algorithms that bring these methods to life on a computer.

What to expect
* A Gentle Intro: Linear Algebra/Numerics/Python warm-up
* Some Potential Theory
* The Laplace, Poisson, Helmholtz PDEs, and a few applications
* Integral Equations for these and more PDEs
* Ways to represent potentials
* Quadrature, or: easy ways to compute difficult integrals
* Tree codes and Fast Multipole Methods
* Fun with the FFT
* Fast Randomized Linear Algebra (if time)

Month 1, 2013 Class starts on MONTH DAY, 2012, from XX-XXpm. We've also been assigned a room. We will be meeting in XXX. See you then!

If you will be taking the class for credit, there will be
* Weekly homework for (roughly) the first half of the class (50% of your grade)
* A more ambitious final project, which may be inspired by your own research needs (50% of your grade) (also see /ProjectGuidelines)

If you're planning on auditing or just sitting in, you are more than welcome.

These books cover some of our mathematical needs. There really aren't any books to cover our numerical and algorithmic needs.
So, unfortunately, there isn't one book that covers the entire class, or even a reasonable subset. It will occasionally be useful to refer to these books, but I would not recommend you go out and buy them just for this course. I will make sure they are available in the library for you to refer to. Source articles Because of the (no-)book situation (see above), I will post links to the research articles underlying the class here. Related classes elsewhere Online resources
Abel on Elliptic Integrals: A Translation
Niels Henrik Abel was one of Norway’s great mathematicians. He was born on August 5, 1802, in Frindoe (near Stavanger), Norway, and died at the young age of 26, on April 6, 1829, in Froland, Norway, a victim of tuberculosis. It was only around the time of his death that Abel’s mathematical talents were beginning to be recognized around Europe. Sadly, he never got to realize in how high esteem others would eventually hold his works. There are some good existing accounts of Abel’s life and mathematics: Arlid Stubhaug has written a masterful biography of Abel’s life; Christian Houzel surveys at length Abel’s mathematics; Henrik Kragh Sørensen has written a useful and solid Ph.D. thesis on Abel’s mathematics, which he has generously made available Online. Abel’s “Recherches sur les fonctions elliptiques” (1827) was the first published account that made significant inroads into the theory of elliptic integrals, that is, integrals where the integrand is a quotient with a rational function for a numerator and a square root of a polynomial of degree 3 or 4 for the denominator. Abel’s key insight was to invert these integrals, that is, to consider the function u(x), where x is the upper limit of integration of the elliptic integral. He discovered that these “elliptic functions” were unlike other functions considered up to that time; for instance, they were found to be doubly periodic. Gauss had made similar discoveries in the late 1790s, but failed to publish them. It only became widely known that Gauss had undertaken such studies after his collected works were published in the later half of the nineteenth century. The translation made available here is only the first part of Abel’s “Recherches sur les fonctions elliptiques” and represents a small portion of his work in this field.
Abel’s work highly influenced later mathematicians, in particular Bernhard Riemann (1826-1866), who would introduce bold new ideas into mathematics, including Riemann surfaces. Riemann introduced these in order to get a better handle on elliptic functions and on what are known as Abelian functions, generalizations of elliptic functions where the denominator in the integrand is no longer just a square root of a polynomial of degree 3 or 4, but of higher degree. Pedagogical Perspective: Abel, using only elementary methods (nothing more complex than an intuitive version of the inverse function theorem) and crafty manipulation, is able to establish basic properties of elliptic functions such as their double periodicity. He is also able to determine specific values and derive an addition theorem, among other things. This is all done in a more concrete way than is typically found in contemporary treatments of elliptic functions. As well, unlike Newton’s Principia, for example, what is written is easily understood by the modern reader. So students with only a basic grasp of calculus can profitably go through this text and get an intuitive idea of some basic properties of elliptic functions, something that would be quite difficult for students at that level to do using modern treatments. Translation Notes: A translator necessarily has to make certain decisions about language and notation. I have followed Abel’s writing quite closely, though I have tried to ensure that the English flows as well as possible given his writing style. Also, I have used in places more standard notation than what Abel sometimes uses in the original. All in all, I hope that you find the translation to be of some use. Acknowledgments: I would like to thank Prof. Tom Archibald for various forms of assistance during the course of this translation project. Click here to read the translation of "Recherches sur les fonctions elliptiques." Abel, N. H., 1827. Recherches sur les fonctions elliptiques.
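To make the inversion idea concrete in modern terms, here is a small numerical sketch of my own (it is not from Abel's paper; it uses plain quadrature and bisection): the function `amplitude` below inverts u = F(φ, k) = ∫₀^φ dθ/√(1 − k² sin² θ), and the sine of the result plays the role of Abel's elliptic function.

```python
import math

def incomplete_F(phi, k, steps=2000):
    """Incomplete elliptic integral of the first kind,
    F(phi, k) = integral from 0 to phi of dtheta / sqrt(1 - k^2 sin^2 theta),
    computed with the composite trapezoidal rule."""
    h = phi / steps
    total = 0.0
    for i in range(steps + 1):
        theta = i * h
        f = 1.0 / math.sqrt(1.0 - (k * math.sin(theta)) ** 2)
        total += f if 0 < i < steps else f / 2.0
    return total * h

def amplitude(u, k, tol=1e-10):
    """Invert u = F(phi, k) for phi by bisection, restricted to
    0 <= u <= F(pi/2, k).  This is the inversion step: u is the value
    of the integral, phi is recovered from it."""
    lo, hi = 0.0, math.pi / 2
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if incomplete_F(mid, k) < u:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# sin(amplitude(u, k)) is the inverted-integral function; for k = 0 the
# integral degenerates to F(phi) = phi, so the inversion returns u itself.
```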
Journal für die reine und angewandte Mathematik, 2, 101-181.
Houzel, Christian, 2004. The Work of Niels Henrik Abel. In: Laudal, Olav Arnfinn and Piene, Ragni (Eds.), 2004. The Legacy of Niels Henrik Abel, The Abel Bicentennial, Oslo, 2002. Springer-Verlag, Berlin, 21-177.
O'Connor, J J and Robertson, E F. Niels Henrik Abel. From: The MacTutor History of Mathematics archive. http://turnbull.mcs.st-and.ac.uk/history/Biographies/Abel.html
Ore, Oystein, 1974/1957. Niels Henrik Abel, mathematician extraordinary. Chelsea Publishing Company, New York, N.Y.
Sørensen, Henrik Kragh, 2002. The Mathematics of Niels Henrik Abel - Continuation and New Approaches in Mathematics During the 1820s. Ph.D. dissertation, History of Science Department, Faculty of Science, University of Aarhus, Denmark. Available Online at http://www.henrikkragh.dk/phd/
Stubhaug, Arlid, 2000. Niels Henrik Abel and his times: Called too soon by flames afar. Springer-Verlag, Berlin.
Stubhaug, Arlid, 2004. The Life of Niels Henrik Abel. In: Laudal, Olav Arnfinn and Piene, Ragni (Eds.), 2004. The Legacy of Niels Henrik Abel, The Abel Bicentennial, Oslo, 2002. Springer-Verlag, Berlin, 17-20.
Reversible automata
Supposing that von Neumann's goal of exhibiting an automaton capable of universal construction or of universal computation has been realized, additional questions come to mind. For example, an automaton which contains an embedded Turing machine has to suffer some of the same limitations as the Turing machine, the most notorious of which is undecidability. It would appear that a cellular automaton ought to be more versatile than a Turing machine; for example it could have numerous read heads and work on many computations simultaneously. Nevertheless, since any automaton can presumably be modelled within some Turing machine, the computational powers of the two artifacts must be coextensive. One seeming paradox arose when it was found that some automata had configurations which could not be the product of any evolution, the ``Garden of Eden'' states of Edward F. Moore [82,84]. The reconciliation of such states with a universal constructor lies in the realization that the constructor is not required to fill space with arbitrary designs, but rather to create specific objects (including copies of itself) according to the instructions which it has received. Universality refers to whether arbitrary descriptions can be followed, not whether arbitrary constructs can be created. Nevertheless, there has been considerable interest in ascertaining whether or not there are restrictions on the long term behavior of an automaton, both in the remote past and in the remote future. Such restrictions could manifest themselves in unattainable configurations such as the Garden of Eden, as well as in certain limiting configurations which would only develop fully as limits. But there is also a middle ground, consisting of rules or even of configurations within a given rule, which never end up in some inaccessible region, either past or future.
For this to be true, it is especially important that there be no barrier such as the Garden of Eden, devoid of a previous existence in any form. Equally, although the future always exists in some form or other, there might be reasons to avoid an approach to extremely complicated limits. In other words, there is an interest in time reversal symmetry, or even a simple equivalence between past and future, whereby past configurations would be just as recoverable from a given configuration as the future configurations that are deducible from the same data. Quite trivial instances in which the states shift sideways, remain unchanged, or complement themselves between generations readily come to mind; naturally real interest centers on whether there are others, besides. S. Amoroso and Y. N. Patt [4] searched for possible reversible rules and discovered eight nontrivial ones; Toffoli [109] found a way to generate whole classes by an increase in dimension in 1977, and Edward Fredkin discovered another scheme which has been reported in Toffoli and Margolus's book [110]. Fredkin employed evolutionary functions extending over two generations, evidently extendible within the same framework to three or more generations. Yet another alternative employs some of the states in a cartesian product to create a sort of memory. It is interesting to observe that the whole idea of reversible automata falls within the province of dynamical systems (once the connection is realized) just as it was already reported in great detail by Hedlund [56] in 1969. A recent survey article by Roy Adler and Leopold Flatto [2] contains a good exposition of the relationship of symbolic dynamics to flows through a graph. The reason that symbolic dynamics has such a special relationship to reversible automata lies in the fact that a dynamical system can be given a topology, with respect to which it is much easier to discuss infinite systems and the existence of limits.
This in turn brings up such concepts as continuity, the multiple valuedness of mappings, and the existence of inverse functions. An essential element of the discussion hinges upon the fact that the evolution function for automata is not invertible in and of itself, but the discrepancies between counterimages can be pushed to remote regions of the automaton. With the help of the topology they can then be made to vanish in the limit, providing a context in which the reversibility of evolution can be discussed.
Harold V. McIntosh
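The two-generation scheme attributed above to Fredkin is easy to demonstrate. The sketch below is my own minimal illustration (not code from any of the cited papers): a second-order rule next = f(current) XOR previous is invertible for any local function f, since previous = f(current) XOR next, so the history can be run backwards exactly.

```python
import random

def step(prev, curr, rule):
    """One step of a second-order cellular automaton on a ring:
    next[i] = rule(curr[i-1], curr[i], curr[i+1]) XOR prev[i].
    Whatever 'rule' is, this update is invertible, because
    prev[i] = rule(...) XOR next[i]."""
    n = len(curr)
    return [rule(curr[(i - 1) % n], curr[i], curr[(i + 1) % n]) ^ prev[i]
            for i in range(n)]

def majority(a, b, c):
    """An arbitrary (non-invertible!) local rule; reversibility of the
    second-order scheme does not depend on it."""
    return 1 if a + b + c >= 2 else 0

random.seed(1)
n = 64
gen0 = [random.randint(0, 1) for _ in range(n)]
gen1 = [random.randint(0, 1) for _ in range(n)]

# Run forward 100 generations...
prev, curr = gen0, gen1
for _ in range(100):
    prev, curr = curr, step(prev, curr, majority)

# ...then backward, using the very same step with the pair reversed.
prev, curr = curr, prev
for _ in range(100):
    prev, curr = curr, step(prev, curr, majority)

assert (prev, curr) == (gen1, gen0)  # the initial history is recovered
```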
Accuracy and precision of variance components in occupational posture recordings: a simulation study of different data collection strategies Information on exposure variability, expressed as exposure variance components, is of vital use in occupational epidemiology, including informed risk control and efficient study design. While accurate and precise estimates of the variance components are desirable in such cases, very little research has been devoted to understanding the performance of data sampling strategies designed specifically to determine the size and structure of exposure variability. The aim of this study was to investigate the accuracy and precision of estimators of between-subjects, between-days and within-day variance components obtained by sampling strategies differing with respect to number of subjects, total sampling time per subject, number of days per subject and the size of individual sampling periods. Minute-by-minute values of average elevation, percentage time above 90° and percentage time below 15° were calculated in a data set consisting of measurements of right upper arm elevation during four full shifts from each of 23 car mechanics. Based on this parent data, bootstrapping was used to simulate sampling with 80 different combinations of the number of subjects (10, 20), total sampling time per subject (60, 120, 240, 480 minutes), number of days per subject (2, 4), and size of sampling periods (blocks) within days (1, 15, 60, 240 minutes). Accuracy (absence of bias) and precision (prediction intervals) of the variance component estimators were assessed for each simulated sampling strategy. Sampling in small blocks within days resulted in essentially unbiased variance components. For a specific total sampling time per subject, and in particular if this time was small, increasing the block size resulted in an increasing bias, primarily of the between-days and the within-days variance components. 
Prediction intervals were in general wide, and even more so at larger block sizes. Distributing sampling time across more days gave in general more precise variance component estimates, but also reduced accuracy in some cases. Variance components estimated from small samples of exposure data within working days may be both inaccurate and imprecise, in particular if sampling is laid out in large consecutive time blocks. In order to estimate variance components with a satisfying accuracy and precision, for instance for arriving at trustworthy power calculations in a planned intervention study, larger samples of data will be required than for estimating an exposure mean value with a corresponding certainty.
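As an illustration of what such estimators look like in the balanced case, here is a sketch of my own (not the authors' code, and simplified to two variance components, between subjects and within subjects, rather than the paper's three-level between-subjects/between-days/within-day design) of the standard ANOVA method-of-moments estimators:

```python
import random
import statistics

def simulate(n_subjects=200, n_obs=50, sd_between=2.0, sd_within=1.0, seed=7):
    """Balanced one-way random-effects data: one true exposure level per
    subject, n_obs repeated measurements scattered around it."""
    rng = random.Random(seed)
    data = []
    for _ in range(n_subjects):
        mu = rng.gauss(0.0, sd_between)
        data.append([mu + rng.gauss(0.0, sd_within) for _ in range(n_obs)])
    return data

def variance_components(data):
    """ANOVA (method-of-moments) estimators for a balanced design:
    sigma2_within = MSW and sigma2_between = (MSB - MSW) / n_obs,
    with the between estimate truncated at zero."""
    k = len(data)
    n = len(data[0])
    grand = statistics.mean(x for g in data for x in g)
    means = [statistics.mean(g) for g in data]
    msb = n * sum((m - grand) ** 2 for m in means) / (k - 1)
    msw = sum((x - m) ** 2 for g, m in zip(data, means) for x in g) / (k * (n - 1))
    return max((msb - msw) / n, 0.0), msw

between, within = variance_components(simulate())
# With sd_between=2 and sd_within=1 the targets are 4.0 and 1.0.
```

Shrinking `n_subjects` and `n_obs` in this sketch makes the estimates scatter widely, which is the imprecision-at-small-samples effect the abstract describes.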
Math Forum - Problems Library - Middle School, Fractions/Decimals/Percents
Middle School: Fractions, Decimals, Percents
In grades 6-8, students work flexibly with fractions, decimals, and percents, learning to compare and order them and to find their approximate locations on a number line. Students develop meaning for percents greater than 100 and less than 1, and explore the meaning and effects of arithmetic operations with fractions and decimals. They also learn to judge the reasonableness of estimates, and to convert among fractions, decimals, and percents. Problems that allow middle-school students to practice these skills are listed below. They address the NCTM Number and Operations Standard for Grades 6-8 expectations that students should be able to work flexibly with fractions, decimals, and percents to solve problems; compare and order fractions, decimals, and percents efficiently and find their approximate locations on a number line; and understand the meaning and effects of arithmetic operations with fractions, decimals, and integers. For background information elsewhere on our site, explore the Middle School Fractions & Percents area of the Ask Dr. Math archives, and see Fractions, Decimals, Percentages from the Dr. Math FAQ. For relevant sites on the Web, browse and search Fractions/Decimals/Percents in our Internet Mathematics Library; to find middle-school sites, go to the bottom of the page, set the searcher for middle school (6-8), and press the Search button. Access to these problems requires a Membership.
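The conversions the problems above practice (among fractions, decimals, and percents, plus comparing and ordering) can be illustrated with Python's standard `fractions` module; the helper names below are made up for the example:

```python
from fractions import Fraction

def as_decimal(frac: Fraction) -> float:
    # Fraction -> decimal: just divide numerator by denominator.
    return frac.numerator / frac.denominator

def as_percent(frac: Fraction) -> str:
    # Decimal -> percent: multiply by 100.
    return f"{100 * as_decimal(frac):g}%"

three_eighths = Fraction(3, 8)
print(as_decimal(three_eighths))   # 0.375
print(as_percent(three_eighths))   # 37.5%

# 0.375 is exactly representable in binary, so Fraction recovers 3/8.
print(Fraction(0.375))             # 3/8

# Comparing and ordering fractions works directly, no common denominator needed.
print(sorted([Fraction(5, 8), Fraction(3, 5), Fraction(2, 3)]))
# [Fraction(3, 5), Fraction(5, 8), Fraction(2, 3)]
```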
Alexander Polyakov
Russian physicist. Polyakov studied physics at the Moscow Physical Technical Institute under Professor Arkadii Migdal. Working with Migdal's son, Alexander Migdal, Polyakov demonstrated that within gauge invariant field theories having massless particles, the symmetry can be spontaneously broken by what is now called the Higgs mechanism. By using the S-matrix, they showed that in such a gauge theory with a spontaneously broken symmetry the gauge bosons become massive and there are no mass zero particles. Polyakov's approach was unusual in that he worked in quantum field theory and condensed matter physics simultaneously. He was thus an early advocate of using field theory for describing phase transitions. Alongside Migdal, Polyakov took the 1964 work of Valery Pokrovsky and Alexander Patashinski and reformulated it in terms of dispersion relations in particle physics. Polyakov demonstrated the consistency of relativistic field theories with anomalous dimensions by introducing an operator product expansion. That work paralleled the independent work of Leo P. Kadanoff and Kenneth G. Wilson, but Polyakov focused on electron-positron annihilation and deep inelastic scattering. Having noticed a conformal symmetry in critical phenomena, Polyakov then began trying to classify fixed points using possible operator product expansions, even for three dimensions, to calculate critical exponents. But then Wilson's definitive work on the 4 minus epsilon expansion appeared, providing a way to do calculations. Still, Polyakov prominently showed that some important results can be obtained in three-dimensional metrics rather than in Minkowski space-time. His contributions found many applications in statistical physics and in condensed matter physics.
Alexander Polyakov is a Professor of Physics at Princeton University.
Concept of Quantity
02-21-2011 #1
So I know an infinitesimal is supposed to = 0, but I was thinking, what does infinitesimal * infinity equal? I mean, I would think it would have to equal 1, which argues against infinitesimal = 0. What do you guys think?
I mean, I would think it would have to equal 1, which argues against infinitesimal = 0.
How would anything multiplied by 0 manage to be equal to 1? The way you've set that up sounds wrong.
I seriously doubt that an infinitesimal is considered anything resembling 0. While math is not my area, I think there is a huge difference between something infinitely small and nothing at all, and this difference somehow has been expressed in maths. What I remember learning is that an infinitesimal is a quantity approaching zero but not reaching it. Its distance to zero is infinitesimally small.
Infinite quantities may be applied to maths, but you already realize that rules are different for them when, for instance, we learn that any operation on infinity returns infinity. So, you shouldn't just assume good old algebra is the way to solve your question. 0 * 1 is not what you should be looking for. Infinite quantities aren't numbers at all. Instead, here are three possibilities:
- If we were to represent infinitesimals differently than infinitely large quantities, it would probably make sense to establish a rule based on their position in the equation. That is to say, an infinitesimally small number multiplied infinite times is still an infinitesimal number, while an infinitely large quantity multiplied by an infinitesimal amount is still an infinitely large quantity. This holds true for all operations.
- We can simply establish that infinitesimal quantities are in fact infinite quantities. Which they are. There's no direction for infinity. So an infinitesimally small quantity is an infinite quantity. Infinite quantities aren't big or small. They are simply infinite.
- Or we can remember that infinitesimal is not the same as 0, as such -- and because any operation on infinity results in infinity -- the answer to your question is infinity.
If you look closely, all three possibilities are the same.
The programmer’s wife tells him: “Run to the store and pick up a loaf of bread. If they have eggs, get a dozen.” The programmer comes home with 12 loaves of bread.
Originally Posted by brewbuck: Reimplementing a large system in another language to get a 25% performance boost is nonsense. It would be cheaper to just get a computer which is 25% faster.
Infinitesimals don't equal 0. If they did, all of calculus would be nonexistent. The problem is that infinity doesn't exist in the real numbers, so it doesn't follow the same rules as real numbers. The only way of dealing with infinities, without using the extended reals, is to use infinite limits. Your conjecture that 1/∞ * ∞ = ∞/∞ = 1 can be disproven by showing that both limx->∞ 1/x = 0 and limx->∞ 1/x^2 = 0 and that limx->∞ x = ∞, then considering both limx->∞ 1/x * limx->∞ x = limx->∞ x/x = 1 and limx->∞ 1/x^2 * limx->∞ x = limx->∞ x/x^2 = limx->∞ 1/x = 0, to see that there are different possible answers depending on the infinity in question. So, in other words, depending on the "size" of the infinity, you get different results. Ratios of infinities, and a few other special cases like these, are called indeterminate forms. There are a few good calculus resources out there, if you want to learn this type of stuff. I particularly like MIT's OCW Calculus, with Wikipedia to fill in the gaps. Unfortunately, they spend absolutely no time on limits. I suggest you learn limits. They can be fun for people who like algebra. You finally get to *almost* divide by 0!
Last edited by User Name; 02-21-2011 at 07:41 PM.
Oh, it wouldn't, I'm trying to imply that maybe infinitesimal != 0.
I seriously doubt that an infinitesimal is considered anything resembling 0.
While math is not my area, I think there is a huge difference between something infinitely small and nothing at all and this difference somehow has been expressed in maths. What I remember learning is that an infinitesimal is a quantity approaching zero but not reaching it. Its distance to zero is infinitesimally small.
Yes, this is as far as I know too, but I found some compelling arguments showing that because an infinitesimal is so small, that is, if one were to search for its 'end', they could never find it, thus making it nothing. Well, I can't find the pages with the proofs anymore. But here's a good reference nonetheless: 0.999... - Wikipedia, the free encyclopedia... Now, I hope my assuming that saying that 0.(999) = 1 is the same as saying that 0.(000)1 = 0, isn't wrong.
Infinite quantities may be applied to maths, but you already realize that rules are different for them when for instance we learn that any operation on infinity returns infinity. So, you shouldn't just assume good old algebra is the way to solve your question. 0 * 1 is not what you should be looking for. Infinite quantities aren't numbers at all.
True, and in real life calculations, they're not used. To me, this seems to lead to holes in the mathematical system, though, for example the infamous divide by 0 problem. The idea is to see how infinity and numbers would co-exist in the same mathematical system. Formal math seems to say they don't/can't. We know that infinity is bigger than 1, and that infinitesimal is smaller than 1, so, when a difference in relationship to tangible real numbers can be shown, doesn't that prove that they aren't the same thing? In other words: surely something can't be smaller and bigger than something else at the same time.
The problem with this is the assumption that any operation on infinity results in infinity, for example, 0 * infinity, is of course, 0.
Just as we have 1000, and 0.001, we have infinity, that is, 1(00)0, and infinitesimal, that is, 0.(00)1. I proposed 1 as an answer because if 1000 * 0.001 = 1, then why not 1(00)0 * 0.(00)1 = 1?
- If we were to represent infinitesimals differently than infinitely large quantities, it would probably make sense to establish a rule based on their position in the equation. That is to say, an infinitesimally small number multiplied infinite times is still an infinitesimal number, while an infinitely large quantity multiplied by an infinitesimal amount is still an infinitely large quantity. This holds true for all operations.
This breaks the rules, though.
Well, I can't find the pages with the proofs anymore. But here's a good reference nonetheless: 0.999... - Wikipedia, the free encyclopedia... Now, I hope my assuming that saying that 0.(999) = 1 is the same as saying that 0.(000)1 = 0, isn't wrong.
No, 1 IS .999... and .999... IS 1, just as 1/3 IS 2/6 and 2/6 IS 1/3. They are two representations of the exact same number. That's why they are the same, not because the nines never end. In terms of math: 1 - .999... = 0, not .(000)1.
True, and in real life calculations, they're not used. To me, this seems to lead to holes in the mathematical system, though, for example the infamous divide by 0 problem. The idea is to see how infinity and numbers would co-exist in the same mathematical system. Formal math seems to say they don't/can't.
They exist, you've just not been introduced to the formal math for dealing with infinities and real numbers. Extended real number line - Wikipedia, the free encyclopedia
The problem with this is the assumption that any operation on infinity results in infinity, for example, 0 * infinity, is of course, 0. Just as we have 1000, and 0.001, we have infinity, that is, 1(00)0, and infinitesimal, that is, 0.(00)1. I proposed 1 as an answer because if 1000 * 0.001 = 1, then why not 1(00)0 * 0.(00)1 = 1?
Do you mean 0 or 1/∞?
If the latter, then you need to reread my last post.
We know that infinity is bigger than 1, and that infinitesimal is smaller than 1.
We don't. Infinity isn't measurable. There may be a need to express infinity in two directions in mathematics. But it's probably because we shouldn't, that theorists have so much trouble defining what mathematical rules they will obey.
so, when a difference in relationship to tangible real numbers can be shown, doesn't that prove that they aren't the same thing? In other words: surely something can't be smaller and bigger than something else at the same time
We can surely prove, by logic alone, that an infinitesimal is infinite. What you should probably be rid of is the notion that infinity is only something that is bigger than something else.
The problem with this is the assumption that any operation on infinity results in infinity, for example, 0 * infinity, is of course, 0.
Oh, no it's not. As I said before, you cannot think in algebra terms. I'd say it is infinite. But a search on google reveals I would be wrong; it's in fact indeterminate. Which in my own view means we don't have a proper mathematical model to deal with infinity. In any case, no. It's not zero. Again, infinity is not a number, it's no longer even a measurable concept. The fact we try to give it mathematical properties is probably something someone one day will give a condescending smile at, for our brave but futile attempts.
Last edited by Mario F.; 02-21-2011 at 09:02 PM.
Indeterminate. See above.
Yeah, we do, but it only deals with the sure parts of it, and leaves you to fend for yourself when you get to the indeterminate ones. It does give you a few weapons to help you defend yourself though. I've slaughtered many an indeterminate with l'Hôpital's Rule. For more detail, see above.
Let me extend on this by giving you a concrete example why infinity is both "bigger" than 42 and "smaller" than 42. Or, in other words, why infinity is both infinitely large and infinitely small. Or, to be more precise, why infinity isn't either. The example is a derivation from Zeno's arrow paradox. If you fire an arrow at a target, that arrow has to travel half its way before it can start doing the other half. But then, it needs to travel half of that before it can travel the other half of the first half. And so on, ad aeternum. You can suddenly reach the conclusion that the arrow path can be divided infinite times into infinitesimal sections. Note already the need to express "infinity" and "infinitesimal" in the same sentence to mean the same thing. One can attribute that to a trick of the tongue. But if you can divide space into infinitely small portions, the arrow needs to take an infinite amount of travelling before it reaches its target. And that's the paradox. It shouldn't ever reach it. Physics will have something to say about all this. But here I'm just demonstrating to you that infinity is both small and large. And it is both things at the same time. And exactly because it doesn't have any measurable properties, it should in fact not even be determined in terms of being small or big. It's just infinite.
Last edited by Mario F.; 02-21-2011 at 09:49 PM.
This is all very interesting, but please bear with me while I don't get anything. I don't understand why that would be so. Does this have to do with calculus equations often having the double mirror effect? Kind of like calculating PI? (Not sure what the real term is)
The problem is that infinity doesn't exist in the real numbers, so it doesn't follow the same rules as real numbers. The only way of dealing with infinities, without using the extended reals, is to use infinite limits. Your conjecture that 1/∞ * ∞ = ∞/∞ = 1 can be disproven by showing that both limx->∞ 1/x = 0 and limx->∞ 1/x^2 = 0 and that limx->∞ x = ∞, then considering both limx->∞ 1/x * limx->∞ x = limx->∞ x/x = 1 and limx->∞ 1/x^2 * limx->∞ x = limx->∞ x/x^2 = limx->∞ 1/x = 0, to see that there are different possible answers depending on the infinity in question. So, in other words, depending on the "size" of the infinity, you get different results. Ratios of infinities, and a few other special cases like these, are called indeterminate forms. There are a few good calculus resources out there, if you want to learn this type of stuff. I particularly like MIT's OCW Calculus, with Wikipedia to fill in the gaps. Unfortunately, they spend absolutely no time on limits. I suggest you learn limits. They can be fun for people who like algebra. You finally get to *almost* divide by 0!
Your equations don't make sense, why do you say that limx->∞ 1/x * limx->∞ x must equal limx->∞ x/x?
No, 1 IS .999... and .999... IS 1, just as 1/3 IS 2/6 and 2/6 IS 1/3.
They are two representations of the exact same number. That's why they are the same, not because the nines never end. In terms of math: 1 - .999... = 0, not .(000)1. Do you mean 0 or 1/∞? If the latter, then you need to reread my last post.
Yep, when I said .(000)1 I meant 1/∞; I guess I should use that form instead from now on. So, if .999~ is 1 (which I'm not necessarily disagreeing with), what is 1 - 1/∞?
Why? Because we live such that we can't 'capture' infinity; every time we try, it just acts like a number so big that it can't be counted - even when not restrained by time. I can't think of an exception to this. (Yes, I do realize that infinity isn't a real number.)
Oh, no it's not. As I said before, you cannot think in algebra terms. I'd say it is infinite. But a search on google reveals I would be wrong; it's in fact indeterminate. Which in my own view means we don't have a proper mathematical model to deal with infinity. In any case, no. It's not zero. Again, infinity is not a number, it's no longer even a measurable concept.
Look at Well-definition - Wikipedia, the free encyclopedia. It shows that it kind of is.
Yeah, probably, but I like to try anyway ;-)
Let me extend on this by giving you a concrete example why infinity is both "bigger" than 42 and "smaller" than 42. Or, in other words, why infinity is both infinitely large and infinitely small. Or, to be more precise, why infinity isn't either. The example is a derivation from Zeno's arrow paradox. If you fire an arrow at a target, that arrow has to travel half its way before it can start doing the other half. But then, it needs to travel half of that before it can travel the other half of the first half. And so on, ad aeternum. You can suddenly reach the conclusion that the arrow path can be divided infinite times into infinitesimal sections. Note already the need to express "infinity" and "infinitesimal" in the same sentence to mean the same thing. One can attribute that to a trick of the tongue.
But if you can divide space into infinitely small portions, the arrow needs to take an infinite amount of travelling before it reaches its target. And that's the paradox. It shouldn't ever reach it. Physics will have something to say about all this. But here I'm just demonstrating to you that infinity is both small and large. And it is both things at the same time. And exactly because it doesn't have any measurable properties, it should in fact not even be determined in terms of being small or big. It's just infinite.
42. That's good. Why do you say "infinity and infinitesimal in the same sentence to mean the same thing"? It seemed to me that you meant different things. It seems like you said "infinite" in reference to "so many that the amount has no end", and "infinitesimal" in reference to "so small their smallness has no end". I don't see the paradox you're talking about. If the length is infinite, of course the arrow would take an infinite amount of time to reach the target.
My thinking is, you're right, infinity can't be measured. Let me give an illustration of what I'm thinking. Say, in space (by space I mean an imaginary void, not real space), there's a road whose size is infinite (NOT infinitesimal); remember, infinity means "boundless", so the road's size wouldn't be bound, and without bound, things keep going. So while on the road, we take a measuring tape and measure part of the road we're standing on, which we measure to be... 42 inches. Now, if we want to, we can walk a few yards and measure the same length out again. How does that not prove that 42 is smaller than infinity? Now, let's say we can pick up the road. No matter which way we place it, we won't be able to put it within the 42 inches we have measured out. How does that not prove that infinity is bigger than 42?
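The road discussion above turns on the same fact that resolves Zeno's arrow: infinitely many infinitesimally small pieces can add up to something finite. A quick numeric sketch (illustrative only, not from the thread):

```python
from fractions import Fraction

# Zeno's halves: 1/2 + 1/4 + 1/8 + ... -- infinitely many pieces, finite sum.
zeno = sum(0.5 ** k for k in range(1, 60))
print(zeno)  # 1.0 to within float precision

# The same idea behind 0.999... = 1: the partial sums 0.9, 0.99, 0.999, ...
# leave a remainder that shrinks by a factor of 10 per term, toward 0.
partial = sum(Fraction(9, 10 ** k) for k in range(1, 8))
print(1 - partial)  # 1/10000000
```

The remainder never becomes negative and can be made smaller than any fixed positive number, which is exactly what the limit statement "the series sums to 1" means.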
Your equations don't make sense, why do you say that limx->∞ 1/x * limx->∞ x must equal limx->∞ x/x?
I haven't even studied limits to a satisfactory degree, but they do have properties like other things, such as logs. lim x to a f(x) * lim x to a g(x) is the same as lim x to a f(x)g(x). Of course, you're asking people to explain calculus to you at this point, which is a bit annoying, I imagine.
So while on the road, we take a measuring tape and measure part of the road we're standing on, which we measure to be... 42 inches. Now, if we want to, we can walk a few yards and measure the same length out again. How does that not prove that 42 is smaller than infinity?
My interpretation of what's been said is this: the road is really a number line, with signs posted. Start from an arbitrary number N, not forty-two, and try to go to 42. I'm willing to bet that there are infinitely many real numbers between N and N+1, so many that you won't even make it to the next sign. Now, is infinity bigger or smaller than forty-two? Well, it's damn meaningless to measure! Of course, you can deal with infinity by using limits. For example, one of the first things you do studying limits is find the definition of a function. Is that information useful? I dunno. For me math is more about application than anything else, but I can't tell you what limits are applied to yet. I'm still learning calculus myself....
I haven't even studied limits to a satisfactory degree, but they do have properties like other things, such as logs. lim x to a f(x) * lim x to a g(x) is the same as lim x to a f(x)g(x). Of course, you're asking people to explain calculus to you at this point, which is a bit annoying, I imagine.
So these algebraic rules are supposed to be transparent of limits? I guess that makes sense. I'll look closer at it again.
My interpretation of what's been said is this: the road is really a number line, with signs posted. Start from an arbitrary number N, not forty-two, and try to go to 42.
I'm willing to bet that there are infinitely many real numbers between N and N+1, so many that you won't even make it to the next sign. Now, is infinity bigger or smaller than forty-two? Well, it's damn meaningless to measure!
Yep, it's basically a number line. Sure, there's an infinite number of real numbers between N and N+1, but that holds true even for finite number lines, and that doesn't stop one from traversing them by 42.
Differentials (denoted with the d in front of them) are not zero; if they were, the derivative would be meaningless, the integral also, by inclusion. df/dx = limh->0 (f(x+h)-f(x))/h. This says that the ratio of two differentials (infinitesimals) is finite. If they were zero, then the ratio would be 0/0, and therefore undefined. I'm not going into detail on what they really are, it takes more explaining than I really want to give, since there are many resources already available. (MIT OCW + Wikipedia = everything I know)
A law of limits is that, iff limx->c f(x)*g(x) = L, then (limx->c f(x))*(limx->c g(x)) = L. I used the inverse. 1/∞ is indeterminate, you can't give it a single representation. Neither can it be represented with a real number (.(000)1 is not a number). To prove .(000)1 is not a number, AFAIK, requires a real analysis concept called Cauchy Sequences. Basically, it means that any real number can be expressed as a convergent sequence of differences. There is no sequence that can converge to .(000)1.
Remember that ∞ can't be used without limits (within standard analysis; in nonstandard analysis, they use rules that give the same results as if you were to take a limit, which is why nonstandard analysis is largely considered extraneous). So, you take the limit limx->∞ 1 - 1/x = 1. In general, any number minus an infinitesimal is that same number. Counterintuitive, maybe, but true, definitely.
To help Mario, what he means by infinities and infinitesimals being the same is that they are the same in their immeasurableness.
One is always, unconditionally, larger, and the other is always, unconditionally, smaller.
The derivative of f(x) is defined as f'(x) = limh->0 (f(x+h)-f(x))/h; notice the fact that at h=0, it is 0/0, an indeterminate form. The derivative, intuitively, is how much the function rises as the amount it runs approaches zero. Derivatives, an application of limits, have innumerable applications. And integration, the most important application of derivatives, has even more.
How long have you been studying? I did for a few months before I actually started this semester at my local community college. It will definitely help you later. Many have issues when they first encounter a new paradigm of math, and being pre-exposed is like a vaccine against failing. Anyway, good luck, and if you ever get stuck on something, I have my PMs turned on, *unlike some people* cough cough...
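The difference quotient quoted above, f'(x) = limh->0 (f(x+h)-f(x))/h, can be watched approaching its limit numerically. This sketch (my illustration, not from the thread) uses f(x) = x**3, whose derivative at x = 2 is 12, and shows why plugging in h = 0 directly would give the indeterminate form 0/0:

```python
def difference_quotient(f, x, h):
    # The quantity whose limit as h -> 0 defines f'(x).
    # At h = 0 both numerator and denominator are 0: the 0/0 form.
    return (f(x + h) - f(x)) / h

f = lambda x: x ** 3
for h in [1.0, 0.1, 0.01, 0.001]:
    print(h, difference_quotient(f, 2.0, h))
# As h shrinks the quotient approaches f'(2) = 3 * 2**2 = 12,
# even though the expression is undefined at h = 0 itself.
```

The printed values (19, 12.61, 12.0601, 12.006001) close in on 12, which is the limit the definition assigns to the derivative.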
L. Fuchs, L. Théry: A Formalization of Grassmann-Cayley Algebra in Coq and Its Application to Theorem Proving in Projective Geometry. Automated Deduction in Geometry 2010.
Georges Gonthier, Assia Mahboubi, Laurence Rideau, Enrico Tassi, Laurent Théry, 'A Modular Formalisation of Finite Group Theory', Research Report 6156.
Laurent Théry and Guillaume Hanrot, 'Primality Proving with Elliptic Curve', Research Report 6155.
Laurent Théry, 'Proving the group law for elliptic curves formally', Technical Report 330.
Laurence Rideau and Laurent Théry, 'Formalising Sylow's theorems in Coq', Technical Report 327.
Laurent Théry, 'Simplifying Polynomial Expressions in a Proof Assistant', Research Report 5614.
Proof certificates for geometry.
Fourier: a reflected version of Fourier-Motzkin elimination for Coq.
Sudoku Solver: a certified solver for the Sudoku.
CoqRubik: a certified solver for the Mini Rubik.
CoqSos: a port of John Harrison's Sum of Squares tactic for Coq.
CoqPrime: certifying prime numbers.
Propositional logic: an applet to build formulae, an applet to build natural deduction proofs.
Cyp: a proof tool to colour proofs.
Aïoli: a toolkit for interactive symbolic applications.
Figue: an incremental two-dimensional layout engine.
Gbcoq: a certified implementation of Buchberger's algorithm.
CtCaml: a programming environment for CamlLight.
Chol: a proof environment for Hol.
Hexadecimal digits of Π in Coq using the Plouffe formula.
A formal proof in Coq of the Robbins problem.
A formal proof of the Feit-Thompson theorem (completed on September 20, 2012).
Formalising Geometric Algebra.
A workshop on numbers and proofs in June 2006 in Paris.
A formalization in the Coq theorem prover of a Sudoku solver: paper, Coq files.
Some tactics to reason about inequalities in Coq.
A bibliography on formalised mathematics.
Some formalisations in the Coq prover.
A course on formal methods (in Italian).
A course on Java and Esterel (in Italian).
“The aim of science is not to open the door to infinite wisdom, but to set a limit to infinite error.” Bertolt Brecht
Laurent Théry
The Fields Medal awards ceremony 2006 in Madrid
Video Description: Andrei Okounkov (Russia), Grigori Perelman (Russia), Terence Tao (Australia), and Wendelin Werner (France) were awarded the Fields Medal for their contributions to mathematics. But Perelman, who proved the Poincaré conjecture, refused his medal. Source: anuglyboy.
Fields Medal. The Fields Medal is a prize awarded to two, three, or four mathematicians not over 40 years of age at each International Congress of the International Mathematical Union, a meeting that takes place every four years. The Fields Medal is often viewed as the top honor a mathematician can receive. Founded at the behest of Canadian mathematician John Charles Fields, the medal was first awarded in 1936, to Finnish mathematician Lars Ahlfors and American mathematician Jesse Douglas, and has been regularly awarded since 1950. Its purpose is to give recognition and support to younger mathematical researchers who have made major contributions.
Fields Medalists 2006
• Andrei Okounkov, Russia, "for his contributions bridging probability, representation theory and algebraic geometry"
• Grigori Perelman, Russia - Medal declined, "for his contributions to geometry and his revolutionary insights into the analytical and geometric structure of the Ricci flow"
• Terence Tao, Australia, "for his contributions to partial differential equations, combinatorics, harmonic analysis and additive number theory"
• Wendelin Werner, France, "for his contributions to the development of stochastic Loewner evolution, the geometry of two-dimensional Brownian motion, and conformal field theory"
The next prizes will be awarded during the opening ceremony of ICM 2010 in Hyderabad, India on August 19, 2010.
Source: Wikipedia, Fields Medal, Fields Institute, IMU Awards and Prizes.
Math Forum Discussions

Topic: Visualiing Derivatives with Cubes
Replies: 9    Last Post: Sep 8, 2013 1:53 AM

Re: Visualiing Derivatives with Cubes
Posted: Sep 1, 2013 5:51 PM

Lots of ways to delta a shape by a little bit. Any delta you indicate as a change in volume in something may be modeled cube-wise or tetrahedron-wise or some other way. But it's not my agenda to replace the cube across the board. That's an agenda that gets projected by the cube-insecure. Let's say calculus is what it is with its cubes and squares.

What I'm into these days, along with Koski, is the scissoring rhombuses, sharing an axis, evolved from the "two book covers" thought experiment (each book cover 60-60-60 and kept open to one another at 180 degrees -- a page flaps back and forth). Keeping 5 edges equal and changing just one, that's what interests me. We use that program for getting volume from edges as inputs. I had a link to edu-sig for the Python code.

Whether there's some cool "tetrahedron calculus" in the pipeline I wouldn't necessarily know. The branch of math I've been talking about is fairly small; Zubek keeps repeating a few names but there are more. I don't know what everyone is up to, don't make it to all the conferences (like the SNEC ones -- I was one of the founders of SNEC, more so Russell Chu, and see Chris in Philadelphia maybe once a year).
in Chicago

On Sun, Sep 1, 2013 at 11:37 AM, Joe Niederberger wrote:
> Its easy to visualize what's going on with the derivatives of x**2 and
> x**3 with the usual square and cube representations of those functions: a
> square can be enlarged by "building out" along two edges, a cube can be
> likewise by "building out" on three faces -- the "error" artifacts are the
> little dx corner square in the 2D case, the corner cube as well as the
> three edge "lines" in the 3D case. Its just a cute way of seeing where the
> derivatives d/dx(x**2) = 2x, and d/dx(x**3) = 3(x**2) come from.
> But I don't see how to do anything similar with triangles or tetrahedrons.
> Perhaps Kirby will show us. Or does this simple exercise point to something
> a bit more fundamental than simple "cultural choice"?
> Cheers,
> Joe N

Date     Subject                                 Author
9/1/13   Visualiing Derivatives with Cubes       Joe Niederberger
9/1/13   Re: Visualiing Derivatives with Cubes   frank zubek
9/1/13   Re: Visualiing Derivatives with Cubes   kirby urner
9/1/13   Re: Visualiing Derivatives with Cubes   kirby urner
9/1/13   Re: Visualiing Derivatives with Cubes   Louis Talman
9/2/13   Re: Visualiing Derivatives with Cubes   Joe Niederberger
9/8/13   Re: Visualiing Derivatives with Cubes   kirby urner
9/2/13   Re: Visualiing Derivatives with Cubes   frank zubek
9/2/13   Re: Visualiing Derivatives with Cubes   frank zubek
9/2/13   Re: Visualiing Derivatives with Cubes   Joe Niederberger
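Joe N's "building out" picture can be checked numerically. The sketch below (my Python illustration, not code from the thread) decomposes the volume gained when a cube of side x grows by h into the three face slabs, the three edge bars, and the corner cube he describes; dividing by h and letting h shrink leaves the derivative 3x^2.

```python
# Growing a cube of side x by h adds 3 face slabs (3*x^2*h), 3 edge bars
# (3*x*h^2), and 1 corner cube (h^3). The derivative 3x^2 is what survives
# in the difference quotient as h -> 0.
def cube_growth_parts(x, h):
    faces = 3 * x**2 * h      # the three "build out" slabs
    edges = 3 * x * h**2      # the three edge "lines" from the post
    corner = h**3             # the little corner cube
    return faces, edges, corner

x, h = 2.0, 0.001
faces, edges, corner = cube_growth_parts(x, h)
total = faces + edges + corner
assert abs(total - ((x + h)**3 - x**3)) < 1e-12   # exact decomposition
print(total / h, 3 * x**2)   # difference quotient vs. derivative 3x^2
```

The 2D story is the same with one edge square instead of three edge bars, which is why d/dx(x**2) = 2x.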
{"url":"http://mathforum.org/kb/thread.jspa?threadID=2596922&messageID=9251684","timestamp":"2014-04-20T13:31:45Z","content_type":null,"content_length":"29850","record_id":"<urn:uuid:c6995551-d930-4a39-ad52-b3e6c8eb3826>","cc-path":"CC-MAIN-2014-15/segments/1397609538787.31/warc/CC-MAIN-20140416005218-00245-ip-10-147-4-33.ec2.internal.warc.gz"}
East Lake, CO Geometry Tutor Find an East Lake, CO Geometry Tutor I have taught students from kindergarten through eighth grade for 20 years in Boulder Valley School District, spending the majority of that time teaching middle school math and science. I have also tutored high school students and adults in Honors Geometry, Advanced Algebra 2, and Precalculus. I l... 14 Subjects: including geometry, reading, algebra 1, algebra 2 ...Mathematics, University of Louisiana, Lafayette, LA B.S. Physics, University of Louisiana, Lafayette, LA M.S. Paleoclimatology, Georgia Institute of Technology, Atlanta, GA Ph.D. 16 Subjects: including geometry, chemistry, calculus, physics ...I am a mathematics and writing tutor with three years of experience teaching and tutoring. I spent two years teaching mathematics and physics at a secondary school in rural Tanzania. Because of that experience, I am able to work with students who are far behind in their school curriculum or have trouble grasping the material. 27 Subjects: including geometry, reading, writing, algebra 1 ...I have passed the math portion of the GRE exam with a perfect 800 score, also! My graduate work is in architecture and design. I especially love working with students who have some fear of the subject or who have previously had an uncomfortable experience with it.I have taught Algebra 1 for many years to middle and high school students. 7 Subjects: including geometry, GRE, algebra 2, algebra 1 ...I am a life-long learner and have an eclectic background. Since I started college as a math major, I'm also proficient in math and feel comfortable tutoring algebra and geometry. (I've forgotten all the calculus I once knew!)I'm also an artist and have been knitting and crocheting since I was a ... 
24 Subjects: including geometry, reading, English, writing
{"url":"http://www.purplemath.com/East_Lake_CO_geometry_tutors.php","timestamp":"2014-04-18T03:55:28Z","content_type":null,"content_length":"24117","record_id":"<urn:uuid:5b8fa94a-7c43-40ba-ab76-8bc58294100b>","cc-path":"CC-MAIN-2014-15/segments/1397609532480.36/warc/CC-MAIN-20140416005212-00239-ip-10-147-4-33.ec2.internal.warc.gz"}
Here's the question you clicked on:

Find the Period of \[2 \cos^3 x + 5\cos 4x\]
• one year ago
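The crawled page does not record an answer, but the question is standard: cos^3 x expands as (3 cos x + cos 3x)/4, whose fundamental period is 2π, while cos 4x has period π/2, so the sum repeats every lcm(2π, π/2) = 2π. A quick numerical check (my sketch, not part of the original thread):

```python
import math

def f(x):
    return 2 * math.cos(x)**3 + 5 * math.cos(4 * x)

def is_period(T, samples=1000):
    # sample f on a grid and test whether shifting by T changes anything
    return all(abs(f(x) - f(x + T)) < 1e-9
               for x in (k * 0.01 for k in range(samples)))

# cos^3 x contains a cos x component, so 2*pi is needed; pi is not enough
# because it flips the sign of the cos^3 term.
assert is_period(2 * math.pi)
assert not is_period(math.pi)
```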
{"url":"http://openstudy.com/updates/504b77fde4b0925ffedd029b","timestamp":"2014-04-21T04:42:40Z","content_type":null,"content_length":"58229","record_id":"<urn:uuid:3248dc08-92a3-4e8f-865a-68e008cca376>","cc-path":"CC-MAIN-2014-15/segments/1397609539493.17/warc/CC-MAIN-20140416005219-00315-ip-10-147-4-33.ec2.internal.warc.gz"}
Estimating the size of reduction of rational points on $\mathbb{G}_m^2$

up vote 5 down vote favorite

Let $\Gamma$ be a free subgroup of rank 2 in $\mathbb{G}_m^2(\mathbb{Q})$. For all but finitely many primes p we can reduce $\Gamma$ modulo p. Let $S$ be the set of primes for which $\Gamma$ does not reduce modulo p, and for any $p$ not in $S$, let $\gamma_p$ be the size of $\Gamma \bmod p$. My question is what is known about the function $f(x)= \sum_{p\not\in S,\ p\leq x}\frac{\log p}{\gamma_p}$. In particular, what is the asymptotic behavior of $f$? Is the corresponding infinite series convergent whenever $\Gamma$ is not contained in an algebraic subgroup of $\mathbb{G}_m^2$? Do you know of any references that might be relevant to those questions? Thanks in advance,

nt.number-theory analytic-number-theory

1 What exactly is the "exceptional set"? Is it the primes dividing the numerator or denominator of some element of $\Gamma$? – David Loeffler May 30 '11 at 15:43

Yes, that is what I meant. I corrected the question to make the statement clearer. Thank you for the remark. – Tzanko Matev May 31 '11 at 7:31

1 Answer (up vote 4, accepted)

Presumably "exceptional" means primes where either one of the generators of $\Gamma$ is 0 or $\infty$ mod p, or where $\Gamma$ mod $p$ has rank smaller than $2$. The following reference is possibly relevant to your question, although we consider a somewhat different sum. We give an upper bound (that should be fairly sharp) for the sum $$\sum_{p} \frac{\log p}{p\cdot\gamma_p^\epsilon}.$$ In particular, we prove that $$\limsup_{\epsilon\to 0} ~~\epsilon \cdot \sum_{p} \frac{\log p}{p\cdot\gamma_p^\epsilon} \le 1+\frac{1}{\text{rank}~\Gamma}.$$ The article is Murty, M. Ram and Rosen, Michael and Silverman, Joseph H., Variations on a theme of Romanoff, Internat. J. Math. 7 (1996), 373-391 (MR1395936).
Joe: For any prime number $p$ (such that the two generators of $\Gamma$ are $p$-adic units), the group $\Gamma \bmod p$ is finite, so it always has rank smaller than $2$! – ACL May 31 '11 at 7:02

@ACL: I believe "rank" here means "minimal number of generators". – S. Carnahan♦ May 31 '11 at 8:08

Actually, our result is a little different in that we are looking at subgroups $\Gamma$ of $\mathbb{G}(\mathbb{Q})$. The rank means the free rank, which is the dimension of $\Gamma\otimes\mathbb{Q}$ as a $\mathbb{Q}$-vector space. We also deal with abelian varieties of arbitrary dimension, but looking again at our paper, I see that we didn't do the case of finitely generated subgroups of $\mathbb{G}^d(\mathbb{Q})$ for $d\ge2$. However, the argument that we use will easily generalize, and I suspect that the limsup formula remains the same. – Joe Silverman May 31 '11 at 12:47

@ACL: The rank refers to the free rank of $\Gamma$ as a finitely generated abelian group, not to its rank after being reduced modulo p. – Joe Silverman May 31 '11 at 12:48
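To get a feel for the quantities in the question, here is a small numerical experiment (the generators (2,3) and (5,7) are my own illustrative choice, not from the post): it computes γ_p as the size of the subgroup of (Z/p)* × (Z/p)* generated by the reductions, and accumulates the partial sum f(x).

```python
import math

def primes_up_to(n):
    sieve = [True] * (n + 1)
    sieve[0:2] = [False, False]
    for i in range(2, int(n**0.5) + 1):
        if sieve[i]:
            sieve[i*i::i] = [False] * len(sieve[i*i::i])
    return [i for i, is_p in enumerate(sieve) if is_p]

def subgroup_size(gens, p):
    # closure of the generators inside (Z/p)^* x (Z/p)^*; since the group
    # is finite, repeated multiplication already yields the inverses
    elems = {(1, 1)}
    frontier = [(1, 1)]
    while frontier:
        a, b = frontier.pop()
        for g, h in gens:
            nxt = (a * g % p, b * h % p)
            if nxt not in elems:
                elems.add(nxt)
                frontier.append(nxt)
    return len(elems)

gens = [(2, 3), (5, 7)]           # illustrative generators, not from the post
total = 0.0
for p in primes_up_to(200):
    if p in (2, 3, 5, 7):         # the exceptional set S for these generators
        continue
    total += math.log(p) / subgroup_size(gens, p)
print(total)                      # partial sum f(200)
```

Since γ_p is typically of size comparable to p^2 for a rank-2 subgroup, the terms log(p)/γ_p shrink quickly, which is what makes the convergence question plausible.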
{"url":"http://mathoverflow.net/questions/66463/estimating-the-size-of-reduction-of-rational-points-on-mathbbg-m2/66492","timestamp":"2014-04-17T18:26:42Z","content_type":null,"content_length":"59027","record_id":"<urn:uuid:4d95a691-8616-4ebe-a73b-b732868fbcbc>","cc-path":"CC-MAIN-2014-15/segments/1397609530895.48/warc/CC-MAIN-20140416005210-00017-ip-10-147-4-33.ec2.internal.warc.gz"}
Computation of Fuzzy Truth Values for The Liar and Related Self-Referential Systems
Ath. Kehagias and K. Ezerides

We study the Liar paradox and related systems of self-referential sentences. Specifically, we consider the problem of assigning consistent fuzzy truth values to systems of self-referential sentences. We show that this problem can be reduced to the solution of a system of nonlinear equations, and we prove that, under certain conditions, a solution (i.e. a consistent truth value assignment) always exists. Furthermore, we show that, for the min/max implementation of logical and/or and the standard negation, the mid-point solution is always consistent.
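To make the abstract's setup concrete, here is the smallest possible instance (my illustration, not the authors' notation): for the Liar sentence, consistency under the standard negation n(t) = 1 - t is the one-variable nonlinear equation t = 1 - t, whose unique solution is the mid-point 1/2.

```python
# The Liar sentence L asserts its own falsity, so a consistent fuzzy truth
# value t must satisfy t = 1 - t under the standard negation n(t) = 1 - t.
def liar_residual(t):
    return t - (1.0 - t)

midpoint = 0.5
assert liar_residual(midpoint) == 0.0    # the mid-point assignment is consistent

# A two-sentence system: A says "B is false", B says "A is false".
# Consistency means tA = 1 - tB and tB = 1 - tA; the mid-point assignment
# (0.5, 0.5) again satisfies both equations.
tA = tB = 0.5
assert tA == 1.0 - tB and tB == 1.0 - tA
```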
{"url":"http://oldcitypublishing.com/MVLSC/MVLSCabstracts/MVLSC12.5-6abstracts/MVLSCv12n5-6p539-559Kehagias.html","timestamp":"2014-04-20T10:46:30Z","content_type":null,"content_length":"2418","record_id":"<urn:uuid:23487b12-231b-4d3a-8061-6d5c5a317b57>","cc-path":"CC-MAIN-2014-15/segments/1397609538423.10/warc/CC-MAIN-20140416005218-00234-ip-10-147-4-33.ec2.internal.warc.gz"}
How to compute the Betti numbers of S-D for a surface S and a divisor D?

up vote 0 down vote favorite

Let S be a projective non-singular surface and D a Cartier divisor which has a smooth representative. Can the Betti numbers of S-D be expressed in terms of the Betti numbers of S and D? In a paper $b_i(S-D) = b_i(S) - b_i(D)$ is used without explanation; however, I can't prove it and I doubt whether it would hold.

at.algebraic-topology divisors

This is clearly false, think about $i=0$... In general you would study this by means of the long exact sequence for relative cohomology and the isomorphism $H^i(S,D) = H^{4-i}(S-D)^\vee$. – Dan Petersen Mar 19 '12 at 9:07

Using the long exact sequence it is easy to represent the Euler number of S-D by the Euler numbers of S and D, but we still cannot represent the Betti numbers of S-D by the Betti numbers of S and D, right? At least we cannot find a universal formula... – rose Mar 19 '12 at 9:24

What is true is that $\sum (-1)^i b_i(S-D) = \sum (-1)^i b_i(S) - \sum (-1)^i b_i(D)$. In fact, the topological Euler-Poincaré characteristic is additive, as you can see by taking a triangulation of $S$ that contains a triangulation of $D$ as a sub-triangulation (recall that any smooth, complex projective variety can be triangulated). – Francesco Polizzi Mar 19 '12 at 10:07
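A concrete example (chosen here, not taken from the thread) makes both comments explicit: take S = P^2 and D a line, so S - D is the contractible affine plane C^2. The naive Betti-number formula fails already at i = 0, while the alternating sums are additive, as Polizzi says.

```python
# S = P^2 (complex projective plane), D = a line (a copy of P^1),
# so S - D is the affine plane C^2, which is contractible.
betti_S = [1, 0, 1, 0, 1]    # b_i(P^2)
betti_D = [1, 0, 1, 0, 0]    # b_i(P^1), padded to degree 4
betti_SD = [1, 0, 0, 0, 0]   # b_i(C^2)

def euler(b):
    return sum((-1)**i * bi for i, bi in enumerate(b))

# The naive formula b_i(S-D) = b_i(S) - b_i(D) already fails at i = 0 ...
assert betti_SD[0] != betti_S[0] - betti_D[0]
# ... but the Euler characteristics are additive: 1 == 3 - 2.
assert euler(betti_SD) == euler(betti_S) - euler(betti_D)
```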
{"url":"http://mathoverflow.net/questions/91599/how-to-compute-the-betti-numbers-of-s-d-for-a-surface-s-and-a-divisor-d","timestamp":"2014-04-18T03:17:02Z","content_type":null,"content_length":"50273","record_id":"<urn:uuid:6abb50af-b071-4662-9169-f44ff61cd083>","cc-path":"CC-MAIN-2014-15/segments/1398223205137.4/warc/CC-MAIN-20140423032005-00584-ip-10-147-4-33.ec2.internal.warc.gz"}
Matrix Exponentiation

Matrix exponentiation is a useful technique that can be applied to a wide variety of problems. Most commonly it is used for efficiently solving linear recurrences, and as such it can be used in any problem that can be represented as a linear recurrence. The main problem is that such recurrences are inefficient when we evaluate them naively.

Let's consider an easy example that you should know about: Fibonacci numbers. The sequence is defined by:

F(n) = F(n-1) + F(n-2), with F(0) = F(1) = 1

It's easy enough to evaluate it term by term until we hit our targeted n-th Fibonacci number. However, let's say n was 1 billion - we would need at least 1 billion computations of the sequence to derive the answer. If we represent the sequence in terms of a matrix, we discover a clever optimisation which can be applied:

| F(n)   |   | 1 1 | | F(n-1) |
| F(n-1) | = | 1 0 | | F(n-2) |

The correctness can be verified by a simple matrix multiplication. What's more important to note is that the next two Fibonacci numbers in the sequence can be represented like the one above. If we expand:

| F(n)   |   | 1 1 | | 1 1 | | F(n-2) |
| F(n-1) | = | 1 0 | | 1 0 | | F(n-3) |

This is equivalent to:

| F(n)   |   | 1 1 |^2 | F(n-2) |
| F(n-1) | = | 1 0 |   | F(n-3) |

We can continue expanding, and eventually we see a remarkable observation: we can always reduce it to a power of our matrix multiplied by our initial conditions. This yields the recurrence below:

| F(n)   |   | 1 1 |^(n-1) | F(1) |
| F(n-1) | = | 1 0 |       | F(0) |

Since exponentiation can be done in logarithmic time, we have essentially reduced our Fibonacci computation from linear to logarithmic time. A huge difference in speed for large numbers! As an exercise, write a logarithmic Fibonacci term generator - as the numbers do get quite large, feel free to reduce it modulo an appropriate number (like 10^8).

Now consider the more general linear recurrence a(n) = x*a(n-1) + y*a(n-2). To solve this recurrence we simply do the same thing as we did with Fibonacci, with a slight modification. Fibonacci is really a special case of the above recurrence where x = 1, y = 1 and a0 = a1 = 1. Note the prime observation below:

| a(n)   |   | x y | | a(n-1) |
| a(n-1) | = | 1 0 | | a(n-2) |

If we matrix multiply, we yield the correct recurrence for both a(n) and a(n-1) (Fn and Fn-1 in the Fibonacci case).
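For reference, here is one way to do the exercise just posed (a sketch in Python rather than the article's C++), computing the n-th term modulo 10^8 with O(log n) matrix multiplications:

```python
# Logarithmic-time Fibonacci via squaring of [[1,1],[1,0]], reduced mod 10^8.
MOD = 10**8

def mat_mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) % MOD
             for j in range(2)] for i in range(2)]

def mat_pow(M, n):
    R = [[1, 0], [0, 1]]           # identity
    while n:
        if n & 1:
            R = mat_mul(R, M)
        M = mat_mul(M, M)
        n >>= 1
    return R

def fib(n):                        # F(0) = F(1) = 1, as in the article
    if n < 2:
        return 1
    M = mat_pow([[1, 1], [1, 0]], n - 1)
    return (M[0][0] * 1 + M[0][1] * 1) % MOD   # row 0 times [F(1), F(0)]

print([fib(i) for i in range(8)])  # [1, 1, 2, 3, 5, 8, 13, 21]
```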
Therefore we simply need to change our original Fibonacci matrix of [ 1 1, 1 0 ] to [ x y, 1 0 ] and the initial conditions from being always 1 and 1 (F1 and F0 respectively) to [ a1 a0 ]. We then simply use matrix exponentiation to calculate the correct term; as always, we apply modulo arithmetic to keep the number representable with integers. The implementation below demonstrates this idea:

#include <iostream>
using namespace std;

class Matrix {
public:   // members made public so the free functions below can use them
    long long a;
    long long b;
    long long c;
    long long d;

    Matrix() { }
    Matrix(long long _a, long long _b, long long _c, long long _d)
        : a(_a), b(_b), c(_c), d(_d) { }
};

Matrix multiply(const Matrix& lhs, const Matrix& rhs) {
    long long a = lhs.a * rhs.a + lhs.b * rhs.c;
    long long b = lhs.a * rhs.b + lhs.b * rhs.d;
    long long c = lhs.c * rhs.a + lhs.d * rhs.c;
    long long d = lhs.c * rhs.b + lhs.d * rhs.d;
    return Matrix(a % 100, b % 100, c % 100, d % 100);
}

ostream& operator<<(ostream& out, const Matrix& M) {
    out << "a: " << M.a << " b: " << M.b << " c: " << M.c << " d: " << M.d;
    return out;
}

Matrix power(Matrix M, int n) {
    if (n == 1) return M;
    Matrix tmp = power(M, n / 2);
    Matrix sq = multiply(tmp, tmp);
    if (n % 2 != 0) {
        return multiply(sq, M);    // odd exponent: one extra factor of M
    }
    return sq;
}

void output(long long n) {
    if (n < 10) cout << "0";       // keep exactly two digits
    cout << n << "\n";
}

int main() {
    long long x, y, a0, a1, n;
    while (cin >> x >> y) {
        if (x == 0 && y == 0) break;
        cin >> a0 >> a1 >> n;
        if (n == 0) { output(a0 % 100); continue; }
        else if (n == 1) { output(a1 % 100); continue; }
        Matrix m = power(Matrix(x, y, 1, 0), n - 1);
        output((m.a * a1 + m.b * a0) % 100);
    }
    return 0;
}

Now for a more advanced example, we use matrix exponentiation to determine the number of cycles with a length smaller than k in a given directed graph.

Problem: TourCounting
Source: TopCoder SRM 306 D1-1050

An elegant way to calculate the number of paths from a source vertex u to a destination vertex v that takes exactly k steps is to use matrix multiplication.
First let's define A to be the adjacency matrix with each entry representing the number of edges from u to v. The (u,v) entry of A^t is precisely the answer we are looking for. The way this works is similar to the way that Floyd-Warshall algorithm works. The matrix multiplication considers the connectivity between an intermediate node k and attempts to link u to k and k to v. Due to the multiplicative nature of matrix multiplication as opposed to an additive (for shortest paths) it determines the number of paths based on the multiplication principle from discrete mathematics. To illustrate this principle, consider a snapshot of the matrix A^t and it's adjacency matrix which is represented as A^1: If we were to calculate the number of paths that start from vertex 0 to itself (i.e. a cycle) that takes exactly t+1 steps. Note that we can access vertex 0 from either vertex 1 or vertex 2. Since at state t, we can reach vertex 1 from vertex 0 with 2 paths and we can reach vertex 2 from vertex 0 with 3 paths. It stands that we can now reach vertex 0 to itself from 2+3=5 paths. Note that this is a direct consequence of the matrix multiplication process - validate this yourself for the first row and column. If the adjacency matrix was changed such that there are two edges from vertex 1 to vertex 0 like the following: Then we can reach vertex 0 in a total of 2*2 + 1*3 = 7 ways since we have an option of choosing one of the two edges from vertex 1 to vertex 0. Again, this is consequence of the matrix multiplication algorithm. The argument holds for every other cell in the matrix. So what's the advantage of this method? The prime factor is efficiency - we can calculate matrix powers in logarithmic number of matrix multiplications. We accomplish this through a similar algorithm to fast powering. So now we know how to calculate the number of paths from a source vertex u to a destination vertex v that takes exactly k steps. 
The actual problem requires us to compute the number of cycles that are present with fewer than k steps. So for a given instance of the matrix A^t, we sum the diagonal to yield the number of cycles with exactly t steps. To calculate it for fewer than k steps we would normally have to compute each power A^t and sum all the diagonals up; however, this is not fast enough, as the number of steps can be as much as 1 million. We can reduce this using a similar method to our matrix powering algorithm. First define a function f:

f(n) represents, in matrix form, the number of ways we can get from each vertex to each other vertex using at most n steps; that is, f(n) = A + A^2 + ... + A^n, so the answer for cycles of length less than k is read off from f(k-1).

The main optimisation is reducing the evaluation from linear (summing all n matrix powers) to logarithmic. The key to a logarithmic algorithm is dividing the input space by a (usually constant) factor. Note that if we divide the sum into two f(n/2)-sized parts we are close to our answer, but not quite there. The problem is that the upper n/2 part is not identical to the lower n/2 part. But wait! Given the simple property of powers a^(b+n) = a^b * a^n, we can simply multiply one of the f(n/2) parts by A^(n/2) to yield the correct upper half. With this, our recurrence becomes:

f(1) = A
f(n) = f(n/2) + f(n/2) * A^(n/2)   for even n
f(n) = f(n-1) + A^n                for odd n > 1

Like our fast powering algorithm, we need to distinguish between odd and even states. The easy way to make an odd number even is to subtract 1 from it. We then just pay for one instance of the odd case and make all successive calls even. This maintains the correctness of the recurrence whilst also maintaining the time complexity. Then it's simply a matter of implementation, which is the easy part. Just note we need to use sensible modulo arithmetic to ensure that we don't overflow any of our computations.
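The halving recurrence is easy to sanity-check on scalars before trusting it on matrices (a quick Python sketch, not part of the article): with a single number a playing the role of A, f(n) should equal a + a^2 + ... + a^n.

```python
# Scalar model of the recurrence: f(1) = a; for even n,
# f(n) = f(n/2) + f(n/2) * a^(n/2); for odd n > 1, f(n) = f(n-1) + a^n.
def f(a, n):
    if n == 1:
        return a
    if n % 2:                      # odd: peel off the top power
        return f(a, n - 1) + a**n
    half = f(a, n // 2)
    return half + half * a**(n // 2)

a = 3
for n in range(1, 12):
    assert f(a, n) == sum(a**t for t in range(1, n + 1))
```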
#include <vector>
#include <string>
using namespace std;

class TourCounting {
public:
    int countTours(vector<string>, int, int);
};

class Matrix {
public:
    vector<vector<long long> > data;
    Matrix() { }
    Matrix(int n, int m) {
        data = vector<vector<long long> >(n, vector<long long>(m, 0));
    }
};

long long MOD;

Matrix operator*(const Matrix& lhs, const Matrix& rhs) {
    Matrix res((int)lhs.data.size(), (int)rhs.data[0].size());
    for (size_t i = 0; i < lhs.data.size(); i++) {
        for (size_t j = 0; j < lhs.data[i].size(); j++) {
            for (size_t k = 0; k < rhs.data[0].size(); k++) {
                res.data[i][k] = (res.data[i][k] + lhs.data[i][j] * rhs.data[j][k]) % MOD;
            }
        }
    }
    return res;
}

Matrix operator+(const Matrix& lhs, const Matrix& rhs) {
    Matrix res((int)lhs.data.size(), (int)lhs.data[0].size());
    for (size_t i = 0; i < lhs.data.size(); i++) {
        for (size_t j = 0; j < lhs.data[0].size(); j++) {
            res.data[i][j] = (lhs.data[i][j] + rhs.data[i][j]) % MOD;
        }
    }
    return res;
}

Matrix power(const Matrix& lhs, int P) {
    if (P == 1) return lhs;
    Matrix tmp = power(lhs, P / 2);
    Matrix sq = tmp * tmp;
    if (P % 2 != 0) {
        return sq * lhs;
    }
    return sq;
}

long long compute(const Matrix& m) {
    // sum of the diagonal: the number of closed walks (cycles) of this length
    long long res = 0;
    for (size_t i = 0; i < m.data.size(); i++) {
        res = (res + m.data[i][i]) % MOD;
    }
    return res % MOD;
}

Matrix dp[1000001];
int visited[1000001];

Matrix f(const Matrix& ref, int n) {
    if (visited[n]) return dp[n];
    visited[n] = 1;
    if (n == 1) return dp[n] = ref;
    if (n % 2 != 0) {
        // odd: peel off the highest power
        return dp[n] = f(ref, n - 1) + power(ref, n);
    }
    // even: f(n) = f(n/2) + f(n/2) * A^(n/2)
    return dp[n] = f(ref, n / 2) * power(ref, n / 2) + f(ref, n / 2);
}

int TourCounting::countTours(vector<string> g, int k, int m) {
    Matrix mat((int)g.size(), (int)g.size());
    for (size_t i = 0; i < g.size(); i++)
        for (size_t j = 0; j < g[i].size(); j++)
            mat.data[i][j] = g[i][j] == 'Y' ? 1 : 0;
    MOD = m;
    if (k <= 1) return 0;   // no cycles have length smaller than 1
    Matrix rr = f(mat, k - 1);
    return (int)compute(rr);
}
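The underlying claim - that the (u,v) entry of A^t counts the walks of length exactly t - can be cross-checked by brute force on a tiny graph (a standalone Python sketch with a multigraph chosen here for illustration, independent of the C++ solution above):

```python
from itertools import product

# A small directed multigraph: A[u][v] is the number of edges from u to v.
A = [[0, 1, 1],
     [1, 0, 0],
     [2, 0, 0]]

def mat_mul(X, Y):
    n = len(X)
    return [[sum(X[i][k] * Y[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def brute_walks(u, v, t):
    # enumerate every vertex sequence of length t from u to v and multiply
    # the edge multiplicities along it
    n, total = len(A), 0
    for mid in product(range(n), repeat=t - 1):
        seq = (u,) + mid + (v,)
        ways = 1
        for a, b in zip(seq, seq[1:]):
            ways *= A[a][b]
        total += ways
    return total

At = A
for _ in range(2):                  # At = A^3
    At = mat_mul(At, A)
assert all(At[u][v] == brute_walks(u, v, 3)
           for u in range(3) for v in range(3))
```

With two edges from vertex 2 to vertex 0, a walk passing through that edge is counted twice, which is exactly the "2*2 + 1*3 = 7" multiplication-principle count described earlier.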
{"url":"http://wilanw.blogspot.co.uk/2009/12/matrix-exponentiation.html","timestamp":"2014-04-20T13:19:21Z","content_type":null,"content_length":"74707","record_id":"<urn:uuid:fc85787c-4271-41b5-8c87-b81186ee8052>","cc-path":"CC-MAIN-2014-15/segments/1397609538787.31/warc/CC-MAIN-20140416005218-00404-ip-10-147-4-33.ec2.internal.warc.gz"}
Puzzle..find out the fallacy...(interesting)

Rambo Prasad (Ranch Hand; joined: Feb 23, 2006; posts: 628):
Consider the equation X+X+X+... (X times):
2+2 = 2^2
3+3+3 = 3^2
Similarly, x+x+x+...+x (x times) = x^2.
Now differentiate both sides with respect to x:
1+1+1+... (x times) = 2x, i.e. x = 2x => 1 = 2.
What is the mistake here?? Any idea?
[ March 13, 2006: Message edited by: Rambo Prasad ]
Helping hands are much better than the praying lips

Reply (joined: Mar 22, 2005; posts: 39548):
The equality is not true (or rather, is not defined) for numbers that are not integers. And since differentiation is defined only for continuous functions - which the one on the LHS is not - it can't be applied here.
Ping & DNS - updated with new look and Ping home screen widget

Reply (Ranch Hand; joined: Jun 02, 2003; posts: 1923):
Another explanation: x^2 might be a notation for a function x^2. But you use it as a value (x^2=x | x=1). Then you mix both notations, but you may not differentiate a value.
I like... http://home.arcor.de/hirnstrom/bewerbung

Rambo Prasad (Ranch Hand; joined: Feb 23, 2006; posts: 628), replying to Ulf Dittmer:
great! correct answer

subject: Puzzle..find out the fallacy...(interesting)
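The replies can be restated computationally (my illustration, not from the thread): writing x^2 as "x added x times" hides the fact that the number of terms also depends on x, so differentiating term by term drops exactly the product-rule contribution of that hidden factor.

```python
# The bogus step differentiates each of the x terms of x + x + ... + x,
# getting 1 each, and forgets that the term count x is itself changing.
def naive_derivative(x):
    return sum(1 for _ in range(x))          # = x, only half the answer

def true_derivative(x):
    return 2 * x                             # d/dx (x^2)

x = 5
assert naive_derivative(x) == x
# Product rule on x*x: (x)'*x + x*(x)' = x + x; the naive step keeps one
# of the two x's and silently discards the other.
assert true_derivative(x) == naive_derivative(x) + x
```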
{"url":"http://www.coderanch.com/t/35411/Programming/Puzzle-find-fallacy-interesting","timestamp":"2014-04-19T20:55:35Z","content_type":null,"content_length":"24632","record_id":"<urn:uuid:6e6880c3-5375-4eb6-a3f6-afc4036d0cfd>","cc-path":"CC-MAIN-2014-15/segments/1397609537376.43/warc/CC-MAIN-20140416005217-00407-ip-10-147-4-33.ec2.internal.warc.gz"}
If one shape can become another using Turns, Flips and/or Slides, then the shapes are Congruent: after any of those transformations (turn, flip or slide), the shape still has the same size, area, angles and line lengths.

These shapes are all Congruent:
[Rotated]  [Reflected and Moved]  [Reflected and Rotated]

Congruent or Similar?

The two shapes need to be the same size to be congruent. When we need to resize one shape to make it the same as the other, the shapes are Similar.

When we ...                                 Then the shapes are ...
... only Rotate, Reflect and/or Translate   Congruent
... also need to Resize                     Similar

Congruent? Why such a funny word that basically means "equal"? Probably because they would only be "equal" if laid on top of each other. Anyway it comes from Latin congruere, "to agree". So the shapes "agree"
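The claim that turns and flips leave size, angles and line lengths unchanged is easy to verify numerically (a sketch added here, using an arbitrary triangle of my choosing):

```python
import math

# An arbitrary triangle, given by its three corner points.
tri = [(0.0, 0.0), (4.0, 0.0), (1.0, 3.0)]

def rotate(p, a):
    x, y = p
    return (x * math.cos(a) - y * math.sin(a),
            x * math.sin(a) + y * math.cos(a))

def side_lengths(pts):
    return sorted(math.dist(pts[i], pts[(i + 1) % 3]) for i in range(3))

turned = [rotate(p, 1.234) for p in tri]        # a Turn
flipped = [(-x, y) for x, y in tri]             # a Flip across the y-axis
for image in (turned, flipped):
    assert all(abs(a - b) < 1e-9
               for a, b in zip(side_lengths(tri), side_lengths(image)))
```

A Resize (multiplying all coordinates by some factor other than 1) would change the side lengths, which is exactly why it produces Similar rather than Congruent shapes.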
{"url":"http://www.mathsisfun.com/geometry/congruent.html","timestamp":"2014-04-18T23:16:35Z","content_type":null,"content_length":"7491","record_id":"<urn:uuid:d2193c75-66b6-44f2-b0d6-c391f63ecc96>","cc-path":"CC-MAIN-2014-15/segments/1398223202548.14/warc/CC-MAIN-20140423032002-00553-ip-10-147-4-33.ec2.internal.warc.gz"}
February, 2013

44 Comments

As a puzzle, commenter nobugz asks, "What kind of infinity is 1.#J?"

double z = 0;
printf("%.2f", 1/z);

Now, the division by zero results in IEEE positive infinity, which would normally be printed as 1.#INF. But the catch here is that the print format says "Display at most two places after the decimal point." But where is the decimal point in infinity? The Visual C runtime library arbitrarily decided that all of the exceptional values have one digit before the decimal (namely, the "1"). Actually, it turns out that this puzzle might be an answer to Random832's question, "What's the 1 for?" Maybe the 1 is there so that there is a digit at all.

Okay, so now you have one digit before the decimal (the "1"), and now you need to show at most two places after the decimal. But "#INF" is too long to fit into two characters. The C runtime then says, "Well, then I'd better round it off to two places." The first character is "#". The second character is "I". Now we need to round it. That's done by inspecting the third character, which is "N". We all learned in grade school that you round up if the next digit is 5 or greater. And it so happens that the code point for "N" is numerically higher than the code point for "5", so the value is rounded up by incrementing the previous digit. Incrementing "I" gives you "J".

That's why printing IEEE positive infinity to two places gives you the strange-looking "1.#J". The J is an I that got rounded up. I doubt this behavior was intended; it's just a consequence of taking a rounding algorithm intended for digits and applying it to non-digits.

Of course, in phonetics, rounding an i produces a ü. Imagine the nerdiness of rounding "1.#INF" to two places and producing "1.#Ü". That would have been awesome.

4 Comments

The MENUITEMINFO structure lets you specify a bitmap to appear next to the menu item. Is there a way to do this from a menu resource template?
If you look at the format of menu templates, you'll see that there is nowhere to specify a bitmap. Which kind of makes sense, because it is the responsibility of the application to destroy the bitmap referenced by the hbmpItem member when the menu is destroyed, but if you created the menu from a template, you don't know what that handle is, so you can't destroy it either!

17 Comments

A user of the imaginary Program Q program wanted to write an automated test that created a table, then ran various sub-tests which communicated among each other by updating that table.

When my test tries to create a table, the program asks the following question:

q install server -r testdb
Setting up this machine to be a registered table server...
Registered table servers must adhere to Microsoft information security policies. See http://programq/policy for details. If you have questions, contact mailto:qpolicy.
Do you agree to adhere to Microsoft policies regarding registered table servers (y/n/q)?

Is there a way to suppress the question? I can't pre-create a single server that all the tests connect to, because multiple tests running simultaneously would end up colliding with each other. I would prefer that each test run on its own isolated table server, but when I try to install a table server on the machine being tested, I get the above prompt.

Why not just create an unregistered table server instead? Just leave off the -r flag. Given your problem description, there appears to be no need for the table server to be registered.

Ah, didn't know about the ability to create an unregistered server. Works great!

The user was apparently so accustomed to creating registered table servers that he didn't realize that there was any other kind. My guess is that he had no idea what the -r flag did; he just cargo-culted it from somewhere.

Remember: The target audience for Program Q is not non-technical end-users.
The target audience is other programmers, and this person was clearly a programmer since he was writing an automated test! 56 Comments This is a story from a friend of a friend, which makes it probably untrue, but I still like the story. One of my colleagues jokingly suggested that we could speed up our code by adding these lines to our project #define EnterCriticalSection(p) ((void)0) #define LeaveCriticalSection(p) ((void)0) I replied, "You think you're joking, but you're not." According to legend, there was a project whose product was running too slow, so they spun off a subteam to see what architectural changes would help them improve their performance. The subteam returned some time later with a fork of the project that they had "tuned". And it was indeed the case that the performance-tuned version ran a lot faster. Later, the development team discovered that part of the "tuning" involved simply deleting all the synchronization. They didn't replace it with lock-free algorithms or anything that clever. They just removed all the critical sections. 13 Comments I dreamed that a friend of mine was showing me her new appliance: A combination urinal/bidet/washing machine. As we loaded clothes into it, she said, "You okay in there, Stephanie?" A muffled voice emerged from within: "Just let me know when you're ready." 12 Comments By default, when the taskbar or any other application wants to display a thumbnail for a window, the result is a copy of the window contents shrunk down to the requested size. Today we're going to override that behavior and display a custom thumbnail. 
Take the program from last week and make these changes:

#include <dwmapi.h>

BOOL OnCreate(HWND hwnd, LPCREATESTRUCT lpcs)
{
  g_hicoAlert = LoadIcon(nullptr, IDI_EXCLAMATION);
  g_wmTaskbarButtonCreated = RegisterWindowMessage(
                                 TEXT("TaskbarButtonCreated"));
  BOOL fTrue = TRUE;
  DwmSetWindowAttribute(hwnd, DWMWA_FORCE_ICONIC_REPRESENTATION,
                        &fTrue, sizeof(fTrue));
  DwmSetWindowAttribute(hwnd, DWMWA_HAS_ICONIC_BITMAP,
                        &fTrue, sizeof(fTrue));
  return TRUE;
}

We start by enabling custom thumbnails by setting the DWMWA_HAS_ICONIC_BITMAP attribute to TRUE. This overrides the default thumbnail generator and allows us to provide a custom one.

Next is a helper function that I broke out from this program because it's useful on its own. It simply creates a 32bpp bitmap of the desired size and optionally returns a pointer to the resulting bits.

HBITMAP Create32bppBitmap(HDC hdc, int cx, int cy,
                          RGBQUAD **pprgbBits = nullptr)
{
  BITMAPINFO bmi = { 0 };
  bmi.bmiHeader.biSize = sizeof(bmi.bmiHeader);
  bmi.bmiHeader.biWidth = cx;
  bmi.bmiHeader.biHeight = cy;
  bmi.bmiHeader.biPlanes = 1;
  bmi.bmiHeader.biBitCount = 32;
  bmi.bmiHeader.biCompression = BI_RGB;
  void *pvBits;
  HBITMAP hbm = CreateDIBSection(hdc, &bmi, DIB_RGB_COLORS,
                                 &pvBits, NULL, 0);
  if (pprgbBits) *pprgbBits = static_cast<RGBQUAD*>(pvBits);
  return hbm;
}

Next, we take our PaintContent function and make it render into a DC instead:

void RenderContent(HDC hdc, LPCRECT prc)
{
  LOGFONTW lf = { 0 };
  lf.lfHeight = prc->bottom - prc->top;
  wcscpy_s(lf.lfFaceName, L"Verdana");
  HFONT hf = CreateFontIndirectW(&lf);
  HFONT hfPrev = SelectFont(hdc, hf);
  wchar_t wszCount[80];
  swprintf_s(wszCount, L"%d", g_iCounter);
  FillRect(hdc, prc, GetStockBrush(WHITE_BRUSH));
  DrawTextW(hdc, wszCount, -1, const_cast<LPRECT>(prc),
            DT_CENTER | DT_VCENTER | DT_SINGLELINE);
  SelectFont(hdc, hfPrev);
  DeleteObject(hf);               // don't leak the font
}

In our case, we will want to render into a bitmap:

HBITMAP GenerateContentBitmap(HWND hwnd, int cx, int cy)
{
  HDC hdc = GetDC(hwnd);
  HDC hdcMem = CreateCompatibleDC(hdc);
  HBITMAP hbm = Create32bppBitmap(hdcMem, cx, cy);
  HBITMAP hbmPrev = SelectBitmap(hdcMem, hbm);
  RECT rc = { 0, 0, cx, cy };
  RenderContent(hdcMem, &rc);
  SelectBitmap(hdcMem, hbmPrev);
  DeleteDC(hdcMem);               // clean up the memory DC
  ReleaseDC(hwnd, hdc);
  return hbm;
}

We can use this function when DWM asks us to generate a custom thumbnail or a custom live preview bitmap.

void UpdateThumbnailBitmap(HWND hwnd, int cx, int cy)
{
  HBITMAP hbm = GenerateContentBitmap(hwnd, cx, cy);
  DwmSetIconicThumbnail(hwnd, hbm, 0);
  DeleteObject(hbm);              // DWM takes its own copy
}

void UpdateLivePreviewBitmap(HWND hwnd)
{
  RECT rc;
  GetClientRect(hwnd, &rc);
  HBITMAP hbm = GenerateContentBitmap(hwnd, rc.right - rc.left,
                                      rc.bottom - rc.top);
  DwmSetIconicLivePreviewBitmap(hwnd, hbm, nullptr, 0);
  DeleteObject(hbm);
}

// WndProc
case WM_DWMSENDICONICTHUMBNAIL:
  UpdateThumbnailBitmap(hwnd, HIWORD(lParam), LOWORD(lParam));
  break;
case WM_DWMSENDICONICLIVEPREVIEWBITMAP:
  UpdateLivePreviewBitmap(hwnd);
  break;

One of the quirks of the WM_DWMSENDICONICTHUMBNAIL message is that it passes the x- and y-coordinates backwards. Most window messages put the x-coordinate in the low word and the y-coordinate in the high word, but WM_DWMSENDICONICTHUMBNAIL does it the other way around.

Since we're generating a custom thumbnail and live preview bitmap, we need to let the window manager know that the custom rendering is out of date and needs to be re-rendered: invalidate the custom bitmaps when the counter changes.

void OnCommand(HWND hwnd, int id, HWND hwndCtl, UINT codeNotify)
{
  switch (id) {
  case IDC_INCREMENT:
    InvalidateRect(hwnd, nullptr, TRUE);
    break;
  }
}

And finally, just to be interesting, we'll also stop rendering content into our main window.

void PaintContent(HWND hwnd, PAINTSTRUCT *pps)
{
  // do nothing
}

Run this program and observe that the window comes up blank. Ah, but if you hover over the taskbar button, the custom thumbnail will appear, and that custom thumbnail has the number 0 in it. Click on the button in the thumbnail, and the number in the custom thumbnail increments. As a bonus, move the mouse over the thumbnail to trigger Aero Peek. The live preview bitmap contains the magic number! Move the mouse away, and the magic number vanishes.
Now, this was an artificial example, so the effect is kind of weird. However, you can imagine using this in less artificial cases where the result is useful. Your application might be a game, and instead of using the default thumbnail which shows a miniature copy of the game window, you can have your thumbnail be a tiny scoreboard or focus on a section of the board. For example, if your application is a card game, the thumbnail might show just the cards in your hand.

I can't think of a useful case for showing a live preview bitmap different from the actual window. The intended use for a custom live preview bitmap is for applications like Web browsers which want to minimize a tab's memory usage when it is not active. When a tab becomes inactive, the browser can destroy all graphics resources except for a bitmap containing the last-known-valid contents of the window, and use that bitmap for the thumbnail and live preview.

36 Comments

Many years ago, I wrote, "Do not write in-process shell extensions in managed code." Since I originally wrote that article, version 4 of the .NET Framework was released, and one of the features of that version is that it supports in-process side-by-side runtimes. Does that mean that it's now okay to write shell extensions in managed code?

The answer is still no. The guidance for implementing in-process extensions has been revised, and it continues to recommend against writing shell extensions and Internet Explorer extensions (and other types of in-process extensions) in managed code, even if you're using version 4 or higher. Although version 4 addresses the side-by-side issue, it is still the case that the .NET Framework is a high-impact runtime, and there are various parts of COM interop in the .NET Framework that are not suitable for use in an extension model designed around native code.

Note that managed code remains acceptable for out-of-process extensions.
4 Comments

When you associate a file handle with an I/O completion port with the CreateIoCompletionPort function, you can pass an arbitrary pointer-sized integer called the CompletionKey which will be returned by the GetQueuedCompletionStatus function for every I/O that completes against that file handle. But isn't that parameter superfluous? If somebody wanted to associate additional data with a file handle, they could just extend the OVERLAPPED structure to contain that additional data.

Yes, they could, so in a purely information-theoretical sense, the parameter is superfluous. And heated seats in your car are superfluous, too. But they sure are nice!

From a purely information-theoretical point of view, a lot of things are superfluous. The GWLP_USERDATA window bytes are not necessary, because you could just put the information in the window extra bytes. And window extra bytes are superfluous, since you could have just put them in properties. And properties are superfluous, since you could just have a hash table which maps window handles to "all the other data I want to associate with this window handle." But it's nice to have GWLP_USERDATA.

I find it interesting that people complain when Windows does not provide a convenience for some operation, and here's a case where it provides one, and then people complain that it's wasteful!

It can be nice to have some information associated with the file handle (to record some general information, like the overall operation this file is responsible for) and different information associated with the I/O request (to record some specific information, like which phase of the operation most recently completed). That way, you don't have to try to pack the two pieces of information into a single location.

A more practical reason is that you may not be able to pass the extended OVERLAPPED through to the ReadFile.
For example, you may be calling another function which will in turn issue the ReadFile, and that other function builds its own OVERLAPPED structure rather than letting you pass one in. In that case, you will thank your lucky stars that there's some redundant data elsewhere that will let you recover your state.

29 Comments

As every computer scientist knows, the IEEE floating point format reserves a number of representations for infinity and non-numeric values (collectively known as NaN, short for not a number). If you try to print one of these special values with the Visual C runtime library, you will get a corresponding special result:

│ Output │ Meaning │
│1.#INF │Positive infinity │
│-1.#INF │Negative infinity │
│1.#SNAN │Positive signaling NaN │
│-1.#SNAN │Negative signaling NaN │
│1.#QNAN │Positive quiet NaN │
│-1.#QNAN │Negative quiet NaN │
│1.#IND │Positive indefinite NaN │
│-1.#IND │Negative indefinite NaN │

Positive and negative infinity are generated by arithmetic overflow, or when the mathematical result of an operation is infinite, such as taking the logarithm of positive zero. (Don't forget that IEEE floating point supports both positive and negative zero.) For math nerds: IEEE arithmetic uses affine infinity, not projective, so there is no point at infinity.

Signaling and quiet NaNs are not normally generated by computations (with one exception noted below), but you can explicitly create one for a floating-point type by using the std::numeric_limits<T> class, methods signaling_NaN() and quiet_NaN(). Recall that there is not just one signaling and quiet NaN, but rather a whole collection of them. The C runtime does not distinguish among them when printing, however. All signaling NaNs are reported as 1.#SNAN, regardless of the signal bits. The C runtime does report the sign of the NaN, for what little that is worth.

The weird one is the indefinite NaN, which is a special type of quiet NaN generated under specific conditions.
If you perform an invalid arithmetic operation, like adding positive infinity and negative infinity, or taking the square root of a negative number, then the IEEE standard requires that the result be a quiet NaN, but it doesn't say which quiet NaN exactly. Different floating point processor manufacturers chose different paths. The term indefinite NaN refers to this special quiet NaN, whatever the processor ends up choosing it to be.

Some floating point processors generate a quiet NaN with the signal bits clear but the sign bit set. Setting the sign bit makes the result negative, so on those processors, you will see the indefinite NaN rendered as a negative indefinite NaN. (The x86 is one of these processors.) Other floating point processors generate a quiet NaN with the signal bits and the sign bit all clear. Clearing the sign bit makes the result positive, so on those processors, you will see the indefinite NaN rendered as a positive indefinite NaN. In practice, the difference is not important, because either way, you have an indefinite NaN.

22 Comments

If you want to figure out some quirks of a calling convention, you can always ask the compiler to do it for you, on the not unreasonable assumption that the compiler understands calling conventions.

"When a __stdcall function returns a large structure by value, there is a hidden first parameter that specifies the address at which the return value should be stored. But if the function is a C++ instance method, then there is also a hidden this parameter. Which goes first, the return value parameter or the this pointer?"

This is another case of You don't need to ask me a question the compiler can answer more accurately.

    struct LargeStructure
    {
        char x[256];
    };

    class Something
    {
    public:
        LargeStructure __stdcall TestMe();
    };

    void foo(Something *something)
    {
        LargeStructure x = something->TestMe();
    }

You could compile this into a program and then look in the debugger, or just ask the compiler to generate an assembly listing.
I prefer the assembly listing, since it saves a few steps, and the compiler provides helpful symbolic names.

    ; LargeStructure x = something->TestMe();
    00015 mov eax, DWORD PTR _something$[ebp]
    00018 lea ecx, DWORD PTR _x$[ebp]
    0001e push ecx
    0001f push eax
    00020 call ?TestMe@Something@@ ; Something::TestMe

We see that the last thing pushed onto the stack (and therefore the top parameter on the stack at the point of the call) is the something parameter, which is the this pointer for the function. Conclusion: The this pointer goes ahead of the output structure pointer.
Advance Functions. Must See!

April 17th 2009, 08:11 AM
Explain what happens to the graph of the function f(x) = log_b(x) when b is equal to 1. If g(x) = bx and h(x) = log_b(x), would the graphs of these functions intersect if b = 1? If so, at what ordered pair?

April 17th 2009, 08:24 AM
Is "b" the base of the logarithm here? Note two things:
(1) $\log_a b = c \implies a^c = b$
(2) $\log_a b = \frac {\log_c b}{\log_c a}$

April 17th 2009, 08:36 AM
Do you mean $log(b\cdot x)$? If so, for b = 1, $log(1 \cdot x) = log(x)$.
If $g(x)=b\cdot x \, \mbox{ and } h(x)=log(b\cdot x)$, then for b = 1, $g(x) = x \mbox{ and } h(x) = log(x)$.
g(x) and h(x) do not intersect; look at the graph.
You really should try to be clearer in your question; as it stands we don't know whether you mean $log_{1}(x) \mbox{ or } log(x)$.
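The change-of-base identity in the second note, $\log_a b = \frac{\log_c b}{\log_c a}$, makes the b = 1 case concrete: $\log_1(x)$ would require dividing by $\log 1 = 0$, so the function is undefined for that base. A quick numerical check (the helper name is mine, not from the thread):

```python
import math

def log_base(b, x):
    # change of base: log_b(x) = log(x) / log(b)
    return math.log(x) / math.log(b)

print(round(log_base(2.0, 8.0), 6))   # 3.0

# For b = 1, log(1) = 0 and the definition divides by zero:
try:
    log_base(1.0, 8.0)
except ZeroDivisionError:
    print("log base 1 is undefined")
```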
Specific Gravity

How to Calculate the Specific Gravity of a Coin [ENG]

Knowing the specific gravity of a coin is useful for determining its gold or silver content; this information can help to uncover some fakes or to study the composition of certain issues. Knowing the weight and volume, it is possible to determine the specific gravity (SG) of the coin; moreover, if the alloy components are known, it is possible to determine the relative weights with good precision. Often the purity of the metal is a good sign of authenticity. Using a simple weight measurement to identify a false coin is not a procedure that can always be trusted, because weight is the physical parameter easiest to measure, and it is therefore reproduced with absolute accuracy by forgers. The experiment exploits the physical principle that a body immersed in a liquid increases the volume of the liquid by an amount equal to the volume of the body. If we know the specific gravity of the liquid, we can determine the volume of the body and therefore its specific gravity. We take our "guinea pig" for this experiment: a Vittorio Emanuele II scudo of 1874. The steps are as follows:
• 1 - calibrate the scale (in this case a scale with 100 g capacity and 0.01 g resolution was used)
• 2 - check the calibration (perfectly ok)
• 3 - first, weigh the coin "dry"; the coin shows no obvious signs of dust, only a slight patina. Measured weight: 24.89 grams.
• 4 - build the suspension system for the coin. In the following pictures you can see the suspension system built with pieces of a wooden fruit box; the wire is an ordinary sewing thread, and it is important that this be as light as possible. I tried to weigh the wire in water, but after carefully dipping a piece of wire 20 times longer than the one we are going to immerse with the coin, I measured no appreciable difference, so the immersed wire contributes a weight difference of less than 5% of 0.01 gram. We can consider its weight negligible.
In the images below you can see the suspension system. At this point we can check the suspension system we built and adjust the length of the wire. We need a small jar of clear plastic (it must be light and transparent). In the two pictures below we can see the coin suspended in the jar and, in the second picture especially, that the coin does not touch any wall of the jar. At this point we can continue with the following steps:
• 5 - add distilled water to the jar (washed and cleaned if possible). It is important to use distilled water because it is a liquid whose specific gravity is known exactly. It is possible to use other liquids, but then the calculations are different.
• 6 - weigh the jar with water, taking care not to exceed the capacity of the digital scale.
• 7 - tare the scale to 0 (figure below)
• 8 - lower the suspension system into the jar. At this stage we must take care that no air bubbles remain under the coin; this can be a bit laborious. Try to submerge the coin slightly tilted, taking care not to touch the water with your fingers (at the risk of ruining the experiment). You can see in the figure below the coin immersed in water and the reading of the weight. The following image is my favourite: as you can see, there are no bubbles under the coin.
Now we're almost there! A bit of math will take us to the end. The extra weight of the water with the coin submerged is 2.41 grams (the weight of the volume of water displaced by the coin and wire). Given the accuracy of the scale, we can assume that this weight is between 2.405 g and 2.415 g. Likewise, we can consider the weight of the coin to be between 24.885 g and 24.895 g; this will be useful to calculate the error.
Note: We are calculating the specific gravity. Since we use distilled water, which has a density of 1 g/cm³, the weight of displaced water in grams equals the coin's volume in cm³; if we use another liquid, the calculations are different.
Well, now we simply compute: coin dry weight / weight of the water having the same volume as the coin = 24.89 / 2.41. Specific gravity of our coin = 10.3278 g/cm³.
Specific gravity of silver 900 (Latin Monetary Union):
Silver: 0.01049 g/mm³
Copper: 0.00893 g/mm³
Silver 900: 0.010334 g/mm³ = 10.334 g/cm³
We have achieved something very precise: percentage difference in the measurement: 0.06%.
We can now estimate the maximum error in the calculation, starting from the known error in the weight measurements: specific gravity minimum 10.3043 and maximum 10.3514, error ± 0.2%.
Considerations on accuracy
The calculation of the specific gravity of a big coin can be carried out, as we have seen, with a digital scale that has an accuracy of 0.01 grams. Such a scale can be used with coins that weigh more than 5 grams. In the table below you can see the errors made when using a digital scale with an accuracy of 0.01 grams on silver coins with a purity of 900, and thus a specific gravity of 10.33 (see next tables):
┃ Weight Coin │ SGmin │ SGmax │ ERR ┃
┃ 24,89 │ 10,304 │ 10,351 │ 0,23% ┃
┃ 12,45 │ 10,281 │ 10,375 │ 0,45% ┃
┃ 6,22 │ 10,235 │ 10,423 │ 0,90% ┃
┃ 3,11 │ 10,143 │ 10,519 │ 1,79% ┃
┃ 2,51 │ 10,099 │ 10,566 │ 2,21% ┃
┃ 1,25 │ 9,880 │ 10,814 │ 4,34% ┃
┃ 0,63 │ 9,466 │ 11,344 │ 8,34% ┃
As you can see, the weighing resolution can change the computed specific gravity by up to nearly 10%! Taking from the table a coin with a weight of 6.22 grams, the resulting purity could range anywhere from 835 to 980: too much. For example, below are the results of some measurements carried out on some Italian 5 lire coins of Vittorio Emanuele III, which have a purity of 835 and a weight of 5 grams. In the chart below, the squared dots are the results of experiments carried out with a 0.01 g digital scale. So if we want to evaluate the specific gravity of a Roman denarius or a gold zecchino, we have to use a digital scale with an accuracy of 0.001 grams.
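The hydrostatic calculation above is a simple ratio, and the generic-liquid version given later in the article (Ps = Pm · Psq / Pq) only rescales it. A sketch with the article's measured values (the function name is illustrative):

```python
def specific_gravity(dry_weight_g, displaced_liquid_g, liquid_sg=1.0):
    """SG = dry weight / weight of displaced liquid, rescaled by the
    liquid's own specific gravity (1.0 for distilled water)."""
    return dry_weight_g * liquid_sg / displaced_liquid_g

# The 1874 scudo: 24.89 g dry, 2.41 g of displaced water.
print(round(specific_gravity(24.89, 2.41), 4))     # 10.3278

# Propagating the 0.01 g scale resolution gives the min/max bounds:
print(round(specific_gravity(24.885, 2.415), 4))   # 10.3043
print(round(specific_gravity(24.895, 2.405), 4))   # 10.3514
```

The same function handles the 95% ethanol case by passing liquid_sg=0.81.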
In the table below are reported the errors we can have with a 0.001 g digital scale on silver coins with a purity of 900/1000, which should therefore have a specific gravity of 10.33 (see next tables):
┃ Weight Coin │ SGmin │ SGmax │ ERR ┃
┃ 24,89 │ 10,3255 │ 10,3302 │ 0,02% ┃
┃ 12,45 │ 10,3231 │ 10,3325 │ 0,05% ┃
┃ 6,22 │ 10,3184 │ 10,3372 │ 0,09% ┃
┃ 3,11 │ 10,3090 │ 10,3466 │ 0,18% ┃
┃ 2,51 │ 10,3045 │ 10,3512 │ 0,23% ┃
┃ 1,25 │ 10,2814 │ 10,3746 │ 0,45% ┃
┃ 0,63 │ 10,2353 │ 10,4218 │ 0,90% ┃
As we can see, we can achieve great accuracy even for "small" coins.
General SG of some alloys
SG of some metals [kg/dm³]:
┃ Platinum │ 21.37 ┃
┃ Silver │ 10.492 (10.40-10.53) ┃
┃ Gold │ 19.32 ┃
┃ Copper │ 8.93 ┃
┃ Lead │ 11.24 ┃
┃ Zinc │ 7.13 (7.04-7.16) ┃
┃ Tin │ 7.29 ┃
┃ Nickel │ 8.02 ┃
┃ Cadmium │ 8.648 ┃
For alloys we have to calculate the percentage by weight of the main metal against the other alloy components. For example, if the SG of a denarius of Titus is found to be 10.4, it means the coin contains about 94% silver and 6% copper and impurities.
SG of some alloys, calculated values (gold with copper, silver with copper, copper with zinc and copper with tin):
┃ │ Purity 100% │ Purity 98% │ Purity 90% │ Purity 83.5% │ Purity 75% │ Purity 50% ┃
┃ Gold │ 19.30 │ 19.09 │ 18.26 │ 17.59 │ 16.71 │ 14.12 ┃
┃ Silver │ 10.49 │ 10.46 │ 10.33 │ 10.23 │ 10.10 │ 9.71 ┃
┃ Copper (+Zinc) │ 8.93 │ 8.89 │ 8.75 │ 8.63 │ 8.48 │ 8.03 ┃
┃ │ │ │ Bronze │ Bronze - asses │ Orichalcum - │ White alloy ┃
┃ Copper (+Tin) │ 8.93 │ 8.93 │ 8.90 │ 8.77 │ Sestertii and dupondii │ used for many fakes ┃
┃ │ │ │ │ │ 8.52 │ 8.11 ┃
Other information can be found in 'The Specific Gravity of the Gold Coins of Aksum' by W. A. Oddy and S. C. Munro-Hay, in 'Metallurgy in Numismatics' by Metcalf and Oddy, 1980. This paper showed gold of 94% = SG of 18.5, 92% = SG 18.15, 91% = SG of 18.01. Also 'The Chemical Composition of Parthian Coins' by E. R. Caley, chapter VII in 'Numismatic Notes and Monographs' #129, ANS, 1955, which has a Table XXIV of silver content related to SG. Some figures from that chart are: 99% silver = SG 10.48, 95% = 10.41, 86% = 10.24, 75% = SG 10.05. Thanks to Marvin of Moneta-L for this data. Please remember that in many gold coins the known gold content of the alloy can be off by ±2.0%.
Possible errors in the experiment
• Weight of the wire: we demonstrated that the weight of the wire gives an error of less than 0.0005 grams.
• Specific gravity of water: the specific gravity of a liquid changes with temperature. The specific gravity of distilled water at 4 degrees is 1.0000 g/cm³; at 20 degrees it is 0.9982 g/cm³. There is therefore an error of about ±0.18% in the calculation, although we can correct for it by calculation.
• For maximum accuracy, the coin should be cleaned and degreased. This is feasible without major problems on gold, but not on silver coins, whose patina could be ruined by cleaning.
• The problem becomes difficult to resolve mathematically in the case of alloys of three or more metals; the only viable solution is the use of XRF spectrometry (X-ray fluorescence), which however indicates the alloy composition only of the most superficial layer of the coin. With an alloy of three metals, if we do not know any of the three ratios, we would have to solve two equations with three unknowns, which is not possible; however, we could make some assumptions and calculate the possible error to see how it affects the result.
For many ancient silver coins, such as antoniniani or Alexander tetradrachms — coins that are not a simple binary alloy — the result would be just a number to be interpreted, usable at most for comparative tests. Also for Republican and Imperial denarii you might sometimes make a wrong measurement.
While ignoring trace elements, one should consider, for example, that many denarii may contain gold in percentages up to 0.7-0.9% and, in other cases, lead above 6%. For example:
denarius A: Ag 90% - Cu 10% = specific gravity 10.334 g/cm³ (theoretical)
denarius B: Ag 80% - Cu 13% - Pb 7% = specific gravity 10.339 g/cm³ (theoretical)
In practice, the results must always be interpreted and not used as verdicts on the coins analysed. If you analyse gold and silver of high title, say above 95%, then the results can be accurate and can give a good indication of the alloy of the coin. As the title of the principal metal decreases, the results become increasingly inaccurate. Coins that have a low gold content would not give great indications from their specific gravity, because we do not know the proportions of the other metals in the alloy. The proposed method is useful for calculating with good accuracy the specific gravity of a coin; it applies to ancient and modern coins, and it is not at all destructive. A really useful tool in the hands of enthusiasts and professionals to get a good indication of the specific gravity of the coin being studied!
Alessandro Attila the collector
Use of another liquid for the measurement
To avoid the generation of bubbles in our weighing experiment, we can use a different liquid instead of demineralized water. We have to take into account the different specific gravity of the new liquid we are going to use.
With the following notations:
Pm = weight of the coin
Vm = volume of the coin = volume of liquid displaced by the body of the coin
Pq = weight of the liquid with the same volume as the coin
Psq = specific gravity of the liquid
in the case of a generic liquid we have:
Psm = Pm / Vm
Weighing the liquid displaced by the body of the coin: Vm = Pq / Psq
Psm = Pm * Psq / Pq
So we can see why, when we use distilled water with a specific gravity equal to 1, the formula becomes Ps = Pm / Pq. If, for example, we use ethanol at 95%, which has a specific gravity of 0.81, the formula becomes: Ps = Pm * 0.81 / Pq.
A different solution is to wet the coin in alcohol; the alcohol adhering to the coin is, we would dare to say, far less than the volume of the thread that suspends the coin. On the other hand, alcohol mixes with water, and only the volume (well, weight) of the coin is measured. One could indeed use ethanol, but that brings some troubles; for instance, it is very sensitive to increases and decreases in temperature. We could also use demineralized water with a drop of surfactant, the best being dioctyl sodium sulfosuccinate (Aerosol OT is a commercial brand), but that is not easy to get. Finally, we can simply add a drop of the dishwashing detergent we generally use.
Use of a little dish for the weighing: we could use a little transparent dish on which to place the coin we are going to weigh. We must always be careful to avoid bubbles between dish and coin; for little coins it can be useful to submerge the dish first and then lower the coin onto it, being careful not to touch the water.
Note on the specific gravity of an alloy: we assume that the alloy composition is given by mass and not by volume. This is not exact, but in the case of two metals the error is negligible. If we assume that a coin is made of two metals, we can calculate the title.
For example: let X be the silver title of a coin made of silver and copper; let P and V be the coin's weight and volume; let Ps be the specific gravity of the coin, measured with our system; Parg and Pram the weights of silver and copper in the coin; and Psarg and Psram the specific gravities of silver and copper. Then
P = Parg + Pram = Psarg * Varg + Psram * Vram = Psarg * V * X + Psram * V * (1-X)
so, considering P/V = Ps,
Psarg * X + Psram * (1-X) = Ps
which with simple algebra gives
X = (Ps - Psram) / (Psarg - Psram)
Attached note: demonstration that from measurements of diameter and thickness we do not get the volume of the coin.
I take one of the scudo coins I have and measure the following data:
Diameter: 37 mm
Thickness: 2.5 mm
The thickness was measured at the edge with a micrometer that reads to a tenth of a millimetre. Unfortunately I cannot measure the thickness at the fields well, but I note that the thickness at the field is close to 2.35 mm.
The surface of the coin is π · D² / 4 = 1075.21 mm²
Volume = 2688.03 mm³
At this point I report the specific gravities of the metals contained in the coin:
Silver: 0.01049 g/mm³
Copper: 0.00893 g/mm³
Please note, however, that the alloy composition is given by mass and not by volume; i.e., in every 100 grams of Ag900, 90 grams are silver and 10 grams copper. If we make the calculation of the specific gravity with volume ratios, we have an approximation, because the assumption that the total volume of the alloy equals the sum of the volumes of the two separate metals is not always true; in this case, however, the error would not be high.
With a bit of calculation, solving two equations with two unknowns (omitted here), our result is:
Volume occupied by silver: 2377.69 mm³
Volume occupied by copper: 310.34 mm³
Multiply each specific gravity by the volume just found (SG × volume occupied by silver = weight of silver):
Silver weight: 24.94 g
Copper weight: 2.77 g
The sum is 27.71 grams.
OK, you may say that the problem is simply the measurement of the thickness of the coin. Probably, but I did the calculation for the correct value: we would have had to measure a thickness of 2.255 mm, which would be too little, even below the thickness at the field. To try again, I repeated this analysis with a 5 Lire 1927 "little eagle" of Italy, and the result is that I would need a thickness of 1.18 mm instead of the correctly measured 1.68 mm (we are really far off here). So this technique is wrong and cannot be used to calculate the volume of a coin.
Alessandro Attila the collector
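The linear two-metal relation used in the article — Ps = Psarg·X + Psram·(1−X), hence X = (Ps − Psram)/(Psarg − Psram) — can be sketched and checked against the article's own figures: the Ag 900 entry of the alloy table and the denarius of Titus example. Function names are illustrative.

```python
SG_SILVER, SG_COPPER = 10.49, 8.93

def alloy_sg(sg_main, sg_other, title):
    """Linear two-metal mix: specific gravity of an alloy whose
    main-metal fraction is `title` (the approximation used above)."""
    return sg_main * title + sg_other * (1.0 - title)

def silver_title(sg_coin, sg_ag=SG_SILVER, sg_cu=SG_COPPER):
    """Invert the mix: X = (Ps - Psram) / (Psarg - Psram)."""
    return (sg_coin - sg_cu) / (sg_ag - sg_cu)

# Ag 900 entry of the alloy table:
print(round(alloy_sg(SG_SILVER, SG_COPPER, 0.90), 2))   # 10.33

# The denarius of Titus, measured SG 10.4:
print(round(silver_title(10.4), 2))   # 0.94 -> about 94% silver
```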
Here's the question you clicked on:

i have 2 questions of similar type. looking for a permutation and combination method to it. Number of integral solutions of xyz = 30 and xyz = 24? Also explain what if it was positive integral solutions? A start would be good.

30 = 2 * 3 * 5
24 = 2 * 2 * 2 * 3
If that helps?

1 is also there.

thats what i did too. what next?

factors of 30 = 1, 2, 3, 5, 6, 10, 15, 30
Let's fix x = 1 and say x <= y <= z. Now (y,z) can be (1,30), (2,15), (3,10), (5,6). No more.
If we fix x = 2 (x <= y <= z), (y,z) = (3,5). No more.
If we fix x = 3 (x <= y <= z), (y,z) = none. So we come to a halt.
Now note that x, y, z are all symmetrical, so each of these solutions can be permuted in 3! ways, except the special case when two entries are equal, which has only 3!/2! permutations.
Ans = (4 * 3!) + (1 * 3!/2!) = 27
Unless I've done something wrong?
{"url":"http://openstudy.com/updates/511a7d8ce4b06821731a35a9","timestamp":"2014-04-18T00:39:28Z","content_type":null,"content_length":"37655","record_id":"<urn:uuid:2aafdf2e-1754-4b1a-a8ce-212d88b38b88>","cc-path":"CC-MAIN-2014-15/segments/1398223203422.8/warc/CC-MAIN-20140423032003-00171-ip-10-147-4-33.ec2.internal.warc.gz"}
Young’s double slit experiment | Fringe Width | Wavelength | Solved Examples on Wave optics | Intensity Variation on Screen | IIT JEE Preparation | IIT JEE Physics | IIT JEE Wave Optics Study Material A train of plane light waves is incident on a barrier containing two narrow slits separated by a distance’d’. The widths of the slits are small compared with wavelength of the light used, so that interference occurs in the region where the light from S[1] overlaps that from S[2]. A series of alternately bright and dark bands can be observed on a screen placed in this region of overlap. The variation in light intensity along the screen near the centre O shown in the figure Now consider a point P on the screen. The phase difference between the waves at P is θ, where θ= 2π/λ ΔP[o] (where ΔP[o] is optical path difference, ΔP[o]=ΔP[g]; ΔP[g ] being the geometrical path difference.) = 2π/λ [ S[2]P - S[1]P ] (here λ = 1 in air) As As, D >> d, S[2]P - S[1]P ≈ λ d sinθ sin θλ ≈ tanθ( = y/D). [for very small θ] Thus, θ = 2π/λ (dy/D) For constructive interference, θ = 2nλ (n = 0, 1, 2...) ⇒ 2π/λ (dy/D) = 2nπ ⇒ y = n λD/d Similarly for destructive interference, y = (2n - 1) λD/2d (n = 1, 2 ...) Fringe Width W It is the separation of two consecutive maxima or two consecutive minima. Near the centre O [where θ is very small], W = y[n+1] – y[n] [y[n] gives the position of nth maxima on screen] = λD/d Intensity Variation on Screen If A and I[o] represent amplitude of each wave and the associated intensity on screen, then, the resultant intensity at a point on the screen corresponding to the angular position θ as in above figure, is given by I = I[o]­ + I[o] + 2√I[o]^2 cosθ, When θ = 2π(dsinθ)/ λ = 4I[o] cos^2 Φ/2 Illustration 1: A beam of light consisting of two wavelengths 6500 ^oA and 5200 ^oA is used to obtain interference fringes in YDE. The distance between the slits is 2.0 mm and the distance between the plane of the slits and the screen is 120 cm. 
(a) Find the distance of the third bright fringe on the screen from the central maxima for the wavelength 6500 ^oA. (b) What is the least distance from the central maxima where the bright fringes due to both the wavelengths coincide? (i) y[3] = n. Dλ/d = 3 x 1.2m x 6500 x 10^-10m / 2 x 10^-3m = 0.12cm Let nth maxima of light with wavelength 6500 Å coincides with that of m^th maxima of 5200Å. (ii) m x 6500A^o x D/d = n x 5200A^o x D/d ⇒ m/n = 5200/6500 = 4/5 Least distance = y[4] = 4.D (6500A^o)/d = 4 x 6500 x 10^-10 x 1.2/ 2 x 10^-3m = 0.16cm
The kinetics of lactate production and removal during whole-body exercise

Based on a literature review, the current study aimed to construct mathematical models of lactate production and removal in both muscles and blood during steady state and at varying intensities during whole-body exercise. In order to experimentally test the models in dynamic situations, a cross-country skier performed laboratory tests while treadmill roller skiing, during which work rate, aerobic power and blood lactate concentration were measured. A two-compartment simulation model for blood lactate production and removal was constructed. The simulated and experimental data differed less than 0.5 mmol/L both during steady state and varying sub-maximal intensities. However, the simulation model for lactate removal after high exercise intensities seems to require further examination. Overall, the simulation models of lactate production and removal provide useful insight into the parameters that affect blood lactate response, and specifically how blood lactate concentration during practical training and testing in dynamical situations should be interpreted.

Aerobic power; Anaerobic power; Blood lactate; Cross-country skiing; Muscle lactate; pH

The metabolic power in humans is based on the production and consumption of adenosine triphosphate (ATP). Despite the approximately 100-fold increase in ATP utilization from rest to maximal-intensity exercise, the energetic demands of the muscles are usually satisfied without depleting the intracellular ATP, e.g., [1-3]. In this connection, three sources for ATP synthesis are available. First, ATP can be produced aerobically in the mitochondria by oxidative phosphorylation. Second, ATP can be produced by anaerobic synthesis due to glycolysis or glycogenolysis. Finally, ATP can be produced by phosphocreatine (PCr) breakdown to creatine (Cr) (i.e., ADP + PCr gives ATP + Cr in the creatine kinase (CK) reaction), e.g., [1-3].
The rate of oxygen (O[2]) consumption can be set to the sum of 1) a constant rate (resting O[2] consumption), 2) a rate due to unloaded body movements and 3) a rate proportional to the aerobic energy used to perform work. For moderate constant work rates, the aerobic power increases towards a steady state condition. The concept of maximal lactate steady state (MLSS), that is, the highest intensity at which a steady state lactate can be obtained, has been regarded as important for endurance performance, e.g., [4-6]. For exercise intensities above MLSS, associated with sustained acidosis, a slow component delays the attainment of a steady state value and causes O[2] uptake to increase to values greater than those predicted from aerobic steady state demands. For exercise intensities exceeding the maximal oxygen uptake, the steady state corresponds to the level that would be attained if it were possible to carry out the exercise under purely aerobic conditions [7]. Obviously, this virtual steady state is never reached, as the increase in oxygen uptake ends when the maximal oxygen uptake is achieved. When the rate of ATP production by oxidative sources becomes insufficient, high rates of glycolytic or glycogenolytic ATP production are required. The endpoint of glycolysis is pyruvate, a metabolite that can be reduced to lactate or oxidized to CO[2] and H[2]O. Thus, at increasing exercise intensities, the working muscles and various tissues produce more lactate and release it into the plasma. At the same time, the skeletal muscles, the heart, the liver and the kidney cortex remove lactate from the circulation, and lactate is suggested to act as an intermediate for the shuttling of carbohydrate from cells and tissues with relatively low oxidative capacity to cells and tissues with high oxidative capacity [8-11]. Thus, it is well established that the blood lactate concentration is the result of the production and the removal of lactate in the blood.
During steady state sub-maximal exercise, when lactate production (influx) equals lactate removal (outflux), the lactate concentration in the lactate pool stays constant and the rate of oxygen consumption is the measure of the whole body energy expenditure regardless of the magnitude of lactate production and removal or the absolute blood lactate concentration. At exercise intensities above steady state, a rise in the concentration could be attributed to an increase in the rate of lactate production or result from a decrease in the rate of lactate removal. Lactate itself does not lead to muscle fatigue at high exercise intensities, and fatigue is most likely a result of decreased muscle pH and the associated reactions e.g., [12,13]. For the skeletal muscles, the pH is normally around 7.1 but can fall to 6.4 during heavy exercise [14], and numerous experiments suggest a negative relationship between decreased pH and the muscle contractile function [15-20]. In the capillary blood pH is normally (in steady state conditions) around 7.45, but can fall to around 7.05 during heavy exercise [21]. Previous researchers have found a close relationship with a time delay between lactate concentrations in muscle and blood [22]. Because muscle groups often work unequally, heterogeneity with regard to lactate concentration is likely in muscles during dynamic exercise. A rigorous estimation would require application of several compartments with production, removal and exchange between compartments. As an approximation, a two-compartment model can be applied, where the muscles and other organs that can remove lactate (such as the heart, the liver and the kidney cortex) are regarded as one compartment and the blood space and other tissues are grouped into a second compartment [23-25]. These studies indicate that the flux of lactate into a compartment depends upon the lactate gradient and its permeability. 
Furthermore, blood lactate recovery curves from muscular exercise can be described by a bi-exponential time function and a two-compartment model consisting of the previously worked muscle and the remaining lactate space. The time constants of the bi-exponential time function fitted to the arterial blood lactate recovery curves reflect the ability to exchange lactate between the two compartments and the ability to remove lactate from the total lactate space, including the working muscle compartment [26]. The two time constants are found to decrease with the work rate and duration of the preceding exercise [27,28]. The maximum anaerobic energy that can be utilized is proportional to the sum of Cr and lactate that can be accumulated in the body. PCr is an energy buffer that covers the transient failure of other metabolic pathways to supply ATP. The equilibrium constant of the CK reaction is around 20, and the slightest drop in ATP allows the reaction to proceed toward ATP production. Thus, the ATP concentration stays nearly constant until almost all the PCr is utilized. The PCr levels follow an exponential time course after changes in work rate before approaching a steady state condition at moderate exercise intensities. In such cases, a strong similarity has been reported for the time constants of the O[2] kinetics and the PCr consumption [29]. However, for exercise intensities above the lactate threshold, the anaerobic glycolytic energy supply becomes significant, and the association between PCr and O[2] rate has not yet been systematically reported. During recovery, the level of PCr must be recovered, the pH must be re-established and ADP removed. While the PCr recovery is mainly due to oxidative ATP synthesis, the PCr stores may be rebuilt by anaerobic glycolysis [30-32]. The current study aimed to derive mathematical models for production and removal of lactate in the blood and muscles during dynamic whole-body exercise.
As cross-country skiing is a whole-body exercise where athletes both train and compete on varying terrain and at constantly varying speeds and work rates [33-35], this locomotion was used for that purpose. Thus, the mathematical models were compared with experimental results from the laboratory during treadmill roller skiing. We hypothesized that a mathematical two-compartment model of lactate production and removal would accurately predict blood lactate concentration during steady state and at varying exercise intensities.

Overall design

Initially, the current study derived mathematical simulation models of lactate production and removal in the blood and muscles by utilizing Mathematica 8 (Wolfram Research Inc., Champaign, IL, USA). Thereafter, the simulations were compared with experimental data from an elite skier performing laboratory tests while treadmill roller ski skating, both during steady state and at varying exercise intensities (see details below).

Steady state aerobic power

Work rate W on the treadmill is W = μmgCos(α)v + mgSin(α)v, where v is the treadmill velocity, μ the coefficient of friction, m the mass of the skier, g the acceleration of gravity and α ≈ Sin(α) the treadmill incline in radians. The power due to the change of kinetic energy is zero on the treadmill since the velocity is constant. μmgCos(α)v is the power of roller friction, and mgSin(α)v is the power of gravity due to the inclination of the treadmill. We define Q[max] to be the maximal aerobic power and the steady state aerobic power to be the value found while treadmill roller skiing. The steady state aerobic power is taken as the Min (minimum function) of the virtual steady state aerobic power and Q[max]. The Min function is employed to ensure that the aerobic power does not exceed the maximum aerobic power.
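The work-rate expression described above (friction power plus gravity power) can be evaluated directly. A minimal Python sketch, using the skier's mass and friction coefficient from the Methods; the function name and sample speeds are illustrative:

```python
import math

m = 77.5    # mass of the skier (kg)
g = 9.82    # acceleration of gravity (m/s^2)
mu = 0.024  # coefficient of roller friction

def work_rate(v, alpha):
    """W = mu*m*g*Cos(alpha)*v + m*g*Sin(alpha)*v  (friction + gravity power, J/s)."""
    return m * g * v * (mu * math.cos(alpha) + math.sin(alpha))

# e.g. work rate at 3.0 m/s for the two inclines used in the tests
w_low = work_rate(3.0, 0.05)
w_high = work_rate(3.0, 0.12)
```

At a fixed speed the steeper incline roughly doubles the work rate, since the gravity term dominates the small friction term.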
Below Q[max] the virtual steady state power is found to be linear with work rate for a given cycle rate and incline, and as a hypothesis we apply this linearity also for metabolic powers above Q[max], giving the virtual steady state power as a linear function of work rate. Q[b] is the metabolic power at rest (set to 80 J/s), and Q[un] is the metabolic power of unloaded movements (zero work rate), which depends on the cycle rate. We define the cycle rate to be constant in this article; thus Q[un] = 111 J/s here. Thereafter, we let Q[a] be the aerobic power and Q̃[a] the virtual aerobic power. A first order differential equation of aerobic power, with the virtual aerobic power as input, was used to account mathematically for the delay in aerobic power during steady state work rate:

τ Q̇[a](t) = Q̃[a](t) - Q[a](t)

The "dot" means time derivative, and τ is a time parameter quantifying the time before the aerobic power reaches steady state during sub-maximal work rates. We use τ = 30 s according to di Prampero [36]. Figure 1 shows the steady state aerobic power as a function of the work rate for the G3 skiing technique.

Figure 1. The aerobic power Q(W, α) as a function of work rate (W) for inclines of α = 0.05 (upper line) and α = 0.12 (lower line) during treadmill roller skiing using the skating technique. The curve fittings are based on a least square fit to the data. Maximal metabolic rate (Q[max] = 1886 J/s) is represented by a straight horizontal line. The lactate threshold (Q[LT] = 1755 J/s) is represented by the straight dotted horizontal line. ■: Experimental values for α = 0.05, ▲: Experimental values for α = 0.12.

Anaerobic power and lactate concentration in blood and muscle

Lactate concentration in the lactate pool (C(t)), i.e. the mass of lactate per unit volume of this pool, increases only if the rate of lactate appearance (influx) in the lactate pool is larger than the rate of lactate disappearance (outflux).
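The first-order lag described above can be illustrated with a simple Euler integration. This is a minimal Python sketch, not the paper's Mathematica code; the discretization, step size and function name are mine:

```python
def simulate_aerobic_power(q_virtual, q0=80.0, tau=30.0, dt=1.0):
    """Euler integration of tau * dQa/dt = Q_virtual(t) - Qa(t).

    q_virtual: sequence of virtual aerobic power values (J/s), one per time step.
    q0: initial aerobic power, here the resting power Q_b = 80 J/s.
    Returns the simulated aerobic power at each step.
    """
    qa, out = q0, []
    for qv in q_virtual:
        qa += dt * (qv - qa) / tau  # exponential approach with time constant tau
        out.append(qa)
    return out

# step change in demand: Qa relaxes toward the new level with tau = 30 s
trace = simulate_aerobic_power([1000.0] * 300)
```

After a step change the gap to the input decays by a factor e every 30 s, which is the delay visible when measured VO2 lags a change in work rate.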
The current study uses a modified version of Brooks [8] and determines the levels of lactate pool concentration by modeling both the influx and outflux streams of lactate, simulating the blood lactate concentration according to Moxnes and Hausken [37]. Here two pyruvate molecules are produced for each glucose or glycogen molecule during glycolysis/glycogenolysis. One molecule of pyruvate gives one molecule of lactate. The increase in glycogenolysis/glycolysis, due to increased exercise intensity, ends when a maximal rate of glycogenolysis/glycolysis is achieved [37]. Therefore, the rate of pyruvate appearance (P) has a least upper bound, which is denoted as P[max]. By neglecting rates due to changing pyruvate concentration in the plasma, the rate of lactate appearance in mmol/L is R[a] = P. However, as pyruvate can be oxidized in the mitochondria, we set the influx of pyruvate into the mitochondria to be α[0]Tanh(β[0]P)/β[0], α[0] < 1, where α[0] and β[0] are two parameters fitted to the data and Tanh() is a function that accounts for saturation at high lactate concentrations according to Moxnes and Hausken [37]. Thus, the rate of lactate appearance is R[a] = P - α[0]Tanh(β[0]P)/β[0], where we expect that α[0] is around 1. During severe exercise, glycogen re-synthesis by the liver is severely depressed. As a hypothesis, we forecast that the rate of lactate disappearance due to both glycogen re-synthesis and lactate oxidation is R[d] = d[0] × (Tanh(χC)/χ) × D(Q[a]) × (Q[max] - Q[a]), where d[0] and χ are two parameters that are fitted to the data. D(Q[a](t)) is an unknown function that is monotonically increasing with the aerobic power. When aerobic power equals the maximum aerobic power (i.e. Q[a] = Q[max]) no lactate disappearance takes place. Altogether, for a one compartment model of lactate we obtain

Ċ(t) = R[a] - R[d]     (3)

In equation (3) we need a model for the rate of pyruvate appearance (P).
To our knowledge, no such model exists in the literature, and we hypothesize that this function is linear with work rate up to P[max], which is the least upper bound. This simple assumption might constitute a potential weakness in our argument. However, given experimental support, this assumption also seems the most reasonable one. Thus, we define a virtual steady state rate of pyruvate appearance (P̃) analogous to the virtual aerobic steady state and hypothesize that

τ[an]Ṗ(t) = P̃(t) - P(t)     (4)

where τ[an] is the time constant for full activation of glycolysis/glycogenolysis during muscle contractions, set to 10 s [38]. P̃ is the steady state rate of pyruvate appearance. Note that the model in equation (3) applies only for the chosen type of exercise and a fixed concentration of glycogen in the body. For aerobic power we assume a linear function of work rate and forecast that the steady state rate of pyruvate appearance is approximately proportional to the virtual steady state power, leading to P̃ = p[0]Q̃, where p[0] is a constant of proportionality and Q̃ the virtual steady state power. Due to the rather fast response time in equation (4), an approximation is that P ≈ P̃. Finally, a model for D(Q[a]) is needed. As a hypothesis we set D(Q[a]) = Q[a]; the constant of proportionality can be scaled by d[0]. For equation (3) this gives

Ċ(t) = p[0]Q̃(t) - α[0]Tanh(β[0]p[0]Q̃(t))/β[0] - d[0](Tanh(χC(t))/χ)Q[a](t)(Q[max] - Q[a](t))

A general solution for constant aerobic power is feasible [37]. Altogether, the steady state lactate concentration C̄ for constant aerobic power at intensities below the MLSS follows from equations (5)-(7) as

C̄ = (1/χ)ArcTanh(χ[p[0]Q[a] - α[0]Tanh(β[0]p[0]Q[a])/β[0]] / [d[0]Q[a](Q[max] - Q[a])])     (8)

The maximum aerobic power that can be used for a steady state concentration is the power at which the argument of the ArcTanh equals one (the MLSS). According to equation (8) all steady state lactate concentrations can be achieved for steady state aerobic powers below the MLSS. Furthermore, the steady state lactate concentration approaches infinity when the aerobic power approaches the MLSS power. However, it has been discovered that not all steady state lactate levels are tolerated over time. This means that exercise at blood lactate levels above a certain level can be terminated before steady state is reached.
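Under the stated hypotheses (P proportional to power, D(Q[a]) taken proportional to Q[a]), equation (3) can be integrated numerically to its steady state. The following Python sketch uses the parameter values from the paper's nomenclature list; the Euler scheme, function names and the kg/m^3-to-mmol/L conversion (0.09 kg/m^3 per mmol/L, from lactate's molar mass of about 90 g/mol) are my own, so the resulting concentrations are only indicative:

```python
import math

# Parameter values from the paper's nomenclature list.
Q_MAX = 1886.0                     # maximal aerobic power (J/s)
P0 = 1e-5                          # pyruvate appearance per unit power, kg/(m^3 s)/(J/s)
ALPHA0 = 0.9
BETA0 = 1.0 / (0.6 * P0 * Q_MAX)
D0 = 7.24e-8                       # lactate disappearance parameter
CHI = 0.95                         # saturation parameter (m^3/kg)

def rate_appearance(qa):
    """R_a = P - alpha0*Tanh(beta0*P)/beta0, with P = p0*Q (steady state)."""
    p = P0 * qa
    return p - ALPHA0 * math.tanh(BETA0 * p) / BETA0

def rate_disappearance(c, qa):
    """R_d = d0*(Tanh(chi*C)/chi)*Qa*(Qmax - Qa), with D(Qa) = Qa assumed."""
    return D0 * (math.tanh(CHI * c) / CHI) * qa * (Q_MAX - qa)

def steady_state_lactate(qa, c0=0.045, dt=1.0, t_end=3600.0):
    """Euler-integrate dC/dt = R_a - R_d (eq. 3) at constant power; returns mmol/L."""
    c = c0  # initial concentration in kg/m^3 (0.045 kg/m^3 = 0.5 mmol/L)
    for _ in range(int(t_end / dt)):
        c += dt * (rate_appearance(qa) - rate_disappearance(c, qa))
    return c / 0.09  # kg/m^3 -> mmol/L
```

With these values the simulated steady state rises steeply between a moderate power (around 1.5-2 mmol/L at 1500 J/s) and the lactate threshold power of 1755 J/s, consistent with the qualitative behaviour described in the text.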
The anaerobic power due to anaerobic glycogenolysis/glycolysis can be calculated from the increase in lactate concentration using the relation from di Prampero and Ferretti [7]:

Q[an](t) = λmĊ(t)

where λ is the energetic equivalent of lactate accumulation and m the body mass. Blood lactate concentration usually continues to rise for a short period of time after exercise. The model in equation (6) does not capture this phenomenon since we only considered one compartment, the lactate pool. In the next step we consider the different compartments involved: working muscles, blood and other tissues such as the liver, kidney and heart. For exercise powers significantly above resting values, we regard a two-compartment model as sufficient: a) the blood compartment and b) muscles and other tissues as one compartment (denoted muscles in the rest of the current manuscript). C[b](t) is the lactate concentration in the blood and C[m](t) the lactate concentration in the muscles. V[b](t) and V[m](t) are the volumes of blood and muscles, respectively. The total lactate pool volume is V = V[b](t) + V[m](t), set to 0.18 L per kg body mass as an approximation for the skier modelled here [39]. The muscle mass is set to 10 kg, based on an iDexa scan of the skier, and the muscle volume to 10 L. We assume that lactate moves between the muscle and blood compartments with some time dynamics. Thus, we propose the following model for the muscle (C[m]) and blood (C[b]) concentrations of lactate:

V[m]Ċ[m](t) = V(R[a] - R[d]) - K[1a](C[m](t) - C[b](t))
V[b]Ċ[b](t) = K[1a](C[m](t) - C[b](t))

K[1a] is a parametric function that scales the movement of lactate into and out of the blood. The rate of lactate appearance in the blood is set proportional to the difference in lactate concentration between blood and muscles. To account for different rates of transport into or out of the blood we let K[1a] be dependent on C[m](t) - C[b](t). Estimates of these parameters were found by visual curve fitting. Visual curve fitting gives plausible values for the parameters based on plotting of experimental data that is compared with simulations.
The parameters were sought to have biologically trustworthy numerical values, and a least square fit of the data was performed to produce best fit estimates. The parameter values employed here are given in the nomenclature list.

Experimental tests

The derived mathematical simulations are compared with experimental data from an elite cross-country skier while roller skiing on a treadmill employing the skating G3 technique. The mass of the skier was m = 77.5 kg and the friction coefficient on the treadmill was μ = 0.024 in all tests. Equipment and procedures were similar to the studies by Sandbakk, Holmberg, Leirdal and Ettema [40,41]. All treadmill tests were performed on a 6 × 3 m motor-driven treadmill (Bonte Technology, Zwolle, The Netherlands). Inclination and speed were calibrated using the Qualisys Pro Reflex system and the Qualisys Track Manager software (Qualisys AB, Gothenburg, Sweden). The treadmill belt consisted of a non-slip rubber surface that allowed the skier to use his own poles (pole length: 90% of body height) with special carbide tips. The skier used a pair of Swenor skating roller skis with standard wheels (Swenor Roller skis, Troesken, Norway) and the Rottefella binding system (Rottefella AS, Klokkartstua, Norway), and the roller skis were pre-warmed before each test through 20 min of roller skiing on the treadmill. The roller skis were tested for rolling friction force (F[f]) before the test period, and the friction coefficient (μ) was determined by dividing F[f] by the normal force (N) (μ = F[f] · N^-1). This was performed in a towing test on three subjects (70, 80 and 90 kg) rolling passively at 3.9, 4.4 and 5.0 m/s for 5 min on a flat treadmill (0%) whilst connected to a strain gauge force transducer (S-type 9363, Revere Transducers Europe, Breda, The Netherlands). The measured μ was independent of speed and body mass, and the mean μ-value (0.0237) was incorporated in the work rate calculations.
Gas exchange values were measured by open-circuit indirect calorimetry using an Oxycon Pro apparatus (Jaeger GmbH, Hoechberg, Germany). Before each measurement, the VO[2] and VCO[2] gas analyzers were calibrated using high-precision gases (16.00 ± 0.04% O[2] and 5.00 ± 0.1% CO[2], Riessner-Gase GmbH & co, Lichtenfels, Germany), and the inspiratory flow meter was calibrated with a 3 L volume syringe (Hans Rudolph Inc., Kansas City, MO). Heart rate (HR) was measured with a heart rate monitor (Polar S610, Polar Electro OY, Kempele, Finland), using a 5-s interval for data storage. Blood lactate concentration (BLa) was measured on 5 μL samples taken from the fingertip by a Lactate Pro LT-1710 (ArkRay Inc, Kyoto, Japan). In a first test, the skier performed 5-min constant work rates at 0.05 and 0.12 inclines in radians when treadmill roller skiing in the skating G3 technique. Gas exchange values were determined by the average of the last minute during each stage. The lactate threshold was defined as the metabolic power at which blood lactate began to accumulate (OBLA, defined as a concentration of 4 mmol/L, calculated as a linearly interpolated point out of the three measurement points of blood lactate concentration at the incline of 0.05). Maximal metabolic power was tested at an incline of 0.05 in the G3 technique with a starting speed of 4.4 m/s. The speed was increased by 0.3 m/s every minute until exhaustion. VO[2] was measured continuously, and the average of the three highest 10-s consecutive measurements determined VO[2]max and was used to calculate the maximal metabolic power. The test was considered to be a maximal effort if the following three criteria were met: 1) a plateau in VO[2] was obtained with increasing exercise intensity, 2) respiratory exchange ratio above 1.10, and 3) blood lactate concentration exceeding 8 mmol/L. Thereafter, the velocity and the angle of incline on the treadmill were varied according to Figure 2 with gas exchange measured continuously.
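The OBLA determination described above is a linear interpolation between measured (power, lactate) points. A minimal Python sketch of that procedure; the function name and the sample data are hypothetical, not values from the study:

```python
def obla_power(powers, lactates, threshold=4.0):
    """Linearly interpolate the metabolic power at which blood lactate crosses
    `threshold` mmol/L, from measured (power, lactate) pairs in ascending order.
    Returns None if the threshold is not crossed within the data."""
    points = list(zip(powers, lactates))
    for (p1, l1), (p2, l2) in zip(points, points[1:]):
        if l1 <= threshold <= l2:
            # linear interpolation between the two bracketing measurements
            return p1 + (threshold - l1) * (p2 - p1) / (l2 - l1)
    return None

# illustrative data only: three sub-maximal stages
lt = obla_power([1400.0, 1600.0, 1800.0], [2.0, 3.0, 5.0])  # -> 1700.0
```

With only three measurement points, as in the protocol above, the estimate is sensitive to the placement of the stages around the threshold.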
Immediately after finishing the protocol in Figure 2, the skier had a recovery period while skiing at 0.05 incline at 2.2 m/s, inducing a work rate of 125 J/s and an aerobic power of approximately 900 J/s.

Figure 2. Velocity and incline as a function of time while treadmill roller skiing using the skating G3 technique. ___: The skier's velocity (v) in 10^2 m/s as a function of time (t) in seconds. ....: The treadmill's incline (α) in radians as a function of time (t) in seconds.

The simulated data for steady state concentration of blood lactate during 5-min constant work rates show good agreement with the experimental results, with less than 0.5 mmol/L disagreement (Figures 3 and 4). For the protocol shown in Figure 2, the simulated and experimental data for work rate and metabolic power as functions of time show generally good agreement (Figure 5). However, we see that Q[a](t) is delayed compared to the quasi steady state value. Additionally, Q[a](t) is delayed compared to the experimental data because the measuring apparatus for VO[2] has an inherent time lag (around 15 s) that is not modeled. Based on the same test (i.e., Figure 2), anaerobic power is simulated in Figure 6. Thereafter, simulated and experimental lactate levels as functions of time were compared in Figure 7. There was good agreement between the simulated and experimental results at these varying sub-maximal intensities, showing less than 0.5 mmol/L disagreement. During the recovery period after performing the protocol in Figure 2, the initial simulation model of the concentrations of blood and muscle lactate did not fit well with experimental results. The experimental data showed a much slower reduction in blood lactate concentration compared to the simulated model. Thus, in a second trial we assumed a maximum rate of flux of lactate from blood to muscles.
We achieved this mathematically by replacing C[m](t) - C[b](t) with Max[-0.1, C[m](t) - C[b](t)], which accounts for a restriction on the lactate flux from blood to muscles. This means that the flux of lactate from the blood to the muscles has a least upper bound. The value of -0.1 was the value that fitted the experimental data best by visual inspection. Using this assumption, the simulated and experimental data were in much better agreement (see Figure 7).

Figure 3. Steady state concentrations of blood lactate (C) in mmol/L as a function of the fraction of maximum aerobic power while treadmill roller skiing using the skating G3 technique at a 0.05 incline. ■: Experimental data, ------: Simulated steady state concentration.

Figure 4. The blood and muscle lactate concentrations (C) in mmol/L for a skier while treadmill roller skiing in the skating G3 technique at a 0.05 incline at velocities of 3.89 m/s, 5 m/s and 7.44 m/s, respectively. *: Experimental data, ___: Muscle compartment simulation, --------: Blood compartment simulation.

Figure 5. Calculations of work rate (W) and metabolic powers (Q) as functions of time (t) in J/s for a skier while treadmill roller skiing in the skating G3 technique at the velocities and inclines shown in Figure 2. _____ (upper) and ____ (lower): Work rate (W), - - - - -: Experimental data, .........: Q[a](t).

Figure 6. Simulated anaerobic power (Q) in J/s as a function of time (t) in seconds for a skier while treadmill roller skiing in the skating G3 technique at the velocities and inclines shown in Figure 2 and a subsequent recovery period at 0.05 incline at 2.2 m/s.

Figure 7. Simulated blood and muscle lactate concentrations (C) in mmol/L as a function of time (t) in seconds for a skier while treadmill roller skiing in the skating G3 technique at the velocities and inclines shown in Figure 2 and a subsequent recovery period at 0.05 incline at 2.2 m/s. *: Experimental data, ___: Muscle compartment simulation, -------: Blood compartment simulation.
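The clipped-gradient exchange can be sketched as a single Euler step of only the blood-muscle exchange term, omitting production and removal. This is my own minimal Python illustration of the Max[-0.1, ...] restriction, with names, units (mmol/L) and step size chosen for the example:

```python
def lactate_exchange(cm, cb, k1a=0.05, vm=10.0, vb=4.0, dt=1.0):
    """One Euler step of blood-muscle lactate exchange (concentrations in mmol/L).

    The concentration gradient is clipped at -0.1, so the blood-to-muscle flux
    has a least upper bound, as in the corrected model above.
    """
    grad = max(-0.1, cm - cb)      # clipped gradient, mmol/L
    flux = k1a * grad              # lactate flux into the blood, mmol/s
    cm_new = cm - dt * flux / vm   # muscle compartment (volume vm litres)
    cb_new = cb + dt * flux / vb   # blood compartment (volume vb litres)
    return cm_new, cb_new
```

When muscle lactate exceeds blood lactate the flux scales with the full gradient, but during recovery with blood above muscle the clipped gradient keeps the return flux small, reproducing the slow fall of blood lactate seen experimentally.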
The current study constructed mathematical models of lactate production and removal and compared these with experimental results from treadmill roller skiing. The main findings were as follows: 1) a mathematical two-compartment model of lactate production and removal could accurately predict blood lactate concentration during steady state and at varying exercise intensities; and 2) the understanding of lactate removal after high-intensity exercise during whole-body exercise requires further examination. In the current study, the simulated lactate production and removal fitted the experimental data well during steady state and at varying sub-maximal exercise intensities, indicating an uncertainty of less than ± 0.5 mmol/L using these mathematical models. To the best of our knowledge, the current study is the first to compare a simulation model of blood lactate concentration during dynamic whole-body exercise against experimental data. Overall, we propose that the current simulation models provide useful insight into how blood lactate concentration during practical training and testing in dynamical situations should be interpreted. The initial simulation model of lactate removal after high exercise intensities deviated from the experimental findings. This can be related to a longer time delay between lactate concentrations in muscle and blood than simulated by the model [22] or a slower removal of blood lactate than the bi-exponential time function employed here [26-28]. In any case, the simulation model fitted the experimental data satisfactorily when a maximum rate of lactate flux from blood to muscles was assumed. A rationale for this asymmetry of the time scale for influx and outflux of lactate from the blood may be related to the transport of lactate across membranes, which is accomplished by monocarboxylate transport proteins (MCT). MCTs transport lactate across cell membranes and are driven by concentration gradients [9,13].
However, the physiological roles of the two important isoforms MCT1 and MCT4 differ. MCT1 is mainly located in slow-twitch muscle fibers and is regarded as responsible for the influx of lactate into the high-oxidative muscles, whereas MCT4 is mainly located in fast-twitch muscle fibers and is responsible for the outflux of lactate from cells. The current data indicate that the amount of MCT4 is low in blood cells, which leads to a smaller outflux of lactate from the blood during recovery than the influx of lactate into the blood during high-intensity exercise. Nevertheless, the understanding of lactate removal after high-intensity exercise during whole-body exercise requires further examination. Modeling in human biology is always a challenge, since one is confronted with conceiving a simple, yet realistic representation of complex phenomena occurring at different levels (cells, organs, tissue). The parameters used in the current study will, in principle, depend on the exercise mode employed and the fitness level of the individual tested, and are only applicable for fixed concentrations of glycogen, e.g., [42-44]. Thus, to construct valid simulation models of lactate concentration during dynamic exercise, these need to be developed for each individual.
W, Work rate in J/s; v, Velocity of the treadmill in m/s; α, Angle of inclination in radians; m = 77.5 kg, Mass of the skier; μ = 0.024, Coefficient of roller friction on the treadmill; g = 9.82 m/s^2, Acceleration of gravity; Q[max] = 1886 J/s, Maximal aerobic power; Aerobic power at the lactate threshold relative to the maximal aerobic power; Q[a], Aerobic power in J/s; Steady state aerobic power in J/s; Virtual aerobic power in J/s; Virtual steady state aerobic power in J/s; Q[b] = 80 J/s, Resting aerobic power; Q[a](t[0]) = Q[b], Initial aerobic power; Q[un] = 111 J/s, Aerobic power of unloaded movement of arms and legs; f, Cycle frequency of the skier in the G3 technique in 1/s; Function that describes the influence of incline; c[2] = 5.8 J/s, Parameter that describes the influence of work rate; τ = 30 s, Time parameter quantifying the time before the aerobic power reaches steady state during sub-maximal work rates; C, Lactate concentration in the lactate pool in kg/m^3 or in mmol/L; Steady state lactate concentration in the lactate pool in kg/m^3 or in mmol/L; C(t[0]) = 0.045 kg/m^3 = 0.5 mmol/L, Initial concentration of lactate; R[a], Rate of lactate appearance in kg/(s m^3) or in mmol/(s L); R[d], Rate of lactate disappearance in kg/(s m^3) or in mmol/(s L); P[max], Maximum rate of pyruvate appearance in kg/(s m^3) or in mmol/(s L); P, Rate of pyruvate appearance in kg/(s m^3) or in mmol/(s L); Steady state rate of pyruvate appearance in kg/(s m^3) or in mmol/(s L); Virtual rate of pyruvate appearance in kg/(s m^3) or in mmol/(s L); Virtual steady state pyruvate appearance in kg/(s m^3) or in mmol/(s L); τ[an] = 10 s, Time parameter quantifying the time before the pyruvate appearance reaches steady state during sub-maximal work rates; α[0] = 0.9, Parameter that describes the rate of pyruvate disappearance due to oxidation in the mitochondria; β[0] = 1/(0.6 × p[0] × Q[max]), Parameter that describes the rate of pyruvate disappearance due to oxidation in the mitochondria; p[0] = 10^-5 kg/(m^3 s)/(J/s), Parameter that describes the rate of lactate appearance as a function of aerobic power; D(Q[a](t)), Function that describes the rate of lactate disappearance as a function of aerobic power; d[0] = 7.24 × 10^-8/(J/s)^2/s, Parameter that describes the rate of lactate disappearance as a function of aerobic power; χ = 0.95 m^3/kg, Parameter that describes the saturation of disappearance of lactate; Q[an], Power caused by the use of ATP produced anaerobically from glycolysis or glycogenolysis; λ = 3 × 20 J/(kg mmol/L), Parameter that scales the rate of change in lactate concentration and anaerobic power; V[m] = 10 L, Lactate volume of the muscles; V[b] = 4 L, Lactate volume of the blood; V = V[b] + V[m], Total lactate volume; K[1a] = 0.05 L/s, Parameter that scales the movement of lactate into and out of the blood

Authors' contributions

JM performed the mathematical simulations, whereas ØS performed all laboratory testing. Both authors contributed similarly with important intellectual content and in drafting, revising and finishing the manuscript. All authors read and approved the final manuscript.

1. Schmidt-Nielsen K: Animal Physiology: adaptation and environment. 5th edition. Cambridge: Cambridge University Press; 1997.
2. Margaria R, Cerretelli P, Mangili F: Balance and kinetics of anaerobic energy release during strenuous exercise in man.
3. Heck H, Mader A, Hess G, Mucke S, Muller R, Hollmann W: Justification of the 4-mmol/l lactate threshold. International Journal of Sports Medicine 1985, 6:117-130.
4. Billat VL, Sirvent P, Py G, Koralsztein JP, Mercier J: The concept of maximal lactate steady state. Sports Medicine 2003, 33:407-426.
5. di Prampero PE, Ferretti G: The energetics of anaerobic muscle metabolism: a reappraisal of older and recent concepts. Respir Physiol 1999, 118:103-115.
6.
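The time parameter τ = 30 s in the nomenclature above suggests a first-order (mono-exponential) approach of aerobic power toward its steady-state level. The sketch below illustrates such kinetics, dQ_a/dt = (Q_ss − Q_a)/τ with Q_a(t_0) = Q_b; it is a generic illustration, not the paper's exact model, and the steady-state level Q_ss is an assumed value.

```python
# Hedged sketch: generic first-order (mono-exponential) response of
# aerobic power toward a steady-state value, using the parameter values
# listed above (tau = 30 s, Q_b = 80 J/s). The paper's actual equations
# may differ; Q_ss here is an assumed illustrative steady-state level.
import math

tau = 30.0     # s, time parameter from the nomenclature
Q_b = 80.0     # J/s, resting aerobic power (initial condition)
Q_ss = 1000.0  # J/s, assumed steady-state aerobic power for illustration

def Q_a(t):
    """Aerobic power at time t for dQa/dt = (Q_ss - Q_a)/tau, Q_a(0) = Q_b."""
    return Q_ss + (Q_b - Q_ss) * math.exp(-t / tau)

# After one time constant the response covers ~63% of the gap:
print(round(Q_a(30.0)))   # ~662 J/s
# After five time constants it is essentially at steady state:
print(round(Q_a(150.0)))  # ~994 J/s
```

With τ = 30 s the modeled aerobic power covers about 63% of the gap to steady state after one time constant and is essentially complete after roughly five, which is the sense in which τ "quantifies the time before the aerobic power reaches steady state".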
[plt-scheme] Timings
From: Jens Axel Søgaard (jensaxel at soegaard.net)
Date: Tue May 20 14:56:16 EDT 2003

I am in the process of implementing various data structures. One is ordinary sets represented by sorted lists. Here is the straightforward implementation of union:

(define (union2 s1 s2)
  (cond
    [(null? s1) s2]
    [(null? s2) s1]
    [else (let ([x (car s1)] [y (car s2)])
            (cond
              [(< x y) (cons x (union2 s2 (cdr s1)))]
              [(> x y) (cons y (union2 s1 (cdr s2)))]
              [else    (cons x (union2 (cdr s1) (cdr s2)))]))]))

Then I noticed that in the cases (< x y) and (> x y) we know that one of the lists is non-empty, so it is unnecessary to test for this in the recursive call. This leads to this version:

(define (union s1 s2)
  ; union1 : set set -> set
  ; s1 is non-empty
  (define (union1 s1 s2)
    (cond
      [(null? s2) s1]
      [else (let ([x (car s1)] [y (car s2)])
              (cond
                [(< x y) (cons x (union1 s2 (cdr s1)))]
                [(> x y) (cons y (union1 s1 (cdr s2)))]
                [else    (cons x (union (cdr s1) (cdr s2)))]))]))
  (if (null? s1)
      s2
      (union1 s1 s2)))

To get some timings I use:

(define (interval m n)
  (if (> m n)
      '()
      (cons m (interval (add1 m) n))))

> (let ([s1 (interval 1 100000)]
        [s2 (interval 50000 150000)])
    (time (union2 s1 s2))
    (time (union s1 s2)))
cpu time: 3475 real time: 3575 gc time: 0
cpu time: 4176 real time: 4196 gc time: 0

Then I tried:

> (let ([s1 (interval 1 100000)]
        [s2 (interval 50000 150000)])
    (time (union s1 s2))
    (time (union2 s1 s2)))
cpu time: 3125 real time: 3125 gc time: 0
cpu time: 4246 real time: 4267 gc time: 0

... and now I'm confused. Which union is faster?

Jens Axel Søgaard

Posted on the users mailing list.
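For readers outside Scheme, the same merge-style union of two sorted, duplicate-free lists can be sketched iteratively in Python (an illustration of the algorithm only, not the code being timed above):

```python
def union(s1, s2):
    """Union of two sorted lists of distinct numbers, as a sorted list."""
    out = []
    i = j = 0
    while i < len(s1) and j < len(s2):
        x, y = s1[i], s2[j]
        if x < y:
            out.append(x); i += 1
        elif x > y:
            out.append(y); j += 1
        else:              # equal heads: keep one copy, advance both
            out.append(x); i += 1; j += 1
    out.extend(s1[i:])     # at most one of these tails is non-empty
    out.extend(s2[j:])
    return out

print(union([1, 3, 5], [2, 3, 6]))  # [1, 2, 3, 5, 6]
```

Each element of either list is visited at most once, so the merge runs in O(|s1| + |s2|); both Scheme versions do the same amount of work per element, which suggests the measured differences above stem from run order and runtime effects rather than the algorithms themselves.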
NASA - Resources for the segment: "Comets"

Resources for Educators

Deep Impact Mission Education Site -- Find games and activities to introduce students to comets.
Amazing Space: Comparison Between Asteroids and Comets -- Find information about the differences between asteroids and comets. Download materials to use in your classroom.
NASA eClips™ Our World: More than Just Dirty Snowballs -- These 5-E lessons guide student teams in grades 3-5 to create ice cream comet models to be analyzed by another team of students. They evaluate basic facts about comets to determine how scientists discovered this information. They explore the Stardust Mission and Deep Impact Mission and discuss the scientific contributions of each mission.
Comparing Comets -- In this activity, students compare surface features on the nucleus of two comets; explain some possible causes for differences between the two nuclei; and list questions that they have about the surface of comet nuclei.
Comet on a Stick -- In this activity, students build a model of a comet to study the way the sun affects it; simulate the sun's solar wind as it interacts with the comet; and evaluate the strengths and weaknesses of their comet model.
Stardust NeXT: Education -- This site offers interactive features, educator guides, and classroom activities (including Comet Mystery Boxes and Comet Lingo Bingo), as well as fun facts relating to comets.
SpaceMath @ NASA -- SpaceMath @ NASA introduces students to the use of mathematics in today's scientific discoveries. To access these problems, click on the green registration button on the top right and follow the instructions. We suggest the following problems related to comets:
• Problem 116: The Comet Encke Tail Disruption Event -- In this problem, students analyze an image taken by the STEREO spacecraft to determine the speed of a comet tail disruption event.
• Problem 255: Tempel 1 -- Close-up of a Comet -- Students examine an image of Comet Tempel 1 taken by the Deep Impact spacecraft to determine feature sizes and other details.
• Problem 277: Deep Impact Comet Encounter -- Students learn about the Deep Impact experiment involving Comet Tempel 1 and how the path of an asteroid can be changed by using the Law of Conservation of Momentum.
• Problem 324: Deep Impact Comet Flyby -- Students determine the form of a function that predicts the changing apparent size of the comet as viewed from the spacecraft along its trajectory.
• Problem 340: Computing the Orbit of a Comet -- Students use data from the orbit of Halley's Comet to determine the equation for its elliptical orbit.
• Problem 374: Deep Impact -- Closing In on Comet 103P/Hartley 2 -- Students use the tangent formula to figure out the angular size of the comet at closest approach and the scale of the High Resolution Instrument, or HRI, camera image.
• Problem 377: Deep Impact: Approaching Comet Hartley 2 -- Students use data for the brightness of comet Hartley 2 measured by the Deep Impact spacecraft to create a linear equation for its approach distance and use the inverse-square law to estimate its brightness on Oct. 13, 2010.
• Problem 382: Estimating the Mass and Volume of Comet Hartley 2 -- Students use an image of the nucleus of comet Hartley 2 taken by the Deep Impact/EPOXI camera and a simple geometric "dumbbell" model based on a cylinder and two spheres, to estimate the volume of the comet's nucleus and its total mass.
• Problem 387: A Mathematical Model of Water Loss from Comet Tempel 1 -- Students use data acquired by the Deep Impact spacecraft to create a simple empirical model for predicting the rate of water loss from a comet.

Resources for Students

NASA Stardust Kids and Parents Page -- This site provides activities about comets that students and parents can do at home.
Comet Interactive -- Watch videos and learn about the anatomy and life cycle of comets with this interactive feature.
Amazing Space: Comparison Between Asteroids and Comets -- What is the difference between a comet and an asteroid? Use this table to investigate the differences.
Amazing Space: Comets -- Whip up a batch of comets without trashing the kitchen!
Interactive Comet Animation -- View a comet's orbit using this interactive simulation.
Amazing Space: Comet Facts, Myths and Legends -- Explore facts and fantasies about comets through the ages.
Space Place: What's in the Heart of a Comet? -- Learn about the parts of a comet and see what NASA learned from the Deep Impact Mission to comet Tempel 1.
NASA eClips™ Our World: Stardust -- Learn how NASA's Stardust mission collected a sample of comet dust and brought it back to Earth using a material called aerogel.
Space Place: Ode to Aerogel -- Learn about the unique invention aerogel and how scientists have used this material to capture pieces of a comet.
Space Place: Comet Word Find -- Find the hidden words and solve the puzzle about comets and the Stardust mission.
Space Place: Tails of Wonder -- Test your comet IQ in this interactive quiz.
Space Place: Looking for Water Everywhere -- Use the "I See Ice Viewer" to see some of the places where ice has been found in our solar system.
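Several of the SpaceMath problems listed under Resources for Educators (for example Problems 324, 340 and 374) turn on basic orbital geometry. The sketch below works through that arithmetic for Halley's Comet using commonly quoted orbital values; these figures are assumptions for illustration and are not taken from the SpaceMath materials.

```python
# Hedged sketch: basic orbital arithmetic for Halley's Comet, using
# commonly quoted values (a ~ 17.8 AU, e ~ 0.967). These numbers are
# assumptions for illustration, not data from the SpaceMath problems.
a = 17.8   # semi-major axis, AU
e = 0.967  # eccentricity

perihelion = a * (1 - e)  # closest approach to the Sun, AU
aphelion = a * (1 + e)    # farthest point from the Sun, AU
period = a ** 1.5         # Kepler's third law (Sun orbit, a in AU), years

print(f"perihelion ~ {perihelion:.2f} AU")  # ~0.59 AU, inside Venus' orbit
print(f"aphelion ~ {aphelion:.1f} AU")      # ~35 AU, beyond Neptune
print(f"period ~ {period:.0f} years")       # ~75 years
```

The two distances fix the whole ellipse, and Kepler's third law turns the semi-major axis directly into the roughly 75-year return period that makes Halley's Comet famous.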
Identifying spikes and seasonal components in electricity spot price data: A guide to robust modeling

Janczura, Joanna and Trueck, Stefan and Weron, Rafal and Wolff, Rodney (2012): Identifying spikes and seasonal components in electricity spot price data: A guide to robust modeling.

An important issue in fitting stochastic models to electricity spot prices is the estimation of a component to deal with trends and seasonality in the data. Unfortunately, estimation routines for the long-term and short-term seasonal pattern are usually quite sensitive to extreme observations, known as electricity price spikes. Improved robustness of the model can be achieved by (a) filtering the data with some reasonable procedure for outlier detection, and then (b) using estimation and testing procedures on the filtered data. In this paper we examine the effects of different treatment of extreme observations on model estimation and on determining the number of spikes (outliers). In particular we compare results for the estimation of the seasonal and stochastic components of electricity spot prices using either the original or filtered data. We find significant evidence for a superior estimation of both the seasonal short-term and long-term components when the data have been treated carefully for outliers. Overall, our findings point out the substantial impact the treatment of extreme observations may have on these issues and, therefore, also on the pricing of electricity derivatives like futures and option contracts. An added value of our study is the ranking of different filtering techniques used in the energy economics literature, suggesting which methods could be and which should not be used for spike identification.
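As a concrete illustration of step (a), one simple outlier-detection procedure flags prices that deviate from a local median by more than a few robust standard deviations. The sketch below is a generic filter under assumed parameters (window size, threshold k); it is not one of the specific techniques ranked in the paper.

```python
# Hedged sketch: flag price spikes as observations deviating from a
# rolling median by more than k times the series' robust scale (MAD).
# This generic filter is illustrative only; the paper compares several
# specific techniques from the energy-economics literature.
import statistics

def flag_spikes(prices, window=5, k=3.0):
    """Return indices of suspected spikes in a price series."""
    med_all = statistics.median(prices)
    mad = statistics.median(abs(p - med_all) for p in prices) or 1.0
    spikes = []
    half = window // 2
    for i, p in enumerate(prices):
        lo, hi = max(0, i - half), min(len(prices), i + half + 1)
        local_med = statistics.median(prices[lo:hi])
        if abs(p - local_med) > k * 1.4826 * mad:  # 1.4826 scales MAD to sigma
            spikes.append(i)
    return spikes

prices = [30, 31, 29, 32, 30, 250, 31, 30, 29, 33]  # one obvious spike
print(flag_spikes(prices))  # [5]
```

Variants used in practice then replace the flagged observations, for example by the local median or by similar-day values, before estimating the short-term and long-term seasonal components on the cleaned series.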
Item Type: MPRA Paper
Original Title: Identifying spikes and seasonal components in electricity spot price data: A guide to robust modeling
Language: English
Keywords: Electricity spot price; Outlier treatment; Price spike; Robust modeling; Seasonality
Subjects:
C - Mathematical and Quantitative Methods > C5 - Econometric Modeling > C51 - Model Construction and Estimation
C - Mathematical and Quantitative Methods > C5 - Econometric Modeling > C52 - Model Evaluation, Validation, and Selection
Q - Agricultural and Natural Resource Economics; Environmental and Ecological Economics > Q4 - Energy > Q47 - Energy Forecasting
C - Mathematical and Quantitative Methods > C8 - Data Collection and Data Estimation Methodology; Computer Programs > C80 - General
Item ID: 39277
Depositing User: Rafal Weron
Date Deposited: 06 Jun 2012 14:02
Last Modified: 13 Feb 2013 14:51
URI: http://mpra.ub.uni-muenchen.de/id/eprint/39277