Handbook Of Electronics Formulas Handbook Of Electronics Formulas PDF Sponsored High Speed Downloads PHYS 401 Physics of Ham Radio 26 Basic Electronics Chapter 2, 3A (test T5, T6) Basic Electrical Principles and the Functions of Components Figures in this course book are ALLIED'S ELECTRONICS DATA HANDBOOK Formerly Allied's Radio Handbook A Compilation of Formulas and Data Most Com- monly Used in the Field of Radio and Electronics BASIC ELECTRICAL CIRCUIT FORMULAS. IMPEDANCE : VOLT-AMP EQUATIONS ; CIRCUIT ELEMENT . absolute value: complex form. instantaneous values: RMS values for ... electronics reference, formulas, schematics. L.Rozenblat on Google+. Title: Electrical Engineering Formulas Author: Lazar Rozenblat Subject: ELECTRONICS The Electronics Handbook Electrical Engineering Handbook Series; 2nd Ed. by Whitaker, ... Naldi, C., Analytical formulas for coplanar lines in hybrid and monolithic MICs, Electronics Letters, Vol. 20, No. 4, 179–181, February 1984. 14. o Electronics Handbook Ref TK 7867 .E4244 2005 ... o Handbook of Electronics Tables and Formulas Ref TK 7825 .H378 1986 o Handbook of VLSI Chip Design and Expert Systems Ref TK 7842 .S32 1993 o Standard Handbook for Electrical Engineers Ref TK 151 .S83 Handbook of Electrical Engineering Handbook of Electrical Engineering: For Practitioners in the Oil, Gas and Petrochemical Industry. Alan L. Sheldrake ENGINEERING HANDBOOK Approved for public release: ... EDM Engineering Development Model ET Electronics Technician ... Decibel Formulas (where Z is the general form of R, including inductance and capacitance) When impedances are equal: The Electronics Handbook, Jerry C. Whitaker ... The Handbook of Formulas and Tables for Signal Processing, Alexander D. Poularikas The Industrial Electronics Handbook, J. David Irwin Measurements, Instrumentation, and Sensors Handbook, J. Michael Golio 6x9 Handbook / Electronic Filter Design / Williams & Taylor /147171-5 / Front Matter ABOUT THE AUTHORS ... An EXCEL spreadsheet contains formulas from the individual chapters keyed to the text so that the tedious calculations required in The second part of this handbook has been compiled to provide the technician with a collection of helpful information. ... BASIC ELECTRONICS FORMULAS Basic electronics formulas are included to aid you in solving any electronics problem that you may Radio Electronics Handbook By Boyce, William F and Joseph J Roche Radio Electronics Handbook Details: Electronics Handbook Variable resistors are used in applications such as the volume control on a radio ... Preface I have had many requests to update Transformer and Inductor Design Handbook, because of the way power electronics has changed over the past few years. Clifford, Martin: Master Handbook of Electronic Tables and Formulas, 3rd Ed., Tab, 1980 ... Giacoletto, L. J: Electronics Designers’ Handbook, 2nd Ed., McGraw-Hill, 1977 Harper, Charles A, Ed.: Handbook of Components for Electronics, McGraw-Hill, 1977 MADE EASY team realised that there is a need of good Handbook which can provide the crux of Civil Engineering in a concise form to the student to brush up the formulae and important concepts required for IES, GATE, PSUs and other The Electronics Handbook, Jerry C. Whitaker ... The Handbook of Formulas and Tables for Signal Processing, Alexander D. Poularikas The Industrial Electronics Handbook, J. David Irwin The Measurement, Instrumentation, and Sensors Handbook, John G. Webster ELECTRONICS ILLUSTRATED HAM RADIO HANDBOOK (Pb, 1959) By HERTZBERG, ROBERT W2DJJ ... 
Shop eBay! ... 1956 Allied's Electronics Data Handbook PB Coil Winding Attenuator Formulas VG. Expedited shipping ... Allied's Electronics Data Handbook Allied i CONTENTS Files in Adobe .PDF format are highlighted in blue, and are linked to the appropriate section Section INDE X.....vii power electronics 13.1 section 14. ... explicit butterworth and chebyshev formulas 10.30 the inverse chebyshev approximation 10.33 the elliptic filter 10.34 ... source: standard handbook of electronic engineering. the high-pass filter 10.40 All New Electronics Self-Teaching Guide / H. Kybett. TK 7868 .D5 K92 2008 ... CRC Handbook of Chemistry and Physics. Ref QD 65 .H3 2011 ... Civil Engineering Formulas / T. Hicks. McGraw-Hill. 2010. Digital Circuits / E. Cooney. Global Media, ... The Handbook of Formulas and Tables for Signal Processing, Alexander D. Poularikas The Handbook of Nanoscience, Engineering, and Technology, Second Edition, ... Professor Grigsby is a fellow of the Institute of Electrical and Electronics Engineers (IEEE). The Electrical Engineering Handbook Ed. Richard C. Dorf Boca Raton: ... key formulas, that is, those formulas that are most often needed in the formulation and solution of engineering ... solid-state electronics) that use concepts of modern physics. The world of mathematics, even applied ... The Electronics Handbook, Second Edition, Jerry C. Whitaker The Engineering Handbook, Third Edition, Richard C. Dorf The Handbook of Formulas and Tables for Signal Processing, Alexander D. Poularikas The Handbook of Nanoscience, Engineering, ... The most common references in the world of electronics ar e the milliwatt (mW) and the watt. The abbreviation dBm indicates dB referenced to 1.0 ... Decibel Formulas (where Z is the general form of R, including inductance and capacitance) When impedances are equal: When impedances are ... ARRL HANDBOOK Š For decades, this book has established itself as THE reference book for hams, ... W1FB. One source for all those regularly used electronics tables, charts and formulas. Plus, hundreds of popular circuit diagrams of oscillators, mixers, amplifiers, ... Handbook contains thousands of formulas, conversions, and definitions on computers, air and gases, electronics, tools, plumbing, construction, surveying, trademarks, and more..... 768 pp. ISBN 1-885071-33-7. Shpg wt 1 lb (0.5 kg). Standard ... Power Electronics Notes 30BPower Electronics Notes 30B Inductance Calculation Methods Marc T. Thompson, Ph.D. Thompson Consulting, Inc. 9 Jacob Gates Road9 Jacob Gates Road Physics Handbook Bryn Mawr College 2013-2014 http://www.brynmawr.edu/physics Welcome to the Bryn Mawr Physics Department. Over ... electronics experience, to have two semesters of chemistry, and to have completed a summer research program. The Electronics Handbook, Jerry C. Whitaker ... The Handbook of Formulas and Tables for Signal Processing, Alexander D. Poularikas The Industrial Electronics Handbook, J. David Irwin Measurements, Instrumentation, and Sensors Handbook, John Webster Engineering formulas: conversions, definitions, and tables REF TA151 .S49 1996 - Chemical - Kirk-Othmer concise encyclopedia of chemical technology ... The industrial electronics handbook REF TK7881 .I52 1997 Modern dictionary of ... The lessons referenced here are those of most use to a student of radio electronics. Basic Numbers & Formulas Order of Operations ... Geometric Formulas - www.equationsheet.com/sheets/ Equations-4.html ... 
Handbook Master Tenplate Text: Forrest Mims, Getting Started in Electronics Murray Spiegel, Schaum’s Mathematical Handbook of Formulas and Tables Grading: Quiz #1 10% You might need a calculator for these Quiz #2 10% Quiz #3 10% Quiz #4 10% Homework 15% ... Bosch – Automotive Handbook The world's definitive automotive reference work STAR Group – Your single-source partner for information services & tools Aviation Mechanic Handbook. By Dale Crane. Handy toolbox-size reference for professionals and hobbyists. Nonabrasive, spiral-bound book provides conversions, formulas, densities, solid state electronics, and more. Part No. Description ISBN Pages Handbook of Porphyrin Science represents a ... tables and structural formulas, and thousands of ... to come. Readership: chemists, physicists, material scientists, polymer scientists, spectroscopists, electrochemists, electronics and photonics engineers, biochemists, biophysicists ... HANDBOOK FOURTH EEDITION ELECTRONIC VVERSION 105 Wilbur Place, Bohemia, New York 11716-2482 ... With the advent of digital electronics, it became pos-sible to replace one or more of the synchro (or resolver) components in such a system by an accu- Electrical Engineering and Electronics, TL ... Engineering Formulas REF TA151 .G4713 2006 ... Handbook of Design Components REF TA174.H25 2003 Machinery’s Handbook REF TJ151.M3 O247 2004 ... Vol 4 -Electronics_rev 3 IDC Pocket Guide - Vol 4 -Electronics_rev 3 - Free download as Text file ... Hacking The Ethical Hackers Handbook 3rd Edition Handbook of Electrical Installation ... mechanical engineering formulas FREE Download ... Electronics Technology Handbook Energy Systems Engineering: Evaluation and Implementation ... Forensic Structural Engineering Handbook, Second Edition Formulas for Structural Dynamics: Tables, Graphs and Solutions Foundation Engineering Handbook: ... Electronics- (ebook - PDF) CRC Press - Numerical Techniques in ... Handbook (Stergiopoulos @2001).rar (DSP) CRC Press - Handbook of Formulas and Tables for Signal Processing.rar\ (eBook - CRC Press) Electrical Engineering Handbook (2000, science)(1) 2181431 ARRL Handbook for Radio Communications (2013 edition). ... 2112778 Forrest M. Mims III Engineer’s Guide to Electronics Volume IV: Electronic Formulas, Symbols & Circuits 192 13.95 2112786 Forrest M. Mims III Getting Started in Electronics, ... The Handbook of Formulas and Tables for Signal Processing, Alexander D. Poularikas The Handbook of Nanoscience, Engineering, ... The Industrial Electronics Handbook, J. David Irwin The Measurement, Instrumentation, and Sensors Handbook, John G. Webster Formulas and Resources Douglas Brooks, President ... “Noise Reduction Techniques in Electronics,” Wiley Interscience, 1988, p. 281 3. “MECL System Design Handbook,” Rev. 1, Motorola Semiconductor Products, Inc. 1988, p. 157 4. These formulas are correct for Quattro Pro and for Lotus 1-2-3. Electronics Engineers' Handbook - Detailed articles with diagrams, tables, formulas, etc. in areas such as circuits, materials' properties, microwaves, energy sources, navigation, and many others. Articles have bibliographies. Index at the end. The Electronics Handbook . 2nd ed. (Taylor & Francis, 2005) (Available in ENGnetBASE ) The Encyclopedia of Electronic Circuits . ... The Handbook of Formulas and Tables for Signal Processing (CRC Press, 1999) (Available in ENGnetBASE ) interactive formulas.. Industrial electricity direct-current machines, ... Engineering, 735 pages. . Electronics engineers' handbook , Donald G. 
Fink, Donald Christiansen, Jan 1, 1989, , 26 pages. Very Good,No Highlights or Markup,all pages are intact.. knowledge of metering formulas; knowledge of operating standards for potential ... References for Electrical Metering Theory: Edison Electrical Institute. Handbook for Electricity Metering. Instruments ... Teach Yourself Electricity and Electronics. Basic DC Circuits chapter. McGraw ... Standard Handbook for Electrical Engineers, Roark's Formulas for Stress and Strain, and many more. We offer the widest and deepest repository of engineer- ... Power electronics handbook Call number: r7K7871.15 P887 2002 Electronics, power electronics, optoelectrics, microwaves, electromagnetics, Handbook Second Edition Charlie Wing International Marine / McGraw-Hill ... Installing Electronics 222 ... Color Codes • Useful Electrical Formulas • Identifying Resistors • Identifying Capacitors • Trigonometric Tables ELECTRONICS HANDBOOK SERIES Series Editor: Jerry C. Whitaker Technical Press Morgan Hill, California ... ELECTRONIC SYSTEMS MAINTENANCE HANDBOOK Jerry C. Whitaker FORMULAS FOR THERMAL DESIGN OF ELECTRONIC EQUIPMENT Ralph Remsberg THE RESOURCE HANDBOOK OF ELECTRONICS Jerry C. Whitaker. POWER Dictionary for the Electrician with Formulas / T. Henry. Ref TK 9 .H46 1997 Electric Circuits / J. Nilsson. ... Electricity and Basic Electronics / S. Matt. TK 146 .M376 2009 Electricity and Electronics / H. Gerrish. ... Handbook of Electrical Installation Practice / G. Stokes. TK 3271 .H28 2003
{"url":"http://ebookily.org/pdf/handbook-of-electronics-formulas","timestamp":"2014-04-16T10:11:57Z","content_type":null,"content_length":"42827","record_id":"<urn:uuid:b074a79f-8efb-4ae2-88c7-21b0d805b094>","cc-path":"CC-MAIN-2014-15/segments/1398223203235.2/warc/CC-MAIN-20140423032003-00633-ip-10-147-4-33.ec2.internal.warc.gz"}
38 (number)

This article discusses the number thirty-eight. For the year 38 CE, see the year AD 38. For other uses of 38, see 38 (disambiguation).

38 (thirty-eight) is the natural number following 37 and preceding 39.

In mathematics
38 is the 11th distinct semiprime and the 7th of the form 2×q. It is the initial member of the third distinct semiprime pair (38, 39). 38 has an aliquot sum of 22, which is itself a distinct semiprime. In fact 38 is the first number to be at the head of a chain of four distinct semiprimes in its 8-member aliquot sequence (38, 22, 14, 10, 8, 7, 1, 0). 38 is the 8th member of the 7-aliquot tree.
38! - 1 yields 523022617466601111760007224100074291199999999, which is the 16th factorial prime.
There is no solution to the equation φ(x) = 38, making 38 a nontotient.^[1]
38 is the sum of the squares of the first three primes.
37 and 38 are the first pair of consecutive positive integers not divisible by any of their digits.
38 is the largest even number which cannot be written as the sum of two odd composite numbers.
There are only two normal magic hexagons, order 1 (which is trivial) and order 3. The sum of each row of an order 3 magic hexagon is 38.^[2]

In science

In mythology
• The number 38 was especially prominent in Norse mythology. The number was said to represent unnatural bravery, characteristic of the legendary heroes of Norse sagas. Most legendary sagas were divided into 38 chapters, and the number often recurred throughout stories, with the heroes combating giants or other beasts in groups of 38. The number came to be adopted by the Hardrada clan, and was displayed on their crest in the form of 38 ravens set around 38 outward-facing arrows.^[citation needed]
• The number was also significant in Egyptian mythology, as it was the characteristic number of Anubis, the jackal-headed god of death and mummification. Egyptian pharaohs were often buried with 38 statues of cat guardians, and their sarcophagi were adorned with 38 ankhs.

In other fields
Thirty-eight is also:
• The number of slots on an American Roulette wheel (0, 00, and 1 through 36; European roulette does not use the 00 slot and has only 37 slots)
• The number of games that each team in the current English Premiership, the top division in English Association Football, plays in a season
• Bill C-38 legalized same-sex marriage in Canada
• The number of years it took the Israelites to travel from Kadesh Barnea to the Zered valley in Deuteronomy.
• A "38" is often the name for a snub-nose .38 caliber revolver
• Name of the southern rock band 38 Special
• The 38 class is the most famous class of steam locomotive used in New South Wales
• The number of the French department Isère
• There are 38 surviving plays written by William Shakespeare.
• The gate of the sci-fi TV series Stargate SG-1 can stay open a maximum of 38 minutes.
• In Taiwan and some places in southern mainland China, "3, 8", but not "38", is slang for stupid/idiot, especially women.

Historical years
38 A.D., 38 B.C., 1938 A.D., 2038, etc.
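A quick sanity check of a few of the arithmetic claims above (my own snippet, not part of the article):

```python
def aliquot(n):
    """Sum of the proper divisors of n."""
    return sum(d for d in range(1, n) if n % d == 0)

# The aliquot sequence starting at 38: 38, 22, 14, 10, 8, 7, 1, 0
seq, n = [38], 38
while n > 0:
    n = aliquot(n)
    seq.append(n)
print(seq)                    # [38, 22, 14, 10, 8, 7, 1, 0]

# 38 is the sum of the squares of the first three primes
print(2**2 + 3**2 + 5**2)     # 38

# 38 is a nontotient: phi(x) = 38 has no solution.  Searching x <= 1444 is
# enough, since phi(x) >= sqrt(x) for every x > 6.
from math import gcd
def phi(x):
    return sum(1 for k in range(1, x + 1) if gcd(k, x) == 1)
print(any(phi(x) == 38 for x in range(1, 1445)))   # False
```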
{"url":"http://blekko.com/wiki/38_(number)?source=672620ff","timestamp":"2014-04-16T19:47:19Z","content_type":null,"content_length":"26245","record_id":"<urn:uuid:a5c343b4-f750-4416-914c-ee0dfd9869d0>","cc-path":"CC-MAIN-2014-15/segments/1397609524644.38/warc/CC-MAIN-20140416005204-00288-ip-10-147-4-33.ec2.internal.warc.gz"}
P.D.F of X-Y when X Y are independent

August 29th 2011, 05:33 AM #1
P.D.F of X-Y when X Y are independent
In general, how do you find the distribution of X − Y? E.g., X and Y each have p.d.f. $f(x)=3x^2$ for 0<x<1. I know how to handle X + Y, but for X − Y the C.D.F. is effectively the region above the line y = x − z, bounded by the unit square. I think I am stuck with a non-standard multiple integral running from top left to bottom right. Any ideas?

August 30th 2011, 08:09 AM #2
Re: P.D.F of X-Y when X Y are independent
You can easily get the p.d.f. of −Y; only the region is different. You can then use the convolution or the change-of-variables method.

September 4th 2011, 11:20 PM #3
Re: P.D.F of X-Y when X Y are independent
Note that $-1\le a\le 1$ where $P(Z\le a)=P(X-Y\le a)$, which has two cases. If $-1\le a\le 0$ use $\int_{-a}^1\int_0^{a+y}9x^2y^2\,dx\,dy$ and if $0< a\le 1$ use $1-\int_{a}^1\int_0^{x-a}9x^2y^2\,dy\,dx$. It's 2am, I hope my bounds are correct, but the idea is certainly right.
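Carrying the second case through as a check (my own computation, not part of the original thread): the inner integral gives $3x^2(x-a)^3$, so for $0 < a \le 1$

$$P(X-Y\le a) \;=\; 1-\int_a^1 3x^2(x-a)^3\,dx \;=\; 1-\tfrac{1}{2}(1-a)^6-\tfrac{6a}{5}(1-a)^5-\tfrac{3a^2}{4}(1-a)^4 .$$

As sanity checks, this equals $1/2$ at $a=0$ (as it must, since $P(X\le Y)=1/2$ for i.i.d. continuous variables) and $1$ at $a=1$; differentiating with respect to $a$ gives the density of $X-Y$ on $(0,1)$.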
{"url":"http://mathhelpforum.com/advanced-statistics/186910-p-d-f-x-y-when-x-y-independent.html","timestamp":"2014-04-20T09:20:07Z","content_type":null,"content_length":"37872","record_id":"<urn:uuid:7c28b8a0-b361-4e12-8648-cabc7abb067e>","cc-path":"CC-MAIN-2014-15/segments/1397609538110.1/warc/CC-MAIN-20140416005218-00335-ip-10-147-4-33.ec2.internal.warc.gz"}
Mathematics Collection Development Policy
Bobst Library, New York University
Kara Whatley, Head, Coles Science Reference Center

Mathematics has been described as an art, competing only with music in its abstraction and beauty. It has also been described as the Queen of the Sciences. The study of Mathematics is essential for all of the sciences, for Computer Science, for Economics, and for the social sciences. It is the gateway to a large part of a new world that includes computer applications, planet-wide phenomena, and physical, molecular, physiological, and evolutionary systems. Mathematics and computer science are viewed as living parts of the stream of science, not as isolated specialties.

The Bobst Library collection supports undergraduate instruction in the Department of Mathematics, teaching of mathematics (pedagogy) at the graduate and research level in the School of Education, and programs in other disciplines, particularly for the School of Continuing and Professional Studies, the Stern School of Business, and the Wagner School of Public Service. Mathematics and mathematical statistics are classed in the QAs of the Library of Congress system. Computer Science, which is a part of mathematics in the Library of Congress Subject Headings, is described in a separate statement. The collection also includes mathematical materials for the non-mathematician, such as mathematical recreation and mathematics for standardized tests.

Academic Programs
The Department of Mathematics offers an undergraduate major in Mathematics, the requirements for which are set forth in the College of Arts and Science Bulletin. In accordance with the Courant Institute of Mathematical Science philosophy, the Department also participates in a number of interdisciplinary programs. There are two double majors, one with Computer Science and another with Economics. The Department also has two five-year programs: a B.S./B.E. in conjunction with Stevens Institute of Technology that leads to a B.S. in Mathematics and a B.E. in Engineering Physics, Electrical, Computer, or Mechanical Engineering; and a B.A./M.S. with the Leonard N. Stern School of Business for students considering a career in Actuarial Science.

Morse Academic Program (MAP)
For students who are not planning to be directly engaged in scientific or technical endeavors, the three-course Foundations of Scientific Inquiry sequence was created. The Quantitative Reasoning course is taught by members of the Department of Mathematics.

The Department of Mathematics at NYU is a world leader in Applied Mathematics, emphasizing the application of Mathematics to technology and other branches of science. The Graduate Department of Mathematics at the Courant Institute offers balanced training in the mathematical analysis and applications of mathematics in the broadest sense, especially in ordinary and partial differential equations, probability theory and stochastic processes, differential geometry, numerical analysis, scientific computation, mathematical physics and materials science, and fluid and gas dynamics. In computer science, the Institute excels in theory, programming languages, computer graphics, and parallel computing. The above section is composed largely of extracts from the various WWW pages cited.

1. Language
Material is collected in English, except for dictionaries and a small collection of textbooks for English as a second language students.
2. Geographical Areas
Limitations do not apply for this subject.
3. Chronological Periods
Emphasis is on current materials.
Materials on the history of mathematics are collected from every time period. 4. Sources used to Develop the Collection [Section to be completed] 5. Weeding [Section to be completed] 6. Selection Criteria [Section to be completed] Types of Materials 1. Included Circulating Collection Textbooks, monographs, monographic series, facsimiles, reprints, conference proceedings, journals, microforms, video recordings, CDROMs, electronic texts, popularizations. Microform is avoided because of technical problems with reproduction of mathematical notation. Reference Collection Encyclopedias, dictionaries, directories, guides, handbooks, abstracts and indexes, instructional support materials, selected textbooks, tables. 2. Excluded Circulating Collection Newsletters, ephemera, preprints, reprints, offprints, technical reports, juvenile works, K-12 textbooks, manuscripts, problem sets, dissertations and theses (except from NYU). Microform is avoided because of technical problems with reproduction of mathematical notation. Reference Collection Printed specialized bibliographies or compilations of citations that could be derived from online databases, problem sets, examination files. Strengths & Weaknesses of the Collection One challenge facing all of the science collections at NYU is to make a successful transition from print to electronic information resources. Another is to manage the collections well, given limited space, the deteriorating physical condition of materials, and limited staff to conduct inventories. Every effort is made to replace missing materials in a timely way if they are available. Increased access to electronic format may help to address this challenge. The collection is strong on periodicals from the 19th century. There is a good collection on college-level pedagogy, especially use of software such as MAPLE and MATLAB. The collection of textbooks is strong, but it needs to be assessed and weeded, and a selection of textbooks for undergraduate mathematics made in Spanish, Russian, and Chinese for English as a Second Language students. Another ongoing challenge is to mesh collecting with the Courant Institute Library. Other Resources 1. Related Collections within NYU Libraries The Z's: Outdated or superseded bibliographies that retain historical or bibliographic research interest are located elsewhere in the Bobst Library. Courant Institute of Mathematical Sciences (CIMS) Graduate and research-level materials in pure and applied mathematics. Unnecessary duplication is avoided. Reference works to support research and graduate study, including the basic indexes for mathematics and statistical sciences, are housed at the Courant Institute and linked to the Bobst Library online public access catalog whenever possible. The Institute's excellent library contains one of the largest collections of journals and advanced texts in Mathematics and Computer Science in the U.S. While efforts are made to avoid unnecessary duplication between Bobst and Courant, and to focus subject collections in one location or the other, this can be a challenge and there are numerous grey areas. Some Courant faculty have joint appointments with other NYU departments and are doing research in subjects areas that are established parts of research. Using mathematical tools to model aspects of neuroscience is one example. Mathematical physics is collected both in Bobst and CIMS. Modeling of flows in the atmosphere and the ocean is collected at CIMS because the research interest originated there. 
The Fales Library and Special Collections
Houses the Berol Collection of Lewis Carroll Materials that includes some of his mathematical works.

2. Other Collections in New York City
Cooper Union: Undergraduate level mathematics to support the engineering degree program.
Columbia University Mathematics Library: Not recommended for NYU undergraduate use, as the collections largely overlap.

Subjects & Collection Levels

LC Class   | Subject                                   | ECS | CCI | DCS | Notes
QA1-99     | Mathematics, General                      | B   | B   | B   |
QA1-4      | Periodicals                               | B   | B   | B   | D level collection at Courant Institute
QA11-20    | Study and Teaching                        | B   | C   | C   |
QA21-35    | History, Biography                        | B   | B   | B   | D level collection at Courant Institute
QA76       | See Computer Science Statement            |     |     |     |
QA77-141   | Elementary Mathematics, Arithmetic        |     |     |     |
QA101-145  | Arithmetic                                | B   | B   | B   |
QA150-271  | Algebra, General                          | B   | B   | B   |
QA265-271  | Linear Programming and Game Theory        | B   | B   | B   |
QA273-280  | Probability, Mathematical Statistics      | B   | B   | B   |
QA300-316  | Calculus                                  | B   | B   | B   |
QA372-377  | Differential Equations                    | B   | B   | B   |
QA440-699  | Geometry, General; Trigonometry; Topology | B   | B   | B   |
QA801-939  | Analytical Mechanics                      | B   | B   | B   |
{"url":"http://library.nyu.edu/collections/policies/math.html","timestamp":"2014-04-20T10:48:21Z","content_type":null,"content_length":"27678","record_id":"<urn:uuid:65c8bf61-a2ea-4691-a48c-ce52f388802f>","cc-path":"CC-MAIN-2014-15/segments/1398223206120.9/warc/CC-MAIN-20140423032006-00253-ip-10-147-4-33.ec2.internal.warc.gz"}
Speed Kills?

Father and son team William and Arthur Edelstein discuss one of the dangers of near-lightspeed travel in their paper published just last month: Speed kills: Highly relativistic spaceflight would be fatal for passengers and instruments [citation: Edelstein, W. and Edelstein, A. (2012) Speed kills: Highly relativistic spaceflight would be fatal for passengers and instruments. Natural Science, 4, 749-754. doi: 10.4236/ns.2012.410099.]

They highlight the lethality of the high-energy proton head-wind that the Interstellar Medium (ISM) becomes for a ship moving at near light-speed, which they define as above about ~0.9c. I hadn't realised the Edelsteins had finally published their work until a Facebook friend, Jay Real, sent me a link. Of course these issues have been discussed in the literature for years, so their discussion is nothing new – but welcome nonetheless as an explicit statement of the problem.

Highly relativistic speeds are difficult to achieve, so most vehicles would probably stay below ~0.9c unless something exotic appeared, like an easy way of making one of Sonny's warp-drive fields for rapid sub-light travel. In our part of the Galaxy the proton density is much lower than the 1.8 protons/cc assumed by the Edelsteins. Some hot bubbles in the Local ISM go down to ~0.01-0.05 protons/cc and the local clouds are ~0.1-0.2/cc. This doesn't change the results very much, but does lessen the local applicability.

Their analysis focuses chiefly on mass-shielding – big enough chunks of material to absorb the incoming flux. Magnetic shielding is mentioned dismissively, but I think that's premature. Workable designs using known materials exist which can deflect 10 GeV cosmic rays, the equivalent of flying at 0.995c. Advanced superconductors, which will be needed for antimatter containment, plasma nozzles and magnetic-sails, will allow even higher protection levels. Thus I submit the Edelsteins' negativity is premature.

The energy flux of interstellar matter hitting the ship can cause a lot of heating. If the ISM is just 100,000 atoms per cubic meter, the flux is equivalent to a 536 K temperature at 0.866c. Peak temperature during re-entry is 2700 K for a moonflight – that level is reached at about 0.997c. Of course a starship wouldn't just absorb that heat on its forward surfaces. A magnetic deflector would channel most of it away – but deflecting particles makes them lose momentum as high-energy photons (x-rays), which would need to be shielded against. And the shield would get HOT! Fast starships would need to be long and narrow to minimise the energy absorbed. An x-ray reflective diamond coating could be used, but it will need to be kept highly reflective while operating. Maintenance will be tricky!

As an example of the kinds of particle energies we can handle, the Large Hadron Collider regularly bends a high-energy stream of particles into a circle – the protons in the beam have a speed of 0.99999999c when it's at full power. Cosmic rays can reach much higher energies and need to be protected against. However the very highest energy cosmic rays are very rare, so only lower energy particles need deflecting in a crew habitat. The ones of biological concern, due to their numbers, are in the 1-10 GeV range. If we can deflect 10 GeV protons coming at us from our motion through space, then cosmic rays aren't an issue. Aberration comes into play at such high speeds – the direction of origin of incoming particles and photons starts piling up directly in front of the starship.
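To make the speed-to-particle-energy conversion used above explicit (my own restatement of standard kinematics, not a figure from the paper): an ISM proton at rest in the galactic frame arrives in the ship frame with total energy

$$E \;=\; \gamma\, m_p c^2 \;\approx\; \gamma \times 0.938\ \text{GeV},$$

so 0.995c ($\gamma \approx 10$) corresponds to roughly 10 GeV protons, and the LHC's full-power ~7 TeV beam protons correspond to $\gamma \approx 7500$, essentially the 0.99999999c figure quoted for the collider.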
I would suggest the best protection at very high speed might be a “diffuser” – a high-intensity magnet held far forward of the starship’s main hull which deflects the charged particles and creates a “shadow cone” behind it. The faster we go, for the same magnetic intensity, the further forward we put the diffuser. We fly, in safety, in its shadow thanks to aberration concentrating all the radiation to directly in front of us.

If we can deflect particles up to LHC energies, then how far can we accelerate at 1 gee? The acceleration distance required to increment the time-distortion/gamma factor (call it the TDF) by 1 is about 1 light year at 1 gee. At 0.99c the TDF is about 7. So it takes about 6 light-years (because we start with TDF = 1) to get to 0.99c. To reach 0.9999c (TDF = 70) takes about 69 light-years. Thanks to the time distortion, on ship the trip-time is much less. Remember a light-year is a distance, but as we’re flying so close to light-speed the ship is seen to take about 70 years to travel 69 light-years. A speed of 0.999999c (TDF = 700) takes 700 years Earth-time and 699 light-years of distance, but on the ship only just over 7 years have passed. If we decide to stop, then another 7 years ship-time, 700 years Earth-time, and 699 light-years are needed – meaning we’ve flown 1398 light-years in 14 years ship-time. But let’s push on. We’re pushing to TDF = 7,000 (0.99999999c), so the distance is 6,999 light-years, 7,000 years Earth-time, about 9.5 years onboard ship. Thus we could travel 13,998 light-years and stop, in 19 years of our time, if we can protect against proton energies equal to the LHC.

2 thoughts on “Speed Kills?”

1. I believe the TDF goes as powers of 7 for each “double 9”, so at 0.9999c it would be 49, and at 0.999999c it would be 343, and at 0.99999999c it would be 2401.

2. Hi tmazanec1. Actually, no, it goes up by 10-fold with every extra pair of “9s”. Thus 0.99c is 7, 0.9999 is 70, 0.999999 is 700, etc. To see why, consider the following… Gamma is (1-(V/c)^2)^-1/2. For speeds close to c, write V = c(1-e) where e << 1. Then (V/c)^2 = 1 - 2e + e^2, so 1 - (V/c)^2 = 2e - e^2 ≈ 2e, since e^2 is very small the closer we get to c. Gamma is therefore approximately (2e)^-1/2. For V = 0.99c, e = 0.01, so gamma = 10/sqrt(2). For V = 0.9999c, e = 0.0001, so gamma = 100/sqrt(2), and so on.
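A quick numerical check of the comment's scaling claim and the post's 1-gee trip figures (my own sketch; the post rounds the acceleration length c²/g to 1 light-year, so its distances come out slightly larger):

```python
import math

c = 299_792_458.0          # speed of light, m/s
g = 9.81                   # 1 gee, m/s^2
year = 365.25 * 86400.0    # Julian year, s
ly = c * year              # light-year, m

# gamma (the post's "TDF") rises ~10x for every extra pair of 9s in v/c
for v in (0.99, 0.9999, 0.999999):
    print(v, 1.0 / math.sqrt(1.0 - v * v))   # ~7.09, ~70.7, ~707

# Standard relativistic-rocket formulas for constant proper acceleration g:
#   gamma = cosh(g*tau/c),  Earth time t = (c/g)*sinh(g*tau/c),
#   distance d = (c^2/g)*(gamma - 1)
def one_gee_trip(gamma):
    tau = (c / g) * math.acosh(gamma)            # ship (proper) time
    t = (c / g) * math.sqrt(gamma ** 2 - 1.0)    # Earth-frame time
    d = (c * c / g) * (gamma - 1.0)              # Earth-frame distance
    return tau / year, t / year, d / ly

print(one_gee_trip(700))    # ~7.0 yr ship time vs ~680 yr / ~680 ly
print(one_gee_trip(7000))   # ~9.3 yr ship time vs ~6800 yr / ~6800 ly
```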
{"url":"http://crowlspace.com/?p=1504","timestamp":"2014-04-21T01:59:44Z","content_type":null,"content_length":"28415","record_id":"<urn:uuid:2e16df44-9b6d-4e30-91bf-f34f60330f33>","cc-path":"CC-MAIN-2014-15/segments/1397609539447.23/warc/CC-MAIN-20140416005219-00338-ip-10-147-4-33.ec2.internal.warc.gz"}
Geometric Representation Theory (Lecture 22)
Posted by John Baez

This time in the Geometric Representation Theory Seminar, Jim introduces a new example: the Hall algebra of a quiver. I talked about Hall algebras back in “week230”; now we’re going to groupoidify them. Hall algebras are exciting because they’re a slick way to get a quantum group from an $ADE$ type Dynkin diagram — or at least the top half of a quantum group.

Let me recall some stuff from “week230”, where I explained the Hall algebra $H(A)$ of an abelian category $A$. As a set, this consists of formal linear combinations of isomorphism classes of objects of $A$. No extra relations! It’s an abelian group with the obvious addition. But the cool part is, with a little luck, we can make it into a ring by letting the product $[a][b]$ be the sum of all isomorphism classes of objects $[x]$ weighted by the number of isomorphism classes of short exact sequences
$0 \to a \to x \to b \to 0.$
This only works if the number is always finite.

The fun starts when we take the Hall algebra of $Rep(Q)$, where $Q$ is a quiver. We could look at representations in vector spaces over any field, but let’s use a finite field - necessarily a field with $q$ elements, where $q$ is a prime power. Then, Ringel proved an amazing theorem about the Hall algebra $H(Rep(Q))$ when $Q$ comes from a Dynkin diagram of type $A$, $D$, or $E$:

• C. M. Ringel, Hall algebras and quantum groups, Invent. Math. 101 (1990), 583-592.

He showed this Hall algebra is a quantum group! More precisely, it’s isomorphic to the $q$-deformed universal enveloping algebra of a maximal nilpotent subalgebra of the Lie algebra associated to the given Dynkin diagram. That’s a mouthful, but it’s cool. For example, the Lie algebra associated to $A_n$ is $sl(n+1)$, and the maximal nilpotent subalgebra consists of strictly upper triangular matrices. We’re $q$-deforming the universal enveloping algebra of this. One cool thing here is that the “q” of q-deformation, familiar in quantum group theory, gets interpreted here as a prime power - something we’ve already seen in “week185” and subsequent weeks.

• Lecture 22 (Jan. 10) - James Dolan on groupoidifying the Hall algebra of an abelian category. Any abelian category $A$ gives a “trispan” of groupoids: namely, three functors from the groupoid of short exact sequences in $A$ to the underlying groupoid of $A$, say $A[0]$. These three functors send any exact sequence $0 \to a \to x \to b \to 0$ to the subobject $a$, the quotient object $b$ and the ‘total’ object $x$, respectively. Degroupoidifying $A[0]$ we get a vector space $H(A)$ — this consists of formal linear combinations of isomorphism classes of objects of $A$. Ignoring possible divergences, degroupoidifying the trispan then gives a product $H \otimes H \to H$. A magical fact: this product is associative, making $H$ into an associative algebra called the Hall algebra of $A$. So, we have groupoidified the Hall algebra. The classic example arises when $A$ is the category of representations of a quiver on vector spaces over the field with $q$ elements, $F_q$. The simplest example: the quiver $A_2$, which looks like this: $\bullet \longrightarrow \bullet$

Streaming video in QuickTime format; the URL is

Posted at January 18, 2008 1:54 AM UTC

Re: Geometric Representation Theory (Lecture 22)
There’s now a downloadable version of the video for this seminar! Check it out and see if it works for you.
Posted by: John Baez on January 28, 2008 10:43 PM | Permalink
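As a side note (mine, not part of the post), the most basic count that shows up when working linear-algebraically over $F_q$ in this way is the Gaussian binomial coefficient, the number of k-dimensional subspaces of an n-dimensional $F_q$ vector space; a few lines of Python tabulate it:

```python
# Gaussian (q-)binomial coefficient: the number of k-dimensional subspaces
# of an n-dimensional vector space over the finite field F_q.
# (As q -> 1 it degenerates to the ordinary binomial coefficient.)
def gaussian_binomial(n, k, q):
    num, den = 1, 1
    for i in range(k):
        num *= q ** (n - i) - 1
        den *= q ** (i + 1) - 1
    return num // den

print(gaussian_binomial(2, 1, 2))   # 3   (the three lines in F_2^2)
print(gaussian_binomial(4, 2, 3))   # 130 (2-dimensional subspaces of F_3^4)
```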
{"url":"http://golem.ph.utexas.edu/category/2008/01/geometric_representation_theor_21.html","timestamp":"2014-04-17T19:08:03Z","content_type":null,"content_length":"22482","record_id":"<urn:uuid:fb5cca0b-633b-49ec-9e29-7c218396218a>","cc-path":"CC-MAIN-2014-15/segments/1397609530895.48/warc/CC-MAIN-20140416005210-00537-ip-10-147-4-33.ec2.internal.warc.gz"}
Definite Integrals in Infinite Dimensional Space December 23rd 2012, 10:32 PM #1 Dec 2012 Definite Integrals in Infinite Dimensional Space As it can be easily found in literature, indefinite integrals are treated as inner product of infinite dimensional vectors. But what about definite integrals? When we deal with definite integrals, I think, we would have been previously defined the associated vectors such that the integration limits to be applied on them. However, the vectors x in such infinite dimensional spaces are always shown by summation of x[k]a[k] for k=1 to infinite. In better word, when we assume a given definite integral (with limits a, b) as an inner product of infinite dimensional vectors, how can we define the associated vectors? I will appreciate if anyone would help me. Re: Definite Integrals in Infinite Dimensional Space Hey Laotzu. We can do definite integrals in the same way. You should take a look at Hilbert-Space Theory as well as l^2 and L^2 spaces to get more of an idea. Re: Definite Integrals in Infinite Dimensional Space Thanks. But I have no problem with definite integrals. What I want to know is the form of two vectors (in infinite dimensional space) that were multiplied to produce a certain definite integral. How they may be showed that respect the integration limits? Re: Definite Integrals in Infinite Dimensional Space You should check out l^2 spaces and L^2 spaces for that answer since they both require specific convergence properties. Re: Definite Integrals in Infinite Dimensional Space i'm not sure what you're asking here. usually, a function space (that is, a vector space where the vectors are functions) assumes a common set of definition (domain) for the functions in question. that is one speaks of V = {f: D-->F, f in some family of functions} where F is the underlying field of V. to study inner products, F is usually taken to be either R, or C. since we are talking about using integrals as inner products, a common restriction is that f be square-integrable on D (which is typically a subset (often compact) of R^n or C^n). this is enough to ensure the inner product exists. "integrable" here is rather vague, different integral definitions exist (riemann, lebesgue,harr,darboux,etc.) in this case that D is unspecified, often the entire space R^n or C^n is intended as the domain of the functions (this is the case for polynomial spaces, for example). the usual "multiplication" of the functions is given "point-wise": (fg)(x) = f(x)g(x) <---the RHS is the multiplication of the field F. it is not true that all infinite-dimensional vectors are sums of the form: $\sum_{k=1}^{\infty} a_kx_k$ some infinite-dimensional vector spaces are of uncountable dimension, and such a series "doesn't give enough terms". for example, the real numbers are a vector space over the rationals, but there is no countable set of (Q-linearly independent) real numbers that will serve to describe all reals as Q-linear combinations of them. i think there is some confusion in what you are asking because, in general, integrals are used to define (certain) inner products, inner products are not used to define integrals. Re: Definite Integrals in Infinite Dimensional Space Dear Deveno, thanks for your useful help. I will try to clear what I’m looking for. When we define vectors in infinite dimensional space, we put no limits on number of its components (if we are allowed to use "component" here). So, the vector length would be calculated accounting all of its infinite components. 
On the other hand, when we assume a definite integral with limits a and b as an inner product of two infinite dimensional vectors, I think, we limit the component number of the vectors, thereby limiting their length. In fact, the length of vectors are calculated now by integration from a to b. Now, if it is true, how can we show or apply this limitation on vectors definition (or expression), before forming the inner product? In other word, how the changes of integration limits a and b reflect in the initial form of the multiplied vectors? Or, the integration limits are artificially added, without any particular meaning for the initial vectors? Re: Definite Integrals in Infinite Dimensional Space well, the "first step" in extending the ordinary concept of "length" to infinite-dimensional vector spaces would be to extend the FINITE sum: $\sqrt{\sum_{k=1}^n (x_k)^2}$ of a finite number of coordinates to an infinite sum: $\sqrt{\sum_{k=1}^{\infty} (x_k)^2 }$ but this then raises questions of "convergence" (if we are dealing with "square-summable sequences" we're good, but this won't work for an arbitrary sequence). with an arbitrary function, let's say a function f:R-->R, it's no longer clear at first how we should define the "length" of it. but what comes to our aid is the notion of linear functional....that is linear functions L:V--->F. the inner product <u,v> can be thought of as a linear functional <u,_>: v---><u,v>. so for a space V of functions (let's say real-valued functions, just to be specific), what we want is a linear functional L(f):V-->R. it turns out that for vector spaces comprised of (integrable) functions, a definite integral is just such a functional, we have: $L(f) = \int_D f$. L is clearly linear: $L(f+g) = \int_D f+g = \int_D f + \int_D g = L(f) + L(g)$ $L(af) = \int_D af = a\int_D f = a(L(f))$ it is important that this be a definite or improper integral, we need to have it spit out a NUMBER, not another function. if we have an orthonormal basis w.r.t. a certain inner product, say, {u[1],...,u[n]} in the finite case, then if x = x[1]u[1]+...+x[n]u[n] then orthonormality lets us express the "coordinates" in terms of the inner product: x[j] = <x,u[j]>. this is precisely what is done with fourier analysis, here the functions are only defined on (-π,π) (or some other interval of length 2pi) and the basis used is {1,cos(x),sin(x),cos(2x),sin(2x),cos(3x),sin(3x),. .......} (technically 1/√2 should be used instead of 1, but square roots are annoying) and the inner product used is: $\langle f,g \rangle = \frac{1}{\pi}\int_{-\pi}^{\pi} f(x)g(x)\ dx$ the "coordinates" of a function f are then given by its fourier coefficients, each of which is a coefficient of a term in the following series $f(x) = \frac{a_0}{2} + \sum_{n=1}^{\infty} (a_n\cos(nx) + b_n\sin(nx))$ $a_0 = \frac{1}{\pi}\int_{-\pi}^{\pi} f(x)\ dx$ $a_n = \frac{1}{\pi}\int_{-\pi}^{\pi} f(x)\cos(nx)\ dx$ $b_n = \frac{1}{\pi}\int_{-\pi}^{\pi} f(x)\sin(nx)\ dx$ the limits of integration here have nothing to do with the "infiniteness" of the dimension of the square-integrable functions on [-π,π]. the limits of integration are usually chosen for intervals [a,b] on which we want to know something, and can be imposed rather arbitrarily (often we are interested in what happens in a certain time interval, say from t = 0 to t = 1, so we might chose the interval [0,1] over which to integrate). that is the passage from definite--->indefinite integral has nothing to do with finite-dimensional spaces--->infinite-dimensional spaces. 
indefinite integrals (anti-derivatives, or primitives) are an entirely different kind of animal than definite integrals, which are linear functionals. indefinite integrals are more like linear OPERATORS (except they yield an equivalence class of functions, f(x)+C, instead of any one particular function). this is why differential equations need "boundary conditions" (initial value specifications) to be fully solved for a specific situation. put another way, the differential operator, D (which IS a linear operator) is not injective, so to specify an inverse image (something in D^-1(f(x))), we need to provide more information (to determine C).

Re: Definite Integrals in Infinite Dimensional Space
Dear Deveno. It gives me pleasure to be informed of your deep understanding of the subject. I'm not a mathematician and I hope my non-professional questions do not trouble you. It is said in references that the angle between two infinite dimensional vectors u(x) and v(x) can be calculated from their inner product as:

This expression can be rewritten in the familiar form:

So far, so good. Since I have not found anywhere the definite version of this formula, I'm dubious about:

If it's wrong, I have nothing to say; otherwise, I have to go back to my first question. Because here the lengths of the vectors are clearly defined with respect to the integration limits, which in turn affect the angle between them. Now, we can ask how infinite dimensional vectors can be limited. What do the limits a and b mean with respect to the initial definition of the vectors u(x) and v(x)? Is it true that the dimensions of the vectors are infinite but limited?!
Last edited by Laotzu; December 25th 2012 at 11:51 PM.

Re: Definite Integrals in Infinite Dimensional Space
actually the correct formula is:

$\cos(\phi) = \frac{\langle u,v \rangle}{\|u\|\|v\|}$

which when an integral is used as an inner product becomes:

$\cos(\phi) = \frac{\int_a^b u(x)v(x)\ dx}{\sqrt{\int_a^b u^2(x)\ dx \cdot \int_a^b v^2(x)\ dx}}$

it is important that limits of integration be used. to take a simple example, over the real numbers, the angle between the polynomials f(x) = 1, and g(x) = x would be (if there were no limits of integration):

$\cos(\phi) = \frac{\frac{x^2}{2} + C_1}{\sqrt{(x + C_2)(\frac{x^3}{3} + C_3)}}$

which isn't a number, but a (somewhat ugly) function of x (and we don't even know which one, since there are 3 unspecified parameters). in a more general sense, the limits a and b in the definite integral have more to do with "the area of the functions' domain we are interested in". it is unreasonable to expect that functions that are orthogonal when we integrate from -1 to 1 will still be orthogonal when we integrate from 2 to 3. in fact, it is common to integrate with a "weighting function" w(x) that has the effect of "stretching" the underlying space and thereby "skews" the angles between vectors.

but to answer your question more simply: the limits of integration usually mean we are in a subspace of C[a,b], the space of all continuous functions f:[a,b]-->R (although it is possible to make a "larger space" out of "integrable functions", for example step-functions, which are clearly NOT continuous). which functions one is going to allow to a large extent depends on which flavor of integral you're using. the infinite-dimensional-ness of a function space has to do with how many "basis functions" we need to specify to get "coordinates". the simplest case is polynomial functions, where the functions f(x) = x^k form a countable basis. that is, any polynomial can be regarded as a finite sequence of coefficients (but we need an infinite basis, because we might have polynomials of arbitrarily high degree). since polynomials are "defined everywhere", any closed interval of R could be used as a domain of definition. limiting the domain of definition does NOT change the dimension.
{"url":"http://mathhelpforum.com/advanced-algebra/210286-definite-integrals-infinite-dimensional-space.html","timestamp":"2014-04-17T02:29:09Z","content_type":null,"content_length":"68734","record_id":"<urn:uuid:dc7a2a92-663c-4932-a79f-5cface58f49a>","cc-path":"CC-MAIN-2014-15/segments/1397609526102.3/warc/CC-MAIN-20140416005206-00446-ip-10-147-4-33.ec2.internal.warc.gz"}
When a group ring is a local ring

Hi there, I'm stuck with my undergraduate thesis on the following proposition: If $k$ is a field of characteristic $p > 0$ and $G$ is a finite $p$-group, then the group ring $kG$ is local. In particular where $k = \mathbb{Z}_p$ and $G$ is cyclic of order $p$. Thanks for any help.

Hint: let $t$ generate your cyclic group $G$ of order $p$, and consider $(t-1)^p \in kG$. – Konstantin Ardakov Aug 27 '11 at 18:00
For the general problem, can you see why the augmentation ideal is nilpotent? – Greg Marks Aug 27 '11 at 18:29
Marco, if you read the FAQ you'll see that this site is not quite the best fit for your question. The FAQ suggests other places, like the sister (brother?) site math.stackexchange.com, where questions like yours will feel much more at home. – Mariano Suárez-Alvarez♦ Aug 27 '11 at 19:21
I think this question does belong to MO. – Qfwfq Aug 27 '11 at 21:10
@unknowngoogle: this is a standard piece of knowledge, proved in pretty much any textbook which deals with the subject. It is quite not research math in any sense. – Mariano Suárez-Alvarez♦ Aug 28 '11 at 4:05

4 Answers

Dear Marco, it is well-known that if $F$ is a field of characteristic $p>0$ and $G$ is a group then the augmentation ideal of the group algebra $FG$ is nilpotent if and only if $G$ is a finite $p$-group (see the book "D. Passman: The algebraic structure of group rings", Lemma 1.6 of Chapter 3, page 70). In that case the Jacobson radical $J$ of $FG$ clearly coincides with the augmentation ideal, which has codimension $1$ in $FG$, and you are done. You can also find the description of the Jacobson radical of the group algebra of a finite $p$-group over a field of characteristic $p>0$ in the book "R. Pierce: Associative algebras" (Corollary in Section 4.7).

Since you say this is an undergraduate thesis, I will take a few steps back. The augmentation ideal $I$ of the group algebra $kG$ is $\{ \sum_{g \in G} \alpha_{g}g: \sum_{g \in G} \alpha_g = 0 \}.$ It is easy to see (in several ways) that $I$ is a two-sided ideal of $kG.$ One way is to note that it is the annihilator of the trivial module, which is $1$-dimensional, with a $k$-basis $\{v \}$ such that $vg = v$ for all $g \in G.$ No element of $I$ is a unit, as $I$ is a proper ideal. It remains to prove that every element of $kG \backslash I$ is invertible. Again, there are several ways to do this: one is to note that if $M$ is a simple (sometimes called irreducible) $kG$-module, then $G$ fixes a non-zero vector of $M$, so $M$ must be the trivial module. I leave this to you to do, or to research. Then it follows that $I$ annihilates every simple $kG$-module. Then you can use the fact that every finite dimensional $kG$-module has a composition series to see that $I^{n}$ annihilates the regular module $kG$ for some integer $n.$ It follows in particular that every element of $I$ is nilpotent. Every element of $kG \backslash I$ is of the form $\lambda 1_G + j$ for some $j \in I$ and nonzero $\lambda \in k.$ Then it is relatively easy to see that $1_{G} + \frac{j}{\lambda}$ is invertible, using the nilpotency of $j.$

Maschke's Theorem says that if $p \not | |G|$ then $kG$ is semisimple. You've probably learned this fact. What you may not know is that even if $p$ does divide $|G|$, as in your situation, you still know $kG$ is quasi-Frobenius. You should look into a homological algebra textbook, e.g. "Lectures on Modules and Rings" by T.Y. Lam, to learn more about such rings. There are many theorems and lots of counter-examples which may help you. As an example, one property of quasi-Frobenius rings $R$ is that the class of injective $R$-modules is the same as the class of projective $R$-modules.
There's a way to go from quasi-Frobenius to local, though :) – Mariano Suárez-Alvarez♦ Aug 27 '11 at 19:19
Indeed. I figured this question would be closed, and wasn't really sure the best way to give some useful information without giving it all away. I thought giving the reference where that answer is found--but not the answer itself--would be a nice middle ground for an undergrad who's trying to learn about the research process. – David White Aug 27 '11 at 20:30

Use the following theorem by Brauer, that you can find in many books about representation theory of finite groups: The number of simple modules is equal to the number of conjugacy classes whose elements have order coprime to the characteristic of the field $p$, the so-called $p$-regular classes.
Hi Julian: you need $k$ algebraically closed to say it that way, I think (doesn't really matter for the $p$-group case). – Geoff Robinson Aug 27 '11 at 21:07
(or, at least, a splitting field). – Geoff Robinson Aug 27 '11 at 21:08
what an overkill! – Konstantin Ardakov Aug 28 '11 at 7:36
@Konstantin: it isn't such a difficult theorem by modern standards. – Geoff Robinson Aug 28 '11 at 12:02
@Geoff: of course not, but I think I prefer the more direct proof: induct on $|G|$ and pick $z \in Z(G)$ of order $p$; then $\mathfrak{m}^n \subseteq (z-1)kG$ for some $n$ by induction so $\mathfrak{m}^{np} \subseteq (z-1)^pkG = 0$. – Konstantin Ardakov Aug 28 '11 at 17:25
{"url":"http://mathoverflow.net/questions/73856/when-a-group-ring-is-a-local-ring?sort=oldest","timestamp":"2014-04-20T11:08:09Z","content_type":null,"content_length":"78985","record_id":"<urn:uuid:d67ce3a5-0b7a-4edf-bc6a-81ffe6164de1>","cc-path":"CC-MAIN-2014-15/segments/1398223203422.8/warc/CC-MAIN-20140423032003-00071-ip-10-147-4-33.ec2.internal.warc.gz"}
Expressing a quaternion with zero rotation

Hi! How do you do this? Up to now all formulas I've seen will cause the x, y and z parts to collapse by sin(theta/2) if theta is 0. So how do you express, say, an orientation straight down the x-axis with no rotation about that axis? Thanks ;)

Thanks mate. Out of curiosity, what would its values be for an orientation down the x-axis? I did some research on unit quats but got more confused :)

What do you mean, an orientation down the x-axis?

I think the question is, how can you produce a quaternion which points down the x-axis but expresses a rotation of 0? The answer is there's no such thing -- a rotation of 0 about the x-axis is a rotation of 0 about any axis. It's like asking which way the zero vector is pointing. It doesn't point anywhere at all.

O_o Left-handed devils or right-handed devils? *shrug* Either way, I'd say a rotation of 90 degrees should suit you. Soma

Quote: how can you produce a quaternion which points down the x-axis but expresses a rotation of 0?
Wow. If that's his question, he wasn't kidding about being confused. Soma

You'd better believe it!!!! I got it figured in the end, thanks guys ;)

Some code from my quaternion camera. Most of it taken from various tutorials on the internet and then plugged in to my existing system. Don't ask me how quaternions work b/c the math behind them is complex. I just know properties of them and various operations that can be performed on them. So this is one of those things in my book that 'just works'.

Code:
void X3DCamera::Pitch(float angle)
{
    D3DXQUATERNION quat = m_qRot;
    D3DXQuaternionRotationAxis(&quat, TransformVector(&m_qRot, &D3DXVECTOR3(1.0f, 0.0f, 0.0f)), angle);
    m_qRot *= quat;
    D3DXQuaternionNormalize(&m_qRot, &m_qRot);
    m_inTransition = true;
}

void X3DCamera::Yaw(float angle)
{
    m_yaw += angle;
    D3DXMATRIX matRot;
    D3DXQUATERNION quat;
    D3DXMatrixRotationY(&matRot, m_yaw);
    D3DXQuaternionRotationMatrix(&quat, &matRot);
    D3DXQuaternionRotationAxis(&quat, TransformVector(&quat, &D3DXVECTOR3(0.0f, 1.0f, 0.0f)), angle);
    m_qRot *= quat;
    D3DXQuaternionNormalize(&m_qRot, &m_qRot);
    m_inTransition = true;
}

void X3DCamera::Roll(float angle)
{
    D3DXQUATERNION quat = m_qRot;
    D3DXQuaternionRotationAxis(&quat, TransformVector(&m_qRot, &D3DXVECTOR3(0.0f, 0.0f, 1.0f)), angle);
    m_qRot *= quat;
    D3DXQuaternionNormalize(&m_qRot, &m_qRot);
    m_inTransition = true;
}

D3DXVECTOR3* X3DCamera::TransformVector(D3DXQUATERNION *pOrientation, D3DXVECTOR3 *pAxis)
{
    D3DVECTOR vNewAxis;
    D3DXMATRIX matRotation;

    // Build a matrix from the quaternion.
    D3DXMatrixRotationQuaternion(&matRotation, pOrientation);

    // Transform the queried axis vector by the matrix.
    vNewAxis.x = pAxis->x * matRotation._11 + pAxis->y * matRotation._21 + pAxis->z * matRotation._31 + matRotation._41;
    vNewAxis.y = pAxis->x * matRotation._12 + pAxis->y * matRotation._22 + pAxis->z * matRotation._32 + matRotation._42;
    vNewAxis.z = pAxis->x * matRotation._13 + pAxis->y * matRotation._23 + pAxis->z * matRotation._33 + matRotation._43;

    memcpy(pAxis, &vNewAxis, sizeof(vNewAxis));   // Copy axis.
    return pAxis;
}

void X3DCamera::GetViewMatrix(D3DXMATRIX *outMatrix)
{
    D3DXMATRIX matRot;
    D3DXQuaternionNormalize(&m_qRot, &m_qRot);
    D3DXMatrixRotationQuaternion(&matRot, &D3DXQUATERNION(-m_qRot.x, -m_qRot.y, -m_qRot.z, m_qRot.w));

    D3DXVECTOR3 vecLook(matRot._13, matRot._23, matRot._33);
    D3DXVECTOR3 vecOrbitPos = m_vecTargetPos + vecLook * -m_fCamDist;

    D3DXMATRIX matOrbit;
    D3DXMatrixTranslation(&matOrbit, -vecOrbitPos.x, -vecOrbitPos.y, -vecOrbitPos.z);

    D3DXMATRIX matTrans;
    D3DXMatrixTranslation(&matTrans, -Pos.x, -Pos.y, -Pos.z);

    m_matView = matOrbit * matRot * matTrans;
    *outMatrix = m_matView;
}

O_o I don't imagine that he would've posted the example if he didn't intend for you to make use of it. Soma

Quote: I don't imagine that he would've posted the example if he didn't intend for you to make use of it.
Hehe. :) One would think that such a conclusion was obvious but perhaps not. :D Or it could even be described as being polite...... And i don't see why something you sweated over to make should just be acquired by someone without asking your permission first ;)
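For reference (my own summary of the standard convention, not a post from the original thread): a unit quaternion for a rotation by angle $\theta$ about a unit axis $\hat{n}=(n_x,n_y,n_z)$ is

$$q \;=\; \big(\cos\tfrac{\theta}{2},\; n_x\sin\tfrac{\theta}{2},\; n_y\sin\tfrac{\theta}{2},\; n_z\sin\tfrac{\theta}{2}\big),$$

so at $\theta = 0$ every axis gives the same identity quaternion $(1, 0, 0, 0)$. That is exactly why the axis information "collapses": a quaternion encodes a rotation, not a direction, and a facing direction has to be represented separately (for example, as a rotation from some reference orientation, such as the 90-degree rotation suggested above).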
{"url":"http://cboard.cprogramming.com/cplusplus-programming/129235-expressing-quaternion-zero-rotation-printable-thread.html","timestamp":"2014-04-19T18:51:03Z","content_type":null,"content_length":"15372","record_id":"<urn:uuid:205dd3c6-0a77-4eb9-997d-3e11323b7342>","cc-path":"CC-MAIN-2014-15/segments/1398223207985.17/warc/CC-MAIN-20140423032007-00122-ip-10-147-4-33.ec2.internal.warc.gz"}
{- |
  This module provides the 'BBox1' type (mainly for completeness).
-}
module Data.BoundingBox.B1 where

import Data.Vector.Class
import Data.Vector.V1
import qualified Data.BoundingBox.Range as R

-- | The 'BBox1' type is basically a 'Range', but all the operations over it work with 'Vector1' (which is really 'Scalar'). While it's called a bounding /box/, a 1-dimensional box is in truth a simple line interval, just like 'Range'.
newtype BBox1 = BBox1 {range :: R.Range} deriving (Eq, Show)

-- | Given two vectors, construct a bounding box (swapping the endpoints if necessary).
bound_corners :: Vector1 -> Vector1 -> BBox1
bound_corners (Vector1 xa) (Vector1 xb) = BBox1 $ R.bound_corners xa xb

-- | Find the bounds of a list of points. (Throws an exception if the list is empty.)
bound_points :: [Vector1] -> BBox1
bound_points = BBox1 . R.bound_points . map v1x

-- | Test whether a 'Vector1' lies within a 'BBox1'.
within_bounds :: Vector1 -> BBox1 -> Bool
within_bounds (Vector1 x) (BBox1 r) = x `R.within_bounds` r

-- | Return the minimum endpoint for a 'BBox1'.
min_point :: BBox1 -> Vector1
min_point = Vector1 . R.min_point . range

-- | Return the maximum endpoint for a 'BBox1'.
max_point :: BBox1 -> Vector1
max_point = Vector1 . R.max_point . range

-- | Take the union of two 'BBox1' values. The result is a new 'BBox1' that contains all the points the original boxes contained, plus any extra space between them.
union :: BBox1 -> BBox1 -> BBox1
union (BBox1 r0) (BBox1 r1) = BBox1 (r0 `R.union` r1)

-- | Take the intersection of two 'BBox1' values. If the boxes do not overlap, return 'Nothing'. Otherwise return a 'BBox1' containing only the points common to both argument boxes.
isect :: BBox1 -> BBox1 -> Maybe BBox1
isect (BBox1 r0) (BBox1 r1) = do
  r <- (r0 `R.isect` r1)
  return (BBox1 r)

-- | Efficiently compute the union of a list of bounding boxes.
unions :: [BBox1] -> BBox1
unions = BBox1 . R.unions . map range
{"url":"http://hackage.haskell.org/package/AC-Vector-2.3.1/docs/src/Data-BoundingBox-B1.html","timestamp":"2014-04-19T10:00:19Z","content_type":null,"content_length":"10861","record_id":"<urn:uuid:9bf5035f-2325-44aa-a126-f1e26e915c32>","cc-path":"CC-MAIN-2014-15/segments/1397609537097.26/warc/CC-MAIN-20140416005217-00645-ip-10-147-4-33.ec2.internal.warc.gz"}
Yonkers Science Tutor I have enjoyed working as a private tutor and homeschool teacher (at the middle and high school levels) with New York City families over the past four years. In terms of credentials, I hold a Masters degree in Secondary Education (English) from Harvard University and am also currently pursuing a Ph... 33 Subjects: including anthropology, reading, psychology, biology ...I worked at a daycare for 5 years, and have volunteered at the Easton Area Community Center tutoring young children ages 4-9 for 3 years. I have been involved in philanthropy events such as Literacy Day for children from the Easton, PA area to help promote the importance of literacy for the pa... 7 Subjects: including anatomy, biology, vocabulary, physiology ...In each psychology, math, or accounting course I have taken thus far in my college career I have received a 95 or above (nothing less than an A). I am also part of two honors societies at my school which are based on students having a GPA of 3.5 or above. I am looking forward to sharing my inter... 11 Subjects: including physics, psychology, calculus, algebra 1 ...For the past 8 years, I have taught Earth Science Regents in the New York Public Schools. I am New York State teacher certified in Earth Science for grades 7 through 12. I am fluent in Spanish and can tutor Spanish-speaking students as well. 4 Subjects: including astronomy, physical science, geology, Regents ...Graduate with a BSc in Physics, extensive knowledge of mathematics at all levels. Experienced tutoring advanced subjects that rely on algebra as a foundation. I'm a college graduate in Physics. 1 year of calculus in high school, 2 years of calculus/analysis in university. 17 Subjects: including chemistry, physical science, astronomy, Spanish
{"url":"http://www.purplemath.com/yonkers_ny_science_tutors.php","timestamp":"2014-04-17T11:30:34Z","content_type":null,"content_length":"23728","record_id":"<urn:uuid:418b3741-ff69-42af-bb79-a17cf33ef852>","cc-path":"CC-MAIN-2014-15/segments/1397609527423.39/warc/CC-MAIN-20140416005207-00258-ip-10-147-4-33.ec2.internal.warc.gz"}
Observed frequencies of transitions in a Markov Chain

December 11th 2009, 01:24 AM
Observed frequencies of transitions in a Markov Chain
Dear Forum, I came across the following remark in P. Billingsley, "Statistical Inference for Markov Processes", and I wonder whether someone can point me to further information: "It is possible to show that if (the transition probability) p_ij > 0, then P (n_ij = 0) goes to 0 exponentially as n --> infinity." Remark: We assume that we have observed n transitions of the Markov chain, and n_ij denotes the number of times we have observed the transition i --> j. I tried to find information about this online but I failed. Maybe someone has seen this or a related statement online or in a book? Any help would be very much appreciated.

December 11th 2009, 10:39 AM
This is not always true. This is true iff the Markov chain is positive recurrent, I believe (and if state i is accessible from the starting point). Maybe your Markov chain has finitely many states?

Here is a proof (that positive recurrence suffices). Let $\tau$ be the time of the first transition from i to j, so that $P(n_{ij}=0)=P(\tau>n)$. Note that $P_x(\tau>1)=1$ if $x \neq i$, and $P_i(\tau>1)=1-p_{ij}$. By the Markov property we have, for all $n$, $P(\tau>n+1)=E[{\bf 1}_{(\tau>n)}P_{X_n}(\tau>1)]=P(\tau>n)-p_{ij}P(\tau>n, X_n=i)$, where the last equality results from the previous remark. Thus, $P(\tau>n+1)=P(\tau>n)(1-p_{ij}P(X_n=i|\tau>n))$.

For $P(\tau>n)$ to decrease exponentially, it would be sufficient if the last probability were greater than a positive constant for a positive proportion of indices $n$ (we can't expect the probability to be positive for all $n$). After a short thought, it appears that the conditional distribution of $(X_0,\ldots,X_n)$ given $\{\tau>n\}$ is a Markov chain with similar transitions to the initial ones, except that the transition from i to j is removed and the other transitions starting from i are normalized accordingly. Note that this doesn't work if (i,j) is the only possible transition starting from i, i.e. if $p_{ij}=1$; this special case could be dealt with by modifying the Markov chain at the very beginning so as to "erase the (useless) state i" (it's a bit messy to explain, I hope you get the picture; anyway it's not the crucial part). So let us consider this new Markov chain (without the transition (i,j)).

If the state space was finite, of course it still is finite (same state space), so that the Markov chain still is positive recurrent. In the general case, the Markov chain (restricted to the still accessible states) still is positive recurrent as well, I guess, but I can't think of an argument, so I let you figure that out if you want. Therefore, if $d$ denotes the period of the new Markov chain (whose law is denoted by $\widetilde{P}$), then $\widetilde{P}(X_{kd+n_0}=i)\to_{k\to\infty}\pi(i)$ for some invariant probability measure $\pi$, with $\pi(i)>0$ because i is accessible from the initial state (and where $n_0\in\{0,\ldots,d-1\}$). So we have a constant $c>0$ and $k_0,n_0>0$ such that, for all $k>k_0$, $P(X_{kd+n_0}=i|\tau>kd)>c$.

This gives the conclusion: we have $P(\tau>n+1)\leq P(\tau>n)$ for all $n$, and $P(\tau>n+1)\leq (1-p_{ij}c)P(\tau>n)$ for indices $n$ that are multiples of $d$ (plus $n_0$, and large enough). Hence a geometric (= exponential) rate of decrease (here we use $p_{ij}>0$). If this is not obvious to you, try to prove it. This should answer your questions; you may ask for additional explanations if you need.
December 14th 2009, 02:37 AM Brilliant! Thank you very much for this excellent response. December 15th 2009, 01:28 AM For the sake of completeness, I would like to add two remarks to Laurent's proof: (1) The case $p_{ij} = 1$: Unfortunately, in my case the Markov Chain is given and I do not have the possibility to modify it upfront. However, in this case the question whether we observe transition $(i,j)$ within $n+1$ steps reduces to the question whether we observe state $i$ within $n$ steps. And this probability is easily seen to reduce exponentially if $\pi(i) > 0$, which holds if the Markov Chain is finite and irreducible. (2) Positive recurrence of the modified Markov Chain: Unfortunately, I believe that the modified Markov Chain is not necessarily positive recurrent. To see this, imagine that transition $(i,j)$ is the only link from a set of states $S_1$ to a disjoint set of states $S_2$ (while there are other links in the other direction). However, we do not need positive recurrence of the modified Markov Chain to arrive at the conclusion that $\mathbb{P} (\widetilde{X}_n = i) \geq c > 0$ for a positive proportion of indices $n$, where the tilde refers to the state evolution in the new Markov Chain. Indeed, since the old Markov Chain (before removing transition $(i,j)$) had a strictly positive probability to reach state $i$ from any state $k$ within $\left| S \right|$ steps ( $ \left| S \right|$ = number of states in the Markov Chain), the same applies to the new Markov Chain (this can be verified easily). It follows that $\mathbb{P} (\widetilde{X}_n = i) \geq c > 0$ for a positive proportion of indices $n$. But, in any case, these are only minor details. December 15th 2009, 07:04 AM (2) Positive recurrence of the modified Markov Chain: Unfortunately, I believe that the modified Markov Chain is not necessarily positive recurrent. To see this, imagine that transition $(i,j)$ is the only link from a set of states $S_1$ to a disjoint set of states $S_2$ (while there are other links in the other direction). I can't see how this would be a counter-example to the statement recalled above (cf. what I put between parentheses) : since we only consider the subset of "still accessible states", in your case we would drop $S_2$ in the modified Markov chain, and the question is: is the Markov chain on $S_1$ still positive recurrent? Another way to ask the question is: given a transient, or null recurrent Markov chain on $S_1$, is it possible to turn it into a positive recurrent Markov chain by glueing it to a new subset of states $S_2$ with only one transition from $S_1$ to $S_2$ (the transition i,j) and possibly several transitions from $S_2$ back to $S_1$? If $S_1$ is transient, the enlarged Markov chain is still transient (starting from i, the probability of no-return is only decreased by a constant positive factor due to the new transition). If $S_1$ is null-recurrent,... I would bet the enlarged Markov chain still is, but I can't find a proof. However, we do not need positive recurrence of the modified Markov Chain to arrive at the conclusion that $\mathbb{P} (\widetilde{X}_n = i) \geq c > 0$ for a positive proportion of indices $n$, where the tilde refers to the state evolution in the new Markov Chain. 
Indeed, since the old Markov Chain (before removing transition $(i,j)$) had a strictly positive probability to reach state $i$ from any state $k$ within $\left| S \right|$ steps ( $\left| S \right|$ = number of states in the Markov Chain), the same applies to the new Markov Chain (this can be verified easily). It follows that $\mathbb{P} (\widetilde{X}_n = i) \geq c > 0$ for a positive proportion of indices $n$. Wait, wouldn't you be assuming that there are finitely many states in the Markov chain?... If such is the case, then positive recurrence is automatic (like I wrote in the previous post), on the subset of the still accessible states. The problem only arises for Markov chains with infinitely many states. December 15th 2009, 07:22 AM I was indeed assuming a finite Markov Chain (sorry for not saying that at the beginning). I guess I misinterpreted the statement: "If the state space was finite, of course it still is finite (same state space), so that the Markov chain still is positive recurrent." and thought that the restriction to still accessible states in the next statement "In the general case, the Markov chain (restricted to the still accessible states) still is positive recurrent as well, I guess, but I can't think of an argument so I let you figure that out if you want." was limited to infinite Markov Chains. Given your clarification, I completely agree with you.
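If it helps to see the claim numerically, here is a minimal Monte Carlo sketch in Python; the two-state chain, its transition probabilities, and the trial counts are invented for illustration and are not taken from the thread. It estimates $P(n_{ij}=0)$ for growing $n$, and the estimates shrink at a roughly geometric rate, as the proof above predicts.

import random

# Toy chain for illustration only: states {0, 1}, watching the transition i=0 -> j=1.
P = {0: [(0, 0.7), (1, 0.3)],   # p_ij = 0.3 > 0
     1: [(0, 0.6), (1, 0.4)]}
i, j = 0, 1

def step(state):
    """Sample the next state from the row P[state]."""
    r, acc = random.random(), 0.0
    for nxt, p in P[state]:
        acc += p
        if r < acc:
            return nxt
    return P[state][-1][0]

def prob_no_ij(n, trials=20000):
    """Estimate P(n_ij = 0), i.e. no i->j transition among n observed steps."""
    misses = 0
    for _ in range(trials):
        s, seen = i, False
        for _ in range(n):
            t = step(s)
            if s == i and t == j:
                seen = True
                break
            s = t
        misses += not seen
    return misses / trials

for n in (5, 10, 20, 40):
    print(n, prob_no_ij(n))   # successive estimates drop roughly geometrically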
{"url":"http://mathhelpforum.com/advanced-statistics/119850-observed-frequencies-transitions-markov-chain-print.html","timestamp":"2014-04-20T21:44:03Z","content_type":null,"content_length":"26777","record_id":"<urn:uuid:e5b2a1a1-93ac-4064-a67b-642661549a8c>","cc-path":"CC-MAIN-2014-15/segments/1397609539230.18/warc/CC-MAIN-20140416005219-00575-ip-10-147-4-33.ec2.internal.warc.gz"}
Average of twin prime pairs is Abundant number, except for 4 and 6. Any prime is a factor of average of twin prime pair

I have an intuition that the average of twin prime pairs is always an abundant number, except for 4 and 6. For example: 12 < 1+2+3+4+6 = 16, and 18 < 1+2+3+6+9 = 21. But I can't prove this. Could you give me any good idea?

2010-08-22: I think that any prime is a factor of the average of a twin prime pair. Do you agree with me?

Questions about things like abundant numbers (which fall under the category of "recreational mathematics") are probably best asked at math.stackexchange.com. – Andy Putman Aug 20 '10 at 2:56

What are "things like abundant numbers"? Specifically, how do you differentiate between important concepts and recreational ones? – muad Aug 20 '10 at 11:09

I agree this is a genuine mathematical question, but there is a general instinct to close a question like this. I should explain the reason. This question has been around for over 2000 years. The Ancient Greeks probably thought about this question. I personally don't know the answer, but one of the two following possibilities holds: 1) This question can be answered by elementary methods known before Euler's time. In this case, this is not a research level question. – Alexander Woo Aug 20 '10 at 21:02

2) This question requires algebraic or analytic number theory methods. (For example, a proof might require use of the Riemann zeta function.) In this case, the poser of the question has indicated no familiarity with any of these methods. (Arguably, this still might not make it a research level question, depending on the depth of the methods required.) – Alexander Woo Aug 20 '10 at 21:07

@a-boy, not much point in tacking a new question on to one that has been closed, as no one can post an answer to a closed question. Then again, there may not be much point in posting it as a new question, either, as an affirmative answer would settle the twin primes conjecture, a negative answer is unlikely, and whether anyone agrees with you or not is not what MO is about. – Gerry Myerson Aug 22 '10 at 12:43

Closed as off topic by Akhil Mathew, Andy Putman, Qiaochu Yuan, Robin Chapman, S. Carnahan♦ Aug 22 '10 at 7:30.

Accepted answer: Prove the sum is always a multiple of 6, then prove that multiples of 6 are abundant.

Thank you! I see. – user8140 Aug 20 '10 at 2:40
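One way to flesh out the hint in the accepted answer: write the twin pair as $p$ and $p+2$ and let $m = p+1$ be their average. For $p > 5$, $m$ is even, and since exactly one of $p, p+1, p+2$ is divisible by $3$ and neither prime can be, $3 \mid m$; hence $6 \mid m$, and the smallest such pair $(11, 13)$ already gives $m \ge 12$. Next, any multiple of six $m = 6k$ with $k \ge 2$ is abundant, because $1, k, 2k, 3k$ are distinct proper divisors and $1 + k + 2k + 3k = 6k + 1 > m$. The only twin-prime averages not covered are $4$ (from $3, 5$) and $6$ (from $5, 7$), matching the stated exceptions.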
{"url":"http://mathoverflow.net/questions/36154/average-of-twin-prime-pairs-is-abundant-number-except-for-4-and-6-any-prime-is","timestamp":"2014-04-24T00:23:32Z","content_type":null,"content_length":"52250","record_id":"<urn:uuid:8029723e-4d59-49cf-b445-a81d195d8d81>","cc-path":"CC-MAIN-2014-15/segments/1398223203841.5/warc/CC-MAIN-20140423032003-00170-ip-10-147-4-33.ec2.internal.warc.gz"}
Assignment #1 Solutions

Below is a small sample set of documents:

McGill - MATH - 263 Stephanie Campbell due 09/12/2010 at 11:55pm EDT. Assignment 1, MATH263, Fall 2010. You may attempt any problem an unlimited number of times. 1. (1 pt) Match each of the following differential equations with a solution from the list below. 1. 2. 3. 4. y +y
(1 pt) Find the indicated coefcients of McGill - MATH - 263 HeyI just did a mistake while rewritint the formuluaOn the first pageAfter 'Know/Be familiar' The First Formula IS FALSEIt should beuc* f(t-c) = F(s) *e^(-cs) NOT F(s-c)Thks :)Steve McGill - MATH - 263 ORDINARY DIFFERENTIAL EQUATIONS FOR ENGINEERS THE LECTURE NOTES FOR MATH263 (2010-Fall)ORDINARY DIFFERENTIAL EQUATIONS FOR ENGINEERSJIAN-JUN XUDepartment of Mathematics and Statistics, McGill UniversityKluwer Academic Publishers Boston/Dordrecht/Londo McGill - MATH - 263 McGill University Math 263: Differential Equations for EngineersCHAPTER 1:INTRODUCTION1 Denitions and Basic Concepts1.1 Ordinary Differential Equation (ODE)An equation involving the derivatives of an unknown function y of a single variable x over an McGill - MATH - 263 McGill University Math 263: Differential Equations for EngineersCHAPTER 2:FIRST ORDER DIFFERENTIAL EQUATIONS In this lecture we will treat linear and separable rst order ODEs.1 Linear EquationThe general rst order ODE has the form F (x, y, y ) = where McGill - MATH - 263 McGill University Math 263: Differential Equations for EngineersCHAPTER 3: N-TH ORDER DIFFERENTIAL EQUATIONS (II)1 Solutions for Equations with Constants CoefcientsIn what follows, we shall rst focus on the linear equations with constant coefcients:L( McGill - MATH - 263 McGill University Math 263: Differential Equations for EngineersCHAPTER 3:N-TH ORDER DIFFERENTIAL EQUATIONS (III)1 Finding a Particular Solution for Inhomogeneous EquationIn this lecture we shall discuss the methods for producing a particular solution McGill - MATH - 263 McGill University Math 263: Differential Equations for EngineersCHAPTER 3:N-TH ORDER DIFFERENTIAL EQUATIONS (IV)1 Solutions for Equations with Variable CoefcientsIn this lecture we will give a few techniques for solving certain linear differential equ City UK - ECONOMICS - 100 Click to edit Master subtitle style12/4/1011I ntr oduction:After comprehensive market research by both primary and secondary means we have compiled a report from the findings of a SWOT/Situation analysis which will show that Chabahar is a port worth m McGill - MATH - 263 McGill University Math 263: Differential Equations and Linear AlgebraCHAPTER 4:LAPLACE TRANSFORMS (I)1 IntroductionWe begin our study of the Laplace Transform with a motivating example: Solve the differential equation y + y = f (t) =with ICs:0, 0 t McGill - MATH - 263 McGill University Math 263: Differential Equations and Linear AlgebraCHAPTER 4:LAPLACE TRANSFORMS (II)1 Solve IVP of DEs with Laplace Transform MethodIn this lecture we will, by using examples, show how to use Laplace transforms in solving differentia McGill - MATH - 263 McGill University Math 263: Differential Equations and Linear AlgebraCHAPTER 4:LAPLACE TRANSFORMS (III)1 Further Studies of Laplace Transform1.11.1.1Step FunctionDenitioncfw_ uc (t) =1.1.20 1t &lt; c, t c.Some basic operations with the step funct McGill - MATH - 263 4 4 5 2 2 6 4 5 3 = 0 Let y = xv 4()4 5 2 ()2 6 4 5 ()3 = 0 4 4 4 5 4 2 6 4 5 4 3 = 0 4 4 5 2 6 5 3 = 0 () = + = + 4 4 5 2 6 5 3 =0 + = 0 5 4 = 0 4 4 5 2 6 5 3 4 4 5 2 6 5 3 4 5 2 6 = 5 35 3 1 = 4 5 2 6 5 3 = McGill - MATH - 263 McGill - MATH - 263 UC Davis - AGRONOMY - PLB174 MTBCH001.QXD.130084614/12/043:08 PMPage 1PART1INTRODUCTION: MARKETS AND PRICESPART 1surveys the scope of microeconomics and introduces some basic concepts and tools. 
Chapter 1 discusses the range of problems that microeconomics addresses, and t McGill - MATH - 263 McGill - MATH - 263 So, exact equations will be of the form: M(x,y)dx+N(x,y)dy=0 To test if the equation is exact, check ifIf the equation is not exact, you need to multiply by integrating factor (x,y)so, now you need to expand the equationExanding this gives:Separate th McGill - ENG - 263 McGill - MATH - 263 AquickreviewFirstOrderODEsLinearODE isoftheformSolutioncanbederivedusingintegratingfactormethod orvariationofparametermethodSeparableEquation:FirstOrderODEs(cont) McGill - MATH - 263 UC Davis - AGRONOMY - PLB174 MTBCH002.QXD.130084614/12/043:24 PMPage 19CHAPTER2The Basics of Supply and DemandCHAPTER OUTLINE2.1 Supply and Demand 20 2.2 The Market Mechanism 23 2.3 Changes in Market Equilibrium 24 2.4 Elasticities of Supply and Demand 32 2.5 Short-Run UC Davis - AGRONOMY - PLB174 MTBCH003.QXD.130084614/12/043:32 PMPage 61PART2PRODUCERS, CONSUMERS, AND COMPETITIVE MARKETSPART 2presents the theoretical core of microeconomics. Chapters 3 and 4 explain the principles underlying consumer demand. We see how consumers make con UC Davis - AGRONOMY - PLB174 MTBCH004.QXD.130084614/12/043:51 PMPage 107CHAPTER4Individual and Market DemandCHAPTER OUTLINEC1.hapter 3 laid the foundation for the theory of consumer demand. We discussed the nature of consumer preferences and saw how, given budget cons UC Davis - AGRONOMY - PLB174 BGIA D C V O T OTR NG I H C NNG NGHI P H N INguy n M nh Kh i (Ch bin) Nguy n Th Bch Thu , inh Sn QuangGIO TRNHB O QU N NNG S NH N i, 20051L I NI U Cy tr ng ni ring v th c v t xanh ni chung ng gp ph n quan tr ng trong vi c cung c p th c ph m cho co UC Davis - AGRONOMY - PLB174 Color profile: Disabled Composite Default screenChapter 6Agronomy and Cropping SystemsDietrich LeihnerResearch, Extension and Training Division, Sustainable Development Department, FAO, Via delle Terme di Caracalla, 00100 Rome, ItalyIntroductionCass UC Davis - AGRONOMY - PLB174 FACULTY OF ECONOMICS AND RURAL DEVELOPMENT ARE 100 A THEORY OF PRODUCTION AND CONSUMPTION (INTERMEDIATE MICROECONOMICS) Pham Van Hung (pvhung@hua.edu.vn); Nguyen Huu Nhuan and Nguyen Thi Duong Nga GENERAL INFORMATION Course Description This course is desi UC Davis - AGRONOMY - PLB174 Fruit VegetablesMonocots Dicots Cucurbitaceae Cucumber Zucchini Honeydew Muskmelon Winter squash Leguminosae Lima bean Kidney bean Pea Broad bean Mung bean Malvaceae Okra Solanaceae Pepper, bell Pepper, chili Tomato Eggplant Sweet corn Zea maysCucurbita UC Davis - AGRONOMY - PLB174 10/28/2010LeafyandsucculentvegetablesPostharvest physiologyand handlinglaboratory Hanoi,2010Characteristicsandpostharvest considerationsHarvest Byhand Sharptools Bymachineforlightlyprocessedproducts110/28/2010Packing Infield Boxesdependoncoolingm UC Davis - AGRONOMY - PLB174 10/28/2010Postharvest biology and technology for floricultural crops floricultural cropsPostharvest biology and handling laboratory Hanoi, 2010Factors affecting the life of cut flowers &amp; potted plantsSell better cultivarsTemperature management25 yel UC Davis - AGRONOMY - PLB174 PLS 172 Lecture 31 of 10RESPIRATIONMikal E. 
Saltveit, Department of Plant Sciences, UC DavisTABLE OF CONTENTS Topic Introduction Respiration Basic Mechanism Respiration Overall reactions of respiration Measurement of respiration Measurement of gas exc UC Davis - AGRONOMY - PLB174 PLB 172 Lecture #10Compositional Changes: carbohydrates, organic acids, pectin, lipidsFlorence Negre-Zakharov Plant Sciences, UC Davis fnegre@ucdavis.eduCO2Calvin Cycle SugarsSugarsGlycolysis1 (photosynthetic assimilates) Amino Acids Carbohydrates UC Davis - AGRONOMY - PLB174 PLB 172 Lecture #9Compositional Changes: amino acids, proteins, pigments and aromaFlorence Negre-Zakharov Plant Sciences, UC Davis fnegre@ucdavis.edu(photosynthetic assimilates) Amino Acids Carbohydrates Organic Acids PEP Phenylalanine Pyruvate Acetyl- UC Davis - AGRONOMY - PLB174 Cropping system assignmentInstructor: Trn Danh Thn Student: Nguyn Trng Qu ID: 520521 Subject: From a case study of cropping system on your country, please analyze inter-relationship between its components as well as to other systems in the agriecosystem UC Davis - AGRONOMY - PLB174 Physiological disordersPostharvest physiology and handling laboratory Hanoi University of Agriculture, 2010Physiological disorders Non-pathogenic disorders resulting from abnormal conditions during production and marketing Handling damage and effects o UC Davis - AGRONOMY - PLB174 Lab 2RespirationGoals1. To observe the range of respirations of different perishable commodities and relate respiration to relative perishability 2. To determine respiratory quotient for different commodities 3. To determine Q10 for a commodity, and gr UC Davis - AGRONOMY - PLB174 PLS 172 Lecture 11 of 12POSTHARVEST LOSSES OF HORTICULTURAL COMMODITIESMikal E. Saltveit, Dept. Plant Sciences, UC DavisTABLE OF CONTENTSTopic Scope of postharvest physiology What is postharvest physiology Historical perspective The global picture Po UC Davis - AGRONOMY - PLB174 PLB 1721 of 8PHYSIOLOGICAL DISORDERS OF FRESH HORTICULTURAL CROPSM.S. Reid, A.A Kader, M.E. Saltveit University of California, Davis 95616 Before and after harvest, the quality of perishable commodities can be reduced by a variety of external or intern UC Davis - AGRONOMY - PLB174 Initial Cooling COOLING METHODJim Thompson and Michael ReidPostharvest physiology and Handling physiology and Handling Laboratory Hanoi, October 2010 Conduction Convection Radiation Of little importance at these temperaturesCOOLING MEDIUM Air Conve UC Davis - AGRONOMY - PLB174 Quality of horticultural cropsPostharvest physiology &amp; Handling LaboratoryHanoi Agricultural University October, 2010Florence Negre &amp; Michael Reid UC Davisfnegre@ucdavis.eduQuality Appearance Color, gloss, feel Shape Defects and blemishes Nutritiv UC Davis - AGRONOMY - PLB174 STT 1. 2. 3. 4. 5. 6. 7. 8. 9. 10. 11. 12. 13. 14. 15. 16. 17.L p KDNN 55 KE 55 A QTKD 55 B KHCT53T CNSH 54B CNSH55B QTKD53T KTNN52B KTNN54B KT55B TY53B CNTP55A QL53B QL55C QL55E QL54D MT53CNgi th hin Nguyn Th Nga Nguyn Vn Minh Nguyn Vn Ninh o Th Kim Oa UC Davis - AGRONOMY - PLB174 PLS 172 Lecture 31 of 8RESPIRATION: THE PHYSIOLOGIST'S YARDSTICKMikal E. Saltveit and Michael S. Reid, University of California, Davis 95616TABLE OF CONTENTS Topic Introduction Internal factors Environmental factors Physical stress References c. Stage UC Davis - AGRONOMY - PLB174 Most slides in this presentation are from Trevor SuslowFood Safety Considerations for Fruits and Vegetables.Perspective of a non-microbiologistMarita Cantwell Dept. 
Plant Sciences, UCDavis micantwell@ucdavis.eduhttp:/postharvest.ucdavis.edu http:/post UC Davis - AGRONOMY - PLB174 The effect of cropping systems on production and environment CROPSYS 2007-2010ICROFSInternational Centre for Research in Organic Food SystemsHarvest of cereals with a well-developed undersown clover as catch crop (photo: Henning C. Thomsen)The effect UC Davis - AGRONOMY - PLB174 Opportunities in AgricultureCONTENTS WHY DIVERSIFY? 2TO SURVIVEPROFILE: THEY DIVERSIFIED 3Diversifying Cropping SystemsALTERNATIVE CROPS 4 PROFILE: DIVERSIFIED NORTH DAKOTAN WORKS WITH MOTHER NATURE 9 PROTECT NATURAL RESOURCES, RENEW PROFITS 10 AGROF UC Davis - AGRONOMY - PLB174 Lab 3EthyleneGoal To observe the effects of ethylene in fruit ripening, and to see the action of inhibitors CO2 1-MCPMaterials Ethylene treatment system ethephon at an appropriate dilution Samples of the different commodities (15 of each) Banana, UC Davis - AGRONOMY - PLB174 Maturation &amp; Maturity Indices, Standardization &amp; InspectionMaturity Definition of horticultural maturity Stage at which a commodity has reached a sufficient stage of development that after harvesting and postharvest handling (including (i ripening, if r UC Davis - AGRONOMY - PLB174 PLS 172 Lecture 21 of 11MORPHOLOGY, STRUCTURE, GROWTH AND DEVELOPMENTMikal E. Saltveit, Dept. Plant Sciences, UC DavisTABLE OF CONTENTSTopic Methods of classification Cellular structure Anatomy Morphology Growth and development Horticultural vs. phys UC Davis - AGRONOMY - PLB174 PLB 172 Lecture #9Composition and Nutritive Value of Fruits and VegetablesFlorence Negre-Zakharov Plant Sciences, UC Davis fnegre@ucdavis.eduComposition of Fruits and Vegetables1. Use as Human Food (Nutrition, Quality and Safety Considerations) 2. Pre UC Davis - AGRONOMY - PLB174 The biology of ethylene production and action in fruitsEthylene - an important factorUseful: Accelerates ripening Causes abscissionA problem: Accelerates ripening Accelerates senescence Causes abscissionWhat is ethylene? C2H4 Verysimple molecule A UC Davis - AGRONOMY - PLB174 PLB 1721 of 10COMPOSITION AND COMPOSITIONAL CHANGESAdel A. Kader, Mikal Saltveit Department of Plant Sciences, University of California, Davis 95616TABLE OF CONTENTSTOPIC Page 1. Introduction 1 2. Carbohydrates 2 Sugars, organic acids, polysaccharide UC Davis - AGRONOMY - PLB174 Productphotographin field,orinmarketLaboratoryreportProductnameStructure Photographofproductanddrawingor photographofcrosssectionwithtissues labelledRespiration Reportrespirationat2,14,26,Q10(10 14),andRQ(mean) BriefdescriptionWaterloss Ifyouuseda
{"url":"http://www.coursehero.com/file/6038289/Assignment-1-Solutions/","timestamp":"2014-04-20T05:45:00Z","content_type":null,"content_length":"55730","record_id":"<urn:uuid:d74efb8d-a620-4137-aff1-afeeebf1a507>","cc-path":"CC-MAIN-2014-15/segments/1398223207985.17/warc/CC-MAIN-20140423032007-00194-ip-10-147-4-33.ec2.internal.warc.gz"}
Electricity Price Forecasting with Neural Networks This example demonstrates building and validating a short term electricity price forecasting model with MATLAB using Neural Networks. The models take into account multiple sources of information including fuel prices, temperatures and holidays in constructing a day-ahead price forecaster. Import Weather, Load and Price Data The data set used is a table of historical hourly loads, prices and temperature observations from the New England ISO for the years 2004 to 2008. The weather information includes the dry bulb temperature and the dew point. This data set is imported from an Access database using the auto-generated function fetchDBPriceData. If using the MATLAB Central File Exchange script, load the MAT-files from the Data folder and skip to line 74. addpath ..\Util data = fetchDBPriceData('2004-01-01', '2008-12-31'); Import list of holidays A list of New England holidays that span the historical date range is imported from an Excel spreadsheet [num, text] = xlsread('..\Data\Holidays.xls'); holidays = text(2:end,1); Generate Predictor Matrix The function genPredictors generates the predictor variables used as inputs for the model. For short-term forecasting these include • Dry bulb temperature • Dew point • Hour of day • Day of the week • A flag indicating if it is a holiday/weekend • System load • Previous day's average load • Load from the same hour the previous day • Load from the same hour and same day from the previous week • Previous day's average price • Price from the same hour the previous day • Price from the same hour and same day from the previous week • Previous day's natural gas price • Previous week's average natural gas price If the goal is medium-term or long-term price forecasting, only the inputs hour of day, day of week, time of year and holidays can be used deterministically. The weather/price information would need to be specified as an average or a distribution % Select forecast horizon term = 'short'; [X, dates, labels] = genPredictors(data, term, holidays); Split the dataset to create a Training and Test set The dataset is divided into two sets, a training set which includes data from 2004 to 2007 and a test set with data from 2008. The training set is used for building the model (estimating its parameters). The test set is used only for forecasting to test the performance of the model on out-of-sample data. % Interpolate missing values ind = data.ElecPrice==0; data.ElecPrice(ind) = interp1(find(~ind), data.ElecPrice(~ind), find(ind)); % Create training set trainInd = data.NumDate < datenum('2008-01-01'); trainX = X(trainInd,:); trainY = data.ElecPrice(trainInd); % Create test set and save for later testInd = data.NumDate >= datenum('2008-01-01'); testX = X(testInd,:); testY = data.ElecPrice(testInd); testDates = dates(testInd); save Data\testSet testDates testX testY clear X data trainInd testInd term holidays dates ans num text Build the Price Forecasting Model The next few cells builds a Neural Network regression model for day-ahead price forecasting given the training data. This model is then used on the test data to validate its accuracy. Initialize and Train Network Initialize a default network of two layers with 20 neurons. Use the "mean absolute error" (MAE) performance metric. Then, train the network with the default Levenburg-Marquardt algorithm. For efficiency a pre-trained network is loaded unless a retrain is specifically enforced. 
reTrain = false; if reTrain || ~exist('Models\NNModel.mat', 'file') net = newfit(trainX', trainY', 20); net.performFcn = 'mae'; net = train(net, trainX', trainY'); save Models\NNModel.mat net load Models\NNModel.mat Forecast using Neural Network Model Once the model is built, perform a forecast on the independent test set. load Data\testSet forecastPrice = sim(net, testX')'; Compare Forecast Price and Actual Price Create a plot to compare the actual price and the predicted price as well as compute the forecast error. In addition to the visualization, quantify the performance of the forecaster using metrics such as mean absolute error (MAE), mean absolute percent error (MAPE) and daily peak forecast error. err = testY-forecastPrice; fitPlot(testDates, [testY forecastPrice], err); errpct = abs(err)./testY*100; fL = reshape(forecastPrice, 24, length(forecastPrice)/24)'; tY = reshape(testY, 24, length(testY)/24)'; peakerrpct = abs(max(tY,[],2) - max(fL,[],2))./max(tY,[],2) * 100; MAE = mean(abs(err)); MAPE = mean(errpct(~isinf(errpct))); fprintf('Mean Absolute Percent Error (MAPE): %0.2f%% \nMean Absolute Error (MAE): %0.2f MWh\nDaily Peak MAPE: %0.2f%%\n',... MAPE, MAE, mean(peakerrpct)) Mean Absolute Percent Error (MAPE): 6.57% Mean Absolute Error (MAE): 5.21 MWh Daily Peak MAPE: 5.61% Examine Distribution of Errors In addition to reporting scalar error metrics such as MAE and MAPE, the plot of the distribution of the error and absolute error can help build intuition around the performance of the forecaster subplot(3,1,1); hist(err,100); title('Error distribution'); subplot(3,1,2); hist(abs(err),100); title('Absolute error distribution'); line([MAE MAE], ylim); legend('Errors', 'MAE'); subplot(3,1,3); hist(errpct,100); title('Absolute percent error distribution'); line([MAPE MAPE], ylim); legend('Errors', 'MAPE'); Group Analysis of Errors To get further insight into the performance of the forecaster, we can visualize the percent forecast errors by hour of day, day of week and month of the year [yr, mo, da, hr] = datevec(testDates); % By Hour boxplot(errpct, hr+1); xlabel('Hour'); ylabel('Percent Error Statistics'); title('Breakdown of forecast error statistics by hour'); % By Weekday boxplot(errpct, weekday(floor(testDates)), 'labels', {'Sun','Mon','Tue','Wed','Thu','Fri','Sat'}); ylabel('Percent Error Statistics'); title('Breakdown of forecast error statistics by weekday'); % By Month boxplot(errpct, datestr(testDates,'mmm')); ylabel('Percent Error Statistics'); title('Breakdown of forecast error statistics by month'); Generate Weekly Charts Create a comparison of forecast and actual price for every week in the test set. generateCharts = true; if generateCharts step = 168*2; for i = 0:step:length(testDates)-step fitPlot(testDates(i+1:i+step), [testY(i+1:i+step) forecastPrice(i+1:i+step)], err(i+1:i+step)); title(sprintf('MAPE: %0.2f%%', mean(errpct(i+1:i+step))));
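For reference, the error statistics reported above follow the usual definitions (with y_t the observed price and yhat_t the forecast over N test hours): MAE = (1/N) * sum |y_t - yhat_t|, and MAPE = (100/N) * sum |y_t - yhat_t| / y_t. The daily peak MAPE applies the same percentage error to each day's maximum price rather than to every individual hour, which is how the peakerrpct variable is computed in the code above.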
{"url":"http://www.mathworks.com/matlabcentral/fileexchange/28684-electricity-load-and-price-forecasting-webinar-case-study/content/Electricity%20Load%20&%20Price%20Forecasting/Price/html/PriceScriptNN.html","timestamp":"2014-04-20T19:21:24Z","content_type":null,"content_length":"43319","record_id":"<urn:uuid:c792419a-abff-476a-a030-816c1ebd2fc4>","cc-path":"CC-MAIN-2014-15/segments/1398223210034.18/warc/CC-MAIN-20140423032010-00212-ip-10-147-4-33.ec2.internal.warc.gz"}
Consider the following circuit, in which two diodes with reverse saturation currents I_S1 and I_S2 are placed in parallel.
(a) Prove that the parallel combination operates as an exponential device (its I-V characteristic is an exponential function).
(b) If the total current is I_tot, determine the current carried by each diode.
Electrical Engineering
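The question is posted here without a worked answer, so a sketch of the standard reasoning may help; it assumes both diodes follow the ideal-diode equation and, being in parallel, share the same junction voltage V. Each diode then carries I_k = I_Sk (e^(V/V_T) - 1), so the total current is I_tot = I_1 + I_2 = (I_S1 + I_S2)(e^(V/V_T) - 1). This is again a single exponential I-V characteristic, just with the combined saturation current I_S1 + I_S2, which settles part (a). For part (b), the common factor (e^(V/V_T) - 1) equals I_tot / (I_S1 + I_S2), so the split is I_1 = I_tot * I_S1 / (I_S1 + I_S2) and I_2 = I_tot * I_S2 / (I_S1 + I_S2).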
{"url":"http://www.chegg.com/homework-help/questions-and-answers/consider-following-circuit-two-diodes-reverse-saturation-currents-is1-is2-placed-parallel--q4263004","timestamp":"2014-04-19T12:05:16Z","content_type":null,"content_length":"21990","record_id":"<urn:uuid:55ac0e92-bf45-40b3-b516-278ba3889239>","cc-path":"CC-MAIN-2014-15/segments/1397609537186.46/warc/CC-MAIN-20140416005217-00356-ip-10-147-4-33.ec2.internal.warc.gz"}
Northern Liberties, Philadelphia, PA Havertown, PA 19083 PhD in Physics -Tutoring in Physics, Math, Engineering, SAT/ACT ...!! )) PARENTS: Bring the full weight of a PhD, as tutor, and student advocate. Hello Students! If you need help with mathematics, physics, or engineering, I'd be glad to help out. With dedication, every student succeeds, so don’t despair! Learning new disciplines... Offering 10+ subjects including algebra 1 and algebra 2
{"url":"http://www.wyzant.com/Northern_Liberties_Philadelphia_PA_algebra_tutors.aspx","timestamp":"2014-04-24T08:52:02Z","content_type":null,"content_length":"60522","record_id":"<urn:uuid:617ab25a-822b-4226-9dc1-cebec017b90b>","cc-path":"CC-MAIN-2014-15/segments/1398223206118.10/warc/CC-MAIN-20140423032006-00288-ip-10-147-4-33.ec2.internal.warc.gz"}
concentric circle problem

April 22nd 2013, 03:31 PM
concentric circle problem
I have no idea how to solve this. I love geometry, but I've found that I'm not very naturally good at it. Please help! The circles in the figure I have drawn out are concentric. The chord AB is tangent to the inner circle and has a length of 12 cm. What is the area of the non-shaded region? (Area of big circle - Area of small circle) Attachment 28090

April 22nd 2013, 03:46 PM
Re: concentric circle problem
As posed, I fear there is no unique answer to this question. Look at this webpage. I think that you need to know the radius of one of those two circles. Or some other term from that webpage.

April 22nd 2013, 05:35 PM
Re: concentric circle problem
Hello, aaronrpoole! This is a classic problem . . . with a surprising punchline.

Quote: The circles in the figure I have drawn are concentric. The chord AB is tangent to the inner circle and has a length of 12 cm. What is the area of the non-shaded region? (Area of big circle - Area of small circle)

[Diagram: chord AB of the outer circle is tangent to the inner circle at its midpoint C; O is the common center, with OC = r, OB = R, and CB = 6.]

$O$ is the center of the circles. $C$ is the midpoint of chord $AB\!:\;CB = 6$
Let $R = OB$, the radius of the large circle.
Let $r = OC$, the radius of the small circle.
From right triangle $BOC\!:\;r^2 + 6^2 \,=\,R^2 \quad\Rightarrow\quad R^2-r^2 \,=\,36$ .[1]
The area of the large circle is: $\pi R^2$
The area of the small circle is: $\pi r^2$
The area of the ring is: $A \:=\:\pi R^2 - \pi r^2 \:=\:\pi(R^2-r^2)$
Substitute [1]: $A \:=\:\pi(36) \:=\:36\pi$
Surprise! We didn't need to know the two radii. The small circle could be a golfball or the Earth. The area of the ring is constant!
{"url":"http://mathhelpforum.com/geometry/218018-concentric-circle-problem-print.html","timestamp":"2014-04-17T10:58:58Z","content_type":null,"content_length":"10091","record_id":"<urn:uuid:da485151-e743-464c-9278-17a679ad205e>","cc-path":"CC-MAIN-2014-15/segments/1397609527423.39/warc/CC-MAIN-20140416005207-00203-ip-10-147-4-33.ec2.internal.warc.gz"}
the first resource for mathematics Asymptotic analysis and solution of a finite-horizon ${H}_{\infty }$ control problem for singularly-perturbed linear systems with small state delay. (English) Zbl 1036.93013 A finite-horizon ${H}_{\infty }$ state-feedback control problem for singularly-perturbed linear time-dependent systems with a small state delay is considered. Two approaches to the asymptotic analysis and solution of this problem are proposed. In the first one, an asymptotic solution of the singularly perturbed system of functional-differential equations of Riccati type, associated with the original ${H}_{\infty }$ problem via sufficient conditions on the existence of its solution, is constructed. Based on this asymptotic solution, conditions for the existence of a solution of the original ${H}_{\infty }$ problem, independent of the small parameter of the singular perturbations, are derived. A simplified controller problem for all sufficiently small values of this parameter is obtained. In the second approach, the original ${H}_{\infty }$ problem is decomposed into two lower-dimensional parameter-independent ${H}_{\infty }$ subproblems, the reduced-order (slow) and the boundary-layer (fast) subproblems; controllers solving these subproblems are constructed. Based on these controllers, a composite controller is derived, which solves the original ${H}_{\infty }$ problem for all sufficiently small values of the singular perturbation parameter. An illustrative example is presented. 93B36 ${H}^{\infty }$-control 93C70 Time-scale analysis and singular perturbations 93C23 Systems governed by functional-differential equations 93B11 System structure simplification
{"url":"http://zbmath.org/?q=an:1036.93013","timestamp":"2014-04-17T13:10:18Z","content_type":null,"content_length":"23462","record_id":"<urn:uuid:78a65516-087c-44be-b8c6-f56ab7bda744>","cc-path":"CC-MAIN-2014-15/segments/1398223206647.11/warc/CC-MAIN-20140423032006-00022-ip-10-147-4-33.ec2.internal.warc.gz"}
Luzerne, PA Math Tutor Find a Luzerne, PA Math Tutor Hi, my name is Zachary. I am a senior at Bloomsburg University majoring in Chemistry. Some of my hobbies include reading, writing, and playing music. 18 Subjects: including algebra 2, geometry, precalculus, trigonometry ...I have done both in the past. To me, it is personally rewarding to be able to help a student by using my skills and knowledge to enhance the learning progress and see each student achieve measurable results. I strive to achieve just that in every student I work with. 80 Subjects: including algebra 2, biology, calculus, chemistry ...I am willing to meet students either at their homes or in public places such as library meeting rooms. Group tutoring rates are also available.I do public speaking for a living. I give at least one public speaking engagement a week and teach a 6-week class twice a week. 10 Subjects: including geometry, precalculus, algebra 1, trigonometry ...I've tutored peers in the past, children with ADD and ADHD and have been able to bring C and D students to A's. Since I am still progressing through my education, I'm more aware of the hurdles that kids go through and the standards in which schools have and in some cases, should have but fail to meet. I strive for excellence and try to instill that in the young adults that I help. 30 Subjects: including statistics, prealgebra, probability, ACT Math ...I am studying to become an Elementary Teacher. I am also in the process of completing my Professional Semester at an urban elementary school. I observed and taught a class of thirty students. 6 Subjects: including algebra 2, algebra 1, prealgebra, Microsoft Word Related Luzerne, PA Tutors Luzerne, PA Accounting Tutors Luzerne, PA ACT Tutors Luzerne, PA Algebra Tutors Luzerne, PA Algebra 2 Tutors Luzerne, PA Calculus Tutors Luzerne, PA Geometry Tutors Luzerne, PA Math Tutors Luzerne, PA Prealgebra Tutors Luzerne, PA Precalculus Tutors Luzerne, PA SAT Tutors Luzerne, PA SAT Math Tutors Luzerne, PA Science Tutors Luzerne, PA Statistics Tutors Luzerne, PA Trigonometry Tutors Nearby Cities With Math Tutor Ashley, PA Math Tutors Courtdale, PA Math Tutors Dallas, PA Math Tutors Edwardsville, PA Math Tutors Exeter, PA Math Tutors Forty Fort, PA Math Tutors Kingston, PA Math Tutors Larksville, PA Math Tutors Laurel Run, PA Math Tutors Pringle, PA Math Tutors Swoyersville, PA Math Tutors Trucksville, PA Math Tutors West Pittston, PA Math Tutors West Wyoming, PA Math Tutors Yatesville, PA Math Tutors
{"url":"http://www.purplemath.com/Luzerne_PA_Math_tutors.php","timestamp":"2014-04-16T07:58:22Z","content_type":null,"content_length":"23690","record_id":"<urn:uuid:9201f28a-a28f-4b87-bf51-ea2911350df4>","cc-path":"CC-MAIN-2014-15/segments/1397609521558.37/warc/CC-MAIN-20140416005201-00090-ip-10-147-4-33.ec2.internal.warc.gz"}
[SOLVED] Algebra Proof

January 7th 2009, 03:11 PM
[SOLVED] Algebra Proof
Consider the statement that may have occurred to early algebraists: A quadratic equation ax^2 + bx + c = 0 has a solution in the set of real numbers if and only if its discriminant is non-negative, i.e., b^2 - 4ac >= 0. Is this statement true or false? If true, give a proof and explain what a proof of a mathematical statement is. If false, give a counterexample and explain what a counterexample to a statement is. I have no idea how to do this question, please help.

Hint: look at the quadratic formula. Note where the discriminant is. What would happen if it were less than zero? When you come up with the reason we can talk about whether or not a counter-example exists and how to find one if it does.

What do you know about a, b, & c?

Consider the quadratic equation $ix^2 - x + i = 0$.

$ax^2+bx+c = 0$
$a(x^2+\frac{b}{a}x)+c = 0$
$a(x^2+\frac{b}{a}x + (\frac{b}{2a})^2-(\frac{b}{2a})^2)+c = 0$
$a((x+\frac{b}{2a})^2-(\frac{b}{2a})^2)+c = 0$
$a(x+\frac{b}{2a})^2-a(\frac{b}{2a})^2+c = 0$
$a(x+\frac{b}{2a})^2-\frac{b^2}{4a}+c = 0$
$a(x+\frac{b}{2a})^2 =\frac{b^2}{4a}-c$
$(x+\frac{b}{2a})^2=\frac{b^2}{4a^2}-\frac{c}{a}$
$(x+\frac{b}{2a})^2 =\frac{b^2}{4a^2}-\frac{4ac}{4a^2}$
$x+\frac{b}{2a} =\pm \sqrt{\frac{b^2-4ac}{4a^2}}$
$x+\frac{b}{2a} =\pm \frac{\sqrt{b^2-4ac}}{2a}$
$x = -\frac{b}{2a} \pm \frac{\sqrt{b^2-4ac}}{2a}$
$x = \frac{-b \pm \sqrt{b^2-4ac}}{2a}$
And you can see from this that there are only real solutions if $b^2-4ac \geq 0$.

Thanks for the help. I just used an example of a quadratic equation and showed that the negative cannot be square rooted. Is that enough for the proof?

ronaldo, a polynomial with two real roots implies that the discriminant is non-negative. However, a non-negative discriminant does not imply that the solutions are real. The proposition is then false. Plato gave the counterexample that proves this by contradiction: $b^2-4ac = 1 - 4i^2 = 1 + 4 = 5$, but the solutions are not real.

What are these examples? Please explain. What if, for example, A, B, C are real numbers where A cannot equal zero; would this statement then hold?
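To the closing question: yes, once a, b, c are restricted to real numbers with a not equal to zero, the statement holds in both directions. The completed-square work above gives $\left(x+\frac{b}{2a}\right)^2=\frac{b^2-4ac}{4a^2}$; if $b^2-4ac \geq 0$ the right side is non-negative and the quadratic formula produces real solutions, while if a real solution $x$ exists the left side is a square and hence non-negative, forcing $b^2-4ac \geq 0$. Plato's example $ix^2-x+i=0$ only escapes this because its coefficients are not real.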
{"url":"http://mathhelpforum.com/algebra/67228-solved-algebra-proof.html","timestamp":"2014-04-20T12:09:19Z","content_type":null,"content_length":"73515","record_id":"<urn:uuid:e0dc9fec-3160-413c-9ef4-9b3858c9089d>","cc-path":"CC-MAIN-2014-15/segments/1397609538423.10/warc/CC-MAIN-20140416005218-00104-ip-10-147-4-33.ec2.internal.warc.gz"}
Finding the Shear Modulus and the Bulk Modulus

For simplicity, consider a rectangular block of material with dimensions a_0, b_0, and c_0. Its volume V_0 is given by,

V_0 = a_0 b_0 c_0

When the block is loaded by stress, its volume will change since each dimension now includes a direct strain measure. To calculate the volume when loaded, V_f, we multiply the new dimensions of the block:

V_f = a_0 (1 + eps_a) b_0 (1 + eps_b) c_0 (1 + eps_c) ~= V_0 (1 + eps_a + eps_b + eps_c)

Products of strain measures will be much smaller than individual strain measures when the overall strain in the block is small (i.e. linear strain theory). Therefore, we were able to drop the strain products in the equation above. The relative change in volume is found by dividing the volume difference by the initial volume,

dV / V_0 = (V_f - V_0) / V_0 = eps_a + eps_b + eps_c

Hence, the relative volume change (for small strains) is equal to the sum of the 3 direct strains.
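To connect this result to the moduli named in the title (assuming an isotropic, linearly elastic material, which is the usual setting for these formulas): under a hydrostatic pressure p each direct strain equals -p(1 - 2v)/E, so the volumetric strain is dV/V_0 = -3p(1 - 2v)/E, and the bulk modulus defined by K = -p / (dV/V_0) works out to K = E / (3(1 - 2v)). The companion relation for the shear modulus of the same material is G = E / (2(1 + v)), where v is Poisson's ratio and E is Young's modulus.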
{"url":"http://www.efunda.com/formulae/solid_mechanics/mat_mechanics/elastic_constants_G_K.cfm","timestamp":"2014-04-21T09:39:04Z","content_type":null,"content_length":"24914","record_id":"<urn:uuid:3527ab2f-5eb7-4b5a-94fa-45992ded903a>","cc-path":"CC-MAIN-2014-15/segments/1397609539705.42/warc/CC-MAIN-20140416005219-00639-ip-10-147-4-33.ec2.internal.warc.gz"}
Parallel Lines is Blondie's third album. Who's that? Seriously? Blondie was breaking hearts of glass way before Gaga got her blonde on. The album cover is rockin' some stripes. Maybe that's where they got the title. Aside from the obvious influence that parallel lines have on everyone and everything, we mean. Parallel lines are coplanar lines that never intersect. If pictures are overkill, you can use the symbol || to get the job done. So in the image above, we could say, "Line DE and line PQ are parallel," or we could just write DE || PQ. Of course, drawing the lines out is always an option too. What about the other types of lines out there? Lines that lie on the same plane and do intersect are called intersecting lines. Really creative, we know. We'll talk about intersecting lines a little later, so don't get your panties in a bunch. Lines that lie on different planes and never intersect are called skew lines. They aren't perfectly in sync like parallel lines but they don't clash like intersecting lines either. They kind of just go about their own business separately, never meeting or even acknowledging one another. To show that lines are parallel in drawings, it's common to draw an arrowhead or two on the parallel lines. Just like tick marks are used to indicate congruent segments, these arrowheads clarify which line is parallel to which other line. Sample Problem Which pairs of lines are parallel? Well, we know that intersecting lines can't be parallel, so that means that lines d and e can't be parallel to a, b, or c. From the image, we can tell that d and e both have one arrowhead each, so they must be parallel to each other. Of a, b, and c, only a and c have double arrowheads, so that means they are parallel. Therefore a || c and d || e, but b isn't parallel to any of them. What is the defining characteristic of parallel lines? What symbol do you use to indicate that CD is parallel to WV? How do you indicate that CD and PQ are parallel when drawn? A line that goes through point (2, 2) is parallel to a line with an equation of 2x – 4y = 12. What is the equation of the second line? Can two lines that intersect be parallel to each other? Can two lines that lie on the same plane be parallel to each other? Line u is parallel to line l and line l is parallel to line i. Is it true that line u is parallel to line i? Line j is parallel to line k. If line j has the equation y = 3x – 1, what slope must line k have? Line p is parallel to line q. If line p has the equation 4x + 5y = 15 and line q passes through point (5, 5), what is the equation of line q?
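To see the slope idea behind several of these questions worked out once: parallel lines share the same slope. The line 2x – 4y = 12 rearranges to y = (1/2)x – 3, so its slope is 1/2. A parallel line through (2, 2) keeps that slope, so y – 2 = (1/2)(x – 2), which simplifies to y = (1/2)x + 1. The same recipe handles the last question: rewrite 4x + 5y = 15 as y = –(4/5)x + 3, keep the slope –4/5, and push it through (5, 5) to get y = –(4/5)x + 9.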
{"url":"http://www.shmoop.com/parallel-perpendicular-lines/parallel-lines-help.html","timestamp":"2014-04-21T14:40:29Z","content_type":null,"content_length":"45852","record_id":"<urn:uuid:fb924e4a-503f-4a8e-9f39-125dd5ab5a97>","cc-path":"CC-MAIN-2014-15/segments/1397609540626.47/warc/CC-MAIN-20140416005220-00617-ip-10-147-4-33.ec2.internal.warc.gz"}
Eleven Multivariate Analysis Techniques: Key Tools In Your Marketing Research Survival Kit
Michael Richarme

Situation 1: A harried executive walks into your office with a stack of printouts. She says, "You're the marketing research whiz. Tell me how many of this new red widget we are going to sell next year. Oh, yeah, we don't know what price we can get for it either."

Situation 2: Another harried executive (they all seem to be that way) calls you into his office and shows you three proposed advertising campaigns for next year. He asks, "Which one should I use? They all look pretty good to me."

Situation 3: During the annual budget meeting, the sales manager wants to know why two of his main competitors are gaining share. Do they have better widgets? Do their products appeal to different types of customers? What is going on in the market?

All of these situations are real, and they happen every day across corporate America. Fortunately, all of these questions are ones to which solid, quantifiable answers can be provided. An astute marketing researcher quickly develops a plan of action to address the situation. The researcher realizes that each question requires a specific type of analysis, and reaches into the analysis tool bag for . . .

Over the past 20 years, the dramatic increase in desktop computing power has resulted in a corresponding increase in the availability of computation-intensive statistical software. Programs like SAS and SPSS, once restricted to mainframe utilization, are now readily available in Windows-based, menu-driven packages. The marketing research analyst now has access to a much broader array of sophisticated techniques with which to explore the data. The challenge becomes knowing which technique to select, and clearly understanding their strengths and weaknesses. As my father once said to me, "If you only have a hammer, then every problem starts to look like a nail."

The purpose of this white paper is to provide an executive understanding of 11 multivariate analysis techniques, resulting in an understanding of the appropriate uses for each of the techniques. This is not a discussion of the underlying statistics of each technique; it is a field guide to understanding the types of research questions that can be formulated and the capabilities and limitations of each technique in answering those questions.

In order to understand multivariate analysis, it is important to understand some of the terminology. A variate is a weighted combination of variables. The purpose of the analysis is to find the best combination of weights. Nonmetric data refers to data that are either qualitative or categorical in nature. Metric data refers to data that are quantitative, and interval or ratio in nature.

Initial Step: Data Quality
Before launching into an analysis technique, it is important to have a clear understanding of the form and quality of the data. The form of the data refers to whether the data are nonmetric or metric. The quality of the data refers to how normally distributed the data are. The first few techniques discussed are sensitive to the linearity, normality, and equal variance assumptions of the data. Examinations of distribution, skewness, and kurtosis are helpful in examining distribution. Also, it is important to understand the magnitude of missing values in observations and to determine whether to ignore them or impute values to the missing observations. Another data quality measure is outliers, and it is important to determine whether the outliers should be removed.
If they are kept, they may cause a distortion to the data; if they are eliminated, they may help with the assumptions of normality. The key is to attempt to understand what the outliers represent.

Multiple Regression Analysis
Multiple regression is the most commonly utilized multivariate technique. It examines the relationship between a single metric dependent variable and two or more metric independent variables. The technique relies upon determining the linear relationship with the lowest sum of squared variances; therefore, assumptions of normality, linearity, and equal variance are carefully observed. The beta coefficients (weights) are the marginal impacts of each variable, and the size of the weight can be interpreted directly. Multiple regression is often used as a forecasting tool.

Logistic Regression Analysis
Sometimes referred to as choice models, this technique is a variation of multiple regression that allows for the prediction of an event. It is allowable to utilize nonmetric (typically binary) dependent variables, as the objective is to arrive at a probabilistic assessment of a binary choice. The independent variables can be either discrete or continuous. A contingency table is produced, which shows the classification of observations as to whether the observed and predicted events match. The sum of events that were predicted to occur which actually did occur and the events that were predicted not to occur which actually did not occur, divided by the total number of events, is a measure of the effectiveness of the model. This tool helps predict the choices consumers might make when presented with alternatives.

Discriminant Analysis
The purpose of discriminant analysis is to correctly classify observations or people into homogeneous groups. The independent variables must be metric and must have a high degree of normality. Discriminant analysis builds a linear discriminant function, which can then be used to classify the observations. The overall fit is assessed by looking at the degree to which the group means differ (Wilks' Lambda or D²) and how well the model classifies. To determine which variables have the most impact on the discriminant function, it is possible to look at partial F values. The higher the partial F, the more impact that variable has on the discriminant function. This tool helps categorize people, like buyers and nonbuyers.

Multivariate Analysis of Variance (MANOVA)
This technique examines the relationship between several categorical independent variables and two or more metric dependent variables. Whereas analysis of variance (ANOVA) assesses the differences between groups (by using T tests for two means and F tests between three or more means), MANOVA examines the dependence relationship between a set of dependent measures across a set of groups. Typically this analysis is used in experimental design, and usually a hypothesized relationship between dependent measures is used. This technique is slightly different in that the independent variables are categorical and the dependent variable is metric. Sample size is an issue, with 15-20 observations needed per cell. However, with too many observations per cell (over 30), the technique loses its practical significance. Cell sizes should be roughly equal, with the largest cell having less than 1.5 times the observations of the smallest cell. That is because, in this technique, normality of the dependent variables is important. The model fit is determined by examining mean vector equivalents across groups.
If there is a significant difference in the means, the null hypothesis can be rejected and treatment differences can be determined.

Factor Analysis
When there are many variables in a research design, it is often helpful to reduce the variables to a smaller set of factors. This is an independence technique, in which there is no dependent variable. Rather, the researcher is looking for the underlying structure of the data matrix. Ideally, the independent variables are normal and continuous, with at least three to five variables loading onto a factor. The sample size should be over 50 observations, with over five observations per variable. Multicollinearity is generally preferred between the variables, as the correlations are key to data reduction. Kaiser's Measure of Sampling Adequacy (MSA) is a measure of the degree to which every variable can be predicted by all other variables. An overall MSA of .80 or higher is very good, with a measure of under .50 deemed poor. There are two main factor analysis methods: common factor analysis, which extracts factors based on the variance shared by the factors, and principal component analysis, which extracts factors based on the total variance of the factors. Common factor analysis is used to look for the latent (underlying) factors, whereas principal component analysis is used to find the fewest number of variables that explain the most variance. The first factor extracted explains the most variance. Typically, factors are extracted as long as the eigenvalues are greater than 1.0, or the scree test visually indicates how many factors to extract. The factor loadings are the correlations between the factor and the variables. Typically a factor loading of .4 or higher is required to attribute a specific variable to a factor. An orthogonal rotation assumes no correlation between the factors, whereas an oblique rotation is used when some relationship is believed to exist.

Cluster Analysis
The purpose of cluster analysis is to reduce a large data set to meaningful subgroups of individuals or objects. The division is accomplished on the basis of similarity of the objects across a set of specified characteristics. Outliers are a problem with this technique, often caused by too many irrelevant variables. The sample should be representative of the population, and it is desirable to have uncorrelated factors. There are three main clustering methods: hierarchical, which is a treelike process appropriate for smaller data sets; nonhierarchical, which requires specification of the number of clusters a priori; and a combination of both. There are four main rules for developing clusters: the clusters should be different, they should be reachable, they should be measurable, and the clusters should be profitable (big enough to matter). This is a great tool for market segmentation.

Multidimensional Scaling (MDS)
The purpose of MDS is to transform consumer judgments of similarity into distances represented in multidimensional space. This is a decompositional approach that uses perceptual mapping to present the dimensions. As an exploratory technique, it is useful in examining unrecognized dimensions about products and in uncovering comparative evaluations of products when the basis for comparison is unknown. Typically there must be at least four times as many objects being evaluated as dimensions. It is possible to evaluate the objects with nonmetric preference rankings or metric similarities (paired comparison) ratings.
Kruskal's Stress measure is a badness-of-fit measure; a stress percentage of 0 indicates a perfect fit, and over 20% is a poor fit. The dimensions can be interpreted either subjectively, by letting the respondents identify the dimensions, or objectively, by the researcher.

Correspondence Analysis
This technique provides for dimensional reduction of object ratings on a set of attributes, resulting in a perceptual map of the ratings. However, unlike MDS, both independent variables and dependent variables are examined at the same time. This technique is more similar in nature to factor analysis. It is a compositional technique, and is useful when there are many attributes and many companies. It is most often used in assessing the effectiveness of advertising campaigns. It is also used when the attributes are too similar for factor analysis to be meaningful. The main structural approach is the development of a contingency (crosstab) table. This means that the form of the variables should be nonmetric. The model can be assessed by examining the chi-square value for the model. Correspondence analysis is difficult to interpret, as the dimensions are a combination of independent and dependent variables.

Conjoint Analysis
Conjoint analysis is often referred to as trade-off analysis, since it allows for the evaluation of objects and the various levels of the attributes to be examined. It is both a compositional technique and a dependence technique, in that a level of preference for a combination of attributes and levels is developed. A part-worth, or utility, is calculated for each level of each attribute, and combinations of attributes at specific levels are summed to develop the overall preference for the attribute at each level. Models can be built that identify the ideal levels and combinations of attributes for products and services.

Canonical Correlation
The most flexible of the multivariate techniques, canonical correlation simultaneously correlates several independent variables and several dependent variables. This powerful technique, unlike MANOVA, utilizes metric independent variables such as sales, satisfaction levels, and usage levels. It can also utilize nonmetric categorical variables. This technique has the fewest restrictions of any of the multivariate techniques, so the results should be interpreted with caution due to the relaxed assumptions. Often, the dependent variables are related, and the independent variables are related, so finding a relationship is difficult without a technique like canonical correlation.

Structural Equation Modeling
Unlike the other multivariate techniques discussed, structural equation modeling (SEM) examines multiple relationships between sets of variables simultaneously. This represents a family of techniques, including LISREL, latent variable analysis, and confirmatory factor analysis. SEM can incorporate latent variables, which either are not or cannot be measured directly, into the analysis. For example, intelligence levels can only be inferred, with direct measurement of variables like test scores, level of education, grade point average, and other related measures. These tools are often used to evaluate many scaled attributes or to build summated scales.

Each of the multivariate techniques described above has a specific type of research question for which it is best suited. Each technique also has certain strengths and weaknesses that should be clearly understood by the analyst before attempting to interpret the results of the technique.
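As a concrete illustration of the logistic regression effectiveness measure described above (events correctly predicted to occur, plus events correctly predicted not to occur, divided by all events), here is a tiny C++ sketch with hypothetical contingency-table counts:

#include <cstdio>

int main()
{
    // Hypothetical 2x2 contingency table (observed outcome vs. prediction).
    const double predictedYes_observedYes = 420;
    const double predictedNo_observedYes  = 80;
    const double predictedYes_observedNo  = 60;
    const double predictedNo_observedNo   = 440;

    const double total = predictedYes_observedYes + predictedNo_observedYes +
                         predictedYes_observedNo + predictedNo_observedNo;

    // Effectiveness measure: correct predictions divided by total events.
    const double hitRate = (predictedYes_observedYes + predictedNo_observedNo) / total;
    printf("Model hit rate: %.1f%%\n", hitRate * 100.0);
    return 0;
}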
Current statistical packages (SAS, SPSS, S-Plus, and others) make it increasingly easy to run a procedure, but the results can be disastrously misinterpreted without adequate care. Copyright © 2001 by Decision Analyst, Inc. This article may not be copied, published, or used in any way without written permission of Decision Analyst.
{"url":"http://www.decisionanalyst.com/publ_art/multivariate.dai","timestamp":"2014-04-16T07:13:57Z","content_type":null,"content_length":"41736","record_id":"<urn:uuid:5d70cc9c-03bc-4943-908d-07aef860fd72>","cc-path":"CC-MAIN-2014-15/segments/1398223202548.14/warc/CC-MAIN-20140423032002-00395-ip-10-147-4-33.ec2.internal.warc.gz"}
Spreadsheet Physics
PHYS 251 - Introduction to Computer Techniques in Physics

Spreadsheets for Physics Computing
Spreadsheets, such as MS Excel, represent a relatively quick and accurate means of doing problems in physics. However, they do present some drawbacks. For one, they present some awkwardness in setting up the problem because of their cell orientation. However, with some thought most problems, even differential equations, can be set up fairly rapidly. Spreadsheets allow one to establish input parameters that can be varied with relative ease. Because of their graphing capability, one is able to display results with reasonably high quality graphics. For serious spreadsheet use, one should supplement the graphics with one of the many scientific graphing packages, many of which are designed to work directly with a spreadsheet such as Excel.

Spreadsheet Example - Projectile Motion
As an example of using a spreadsheet for physics computations, let us consider the example of simple projectile motion contained in the following document: Projectile Motion Simulator. (If you need to refresh your knowledge of projectile motion, then consult the following document: Physics Application: Projectile Motion.) In the example spreadsheet on projectile motion, one should note several points.
1. This example document was produced by Excel by simply saving the spreadsheet workbook as an HTML document, with Excel doing all the work.
2. In the layout of the spreadsheet, space has been set aside in the upper cells for explicit identification of the input parameters, which for this problem are the launch velocity and launch angle.
3. Underneath the Input Parameters, cells have been allocated for Computed Quantities, which are important results. These include the time of flight, T, the range, R, and the maximum height, H.
4. The time of flight has been used to establish the time interval in order to calculate the details of the motion, rather than using a fixed time step of, say, 0.1 seconds. The time of flight has been divided into 41 intervals, giving a time step of 0.360524 seconds in this example. Thus regardless of the launch velocity and launch angle, the time of flight sets the time step. This avoids having to extend the computations to additional rows with the consequent changes necessary to the graphs.
5. Graphs are provided of the path of the motion, the position in x and y as a function of time, and the velocity components as a function of time.

Internet Resources for Spreadsheet Programs
Physics & Astronomy Department, George Mason University
Maintained by Amin Jazaeri, amin@physics.gmu.edu
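As a cross-check of the Computed Quantities described in the projectile example above, here is a minimal C++ sketch of the same formulas (level ground, no air resistance; the launch speed and angle are arbitrary example values, not the ones used in the spreadsheet):

#include <cstdio>
#include <cmath>

int main()
{
    const double pi = 3.14159265358979323846;
    const double g = 9.81;            // m/s^2
    const double v0 = 30.0;           // example launch speed, m/s
    const double launchAngle = 40.0;  // example launch angle, degrees
    const double theta = launchAngle * pi / 180.0;

    const double T = 2.0 * v0 * sin(theta) / g;                          // time of flight
    const double R = v0 * cos(theta) * T;                                // range
    const double H = (v0 * sin(theta)) * (v0 * sin(theta)) / (2.0 * g);  // maximum height
    const double dt = T / 41.0;       // time step derived from T, as in the spreadsheet

    printf("T = %.3f s  R = %.2f m  H = %.2f m  dt = %.4f s\n", T, R, H, dt);
    return 0;
}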
{"url":"http://physics.gmu.edu/~amin/phys251/Topics/ScientificComputing/spreadSheets.html","timestamp":"2014-04-20T10:47:13Z","content_type":null,"content_length":"4651","record_id":"<urn:uuid:62d76b8d-fa5b-4c31-a895-54a16d311073>","cc-path":"CC-MAIN-2014-15/segments/1398223206147.1/warc/CC-MAIN-20140423032006-00662-ip-10-147-4-33.ec2.internal.warc.gz"}
Publications of the Hughes group @ MIT Black hole orbits and perturbation theory: S. Drasco and S. A. Hughes, Rotating black hole orbit functionals in the frequency domain , Physical Review D , 044015 (2004); astro-ph/0308479. N. A. Collins and S. A. Hughes., Towards a formalism for mapping the spacetimes of massive compact objects: Bumpy black holes and their orbits , Physical Review D , 124022 (2004); gr-qc/0402063. S. A. Hughes, S. Drasco, É. E. Flanagan, and J. Franklin, Gravitational radiation reaction and inspiral waveforms in the adiabatic limit , Physical Review Letters , 221101 (2005); gr-qc/0504015. S. Drasco, É. E. Flanagan, S. A. Hughes, Computing inspirals in Kerr in the adiabatic regime. I. The scalar case , Classical and Quantum Gravity , S801 (2005); gr-qc/0505075. S. Drasco and S. A. Hughes, Gravitational wave snapshots of generic extreme mass ratio inspirals , Physical Review D , 024027 (2006); gr-qc/0509101. S. Babak, H. Fang, J. R. Gair, K. Glampedakis, and S. A. Hughes, "Kludge" gravitational waveforms for a test body orbiting a Kerr black hole , Physical Review D , 024005 (2007); gr-qc/0607007. P. A. Sundararajan, G. Khanna, and S. A. Hughes, Towards adiabatic waveforms for inspiral into Kerr black holes: I. A new model of the source for the time domain perturbation equation , Physical Review D , 104005 (2007); gr-qc/0703028. E. Barausse, S. A. Hughes, and L. Rezzolla, Circular and non-circular nearly horizon-skimming orbits in Kerr spacetimes , Physical Review D , 044007 (2007); arXiv:0704.0138. P. A. Sundararajan, G. Khanna, S. A. Hughes, and S. Drasco, Towards adiabatic waveforms for inspiral into Kerr black holes: II. Dynamical sources and generic orbits , Physical Review D , 024022 (2008); arXiv:0803.0317. P. Sundararajan, Transition from adiabatic inspiral to geodesic plunge for a compact object around a massive Kerr black hole: Generic orbits , Physical Review D , 124050 (2008); arXiv:0803.4482. N. Yunes, A. Buonnano, S. A. Hughes, M. C. Miller, and Y. Pan, Modeling extreme mass-ratio inspirals within the effective-one-body approach , Physical Review Letters , 091102 (2010); arXiv:0909.4263. S. J. Vigeland and S. A. Hughes, Spacetime and orbits of bumpy black holes , Physical Review D , 024030 (2010); arXiv:0911.1756. P. A. Sundararajan, G. Khanna, and S. A. Hughes, Binary black hole merger waves and recoil in the large mass ratio limit , Physical Review D , 104009 (2010); arXiv:1003.0485. S. J. Vigeland, Multipole moments of bumpy black holes , Physical Review D , 104041 (2010); arXiv:1008.1278. N. Yunes, A. Buonanno, S. A. Hughes, Y. Pan, E. Barausse, M. Coleman Miller, and W. Throwe, Extreme mass-ratio inspirals in the effective one-body approach: Quasicircular, equatorial orbits around a spinning black hole , Physical Review D , 044044 (2011); arXiv:1009.6013. S. Vigeland, N. Yunes, and L. C. Stein, Bumpy black holes in alternative theories of gravity , Physical Review D , 104027 (2011); arXiv:1102.3706. E. Barausse, A. Buonanno, S. A. Hughes, G. Khanna, S. O’Sullivan, and Yi Pan, Modeling multipolar gravitational-wave emission from small mass-ratio mergers , Physical Review D , 024046 (2012); arXiv:1110.3081. Binary black hole astrophysics: M. Favata, S. A. Hughes, and D. E. Holz, How black holes get their kicks: Gravitational radiation recoil revisited , Astrophysical Journal Letters , L5 (2004); astro-ph/0402056. D. Merritt, M. Milosavljevic, M. Favata, S. A. Hughes, and D. E. 
Holz, Consequences of gravitational radiation recoil , Astrophysical Journal Letters , L9 (2004); astro-ph/0402057. S. A. Hughes and K. Menou, Golden binary gravitational-wave sources: Robust probes of strong-field gravity , Astrophysical Journal , 689 (2005); astro-ph/0410148. R. N. Lang and S. A. Hughes, Measuring coalescing massive binary black holes with gravitational waves: The impact of spin-induced precession (see also erratum 1 erratum 2 ), Physical Review D , 122001 (2006); gr-qc/0608062. Erratum 1: Physical Review D , 089902(E) (2007). Erratum 2: Physical Review D , 109901(E) (2008). R. N. Lang and S. A. Hughes, Localizing coalescing massive black hole binaries with gravitational waves , Astrophysical Journal , 1184 (2008); arXiv:0710.3795. R. N. Lang, S. A. Hughes, and N. J. Cornish, Measuring parameters of massive black holes with partially aligned spins , Physical Review D , 022002 (2011); arXiv:1101.3591. Other topics in gravitational-wave and black hole physics: C. L. Fryer, D. E. Holz, and S. A. Hughes, Gravitational waves from stellar collapse: Correlations to explosion asymmetries , Astrophysical Journal , 288 (2004); astro-ph/0403188. D. E. Holz and S. A. Hughes, Using gravitational-wave standard sirens , Astrophysical Journal , 15 (2005); astro-ph/0504616. N. Dalal, D. E. Holz, S. A. Hughes, and B. Jain, Short GRB and binary black hole standard sirens as a probe of dark energy , Physical Review D , 063006 (2006); astro-ph/0601275. T. Regimbau and S. A. Hughes, Gravitational-wave confusion background from cosmological compact binaries: Implications for future terrestrial detectors , Physical Review D , 062002 (2009); arXiv:0901.2958. S. Nissanke, S. A. Hughes, D. E. Holz, N. Dalal, and J. L. Sievers, Exploring short gamma-ray bursts as gravitation-wave standard sirens , Astrophysical Journal , 496 (2010); arXiv:0904.1017. T. Hinderer, B. D. Lackey, R. N. Lang, and J. S. Read, Tidal deformability of neutron stars with realistic equations of state and their gravitational wave signatures in binary inspiral , Physical Review D , 123016; arXiv:0911.3535. N. Yunes and S. A. Hughes, Binary pulsar constraints on the parameterized post-Einsteinian framework , Physical Review D , 082002 (2010); arXiv:1007.1995. L. Burko and S. A. Hughes, Falloff of radiated energy in black hole spacetimes , Physical Review D , 104029 (2010); arXiv:1007.4596. L. C. Stein and N. Yunes, Effective gravitational-wave stress-energy tensor in alternative theories of gravity , Physical Review D , 064038 (2011); arXiv:1012.3144. N. Yunes and L. C. Stein, Non-spinning black holes in alternative theories of gravity , Physical Review D , 104002 (2011); arXiv:1101.2921. R. H. Price, G. Khanna, and S. A. Hughes, Systematics of black hole binary inspiral kicks and the slowness approximation , Physical Review D , 124002 (2011); arXiv:1104.0387. K. Yagi, L. C. Stein, N. Yunes, and T. Tanaka, Post-Newtonian, quasi-circular binary inspirals in quadratic modified gravity , Physical Review D, in press; arXiv:1110.5950. Review articles: S. A. Hughes, Listening to the universe with gravitational-wave astronomy , Annals of Physics , 142 (2003); astro-ph/0210481. É. E. Flanagan and S. A. Hughes, The basics of gravitational wave theory , New Journal of Physics , 204 (2005), special issue "Spacetime 100 Years Later"; gr-qc/0501041. S. A. Hughes, Trust but verify: The case for astrophysical black holes , Writeup of lectures given at the 2005 SLAC Summer Institute; hep-ph/0511217. S. A. 
Hughes, Gravitational waves from merging compact binaries , Annual Reviews of Astronomy and Astrophysics , 107 (2009); arXiv:0903.4877. Conference proceedings: S. A. Hughes, Gravitational-wave physics , in Proceedings of the 8th International Workshop on Topics in Astroparticle and Underground Physics, edited by F. Avignone and W. Haxton: Nuclear Physics B 138, 429 (2005). S. A. Hughes, M. Favata, and D. E. Holz, How black holes get their kicks: Radiation recoil in binary black hole mergers , in Accretion in a Cosmological Context, edited by A. Merloni, S. Nayakshin, and R. Sunyaev (Springer-Verlag series of "ESO Astrophysics Symposia", Berlin, 2005); astro-ph/0408492. S. A. Hughes, (Sort of) Testing relativity with extreme mass ratio inspirals , in Laser Interferometer Space Antenna --- Proceedings of the Sixth International LISA Symposium, edited by S. M. Merkowitz and J. C. Livas (AIP Conference Proceedings 873, Melville, New York, 2006); gr-qc/0608140. S. A. Hughes, A brief survey of LISA sources and science , in Laser Interferometer Space Antenna --- Proceedings of the Sixth International LISA Symposium, edited by S. M. Merkowitz and J. C. Livas (AIP Conference Proceedings 873, Melville, New York, 2006); gr-qc/0609028. D. G. Blair, P. Barriga, A. F. Brooks, P. Charlton, D. Coward, J.-C. Dumas, Y. Fan, D. Galloway, S. Gras, D. J. Hosken, E. Howell, S. Hughes, L. Ju, D. E. McClelland, A. Melatos, H. Miao, J. Munch, S. M. Scott, B. J. J. Slagmolen, P. J. Veitch, L. Wen, J. K. Webb, A. Wolley, Z. Yan, C. Zhao, The science benefits and preliminary design of the Southern Hemisphere gravitational wave detector AIGO , Journal of Physics: Conference Series 122, 012001 (2008). R. N. Lang and S. A. Hughes, Advanced localization of massive black hole coalescences with LISA , in Proceedings of the Seventh International LISA Symposium, to be published by Classical and Quantum Gravity; arXiv:0810.5125. K. G. Arun, S. Babak, E. Berti, N. Cornish, C. Cutler, J. Gair, S. A. Hughes, B. R. Iyer, R. N. Lang, I. Mandel, E. K. Porter, B. S. Sathyaprakash, S. Sinha, A. M. Sintes, M. Trias, C. Van Den Broeck, and M. Volonteri, Massive black hole binary inspirals: Results from the LISA Parameter Estimation Taskforce , in Proceedings of the Seventh International LISA Symposium, to be published by Classical and Quantum Gravity; arXiv:0811.1011. S. A. Hughes, Probing strong-field gravity and black holes with gravitational waves , in Proceedings of the 19th Workshop on General Relativity and Gravitation in Japan; arXiv:1002.2591.
{"url":"http://gmunu.mit.edu/pub/pub.html","timestamp":"2014-04-16T13:03:01Z","content_type":null,"content_length":"24668","record_id":"<urn:uuid:870b370e-fa3b-4dd3-928c-8ac04504f01b>","cc-path":"CC-MAIN-2014-15/segments/1397609523429.20/warc/CC-MAIN-20140416005203-00662-ip-10-147-4-33.ec2.internal.warc.gz"}
Electron. J. Diff. Eqns., Vol. 1999(1999), No. 32, pp. 1-22.
The fundamental solution for a consistent complex model of the shallow shell equations
Matthew P. Coleman

Abstract: The calculation of the Fourier transforms of the fundamental solution in shallow shell theory ostensibly was accomplished by J. L. Sanders [J. Appl. Mech. 37 (1970), 361-366]. However, as is shown in detail in this paper, the complex model used by Sanders is, in fact, inconsistent. This paper provides a consistent version of Sanders's complex model, along with the Fourier transforms of the fundamental solution for this corrected model. The inverse Fourier transforms are then calculated for the particular cases of the shallow spherical and circular cylindrical shells, and the results of the latter are seen to be in agreement with results appearing elsewhere in the literature.

Submitted June 9, 1999. Published September 9, 1999.
Math Subject Classifications: 35Q72, 35A08, 73K15, 42B10.
Key Words: shallow shell theory, fundamental solution, spherical shell, cylindrical shell.

Show me the PDF file (172K), TEX file, and other files for this article.

Matthew P. Coleman
Department of Mathematics and Computer Science
Fairfield University
Fairfield, CT 06430, U.S.A.
e-mail: mcoleman@fair1.fairfield.edu

Return to the EJDE web page
{"url":"http://ejde.math.txstate.edu/Volumes/1999/32/abstr.html","timestamp":"2014-04-16T19:19:15Z","content_type":null,"content_length":"2026","record_id":"<urn:uuid:a88bde57-1c01-47a2-99a4-0e2f022f63e9>","cc-path":"CC-MAIN-2014-15/segments/1397609524644.38/warc/CC-MAIN-20140416005204-00129-ip-10-147-4-33.ec2.internal.warc.gz"}
Easy scientific measurement
The shadow cast by a vertical pillar in Alexandria at noon during the summer solstice is found to be 1/8 the height of the pillar. The distance between Alexandria and Syene is 1/8 the Earth's radius. Is there a geometric connection between these two 1-to-8 ratios?

Hi NateTheGreat! Welcome to PF!
You've left out that at noon during the summer solstice, the sun at Syene is directly overhead (I know that 'cos this is a well-known experiment by Eratosthenes in about 200 BC).
Does that help?
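A sketch of the geometric connection being hinted at (standard small-angle reasoning; this is not part of the original thread):

With the sun directly overhead at Syene, the shadow at Alexandria measures the angle $\theta$ between the pillar and the sun's rays,
\[ \tan\theta = \frac{\text{shadow}}{\text{height}} = \frac{1}{8}. \]
Because the sun's rays are effectively parallel, $\theta$ is also the angle subtended at the Earth's centre by the arc from Syene to Alexandria, so
\[ \theta = \frac{d}{R} = \frac{R/8}{R} = \frac{1}{8}\ \text{rad}. \]
For small angles $\tan\theta \approx \theta$, which is why the two 1-to-8 ratios agree (to within about half a percent).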
{"url":"http://www.physicsforums.com/showthread.php?t=262292","timestamp":"2014-04-19T15:01:42Z","content_type":null,"content_length":"31008","record_id":"<urn:uuid:1efec1b7-7470-4256-b13c-12a175c543d6>","cc-path":"CC-MAIN-2014-15/segments/1397609537271.8/warc/CC-MAIN-20140416005217-00345-ip-10-147-4-33.ec2.internal.warc.gz"}
keep getting slightly wrong answer for integration question
June 3rd 2009, 11:46 AM
keep getting slightly wrong answer for integration question
"Use the substitution x=secθ where 0<=θ<π/2 to evaluate ∫ √(x^2-1)/x^4 dx."
so after substitution:
∫ √(SEC(θ)^2 - 1)/SEC(θ)^4 dx (it still should be dx right? not dθ?)
after using trig identities I get ∫ (1-sin^2θ)sinθcosθ dx
then I let u = sinθ and -du = cosθ dx
-∫ (u - u^3) du
and that is -(u^2)/2 + (u^4)/4 + C
I suspect this is where I mess things up.
and then I plug back in the value of u to get my answer in terms of θ.
my CAS gives me SIN(θ)^2·(SIN(θ)^2 - 2)/4
but I don't think that's the same (Doh)

June 3rd 2009, 11:56 AM
Quoting the original post, with a note at "(it still should be dx right? not dθ?)": (**) you need to switch the dx over.
With your substitution $x = \sec \theta$, then
$\int \frac{\sqrt{x^2-1}}{x^4}\, dx = \int \frac{\sqrt{\sec^2 \theta - 1}}{\sec^4 \theta}\, \sec\theta \tan\theta\, d\theta = \int \frac{\tan^2\theta}{\sec^3\theta}\, d\theta = \int \sin^2\theta \cos\theta\, d\theta$.
You should be able to get the rest.

June 3rd 2009, 11:57 AM
Chris L T521
(Quoting the original post.) When you make the substitution $x=\sec\theta$, it follows that $\,dx=\sec\theta\tan\theta\,d\theta$. Thus, the integral becomes
$\int\frac{\sqrt{\sec^2\theta-1}\sec\theta\tan\theta\,d\theta}{\sec^4\theta}=\int\frac{\tan^2\theta}{\sec^3\theta}\,d\theta=\int\sin^2\theta\cos\theta\,d\theta$
Now let $u=\sin\theta\implies\,du=\cos\theta\,d\theta$
The integral now becomes $\int u^2\,du=\tfrac{1}{3}u^3+C$
Back-substituting, we get $\tfrac{1}{3}\sin^3\theta+C$
Now since $x=\sec\theta$ and $0\leqslant\theta\leqslant\frac{\pi}{2}$, it follows that $\sin\theta=\frac{\sqrt{x^2-1}}{x}$ (verify).
Therefore, $\int\frac{\sqrt{x^2-1}}{x^4}\,dx=\frac{\left(x^2-1\right)\sqrt{x^2-1}}{3x^3}+C$.
Does this make sense?

June 3rd 2009, 12:06 PM
let $x=\sec\theta$
$\int\left(\frac{\sqrt{\sec^2\theta-1}}{\sec^4\theta}\right)\sec\theta\tan\theta\,d\theta$
$\int\left(\frac{\sqrt{\tan^2\theta}}{\sec^3\theta}\right)\tan\theta\,d\theta$
$\int\frac{\tan^2\theta}{\sec^3\theta}\,d\theta$
$\int\frac{\sec^2\theta}{\sec^3\theta}\,d\theta-\int\frac{1}{\sec^3\theta}\,d\theta$
I leave the rest for you ... best wishes
I am always late: I start to answer, then I see a lot of people have posted faster than me. That's wonderful; it is the best forum, the questions get solved so quickly. I love this. Thanks to all of you.

June 3rd 2009, 12:59 PM
put $x=\frac1u$ and you'll get an easy integral to solve.
June 3rd 2009, 01:03 PM
June 4th 2009, 12:21 PM
one thing I don't get
I follow up to the point where I get (sin^3θ/3)+C. I don't understand where you get the $\sin\theta=\frac{\sqrt{x^2-1}}{x}$ from; is that an identity? When I plug that in for the value of sinθ I still get a different answer, i.e. (√(x^2-1))^3/3+C ≠ $\int\frac{\sqrt{x^2-1}}{x^4}\,dx=\frac{\left(x^2-1\right)\sqrt{x^2-1}}{3x^3}+C$
June 4th 2009, 07:18 PM
From this you have $\sin\theta=\frac{\sqrt{x^2-1}}{x}$; then from the picture below you can see how to get it (attached right-triangle diagram: Attachment 11760).
June 6th 2009, 07:26 PM
thanks. i get it
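As a quick numerical sanity check of the antiderivative derived in this thread (not from the thread itself; it uses a simple midpoint-rule quadrature in C++):

#include <cstdio>
#include <cmath>

// Integrand from the thread: sqrt(x^2 - 1) / x^4.
static double f(double x) { return sqrt(x * x - 1.0) / (x * x * x * x); }

// Antiderivative derived in the thread: (x^2 - 1)^(3/2) / (3 x^3).
static double F(double x) { return pow(x * x - 1.0, 1.5) / (3.0 * x * x * x); }

int main()
{
    const double a = 1.0, b = 2.0;
    const int n = 200000;
    const double h = (b - a) / n;
    double sum = 0.0;
    for (int i = 0; i < n; ++i)
        sum += f(a + (i + 0.5) * h) * h;   // midpoint rule

    printf("numeric   = %.8f\n", sum);
    printf("F(b)-F(a) = %.8f\n", F(b) - F(a));
    return 0;
}

Both numbers should agree to several decimal places.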
{"url":"http://mathhelpforum.com/calculus/91680-keep-getting-slightly-wrong-answer-integration-question-print.html","timestamp":"2014-04-16T20:56:28Z","content_type":null,"content_length":"19827","record_id":"<urn:uuid:153ca629-3344-4c1a-baf9-c760f77fcb54>","cc-path":"CC-MAIN-2014-15/segments/1397609524644.38/warc/CC-MAIN-20140416005204-00310-ip-10-147-4-33.ec2.internal.warc.gz"}
A tool to compute aliquot chains with small starting numbers and keep a record of the results. For a positive natural number n, let s(n) denote the sum of its proper divisors. The sequence obtained by iterating s for some n is called an aliquot sequence or chain. (adapted from the documentation)
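To make the definition concrete, here is a small C++ sketch that iterates s(n) from a chosen starting number (this is only an illustration of the definition, not the program being described; it stops after a fixed number of steps or when the chain reaches 0):

#include <cstdio>
#include <cstdint>

// s(n): sum of the proper divisors of n (all divisors of n except n itself).
static uint64_t s(uint64_t n)
{
    if (n <= 1) return 0;
    uint64_t sum = 1;                      // 1 divides every n > 1
    for (uint64_t d = 2; d * d <= n; ++d) {
        if (n % d == 0) {
            sum += d;
            uint64_t other = n / d;
            if (other != d) sum += other;  // add the paired divisor once
        }
    }
    return sum;
}

int main()
{
    uint64_t n = 276;                      // a well-known small starting number
    printf("%llu", (unsigned long long)n);
    for (int step = 0; step < 15 && n != 0; ++step) {
        n = s(n);
        printf(" -> %llu", (unsigned long long)n);
    }
    printf("\n");
    return 0;
}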
{"url":"http://archives.math.utk.edu/software/msdos/dynamics/aliquot/.html","timestamp":"2014-04-19T11:58:40Z","content_type":null,"content_length":"2876","record_id":"<urn:uuid:6abcc9fc-dced-421f-b115-1dcff16d2ada>","cc-path":"CC-MAIN-2014-15/segments/1397609537186.46/warc/CC-MAIN-20140416005217-00246-ip-10-147-4-33.ec2.internal.warc.gz"}
The Formation of Galaxies - G. Efstathiou and J. Silk

4.3.2. Small scale anisotropy
On small angular scales (much less than ct_0, the horizon scale at present), the residual radiation temperature fluctuations at present reduce to an approximate expression in which T_rms(K), the amplitude of the Fourier components of the temperature fluctuations, is written in terms of the dimensionless wavenumber K = k c t_d, normalized to the horizon scale at decoupling; µ is the cosine of the angle between the wavenumber k and the observation direction, and K(1 + z_d)^(1/2) = 1 defines a comoving wave vector equal to the present horizon. The first term in T_rms(K) yields the residual temperature fluctuation at the epoch t_d when decoupling is effective, and the remaining terms are associated with secondary fluctuations induced by Thomson scattering in density fluctuations due to the residual ionization remaining after decoupling. The constants C_1 and C_2 are associated with the two adiabatic modes; the C_3 term is due to the isothermal mode. The C_1 term is due to gravitational potential fluctuations [~ (l / ct_d)^2], and the C_2 term is associated with peculiar motions [~ (l / ct_d)^((1-n)/2), for fluctuations scaling as M^(-1/2 - n/6)] (Sachs and Wolfe, 1967).
The angular correlation function for the temperature fluctuations is obtained by averaging (ΔT/T) over pairs of directions γ̂_1, γ̂_2 with cos θ = γ̂_1 · γ̂_2; for small separations this simplifies to an integral involving the modified Bessel function I_0 of the separation angle, with C(0) giving the mean square fluctuation.
Predicted fluctuations are shown in Figure 4.3; for the beam widths considered there the predicted ΔT/T lies roughly between 10^-4 and 10^-5. The small-scale anisotropy experiment at 6°, while probably an upper limit, is still especially significant. It clearly rules out the Ω = 1 adiabatic model with n = 0. Experiments on finer angular scales are not as useful. While the 9' experiment also constrains the adiabatic model, it is relatively insensitive to n because of the smearing on the surface of last scattering over angular scales of less than a degree. The approximate scaling of ΔT/T with Ω^-1 means that large amplitudes of the initial density fluctuation spectrum are required in a low-density universe.
In a galaxy formation theory based on adiabatic fluctuations, the first objects capable of ionizing the universe (on mass scales near the damping mass M_D) form late; the objects must collapse at a relatively late epoch in order to produce the low density large-scale structure of the universe that is observed. Hence reionization probably does not occur if only adiabatic fluctuations are present initially.
{"url":"http://ned.ipac.caltech.edu/level5/March02/Efstathiou/Efst4_3_2.html","timestamp":"2014-04-18T21:07:01Z","content_type":null,"content_length":"9684","record_id":"<urn:uuid:c1fbbf16-9b4c-4980-a673-2584960cf3c6>","cc-path":"CC-MAIN-2014-15/segments/1398223207046.13/warc/CC-MAIN-20140423032007-00153-ip-10-147-4-33.ec2.internal.warc.gz"}
Angles... [Archive] - OpenGL Discussion and Help Forums
09-06-2000, 07:28 PM
After performing translations and rotations, how would you get the current modelview matrix vector... like the vector it is looking towards... it would be greatly helpful if someone can give me some insight or webpages/etc. about this topic.
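No reply is preserved in this archived thread. One common approach (a sketch for the legacy fixed-function pipeline, assuming the modelview matrix holds only the camera's rotation and translation) is to read the matrix back and take the negated third row of its rotation part as the world-space look direction:

#include <GL/gl.h>

// Returns the direction the camera is looking toward, in world coordinates.
// OpenGL stores the matrix in column-major order, so m[2], m[6], m[10] form
// the third row of the upper-left 3x3 rotation. Eye space looks down -Z, and
// mapping that back through the transposed (inverse) rotation gives the
// negated third row.
void GetLookDirection(float out[3])
{
    float m[16];
    glGetFloatv(GL_MODELVIEW_MATRIX, m);
    out[0] = -m[2];
    out[1] = -m[6];
    out[2] = -m[10];
}

The same idea gives the camera's right and up vectors from the first and second rows of the rotation part.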
{"url":"http://www.opengl.org/discussion_boards/archive/index.php/t-148833.html","timestamp":"2014-04-18T00:28:33Z","content_type":null,"content_length":"3790","record_id":"<urn:uuid:c36d35a1-32e0-45eb-bb57-43973220b12a>","cc-path":"CC-MAIN-2014-15/segments/1397609532374.24/warc/CC-MAIN-20140416005212-00272-ip-10-147-4-33.ec2.internal.warc.gz"}
How to Find the Inverse of a Quadratic Function
Edited by Musab J, Maluniu, Peter, Hawkstar and 6 others

Calculating the inverse of a linear function is easy: just make x the subject of the equation, and replace y with x in the resulting expression. Finding the inverse of a quadratic function is considerably trickier, not least because quadratic functions are not, unless limited by a suitable domain, one-one functions.
1. Make y or f(x) the subject of the formula if it isn't already. During your algebraic manipulation, make sure that you do not change the function in any way and perform the same operations to both "sides" of the equation.
2. Rearrange the function so that it is in the form y=a(x-h)^2+k. This is not only essential for you to find the inverse of the function, but also for you to determine whether the function even has an inverse. You can do this by two methods:
□ By completing the square
1. Factor out ("take common") the value of a (the coefficient of x^2) from the whole expression. Do this by writing the value of a, starting a bracket, and writing the whole equation, then dividing each term by the value of a, as shown in the diagram on the right. Leave the left hand side of the equation untouched, as there has been no net change to the right hand side.
2. Complete the square. The coefficient of x is (b/a). Halve it, to give (b/2a), and square it, to give (b/2a)^2. Add and subtract it from the equation. This will have no net effect on the equation. If you look closely, you will see that the first three terms inside the bracket are in the form a^2+2ab+b^2, where a is x, and b is (b/2a). Of course these two values will be numerical, rather than algebraic, for a real equation. This is a completed square.
3. Because the first three terms are now a perfect square, you can write them in the form (a-b)^2 or (a+b)^2. The sign between the two terms will be the same as the sign of the coefficient of x in the equation.
4. Take the term which is outside the perfect square out of the square bracket. This brings the equation into the form y=a(x-h)^2+k, as intended.
□ By comparing coefficients
1. Form an identity in x. On the left, put the function as it is expressed in terms of x, and on the right put the function in the form that you want it to be, in this case a(x-h)^2+k. This will enable you to find out the values of a, h, and k that are true for all values of x.
2. Open and expand the bracket on the right hand side of the identity. We shall not be touching the left hand side of the equation, and may omit it from our working. Note that all working on the right hand side is algebraic, as shown, and not numerical.
3. Identify the coefficients of each power of x. Then group them and place them in brackets, as shown on the right.
4. Compare the coefficients of each power of x. The coefficient of x^2 on the right hand side must equal that on the left hand side. This gives the value of a. The coefficient of x on the right hand side also must equal that on the left hand side. This leads to the formation of an equation in a and h, which can be solved by substituting the value of a, which has already been found. The coefficient of x^0, or 1, on the left hand side must also equal that on the right hand side. Comparing them yields an equation that will help us find the value of k.
5. Using the values of a, h, and k found above, we can write the equation in the desired form.
3. Ensure that the value of h is either on the boundary of the domain, or outside it.
The value of h gives the x-coordinate of the turning point of the equation. A turning point within the domain would mean that the function is not one-one, and hence does not have an inverse. Note that the equation is a(x-h)^2+k. Thus if there is (x+3) inside the bracket, the value of h is negative 3.
4. Make (x-h)^2 the subject of your formula. Do this by subtracting the value of k from both sides of the equation, and then dividing both sides of the equation by a. By now you will have numerical values for a, h, and k, so use those, not the symbols.
5. Square-root both sides of the equation. This will remove the power of two from (x-h). Do not forget to put the "+/-" sign on the other side of the equation.
6. Decide between the + and the - sign, as you cannot have both (having both would make it a one-to-many relation, which is not a valid function). For this, look at the domain. If the domain lies to the left of the stationary point, i.e. x < a certain value, use the - sign. If the domain lies to the right of the stationary point, i.e. x > a certain value, use the + sign. Then, make x the subject of the formula.
7. Replace y with x, and x with f^-1(x), and congratulate yourself on having successfully found the inverse of a quadratic function.

Tips
• Check your inverse by calculating the value of f(x) for a certain value of x, and then substituting that value of f(x) in the inverse to see if it returns the original value of x. For example, if the function of 3 [f(3)] is 4, then substituting 4 in the inverse should return 3.
• If it is not too much trouble you can also check the inverse by inspecting its graph. It should look like the original function reflected across the line y=x.
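A worked example of the whole method (added for illustration; the function is made up, not from the original article): invert $f(x) = 2x^2 - 12x + 19$ on the domain $x \ge 3$.
\[ f(x) = 2x^2 - 12x + 19 = 2(x - 3)^2 + 1, \qquad a = 2,\; h = 3,\; k = 1, \]
so the turning point $x = h = 3$ sits on the boundary of the domain. Then
\[ (x - 3)^2 = \frac{y - 1}{2} \;\Rightarrow\; x = 3 + \sqrt{\frac{y - 1}{2}}, \]
taking the $+$ sign because the domain lies to the right of the stationary point. Swapping the variables gives $f^{-1}(x) = 3 + \sqrt{\tfrac{x - 1}{2}}$. Check: $f(4) = 3$ and $f^{-1}(3) = 4$.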
{"url":"http://www.wikihow.com/Find-the-Inverse-of-a-Quadratic-Function","timestamp":"2014-04-16T10:21:07Z","content_type":null,"content_length":"69807","record_id":"<urn:uuid:f24045cd-8ab4-4f1a-867a-ef7716401e7e>","cc-path":"CC-MAIN-2014-15/segments/1397609523265.25/warc/CC-MAIN-20140416005203-00318-ip-10-147-4-33.ec2.internal.warc.gz"}
What is the Antenna Slant Angle and What is its effect?
I would like to know more about slant angle and its effect on wave propagation. What is the difference between slant angle, elevation angle and azimuth angle?
Warm Regards

Critical Angle & Frequency
As the transmitting frequency increases, the RF energy that passes through the ionized layer also increases, and thus the multi-hop or skip condition necessary for long distance communication is affected. The angle of incidence has a direct impact on the angle of reflection as it relates to this ionized surface, in that if the critical angle at which the signal leaves the antenna is too high for a given frequency then it will once again merely pass through this ionized level, and again the conditions for multi-hop communications are diminished. The relationship between radiation angles, the frequency you are transmitting on, the proximity of the sun to the earth, and which part of the ionosphere is ionized are all directly related to the ability to communicate long distances via skip conditions. Typically, first thing in the morning these layers are at their lowest point of ionization, while during midday we see the high peaks and expect to experience better long distance communications on the shorter wavelength HF bands. Someone can correct me or add to this if I have missed or interpreted this incorrectly.

I am not sure on slant angle; maybe it is like the polarization angle used with satellites? Elevation angle is the RF takeoff angle from an antenna in relation to the true horizon, like 5 degrees up from true horizon. Azimuth angle is the direction where most RF is going: with a beam, say you are pointed south, that would make your azimuth angle 180 degrees; for an omnidirectional antenna you are using 360 degrees around, hopefully with low takeoff angles for propagation into the different ionosphere regions. I like to think of it like skipping a flat stone across water, thinking of the water as being the ionization area. Certain angles (and also the critical angle) would hit the water and reflect off the water; other angles would be too steep and enter the water. So certain elevation angles and frequencies, for a given ionization area, could be too steep and pass through that area into space, but for lower frequencies steeper angles can be used, which thus produces shorter hops; very low angles would produce very long hops. But remember, everything is in some relation with something else. I thought I would add my two cents' worth; hope it enhanced the other guy's explanation. Sorry for the spelling as well.
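To put rough numbers on the takeoff-angle idea described above, here is a small C++ sketch using a flat-earth, single-reflection approximation (the 300 km layer height is an assumed example value; real layer heights vary, and Earth curvature makes this a large overestimate at very low angles):

#include <cstdio>
#include <cmath>

int main()
{
    const double pi = 3.14159265358979323846;
    const double layerHeightKm = 300.0;   // assumed F2-layer height
    // Single-hop ground distance for a flat earth: d = 2h / tan(elevation).
    for (double elevDeg = 5.0; elevDeg <= 45.0; elevDeg += 10.0) {
        double d = 2.0 * layerHeightKm / tan(elevDeg * pi / 180.0);
        printf("elevation %4.1f deg -> hop distance ~ %6.0f km\n", elevDeg, d);
    }
    return 0;
}

Lower takeoff angles give longer hops, which is the point made in the stone-skipping analogy.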
{"url":"http://hfradio.org/forums/viewtopic.php?f=2&t=236&p=618","timestamp":"2014-04-19T17:06:12Z","content_type":null,"content_length":"27244","record_id":"<urn:uuid:6194bb2e-aef3-4459-bb58-953a67299f7a>","cc-path":"CC-MAIN-2014-15/segments/1397609537308.32/warc/CC-MAIN-20140416005217-00056-ip-10-147-4-33.ec2.internal.warc.gz"}
help with proving relationship between sets
May 13th 2011, 09:29 AM
help with proving relationship between sets
How would I prove this
A⊆B ^ |A|=|B| -> A=B
Is this proof ok and formal?
Suppose ¬(A⊆B ^ |A|=|B|)
Then |A|≠|B|
Therefore A⊆B ^ |A|=|B| -> A=B
May 13th 2011, 09:41 AM
May 13th 2011, 10:02 AM
I forgot to mention that the sets were finite. So if A≠B then since A⊆B, then |A|<|B|, therefore A⊆B ^ |A|=|B| -> A=B.
is this better? why won't it work for non-finite sets? Is it because then you can't say A⊆B ^ |A|=|B| ?
May 13th 2011, 10:42 AM
May 13th 2011, 10:50 AM
does the symbol of the C with the 'not equal sign' under it mean not a proper subset of? And also is my proof ok now?
May 13th 2011, 12:12 PM
does the symbol of the C with the 'not equal sign' under it mean not a proper subset of?
No, it means that E is a proper subset of Z^+, i.e., it does not equal the whole Z^+.
And also is my proof ok now?
There is still a question whether enough details are provided to justify this statement: "if A≠B then since A⊆B, then |A|<|B|." This statement is true and sufficiently obvious, but whether more needs to be said depends on the course requirements and the definition of |A|.
May 13th 2011, 01:31 PM
(1) As has been pointed out, we need to assume that either A or B is finite.
(2) Your logic is wrong anyway. You don't prove P -> Q by assuming ~P and showing ~Q. You have it backwards. What you want here is to assume ~Q and show ~P.
So suppose ~A=B. Show ~(A subset of B & |A|=|B|). Suppose A subset of B. Then, since ~A=B, we have A proper subset of B. But no finite set has the same cardinality as a proper subset of itself (this is called 'the pigeonhole principle'). However, if you haven't already proven the pigeonhole principle, then you have to prove it, and that is a longer, more involved proof.
May 13th 2011, 02:31 PM
I don't understand what more I can add to this:
A and B are finite sets. if A≠B then suppose A⊆B, then |A|<|B|, therefore A⊆B ^ |A|=|B| -> A=B
May 14th 2011, 08:12 AM
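For reference, a compact write-up of the finite-set argument discussed in this thread (standard textbook reasoning, not quoted from any post):

Assume $A$ and $B$ are finite, $A \subseteq B$ and $|A| = |B|$. Suppose $A \neq B$. Then there is some $b \in B \setminus A$, so $A \subseteq B \setminus \{b\}$ and hence
\[ |A| \le |B \setminus \{b\}| = |B| - 1 < |B|, \]
contradicting $|A| = |B|$; therefore $A = B$. The step $|A| \le |B \setminus \{b\}|$ is exactly where finiteness (the pigeonhole principle) is used. For infinite sets the implication fails: the even integers form a proper subset of $\mathbb{Z}$ with the same cardinality.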
{"url":"http://mathhelpforum.com/discrete-math/180428-help-proving-relaationship-between-sets-print.html","timestamp":"2014-04-19T06:12:28Z","content_type":null,"content_length":"11291","record_id":"<urn:uuid:c3a90974-729e-46a0-9b61-d7f9f23bbc3c>","cc-path":"CC-MAIN-2014-15/segments/1397609535775.35/warc/CC-MAIN-20140416005215-00539-ip-10-147-4-33.ec2.internal.warc.gz"}
Encinitas Prealgebra Tutor Find an Encinitas Prealgebra Tutor I am a recent graduate from Cal Poly looking for students in need of a math tutor. I majored in Physics and minored in Mathematics with a 3.5 GPA. I tutor Pre-Algebra, Algebra, Geometry, Trigonometry, Calculus, and Physics for all grade levels. 8 Subjects: including prealgebra, calculus, algebra 2, algebra 1 ...As the education system in California prepares for and adopts the new Common Core State Standards (CCSS), schools are re-tooling their curricula to meet these standards, which will require students to perform at higher levels of thinking. For this reason, schools are scrambling to focus more of ... 15 Subjects: including prealgebra, physics, geometry, GRE ...Along with my studies I have also learned guitar and how to cook as a way of keeping myself well rounded and regularly compose music with members within the San Diego, and more specifically UCSD, community. I used to hold events where I would cook healthy meals for up to fifty people every other... 10 Subjects: including prealgebra, calculus, geometry, algebra 1 ...It has always been my favorite subject, and I get a kick out of seeing how the numbers all work out. I have taught math for many years in both the public and private school settings. In addition, I have tutored others since I was in high school. 11 Subjects: including prealgebra, geometry, algebra 1, algebra 2 ...I have also taught and tutored English overseas. I can take difficult material and explain it in ways that the students understand. I love helping children and young adults learn and reach their highest potential.I taught Algebra 1 for 3 years in the public school and have my California State Teaching Credentials current. 9 Subjects: including prealgebra, geometry, algebra 1, algebra 2
{"url":"http://www.purplemath.com/encinitas_ca_prealgebra_tutors.php","timestamp":"2014-04-16T08:01:40Z","content_type":null,"content_length":"23981","record_id":"<urn:uuid:74f7be2d-06dd-4949-a2d0-5af3e6cc0fab>","cc-path":"CC-MAIN-2014-15/segments/1398223205137.4/warc/CC-MAIN-20140423032005-00179-ip-10-147-4-33.ec2.internal.warc.gz"}
There are Only Four Billion Floats–So Test Them All!

A few months ago I saw a blog post touting fancy new SSE3 functions for implementing vector floor, ceil, and round functions. There was the inevitable proud proclaiming of impressive performance and correctness. However the ceil function gave the wrong answer for many numbers it was supposed to handle, including odd-ball numbers like 'one'. The floor and round functions were similarly flawed.
The reddit discussion of these problems then discussed two other sets of vector math functions. Both of them were similarly buggy. Fixed versions of some of these functions were produced, and they are greatly improved, but some of them still have bugs.
Floating-point math is hard, but testing these functions is trivial, and fast. Just do it.
The functions ceil, floor, and round are particularly easy to test because there are presumed-good CRT functions that you can check them against. And, you can test every float bit-pattern (all four billion!) in about ninety seconds. It's actually very easy. Just iterate through all four-billion (technically 2^32) bit patterns, call your test function, call your reference function, and make sure the results match. Properly comparing NaN and zero results takes a bit of care but it's still not too bad.
Aside: floating-point math has a reputation for producing results that are unpredictably wrong. This reputation is then used to justify sloppiness, which then justifies the reputation. In fact IEEE floating-point math is designed to, whenever practical, give the best possible answer (correctly rounded), and functions that extend floating-point math should follow this pattern, and only deviate from it when it is clear that correctness is too expensive.
Later on I'll show the implementation for my ExhaustiveTest function but for now here is the function declaration:

typedef float(*Transform)(float);

// Pass in a range of float representations to compare against.
// start and stop are inclusive. Pass in 0, 0xFFFFFFFF to scan all
// floats. The floats are iterated through by incrementing
// their integer representation.
void ExhaustiveTest(uint32_t start, uint32_t stop, Transform TestFunc,
                    Transform RefFunc, const char* desc)

Typical test code that uses ExhaustiveTest is shown below. In this case I am testing the original SSE 2 _mm_ceil_ps2 function that started the discussion, with a wrapper to translate between float and __m128. The function didn't claim to handle floats outside of the range of 32-bit integers so I restricted the test range to just those numbers:

float old_mm_ceil_ps2(float f)
{
    __m128 input = { f, 0, 0, 0 };
    __m128 result = old_mm_ceil_ps2(input);
    return result.m128_f32[0];
}

int main()
{
    // This is the biggest number that can be represented in
    // both float and int32_t. It's 2^31-128.
    Float_t maxfloatasint(2147483520.0f);
    const uint32_t signBit = 0x80000000;
    ExhaustiveTest(0, (uint32_t)maxfloatasint.i, old_mm_ceil_ps2, ceil,
                   "old _mm_ceil_ps2");
    ExhaustiveTest(signBit, signBit | maxfloatasint.i, old_mm_ceil_ps2, ceil,
                   "old _mm_ceil_ps2");
}

Note that this code uses the Float_t type to get the integer representation of a particular float. I described Float_t years ago in Tricks With the Floating-Point Format.

How did the original functions do?
_mm_ceil_ps2 claimed to handle all numbers in the range of 32-bit integers, which is already ignoring about 38% of floating-point numbers. Even in that limited range it had 872,415,233 errors – that's a 33% failure rate over the 2,650,800,128 floats it tried to handle.
_mm_ceil_ps2 got the wrong answer for all numbers between 0.0 and FLT_EPSILON * 0.25, all odd numbers below 8,388,608, and a few other numbers. A fixed version was quickly produced after the errors were pointed out.
Another set of vector math functions that was discussed was DirectXMath. The 3.03 version of DirectXMath's XMVectorCeiling claimed to handle all floats. However it failed on lots of tiny numbers, and on most odd numbers. In total there were 880,803,839 errors out of the 4,294,967,296 numbers (all floats) that it tried to handle. The one redeeming point for XMVectorCeiling is that these bugs have been known and fixed for a while, but you need the latest Windows SDK (comes with VS 2013) in order to get the fixed 3.06 version. And even the 3.06 version doesn't entirely fix XMVectorRound.
The LiraNuna / glsl-sse2 family of functions were the final set of math functions that were mentioned. The LiraNuna ceil function claimed to handle all floats but it gave the wrong answer on 864,026,625 numbers. That's better than the others, but not by much. I didn't exhaustively test the floor and round functions because it would complicate this article and wouldn't add significant value. Suffice it to say that they have similar errors.

Sources of error
Several of the ceil functions were implemented by adding 0.5 to the input value and rounding to nearest. This does not work. This technique fails in several ways:
1. Round to nearest even is the default IEEE rounding mode. This means that 5.5 rounds to 6, and 6.5 also rounds to 6. That's why many of the ceil functions fail on odd integers. This technique also fails on the largest float smaller than 1.0 because this plus 0.5 gives 1.5 which rounds to 2.0.
2. For very small numbers (less than about FLT_EPSILON * 0.25) adding 0.5 gives 0.5 exactly, and this then rounds to zero. Since about 40% of the positive floating-point numbers are smaller than FLT_EPSILON*0.25 this results in a lot of errors – over 850 million of them!
The 3.03 version of DirectXMath's XMVectorCeiling used a variant of this technique. Instead of adding 0.5 they added g_XMOneHalfMinusEpsilon. Perversely enough the value of this constant doesn't match its name – it's actually one half minus 0.75 times FLT_EPSILON. Curious. Using this constant avoids errors on 1.0f but it still fails on small numbers and on odd numbers greater than one.

NaN handling
The fixed version of _mm_ceil_ps2 comes with a handy template function that can be used to extend it to support the full range of floats. Unfortunately, due to an implementation error, it fails to handle NaNs. This means that if you call _mm_safeInt_ps<new_mm_ceil_ps2>() with a NaN then you get a normal number back. Whenever possible NaNs should be 'sticky' in order to aid in tracking down the errors that produce them.
The problem is that the wrapper function uses cmpgt to create a mask that it can use to retain the value of large floats – this mask is all ones for large floats. However since all comparisons with NaNs are false this mask is zero for NaNs, so a garbage value is returned for them. If the comparison is switched to cmple and the two mask operations (and and andnot) are switched then NaN handling is obtained for free. Sometimes correctness doesn't cost anything.
Here's a fixed version:

    template< __m128 (FuncT)(const __m128&) >
    inline __m128 _mm_fixed_safeInt_ps(const __m128& a)
    {
        __m128 v8388608 = *(__m128*)&_mm_set1_epi32(0x4b000000);
        __m128 aAbs = _mm_and_ps(a, *(__m128*)&_mm_set1_epi32(0x7fffffff));
        // In order to handle NaNs correctly we need to use le instead of gt.
        // Using le ensures that the bitmask is clear for large numbers *and*
        // NaNs, whereas gt ensures that the bitmask is set for large numbers
        // but not for NaNs.
        __m128 aMask = _mm_cmple_ps(aAbs, v8388608);
        // Select a if greater than 8388608.0f, otherwise select the result of
        // FuncT. Note that 'and' and 'andnot' were reversed because the
        // meaning of the bitmask has been reversed.
        __m128 r = _mm_xor_ps(_mm_andnot_ps(aMask, a), _mm_and_ps(aMask, FuncT(a)));
        return r;
    }

With this fix and the latest version of _mm_ceil_ps2 it becomes possible to handle all 4 billion floats correctly.

Conventional wisdom Nazis

Conventional wisdom says that you should never compare two floats for equality – you should always use an epsilon. Conventional wisdom is wrong.

I've written in great detail about how to compare floating-point values using an epsilon, but there are times when it is just not appropriate. Sometimes there really is an answer that is correct, and in those cases anything less than perfection is just sloppy. So yes, I'm proudly comparing floats to see if they are equal.

How did the fixed versions do?

After the flaws in these functions were pointed out fixed versions of _mm_ceil_ps2 and its sister functions were quickly produced and these new versions work better. I didn't test every function, but here are the results from the final versions of functions that I did test:

• XMVectorCeiling 3.06: zero failures
• XMVectorFloor 3.06: zero failures
• XMVectorRound 3.06: 33,554,432 errors on incorrectly handled boundary conditions
• _mm_ceil_ps2 with _mm_safeInt_ps: 16,777,214 failures on NaNs
• _mm_ceil_ps2 with _mm_fixed_safeInt_ps: zero failures
• LiraNuna ceil: this function was not updated so it still has 864,026,625 failures.

Exhaustive testing works brilliantly for functions that take a single float as input. I used this to great effect when rewriting all of the CRT math functions for a game console some years ago. On the other hand, if you have a function that takes multiple floats or a double as input then the search space is too big. In that case a mixture of test cases for suspected problem areas and random testing should work. A trillion tests can complete in a reasonable amount of time, and it should catch most problems.

Test code

Here's a simple function that can be used to test a function across all floats. The sample code linked below contains a more robust version that tracks how many errors are found.

    // Pass in a uint32_t range of float representations to test.
    // start and stop are inclusive. Pass in 0, 0xFFFFFFFF to scan all
    // floats. The floats are iterated through by incrementing
    // their integer representation.
    void ExhaustiveTest(uint32_t start, uint32_t stop, Transform TestFunc,
                        Transform RefFunc, const char* desc)
    {
        printf("Testing %s from %u to %u (inclusive).\n", desc, start, stop);
        // Use long long to let us loop over all positive integers.
        long long i = start;
        while (i <= stop)
        {
            Float_t input;
            input.i = (int32_t)i;
            Float_t testValue = TestFunc(input.f);
            Float_t refValue = RefFunc(input.f);
            // If the results don't match then report an error.
            if (testValue.f != refValue.f &&
                // If both results are NaNs then we treat that as a match.
                (testValue.f == testValue.f || refValue.f == refValue.f))
            {
                printf("Input %.9g, expected %.9g, got %1.9g \n",
                       input.f, refValue.f, testValue.f);
            }
            ++i;
        }
    }

Subtle errors

My test code misses one subtle difference – it fails to detect one type of error. Did you spot it?

The correct result for ceil(-0.5f) is -0.0f. The sign bit should be preserved. The vector math functions all fail to do this. In most cases this doesn't matter, at least for game math, but I think it is at least important to acknowledge this (minor) imperfection. If the compare function was put into 'fussy' mode (just compare the representation of the floats instead of the floats) then each of the ceil functions would have an additional billion or so failures, from all of the floats between -0.0 and -1.0.

The original post that announced _mm_ceil_ps2 can be found here – with corrected code:
This post discusses the bugs in the 3.03 version of DirectXMath and how to get fixed versions:
This post links to the LiraNuna glsl-sse2 math library:
The original reddit discussion of these functions can be found here:
Sample code for VC++ 2013 to run these tests. Just uncomment the test that you want to run from the body of main.
The reddit discussion of this post can be found here.
The hacker news discussion of this post can be found here.

I've written before about running tests on all of the floats. The last time I was exhaustively testing round-tripping of printed floats, which took long enough that I showed how to easily parallelize it and then I verified that they round-tripped between VC++ and gcc. This time the tests ran so quickly that it wasn't even worth spinning up extra threads.

24 Responses to There are Only Four Billion Floats–So Test Them All!

1. Great post! I'd like to see more exhaustive testing of ALL functions with a fairly limited input space

2. might I suggest using a monospaced sans font inside a pre tag instead of an italic serif… much more readable. You can easily create and embed code examples using https://gist.github.com/

   □ My preferred toolset for blog writing handles this badly, but I agree it's important. I went in and fixed up the HTML to display the code sanely. Thanks for the critique.

3. Sadly I lost most of my interest in glsl-sse2, thankfully, it's open source so people can fork it and contribute. It's not the best code in the world, but it's mine and I'm proud of it.

   □ That's too bad. But, I'm glad you open sourced it so that others can use it and learn from it.

4. Ah, the tried and true exhaustive search. Had lunch with Doug Hoffman just this weekend, who was discussing a similar situation he found himself in many years ago. He's written about it at http:/ Funny how the times may change, but the same problems keep getting solved over and over.

   □ Excellent! I was at first surprised they didn't try writing the sqrt() results to a disk and then comparing them against known-good results from another computer, but I guess this was before multi-GB disks were available. When I did an exhaustive comparison of printf/scanf on gcc and VC++ (http://randomascii.wordpress.com/2013/02/07/float-precision-revisited-nine-digit-float-portability/) I used a shared folder and simple compression to let me store all of the results and transfer them between Windows and Linux without wasting too much of my 750 GB drive.
   ☆ The way he talked about the problem, you sometimes have to get creative with your oracle (the component of a highly automated test that is used to verify the correctness of the system under test) when solving problems of this mechanical scale. He had a known good implementation of sqrt() (the article glosses over the details of this part of the problem solving), and simply had the program provide the same input to both implementations. You only have to log an error if the two implementations disagree. Repeat x4billion on a 4096 core "super"computer and you've got yourself a "high volume functional equivalence test".

   Finding an oracle is most of the hard thought when solving problems like that. I've seen groups compare a system to itself on a different platform (testing a port?), a previous version of itself (a literal regression test!), a competitor (want to test to see if the math lib in OpenOffice Calc is as good as the latest Excel? Wire up the same RNG to both pieces of software and compare results), and even a toy/model implementation (As part of an undergrad research project, I wanted to test how well Windows Media Player handles large playlists. My oracle was a playlist manager I wrote in Ruby in a weekend. Write a script to send the same noise to the interface parts that should work the same – add track, move track, delete track in my case – then run diff on the output m3u files. If the diff output is more than just the generator ID lines, you've got a long-sequence bug in either your playlist writer implementation or one of those actions when executed under stress. WMP had just such a problem).

   While we're throwing around links of interest, I'll point you towards some other resources I'm familiar with on this sphere of problems: Part of my undergraduate research has been on High Volume Test Automation (we call it HiVAT). As a culmination of that research, I was invited to attend the 12th Workshop On Teaching Software Testing. You can see all the public artifacts of that workshop at http://wtst.org/?page_id=12. There's an even mix of teaching resources (including what I brought to the table) and research or just distilled rumination on the topic of HiVAT from a practitioner's point of view. There is also a lovely database of articles and dissertations on http://kaner.com/?p=163 that I spent most of my freshman year working on, all of which are related to software testing in some way, and many of which are HiVAT-related. If I think of any others, I'll drop them here after class.

5. Glad to see the "TEST ALL THE FLOATS" is actually a good idea. We do this for a bunch of our own math functions in our SIMD related project :-) Now to see how good/bad we fare on these very specific functions. Thanks for this insightful post :)

6. I'm amazed that people don't try this exhaustive test strategy more often. There are many cases where one can test all possible values (not just floating point). I use an exhaustive test for a date conversion system (used in a financial calculator). I use 32 bit values to store the Julian day number. I go through every value converting day number to day/month/year and back again. I also check that each date succeeded the previous one correctly and that each day of the week is correct. I also check that all last working day, beginning of month and end of month calculations are correct for every possible day number.
   The whole test suite runs in about 2 minutes (includes some perpetual holiday tests) and is run as part of a general financial calculation library regression test.

7. I'm not a programmer any more, but 25 years ago I was helping port a large program, written in FORTRAN. Our first step was to upgrade to the latest compiler, but there were differences in the trigonometric functions (SIN, COS, TAN) at very small angles. Sounds trivial but in our program it made a difference, like when running a simulation for 5 (simulated) minutes at time steps of 0.002 seconds. Lots of fun to track down. Later we moved it from a Univac with 36-bit words to something else, and had to make sure we were getting similar results.

8. Them curly quotes.

   □ Aw dang. Okay, fixed.

9. Maybe a newb comment, but where would you get a RefFunc from? Your "errors" would only be as good as your RefFunc. With every failed test the question is "Is this a problem in the code (SUT) or a problem in the test?"

   □ In this case I had existing reference functions that were presumed to be correct. If the reference functions had errors then (the hope is) they would follow different patterns so that discrepancies would still be found and investigated. It is surprisingly common to have an existing reference function, either on another machine, or with a slower implementation. If the tests find bugs in both functions — better yet. If not then see the link in Casey Doran's comment — write one, using a different algorithm. It's *so* worth it. When I was rewriting the CRT math functions for a game console I found that the existing ones had several bugs. That complicated the testing but didn't stop it (and I chose correctness over

10. Pingback: How to verify that a 64-bit FPU works? (CopyQuery | Question & Answer Tool for your Technical Queries)

11. Reblogged this on The Scoop.

12. There are only four billion. Four billion is plural.

   □ Damn! You're right. And the error is embedded in the URL so I really can't change it. My grade six English teacher must be so ashamed of me right now.

   ☆ unit test next post ;)

13. You can correct the test to catch the -0.5 case by comparing the integer values:

       if (testValue.i != refValue.i &&

   Then it will also catch the case where floor(-0.0f) returns 0.0f. Also, I corrected the functions in glsl-sse2, maybe this weekend I will do some more thorough range testing, at least for the 1-param methods.

   □ That change (comparing the integer representations) is exactly what the full version of my test does. It's in the sample code (linked at the end). That's awesome that you're fixing up glsl-sse2 — very cool.

14. Pingback: January digest: the best of the month (Январская лента: лучшее за месяц)
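As a closing note, the representation-level comparison described in the Subtle errors section and in comment 13 could look roughly like the sketch below. This is a reconstruction rather than the author's downloadable sample code, and the helper name FussyMismatch is made up for illustration:

    // 'Fussy' comparison: compare the integer representations so that
    // -0.0f versus 0.0f counts as a mismatch, unlike a plain float compare.
    bool FussyMismatch(Float_t testValue, Float_t refValue)
    {
        // Bit-for-bit identical results always match.
        if (testValue.i == refValue.i)
            return false;
        // Optionally still treat any NaN versus any NaN as a match
        // (a value is a NaN exactly when it compares unequal to itself).
        const bool testIsNaN = testValue.f != testValue.f;
        const bool refIsNaN = refValue.f != refValue.f;
        if (testIsNaN && refIsNaN)
            return false;
        return true;
    }

Swapping this in for the float comparison in ExhaustiveTest would surface the roughly one billion negative-zero-related results that the ceil functions get wrong, as described above.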
{"url":"http://randomascii.wordpress.com/2014/01/27/theres-only-four-billion-floatsso-test-them-all/","timestamp":"2014-04-18T03:23:24Z","content_type":null,"content_length":"108936","record_id":"<urn:uuid:0faa46b8-7328-4649-a9d8-89d559625360>","cc-path":"CC-MAIN-2014-15/segments/1397609532480.36/warc/CC-MAIN-20140416005212-00156-ip-10-147-4-33.ec2.internal.warc.gz"}
Prove That an Expression is a Multiple of 10

Date: 12/19/2002 at 23:48:40
From: Ritchie
Subject: Prove that an expression is a multiple of 10

If a and b are positive integers, prove that (a^5)*(b) - (a)*(b^5) is a multiple of 10.

I can't find the connection between being a multiple of 10 and the expression given. I thought about factoring the expression into this form: ab(a+b)(a-b)(a^2 + b^2), but I can't do anything with that. I tried factoring and rearranging the expression many different ways but couldn't find the connection.

Date: 12/20/2002 at 04:26:43
From: Doctor Mike
Subject: Re: Prove that an expression is a multiple of 10

Hello Ritchie,

What an interesting problem! My first reaction was to doubt the truth of the theorem, but it really IS true. I had to stretch my grey matter a bit to find out why, though. But that's okay. Stretching is good. There may be many creative ways to prove this, but here's an outline of my approach.

My first step was to look at the special case where b=1. Then the theorem says a^5 - a is a multiple of ten. There was considerable doubt in my mind as to whether even this is true, but I looked at a few examples. 2^5-2 is 30. 3^5-3=240. 8^5-8=32760. You should make a table with 3 columns. The first column is N, the second is N^5, and the third is N^5-N. Fill in this table for N = 1 up to N = 9. Notice that the units digit of all the numbers in column three is zero. Now we know, at least for 1-digit numbers N, that N^5-N is a multiple of ten, and we also know the related fact that N^5 has the same units digit as N does. Think about it.

The next step of the proof is to extend this result to all positive integers N, not just the ones from 1 to 9. Just concentrate on what happens with the units digit when you do the multiplications for N*N*N*N*N = N^5. Do it for something like 13^5 and see how it works. A special simpler result used to prove a more general result is called a Lemma. That's what this is.

The fact that N^5-N is a multiple of ten will help us prove that for (a^5)*(b) - (a)*(b^5). How? We know from the Lemma that a^5 - a is a multiple of ten, and b^5 - b is also some multiple of ten. They might not be the same multiple, so let's use the notation of a^5 - a = (10)(K) and also b^5 - b = (10)(Q) where K and Q are some integers. Then a^5 = a + (10)(K) and b^5 = b + (10)(Q). Now substitute these expressions for a^5 and b^5 into your original expression (a^5)*(b) - (a)*(b^5) and then simplify. You should be able to factor out a ten, which shows that this expression is a multiple of ten.

You should go back and actually do all the things that I have described in my proof outline, so you completely understand the whys and wherefores of each step. Good luck. And thanks for sending in such an interesting problem.

- Doctor Mike, The Math Forum
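For reference, carrying out the substitution the answer describes (with the same K and Q) works out as follows; this just spells out the step left to the reader:

    (a^5)*(b) - (a)*(b^5) = (a + 10K)*b - a*(b + 10Q)
                          = ab + 10Kb - ab - 10Qa
                          = 10*(Kb - Qa)

Since Kb - Qa is an integer, the expression is indeed a multiple of ten.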
{"url":"http://mathforum.org/library/drmath/view/61904.html","timestamp":"2014-04-21T05:03:58Z","content_type":null,"content_length":"7854","record_id":"<urn:uuid:30c1f215-355d-4c69-beeb-a55e2d1b14eb>","cc-path":"CC-MAIN-2014-15/segments/1397609539493.17/warc/CC-MAIN-20140416005219-00421-ip-10-147-4-33.ec2.internal.warc.gz"}
reference for simplicial homology and cohomology

Arghh, it lost my long post - let's write it all up again...

I think I see what you are getting at. The definition of a simplicial complex, as it stands on wiki, seems to me to be very strict. We require that the faces of the simplices not only "match up" but "are the same" whenever they overlap.

Now, in the definition of the singular chain complex, there is nothing to say that for any given σ, the chain -σ is equal to σ with the opposite orientation. For example, if we take the chain given by σ:[0,1]->[0,1], σ(t)=t, there is nothing in the definition of the singular chain complex which says that -σ = σ' where σ' is the map σ':[0,1]->[0,1], σ'(t)=1-t. However, the following probably can be said: [(-1)^n]σ' is homologous to σ where σ' is given by a reordering of vertices with sign n.

Hence, it seems to me that we arrive at this point: we can either think of σ with a negative orientation and -σ to be different in the chain complex, which means we will require a much stricter version of a simplicial complex (and probably require a finer triangulation), or we can acknowledge that they will end up being homologous and work in the chain complex where they are identified (in fact, we may like to identify more than just these to make things even easier on ourselves).

You should just view the simplicial chain complex as being a subcomplex of the singular chain complex, where this inclusion is a quasi-isomorphism. Of course, the correct statement of this depends on the above - we may want to work with the chain complex where -σ and σ with the negative orientation are already identified. But in the end, we should still get quasi-isomorphisms and the same homology and everything should follow smoothly.

Quote:
    Also the cochains are the same as the chains. There are no homomorphisms of chains into the integers as in singular cohomology. There is a single graded group of ordered simplices made into a chain complex in two ways - one with the boundary operator, the other with its transpose.

I didn't follow this - the cochains of the simplicial cochain complex are still homomorphisms of chains into the integers (or other coefficient group) - this is how they are defined! The only difference now is that, in most cases, we are now working with a finitely generated module where the dual will be isomorphic to the original with the obvious identification. Taking the transpose maps gives precisely the cochain complex. We know this is the right cochain complex - for example, the UCT says that the homology determines the cohomology. If we accept that the simplicial homology is isomorphic to the singular homology, then their cohomology must also be the same, so indeed taking the transpose maps and then taking homology gives us precisely the singular cohomology.
{"url":"http://www.physicsforums.com/showthread.php?p=3897770","timestamp":"2014-04-21T02:18:17Z","content_type":null,"content_length":"79840","record_id":"<urn:uuid:840199e5-92c8-4c67-b8ce-d5b626546779>","cc-path":"CC-MAIN-2014-15/segments/1397609539447.23/warc/CC-MAIN-20140416005219-00533-ip-10-147-4-33.ec2.internal.warc.gz"}
Westfield, NJ Math Tutor

Find a Westfield, NJ Math Tutor

My name is Joyce and I am a Rutgers graduate. I graduated with a bachelors degree in mathematics. I am in pursuit of my masters in K-12 mathematics education with a specialization in urban
24 Subjects: including calculus, Microsoft Excel, geometry, Microsoft Word

...You will experience the difference. I have 20+ years in all phases of accounting practices. I majored in accounting for both my graduate and undergraduate study. With WyzAnt, I have tutored more than 50 graduate and undergraduate students in accounting and finance subjects.
14 Subjects: including algebra 1, grammar, Microsoft Excel, geometry

...I have instructed gifted students from first grade up to sixth grade in math topics through my work with Spirit of Math. Finally, I have student taught fifth graders in Spain, as well as second graders in New Jersey. I am qualified to tutor phonics, reading, and spelling mostly through my volunteer experiences.
18 Subjects: including algebra 1, algebra 2, calculus, vocabulary

...Most importantly, every student has succeeded qualitatively rather than just quantitatively, gaining confidence, resourcefulness, and strength of character. My tutoring philosophy begins and ends with the student's sense of self, which I endeavor to illuminate and enrich. Test Preparation: SAT,...
36 Subjects: including ACT Math, Spanish, reading, prealgebra

...Usually, this entails helping a student learn challenging new material or providing a student with reinforcement that will keep them performing at their peak. In a fun and easygoing atmosphere, I teach the student strategies for mastering new material and removing obstacles from their learning p...
17 Subjects: including algebra 1, geometry, SAT math, reading
{"url":"http://www.purplemath.com/Westfield_NJ_Math_tutors.php","timestamp":"2014-04-19T12:25:41Z","content_type":null,"content_length":"23789","record_id":"<urn:uuid:497f2b75-9dee-4a70-9526-be253d48b4f1>","cc-path":"CC-MAIN-2014-15/segments/1397609537186.46/warc/CC-MAIN-20140416005217-00567-ip-10-147-4-33.ec2.internal.warc.gz"}
Obstructions to homotopy sections of curves over number fields

Seminar Room 1, Newton Institute

Grothendieck's section conjecture is analogous to equivalences between fixed points and homotopy fixed points of Galois actions on related topological spaces. We use cohomological obstructions of Jordan Ellenberg coming from nilpotent approximations to the curve to study the sections of etale pi_1 of the structure map. We will relate Ellenberg's obstructions to Massey products, and explicitly compute mod 2 versions of the first and second for P^1-{0,1,infty} over Q. Over R, we show the first obstruction alone determines the connected components of real points of the curve from those of the Jacobian.
{"url":"http://www.newton.ac.uk/programmes/NAG/seminars/2009072814001.html","timestamp":"2014-04-19T17:10:28Z","content_type":null,"content_length":"6136","record_id":"<urn:uuid:79f8b4a1-29fe-45fb-93db-59a2ecaf42a9>","cc-path":"CC-MAIN-2014-15/segments/1397609537308.32/warc/CC-MAIN-20140416005217-00377-ip-10-147-4-33.ec2.internal.warc.gz"}
Folding (also known as reduce or accumulate) is a method of reducing a sequence of terms down to a single term. This is accomplished by providing a fold function with a binary operator, an initial (or identity) value, and a sequence. There are two kinds of fold: a right one and a left one.

Here's the definition for the fold-right function:

 (define (fold-right f init seq) 
   (if (null? seq) 
       init 
       (f (car seq) (fold-right f init (cdr seq))))) 

And here is the definition of the fold-left function:

 ;; recursive function (optimized due to proper tail recursion) 
 (define (fold-left f init seq) 
   (if (null? seq) 
       init 
       (fold-left f (f init (car seq)) (cdr seq)))) 

 (fold-right + 0 '(1 2 3 4)) ; expands to (+ 1 (+ 2 (+ 3 (+ 4 0)))) 
 (fold-left + 0 '(1 2 3 4))  ; expands to (+ (+ (+ (+ 0 1) 2) 3) 4) 

A list reversal function can easily be defined using fold-left:

 (define (reverse l) 
   (fold-left (lambda (i j) (cons j i)) '() l)) 

Further reading:
{"url":"http://community.schemewiki.org/?fold","timestamp":"2014-04-18T21:09:31Z","content_type":null,"content_length":"6400","record_id":"<urn:uuid:6eedb0ba-f8bb-4e3c-a10e-6d37e6cafeda>","cc-path":"CC-MAIN-2014-15/segments/1398223211700.16/warc/CC-MAIN-20140423032011-00170-ip-10-147-4-33.ec2.internal.warc.gz"}
Math Tutors Huntington Station, NY 11746

Experienced and patient Math Tutor

I am a former Wall Street quant with degrees in , Computer Science and Finance. I have been tutoring since my days on the Team at Stuyvesant HS. I was a high school teacher for a brief time and, more recently, I taught Probability and Statistics at Stony...

Offering 10+ subjects including algebra 1, algebra 2 and calculus
{"url":"http://www.wyzant.com/Islandia_Math_tutors.aspx","timestamp":"2014-04-20T21:32:53Z","content_type":null,"content_length":"58643","record_id":"<urn:uuid:96644425-c9b4-4e4f-adca-1227235146ba>","cc-path":"CC-MAIN-2014-15/segments/1397609539230.18/warc/CC-MAIN-20140416005219-00345-ip-10-147-4-33.ec2.internal.warc.gz"}
Do isogenies with rational kernels tend to be surjective?

Dear MO Community, this is a pretty vague title, so let me tell you the precise observation I have made. Consider the family of elliptic curves over $\mathbf{Q}$ having a rational $5$-torsion point $P$. They are given by $$E_d: Y^2 + (d+1)XY +dY=X^3+dX^2,$$ for $d \in \mathbf{Q}^*$ and $P=(0,0)$.

Let $\eta: E_d \rightarrow E_d'$ be the isogeny whose kernel consists exactly of the five rational $5$-torsion points. Now assume that $E_d$ has rank $1$. After modding out torsion the Mordell-Weil group is isomorphic to $\mathbf{Z}$, and hence $\eta$ induces an injective group homomorphism $\mathbf{Z} \rightarrow \mathbf{Z}$ which is either an isomorphism or has cokernel of size $5$.

It seems to me that this map tends to be an isomorphism. To be precise, among all $d$ such that the numerator and denominator are bounded by $100$, there are $3,038$ elliptic curves of analytic rank equal to $1$ (out of $6,087$ total curves), and among those the above map $\mathbf{Z} \rightarrow \mathbf{Z}$ is an isomorphism in $91.2\%$ of the cases.

So I wonder, what should you expect on average? $50\%$? $100\%$? Maybe this was just a coincidence in a small database. Maybe someone has seen a similar behaviour somewhere else. Maybe this is nothing new and I just haven't heard about it. I am curious to read what you think about it. Many thanks.

Tags: nt.number-theory arithmetic-geometry

Similar phenomena are discussed in the highest-ranking answer to this question: mathoverflow.net/questions/113968/…. – René Feb 27 '13 at 22:03

But 2-isogenies will not give new information since the kernel of a 2-isogeny is always rational. Let $\phi$ be an isogeny of prime degree between two rank 1 curves. Then either $\phi$ or its dual will be surjective modulo the torsion. So if you look at all isogenies of a fixed prime degree and order them by conductor then the ratio surjective - not surjective will always be 50-50. – Maarten Derickx Feb 28 '13 at 16:58

@René: I agree with you that the formula (if it is this or anything similar) is hardly ever a $5$-th power in $\mathbb{Q}^{\times}$ if the coordinates $x_0,y_0$ of the point are random rationals. However, they are far from random. For instance, you know from descent that the formula will be a $5$-th power in $\mathbb{Q}_v$ at all good places $v$ (and many more), which translates into stringent conditions on $x_0$ and $y_0$. So I am not sure one can say anything through this approach... – Chris Wuthrich Mar 1 '13 at 20:54

The j-invariant of $E$ is: $$\frac{-(d^4 + 12d^3 + 14d^2 - 12d + 1)^3}{d^5(d^2 + 11d - 1)}$$ The j-invariant of $E'$ is: $$\frac{-(d^4 - 228d^3 + 494d^2 + 228d + 1)^3}{d(d^2 + 11d - 1)^5}$$ So, in a sense, $j(E)$ is just arithmetically simpler. For example, computation shows that the modular degree (using $X_0$) of $E$ is usually the smaller one of the two, so its Heegner points should have smaller height, making $\eta$ surjective. – Dror Speiser Mar 3 '13 at 1:39

I did some computations regarding the étaleness of the isogeny $\phi : E_d \to E'_d$, with $d$ running through all rational numbers of height $\leq 1000$. There are approximately 96.6% values of $d$ such that the Faltings height of $E_d$ is smaller than that of $E'_d$, which (conjecturally) amounts to say that $\phi$ is étale. Of course it's hard to tell but this ratio seems to approach $29/30$. I did the same for the family of curves with rational $7$-torsion point and now the ratio seems to approach $7/8$.
– François Brunault Mar 5 '13 at 9:20

3 Answers

Here is a rather long comment in which I try to justify why I think that the majority will have a surjective map on the Mordell-Weil group. I would not want to guess what the % is.

Let $E$ be an elliptic curve defined over $\mathbb{Q}$ with a rational point of order $p>2$. Write $\varphi : E \to E'$ for the quotient of $E$ by this rational torsion point. Suppose $E$ has rank $1$. Let us also make some simplifying assumptions first: Suppose that the reduction at $p$ is not additive, that the Tate-Shafarevich groups of $E$ and $E'$ do not have $p$-torsion, and that $E'$ has no $p$-torsion. This implies that the curves have everywhere semistable reduction (at least if $p>3$). This again implies that the map $E\to E'$ is etale and the real Neron periods change by $\Omega' = \tfrac{1}{p}\Omega$.

Let us consider the BSD formula which is known to be invariant under this isogeny. The quotient of the two formulae for the $L$-value gives a relation like $$ 1 = \frac{w \cdot h \cdot c \cdot s}{t^2} $$ where first $w = \Omega/\Omega' = p$, then $t=p$ is the quotient of the order of the torsion group of $E$ by the one of $E'$. Next $h$ is the quotient of the regulator of $E$ by the regulator of $E'$. Hence $h=1/p$ if the map $\varphi$ on the Mordell-Weil group is surjective and $h=p$ otherwise. Then $s=1$ is the quotient of the order of Shas and $c$ is the quotient of the Tamagawa numbers of $E$ by the ones of $E'$. So we find in our case that $$ c = \prod_v \frac{c_v(E)}{c_v(E')} \qquad \text{ is $p^2$ if our map is surjective and $1$ otherwise.} $$ (If $s>1$, this has to be divided by $s$.)

Now at all places $v$ the reduction is semistable (apart from when $p=3$ and the type is IV or IV*). For such places the quotient of Tamagawa numbers is easy to compute. If the reduction is non-split, then the quotient is $1$. Otherwise, either $c_v(E)/c_v(E')$ is $p$ or $1/p$. However the second case can only occur if $v \equiv 1 \pmod{p}$. So I would think that the latter is less frequent.

Let $a$ be the number of split places for which the quotient of Tamagawa numbers is $p$ and $b$ the number of split places when it is $1/p$. Then $c = p^{a-b}$. We are asking if $a-b$ is $2$ (surjective case) or $0$. My remark above suggests that $a$ is likely to be larger than $b$. However $a$ and $b$ are not free: in fact $a-b$ is equal to the difference between the dimensions $\hat d$ of the $\hat\varphi$-Selmer group and the dimension $d$ of the $\varphi$-Selmer group. From considering the descents, one also sees now that $d,\hat{d}\leq 2$ and $d+\hat{d} \geq 2$ (still assuming trivial contribution from Shas). This leaves the possibilities $a-b= \hat{d}-d$ to be either $-2,0,2$ where the first option is excluded by the above.

Now to my simplifying assumptions. I think the only one restricting to the non-generic case is that $s=1$. I could imagine that one could push the above argument further. A related result is the fact that for curves $E$ with a $p$-torsion point with $p>3$, it is almost impossible that $\prod c_v(E)$ is not divisible by $p$. Lorenzini shows that there are only finitely many such $E$. Hopefully someone can extend and complete my attempt to answer this.

Dear Chris, thanks for your answer. Unfortunately, I don't think your argument works. See my answer below for my explanation. – Stefan Keil Mar 1 '13 at 13:47

Thanks, Stefan, for your answer. You have thought about it more than I have.
See my comments below your answer. – Chris Wuthrich Mar 1 '13 at 20:56

I recycled my code from the other thread to test this. There are 559 elliptic curves of conductor < 300000 that have rank 1 and a rational 5 torsion point. Of these 559 curves there are 452 for which the map $\mathbb Z \to \mathbb Z$ induced by $\eta$ is surjective (this is about 81%).

I did the same computation for a rational 7 torsion point. The problem now becomes that the dataset is very small because there are only 31 elliptic curves of conductor < 300000 of rank 1 with a rational 7 torsion point. But the remarkable thing is that for 30 of these 31 cases the map $\mathbb Z \to \mathbb Z$ is surjective.

Edit: I updated the results after discovering a small bug in my code that caused some curves in the database to be skipped in the test.

Dear Maarten, thanks for your answer. I also checked all elliptic curves of conductor < 1,000,000 that have rank 1 and a rational 5 torsion point. There are 1109 such curves and $84.8\%$ of them are `surjective'. – Stefan Keil Mar 1 '13 at 13:53

Dear Maarten, I just realized that your database misses a few curves. There are 559 elliptic curves of conductor < 300000 that have rank 1 and a rational 5 torsion point. And 452, i.e. 80.9%, give surjectivity. – Stefan Keil Mar 1 '13 at 15:11

Oh, and I should add that I haven't proved that my database contains all curves up to conductor 1,000,000. It is just very likely, because I searched through all $d\in \mathbf{Q}^*$ such that the numerator and denominator of $d$ is bounded by 50,000. And the last curve of conductor smaller than 1,000,000 in this range has $d=650/4617$. – Stefan Keil Mar 1 '13 at

Apparently there was a bug in my code that explains why my results were different. – Maarten Derickx Mar 2 '13 at 16:39

I tried the same as Chris, but I don't think it gives you an argument for the surjectivity. As it is too long for a comment, I post an answer.

Assume we are given an elliptic curve $E$ over $\mathbf{Q}$ with rational $5$-torsion and denote by $\eta:E \rightarrow E'$ the isogeny modding out the rational $5$-torsion. As I mentioned in my question this curve is given by a rational non-zero number $d$. Write $d=u/v$, with $u$ and $v$ coprime non-zero integers. Now we apply the equation of Cassels and Tate which encodes the invariance of BSD: $$ \frac{\# sha(E/\mathbf{Q})}{\# sha(E'/\mathbf{Q})} = \frac{R_{E'}}{R_E} \cdot \frac{ \# E(\mathbf{Q})^2_{tor} }{\# E'(\mathbf{Q})^2_{tor} } \cdot \frac{P_{E'}}{P_E} \cdot \prod_{p \leq \infty}\frac{c_{E',p}}{c_{E,p}}.$$

Now assume that the elliptic curve has rank 1. As Chris already stated, the regulator quotient equals $5^a$, for $a \in \{\pm 1 \}$, where $a=1$ if and only if the induced map of $\eta$ on the free part of the Mordell-Weil group is surjective. Hence this is the case we are interested in. For the torsion quotient we have that it equals $5^b$, for $b \in \{0,2 \}$, where $b=2$ if and only if $d$ is not a fifth power. This is true for $100\%$ of the cases. For the periods and Tamagawa numbers we have that they are equal to $5^{c-1}$, for $$c=\# \{p \equiv 1(5),\ p \mid u^2+11uv-v^2\} + \# \{p=5,\ p^3 \mid u^2+11uv-v^2 \} - \# \{p,\ p\mid uv \},$$ where $p$ ranges over the finite primes. (The results on the torsion quotient and on the period and Tamagawa quotient (= local quotient) can be found here: http://arxiv.org/abs/1206.1822) (The case $5^3\mid u^2+11uv-v^2$ happens if and only if $u \equiv 7v (25)$.)

Hence, in 100% of the cases we have $$ \frac{\# sha(E/\mathbf{Q})}{\# sha(E'/\mathbf{Q})} = 5 \cdot 5^{\{\pm 1 \}} \cdot 5^c.$$ I don't see any reason why a negative value of $c$, which you should expect quite often, should force $a$ to be $1$, and not be (completely) subsumed in the Sha quotient. So, instead of having a pro argument for $a=1$, this is just a pro argument that the isogeny $\eta$ can alter the $5$-primary part of Sha in an arbitrary way.

(In case you wonder whether $c$ can be positive, look at $u=1$, $v=76971487$. Then $uv$ is prime and $u^2+11uv-v^2$ factors as $-1 \cdot 11 \cdot 31 \cdot 41 \cdot 61 \cdot 131 \cdot 151 \cdot 331 \cdot 1061$, hence $c=7$.)

(In my database of 3,038 curves, there is the following situation: $c=0$ for 247 curves. For all of them $\eta$ is NOT surjective. $c=-2$ for 2258 curves. For 20 of them $\eta$ is NOT surjective. $c=-4$ for 533 curves. For all of them $\eta$ is surjective.)

Of course I agree with you that the local information translates your question into one about the quotient of Shas above. My assumption was that this quotient $s$ is 1 and I was not following the curves in the order they appear in your family. So I - probably wrongly - would have guessed that this quotient is very often 1 for curves of small conductor. If you restrict your attention to those with $s=1$, do you get the same phenomenon as I? How many have $s\neq 1$ in your list? – Chris Wuthrich Mar 1 '13 at 20:48

Your formula shows well that $c$ which is my $-(a-b)$ has a tendency to be negative. Do you also have conditions from the descent on $c$ implied by the fact that your curve has rank $1$? – Chris Wuthrich Mar 1 '13 at 20:49

Dear Chris, among the 3038 curves of rank 1 in my database, 2485 (81.8%) curves have $s=1$. (These are the 247 curves with $c=0$ and the 2238 'surjective' curves with $c=-2$. Hence, among the $s=1$-curves 89.0% of them are surjective.) The remaining 20 curves with $c=-2$ and the 533 curves with $c=-4$ have $s = 1/5^2$, and among them the $c=-4$-curves, i.e. 96.4%, are surjective. So, assuming that $s=1$, I agree that your argument above indicates that $\eta$ is likely to be surjective, but I don't know how often we will have $s=1$ on average. – Stefan Keil Mar 4 '13 at 14:33
{"url":"http://mathoverflow.net/questions/123126/do-isogenies-with-rational-kernels-tend-to-be-surjective/123228","timestamp":"2014-04-16T10:58:10Z","content_type":null,"content_length":"84896","record_id":"<urn:uuid:ef33fc8b-eeb2-4441-baab-62fae08570c3>","cc-path":"CC-MAIN-2014-15/segments/1397609523265.25/warc/CC-MAIN-20140416005203-00160-ip-10-147-4-33.ec2.internal.warc.gz"}
[Numpy-discussion] Slicing a numpy array and getting the "complement" [Numpy-discussion] Slicing a numpy array and getting the "complement" Anne Archibald peridot.faceted@gmail.... Mon May 19 11:20:22 CDT 2008 2008/5/19 Orest Kozyar <orest.kozyar@gmail.com>: > Given a slice, such as s_[..., :-2:], is it possible to take the > complement of this slice? Specifically, s_[..., ::-2]. I have a > series of 2D arrays that I need to split into two subarrays via > slicing where the members of the second array are all the members > leftover from the slice. The problem is that the slice itself will > vary, and could be anything such as s_[..., 1:4:] or s_[..., 1:-4:], > etc, so I'm wondering if there's a straightforward idiom or routine in > Numpy that would facilitate taking the complement of a slice? I've > looked around the docs, and have not had much luck. If you are using boolean indexing, of course complements are easy (just use ~). But if you want slice indexing, so that you get views, sometimes the complement cannot be expressed as a slice: for example: A = np.arange(10) The complement of A[2:4] is np.concatenate((A[:2],A[4:])). Things become even more complicated if you start skipping elements. If you don't mind fancy indexing, you can convert your index arrays into boolean form: complement = A==A complement[idx] = False More information about the Numpy-discussion mailing list
{"url":"http://mail.scipy.org/pipermail/numpy-discussion/2008-May/034021.html","timestamp":"2014-04-16T14:26:52Z","content_type":null,"content_length":"4264","record_id":"<urn:uuid:0c4b0317-bd36-4f6b-9891-acf4cdb17509>","cc-path":"CC-MAIN-2014-15/segments/1397609539665.16/warc/CC-MAIN-20140416005219-00098-ip-10-147-4-33.ec2.internal.warc.gz"}
Logic Question

"Another true statement about real numbers is the following: If $x^2<0$, then $x=23$." How is this a true statement?

Let p = (x^2 < 0) and q = (x = 23). By the definition of a conditional statement, if a true p implies a false q, then the statement p -> q is false. In all the other cases p -> q is true. Since in our case p is false irrespective of what q is (x could be 23 or not), p -> q is true.

The above is the semantic (truth-table) definition of the conditional statement p -> q:

    p | q | p -> q
    T | T |   T
    T | F |   F
    F | T |   T
    F | F |   T

The statement says that for any real x such that $x^2$ is negative, x is 23. That is false if and only if there is some x so that $x^2<0$ but x is not 23. There is no such x, so the statement is true (vacuously).

There is a free download of a book here, which walks through additional similar examples in detail (see the first part on logic and sets): Bobo Strategy - Topology
{"url":"http://mathhelpforum.com/discrete-math/74419-logic-question-print.html","timestamp":"2014-04-20T05:05:49Z","content_type":null,"content_length":"8814","record_id":"<urn:uuid:d8b84a97-0ea4-4f85-bae1-569b6e02a932>","cc-path":"CC-MAIN-2014-15/segments/1397609537864.21/warc/CC-MAIN-20140416005217-00174-ip-10-147-4-33.ec2.internal.warc.gz"}
Inches to Square foot - Page 5 - OnlineConversion Forums

Originally Posted by
    bought a raw material and the size is 2mm x 60" x 80" what is the conversion in square feet, please help

I assume you want the area of the flat 60" x 80" sheet converted to square meters:

    60" x 80" x (0.0254 m/in)² ≈ 3.10 m²

(or you could just measure it in meters and multiply)
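For the record, since the original question asked for square feet, the arithmetic works out as follows:

    60" x 80" = 4800 in²
    4800 in² ÷ 144 in²/ft² ≈ 33.3 ft²
    4800 in² x (0.0254 m/in)² ≈ 3.10 m²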
{"url":"http://forum.onlineconversion.com/showthread.php?t=2425&page=5","timestamp":"2014-04-19T09:53:58Z","content_type":null,"content_length":"67395","record_id":"<urn:uuid:aaf62e40-aa41-4013-90d8-48f325525af3>","cc-path":"CC-MAIN-2014-15/segments/1398223206147.1/warc/CC-MAIN-20140423032006-00206-ip-10-147-4-33.ec2.internal.warc.gz"}
math contest problem on probability.

November 11th 2009, 09:04 PM

Two fair coins are to be tossed once. For each head that results, one fair die is to be rolled. What is the probability that the sum of the die rolls is odd? (Note that if no die is rolled the sum is 0.)

I could think of the cases:
HH: the probability the sum is odd is 18/36
HT: one die rolled, 3/6 is odd
TH: one die rolled, 3/6 is odd

I'm stuck from here.... My knowledge of probability is very basic so could this please be explained as easy as possible?

November 12th 2009, 02:00 AM
mr fantastic (quoting the original post):

Pr(HH) = 1/4, Pr(HT) = 1/4 ..... So the answer will be (1/4)(18/36) + (1/4)(3/6) + .... = ....
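Completing the calculation set up in the reply above (each two-coin outcome has probability 1/4, and the TT case contributes zero because a sum of 0 is even):

    P(odd sum) = (1/4)(18/36) + (1/4)(3/6) + (1/4)(3/6) + (1/4)(0)
               = 1/8 + 1/8 + 1/8 + 0
               = 3/8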
{"url":"http://mathhelpforum.com/statistics/114015-math-contest-problem-probability-print.html","timestamp":"2014-04-25T01:33:55Z","content_type":null,"content_length":"6213","record_id":"<urn:uuid:c1f58773-6776-43cb-bde3-030e3e3b732b>","cc-path":"CC-MAIN-2014-15/segments/1398223207046.13/warc/CC-MAIN-20140423032007-00378-ip-10-147-4-33.ec2.internal.warc.gz"}
qubit measurement-quantum mechanics

- so far i did the multiplication of \[\left(\begin{matrix}a \\ \sqrt{1-a^2}\end{matrix}\right)\left(\begin{matrix}b \space \space \sqrt{1-b^2} \ \end{matrix}\right)\] = \[(ab)^2+2ab\sqrt{(1-a^2)
- The outcome of one measurement relative to a different basis is the dot product of those two measurements. you have not done the dot product correctly
- yep!! but the deadline is 27 feb ... so i haven't looked into it.
- oh I see what you wrote, but It's backwards \[(a|0\rangle+b|1\rangle)\cdot(c|0\rangle+d|1\rangle)=ac+bd\]
- so its just \[ab+\sqrt{1-a^2}\sqrt{1-b^2}\]
- \[\left[\begin{matrix}a&b\end{matrix}\right]\left[\begin{matrix}\sqrt{1-a^2}\\\sqrt{1-b^2}\end{matrix}\right]\] yes, then find the probability from the sum of each product of the amplitudes
- does that mean the result i just square it??
- not the result, but each amplitude, so in this case each term individually gets squared. we could have squared each probability amplitude (the coefficients of |0> and |1>) before taking the dot
- oh thats where i was confusing it so we have \[(ab)^2,1-b^2-a^2-2(ab)^2\]
- , or +
- no, square each term individually to find the probability. remember that in \[a|0\rangle+b|1\rangle\] the probabilities of measuring 1 or 0 are \(a^2\) and \(b^2\), respectively. So the *probability* of measuring the outcome, as we go from one coordinate system to the other is \[\text{Pr}(a,b|c,d)=a^2|0\rangle+b^2|1\rangle\cdot c^2|0\rangle+d^2|1\rangle=a^2b^2+c^2d^2\]
- typo, I meant \[a^2c^2+b^2d^2\]
- \[Pr(a,\sqrt{1-a^2}|b,\sqrt{1-b^2})=a^2|0>+(1-a^2)|1>(b^2)|0>+(1-b^2)|1>\] \[a^2b^2+(1-a^2)(1-b^2)\]
- is this our case?
- is this related to Week II Quantum Computing??
- yes lecture 3,4
- well, more formally \[\large P_\psi(u)=\|\langle u|\psi\rangle\|^2\] \[=\|(b\langle0|+\sqrt{1-b^2}\langle1|)(a|0\rangle+\sqrt{1-a^2}|1\rangle)\|^2\] \[\|ab+\sqrt{(1-b^2)(1-a^2)}\|^2\] from which you get the same result
- oh great first principle i get from the course thanks can we continue with two more problems???
- for 13a) i got 0.5
- I actually didn't get 13b either :p but I can show you 14 I think, let me redo it so I am sure how I did the problem
- okay but did you try and guess the answer for 13 b
- yeah, I tried a few things, still don't quite understand how to get the second state since a|00>+b|00> was entangled and therefore not necessarily able to be decomposed... anyway shall we continue with P14 ?
- it's pretty late around here ... 1:40 am ... i guess i'll have to start tomorrow ... is @TuringTest in the course?
- Heck yeah I am! I hope I survive it :) I really hope you join too @experimentX
- lol ... I managed to finish the first assignment on the last day ... I am always a late runner .. i hope i will be able to complete it :) ... well nice to know there are other people who are doing it. looks like we can cooperate :D
- Yeah, between us that makes 5 I have found. I'm sure it will take more than one mind here to do all the assignments :P
- lol yes thats good, sry for the delay but is there time for us to do 14
- honestly i also have a hard time doing these courses chemistry, qphysics and electricity i have to stay in touch with my college lessons and still do the online courses its like triple major students so its a bit challenging 4 me
- I am with EM ... too
- so for the edX the final exams is it open book ???
- yeah ... good thing about it, And I am good at cheating.
- sry i was actually asking if they allow
- did you try the study app
- Yes the final exams are open-book, but you are not supposed to discuss on OS for example. it's okay for homework though shall we do 14?
- yes we can do it
- it's pretty simple really, after the first measurement we can eliminate the possibilities |10> and |11>, leaving the other two amplitudes now to make sure that their probabilities still add to one we have to normalize the remaining vector. after normalization just square the amplitude of |00> to find the probability of it being the outcome
- i tried to write \[\frac{3}{4}|00+\frac{-\sqrt {5}}{4}|01\]
- so we do that [drawing]
- since it is not normalised
- rather, we divide the vector by \(\sqrt{a^2+b^2}\)
- new state \[\frac{ 3 }{ 4 }\sqrt{\frac{7}{8}}|00-\frac{ 2\sqrt{5} }{ 7 }\sqrt{\frac{7}{8} }|01\] P(00)=9/14
- do we still call it 00, is this correct?
- wow its really easy
- i will take this as notes lol
- thanks @TuringTest @experimentX
- yeah, it's still 00 and so far it is pretty easy; we'll see how long it stays that way
- lol ... i didn't do a damn thing except troll around ... thanks to @TuringTest
- :p well I still want to know how 13b works... we'll have to find someone else I guess
- well i guess we should start with 13 a since the question lays ref to a
- i see from the notes that this is the bell state \[|\phi^+>=\frac{1}{\sqrt{2}}(|00+|11)\]
- still not getting it :(
- \[|\phi^+\rangle=\frac{1}{\sqrt2}(|00\rangle+|11\rangle)\] is just one possible Bell state. To quote the notes: "In fact, there are states *such as* \(|\phi^+\rangle=\frac{1}{\sqrt2}(|00\rangle+|11 \rangle)\) which cannot be decomposed in this way as a state of the first qubit and that of the second qubit." So how can we assume ours is able to be decomposed since we don't know what \(\alpha\) and \(\beta\) are?
{"url":"http://openstudy.com/updates/5127baf2e4b0dbff5b3d4382","timestamp":"2014-04-19T20:02:06Z","content_type":null,"content_length":"227154","record_id":"<urn:uuid:677edcbe-d6ae-41eb-a1cd-2b8bb932cde4>","cc-path":"CC-MAIN-2014-15/segments/1397609537376.43/warc/CC-MAIN-20140416005217-00641-ip-10-147-4-33.ec2.internal.warc.gz"}
Topic: Middle-school textbooks failing students
Posted: Jan 27, 1999 12:07 AM

Detroit News, Saturday, January 23, 1999

Study: Most middle-school math textbooks failing students
By Richard Whitmire / Gannett News Service

WASHINGTON -- Only four of the dozen middle-school math textbooks evaluated by the American Association for the Advancement of Science have been rated satisfactory.

This study released Friday is the latest of several reports suggesting middle schools are doing a poor job teaching both math and science.

"I think the results of our analysis are yet another example of why the middle grades are such a problem," said Gerald Kulm, who oversaw the evaluation for the Association's Project 2061 -- named for the year Halley's Comet returns.

The questions raised by this study resemble the questions arising from last year's Third International Math and Science Study, TIMSS. That study found U.S. elementary students faring well in international comparisons but 8th-graders scoring poorly.

There are two problems, said William Schmidt, national research coordinator for the TIMSS. First, middle schools allow students to keep taking warmed-over arithmetic for far too long. Only about 20 percent of U.S. 8th-graders take algebra, compared with nearly 100 percent of 8th-graders in Japan and Germany. Second, the middle-grades curricula offer a "mile wide, inch deep" approach to both math and science -- serving up far too many topics in too little depth.

The evaluators from the American Association for the Advancement of Science found the same problem. The worst textbooks, Kulm said, offered a laundry list of math subjects, followed by two pages of drills, followed by the next topic. Better textbooks, said Kulm, offered students some real-world exercises. Kulm cited one textbook that challenged students to plan a large-scale bicycle tour. Students had to develop bar graphs and charts predicting the cost involved, depending on the number of cyclists and the distance of the tour.

"The main point is they developed their own data," said Kulm, "so they get to the point where they understand what increases when something else increases."

Another criterion used: What guidance does the textbook offer teachers to make sure students absorb the lessons? The better books offer teachers specific advice on how to design exercises and what questions to ask at the conclusion. By contrast, one textbook, said Kulm, advised teachers if students don't understand the lesson "they should get together and talk about it."

Each textbook was graded on 24 separate criteria. The top-rated textbook was developed with grants from the National Science Foundation, said Kulm.

There are problems with middle-school math and science beyond a tendency to linger too long on reviewing arithmetic, say experts like Schmidt:

-- Middle schools have the worst track record of teachers who teach out of their specialty field. Many states allow middle-school teachers to use the more general elementary-school credentials. In addition, teachers who majored in math and science are scarce and far more likely to teach at the high school level.
-- The middle-school reform movement, sparked by the Carnegie Corporation's influential, decade-old "Turning Points" report, veered off course. Middle schools adopted the Carnegie advice on making adolescents feel more comfortable at school -- dividing large schools into interdisciplinary teams, for example. But most middle schools never raised academic objectives. -- Middle-school math tests are far too easy, concluded a study released last year by the American Federation of Teachers. About 90 percent of the questions asked on commonly used 8th-grade math tests fall into the "easy" category, according to an analysis by the teachers union. In the French and German tests studied, about half the questions were rated easy. -- The middle-school problem worsens in the South: Nationally, 39 percent of 8th-graders taking the National Assessment of Educational Progress score below basic; in Southern states, 50 percent score below basic, according to a report released last year by the Southern Regional Education Board. Sue Swaim, who oversees the National Middle Schools Association, encouraged schools to re-examine their curricula. Referring to the TIMSS criticism about the "mile wide, inch deep" curricula, she said, "Sometimes, they find that less is more." The four texts passing muster with the group, ranked in order, include: Connected Mathematics (Dale Seymour Publications); Mathematics in Context (Encyclopedia Britannica Educational Corp.); MathScope (Creative Publications); and Middle Grades Math Thematics (McDougal Littell). Those rated as unsatisfactory, also ranked in order, include: Mathematics Plus (Harcourt Brace & Company); Middle School Math (ScottForesman-Addison Wesley); Math Advantage (Harcourt Brace & Company); Heath Passport (McDougal Littell); Transition Mathematics (ScottForesman); Mathematics Applications and Connections (Glencoe/McGraw Hill); Middle Grades Math (Prentice Hall). Copyright 1999, The Detroit News PRESS RELEASE -- AAAS [This accompanied the article above.] Contact: Eileen Kugler American Association For The Advancement Of Science Phone: 703-644-3039 Few Middle School Math Textbooks Will Help Students Learn, Says AAAS Project 2061 Evaluation Anaheim, Calif. -- In a rigorous analysis of 12 middle school mathematics textbooks, only four recently published series received high ratings, while the other more well-established textbooks were rated as unsatisfactory, according to Project 2061, the long-term math and science education reform initiative of the American Association for the Advancement of Science "The good news is that there are excellent math textbooks now available for middle school students. It is imperative that these books become the textbook of choice in more classrooms if we are to reach our goal of developing students who are math and science literate," stated Dr. George Nelson, director of Project 2061. The evaluation was conducted by independent analysis teams made up of classroom teachers and college and university faculty who had extensive knowledge of mathematics content and of research on teaching and learning. Using a procedure developed by Project 2061, the analysts evaluated textbooks on how likely they are to help students achieve six key learning goals from Project 2061's landmark Benchmarks for Science Literacy. These benchmarks are consistent with the widely adopted standards developed by the National Council of Teachers of Mathematics. 
The benchmarks deal with number and geometry concepts and related skills, as well as algebra equation concepts, and algebra graph concepts. A key feature of the Project 2061 evaluation is its analysis of how successfully the textbooks supported teachers in their efforts to help students learn. The analysis teams reviewed specific instructional strategies that textbooks provide for each benchmark idea or skill. To evaluate the quality of these strategies, the analysts applied a set of 24 research-based instructional criteria to specific lessons, activities, teacher notes, assessments and other evidence. "AAAS conducted this study because we know that textbooks are the critical link to implementing the curriculum. Carrying on the mantle of leadership that we assumed a decade ago in publishing Science for All Americans, Project 2061 has made a major contribution to education reform efforts with this standards-based textbook analysis," stated Dr. M.R.C. Greenwood, Chancellor of the University of California, Santa Cruz, and President of AAAS. Good News: There are a few excellent middle-grades mathematics textbook series. The best series contains both in-depth mathematics content and excellent instructional support. Most of the textbooks do a satisfactory job on number and geometry skills. A majority of textbooks do a reasonable job in the key instructional areas of engaging students and helping them develop and use mathematical ideas. Bad News: There are no popular commercial textbooks among the best rated. Most of the textbooks are inconsistent and often weak in their coverage of conceptual benchmarks in mathematics. Most of the textbooks are weak in their instructional support for students and teachers. Many textbooks provide little development in sophistication of mathematical ideas from grades 6 to 8, corroborating similar findings of the Third International Mathematics and Science Study. A majority of textbooks are particularly unsatisfactory in providing a purpose for learning mathematics, taking account of student ideas, and promoting student thinking. "States and school districts are bombarded with information from textbook publishers claiming their materials are aligned with benchmarks and standards. The Project 2061 analysis gives busy educators the solid information they need to make informed choices about which textbooks will help their students improve their understanding of and skills in mathematics," stated Dr. Gerald Kulm, who led Project 2061's evaluation. "It's important to note that our analysis describes a textbook's potential for helping students learnà ³to be used effectively, excellent materials require excellent and well-trained teachers." As benchmarks and standards for student learning become the focus of education reform efforts in more states and school districts, textbooks play an increasingly important role. The National Education Goals Panel, for example, has characterized textbooks as "the nation's de facto curriculum," calling for "an independent and credible 'consumer reports' review service" to inform educators, policymakers, and the general public about "the degree to which instructional materials are aligned with challenging academic standards." This evaluation answers that call. Findings from the middle school math textbook evaluation were released at the AAAS Annual Meeting in Anaheim, CA, on January 22. Drafts of the technical reports on each of the textbooks are available from Project 2061. 
This analysis is the first in a series of evaluations of mathematics and science textbooks to be conducted by Project 2061. The benchmarks-based approach to evaluation was developed with funding from the National Science Foundation. Funding for the middle school math book evaluation was provided by the Carnegie Corporation of New York. Contacts: Eileen Kugler Jan. 20-24 -- AAAS Press Room in Anaheim, 714-703-0122; Mary Koppal: Jan 21-26 -- Anaheim Hilton, 714-750-4321 Jerry P. Becker Dept. of Curriculum & Instruction Southern Illinois University Carbondale, IL 62901-4610 USA Fax: (618)453-4244 Phone: (618)453-4241 (office) E-mail: jbecker@siu.edu
{"url":"http://mathforum.org/kb/thread.jspa?threadID=733992","timestamp":"2014-04-17T13:39:39Z","content_type":null,"content_length":"25115","record_id":"<urn:uuid:a5568db2-490f-44c1-bed5-b93f3b6bae13>","cc-path":"CC-MAIN-2014-15/segments/1397609530131.27/warc/CC-MAIN-20140416005210-00003-ip-10-147-4-33.ec2.internal.warc.gz"}
Maximum fixed genus Bipartite graphs

Say $B_{n,n}$ is a bipartite graph on $2n$ vertices with $n$ vertices of color $1$ and with $n$ vertices of color $2$. What is the maximum number of edges that a genus $g$ graph $B_{n,n}$ can have? Are there any good references for this topic? co.combinatorics graph-theory topological-graph-theory

It's an easy exercise to show that if $B$ is simple and bipartite and embeds in an orientable surface of genus $g$ then $$ |E(B)| \le 2|V(B)| - 4 + 4g $$ and equality holds if and only if each face has degree four. – Chris Godsil Jun 7 '11 at 23:46
I think you can post it as an answer!! – J.A Jun 7 '11 at 23:49
Could you please provide a reference as well? – J.A Jun 7 '11 at 23:53

1 Answer
Let's flesh out Chris Godsil's answer after the recent bump. Euler's formula tells us that $V-E+F=2-2g$, where $V$, $E$ and $F$ are the number of vertices, edges and faces respectively in an embedding of $G$. The smallest possible faces in an embedding of a bipartite graph are 4-cycles, so, by counting the edges round each face, $E \geq 4F/2$, i.e. $F \leq E/2$, with equality if and only if all faces are 4-cycles. Rearranging gives $E \leq 2V - 4 + 4g$.
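As a quick sanity check of the bound above, here is a small Python helper (mine, not from the thread); the function name and example values are purely illustrative.

def max_edges_bipartite(v, g):
    """Upper bound |E| <= 2|V| - 4 + 4g for a simple bipartite graph
    embedded in an orientable surface of genus g."""
    return 2 * v - 4 + 4 * g

# Planar case (g = 0): K_{3,3} has 9 edges but the bound for 6 vertices is 8,
# which is one way to see that K_{3,3} is not planar.
print(max_edges_bipartite(6, 0))   # 8
# On the torus (g = 1) the bound rises to 12, and K_{3,3} does embed there.
print(max_edges_bipartite(6, 1))   # 12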
{"url":"http://mathoverflow.net/questions/67185/maximum-fixed-genus-bipartite-graphs","timestamp":"2014-04-19T19:43:52Z","content_type":null,"content_length":"53963","record_id":"<urn:uuid:7c094ac4-6d5c-4776-a97a-415c33f172e6>","cc-path":"CC-MAIN-2014-15/segments/1397609537376.43/warc/CC-MAIN-20140416005217-00656-ip-10-147-4-33.ec2.internal.warc.gz"}
black hole vs big bang I didn't see Mordred already replied. Had to do a chore and got sidetracked half way thru. I'll leave this as it is anyway. The "big bang" as far as I understand, refers to a time when the universe was very small. ... Well the term "big bang" is misleading because it suggests an explosion from a point outwards into empty space. That is not the idea. "start of expansion" is more neutral. In the standard picture of the early stage of expansion all space was filled with a uniform high density. There was no empty space and there was no one favored location where stuff was concentrated. We don't know 'very small' but we do think "very dense". High density does not by itself cause collapse to a black hole. There is a kind of "tug-of-war" contest between the expansion rate and the density. In popularizations they don't tell you about that. They only tell you that IN NON-EXPANDING SPACE a certain density concentrated at some location will result in formation of a black hole at that location. They don't discuss other cases. Suppose the expansion rate overwhelms the density. We have no proof that the universe as a whole was "very small" at the start of expansion. According to the mainstream expansion cosmology model it could have been infinite volume, or a large finite volume at the start. Measurements are not yet good enough to put a number on the current volume of the universe as a whole. We only know the size of the currently observable part of it. The main thing is that it was very DENSE. So the currently observable portion would have been concentrated in a very small volume. What we can currently see, out of the limits of observation, was very small at the start. What started the expansion is so far not known. There are various theories.
{"url":"http://www.physicsforums.com/showthread.php?p=4288755","timestamp":"2014-04-16T22:12:57Z","content_type":null,"content_length":"83743","record_id":"<urn:uuid:865f5345-f57a-4556-a7e8-eeb9dedaca89>","cc-path":"CC-MAIN-2014-15/segments/1397609525991.2/warc/CC-MAIN-20140416005205-00265-ip-10-147-4-33.ec2.internal.warc.gz"}
L'Hopitals Rule

Here's the problem: find the limit of (e^t - 1) / t^3 as t approaches 0. I thought to use L'Hopital's Rule 3 times to get lim e^t / 6. Then it would no longer be indeterminate (0/0) and I would have 1/6 as my answer. But my book says the answer is infinity. What did I do wrong?

You need to check the hypothesis each time you use L.H's rule. After one application you get $\lim_{t \to 0}\frac{e^{t}-1}{t^3}=\lim_{t \to 0}\frac{e^{t}}{3t^2}$. This limit is not of the form $\frac{0}{0}$ or $\frac{\infty}{\infty}$, so we CANNOT use L.H's rule again. Now if we take the limit, the numerator goes to 1 and the denominator goes to zero from both sides, so $\lim_{t \to 0}\frac{e^{t}-1}{t^3}=\lim_{t \to 0}\frac{e^{t}}{3t^2}=\infty$.

Wait, I thought dividing by zero doesn't exist. You're saying 1/0 is infinity? Okay, I graphed it on my calculator so now it makes sense. Is it always a good idea to check like that? Thank you both for your help.
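To make the "check the hypothesis before each application" point concrete, here is a short SymPy sketch (mine, not from the thread) that evaluates the limit and the intermediate form directly.

import sympy as sp

t = sp.symbols('t')
f = (sp.exp(t) - 1) / t**3

# The original limit: SymPy confirms it diverges to +infinity as t -> 0.
print(sp.limit(f, t, 0, '+'))      # oo

# After one (valid) application of L'Hopital's rule we are left with
# e^t / (3 t^2), which has the form 1/0, not 0/0, so the rule
# cannot be applied again.
g = sp.exp(t) / (3 * t**2)
print(sp.limit(g, t, 0))           # oo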
{"url":"http://mathhelpforum.com/calculus/104570-l-hopitals-rule.html","timestamp":"2014-04-16T17:04:41Z","content_type":null,"content_length":"49733","record_id":"<urn:uuid:56a93b2b-e817-4e54-931f-1c7991eb0200>","cc-path":"CC-MAIN-2014-15/segments/1397609533957.14/warc/CC-MAIN-20140416005213-00193-ip-10-147-4-33.ec2.internal.warc.gz"}
impulse response interpretation 3 Answers Hi Elige, thanks for the contribution once again. I would like to upload some images on what i am talking about here, indeed, but I can't seem to find a way to attach them on my post. 3 Comments You can do this by uploading a picture to the web (I upload to my Picasa website) and then you just put the picture URL inside << www.something.com/example/mypic.jpg >>. You can add this as an edit to the bottom your original question. Let me know if that doesn't work. Hi Elige, I managed to upload some pictures, thanks for the hint. Those impulse responses do look a bit strange. Are those still plotted using "plot(time,abs(y))" or "plot(time,y)" with no ABS? I think it will be easier for me to diagnose if you either A) copy/ paste the complex FR_data into another answer here or B) supply the function that was used to create the FR_data. The IR plots are plotted using plot(abs(y)). So no time on x axis just samples. The function I am using to generate my FR_data is: function [G,f]=planeFR(wx,wy,h,res) % syms u v; % Pinf=zeros(1,length(k)); % P=zeros(1,length(k)); % for i=1:1:length(k) ul= cos(psi) * sqrt( (R1+R2)./(2*l*R1*R2) ).* wx; vl= sqrt( (R1+R2)./(2*l*R1*R2) ).* wy; % A=(1j*B*R12*exp(-1j*k(i)*(R1+R2)))/(2*(R1+R2)); % I1=int(exp(-1j*(pi/2)*(u^2)),u,-ul,ul); % I2=int(exp(-1j*(pi/2)*(v^2)),v,-vl,vl); [Cul Sul]=fcs(ul); [Cvl Svl]=fcs(vl); I1=2*( Cul - 1j*Sul); I2=2*( Cvl - 1j*Svl); G = ( 1j* I1.* I2 )/2; % semilogx(k,abs(G),'r'); where fcs is taken from http://www.mathworks.com/matlabcentral/fileexchange/6580 in order to calculate the fresnel integrals. I use h=2, res=1 and I play with wx and wy. What this function gives me is the frequency response I was talking about. 2 Comments Can you give me an example of the "wx" and "wy" you used to create those figures... just so I am in the same ballpark range. Hi elige, I created those pictures by inputing, wx=wy=2, wx=wy=0.5 and wx=wy=0.2 No products are associated with this question. impulse response interpretation Hi to everybody, I have implemented a little function that calculates the impulse response from a complex dataset (FR_data). I am trying to test how different FR_data influnce the impulse response so I plot the FR_data over frequency using plot(frequency,abs(FR_data)) and after I ran the routine and plot the results again using plot(time,abs(y)). the routine looks like: function y=IR(FR_data) I notice that when I boost the higher frequencies in my FR_data the peak of the impulse response gets closer to time 0 and increases in amplitude and is somehow shorter. When I boost the low frequencies then the peak of the impulse response gets further away from time 0, decreases in amplitude and is wider. Can anybody enlighten me please on if this is normal behaviour? Furthermore, all the impulse responses exhibit a very very short and sharp maximum amplitude on time 0 (actually it extends for 2-3 samples, then dies out and then the impulse response follows. Is there a way to interpret this? Any input would be greatly appreciated. thank you for your patience A small addition that might explain better what I am talking about. My datasets looks like: https://picasaweb.google.com/106674205971471180972/Matlab?authkey=Gv1sRgCKjU47ij4922SQ#5743069984065489538 they represent complex data that extend from 1 Hz to 20 kHz with a step of 1 Hz. So 20000 data. When I plot abs of those in a semilogx I get: https://picasaweb.google.com/106674205971471180972/ Matlab?authkey=Gv1sRgCKjU47ij4922SQ#5743069982822662066. 
They represent 3 different FR_data sets. The x axis represents frequency. After I run my routine I get IRs that look like: https://picasaweb.google.com/106674205971471180972/Matlab?authkey=Gv1sRgCKjU47ij4922SQ#5743069982995627554 and a zoomed version in the beginning of the x axis to actually see the IRs: https://picasaweb.google.com/106674205971471180972/Matlab?authkey=Gv1sRgCKjU47ij4922SQ#5743070000860043586 The x axis here represents the samples (not the time). 1 Comment A note of clarification (only because I helped in a previous post: http://www.mathworks.com/matlabcentral/answers/38316-obtain-the-impulse-response-from-a-frequency-response) is that FR_data represents the non-zero, positive frequency amplitude values in the frequency domain. I would make one small modification to your function (just so it is more applicable to other situations in the future). Change your line from "fliplr(FR_data(1:end-1))" to "fliplr(conj(FR_data (1:end-1)))" in order to account for the possibility that FR_data is complex (instead of only real valued). The fact that your pulse gets narrower with a larger contribution of high frequencies or wider with a larger contribution of low frequencies seems normal. But.. is it possible that you post a picture of "plot(time,y)" (without the "abs") or copy/paste the values associated with a high-frequency and low-frequency version of "FR_data" ? I reran through the process of generating the Frequency Response data (G) using your code and the one from the FEX just to see if there could have been any steps that were not properly handled in converting it to the time-domain... but I still got the same bizarre looking Impulse Response you provided pictures of. As I am not familiar with the type of FR function you are using, I cannot really comment on why the shape looks so bizarre... but I can tell you that the "spike" at your first time-domain data sample is basically equal to mean(real(G)). This is simply a consequence of having real frequency amplitudes that have a static shift, in this case they are all pretty much centered around 1... so you get a spike with amplitude 1 at your first time-domain data sample. If you were to subtract your frequency-domain data by 1, you would get rid of this spike in the time-domain. Not sure if I have any other helpful advice if you are expecting a different shaped IR in the time domain without knowing where this FR formulation is coming from. I forgot to mention, that I would change your : because your first non-zero frequency will not always be 1 (only when res=1 is that valid). I am only going to add the next bit just so you can double check the way you create your symmetric frequency response data (a little more general than my previous post): N = numel(G); % number of non-zero, positive frequency amplitudes df = (f(end)-f(1))/(N-1); % frequency increment Nyq = f(end); % Nyquist frequency Fs = 2*Nyq; % Sample frequency %Fs = 2*N*df; %<--- should be same as above newN = 2*N; % Number of samples in the full frequency response spectrum %newN = Fs/df; %<--- should be same as above newG = zeros(newN,1); % Pre-allocate full spectrum array newG(2:1+N) = G; % populate non-zero, positive frequency amplitudes newG(2+N:end) = fliplr(conj(G(1:end-1))); % neg. 
frequency amplitudes % newG = newG - mean(real(newG)); %<---- remove that spike in time-domain % newG = newG*Fs; %<---- correct time-domain amplitudes newf = -Nyq : df : Nyq-df; % Plot Frequency Response % figure; plot(newf, abs(newG)); % Plot Impulse response t = (0:newN-1)/Fs; % time g = ifft(newG); figure; plot(t,g); 1 Comment Elige, first of all, I would like to thank you for all the time you spent on my problem. My FR_data give me the frequency response of the reflections from a surface when hit by a sound wave. They are normalized around 1, that is why all the frequency responses converge to that point. When the surface is big (wx,wy) compared to the distance (h) of the source of the sound wave from the object the frequency response gets its maximum at low frequencies. When wx, wy are small compared to h then the frequency response is shifted towards higher frequencies. I was asked to get the impulse responses out of those frequency responses, but I was not able to interpret them. I think I understand what you said about the spike at the beginning of the IRs but still, I cannot understand why the second spikes are getting further and further from the origin as wx,wy increasing (or in other words when the FRs are extending on a wider range of frequency). Do you have any recommendation on any reference I could study in order to understand more on how to interpret IRs?
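For readers who want to try the same construction outside MATLAB, here is a rough NumPy sketch of the spectrum-mirroring step described above; the variable names mirror the MATLAB snippet, and the toy frequency response is made up purely for illustration.

import numpy as np

# Toy one-sided frequency response: N amplitudes at the non-zero,
# positive frequencies df, 2*df, ..., Nyq (replace with your own FR data).
N = 2000
df = 1.0                           # frequency increment in Hz
f = df * np.arange(1, N + 1)
G = 1.0 / (1.0 + 1j * f / 500.0)   # made-up complex response

Nyq = f[-1]
Fs = 2 * Nyq                       # sample rate implied by the Nyquist frequency
newN = 2 * N                       # length of the full (two-sided) spectrum

newG = np.zeros(newN, dtype=complex)
newG[1:N + 1] = G                          # positive frequencies
newG[N + 1:] = np.conj(G[:-1])[::-1]       # mirrored negative frequencies

g = np.fft.ifft(newG)              # impulse response (should be close to real)
t = np.arange(newN) / Fs
print(np.max(np.abs(g.imag)))      # residual imaginary part; ideally small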
{"url":"http://www.mathworks.es/matlabcentral/answers/38465-impulse-response-interpretation","timestamp":"2014-04-19T04:25:18Z","content_type":null,"content_length":"43384","record_id":"<urn:uuid:d81619a0-5468-4bd3-8af0-6680dbcf50f7>","cc-path":"CC-MAIN-2014-15/segments/1398223202457.0/warc/CC-MAIN-20140423032002-00647-ip-10-147-4-33.ec2.internal.warc.gz"}
Treatise On Analysis Vol-Ii 16 BAIRE'S THEOREM AND ITS CONSEQUENCES 81 finite subsets of E, but taking E with the weak topology, so that this topology is defined by the seminorms pXty(U)= \(U • x\y)\ for all x,y in E. This topology is coarser than the strong topology. (a) Show that the weak topology is not metrizable (same method as in Problem 8(a)). (b) The unit ball ^ in &(E; E) is metrizable and compact in the weak topology (method of (12.5.7)). (c) The bounded sets in &(E; E) are the same for the weak topology as for the strong (d) The mapping U\~*U* of &(E; E) into itself is continuous with respect to the weak topology. (e) Show that the mapping (U, V) -* UV of V x <$ into tf is not continuous for the weak topology. (If fe)nez is a Hilbert basis of E indexed by Z, consider the "shift operator" U defined by U- en — en+1 for all n eZ, and its powers U* and U~n.) The topology induced on the unitary group U(E) by the weak topology is identical with that induced by the strong topology, and hence is compatible with the group structure. Also U(E) is not closed in f for the weak topology. (f) Show that, for each U0 e &(E; E), the linear mappings Fh-» U0 V and V\-+ VUQ are continuous on &(E; E) for the weak topology. (g) Let Jt be a submonoid of *€ (i.e., Jf is stable under multiplication). Show that the closure of Jt with respect to the weak topology on JS?(E; E) is a compact monoid, which is commutative if Jt is commutative. 10. The notation is the same as in Problem 9. (a) Let UeV. Show that the relations U - x = jc, (*| U • x) = (x | *), and U* - x = x are equivalent. (b) Deduce that the idempotent operators in V are the orthogonal projections (Sections 6.3 and 11.5.) 11. Let E be a separable Hilbert space, Jt a submonoid containing the identity 1E which is weakly closed (and therefore weakly compact) in *$ (notation of Problem 8). For each x e E, the orbit of x under Jt is the set Jt - x of vectors U - x where U e Jt\ it is weakly compact in E. We say that x is a flight vector with respect to Jt if 0 e Jt - x, and that x is a reversible vector with respect to Jf if for each Ue Jt there exists Ve Jit such that VV • x = x. Let F(^f) (resp. R(uO) denote the set of flight (resp. reversible) vectors with respect to Jt. If x is reversible, then ||£/*x|| = \\x\\ for all UzJt. (a) Every orbit Jt • x contains a minimal orbit N (Section 12.10, Problem 6), and every y e N is reversible with respect to Jt. Let U e Jt be such that U • x = y e N, and let <stf be the weakly closed submonoid generated by 1E and U: this submonoid jtf is commutative. Let pc^-^cN be a minimal orbit with respect to jtf, and let Fe jaf be such that V- y = z e P. Show that there exists We jtf such that VUW - z = z; deduce that the vector x — W - z is a flight vector, and hence that every jc e E is the sum of a flight vector and a reversible vector belonging to Jt • x. (b) Show that if x e E is reversible with respect to Jt then x is also reversible with respect to every weakly closed submonoid ^T, (Split up x into the sum of a vector y e R( Jf) belonging to JT • x and a vector z e F(^); use the fact that if a, b are two distinct vectors in E with the same norm, then ||i(*M~ b)\\ < \\a\\ = \\b\\, and deduce that y = x.) Deduce that if x e R(^), then given any U e Jt there exists an element V in the weakly closed submonoid of Jt generated by 1E and U\ such that VU-x^x. group of a right Cauchy
{"url":"http://archive.org/stream/TreatiseOnAnalysisVolII/TXT/00000100.txt","timestamp":"2014-04-20T20:02:15Z","content_type":null,"content_length":"14092","record_id":"<urn:uuid:7fa438e2-97a3-4bcc-a1a8-1ff990827356>","cc-path":"CC-MAIN-2014-15/segments/1397609539066.13/warc/CC-MAIN-20140416005219-00225-ip-10-147-4-33.ec2.internal.warc.gz"}
School of Mathematics Faculty Publications. Pre-prints by faculty members in the School of Mathematics (http://hdl.handle.net/1853/26891), feed retrieved 2014-04-20T21:32:43Z.

On a transformation of Bohl and its discrete analogue. Harrell, Evans M., II; Wong, Manwah Lilian (2013). http://hdl.handle.net/1853/48752
Fritz Gesztesy's varied and prolific career has produced many transformational contributions to the spectral theory of one-dimensional Schrödinger equations. He has often done this by revisiting the insights of great mathematical analysts of the past, connecting them in new ways, and reinventing them in a thoroughly modern context. In this short note we recall and relate some classic transformations that figure among Fritz Gesztesy's favorite tools of spectral theory, and indeed thereby make connections among some of his favorite scholars of the past, Bohl, Darboux, and Green. After doing this in the context of one-dimensional Schrödinger equations on the line, we obtain some novel analogues for discrete one-dimensional Schrödinger equations. First published in Proceedings of Symposia in Pure Mathematics in Volume 87, 2013, published by the American Mathematical Society. DOI: 10.1090/pspum/087/01433

Asymptotic Almost Periodicity of Scalar Parabolic Equations with Almost Periodic Time Dependence. Shen, Wenxian; Yi, Yingfei (2009). http://hdl.handle.net/1853/31312

Dynamics of Almost Periodic Scalar Parabolic Equations. Shen, Wenxian; Yi, Yingfei (2009). http://hdl.handle.net/1853/31311

Random Restarts in Global Optimization. Hu, X.; Shonkwiler, R.; Spruill, M. C. (2009). http://hdl.handle.net/1853/31310
In this article we study stochastic multistart methods for global optimization, which combine local search with random initialization, and their parallel implementations. It is shown that in a minimax sense the optimal restart distribution is uniform. We further establish the rate of decrease of the ensemble probability that the global minimum has not been found by the nth iteration. Turning to parallelization issues, we show that under independent identical processing (iip), exponential speedup in the time to hit the goal bin normally results. Our numerical studies are in close agreement with these findings.
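As a loose illustration of the multistart idea described in the last abstract (uniform random initialization followed by local search), here is a small Python sketch; the bumpy objective function and the use of SciPy's local minimizer are my own choices, not part of the paper.

import numpy as np
from scipy.optimize import minimize

def objective(x):
    # A bumpy 1-D test function with many local minima.
    return np.sin(3 * x[0]) + 0.1 * x[0] ** 2

rng = np.random.default_rng(0)
best = None
for _ in range(20):                      # 20 independent restarts
    x0 = rng.uniform(-10, 10, size=1)    # uniform restart distribution
    res = minimize(objective, x0)        # local search from the random start
    if best is None or res.fun < best.fun:
        best = res

print(best.x, best.fun)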
{"url":"https://smartech.gatech.edu/feed/rss_1.0/1853/26891","timestamp":"2014-04-20T21:32:43Z","content_type":null,"content_length":"5809","record_id":"<urn:uuid:23192289-9a74-434e-8931-9451385e65b4>","cc-path":"CC-MAIN-2014-15/segments/1397609539230.18/warc/CC-MAIN-20140416005219-00474-ip-10-147-4-33.ec2.internal.warc.gz"}
Linear Curved Trajectories [Archive] - OpenGL Discussion and Help Forums 06-18-2002, 03:32 PM I need to be able to move many objects from point a to b avoiding fixed objects on route. Initially I used routines that determined the objects proximity to those of the fixed objects ... it worked but the result looked odd. What I needed was a predetermined curved trajectory specified by a start point, end point and one or two control points ... bezier sounded like the solution and the trajectories now look much more acceptable ... but incrementing the object's position is non-linear with respect to the bezier curve. Does anyone have any suggestions of another formula that I could use or any idea how to retrieve linearly spaced segments of a bezier curve ?
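One common way to get the roughly evenly spaced points the post is asking about is to sample the Bézier densely, accumulate chord lengths, and then invert that arc-length table. The Python sketch below is my own illustration of that idea for a cubic curve, not code from the thread.

import numpy as np

def cubic_bezier(p0, p1, p2, p3, t):
    t = np.asarray(t)[:, None]
    return ((1 - t) ** 3 * p0 + 3 * (1 - t) ** 2 * t * p1
            + 3 * (1 - t) * t ** 2 * p2 + t ** 3 * p3)

def evenly_spaced(p0, p1, p2, p3, n_points, n_samples=1000):
    # Dense sampling in the curve parameter t.
    t = np.linspace(0.0, 1.0, n_samples)
    pts = cubic_bezier(p0, p1, p2, p3, t)
    # Cumulative chord length approximates arc length along the curve.
    seg = np.linalg.norm(np.diff(pts, axis=0), axis=1)
    s = np.concatenate([[0.0], np.cumsum(seg)])
    # Invert: find parameter values at equally spaced arc lengths.
    targets = np.linspace(0.0, s[-1], n_points)
    t_even = np.interp(targets, s, t)
    return cubic_bezier(p0, p1, p2, p3, t_even)

p0, p1, p2, p3 = map(np.array, ([0, 0], [0, 5], [5, 5], [5, 0]))
print(evenly_spaced(p0, p1, p2, p3, 5))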
{"url":"http://www.opengl.org/discussion_boards/archive/index.php/t-144257.html","timestamp":"2014-04-20T21:22:51Z","content_type":null,"content_length":"13200","record_id":"<urn:uuid:15aedbd5-6ee4-4c64-9be2-3ae361baac92>","cc-path":"CC-MAIN-2014-15/segments/1397609539230.18/warc/CC-MAIN-20140416005219-00512-ip-10-147-4-33.ec2.internal.warc.gz"}
Programming Praxis - Digits Of E Programming Praxis – Digits Of E In today’s Programming Praxis exercise, our goal is to implement two algorithms to calculate the digits of e using a spigot algorithm: one bounded and one unbounded version. My solution for the unbounded version is already posted in the exercise itself, so I’ll only cover the bounded version here. Let’s get started, shall we? A quick import: import Data.List The main trick used for the whole carry and modulo stuff is the use of mapAccumR; it allows us to produce a list of the modulos while also producing the resulting digit. Other than that, the implementation is straightforward: call the function n-1 times with an initial argument of n+1 ones and tack a 2 on the front. spigot_e :: Int -> [Int] spigot_e n = 2 : take (n - 1) (f $ replicate (n + 1) 1) where f = (\(d,xs) -> d : f xs) . mapAccumR (\a (i,x) -> divMod (10*x+a) i) 0 . zip [2..] A test to see if everything is working properly: main :: IO () main = print $ spigot_e 30 Tags: bonsai, code, digits, e, Haskell, kata, praxis, programming, spigot
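For readers who do not follow the Haskell, here is a rough Python transliteration of the same bounded spigot; it is my own translation, so treat it as a sketch rather than the post's reference implementation.

def spigot_e(n):
    # The first digit of e is 2; the remaining n-1 digits come from the array.
    digits = [2]
    a = [1] * (n + 1)                 # the n+1 ones from the Haskell version
    for _ in range(n - 1):
        carry = 0
        # Sweep right to left; position j has "base" j + 2 (i.e. 2, 3, 4, ...),
        # matching zip [2..] in the Haskell code.
        for j in range(n, -1, -1):
            carry, a[j] = divmod(10 * a[j] + carry, j + 2)
        digits.append(carry)
    return digits

print(spigot_e(30))   # [2, 7, 1, 8, 2, 8, ...]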
{"url":"http://bonsaicode.wordpress.com/2012/06/19/programming-praxis-digits-of-e/","timestamp":"2014-04-21T13:34:31Z","content_type":null,"content_length":"48722","record_id":"<urn:uuid:c8f9d6ac-15c9-458f-aefa-bd6dd5ab2fff>","cc-path":"CC-MAIN-2014-15/segments/1397609539776.45/warc/CC-MAIN-20140416005219-00210-ip-10-147-4-33.ec2.internal.warc.gz"}
9.1 Generalized Eigenvalue Problem

The generalized eigenvalue problem for the matrix pair (A, B) is A·x = λ·B·x. The scalars λ are the generalized eigenvalues, and the corresponding vectors x, collected as the columns of the eigenvector matrix, are the generalized eigenvectors. If the determinant of the matrix A − λB is not identically zero as a function of λ, the pair is called regular; otherwise it is called singular. If the pair is regular and B is nonsingular, then the generalized eigenvalues coincide with the eigenvalues of B⁻¹A; if B is singular, then there are fewer than n finite generalized eigenvalues.

Generalized eigenvalues and eigenvectors. The functions GeneralizedEigenvalues and GeneralizedEigenvectors compute, respectively, the generalized eigenvalues and eigenvectors; while the function GeneralizedEigensystem computes both generalized eigenvalues and eigenvectors. If the problem is ill-posed, GeneralizedEigenvectors and GeneralizedEigensystem return unevaluated with a warning message. If the generalized eigenvalue problem is not ill-posed, then there are exactly n pairs of generalized eigenvalues and eigenvectors.

Make sure the application is loaded. Here is a pair of 2 × 2 matrices. Here are the two pairs of scalars and corresponding eigenvectors. Since matrix B is nonsingular, the generalized eigenvalues are the same as the eigenvalues of B⁻¹A. These are the generalized eigenvectors. This verifies that the eigenvalues and eigenvectors satisfy Eq. (9.2).
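For comparison outside Mathematica, the same computation can be sketched with SciPy, whose scipy.linalg.eig accepts a second matrix B and solves A·x = λ·B·x. The 2 × 2 matrices below are made up for illustration and are not the ones from the manual's example.

import numpy as np
from scipy.linalg import eig

A = np.array([[1.0, 2.0],
              [3.0, 4.0]])
B = np.array([[2.0, 0.0],
              [0.0, 1.0]])

lam, V = eig(A, B)            # generalized eigenvalues and right eigenvectors

# Verify A v = lambda B v for each pair.
for k in range(len(lam)):
    v = V[:, k]
    print(np.allclose(A @ v, lam[k] * (B @ v)))   # True, True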
{"url":"http://reference.wolfram.com/legacy/applications/anm/FunctionIndex/GeneralizedEigenvalues.html","timestamp":"2014-04-17T21:29:07Z","content_type":null,"content_length":"38400","record_id":"<urn:uuid:df4b7d34-620a-4fc4-959f-e16778f4eedc>","cc-path":"CC-MAIN-2014-15/segments/1398223206118.10/warc/CC-MAIN-20140423032006-00101-ip-10-147-4-33.ec2.internal.warc.gz"}
User Paul Taylor

Website: PaulTaylor.EU. Location: London, UK. Member for 4 years, 4 months; last seen Dec 21 '09 at 15:49.

I am an independent researcher in the foundations of mathematics and computation, using the techniques of category theory and type theory. I wrote a book called Practical Foundations of Mathematics (CUP 1999). My main work now is Abstract Stone Duality, which seeks to axiomatise computable general topology directly, without any recourse to set theory. I am also the author of a TeX package for drawing categorical diagrams.
{"url":"http://mathoverflow.net/users/2719/paul-taylor?tab=recent","timestamp":"2014-04-17T04:08:11Z","content_type":null,"content_length":"46853","record_id":"<urn:uuid:5fcab252-c91e-436a-b8e1-ce4b84a82e75>","cc-path":"CC-MAIN-2014-15/segments/1397609526252.40/warc/CC-MAIN-20140416005206-00040-ip-10-147-4-33.ec2.internal.warc.gz"}
RE: st: command option in ado programming [Date Prev][Date Next][Thread Prev][Thread Next][Date index][Thread index] RE: st: command option in ado programming From "Nick Cox" <n.j.cox@durham.ac.uk> To <statalist@hsphsun2.harvard.edu> Subject RE: st: command option in ado programming Date Tue, 18 Nov 2003 09:20:39 -0000 I don't think there is a way round this, so long as you use -syntax- to parse input. The rule is that capitalisation signals acceptable abbreviation, and Stata does not identify the "2" as upper case. It's not the answer you seek, possibly, but if this were my problem I would flag the choice much more obviously by a choice between -sd()- and -var()-. Names are more fundamental than notation, here at least. > The problem was/is, I wish to have an option like -sigma2(z)-, and I > want users to specify exactly -sigma2(z)-, not -sigma(z)- or > something else; that is, the "2" is important. [This is because the > option parameterizes sigma^2 (the variance), which should be > carefully distinguished from sigma (the standard > deviation).] I tried > something like > syntax varlist, SIGMA2(string) > , but it seems that the "2" is not binding, so that if users > specify -sigma(z)-, Stata does not complain and treats it as if the > user specifies -sigma2(z)-, which is a result I want to avoid. That > is, I wish Stata would complain if the user inadvertently specifies > -sigma(z)-. * For searches and help try: * http://www.stata.com/support/faqs/res/findit.html * http://www.stata.com/support/statalist/faq * http://www.ats.ucla.edu/stat/stata/
{"url":"http://www.stata.com/statalist/archive/2003-11/msg00474.html","timestamp":"2014-04-18T21:00:07Z","content_type":null,"content_length":"6138","record_id":"<urn:uuid:207c9db3-fbdd-4464-a40d-f3e1efd095c6>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.9/warc/CC-MAIN-20140416005215-00646-ip-10-147-4-33.ec2.internal.warc.gz"}
Business Plans Print Version Break-Even Point Analysis By Louis Frongello This article deals with a tool that lets you know when you are making money and that suggests ways to make more. I will discuss the break-even point, see how to calculate and chart it, and see its uses in the day-to-day operation of business. First, let's find out what it is. Break-even analysis is another accounting tool developed by business owners to help plan and control the business operations. The Break-Even Point The break-even point is the point at which the income from sales will cover all costs with no profits. The business owner or manager usually considers several factors when studying break-even 1. The capital structure of the company. 2. Fixed expenses such as rent, insurance, heat, and light. 3. Setup of the organization. 4. Variable expenses. 5. The inventory, personnel, and space required to operate properly. The study of these factors will inform the business owner of the possibilities of lowering the break-even point and increasing the gross profit margins. When attempting to determine the prospect of success for a new operation, the analysis of the break-even point may indicate the advantages or disadvantages in modifying the proposed level of operation. Break-Even Analysis The break-even point informs the business owner of the level of sales at which the business wil realize neither a profit nor a loss. It can be expressed in numbers or by the use of graphs. To arrive at the break-even point using either method, we need to calculate the projected and fixed manufacturing, selling, and administrative expenses, and the expected ratios of sales for each category of Types of Costs To fully understand break-even formulas, it is important to know that different types of costs exist. The total fixed cost is the sum of all costs that do not change regardless of the level of sales. Rent and executive salaries are two examples. The total variable cost, on the other hand, is the sum of all the expenses that flucuate directly with the level of activity or sales. Cost of materials to produce a product or purchases of items for resale are two variable costs. The average fixed cost is the figure obtained by dividing the total fixed cost by the number of units (covers, meals, etc.) produced. The average variable cost is obtained by dividing the total variable cost by the quantity produced. The average cost is calculated by dividing the total cost by the number of units produced. Break-Even Point Formulas To determine the break-even point, we should use the following formula: Sales (S) equals Fixed Expenses (FE) plus Variable Expenses (VE). It is a usual business policy to evaluate expenses as a percentage of sales. We will use this method when determining the break-even point. For example, if the fixed costs are $100,000 and variable expenses equals 50 percent of the sales of a specific store, the break-even point is computed as follows: S = $100,000 + .05S -0.5S + S = $100,000 0.5S = $100,000 S = $200,000 To prove the result, we can substitute the figures for the letters in the equation. At this point, it is easy to see how the break-even analysis can be used to determine the level of sales required to realize a certain amount of net income. The formula used will be Sales (S) equals Fixed Expenses (FE) plus Variable Expenses plus Net Income (I). Using the formula above, we can easily determine the level of sales that will produce a net income of $40,000. 
S = $100,000 + .05S + $40,000 -0.5S + S = $100,000 + $40,000 0.5S = $140,000 S = $280,000 Restaurants and fast-food chains will often want to express their their break-even point by the number of portions sold. This is fairly easy to compute. Here, the break-even point is determined by applying the following formula: break-even point (in units) equals Total Fixed Cost divided by (selling price per unit minus variable cost per unit). If the selling price of a steak is $1.20, the total fixed cost is $30,000, and the variable cost per unit is 80 cents, how many units must be sold to break-even? If we apply the formula, the break-even point is 30,000 = 30,000 = 75,000 units $1.20 - 0.80 0.40 Using this formula, we can compare different sale prices. The break-even point for each price will be calculated and an analysis of the results will determine how reasonable they are and which is to be used when forecasting and budgeting. Another possibility that the break-even point offers is for a study of the relation between the revenue and cost. A chart can be drawn showing the total revenue and cost at different levels of production when selling a hamburger or a drink at a specified price. Graphing the Break-Even Point Break-even point can also be indicated by graphing. Figure 1 below is a sample graph for a business. To draw the graph, we should follow these steps: 1. Number of units produced is marked along the horizontal axis and the total revenue expressed in dollars is set on the vertical axis. 2. The sales line is drawn to indicate the sales at each level of production. 3. A horizontal line is drawn at the $12,000 level of sales to represent the fixed costs for our sample business. 4. A total cost line is drawn from the point of intersection of the fixed cost line and the vertical axis to the point of total costs as full capacity --$28,000. 5. The intersection of the total cost line with the sales line represents the break-even point, in this case $20,000. The dotted lines represent the level of production and the total costs at this level of operation. 6. Areas of net loss and of net profit are marked. The break-even point graph helps the business owner determine the levels of production that will create profits for every level of sales. The business owner then works to increase profits without investing extra funds. To do this, he/she should study the following important points: 1. A possible increase in utilization of existing capacity through reduction of idle time. 2. Better repair and maintenance of equipment to reduce down time --time elapsed from the moment the machine breaks down to the time it gets back in service. 3. Improved working schedules and inventory levels. 4. Longer business hours. 5. Improved production control. 6. Markup policy. Let us take a closer look at two of these points. Markup Policy Another item to study when considering ways to improve profit without increasing investment is the company's markup policy. Markup is the amount above cost that the business charges for an item. Too many business owners believe that the only way to larger profits is through higher markups. As a result, they tend to use either a fixed percentage of cost markup or some vague and arbitrary "rule of thumb" which multiples costs by some mystical figure in the manager's head to arrive at the selling price. Actually, markup should be flexible. Break-even analysis allows studies to be made of volumes of sales at various price levels. 
It is often discovered that a lover markup will produce a higher volume of sales and increased profits. If a customer feels costs are too high he/she will take their business elsewhere. Reduced turnover means slow sales. It also means that the business owner may have to raise prices to cover its inventory investment. This will drive more customers away. An appreciation of the meanings of break-even analysis can prevent such a vicious cycle from even starting. Figure 2 illustrates the point. Assume that a restaurant is presently operating at 6,000 meals per week; present profit is expressed by line P1P1 and is about $2,000. We feel we can increase business to 7,500 meals per week by reducing price 10 percent. A new sales line S2S2 would have to be drawn. Profit would be increased considerably; new profit is represented by line P2P2. We were able to increase our profit because costs rose at a lesser rate than sales rose. Break-even analysis and techniques are the tools that finally tell the business owner or manager when he/she is making a profit. Break-even charts and analysis will be part of every budget the business owner put out. They enable he/she to gauge the business' production rate accurately. They will tell whether an increase or a slowdown in production is called for. They are a vital part of the business owner's life.
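The arithmetic in the formulas above is easy to wrap in a few lines of code. The Python sketch below is not part of the original article; it simply reproduces the two calculations worked earlier, the break-even sales level and the break-even point in units.

def breakeven_sales(fixed_costs, variable_cost_ratio, target_income=0.0):
    # S = FE + VE + I, with VE expressed as a fraction of sales.
    return (fixed_costs + target_income) / (1.0 - variable_cost_ratio)

def breakeven_units(fixed_costs, price_per_unit, variable_cost_per_unit):
    # Units needed so that contribution per unit covers the fixed costs.
    return fixed_costs / (price_per_unit - variable_cost_per_unit)

print(breakeven_sales(100_000, 0.5))            # 200000.0
print(breakeven_sales(100_000, 0.5, 40_000))    # 280000.0
print(breakeven_units(30_000, 1.20, 0.80))      # approximately 75000 (steak example)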
{"url":"http://www.bizbound.com/breakeven.htm","timestamp":"2014-04-17T01:16:50Z","content_type":null,"content_length":"26448","record_id":"<urn:uuid:6602a6b0-95ca-438f-b61f-7a24b3da8075>","cc-path":"CC-MAIN-2014-15/segments/1398223206770.7/warc/CC-MAIN-20140423032006-00483-ip-10-147-4-33.ec2.internal.warc.gz"}
Hardy's Paradox Confirmed Experimentally In 1992 PI faculty member Lucien Hardy proposed a thought experiment that gave a simple proof of non-locality in quantum theory that has since become known as Hardy’s Paradox, though strictly speaking it is a theorem. Although the classical scheme of particle behavior says that when matter and antimatter meet, they should annihilate one another in a burst of energy, Hardy’s Paradox showed it was theoretically possible that in some cases when a particle-antiparticle interaction is not observed they could interact with one another and survive. In the last several months, two teams working independently have provided experimental confirmation of Hardy’s Theorem. In work published in Physical Review Letters, Perimeter Institute Affiliate Member Aephraim Steinberg and Jeff Lundeen, both of the University of Toronto, used a technique called ‘joint weak measurement’ of the locations of entangled pairs of photons, which were substituted for the particle-antiparticles of Hardy’s Theorem (Physical Review Letters 102:020404). The weak measurement technique was pioneered by Yakir Aharonov, a Perimeter Institute Distinguished Research Chair who is also a professor of theoretical condensed matter physics at Chapman University. Weak measurement can achieve the effect of measuring quantum states, which normally cannot be measured without destroying them, by not gathering enough information from any one interaction to make a “full” measurement, and then pooling these partial results so that the total yields meaningful measurements. In independent work published shortly afterwards in the New Journal of Physics Kazuhiro Yokota of Osaka University and colleagues also confirmed Hardy’s paradox using a slightly different experimental setup that also employed weak measurement (New Journal of Physics 11: 033011). In both cases, pairs of photons were entangled via polarization. Photons make a reasonable substitute for the particles and antiparticles in Hardy’s thought experiment, since they obey the same quantum-mechanical rules. Using pairs of interferometers they were able to gather joint weak measurements that did not interfere with the path of the photons. Both teams measured more photons at some detectors and fewer in others than classical physics would predict, confirming Hardy’s Paradox. According to Dr. Steinberg, “Until recently, it seemed impossible to carry out Hardy's proposal in practice, let alone to confirm or resolve the paradox. We have finally been able to do so, and to apply Aharonov's methods to the problem, showing that there is a way, even in quantum mechanics, in which one can quite consistently discuss past events even after they are over and done.” These results are a significant milestone, in that they may offer a method for getting around the fact that observation inherently changes a quantum system. In addition to the inherent interest of the result, the techniques employed by the research teams are also likely to be useful in quantum metrology, and quantum information technology.
{"url":"https://perimeterinstitute.ca/news/hardys-paradox-confirmed-experimentally","timestamp":"2014-04-18T19:11:10Z","content_type":null,"content_length":"30368","record_id":"<urn:uuid:55a8f54c-b12b-4330-ac12-d9741ff0132f>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.7/warc/CC-MAIN-20140416005215-00130-ip-10-147-4-33.ec2.internal.warc.gz"}
Calculating differences in spheres with variable as a radius January 12th 2010, 08:26 PM #1 Jan 2010 Calculating differences in spheres with variable as a radius I'm close to finishing this problem. This isn't strictly a calculus problem, but it seemed closer to one than to a physics problem. I first had to calculate the mass of a sphere inside a sphere. The outside sphere is denser than the inside sphere. Now I need to create a function that represents how a radius r of the inner sphere would change the mass of the sphere. Thanks in advance for any help. I'm close to finishing this problem. This isn't strictly a calculus problem, but it seemed closer to one than to a physics problem. I first had to calculate the mass of a sphere inside a sphere. The outside sphere is denser than the inside sphere. Now I need to create a function that represents how a radius r of the inner sphere would change the mass of the sphere. Thanks in advance for any help. Let R denote the radius of the outer sphere and r the radius of the inner sphere. Let d denote the density of the inner sphere and D the density of the surrounding shell that means what is left from the outer sphere. I assume that only the radius of the inner sphere is variable, all other values are constant. Then the mass of the complete solid is calculated by: $m(r)=\left(\frac43 \pi R^3-\frac43 \pi r^3\right) D + \frac43 \pi r^3 d = \frac43 \pi \left(\left(R^3-r^3\right) D+ r^3 d\right)$ January 12th 2010, 11:56 PM #2
{"url":"http://mathhelpforum.com/calculus/123520-calculating-differences-spheres-variable-radius.html","timestamp":"2014-04-17T12:33:52Z","content_type":null,"content_length":"35551","record_id":"<urn:uuid:6e37634e-24e7-4887-95de-c6f41560b5cf>","cc-path":"CC-MAIN-2014-15/segments/1397609530131.27/warc/CC-MAIN-20140416005210-00586-ip-10-147-4-33.ec2.internal.warc.gz"}
st: xtabond2 and instruments (HELP) [Date Prev][Date Next][Thread Prev][Thread Next][Date index][Thread index] st: xtabond2 and instruments (HELP) From robertjbrienza@aol.com To statalist@hsphsun2.harvard.edu Subject st: xtabond2 and instruments (HELP) Date Sun, 28 Oct 2007 14:26:39 -0400 Dear all, I'm a newbie with Stata and I'm trying to estimate a dynamic panel data model using an unbalanced data set with more than 350 firms over the 1995 -2000 period. Each firm has a minimum of three consecutive years of data. This is the dynamic model: y_it = y_it-1 , x1_it , x2_it , x3_it, alpha_i alpha_t uit where alpha_1 and alpha_t represent firm-specific effects and time-effects, All this given, I have two different questions: 1) In my specification, all variables should be treated as endogenous and time dummies are included among the independent variables. In particular, y_it-2, x1_it-2 , x2_it-2 , x3_it-2 should be used as instruments. I'm asking whether the following command string is correct in this case. Using the One Step Robust GMM System estimator (xtabond2), I should write: xi:xtabond2 y l.y x1 x2 x3 i.year, robust gmm(y x1 x2 x3, lag(2 2)) Is ii right or incomplete? 2) In the (applied) econometrics article from which I have borrowed such procedure, I also read this important note: "I [the author] have investigated whether the explanatory variables are predetermined or strictly exogenous with respect to the error term. To do this, I started using instruments dated t-2 for each regressor. Later, I added the instrument dated t-1 to analyze the potential bias arising from the correlation between x_it-1 and the first-differenced error term, delta_uit . To investigate the possibility of strict exogeneity we also included the current value, x_it in the instrument set. This investigation leads me to conclude that the explanatory variables are neither predetermined nor strictly exogenous. I, therefore, use instruments dated t-2 in our estimation." ** With respect to the model above, what concrete steps I should do to follow this sentence? When the author say that "we added the instrument dated t-1 to analyze the potential bias arising from the correlation between x_it-1 and the first-differenced error term, delta_uit ", it is unclear to me how (and where) he actually analyze the "potential bias" between x_it-1 and the first-differenced error term. Is there an automatically generated statistics to see? Or something else? And how I should modify (step by step) the gmm style options? Cookbook suggestions would be very useful, too. Can anybody offer any assistance to solve these problems? Looking forward to any response, thank you in advance. Yours faithfully, Robert Brienza, Bocconi U. (Milan, Italy) Email and AIM finally together. You've gotta check out free AOL Mail! - http://mail.aol.com * For searches and help try: * http://www.stata.com/support/faqs/res/findit.html * http://www.stata.com/support/statalist/faq * http://www.ats.ucla.edu/stat/stata/
{"url":"http://www.stata.com/statalist/archive/2007-10/msg01092.html","timestamp":"2014-04-19T07:12:15Z","content_type":null,"content_length":"8099","record_id":"<urn:uuid:114e2ef9-d9db-40d9-bd61-f0fb5c6d10c1>","cc-path":"CC-MAIN-2014-15/segments/1397609536300.49/warc/CC-MAIN-20140416005216-00298-ip-10-147-4-33.ec2.internal.warc.gz"}
Stem and leaf on excel

How do I make a stem and leaf plot on Excel, or just a spreadsheet on the computer? I'm just starting stats, and have to do one on the states and how much $ is spent on each person in each state. I don't know much about Excel.

Stem and Leaf plots can be done on Excel using an add-on. Post some data and we'll see.
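For comparison, outside Excel the same display takes only a few lines of code. The Python sketch below is my own, with made-up spending-per-person figures, and just shows what a basic stem-and-leaf plot of that kind of data might look like.

from collections import defaultdict

def stem_and_leaf(values):
    stems = defaultdict(list)
    for v in sorted(values):
        stems[v // 10].append(v % 10)     # tens digit = stem, ones digit = leaf
    for stem in sorted(stems):
        leaves = " ".join(str(leaf) for leaf in stems[stem])
        print(f"{stem:>3} | {leaves}")

# Made-up spending-per-person figures (dollars) for a handful of states.
stem_and_leaf([42, 47, 51, 53, 53, 60, 68, 71, 75, 79, 82])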
{"url":"http://mathhelpforum.com/statistics/27303-stem-leaf-excel.html","timestamp":"2014-04-18T13:52:55Z","content_type":null,"content_length":"35885","record_id":"<urn:uuid:db88618d-5a51-4bd1-83a6-7a9d4e9087e4>","cc-path":"CC-MAIN-2014-15/segments/1397609533689.29/warc/CC-MAIN-20140416005213-00208-ip-10-147-4-33.ec2.internal.warc.gz"}
Equations of Perpendicular Lines Time to surf the x and y vibes and find out what the coordinate plane has in store for perpendicular lines. Sure, the x- and y-axes are perpendicular and those graph paper squares are totally perpendicular, but how do we find the equations of lines that are perpendicular to other lines? And we aren't talking your typical y = 3. We mean weird lines, with slopes of ¼ and y-intercepts like The secret to both parallel and perpendicular lines on the coordinate plane is the slope. Parallel lines have the same slope and they never intersect. Since perpendicular is sort of the opposite of parallel, we'd expect the slopes to be opposite some way. What we might not expect is that the slope is opposite in two ways. For two lines to be perpendicular, one line has to have a slope that is the negative reciprocal of the slope of the other. Say what? Okay, think of it this way. We've got a line y = 2x + 1. We want a line that intersects this line at a right angle. How do we get that? For one thing, the line has to slope in the opposite direction, so let's try going from a 2 to -2. They definitely intersect, but it still doesn't look quite right, does it? The angles formed aren't 90° angles…yet. Basically, to form a right angle, we need to get nitty gritty. A line with a slope of 2 goes up 2 units on the y-axis for every 1 on the x-axis. A perpendicular line should go down 1 unit for every 2 units on the x-axis. This means a slope of not just -2, but -½. That's the negative reciprocal. So let's take a really weird line like y = ¼x – 23. A perpendicular line would be y = -4x – 23. Or y = -4x + 1, or y = -4x + 1001. The y-intercept doesn't really matter because it only changes where the two lines intersect, not how. Sample Problem Find a line perpendicular to the line 2x + y = 7 that goes through the point (4, 5). We've gotta find the slope of the line, so let's change this sucker into slope-intercept form. It'll look something like y = -2x + 7. Actually, it'll look exactly like that. After we pluck the slope -2 out from the equation, we need its negative reciprocal. The reciprocal means flipping the number so its numerator and denominator are switched, and then taking its negative. In this case, we end up with -½. (Two wrongs might not make a right, but in this case, two negatives certainly help that process along.) To get the new equation, we start with y = ½x + b. To find what b equals, we'll need to plug in that point (4, 5). y = ½x + b 5 = ½(4) + b 5 = 2 + b b = 3 So, our perpendicular line is y = ½x + 3 or 2y – x = 6. Sample Problem Are x + y = 5 and x – y = 5 perpendicular? In other words, this question asks whether or not the slopes of these two lines are negative reciprocals of each other. That's really all it takes for two lines to be perpendicular, right? In slope-intercept form, x + y = 5 turns into y = -x + 5, which has a slope of -1. The other equation, x – y = 5, turns into y = x – 5, which has a slope of 1. So, is 1 the negative reciprocal of -1? You betcha. They're as perpendicular as can be.
{"url":"http://www.shmoop.com/parallel-perpendicular-lines/perpendicular-lines-equations.html","timestamp":"2014-04-17T01:37:07Z","content_type":null,"content_length":"37326","record_id":"<urn:uuid:932cbc7d-6a22-4ef9-b5a1-e98168792649>","cc-path":"CC-MAIN-2014-15/segments/1398223203235.2/warc/CC-MAIN-20140423032003-00088-ip-10-147-4-33.ec2.internal.warc.gz"}
Classical and Quantum Mechanics - in a Nutshell

Classical Mechanics

Building on the work of Galileo and others, Newton unveiled his laws of motion in 1686. According to Newton:

I. A body remains at rest or in uniform motion (constant velocity - both speed and direction) unless acted on by a net external force.

II. In response to a net external force F, a body of mass m accelerates with acceleration a = F/m.

III. If body i pushes on body j with a force F_ij, then body j pushes on body i with a force F_ji = -F_ij.

For energy-conserving forces, the net force on particle i is the negative gradient (slope in three dimensions) of the potential energy with respect to particle i's position. For a system of N particles,

F_i = -∇_i U(r_1, r_2, ..., r_N).

Classical mechanics is completely deterministic: given the exact positions and velocities of all particles at a given time, along with the potential-energy function U, one can in principle compute every particle's trajectory.

Quantum Mechanics

A number of experimental observations in the late 1800's and early 1900's forced physicists to look beyond Newton's laws of motion for a more general theory. See, for example, the discussion of the heat capacity of solids. It had become increasingly clear that electromagnetic radiation had particle-like properties in addition to its wave-like properties such as diffraction and interference. Planck showed in 1900 that electromagnetic radiation was emitted and absorbed from a black body in discrete quanta, each having energy proportional to the frequency of radiation. In 1904, Einstein invoked these quanta to explain the photo-electric effect. So under certain circumstances, one must interpret electromagnetic waves as being made up of particles. In 1924 de Broglie asserted that matter also had this dual nature: particles can be wavey. To make a long and amazing story [1] short, this led to the formulation of Schrödinger's wave equation for matter, written compactly as

H ψ = E ψ.

Don't let the brevity of notation fool you; this partial differential equation is difficult to deal with and generally impossible to solve analytically. It is tailored to a given physical system by defining the Hamiltonian operator H. Solving it yields the allowed (quantized) values (or eigenvalues) of the energy E and the corresponding wave functions ψ. The wave functions are complex-valued functions (involving the square root of -1) and thus may not themselves correspond to something physical. Their squared magnitude |ψ|², however, is real and is interpreted as a probability density. For motion in the single dimension x, it is 'a probability per unit x': |ψ(x)|² dx is the probability of finding the particle between x and x + dx, and ψ is normalized (scaled) by the requirement that the particle must be somewhere, i.e., that these probabilities must sum to one:

∫ |ψ(x)|² dx = 1.

Quantum mechanics is thus not deterministic, but probabilistic. It forces us to abandon the notion of precisely defined trajectories of particles through time and space. Instead, we must talk in terms of probabilities for alternative system configurations. To clarify these concepts, consider two major successes for the quantum theory, predictions of the discrete energy levels of the harmonic oscillator and the hydrogen atom. Pictured below are the potential energy (solid lines) and the four lowest energy levels (dashed lines) for a one-dimensional harmonic oscillator (red) and the three-dimensional hydrogen atom (blue). The harmonic oscillator depicted corresponds to a hydrogen atom oscillating at the frequency f = 100/ps and represents one of the highest frequency atomic motions in macromolecules. The energy levels of harmonic oscillators are equally spaced, separated by an energy of hf, or 9.5 kcal/mol for the oscillator shown.
The energy gaps for a hydrogen atom oscillating at f = 10/ps are 0.95 kcal/mol, on the order of thermal energy, and so classical mechanics better approximates quantum results (e.g., average energy and motional amplitude) for this slower oscillator. Excitation of electrons within atoms requires much more energy than excitation of atomic vibrations. Promotion of the hydrogen atom's electron from its ground state to its first excited state requires 235 kcal/mol. Way beyond the reach of thermal energy, this excitation requires the absorption of ultraviolet radiation with a wavelength of 121 nm.

[Figure: Potential and four lowest energy levels for a harmonic oscillator (red) and the hydrogen atom (blue).]
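The hf spacings quoted above are easy to verify with a few lines of arithmetic. A hedged Python check (room temperature, T = 300 K, is my own assumption for the thermal-energy comparison; the text does not specify one):

# vibrational quantum hf in kcal/mol vs thermal energy kT
h = 6.62607015e-34        # Planck constant, J*s
kB = 1.380649e-23         # Boltzmann constant, J/K
NA = 6.02214076e23        # Avogadro constant, 1/mol
J_PER_KCAL = 4184.0

def kcal_per_mol(energy_joules):
    return energy_joules * NA / J_PER_KCAL

for f in (100e12, 10e12):                  # 100/ps and 10/ps, in Hz
    print(f"f = {f:.0e} Hz: hf = {kcal_per_mol(h * f):.2f} kcal/mol")

T = 300.0                                   # assumed room temperature, K
print(f"kT at {T:.0f} K = {kcal_per_mol(kB * T):.2f} kcal/mol")
# roughly 9.54, 0.95 and 0.60 kcal/mol, consistent with the text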
{"url":"http://cmm.cit.nih.gov/intro_simulation/node1.html","timestamp":"2014-04-19T11:57:49Z","content_type":null,"content_length":"11457","record_id":"<urn:uuid:a4d05e79-a82d-4383-8d0a-232065fe993c>","cc-path":"CC-MAIN-2014-15/segments/1397609537186.46/warc/CC-MAIN-20140416005217-00177-ip-10-147-4-33.ec2.internal.warc.gz"}
Prisoner's dilemma

The prisoner's dilemma is a fundamental problem in game theory that demonstrates why two people might not cooperate even if it is in both their best interests to do so. It was originally framed by Merrill Flood and Melvin Dresher, working at RAND, in 1950. Albert W. Tucker formalized the game with prison sentence payoffs and gave it the "prisoner's dilemma" name (Poundstone, 1992). In its classical form, the prisoner's dilemma ("PD") is presented as follows:

Two suspects are arrested by the police. The police have insufficient evidence for a conviction, and, having separated both prisoners, visit each of them to offer the same deal. If one testifies (defects from the other) for the prosecution against the other and the other remains silent (cooperates with the other), the betrayer goes free and the silent accomplice receives the full 10-year sentence. If both remain silent, both prisoners are sentenced to only six months in jail for a minor charge. If each betrays the other, each receives a five-year sentence. Each prisoner must choose to betray the other or to remain silent. Each one is assured that the other would not know about the betrayal before the end of the investigation. How should the prisoners act?

If we assume that each player cares only about minimizing his or her own time in jail, then the prisoner's dilemma forms a non-zero-sum game in which two players may each cooperate with or defect from (betray) the other player. In this game, as in all game theory, the only concern of each individual player (prisoner) is maximizing his or her own payoff, without any concern for the other player's payoff. The unique equilibrium for this game is a Pareto-suboptimal solution, that is, rational choice leads the two players to both play defect, even though each player's individual reward would be greater if they both played cooperatively.

In the classic form of this game, cooperating is strictly dominated by defecting, so that the only possible equilibrium for the game is for all players to defect. No matter what the other player does, one player will always gain a greater payoff by playing defect. Since in any situation playing defect is more beneficial than cooperating, all players will play defect, all things being equal.

In the iterated prisoner's dilemma, the game is played repeatedly. Thus each player has an opportunity to punish the other player for previous non-cooperative play. If the number of steps is known by both players in advance, economic theory says that the two players should defect again and again, no matter how many times the game is played. However, this analysis fails to predict the behavior of human players in a real iterated prisoner's dilemma situation, and it also fails to predict the optimum algorithm when computer programs play in a tournament. Only when the players play an indefinite or random number of times can cooperation be an equilibrium (technically a subgame perfect equilibrium), meaning that both players defecting always remains an equilibrium and there are many other equilibrium outcomes. In this case, the incentive to defect can be overcome by the threat of punishment.
In casual usage, the label "prisoner's dilemma" may be applied to situations not strictly matching the formal criteria of the classic or iterative games, for instance, those in which two entities could gain important benefits from cooperating or suffer from the failure to do so, but find it merely difficult or expensive, not necessarily impossible, to coordinate their activities to achieve cooperation.

Strategy for the classical prisoner's dilemma

The classical prisoner's dilemma can be summarized thus:

                            Prisoner B stays silent                    Prisoner B betrays
  Prisoner A stays silent   Each serves 6 months                       Prisoner A: 10 years; Prisoner B: goes free
  Prisoner A betrays        Prisoner A: goes free; Prisoner B: 10 years   Each serves 5 years

In this game, regardless of what the opponent chooses, each player always receives a higher payoff (lesser sentence) by betraying; that is to say that betraying is the strictly dominant strategy. For instance, Prisoner A can accurately say, "No matter what Prisoner B does, I personally am better off betraying than staying silent. Therefore, for my own sake, I should betray." However, if the other player acts similarly, then they both betray and both get a lower payoff than they would get by staying silent. Rational self-interested decisions result in each prisoner being worse off than if each chose to lessen the sentence of the accomplice at the cost of staying a little longer in jail himself (hence the seeming dilemma). In game theory, this demonstrates very elegantly that in a non-zero-sum game a Nash equilibrium need not be a Pareto optimum.

Generalized form

We can expose the skeleton of the game by stripping it of the prisoner framing device. The generalized form of the game has been used frequently in experimental economics. The following rules give a typical realization of the game.

There are two players and a banker. Each player holds a set of two cards, one printed with the word "Cooperate", the other printed with "Defect" (the standard terminology for the game). Each player puts one card face-down in front of the banker. By laying them face down, the possibility of a player knowing the other player's selection in advance is eliminated; as a result, players are screened from each other and prevented from communicating outside of the game. (Revealing one's move would not change the dominance analysis: a simple tell that partially or wholly reveals one player's choice, such as the Red player playing her Cooperate card face-up, does not change the fact that Defect is the dominant strategy. When one is considering the game itself, communication has no effect whatsoever. When the game is played in real life, communication may matter, but only because of considerations outside the game itself, such as the possibility of enforceable side contracts or credible threats; for example, if the Red player plays her Cooperate card face-up and simultaneously reveals a binding commitment to blow the jail up if and only if Blue defects, with additional payoff -11,-10, then Blue's cooperation becomes dominant.) At the end of the turn, the banker turns over both cards and gives out the payments accordingly.
Given two players, "red" and "blue": if the red player defects and the blue player cooperates, the red player gets the Temptation to Defect payoff of 5 points while the blue player receives the Sucker's payoff of 0 points. If both cooperate they get the Reward for Mutual Cooperation payoff of 3 points each, while if they both defect they get the Punishment for Mutual Defection payoff of 1 point. The checkerboard payoff matrix showing the payoffs is given below.

Example PD payoff matrix:

               Cooperate   Defect
  Cooperate    3, 3        0, 5
  Defect       5, 0        1, 1

In "win-lose" terminology the table looks like this:

               Cooperate              Defect
  Cooperate    win-win                lose much-win much
  Defect       win much-lose much     lose-lose

These point assignments are given arbitrarily for illustration. It is possible to generalize them, as follows:

Canonical PD payoff matrix:

               Cooperate   Defect
  Cooperate    R, R        S, T
  Defect       T, S        P, P

Here T, R, P, and S stand for Temptation to defect, Reward for mutual cooperation, Punishment for mutual defection, and Sucker's payoff. To be defined as a prisoner's dilemma, the following inequalities must hold:

  T > R > P > S

This condition ensures that the equilibrium outcome is defection, but that cooperation Pareto dominates equilibrium play. In addition to the above condition, if the game is repeatedly played by two players, the following condition should be added:

  2R > T + S

If that condition does not hold, then full cooperation is not necessarily Pareto optimal, as the players are collectively better off by having each player alternate between Cooperate and Defect. These rules were established by cognitive scientist Douglas Hofstadter and form the formal canonical description of a typical game of prisoner's dilemma. A simple special case occurs when the advantage of defection over cooperation is independent of what the co-player does and the cost of the co-player's defection is independent of one's own action, i.e., when T - R = P - S (equivalently, T + S = R + P).

Human behavior in the prisoner's dilemma

One experiment based on the simple dilemma found that approximately 40% of participants played "cooperate" (i.e., stayed silent).

The iterated prisoner's dilemma

If two players play the prisoner's dilemma more than once in succession, and they remember previous actions of their opponent and change their strategy accordingly, the game is called the iterated prisoner's dilemma.

The iterated prisoner's dilemma game is fundamental to certain theories of human cooperation and trust. On the assumption that the game can model transactions between two people requiring trust, cooperative behaviour in populations may be modelled by a multi-player, iterated, version of the game. It has, consequently, fascinated many scholars over the years. In 1975, Grofman and Pool estimated the count of scholarly articles devoted to it at over 2,000. The iterated prisoner's dilemma has also been referred to as the "Peace-War game".

If the game is played exactly N times and both players know this, then it is always game-theoretically optimal to defect in all rounds. The only possible Nash equilibrium is to always defect. The proof is inductive: one might as well defect on the last turn, since the opponent will not have a chance to punish the player. Therefore, both will defect on the last turn. Thus, the player might as well defect on the second-to-last turn, since the opponent will defect on the last no matter what is done, and so on. Unlike the standard prisoner's dilemma, in the iterated prisoner's dilemma the defection strategy is counterintuitive and fails badly to predict the behavior of human players.
Within standard economic theory, though, this is the only correct answer. The strategy in the iterated prisoners dilemma with fixed N is to cooperate against a superrational opponent, and in the limit of large N, experimental results on strategies agree with the superrational version, not the game-theoretic rational one. For cooperation to emerge between game theoretic rational players, the total number of rounds N must be random, or at least unknown to the players. In this case always defect is no longer a strictly dominant strategy, only a Nash equilibrium . Amongst results shown by Nobel Prize winner Robert Aumann in his 1959 paper, rational players repeatedly interacting for indefinitely long games can sustain the cooperative outcome. Iterated prisoners dilemma experiments Interest in the iterated prisoners dilemma (IPD) was kindled by Robert Axelrod in his book The Evolution of Cooperation (1984). In it he reports on a tournament he organized of the N step prisoner dilemma (with N fixed) in which participants have to choose their mutual strategy again and again, and have memory of their previous encounters. Axelrod invited academic colleagues all over the world to devise computer strategies to compete in an IPD tournament . The programs that were entered varied widely in algorithmic complexity, initial hostility, capacity for forgiveness, and so forth. Axelrod discovered that when these encounters were repeated over a long period of time with many players, each with different strategies, greedy strategies tended to do very poorly in the long run while more strategies did better, as judged purely by self-interest. He used this to show a possible mechanism for the evolution of altruistic behaviour from mechanisms that are initially purely selfish, by natural selection The best strategy was found to be tit for tat , which Anatol Rapoport developed and entered into the tournament. It was the simplest of any program entered, containing only four lines of BASIC, and won the contest. The strategy is simply to cooperate on the first iteration of the game; after that, the player does what his or her opponent did on the previous move. Depending on the situation, a slightly better strategy can be "tit for tat with forgiveness." When the opponent defects, on the next move, the player sometimes cooperates anyway, with a small probability (around 1%-5%). This allows for occasional recovery from getting trapped in a cycle of defections. The exact probability depends on the line-up of opponents. By analysing the top-scoring strategies, Axelrod stated several conditions necessary for a strategy to be successful. Nice: The most important condition is that the strategy must be "nice", that is, it will not defect before its opponent does (this is sometimes referred to as an "optimistic" algorithm). Almost all of the top-scoring strategies were nice; therefore a purely selfish strategy will not "cheat" on its opponent, for purely utilitarian reasons first. Retaliating: However, Axelrod contended, the successful strategy must not be a blind optimist. It must sometimes retaliate. An example of a non-retaliating strategy is Always Cooperate. This is a very bad choice, as "nasty" strategies will ruthlessly exploit such players. Forgiving: Successful strategies must also be forgiving. Though players will retaliate, they will once again fall back to cooperating if the opponent does not continue to defect. This stops long runs of revenge and counter-revenge, maximizing points. 
Non-envious: The last quality is being non-envious, that is, not striving to score more than the opponent (impossible for a 'nice' strategy, i.e., a 'nice' strategy can never score more than the opponent).

Therefore, Axelrod reached the utopian-sounding conclusion that selfish individuals, for their own selfish good, will tend to be nice and forgiving and non-envious.

The optimal (points-maximizing) strategy for the one-time PD game is simply defection; as explained above, this is true whatever the composition of opponents may be. However, in the iterated-PD game the optimal strategy depends upon the strategies of likely opponents, and how they will react to defections and cooperations. For example, consider a population where everyone defects every time, except for a single individual following the tit for tat strategy. That individual is at a slight disadvantage because of the loss on the first turn. In such a population, the optimal strategy for that individual is to defect every time. In a population with a certain percentage of always-defectors and the rest being tit for tat players, the optimal strategy for an individual depends on the percentage, and on the length of the game.

A strategy called Pavlov (an example of Win-Stay, Lose-Switch) cooperates at the first iteration and whenever the player and co-player did the same thing at the previous iteration; Pavlov defects when the player and co-player did different things at the previous iteration. For a certain range of parameters, Pavlov beats all other strategies by giving preferential treatment to co-players which resemble Pavlov.

Deriving the optimal strategy is generally done in two ways:

1. Bayesian Nash equilibrium: If the statistical distribution of opposing strategies can be determined (e.g. 50% tit for tat, 50% always cooperate), an optimal counter-strategy can be derived analytically.
2. Monte Carlo simulations of populations have been made, where individuals with low scores die off, and those with high scores reproduce (a genetic algorithm for finding an optimal strategy). The mix of algorithms in the final population generally depends on the mix in the initial population. The introduction of mutation (random variation during reproduction) lessens the dependency on the initial population; empirical experiments with such systems tend to produce tit for tat players (see for instance Chess 1988), but there is no analytic proof that this will always occur.

Although tit for tat is considered to be the most robust basic strategy, a team from Southampton University in England (led by Professor Nicholas Jennings and consisting of Rajdeep Dash, Sarvapali Ramchurn, Alex Rogers, and Perukrishnen Vytelingum) introduced a new strategy at the 20th-anniversary iterated prisoner's dilemma competition, which proved to be more successful than tit for tat. This strategy relied on cooperation between programs to achieve the highest number of points for a single program. The University submitted 60 programs to the competition, which were designed to recognize each other through a series of five to ten moves at the start. Once this recognition was made, one program would always cooperate and the other would always defect, assuring the maximum number of points for the defector. If the program realized that it was playing a non-Southampton player, it would continuously defect in an attempt to minimize the score of the competing program. As a result, this strategy ended up taking the top three positions in the competition, as well as a number of positions towards the bottom.
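To make the tournament mechanics concrete, here is a small, hedged Python sketch of a repeated PD match using the 5/3/1/0 payoffs from the example matrix above. The two strategies shown (always-defect and tit for tat) are simplified stand-ins, not Axelrod's actual tournament entries.

T, R, P, S = 5, 3, 1, 0   # payoffs from the example matrix above

def payoff(me, other):
    if me == "C" and other == "C": return R
    if me == "C" and other == "D": return S
    if me == "D" and other == "C": return T
    return P

def always_defect(history_self, history_other):
    return "D"

def tit_for_tat(history_self, history_other):
    return "C" if not history_other else history_other[-1]

def play(strategy_a, strategy_b, rounds=200):
    hist_a, hist_b, score_a, score_b = [], [], 0, 0
    for _ in range(rounds):
        a = strategy_a(hist_a, hist_b)
        b = strategy_b(hist_b, hist_a)
        score_a += payoff(a, b)
        score_b += payoff(b, a)
        hist_a.append(a)
        hist_b.append(b)
    return score_a, score_b

print(play(tit_for_tat, tit_for_tat))       # (600, 600): mutual cooperation
print(play(tit_for_tat, always_defect))     # (199, 204): TFT loses only the opening round
print(play(always_defect, always_defect))   # (200, 200): mutual defection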
This strategy takes advantage of the fact that multiple entries were allowed in this particular competition, and that the performance of a team was measured by that of the highest-scoring player (meaning that the use of self-sacrificing players was a form of ). In a competition where one has control of only a single player, tit for tat is certainly a better strategy. Because of this new rule, this competition also has little theoretical significance when analysing single agent strategies as compared to Axelrod's seminal tournament. However, it provided the framework for analysing how to achieve cooperative strategies in multi-agent frameworks, especially in the presence of noise. In fact, long before this new-rules tournament was played, Richard Dawkins in his book The Selfish Gene pointed out the possibility of such strategies winning if multiple entries were allowed, but remarked that most probably Axelrod would not have allowed them if they had been submitted. It also relies on circumventing rules about the prisoner's dilemma in that there is no communication allowed between the two players. When the Southampton programs engage in an opening "ten move dance" to recognize one another, this only reinforces just how valuable communication can be in shifting the balance of the game. Another odd case is "play forever" prisoner's dilemma. The game is repeated infinitely many times and the player's score is the average (suitably computed). Continuous iterated prisoner's dilemma Most work on the iterated prisoner's dilemma has focused on the discrete case, in which players either cooperate or defect, because this model is relatively simple to analyze. However, some researchers have looked at models of the continuous iterated prisoner's dilemma, in which players are able to make a variable contribution to the other player. Le and Boyd found that in such situations, cooperation is much harder to evolve than in the discrete iterated prisoner's dilemma. The basic intuition for this result is straigh­tforward: in a continuous prisoner's dilemma, if a population starts off in a non-cooperative equilibrium, players who are only marginally more cooperative than non-cooperators get little benefit from assorting with one another. By contrast, in a discrete prisoner's dilemma, tit for tat cooperators get a big payoff boost from assorting with one another in a non-cooperative equilibrium, relative to non-cooperators. Since Nature arguably offers more opportunities for variable cooperation rather than a strict dichotomy of cooperation or defection, the continuous prisoner's dilemma may help explain why real-life examples of tit for tat-like cooperation are extremely rare in nature (ex. Hammerstein) even though tit for tat seems robust in theoretical models. Learning psychology and game theory Where game players can learn to estimate the likelihood of other players defecting, their own behaviour is influenced by their experience of the others' behaviour. Simple statistics show that inexperienced players are more likely to have had, overall, atypically good or bad interactions with other players. If they act on the basis of these experiences (by defecting or cooperating more than they would otherwise) they are likely to suffer in future transactions. As more experience is accrued a truer impression of the likelihood of defection is gained and game playing becomes more successful. 
The early transactions experienced by immature players are likely to have a greater effect on their future playing than would such transactions have upon mature players. This principle goes part way towards explaining why the formative experiences of young people are so influential and why, for example, those who are particularly vulnerable to bullying sometimes become bullies.

The likelihood of defection in a population may be reduced by the experience of cooperation in earlier games allowing trust to build up. Hence self-sacrificing behaviour may, in some instances, strengthen the moral fibre of a group. If the group is small, the positive behaviour is more likely to feed back in a mutually affirming way, encouraging individuals within that group to continue to cooperate. This is allied to the twin dilemma of encouraging those people whom one would aid to indulge in behaviour that might put them at risk. Such processes are major concerns within the study of reciprocal altruism, group selection, kin selection, and moral philosophy.

Douglas Hofstadter's Superrationality

Douglas Hofstadter in his Metamagical Themas proposed that the conception of rationality that led "rational" players to defect is faulty. He proposed that there is another type of rational behavior, which he called "superrationality", where players take into account that the other person is presumably superrational, like them. Superrational players behave identically and know that they will behave identically. They take that into account when they maximize their payoffs, and they therefore cooperate with each other. This view of the one-shot PD leads to cooperation as follows:

• Any superrational strategy will be the same for both superrational players, since both players will think of it.
• Therefore the superrational answer will lie on the diagonal of the payoff matrix.
• When you maximize return from solutions on the diagonal, you cooperate.

If a superrational player plays against a known rational opponent, he or she will defect. A superrational player only cooperates with other superrational players, whose thinking is correlated with his or hers. If a superrational player plays against an opponent of unknown superrationality in a symmetric situation, the result can be either to cooperate or to defect depending on the odds that the opponent is superrational (Pavlov strategy).

Superrationality is not studied by academic economists, because the economic definition of rationality excludes any superrational behavior by definition. Nevertheless, analogs of one-shot cooperation are observed in human culture, wherever religious or ethical codes exist. Hofstadter discusses the example of an economic transaction between strangers passing through a town, where either party stands to gain by cheating the other, with little hope of retaliation. Still, cheating is the exception rather than the rule.

While it is sometimes thought that morality must involve the constraint of self-interest, David Gauthier famously argues that co-operating in the prisoner's dilemma on moral principles is consistent with self-interest and the axioms of game theory. In his opinion, it is most prudent to give up straightforward maximizing and instead adopt a disposition of constrained maximization, according to which one resolves to cooperate in the belief that the opponent will respond with the same choice, while in the classical PD it is explicitly stipulated that the response of the opponent does not depend on the player's choice.
This form of contractarianism claims that good moral thinking is just an elevated and subtly strategic version of basic means-end reasoning. Douglas Hofstadter expresses a strong personal belief that the mathematical symmetry is reinforced by a moral symmetry, along the lines of the Kantian categorical imperative: defecting in the hope that the other player cooperates is morally indefensible. If players treat each other as they would treat themselves, then they will cooperate.

Real-life examples

These particular examples, involving prisoners and bag switching and so forth, may seem contrived, but there are in fact many examples in human interaction as well as interactions in nature that have the same payoff matrix. The prisoner's dilemma is therefore of interest to the social sciences such as economics, politics, and sociology, as well as to the biological sciences such as evolutionary biology. Many natural processes have been abstracted into models in which living beings are engaged in endless games of prisoner's dilemma. This wide applicability of the PD gives the game its substantial importance.

In political science, for instance, the PD scenario is often used to illustrate the problem of two states engaged in an arms race. Both will reason that they have two options, either to increase military expenditure or to make an agreement to reduce weapons. Either state will benefit from military expansion regardless of what the other state does; therefore, they both incline towards military expansion. The paradox is that both states are acting rationally, but producing an apparently irrational result. This could be considered a corollary to deterrence theory.

In environmental studies, the PD is evident in crises such as global climate change. All countries will benefit from a stable climate, but any single country is often hesitant to curb emissions. The benefit to an individual country to maintain current behavior is greater than the benefit to all countries if behavior was changed, therefore explaining the current impasse concerning climate change.

In program management and technology development, the PD applies to the relationship between the customer and the developer. Capt Dan Ward, an officer in the US Air Force, examined The Program Manager's Dilemma in an article published in Defense AT&L, a defense technology journal.

In social science, the PD may be applied to an actual dilemma facing two inmates. The game theorist Marek Kaminski, a former political prisoner, analysed the factors contributing to payoffs in the game set up by a prosecutor for arrested defendants (see below). He concluded that while the PD is the ideal game of a prosecutor, numerous factors may strongly affect the payoffs and potentially change the properties of the game.

Steroid use

The prisoner's dilemma applies to the decision whether or not to use performance-enhancing drugs in athletics. Given that the drugs have an approximately equal impact on each athlete, it is to all athletes' advantage that no athlete take the drugs (because of the side effects). However, if any one athlete takes the drugs, they will gain an advantage unless all the other athletes do the same. In that case, the advantage of taking the drugs is removed, but the disadvantages (side effects) remain.

In economics

Advertising is sometimes cited as a real-life example of the prisoner's dilemma. When cigarette advertising was legal in the United States, competing cigarette manufacturers had to decide how much money to spend on advertising.
The effectiveness of Firm A's advertising was partially determined by the advertising conducted by Firm B. Likewise, the profit derived from advertising for Firm B is affected by the advertising conducted by Firm A. If both Firm A and Firm B chose to advertise during a given period, the advertising cancels out, receipts remain constant, and expenses increase due to the cost of advertising. Both firms would benefit from a reduction in advertising. However, should Firm B choose not to advertise, Firm A could benefit greatly by advertising. Nevertheless, the optimal amount of advertising by one firm depends on how much advertising the other undertakes. As the best strategy is dependent on what the other firm chooses, there is no dominant strategy, and this is not a prisoner's dilemma but rather is an example of a stag hunt. The outcome is similar, though, in that both firms would be better off were they to advertise less than in the equilibrium.

Sometimes cooperative behaviors do emerge in business situations. For instance, cigarette manufacturers endorsed the creation of laws banning cigarette advertising, understanding that this would reduce costs and increase profits across the industry. This analysis is likely to be pertinent in many other business situations involving advertising.

Without enforceable agreements, members of a cartel are also involved in a (multi-player) prisoners' dilemma. 'Cooperating' typically means keeping prices at a pre-agreed minimum level. 'Defecting' means selling under this minimum level, instantly stealing business (and profits) from other cartel members. Antitrust authorities want potential cartel members to mutually defect, ensuring the lowest possible prices for consumers.

In law

The theoretical conclusion of PD is one reason why, in many countries, plea bargaining is forbidden. Often, precisely the PD scenario applies: it is in the interest of both suspects to confess and testify against the other prisoner/suspect, even if each is innocent of the alleged crime. Arguably, the worst case is when only one party is guilty: here, the innocent one is unlikely to confess, while the guilty one is likely to confess and testify against the innocent.

Multiplayer dilemmas

Many real-life dilemmas involve multiple players. Although metaphorical, Hardin's tragedy of the commons may be viewed as an example of a multi-player generalization of the PD: each villager makes a choice for personal gain or restraint. The collective reward for unanimous (or even frequent) defection is very low payoffs (representing the destruction of the "commons"). Such multi-player PDs are not formal as they can always be decomposed into a set of classical two-player games.

The commons are not always exploited: William Poundstone, in a book about the prisoner's dilemma (see References below), describes a situation in New Zealand where newspaper boxes are left unlocked. It is possible for people to take a paper without paying (defecting), but very few do, feeling that if they do not pay then neither will others, destroying the system.

Quantum game theory

In quantum game theory, a player in the prisoner's dilemma can implement a quantum strategy. Unlike a mixed strategy, which can't improve on the payoff to the dominant strategy, a quantum strategy can increase the player's expected payoff. The results can be explained in terms of efficient quantum algorithms.
Related games

Closed-bag exchange

Douglas Hofstadter once suggested that people often find problems such as the PD problem easier to understand when illustrated in the form of a simple game, or trade-off. One of several examples he used was "closed bag exchange": two people meet and exchange closed bags, with the understanding that one of them contains money, and the other contains a purchase. Either player can choose to honor the deal by putting into his or her bag what he or she agreed, or he or she can defect by handing over an empty bag. In this game, defection is always the best course, implying that rational agents will never play. However, in this case both players cooperating and both players defecting actually give the same result, assuming there are no gains from trade, so chances of mutual cooperation, even in repeated games, are few.

Friend or Foe?

Friend or Foe? is a game show that aired from 2002 to 2005 on the Game Show Network in the United States. It is an example of the prisoner's dilemma game tested by real people, but in an artificial setting. On the game show, three pairs of people compete. As each pair is eliminated, it plays a game similar to the prisoner's dilemma to determine how the winnings are split. If they both cooperate (Friend), they share the winnings 50-50. If one cooperates and the other defects (Foe), the defector gets all the winnings and the cooperator gets nothing. If both defect, both leave with nothing. Notice that the payoff matrix is slightly different from the standard one given above, as the payouts for the "both defect" and the "cooperate while the opponent defects" cases are identical. This makes the "both defect" case a weak equilibrium, compared with being a strict equilibrium in the standard prisoner's dilemma. If you know your opponent is going to vote Foe, then your choice does not affect your winnings. In a certain sense, Friend or Foe has a payoff model between the prisoner's dilemma and the game of Chicken.

The payoff matrix is:

               Cooperate   Defect
  Cooperate    1, 1        0, 2
  Defect       2, 0        0, 0

This payoff matrix was later used on the British television programmes Shafted and Golden Balls. It was also used earlier on the UK Channel 4 programme Trust Me, hosted by Nick Bateman, in 2000.

References

• Robert Aumann, "Acceptable points in general cooperative n-person games", in R. D. Luce and A. W. Tucker (eds.), Contributions to the Theory of Games IV, Annals of Mathematics Study 40, 287–324, Princeton University Press, Princeton, NJ.
• Axelrod, R. (1984). The Evolution of Cooperation. ISBN 0-465-02121-2.
• Bicchieri, Cristina (1993). Rationality and Coordination. Cambridge University Press.
• Kenneth Binmore, Fun and Games.
• David M. Chess (1988). Simulating the evolution of behavior: the iterated prisoners' dilemma problem. Complex Systems, 2:663–670.
• Dresher, M. (1961). The Mathematics of Games of Strategy: Theory and Applications. Prentice-Hall, Englewood Cliffs, NJ.
• Flood, M.M. (1952). Some experimental games. Research memorandum RM-789. RAND Corporation, Santa Monica, CA.
• Kaminski, Marek M. (2004). Games Prisoners Play. Princeton University Press. ISBN 0-691-11721-7. http://webfiles.uci.edu/mkaminsk/www/book.html
• Poundstone, W. (1992). Prisoner's Dilemma. Doubleday, NY.
• Greif, A. (2006). Institutions and the Path to the Modern Economy: Lessons from Medieval Trade. Cambridge University Press, Cambridge, UK.
• Rapoport, Anatol and Albert M. Chammah (1965). Prisoner's Dilemma. University of Michigan Press.
• S. Le and R.
Boyd (2007). "Evolutionary Dynamics of the Continuous Iterated Prisoner's Dilemma". Journal of Theoretical Biology, Volume 245, 258–267.
• A. Rogers, R. K. Dash, S. D. Ramchurn, P. Vytelingum and N. R. Jennings (2007). "Coordinating team players within a noisy iterated Prisoner's Dilemma tournament". Theoretical Computer Science 377 (1-3), 243-259.

Further reading

• Bicchieri, Cristina and Mitchell Green (1997). "Symmetry Arguments for Cooperation in the Prisoner's Dilemma", in G. Holmstrom-Hintikka and R. Tuomela (eds.), Contemporary Action Theory: The Philosophy and Logic of Social Action, Kluwer.
• Plous, S. (1993). Prisoner's Dilemma or Perceptual Dilemma? Journal of Peace Research, Vol. 30, No. 2, 163-179.
{"url":"http://maps.thefullwiki.org/Prisoner's_dilemma","timestamp":"2014-04-18T18:11:28Z","content_type":null,"content_length":"66154","record_id":"<urn:uuid:b2672cb5-c21c-4b48-b117-137f2237f616>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.7/warc/CC-MAIN-20140416005215-00119-ip-10-147-4-33.ec2.internal.warc.gz"}
7 projects tagged "Mathematics" Python Web Graph Generator is a threaded Web graph (Power law random graph) generator. It can generate a synthetic Web graph of about one million nodes in a few minutes on a desktop machine. It supports both directed and undirected graphs. It implements a threaded variant of the RMAT algorithm. A little tweak can produce graphs representing social networks or community networks. It can also output connected components in a graph.
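The listing above mentions a threaded variant of the R-MAT algorithm. As a rough illustration of the R-MAT idea only (a single-threaded sketch of my own, not the project's actual code or parameters), each edge is chosen by recursively descending into quadrants of the adjacency matrix:

import random

def rmat_edge(scale, a=0.57, b=0.19, c=0.19):
    """Pick one (src, dst) pair in a graph of 2**scale nodes, R-MAT style:
    at each level, choose a quadrant of the adjacency matrix with
    probabilities a, b, c, and d = 1 - a - b - c (bottom-right)."""
    src = dst = 0
    for _ in range(scale):
        r = random.random()
        src <<= 1
        dst <<= 1
        if r < a:
            pass                    # top-left quadrant: neither bit set
        elif r < a + b:
            dst |= 1                # top-right
        elif r < a + b + c:
            src |= 1                # bottom-left
        else:
            src |= 1                # bottom-right
            dst |= 1
    return src, dst

def rmat_graph(scale, n_edges):
    return [rmat_edge(scale) for _ in range(n_edges)]

edges = rmat_graph(scale=10, n_edges=5000)   # ~1024 nodes, 5000 directed edges
print(edges[:5])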
{"url":"http://freecode.com/tags/mathematics?page=1&with=1316&without=","timestamp":"2014-04-20T19:09:17Z","content_type":null,"content_length":"45087","record_id":"<urn:uuid:1f75945e-355d-4443-b944-e93b03d4450a>","cc-path":"CC-MAIN-2014-15/segments/1397609539066.13/warc/CC-MAIN-20140416005219-00395-ip-10-147-4-33.ec2.internal.warc.gz"}
Dirac points multiply in the presence of a BEC The ability to prepare ultracold atoms in graphenelike hexagonal optical lattices is expanding the types of systems in which Dirac dynamics can be observed. In such cold-atom systems, one could, in principle, study the interplay between superfluidity and Dirac physics. In a paper appearing in Physical Review Letters, Zhu Chen at the Chinese Academy of Sciences and Biao Wu of Peking University use mean-field theory to calculate the Bloch bands of a Bose-Einstein condensate confined to a hexagonal optical lattice. The Dirac point is a point in the Brillouin zone around which the energy-momentum relation is linear. Its existence in graphene is at the heart of this material’s unusual properties, in which electrons behave as massless particles. Chen and Wu’s study predicts, surprisingly, that in the analog cold-atom system, the topological structure of the Dirac point is drastically modified: intersecting tubelike bands appear around the original Dirac point, giving rise to a set of new Dirac points that form a closed curve. More importantly, this transformation should occur even with an arbitrarily small interaction between the atoms, upending the idea that such topological effects can only occur in the presence of a finite interaction between atoms. The modified band structure prevents an adiabatic evolution of a state across the Dirac point, violating the usual quantum rule that a system remains in its instantaneous eigenstate if an external perturbation is sufficiently slow. This effect could be tested experimentally in a so-called triple-well structure, which is a combination of rectangular and triangular optical lattices. – Hari Dahal
{"url":"http://physics.aps.org/synopsis-for/print/10.1103/PhysRevLett.107.065301","timestamp":"2014-04-21T05:10:14Z","content_type":null,"content_length":"4860","record_id":"<urn:uuid:4904df00-f2bc-4ff2-a0b1-1a0f22afd90d>","cc-path":"CC-MAIN-2014-15/segments/1398223206147.1/warc/CC-MAIN-20140423032006-00592-ip-10-147-4-33.ec2.internal.warc.gz"}
The red herring principle

The mathematical red herring principle is the principle that in mathematics, a "red herring" need not, in general, be either red or a herring. Frequently, in fact, it is conversely true that all herrings are red herrings. This often leads to mathematicians speaking of "non-red herrings," and sometimes even to a redefinition of "herring" to include both the red and non-red versions. Thus, "red herring" as used here is to be interpreted neutrally: it refers to a name of a concept which might throw the reader off-track, by accident as it were.

Some adjectives are almost universally used as "red herring adjectives," i.e. placing that adjective in front of something makes it more general in some way. Some red herring adjectives almost always have the same meaning, such as "pseudo" and "lax," but others, such as "weak," have different meanings in different contexts.

Some uses of terminology are similar in some ways, but don't quite fall under the same category. For instance, in a number of cases mathematicians working in a particular field tend to omit niceness adjectives. These terminological uses can create situations that appear similar to actual red herrings, such as the use of "noncommutative ring" by people who are familiar with using "ring" to mean "commutative ring." However, since the actual definitions of terms like "ring" and "topological space" are generally accepted to be unchanged (as opposed to the commonly used abbreviations), these are not true red herrings.
{"url":"http://ncatlab.org/nlab/show/red+herring+principle","timestamp":"2014-04-21T14:49:37Z","content_type":null,"content_length":"22587","record_id":"<urn:uuid:05b1634e-e394-4965-b7c8-5cf8c174d2b1>","cc-path":"CC-MAIN-2014-15/segments/1398223202457.0/warc/CC-MAIN-20140423032002-00637-ip-10-147-4-33.ec2.internal.warc.gz"}
La Puente Algebra 1 Tutor Find a La Puente Algebra 1 Tutor ...Aside from my hospital job, I also work as a retail pharmacist. There, I dispense various medications not limited to psychotropic medications. In this setting, I dispense the majority of the medication spectrum including: Antihypertensives, antilipidemia, antidiabetic, antiinflammatory medications and so on. 23 Subjects: including algebra 1, chemistry, Spanish, English ...As a recent graduate from USC with a Masters degree in Global Medicine, I was required to take biology classes at the medical school level where I performed at the top of my class in each course. Therefore, I have a very strong background in biology and enjoy helping students discover the intrig... 15 Subjects: including algebra 1, chemistry, geometry, biology ...First, It is very important to guide my students the fundamentals in mathematics. Second, I make sure that my students are learning, not just memorizing techniques. Thirdly, I would give my students proper techniques to solve word problems in mathematics. 3 Subjects: including algebra 1, statistics, algebra 2 ...One of my main focuses falls on children, whether it be through education or counseling. I have taken and am taking classes that involve education young children. I have received a certificate that provides evidence that I have the sufficient abilities and skills to work with children. 19 Subjects: including algebra 1, Spanish, reading, English ...Any parent or student who thinks that I can be of help is encouraged to talk with me about subject matter after WyzAnt requirements are met. I thank you for consideration.I have a Masters in Mechanical Engineering from University of Texas at Arlington with a minor in mathematics. I have scored ... 15 Subjects: including algebra 1, calculus, ACT Math, differential equations
{"url":"http://www.purplemath.com/la_puente_ca_algebra_1_tutors.php","timestamp":"2014-04-16T19:09:56Z","content_type":null,"content_length":"23990","record_id":"<urn:uuid:a0c2675c-9c25-42f5-bfbd-118a67461d03>","cc-path":"CC-MAIN-2014-15/segments/1397609524644.38/warc/CC-MAIN-20140416005204-00303-ip-10-147-4-33.ec2.internal.warc.gz"}
1. Consider an isolated system of
2. Consider a function of many variables, f(x_1, ..., x_n), satisfying f(t x_1, ..., t x_n) = t^a f(x_1, ..., x_n) for all t (i.e., a homogeneous function of degree a). Prove the following theorem regarding homogeneous functions: Σ_i x_i ∂f/∂x_i = a f.
3. Consider an isolated system of (e.g., their electric charges). Write an expression for the total potential energy, where virial theorem. Demonstrate that there are no bound steady-state equilibria for the system (i.e., states in which the global system parameters do not evolve in time) when
4. A star can be thought of as a spherical system that consists of a very large number of particles interacting via gravity. Show that, for such a system, the virial theorem, introduced in the previous exercise, implies that
5. Consider a system of
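The equations dropped from these exercises are standard results. For reference, here is a hedged LaTeX reconstruction of the two theorems the problems revolve around (Euler's theorem on homogeneous functions, and the virial theorem for a bound gravitating system); these are the textbook forms, not necessarily the exact expressions of the original problem set.

% Euler's theorem: if f(t x_1, ..., t x_n) = t^a f(x_1, ..., x_n) for all t, then
\sum_{i=1}^{n} x_i \frac{\partial f}{\partial x_i} = a\, f .

% Virial theorem for a bound, self-gravitating system in steady state
% (potential energy homogeneous of degree -1):
2\,\langle K \rangle + \langle U \rangle = 0 ,
\qquad E = \langle K \rangle + \langle U \rangle = -\langle K \rangle = \tfrac{1}{2}\,\langle U \rangle .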
{"url":"http://farside.ph.utexas.edu/teaching/336k/Newton/node13.html","timestamp":"2014-04-18T10:35:04Z","content_type":null,"content_length":"12489","record_id":"<urn:uuid:efd649ee-d691-49b0-9cbe-60c5958b82c2>","cc-path":"CC-MAIN-2014-15/segments/1397609533308.11/warc/CC-MAIN-20140416005213-00199-ip-10-147-4-33.ec2.internal.warc.gz"}
Download and run the program 'lissajous.m' from the class web page under 'Demos for Lab2'. Using Euler's identity, show that cos(ωt + θ) is a sum of two rotating phasors, one of which rotates counter-clockwise at angular frequency ω and the other of which rotates clockwise at angular frequency ω. Give a mathematical explanation and a graphical explanation. HINT: The function cos(ωt + θ) is the real part of a rotating phasor, so it must be a sum of phasors that represents a signal that oscillates between -1 and 1 on the real axis of the complex plane with frequency ω/2π. Use your results from #6 above to generalize that a Lissajous figure is just the sum of two rotating phasors, one of which rotates counter-clockwise at angular frequency ω and the other of which rotates clockwise at angular frequency ω. State mathematically what the clockwise and counter-clockwise parts are. When is the Lissajous figure of the form Ae^{jθ}e^{jωt}, and what kind of figure is this?
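The decomposition the problem asks about is easy to check numerically. Here is a hedged Python sketch (using NumPy and Matplotlib instead of the course's lissajous.m, which is not reproduced here, and with arbitrary example values for A, ω, and θ):

import numpy as np
import matplotlib.pyplot as plt

A, omega, theta = 1.0, 2 * np.pi * 3.0, np.pi / 4   # example values only
t = np.linspace(0, 1, 2000)

# Euler: cos(w t + th) = 0.5*exp(+j(w t + th)) + 0.5*exp(-j(w t + th))
ccw = 0.5 * A * np.exp(1j * (omega * t + theta))     # counter-clockwise phasor
cw  = 0.5 * A * np.exp(-1j * (omega * t + theta))    # clockwise phasor
signal = ccw + cw

assert np.allclose(signal.imag, 0, atol=1e-12)       # the sum is purely real
assert np.allclose(signal.real, A * np.cos(omega * t + theta))

# A single rotating phasor A*exp(j*theta)*exp(j*omega*t) traces a circle
circle = A * np.exp(1j * theta) * np.exp(1j * omega * t)
plt.plot(circle.real, circle.imag)
plt.axis("equal")
plt.title("Lissajous figure from one rotating phasor: a circle")
plt.show()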
{"url":"http://www.chegg.com/homework-help/questions-and-answers/download-run-program-lissajous-m-class-web-page-demos-lab2--using-euler-s-identify-show-co-q1477873","timestamp":"2014-04-21T02:43:51Z","content_type":null,"content_length":"21354","record_id":"<urn:uuid:f0814974-94e9-41ed-b531-240ac14c586b>","cc-path":"CC-MAIN-2014-15/segments/1397609539447.23/warc/CC-MAIN-20140416005219-00594-ip-10-147-4-33.ec2.internal.warc.gz"}
Peltier Tech Blog - Page 2 of 105 - Peltier Tech Excel Charts and Programming Blog In Mind the Gap – Charting Empty Cells, I described in gory detail how Excel’s various chart types treat empty cells, that is, cells which are totally empty. I also described why none of the approaches we mortals have ever tried to produce a gap across a simulated blank cell has ever worked. Cells which have formulas that return a null string (i.e., “”) are plotted like any text, with a marker at a value of zero. In line and XY Scatter charts, an error value of #N/A almost works, because it suppresses plotting of a point, but does not produce a gap in the line segments connecting points. The only way to get a gap in the lines of a chart is to have empty cells is to rely on a VBA routine to correct the appearance of the chart, wither by changing the formatting of the points and lines you want to hide, or by clearing the contents of the associated cells. These work well, but must be repeated every time the data changes, and if you’ve cleared any cells, their contents must be recreated in case those points should no longer show gaps. This post contains two routines to fix the chart’s appearance, one by changing the chart series formatting, the other by changing the data. The Gap Problem Below is a simple data range I created to illustrate this problem. The “Broken” column shows a set of random values centered around 3, with either #N/A or “x” inserted into random cells in place of the random values. The “x” is text, so it will be treated the same as “”, but it is used instead so it shows up in the cells. In the “ChartFix” and “DataFix” I’ve included this same data, increased by 1 so I could plot it with the “Broken” data without obscuring it. I’ll use two XY Scatter charts, one to show how fixing the chart formatting works, the other to show how fixing the data works. Here are the two charts I promised. The lower (blue) series is the “Broken” data, which will not be changed by these routines. The upper (orange) series is the “ChartFix” series (left) or the “DataFix” series (right). Both series in both charts show a missing marker at X=5, with a line connecting the points at X=4 and X=6. These missing points correspond to #N/A errors in the corresponding cells. All four series also show a point plotted at X=15, Y=0, corresponding to the “x” value in the corresponding cells. We want our code to leave no markers and no connecting lines at these points in the orange series, while the blue series remain intact to remind us where changes were made. In general you will probably want to process all series in your charts. The Code Both procedures parse the series formula (see detailed documentaiton at the end of this article) to find the ranges containing the X and Y values. If you are not using XY Scatter charts, remove the X value components of the code, because nonnumeric X values are allowed for other chart types. The first procedure loops through the parsed X and Y values, and where it finds a nonnumeric value, it formats the point to have no marker and for the connecting line segment on either side to show no line. Where it finds a numeric value, it restores marker and line segment formatting, in case the chart already was “fixed” but now the data has changed. 
Sub FixLineFormatInChart()
  Dim iPt As Long
  Dim sFmla As String
  Dim vFmla As Variant
  Dim sXVals As String
  Dim sYVals As String
  Dim rXVals As Range
  Dim rYVals As Range
  Dim vXVals As Variant
  Dim vYVals As Variant

  With ActiveChart
    ' just process the orange series
    With .SeriesCollection(2)
      sFmla = .Formula
      vFmla = Split(sFmla, ",")
      sXVals = vFmla(1)
      sYVals = vFmla(2)
      Set rXVals = Range(sXVals)
      Set rYVals = Range(sYVals)
      vXVals = rXVals.Value
      vYVals = rYVals.Value

      ' markers: show for numeric points, hide for nonnumeric points
      For iPt = 1 To .Points.Count
        If IsNumeric(vXVals(iPt, 1)) And IsNumeric(vYVals(iPt, 1)) Then
          .Points(iPt).MarkerStyle = .MarkerStyle
        Else
          .Points(iPt).MarkerStyle = xlMarkerStyleNone
        End If
      Next

      ' line segments: show only between two numeric points
      For iPt = 2 To .Points.Count
        If IsNumeric(vXVals(iPt - 1, 1)) And IsNumeric(vXVals(iPt, 1)) And _
            IsNumeric(vYVals(iPt - 1, 1)) And IsNumeric(vYVals(iPt, 1)) Then
          .Points(iPt).Format.Line.Visible = True
        Else
          .Points(iPt).Format.Line.Visible = False
        End If
      Next
    End With
  End With
End Sub

The second procedure loops through the parsed X and Y values, and where it finds a nonnumeric value, it clears the cell containing the nonnumeric value. When rerun on a chart with changed data, it cannot restore the appropriate cell contents where it finds a numeric value, because the relevant formula or value was previously deleted. If you plan to use this approach, it is best to leave the original data or calculations intact, and use another worksheet range that simply links to the original data, so the links are easy to restore.

Sub FixSourceDataInSheet()
  Dim iPt As Long
  Dim sFmla As String
  Dim vFmla As Variant
  Dim sXVals As String
  Dim sYVals As String
  Dim rXVals As Range
  Dim rYVals As Range
  Dim vXVals As Variant
  Dim vYVals As Variant

  With ActiveChart
    ' just process the orange series
    With .SeriesCollection(2)
      sFmla = .Formula
      vFmla = Split(sFmla, ",")
      sXVals = vFmla(1)
      sYVals = vFmla(2)
      Set rXVals = Range(sXVals)
      Set rYVals = Range(sYVals)
      vXVals = rXVals.Value
      vYVals = rYVals.Value

      ' clear any cell whose value is not numeric
      For iPt = 1 To .Points.Count
        If Not IsNumeric(vXVals(iPt, 1)) Then
          rXVals.Cells(iPt).ClearContents
        End If
        If Not IsNumeric(vYVals(iPt, 1)) Then
          rYVals.Cells(iPt).ClearContents
        End If
      Next
    End With
  End With
End Sub

The Results

I selected the first chart (title = "Change Chart") and ran the first procedure (FixLineFormatInChart). Then I selected the second chart (title = "Change Data") and ran the second procedure (FixSourceDataInSheet). Here is the resulting data. Note that the ChartFix data is unchanged because the FixLineFormatInChart procedure only changes the chart, while the DataFix data now has a couple blank cells because FixSourceDataInSheet works by changing the data and leaving the chart alone.

Here are the resulting charts. The two procedures produce identical results, changing the interpolated line across point 5 (the #N/A value) to a gap, and changing the plotted zero at point 15 (the text label "x") to a gap. The interpolated line across point 5 and the zero value plotted at point 15 remain in the blue series.

What if the Data Changes?

Worksheets are not always static pictures of data. The original data may change, or you may perform the same analysis with new data. The following table represents new data overlaid on the previous data range. The original nonnumeric cells now contain numeric values, but we've gained text values in rows 4 and 5. Because the previously nonnumeric cells in column D were cleared, whatever formulas we had there could not recompute new values for cells D6 and D16.
In the charts from before, we still see gaps, either because we formatted them not to appear (left) or because we deleted the values (right). We're fine if we used the Change Chart approach, or if we simply pasted values on top of the data, which filled in any blank cells from before. But if we deleted formulas, as in column D above, we need to restore them, as shown in the worksheet range below.

The chart with reformatted points still shows gaps (left), but the formatting will be restored to numeric points by the procedure. The chart with unchanged formatting and restored data (right) shows all markers and line segments. The text values are plotted as the two zeros near the left edge of these charts.

We run the corresponding procedures on the two charts. The data is changed in column D, as before. The orange series in both charts now show gaps where the blue series remind us of the text values plotted as zero.

Parsing the Series Formula

A series formula has the following form:

=SERIES(Series Name, X Values, Y Values, Plot Order)

The four arguments of the series formula are:

1. Series Name – can be a worksheet address or defined name, a text label, or empty. In this example it's a cell reference, Sheet1!$C$2.
2. X Values (Category Labels) – can be a worksheet address or defined name, a literal array such as {1,2,3} or {"A","B","C"}, or empty. In this example it's a worksheet address, Sheet1!$A$3:$A$19.
3. Y Values – can be a worksheet address or defined name, or a literal array such as {1,2,3}. In this example it's a worksheet address, Sheet1!$C$3:$C$19.
4. Plot Order – an integer, in this case 2.

Generally to parse the series formula, you first need to strip off everything except the parameters, which means the open parenthesis and the preceding text, and the closing parenthesis. Then you split the resulting string into a comma-separated array. This works as long as none of the ranges contain multiple areas, in which case there are commas separating the addresses of the individual areas, and simple separation by commas will produce surprises. The result is a zero-based array. Since we only want the X and Y values, we don't care that the first parameter will contain the opening parenthesis and the preceding text, or that the last parameter will contain the closing parenthesis.

So our code looks like this, with loads of nice documentation:

With .SeriesCollection(2)
  ' get series formula
  ' =SERIES(Sheet1!$C$2,Sheet1!$A$3:$A$19,Sheet1!$C$3:$C$19,2)
  sFmla = .Formula

  ' split formula into its arguments
  vFmla = Split(sFmla, ",")
  ' vFmla(0) = "=SERIES(Sheet1!$C$2"
  ' vFmla(1) = "Sheet1!$A$3:$A$19"
  ' vFmla(2) = "Sheet1!$C$3:$C$19"
  ' vFmla(3) = "2)"

  ' get the individual addresses for X and Y
  sXVals = vFmla(1)
  sYVals = vFmla(2)

  ' find the ranges containing the X and Y values
  Set rXVals = Range(sXVals)
  Set rYVals = Range(sYVals)

  ' put the values from the X and Y ranges into arrays
  ' (faster processing in an array than cell-by-cell in a range)
  vXVals = rXVals.Value
  vYVals = rYVals.Value

Now we can test for nonnumeric values in the arrays, and either clear the corresponding cells or change the formatting of the plotted points.

You can put the X and Y values of a series directly into an array as follows:

vXVals = ActiveChart.SeriesCollection(2).XValues
vYVals = ActiveChart.SeriesCollection(2).Values

The problem here is that the values saved internally in the series have already been changed. The #N/A remains #N/A in the array, but any blanks and any text values are converted to zeros, so they are undetectable as nonnumeric.
{"url":"http://peltiertech.com/WordPress/page/2/","timestamp":"2014-04-17T12:43:01Z","content_type":null,"content_length":"76212","record_id":"<urn:uuid:e587c94b-b873-48b8-9afc-8f1d91b25e2f>","cc-path":"CC-MAIN-2014-15/segments/1397609530131.27/warc/CC-MAIN-20140416005210-00081-ip-10-147-4-33.ec2.internal.warc.gz"}
A. Radius of gyration as an order parameter B. Protein backbone geometry C. Soliton description of protein-backbone geometry D. UNRES model of polypeptide chains A. Phase co-existence in protein oligomers B. Structure of AICD in the AICD/Fe65 dimer C. Collapse simulations of isolated AICD and Fe65
{"url":"http://scitation.aip.org/content/aip/journal/jcp/137/3/10.1063/1.4734019","timestamp":"2014-04-17T09:46:33Z","content_type":null,"content_length":"97555","record_id":"<urn:uuid:469ebe0d-90d4-4f8c-86ac-dd0256e7b8f5>","cc-path":"CC-MAIN-2014-15/segments/1397609533689.29/warc/CC-MAIN-20140416005213-00458-ip-10-147-4-33.ec2.internal.warc.gz"}
Tracking Node Path in Graph

I'm trying to develop several visualization tools for a network of individuals (nodes). For example, one tool computes the (user input) number of smallest distances between each node and every other node, and then plots arrows between nodes that are sensing each other (they sense each other when one is a shortest distance away from the other). Some edges are directed because the sensing only travels in one direction, and some are undirected because the sensing goes both ways--each member of the connected pair is one of the other's nearest neighbors.

Now, I'm trying to make a tool that finds independent groups nested inside the larger network. To do this, I want to traverse the array of node pairs (both directed and undirected), and track each "node path". So, say the first pair is 1 6. Then I want to find where 6 appears in the first column and use whatever is in the column next to it as the connector. If what is in the second column is ever the first node that we started with, then there's a complete path, which means that those pair connections define an independent group within the larger network.

I'm having trouble coding this because the code doesn't seem to be updating. Here's what I have so far:

for i=1:length(directedEdges)
    nodePath(j,:) = directedEdges(i,:);
    nodeConnection = nodePath(j,2);
    while nodeConnection ~= nodePath(i,1)
        [nodeSensor,nodeSensed] = find(directedEdges(:,1)==nodeConnection);
        nodePath(j,:) = directedEdges(nodeSensor,:);
        nodeConnection = nodePath(j,2);

^this is what I need help with. I don't know how to change the assignment statements so that the code properly checks every node path to see if it's an independent cycle.

% if there's a complete cycle, plot it: this code works.
if nodeConnection == nodePath(i,1)
    directedEdgesCycle = nodePath;
    for l=1:numberOfNeighbors
        for k=1:length(directedEdgesCycle)
            xVal1 = x(directedEdgesCycle(k,1));
            xVal2 = x(directedEdgesCycle(k,2));
            f(j) = xVal2 - xVal1;
        for k=1:length(directedEdgesCycle)
            yVal1 = y(directedEdgesCycle(k,1));
            yVal2 = y(directedEdgesCycle(k,2));
            g(j) = yVal2 - yVal1;
        for k=1:length(directedEdgesCycle)
            zVal1 = z(directedEdgesCycle(k,1));
            zVal2 = z(directedEdgesCycle(k,2));
            h(j) = zVal2 - zVal1;
    xplotdcycle = x(directedEdgesCycle(:,1));
    yplotdcycle = y(directedEdgesCycle(:,1));
    zplotdcycle = z(directedEdgesCycle(:,1));
    f = transpose(f);
    g = transpose(g);
    h = transpose(h);

Any help would be greatly appreciated, thank you.
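The cycle-tracing idea the poster describes (follow each node's outgoing edge until the walk returns to its starting node) can be sketched independently of the MATLAB details. The fragment below is only an illustration of that logic in Python; the edge list and names are hypothetical and are not taken from the thread.

# Hypothetical sketch of "follow the connector until we return to the start".
# directed_edges is assumed to be a list of (sensor, sensed) node pairs.

def find_cycle_from(start_edge, directed_edges, max_steps=1000):
    """Follow outgoing edges from start_edge; return the edge path if a cycle closes."""
    path = [start_edge]
    start_node, current = start_edge
    for _ in range(max_steps):                 # guard against walks that never close
        if current == start_node:
            return path                        # walk returned to its origin: closed cycle
        next_edges = [e for e in directed_edges if e[0] == current]
        if not next_edges:
            return None                        # dead end: no outgoing edge to follow
        edge = next_edges[0]                   # take the first outgoing edge as the connector
        path.append(edge)
        current = edge[1]
    return None

edges = [(1, 6), (6, 3), (3, 1), (2, 5)]
print(find_cycle_from((1, 6), edges))          # [(1, 6), (6, 3), (3, 1)]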
{"url":"http://www.mathworks.com/matlabcentral/newsreader/view_thread/309076","timestamp":"2014-04-18T05:56:10Z","content_type":null,"content_length":"33264","record_id":"<urn:uuid:4cc495c3-1d9e-4a68-bdb1-0bfbabcd26f0>","cc-path":"CC-MAIN-2014-15/segments/1397609532573.41/warc/CC-MAIN-20140416005212-00078-ip-10-147-4-33.ec2.internal.warc.gz"}
Hero's Engine (Rena Benor) - sed695b3
Rena Benor
Topics: steam engine, Newton's 2nd and 3rd laws of motion, turbines, torque

Motion and Forces
1. Newton's laws predict the motion of most objects. As a basis for understanding this concept:
a. Students know how to solve problems that involve constant speed and average speed.
c. Students know how to apply the law F=ma (Force = mass x acceleration) to solve one-dimensional motion problems that involve constant forces (Newton's second law).
d. Students know that when one object exerts a force on a second object, the second object always exerts a force of equal magnitude and in the opposite direction (Newton's third law).

Materials needed: Hero's engine, water, heat source, safety glasses.

Fill the flask with a small amount of water. Wear safety glasses. Put a heat source underneath it. As the water starts to boil, the flask will start to rotate.

Hero's Engine is today a generic name for any device which propels itself by shooting steam from one or more orifices. These devices are also known as Eolipiles. It is considered to be the first recorded steam engine or reaction steam turbine. After filling the sphere with water, a flame is applied to it until the water boils, and the device begins to rotate. The sphere has oppositely bent or curved nozzles projecting from it. When the vessel is pressurized with steam, steam is expelled through the nozzles, which generates thrust due to the rocket principle, as a consequence of Newton's 2nd and 3rd laws of motion. The thrusts combine to produce a rotational moment, or torque, causing the vessel to spin about its axis. Aerodynamic drag and frictional forces in the bearings build up quickly with increasing rotational speed (rpm) and consume the accelerating torque, eventually canceling it so that the engine reaches a steady-state speed.

1. What must occur before the Hero's engine spins?
2. What does the shape and direction of the arms have to do with its motion?
3. How is this translated for today's machines?

Everyday examples of the principles illustrated
A steam engine is a heat engine that performs mechanical work using steam as its working fluid. Water turns to steam in a boiler and reaches a high pressure. When expanded through pistons or turbines, mechanical work is done. The reduced-pressure steam is then condensed, and it is pumped back into the boiler. Early devices were not practical power producers, but more advanced designs producing usable power have become a major source of mechanical power over the last 300 years. This power source was later applied to prime movers, mobile devices such as steam tractors and railway locomotives. The steam engine was a critical component of the Industrial Revolution, providing the power source to propel modern mass-production manufacturing methods. Modern steam turbines generate about 90% of the electric power in the United States, using a variety of heat sources.

F = ma lets us work out the forces at work on objects by multiplying the mass of the object by the acceleration of the object. For example, consider the force at work on a Formula 1 car as it starts a race. If the F1 car has a mass of 600 kg and an acceleration of 20 m/s/s, then we can work out the force pushing the car by multiplying the mass by the acceleration, like this: 600 x 20 = 12,000 N. F = ma is the second law of motion proposed by Sir Isaac Newton.
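The F = ma arithmetic in the last paragraph can be checked with a short script. This is only a restatement of the worked example above; the numbers are the ones given for the F1 car, and nothing beyond them is assumed.

# Newton's second law: force = mass * acceleration
mass = 600.0          # kg, mass of the F1 car in the example
acceleration = 20.0   # m/s^2 (written 20 m/s/s above)

force = mass * acceleration
print(force)          # prints 12000.0, i.e. the 12,000 N quoted in the text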
{"url":"https://sites.google.com/site/sed695b3/projects/demonstration-equipment/hero-s-demo-rena-benor","timestamp":"2014-04-16T07:58:15Z","content_type":null,"content_length":"32526","record_id":"<urn:uuid:8a7d0e37-ae4d-4670-bae6-b339bcd8c3c6>","cc-path":"CC-MAIN-2014-15/segments/1397609521558.37/warc/CC-MAIN-20140416005201-00505-ip-10-147-4-33.ec2.internal.warc.gz"}
Mike Roth

11. An analogue of Liouville's theorem and an application to cubic surfaces (with David McKinnon), arXiv:1306.2977; submitted. In this paper we generalize, in the sense of 10, Liouville's approximation theorem to arbitrary varieties, as well as giving an extension involving the asymptotic base locus. As an application, we compute ε[x](L) and α[x](L) for all nef line bundles L on a cubic surface, and all points x not on a line.

10. Seshadri constants, Diophantine approximation, and Roth's theorem for arbitrary varieties (with David McKinnon), arXiv:1306.2976; submitted. Let X be a variety defined over a number field k. The Bombieri-Lang conjecture is a prediction that global positivity of the canonical bundle of X implies a global limitation on the accumulation of rational points. In this paper we investigate local versions of this predicted phenomenon. Given a line bundle L and point x ∈ X, the main theme of the paper is the interrelations between ε[x](L), the Seshadri constant, measuring local positivity of L at x, and α[x](L), an invariant measuring the accumulation of rational points at x, as gauged by L. The two invariants share similar formal properties. Moreover, the classic approximation results on the affine line (the theorems of Liouville and Roth) generalize as inequalities between α[x] and ε[x] valid for all projective varieties.

9. Reduction rules for Littlewood-Richardson coefficients, International Mathematics Research Notices, Vol 2011, No. 18, 4105–4134. This paper shows that every regular face of the Littlewood-Richardson cone of a semisimple group G gives rise to a reduction rule: a rule reducing every problem of computing the multiplicity of an irreducible representation in a tensor product "on that face" to a similar problem on a group of smaller rank.

8. Geometric realization of PRV components and the Littlewood-Richardson cone (with Ivan Dimitrov), Contemp. Math. 490, Amer. Math. Soc., Providence, RI, 2009, 83–95. This paper is a mostly expository discussion of the main questions (and some of the answers) in 7, written from the point of view of representation theory. The paper was written while the previous article was in preparation, and as a result some of the problems raised here are solved in 7.

7. Cup products of line bundles on homogeneous varieties and generalized PRV components of multiplicity one (with Ivan Dimitrov), arXiv:0909.2280; submitted. Let X=G/B be a complete flag variety. The two main questions answered by this paper are when the cup product of cohomology groups of line bundles on X is surjective, and which irreducible G-representations of a tensor product can be realized by such a surjection. The paper also gives bounds on the multiplicities in a tensor product, and relates these considerations to the boundary of the Littlewood-Richardson cone.

6. Abel-Jacobi Maps Associated to Smooth Cubic Threefolds (with Joe Harris and Jason Starr), arXiv:math.ag/0202080. This paper studies the Abel-Jacobi map from the space H^g,d(X) of degree d, genus g curves on a smooth cubic threefold X to the intermediate Jacobian of X, and proves that this map coincides with the maximally rationally connected fibration of H^g,d for d < 6.

5. Curves of Small Degree on Cubic Threefolds (with Joe Harris and Jason Starr), Rocky Mountain Journal of Mathematics 35 (2005) 761–818. This paper contains preparatory results needed for paper 6 on the study of the Abel-Jacobi map on cubic threefolds.
It proves that for any smooth cubic threefold the space parameterizing degree d, genus g, curves is irreducible of the expected dimension, for d < 6. 4. Inverse Systems and Regular Representations, Journal of Pure and Applied Algebra 199 (2005) 219–234. This paper generalizes a result used in algebraic combinatorics when studying reflection groups and gives a conceptual explanation in terms of Grothendieck duality. 3. The Affine Stratification Number and the Moduli Space of Curves (with Ravi Vakil), Centre de Recherches Mathématiques Proceedings and Lecture Notes, Volume 38 (2004) 213–227. This paper introduces the idea of the affine stratification number of a scheme, develops its basic properties, and shows how it can be used to bound the topological complexity of the scheme. 2. Rational Curves on Hypersurfaces of low degree (with Joe Harris and Jason Starr), Journal für die reine und angewandte Mathematik (Crelle's Journal) 571 (2004) 73–106. This paper proves that for a general hypersurface of degree d < (n+1)/2 in P^n the scheme parameterizing degree e rational curves in the hypersurface is a complete intersection of the expected dimension for all e. 1. Stable maps and Quot Schemes (with Mihnea Popa), Inventiones Mathematicae 152 (2003) 625–663. This paper uses stable maps to construct an alternate compactification of the space of vector bundles quotients of a fixed vector bundle, and applies this to obtain results about the Quot scheme.
{"url":"http://www.mast.queensu.ca/~mikeroth/","timestamp":"2014-04-16T16:43:39Z","content_type":null,"content_length":"15747","record_id":"<urn:uuid:db4c6e7c-1185-4c3d-b640-dd63f8078210>","cc-path":"CC-MAIN-2014-15/segments/1397609524259.30/warc/CC-MAIN-20140416005204-00376-ip-10-147-4-33.ec2.internal.warc.gz"}
Exceptional MathReviews Posted by: Dave Richeson | October 14, 2010 Exceptional MathReviews If you have access to MathSciNet and are in the mood for some good laughs, head over to Kimball Martin’s collection of Exceptional MathReviews. He introduces his collection as follows: Were you ever looking up papers in MathSciNet and you found one that especially made you smile or laugh? And were you ever wishing that MathReviews were sorted by amusement factor? Well, we did. In fact, it is a long-standing game among many mathematicians to find the most amusing or worst MathReviews they can. As a gesture of humanitarianism, I’ve compiled a list of what I take to be, for one reason or another, exceptional MathReviews of which I have been made aware. A review may merit listing by being unexpected, witty, scathing, humorous or otherwise deviating considerably from the run-of-the-mill reviews. It may be either good or bad, called for or uncalled for, but it should be in some way Here are a few of the reviews (or snippets of reviews) that I found while browsing the list. Spike Milligan wrote of a certain poet that he tortured the English language, yet had still not managed to get it to reveal its meaning. Trying to fathom the paper under review is similarly It is hard to imagine in a single paper such an accumulation of garbled English, unfinished sentences, undefined notions and notations, and mathematical nonsense. The author has apparently read a large number of books and papers on the subject, if one looks at his bibliography; but it is doubtful that he has understood any of them… What is amazing to the reviewer is that such a thing was ever printed. Not every text containing mathematical formulae or terminology may be considered as a scientific work. Sometimes it is a mere imitation. My impression is that this is exactly the case of the paper under review. The author adds three introductory sentences to a paper of G. P. Whittle [Discrete Math. 54 (1985), no. 2, 239; MR0791665 (86j:05047)] and changes the word “principal” to “fundamental”. He also adds an acknowledgement to the referee but fails to add an acknowledgement to Whittle for writing the original paper. This paper seems to the reviewer to contain no mathematics. The author asserts that “this book is written by an amateur for other amateurs, but we amateurs won’t mind having the professionals reading over our shoulders”. The reviewer, a professional, would like to benefit from whatever insights the author may have. However, he has not been able to see through the blizzard of unusual (and largely unexplained) notations and formulas that fill the 274 hand-written pages of this book. If the author has a message for professional mathematicians, he will have to try again, using a language that is not a personal secret. Posted in Humor, Links, Math | Tags: Humor, MathReviews, MathSciNet
{"url":"http://divisbyzero.com/2010/10/14/exceptional-mathreviews/","timestamp":"2014-04-18T18:11:28Z","content_type":null,"content_length":"61746","record_id":"<urn:uuid:96f4e9cf-e643-4c04-8c11-b73bece177a5>","cc-path":"CC-MAIN-2014-15/segments/1398223203422.8/warc/CC-MAIN-20140423032003-00050-ip-10-147-4-33.ec2.internal.warc.gz"}
Next Lesson: Transforming Functions My renewed enthusiasm for old childhood joys has come at an opportune time. In Tuesday's lesson, we will be working on transforming functions. I have a few transformers that have (incredibly) survived until now, which are currently gathering dust on my mantle. Hmm... do I bring in the opportunistic, irritating Starscream, the powerful, pea-brained Grimlock, or, everyone's favorite transforming boombox, Soundwave (though his head is broken off and I don't have any of his tape-minions). In any case, my idea for this lesson is to have the students work on a scaffolded exploration for most of the class. They will work on learning transformations, as well as absolute value (of linear) functions. They can work individually, in pairs, or in groups, but each person must turn in their own paper by the end, and it will be graded like a quiz. The exploration/quiz is broken into 4 parts: 1) Creating absolute value functions. In this part, students will graph a linear function. They will then graph the absolute value of the same line by looking at the y-coordinates of several points, and plotting their absolute values. They will have to then answer questions that help them see that y = |mx + b| will always make a V shape. 2) Translating absolute value functions. In this part, students will review horizontal and vertical shifts from the previous lesson, but applied to absolute value functions. 3) Transforming absolute value functions. In this part, students will plot out y = |x| and various transformations in the form y = a|x| to see what happens. By the end of this part, they should have a good sense for how the coefficient a affects the shape of the graph. 4) Synthesis In the final part, students will put together what they know about translations and transformations to create the graph of f(x) = 2|x + 4| - 3. Then, they will generate a table and see if their ordered pairs fall on the graph that was created via the translation/transformation process. After this, we will end the class with 15 - 20 minutes of direct instruction where I help formalize their understanding of transformations. I'm worried about running out of time for this. If I do, I can push it to the next class, and that will be ok, though it would be better in the same period. I hope that making the exploration into a quiz will help students focus and be more efficient with their time. The lesson will be posted on Whoops.. I forgot that the entire sophomore class is out on a field trip to the Monterey Bay Aquarium for biology. My classes today have had about 4 people each. I got some good one-on-one time with my handful of juniors, and I'll just have to push things back to Thursday. 2 comments: Grimlock pea-brained?? from Wikipedia: "One of his most distinguishing features is his famous speech impediment, which leads him to shorten sentences and refer to himself constantly as "Me," never "I" - the reason for this varies from depiction to depiction, with some making it the result of true mental limitations, and others a ruse Grimlock perpetrates to allow others to think of him as less intelligent than he actually is." In step with silly 80's cartoon life lessons, perhaps Grimlock was the smartest of them all! Dan Greene said... And here I was questioning the value of Wikipedia. I never knew a Dinobot could be so Machiavellian.
{"url":"http://exponentialcurve.blogspot.com/2006/10/next-lesson-transforming-functions.html","timestamp":"2014-04-20T23:32:08Z","content_type":null,"content_length":"75207","record_id":"<urn:uuid:a9c5edac-33ce-47dc-aacf-e410f2a36cc9>","cc-path":"CC-MAIN-2014-15/segments/1398223206647.11/warc/CC-MAIN-20140423032006-00016-ip-10-147-4-33.ec2.internal.warc.gz"}
Dead Spins And The Dirty Ground Yep, it’s that time again. Paper dance time! Making Classical Ground State Spin Computing Fault-Tolerant Isaac J. Crosson, Dave Bacon, Kenneth R. Brown We examine a model of classical deterministic computing in which the ground state of the classical system is a spatial history of the computation. This model is relevant to quantum dot cellular automata as well as to recent universal adiabatic quantum computing constructions. In its most primitive form, systems constructed in this model cannot compute in an error free manner when working at non-zero temperature. However, by exploiting a mapping between the partition function for this model and probabilistic classical circuits we are able to show that it is possible to make this model effectively error free. We achieve this by using techniques in fault-tolerant classical computing and the result is that the system can compute effectively error free if the temperature is below a critical temperature. We further link this model to computational complexity and show that a certain problem concerning finite temperature classical spin systems is complete for the complexity class Merlin-Arthur. This provides an interesting connection between the physical behavior of certain many-body spin systems and computational complexity. 1. #1 John Sidles June 25, 2010 Dave, that’s a really interesting preprint … full of good ideas IMHO. I to reach the topic that most interests me, I had to read all the way to the final sentence: “Another interesting question is the rate at which the ground state spin model thermalizes; a proof that the system reaches thermal equilibrium in a time polynomial in the size of the circuit being implement would be another step towards making this model more physically relevant.” Here several interlocking considerations arise. Let’s imagine that we investigate this question by a direct numerical computation, more specifically, by numerically computing the trajectory as an integral curve of a Hamiltonian flow. If we integrate dynamical trajectories on a tensor network state-space (for reasons of numerical efficiency, perhaps) then for rank 1 the dynamical flow is, unsurprisingly, the (classically symplectic) Hamiltonian flow of the Bloch equations. Similarly, in the large-dimension limit the state-space curvature is “dimensionally drowned” (specifically, for rank >= 2^{O(n_spin)}), and the resulting simplified dynamical flow is the (quantum-symplectic) Schroedinger equations on a linear Hilbert state-space; thus both the low-rank and high-rank state-space limits are geometrically simple. From a simulationist point of view, the geometric simplifications and numerical efficiencies attendant to this large-rank “dimensional drowning” are the pragmatic reasons that Hilbert space is such a popular venue for quantum dynamical calculations; in essence, Hilbert space is a (hugely convenient) large-rank approximation. Lastly, for intermediate rank we have a hybrid simulation that is *also* symplectic. Now we appreciate that because the dynamical flow is symplectic for all state-space ranks, by adjusting the rank we vary the dynamics to be “as quantum as we want”; this provides a well-posed (and numerically tractable) model for studying the classical-quantum transition. Of course, we still have to couple the dynamics to an external reservoir, and here many subtleties arise, both formally and numerically, in both the fully classical and fully quantum limits (and for every state-space rank in-between). 
Unsurprisingly, any obstruction to a physical system’s approach to thermal equilibrium, also obstructs the efficient numerical simulation of that system’s approach to equilibrium. When it comes to working through the details, I highly recommend two 1988 articles by Ford, Lewis and O’Connell, titled “Independent oscillator model of a heat bath: exact diagonalization of the Hamiltonian” and “Quantum Langevin equation”. AFAICT, it would be possible to adapt the Ford-Lewis-O’Connel thermalization models to intermediate-rank state-spaces in general, and to your ground-state computational dynamics in particular, but to the best of my knowledge, no-one has yet done so. In summary, your (outstanding IMHO) preprint establishes that both low-rank and high-rank symplectic ground-state dynamics exhibits a rich computational structure … which is to say, the Hamiltonian potentials on these manifolds exhibits a rich computational structure … and now so we have the natural challenge of describing the computational structure of the middle-rank state-spaces too. That’s *one* way to read your article, anyway … the reason it’s an outstanding article (IMHO), is that there are so *many* interesting ways to read it. 2. #2 saç ekimi fiyatlari June 26, 2010 Hi alla; “In summary, your (outstanding IMHO) preprint establishes that both low-rank and high-rank symplectic ground-state dynamics exhibits a rich computational structure … which is to say, the Hamiltonian potentials on these manifolds exhibits a rich computational structure … and now so we have the natural challenge of describing the computational structure of the middle-rank state-spaces too. That’s *one* way to read your article, anyway … the reason it’s an outstanding article (IMHO), is that there are so *many* interesting ways to read it.” mary lou… 3. #3 John Sidles June 30, 2010 I provided some historical context for the above discussion over on Ian Durham’s Quantum Moxie, in the context of an evolving narrative in which: (1) 19th century Parallel Postulate ⇒ 21st century “Quantum Linearity” (2) 19th century Riemannian dynamics ⇒ 21st century Kählerian dynamics (3) 19th century navigational science ⇒ 21st century quantum systems engineering (4) 19th century texts like Bowditch ⇒ … texts that haven’t been written yet! Even the scientific factions map cleanly: Gauss’ 19th century concerns regarding the “outcry of the Bœotians”, that is, 19th century mathematicians who embraced Euclidean geometric axioms, maps onto the early 21st century outcry of the “quantum Bœotians”, that is, those physicists who axiomatically embrace Euclidean quantum dynamics. 4. #4 Jonathan Vos Post June 30, 2010 What a great paper! Now I’ll have to quote you in the remaining 7 chapters of my Quantum Computing/First Contact novel (working title “Fermi’s Facebook) which, in the first 37 chapters, already quoted Prof. Scott Aaronson (with permission) on Merlin-Arthur complexity. The ETs use Anyons, and the DOD wants to break their error-detection/error detection, thermally if necessary (chemical laser onboard an Aurora). Oh, and I posted your abstract and arXiv link on my facebook page… 5. #5 Jonathan Vos Post June 30, 2010 From my Facebook page in the past couple of minutes: Nicholas Post Why did they let Jim Lee redesigned her? Alex Ross is better, even Adam Hughes! 
Jonathan Vos Post Nicholas: I assume that you mean Nelson Alexander “Alex” Ross (born 22 January 1970) : American comic book painter, illustrator, and plotter, acclaimed for the photorealism of his work; as opposed to Alex Ross (born 1968): American music critic. He has been on the staff of The New Yorker magazine since 1996 and published a critically acclaimed book on 20th-century classical music in 2007, The Rest Is Noise: Listening to the Twentieth Century. I suppose that Dave Bacon (double B.S. in Physics and English from Caltech) might someday write a book called “The Rest Is Quantum Noise: Listening to the Twentyfirst Century.” 6. #6 John Sidles June 30, 2010 JVP, I take it you wouldn’t agree with Roger Cooke’s “I think that the work of the mathematician is precisely to remove the mystery from the universe, even though a universe without mystery would be dull … Reducing the need for genius is (in my view) the central aim of mathematics.” Here Cooke is expressing a theme that has been prominent ever since Bowditch’s 1802 mariner’s text … and is well-exemplified in modern times by (for example) Petkovsek/Wilf/Zeilberger’s A=B … and it is a theme that no doubt will continue in our 21st century. So an interesting question is, which 21st century math-and-science disciplines are destined to prominently diminish in mystery, and thereby, reduce their need for genius? It is only mildly contrarian (IMHO) to foresee that quantum information science may perhaps be one of those disciplines … especially since articles like the Crosson/Bacon/Brown preprint can be read as significant advances in that direction. Of course, clearing away old mysteries always makes room for newer, deeper mysteries. Good! We engineers are eager for this cycle to progress as rapidly as feasible! As Steve Martin memorably says in The Jerk: “Waiter! Take away this old wine and bring us some new wine!” 7. #7 John Sidles July 1, 2010 For that subset of Quantum Pontiff readers who enjoy studying the early history of math and science, on Ian Durham’s Quantum Moxie blog I have posted some key bibliographic references relating to the central role that ideas of state-space structure played in the development of math, science, and engineering during the 19th century. This I take to be a key issue too for modern-day quantum information science, and for the Crosson/Bacon/Brown line of research in particular. As I have said before, it is not necessary that everyone regard quantum information science (QIS) in the same way—this would not even be desirable. The supplied references provide an (exceedingly optimistic) 19th century perspective on 21st century QIS—a perspective that no-one is obligated to share. The 19th century perspective on QIS does turn out to be mighty fun, though! 8. #8 Professor X July 4, 2010 Indeed, I did continue to continue the topic: I have heard Scott Aaronson’s comparison of a proof p = np to finding out God is either right or left handed. To me this seems a little silly. I openly welcome your advice and a rigorous reproof: Of the text posted here: http://scottaaronson.com/blog/?p=446#comment-43384 Professor X root@bt:~# help prove p = np true pushd: pushd [ dir | +N | -N ] [-n] Adds a directory to the top of the directory stack, or rotates the stack, making the new top of the stack the current working directory. With no arguments, exchanges the top two directories. +N Rotates the stack so the Nth directory (counting from the left of the list shown by ‘dirs’, starting with zero) is at the top. 
-N Rotates the stack so the Nth directory (counting from the right of the list shown by ‘dirs’, starting with zero) is at the top. -n suppress the normal change of directory when adding directories to the stack, so only the stack is manipulated. dir adds DIR to the directory stack at the top, making it the new current working directory. You can see the directory stack with the ‘dirs’ command. pwd: pwd [-LP] Print the current working directory. With the -P option, pwd prints the physical directory, without any symbolic links; the -L option makes pwd follow symbolic links. true: true Returns a successful result The portion below is from a student: Rodriguez-Ariz, Xochitl Period 4 The math behind Game Theory Game Theory is a wide assortment of topics and themes having to do with daily life, business or the mechanics behind games like poker and checkers. Besides the game there is a mathematical side, such as with Miserere play rules and P-position and n-position both stand for turn of players. P-position stands for previous player position and N-position with next player position. Both are not difficult to understand but become difficult to use and explain for certain games. Certain games like Nim Sum use difficult to understand rules. A certain rule goes hand in hand with this game is Fibonacci Nim. Similar to regular Nim the only change is the maximum can be taken is double the amount the first player took. For example for 1 token taken in the game at your turn, you can take either 1 or 2, however if your opponent decides to take 5, you in turn my take 1 to 10. The sequence of numbers can be taken is defined as F1=1, F2=2, and Fn +1= Fn + Fn-1 for n ≥. So it can then be written as 1,2,3,5,8,13,21,34,55 and so on and so on. Although there is a limited side of mathematics to Game Theory, it takes place as the foundation and explanation for many of the aspects of each game played. A, Xochitl please keep up the good work. Sincerely, M. M. M. h t t p : / / m e a m i . o r g 9. #9 John Sidles July 5, 2010 I just added to my BibTeX database the following quotation from Michael Neufeld’s Von Braun: Dreamer of Space, Engineer of War: “Most touching for von Braun was the enthusiasm of children and teenagers, who wrote in large numbers, many asking how they could get an education suitable for a space program.” It is sobering that in quantum information science, there is nowadays not much evidence of comparable enthusiasm, anywhere in the Blogo-sphere. The intellectual vigor and cheerful optimism of blogs like Quantum Pontiff, Shtetl Optimized, and Quantum Moxie serves a vital social purpose … and conversely, when blogs fall silent, it’s mighty discouraging to prospective young mathematicians, scientists, and engineers. 10. #10 Jonathan Vos Post July 5, 2010 There was much conversation about von Braun, with whom my father worked, and one of my brothers met, at Westercon 63. There were several panels on human space travel, and a space historian corrected me as we were Giving Panel. I’d said that von Braun was smuggled into Alabama via Operation Paper Clip, and he said (rightly): no, El Paso. Again, there was conversation with Karen Anderson (widow of Poul Anderson) and others in the Green Room about the A-9/A-10, 2-stage V-2 successor designed for transatlantic strikes against New York and Washington D.C., a universe in the multiverse which appears in a nearly completed novel manuscript of mine. But back to Quantum Computing: any thoughts on Wan-li Yang et al. 
One-step implementation of multi-qubit conditional phase gating with nitrogen-vacancy centers coupled to a high-Q silica microsphere cavity. Applied Physics Letters, 2010? Now a team of researchers from the Wuhan Institute of Physics and Mathematics, the Chinese Academy of Sciences and the Hefei National Laboratory for Physical Sciences at the Microscale at the University of Science and Technology of China has made a step toward a warmer solution. As reported in the journal Applied Physics Letters, published by the American Institute of Physics (AIP), the team is exploring the capabilities of diamond nitrogen vacancy (NV) materials. In this material, a “molecule” at the heart of an artificially created diamond film consists of a nitrogen atom (present as in impurity amid all those carbon atoms) and a nearby vacancy, a place in the crystal containing no atom at all. These diamond structures offer the possibility of carrying out data storage and quantum computing at room temperature. 11. #11 John Sidles July 7, 2010 JVP, we’ve updated our MRFM home page to provide links to some of the issues that you raise, under the aegis of a debate between quantum “skeptics” and quantum “Boeotians”.
{"url":"http://scienceblogs.com/pontiff/2010/06/23/dead-spins-and-the-dirty-groun/","timestamp":"2014-04-20T18:46:13Z","content_type":null,"content_length":"71064","record_id":"<urn:uuid:4456a6a6-019a-4c00-995b-55b965b41ca1>","cc-path":"CC-MAIN-2014-15/segments/1398223207046.13/warc/CC-MAIN-20140423032007-00126-ip-10-147-4-33.ec2.internal.warc.gz"}
Need help with exponential growth solutions!

September 6th 2010, 10:20 PM, #1:

hi guys, i could really use your help right now. i just started school at a university 2 weeks ago and i transferred out of one of my classes into a new class. the class im in now is environmental biology and i am confused on a few problems the teacher gave out. the math is basic exponential growth and input-output relations:

1.) (a) What is the doubling time for a population of rabbits growing at 5% annually? (b) What is the doubling time if the annual growth rate is 10%?

2.) Assume you are growing bacteria in a large petri dish. If you start with 10 g of bacteria growing exponentially at a rate of 10% per hour, then what is the quantity of bacteria at the end of 10 hours?

4.) If you have 10,000 fish and want to grow the stock to 100,000 fish, then how long will it take if the growth rate is 7% per year? How long if the growth rate is 2% per year?

5.) What is the input-output relation for the following examples (e.g., I > O, I < O, or I = O)? For those at steady state, calculate the average residence time, and for those that are shrinking, determine when the stock will be depleted.

1.) $1000/mo. -----> [Bank account $5000] -----> $1500/mo. ----->

i have class tomorrow morning (in 7 hours) and would love some help asap. thanks in advance!

can anyone solve these??

Yes, but that would defy the point. You can (must) use logarithms to solve question 1, as Pickslides shows.

2.) Assume you are growing bacteria in a large petri dish. If you start with 10 g of bacteria growing exponentially at a rate of 10% per hour, then what is the quantity of bacteria at the end of 10 hours?

The general formula for exponential growth is $A(t) = A_0(1+n)^{t}$, where
- $A(t)$ = amount at time t
- $A_0$ = amount at time 0
- n = growth rate
- t = time

In question 2 you have $A_0 = 10$, $n = 0.1$ and $t = 10$, and you want $A(10)$:
$A(10) = 10(1+0.1)^{10}$
Use your calculator to solve: about 25.9 to 3 s.f.
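For question 1, the logarithm step the reply points to works out as follows; this is just the algebra applied to the general formula above, and was not spelled out in the thread. Setting $A(t) = 2A_0$ gives $A_0(1+n)^t = 2A_0$, so $(1+n)^t = 2$ and therefore $t = \frac{\ln 2}{\ln(1+n)}$. At 5% annual growth this gives $t = \ln 2 / \ln 1.05 \approx 14.2$ years, and at 10% it gives $t = \ln 2 / \ln 1.10 \approx 7.3$ years.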
{"url":"http://mathhelpforum.com/algebra/155433-need-help-exponential-growth-solutions.html","timestamp":"2014-04-23T16:33:14Z","content_type":null,"content_length":"44454","record_id":"<urn:uuid:b631271a-41d4-4fc6-b1b1-b1017ce030be>","cc-path":"CC-MAIN-2014-15/segments/1398223202774.3/warc/CC-MAIN-20140423032002-00514-ip-10-147-4-33.ec2.internal.warc.gz"}
Conservation of energy

In physics, the law of conservation of energy states that the total amount of energy in an isolated system remains constant: energy cannot be created or destroyed, although it may change forms, e.g. friction turns kinetic energy into thermal energy (heat). In thermodynamics, the first law of thermodynamics is a statement of the conservation of energy for thermodynamic systems, and is the more encompassing version of the conservation of energy. In short, the law of conservation of energy states that energy can neither be created nor destroyed; it can only be changed from one form to another or transferred from one body to another, but the total amount of energy remains constant (the same).

Ancient philosophers as far back as Thales of Miletus had inklings of the conservation of some underlying substance of which everything is made. However, there is no particular reason to identify this with what we know today as "mass-energy" (for example, Thales thought it was water). In 1638, Galileo published his analysis of several situations, including the celebrated "interrupted pendulum", which can be described (in modern language) as conservatively converting potential energy to kinetic energy and back again. However, Galileo did not state the process in modern terms and again cannot be credited with the crucial insight. It was Gottfried Wilhelm Leibniz during 1676–1689 who first attempted a mathematical formulation of the kind of energy which is connected with motion (kinetic energy). Leibniz noticed that in many mechanical systems (of several masses $m_i$, each with velocity $v_i$) the quantity

$\sum_{i} m_i v_i^2$

was conserved so long as the masses did not interact. He called this quantity the vis viva or living force of the system. The principle represents an accurate statement of the approximate conservation of kinetic energy in situations where there is no friction. Many physicists at that time held that the conservation of momentum, which holds even in systems with friction, as defined by the momentum

$\sum_{i} m_i v_i$,

was the conserved vis viva. It was later shown that, under the proper conditions, both quantities are conserved simultaneously, such as in elastic collisions.

It was largely engineers such as John Smeaton, Peter Ewart, Karl Holtzmann, Gustave-Adolphe Hirn and Marc Seguin who objected that conservation of momentum alone was not adequate for practical calculation and who made use of Leibniz's principle. The principle was also championed by some chemists such as William Hyde Wollaston. Academics such as John Playfair were quick to point out that kinetic energy is clearly not conserved. This is obvious to a modern analysis based on the second law of thermodynamics, but in the 18th and 19th centuries the fate of the lost energy was still unknown. Gradually it came to be suspected that the heat inevitably generated by motion under friction was another form of vis viva. In 1783, Antoine Lavoisier and Pierre-Simon Laplace reviewed the two competing theories of vis viva and caloric theory. Count Rumford's 1798 observations of heat generation during the boring of cannons added more weight to the view that mechanical motion could be converted into heat, and (as importantly) that the conversion was quantitative and could be predicted (allowing for a universal conversion constant between kinetic energy and heat). Vis viva now started to be known as energy, after the term was first used in that sense by Thomas Young in 1807.
The recalibration of vis viva to

$\frac{1}{2}\sum_{i} m_i v_i^2$,

which can be understood as finding the exact value for the kinetic energy to work conversion constant, was largely the result of the work of Gaspard-Gustave Coriolis and Jean-Victor Poncelet over the period 1819–1839. The former called the quantity quantité de travail (quantity of work) and the latter, travail mécanique (mechanical work), and both championed its use in engineering calculation. In a paper Über die Natur der Wärme, published in the Zeitschrift für Physik in 1837, Karl Friedrich Mohr gave one of the earliest general statements of the doctrine of the conservation of energy in the words: "besides the 54 known chemical elements there is in the physical world one agent only, and this is called Kraft [energy or work]. It may appear, according to circumstances, as motion, chemical affinity, cohesion, electricity, light and magnetism; and from any one of these forms it can be transformed into any of the others."

A key stage in the development of the modern conservation principle was the demonstration of the mechanical equivalent of heat. The caloric theory maintained that heat could neither be created nor destroyed, but conservation of energy entails the contrary principle that heat and mechanical work are interchangeable. The mechanical equivalence principle was first stated in its modern form by the German surgeon Julius Robert von Mayer. Mayer reached his conclusion on a voyage to the Dutch East Indies, where he found that his patients' blood was a deeper red because they were consuming less oxygen, and therefore less energy, to maintain their body temperature in the hotter climate. He had discovered that heat and mechanical work were both forms of energy, and later, after improving his knowledge of physics, he calculated a quantitative relationship between them.

Meanwhile, in 1843 James Prescott Joule independently discovered the mechanical equivalent in a series of experiments. In the most famous, now called the "Joule apparatus", a descending weight attached to a string caused a paddle immersed in water to rotate. He showed that the gravitational potential energy lost by the weight in descending was equal to the thermal energy (heat) gained by the water by friction with the paddle. Over the period 1840–1843, similar work was carried out by engineer Ludwig A. Colding, though it was little known outside his native Denmark. Both Joule's and Mayer's work suffered from resistance and neglect, but it was Joule's that, perhaps unjustly, eventually drew the wider recognition. For the dispute between Joule and Mayer over priority, see Mechanical equivalent of heat: Priority.

In 1844, William Robert Grove postulated a relationship between mechanics, heat, light, electricity and magnetism by treating them all as manifestations of a single "force" (energy in modern terms). Grove published his theories in his book The Correlation of Physical Forces. In 1847, drawing on the earlier work of Joule, Sadi Carnot and Émile Clapeyron, Hermann von Helmholtz arrived at conclusions similar to Grove's and published his theories in his book Über die Erhaltung der Kraft (On the Conservation of Force, 1847). The general modern acceptance of the principle stems from this publication. In 1877, Peter Guthrie Tait claimed that the principle originated with Sir Isaac Newton, based on a creative reading of propositions 40 and 41 of the Philosophiae Naturalis Principia Mathematica.
This is now generally regarded as nothing more than an example of Whig history.

The first law of thermodynamics concerns the quantity of heat supplied to a system and the possibility of converting that heat into work. For a thermodynamic system with a fixed number of particles, the first law of thermodynamics may be stated as

$\delta Q = \mathrm{d}U + \delta W$, or equivalently, $\mathrm{d}U = \delta Q - \delta W$,

where $\delta Q$ is the amount of energy added to the system by a heating process, $\delta W$ is the amount of energy lost by the system due to work done by the system on its surroundings, and $\mathrm{d}U$ is the increase in the internal energy of the system. The δ's before the heat and work terms are used to indicate that they describe an increment of energy which is to be interpreted somewhat differently from the $\mathrm{d}U$ increment of internal energy. Work and heat are processes which add or subtract energy, while the internal energy $U$ is a particular form of energy associated with the system. Thus the term "heat energy" for $\delta Q$ means "that amount of energy added as the result of heating" rather than referring to a particular form of energy. Likewise, the term "work energy" for $\delta W$ means "that amount of energy lost as the result of work". The most significant result of this distinction is the fact that one can clearly state the amount of internal energy possessed by a thermodynamic system, but one cannot tell how much energy has flowed into or out of the system as a result of its being heated or cooled, nor as the result of work being performed on or by the system. In simple terms, this means that energy cannot be created or destroyed, only converted from one form to another.

For a simple compressible system, the work performed by the system may be written

$\delta W = P\,\mathrm{d}V$,

where $P$ is the pressure and $\mathrm{d}V$ is a small change in the volume of the system, each of which are system variables. The heat energy may be written

$\delta Q = T\,\mathrm{d}S$,

where $T$ is the temperature and $\mathrm{d}S$ is a small change in the entropy of the system. Temperature and entropy are also system variables.

In mechanics, conservation of energy is usually stated as

$E = T + V = \text{const}$,

where $T$ is the kinetic and $V$ the potential energy. Actually this is the particular case of the more general conservation law

$\sum_{i=1}^{N} p_i \dot{q}_i - L = \text{const}$, where $p_i = \frac{\partial L}{\partial \dot{q}_i}$

and $L$ is the Lagrangian function. For this particular form to be valid, the following must be true:

• The system is scleronomous (neither kinetic nor potential energy are explicit functions of time).
• The kinetic energy is a quadratic form with regard to velocities.
• The potential energy doesn't depend on velocities.

Noether's theorem

The conservation of energy is a common feature in many physical theories. It is understood as a consequence of Noether's theorem, which states that every symmetry of a physical theory has an associated conserved quantity; if the theory's symmetry is time invariance, then the conserved quantity is called "energy". In other words, if the theory is invariant under the continuous symmetry of time translation, then its energy (which is the canonical conjugate quantity to time) is conserved. Conversely, theories which are not invariant under shifts in time (for example, systems with time-dependent potential energy) do not exhibit conservation of energy -- unless we consider them to exchange energy with another, external system so that the theory of the enlarged system becomes time invariant again. Since any time-varying theory can be embedded within a time-invariant meta-theory, energy conservation can always be recovered by a suitable re-definition of what energy is. Thus conservation of energy for finite systems is valid in all modern physical theories, such as special and general relativity and quantum theory (including QED).

With the invention of special relativity by Albert Einstein, energy was proposed to be one component of an energy-momentum 4-vector. Each of the four components (one of energy and three of momentum) of this vector is separately conserved in any given inertial reference frame. Also conserved is the vector length (Minkowski norm), which is the rest mass. The relativistic energy of a single particle contains a term related to its rest mass in addition to its kinetic energy of motion. In the limit of zero kinetic energy (or equivalently in the rest frame of the massive particle, or the center-of-momentum frame for objects or systems), the total energy of a particle or object (including internal kinetic energy in systems) is related to its rest mass via the famous equation $E = mc^2$. Thus, the rule of conservation of energy in special relativity was shown to be a special case of a more general rule, alternatively called the conservation of mass and energy, the conservation of mass-energy, the conservation of energy-momentum, or the conservation of invariant mass, and now usually just referred to as conservation of energy. In general relativity, conservation of energy-momentum is expressed with the aid of a stress-energy-momentum pseudotensor.

Quantum theory

In quantum mechanics, energy is defined as proportional to the time derivative of the wave function. Lack of commutation of the time derivative operator with the time operator itself mathematically results in an uncertainty principle for time and energy: the longer the period of time, the more precisely energy can be defined (energy and time become a conjugate Fourier pair). However, there is a deep contradiction between quantum theory's historical estimate of the vacuum energy density in the universe and the vacuum energy predicted by the cosmological constant. The estimated energy density difference is of the order of 10^120 times. The consensus is developing that the quantum mechanically derived zero-point field energy density does not conserve the total energy of the universe, and does not comply with our understanding of the expansion of the universe. Intense effort is going on behind the scenes in physics to resolve this dilemma and to bring it into compliance with an expanding universe.

Mathematical viewpoint

From a mathematical point of view, the energy conservation law is a consequence of the shift symmetry of time; energy conservation is implied by the empirical fact that the laws of physics do not change with time itself (see: Noether's theorem). Philosophically this can be stated as "nothing depends on time per se".

Modern accounts

• Goldstein, Martin, and Inge F., 1993. The Refrigerator and the Universe. Harvard Univ. Press. A gentle introduction.
• Kroemer, Herbert; Kittel, Charles (1980). Thermal Physics (2nd ed.). W. H. Freeman Company. ISBN 0-7167-1088-9.
• Nolan, Peter J. (1996). Fundamentals of College Physics, 2nd ed.
William C. Brown Publishers. • Oxtoby & Nachtrieb (1996). Principles of Modern Chemistry,'' 3rd ed.. Saunders College Publishing. • Papineau, D. (2002). Thinking about Consciousness. Oxford: Oxford University Press. • Serway, Raymond A.; Jewett, John W. (2004). Physics for Scientists and Engineers (6th ed.). Brooks/Cole. ISBN 0-534-40842-7. • Stenger, Victor J. (2000). Timeless Reality. Prometheus Books. Especially chpt. 12. Nontechnical. • Tipler, Paul (2004). Physics for Scientists and Engineers: Mechanics, Oscillations and Waves, Thermodynamics (5th ed.). W. H. Freeman. ISBN 0-7167-0809-4. • Lanczos, Cornelius (1970). The Variational Principles of Mechanics. Toronto: University of Toronto Press. ISBN 0-8020-1743-6. History of ideas • Brown, T.M. (1965). "Resource letter EEC-1 on the evolution of energy concepts from Galileo to Helmholtz". American Journal of Physics 33 759–765. • Cardwell, D.S.L. (1971). From Watt to Clausius: The Rise of Thermodynamics in the Early Industrial Age. London: Heinemann. ISBN 0-435-54150-1. • Guillen, M. (1999). Five Equations That Changed the World. ISBN 0-349-11064-6. • Hiebert, E.N. (1981). Historical Roots of the Principle of Conservation of Energy. Madison, Wis.: Ayer Co Pub. ISBN 0-405-13880-6. • Kuhn, T.S. (1957) “Energy conservation as an example of simultaneous discovery”, in M. Clagett (ed.) Critical Problems in the History of Science pp.321–56 • Sarton, G. (1929). "The discovery of the law of conservation of energy". Isis 13 18–49. • Smith, C. (1998). The Science of Energy: Cultural History of Energy Physics in Victorian Britain. London: Heinemann. ISBN 0-485-11431-3. Classic accounts • Colding, L.A. (1864). "On the history of the principle of the conservation of energy". London, Edinburgh and Dublin Philosophical Magazine and Journal of Science 27 56–64. • Mach, E. (1872). History and Root of the Principles of the Conservation of Energy. Open Court Pub. Co., IL. • Poincaré, H. (1905). Science and Hypothesis. Walter Scott Publishing Co. Ltd; Dover reprint, 1952. ISBN 0-486-60221-4., Chapter 8, "Energy and Thermo-dynamics" External links
{"url":"http://www.reference.com/browse/conservation+of+electricity","timestamp":"2014-04-17T04:24:55Z","content_type":null,"content_length":"101552","record_id":"<urn:uuid:75b73be0-ed40-4c1d-be66-516ded809cb5>","cc-path":"CC-MAIN-2014-15/segments/1397609526252.40/warc/CC-MAIN-20140416005206-00279-ip-10-147-4-33.ec2.internal.warc.gz"}
FRB: Data: H.15 Selected Interest Rates (US)
Series H15: rate paid by the fixed-rate payer on an interest rate swap with a maturity of ten years.
Value: 2.80 (observation date 2014-04-17; published 2014-04-18T16:15:00-05:00).
Source: Board of Governors of the Federal Reserve, http://www.federalreserve.gov/releases/H15/about.htm
{"url":"http://www.federalreserve.gov/feeds/Data/H15_H15_RIFLDIY10_N.B.XML","timestamp":"2014-04-21T02:17:44Z","content_type":null,"content_length":"2367","record_id":"<urn:uuid:a9b294cb-bed7-4ea4-84f2-75a45e8febb2>","cc-path":"CC-MAIN-2014-15/segments/1397609539447.23/warc/CC-MAIN-20140416005219-00089-ip-10-147-4-33.ec2.internal.warc.gz"}
Intro to Abstract Math

Question about division of integers.

It's false, because when you say "if and only if" it is the same thing as:

If-And-Only-If Proofs
Often, a statement we need to prove is of the form "X if and only if Y." We are then required to do two things:
1. Prove the if-part: Assume Y and prove X.
2. Prove the only-if-part: Assume X, prove Y.
taken from

Did 1. But, number 2 is: assume n is divisible by b and n is divisible by a, and show n is divisible by ab.

Choose n = 12, a = 4, b = 6. Then n is divisible by a and n is divisible by b, but n is not divisible by ab = 24, so it's false.

thx norwegian i see what you mean
{"url":"http://www.physicsforums.com/showthread.php?p=3762641","timestamp":"2014-04-19T22:49:16Z","content_type":null,"content_length":"31603","record_id":"<urn:uuid:3036a56e-06f4-417e-aa59-572496f2aea0>","cc-path":"CC-MAIN-2014-15/segments/1397609537754.12/warc/CC-MAIN-20140416005217-00107-ip-10-147-4-33.ec2.internal.warc.gz"}
When is a Pseudo-differential operator trace class or in Dixmier ideal?

Let's denote the set of all pseudo-differential operators with symbol of "order" $d$ by $\Psi_d(M)$, and the Sobolev space on $M$ by $H_s(M)$. It is known that if $P\in\Psi_d(M)$, then $P$ extends to a continuous map $P:H_{s}(M)\to H_{s-d}(M)$ for all $s$. Moreover, since the natural inclusion $H_s\to H_t$ for $s>t$ is compact, $P:H_{s}(M)\to H_{t}(M)$ is a compact operator if $t< s-d$. See for example Lemma 1.3.4 of Gilkey's book Invariance Theory: The Heat Equation and the Atiyah-Singer Index Theorem.

In the special case $s=0$, $L^2(M)=H_0(M)$, so $P:L^2(M)\to L^2(M)$ is continuous if $d\leq 0$, and it is compact if $d<0$.

Now my question is: when is $P:L^2(M)\to L^2(M)$ trace class? And when is it in the Dixmier ideal $\mathcal{L}^{1,\infty}(L^2(M))$, or in general $\mathcal{L}^{(p,q)}(L^2(M))$?

Comments:
It is trace class when it is an operator of order $-k$, $k>\dim M$. For a proof see Section 4.3 of these notes www3.nd.edu/~lnicolae/Pseudo.pdf – Liviu Nicolaescu Jan 26 '13 at 14:43
Moreover, by the Connes trace formula (alainconnes.org/docs/action88.pdf), if your operator is of order $-k$ for $k = \dim M$, then it is in the Dixmier ideal; indeed, it is measurable (in the sense of the theory of Dixmier traces), and the (unique value of the) Dixmier trace is given by the Wodzicki residue of your operator. – Branimir Ćaćić Jan 28 '13 at 10:30
Thanks for the useful references. However, I am still wondering if there is a bound $l$ such that a pseudo-differential operator of order $d$ is in the Dixmier ideal (not necessarily measurable) when $d<l$. Of course $l$ should be in $[-k,0)$, where $k=\dim M$. – Asghar Ghorbanpour Jan 28 '13 at 17:24

2 Answers:
An operator is trace class whenever it is the product of Hilbert-Schmidt operators. There is a simple characterization of Hilbert-Schmidt pseudodifferential operators: a pseudodifferential operator with symbol $a$ is HS if and only if $a$ belongs to $L^2$ of the cotangent bundle (this is equivalent also to the fact that the kernel of the operator is $L^2$).

For a comprehensive account of what you are looking for, see the book by Simon Scott, "Traces and determinants of pseudodifferential operators".
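Collecting the statements above in one place, for a classical pseudodifferential operator $P \in \Psi_d(M)$ on a closed manifold $M$ of dimension $k$ (this is only a recap combining the question's Sobolev facts, the comments, and the symbol-class criterion from the first answer, under the usual conventions):

$P$ is bounded on $L^2(M)$ if $d \le 0$, and compact if $d < 0$;
$P$ is Hilbert-Schmidt if $d < -k/2$ (its symbol is then square-integrable over the cotangent bundle);
$P$ is trace class if $d < -k$;
for $d = -k$, $P$ lies in the Dixmier ideal $\mathcal{L}^{1,\infty}$, and its Dixmier trace is proportional to its Wodzicki residue (Connes' trace theorem).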
{"url":"http://mathoverflow.net/questions/119898/when-is-a-pseudo-differential-operator-trace-class-or-in-dixmier-ideal/120013","timestamp":"2014-04-17T01:32:35Z","content_type":null,"content_length":"59067","record_id":"<urn:uuid:4ff9228f-5a88-425c-a8e8-68e366bac250>","cc-path":"CC-MAIN-2014-15/segments/1397609526102.3/warc/CC-MAIN-20140416005206-00625-ip-10-147-4-33.ec2.internal.warc.gz"}
FOM: The Axiom of Choice -- positive and negative versions
Joe Shipman shipman at savera.com
Tue Sep 29 09:23:03 EDT 1998

> My feeling is that, on the contrary and perhaps somewhat paradoxically, the full axiom of choice (= AC) is *more* plausible than its restriction to sets of reals. First, the naive idea of choosing elements from nonempty sets seems completely general and intuitive, almost a "law of thought"; to restrict it to sets of reals strikes me as an artificial or arbitrary restriction. Second, full AC is equivalent to Zorn's lemma, which is traditionally regarded as a nice principle with many nice consequences, while AC for sets of reals leads to many pathological counterexamples, e.g. a non-measurable set, or a set of reals X such that neither X nor R \ X contains a perfect set.

I agree that AC is more natural and intuitive in its general form (even more so when you express it in the equivalent form "a product of nonempty sets is nonempty"). Any "law of thought" ought to be stated in terms of arbitrary collections rather than mathematical ones. But you raise an interesting issue regarding "positive" and "negative" versions of AC. The provable equivalence of Zorn's Lemma or similar versions (which are used to prove the "good" general theorems in ordinary mathematics) with the Well-Ordering Theorem (which is used to get the "bad" or "pathological" counterexamples) seems so straightforward to me that I don't see the existence of a nonmeasurable set, or of a set neither containing nor disjoint from a perfect set, as counterintuitive. It's not clear why the reals "ought" to have a countably additive translation-invariant measure. There does exist a finitely additive translation-invariant measure (which you prove using one of the "good" versions of AC), and this is as far as my intuition goes, since the identification of the mathematical reals with the intuitive continuum is not at all perfect, and it is not clear the quasi-physical intuitive continuum allows for the kind of infinite subdivision relevant to countable additivity.

What I *do* find somewhat surprising and counterintuitive is the Banach-Tarski theorem, which applies in dimension >= 3 (there is no *finitely* additive rigid-motion-invariant measure) -- but rotational invariance is a lot more complicated than translational invariance because there is a free nonabelian group of rotations. (I strongly recommend Stan Wagon's book "The Banach-Tarski Paradox".)

In the same way I find the "strong Fubini theorems" of the form "any iterated integrals which exist are equal" intuitively appealing, because the only kind of invariance involved is under coordinate permutations, not under arbitrary rotations, and this comes from axioms of symmetry of the type discussed by Freiling (JSL 1986) and Riis (FOM 1998). In my thesis (T.A.M.S. 10/90) I show that the strong Fubini theorems follow from the existence of a countably additive measure defined on all subsets of R. We already know such a measure can't be translation-invariant, but that's not implausible -- we should expect a finite subdivision to be compatible with translations, but the intuition is not clear for infinite subdivisions.

-- Joe Shipman
{"url":"http://www.cs.nyu.edu/pipermail/fom/1998-September/002228.html","timestamp":"2014-04-21T13:26:35Z","content_type":null,"content_length":"5610","record_id":"<urn:uuid:3c3a4741-14c9-4c70-9fdf-1b5c7a372baf>","cc-path":"CC-MAIN-2014-15/segments/1397609539776.45/warc/CC-MAIN-20140416005219-00465-ip-10-147-4-33.ec2.internal.warc.gz"}
Optimization - Three Pens

A farmer has exactly 1200 feet of fencing and needs to create a rectangular enclosure divided into three pens, as shown in the figure (an interactive GeoGebra applet on the original page). What should the dimensions of the enclosure be to create the maximum area? As you move the blue dot in the applet, the total length of fencing, 1200 feet, remains constant.
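A worked solution, added editorially under an assumed configuration (the figure itself is not reproduced here): suppose the fencing forms the outer rectangle plus two interior dividers parallel to the width, so the 1200 ft consists of two lengths $L$ and four widths $W$. Then
$$2L + 4W = 1200 \;\Rightarrow\; L = 600 - 2W,$$
$$A(W) = LW = (600-2W)W = 600W - 2W^2,$$
$$A'(W) = 600 - 4W = 0 \;\Rightarrow\; W = 150 \text{ ft},\quad L = 300 \text{ ft},\quad A_{\max} = 45{,}000 \text{ ft}^2.$$
If the applet's configuration differs (say, dividers parallel to the length), the same method applies with different coefficients.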
{"url":"http://webspace.ship.edu/msrenault/GeoGebraCalculus/derivative_app_opt_pens.html","timestamp":"2014-04-16T19:09:44Z","content_type":null,"content_length":"4757","record_id":"<urn:uuid:75842148-a215-42d7-8833-185fa648711a>","cc-path":"CC-MAIN-2014-15/segments/1397609524644.38/warc/CC-MAIN-20140416005204-00098-ip-10-147-4-33.ec2.internal.warc.gz"}
Seemingly simple probability question...

February 23rd 2009, 10:58 AM #1 (Junior Member, Jan 2009):
This looks to be pretty straightforward, but I am not getting the right answer... 300 chickens are in a barn. 63 are brown roosters and 77 are black roosters. 64 are brown hens and 96 are black hens. So, there are a total of 140 roosters and 160 hens; also there are 127 brown chickens and 173 black chickens. If one is chosen at random, what is the probability that it is a brown hen? When I worked this through, I took (64/160)(160/300) and came up with 0.213333333333... Am I on the right track here?

February 23rd 2009, 11:05 AM #2:
There are 64 brown hens out of a total of 300 chickens, so the probability is 64/300. Why complicate things by writing (64/160)(160/300)? (That gives the correct result, but it suggests the reasoning was not the correct one.)
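An editorial aside, not part of the thread: the original poster's product can also be read as a valid conditional-probability decomposition, since
$$P(\text{brown and hen}) = P(\text{hen}) \cdot P(\text{brown}\mid\text{hen}) = \frac{160}{300}\cdot\frac{64}{160} = \frac{64}{300} \approx 0.213,$$
which agrees with counting the 64 brown hens directly among the 300 chickens.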
{"url":"http://mathhelpforum.com/statistics/75332-seemingly-simple-probability-question.html","timestamp":"2014-04-18T11:52:21Z","content_type":null,"content_length":"34471","record_id":"<urn:uuid:0ecd814f-859d-4b89-8a46-92a7e97123cd>","cc-path":"CC-MAIN-2014-15/segments/1397609533308.11/warc/CC-MAIN-20140416005213-00136-ip-10-147-4-33.ec2.internal.warc.gz"}
Lowry, CO Precalculus Tutor
Find a Lowry, CO Precalculus Tutor

...Probably not - you know what you want to say, and you construct the sentence or the phrase you need in order to say it. Once you get comfortable with math, it's just like talking ... you just "say" what you need to say, and that's it! After you learn it, math is fun. 18 Subjects: including precalculus, calculus, geometry, statistics

...I have tutored several students in calculus. I am a native French speaker who likes French literature. I have been teaching/tutoring geometry for at least a decade. 15 Subjects: including precalculus, French, calculus, algebra 1

...I really enjoy getting to connect with students on interests and find they really help to make my lessons engaging and further solidify learning. I look forward to hearing from you, Dan. I currently hold a valid Initial 1 teaching license in Oregon for secondary special education, and an M.S. in s... 24 Subjects: including precalculus, chemistry, special needs, GRE

...I tend to approach math with a real-world perspective, interpreting problems and equations in terms of how they communicate an observable principle. I'm 19 years old, and have had experience tutoring a large range of ages, from high school students to those returning to school after many years. I'm very easy-going, and well versed in different methods of solving problems. 17 Subjects: including precalculus, reading, English, French

...I'm patient, friendly, and easy-going while being goal-oriented, practical, and encouraging. My mission is to provide my clients with the best tools possible to solve their own problems and succeed on their own. I graduated in May of 2013 with a degree in Physics and a minor in Mathematics. 13 Subjects: including precalculus, reading, physics, calculus
{"url":"http://www.purplemath.com/Lowry_CO_precalculus_tutors.php","timestamp":"2014-04-20T10:54:03Z","content_type":null,"content_length":"24039","record_id":"<urn:uuid:4c6283d8-1520-4b18-931b-7cf191f189bf>","cc-path":"CC-MAIN-2014-15/segments/1398223211700.16/warc/CC-MAIN-20140423032011-00336-ip-10-147-4-33.ec2.internal.warc.gz"}
What is CLASS IV IMO QUESTION PAPER?
{"url":"http://mrwhatis.net/class-iv-imo-question-paper.html","timestamp":"2014-04-16T08:34:24Z","content_type":null,"content_length":"39247","record_id":"<urn:uuid:d2a1db60-c6c6-4fda-8615-a55642779863>","cc-path":"CC-MAIN-2014-15/segments/1398223204388.12/warc/CC-MAIN-20140423032004-00307-ip-10-147-4-33.ec2.internal.warc.gz"}
Ron's ShoutBoot

Loops that transform sequences of data

Let's start with a neat trick I observed in Fay:

for :: [a] -> (a -> b) -> [b]
for = flip map

or, for the curious:

for :: Functor f => f a -> (a -> b) -> f b
for = flip fmap

Now our good old C++/Java transformation loop

vector<int> out;
for (int x : input) {
    out.push_back(x + 2);
}

can be written in Haskell as

out = for input (\x -> x + 2)

or for input (\x -> x + 2), or even for input (+2). Well, not rocket science, but a bit easier for the imperative eye to read. Using map is still nice when composing functions.

Loops that accumulate data

The old code

int sum = 0;
for (int x : inputs) {
    sum += x;
}

becomes

sum = foldl (+) 0 inputs

Just take care about foldl/foldr laziness, or use a stricter version.

Loops that perform IO

Our old friend

for (int x : inputs) {
    printf("%d", x);
}

becomes

import Control.Monad (forM_)
forM_ inputs print

Link Review – late March 2013
March 24th, 2013 | No comments

Why this series? When I read something interesting, I mark it with a star, bookmark it, or occasionally retweet it. But then it fades into the void. This series will record my impressions.

Life hacking

Trello is great free software for creating online task boards. Its applications range from simple TODO lists to complicated process-management workflows; however, it doesn't tie the user's hands. The user is responsible for creating (and occasionally bypassing) the rules. See some interesting Trello usage for project management.

A compact yet to-the-point summary from Swiss authorities on the electromagnetic fields generated by train lines, with nice figures depicting EM field intensity as a function of distance. Personal suggestion: live at least 50 to 100 m away from high-voltage or train lines.

Lockhart's Lament is a deeply inspiring essay about teaching maths to kids. Don't be surprised: the linked resource is just a foreword to the whole thing in PDF.

The Hierarchy of Awareness is a nice recollection of basic design practices (mostly AI) that can make computer opponents feel more human-like. I think these practices, if applied, also make human opponents feel more human-like.

Nicolas Cannasse (creator of the Haxe language) won a recent Ludum Dare with his entry Evoland, which was fun to play indeed! It reminds us that indie games should be fun and creative rather than large-scale. Nevertheless, he left Motion-Twin and is up to new business at Shiro Games, which I guess is worth watching.

Andy Moore writes a shocking post, Iteration and Game Design in Mexico. Shocking, since the games he talks about do not require a computer. That is hard to imagine, isn't it? The post captures a naive, childish (not in the derogatory, but in the pure sense) feeling of having fun in whatever activity. Something that is easy to forget as an adult.

Gabriel Gonzalez writes about monad morphisms in his latest blog post. If you have not yet, read his series from the beginning. He combines understanding of theory and practice in an amazing and entertaining way.

Remember formal languages and model verification from university? Now it's playtime again! Model Checking Kit provides a load of fun and powerful tools for verifying your models.
Read more… Categories: Uncategorized battle-game, Coding Corner, functional-programming, game, haskell Battle Game In Haskell #1 – Intro and basics October 26th, 2012 No comments Hi! This is the first in a series of posts in which I gradually refine Haskell code to get a working game. The goal of the series is twofold: First, to motivate myself to put into practice the concepts I read about so far, and second, to serve as a practical example of Haskell. I don’t plan to explain the various concepts (since others have done already in countless wiki pages and blog posts), but might give pointers on them. If you are not familiar with them, I expect you to google and do the homework. The game itself is about army battles. Pretty open scope, let’s not fix much upfront and let the game logic unfold as we write it. Categories: Uncategorized battle-game, Coding Corner, functional-programming, game, haskell Composing functions – functional programming part 1 August 3rd, 2012 No comments I try to address a relative lack of entry-level material on composing functions targeted for FP newcomers. While the usage and benefit of data types like Option or List are quickly grasped, there is less focus on how to create and compose functions operating in the context of these types. While the following are general for FP, frameworks in focus are functionaljava for Java, and Scalaz for Lifting functions to operate in a context This article is pretty in medias res, holes would be filled in in later parts, but don’t be afraid to comment or [DEL:google:DEL] duckduckgo if encountering anything unexplained. A function from a value of type A to a value of type B can be denoted A => B, or in case of functionaljava F[A, B] (please forgive using Scala’s square brackets instead the Java standard less/greater-than sign for type parameters, but it less pain when writing HTML code). Now assume we have a function f1: A => B, and also a value v1: Option[A]. It is evident that since Option is a Functor, we can map f1 on v1 to get a new value v2 of type Option[B]. The code is something like v2: Option[B] = v1.map(f1). But we hear all the time that FP makes function composition easy. Now if we have a function f0: X => Option[A], can we compose it with our f1 to get a function fc: X => Option[B]? Since the types of the output of f0 and the input of f1 don’t match, a newcomer might begin writing an (anonymous) function taking an optional value and applying map on it with the function (I actually saw this). Instead we can just lift any function to operate in a given Functor context. How? Since Java doesn’t support typeclasses, there are different methods on the F[A, B] class to perform the lifting. To get a function Option[A] => Option[B] from our f1: A => B, we would call f1.mapOption(). There are similiarly named methods for various functors, like mapList, mapSuccess or mapFail (the latter two lifting the function to operate on the success or failure side of a Validation, respectively). So to compose, we would just write fc = f0.andThen(f1.mapOption()) instead of the boilerplate with extra wrapper functions. With Scalaz 6, we can use the polymorphic f1.lift[Option] method for lifting, and for any Functor, not just Option. Lifting the return type of a function We might get into a situation where we have a function A => B but would need A => F[B]. If F is Pure (or Pointed, the latter is the new name of Pure in Scalaz 7), then this should be possible (note that if F is a Monad then it is Pure by definintion). 
Just “should” since again, Java has no typeclass support, so we have to rely on the specific method being available on F[A, B], for example optionK for promoting to F[A, Option[B]]. Just look for methods ending with K, which refers to the Kleisli operation. Indeed, in Scalaz 6 we can use f.kleisli[Option] instead. Lifting a value While we are here it is worth mentioning that if F is Pure then we should have a way of producing F[A] from a value of type A. For functionaljava there are various static methods on the data structures, like Option.some, List.list or Validation.success. For Scalaz 6 we can use the a.pure[F] way. Parting words Instead of writing a custom wrapper lambda for a specific function, rather lift that function into the context using the appropriate method if there is one. If isn’t (in case of functionaljava) then at least write the lifting method as a general static helper method, or better consider adding it to fj and sending a pull request. In Scala a lambda is more lightweight syntactically than in Java, so in Scala wrapping using a lambda is more common I assume (no hard evidence), but following or at least knowing the FP way can ease our life if eventually refactoring the function to be polymorhic in the Functor type is needed. Have fun coding! Categories: Uncategorized Coding Corner, composition, fj, functional-programming, java, scala, scalaz
{"url":"http://ron.shoutboot.com/","timestamp":"2014-04-19T06:54:39Z","content_type":null,"content_length":"74507","record_id":"<urn:uuid:c766987e-53f0-457c-9c39-f50ebb9094ac>","cc-path":"CC-MAIN-2014-15/segments/1397609536300.49/warc/CC-MAIN-20140416005216-00231-ip-10-147-4-33.ec2.internal.warc.gz"}
Math Forum Discussions
{"url":"http://mathforum.org/kb/thread.jspa?threadID=2394951","timestamp":"2014-04-17T04:08:14Z","content_type":null,"content_length":"14936","record_id":"<urn:uuid:cffc02aa-6d56-4ab4-a75a-3729f95d870d>","cc-path":"CC-MAIN-2014-15/segments/1397609526252.40/warc/CC-MAIN-20140416005206-00129-ip-10-147-4-33.ec2.internal.warc.gz"}
Comparing Regular Expressions

Robert Inder <(E-Mail Removed)> writes:
> I'm interested in comparing the "coverage" of regular expressions.
> In particular, if I have two regular expressions, I want to know
> whether it is possible to construct a string that will match
> both, or indeed that will match one and not the other.

Regular languages are closed under intersection and set difference, so you can not only find strings (if any exist) in the intersection and difference sets, you can construct regular expressions or DFA's that describe all such strings.

> So, for instance, suppose I believe that if an object has a
> description string matching regexp R1, it is of type A.
> Someone now tells me that an object with a
> description string matching regexp R2 is of type B. What can I tell
> about (the sets of things of type) A and B by looking at R1 and R2?
> So if R1= "^a.*$" and R2 = "^b.*$", I know that A and B are disjoint.
> Whereas if R1= "foo" and R2 = "foobar", I know that B is a subset of A.

Say what? The set containing only the string "foobar" is not a subset of the set containing only "foo". Or do you consider a string to match a regular expression if a prefix of the string does?

> But how do I formalise (i.e. program) this comparison process?
> This strikes me as a very general problem/issue and there
> just has to be some work on formalising and solving it. But I don't
> know how to find it: Google searches for things like "regular
> expression comparison" are swamped by algorithms for matching regexps
> against strings and so forth, which isn't what I want.

If you have a DFA D1 with states s_0 ... s_m for A and a DFA D2 with states t_0 ... t_n for B, you can construct a DFA D3 for the intersection of A and B in the following way:

1. The start state of D3 is the pair (s_0,t_0) of starting states of D1 and D2.
2. If D1 has a transition on symbol c from s_i to s_j and D2 has a transition on symbol c from t_k to t_l, D3 has a transition from (s_i,t_k) to (s_j,t_l) on c.
3. If s_i is accepting in D1 and t_k is accepting in D2, (s_i,t_k) is accepting in D3.

For A\B (A minus B), the construction is:

0. Add a non-accepting state t_(n+1) to D2. If there is a state t_k in D2 and a symbol c such that t_k has no transition on c, add a transition from t_k to t_(n+1) on c. There is a transition from t_(n+1) to t_(n+1) on all symbols. This way, D2 has transitions on all symbols from all states.
1. The start state of D3 is the pair (s_0,t_0) of starting states of D1 and D2.
2. If D1 has a transition on symbol c from s_i to s_j and D2 has a transition on symbol c from t_k to t_l, D3 has a transition from (s_i,t_k) to (s_j,t_l) on c.
3. If s_i is accepting in D1 and t_k is not accepting in D2, (s_i,t_k) is accepting in D3.
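A compact sketch of the product construction just described, added editorially in Haskell (the record type, the Char alphabet, and the use of Maybe are illustrative choices, not from the thread). Missing transitions are modelled with Nothing, which plays the role of the added reject state t_(n+1):

data DFA s = DFA
  { step   :: s -> Char -> Maybe s   -- Nothing means no transition (implicit reject sink)
  , start  :: s
  , accept :: s -> Bool
  }

-- Product construction: run both automata in lockstep over pairs of states.
-- The predicate decides acceptance of a pair, so the same code yields
-- intersection and difference.
productDFA :: (Bool -> Bool -> Bool) -> DFA s -> DFA t -> DFA (Maybe s, Maybe t)
productDFA combine d1 d2 = DFA
  { step   = \(ms, mt) c -> Just (ms >>= \s -> step d1 s c, mt >>= \t -> step d2 t c)
  , start  = (Just (start d1), Just (start d2))
  , accept = \(ms, mt) -> combine (maybe False (accept d1) ms) (maybe False (accept d2) mt)
  }

intersectionDFA :: DFA s -> DFA t -> DFA (Maybe s, Maybe t)
intersectionDFA = productDFA (&&)

differenceDFA :: DFA s -> DFA t -> DFA (Maybe s, Maybe t)   -- accepts A \ B
differenceDFA = productDFA (\a b -> a && not b)

-- Does a DFA accept a given string?
runDFA :: DFA s -> String -> Bool
runDFA d = go (Just (start d))
  where
    go Nothing  _        = False
    go (Just s) []       = accept d s
    go (Just s) (c : cs) = go (step d s c) cs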
{"url":"http://www.velocityreviews.com/forums/t891675-comparing-regular-expressions.html","timestamp":"2014-04-16T15:59:08Z","content_type":null,"content_length":"65371","record_id":"<urn:uuid:ae8da87b-b38c-4a60-9f37-5a5928caaef0>","cc-path":"CC-MAIN-2014-15/segments/1397609524259.30/warc/CC-MAIN-20140416005204-00294-ip-10-147-4-33.ec2.internal.warc.gz"}
University of West Georgia Course Syllabus
Trigonometry & Calculus for P-8 Teachers (MATH 4753), Fall 2006

Instructor: M. Yazdani, Ph.D.
Phone: 678-839-4132
Office: 322 Boyd Building
Conference Hours: 12:00 – 4:00 M, 11:00 – 12:00 TR, 3:30 – 5:30 TR. I am available at any other time by appointment.
Class Time and Location: 5:30 – 8:00 R, Boyd 307
Text: Stewart, J., (2003), Single Variable Calculus: Early Transcendentals, Thomson Brooks/Cole, Belmont, California. ISBN: 0-495-01613-6

After completion of the course, the student will demonstrate the following:
1. An understanding of standard vocabulary and symbols associated with trigonometry and calculus;
2. A better understanding of fundamental concepts in trigonometry, including angle measure (degree and radian), trig ratios, identities, Law of Sines, Law of Cosines, solving triangles, and graphing trigonometric functions;
3. A better understanding of fundamental concepts in calculus, including limits, continuity, derivatives and their applications;
4. An understanding of the scope and sequence of the P-12 mathematics curriculum.

Course Contents, Part 1, Trigonometry:
● Right angle trigonometry
● Radian and degree measure
● Trigonometric functions: the unit circle
● Trigonometric functions: any angle
● Graphs of sine and cosine functions
● Graphs of other trigonometric functions
● Solving triangles

Course Contents, Part 2, Calculus:
● Slope of a curve at a point
● Definition of the derivative
● Formal definition of the limit
● Binomial theorem, derivatives of powers of x
● Linearity of the derivative, derivatives of all polynomials
● Distance traveled at non-constant velocity
● Area under a curve
● Definition of the definite integral
● Fundamental Theorem of Calculus

Class lectures will include the following: presentation of material and concepts, problem solving techniques, and class discussions. Quizzes will be given periodically throughout the semester. All tests will be comprehensive. There is no make-up for daily quizzes. There is no make-up for the tests unless the student presents a legitimate excuse.

Quizzes 40%
2 tests 40%
Final Exam 20%

Final grade will be determined by point accumulation as follows:
A = 90% - 100%
B = 80% - 89%
C = 70% - 79%
D = 60% - 69%
F = Below 60%

Supplementary texts:
Larson, R., Edwards, B., (1999), Brief Calculus: An Applied Approach to Mathematics. Houghton Mifflin Company, Boston, MA.
Kay, D. (2001), Trigonometry. Cliff Notes, Inc., New York, New York.

Attendance: Attendance is mandatory. I expect each student to attend all classes and follow university policy. There are only 5 unexcused or excused absences allowed per semester. If you exceed 5 absences you will fail the course. Attendance will be checked each class period and it is your responsibility to sign the attendance sheet.

Conferences: Conferences can be beneficial and are encouraged. All conferences should occur during the instructor's office hours, whenever possible. If these hours conflict with a student's schedule, then appointments should be made. The conference time is not to be used for duplication of lectures that were missed; it is the student's responsibility to obtain and review lecture notes before consulting with the instructor. The instructor is very concerned about the student's achievement and well-being and encourages anyone having difficulties with the course to come by the office for extra help.
Note: If you have a documented disability which will make it difficult for you to carry out the coursework as I have outlined, and/or if you need special accommodation or assistance due to a disability, please contact me as soon as possible.
{"url":"http://www.westga.edu/~math/syllabi/syllabi/fall06/MATH4753.html","timestamp":"2014-04-19T17:19:26Z","content_type":null,"content_length":"25188","record_id":"<urn:uuid:e96e4940-66b8-4ba8-8cb1-f278175a674d>","cc-path":"CC-MAIN-2014-15/segments/1398223203422.8/warc/CC-MAIN-20140423032003-00315-ip-10-147-4-33.ec2.internal.warc.gz"}
Fluid Flow and Hole Size

Name: Holly
Status: student
Grade: 6-8
Location: CA
Country: N/A
Date: 1/24/2005

Question: Does the size of the hole affect the water pressure? This is from a student conducting a test. She wanted to know: if you change the hole size (not the location), will the rate of water flow out of that hole decrease or increase? But more specifically, what is the relationship between the hole size and the water flow rate - for example, if a hole is doubled in size, does the water flow rate also double? Also, is there a point where a hole becomes so small that water flow is non-existent?

Replies:

This depends largely on the source of water. If you have a pump pressurizing the water, then the pressure it is able to maintain will drop with larger holes, because more water will be escaping (more flow at less pressure). If the water pressure behind the hole is constant (say, a relatively small hole at the bottom of a constant source such as a dam or water tower), then your pressure will remain constant, and flow will increase even more. I do not believe you will ever get a hole small enough to completely prevent water flowing through it (at least not unless you are drilling holes the size of molecules); however, a small enough hole will have such a small amount of water flowing through it that it can be disregarded for all practical concerns.

Ryan Belscamper

Yes, you will certainly get more water out of larger holes. If the water is flowing out of a tank where the air above the water is at atmospheric pressure, as is the air outside the hole, and if the area of the surface of the water is much larger than the area of the hole, then, by Bernoulli's Law, the water will leave through the hole at a speed v = sqrt(2gh), where h is the height of the surface of the water above the hole. Notice that this is the speed an object would get by falling through a height h. So the volume of water leaving per second will be Q = A*v. Incidentally, the water accelerates as it leaves the hole, since the air pressure outside the hole is less than the water pressure inside the hole, causing the diameter of the stream of water to be less than the diameter of the hole. So Q is actually less (by a factor of roughly 2/3) than the value given above. It could be an interesting experiment to measure this effect by measuring Q as a function of h. As h is increased, the speed of the stream increases, its diameter shrinks, and Q decreases by an increasing amount from the simple calculation.
A drop with radius 7.3E-7m (about 0.03 thousandths of an inch)would generate a pressure of 1 atmosphere = 1E5 N./m^2 and could support the water in a tank 32 feet high! Best, Dick Plano... The above web site has a animation which illustrates Bernoulli principle, which is "Bernoulli's Principle states that as the speed of a moving fluid increases, the pressure within the fluid decreases." So, decreasing the size of the hole, lets less water through the hole, thus building up the pressure, with increased pressure the water flows faster. If the hole decreases indefinitely, one would think that there is an ultimate limit where the molecules, or atoms in the water cannot get through the hole, but that case is a theoretical issue. Venturi followed Bernoulli and developed equations to establish flows rates this site illustrates the equations, and you can answer the problem about doubling the size. From what little I know about lasers, they do not work as one expects from Bernoulli's principle. In other words, here the discussion deals with water, but light does not work by the same logic, meaning a laser is not just concentrated light rays. I think the laser works by exciting atoms. James Przewoznik Glad you got to refine that question. I presume the hole is in the side of a container of water. The water pressure behind the hole stays the same, is independent of the size of the hole. The flow in response to that pressure is what changes. Flow usually varies as some power of the hole size. The power might be 1 (proportional), 2 (square), or even 3 (cube). The flow in a long, thin tube tends to go as the diameter cubed. The power-law in effect for a hole might be less because of the inertia or viscosity of the water in the immediate vicinity of the hole, most holes in thin walls have flow effects about like a short tube whose length is something like 1/2 the diameter. So then the effective tube-length increases when the diameter increases, so perhaps the flow will go as the square of the diameter. For this square-law, doubling the diameter would give 4 times the flow. But the power-law-number in effect will depend on major changes in scale. If the hole is very small or the liquid is very viscous, the power-law might be different than if the hole is large and the liquid is watery. Flow through small holes at low pressures can completely stop, if there is air rather than water on the outside of the hole, and the wall-substance is water-repellant. A tiny half water-drop forms, bulging out through the hole, and the surface tension inside that water drop can be a larger pressure that the water-pressure inside the container. So that surface-tension can hold back the flow. There are a bunch of things you can do to break this surface-tension and allow the water to flow again, at the rate normally predicted by the power-law for that size of hole. They include: throwing soapy water over the hole, continuously pouring water down the outside of the container, hanging wet paper across the hole on the outside, or sitting your container in a bucket of water filled to just about the level of the hole, or higher. If higher, then the pressure is the difference between the water level inside and out, not between the water level inside and the height of the hole. I think I recommend hanging wet paper or cotton string. Gluing a vertical toothpick just beside the hole might work too. I think you need to measure and discover that power-law for yourself. Your set-up might give a different power-law than I would predict. 
Jim Swenson The flow of a fluid can be very complicated, especially if the flow becomes turbulent -- for example, when you pull the plug in a basin or sink and the water forms a "whirl pool". But let us make some simplifying arrangements, as follows: A large cylindrical tank with a small horizontal exit pipe of radius R and length L near the bottom of the large tank. In this configuration the level of the water does not change very much during the time period in which the flow from the exit pipe is being measured. Then the volume rate (volume / time) V = (pi / 8) * P * R^4 / n * L where: pi = 3.14, P is the pressure difference between the ends of the small horizontal pipe, L is the length of the small horizontal pipe, n is the viscosity of the fluid, and R is the radius of the small horizontal pipe. So you can see that the flow rate is VERY sensitive to the size of the exit pipe -- the radius of the hole in the large tank. Putting in some typical values, say L = 1 meter, R = 2 mm (pretty small), the depth of the water in the tank = 0.5 meters, n (the viscosity of water) = 1x10^-3 kg/ meter*sec., then the numbers fall out to be about 3x10^-5 (cubic meters / sec). Since the flow varies as R^4 doubling the radius of the small pipe will increase the flow rate by 2^4 = 16 times, so the flow rate is very sensitive to the size of the hole. Note: the long, narrow exit pipe is to ensure that the fluid flow is smooth. There is a minimum size below which the fluid (in this case water) will no longer flow freely. This is related to the surface tension of the water, but then once more the equations get complicated again. Vince Calder For fluid flow, there is a relationship among flow rate, flow velocity, and flow area, expressed by the following formula: Q = A x V (Q = A times V) where Q = flow rate or discharge A = cross-sectional flow area V = velocity of flow In American units, V is usually expressed in terms of ft/s (feet per second), therefore A should be in ft2 (feet squared), and the units for Q would be ft3/s (feet cubed per second). It is important to use the appropriate units in the above equation so that the results are dimensionally correct. I hope that this helps. Bob Trach Click here to return to the Engineering Archives NEWTON is an electronic community for Science, Math, and Computer Science K-12 Educators, sponsored and operated by Argonne National Laboratory Educational Programs , Andrew Skipor, Ph.D., Head of Educational Programs. For assistance with NEWTON contact a System Operator (help@newton.dep.anl.gov) , or at Argonne's Educational Programs NEWTON AND ASK A SCIENTIST Educational Programs Building 360 9700 S. Cass Ave. Argonne, Illinois 60439-4845, USA Update: June 2012
{"url":"http://newton.dep.anl.gov/askasci/eng99/eng99365.htm","timestamp":"2014-04-19T22:37:27Z","content_type":null,"content_length":"19278","record_id":"<urn:uuid:58486bfb-6b26-47a1-99b8-0ecca19c53d6>","cc-path":"CC-MAIN-2014-15/segments/1397609537754.12/warc/CC-MAIN-20140416005217-00199-ip-10-147-4-33.ec2.internal.warc.gz"}
Circles: Area of a Sector How to find the area of a sector A sector is a section or part of a circle - shaped like a pie slice. It is enclosed by the radiuses (radii) of the circle and an arc (where the sector meets the circumference). The area of a sector can be easily found with a formula which uses the radius of the circle and the sector's central angle: The central angle is located at the sector's vertex, this is the point where the sector meets the center point of the circle. The radius (r) is the distance from the center point to the edge of the circle. The first part of the formula is πr² - this is the total area of the circle. In the next part of the formula we multiply the total area of the circle by central angle/360° - this is the fraction of the circle that the sector covers. So, in short, to get the area of a sector, you multiply the total area of the circle by the fraction that the sector covers. See how in the worked examples below. If you already know the fraction of the circle that the sector covers, you can simply multiply the area of the circle by this. For example, if we have a sector that covers exactly a quarter of the circle (a 90° angle), then you can multiply the area of the circle by ¼ (πr² x ¼) to get the sector area. Worked Examples What are the areas of the following sectors in the diagram? i) ADB ii) BDC iii) ADC Radius = 7 cm
 Central angle = 72° Substituting the radius and central angle into the sector area formula: ADB = (π x 7²) x (72/360) = 153.94 cm²(area of the circle) x 0.2(1/5 of the circle) ADB = 30.79 cm² Radius = 7cm Central angle = 90° We already know the area of the circle (π x 7²) from the previous question. Circle area = 153.94 cm² 90° is exactly 1/4 of 360° this tells us that the sector covers exactly 1/4 of the circle: BDC = 153.94cm² x 0.25(1/4) = 38.485 cm² Radius = 7cm Central angle = 72° + 90° = 162° 153.94 x 162/360 ADC = 153.94 x 0.45 = 69.27 cm² Major and Minor Sectors When there are two sectors in a circle the smaller of the two is called the minor sector, it has a central angle less than 180° and covers less than half the area of the circle. The major sector is the larger sector and has a central angle greater than 180°, it covers more than half of the area of the circle. Sector Types A semicircle is a sector which covers exactly half of a circle and has a central angle of 180°. It is enclosed by a semicircle arc and the diameter of the circle. - An octant is a sector which covers 1/8 of a circle and has a central angle of 45°. - A sectant is a sector which covers 1/6 of a circle and has a central angle of 60°. - A quadrant is a sector which covers 1/4 of a circle and has a central angle of 90°. Back to top
{"url":"http://www.getgeometry.com/area-of-a-sector.html","timestamp":"2014-04-18T20:42:47Z","content_type":null,"content_length":"16753","record_id":"<urn:uuid:fcb7d3df-68cd-46b0-814c-8a66cff55f7d>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.9/warc/CC-MAIN-20140416005215-00645-ip-10-147-4-33.ec2.internal.warc.gz"}