Wolfram Demonstrations Project
A Concurrency Involving the Midpoints of the Sides and the Altitudes
Let ABC be a triangle and let A', B', and C' be the midpoints of the sides opposite A, B, and C. Let A'', B'', and C'' be the midpoints of the altitudes from A, B, and C, respectively. Then A'A'', B'B'', and C'C'' are concurrent.
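The claim is easy to sanity-check numerically. The sketch below (plain NumPy; the triangle coordinates are my own arbitrary choice, not from the Demonstration) builds the side midpoints and altitude midpoints for one triangle and confirms that the three lines meet at a single point:

```python
import numpy as np

def foot(p, q, r):
    """Foot of the perpendicular dropped from p onto line qr."""
    d = r - q
    t = np.dot(p - q, d) / np.dot(d, d)
    return q + t * d

def intersect(p1, p2, p3, p4):
    """Intersection point of line p1p2 with line p3p4."""
    d1, d2 = p2 - p1, p4 - p3
    # Solve p1 + s*d1 = p3 + t*d2 for (s, t).
    s, t = np.linalg.solve(np.column_stack([d1, -d2]), p3 - p1)
    return p1 + s * d1

# An arbitrary scalene test triangle -- not from the Demonstration itself.
A, B, C = map(np.array, [(0.0, 0.0), (4.0, 0.0), (1.0, 3.0)])
Ap, Bp, Cp = (B + C) / 2, (C + A) / 2, (A + B) / 2   # midpoints of the sides
App = (A + foot(A, B, C)) / 2                        # midpoints of the altitudes
Bpp = (B + foot(B, C, A)) / 2
Cpp = (C + foot(C, A, B)) / 2

P = intersect(Ap, App, Bp, Bpp)
Q = intersect(Ap, App, Cp, Cpp)
print(P, Q, np.allclose(P, Q))   # the two intersection points coincide
```

Repeating this for other triangles gives the same agreement, as the theorem predicts.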
Math Forum: CMC-South 2012 Unsilence Students' Voices Suzanne Alejandre and Marie Hogan Friday, November 2, 2012 8:30 - 10:00 Session 120 Description: Every classroom has silenced voices. Why? We'll share activities to increase CCSSM practices #1 [make sense of problems and persevere] and #3 [construct arguments and critique] focused on students' accountable math talk. Handouts will be provided. My Students Can Notice/Wonder, Now What? Marie Hogan and Suzanne Alejandre Friday, November 2, 2012 1:30 - 3:00 Session 324 Description: What problem-solving strategies do students already have? How can they develop their mathematical practices? How do they get better at specific strategies? How do they get better at choosing strategies? We'll share ideas and handouts for you to try!
Syllabus Entrance
PY 205 Introduction to Physics I
Silvius, Alexander A.

Mission Statement: Park University provides access to a quality higher education experience that prepares a diverse community of learners to think critically, communicate effectively, demonstrate a global perspective, and engage in lifelong learning and service to others.

Vision Statement: Park University, a pioneering institution of higher learning since 1875, will provide leadership in quality, innovative education for a diversity of learners who will excel in their professional and personal service to the global community.

Course: PY 205 Introduction to Physics I
Semester: SP 2013 HO
Faculty: Silvius, Alexander A.
Title: Assistant Professor of Physics
Degrees/Certificates: Ph.D. Physics - Missouri Science and Technology
Office Location: Science 308
Office Hours: 9:00 - 10:00 AM, open door policy, or by appointment
Daytime Phone: 816-584-6883
E-Mail: Alexander.Silvius@park.edu
Class Days: M T W R
Class Time: M-W 11:00a-11:50a; T-R 11:35a-12:50p; W 01:30p-04:20p
Credit Hours: 5
Text: Physics for Scientists & Engineers, Giancoli, 4th edition
Lab Book: Physics Laboratory Manual, David H. Loyd, 3rd ed., ISBN-13: 978-0-495-11452-9

Additional Resources:
McAfee Memorial Library - Online information, links, electronic databases, and the online catalog. Contact the library for further assistance via email or at 800-270-4347.
Career Counseling - The Career Development Center (CDC) provides services for all stages of career development. The mission of the CDC is to provide the career planning tools to ensure a lifetime of career success.
Park Helpdesk - If you have forgotten your OPEN ID or password, or need assistance with your PirateMail account, please email helpdesk@park.edu or call 800-927-3024.
Resources for Current Students - A great place to look for all kinds of information: http://www.park.edu/Current/.
Course Description: PY 205 Introduction to Physics I: Lecture and laboratory introducing calculus-based physics. Topics include: introductory kinematics and Newtonian dynamics of both particles and solid bodies, work and energy, momentum, and thermodynamics. 4:3:5. Prerequisite: MA 221. Corequisite: MA 223.

Educational Philosophy:

Learning Outcomes: Core Learning Outcomes
1. Apply Newton's three laws of motion and the universal law of gravity.
2. Perform vector analysis of the physical world.
3. Explain and analyze relative motion.
4. Explain and calculate the energies involved in the physical world.
5. Recognize, discuss, and calculate moment of inertia, torque, and angular momentum.
6. Compare and contrast Kepler's laws and Newton's laws.
7. Apply Pascal's principle, Archimedes' principle, and fluid flow concepts: continuity, Bernoulli's equation, the Venturi effect, coefficient of viscosity, Poiseuille's law, laminar flow, and turbulent flow.
8. Identify and interpret the laws of thermodynamics and apply them to thermodynamic processes and phase changes.
9. Use spreadsheets such as Microsoft Excel to analyze data and produce tables and graphs as part of a laboratory report.

Core Assessment: Link to Class Rubric

Class Assessment:
1. Assigned Reading: Each student is expected to read the textbook carefully and critically.
2. Homework & Quizzes: Homework will be assigned from the end-of-chapter problems. The assignments will be collected and a single problem will be selected arbitrarily and graded. To ensure that an appropriate level of understanding is acquired from the homework, there will be weekly quizzes covering the homework, reading, and lecture material for that week. The lowest homework and quiz scores will be dropped. A quiz will be given each week during which there is no exam scheduled.
3. Examinations: Exams will consist of multiple choice, matching, definitions, and short-answer problems.
Make-up exams will be given only if the student contacts the instructor prior to the scheduled exam time and has a reasonable excuse for missing the exam.

Grading: Exams (10) 40%; Homework 10%; Quizzes 10%; Laboratory 20%; Final 20%. The scale is traditional: 90% = A; 80% = B; 70% = C; 60% = D; <60% = F.

Late Submission of Course Materials: No late work will be accepted.

Classroom Rules of Conduct: Cell phone usage is not allowed during class.

Course Topic/Dates/Assignments:

Academic Honesty: Academic integrity is the foundation of the academic community. Because each student has the primary responsibility for being academically honest, students are advised to read and understand all sections of this policy relating to standards of conduct and academic life. Park University students and faculty members are encouraged to take advantage of the University resources available for learning about academic honesty (www.park.edu/current or http://www.park.edu/faculty/). From Park University 2011-2012 Undergraduate Catalog, pages 95-96.

Plagiarism involves the use of quotations without quotation marks, the use of quotations without indication of the source, the use of another's idea without acknowledging the source, the submission of a paper, laboratory report, project, or class assignment (any portion of such) prepared by another person, or incorrect paraphrasing. From Park University 2011-2012 Undergraduate Catalog, page 95.

Attendance Policy: Instructors are required to maintain attendance records and to report absences via the online attendance reporting system.
1. The instructor may excuse absences for valid reasons, but missed work must be made up within the semester/term of enrollment.
2. Work missed through unexcused absences must also be made up within the semester/term of enrollment, but unexcused absences may carry further penalties.
3. In the event of two consecutive weeks of unexcused absences in a semester/term of enrollment, the student will be administratively withdrawn, resulting in a grade of "F".
4. A "Contract for Incomplete" will not be issued to a student who has unexcused or excessive absences recorded for a course.
5. Students receiving Military Tuition Assistance or Veterans Administration educational benefits must not exceed three unexcused absences in the semester/term of enrollment. Excessive absences will be reported to the appropriate agency and may result in a monetary penalty to the student.
6. A report of an "F" grade (attendance or academic) resulting from excessive absence, for those students who are receiving financial assistance from agencies not mentioned in item 5 above, will be made to the appropriate agency. Park University 2011-2012 Undergraduate Catalog, page 98.

Disability Guidelines: Park University is committed to meeting the needs of all students that meet the criteria for special assistance. These guidelines are designed to supply directions to students concerning the information necessary to accomplish this goal. It is Park University's policy to comply fully with federal and state law, including Section 504 of the Rehabilitation Act of 1973 and the Americans with Disabilities Act of 1990, regarding students with disabilities. In the case of any inconsistency between these guidelines and federal and/or state law, the provisions of the law will apply. Additional information concerning Park University's policies and procedures related to disability can be found on the Park University web page: http://www.park.edu/disability.
Additional Information:

Class Rubric (each competency is scored Exceeds Expectation (3), Meets Expectation (2), Does Not Meet Expectation (1), or No Evidence (0)):

Synthesis Outcomes
- Exceeds (3): Derive from basic principles an expression for motion, including relative motion. Calculate and discuss the energies involved in the physical world.
- Meets (2): Formulate an expression for motion and assess the energies involved in the physical world.
- Does Not Meet (1): Formulate an expression for motion or assess the energies involved in the physical world.
- No Evidence (0): Use the given expressions for motion or determine the energy involved in a physical process.

Analysis Outcomes
- Exceeds (3): Differentiate the three laws of thermodynamics. Relate Newton's laws to linear and rotational motion. Relate a problem to Newton's laws or the laws of thermodynamics to solve it.
- Meets (2): Classify the three laws of thermodynamics. Relate Newton's laws to linear and rotational motion. Analyze a Newtonian or thermodynamics problem and solve it.
- Does Not Meet (1): Relate the three laws of thermodynamics to a problem, or relate Newton's laws to linear and rotational motion in a problem. Analyze a Newtonian or thermodynamics problem and solve it.
- No Evidence (0): Solve a problem dealing with Newtonian mechanics or thermodynamics.

Evaluation Outcomes
- Exceeds (3): Evaluate the energies involved in a real-world physical process. Calculate the moment of inertia, torque, and angular momentum of simple systems. Evaluate projectile motion problems. Evaluate various forces.
- Meets (2): Three of the above.
- Does Not Meet (1): Two of the above.
- No Evidence (0): Evaluate elementary problems involving energy, moment of inertia, torque, angular momentum, projectile motion, and forces.

Terminology Outcomes
- Exceeds (3): Use and explain appropriate terms with no errors. Specifically: displacement, velocity, acceleration, projectile motion, force, torque, fluid flow, inertia, momentum, heat.
- Meets (2): Use and explain, with no errors, eight to nine of the terms above.
- Does Not Meet (1): Five to seven of the terms above.
- No Evidence (0): Four or fewer of the terms above.

Concepts Outcomes
- Exceeds (3): Given a physical world process, fully discuss the relevance of Newtonian mechanics, thermodynamics, and fluid flow.
- Meets (2): Given a physical world process, fully discuss the relevance of Newtonian mechanics and thermodynamics.
- Does Not Meet (1): Given a physical world process, discuss in part the relevance of Newtonian mechanics and thermodynamics.
- No Evidence (0): Given a physical world process, discuss in part the relevance of Newtonian mechanics or thermodynamics.

Application Outcomes
- Meets (2): Apply Newton's three laws of motion. Apply the universal law of gravity. Apply the laws of thermodynamics.

Whole Artifact Outcomes
- Exceeds (3): Construct a graph and/or interpret information from a graph. Draw diagrams of the problem asked. Analytically solve word problems. Discuss the relevance of the kinetic theory of gases to heat and temperature and determine appropriate conclusions.
- Meets (2): Construct a graph and/or interpret information from a graph. Draw diagrams of the problem asked. Analytically solve word problems. State the kinetic theory of gases as applied to heat and temperature and determine appropriate conclusions.
- Does Not Meet (1): Construct a graph and/or interpret information from a graph. Draw diagrams of the problem asked. Analytically solve word problems. State the kinetic theory of gases as applied to heat and temperature and determine appropriate conclusions.
- No Evidence (0): Construct a graph and/or interpret information from a graph. Draw diagrams of the problem asked. Analytically solve word problems. Use the given kinetic theory of gases equations as applied to heat and temperature.

Component Outcomes
- Exceeds (3): Recognize and interpret a cooling curve. Draw force diagrams and use them to explain and evaluate physical phenomena. Select or derive the correct thermodynamic expression for the problem.
- Meets (2): Recognize and interpret a cooling curve. Draw force diagrams and use them to evaluate physical phenomena. Select the correct thermodynamic expression for the problem.
- Does Not Meet (1): Two of the above.
- No Evidence (0): One of the above.

This material is protected by copyright and cannot be reused without author permission. Last Updated: 1/11/2013 1:07:16 PM
Math Forum Discussions
Views expressed in these public forums are not endorsed by Drexel University or The Math Forum.

Topic: Phantom Error Messages in v9
Replies: 1   Last Post: Jan 5, 2013 2:17 AM

Phantom Error Messages in v9
Posted: Dec 6, 2012 4:57 AM

*Mathematica* v9 can produce error messages during the evaluation of the *next* line to be evaluated! Yes, that statement seems nonsensical. Read on:

(*Mathematica* v9.0.0.0 running on a MacBook Pro with OS X 10.6.8)

I have placed a notebook with instructions and code to demonstrate these errors at https://www.dropbox.com/s/3w5e1fb834av5wp/RETURN%20BUG.nb ; otherwise you may observe them from scratch with these instructions.

Enter the following four lines into a new cell:

sum2::usage="f[*a,b*] returns *a+b*";

The use of some italics in line 2 is important!

1. Evaluate that cell. The output value, 3, appears; note the number of the output cell. If this is the first input cell to be evaluated in your session, the output cell will be Out[4].
2. Double click on "sum2".
3. Press the forward arrow key.
4. Seven error messages now appear! These messages, in the *Messages* window, are:

StringMatchQ::strse: String or list of strings expected at position 1 in StringMatchQ[$Failed,sum2*]. >>
StringMatchQ::strse: String or list of strings expected at position 1 in StringMatchQ[$Failed,sum2*]. >>
StringMatchQ::strse: String or list of strings expected at position 1 in StringMatchQ[$Failed,sum2*]. >>
General::stop: Further output of StringMatchQ::strse will be suppressed during this calculation. >>

(Then, after a slight pause, without the user doing anything):

The String "f[
The string "FontSlant->"Italic"]
The string "FontSlant->"Italic"]

You will notice that the first four messages occurred "During evaluation of In[5]", despite the fact that there is not yet any input cell with that number (which you can confirm by clicking in some empty place in the notebook, entering a simple expression such as 1+1, and evaluating).

Date  Subject  Author
12/6/12  Phantom Error Messages in v9  James Stein
1/5/13  Re: Phantom Error Messages in v9  James Stein
Winter School on Elliptic Objects
Posted by Urs Schreiber

Next February there is an entire winter school all about Stolz & Teichner's work on elliptic objects.

From Field Theories to Elliptic Objects
Schloss Mickeln, Düsseldorf (Germany)
February 28 - March 4, 2006
Graduiertenkolleg 1150: Homotopy and Cohomology
Prof. Dr. G. Laures, Universität Bochum
Dr. E. Markert, Universität Bonn
Details: http://www.math.uni-bonn.de/people/GRK1150 $\to$ study program
Announcement and Application

Unfortunately, the application deadline was already Dec. 15. Too bad. If anyone knows anything about this, please let me know.

In the past two decades mathematicians started to investigate the geometric properties of manifolds by looking at their ambient loop spaces. The stimulation came from particle physicists who analyzed these infinite dimensional objects heuristically. The results are very mystifying since they connect physics and differential topology with the theory of elliptic curves and modular forms. An explanation is expected to come from a new cohomology theory which should be regarded as a higher version of topological K-theory. Such a theory can be obtained by methods of algebraic topology, but its relation to loop spaces and field theories is still unclear. Graeme Segal and Dan Quillen gave a first description of the elements in this new theory in terms of field theories. Later their approach was modified and improved by Stefan Stolz and Peter Teichner. The seminar gives an overview of their work while mainly focusing on the relationship between 1-dimensional euclidean field theories and classical K-theory, which is now understood.

Posted at December 24, 2005 1:05 PM UTC

Re: Winter School on Elliptic Objects
Hallo, Urs! In case you don't already know, I remembered that John (Baez) has written something on Elliptic Cohomology in his twf's.
I went back and read it again (when it was originally written I was an undergrad and had no chance to grasp the meaning of anything - I don't claim things have gotten much better since). You can find a general introduction to generalized homology theory in weeks 149 and 150, a little bit concerning elliptic cohomology in week 153, and more in 197.
Posted by: Florian on January 16, 2006 8:42 PM | Permalink | Reply to this

Re: Winter School on Elliptic Objects
Hallo Florian, thanks for your message! I am aware of these TWFs, but I don't feel I really understand much about elliptic cohomology yet. With a little luck this will change some day. I was lucky enough to be accepted for the above mentioned winter school, even though I applied well after the deadline had passed. My motivation for being interested in 'enriched elliptic objects' is mainly the aspect of describing CFT by means of 2-functors. I think there is something hidden there, and if it happens to be related to elliptic cohomology, all the better. :-)
Posted by: Urs on January 17, 2006 10:14 AM | Permalink | Reply to this
Parabola Equation
November 9th 2011, 10:49 PM  #1  (joined Nov 2011, New Zealand)

So here's my question (by the way, please don't penalise me for putting it in the wrong forum - I didn't see one for graphs!):

The cross section of a hayshed is in the shape of an inverted parabola. It can be modelled by the equation h = 0.3w(8 - w), where w is the width of the hayshed and h is the height. Solve for w (the width). Please explain how you worked it out.

November 9th 2011, 11:09 PM  #2  MHF Contributor (joined Mar 2011)

Re: Parabola Equation

Multiply the right side out, and then "complete the square":

$ax^2 + bx = ax^2 + 2a(\frac{b}{2a})x$
$= a(x^2 + 2(\frac{b}{2a})x) = a(x^2 + 2(\frac{b}{2a})x + (\frac{b}{2a})^2 - (\frac{b}{2a})^2)$
$= a(x + \frac{b}{2a})^2 - \frac{b^2}{4a}$

(except you're using "w" instead of "x" - can you figure out what your "a" and "b" are?) Get the square on one side and everything else on the other, take the square root, and then subtract to "isolate" your variable (in this case, w).
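Equivalently, one can read off a = -0.3 and b = 2.4 and apply the quadratic formula directly. A small sketch (the helper name and the sample heights are my own choices, not from the thread):

```python
import math

def widths_at_height(h):
    """Solve h = 0.3*w*(8 - w), i.e. -0.3*w**2 + 2.4*w - h = 0, for w."""
    a, b, c = -0.3, 2.4, -h          # coefficients of a*w**2 + b*w + c = 0
    disc = b * b - 4 * a * c
    if disc < 0:
        return []                    # h above the vertex: no real width
    r = math.sqrt(disc)
    return sorted([(-b + r) / (2 * a), (-b - r) / (2 * a)])

print(widths_at_height(0))    # ground level: w = 0 and w = 8
print(widths_at_height(3.6))  # roughly [2.0, 6.0], since 0.3*2*(8-2) = 3.6
```

The two roots are the two width positions at which the roof reaches a given height h; they merge at w = 4, the vertex.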
Green Lane Math Tutor ...Writing skills and grammar are not taught as rigorously as they used to be. Chances are, even if you've attended good schools and gotten good grades, your skills are not adequate to achieve a good score on the SAT writing section. I studied reading and writing under strict, old-school English t... 23 Subjects: including algebra 1, algebra 2, calculus, vocabulary ...One of my goals is for the student to feel smarter and more confident when I am through with them! By middle school, many students (particularly boys) believe that they aren't capable of reading well. I work hard to help each student believe in themselves and to understand that they just need a little help to catch up. 28 Subjects: including logic, SAT math, linear algebra, reading ...I am very familiar with the format of the Praxis and am equipped to tutor in all three sections of the test: reading, writing, and math. I have helped other education students pass the Praxis. I have also passed the Praxis II in English, mathematics, and health for grades 7-12 in Pennsylvania. 47 Subjects: including SAT math, precalculus, ACT Math, piano ...It also emphasizes writing proofs to solve (prove) properties of geometric figures. Microsoft Word is a full-featured word processing program. Word contains rudimentary desktop publishing capabilities and is the most widely used word processing program on the market. 39 Subjects: including precalculus, trigonometry, English, geometry ...I am currently employed with a company as design engineer but want to fill my free time with something productive and at the same time earn a second income to pay off my heavy student debt. I have never officially tutored as a job but have done so for my peers who struggled in school. I am most... 8 Subjects: including algebra 1, algebra 2, Microsoft Excel, prealgebra
AN1539/D - Datasheet Archive

MOTOROLA SEMICONDUCTOR APPLICATION NOTE AN1539
Order this document by AN1539/D

An IF Communication Circuit Tutorial
Prepared by: Albert Franceschino, Field Applications Engineer, Motorola Semiconductor Product Sector

ABSTRACT
This article is intended to be a tutorial on the use of IF communication integrated circuits. The ISM band channel bandwidths and the Motorola MC13156 are used within this article as a platform for discussion. An examination of the device's topology is provided, along with a discussion of the classical parameters critical to the proper operation of any typical IF device. The parameters reviewed are impedance matching the mixer and selecting the quad tank and filters, concluding with an overview of bit error rate testing for digital applications. Upon completion, the reader will have a better understanding of IF communications basics and will be able to specify the support components necessary for proper operation of these devices.

BACKGROUND ON THE ISM BAND
The industrial, scientific, medical (ISM) band targets three bands of frequencies: 902-928 MHz, 2400-2483.5 MHz, and 5725-5850 MHz. Spread spectrum techniques can be used within these bands to minimize adjacent channel interference. Systems operating with radiated power greater than 0.75 µW and up to a maximum of 1.0 W are required to implement either a frequency hopping scheme or a direct sequence scheme to lower the spectral power density on a particular channel. Systems with radiated power levels at or below 0.75 µW are not required to implement spread spectrum. The frequency hopping channels associated with this band must have a bandwidth of 500 kHz or less. The RF carrier modulation technique is usually conventional FM. For the purposes of this article we will concentrate on the 902-928 MHz band, as this band furnishes the carrier which is down-converted later in the article.
The two recommended methods of implementing the spreading function mentioned above operate as follows. The direct sequence scheme uses a pseudo-random data stream of at least 127 bits that is combined with the data to be modulated. The combination of the relatively low data rate baseband data with a high speed pseudo-random bit stream forces the spectrum of the baseband signal to occupy more bandwidth. By forcing this spreading, the power spectral density of the signal at any discrete frequency is reduced, thus limiting the signal's propensity to interfere with other users. The frequency-hopping scheme requires that the modulator hop over at least 50 channels within 20 seconds. This corresponds to a park time of 0.4 seconds/channel. This scheme reduces the likelihood of co-channel interference by intentionally limiting the amount of continuous time a carrier remains on channel. The spectral power density (SPD) measurement criterion for either scheme is the same, 8.0 dBm in a 3.0 kHz bandwidth, computed as follows:

SPD (dBm) = 10 log [ (1 W)(3 kHz) / ((500 kHz)(1 mW)) ] ≈ 8.0 dBm

Intuitively it may be obvious that spread spectrum systems would exhibit a better signal to noise ratio than their non-spread counterparts. This improvement in S/N ratio is termed processing gain. Systems operating within this band must exhibit 10 dB processing gain over systems not implementing a spreading function. For direct sequence systems, the processing gain is related to the pseudo-random code or chip rate. For frequency-hopping systems, the gain is related to the hopping rate and the number of channels being hopped. Allocation of these bands by the FCC has fostered new product opportunities. These products include, but are not limited to, high frequency cordless phones, high speed wireless LANs (>9600 bps), point-to-point data terminals, and wireless telemetry.

© Motorola, Inc. 1996
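The SPD figure is just a power ratio expressed in dB; a two-line check with the values from the text:

```python
import math

# SPD limit check: 1 W transmit power spread across a 500 kHz channel,
# measured in a 3 kHz bandwidth, referenced to 1 mW (values from the text).
tx_power_w = 1.0
channel_bw_hz = 500e3
meas_bw_hz = 3e3
spd_dbm = 10 * math.log10(tx_power_w * meas_bw_hz / (channel_bw_hz * 1e-3))
print(f"{spd_dbm:.1f} dBm in 3 kHz")  # 7.8 dBm, which the note rounds to 8.0
```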
A FUNCTIONAL OVERVIEW OF IF ICs FOR COMMUNICATIONS
In conventional radio, RF carriers conveying intelligence by way of some modulation technique are down-converted from the high frequency carrier to some other intermediate frequency through a process called mixing. Following the mixing process, the baseband signal is recovered through some type of demodulation scheme. We will be focusing our discussion on the IF portion of the receiver. By way of example, the Motorola MC13156 will be used as a practical alternative to a purely theoretical discussion. This split IF device is targeted at the "IF strip" portion of the radio. This integrated device contains all the blocks necessary to completely recover an original baseband signal from an incoming higher frequency carrier. The functional blocks associated with this device are common to all IF communication type integrated circuits, and as such provide a convenient and widely applicable vehicle for discussion.

THE MIXER
The function of the mixer is to multiply the incoming RF carrier with a signal supplied by the local oscillator. The mixing process exploits the non-linear properties within the mixing device (i.e., a diode) to produce a whole host of product terms. As an example, consider the terms produced by a second order non-linear device with the following inputs:

F(lo) = cos ω(lo)t and F(rf) = A cos ω(rf)t

The output spectrum is:

A0 + a cos ω(lo)t + aA cos ω(rf)t + b cos² ω(lo)t + 2bA cos ω(lo)t cos ω(rf)t + bA² cos² ω(rf)t

By trigonometric expansion, the term 2bA cos ω(lo)t cos ω(rf)t expands to:

bA cos (ω(lo) + ω(rf))t + bA cos (ω(lo) - ω(rf))t

The desired multiplication product is the difference product, which becomes the intermediate frequency. The products produced by this multiplication are largely dependent on the type of mixer employed. The most common topology used in current integrated circuits is the double balanced configuration.
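The product-to-sum step can be verified numerically; this snippet (frequencies and coefficients are arbitrary test values of my own, not from the note) checks that 2bA cos(ω_lo t)cos(ω_rf t) equals bA[cos((ω_lo+ω_rf)t) + cos((ω_lo-ω_rf)t)] pointwise:

```python
import numpy as np

# Arbitrary test values; only the algebraic identity is being checked.
w_lo, w_rf = 2 * np.pi * 100.0, 2 * np.pi * 87.0   # rad/s
A, b = 0.5, 0.8                                     # RF amplitude, 2nd-order coeff.
t = np.linspace(0.0, 0.1, 5000)

cross = 2 * b * A * np.cos(w_lo * t) * np.cos(w_rf * t)
sum_diff = b * A * (np.cos((w_lo + w_rf) * t) + np.cos((w_lo - w_rf) * t))
print(np.allclose(cross, sum_diff))  # only sum and difference terms remain
```

With these values the difference product sits at 13 Hz, which is the component an IF band-pass filter would keep.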
The balance term is used to describe the degree of isolation between the local oscillator and RF inputs, and between these two inputs and the IF port. Double balanced mixers perform better in terms of suppressing leakage between these ports than do their single balanced counterparts. Analysis of the double balanced mixer also shows that all but the sum and difference frequencies are effectively cancelled out. This eases the task of passband filtering the resultant IF difference frequency. Critical parameters for the mixer include its conversion gain, input and output impedances, and third order intercept. Higher conversion gain is an asset that eases filter selection (a higher filter loss can be tolerated while still delivering enough gain to run the limiter). The mixer input impedance needs to be matched to the filter that is band-pass filtering the signal that has been mixed down from a down-converter. In the case of the ISM band, the RF carrier would be in the 902-928 MHz range. The down-converter's mixer would convert this frequency to some frequency less than 200 MHz, the upper frequency limit of the device. Third order intercept is a measure of how well the distortion products in the device are suppressed. The higher the third order intercept point (specified in dBm), the greater the suppression of these distortion products.

THE LOCAL OSCILLATOR
The local oscillator, as mentioned earlier, is the other half of the mixer's drive. The LO within the MC13156 consists of a bipolar transistor pinned out to be configured as a Colpitts type oscillator. This internal transistor can also serve as a buffer for an external oscillator signal. The critical parameters associated with most IC forms of on-chip LOs are stability and required drive. For discrete oscillators not using a crystal as their reference, external passive components are used to set the LO center frequency.
Some degree of temperature stability can be achieved using components with complementary temperature coefficients. For very stable requirements, a crystal is recommended. In most cases, tuning of a crystal can be accomplished by using a series tunable inductor to "pull" the crystal on frequency.

THE IF AMPLIFIER

The primary purpose of the IF amp is to amplify the output of the mixer. This output, which has been band-pass filtered (the difference frequency produced by the mixer being the center frequency of the filter), requires amplification to ensure adequate drive to the limiter. The IF amp also supplies a current to the input of the summing amps of the RSSI circuit. Critical parameters associated with the IF amp are its input and output impedance and its gain. Input and output parameters are important because it is necessary to match these parameters to the band-pass filter impedances to minimize mismatch loss and subsequent loss in overall gain. Historically, the two most common IF frequencies are 455 kHz and 10.7 MHz. Both the 10.7 MHz and the 455 kHz ceramic filters used in IFs are easily matched to the MC13156 using external resistors when appropriate. Proper filter impedance matching to the IF amplifier is essential for these devices to operate to specification.

RSSI BLOCK

The received signal strength indicator is a summing network which takes the output currents from the IF amp and the limiter and produces a current that can be used to indicate the received signal strength. Most IF devices are configured such that the current produced is a log function of the received signal. Critical parameters of the RSSI section include its dynamic range, measured in dB, and its slope, measured either in µV/dB or µA/dB. Non-linearities in RSSI dynamic range can usually be traced back to either improper filter matching or selection, which results in excess insertion loss.
Devices specifying RSSI slope in µA/dB use current sources at their outputs and thus require a resistor to ground to develop an indication (voltage) of the magnitude of the received signal.

THE LIMITER

The limiter takes its input from the IF amplifier and removes any amplitude variations from the IF frequency. It is necessary to remove these variations because the detector, in addition to responding to frequency changes, will also attempt to demodulate amplitude variations in the IF. These variations decrease sensitivity and introduce distortion in the recovered audio or bit errors in the recovered data. Important parameters for the limiter section include its limiting gain and input impedance.

THE DETECTOR

Most integrated IF circuits employ a quadrature detector for recovering the original baseband information (data/voice). This circuit is favored because the frequency dependent components (the inductor, capacitor, and damping resistor) are external to the device. The part of the circuit that is implemented internal to the device is the four quadrant multiplier. This multiplier is very economical to embed in silicon and is easily interfaced to the remaining external detector components. The quadrature detector operates as follows: First, the incoming IF frequency is split into two parts. Second, one of the parts is phase shifted by 90 degrees. Third, the two parts are multiplied together, and the result is the recovered baseband information. A more detailed discussion of the operation of the quadrature detector is given later.

THE DATA SLICER

The function of the data slicer is to reconstitute the data shape of the recovered baseband information. This information is usually heavily filtered at the transmitter and receiver IF to conserve spectrum and restrict noise. As a result of this filtering, the information is sinusoidal in nature and is not directly suitable for use in a digital system.
Upon initial inspection, one may conclude that the slicer is just a comparator; however, this is not the case. Incorporated within the slicer block is an auto-threshold mechanism that automatically tracks the dc value of the incoming data stream. Variation in the data stream's dc value is caused by the continuous change in the averaged duty cycle of the recovered data. Long strings of 1's and 0's are the cause of the extremes in this value. It is essential that the point of comparison (decision point) be tightly controlled throughout the recovery, or this dc variation may cause bit errors due to misinterpretation of the current data level.

MIXER OVERVIEW

MIXER INJECTION

Before discussing high/low side injection, a review of image frequency is in order. Recall that the function of the mixer in this case is to down-convert the incoming RF signal to 10.7 MHz. With an input frequency of 144.45 MHz, the mixer requires that the LO frequency be either 155.15 MHz (high side injection) or 133.75 MHz (low side injection). The choice of running either high or low side injection is largely a function of the adjacent spectrum in the area of the receiver's RF frequency. As an example, consider the case where the LO is high side injected at 155.15 MHz. An image frequency now exists at 165.85 MHz. This frequency, if present from another transmitter at sufficient power level, will mix to produce its own 10.7 MHz difference product and corrupt/distort the intended signal. In a perfect world, proper planning could be exercised when selecting the LO frequency so as to place the image in a band that is relatively quiet in a particular area. Since this is impractical, heavy pre-mixer bandpass filtering should be (and is) employed to attenuate the image.

LAB MEASUREMENT OF MIXER INPUT IMPEDANCE

The network analyzer is a convenient tool that can be used to measure the input impedance of the mixer.
Generally the procedure for measuring Zin is as follows:

- Set up the analyzer and calibrate out parasitics. These include interconnecting cable and pcb inductance and capacitance. This process is usually automated on newer analyzers. The process of calibration essentially tells the analyzer what a short, an open, and a 50 ohm load look like on the pcb. With the short, open, and load conditions now established in the system, the IF device can be inserted into the pcb and measured.
- The analyzer is set up to sweep a set of frequencies around the incoming RF frequency, and a plot of the input impedance is displayed.

Since some integrated IF devices use a differential input type mixer, care must be taken to properly terminate the other input while measuring the device. On most integrated devices, the mixer's differential input impedances will track, eliminating the need to perform a second measurement. See Figure 1 for a termination example.

[Figure 1. MC13156 Mixer Impedance Measurement Circuit: the LO input (129.3 MHz) and the network analyzer drive the mixer input pins through 1000 pF coupling capacitors; the unused differential input is terminated in 50 ohms, with a 330 ohm resistor and 0.1 µF capacitor to the ground plane.]

[Figure 2. IF Block Diagram: 915 MHz input, LNA, SAW filter (5 dB insertion loss, Zo = 50), down-converter/mixer (22 dB gain), 140 MHz first IF, band-pass filters (6-8 dB insertion loss each), IF amp (39 dB gain), 10.7 MHz IF, limiter, detector, recovered baseband.]

IMPEDANCE MISMATCH EFFECTS ON SENSITIVITY

To evaluate the effects of impedance mismatch on overall system sensitivity, consider the block diagram in Figure 2. For discussion, let the 3.0 dB limiting sensitivity of the device be 6.0 µV, or -91.4 dBm. This power level must be maintained into the input of the mixer in order for the limiter to maintain constant envelope amplitude into the detector. This number is specified with a 6.0 dB insertion loss assumed for both filters. The actual power delivered to the limiter input can be calculated by summing all the losses and gains in the IF system measured in dB.
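Summing gains and losses in dB is trivial to script; the sketch below (variable names are ours) walks the Figure 2 chain from the mixer input to the limiter:

```python
def cascade_dbm(input_dbm, stages_db):
    """Power after a chain of gain (+) and loss (-) stages, all in dB."""
    return input_dbm + sum(stages_db)

# Figure 2 chain inside the device: mixer gain, first filter loss,
# IF amp gain, second filter loss.
p_limiter = cascade_dbm(-91.4, [22, -6, 39, -6])
print(f"limiter input power: {p_limiter:.1f} dBm")  # -42.4 dBm
```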
In Figure 2, assuming a -91.4 dBm input power to the MC13156, the limiter input power is:

    V(lim) = -91.4 dBm + 22 dB - 6 dB + 39 dB - 6 dB = -42.4 dBm

The measured single-ended mixer input impedance is 82 - j280 ohms. With no matching network employed, let's evaluate the effects of the mismatch in a system with a driving and transmission line impedance of 50 ohms. The load reflection coefficient is:

    R = (ZL - Zo) / (ZL + Zo) = (82 - j280 - 50) / (82 - j280 + 50)

    R = 0.91 at an angle of -18.64 degrees

To compute the mismatch loss (ML) in dB we use the following formula, where r is the magnitude of R:

    ML = -10 log (1 - r^2)

    ML = -10 log (1 - 0.83) = 7.69 dB

The 3.0 dB limiting sensitivity has been degraded by 7.69 dB! Because of the mismatch, our new 3.0 dB limiting sensitivity is -83.71 dBm, or 14.58 µV. The sensitivity in voltage is found by solving the following:

    V = sqrt(10^(dBm/10) x (0.001) x (50))

DESIGNING THE MATCH

The measured series input impedance of the mixer at 144.45 MHz is 82 - j280 ohms. This corresponds to an equivalent parallel impedance of 1038 ohms in parallel with 3.62 pF. The objective is to match this impedance to the output impedance of the SAW (surface acoustic wave) bandpass filter, which is being fed by a low noise amp (LNA) and down-converter. Let the output impedance of the filter be 50 ohms. See Figure 3a below.

[Figure 3a. Parallel Equivalent Circuit: the SAW filter (Zo = 50 ohms) drives the matching network, which sees the mixer input as 1038 ohms in parallel with -j304 ohms (3.62 pF).]

To match the equivalent parallel input impedance, we need to parallel resonate out the 3.62 pF (-j304 ohm) capacitor with a parallel inductor and then match the remaining 1038 ohm resistive portion to the 50 ohm source. This is done as follows:

    L = 1 / ([2 pi (144.45 x 10^6)]^2 (3.62 x 10^-12)) = 334 nH

Now setting the Qs equal:

    Qs = Qp = sqrt(1038/50 - 1) = 4.44

    Xs = XC = 50 (4.44) = 222 ohms

    Xp = XL = 1038 / 4.44 = 233 ohms

thus C = 4.96 pF and L = 256 nH. The complete circuit is shown in Figure 4.
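Both the mismatch-loss figure and the matching-network values above can be reproduced with a few lines of Python (a sketch of this worked example only, not a general design tool; variable names are ours):

```python
import math

Z0 = 50.0                          # source / line impedance, ohms
ZL = complex(82, -280)             # measured mixer input impedance
w = 2 * math.pi * 144.45e6         # rad/s at the RF frequency

# Mismatch loss with no matching network
gamma = (ZL - Z0) / (ZL + Z0)
ml_db = -10 * math.log10(1 - abs(gamma) ** 2)

# Series-to-parallel conversion of the mixer input
Q = abs(ZL.imag) / ZL.real         # 280/82
Rp = ZL.real * (1 + Q ** 2)        # ~1038 ohms
Xp = abs(ZL.imag) * (1 + 1 / Q ** 2)
Cin = 1 / (w * Xp)                 # ~3.62 pF

# Resonate Cin out, then L-match the remaining Rp down to Z0
L_res = 1 / (w ** 2 * Cin)         # ~334 nH parallel inductor
Qm = math.sqrt(Rp / Z0 - 1)        # ~4.44
C_match = 1 / (w * Z0 * Qm)        # series C, ~4.96 pF
L_match = (Rp / Qm) / w            # shunt L, ~256 nH
L_total = L_res * L_match / (L_res + L_match)  # combined shunt L, ~145 nH

print(f"|Gamma|={abs(gamma):.2f}, ML={ml_db:.2f} dB, "
      f"C={C_match*1e12:.2f} pF, L={L_total*1e9:.0f} nH")
```

Running with full precision rather than the app note's rounded intermediates gives a mismatch loss of about 7.67 dB and a combined shunt inductance of about 145 nH, in line with the hand calculation.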
Notice that we have two inductors in parallel. These can be replaced by a single inductor with the value (256)(334)/(256 + 334), approximately 0.145 µH. The reader is encouraged to examine the applications section of the MC13156 data sheet and note the variable inductor value of 0.1 µH used in the mixer input circuit.

[Figure 4. Parallel Equivalent Circuit: SAW filter (Zo = 50 ohms) feeding a 4.96 pF series capacitor; shunt inductors of 256 nH and 334 nH (0.144 µH combined) across the mixer input of 1038 ohms in parallel with -j304 ohms.]

IF DEVICE SUPPORT COMPONENT SELECTION

IF FILTER SELECTION IN CONVENTIONAL FM

For conventional radio (i.e., analog FM), the bandwidth of the IF section for very low distortion is easily determined. An estimation of the required bandwidth, and thus the filter bandwidth, can be obtained by using Carson's Rule:

    BW = 2(Fm + dFc)

where Fm = the maximum modulation frequency and dFc = peak carrier deviation (a function of the modulation index chosen). As an example, a 1200 Hz sine wave feeding a transmitter with a modulation index (M) of 0.5 would cause a carrier deviation dFc = Fm x M = 600 Hz. Therefore, the required IF bandwidth is 3600 Hz. To recover a square wave of the same frequency, the required bandwidth is subject to the harmonic content of the modulating waveform. To recover the square wave such that it exhibits minimal degradation in its rise and fall time requires the recovery of at least the 7th harmonic, in our case 8400 Hz. Maintaining our modulation index of 0.5 gives a new required IF bandwidth of 2 x (8400 + 4200) = 25.2 kHz! Thus an analog system requires a bandwidth greater than 20 times the fundamental modulation frequency to reproduce the digital baseband information with little distortion.

IF FILTER SELECTION IN A GFSK ENVIRONMENT

To conserve bandwidth, digital systems filter the baseband digital signal to eliminate extraneous spectrum in the data. By filtering the data at the modulator and using a comparator to detect the zero crossings of the data at the receiver, we can reconstitute the baseband data.
This comparator on integrated IF devices is called a data slicer. A theoretical analysis of the derivation of the required bandwidth of a digital IF is beyond the scope of this article. As an alternative, a set of empirically derived values, which yield acceptable bit error performance (10^-5) when used in a Gaussian Frequency Shift Keying (GFSK) system, is listed below. The values are all a function of the baseband data rate (BDR):

    Baseband data rate        = 100 kb/s NRZ (50 kHz effective)
    IF bandwidth              = BDR = 100 kHz
    Baseband filter bandwidth = 0.50 x BDR = 50 kHz
    Carrier deviation         = 0.32 x BDR = 32 kHz
    Modulation index          = 0.64

Note that the 100 kHz required bandwidth is well within the ISM 500 kHz channel bandwidth.

This device, as mentioned earlier, is termed a "split" IF part. This comes from the fact that the single difference product can be filtered in two places: after the mixer and after the IF amp. As it is sometimes difficult to find a single filter device with the right attenuation characteristics, splitting the overall attenuation requirement between two filters usually results in a more timely and economical solution.

COMPOSITE BANDWIDTH EVALUATION

Since multiple filters are being used in this split IF, it becomes necessary to evaluate the combined effects of these filters on the overall bandwidth of the IF. An approximation of the 3.0 dB IF circuit bandwidth, given the 3.0 dB bandwidths of the two filters, is:

    Composite Bandwidth(3 dB) = 1 / sqrt(1/bw1^2 + 1/bw2^2)

As mentioned earlier, the required IF bandwidth in our 100 kb/s digital application was 100 kHz. Thus the composite bandwidth of the two filters must be at least 100 kHz. Filters with a spec'ed 3.0 dB bandwidth of 150 kHz will suffice, as their combined bandwidth is approximately 106 kHz.

FILTER INSERTION LOSS EFFECTS

There are two losses associated with the IF filters: insertion loss and mismatch loss.
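The composite-bandwidth approximation above is a one-liner; a quick check (ours) with the two 150 kHz filters of the example:

```python
import math

def composite_bw_3db(bw1_hz, bw2_hz):
    """Approximate 3 dB bandwidth of two cascaded band-pass filters."""
    return 1 / math.sqrt(1 / bw1_hz ** 2 + 1 / bw2_hz ** 2)

bw = composite_bw_3db(150e3, 150e3)
print(f"composite 3 dB bandwidth: {bw / 1e3:.0f} kHz")  # 106 kHz
```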
As shown earlier with mixer mismatch loss, the effect of additional loss is to decrease the receiver's limiting sensitivity on a dB-for-dB basis. The same decrease applies to insertion loss. All filters have a specified insertion loss. As long as the filter matching parameters are met, the insertion loss specified will be valid. Insertion loss increases as the match departs from ideal. The mismatch loss can be evaluated in a similar vein as was the mismatch loss of the mixer.

NOISE BANDWIDTH EFFECT ON SENSITIVITY

Thermal noise present in any IF circuit will degrade device performance. This reduction in performance manifests itself as a higher level signal requirement (more signal power) to maintain a given signal to noise ratio on a channel. The input noise power (Np) in an IF system of bandwidth B is calculated as:

    Np = kTB

where k = 1.38 x 10^-23 (Boltzmann's constant), T = temperature in degrees Kelvin (°C + 273), and B = bandwidth of the system. In our discussion above, assuming a 20°C ambient, when the analog bandwidth was 3120 Hz, the noise power is 1.26 x 10^-17 Watts, or -139 dBm. This is the noise floor. In the digital IF example where the composite bandwidth was 106 kHz, the new noise floor is 4.28 x 10^-16 Watts, or -123 dBm.

The specified 3.0 dB limiting sensitivity on the MC13156 is -91.4 dBm; therefore, in a matched system with no other noise sources present, the noise power component in the IF bandwidth would be negligible. However, this number becomes significant when you add other system factors into the sensitivity equation. These factors, to name a few, include the receiver noise figure (that is, the amount of noise added to the system from internal amplifiers, etc.), fade margin in all systems using other than line of sight transmission, and, in digital systems, the required carrier to noise ratio for a given bit error rate using GFSK modulation.
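The kTB arithmetic above can be sketched as follows (a small helper of ours; the analog case uses the app note's 3120 Hz bandwidth):

```python
import math

K = 1.38e-23  # Boltzmann's constant, J/K

def noise_floor_dbm(bandwidth_hz, temp_c=20.0):
    """Thermal noise power kTB expressed in dBm."""
    watts = K * (temp_c + 273.0) * bandwidth_hz
    return 10 * math.log10(watts / 1e-3)

print(f"analog IF,  3120 Hz: {noise_floor_dbm(3120):.1f} dBm")   # -139.0 dBm
print(f"digital IF, 106 kHz: {noise_floor_dbm(106e3):.1f} dBm")  # -123.7 dBm
```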
When these and other factors are considered, the required signal power at the receive antenna increases dramatically. Thus the noise power needs to be evaluated in concert with these other system factors before its full effect on sensitivity can be understood. These factors can easily force the minimum received signal power requirement into the -75 dBm range.

FM DEMODULATION USING THE QUADRATURE TANK

With an overview of the quadrature detector having been given earlier, it is now possible to review the theory of operation behind the detector/quadrature tank. Recall that the carrier is split into two parts. One part goes directly to the four quadrant multiplier, while the other part is routed to the quad tank. With zero carrier deviation, the resulting IF center frequency is 10.7 MHz. As the carrier undergoes deviation (a phase shift), the IF signal will also deviate away from 10.7 MHz. The quad tank reacts to this deviation by producing an additional phase shift, phi, in the signal supplied to it. This phase shift can be as much as 90 degrees and is proportional to the carrier deviation.

Assuming a 90 degree initial phase shift, and normalizing the instantaneous IF center frequency plus deviation as sin(wt), the input signals to the multiplier are:

    Vin(a) = sin(wt)

    Vin(b) = sin(wt + pi/2 - phi) = cos(wt - phi)

Since sin(a)cos(b) = 0.5(sin(a + b) + sin(a - b)), where a = wt and b = wt - phi:

    Vout = 0.5(sin(2wt - phi) + sin(phi))

The result of this function is an output at twice the IF frequency and a low frequency component given by sin(phi). This low frequency component is the recovered baseband audio or the comparator input data.

SELECTING/DESIGNING THE QUAD TANK

The selection of the quad tank is easy once a decision is made as to what the IF center frequency will be. In our discussion, 10.7 MHz has been used, and thus our tank's center frequency will match this selection.
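The multiplier arithmetic above can be demonstrated numerically. The sketch below (ours) multiplies the direct IF path by its quad-tank copy and averages over whole carrier periods to stand in for the low-pass filter; the dc term that survives is 0.5 sin(phi):

```python
import math

f_if = 10.7e6       # IF center frequency
phi = 0.3           # quad-tank phase shift, radians (illustrative value)
n = 20000           # samples spanning exactly 100 carrier periods
T = 100 / f_if

acc = 0.0
for i in range(n):
    t = i * T / n
    v_a = math.sin(2 * math.pi * f_if * t)        # direct path
    v_b = math.cos(2 * math.pi * f_if * t - phi)  # 90-degree path via the tank
    acc += v_a * v_b

dc = acc / n  # averaging over whole periods removes the 2x IF term
print(f"{dc:.4f} vs 0.5*sin(phi) = {0.5 * math.sin(phi):.4f}")
```

Since phi tracks the instantaneous carrier deviation, this dc term is the recovered baseband signal.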
In some cases, the quad tank can be purchased as an assembly, with the cap and inductor in the same package. When the integrated tank is not available, a shielded inductor and shunt capacitor are used. The higher the unloaded inductor Q the better, as a lower loaded Q value is always attainable using a shunt resistor.

DE-QUING FOR INCREASED BANDWIDTH

The bandwidth of the quad tank is usually set so that the tank has minimal effect on the overall IF. In our discussion, the IF bandwidth requirement was found to be 100 kHz. Setting the quad tank bandwidth at 1.5 times the IF bandwidth will in most cases satisfy this requirement. Thus our quad tank bandwidth will be 150 kHz. For example, consider the following:

    Quad tank center frequency = 10.7 MHz
    Capacitor value = 150 pF

We need to solve for the inductance value which will cause resonance at 10.7 MHz:

    L = 1 / ([2 pi (10.7 x 10^6)]^2 (150 x 10^-12)) = 1.47 µH

Note: this inductor is tunable to permit adjustment of the tank's resonant frequency to match the IF center frequency.

For this example, assume that the inductor has an unloaded Q of 100. Since the parallel equivalent resistance of the inductor is Rp = Q x XL, and XL = 99 ohms, Rp = 9900 ohms. Because the quad tank Q is related to the IF circuit Q, we need to determine the value of the IF Q before proceeding:

    Q(IF) = IF Center Freq. / Composite Bandwidth = 10.7 MHz / 106 kHz = 101

Since a good approximation of the quad tank bandwidth is 1.5 times the IF bandwidth, it follows that its Q should decrease. To "de-Q" the network we install a shunt resistor whose value is determined as follows:

    Q(Tank) = IF Center Freq. / [(1.5)(IF Bandwidth)] = 10.7 MHz / 150 kHz = 71.3

    Q(Tank) = Rp' / XL, where XL = 99 ohms and Rp' is Rp || Rext

    Rp' = Q(Tank) x XL = 7.1 kohms

Using the conductance values of Rp' and Rp:

    Rext = 1 / (1.408 x 10^-4 - 1.01 x 10^-4) = 25.1 kohms

Thus installing a 25.1 kohm resistor in parallel with the inductor, with its existing Rp of 9.9 kohms, will set the Q of the tank to approximately 71.

POST DETECTION FILTERING

Probably the most widespread post detection filter is the deemphasis filter used in broadcast FM. This filter, which is used to compensate for the pre-emphasis filter in the transmitter, equalizes the noise power between the low and high frequency components in the recovered audio from the detector. This filtering is mandated by the FCC.

In both digital and analog receivers, and in particular the IF section, the recovered data that comes out of the detector is also accompanied by a 2 x IF frequency component, discussed earlier. This component is usually easily filtered as it is much higher in frequency than the recovered baseband data. In our discussion, this 21.4 MHz component can be adequately attenuated by a single pole filter whose cutoff frequency is set at 150 kHz, or 1.5 x the baseband data rate.

BIT ERROR TESTING

Figure 5 shows a block diagram of a typical bit error rate test (BERT) setup. This setup would be applicable to the evaluation of any GFSK IF receiver device. The BERT is supplied a clock from an external source in this example; however, it can optionally be used with its own internal clock. The generator produces a repeating data pattern called a frame word that is filtered before being applied to the RF modulator. The filter in this example, as mentioned earlier, removes the harmonic content of the modulating data. The MC13156 down-converts the RF signal and recovers the baseband 100 kb/s data. The recovered data is compared to the modulated frame word and errors are tabulated for display.
Error rates of 10^-4 are easily obtained with IF bandwidths in the 100-110 kHz range.

[Figure 5. MC13156 Bit Error Rate Test Setup: a bit error rate tester, clocked by an external function generator, supplies the frame-word data, which is baseband filtered and fed to the modulation input of an RF generator; the RF output drives the mixer input of the UUT (MC13156), and the data slicer output returns to the tester's receiver data input. Equipment shown: H/P model 9640B, H/P model 3790A, Wavetek model 164, or equivalents.]

SUMMARY

The use of integrated IF devices in wireless applications is so widespread that a basic understanding of the proper application of these devices is essential to both the novice and the experienced designer. It is my sincere hope that this overview, having defined some of the basic attributes of FM IF strips, will serve as a cornerstone through which a broader understanding of wireless design can be secured.

REFERENCES

Krauss, H.; Bostian, C.; and Raab, F. Solid State Radio Engineering. New York: John Wiley and Sons, Inc., 1980.

Young, A. Electronic Communication Techniques. Ohio: Charles E. Merrill Publishing, 1985.

Bowick, Chris. RF Circuit Design. Indiana: Sams, 1982.

ACKNOWLEDGEMENT

The author wishes to thank Vince Mirtich for his assistance in the preparation of this article.
How to Calculate the Per Square Foot Cost of Ceramic Tile When your heart is set on ceramic tile and you’re ready to take the plunge, it helps to get all the facts before you begin. Whether you’re installing ceramic tile on the floor, walls or counters, you’ll want an estimate of your materials and costs to plan your project. Calculate the per-square-foot cost of ceramic tile to determine roughly how much you’ll be spending in materials to install ceramic tile. Step 1 Measure the dimensions of the installation area with the tape measure. If the space has an irregular shape, divide the area into separate squares or rectangles and measure the dimensions of each separate space. Step 2 Multiply the dimensions to calculate the area of the space. For example, if the length is 16 feet and the width is 12.5 feet, multiply 16 by 12.5 to equal 200 square feet. An example of an irregular shape might be space number 1 dimensions: 4 feet by 6 feet; space number 2 dimensions: 10 feet by 12 feet; and space number 3 dimensions: 3 feet by 5 feet. Multiply 4 by 6 to equal 24 square feet, 10 by 12 to equal 120 feet and 3 by 5 to equal 15 feet. Add 24, 120 and 15 to equal 159 square feet. Step 3 Add 10 percent to the square footage for extra to cover cutting and mistakes. Add another 15 percent to the square footage if you have a pattern on the tile that necessitates special placement. Add another 15 percent to the square footage if you plan to place the tiles on the diagonal. For example, if the total area equals 200 square feet, add an additional 20 feet to cover basic cutting. Add another 30 feet for tile patterns and another 30 feet if you plan to place the tile on the diagonal. Add the additional square footage to the measured area to arrive at the total square footage for calculating the number of tiles you will need. Step 4 Determine how many tiles will cover one square foot of your area. If the tiles are 12-inch squares, one tile will cover one square foot. 
If the tiles are 4-inch squares, you’ll need nine tiles to cover one square foot. Perform this calculation by multiplying 12 by 12 to equal 144 square inches (the surface area to cover). Multiply the length times the width of the tile in inches. Divide 144 by the area of the tile in inches to find the number of tiles that you’ll need to cover one square foot. Step 5 Multiply the number of tiles for a square foot by the number of square feet in your area. For example, if the adjusted area of your installation space is 250 square feet and you know you need nine tiles per square foot, multiply 250 by 9 to equal 2,250. You will need 2,250 tiles to cover your area – you'll probably have some left over. • You’ll need additional supplies to install ceramic tile: substrate materials, mortar, grout, sealer and tools such as trowels, spacers, tile cutters and sponges. The type of substrate corresponds directly to the area of the installation space – this material comes in boards. The amount of mortar, grout and sealer you need corresponds directly with the area of the installation space and the number of tiles.
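For readers who would rather script the arithmetic, here is a small calculator (ours, not from the article) that follows the same steps; the extra percentage stands in for the article's cutting, pattern, and diagonal allowances:

```python
import math

def tiles_needed(area_sqft, tile_side_in, extra_pct=0):
    """Estimate the tile count for square tiles of side tile_side_in inches.

    extra_pct covers cutting waste, pattern matching, and diagonal layout
    (the article suggests 10% + 15% + 15% in the worst case).
    """
    adjusted = area_sqft * (1 + extra_pct / 100)
    tiles_per_sqft = 144 / (tile_side_in ** 2)
    return math.ceil(adjusted * tiles_per_sqft)

# Step 5's example: 250 sq ft adjusted area, 4-inch tiles -> 9 tiles per sq ft
print(tiles_needed(250, 4))                  # 2250
# Step 2's irregular room (24 + 120 + 15 = 159 sq ft), 12-inch tiles, 10% waste
print(tiles_needed(159, 12, extra_pct=10))   # 175
```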
Re: st: Graphing Question

From: "Data Analytics Corp." <dataanalytics@earthlink.net>
To: statalist@hsphsun2.harvard.edu
Subject: Re: st: Graphing Question
Date: Sat, 26 Jul 2008 16:19:52 -0400

Ditto on the 3D - but yet they love those also. The more the glitz, the better.

Data Analytics Corp. wrote:

I agree - but this is what business clients like.

Nod. I always wonder about that. The wretched 3D effect is the worst.

jverkuilen wrote:

I do have to ask the graph snob question: Why a pie chart? There are almost always better options that are perceived more accurately by readers, use less space, etc.

-----Original Message-----
From: "Data Analytics Corp." <dataanalytics@earthlink.net>
To: statalist@hsphsun2.harvard.edu
Sent: 7/25/2008 8:59 AM
Subject: st: Graphing Question

Good morning,

I have, what should be, a simple graphing problem. I clustered doctors into four groups. I created a table showing the proportion of doctors in each group who use a certain drug with their patients. The proportions are simply the weighted means of a binary variable which indicates whether or not the drug is prescribed by that doctor. Now I want to draw a pie chart showing those proportions. I can easily draw a pie that displays proportions which are not weighted, but how do I tell Stata to draw the pie using the weighted means? In short, I want the table and pie slices to be identical so I can give my client both.

Incidentally, how do I get the pie slice labels to be outside the pie and have the percentages next to the slice label? It seems that everything goes inside the pie and we can either get labels or percentages, but not both.

Walter R. Paczkowski, Ph.D.
Data Analytics Corp.
44 Hamilton Lane Plainsboro, NJ 08536 (V) 609-936-8999 (F) 609-936-3733 * For searches and help try: * http://www.stata.com/help.cgi?search * http://www.stata.com/support/statalist/faq * http://www.ats.ucla.edu/stat/stata/
Sixth grade

Here is a list of all of the skills students learn in sixth grade! These skills are organized into categories, and you can move your mouse over any skill name to view a sample question. To start practicing, just click on any link. IXL will track your score, and the questions will automatically increase in difficulty as you improve!
Algebra Ex.2

1. Find the value of m such that the equation x²-2(5+2m)x+3(7+10m) = 0 has equal roots.

2. If a, b are the roots of the equation x²+px+1=0 and c, d are the roots of the equation x²+qx+1=0, show that q²-p² = (a-c)(b-c)(a+d)(b+d).

3. Find the greatest and the least values of the expression for real values of x.

4. If the roots of the equation ax²+cx+c=0 be in the ratio m:n, prove that

5. If the equations x²+bx+ca=0 and x²+cx+ab=0 have a common root, show that their roots will satisfy the equation x²+ax+bc=0.

Character is who you are when no one is looking.
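As a worked example of the method these exercises call for (a solution sketch of ours, not part of the original post), problem 1 comes down to forcing the discriminant to zero:

```latex
\[
4(5+2m)^2 - 12(7+10m) = 0
\;\Longleftrightarrow\; (5+2m)^2 = 3(7+10m)
\;\Longleftrightarrow\; 2m^2 - 5m + 2 = 0,
\]
\[
\text{so}\quad m = \frac{5 \pm \sqrt{25 - 16}}{4} = 2 \ \text{or}\ \tfrac{1}{2}.
\]
```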
What is the answer for question 9)? http://postimg.org/image/dcifjafgx/

There are three types of symmetry:

1. Reflection Symmetry - A line of symmetry divides the figure in half, and the two halves are mirror images of each other. The image of the heart below is an example of reflection symmetry. If the heart is folded along the vertical line, the two halves coincide with each other.

2. Rotation Symmetry - A figure is turned around a fixed center point and aligns with the original image. The image of the star below is an example of rotation symmetry. If the star is turned, it appears to be the same shape. If the star is turned 5 times, it returns to its original position.

3. Translation Symmetry - A figure is slid left, right, up, down, or diagonally and remains the same shape. The image of the lightning bolt is an example of translation symmetry. The shape does not change, but rather is moved in space.
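To make the reflection case concrete, here is a small computational sketch (my own illustration, not part of the original answer; the function name is made up). A figure represented as a set of (x, y) points has reflection symmetry across the vertical line x = 0 exactly when reflecting every point across that line gives back the same set of points:

```ruby
require 'set'

# Reflection symmetry across the vertical line x = 0:
# reflect each point (x, y) to (-x, y) and compare the two point sets.
def mirror_symmetric?(points)
  original  = points.to_set
  reflected = points.map { |x, y| [-x, y] }.to_set
  original == reflected
end

# A heart-like outline sampled symmetrically about x = 0.
heart = [[-2, 1], [-1, 2], [0, 0], [1, 2], [2, 1], [0, -3]]
puts mirror_symmetric?(heart)                    # => true
puts mirror_symmetric?([[0, 0], [1, 1], [2, 0]]) # => false
```

Rotation and translation symmetry can be checked the same way: rotate or slide the point set and compare it with the original.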
Find a Moreland, GA Algebra Tutor

As a high school freshman, I had a lot of trouble with Algebra I. Fortunately for me, I benefited from a string of extremely talented math educators during my 4 years in high school. It would be a pleasure to pass on that sense of fun and exuberance which should accompany any math education.
1 Subject: algebra 1

...I have also taken Biochemistry while in college. Therefore, I am thoroughly familiar with the chemical aspects of animal behavior and the teaching methods involved. I have successfully tutored several high school and college students in both Biology and Animal Science (Zoology). Thus, I possess the patience, focus, scientific vocabulary and expertise to be an effective Zoology tutor.
57 Subjects: including algebra 2, algebra 1, chemistry, English

...During my Junior and Senior year, I was a Recitation Leader for the Freshman College Algebra courses. Every Tuesday and Thursday of the semester I would lead the classroom in previous homework discussions and answer any questions students had in preparation for tests. This enjoyment of helping others inspired me to continue tutoring in my spare time.
9 Subjects: including algebra 1, algebra 2, precalculus, trigonometry

...I enjoy working with students, and I find great pleasure upon seeing a smile on a child's face who has conquered a skill that he/she has had trouble with in the past. I seek to first assess the student, and then plan a course of action based upon that student's strengths and/or weaknesses in the...
19 Subjects: including algebra 2, algebra 1, reading, geometry

...West Middle School as a bilingual community liaison, I realized that I loved the energy and passion shown by the youth and thus am considering teaching at a primary school level. I am currently working as a Bilingual Parent Liaison/Title I Parent Liaison at Campbell Elementary School. I am very friendly, easy to get along with, cheerful, energetic, responsible, respectful, and professional.
16 Subjects: including algebra 2, biology, Spanish, algebra 1
What is petahertz (unit)

Petahertz is a frequency measurement unit. A petahertz (PHz) is an SI multiple (see prefix peta) of the frequency unit hertz, equal to 1.0 × 10^15 hertz.
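As a quick worked example of the definition (an illustrative sketch; the helper name is made up, not from the source page):

```ruby
# 1 petahertz = 1.0e15 hertz, so converting is a single multiplication.
def phz_to_hz(phz)
  phz * 1.0e15
end

puts phz_to_hz(1)   # => 1.0e15 Hz
puts phz_to_hz(2.5) # => 2.5e15 Hz
```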
Implementation of a network of contacts

In this article I explain the implementation of a network of contacts I wrote for a website.

A while back I had to implement a website that offered some social networking. You know, people could be contacts, could be connected if there was a path following contacts from one to the other, and disconnected otherwise. The set of people connected to some given user constituted his network. Nobody is a contact of himself.

Of course we needed to be able to say whether two people were contacts or not, and to know how many contacts a given user has. But we also needed to be able to determine whether two people were connected, to be able to restrict searches to networks, and to efficiently know their sizes.

There are websites out there offering this kind of stuff, so the first thing I did was to Google around. I didn't find anything of help though, so this article tries to fill that gap.

Representing Contacts

Since we need to keep track of contacts, there's a join table with them. As far as those definitions are concerned it doesn't matter who invited whom, so the relation of being a contact is symmetric. To simplify the SQL we encode commutativity in the database by doubling the entries in the contacts table. Thus, if u and v are contacts there are rows (u.id, v.id) and (v.id, u.id). This is not necessary technically, it is just a decision taken for convenience that may not be suitable for large data sets.

With that table, knowing whether two people are contacts becomes a simple SELECT, and knowing how many contacts some user has becomes a simple SELECT COUNT(*).
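As a minimal illustration of the doubled representation (a hypothetical in-memory sketch in Ruby, not the site's actual code; the class is made up), storing both orderings makes the contact test a plain membership lookup and the contact count a simple filter, mirroring the simple SELECT and SELECT COUNT(*) just described:

```ruby
require 'set'

# Doubled storage: if u and v are contacts, keep both [u, v] and [v, u],
# mirroring the duplicated rows in the contacts table.
class Contacts
  def initialize
    @pairs = Set.new
  end

  def add(u, v)
    @pairs << [u, v] << [v, u]
  end

  # Like: SELECT ... WHERE from_id = u AND to_id = v
  def contacts?(u, v)
    @pairs.include?([u, v])
  end

  # Like: SELECT COUNT(*) ... WHERE from_id = u
  def count(u)
    @pairs.count { |from, _to| from == u }
  end
end

c = Contacts.new
c.add(1, 2)
c.add(1, 3)
puts c.contacts?(2, 1) # => true (symmetry comes for free)
puts c.count(1)        # => 2
```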
We can also easily know whether two people who are not contacts are at distance 2 from each other, and obtain the ID of the intermediate contact if that's the case:

```sql
SELECT a.to_id
FROM contacts a, contacts b
WHERE a.from_id = #{u.id}
  AND b.to_id = #{v.id}
  AND a.to_id = b.from_id
LIMIT 1
```

Note how the duplication of rows simplifies the SQL; we can think of following contacts in some kind of direction. You may offer tests for distance 2, and perhaps distance 3, but this quickly explodes, so that needs some care. Of course you may cache stuff and provide just approximations. LinkedIn, for instance, does not give the exact size of your connections at distance 2, at least that's my interpretation of the plus sign attached to their counter.

Representing Networks

The contacts relation naturally gives a graph where u and v are contacts iff there's an edge between their nodes. Since being connected is a transitive relation, it takes a moment to realize that any two people who are connected have the same network. That's the first key observation.

Suppose w belongs to the network of u, which is connected to v. By definition there is a path P that connects w and u. Since there is a path Q connecting u and v, the path PQ connects w and v, and so w belongs to the network of v.

If you visualize this in the graph, at any given moment contact networks are exactly the connected components of the graph of contacts. Take a moment to visualize those connected components. I called them clusters in the application.

Now comes the second key observation: it is trivial to maintain a cluster field in the users table as long as the graph grows. Suppose a new user a joins the website and becomes a contact of user u. Right, assign to a the cluster of u. Suppose existing users u and v become contacts. Right, join the components if they do not already belong to the same cluster.
That's a single SQL statement:

```sql
UPDATE USERS SET CLUSTER = '#{u.cluster}' WHERE CLUSTER = '#{v.cluster}'
```

And this is very important, because social networks normally grow. It is rare that people break an existing edge, so the trade-off wins on average.

With this implementation it is trivial to know whether two people are connected: it translates to comparing their cluster fields. Computing the size of the network of some user translates to a simple SELECT COUNT(*) minus 1, and to restrict searches to the network of a given user you just put the cluster in some join.

If u breaks a relation with v then you need to pick either of them and recompute their transitive closure. That's the trade-off. Either they stay in the same cluster because they are still connected somehow, or there's a split, and that's expensive:

```ruby
# Recompute the component of self by breadth-first search over the
# contacts table, then relabel everybody found with the new cluster id.
cluster_so_far = Set.new
to_visit = Set.new([self.id])
while to_visit.size > 0
  user_id = to_visit.entries[0]
  to_visit.delete(user_id)          # without this the loop never ends
  cluster_so_far << user_id
  contact_ids = []
  rs = dbh.query("select to_id from contacts where from_id = #{user_id}")
  rs.each { |row| contact_ids << row[0] }
  to_visit += contact_ids.reject { |i| cluster_so_far.member?(i) || to_visit.member?(i) }
end

# cluster holds the identifier chosen for the recomputed component.
# Normally IN is faster than OR, and both faster than an UPDATE per identifier.
cluster_so_far.to_a.each_slice(200) do |s|
  dbh.query("update users set cluster = '#{cluster}' where id in (#{s.join(',')})")
end
```

This implementation of a network of contacts has been running in production for some time, although the website has only a small number of users, so I have no experience regarding its scalability. I figured this out on my own and it makes sense to me, but I am sure there's room for improvement and extensions.
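To make the whole scheme concrete end to end, here is a self-contained in-memory sketch (my own hypothetical Ruby, with hashes standing in for the users and contacts tables; not the production code). Growing merges cluster labels cheaply, while breaking an edge triggers the breadth-first recomputation described above:

```ruby
require 'set'

class Network
  def initialize
    @adj     = Hash.new { |h, k| h[k] = Set.new }
    @cluster = {}
    @next_id = 0
  end

  # A new user starts out as her own one-person cluster.
  def add_user(u)
    @adj[u]
    @cluster[u] ||= (@next_id += 1)
  end

  # Becoming contacts merges the two connected components:
  # everybody in v's cluster is relabeled with u's cluster.
  def add_contact(u, v)
    add_user(u)
    add_user(v)
    @adj[u] << v
    @adj[v] << u
    old = @cluster[v]
    @cluster.each_key { |w| @cluster[w] = @cluster[u] if @cluster[w] == old }
  end

  # Breaking an edge may split a component: recompute u's side by
  # breadth-first search and relabel it if v is no longer reachable.
  def remove_contact(u, v)
    @adj[u].delete(v)
    @adj[v].delete(u)
    component = reachable_from(u)
    return if component.include?(v)
    fresh = (@next_id += 1)
    component.each { |w| @cluster[w] = fresh }
  end

  def connected?(u, v)
    @cluster[u] == @cluster[v]
  end

  # Size of u's network: everybody sharing u's cluster, minus u herself.
  def network_size(u)
    @cluster.count { |_w, c| c == @cluster[u] } - 1
  end

  private

  def reachable_from(u)
    seen = Set.new
    frontier = [u]
    until frontier.empty?
      w = frontier.pop
      next if seen.include?(w)
      seen << w
      frontier.concat(@adj[w].reject { |x| seen.include?(x) })
    end
    seen
  end
end

net = Network.new
net.add_contact(:u, :v)
net.add_contact(:v, :w)
puts net.connected?(:u, :w) # => true
puts net.network_size(:u)   # => 2
net.remove_contact(:v, :w)
puts net.connected?(:u, :w) # => false
```

A database-backed version would keep the same shape, with add_contact issuing the single UPDATE shown above and remove_contact running the batched relabeling loop.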
pairwiseCIInt {pairwiseCI}

Internal functions for pairwiseCI

Description

For internal use. Two different methods for data representable as two numeric vectors (pairwiseCICont) and data representable as a matrix with two columns like cbind(successes, failures) (pairwiseCIProp). Functions that split up a data.frame according to one factor and perform all pairwise comparisons and comparisons to control among the levels of the factor by calling methods documented in pairwiseCImethodsCont and pairwiseCImethodsProp.

Usage

pairwiseCICont(formula, data, alternative="two.sided", conf.level=0.95,
    method, control=NULL, ...)
pairwiseCIProp(formula, data, alternative="two.sided", conf.level=0.95,
    control=NULL, method, ...)

Arguments

formula: A formula of the structure response ~ treatment for numerical variables, and of structure cbind(success, failure) ~ treatment for binomial variables.

data: A data.frame containing the numerical response variable and the treatment and by variable as factors. Note that for binomial data, two columns containing the number of successes and failures must be present in the data.

alternative: Character string, either "two.sided", "less" or "greater".

conf.level: The comparisonwise confidence level of the intervals, where 0.95 is the default.

method: A character string specifying the confidence interval method, one of the following options:
"Param.diff": Difference of two means, with additional argument var.equal=FALSE (default) as in t.test (stats)
"Param.ratio": Ratio of two means, with additional argument var.equal=FALSE (default) as in t.test.ratio (mratios)
"Lognorm.diff": Difference of two means, assuming a lognormal distribution
"Lognorm.ratio": Ratio of two means, assuming a lognormal distribution
"HL.diff": Exact nonparametric CI for the difference of locations based on the Hodges-Lehmann estimator
"HL.ratio": Exact nonparametric CI for the ratio of locations, based on the Hodges-Lehmann estimator
"Median.diff": Nonparametric CI for the difference of locations, based on the medians (percentile bootstrap CI)
"Median.ratio": Nonparametric CI for the ratio of locations, based on the medians (percentile bootstrap CI)
"Prop.diff": Asymptotic CI for the difference of proportions, prop.test (stats)
"Prop.ratio": Asymptotic CI for the ratio of proportions
"Prop.or": Asymptotic CI for the odds ratio
See ?pairwiseCImethods for details.

control: Character string specifying one of the levels of the treatment variable as the control group in the comparisons; default is NULL, in which case CIs for all pairwise comparisons are calculated.

...: Further arguments to be passed to the functions specified in method.

Value

These functions are for internal use in pairwiseCI. A list containing:
a numeric vector of the point estimates;
a numeric vector of lower confidence bounds;
a numeric vector of upper confidence bounds;
a character vector with the names of the comparisons.

See Also

pairwiseCI for the user-level function; pairwiseCImethodsCont and pairwiseCImethodsProp for a more detailed documentation of the implemented methods; summary.pairwiseCI for a summary function. t.test (stats), wilcox.exact (exactRankTests), prop.test (stats) for the sources of some of the CI methods; multcomp for simultaneous intervals for differences for various contrasts; mratios for simultaneous intervals for the ratio in many-to-one comparisons.

Documentation reproduced from package pairwiseCI, version 0.1-22. License: GPL-2
Find a Lawrence, MA Prealgebra Tutor

...However, my preferences are nearby places such as Lawrence, Andover, Lowell, North Reading, Malden, Danvers, Salem, Methuen and Peabody. If you think you need some help in math and physics, do not hesitate to contact me. I look forward to hearing from you soon.
10 Subjects: including prealgebra, calculus, geometry, algebra 1

...In addition, I can help you with test-taking skills. I am not a qualified expert in the vocational portions (electrical, car mechanics, or mechanical). However, I managed to pass the questions they asked me... and I can help you understand the questions and how they are asked, given a practice t...
20 Subjects: including prealgebra, English, Spanish, ASVAB

...Though my initial certificate is expired, my professional license is pending, and I have a Master's Degree in Education Administration. My numerous years experience teaching elementary school speaks to this, as does my education. I am familiar with the many phonics programs, and most recently taught using a Wilson-based reading and spelling program.
13 Subjects: including prealgebra, reading, grammar, elementary (k-6th)

...Students and their parents shouldn't hesitate to spend time on this subject because a weak foundation in algebra is the most common reason for later struggles with more advanced math coursework in calculus and beyond. Since 2003, I've tutored dozens of high school and college students in biology...
23 Subjects: including prealgebra, chemistry, writing, physics

...I studied literature as an undergraduate at MIT and Harvard and took many courses in formal linguistics, and I'm currently an Assistant Editor at the Boston Review -- a national magazine of politics, literature, and the arts -- so I'm well-trained in both the science and art of English grammar. ...
47 Subjects: including prealgebra, English, chemistry, reading
Calphysics Institute: Scientific Articles

Papers by Calphysics Authors and Grantees

Quantum Vacuum and Inertial Reaction in Nonrelativistic QED
Hiroki Sunahata, Alfonso Rueda and Bernard Haisch, arXiv:1306.6036 [physics.gen-ph] (2013).

Interaction of the Quantum Vacuum with an Accelerated Object and its Contribution to Inertia Reaction Force
Hiroki Sunahata, Ph.D. thesis, Claremont Graduate University and Calif. State Univ. Long Beach (2006).

Gravity and the Quantum Vacuum Inertia Hypothesis
Alfonso Rueda & Bernard Haisch, Annalen der Physik, Vol. 14, No. 8, 479-498 (2005).

Review of Experimental Concepts for Studying the Quantum Vacuum Fields
E. W. Davis, V. L. Teofilo, B. Haisch, H. E. Puthoff, L. J. Nickisch, A. Rueda and D. C. Cole, Space Technology and Applications International Forum (STAIF 2006), p. 1390 (2006).

Analysis of Orbital Decay Time for the Classical Hydrogen Atom Interacting with Circularly Polarized Electromagnetic Radiation
Daniel C. Cole & Yi Zou, Physical Review E, 69, 016601 (2004).

Quantum Mechanical Ground State of Hydrogen Obtained from Classical Electrodynamics
Daniel C. Cole & Yi Zou, Physics Letters A, Vol. 317, No. 1-2, pp. 14-20 (13 October 2003), quant-ph/0307154 (2003).

Update on an Electromagnetic Basis for Inertia, Gravitation, the Principle of Equivalence, Spin and Particle Mass Ratios
Bernard Haisch, Alfonso Rueda, L. J. Nickisch & Jules Mollere, in Amer. Inst. Physics Conf. Proc., Space Technology and Applications International Forum (STAIF-2003), Ed. Mohamed S. El-Genk, pp. 922-931, gr-qc/0209016 (2003).

Connectivity and the Origin of Inertia
L. J. Nickisch & Jules Molere, preprint physics/0205086 (2002).

Geometrodynamics, Inertia and the Quantum Vacuum
Bernard Haisch & Alfonso Rueda, AIAA paper 2001-3360, presented at AIAA/ASME/SAE/ASEE Joint Propulsion Conference, Salt Lake City, July 8-12 (2001).

Inertial mass and the quantum vacuum fields
Bernard Haisch, Alfonso Rueda & York Dobyns, Annalen der Physik, 10, 393-414 (2001).

Stochastic nonrelativistic approach to gravity as originating from vacuum zero-point field van der Waals forces
Daniel C. Cole, Alfonso Rueda & Konn Danley, Phys. Rev. A, 63, 054101 (2001).

Gravitation as a Super SL(2,C) Gauge Theory
R. S. Tung, Proc. 9th Marcel Grossman Conf. (2001).

Quasi-local "Conserved Quantities"
R. S. Tung, Proc. 9th Marcel Grossman Conf. (2001).

Covariant Hamiltonian Boundary Conditions in General Relativity for Spatially Bounded Spacetime Regions
Stephen Arno & Roh S. Tung (2001).

Properties of the Symplectic Structure of General Relativity for Spatially Bounded Spacetime Regions
Stephen Arno & Roh S. Tung (2001).

Chern-Simons Term for BF Theory and Gravitation as a Generalized Topological Field Theory in Four Dimensions
H. Y. Guo, Y. Ling, R.-S. Tung & Y.-Z. Thang (2002).

Hybrid Quintessence with an End or Quintessence from Branes and Large Dimensions
Edi Halyo, Stanford U.-ITP-01-28 (2002).

De Sitter Entropy and Strings
Edi Halyo, Stanford U.-ITP-01-31 (2001).

Universal Counting of Black Hole Entropy by Strings on the Stretched Horizon
Edi Halyo, Stanford U.-ITP-01-xx (2002).

Strings and the Holographic Description of Asymptotically de Sitter Spaces
Edi Halyo, Stanford U.-ITP-02-03 (2002).

Holographic Inflation
Edi Halyo, Stanford U.-ITP-01-03 (2002).

Domain Walls with Strings Attached
Renata Kallosh, Sergey Prokushkin & Marina Shmakova, JHEP, accepted (2001).

Domain Walls, Black Holes, and Supersymmetric Quantum Mechanics
Klaus Behrndt, Sergei Gukov & Marina Shmakova, Nucl. Phys. B, 601, 49 (2001).

One-loop corrections to the D3-brane action
Marina Shmakova, Phys. Rev. D, 62, 104009 (2000).

Excision of Singularities by Stringy Domain Walls
Renata Kallosh, Thomas Mohaupt & Marina Shmakova, Stanford Univ. Rept. No. SU-ITP 00/27 (2000).

Partial Renormalization of the Stress Tensor Four-Point Function in N=4 SYM and AdS/CFT
B. Eden, A. Petkou, C. Schubert & E. Sokatchev, submitted to Nucl. Phys. B (2000).

Gravitational Energy-Momentum in the Tetrad and Quadratic Spinor Representations of General Relativity
R. S. Tung & J. M. Nester, in Proc. Vigier III Symp. (Aug. 21-25, 2000, U. C. Berkeley), Kluwer Acad. Press, in press (2001).

Zero-point field induced mass vs. QED mass renormalization
Giovanni Modanese, in Proc. 18th Advanced ICFA Beam Dynamics Workshop on "Quantum Aspects of Beam Physics", Capri, Italy, October 15-20, 2000, in press (2001).

The dipolar zero-modes of Einstein action: An informal summary with some new issues
Giovanni Modanese, in Proc. Vigier III Symp. (Aug. 21-25, 2000, U. C. Berkeley), Kluwer Acad. Press, in press (2001).

Inertial mass and vacuum fluctuations in quantum field theory
Giovanni Modanese (2000).

The Paradox of Virtual Dipoles in the Einstein Action
Giovanni Modanese, Phys. Rev. D, 62, 087502 (2000).

Large "Dipolar" Vacuum Fluctuations in Quantum Gravity
Giovanni Modanese, Nucl. Phys. B, Vol. 588, 419 (2000).

The Case for Inertia as a Vacuum Effect: a Reply to Woodward & Mahood
Y. Dobyns, A. Rueda & B. Haisch, Foundations of Physics, Vol. 30, No. 1, 59 (2000). (NOTE: this paper discusses the differences between the quantum vacuum approach to inertia and a Machian approach.)

On the relation between a zero-point-field-induced inertial effect and the Einstein-de Broglie formula
B. Haisch & A. Rueda, Physics Letters A, 268, 224 (2000).

Toward an Interstellar Mission: Zeroing in on the Zero-Point-Field Inertia Resonance
B. Haisch & A. Rueda, Space Technology and Applications International Forum (STAIF-2000), Conference on Enabling Technology and Required Developments for Interstellar Missions, Amer. Inst. Phys. Conf. Publ. 504, p. 1047 (2000).

Earlier Scientific Publications on SED and Inertia

Electromagnetic Zero Point Field as Active Energy Source in the Intergalactic Medium
A. Rueda, H. Sunahata & B. Haisch, 35th AIAA/ASME/SAE/ASEE AIAA Joint Propulsion Conference, AIAA paper 99-2145 (1999).

Progress in Establishing a Connection Between the Electromagnetic Zero-Point Field and Inertia
B. Haisch & A. Rueda, Space Technology and Applications International Forum-99, American Institute of Physics Conference Proceedings 458, Mohammed S. El-Genk, ed., p. 988 (1999).

Inertial Mass Viewed as Reaction of the Vacuum to Accelerated Motion
A. Rueda & B. Haisch, Proc. NASA Breakthrough Propulsion Physics Workshop, NASA/CP-1999-208694, p. 65 (1999).

The Zero-Point Field and the NASA Challenge to Create the Space Drive
B. Haisch & A. Rueda, Proc. NASA Breakthrough Propulsion Physics Workshop, NASA/CP-1999-208694, p. 55 (1999).

Advances in the Proposed Electromagnetic Zero-Point Field Theory of Inertia
B. Haisch, A. Rueda & H. E. Puthoff, 34th AIAA/ASME/SAE/ASEE AIAA Joint Propulsion Conference, AIAA paper 98-3143 (1998).

Contribution to inertial mass by reaction of the vacuum to accelerated motion
A. Rueda & B. Haisch, Foundations of Physics, Vol. 28, No. 7, pp. 1057-1108 (1998).

Inertial mass as reaction of the vacuum to accelerated motion
A. Rueda & B. Haisch, Phys. Letters A, Vol. 240, No. 3, pp. 115-126 (1998).

An Electromagnetic Basis for Inertia and Gravitation: What are the Implications for 21st Century Physics and Technology
B. Haisch & A. Rueda, CP-420, Space Technology and Applications International Forum (M. S. El-Genk, ed.), DOE Conf. 960103, American Inst. of Physics, p. 1443 (1998).

The Zero-Point Field and Inertia
B. Haisch & A. Rueda, in "Causality and Locality in Modern Physics," G. Hunter, S. Jeffers & J.-P. Vigier (eds.), Kluwer Acad. Publ., pp. 171-178 (1998).

Electromagnetic Vacuum and Inertial Mass
A. Rueda & B. Haisch, in "Causality and Locality in Modern Physics," G. Hunter, S. Jeffers & J.-P. Vigier (eds.), Kluwer Acad. Publ., pp. 179-186 (1998).

Physics of the Zero-Point-Field: Implications for Inertia, Gravitation and Mass
B. Haisch, A. Rueda & H. E. Puthoff, Speculations in Science & Technology, Vol. 20, pp. 99-114 (1997).

Reply to Michel's "Comment on Zero-Point Fluctuations and the Cosmological Constant"
B. Haisch & A. Rueda, Astrophys. J., 488, 563 (1997).

Quantum and classical statistics of the electromagnetic zero-point-field
M. Ibison & B. Haisch, Physical Review A, 54, pp. 2737-2744 (1996).

Vacuum Zero-Point Field Pressure Instability in Astrophysical Plasmas and the Formation of Cosmic Voids
A. Rueda, B. Haisch & D. C. Cole, Astrophysical Journal, Vol. 445, pp. 7-16 (1995).

Inertia as a zero-point-field Lorentz force
B. Haisch, A. Rueda & H. E. Puthoff, Physical Review A, Vol. 49, No. 2, pp. 678-694 (1994).

Extracting energy and heat from the vacuum
D. C. Cole & H. E. Puthoff, Physical Review E, Vol. 48, No. 2, pp. 1562-1565 (1993).

Ground state of hydrogen as a zero-point-fluctuation-determined state
H. E. Puthoff, Physical Review D, Vol. 35, No. 10, pp. 3266-3269 (1987).

Extracting electrical energy from the vacuum by cohesion of charged foliated conductors
R. L. Forward, Physical Review B, Vol. 30, No. 4, pp. 1700-1702 (1984).
change of measure martingale

There's a given probability space $(\Omega,\mathcal{F},P)$ equipped with a filtration $(\mathcal{G}_t)_{t \in [0,T]}$ and a subfiltration $\mathcal{F}_t \subset \mathcal{G}_t$. Now I want to do a change of measure $\frac{dQ}{dP}|_{\mathcal{F}_t}= Z_t$, with

$Z_t=\exp\{-\int\limits_0^t\frac{\mu_t -r}{\sigma}dW_t-\frac{1}{2}\int\limits_0^t(\frac{\mu_t -r}{\sigma})^2dt\}$

where $\mu_t:=E[\mu|\mathcal{F}_t]$ is an estimator for the unknown parameter $\mu$, which has a normal prior distribution $\mathcal{N}(\mu_0,{\sigma_0}^2)$, $W$ is a Brownian motion with respect to $\mathcal{F}_t$, $r,\sigma >0$ are constants, and $t \in [0,T]$.

I also have an explicit version of $\mu_t$:

$\mu_t=\frac{\sigma^2\mu_0+{\sigma_0}^2(\mu t+\sigma V_t)}{\sigma^2 + {\sigma_0}^2t}$

where $V$ is a Brownian motion with respect to $\mathcal{G}_t$ and $\mu$ is independent of $V$.

How can I now prove that $Z_t$ is an $\mathcal{F}_t$-martingale? I only know the Novikov condition, $E[\exp\{\frac{1}{2}\int\limits_0^T(\frac{\mu_t -r}{\sigma})^2dt \}] < \infty$, but I think this condition isn't fulfilled. Can anybody help me?

Thanks in advance!
Whenever Wilson Williams had a problem, he talked to his hamster, Pip. He had had Pip for only two weeks, but already she understood him better than anybody else in his family did. "Multiplication was hard enough," Wilson told Pip on the first Saturday morning in April. "But now we have to do fractions." Pip twitched her nose. "Even worse, Mrs. Porter is giving us a huge test in three weeks." Pip blinked. "But that's not the worst thing." Pip scampered across Wilson's bedspread. Luckily Wilson had his bedroom door closed so that she couldn't escape and get lost. "Wait," Wilson said to Pip. "Don't you want to know what the worst thing is?" He scooped up Pip and held her in both hands, facing him, as he leaned back against his pillow. Her bright little eyes really did look interested. When Wilson had gotten Pip, her name had been Snuggles, but he had changed it to Pip, short for Pipsqueak. Pip's brother, Squiggles, was the classroom pet in Wilson's third-grade classroom. "The worst thing," Wilson said, "is that my parents are getting me a math tutor." Pip's eyes widened with indignation. "I know." Wilson set her down on his knee. Instead of scurrying away, she sat very still, gazing up at him sadly. But no amount of hamster sympathy could change that one terrible fact. A math tutor! That meant Wilson would go to school and do fractions, and then after school he'd go see Mrs. Tucker and do more fractions. He'd have fractions homework for Mrs. Porter and more fractions homework for Mrs. Tucker. And suppose his friends at school found out. Nobody else he knew had a math tutor. There were other kids who were bad at math. There were other kids who thought fractions were hard. There were even other kids who thought fractions were impossible. But Wilson had never heard of any other kid who had a math tutor. Wilson picked up Pip again and stroked the soft fur on the top of her little head. Pip was the only good thing left in Wilson's life. 
fractions.
From now on, the rest of his life was going to be nothing but "Now, come on," Wilson's father said at lunch. "Cheer up. The point of a math tutor is to help you." "You've been struggling so much," his mother went on. "First with multiplication, and now with fractions. A math tutor will make math come more easily to you." Wilson's little brother, Kipper, who was in kindergarten, spoke up next. "Can I have a math tutor, too? Wilson and I can share the math tutor. Like we share Pip." Wilson stopped glaring at his parents and started glaring at Kipper instead. It was annoying enough to have a little brother, but Wilson had to have a little brother who happened to love math, and who was good at it, too. To the left of Kipper's plate sat his beanbag penguin, Peck-Peck. To the right sat his beanbag alligator, Snappy. "What's a math tutor?" Kipper made Peck-Peck ask in a deep, growly voice. For some strange reason, Kipper seemed to think that was how a penguin should talk. "Does a math tutor toot on a horn?" Kipper made Snappy ask. "Toot! Toot!" Snappy's head bobbed up and down with each cheerful toot, as if he were an alligator tugboat. "Mom!" Wilson complained. "Make Kipper stop!" But instead of giving a warning look to Kipper, she gave one to Wilson. "Kipper's just playing." Then she actually leaned across the table and spoke directly to Snappy. "No, Snappy, a math tutor doesn't go 'Toot.' A math tutor helps people learn math. A math tutor has a very important job." This was too much. Who else lived in a family where adults had serious conversations with beanbag alligators? "Toot! Toot!" Snappy said again, apparently not even listening to the answer to his own stupid question. "That's enough, Kipster," their father said. Wilson was grateful to him for trying, but it was already too late. "May I be excused?" Wilson asked. "You haven't finished your grilled-cheese sandwich," his mother said. "I'm not hungry."Anymore,Wilson added to himself. 
Before Peck-Peck or Snappy could make any further brilliant remarks, Wilson pushed his chair back from the table and fled to his room to have an intelligent conversation with Pip. Wilson's best friend, Josh Hernandez, came over at two. As if Wilson's mother was sorry for not standing up for him at lunch, she took Kipper for a long bike ride so that the two older boys could play undisturbed. Wilson didn't have a video game system, and he wasn't allowed to watch TV on playdates, so he and Josh tried to build the world's fastest race car with some junk in the garage. His dad made microwave popcorn, and Wilson and Josh had a contest for throwing popcorn up into the air and catching it in their mouths. Wilson won, with seven straight mouth catches to Josh's four. He began to feel more hopeful about his life. "Do you have an idea for your science fair project yet?" Josh asked, after missing another popcorn catch. April was science fair month at Hill Elementary. "Nope." Wilson had been too busy trying to talk his parents out of making him have a math tutor. "Do you?" Wilson could tell Josh was waiting for him to ask what it was. "What is it?" "I have to warn you," Josh said. "It's not just a good idea, it's a great idea. Are you ready?" Wilson nodded. He couldn't believe Josh thought his idea was so wonderful. Usually Josh thought everything was terrible. "All right. Here it is. At what temperature does a pickle explode?" Okay, Wilson had to admit, Josh's idea was wonderful. "You could do something about popcorn," Josh offered. "Who is better at catching popcorn in their mouths, boys or girls? Or kids or grownups? Or dogs or cats? Or kids or dogs? Or--" Wilson shoved him good-naturedly. "I get the idea." "You could even thrill Mrs. Porter and use fractions," Josh suggested. "Like: cats catch half as much popcorn as dogs. Or grownups catch half as much popcorn as kids. Or--" This time Wilson shoved Josh harder. It was fine for Josh to joke about fractions. 
Josh was pretty good at math. Of course, to be fair to Josh, Josh didn't know that Wilson was about to become the only kid in the history of Hill Elementary to have a math tutor. Wilson was going to make sure that Josh never found out.
Why "Twice Service Time"?

A colleague asked me why I said to use a ratio of response time to service time of 2:1 in Sizing to Fail. Was it just magic, or was there any science behind it?

It turns out to be a range, found by observation, rather like the number of things you can keep in your mind at once: "five, plus or minus two". If you draw a graph of both utilization and response time, you'll find that at twice the service time, you're pretty close to the point about which the response time curves upwards and the utilization levels off, as shown here.

In this particular case, the program has a service time of one tenth of a second. If I run it on a uniprocessor, the most requests per second it will ever deliver is ten, as one tenth of a second goes into one second exactly ten times. In theory, the utilization should be a straight line from zero to 100% at 10 requests per second (TPS), then a horizontal line. This is the blue dotted line labeled "Bound" in the top diagram. In practice, you'll never get a square corner because some requests come in while the machine is still working on a previous request. The next request has to wait (in a queue) for the processor to be free. That's why the real utilization curve starts to level off a bit below 10 TPS and then bends gently to the right until it's horizontal.

In the lower diagram, the response time should theoretically be a horizontal line at one tenth of a second, then shoot upward as the program "hits the wall", as is shown in blue once more. At two tenths of a second, one would be just past the 10 TPS line. As with the utilization curve, the response time curve has a gentle bend instead of a square corner, and so is just a fraction before the 10 TPS line at two tenths of a second. It brackets the correct value, and so is a good engineering approximation.

A better mathematician than I could probably demonstrate that twice the service time is in the center of the inflection points of the family of curves from real systems, and is therefore a good number mathematically as well. A worse one might try to use it to recommend a 70% utilization threshold, which Michael Ley disproved in Why a 70% Utilization Threshold is just ROT.

In practice, it's a good number because it's reasonably easy to hit, by fiddling with the parameters to JMeter or LoadRunner. This is because the curve is just beginning to take off skywards at this point, and a small change in load will cause a similarly small change in response time.

As for me, I just remember that a smart mathematician at Teamquest suggested 2:1. Along with roughly four other things at any one time...
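The 2:1 rule is easy to operationalise against load-test output. Below is a minimal sketch (mine, not from the original post) that scans measured (load, response time) samples and reports the first load level at which response time reaches twice the service time; the sample numbers are invented for illustration.

```python
SERVICE_TIME = 0.1  # seconds, as in the article's example program

# invented (requests per second, measured mean response time in seconds) samples
samples = [
    (2, 0.102), (4, 0.105), (6, 0.112), (8, 0.135),
    (9, 0.170), (9.5, 0.205), (9.8, 0.300),
]

def knee_load(samples, service_time, ratio=2.0):
    """Return the lowest load level where response time >= ratio * service time."""
    for load, resp in sorted(samples):
        if resp >= ratio * service_time:
            return load
    return None  # the run never reached the threshold

print(knee_load(samples, SERVICE_TIME))  # 9.5 for the made-up data above
```

With real JMeter or LoadRunner output you would feed in the measured averages per load step; the point where this function fires is roughly where the gentle bend in the measured curve sits.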
From MinesweeperWiki

Minesweeper strategy is the art of solving games. Techniques include learning patterns and where to click first, using guessing tactics and developing efficient clicking and mouse movement.

Patterns

A pattern is a common arrangement of numbers that has only one solution. If you memorise a pattern it will reduce the amount of time you waste thinking. Before you start learning patterns, you should learn the basics. If a number is touching the same number of squares, then the squares are all mines. You can solve most Beginner games this way. Here are some examples:

- The 1 on the corner touches only 1 square, so it must be a mine.
- The 2 touches 2 squares, so they must both be mines.
- The 3 touches 3 squares, so they must all be mines.
- The 4 touches 4 squares, so they must all be mines.
- The 5 touches 5 squares, so they must all be mines.
- The 6 touches 6 squares, so they must all be mines.
- The 7 touches 7 squares, so they must all be mines.
- The 8 touches 8 squares, so they must all be mines.

There are two basic patterns which combine to make all other patterns. The first is 1-1 and the second is 1-2. Whenever you see a 1-1 pattern starting from an edge (or where an open square functions as an edge) the 3rd square over is empty. This makes sense because the first 1 touches two squares, which must contain the mine, while the second 1 also touches a third square, which must be empty. Whenever you see a 1-2 pattern the 3rd square over is always a mine. This makes sense because the first 1 touches two squares, which must contain the mine, while the 2 also touches a third square, which must contain the second mine. Here are some examples:

- There is 1 mine in the first two squares, and 1 mine in the first three squares. The 3rd square over must be empty.
- There is 1 mine in the first two squares, and 2 mines in the first three squares. The 3rd square over must be a mine.
- A 1-1 pattern starting from an opened square.
- A more complicated version of the 1-2 pattern.

The two most famous patterns are 1-2-1 and 1-2-2-1. These are so common new players should memorise them immediately. If you look carefully they are just combinations of the 1-2 pattern.

- The 1-2-1 pattern has one solution. Apply the 1-2 pattern from the left. Apply the 1-2 pattern from the right.
- The 1-2-2-1 pattern has one solution. Apply the 1-2 pattern from the left. Apply the 1-2 pattern from the right.

At first it seems like there are many patterns. If you study them, they are actually 1-2-1 and 1-2-2-1 patterns (or combinations of patterns). These in turn are variations of the basic 1-1 and 1-2 patterns. Each set of numbers reduces when you subtract known mines. Here are some final examples:

- 242 reduces to 121
- 345 reduces to 121
- 1222 reduces to 1221
- 2331 reduces to 1221
- 222 reduces to 121
- 2331 reduces to 1221
- 13231 reduces to 12121
- Reduces to 122121

Guessing

Sometimes in Minesweeper you need to guess. A typical case is a 50/50 situation where one mine is hidden in two squares. Guess quickly and move on. Thinking does not improve your chance of guessing correctly, it only wastes time. Waiting to see if you guessed right also wastes time, so assume you survived and try to keep playing. Do not delay taking forced guesses - solving the rest of the board first is a waste of time if you end up guessing the wrong square.

Many players are impatient and guess instead of solving. Do not guess unless it is necessary. The fastest way to solve 'Example A' is to click the unopened squares in a row. But if you click fast there is no time to react, so you will lose if the middle square is a mine. You have just guessed for no reason! A smart player will click the outer two squares first, which allows enough time to react to the initial click and decide if there is a mine. Opening safe squares is as important as finding mines. If you can prove a square is safe, open it instead of guessing where the mine is.
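The 1-2-1 and 1-2-2-1 deductions can be checked by brute force. The sketch below (my own simplified one-dimensional model, not from the wiki) enumerates every mine layout for a row of unknown squares lying alongside a row of revealed numbers, where number i touches unknown squares i-1, i and i+1.

```python
from itertools import product

def solutions(numbers):
    """All mine layouts consistent with a row of revealed numbers, in a
    simplified 1-D model where number i touches unknowns i-1, i, i+1."""
    n = len(numbers)
    sols = []
    for mines in product((0, 1), repeat=n):
        # check each number against the count of adjacent mines
        if all(sum(mines[max(0, i - 1):i + 2]) == numbers[i] for i in range(n)):
            sols.append(mines)
    return sols

print(solutions([1, 2, 1]))     # [(1, 0, 1)]    -> mines under the 1s, middle safe
print(solutions([1, 2, 2, 1]))  # [(0, 1, 1, 0)] -> mines under the 2s, ends safe
```

Both patterns come back with exactly one consistent layout, which is why they are safe to memorise.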
In 'Example B' there is a mine in the two yellow squares. Instead of guessing, open the safe 3rd square. This can allow you to open even more squares (marked blue) which may help you solve the original guess. If you need to guess and there are more empty squares than mines involved, it is always better to guess an empty square instead of guessing a mine. Flaggers often make the mistake of guessing the mine because they love to chord.

Sometimes you can improve the chance of guessing right. There might be an arrangement of numbers with more than one solution, and the solutions require different amounts of mines. Instead of guessing, you can solve it by flagging the rest of the board and seeing how many mines are left. You can solve 'Example D' if there is 1 mine or 3 mines, but you must guess if there are 2 mines left. If you decide to save time and guess immediately, think about the mine density of the level you are playing. For example, the solution with more mines is more likely on Expert than Intermediate. Keep in mind though that the density of each level is pretty low, so less dense solutions are more common overall.

- Example A: Do not take unnecessary guesses. Open the two outer squares first. While clicking on the 3rd square you can look at the result of the 1st square.
- Example B: Open a safe square instead of guessing where the mine is. In this case you can now also open the blue squares.
- Example C: You can not find any mines without guessing, but you can open the safe blue squares. If a blue square is a 1 you can then open the orange squares, and so on, maybe until the board is solved.
- Example D: Is there 1, 2 or 3 mines in the corner? Find out before you guess.

Perhaps you have solved part of a board and need to guess in order to reach the rest of the board. You can improve your chance of winning by clicking randomly! The average chance of hitting a mine is 0.206 on Expert and 0.156 on Intermediate and Beginner.
These odds are much better than a 50/50 guess. Remember you are more likely to get openings by clicking on edges. Your bravery is often rewarded by finding that the original 'guess' becomes solvable when approached from a different direction. Another important thing to remember is usefulness. If two solutions are equally likely, choose the one that will help most if it is correct. Sometimes one solution eliminates another guess, or gives an easier arrangement of mines. A common mistake is turning a 33/66 guess into a 50/50 guess instead of solving it. For example, if you know there is one mine in three squares, do not open the middle square.

Always choose the most likely solution. This can be very difficult to calculate! Sean Barrett has written Minesweeper Advanced Tactics as a guide. Local probability is easy to calculate but is usually wrong. For example, in the image below some squares are both 50/50 and 66/33 guesses! When all unsolved areas are considered, a simple 50/50 guess often has one square much more likely to contain the mine. A general rule of thumb is that if one square in a 50/50 situation touches a high number, it is more likely to be a mine than the other square.

A special case of probability is when guessing involves the top left corner. Minesweeper makes the 1st click safe, so if you click a mine it is moved to the top left corner (or the nearest empty square on its right). If you have a 50/50 guess and one square is the top left corner, the corner is always more likely to be the mine. When starting an Expert game the chance of a mine somewhere is 0.206 but the top left corner nearly doubles to 0.370 after the 1st click.

The following example illustrates many of the above points. It looks like there are three unavoidable 50/50 guesses, and two unavoidable 66/33 guesses. One strategy is to guess quickly and hope for the best. This option will give the best score if you survive.
A second strategy is to click a random square that does not touch any numbers. This usually has better odds of being safe and often helps solve the game.

A third strategy is to determine the number of mines remaining by flagging the rest of the board. This reduces the number of solutions. In this example there are 79 possible solutions but only 2 of them have 4 mines.

A fourth strategy is to guess in the most useful place. Clicking square I has the potential to eliminate all the other guesses! For example, if it is a 4 or 7 the game can be solved no matter how many mines remain.

A fifth strategy is to guess the most likely solution. A mine is more likely in L than K and more likely in H than D.

A final strategy is to calculate the exact probability of each square taking the entire game into consideration. This is the hardest but most accurate method. Results for this example are available. The guessing strategy you choose depends on whether you want to win more games or make time records.

Three 50/50 situations. Two 66/33 situations.

First Click

The first click in Minesweeper is always safe, but where is the best place to start? It depends whether you want quantity or quality. Your best chance of finding an opening is in a corner, then on an edge, then in the middle. Emmanuel Brunelliere (France) calculated the theoretical odds as follows:

         Beginner   Intermediate   Expert
Corner   59.54%     59.94%         49.94%
Edge     42.14%     42.61%         31.42%
Middle   25.09%     25.54%         15.69%

Tim Kostka then used his knowledge of Board Cycles to find the actual chance of finding openings on Windows Minesweeper. The first click is always safe because any mine is moved to the top left corner or nearest empty square to its right. This means the top left corner gives fewer openings than the other corners. It also means fewer openings result from the edge and middle squares touching the top left corner. Exact values for each square are on his website.
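The mine densities quoted earlier (0.206 on Expert, 0.156 on Intermediate and Beginner) follow directly from the standard board sizes. A quick check, assuming the classic 8x8 Beginner board:

```python
# (rows, cols, mines) for the standard Windows levels
LEVELS = {
    "Beginner":     (8, 8, 10),
    "Intermediate": (16, 16, 40),
    "Expert":       (16, 30, 99),
}

for name, (rows, cols, mines) in LEVELS.items():
    density = mines / (rows * cols)
    print(f"{name}: {density:.3f}")
# Beginner: 0.156, Intermediate: 0.156, Expert: 0.206
```

These are the odds of a blind random click hitting a mine, which is why a random click into unreached territory is usually a better bet than a forced 50/50.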
Most of the variation is due to low outlier values near the top left corner.

         Beginner    Intermediate   Expert
Corner   50 - 60 %   50 - 60 %      40 - 50 %
Edge     34 - 42 %   36 - 43 %      25 - 32 %
Middle   19 - 24 %   21 - 26 %      12 - 16 %

Your best chance of getting a large opening is in the middle, then on an edge, then in a corner. So far no one has calculated the theoretical advantage, but Tim collected actual results from Windows Minesweeper. The biggest openings occur in the very center of the board and decrease as you approach edges. The biggest openings from clicking on an edge are in the middle and decrease as you approach corners. This chart shows the variations in the average number of squares for each opening:

         Beginner   Intermediate   Expert
Corner   18         27             16
Edge     20 - 24    31 - 42        19 - 26
Middle   23 - 32    35 - 66        23 - 41

In summary, the best place to start depends on your preference for size or frequency. Large openings are more helpful but you will lose more games trying to find them. Small openings can be difficult but you will start more games. It is possible the benefits of either method cancel each other. The Windows Vista version of minesweeper always gives an opening on the 1st click. In this case you should always start in the middle to get the biggest opening on average. (This version is not accepted for the World Ranking.)

Probability of openings on Beginner. Average size of openings on Beginner.

Efficiency

The fewer clicks you take, the faster you will finish. Learn to be efficient. The game ends when all safe squares are open, not when all mines are flagged. Beginners often waste time flagging every mine. The only good reason to flag is to clear more squares by chording. So before you place a flag, decide if it is useful.

- There is no reason to flag this mine except to make it look pretty!
- Flagging these 8 mines is a waste of time because you can not chord on the 8.
- There is no reason to flag the pink squares, because you can not use them to chord.
Some players never flag because time spent flagging can be better used to open more squares. This style is called No Flags, or NF. Flaggers argue that flags allow you to chord and clear multiple squares at the same time. It is generally agreed that NF is more efficient near high numbers (5,6,7,8) while Flagging is more efficient near low numbers (1,2,3,4). Near a high number like 7 a NF player needs only one click to open the safe square, but a Flagger needs seven flags and a chord. Near a low number like 1 a Flagger would place one flag and chord, but a NF player would need as many as seven clicks to open the safe squares.

It is also generally agreed that NF is more efficient on low 3BV boards while Flagging is more efficient on high 3BV boards. For example, an Intermediate game with 3BV 40 has an average of one number touching each mine, while a 3BV 120 game has an average of three numbers. A perfect NF player would need 40 and 120 clicks. A very inefficient and unlucky Flagger would need 80 clicks (40 flags, 40 chords) for both games. These examples are extreme cases but show the general reasoning. In reality NF players are not perfect and waste clicks at full speed, while Flaggers never need to flag all mines or chord on every number. If a player uses only NF or Flagging, there is probably no advantage to either method. The advantage comes when both techniques are combined and the player uses the most efficient solution for each situation.

If you flag you can save time by using the 1.5 Click technique. The normal way to chord is moving the right button down and up to flag, and move both buttons down and up to chord. The 1.5 Click trick moves the right button down to flag, presses the left button down, and releases both up to chord. This eliminates one movement from every flag and chord combination. As long as the right button starts going down before the left button, the flag will get placed. The shorter the gap, the more time you save.
You can nearly double your flagging speed with this method. Here are some examples of efficient flagging:

- You could flag all 3 mines and chord on a 2, but it is more efficient to flag as shown and chord on either 1.
- You could flag both mines, but it is more efficient to flag the outer mine as shown and chord on the 1.
- Flag as shown and chord on the 1 above. There is no reason to flag the pink squares.
- There is no need to touch the other. Flag as shown and chord on the 1 beside it.

Here are some examples of efficient NF:

- Click once on the yellow square.
- Click once in the corner.
- If the yellow square is an opening, the blue squares will open. NF players look for openings.
- If the yellow square is an opening, the blue squares will open.

It is not always easy to tell if NF or Flagging is more efficient. In these next four examples NF takes fewer clicks if there is an opening, but it takes more clicks if there is no opening. name removed has made an excellent slideshow of an Intermediate game being solved efficiently, with detailed explanations.

- Example A: An efficient Flagger would place one flag and chord on the two pink squares. Then a click would be made on the yellow square. Total of 4 clicks. (If you flagged all the mines you would need 8 clicks.)
- Example B: A NF player would click on the yellow square and hope an opening clears the blue squares. Then a click would be made on the two purple squares. Total of 3 clicks with an opening or 6 clicks with no opening.
- Example C: NF players will click the yellow square. If it is an opening, at a minimum the blue squares will open. If it is not an opening, the player needs to click all coloured squares. Total clicks vary from 1 to 6.
- Example D: Flaggers will put one flag and chord on the 1. If there is an opening, the blue and green squares all open. If there is no opening, the blue squares open and the green squares need to be clicked. Total clicks vary from 2 to 5.
An important way to increase solving speed is to make fewer mouse movements. It takes time to move your mouse. New players follow their eyes with the mouse instead of only moving it intentionally toward a target. The next stage in reducing movement is learning to 'see' the solved board. This often allows you to solve at your current mouse location. For instance, if your mouse is near the 2 in 'Example A' you can flag the red square and chord instantly. This is obvious to a professional player because they have solved the adjacent squares in their head. A new player would have to move elsewhere and come back later. The red square in 'Example B' can be similarly solved. Less movement equals better scores.

- Example A: The red square is easily marked as a mine because the player is thinking several moves ahead.
- Example B: The red square is easily marked as a mine because the player is thinking several moves ahead.

Efficiency is measured by your Index of Efficiency, or IOE. This compares the number of clicks taken to the 3BV of the board. An IOE of 1.00 means you solved a 3BV 50 board in 50 clicks. It is possible to solve a game in fewer clicks than its 3BV by combining Flag and NF techniques. Both Clone and Arbiter save IOE highscores as an incentive for improvement. Arbiter further breaks IOE into Correctness (clicks that changed the board) and Throughput (the potential IOE if all clicks had been correct). It also has a Path statistic that measures mouse movement in pixels.

The best way to improve efficiency is to play slowly. Find the most efficient solution and path to each problem before pressing any buttons. You will soon see improvements while playing at full speed.

More Tips

• Do not use Questionmarks.
• Press 'F2' to start new games. Keep one finger on this button, it is faster than using the mouse.
• Avoid moving the mouse without a reason. New players often waste time moving the mouse everywhere their eyes look.
• Ignore the clock. Looking at the clock during a game wastes time, and will make you nervous if you are going fast.
• Many players listen to music while they play. This distracts them and lets them play on autopilot without nerves.
• Play in a warm room or heat your hands in hot water before you play. This increases blood flow and reaction time.
• Take short exercise breaks to increase blood flow and stimulate your brain.
• After a long playing session, it can help to change the version you are playing. This helps focus your eyes.
• If you accidentally click down on a mine, slide onto a different square before releasing the mouse button.
• Use the 1.5 Click.
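The efficiency statistics described above are simple arithmetic once 3BV is known. A hypothetical sketch (counting 3BV from a board layout is more involved and is taken as given here; the Correctness formula mirrors Arbiter's description of "clicks that changed the board"):

```python
def ioe(bv3, clicks):
    """Index of Efficiency: 3BV divided by clicks taken.
    IOE 1.00 means a 3BV-50 board solved in 50 clicks."""
    return bv3 / clicks

def correctness(productive_clicks, total_clicks):
    """Fraction of clicks that actually changed the board (Arbiter-style)."""
    return productive_clicks / total_clicks

print(ioe(50, 50))  # 1.0
print(ioe(50, 40))  # 1.25 -- beating 1.0 is possible by mixing Flag and NF
```

A run with wasted clicks shows up immediately: 50 3BV solved in 70 clicks is IOE 0.71, and if only 55 of those clicks changed the board, Correctness is about 0.79.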
Yahoo Groups
Re: [PrimeNumbers] Two large consecutive smooth numbers

--- On Sat, 3/3/12, WarrenS < > wrote:
> N=43623575184339996059537425773119366447006380455838\
> 696504055889999185302903791148393125043181272726633463298672436846034128
> and N+1, both are "smooth", i.e both factor entirely into
> primes<=9168769.

Calling Doctor Broadhurst for suggestion of the best metric by which to evaluate such records. A simple log doesn't necessarily tell the whole tale at all.

Be warned, Warren - Dr. B is sitting on a corpus of algebraic formulae such that p(x) and p(x)+1 have algebraic factorisations, which makes smoothness measurably (I was going to say immeasurably, and then realised the stupidity of such a word choice) more likely.

I'll not play this game, as I have an appointment with 21 farmers in Lithuania (otherwise known as the biggest brewery crawl yet...)

--- In Phil Carmody <thefatphil@...> wrote:
> B is sitting on a corpus of algebraic formulae such that p(x)
> and p(x)+1 have algebraic factorisations

Those don't seem to be of immediate help, Phil. Warren knows about Prouhet-Tarry-Escott and has set a puzzle that goes deeper than that. I pass.

--- In "djbroadhurst" <d.broadhurst@...> wrote:
> Warren knows about Prouhet-Tarry-Escott and
> has set a puzzle that goes deeper than that.
> I pass.

Oh well, I guess that, having been set up by Phil, I ought not to pass. A quick coding of my favourite PTE identity seemed to leave a significant computational load for Pari-GP. I am willing to let that code run for a few hours, without taking the effort to tune it.

--- In "WarrenS" <warren.wds@...> wrote:
> N=43623575184339996059537425773119366447006380455838\
> 696504055889999185302903791148393125043181272726633463298672436846034128

but might, less coyly and more helpfully, have written in the more explicit manner of

On 3/4/2012 7:59 AM,
> 1a. Two large consecutive smooth numbers
> Posted by: "WarrenS" warren.wds@... warren_d_smith31
> Date: Sat Mar 3, 2012 9:19 am ((PST))
> N=43623575184339996059537425773119366447006380455838\
> 696504055889999185302903791148393125043181272726633463298672436846034128
> and N+1, both are "smooth", i.e both factor entirely into primes<=9168769.
> Can you do better (i.e. make N larger, and the max prime smaller)?

e = limit(n --> infinity) (1 + 1/n)^n

ln((n+1)/n) = ln(1 + 1/n)

ln((1 + 1/n)^n) is, for large n, approximately equal to 1.

ln((1 + 1/n)^n) = n ln(1 + 1/n)

ln(1 + 1/n) = approximately (1/n) for large n.

Find solutions to k1 ln(2) + k2 ln(3) + k3 ln(5) + .... < 1/n for target large values of n.

Minimize k1 ln(2) + k2 ln(3) + k3 ln(5) + .... where some of the k's are required to be positive, and some negative.

k1 ln(2) + k2 ln(3) is approximately 0
k1 ln(2) = approximately - k2 ln(3)
-k1/k2 = approximately ln(3)/ln(2)

One of k1, k2 is positive. The other is negative.

It seems straightforward to calculate the coefficients k1, k2, k3, etc which minimize this sum for given sets of primes. From that minimum value of the sum for given sets of primes, calculate n = int(1/minimum) as an upper bound for the n which applies.

Kermit Rose
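Kermit's minimisation amounts to finding integer combinations k1*ln(2) + k2*ln(3) close to zero, i.e. good rational approximations to log(3)/log(2). A brute-force sketch (my illustration, not from the thread) that prints each successive improvement:

```python
import math

best = (float("inf"), None, None)  # (error, exponent of 2, exponent of 3)
for b in range(1, 40):             # exponent of 3
    a = round(b * math.log(3, 2))  # nearest matching exponent of 2
    err = abs(a * math.log(2) - b * math.log(3))
    if err < best[0]:
        best = (err, a, b)
        print(f"2^{a} vs 3^{b}: |{2**a} - {3**b}| = {abs(2**a - 3**b)}")
# the last improvement in this range is 2^19 vs 3^12 (524288 vs 531441)
```

The pairs this finds (8 vs 9, 256 vs 243, 524288 vs 531441, ...) are exactly powers of 2 and 3 that land close together, which is the sense in which a small value of |k1 ln 2 + k2 ln 3| points at nearby smooth numbers.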
> Phil To construct quadratics, x^2 - b x + c and x^2 - b x + c + 1 which are both factorisable, we look for integers b, k1, k2 such that c = k1 * (b-k1) c+1 = k2 * (b-k2) 1*3 = 3; 2 * 2 = 4 x^2 - 4 x + 3 = (x-1)*(x-3) x^2 - 4 x + 4 = (x-2)^2 2 * 4 = 8; 3 * 3 = 9 x^2 - 6 x + 8 = (x-2)*(x-4) x^2 - 6 x + 9 = (x-3)^2 which is really the same as the first example translated by 1. I will guess that maybe the only quadratic polynomial solutions are translations of the first example. Cubic polynomial solutions should be more prolific. View Source Hi kermit, The quadratic case simply corresponds to a 3-tuple of smooth {Q, Q+1, Q+2} inferring a pair at {(Q+1)^2 - 1, (Q+1)^2} We are aiming for more massive "power boosting". cf: Prouhet-Tarry-Escott problem. Warrens SMODA II document has a good list of known cases for polynomials to orders up to 10, and I believe he has found pairs using a 12-th degree From: Kermit Rose < Sent: Monday, 5 March 2012, 1:59 Subject: [PrimeNumbers] Re: Two large consecutive smooth numbers > I will guess that maybe the only quadratic polynomial solutions are > translations of the first example. Cubic polynomial solutions should be more prolific. [Non-text portions of this message have been removed] View Source Jim White & I have been trying to construct these things because they are grist for my new factoring algorithm SMODA. That was one example of our output. Although we can break (and have broken) that record, I could only make N have about 30% more digits before my current program would get very slow or self destruct. If you have new approaches, and can break that record using them, great. The PTE approach Broadhurst & Carmody were hinting at, is what I am using now for the largest ones, so that's not a new idea unless you know something I do not about it. If you are interested in donating computer time to this effort, intel last 10 years, unix variant, ok to run weeks at a time in background, then email warren.wds AT gmail.com. Thank you. 
--- In "WarrenS" <warren.wds@...> wrote:
> N=43623575184339996059537425773119366447006380455838\
> 696504055889999185302903791148393125043181272726633463298672436846034128
> and N+1, both are "smooth", i.e both factor entirely into
> primes <= 9168769.
> Can you do better (i.e. make N larger, and the max prime smaller)?

Yes. Let
67440294559676054016000 - 1;}
With N = f(1210851834572), we find that N*(N+1) is 5205793-smooth.
With N = f(1606741747790), we find that N*(N+1) is 8686687-smooth.

--- In "WarrenS" <warren.wds@...> wrote:
> Jim White & I have been trying to construct these things
> because they are grist for my new factoring algorithm SMODA.

I have not been following this in detail, but I gained the impression that Warren's original advert was far too optimistic and that now his heuristic for oracular factorization has escalated from exp(log(N)^(1/3+o(1))) to the far less encouraging exp(log(N)^(2/3+o(1))) as the time to construct a database. Is this a fair summary of the setback?
It is plausibly better under this proviso than the quadratic sieve and number field sieve, but that at present is unconfirmed.

(2) For the problem of computing the database, however, I only have exp(log(N)^(2/3+-o(1))) time algorithms. However, as we just saw, the o(1) is fairly beneficial, since we can reach at least 400-bit-long database entries on a single computer; indeed Broadhurst just found some database entries of that size in a matter of a few hours -- pretty fast turnaround! (His weren't as large as my best records, but obviously Broadhurst has already built a search code comparable to or better than mine.) In fact I hope to release a preliminary database by me & Jim White, going up to 400 bits, in a few more days to interested parties.

(3) You might say that (2) sort of demolishes (1), but that is debatable. The thing is, the database build is something that all factorers worldwide can do collaboratively and do only once. Therefore, it is not fair to judge this runtime on the same footing as the other runtime. I admit I'm not quite sure how to judge it, because it has been a fairly rare thing in the world so far, to have oracle algorithms that actually are useful.

It is conceivable that (2)'s theoretical runtime can be sped up, but at present, I haven't been able to. Furthermore, few or no experts have carefully examined either (1) or (2) yet, so it remains possible I'm crazy and the whole thing is broken. I doubt that -- I think any remaining errors are minor -- I'm just giving you fair warning.

--Warren D Smith

Hard puzzle, really hard puzzle.

We know also that, while max N might exist with ~5000 digits, its nearest p-smooth neighbour pair might well be hundreds of digits smaller. What we don't know is which pairs N are "PTE-compatible", i.e. can be found via some factoring polynomial whose roots are all p-smooth.
Any ideas on that issue would be useful.

Jim White

From: Andrey Kulsha <
Sent: Sunday, 4 March 2012, 9:53
Subject: Re: [PrimeNumbers] Two large consecutive smooth numbers

Heuristically, log(max_N) is nearly proportional to sqrt(max_prime). So, with p < 9168769, one can find N with more than 5000 digits. But that's a hard puzzle, really.

Andrey's chain puzzle is interesting. Could it be he already has found the maximum possible result for chain length 13? It's hard to see how that result can be beaten.

Some results with weights of 2.2 or more:

28246112570058, weight = 2.2053 (P = 1257251)
18911412089528, weight = 2.2077 (P = 1032307)
218381019281507, weight = 2.2410 (P = 2504167)
9288363679368, weight = 2.2480 (P = 587149)
3393509932556102, weight = 2.2536 (P = 7788997)
4532039198639948, weight = 2.2536 (P = 8856259)
4532039198639949, weight = 2.2536 (P = 8856259)
12469670986534198, weight = 2.2547 (P = 13762769)
10160468895884110, weight = 2.2592 (P = 12163843)
461881571558141, weight = 2.2615 (P = 3050603)
7909529450841510, weight = 2.2621 (P = 10669823)
211814723372355, weight = 2.2918 (P = 1782043)
430753934627814, weight = 2.4217 (P = 1103933)

Perhaps the 14-chain at N = 4532039198639948 might be a good result? What are the best known results for 14 or longer chains?

From: Andrey Kulsha <
Sent: Sunday, 4 March 2012, 9:53
Subject: Re: [PrimeNumbers] Two large consecutive smooth numbers

> Puzzle: find a chain of 13 consecutive p-smooth integers,
> starting at N, with log(N)/log(p) greater than
> log(8559986129664)/log(58393) = 2.71328

Best regards,

> Andrey's chain puzzle is interesting. Could it
> be he already has found the maximum possible
> result for chain length 13?

No, I think that log/log ratio has no limit.

> Perhaps the 14-chain at N = 4532039198639948
> might be a good result?
> What are the best known
> results for 14 or longer chains?

Brute force search yielded:

N = 505756884840 for 14-chain
N = 285377140980 for 15-chain
N = 32290958458 for 16-chain

as listed in

(there k+1 is chain length)

Best regards,

I can't use that file, I don't have XL. Any chance of a text export? e.g. comma-separated fields.
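The smooth-chain puzzles in this thread can be sketched with a few lines of brute force. This is illustrative only, for small cases; the record values above need far better methods than trial division.

```python
# Brute-force helpers for small instances of the smooth-chain puzzles:
# a number is p-smooth if all its prime factors are <= p, and the chain
# "weight" is log(N)/log(p) for a run of consecutive p-smooth integers.
import math

def is_smooth(n, p):
    """True if every prime factor of n is <= p (plain trial division)."""
    for q in range(2, p + 1):
        while n % q == 0:
            n //= q
        if n == 1:
            break
    return n == 1

def chain_weight(start, length, p):
    """log(start)/log(p) if start..start+length-1 are all p-smooth, else None."""
    if all(is_smooth(start + i, p) for i in range(length)):
        return math.log(start) / math.log(p)
    return None

# 48, 49, 50 = 2^4*3, 7^2, 2*5^2 form a 7-smooth 3-chain:
print(chain_weight(48, 3, 7))
```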
World-class irrelevant statistics: Chiefs vs. Broncos

From the FanPosts -Joel

Hello AP members and the inevitable cross-traffickers from MHR,

I put together this FanPost for a couple of reasons:

• First, I have seen several posts like this on both sites over the past few weeks which all suffered from similar mathematical errors, so I thought I would gen up my own
• Second, after my attempt to compose an inspirational video failed miserably, I figured I had better stick to the stats
• Third, this is Chiefs vs. Broncos! It's the "Irresistible Force" meets the "Immovable Object!" With a combined record of 17-1! In a game with obvious playoff implications! Why would we not over-analyze it?

Now the Disclaimer: This post can tell you almost next to nothing. Continue reading for entertainment value only. What I can assure you is that my math is the best I can make it (which may or may not be "world class" but, hey, I get to write the title, right?) On the spectrum of Lies, Damn Lies, and Statistics, these numbers fall squarely in the "statistics" category. What's the great thing about statistics? When done properly, they tell you how accurate they are. In fact, that's the whole point of proper econometrics - determining relevancy. What these numbers tell me is that their predictive power is just a little better than your chances of winning at blackjack. Of course, I wouldn't have known that without running through the exercise. Bottom line: all of these numbers are real, yet do little to actually predict anything. But why should that stop us?

So here it is in all its glory: your completely meaningless, yet highly analyzed statistical prediction for the game!

The first thing I had to assess was which stats to look at. People have argued over Defensive and Offensive Yardage vs. Split Stats vs. Points Allowed. Here's what I know: the winner is determined by the score board, thus the ultimate goal is to predict the score. But how can we tell if yards are important or not?
We look at correlation: KC’S OPP Pass Yards Rush Yards KCC Yards Allowed PTS vs. KCC DEN’S OPP Pass Yards Against Rush Yards Against Total Yards DEN PTS SCORED JAX 157 71 228 2 BAL 462 65 527 49 DAL 298 37 335 16 NYG 307 107 414 41 PHI 201 264 465 16 OAK 374 164 538 37 NYG 217 98 315 7 PHI 337 141 478 52 TEN 247 105 352 17 DAL 414 103 517 51 OAK 216 121 337 7 JAX 295 112 407 35 HOU 271 73 344 16 IND 386 64 450 33 CLE 293 57 350 17 WAS 354 107 461 45 BUF 229 241 470 13 SDC 313 84 397 28 CORR: 0.77 0.11 0.59 CORR: 0.48 0.17 0.60 This spreadsheet shows the correlation factor between each yardage stat and points scored on the two sides of the ball we care about: The Chiefs Defense and the Denver Offense. These correlation values actually increased by quite a bit after the DEN vs. SDC game Sunday Night, but they are still fairly weak. A "strong" correlation is 0.8 to 0.99. Moderate correlation is around 0.6 to 0.8. Anything below 0.6 is pretty weak for predicting results (In fact, most true statistical analysis won't report something as statistically significant until P-Values reach 95%). This tells me that I could go through the whole exercise of predicting the yardage of next Sunday's game, but even if we got the yardage perfect, we would be no better off predicting the score from that yardage than we would flipping a coin. But can correlation tell us everything? No, and the 0.6 correlation factor does show some relevance for the stat, so lets look at them side by side. Here are charts of the Chiefs Yards Allowed and Points Allowed side-by-side, and the Broncos Yards Gained and Points Scored side-by-side: via img203.imageshack.us What do we notice? First, we notice that although the individual data points don't correlate very well, the trend lines look very similar. We also see that although the trend lines are both regressing to the norm, that regression is pretty low for both teams. 
If you look at the "R^2" values that are printed on each graph, they tell you what percentage of the graph is "explained" by the trend line. For instance, in terms of KCC Points Allowed, the regression to the norm accounts for only 23.21% of the Chiefs trend from week to week. Roughly, it's like saying they are regressing a bit, but 77% of their game is not regressing. What does this mean? It means that again, looking at yardage won't tell us anything additional to what looking at points is going to tell us. It also tells us that we should expect the Chiefs to give up more points than usual, and the Broncos should score less than usual, but not by big margins, both teams are still really good - not exactly ground breaking stuff there, huh? So, enough with yardage. Win Percent Added A post at MHR claimed that the Broncos have a better chance of winning the game because, on average, they have spent more time at 100% WPA then the Chiefs. What that tells me is that when the Broncos lock up a game, they do it earlier than the Chiefs. Again, is that news given our contrasting styles of play? Not to me. So instead, I want to look at how much time during each game do the Chiefs have the odds in their favor, and compare that to the Denver numbers. Therefore, I went to pro-football-reference.com and looked at the box scores for every Chiefs and Broncos games. I then counted WPA by quarter and broke it into three categories: 1. Quarters where KCC/DEN had a "winning" probability, meaning that the WPA graph was above 50% for the entire quarter 2. Quarters that were "in-doubt," which are any quarters where the WPA line hovers right around the 50% mark or crosses from one team to the other during the quarter 3. Quarters in which KCC/DEN were "losing," in that their WPA was never above 50% during the entire quarter Here's what I found: # of Qtrs WPA: CHIEFS BRONCOS OPPONENT Winning In Doubt Losing OPPONENT Winning In Doubt Losing vs. JAC 4 BAL 2 2 vs. DAL 2 2 NYG 2 2 vs. 
PHI 3 1 OAK 4 vs. NYG 4 PHI 4 vs. TEN 2 2 DAL 1 3 vs. OAK 3 1 JAX 4 vs. HOU 3 1 IND 1 1 2 vs. CLE 4 WAS 3 1 vs. BUF 1 2 1 SDC 4 TOTAL: 26 9 1 TOTAL: 25 9 2 (You can look these numbers up yourself using this link here, and clicking on each game's "boxscore" to find the WPA graph) Yes, the Chiefs have sustained a winning probability in exactly the same number of quarters as the Broncos, and have only had 1 "losing" quarter of football which came in the Buffalo game. Denver had 2 against Indy, including the most important quarter, the fourth. What does this tell us? It says that even though the Chiefs play in a lot of close games, they have rarely been "out" of a game -- just like the Broncos. It also says that because the two teams are so alike, WPA won't help us differentiate them, or affect the score much. So, onto the points. The Score Now, several posts have tried to calculate the upcoming scores by looking at the percentage of points that each team scores above or below other teams, averaging those percentages, and then calculating points against each team. There is a problem with using percentages to do this, however. What's the problem? I'm glad you asked. The problem is that you can't average the percentages because they are different units (now, we could use the Harmonic Means to average unlike numbers, but that would favor the large numbers, which in football are the statistical outliers--meaning that rare games, like our 2 point game against JAX would skew our numbers even more than using arithmetic means, but I digress...). But Bull, you ask, point are points...how are these different units? "Points against Jacksonville" are not the same thing as "Points against Philly." Why? Mainly because they play different teams - it's the old, "who have you played?" dilemma. If every team in the NFL played every other team exactly once in every season, we wouldn't have that unit problem. 
Even so, we still couldn't use "percentages above or below mean" to define a team's overall relative strength because those units vary by team also. Seven extra points against JAX may not be the same as 7 extra points against PHI, even if their average scores are the same.

Wait, what? Why not? Look at it this way: Let's imagine that the points scored against JAX in all of their games has been: 20, 0, 20, 0, 15, 5, 15 and 5. Philly has allowed scores of 10, 10, 10, 10, 11, 9, 11 and 9. Both teams have an average of 10 points allowed. Now, the Chiefs score 17 against JAX and 17 against PHI. Which one is more "impressive?" Which one tells us more about the Chiefs? Well, the extra 7 points against the Eagles is a more "unusual" feat than the ones against the Jags.

Luckily, we can capture that "unusualness" (yes, that's a word... if you can say it, it's a word) and fix our "units" issue with one thing: standard deviations. Stnd Devs give us a normalized error distribution around a determined mean. No matter what group of numbers you have, if you find the standard deviation, around 68% of your values will fall within one Stnd Dev of the average, about 95% of your values will fall within two Stnd Devs, and about 99.7% of your values will fall within three standard deviations - regardless of the units or size of the sample. Presto, our problems are solved, it just takes more work.

To find our two teams' performance using standard deviations, we have to look at the points scored for and against each of our opponents against each of their opponents. We then find each team's mean and standard deviation for points scored and allowed, using all of the games except for the ones against KCC and DEN. If we used our own games in our calculation, it would induce an endogeneity problem, which basically means that we would be using our own performance to measure our own performance. We don't really want to do that.
Once we have the Mean and Stnd Dev for each team, we look at the Chiefs' and Broncos' performance against them, calculating how many standard deviations above or below average we scored. We can then average our standard deviations across our 9 games to determine how well our Offenses and Defenses perform as compared to the mean. All of that is here in these tables (each opponent's averages and standard deviations exclude its games against KCC and DEN):

CHIEFS:

Opponent  Avg Scored  SD Scored  Avg Allowed  SD Allowed  Opp Pts  KCC Pts  +/- DEF  +/- OFF
JAX       13.43       8.40       32.57        8.89        2        28       1.36     -0.51
DAL       27.57       6.04       20.14        10.86       16       17       1.91     -0.29
PHI       27.00       14.15      20.75        6.10        16       26       0.78     0.86
NYG       19.29       9.02       24.43        12.45       7        31       1.36     0.53
TEN       22.88       7.59       21.25        7.66        17       26       0.77     0.62
OAK       19.71       3.69       23.14        11.58       7        24       3.44     0.07
HOU       19.25       9.40       28.88        4.70        16       17       0.35     -2.53
CLE       19.38       9.91       21.75        8.12        17       23       0.24     0.15
BUF       20.67       4.37       26.22        5.69        13       23       1.75     -0.57
AVERAGE                                                            1.33     -0.18
STND DEV                                                           0.93     0.95

BRONCOS:

Opponent  Avg Scored  SD Scored  Avg Allowed  SD Allowed  Opp Pts  DEN Pts  +/- DEF  +/- OFF
JAX       13.43       8.40       32.57        8.89        19       35       -0.66    0.27
DAL       27.57       6.04       20.14        10.86       48       51       -3.38    2.84
PHI       27.00       14.15      20.75        6.10        20       52       0.49     5.12
NYG       19.29       9.02       24.43        12.45       23       41       -0.41    1.33
OAK       19.71       3.69       23.14        11.58       21       37       -0.35    1.20
BAL       20.13       5.01       17.50        6.24        27       49       -1.37    5.04
IND       22.88       9.89       20.00        10.54       39       33       -1.63    1.23
WAS       26.13       8.33       30.25        8.00        21       45       0.62     1.84
SDC       23.56       5.48       22.44        8.81        24       21.75    -0.08    -0.08
AVERAGE                                                            -0.75    2.09
STND DEV                                                           1.17     1.78

What does this tell us? First, it tells us that Denver's Offense is "better" than our Defense is "good." Our defensive Stnd Dev is 1.33, which essentially means we've performed better than around 80% of others. Denver's Offensive Stnd Dev of 2.09 means they outplay around 95% of others. But, we also see that Denver's defense is "worse" than our offense is "bad." So, which matters more? Well, we don't know that until we know what our own average and standard deviations are.
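The "+/-" columns above are ordinary z-scores computed against each opponent's own distribution. For example, the JAX row can be recomputed directly from the table's numbers:

```python
# Reproducing the Chiefs' JAX row: KC scored 28 against JAX's usual 32.57
# points allowed (SD 8.89), and held JAX to 2 against its usual 13.43
# points scored (SD 8.40). Positive +/- DEF means fewer points allowed
# than the opponent usually scores.
off_z = (28 - 32.57) / 8.89
def_z = (13.43 - 2) / 8.40
print(round(off_z, 2), round(def_z, 2))  # -0.51 1.36
```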
So, repeating the process above for Denver and KC, we get this:

          CHIEFS                    BRONCOS
          Pts Scored  Pts Allowed   Pts Scored  Pts Allowed
AVERAGE   23.89       12.33         41.22       26.44
STD DEV   4.38        5.25          8.07        9.62

Now, if we multiply our Offense Average Stnd Dev against their Defense numbers, we get a predicted score of 24.67. If we look at their Defensive Average Stnd Dev against our Offense numbers, we get a score of 27.19. These numbers are pretty close, and thus we get a pretty stable prediction of the Chiefs' score on Sunday: 25-27 points.

A Pause in the Action to Look at Sacks

Before we look at the other side of the ball (the one with the intriguing storyline, Manning vs. The Red Crush), I wanted to take a quick look at sacks. One key to this game is going to be getting pressure on Manning -- but will it happen? We lead the league in sacks; the Broncos are very good at protecting Manning. Again, we can't compare the two directly - our sacks have come against different offenses than Denver's, and their protection has been against different front 7's than KC's. I therefore normalized the sack numbers across the league using total number of sacks allowed and sack percentage.

I took each of the teams we have faced, and normalized them on a scale from 0 to 1, where 0 means you have the best pass protection and 1 means you have the worst, based on allowed sack percentage. I then used that number to compute a "sack ratio," which means that a sack against the worst team equals "1 sack," but a sack against the best team would be worth "3.2 sacks." Why 1 and 3.2? It's arbitrary, but the best team in the league had given up 10 sacks through 9 weeks and the worst had given up 32, so the 1 to 3.2 ratio matched the league extremes. With this number, we can compute the "adjusted sacks" the Chiefs have gotten against each team, and find our average number of adjusted sacks: 7.65. Comparing that to Denver's "3.2 adjusted sacks to sack" ratio, we predict the Chiefs will sack Manning 2.4 times.
Here's the work, by opponent:

Opponent  Sacks  FO ASR Rank  FO ASR  Normalized  Ratio   Adjusted Sacks
vs. JAC   6.0    22           7.8     0.4872      2.1282  12.7692
vs. DAL   3.0    7            6.0     0.2564      2.6359  7.9077
vs. PHI   6.0    26           8.8     0.6154      1.8462  11.0769
vs. NYG   3.0    20           7.6     0.4615      2.1846  6.5538
vs. TEN   3.0    18           7.4     0.4359      2.2410  6.7231
vs. OAK   9.0    32           11.8    1.0000      1.0000  9.0000
vs. HOU   5.0    9            6.3     0.2949      2.5513  12.7564
vs. CLE   1.0    23           7.9     0.5000      2.1000  2.1000
vs. BUF   0.0    17           7.3     0.4231      2.2692  0.0000
vs. DEN   -      2            4.0     0.0000      3.2000  2.3919

That's not great, and a bit below our average, but our trend suggests that our pass rush has been slowing down, as you see here:

[Chart omitted: adjusted sacks by game, with a declining trend line.]

Why did we take this little detour about sacks? Because our prediction for the Denver score is not as stable as our prediction for the Chiefs' score was. If we look at it from our Defense's perspective, we give up an average of 12.33 points a game with a Stnd Deviation around 5. Since Denver scores about 2 Stnd Devs above average, this suggests they will score 23.31 points against us. But, their offense scores an average of 41.22 with a Stnd Dev of 8, and since our defense normally holds teams to 1.33 Stnd Devs below, this predicts a Denver score of 30.49. That 23 to 30 point range is a much larger spread than the 25 to 27 for KC. Thus, math says:

Denver: 23-30 vs. Kansas City: 25-27

I figured up the predicted sacks to tell me which end of the range I should favor. Although 2.4 sacks is below our average, it is well above Denver's average of 1.4 per game (but not quite the 4 sacks Indy got in Denver's only loss). If it were four sacks, I'd go with the KC number - if it were 1.4 sacks, I'd go with the Denver number. Since it splits the difference - I'll split it, thus my prediction for Denver is 26.90 points. If we compare to our earlier prediction of 25-27, we can see it's anyone's game, but to be fair, I took our average as well.
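The sack-adjustment scheme above amounts to a linear map from normalized ASR in [0, 1] to a per-sack weight in [3.2, 1.0]. A quick sketch reproducing the table's bottom line:

```python
# Adjusted sacks = raw sacks * weight(normalized ASR); the prediction vs.
# Denver divides the average adjusted sacks by Denver's weight of 3.2.
def weight(normalized):
    return 3.2 - 2.2 * normalized

games = [  # (sacks, normalized ASR), copied from the table
    (6.0, 0.4872), (3.0, 0.2564), (6.0, 0.6154), (3.0, 0.4615), (3.0, 0.4359),
    (9.0, 1.0000), (5.0, 0.2949), (1.0, 0.5000), (0.0, 0.4231),
]
avg = sum(s * weight(n) for s, n in games) / len(games)
print(round(avg, 2), round(avg / weight(0.0), 2))  # ~7.65 average, ~2.39 predicted
```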
This gives us our:

Final Score

Thus, after all this, I mathematically predict:

DENVER - 27, KANSAS CITY - 26

Now, once again, the certainty of this prediction is so far out of statistical significance that you have a better chance of falling in a manhole tomorrow than you do betting this line, but hey, it was fun. What does my gut tell me? Honestly, I think this season could wind up a lot like 2010, where the score in Denver was a 49-29 loss, but the score in Arrowhead was a 10-6 win. Two teams playing in two games that took on completely different characters. Either way, I think this Sunday is going to be a close one. Either way, we will all know for sure on Sunday.

Hope you enjoyed this. Go Chiefs!

Edit: Author's Post-Script About the "Back-Up QB" Argument

In response to a few comments in other threads, I have decided to add a quick blurb about the inevitable critique of these stats that will claim the KC numbers are skewed because we play against back-up quarterbacks. While it's true that the Chiefs did play back-ups, so did many of the other teams that set our baseline averages and standard deviations. Most of the back-ups have played at least three games, making up a third of our sample size, which is more than enough to "unskew" any numbers -- with one exception: Buffalo. Tuel played only one game, which means the Buffalo numbers are less reliable than all of our others. We can also see that our defense's Stnd Dev against Buffalo was one of our highest at 1.75. So, in fairness, I deleted the Buffalo data as a statistical outlier to see the change in results. What happens? It changes our "final score" from 26.9-25.93 to 27.11-26.16. In other words, it makes no difference whatsoever.
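The final-score arithmetic can be reproduced from the rounded averages quoted in the post. Small differences against the article's intermediate 24.67/27.19 come from its use of unrounded z-scores:

```python
# Each team's score is predicted two ways -- its own offense applied to the
# opponent's points-allowed distribution, and the opponent's defense applied
# to its points-scored distribution -- then the two are averaged.
kc_off_z, kc_def_z = -0.18, 1.33    # Chiefs' average offensive / defensive z-scores
den_off_z, den_def_z = 2.09, -0.75  # Broncos' average offensive / defensive z-scores

kc = ((26.44 + kc_off_z * 9.62) + (23.89 - den_def_z * 4.38)) / 2
den = ((12.33 + den_off_z * 5.25) + (41.22 - kc_def_z * 8.07)) / 2
print(f"DENVER - {round(den)}, KANSAS CITY - {round(kc)}")
```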
NHL teams vary more in defense than offense

In the NHL, the spread of goals allowed is wider than the spread of goals scored. For instance, in 2010-11, the standard deviation of team goals was 22.0, but, for team goals allowed, it was 25.9. That is: there's more variation in defense than offense.

It's not the same in other sports. In baseball, since 1971, offense and defense have been almost equal -- a 102.4 run SD for runs scored, and 103.7 for runs allowed. For the NBA, it goes the other way -- offenses vary more than defenses. In 1978-79, the SD of team points per 100 possessions was 3.26, but only 2.64 for opposition points. In 2006-07, it was 3.78 to 2.64. (Those were the only two years I checked.)

(In the links above, you have to do your own SD calculations. I wish the Sports-Reference sites listed SDs along with averages, to save time for geeks like me.)

Averaging out those two NBA seasons, and the NHL from 1995-96 to 2011-12, and switching to a nicer font, and making it bold to catch your eye in case you're just skimming:

NHL: 24.8 offense, 28.7 defense
Now, you can no longer keep your opponent off the scoreboard with good pitching. You only get fielding, which gives a much narrower range. Gold Gloves will still help, but the difference between the team allowing the most and fewest runs will be much narrower. For the other way around, imagine MLB using a batting machine for the designated hitter. Now, your team's hitting skill matters only 8/9 as much as it used to, and it's a bit harder to influence your own scoring, relative to the opponents'. So, the balance between offense and defense depends on the structure of the game. In fact, I think the near-equal balance in baseball is in large part just coincidence. Another factor is how much one or a few exceptional players can influence the results. The more players you average out, the lower the SD. It's like how the distribution of heights among individuals is more extreme than the distribution of average heights among groups of individuals. It's easy to find someone who's 6-foot-4 ... but it's nearly impossible to find a neighborhood, or even a street, where the average is 6-foot-4. Imagine changing baseball so that you only have one batter instead of nine. (If the one batter gets a hit, a pinch runner comes in, so the same guy can bat again.) In that case, team offense would have a much wider range than team defense. The team with Barry Bonds would score, say, 12 runs per game, while the team with Albert Pujols would wind up with only 7 or 8. That's a much larger range than real life. On the other hand, the range of defense would be narrower, since you can't have Pedro Martinez pitching every at-bat, and not every ball can be hit to Ozzie Smith. So, here's an idea. When you look at a sport, search for the tasks at which there's wide variation in player talent, and where those tasks can be concentrated the most on certain players. If those tasks tend to impact scoring -- like in "one player bats" baseball -- the offense will have the wider spread. 
But if those tasks tend to impact more on *preventing* scoring, it's defense that will vary more.

In the NBA, where do you find that kind of task? Shooting is the most obvious -- there's wide variation in skill, and you can easily arrange to give your best players the ball more often. So, you'd expect wider team variation in offense than defense. Of course, the guy who shadows Kobe gets "concentration" on the defensive side, because he gets the most important job most often. But, intuitively, there's probably less variation in ability to defend good shooters than there is in ability to shoot against good defenders. (Of course, I say that knowing the result in advance, so that may just be benefit of hindsight. But I still think it's right.)

What about baseball? You don't have a lot of concentration on offense; the better hitters, at the top of the lineup, bat a bit more often ... but that's about it. On defense, your best pitchers get a few more innings. It seems kind of even. And, hitting and pitching seem about equal in importance (again with the benefit of hindsight). So you get roughly an even spread.

What about hockey? Where do you find a wide variation in talent, concentrated in just a few players? You've got five guys who work together on offense, but *six* guys who work together on defense -- and the sixth one has the most important position, and he's almost always the same guy, getting maybe 80 percent of the ice time. So I think that's why, in the NHL, defense has a wider spread than offense -- because of the goaltender.

The numbers support that, kind of. A while ago, I ran some estimates for the talent distribution of number-one goaltenders. I got that the SD of talent, in terms of shooting percentage, was around .008. With around 1700 shots per season, that's 13.6 goals. It turns out, that almost exactly makes up the difference between the spreads of offense and defense! The SD of goals allowed was 28.7. The SD of goaltending talent was 13.6.
If you subtract the square of 13.6 from the square of 28.7, you get ... the square of 25.3. That 25.3 is very close to the SD of goals scored, which was 24.8. You could credibly argue that if every team had the same caliber goalie, the spread in team offense would be the same as the spread in team defense.

13.6 -- SD of goals allowed attributed to starting goalie
25.2 -- SD of goals allowed attributed to the other players
24.8 -- SD of goals scored

But, as I said, there's nothing that special about a 50/50 split, so this isn't really a proof of anything. In fact ... well, I'd have thought that if you adjusted for goaltending, you would find that offense had a slightly *higher* spread than defense, not an equal one ... for roughly the same reasons as in the NBA. The difference in hockey should be lower: your superstar can't control the play in hockey as much as in the NBA, and Wayne Gretzky got only half the playing time that Michael Jordan did. But you should get a *slight* effect.

But we didn't control for everything. Even without the starting goalie, the "defense" SD still includes the effects of the backup goalie. Still another factor is that I used overall save percentage for goalie talent, without adjusting for power-play shots. That factor probably overstated the goalie SD a little bit.

The biggest factor we didn't control for is that teams vary a lot less in power play opportunities than they do in *opposition* power-play opportunities (that is, some teams take a lot of penalties). Maybe that makes up the difference. Maybe my gut feeling is still correct, that Phil Espositos are generally more important than Harry Howells.

I didn't look at football at all, so this allows the theory to make a prediction. Obviously, the QB is the most concentrated talent position on the field, and it affects offense the most. So, after adjusting for possessions and field position, the SD of points scored in the NFL should be significantly higher than the SD of points allowed.
Anyone got that data? And are there other sports than hockey where defense varies more than offense?

Labels: basketball, distribution of talent, hockey, NBA, NHL
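The goaltender decomposition earlier in the post can be checked directly: if goalie talent and everything else contribute independently, their standard deviations add in quadrature.

```python
# Non-goalie defensive spread = sqrt(28.7^2 - 13.6^2), using the post's SDs.
import math
rest = math.sqrt(28.7 ** 2 - 13.6 ** 2)
print(round(rest, 1))  # 25.3, close to the 24.8 SD of goals scored
```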
Graduation Requirements

(1) A minimum of 124 credits, at least 64 of which must be taken at Bard.
(2) A minimum of 40 credits outside the division or program of major. First-Year Seminar counts for 8 of the 40 credits.
(3) Every student who entered in Fall 1991 or later must pass a Quantitative course. If the Q course is taken outside the Division of Natural Sciences and Mathematics, at least one course in that Division must be laboratory or computational.
(4) Every student must take two semesters of First-Year Seminar. Transfer students may be exempt.
(5) Every student must be promoted to the Upper College by passing moderation.
(6) Every student must complete an acceptable senior project.
(7a) Distribution requirement for students who entered the college before Fall 1995: a minimum of 8 credits in each of the four academic divisions, and a Q course.
(7b) Distribution requirement for first-year students entering the college from Fall 1995 on: one course from each of the distribution areas (see page xi), and a Q course.
{"url":"http://inside.bard.edu/academic/courses/fall97/grad.htm","timestamp":"2014-04-20T00:41:08Z","content_type":null,"content_length":"2079","record_id":"<urn:uuid:69c9dd5e-fd8b-4c19-82e2-ee2c002e81fd>","cc-path":"CC-MAIN-2014-15/segments/1397609537804.4/warc/CC-MAIN-20140416005217-00613-ip-10-147-4-33.ec2.internal.warc.gz"}
Math Help

October 10th 2007, 01:40 PM  #1
How is this evaluated? Is it $k(k+1)$ or is it $(k+1)(k+2)$?
This is the problem if anyone wants to try it out: Prove by induction on $n$ that $n!>2^n$ for all integers $n$ such that $n \geq 4$.

$(k+1)!=(k+1) \cdot k \cdot (k-1) \cdot ... \cdot 2 \cdot 1$

Proposition: $n!>2^n$ for $n\geq 4$.
Proof: We will prove the proposition by induction on a variable $n$.
If $n=4$, we have $4!>2^4$ or $24>16$, which is true.
Assume $n!>2^n$ for $4\leq n \leq k$. Taking $n=k$, we have $k!>2^k$.
Multiplying by $k+1$, we have $k!(k+1)>2^k(k+1)$ or $(k+1)!>2^k(k+1)$.
And since $k \geq 4$, the minimum value we can have for $k+1$ is 5, so we have $2^k(k+1)>2^k \cdot 2$ or $2^k(k+1)>2^{k+1}$.
By transitivity, we have $(k+1)!>2^{k+1}$.
Hence, $n!>2^n$ for all $n \geq 4$. QED

October 10th 2007, 11:37 PM  #2
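The claim is also easy to sanity-check numerically, which can help build confidence in the induction (a short sketch over the first few cases):

```python
from math import factorial

# n! outruns 2^n from the base case n = 4 onward; check a range of cases
for n in range(4, 25):
    assert factorial(n) > 2 ** n, n

# and the inequality genuinely fails below the base case (n = 0..3)
assert not all(factorial(n) > 2 ** n for n in range(0, 4))
print("n! > 2^n holds for n = 4..24")
```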
{"url":"http://mathhelpforum.com/algebra/20330-k-1-a.html","timestamp":"2014-04-17T11:07:44Z","content_type":null,"content_length":"30850","record_id":"<urn:uuid:918662ec-8daf-4782-b512-b500fcfafc9c>","cc-path":"CC-MAIN-2014-15/segments/1397609527423.39/warc/CC-MAIN-20140416005207-00165-ip-10-147-4-33.ec2.internal.warc.gz"}
Sample Size Estimation
John Eng, M.D.
The Russell H. Morgan Department of Radiology and Radiological Science, Johns Hopkins University, Baltimore, Maryland, USA

Computer requirements for this page: Reasonably up-to-date web browser and Java versions are required. Due to its continuing evolution, the latest Java version is highly recommended. Note that Java is no longer installed by default on some new computers, so manual installation of Java may be required. Please send any bugs, questions, comments, or suggestions to All electronic mail will be

Instructions: This page calculates the sample size for four simple study designs. The equations are discussed more fully in Ref. 1, and this Web page is intended to accompany that article. To estimate a sample size, identify the equation appropriate to your study design. Enter values for the parameters under "Input Values" and click the corresponding "Calculate" button. See below for program development details, acknowledgments, and a list of references.

Program Development Details and Acknowledgments: The lavender box at the top of this page contains StatSupport, a program that supplies basic numerical functions for performing statistical calculations on the Web. StatSupport is written in the Java programming language to enable its use over the Web. Ref. 2 documents the algorithm for StatSupport's inverse normal distribution routine, which was used to calculate Eqs. 1 and 2.

References:
1. Eng J. Sample size estimation: how many individuals should be studied? Radiology 2003; 227: 309-313.
2. Acklam PJ. An algorithm for computing the inverse normal cumulative distribution function. Available at home.online.no/~pjacklam/notes/invnorm. Accessed 28 August 2002.

(Page last modified: 3/19/2014)
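As an illustration of the kind of calculation such a calculator performs, here is the standard formula for comparing two means, built on an inverse normal routine as described above (a sketch only; the exact equations used by the page are the ones in Ref. 1, and the α, power, σ, and Δ values below are made-up example inputs):

```python
import math
from statistics import NormalDist

def sample_size_two_means(alpha, power, sigma, delta):
    """Per-group n for detecting a difference delta between two means,
    given common SD sigma, two-sided significance alpha, and power."""
    z = NormalDist()                      # standard normal
    z_alpha = z.inv_cdf(1 - alpha / 2)    # critical value for the test
    z_beta = z.inv_cdf(power)             # quantile for the desired power
    n = 2 * ((z_alpha + z_beta) * sigma / delta) ** 2
    return math.ceil(n)

# Example: 5% two-sided significance, 80% power, effect equal to one SD
print(sample_size_two_means(0.05, 0.80, sigma=1.0, delta=1.0))  # 16 per group
```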
{"url":"http://www.rad.jhmi.edu/jeng/javarad/samplesize/","timestamp":"2014-04-20T06:11:16Z","content_type":null,"content_length":"11192","record_id":"<urn:uuid:dce0d881-a601-44f4-a61f-4d492ec9e104>","cc-path":"CC-MAIN-2014-15/segments/1397609538022.19/warc/CC-MAIN-20140416005218-00009-ip-10-147-4-33.ec2.internal.warc.gz"}
Kernel Algorithms We are studying the principled design of learning algorithms that are able to identify regularities in data. Subjects of research in this area are thus not only the development of improved algorithms for generic learning problems, but also the design of new algorithms for specific applications. Technically, many of the approaches in the department fall into the category of kernel algorithms. They are based on the notion of positive-definite kernels. These kernels can be shown to play three roles in learning. First, the kernel can be thought of as a (nonlinear) similarity measure that is used to compare the data (e.g., visual images). Second, the kernel can be shown to correspond to an inner product in an associated linear space with the mathematical structure of a reproducing kernel Hilbert space (RKHS). In this way, the kernel induces a linear representation of the data. Third, it can be shown that a large class of kernel algorithms leads to solutions that can be expanded in terms of kernel functions centered on the training data. In this sense, the kernel also determines the function class used for learning, i.e., the hypotheses that are used in examining the dataset for regularities. All three issues lie at the heart of empirical inference, rendering kernel methods an elegant mathematical framework for studying learning and designing learning algorithms. During their relatively short history in machine learning, kernel methods have already undergone several conceptual changes. Initially, kernels were viewed as a way of “kernelizing” algorithms, i.e., constructing nonlinear variants of existing linear algorithms. The next step was the use of kernels to induce a linear representation of data that did not come from a vector space to begin with, thus allowing the use of a number of linear methods for data types such as strings or graphs. The third change happened only recently. 
It was observed that kernels sometimes let us rewrite optimization problems over large classes of nonlinear functions as linear problems in RKHSs. In a statistical context, this usually amounts to transforming certain higher order statistics into first-order (linear) ones, and handling them using convenient tools from linear algebra and functional analysis. An example that we have co-developed is a class of methods for distribution embeddings in RKHSs. As well as providing a measure of distance on probability distributions (namely, the RKHS distance between embeddings), these mappings directly imply a measure of dependence between random variables, consisting of the RKHS distance between the embedding of the joint distribution and that of the product of marginals. When the embeddings are computed on the basis of finite samples, a question of particular interest is whether the distance between embeddings is large enough to be statistically significant (and thus, whether the distributions are deemed to be different on the basis of the observations). We have provided means for verifying this significance, and associated nonparametric hypothesis tests for homogeneity, independence, and conditional independence. The behavior and performance of any kernel algorithm using these distribution embeddings hinges upon properties of the kernel used. This led us to a detailed study of the class of kernels that induce injective RKHS embeddings, i.e., embeddings that do not lose information and uniquely characterize all probability distributions from a given set. Kernel dependence measures based on distribution embeddings may be used not only to detect whether significant dependence exists, but can also be optimized to reveal underlying structure in the data. Thus, data can be clustered more effectively when the resulting clusters are given structure using side information, by maximizing a kernel dependence measure with respect to this side information. 
Such information may take the form of additional descriptions of the data, such as captions for images, or might involve imposing a structure on the clusters using prior knowledge about their mutual relations. In the first case (additional descriptions of the data), we have developed a novel clustering algorithm, Correlational Spectral Clustering, which uses the kernel canonical correlations between the data and the side information to improve spectral clustering. In the example considered, images were clustered more consistently with human labeling when side information in the associated descriptions was used to guide the clustering. In the second case (prior cluster structure known), the clusters were assumed to follow a tree structure, leading to the Numerical Taxonomy Clustering algorithm. A second set of projects is concerned with use of non-standard inference principles in machine learning. We have already in the past devoted substantial efforts to such inference principles, including in particular semi-supervised learning. We investigated the use of local inference in several learning problems. We have also contributed to algorithms implementing a novel approach for regularization termed the “Universum,” and linked it up with known methods of learning and data analysis. In another focus started during the previous reporting period, we have continued and expanded our work on structured-output learning, dealing with learning algorithms that generalize classification and regression to a situation where the goal is to learn an input-output mapping between arbitrary sets of objects. Canonical examples of an output in this framework are sequences, trees and strings. For such problems, the large size of the output space renders standard learning methods ineffective (e.g., the size of the output space for sequences scales exponentially with the length of the sequences). 
A series of papers has developed new supervised and semi-supervised learning methods that combine advantages of kernel methods with the efficiency of dynamic programming algorithms. In recent work, we have provided a unifying analysis of existing supervised inference methods for structured prediction using convex duality techniques. Our analysis has shown that these methods can be cast as duals of various divergence minimization problems with respect to structured data constraints. By extending these constraints to employ unlabeled data, we developed a class of semi-supervised learning methods for structured output learning. Another direction we pursued in this framework is extending supervised learning methods to complex tasks, which consist of multiple structured output problems. This is particularly challenging since exact inference of such problems is intractable. We used multitask learning techniques and devised an efficient approximation algorithm to learn multiple structured output predictors jointly. The two last projects in the area of kernel algorithms draw their motivation from the needs of practical problems that we encountered in application domains. In kernel machines, the solution is usually written as an expansion in terms of an a-priori chosen kernel function. The choice of the kernel function is nontrivial yet important in practice. Sometimes a linear combination from a large library of kernels works best, or a multi-scale approach, with aggressive sparsification to keep the runtime complexity under control. This last work improves upon our earlier work on sparsification of kernel machines as it turns out that with the additional degree of freedom introduced by the multi-scale approach, a higher degree of sparsification can be achieved.
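For finite samples, the RKHS distance between distribution embeddings discussed above is just the MMD statistic. A minimal sketch in plain Python (Gaussian kernel; the toy data and bandwidth are illustrative assumptions, not taken from any of the studies mentioned):

```python
import math

def gauss(x, y, bw=1.0):
    # Gaussian (RBF) kernel on scalars
    return math.exp(-((x - y) ** 2) / (2.0 * bw * bw))

def mmd2(xs, ys, bw=1.0):
    """Biased estimate of the squared RKHS distance between the mean
    embeddings of two samples (the MMD^2 statistic)."""
    kxx = sum(gauss(a, b, bw) for a in xs for b in xs) / len(xs) ** 2
    kyy = sum(gauss(a, b, bw) for a in ys for b in ys) / len(ys) ** 2
    kxy = sum(gauss(a, b, bw) for a in xs for b in ys) / (len(xs) * len(ys))
    return kxx + kyy - 2.0 * kxy

sample = [0.1, 0.5, 0.9, 1.3]
shifted = [x + 3.0 for x in sample]
print(mmd2(sample, sample))   # 0.0: identical samples embed identically
print(mmd2(sample, shifted))  # clearly positive: the distributions differ
```

A two-sample homogeneity test of the kind described above would compare the second value against a null distribution, e.g. obtained by permuting the pooled sample.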
{"url":"http://www.kyb.tuebingen.mpg.de/de/forschung/abt/bs/kernel-algorithms.html","timestamp":"2014-04-17T00:48:30Z","content_type":null,"content_length":"26557","record_id":"<urn:uuid:86cb5c4e-8e21-4f61-af6f-05b41a80284e>","cc-path":"CC-MAIN-2014-15/segments/1398223210034.18/warc/CC-MAIN-20140423032010-00122-ip-10-147-4-33.ec2.internal.warc.gz"}
Gonzaga University, Spokane Washington
Math Circle

WHAT DOES IT COST?
There is no fee for student participation! The Spokane Math Circle will be sponsored by MSRI, the Mathematical Sciences Research Institute. Registration is requested.

WHO IS IT FOR?
The current target group is mathematically motivated students from grades 5 to 8, with an eye on expanding through 12th grade. If your student falls in this category, why not give Spokane Math Circle a try?

WHY SHOULD I JOIN?
Each individual joins a math circle for different reasons. For some, it's a way to be challenged beyond the school setting. For others, it's a way to get training to do better at math competitions, like the AMC or AIME. And for another handful of people, it is for the pure joy of learning mathematics and extending their capabilities. Plus, it'll be a lot of fun! If you truly enjoy math, you should give the Spokane Math Circle a try! The Math Circle is new to Spokane, the first to emerge in the nearest 200 miles. Why not take this opportunity to be part of something really special?

WHERE IS IT?
Right now, the Spokane Math Circle is scheduled to meet on the beautiful and conveniently located Gonzaga campus (Herak Math and Engineering Building [map], Room 245 [map]). Coming to the Spokane Math Circle would also give students a great opportunity to experience life on a college campus!

WHEN DOES IT START?
During the 2012-2013 academic year, meetings have been held on the third Saturday of the month from 4:30 PM - 6:00 PM. Dates for the 2013-2014 academic year are pending. Some days will cover competition math questions: previous AMC tests, previous Math is Cool tests, etc. These days will cover a broad range of mathematics. Other days will be more topic based. Note that the math circle will cover these in more depth than in school. Some planned topics are:
• Distance/Rate/Time
• Counting
• Probability
• Geometry
• Divisibility
• Graph Theory
• Sequences
And much much more!

DR. SHANNON OVERBAY, Gonzaga University Mathematics Department
Dr. Overbay is a Professor of Mathematics at Gonzaga University. She received her Ph.D. from Colorado State University in 1998. Her research interests include graph theory, ring theory, number theory, and recreational mathematics.

DR. LOGAN AXON, Gonzaga University Mathematics Department
Dr. Axon is an Assistant Professor of Mathematics at Gonzaga University. He recently earned his Ph.D. from the University of Notre Dame advised by Peter Cholak. His research has been in computability theory, specifically algorithmic randomness.

DG KIM, Math Circle Coordinator and Central Valley High School Student
DG Kim is a mathematically motivated student in the Spokane area. He started the Spokane Math Circle to provide a unique learning experience to the youth of the city of Spokane.

HOW CAN I CONTACT SMC?
Please register on the Spokane Math Circle website at:
Find us on facebook:
Contact the Coordinator:
{"url":"http://www.gonzaga.edu/Academics/Colleges-and-Schools/College-of-Arts-and-Sciences/Majors-Programs/Mathematics/MathCircle.asp","timestamp":"2014-04-19T07:33:56Z","content_type":null,"content_length":"17580","record_id":"<urn:uuid:1ba01d83-2b20-4b20-8ab9-f91b0981fb19>","cc-path":"CC-MAIN-2014-15/segments/1397609536300.49/warc/CC-MAIN-20140416005216-00183-ip-10-147-4-33.ec2.internal.warc.gz"}
Adaptive error estimation in linearized ocean general circulation models Data assimilation methods, such as the Kalman filter, are routinely used in oceanography. The statistics of the model and measurement errors need to be specified a priori. In this study we address the problem of estimating model and measurement error statistics from observations. We start by testing the Myers and Tapley (1976, MT) method of adaptive error estimation with low-dimensional models. We then apply the MT method in the North Pacific (5°-60° N, 132°-252° E) to TOPEX/POSEIDON sea level anomaly data, acoustic tomography data from the ATOC project, and the MIT General Circulation Model (GCM). A reduced state linear model that describes large scale internal (baroclinic) error dynamics is used. The MT method, closely related to the maximum likelihood methods of Belanger (1974) and Dee (1995), is shown to be sensitive to the initial guess for the error statistics and the type of observations. It does not provide information about the uncertainty of the estimates nor does it provide information about which structures of the error statistics can be estimated and which cannot. A new off-line approach is developed, the covariance matching approach (CMA), where covariance matrices of model-data residuals are "matched" to their theoretical expectations using familiar least squares methods. This method uses observations directly instead of the innovations sequence and is shown to be related to the MT method and the method of Fu et al. (1993). The CMA is both a powerful diagnostic tool for addressing theoretical questions and an efficient estimator for real data assimilation studies. It can be extended to estimate other statistics of the errors, trends, annual cycles, etc. Twin experiments using the same linearized MIT GCM suggest that altimetric data are ill-suited to the estimation of internal GCM errors, but that such estimates can in theory be obtained using acoustic data. 
After removal of trends and annual cycles, the low frequency/wavenumber (periods > 2 months, wavelengths > 16°) TOPEX/POSEIDON sea level anomaly is of the order 6 cm². The GCM explains about 40% of that variance. By covariance matching, it is estimated that 60% of the GCM-TOPEX/POSEIDON residual variance is consistent with the reduced state linear model. The CMA is then applied to TOPEX/POSEIDON sea level anomaly data and a linearization of a global GFDL GCM. The linearization, done in Fukumori et al. (1999), uses two vertical modes, the barotropic and the first baroclinic. We show that the CMA method can be used with a global model and a global data set, and that the estimates of the error statistics are robust. We show that the fraction of the GCM-TOPEX/POSEIDON residual variance explained by the model error is larger than that derived in Fukumori et al. (1999) with the method of Fu et al. (1993). Most of the model error is explained by the barotropic mode. However, we find that the impact of the change in the error statistics on the data assimilation estimates is very small. This is explained by the large representation error, i.e. the dominance of the mesoscale eddies in the T/P signal, which are not part of the 20 by 10 GCM. Therefore, the impact of the observations on the assimilation is very small even after the adjustment of the error statistics. This work demonstrates that simultaneous estimation of the model and measurement error statistics for data assimilation with global ocean data sets and linearized GCMs is possible. However, the error covariance estimation problem is in general highly underdetermined, much more so than the state estimation problem. In other words, there exists a very large number of statistical models that can be made consistent with the available data. Therefore, methods for obtaining quantitative error estimates, powerful though they may be, cannot replace physical insight.
Used in the right context, as a tool for guiding the choice of a small number of model error parameters, covariance matching can be a useful addition to the repertory of tools available to oceanographers.
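The "matching" step of the CMA can be sketched as an ordinary least-squares problem: write the sample covariance of the model-data residuals as a linear combination of known error-structure matrices, and solve for the unknown error variances as the regression coefficients. A toy illustration (the 3x3 structure matrices, true variances, and noise level below are all made up):

```python
import random

random.seed(0)

# Two known covariance "structures" (e.g., a model-error pattern and a
# measurement-error pattern), here simple 3x3 matrices.
A = [[1.0, 0.5, 0.0], [0.5, 1.0, 0.5], [0.0, 0.5, 1.0]]
B = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]]
true_a, true_b = 2.0, 0.5

# "Observed" residual covariance: the true combination plus a little noise
C = [[true_a * A[i][j] + true_b * B[i][j] + random.gauss(0, 0.01)
      for j in range(3)] for i in range(3)]

# Vectorize the matrices and solve the 2-parameter least-squares problem
# via the normal equations for [a, b].
va = [A[i][j] for i in range(3) for j in range(3)]
vb = [B[i][j] for i in range(3) for j in range(3)]
vc = [C[i][j] for i in range(3) for j in range(3)]
saa = sum(x * x for x in va)
sbb = sum(x * x for x in vb)
sab = sum(x * y for x, y in zip(va, vb))
sac = sum(x * y for x, y in zip(va, vc))
sbc = sum(x * y for x, y in zip(vb, vc))
det = saa * sbb - sab * sab
a_hat = (sbb * sac - sab * sbc) / det
b_hat = (saa * sbc - sab * sac) / det
print(round(a_hat, 2), round(b_hat, 2))  # close to the true (2.0, 0.5)
```

In the thesis setting the structure matrices come from the linearized GCM dynamics rather than being specified by hand, but the estimation step has this same least-squares form.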
{"url":"https://darchive.mblwhoilibrary.org/handle/1912/4743","timestamp":"2014-04-16T21:56:56Z","content_type":null,"content_length":"33667","record_id":"<urn:uuid:7de4a2d1-351c-4c29-a786-fd5abaf01f9e>","cc-path":"CC-MAIN-2014-15/segments/1397609525991.2/warc/CC-MAIN-20140416005205-00641-ip-10-147-4-33.ec2.internal.warc.gz"}
On Connectionist Models
Results 1 - 10 of 25

THEORETICAL COMPUTER SCIENCE, 1994 -- Cited by 87 (8 self)
We pursue a particular approach to analog computation, based on dynamical systems of the type used in neural networks research. Our systems have a fixed structure, invariant in time, corresponding to an unchanging number of "neurons". If allowed exponential time for computation, they turn out to have unbounded power. However, under polynomial-time constraints there are limits on their capabilities, though being more powerful than Turing Machines. (A similar but more restricted model was shown to be polynomial-time equivalent to classical digital computation in the previous work [20].) Moreover, there is a precise correspondence between nets and standard non-uniform circuits with equivalent resources, and as a consequence one has lower bound constraints on what they can compute. This relationship is perhaps surprising since our analog devices do not change in any manner with input size. We note that these networks are not likely to solve polynomially NP-hard problems, as the equality ...

1994 -- Cited by 22 (6 self)
We survey some of the central results in the complexity theory of discrete neural networks, with pointers to the literature. Our main emphasis is on the computational power of various acyclic and cyclic network models, but we also discuss briefly the complexity aspects of synthesizing networks from examples of their behavior. CR Classification: F.1.1 [Computation by Abstract Devices]: Models of Computation---neural networks, circuits; F.1.3 [Computation by Abstract Devices]: Complexity Classes---complexity hierarchies. Key words: Neural networks, computational complexity, threshold circuits, associative memory. 1 Introduction. The currently again very active field of computation by "neural" networks has opened up a wealth of fascinating research topics in the computational complexity analysis of the models considered. While much of the general appeal of the field stems not so much from new computational possibilities, but from the possibility of "learning", or synthesizing networks...

In Proceedings of the 21st Conference on Computational Complexity (CCC), 2006 -- Cited by 20 (6 self)
Given any linear threshold function f on n Boolean variables, we construct a linear threshold function g which disagrees with f on at most an ɛ fraction of inputs and has integer weights each of magnitude at most √n · 2^(Õ(1/ɛ^2)). We show that the construction is optimal in terms of its dependence on n by proving a lower bound of Ω(√n) on the weights required to approximate a particular linear threshold function. We give two applications. The first is a deterministic algorithm for approximately counting the fraction of satisfying assignments to an instance of the zero-one knapsack problem to within an additive ±ɛ. The algorithm runs in time polynomial in n (but exponential in 1/ɛ^2). In our second application, we show that any linear threshold function f is specified to within error ɛ by estimates of its Chow parameters (degree 0 and 1 Fourier coefficients) which are accurate to within an additive ±1/(n · 2^(Õ(1/ɛ^2))). This is the first such accuracy bound which is inverse polynomial in n (previous work of Goldberg [12] gave a 1/quasipoly(n) bound), and gives the first polynomial bound (in terms of n) on the number of examples required for learning linear threshold functions in the "restricted focus of attention" framework.

1994 -- Cited by 18 (4 self)
We survey some aspects of the computational complexity theory of discrete-time and discrete-state Hopfield networks. The emphasis is on topics that are not adequately covered by the existing survey literature, most significantly: 1. the known upper and lower bounds for the convergence times of Hopfield nets (here we consider mainly worst-case results); 2. the power of Hopfield nets as general computing devices (as opposed to their applications to associative memory and optimization); 3. the complexity of the synthesis ("learning") and analysis problems related to Hopfield nets as associative memories. Draft chapter for the forthcoming book The Computational and Learning Complexity of Neural Networks: Advanced Topics (ed. Ian Parberry).

1992 -- Cited by 15 (3 self)
We pursue a particular approach to analog computation, based on dynamical systems of the type used in neural networks research. Our systems have a fixed structure, invariant in time, corresponding to an unchanging number of "neurons". If allowed exponential time for computation, they turn out to have unbounded power. However, under polynomial-time constraints there are limits on their capabilities, though being more powerful than Turing Machines. (A similar but more restricted model was shown to be polynomial-time equivalent to classical digital computation in the previous work [17].) Moreover, there is a precise correspondence between nets and standard non-uniform circuits with equivalent resources, and as a consequence one has lower bound constraints on what they can compute. This relationship is perhaps surprising since our analog devices do not change in any manner with input size. We note that these networks are not likely to solve polynomially NP-hard problems, as the equality "p...

IEEE Transactions on Information Theory, 1997 -- Cited by 15 (2 self)
Abstract — The computational power of recurrent neural networks is shown to depend ultimately on the complexity of the real constants (weights) of the network. The complexity, or information contents, of the weights is measured by a variant of resource-bounded Kolmogorov complexity, taking into account the time required for constructing the numbers. In particular, we reveal a full and proper hierarchy of nonuniform complexity classes associated with networks having weights of increasing Kolmogorov complexity. Index Terms — Kolmogorov complexity, neural networks, Turing machines.

In Proc. 17th International Symposium on Mathematical Foundations of Computer Science, 1992 -- Cited by 14 (4 self)
We survey some of the central results in the complexity theory of discrete neural networks, with pointers to the literature. 1 Introduction. The recently revived field of computation by "neural" networks provides the complexity theorist with a wealth of fascinating research topics. While much of the general appeal of the field stems not so much from new computational possibilities, but from the possibility of "learning", or synthesizing networks directly from examples of their desired input-output behavior, it is nevertheless important to pay attention also to the complexity issues: firstly, what kinds of functions are computable by networks of a given type and size, and secondly, what is the complexity of the synthesis problems considered. In fact, inattention to these issues was a significant factor in the demise of the first stage of neural networks research in the late 60's, under the criticism of Minsky and Papert [51]. The intent of this paper is to survey some of the centra...

Cited by 13 (9 self)
Two widely accepted assumptions within cognitive science are that (1) the goal is to understand the mechanisms responsible for cognitive performances and (2) computational modeling is a major tool for understanding these mechanisms. The particular approaches to computational modeling adopted in cognitive science, moreover, have significantly affected the way in which cognitive mechanisms are understood. Unable to employ some of the more common methods for conducting research on mechanisms, cognitive scientists' guiding ideas about mechanism have developed in conjunction with their styles of modeling. In particular, mental operations often are conceptualized as comparable to the processes employed in classical symbolic AI or neural network models. These models, in turn, have been interpreted by some as themselves intelligent systems since they employ the same type of operations as does the mind. For this paper, what is significant about these approaches to modeling is that they are constructed specifically to account for behavior and are evaluated by how well they do so—not by independent evidence that they describe actual operations in mental mechanisms. Cognitive modeling has both been fruitful and subject to certain limitations. A good way of exploring this is to contrast it with a different approach, one that involves more direct...

Proc. IWANN'91, 1992 -- Cited by 10 (4 self)
Quantization of the synaptic weights is a central problem of hardware implementation of neural networks using 0 technology. In this paper, a particular linear threshold boolean function, called majority function, is considered, whose synaptic weights are restricted to only three values: −1, 0, +1. Some results about the complexity of the circuits composed of such gates are reported. They show that this simple family of functions remains powerful in term of circuit complexity. The learning problem with this subclass of threshold function is also studied and numerical experiments of different algorithms are reported. Keywords: neural network, linear threshold function, circuit complexity, synaptic weights quantization, majority functions. 1 Introduction and Motivation. The works reported in the literature on artificial neural nets can be subdivided in two classes. On one hand, theorists deal with the general issues of connexionism such as: machine learning, classification, optimiz...

In Proc. 24th Annual IEEE Conference on Computational Complexity (CCC), 2009 -- Cited by 8 (4 self)
We prove two main results on how arbitrary linear threshold functions f(x) = sign(w · x − θ) over the n-dimensional Boolean hypercube can be approximated by simple threshold functions. Our first result shows that every n-variable threshold function f is ɛ-close to a threshold function depending only on Inf(f)^2 · poly(1/ɛ) many variables, where Inf(f) denotes the total influence or average sensitivity of f. This is an exponential sharpening of Friedgut's well-known theorem [Fri98], which states that every Boolean function f is ɛ-close to a function depending only on 2^(O(Inf(f)/ɛ)) many variables, for the case of threshold functions. We complement this upper bound by showing that Ω(Inf(f)^2 + 1/ɛ^2) many variables are required for ɛ-approximating threshold functions. Our second result is a proof that every n-variable threshold function is ɛ-close to a threshold function with integer weights at most poly(n) · 2^(Õ(1/ɛ^(2/3))). This is an improvement, in the dependence on the error parameter ɛ, on an earlier result of [Ser07] which gave a poly(n) · 2^(Õ(1/ɛ^2)) bound. Our improvement is obtained via a new proof technique that uses strong anti-concentration bounds from probability theory. The new technique also gives a simple and modular proof of the original [Ser07] result, and extends to give low-weight approximators for threshold functions under a range of probability distributions other than the uniform distribution.
Two-View Multibody Structure from Motion
Rene Vidal, Yi Ma^1, and Stefano Soatto^2 (Professor S. Shankar Sastry)
(ONR) N00014-00-1-0621
We present a geometric approach for the analysis of dynamic scenes containing multiple rigidly moving objects seen in two perspective views. Our approach exploits the algebraic and geometric properties of the so-called multibody epipolar constraint and its associated multibody fundamental matrix, which are natural generalizations of the epipolar constraint and of the fundamental matrix to multiple moving objects. We derive a rank constraint on the image points from which one can estimate the number of independent motions and linearly solve for the multibody fundamental matrix. We prove that the epipoles of each independent motion lie exactly in the intersection of the left null space of the multibody fundamental matrix with the so-called Veronese surface. We then show that individual epipoles and epipolar lines can be uniformly and efficiently computed by using a novel polynomial factorization technique. Given the epipoles and epipolar lines, the estimation of individual fundamental matrices becomes a linear problem. Then, motion and feature point segmentation is automatically obtained from either the epipoles and epipolar lines or the individual fundamental matrices.
[1] R. Vidal, Y. Ma, S. Soatto, and S. Sastry, "Two-View Multibody Structure from Motion," Int. J. Computer Vision (submitted).
[2] R. Vidal, S. Soatto, Y. Ma, and S. Sastry, "Segmentation of Dynamic Scenes from the Multibody Fundamental Matrix," ECCV Workshop on Vision and Modeling of Dynamic Scenes, Copenhagen, Denmark, May
^1 Professor, University of Illinois at Urbana-Champaign
^2 Professor, University of California at Los Angeles
More information (http://www.eecs.berkeley.edu/~rvidal/) or send mail to the author: (rvidal@eecs.berkeley.edu)
Several authoritative and very insightful papers and web sites have been written on the subject of gamma (ref). However, definitions and interpretations still remain a subject of continuing discussion. Instead of going into the merits and controversies of gamma encoding/decoding/compensation, I will focus on its principles, application, and measurement. To introduce the topic, let's start with simple empirical observations that will lead us to a more intuitive meaning of gamma. Our everyday experience points to the fact that human senses, such as hearing, sight, smell, and heat, respond and adapt to extremely wide ranges of stimuli. To cope with such a large dynamic range, our hearing system, eyes, nose, and sensory neurons in skin evaluate the corresponding stimuli on a nonlinear (approximately logarithmic) scale. Simply put, when we double the light intensity, we don't see the light as twice as bright! The magnitude of a physical stimulus and its perceived intensity or strength follow Stevens' power law (Fig. 14). Stevens' formulation is widely considered to supersede the famous Weber-Fechner law on the basis that it describes a wider range of sensations. Mathematically, Stevens' law is formulated as follows:

R = k · (S − S0)^p

Taking the logarithm of both sides of this equation leads to a linear relationship between log(S − S0) and log(R), with the slope of the line determined by p. Figure 14 illustrates the qualitative relationship between the stimulus of the human sensory system (S) and the response that we feel (R). The dashed line (p = 1) would correspond to linear dependence.

Figure 14: Stimulus intensity vs. objective response

For example, the way we feel the heaviness of an object is characterized by a high value of p, reflected in a steep curve. In other words, once a stimulus is strong enough to sense "heaviness" of the object, the perception of "heaviness" rapidly becomes stronger as the stimulus becomes stronger.
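As a quick numeric illustration of Stevens' law (a sketch with made-up k and S0, using the 0.33 brightness exponent quoted above):

```python
# Stevens' power law: R = k * (S - S0)**p.
# k = 1 and S0 = 0 are arbitrary illustrative choices; p = 0.33 is the
# brightness exponent mentioned in the text.

def stevens_response(stimulus, k=1.0, s0=0.0, p=0.33):
    """Perceived intensity R for a physical stimulus magnitude S."""
    return k * (stimulus - s0) ** p

# Doubling the physical light intensity does not double perceived brightness:
ratio = stevens_response(2.0) / stevens_response(1.0)  # 2**0.33, roughly 1.26
```

With p = 0.33, a 100% increase in intensity reads as only about a 26% increase in perceived brightness, which is exactly the compression the figure illustrates.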
The other sensory responses shown have a lower p value, which means that they can cover much wider ranges of stimulus intensity (dynamic range). Perception of light brightness has a particularly low power exponent, which indicates that our vision system adapts to a wide dynamic range of light. We will see the coefficient 0.33 (1/3) later in the context of the perceptually uniform lightness scale as defined in the CIELAB color space. The sensitivity of the human eye is influenced by the average light level to which the eye is exposed. A representative result of noticeable luminance-difference measurements performed in the range of typical LCD luminances is shown in Fig. 15. The quantity log(ΔL/L) is plotted against log(L), where L is luminance in cd/m² (correlates with relative brightness). Some theoretical approaches predict a three-segmented curve, with slopes of −1 (the linear range), −0.5 (the square-root range), and 0 (the Weber range). The same data on the linear scale is shown in Fig. 16. For luminance levels of interest (0.1-200 cd/m²), the ratio ΔL/L has an approximately constant value of 0.01 (the so-called Weber-Fechner fraction). The fit to a line with a slope of ~0.01 is remarkably good. In this region, the just noticeable difference in luminance (JND ~ ΔL) then follows the linear equation: ΔL = 0.01 · L. This relationship confirms the empirical observation that we are more sensitive to luminance changes at darker levels (at L = 100, JND = 1, while at L = 10, JND = 0.1). This observation will be mentioned later when gamma encoding is discussed. It is important to note that the sensitivity of human vision varies with the light level and that in order for us to discriminate between two close luminance levels, one has to be fully adapted to the surround luminance. To be able to distinguish shades of gray in darker areas, our eyes have to be completely adapted to a lower-intensity environment (globally as well as locally).
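The Weber-Fechner fraction above can be turned into a two-line sketch (illustrative only):

```python
# Just-noticeable difference from the Weber-Fechner relationship dL = 0.01 * L,
# valid roughly over the 0.1-200 cd/m^2 range discussed in the text.

def jnd(luminance, weber_fraction=0.01):
    """Smallest detectable luminance step at a given adaptation level."""
    return weber_fraction * luminance

jnd_bright = jnd(100.0)  # about 1 cd/m^2
jnd_dark = jnd(10.0)     # about 0.1 cd/m^2 -- finer steps resolved in shadows
```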
For additional information on this topic, refer to the paper "Gamma and its disguises" by Charles Poynton and see the section "Why gamma" at Norman Koren's web pages. In contrast to the human senses, linear data representation is widely used for computer-generated imagery, mostly because models of illumination, white-point conversion, RGB color conversions, and color matching are defined linearly. Furthermore, image sensors (CCD or CMOS) are linear devices that respond proportionally to the number of photons that interact with their electrode structure. At the outset of the gamma discussion, the topic of quantization will be briefly discussed. In order to render captured scenes faithfully, digital devices have to be able to adequately sample (quantize) the dynamic ranges of typical everyday scenes. Digital imaging systems with a usually limited number of bits (typically 8) must be designed so that there is enough precision in the low levels (shadows) to avoid visible contouring (our eyes are quite sensitive to transitions in shadow areas - Fig. 16). This can be achieved by applying a logarithmic or gamma-law "correction" to the linear data (logarithmic quantization has often been recommended but rarely implemented). Such transformations usually proceed in two steps. First, the image is converted by an analog-to-digital converter to, e.g., 12 bits in uniform steps (4,096 steps). Second, the output signal is then reduced to 8 bits by a nonlinear transformation that compresses the high-level bins. When working with a typical LCD display that has about a 300:1 contrast ratio, our eyes can distinguish about half of the luminance levels. For an 8-bit grayscale gradient, there would theoretically be 8.2 f-stops (exposure zones, log₂(300) = 8.2). For all 256 levels, there would be about 256/8.2 = ~31 distinguishable levels per zone.
As a result of interference from higher luminance levels (flare, bright environment), our capacity to distinguish levels in darker areas is compromised. Coming back to the Weber-Fechner law, the theoretical number of distinguishable levels per exposure zone is equal to log(2)/log(1.01) ≈ 70. The number two relates to the exposure-zone definition (half or double of the light intensity) and 1.01 is the 1% JND difference coming from the Weber-Fechner law. Again, this number of levels assumes complete adaptation to a relatively narrow range of luminances around the evaluated neighboring levels. Quantization of digital images: The process of assigning intensity levels to a digital output is called quantization. During quantization, every floating-point value of luminance related to the input signal has to be mapped to one of the single integer values accepted by typical visualization devices (monitor, printer, projector). If uniform (linear) quantization steps are used, more than 10 bits (1,024 bins) should be used to maintain details in shadows and smooth transitions*. Nonlinear quantization techniques provide compression by requiring a smaller sample size (i.e. number of bits) to cover the full range of input than a linear quantization technique. Logarithmic or power-function quantization provides more resolution at lower levels, with higher levels spaced further apart than the low ones. Thus darker colors are represented in greater detail than they would be in the linearly coded image. This parallels the way in which we perceive differences in lightness, as we are more sensitive to absolute changes in shadows than in bright areas. Note*: if we send linearly sampled data through a power function and quantize them, we will lose a certain number of levels, depending on the number of bits of input and the number of bits of output.
For example, if we sent 256 linearly spaced levels (8-bit encoding) through the gamma-compression curve of 0.4545 (= 1/2.2), we would lose 72 levels, leaving only 184 to work with. However, if our camera uses 10 or 12 bits to represent the captured linear RAW data, gamma compression with the same power coefficient of 0.4545 preserves all 256 levels in 8-bit encoding. See Bruce Lindbloom's Level Calculator for details. The idea behind nonlinear quantization is depicted in Figure 17. The x-axis is the normalized digital input, such as a photon count passed through the analog-to-digital converter (ADC). The numbers just above are the input values for 8-bit encoding (typical image-editing range of 0-255). The black line is the linear response of an arbitrary digital device (camera, scanner sensor). Clearly, the output (on the y-axis) is directly proportional to the input. As an example of logarithmic correction, the blue curve in Fig. 17 describes the transformation from the linear (black) to a logarithmic domain. In order to avoid log troubles at 0, log(1 + p·Dc) was used instead of the direct log. Dc is the digital count input and p is a parameter ∈ (0, ∞):

y = log(1 + p·Dc) / log(1 + p·maxDc)

The red curve is a "classical" simple gamma-corrected linear mapping. One can see that the gamma mapping is steeper in shadows while highlights have nearly identical mapping. If we were quantizing in linear steps, we would be "wasting" bits in bright regions (remember, we can't distinguish much of the brightness-level difference in lights anyway) and not having enough in areas where it matters most, i.e. darks. Here is why. For the linear scale, pixels above the value of 204 (in 8-bit encoding) would require (1.0 − 0.8) × 100 = 20% of the available bits (same as pixels below 50). For the nonlinear scale, the same regions will claim ~10% ((1.0 − 0.9) × 100) and 50% of the available bits, respectively (orange rectangles). Read the following box and references for more examples and explanations.
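The two curves of Fig. 17 can be sketched as follows; the value of p is arbitrary, chosen only to make the logarithmic curve visibly steep in the shadows:

```python
import math

# Logarithmic encoding y = log(1 + p*Dc) / log(1 + p*maxDc) and the simple
# power ("gamma") encoding (Dc/maxDc)**(1/2.2), both normalized to 0-1.

def log_encode(dc, max_dc=255, p=50.0):
    return math.log(1 + p * dc) / math.log(1 + p * max_dc)

def gamma_encode(dc, max_dc=255, gamma=1 / 2.2):
    return (dc / max_dc) ** gamma

# Both mappings lift the shadows: input 50/255 (about 20 % of the range)
# already claims nearly half of the gamma-encoded output, and more of the log one.
y_gamma = gamma_encode(50)  # about 0.48
y_log = log_encode(50)      # about 0.83
```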
Let's compare the outcomes of the linear and gamma encoding schemes: i) we will assume a linear grayscale file in 8-bit encoding (total of 256 levels). Since the output luminance changes linearly with the input luminance, the first exposure zone (from 128-255) will have 128 brightness levels (256/2), the second zone (64-128) 64, and the last three dark zones (no. 6-8) will have a total of 7 levels (4+2+1). ii) now for the 2.2-encoded 8-bit grayscale image (e.g., JPG). Taking half of the normalized luminance input will result in a normalized output luminance of 0.5^(1/2.2) = 0.7298 for the first exposure zone. This would consume (1 − 0.7298)·256 = 69 brightness levels. The second zone would have (0.7298 − 0.25^(1/2.2))·256 = (0.7298 − 0.5325)·256 = 50 levels, the third 37, the fourth 27, ... 20, 14, 10, ... Note that due to the power function applied to the input luminance data, the number of exposure zones is greater than 8. Clearly, gamma encoding reallocates encoding levels from the upper exposure zones into lower zones to make the distribution of levels much more uniform. This way serious banding at low levels is avoided. See "Understanding RAW files" at the Luminous Landscape and "Human vision and tonal levels" at Norman Koren's web page.

Figure 17: Effect of quantization in a capture device

A clear benefit of these transformations is that the number of bits required to eliminate contouring of camera images in a computer display can be reduced from the "safe" 10-12 bits for linear data to 8 bits for logarithmic or gamma-corrected data (see the note above). This has a beneficial impact on signal storage and transmission. Also, such nonlinear ("gamma") correction mathematically transforms the physical light intensity into a perceptually uniform domain. Furthermore, it just happens that the operating principle of CRT displays requires a gamma correction close to 1/γ to maintain perceptual and pleasing tone uniformity.
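The per-zone level counts above can be reproduced with a short sketch (not the author's code; it simply evaluates the encoding curve at each halving of luminance):

```python
# Levels per exposure zone for an 8-bit grayscale: each zone halves the
# scene luminance; the encoded value is luminance**exponent (exponent = 1
# for linear data, 1/2.2 for gamma-2.2 encoding).

def levels_per_zone(exponent, zones=8, total=256):
    counts, hi = [], 1.0
    for _ in range(zones):
        lo = hi / 2
        counts.append(round((hi ** exponent - lo ** exponent) * total))
        hi = lo
    return counts

linear_counts = levels_per_zone(1.0)      # starts 128, 64, 32, ...
gamma_counts = levels_per_zone(1 / 2.2)   # starts 69, 50, 37, 27, ...
```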
So, the first gamma compresses the signal in a capture device, while the second gamma effectively decompresses the corresponding image (Fig. 18), keeping it perceptually similar to the original scene. It is the second gamma that we are mostly concerned with when processing and manipulating digital images. Figure 18 depicts the typical display gamma relationship between digital RGB values (normalized at the x-axis) and the luminance output (red curve). Once the input digital counts are linearized, a linear relationship between input and luminance becomes apparent (blue line). In general, linearization can be achieved by any of the mathematical models described in the next section or by canceling out the gamma and 1/gamma curves shown in Figures 17 and 18 (red lines). An example of such cancellation is the so-called overall system gamma (viewing gamma), which can also be computed by multiplying the camera gamma by the display gamma (γ_overall = γ_camera · γ_display ≈ 1). In reality, this overall gamma is typically in the range of 0.96-1.30. The simple camera gamma is equal to 1/2.2 = 0.4545. Display gamma is the gamma that we are concerned with when setting the calibration target. For Windows systems its value is typically 2.2. It is important to note that gamma only affects middle tones and has negligible effect on dark or white areas.

Figure 18: OETF curve and its linearized form

Unfortunately, the term gamma is used rather ambiguously and it is always necessary to verify how a particular gamma is defined. For display devices, which we are most interested in, the correct term for (or meaning of) gamma is the tone reproduction curve (TRC).
It is also known as the optoelectronic transfer function (OETF), and in general it describes the relationship between the signal sent to a display (from the video frame buffer) and the resulting luminance output. Technically, for LCD displays, this relationship is not an easy function, but rather the result of curve-fitting to representative input points that are stored in a profile or calibration lookup table (LUT). This LUT typically involves 16-bit data mapping. Strictly speaking, it follows that there is no "gamma" for LCD displays, as there is no "power function" physics behind the LCD luminance output. While electron guns in CRTs have smooth power-function characteristics along the whole tonal range of each RGB component, the native response of an LCD is closer to a sigmoidal shape (Karim). However, LCD displays still have a tone reproduction curve! As for CRT monitors, such a transfer function describes the relationship between the digital input, given by the RGB values, and the luminance produced by each RGB channel. Since images should look very similar regardless of the display type, LCD manufacturers build correction tables into the display circuitry to account for gamma-corrected images. Hence, LCD displays show a response similar to a CRT gamma function! Tristimulus values and radiometric scalars: We have mentioned the luminance output (Y) at several points in the preceding paragraphs. This short note explains the math behind measurements of the luminance and the tristimulus values. For more details, visit the Color section of these web pages. The trichromatic nature of human vision is mathematically formulated by the CIE (Commission Internationale de l'Éclairage) to provide tristimulus values X, Y, and Z. CIEXYZ tristimulus values are thus a fundamental measure of color - "color coordinates". In this system, the Y tristimulus value of a luminous object is known as the luminance. Absolute values of luminance are reported in candela/m² (cd/m²).
By convention, the XYZ values are normalized by a normalizing constant that sets Y = 100 for perfectly reflecting or transmitting samples (e.g. monitor white). For any RGB system (computer displays, multimedia and imaging applications), an R'G'B' triplet refers to gamma-encoded digital count values while RGB refers to linear RGB values. Note: in most application software (Photoshop, image editors), R'G'B' values are commonly referred to as "RGB". For color calculation, we will use the nomenclature common in the literature, i.e., RGB for linearized values and R'G'B' for the gamma-encoded equivalents. The first step in determining the tristimulus values XYZ from R'G'B' values sent to the monitor is to examine the relationship between digital counts (R'G'B') and each channel's scalar (R, G, B). Each of the R'G'B' channels must first be linearized so that the output intensity is linearly related to the input. There are several forms of the linearization function. These include simple gamma (R = dc^γ, and similarly for G and B), GOG, or a LUT (see below). The scalar values for each channel and the corresponding R, G, B values can be calculated either from the ramp measurement or from the tristimulus values of the primaries. In the former case: R = Xm/Xmax, G = Ym/Ymax, and B = Zm/Zmax, where the index m refers to a measured value and max is the maximum value. The latter option relies on the linearity of the R, G, B and XYZ. From the equation shown later, [RGB] = inv(3x3) * [XYZ]. Next, the coefficients of the linearization function have to be determined, usually using some form of least-squares optimization. Now that we have described the nonlinear relationship between the input R'G'B' digital counts and the scalars R, G, and B, the tristimulus components XYZ can be calculated as follows:

X = R · Xr,max + G · Xg,max + B · Xb,max
Y = R · Yr,max + G · Yg,max + B · Yb,max
Z = R · Zr,max + G · Zg,max + B · Zb,max     (T-1)

These calculations can easily be performed using matrix algebra (see the Color section). The R, G, B values are thus linearized R'G'B' input values that will be used to calculate the corresponding XYZ values. Xr,max is the tristimulus X for the red channel at maximum radiant output (R=G=B=255). During image processing and color-space transformations that involve device-independent color spaces, a linear relationship between image pixel values specified in software and the luminance has to be established. We already know that monitors (CRT, LCD) will have a nonlinear response. The luminance can generally be modeled using a power function with an exponent, gamma, as in eq. (II) (simple gamma). During all these operations, luminance and RGB digital counts (values sent to a monitor) have to be normalized to values between 0 and 1. Again, in order to display image information as linear luminance we need to modify the RGB dc domain (i.e., we have to linearize it and thus remove the gamma encoding). As discussed in the previous paragraph, this need comes from display systems where the cameras and displays had different transfer functions (which, unless corrected for, would cause problems with tone reproduction). Simple gamma correction is given by the following equation:

O = I^γ     (II)

Other (and more accurate) models include several parameters and nonlinear curve fitting to a power function. Models such as GOG or GOGO are successfully used to characterize CRT monitors. Briefly, the GOG (gain-offset-gamma) model uses the formula:

O = (a·I + c)^γ     (III)

while the GOGO model (the model recommended for CRT colorimetry) adds an additional offset term:

O = (a·I + c)^γ + k     (IV)

Another model used in many RGB color-encoding standards (essentially the GOG) is expressed as:

O = [(I + b) / (1 + b)]^γ     (V)

and uses only one constant term for the gain and offset. O refers to linearized R, G, or B values and I is the digital count input normalized to <0-1>.
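The linearization models above can be written down directly; these are the standard gain-offset-gamma forms from the CRT characterization literature, so treat the exact parameter placement as an assumption rather than a quote of the original equations:

```python
# Simple gamma (II), GOG (III), GOGO (IV) and the one-constant form (V).
# I is the digital count normalized to 0-1; O is the linearized output.

def simple_gamma(i, gamma=2.2):
    return i ** gamma

def gog(i, a, c, gamma=2.2):
    # gain a, offset c; negative arguments clipped to zero
    return max(a * i + c, 0.0) ** gamma

def gogo(i, a, c, k, gamma=2.2):
    return gog(i, a, c, gamma) + k  # extra constant offset

def one_constant(i, b, gamma=2.2):
    return ((i + b) / (1 + b)) ** gamma

# Setting a = 1/(1+b) and c = b/(1+b) collapses GOG into the one-constant form.
```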
Parameter a is a gain ("contrast" on CRTs), c is an offset ("brightness" on CRTs), k is an additional offset, and γ is the power-function coefficient. By formally substituting a = 1/(1+b) and c = b/(1+b) in eq. III, we arrive at eq. V. This particular linearization model is used in the GammaCalc script to fit your experimental data and calculate the corresponding gamma. To summarize, conversion of R'G'B' image pixel values to CIEXYZ tristimulus values can be achieved via a two-stage process.

• Firstly, we need to calculate the relationship between input image pixel values (dc) and the displayed luminous intensity (Y). This relationship is the transfer function, often simplified to gamma. The transfer functions will usually differ for each channel, so they are best measured independently. A note on the use of tristimulus values (XYZ) for gamma assessment: while the luminance of each channel is described by the corresponding Y-component of the XYZ triplet, the calculated gamma is the same regardless of which tristimulus component was used (providing the component was normalized to the range <0,1>). Thus the red-channel gamma can be calculated from normalized dc values and the X-component of XYZ, the green channel would use the Y-component, and the blue channel the Z-component. This approach is basically building one-dimensional look-up tables where luminance is substituted by radiometric scalars (R, G, B) according to: R = LUT(dcr) (for the red channel).

• The second stage is to convert between the displayed red, green and blue and the CIE tristimulus values. This is most easily performed by using a matrix transform of the following form: [XYZ]ᵀ = [3×3 matrix] × [RGB]ᵀ, where X, Y, Z are the desired CIE tristimulus values, R, G, B are the values obtained from the transfer functions (now linearized), and the 3×3 matrix contains the measured CIE tristimulus values for the monitor's three channels at maximum output.
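The two-stage pipeline can be sketched like this; the single gamma standing in for the measured per-channel transfer functions and all matrix entries are invented placeholder numbers, not measurements of any real display:

```python
GAMMA = 2.2  # stand-in for the measured per-channel transfer functions

# 3x3 matrix of the primaries' tristimulus values at maximum output
# (rows X, Y, Z; columns red, green, blue) -- placeholder values only.
M = [
    [41.2, 35.8, 18.0],   # Xr,max  Xg,max  Xb,max
    [21.3, 71.5,  7.2],   # Yr,max  Yg,max  Yb,max
    [ 1.9, 11.9, 95.0],   # Zr,max  Zg,max  Zb,max
]

def rgb_dc_to_xyz(r_dc, g_dc, b_dc):
    # stage 1: linearize the digital counts
    rgb = [(dc / 255.0) ** GAMMA for dc in (r_dc, g_dc, b_dc)]
    # stage 2: [XYZ]^T = M * [RGB]^T
    return [sum(m * c for m, c in zip(row, rgb)) for row in M]

xyz_white = rgb_dc_to_xyz(255, 255, 255)  # the sums of the three primaries
```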
Practical Applications: In a typical characterization of displays, a commonly used technique for monitor gamma assessment uses the so-called "log-gamma". This gamma is based on the original gamma definition, i.e., the slope of the "linear part in the tone characteristics obtained by linear regression in the logarithmic domain". Unfortunately, such gamma cannot be defined unambiguously, since the slope depends on which part of the curve is chosen to be linear. Besides that, only the simple gamma model (II) is assumed, that is:

Y = k·dc^γ

Transformation into the logarithmic domain leads to:

log(Y) = γ·log(dc) + k1

which is a linear equation with the slope being equal to gamma (γ). Note that no linearization of R'G'B' values was needed, although the nonlinearity in the dark area is clearly evident (Fig. 19). Overall, this method is still the fastest and a relatively accurate way to assess the monitor gamma. To measure grayscale gamma, one usually displays a series of gray patches from RGB=0 to RGB=255 (e.g. in steps of 17) and measures the luminance response as the Y component of the tristimulus values (it is the middle Y in the XYZ output). Both R'G'B' and Y values are then normalized to fit within the range (0-1) by dividing them by 255 and Ymax, respectively. The logarithm is calculated for both series and plotted as y = log(Y/Ymax) against x = log(R'G'B'/255).

Figure 19: Example of γ calculation

The slope of the linear portion of this plot is taken as the overall system gamma (red line in Figure 19). If you would rather skip the math part, here is the spreadsheet that calculates gamma for you in this simple situation. Alternatively, to have log-gamma calculated for you, upload measured values through the GammaCalc page. Unfortunately, as mentioned above, the OETF characteristics of LCDs are such that a single analytic equation cannot be used to accurately describe their general behavior. Consequently, equations such as eqs. (II-V) may poorly describe the OETF.
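The log-log fit can be sketched in a few lines of stdlib Python; the measurements here are synthesized from an ideal gamma-2.2 display so the recovered slope can be checked:

```python
import math

def log_log_gamma(dcs, lums):
    """Slope of log(Y/Ymax) vs log(dc/255), i.e. the log-log gamma."""
    ymax = max(lums)
    xs = [math.log(dc / 255.0) for dc in dcs]
    ys = [math.log(y / ymax) for y in lums]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    return (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
            / sum((x - mx) ** 2 for x in xs))

dcs = list(range(17, 256, 17))                      # gray ramp, dc = 0 excluded
lums = [200.0 * (dc / 255.0) ** 2.2 for dc in dcs]  # ideal gamma-2.2 response
gamma = log_log_gamma(dcs, lums)                    # recovers 2.2
```

On real measurements the fit would, as the text notes, be run only over the part of the curve that actually looks linear (e.g. RGB > 160).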
However, since all computer-controlled systems include a video look-up table, three one-dimensional (1D) look-up tables (one for each channel) can be obtained to define the OETF (see Display Color Management). When the GOG functions are replaced with simple one-dimensional LUTs to characterize the display's electro-optical transfer functions, the characterization performance is excellent. L-star curve: Some profiling software packages feature several settings for the TRC curve:

• a gamma of your choice, typically gamma 2.2 or 1.8
• the sRGB TRC (has a linear segment at the dark end and an overall gamma of approximately 2.2), and/or
• the so-called L* (read L-star) curve, which simulates the response of the L* channel of the CIELAB color space

There is not much one can find on the L* curve, and it seems that some vendors use proprietary algorithms for the L-star calibration curve and display icc profiles. To get at least a qualitative picture of what the L-star calibration curve looks like, Figure 20 shows the TRCs for both the sRGB and L-star curves in the form of a Y(normalized) vs. R'G'B'(normalized) plot - in this section denoted as RGB.

Figure 20: sRGB and L-star TRCs (Yn vs. RGBn)

Relative luminance data (Y) were obtained from the icclu utility of Argyll CMS for both the sRGB and L-star icc profiles. This utility uses the corresponding icc/icm profiles to output XYZ tristimulus or L*a*b* values for batches of input RGB device values. ColorThink Pro can also do the job, although its output data is rounded to only two decimal places. Note: It should be stressed at this point that sRGB and L-star icc profiles are just color-space profiles, theoretical constructs that have nothing to do with display profiles created during the calibration/profiling process. As such, these color-space profiles can be considered ideal examples to demonstrate the characteristics of display gamma and L-star profiles. Visual inspection of Fig.
20 suggests that the L-star curve brings more contrast (steepness, higher gamma) into the midtones and highlights (RGB 0.5-0.9). We will also see later that the value of gamma is not constant and that it varies along the whole tonal range. The curve characteristics in shadows (< 0.25) are not that clear and additional analysis has to be performed. Overall, both TRC curves are quite similar with no distinct features. An alternative way to arrive at a similar picture is to calculate luminance values (Y) for the L-star curve from the corresponding values of the CIELAB L-channel (0-100). However, it should be mentioned that a typical L-star curve is not identical to the CIELAB L-channel. Here is the formula for the L* to Y transformation:

Y = 100 · [(L* + 16)/116]³   for Y/100 > 0.008856, Y ∈ <0-100>
Y = 100 · L*/903.3           for Y/100 < 0.008856

Another characteristic plot is shown in Figure 21. The relationship between calculated CIELAB L* values and the RGB values for a 5-step gray ramp is plotted as L* vs. normalized input RGB. The same ideal sRGB and L-star icc profiles were used, only this time the icclu utility was configured to output L*a*b* values. As one can see, the L-star response is clearly linear in all brightness ranges (blue line). This means that doubling the value of RGB always changes the value of L by a factor of two, or that stepping the RGB values by e.g. 10 points will change the L values by a constant increment (in this case by 100/255 · 10 = 3.9). Also, since incremental changes in L are perceptually uniform, changes from dark to bright values in a synthetic grayscale (RGB from 0-255) will be perceived as smooth and uniform. On the other hand, the sRGB curve (or any other typical calibration gamma curve) results in brighter midtones (red curve). Changes of the same 10 RGB points will be perceptually different in shadows, midtones and highlights.
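The piecewise transformation can be written directly (a sketch of the formula quoted above):

```python
def lstar_to_y(lstar):
    """CIELAB L* (0-100) to relative luminance Y (0-100)."""
    y = 100.0 * ((lstar + 16.0) / 116.0) ** 3
    if y / 100.0 > 0.008856:
        return y
    return 100.0 * lstar / 903.3  # linear segment near black

mid_gray = lstar_to_y(50.0)   # about 18.4 -- L* = 50 is roughly 18 % gray
white = lstar_to_y(100.0)     # exactly 100
```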
For typical calibration gamma curves (2.2, 1.8, sRGB), the shape of the curve in shadows (orange rectangle) will vary depending on the monitor black level and on how the calibration algorithm treats the curve in dark areas. Both the linear and typical gamma curves will usually have non-zero luminance at black, while only typical gamma curves may have a higher contrast in that region. A clear distinguishing feature is the convex character of the typical gamma curve in the region from about R'G'B'=80 to R'G'B'=200. Real experimental L-star curves are nearly linear. Before we continue with a more detailed analysis, some assumptions have to be made. As we have discussed earlier, an LCD display has no underlying physics to follow the power-law dependence of luminance vs. digital input. However, we also pointed to the fact that manufacturers build corrections into the display circuitry of the LCD panel to approximate the power-function dependence. Such corrections may justify use of the gamma concept. Thus, if the power-law dependence is adopted, the logarithm of Yn vs. RGBn will be a linear function. Indeed, we often see very good linearity at higher luminance values (~ RGB > 160), as shown earlier in Fig. 19. Unfortunately, for midtones and shadows, this linear relationship (i.e. constant gamma) breaks down. To ensure high accuracy, the examples in this section are based on calibrations of a higher-end Eizo CG19 display using ColorNavigator. The calibration software adjusts only the monitor's 10-bit LUT without adding any corrections to the 8-bit video card. Let's evaluate four different approaches to the characterization of TRC curves.

1. The first approach assumes that the power-law dependence is obeyed for both curves (gamma-based and L-star) and that gamma can be expressed as a power coefficient in the general formula (a·x^γ + c) or as the slope in the log-log plot in any part of the RGB brightness scale (R=G=B=0 -> R=G=B=255).

2.
The second approach to TRC analysis is based on a polynomial or spline fit to the Yn vs. RGBn data points, followed by analysis of the fitted function. While this approach seems the most rigorous, it is at the same time the least intuitive and generalizable. We would be looking for parameters that uniquely describe the TRC, such as characteristic points and shapes of the n-th derivative of the fitted function (which is still a polynomial function). When the same polynomial fit (I used a 6th-degree polynomial) is done on the log(Yn) vs. log(RGBn) scale, the situation is very close to analyzing any log-log diagram such as in Fig. 19. Since the log-log plot should be mostly linear (at least in highlights), the tangent line to it at any point gives the best linear approximation to our fit function. Hence the first derivative of the fit function would give us the slope of the linear portion of the fitted curve, i.e. the gamma. This is of course a simplification, though good enough to analyze gamma and L-star curves.

3. The third method is based on comparison against a constant (reference) initial curve, such as the one used in a gamma-encoded sRGB image. This is a reasonable starting point considering that digital cameras frequently transform raw images into the sRGB (or AdobeRGB) color space. Any display TRC curve would then be canceling the encoding gamma curve to provide the resulting TRC. Assuming LUT gamma = 1, the resulting TRC would be very close to the real overall TRC curve (approximately linear).

4. The last method illustrates a very simple and reliable test for gamma or L-star curves. A plot of Yn vs. RGBn is generated and CIELAB L* lightness values are calculated from the measured Yn. Linear regression is performed on the newly created function: L* = f(RGBn). L-star curves are supposed to simulate the response of the CIELAB L* channel while the gamma curves are not. Thus a fit to the L-star curve would ideally be a line with a very small SSE value (sum of squares due to error).
On the other hand, gamma curves (after the same Y to L* transformation) should result in a poor linear fit with a higher SSE. Analysis of residuals provides additional information on the goodness of the fit. This approach is qualitatively depicted in Fig. 24.

Method 1: Two types of data were analyzed - experimentally obtained data for both the L-star and the gamma 2.2 calibrated monitor (Tables 3, 5) and ideal color space profile data for both types of curves (Tables 2, 4). For both the sRGB and L-star profiles, relative luminance data (Y) were obtained from the icclu utility of Argyll CMS. These are the same data and curves as in Fig. 20. More specifically, a grayscale RGB ramp (in increments of 5) was used as the input and run through the icc profile to get the corresponding XYZ tristimulus data. The Y component of the tristimulus output and the input RGB data were used in the curve fitting examples. The digital input range (0-255) was divided into smaller subranges as shown in Tables 2 to 5 (column 1). Data points in each subrange were fitted to a general power function and the power coefficient γ was recorded in column 2. The root mean squared error (the square root of the mean square error, or the standard error) is shown in column 3.

Table 2. Analysis of L-star icc profile curve (power fit, log-log(γ))

  RGB (8-bit)   a*x^γ+c   RMSE      log-log(γ)    RMSE
  0-255         2.53      3.50E-3   2.55 (>210)   3.13E-4
  200-255       2.69      3.95E-5   2.55          3.89E-4
  150-200       2.62      2.48E-5   2.44          7.50E-4
  100-150       2.49      4.67E-5   2.27          1.80E-3
  50-100        2.25      8.27E-5   1.95          6.00E-3
  0-50          1.56      4.07E-4   1.14          3.70E-2
  0-25          1.00      6.39E-5   1.00          2.40E-3

Table 3. Analysis of L-star display curve (power fit, log-log(γ))

  RGB (8-bit)   a*x^γ+c   RMSE      log-log(γ)    RMSE
  0-255         2.56      5.69E-3   2.63 (>210)   1.20E-3
  200-255       3.05      7.39E-4   2.60          1.61E-3
  150-200       2.67      4.79E-4   2.39          1.54E-3
  100-150       2.44      3.20E-4   2.18          1.75E-3
  50-100        2.15      1.79E-4   1.82          4.76E-3
  0-50          1.62      3.34E-4   0.89          6.20E-2
  0-25          1.16      1.06E-4   0.69          3.36E-2
Linear regression was performed on the log-log scale for the same data points, and the slope of the fitted line (γ) is listed in column 4. This corresponds to the so-called log-log gamma. The corresponding RMSE data are shown in column 5. Gamma values calculated in columns 2 and 4 will obviously be different. It should not matter which definition of gamma we choose as long as we stay consistent. Remember, the log-log(γ) is the definition used in display calibration, where the linear regression is done in the logarithmic domain.

Table 4. Analysis of sRGB icc profile curve (power fit, log-log(γ))

  RGB (8-bit)   a*x^γ+c   RMSE      log-log(γ)    RMSE
  0-255         2.25      1.55E-3   2.24 (>160)   5.94E-4
  200-255       2.32      1.10E-5   2.26          1.62E-4
  150-200       2.29      1.03E-5   2.22          2.92E-4
  100-150       2.25      1.80E-5   2.15          7.70E-4
  50-100        2.17      2.96E-5   2.02          3.38E-3
  0-50          1.87      1.87E-3   1.37          5.78E-2
  0-25          1.07      4.80E-4   1.14          2.86E-2

Table 5. Analysis of display gamma curve (power fit, log-log(γ))

  RGB (8-bit)   a*x^γ+c   RMSE      log-log(γ)    RMSE
  0-255         2.26      1.53E-3   2.21 (>160)   9.29E-3
  200-255       2.32      1.36E-5   2.23          2.41E-4
  150-200       2.30      7.88E-6   2.17          3.83E-4
  100-150       2.26      1.40E-5   2.05          1.49E-4
  50-100        2.17      5.12E-5   1.76          5.72E-3
  0-50          1.84      1.65E-4   0.56          6.48E-2
  0-25          1.38      2.06E-4   0.37          3.68E-2

Following are some observations made from Tables 2-5:

1. For the L-star curves, trends in the γ values are approximately the same for both the power-function and the "log-log" definitions. They decrease gradually with decreasing RGB input. Values in yellow indicate a poor fit to the data. The log-log scale shows a particularly poor linear fit in the shadows.

2. L-star curves have variable γ across the whole RGB input range. The log-log(γ) fit is about 2.5-2.6 when the maximum linear part is considered (RGB > 210). In the RGB range of <50-100>, the L-star gamma is about 1.8-1.9, in RGB <100-150> ~ 2.2-2.3, in RGB <150-200> ~ 2.4, and in RGB <200-255> ~ 2.6. Closer to the black point, the log-log(γ) still decreases, with the linear fit getting very poor.

3.
For the "classical" gamma-based curves, trends in the γ values are very similar for both the power-function and the "log-log" definitions. The log-log(γ) falls off faster for the display curves.

4. The log-log(γ) of the gamma curves is about 2.2 when the maximum linear part is considered (RGB > 160). The log-log(γ) stays around 2.0-2.2 from the highlights to the midtones. Closer to the black point, γ decreases, with the log-log linear fit again getting very poor.

Method 2: To further assess the differences between the two calibration curves, relative luminance values (Y ∈ <0-100>) were either measured or calculated using the icclu CMS utility. A polynomial fit was done on the data (linear and log-log scale), followed by plotting and data analysis. In general, the first derivative of the fitted function (logYn vs. RGBn or logYn vs. logRGBn) reflects well the type of the TRC curve. Plots of d(logYn)/d(logRGBn) vs. RGB and d(logYn)/d(logRGBn) vs. logRGBn are shown in Fig. 22. The L-star curve has extensive linear parts in the logRGBn plot and a continuously decreasing "gamma" in the RGB plot. On the other hand, the plot of the "classical" gamma curve has a typical S-shape in the logRGBn plot, while the RGB plot shows a more constant gamma in the highlights. However, as indicated earlier, more data are needed to formulate any general characteristics from these plots. Due to the larger size of the images, the corresponding analysis is available in this document (v. 1.0) (check here for the latest version).

Figure 22: 1st derivatives of logYn vs. RGB or logRGBn for L-star and gamma TRCs

Method 3: This method provides only a qualitative comparison between two or more TRC curves. All are subtracted from a reference TRC (in our case the ideal camera sRGB TRC curve). When the display TRC curve is also sRGB, a straight line from 0 to 1 will result from the subtraction. This is the case shown in Fig. 23 (black diagonal line).

Figure 23: L-curves subtracted from camera sRGB γ (place mouse over the image to toggle)
The other three lines are TRCs of the L-star curves relative to the sRGB curve. The blue line shows an L-star curve coming from the L-star icc profile, the red line is an L-star curve coming from the icc profile of the calibrated/profiled monitor, and the orange line is the experimentally measured TRC based on the same icc profile. The measurement was done directly off the screen using a 5-step gray ramp. In general, the L-star curves make midtones slightly darker than the typical gamma-based TRCs. We have already seen the same trend in Fig. 21. For a more detailed inspection of these curves, place the mouse over the image in Fig. 23. On the toggled image, differences from the sRGB TRC are exaggerated four times. As one can see, the major differences from the "classical" gamma curves are in the midtones, specifically in the R'G'B' range of 25-100 (8-bit encoding). It is mostly the curve steepness (contrast) that differentiates the curves from each other.

Method 4: The relationship between the measured tristimulus Y values (normalized) and the normalized RGB values for a 5-step gray ramp is plotted in Fig. 24 as Yn vs. RGBn. The left panel shows data obtained from a gamma 2.2 calibration, the right panel shows data obtained from a similar L-star calibration of the Eizo CG19 monitor, both using ColorNavigator and the X-rite DTP-94 colorimeter. In both cases, the Y values were also transformed into L* values of the CIELAB color space. Here is the formula for the Y to L* transformation:

  L* = 116 * (Y/100)^(1/3) - 16   for Y/100 > 0.008856, Y ∈ <0-100>
  L* = 903.3 * (Y/100)            for Y/100 <= 0.008856

The least squares fitting method was used to fit a line to the L* vs. RGBn function. The fitted straight line shown in the right panel (in black) is nearly identical to the original L* vs. RGBn function. On the other hand, the linear regression performed on the "classical" gamma curve (left panel) shows hints of deviation from the straight line. The corresponding sums of squares due to error (SSEs) are 0.0003 (L-star) and 0.0010 (gamma 2.2).
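The Y-to-L* transformation and the straight-line test can be sketched in code. This is an illustration with synthetic data, not the article's actual worksheet: an ideal L-star TRC (L* exactly linear in RGBn) and an ideal gamma 2.2 TRC are both pushed through the Y-to-L* formula and fitted with a least-squares line; only the L-star curve fits with a near-zero SSE.

```python
def Y_to_Lstar(Y):
    """CIELAB lightness from relative luminance Y in <0-100>."""
    y = Y / 100.0
    return 116.0 * y ** (1.0 / 3.0) - 16.0 if y > 0.008856 else 903.3 * y

def Lstar_to_Y(L):
    """Inverse of the above (used only to construct the ideal L-star TRC)."""
    return 100.0 * ((L + 16.0) / 116.0) ** 3 if L > 8.0 else 100.0 * L / 903.3

def linear_fit_sse(xs, ys):
    """Fit a least-squares line through (xs, ys); return the SSE."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) \
        / sum((x - mx) ** 2 for x in xs)
    a = my - b * mx
    return sum((y - (a + b * x)) ** 2 for x, y in zip(xs, ys))

rgbn = [i / 255.0 for i in range(0, 256, 5)]
Y_lstar = [Lstar_to_Y(100.0 * r) for r in rgbn]   # ideal L-star TRC
Y_gamma = [100.0 * r ** 2.2 for r in rgbn]        # ideal gamma 2.2 TRC

sse_lstar = linear_fit_sse(rgbn, [Y_to_Lstar(y) for y in Y_lstar])
sse_gamma = linear_fit_sse(rgbn, [Y_to_Lstar(y) for y in Y_gamma])
# sse_lstar is essentially zero; sse_gamma is orders of magnitude larger
```

The absolute SSE values depend on the scaling of L* and on the number of sampled points, so they will not match the 0.0003/0.0010 figures above; only the relative ordering is the point of the test.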
The analysis of residuals reveals further details of the fit. While the residuals for the L-star curve have a generally monotonic, concave character, the residuals of the gamma curve have a characteristic convex shape typical of other measured or theoretical gamma curves. Due to the uncertainty of the curve behavior in the dark areas, evaluate only the parts of the plot starting from about R'G'B'=50 (RGBn=0.2) and up. Here is the Excel worksheet that performs all the calculations.

Figure 24: Linear fit to L* vs. RGB for gamma and L-star TRCs with plotted residuals (top)

Last update: April 15, 2009
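The "log-log(γ)" of Method 1 is simply the slope of a least-squares line through the points (log RGBn, log Yn). A minimal sketch with synthetic data (an illustration of the definition, not the fitting code actually used for Tables 2-5):

```python
import math

def loglog_gamma(rgb, Y):
    """Slope of the least-squares line of log(Yn) vs. log(RGBn)."""
    xs = [math.log(r / 255.0) for r in rgb]
    ys = [math.log(y) for y in Y]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    return sum((x - mx) * (y - my) for x, y in zip(xs, ys)) \
        / sum((x - mx) ** 2 for x in xs)

# An ideal gamma 2.2 ramp with no black-point offset, highlights only
rgb = list(range(165, 256, 5))
Yn = [(r / 255.0) ** 2.2 for r in rgb]
gamma = loglog_gamma(rgb, Yn)          # recovers 2.2 for this synthetic curve

# A non-zero black level already breaks the constant-gamma assumption:
Yn_offset = [0.001 + y for y in Yn]
gamma_offset = loglog_gamma(rgb, Yn_offset)   # pulled below 2.2
```

Restricting the fit to a subrange (as in column 1 of the tables) is just a matter of slicing the input lists before calling the function.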
Homework Help

Posted by Mandy on Thursday, June 26, 2008 at 9:02pm.

b) The water line is given by the equation y = -(2/3)x - 12. Suppose you want to put a pink flamingo lawn ornament in your backyard, but you want to avoid placing it directly over the water line, in case you need to excavate the line for repairs in the future. Could you place it at the point (-4, -10)?

c) What is the slope and y-intercept of the line in part b? How do you know?

d) Suppose you want to add a sprinkler system, and the location of one section of the sprinkler line can be described by the equation y = -(1/2)x - 4. Complete the table for this equation.

x   y   (x, y)

e) What objects might be in the way as you lay the pipe for the sprinkler?

• Algebra - drwls, Friday, June 27, 2008 at 7:42pm

b) Plug x = -4 and y = -10 into the equation y = -(2/3)x - 12. If the equation is not satisfied, then there is no water line below.

c) The slope is the coefficient of the "x" term. The y-intercept is the constant term (-12 in this case). It is the value of y when x = 0.

d) You should be able to do this yourself. All you do is pick a series of x values and calculate the corresponding y values. One point is x = 0, y = -4.
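drwls's checks for parts b-d can be carried out in a few lines. A quick sketch (the x-values for the sprinkler table are arbitrary picks, since the assignment's table entries were not preserved):

```python
from fractions import Fraction

def water_line(x):
    """y = -(2/3)x - 12"""
    return Fraction(-2, 3) * x - 12

# (b) y(-4) = 8/3 - 12 = -28/3, which is not -10, so (-4, -10) is NOT on
# the water line and the flamingo can safely go there.
on_line = water_line(-4) == -10

# (c) slope and y-intercept read straight off y = mx + b
slope, intercept = Fraction(-2, 3), -12

# (d) table for the sprinkler line y = -(1/2)x - 4
sprinkler = {x: Fraction(-1, 2) * x - 4 for x in (-4, -2, 0, 2, 4)}
```

Using exact fractions avoids the floating-point rounding that would otherwise blur the "is the point on the line?" comparison.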
New system allows cloud customers to detect program-tampering

Public release date: 11-Sep-2013
Contact: Andrew Carleen, Massachusetts Institute of Technology

A new version of 'zero-knowledge proofs' allows cloud customers to verify the proper execution of their software with a single packet of data.

CAMBRIDGE, Mass. -- For small and midsize organizations, the outsourcing of demanding computational tasks to the cloud — huge banks of computers accessible over the Internet — can be much more cost-effective than buying their own hardware. But it also poses a security risk: a malicious hacker could rent space on a cloud server and use it to launch programs that hijack legitimate applications, interfering with their execution.

In August, at the International Cryptology Conference, researchers from MIT and Israel's Technion and Tel Aviv University presented a new system that can quickly verify that a program running on the cloud is executing properly. That amounts to a guarantee that no malicious code is interfering with the program's execution.

The same system also protects the data used by applications running in the cloud, cryptographically ensuring that the user won't learn anything other than the immediate results of the requested computation. If, for instance, hospitals were pooling medical data in a huge database hosted on the cloud, researchers could look for patterns in the data without compromising patient privacy.

Although the paper reports new theoretical results, the researchers have also built working code that implements their system. At present, it works only with programs written in the C programming language, but adapting it to other languages should be straightforward. The new work, like much current research on secure computation, requires that computer programs be represented as circuits.
So the researchers' system includes a "circuit generator" that automatically converts C code to circuit diagrams. The circuits it produces, however, are much smaller than those produced by its predecessors, so by itself, the circuit generator may find other applications in cryptography.

Zero knowledge

Alessandro Chiesa, a graduate student in electrical engineering and computer science at MIT and one of the paper's authors, says that because the new system protects both the integrity of programs running in the cloud and the data they use, it's a good complement to the cryptographic technique known as homomorphic encryption, which protects the data transmitted by the users of cloud services.

Joining Chiesa on the paper are Madars Virza, also a graduate student in electrical engineering and computer science; the Technion's Daniel Genkin and Eli Ben-Sasson, who was a visiting professor at MIT for the past two years; and Tel Aviv University's Eran Tromer, who was a postdoc at MIT.

The researchers' system implements a so-called zero-knowledge proof, a type of mathematical game invented by MIT professors Shafi Goldwasser and Silvio Micali and their colleague Charles Rackoff of the University of Toronto. In its cryptographic application, a zero-knowledge proof enables one of the game's players to prove to the other that he or she knows a secret key without actually divulging it.

But as its name implies, a zero-knowledge proof is a more general method for proving mathematical theorems — and the correct execution of a computer program can be redescribed as a theorem. So zero-knowledge proofs are by definition able to establish whether or not a computer program is executing correctly.

The problem is that existing implementations of zero-knowledge proofs — except in cases where they've been tailored to particular algorithms — take as long to execute as the programs they're trying to verify.
That's fine for password verification, but not for a computation substantial enough that it might be farmed out to the cloud. The researchers' innovation is a practical, succinct zero-knowledge proof for arbitrary programs. Indeed, it's so succinct that it can typically fit in a single data packet.

Linear thinking

As Chiesa explains, his and his colleagues' approach depends on a variation of what's known as a "probabilistically checkable proof," or PCP. "With a standard mathematical proof, if you want to verify it, you have to go line by line from the start to the end," Chiesa says. "If you were to skip one line, potentially, that could fool you. Traditional proofs are very fragile in this respect."

"The PCP theorem says that there is a way to rewrite proofs so that instead of reading them line by line," Chiesa adds, "what you can do is flip a few coins and probabilistically sample three or four lines and have a probabilistic guarantee that it's correct."

The problem, Virza says, is that "the current known constructions of the PCP theorem, though great in theory, have quite bad practical realizations." That's because the theory assumes that an adversary who's trying to produce a fraudulent proof has unbounded computational capacity. What Chiesa, Virza and their colleagues do instead is assume that the adversary is capable only of performing simple linear operations.

"This assumption is, of course, false in practice," Virza says. "So we use a cryptographic encoding to force the adversary to only linear evaluations. There is a way to encode numbers into such a form that you can add those numbers, but you can't do anything else. This is how we sidestep the inefficiencies of the PCP theorem."

Written by Larry Hardesty, MIT News Office
Fundamentals of Derivatives Markets
Math Activities for Thinking Logically

Math activities of any kind are a rudimentary form of logical thinking; consider 2 + 2. Understanding higher math such as algebra, then, is a critical step forward for a mind of any age to gain logical thinking skills. Although many students find math difficult, it helps prepare them for the future, to succeed in jobs that may or may not have anything to do with math or equations.

Basic Arithmetic

Mastery of arithmetic helps students develop logic and reasoning skills. According to the Math Rider website, arithmetic helps students learn to think logically and break down problems into distinct steps so they can be solved. It also tests their ability to solve basic math problems encountered in everyday situations. These problems require addition, subtraction, multiplication and division.

Algebra

Algebra enables students to think logically when solving equations. It helps students learn to reason symbolically and introduces abstract thinking. Students who take algebra class learn that symbols such as x and y stand for units that vary and can be used to solve the missing pieces of real-life mathematical puzzles. For example, in the equation 4 + x = 7, "x" is the unknown variable and "3" is the solution to the equation.

Geometry

There are many geometry activities that will improve a person's ability to think logically. Elementary school children should be taught how to identify geometric shapes. For example, students should learn about parallel lines and how to use a ruler, compass and protractor so they can then draw squares, rectangles, parallelograms and circles. In middle school (grades 6 through 8), students should understand and form abstract definitions and understand relationships between different shapes.
According to Homeschoolmath.net, teachers can help middle school students think logically by asking them to study geometric concepts and allowing them to experiment, investigate and play with geometric figures.

Word Problems

Word problems require logical thinking and other skills a student has learned in class, such as reading comprehension, algebra, geometry or trigonometry. Solving word problems often requires a translation of the wording into an equation.

Venn Diagrams

Venn diagrams show relationships between sets or groups of objects that may or may not share something in common. A Venn diagram is a good tool for organizing, evaluating and representing complex relationships visually. Venn diagrams typically consist of two or more overlapping or non-overlapping circles that show the relationship between groups of things. When the circles overlap, items share a specified something in common. For example, let's say that circle A contains all red fruits and circle B contains all green fruits. Then the intersecting portion of the two circles contains fruit that comes in both red and green varieties, such as apples and grapes. Venn diagram activities help students to organize similarities and differences visually. They can help students compare and contrast topics of any subject. According to Scholastic.com, a good math activity may involve using Venn diagrams for comparing and contrasting story elements.
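The red/green fruit example maps directly onto set operations, which is one way to let students check a Venn diagram by computer. A small sketch (the fruit lists are illustrative):

```python
red = {"apple", "cherry", "strawberry", "grape"}
green = {"apple", "lime", "pear", "grape"}

both = red & green        # overlapping region: red AND green varieties
only_red = red - green    # left circle only
only_green = green - red  # right circle only
either = red | green      # everything inside the two circles
```

Each of the four expressions corresponds to one region of the two-circle diagram, so students can compare the computed sets against what they drew.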
Calculate sublimation rate from pressure rise

Hi, I did a pressure rise test on a chamber, where I closed the isolation valve and measured the increase in pressure over 30 seconds. I want to use this information to calculate the rate of accumulation of vapour in the chamber. What I'm looking for is the sublimation rate during freeze drying, so how does this increase in pressure correlate to a sublimation rate? Any suggestions are much appreciated!!
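One common back-of-the-envelope treatment (my suggestion, not from the original thread) is to model the accumulating water vapor as an ideal gas: with the isolation valve closed, dn/dt = (V/RT) * dP/dt, and multiplying by the molar mass of water gives a mass sublimation rate. This assumes the whole pressure rise comes from sublimating vapor (no leaks, no desorption) and that the vapor temperature is known; the numbers below are made-up example values.

```python
R = 8.314           # J/(mol*K), gas constant
M_WATER = 0.018015  # kg/mol, molar mass of water

def sublimation_rate(dP_dt, volume, temperature):
    """Mass sublimation rate in kg/s from a pressure-rise test.

    dP_dt       -- measured pressure rise, Pa/s
    volume      -- free chamber volume, m^3
    temperature -- vapor temperature, K
    Assumes ideal-gas vapor and that the entire rise is sublimate.
    """
    dn_dt = dP_dt * volume / (R * temperature)  # mol/s accumulating
    return dn_dt * M_WATER

# Example: 10 Pa rise over the 30 s test, 0.2 m^3 free volume, 253 K
rate = sublimation_rate(10.0 / 30.0, 0.2, 253.0)
```

In practice the choice of temperature (product vs. chamber vapor) and the non-constant slope of the pressure-rise curve are the main uncertainties, which is why dedicated manometric-temperature-measurement methods exist for freeze drying.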
Finding Polynomials that are equal.

USING THE POLYNOMIAL - SSN(x) = (x-1)(x+2)(x+1)(x-2)(x+1)(x-2)(x+1)(x-2)(x+1)

Suppose you have a SSN in which the digits a1, a3, a5, a7, a9 are all different and suppose a2, a4, a6, a8 are different. Give two SSNs that would have the same SSN polynomial. a1, a3, a5 = the numbers in given order of the SSN you make.

directed @ISSAN94
the definition of Fourier series

Fourier series: an infinite series that involves linear combinations of sines and cosines and approximates a given function on a specified domain.

Fourier series: an infinite trigonometric series of the form ½a0 + a1 cos x + b1 sin x + a2 cos 2x + b2 sin 2x + ..., where a0, a1, b1, a2, b2, ... are the Fourier coefficients. It is used, esp. in mathematics and physics, to represent or approximate any periodic function by assigning suitable values to the coefficients.

Fourier series: an infinite series whose terms are constants multiplied by sine and cosine functions and that can, if uniformly convergent, approximate a wide variety of functions.
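As a concrete instance of the series in the definition: the odd square wave (f(x) = 1 on (0, π), -1 on (-π, 0)) has Fourier coefficients a_n = 0 and b_n = 4/(nπ) for odd n (zero for even n), and its partial sums approach the function away from the jumps. A quick numerical check:

```python
import math

def square_wave_partial(x, terms):
    """Partial Fourier sum of the odd square wave:
    sum of (4 / (n*pi)) * sin(n*x) over the first `terms` odd n."""
    return sum(4.0 / (n * math.pi) * math.sin(n * x)
               for n in range(1, 2 * terms, 2))

approx = square_wave_partial(math.pi / 2, 200)  # converges toward f(pi/2) = 1
```

Near the discontinuities at 0 and ±π the partial sums overshoot (the Gibbs phenomenon), which is why the second definition's caveat about uniform convergence matters.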
Microlocal Tempered Inverse Image and Cauchy Problem

Francesco Tonin
Institut de Mathématiques, Analyse Algébrique, Université Pierre et Marie Curie, Case 82, 4, place Jussieu, F-75252 Paris Cedex 05 -- FRANCE
E-mail: tonin@math.jussieu.fr

Abstract: We prove an inverse image formula for the functor $\tmhom(\cdot,{\cal O})$ of Andronikof \cite{A}, that is, the microlocalization of the functor $\TH(\cdot,{\cal O})$ of tempered cohomology introduced by Kashiwara. As an application, following an approach initiated by D'Agnolo and Schapira, we study the tempered ramified linear Cauchy problem. We deal with ramifications of logarithmic type, or along a swallow's tail subvariety, or at the boundary of the data existence domain.

Keywords: $\cal D$-modules; Cauchy problem; tempered cohomology.

Classification (MSC2000): 32C38, 32S40, 35A10.

Full text of the article: Electronic version published on: 31 Jan 2003. This page was last modified: 27 Nov 2007.

© 1999 Sociedade Portuguesa de Matemática
© 1999–2007 ELibM and FIZ Karlsruhe / Zentralblatt MATH for the EMIS Electronic Edition
Adjective: average  (a-vu-rij or av-rij)
Verb: average  (a-vu-rij or av-rij)
1. Amount to or come to an average, without loss or gain
   "The number of hours I work per week averages out to 40"; - average out
2. Achieve or reach on average
   "He averaged a C"
3. (arithmetic) compute the average of
   - average out
Noun: average  (a-vu-rij or av-rij)
1. (statistics) a statistic describing the location of a distribution
   "it set the average for American homes"; - norm
2. (sport) the ratio of successful performances to opportunities
3. An intermediate scale value regarded as normal or usual
   "he is about average in height"; "the snowfall this month is below average"
Derived forms: averaging, averaged, averages
See also: common, moderate, normal, ordinary
Type of: accomplish, achieve, add up, amount, attain, calculate, cipher, come, compute, cypher, figure [N. Amer], number, ratio, reach, reckon, scale value, statistic, total, work out
Encyclopedia: Average
Re (tilly) 1: 5x5 Puzzle
in reply to 5x5 Puzzle

At first I had ignored this, then decided to do it. It was a more fun challenge than I thought.

There are, not counting the order of the moves, actually 4 solutions in 15 moves for a 5x5 board. What follows is the throw-away script I wrote to find this. By default it solves a 5x5 board. Pass it an argument and it will solve an nxn board. (I tried it in the 1..10 range and found that there is 1 solution for 1, 2, 3, 6, 7, 8 and 10. As I mentioned, there are 4 for 5, plus 16 for 4 and 256 for 9. Don't ask me why, I merely report what I found...)

It would not be hard to extend this to handle arbitrary rectangular boards. I also didn't need the globals, but this is throw-away code and it was easier that way. I make no apologies for the huge numbers of anonymous functions. The fact that I can feasibly find all 64 solutions for an 11x11 board by brute-force search on my old laptop speaks loudly enough for the efficiency of the method...

    use strict;
    use Carp;
    use vars qw($min $max @board @soln @toggles);

    $min = 1;
    $max = shift(@ARGV) || 5;
    @board = map [map 0, $min..$max], $min..$max;
    foreach my $x ($min..$max) {
        foreach my $y ($min..$max) {
            push @toggles, ["$x-$y", ret_toggle_square($x, $y)];
        }
    }

    find_soln();

    sub find_soln {
        if (! @toggles) {
            # Solved!
            print join " ", "Solution:", map $_->[0], @soln;
            print "\n";
        }
        else {
            my $toggle = shift(@toggles);
            # Try with, then without
            if ($toggle->[1]->()) {
                push @soln, $toggle;
                find_soln();
                pop @soln;
            }
            if ($toggle->[1]->()) {
                find_soln();
            }
            unshift @toggles, $toggle;
        }
    }

    # Returns a function that switches one square and returns
    # true iff the new color is black
    sub ret_swap_square {
        my ($x, $y) = @_;
        #print "Generated with $x, $y\n";
        my $s_ref = \($board[$x-1][$y-1]);
        return sub {$$s_ref = ($$s_ref + 1) % 2;};
    }

    # Returns a function that toggles one square and its
    # neighbours, and returns whether or not any neighbour
    # has turned to white and cannot return to black without
    # swapping again with $x lower or $x the same and $y lower.
    sub ret_toggle_square {
        my ($x, $y) = @_;
        my @fin_swaps;
        my @other_swaps;
        unless ($x == $min) {
            push @fin_swaps, ret_swap_square($x - 1, $y);
        }
        if ($x == $max) {
            unless ($y == $min) {
                push @fin_swaps, ret_swap_square($x, $y - 1);
            }
            if ($y == $max) {
                push @fin_swaps, ret_swap_square($x, $y);
            }
            else {
                push @other_swaps, ret_swap_square($x, $y);
                unless ($y == $max) {
                    push @other_swaps, ret_swap_square($x, $y+1);
                }
            }
        }
        else {
            unless ($y == $min) {
                push @other_swaps, ret_swap_square($x, $y - 1);
            }
            push @other_swaps, ret_swap_square($x, $y);
            push @other_swaps, ret_swap_square($x + 1, $y);
            unless ($y == $max) {
                push @other_swaps, ret_swap_square($x, $y + 1);
            }
        }
        return sub {
            $_->() foreach @other_swaps;
            my $ret = 1;
            $ret *= $_->() foreach @fin_swaps;
            return $ret;
        };
    }

Re: Re (tilly) 1: 5x5 Puzzle
by extremely (Priest) on Jan 29, 2001 at 11:57 UTC

Thinking about the nature of the beast, I'd bet on powers of 4 having extra solutions and the squares that have 4xN+1 sides (5, 9, 13, 17) having extra solutions as well. I think the 16x16 will have at least 2**16 solutions since it is actually a 2x2 of 4x4s and the 4x4 had 2**4 solutions. It also wouldn't surprise me if 13x13 had 8**8 solutions.

$you = new YOU; honk() if $you->love(perl)

OK, I figured I would tackle this problem smarter, not harder. With some success.

First of all note that if you specify the first column, the next column is completely determined by the need to make the entries in the first column come out black. The following column is likewise determined by the need to make the entries in the second column come out black. And so on to the end. So it all comes down to choosing the first column correctly so that the n'th column comes out all black. (Or equivalently, so that nothing would go into the n+1'th column.)

But note what happens if you reverse a single choice in the first column.
You get a pattern of switches to the toggles you make through the rest of the puzzle! And the pattern of switches does not depend upon what other parameters you chose. (The final outcome of toggle/not toggle depends on other patterns, but the pattern of toggles you reverse for a single toggle does not.)

To someone with a math background this looks suspiciously like a linear algebra problem over Z/2. (Z/2 is the set of integers mod 2 - i.e. 1's and 0's with addition and multiplication mod 2.) In fact it is. For each choice in the first column we have a pattern of switches it would make to toggles in the n+1'st column. If we start with a blank first column we have a pattern of switches we see in the n+1'st column. We want to find a linear combination (that is, a linear combination in Z/2) of choices in the first column that adds up to that base pattern of switches and cancels it out.

Basic linear algebra tells us that the answer set is either empty or a vector space of some dimension over Z/2. So this doesn't tell us why there are any solutions, but it does tell us that if we have a solution, the number of solutions will be a power of 2. Of course we have seen cases where we have 1 solution, 2**2 solutions, 2**2**2 solutions, 2**2**2**2 solutions, and I suspect that 19 has 2**2**2**2**2 solutions. Why that is seen I don't know. I don't even know why there are any solutions.

However, if I remain interested enough over the next couple of days, I know I can use linear algebra to find how many solutions to the n*n problem exist. That can be O(n**3) rather than the current exponential beast. If I do that I will probably want to do the general n*m problem. And I am not sure how easy my reasoning will be for others to figure out. So I may not do it. But if anyone is interested, tell me about it and I will be more likely to take the effort. :-)

Re (tilly) 2: 5x5 Puzzle
by tilly (Archbishop) on Jan 29, 2001 at 14:09 UTC

After some thought I realized that I could find several speedups. The first and biggest is what order the toggles are searched in. When you choose elements on one side, you can conclude diagonally. But I have to fill in the entire board before drawing interesting conclusions. Therefore by just reordering what path you take, you move the decision closer to the conclusion and speed things up.

The other thing that I changed is that I separated the decision about what paths to take from the toggling. As it stands, for most of the board what you have to do is obvious from examining one board element. But I was toggling twice whether or not I needed it. By separating out that logic I make the logical structure simpler, and I believe it is slightly faster.

So here is a much speeded-up version of the code:

    use strict;
    use vars qw($min $max @board @soln @toggles);

    $min = 1;
    $max = shift(@ARGV) || 5;
    @board = map [map 0, $min..$max], $min..$max;
    foreach my $x ($min..$max) {
        foreach my $y ($min..$max) {
            push @toggles, [
                [$x, $y],
                ret_valid_toggles($x, $y),
                ret_toggle_square($x, $y)
            ];
        }
    }

    # Sort them in an order where conclusions are discovered faster
    @toggles = sort {
        ($a->[0][0] + $a->[0][1]) <=> ($b->[0][0] + $b->[0][1])
            or $a->[0][0] <=> $b->[0][0]
    } @toggles;

    find_soln();

    sub find_soln {
        if (! @toggles) {
            # Solved!
            print join " ", "Solution:", map "$_->[0][0]-$_->[0][1]", @soln;
            print "\n";
        }
        else {
            my $toggle = shift(@toggles);
            foreach ($toggle->[1]->()) {
                if ($_) {
                    push @soln, $toggle;
                    $toggle->[2]->();
                    find_soln();
                    $toggle->[2]->();
                    pop @soln;
                }
                else {
                    find_soln();
                }
            }
            unshift @toggles, $toggle;
        }
    }

    # Returns a function that toggles one square and its
    # neighbours.
:-)

Re (tilly) 2: 5x5 Puzzle
by tilly (Archbishop) on Jan 29, 2001 at 14:09 UTC

After some thought I realized that I could find several speedups. The first and biggest is what order the toggles are searched in. When you choose elements on one side, you can conclude diagonally. But I have to fill in the entire board before drawing interesting conclusions. Therefore by just reordering what path you take you move the decision closer to the conclusion and speed things up. The other thing that I changed is that I separated the decision about what paths to take from the toggling. As it stands for most of the board the decision is obvious from examining one board element what you have to do. But I was toggling twice whether or not I needed it. But by separating out that logic I make the logical structure simpler, and I believe it is slightly faster.

So here is a much speeded up version of the code:

use strict;
use vars qw($min $max @board @soln @toggles);

$min = 1;
$max = shift(@ARGV) || 5;
@board = map [map 0, $min..$max], $min..$max;

foreach my $x ($min..$max) {
    foreach my $y ($min..$max) {
        push @toggles, [
            [$x, $y],
            ret_valid_toggles($x, $y),
            ret_toggle_square($x, $y)
        ];
    }
}

# Sort them in an order where conclusions are discovered faster
@toggles = sort {
    ($a->[0][0] + $a->[0][1]) <=> ($b->[0][0] + $b->[0][1])
        or $a->[0][0] <=> $b->[0][0]
} @toggles;

find_soln();

sub find_soln {
    if (! @toggles) {
        # Solved!
        print join " ", "Solution:",
            map "$_->[0][0]-$_->[0][1]", @soln;
        print "\n";
    }
    else {
        my $toggle = shift(@toggles);
        foreach ($toggle->[1]->()) {
            if ($_) {
                push @soln, $toggle;
                $toggle->[2]->();
                find_soln();
                $toggle->[2]->();
                pop @soln;
            }
            else {
                find_soln();
            }
        }
        unshift @toggles, $toggle;
    }
}

# Returns a function that toggles one square and its
# neighbours.
sub ret_toggle_square {
    my ($x, $y) = @_;
    my @to_swap = square_ref($x, $y);
    unless ($x == $min) {
        push @to_swap, square_ref($x - 1, $y);
    }
    unless ($y == $min) {
        push @to_swap, square_ref($x, $y - 1);
    }
    unless ($x == $max) {
        push @to_swap, square_ref($x + 1, $y);
    }
    unless ($y == $max) {
        push @to_swap, square_ref($x, $y + 1);
    }
    return sub { $$_ = not $$_ foreach @to_swap; };
}

# Returns a test function that returns a list of valid
# toggle states to try
sub ret_valid_toggles {
    my ($x, $y) = @_;
    my @checks;
    if ($min < $x) {
        push @checks, square_ref($x-1, $y);
    }
    if ($max == $x) {
        if ($min < $y) {
            push @checks, square_ref($x, $y-1);
        }
        if ($max == $y) {
            push @checks, square_ref($x, $y);
        }
    }
    if (not @checks) {
        return sub {(0, 1)};
    }
    else {
        my $check = shift @checks;
        if (not @checks) {
            return sub {not $$check};
        }
        else {
            return sub {
                my $val = $$check;
                (grep {$$_ != $val} @checks) ? () : not $val;
            };
        }
    }
}

# Given x, y returns a reference to that square on the board
sub square_ref {
    my ($x, $y) = @_;
    return \($board[$x-1][$y-1]);
}

Removed the ret_swap_square() function. Toggles go much faster if each swap is done directly rather than indirectly through a function call. (Removing 5 extra function calls per toggle matters...) Also dropped the unused Carp that snuck in through habit. (This is throw-away code...)

use strict;
use vars qw($min $max_x $max_y @board @soln @toggles);

$min = 1;
$max_x = shift(@ARGV) || 5;
$max_y = shift(@ARGV) || $max_x;
# The board starts empty and entries will autovivify. :-)

foreach my $x ($min..$max_x) {
    foreach my $y ($min..$max_y) {
        push @toggles, [
            [$x, $y],
            ret_valid_toggles($x, $y),
            ret_toggle_square($x, $y)
        ];
    }
}

# Sort them in an order where conclusions are discovered faster
@toggles = sort {
    ($a->[0][0] + $a->[0][1]) <=> ($b->[0][0] + $b->[0][1])
        or $a->[0][0] <=> $b->[0][0]
} @toggles;

find_soln();

sub find_soln {
    if (! @toggles) {
        # Solved!
        print join " ", "Solution:",
            map "$_->[0][0]-$_->[0][1]", @soln;
        print "\n";
    }
    else {
        my $toggle = shift(@toggles);
        foreach ($toggle->[1]->()) {
            if ($_) {
                push @soln, $toggle;
                $toggle->[2]->();
                find_soln();
                $toggle->[2]->();
                pop @soln;
            }
            else {
                find_soln();
            }
        }
        unshift @toggles, $toggle;
    }
}

# Returns a function that toggles one square and its
# neighbours.
sub ret_toggle_square {
    my ($x, $y) = @_;
    my @to_swap = square_ref($x, $y);
    unless ($x == $min) {
        push @to_swap, square_ref($x - 1, $y);
    }
    unless ($y == $min) {
        push @to_swap, square_ref($x, $y - 1);
    }
    unless ($x == $max_x) {
        push @to_swap, square_ref($x + 1, $y);
    }
    unless ($y == $max_y) {
        push @to_swap, square_ref($x, $y + 1);
    }
    return sub { $$_ = not $$_ foreach @to_swap; };
}

# Returns a test function that returns a list of valid
# toggle states to try
sub ret_valid_toggles {
    my ($x, $y) = @_;
    my @checks;
    if ($min < $x) {
        push @checks, square_ref($x-1, $y);
    }
    if ($max_x == $x) {
        if ($min < $y) {
            push @checks, square_ref($x, $y-1);
        }
        if ($max_y == $y) {
            push @checks, square_ref($x, $y);
        }
    }
    if (not @checks) {
        return sub {(0, 1)};
    }
    else {
        my $check = shift @checks;
        if (not @checks) {
            return sub {not $$check};
        }
        else {
            return sub {
                my $val = $$check;
                (grep {$$_ != $val} @checks) ? () : not $val;
            };
        }
    }
}

# Given x, y returns a reference to that square on the board
sub square_ref {
    my ($x, $y) = @_;
    return \($board[$x-1][$y-1]);
}
Quaternions and Reflections
Copyright © University of Cambridge. All rights reserved. 'Quaternions and Reflections' printed from http://nrich.maths.org/

Quaternions are 4-dimensional numbers of the form $(a,x,y,z)= a+x{\bf i}+y{\bf j}+z{\bf k}$ where $a, x, y$ and $z$ are real numbers, ${\bf i, j}$ and ${\bf k}$ are all different square roots of $-1$ and ${\bf i j} = {\bf k} = {\bf -j i},\ {\bf j k} = {\bf i} = {\bf -k j},\ {\bf k i} = {\bf j} = {\bf -i k}.$ The quaternion $a + x{\bf i} + y{\bf j} + z{\bf k}$ has a real part $a$ and a pure quaternion part $x{\bf i} + y{\bf j}+ z{\bf k}$ where ${\bf i, j}$, and ${\bf k}$ are unit vectors along the axes in ${\bf R^3}$.

(1) For the pure quaternions $v_1 = x_1{\bf i}+y_1{\bf j} + z_1{\bf k}$ and $v_2 = x_2{\bf i} +y_2{\bf j} +z_2{\bf k}$ evaluate the quaternion product $v_1v_2$ and compare your answer to the scalar and vector products $v_1 \cdot v_2$ and $v_1 \times v_2$.

(2) Evaluate the quaternion product $v^2$ where $v=x{\bf i} + y{\bf j} + z{\bf k}$ and $|v| = \sqrt{x^2 + y^2 + z^2} = 1$. Show that, for all real angles $\theta$ and $\phi$, $$v = \cos \theta \cos \phi {\bf i} + \cos \theta \sin \phi {\bf j} + \sin \theta {\bf k}$$ is a square root of $-1$. This gives the set of all the points on the unit sphere in ${\bf R^3}$ and shows that the quaternion $-1$ has infinitely many square roots (which we call unit pure quaternions).

(3) Take any unit pure quaternion $n$ ($n^2=-1$) and consider the plane $\Pi$ through the origin in ${\bf R^3}$ with normal vector $n$ (writing $n = a{\bf i}+b{\bf j}+c{\bf k}$). Then the plane $\Pi$ has equation $a x + b y + c z = 0 = v \cdot n$. If $u_0$ is a point on the plane $\Pi$ then $u_0 \cdot n = 0$ and the points $u_0 + t n$ and $u_0 - t n$ are reflections of each other in the plane.
Show that the quaternion map $F(u) = n u n$ gives reflection in the plane $\Pi$ by showing: (i)$u_0n = -n u_0$ and hence $F(u_0)=u_0$ so that all points on the plane are fixed by this mapping, and (ii) $F(u_0 + t n) = u_0 - t n$ for all scalars $t$. If you want to know how quaternions are used in computer graphics and animation in film making read the Plus Article Maths goes to the movies .
Evaluate the derivative using properties of logarithms where needed.
November 28th 2012, 06:21 PM #1 Oct 2012 South Carolina

As some added context, I am taking Calculus: Early Transcendental Functions and we are studying "The Natural Logarithm as an Integral". The problem presented is as follows:

d/dx [ln (x^5 sin x cos x)]

I've read through the material and even looked at some of the odd numbered problems that have answers but I just can't seem to get started here. Any advice on how to approach this?

Re: Evaluate the derivative using properties of logarithms where needed.

I think I would simplify the application of the chain rule a bit by first using $\sin(x)\cos(x)=\tfrac{1}{2}\sin(2x)$ and writing the expression as: $\frac{d}{dx}\left(\ln(x^5\sin(2x))-\ln(2) \right)$ Now, the derivative of the constant $\ln(2)$ is zero, so we are left with: $\frac{d}{dx}\left(\ln(x^5\sin(2x)) \right)$ See if you can now apply the rule: $\frac{d}{dx}\left(\ln(u(x)) \right)=\frac{1}{u(x)}\cdot\frac{du}{dx}$

Re: Evaluate the derivative using properties of logarithms where needed.

Following Mark's example of simplifying the logarithm, you should simplify even further before trying to take the derivative. Write it as \displaystyle \begin{align*} y = 5\ln{(x)} + \ln{\left[\sin{(x)}\right]} + \ln{\left[\cos{(x)}\right]} \end{align*} and then apply the much simpler chain rules to each term.

Re: Evaluate the derivative using properties of logarithms where needed.

Thanks for the quick reply! Here is what I came up with and hopefully I have not confused myself!
= (1/x^5 sin2x) * x^5 = 1/sin2x

Re: Evaluate the derivative using properties of logarithms where needed.

By following that, I get....

Y = 5 ln (x) + ln[sin(x)] + ln[cos(x)]
5 ln (x) + ln[cos(x)] + ln[-sin(x)]

grrr, and I am definitely sure I am confused now! LOL!

Re: Evaluate the derivative using properties of logarithms where needed.

Let's take a look at both approaches:

My approach: $\frac{d}{dx}(\ln(x^5\sin(2x)))=\frac{x^5(2\cos(2x))+5x^4\sin(2x)}{x^5\sin(2x)}=2\cot(2x)+5x^{-1}$

Prove It's approach: $\frac{d}{dx}(5\ln(x)+\ln(\sin(x))+\ln(\cos(x)))=5x^{-1}+\frac{\cos(x)}{\sin(x)}-\frac{\sin(x)}{\cos(x)}=5x^{-1}+\cot(x)-\tan(x)$

My approach has a more difficult application of the chain rule, but no need to apply double-angle identities (at least if you wish to combine the two trig. terms).
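The two closed forms in the last post agree with each other and with a numerical derivative of the original expression. A quick sanity check (Python, not from the thread; the helper names are mine):

```python
import math

def f(x):
    # the original expression: ln(x^5 sin x cos x)
    return math.log(x**5 * math.sin(x) * math.cos(x))

def df_closed(x):
    # Prove It's form: 5/x + cot x - tan x
    return 5 / x + math.cos(x) / math.sin(x) - math.sin(x) / math.cos(x)

def df_numeric(x, h=1e-6):
    # central-difference approximation of f'(x)
    return (f(x + h) - f(x - h)) / (2 * h)
```

On (0, pi/2) both agree with MarkFL's form 2 cot(2x) + 5/x as well, via the identity cot x − tan x = 2 cot 2x.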
bounded sequences
October 19th 2009, 09:06 AM

Show that a bounded sequence in R that does not converge has more than one subsequential limit. That is, show that a nonconvergent bounded sequence has two subsequences each with a different limit.

October 19th 2009, 01:59 PM

I'd try by contradiction: suppose that every subsequence of the bounded sequence converges to the same limit. If you can show that the sequence converges then you're done.
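For reference, a standard way to finish the argument directly (a sketch via Bolzano–Weierstrass; not from the thread):

```latex
\textbf{Sketch.} Let $(x_n)$ be bounded and nonconvergent. By Bolzano--Weierstrass
it has a convergent subsequence, say $x_{n_k} \to L$. Since $(x_n)$ does not
converge to $L$, there is an $\varepsilon > 0$ with $|x_n - L| \ge \varepsilon$
for infinitely many $n$; these terms form a bounded subsequence, which by
Bolzano--Weierstrass again has a convergent sub-subsequence $x_{m_j} \to L'$.
Every $x_{m_j}$ satisfies $|x_{m_j} - L| \ge \varepsilon$, so $|L' - L| \ge
\varepsilon$ and $L' \ne L$. Thus $(x_n)$ has two subsequences with different
limits. $\square$
```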
Bellaire, TX Algebra Tutor Find a Bellaire, TX Algebra Tutor ...I not only have a mastery of the material, but experience in explaining it to students with a variety of learning styles. Algebra I and II are trivial to me. I have taken and tutored a great many mathematics courses, and algebra II is one of the simpler courses that I can tutor. 37 Subjects: including algebra 1, algebra 2, chemistry, geometry ...I have assisted both FBISD and LCISD students in Geometry. I took 4 semesters of physics in college. I have used physics in my engineering career. 11 Subjects: including algebra 1, algebra 2, chemistry, English ...About me: I started tutoring in college, and now that I am out of school, I continue to do what I love, which is to help struggling students overcome their academic difficulties and to help all students develop the practical learning skills they need to be successful in Life! I also started for... 22 Subjects: including algebra 1, algebra 2, chemistry, calculus ...While doing tutoring on the side in a local high school, I discovered my love for teaching. I then received my Texas teaching certification in science for grades 8-12. I taught high school chemistry and biology for 4 years, but I have experience tutoring other subjects in math and science. 13 Subjects: including algebra 1, chemistry, organic chemistry, SAT math ...I have tutored algebra previously to many students. I use math daily with my occupation, school and personal life as well. I would help my students understand the math concepts through real life experiences and solving word problems. 
19 Subjects: including algebra 1, chemistry, reading, geometry Related Bellaire, TX Tutors Bellaire, TX Accounting Tutors Bellaire, TX ACT Tutors Bellaire, TX Algebra Tutors Bellaire, TX Algebra 2 Tutors Bellaire, TX Calculus Tutors Bellaire, TX Geometry Tutors Bellaire, TX Math Tutors Bellaire, TX Prealgebra Tutors Bellaire, TX Precalculus Tutors Bellaire, TX SAT Tutors Bellaire, TX SAT Math Tutors Bellaire, TX Science Tutors Bellaire, TX Statistics Tutors Bellaire, TX Trigonometry Tutors Nearby Cities With algebra Tutor Arcola, TX algebra Tutors Brookside Village, TX algebra Tutors Bunker Hill Village, TX algebra Tutors Greenway Plaza, TX algebra Tutors Hedwig Village, TX algebra Tutors Highlands, TX algebra Tutors Hilshire Village, TX algebra Tutors Hunters Creek Village, TX algebra Tutors Jacinto City, TX algebra Tutors Meadows Place, TX algebra Tutors Piney Point Village, TX algebra Tutors Southside Place, TX algebra Tutors Spring Valley, TX algebra Tutors Thompsons algebra Tutors West University Place, TX algebra Tutors
Day #16-17 R-scripting templates in knime
April 4, 2011 By Stageverloop Kris » R-En

To install the community nodes in KNIME: Help > Install New Software > http://tech.knime.org/update/community-contributions/release — you'll only need the "R and Groovy scripting extensions for KNIME".

What is it and what does it do? These nodes are very handy if you want to harness the power of R, but don't want to dive into the code. [...]
Have a slice! (answer at bottom of page)

Yes, I know a digit. How I love a drink, raspberry of course, after the heavy chapters involving quantum mechanics. And, O, have a slice, fruitcake or raisin bread, but watch calories carefully, obesity threatens. God I love, I first determine, on seeing grace, not works, produces salvation, forever redeeming, and he now empowers true living in Christ.

A problem from the old Monty Hall game show "Let's Make a Deal" has an answer that not many people get the first time they look at it. The contestant is shown three doors, behind one of which is a prize, and behind the other two no prize. The contestant picks a door, but before opening it the host selects one of the other two doors and opens it, revealing no prize. (This is of course always possible.) Then the host asks the contestant whether she wants to stick with the original door or switch to the third one. Is it better to stay or to switch?

Well, the probability that your first door hides the prize is 1/3, which remains the same if you see an empty door and do not switch. Therefore the probability that you were wrong in choosing that original door is 2/3, which is also the probability that between them, the other two doors conceal the prize. Now since the host always opens a door that does NOT have the prize, that 2/3 probability is no longer shared by 2 doors, but only one. Thus, if you switch the probability of winning is 2/3, but if you do not, it remains only 1/3. The intuitive answer of 1/2 is wrong, though even some mathematicians are likely to disagree until they think carefully. By the way, no matter how many doors there are, it is always better to switch, though the difference gets smaller as the number of doors increases. [1/n versus (n-1)/(n(n-2))] where n is the number of doors.

How can you distinguish among a mathematician, an engineer, and a physicist? Faced with identical situations, their responses differ. A theatre curtain catches fire.
The engineer grabs a fire extinguisher and a hose, covers curtain, stage, and audience with 300% more water and powder than necessary, but she does put the fire out.

The physicist runs to the front, pulls a thermometer from her pocket to measure the temperature of the flame, quickly determines the second and third derivatives of the function describing the pattern of fire in the curtain, looks up the material the curtain is made of in her handbook, does a fast calculation on her pocket computer, then pours 4.6736895 litres of water on the curtain, and the fire just goes out. There is a little wisp of smoke.

The mathematician walks to the front, examines the situation, and announces, "It is possible to put this fire out." Then she turns and walks away.

A Mathematician, a Biologist and a Physicist are sitting in a street cafe watching people going in and coming out of the house on the other side of the street. First they see two people going into the house. Time passes. After a while they notice three persons coming out of the house. The Physicist says: "The measurement wasn't accurate." The Biologist concludes: "They have reproduced." The Mathematician says: "Now if another person enters the house, it'll be empty again!" ---contributed by Linda L. Kerby

From the limerick page:

There once was a student from Trinity,
Tried to take the square root of infinity.
Whilst counting the digits
Was seized with the fidgets.
Gave up math and took up divinity.

Another version:

There once was a student at Trinity
Who computed the square of infinity.
But the number of digits
Gave him the fidgets
So he dropped math and took up divinity.
-- George Gamow in 1, 2, 3... Infinity

Math and the executive

Knowledge is power. Time is money. Since power = work / time we substitute to obtain knowledge = work / money and now, solving for money, we obtain money = work / knowledge so the limit of money as knowledge grows without bound is zero or: The more you know the less you make.
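Returning to the Monty Hall discussion above: the 1/3-versus-2/3 claim, and the n-door generalization, are easy to confirm by simulation. A Python sketch (the function name, trial count, and seed are mine):

```python
import random

def monty(switch, doors=3, trials=100_000, seed=0):
    """Estimate the probability of winning the Monty Hall game."""
    rng = random.Random(seed)
    wins = 0
    for _ in range(trials):
        prize = rng.randrange(doors)
        pick = rng.randrange(doors)
        if switch:
            # The host opens one unpicked, prize-free door; the player
            # then switches to one of the remaining doors at random.
            opened = rng.choice([d for d in range(doors)
                                 if d != pick and d != prize])
            pick = rng.choice([d for d in range(doors)
                               if d not in (pick, opened)])
        wins += (pick == prize)
    return wins / trials
```

With three doors the estimates land near 1/3 (stay) and 2/3 (switch); with four doors, switching wins about 3/8 of the time, matching (n-1)/(n(n-2)).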
There are a number of Murphy's mathematical sayings here.

Arithmetic Tests Through the Decades

A logger cuts and sells a truckload of lumber for $100. His cost of production is four-fifths of that amount, and the taxes on the sale are 7%. What is his net profit?

1970's (new math): A logger exchanges a set L of lumber for a set M of money. The cardinality of M is 100. The set C of production costs contains 20 fewer items than does M. What is the cardinality of the set P of profits?

A logger sells a truckload of lumber for $100. His cost is $80 so his profit is $20. Circle the number 20.

A redneck logger massacres a beautiful stand of trees to make a profit of $20. Write an essay explaining how you feel about this ruthless capitalistic exploitation. Try to explain how the birds and the squirrels feel.

Though they are told of real mathematicians, I have anonymized the following stories, as they are partially apocryphal:

Smith and Jones went to a restaurant to have dinner and discuss theorems. While there, they got into an argument over whether the common person knew much mathematics--Smith on the positive side of the debate and Jones on the negative. Before dessert was served, Jones went off to the ladies' room. Smith, growling to herself and determined to win the argument, called the waitress over. No more than a child, thought Smith. I shall have to resort to subterfuge. "Young lady, I want to play a trick on my colleague. Would you help me?" "Certainly Ma'am. What can I do for you?" "I'll ask you a question, and I want you to answer 'One third x cubed.'" (hesitating) "One thir dex cubed." The pair practiced to get it right, then Jones reappeared and the two ordered dessert. The argument resumed. After several minutes, the waitress returned with the pie and Smith was ready. "I'll prove to you the common person knows a lot of mathematics," she said. (Turning to the waitress) "Young lady, what's the integral of the function x squared?"
Without hesitation, the waitress responded, "One third x cubed." There was a brief pause during which a grin not unlike that of a shark suffused the countenance of Smith, then the young waitress added, in a cold voice, "plus a constant."

It was moving day for the family of Hamel, the famous and notoriously absent-minded mathematician. Since he could scarcely tie his own shoes, much less dress himself, his wife couldn't trust him to assist with anything so practical, so she sent him off to work as usual, but with a scrap of paper on which was written their new address, in hopes he could find his way home. In the university cafeteria that day, inspiration came while eating the quiche. Hamel seized first his napkin then the paper place mat and began frantically scribbling equations. Partway through, he ran out of room. Desperate, he rummaged in his pocket and found a scrap, on which he finished his putative masterpiece. Alas, as all too often happens, the final result was disappointing, and he threw the lot in the trash on his way out (along with the stone cold quiche). Returning home that night, he realized the curtains were missing from the windows. Suddenly, he remembered. Today had been moving day, and sure enough, he had forgotten. He fished in his pocket, but the paper was gone! What to do? He'd never live this one down. Then he noticed a little girl sitting on the front step and had an inspiration. Kids always knew what was going on in the neighbourhood. "Ahem, little girl," he enquired hopefully, "Do you know where the family who used to live in this house have moved to?" The little girl smiled back fetchingly. "Of course. I'm to take you there. Mom said you'd forget, Dad. She sent me over here to bring you home."

The following is from Joel E. Cohen's article, taken from A Random Walk in Science:

Theorem: A horse has an infinite number of legs.
Proof (by intimidation): Horses have an even number of legs. Behind they have two legs and in front fore legs.
This makes six legs, which is certainly an odd number of legs for a horse. But the only number that is both odd and even is infinity. Therefore horses have an infinite number of legs.

Literature provides us with some fascinating examples of mathematicians at play. One of the best is Edwin Abbott's Flatland, which is simultaneously mathematical fiction and biting social satire.

The answer: 3.1415'9265358979'32384626
Post a reply bobbym wrote: You mean of the type Nearly. It is y^(xxxxxx). x and y are single-digit integers >0, and y may = x. So the test is this: For a=y^(xxxxxx) and b=y(x+x+x+x+x+x), the middle digit for Length[a]=odd (or the middle two digits for Length[a]=even) = b. So far, after not looking any further than my example in post #14, all I've found is just that one solution.
227 celsius in farenheit
You asked: 227 celsius in farenheit
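For the record, the conversion the query asks for uses F = C · 9/5 + 32, so 227 °C is 440.6 °F. A one-line sketch (Python; the function name is mine):

```python
def c_to_f(c):
    """Convert a temperature from degrees Celsius to degrees Fahrenheit."""
    return c * 9 / 5 + 32
```

For example, `c_to_f(100)` gives 212.0, and `c_to_f(227)` gives 440.6 (up to float rounding).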
Noncontractible connected topological rings?

Are there any non-contractible connected topological rings? Of course, such a thing cannot be a (topological) algebra over the reals, since scalar multiplication $(t,a)\mapsto ta$ contracts any such algebra to $0$. (I have a vague memory of having a glance at an article by Lurie in which some (for me) rather esoteric theory of higher categorical structures gave rise to topological rings that would have some very nontrivial topology, but I know nothing about that field(s) and, well, I just don't remember... Maybe someone can provide less "esoteric" examples! :) )

ra.rings-and-algebras homotopy-theory gn.general-topology

The examples of rings in my answer probably won't qualify as "non-esoteric", since I only got the idea about a day ago. :-) – Todd Trimble♦ Jan 26 '13 at 20:30

Here is a method for manufacturing such topological rings. The main technical ingredient is a product-preserving functor $$\Theta: \mathrm{Set}^{\Delta^{op}} \to \mathrm{CGHaus}$$ from the category of simplicial sets to the category of compactly generated Hausdorff spaces that is not, however, the usual geometric realization functor. This will almost undoubtedly be unfamiliar, and so will require some preface. The basic idea though is that while the usual geometric realization uses for its topological input the usual interval $I = [0, 1]$, the formal properties of the realization functor, particularly the fact it preserves finite products, still hold upon replacing $I$ by any compact topological interval $L$ and replacing ordinary affine simplices by $L$-valued simplices. This $L$-based realization $\Theta$, being product-preserving, takes simplicial rings to ring objects in $\mathrm{CGHaus}$. By choosing an appropriate $L$ that is connected but not path-connected, we can construct a topological ring that is connected but not path-connected, hence not contractible.
We define an interval to be a linearly ordered set with distinct top and bottom elements, and an interval map as an order-preserving map that preserves the top and bottom. Observe that the usual affine simplex $\sigma_{n-1}$ of dimension $n-1$ can be described as the space of $(n-1)$-tuples $0 \leq x_1 \leq \ldots \leq x_{n-1} \leq 1$ (topologized as a subspace of $I^n$), or in other words as the space of interval maps $[n+1] \to I$ from the finite interval with $n+1$ points to $I$.

Meanwhile, the category $\mathrm{FinInt}$ of finite intervals $[n+1]$ is equivalent to $\Delta^{op}$ (where $\Delta$ is the category of finite nonempty ordinals); indeed we have a functor $\hom(-, [2]): \Delta^{op} \to \mathrm{FinInt}$, where the set of order-preserving maps $\hom([n], [2])$ from the $n$-element ordinal $[n]$ to $[2]$ is given the pointwise order, thus inheriting an interval structure from the interval structure on $[2]$ (where we have $[n+1] \cong \hom_{\Delta}([n], [2])$ as intervals).

The usual geometric realization $R(X)$ of a simplicial set $X$, from a categorical point of view, is a tensor product $X \otimes_\Delta \sigma$ of a "right $\Delta$-module" $X: \Delta^{op} \to \mathrm{Set}$ with a left $\Delta$-module $\sigma: \Delta \to \mathrm{CGHaus}$ (the affine simplex functor): $$\sigma: \Delta \simeq \mathrm{FinInt}^{op} \stackrel{\hom(-, I)}{\to} \mathrm{CGHaus}$$ $$[n] \mapsto [n+1] \mapsto \hom_{\mathrm{Int}}([n+1], I).$$ This tensor product is often described by a coend formula $$R(X) = X \otimes_\Delta \sigma = \int^{[n] \in \Delta} X([n]) \cdot \hom_{\mathrm{Int}}([n+1], I).$$

As is well-known, $R$ is product-preserving. What is perhaps less well-known is that the only thing we need from $I$ to prove this fact is that it's compact Hausdorff and the interval order $\leq$ is a closed subset of $I \times I$. Complete details may be found in the nLab here.
Therefore, if we replace $I$ with another compact Hausdorff topological interval $L$ (so that $\leq_L$ is a closed subset of $L \times L$), we get the same result, that the functor $\Theta = R_L$ defined by the formula $$R_L(X) = \int^{[n] \in \Delta} X([n]) \cdot \hom_{\mathrm{Int}}([n+1], L)$$ is also product-preserving.

Let us take our compact topological interval $L$ to be the end-compactification of the long line (so, adjoin points $-\infty$ and $\infty$ to the ends of the long line). This is connected, but not path-connected because for example there is no path from $\infty$ to any other point.

Now we just turn a crank: start with any denumerable non-trivial ring $R$ in $\mathrm{Set}$ -- I'll take $R = \mathbb{Z}/(2)$ -- and apply a sequence of product-preserving functors, $$\mathrm{Set} \stackrel{K}{\to} \mathrm{Cat} \stackrel{N}{\to} \mathrm{Set}^{\Delta^{op}} \stackrel{R_L}{\to} \mathrm{CGHaus}.$$ (Here $K$ is the functor that takes a set $S$ to the category such that $\hom(x, y)$ is a singleton for any $x, y \in S$; this is right adjoint to the forgetful functor $\mathrm{Cat} \to \mathrm{Set}$ that remembers only the set of objects, and being a right adjoint, $K$ preserves products. The nerve functor $N$ also preserves products.) Since ring objects can be defined in any category with finite products, we have that product-preserving functors transport ring objects to ring objects.

One should draw a picture of the category $K(\mathbb{Z}/(2))$; it's pretty clearly connected, and its nerve will be a connected simplicial set, or indeed a connected simplicial ring. The $L$-based realization of that will thus be a connected colimit of (connected) $L$-based simplices $\sigma_L(n) = \hom([n+1], L)$ (see the nLab here for connected colimits of connected spaces), and so it too will be a connected ring object in $\mathrm{CGHaus}$.

At this point, the overall idea should be pretty clear, and the rest is just some technical mopping-up.
• One technical point is that products in $\mathrm{CGHaus}$ need not be usual topological products (as shown by a famous example of Dowker), so one might object that we could end up not with a topological ring, but some kind of funny ring object in $\mathrm{CGHaus}$. However, in many cases of interest, topological products do coincide with $\mathrm{CGHaus}$ products. This is particularly the case for colimits of countable increasing sequences of compact Hausdorff spaces: their product in $\mathrm{CGHaus}$ is the usual topological product. (The same proof as given by Allen Hatcher for Theorem A.6 here will do.) Thus, what counts here is that $N(K(\mathbb{Z}/(2)))$ is a simplicial set with finitely many cells in each dimension, and $R_L$ applied to this involves taking a countable union of compact Hausdorff spaces, so we are okay here. • A second technical point involves showing that $X = (R_L \circ N \circ K)(\mathbb{Z}/(2))$ is not path-connected, which is intuitively clear, but an idea of proof would be nice. $X$ can be described as a union of nondegenerate simplices, where there are two such simplices in each dimension $n$ (corresponding to paths of length $n$ of the form $0 \to 1 \to 0 \to \ldots$ and $1 \to 0 \to 1 \to \ldots$), and a point in the interior of each such simplex has coordinates given by an increasing chain of length $n$ in a dictionary order, say $(j_1, t_1) < (j_2, t_2) < \ldots < (j_n, t_n)$ where the $j_k$ belong to the order type $-\omega_1 \cup \omega_1$ ($\omega_1$ being the first uncountable ordinal, and $-\omega_1$ is of opposite order type, extending in the "negative" direction), and the $t_k$ belong to $[0, 1)$. Every point of $X$ is an interior point of some unique $n$-simplex. Now if $\alpha: I \to X$ is a path connecting a point in the interior of an $n$-simplex, $n > 0$, to a 0-simplex, then let $(a, b) \subset I$ be a connected component of the open set of $t \in I$ such that $\alpha (t)$ is interior to an $n$-simplex with $n > 0$. 
Since $(a, b)$ has countable cofinality, there is a countable ordinal $\kappa$ such that for every $t \in (a, b)$, the maximum ordinal $|j_k|$ occurring in the coordinate description of $\alpha(t)$ is bounded above by $\kappa$. But $\alpha(a)$, being a 0-cell, has a neighborhood $U$ where every point $p \in U$, $p \neq \alpha(a)$, has a maximum $|j_k|$ (in its coordinate description) greater than $\kappa$, and we have reached a contradiction.

I referenced this answer in my paper nyjm.albany.edu/j/2013/19-5.html – David Roberts Aug 28 '13 at 0:44
@DavidRoberts Cool; thank you very much! – Todd Trimble♦ Aug 28 '13 at 0:53

If you replace "connected" with "path-connected", then no. If 1 is in the same path component as 0, then choose such a path $\gamma$. Then the map $(x,t) \mapsto x \cdot \gamma(t)$ is a contraction of the space. As a result, any counterexample would need to be connected but not locally path-connected. I do not know immediate examples of such a ring.
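The contraction in that last argument is quick to verify, and spelling it out makes the role of the ring structure explicit (nothing here beyond what the answer already states; the orientation of the path is a chosen convention):

```latex
% Suppose 1 and 0 lie in the same path component of the topological ring R,
% and pick a path \gamma : [0,1] \to R with \gamma(0) = 1, \gamma(1) = 0.
H \colon R \times [0,1] \to R, \qquad H(x,t) = x \cdot \gamma(t),
\qquad H(x,0) = x \cdot 1 = x, \qquad H(x,1) = x \cdot 0 = 0 .
```

Continuity of $H$ follows from continuity of the multiplication, so $H$ is a homotopy from $\mathrm{id}_R$ to the constant map at $0$, i.e. a contraction of $R$.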
Matches for: Contemporary Mathematics

1999; 326 pp; softcover
Volume: 235
ISBN-10: 0-8218-1364-1
ISBN-13: 978-0-8218-1364-5
List Price: US$69
Member Price: US$55.20
Order Code: CONM/235

This volume presents the proceedings from the Eleventh Brazilian Logic Conference on Mathematical Logic held by the Brazilian Logic Society (co-sponsored by the Centre for Logic, Epistemology and the History of Science, State University of Campinas, São Paulo) in Salvador, Bahia, Brazil. The conference and the volume are dedicated to the memory of professor Mário Tourasse Teixeira, an educator and researcher who contributed to the formation of several generations of Brazilian logicians. Contributions were made from leading Brazilian logicians and their Latin-American and European colleagues. All papers were selected by a careful refereeing process and were revised and updated by their authors for publication in this volume. There are three sections: Advances in Logic, Advances in Theoretical Computer Science, and Advances in Philosophical Logic. Well-known specialists present original research on several aspects of model theory, proof theory, algebraic logic, category theory, connections between logic and computer science, and topics of philosophical logic of current interest. Topics interweave proof-theoretical, semantical, foundational, and philosophical aspects with algorithmic and algebraic views, offering lively high-level research results.

Graduate students and research mathematicians interested in mathematical logic and foundations; computer scientists.

Part I. Advances in Logic
• J.-Y. Béziau -- The mathematical structure of logical syntax
• X. Caicedo and M. Krynicki -- Quantifiers for reasoning with imperfect information and \(\Sigma^1_1\)-logic
• W. A. Carnielli and M. Lima-Marques -- Society semantics and multiple-valued logics
• J. C. Cifuentes -- A topological approach to the logic underlying fuzzy subset theory
• M. E. Coniglio -- Categorical logic with partial elements
• M. Dickmann and F. Miraglia -- Algebraic K-theory of fields and special groups
• A. Di Nola, G. Georgescu, and S. Sessa -- Closed ideals of \(MV\)-algebras
• K. Došen -- Definitions of adjunction
• N. G. Martínez -- A reduced spectrum for \(MV\)-algebras
Part II. Advances in Theoretical Computer Science
• A. Avellone, M. Ferrari, P. Miglioli, and U. Moscato -- A tableau calculus for Dummett predicate logic
• J. M. Turull Torres -- A hierarchy of unbounded almost rigid classes of finite structures
• P. A. S. Veloso -- Some connections between logic and computer science
Part III. Advances in Philosophical Logic
• D. Krause and S. French -- Opaque predicates, veiled sets and their logic
• O. Bueno -- Truth, quasi-truth and paraconsistency
• G. E. Rosado Haddock -- To be a Fregean or to be a Husserlian: That is the question for Platonists
• C. Pizzi -- A modal framework for consequential implication and the factor law
A note on Wyner's wiretap channel
Results 1 - 10 of 13

- IEEE Transactions on Information Theory, 1983
Cited by 112 (0 self)
time for all neighbors m of j and hence Zj will become (S + 1). Since j has no nodes at hop-distance (S + 1), (7) will hold and this completes the proof of the lemma. Lemma MH-1 a) and Lemma MH-2 a), b) are exactly Theorem MH-1 and this completes the proof of the theorem. REFERENCES [1] R. G. Gallager, "A shortest path routing algorithm with automatic resynch," unpublished note, March 1976. [2] A. Segall, P. M. Merlin, and R. G. Gallager, "A recoverable protocol for loop-free distributed routing," Proc. ICC, June 1978. [3] S. G. Finn, "Resynch procedures and a failsafe network protocol

- In Proc. Annu. Allerton Conf. Communication, Control and Computing, 2006
Cited by 70 (11 self)
The fading broadcast channel with confidential messages (BCC) is investigated, where a source node has common information for two receivers (receivers 1 and 2), and has confidential information intended only for receiver 1. The confidential information needs to be kept as secret as possible from receiver 2. The broadcast channel from the source node to receivers 1 and 2 is corrupted by multiplicative fading gain coefficients in addition to additive Gaussian noise terms. The channel state information (CSI) is assumed to be known at both the transmitter and the receivers. The parallel BCC with independent subchannels is first studied, which serves as an information-theoretic model for the fading BCC. The secrecy capacity region of the parallel BCC is established. This result is then specialized to give the secrecy capacity region of the parallel BCC with degraded subchannels. The secrecy capacity region is then established for the parallel Gaussian BCC, and the optimal source power allocations that achieve the boundary of the secrecy capacity region are derived. In particular, the secrecy capacity region is established for the basic Gaussian BCC. The secrecy capacity results are then

- IEEE Transactions on Information Theory, 2008
Cited by 60 (8 self)
We consider the Gaussian multiple access wire-tap channel (GMAC-WT). In this scenario, multiple users communicate with an intended receiver in the presence of an intelligent and informed wire-tapper who receives a degraded version of the signal at the receiver. We define suitable security measures for this multiaccess environment. Using codebooks generated randomly according to a Gaussian distribution, achievable secrecy rate regions are identified using superposition coding and time-division multiple access (TDMA) coding schemes. An upper bound for the secrecy sum-rate is derived, and our coding schemes are shown to achieve the sum capacity. Numerical results are presented showing the new rate region and comparing it with the capacity region of the Gaussian multiple-access channel (GMAC) with no secrecy constraints, which quantifies the price paid for secrecy.

- IEEE Trans. Inf. Theory, 2008
Cited by 50 (24 self)
We consider the General Gaussian Multiple Access Wire-Tap Channel (GGMAC-WT) and the Gaussian Two-Way Wire-Tap Channel (GTW-WT) which are commonly found in multi-user wireless communication scenarios and serve as building blocks for ad-hoc networks. In the GGMAC-WT, multiple users communicate with an intended receiver in the presence of an intelligent and informed eavesdropper who receives their signals through another GMAC. In the GTW-WT, two users communicate with each other with an eavesdropper listening through a GMAC. We consider a secrecy measure that is suitable for this multi-terminal environment, and identify achievable secrecy regions for both channels using Gaussian codebooks. In the special case where the GGMAC-WT is degraded, we show that Gaussian codewords achieve the strong secret key sum-capacity. For both GGMAC-WT and GTW-WT, we find the power allocations that maximize the achievable secrecy sum-rate, and find that the optimum policy may prevent some terminals from transmission in order to preserve the secrecy of the system. Inspired by this construct, we next propose a new scheme which we call cooperative jamming, where users who are not transmitting according to the sum-rate maximizing power allocation can help the remaining users by "jamming" the eavesdropper. This scheme is shown to increase the achievable secrecy sum-rate, and in some cases allow a previously non-transmitting terminal to be able to transmit with secrecy. Overall,

- IEEE Transactions on Information Theory, 2008
Cited by 28 (1 self)
The general Gaussian multiple-access wiretap channel (GGMAC-WT) and the Gaussian two-way wiretap channel (GTW-WT) are considered. In the GGMAC-WT, multiple users communicate with an intended receiver in the presence of an eavesdropper who receives their signals through another GMAC. In the GTW-WT, two users communicate with each other over a common Gaussian channel, with an eavesdropper listening through a GMAC. A secrecy measure that is suitable for this multiterminal environment is defined, and achievable secrecy rate regions are found for both channels. For both cases, the power allocations maximizing the achievable secrecy sum rate are determined. It is seen that the optimum policy may prevent some terminals from transmission in order to preserve the secrecy of the system. Inspired by this construct, a new scheme cooperative jamming is proposed, where users who are prevented from transmitting according to the secrecy sum rate maximizing power allocation policy "jam" the eavesdropper, thereby helping the remaining users. This scheme is shown to increase the achievable secrecy sum rate. Overall, our results show that in multiple-access scenarios, users can help each other to collectively achieve positive secrecy rates. In other words, cooperation among users can be invaluable for achieving secrecy for the system.

- IEEE Transactions on Information Theory, 1977
Cited by 25 (2 self)
Abstract-Shannon's information-theoretic approach to cryptography is reviewed and extended. It is shown that Shannon's random cipher model is conservative in that a randomly chosen cipher is essentially the worst possible. This is in contrast with error-correcting codes where a randomly chosen code is essentially the best possible. The concepts of matching a cipher to a language and of the trade-off between local and global uncertainty are also developed.

- in Proc. 39th Annu. Asilomar Conf. Signals, Syst., Comput., 2005
Cited by 12 (8 self)
Abstract — We consider the Gaussian Multiple Access Wire-Tap Channel (GMAC-WT) where multiple users communicate with the intended receiver in the presence of an intelligent and informed wire-tapper (eavesdropper). The wire-tapper receives a degraded version of the signal at the receiver. We assume that the wire-tapper is as capable as the intended receiver, and there is no other shared secret key. We consider two different secure communication scenarios: (i) keeping the wire-tapper totally ignorant of the message of any group of users even if the remaining users are compromised, (ii) using the secrecy of the other users to ensure secrecy for a group of users. We first derive the outer bounds for the secure rate region. Next, using Gaussian codebooks, we show the achievability of a secure rate region for each measure in which the wire-tapper is kept perfectly ignorant of the messages. We also find the power allocations that yield the maximum sum rate, and show that upper bound on the secure sum rate can be achieved by a TDMA scheme. We present numerical results showing the new rate region and compare it with that of the Gaussian Multiple-Access Channel (GMAC) with no secrecy constraints.

- URL: http://eprint.iacr.org/2011/074, 2011
Cited by 5 (1 self)
Abstract. The FSB (fast syndrome-based) hash function was submitted to the SHA-3 competition by Augot, Finiasz, Gaborit, Manuel, and Sendrier in 2008, after preliminary designs proposed in 2003, 2005, and 2007. Many FSB parameter choices were broken by Coron and Joux in 2004, Saarinen in 2007, and Fouque and Leurent in 2008, but the basic FSB idea appears to be secure, and the FSB submission remains unbroken. On the other hand, the FSB submission is also quite slow, and was not selected for the second round of the competition. This paper introduces RFSB, an enhancement to FSB. In particular, this paper introduces the RFSB-509 compression function, RFSB with a particular set of parameters. RFSB-509, like the FSB-256 compression function, is designed to be used inside a 256-bit collision-resistant hash function: all known attack strategies cost more than 2^128 to find collisions in RFSB-509. However, RFSB-509 is an order of magnitude faster than FSB-256. On a single core of a Core 2 Quad Q9550 CPU, RFSB-509 runs at 10.67 cycles/byte: faster than SHA-256, faster than 7 of the 14 second-round SHA-3 candidates, and faster than 3 of the 5 SHA-3 finalists. Key words: compression functions, collision resistance, linearization, generalized birthday attacks, information-set decoding, tight reduction to L1 cache.

Cited by 2 (1 self)
Abstract. Fix positive integers B and w. Let C be a linear code over F2 of length Bw. The 2-regular-decoding problem is to find a nonzero codeword consisting of w length-B blocks, each of which has Hamming weight 0 or 2. This problem appears in attacks on the FSB (fast syndrome-based) hash function and related proposals. This problem differs from the usual information-set-decoding problems in that (1) the target codeword is required to have a very regular structure and (2) the target weight can be rather high, so that there are many possible codewords of that weight. Augot, Finiasz, and Sendrier, in the paper that introduced FSB, presented a variant of information-set decoding tuned for 2-regular decoding. This paper improves the Augot–Finiasz–Sendrier algorithm in a way that is analogous to Stern's improvement upon basic information-set decoding. The resulting algorithm achieves an exponential speedup over the previous algorithm. Keywords: Information-set decoding, 2-regular decoding, FSB, binary codes.

For centuries, cryptography has been a valuable asset of the military and diplomatic communities. Indeed, it is so valuable that its practice has usually been shrouded in secrecy and mystery.
2257 -- Balancing Bank Accounts Balancing Bank Accounts Time Limit: 1000MS Memory Limit: 65536K Total Submissions: 962 Accepted: 455 Special Judge Once upon a time there was a large team coming home from the ACM World Finals. The fifteen travellers were confronted with a big problem: In the previous weeks, there had been many money transactions between them: Sometimes somebody paid the entrance fees of a theme park for the others, somebody else paid the hotel room, another one the rental car, and so on. So now the big calculation started. Some people had paid more than others, thus the individual bank accounts had to be balanced again. "Who has to pay whom how much?", that was the question. As such a calculation is a lot of work, we need a program now that will solve this problem next year. The input will contain one or more test cases. Each test case starts with a line containing two integers: the number of travellers n (2<=n<=20) and the number of transactions t (1<=t<=1000). On the next n lines the names of the travellers are given, one per line. The names only consist of less than 10 alphabetic characters and contain no whitespace. On the following t lines, the transactions are given in the format name1 name2 amount where name1 is the person who gave amount dollars to name2. The amount will always be a non-negative integer less than 10000. Input will be terminated by two values of 0 for n and t. For each test case, first print a line saying "Case #i" where i is the number of the test case. Then, on the following lines, print a list of transactions that reverses the transactions given in the input, i.e. balances the accounts again. Use the same format as in the input. Print a blank line after each test case, even after the last one. Additional restrictions: • Your solution must consist of at most n-1 transactions. • Amounts may not be negative, i.e. never output "A B -20", output "B A 20" instead. 
If there is more than one solution satisfying these restrictions, any one is fine.

Sample Input
Donald Dagobert 15
John Mary 100
John Cindy 200
Cindy Mary 40
Cindy Arnold 150

Sample Output
Case #1
Dagobert Donald 15

Case #2
Mary John 140
Cindy John 10
Arnold John 150

Ulm Local 1998
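One standard way to produce a valid answer (a sketch, not necessarily the judge's reference solution; the function name and data layout are my own) is to compute every traveller's net balance and then repeatedly have the person who received the most pay back the person who paid out the most. Each settlement zeroes out at least one traveller, so at most n-1 reverse transactions are emitted:

```python
# Sketch of a greedy settlement strategy for the problem above.
# net[x] = amount x received minus amount x gave; a positive net means
# x must pay money back, a negative net means x is owed money.

def balance(transactions):
    """transactions: list of (giver, receiver, amount) from the input.
    Returns a list of (payer, payee, amount) that balances the accounts."""
    net = {}
    for giver, receiver, amount in transactions:
        net[giver] = net.get(giver, 0) - amount
        net[receiver] = net.get(receiver, 0) + amount
    result = []
    while True:
        payer = max(net, key=net.get)   # received the most, owes it back
        payee = min(net, key=net.get)   # paid out the most, is owed
        if net[payer] == 0:             # all balances settled
            break
        amount = min(net[payer], -net[payee])
        result.append((payer, payee, amount))
        net[payer] -= amount
        net[payee] += amount
    return result
```

On the second sample case this yields Arnold John 150, Mary John 140 and Cindy John 10 — a different order than the sample output, which is fine since the special judge accepts any correct balancing.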
Zhi-Wei Sun's Homepage

Research Interests
Number Theory (especially Combinatorial Number Theory), Combinatorics, Group Theory, Mathematical Logic.

Academic Service
Editor-in-Chief of Journal of Combinatorics and Number Theory, 2009--. You may submit your paper by sending the pdf file to zwsun@nju.edu.cn or to one of the two managing editors Florian Luca and Jiang Zeng. (A sample tex file)
Reviewer for Zentralblatt Math., 2007--.
Reviewer for Mathematical Reviews, 1992--.
Member of the American Mathematical Society, 1993--.
Referee for Proc. Amer. Math. Soc., Acta Arith., J. Number Theory, J. Combin. Theory Ser. A, European J. Combin., Finite Fields Appl., Adv. in Appl. Math., Discrete Math., Discrete Appl. Math., Ramanujan J., SIAM Review, etc.

School Education and Employment History
1980.9--1983.7 The High Middle School Attached to Nanjing Normal Univ.
1983.9--1992.6 Department of Mathematics, Nanjing University (Undergraduate--Ph. D. Candidate; B. Sc. 1987, Ph. D. 1992)
1992.7-- Teacher in Department of Mathematics, Nanjing University
1994.4--1998.3 Associate Professor in Math.
1998.4-- Full Professor in Math.
1999.11-- Supervisor of Ph. D. students

My 60 Open Problems on Combinatorial Properties of Primes

My Conjecture on the Prime-Counting Function
(i) For any integer n>1, π(k*n) is prime for some k = 1,...,n, where π(x) denotes the number of primes not exceeding x. [I have verified this for n up to 10^7. See OEIS A237578.]
(ii) For every positive integer n, π(π(k*n)) is a square for some k = 1,...,n. [I have verified this for n up to 2*10^5. See OEIS A238902 and OEIS A239884.]
(iii) For each integer n>2, π(n-p) is a square for some prime p < n. [I have verified this for n up to 5*10^8. See OEIS A237706 and OEIS A237710.]

My "Super Twin Prime Conjecture"
Each n = 3, 4, ... can be written as k + m with k and m positive integers such that p(k) + 2 and p(p(m)) + 2 are both prime, where p(j) denotes the j-th prime.
[I have verified this for n up to …]

My Conjecture involving Primes and Powers of 2
Every n = 2, 3, ... can be written as a sum of two positive integers k and m such that 2^k + m is prime. [This has been verified for n up to 1.6*10^6.]

My Conjecture on Sums of Primes and Numbers of the Form 2^k-k
Any integer n>3 can be written in the form p + (2^k - k) + (2^m - m), where p is a prime, and k and m are positive integers. [This has been verified for n up to 10^10.]

My Conjecture on Recurrence for Primes (see also Conj. 1.2 of this published paper)
For any positive integer n different from 1, 2, 4, 9, the (n+1)-th prime p[n+1] is just the least positive integer m such that 2s[k]^2 (k=1,...,n) are pairwise distinct modulo m, where s[k] = p[k] - p[k-1] + ... + (-1)^(k-1) p[1]. [I have verified this for n=1,...,100000.]

My Conjecture on Alternating Sums of Consecutive Primes (see also Conj. 1.3 of this published paper)
For any positive integer m, there are consecutive primes p[k],...,p[n] (k < n) not exceeding 2m+2.2*sqrt(m) such that m = p[n] - p[n-1] + ... + (-1)^(n-k) p[k]. [I have verified this for m up to 10^5.]

My Conjecture on Unification of Goldbach's Conjecture and the Twin Prime Conjecture
Any even number greater than 4 can be written as p + q with p, q and prime(p+2) + 2 all prime, where prime(n) denotes the n-th prime.

My Conjecture related to Bertrand's Postulate (see also A185636 and A204065 in OEIS)
Let n be any positive integer. Then, for some k=0,...,n, both n+k and n+k^2 are prime. [I have verified this conjecture for n up to 200,000,000.]

My Conjecture on Twin Primes and Sexy Primes
Every n=12,13,... can be written as p+q with p, p+6, 6q-1 and 6q+1 all prime. [I have verified this for n up to 1,000,000,000.]

My Curious Conjecture on Primes
Each n = 2, 3, ... can be written as x^2 + y, where x and y are nonnegative integers with 2y^2 - 1 prime.
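Claims of this kind are easy to spot-check numerically. For instance, here is a quick (and of course non-conclusive) probe of the powers-of-2 conjecture above — that every n ≥ 2 is k + m with k, m ≥ 1 and 2^k + m prime — over a tiny range; the helper names are mine, and the Miller–Rabin routine is probabilistic for large arguments, which is fine for a sanity check:

```python
# Sanity check (not a proof) of: every n >= 2 can be written as n = k + m
# with k, m >= 1 and 2^k + m prime.  Sun reports verification up to 1.6*10^6;
# the range tried here is deliberately tiny.

def is_probable_prime(q, bases=(2, 3, 5, 7, 11, 13)):
    """Miller-Rabin probable-prime test (probabilistic for large q)."""
    if q < 2:
        return False
    for p in (2, 3, 5, 7, 11, 13):   # quick trial division by small primes
        if q % p == 0:
            return q == p
    d, s = q - 1, 0
    while d % 2 == 0:                # write q - 1 = d * 2^s with d odd
        d //= 2
        s += 1
    for a in bases:
        x = pow(a, d, q)
        if x in (1, q - 1):
            continue
        for _ in range(s - 1):
            x = x * x % q
            if x == q - 1:
                break
        else:
            return False             # a witnesses that q is composite
    return True

def has_split(n):
    """Is there a k with 1 <= k < n such that 2^k + (n - k) is prime?"""
    return any(is_probable_prime(2**k + (n - k)) for k in range(1, n))
```

For most n the very first candidate k = 1 (i.e. testing n + 1) already succeeds, so the search terminates quickly.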
My Conjecture on Prime Differences (see also arXiv:1211.1588 for more conjectures on primes)
Any integer n>7 can be written as p+q, where q is a positive integer, and p and 2pq+1 are primes. In general, for each m=0,1,2,..., any sufficiently large integer n can be written as x+y, where x and y are positive integers with x-m, x+m and 2xy+1 all prime. [I have verified the first assertion for n up to 1,000,000,000. The second assertion implies that for any positive even integer d there are infinitely many prime pairs {p,q} with p-q=d.]

My Conjectures on Representations via Sparse Primes
Each integer n>3 can be written as p+q with p, 2p^2-1 and 2q^2-1 all prime, where q is a positive integer. (See OEIS A230351.)

My Conjecture on Primes of the Form a^n+b

My 18 Conjectures in Additive Combinatorics
Let A be a subset of an additive abelian group G with |A|=n>3. Then there is a numbering a[1], ..., a[n] of all the elements of A such that a[1]+a[2]+a[3], ..., a[n-1]+a[n]+a[1], a[n]+a[1]+a[2] are pairwise distinct. [We have proved this for any torsion-free abelian group G; see also A228772 in OEIS.]

My 15 Conjectures on Determinants (see also arXiv:1308.2900 and Three mysterious conjectures on Hankel-type determinants)

My 100 Open Conjectures on Congruences

My 181 Conjectural Series for Powers of π and Other Constants (Announcements: 1, 2, 3, 4, 5, 6)
Let C(n,k) denote the binomial coefficient n!/(k!(n-k)!) and let T[n](b,c) denote the coefficient of x^n in (x^2+bx+c)^n. Then
∑[k ≥ 0] (126k+31) T[k](22,21^2)^3 / (-80)^(3k) = 880*sqrt(5)/(21π),
∑[k ≥ 0] (24k+5) C(2k,k) T[k](4,9)^2 / 28^(2k) = 49(sqrt(3)+sqrt(6))/(9π),
∑[k ≥ 0] (2800512k+435257) C(2k,k) T[k](73,576)^2 / 434^(2k) = 10406669/(2 sqrt(6) π),
∑[k > 0] (28k^2-18k+3) (-64)^k / (k^5 C(2k,k)^4 C(3k,k)) = -14 ∑[n > 0] 1/n^3,
∑[n ≥ 0] (28n+5) 24^(-2n) C(2n,n) ∑[k ≥ 0] 5^k C(2k,k)^2 C(2(n-k),n-k)^2 / C(n,k) = 9(sqrt(2)+2)/π.
∑[n ≥ 0](40n^2+26n+5)(-256)^-n C(2n,n)^2∑[k ≥ 0 ]C(n,k)^2C(2k,k)C(2(n-k),n-k) = 24/π^2. My Hypothesis on the Parities of Ω(n)-n (see also a public message and arXiv:1204.6689) We have |{n ≤ x: n-Ω(n) is even}| > |{n ≤ x: n-Ω(n) is odd}| for any x ≥ 5, where Ω(n) denotes the total number of prime factors of n (counted with multiplicity). Moreover, ∑[n ≤ x](-1)^n-Ω(n) > sqrt(x) for any x ≥ 325. [I have shown that the hypothesis implies the Riemann Hypothesis, and verified it for x up to 10^11.] My Conjecture on Sums of Primes and Triangular Numbers Each natural number not equal to 216 can be written in the form p+T[x] , where p is 0 or a prime, and T[x]=x(x+1)/2 is a triangular number. [This has been verified up to 1,000,000,000,000.] In general, for any a,b=0,1,2,... and odd integer r, all sufficiently large integers can be written in the form 2^ap +T[x] , where p is either zero or a prime congruent to r modulo 2^b. My Conjecture on Sums of Polygonal Numbers For each integer m>2, any natural number n can be expressed as p[m+1](x[1]) + p[m+2](x[2]) + p[m+3](x[3]) + r with x[1],x[2],x[3] nonnegative integers and r among 0,...,m-3, where p[k](x)=(k-2)x (x-1)/2+x (x=0,1,2,...) are k-gonal numbers. In particular, every natural number is the sum of a square, a pentagonal number and a hexagonal number. [For m=3, m=4,...,10, and m=11,...,40, this has been verified for n up to 30,000,000, 500,000 and 100,000 respectively.] My Conjecture on Disjoint Cosets (see Conjecture 1.2 of this published paper) Let a[1]G[1] , ..., a[k]G[k] (k>1) be finitely many pairwise disjoint left cosets in a group G with all the indices [G:G[i]] finite. Then, for some distinct i and j the greatest common divisor of [G:G[i]] and [G:G[j]] is at least k. My Conjecture on Covers of Groups Let a[1]G[1] , ..., a[k]G[k] be finitely many left cosets in a group G which cover all the elements of G at least m>0 times with a[j]G[j] irredundant. Then k is at least m+f([G:G[j]]), where f (1)=0 and f(p[1] ... 
p[r]) =(p[1]-1) + ... +(p[r]-1) for any primes p[1] , ..., p[r] . My Conjecture on Linear Extension of the Erdos-Heilbronn Conjecture Redmond-Sun Conjecture (in PlanetMath.)
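The conjecture on sums of primes and triangular numbers above is another one that invites a quick spot check. The script below (helper names mine, range deliberately small, and of course no substitute for Sun's verification up to 10^12) searches for natural numbers with no representation p + x(x+1)/2 with p zero or prime:

```python
# Spot check (not a proof): every natural number except 216 should be
# expressible as p + x(x+1)/2 with p equal to 0 or a prime.

def is_prime(q):
    if q < 2:
        return False
    i = 2
    while i * i <= q:     # simple trial division; fine for small q
        if q % i == 0:
            return False
        i += 1
    return True

def splits_as_prime_plus_triangular(n):
    """Does n = p + T[x] for some x >= 0 with p zero or prime?"""
    x = 0
    while x * (x + 1) // 2 <= n:
        p = n - x * (x + 1) // 2
        if p == 0 or is_prime(p):
            return True
        x += 1
    return False

exceptions = [n for n in range(1000) if not splits_as_prime_plus_triangular(n)]
```

In the range 0..999 the only exception found is 216, matching the statement of the conjecture.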
Having major issues with this problem - can someone get me started?

June 9th 2012, 09:00 PM
"The relationship between the load on a reel, L (kilonewtons per metre, L kNm-1), and the reel diameter, x metres, is modelled by a graph consisting of two parabolic arcs, AB and BC."
Picture of graph with L on the y-axis and x (metres) on the x-axis, with a parabola through the points A, D, E, F, B and C in order.
"Arc AB is part of the parabola L = px^2 + qx + r. Points D(0.1, 2.025), E(0.2, 2.9) and F(0.3, 3.425) lie on the arc AB. Set up a system of three simultaneous equations relating to p, q and r. DO NOT SOLVE EQUATIONS."
Can anyone give me any help with this without the graph? Thank you so much!

June 10th 2012, 04:44 AM
Re: Having major issues with this problem - can someone get me started?
If three points $(x_1,y_1)$, $(x_2,y_2)$, $(x_3,y_3)$ lie on the parabola $y = px^2 + qx + r$, then ...
$y_1 = px_1^2 + qx_1 + r$
$y_2 = px_2^2 + qx_2 + r$
$y_3 = px_3^2 + qx_3 + r$
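For what it's worth, once the three equations in the reply above are written down, a computer can handle the rest mechanically. The assignment says not to solve the system by hand, so treat this purely as a way to check your setup; the point values are from the post, exact rationals avoid floating-point noise, and the elimination code is a generic sketch:

```python
# Set up the three equations L = p x^2 + q x + r at D, E, F and solve
# the resulting 3x3 linear system by Gauss-Jordan elimination.
from fractions import Fraction as F

points = [(F(1, 10), F(2025, 1000)),   # D(0.1, 2.025)
          (F(2, 10), F(29, 10)),       # E(0.2, 2.9)
          (F(3, 10), F(3425, 1000))]   # F(0.3, 3.425)

# Augmented rows [x^2, x, 1 | L], one per point.
rows = [[x * x, x, F(1), L] for x, L in points]

for i in range(3):
    pivot = rows[i][i]
    rows[i] = [v / pivot for v in rows[i]]          # normalize pivot row
    for j in range(3):
        if j != i:
            factor = rows[j][i]                     # eliminate column i
            rows[j] = [v - factor * w for v, w in zip(rows[j], rows[i])]

p, q, r = (rows[i][3] for i in range(3))
```

Running it gives p = -35/2, q = 14, r = 4/5, and substituting back reproduces D, E and F exactly.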
{"url":"http://mathhelpforum.com/algebra/199860-having-major-issues-problem-can-someone-get-me-started-print.html","timestamp":"2014-04-20T18:26:49Z","content_type":null,"content_length":"5590","record_id":"<urn:uuid:1df404cc-540f-4a70-b163-eac85980bf61>","cc-path":"CC-MAIN-2014-15/segments/1397609539066.13/warc/CC-MAIN-20140416005219-00601-ip-10-147-4-33.ec2.internal.warc.gz"}
EEQT, Quantum Jumps and Quantum Fractals

Note: These two applets crash on certain browsers. Internet Explorer seems to be OK. The applets crash with Opera (for reasons that are not understood) and with older Netscape. Therefore the best thing is to download the .jar files from , unpack them, download the Java SDK appropriate to the operating system, install it, and then run the .jar files with "java -jar qf.jar" or "java -jar wave.jar".

EEQT or Event-Enhanced Quantum Theory

I. EEQT stands for "Event Enhanced Quantum Theory" - the term introduced by Ph. Blanchard and A. Jadczyk to describe the piecewise deterministic algorithm replacing the Schrödinger equation for continuously monitored quantum systems (and we suspect all quantum systems fall under this category).

From the EEQT FAQ:

1. Isn't it so that EEQT is a step backward toward classical mechanics, which we all know is inadequate?

EEQT is based on a simple thesis: not all is "quantum", and there are things in this universe that are NOT described by a quantum wave function. One example, going to an extreme: the wave function itself. Physicists talk about first and second quantization. Sometimes, with considerable embarrassment, a third quantization is considered. But that is usually the end of that. Even the most orthodox quantum physicist controls at some point his "quantize everything" urge - otherwise he would have to "quantize his quantizations" ad infinitum, never being able to communicate his results to his colleagues. The part of our reality that is not and must not be "quantized" deserves a separate name. In EEQT we are using the term "classical." This term, as we use it, must be understood in a special, more-general-than-usually-assumed way. "Classical" is not the same as "mechanical." Neither is it the same as "mechanically deterministic." When we say "classical" - it means "outside of the restricted mathematical formalism of Hilbert spaces, linear operators and linear evolutions." It also means: the value of the "Planck constant" does not govern classical parameters. Instead, in a future theory, the value of the Planck constant will be explained in terms of a "non-quantum" paradigm.

12. The name "Event Enhanced Quantum Theory" is misleading.

As we have stated: "EEQT is the minimal extension of orthodox quantum theory that allows for events." It DOES enhance quantum theory by adding the new terms to the Liouville equation. When the coupling constant is small, events are rare and EEQT reduces to orthodox quantum theory. Thus it IS an enhancement. (...)

II. Most of the essential papers dealing with various aspects of EEQT are available online.

III. EEQT allows us to simulate "Nature's Real Working". Of course EEQT is an incomplete theory, yet it tries to simulate real-world events with an underlying quantum substructure. The algorithm of EEQT is non-local, which suggests that Nature itself, at its deeper level, is non-local too.

IV. Normally students learning quantum mechanics are being taught that it is impossible to measure position and momentum of a quantum particle. They learn how to derive Heisenberg's uncertainty relations, and they are told that these mathematical relations have such-and-such interpretation. Some are told that the interpretation itself is disputable. In EEQT all of the probabilistic interpretation of quantum theory, including Born's interpretation of the wave function, is derived from the dynamics. EEQT allows us to simulate and predict the behavior of a quantum system when several, as one normally calls them, incommensurable observables are being measured. The fact is that in such a situation the dynamics is chaotic, and no joint probability distribution exists.
That explains why ordinary quantum mechanics rightly noticed the problems with defining such a distribution. (For visualization purposes physicists, especially those dealing with quantum chaos, often use Wigner's distribution (which is not positive definite) or Husimi's distribution (which does not reproduce marginal distributions).)

Quantum Jumps

According to EEQT quantum jumps are not directly observable. What we see are the accompanying "events". This part is somewhat tricky, and I will try to explain the trickiness here, in a few paragraphs, but without any hope that there will be even one person who will understand what I mean. Well, perhaps mathematicians will, but are they going to read this page? I doubt it. Physicists certainly will think that it is too weird. And they have better things to do than following someone's weird ideas - as every physicist with guts has weird ideas of his/her own! But I would feel guilty if I did not give it a try. So here it is.

Physicists do consider quantum jumps. In particular those who deal with theoretical quantum optics and/or quantum computing and information. But these quantum jumps are not being taken as "real". If not for any other reason, then because there are infinitely many jump processes that can be associated with a given Liouville equation, and there is no good reason to choose one rather than another. Thus discontinuous quantum jumps in theoretical quantum optics are considered mainly as a convenient numerical method for solving the continuous Liouville equation.

It is not so in EEQT. But EEQT splits the world into a quantum and a classical part, and quantum physicists deny that the classical part exists. They think all is quantum - the same way Ptolemaic physicists thought that all is perfectly round. Can we propose a clever idea that will show that not all is quantum? Indeed, according to quantum physics the only thing that exists is the quantum wave function.
Now, let us ask this: is the wave function itself a classical or a quantum object? That is, we ask: is the location of the wave function in the Hilbert space governed by classical or by quantum laws? Most quantum physicists would pretend they do not understand the question. Some will understand, and will answer: "sure, there is an uncertainty in the state vector, but that is an altogether different story." They will point me to Braginsky or Vaidman or some other, more recent, paper - but they will not answer my question: is the quantum wave function a classical or a quantum object? Is it an object at all? And if it is an object, then what kind of animal is it, and where does it fit? Philosophers perhaps will point me to Eccles and Popper, but this is not an answer either.

What is my answer to my own question? I do not know the answer, but I can speculate. So, here it is: we are talking about models. Models of "Reality". Perhaps nothing but models "exists", but that is not our problem now. If all is about models, then we can think of a model in which the wave function is both classical and quantum. In which the Wave Function "observes" itself - as John Archibald Wheeler has imagined:

"The universe viewed as a self-excited circuit. Starting small (thin U at upper right), it grows (loop of U) to observer participancy - which in turn imparts 'tangible reality' (cf. the delayed-choice experiment of Fig. 22.9) to even the earliest days of the universe"

"If the views that we are exploring here are correct, one principle, observer-participancy, suffices to build everything. The picture of the participatory universe will flounder, and have to be rejected, if it cannot account for the building of the law; and space-time as part of the law; and out of law substance. It has no other than a higgledy-piggledy way to build law: out of statistics of billions upon billions of observer participancy each of which by itself partakes of utter randomness." (J.A.
Wheeler, "Beyond the Black Hole", in "Some Strangeness in the Proportion", Ed. Harry Woolf, Addison-Wesley, London 1980)

To observe itself, "It" must split into two "personalities", a quantum one and a classical one. So, here comes the model: consider a pair of wave functions, the function trying to determine its own shape. One element of the pair is considered to be "quantum" - as it determines probabilities and quantum jumps - while the second element of the pair is interpreted as a classical one - its shape is the classical variable (see "Topics in Quantum Dynamics").

And here we come to the mathematical description of quantum jumps in EEQT. Of course the simplest situation is when we separate jumps from the continuous evolution. To analyze this particular situation let us think of the simplest possible "toy model". Physicists like toy models, as they usually provide us with explicit solutions whose properties we can study in order to try to understand more complex, real-world situations, where the problems get so complicated that there is no hope even for an approximate solution. Physicists usually replace real-world problems with other problems, built out of their toy models, which are still simple enough to be solvable, even if only approximately, and yet mirror some essential features of the "true problems."

So, what would be the simplest toy model to play with that teaches us something about quantum jumps? The quantum system, to be nontrivial, must live in a Hilbert space of at least two complex dimensions. The classical system must have at least two states. Such a toy model was indeed studied in connection with the Quantum Zeno effect, where it was demonstrated that a flip-flop detector strongly coupled (that is, "under intensive observation" - a watched pot never boils...) to a two-state quantum system effectively stops the continuous quantum evolution. This model is not interesting, though, if we want to study pure quantum jumps.
Here we need a more complicated model, and that is how the "tetrahedral model" was developed. It was found that it leads to chaotic dynamics and to fractals of a new type: fractals drawn by a quantum brush on the quantum canvas - a complex projective space. And that is how we come to quantum fractals.

Quantum Fractals

(See "Quantum Jumps, EEQT and the Five Platonic Fractals.") Here let us describe the algorithm and the Java applet. (The applet is a part of an OpenSource project, so additions and enhancements will probably follow its release.) The canvas is the surface of the unit sphere. In coordinates its points are represented by vectors n = (n^1, n^2, n^3) of unit length, thus (n^1)^2 + (n^2)^2 + (n^3)^2 = 1. There are five Platonic solids: the tetrahedron (N=4), octahedron (N=6), cube (N=8), icosahedron (N=12) and dodecahedron (N=20), where N is the number of vertices. They have equal faces, bounded by equilateral polygons. It was Euclid who proved that only five such solids can exist in a three-dimensional world. In his Mysterium Cosmographicum (1595) Johannes Kepler attempted to account for the orbits of the six then-known planets by radii of concentric spheres circumscribing or inscribing the solids.
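The page does not reproduce the applet's jump law, so the following is only an illustrative stand-in, not the EEQT dynamics: a chaos-game-style iteration on the unit sphere that repeatedly pulls a point toward a randomly chosen tetrahedron vertex and renormalizes. It shows how an iterated map with N = 4 attractors can paint a pattern on the spherical canvas; the contraction factor and the uniform vertex choice are my assumptions.

```python
import random

# Vertices of a regular tetrahedron inscribed in the unit sphere (N = 4).
RAW = [(1, 1, 1), (1, -1, -1), (-1, 1, -1), (-1, -1, 1)]

def normalize(p):
    s = sum(c * c for c in p) ** 0.5
    return tuple(c / s for c in p)

VERTS = [normalize(v) for v in RAW]
ALPHA = 0.5  # assumed contraction toward the chosen vertex

def step(p):
    """One jump: move part-way toward a random vertex, project back to the sphere."""
    v = random.choice(VERTS)
    return normalize(tuple((1 - ALPHA) * pc + ALPHA * vc for pc, vc in zip(p, v)))

random.seed(1)
p = normalize((0.3, 0.4, 0.5))
orbit = []
for _ in range(10_000):
    p = step(p)
    orbit.append(p)

# Every orbit point stays on the unit sphere:
print(max(abs(sum(c * c for c in q) - 1.0) for q in orbit))
```

Plotting `orbit` (e.g. a 3D scatter) would reveal the pattern the iteration draws on the sphere.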
{"url":"http://quantumfuture.net/quantum_future/qfractals.htm","timestamp":"2014-04-19T22:06:21Z","content_type":null,"content_length":"24635","record_id":"<urn:uuid:6d0fe307-c66f-44a1-9e66-daba31b2e453>","cc-path":"CC-MAIN-2014-15/segments/1398223211700.16/warc/CC-MAIN-20140423032011-00265-ip-10-147-4-33.ec2.internal.warc.gz"}
9.5.3 Reduced-Communication Pivoting

At each stage of the concurrent LU factorization, the pivot element is chosen by the user-defined pivot function. Then, the pivot row (new row of U) must be broadcast, and the pivot column (new column of L) must be computed and broadcast on the logical process grid (cf. Figure 9.12), vertically and horizontally, respectively. Note that these are interchangeable operations. We use this degree of freedom to reduce the communication complexity of particular pivoting strategies, while impacting the effort of the LU factorization itself negligibly.

We define two "correctness modes" of pivoting functions. In the first correctness mode, "first row fanout," the exit conditions for the pivot function are: all processes must know which row is eliminated at the k-th step of the factorization. From this fact, each process can derive, without communication, the process row and column in which the k-th elimination row and column are stored. Furthermore, the pivot process looks up the pivot value. Hence, preset pivoting satisfies the requirements of this correctness mode also. For "first row fanout," this universal knowledge permits the fanout of the pivot row (new row of U). In addition, the multiplier (new column of L) column may be correctly computed and broadcast. Along with the multiplier column broadcast, we include the pivot value. After this broadcast, all processes have the correct indices and values for the elimination step.

For the second correctness mode, "first column fanout," the exit conditions for the pivot function are: all processes must know which column holds the pivot [Skjellum:90c]. For "first column fanout," the entire pivot process column knows the pivot value and the local column of the pivot. Hence, the multiplier column may be computed by dividing the pivot matrix column by the pivot value. This column of L can then be broadcast horizontally, including the pivot value. This second broadcast completes the needed information in each process for effecting the k-th elimination step. Thus, when using partial row or partial column pivoting, only local combines of the pivot process column (respectively, row) are needed. The other processes don't participate in the combine, as they must without this methodology. Preset pivoting implies no pivoting communication, except very occasionally (e.g., 1 in 5000 times), as noted in [Skjellum:90c], to remove memory unscalabilities. This pivoting approach is a direct savings, gained at a negligible additional broadcast overhead. See also [Skjellum:90c].

Guy Robinson Wed Mar 1 10:19:35 EST 1995
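The broadcasts described above decorate a standard elimination step. For reference, here is a minimal sequential LU factorization with partial (row) pivoting, my own sketch rather than the text's distributed code: it shows the pivot selection, row swap, multiplier column, and trailing update that the concurrent version distributes across the process grid.

```python
def lu_partial_pivot(A):
    """In-place LU with partial pivoting on a list-of-lists matrix.

    Returns the row permutation as a list of indices. After the call,
    the upper triangle of A holds U and the strict lower triangle holds
    the multipliers (the columns of L).
    """
    n = len(A)
    perm = list(range(n))
    for k in range(n):
        # Pivot selection: largest magnitude in column k on/below the diagonal.
        p = max(range(k, n), key=lambda i: abs(A[i][k]))
        A[k], A[p] = A[p], A[k]
        perm[k], perm[p] = perm[p], perm[k]
        # Multiplier column (new column of L) and trailing-submatrix update.
        for i in range(k + 1, n):
            A[i][k] /= A[k][k]
            for j in range(k + 1, n):
                A[i][j] -= A[i][k] * A[k][j]
    return perm

M = [[2.0, 1.0, 1.0],
     [4.0, 3.0, 3.0],
     [8.0, 7.0, 9.0]]
perm = lu_partial_pivot(M)
print(perm)
```

In the concurrent setting, the pivot search becomes a combine over the pivot process column, and the swap plus multiplier computation become the row/column broadcasts discussed above.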
{"url":"http://www.netlib.org/utk/lsi/pcwLSI/text/node189.html","timestamp":"2014-04-17T04:23:11Z","content_type":null,"content_length":"8424","record_id":"<urn:uuid:644ee08e-d231-45ee-aa55-b1e97e0a9719>","cc-path":"CC-MAIN-2014-15/segments/1397609526252.40/warc/CC-MAIN-20140416005206-00267-ip-10-147-4-33.ec2.internal.warc.gz"}
Laveen Math Tutor Find a Laveen Math Tutor ...I love being able to help a student understand a subject with which they have previously struggled. I have tutored students in most math and science courses, and I am comfortable with all grade levels, from elementary school to college. My teaching style varies according to the individual needs of the student. 28 Subjects: including trigonometry, ACT Math, algebra 1, algebra 2 ...I think my two main strengths as a tutor are my ability to impart this understanding using visual aids and analogies, and my ability to break down complex problems to simple and easy ones. I am great at organizing problems in a way that makes them become easy, and I can teach you how to do it to... 14 Subjects: including algebra 1, algebra 2, calculus, geometry ...I teach Literacy Strategies, English Fundamentals, and sophomore English classes. As a Special Educator I am qualified to teach English from elementary to high school levels. As part of my education to become a certified Special Education teacher I took classes on helping students who are English Language Learners. 40 Subjects: including algebra 1, algebra 2, dyslexia, biology ...Much of my tutoring has been for ESL students. I have had many years preparing students for the SAT, GRE, GED, ACT and various military service tests. While the ultimate results of tutoring lie with the student, I have been successful in helping to raise test scores significantly. 34 Subjects: including algebra 1, English, SAT math, prealgebra ...This past year I taught math to students in grades 1 through 12. I have a BS Accounting degree from ASU. I have over 30 years in accounting and over 10 years with QuickBooks Pro. 9 Subjects: including algebra 1, precalculus, SAT math, prealgebra
{"url":"http://www.purplemath.com/Laveen_Math_tutors.php","timestamp":"2014-04-19T23:15:02Z","content_type":null,"content_length":"23456","record_id":"<urn:uuid:c88b38fc-b592-4a56-84db-06ca8a460471>","cc-path":"CC-MAIN-2014-15/segments/1397609537754.12/warc/CC-MAIN-20140416005217-00420-ip-10-147-4-33.ec2.internal.warc.gz"}
Troubling Doubling at School Math brain teasers require computations to solve. Category: Math Submitted By: dalfamnest Read the little poem and answer its question if you can. The number of girls who do wear a watch is double the number who don't. But the number of boys who do not wear a watch is double the number who do. If I tell you the number of girls in my class is double the number of boys, Can you tell me the number I teach? Here's a clue: More than 20; below 32!
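The riddle pins the answer down with divisibility: girls = 2 × boys, each group must split into watch/no-watch groups in a 2:1 ratio (so each count is a multiple of 3), and the total lies strictly between 20 and 32. A brute-force sketch of my own (spoiler: it prints the unique class size):

```python
# Brute-force the watch riddle: find class sizes satisfying every constraint.
solutions = []
for boys in range(1, 32):
    girls = 2 * boys                 # girls are double the boys
    total = boys + girls
    if not (20 < total < 32):        # "More than 20; below 32!"
        continue
    # girls with a watch are double those without -> girls % 3 == 0
    # boys without a watch are double those with  -> boys % 3 == 0
    if girls % 3 == 0 and boys % 3 == 0:
        solutions.append(total)
print(solutions)  # -> [27]
```

So the class has 27 students: 9 boys and 18 girls.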
{"url":"http://www.braingle.com/wii/brainteasers/teaser.php?id=49613","timestamp":"2014-04-18T23:34:53Z","content_type":null,"content_length":"10340","record_id":"<urn:uuid:0598ec90-6918-4a6f-9d2d-4b59d8d97fe6>","cc-path":"CC-MAIN-2014-15/segments/1397609535535.6/warc/CC-MAIN-20140416005215-00342-ip-10-147-4-33.ec2.internal.warc.gz"}
Geometric Series (Sum Formulas) July 12th 2011, 05:23 PM #1 Junior Member Jul 2009 Geometric Series (Sum Formulas) Ok, I have more of a technical issue here since I know how to calculate the sum of geometric series. The question asked is: Find the sum of the following series: 3 + 3square root of 9 + 9... S7. My question is, how is "3square root 9" entered into a calculator? Note that the "3" is not a cubed root... it is just on the left-hand side of the square root sign. I know that it has to equal 5.196... since r = the square root of 3 (the question provides this), but I just can't figure out how to enter this into my graphing calculator. Re: Geometric Series (Sum Formulas) Is this the series? $\displaystyle 3+3\sqrt{9}+9+\cdots$ If so, is it geometric? Re: Geometric Series (Sum Formulas) Ok, I have more of a technical issue here since I know how to calculate the sum of geometric series. The question asked is: Find the sum of the following series: 3 + 3square root of 9 + 9... S7. My question is, how is "3square root 9" entered into a calculator? Note that the "3" is not a cubed root... it is just on the left-hand side of the square root sign. I know that it has to equal 5.196... since r = the square root of 3 (the question provides this), but I just can't figure out how to enter this into my graphing calculator. Dear oryxncrake, It depends on the graphing calculator that you use. So the best thing is to refer the manual of the calculator. What is the model of the graphing calculator that you use? Re: Geometric Series (Sum Formulas) Yes, that is the series and it is geometric. Re: Geometric Series (Sum Formulas) I am using the TI-83 Plus Re: Geometric Series (Sum Formulas) Re: Geometric Series (Sum Formulas) $3 + 3\sqrt{3} + 9 + ...$ is geometric. a "typo" maybe? Re: Geometric Series (Sum Formulas) 3 + 3sqrt(9) is 3 + 3*3, so multiplier = 3; so 3,9,27,81... 
Re: Geometric Series (Sum Formulas) July 12th 2011, 05:44 PM #2 July 12th 2011, 05:45 PM #3 Super Member Dec 2009 July 12th 2011, 05:50 PM #4 Junior Member Jul 2009 July 12th 2011, 05:52 PM #5 Junior Member Jul 2009 July 12th 2011, 05:54 PM #6 July 12th 2011, 06:45 PM #7 July 12th 2011, 06:51 PM #8 MHF Contributor Dec 2007 Ottawa, Canada July 12th 2011, 06:53 PM #9
{"url":"http://mathhelpforum.com/algebra/184487-geometric-series-sum-formulas.html","timestamp":"2014-04-19T13:14:06Z","content_type":null,"content_length":"58237","record_id":"<urn:uuid:9fdfc0b7-6c02-4399-abf5-0796009d4102>","cc-path":"CC-MAIN-2014-15/segments/1397609537186.46/warc/CC-MAIN-20140416005217-00243-ip-10-147-4-33.ec2.internal.warc.gz"}
What is the formula to find work? Corporate finance is the area of finance dealing with the sources of funding and the capital structure of corporations and the actions that managers take to increase the value of the firm to the shareholders, as well as the tools and analysis used to allocate financial resources. The primary goal of corporate finance is to maximize shareholder value. Although it is in principle different from managerial finance which studies the financial management of all firms, rather than corporations alone, the main concepts in the study of corporate finance are applicable to the financial problems of all kinds of firms. Investment analysis (or capital budgeting) is concerned with the setting of criteria about which value-adding projects should receive investment funding, and whether to finance that investment with equity or debt capital. Working capital management is the management of the company's monetary funds that deal with the short-term operating balance of current assets and current liabilities; the focus here is on managing cash, inventories, and short-term borrowing and lending (such as the terms on credit extended to customers).]citation needed[ A financial ratio (or accounting ratio) is a relative magnitude of two selected numerical values taken from an enterprise's financial statements. Often used in accounting, there are many standard ratios used to try to evaluate the overall financial condition of a corporation or other organization. Financial ratios may be used by managers within a firm, by current and potential shareholders (owners) of a firm, and by a firm's creditors. Financial analysts use financial ratios to compare the strengths and weaknesses in various companies. If shares in a company are traded in a financial market, the market price of the shares is used in certain financial ratios. Ratios can be expressed as a decimal value, such as 0.10, or given as an equivalent percent value, such as 10%. 
Some ratios are usually quoted as percentages, especially ratios that are usually or always less than 1, such as earnings yield, while others are usually quoted as decimal numbers, especially ratios that are usually more than 1, such as P/E ratio; these latter are also called multiples. Given any ratio, one can take its reciprocal; if the ratio was above 1, the reciprocal will be below 1, and conversely. The reciprocal expresses the same information, but may be more understandable: for instance, the earnings yield can be compared with bond yields, while the P/E ratio cannot be: for example, a P/E ratio of 20 corresponds to an earnings yield of 5%. In accounting, gross profit or sales profit is the difference between revenue and the cost of making a product or providing a service, before deducting overhead, payroll, taxation, and interest payments. Note that this is different from operating profit (earnings before interest and taxes). The various deductions (and their corresponding metrics) leading from Net sales to Net income are as follow: Gross margin is the difference between revenue and cost before accounting for certain other costs. Generally, it is calculated as the selling price of an item, less the cost of goods sold (production or acquisition costs, essentially). The purpose of margins is "to determine the value of incremental sales, and to guide pricing and promotion decision." Cost of goods sold or COGS refer to the carrying value of goods sold during a particular period. Costs are associated with particular goods using one of several formulas, including specific identification, first-in first-out (FIFO), or average cost. Costs include all costs of purchase, costs of conversion and other costs incurred in bringing the inventories to their present location and condition. Costs of goods made by the business include material, labor, and allocated overhead. 
The costs of those goods not yet sold are deferred as costs of inventory until the inventory is sold or written down in value. Contribution margin is the dollar contribution per unit divided by the selling price per unit. “Contribution” represents the portion of sales revenue that is not consumed by variable costs and so contributes to the coverage of fixed costs. This concept is one of the key building blocks of break-even analysis. In cost-volume-profit analysis, a form of management accounting, contribution margin—the marginal profit per unit sale—is a useful quantity in carrying out various calculations, and can be used as a measure of operating leverage. Typically, low contribution margins are prevalent in the labor-intensive tertiary sector while high contribution margins are prevalent in the capital-intensive industrial sector. Generally accepted accounting principles (GAAP) refer to the standard framework of guidelines for financial accounting used in any given jurisdiction; generally known as accounting standards or standard accounting practice. These include the standards, conventions, and rules that accountants follow in recording and summarizing and in the preparation of financial statements. Many countries use or are converging on the International Financial Reporting Standards (IFRS), established and maintained by the International Accounting Standards Board. In some countries, local accounting principles are applied for regular companies but listed or large companies must conform to IFRS, so statutory reporting is comparable internationally, across jurisdictions. Business Finance Business Finance Related Websites:
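The relationships among these metrics are simple arithmetic. A small sketch with made-up figures (illustrative numbers, not from any real company) computing gross margin, contribution margin, and the P/E-to-earnings-yield reciprocal mentioned above:

```python
# Illustrative figures only.
revenue = 500_000.0
cogs = 350_000.0

gross_profit = revenue - cogs
gross_margin = gross_profit / revenue          # fraction of revenue kept after COGS
print(f"gross margin: {gross_margin:.0%}")     # -> gross margin: 30%

price_per_unit = 50.0
variable_cost_per_unit = 30.0
contribution_per_unit = price_per_unit - variable_cost_per_unit
contribution_margin = contribution_per_unit / price_per_unit
print(f"contribution margin: {contribution_margin:.0%}")  # -> contribution margin: 40%

# A ratio and its reciprocal carry the same information:
pe_ratio = 20.0
earnings_yield = 1.0 / pe_ratio
print(f"P/E {pe_ratio:.0f} <=> earnings yield {earnings_yield:.0%}")
```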
{"url":"http://answerparty.com/question/answer/what-is-the-formula-to-find-work","timestamp":"2014-04-20T00:40:12Z","content_type":null,"content_length":"32234","record_id":"<urn:uuid:90007c6c-9327-48d9-904d-c3f9a9cf9bec>","cc-path":"CC-MAIN-2014-15/segments/1397609537804.4/warc/CC-MAIN-20140416005217-00405-ip-10-147-4-33.ec2.internal.warc.gz"}
[Numpy-discussion] python numpy code many times slower than c++
Neal Becker ndbecker2@gmail...
Tue Jan 20 20:09:28 CST 2009

I tried a little experiment, implementing some code in numpy (usually I build modules in c++ to interface to python). Since these operations are all large vectors, I hoped it would be reasonably efficient.

The code in question is simple. It is a model of an amplifier, modeled by its AM/AM and AM/PM characteristics. The function in question is the __call__ operator. The test program plots a spectrum, calling this operator 1024 times, each time with a vector of 4096.

Any ideas? The code is not too big, so I'll try to attach it.

[Attachments: ampl.py (2961 bytes), linear_interp.py (851 bytes), plot_spectrum.py (4618 bytes)]

More information about the Numpy-discussion mailing list
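The attached files are not reproduced in the archive text, so the following is only a guess at the pattern being described: an amplifier whose AM/AM and AM/PM curves are tabulated and applied to a complex baseband vector. Per-sample Python loops over 4096-point vectors are the usual culprit in this situation; `np.interp` keeps the table lookup vectorized. All table values and names here are invented for illustration and are not the poster's code.

```python
import numpy as np

class Amplifier:
    """Table-driven AM/AM, AM/PM model (illustrative values only)."""

    def __init__(self, amp_in, amp_out, phase_out):
        self.amp_in = np.asarray(amp_in)        # input amplitude grid
        self.amp_out = np.asarray(amp_out)      # AM/AM: output amplitude
        self.phase_out = np.asarray(phase_out)  # AM/PM: added phase (radians)

    def __call__(self, x):
        # Vectorized over the whole input block: no Python-level loop.
        r = np.abs(x)
        gain = np.interp(r, self.amp_in, self.amp_out)
        dphi = np.interp(r, self.amp_in, self.phase_out)
        return gain * np.exp(1j * (np.angle(x) + dphi))

amp = Amplifier(amp_in=[0.0, 0.5, 1.0, 2.0],
                amp_out=[0.0, 0.6, 1.0, 1.1],    # soft compression
                phase_out=[0.0, 0.05, 0.2, 0.4])
x = np.exp(1j * np.linspace(0, 2 * np.pi, 4096))  # unit-amplitude tone
y = amp(x)
print(y.shape)
```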
{"url":"http://mail.scipy.org/pipermail/numpy-discussion/2009-January/039733.html","timestamp":"2014-04-16T17:13:06Z","content_type":null,"content_length":"4289","record_id":"<urn:uuid:b49e98cc-d506-4d17-a9ab-7ed33023a294>","cc-path":"CC-MAIN-2014-15/segments/1397609524259.30/warc/CC-MAIN-20140416005204-00413-ip-10-147-4-33.ec2.internal.warc.gz"}
Ruth Gentry
Born: 22 February 1862 in Indiana, USA
Died: 15 October 1917 in Indianapolis, Indiana, USA

Ruth Gentry was educated at the Indiana State Normal School. This School was first set up in 1865 and teaching began in 1870. It had therefore only been in operation for a short time when Gentry began her education there. At the time Indiana State Normal School was a teachers college but did not have the right to award bachelor's degrees; this came later in 1908. The School later became Indiana State University. After graduating in 1880 Gentry had qualified as a teacher and indeed she did teach for ten years in preparatory schools. In 1870 the University of Michigan had become one of the first colleges in the United States to admit women undergraduates, so it was a fairly natural choice for Gentry to make when she decided to study for her bachelor's degree. She studied mathematics at the University of Michigan and was awarded her BA in 1890. Wishing to continue her studies to graduate level, Gentry entered Bryn Mawr College in Pennsylvania. Again this was a natural choice since the College, which opened in 1885, was the first institution of higher education in the United States to offer graduate training to women. It also had an excellent reputation in mathematics, although at the time when Gentry entered the College in 1890 very few women had received a Ph.D. in mathematics in the United States. The Head of the Mathematics Department at Bryn Mawr College was Charlotte Scott. She had undertaken research at Girton College, University of Cambridge, England, on algebraic geometry under Cayley's supervision. She was awarded a doctorate in 1885 and, on Cayley's recommendation, was appointed the first Head of Mathematics at Bryn Mawr College. Charlotte Scott supervised Gentry's graduate studies.
Charlotte Scott supervised Gentry's graduate After a year at Bryn Mawr, Gentry was awarded the prestigious Association of College Alumnae European Fellowship which would finance her studies in Europe. Gentry was the second recipient of the award and the first mathematician. In 1891 she left the United States and travelled first to Berlin in Germany. There she was able to attend lectures but not to formally enrol so it was impossible for her to read for a degree. She wrote to Klein at the University of Göttingen asking if he would admit her to his lectures but he replied to say this was against the rules. Gentry then went to Paris where she spent a semester attending mathematics lectures at the Sorbonne before returning to Bryn Mawr. While a graduate student Gentry joined the American Mathematical Society in 1894. Her doctoral thesis was supervised by Charlotte Scott, not surprisingly, on geometry which was Scott's own area of expertise. Gentry submitted her thesis On the Forms of Plane Quartic Curves to Bryn Mawr in 1896 and was awarded a Ph.D. The work of her thesis is best described by quoting her own words from the Many papers dealing with curves of the fourth order, or Quartic Curves, are to be found in the various mathematical periodicals; but these leave the actual appearance of the curve as a whole so largely to the reader's imagination that it is here proposed to give a complete enumeration of the fundamental forms of Plane Quartic Curves as they appear when projected so as to cut the line at infinity the least possible number of times, together with evidence that the forms presented can exist. After the award of the Ph.D., Gentry was appointed to Vassar College in Poughkeepsie, New York taking up the appointment in 1896. 
This was a women's college which had been set up to allow women to obtain an education of equivalent standard to that available to men and the appointment of Gentry was important to them for she was the first mathematics faculty member with a Ph.D. Gentry was promoted to assistant professor in 1900 and she taught there until 1902 when she left to take up a position as Associate Principal and Head of the Mathematics Department at a private school in Pittsburgh, Pennsylvania. In 1905 Gentry resigned her position and became a volunteer nurse. For a number of years she travelled both in Europe and in the United States but her health deteriorated and she died at a young age. Article by: J J O'Connor and E F Robertson List of References (3 books/articles) Mathematicians born in the same country Previous (Chronologically) Next Main Index Previous (Alphabetically) Next Biographies index History Topics Societies, honours, etc. Famous curves Time lines Birthplace maps Chronology Search Form Glossary index Quotations index Poster index Mathematicians of the day Anniversaries for the year JOC/EFR © April 2002 School of Mathematics and Statistics Copyright information University of St Andrews, Scotland The URL of this page is:
trigonometric proofs solver • function implicit online solver • algebra worksheets, combining like terms • simplifying exponents worksheet • quadratic equation using calculator • factoring quadratic expressions calculator • difference quotient online solver • biology eoct practice test • math quiz for 9th grade • algebraic formula for percentage • principles of arithmetic operations multiply subtract • worksheets on division with decimals • pre algebra slope activities • mcdougal littell middle school math work book course 3 • coordinate graphing pictures • laws of exponents of multiplication • practice question about the necklace from prentice hall work book • explain how the distributive property is used to combine like terms • multiplication problem solver • free online algebra solver with steps • implicit differentiation online solver • how to divide cube root • why do you simply radical expressions before adding and subtracting • 6th grade coordinate plane powerpoints • solving algabra • equivalent fraction lcd calculator • decimal to a mixed number calculator • hardest physics problem • exponential equation solver • math function machine worksheets • simplify radical expression calculator • dilations math • solving for terms with exponents • How do you get from a radical of a fraction to a radical in just the numerator? 
• articles on dividing, multiplying, adding, subtracting complex numbers • lineal metre • simplify calculator • crating a tutorial on the rules for adding, subtracting,multiplying and dividing whole numbers • prealgrebra + Lial • ti-84 vertex form program • balancing equations calculator online • free online lesson geometry 6th grade • free solver for my problem in math • remainder theorem calculator • Fractions for 3rd grader work sheets • factoring polynomials calculators • radical expression exponents • prentice hall mathematics algebra 1 problem solutions • simplifying expressions with variables worksheets • .83 to fraction • linear programming for dummies • easy chemical equations for 5th graders • calculating chemical equations • elementary linear algebra solutions manual • free how to add and subtract rational expressions

Yahoo users found our website today by typing in these keyword phrases:

• Order mixed numbers from least to greatest • reduce fraction worksheet javascript • workbooks to go with prentice hall biology • positives and negatives in algebra • write each fraction or mixed number as a decimal • Finding the six trig values • how to solve evaluate expression • how to solve math application problem • poems about algebra • stretching functions worksheet • free printable algebra games • solve equations 4 unknowns • NONLINER EQUATION SOLVER MATLAB • challenging order of operations worksheets • implicit differentiation on a ti-84 plus • graphing pictures coordinate plane • dividind • add and subtract mixed numbers worksheets • how to solve problems with multiples • algebra 1 step by step • simplify fifth root of 96 • distributive property calculator simplify • identifying equations to shapes on graphs parabola/hyperbola gcse maths • worksheets printable changing the subject formula • subtracting decimals worksheets • binom formula in c++ • sample math exam ontario 11 • math trivia meaning • multiplying and dividing integers worksheets • partial sum division • unit circle cheat sheet download ti 84 • exponents under radical graph • how to pass beginner algebra • boolean logic simplifier • how parabolas invented • how to reduce square roots on casio calculator • powers, exponents and square roots • Algebra Problem • scale factors practice questions • locating coordinate printable worksheet for third grader • getting cube root manually • free radical expression solver • distributive property equations worksheet • plotting points worksheet • homework and practice workbook Math course 2 answers • factoring cubed • how to find rational zeros ti 89 • free algebraic expression worksheets 6th grade • Iowa Algebra Aptitude Test • exponents grade 8 free worksheet • seventh grade algebra • good ways to remember formulas for algebra 2 • gr 1o math substitution • evaluating radical expressions • FRACTION FORMULA • worksheet for simplifing expressions • general to standard form calculator • kumon math download • ti83 intersections • trivia in mathematics • simplifying radical expressions solver • geometry scale factor example • english gramer • free 6th grade algebra questions with answers worksheets • multiplying algebra calculator • fractions beginner • 10th Matric Question Papers • o level math b • solutions of 9th class textbook circles • maple data equation • maximum and minimum solver for quadratic equations • gcf finder • adding subtracting decimals ks2 • solving equations with radicals • lesson plan using TI-84 graph line • polynomial equation matlab • answers to the 9th grade algebra 1 exam • signed numbers practice • free worksheet on x and y intercept • solving algebra using principles together • simultaneous equations calculator • How to solve equations using mental math? • simplify exponents calculator • find least common multiple of expressions calculator • dividing integers worksheets • 2 step equations with fractions game • substitution method with algebrator • College Algebra explanation • define evaluation and simplification of expression • free algebra cheat programs • dividing negative numbers worksheets • square root of 6 as a fraction • easy math trivia with answers • simple trig identity worksheet • free gcf monomial calculator • mcdougal littell algebra 2 • algebra questions and answers sheet • What is the difference between an equation and an expression? Include an example of each. Can you solve for a variable in an expression? Explain your answer. Can you solve for a variable in an equation? Explain your answer. Write a mathematical phrase or sentence for your classmates to translate. Translate your classmates’ phrases or sentences and explain what clues indicate that the problems are either expressions or equations • north carolina test of algebra • how much is a cubed • gcf variables • "slope worksheets" • exponents worksheets 8th grade • calculator radical online • algebraic expressions explained simply • adding integers games • adding and subtracting decimals worksheet • convert decimal hour to time matlab • mcdougal littell world history cliffnotes • Linear Graphing Review Worksheets • calculator radical • adding and dividing integers games • orleans hanna algebra prognosis test • add subtract multiply divide fractions • equations to find out what shape the graph will be parabola/hyperbola • adding and subtracting practice • holt math worksheets • TI 84 graphing calculator worksheets for inequalities • integral calculator step by step • factoring differences of two calculator • sixth grade solving equations online game • online ti 83 • algebra calculator substitution • enter in math problem • step by step simultaneous equation calculator • texas 9th grade eoc answers • free negative integer addition problems • simplify absolute value calculator • intermediate 1st year maths model papers • McDougal Littell algebra 2 chapter 8.1 resource book • the hardest math test • home work cheats • sample paper class seventh • how to work out equation problems? • online factoring calculator equations • rules for adding and multiplying approximations • adding subtracting multiplying and dividing on paper • online glencoe algebra 2 book • mathematics poem algebra • matlab edge vertex • in real life when i used permutations • Free worksheets for x and y intercepts • simplifying exponents calculator • solving equations with whole numbers worksheets • free pre algebra textbook download • Everyday Structures AND Grade 1 • coordinate plane 4th grade • algebra 1 math holt book • how do you factor out the greatest common monomial factor • free logarithm worksheets • free printable factor tree worksheets • section 10-1 review modern biology study guide • what do you learn in year 10 maths? • fractions lcd worksheets • pre algebra problems for kids • solve equation pre algebra using diamond method • -4+2+z-5 combine like terms • how to program derivatives of trigonometric functions in calculator • add, subtract, multiply and division solving equations • PPT. ON FACTORISATION • simplify fractions calculator • Dividing Decimals Calculator • modern biology worksheets • coordinate plane practice worksheet • ALGEBRA HARDER PROBLEMS • FREE linear system problem solver online • adding subtracting positive negative numbers lesson • how to solve a logarithm tensor • how to calculate fractions on scientific calculator • worksheet and percent • kids free math workbook • an equation of a nonlinear function • simplify rational expression calculator • printable worksheets for 9th graders • Entering cubed roots on calculator • grade 12 math, ellipses • where can i find a math program to download on my ti 84 plus caculator for free • inequalities online calculator • simplifying square root expression calc online • factoring cubed polynomials • give answers to math problems • square root problems with variables calculator • Rewriting a square root with a varible in simplified radical form • complex rational algebraic expression • substitution method for integrals • lcd for fractions calculator • 2nd grade symmetry homework • boolean algebra calculator • examples of math trivia questions with answers • 1fa to binary • mcq on physic 9th standard • how do you convert decimal measurements to a mixed number? • Parabola in Visual Basic • formulas to create a composite bar graph • how to solve vertex form quadratic functions • free math percentage problems • "simultaneous equations"+free worksheet • simplifying square root radicals calculator • free math ged worksheets • college entrance exam practice test for algebra • program bisection • simplify algebraic expressions with fractions • ordered pair satisfies an equation examples • free download aptitude test ebooks • addison wesley physics 11 online answers • dividing integers answer sheet • ti 83 cheats • longhand funtions • solving for roots of a cubic in excel • ks3-test-paper-science-2009 • how do you convert whole numbers to decimal • College Algebra Software • free mcdougal littel algebra 2 answers • gce o level questions • solving polynomial fractions

Search Engine visitors found our website yesterday by typing in these algebra terms:

How to convert decimals to mixed fractions, "what is the title of this picture" math, common factors of 42 56 21, vertex form and standard form, solving problems involving rational equations, factoring with ti-84. Application problem solving in linear equation using cramer's rule, simplifying exponential expressions, homeworkcheating.com, write a program in c to find GCF and LCM of two numbers. Learn algebra 2 prentice hall, world's second hardest easy geometry problem solution, graphin calculator for roots. Do graph equations for me, graphing puzzles for slope elementary, how to multiply radicals by integers, finding y intercept worksheets, basic factoring. How do you divide a cube root and a number squared, working out equation from graph, math worksheets 8th grade measurments, linear equations substitution method calculator, a caculator that i can use for free for pre-algebra, longhand math exponent formula. Math dictionary for 6th graders, Using differential equations in excel, simplest form fraction calculator, FRee MATH trivia for print.
1st grade math papers, ti 83 graphing calculator online, clock problems with solutions. Ontario math gr 11 textbook, ti-84 online calculator, cross factorisation. Radical expression calculators, calculator c#, balance equations worksheets, quadratic apps for TI, simplify radicands, prentice hall mathematics pre-algebra teachers edition, online calculator for negatives and fractions. Aptitude questions with solution, super exponent calculator online, special products and factoring sample problem, Ti83 pictures, algebra 2 chapter 1 resource book, algebraic fraction solver, 9th grade factoring practice tests. Midterm algebra 2 hs, free simplifying polynomials calculator, objectives of online graphing calculator, mcdougal littell houghton mifflin world history notes, answers to algerbra volume 1. Algebranator, rational expressions answers, help with multiplying equations with exponents, biology worksheets with answers. Trivias about math, hard difference quotient, how to put quadratic equations in vertex form, explain how you know to use a variable in an addition or subtraction expression. Worksheets for canceling math free, algebra relating to baseball, topographic maps worksheet, investigatory project IN MATHEMATICS, multiplying and dividing equations, adding subtracting and multiplying fractions, equation with like terms. Find domain rational expression calculator, how to solve for x to the power of a fraction, logarithm worksheet, algebra 1 glencoe/mcgraw practice workbook answer key, solution set using quadratic equations, solving addition subtraction equations. Online scientific calculator with fractions, free online implicit differentiation calculator, how to take the 3rd root using graphing calculator. Formula for turning decimals into fractions, get rid of variable on the denominator, holt mathematics 7th grade practice book answers, Program that solves radical expressions. 
Real life parabola word problem, free algebra worksheets involving fractions, iowa test in 6th grade, 7th grade conversion factors free help, kumon math online worksheets, all common denominators "linear equations" worksheets, mcdougal littell algebra 2 online, gcf lesson plan, gcse permutation and combinations work sheet,y11, converting decimal as a fraction in simplest form, teach me solutions of equations and inequalities, solving third order polynomial. Comparing and scaling math book answers, hcf worksheets for 5th grade, gleonce answers algebra 1, flow charts to equations calulater, math variable equations quiz, story of question math percent. Examples of math trivia with answers mathematics, algebra 1 book holt rinehart winston anawers, equations calculator with foil. Multiply radical equations calculator, grade 9 midterm exams, free 8th grade algebra worksheets, 5th Grade Common Factor PowerPoint, scale factor examples, absolute value worksheets, online least common denominator calculator. Ks3 maths problem solving worksheets, multiplying and dividing radicals solver, TI 84 calculator online. Converting second-order differential to two first-order differential, easy way to factor a trinomial, algebra expanding and simplifying worksheets, middle school pizzazz book b, how to solve multiple variable equations. Math poems, writing fractions in order from least to greatest, websites to practice adding and subtracting fractions for 6th graders, california algebra 1 book free, 6th grade math graphs, free calculator that does polynomials and factoring. Inequalities with fractions and variables, solve endpoint online calculator, simple interest worksheets in math, solve simultaneous equations online, free 8th grade printable math worksheets. Square root decimals, pre algebra word definitions, beginner algebra games, simplify a cube, what is the difference between evaluation and simplification of an expression. 
Do 8th graders factor quadratics, how to convert a squere root, salina college students tutoring for high school students, applicability of linear algebra, advanced algebra simplification, non-linear equations system mathematica. Algebra 1 answer key, y-intercepts of quotients of polynomials, algebra factor expression calculator, free online TI-83 graphing calculator. Simplify the square root of 512, structures in grade one science, 6.3 adding , subtracting , and multiplying polynomials quiz, answers for algebra with pizzazz page 198, standard to vertex form calculator, biology all in one study guide answers, free plotting coordinate numbers. Prayer involving algebraic terms, evaluation problems in algebra, simple sample problems for linear programming, convert numbers to words,java coding examples, radical expressions solver, ratio and proportion word problems "free math worksheets". High school math trivia, free translations math worksheets, large sets quadratic equations, proving identities solver. Solving quadratic equations with fractional exponents, cube roots on a calculator, to find sum of series in java, log equations with variables calculator, equations for perpendicular lines, CHECK SONS MATH HOMEWORK, mcdougal littell answers. How much is 111.12 kilometers calculator, multiplying radical with nonradical, algebra worksheet seventh, multiplying and dividing positive and negative rational numbers, factoring tool. Investigatory about simplifying logarithms, matlab galois field get decimal, graph circles on ti-84. Sample trigonometry problems, logarithm solver, first order nonlinear differential equation, precalculus solver, trigonometry hard problems and solutions, mixed numbers to percents calculator. College algebra tutor, math problems with solutions in radicals, algebra 1 eoc nc practice, investigatory math, solve multi-step equations worksheets, Model paper Math for first year. Algebra+software, poems about math, casiocalculator with modulo arithmetic. 
How to turn decimals into fractions on graphing calculator, do my algebra for me, aLGEBR, adding negatives and positives worksheets. Pre-algebra, balancing equations worksheet, free algebra printable, program equation for slopes ti 84, free one step equation worksheets, fractional form of numbers in matlab, variables rational exponent calculator, instrumentation in special products in college algebra. Free commutative property worksheets, Sean O'Connor FreeBasic, 9th grade algebra questions and answers. Math tree diagram solver, Pre Algebra work, Is there any difference between solving a system of equations, Finding Midpoint word problems, quadratic formula TI83 plus, proportion worksheet, quadratic program for calculator. Holt Algebra 1 Problem Solving Workbook Answers, math problems solving radicals within radicals variable, addition and subtraction of ratio. BEGINNER ALEGBRA, pre algebra balancing equation worksheets, maths solved paper for class eight, lyapunov exponent flash, third root math, find the LCD worksheet. Precalculus textbook by Beechers 3rd edition answers, prentics hall mathematics algebra 1 free help, finding possible rational roots online calculator, radical&rational exponents, least to greatest decimals solver. Test about fractions, algebra II divide radical expressions, distributive property fractions, online calculator with square root button, nth term algebra. Aptitude test papers answers, lowest common denominator calculator, square root step by step, solving equationswith amu, 7th grade conversion problems, Ti-89 Find Zeros. English work y7, simplest radical form calculator with variables, worksheets slope and y-intercept, printable examples of permutations. Oder the four number from least to greatest fractions, lowest common denominator variables, mathematics investigatory topics, math 10th grade (ppt), algebra worksheets and answers printable, glencoe algebra 1 properties and key concepts. 
Whats the difference between an algebraic expression and an algebraic equation, skeleton equations, pearson education inc textbooks for 6th graders. Adding and subtracting negative decimals worksheets, partial fraction decomposition with 1 on top, How do you solve a slope square, college pre algebra worksheets, diamond method in maths, algebra quiz for 9th grade. New Matrix Intermediate Workbook Answer, ifst inmath, least to greatest mixed fractions, INTERMEDIATE FIRST YEAR MODEL PAPERS. Solving KUMON math problems, multiplying rational expressions answers, excel stile formula, pre algebra meaning, plotting points to make a picture worksheet. Root of linear equation, printable basic algebraic equations questions for Grade 6, Online calculator with exponens, comprehension related 9TH class, Sample Investigatory project in Mathematics, convert radical to decimal, simplifying or factoring algebraic expressions. Percentage online solver, to the power of a fraction, laplace question solver, algebra 2 answers prentice hall, abstract algebra gallian solutions, integers agme, algebra 1 worksheets glencoe/ Hungerford introduction to abstract algebra solutions, year 10 algebra, grade 9 algebraic equations, cube roots fraction, quadratic equation simplifier, free printout example sheets of math properties, holt algebra 1 answer key. Simplify radicals calculator, simplify exponents with variables, CONVERT MIXED NUMBER TO DECIMAL, simple power algebra exercises. Sample questionnaire in algebra, find lowest common denominator tool, abstract algebra intro hungerford, dividing scientific not, softmath.com, use the distributive property to write each equation in expanded form "prentice hall". Help on solving system by substitution, solving of 3rd degree quadratic equation, prentice hall mathematics course 2 answers, scott foreman math worksheets, online usable calculator. 
Adding radicals worksheet, how to find square roots by hand 7th grade, fraction rules, glencoe special right triangles worksheet answers. STEPS TO CHANGE A DECIMAL TO A FRACTION?, find equivalent fractions with a common denominator and order least to greatest, online polar graphing calculator, hard algebraic equations. Aptitude questions and answers download, how much can i reduce a fraction, money to percentage converter, simple aptitude test question and answer, polynomial factorer. An integrated approach algebra 2 key, integer worksheet game, finite math cheat sheets, online free calculator for simplifying rational expressions, simultaneous equation calculator, multiplying cubed roots. Math grade 11 questions about trigonometry, how to add,subtract, multiply and divide fractions with examples, abstract algebra dummit content, algebra grid for kids. Printable math worksheet for seven grader, kumon math sheets, how to use calculator for linear equations, miguel littell math book 6th, simplify radical expressions worksheet, how to teach algebra, glencoe algebra 1 answers. What is one eighth written as a decimal, online scientific notation solver, percentage rate and base, general aptitude test and answers, solve nonlinear differential equations, prentice hall trig identities answers. Online nj ask, sat combinations and permutations problems, college+math+ebook. Easy explanation on how to work out algebra fractions, usa school syllabus, extracting a root examples, online division calculator, sin solve for x, solving equations with rational numbers calculator, How to find equation in vertex form if you have vertex. Adding positive and negative worksheets, simplifying radicals quiz, LCM/GCF worksheets, free simplifying radical expressions, boolean logic simplification calculator, multiplication and division f rational expressions. 2 step equations calculator with fractions, free density worksheets, ti 83 roots of exponentials. 
Steps to solving an elimination math problem, how to add a radical to fractions, power point multiplying and dividing negative numbers, basic algebra problems and answers, dividing fractions with variables worksheet, simplify write in standard form. Ratio formula maths, holt physics textbook answers, how to use the substitution method, permutation and combinations work sheet,y11, aptitude questions and answers free download, calculate variables with exponents. Hardest math problem for kids, fractions as powers, math problem solving; enthalpy, how to solve linear equation on a TI-84, holt mathematics 6th grade, ks2 mental maths books, solving differential equations with matlab. How to create quadratic equations with roots, scatter plot data sets worksheets, calculator to solve for x in fractions, worksheet on integers, finding combinations in math Grade 4, linear algebra Interactive algebraic tiles for expansion, i need notes on algebraic radicals, factoring trinomials calculator expression. Saxon math worksheets 6th grade, free online polynomial factoring calculator, linearize root square function, quadratic equation with a third power, how to find slope on a ti 83 with table. Factoring generator, how to simplify irrational square roots radicals, simplify square root of numbers that have perfect square factors, mcdougal math answers, translate phrases into algebric expressions free printable work sheets, completing the square calculator online, you answer the hardest math problem in the world. Integers rules worksheet, square root of 6 in radical form, free worksheets for least common denominator. Printable coordinate grids, express decimals as mixed fractions, adding and subtracting rational numbers in equations, trigonometry powerpoints. Answers to survey of modern algebra, least common denominator formula, factor cubed polynomials. Equation java, hardest formula, algebra 2 mcdougal littell online free. 
Worksheet on prime factor, how to take log base 2 ti-83, manipulate algebraic formulas to solve quadratic equations, worksheets function machines, math sign for between, show me some steps in working algebraic expressions, simplification algebra children download. Ti 84 calculator online, summation notation on scientific calculator, combining like terms worksheet, dividing binomials, inequalities polynomials cubed. How to find the scale factor, solving domain and range of lines, everyday structures grade 1, fifth grade online graphing activities, vertex form of a parabola calculator, unit circle worksheet. Glencoe pre algebra answer key, 2 step word problems, Integrated Math, free calculator for factoring a quadratic with leading coefficient 1, using the ti 83 plus for algerbra made very easy, holt online math workbook. Calculator which does mod, answers to prentice hall math algebra 1, trigonometry proofs solver, how to keep decimals when dividing in excel. Immediate algebra help, take a maths test, gcse maths cheat sheet, solving second ordinary differential equations nonhomogeneous, pdf books on algebra. Java double zu time, online calculator square root, houghton mifflin trigonometry, adding square roots with variables, product finder(math), point fit polynomials third order. Mcdougal littell math course 3 lesson 6.2 answers, substitution calculator online, fraction decimal comparison printable. Math worksheets like lcm and gcf problems, liner system equation, free algebraic expressions help, explain finding lcm using exponent way, turn decimal to fractions. Difference between empirical and theoretical probability?, model papers of intermediate math, lcm c#, double interpolation program for the TI-84, solving equations by dividing and multiplying. Convert decimal to fraction formula, algebra quizzes with answers, convert decimal to fraction tutorial, north carolina algebra 1 eoc with answers, worksheet math gen, lcd worksheets. 
Diamond problems pre algebra, ordering fractions and decimals from least to greatest, cubic root calculator, simplify rational fractions calculator, geometric trivia mathematics, free 9th grade algebra worksheets, algebra II practice for the new tennessee EOC test. Online distributive property calculator, 7th grade geography worksheets, balanced equation calculator, solving nonlinear equation when the function is a sum in MATLAB. Scott foresman math/homework books grade 6, solve to the nth square root, finite math cheat, how to simplify expressions using positive exponents, intro to intemediate algebra define term in algebra. Beginning algebra cheat sheet, "step function" + worksheets, subtracting time solution. First standard maths, math poems with math words, linear algebra in everyday life, ppt presentation on trigonometry, square root equation calculator, Aptitude question papers. Missing denominator calculator, calculator for inverse log2, program to clculate gcd of a no in vb, easy ways to learn basic math, ordered pair from a set of equations, multiplying trig functions in Linear graphing worksheets, free downloads on +begining chemistry and biology books and worksheets, highest maths equation ever done, worksheet integers multiply divide, integral by parts calculator, how to get the least common denominator, solve my compound inequalities. Solve my precalcucus problem for free, maths for 12 year olds, cubic factorization, coordinate graphing printables, algebra direct and indirect variation printable worksheets. Exponents worksheet 8 grades, t1 calculators, inequality math problem worksheets. Help me solve my advanced math problems, lowest common demoninator calcultor, answers of equations. First order differential non linear simultenous, "algebra tile" lesson plans, answers to modern biology study guide, square root 65 simplified, a helping hand for fractions for 6 grader, math trivia's, compound inequality+division. 
Adding and subtracting worksheet softmath, prentice hall mathematics course 2 answers, multiplication list, I need help on an equation chart. Algebra tetris, glencoe algebra 2 answer key, 3rd order polynomial roots, ti 84 cubed root, nonlinear simultaneous equations matlab, mathematics and problem solving aptitude test questions and answers, pictures of pre- algebra expresions for a science project. Graphing inequalities on coordinate plane with absolute value, algebra 2 book online glencoe, inverse number homework sheets, fistin maht, hardest math problem, math software algebra, hardest math class in the world. Convert negative to positive in matlab, how to teach fractions, write a function with roots, multiplying square roots with exponents. Algebra substitution solver, simplifying radicals expressions calculator, nonlinear function problem. Decimals into mixed numbers calculator, gr 10 math exam, solve the first-order nonlinear ODE, determining a quadratic equation from a graph when there is a stretch. Trigonometry honors powerpoint, graph linear equations = powerpoint, plot points on a graph online, year 9 algebra questions, adding subtracting multiplying and dividing fractions, solving radicals in fractions, 3rd grade printouts. Trinomial solver online, trigonometry math projects, holt algebra 1 enrichment answer, maple symbolic vector, examples of exponential expressions. Algabric pyramids solver, how to complete the table of values with fractions, simplifying radicals calculator free, simplify a trinomial, math trivias with answers for first year, nonlinear equations Trinomials calculator, algebraic expression calculator online, merrill algebra 1, pythagoras calculator, equations worksheets, adding, solving one step equations multiplication division worksheet. 
How to calculate percentage of matric, mathmatics algebra 1 volume 2 10-6 solving equasions by factoring, solve quadratic formula calculator, multistep equations with fractions and decimals, investigatory project in math, caculators for rational expressions, factoring quadratic equations machine. Module 8 maths papers, synthetic division calculator free, worksheet solving for y, polynomial division calculator. How to do square roots on ti-83 calculator, how to write a mixed fraction as a decimal, adding square root variables. Middle school math with pizzazz book e e12, input output rules online calculator, learning objectives algebre formulas, worlds hardest math equation, online mental maths test ks2, radicals in algebra Algebra solver, factor expresson solver, converting mixed fractions to decimals calculator, applications of special products and factoring in geometry, practise problems of factoring rational expressions, how to get an equation from standard form to vertex form. Dividing cube roots, math test chapter 8, TI-89 LOG BUTTON, www.prenticehallmathmatics.com. Order of operations with radicals worksheet, parabola worksheet and solutions, number as a percent algebra, linear factors calculator, balance equations calculator . Why is it better to simplify radical expressions before adding or subtracting, printable worksheet on greats common factors, graphing reflections, factor complex binomials, solve my fraction, decimal to mixed numbers calculator, working with exponents worksheet. Easy cool ways to work out quadratic equations, hardest mathematical equation, everyday science mcqs, free factor theorem calculator, how to learn ninth grade prealgebra fast print outs, mcdougal littell geometry resource book chapter 2. Free 8th grade algebra worksheets with answer key, solve geometry problems for me, 6th grade proportions powerpoint, iowa algebra aptitude test. 
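Phrases such as "solve quadratic formula calculator" and "factoring quadratic equations machine" recur throughout. A minimal quadratic-formula solver — a sketch, not any particular site's calculator:

```python
import cmath

def solve_quadratic(a, b, c):
    """Roots of a*x**2 + b*x + c = 0 via the quadratic formula.

    cmath.sqrt handles a negative discriminant, so complex roots
    are returned instead of raising an error.
    """
    if a == 0:
        raise ValueError("not a quadratic: a must be nonzero")
    disc = b * b - 4 * a * c
    root = cmath.sqrt(disc)
    return (-b + root) / (2 * a), (-b - root) / (2 * a)

print(solve_quadratic(1, -3, 2))  # ((2+0j), (1+0j))
```

For x² - 3x + 2 = 0 the discriminant is 9 - 8 = 1, giving roots 2 and 1, consistent with factoring it as (x - 1)(x - 2).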
How do you simplify a fraction on your calculator?, how to balance ionic common factor, integrate on ti 89 titanium i get non algebraic variable. Calculations with standard form, parabola volume, complex multi-step algebraic equations, geometry practice workbook worksheet glencoe, algebra 1 textbook holt tx 2002. Algebra elimination method calculator, Simplifying radical expression homework help free, exponent worksheets and answers, quadratic equation calculator with steps, maths tests for yr 5. Scale word problems, decimal base system, math patterns calculator. The names of the steps of the scientific method, homework today finding rectangles worksheet, trinomial solver. Linear algebra anton solution homework, how do you find the least common multiple of variables, free worksheets on inequalities for fifth grade, 5 examples of multiplication of rationals, online factor division calculator. Year 3 maths worksheets usa, dividing fractions with unknown, fistin math, solving equations in matlab, creative publications pre-algebra with pizzazz answers, simplify the following complex fraction calculator, decimal to fraction conversion formula. Cubing equations, 9TH GRADE PRE ALGEBRA, free secondary test papers, Factoring a monomial calculator. Algebra basketball, linear combination worksheet, algebra worksheets and answers, help with fractional coefficients, a transition to advanced mathematics solutions, formula to convert square meters to meters. Printable measurement conversion chart, solving a proportion worksheet, listing rational numbers in order from least to greatest, factors worksheets ks2, how to simplify a cubed number, ti 83 calculator free online, prentice hall 6th grade math. F(x) to vertex form converter, simplifying cubed variable, order of operations worksheets grade 7, vertex calculator. Mcdougal littell algebra 2 answers evens, free science software for gcse, free math answers now.
Prentice hall algebra 2 practice 5-7 answers, integer worksheets, kumon answers level d, Bittinger and Ellenbogen - 7th edition, online foil calculator, resource book algebra structure and method book 2 page 263, how to solve a third degree equation. Solving equations with fractions grade nine math, vertex form calculator, TI-89 COMPLEX, adding, subtracting, multiplying, and dividing fractions mixed review worksheet, free online math problems from multiplying and dividing negative and positive numbers, free online algebra I workbook, glencoe pre algebra pratcice worksheets 5-3. How to find the domain and range of a hyperbola, Matlab codes solving equatios, worksheets english ks3, how to find the slope with a ti83, free homework help How to solve a radical expression, online factor finder. Finding roots of 3rd order, worksheets graphing linear equations, pre calc tutoring solve equation extracting square roots, basic math formula sheets. Problem solving with proportions video, elementary linear algebra anton teacher's solutions, saxon math course 2 help, the sum of radical -18 and radical -72, how to find a variable, cache:_dDvPZ6JR-IJ:www.algebra-answer.com/algebra-helper/an-easy-way-to-solve-equations-with-fractions.html how to solve equations with fractions, pre-algebra workbook awnsers. Radicals with decimals, free online tutoring for algebra 2, solving 3 equations with 3 variables with calculator. Online algebraic equation calculator, mcdougal littell math course 3 answers, algebra clock problems, multiplying percentages, simplify the sum of radicals, geometry turn as a fraction. Give 5 examples of math trivia, complete the square calculator, algebra de baldor, multiple exponents. 
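"How to find the slope with a ti83" and similar slope/intercept queries appear in this run and elsewhere in the list. A calculator-free sketch of the underlying formulas:

```python
def slope(p1, p2):
    """Slope of the line through p1 = (x1, y1) and p2 = (x2, y2):
    m = (y2 - y1) / (x2 - x1). A vertical line has undefined slope."""
    (x1, y1), (x2, y2) = p1, p2
    if x2 == x1:
        raise ValueError("vertical line: slope undefined")
    return (y2 - y1) / (x2 - x1)

def intercept(p1, p2):
    """y-intercept b of y = m*x + b, from b = y1 - m*x1."""
    m = slope(p1, p2)
    return p1[1] - m * p1[0]

print(slope((1, 2), (3, 6)), intercept((1, 2), (3, 6)))  # 2.0 0.0
```

Through (1, 2) and (3, 6) the slope is 4/2 = 2 and the intercept is 0, so the line is y = 2x.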
Provide a radical expression for your classmates to simplify., free multiple worksheets, Step by step evaluating variable expressions worksheets, McDougal Littell algebra 1 answer keys ch 5, multiplying like terms withpowers, lesson plan on algebraic equations sixth grade, calculator with trigonometry. Compound inequality solver, midpoint formula for range, algebra with pizzazz worksheet download, algebraic proofs worksheet, write each fraction or mixed number as a decimals, relational algebra exercises online. Hands on activity for exponents, compound inequalities calculator, online graphing circle calculator, math answers free, linear equations worksheets. Sample of math investigatory, solving for multiple variables, online number sequence solver. Chemical equation solver, mixed fractions percent to decimals, equation worksheets, maths free resources like terms, math algebra equations and fractions, what is the difference between a equation and an expression, binomial thoerem for cubes. Simultaneous non-linear equation solving, trig ratio values, basic facts on parabolas for 7th graders, soft math distributive property worksheets. How to write the remainder as a fraction, how to use quadratic formula in ti 89, algebriac expressions sheet, answers to converting fractions or mixed number into a decimal then simplify. World history mcdougal littell chapter 17 notes, differences and similarities between the procedure of simplifying an expression and solving an equation, arithmetric operations with rational expressions, do science exam online. Solving linear systems by addition worksheet, college algebra solver, java program sum of n numbers, example of polynomial problem, algebraic expressions worksheets, cube rule calculus, boolean algebra ti-89. 
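The phrase "java program sum of n numbers" appears above (and again later in the list). A sketch of both the closed form and the loop it replaces, in Python rather than Java:

```python
def sum_to_n(n: int) -> int:
    """Sum 1 + 2 + ... + n using Gauss's closed form n*(n + 1)/2."""
    return n * (n + 1) // 2

# The straightforward loop gives the same answer:
assert sum_to_n(100) == sum(range(1, 101)) == 5050

print(sum_to_n(100))  # 5050
```

The closed form avoids the loop entirely, which matters for very large n.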
Mathematical investigatory topic, cubed equation, pre algebra printouts, binomial factoring expression calculator, algebra half, isolate a term with x in denominator, how to solve a simultaneous equation that has no similarity. Algerbra finding equilibrium point, multiplying unknown fractions, HOW TO EXPLAIN LCM TO YEAR 7 STUDENT. Pass data chart ireport, when solving a rational equation, why is it necessary to preform a check, how to find slope on a ti 83, order numbers from least to greatest calculator, cube root of x times square root of x^5 /square root of 25x^16=. Algebra inequality worksheets, JAVA QUADRATIC SOURCE CODE IN JAVA, equation simplifying calculator, arithematic. Mixed Fraction converted to decimal, ks3 percentages maths, algebra with negative equations, Practice GED papers, detailed example of monomials, foundations of algebra year 2. Maths project class 10 on pie chart, writing a quadratic in vertex form, calculator to solve by elimination, solving one step equations worksheets, free 6th grade math order of opertations worksheet, trigonometric poems. Subsitution calculator, vertex form, equation excel 2007. Print grade 9 math help, exponential expression example, calculator that simplifying. Calculate the fraction, printouts for middle school, how to solve sin x, "graphing an equation", polynomial 3rd order. Addition Subtraction Negative numbers questions, algebraic expression chemical formulas, graphing calculator for 6th grade, adding subtracting multiplying and dividing integers, solving equations 6th grade, 8th grade linear functions. Online examination free templates in html, adding fractions grade 8 worksheets, free downlaodable calculators online, cubed equations, sample of c++ program how to compute the sum of 2 integers, writing a quadratic function in intercept form. 
My.hrw.math.com, free printable worksheets, algebraic written expression worksheets, geometry homework solver scale factor, differential equation solver, excel, wave problems worksheet. Remember to simplify completely before subtracting, conversion of quantities, fractions, decimals, % work sheets, multiply radicals solver, How to teach grade 1 students materials and objects, kinds of math trivia, how to find equivalent fractions with common denominators. Lcd fractions calculator, scale factor calculator, simplifying exponents properties worksheets, java script frac, graphing calculator general parabola into standard, math problem picture. Simplifying fractions with variables, subtraction fraction, FREE MATH TRIVIA AND ANSWERS, free online calculus solver, simplify polynomial calculator, simplify division formula, free worksheets on adding and subtracting integers. Square roots rules, how do you solve a multiplication problem by finding a range, solving quadratic equation, integrated mathematics 2, college algebra worksheets. Interpolation on the ti 84, holt algebra one, Pressure and Average Molecular Speeds animations, algebra cartoon concept, Abstract algebra hungerford solutions. Calculating roots on ti 83 plus, quadratic stretch factor, factoring trinomials calculator, help with solve slope and intercept. Rules on adding plus and minus, complex analysis problem solving, coordinate pairs pictures, formula of algebra in everyday life, cross method factorization, pizzazz math worksheets, algebra tutor 2 step equation worksheets, reducing exponents and roots, two step inequalities calculator, Is there a calculator that will solve algebra problems. Ti 83 solve complex number system, radical worksheet, solve with one variable, factoring with fractional exponents.
Grade 8 algebra lessons, square root of 8 in simplified radical term, greatest common factor of 36,60, 70, how to simplify radical expressions, key stage three algebra worksheets, online isometric aptitude testing, algebra and trigonometry structure and method book online version soft chapter 7. Percent sign on ti-84, adding and subtracting integers strategies, texas instruments calculator t1 86 manuals download, absolute value nonlinear. Hardest algebra problem, rings abstract algebra solutions to assignments, algebra 2 book from texas, how do you change a decimal into a mix number?, 4th grade midle school math accelerated translation test, how can trigonometry help us in life?, pre-algebra study guides. Exponents for begginers, free mcdougal littell geometry textbook answers, math exams in quadratic equations printable for free, how to convert a decimal to a mixed number, Fractions/Decimals - Greatest to Least, 6th grade math dictionary. Aptitude question papers, algebra 1 workbook answers, dividing exponential partial fractions, middle school math with pizzazz book e answers Probability: Possible Outcomes, free 10th grade algebra Mathematics- Foil Calculater, lowest common denominator fractions calculator, units of square root, fraction find the error worksheet. Dividing fractions square roots, slope formula in a nonlinear, saxon math algebra 2 answer key, formulae algebra gce o math add, using pyramids to simplifying expressions, 7th grade Greatest Common Factors, Algebra Formula Sheet. Similarity and difference between polynomial by a binomial to long division, summation calculator, solution of non homogeneous second order differential equation, matrix algebra software, solve a system of equations addition online activity, identifiying funtions by equations, pre algebra 5th grade lessons. Formula for subtracting fractions, iowa algebra aptitude, how to divide fractions demo, hints to learn multiplication formula, solving inequalities worksheet. 
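"Square root of 8 in simplified radical term" in the run above (and the many "simplify radical expressions" phrases elsewhere) come down to one algorithm: pull the largest perfect-square factor out from under the root. A sketch:

```python
def simplify_sqrt(n: int):
    """Write sqrt(n) as a * sqrt(b) with b square-free.

    Repeatedly divides out perfect-square factors f*f,
    e.g. sqrt(8) = sqrt(4 * 2) = 2 * sqrt(2).
    """
    a, b = 1, n
    f = 2
    while f * f <= b:
        while b % (f * f) == 0:
            b //= f * f
            a *= f
        f += 1
    return a, b

print(simplify_sqrt(8))   # (2, 2)  -> 2 * sqrt(2)
print(simplify_sqrt(72))  # (6, 2)  -> 6 * sqrt(2)
```

When n itself is square-free (such as 7), the function returns (1, n) and the radical is already in simplest form.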
Subtracting standard form, finding intercepts in math, solving perfect square trinomials +solver, prentice hall mathematics pre algebra workbook answers. How to do a permutation on a ti-89, solving subtraction equations, how to adding, subtracting, multiplying, and dividing whole numbers, integers, fractions and decimals., online balance equations calculator, synthetic division calculator, multiplying and dividing worksheets no remainders. In matlab exp, how to convert exponential expression to radical form, one step equations decimals and fractions, tic tac toe factoring, free mcqs in computer science, working out elementary algebra Holt rinehart and winston algebra 1 workbook, number lines printable/negitive and positive, divide rational expressions calculator, Greatest Common Factor printable, PROBLEM WITH SOLUTION ABOUT PERMUTATION IN STATISTICS, free linear graph worksheets. Least to greatest worksheets, equations that have the answer of -4, north carolina practice eoc tests for geometry, plotting points graphing utility online, ordering fractions from least to greatest calculator, plotting points pictures, Teach yourself algebra printouts. Converting mixed fraction to decimal, convert lineal metre to square metre, year 6 multi step decimal worksheet, 5th grade word problems, adding subtracting multiplying dividing negatives, java specify precision for fractions. Exponent fraction, algebraic pyramids, help in math for adults, aptitude question on matrices, java convert fraction to decimal. Ti 89 how to change a decimal to a fraction, sample honors algebra 2 midterm questions, graphing calculator online trig derivative, a bar graph equation. Cpm geometry answers, multiply functions matlab, mcdougal littell geometry workbook answers, math balancing equation worksheet, cubed factoring, solving domain and range. Free calculator to use to find balance equations, decimal to simplest form fraction, ti 86 convert decimal fraction, linear vertex formula. 
Algebra 1 flash cards, 7th grade square roots, free adding and subtracting radicals worksheets, balancing chemical equations for 8th grade, work problems for square root simplification, algebraic expressions online. Lesson plan on factorizing quadratic equation, glencoe workbook algebra 2 answers, dividing mixed numbers to decimals. How to do adding and subtracting polynomials for +dummies, squarenumbers and square roots, online program to make sure my flyer measurements are right, adding, subtracting, multiplying and dividing fractions for dummies, college math for dummies. Nc algebra 1 practice test, prentice hall algebra 1 textbook, max and min of a quadratic function solver. How to type in mole problems into a calculator, math trivia in 1st year high school, chapter 4 practice worksheets, math formula class 10th, prentice hall online algebra 1 textbook, systems of equations word problems. I don't get algebra, aptitude question and answer, trinomial calculator, subtracting integers worksheets, number games using rational expressions, how do you solve possitives and negatives, Dr. Brown online math help. Calculater for solving radical expressions, 1 eight as a decimal, how to storepics on ti-83, free ged math printouts, learning elementary algebra, square root calculator, Prentice-Hall Algebra 1 Prentice hall pre-algebra, equation of circles, free printable drawing conclusions worksheets, find the lcd polynomials for y,y-5,y+2. Lcm calculator with variables, percent formulas, heath chemistry lab 19, Equivalent percentages, online equation solver, caculater for equations by adding and subtracting. Game for 10thgrader, online chemical equation solver, combining like terms algebra worksheets, easy guide to fractions formula. Orleans-hanna algebra prognosis test, holt geometry answers to multi step test prep, 7th grade math subtraction of integers, fractions t-108 calculator. 
Conceptual physics online textbook, how to second exponent on a ti-89, math homework cheats, multiplying dividing rational numbers worksheet, ks2 coordinates worksheet. Find variable fraction, topics in yr 8 maths test, x and y intercept worksheets, online sats test ks2, pre algebra workbook florida prentice hall, solve by substitution for ordered pair calculator, polynomial solver with mathematica. Hands on equations worksheets, can you have negative fractions, what's the difference between independent and dependent math wise, pre algebra quiz, geometric sequence within an arithmetic sequence, rational expressions online calculator, polynomials factoring cubed. Linear coversion table, aptitude questions book download, adding integers with decimals, first grademath on power point. Simplify ratios online, quadratic factoring calculator, java sum of numbers. Sample clep, algebra diamond method, between adding/subtracting and multiplying/dividing when using scientific notation., a worksheet for the lcd. Hardest physics equation, solving quadratic equations using sq roots, elimination + math. How to solve rational algebraic expressions, mcdougal Algebra 2 2004 online text, writing expressions with single radicals, how do you factor polynomials with square roots. Free worksheet + solve single-step linear inequalities + pre-algebra, year 11 algebra, standard form calculator online, adding/subtracting complex variable fractions. Fitzpatrick ti 89 pdf, math answers to mathematics concepts and skills course 2, matlab programs for non-linear equations, explaining balancing chemical equations. Write a square root in radical form, square root with exponents calculator, order fractions, decimals and percentages from least to greatest calculator, coordinate plane pictures worksheet, solution of non homogeneous second order differential equation with exponential constant, double variable algebra. 
Examples of math poems about algebra, pre algebra simplify variables fractions, difference between linear and nonlinear trend, how do i help my daughter review for honors algebra 2, algebra buster free download. Linear metre, kumon math beginners, ti-30x IIs factors, ANY FREE algebra solver, square root to decimal, TI 86 error 13, how to order positive & negative decimals from least to greatest. How can do square root plus, how to solve linear equations with 3 variables with graphing calculator, laws of exponents work sheet, factoring and expanding worksheets, integers and their patterns Ti-83 simplifying radicals, parabola graphing calculator, middle school palindrones worksheet, step by step division properties of exponents, math slope worksheets, lesson 5-9 pearson education pre algebra powers of products, gcf worksheets. Addition inside a square root, do you have a pre algebr`s teachers math book with answers?, McDougal Littell MAth CHEA, middle school math pizzazz book d, simplifying fifth roots, rules of subtracting integers 6th grade -7 - -7=. Rudin ch1 solution, 6th grade translate between words and math, How to factor a cubed polynomial, conceptual physics high school answers to chapter 6questions. Dividing exponents interactive, downloadable coordinate plane, how to convert bases using TI 89, Algebra 1 worksheets real numbers, how to convert a graph to a decimal, factorial worksheet. Examples of factoring, integer calculator online, algebra trivias, what is the title of this picture, mixed fraction into decimal, algebra calculator rational expressions. Ti89 permutations, solving uneven fractions, reducing monomial fractions, dividing rational expressions calculator, algebra with pizzazz problems, glencoe pre algebra worksheets answers 2009, LCM Factorise equations calculator, intercept formula, algebra graphing inequalities multiple choice, algebra 2 test generator, free lowest common factor worksheet. 
Examples math poems, elementary math trivia, 0.375 as a fraction, algebra calculator inequality. Finding lowest common denominator in equations, linear equations fractions calculator, algebra trivia, trigonometric work problems, writing polynomials in standard form, simplify fraction games+ ks2, geometry for 10 year olds. Java factor program, how to CLEP pre algebra, linear equation calculator, slope worksheets free, algebra derivate. Nonlinear equation solver, GED worksheets-math-algebra, factor quadratic equations+tic tac toe graphic organizer, how to subtract percentages from whole numbers, properties of rational exponents, best algebra cheat sheets. Free Least Common Denominator calculator, addition ks2, free precalculus algebra problems, how write programm of polynomial using bisection method, grade 9 math calculator, improper fraction to decimal calculator, simplifying cubed roots. Trigonometry poems, square equation, sample matlab 2nd order ode solver. Nth root for dummies, answers for mcdougal littell math 7th georgia, saxon transformation shows exponentials, how to take the cube root of a fraction, 7th grade balancing equations. How to program calculator factor, structures - grade one - science, ti-84 square root property program, square numbers activities, Trigonometric Properties and values, holt, rinehart, and winston algebra 1 workbook answers, solving two-step inequalities worksheet. Complex rational algebraic expressions, 3 simultaneous equations, yr 6 maths worksheets factorising, how is algebra useful in everyday life. LCM formulas, multiplying whole numbers worksheets, how to learn the steps and solve my algebra problem for free, PRENTICE HALL BIOLOGY SECTION 12-1 ANSWERS, math quiz for class 9th. Trigonomic ratios, free book download of mathmatical formulae, solving equations cross numberpuzzle. 
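The run above asks "how write programm of polynomial using bisection method". A generic bisection root-finder, sketched in Python, applied to a polynomial:

```python
def bisect(f, lo, hi, tol=1e-9):
    """Find a root of f in [lo, hi] by bisection.

    Requires f(lo) and f(hi) to have opposite signs; each iteration
    halves the bracketing interval until it is narrower than tol.
    """
    flo = f(lo)
    if flo * f(hi) > 0:
        raise ValueError("f(lo) and f(hi) must have opposite signs")
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if flo * f(mid) <= 0:
            hi = mid
        else:
            lo, flo = mid, f(mid)
    return (lo + hi) / 2

# Root of the polynomial x**3 - 2, i.e. the cube root of 2:
print(round(bisect(lambda x: x**3 - 2, 0, 2), 6))  # 1.259921
```

Bisection converges slowly (one bit of accuracy per step) but is guaranteed to work whenever the sign change brackets a root.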
How to solve two step inequalities with fractions, multiplying and dividing numbers by numbers between 0 and 1, steps of compound interest formula videos, free multiplying fractions activities Equations when adding and multiplying, free printable 1st grade math sheets, online free boolean algebra simplifier, prime number with a suare root that has a repeating decimal. SAT cubic root function problem, c# math, saxon math 2009 algebra 1 free answers. Games for simplifying radicals, download aglibrator, holt directed reading 7th grade science, relating graphs to events worksheet. Permutation exercises, Chapter 8 +Algebra 2, gcf of monomials calculator, two step algebra word problems worksheet, divide polynomials free worksheet, free college algebra calculator, Prentice Hall chemistry worksheets answers. What is a factor 4th grade, free preprimary worksheets, The british method, reviews of saxon vs thinkwell math programs, free math sheets for 9th grade. Online integral calculator, comparing like terms calculator, 10th grade math factoring, solving for x calculator, factoring formula ti 84 download, fractional exponent equations, "string to decimal" Address matric english question bank cd, sample geometry radical trigonometry problems, decimals from least to greatest calculator, decimals least to greatest calculator. Gcf calculator with variables, Ratio Calculation 2:3:4, year 8 maths test, free worksheets and scale factor, fraction formula subtract, simplest answers in radicals and rational exponents, algebra converting decimal whold numbers. Graphing order pairs of integers, how to calculate lowest common denominator, radical multiplication rules, least common denominator calculator java script. Solve systems of equations exponential growth, dividing radical, ged worksheets math, in algebra similarities between linear equation and a function, college level progression equasions. 
Mcdougal littell algebra 1 answers, scientific calculator fractions online, 4th grade math combination. Saxon math course 2 solutions manual, distributive property calculator, math trivia algebra, ged printable math worksheets, matlab simulation difference equation homework, System for simplifying algebraic fractions, history of foiling in math. Adding, subtracting multiplying decimal worksheets, cube root on ti-84, FORMULA FOR FINDING SQUARE ROOT OF A DECIMAL, dividing monomials worksheet. Geometry holt rinehart and winston online, online ti-83, rules add and subtract integers worksheet, simplifying complex numbers calculator, factoring and simplifying, online integral calculator using substitutions, solving addition equations worksheet. Download general aptitude questions & answers, printable inequalities graphs, grade 9 math slope formula, new way to divide numbers, how to solve adding, subtracting, dividing, and multiplying integers, easy drawing conclusions worksheets. Precalculus problems, algebraic thinking worksheets, adv algebra finals answers, difference between permutation and combination, holt mathematics online workbooks, how is "rational equations" used in Graphing linear equations with two variables + worksheets, radical inequality in math sign charts, how to find the decimal when you have mixed numbers, what is pre-algebra. Modern chemistry chapter 5 test answers, how to solve nonlinear differential equations, algebra fractions with variables, weather log printable, maths for dummies online. Learn pre-algebra, runge kutta matlab code, Why is it important to simplify radical expressions before adding or subtracting. Factoring algebraic expressions containing fractional and negative exponents, 4th grade algebraic equations worksheets, math worksheets subtracting fractions steps, convert square metres to lineal metres?, how to add and subtract integers with big numbers, quadratic calculator program.
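"Runge kutta matlab code" appears in the run above. The query asks for MATLAB, but the classical fourth-order method is the same in any language; a minimal Python sketch for a scalar first-order ODE y' = f(t, y):

```python
def rk4_step(f, t, y, h):
    """One classical Runge-Kutta (RK4) step for y' = f(t, y)."""
    k1 = f(t, y)
    k2 = f(t + h / 2, y + h * k1 / 2)
    k3 = f(t + h / 2, y + h * k2 / 2)
    k4 = f(t + h, y + h * k3)
    return y + h * (k1 + 2 * k2 + 2 * k3 + k4) / 6

def rk4(f, t0, y0, h, steps):
    """Integrate forward from (t0, y0) by repeated RK4 steps."""
    t, y = t0, y0
    for _ in range(steps):
        y = rk4_step(f, t, y, h)
        t += h
    return y

# y' = y with y(0) = 1, so y(1) should be close to e = 2.71828...
print(rk4(lambda t, y: y, 0.0, 1.0, 0.01, 100))
```

A second-order ODE (also asked for later in the list) is handled by the same code after rewriting it as a system of two first-order equations in (y, y').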
Ks3 printable work, elimination online calculator shows work, power point presentation of algebric expressions, modern biology study guide answer key 5-1, pizzazz worksheets.com, im a teacher looking for algebra 2 answers, math exponent chart. Solving for real numbers, addison wesley conceptual physics answers, When graphing a linear inequality, how do you know if the inequality represents the area above the l, maple solved examples, prealgebra with pizzazz creative publications. Exponential equation sample problems, integers free worksheets, basic algebra notes. Mathematics trivia, step by step integration calculator, high school calculator online ti-83. EXPRESSIONS AND EQUATIONS: TRANSFORMING FORMULAS, online final exam calculator, solving linearly independent, using algebra tiles to solve equations with combining like terms, decimal to mixed number online calculator, activities on factoring expressions. Expressing natural logs using logs, solve math problems with vb6, simplifying radical calculator, how to verify solutions to ratinoal equations by calculator, least common denominator fraction Multiplication of rational expressions calculator, method of factorization of quadratic, prentice hall online textbook alegbra 1, algebra solver algebrator, substitution method with exponent, hardest typing test, trigonometry problems with answers. Right the decimal as a mixed numbers, kumon franchise training test sample questions, my math solver. Quadratic equations with square root x, matlab positive and negative square root, math 11 trigonometry, online standardized test statisic, sample paper class 8, converting a mixed number to a decimal, working out quadratic equation step by step. Ti-84 binär hexadez, caculator for quadratic equations to solve problems, rounding decimal awnser, multiplying and dividing equations problems, secrets to solving fractions, seventh grade mathematics formula chart, algebra expression solver. 
Factor trinomials generator, mcdougal littell math course 3 online, the differences square root, java program to add rational fractions. Y3 optional sats papers, order fractions and decimals from least to greatest, worksheets for slope and y-intercept. Aptitude questions with solutions free, how to write mixed numbers as a decimal, free symmetry powerpoint, books on absolute value. Free matric math software, graphing powerpoints, cube numbers lesson plans, ks2 algebra, algebra 2 with trigonometry prentice hall. Glencoe mathematics Algebra 1 teachers edition, find intercept of inequality with square, decimal elementary, how to do worded simultaneous equations. Solving systems worksheet, what is the difference between function and linear equations, how do i pass algebra 2 class for college degree, pearson prentice hall 6th grade math, heath algebra 2 integrated approach. Calculator that solves functions, how to factor the quadratic expressions on a TI-83, perpedicular lines w/a point, college algebra calculator with division. Graph test 5th, activity of radical expressions, freealgebraworksheets, Investigatory Project 1 week, free TI 83 calculator, solve systems of equations graphing worksheet, how to calculate exponent and roots. Pre algebra with pizzazz what is the title of this picture, an equation that descibes a function in daily life, multiply and divide radical expressions, solving 2 step equations game, partial fraction calculator. Edhelper algebraic fractions grade 10, cube root of negative number, real life graphs worksheet, simplifying radical equations. Algebra calculations explained, website that can factor and simplify radical expressions, nonlinear equation matlab, practice quiz for finding least to greatest (7th grade). 
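The run above asks for a "java program to add rational fractions". A sketch in Python using the standard-library `Fraction` type, which handles the common-denominator step and reduces the result to lowest terms:

```python
from fractions import Fraction

def add_fractions(a: int, b: int, c: int, d: int) -> Fraction:
    """Add a/b + c/d exactly.

    Fraction finds the common denominator and reduces automatically,
    e.g. 1/6 + 1/4 = 2/12 + 3/12 = 5/12.
    """
    return Fraction(a, b) + Fraction(c, d)

print(add_fractions(1, 6, 1, 4))  # 5/12
```

In Java the equivalent is typically a small Rational class built on the same GCD-based reduction.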
Commutative property multiplication worksheets, math worksheets for trigonometry, free math problem solver with steps, greatest common factor calculator with variables and exponents, solve 2y= -6 Runge kutta algorithm matlab second order odes, hands on equations worksheets problems, elementary algebra of fractions, trig complex fraction, algebra stories, how to determine an equation from points on a graph, download aptitude question answer. Solving integration using matlab, factor and simplify square root of x to the 10th, solve multivariable equations online, math trivia 1st year, practice 9th grade honors placement test, algebra connections answers, multi step equations worksheet. Convert mixed fractions to decimals calculator, simplifing an exponential exponents, algebra software tutor. Slope and y-intercept online calculator, algebrator software, square a variable, division square root calculator, how to simplify fractional exponents algebra, Using roots to write an equation, Managerial Accounting 12th Edition Solutions Manual. Solve my proportion, beginner algebra tutor, maths crossword solved, algebra x y calculator, lcm variables and exponents. What is the least common multiple of 38 and 29, subtracting the same exponents with an unknown value, math practice websites 11th grade, accounting equation calculator. Saxon algebra 1 review worksheets for free, homework sheet 19 answers, Long Divison of polynomials calculator, software to graph compound inequalities, writing squares microsoft powerpoint, algebra number line calulater, simplifying expressions with exponents and variables. Prentice hall mathematics algebra 1 workbook answers, equations based on factor triangles, prentice hall conceptual physics answers, elementary math-combination, Math Addition and Subtraction Negative worksheets, topics in algebra herstein solutions, pre algebra topics. Slope in word problems, how to write a arithmatic programme in matlab, factors of algebraic expressions 9th. 
Finding the difference of cubes, free accounting worksheets with answers, greatest common factor gcf WORKSHEET. Square root of 6 radical form, change each decimal to a fraction or mixed number, instructions on using slope field program on ti-84, gcf worksheets free, grade 10 maths worksheets. Finding common denominators worksheet, program that can solve equation in java, find slope of a line using ti 83, simplify 10 root 3 times 2 in radical form, 9th grade quadratic equation problems, write each fraction without a radical in its denominator. Free pre algebra refresher, algebra helper reviews, symbol on calculator for radical, nysed 6th grade math lesson plans. Solving quadratic equations on ti 83, beginning algebra tobey 7th, history square root symbol in mathematics, trig identity proof solver. How to find vertex on a TI-84, math worksheets for 10th graders, program to solve an equivalent fractions, solving questions on factorial. Algebra poems, coordinate in algebra, help-factoring the sum or difference of two cubes, partial differential of square roots, difference simplifying and evaluating an expression, factor complex number mathcad. Multiplying integers worksheet, investigatory about basic operations in complex numbers, decimal sequences worksheet, A level exam questions matrices. Phoenix TI84 cheats, how to convert sqm to sqf, how to solve ln problems, online algebrator, linear algebra equations 3 unknowns matlab, rational expressions and applications complex fractions. Math tree to figure out combinations, free math worksheets expanded notation with exponents, how to simplify a binomial equation, What are the four fundamental math concepts used in evaluating an expression, solving equations with addition, worksheets, complex multi step equations, how to simplify a radical expression on a ti-84. Calculator solve for x, free homework graphs, math powers calculator free online. 
Graphing ordered pairs picture, Divison math remainder when fraction simplified, solving permutation on ti calculator. How to add fractions with negatives, triangle method of converting mixed numbers, online TI 83. Division calculator that shows steps, solving trig problems, Investigatory Project in Mathematics, square root with variables calculator. Y3 maths worksheets, charts for adding and subtracting negatives, what's 8% as a decimal, study for algebra clep, linear systems comparison worksheet, inverse matrix casio fx 2.0 plus. Multiplying and dividing scientific notation worksheet, squre roots of c++ function and examples, elementary algebra fractions, List of Math Trivia, line equations with their corresponding graphs, how to divide radical fractions with a fraction. How to cube roots calculator, worksheets for negative exponents, free calculation for adding rational expressions, rational equation calc. Ninth grade calculator, solving systems by elimination calculator, similarities and difference do you see between functions and linear equations, HOLT Physics online textbook, algebra with pizzazz distributive property trinomial. Simplify rational expressions worksheet print, radical operations calculator, algebra helper, error analysis fractions worksheet, nonlinear second order differential equation matlab, calculating word probelms percents with variables. A free calculator online that can change fractions to percents, solving for radicals, examples of algebra java, sampling code matlab, math trivia questions and answers logarithmic. Calculator for distributive property, prentice hall chemistry answers, how to simplify complex radicals. Rudin mathematical analysis solution, simplify square root of decimal numbers, solve expressions calculator, free propotions word problem worksheets. 
"linear equations" one step worksheets, free online trigonometric functions calculator, programming equations, mcdougal littell algebra 2 worksheets, calculator for dividing fractions. Scale factor worksheet, occupations using exponential equations, creative publications algebra with pizzazz answers. Identifying like terms worksheet, simple way to learn algebra, creative publications pre algebra answers. Easy inequality worksheets, algebra solve scale java, math answers glencoe mcgraw algebra. Compound interest worksheets, functions + free printable worksheets, trivia questions for 3rd grade. Find the sum in algebra, nonlinear ode solutions, grade 10 math exam questions and answers for linear systems substitutiion and elimination, trigo worksheets, Write a mathematical phrase or sentence for your classmates to translate.. Adding similar fractions worksheet, simple explanation of prealgebra and variable to 6th grader, worksheets for polynomials division, equa test for grade 3. A-level algebra example problems, excel solving simultaneous equations, distributive property with fractions, prentice hall math worksheet answers, How do you solve word problem equations with variables on each side. Hyperbola examples, saxon math course 2 answers, fraction and decimal calcoulator with simplicfication, printouts for percents and fractions, matlab complete square, matlab ode45 second order. How to solve a fraction with a cube root in the denominator, What is the difference between an equation and an expression? Include an example of each. Can you solve for a variable in an expression? Explain. Can you solve for a variable in an equation? Explain. Write a mathematical phrase or sentence for your classmates, way to do quadratic formula, p.p for Hyperbola. Subtracting triangle, instant calculation for adding rational expressions, answers for polynomial calculator, prealgebra worksheet printout, math equations percentage, solving second order nonhomogeneous differential equations. 
Solve decimals, math trivia questions and answers, addition and subtraction patterns, equations, 7th grade mcdougal littell math answer book, algebraic equation with fractions calculator, Describe the difference between adding/subtracting and multiplying/dividing when using scientific notation., divide polynomials by monomials online calculator. HOw to use a quadratic equation in life, gcd calculation, gcf calculator with work, runge kutta matlab with example, solving for y worksheet, What is the difference between of evalutation and simplification of an expression, Holt algebra state workbooks. Quadratic square root equations, greatest to least worksheets, trigonometry mathematical poems, Which fraction of a square is the greatest?, College Algebra beginner worksheet. Wave calculations worksheets, least common multiple monomials calculator, square root of 3 javascript, excel formula get "first element of row", x percent of number formula, free worksheets order of operations exponents, boolean algebra simplifier. Trigonometry worksheets for college, algebra multi step equations, how to factor on a SCIENTIFIC calculator, Ks3 Printable Worksheets, how to graph linear equation on a handheld scientific Lcm worksheets, formula sheet arithmetic finite rule, where is log2 casio fx 83, online parabola graphing calculator, simplifying complex radicals, square route through prime factors, dividing monomials solver. How to add L2 on graphing calculator of texas, matlab ti 89, algebra for beginners uk only. How to put an inequality in a graphing calculator, logic questions for 4th grade students, free solve alegbra, algebra baldor, multi-step equation worksheets, math question solver, adding square roots calculator. Coordinate plane equations, application involving quadratic equation, intermediate first year model papers.
Why, and how badly, does the proof of "no percolation at the critical point in half-spaces" fail for full spaces?

The proof by Barsky et al. that there is no percolation in half-spaces proceeds by a dynamic renormalization argument. The proof couples critical percolation in the half-space $\mathbb{H}^d$ with a dependent site percolation model on $\mathbb{Z}^2$ such that if the block sizes in the renormalization are sufficiently large, then the dependent percolation on $\mathbb{Z}^2$ is supercritical. Since the block size is still finite, the probability that a block is "good" is a polynomial in $p$ (the edge probability), and a continuity argument can then be used to show that if $\Theta(p_c)>0$ (for $\mathbb{H}^d$) then there is in fact $p < p_c$ for which the dependent percolation on $\mathbb{Z}^2$ is still supercritical, and so the half-space already contained an infinite component at some $p < p_c$, a contradiction.

While in principle I understand this style of proof, I realized that I don't understand this specific argument well enough to know just what goes wrong if we try to run the same argument on $\mathbb{Z}^d$ rather than $\mathbb{H}^d$. In fact, I don't even understand whether the reason it breaks down is essentially technical, or whether there are good reasons to believe that the same line of attack is very unlikely to work for $\mathbb{Z}^d$. (One good reason to hold the latter belief is that lots of smart people have thought about the problem without success, but that's not the kind of reason I mean.)

So: Why, and how badly, does the proof of "no percolation in half-spaces" fail for full spaces?

Tags: pr.probability, stochastic-processes, statistical-physics, intuition
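The renormalization machinery itself is far beyond toy code, but the object under discussion, the percolation probability near $p_c$, is easy to illustrate numerically. The sketch below (my own illustration, unrelated to the half-space proof) estimates the probability of a top-to-bottom crossing for site percolation on a finite box in $\mathbb{Z}^2$; the estimate jumps sharply across the known 2D site threshold $p_c \approx 0.593$:

```python
import random
from collections import deque

def crosses(grid, n):
    """BFS from the open sites in the top row; return True if a path of
    open sites (4-neighbour adjacency) reaches the bottom row."""
    seen = set()
    q = deque((0, j) for j in range(n) if grid[0][j])
    seen.update(q)
    while q:
        i, j = q.popleft()
        if i == n - 1:
            return True
        for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            a, b = i + di, j + dj
            if 0 <= a < n and 0 <= b < n and grid[a][b] and (a, b) not in seen:
                seen.add((a, b))
                q.append((a, b))
    return False

def crossing_probability(p, n=40, trials=200, seed=0):
    """Monte Carlo estimate of the top-to-bottom crossing probability
    for site percolation with open probability p on an n x n box."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        grid = [[rng.random() < p for _ in range(n)] for _ in range(n)]
        hits += crosses(grid, n)
    return hits / trials
```

On a 40 x 40 box with 200 trials per point, the estimate comes out essentially 0 at p = 0.45 and close to 1 at p = 0.75, a finite-size shadow of the sharp threshold the question is about.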
Integration using given substitution difficulty.

February 10th 2013, 10:46 AM, #1

Hello again. Just when I think I'm getting it I realize I'm really not. I have no idea how to approach this question; I have a bunch like this to do and would really appreciate some help. The question is: using the substitution

$u^2 = 1 + \tan(x)$,

evaluate the integral

$\int\sec^2(x)\tan(x)\sqrt{1 + \tan(x)}\, dx$.

I know that $\tan(x) = u^2 - 1$, that $\sec^2(x) = \tan^2(x) + 1$, and that $u = \sqrt{1 + \tan(x)}$, but have no idea how to proceed. I have looked in all my textbooks but can't find an example similar enough to help me figure out what to do. Thank you.

February 10th 2013, 11:15 AM, #2 (Super Member, joined Oct 2012)

Re: Integration using given substitution difficulty.

Use implicit differentiation to find $du/dx$:

$u^2 = 1 + \tan(x)$

$2u\,\dfrac{du}{dx} = \sec^2(x)$

Then replace $\sec^2(x)\,dx$ with $2u\,du$, and replace $\tan(x)$ with $u^2 - 1$.
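For the record (this step is not in the thread), carrying the substitution through finishes the problem:

```latex
\int \sec^2 x\,\tan x\,\sqrt{1+\tan x}\,dx
  = \int (u^2 - 1)\,u \cdot 2u\,du
  = 2\int (u^4 - u^2)\,du
  = \frac{2}{5}u^5 - \frac{2}{3}u^3 + C
  = \frac{2}{5}(1+\tan x)^{5/2} - \frac{2}{3}(1+\tan x)^{3/2} + C.
```

Differentiating the last expression recovers the integrand, which gives a quick check of the computation.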
Lebesgue Integral

April 3rd 2010, 05:55 AM

Let $(f_n) \in L^p(X)$ for all $n \in \mathbb{N}$ and $1\le p <\infty$. Suppose there exists a function $g \in L^p(X)$ such that $|f_n| \le g$ for all $n \in \mathbb{N}$. Prove that for each $\epsilon>0$ there exists a set $E_\epsilon \subseteq X$ with $m(E_\epsilon)< \infty$ such that if $F\subseteq X$ and $F \cap E_\epsilon =\emptyset$, then $\int_F |f_n|^p\, dm < \epsilon^p$ for all $n \in \mathbb{N}$.

April 3rd 2010, 11:52 AM

Since $|f_n|\leqslant g$ for all $n$, it suffices to find a set $E_\varepsilon \subseteq X$ with $m(E_\varepsilon)< \infty$ such that if $F\subseteq X$ and $F \cap E_\varepsilon =\emptyset$, then $\int_F |g|^p\, dm < \varepsilon^p$. That is a question that you have raised in this forum previously, and you'll find the answer here.
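The linked argument runs along the following lines (a sketch filling in the step that the reply delegates to an earlier thread). Set $E_k := \{x \in X : g(x) \ge 1/k\}$; then

```latex
m(E_k) \le k^p \int_X g^p \, dm < \infty \qquad \text{(Chebyshev's inequality)},
```

```latex
\int_{X \setminus E_k} g^p \, dm \;\longrightarrow\; \int_{\{g=0\}} g^p \, dm = 0
\qquad \text{(dominated convergence, since } \mathbf{1}_{\{g < 1/k\}} \to \mathbf{1}_{\{g=0\}} \text{ pointwise)}.
```

Given $\varepsilon > 0$, choose $k$ with $\int_{X \setminus E_k} g^p \, dm < \varepsilon^p$ and put $E_\varepsilon := E_k$. If $F \cap E_\varepsilon = \emptyset$, then $\int_F |f_n|^p \, dm \le \int_F g^p \, dm \le \int_{X \setminus E_\varepsilon} g^p \, dm < \varepsilon^p$ for every $n$.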
Zinc Lozenges May Shorten the Duration of Colds: A Systematic Review

A number of controlled trials have examined the effect of zinc lozenges on the common cold, but the findings have diverged. The purpose of this study was to examine whether the total daily dose of zinc might explain part of the variation in the results. The Medline, Scopus and Cochrane Central Register of Controlled Trials databases were searched for placebo-controlled trials examining the effect of zinc lozenges on common cold duration. Two methods were used for analysis: the P-values of the trials were combined by using the Fisher method, and the results of the trials were pooled by using the inverse-variance method. Both approaches were used for all the identified trials and separately for the low-zinc-dose and the high-zinc-dose trials. Thirteen placebo-controlled comparisons have examined the therapeutic effect of zinc lozenges on common cold episodes of natural origin. Five of the trials used a total daily zinc dose of less than 75 mg and uniformly found no effect. Three trials used zinc acetate in daily doses of over 75 mg, the pooled result indicating a 42% reduction in the duration of colds (95% CI: 35% to 48%). Five trials used zinc salts other than acetate in daily doses of over 75 mg, the pooled result indicating a 20% reduction in the duration of colds (95% CI: 12% to 28%). This study shows strong evidence that the zinc lozenge effect on common cold duration is heterogeneous, so that benefit is observed with high doses of zinc but not with low doses. The effects of zinc lozenges should be further studied to determine the optimal lozenge compositions and treatment strategies.

Keywords: Meta-analysis, randomized controlled trials, respiratory infections, zinc.
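The first of the two pooling approaches, Fisher's method, combines $k$ independent P-values through $X = -2\sum_i \ln p_i$, which under the global null hypothesis follows a chi-square distribution with $2k$ degrees of freedom. Because the degrees of freedom are always even, the tail probability has a closed form, so a sketch needs nothing beyond the standard library (the three P-values below are made up for illustration):

```python
import math

def fisher_combined_p(pvalues):
    """Fisher's method for combining independent P-values.
    X = -2 * sum(ln p_i) is chi-square with 2k df under the null;
    for even df = 2k, P(X >= x) = exp(-x/2) * sum_{i<k} (x/2)^i / i!."""
    k = len(pvalues)
    x = -2.0 * sum(math.log(p) for p in pvalues)
    half = x / 2.0
    term, tail = 1.0, 1.0          # i = 0 term of the Poisson-tail sum
    for i in range(1, k):
        term *= half / i
        tail += term
    return math.exp(-half) * tail

# Hypothetical P-values from three trials:
print(fisher_combined_p([0.04, 0.10, 0.07]))
```

For production use, `scipy.stats.combine_pvalues` implements the same method, along with several alternatives.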
Tensor Operations Are NP Hard

Written by Mike James, Thursday, 01 August 2013

Most non-mathematicians might think that tensor operations are pretty hard without any formal proof, but new results prove that they are NP-hard, which is not good news if you are trying to work something out using them.

A tensor is the n-dimensional generalization of a matrix. A matrix is a 2D tensor in that it has rows and columns. A 3D tensor is a cube of numbers with rows, columns and slices. In a matrix the numbers are indexed by two variables, e.g. something like a[ij]; in a tensor the number of indexing variables can be more than two, e.g. a[ijk] for a tensor of dimension, or more accurately rank, 3.

We use matrix algebra and matrix numerical methods very often in almost any numerical program; they are the foundation of just about all the number crunching we do. We solve linear systems of equations, find eigenvalues, perform singular value decompositions and so on. These might be regarded as difficult math by some programmers, but for the majority of these tasks we have polynomial-time algorithms. That is, vector and matrix computations are in P.

Tensor algebra generalizes the linear algebra that we are all familiar with to higher dimensions: multi-linear algebra. It turns up in advanced geometry, the theory of gravity (the general theory of relativity, that is), numerical mechanics, and so on. It is also being used in AI and computer vision and in a number of cutting-edge approaches to all sorts of topics. The good thing about tensor algebra is that from an algorithmic point of view it isn't really anything new; in most cases it is just a few more indexes to cope with. Surely the algorithms that we need to work with, say, rank-3 tensors are no more difficult than those for rank-2 tensors, i.e. matrices? They have to be in P, right? Well, no. According to a recent paper, things are much worse for rank-3 tensors.
It seems that they mark a dividing line between the linear convex problems that are tractable and the more difficult non-linear, non-convex class. In the paper a whole list of generalizations of tractable matrix problems are shown to be NP-hard in their rank-3 formulations. These include finding eigenvalues, finding approximate eigenvalues, symmetric eigenvalues, singular values, proving positive definiteness, approximating a tensor by rank-1 tensors, and so on. There is even a mention of a generalized question about finding a matrix function which is shown to be undecidable for any tensor 20x20x2 or greater. This is a big surprise because the equivalent matrix function can be easily found, and it is another hint that throwing in just one extra index to a matrix problem makes it very much more difficult. As the title of the paper puts it: "Most tensor problems are NP-hard".

Should we abandon tensor methods because most of them are not in P? You need to keep in mind that this sort of analysis is only valid asymptotically, when the size n in an n x n x n tensor gets very large. It could well be that for a range of small n we can find algorithms that get the job done in reasonable time. The fact that a problem is NP-hard doesn't mean it's impossible!

The paper concludes with some open questions and, if you are reasonably good at math, you should find it an interesting read.
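To make the contrast concrete, here is the tractable matrix side of the story: a dominant eigenvalue found by plain power iteration, with only O(n^2) work per step. This is a generic sketch, not code from the paper; the rank-3 analogue of this task is among the problems the paper proves NP-hard.

```python
def power_iteration(a, steps=200):
    """Approximate the dominant eigenvalue/eigenvector of a square
    matrix `a` (a list of rows) by repeated multiplication.
    Assumes the dominant eigenvalue is positive and the start vector
    is not orthogonal to its eigenvector."""
    n = len(a)
    v = [1.0] * n
    lam = 0.0
    for _ in range(steps):
        w = [sum(a[i][j] * v[j] for j in range(n)) for i in range(n)]
        lam = max(abs(x) for x in w)   # scale factor -> dominant eigenvalue
        v = [x / lam for x in w]
    return lam, v

lam, v = power_iteration([[2.0, 1.0], [1.0, 2.0]])  # eigenvalues are 1 and 3
```

For this symmetric example the iteration settles on the eigenpair $\lambda = 3$ with eigenvector proportional to $(1, 1)$.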
Congruence modulo and equivalence classes

April 8th 2010, 08:57 PM

Problem: Suppose that $M$ is a subspace of a vector space $V$. Prove that a subset of $V$ is an equivalence class modulo $M$ if and only if it is a coset of $M$.

Let $N$ be a subset of $V$.

First direction: if $N$ is a coset of $M$, then it is an equivalence class modulo $M$. Now, I don't really know how to go from $N$ being a coset of $M$ to it being an equivalence class modulo $M$. Any help is greatly appreciated!

April 9th 2010, 05:28 AM

The only thing you can really do is show that $N = x+M = [x] = \{y\in V : y-x \in M\}.$ This isn't too hard, you just need to keep the definitions in mind: $y \in [x] \Leftrightarrow y - x \in M \Leftrightarrow y \in x + M = N.$ Just make sure you keep appealing to the definitions, and make sure you use the closure of $M$ under addition and taking inverses (owing to the fact that it is a subspace) to convince yourself that $x\sim y \Leftrightarrow x-y\in M$ actually does define an equivalence relation.
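Spelling out that last remark (standard, but worth writing down): $x \sim y \Leftrightarrow x - y \in M$ is an equivalence relation precisely because $M$ is a subspace:

```latex
\begin{aligned}
&x \sim x: && x - x = 0 \in M && (\text{a subspace contains } 0),\\
&x \sim y \Rightarrow y \sim x: && y - x = -(x - y) \in M && (\text{closed under scalar multiples}),\\
&x \sim y,\ y \sim z \Rightarrow x \sim z: && x - z = (x - y) + (y - z) \in M && (\text{closed under addition}).
\end{aligned}
```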
Directional Derivative & Triple Integral Problems

May 11th 2010, 09:16 AM, #1 (Junior Member, joined Mar 2010)

Hi to all again =) I have two questions about directional derivatives and triple integrals.

1) The temperature in a rectangular box is approximated by

$T(x,y,z)=xyz(1-x)(2-y)(3-z), \qquad 0 \le x \le 1,\ 0 \le y \le 2,\ 0 \le z \le 3.$

If a mosquito is located at $(1/2, 1, 1)$, in which direction should it fly to cool off as rapidly as possible?

2) A wedge is cut from a right circular cylinder of radius R by a plane perpendicular to the x-axis of the cylinder and a second plane that meets the first on the axis at an angle of $\theta$ degrees. Set up and evaluate a triple integral for the volume of the wedge.

These are the two questions, and I don't know where to begin, so I couldn't make any progress. I'd be very grateful for any help.
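For question 1, the mosquito should fly in the direction of $-\nabla T$ at its position, since the gradient points in the direction of fastest temperature increase. A quick numerical check of the hand computation (my own sketch, using central differences) looks like this:

```python
def T(x, y, z):
    # Temperature field from the problem statement.
    return x * y * z * (1 - x) * (2 - y) * (3 - z)

def grad(f, p, h=1e-6):
    """Central-difference estimate of the gradient of f at point p."""
    g = []
    for i in range(len(p)):
        hi = [q + h * (i == j) for j, q in enumerate(p)]
        lo = [q - h * (i == j) for j, q in enumerate(p)]
        g.append((f(*hi) - f(*lo)) / (2 * h))
    return g

g = grad(T, (0.5, 1.0, 1.0))
```

The gradient comes out as $(0, 0, 1/4)$, so the mosquito should fly in the $-\mathbf{k}$ direction (straight toward smaller $z$) to cool off as rapidly as possible.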
SDE-removal of the diffusion coefficients

(from math.stackexchange)

I'm currently looking at stochastic differential equations with irregular coefficients such as $W^{1,p}_{loc}$. If I have

$$dX_t=b(X_t)\,dt+\sigma\, dW_t,$$

where $b\in W^{1,1}_{loc}$ and $\sigma$ is a constant, I can write this SDE as an ODE for every Brownian path $w$ by defining $Y_t=X_t-w_t$ and a new vector field $b^w$, with $b^w(t,Y)=b(Y_t+w_t)$, so the ODE is

$$dY=b^w(t,Y)\,dt$$

with initial condition $Y_0= y$. Since $b^w$ has Sobolev regularity I can then apply DiPerna-Lions theory (1989), which guarantees the existence and uniqueness of the flow of the ODE.

My question is now: what happens if $\sigma$ is not a constant,

$$dX_t=b(X_t)\,dt+\sigma(X_t)\,dW_t?$$

Apparently the above argument is NOT correct in general. Is there any way that the diffusion coefficient $\sigma(X_t)$ can be absorbed into the Brownian motion? Or maybe there are some conditions on $\sigma$ under which the above argument still holds? Many thanks!!

Tags: stochastic-processes, pr.probability, stochastic-calculus, brownian-motion

1 Answer

If I understood your question correctly, then the diffusion coefficient can be "absorbed" using a standard trick. So consider the equation $$dX_t=b(X_t)dt+\sigma(X_t)dW_t.$$ Assume that $\sigma(x)\ge\varepsilon>0$ for all real $x$. Define $Y_t:=\psi(X_t)$, where $$\psi(x):=\int\limits_0^{x} \frac{1}{\sigma(s)}\, ds.$$ By Ito's lemma, $$dY_t=\frac{b(X_t)}{\sigma(X_t)}dt-\frac12 \sigma'(X_t)dt+dW_t.$$ Now if we recall that $X_t=\psi^{-1}(Y_t)$, we will finally obtain $$dY_t=\frac{b(\psi^{-1}(Y_t))}{\sigma(\psi^{-1}(Y_t))}dt-\frac12 \sigma'(\psi^{-1}(Y_t))dt+dW_t.$$ Thus, the diffusion coefficient has been "absorbed" and now equals 1. Hope this helps.
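Writing out the Itô step in the answer explicitly (with $\psi'(x) = 1/\sigma(x)$, hence $\psi''(x) = -\sigma'(x)/\sigma(x)^2$, and $d\langle X\rangle_t = \sigma(X_t)^2\,dt$):

```latex
dY_t = \psi'(X_t)\,dX_t + \tfrac12\,\psi''(X_t)\,d\langle X\rangle_t
     = \frac{b(X_t)\,dt + \sigma(X_t)\,dW_t}{\sigma(X_t)}
       - \tfrac12\,\frac{\sigma'(X_t)}{\sigma(X_t)^2}\,\sigma(X_t)^2\,dt
     = \frac{b(X_t)}{\sigma(X_t)}\,dt - \tfrac12\,\sigma'(X_t)\,dt + dW_t,
```

which is exactly the equation in the answer. The assumption $\sigma \ge \varepsilon > 0$ keeps $\psi$ well defined and strictly increasing, so $\psi^{-1}$ exists.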
Sir Isaac Newton

Sir Isaac Newton finally returned to Cambridge in 1667, where he spent the next 29 years. During this time, he published many of his most famous works, beginning with the treatise "De Analysi," dealing with infinite series. Newton's friend and mentor Isaac Barrow was responsible for bringing the work to the attention of the mathematics community. Shortly afterwards, Barrow, who held the Lucasian Professorship at Cambridge (established just four years previously, with Barrow the only recipient), gave it up so that Newton could have the Chair.

With his name becoming well known in scientific circles, Sir Isaac Newton came to the attention of the public for his work in astronomy when he designed and constructed the first reflecting telescope. This breakthrough in telescope technology, which gave a sharper image than was possible with a large lens, ensured his election to membership in the Royal Society.

The scientists Sir Christopher Wren, Robert Hooke, and Edmond Halley began a disagreement in 1684 over whether it was possible that the elliptical orbits of the planets could be caused by gravitational force towards the sun which varied inversely as the square of the distance. Halley traveled to Cambridge to ask the Lucasian Chair himself. Sir Isaac Newton claimed to have solved the problem four years earlier, but could not find the proof among his papers. After Halley's departure, Isaac worked diligently on the problem and sent an improved version of the proof to the distinguished scientists in London. Throwing himself into the project of developing and expanding his theories, Newton eventually turned this work into his greatest book, Philosophiae Naturalis Principia Mathematica, in 1686. This work, which Halley encouraged him to write, and which Halley published at his own expense, brought him more into the view of the public and changed our view of the universe forever.
Shortly after this, Sir Isaac Newton moved to London, accepting the position of Master of the Mint. For many years afterward, he argued with Robert Hooke over who had actually discovered the connection between elliptical orbits and the inverse square law, a dispute which ended only with Hooke s death in 1703. In 1705, Queen Anne bestowed knighthood upon him, making him Sir Isaac Newton. Another dispute began in 1709, this time with German mathematician, Gottfried Leibniz, over which of them had invented calculus. While it may never have been settled to the satisfaction of either man, it lasted until around 1716. One reason for Sir Isaac Newton's disputes with other scientists was his tendency to write his brilliant articles, then not publish until after another scientist created similar work. Besides his earlier work, "De Analysi" (which didn't see publication until 1711) and "Principia" (published in 1687), Newton's other works included "Optics" (published in 1704), "The Universal Arithmetic" (published in 1707), the "Lectiones Opticae" (published in 1729), the "Method of Fluxions" (published in 1736), and the "Geometrica Analytica" (printed in 1779). On March 20, 1727, Sir Isaac Newton died near London. He was buried in Westminster Abbey, the first scientist to be accorded this honor. Today, Stephen Hawking holds the Lucasian Chair.
Liking of apples – more than juiciness

March 18, 2012, by Wingfeet

In a previous blog it was shown, using literature data, that liking of apples was related to juiciness. However, there were some questions:

1. Is the relation linear or slightly curved?
2. The variation in liking around CJuiciness is large. Are more explanatory variables needed?
3. So, what drives CJuiciness?

In this post it becomes clear that indeed there is more to liking of apples than just juiciness.

Error in average scores

The paper of Peneau et al. uses Tukey's post hoc test to examine the differences between the products. The test is performed within the weeks. First we use the data to retrieve at what difference the test shows significant differences.

library(xlsReadWrite)
library(plyr)
library(ggplot2)
library(locfit)

# get data as before
datain <- read.xls('condensed.xls')
# remove storage conditions
datain <- datain[-grep('bag|net', datain$Products, ignore.case=TRUE), ]
# create week variable
datain$week <- sapply(strsplit(as.character(datain$Product), '_'),
    function(x) x[[2]])
# function to extract pairwise numerical differences and significances
extract.diff <- function(descriptor) {
  diffs1 <- ldply(c('W1', 'W2'), function(Week) {
    value <- as.numeric(gsub('[[:alpha:]]', '',
        datain[, descriptor]))[datain$week == Week]
    sig.dif <- gsub('[[:digit:].]', '', datain[, descriptor])[datain$week == Week]
    dif.mat <- outer(value, value, '-')
    sig.mat <- outer(sig.dif, sig.dif, function(X, Y) {
      sapply(1:length(X), function(i) {
        g1 <- grep(paste('[', X[i], ']'), Y[i])
        length(g1)  # 1 if the two products share a significance letter, else 0
      })
    })
    data.frame(dif.val = as.vector(dif.mat), dif.sig = as.vector(sig.mat),
        Week = Week, descriptor = descriptor)
  })
  diffs1
}

likedif <- extract.diff("CLiking")
likedif <- likedif[likedif$dif.val >= 0, ]
g <- ggplot(likedif, aes(dif.val, dif.sig))
g + geom_jitter(aes(colour = Week), position = position_jitter(height = .05))

The plot shows that a difference of between 0.24 and 0.26 is enough to be significant.
For Juiciness, the pattern is the same:

likedif <- extract.diff("CJuiciness")
likedif <- likedif[likedif$dif.val >= 0, ]
g <- ggplot(likedif, aes(dif.val, dif.sig))
g + geom_jitter(aes(colour = Week), position = position_jitter(height = .05))

In Juiciness a difference of 0.24 is enough to be significant. Given all this, a difference of 0.24 can be used for both variables.

Liking vs. Juiciness

The plot with errors is easy to make. For completeness a local fit is added.

dataval <- datain
vars <- names(dataval)[-1]
for (descriptor in vars) {
  dataval[, descriptor] <- as.numeric(gsub('[[:alpha:]]', '', dataval[, descriptor]))
}
l1 <- locfit(CLiking ~ lp(CJuiciness, nn = 1), data = dataval)
topred <- data.frame(CJuiciness = seq(3.6, 4.8, .1))
topred$CLiking <- predict(l1, topred)
g <- ggplot(dataval, aes(CJuiciness, CLiking))
g <- g + geom_point() + geom_errorbar(aes(ymin = CLiking - .24, ymax = CLiking + .24))
g <- g + geom_errorbarh(aes(xmin = CJuiciness - .24, xmax = CJuiciness + .24))
g <- g + geom_line(data = topred, colour = 'blue')

Both the local fit and the errors suggest that curvature is interesting to pursue. On top of that, a linear relation implies that any increase in juiciness is good. In general an optimum level is expected. Compare with sugar: if you like two lumps of sugar, one lump is disliked as not sweet enough, while four is too sweet. Hence, again, every reason for curvature.

Regarding inclusion of extra variables, the data shows that the products at Juiciness 4.1 are almost significantly different. Given that this significance is Tukey HSD, a difference involving the one point with much lower liking is probably relevant. On the other hand, the data is fairly well described by curvature, so one extra explaining variable should be enough.

Adding an extra explaining variable

The two prime candidates as second explaining variable are, according to the previous calculation, CSweetness and CMealiness. In general one would expect apples that are juicy not to be mealy, so there is a reason to avoid CMealiness.
Nevertheless both are investigated.

l1 <- locfit(CLiking ~ lp(CJuiciness, CSweetness, nn = 1.3), data = dataval)
topred <- expand.grid(CJuiciness = seq(3.8, 4.6, .1), CSweetness = seq(3.2, 3.9, .1))
topred$CLiking <- predict(l1, topred)
v <- ggplot(topred, aes(CJuiciness, CSweetness, z = CLiking))
v <- v + stat_contour(aes(colour = ..level..))
v + geom_point(data = dataval, stat = 'identity', position = 'identity',
    aes(CJuiciness, CSweetness))

l1 <- locfit(CLiking ~ lp(CJuiciness, CMealiness, nn = 1.3), data = dataval)
topred <- expand.grid(CJuiciness = seq(3.8, 4.6, .1), CMealiness = seq(1.4, 2.1, .1))
topred$CLiking <- predict(l1, topred)
v <- ggplot(topred, aes(CJuiciness, CMealiness, z = CLiking))
v <- v + stat_contour(aes(colour = ..level..))
v + geom_point(data = dataval, stat = 'identity', position = 'identity',
    aes(CJuiciness, CMealiness))

The link between CMealiness and CJuiciness is quite strong. It is also clear that CMealiness does not explain the large difference in liking at CJuiciness 4.1. Hence CSweetness is chosen. Not the best of statistical reasons, but all in all it feels like the better model.

Simplified linear model

Finally, even though I like the local model, it is more convenient to use a simple linear model. After reduction of non-significant terms, only three factors are left: CJuiciness, CJuiciness^2 and CSweetness.

l1 <- lm(CLiking ~ CJuiciness * CSweetness + I(CJuiciness^2) + I(CSweetness^2),
    data = dataval)
summary(l1)

Call:
lm(formula = CLiking ~ CJuiciness * CSweetness + I(CJuiciness^2) +
    I(CSweetness^2), data = dataval)

Residuals:
     Min       1Q   Median       3Q      Max
-0.12451 -0.02432  0.01002  0.02591  0.06914

Coefficients:
                      Estimate Std. Error t value Pr(>|t|)
(Intercept)            -6.5326    19.8267  -0.329   0.7475
CJuiciness              6.7662     4.0414   1.674   0.1199
CSweetness             -2.5410     8.7807  -0.289   0.7772
I(CJuiciness^2)        -0.8638     0.3564  -2.424   0.0321 *
I(CSweetness^2)         0.1264     0.9538   0.133   0.8968
CJuiciness:CSweetness   0.3321     0.6574   0.505   0.6225
---
Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1

Residual standard error: 0.05827 on 12 degrees of freedom
Multiple R-squared: 0.9107, Adjusted R-squared: 0.8735
F-statistic: 24.48 on 5 and 12 DF, p-value: 6.589e-06

l1 <- lm(CLiking ~ CJuiciness + CSweetness + I(CJuiciness^2), data = dataval)
summary(l1)

Call:
lm(formula = CLiking ~ CJuiciness + CSweetness + I(CJuiciness^2),
    data = dataval)

Residuals:
      Min        1Q    Median        3Q       Max
-0.125337 -0.023834  0.004955  0.024922  0.087851

Coefficients:
                 Estimate Std. Error t value Pr(>|t|)
(Intercept)      -14.3609     5.1169  -2.807  0.01400 *
CJuiciness         8.5997     2.3715   3.626  0.00275 **
CSweetness        -0.2683     0.1104  -2.430  0.02916 *
I(CJuiciness^2)   -0.9428     0.2795  -3.373  0.00455 **
---
Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1

Residual standard error: 0.0547 on 14 degrees of freedom
Multiple R-squared: 0.9082, Adjusted R-squared: 0.8885
F-statistic: 46.17 on 3 and 14 DF, p-value: 1.654e-07

topred <- expand.grid(CJuiciness = seq(3.8, 4.6, .05), CSweetness = seq(3.2, 3.9, .05))
topred$CLiking <- predict(l1, topred)
v <- ggplot(topred, aes(CJuiciness, CSweetness, z = CLiking))
v <- v + stat_contour(aes(colour = ..level..))
v + geom_point(data = dataval, stat = 'identity', position = 'identity',
    aes(CJuiciness, CSweetness))

The resulting plot shows quite some difference from the local fit, but this is mostly in the regions without data. Hence it is concluded that liking of apples depends mainly on juiciness, and somewhat on sweetness. Above a certain juiciness no gain is to be made. Lower sweetness gives better liking.

It is a bit disappointing that the error in CJuiciness and CSweetness is not incorporated in the model. Unfortunately, this is easier said than done. The keyword here is Total Least Squares, also named Deming regression. Unfortunately these are only viable if two variables are regressed. Curvature is also outside of the scope. In addition, this leads to the question: what is error? The 'error' between the scores consists of different parts.
Differences between slices of apple, differences in sensory perception and different ways to score all contribute. Regarding the model, differences in slices of apple and in sensory perception are counted in liking, while scoring error is not. Ideally, these would be split, but that is rather a tall order. Hence, two questions remain:

1. What drives CJuiciness?
2. What drives CSweetness?
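As a quick arithmetic check on the reduced model above (coefficients taken from the summary output; the script is Python rather than R), the fitted quadratic in juiciness peaks near the upper edge of the observed 3.8-4.6 range, and the negative sweetness coefficient means lower sweetness predicts higher liking throughout.

```python
# Reduced model reported above:
# CLiking = -14.3609 + 8.5997*J - 0.2683*S - 0.9428*J^2
b_J, b_J2, b_S = 8.5997, -0.9428, -0.2683

# The quadratic in juiciness peaks at J* = -b_J / (2*b_J2).
J_opt = -b_J / (2 * b_J2)
print(round(J_opt, 2))  # 4.56, at the upper edge of the observed range

# d(liking)/d(sweetness) is the constant b_S: negative, so lower
# sweetness predicts higher liking everywhere in this model.
assert b_S < 0
```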
saving modelview matrix(?) [Archive] - OpenGL Discussion and Help Forums
01-31-2001, 11:50 AM

Hi everyone, I'm trying to write a pretty simple app with a ball bouncing around the inside of a box. Basically, I'm wondering what the best way to go about this is. Ideally, I could save the angle the ball is traveling in and then just move it a bit in that direction every time I call display (with some collision detection for the walls, of course). So what I wanted to do was somehow save a snapshot of the modelview matrix every time I go through display so that when I come back I can pick up where I left off. I know that I could calculate the new coordinates and save them and then call glTranslate but it seems like saving the matrix would be a lot cleaner. Anyway, just wondering, thanks in advance.
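For what it's worth, here is a sketch of the "calculate the new coordinates and save them" approach mentioned at the end of the post: keep position and velocity as your own state, reflect the velocity at the walls, and rebuild the transform with glTranslate each frame. Plain Python is used instead of C/OpenGL, and the GL calls in the comments are only indicative.

```python
# Bouncing-ball state update: axis-aligned box, sphere of given radius.
def step(pos, vel, box_min, box_max, radius, dt):
    new_pos = [p + v * dt for p, v in zip(pos, vel)]
    new_vel = list(vel)
    for i in range(3):
        if new_pos[i] - radius < box_min[i]:
            new_pos[i] = box_min[i] + radius  # clamp to the wall
            new_vel[i] = -new_vel[i]          # reflect
        elif new_pos[i] + radius > box_max[i]:
            new_pos[i] = box_max[i] - radius
            new_vel[i] = -new_vel[i]
    return new_pos, new_vel

pos, vel = [0.0, 0.0, 0.0], [0.9, 0.4, -0.7]
for _ in range(1000):
    pos, vel = step(pos, vel, [-1.0] * 3, [1.0] * 3, 0.1, 0.05)
    # each frame: glLoadIdentity(); glTranslatef(*pos); draw the ball
```

The ball never leaves the box and only the sign of each velocity component ever changes, so no matrix snapshot is needed.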
Research Matters - to the Science Teacher

Teaching Conceptual Understanding to Promote Students' Ability to do Transfer Problems
by William C. Robertson

Consider the following steps in a basic algebra problem:

Solve for x: x + 3 = 5
x = 5 - 3
x = 2

Now suppose two different students have learned how to solve problems such as the one above and they now encounter a new situation:

Solve for x: 4x = 16

Let us further suppose that our students have never seen a problem involving multiplication of algebraic variables. The students think out loud as they try to solve this problem. Here is what they might say:

Student 1: "I want to solve for x, so I need to get x by itself. In order to maintain the equality as before, I must do the same thing to both sides of the equation. I can isolate x if I multiply 4x by 1/4, so that's what I'll do to both sides of the equation."

(1/4)(4x) = (1/4)(16)
x = (1/4)(16)
x = 4

Student 2: "I need to get x by itself again. In the previous problems, I took the number to the other side and made it negative, so I'll do that again."

x = 16 - 4
x = 12

Which of the two students would you say understands the concepts associated with solving linear equations? Which student has memorized a set of procedures or algorithms? If you said that student 1 understands the concepts, you are in agreement with most cognitive psychologists who study how people solve problems. Student 1 is able to successfully apply the concepts in a novel situation, which is an indication that the student understands the concepts. Unfamiliar problems that require previously-encountered concepts for their solution are "transfer problems." Conceptual understanding is superior to memorized algorithms for solving transfer problems (Katona, 1940; Mayer, 1974; Mayer, Stiehl & Greeno, 1975). Conceptual understanding is a worthwhile goal of science teaching; but what is conceptual understanding and how is it taught?
In this article we shall take a look at the models of human memory and knowledge (cognitive structures) that are associated with conceptual understanding and the ability to do transfer problems. These models will then be used to recommend specific teaching strategies.

Cognitive Structures Associated With Understanding

Studies in complex domains such as solving science problems (Bromage & Mayer, 1981; Heller & Reif, 1984; Robertson, 1986) have suggested that conceptual understanding is associated with connections -- connections between science concepts and everyday life and connections among the different science concepts in a discipline. Someone who is good at solving transfer problems does not randomly connect concepts (which might occur when using memorized algorithms to solve problems) but rather integrates the concepts into a well-structured knowledge base. Broad, organizing concepts are situated at the top of the hierarchy and useful ancillary knowledge is contained in lower levels, as Figure 1 shows for selected physics concepts. The concept map illustrated in Figure 1 is not complete, but it does clearly show the relative importance and appropriate connections among some physics concepts. For instance, the map shows that friction and electrical forces are not major problem-solving principles but merely types of forces that one might consider when using the principle of Newton's Second Law. Major principles such as Conservation of Energy are at the top of the hierarchy, and physical characteristics of systems (e.g. whether a spring is present) are at the bottom, which lets the student know the relative importance of these ideas. For the purpose of solving transfer problems, this well-structured knowledge base appears to be more important than the utilization of strategies such as setting goals and subgoals and working backwards from the goal.
These strategies may be helpful, but without utilizing an accompanying "connectedness" of concepts specific to the discipline, one cannot be good at solving problems in science and other complex domains.

What to Do?

If one agrees that conceptual understanding in a discipline is desirable, then what can a teacher do about it? The following are appropriate teaching strategies suggested by research:

1) Help your students to see the structure of your discipline. Show them the "big picture" -- how concepts connect with one another and with everyday experiences. Concept mapping (Novak & Gowin, 1984), shown in Figure 1, is an excellent tool for illustrating how concepts are related. Explicate the "ancillary knowledge" associated with formulas and principles -- this is knowledge that students use to "make sense" of a formula rather than just memorize it.

2) The most important thing you are presenting to students is the big picture, so allow them to concentrate on the big picture by making sure that they can use certain skills almost automatically. For example, one is not free to acquire a conceptual understanding of an equation such as F = ma if the use of algebraic symbols in an equation is not second nature. Similarly, one cannot begin to concentrate on the meaning of words if one doesn't know the alphabet well. This is not to say that skills should be memorized; you should teach them in a meaningful way, just as you should teach higher-level concepts in a meaningful way. However, students should then practice the skills until they no longer present a hindrance to concentrating on more important matters.

3) Instill in your students the desire to make sense of the subject matter. Encourage them to look for the connections among concepts and to structure these concepts in a hierarchy. Encourage your students to be dissatisfied with explanations that are to be memorized rather than understood.
Although a well-structured knowledge base in one discipline will not transfer to another discipline, perhaps the ability and desire to look for the appropriate conceptual structure in the new discipline is transferable.

4) Test your students for their ability to solve transfer problems. Testing students on problems that are exactly like ones they have done in their homework is a sure way to promote memorization of problem types. If, however, students know that the test problems will be unfamiliar, they are more likely to try to acquire the conceptual understanding necessary to do them. If you are able to help your students truly understand concepts, perhaps at some point during the year they will stop referring to the transfer problems as "trick questions!"

5) Finally, allow your students the time necessary to acquire conceptual understanding. People need time to establish connections and see how concepts fit together. Reduce the number of topics you cover in your science course. Teach fewer (important) concepts in greater depth. Allow more time for laboratory explorations that are meaningful rather than an exercise in following recipes. It is better for your students to understand a limited number of science concepts than to memorize many concepts that they are unable to apply in novel situations.

William C. Robertson is a staff associate at the Biological Sciences Curriculum Study. He is a member of the National Association for Research in Science Teaching, an organization dedicated to improving science teaching through research.

References

Bromage, B. K., & Mayer, R. E. (1981). Relationship between what is remembered and creative problem solving performance in science learning. Journal of Educational Psychology, 73, 451-461.

Chi, M. T. H., Feltovich, P., & Glaser, R. (1981). Categorization and representation of physics problems by experts and novices. Cognitive Science, 5, 121-152.

Heller, J. I., & Reif, F. (1984).
Prescribing effective human problem-solving processes: Problem description in physics. Cognition and Instruction, 1, 177-216.

Katona, G. (1940). Organizing and memorizing. New York: Columbia University Press.

Mayer, R. E. (1974). Acquisition processes and resilience under varying testing conditions for structurally different problem solving procedures. Journal of Educational Psychology, 66, 644-656.

Mayer, R. E., Stiehl, C. C., & Greeno, J. G. (1975). Acquisition of understanding and skill in relation to subjects' preparation and meaningfulness of instruction. Journal of Educational Psychology, 67, 331-350.

Novak, J. D., & Gowin, D. B. (1984). Learning how to learn. New York: Cambridge University Press.

Robertson, W. C. (1986). Measurement of conceptual understanding in physics: Predicting performance on transfer problems involving Newton's second law. Doctoral dissertation, University of Colorado.

Simon, D. P., & Simon, H. A. (1978). Individual differences in solving physics problems. In Siegler, R. (Ed.), Children's thinking: What develops? Hillsdale, NJ: Erlbaum.

Research Matters - to the Science Teacher is a publication of the National Association for Research in Science Teaching.
The nature of the wave-induced flow around a circular cylinder is to a large extent determined by the ratio of water-particle orbit size to cylinder diameter, characterized in regular waves by the Keulegan-Carpenter number KC (see Appendix II for definitions). When D/L < 0.2, wave scattering effects are negligible and it is conventional to describe the fluid loading in terms of drag and inertia forces in-line with the direction of wave propagation plus a transverse 'lift' force. The idealised two-dimensional situation of a cylinder normal to planar sinusoidal flow has been investigated in U-tubes by Sarpkaya (10, 11). As KC advances above 2, vorticity starts to be shed and produces forces in addition to the inertia force which would result from the undisturbed fluid acceleration. The vortex-induced forces become more important as KC increases. Defining the drag force as the component of the in-line force in phase with the fluid velocity and the inertia force as the component in phase with the acceleration, it is found that the drag, inertia and lift can have comparable magnitudes when KC is between 8 and 25. This paper is concerned with the corresponding regime in waves. In the idealised situation vortex shedding is almost perfectly correlated along the length of the cylinder, but generally this will not be the case in waves. Here the degree of vortex coherence will influence the vortex-induced forces, particularly the lift, which is strongly dependent on history effects. Although the forces on fixed vertical cylinders have been measured, little is known about the loading on cylinders in general orientation in either unidirectional waves or planar flows. Real seas are further complicated by being random and multidirectional, with the possibility of superimposed currents. The interaction of cylinder vibration with vortex shedding can be highly non-linear in currents, e.g. see (12), but again little is known about what happens in waves.
Although scale influences the magnitude of forces when vortex shedding is important, small-scale experiments can qualitatively represent full-scale flows. Thus, the interrelation between the various parameters which influence wave loading may be studied in the relatively controlled environment of a laboratory channel. Furthermore, analysis techniques which have been justified on the model scale can then be applied with greater confidence to full-scale situations.

Keywords: wave loading; wave response; cylinder

This work is licensed under a Creative Commons Attribution 3.0 License.
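The KC regimes quoted in the abstract can be collected into a small classifier. KC = Um·T/D is the standard Keulegan-Carpenter definition (the paper's Appendix II is not reproduced here, so treat that formula, and the sample numbers, as assumptions); Um is the velocity amplitude, T the wave period, and D the cylinder diameter.

```python
# Keulegan-Carpenter number and the force regimes described in the abstract.
def keulegan_carpenter(u_max, period, diameter):
    return u_max * period / diameter

def regime(kc):
    if kc <= 2:
        return "inertia-dominated: little or no vortex shedding"
    if 8 <= kc <= 25:
        return "drag, inertia and lift of comparable magnitude"
    return "vortex shedding present"

kc = keulegan_carpenter(u_max=1.5, period=8.0, diameter=1.0)  # hypothetical values
print(kc, "->", regime(kc))  # 12.0 -> drag, inertia and lift of comparable magnitude
```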
Electronic Warfare and Radar Systems Engineering Handbook

Radar cross section is the measure of a target's ability to reflect radar signals in the direction of the radar receiver, i.e. it is a measure of the ratio of backscatter power per steradian (unit solid angle) in the direction of the radar (from the target) to the power density that is intercepted by the target. The RCS of a target can be viewed as a comparison of the strength of the reflected signal from a target to the reflected signal from a perfectly smooth sphere of cross sectional area of 1 m², as shown in Figure 1.

σ = Projected cross section x Reflectivity x Directivity

RCS (σ) is used in Section 4-4 for an equation representing power reradiated from the target.

Reflectivity: The percent of intercepted power reradiated (scattered) by the target.

Directivity: The ratio of the power scattered back in the radar's direction to the power that would have been backscattered had the scattering been uniform in all directions (i.e. isotropically).

The RCS of a sphere is independent of frequency if operating at sufficiently high frequencies where λ << Range and λ << radius (r). Experimentally, radar return reflected from a target is compared to the radar return reflected from a sphere which has a frontal or projected area of one square meter (i.e. diameter of about 44 in). Using the spherical shape aids in field or laboratory measurements since orientation or positioning of the sphere will not affect radar reflection intensity measurements as a flat plate would. If calibrated, other sources (cylinder, flat plate, or corner reflector, etc.) could be used for comparative measurements. To reduce drag during tests, towed spheres of 6", 14" or 22" diameter may be used instead of the larger 44" sphere, and the reference size is 0.018, 0.099 or 0.245 m² respectively instead of 1 m². When smaller sized spheres are used for tests you may be operating at or near where λ ≈ radius.
If the results are then scaled to a 1 m² reference, there may be some perturbations due to creeping waves. See the discussion at the end of this section for further details.

Figure 3. Backscatter From Shapes

The sphere is essentially the same in all directions. The flat plate has almost no RCS except when aligned directly toward the radar. The corner reflector has an RCS almost as high as the flat plate but over a wider angle, i.e., over ±60°. The return from a corner reflector is analogous to that of a flat plate always being perpendicular to your collocated transmitter and receiver. Targets such as ships and aircraft often have many effective corners. Corners are sometimes used as calibration targets or as decoys, i.e. corner reflectors.

An aircraft target is very complex. It has a great many reflecting elements and shapes. The RCS of real aircraft must be measured. It varies significantly depending upon the direction of the illuminating radar. As shown in Figure 5, the RCS is highest at the aircraft beam due to the large physical area observed by the radar and perpendicular aspect (increasing reflectivity). The next highest RCS area is the nose/tail area, largely because of reflections off the engines or propellers. Most self-protection jammers cover a field of view of ±60 degrees about the aircraft nose and tail, thus the high RCS on the beam does not have coverage. Beam coverage is frequently not provided due to inadequate power available to cover all aircraft quadrants, and the side of an aircraft is theoretically exposed to a threat 30% of the time over the average of all scenarios.

Typical radar cross sections are as follows: missile 0.5 m²; tactical jet 5 to 100 m²; bomber 10 to 1000 m²; and ships 3,000 to 1,000,000 m². RCS can also be expressed in decibels referenced to a square meter (dBsm), which equals 10 log (RCS in m²). Again, Figure 5 shows that these values can vary dramatically.
The strongest return depicted in the example is 100 m² in the beam, and the weakest is slightly more than 1 m² in the 135°/225° positions. These RCS values can be very misleading because other factors may affect the results. For example, phase differences, polarization, surface imperfections, and material type all greatly affect the results. In the above typical bomber example, the measured RCS may be much greater than 1000 square meters in certain circumstances (90°, 270°).

If each of the range or power equations that have an RCS (σ) term is evaluated for the significance of decreasing RCS, Figure 6 results. Therefore, an RCS reduction can increase aircraft survivability. The equations used in Figure 6 are as follows:

Range (radar detection): From the 2-way range equation in Section 4-4, detection range varies as σ^(1/4).

Range (radar burn-through): The crossover equation in Section 4-8 gives a burn-through range that varies as σ^(1/2).

Power (jammer): Equating the received signal return (Pr) in the two-way range equation to the received jammer signal (Pj) in the one-way range equation, the following relationship results: the required jammer power Pj varies directly with σ. Note: jammer transmission line loss is combined with the jammer antenna gain to obtain G.

Figure 6. Reduction of RCS Affects Radar Detection, Burn-through, and Jammer Power

Example of Effects of RCS Reduction - As shown in Figure 6, if the RCS of an aircraft is reduced to 0.75 (75%) of its original value, then (1) the jammer power required to achieve the same effectiveness would be 0.75 (75%) of the original value (or -1.25 dB). Likewise, (2) if jammer power is held constant, then burn-through range is 0.87 (87%) of its original value (-1.25 dB), and (3) the detection range of the radar for the smaller RCS target (jamming not considered) is 0.93 (93%) of its original value (-1.25 dB).

OPTICAL / MIE / RAYLEIGH REGIONS

Figure 7 shows the different regions applicable for computing the RCS of a sphere. The optical region ("far field" counterpart) rules apply when 2πr/λ > 10.
In this region, the RCS of a sphere is independent of frequency. Here, the RCS of a sphere is σ = πr². The RCS equation breaks down primarily due to creeping waves in the area where λ ≈ 2πr. This area is known as the Mie or resonance region. If we were using a 6" diameter sphere, this frequency would be 0.6 GHz. (Any frequency ten times higher, or above 6 GHz, would give expected results.) The largest positive perturbation (point A) occurs at exactly 0.6 GHz, where the RCS would be 4 times higher than the RCS computed using the optical region formula. Just slightly above 0.6 GHz a minimum occurs (point B) and the actual RCS would be 0.26 times the value calculated by using the optical region formula. If we used a one meter diameter sphere, the perturbations would occur at 95 MHz, so any frequency above 950 MHz (~1 GHz) would give predicted results. The initial RCS assumptions presume that we are operating in the optical region (λ << Range and λ << radius). There is a region where specular reflected (mirrored) waves combine with back scattered creeping waves both constructively and destructively, as shown in Figure 8. Creeping waves are tangential to a smooth surface and follow the "shadow" region of the body. They occur when the circumference of the sphere ≈ λ and typically add about 1 m² to the RCS at certain frequencies.

Figure 7. Radar Cross Section of a Sphere
Figure 8. Addition of Specular and Creeping Waves

This HTML version may be printed but not reproduced on websites.
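The worked numbers in this section can be checked directly. The scalings used below, required jammer power proportional to σ, burn-through range to σ^(1/2), and detection range to σ^(1/4), are the standard radar-range-equation dependences and match the Figure 6 example above.

```python
import math

c, inch = 3.0e8, 0.0254  # speed of light (m/s), inch-to-meter conversion

# 1) Projected area pi*r^2 of the 6", 14" and 22" towed calibration
#    spheres, against the quoted 0.018, 0.099 and 0.245 m^2 references:
areas = [math.pi * (d * inch / 2) ** 2 for d in (6, 14, 22)]
print([round(a, 3) for a in areas])  # [0.018, 0.099, 0.245]

# 2) dBsm = 10*log10(RCS in m^2):
print(10 * math.log10(1.0), round(10 * math.log10(100.0), 1))  # 0.0 20.0

# 3) Figure 6 example, RCS reduced to 75% of its original value:
s = 0.75
print(round(s, 2), round(s ** 0.5, 2), round(s ** 0.25, 2))  # 0.75 0.87 0.93
print(round(10 * math.log10(s), 2))                          # -1.25 dB

# 4) Creeping-wave resonance where the wavelength equals the sphere's
#    circumference, f = c / (pi * d):
print(round(c / (math.pi * 6 * inch) / 1e9, 2))  # 6" sphere: ~0.63 GHz
print(round(c / (math.pi * 1.0) / 1e6, 1))       # 1 m sphere: ~95.5 MHz
```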
SPSSX-L archives -- October 2002 (#143)
LISTSERV at the University of Georgia

Date: Sun, 13 Oct 2002 22:36:36 +1000
Reply-To: Bob Green <bgreen@dyson.brisnet.org.au>
Sender: "SPSSX(r) Discussion" <SPSSX-L@LISTSERV.UGA.EDU>
From: Bob Green <bgreen@dyson.brisnet.org.au>
Subject: SPSS logistic regression terminology compared to Hosmer & Lemeshow
Content-Type: text/plain; charset="us-ascii"; format=flowed

I am hoping to obtain clarification regarding the SPSS logistic regression output. I couldn't obtain this information from the SPSS Regression Models manual.

My questions are:

1) SPSS calculates a Hosmer & Lemeshow Test. The text by Hosmer & Lemeshow (2001) refers to a goodness of fit measure C (hat) and a G statistic - is it the case that it is the G statistic and not the C (hat) that is reported in the SPSS output? SPSS refers to the statistic as a goodness of fit measure.

2) Are the Omnibus tests of Model Coefficients also likelihood ratio tests?

3) Hosmer & Lemeshow refer to a log likelihood and a likelihood ratio test. SPSS calculates the -2 Log likelihood for the model. Is this the statistic D described by Hosmer & Lemeshow?

Any assistance is appreciated,

Bob Green
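For reference, the quantity Hosmer & Lemeshow call G is the difference of two -2 log-likelihood (deviance) values, which is what a likelihood-ratio test is built from. A sketch with hypothetical -2LL figures follows; the single degree of freedom is also just for illustration.

```python
import math

# Likelihood-ratio (G) statistic: difference of two -2 log-likelihoods.
# The -2LL values below are hypothetical.
neg2ll_null, neg2ll_model = 137.19, 128.45
G = neg2ll_null - neg2ll_model

# For 1 degree of freedom, P(chi2 > G) = erfc(sqrt(G/2)).
p_value = math.erfc(math.sqrt(G / 2))
print(round(G, 2), round(p_value, 4))  # G = 8.74
```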
Kepler's conjecture

From Kepler's pamphlet on snowflakes

No packing of spheres of the same radius in three dimensions has a density greater than the face-centered (hexagonal) cubic packing.

This claim was first published by Johannes Kepler in his monograph "The Six-Cornered Snowflake" (1611) – a treatise inspired by his correspondence with Thomas Harriot (see Cannonball Problem). In his slender essay, Kepler asserted that face-centered cubic packing – the kind greengrocers use to stack oranges – is "the tightest possible, so that in no other arrangement could more pellets be stuffed into the same container." The question of whether Kepler's conjecture is right or not has become known, not surprisingly, as Kepler's problem.

In the 19th century, Carl Gauss proved that face-centered cubic packing is the densest arrangement in which the centers of the spheres form a regular lattice, but he left open the question of whether an irregular stacking of spheres might be still denser. In 1953, László Fejes Tóth reduced the Kepler conjecture to an enormous calculation that involved specific cases, and later suggested that computers might be helpful for solving the problem. This was the approach taken by Thomas Hales, a mathematician at the University of Michigan at Ann Arbor, and it led him, in 1998, to claim that he had proved Kepler was right all along. Hales' proof of Kepler's conjecture remains controversial simply because of the length of the computer calculations involved and the difficulty of verifying them. The casebook on this mystery remains open.
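The density of the face-centered cubic packing at stake in the conjecture is π/(3√2), about 74% of space filled, and is easy to check:

```python
import math

# Fraction of space filled by face-centered cubic sphere packing.
fcc_density = math.pi / (3 * math.sqrt(2))
print(round(fcc_density, 4))  # 0.7405
```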
Characteristic function (probability theory)

In probability theory, the characteristic function of any random variable completely defines its probability distribution. On the line it is given by the following formula, where $X$ is any random variable with the distribution in question:

$$\varphi_X(t) = \operatorname{E}\left(e^{itX}\right),$$

where $t$ is a real number, $i$ is the imaginary unit, and $\operatorname{E}$ denotes the expected value. If $F_X$ is the cumulative distribution function, then the characteristic function is given by the Riemann-Stieltjes integral

$$\operatorname{E}\left(e^{itX}\right) = \int_{-\infty}^{\infty} e^{itx}\, dF_X(x).$$

In cases in which there is a probability density function $f_X$, this becomes

$$\operatorname{E}\left(e^{itX}\right) = \int_{-\infty}^{\infty} e^{itx} f_X(x)\, dx.$$

If $X$ is a vector-valued random variable, one takes the argument $t$ to be a vector and $tX$ to be a dot product.

Every probability distribution on $\mathbf{R}$ or on $\mathbf{R}^n$ has a characteristic function, because one is integrating a bounded function over a space whose measure is finite, and for every characteristic function there is exactly one probability distribution.

The characteristic function of a symmetric PDF (that is, one with $p(x) = p(-x)$) is real, because the imaginary components obtained from $x > 0$ cancel those from $x < 0$.
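The symmetry remark can be checked exactly for a small symmetric discrete distribution; the distribution below is just an illustrative choice.

```python
import cmath
import math

# X takes values -1, 0, 1 with probabilities 1/4, 1/2, 1/4 (symmetric).
support = {-1: 0.25, 0: 0.5, 1: 0.25}

def phi(t):
    # E[e^{itX}] computed by direct summation over the support.
    return sum(p * cmath.exp(1j * t * x) for x, p in support.items())

t = 0.7
value = phi(t)
# The imaginary parts from x = -1 and x = 1 cancel; the real part
# equals 1/2 + cos(t)/2.
print(value.real, value.imag)
```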
Lévy continuity theorem

The core of the Lévy continuity theorem states that a sequence of random variables $(X_n)_{n=1}^{\infty}$, where each $X_n$ has a characteristic function $\varphi_n$, converges in distribution towards a random variable $X$,

$X_n \xrightarrow{\mathcal{D}} X \quad \text{as} \quad n \to \infty,$

if and only if

$\varphi_n \xrightarrow{\text{pointwise}} \varphi \quad \text{as} \quad n \to \infty$

for some function $\varphi(t)$ that is continuous at $t = 0$; in that case $\varphi$ is the characteristic function of $X$.

The Lévy continuity theorem can be used to prove the weak law of large numbers; see the proof using convergence of characteristic functions.

The inversion theorem

More than that, there is a bijection between cumulative distribution functions and characteristic functions: two distinct probability distributions never share the same characteristic function. Given a characteristic function $\varphi$, it is possible to reconstruct the corresponding cumulative distribution function $F$:

$F_X(y) - F_X(x) = \lim_{\tau \to +\infty} \frac{1}{2\pi} \int_{-\tau}^{+\tau} \frac{e^{-itx} - e^{-ity}}{it}\, \varphi_X(t)\, dt.$

In general this is an improper integral; the function being integrated may be only conditionally integrable rather than Lebesgue integrable, i.e. the integral of its absolute value may be infinite. (Reference: P. Lévy, Calcul des probabilités, Gauthier-Villars, Paris, 1925, p. 166.)
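The inversion formula can be tried out numerically. The sketch below (our own illustration, not part of the original entry) truncates the integral at a finite $\tau$ and evaluates it with a midpoint rule; for the standard normal it recovers $F(1) - F(-1) = \operatorname{erf}(1/\sqrt{2}) \approx 0.6827$:

```python
import cmath
import math

def cdf_difference(phi, x, y, tau=12.0, n=6000):
    """Midpoint-rule approximation of the inversion formula
    F(y) - F(x) = (1/2pi) * integral_{-tau}^{tau}
                  (e^{-itx} - e^{-ity}) / (it) * phi(t) dt.
    The midpoint grid never lands exactly on t = 0, where the
    integrand's limit is (y - x) * phi(0)."""
    h = 2 * tau / n
    total = 0j
    for k in range(n):
        t = -tau + (k + 0.5) * h
        total += (cmath.exp(-1j * t * x) - cmath.exp(-1j * t * y)) / (1j * t) * phi(t)
    return (total * h / (2 * math.pi)).real

def phi_normal(t):
    return math.exp(-t * t / 2)  # characteristic function of the standard normal

# P(-1 < X <= 1) for a standard normal is erf(1/sqrt(2))
assert abs(cdf_difference(phi_normal, -1.0, 1.0) - math.erf(1.0 / math.sqrt(2.0))) < 1e-4
```

The cutoff $\tau = 12$ and grid size are arbitrary choices; they work here because $\varphi(t) = e^{-t^2/2}$ decays very quickly.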
Bochner-Khinchin theorem

An arbitrary function $\varphi$ is a characteristic function corresponding to some probability law $\mu$ if and only if the following three conditions are satisfied:

(1) $\varphi$ is continuous;
(2) $\varphi(0) = 1$;
(3) $\varphi$ is a positive definite function (note that this is a complicated condition which is not equivalent to $\varphi > 0$).

Uses of characteristic functions

Because of the continuity theorem, characteristic functions are used in the most frequently seen proof of the central limit theorem. The main trick involved in making calculations with a characteristic function is recognizing the function as the characteristic function of a particular distribution.

Basic properties

Characteristic functions are particularly useful for dealing with functions of random variables. For example, if $X_1, \ldots, X_n$ is a sequence of independent (and not necessarily identically distributed) random variables, and

$S_n = \sum_{i=1}^{n} a_i X_i,$

where the $a_i$ are constants, then the characteristic function for $S_n$ is given by

$\varphi_{S_n}(t) = \varphi_{X_1}(a_1 t)\, \varphi_{X_2}(a_2 t) \cdots \varphi_{X_n}(a_n t).$

In particular, $\varphi_{X+Y}(t) = \varphi_X(t)\varphi_Y(t)$. To see this, write out the definition of the characteristic function:

$\varphi_{X+Y}(t) = \operatorname{E}\left(e^{it(X+Y)}\right) = \operatorname{E}\left(e^{itX} e^{itY}\right) = \operatorname{E}\left(e^{itX}\right) \operatorname{E}\left(e^{itY}\right) = \varphi_X(t)\, \varphi_Y(t).$

Observe that the independence of $X$ and $Y$ is required to establish the equality of the third and fourth expressions.

Another special case of interest is when $a_i = 1/n$, so that $S_n$ is the sample mean. In this case, writing $\overline{X}$ for the mean,

$\varphi_{\overline{X}}(t) = \prod_{i=1}^{n} \varphi_{X_i}(t/n).$

Characteristic functions can also be used to find moments of a random variable.
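The product property $\varphi_{X+Y} = \varphi_X \varphi_Y$ can be verified exactly for discrete distributions, where both sides are finite sums. A small self-contained check (our own example, not from the original entry), using two independent fair dice:

```python
import cmath

def cf_discrete(pmf, t):
    """Characteristic function E[e^{itX}] of a discrete distribution
    given as a {value: probability} mapping."""
    return sum(p * cmath.exp(1j * t * x) for x, p in pmf.items())

def convolve(pmf_x, pmf_y):
    """Distribution of X + Y for independent discrete X and Y."""
    out = {}
    for x, px in pmf_x.items():
        for y, py in pmf_y.items():
            out[x + y] = out.get(x + y, 0.0) + px * py
    return out

die = {k: 1 / 6 for k in range(1, 7)}   # a fair six-sided die
two_dice = convolve(die, die)           # exact distribution of the sum

for t in (0.3, 1.0, 2.7):
    lhs = cf_discrete(two_dice, t)      # phi_{X+Y}(t) computed directly
    rhs = cf_discrete(die, t) ** 2      # phi_X(t) * phi_Y(t)
    assert abs(lhs - rhs) < 1e-12
```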
Provided that the $n$-th moment exists, the characteristic function can be differentiated $n$ times and

$\operatorname{E}\left(X^n\right) = i^{-n}\, \varphi_X^{(n)}(0) = i^{-n} \left[\frac{d^n}{dt^n} \varphi_X(t)\right]_{t=0}.$

For example, suppose $X$ has a standard Cauchy distribution. Then $\varphi_X(t) = e^{-|t|}$. This is not differentiable at $t = 0$, showing that the Cauchy distribution has no expectation. Also, the sample mean $\overline{X}$ of $n$ independent observations has characteristic function $\varphi_{\overline{X}}(t) = \left(e^{-|t|/n}\right)^n = e^{-|t|}$, using the result from the previous section. This is the characteristic function of the standard Cauchy distribution: thus, the sample mean has the same distribution as the population itself.

The logarithm of a characteristic function is a cumulant generating function, which is useful for finding cumulants.

An example

The gamma distribution with scale parameter $\theta$ and shape parameter $k$ has the characteristic function

$(1 - \theta i t)^{-k}.$

Now suppose that we have $X \sim \Gamma(k_1, \theta)$ and $Y \sim \Gamma(k_2, \theta)$, independent of each other, and we wish to know what the distribution of $X + Y$ is.
The characteristic functions are

$\varphi_X(t) = (1 - \theta i t)^{-k_1}, \qquad \varphi_Y(t) = (1 - \theta i t)^{-k_2},$

which by independence and the basic properties of characteristic functions leads to

$\varphi_{X+Y}(t) = \varphi_X(t)\, \varphi_Y(t) = (1 - \theta i t)^{-k_1} (1 - \theta i t)^{-k_2} = (1 - \theta i t)^{-(k_1 + k_2)}.$

This is the characteristic function of the gamma distribution with scale parameter $\theta$ and shape parameter $k_1 + k_2$, and we therefore conclude

$X + Y \sim \Gamma(k_1 + k_2, \theta).$

The result can be expanded to $n$ independent gamma-distributed random variables with the same scale parameter, and we get

$\forall i \in \{1, \ldots, n\} : X_i \sim \Gamma(k_i, \theta) \qquad \Rightarrow \qquad \sum_{i=1}^{n} X_i \sim \Gamma\left(\sum_{i=1}^{n} k_i, \theta\right).$

Multivariate characteristic functions

If $X$ is a multivariate random variable, then its characteristic function is defined as

$\varphi_X(t) = \operatorname{E}\left(e^{i t \cdot X}\right).$

Here, the dot signifies the vector dot product ($t$ is in the dual space of $X$). If $X \sim N(0, \Sigma)$ is a multivariate Gaussian with zero mean, then

$\varphi_X(t) = \operatorname{E}\left(e^{i t \cdot X}\right) = \int_{x \in \mathbb{R}^n} \frac{1}{\left|2\pi\Sigma\right|^{1/2}}\, e^{-\frac{1}{2} x^T \Sigma^{-1} x}\, e^{i t \cdot x}\, dx = e^{-\frac{1}{2} t^T \Sigma t}.$

Matrix-valued random variables

If $X$ is a matrix-valued random variable, then the characteristic function is

$\varphi_X(T) = \operatorname{E}\left(e^{i\, \mathrm{Tr}(XT)}\right).$

Here $\mathrm{Tr}(\cdot)$ is the trace function and matrix multiplication (of $T$ and $X$) is used. Note that the order of the multiplication is immaterial: in general $XT \neq TX$, but $\mathrm{Tr}(XT) = \mathrm{Tr}(TX)$.

Examples of matrix-valued random variables include the Wishart distribution.

Related concepts

Related concepts include the moment-generating function and the probability-generating function.
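As a quick numerical check of the gamma convolution identity derived above (a sketch, not part of the original entry; the parameter values are arbitrary), the product of the two characteristic functions can be compared against the single characteristic function with the summed shape parameter:

```python
def cf_gamma(t, k, theta):
    """Characteristic function (1 - theta*i*t)^(-k) of Gamma(k, theta)."""
    return (1 - theta * 1j * t) ** (-k)

theta, k1, k2 = 2.0, 1.5, 3.25  # arbitrary example parameters

for t in (-1.3, 0.4, 2.0):
    product = cf_gamma(t, k1, theta) * cf_gamma(t, k2, theta)
    combined = cf_gamma(t, k1 + k2, theta)
    assert abs(product - combined) < 1e-12
```

Python's complex power uses the principal branch, for which $z^{-k_1} z^{-k_2} = z^{-(k_1+k_2)}$ holds exactly, so only rounding error remains.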
The characteristic function exists for all probability distributions; this is not the case for the moment-generating function.

The characteristic function is closely related to the Fourier transform: the characteristic function of a probability density function $p(x)$ is the complex conjugate of the continuous Fourier transform of $p(x)$ (according to the usual convention):

$\varphi_X(t) = \langle e^{itX} \rangle = \int_{-\infty}^{\infty} e^{itx} p(x)\, dx = \overline{\left(\int_{-\infty}^{\infty} e^{-itx} p(x)\, dx\right)} = \overline{P(t)},$

where $P(t)$ denotes the continuous Fourier transform of the probability density function $p(x)$. Likewise, $p(x)$ may be recovered from $\varphi_X(t)$ through the inverse Fourier transform:

$p(x) = \frac{1}{2\pi} \int_{-\infty}^{\infty} e^{itx} P(t)\, dt = \frac{1}{2\pi} \int_{-\infty}^{\infty} e^{itx}\, \overline{\varphi_X(t)}\, dt.$

Indeed, even when the random variable does not have a density, the characteristic function may be seen as the Fourier transform of the measure corresponding to the random variable.

References
• Lukacs, E. (1970) Characteristic Functions. Griffin, London. 350 pp.
• Bisgaard, T. M., Sasvári, Z. (2000) Characteristic Functions and Moment Sequences. Nova Science.
CBSE Group Mathematical Olympiad (GMO)

Group Maths Olympiad, to be held on 2nd December, 2012

Mathematical Olympiads are competitions conducted by different organisations to identify, encourage, promote and nurture talent in mathematics at the state, regional, national and international levels.

After having been accorded the status of an independent group by the National Board of Higher Mathematics (NBHM) in 1997, the Central Board of Secondary Education, Delhi has been conducting the CBSE Group Mathematics Olympiad for students studying in its affiliated schools. The competition will be conducted on 2nd December, 2012 at different centres located in different parts of the country. The top 30 students from the CBSE group will then participate in the Indian National Mathematical Olympiad (INMO) conducted by the National Board of Higher Mathematics (NBHM). Selection at the national level leads to participation in the International Mathematical Olympiad.

The role of mathematics cannot be denied in this age of rapid technological advancements and innovations. The broad objectives of conducting the Mathematical Olympiad include:

• Identifying talent in mathematics
• Providing opportunities to young and talented students to solve challenging problems in mathematics
• Promoting excellence in mathematics
• Encouraging talented students to pursue careers in fields related to mathematics

1. Any student studying in class XII, XI, X or IX can participate in the competition. However, there will be a common question paper for all participants. Bright students from class VIII may also appear in RMO.
2. A school can sponsor up to five students for the competition.
3. The five students should be selected strictly on the basis of a screening test conducted by the school itself.

FEE TO BE PAID
Every sponsored candidate shall remit a sum of Rs. 55/- as entry fee for the examination.
The fee of the candidates should be forwarded by the principal of the sponsoring school to the venue principal (centre) in the form of a demand draft in favour of the principal of the venue school. The list of state coordinators (venue schools) appears at the end of this folder.

1. The Sixteenth CBSE Group Mathematical Olympiad 2012 will be held on Sunday, 2nd December, 2012 at the centre notified at the end of this folder.
2. The duration of the examination will be 3 hours (1:00 pm to 4:00 pm).

The fee along with the desired information about the sponsored students should reach the state coordinator by 10th October, 2012. In no case should the fee or documents be sent to the Board.

HOW TO APPLY?
1. The heads of the participating schools will forward the names of sponsored students to the state coordinator nominated by the Board.
2. The information about the sponsored candidates (including name, father's name, date of birth, school address, class, residential address, and total marks obtained in the class one lower than the one in which he/she is studying) should be sent to the state coordinator along with the fee before the last date.
3. The sponsored candidates should be bonafide students of the institution.
4. The admit card to be issued by the sponsoring school should have complete information about the student.
5. The admit card should also carry the signature of the student and his/her latest photograph duly attested by the principal.

1. The Group Mathematical Olympiad will be held at one or two venues in each state as listed in the folder.
2. Every candidate is required to appear at the venue in his/her state only. No change of venue for the examination will be admissible.
3. Every candidate has to bear the expenditure for his/her transport, board and lodging etc.
4. In the case of candidates studying in Gulf schools, the schools will have to make arrangements for candidates to appear at the identified centre.
5.
The state coordinator will inform all the participating schools about the roll numbers of the candidates, the exact venue, the date of the examination, the time of the examination and other details.

The question paper will be made available in English medium. The candidates are required to answer in English only.

There is no specific syllabus for the examination. Areas of school mathematics such as algebra, geometry, basic number theory, trigonometry, arithmetic, combinatorics etc. are to be prepared for the test. The question paper will consist of 6-7 questions of a non-routine nature.

1. The coordinator will act as the Centre Superintendent for the examination at his/her centre. He/she will appoint invigilators as per CBSE examination norms.
2. In addition to invigilators, the Centre Superintendent may appoint one clerk/peon for the conduct of the examination.

1. The question papers will be sent by CBSE to the state/regional coordinators directly in advance.
2. The coordinator shall keep the sealed papers in his/her safe custody till the commencement of the examination. The sealed packets containing the question papers should be opened only half an hour before the start of the examination.

The answer sheets of candidates who appear in CBSE GMO will be evaluated at CBSE Headquarters. CBSE will select 30 candidates strictly in order of merit. Only the selected candidates will be informed about the result and will be eligible to appear in the Indian National Mathematical Olympiad conducted by NBHM. The names of the selected candidates will be communicated to the respective schools as well as to NBHM for further necessary action.
FastTree 2 – Approximately Maximum-Likelihood Trees for Large Alignments

PLoS One. 2010; 5(3): e9490.
Art F. Y. Poon, Editor

Abstract

We recently described FastTree, a tool for inferring phylogenies for alignments with up to hundreds of thousands of sequences. Here, we describe improvements to FastTree that improve its accuracy without sacrificing scalability.

Methodology/Principal Findings

Where FastTree 1 used nearest-neighbor interchanges (NNIs) and the minimum-evolution criterion to improve the tree, FastTree 2 adds minimum-evolution subtree-pruning-regrafting (SPRs) and maximum-likelihood NNIs. FastTree 2 uses heuristics to restrict the search for better trees and estimates a rate of evolution for each site (the "CAT" approximation). Nevertheless, for both simulated and genuine alignments, FastTree 2 is slightly more accurate than a standard implementation of maximum-likelihood NNIs (PhyML 3 with default settings). Although FastTree 2 is not quite as accurate as methods that use maximum-likelihood SPRs, most of the splits that disagree are poorly supported, and for large alignments, FastTree 2 is 100–1,000 times faster. FastTree 2 inferred a topology and likelihood-based local support values for 237,882 distinct 16S ribosomal RNAs on a desktop computer in 22 hours and 5.8 gigabytes of memory.

Introduction

Inferring evolutionary relationships, or phylogenies, from families of related DNA or protein sequences is a central method in computational biology. Sequence-based phylogenies are widely used to understand the evolutionary relationships of organisms and to analyze the functions of genes. The largest gene families already contain tens to hundreds of thousands of representatives, and with the rapid improvements in DNA sequencing, we expect even larger data sets to arrive soon.
Large families can be aligned with profile-based methods that scale linearly with the number of sequences (http://hmmer.janelia.org/; [1]). However, most methods for inferring phylogenies from these alignments scale as O(

We recently described a scalable method for inferring phylogenies, FastTree 1.0 [2]. FastTree 1.0 is based on the "minimum-evolution" principle – it tries to find a topology that minimizes the amount of evolution, or the sum of the branch lengths. FastTree 1.0 uses a heuristic variant of neighbor joining [3], [4] to quickly find a starting tree and uses nearest-neighbor interchanges (NNIs) to refine the topology. (A nearest-neighbor interchange swaps a node and its neighbor; for example, it might change ((A,B),C,D) to ((A,C),B,D) or ((A,D),B,C).) FastTree implements these operations in O( http://www.microbesonline.org/fasttree/ChangeLog.) In comparison, computing all pairwise distances, which is required with most minimum-evolution approaches, requires O(N²) time for N sequences [2].

In the maximum-likelihood (ML) approach, evolution is explicitly modeled with a transition rate matrix, and the tree that best explains the data – the tree with the highest likelihood – is the best tree [5]. The ML criterion ranks the trees but does not specify how to find a good topology. Because ML phylogenetic inference is NP complete [6], no practical method can guarantee that it will find the optimal topology for a large alignment. The most scalable ML methods, such as PhyML and RAxML, begin with a starting tree produced by a faster method, and try to increase the likelihood by optimizing individual branch lengths and performing local rearrangements [7]–[9]. By re-optimizing only a few branch lengths at each move, the cost of considering or performing a move can be reduced to O([10] generally increases the computational requirements another 100-fold (although this can be reduced by reusing computations across replicates [11]).
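The nearest-neighbor interchanges described above can be enumerated mechanically: around an internal edge of an unrooted tree there are exactly two alternate topologies. A tiny illustrative sketch (the function and its Newick-style output are ours, not the paper's code):

```python
def nni_alternatives(a, b, c, d):
    """Given the quartet topology ((a,b),c,d), return the two
    nearest-neighbor interchanges around its internal edge:
    swap b with c, or swap b with d."""
    return [f"(({a},{c}),{b},{d});", f"(({a},{d}),{b},{c});"]

# Matches the example in the text: ((A,B),C,D) -> ((A,C),B,D) or ((A,D),B,C)
assert nni_alternatives("A", "B", "C", "D") == ["((A,C),B,D);", "((A,D),B,C);"]
```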
Here, we describe FastTree 2, a tool for inferring ML trees for large alignments. Besides constructing an initial tree with neighbor joining and improving it with minimum-evolution NNIs, FastTree 2 uses minimum-evolution subtree-pruning-regrafting (SPRs) [8], [12] and ML NNIs to further improve the tree. (In subtree-pruning-regrafting, a subtree is removed from the tree and reinserted elsewhere; e.g., pruning and regrafting C might change ((A,B),(C,D),E) to ((A,(B,C)),D,E).) FastTree 2 uses heuristics to reduce the search space and hence to maintain the scalability of both stages. Another justification for reducing the search space is that intensive tree search often finds small improvements in the tree's length or likelihood, but these changes may not be statistically or biologically significant (e.g., [13]).

Briefly, FastTree's key heuristics are:
• It uses "linear SPRs" to consider just
• It searches for SPR moves for every subtree just twice, instead of iterating until convergence.
• During the ML phase, it limits the ML NNIs to at most
• It limits the effort to optimize model parameters and branch lengths.
• It abandons optimization for NNI moves that seem, after partial optimization, to significantly lower the likelihood.
• It does not try to improve parts of the tree that did not improve in recent rounds.

To account for the variation in rates across sites, FastTree uses the "CAT" approximation [14] rather than the standard discrete gamma model with four rates [15]. Some sites evolve much more slowly than others, and the ideal way to account for this is to integrate the likelihood at each site over the (unknown) relative evolutionary rate of that site, using a prior distribution over the relative rates such as a gamma distribution. However, these integrals are analytically intractable and computationally prohibitive. The "CAT" approximation avoids them [14]: FastTree selects the most likely rate for each site from among 20 fixed possibilities.
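The per-site rate selection behind the "CAT" approximation can be sketched as follows. This is only an illustration of the idea described in the text (each site gets the candidate rate that maximizes that site's own likelihood); it is not FastTree's actual code, and the data layout is our own assumption:

```python
def assign_cat_rates(site_logliks):
    """CAT-style per-site rate assignment (illustration only):
    site_logliks[i][r] is the log-likelihood of site i computed under
    candidate rate category r; each site is assigned the category
    that maximizes its own likelihood."""
    return [max(range(len(rates)), key=rates.__getitem__)
            for rates in site_logliks]

# Two sites, three candidate rate categories:
# site 0 prefers category 2, site 1 prefers category 0.
lk = [[-10.0, -8.0, -5.0],
      [-3.0, -7.0, -9.0]]
assert assign_cat_rates(lk) == [2, 0]
```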
Because of the heuristics, FastTree 2 is not guaranteed to reach a locally optimal likelihood in tree space. However, at each step it does guarantee that the likelihood increases (under the CAT approximation). Thus, FastTree 2 is an approximately-maximum-likelihood method.

We will show that in practice, FastTree 2 is slightly more accurate than a standard implementation of maximum-likelihood NNIs, PhyML 3 with default settings [16], [17]. Specifically, in simulations, FastTree 2 recovers a higher proportion of true splits, and on genuine alignments, FastTree 2's topologies tend to have higher likelihoods. FastTree's minimum-evolution SPR moves give it a better starting tree than PhyML's starting tree, which is obtained with BIONJ (a weighted variant of neighbor joining [18]). This more than makes up for FastTree's heuristics, which reduce the intensity of search for ML NNIs but have little effect on accuracy. We also confirm that using the CAT approximation instead of the Γ model has little effect on the branch lengths and support values.

Although FastTree 2 is significantly less accurate than ML methods that use SPR moves, such as PhyML with slower settings or RAxML, most of the splits that disagree are poorly supported, and FastTree is much faster. FastTree 2 can analyze alignments with tens or hundreds of thousands of sequences in under a day on a desktop computer. For alignments with 500 sequences or more, FastTree 2 is at least 100 times faster than either PhyML 3.0 or RAxML 7.2.1. FastTree 2 is faster than RAxML 7 mostly because of less intensive ML search (NNIs instead of SPRs) and because RAxML 7 optimizes branch lengths under the Γ model.

Because of its speed, FastTree 2 is suitable for bootstrapping. However, to provide a quicker estimate of the tree's reliability, FastTree 2 provides local support values based on the Shimodaira-Hasegawa (SH) test [16], [17], [19].

FastTree 2 should be useful for reconstructing the tree of life and for analyzing the millions of uncharacterized proteins that are being identified by genome sequencing.
FastTree 2 should be useful for reconstructing the tree of life and for analyzing the millions of uncharacterized proteins that are being identified by genome sequencing. We compared FastTree's speed and accuracy to those of PhyML 3.0 and RAxML 7, the most popular maximum-likelihood methods. To measure the quality of the resulting trees, we measured the topological accuracy on simulated alignments and the likelihood on genuine biological alignments. Topological Accuracy in Simulations We tested FastTree on simulated protein alignments with 250 to 5,000 sequences [2]. These simulations were derived from diverse gene families that arise in genome-scale studies (“Collections of Orthologous Groups” or COGs, [20]). The simulations include varying evolutionary rates across sites and include realistic placement of gaps. The simulations are available from the FastTree web site ( We defined the topological accuracy as the proportion of the splits in the true trees that are recovered by each method. This is the converse of the topological (“Robinson-Foulds”) distance, scaled to range from 0 to 1. As shown in Table 1, FastTree 2 was slightly more accurate than PhyML 3 with default settings (NNI search), and much more accurate than minimum-evolution or parsimony methods, but not as accurate as ML methods that use SPR moves. The differences in accuracy between FastTree 2 and the other methods were statistically significant (all Topological accuracy of trees inferred from simulated alignments. To test the practical significance of the additional true splits that are found by using ML SPR moves, we examined the local support values reported by PhyML 3. We defined “strongly supported” as having both SH-like local supports and approximate likelihood ratio test (aLRT) supports [21] of 95% or higher. Only 16% of the true splits that are found by PhyML 3 with SPR moves but missed by FastTree 2 were strongly supported. The full distribution of support values is shown in Figure 1. 
Conversely, among the strongly supported splits that were found by PhyML 3 with SPRs but not FastTree, 20% were incorrect. Thus, few of the additional true splits have high support, and of the splits that disagree, even the ones that have high support have a significant probability of being incorrect.

Figure 1. Local support values for splits found by PhyML with SPR moves and/or FastTree.

To understand why FastTree 2 was outperforming PhyML 3 with NNI search, we ran PhyML 3 with FastTree's minimum-evolution tree as its starting tree. For the protein simulations with 250 sequences, this improved PhyML's accuracy to 86.8%, which is statistically indistinguishable from FastTree's accuracy of 86.9% (Table 1).

CAT-Based Branch Lengths and Local Support Values

Because FastTree 2 does not exhaustively optimize the likelihood, and because it reports branch lengths and local support values that were estimated using the CAT approximation, we compared its branch lengths and local support values to estimates made under the standard Γ model.

If accurate branch lengths are essential, however, then neither the CAT approximation nor the standard Γ model may suffice [15], [22]. For alignments of 16S ribosomal RNAs, (Figure S1). As explained in Figure S1, correcting by the average posterior rate reduces this problem, and FastTree can compute a fast but accurate approximation to Γ-based branch lengths.

The local SH-like support values also showed a good correlation between FastTree and PhyML [23].

Effectiveness of Heuristics

We then examined how the topological accuracy of FastTree 2 is affected by its heuristics. As shown in Table 1, the minimum-evolution phase of FastTree, which uses linear SPRs, is not as accurate as FastME 2, a minimum-evolution method that performs exhaustive SPR moves [8], [12]. FastME computes distances between internal nodes differently from the minimum-evolution phase of FastTree: FastME uses averages of distances between sequences, while FastTree uses distances between profiles, which are averages of sequences.
Nevertheless, FastTree 1 with only NNI moves gave very similar results to FastME with only NNI moves [2]. Thus, we attribute the modest difference in accuracy of the minimum-evolution methods with SPRs to FastTree's heuristics. To eliminate this effect, we ran FastTree with the FastME starting tree. To eliminate the effect of FastTree's ML heuristics, we ran it with exhaustive ML NNIs, and with more exhaustive optimization of branch lengths within each NNI (4 rounds of optimizing branch lengths for each quartet, instead of 1–2 rounds). In combination, FastTree 2 with FastME+SPR starting trees and exhaustive NNIs improved the accuracy on simulated alignments with 5,000 protein sequences from 84.3% to 85.0%. This modest effect illustrates that all of FastTree's heuristics have little effect on accuracy, and that removing them would improve the topology little relative to adding ML SPRs (e.g., RAxML 7.2.1 was 88.4% accurate).

We also tested FastTree on simulations with over 78,000 nucleotide sequences. These simulations are derived from a 16S ribosomal RNA alignment (see Methods). The large size of these simulated alignments makes them a stringent test of FastTree's heuristics. In these simulations, FastTree gave much more accurate topologies than exact neighbor joining or Clearcut [24], a faster heuristic variant of neighbor joining (Table 1). (To analyze such large alignments with exact neighbor joining, we used NINJA [25].) To verify that the heuristics in FastTree's neighbor joining phase do not reduce accuracy, we also ran FastTree with the exact neighbor-joining tree as its starting tree, before doing minimum-evolution NNIs and SPRs and ML NNIs. This gave the same accuracy as the regular FastTree or as FastTree with the fastest settings of its heuristics for the neighbor joining phase (-fastest). All three variants found 92.10% of splits correctly.
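The topological accuracy used throughout these comparisons is the fraction of the true tree's splits recovered by the inferred tree. A schematic computation (our own sketch, not the paper's code), assuming each split has already been normalized to a comparable frozenset of leaf names:

```python
def split_recovery(true_splits, inferred_splits):
    """Fraction of the true tree's internal splits present in the
    inferred tree: the converse of the normalized Robinson-Foulds
    distance. Splits are frozensets of leaf names, normalized so the
    two trees' representations are directly comparable."""
    if not true_splits:
        return 1.0
    return len(true_splits & inferred_splits) / len(true_splits)

true_s = {frozenset("AB"), frozenset("ABC")}
inferred = {frozenset("AB"), frozenset("ABD")}
assert split_recovery(true_s, inferred) == 0.5  # one of two true splits found
```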
It may seem surprising that FastTree can reach accurate topologies when it does not compare all pairs of sequences to each other. However, minimum-evolution NNIs and SPRs are "consistent" – they find correct trees, even if the distances contain some errors, as long as the errors are much smaller than the internal branch lengths [26], [27]. In practice, the errors are often larger than the internal branch lengths, but this still probably explains why NNIs and SPRs suffice to find most of the splits correctly.

Quality of Topologies for Biological Alignments

To confirm that FastTree finds good topologies for genuine alignments, and not just in simulations, we tested it on 16S ribosomal RNAs and on protein families from COG. Although these families are quite large (up to 300,000 or 19,000 members, respectively), we first tested random subsets of just 500 sequences, so that we could run PhyML 3 with (Table 2).

Table 2. Average log-likelihood for genuine alignments with 500 sequences.

We then tested FastTree and RAxML on larger alignments of 16S rRNAs and COGs. For alignments with thousands of sequences, RAxML 7.0.4 is a bit slow, so we used RAxML 7.2.1, which introduced a fast convergence option as well as other optimizations. With fast convergence, RAxML terminates the search if less than 1% of splits change during a round of SPR moves. As shown in Table 1, for 5,000 proteins, RAxML with fast convergence is nevertheless quite accurate.

On the larger alignments, RAxML 7.2.1's likelihoods were much higher than FastTree's, and all of the differences in likelihood were statistically significant (all [28]). However, FastTree did find most of the splits in the RAxML topology that had strong support (Table 3). For example, FastTree found 96–98% of RAxML's splits that had global bootstrap of 90% or higher.

Table 3. Comparison of RAxML and FastTree's log likelihoods, and the agreement of FastTree with RAxML's well-supported splits, for large genuine alignments.
Running Time and Memory Required

Finally, we compared the computational performance of FastTree, RAxML, and PhyML on genuine alignments. As shown in Table 4, for alignments with 500 sequences, FastTree is about 100 times faster than RAxML 7.0.4 when using the same model of evolution, and even faster relative to PhyML 3. For alignments with thousands of sequences, FastTree was still 100–800 times faster than RAxML 7.2.1 with fast convergence of SPRs, while PhyML 3 did not complete in a reasonable amount of time.

Table 4. Running time and memory usage on genuine alignments.

For one of the largest alignments existing today, containing 237,882 16S ribosomal RNAs, FastTree took less than a day and 5.8 GB of memory on a desktop computer. For comparison, given that RAxML took over 2 days for just 15,011 sequences, and optimistically assuming O(

All of the FastTree times include the computation of local SH-like support values, while the other tools were run without support values. The local support values do not affect FastTree's running time much. For example, across seven COG alignments with 2,500 protein sequences each, the average time for FastTree to infer a tree is 345 seconds, and the average time for it to compute SH-like supports is 51 seconds. For the full alignment of 237,882 16S rRNAs, the supports required just one hour.

Much of the time in RAxML 7.2.1 is spent optimizing the branch lengths under the Γ model (Table 4). For example, for 15,011 16S rRNAs, if the

Improvement of Likelihood Over Time

To compare the search strategies of FastTree and RAxML more directly, we compared their improvement in likelihoods over time for a nucleotide alignment of 4,114 16S rRNAs [11] and for seven protein alignments of COG families with 2,500 members. We ran both methods with the CAT approximation and with either the generalized time-reversible (GTR) model of nucleotide substitution or the JTT model of amino acid substitution.
We computed likelihoods for intermediate and final trees with RAxML, with re-optimized branch lengths. Figure 2 shows the running time and log likelihood for FastTree's minimum-evolution and final tree, for RAxML's initial parsimony tree and successive rounds of SPR moves, and also for RAxML with FastTree's minimum-evolution tree as its starting tree. These times do not include FastTree's support values or RAxML optimizing branch lengths under the Γ model.

Figure 2. Likelihoods over time for genuine alignments.

Given the same starting tree, FastTree's ML phase improved the likelihood by roughly the same amount as one round of RAxML's SPR moves, and in about 40% of the time (Figure 2). FastTree's ML phase also performs about as well as one round of RAxML's SPR moves in finding well-supported splits (Figure S2). We obtained similar results for other large 16S alignments (Table S1). Although this comparison shows that FastTree is initially faster than RAxML, RAxML's first round of SPR moves is only a fraction of its run time. Most of the difference in speed between FastTree and RAxML is because of RAxML's more thorough search for a better topology and because of its branch-length optimization under the Γ model.

Starting Trees: Minimum-Evolution versus Maximum Parsimony

RAxML's parsimony phase was 4–17 times slower than FastTree's minimum-evolution phase, and generally slower than FastTree with ML NNIs. FastTree's speed advantage grows with larger alignments (data not shown), which is expected because FastTree scales better with the number of sequences; faster parsimony implementations exist [29], but these still scale less favorably.

As measured by likelihood, FastTree's minimum-evolution starting trees were much better than RAxML's parsimony starting trees for the COG alignments, but much worse for large 16S rRNA alignments (Figure 2 and Table S1).
The differences in likelihood reflect the optimization criterion, and not merely differences in the search strategy: for the COG alignments, the RAxML parsimony starting trees were more parsimonious than FastTree's minimum-evolution trees (average parsimony scores of 281,237 and 283,125, respectively). Conversely, for the 16S alignment with 4,114 sequences, FastTree's minimum-evolution tree was shorter than the parsimony tree (lengths of 43.0 and 44.6, respectively). For this alignment, the minimum-evolution tree's log likelihood was 2,705 worse than parsimony's, yet minimum evolution found more of the strongly-supported splits in the final RAxML tree: minimum evolution found 826 of the 851 splits with strong global bootstrap support.

Discussion

We have shown that FastTree 2 computes accurate topologies in a reasonable amount of time for alignments with up to hundreds of thousands of sequences. FastTree is open source software and is available at http://microbesonline.org/fasttree. The C source code is extensively documented and contributions are welcome.

FastTree trees for every microbial gene family, including families with tens of thousands of members such as ABC transporters, are available at MicrobesOnline (http://microbesonline.org/), along with a "tree-browser" for examining these trees. These trees will be updated from FastTree 1 to FastTree 2 in the next release of MicrobesOnline.

Because DNA sequencing technology is improving rapidly, we expect to have alignments with millions of sequences soon. For these huge alignments, the most computationally demanding step will be the initial neighbor-joining phase of FastTree 2.0, which is described here; a divide-and-conquer method for building the starting tree, such as PartTree [30], would scale better. In our simulations, PartTree starting trees do not allow FastTree to reach the same accuracy as FastTree's neighbor-joining starting tree does (data not shown), but a divide-and-conquer approach might still suffice to obtain a partially resolved initial tree.
Such huge families also raise challenges for multiple sequence alignment. We have used profile alignment to avoid the challenges of multiple sequence alignment on large families. This works well for 16S RNAs because Infernal takes advantage of highly conserved secondary structure [1], but we are not sure that it gives accurate results for diverse protein families. In contrast, traditional progressive multiple sequence alignment methods are not scalable because the size of their output grows super-linearly with the number of sequences; fast statistical alignment [31] provides a more compact representation. Combining this representation with fast guide tree construction, it should be possible to build progressive multiple sequence alignments with millions of sequences.

Finally, it is not clear how to assess the quality or reliability of such large trees. Different methods gave very different topologies and large differences in likelihood, and yet few of the differences were well-supported by the bootstrap. In fact, a topology with relatively poor likelihood could have relatively good agreement with the best tree. This could indicate that higher-likelihood trees contain many improvements, but that few of the individual improvements are statistically significant. This is expected if there is limited phylogenetic signal. Alternatively, the bootstrap could be too conservative. Local support values do suggest a greater number of significant differences (Table 3), but local support values may be biased upwards because they do not consider all of the alternate topologies. Further study of these questions is needed.

Materials and Methods

Minimum-Evolution "Linear" Subtree-Pruning-Regrafting

To reduce the number of SPR moves considered to a number that grows only linearly with the size of the tree, FastTree evaluates each move incrementally. As suggested by Richard Desper and Olivier Gascuel, FastTree treats each potential SPR move as a sequence of NNIs.
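Because each potential SPR move is treated as a sequence of NNIs, with the total change in tree length being the sum of the per-NNI changes, evaluating all regraft points along a path reduces to a prefix-sum scan. The toy function below illustrates that bookkeeping; it is a sketch, not FastTree's implementation, and the per-NNI deltas are assumed to be given.

```python
def best_regraft(nni_deltas):
    """Given the change in total tree length for each successive NNI step
    as a pruned subtree slides along a path, return (steps, delta) for
    the regraft position with the smallest cumulative change in length.
    steps == 0 means the subtree is best left where it is (delta 0.0)."""
    best_steps, best_delta, running = 0, 0.0, 0.0
    for steps, delta in enumerate(nni_deltas, start=1):
        running += delta
        if running < best_delta:
            best_steps, best_delta = steps, running
    return best_steps, best_delta
```

Each candidate position is evaluated in constant time, so scanning a path of m edges costs O(m) rather than re-scoring the whole tree for every candidate move.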
The change in tree length for the SPR move is then just the sum of the changes due to the NNIs. The change in tree length for an NNI is estimated from the (log-corrected) distances between the four subtrees around the edge, following the minimum-evolution framework of [12], [32]. In FastME, this formula for the change in tree length is exactly correct because the changes in other branch lengths in the tree can be expressed as combinations of distances that cancel each other out [26]. In FastTree, however, the formula for the change in tree length is an approximation, because the log-corrected distances do not cancel in this way. Nevertheless, FastTree with NNIs and FastME with NNIs give very similar results [2], and computing the exact change in total tree length does not improve the accuracy of FastTree's SPRs (data not shown).

The Maximum-Likelihood Phase

The key data structures for the maximum likelihood phase are the tree topology, the branch lengths, and the posterior distributions for each internal node. (FastTree stores the tree with a trifurcation at the root, but the placement of the root is not biologically meaningful and does not affect the likelihood [5].) The posterior distribution for an internal node describes the state of the corresponding ancestor, given the branch lengths and the sequences beneath it. For example, for nucleotide data, it stores the probability that a given site was an A, C, G, or T. FastTree stores posterior distributions for each internal node.

The key primitive operations are (1) to compute the joint likelihood of two posterior distributions, given the branch length between them, and (2) to compute the posterior distribution of a parent node given the posterior distributions of its two children and their two branch lengths. These suffice to compute the likelihood of the tree [5]: for example, the likelihood of the tree (A,B,(C,D)) can be computed by forming the posterior distribution of the ancestor of C and D (operation 2) and then combining the result with the posteriors of A and B.

At the beginning of the ML phase, we have a minimum-evolution topology and branch lengths.
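The two primitive operations can be sketched under the Jukes-Cantor model with numpy. Posteriors here are (n_sites × 4) arrays normalized per site, as described in the text, though the exact internal scaling FastTree uses may differ; this is an illustration, not FastTree's code.

```python
import numpy as np

def jc_transition(t):
    """Jukes-Cantor transition matrix P(t) for a branch of length t
    (expected substitutions per site); each row sums to 1."""
    e = np.exp(-4.0 * t / 3.0)
    P = np.full((4, 4), 0.25 * (1.0 - e))
    np.fill_diagonal(P, 0.25 + 0.75 * e)
    return P

def joint_loglik(post1, post2, t):
    """Primitive (1): joint log-likelihood of two posterior
    distributions separated by a branch of length t."""
    per_site = np.einsum('si,ij,sj->s', post1, jc_transition(t), post2)
    return float(np.log(per_site).sum())

def parent_posterior(child1, child2, t1, t2):
    """Primitive (2): posterior distribution of a parent node from its
    two children's posteriors and the two branch lengths."""
    m = (child1 @ jc_transition(t1)) * (child2 @ jc_transition(t2))
    return m / m.sum(axis=1, keepdims=True)
```

Applying primitive (2) bottom-up and finishing with primitive (1) across the root edge yields the likelihood of a whole tree, as the (A,B,(C,D)) example above describes.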
The steps for the maximum-likelihood phase are:

• Compute an approximate posterior distribution for each node, using the weighted averages of its children. Although the initial posterior distributions are approximate, all future changes to the topology or to the branch lengths will update the posterior distributions to their exact values.
• Optimize all branch lengths for one round, using a simplified model with no parameters (without CAT, and with Jukes-Cantor instead of GTR if GTR was requested).
• Perform one round of ML NNIs, using the simplified model.
• If the GTR model is being used, optimize the nucleotide transition rate parameters, switch from Jukes-Cantor to the GTR model and recompute posterior distributions, and optimize all branch lengths for one round with the new model.
• If the CAT model is being used, estimate rate categories for each site, recompute posterior distributions, and optimize all branch lengths for one round with the new model.
• Perform additional rounds of ML NNIs, with subtree skipping and the star topology test.
• Perform a final round of ML NNIs without subtree skipping or the star topology test.
• Optimize all branch lengths for one round.
• Compute SH-like local support values.

A round of ML NNIs

During each round of NNIs, FastTree visits each node before it visits its parents (depth-first post-order traversal). At each node, it compares the likelihood of the current topology around that node to the likelihoods of the two alternate (NNI) topologies. To do this efficiently, FastTree maintains "up-posteriors," which summarize the part of the tree that does not lie beneath the current node N (see Figure 3). These up-posteriors can be thought of as a way to temporarily reroot the tree at the current location. In particular, the likelihood of the tree can be computed from the posteriors A, B, C, and D.

Figure 3. Traversing a tree with up-posteriors.

The up-posterior for a node can be computed from its parent's up-posterior and its sibling's posterior distribution.
FastTree only stores these up-posteriors for the path to the root from its current location in the tree, so the extra space required is proportional only to the height of the tree.

When it visits each node, for each of the three alternate topologies around the node, FastTree optimizes the branch lengths to maximize the likelihood. By default, FastTree optimizes the branch lengths within all three quartet topologies for one round. Any topology that is significantly (5 log-likelihood units) worse than the current topology is abandoned after the first round. If more than one topology remains, then the remaining topologies are optimized for another round. After the rounds of optimization are complete, FastTree updates the topology if necessary. In either case, it updates the branch lengths to the re-optimized values and recomputes the posterior distribution for the node.

A difference of 5 in log likelihood may seem like a small difference, so that the heuristic might miss a good change to the topology. However, optimization of branch lengths after the first round usually leads to only small improvements in the log likelihood. For example, if we analyze 40 randomly selected 16S rRNAs with FastTree and the GTR+CAT model, and we increase the rounds of branch length optimization to 4 (-mlacc 4), then the average improvement for any NNI is just 1.1 log-likelihood units in the second round of branch length optimization and just 0.04 in rounds 3 and 4 combined. To put these numbers in perspective, differences in log-likelihood of less than 2 are not statistically significant.

Optimizing model parameters

After the first round of NNIs, FastTree optimizes any parameters in the model. First, if the GTR model is being used, there are six relative rates to optimize, one for each nucleotide conversion. (The stationary distribution for the transition matrix is set to the empirical frequency of the four nucleotides.)
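The early-abandonment rule for candidate quartet topologies can be sketched as follows. Each candidate is represented by a callable that performs one more round of branch-length optimization and returns that candidate's current log-likelihood; index 0 stands for the current topology. This is an illustrative skeleton under those assumptions, not FastTree's code.

```python
def pick_topology(optimize_round, margin=5.0, max_rounds=2):
    """optimize_round: list of callables, one per candidate topology;
    each call runs another round of branch-length optimization for that
    candidate and returns its current log-likelihood. Candidates that
    fall more than `margin` log-likelihood units behind the current
    topology (index 0) after a round are abandoned."""
    alive = list(range(len(optimize_round)))
    score = {}
    for _ in range(max_rounds):
        for i in alive:
            score[i] = optimize_round[i]()
        alive = [i for i in alive if i == 0 or score[i] >= score[0] - margin]
        if len(alive) == 1:
            break
    return max(alive, key=lambda i: score[i])
```

Abandoning clearly inferior candidates after one round is a reasonable shortcut because, as the measurements above show, later rounds of branch-length optimization change the log-likelihood very little.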
FastTree optimizes the likelihood of the tree (with fixed branch lengths and topology) by numerically optimizing each of the six parameters in the model in turn. With each change in the model, it recomputes all posterior distributions. It then optimizes the six parameters a second time. This does not fully optimize the model parameters, but it gives acceptable results (Table 2).

Second, unless the -nocat option is set, FastTree estimates the rate of evolution at each site. Given the desired number of categories of relative rates, FastTree assigns each site to a rate category using a Bayesian approach.

We confirmed that the Bayesian approach to setting the rate categories prevents overfitting on small alignments. For example, on simulated protein alignments with just 10 sequences (from [2]), adding the CAT model improves FastTree's accuracy from 76.2% to 78.0% (for comparison, see the corresponding PhyML results in [2]). Conversely, on nucleotide simulations with 24 sequences that (unrealistically) do not contain any rate variation across sites (the fast-evolving alignments of [12]), the CAT model only reduces accuracy slightly, from 93.6% to 93.4% (again, see [2] for the corresponding PhyML results).

Completing the ML NNIs

In later rounds of NNIs, FastTree uses the more accurate model, and it uses two additional heuristics, "subtree skipping" and the "star topology test," which are described below. As discussed in the Results, these heuristics have little effect on accuracy. If no NNI leads to an improvement of more than 0.1 in the likelihood of any quartet, then FastTree considers the NNIs to have converged. FastTree repeats rounds of NNIs until convergence, up to a fixed limit on the number of rounds.

After convergence, FastTree does one final round of ML NNIs with subtree skipping and the star topology test turned off, as in the first round. We view this as a safety valve for the heuristics. Finally, FastTree does a final round of optimizing the branch lengths and computes the SH-like local supports.
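Assigning each site to the rate category with maximum posterior probability can be sketched as below. The prior term here is just a placeholder: the text does not spell out FastTree's exact prior over categories, so any concrete choice of `log_prior` is an assumption of this illustration.

```python
import numpy as np

def assign_rate_categories(site_logliks, log_prior):
    """site_logliks: (n_categories, n_sites) array of each site's
    log-likelihood under each candidate relative rate; log_prior:
    (n_categories,) log prior weight per category. Returns, per site,
    the index of the category maximizing likelihood times prior."""
    return np.argmax(site_logliks + log_prior[:, None], axis=0)
```

Without the prior term this reduces to picking each site's best-fitting rate, which overfits on small alignments; the prior pulls weakly informed sites toward typical rates, which is the point of the Bayesian treatment described above.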
Subtree skipping

The intuition behind subtree skipping is that if a subtree has not changed during recent rounds of NNIs, then further attempts to optimize the subtree will be fruitless. Specifically, during ML NNIs, FastTree does not traverse into subtrees that have not seen any significant improvement in likelihood (0.1 log-likelihood units) in either of the previous two rounds. Before skipping a subtree, FastTree also checks that none of the nodes adjacent to the parent node were affected by a significantly improving NNI in the previous round. The subtree-skipping heuristic typically gives a 3-fold speedup, making it the most important of FastTree's ML heuristics. Subtree skipping might be useful for SPR moves as well.

Star topology test

If the current topology (A,B,(C,D)) is much better than the star topology (A,B,C,D), then an NNI is unlikely to give an improvement. Specifically, if the current topology is significantly (5 log-likelihood units) more likely than the star topology (after optimizing the internal branch length), then FastTree does not optimize the other branch lengths or consider the two alternate topologies. However, FastTree only uses this heuristic if the node was unchanged in the last round of NNIs. To approximate the likelihood of the star topology, FastTree uses the likelihood with the minimal internal branch length of 0.0001.

Branch lengths

To optimize all branch lengths in the tree at the beginning and end of the ML phase, and after optimizing the model parameters, FastTree again uses post-order traversal. At each node, it considers a three-node star topology on the node's children and parent, using the posterior distributions for the two children and the up-posterior for itself. (At the root, it uses all three children instead.) It numerically optimizes these three branch lengths in series for two rounds.
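The serial one-dimensional optimization can be sketched with a simple golden-section search standing in for Brent's method (FastTree's actual choice, described under "Numerical optimization" below). Here `neg_loglik` is any callable mapping a list of branch lengths to the negative log-likelihood, and the search bounds are illustrative.

```python
def golden_min(f, lo, hi, tol=1e-6):
    """Golden-section search for the minimum of a unimodal f on [lo, hi]."""
    g = (5 ** 0.5 - 1) / 2
    a, b = lo, hi
    c, d = b - g * (b - a), a + g * (b - a)
    while b - a > tol:
        if f(c) < f(d):
            b, d = d, c          # minimum lies in [a, d]; reuse old c as d
            c = b - g * (b - a)
        else:
            a, c = c, d          # minimum lies in [c, b]; reuse old d as c
            d = a + g * (b - a)
    return (a + b) / 2

def optimize_branches(neg_loglik, lengths, rounds=2, lo=1e-4, hi=10.0):
    """Optimize each branch length in turn (one dimension at a time),
    repeating the whole series for `rounds` rounds."""
    lengths = list(lengths)
    for _ in range(rounds):
        for i in range(len(lengths)):
            lengths[i] = golden_min(
                lambda x: neg_loglik(lengths[:i] + [x] + lengths[i + 1:]),
                lo, hi)
    return lengths
```

Like the coordinate-wise scheme the text describes, this only needs a one-dimensional minimizer, at the cost of repeating the series of coordinates a few times.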
SH-like local supports

For each node, the local support is derived from the per-site likelihoods for the current topology and the two alternate (NNI) topologies. For the current topology, FastTree uses the current (already optimized) branch lengths. For the alternate topologies, FastTree optimizes branch lengths for the quartets, as during the NNIs, for up to two rounds. Given the per-site likelihoods for the three topologies, FastTree uses the SH test with 1,000 bootstrap replicates to estimate the confidence in the given split [19]. If there are poorly resolved nodes nearby, then the support values should be interpreted cautiously, because a high-likelihood alternate topology might not have been considered.

Low-level optimization of likelihood computations

Whereas RAxML stores likelihood vectors (that is, the joint likelihood of a subtree and of a given character at an internal node), FastTree stores posterior distributions, which are normalized so that each site's values sum to 1. This may improve numerical stability for huge alignments. To reduce memory usage, FastTree stores these vectors in single-precision floating point. Log-likelihoods for the tree or for specific sites are stored with double precision.

Similar to RAxML, FastTree stores the posterior distributions in a rotated form, multiplied by the eigen-matrix of the transition matrix. (For the Jukes-Cantor model, this is not necessary.) This reduces the time for likelihood computations, because in the rotated basis, multiplying by the transition matrix reduces to multiplying by a diagonal matrix.

While computing the joint likelihood for a pair of posterior distributions, FastTree avoids performing a logarithm at every site by operating on likelihoods instead of log likelihoods. To prevent numerical underflow, FastTree rescales the likelihood by a constant when necessary. It updates a separate (log-likelihood-based) counter whenever it does this.
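The rescaling scheme can be sketched as follows: multiply the per-site factors as plain (non-log) likelihoods, and whenever the running product threatens to underflow, fold it into a separate log counter. This toy version works on a flat list of factors rather than FastTree's tree traversal, and the `lower` threshold is an illustrative choice.

```python
import math

def scaled_product_log(factors, lower=1e-100):
    """Return log(product of factors) without underflowing: keep a
    running product in ordinary floating point and, whenever it drops
    below `lower`, move it into a separate log-scale counter."""
    value, log_scale = 1.0, 0.0
    for f in factors:
        value *= f
        if value < lower:
            log_scale += math.log(value)
            value = 1.0
    return math.log(value) + log_scale
```

A naive product of 100 factors of 1e-5 underflows to 0.0 in double precision, while the rescaled version recovers the correct log-likelihood with only occasional calls to log, which is the point of keeping the counter separate.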
Similarly, when computing the tree's likelihood at each site, for example while optimizing the rate categories, FastTree rescales each site's likelihood if necessary after visiting each node. FastTree uses SSE2 instructions, a feature of recent CPUs from Intel and AMD, to operate on 4 single-precision floating point values with one instruction. This speeds up computations for protein alignments by up to 50% (data not shown).

Numerical optimization

To find the parameters that optimize the likelihood, FastTree uses Brent's method, a numerical method that iteratively narrows the interval it is searching within (http://en.wikipedia.org/wiki/Brents_method). Because Brent's method only operates in one dimension, FastTree optimizes different parameters in turn, and then repeats the rounds of optimization (for example, it optimizes the first branch length, then the second, then the third, then repeats). FastTree estimates the initial interval to search within from the initial guess.

Biological and Simulated Alignments

The simulated protein alignments and the genuine COG alignments were described previously [2]. The 16S alignment with 237,882 distinct sequences was taken from GreenGenes [33] (http://greengenes.lbl.gov). The 16S alignment with 15,011 distinct "families" is a non-redundant subset of these sequences [11]. For the 16S-like simulations with 78,132 distinct sequences, we used a maximum-likelihood tree inferred from a non-redundant aligned subset of the full set of 16S sequences, and simulated sequences along this tree with Rose [34] under the HKY model and no transition bias. To allow Rose to handle branch lengths of less than 1%, we set its "MeanSubstitution" parameter to a correspondingly small value.

Software Used

We used FastTree 2.0.0. We used the July 6 2009 release of the PhyML 3.0 source code and modified BL_MIN from 1.e-10 to 1.e-8 to overcome numerical problems with some of the simulated protein alignments, as suggested by Stéphane Guindon. FastME 2.06 was provided by Olivier Gascuel.
RAxML 7.0.4 and 7.2.1 were obtained from the author's web sites. RAxML 7.2.1 was compiled with SSE instructions. NINJA was provided by Travis Wheeler and is available at http://nimbletwist.com/software/ninja/. BIONJ was obtained from http://www.lirmm.fr/~w3ifa/MAAS/BIONJ/BIONJ.c. BIONJ was run with maximum-likelihood distances obtained with phylip's protdist (http://evolution.genetics.washington.edu/phylip.htm) and the JTT model (no gamma). Log-corrected distances were obtained with FastTree and the -makematrix option.

Supporting Information

Figure S1. Branch lengths for an alignment of 200 16S rRNA sequences vary systematically with the Γ approximation used. The CAT lengths are from FastTree, and all Γ branch lengths are from PhyML with FastTree's topology and with optimized shape parameters. The top panel shows that branch lengths from the various models have a roughly linear relationship with each other, but they have different scales. The bottom panel shows how the total length of the tree varies with the number of categories (note the log x axis). The "Use Median" lengths are from running PhyML with --use_median, which uses the median of each region, rather than the mean, to approximate the gamma distribution. The "Corrected" lengths are the "Use Median" lengths multiplied by the average posterior rates, which can be obtained by running PhyML with --print_site_lnl (thanks to Stéphane Guindon for pointing this out). The corrected lengths converge to the correct value much more quickly than the others. The "CAT/Gamma" tree length, from FastTree 2.1 with -gamma, is also reasonably accurate. With this option, FastTree 2.1 optimizes the Γ20 likelihood with a shape parameter and a rescaling parameter, using the site likelihoods from FastTree's 20 relative rates and branch lengths that were optimized under the CAT model. (0.13 MB PS)

Figure S2. Total splits or strongly supported splits that disagree with RAxML's final tree, versus time.
The 16S tree has 4,111 splits and the COG trees have 2,497 splits each. All values for the COG trees are averages over the 7 COGs. (0.02 MB PS)

Table S1. Times and likelihoods for large 16S rRNA alignments. (0.02 MB PDF)

Acknowledgments

We thank Alexandros Stamatakis for suggesting the time-and-likelihood comparison between FastTree and RAxML and for commenting on the manuscript.

Competing Interests: The authors have declared that no competing interests exist.

Funding: This work was supported by a grant from the US Department of Energy Genomics: GTL program (DE-AC02-05CH11231). The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.

References

1. Nawrocki EP, Kolbe DL, Eddy SR. Infernal 1.0: inference of RNA alignments. Bioinformatics. 2009;25:1335–7. [PMC free article] [PubMed]
2. Price MN, Dehal PS, Arkin AP. FastTree: computing large minimum evolution trees with profiles instead of a distance matrix. Mol Biol Evol. 2009;26:1641–50. [PMC free article] [PubMed]
3. Saitou N, Nei M. The neighbor-joining method: a new method for reconstructing phylogenetic trees. Mol Biol Evol. 1987;4:406–425. [PubMed]
4. Studier JA, Keppler KJ. A note on the neighbor-joining algorithm of Saitou and Nei. Mol Biol Evol. 1988;5:729–31. [PubMed]
5. Felsenstein J. Evolutionary trees from DNA sequences: a maximum likelihood approach. J Mol Evol. 1981;17:368–376. [PubMed]
6. Roch S. A short proof that phylogenetic tree reconstruction by maximum likelihood is hard. IEEE/ACM Trans Comput Biol Bioinform. 2006;3:92–94. [PubMed]
7. Guindon S, Gascuel O. A simple, fast, and accurate algorithm to estimate large phylogenies by maximum likelihood. Syst Biol. 2003;52:696–704. [PubMed]
8. Hordijk W, Gascuel O. Improving the efficiency of SPR moves in phylogenetic tree search algorithms based on maximum-likelihood. Bioinformatics. 2005;21:4338–4347. [PubMed]
9. Stamatakis A. RAxML-VI-HPC: maximum likelihood-based phylogenetic analyses with thousands of taxa and mixed models. Bioinformatics. 2006;22:2688–2690. [PubMed]
10. Felsenstein J. Confidence limits on phylogenies: an approach using the bootstrap. Evolution. 1985;39:783–791.
11. Stamatakis A, Hoover P, Rougemont J. A rapid bootstrap algorithm for the RAxML web servers. Syst Biol. 2008;57:758–771. [PubMed]
12. Desper R, Gascuel O. Fast and accurate phylogeny reconstruction algorithms based on the minimum-evolution principle. J Comput Biol. 2002;9:687–705. [PubMed]
13. Nei M, Kumar S, Takahashi K. The optimization principle in phylogenetic analysis tends to give incorrect topologies when the number of nucleotides or amino acids used is small. Proc Natl Acad Sci USA. 1998;95:12390–7. [PMC free article] [PubMed]
14. Stamatakis A. Phylogenetic models of rate heterogeneity: a high performance computing perspective. In: Proceedings of the 20th International Parallel and Distributed Processing Symposium; 2006.
15. Yang Z. Maximum likelihood phylogenetic estimation from DNA sequences with variable rates over sites: approximate methods. J Mol Evol. 1994;39:306–314. [PubMed]
16. Guindon S, Delsuc F, Dufayard JF, Gascuel O. Estimating maximum likelihood phylogenies with PhyML. Methods Mol Biol. 2009;537:113–37. [PubMed]
17. Guindon S, Dufayard JF, Lefort V, Anisimova M, Hordijk W, et al. New algorithms and methods to estimate maximum-likelihood phylogenies: assessing the performance of PhyML 3.0. Syst Biol. 2010; in press. [PubMed]
18. Gascuel O. BIONJ: an improved version of the NJ algorithm based on a simple model of sequence data. Mol Biol Evol. 1997;14:685–695. [PubMed]
19. Shimodaira H, Hasegawa M. Multiple comparisons of log-likelihoods with applications to phylogenetic inference. Mol Biol Evol. 1999;16:1114–1116.
20. Tatusov RL, Natale DA, Garkavtsev IV, Tatusova TA, Shankavaram UT, et al. The COG database: new developments in phylogenetic classification of proteins from complete genomes. Nucleic Acids Res. 2001;29:22–8. [PMC free article] [PubMed]
21. Anisimova M, Gascuel O. Approximate likelihood-ratio test for branches: a fast, accurate, and powerful alternative. Syst Biol. 2006;55:539–52. [PubMed]
22. Galtier N, Jean-Marie A. Markov-modulated Markov chains and the covarion process of molecular evolution. J Comput Biol. 2004;11:727–733. [PubMed]
23. DeLong ER, DeLong DM, Clarke-Pearson DL. Comparing the areas under two or more correlated receiver operating characteristic curves: a nonparametric approach. Biometrics. 1988;44:837–45. [PubMed]
24. Evans J, Sheneman L, Foster J. Relaxed neighbor joining: a fast distance-based phylogenetic tree construction method. J Mol Evol. 2006;62:785–92. [PubMed]
25. Wheeler TJ. Large-scale neighbor-joining with NINJA. In: Proceedings of the 9th Workshop on Algorithms in Bioinformatics; 2009.
26. Desper R, Gascuel O. Theoretical foundation of the balanced minimum evolution method of phylogenetic inference and its relationship to weighted least-squares tree fitting. Mol Biol Evol. 2004;21:587–598. [PubMed]
27. Bordewich M, Gascuel O, Huber KT, Moulton V. Consistency of topological moves based on the balanced minimum evolution principle of phylogenetic inference. IEEE/ACM Trans Comput Biol Bioinform. 2009;6:110–7. [PubMed]
28. Shimodaira H, Hasegawa M. CONSEL: for assessing the confidence of phylogenetic tree selection. Bioinformatics. 2001;17:1246–1247. [PubMed]
29. Goloboff PA, Farris JS, Nixon KC. TNT, a free program for phylogenetic analysis. Cladistics. 2008;24:774–786.
30. Katoh K, Toh H. PartTree: an algorithm to build an approximate tree from a large number of unaligned sequences. Bioinformatics. 2007;23:372–374. [PubMed]
31. Bradley RK, Roberts A, Smoot M, Juvekar S, Do CB, et al. Fast statistical alignment. PLoS Comput Biol. 2009;5. [PMC free article] [PubMed]
32. Sonnhammer ELL, Hollich V. Scoredist: a simple and robust protein sequence distance estimator. BMC Bioinformatics. 2005;6:108. [PMC free article] [PubMed]
33. DeSantis TZ, Hugenholtz P, Larsen N, Rojas M, Brodie EL, et al. Greengenes, a chimera-checked 16S rRNA gene database and workbench compatible with ARB. Appl Environ Microbiol. 2006;72:5069–5072. [PMC free article] [PubMed]
34. Stoye J, Evers D, Meyer F. Rose: generating sequence families. Bioinformatics. 1998;14:157–163. [PubMed]

Articles from PLoS ONE are provided here courtesy of Public Library of Science
Robbing the bandit: less regret in online geometric optimization against an adaptive adversary

Results 1 - 10 of 40

In ACM Conference on Electronic Commerce, 2005. Cited by 58 (9 self).
We present approximation and online algorithms for a number of problems of pricing items for sale so as to maximize seller's revenue in an unlimited supply setting. Our first result is an O(k)-approximation algorithm for pricing items to single-minded bidders who each want at most k items. This improves over recent independent work of Briest and Krysta [6] who achieve an O(k²) bound. For the case k = 2, where we obtain a 4-approximation, this can be viewed as the following graph vertex pricing problem: given a (multi) graph G with valuations w_e on the edges, find prices p_i ≥ 0 for the vertices to maximize the revenue Σ_{e=(i,j): p_i + p_j ≤ w_e} (p_i + p_j).

In Proceedings of the 21st Annual Conference on Learning Theory (COLT), 2008. Cited by 50 (9 self).
We introduce an efficient algorithm for the problem of online linear optimization in the bandit setting which achieves the optimal O*(√T) regret. The setting is a natural generalization of the nonstochastic multi-armed bandit problem, and the existence of an efficient optimal algorithm has been posed as an open problem in a number of recent papers.
We show how the difficulties encountered by previous approaches are overcome by the use of a self-concordant potential function. Our approach presents a novel connection between online learning and interior point methods.

In STOC'08, 2008. Cited by 45 (6 self).
In a multi-armed bandit problem, an online algorithm chooses from a set of strategies in a sequence of n trials so as to maximize the total payoff of the chosen strategies. While the performance of bandit algorithms with a small finite strategy set is quite well understood, bandit problems with large strategy sets are still a topic of very active investigation, motivated by practical applications such as online auctions and web advertisement. The goal of such research is to identify broad and natural classes of strategy sets and payoff functions which enable the design of efficient solutions. In this work we study a very general setting for the multi-armed bandit problem in which the strategies form a metric space, and the payoff function satisfies a Lipschitz condition with respect to the metric. We refer to this problem as the Lipschitz MAB problem. We present a complete solution for the multi-armed problem in this setting. That is, for every metric space (L, X) we define an isometry invariant MaxMinCOV(X) which bounds from below the performance of Lipschitz MAB algorithms for X, and we present an algorithm which comes arbitrarily close to meeting this bound. Furthermore, our technique gives even better results for benign payoff functions.

In STOC '08: Proceedings of the Fortieth Annual ACM Symposium on Theory of Computing, 2007.
Cited by 38 (7 self).
We propose weakening the assumption made when studying the price of anarchy: rather than assume that self-interested players will play according to a Nash equilibrium (which may even be computationally hard to find), we assume only that selfish players play so as to minimize their own regret. Regret minimization can be done via simple, efficient algorithms even in many settings where the number of action choices for each player is exponential in the natural parameters of the problem. We prove that despite our weakened assumptions, in several broad classes of games, this "price of total anarchy" matches the Nash price of anarchy, even though play may never converge to Nash equilibrium. In contrast to the price of anarchy and the recently introduced price of sinking [15], which require all players to behave in a prescribed manner, we show that the price of total anarchy is in many cases resilient to the presence of Byzantine players, about whom we make no assumptions. Finally, because the price of total anarchy is an upper bound on the price of anarchy even in mixed strategies, for some games our results yield as corollaries previously unknown bounds on the price of anarchy in mixed strategies.

In Proceedings of the 39th Annual ACM Symposium on Theory of Computing, 2007. Cited by 21 (3 self).
In an online linear optimization problem, on each period t, an online algorithm chooses st ∈ S from a fixed (possibly infinite) set S of feasible decisions. Nature (who may be adversarial) chooses a weight vector wt ∈ R n, and the algorithm incurs cost c(st, wt), where c is a fixed cost function that is linear in the weight vector. In the full-information setting, the vector wt is then revealed to the algorithm, and in the bandit setting, only the cost experienced, c(st, wt), is revealed. The goal of the online algorithm is to perform nearly as well as the best fixed s ∈ S in hindsight. Many repeated decision-making problems with weights fit naturally into this framework, such as online shortest-path, online TSP, online clustering, and online weighted set cover. Previously, it was shown how to convert any efficient exact offline optimization algorithm for such a problem into an efficient online algorithm in both the full-information and the bandit settings, with average cost nearly as good as that of the best fixed s ∈ S in hindsight. However, in the case where the offline algorithm is an approximation algorithm with ratio α> 1, the previous approach only worked for special types of approximation algorithms. We show how to convert any offline approximation algorithm for a linear optimization problem into a corresponding online approximation algorithm, with a polynomial blowup in runtime. If the offline algorithm has an α-approximation guarantee, then the expected cost of the online algorithm on any sequence is not much larger than α times that of the best s ∈ S, where the best is chosen with the benefit of hindsight. Our main innovation is combining Zinkevich’s algorithm for convex optimization with a geometric transformation that can be applied to any approximation algorithm. Standard techniques generalize the above result to the bandit setting, except that a “Barycentric Spanner ” for the problem is also (provably) necessary as input. 
Our algorithm can also be viewed as a method for playing large repeated games, where one can only compute approximate best-responses, rather than best-responses. 1. Introduction. In the 1950’s "... We present a modification of the algorithm of Dani et al. [8] for the online linear optimization problem in the bandit setting, which with high probability has regret at most O ∗ ( √ T) against an adaptive adversary. This improves on the previous algorithm [8] whose regret is bounded in expectatio ..." Cited by 17 (0 self) Add to MetaCart We present a modification of the algorithm of Dani et al. [8] for the online linear optimization problem in the bandit setting, which with high probability has regret at most O ∗ ( √ T) against an adaptive adversary. This improves on the previous algorithm [8] whose regret is bounded in expectation against an oblivious adversary. We obtain the same dependence on the dimension (n 3/2) as that exhibited by Dani et al. The results of this paper rest firmly on those of [8] and the remarkable technique of Auer et al. [2] for obtaining highprobability bounds via optimistic estimates. This paper answers an open question: it eliminates the gap between the high-probability bounds obtained in the full-information vs bandit settings. 1 "... We study sequential prediction problems in which, at each time instance, the forecaster chooses a binary vector from a certain fixed set S ⊆ {0, 1} d and suffers a loss that is the sum of the losses of those vector components that equal to one. The goal of the forecaster is to achieve that, in the l ..." Cited by 17 (5 self) Add to MetaCart We study sequential prediction problems in which, at each time instance, the forecaster chooses a binary vector from a certain fixed set S ⊆ {0, 1} d and suffers a loss that is the sum of the losses of those vector components that equal to one. 
The goal of the forecaster is to achieve that, in the long run, the accumulated loss is not much larger than that of the best possible vector in the class. We consider the “bandit ” setting in which the forecaster has only access to the losses of the chosen vectors. We introduce a new general forecaster achieving a regret bound that, for a variety of concrete choices of S, is of order √ nd ln |S | where n is the time horizon. This is not improvable in general and is better than previously known bounds. We also point out that computationally efficient implementations for various interesting choices of S exist. 1 - 24TH ANNUAL CONFERENCE ON LEARNING THEORY , 2011 "... In a multi-armed bandit (MAB) problem, an online algorithm makes a sequence of choices. In each round it chooses from a time-invariant set of alternatives and receives the payoff associated with this alternative. While the case of small strategy sets is by now wellunderstood, a lot of recent work ha ..." Cited by 15 (3 self) Add to MetaCart In a multi-armed bandit (MAB) problem, an online algorithm makes a sequence of choices. In each round it chooses from a time-invariant set of alternatives and receives the payoff associated with this alternative. While the case of small strategy sets is by now wellunderstood, a lot of recent work has focused on MAB problems with exponentially or infinitely large strategy sets, where one needs to assume extra structure in order to make the problem tractable. In particular, recent literature considered information on similarity between arms. We consider similarity information in the setting of contextual bandits, a natural extension of the basic MAB problem where before each round an algorithm is given the context – a hint about the payoffs in this round. Contextual bandits are directly motivated by placing advertisements on webpages, one of the crucial problems in sponsored search. 
A particularly simple way to represent similarity information in the contextual bandit setting is via a similarity distance between the context-arm pairs which bounds from above the difference between the respective expected payoffs. Prior work - In ACM-EC , 2009 "... We consider a multi-round auction setting motivated by payper-click auctions for Internet advertising. In each round the auctioneer selects an advertiser and shows her ad, which is then either clicked or not. An advertiser derives value from clicks; the value of a click is her private information. I ..." Cited by 13 (0 self) Add to MetaCart We consider a multi-round auction setting motivated by payper-click auctions for Internet advertising. In each round the auctioneer selects an advertiser and shows her ad, which is then either clicked or not. An advertiser derives value from clicks; the value of a click is her private information. Initially, neither the auctioneer nor the advertisers have any information about the likelihood of clicks on the advertisements. The auctioneer’s goal is to design a (dominant strategies) truthful mechanism that (approximately) maximizes the social welfare. If the advertisers bid their true private values, our problem is equivalent to the multi-armed bandit problem, and thus can be viewed as a strategic version of the latter. In particular, for both problems the quality of an algorithm can be characterized by regret, the difference in social welfare between the algorithm and the benchmark which always selects the same“best”advertisement. We investigate how the design of multi-armed bandit algorithms is affected by the restriction that the resulting mechanism must be truthful. We find that truthful mechanisms have certain strong structural properties – essentially, they must separate exploration from exploitation – and they incur much higher regret than the optimal multi-armed bandit algorithms. 
Moreover, we provide a truthful mechanism which (essentially) matches our lower bound on regret. , 2009 "... We provide a principled way of proving Õ( √ T) high-probability guarantees for partial-information (bandit) problems over convex decision sets. First, we prove a regret guarantee for the full-information problem in terms of “local ” norms, both for entropy and self-concordant barrier regularization, ..." Cited by 13 (4 self) Add to MetaCart We provide a principled way of proving Õ( √ T) high-probability guarantees for partial-information (bandit) problems over convex decision sets. First, we prove a regret guarantee for the full-information problem in terms of “local ” norms, both for entropy and self-concordant barrier regularization, unifying these methods. Given one of such algorithms as a black-box, we can convert a bandit problem into a full-information problem using a sampling scheme. The main result states that a high-probability Õ ( √ T) bound holds whenever the black-box, the sampling scheme, and the estimates of missing information satisfy a number of conditions, which are relatively easy to check. At the heart of the method is a construction of linear upper bounds on confidence intervals. As applications of the main result, we provide the first known efficient algorithm for the sphere with an Õ( √ T) high-probability bound. We also derive the result for the n-simplex, improving the O ( √ nT log(nT)) bound of Auer et al [3] by replacing the log T term with log log T and closing the gap to the lower bound of Ω ( √ nT). The guarantees we obtain hold for adaptive adversaries (unlike the in-expectation results of [1]) and the algorithms are efficient, given that the linear upper bounds on confidence can be computed. 1
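The abstracts above concern adversarial, Lipschitz, and linear bandit settings. For readers new to the area, a minimal stochastic baseline (my own illustration, far simpler than any of the algorithms cited here) shows the basic explore/exploit loop that all of these works refine:

```python
import random

def epsilon_greedy(true_means, rounds=10000, eps=0.1, seed=0):
    """Minimal stochastic multi-armed bandit baseline (not one of the
    adversarial/Lipschitz algorithms above): with probability eps pull a
    random arm, otherwise pull the arm with best empirical mean."""
    rng = random.Random(seed)
    n = [0] * len(true_means)        # pull counts per arm
    mean = [0.0] * len(true_means)   # empirical mean reward per arm
    total = 0.0
    for _ in range(rounds):
        if rng.random() < eps or min(n) == 0:
            arm = rng.randrange(len(true_means))     # explore
        else:
            arm = max(range(len(true_means)), key=lambda a: mean[a])  # exploit
        reward = 1.0 if rng.random() < true_means[arm] else 0.0
        n[arm] += 1
        mean[arm] += (reward - mean[arm]) / n[arm]   # incremental mean update
        total += reward
    return total / rounds

# With arms of mean reward 0.2, 0.5, 0.8, the average reward approaches
# roughly 0.9 * 0.8 + 0.1 * 0.5 = 0.77 as the best arm is identified.
avg = epsilon_greedy([0.2, 0.5, 0.8])
```

The regret notion used throughout the abstracts is exactly the gap between this average and the best arm's mean.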
{"url":"http://citeseerx.ist.psu.edu/showciting?cid=945092","timestamp":"2014-04-18T14:48:55Z","content_type":null,"content_length":"42319","record_id":"<urn:uuid:78d7ba41-226c-497a-b91a-f676c1dae95d>","cc-path":"CC-MAIN-2014-15/segments/1398223205375.6/warc/CC-MAIN-20140423032005-00190-ip-10-147-4-33.ec2.internal.warc.gz"}
Light enters a substance from air at 45 degrees to the normal. It continues through the substance at 34.7 degrees to the normal. What would be the critical angle for this substance? - Homework Help - eNotes.com

Answer: 53.6°

Recall Snell's Law: `n_1 sin(\theta_1) = n_2 sin(\theta_2)`, where n is the index of refraction of the material and `\theta` is the angle the ray makes with the normal (perpendicular) of the surface. `n_(air) = 1.0` by definition (since the speed of light in air is very close to that in a vacuum).

The critical angle, `\theta_c`, is the angle of incidence of a ray such that the exiting ray will be along the surface. In other words, the ray's angle of refraction is 90°. This only happens when the ray passes from the denser medium into the less dense one (here, when `n_2 gt n_1`).

Setting the output ray angle to 90° gives the following:

`n_1/n_2 = sin(\theta_c)` critical angle formula

Note that if `n_1 gt n_2`, the fraction will be greater than 1, which means there is no critical angle in that situation.

Now that the concept is down, the solution is as follows: let `n_1` be air, and `n_2` be our substance, with `n_2 gt n_1`.

Isolate `n_2` in Snell's law, then plug the result into the critical angle formula:

`n_2 = n_1(sin(\theta_1)/sin(\theta_2))`

plugging into critical angle formula:

`n_1/((n_1(sin(\theta_1)/sin(\theta_2)))) = sin(\theta_c)`

`rArr sin(\theta_2)/sin(\theta_1) = sin(\theta_c)`

`:. theta_c = 53.6^o`

Hope that helps!
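The arithmetic above is easy to check numerically. A quick sketch (the variable names here are mine, not part of the original answer):

```python
import math

theta1 = math.radians(45.0)   # angle of incidence in air
theta2 = math.radians(34.7)   # angle of refraction inside the substance

# Snell's law with n_air = 1.0 gives the substance's index of refraction.
n2 = math.sin(theta1) / math.sin(theta2)

# Critical angle: sin(theta_c) = n1/n2 = sin(theta2)/sin(theta1).
theta_c = math.degrees(math.asin(1.0 / n2))

print(round(n2, 3), round(theta_c, 1))  # 1.242 53.6
```

This confirms the answer of 53.6° and also gives the index of refraction, about 1.24.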
{"url":"http://www.enotes.com/homework-help/light-enters-substance-from-air-45-degrees-normal-430471","timestamp":"2014-04-18T00:27:31Z","content_type":null,"content_length":"28317","record_id":"<urn:uuid:3d684e7f-b028-446c-a79e-d7bc8c162cde>","cc-path":"CC-MAIN-2014-15/segments/1397609532374.24/warc/CC-MAIN-20140416005212-00168-ip-10-147-4-33.ec2.internal.warc.gz"}
Infinite Computations, Convergence, Formal Describability

Next: Formally Describable Functions Up: Preliminaries Previous: Turing Machines: Monotone TMs

Most traditional computability theory focuses on properties of halting programs. Given an MTM or EOM or GTM [...]. Although each beginning or prefix of [...] in the limit [19,33,15] and [20,32].

Definition 2.2 (TM-Specific Individual Describability) Given a TM T, an [...] iff there is a finite [...]. According to this definition, objects with infinite shortest descriptions on [...].

Definition 2.3 (Universal TMs) Let [...] such that for all possible programs [...] compiler theorem (for instance, a fixed compiler can translate arbitrary LISP programs into equivalent FORTRAN programs).

Definition 2.5 (Individual Describability) Let [...] if it is computably enumerable (c.e.). G-describable strings are called formally describable or simply [...]. For example, MTMs and EOMs converge always. GTMs do not.

Definition 2.7 (Approximability) Let [...] by TM [...] for all times [...] if there is at least one GTM [...]. Henceforth we will exchangeably use the expressions approximable, describable, computable in the limit.

Juergen Schmidhuber 2003-02-13
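The flavor of "computable in the limit" in Definition 2.7 can be illustrated with a toy example of my own (it is not from the text): a never-halting process whose successive outputs converge to √2, even though no finite stage certifies the final answer.

```python
from fractions import Fraction

def sqrt2_approx(t):
    """Output of a never-halting approximation process after t steps.
    Each output may be revised later, but the sequence converges; this
    is the flavor of 'computable in the limit' (Definition 2.7)."""
    x = Fraction(1)
    for _ in range(t):
        x = (x + 2 / x) / 2   # Newton iteration in exact rational arithmetic
    return x

# Successive outputs converge to sqrt(2), though no single t proves it.
print(float(sqrt2_approx(2)), float(sqrt2_approx(5)))
```

An irrational number like √2 has no finite decimal description, yet it is describable in this limit sense.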
{"url":"http://www.idsia.ch/~juergen/ijfcs2002/node5.html","timestamp":"2014-04-19T07:35:43Z","content_type":null,"content_length":"13302","record_id":"<urn:uuid:2b29f663-c7c9-4eaa-bff4-053d21105d21>","cc-path":"CC-MAIN-2014-15/segments/1397609536300.49/warc/CC-MAIN-20140416005216-00161-ip-10-147-4-33.ec2.internal.warc.gz"}
Thu Nov 04 1999 18:01: I finally got Dan to accept that no natural number is of infinite magnitude. The upshot of this is that he now has to accept that the real numbers can be diagonalized but the natural numbers cannot. Now maybe we can have some peace around here. The proof is a simple proof by induction: Zero is of finite magnitude. If x is of finite magnitude, so is x+1. Therefore, all natural numbers are of finite magnitude.
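The induction sketched above can be written out formally. Here is one rendering in Lean 4 (my own formalization, with "is of finite magnitude" abstracted to an arbitrary predicate P):

```lean
-- Mirrors the post's argument: P 0 holds (zero is of finite magnitude),
-- P is preserved by successor, hence P holds of every natural number.
theorem every_nat (P : Nat → Prop)
    (base : P 0) (step : ∀ x, P x → P (x + 1)) :
    ∀ n, P n := by
  intro n
  induction n with
  | zero => exact base
  | succ x ih => exact step x ih
```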
{"url":"http://www.crummy.com/1999/11/4/3","timestamp":"2014-04-17T18:26:44Z","content_type":null,"content_length":"4708","record_id":"<urn:uuid:f082dce1-821b-4af5-bd54-e55ce5105105>","cc-path":"CC-MAIN-2014-15/segments/1398223203235.2/warc/CC-MAIN-20140423032003-00191-ip-10-147-4-33.ec2.internal.warc.gz"}
Table 24. Summary of database UML-GTR RockFound07 cases used for foundation capacity evaluation.

Foundation type                No. of cases  No. of sites  No. of rock types  Shape                  Size range (ft)
Shallow foundations (D = 0)(1)  33            22            10                 Square 4, Circular 29  0.07 < B < 23, Bavg = 2.76
Shallow foundations (D > 0)     28             8             2                 Circular 28            0.23 < B < 3, Bavg = 1.18
Rock sockets                    61            49            14                 Circular 61            0.33 < B < 9, Bavg = 2.59

Location (no. of sites: USA / Canada / Italy / UK / Australia / Taiwan / Japan / Singapore / South Africa / Russia):
Shallow foundations (D = 0): 2 / 1 / 1 / 3 / 13 / 0 / 1 / 0 / 1 / 0
Shallow foundations (D > 0): 0 / 0 / 0 / 0 / 8 / 0 / 0 / 0 / 0 / 0
Rock sockets: 19 / 4 / 1 / 0 / 21 / 1 / 0 / 1 / 0 / 2

(1) Three (3) cases had been omitted in the final statistics due to a clay seam in the rock.

diameter (B) ranging from 0.33 ft to 9 ft with an average (Bavg) of 2.59 ft. Table 24 presents a summary of the database case histories breakdown based on foundation type, embedment, sites, size and country. It can be inferred from Table 24 that most of the shallow foundation and rock socket data were obtained from load tests carried out in Australia and the United States, respectively.

3.3 Determination of the Measured Strength Limit State for Foundations Under Vertical-Centric Loading

3.3.1 Overview

The strength limit state of a foundation may address two kinds of failure: (1) structural failure of the foundation material itself and (2) bearing capacity failure of the supporting soils. While both need to be examined, this research addresses the ULSs of the soil's failure. The ULS consists of exceeding the load-carrying capacity of the ground supporting the foundation, sliding, uplift, overturning, and loss of overall stability. In order to quantify the uncertainty of an analysis, one needs to find the ratio of the measured ("actual") capacity to the calculated capacity for a given case history. The measured strength limit state (i.e., the capacity) of each case needs, therefore, to be identified.

Depending on the footing displacements, one may define (1) allowable bearing stress, (2) bearing capacity, (3) bearing stress causing local shear failure, and (4) ultimate bearing capacity (Lambe and Whitman, 1969). Allowable bearing stress is the contact pressure for which the footing movements are within the permissible limits for safety against instability and functionality, hence defined by SLS. Bearing capacity is that contact pressure at which settlements become very large and unpredictable because of shear failure. Bearing stress causing local shear failure is the stress at which the first major nonlinearity appears in a load-settlement curve, and generally the bearing capacity is taken as equal to this stress. Ultimate bearing capacity is the stress at which sudden catastrophic settlement of a foundation occurs. Bearing capacity and ultimate bearing capacity define the ULS and differ only in the foundation response to load. Appendix F presents a review of foundation modes of failure and suggests that the terms "bearing capacity" and "ultimate bearing capacity" should be used interchangeably to define the maximum loading (capacity) of the ground, depending on the mode of failure.

3.3.2 Failure (Ultimate Load) Criteria

3.3.2.1 Overview--Shallow Foundations on Soils

The strength limit state is a "failure" load or the ultimate capacity of the foundation. The bearing capacity (failure) can be estimated from the curve of vertical displacement of the footing against the applied load. A clear failure, known as a general failure, is indicated by an abrupt increase in settlement under a very small additional load. Most often, however (other than for small scale plate load tests in dense soils), test load-settlement curves do not show clear indications of bearing capacity failures. Depending on the mode of failure, a clear peak or an asymptote value may not exist at all, and the failure or ultimate load capacity of the footing has to be interpreted.

Appendix F provides categorization of failure modes followed by common failure criteria. The interpretation of the failure or ultimate load from a load test is made more complex by the fact that the soil type or state alone does not determine the mode of failure (Vesić, 1975). For example, a footing on very dense sand can also fail in punching shear if the footing is placed at a greater depth, or if loaded by a transient, dynamic load. The same footing will fail in punching shear if the very dense sand is underlain by a compressible stratum such as loose sand or soft clay. It is clear from the above discussion that the failure load of a footing is clearly defined only for the case of general shear; for cases of local and punching shear, it is often difficult to establish a unique failure load.

Criteria proposed by different authors for the failure load interpretation are presented in Appendix F, while only the selected criterion is presented in the following section. Such interpretation requires that the load test be carried to very large displacements, which constrains the availability of test data, in particular for larger footing sizes.

3.3.2.2 Minimum Slope Failure (Ultimate) Load Criteria, Vesić (1963)

Based on the load-settlement curves, a versatile ultimate load criterion is recommended to define the ultimate load at the point where the slope of the load-settlement curve first reaches zero or a steady, minimum value. The interpreted ultimate loads for different tests are shown as black dots in Figure 53 for soils with different relative densities, Dr. For footings on the surface of, or embedded in, soils with higher relative densities, there is a higher possibility of failure in general shear mode, and the failure load can be clearly identified for Test Number 61 in Figure 53. For footings in soils with lower relative densities, however, the failure mode could be local shear or punching shear, with the identified failure location being arbitrary at times (e.g., see Test Number 64). A semi-log scale plot with the base pressure (or load) in logarithmic scale can be used as an alternative to the linear scale plot if it facilitates the identification of the starting of minimum slope and hence the failure load.

[Figure 53. Ultimate load criterion based on minimum slope of load-settlement curve (Vesić, 1963).]

3.3.2.3 The Uncertainty in the Minimum Slope Failure Criterion Interpretation

In order to examine the uncertainty in the method selected for defining the bearing capacity of shallow foundations on soils, the following failure criteria (described in detail in Appendix F) were used to interpret the failure load from the load-settlement curves of footings subjected to centric vertical loading on granular soils (measured capacity): (a) minimum slope criterion (Vesić, 1963), (b) limited settlement criterion of 0.1B (Vesić, 1975), (c) log-log failure criterion (De Beer, 1967), and (d) two-slope criterion (shape of curve).

Examples F1 and F2 in Appendix F demonstrate the application of the four examined criteria to the database UML-GTR ShalFound07. The measured bearing capacity could be interpreted for 196 cases using the minimum slope criterion (Vesić, 1963) and 119 cases using the log-log failure criterion (De Beer, 1967). Most of the footings failed before reaching a settlement of 10% of footing width (the limited settlement criterion of 0.1B [Vesić, 1975] could therefore only be applied to 19 cases). A single "representative" value of the relevant measured capacity was then assigned to each footing case. This was done by taking an average of the measured capacities interpreted using the minimum slope criterion, the limited settlement criterion of 0.1B (Vesić, 1975), the log-log failure criterion, and the two-slope criterion (shape of curve).

[Figure 54. Histogram for the ratio of representative measured capacity to interpreted capacity using the minimum slope criterion for 196 footing cases in granular soils under centric vertical loading. (no. of data = 196, mean = 0.978, COV = 0.053.)]

The statistics of the ratios of this representative value over the interpreted capacity using the minimum slope criterion and the log-log failure criterion were comparable, with the mean of the ratio for the minimum slope criterion being 0.98 versus that for the limited settlement criterion being 0.99. Due to the simplicity and versatility of its application, the minimum slope criterion was selected as the failure interpretation criterion to be used for all cases of footing, including those with combined loadings. Figure 54 shows the histogram for the ratio of the representative measured capacity to the interpreted capacity using the minimum slope criterion. Figure 54 presents the uncertainty associated with the use of the selected criterion, suggesting that the measured capacity interpreted using the minimum slope criterion has a slight overprediction.

3.3.3 Failure Criterion for Footings on Rock

The bearing capacity interpretation of loaded rock can become complex due to the presence of discontinuities in the rock mass. In a rock mass with vertical open discontinuities, where the discontinuity spacing is less than or equal to the footing width, the likely failure mode is uniaxial compression of rock columns (Sowers, 1979). For a rock mass with closely spaced, closed discontinuities, the likely failure mode is the general wedge occurring when the rock is normally intact. For a mass with vertical open discontinuities spaced wider than the footing width, the likely failure mode is splitting of the rock mass and is followed by a general shear failure. For the interpretation of ultimate load capacities from the load-settlement curves, the L1-L2 method proposed by Hirany and Kulhawy (1988) was adopted.

[Figure 55. Example of L1-L2 method for capacity of foundations on rocks showing the regions of the load-displacement curve and interpreted limited loads (Hirany and Kulhawy, 1988).]

A typical load-displacement curve for foundations on rock is presented in Figure 55. Initially, linear elastic load-displacement relations take place; the load defining the end of this region is interpreted as QL1. If a unique peak or asymptote in the curve exists, this asymptote or peak value is defined as QL2. There is a nonlinear transition between loads QL1 and QL2. If a linear region exists after the transition, as in Figure 55, the load at the start of the final linear region is defined as QL2. In either case, QL2 is the interpreted failure load. This criterion is similar to the aforementioned minimum slope failure proposed by Vesić for foundations in soil. The selection of the ultimate load using this criterion is demonstrated in Example F3 of Appendix F using a case history from the UML-GTR RockFound07 database. It can be noted that the axes aspect ratios (scales of axes relative to each other) in the plot of the load-settlement curve changes the curve shape, and thus could affect the interpretation of the ultimate load capacity. However, unlike the interpretation of ultimate capacity from pile load tests, which utilizes the elastic compression line of the pile, there is no generalization of what the scales of the axes should be relative to each other for the shallow foundation load tests. It can only be said that depending on the shape of the load-settlement curve, a "favorable" axes aspect ratio needs to be fixed. This should be done on a case-by-case basis, using judgment, so that the region of interest (e.g., if the minimum slope criterion is used, the region where the change in the curve slope occurs) is clear. The L1-L2 method was applied to all cases for which
{"url":"http://www.nap.edu/openbook.php?record_id=14381&page=66","timestamp":"2014-04-18T01:16:27Z","content_type":null,"content_length":"59262","record_id":"<urn:uuid:bd9db163-990d-498e-b14c-b9a5d1947c53>","cc-path":"CC-MAIN-2014-15/segments/1398223203235.2/warc/CC-MAIN-20140423032003-00477-ip-10-147-4-33.ec2.internal.warc.gz"}
Wolfram Demonstrations Project

CNOT and Toffoli Gates in Multi-Qubit Setting

The most commonly used two-qubit gate is the controlled-not (CNOT) gate. In two-qubit space, CNOT is a 4×4 matrix of the form [[1,0,0,0],[0,1,0,0],[0,0,0,1],[0,0,1,0]]. In general, acting on n qubits, it is a 2^n × 2^n matrix. The most important three-qubit gate is the universal Toffoli gate, or controlled-controlled-not (CCNOT) gate with two control bits. For the Toffoli gate, both control bits are operative; for CNOT, only the first control bit. The program generates the basic elements that make up a quantum computation. Especially instructive is a method for constructing operators (gates) acting within multi-qubit states.

Based on a program by: Bruno Juliá-Díaz and Frank Tabakin
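The idea of embedding CNOT and Toffoli into a 2^n-dimensional multi-qubit space can be sketched in plain Python as permutation matrices. This is my own illustration, not the Mathematica program the Demonstration is based on:

```python
def controlled_not(n, control, target):
    """2^n x 2^n permutation matrix for a CNOT acting on an n-qubit
    register: flip `target` when `control` is 1 (qubit 0 = leftmost bit)."""
    dim = 1 << n
    mat = [[0] * dim for _ in range(dim)]
    for col in range(dim):
        c_bit = (col >> (n - 1 - control)) & 1
        row = col ^ (1 << (n - 1 - target)) if c_bit else col
        mat[row][col] = 1
    return mat

def toffoli(n, c1, c2, target):
    """CCNOT embedded in n qubits: flip `target` only when both controls are 1."""
    dim = 1 << n
    mat = [[0] * dim for _ in range(dim)]
    for col in range(dim):
        b1 = (col >> (n - 1 - c1)) & 1
        b2 = (col >> (n - 1 - c2)) & 1
        row = col ^ (1 << (n - 1 - target)) if (b1 and b2) else col
        mat[row][col] = 1
    return mat

cnot = controlled_not(2, 0, 1)  # reproduces the standard 4x4 CNOT matrix
```

For n = 2 this yields exactly the 4×4 matrix quoted above, and `toffoli(3, 0, 1, 2)` gives the 8×8 CCNOT that swaps only the basis states |110⟩ and |111⟩.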
{"url":"http://demonstrations.wolfram.com/CNOTAndToffoliGatesInMultiQubitSetting/","timestamp":"2014-04-21T09:41:32Z","content_type":null,"content_length":"43673","record_id":"<urn:uuid:85ca1272-60e4-4e4d-b765-47858d955207>","cc-path":"CC-MAIN-2014-15/segments/1397609539705.42/warc/CC-MAIN-20140416005219-00169-ip-10-147-4-33.ec2.internal.warc.gz"}
MathGroup Archive: November 2010

Re: Problems with Mathematica 8.0 Solve

• To: mathgroup at smc.vnet.net
• Subject: [mg114280] Re: Problems with Mathematica 8.0 Solve
• From: Daniel Lichtblau <danl at wolfram.com>
• Date: Tue, 30 Nov 2010 04:01:32 -0500 (EST)

leigh pascoe wrote:
> [...]
> Dear Daniel,
> Thanks for your most helpful suggestions. It is good to know that the
> legacy option is available for "symbolic" solution of the equations when
> needed.
> When writing about the method I have used to analyse my data, I can
> write equations relating the parameters to be estimated {x,y} and the
> observed data {a,b..,g}. Obviously in that case one would like to have
> simple symbolic solutions, x=f(a,b..) etc. However in the present case
> that seems impossible, as the symbolic solutions take up several pages
> and give little insight. In any case no editor would permit publication
> of the formulas for want of space.

The approach indicated below will make them smaller, but still perhaps too large/ugly for publication.

> To get numerical values for the parameters in a particular experiment
> it is easy to substitute the observed data into the formulas to estimate
> the parameters, in Ma 7.0.
> In Mathematica 8.0 the solution of the equations seems impossible
> (without invoking the legacy method), despite the fact that they appear
> relatively simple. However numerical results can be obtained as
> suggested in your email in two ways:
> 1. By substituting the data values into the equations first and then
> solving them. The command you suggested doesn't in fact solve the
> equations, I presume because of a misplacement of parentheses viz
> Timing[solns = Map[Solve[(exprs /. #) == 0, Cubics -> False] &,
> Take[subs, 4]];]
> However it prompted me to find the syntax that did
> sols = N[Map[Solve[expr /.
#, Cubics -> False] &, subs2]]

I suspect the difference is that I changed your equations to expressions, that is, removed the '==0' parts. Hence the different behavior.

> 2. The solutions can also be found using NSolve directly as you suggested
> sols2 = Map[NSolve[expr /. #] &, subs2]
> In summary it turns out to be more efficient to solve the equations
> multiple times with specific coefficients than to solve them generally
> and evaluate the particular solutions.
> The above commands solve my immediate problem but not my curiosity.
> Could anyone comment on the methods used by 7.0 and 8.0 to solve systems
> of equations?

I've not looked too hard to figure out why it ever worked. "Dumb luck" is most likely the right answer. I am fairly sure it has to do with the older Solve using a really rickety Groebner basis code internally. This code in effect treats all parameters as "variables", and creates a slew of new polynomials. It then has the good luck to be able to formulate symbolic roots in one variable, winding its way backwards through these polynomial equations until it can also provide roots for the second one. That it does not choke on the sizes is a stroke of luck, I suspect.

Version 8 Solve attempts to compute a lexicographic Groebner basis over a field of fractions in the parameters. I gather this turns out to be very difficult for this example, and it hangs here. I do not know offhand whether this indicates a limitation of our GroebnerBasis code, or this is just an intrinsically difficult computation.

> Why is it easier to solve the equations with exact
> numerical coefficients than with symbolic constants?

The Groebner basis computations can be much more strenuous, and the results much larger, when there are symbolic parameters in the mix.

> Would it be easier
> if the data values were limited to positive integers?

I doubt it.

> Is Reduce a
> preferred command in this situation?

It could be, if inequality or domain information were useful.
Though I think version 8 Solve might also avail itself of such information. The problem is that such information might not make the algorithmic complexity improve. It might even get worse. (Why? Because it might force use of cylindrical algebraic decomposition, and that in turn might be more strenuous even than the GroebnerBasis computation it is already doing.)

The above are just general observations. Only experimentation will show whether or not such assumptions do help for your particular computation.

> I can't get much assistance from
> the Help files on these questions.
> Thanks again
> LP

Here is another workaround, if you really want a parametrized solution set (already sent your way in private email).

exprs = Together[{(b + d + f)/x - (a + b)/(1 + x) -
  2*(c + d + e)/(1 + 2*x + y) - (f + g)/(x + y),
  (e + g)/y - (c + d + e)/(1 + 2*x + y) - (f + g)/(x + y)}];

Use NSolve, but at infinite precision. In this case it will not mind that there are symbolic parameters, that is, non-numeric input.

In[2]:= Timing[solns = NSolve[exprs, {x,y},

Out[2]= {29.5225, Null}

Compare to legacy (versions <=7) Solve.

In[3]:= Timing[solns2 = Solve[exprs == 0, {x, y}, Method->"Legacy"];]

Out[3]= {22.6456, Null}

In[4]:= LeafCount[solns]

Out[4]= 37318

In[5]:= LeafCount[solns2]

Out[5]= 232642

Daniel Lichtblau
Wolfram Research
The n-Category Café

July 30, 2011

Coinductive Definitions

Posted by Mike Shulman

I've come to believe, over the past couple of years, that anyone trying to study $\omega$-categories (a.k.a. $(\infty,\infty)$-categories) without knowing about coinductive definitions is going to be struggling against nature due to not having the proper tools. But although coinductive definitions are a basic notion in mathematics, for some reason they don't seem to be taught, even to graduate students. Write something like

A 1-morphism $f\colon x\to y$ in an $(n+1)$-category is an equivalence if there exists a 1-morphism $g\colon y\to x$ and equivalences $1_x \to g f$ and $f g\to 1_y$ in the relevant hom-$n$-categories; and every 1-morphism in a 0-category is an equivalence

and any mathematician (who has some inkling of what $n$-categories are) will be happy. If you ask why this definition isn't circular, since it defines the notion of "equivalence" in terms of "equivalence", the mathematician will say "it's an inductive definition" and expect you to stop complaining. But if you write something like

A 1-morphism $f\colon x\to y$ in an $\omega$-category is an equivalence if there exists a 1-morphism $g\colon y\to x$ and equivalences $1_x \to g f$ and $f g\to 1_y$ in the relevant hom-$\omega$-categories

the same mathematician will object loudly, saying that this definition is circular. (In fact, not very long ago, that mathematician was me.) But actually, this latter is a perfectly valid coinductive definition.

Posted at 8:11 PM UTC | Followups (14)

July 27, 2011

Local and Global Supersymmetry

Posted by Urs Schreiber

The field of fundamental high energy physics – that part of physics that deals with fundamental particles probed in particle accelerators – is witnessing interesting developments these days: after decades of only a minimum of new experimental observations of interest, finally plenty of data has been collected and is now being analyzed.
And finally a multitude of theoretical models that have been developed over the years can be tested against experiment. Apart from lots of new information about which mass the hypothetical Higgs particle – if it indeed exists – does not have, one of the striking experimental results is that they increasingly – and by now strongly – disfavour what are called supersymmetric extensions of the standard model of particle physics. Well-informed discussion of these developments can for instance be found on this blog.

In the course of these developments, I see and hear a lot of discussion around me of whether the concept of "supersymmetry" as such is thus experimentally ruled out. There is an enormous amount of literature revolving around the concept of supersymmetry quite independently of the "supersymmetric standard model of particle physics". Is all that now proven to be ill-conceived? Is "supersymmetry" being shown to play no role in nature? We almost had a discussion of this kind also here on the blog recently. Since this is a widespread misunderstanding, I thought I'd try to say something about it here.

A little appreciated but important fact is this: there is a crucial distinction between what is called local supersymmetry and what is called global supersymmetry, and between target space supersymmetry and worldvolume supersymmetry. I have tried to say a bit about this in the $n$Lab entry.

Even less widely appreciated seems to be the following noteworthy fact: local worldline supersymmetry has been experimentally verified since 1922 – when the Stern-Gerlach experiment showed that there are fundamental particles with a property called spin: these spinning fermion particles – the electrons and quarks that you, me, and everything around us is made of – happen to have worldline supersymmetry. I have tried to give an indication of this in the entry, which also collects a bunch of original references and textbook chapters where this fact is discussed in detail.
So the assumption that there is local worldvolume supersymmetry in nature is not speculation, but experimental fact as soon as there is any spinor in the world. Of course this is not the global target space supersymmetry that is currently being experimentally ruled out at the LHC. So it is good to distinguish these concepts.

And indeed, despite what many people are on record as having said: nothing at all in sigma-model theory implies that a supersymmetric sigma-model (such as the spinning particle, or the spinning string, for that matter) has target space backgrounds that generically are globally supersymmetric. On the contrary: the generic background will not be!

This simple fact seems not to be widely appreciated, either. It is the direct analog of the following self-evident bosonic statement: while ordinary gravity is a locally Poincaré-invariant theory (a Poincaré-gauge theory), its generic solution – a given pseudo-Riemannian manifold – does not have a nontrivial action of the Poincaré group or of any of its nontrivial subgroups. It will only have such actions if it has flows of isometries given by Killing vectors. Analogously, the generic solution to a theory of supergravity – which is a locally super-Poincaré-invariant theory – does not have any covariantly constant spinor, hence the perturbative quantum field theory on this background does not have a global supersymmetry. This has always been clear. Some more sophisticated discussion of this point is for instance in Dienes, Lennek, Sénéchal, Wasnik, Is SUSY natural? (arXiv:0804.4718), which is effectively a detailed expansion of the statement about generic absence of global symmetries in backgrounds.

There'd be much more to say (and there'd be need to expand the above $n$Lab entries much more), but I must stop here and take care of other tasks. The upshot is: 1.
there is still all the reason in the world to believe that the concept of local supersymmetry (aka: supergravity) is fundamental for our world – not the least because 1-dimensional worldline supergravity is an experimentally observed fact; 2. the models of global supersymmetry that are currently being ruled out by experiment are not rooted in theory, but in phenomenological model building. The general theory of supersymmetry is as unaffected by these models being ruled out as the theory of gravity is unaffected by a given cosmological model being ruled out.

Posted at 2:10 AM UTC | Followups (45)

July 25, 2011

Bohr Toposes

Posted by Urs Schreiber

To every quantum mechanical system is associated its Bohr topos: a ringed topos which plays the role of the quantum phase space. The idea of this construction is that it naturally captures the geometric and logical aspects of quantum physics in terms of higher geometry/topos theory. Below the fold I try to give an exposition of the facts that motivate the construction, the construction itself, and an indication of the resulting notion of presheaves of Bohr toposes associated with every quantum field theory. For more details and further links see the $n$Lab entry Bohr topos. See also the previous entry A Topos for Algebraic Quantum Theory.

Posted at 12:48 AM UTC | Post a Comment

July 11, 2011

Doctrinal and Tannakian Reconstruction

Posted by David Corfield

Interest in doctrines at the Café goes right back to one of its earliest posts nearly five years ago, and even to one a few days earlier. We've begun to record some material at the doctrine page in nLab, but I'm sure there's more wisdom from the blog to be extracted. Looking over the material again raised a question or two in my mind, which I'd like to pose now.

Gabriel-Ulmer duality is a biequivalence between the 2-category of finite limit categories and the 2-category of locally finitely presentable categories.
It allows for the recovery of a theory from the category of models of that theory.

Posted at 11:01 AM UTC | Followups (42)

July 7, 2011

Operads and the Tree of Life

Posted by John Baez

Tomorrow I'm giving a talk about an operad that shows up in biology. I wrote my lecture notes in the form of a blog entry: A remark by Tom Leinster was important in helping me figure out this stuff.

Posted at 5:59 PM UTC |

July 2, 2011

Definitions of Ultrafilter

Posted by Tom Leinster

One of these days I want to explain a precise sense in which the notion of ultrafilter is inescapable. But first I want to do a bit of historical digging. If you're subscribed to Bob Rosebrugh's categories mailing list, you might have seen one of my historical questions. Here's another: have you ever seen the following definition of ultrafilter?

Definition 1 An ultrafilter on a set $X$ is a set $U$ of subsets with the following property: for all partitions $X = X_1 \amalg \cdots \amalg X_n$ of $X$ into a finite number $n \geq 0$ of subsets, there is precisely one $i$ such that $X_i \in U$.

This is equivalent to any of the usual definitions. It's got to be in the literature somewhere, but I haven't been able to find it. Can anyone help?

Posted at 11:35 PM UTC | Followups (25)

July 1, 2011

Nikolaus on Higher Categorical Structures in Geometry

Posted by Urs Schreiber

Yesterday Thomas Nikolaus – former colleague of mine in Hamburg – has defended his PhD. His nicely written thesis Higher categorical structures in geometry – General theory and applications to QFT discusses plenty of subjects of interest here; the main sections are titled:

1. Bundle gerbes and surface holonomy
2. Equivariance in higher geometry
3. Four equivalent versions of non-abelian gerbes
4. A smooth model for the string-group
5. Equivariant modular categories via Dijkgraaf-Witten theory.

Have a look at his slides for a gentle overview. Myself, I have to dash off now.
Maybe I’ll say a bit more about what Thomas did in these chapters a little later. Or maybe he’ll do so himself… Posted at 5:27 PM UTC | Followups (9)
C Chevalley: On Herbrand's thought

In June 1934 Claude Chevalley gave a lecture On Herbrand's thought at the Colloquium on Mathematical Logic which had been organised at the University of Geneva. Below is an extract from his lecture:-

Jacques Herbrand was born in 1908. In 1925, at the end of his secondary studies, he took first place at the École Normale Supérieure. I entered the École myself in 1926, and immediately noticed the entirely special place he occupied among his comrades, due to the vigour and universality of his spirit. In 1928, he took first place in the Agrégation, and was able to stay a fourth year at the École, during which he completed his thesis, which contains the work in mathematical logic that he had been pursuing for two years. In 1929-1930 he did his military service; during that year he published two notes on the units of algebraic number fields. These notes form the basis of the new methods in class-field theory. He also finished the article 1931, which was a continuation of his thesis. He spent the academic year 1930-1931 in Germany: first in Berlin with von Neumann, where he continued his work in logic, notably comparing his results to those of Gödel; then he went to Hamburg and finally to Göttingen. He returned to France at the end of July and left immediately to indulge in his favourite sport, mountaineering. There he died in a fall, on July 27. Thus passed one whom a mathematician described as "one of the greatest of his generation".

Here is an extract from a letter of Professor Courant to Herbrand's father:-

During his short stay in Göttingen and even before this from his works, we have come to respect your son as one of the most promising and, because of the results he had already obtained, prominent young mathematicians of the world.

Jacques Herbrand expressed himself rather little on the philosophical ideas relating to the problems of mathematical logic (see, however, the Introduction to his thesis).
This is why I have eagerly taken the opportunity afforded by the University of Geneva to deliver several remarks on his ideas. I draw them from memories of the many conversations I had with him. One should remember that I have no other source and, having myself thought about these questions, my own opinions could have unconsciously infected my recollections. Given the agreement which there was in general between us, I hope that the distortions I might thus have involuntarily introduced will be minimal. In our attempt to penetrate Herbrand's system of thought, we shall rely on the following quotation (1930):- But it should not be hidden that perhaps the role of mathematics is merely to furnish us with arguments and forms, and not to find out which of these apply to which objects. Just as the mathematician who studies the equation of wave propagation no longer has to ask himself whether waves satisfy this equation in nature, so no longer in studying set theory or arithmetic should he ask whether the sets or numbers of which he intuitively thinks satisfy the hypotheses of the theory he is considering. He ought to concern himself with developing the consequences of these hypotheses and with presenting them in the most suggestive manner; the rest is the role of the physicist or the philosopher. When we refer to the distinctions that Bernays and Fraenkel formulated very clearly in their lectures [Bernays 1934 and Fraenkel 1934] between Platonist and intuitionist (or Aristotelian) mathematicians, I believe that this quotation allows us to conclude immediately that there are no Platonist inclinations to be found in Herbrand's thought. Indeed, Platonism, no matter what form it appears in, always admits the existence of a given world ruled by purely rational laws. Consequently mathematics is quite naturally considered to be knowledge of this world by man. Its role is precisely to find the human arguments which fit this world and which let us penetrate its structure. 
It was this that Herbrand called into question. If one abandons the Platonist view, one must admit that mathematical objectivity, no longer a sign of the existence of a rational world, is created by man. The processes of axiomatization and the formalist method are the most extreme points of this movement towards the objective. This is to say, and I believe that this is what Herbrand thought, that objectivity is attained only in a pure symbolism, in emptying symbols completely of all meaning. Objectivity and concrete reality, far from being synonyms, exclude each other. We can now understand why Herbrand's thought, although not Platonist, was not intuitionist either. In fact, the Intuitionists do not disallow treating in mathematics objects that are simultaneously rational and real. Undoubtedly they do not believe that such objects are given a priori. But they construct these objects starting from an intuition, namely, temporal intuition, so that mathematical assertions represent for them the assertions one can make regarding intuitions about time. Only the assertions that can be translated in this manner are valuable. For Herbrand, such restrictions were without foundation, for he believed that no reasoning whatsoever concerning something given and concrete would be valuable from a purely mathematical point of view, nor all the more that it was necessary to limit oneself to such reasoning. The same considerations apply to logic. Logic comprises a schema which is objective only insofar as it is purely formal. Were one to give a sense to the symbols appearing in the formulas of logic, were one to consider them as representing operations of thought, one could cause logic to lose its objectivity. This is why it is not surprising that different thinkers have different opinions on the value of the axioms of logic. 
If the system of forms of classical logic is repugnant to Brouwer's thought, for example, this does not mean that this logic is denuded of value; it is an assertion about Brouwer's thought. If Heyting's formalism agrees with Brouwer's thought, this means that this formalism is suitable for describing the datum that his thought comprises. But in any case a human thought remains incongruous in any formalism; there is the same relationship between a formal logic and a mode of thought as between a mathematical equation and a physical phenomenon. In regard to this, let us recall another quotation from Herbrand (1930l):- It can be said that many of the obscurities and discussions that have arisen in regard to the foundations of mathematics have their origin in a confusion between the 'mathematical' and the 'metamathematical' senses of terms. These difficulties spring from one's wanting to treat purely symbolic formulas of the domain of mathematics as assertions relating to something given. They are not of a different nature from those which engendered the birth of the infinitesimal calculus, which one wished to exclude for philosophical reasons because one wanted to see a 'real' object in the differential. Similarly, today, certain people wish to exclude the principle of the excluded middle because they wish to see 'real' assertions in the propositions. From the preceding considerations we must not conclude that a mathematical act was for Herbrand a 'gratuitous' sort of act. Undoubtedly it is possible to carry out mathematics with any axioms and any rules of reasoning whatsoever; but in reality, and Herbrand liked to insist on this point, rigor has in a sense two complementary faces: if it is first the requirement of formalism, with respect to the 'rules of the game', it is also, in the sense given it by Leonardo da Vinci, an attempt at an ever more perfect description of something given. 
This description comes about by interpreting the axioms by means of experimental concepts. Mathematical physics already shows that one approaches positivist schemata (the direct interpretation of sensation) only at the price of an increasing abstraction, which one can compare to a sort of magic by which man dominates the domain of sensation only in first completely leaving it and in passing to the pure world of mathematics. Just as mathematical physics permits us to penetrate further and further into the structure of matter, logic allows us to describe something nearer yet to man than his sensations: his intellectual thought. Herbrand said to me one day, "I would like to construct a system that contains all present-day thoughts". This is the greatest demand one could make on a formal logic: it leads us to the very centre of the drama of Herbrand's thought, balancing between an investigation always more concrete and a formalism always more abstract. This drama was enacted in Herbrand's thought with a poignant intensity. Could it perhaps be a necessity of fate that where the spirit attains such a degree of violent purity, there death would be closest? JOC/EFR August 2006 The URL of this page is:
I am simply trying to test the values in 2 arrays to see if they are the same. The next step tho is to test each value in guesse against each value in rannum. The problem is that no matter what, I cannot get $correct to be more than 1 or 0; it almost seems to be acting like a bool.

sub TestCorrect {    # tests for the correct numbers in the correct order
    my $correct = 0;
    @guesse = split " ,_-/", $guesse;
    if ($guesse[0] == $rannum[0]) { $correct++ }
    if ($guesse[1] == $rannum[1]) { $correct++ }
    if ($guesse[2] == $rannum[2]) { $correct++ }
    if ($guesse[3] == $rannum[3]) { $correct++ }
    return $correct;
}

thanks for any help

This post has been edited by Jingle: 07 June 2012 - 02:38 AM
Circuit Idea/Simple Op-amp Summer Design

From Wikibooks, open books for an open world

How to Simplify the Design of the Mixed Op-amp Voltage Summer

After the EDN article Single-formula technique keeps it simple

Procedure idea: Supplementing the gains and imposing the requirement for equal equivalent input resistances simplifies circuit design.

Novel design procedure[edit]

It seems that there is nothing new to say about single op-amp amplifying circuits. Only, the innovator Dieter Knollman has suggested a new, simpler design procedure in an EDN article^[1]. He has also placed his work on the web^[2]. His idea is described more concisely in Electronics. The active design procedure^[3] is based on the new Daisy's theorem^[4] and Plato's gain formula^[5]. The result is amazing, especially for summing op-amp circuits with multiple positive-gain inputs: you can calculate the resistor values for each input by using the same formula R[i] = R[F]/|gain|! It sounds wonderful, doesn't it? Only, in order to really understand things we, human beings, need to grasp the basic ideas behind them. Then, let's reveal the ideas behind this mystic procedure following the heuristic approach of the famous mathematician George Polya. For this purpose, we will cite the original text in italic type and interpret it in normal type. Of course, it would be best if the author told us how he invented the procedure. So, we suggest contributing to this page.

Author - Plato's Gain Formula is a simplification of the General Summing Amplifier gain formula. The GSA formula is derived via K9 Analysis, a dog-gone simple way to obtain circuit equations.

What does Daisy's theorem mean?[edit]

Daisy's theorem says: the sum of the gains in a single op-amp amplifying circuit is equal to 1^[4]. Only, don't you think that it seems quite strange to sum gains? It is far more natural to think that voltage summers sum voltages!
Then, let's examine this author's assertion. Scrutinizing the material, we will find out that Daisy's theorem concerns mainly parallel voltage summing circuits. What are they? Can such circuits exist at all (we usually think we must not connect in parallel voltage sources with different voltages)?

Daisy's comment - The theorem applies to linear circuits, with ideal voltage source inputs. The node voltage is a linear equation containing inputs and gain. The equation shows how via superposition each input contributes to the node voltage, the gain. The theorem applies to all nodes.

As you probably know, we can sum voltages directly by connecting the voltage sources in series (according to Kirchhoff's voltage law). In this series voltage summer, the whole input voltages participate in the overall sum:

V[OUT] = V[IN1] + V[IN2] + ... V[INn]

Only, a problem with the common ground appears - some input voltage sources or the load remain flying. Then, we can sum voltages indirectly by connecting the voltage sources "in parallel" through resistors (according to Kirchhoff's current law). In this parallel voltage summer, the input voltages are weighted by coefficients α[i] (< 1, in the case of a passive summer) or gains G[i] (> 1, in the case of an active summer):

V[OUT] = α[1].V[IN1] + α[2].V[IN2] + ... α[n].V[INn]

The equation simply means that the circuit is linear; superposition applies.

In this common case (Fig. 2), the parallel summer sums products of gains and voltages. Now imagine that all the input coefficients α[i] (or gains G[i]) are equal to 1. As a result, the summer will sum voltages as above - V[OUT] = V[IN1] + V[IN2] + ... V[INn]. Well, why don't we assume that all the input voltages are equal to 1 V? In this case, the summer will sum coefficients (gains) - G[OUT] = G[IN1] + G[IN2] + ... G[INn]; now, the output voltage represents the overall gain!

An example: a DAC is just a summer with digitally-controlled binary-weighted input voltages (at constant input resistances) or gains (at constant input voltages equal to the reference voltage V[REF]).

Maybe, the best way to understand what Daisy's theorem means is to apply it to various summing circuits. Well, let's begin!

Passive summer[edit]

Let's first prove Daisy's theorem in the case of the simplest passive parallel summer. We may observe the parallel summing phenomenon in nature and in our daily routine: (input) power sources connected in parallel through some "resistances" to the same (output) point where their influences are superimposed. For example, imagine how sources fight each other like people in the games of tug of war or arm wrestling.

Similarly, in an electrical passive parallel voltage summer (Fig. 3), the input voltage sources are connected "in parallel" through resistors to the same output point. The sources try to change the output voltage by "sucking" or "blowing" a current from/through the common point. As a result, their influences (voltages) are superimposed at the output point.

Why do we need resistances at all? Well, zero one resistance and you will see that the corresponding source completely controls the output -- the output no longer depends on any of the other sources. It's no longer the "summer" we wanted. Zero two or more resistances? Even worse than zeroing just one -- it directly shorts the voltage sources in a damaging short circuit. The resistances are necessary to allow every input to have some influence on the sum. The resistors continuously dissipate power.

After this intuitive viewpoint on the bare summing circuit, let's make a bit more serious analysis. With respect to each input, the summing circuit is a voltage divider composed of two resistances: the input resistance (R[i1], R[i2] or R[G]) and the equivalent resistance of the remaining resistors (R[i2]||R[G], R[i1]||R[G] and R[i1]||R[i2]).
An example: a DAC is just a summer with digital-controlled binary-weighted input voltages (at constant input resistances) or gains (at constant input voltages equal to the reference voltage V[REF]). Maybe, the best way to understand what Daisy's theorem means is to apply it to various summing circuits. Well, let's begin! Passive summer[edit] Let's first prove Daisy's theorem in the case of the simplest passive parallel summer. We may observe the parallel summing phenomenon in nature and our routine: (input) power sources connected in parallel through some "resistances" to the same (output) point where their influences are superimposed. For example, imagine how sources fight each other like as people in the game tug of war or arm Similarly, in an electrical passive parallel voltage summer (Fig. 3) the input voltage sources are connected "in parallel" through resistors to the same output point. The sources try to change the output voltage by "sucking" or "blowing" a current from/through the common point. As a result, their influences (voltages) are superimposed in the output point. Why is there a need of resistances at all? Well, zero one resistance and you will see that the corresponding source completely controls the output -- the output no longer depends on any of the other sources. It's no longer the "summer" we wanted. Zero two or more resistances? Even worse than zeroing just one -- it directly shorts the voltage sources in a damaging short-circuit. The resistances are necessary to allow every input to have some influence on the sum. The resistors continuously dissipate power. After this intuitive viewpoint at the bare summing circuit let's make a bit more serious analysis. Regarding to each input, the summing circuit is a voltage divider composed by two resistances: the input resistance (R[i1], R[i2] or R[G]) and the equivalent resistance of the remaining resistances (R[i2]||R[G], R[i1]||R[G] and R[i1]||R[i2]). 
Note that the third input voltage is zero (the corresponding resistor is just connected to the ground). In this way, the inputs attenuate the voltages by coefficients α[1], α[2] and α[G] according to the well-known voltage divider formula (Fig. Now, if we sum the input coefficients, we establish an interesting fact: the sum of the input coefficients is 1! When we decrease one resistance, its coefficient increases while the others decrease and v.v.; the coefficients shade one into other! We have reached a conclusion that the bare passive summing circuit obeys Daisy's theorem! Only, what do we do, if we would like to set arbitrary input coefficients? What if we want ones that do not sum to 1? Eureka! We can connect an additional resistor to the ground with a complementary coefficient! This "parasitic" ground resistor will take off gain from the input coefficients; it acts as an attenuating element. Example 1: If we have chosen α1 = 0.4 and α2 = 0.6, there is no need of a ground resistor because their sum is exactly 1. Example 2: If we have chosen α1 = 0.4 and α2 = 0.4, there is a need of a ground resistor with α1 = 0.2; it adds the sum up to 1. If you convert this circuit into a Norton equivalent circuit, the analysis becomes trivial. Non-inverting summer[edit] If we would like the summer to amplify, we may connect a non-inverting buffering amplifier with gain K after the bare summing circuits (Fig. 4). Obviously, in this case, the sum of the input gains will appear to constitute K: (α[1] + α[2] + α[3]).K = K In this arrangement, the function of the ground resistor is the same as above: it lets us set arbitrary input coefficients (not only complementing to K). Here, it takes off gain from the input gains but the overall sum remains equal to K. Daisy's theorem states that the sum is always equal to one, at the amplifier input and also at the output. The amplifier must have a hidden ground that will have a gain of (1-k). 
If you construct a complete schematic for this circuit, you may notice that the circuit has a ground resistor on both op-amp inputs. This will degrade performance. Moral - Always use complete schematics. Inverting summer[edit] Whenever possible, we prefer to use the clearer inverting summer. The inverting summer is easy to design, but provides inferior performance. Moral - K9 is simple for all cases. Design for The idea of this clever circuit (Fig. 5) is simple: the op-amp "neutralizes" the "disturbing" output voltage of the imperfect passive summing circuits by an equivalent "antivoltage"^[6]^[7]^[8] The inverting summer like as the non-inverting one has weighted inputs. If we assume that the gain of the equivalent non-inverting amplifier is K, we can establish that the sum of the input negative gains constitutes K-1: G[1] = R[F]/R[i1], G[2] = R[F]/R[i2], G[3] = R[F]/R[G] G[1] + G[2] + G[3] = R[F]/(1/R[i1]+1/R[i2]+1/R[G]) = R[F]/(R[i1]||R[i2]||R[G]) = K - 1 It is interesting to see what happens, if we add a ground resistor to the inverting input. Now, it adds gain to the overall sum without affecting the other inverting input gains. Here's another example of an incomplete schematic. The (+) op-amp input is not shown. The circuit will not function without a (+) input. Daisy's theorem only applies to complete circuits. This circuit will be rejected by SPICE. It's easy to construct figures that violate circuit principles. Moral - The internet contains contains many circuit figures that can't function. You need to recognize these. Mixed summer[edit] Usually, op-amps have differential inputs (if we need only a bare single-input, we just ground the "unused" input). So, we can connect summing circuits to both non-inverting and inverting inputs. In this way, we obtain a universal summing-subtracting circuit - Fig. 6 (the author has named it general summing amplifier). 
Looking from the positive-input side, the sum of the positive input coefficients (between the sources and the non-inverting op-amp's input) is as usual 1 and the sum of the positive input gains (between the sources and the op-amp's output) is K. Looking from the negative-input side, the sum of the negative input gains (between the sources and the op-amp's output) is K - 1. As a result, (the sum of the positive gains) - (the sum of the negative gains) = 1. Wonderful! The circuit obeys Daisy's theorem! How does a ground resistor affect the circuit? Adding a ground resistor to the non-inverting input takes off gain from the positive input gains but the overall sum of gains (caused by the non-inverting inputs) remains equal to K. The reason: the ground resistor does not affect the negative feedback. Adding a ground resistor to the inverting input doesn't affect the other inverting input gains but it increases proportionally the positive input gains. The reason: the ground resistor affects the negative feedback. This action increases both the sums (inverting and non-inverting) but the difference remains as before equal to 1. Maybe, these observations have given an idea to the author to introduce a coefficient p into Plato's formula... p is needed if a circuit has not been optimally designed. A better circuit, but not without problems. Since Rg is connected to the (-) op-amp input, it may not be equal to zero. Plato's gain formula reveals this. Rg is in the denominator. Some mixed sum circuits will Rg connected to the (+) input. A short is allowed here. Moral - Don't trust simple circuit tricks. Look at formulas and assumptions. Non-inverting amplifier[edit] The author claims that even the most elementary op-amp amplifying circuits obey Daisy's theorem. Then, let's examine this assertion beginning by the classic non-inverting amplifier. 
We can think of a non-inverting amplifier as just a "degenerated" summing-subtracting circuit, if we assume that the ground acts as another input (...most op-amp circuits are a subset of the general summing circuit; for example, a non-inverting amplifier would have a single positive input and a single negative input that is connected to ground...). Well, let's see if the circuit obeys Daisy's theorem:

G[i] + G[0] = K - (K - 1) = 1

It obeys the theorem! It even works for the most elementary circuit, a short from input to output: V[OUT] = 1.V[IN].

Inverting amplifier[edit]

Similarly, we can think of an inverting amplifier as a "degenerated" summing-subtracting circuit, assuming again that the ground acts as another input (...for example, an inverting amplifier would have a single positive input that is ground and a single negative input...). Let's see again if the circuit obeys Daisy's theorem:

G[0] + G[i] = K - (K - 1) = 1

It obeys the theorem too!

Differential amplifier[edit]

Actually, the inverting and non-inverting amplifiers are differential circuits that subtract the input voltage from the "ground" voltage. From this viewpoint, the ground voltage is just another input voltage. Only, they are unbalanced circuits, because the two input gains differ by 1. We can balance the circuits if we decrease the non-inverting input gain by 1 or increase the inverting input gain by 1. The first technique is more popular; it leads to the classic circuit of an op-amp differential amplifier. In this circuit, the non-inverting input resistor R[i1] and the ground resistor R[G] constitute a voltage divider, which attenuates the input voltage by a factor of two. Let's see if this circuit obeys Daisy's theorem:

G[i1] + G[3] + G[i2] = α[1].K + α[3].K - (K - 1) = 0.5K + 0.5K - (K - 1) = 1

It obeys the theorem too!

What is the role of the "ground resistor"?[edit]

Finally, let's generalize the role of the ground resistor in all the circuits discussed.
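Before that, here is a numeric check of the differential-amplifier sum above. With equal, assumed resistor values, the divider ratios come out α[1] = α[3] = 0.5:

```python
# Check that the classic differential amplifier satisfies Daisy's theorem,
# counting the ground (through R_G) as an extra zero-volt input.
# Values are assumed for illustration: R_i1 = R_i2 = R_F = R_G = 10k.

R_i1 = R_i2 = 10e3
R_F = R_G = 10e3

K = 1 + R_F / R_i2              # equivalent non-inverting gain = 2
alpha1 = R_G / (R_i1 + R_G)     # divider ratio for the signal input = 0.5
alpha3 = R_i1 / (R_i1 + R_G)    # divider ratio for the "ground input" = 0.5

total = alpha1 * K + alpha3 * K - (K - 1)
print(total)   # 1.0
```

Because α[1] + α[3] = 1 always holds for the two-resistor divider, the total equals K - (K - 1) = 1 for any resistor values, not just these.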
The ground resistor adds the gain needed when the sum of the signal gains does not equal 1. Three cases are possible:

1. If the signal gain sum is more than 1, we connect a ground resistor to the inverting op-amp input.
2. If the signal gain sum is less than 1, we connect a ground resistor to the non-inverting op-amp input.
3. If the signal gain sum is exactly 1, we do not connect a ground resistor.

Is the ground resistor a "parasitic" component? Not always! For example, in the circuit of an inverting amplifier, it is a vital element! (Can you give an example where it is a "parasitic" component?) Daisy's theorem shows when a ground resistor is needed.

What does Plato's formula mean?[edit]

Plato's gain formula says that the "non-inverting" gains G[i] are proportional to the ratio between the "feedback" resistor R[F] and the input resistance R[i]^[5]. What does this mean? Once we have defined the gains (by using Daisy's theorem), we have to determine the magnitudes of the circuit resistors. There is no problem calculating the resistances connected to the inverting input; it is the resistances connected to the non-inverting input that are difficult to calculate. Fortunately, the author has managed to find an interesting relation (he has named it Plato's formula^[5]): the "non-inverting" gains G[i] are proportional to the ratio between the "feedback" resistor R[F] and the input resistance R[i]:

G[i] = p.R[F]/R[i]

The coefficient of proportionality p is the same for all the positive gains and is equal to the ratio between the two equivalent resistances connected to the op-amp inputs:

p = R[e(+)]/R[e(-)]

If the equivalent resistances are equal (this is just what we want, in order to minimize bias-current error), then p = 1 and G[i] = R[F]/R[i]. It is really doggone simple!

It sounds wonderful! Only, we would like to know what the idea behind Plato's formula is. As the author has hidden the idea, let's try to reveal it ourselves.
Author - There is no circuit trick, just K9 Analysis (a simplified form of Node Analysis). The intent of K9 is to avoid tricks and to make Analog Circuit Design and Analysis Dog-Gone Simple.

A key for understanding: equalizing the input resistances[edit]

Obviously, we have to find some relation between the circuit parameters that can help us simplify the calculations. Well, let's try using the requirement for equal equivalent input resistances, R[e(+)] = R[e(-)], which minimizes the error caused by the input bias currents. What does this mean? If the equivalent input resistances connected to the op-amp inputs are different, the op-amp input bias currents produce different voltages across the resistances; their difference acts as an undesired differential input voltage. In order to compensate for this harmful voltage, we can make the two equivalent input resistances equal (a favorite compensation technique in op-amp design). Usually, this means connecting an additional resistor between the op-amp input having the lower resistance and the ground; its resistance is equal to the equivalent resistance connected to the other input. As a result, the op-amp input bias currents produce equal voltages; their difference acts as a common-mode input voltage that is rejected by the op-amp. Well, let's now apply this technique to the same op-amp summing circuits, in order to reveal the idea behind Plato's formula.

Op-amp inverting amplifier[edit]

Let's begin with the op-amp inverting amplifier illustrated in Fig. 10. Some people "simplify" this circuit by replacing R[G] with a zero-value resistor (a wire). This simplified circuit consists only of two physical resistors and an op-amp. The resulting R[G] = 0 circuit seems to violate Daisy's theorem. When we set R[G] = 0, the equivalent input resistances are very different (R[e(+)] = 0, R[e(-)] = R[i]||R[F]). Theoretically (with an ideal op-amp) this seems to be OK.
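To see why the R[G] = 0 "simplification" is only theoretically OK, we can estimate the error it causes. The bias current and resistor values below are assumed for illustration; with equal bias currents flowing into both inputs, the output offset is I[B].K.(R[e(-)] - R[e(+)]):

```python
# Rough estimate (assumed values) of the output offset caused by
# leaving R_G = 0 in the inverting amplifier.

def parallel(*rs):
    return 1.0 / sum(1.0 / r for r in rs)

R_i, R_F = 10e3, 100e3
I_bias = 100e-9                 # assumed input bias current, 100 nA

Re_minus = parallel(R_i, R_F)   # equivalent resistance at the (-) input
Re_plus = 0.0                   # R_G shorted: no resistance at the (+) input

# Unequal equivalent resistances turn the bias current into a differential
# error voltage, amplified by the noise gain K = 1 + R_F/R_i:
K = 1 + R_F / R_i
V_out_offset = I_bias * (Re_minus - Re_plus) * K
print(V_out_offset)   # 0.01 -- about 10 mV of offset at the output
```

Choosing R[G] = R[i]||R[F] makes the two equivalent resistances equal, so this term (ideally) cancels, leaving only the much smaller offset-current error.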
But when we use a real op-amp, its equal input bias currents flowing through these unequal equivalent input resistances cause an undesired offset voltage. To compensate, we replace the zero-ohm wire between the "+" input pin and the ground with a real resistor R[G]. Equalizing the equivalent input resistances requires R[G] = R[e(-)] = R[i]||R[F] (Fig. 10). Now, if we note that R[i]/(R[i] + R[F]) = 1/K, where K is the gain of the equivalent non-inverting amplifier, we will get Plato's formula for the inverting amplifier:

R[G] = R[F]/K = R[F]/G[0]

The ground resistor R[G] does not define the ground gain; it only compensates the undesired voltage drop caused by the input bias current. And so Daisy's theorem holds for a properly compensated inverting amplifier. (Compared to the properly compensated circuit, the "simplified" R[G] = 0 circuit has degraded performance.)

Op-amp non-inverting amplifier[edit]

Next, let's consider the other elementary op-amp amplifying circuit - the non-inverting amplifier. Following the same compensation technique, we connect an additional resistor between the input voltage source V[IN] and the op-amp non-inverting input, with resistance R[i] = R[e(-)] = R[G]||R[F] (Fig. 11). Now, if we note that R[G]/(R[G] + R[F]) = 1/K, where K is the gain of the non-inverting amplifier, we will get Plato's formula for the non-inverting amplifier:

R[i] = R[F]/K = R[F]/G[i]

As above, R[i] does not define the voltage gain; it only compensates the undesired voltage drop caused by the input bias current. (Like the "simplified" inverting amplifier, the "simplified" R[i] = 0 non-inverting amplifier has degraded performance compared to a properly compensated non-inverting amplifier.)

Op-amp non-inverting summer[edit]

It is time to consider a real summing circuit; let's begin with the simpler op-amp non-inverting summer (Fig. 12).
Here, there are already some resistances connected to the op-amp inputs; we only have to equalize them, in order to minimize the input bias current error:

R[e(+)] = R[e(-)], R[i1]||R[i2] = R[G]||R[F]

Now, if we note that R[G]/(R[G] + R[F]) = 1/K, where K is the gain of the op-amp non-inverting amplifier, and that α[i].K = G[i], we will again get Plato's formula for the non-inverting summer:

R[i1]||R[i2] = R[F]/K, and so R[i] = R[F]/G[i]

(α[i] is the ratio between the voltage at the non-inverting input and the input voltage, while G[i] is the ratio between the op-amp output voltage and the input voltage.) Note that the "ground" resistor R[G] is a vital element in this circuit!

Op-amp mixed summer[edit]

Finally, let's see if the universal summing-subtracting circuit obeys Plato's formula. Again, we equalize the equivalent input resistances, in order to minimize the input bias current error:

R[e(+)] = R[e(-)], R[i1]||R[i2] = R[i3]||R[G]||R[F]

After we substitute the terms as above, we will see that even the most complex summing circuit obeys Plato's formula: R[i] = R[F]/G[i].

Plato's formula illusions[edit]

Note that Plato's formula cherishes the illusion that calculating the non-inverting gains is as simple as calculating the inverting ones, and that the non-inverting gains are independent. But if we consider the case where we have defined the gains and later have to change some of them, we will find that after recalculating the input resistor we also have to recalculate the ground gain (Daisy) and the ground resistor (Plato).

Plato - There is no illusion in K9. Whenever you change a gain, you need to calculate two new resistor values, namely the input resistor and the ground resistor. The only difference between inverting and non-inverting gains is the op-amp input that they connect to. In mixed-gain circuits, a gain change may move the ground resistor connection; Daisy tells you where to connect it. Legacy analysis can only handle negative gain changes, and only if you don't need a balanced circuit.
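Putting Daisy's theorem and Plato's formula (with p = 1) together gives a complete design recipe. The sketch below automates it for an assumed target function V[OUT] = 2.V1 - 4.V2; the feedback resistor value and all names are illustrative:

```python
# K9-style design sketch: from desired gains to resistor values.
# Target (assumed for illustration): V_out = 2*V1 - 4*V2.

R_F = 100e3                      # pick the feedback resistor first

pos_gains = {"V1": 2.0}          # non-inverting input gains
neg_gains = {"V2": 4.0}          # inverting input gains (magnitudes)

diff = sum(pos_gains.values()) - sum(neg_gains.values())

# Daisy's theorem: all gains, including the ground "input", must sum to 1.
if diff < 1:
    pos_gains["ground(+)"] = 1 - diff     # ground resistor on the (+) input
elif diff > 1:
    neg_gains["ground(-)"] = diff - 1     # ground resistor on the (-) input

# Plato's formula with p = 1 (equalized input resistances): R_i = R_F / G_i.
resistors = {name: R_F / g
             for name, g in {**pos_gains, **neg_gains}.items()}

for name, r in resistors.items():
    print(f"{name}: {r/1e3:.1f} k")
# V1: 50.0 k, ground(+): 33.3 k, V2: 25.0 k
```

For this target, diff = 2 - 4 = -2, so a ground gain of 3 is added on the non-inverting side; re-deriving the transfer function from the three resulting resistors and R[F] gives back exactly +2 and -4.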
Legacy promotes the illusion that positive and mixed gains are difficult. Doesn't have to be. The K9 procedure is not only simpler, but better.