content
stringlengths 86
994k
| meta
stringlengths 288
619
|
|---|---|
Re: Truth table evaluation
norman@flaubert.bellcore.com (Norman Ramsey)
Sat, 26 Feb 1994 19:57:43 GMT
From comp.compilers
| List of all articles for this month |
Newsgroups: comp.compilers
From: norman@flaubert.bellcore.com (Norman Ramsey)
Keywords: logic, optimize
Organization: Bellcore, Morristown NJ
References: 94-02-189
Date: Sat, 26 Feb 1994 19:57:43 GMT
Philip Riebold <philip@livenet.ac.uk> wrote:
> I need to evaluate the truth table for [an arbitrary Boolean] expression.
> At present I use a straightforward method of evaluating the expression for
> each of the possible combination of the variables.
> Are there any ways I can speed this up ?
A variation of this problem was given as a class assignment back when
I was a TA for Andrew Appel. The students had to implement two tricks:
For N variables, we have to evaluate the Boolean expression 2^N
times, but the machine is capable of doing 32 evaluations at once.
Write an interpreter using the C bit operators to do those evaluations.
Instead of interpreting the expression, generate machine code to
evaluate it. How many variables does the expression have to have
for this strategy to be worthwhile?
This assignment was great fun. Don't forget that on a modern machine
you may have to worry about I-cache vs D-cache if you write code and
then branch to it. David Keppel's `fly' library should handle those
Norman Ramsey
Post a followup to this message
Return to the comp.compilers page.
Search the comp.compilers archives again.
|
{"url":"http://compilers.iecc.com/comparch/article/94-02-197","timestamp":"2014-04-18T19:00:49Z","content_type":null,"content_length":"5153","record_id":"<urn:uuid:bcca60ae-d261-4d09-adb8-0433d84f54b8>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.7/warc/CC-MAIN-20140416005215-00544-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Analysis of Resistive Circuits
Analysis of Resistive Circuits
The following text is broken into several sections. Most are simply explanatory. You may skip directly to SCAM, a MATLAB® tool for deriving and solving circuit equations symbolically if you are
not interested in the theory.
All documents condensed into one (for easy printing).
Solving a set of equations that represents a circuit is straightforward, if not always easy. However, developing that set of equations is not so easy. The two commonly taught methods for forming
a set of equations are the node voltage (or nodal) method and the loop-current (or mesh) method. I will briefly describe each of these, and mention their benefits and disadvantages. I will end
with a discussion of a third method, Modified Nodal Analysis, that has some unique benefits. Among its benefits is the fact that it lends itself to algorithmic solution -- the ultimate goal of
these pages is to describe how to use a MATLAB program for generating a set of equations representing the circuit that can be solved symbolically. If you are only interested in using that program
you may go directly to the page describing SyCiSi.
Circuits discussed herein are simple resistive circuits with independent voltage and current sources. Dependent sources can be added in a straightforward way, but are not considered here.
To apply the node voltage method to a circuit with n nodes (with m voltage sources), perform the following steps (after Rizzoni).
1. Selective a reference node (usually ground).
2. Name the remaining n-1 nodes and label a current through each passive element and each current source.
3. Apply Kirchoff's current law to each node not connected to a voltage source.
4. Solve the system of n-1-m unknown voltages.
Consider the circuit shown below
Steps 1 and 2 have already been applied. To apply step 3:
In this case there is only one unknown, v[b]. Plugging in numbers and solving the circuit we get
The node-voltage method is generally straightforward to apply, but becomes a bit more difficult if one or more of the voltage sources is not grounded.
Consider the circuit shown below.
Clearly this circuit is the same as the one shown above, with V1 and R[1] interchanged. Now we write the equations:
The difficulty arises because the voltage source V1 is no longer identical to one of the node voltages. Instead we have
Note that the last line is the same as that from the previous circuit, but to solve the circuit we had to first solve for v[a]. This procedure wasn't difficult, but required a little
cleverness, and will be a bit different for each circuit layout. Another way to handle this problem is to use the concept of a supernode, which complicates the rules for setting up the
equations (DeCarlo/Lin). However, the supernode concept handles the case of a non-grounded voltage source without any need for solving intermediate equations, as we did here.
The examples chosen here were simple but illustrated the basic techniques of nodal analysis. It also illustrated one of the difficulties with the technique, setting up equations with a floating
voltage source. The technique of modified nodal analysis, introduced later, also has no difficulties when presented with floating voltage sources.
The loop current (or mesh current) method is, not surprisingly, similar to the node voltage method. The rules below follow those in Rizzoni.
To apply the loop current method to a circuit with n loops (and with m current sources), perform the following steps.
1. Define each loop current. This is easiest with a consistent method, e.g. all unknown currents are clockwise, all know currents follow direction on current source.
2. Apply Kirchoff's voltage law to each loop not containing a current source.
3. Solve the system of n-m unknown voltages.
Example 3
Consider the circuit from Example 1, with mesh currents defined.
We can apply KVL to both loops
Since there are two equations and two unknowns we can solve by substitution or by matrix methods. To solve by matrix methods we rewrite the equations
Solving for the two unknown currents we get
While floating current sources tended to complicate the formulation of circuit equations when using the node voltage method, neither the presence of current sources or voltage sources complicates
the loop current method.
General Comments
The choice between the node voltage method and the loop current method is often made on the basis of the circuit at hand. For the example chosen, there was only one independent node but two
independent loops. Therefore the node voltage method would be expected to be easier. The situation shown below is the opposite, with two nodes, but only one loop; hence the loop current method is
For this circuit you would draw three loops, but two of them go through known current sources - so you would only need one equation. Nodal analysis would require two equations, one each for the
voltage on each side of R3.
The next document describes a modified nodal analysis (MNA) method that is amenable to computer solution.
Back Erik Cheever's Home Page
Please email me with any comments or suggestions
|
{"url":"http://www.swarthmore.edu/NatSci/echeeve1/Ref/mna/MNA1.html","timestamp":"2014-04-17T13:38:29Z","content_type":null,"content_length":"11711","record_id":"<urn:uuid:9fefd760-d1fe-4403-8daa-2b8395002b96>","cc-path":"CC-MAIN-2014-15/segments/1397609530131.27/warc/CC-MAIN-20140416005210-00444-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Posts by
Total # Posts: 27
Pre Algebre-please help!
If there is a cylinder that is 3cm. tall and has a radius of 7.5cm. what is the surface area? Please help! Thanks!
21-3c if c=7 {substitute and solve}. so... 21-3x7 21-21 =0
Lang. Arts//Reading
What is the theme of the book "Kingdom Keepers", Disney After Dark? It is the first book and it is written by Ridley Pearson.
Pre Algebra
Haha, ya! thank you so much steve!
Pre Algebra
Hi, does anybody know what a quadrilateral in which opposite sides are congruent and parallel is? It's on a crossword puzzle on my math homework and is thirteen letters long and from the beginning
the seventh letter is an E. Thanks so much to anyone who will help me out!
Pretending to be hurt.
Social Studies
Thanks Ms.Sue! I really needed an answer like yours. It was so helpful!!
Social Studies
In ancient Greece, what was the importance of the Olympics? Thanks so much to anyone who will try to help!
Sorry I wasn't entirely sure if rounding was needed. My mistake-sorry!
Hi I really hopes this helps. So from what I understand your asking for 11 times 12 divided by 15. So, 11 times 12=132 and 132 divided by 15 is 8.8 if you round then 9.
If you do a neck stretch, you can do a type of stretch on the right side of your head and then you could do that same type of stretch on the left side. And for the calf stretch you can do calf raises
with your toes pointing in and out they give a different stretch too.
Kelly hiked a total of 4 miles in 80 minutes. How fast was Kelly walking.
math percentage
1/5 of 8300 2/3 of 10,599
A student dissolves 0.550 mol of a nonvolatile, nonelectrolyte solute in one kilogram of benzene (C6H6). What is the boiling point elevation of the resulting solution? in degrees celsius
5th grade
5th grade
5th grade
Trig Identity Prove: cos(x+y)cos(x-y)=cos^2(x)+cos^2(y)-1
Required to Prove the following trig identity: (cos2x)^2 + (sin2x)^2 = 1
If cos A = 1/3 with 0 < A< pi/2, and sinB=1/4, with pi/2<B<pi. calculate cos(A+B). My answer is (-sqrt15-2sqrt2)/12 But the back of the book says the answer is (-sqrt15+2sqrt2)/12 Am I wrong or the
back? Please explain
Explain how you can transform the grap of f(x)= logx to produce g(x)=log(10nx), for any n>0.
Sorry the question is: log[(x^2+7x+12)/x^2-9)]
Hi there I need help with the restrictions on the variables of this question: Simplify. State any restrictions on the variables. log(x^2+7x+12)/log(x^2-9) So my answer is: log(x+4/x-3) which is
correct. Now for the restrictions, I have: x<-4 and x>3 However the back of t...
Use your knowledge of exponents to solve. a) 1/2^x=1/(x+2) b) 1/2^x>1/x^2 So I know that these functions are rational functions.. and I am trying to solve for x. I tried to solve them by I keep
getting stuck with the exponent 2 which is the exponential function.. Help please
The acceleration due to gravity is inversely proportional to the square of the distance from the centre of Earth. The acceleration due to gravity for a satellite orbiting 7000 km above the centre of
Earth is 8.2 m/s^2. a) write a formula for this relationship. b) at what heigh...
The angle 2x lies in the fourth quadrant such that cos2x=8/17. 1.Which quadrant contains angle x? 2. Determine an exact value for cosx 3. What is the measure of x in radians? ---------------- I know
that quadrant 4 has 2x in it, so quadrant _____ has to have x ? for part 2, th...
6th Grade Social Studies
why was there an increase in the amount of goods produced during the industrial revoluton
|
{"url":"http://www.jiskha.com/members/profile/posts.cgi?name=Halle","timestamp":"2014-04-16T17:56:56Z","content_type":null,"content_length":"10938","record_id":"<urn:uuid:31fa32c4-a879-4efe-807f-c854870d6954>","cc-path":"CC-MAIN-2014-15/segments/1397609524259.30/warc/CC-MAIN-20140416005204-00159-ip-10-147-4-33.ec2.internal.warc.gz"}
|
factoring help
How do you solve (5a+3)2-10a2-6a? Which method is used for this?
Because it is being multiplied by the 1st term (5a + 3) Try understanding this: (5a + 3)^2 - 2a(5a + 3) Let x = 5a + 3 ; then: x^2 - 2ax x(x - 2a) : kapish?
Ok... but in a(3+b)-a+b does the coefficient of a in the first term affect whether it is a complex trinomial or not, or does it depend on what numbers are in the brackets?
Which one of these is a complex trinomial a(3 + b) - a + b or 3(a+b)-a+b or are they both complex trinomials?
Are you fooling around? Neither is a trinomial. Trinomial - Wikipedia, the free encyclopedia
|
{"url":"http://mathhelpforum.com/algebra/138018-factoring-help-print.html","timestamp":"2014-04-20T04:07:48Z","content_type":null,"content_length":"9825","record_id":"<urn:uuid:e8a5400d-d677-454b-bac8-dc25620902df>","cc-path":"CC-MAIN-2014-15/segments/1397609537864.21/warc/CC-MAIN-20140416005217-00614-ip-10-147-4-33.ec2.internal.warc.gz"}
|
CURRENT RESEARCH INTERESTS
Parallel and Vector Computation for Chemical Process Design
Computer aided design in chemical engineering has emphasized numerical solution of large sets of algebraic and differential equations that describe the process. Solution techniques for such large
sets of equations usually concentrate on decomposing the problem for iterative solution. A sequential-modular decomposition for iterative solution, however, also partitions the large set of equations
in way suitable for parallel computation. In addition, the equations describing a typical module often take a vector form allowing further optimization vector computers.
Initial research on parallel and vector computation would take place in two areas. The first area would concentrate on modular process design systems. Problem decomposition for parallel processing
would be based on the process unit. In addition, process units would provide a basis for computational objects. Object oriented programming techniques would also contribute to the parallel
computational decomposition. The second area would focus on solution of transport continuum equations. Finite difference and finite element solution of PDE's both generate large sets of algebraic
equations suited to vector and parallel computation.
Visualization for Chemical Process Analysis
Rapid developments of computer hardware will cause a significant qualitative change in the chemical process design cycle. Computers play a strong role in solving the analytical equations describing
the chemical process. These results are analyzed as individual case studies in an iterative design cycle. The analysis cycle in chemical engineering design is just beginning to make significant use
of the rapidly developing capabilities of CAD systems. The ability to provide rapid graphical displays will greatly aid in the intuitive understanding of the process under analysis. Developing this
understanding would be especially important in the educational setting. The displays would illustrate how material properties influence the transport processes that control chemical processes. For
example, a proper graphic presentation can show the distinction between homogeneous and non-homogeneous materials. In addition, the display can show differences in isotropic and anisotropic transport
processes. Research in this area would closely follow the above research on parallel processing for continuum equations. The methods developed would serve an educational role in teaching transport
Multi-phase Processes in Chemical Processing
Many processes in chemical technology involve multi-phase systems in which heat and mass transfer occurs between phases. These operations depend on the particle size area resulting from the
individual particle interactions. Typical operations with multi-phase systems include coal and hydrocarbon spray combustion, emulsification and polymerization, crystallization, liquid atomization,
and fluidization. Fundamental understanding of particle interactions would be useful for understanding such processes. This research area would include both theoretical and experimental studies.
The particle size distribution of a multi-phase process depends on the individual particle dynamics averaged over the entire particle population. The population balance equation (PBE) models the
effects of particle processes such as breakage, agglomeration, entrance, and exit from a control volume. In its most general form, the PBE completely describes the particle population in
non-homogeneous systems. Solution of the PBE, however, must rely on numerical methods for most systems. The theoretical aspect of this work would concentrate on variational methods for solution of
the integro-differential PBE. In addition, Monte Carlo methods would also be considered for modeling the PBE.
Experimental studies would require development of experimental equipment to study drop breakup by high speed photography of atomizing sprays. In addition, a fluidized bed would be developed for
analysis of systems with minimal particle breakage and agglomeration. On-line instrumentation would measure and control the desired fluidization conditions. The on-line measurements would include
fluidization velocity, bed depth, bed pressure drop, and material loss rates. Additional off-line analysis of the materials fed to and collected from the fluidized bed would be performed to determine
material properties such as particle size, density, and morphology.
[Back] [Home] [Up] [Next]
Last revised on September 14, 1998 Gregory W. Smith (WD9GAY)
To comment, please email gsmith@well.com
|
{"url":"http://www.well.com/user/gsmith/current.html","timestamp":"2014-04-17T12:48:17Z","content_type":null,"content_length":"8327","record_id":"<urn:uuid:cbf7b7de-1c97-4f5a-b30d-3d7d3b8300fb>","cc-path":"CC-MAIN-2014-15/segments/1397609530131.27/warc/CC-MAIN-20140416005210-00541-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Department of Mathematics
Undergraduate Programs
Bachelor's Programs
The Mathematics Department at San Jose State University offers several Bachelor's degrees in mathematics.
The B. A. Mathematics (PDF) degree is our most flexible degree and is a good choice for students planning to go on to graduate school in mathematics and students that just like mathematics who don't
yet have a specific career goal in mind.
The B. A. Mathematics Preparation for Secondary Teaching (PDF) is designed for aspiring high school math teachers as its curriculum encompasses subject areas which satisfy the mathematics subject
matter competency required by the credential program.
The B. S. Applied Mathematics (PDF) degree is designed for students who want a career using mathematics in business, government, or industry. Students pursuing this degree must select one of three
concentrations: Concentration in Economics and Actuarial Science, Concentration in Applied and Computational Mathematics, or Concentration in Statistics.
The B. S. Applied Mathematics, Concentration in Economics and Actuarial Science (PDF) degree is designed for students who want to become actuaries and for students who want a program that integrates
business, economics, and mathematics. Actuaries are trained to analyze risk and are typically employed by insurance companies, banks, the government, and companies that handle retirement funds.
The B. S. Applied Mathematics, Concentration in Applied and Computational Mathematics (PDF) degree is recommended for students who wish to work in the research and development area of industry. This
program also prepares a student for graduate study in applied mathematics, numerical analysis, or operations research. This program provides a solid foundation in classical applied mathematics as
well as computational mathematics, which could be informally described as "how to employ mathematics on computers wisely." A graduate could seek direct employment assisting a group of scientists with
the formulation and solution of problems.
The B. S. Applied Mathematics, Concentration in Statistics (PDF) degree is designed for students pursuing a career involving the collection and analysis of numerical data, the use of statistical
techniques to predict population growth or economic conditions, the use of statistics to analyze medical, environmental, legal and social problems, or to help business managers make decisions and
carry out quality control. The statistics concentration also provides a solid foundation for students who plan to become actuaries.
Minor Programs
• Minor in Mathematics (PDF) A minor in mathematics is valuable to students majoring in science, computer science, engineering, business and the social sciences as it provides an understanding of
important concepts that have applications in those subject areas. Students majoring in physics, computer science and engineering can use support courses in their major toward a math minor. Many
students in those programs choose to complete a math minor.
• Minor in Mathematics for K-8 Teachers (PDF) This minor is designed for prospective elementary school teachers. Completion of the required courses will strengthen the ability to teach mathematics
at the K-8th grade level.
Change of Major
Students who are interested in changing their major to math should see an advisor for consultation. Please fill out this form http://sites.google.com/site/mathadvisingsjsu/home/contact-us and request
to see an advisor. After you have talked to an advisor, you need to fill out the change of major form found at the registrar's website and get department approval from Dr. Blockus.
Please note, the requirements to change your major to math are as follows:
1. A C- or better in Calculus I (Math 30 or Math 30P) or AP credit for Calculus I;
2. A C- or better in Discrete Math (Math 42);
3. A 2.0 GPA or higher in all mathematics courses taken, Precalculus (Math 19) and above.
All transfer students or students wanting to change their major to a BA Math or BS Applied Math degree who have 60+ units will be required to have completed Math 30, 31, 32, and 42 with a grade of C-
or better in each course (where Math 32 and 42 could be replaced with any other approved upper division course such as Math 129A, 133A, 161A etc.). These students will also be required to have an
overall GPA of at least 2.0 in all math courses taken, Precalculus (Math 19) and above.
|
{"url":"http://www.sjsu.edu/math/programs/undergraduate/","timestamp":"2014-04-19T12:17:41Z","content_type":null,"content_length":"22693","record_id":"<urn:uuid:2bddac76-35b8-4b3b-9c76-7e4936969fe7>","cc-path":"CC-MAIN-2014-15/segments/1397609539066.13/warc/CC-MAIN-20140416005219-00283-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Colma, CA Science Tutor
Find a Colma, CA Science Tutor
...My responsibilities included tutoring my classmates and grading homework and exams. I enjoyed helping my classmates with their challenges as math has always been one of my favorite subjects,
and I continued to help my classmates during my free time in college. Now I am happy to become a professional tutor so I can help more students.
22 Subjects: including psychology, trigonometry, algebra 1, algebra 2
...I specialize in tutoring high school mathematics, such as geometry, algebra, precalculus, and calculus, as well as AP physics. In addition, I have significant experience tutoring students in
lower division college mathematics courses such as calculus, multivariable calculus, linear algebra and d...
25 Subjects: including physics, calculus, physical science, astronomy
...I have helped children organize and structure study time to optimize study time. I have taught students successful ways to read test questions to identify what the question is asking and ways
to distinguish between multiple choice answers. I have tutored children in various study skills such as...
30 Subjects: including sociology, biology, psychology, reading
...I've helped students prepare their college personal statements. I've also run a school tutoring center as an adjunct to a high school's honor society. I have also taught theatre workshops.
15 Subjects: including psychology, English, reading, writing
...My experience can decomposed into several major directions: [1] 4 years of statistical consulting in the Bay Area, [2] extensive data mining experience, covering dozens of different projects,
[3] 6 years of applying statistical and quantitative finance methods in the industry,[4] 7 years of tutori...
9 Subjects: including biostatistics, statistics, finance, SPSS
Related Colma, CA Tutors
Colma, CA Accounting Tutors
Colma, CA ACT Tutors
Colma, CA Algebra Tutors
Colma, CA Algebra 2 Tutors
Colma, CA Calculus Tutors
Colma, CA Geometry Tutors
Colma, CA Math Tutors
Colma, CA Prealgebra Tutors
Colma, CA Precalculus Tutors
Colma, CA SAT Tutors
Colma, CA SAT Math Tutors
Colma, CA Science Tutors
Colma, CA Statistics Tutors
Colma, CA Trigonometry Tutors
|
{"url":"http://www.purplemath.com/Colma_CA_Science_tutors.php","timestamp":"2014-04-20T23:29:47Z","content_type":null,"content_length":"23819","record_id":"<urn:uuid:f44af4bb-9e36-4a66-9854-ac743bbcea7b>","cc-path":"CC-MAIN-2014-15/segments/1397609539337.22/warc/CC-MAIN-20140416005219-00160-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Saxon Publishing On-Core Mathematics Grade 6 Bundle
On-Core Mathematics
is a supplemental program that can be used with any math curriculum to provide complete coverage of the Common Core State Standards in Math.
Step-by-step instruction and modeling helps students to understand the concepts presented, while progressively difficult practice exercises continue to build fluency. Problem-solving activities are
also integrated into every lesson to help students synthesize and apply newly-learned concepts and skills.
This Grade 6 Student Workbook covers fractions, absolute value, solutions of inequalities, area, volume, statistics, percents, ratios, addition & subtraction, and more. Problems are presented in a
variety of ways, including through fill in the blanks, charts, tables, pictures, and traditional problems. Each page has the CCSS and objective listed at the top of the page.
The Teacher Edition provides parents with an easy-to-use reference and grading guide. Each lesson includes the student text page numbers, common core state standard number, objective, "essential
question," any new vocabulary words, materials needed, and prerequisites. An "About the Math" section provides a simple overview of the topic to be taught; "The Lesson" section is divided between an
introduction, teaching, and practice, with directions for writing on the board, activities, and modeling ideas. The bottom of the page features reduced-size student pages with the correct answers
overlaid in pink ink.
This On-Core Grade 6 Homeschool Kit includes:
• On-Core Mathematics Grade 6 Student Text, 192 perforated pages, softcover
• On-Core Mathematics Grade 6 Teacher's Edition, 192 pages, softcover
Homeschoolers, please note: this curriculum references assessment guides and a test generator CD-ROM. These items are not included in this homeschool kit, nor are they currently available
individually at Christianbook.com.
Customer Questions & Answers:
|
{"url":"http://answers.christianbook.com/answers/2016/product/805478/saxon-publishing-on-core-mathematics-grade-6-bundle-questions-answers/questions.htm","timestamp":"2014-04-21T02:10:15Z","content_type":null,"content_length":"73334","record_id":"<urn:uuid:685f671b-cb3c-4a1f-97d1-3b4024957adf>","cc-path":"CC-MAIN-2014-15/segments/1397609539447.23/warc/CC-MAIN-20140416005219-00076-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Pressure Drop Calculation
Steam Sizing Chart
Control V a lves Globe Control V a lves Steam Sizing Chart In the interests of development and improvement of the product, we reserve the right to change the specification.
ball valve pressure drop calculation | Ball Valves
Most recent searches . Plast-O-Matic on-line pressure drop calculator Calculate Delta P or pressure drop across a valve using GPM and supplied Cv.
A model for calculation of steam injector performance
A model for calculation of steam injector performance N. Deberne a, *, J.F. Leone a, A. Duque b, A. Lallemand a a CETHIL, UPRESACNRS 5008, INSALyon, 20 avenue Albert Einstein ...
A model for calculation of steam injector performance
A model for calculation of steam injector performance N. Deberne a, *, J.F. Leone a, A. Duque b, A. Lallemand a a CETHIL, UPRESACNRS 5008, INSALyon, 20 avenue Albert Einstein ...
Steam distribution
2 Introduction Steam distribution Steam system basics The steam distribution system is an important link between the central steam source and the steam user.
Pressure Drop in Steam Pipes - Engineering ToolBox
Steam pipes and pressure drop diagrams - imperial and metric units
Pipe flow rate calculation | calculate pipe pressure drop with ...
Pipe flow rate calculation. Calculate pipe pressure drop with calculator
Steam Pipe Pressure drop Calculator - Engineering ToolBox
Steam Pipe Pressure drop Calculator Calculate pressure drop in steam distribution pipe lines . Sponsored Links . The pressure drop in saturated steam distribution ...
Hoffman Specialty - Steam Specialties Calculators
Allows easy calculation of flash steam loss and associated energy cost. Values from Properties of Saturated Steam Tables automatically considered no need ...
EVEREST . Leaders in Blower Technology MVR Everest Transmission 1 VAPOR RECOMPRESSION TO RECOVER LOW PRESSURE WASTE STEAM Increasing ...
Replace Pressure-Reducing Valves with Backpressure Turbogenerators ...
Replace Pressure-Reducing Valves with Backpressure Turbogenerators: Industrial Technologies Program (ITP) Steam Tip Sheet #20
STEAM - Steam Pipe Pressure drop Calculator
Calculate pressure drop in steam distribution pipe lines ... Steam and Condensate Systems and Applications! - Resources and Tools for Steam and Condensate Engineering and ...
pressure drop
www.hydrocarbonengineering.com* Reprinted*from* February 2009*** Hydrocarbon EnginEEring * pressure drop p late heat exchangers (PHE) contribute to considerable energy savings ...
High-Pressure Steam Sterilizers thermally- (heat-) regulated valve to close. Once the valve is closed, the steam continues to build up pressure until the operating ...
1 of 5 2000 HydraulicSupermarket.com VELOCITY AND PRESSURE DROP IN PIPES Velocity The velocity of hydraulic fluid through a conductor (pipe, tube or hose) is dependent on ...
Steam Assisted Desuperheater
Data sheet D526.06/1en Desu p erheater We Solve Control Valve Problems DA-90SE Steam Assisted Desuperheater High reliability Large control range - up to 50:1 independent of the ...
Steam Conditioning Manual-01-2008 PDFC
- 2 - SUMMARY 0. INTRODUCTION 1. SELECTION CRITERIA 1.1. DESUPERHEATING CONTROL SYSTEM SELECTION 1.1.1. Enthalpic calculation (feedforward control loop) 1.1.2.
Pressure drop
Pressure drop and flow rate calculator can be used for pressure drop and flow rate calculation for all newtonian fluids with constant density.
Steam Valve Calculator - Animated 3D HVAC Graphics from ControlPix
Interactive steam valve calculations, pressure drop, Cv, capacity ... Steam Valve Cv Calculator Brought to you by ControlPix, your source for high quality, 3D, animated HVAC ...
Pressure Drop Calculations : Pipe Flow Software For Pressure Drop ...
Pipe Flow Network Analysis Software for calculating flows and pressure drop in pipe systems, including ... Webbased calculation and information services. The available ...
Flare Tip Pressure
Considerations on Flare Parameters by Otis Armstrong, Licensed Professional Engineer [1] Summary This document details a thermodynamically sound method for calculation of ...
GD Star Rating
Pipe pressure drop calculation formula trend: SF Pressure Drop ...
Selection of software according to Pipe pressure drop calculation formula topic.
pressure drop across reducer calculation free PDF ebook downloads. eBooks and manuals for Business, Education,Finance, Inspirational, Novel, Religion, Social, Sports ...
Aspen Plus PFR Reactors Tutorial using Styrene with Pressure Drop
1 Aspen Plus PFR Reactors Tutorial using Styrene with Pressure Drop Spring 2008 In this laboratory we will incorporate pressure-drop calculations into our Aspen Plus reactor ...
|
{"url":"http://www.cawnet.org/docid/steam+pressure+drop+calculation/","timestamp":"2014-04-20T17:21:10Z","content_type":null,"content_length":"56102","record_id":"<urn:uuid:8d28575b-390c-4311-b087-5c874e55b8c4>","cc-path":"CC-MAIN-2014-15/segments/1398223210034.18/warc/CC-MAIN-20140423032010-00008-ip-10-147-4-33.ec2.internal.warc.gz"}
|
The prime number lottery
Issue 27
November 2003
On a hot and sultry afternoon in August 1900, David Hilbert rose to address the first International Congress of Mathematicians of the new century in Paris. It was a daunting task for the 38 year-old
mathematician from Göttingen in Germany. You could hear the nerves in his voice as he began to talk. Gathered in the Sorbonne were some of the great names of mathematics. Hilbert had been worrying
for months about what he should talk on. Surely the first Congress of the new century deserved something rather more exciting than just telling mathematicians about old theorems.
So Hilbert decided to deliver a very daring lecture. He would talk about what we didn't know rather what we had already proved. He challenged the mathematicians of the new century with 23 unsolved
problems. He believed that "Problems are the life blood of mathematics". Without problems to spur the mathematician on in his or her journey of discovery, mathematics would stagnate. These 23
problems set the course for the mathematical explorers of the twentieth century. They stood there like a range of mountains for the mathematician to conquer.
As the century came to a close, all the problems had essentially been solved. All except one: the Riemann Hypothesis. The Everest of all Hilbert's problems. It was in fact Hilbert's favourite
problem. When asked what would be the first thing he would do if he were brought to life again after 500 years he said "I would ask whether the Riemann Hypothesis has been proved". All indications
are that the answer might still be "no" for it appears to be one of the hardest problems on the mathematical books.
As we entered the new millennium mathematicians decided to repeat Hilbert's challenge. In issue 24 of Plus, we saw How maths can make you rich and famous - by solving one of the seven Millennium
Prize Problems. The Riemann Hypothesis is the only problem from Hilbert's list that is also on this new list. So read on - this article might be your passport to becoming a millionaire.
The Pattern searcher
One of the great pattern searchers: Leonardo Fibonacci
The Riemann Hypothesis goes to the heart of what it means to be a mathematician: looking for patterns. The search for patterns is perfectly captured by those challenges that invariably come up in the
classroom: find the next number.
Here are a list of challenges for you to have a go at. Can you find the patterns and fill in the next numbers?
1 3 6 10 15 ?
1 1 2 3 5 8 13 ?
6 19 29 32 34 39 ?
2 3 5 7 11 13 17 19 ?
The first two probably presented little problem. The first sequence is known as the triangular numbers. The Nth number in the sequence records the number of stones in a triangle with N rows or in
other words, the sum of the first N numbers: 1+2+...+N.
The second sequence is one of Nature's favourite sequences. Called the Fibonacci numbers after the thirteenth century mathematician who first recognized their importance, each number in the sequence
is got by adding together the previous two numbers. Invariably the number of petals on a flower is a number in this sequence.
The third sequence was probably a little more challenging. Indeed, if you were able to predict that 46 was the next number in this sequence, I would recommend you buy a lottery ticket next Saturday.
These were the winning lottery numbers on the 22 October 2003.
The last sequence is of course the sequence of prime numbers, the indivisible numbers that can only be divided by themselves and one. It is trying to explain the sequence of prime numbers that the
Riemann Hypothesis is all about.
Faced with a sequence of numbers like these, numerous questions spring to the mathematical mind. As well as the challenge of finding the rule to predict the next number, mathematicians are also keen
to understand if there is some underlying formula which can help generate these numbers: Is there a way to produce the 100th number on this list without having to calculate the previous 99?
In the case of our sequences, the first two do indeed have formulas that will generate the sequence. For example, to get the 100th triangular number you just have to set
But when you look at the sequence of prime numbers they seem to share more in common with the numbers from the National Lottery. It seems very difficult to predict when the next prime will appear,
let alone produce a formula that will tell you that the 100th prime is 541.
Despite the random nature of the primes they have a universal character. The National Lottery numbers have nothing special about them and change from one week to the next. The primes are numbers that
have been there for eternity, set into the fabric of the universe. The fact that 541 is prime is a fact that seems stamped into the nature of the Universe. There maybe a different chemistry or
biology on the other side of the cosmos but 541 will still be prime. This is why many science fiction writers (for example, Carl Sagan, in his classic novel, Contact, also made into a film of the
same name) have chosen primes as the way alien life will communicate with Earth. There is something so special about this sequence of numbers that we couldn't fail to notice the prime number beat if
it came pulsating through the cosmos from a distant galaxy.
Aliens may have discovered the primes millions of years ago, but what is the first evidence of humankind listening to the prime number beat? Some people have suggested that the first culture to
recognize the primes lived over 8,000 years ago. Archaeologists discovered a bone now called the Ishango bone in Central Equatorial Africa, which has three columns of notches carved down the side.
The bone seems to be mathematical in nature. And in one column we find the primes between 10 and 20. But others have argued that these bones are related to keeping track of dates and it is the random
nature of the primes which means that a list of dates could well be a list of primes. (You can find out more about the Ishango bone at Ishango: The exhibition.)
The first culture to truly understand the significance of the primes to the whole of mathematics was the Ancient Greeks. They realised that the primes are the building blocks of all numbers. Every
number can be built by multiplying together prime numbers. They are the atoms of arithmetic. Every subject has its building blocks: Chemistry has the Periodic Table, a list of 109 elements which
build matter; physicists have the fundamental particles which range over weird things as quarks and gluons; biology is seeking to sequence the human genome which is the building kit for life.
But for thousands of years mathematicians have been listening to the primes, the heart beat of mathematics, unable to make sense or to predict when the next beat will come. It seems a subject wired
by a powerful caffeine cocktail. It is the ultimate tease for the mathematician: mathematics is a subject of patterns and order and symmetry, yet it is built out of a set of numbers that appears to
have no rhyme or reason to them. For two thousand years we have battled to understand how Nature chose the atoms of arithmetic.
There is of course the possibility that, as with the atoms of chemistry, there are only 109 primes which can be used to build all numbers. If that were true, we wouldn't need to worry about looking
for patterns to predict primes because we could just make a finite list of them and be done. The great Greek mathematician Euclid scuppered that possibility. In what many regard as the first great
theorem of mathematics, Euclid explained why there are infinitely many prime numbers. This proof is described in A whirlpool of numbers from Issue 25 of Plus. Gone, then, is the chance to produce a
list containing all the prime numbers like a Periodic Table of primes or prime genome project. Instead we must bring our mathematical tools to bear to try to understand any patterns or structure in
this infinite list.
Perhaps the primes start off rather unpredictably and then settle down into a pattern. Let's have a look at the primes around 10,000,000 and see whether here a pattern emerges, whether we might find
a formula for predicting the ebb and flow of the primes.
There are 9 primes in the 100 numbers before 10,000,000:
• 9,999,901,
• 9,999,907,
• 9,999,929,
• 9,999,931,
• 9,999,937,
• 9,999,943,
• 9,999,971,
• 9,999,973,
• 9,999,991.
But look how few there suddenly are in the hundred numbers after 10,000,000:
The primes seem to come along like buses: first a great cluster of primes and then you have to wait for ages before the next ones appear. It seems hopeless to find a formula that would spit out this
strange list or that would tell us that the 664,571th prime is 9,999,901.
Euclid, who discovered that there are infinitely many primes
Try a little experiment. Take the list of primes around 10,000,000 and try to memorize them. Switch off the computer and see how well you do at reproducing the list. Most of us will try to create
some underlying pattern to help us memorize the sequence. Effectively our brains end up trying to store a shorter program that will create the sequence. A good measure of the random character of
these numbers is the fact our brains find it hard to construct a program that is significantly shorter than just memorizing the sequence outright.
It's like the difference between listening to a tune and listening to white noise. The inner logic of the tune allows you to whistle it back after a few times, while the white noise gives you no
clues as to where to whistle next. The magic of the primes is that, despite first hearing only white noise, a cultural shift into another area of mathematics will reveal an unexpected harmony. This
was Gauss's and Riemann's great insight. Like western ears listening to the music of the east, we will require a different perspective before we understand the pattern responsible for this
The lateral thinker
What mathematicians are good at is lateral thinking. Professor Enrico Bombieri, a professor in Princeton, expresses what one should do when facing an insurmountable mountain: "When things get too
complicated, it sometimes makes sense to stop and wonder: Have I asked the right question?"
It was a fifteen year old boy called Carl Friedrich Gauss who started to change the question. Gauss became an overnight star in the scientific community at the beginning of the nineteenth century
thanks to a tiny tumbling rock. The first day of the new century had started auspiciously with the discovery of what many regarded as a new planet between Mars and Jupiter. Christened Ceres, its path
was tracked for several weeks when suddenly astronomers lost it as it passed behind the sun. But Gauss was up to the challenge of finding some order in the data that had been collected. He pointed at
a region of the sky that astronomers might be expected to see Ceres. Sure enough, there it was.
Carl Friedrich Gauss, 1803
But it wasn't only in the stars that Gauss loved to find order. Numbers were his passion. And of all the numbers that fascinated Gauss, the primes above all were the jewels that he wanted to
understand. He had been given a book of logarithms as a child which contained in the back a table of primes. It is somewhat uncanny that the present contained both, because Gauss managed to find a
connection between the two.
Gauss decided he would see whether he could count how many primes there were rather than just trying to predict which numbers were prime. Here was the lateral move that would eventually unlock the
secret of the primes. He asked: What proportion of numbers are prime numbers? He discovered that primes seemed to get rarer as one counted higher. He made a table recording the changing proportions.
│ │ Number of │ On average, │
│ │primes from 1 up │ how many numbers │
│ N│ to │you need to count │
│ │ referred to │before you expect │
│ │ as │ a prime number │
│ 10│ 4│ 2.5│
│ 100│ 25│ 4.0│
│ 1,000│ 168│ 6.0│
│ 10,000│ 1,229│ 8.1│
│ 100,000│ 9,592│ 10.4│
│ 1,000,000│ 78,498│ 12.7│
│ 10,000,000│ 664,579│ 15.0│
│ 100,000,000│ 5,761,455│ 17.4│
│ 1,000,000,000│ 50,847,534│ 19.7│
│10,000,000,000│ 455,052,511│ 22.0│
So, for example, 1 in 6 numbers around 1,000 are prime.
Since the primes look so random, perhaps tossing a dice might provide a good model of how the primes were distributed. Maybe Nature used a prime number dice to choose primes around 1,000, "PRIME"
written on one side and the other five sides blank. To decide if 1,000 was prime, Nature tossed the dice to see if it landed on the "PRIME" side. Of course this is just a heuristic model. A number is
prime or it isn't. But Gauss believed that this "prime number dice" might produce a list of numbers with very similar properties to the true list of primes.
How many sides are on the dice as we check the primality of bigger and bigger numbers? For primes around 1,000, Nature appears to have used a six-sided dice; for primes around 10,000,000 one needs a
15-sided-dice. (So there is a 1 in 15 chance that a London telephone number is prime.) Gauss discovered that the tables of logarithms at the beginning of his book containing the tables of primes
provided the answer to determining how many sides there are on the prime number dice.
Look again at Gauss's table counting the number of primes. Whenever Gauss multiplied the first column by 10, then the last column, recoding the number of sides on the prime number dice, goes up by
adding approximately 2.3 each time. Here was the first evidence of some pattern in the primes. Gauss knew another function which performed the same trick of turning multiplication into addition: the
logarithm function.
It was a 17th century Scottish baron, John Napier, who first discovered the power of the logarithm as an important function in mathematics. Napier was generally regarded to be in league with the
devil as he strode round his castle with a black crow on his shoulder and a spider in a little cage muttering about the predictions of his apocalyptic algebra. But he is best remembered today for the
invention of the logarithm function.
If you input a number
So, for example,
Whenever I multiply the input by 10 then I add 1 to the output.
But we needn't choose the number 10 to raise to the power x. That's just our obsession with the fact that we have ten fingers. Choosing different numbers gives logarithms to different bases. Gauss's
prime number dice function goes up by 2.3 every time I multiply by 10. The logarithm behind this function is to the base of a special number called e=2.718281828459... .
Gauss guessed that the probability that a number N is prime is 1/log(N) where log is taken to the base e. This is the probability that a die with log(N) sides lands on the "PRIME" side. Notice that
as N gets bigger, log(N) gets bigger and the chance of landing on the prime side gets smaller. Primes get rarer as we count higher.
If Nature tosses the prime number dice 100,000 times, how many primes will you expect to get with these dice with varying numbers of sides? If the die has a fixed number of sides, say 6, then you
expect 100,000/6 which is the probability 1/6 added up 100,000 times. Now Gauss is varying the number of sides on the die at each throw. The resulting number of primes is expected to be
The staircase of the primes versus Gauss's guess Li(N)
Gauss refined this guess at the number of primes into a function called the logarithmic integral, denoted by Li(N). How good is Gauss's guess compared to the real number of primes? We can see the
difference in the graph to the left. The red line is Gauss's guess using his prime number dice; the blue one records the real number of primes.
Gauss's guess is not spot-on. But how good is it as we count higher? The best measure of how good it is doing is to record the percentage error: look at the difference between Gauss's prediction for
the number of primes and the true number of primes as a percentage of the true number of primes.
│ │ │How far Gauss's guess │ │
│ │ Number of primes │ Li( │ │
│ N │ │ the number of primes │Percentage error:│
│ │ 1 up to │ less than │ │
│ │ │ Li( │ │
│ 100│ 25│ 5│20 │
│ 1,000│ 168│ 10│5.95 │
│ 10,000│ 1,229│ 17│1.38 │
│ 100,000│ 9,592│ 38│0.396 │
│1,000,000│ 78,498│ 130│0.166 │
│ 10^7│ 664,579│ 339│0.051 │
│ 10^8│ 5,761,455│ 754│0.0131 │
│ 10^9│ 50,847,534│ 1,701│0.00335 │
│ 10^10│ 455,052,511│ 3,104│0.000682 │
│ 10^11│ 4,118,054,813│ 11,588│0.000281 │
│ 10^12│ 37,607,912,018│ 38,263│0.000102 │
│ 10^13│ 346,065,536,839│ 108,971│0.0000299 │
│ 10^14│ 3,204,941,750,802│ 314,890│0.00000983 │
│ 10^15│ 29,844,570,422,669│ 1,052,619│0.00000353 │
│ 10^16│279,238,341,033,925│ 3,214,632│0.00000115 │
Gauss believed that as we counted higher and higher the percentage error would get smaller and smaller. He didn't believe there were any nasty surprises waiting for us. His belief became known as:
Gauss's Prime Number Conjecture: The percentage error gets smaller and smaller the further you count.
There is a lot of evidence here, but how can we really be sure that nothing weird happens for very large N?
In 1896 Charles de la Vallée-Poussin, a Belgian, and Jacques Hadamard, a Frenchman, proved that Gauss was correct. But here is a warning that it is far from obvious that patterns will persist. Gauss
also thought that his guess would always overestimate the number of primes. The evidence from the tables looks overwhelming. But in 1912 Littlewood, a mathematician in Cambridge, proved Gauss was
wrong. However the first time that Gauss's guess underestimates the primes is for N bigger than the number of atoms in the observable universe - not a fact that experiment will ever reveal.
Georg Friedrich Bernhard Riemann
Gauss had discovered the "Prime Number Dice" used by Nature to choose the primes. They were dice whose number of sides increased as larger and larger primes are chosen. The number of sides grew like
the logarithm function. The problem now was to determine how this dice landed. Just as a coin rarely lands exactly half heads and half tails, Gauss still didn't know exactly how this dice was
It was Gauss's student Riemann who discovered that music gave the best explanation of how to get from the graph showing Gauss's guess to the true graph of the number of primes. As we shall discover
in next issue's instalment, Riemann's music could explain how Nature's dice really landed.
About the author
Professor Marcus du Sautoy is an Arsenal fan. His favourite primes are featured on the cover of his book The music of the primes (reviewed in last issue of Plus).
There is the chance to explore more about the exciting world of prime numbers in the interactive website www.musicoftheprimes.com. You can build a prime number fantasy football team and get it to
play other teams. Will you be top of the prime number premiership?
And why not take the opportunity to do an experiment with prime number cicadas? By choosing different life cycles for the cicada and the predator, the game shows why primes became the key to the
cicadas survival.
There is also the chance to contribute to the Prime Number Photo Gallery. Do you have a picture of your favourite prime number? Have you seen a prime number somewhere strange - like the date on a
building? Can you find pictures for the missing primes in our gallery?
And there are more details about how to win a million dollars and prove the Riemann Hypothesis.
Submitted by Anonymous on November 2, 2013.
Unfortunately prime count with Li(x) and even R(x) Riemann zêta function make a very roughly count of prime with large counting deviation.
Here is a new table count for curious:
Table du décompte des nombres premiers avec Go(X) et les écarts de calcul par rapport à pi(x).
X Go(x) pi(x) -Go(x) @
1,E+01 3,9 0,1
1,E+02 24,6 0,4
1,E+03 167,7 0,3
1,E+04 1 228,4 0,6
1,E+05 9 592,6 -0,6
1,E+06 78 498,6 -0,6
1,E+07 664 578,1 0,9
1,E+08 5 761 454,3 0,7
1,E+09 50 847 534,5 -0,5
1,E+10 455 052 511,9 -0,9
1,E+11 4 118 054 813,4 -0,4
1,E+12 37 607 912 018,1 -0,1
1,E+13 346 065 536 838,3 0,7
1,E+14 3 204 941 750 802,4 -0,4
1,E+15 29 844 570 422 668,4 0,6
@ gaston ouellet
This generalized count is realized with ( X2^2 - X1^2) / ln( X2^2). Fanstastic.
Submitted by Anonymous on May 19, 2013.
An inspired guess about how Hilbert chose the subject of his famous lecture of 1900! In fact, by then Hilbert's reputation was matched only by that of Henry Poincaré among his contemporaries, and
that is why he was chosen by his colleagues to address the momentous topic of that lecture.
Submitted by Anonymous on December 2, 2012.
I have already commented on how the above problem could be solved using the linear algebra of a hyperspace of even dimension and a discrete version of the Legendre Transformation involving higher
derivatives of the intercept with respect to the gradient in 2D space by the process of a descent to the lower dimension and ascent to the higher dimension without loss of information. It is almost
self evident that a successor prime must depend exclusively on its antecedents. [the number two [2] is in an excluded class of its own, itself].
The difficulty I have with the Riemann zeros is that I do not understand their significance in relationship to the sequence: [3, 5, 7, 11, ... ]. Perhaps if the Riemann zeros were computed in a for
series where every even number was suppressed, then this would clarify matters.
I have an e-mail address, if someone wants to communicate with me on the matter of hyperspaces, Legendre Transformations and dimension reductions. This is: wpgshaw@hotmail.com
Submitted by Anonymous on December 2, 2012.
Further to my comment that the sequence of primes can be thought of as a series of points that are solutions to an equation such as [p[1] + p[2] + p[3] - [p[1] + p[2] + p[3]] = 0, where the
co-ordinates are: [p[1], p[2], p[3], - [p[1] + p[2] + p[3]] forming a polygonal arc in a hyperspace of even dimension: [2, 4, 6, 8, ... ], four in this case, the problem being treated as an exercise
in linear algebra [the dimension 2D case is degenerate] and a discrete version of the Legendre Transformation, since there is no loss of information in lowering the dimension, it should be possible
to reconstruct the 4D, 6D, ... arc from discrete 2D finite functions, derived from the hyperspace in the first instance. If the 2D functions of intercept vs gradient or higher derivatives are
regular, then these 2D functions can be augmented and then reconstructed in the hyperspace so allowing more points in the hyperspace to be found and hence more prime numbers computed. In the case of
a 6D space, the sequence would read: [3, 5, 7, 11, 13, - 39], [5, 7, 11, 13, 17, - 53] , ... Additionally, the fact that uneven integers differ by a multiple of two and the two categories: [4k + 1]
and [4.k + 3] can be built into the maths. I do not know that this approach has ever been used elsewhere.
Submitted by Anonymous on December 2, 2012.
This was said to be unsolvable by a no less eminent scientist than G.B. F. Riemann. However, if ordered sets of the prime sequence are set out as another sequence:
[3, 5, 7, - 15], [5, 7, 11, - 23], [7, 11, 13, - 31], ... and these sets are treated as points in a four dimensional hyperspace, then the sequence becomes a polygonal arc in 4D. Using linear algebra,
finding the direction ratios and intercepts of the line segments produced of the arc, starting from the first two points provides a means of lowering the dimension of the space by unity at a time,
but with no loss of information about the prime sequence. This is because on the axis planes, where intercepts are located, one of the co-ordinates is zero and the other remaining co-ordinates become
a point in a space of dimension, one less. Finally in 2D space, the intercepts of finite discrete functions of the gradients or higher derivatives might have regularity, or a way of producing
regularity might be found. This exercise can be done in any even dimension hyperspace: [4, 6, 8, ... ]
There are four co-ordinates here because of a constraint, that is the sum of an even number of uneven integers is an even integer [zero included]. The solution to the subject appears to reduce to an
exercise in linear algebra and a discrete version of the Legendre transformation.
Submitted by Anonymous on January 14, 2011.
The Croft Spiral Sieve, a very efficient multiplicative sieving algorithm derived from the set of all numbers not divisible by 2, 3, & 5, reveals the perfectly symmetrical deterministic process,
employing 8 progressively expanding "chord" patterns, whereby the final post-factorization result is all primes >5 up to a given n: www.primesdemystified.com
Submitted by Anonymous on November 9, 2010.
Professor Marcus du Sautoy :
I am a Chinese student,
I am reading your book ----- ---- translated into chinese
is very nice,and I am reading the book of wrote by John Ddebyshire,
PRIME OBSESSION is easy to understand ,but your book is more widely and deeply in prime,I cound feel you love the
mathsmatic so much and understand mathsmatic intelligence.
but how to get the 1/2+14.174725i etc?
can you give me some web site about how to get the 1/2+14.174725i ?
|
{"url":"http://plus.maths.org/content/prime-number-lottery","timestamp":"2014-04-18T15:39:20Z","content_type":null,"content_length":"69754","record_id":"<urn:uuid:220748a5-0458-43e6-bf47-bae40a5b9eee>","cc-path":"CC-MAIN-2014-15/segments/1397609537308.32/warc/CC-MAIN-20140416005217-00474-ip-10-147-4-33.ec2.internal.warc.gz"}
|
A Polyhedral Cell Model
Over the past few months, starting on 1 April, I’ve been presenting the theory that when cells grow and divide, they maintain a constant ratio of surface area to volume. This is an unorthodox view.
I’ve managed to show this numerically, and I also now have the mathematical proof (not yet published) that it’s possible for a cube to double in volume and divide in half to produce two perfect
replicas of the original cube (which may also be the 2500 year-old Delian Riddle). I’ve also been able to double a sphere in the same way, but I haven’t got a mathematical proof of that yet. [Update
Nov 2012: Both proofs can be found here]
There are several virtues in having a constant A/V ratio. First of all, the cell never starves, because its surface area never grows too small relative to its volume, as happens if it just swells in
all directions. Secondly, in order to grow and divide, the cell only has to make new internal material and external membrane in the same constant ratio, and not in some complex manner. And thirdly,
this method of growth and division is (probably) a Least Action variant of all the numbers of ways in which a cell might grow and divide.
If it were possible (and it may be possible) I would build a model of of such a cell, using a polythene bag of water. But I can’t, because while it would be easy to add volume to the bag of water by
pouring more water into it, I don’t know how to add surface area to such a bag by adding more polythene.
So instead, I’ve set out to construct a computer simulation model of a bag of water. And I started with an icosahedron, a regular solid which has 20 triangles on its surface. And I then sub-divided
each of the triangles into 4 triangles, and pushed the vertices of these triangles radially outwards from the centre of the icosahedron until they lay on the surface of the sphere that encompasses
the icosahedron, to create an 80-faced polyhedron. And then I repeated the process, to produce a 320-faced polyhedron.
This 320-faced polyhedron replicates Buckminster Fuller’s very first geodesic dome.
I’m able to show this in 3D colour because about 10 years ago I got interested in 3D graphics, and wrote my own code to do it, so I used that code to produce these images.
Because this is going to be a physical model, I then introduced point masses at each vertex of the polyhedron, and connected the point masses with ties with a known coefficient of elasticity. I’d
written the code to model the dynamic behaviour of such structures nearly 20 years ago with my orbital siphon model, so I’ve used that old code to do it.
I now had a spherical polyhedron made up of point masses connected by ties. I wondered how it would behave when something happened to it. So I dropped it onto a floor from a height of about 4 times
the diameter of the polyhedron. When it hit the floor, the underside of the polyhedron buckled, but the upper portion retained surprising integrity. And when I looked at the underside, I found that
the buckled faces had produced a concavity, much like those you see on ping pong balls that have been hit too hard. This seemed very plausible. And if I strengthened the ties, and dropped it from a
lesser height, the polyhedron even bounced a little. Which was also plausible.
Thinking that the multi-coloured polyhedron’s buckled surface was a bit hard to read, I tried to introduce simple shading of the faces, so that ones facing upwards were white, and ones facing
downward were black, with shades of grey on the ones in between. But I didn’t get it quite right, somehow (see right). Why are two of the faces so bright? I don’t know. So for now I’m sticking to my
multi-coloured polyhedra.
The next thing I want to do is fill the polyhedral cell with water, and work out the pressure on its surfaces, and hence the force acting on each one of the point masses on the polyhedron’s 320
vertices. This will be new physics for me, so I’ve been poring over my physics textbooks hydrostatics pages trying to figure out how to do it. I think I’ve more or less worked it out, although I’m
not sure if I should have water outside the cell as well as inside. I also have to use the bulk elasticity of water to calculate the force it exerts when it’s squeezed.
And when I’ve got that, and also worked out the surface area and volume of the polyhedron, I’ll have a tiny little water-filled sphere.
Which I can then try to grow. I still haven’t quite figured out how I’m going to do this. But I’ve started out by stretching first stretching the equator of the polyhedron so that it became
sausage-shaped, and then pinching the waist of the stretched bit. This was interesting to watch, because when the waist became very narrow, it was insufficiently strong to prevent the two halves
folding apart (which is what would happen when this cell divides).
Interestingly, this cell looks very like some of the cancer cells I’ve been looking at. And I also managed, by a happy accident, to create the other kind of ‘spiky’ cancer cells I’ve seen.
So it looks like I’m going to have no trouble producing cancer cells. I might even manage to produce a working model of the rather astonishing HeLa cells which spring apart when they divide.
What looks like it’s going to be hardest to do is to reproduce the growth and division of normal spherical cells. At least, I’m supposing that normal cells start out spherical.
Until this Wednesday, I was just thinking about cells as little bags of water. I’m not interested (yet) in what might be going on inside them. That’s to say I’m not interested in DNA or the nucleus
or mitochondria of the endoplasmic reticulum and all the rest of the crazy things that are inside real cells. I’m not a biologist or a chemist or a geneticist, and so I don’t know what all these damn
things are. As far as I’m concerned right now, my cells are just full of goo – or will be when I’ve filled them with water. I’m only interested in the geometry and physics of cells. But on Wednesday
I got to thinking how the geometry might extend inside the cells, with a surprising result, which I’ll write something about sometime.
I’m also somewhere near the limits of my mathematical skills. I’m doing all this using my tried and tested trigonometry. But I’m beginning to think that I need to upgrade to vector algebra. I was
actually once taught vector algebra, but I never quite ‘got it’. That’s how it is with a lot of my mathematics. I use a simple instruction set, a bit like a carpenter who’s only got a hammer and a
screwdriver in his toolbox, and hasn’t got any of the fancy electric drills and power saws and stuff that most carpenters have. But you can do one hell of a lot with just a hammer and a screwdriver.
All this may seem a long way from smoking and smoking bans, but it’s not really. Because smoking, as everybody knows, causes cancer. And I’ve already got a little bit of a handle on cancer. And maybe
even a better one than the ‘experts’ in CRUK. They’ve had 70 years of getting nowhere, and blaming it all on smoking. It’s time for somebody else to have a go at the problem.
18 Responses to A Polyhedral Cell Model
1. Very, very tricky stuff, Frank. Too late in the night to really think about it. But a thought popped into my mind as I read your piece, which was Einstein’s ‘picture’ of a cloud of dust being
compressed and elongated by the effects of gravity, as the cloud was pulled towards the gravitational source.
I don’t really know what I am talking about since I am not clever enough to gain more than a rough understanding.
What is important is that structures such as ‘the cloud of dust’ obey very weird mathematical ‘rules’ which are not Euclidean.
I cannot help but feel that body cells also have very weird mathematics, which depend an awful lot on the conglomeration of massive electrical forces, gravitational forces, etc, etc. It may be
that body cells are anything but spherical! It may be that the human body would have never survived had its cells been spherical!
□ I think a lot about the nuclear physicists. I don’t really understand their model of the atom, but I can see what they were trying to do, as I’m sure you can too. In some ways, I’m trying to
produce the same thing, at the (much larger) cellular dimension.
I agree that the mathematics are kinda “weird”.
2. If you do move toward publication Frank, you might consider the journals of PLoS as a venue:
□ I’m consulting Leg-iron about this. As far as I know, he’s a microbiologist with about 100 publications behind him. Unlike a little cunt like me who’s got almost got zero publications behind
3. You might find this interesting.
□ Thanks for that. I had a bit of trouble understanding what they were trying to do. They seem to have produced a mathematical model of cell growth rates in relation to cell volume. And
concluded that, in mammalian cells, “cell growth rates appear to be approximately proportional to cell volume.” Which is interesting.
4. Hmm First April….?
On reading the first few paragraphs, I though immediatly of how the universe expands. (Modern THEORY wise). AND have you thought of applying the Mandelbrot theory into the equation?
(If you have already, then ignore the last. :-) )
□ Well, yes, I know that the date is worrisome. But it just happened to be the day that I wrote it. There are lots of other days like that. Like Christmas Day, or the Ides of March, or the 22nd
of November. If I was to pay attention to the connotations attached to any particular date, I’d never write anything.
And I don’t see what Mandelbrot has got to do with this. Perhaps you can explain that to me.
☆ XX Frank Davis says:
And I don’t see what Mandelbrot has got to do with this. Perhaps you can explain that to me. XX
Well, not being anything like a maths expert, but I FEEL in things regarding expansion, such as the universe, and your sentence “The next thing I want to do is fill the polyhedral cell
with water, and work out the pressure on its surfaces, and hence the force acting on each one of the point masses on the polyhedron’s 320 vertices.”, the Mandelbrot, theory has much to
HOW that could be applied is, because of my abismal mathematical skills, a mystery, but as I say, it is a FEELING that it could be usefull…..? Just a thought…. :-§
5. Can’t contribute – too ignorant and mathematically thick – but it’s fascinating, and your models are very pretty and I think it’s dead clever!
□ Yeah, they are pretty, aren’t they? I hope to publish a helluva lot more of them in the months to come.
☆ Frank Im just a simple redneck always looking for an edge. You think you could make one of those dern polysided balls with numbers on it like dice and makem hit point everytime. I will
cut ya in on the proceeds.
Other than that I got lost at hello!
○ Nice idea. I’ll see if I can do one for you. It’ll have numbers on every face, going from 1 to 320.
Not sure how you’d tell which one came out on top though…
○ Electron microscope perhaps lol
6. Weil should see this if anyone has his addy:
Dutch shipbuilder to fine smoking workers
October 6, 2012 – 16:20 AMT
PanARMENIAN.Net – Dutch shipbuilder IHC Merwede is to fine workers who light up a cigarette outside official breaks €100 for each offence, RIA Novosti reported citing Dutch media outlets.
Repeat offenders may miss out on their profit share entitlements of some €2,400.
The plan has been approved by the company’s works council. Merwede has a workforce of 3,000.
The measure has been brought in because the company wants a healthier workforce, especially now the state pension age is being increased to 66. The company also wants to stop smokers taking more
breaks than non-smokers.
□ Hmm… I could roll some of my 15″ long ones for them….
7. Stanton Glantz released the Helena study on April 1st and my first reaction to it was that it was probably an April Fools joke. I mean, seriously, claiming that a six month smoking ban in bars
cut heart attacks on the general population by 60%? Doya think I’m STOOPID???
- MJM
□ Look at the claims John Bahnzhaff makes. He lies through his teeth. He put the liar into lawyer.
There’s a guy with some real issues, whew.
No need to log in
This entry was posted in Uncategorized and tagged idle theory. Bookmark the permalink.
|
{"url":"http://cfrankdavis.wordpress.com/2012/10/06/a-polyhedral-cell-model/","timestamp":"2014-04-20T21:37:42Z","content_type":null,"content_length":"101850","record_id":"<urn:uuid:3c6566f1-eed5-4c70-addb-df3612941164>","cc-path":"CC-MAIN-2014-15/segments/1398223207985.17/warc/CC-MAIN-20140423032007-00413-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Puzzle page
December 2008
The coloured hat exam
This puzzle was kindly provided by Christopher Dowden.
Three students have been put in detention by their evil maths teacher, Mr Chalk. The pupils (Alice, Ben and Chris) have been unfairly accused of not liking probability! Now they will all be expelled
unless they can prove Chalk wrong by passing his bizarre test.
In a moment, the crazy Chalk will lead the students blindfolded to his secret exam hall, where he will place mysterious coloured hats on their heads. The blindfolds will then be removed, enabling the
three pupils to see each other's hats, but not their own. Finally, the students will sit an exam with only one cunning question: what colour is your hat?
Of course, normal exam regulations will apply, so the students won't be allowed to communicate with each other in any way once they are in the hall. They only know that the colours will be red and
yellow, and that all eight possible combinations (RRR, RRY, RYR, YRR, RYY, YRY, YYR and YYY) are equally likely.
Alice, Ben and Chris will each be allowed to write one of three words "red", "yellow", or "pass". They will not be allowed to see each other's answers. If at least one student guesses his hat colour
correctly and none guess incorrectly, they all win. Otherwise (if at least one guesses incorrectly or they all write "pass"), all three will be expelled.
The pupils are currently in the headmaster's office, waiting nervously for Chalk's arrival. This gives them a few precious minutes together to devise their strategy. They can see that if they decide
now that Alice and Ben will both write "pass" and that Chris will write "red", then this will give them a 50% chance of winning (as this tactic will succeed if Chris is given a red hat and fail if he
is given a yellow one, regardless of the colours that Alice and Ben have). Can they do any better than that?
Here is a hint
About the author
This problem has recently been circulating around universities all over the world. It was written up for Plus in this format by Christopher Dowden, who studied maths at Gonville and Caius College,
Cambridge, and then Merton College, Oxford, where he recently completed a DPhil on random graphs. He is currently a postdoctoral research fellow at the University of Canterbury in Christchurch, New
If you are stumped by last issue's puzzle, here is the solution.
For some challenging mathematical puzzles, see the NRICH puzzles from this month or last month.
|
{"url":"http://plus.maths.org/content/puzzle-page-87","timestamp":"2014-04-17T03:53:50Z","content_type":null,"content_length":"25250","record_id":"<urn:uuid:3ff180b9-c2e4-4a08-bdf1-cbf2cab5b7a5>","cc-path":"CC-MAIN-2014-15/segments/1398223202548.14/warc/CC-MAIN-20140423032002-00309-ip-10-147-4-33.ec2.internal.warc.gz"}
|
SuperKids Math Review
free newsletter! tell a friend! contests
bestseller list
price survey
what's new
product support
educational tools
math worksheets
vocabulary builders
iPhone/iPad apps
logic games
brain food
feature articles
iPhone/iPad apps
reading corner
movie corner
SuperKids home
about SuperKids
* * *
* * *
* * *
* * *
math worksheets > > fractions > > multiplying fractions
SuperKids Math Review
How to Multiply Fractions
Remember . . .
Here's a memory trick: the Denominator is the bottom, or Down number in a fraction -- and both Denominator and Down start with the letter D.
Multiplying fractions is simple. Unlike adding and subtracting, where care must be taken to ensure that both fractions have a common denominator, the multiplication operation has no such requirement.
Method 1 - (for beginners) Just multiply the numerators (top numbers), and the denomominators (bottom numbers), and place the resulting answers in their respective top / bottom location in the answer
fraction. Then reduce the fraction, if possible
Example 1: Simple fraction multiplication
No reduction is possible, so we have found the answer!
Example 2: Multiplication requiring answer fraction to be reduced
Then reduce:
Example 3: Multiplication with a mixed number
First convert the mixed number to an improper fraction:
Then multiply:
Then reduce the fraction:
Method 2 - (for anyone who understands the above concept). Before multiplying the numerators and denominators, look for ways to pre-reduce the fractions, both within each fraction, and across the
Example 1: Pre-reducing within a fraction
First reduce the 2/4 to 1/2, then multiply the numerators (top numbers), and the denominators (bottom numbers), and place the resulting answers in their respective top / bottom location in the answer
fraction. Then reduce the fraction, if possible.
Example 2: Reducing across fractions
Here we divide the first numerator (5) by the second denominator (5), before multiplying the numerators (top numbers), and the denomominators (bottom numbers).
This method has the advantage of allowing the use of smaller numbers in the calculation process.
Got it? Great! Then go to the SuperKids Math Worksheet Creator for Basic Fractions, and give it a try!
[Questions?] Make this your browser's home page!
Questions or comments regarding this site? webmaster@superkids.com
Copyright © 1998-2014 Knowledge Share LLC. All rights reserved. Privacy Policy
|
{"url":"http://www.superkids.com/aweb/tools/math/fraction/commond/multiply.shtml","timestamp":"2014-04-18T16:20:14Z","content_type":null,"content_length":"13257","record_id":"<urn:uuid:1be33f87-4edf-4445-ae4c-7d047903ffef>","cc-path":"CC-MAIN-2014-15/segments/1398223207046.13/warc/CC-MAIN-20140423032007-00380-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Industrial Problems Seminar
G. Bao, An inverse diffraction problem in periodic structures, proceedings of third international conference on mathematical and numerical aspects of wave propagation, ed. by G. Cohen, SIAM,
Philadelphia, 694--704, 1995.
G. Bao, A uniqueness theorem for an inverse problem in periodic diffractive optics, Inverse Problems 10 (1994), 335--340.
G. Bao, Direct and inverse diffraction by periodic structures, in the proceedings of international conference on nonlinear PDE and applications, submitted.
G. Bao, Finite element approximation of time harmonic waves in periodic structures, SIAM J. Numerical Analysis 32, No. 4 (1995), 1155--1169.
G. Bao, Inverse diffraction by a periodic perfect conductor with several measurements, in the proceedings of 2nd international conference on inverse problems in engineering, to appear.
G. Bao, Numerical analysis of diffraction by periodic structures: TM polarization, numerische mathematik 75 (1996), 1--16.
G. Bao, Regularity of an inverse diffraction problem, inverse problems, submitted.
G. Bao, Variational approximation of Maxwell's equations in biperiodic structures, SIAM J. Appl. Math. 57, No. 2 (1997), 364--381.
G. Bao, Y. Cao and H. Yang, Least-squares finite element computation of diffraction problems, SIAM J. Sci. Comput., submitted.
G. Bao and Y. Chen, A nonlinear grating problem in diffractive optics, SIAM J. Math. Anal. 2 (1997), 322--337.
G. Bao and D. Dobson, Diffractive optics in nonlinear media with period structures, IMA Preprint Series # 1124, March 1993, University of Minnesota, Minneapolis.
G. Bao and D. Dobson, Diffractive optics in nonlinear media with period structures, Euro. J. Appl. Math. 6 (1995), 573--590.
G. Bao and D. Dobson, Modeling and optimal design of diffractive optical structures, submitted to Surveys on Math. for Industry.
G. Bao and D. Dobson, Nonlinear optics in periodic diffraction structures, Second international Conference on mathematical and numerical aspects of wave propagation, ed. by R. Kleinman, T. Angell, D.
Colton, F. Santosa, and I. Stak gold, SIAM, Philadephia, 30--38, 1993.
G. Bao and D. Dobson, Second harmonic generation in nonlinear optical films, J. Math. Physics 35 (1994), 1622--1633.
G. Bao, D. Dobson, and J.A. Cox, Mathematical issues in the electromagnetic theory of gratings, in diffractive optics: design, fabrication and applications, technical digest, Optical Society of
American 11 (1994), 8--11.
G. Bao, D. Dobson, and J.A. Cox, Mathematical studies in rigorous grating theory, J. Opt. Soc, Amer. A 12 (1995), 573--590.
G. Bao and H. Yang, Least-squares finite elements computation of diffraction problems, SIAM J. Sci. Comput., submitted.
G. Bao and Y. Chen, A nonlinear grating problem in diffractive optics, SIAM J. Math. Anal. 8, No. 2 (1997), 322-337.
G. Bao and Z. Zhou, Inverse diffraction by a doubly periodic structure, C.R. Acad. Sci., Paris, Serie I, t 324 (1997), 627-632.
G. Bao and Z. Zhou, An inverse problem for scattering by a doubly periodic structure, AMS Trans., to appear.
G. Bao and A. Friedman, Inverse problems for scattering by periodic structure, Archive Rat. Mech. Anal. 132 (1995), 49--72.
H. Bellout and A. Friedman, Scattering by stripe grating, J. Math. Anal. Appl. 147 (1990), 228--248.
O.P. Bruno and F. Reitich, A new approach to the solution of problems of scattering by bounded obstacles, in SPIE 2192, Mathematics and control in smart structures (1994), H.T. Banks, ed., 20--28.
O.P. Bruno and F. Reitich, Accurate calculation of diffractive grating efficiencies, SPIE 1919, Mathematics in Smart Structures (1993), 236--247.
O.P. Bruno and F. Reitich, Approximation of analytic functions: a method of enhanced convergence, Math. Comp. 63 (1994), 195--213.
O.P. Bruno and F. Reitich, Boundary-variation solutions for bounded-obstacle scattering in three dimensions, submitted to J. Comp. Phys. (1997).
O.P. Bruno and F. Reitich, Calculation of electromagnetic scattering via boundary variations and analytic continuation, ACES J. 11 (1996), 17--31.
O.P. Bruno and F. Reitich, Maxwell equations in a nonlinear Kerr medium, Proc. R. Soc. Lond. A 447 (1994), 65--76.
O.P. Bruno and F. Reitich, Numerical solution of diffraction problems: A method of variation of boundaries, J. Optical Soc. Amer. A. 10 (1993), 1168--1175.
O.P. Bruno and F. Reitich, Numerical solution of diffraction problems: a method of variation of boundaries, J. Opt. Soc. Am. A 10 (1993), 1168--1175.
O.P. Bruno and F. Reitich, Numerical solution of diffraction problems: a method of variation of boundaries II, Dielectric gratings, Pade approximants and singularities, J. Optical Soc. Amer. A 10
(1993), 2307--2316.
O.P. Bruno and F. Reitich, Numerical solution of diffraction problems: a method of variation of boundaries III. Doubly-periodic gratings, J. Optical Soc. Amer. A 10 (1993), 2551--2562.
O.P. Bruno and F. Reitich, Solution of a boundary value problem for Helmhotz equation of the boundary into the complex domain, Proc. Roy. Soc. Edinburg, 122A (1992), 317--340.
O.P. Bruno, A. Friedman, and F. Reitich, Asymptotic behavior for a coalescence problem, Trans. Amer. Math. Soc. 338 (1993), 133--158.
X. Chen, Axially symmetric jets of compressible fluids, Nonlinear Analysis 16 (1991), 1057--1087.
X. Chen, Collision of two jets of compressible fluid in FREE BOUNDARY PROBLEM IN FLUID FLOW WITH APPLICATIONS, J.M. Chadam & H. Rasumuseen, eds., Pitman Research Notes in Mathematics Ser. 282 (1993),
X. Chen and A. Friedman, A nonlocal diffusion equation arising in terminally attached polymer chains, European J. Appl. Math. 1 (1990), 311--326.
X. Chen, A. Friedman, and T. Kimura, Nonstationary filtration in partially saturated porous media, European J. Math. Appl. Anal. 5 (1994), 405--429.
X. Chen and R. Ore, Electro-optic modulation in an arbitrary cross-section waveguide, IEEE, J. Quantum Electronics 26 (1990), 532--540.
X. Chen and A. Friedman, Maxwell's equations in a periodic structure, Trans. Amer. Math. Soc. 323 (1991), 465--507.
X. Chen, A. Friedman, and L.S. Jiang) Mathematical modeling of semiconductor lasers, SIAM J. Appl. Math. 53 (1993), 168--186.
D.P. Chock, S.L. Winkler and P. Sun, A comparison of stiff chemistry solvers for air quality modeling, Environmental Science & Technology 28, No. 11 (1994), 1882--1892.
D.C. Dobson, A boundary determination problem from the design of diffractive periodic structures, in Free boundary problems: theory and applications, J.I. Diaz, M.A. Herrero, A. Li\~{n}an, and J.L.
Vazquez, eds., Pitman Research N otes in Math. Series 323, Longman (1995), 108--120.
D.C. Dobson, A variational method for electromagnetic diffraction in biperiodic structures, RAIRO, Mod\'{e}l, Math. Anal. Num\'{e}r 28 (1994), 419--439.
D.C. Dobson, Controlled scattering of light waves: optimal design of diffractive optics, in Control problems in industry, I. Lasiecka and B. Morton, eds., Birkh\"{a}user, Boston (1995), 97--118.
D.C. Dobson, Designing periodic structure with specified low frequency scattered far field data, in ``Advances in computer methods for partial differential equations VII, '' edited by R.
Vichnevetsky, D. Knight and G. Richter, IMAC S (1992), 224--230.
D.C. Dobson, Optimal design of periodic antireflective structures for the Helmholtz equation, European J. Appl. Math. 4 (1993), 321--340.
D.C. Dobson, Phase reconstruction via nonlinear least-squares, Inverse Problems 8 (1992), 541--557.
D.C. Dobson, Optimal shape design of blazed diffraction gratings, Appl. Math. Opt., to appear.
D.C. Dobson and J.A. Cox, Mathematical modeling for diffractive optics, in Diffractive and miniaturized optics, S. Lee.,ed., SPIE CR-49 (1994), 32--53.
D.C. Dobson, J.A. Cox, J.D. Zook, and T. Ohnstein, Optical performance of high-aspect LIGA gratings, Opt. Eng. 36 (5) (1997), 1367--1373.
D.C. Dobson, J.A. Cox, J.D. Zook, and T. Ohnstein, Optical performance of high-aspect LIGA gratings, in SPIE Proc. 2383A (1995).
D.C. Dobson and A. Friedman, The time harmonic Maxwell equations in doubly periodic structure, J. Math. Anal. Appl. 166 (1992), 507--528.
A. Friedman and M.L. Honig, On the spread of continuous-time linear systems, SIAM J. Math. Anal. 21 (1990), 757--770.
A. Friedman and B. Hu, A free boundary problem arising in electrophotography, Nonlinear Analysis 9 (1991), 729--759.
A. Friedman and B. Hu, A free boundary problem arising in superconductor modeling, Asymptotic Analysis 6 (1992), 109--133.
A. Friedman and B. Hu, A non-stationary multi-scale oscillating free boundary for the Laplace and heat equations, Journal of Differential Equations 137 (1997), 119-165.
A. Friedman and B. Hu, A Stefan problem for multi-dimensional reaction diffusion systems, SIAM J. Math. Anal. 27 (1996).
A. Friedman and B. Hu, Head-media interaction in magnetic recording, Archive Rat. Mech. Anal.
A. Friedman and B. Hu, Homogenization approach to light scattering from polymer-dispersed liquid crystal film, SIAM J. Appl. Math. 52 (1992), 46--54.
A. Friedman and B. Hu, Optimal control of chemical vapor deposition reactor, Journal of Optimization Theory and Applications.
A. Friedman and B. Hu, The Stefan problem with kinetic condition at the free boundary, Scuol. Norm. Sup. Pisa. 19 (1992), 615--636.
A. Friedman, B. Hu, and Y. Liu, A boundary value problem for the Poisson equation with multi-scale oscillating boundary, Journal of Differential Equation 137 (1997), 54--93.
A. Friedman and C. Huang, Averaged motion of charged particles in a curved strip, SIAM J. Appl. Math.
A. Friedman and C. Huang, Averaged motion of charged particles under their self-induced field, Indiana University Mathematical Journal 43 (1994), 1167--1225.
A. Friedman, C. Huang and J. Yong, Effective permeability of the boundary of a domain, Comm PDE. 20 (1995), 59--102.
A. Friedman and W. Liu, A system of partial differential equations arising in electrophotography 89 (1991), 272--304.
A. Friedman and Y. Liu, Propagation of cracks in elastic media, Archive Rat. Mech. Anal. 136 (1996), 235--290.
A. Friedman and B. Ou, A Model of crystal precipitation, J. Math. Anal. Appl. 137 (1989), 550--575.
A. Friedman, B. Ou, and D. Ross, Crystal precipitation with discrete initial data, J. Math. Anal. Appl. 137 (1989), 576--590.
A. Friedman and F. Reitich, A hyperbolic inverse problem arising in the evolution of combustion aerosol, Archive Rat. Mech. Anal. 110 (1990), 313--350.
A. Friedman and F. Reitich, Asymptotic behavior of solutions of coagulation-fragmentation models, submitted to Indiana Univ. Math. J. (1997).
A. Friedman and F. Reitich, Parameter identification in reaction diffusion models, Inverse Problems 8 (1992), 187--192.
A. Friedman, D.S. Ross, and J. Zhang, A Stefan problem for a reaction diffusion system, SIAM J. Math. Analysis 26 (1995), 1089--1112.
A. Friedman and G. Rossi, Phenomenological continuum equations to describe case II diffusion in polymetric materials, Macromolecules, 30 (1997), 153--154.
A. Friedman and J.J.L. Vel\'{a}zquez, A time-dependent free boundary problem modeling the visual image in electrophotography, Arch. Rat. Mech. Anal. 123 (1993), 259--303.
A. Friedman and J.J.L. Vel\'{a}zquez, Liouville type theorems for fourth order elliptic equations in a half-space, Transactions Amer. Math. Soc. 349 (1997), 2537--2603.
A. Friedman and J.J.L. Vel\'{a}zquez, The analysis of coating flows in a strip, J. Diff. Eqs. 121 (1995), 134--182.
A. Friedman and J.J.L. Vel\'{a}zquez, The analysis of coating flows near the contact line, J. Diff. Eqs. 119 (1995), 137-208. in electrophotography, Arch. Rat. Mech. Anal. 123 (1993), 259--303.
A. Friedman and J.J.L. Vel\'{a}zquez, Time-dependent coating flows in a strip, Part I: the linearized problem Transactions Amer. Math. Soc. 349 (1997), 2981--3074.
A. Friedman and J. Zhang, Swelling of a rubber ball in the presence of good solvent, Nonlinear Analysis 25 (1995), 547--568.
B. Hu, A fiber tapering problem, Nonlinear Analysis: Theory, method an application 15, No. 6 (1990), 513--525.
B. Hu, A free boundary problem for a Hamilton Jacobi equation arising in ion etching, Journal of Differential equation 86, No. 1 (1990), 158--182.
B. Hu, A quasi-variational inequality arising in elastohydrodynamics, SIAM J. Math. Anal. 21, No. 1 (1990), 18--36.
B. Hu, Diffusion of penetrant in a polymer: a free boundary problem, SIAM J. Math. Anal. 22, No. 4 (1991), 934--956.
B. Hu and Lihe Wang, A free boundary problem arising in electrophotography: solutions with connected toner region, SIAM J.. Math. Anal. 23, No. 6 (1992), 87--111.
B. Hu and L. Wang, A free boundary problem arising in electrophotography: solutions with connected toner region, SIAM J. Appl. Math. 23 (1992), 1439--1454.
W. Liu, A parabolic system arising in film development, IMA Preprint Series # 577 (1989).
Y. Liu, Axially symmetric jet flows arising from high speed fiber coating in Nonlinear Analysis, Nonlinear Anal., Theory, Methods \& Applications, An international Multidisciplinary J. 23 (1994), No.
3, 319--363.
B. Morton and M. Elgersma, A new computational algorithm for 7R spatial mechanisms, Mech. Mach. Theory 31 (1996), 24--43.
C.P. Please and D.W. Schwendeman, Light-off behavior of catalytic converter, SIAM J. Appl. Math. 54 (1994), 72--92.
F. Reitich, Rapidly stretching plastic jets: The linearized problem, SIAM J. Math. Anal 22 (1991), 107--128.
F. Reitich, Singular solutions of a transmission problem in plane linear elasticity for wedgeshaped regions, Numer. Math. 59 (1991), 179--216.
F. Reitich and K. Ito, A high-order perturbation approach to profile reconstruction. I: perfectly conducting gratings, submitted (1997).
A. Solomonoff, Fast algorithms for micromagnetic computations, IMA Preprint Series # 1176 (1993).
P. Sun, D.P. Chock and S.L. Winkler, An implicit-explicit hybrid solver for a system of stiff kinetic equations, Journal of Computational Physics 115, No. 2 (December 1994), 515--523.
L. Wang, J.A. Cox, and A. Friedman, Model analysis of homogeneous optical waveguides by boundary integral method, IMA Preprint Series # 1457, February 1997.
T.H. Whitesides and D. Ross, Experimental and theoretical analysis of the limited coalescence process: stepwise limited coalescence, J. Colloid and Interface Science 196 (1995), 48--59.
J. Zhang, A nonlinear multi-dimensional conservation law, J. Math. Anal. Appl. 204 (1996), 353--388.
|
{"url":"http://ima.umn.edu/industrial/research.html","timestamp":"2014-04-16T20:10:44Z","content_type":null,"content_length":"58784","record_id":"<urn:uuid:e071b93e-42da-42dd-b059-21f684fa4384>","cc-path":"CC-MAIN-2014-15/segments/1397609524644.38/warc/CC-MAIN-20140416005204-00219-ip-10-147-4-33.ec2.internal.warc.gz"}
|
If a result is apparently provable with AC, is actually independent of ZF?
up vote 1 down vote favorite
Given the number of results that are independent of ZF. It seems that once you've found a proof of a theorem that uses the axiom of choice, the odds are that it will be independent of ZF. So my
question is:
-Is there any result that has a solution in ZFC which relies on AC, but has another proof that can be done only in ZF?
I'm especially interested in those results which people thought that were going to be independent of ZF, because only a proof relying on AC was known. And then someone found a proof in ZF.
When I say axiom of choice I'm also including weaker versions like countable choice or DC.
model-theory axiom-of-choice
4 I don't understand your question. 98% of theorems in advanced algebra that are usually proven with the AC don't depend on the AC. – darij grinberg Feb 5 '11 at 14:06
@darij At least the basic results in algebra need AC: the different definitions of noetherian aren't equivalent without DC. ( (mathoverflow.net/questions/53523/maximal-ideal-and-zorns-lemma). –
Gabriel Furstenheim Feb 5 '11 at 14:23
Yes, but few of their applications do. If you read some French, try out this: hlombardi.free.fr/liens/constr.html (particularly "Algèbre Commutative"). There are lots of results of the kind "any
2 sufficiently elementary theorem provable in ZFC is provable in ZF or even constructively", where "sufficiently elementary" means things like "1st order formula", "geometric formula" and the likes.
Some of these results, ironically, require ZFC themselves, while others don't. Alas, I am not an expert in this field ("program extraction from classical proofs"). – darij grinberg Feb 5 '11 at
9 The Cantor-Bernstein theorem was first proved using AC but is provable without it as well. – Asaf Karagila Feb 5 '11 at 15:01
4 I don't think the question is well-defined as stated. You can trivially insert "by the Axiom of Choice, ..." into the proof of any statement and then remove it. I think you meant to say "if a
result apparently depends on AC, does it necessarily?" and then there are a million counterexamples. – Qiaochu Yuan Feb 5 '11 at 15:34
show 1 more comment
4 Answers
active oldest votes
Construction of the Haar integral for locally compact Hausdorff group $G$ ... as a linear functional on $C_{00}(G)$ ... is often done using the Axiom of Choice. In the Hewitt & Ross
textbook ABSTRACT HARMONIC ANALYSIS this is Theorem (15.5), and they do it without AC. They do then have an exercise (15.25) where they outline the much shorter proof using Tihonov's
up vote 7 down Theorem.
vote accepted
1 I don't understand this answer. They are using the axiom of choice in the form of a well-known equivalent version. – Andres Caicedo Feb 5 '11 at 19:48
3 I guess (but don't know) that Gerald means they present a proof without AC and then as an exercise suggest a much easier proof that does use AC (via Tichonoff's theorem). – Tom
Ellis Feb 5 '11 at 22:05
@Tom: Oh, I see! Thanks. – Andres Caicedo Feb 6 '11 at 0:33
Tom is correct, isn't that what I said? – Gerald Edgar Feb 6 '11 at 2:53
@Gerald: Yes, that is precisely what you said. (I misread and was confused.) It is a nice example. – Andres Caicedo Feb 6 '11 at 2:57
add comment
The paper "Division by Three" by Doyle and Conway (link to PDF) gives a proof of the following result without appeal to the axiom of choice:
Let $A$ and $B$ be sets, and let $3$ denote a three-element set. If there exists a bijection from $3\times A$ to $3\times B$, then there exists a bijection from $A$ to $B$.
up vote 7 down vote
(This result is not due to Doyle and Conway -- it was first obtained by Lindenbaum and Tarski in 1926.)
1 This paper is also notable for giving a great conceptual proof of the Schröder-Bernstein theorem, as a warm-up to the main proof. – Jim Conant Feb 6 '11 at 19:30
add comment
Tarski proved that, for any set $A$, the set $W(A)$ of well-orderable subsets of $A$ has strictly larger cardinality than $A$. This is trivial with AC, as then $W(A)$ is the whole power
set of $A$ and thus Cantor's theorem applies. But Tarski gave a proof that avoids AC.
up vote 5 I don't have my copy of Howard and Rubin's "Consequences of the Axiom of Choice" handy, but if I did then I could probably find lots of examples by looking at the various forms numbered
down vote 0A, 0B, etc. I believe all of these are provable without AC (hence the number 0) but there was once a reason to suspect AC was needed.
2 The online version at consequences.emich.edu/conseq.htm allows you to enter a form number and get a pdf of all equivalent statements. Tarski's result is form 0 AE. – François G.
Dorais♦ Feb 6 '11 at 10:45
add comment
There are many such proofs which use $AC$, but which are not even close to being independent of $ZF$. The general reason for this is how $AC$ is used. Normally the non-essential use of $AC$
appears when you have a very targeted application in mind, with additional structure in the background.
For example: Given an arbitrary collection of non-empty sets $\{X_\alpha: \alpha \in Y\}$ asserting that the product $Z =\Pi_{\alpha \in Y} X_{\alpha} $ is non-empty requires $AC$ when you
have no structure imposed on the $X_\alpha$ and $Y$. However, when we add the assertion that each $X_\alpha$ is a ring, with additive identity $0_\alpha\in X_\alpha$, we then know that $Z$
up vote is always non-empty without $AC$. The reason for this is because we now can define a function which witnesses that $Z$ is non-empty, in fact the function $\varphi:Y \rightarrow \bigcup X_\
2 down alpha$, given by $\varphi(\alpha) = 0_\alpha$ is such a witness, because $\varphi \in Z$.
That having been said, as for a specific example of a theorem for which everyone thought relied on $AC$ but was proven to hold in $ZF$, I cannot think of one off-hand that has not already
been mentioned. But I think an example of what you are looking for might be contained in a question by Andres Caicedo, Distinct well-orderings of the same set and in his insightful answer to
his own question.
add comment
Not the answer you're looking for? Browse other questions tagged model-theory axiom-of-choice or ask your own question.
|
{"url":"http://mathoverflow.net/questions/54399/if-a-result-is-apparently-provable-with-ac-is-actually-independent-of-zf/54424","timestamp":"2014-04-21T02:31:21Z","content_type":null,"content_length":"80467","record_id":"<urn:uuid:8a9f4e29-31d3-4452-bdb0-112f12c1dba2>","cc-path":"CC-MAIN-2014-15/segments/1397609539447.23/warc/CC-MAIN-20140416005219-00228-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Mixed Problems Worksheets for Practice
Here is a graphic preview for all of the Mixed Problems Worksheets. You can select different variables to customize these Mixed Problems Worksheets for your needs. The Mixed Problems Worksheets are
randomly created and will never repeat so you have an endless supply of quality Mixed Problems Worksheets to use in the classroom or at home. Our Mixed Problems Worksheets are free to download, easy
to use, and very flexible.
These Mixed Problems Worksheets are a great resource for children in Kindergarten, 1st Grade, 2nd Grade, 3rd Grade, 4th Grade, and 5th Grade.
Click here for a Detailed Description of all the Mixed Problems Worksheets.
Quick Link for All Mixed Problems Worksheets
Click the image to be taken to that Mixed Problems Worksheet.
Single Digit for Addition and Subtraction
Mixed Problems Worksheets
These single digit addition and subtraction worksheets are configured for 2 numbers in a vertical problem format. The range of numbers used for each worksheet may be individually varied to generate
different sets of mixed operator problems.
Single Digit for Addition, Subtraction, & Multiplication
Mixed Problems Worksheets
These single digit mixed problems worksheets are configured for 2 numbers in a vertical problem format. The range of numbers used for each worksheet may be individually varied to generate different
sets of mixed operator problems.
Adding & Subtracting with Dots
Mixed Problems Worksheets
These mixed problems worksheets will produce 12 vertical problems with dots to the right of each number to aid the children with the addition or subtraction. The range of numbers used for each
worksheet may be individually varied to generate different sets of problems.
Single or Multiple Digit for Addition, Subtraction, & Multiplication
Mixed Problems Worksheets
These mixed problems worksheets may be configured for either single or multiple digit horizontal problems with 2 numbers. You may select between 12 and 30 problems to be displayed on each worksheet.
Adding and Subtracting 2, 3, or 4 Digit Problems
Mixed Problems Worksheets
These mixed problems worksheets may be configured for adding and subtracting 2, 3, and 4 digit problems in a vertical format. For the subtraction problems you may select some regrouping, no
regrouping, all regrouping, or subtraction across zero. You may select up to 30 mixed problems per worksheet.
Adding and Subtracting 4, 5, or 6 Digit Problems
Mixed Problems Worksheets
These mixed problems worksheets may be configured for adding and subtracting 4, 5, and 6 digit problems in a vertical format. For the subtraction problems you may select some regrouping, no
regrouping, all regrouping, or subtraction across zero. You may select up to 20 mixed problems per worksheet.
Adding & Subtracting With No Regrouping
Mixed Problems Worksheets
These mixed problems worksheets are great for problems that do not require regrouping. The addition and subtraction problems may be configured with either 2 digits (plus/minus) 1 digit problems or 2
digits (plus/minus) 2 digits problems. The no regrouping option may be switched off if some regrouping is desired. The problem format is vertical and you may select up to 30 mixed problems per
Adding & Subtracting Decimal Numbers
Mixed Problems Worksheets
These Mixed Problems Worksheets may be configured for 1, 2, and 3 Digits on the right of the decimal and up to 4 digits on the left of the decimal. You may select up to 25 Mixed Problems per
Adding & Subtracting Money Numbers
Mixed Problems Worksheets
These Mixed Problems Worksheets may be configured for up to 4 digits in each problem. The currency symbol may be selected from Dollar, Pound, Euro, and Yen. You may select up to 25 mixed problems per
Negative Numbers for Addition, Subtraction, & Multiplication
Mixed Problems Worksheets
These mixed problems worksheets may be configured for either single or multiple digit horizontal problems. The numbers may be selected to be positive, negative or mixed. You may vary the numbers of
problems on each worksheet from 12 to 30.
Missing Numbers for Addition, Subtraction, Multiplication, and Division
Mixed Problems Worksheets
These mixed problems worksheets are a good introduction for algebra concepts. You may select various types of characters to replace the missing numbers. The formats of the problems are horizontal and
the numbers range from 0 to 99. You may select up to 30 mixed problems per worksheet.
1 or 5 Minute Drills for Addition, Subtraction, Multiplication, and Division
Mixed Problems Worksheets
These mixed problems worksheets are for timed drills. They contain single digit addition, subtraction, multiplication, and division problems on one page. A student who has memorized all of these
single digit problems should be able to work out these mixed problems worksheets correctly in the allowed time. You may select which operations you wish to use, and vary the number range as well.
Adding & Subtracting Missing Digits
Mixed Problems Worksheets
These mixed problems worksheets are configured for a vertical problem format. The missing digits are randomly selected to challenge the children in solving the problems. The number of digits in each
problem may be varied between 2 and 4. You may select up to 30 mixed problems per worksheet.
Solving for Equalities in an Equation
Mixed Problems Worksheets
These Mixed Problems Worksheets are great for testing students on solving equalities in an equation. You may select the type of problems and the range of numbers that will be used in the worksheet.
Each selected type of problem will produce four different variations of the location for the unknown. These equality equations worksheets are appropriate for Kindergarten, 1st Grade, 2nd Grade, and
3rd Grade. You may select up to 20 mixed problems per worksheet.
Adding & Subtracting Irregular Units
Mixed Problems Worksheets
These mixed problems worksheets are great for teaching children to add and subtract irregular units of measurement. The problems may be selected to include Feet & Inches, Pounds & Ounces, Hours &
Minutes, and Minutes & Seconds. These Mixed Problems Worksheets will produce 15 problems per worksheet.
Adding, Subtracting, Multiplying, and Dividing Two Numbers
Mixed Problems Worksheets
These mixed problems worksheet may be configured for adding, subtracting, multiplying, and dividing two numbers. You may select different number of digits for the addition and subtraction problems.
You may control carrying in the addition problems as well as regrouping and subtraction across zero in the subtraction problems. The number of digits may differ in the multiplicand as well as the
multiplier for the multiplication problems. The division problems have five different digit configuration to select from, and you may select whether the problems have remainders or not. These
problems will be produced in a vertical format for the addition, subtraction and multiplication, and standard long division format. These mixed problems worksheets will produce six problems for each
type of operation for a total of 24 problems.
Adding, Subtracting, Multiplying, and Dividing Two Fractions
Mixed Problems Worksheets
These mixed problems worksheets are great for working on adding, subtracting, multiplying, and dividing two fractions on the same worksheet. You may select between three different degrees of
difficulty and randomize or keep in order the operations for the problems. These mixed problems worksheets will produce 12 problems per page.
|
{"url":"http://www.math-aids.com/Mixed_Problems/","timestamp":"2014-04-21T04:42:49Z","content_type":null,"content_length":"42414","record_id":"<urn:uuid:c8db2cd2-b355-419e-9839-4006bf124164>","cc-path":"CC-MAIN-2014-15/segments/1397609539493.17/warc/CC-MAIN-20140416005219-00063-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Britannica Blog
. Author and book are the subject of David Berlinski's new book
The King of Infinite Space
, the subject of our transatlantic question-and-answer session.
Read the rest of this entry »
Journal de Physique
that equal volumes of gases, at the same temperature and pressure, contain an equal number of molecules. His idea became known as Avogadro's law, a fundamental concept in the physical sciences.
Read the rest of this entry »
series on Women in History 2011
, we take a look at five women who beat the odds, becoming celebrated for their genius and mathematical talent.
Read the rest of this entry »
Alfred North Whitehead
, born Feb. 15, 1861, was one of the most influential mathematician-philosophers of the 20th century. Known for his work with
Bertrand Russell
on the three-volume masterpiece
Principia Mathematica
, as well as for his
, Whitehead devoted his career to grasping the nature of
, science, and
Read the rest of this entry »
Walking around (not under) ladders, avoiding black cats, stepping over cracks, avoiding a building's 13th floor (if the building even has one) -- are you superstitious this way, and especially today,
on Friday the 13th? And if so, why? Friday the 13th is widely hailed as the most common superstition in the world, whose roots trace back to antiquity. Mathematician and Britannica contributor Ian
Stewart discusses
number symbolism
and our love-hate relationship with numbers, and even runs through the many cultural associations we have with numbers
1 - 20
in particular.
So click on the link above and read on (if you dare) ...
Read the rest of this entry »
You are below average. I’m sorry to have to be the one to tell you this, but there it is. It’s no use denying it. Facts are facts, and the figures don’t lie. Once we get beyond the average and the
median, most of us get lost in statistics. It is a form of mathematics for which the brain was not designed. (If there were an Intelligent Designer, things would be otherwise, of course.) But the
fact that we can’t follow it or don’t like the results it yields gives us no warrant to mock it or to pretend that its results are bogus.
Read the rest of this entry »
To live outside the law, says the poet, you must be honest. Two outlaws discovered this week that you'd better live outside caves, too. Come along on a whirlwind tour of Antarctica, Leonardo da
Vinci, Claude Lévi-Strauss, Carl Reiner (the Shakespearean), and that great anthem of civilized life, the
Addams Family
theme song.
Read the rest of this entry »
|
{"url":"http://www.britannica.com/blogs/category/science-technology/mathematics/","timestamp":"2014-04-16T13:56:38Z","content_type":null,"content_length":"87351","record_id":"<urn:uuid:aa426f12-9dc6-41aa-b144-62631c1281b3>","cc-path":"CC-MAIN-2014-15/segments/1398223202774.3/warc/CC-MAIN-20140423032002-00435-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Back to practice exercises.
1: Background Reading
• 4.8 Local Search
2: Learning Goals
• Implement local search for a CSP
• Implement different ways to generate neighbours
• Implement scoring functions to solve a CSP by local search through either greedy descent or hill-climbing
• Implement SLS with random steps and random restarts
• Compare SLS algorithms with runtime distributions
3: Directed Questions
1. In local search, how do we determine neighbours? [solution]
2. What is the difference between random walk and random restart? [solution]
3. What is the key weakness of stochastic local search? [solution]
4: Exercise: Traffic Flow
Consider the following scenario. You are on a city planning committee and must decide how to control the flow of traffic in a particular residential neighbourhood. At each intersection, you have to
decide whether to install a 4-way stop, a roundabout, or an uncontrolled intersection (in an uncontrolled intersection, the streets intersect without signs, roundabouts or other traffic conrols).
There are several restrictions in how the intersections can be controlled. The following figure gives an overview of the neighbourhood.
The red stars indicate the 8 intersections under consideration. The intersection nearest the school in the NW corner of the neighbourhood must contain a 4-way stop for safety reasons. Along the east
side of the neighbourhood runs a truck route, and no roundabouts can be placed on this street because they pose a problem for large trucks. Also, it is not allowed to have 4-way stops at consecutive
intersections or to have two consecutive uncontrolled intersections. Finally, due to the cost of installing roundabouts compared with the other options, each block can have at most one of its four
corners with a roundabout.
4.1: CSP Representation
1. How would you represent the above problem as a CSP? Identify the variables, their domains, and the constraints involved. Once you are done a sketch on paper, open the stochastic local search tool
and load the file http://www.aispace.org/exercises/roundabouts.xml by clicking File → Load from URL. This shows one possible representation, but there might be more than one correct
4.2: Comparing Local Search Algorithms
Using the local search tool, we will experiment with several local search algorithms for solving this problem.
1. Greedy Descent
From the menu, choose Hill Options -> Algorithm Options and then select Greedy Descent from the dropdown menu. Click Ok. Click Initialize. This will assign a value to each variable. Note: you can
choose Hill Options -> Auto Solve Speed -> Very Fast to speed up the solver. Click Auto Solve. What happens? Does it find a solution within 100 steps? Hypothesize why or why not. Now click Batch
Run, which will calculate the runtime distribution and plot the percentage of successes against the number of steps. What does the runtime distribution tell you about this solver? [solution]
2. Greedy Descent
Go back to Algorithm Options and now select Greedy Descent with Random Restarts. Click Batch Run again and compare the runtime distributions. Do the random restarts improve Greedy Descent? Why or
why not? [solution]
3. Random Walk
Go back to Algorithm Options and now select Random Walk. Click Batch Run again. How does this compare with plain Greedy Descent? How would the two algorithms compare if you gave them 10000 steps?
(a logarithmic scale might help the visualization) [solution]
5: Learning Goals Revisited
• Implement local search for a CSP
• Implement different ways to generate neighbours
• Implement scoring functions to solve a CSP by local search through either greedy descent or hill-climbing
• Implement SLS with random steps and random restarts
• Compare SLS algorithms with runtime distributions
|
{"url":"http://aispace.org/exercises/exercise4-c-2.shtml","timestamp":"2014-04-19T17:02:04Z","content_type":null,"content_length":"11609","record_id":"<urn:uuid:27656e4c-1c23-49f1-827d-797f48fde07a>","cc-path":"CC-MAIN-2014-15/segments/1397609537308.32/warc/CC-MAIN-20140416005217-00351-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Google Summer of Code 2012: Week 13
Posted by on August 20, 2012
Hi all, here’s a brief summary of my 13th (and last) week of GSoC.
• I continued my work on centralizers, improving normal closure, derived in lower central series, etc. My most recent pull request containing these additions just got merged and can be found here.
This week I spent a lot of time on writing better tests and developing some new test practices. The group-theoretical algorithms in the combinatorics module are getting more and more complicated,
so better, cleverer and more thorough tests are needed. I came up with the following model for verification:
- since the results of the tests are very hard to compute by hand, some helper functions are needed that find the wanted object in a brute-force manner using only definitions. For example, we
often look for a subgroup with certain properties. The most naive and robust approach to this is to:
- list all group elements, go over the list and check each element for the given property.
- Then, make a list of all the “good” elements and compare it (as a set) with the list of all elements of the group the function being tested returns.
Hence, a new file was created, sympy/combinatorics/testutil.py, that will host such functions. (Needless to say, they are exponential in complexity, and for example going over all the elements of
SymmetricGroup(n) becomes infeasible for n larger than 10.)
- The presence of functions being used to test other functions gets us in a bit of a Quis custodiet ipsos custodes? situation, but this is not fatal: the functions in testutil.py are extremely
straightforward compared to the functions in perm_groups.py that they test, and it’s really obvious what they’re doing, so it’ll take less tests to verify them.
- In the tests for the new functions from perm_groups.py, I introduced some comments to indicate what (and why) I’m testing. Another practice that seems to be good is to verify the algorithms for
small groups (degrees 1, 2, 3) since there are a lot of corner cases there that seem to break them.
• I started work on improving the disjoint cycle notation, namely excluding singleton cycles from the cyclic form; however, there are other changes to handling permutations that are waiting to be
merged in the combinatorics module here, so I guess I’ll first discuss my changes with Christopher. Currently, I see the following two possibilities for handling the singleton cycles:
- add a _size attribute to the Permutation class, and then, when faced with something like Permutation([[2, 3], [4, 5, 6], [8]]), find the maximum index appearing in the permutation (here it’s 8)
and assign the size of the permutation to that + 1. Then it remains to adjust some of the other methods in the class (after I adjusted mul so that it treats permutations of different sizes as if
they leave all points outside their domain fixed, all the tests passed) so that they make sense with that new approach to cyclic forms.
- more ambitious: make a new class, ExtendedArrayForm or something, with a field _array_form that holds the usual array form of a permutation. Then we overload the __getitem__ method so that if
the index is outside the bounds of self._array_form we return the index unchanged. Of course, we’ll have to overload other things, like the __len__ and __str__ to make it behave like a list. Then
instead of using a list to initialize the array form of a permutation, we use the corresponding ExtendedArrayForm. This will make all permutations behave as if they are acting on a practically
infinite domain, and if we do it that way, we won’t have to make any changes to the methods in Permutation – everything is going to work as expected, no casework like if len(a) > len(b),... will
be needed. So this sounds like a rather elegant approach. On the other hand, I’m not entirely sure if it is possible to make it completely like a list, and also it doesn’t seem like a very
performance-efficient decision since ExtendedArrayForm instances will be created all the time. (see the discussion here).
• Still nothing on a database of groups. I looked around the web for a while but didn’t find any resources… the search continues. Perhaps I should ask someone more knowledgeable.
That’s it for now, and that’s the end of my series of blog posts for the GSoC, but I don’t really feel that something has ended since it seems that my contributions to the combinatorics module will
continue (albeit not that regularly : ) ). After all, it’s a lot of fun, and there are a lot more things to be implemented/fixed there! So, a big “Thank you” to everyone who helped me get through
(and to) GSoC, it’s been a pleasure and I learned a lot. Goodbye!
|
{"url":"http://amakelov.wordpress.com/2012/08/20/google-summer-of-code-2012-week-13/","timestamp":"2014-04-16T13:04:30Z","content_type":null,"content_length":"64369","record_id":"<urn:uuid:cc10f02e-ae43-45a8-ad46-73e7d9593214>","cc-path":"CC-MAIN-2014-15/segments/1398223203235.2/warc/CC-MAIN-20140423032003-00000-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Boolean operations with 3D meshes [Archive] - OpenGL Discussion and Help Forums
read my post, thats a good way for doing csg.
no actually, me and my friend searched the web and found nothing but a few words that it is possibile to use bsp trees for csg. so we learned about bsp trees, and "reinvented" the wheel.
poor okapoka, i even found once a tutorial exactly describing your way on the web.. http://www.opengl.org/discussion_boards/ubb/smile.gif
Powered by vBulletin® Version 4.2.2 Copyright © 2014 vBulletin Solutions, Inc. All rights reserved.
|
{"url":"http://www.opengl.org/discussion_boards/archive/index.php/t-155762.html","timestamp":"2014-04-20T21:13:44Z","content_type":null,"content_length":"10646","record_id":"<urn:uuid:67b2976e-2bc8-44cd-bb98-b865126fc692>","cc-path":"CC-MAIN-2014-15/segments/1398223210034.18/warc/CC-MAIN-20140423032010-00089-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Fishtown, Philadelphia, PA
Havertown, PA 19083
PhD in Physics -Tutoring in Physics, Math, Engineering, SAT/ACT
...Beyond academics, I spend my time backpacking, kayaking, weightlifting, jogging, bicycling, metalworking, woodworking, and building a wilderness home of my own design. In between formal tutoring
sessions, I offer my students FREE email support to keep them moving...
Offering 10+ subjects including algebra 1 and algebra 2
|
{"url":"http://www.wyzant.com/Fishtown_Philadelphia_PA_algebra_tutors.aspx","timestamp":"2014-04-24T16:21:50Z","content_type":null,"content_length":"60696","record_id":"<urn:uuid:9d19e666-fb7b-433f-b0f6-d983ff972e4f>","cc-path":"CC-MAIN-2014-15/segments/1398223206647.11/warc/CC-MAIN-20140423032006-00128-ip-10-147-4-33.ec2.internal.warc.gz"}
|
SPOJ.com - Problem LABYR1
Submit All submissions Best solutions PS PDF Back to list
SPOJ Problem Set (classical)
38. Labyrinth
Problem code: LABYR1
The northern part of the Pyramid contains a very large and complicated labyrinth. The labyrinth is divided into square blocks, each of them either filled by rock, or free. There is also a little
hook on the floor in the center of every free block. The ACM have found that two of the hooks must be connected by a rope that runs through the hooks in every block on the path between the
connected ones. When the rope is fastened, a secret door opens. The problem is that we do not know which hooks to connect. That means also that the neccessary length of the rope is unknown. Your
task is to determine the maximum length of the rope we could need for a given labyrinth.
The input consists of T test cases. The number of them (T) is given on the first line of the input file. Each test case begins with a line containing two integers C and R (3 <= C,R <= 1000)
indicating the number of columns and rows. Then exactly R lines follow, each containing C characters. These characters specify the labyrinth. Each of them is either a hash mark (#) or a period
(.). Hash marks represent rocks, periods are free blocks. It is possible to walk between neighbouring blocks only, where neighbouring blocks are blocks sharing a common side. We cannot walk
diagonally and we cannot step out of the labyrinth.
The labyrinth is designed in such a way that there is exactly one path between any two free blocks. Consequently, if we find the proper hooks to connect, it is easy to find the right path
connecting them.
Your program must print exactly one line of output for each test case. The line must contain the sentence "Maximum rope length is X." where Xis the length of the longest path between any two free
blocks, measured in blocks.
Sample Input:
Sample output:
Maximum rope length is 0.
Maximum rope length is 8.
Warning: large Input/Output data, be careful with certain languages
Added by: Adrian Kosowski
Date: 2004-06-06
Time limit: 5s
Source limit: 50000B
Memory limit: 256MB
Cluster: Pyramid (Intel Pentium III 733 MHz)
Languages: All
Resource: ACM Central European Programming Contest, Prague 1999
hide comments
2014-03-14 02:24:39 Jean-Ralph Aviles
answer is 4
2014-03-08 16:54:12 Nirmal
Last edit: 2014-03-11 16:48:46
2014-01-27 14:28:05 Sanyam Kapoor
@stranger shouldn't the answer be 18???
Edit: Got my error!! It is 22 indeed!
Last edit: 2014-01-29 05:44:10
2013-12-09 02:01:44 Mislav Jurinic
Last edit: 2013-12-09 09:33:38
2013-06-03 21:08:20 DgenerationX
Don't forget the period after the answer.
Cost me a lot of wa
2013-05-03 12:03:08 Ahmed
TLE TLE TLE !! grrr
2013-04-29 23:17:57 Francky
@ Mohammed El-Ansary:
Read description -> "The labyrinth is designed in such a way that there is exactly one path between any two free blocks", so your sample is impossible according to this rule.
2013-04-29 22:26:28 Mohammed El-Ansary
Can an input be in this format?
If yes, should the output be 5 in this case?
2013-04-08 19:52:41 SkAd@@sh2
pls everyone note the output format b4 u submit costed me 2 WA :)
enjoyed and learnt a new concept nice problem...
2013-02-28 15:06:44 Aldo Lemos
Obviousness is always the enemy of correctness.(Russell)
|
{"url":"http://www.spoj.com/problems/LABYR1/","timestamp":"2014-04-16T22:01:11Z","content_type":null,"content_length":"24455","record_id":"<urn:uuid:064f6839-ca5f-4633-8c9c-1d9eaa7f9c2f>","cc-path":"CC-MAIN-2014-15/segments/1398223206672.15/warc/CC-MAIN-20140423032006-00605-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Laguna Niguel Math Tutor
...So much of Math is an application of functions and a good knowledge of Algebra will serve you well and provide an excellent foundation if you go on to take Trigonometry, Precalculus and
Calculus. Today, for obvious safety reasons, fewer chemical experiments involving reactions are taught in the lab class. Much more of the work is theoretical.
12 Subjects: including algebra 1, trigonometry, SAT math, precalculus
...I have experience as a tutor while in college wherein I assisted students in K-12. Additionally, and more recently, I have instructed MBA courses at the University of Phoenix for three years.
Before starting a tutoring assignment, I would fist have a sit-down meeting to define the objectives and associated plan (no cost meeting).My graduate degree was a Masters in Operations
20 Subjects: including algebra 1, American history, biology, elementary (k-6th)
...I am sure you will not have a better tutor after me.I have owned a Language Company for 13 years. I have taught all languages to companies, teenagers and adults. I love teaching and learning.
16 Subjects: including algebra 1, algebra 2, prealgebra, Spanish
Hi students and parents,My name is Anabel from Philippines, I migrated recently to US. I would like to share my knowledge I learned in my teaching and profession. I finished Bachelor of Science in
Civil Engineering in the Philippines and passed the board examination that made mathematics my major subject.
9 Subjects: including algebra 2, geometry, prealgebra, reading
...I am a former special education teacher with thirteen years of experience. I have worked with students in the classroom and as a tutor in preschool through high school. I have also taught many
students with autism.
18 Subjects: including prealgebra, reading, English, writing
|
{"url":"http://www.purplemath.com/laguna_niguel_math_tutors.php","timestamp":"2014-04-19T02:36:00Z","content_type":null,"content_length":"24054","record_id":"<urn:uuid:b7144f16-84b1-418b-a755-871d06408e65>","cc-path":"CC-MAIN-2014-15/segments/1397609535745.0/warc/CC-MAIN-20140416005215-00488-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Functional sets in C++
I am taking a functional programming class. One of the exercises involves defining sets in terms of functions; since C++ now supports
(and closures), I figured it would be interesting to implement them in C++ (the assignment was in a different language. It is similar to assignments in
The idea is to define a set (of integers, to simplify), as a function; you pass the element and it returns whether the element is in the set or not. So, we can define a Set as a function that takes a
boolean and returns an int. I also define a predicate, which is identical to a set. BTW, I'm using the new
Now, we can define a singletonSet function, that takes an int, and returns a set that contains that int. We need to create a function on the fly, at runtime, and that's what lambdas let us do. We
define a lambda with the [] syntax, and the = inside means to capture any variables (make a closure) by value. In this case, we're capturing the elem variable, so every call to singletonSet returns a
new lambda, with the right value of elem!
And with that, we can define the union of two sets, as a new function (lambda) that returns true if either set contains the elements, and the intersection as a function that returns true if both sets
contain the element, as in the following code:
You could use singletonSet and set_union, as follows:
And set_intersection as follows:
The assignment had a couple other functions (foreach and exists), and you can find the full source, some unit tests and a demo at
my github
3 comments:
1. Why do you need a contains function ? Would it not be enough to implement union as [=](int x) { return s1(x) || s2(x); } ?
2. Also, you could do with fewer ... dysfunctional ^-^ ... comments if you did `std::cout << std::boolalpha;` too :)
3. I used contains since the assignment had it :) I like it because it reminds me I'm viewing it as a set, rather than a function.
And, thanks for reminding about boolalpha :)
|
{"url":"http://programminggenin.blogspot.com/2012/10/functional-sets-in-c.html","timestamp":"2014-04-16T11:30:58Z","content_type":null,"content_length":"70145","record_id":"<urn:uuid:b272dd0f-b81e-4763-9fd8-06b54c342043>","cc-path":"CC-MAIN-2014-15/segments/1397609523265.25/warc/CC-MAIN-20140416005203-00653-ip-10-147-4-33.ec2.internal.warc.gz"}
|
scatch the graph
September 8th 2009, 09:04 AM #1
Mar 2009
scatch the graph
I need help with the analysis of the following function.
Before to draw it need to find intersection points, min. max. asymptotes, discuss the domain, convexity/concavity by the usage of the second derivative test.. Having difficuloties with the
following function:
f= x^x
any help is welcome.
f = x^x = e^(xln(x))
f' = e^(xln(x))(ln(x)+1)
f' = x^x(ln(x)+1)
I'll let you take it from here
thank you,
but the problem for me is to define the functions find D.O (is it defined for the negative numbers?), more is the limit of x^x when x approaches zero 1 ? and what is is zero when x approaches
-infinity ??
x^x is only defined for x> 0 so it makes no sense in talking asbout the limit
as x goes to - infinity since x is never negative
You would have to know L Hopital's rule in order to show x^x->1 as
x ->0 :
write y = lim (x^x)
x ->0
lny = lim xln(x)
lim xln(x) = lim ln(x)/(1/x) = lim -x = 0
lny = 0
y = e^0 = 1
i.e lim x^x = 1 as x goes to 0
Without L'hopital's rule you'll have to rely on a graph
September 8th 2009, 10:59 AM #2
September 8th 2009, 12:09 PM #3
Mar 2009
September 8th 2009, 01:21 PM #4
|
{"url":"http://mathhelpforum.com/calculus/101166-scatch-graph.html","timestamp":"2014-04-19T02:49:18Z","content_type":null,"content_length":"37736","record_id":"<urn:uuid:6306a51c-287f-4cc0-a349-aa51289f44bc>","cc-path":"CC-MAIN-2014-15/segments/1397609535745.0/warc/CC-MAIN-20140416005215-00218-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Cross tabulation
(Most of the statistical material related to cross-tabulation is covered under Chi-square.)
Cross-tabluation is about taking two variables and tabulating the results of one variable against the other variable. An example would be the cross-tabluation of course performance against mode of
│ │ HD │ D │ C │ P │ NN │
│ FT - Internal │ 10 │ 15 │ 18 │ 33 │ 8 │
│ PT Internal │ 3 │ 4 │ 8 │ 15 │ 10 │
│ External │ 4 │ 3 │ 12 │ 15 │ 6 │
Each individual would have had a recorded mode of study (the rows of the table) and performance on the course (the columns of the table). For each indivdual, those pairs of values have been entered
into the appropriate cell of the table.
What does cross-tabulation tell you?
A cross-tabulation gives you a basic picture of how two variables inter-relate.
It helps you search for patterns of interaction. Obviously, if certain cells contain disproportionately large (or small) numbers of cases, then this suggests that there might be a pattern of
In the table above, the basic pattern is what you would expect as a teacher but, at a general level, it says that the bulk of students get a P rating independant of mode of study.
What we normally do is to calculate the Chi-square statistic to see if this pattern has any substantial relevance.
To be Completed
|
{"url":"http://www.csse.monash.edu.au/~smarkham/resources/crosstab.htm","timestamp":"2014-04-17T18:38:17Z","content_type":null,"content_length":"2330","record_id":"<urn:uuid:121af0df-8755-42df-b1a9-feabcc7b5a54>","cc-path":"CC-MAIN-2014-15/segments/1397609530895.48/warc/CC-MAIN-20140416005210-00258-ip-10-147-4-33.ec2.internal.warc.gz"}
|
MathGroup Archive: August 2006 [00715]
[Date Index] [Thread Index] [Author Index]
Re: Change of Basis function
• To: mathgroup at smc.vnet.net
• Subject: [mg68997] Re: [mg68949] Change of Basis function
• From: Daniel Lichtblau <danl at wolfram.com>
• Date: Sat, 26 Aug 2006 02:04:39 -0400 (EDT)
• References: <200608250934.FAA09206@smc.vnet.net>
• Sender: owner-wri-mathgroup at wolfram.com
David Boily wrote:
> I would like to know if there is a function capable of giving as output
> the representation of a vector in a given basis. For example:
> FunctionX[{1,2,3}, {{1,2,0},{0,1,0},{0,0,1}}]
> (where the first argument is the vector and the second the basis)
> would yield
> {1,0,3}
> and
> FunctionX[f x1 - b x2 + x3 - x2, {x1,x2,x3}]
> would yield
> {f, -b-1, 1}
> I'm more interested in the second case, obviously, because the first one
> can be achieved with a simple matrix multiplication.
> David Boily
> Center for Intelligent Machines
> Mcgill University
I think the easiest way to do the first is with LinearSolve:
In[72]:= LinearSolve[Transpose[{{1,2,0},{0,1,0},{0,0,1}}], {1,2,3}]
Out[72]= {1, 0, 3}
But this will not readily generalize. Since today is my day for
PolynomialReduce I'll show how that might be applied. The second example
is straightforward. We want to write the polynomial f*x1 - b*x2 + x3 -
x2 as a combination of {x1,x2,x3}. PolynomialReduce can find the
coefficients needed.
In[73]:= First[PolynomialReduce[f*x1 - b*x2 + x3 - x2,
{x1,x2,x3}, {x1,x2,x3}]]
Out[73]= {f, -1 - b, 1}
For the first example one would need to convert from matrix to a
polynomial representation, in effect treating columns as variables and
rows as polynomials. So the set of polynomials representing
{{1,2,0},{0,1,0},{0,0,1}} would be {x1,2*x1+x2,x3}. The target vector
{1,2,3} becomes x1+2*x2+3*x3.
In[74]:= First[PolynomialReduce[x1 + 2*x2 + 3*x3,
{x1,2*x1+x2,x3}, {x1,x2,x3}]]
Out[74]= {1, 0, 3}
To make this into a more generally usable procedure you'd need to code
the part that translates from matrix to polynomial form.
Daniel Lichtblau
Wolfram Research
• References:
|
{"url":"http://forums.wolfram.com/mathgroup/archive/2006/Aug/msg00715.html","timestamp":"2014-04-17T13:18:45Z","content_type":null,"content_length":"36273","record_id":"<urn:uuid:a400d423-c890-4af3-960c-a45e1fcbc89b>","cc-path":"CC-MAIN-2014-15/segments/1398223206120.9/warc/CC-MAIN-20140423032006-00560-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Re: Inductance calculation
Uwe Keller (uk@c-lab.de)
Tue, 18 Feb 1997 09:20:18 +0100 (MET)
just have a question with regard to Howard Johnsons email. He wrote
> ...
> If you are interested in the loss effects of the
> eddy currents, you will also need an approximation for the
> current distribution under the wire:
> j(x) = (1/(pi*h))*(1/(1 + (x/h)**2))
> where all current in the ground plane flows parallel to the wire
> where the ground plane is oriented parallel to the earth, and the sun is at
> high noon
> ...
To my understanding the current distribution in a plane goes somehow
with the square root, since it is proportional to the distance and related
to the singular behaviour of the fields at the edges. In particular, it seems
to represent the charge distribution in the cross--section of the conducting
sheet. I also feel the need to replace the "+" in the denominator by a minus
sign, and to introduce a constant C to be determined due to excitation.
So the formula should look like
j(x) = (C/(pi*h))*(1/sqrt(1 - (x/h)**2))
But possibly i'm wrong because i couldn't make something out of
> where the ground plane is oriented parallel to the earth, and the sun is at
> high noon
which i suppose shall define a coordinate system. Can anybody help me?
Thanks in advance,
| | |
| Uwe Keller | tel. : +49 5251-606181 |
| clab / Analog System Engineering | fax : +49 5251-606155 |
| Fuerstenalle 11 -- 33102 Paderborn -- Germany | email: uk@c-lab.de |
| | |
|
{"url":"http://www.qsl.net/wb6tpu/si-list2/pre99/0286.html","timestamp":"2014-04-20T23:55:55Z","content_type":null,"content_length":"3723","record_id":"<urn:uuid:dcf07334-5c0b-4fad-90e1-4aace19d2807>","cc-path":"CC-MAIN-2014-15/segments/1397609539337.22/warc/CC-MAIN-20140416005219-00422-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Multiplicative non-abelian sharing schemes and their application to threshold cryptography
- Advances in Cryptology -- CRYPTO 97 , 1997
"... We describe efficient techniques for a number of parties to jointly generate an RSA key. At the end of the protocol an RSA modulus N = pq is publicly known. None of the parties know the
factorization of N. In addition a public encryption exponent is publicly known and each party holds a share of the ..."
Cited by 124 (4 self)
Add to MetaCart
We describe efficient techniques for a number of parties to jointly generate an RSA key. At the end of the protocol an RSA modulus N = pq is publicly known. None of the parties know the factorization
of N. In addition a public encryption exponent is publicly known and each party holds a share of the private exponent that enables threshold decryption. Our protocols are efficient in computation and
communication. All results are presented in the honest but curious settings (passive adversary).
- In Proceedings of the 8th USENIX Security Symposium , 1999
"... The ITTC project (Intrusion Tolerance via Threshold Cryptography) provides tools and an infrastructure for building intrusion tolerant applications. Rather than prevent intrusions or detect them
after the fact, the ITTC system ensures that the compromise of a few system components does not compromis ..."
Cited by 61 (0 self)
Add to MetaCart
The ITTC project (Intrusion Tolerance via Threshold Cryptography) provides tools and an infrastructure for building intrusion tolerant applications. Rather than prevent intrusions or detect them
after the fact, the ITTC system ensures that the compromise of a few system components does not compromise sensitive security information. To do so we protect cryptographic keys by distributing them
across a few servers. The keys are never reconstructed at a single location. Our designs are intended to simplify the integration of ITTC into existing applications. We give examples of embedding
ITTC into the Apache web server and into a Certication Authority (CA). Performance measurements on both the modied web server and the modied CA show that the architecture works and performs well. 1
Introduction To combat intrusions into a networked system one often installs intrusion detection software to monitor system behavior. Whenever an \irregular" behavior is observed the software noties
an admi...
- In Proc. of CRYPTO '02, LNCS 2442 , 2002
"... Abstract. A black-box secret sharing scheme for the threshold access structure Tt,n is one which works over any finite Abelian group G. Briefly, such a scheme differs from an ordinary linear
secret sharing scheme (over, say, a given finite field) in that distribution matrix and reconstruction vector ..."
Cited by 25 (7 self)
Add to MetaCart
Abstract. A black-box secret sharing scheme for the threshold access structure Tt,n is one which works over any finite Abelian group G. Briefly, such a scheme differs from an ordinary linear secret
sharing scheme (over, say, a given finite field) in that distribution matrix and reconstruction vectors are defined over Z and are designed independently of the group G from which the secret and the
shares are sampled. This means that perfect completeness and perfect privacy are guaranteed regardless of which group G is chosen. We define the black-box secret sharing problem as the problem of
devising, for an arbitrary given Tt,n, a scheme with minimal expansion factor, i.e., where the length of the full vector of shares divided by the number of players n is minimal. Such schemes are
relevant for instance in the context of distributed cryptosystems based on groups with secret or hard to compute group order. A recent example is secure general multi-party computation over black-box
rings. In 1994 Desmedt and Frankel have proposed an elegant approach to the black-box secret sharing problem based in part on polynomial interpolation over cyclotomic number fields. For arbitrary
given Tt,n with 0 < t < n − 1, the expansion factor of their scheme is O(n). This is the best previous general approach to the problem. Using certain low degree integral extensions of Z over which
there exist pairs of sufficiently large Vandermonde matrices with co-prime determinants, we construct, for arbitrary given Tt,n with 0 < t < n − 1, a black-box secret sharing scheme with expansion
factor O(log n), which we show is minimal. 1
- ADVANCES IN CRYPTOLOGY, ASIACRYPT 98, LNCS 1514 , 1998
"... With equitable key escrow the control of society over the individual and the control of the individual over society are shared fairly. In particular, the control is limited to specified time
periods. We consider two applications: time controlled key escrow and time controlled auctions with closed b ..."
Cited by 17 (5 self)
Add to MetaCart
With equitable key escrow the control of society over the individual and the control of the individual over society are shared fairly. In particular, the control is limited to specified time periods.
We consider two applications: time controlled key escrow and time controlled auctions with closed bids. In the rst the individual cannot be targeted outside the period authorized by the court. In the
second the individual cannot withhold his closed bid beyond the bidding period. We propose two protocols, one for each application. We do not require the use of temper-proof devices.
, 1998
"... . In this paper, we show an efficient (k; n) threshold secret sharing scheme over any finite Abelian group such that the size of share is q=2 (where q is a prime satisfying n q ! 2n), which is a
half of that of Desmedt and Frankel's scheme. Consequently, we can obtain a threshold RSA signature sche ..."
Cited by 4 (1 self)
Add to MetaCart
. In this paper, we show an efficient (k; n) threshold secret sharing scheme over any finite Abelian group such that the size of share is q=2 (where q is a prime satisfying n q ! 2n), which is a half
of that of Desmedt and Frankel's scheme. Consequently, we can obtain a threshold RSA signature scheme in which the size of shares of each signer is only a half. 1 Introduction Secret sharing schemes
[1, 2] are a useful tool not only in the key management but also in multiparty protocols. Especially, threshold cryptosystems [3] which are very important, where the power to sign or decrypt messages
is distributed to several agents. For example, in (k; n) threshold signature schemes, the power to sign messages is shared by n signers P 1 ; \Delta \Delta \Delta ; Pn in such a way that any subset
of k or more signers can collaborate to produce a valid signature on any given message, but no subset of fewer than k signers can forge a signature even after the system has produced many signatures
- Eurocrypt'96, pp.96--106. Lecture Notes in Computer Science vol.1070
"... Yranklin and Reiter introduced at Eurocrypt '95 verifiable signature sharing, a primitive for a fault tolerant distribution of signature verification. They proposed various practical protocols.
For RSA signatures with exponent e -- 3 and n processors their protocol allows for up to (n - 1)/5 fau ..."
Cited by 3 (0 self)
Add to MetaCart
Yranklin and Reiter introduced at Eurocrypt '95 verifiable signature sharing, a primitive for a fault tolerant distribution of signature verification. They proposed various practical protocols. For
RSA signatures with exponent e -- 3 and n processors their protocol allows for up to (n - 1)/5 faulty processors (in general (n - 1)/(2 + e)).
, 1998
"... This paper first extends the result of Blakley and Kabatianski [3] to general non-perfect SSS using information-theoretic arguments. Furthermore, we ..."
, 2006
"... In this paper we extend the threshold secret sharing schemes based on the Chinese remainder theorem in order to deal with more general access structures. Aspects like ..."
Cited by 2 (1 self)
Add to MetaCart
In this paper we extend the threshold secret sharing schemes based on the Chinese remainder theorem in order to deal with more general access structures. Aspects like
|
{"url":"http://citeseerx.ist.psu.edu/showciting?cid=524865","timestamp":"2014-04-21T06:39:53Z","content_type":null,"content_length":"31519","record_id":"<urn:uuid:58da0147-6f40-4c6b-bb55-4493f6c9b227>","cc-path":"CC-MAIN-2014-15/segments/1397609539493.17/warc/CC-MAIN-20140416005219-00351-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Patent application title: ENHANCED AWG WAVEF0RM CALIBRATION USING S-PARAMETERS
Sign up to receive free email alerts when patent applications with chosen keywords are published SIGN UP
Embodiments of the present invention provide enhanced methods of calibrating arbitrary waveform generators using s-parameters, and arbitrary waveform generators calibrated according to those methods.
Methods are provided for calibrating a single, non-interleaved channel of an arbitrary waveform generator, calibrating multiple interleaved channels, and calibrating pairs of channels, both
interleaved and non-interleaved, to generate differential signals.
A method of calibrating a channel of an arbitrary waveform generator comprising the steps of: measuring an output response of the channel (τ); measuring a source match of the channel (Γ
); determining an input reflection coefficient of a device under test (Γ
); and calculating a correction filter (g) for the channel based on τ, Γ
, and Γ
A method as in claim 1 wherein the channel is a single, non-interleaved channel.
A method as in claim 1 wherein the channel comprises a plurality of interleaved channels.
A method as in claim 1 wherein the channel comprises a pair of non-interleaved channels.
A method as in claim 1 wherein the channel comprises a pair of interleaved channels.
A method as in claim 1 wherein Γ
is an ideal, calculated value.
A method as in claim 1 wherein Γ
is a measured value.
A method as in claim 1 wherein: an external device is connected to an output port of the arbitrary waveform generator; and the calibration is performed at an output port of the external device.
An arbitrary waveform generator calibrated according to the method of any one of claims
FIELD OF THE INVENTION [0001]
The present invention relates to test and measurement instruments, and more particularly to the calibration of arbitrary waveform generators.
BACKGROUND OF THE INVENTION [0002]
Arbitrary Waveform Generators (AWGs) are test and measurement instruments that are used to generate analog signals having virtually any waveshape. In operation, a user defines a desired analog signal
point-by-point as a series of digital values. An AWG then "plays out" the digital values using a precision digital-to-analog converter to provide the analog signal. AWGs such as the AWG7000 Arbitrary
Waveform Generator Series available from Tektronix, Inc, of Beaverton, Oreg. are used for wideband signal generation applications, receiver stress testing of high-speed serial data, and other
applications where complex signal creation is required.
For various reasons, the measured frequency characteristics of signals produced by AWGs sometimes differ from the frequency characteristics of their input waveform data. Calibration techniques have
been proposed to correct the output responses of AWGs, however, none of them has proven entirely satisfactory.
Thus, there exists a need for enhanced methods of calibrating AWGs.
SUMMARY OF THE INVENTION [0005]
Embodiments of the present invention provide enhanced methods of calibrating arbitrary waveform generators using s-parameters, and arbitrary waveform generators calibrated according to those methods.
Methods are provided for calibrating a single, non-interleaved channel of an arbitrary waveform generator, calibrating multiple interleaved channels, and calibrating pairs of channels, both
interleaved and non-interleaved, to generate differential signals.
The objects, advantages, and other novel features of the present invention are apparent from the following detailed description when read in conjunction with the appended claims and attached
BRIEF DESCRIPTION OF THE DRAWINGS [0007]
FIG. 1 depicts a simplified, high-level block diagram of an arbitrary waveform generator according to a first embodiment of the present invention.
FIG. 2 depicts a first signal flow graph that corresponds to FIG. 1.
FIG. 3 depicts a second signal flow graph that corresponds to FIG. 1.
FIG. 4 depicts a method that corresponds to FIG. 1.
FIG. 5 depicts a simplified, high-level block diagram of an arbitrary waveform generator according to a second embodiment of the present invention.
FIG. 6 depicts a first signal flow graph that corresponds to FIG. 5.
FIG. 7 depicts a second signal flow graph that corresponds to FIG. 5.
FIG. 8 depicts a method that corresponds to FIG. 5.
DETAILED DESCRIPTION OF THE INVENTION [0015]
1. Accounting for Reflected Waves
The inventor has recognized that AWGs appear to have imperfect output responses because prior AWG calibration techniques have not taken into account the interaction of reflected waves between the AWG
and the measurement instrument during calibration, or between the AWG and the device under test (DUT) during use.
Accordingly, embodiments of the present invention provide methods of calibrating a channel of an AWG, and arbitrary waveform generators calibrated according to those methods, that take into account
not only the output response of the channel, but also the interaction of reflected waves between the AWG and a measurement instrument during calibration, and between the AWG and the DUT during use.
FIG. 1 depicts an AWG 100 having a single, non-interleaved channel according to an embodiment of the present invention. In operation, a processor 105 receives waveform data that describes a desired
output analog signal. The waveform data may be received from a memory, a storage device, or the like. The processor 105 may be implemented as software running on a general-purpose microprocessor, a
dedicated application specific integrated circuit (ASIC), a field programmable gate array (FPGA), or the like. The processor 105 applies a correction filter g to the waveform data in order to correct
the output response of the channel. The correction filter g can be applied to the waveform data by convolving the correction filter g with the waveform data in the time domain, or by multiplying them
together in the frequency domain. The processed waveform data is converted into an analog signal using a digital-to-analog converter (DAC) 110. The analog signal is filtered by an analog output
circuit 115, which may include an amplifier, an attenuator, a switch, a reconstruction filter, and the like. The filtered analog signal is then applied to a DUT 120. "The single, non-interleaved
channel" refers to the signal path from the DAC 110 through the analog output circuit 115. In some embodiments (not shown), the DAC 110 provides a differential output. In that case, the two outputs
may be considered either a pair of channels or a single differential channel.
In some embodiments, the correction filter g is calculated as follows:
Referring now to FIG. 2, the output response (amplitude and phase) of the channel is measured with a calibrated measurement instrument such as a sampling oscilloscope. The source match, or reflection
coefficient, is measured with a calibrated measurement instrument such as a time-domain reflectometer (TDR) or a network analyzer. Together, they form the s-parameters of the source S
s and S
s. For clarity later in the analysis, these are written as τ and Γ
[ b 1 b 2 ] = [ 0 0 τ Γ s ] [ a 1 a 2 ] ( Equation 1 ) ##EQU00001##
s and S
s equal zero because the input of the DAC is digital in nature, not analog, and thus, no digital data applied to its input can reflect back, and no analog signal applied to its output can pass
through to its input.
To complete the analysis, the DUT input reflection coefficient (Γ
) must be known. Then, the response equations can be written:
(Equation 2)
(Equation 3)
Substituting Equation 3 into Equation 2 yields:
(Equation 4)
Rearranging Equation 4 to solve for b
2 = a s g τ 1 - Γ L Γ s ( Equation 5 ) ##EQU00002##
Substituting Equation 5 into Equation 3 yields:
2 = a s g τ Γ L 1 - Γ L Γ s ( Equation 6 ) ##EQU00003##
A. Calibration
A calibrated measurement instrument is defined as an instrument that correctly measures the phase and amplitude of the incoming wave from a matched 50 ohm source. Its input is not necessarily
matched, and has an input reflection coefficient Γ
. Similarly, a calibrated AWG is defined as an AWG that produces an accurate waveform into a matched 50 ohm load and has an output reflection coefficient Γ
. In that case, Equation 5 reduces to:
τ (Equation 7)
g match
= 1 τ ( Equation 9 ) ##EQU00004##
2 match a s = 1 ( Equation 8 ) ##EQU00005##
However, when the calibrated instrument and the source are put together (with g=1), the measured result is:
2 meas = a s τ 1 - Γ L Γ s ( Equation 10 ) ##EQU00006##
Rearranging Equation 10 yields:
τ = b 2 meas a s ( 1 - Γ L Γ s ) ( Equation 11 ) ##EQU00007##
Substituting Equation 11 into Equation 9 yields:
g match
= 1 τ = a s b 2 meas ( 1 - Γ L Γ s ) ( Equation 12 ) ##EQU00008##
B. Driving a DUT
Now, when a new device is driven with the calibrated source, the resulting forward wave is:
2 DUT = a s g match τ 1 - Γ LDUT Γ s ( Equation 13 ) ##EQU00009##
Lastly, the DUT input reflection coefficient must be taken into account. The desired output is just a
τ, so the correction filter g is:
) (Equation 14)
Where g
represents the correction filter assuming a matched load, Γ
represents the input reflection coefficient of the DUT, and Γ
represents the output reflection coefficient of the AWG.
In some embodiments, the DUT input reflection coefficient is an ideal, calculated value, selected so that the correction filter corrects the output response so that it is right when working into a
matched 50 ohm load, an open circuit, or any other specified impedance. This correction filter can be generated during manufacturing, stored in the AWG, and used when the DUT s-parameters are not
available. In other embodiments, the DUT input reflection coefficient is a measured value by the user, in which case the correction filter corrects the output response so that it is right when
working into the DUT.
Although the AWG shown and described above only has a single, non-interleaved channel, it will be appreciated that this same calibration approach can also be used to improve the output response of an
AWG having multiple interleaved channels. That is, the output response of an interleaved AWG can be improved by taking into account the interaction of reflected waves between the AWG and the
measurement instrument during calibration, and between the AWG and the DUT during use. In that case, the correction filter g developed above can be used as-is, provided that the multiple interleaved
channels are treated as a single higher rate non-interleaved channel, and the source match of the channel (Γ
) equals the net source match of the arbitrary waveform generator (S
), described in detail below.
C. Adding an External Device
Referring now to FIG. 3, when an external device 125 such as a cable, an up-converter, or the like is used between the AWG 100 and the DUT 120, it can be more appropriate to calibrate at the output
of the external device 125. In that case, the correction filter g is essentially the same as described above. This is because, when the s-parameters of the external device 125 are cascaded with the
source parameters, the form of the new effective source output remains the same because of the two zeros in the first row of the source matrix. The shifted source parameters can be measured directly
with the external device 125 in place or calculated using known front panel referenced AWG 100 parameters and external device parameters.
FIG. 4 depicts a method 400 of calibrating a channel of an arbitrary waveform generator according to an embodiment of the present invention, In step 405, an output response of the channel is measured
(τ). In step 410, a source match of the channel is measured (Γ
). In step 415, an input reflection coefficient of a DUT is determined (Γ
). In step 420, a correction filter (g) for the channel is calculated based on τ, Γ
, and Γ
. Steps 405, 410, and 415 are not required to be performed in the order shown, but rather can be performed in any order.
2. Correcting Multiple Interleaved Channels
Many AWGs achieve higher samples rate by interleaving multiple channels together. However, when doing so, the resulting output response is more difficult to correct for several reasons. The first
reason is that the individual output responses of the interleaved channels will not match, and thus a single correction filter cannot be completely right. The second reason is that the overall output
response will be influenced by reflections between the multiple sources, as well as reflections between the multiple sources and the DUT.
Accordingly, embodiments of the present invention provide methods of calibrating multiple interleaved channels of an AWG, and arbitrary waveform generators calibrated according to those methods, that
take into account the output response of each interleaved channel, the interaction of reflected waves between the AWG and a measurement instrument during calibration and between the AWG and the DUT
during use, or both simultaneously. For reasons that will be explained below, these methods correct the output response of each channel independently, and apply the correction filter to the lower
sample rate waveform input to each DAC rather than the full sample rate waveform.
FIG. 5 depicts an AWG 500 having two interleaved channels according to an embodiment of the present invention. The AWG 500 is similar to the AWG 100, except that it includes two DACs 510A and 510B
instead of a single DAC 110, and a combiner 530. The two DACs 510A and 510B are clocked by two clock signals (not shown) that are phase shifted relative to one another by 180 degrees. In operation,
the processor 505 separates the waveform data into samples for the first channel and samples for the second channel, and then applies a first correction filter g
to the samples for the first channel, and applies a second correction filter g
to the samples for the second channel. g
and g
correct the output responses of the first and second interleaved channels, respectively, and also take into account the interaction of reflected waves between the AWG 500 and a measurement instrument
during calibration, and between the AWG 500 and the DUT 120 during use. The DAC 510A converts the samples for the first channel into a first analog signal, and the DAC 510B converts the samples for
the second channel into a second analog signal. The first and second analog signals are then combined into a single analog signal with the combiner 530, which is any device used to combine analog
signals. The resulting analog signal has double the sample rate of either of the individual DACs 510A and 510B. As in the AWG 100, the combined analog signal is then filtered with an analog output
circuit 115 and applied to a DUT 120, "The first interleaved channel" refers to the signal path from the processor 505 through the DAC 510A to the analog output circuit 115, and "the second
interleaved channel" refers to the signal path from the processor 505 through the DAC 510B to the analog output circuit 115.
In some embodiments, the correction filters g
and g
are developed as follows:
The combiner 530 can be any device used to combine analog signals. However, in the following discussion, the combiner 530 is considered to be a symmetric, resistive power combiner. Thus, referring
now to FIG. 6, the combiner 530 can be represented by a 3×3 s-parameter matrix:
= [ s 11 s 12 s 13 s 21 s 22 s 23 s 31 s 32 s 33 ] ( Equation 15 ) ##EQU00010##
In matrix notation, the s-parameter equation is:
=SA (Equation 16)
= [ b 1 b 2 b 3 ] , and A = [ a 1 a 2 a 3 ] ( Equation 17 ) ##EQU00011##
The solution for the output taking into account the source parameters and the combiner is developed in the Appendix.
=(1-S64 )
(Equation 18)
The difficulty with this approach, however, is that the solution requires knowledge of the details of the reflection and transmission parameters of each channel, along with the two internal ports of
the combiner. However, it is very difficult to directly measure individual parameters once the instrument is assembled. If they can be determined at all, it would only be through a complex set of
calibration measurements and calculations because the response can only be observed at the output.
On the other hand, if the AWG is viewed from the perspective of the single output port, then the details of the internal interactions are not important. This perspective is depicted in FIG. 7, where
the output wave is the sum of the response from each channel to the output, and the DUT 120 interacts with a net single port reflection coefficient at the output port. An overall net three port
s-parameter network can be considered to include the DAC outputs, the interconnect, and the combiner. With this simplification, Equations 16 and 17 become:
' = S net A ' ( Equation 19 ) B ' = [ b 1 ' b 2 ' b 3 ' ] , and A ' = [ a 1 ' a 2 ' a 3 ] = [ a s 1 g 1 a s 2 g 2 a 3 ] ( Equation 20 ) ##EQU00012##
The two source ports are idealized; there are no reflections between the sources and the effective combiner, meaning that s
and s
are zero; and, returning waves b'
and b'
are zero, meaning that s
, s
3, s
, and s
3 are all zero. Thus:
S net
= [ 0 0 0 0 0 0 s 31 net s 32 net s 33 net ] = [ 0 0 0 0 0 0 τ 1 net τ 2 net Γ s net ] ( Equation 21 ) ##EQU00013##
Where τ
and τ
are the output responses of the two sources measured through the effective combiner. τ
and τ
are measured "independently," that is, the individual output response of DAC 510A is measured with DAC 510B set to zero, and the individual output response of DAC 510B is measured with DAC 510A set
to zero.
We are left with an equation for b
dependent on the two source waves and the reflection from the load, a
2g.- sub.2+Γ
(Equation 22)
(Equation 23)
Substituting Equation 23 into Equation 22 yields:
2g.- sub.2+Γ
(Equation 24)
Rearranging yields:
3 = τ 1 net a s 1 g 1 + τ 2 net a s 2 g 2 1 - Γ s net Γ L ( Equation 25 ) ##EQU00014##
Equation 25 is the sum of the response from each channel modified by reflections between the output port and load. It is identical to Equation 5 for the single channel case, except that the source
transmission is the sum of two sources.
A. Calibration
Now, when working into a matched load, the output is:
.sub- .s2g
(Equation 26)
If the output from each DAC is measured independently with the other DAC set to zero, then the two correction factors are:
1 match = 1 τ 1 net = a s 1 b 3 meas 1 ( 1 - Γ L Γ s ) ( Equation 27 ) g 2 match = 1 τ 2 net = a s 2 b 3 meas 2 ( 1 - Γ L Γ s ) ( Equation 28 ) ##EQU00015##
Finally, the output of the calibrated source is:
3 = ( τ 1 net a s 1 g 1 match + τ 2 net a s 2 g 2 match ) 1 1 - Γ s net Γ LDUT ( Equation 29 ) ##EQU00016##
B. Driving a DUT
Like the single channel case, if the DUT reflection coefficient is known, then the source waveform can be compensated to correct for it. This part of the correction can be included in a total filter
for each DAC or applied to the starting waveform at the full sample rate, since it is the same for both DACs.
= ( τ 1 net a s 1 g 1 match + τ 2 net a s 2 g 2 match ) 1 g refl ( Equation 30 ) ##EQU00017##
(Equation 31)
Thus, the correction filters g
and g
are as follows:
(Equation 32)
(Equation 33)
Where g
and g
2 represent the first and second correction filters assuming a matched load.
Although the discussion above describes generating two correction filters for a system having two interleaved channels, it will be appreciated that, by applying similar reasoning, additional
correction filters can also be developed for systems using higher degrees of interleaving. That is, correction filters can be generated for systems having three interleaved channels, four interleaved
channels, and so on. In that case, to generalize the notation for an arbitrary number of interleaved channels, g
, g
, and so on are collectively referred to as g
, and τ
, τ
, and so on are collectively referred to as τ
Also, although the correction filters described above simultaneously take into account both the individual output responses of the interleaved channels and the effects of reflections between the DUT
and the multiple sources at the same time, correction filters can also be generated that only take into account the individual output responses of the interleaved channels. That is, in some
embodiments, an arbitrary waveform generator is calibrated by measuring the output response of each interleaved channel independently and then generating a plurality of correction filters, one for
each interleaved channel, based solely on its corresponding measured output response. In that case, each correction filter equals the inverse of its associated measured output response. In other
embodiments, the DUT input reflection coefficient and the net source match of the AWG are also measured and used to improve the accuracy of those correction filters.
FIG. 8 depicts a method 800 of calibrating a plurality of interleaved channels of an arbitrary waveform generator according to an embodiment of the present invention. In step 805, a plurality of
output responses, one for each of the plurality of interleaved channels is measured (τ
). Optionally, in step 810, a net source match of an output port of the arbitrary waveform generator is measured (S
). Optionally, in step 815, an input reflection coefficient of a device under test is determined (Γ
). In step 820, a plurality of correction filters (g
) are calculated, one for each of the plurality of interleaved channels, based on τ
, S
, and Γ
. Steps 805, 810, and 815 are not required to be performed in the order shown, but rather can be performed in any order.
3. Correcting Pairs of Channels Used to Generate Differential Signals
In some cases, pairs of channels are used to generate differential signals, both pairs of single, non-interleaved channels and pairs of multiple interleaved channels. Each of those channels can be
individually calibrated using the techniques described above. Alternatively, the pairs of channels can be calibrated simultaneously using the techniques described above by replacing the single-ended
parameters with differential parameters. That is, the single-ended output response of a channel (t) would be replaced with the differential output response of a pair of single-ended, non-interleaved
channels, or the differential output response of a pair of multiple interleaved channels, and so on.
It will be appreciated from the foregoing discussion that the present invention represents a significant advance in the field of test and measurement instruments. Although specific embodiments of the
invention have been illustrated and described for purposes of illustration, it will be understood that various modifications may be made without departing from the spirit and scope of the invention.
Accordingly, the invention should not be limited except as by the appended claims.
APPENDIX [0074]
As defined above, the combiner is represented with a 3 port s-parameter matrix:
=SA (Equation 34)
= [ b 1 b 2 b 3 ] , and A = [ a 1 a 2 a 3 ] ( Equation 35 ) ##EQU00018##
Given reflection coefficients for the two channels and the load, the elements of A are:
(Equation 36)
(Equation 37)
(Equation 38)
Then, Equation 34 can be written:
+STB (Equation 39)
Γ = [ Γ s 1 0 0 0 Γ s 2 0 0 0 Γ L ] , T = [ τ 1 0 0 0 τ 2 0 0 0 τ L ] , and A s = [ a s 1 g 1 a s 2 g 2 a 3 ] ( Equation 40 ) ##EQU00019##
Next, rearrange to solve for B:
(Equation 41)
(Equation 42)
Equation 42 can be written as a simple matrix equation with (1-SΓ)
(Equation 43)
And then:
(Equation 44)
Equation 43 solves for all three terms in B, but b
is the one we are interested in. From Equations 40 and 43 the solution for b
(Equation 45)
But a
is just:
(Equation 46)
Which can be substituted into and Equation 43 and solved for b
Equation 43 is not quite an s-parameter equation since b
and b
are internal and not at the ports where a
and a
2 are defined. However, it suggests that a net effective s-parameter equation can be written.
Patent applications by TEKTRONIX, INC.
Patent applications in class Sensor or transducer
Patent applications in all subclasses Sensor or transducer
User Contributions:
Comment about this patent or add new information about this topic:
|
{"url":"http://www.faqs.org/patents/app/20130080105","timestamp":"2014-04-16T11:26:33Z","content_type":null,"content_length":"56694","record_id":"<urn:uuid:414e0817-d328-4150-a276-8e4d7269191f>","cc-path":"CC-MAIN-2014-15/segments/1397609523265.25/warc/CC-MAIN-20140416005203-00440-ip-10-147-4-33.ec2.internal.warc.gz"}
|
New Page 1
ODESSA COLLEGE
SCIENCE, HEALTH, AND MATHEMATICS DIVISION
DEPARTMENT OF MATHEMATICS
COURSE NUMBER: Math 0371
COURSE TITLE: Prealgebra
CREDIT HOURS: 3 LECTURE HOURS: 3 LAB HOURS: 14
PREREQUISITES: MATH 0370 passed with a “C” or better or satisfactory placement score.
COREQUISITE: Math Academic Resource Lab
CATALOG DESCRIPTION: A developmental course using whole numbers, decimals, fractions, integers, linear equations, problem solving, geometry formulas, real number properties, polynomials, exponents,
radicals, equations, and graphs of lines. This course does not satisfy requirements for any degree plan at Odessa College and will not be accepted by any senior colleges. Placement testing is
available. Attendance is mandatory for TASP liable students. (SCANS 3, 8, 9)
TEXTBOOK: PreAlgebra, by Jamie Blair, John Tobey and Jeffrey Slater, Second edition, Prentice Hall Publisher, 2002.
After completing this course, the student should be able to demonstrate competency in:
1.0 Reading and writing whole numbers and the operations of arithmetic on whole numbers including the order of operations and solving applied problems.
2.0 The basic operations of integers and solving applied problems using integers.
3.0 The basic operations of arithmetic on equations and algebraic expressions and techniques for solving applied problems.
4.0 The basic operations of arithmetic on fractions and techniques for solving applied problems.
5.0 The use of ratios in comparing magnitudes and solving proportions and using ratio and proportions to solve applied problems.
6.0 The basic operations of arithmetic on polynomials including identifying terms and solving applied problems using polynomials.
7.0 The basic operations of arithmetic on decimal numbers and solving applied problems using decimals.
8.0 The basic definitions and formulas of geometry and the use of this knowledge to solve applied problems.
9.0 The interpretation of information from line graphs, bar graphs, pie charts, pictographs, and tables and recognizing graphic representations of data.
See Instructor Information Sheet for specific course requirements.
See Instructor Information Sheet for specific method of evaluation.
The number of absences is limited to the following:
MWF classes - 7 absences allowed
TTH or MW classes - 5 absences allowed
One 3 hour class/week - 2 absences allowed
You are expected to create your own assignments and take tests without notes or other outside assistance. All work is expected to be your own. If unethical behavior is detected, the instructor will
take disciplinary steps consistent with departmental and college policy.
1. Tutors available in the Math Academic Resource Center
2. Math video tapes
3. Computer tutorial software
1.0 To demonstrate competency in reading and writing whole numbers and the operations of arithmetic on whole numbers including using the order of operations and solving applied problems, the student
should be able to:
1.1 Express numbers in standard form, expanded notation and written word form.
1.2 Round whole numbers.
1.3 Identify place value.
1.4 Compare whole numbers using inequality symbols.
1.5 Compute the sum of whole numbers.
1.6 Compute the difference of whole numbers.
1.7 Compute the product of whole numbers.
1.8 Compute the quotient of whole numbers.
1.9 Evaluate expressions involving addition.
1.10 Evaluate expressions involving subtraction.
1.11 Simplify expressions using the properties of multiplication.
1.12 Write whole numbers and variables in exponential notation.
1.13 Evaluate whole numbers and variables in exponential notation.
1.14 Evaluate expressions using the order of operations.
1.15 Solve equations using whole numbers.
2.0 To demonstrate competency in the basic operations of arithmetic on integers and solving applied problems using integers the student should be able to:
2.1 Write the opposite of a number.
2.2 Write integer numbers in order.
2.3 Write the absolute value of a number.
2.4 Compare integers using inequality symbols.
2.5 *Interpret line graphs.
2.6 Compute the sum of integer numbers.
2.7 Compute the difference of integer numbers.
2.8 Compute the product of integer numbers.
2.9 Compute the quotient of integer numbers.
2.10 Evaluate algebraic expressions with integers.
2.11 Evaluate integer expressions using the order of operations.
2.12 Compute integers with exponents.
2.13 Combine like terms with integer coefficients.
2.14 Solve applied problems with integers.
3.0 To demonstrate competency in the basic operations of arithmetic on equations and algebraic expressions and techniques for solving applied problems, the student should be able to:
3.1 Solve equations using the addition principle.
3.2 Solve equations using the multiplication principle.
3.3 Solve equations using the division principle.
3.4 Translate English statements into equations.
3.5 Solve equations involving perimeter.
3.6 Solve equations involving angle measurements.
3.7 Compute areas of rectangles and parallelograms.
3.8 Solve equations involving volume.
3.9 Evaluate expressions using product rule for exponents.
3.10 Multiply algebraic expressions with exponents.
4.0 To demonstrate competency in the basic operations of arithmetic on fractions and techniques for solving applied problems the student should be able to:
4.1 Identify prime and composite numbers.
4.2 Compute the prime factors of whole numbers.
4.3 Rewrite proper fractions as improper fractions.
4.4 Rewrite improper fractions as proper fractions.
4.5 *Order fractions with the same denominator.
4.6 Compute equivalent fractions.
4.7 *Order fractions with different denominators.
4.8 Rewrite fractions in simplest form.
4.9 Simplify fractional expressions with exponents.
4.10 Calculate the least common multiple of expressions.
4.11 Compute the sum of fractions.
4.12 Compute the difference of fractions.
4.13 Compute the product of fractions.
4.14 Compute the quotient of fractions.
4.15 Evaluate fraction expressions using the order of operations.
4.16 Simplify complex fractions.
5.0 To demonstrate competency in the use of ratios in comparing magnitudes, in solving proportions and in solving applied problems using ratios and proportions, the student should be able to:
5.1 Write a ratio.
5.2 *Calculate ratios in business applications.
5.3 *Generate proportions and solve for missing quantity.
5.4 Write a rate.
5.5 Write a proportion.
5.6 Solve applied problems involving ratios and rates.
5.7 *Compute percents using proportion.
5.8 Convert percents to equivalent fractions and decimals.
5.9 Compute similarity using proportions.
6.0 To demonstrate competency in the basic operations of arithmetic on polynomials and solving applied problems using polynomials the student should be able to:
6.1 Identify terms of a polynomial.
6.2 Compute the sum of polynomials.
6.3 Compute the difference of polynomials.
6.4 Compute the product of polynomials.
6.5 Use the distributive property to multiply a monomial and a binomial.
6.6 Simplify geometric formulas involving monomials and binomials.
7.0 To demonstrate competency in the basic operations of arithmetic on decimal numbers and solving applied problems using decimals the student should be able to:
7.1 Read and write decimal numbers.
7.2 Calculate fraction equivalents for a decimal number.
7.3 Compare and order decimal numbers.
7.4 Round decimals to a given place value.
7.5 Compute the sum of decimals.
7.6 Compute the difference of decimals.
7.7 Compute the quotient of decimals.
7.8 Compute the product of decimals.
7.9 Compute perimeter and area of decimal plane figures.
7.10 Estimate sums and differences involving decimals.
7.11 Calculate a decimal equivalent for a fraction.
7.12 Solve equations using decimal numbers.
8.0 To demonstrate competency in using the most basic definitions and formulas of geometry and solving applied problems using these basic definitions the student should be able to:
8.1 *Compute perimeter of plane figures.
8.2 *Compute area of plane figures.
8.3 *Compute volume.
8.4 *Compute ratios for similar geometric figures.
9.0 To demonstrate competency in interpreting information from line graphs, bar graphs, pie charts, pictographs, and tables; recognizing graphic representations of data; and analyzing and
interpreting data using measures of central tendency, the student should be able to:
9.1 *Interpret bar graphs.
9.2 *Interpret line graphs.
9.3 *Interpret area graphs.
9.4 *Interpret circle graphs.
(Math, Reading, Communication, Technological Literacy, and/or Critical Thinking)
Back to Homepage
Theresa Evans, Coordinator of Math Academic Resource Center and Developmental Math Instructor
Please see my personal website for contact information, office hours, and general information.
Send questions and comments about this course to Theresa Evans at tevans@odessa.edu
If you experience technical difficulties with this page contact webmaster@odessa.edu
|
{"url":"http://www.odessa.edu/dept/math/tevans/Pre-Alg%20syllabus.htm","timestamp":"2014-04-18T05:42:21Z","content_type":null,"content_length":"46717","record_id":"<urn:uuid:e2cbf50c-d6fb-41be-9f96-793593a8b899>","cc-path":"CC-MAIN-2014-15/segments/1397609532573.41/warc/CC-MAIN-20140416005212-00036-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Algebraic Topology Course
Algebraic Topology Course,
Math 634, 635, 636, 2012/2013,
Professor Boris Botvinnik
The class meets on MWF at 9:00 a.m. Deady 210.
Office hours: by appointment.
We will use mostly my lecture notes for this course and the book
Algebraic Topology by A. Hatcher .
I stongly recommend to study in detail all assigned material.
There will be several homework assignments (totally 6-8 this Fall),
one Midterm Exam and the Final Exam.
Here is the review questions for the Fall Midterm Exam
Late homework is not appreciated: you should have a really good reason for that.
The grade will be calculated as follows:
Homework: 20 %
Midterm 30 %
Final 50 %
|
{"url":"http://darkwing.uoregon.edu/~botvinn/Topology-12-13.html","timestamp":"2014-04-19T04:20:40Z","content_type":null,"content_length":"1555","record_id":"<urn:uuid:b53e4e5f-b8e7-4b40-8fcf-5016783d4de5>","cc-path":"CC-MAIN-2014-15/segments/1398223206118.10/warc/CC-MAIN-20140423032006-00296-ip-10-147-4-33.ec2.internal.warc.gz"}
|
base-3.0.1.0: Basic libraries Source code Contents Index
Portability portable
Data.Typeable Stability experimental
Maintainer libraries@haskell.org
The Typeable class reifies types to some extent by associating type representations to types. These type representations can be compared, and one can in turn define a type-safe cast operation. To
this end, an unsafe cast is guarded by a test for type (representation) equivalence. The module Data.Dynamic uses Typeable for an implementation of dynamics. The module Data.Generics uses Typeable
and type-safe cast (but not dynamics) to support the "Scrap your boilerplate" style of generic programming.
class Typeable a where
cast :: (Typeable a, Typeable b) => a -> Maybe b
gcast :: (Typeable a, Typeable b) => c a -> Maybe (c b)
data TypeRep
data TyCon
showsTypeRep :: TypeRep -> ShowS
mkTyCon :: String -> TyCon
mkTyConApp :: TyCon -> [TypeRep] -> TypeRep
mkAppTy :: TypeRep -> TypeRep -> TypeRep
mkFunTy :: TypeRep -> TypeRep -> TypeRep
splitTyConApp :: TypeRep -> (TyCon, [TypeRep])
funResultTy :: TypeRep -> TypeRep -> Maybe TypeRep
typeRepTyCon :: TypeRep -> TyCon
typeRepArgs :: TypeRep -> [TypeRep]
tyConString :: TyCon -> String
typeRepKey :: TypeRep -> IO Int
class Typeable1 t where
class Typeable2 t where
class Typeable3 t where
class Typeable4 t where
class Typeable5 t where
class Typeable6 t where
class Typeable7 t where
gcast1 :: (Typeable1 t, Typeable1 t') => c (t a) -> Maybe (c (t' a))
gcast2 :: (Typeable2 t, Typeable2 t') => c (t a b) -> Maybe (c (t' a b))
typeOfDefault :: (Typeable1 t, Typeable a) => t a -> TypeRep
typeOf1Default :: (Typeable2 t, Typeable a) => t a b -> TypeRep
typeOf2Default :: (Typeable3 t, Typeable a) => t a b c -> TypeRep
typeOf3Default :: (Typeable4 t, Typeable a) => t a b c d -> TypeRep
typeOf4Default :: (Typeable5 t, Typeable a) => t a b c d e -> TypeRep
typeOf5Default :: (Typeable6 t, Typeable a) => t a b c d e f -> TypeRep
typeOf6Default :: (Typeable7 t, Typeable a) => t a b c d e f g -> TypeRep
The Typeable class
class Typeable a where Source
The class Typeable allows a concrete representation of a type to be calculated.
typeOf :: a -> TypeRep Source
Takes a value of type a and returns a concrete representation of that type. The value of the argument should be ignored by any instance of Typeable, so that it is safe to pass undefined as the
Type-safe cast
cast :: (Typeable a, Typeable b) => a -> Maybe b Source
The type-safe cast operation
gcast :: (Typeable a, Typeable b) => c a -> Maybe (c b) Source
A flexible variation parameterised in a type constructor
Type representations
A concrete representation of a (monomorphic) type. TypeRep supports reasonably efficient equality.
An abstract representation of a type constructor. TyCon objects can be built using mkTyCon.
showsTypeRep :: TypeRep -> ShowS Source
Construction of type representations
:: String the name of the type constructor (should be unique in the program, so it might be wise to use the fully qualified name).
-> TyCon A unique TyCon object
Builds a TyCon object representing a type constructor. An implementation of Data.Typeable should ensure that the following holds:
mkTyCon "a" == mkTyCon "a"
mkTyConApp :: TyCon -> [TypeRep] -> TypeRep Source
Applies a type constructor to a sequence of types
mkAppTy :: TypeRep -> TypeRep -> TypeRep Source
Adds a TypeRep argument to a TypeRep.
mkFunTy :: TypeRep -> TypeRep -> TypeRep Source
A special case of mkTyConApp, which applies the function type constructor to a pair of types.
Observation of type representations
splitTyConApp :: TypeRep -> (TyCon, [TypeRep]) Source
Splits a type constructor application
funResultTy :: TypeRep -> TypeRep -> Maybe TypeRep Source
Applies a type to a function type. Returns: Just u if the first argument represents a function of type t -> u and the second argument represents a function of type t. Otherwise, returns Nothing.
typeRepTyCon :: TypeRep -> TyCon Source
Observe the type constructor of a type representation
typeRepArgs :: TypeRep -> [TypeRep] Source
Observe the argument types of a type representation
tyConString :: TyCon -> String Source
Observe string encoding of a type representation
typeRepKey :: TypeRep -> IO Int Source
Returns a unique integer associated with a TypeRep. This can be used for making a mapping with TypeReps as the keys, for example. It is guaranteed that t1 == t2 if and only if typeRepKey t1 ==
typeRepKey t2.
It is in the IO monad because the actual value of the key may vary from run to run of the program. You should only rely on the equality property, not any actual key value. The relative ordering of
keys has no meaning either.
The other Typeable classes
Note: The general instances are provided for GHC only.
class Typeable1 t where Source
Variant for unary type constructors
typeOf1 :: t a -> TypeRep Source
class Typeable2 t where Source
Variant for binary type constructors
typeOf2 :: t a b -> TypeRep Source
class Typeable3 t where Source
Variant for 3-ary type constructors
typeOf3 :: t a b c -> TypeRep Source
class Typeable4 t where Source
Variant for 4-ary type constructors
typeOf4 :: t a b c d -> TypeRep Source
class Typeable5 t where Source
Variant for 5-ary type constructors
typeOf5 :: t a b c d e -> TypeRep Source
class Typeable6 t where Source
Variant for 6-ary type constructors
typeOf6 :: t a b c d e f -> TypeRep Source
class Typeable7 t where Source
Variant for 7-ary type constructors
typeOf7 :: t a b c d e f g -> TypeRep Source
gcast1 :: (Typeable1 t, Typeable1 t') => c (t a) -> Maybe (c (t' a)) Source
Cast for * -> *
gcast2 :: (Typeable2 t, Typeable2 t') => c (t a b) -> Maybe (c (t' a b)) Source
Cast for * -> * -> *
Default instances
Note: These are not needed by GHC, for which these instances are generated by general instance declarations.
typeOfDefault :: (Typeable1 t, Typeable a) => t a -> TypeRep Source
For defining a Typeable instance from any Typeable1 instance.
typeOf1Default :: (Typeable2 t, Typeable a) => t a b -> TypeRep Source
For defining a Typeable1 instance from any Typeable2 instance.
typeOf2Default :: (Typeable3 t, Typeable a) => t a b c -> TypeRep Source
For defining a Typeable2 instance from any Typeable3 instance.
typeOf3Default :: (Typeable4 t, Typeable a) => t a b c d -> TypeRep Source
For defining a Typeable3 instance from any Typeable4 instance.
typeOf4Default :: (Typeable5 t, Typeable a) => t a b c d e -> TypeRep Source
For defining a Typeable4 instance from any Typeable5 instance.
typeOf5Default :: (Typeable6 t, Typeable a) => t a b c d e f -> TypeRep Source
For defining a Typeable5 instance from any Typeable6 instance.
typeOf6Default :: (Typeable7 t, Typeable a) => t a b c d e f g -> TypeRep Source
For defining a Typeable6 instance from any Typeable7 instance.
Produced by Haddock version 0.8
|
{"url":"http://www.haskell.org/ghc/docs/6.8.2/html/libraries/base/Data-Typeable.html","timestamp":"2014-04-18T21:15:42Z","content_type":null,"content_length":"55015","record_id":"<urn:uuid:73cc874c-345e-44a8-800f-733e6a996fee>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.9/warc/CC-MAIN-20140416005215-00451-ip-10-147-4-33.ec2.internal.warc.gz"}
|
The surface area of a three-dimensional object is the total area of all the sides that make up the object. For example, a cube is made of six squares, a right triangular prism is made of three
rectangles and two triangles, and a cylinder is made of two circles and a rectangle (wrapped around, instead of laying flat).
Cube: The area of a square is the length of one side squared. Since the cube is made of six of these squares, multiply the result by six.
Example: A 3 inch cube has surface area
3^2 × 6 = 54 square inches.
Right Triangular Prism: The area of a triangle is 1/2 the base of the triangle times the height of the triangle. The area of a rectangle is the length of one side times the length of an adjacent
side. A Right Triangular Prism is made of two triangles and three rectangles, so multiply the area of the triangle by two and add the areas of the three rectangles.
Example: A 7" tall right prism with a 3" by 4" by 5" right triangle base has surface area
2 × (3 × 4) + (3 × 7) + (4 × 7) + (5 × 7) = 24 + 21 + 28 + 35 = 108 square inches.
Cylinder: The area of a circle is π × r^2 and the area of a rectangle is the length of one side times the length of an adjacent side. In this case, the length of one side of the rectangle is the same
as the circumference of the circle (given by π × d), and the length of the other side is the height of the cylinder. Multiply the area of the circle by two and add the area of the rectangle.
Example: A 5" tall cylinder with ends of a 4" diameter has surface area
2 × π × (4/2)^2 + π × 5 = 25.1 + 15.7 = 40.8 square inches.
...and so forth. All we need to do is add up all the areas of all the sides of the object. But what if the object isn't made of nice flat sides? For example, how do we find the surface area of a
sphere? The long answer involves calculus, and integrating to find the surface of a revolution about an axis, but for the purposes of this writeup I will assume no experience with advanced
mathematics. The surface area of many common shapes has been found for us and all we really have to do is look up the formula:
4 × π × r^2.
Likewise the surface area of a cone is not easily found without calculus. Its formula is:
π × r^2 + π × r × s
Where r is the radius of the circle making up the base of the cone and s is the length of the side of the cone from the edge of the circle to the tip. It can be found by imagining a right triangle in
the cone, where one side is the height (h) of the cone and the other is the radius (r) of the circle. This makes s the hypotenuse of this triangle and can be found with the pythagorean theorem
s = √(h^2 + r^2)
Nodeshell rescue
|
{"url":"http://everything2.com/title/Surface+area","timestamp":"2014-04-21T03:00:14Z","content_type":null,"content_length":"22902","record_id":"<urn:uuid:34197cdb-171d-4607-8e43-e7bc6ff20103>","cc-path":"CC-MAIN-2014-15/segments/1397609539447.23/warc/CC-MAIN-20140416005219-00024-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Below are the first 10 and last 10 pages of uncorrected machine-read text (when available) of this chapter, followed by the top 30 algorithmically extracted key phrases from the chapter as a whole.
Intended to provide our own search engines and external engines with highly rich, chapter-representative searchable text on the opening pages of each chapter. Because it is UNCORRECTED material,
please consider the following text as a useful but insufficient proxy for the authoritative book pages.
Do not use for reproduction, copying, pasting, or reading; exclusively for search engines.
OCR for page 1033
Twenty-First Symposium on NAVAL HYDRODYNAMICS Fully Nonlinear Hydrodynamic Calculations for Ship Design on Parallel Computing Platforms G.Cowles, L.Martinelli (Princeton University, USA) 1
Introduction The prediction of the total drag experienced by an advancing ship is a complicated problem which requires a thorough understanding of the hydrodynamic forces acting on the ship hull, the
physical processes from which these forces arise and their mutual interaction. The advent of powerful computers —exhibiting both fast processing speed and large storage capabilities—has now made
possible computational solutions of the full set of mathematical equations which describe the coupled wave structure and viscous boundary layer interaction. Notable previous computational approaches
in this area include: the finite difference, velocity-pressure coupling approach of Hino [ 3]; the finite volume, velocity-pressure coupling approach of Miyata et. al. [4]; and the interactive
approach of Tahara et. al. [5] which combines the finite analytic approach of Chen et. al. [6] and the “SPLASH” panel method of Rosen et. al. [7]. These methods all represent major advances in the
computational solution of the coupled wave structure and viscous boundary layer interaction problem as it applies to ship hulls in general. However, they are all computationally intensive, requiring
significant amounts of CPU time. The motivation behind the present work follows directly from the shortcomings of the CFD techniques currently available for ship analysis and design. A method which
is robust and accurate for realistic hull shapes will greatly enhance hull design capabilities—from the naval architect designing frigates and destroyers, to the sailing yacht designer optimizing the
performance of an America's Cup hull. This task demands that techniques for incorporating the fully nonlinear free surface boundary condition be included in the CFD analysis. The ability to model the
fully nonlinear ship wave problem, in a robust and accurate fashion, is in and of itself still not sufficient for effective design practice. Thus, despite the advances that have been made, CFD is
still not being exploited as effectively as one would like in the design process. This is partly due to the long set-up and high costs, both human and computational of complex flow simulations, and
improvements are still needed. In particular, the fidelity of modelling of high Reynolds number viscous flows continues to be limited by computational costs. Consequently accurate and cost effective
simulation of viscous flow at Reynolds numbers associated with full scale ships, remains a challenge. Several routes are available toward the reduction of computational costs, including the reduction
of mesh requirements by the use of higher order schemes, improved convergence to a steady state by sophisticated acceleration methods, and the exploitation of massively parallel computers. This paper
presents recent advances in our work to accomplish these goals. The basic flow solver methodology follows directly from the cell-vertex formulation outlined in our previous work [8, 9]. This approach
has proven to be accurate, through use of an efficient moving grid technique which permits application of the fully nonlinear free surface boundary condition, and which in turn permits simulation of
the interaction between wavemaking and the viscous boundary layer. A cell-center formulation is also developed and used in the present work because it facilitates the implementation on parallel
architectures using the method of domain decomposition. The establishment of an efficient baseline steady state flow solver is extremely important because it provides the platform from which several
powerful ship analysis tools can be launched. In particular, it enables the implementation of
OCR for page 1033
Twenty-First Symposium on NAVAL HYDRODYNAMICS automatic design techniques based on control theory [10] as well as the extension of a time accurate multigrid driven, implicit scheme [11] for the
analysis of “seakeeping”, and maneuvering. 2 Mathematical Models For a Viscous incompressible fluid moving under the influence of gravity, the differential form of the continuity equation and the
Reynolds Averaged Navier-Stokes equations (RANS) in a Cartesian coordinate system can be cast, using tensor notation, in the form, Here, Ūi is the mean velocity components in the xi direction, the
mean pressure, and the gravity force acting in the i-th direction, and is the Reynolds stress which requires an additional model for closure. For implementation in a computer code, it is more
convenient to use a dimensionless form of the equation which is obtained by dividing all lengths by the ship (body) length L and all velocity by the free stream velocity U∞. Moreover, one can define
a new variable Ψ as the sum of the mean static pressure P minus the hydrostatic component –xkFr–2. Thus the dimensionless form of the RANS becomes: where is the Froude number and the Reynolds number
Re is defined by where v is the kinematic viscosity, and is a dimensionless form of the Reynolds stress. Figure 1 shows the reference frame and ship location used in this work. A right-handed
coordinate system Oxyz, with the origin fixed at the intersection of the bow and the mean free surface is established. The z direction is positive upwards, y is positive towards the starboard side
and x is positive in the aft direction. The free stream velocity vector is parallel to the x axis and points in the same direction. The ship hull pierces the uniform flow and is held fixed in place,
ie. the ship is not allowed to sink (translate in z direction) or trim (rotate in x–z plane). It is well known that the closure of the Reynolds averaged system of equation requires a model for the
Reynolds stress. There are several alternatives of increasing complexity. Generally speaking, when the flow remains attached to the body, a simple turbulence model based on the Boussinesq hypothesis
and the mixing length concept yields predictions which are in good agreement with experimental evidence. For this Figure 1: Reference Frame and Ship Location reason a Baldwin and Lomax turbulence
model has been initially implemented and tested [14]. On the other hand, more sophisticated models based on the solution of additional differential equations for the component of the Reynolds stress
may be required. Notice that when the Reynolds stress vanishes, the form of the equation is identical to that of the Navier Stokes equations. Also, the inviscid form of the Euler equations is
recovered in the limit of high Reynolds numbers. Thus, a hierarchy of mathematical model can be easily implemented on a single computer code, allowing study of the controlling mechanisms of the flow.
For example, it has been shown in reference [18] that realistic prediction of the wave pattern about an advancing ship can be obtained by using the Euler equations as the mathematical model of the
bulk flow, provided that a non-linear evolution of the free surface is accounted for. This is not surprising, since the typical Reynolds number of an advancing vessel is of the order of 108. Free
Surface Boundary Conditions When the effects of surface tension and viscosity are neglected, the boundary condition on the free surface consists of two equations. The first, the dynamic condition,
states that the pressure acting on the free surface is constant. The second, the kinematic condition, states that the free surface is a material surface: once a fluid particle is on the free surface,
it forever remains on the surface. The dynamic and kinematic boundary conditions may be expressed as (1) where z=β(x,y,t) is the free surface location. Hull and Farfield Boundary Conditions The
remaining boundaries consist of the ship hull, the meridian, or symmetry plane, and the far field of the computational
OCR for page 1033
Twenty-First Symposium on NAVAL HYDRODYNAMICS domain. In the viscous formulation, a no-slip condition is enforced on the ship hull. For the inviscid case, flow tangency is preserved. On the symmetry
plane (that portion of the (x,z) plane excluding the ship hull) derivatives in the y direction as well as the v component of velocity are set to zero. The upstream plane has u=Uo, v=0, w=0 and ψ=0 (p
=–zFr–2). Similar conditions hold on the outer boundary plane which is assumed far enough away from the hull such that no disturbances are felt. A radiation condition should be imposed on the outflow
domain to allow the wave disturbance to pass out of the computational domain. Although fairly sophisticated formulations may be devised to represent the radiation condition, simple extrapolations
proved to be sufficient in this work. For calculations in the limit of zero Froude number (double-hull model) the (x,y) plane is also treated as a symmetry plane. 3 Numerical Solution The formulation
of the numerical solution procedure is based on a finite volume method (FVM) for the bulk flow variables (u,v,w and ψ), coupled to a finite difference method for the free surface evolution variables
(β and ψ). Bulk Flow Solution The finite volume solution for the bulk flow follows the same procedures that are well documented in references [8, 9]. The governing set of differential flow equations
are expressed in the standard form for artificial compressibility [15] as outlined by Rizzi and Eriksson [16]. In particular, letting w be the vector of dependent variables: wt*+(f–fv)x+(g–gv)y+
(h–hv)z=0 (2) Here f,g,h and fv,gv,hv represent, respectively, the inviscid and viscous fluxes. Following the general procedures used in the finite volume formulation, the governing differential
equations are integrated over an arbitrary volume V. Application of the divergence theorem on the convective and viscous flux term integrals yields (3) where Sx, Sy and Sz are the projections of the
area ∂V in the x, y and z directions, respectively. In the present approach the computational domain is divided into hexahedral cells. Two discretization schemes are considered in the present work.
They differ primarily in that in the first, the flow variables are stored at the grid points (cell-vertex) while in the second they are stored in the interior of the cell (cell-center). While the
details of the computation of the fluxes are different for the two approaches, both cell-center and cell-vertex schemes yield the following system of ordinary differential equations [13] where Cijk
and Vijk are the discretized evaluations of the convective and viscous flux surface integrals appearing in equation 3 and Vijk is the volume of the computational cell. In practice, the discretization
scheme reduces to a second order accurate, nondissipative central difference approximation to the bulk flow equations on sufficiently smooth grids. A central difference scheme permits odd-even
decoupling at adjacent nodes which may lead to oscillatory solutions. To prevent this “unphysical” phenomena from occurring, a dissipation term is added to the system of equations such that the
system now becomes (4) For the present problem a fourth derivative background dissipation term is added. The dissipative term is constructed in such a manner that the conservation form of the system
of equations is preserved. The dissipation term is third order in truncation terms so as not to detract from the second order accuracy of the flux discretization. Discretization of the Viscous Terms
The discretization of the viscous terms of the Navier Stokes equations requires an approximation to the velocity derivatives in order to calculate the stress tensor. In order to evaluate the
derivatives one may apply the Gauss formula to a control volume V with the boundary S. where nj is the outward normal. For a hexahedral cell this gives (5) where ûi is an estimate of the average of
ui over the face. Alternatively, assuming a local transformation to computational coordinates ξj, one may apply the chain rule (6) Here the transformation derivatives can be evaluated by the same
finite difference formulas as the velocity derivatives In this case is exact if u is a linearly varying function.
OCR for page 1033
Twenty-First Symposium on NAVAL HYDRODYNAMICS For a cell-centered discretization (figure 2a) is needed at each face. The simplest procedure is to evaluate in each cell, and to average between the two
cells on either side of a face. The resulting discretization does not have a compact stencil, and supports undamped oscillatory modes. In a one dimensional calculation, for example, would be
discretized as [17]. In order to produce a compact stencil may be estimated from a control volume centered on each face, using formulas (5) or (6). This is computationally expensive because the
number of faces is much larger than the number of cells. In a hexahedral mesh with a large number of vertices the number of faces approaches three times the number of cells. This motivates the
introduction of dual meshes for the evaluation of the velocity derivatives and the flux balance as sketched in figure 2. The figure Figure 2: Viscous discretizations for cell-centered and cell-vertex
algorithms. shows both cell-centered and cell-vertex schemes. The dual mesh connects cell centers of the primary mesh. If there is a kink in the primary mesh, the dual cells should be formed by
assembling contiguous fractions of the neighboring primary cells. On smooth meshes comparable results are obtained by either of these formulations [23, 24, 25]. If the mesh has a kink, the
cell-vertex scheme has the advantage that the derivatives are calculated in the interior of a regular cell, with no loss of accuracy. Multigrid time-stepping Equation 4 is integrated in time to
steady state using an explicit multistage scheme. For each bulk flow time step, the grid, and thus Vijk, is independent of time. Hence equation 4 can be written as (7) where the residual is defined
as Rijk (w)=Cijk (w)–Vijk (w)–Dijk (w) and the cell volume Vijk is absorbed into the residual for clarity. The full approximation multigrid scheme of this work uses a sequence of independently
generated coarser meshes by eliminating alternate points in each coordinate direction. In order to give a precise description of the multigrid scheme, subscripts may be used to indicate the grid.
Several transfer operations need to be defined. First the solution vector on grid k must be initialized as where wk–1 is the current value on grid k–1, and Tk,k–1 is a transfer operator. Next it is
necessary to transfer a residual forcing function such that the solution grid k is driven by the residuals calculated on grid k–1. This can be accomplished by setting where Qk,k–1 is another transfer
operator. Then Rk(wk) is replaced by Rk (wk)+Pk in the time- stepping scheme. Thus, the multistage scheme is reformulated as The result then provides the initial data for grid k+1. Finally, the
accumulated correction on grid k has to be transferred back to grid k–1 with the aid of an interpolation operator Ik–1,k. Clearly the definition of Tk,k–1,Qk,k–1,Ik–1,k depends on whether a
cell-vertex or a cell-center formulation is selected. A detailed account can be found in reference [22]. With properly optimized coefficients, multistage time-stepping schemes can be very efficient
drivers of the multigrid process. In this work we use a five stage scheme with three evaluation of dissipation [ 17] to drive a W-cycle of the type illustrated in Figure 3. In a three-dimensional
case the number of cells is reduced by a factor of eight on each coarser grid. On examination of the figure, it can therefore be seen that the work measured in units corresponding to a step on the
fine grid is of the order of 1+2/8+4/64+…<4/3, and consequently the very large effective time step of the complete cycle costs only slightly more than a single time step in the fine grid.
OCR for page 1033
Twenty-First Symposium on NAVAL HYDRODYNAMICS Figure 3: Multigrid W-cycle for managing the grid calculation. E, evaluate the change in the flow for one step; T, transfer the data without updating the
solution. Free Surface Solution Both a kinematic and dynamic boundary condition must be imposed at the free surface which require the adaption of the grid to conform to the computed surface wave.
Equation 1 can be cast in a form more amenable to numerical computations by introducing a curvilinear coordinate system that transforms the curved free surface β(x,y) into computational coordinates β
(ξ,η). This results in the following transformed kinematic condition βt*+ũβξ+βη=w (8) where ũ and are contravariant velocity components given by ũ=uξx+vξy=(uyη–vxη) J–1 =uηx+vηy=(vxξ–uyξ) J–1 and J=
xξyη–xηyξ is the Jacobian. Equation 8 is essentially a linear hyperbolic equation, which in our original method was discretized by central differences augmented by high order diffusion [8,9]. Such a
scheme can be obtained by introducing anti-diffusive terms in a standard first order formula. In particular, it is well known that for a one-dimensional scalar equation model, central difference
approximation of the derivative may be corrected by adding a third order dissipative flux: (9) where at the cell interface. This is equivalent to the scheme which we have used until now to discretize
the free surface, and which has proven to be effective for simple hulls. However, on more complex configurations of interest, such as combatant vessels and yachts, the physical wave at the bow tends
to break. This phenomenon cannot be fully accounted for in the present mathematical model. In order to avoid the overturning of the wave and continue the calculations lower order dissipation must be
introduced locally and in a controlled manner. This can be accomplished by borrowing from the theory of non-oscillatory schemes constructed using the Local Extremum Diminishing (LED) principle [19,
20]. Since the breaking of a wave is generally characterized by a change in sign of the velocity across the crest, it appears that limiting the antidiffusion purely from the upstream side may be more
suitable to stabilize the calculations and avoid the overturning of the waves [21]. By adding the anti-diffusive correction purely from the upstream side one may derive a family of UpStream Limited
Positive (USLIP) schemes: Where L(p,q) is a limited average of p and q with the following properties: P1. L(p,q)=L(p,q) P2. L(αp,αq)=αL(p,q) P3. L(p,p)=p P4. L(p,q)=0 if p and q have opposite signs.
A simple limiter (α-mean) which limits the arithmetic mean by some multiple of the smaller of |p| or |q|, has been used with success. It may be cast in the following form: It is well known that
schemes which strictly satisfy the LED principle fall back to first order accuracy at extrema even when they realize higher order accuracy elsewhere. This Properties P1–P3 are natural properties of
an average, whereas P4 is needed for the construction of an LED scheme.
OCR for page 1033
Twenty-First Symposium on NAVAL HYDRODYNAMICS difficulty can be circumvented by relaxing the LED requirement. Therefore the concept of essentially local extremum diminishing (ELED) schemes is
introduced as an alternative approach. These are schemes for which, in the limit as the mesh width Δx 0, maxima are non-increasing and minima are non-decreasing. In order to prevent the limiter from
being active at smooth extrema it is convenient to set where D(p,q) is a factor designed to reduce the arithmetic average, and become zero if u and v have opposite signs. Thus, for an ELED scheme we
take D(p,q)=1–R(p,q) (10) where (11) and >0, r is a positive power, and s is a positive integer. Then D(p,q)=0 if p and q have opposite signs. Also if s=1, L(p,q) reduces to minmod, while if s=2, L
(p,q) is equivalent to Van Leer's limiter. By increasing s one can generate a sequence of limited averages which approach a limit defined by the arithmetic mean truncated to zero when p and q have
opposite signs. These smooth limiters are known to have a benign effect on the convergence to a steady state of compressible flows. Figures 5a and 5b compare waterline profiles on a combatant vessel
using ELED and LED methods in the free surface dissipation. One can see that, when compared with the experimental data, the ELED (solid line) profile is more accurate. Figures 5c and 5d show overhead
free surface contours for the same geometry. The ELED scheme gives better resolution of the far field waves in solution of both the Euler and RANS equations. Integration and Coupling with The Bulk
Flow The free surface kinematic equation may be expressed as where Qij(β) consists of the collection of velocity and spatial gradient terms which result from the discretization of equation 8. Once
the free surface update is accomplished the pressure is adjusted on the free surface such that ψ(n+1)=β(n+1)Fr–2. The free surface and the bulk flow solutions are coupled by first computing the bulk
flow at each time step, and then using the bulk flow velocities to calculate the movement of the free surface. After the free surface is updated, its new values are used as a boundary condition for
the pressure on the bulk flow for the next time step. The entire iterative process, in which both the bulk flow and the free surface are updated at each time step, is repeated until some measure of
convergence is attained: usually steady state wave profile and wave resistance coefficient. Since the free surface is a material surface, the flow must be tangent to it in the final steady state.
During the iterations, however, the flow is allowed to leak through the surface as the solution evolves towards the steady state. This leakage, in effect, drives the evolution equation. Suppose that
at some stage, the vertical velocity component w is positive (cf. equation 1 or 8). Provided that the other terms are small, this will force βn+1 to be greater than βn. When the time step is
complete, ψ is adjusted such that ψn+1> ψn. Since the free surface has moved farther away from the original undisturbed upstream elevation and the pressure correspondingly increased, the velocity
component w (or better still q · n where and F=z–β(x,y)) will then be reduced. This results in a smaller Δβ for the next time step. The same is true for negative vertical velocity, in which case
there is mass leakage into the system rather than out. Only when steady state has been reached is the mass flux through the surface zero and tangency enforced. In fact, the residual flux leakage
could be used in addition to drag components and pressure residuals as a measure of convergence to the steady state. This method of updating the free surface works well for the Euler equations since
tangency along the hull can be easily enforced. However, for the Navier-Stokes equations the no-slip boundary condition is inconsistent with the free surface boundary condition at the hull/waterline
intersection. To circumvent this difficulty the computed elevation for the second row of grid points away from the hull is extrapolated to the hull. Since the minimum spacing normal to the hull is
small, the error due to this should be correspondingly small, comparable with other discretization errors. The treatment of this intersection for the Navier-Stokes calculations, should be the subject
of future research to find the most accurate possible procedure. 4 Parallelization Strategy The objective of a fast flow solver for design and analysis motivates parallel implementation. The method
of domain decomposition is utilized for this work. The grid is divided into sections which are sent to separate processors for solution. This method is very compatible with the cell-center flux
discretization. In the cell-vertex scheme, processors corresponding to adjoining sections of the mesh must update coincidental locations on the common face. Thus both single processor and parallel
versions of the code have been developed using the cell-center formulation. Figure 6 displays
OCR for page 1033
Twenty-First Symposium on NAVAL HYDRODYNAMICS validation of the parallel, cell-center version by comparison with the previously developed, single processor, cell-vertex code. Figure 7 displays
comparison between a single processor cell-center code and experimental data. Figures 8 and 9 show overhead wave profiles around the Model 5415 hull [26] for speeds of 15 and 20 knots respectively.
These were computed using the cell-center discretization and the limited free surface dissipation described in section 3. Figure 14 displays pressure contours on the bulbous bow of the 5415 using
this method. The parallelization strategy has been developed and extensively tested thus far using a single block implementation. Due to topological constraints, more complicated geometries cannot be
treated with a single block structured mesh. As an example, the racing yacht pictured in figure 15 has multiple appendages which result in skewness and lowered efficiency of a single block grid.
Transom sterns and inclusion of propellers cause similar difficulties. To circumvent these problems, a multiblock version of the code is currently being developed. Single Block Parallel
Implementation The initial three-dimensional meshes for the hull calculations are generated using the GRIDGEN [27]. The computer code is parallelized using a domain decomposition model, a SPMD
(Single Program Multiple Data) strategy, and the MPI (Message Passing Interface) Library for message passing. The choice of message passing library was determined by the requirement that the
resulting code be portable to different parallel computing platforms as well as to homogeneous and heterogeneous networks of workstations [29]. Communication between subdomains is performed through
halo cells surrounding each subdomain boundary. Since both the convective and the dissipative fluxes are calculated at the cell faces (boundaries of the control volumes), all six neighboring cells
are necessary, thus requiring the existence of a single level halo for each processor in the parallel calculation. The dissipative fluxes are composed of third order differences of the flow
quantities. Thus, at the boundary faces of each cell in the domain, the presence of the twelve neighboring cells (two adjacent to each face) is required. For each cell within a processor, Figure 4
shows which neighboring cells are required for the calculation of convective and dissipative fluxes. For each processor, some of these cells will lie directly next to an interprocessor boundary, in
which case, the values of the flow variables residing in a different processor will be necessary to calculate the convective and dissipative fluxes. In the finest mesh of the multigrid sequence, a
two-level halo was sufficient to calculate the convective and dissipative fluxes for all cells contained in each processor. In the coarser levels of the multigrid sequence, a single level halo
suffices since a simplified model of the artificial dissipation terms is used. Figure 4: Convective and Dissipative Discretization Stencils. Similar constructs are required for the free surface
solution. A double halo of free surface locations is passed across interprocessor boundaries. The communication routines used are all of the asynchronous (or non-blocking) type. In the current
implementation of the program, each processor must send and receive messages to and from at most 6 neighboring processors (left and right neighbors in each of the three coordinate directions). The
communication is scheduled such that at every instant in time pairs of processors are sending/receiving to/from one another in order to minimize contention in the communication schedule. For a given
number of subdomains in a calculation, there are several ways to partition the complete mesh according to the scheme explained above. Depending on the choice of partitions, the bounding surface area
of each subdomain will vary, reaching a minimum when the sizes in each of the three coordinate directions are equal. Figure 10 shows an H-O grid around a combatant ship divided into 8 subdomains
corresponding to 8 processors. Currently the partition of the global mesh is an input to the code, but work is in progress to determine, in a pre-processing step, the optimal block distribution in
order to minimize the communication requirements. Efficiency of the parallelization is is a function of many factors, including numerical discretization, system of equations, choice of hardware/
software, number of processors, and size of the mesh. The granularity, or ratio of bytes a processor passes to work it performs, helps quantify several of these aspects. The lower the granularity,
the higher the efficiency. Switching to a more complex set of equations, for example, a RANS set, increases the amount of processor effort with only a small effect on the total message passes.
Granularity decreases. Cutting a given mesh into more pieces(processors), or equivalently, using a coarser mesh, increases the ratio of
OCR for page 1033
Twenty-First Symposium on NAVAL HYDRODYNAMICS interprocessor faces to interior cells which is directly proportional to the granularity, and efficiency decays. Figure 11 displays parallel performance
for the Euler equations, evaluated on an IBM SP2, a distributed memory machine, and confirms the good scalability of our algorithm. The effects of increasing the number of processors and increasing
the size of the mesh are both apparent. Results of parallel RANS solvers under development in our laboratory for aeronautical applications confirm the theoretical efficiency increase that will be
obtained when viscous fluxes are switched on. Multiblock Parallel Implementation The essential algorithm (convective and dissipative flux calculation, multigrid, viscous terms, etc.) is exactly the
same as the one applied to the single block case. The only difference resides in the fact that an additional outer loop over all the blocks in the domain is added [28]. The parallelization strategy,
however, is quite different. Similarly to the single block code, the multiblock is parallelized using a domain decomposition model, a SPMD strategy, and the MPI Library for message passing. Since the
sizes of the blocks can be quite small, sometimes further partitioning severely limits the number of multigrid levels that can be used in the flows. For this reason, it was decided to allocate
complete blocks to each processor. The underlying assumption is that there always will be more blocks than processors available. If this is the case, every processor in the domain would be
responsible for the computations inside one or more blocks. In the case in which there are more processors than blocks available, the blocks can be adequately partitioned during a pre-processing step
in order to at least have as many blocks as processors. This approach has the advantage that the number of multigrid levels that can be used in the parallel implementation of the code is always the
same as in the serial version. Moreover, the number of processors in the calculation can now be any integer number, since no restrictions are imposed by the partitioning in all coordinate directions
used by the single block program. The only drawback of this approach is the loss of the exact load balancing that one has in the single block implementation. All blocks in the calculation can have
different sizes, and consequently, it is very likely that different processors will be assigned a different total number of cells in the calculation. This, in turn, will imply that some of the
processors will be waiting until the processor with the largest number of cells has completed its work and parallel performance will suffer. The approach that we have followed to solve the load
balancing problem is to assign to each processor, in a pre-processing step, a certain number of blocks such that the total number of cells is as close as possible to the exact share for perfect load
balancing. One must note that load balancing based on the total number of cells in each processor is only an approximation to the optimal solution of the problem. Other variables such as the number
of blocks, the size of each block, and the size of the buffers to be communicated play an important role in proper load balancing, and are the subject of current study. The implementation is fully
scalable. Figure 12 shows an H-O grid around the Model 5415 hull divided into 20 blocks. Figure 13 shows speedups obtained on the 5415 for the zero Froude number condition. The shapes in the curves
results from an interplay of forces. Increased cache hits push the curve to a superlinear (better than ideal) region. The wiggles are a result of deviations in the load balance from unity, or equal
work (number cells) in each processor. Since the blocks are not all equal in size, the constraint that blocks are not shared among processors causes the taper as the number of processors approaches
the number of blocks in the grid. Conclusion By utilizing a cell-center formulation suitable for parallel computing, flow solutions about complex geometries on the order of a half hour for a grid
size up to one million mesh points have been achieved on 16 processors of an IBM SP2. Such efficiency makes our methodology suitable for routine calculations in the early stages of ship design. Also,
an extension to the computation of unsteady flows has been made feasible by the speedup. Underwater control surfaces and transom sterns warrant the necessity of multiblock meshes. Preliminary testing
of a multiblock version displays the scalability and efficiency of the method. Acknowledgment Our work has benefited greatly from the support of the Office of Naval Research through Grant
N00014–93-I-0079, under the supervision of Dr. E.P.Rood. The selection, and implementation of the parallelization strategy presented here is the fruit of extensive collaborations with other students
of the Princeton University CFD Laboratory. In particular we wish to acknowledge the contribution of Juan J.Alonso, and Andrey Belov. REFERENCES [1] Toda, Y., Stern, F., and Longo, J., “Mean-Flow
Measurements in the Boundary Layer and Wake and Wave Field of a Series 60 CB=0.6 Ship Model-Part 1: Froude Numbers 0.16 and 0.316,” Journal of Ship Research, v. 36, n. 4, pp. 360–377, 1992. [2]
Longo, J., Stern, F., and Toda, Y., “Mean-Flow Measurements in the Boundary Layer and Wake and Wave Field of a Series 60 CB=0.6 Ship Model-Part 2: Effects on Near-Field Wave Patterns and Comparisons
with Inviscid Theory,” Journal of Ship Research, v. 37, n. 1, pp. 16–24, 1993. [3] Hino, T., “Computation of Free Surface Flow Around an Advancing Ship by the Navier-Stokes Equations,” Proceedings,
OCR for page 1033
Twenty-First Symposium on NAVAL HYDRODYNAMICS Fifth International Conference on Numerical Ship Hydrodynamics, pp. 103–117, 1989. [4] Miyata, H., Zhu, M., and Wantanabe, O., “Numerical Study on a
Viscous Flow with Free-Surface Waves About a Ship in Steady Straight Course by a Finite-Volume Method,” Journal of Ship Research , v. 36, n. 4, pp. 332–345, 1992. [5] Tahara, Y., Stern, F., and
Rosen, B., “An Interactive Approach for Calculating Ship Boundary Layers and Wakes for Nonzero Froude Number,” Journal of Computational Physics, v. 98, pp. 33–53, 1992. [6] Chen, H.C., Patel, V.C.,
and Ju, S., “Solution of Reynolds-Averaged Navier-Stokes Equations for Three-Dimensional Incompressible Flows,” Journal of Computational Physics, v. 88, pp. 305–336, 1990. [7] Rosen, B.S., Laiosa,
J.P., Davis, W.H., and Stavetski, D., “SPLASH Free-Surface Flow Code Methodology for Hydrodynamic Design and Analysis of IACC Yachts,”, The Eleventh Chesapeake Sailing Yacht Symposium, Annapolis, MD,
1993. [8] Farmer, J.R., Martinelli, L., and Jameson, A., “A Fast Multigrid Method for Solving the Nonlinear Ship Wave Problem with a Free Surface,” Proceedings, Sixth International Conference on
Numerical Ship Hydrodynamics, pp. 155–172, 1993. [9] Farmer, J.R., Martinelli, L., and Jameson, A., “A Fast Multigrid Method for Solving Incompressible Hydrodynamic Problems with Free Surfaces,” AIAA
Journal, v. 32, no. 6, pp. 1175– 1182, 1994. [10] Jameson, A., “Optimum Aerodynamic Design Using CFD and Control Theory,” Proceedings, 12th Computational Fluid Dynamics Conference, San Diego,
California, 1995 [11] Belov, A., Martinelli, L., Jameson, A., “A New Implicit Algorithm with Multigrid for Unsteady Incompressible Flow Calculations,” AIAA Paper 95–0049, June 1995 [12] Farmer, J.R.,
Martinelli, L., and Jameson, A., “Multigrid Solutions of the Euler and Navier-Stokes Equations for a Series 60 Cb=0.6 Ship Hull For Froude Numbers 0.160, 0.220 and 0.316,” Proceedings, CFD Workshop
Tokyo 1994, Tokyo, Japan, March 1994. [13] Jameson, A., “A Vertex Based Multigrid Algorithm For Three Dimensional Compressible Flow Calculations,” ASME Symposium on Numerical Methods for Compressible
Flows, Annaheim, December 1986. [14] Baldwin, B.S., and Lomax, H., “Thin Layer Approximation and Algebraic Model for Separated Turbulent Flows,” AIAA Paper 78–257, AIAA 16th Aerospace Sciences
Meeting, Reno, NV , January 1978. [15] Chorin, A., “A Numerical Method for Solving Incompressible Viscous Flow Problems, ” Journal of Computational Physics, v. 2, pp. 12–26, 1967. [16] Rizzi, A., and
Eriksson, L., “Computation of Inviscid Incompressible Flow with Rotation,” Journal of Fluid Mechanics, v. 153, pp. 275–312, 1985. [17] Martinelli, L., “Calculations of Viscous Flows with a Multigrid
Method,” Ph.D. Thesis, MAE 1754-T, Princeton University, 1987. [18] Farmer, J., “A Finite Volume Multigrid Solution to the Three Dimensional Nonlinear Ship Wave Problem,” Ph.D. Thesis, MAE 1949-T,
Princeton University, January 1993. [19] A.Jameson, “Analysis and design of numerical schemes for gas dynamics 1, artificial diffusion, upwind biasing, limiters and their effect on multigrid
convergence,” Int. J. of Comp. Fluid Dyn., To Appear. [20] A.Jameson, “Analysis and design of numerical schemes for gas dynamics 2, artificial diffusion and discrete shock structure,” Int. J. of
Comp. Fluid Dyn., To Appear. [21] J.Farmer, L.Martinelli, A.Jameson, and G.Cowles, “Fully-nonlinear CFD techniques for ship performance analysis and design,” AIAA paper 95–1690, AIAA 12th
Computational Fluid Dynamics Conference, San Diego, CA, June 1995. [22] A.Jameson, “Multigrid algorithms for compressible flow calculations,” In Second European Conference on Multigrid Methods,
Cologne, October 1985. Princeton University Report MAE 1743. [23] L.Martinelli and A.Jameson, “Validation of a multigrid method for the Reynolds averaged equations, ” AIAA paper 88– 0414, 1988. [24]
L.Martinelli, A.Jameson, and E.Malfa, “Numerical simulation of three-dimensional vortex flows over delta wing configurations,” In M.Napolitano and F.Sabbetta, editors, Proc. 13th International
Conference on Numerical Methods in Fluid Dynamics, pages 534–538, Rome, Italy, July 1992. Springer Verlag, 1993. [25] F.Liu and A.Jameson, “Multigrid Navier-Stokes calculations for three-dimensional
cascades, ” AIAA paper 92–0190, AIAA 30th Aerospace Sciences Meeting, Reno, Nevada, January 1992. [26] T.Ratcliffe W.Lindenmuth, “Kelvin Wake Measurements Obtained on Five Surface Ship Models,” DTRC
Report DTNSRDC 89/038 [27] J.Steinbrenner J.Chawner, “User's Manual for Gridgen,” [28] J.Reuther, A.Jameson, J.Farmer, L.Martinelli, and D.Saunders, “Aerodynamic Shape Optimization of Complex
Aircraft Configurations via an Adjoint Formulation,” AIAA Paper 96– 0094, AIAA 34th Aerospace Sciences Meeting, Reno, NV, January 1996. [29] J.Alonso, and A.Jameson “Automatic Aerodynamic
Optimization on Distributed Memory Architectures, ” AIAA paper 96–0409, AIAA 34th Aerospace Sciences Meeting, Reno, NV, January 1996.
OCR for page 1033
Twenty-First Symposium on NAVAL HYDRODYNAMICS Figure 5: Comparison of computed wave elevations using the Euler (left) and RANS (right) equations.
OCR for page 1033
Twenty-First Symposium on NAVAL HYDRODYNAMICS Figure 6: Figure 7:
OCR for page 1033
Twenty-First Symposium on NAVAL HYDRODYNAMICS Figure 8: Free Surface Contours: 5415 Fr=.2067 Figure 9: Free Surface Contours: 5415 Fr=.2760
OCR for page 1033
Twenty-First Symposium on NAVAL HYDRODYNAMICS Figure 10: Domain Decomposition Figure 11: Parallel Speedup: Single Block H-O
OCR for page 1033
Twenty-First Symposium on NAVAL HYDRODYNAMICS Figure 12: Model 5415 MESH: 20 BLOCKS Figure 13: Parallel Speedup: Multiblock H-O
OCR for page 1033
Twenty-First Symposium on NAVAL HYDRODYNAMICS Figure 14: Pressure Contours on Sonar Dome Figure 15: Single Block Mesh: IACC Yacht Hull
|
{"url":"http://www.nap.edu/openbook.php?record_id=5870&page=1033","timestamp":"2014-04-19T22:38:19Z","content_type":null,"content_length":"107955","record_id":"<urn:uuid:584524a4-3288-4eb9-be5f-277b9f4b6a37>","cc-path":"CC-MAIN-2014-15/segments/1397609537754.12/warc/CC-MAIN-20140416005217-00223-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Finding Inverse w/ Elementary Matrices
October 9th 2010, 02:08 PM #1
Finding Inverse w/ Elementary Matrices
Can someone please help me solve this?
I've attached a picture.
Bring the left hand matrix to the unit matrix by means of elementary operations on tis rows/columns, and repeat EXACTLY each operation on the right hand matrix (which is the unit matrix). When
you end, in the right hand side you'll get the inverse of the LHS matrix. (why?)
Thanks! I see how to do it now.
October 9th 2010, 02:46 PM #2
October 9th 2010, 09:16 PM #3
|
{"url":"http://mathhelpforum.com/advanced-algebra/158957-finding-inverse-w-elementary-matrices.html","timestamp":"2014-04-19T13:49:07Z","content_type":null,"content_length":"36180","record_id":"<urn:uuid:e729fac2-2dcf-4e27-84dd-e91ff8c1040d>","cc-path":"CC-MAIN-2014-15/segments/1397609537186.46/warc/CC-MAIN-20140416005217-00182-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Fundamentals of kicking anthropic butt - Less Wrong
An anthropic problem is one where the very fact of your existence tells you something. "I woke up this morning, therefore the earth did not get eaten by Galactus while I slumbered." Applying your
existence to certainties like that is simple - if an event would have stopped you from existing, your existence tells you that that it hasn't happened. If something would only kill you 99% of the
time, though, you have to use probability instead of deductive logic. Usually, it's pretty clear what to do. You simply apply Bayes' rule: the probability of the world getting eaten by Galactus last
night is equal to the prior probability of Galactus-consumption, times the probability of me waking up given that the world got eaten by Galactus, divided by the probability that I wake up at
all. More exotic situations also show up under the umbrella of "anthropics," such as getting duplicated or forgetting which person you are. Even if you've been duplicated, you can still assign
probabilities. If there are a hundred copies of you in a hundred-room hotel and you don't know which one you are, don't bet too much that you're in room number 68.
But this last sort of problem is harder, since it's not just a straightforward application of Bayes' rule. You have to determine the probability just from the information in the problem. Thinking in
terms of information and symmetries is a useful problem-solving tool for getting probabilities in anthropic problems, which are simple enough to use it and confusing enough to need it. So first we'll
cover what I mean by thinking in terms of information, and then we'll use this to solve a confusing-type anthropic problem.
Parable of the coin
Eliezer has already written about what probability is in Probability is in the Mind. I will revisit it anyhow, using a similar example from Probability Theory: The Logic of Science.
It is a truth universally acknowledged that when someone tosses a fair coin without cheating, there's a 0.5 probability of heads and a 0.5 probability of tails. You draw the coin forth, flip it, and
slap it down. What is the probability that when you take your hand away, you see heads?
Well, you performed a fair coin flip, so the chance of heads is 0.5. What's the problem? Well, imagine the coin's perspective. When you say "heads, 0.5," that doesn't mean the coin has half of heads
up and half of tails up: the coin is already how it's going to be, sitting pressed under your hand. And it's already how it is with probability 1, not 0.5. If the coin is already tails, how can you
be correct when you say that it's heads with probability 0.5? If something is already determined, how can it still have the property of randomness?
The key idea is that the randomness isn't in the coin, it's in your map of the coin. The coin can be tails all it dang likes, but if you don't know that, you shouldn't be expected to take it into
account. The probability isn't a physical property of the coin, nor is it a property of flipping the coin - after all, your probability was still 0.5 when the truth was sitting right there under your
hand. The probability is determined by the information you have about flipping the coin.
Assigning probabilities to things tells you about the map, not the territory. It's like a machine that eats information and spits out probabilities, with those probabilities uniquely determined by
the information that went in. Thinking about problems in terms of information, then, is about treating probabilities as the best possible answers for people with incomplete information. Probability
isn't in the coin, so don't even bother thinking about the coin too much - think about the person and what they know.
When trying to get probabilities from information, you're going to end up using symmetry a lot. Because information uniquely specifies probability, if you have identical information about two things,
then you should assign them equal probability. For example, if someone switched the labels "heads" and "tails" in a fair coin flip, you couldn't tell that it had been done - you never had any
different information about heads as opposed to tails. This symmetry means you should give heads and tails equal probability. Because heads and tails are mutually exclusive (they don't overlap) and
exhaustive (there can't be anything else), the probabilities have to add to 1 (which is all the probability there is), so you give each of them probability 0.5.
Brief note on useless information
Real-world problems, even when they have symmetry, often start you off with a lot more information than "it could be heads or tails." If we're flipping a real-world coin there's the temperature to
consider, and the humidity, and the time of day, and the flipper's gender, and that sort of thing. If you're an ordinary human, you are allowed to call this stuff extraneous junk. Sometimes, this
extra information could theoretically be correlated with the outcome - maybe the humidity really matters somehow, or the time of day. But if you don't know how it's correlated, you have at least a de
facto symmetry. Throwing away useless information is a key step in doing anything useful.
Sleeping Beauty
So thinking with information means assigning probabilities based on what people know, rather than treating probabilities as properties of objects. To actually apply this, we'll use as our example the
sleeping beauty problem:
Suppose Sleeping Beauty volunteers to undergo the following experiment, which is described to her before it begins. On Sunday she is given a drug that sends her to sleep, and a coin is tossed. If
the coin lands heads, Beauty is awakened and interviewed on Monday, and then the experiment ends. If the coin comes up tails, she is awakened and interviewed on Monday, given a second dose of the
sleeping drug that makes her forget the events of Monday only, and awakened and interviewed again on Tuesday. The experiment then ends on Tuesday, without flipping the coin again.
Beauty wakes up in the experiment and is asked, "With what subjective probability do you believe that the coin landed tails?"
If the coin lands heads, Sleeping Beauty is only asked for her guess once, while if the coin lands tails she is asked for her guess twice, but her memory is erased in between so she has the same
memories each time.
When trying to answer for Sleeping Beauty, many people reason as follows: It is a truth universally acknowledged that when someone tosses a fair coin without cheating, there's a 0.5 probability of
heads and a 0.5 probability of tails. So since the probability of tails is 0.5, Beauty should say "0.5," Q.E.D. Readers may notice that this argument is all about the coin, not about what Beauty
knows. This violation of good practice may help explain why it is dead wrong.
Thinking with information: some warmups
To collect the ingredients of the solution, I'm going to first go through some similar-looking problems.
In the Sleeping Beauty problem, she has to choose between three options - let's call them {H, Monday}, {T, Monday}, and {T, Tuesday}. So let's start with a very simple problem involving three
options: the three-sided die. Just like for the fair coin, you know that the sides of the die are mutually exclusive and exhaustive, and you don't know anything else that would be correlated with one
side showing up more than another. Sure, the sides have different labels, but the labels are extraneous junk as far as probability is concerned. Mutually exclusive and exhaustive means the
probabilities have to add up to one, and the symmetry of your information about the sides means you should give them the same probabilities, so they each get probability 1/3.
Next, what should Sleeping Beauty believe before the experiment begins? Beforehand, her information looks like this: she signed up for this experiment where you get woken up on Monday if the coin
lands heads and on Monday and Tuesday if it lands tails.
One good way to think of this last piece of information is as a special "AND" structure containing {T, Monday} and {T, Tuesday}, like in the picture to the right. What it means is that since the
things that are "AND" happen together, the other probabilities won't change if we merge them into a single option, which I shall call {T, Both}. Now we have two options, {H, Monday} and {T, Both},
which are both exhaustive and mutually exclusive. This looks an awful lot like the fair coin, with probabilities of 0.5.
But can we leave it at that? Why shouldn't two days be worth twice as much probability as one day, for instance? Well, it turns out we can leave at that, because we have now run out of information
from the original problem. We used that there were three options, we used that they were exhaustive, we used that two of them always happened together, and we used that the remaining two were
mutually exclusive. That's all, and so that's where we should leave it - any more and we'd be making up information not in the problem, which is bad.
So to decompress, before the experiment begins Beauty assigns probability 0.5 to the coin landing heads and being woken up on Monday, probability 0.5 to the coin landing tails and being woken up on
Monday, and probability 0.5 to the coin landing tails and being woken up on Tuesday. This adds up to 1.5, but that's okay since these things aren't all mutually exclusive.
This new problem looks sort of familiar. You have three options, {H, H}, {T, H} and {T, T}, and these options are mutually exclusive and exhaustive. So does that mean it's the same set of information
as the three-sided die? Not quite. Similar to the "AND" previously, my drawing for this problem has an "OR" between {T, H} and {T,T}, representing additional information.
I'd like to add a note here about my jargon. "AND" makes total sense. One thing happens and another thing happens. "OR," however, doesn't make so much sense, because things that are mutually
exclusive are already "or" by default - one thing happens or another thing happens. What it really means is that {H, H} has a symmetry with the sum of {T, H} and {T, T} (that is, {T, H} "OR" {T, T}).
The "OR" can also be thought of as information about {H, H} instead - it contains what could have been both the {H, H} and {H, T} events, so there's a four-way symmetry in the problem, it's just been
When we had the "AND" structure, we merged the two options together to get {tails, both}. For "OR," we can do a slightly different operation and replace {T, H} "OR" {T, T} by their sum, {T, either}.
Now the options become {H, H} and {T, either}, which are mutually exclusive and exhaustive, which gets us back to the fair coin. Then, because {T, H} and {T, T} have a symmetry between them, you
split the probability from {T, either} evenly to get probabilities of 0.5, 0.25, and 0.25.
Okay, for real now
Okay, so now what do things look like once the experiment has started? In English, now she knows that she signed up for this experiment where you get woken up on Monday if the coin lands heads and on
Monday and Tuesday if it lands tails, went to sleep, and now she's been woken up.
This might not seem that different from before, but the "anthropic information" that Beauty is currently one of the people in the experiment changes the formal picture a lot. Before, the three
options were not mutually exclusive, because she was thinking about the future. But now {H, Monday}, {T, Monday}, and {T, Tuesday} are both exhaustive and mutually exclusive, because only one can be
the case in the present. From the coin flip, she still knows that anything with heads is mutually exclusive with anything with tails. But once two things are mutually exclusive you can't make them
any more mutually exclusive.
But the "AND" information! What happens to that? Well, that was based on things always happening together, and we just got information that those things are mutually exclusive, so there's no more
"AND." It's possible to slip up here and reason that since there used to be some structure there, and now they're mutually exclusive, it's one or the other, therefore there must be "OR" information.
At least the confusion in my terminology reflects an easy confusion to have, but this "OR" relationship isn't the same as mutual exclusivity. It's a specific piece of information that wasn't in the
problem before the experiment, and wasn't part of the anthropic information (that was just mutual exclusivity). So Monday and Tuesday are "or" (mutually exclusive), but not "OR" (can be added up to
use another symmetry).
And so this anthropic requirement of mutual exclusivity turns out to make redundant or render null a big chunk of the previous information, which is strange. You end up left with three mutually
exclusive, exhaustive options, with no particular asymmetry. This is the three-sided die information, and so each of {H, Monday}, {T, Monday}, and {T, Tuesday} should get probability 1/3. So when
asked for P(tails), Beauty should answer 2/3.
"SSA" and "SIA"
When assigning prior probabilities in anthropic problems, there are two main "easy" ways to assign probabilities, and these methods go by the acronyms "SSA" and "SIA." "SSA" is stated like this^1:
All other things equal, an observer should reason as if they are randomly selected from the set of all actually existent observers (past, present and future) in their reference class.
For example, if you wanted the prior probability that you lived in Sweden, you might say ask "what proportion of human beings have lived in Sweden?"
On the other hand, "SIA" looks like this^2:
All other things equal, an observer should reason as if they are randomly selected from the set of all possible observers.
Now the question becomes "what proportion of possible observers live in Sweden?" and suddenly it seems awfully improbable that anyone could live in Sweden.
The astute reader will notice that these two "assumptions" correspond to two different sets of starting information. If you want a quick exercise, figure out what those two sets of information are
now. I'll wait for you in the next paragraph.
Hi again. The information assumed for SSA is pretty straightforward. You are supposed to reason as if you know that you're an actually existent observer, in some "reference class." So an example set
of information would be "I exist/existed/will exist and am a human." Compared to that, SIA seems to barely assume any information at all - all you get to start with is "I am a possible observer."
Because "existent observers in a reference class" are a subset of possible observers, you can transform SIA into SSA by adding on more information, e.g. "I exist and am a human." And then if you want
to represent a more complicated problem, you have to add extra information on top of that, like "I live in 2012" or "I have two X chromosomes."
Trouble only sneaks in if you start to see these acronyms as mysterious probability generators rather than sets of starting information to build on. So don't do that.
Closing remarks
When faced with straightforward problems, you usually don't need to use this knowledge of where probability comes from. It's just rigorous and interesting, like knowing how to do integration as a
Riemann sum. But whenever you run into foundational or even particularly confusing problems, it's good to remember that probability is about making the best use you can of incomplete information. If
not, you run the risk of a few silly failure modes, or even (gasp) frequentism.
I recently read an academic paper^3 that used the idea that in a multiverse, there will be some universe where a thrown coin comes up heads every time, and so the people in that universe will have
very strange ideas about how coins work. Therefore, this actual academic paper argued, since reasoning with probability can lead people to be wrong, it cannot be applied to anything like a
My response is: what have you got that works better? In this post we worked through assigning probabilities by using all of our information. If you deviate from that, you're either throwing
information away or making it up. Incomplete information lets you down sometimes, that's why it's called incomplete. But that doesn't license you to throw away information or make it up, out of some
sort of dissatisfaction with reality. The truth is out there. But the probabilities are in here.
Comments (60)
Sort By: Best
Link nitpick: When linking to arXiv, please link to the abstract, not directly to the PDF.
Easy solution for the Sleeping Beauty problem: instead of merely asking her her subjective probability, we can ask her to bet. The question now becomes "at what odds would you be willing to bet?". So
here are the possibilities:
• Heads. There will be one bet, Monday.
• Tails. There will be two bets, Monday, and Tuesday.
Heads or tails comes up with equal probability (0.5). But when it comes up Tails, the stakes double (because she will bet twice). So, what will generate the correct bets is the assumption that Tails
will subjectively come up 2/3 of the time.
I know it looks cheap, because it doesn't answer the question "But what really is the subjective probability?". I don't know, but I'll find a way to make the correct decision anyway.
By asking, "At what odds would you be willing to bet?", you've skewed the payout matrix, not the probabilities - even subjectively. If she offers the bet at 2:1 odds, it's so that when her future/
past twin makes the same bet, it corrects the payout matrix. She adjusts in this way because the probability is 1/2.
It's just like if an online bookie discovers a bug in their software so that when someone takes the first option in a bet, and if they win, they get paid twice. He needs to lower the payout on option
1 by a factor of 2 (on the backend, at least - no need to embarrass everyone by mentioning it on the front end).
Sleeping Beauty can consistently say, "The probability of my waking up this time having been a waking-event following a tails coinflip is 2/3. The probability of the coinflip having come up tails is
1/2. On either of these, if you demand to bet on the issue, I'm offering 2:1 odds."
Sleeping Beauty can consistently say, […]
Consistently? Sorry, I can't even parse the sentence that follows. Trying to understand it:
"this event"
Could you mean "the fact that I just woke up from drugged induced sleep"? But this event is not correlated with the coin flip to begin with. (Whether it ends up head or tail, you will wake up
seemingly for the first time.)
"The probability of the coinflip having come up tails is 1/2."
Whose probability?
Also, how my solution could lead Sleeping Beauty to be Dutch-booked? Could you provide an example, please?
Whose probability?
Hers, right then, as she says it.
Let's go ahead and draw a clearer distinction.
SB is required, on sunday, to lay odds on the coin flip; the coin will be shown to her on wednesday, and the outcome judged. She is given the opportunity to change her mind about the odds she's
laying at any point during the experiment before it's over. Should she change her odds? No.
About Dutch-booking - You must have gotten in there before I rewrote it, which I did before you finished posting. I realized I may have been misusing the term. Does the version up now make sense? Oh,
heck, I'll rewrite it again to make it even clearer.
Your new formulation is much better. Now I can identify the pain point.
The probability of my waking up this time having been a waking-event following a tails coinflip is 2/3.
I think I unconditionally agree with this one (I'm not certain, though).
The probability of the coinflip having come up tails is 1/2.
This is when I get confused. See, if you ask the question before drugging SB, it feels obvious that she should answer "1/2". As you say, she gains no information by merely waking up, because she knew
she would in advance. Yet she should still bet 2:1 odds, whether it's money or log-odds. In other words, how on Earth can the subjective probability be different from the correct betting odds?!
Currently, I see only two ways of solving this apparent contradiction. Either estimating 2:1 odds from the beginning, or admitting that waking up actually provided information. Both look crazy, and I
can't find any third alternative.
(Note that we assume she will be made to bet at each wake up no matter what. For instance, if she knows she only have to bet Monday, then she wakes up and is told to bet, she gains information that
tells her "1/2 probability, 1/2 betting odds". Same thing if she only know she will bet once.)
how on Earth can the subjective probability be different from the correct betting odds?!
Because the number of bets she makes will be different in one outcome than the other. it's exactly like the bookie software bug example I gave. Normally you don't need to think about this, but when
you begin manipulating the multiplicity of the bettors, you do.
Let's take it to extremes to clarify what the real dependencies are. Instead of waking Bea 2 times, we wake her 1000 times in the event of a tails flip (I didn't say 3^^^3 so we wouldn't get boggled
by logistics).
Now, how surprised should she be in the event of a heads flip? Astonished? Not that astonished? Equanimous? I'm going with Equanimous.
Also, how my solution could lead Sleeping Beauty to be Dutch-booked? Could you provide an example, please?
My hunch is that any solution other than yours allows her to be Dutch-booked...
Not after correction for payout matrix, as described...
I know it looks cheap, because it doesn't answer the question "But what really is the subjective probability?"
I don't think using betting is cheap at all, if you want to answer questions about decision-making. But I still wanted to answer the question about probability :D
I've always thought of SSA and SIA as assumptions that depend on what your goal is in trying to figure out the probability. Sleeping Beauty may want to maximize the probability that she guesses the
coin correctly at least once, in which cases she should use the probability 1/2. Or she may want to maximize the number of correct guesses, in which case she should use the probability 2/3.
In either case, asking "but what's the probability, really?" isn't helpful.
Edit: in the second situation, Sleeping Beauty should use the probability 2/3 to figure out how to maximize the number of correct guesses. This doesn't mean she should guess T 2/3 of the time -- her
answer also depends on the payouts, and in the simplest case (she gets $1 for every correct guess) she should be guessing T 100% of the time.
In either case, asking "but what's the probability, really?" isn't helpful.
Strongly agree. My paper here: http://arxiv.org/abs/1110.6437 takes the problem apart and considers the different components (utilities, probabilities, altruism towards other copies) that go into a
decision, and shows you can reach the correct decision without worrying about the probabilities at all.
You're wondering whether or not to donate to reduce existential risks. You won't donate if you're almost certain the world will end soon either way. You wake up as the 100 billionth person. Do you
use this information to update on the probability that there will only be on the order of 100 billion people, and refrain from donating?
I really like your explanations in this thread.
However, I've always had the feeling that people raising "it just depends on the utility function / bet / payoff" were mostly trying to salve egos wounded by having wrongly analyzed the problem. It's
instructive to consider utility, but don't pretend to be confused about whether Beauty should be surprised to learn that the toss was H and not T.
You're right. For that reason, I think my explanations in the follow-up comments were better than this first attempt (not that this post is incorrect, it just doesn't quite address the main point).
I've previously tried to say the same thing here and here. The opinion I have hasn't changed, but maybe my way of expressing it has.
Probabilities are unique. They're a branch of math. They depend on your information, but your motivations are usually "extraneous junk information." And math still works the same even if you ask that
it is really (What's 2+2, really? 4).
Now, you could invent something else for the letters "probability" to mean, and define that to be 1/2 in the sleeping beauty problem, that's fine. But that wouldn't be some "other probability." That
would be some other "probability."
EDIT: It appears that I thoroughly misunderstood Misha to be saying two wrong things - first that probability can be defined by maximizing different things depending on what you want (not what was
said), and second that asking "but what's the probability really?" isn't helpful because I'm totally wrong about probabilities being unique. So, whoops.
What I'm saying is that there are two probabilities there, and they are both the correct probabilities, but they are the correct probabilities of different things. These different things seem like
answers to the same question because the English language isn't meant to deal with Sleeping Beauty type problems. But there is a difference, which I've done my best to explain.
Given that, is there anything your nitpicking actually addresses?
By "two probabilities" you mean this? :
Sleeping Beauty may want to maximize the probability that she guesses the coin correctly at least once, in which cases she should use the probability 1/2. Or she may want to maximize the number
of correct guesses, in which case she should use the probability 2/3.
That looks like two "probabilities" to me. Could you explain what the probabilities would be of, using the usual Bayesian understanding of "probability"?
I can try to rephrase what I said, but I honestly have no clue what you mean by putting probabilities in quotes.
2/3 is the probability that this Sleeping Beauty is waking up in a world where the coin came up tails. 1/2 is the probability that some Sleeping Beauty will wake up in such a world. To the naive
reader, both of these things sound like "The probability that the coin comes up tails".
Ah, okay, that makes sense to me now. Thanks.
I put the word "probability" in quotes is because I wanted to talk about the word itself, not the type of logic it refers to. The reason I thought you were talking about different types of logic
using the same word was because probability already specifies what you're supposed to be maximizing. For individual probabilities it could be one of many scoring rules, but if you want to add scores
together you need to use the log scoring rule.
To the naive reader, both of these things sound like "The probability that the coin comes up tails".
Right. One of them is the probability that the coin comes up tails given some starting information (as in a conditional probability, like P(T | S)), and the other is the probability that the coin
comes up tails, given the starting information and some anthropic information: P(T | S A). So they're both "P(T)," in a way.
Hah, so I think in your original comment you meant "asking "but what's P(T), really?" isn't helpful," but I heard "asking "but what's P(T | S A), really?" isn't helpful" (in my defense, some people
have actually said this).
If this is right I'll edit it into my original reply so that people can be less confused. Lastly, in light of this there is only one thing I can link to.
Can you add a summary break (one of the options when you edit the post) for the convenience of readers scrolling through lists of posts?
An anthropic problem is one where the very fact of your existence tells you something. "I woke up this morning, therefore the earth did not get eaten by Galactus while I slumbered."
It also gives you "I woke up this morning, therefore it is more likely that the earth was eaten by something a few orders of magnitude larger than Galactus*". This kind of consideration is frivolous
until you manage to find a way to use anthropics to solve np-complete problems (or otherwise encounter extreme anthropic circumstances).
* cf. Jonah.
The last time I had an anthropic principle discussion on Less Wrong I was pointed at the following paper: http://arxiv.org/abs/1110.6437 (See http://lesswrong.com/lw/9ma/
This struck me as interesting since it relates the Sleeping Beauty problem to a choice of utility function. Is Beauty a selfish utility maximizer with very high discount rate, or a selfish utility
maximizer with low discount rate, or a total utility maximizer, or an average utility maximizer? The type of function affects what betting odds Beauty should accept.
Incidentally, one thing that is not usually spelled out in the story (but really should be) is whether there are other sentient people in the universe apart from Beauty, and how many of them there
are. Also, does Beauty have any/many experiences outside the context of the coin-toss and awakening? These things make a difference to SSA (or to Bostrom's SSSA).
The last time I had an anthropic principle discussion on Less Wrong I was pointed at the following paper
While that work is interesting, knowing how to get probabilities means we can basically just ignore it :P Just assume Beauty is an ordinary utility-maximizer.
These things make a difference to SSA (or to Bostrom's SSSA).
They make a difference if those things are considered as mysterious processes that output correct probabilities. But we already know how to get correct probabilities - you just follow the basic
rules, or, in the equivalent formulation used in this post, follow the information. If SSA is used in any other way than as a set of starting information, it becomes an ad hoc method, not worth much
Not sure I follow that... what did you mean by an "ordinary" utility maximizer"? Is it a selfish or a selfless utility function, and if selfish what is the discount rate? The point about Armstrong's
paper is that really does matter.
Most of the utility functions do give the 2/3 answer, though for the "average utilitarian" this is only true if there are lots of people outside the Sleeping Beauty story (or if Beauty herself has
lots of experiences outside the story).
I'm a bit wary about using an indifference principle to get "the one true answer", because in the limit it suffers from the Presumptuous Philosopher problem. Imagine that Beauty (or Beauty clones) is
woken a trillion times after a Tails toss. Then the indifference principle means that Beauty will be very near certain that the head fell Tails. Even if she is shown sworn affidavits and video
recordings of the coin falling Heads, she'll believe that they were faked.
Not sure I follow that... what did you mean by an "ordinary" utility maximizer"? Is it a selfish or a selfless utility function, and if selfish what is the discount rate? The point about
Armstrong's paper is that really does matter.
So you have this utility function U, and it's a function of different outcomes, which we can label by a bunch of different numbers "x". And then you pick the option that maximizes the sum of U(x) * P
(x | all your information).
There are two ways this can fail and need to be extended - either there's an outcome you don't have a utility for, or there's an outcome you don't have a probability for. Stuart's paper is what you
can do if you don't have some probabilities. My post is how to get those probabilities.
If something is unintuitive, ask why it is unintuitive. Eventually either you'll reach something wrong with the problem (does it neglect model uncertainty?), or you'll reach something wrong with
human intuitions (what is going on in peoples' heads when they get the monty hall problem wrong?). In the meanwhile, I still think you should follow the math - unintuitiveness is a poor signal in
situations that humans don't usually find themselves in.
This looks like what Armstrong calls a "selfless" utility function i.e. it has no explicit term for Beauty's welfare here/now or at any other point in time.. The important point here is that if
Beauty bets tails, and the coin fell Tails, then there are two increments to U, whereas if the coin fell Heads then there is only one decrement to U. This leads to a 2/3 betting probability.
In the trillion Beauty case, the betting probability may depend on the shape of U and whether it is bounded (e.g. whether winning 1 trillion bets really is a trillion times better than winning one).
This looks like what Armstrong calls a "selfless" utility function i.e. it has no explicit term for Beauty's welfare here/now or at any other point in time.
Stuart's terms are a bit misleading because they're about decision-making by counting utilities, which is not the same as decision-making by maximizing expected utility. His terms like "selfish" and
"selfless" and so on are only names for counting rules for utilities, and have no direct counterpart in expected utility maximizers.
So U can contain terms like "I eat a candy bar. +1 utility." Or it could only contain terms like "a sentient life-form eats a candy bar. +1 utility." It doesn't actually change what process Sleeping
Beauty uses to make decisions in anthropic situations, because those ideas only applied to decision-making by counting utilities. Additionally, Sleeping Beauty makes identical decisions in anthropic
and non-anthropic situations, if the utilities and the probabilities are the same.
OK, I think this is clearer. The main point is that whatever this "ordinary" U is scoring (and it could be more or less anything) then winning the tails bet scores +2 whereas losing the tails bet
scores -1. This leads to 2/3 betting probability. If subjective probabilities are identical to betting probabilities (a common position for Bayesians) then the subjective probability of tails has to
be 2/3.
The point about alternative utility functions though is that this property doesn't always hold i.e. two Beauties winning doesn't have to be twice as good as one Beauty winning. And that's especially
true for a trillion Beauties winning.
Finally, if you adopt a relative frequency interpretation (the coin-toss is repeated multiple times, and take limit to infinity) then there are obviously two relative frequencies of interest. Half
the coins fall Tails, but two thirds of Beauty awakenings are after Tails. Either of these can be interpreted as a probability.
If subjective probabilities are identical to betting probabilities (a common position for Bayesians)
If we start with an expected utility maximizer, what does it do when deciding whether to take a bet on, say, a coin flip? Expected utility is the utility times the probability, so it checks whether P
(heads) * U(heads) > P(tails) * U(tails). So betting can only tell you the probability if you know the utilities. And changing the utility function around is enough to get really interesting
behavior, but it doesn't mean you changed the probabilities.
Half the coins fall Tails, but two thirds of Beauty awakenings are after Tails. Either of these can be interpreted as a probability.
What sort of questions, given what sorts of information, would give you these two probabilities? :D
For the first question: if I observe multiple coin-tosses and count what fraction of them are tails, then what should I expect that fraction to be? (Answer one half). Clearly "I" here is anyone other
than Beauty herself, who never observes the coin-toss.
For the second question: if I interview Beauty on multiple days (as the story is repeated) and then ask her courtiers (who did see the toss) whether it was heads or tails, then what fraction of the
time will they tell me tails? (Answer two thirds.)
What information is needed for this? None except what is defined in the original problem, though with the stipulation that the story is repeated often enough to get convergence.
Incidentally, these questions and answers aren't framed as bets, though I could use them to decide whether to make side-bets.
I haven't read the paper, but it seems like one could just invent payoff schemes customized for her utility function and give her arbitrary dilemmas that way, right?
There is the humanity of the observer to consider, but I don't think that simply adding existence and humanity transforms SIA into SSA.
The example for the sleeping beauty problem shows this. Under SIA, she can reason about the bet by comparing herself to a set of 3 possible waking beauties. Under SSA this is impermissible because
there is only a class of one or two existent waking beauties. Under SIA, she knows her existence and her humanity but this does not change the reasoning possible.
SSA is impossible for sleeping beauty to use, because using it properly requires knowing if there are 1 or 2 waking beauties, which requires knowing the problem under consideration. The same problem
would come up in any anthropic problem. As the answer to the question determines the set of SSA, the set of SSA cannot be a tool used in calculating the probabilities of different answers.
SSA is impossible for sleeping beauty to use, because using it properly requires knowing if there are 1 or 2 waking beauties
Depends on what you mean by "use." If you mean "use as a mysterious process that outputs probabilities," then you're right, it's unusable. But if you mean "use as a set of starting information,"
there is no problem.
I mean use as part of any process to determine probabilities of an anthropic problem. Mysterious or not. How can she use it as a set of starting information?
I may be misinterpreting, but to use either requires the identification of the set of items being considered. If I'm wrong, can you walk me through how sleeping beauty would consider her problem
using SSA as her set of starting information?
You're sort of right, because remember the Sweden problem. When we asked "what is the probability that I live in Sweden," using SSA, we didn't consider alternate earths. And the reason we didn't
consider alternate earths is because we used the information that Sweden exists, and is a country in europe, etc. We made our reference class "humans on this earth." But if you try to pull those same
shenanigans with Sleeping Beauty (if we use the problem statement where there's a copy of her) and make the reference class "humans who have my memories" you just get an "ERROR = DON'T HAVE COMPLETE
But what do you do when you have incomplete information? You use probabilities! So you get some sort of situation where you know that P(copy 1 | tails) = P(copy 2 | tails), but you don't know about P
(heads) and P(tails). And, hm, I think knowing that you're an observer that exists includes some sneaky connotation about mutual exclusivity and exhaustiveness of all your options.
Personally, I think saying there's "no particular asymmetry" is dangerous to the point of being flat out wrong. The three possibilities don't look the least bit symmetric to me, they're all
qualitatively quite different. There's no "relevant", asymmetry but how exactly do we know what's relevant and what's not? Applying symmetry in places it shouldn't be applied is the key way in which
people get these things wrong. The fact that it gives the right answer this time is no excuse.
So my challenge to you is, explain why the answer is 2/3 without using the word "symmetry".
Here's my attempt: Start with a genuinely symmetric (prior) problem, then add the information. In this case, the genuinely symmetric problem is "It's morning. What day is it and will/did the coin
come up heads?", while the information is "She just woke up, and the last thing she remembers is starting this particular bizzare coin/sleep game". In the genuinely symmetric initial problem all days
are equally likely and so are both coin flips. The process for applying this sort of additional information is to eliminate all scenarios that it's inconsistent with, and renormalise what's left. The
information eliminates all possibilities except (Monday, heads), (Monday, tails), (Tuesday, tails) - and some more obscure possibilities of (for the sake of argument) negligable weight. These main
three had equal weight before and are equally consistent with the new information so they have equal weight now.
Ok, I did use the word symmetry in there but only describing a different problem where it was safe. It's still not the best construction because my initial problem isn't all that well framed, but you
get the idea.
Note that more generally you should ask for p(new information | scenario) and apply Bayes Rule, but anthropic-style information is a special case where the value of this is always either 0 or 1.
Either it's completely inconsistent with the scenario or guaranteed by it. That's what leads to the simpler process I describe above of eliminating the impossible and simply renormalising what
The good thing about doing it this way is that you can also get the exact answer for the case where she knows the coin is biased to land heads 52% of the time, where any idea that the scenario is
symmetric is out the window.
Ok, I did use the word symmetry in there
Note that more generally you should ask for p(new information | scenario) and apply Bayes Rule, but anthropic-style information is a special case where the value of this is always either 0 or 1
But it's not entirely special, which is interesting. For example, say it's 8:00 and you have two buckets and there's one ball in one of the buckets. You have a 1/2 chance of getting the ball if you
pick a bucket. Then, at exactly 8:05, you add another bucket and mix up the ball. Now you have a 1/3 chance of getting the ball if you pick a bucket.
But what does Bayes' rule say? Well, P(get the ball | you add a third bucket) = P(get the ball) * P(you add a third bucket | get the ball) / P(you add a third bucket). Since you always add a third
bucket whether you get the ball or not, it seems the update is just 1/1=1, so adding a third bucket doesn't change anything. I would claim that this apparent failure of Bayes' rule (failure of
interpreting it, more likely) is analogous to the apparent failure of Bayes' rule in the sleeping beauty problem. But I'm not sure why either happens, or how you'd go about fixing the problem.
I'm yet to see how either the SSA or the SIA thinking can be instrumentally useful without reframing the SB problem in a way that lets her achieve a goal other than spitting out a useless number.
Once you reformulate the problem in a way that the calculated number affects her actual survival odds, the SSA vs SIA musings quickly disappear.
Does anyone know of any more detailed discussions of the Adrian Kent paper?
My response is: what have you got that works better?
I suppose my version is somewhere between SSA and SIA. It's "All other things equal, an observer should reason as if they are randomly selected from the set of all actually existent observers (past,
present and future)".
I accept timeless physics, so I guess there'd be observers sideways in time too, but that's not the point.
What is a "reference class" anyway?
The reference class is the collection of things you pretend you could be any one of with equal probability. To specify a reference class (e.g., "humans"), you just need a piece of information ("I am
a human").
But then it depends on the reference class you choose. For example, if you choose "animals" and then update on being a human, you will conclude that a higher proportion of animals are humans than if
you choose "humans" to begin with. If you get different results from processing the same information two different ways, at least one of them must be wrong.
Right. The trick is that choosing "animals" should be equivalent to having a certain piece of information. To get different reference classes, there has to be something you know that gives you "I'm a
human" instead of "I'm a dog! Woof!". If you neglect this, you can (and did) derive contradictory stuff.
I don't understand. I have the information "I am an animal" and "I am a human". If I start with "I am an animal" and update with "I am a human", I get something different than if I start with "I am a
human" and update with "I am an animal". How do I get the correct answer?
It seems to me that you'd have to start with "I am conscious", and then update with everything.
I don't understand. I have the information "I am an animal" and "I am a human". If I start with "I am an animal" and update with "I am a human", I get something different than if I start with "I
am a human" and update with "I am an animal". How do I get the correct answer?
Why do you end up with something different if you update in a different order? If you want a way to get the correct answer, work out why you do that and stop doing it!
Why do you end up with something different if you update in a different order?
I'd say it's because I should be updating in both cases, rather than starting with "I am an animal" or "I am a human". I should start with "I am conscious", because I can't not be, and then update
from there.
I'm trying to show that picking reference classes arbitrarily leads to a contradiction, so SSA, as currently stated, doesn't work. If it does, what other solution is there to that paradox?
Manfred, thanks for this post, and for the clarifications below.
I wonder how your approach works if the coin is potentially biased, but the bias is unknown? Let's say it has probability p of Tails, using the relative frequency sense that p is the frequency of
Tails if tossed multiple times. (This also means that in multiple repetitions a fraction 2p / (1 + p) Beauty awakenings are after Tails, and a fraction 1 / (1 + p) Beauty awakenings are on Mondays.)
Beauty has to estimate the parameter p before betting, which means in Bayesian terms she has to construct a subjective distribution over possible values of p.
1. Before going to sleep, what should her distribution look like? One application of the indifference principle is that she has no idea about p except that it is somewhere between 0 and 1, so her
subjective distribution of p should be uniform on [0, 1].
2. When she wakes up, should she adjust her distribution of p at all, or is it still the same as at step 1?
3. Suppose she's told that it is Monday before betting. Should she update her distribution towards lower values of p, because these would give her higher likelihood of finding out it's Monday?
If the answer to 3 is "yes" then won't that have implications for the Doomsday Argument as well? (Consider the trillion Beauty limit, where there will be a trillion awakenings if the coin fell Tails.
In that case, the fraction of awakenings which are "first" awakenings - on the Monday right after the coin-toss - is about 1/(1 + 10^12 x p). Now suppose that Beauty has just discovered she's in the
first awakening... doesn't that force a big shift in her distribution towards p close to zero?)
I wonder how your approach works if the coin is potentially biased, but the bias is unknown?
The way I formulated the problem, this is how it is already :) If you wanted a "known fair" coin, you'd need some information like "I watched this coin come up infinity times and it had a heads:tails
ratio of 1:1." Instead, all Beauty gets is the information "the coin has two mutually exclusive and exhaustive sides."
This is slightly unrealistic, because in reality coins are known to be pretty fair (if the flipper cooperates) from things like physics and the physiology of flipping. But I think a known fair coin
would make the problem more confusing, because it would make it more intuitive to pretend that the probability is a property of the coin, which would give you the wrong answer.
Anyhow, you've got it pretty much right. Uniform distribution, updated by P(result | coin's bias), can give you a picture of a biased coin, unlike if the coin was known fair. However, if "result" is
that you're the first awakening, the update is proportional to P(Monday | coin's bias), since being the first awakening is equivalent to saying you woke up on Monday. But notice that you always wake
up on Monday, so it's a constant, so it doesn't change the average bias of the coin.
This is interesting, and I'd like to understand exactly how the updating goes at each step. I'm not totally sure myself, which is why I'm asking the question about what your approach implies.
Remember Beauty now has to update on two things: the bias of the coin (the fraction p of times it would fall Tails in many throws) and whether it actually fell Tails in the particular throw. So she
has to maintain a subjective distribution over the pair of parameters (p, Heads|Tails).
Step 1: Assuming an "ignorant" prior (no information about p except that is between 0 and 1) she has a distribution P[p = r & Tails] = r, P[p = r & Heads] = 1 - r for all values of r between 0 and 1.
This gives P[Tails] = 1/2 by integration.
Step 2: On awakening, does she update her distribution of p, or of the probability of Tails given that p=r? Or does she do both?
It seems paradoxical that the mere fact of waking up would cause her to update either of these. But she has to update something to allow her to now set P[Tails] = 2/3. I'm not sure exactly how she
should do it, so your views on that would be helpful.
One approach is to use relative frequency again. Assume the experiment is now run multiple times, but with different coins each time, and the coins are chosen from a huge pile of coins having all
biases between zero and one in "equal numbers". (I'm not sure this makes sense, partly because p is a continuous variable, and we'll need to approximate it by a discrete variable to get the pile to
have equal numbers; but mainly because the whole approach seems contrived. However, I will close my eyes and calculate!)
The fraction of awakenings after throwing a coin with bias p becomes proportional to 1 + p. So after normalization, the distribution of p on awakening should shift to (2/3)(1 + p). Then, given that a
coin with bias p is thrown, the fraction of awakenings after Tails is 2p / (1 + p), so the joint distribution after awakening is P[p = r & Tails] = (4/3)r, and P[p = r & Heads] = (2/3)(1 - r), which
when integrating again gives P[Tails] = 2/3.
Step 3: When Beauty learns it is Monday what happens then? Well her evidence (call it "E") is that"I have been told that it is Monday today" (or "This awakening of Beauty is on Monday" if you want to
ignore the possible complication of untruthful reports). Notice the indexical terms.
Continuing with the relative frequency approach (shut up and calculate again!) Beauty should set P[E|p = r] = 1/(1+r) since if a coin with bias r is thrown repeatedly, that becomes the fraction of
all Beauty awakenings which will learn that "today is Monday". So the evidence E should indeed shift Beauty's distribution on p towards lower values of p (since they assign higher probability to the
evidence E). However, all the shift is doing here is to reverse the previous upward shift at Step 2.
More formally, we have P[E & p = r] proportional to 1/(1 + r) x (1 + r) and the factors cancel out, so that p[E & p = r] is a constant in r. Hence P[p = r | E] is also a constant in r, and we are
back to the uniform distribution over p. Filling in the distribution in the other variable, we get P[Tails | E & p = r] = r. Again look at relative frequencies: if a coin with bias r is thrown
repeatedly, then among the Monday-woken Beauties, a fraction r of them will be woken after Tails. So we are back to the original joint distribution P[p = r & Tails] = r, P[p = r & Heads] = 1 - r, and
again P[Tails] = 1/2 by integration.
After all that work, the effect of Step 2 is very like applying an SIA shift (Bias to Tails is deemed more likely, because that results in more Beautiful experiences) and the effect of Step 3 is then
like applying an SSA shift (Heads-bias is more likely, because that makes it more probable that a randomly-selected Beautiful experience is a Monday-experience). The results cancel out. Churning
through the trillion-Beauty case will give the same effect, but with bigger shifts in each direction; however they still cancel out.
The application to the Doomsday Argument is that (as is usual given the application of SIA and SSA together) there is no net shift towards "Doom" (low probability of expanding, colonizing the Galaxy
with a trillion trillion people and so on). This is how I think it should go.
However, as I noted in my previous comments, there is still a "Presumptuous Philosopher" effect when Beauty wakes up, and it is really hard to justify this if the relative frequencies of different
coin weights don't actually exist. You could consider for instance that Beauty has different physical theories about p: one of those theories implies that p = 1/2 while another implies that p = 9/10.
(This sounds pretty implausible if a coin, but if the coin-flip is replaced by some poorly-understood randomization source like a decaying Higgs Boson, then this seems more plausible). Also, for the
sake of argument, both theories imply infinite multiverses, so that there are just as many Beautiful awakenings - infinitely many - in each case.
How can Beauty justify believing the second theory more, simpy because she has just woken up, when she didn't believe it before going to sleep? That does sound really Presumptuous!
A final point is that SIA tends to cause problems when there is a possibility of an infinite multiverse, and - as I've posted elsewhere - it doesn't actually counter SSA in those cases, so we are
still left with the Doomsday Argument. It's a bit like refusing to shift towards "Tails" at Step 2 (there will be infinitely many Beauty awakenings for any value of p, so why shift? SIA doesn't tell
us to), but then shifting to "Heads" after Step 3 (if there is a coin bias towards Heads then most of the Beauty-awakenings are on Monday, so SSA cares, and let's shift). In the trillion-Beauty case,
there's a very big "Heads" shift but without the compensating "Tails" shift.
If your approach can recover the sorts of shift that happen under SIA+SSA, but without postulating either, that is a bonus, since it means we don't have to worry about how to apply SIA in the
infinite case.
So what does Bayes' theorem tell us about the Sleeping Beauty case?
It says that P(B|AC) = P(B|C) * P(A|BC)/P(A|C). In this case C is sleeping beauty's information before she wakes up, which is there for all the probabilities of course. A is the "anthropic
information" of waking up and learning that what used to be "AND" things are now mutually exclusive things. B is the coin landing tails.
Bayes' theorem actually appears to break down here, if we use the simple interpretation of P(A) as "the probability she wakes up." Because Sleeping Beauty wakes up in all the worlds, this
interpretation says P(A|C) = 1, and P(A|BC) = 1, and so learning A can't change anything.
This is very odd, and is an interesting problem with anthropics (see eliezer's post "The Anthropic Trilemma"). The practical but difficult-to-justify way to fix it is to use frequencies, not
probabilities - because she can have a average frequency of waking up of 2 or 3/2, while probabilities can't go above 1.
But the major lesson is that you have to be careful about applying Bayes' rule in this sort of situation - if you use P(A) in the calculation, you'll get this problem.
Anyhow, only some of this a response to anything you wrote, I just felt like finishing my line of thought :P Maybe I should solve this...
Thanks... whatever the correct resolution is, violating Bayes's Theorem seems a bit drastic!
My suspicion is that A contains indexical evidence (summarized as something like "I have just woken up as Beauty, and remember going to sleep on Sunday and the story about the coin-toss"). The
indexical term likely means that P[A] is not equal to 1 though exactly what it is equal to is an interesting question.
I don't personally have a worked-out theory about indexical probabilities, though my latest WAG is a combination of SIA and SSA, with the caveat I mentioned on infinite cases not working properly
under SIA. Basically I'll try to map it to a relative frequency problem, where all the possibilities are realised a large but finite number of times, and count P[E] as the relative frequency of
observations which contain evidence E (including any indexical evidence), taking the limit where the number of observations increases to infinity. I'm not totally satisfied with that approach, but it
seems to work as a calculational tool.
I may be confused, but it seems like Beauty would have to ask "Under what conditions am I told 'It's Monday'?" to answer question 3.
In other problems, when someone is offering you information, followed by a chance to make a decision, if you have access to the conditions under which they decided to offer you that information
should be used as information to influence your decision. As an example, the other host behaviors in the Monty Hall problem. mention that point, and it seems likely they would in this case as well.
If you have absolutely no idea under what circumstances they decided to offer that information, then I have no idea how you would aggregate meaning out of the information, because there appear to be
a very large number of alternate theories. For instance:
1: If Beauty is connected to a random text to speech generator, which happens to randomly text to speech output "Smundy", Beauty may have misheard nonsensical gibberish as "It's Monday."
2: Or perhaps it was intentional and trying to be helpful, but actually said "Es Martes" because it assumed you were a Spanish speaking rationalist, and Beauty just heard it as "It's Monday." when
Beauty should have processed "It's Tuesday." which would cause Beauty to update the wrong way.
3: Or perhaps it always tells Beauty the day of the week, but only on the first Monday.
4: Or perhaps it always tells Beauty the day of the week, but only if Beauty flips tails.
5: Or perhaps it always tells Beauty the day of the week, but only if Beauty flips heads.
6: Or perhaps it always tells Beauty the day of the week on every day of the puzzle, but doesn't tell Beauty whether it is the "first" Monday on Monday.
7: It didn't tell Beauty anything directly. Beauty happened to see a calendar when it opened the door and it appears to have been entirely unintentional.
Not all of these would cause Beauty to adjust the distribution of P in the same way. And they aren't exhaustive, since there are far more then these 7. Some may be more likely than others, but if
Beauty don't have any understanding about which would be happening when, Beauty wouldn't know which way to update P, and if Beauty did have an understanding, Beauty would presumably have to use that
I'm not sure whether this insightful, or making it more confused then it needs to be.
OK, fair enough - I didn't specify how she acquired that knowledge, and I wasn't assuming a clever method. I was just considering a variant of the story (often discussed in the literature) where
Beauty is always truthfully told the day of the week after choosing her betting odds, to see if she then adjusts her betting odds. (And to be explicit, in the trillion Beauty story, she's always told
truthfully whether she's the first awakening or not, again to see if she changes her odds). Is that clearer?
Yes, I wasn't aware "Truthfully tell on all days" was a standard assumption for receiving that information, thank you for the clarification.
It's OK.
The usual way this applies is in the standard problem where the coin is known to be unbiased. Typically, a person arguing for the 2/3 case says that Beauty should shift to 1/2 on learning it is
Monday. Whereas a critic originally arguing for the 1/2 case says that Beauty should shift to 1/3 for Tails (2/3 for Heads) on learning it is Monday.
The difficulty is that both those answers give something very presumptuous in the trillion Beauty limit (near certainty of Tails before the shift, or near certainty of Heads after the shift).
Nick Bostrom has argued for a "hybrid" solution which avoids the shift, but on the face of things looks inconsistent with Bayesian updating. But the idea is that Beauty might be in a different
"reference class" before and after learning the day.
See http://www.fhi.ox.ac.uk/__data/assets/pdf_file/0011/5132/sleeping_beauty.pdf or http://www.nickbostrom.com/ (Right hand column, about halfway down the page).
It looks like paragraphs 3--5 of "Thinking with Information" (starting with "Next, what should Sleeping Beauty") are in the wrong place.
Thank you, great post!
I recently read an academic paper3 that used the idea that in a multiverse, there will be some universe where a thrown coin comes up heads every time, and so the people in that universe will have
very strange ideas about how coins work.
Is this actually true? I always understood coins to be unaffected by quantum fluctuations.
The problem with the Sleeping Beauty Problem (irony intended), is that it belongs more in the realm of philosophy and/or logic, than mathematics. The irony in that (double-irony intended), is that
the supposed paradox is based on a fallacy of logic. So the people who perpetuate it should be best equipped to resolve it. Why they don't, or can't, I won't speculate about.
Mathematicians, Philosophers, and Logicians all recognize how information introduced into a probability problem allows one to update the probabilities based on that information. The controversy in
the Sleeping Beauty Problem is based on the fallacious conclusion that such "new" information is required to update probabilities this way. This is an example of the logical fallacy called affirming
the consequent: concluding that "If A Then B" means "A is required to be true for B to be true" (an equivalent statement is "If B then A").
All that is really needed for updating, is a change in the information. It almost always is an addition, but in the Sleeping Beauty Problem it is a removal. Sunday Sleeping Beauty (SSB) can recognize
that "Tails & Awake on Monday" and "Tails & Awake on Tuesday" represent the same future (Manfred's "AND"), both with prior probability 1/2. But Awakened Sleeping Beauty (ASB), who recognizes only the
present, must distinguish these two outcomes as being distinct (Manfred's "OR"). This change in information allows Bayes' Rule to be applied in a seemingly unorthodox way: P(H&AonMO|A) = P(H&AonMO)/
[P(H&AonMO) + P(T&AonMO) + P(T&AonTU)] = (1/2)/(1/2+1/2+1/2) = 1/3. The denominator in this expression is greater than 1 because the change (not addition) of information separates non-disjoint events
into disjoint events.
The philosophical issue about SSA v. SIA (or whatever these people call them; I haven't seen any two who define them agree), can be demonstrated by the "Cloned SB" variation. That's where, if Tails
is flipped, an independent copy of SB is created instead of two awakenings happening. Each instance of SB will experience only one "awakening," so the separation of one prior event into two disjoint
posterior events, as represented by "OR," does not occur. But neither does "AND." We need a new one called "ONE OF." This way, Bayes' Rule says P(H&Me on Mo) = P(H&Me on MO)/[P(H&Me on MO) + (ONE OF
P(T&Me on MO), P(T&Me on TU))] = (1/2)/(1/2+1/2) = 1/2.
The only plausible controversy here is how SB should interpret herself: as one individual who might be awakened twice during the experiment, or as one of the two who might exist in it. The former
leads to a credence of 1/3, and he latter leads to a credence of 1/2. But the latter does not follow from the usual problem statement.
|
{"url":"http://lesswrong.com/lw/85i/fundamentals_of_kicking_anthropic_butt/","timestamp":"2014-04-20T05:45:36Z","content_type":null,"content_length":"269348","record_id":"<urn:uuid:4b9d0495-e647-4fa9-b193-e6397a7b1eb1>","cc-path":"CC-MAIN-2014-15/segments/1397609538022.19/warc/CC-MAIN-20140416005218-00489-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Should numpy.sqrt(-1) return 1j rather than nan?
Should numpy.sqrt(-1) return 1j rather than nan?
Stefan van der Walt stefan at sun.ac.za
Thu Oct 12 09:28:28 CDT 2006
On Thu, Oct 12, 2006 at 08:58:21AM -0500, Greg Willden wrote:
> On 10/11/06, Bill Baxter <wbaxter at gmail.com> wrote:
> On 10/12/06, Greg Willden <gregwillden at gmail.com> wrote:
> > Speed should not take precedence over correctness.
> Unless your goal is speed. Then speed should take precedence over
> correctness.
> Huh?
> Person 1: "Hey you should use function X."
> Person 2: "No, it doesn't give me the correct answer."
> Person 1: "Who cares? It's fast!"
> What kind of logic is that?
I tried to explain the argument at
Using Tomcat but need to do more? Need to support web services, security?
Get stuff done quickly with pre-integrated technology to make your job easier
Download IBM WebSphere Application Server v.1.0.1 based on Apache Geronimo
More information about the Numpy-discussion mailing list
|
{"url":"http://mail.scipy.org/pipermail/numpy-discussion/2006-October/011398.html","timestamp":"2014-04-20T11:16:51Z","content_type":null,"content_length":"4138","record_id":"<urn:uuid:e50a50c0-6473-4ab3-9ae2-bbd17ef5d269>","cc-path":"CC-MAIN-2014-15/segments/1397609538423.10/warc/CC-MAIN-20140416005218-00439-ip-10-147-4-33.ec2.internal.warc.gz"}
|
The n-Category Café
June 28, 2012
Flat Ehresmann connections in Cohesive HoTT
Posted by Urs Schreiber
This is to share a little observation about the formulation of flat Ehresmann connections in cohesive homotopy type theory.
For context, this can be understood as following up on two different threads on this blog:
1. You may think of this as part IV of the little series HoTT Cohesion (part I: de Rham cohomology, part II: differential cohomology, part III: geometric prequantization) in which we discussed very
simple constructions in homotopy type theory that exist as soon as the axiom of cohesion is added and which, simple as they are, are interpreted as fundamental constructions in higher
differential geometry.
2. You may think of this as a curious example of the theory of twisted principal ∞-bundles discussed in the previous entry. For I will describe how a flat Ehresmann connection on a $G$-principal $\
infty$-bundle is formalized as a $\flat G$-twisted $\infty$-bundle on its total space, where $\flat$ (“flat”) is one of the two reflectors in the definition of cohesive homotopy type theory.
Posted at 4:13 PM UTC |
Followups (5)
June 25, 2012
Principal ∞-Bundles – general theory and presentations
Posted by Urs Schreiber
A few weeks back I had mentioned that with Thomas Nikolaus and Danny Stevenson we are busy writing up some notes on bundles in higher geometry. Now we feel that we are closing in on a fairly stable
version, and so we thought it may be about time to share what we have and ask for feedback.
As it goes, this project has become a collection of three articles now. They are subdivided as
1. General theory (pdf)
2. Presentations (pdf)
3. Applications (not yet out)
The idea is that the first proceeds in full abstraction, using just the axioms of $\infty$-topos theory (so roughly, up to the technical caveats discussed here at length: using just the axioms of
homotopy type theory), while the second discusses models of the axioms by presentations in categories of simplicial (pre)sheaves.
More explicitly, in the first one we discuss how, in a given $\infty$-topos, principal $\infty$-bundles are equivalent to cocycles in the $\infty$-topos, how fiber $\infty$-bundles are associated to
principal $\infty$-bundles and how their sections are cocycles in twisted cohomology classifying twisted principal $\infty$-bundles and extensions of structure $\infty$-groups. We close by
identifying the universal twisting bundles/local coefficient bundles with $\infty$-gerbes and discuss how this reproduces various notions of $n$-gerbes.
In the second one we show how principal $\infty$-bundles are equivalent to cocycles in simplicial hyper-Čech-cohomology, and we prove a strictification result: in a 1-localic $\infty$-topos the space
of principal $\infty$-bundles over any $\infty$-stack is modeled by ordinary simplicial bundles with an ordinary action of a simplicial group, the only weakening being that the principality condition
holds only up to local weak equivalence. We discuss what this looks like for discrete geometry and for smooth geometry.
For a tad more detail see the abstracts (General Theory abstract, Presentations abstract). And for full details, including references (see page 4 of part 2 for a discussion of the literature) etc.
see – of course – the writeups themselves: 1. General theory, 2. Presentations.
All comments would be welcome!
Posted at 7:29 PM UTC |
Followups (5)
June 18, 2012
The Gamification of Higher Category Theory
Posted by Mike Shulman
I found the following article when John posted about it on Google Plus:
Go read it; I’ll wait for you below the fold.
Posted at 9:21 PM UTC |
Followups (11)
June 14, 2012
Cohomology in Everyday Life
Posted by David Corfield
I have been looking for examples, accessible to a lay audience, to illustrate the prevalence of cohomology. Here are some possibilities:
• Penrose’s impossible figures, such as the tribar
• Carrying in arithmetic
• Electrical circuits and Kirchhoff’s Law
• Pythagorean triples (Hilbert’s Theorem 90)
• Condorcet’s paradox (concerning the impossibility of combining comparative rankings)
• Entropy, but I think we never quited nailed this.
Anyway, I’d be grateful for any other cases of cohomology in everyday life.
Posted at 12:27 PM UTC |
Followups (107)
June 7, 2012
Directed Homotopy Type Theory
Posted by Mike Shulman
Recently I’ve been talking a lot about homotopy type theory, and its potential role as a foundation for mathematics in which homotopy types — which is to say $\infty$-groupoids — are basic objects.
However, when I say this to $n$-categorists, I inevitably get the question “Why stop with $\infty$-groupoids? What about $n$-categories, or $(\infty,n)$-categories, or $(\infty,\infty)$-categories?”
Until now, I’ve had only two answers:
1. Maybe there is a foundation in which higher categories are basic objects — but we don’t know yet what it might look like. For instance, some of us have thought about a “directed type theory” for
1-categories. This seems to work, but it’s not as clean as homotopy type theory: the composition laws seem to have to be put in by hand, rather than falling out automatically like they do for $\
infty$-groupoids using the inductive definition of identity types. For $n$-categories or $\omega$-categories, it seems that we would essentially have to build something like a Batanin operad into
the structure of the type theory. This is probably possible, but not especially appealing to me, unless someone finds a clean way of “generating” such an operad in the same way that Martin-Löf
identity types “generate” all the structure of a Batanin $\infty$-groupoid.
There is also the question of what a “dependent type” should mean for directed categories. Presumably some sort of fibration — but of what variance? We seem to need different kinds of “dependent
type” for fibrations and opfibrations, and as we go up in categorical level this problem will multiply itself. Moreover, not all functors are exponentiable, so dependent product types will
require some variance assumptions. Furthermore, I think there are some issues with the categorical semantics when you get up to 3-categories: comma objects don’t seem to do the right thing any
2. These problems suggest that maybe even in the long run, we’ll decide that it’s better to only build $\infty$-groupoids into the foundational system, with directed categories as a defined notion.
(I’m not saying I think we necessarily will, just that I think it’s possible that we will.) If we go this route, then I think the most promising way to define $(\infty,n)$-categories inside
homotopy type theory would be an adaptation of Charles Rezk’s “complete Segal spaces” and $\Theta$-categories, since these are defined entirely in terms of diagrams of $\infty$-groupoids and
don’t require any truncatedness of the space of objects. (The fact that every $\infty$-groupoid admits an essentially surjective map from a 0-truncated one — that is, from a set — is a
“classicality” property that doesn’t hold in general homotopy type theory. Thus, any definition of $(\infty,n)$-categories which requires that the objects form a set — which includes most
definitions other than Rezk’s — would be insufficiently general.)
However, we don’t yet know how to define simplicial objects or $\Theta$-objects inside of homotopy type theory (unless we are willing to allow infinite lists of definitions, which is fine
mathematically if we assume a strong enough metatheory, but difficult to implement in a computer). Depending on how we end up solving that problem, this approach could also turn out to be
technically complicated.
In this post, I want to present a third approach, which sort of straddles these two pictures. On the one hand, it gives us a way to talk about $(\infty,1)$-categories (and, probably, $(\infty,n)$
-categories) inside homotopy type theory today, without waiting for a solution to the problem of defining simplicial objects (although in the long run, we’ll still need to solve that problem). On the
other hand, we can think of it as a rough approximation to what an eventual “directed homotopy type theory” might look like.
However, this is still just a basic idea; there are technical issues as well as conceptual questions that need to be answered. I’ve been rolling them around in my head for a while and not making much
progress, so I’m hoping to start a discussion in which other people may be able to see what I can’t.
Posted at 8:16 PM UTC |
Followups (22)
June 6, 2012
Compact Closed Bicategories
Posted by John Baez
guest post by Mike Stay
Thanks to primates’ propensity for trade and because our planet rotates, everyone is familiar with abelian groups. We can add, we’ve got a zero element, addition is commutative and associative, and
we can negate any element—or, using multiplicative language: we can multiply, we’ve got a 1 element, multiplication is commutative and associative, and we can divide 1 by any element to get its
Thanks to the fact that for most practical purposes we live in $\mathbb{R}^3,$ everyone’s familiar with at least one vector space. Peano defined them formally in 1888. The collection of vector spaces
is like a “categorified” abelian group: instead of being elements of a set, vector spaces are objects in a compact closed category. We can “multiply” them using the tensor product; we have the
1-dimensional vector space that plays the role of 1 up to an isomorphism called the unitor; the tensor product is associative up to an isomorphism called the associator, and is commutative up to an
isomorphism called the braiding; and every object $A$ has a “weak inverse”, or dual, an object equipped with morphisms for “cancelling”, $e_A:A\otimes A^{\ast} \to 1$ and $i_A:1 \to A^{\ast}\otimes
A$ and some “yanking” equations. As always in categorification, when we weaken equations to isomorphisms, we have to add new equations: a pentagon equation for the associator, triangle equations for
the unitors, hexagon equations for the braiding, and braiding twice is the identity.
Thanks to the fact that everyone has family and friends, everyone is (ahem) familiar with relations. Sets, relations, and implications form a compact closed bicategory. In a compact closed
bicategory, we weaken the equations above to 2-isomorphisms and add new equations: an associahedron with 14 vertices, 7-vertex prisms for the unitors, shuffle polytopes and a map of permutahedra for
the braiding, and an equation governing the syllepsis. The syllepsis is what happens when we weaken the symmetry equation: braiding twice is merely isomorphic to the identity.
We can start to see some sequences emerging from this process. The stuff below is mostly due to Chris Schommer-Pries’ definition of symmetric monoidal bicategories together with Day and Street’s
definition of a compact closed Gray monoid. Their work, in turn, relies greatly on Paddy McCrudden’s Balanced coalgebroids and a handwritten note for the swallowtail coherence law. In the graphics
below, I tried to make the symmetries apparent, something that was lacking from McCrudden’s presentation of the coherence laws due to the failings of the tools available for typesetting them.
Posted at 1:56 AM UTC |
Followups (10)
|
{"url":"http://golem.ph.utexas.edu/category/2012/06/index.shtml","timestamp":"2014-04-19T09:25:29Z","content_type":null,"content_length":"74500","record_id":"<urn:uuid:fdddd947-bf97-4a43-9b7a-a2cea0b618e7>","cc-path":"CC-MAIN-2014-15/segments/1397609537097.26/warc/CC-MAIN-20140416005217-00424-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Basel Problem A Rigorous Proof
A selection of articles related to basel problem a rigorous proof.
Original articles from our library related to the Basel Problem A Rigorous Proof. See Table of Contents for further available material (downloadable resources) on Basel Problem A Rigorous Proof.
Basel Problem A Rigorous Proof is described in multiple online sources, as addition to our editors' articles, see section below for printable documents, Basel Problem A Rigorous Proof books and
related discussion.
Suggested Pdf Resources
Suggested Web Resources
Great care has been taken to prepare the information on this page. Elements of the content come from factual and lexical knowledge databases, realmagick.com library and third-party sources. We
appreciate your suggestions and comments on further improvements of the site.
|
{"url":"http://www.realmagick.com/basel-problem-a-rigorous-proof/","timestamp":"2014-04-18T21:07:37Z","content_type":null,"content_length":"29217","record_id":"<urn:uuid:2c79e77b-7a6a-480a-a661-e8c16bb89484>","cc-path":"CC-MAIN-2014-15/segments/1398223206120.9/warc/CC-MAIN-20140423032006-00210-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Spherical Coordinate problem
May 3rd 2010, 11:38 AM
Spherical Coordinate problem
the problem reads "Find the volume of the part of the ball p(rho)=<a that lies between the cones phi=pi/6 and phi=pi/3"
can anyone help at least set up the triple integral? I'm not sure how to go about this
May 3rd 2010, 01:17 PM
From the question I gather the equation for the sphere(ball) is
$x^2 + y^2 + z^2 \le ( \sqrt{a} ) ^2$
What we're looking for is
$\iiint dV$ where $dV = p^2 sin \phi dp d \phi d \theta$
So what are our bounds for theta, phi and p? Well, for P this is the value of the minimum point and maximum point. In this case, clearly the farthest point P is $\sqrt{a}$. But what is the
shortest point/distance it can be? Well, in this case the origin! So,
$0 \le P \le \sqrt{a}$
From the question were are given our bounds of phi!
$\frac{ \pi }{6} \le \phi \le \frac{ \pi }{3}$
What about theta? Well, we are going 360 degress around the way! So our theta bounds become
$0 \le \pi \le 2 \pi$
$\int_0^{2 \pi } d \theta \int_{ \frac{ \pi }{6} }^{ \frac{ \pi }{3} } sin \phi d \phi \int_0^{ \sqrt{a} } p^2 dp$
|
{"url":"http://mathhelpforum.com/calculus/142840-spherical-coordinate-problem-print.html","timestamp":"2014-04-19T18:33:03Z","content_type":null,"content_length":"6671","record_id":"<urn:uuid:633a193a-c441-410e-acc9-d7a8859428dc>","cc-path":"CC-MAIN-2014-15/segments/1397609537308.32/warc/CC-MAIN-20140416005217-00127-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Instructor Class Description
Time Schedule:
Aneesh S. Hariharan
Q SCI 292
Seattle Campus
Analysis for Biologists II
Introduction to integral calculus, emphasizing development of basic skills. Examples promote understanding of mathematics and applications to modeling and solving biological problems. Topics include
areas under curves, volumes, and differential equations. Prerequisite: minimum grade of.7 in either Q SCI 291 or MATH 124. Not available for credit to students who have completed MATH 125 with a 2.0
or higher Offered: WSpS.
Class description
This course is expected to cover techniques in integral calculus. Biological/ecological models such as exponential growth/decay, logistic, von-Bertallanfy, Ricker's, Monod/Michelis-Menten will be
analyzed in depth. Other applications include, finding volumes, surface of revolution, length of a curve and interpretations of area under curves.
Student learning goals
Learn techniques of integration.
When and how to apply integral calculus to real-world problems.
General method of instruction
Lectures, mostly involves problem solving from the exercise section of the book. The students are expected to read the relevant sections and worked out examples from the text.
Recommended preparation
Pre-cal (algebra, trig), Differential Calculus
Class assignments and grading
20%- 4 homeworks 40%- 4 Short quizzes 20%- Midterm 20%- Final project (based on whatever techniques you have learned during the course; depending on time there may/may not be a presentation and
outside faculty/grad students will be invited)
The information above is intended to be helpful in choosing courses. Because the instructor may further develop his/her plans for this course, its characteristics are subject to change without
notice. In most cases, the official course syllabus will be distributed on the first day of class. Last Update by Aneesh S. Hariharan
Date: 07/15/2012
|
{"url":"http://www.washington.edu/students/icd/S/quantsci/292aneesh.html","timestamp":"2014-04-17T15:51:53Z","content_type":null,"content_length":"4826","record_id":"<urn:uuid:a0769dc4-bec7-4770-885b-13f3395e8fc9>","cc-path":"CC-MAIN-2014-15/segments/1397609530136.5/warc/CC-MAIN-20140416005210-00444-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Got Homework?
Connect with other students for help. It's a free community.
• across
MIT Grad Student
Online now
• laura*
Helped 1,000 students
Online now
• Hero
College Math Guru
Online now
Here's the question you clicked on:
Consider the function f(x) = 2sin(x2) on the interval 0 ≤ x ≤ 3. (a) Find the exact value in the given interval where an antiderivative, F, reaches its maximum. x = (b) If F(1) = 9, estimate the
maximum value attained by F. (Round your answer to three decimal places.) y ≈
• one year ago
• one year ago
Your question is ready. Sign up for free to start getting answers.
is replying to Can someone tell me what button the professor is hitting...
• Teamwork 19 Teammate
• Problem Solving 19 Hero
• Engagement 19 Mad Hatter
• You have blocked this person.
• ✔ You're a fan Checking fan status...
Thanks for being so helpful in mathematics. If you are getting quality help, make sure you spread the word about OpenStudy.
This is the testimonial you wrote.
You haven't written a testimonial for Owlfred.
|
{"url":"http://openstudy.com/updates/4f8fa4a6e4b000310fade9bd","timestamp":"2014-04-18T21:05:52Z","content_type":null,"content_length":"422980","record_id":"<urn:uuid:2d1686fd-2adf-4f86-93fe-f6559ba6713f>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.9/warc/CC-MAIN-20140416005215-00629-ip-10-147-4-33.ec2.internal.warc.gz"}
|
North Metro Algebra 2 Tutor
Find a North Metro Algebra 2 Tutor
...She even studied abroad in Ireland during those three years! She's been tutoring for over 5 years in many different environments that include one-on-one tutoring in-person and online, as well
as tutoring in a group environment. She can adapt to many learning styles.
22 Subjects: including algebra 2, reading, writing, calculus
I hold a bachelor's degree in Secondary Education and a master's degree in Education. I am certified to teach in both PA and GA. I have teaching experience at both the Middle School and High
School level in both private and public schools.
10 Subjects: including algebra 2, geometry, algebra 1, logic
...I can tutor in most subjects, such as math or English, along with chemistry and business related subjects. I usually get to know a student by watching them work and figuring out where their
weaknesses are. I then provide ways to strengthen the weak points and give pointers on how to correct errors.
37 Subjects: including algebra 2, English, chemistry, reading
...My approach is to find the parts of algebra that students are comfortable with and build on those to reach areas that students are less proficient at. When students realize that all of algebra
follows rules that they already know, they can usually relax and have fun with it. Geometry is the subject where math teachers bring in more abstract concepts and many students are left behind.
17 Subjects: including algebra 2, chemistry, physics, geometry
...My patience and ability to relate with people allows me to adjust to the student and show the path to the solution in a way that they understand.As a high school player, I was a three year
varsity letterman. Soccer takes exceptional footwork, vision and team work. My play earned me an scholarship to play at the collegiate level; however, injuries derailed that dream.
29 Subjects: including algebra 2, chemistry, calculus, physics
Nearby Cities With algebra 2 Tutor
Barrett Parkway, GA algebra 2 Tutors
Big Canoe, GA algebra 2 Tutors
Embry Hls, GA algebra 2 Tutors
Fort Gillem, GA algebra 2 Tutors
Green Way, GA algebra 2 Tutors
Kelly, GA algebra 2 Tutors
North Corners, GA algebra 2 Tutors
Overlook Sru, GA algebra 2 Tutors
Raymond, GA algebra 2 Tutors
Rockbridge, GA algebra 2 Tutors
Sandy Plains, GA algebra 2 Tutors
Shenandoah, GA algebra 2 Tutors
Snapfinger, GA algebra 2 Tutors
Westside, GA algebra 2 Tutors
White Stone, GA algebra 2 Tutors
|
{"url":"http://www.purplemath.com/north_metro_ga_algebra_2_tutors.php","timestamp":"2014-04-18T01:09:46Z","content_type":null,"content_length":"24299","record_id":"<urn:uuid:8a47798c-ae2a-44ad-86c0-bbfdc8748457>","cc-path":"CC-MAIN-2014-15/segments/1397609532374.24/warc/CC-MAIN-20140416005212-00320-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Le Roy, NY Math Tutor
Find a Le Roy, NY Math Tutor
...But more importantly I was my children's math tutor, both are doing very well in college and graduate school. Because of working with my children on math at all levels I have gotten to know
the math education well, including tests such as SAT. For example, I went through the study books and all sample tests myself.
9 Subjects: including algebra 1, algebra 2, geometry, prealgebra
...I am also knowledgeable in basic electronics, and I am an A+ certified computer support technician (previous job). I have been recognized in college with an award for achieving the highest GPA
in Liberal Arts for a semester at SUNY Canton. I am available most hours of the day (9am-9pm) to meet w...
13 Subjects: including geometry, trigonometry, algebra 1, algebra 2
...I believe that positive reinforcement, rewards, praise, and empathy are the most powerful tools to promote a love of learning. I am well-versed in the development of study skills, from my own
personal experiences as well as my tutoring students and my 3 stepchildren. I am able to find creative and engaging ways to tailor a study plan to each individual's needs.
63 Subjects: including geometry, algebra 1, English, French
...Complicated things must be broken down into parts and if it is still complicated, I haven’t broken it into the right parts. Teaching is finding and pointing out a path that might lead to
better understanding. My best outcomes occurred when the student believed we were going on the journey toget...
17 Subjects: including algebra 1, accounting, finance, economics
...I learned a lot from that experience and want to bring that to you at a much more affordable rate. My scores are ACT: 34/36 SAT: 2400/2400 superscore (My scores helped me get into Duke
University, earn a full tuition scholarship to Colgate U and a Full-Ride (Tuition and Room & Board + free lapt...
12 Subjects: including statistics, ACT Math, SAT math, GMAT
Related Le Roy, NY Tutors
Le Roy, NY Accounting Tutors
Le Roy, NY ACT Tutors
Le Roy, NY Algebra Tutors
Le Roy, NY Algebra 2 Tutors
Le Roy, NY Calculus Tutors
Le Roy, NY Geometry Tutors
Le Roy, NY Math Tutors
Le Roy, NY Prealgebra Tutors
Le Roy, NY Precalculus Tutors
Le Roy, NY SAT Tutors
Le Roy, NY SAT Math Tutors
Le Roy, NY Science Tutors
Le Roy, NY Statistics Tutors
Le Roy, NY Trigonometry Tutors
Nearby Cities With Math Tutor
Alexander, NY Math Tutors
Bergen Math Tutors
Dale, NY Math Tutors
East Bethany Math Tutors
Elba, NY Math Tutors
Leicester, NY Math Tutors
Linwood, NY Math Tutors
Mumford, NY Math Tutors
Oakfield, NY Math Tutors
Piffard Math Tutors
Retsof Math Tutors
Scottsville, NY Math Tutors
South Byron Math Tutors
Stafford, NY Math Tutors
Wyoming, NY Math Tutors
|
{"url":"http://www.purplemath.com/le_roy_ny_math_tutors.php","timestamp":"2014-04-16T16:23:33Z","content_type":null,"content_length":"23815","record_id":"<urn:uuid:0f5a805c-36dc-49a8-8b2f-3376a647c63c>","cc-path":"CC-MAIN-2014-15/segments/1398223210034.18/warc/CC-MAIN-20140423032010-00459-ip-10-147-4-33.ec2.internal.warc.gz"}
|
electric circuits
1. The problem statement, all variables and given/known data
If the current in an electric conductor is 2.4A, how many coulombs of charge pass any point in a 30 second interval?
2. Relevant equations
A = C/s
3. The attempt at a solution
I just want to make sure that I am doing this right. If my current is 2.4 A then I can write this as 2.4 C/s and then multiply by the 30 second interval.
2.4 X 30 = 72 Coulombs is this the correct method?
|
{"url":"http://www.physicsforums.com/showthread.php?t=242091","timestamp":"2014-04-21T14:50:10Z","content_type":null,"content_length":"21878","record_id":"<urn:uuid:01c70f01-ec04-4990-88c9-4bac267bf2df>","cc-path":"CC-MAIN-2014-15/segments/1398223206647.11/warc/CC-MAIN-20140423032006-00620-ip-10-147-4-33.ec2.internal.warc.gz"}
|
there were algebras of concrete relations
The many descendants of Tarski’s Relation Algebras
Peter Jipsen
Vanderbilt University
Alfred Tarski Centenary Conference, Warsaw, May 29, 2001
A story about the creation of Relation Algebras
In the beginning there were algebras of concrete relations.
Tarski saw they were good, and he separated the interesting ideas from the trivial ones.
And Tarski said “Let there be an abstract theory about these algebras”.
So he made the theory of Relation Algebras. And he saw it was good.
And then Tarski said “Let the theory produce all the known results about concrete relations”. And it was so.
And he proved many interesting new results about relation algebras, including a correspondence with 3-variable logic that allowed the interpretation of set theory and he provided the first example of
an undecidable equational theory.
And Tarski said “Let the minds teem with new conjectures, let ideas fly, and let the community produce many new related theories and results”.
Thus the field of relation algebras was born, with its many applications and connections to other areas.
(all quotes fictitious; passage based on well known source)
And he did not rest. The trinity of Henkin, Monk, Tarski wrote two volumes about Cylindric Algebras (including several chapters on general algebra and relation algebras).
His many disciples worked tirelessly to spread the word.
And there was a cultural upheaval that made Tarski’s name spread far and wide: computer science emerged as a major discipline.
Some statistics about Tarski:
Number of authored papers in MR:
Tarski 125
Erdös 1535 (the most publications by any author)
Number of reviews mentioning name in MR:
Tarski 2133
Erdös 6878
Number of web pages mentioning name (on Google):
Tarski 36000
Erdös 30000
Number of papers in 11 major Mathematics journals that mention Tarski: 1041
Number of papers in 10 major Philosophy journals that mention Tarski: 1047
List of the journals searched in JSTOR:
Mathematics (11 journals)
1. American Journal of Mathematics (1878-1995)
2. American Mathematical Monthly (1894-1995)
3. Annals of Mathematics (1884-1995)
4. Journal of Symbolic Logic (1936-1996)
5. Journal of the American Mathematical Society (1988-1995)
6. Mathematics of Computation (1960-1995)
7. Proceedings of the American Mathematical Society (1950-1995)
8. SIAM Journal on Applied Mathematics (1966-1995)
9. SIAM Journal on Numerical Analysis (1966-1995)
10. SIAM Review (1959-1995)
11. Transactions of the American Mathematical Society (1900-1995)
Philosophy (10 journals)
1. Ethics (1938-1995)
2. Journal of Philosophy (1921-1995)
3. Journal of Symbolic Logic (1936-1996)
4. Mind (1876-1993)
5. Nous (1967-1995)
6. Philosophical Perspectives (1987-1995)
7. Philosophical Quarterly (1950-1995)
8. Philosophical Review (1892-1997)
9. Philosophy and Phenomenological Research (1940-1995)
10. Philosophy and Public Affairs (1971-1995)
Coauthors of Alfred Tarski
(From: Erdos1, Version 2001, January 30, 2001
This is a list of the 507 co-authors of Paul Erdos, together with their co-authors listed beneath them. The date of first joint paper with Erdos is given, followed by the number of joint publications
(if it is more than one). An asterisk following the name indicates that this Erdos co-author is known to be deceased. Please send corrections and comments to <grossman@oakland.edu>.)
TARSKI, ALFRED* 1943: 2
Andreka, Hajnal
Banach, Stefan
Beth, Evert W.
Chang, Chen Chung
Chin, Louise H.
Doner, John E.
Erdos, Paul
Fell, James M. G.
Givant, Steven R.
Henkin, Leon A.
Horn, Alfred
Jonsson, Bjarni
Keisler, H. Jerome
Kuratowski, Kazimierz
Lindenbaum, A.
Maddux, Roger D.
McKinsey, J. C. C.
Monk, J. Donald
Mostowski, Andrzej
Nemeti, Istvan
Schwabhauser, Wolfram
Scott, Dana S.
Sierpinski, Waclaw
Smith, Edgar C., Jr.
Szczerba, Leslaw W.
Szmielew, Wanda
Vaught, Robert L.
A real quote (according to MacTutor) of Tarski:
“You will not find in semantics any remedy for decayed teeth or illusions of grandeur or class conflict”
A tour of theories and structures close to relation algebras. We start with the variety RA.
Here is the definition of Relation Algberas as recorded by Bjarni Jonsson in a seminar that Alfred Tarski gave at Berkeley in the 1940s. Note that the equational axiomatisation is not chosen as the
original definition, but rather it is derived from the more useful and compact quasi-equational definition.
First some preliminaries: (A groupoid is what is nowadays called a monoid)
Now the definition:
And finally the equivalent equational definition:
An interactive definition of Relation Algebras can be found at http://www.math.vanderbilt.edu/~pjipsen/PCP/PCPothers.html
And now for a look at the many descendants of Relation Algebras:
|
{"url":"http://www1.chapman.edu/~jipsen/talks/Tarski2001/Tarskitalk.htm","timestamp":"2014-04-21T07:37:09Z","content_type":null,"content_length":"48676","record_id":"<urn:uuid:14a2a57a-e480-4388-9f19-bdae4de1aaa7>","cc-path":"CC-MAIN-2014-15/segments/1397609539665.16/warc/CC-MAIN-20140416005219-00061-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Math Forum: Teacher2Teacher - Q&A #963
View entire discussion
[<< prev] [ next >>]
From: Karl Dahlke <eklhad@comcast.net>
To: Teacher2Teacher Public Discussion
Date: 2002112607:43:57
Subject: Chicago Math has Failed my two Daughters
I am writing to express my concern about Chicago Math,
which is currently being taught in the Troy elementary school system.
This program has failed my two daughters,
and is not the best choice for our students.
First, allow me to introduce myself.
My name is Karl Dahlke, and I have always loved mathematics.
When I was in elementary school I quickly learned the basics of
arithmetic, using the "traditional" method. By traditional, I
mean the method most adults use to multiply and divide large numbers.
Something like this:
x 39
I pursued advanced mathematics in high school, and then in college,
obtaining a degree from Michigan State University,
and another degree from the University of California Berkeley.
The latter is easily one of the top ten graduate math programs in the
country. Today I maintain a web site of mathematics at the
undergraduate and graduate level.
You can visit it at {MathReference.com}.
Before I attended Berkeley, I interviewed at the University of
I had a chance to talk to their staff, and I realized
that this too was one of the finest math programs in the world.
The professors at the University of Chicago tackle some of the most
difficult math questions facing us today. I'm not sure why I didn't go
there, since I already lived in Illinois.
Maybe it was the Chicago winters. :-)
In any case, I was a bit surprised to learn that these august
professors had developed a math curriculum for elementary students.
This is like asking Einstein to write a physics primer for young
The resulting program is probably perfect for the gifted few
who will go on to study math and physics in later life,
while it confuses the hell out of the rest of us.
I believe this is the case with Chicago Math.
The program asks the student to draw various grids, and maintain
a catalog of intermediate results, with all the zeros in place.
This is suppose to teach you, indirectly, that multiplication
forms a ring, and that the traditional method works because of the
distributive property of multiplication over addition,
the commutative property of addition, and so on.
I see where they are going with this program, but nobody else does,
least of all the students.
I have three children in the Troy school system, which is, by the way,
one of the finest school districts in the country,
with the best teachers I have ever seen.
I am proud to have my children attend these schools.
Our students are learning math and getting high scores on
standardized tests, primarily because of these teachers,
and in spite of Chicago Math.
Let me illustrate with my two daughters.
(My son is in special education, and is learning math the traditional
With all his disabilities, I am glad he does not have to slog through
Chicago Math as well. That would simply be too much.)
My first daughter, whom I will call Jane, is extremely bright.
She is in the program for gifted children, and does well in all her
Nothing slows her down, except Chicago Math.
On rare occasions she has come to me in tears, asking for help.
I show her what they are asking for, and she understands the process,
but still seems confused. She applies it faithfully on the test
and gets an A, but doesn't see the point of it all.
Despite her keen intellect, she does not grasp the deeper meaning,
the "why it all works". And if she doesn't get it, nobody does!
And if nobody's getting it, then we may as well teach the
traditional way and be done with it.
My other daughter, Mary, has an average intelligence and a reading
disability. Chicago Math has failed her completely,
primarily because it entails a great deal of writing and copying.
Intermediate results are scattered all over the page,
and you have to be an accountant to keep track of everything.
For a girl who is borderline dyslexic,
every scratch of the pen is an opportunity for error.
She needs to multiply and divide using a process that conserves ink,
as though it were liquid gold. Intermediate results should be kept
to a minimum, and the answer should come together just
below or above the problem.
In other words, she needs traditional math!
I have seen Mary struggle mightily, as Chicago Math presented four
awkward algorithms for division. (Yes, they use the word "algorithm".
If you don't know what it means, where does that leave our kids?)
Perhaps the creators of Chicago Math wanted these four methods to
be optional, i.e. select the one that works best
and use it to solve the problem.
However, that is not how Chicago Math is taught in our district.
The student is expected to master each method,
and is tested on each in turn.
By the time the third method fights for territory in my daughter's
brain, she is hopelessly confused.
Furthermore, none of these methods are traditional,
which is exactly what my daughter needs.
After a semester of confusion and frustration I taught my daughter
how to multiply and divide using the traditional method.
The problem is solved in a couple lines,
rather than a page of scattered intermediate results that must be
assembled correctly at the end like a jigsaw puzzle.
I was able to teach her these concepts in one evening.
She quickly learned how to multiply our phone number by a two digit
number, and then we divided our phone number by a two digit number.
The entire problem remained within her visual and mental focus
at all times.
At this point I would like to make a distinction between Chicago Math
and Everyday Math, although many people in the Troy school district
use these terms interchangeably.
Everyday Math consists of story problems that a child can easily
Dividing 47 pieces of candy among three friends, for instance.
Everyday Math is a great idea,
and I don't want to throw out the baby with the bath water.
We should continue to incorporate Everyday Math in our curriculum.
However, there is no point in presenting the above story problem
until the student can divide 3 into 47, almost without thinking.
The process should be automatic, like driving a car.
Unfortunately, story problems are brought in far too early in Chicago
Having seen several confusing division algorithms,
Mary still had no idea how to divide 3 into 47
when the story problems came rolling in.
While she was busy looking for "friendly pairs of numbers",
(division algorithm number 3), she forgot the story completely.
When she finally had an answer, and the book asked what to do with
the remainder, Mary had no clue. It took her so long to do the math,
she forgot all about the candy and the three friends.
All the benefits of Everyday Math were lost,
because the algorithms of Chicago Math got in the way.
As an analogy, you don't teach someone how to read a map and find
their way around an unfamiliar city until they are completely
comfortable driving the car.
The young driver is so busy concentrating on the details of steering
and braking, he hasn't got time to read the road signs or consult the
In fact, the two tasks work against each other,
making it impossible to learn either one.
I have used this analogy with Mary, and I think she understands.
Chicago Math teaches you to drive by opening up the hood,
taking the engine apart, and putting it back together.
You are suppose to understand everything from the thermodynamics of
combustion to the hydrolics of the steering and brakes.
When you want to turn right, you are suppose to manipulate all four
steering rods with your hands and feet, while keeping your eyes,
and your focus, on the road.
For a girl who is borderline dyslexic, and partly ADD,
this is simply impossible. She just wants to drive the car!
So I showed her how to drive the car, in the simplest terms,
and she understands. She can now multiply 4218 by 39, as shown above.
I just wish we could have skipped the year of confusion and
In summary, I believe the awkward algorithms promoted by Chicago Math
are inappropriate for most of our students.
A small percentage of gifted children may grasp the deeper meaning,
the detailed construction of the car's engine, but most will not.
Many children are left behind, and cannot perform the simplest
arithmetic operations that we take for granted.
This is especially true for the kids who are already struggling.
When I was a teen-ager I watched my younger siblings trying to learn
the "New Math" that emerged in the late 60's and early 70's.
I just shook my head in disbelief. I couldn't see the point of it.
Why not teach them the same way I was taught?
As Tom Lehrer quipped in his famous parody,
"The idea is to know what you are doing, rather than to get the right
In the 80's New Math faded away, and I heaved a sigh of relief.
Now we have Chicago Math, and that's even worse!
I guess what goes around comes around.
I encourage our school district to return to traditional mathematics,
and I hope other districts will follow suit.
At the same time, I hope we can retain the valuable aspects of
Chicago Math. The alternate algorithms should be made available
for the few who do not grasp traditional math,
and each child should be allowed to use his favorite method,
any method, to solve the problem, provided he gets the right answer.
Then, when arithmetic can be performed automatically,
bring in the story problems that are associated with Everyday Math.
These are helpful indeed,
provided the child has already mastered the basics.
But please remember, traditional math, as a mechanical process,
must come first. We've been teaching it for centuries,
and despite a few fads in the 1970's and 1990's, there is no better
Karl Dahlke
Post a reply to this message
Post a related public discussion message
Ask Teacher2Teacher a new question
|
{"url":"http://mathforum.org/t2t/discuss/message.taco?thread=963&n=35","timestamp":"2014-04-16T19:57:35Z","content_type":null,"content_length":"14130","record_id":"<urn:uuid:0a88f15f-44af-4c8c-9027-ac99cc851159>","cc-path":"CC-MAIN-2014-15/segments/1398223205375.6/warc/CC-MAIN-20140423032005-00494-ip-10-147-4-33.ec2.internal.warc.gz"}
|
[Numpy-discussion] floating point arithmetic issue
Pauli Virtanen pav@iki...
Fri Jul 30 17:16:29 CDT 2010
Fri, 30 Jul 2010 19:48:45 +0200, Guillaume Chérel wrote:
> Your solution is really good. It's almost exactly what I was doing, but
> shorter and I didn't know about the mgrid thing.
It's a very brute-force solution and probably won't be very fast with
many circles or large grids.
> There is an extra optimization step I perform in my code to avoid the
> computation of the many points of the grid (assuming the circles are
> relatively small) which you know only by their x or y coordinate they're
> out of the circle without computing the distance. And that's where I
> need the modulo operator to compute the left-most and top-most points of
> the grid that are inside the circle.
If your circles are quite small, you probably want to clip the "painting"
to a box not much larger than a single circle:
# untested, something like below
def point_to_index(x, y, pad=0):
return np.clip(200 * rr / (xmax - xmin) + pad, 0, 200), \
np.clip(200 * rr / (ymax - ymin) + pad, 0, 200)
i0, j0 = point_to_index(xx - rr, yy - rr, pad=-2)
i1, j1 = point_to_index(xx + rr, yy + rr, pad=2)
box = np.index_exp[i0:j0,i1:j1]
mask[box] |= (grid_x[box] - xx)**2 + (grid_y[box] - yy)**2 < rr**2
# same as: mask[i0:j0,i1:j1] |= (grid_x[i0:j0,i1:j1] ...
Pauli Virtanen
More information about the NumPy-Discussion mailing list
|
{"url":"http://mail.scipy.org/pipermail/numpy-discussion/2010-July/052029.html","timestamp":"2014-04-18T13:25:47Z","content_type":null,"content_length":"3869","record_id":"<urn:uuid:ee133c52-e624-4126-9dae-ac368482e900>","cc-path":"CC-MAIN-2014-15/segments/1397609533689.29/warc/CC-MAIN-20140416005213-00056-ip-10-147-4-33.ec2.internal.warc.gz"}
|
MAT 007 I News
I'd tell you what the mission of our newsletter is, but instead, I have appropriately placed one of the Editor's Notes of Steve Sculac, the founder of 007 News, below. It's quite self-explanatory.
At this point I'd like to encourage everyone who reads this newsletter and gets an idea, no matter how crazy it might be, to send it to us and, if it's at all possible, we will use it in a future
issue. I hope that this sharing of ideas might inspire someone to something of value.
At this point I'd like to acknowledge Marie Bachtis who is responsible for the existence of this publication. We had a much smaller idea in mind when we first approached her, and she blew it up
totally out of proportion to where now thousands of mathematicians are now exposed to our (shall I call it) work. Thanks Marie!
That's all I have to say for now. I hope you like what you read.
The following six-part problem was devised and answered by Karl Scherer, a computer scientist now living in Auckland, New Zealand. It has not previously been published.
... The other basic question was just as troubling: Should employers base their employment equity on the number of designated group members they were hiring -- intake -- or on the representation of
designated group members in their work force?
The same numbers can be counted in different ways depending on the results you want to achieve. For example, in 1993 at the University of British Columbia, women represented 20.56 per cent of all
faculty. That sounds pretty grim, considering UBC has had an employment equity policy since 1990. But that 20 per cent figure is weighted down by the aging male professiorate, most of whom will reach
retirement age before the end of the decade. In other words, they will be gone by 2000.
When you look more closely at the numbers, you can see that women accounted for 17.15 per cent of tenured appointments at UBC in 1993 and 32.99 per cent of tenure-track jobs. When you add those last
two figures together you get 50.14 per cent. By doing nothing, women will eventually outweigh men as a percentage of total faculty at UBC.
When I first began my study of "fractals", I was studying the properties of the equation z=z^2. The purpose of this was to see what would become of the value when subjected to round-off. Of course it
is obvious that any value of absolute value less than 1 would eventually converge to 0 - expedited by the truncation errors that finite, physical means will impose.
It occurred to me, one day, that perhaps this property would be more interesting when performed using complex numbers. The interactions of the multiplications would be far more interesting. I was
also interested in how adding a constant would perturb the system of errors. So, my now famous equation of z = z^2 + c -- where z and c are elements of the complex plane -- came to be. I decided to
keep z initialized to 0 since, having taken high-school science, thought that only changing one variable would make things less confusing.
Such was not to be the case. When I tried a few cases by hand, the results seemed to be nothing more than random! Some values of c caused the value of z to explode to infinity, others caused the
value of z to converge to a finite number of points, and others still caused the value of z to circulate without end, but without causing the rapid growth of z! In order to separate the bounded
sequences from the unbounded sequences, I decided that producing a graph would be the easiest way of mapping the set. When I tried producing a graph by hand, it seemed that I was getting random
points of convergence and divergence. I then tried producing a graph on a computer, but found the image even more bizarre. IBM somehow took interest in my work without me having to explain what it
was. All I told them was that I was working on "fractals". You may be asking yourself, "where did this guy get a name like 'fractal'"? You may have heard that it means "fractional dimension"
describing the possibility of the generated set being of infinite perimeter but finite area, etc. In fact, I coined the name fractal because it's the word "fractional" spelled erroneously! Get it!?!
I was studying "the errors of fractional values"!
Now, my fame arose, quite accidently, after a nosy physicist at IBM saw my bizarre set on the screen. He exclaimed, "That looks like a feedback system I've been trying to model!". The physicists took
to my "fractals" instantly! They saw it useful for simulating all kinds of phenomena they had measured in the lab. Well, the truth can now be told... The only reason these equations produce anything
that looks like what physicists see in the lab is that physicists are constantly being victimized by round-off error! The physics community is using computer generated round-off error to simulate
their own round-off errors! This is probably why physics has made so few advances after embracing this concept.
Now that there are no more accolades, I will have to say that more physicists should delve into pure mathematics, where the beauty of truth is not marred by round-off errors. I am sorry for allowing
science to run astray for so long, but, I have enjoyed a comfortable life, I was admired by many, I sold a lot of books, and that's the way human nature is...
A lamb, duh!
By Phil Morenz
The game Pong Hau K'i in Volume 2 Number 2 of MAT 007 I is a simple example of a much larger class of games played on finite graphs. Roughly speaking a graph is a set of points (called vertices)
joined by lines (called edges). For what follows we will not need to give formal definitions, but for the interested reader two good introductions to graph theory are [1], [2]. We will only be
concerned with finite simple graphs (finite number of vertices and edges; no edge joins a point to itself, any two points have at most one edge connecting them). Some examples of graphs:
Notice that all of these graphs are finite but #3 is not simple. Notice that graph #5 is in two "pieces". We will only be concerned with connected graphs those with only one "piece". Now for the fun
and games. The simplest game is called cop and robber. Given a finite simple connected graph, the cop starts at one vertex, the robber at another. At his turn each must move along an edge to an
adjacent vertex. They alternate turns. The cop tries to land on the robber (or have the robber land on him!), the robber tries to get away. The robber moves first. Now for the questions! What is the
smallest graph (i.e.: smallest number of vertices) so that a single robber can always evade a single cop, no matter where they start? (Hint: 3 is too small, 4 depends on starting positions, 5 is just
right.) That was easy! A tree is a special kind of graph (#4 is an example) If you don't know the definition, check [1] or [2] or ask a computer scientist. Can you prove that if the graph is a tree
the cop always catches the robber? Also not hard. Now suppose that we have two cops instead of one. Both move simultaneously. Can you construct a graph so that one robber can always evade two cops?
One more question, one that I don't know the answer to, so I'll offer a prize of $1 (one loony!) for the best solution by an undergraduate, time limit, 1 month. The question is, what is the smallest
(ie.: fewest vertices) graph such that one robber can always evade two cops (independent of starting positions)?
1. Bondy and Murty, Graph Theory with Applications, North Holland, 1976.
2. Harary, Graph Theory, Addison-Wesley, 1969.
[Editor's note: In this new feature we will discuss 'paradoxes' of a mathematical nature. We are always on the lookout for strange results; so send that weird idea in!]
This week's topic is the Banach-Tarski paradox. We expect many readers to be familiar with it so we shall only provide a brief explanation. For the uninitiated, please note it will be mind-expanding,
but probably not fatal. [Ed: The 007 and its affiliates will assume no liability for damaged brains, degrees or careers if you read any further.]
The Banach-Tarski paradox states that if you take any object (say, the unit sphere) then it is possible to cut it into finitely many parts (five in fact), and put these pieces back together to get
two copies identical to the original. There is no trickery here. The result is real and has been proven. It is left as an exercise for the interested reader to determine why no one has applied this
theorem to their economic advantage. A simple corollary is that any nice set can be cut into finitely many pieces and reassembled into a desired number (either more or fewer) of identical copies.
The editorial board of the 007 has found two applications of this theorem. Unfortunately it is used to explain, and not to cause events.
We like to play tennis. At least, that's what we call it. We arrive at the appointed hour, and the chaotic dynamics begin. We have discovered that it helps to have lots of tennis balls. The only
useful conjecture we have come up with during these sessions is:
Conjecture: If tennis is played with a sufficiently large number of tennis balls, and if you don't keep score, then it is impossible to have the same number of balls at the end.
It should be noted that even loud fluorescent orange balls can be Banach-Tarskied away. Generally, though, only the usual yellow-green ones Banach-Tarski into existence.
Proof: Meet us Thursday mornings on the court. It will be clear how to apply the paradox.
The second observation involves the formation of copy for the next issue. Invariably articles or other submissions that are supposed to appear, don't, and others (such as this one) appear in their
place. The same can be said about the typographical errors, and even whole paragraphs that are just plain wrong. Mathematics has provided this insight into one of the "weird" forces of nature. The
paradox is one of those driving forces which keeps the world interesting just when everything seems to be settling down into a nice understandable pattern.
As we noted above, the "paradox" has been proven; so it isn't really a paradox at all. The trouble hinges on the fact that the proof merely demonstrates that the partitions exist (isn't the axiom of
choice frustrating?), but gives no recipe for cutting. We propose that the nature of the cuts is such that they are only valid if not observable (in a Quantum Mechanical sense). The interesting thing
about the examples above is that the paradox exerts its influence just when the events in the world get confusing. It seems clear that if there were enough people following the tennis balls (or fewer
balls), that they would not Banach-Tarski. After all, highway 401 is almost as confusing as one of our games, but cars don't Banach-Tarski because there is a driver keeping an eye on each one.
At a local high school, population 1000, all the students are lined up outside the locker room. Initially, all the lockers, numbered sequentially from 1 to 1000, are closed. The first student enters
the room and toggles the state of every locker. (If it's closed then open it, but if it's open, then close it.) The second student enters and toggles those lockers starting at the second locker,
counting by twos. Student three starts at locker three and toggles every third one. Each student does this in turn. Find a general expression specifying exactly which lockers are open at the end of
the process. By Cathy Nangini
Sometimes textbook mathematics does little more than confound you with abstract notions and vague ideas. So I have compiled a list of some basic concepts sure to set anyone straight!
Groups. It is an inherent feature of groups that all its members have the ability to multiply things together. Conservative groups use the standard notion of mathematical multiplication whereby 2 *
3, say, is always 6. Then there are the Revolutionaries, who throw out all convention and embrace more radical platforms by defining multiplication to be anything they so desire. The integers, for
example, could become a Revolutionary Group if they choose to define multiplication as addition instead. This could create a lot of group tension, and perhaps even a group war, as the Radicals pull
away from their real roots. Incidentally, this explains why we never speak of a group of politicians, we say party. Politicians can neither add nor multiply as individuals, let alone as a group.
Sets. The notion of the mathematical set is slightly confusing to the uninitiated topologist. Set theory, however, can be made relatively simple if we consider the following analogy. A set is much
like your bank account. A non-empty set, like an account, must contain something. Bank accounts can be either open or closed, as it is with sets. But here we must be careful, because sets, unlike
your bank account, can be both open and closed at the same time. This is just a mathematical paradox we shouldn't have to worry too much about. We must also remember that a set that contains zero
contains something, but a bank account that contains zero means that you are broke. The other thing is that sets do not charge interest; that is why topologists do not work in banks.
Subspaces. An important concept associated with vector spaces is that of rank. The rank of any space is determined by its dimension; that is, the number of linearly independent elements in the basis
set. For subspaces, and more importantly, subspace messages, the rank of the transmitting party determines the priority of the message. So a Starfleet Officer of rank n, for example, will receive a
subspace transmission of n dimensions, which means it is a very important message indeed. Those of infinite rank, however, are too great to be found on any starship. Hence these people are usually
reserved for the Complex Plane...
• Recursive -- See recursive.
• Obvious -- This word means different things when used by different people. The difference is in the length of time between when someone states an "obvious" fact and when you come to realize the
truth of that fact. For Prof. A., that length of time is nil. For Prof. B., it lasts until the moment right after he walks away. For Prof. C., it lasts until one week after she says it. For Prof.
D., it is until right after your final exam. And for Prof. E., you'll never understand it!
• Q.E.D. -- Question every Detail/Deduction
• Proof -- A well ordered finite set of statements that is supposed to convince your wide-eyed audience (especially students) that you know something about the given proposition. The last statement
is usually "Q.E.D." (and you should!)
• Poof -- is one of:
1. A proof that sneaks up on you and hits you like an uncountable number of bricks; then gets erased off the blackboard before you absorb it.
2. The main point of such a proof.
3. A highly improbable construction (especially non-constructive) which gives rise to such a proof. (The rabbit gets pulled out of the hat.)
4. Something which some students supply when asked to supply a proof, particularly on tests. Such students do not necessarily continue in mathematics.
5. Proof by Intimidation. "You all see this, don't you!?!"
• Theorem --
1. A comment statement in a BASIC program written by a guy named Theo.
2. The statement of a mathematical claim, followed by a proof that supposedly pertains to the claim.
• Lemma -- A bashful theorem.
By Joel Chan
This article appears in the February 1995 issue of Math Horizons, the student magazine of the Mathematical Association of America.
In the late evening hours of April 26, 1994, it was announced that one of the most famous problems in cryptography, RSA-129, had been solved. A group of six hundred volunteers on the Internet led by
Derek Atkins of MIT, Michael Graff of Ohio State University, Arjen Lenstra of MIT, and Paul Leyland of Oxford University carried out a job which required factoring a large number into two primes in
order to crack a secret message and took eight months and over 5000 MIPS years, or approximately 150,000,000,000,000,000 calculations!
RSA-129 is actually a 129-digit number that is used to decrypt (or decode) the secret message by the RSA algorithm. RSA is a public-key cryptosystem and is named after the inventors of the algorithm:
Ron Rivest, Adi Shamir, and Leonard Adleman.
What is a public-key cryptosystem? In the past, messages were encoded by a scheme called a secret-key cryptosystem. The sender and receiver of a message would have the same key (or password) in order
to encrypt and decrypt messages. A problem of this system is that the sender and receiver must agree on this secret key without letting others find out. So in 1976, a new system called public-key
cryptography was invented that took care of this problem. In this system, each person receives a pair of keys, called the public key and the private key. Each person's public key is published but the
private key must be kept secret. This way, a user can encode a message using the intended recipient's public key, but the encoded message can only be decrypted by the recipient's private key.
The RSA algorithm is a simple, yet powerful, cryptosystem. It's simple in the sense that the algorithm can be easily understood by any mathematician. In fact, here's how it works:
First we translate the message into numeric form. For instance, suppose we want to send this message to your math professor: PLEASE DONT FLUNK ME. We can use a simple encoding scheme such as letting
01=A, 02=B, ..., 26=Z, and 00 be a space between words. So the numeric message becomes
1612050119050004151420000612211411001305 = t.
Actually, any text to numeral converter will do, since this is really not part of the RSA algorithm. But this means we have a secret-key cryptosystem in addition to our public-key encryption!
So how are RSA public and private keys generated?
• We take two large primes, p and q, and find their product n=pq . n is called the modulus and an example of n is RSA-129.
• We randomly choose a number, e, less than n, such that (p-1)(q-1) and e are relatively prime, i.e. gcd((p-1)(q-1),e)=1.
• We calculate the multiplicative inverse, d, where ed = 1(mod(p-1)(q-1)). For those of you not familiar with modular arithmetic, the notation a=b(mod m) means that if m is a natural number, then a
and b are integers that leave the same remainder when divided by m. And finally...
• We destroy p, q, and (p-1)(q-1). The public key is the pair (n, e). The private key is d.
Suppose Eric wants to send the secret message t to Darcy, his professor.
• For Eric to encrypt the message, he creates the ciphertext c by calculating the remainder of the division t^e by n, where e is Darcy's public key. Eric sends c to Darcy.
• For Darcy to decrypt the message, she calculates t where t is the remainder of the division c^d by n. She can then translate t into plain text by using the 01=A trick and read the message! I'll
bet Eric will be real surprised when he sees his report card!
Suppose you wanted to crack Darcy's message from Eric but you didn't know Darcy's private key. The obvious way to try to crack the code is by trying to calculate the two prime factors of the public
modulus n. But that is also the beauty of the RSA algorithm. Currently there are no quick ways of factoring large numbers. This also means that RSA depends on the fact that factoring is difficult.
Even though RSA-129 has been solved, a larger number takes exponentially longer to factor. So unless mathematicians are able to find relatively easy ways of factoring large numbers (which is unlikely
in the near future), the security of RSA is safe.
In fact, shortly after RSA was developed in 1977, Rivest, Shamir, and Adleman proposed a challenge to the scientific world to crack an encoded message given the public keys n = RSA-129 and e. In
fact, they asserted that even with the best factoring methods and the fastest computers available at the time, it would take over 40 quadrillion years to solve. But with the exponentially increasing
power of computers and advances in mathematical techniques in factoring large numbers -- a factoring scheme called the quadratic sieve was invented in 1981 by Dr. Carl Pomerance of the University of
Georgia, the method used in factoring RSA-129 -- it would be somewhat surprising that it would only be 17 years later when the message would be cracked.
Well, enough of the theory, and let's get to the gooey stuff: the calculations by the 600 miracle workers.
The encoded message published was c = 9686961375462206147714092225435588290575999112457431987469512093081629822514570
with the public key pair e = 9007 and n = RSA-129 = 1143816257578888676692357799761466120102182967212423625625618429357069352457338
= 3490529510847650949147849619903898133417764638493387843990820577 * 32769132993266709549961988190834461413177642967992942539798288533.
With these two primes, the private key is calculated to be d = 1066986143685780244428687713289201547807099066339378628012262244966310631259117
Upon applying the decryption step, the decoded message becomes t = 200805001301070903002315180419000118050019172105011309190800151919090618010705.
Using the 01=A, 02=B trick, the decoded message reads
The definitions of the last two words are left to the reader as an exercise.
For more information about RSA and cryptography, check out the World Wide Web Virtual Library.
Dear Gwendolyn, I am desperate. I mop floors on weekends at an unnamed university in Ontario. Unfortunately, due to the dwindling funds available for sanitation, the department hasn't been able to
maintain the upkeep of our equipment. I find that the width of my mop is shrinking by half each day due to the wear and tear I put on it. I asked an engineer friend of mine where it will all end. He,
pulling out his calculator, said I'd end up with a point after just thirty days. My problem is that I still have mop up the same area! What happens in a month? What will I do then? Your devoted
reader, Marty the Mopper.
Dear Marty, your problems are manifold. First, you have an engineer for a friend. Please correct the situation. Second, your mop will never shrink to a point in finite time (even nicely). Third, even
if it does become a point, then just follow a space-filling curve when you mop, and charge them for overtime.
Dear Gwendolyn, I really don't what to make of this, but ever since I started taking this algebra course I have been having difficulties doing my math homework. It seems that everytime I sit down to
do my work I feel like my head is going to explode. What is going on? Am I crazy? Sincerely yours, Vector Spaced
Dear Spaced, don't be alarmed. Apparently you have caught a mild case of Dysfunctionism, caused by cerebral-infecting organisms of genus linearum algebratus. It is a condition known only to affect
undergraduates, and it generally targets math students (all of whom, incidentally, have had to take courses in linear algebra). Those who have it often complain of sudden dizziness and confusion when
approaching a mathematics text of any kind, accompanied by an overwhelming sense of doom. The best solution is psychological. By deluding the brain into thinking of anything but math until the last
possible moment seems to be a good strategy. For instance, try covering your textbooks. That way, you will approach your desk unaware of the algebra lying in wait and, if you choose nice pictures,
you will at least have something interesting to look at.
A math class of 20 students got their test papers back. The average was 54.0% (it was an easy test!). The professor tells everyone to adjust their mark according to the following formula:
New mark = min(100, old mark/0.9),
with all marks non-negative integers less than or equal to 100. What are the least upper and greatest lower bounds on the new average?
By Aurora Mendelsohn
Due to an uncanny set of circumstances, I have dated three mathies, well maybe more, say, Pi mathies (What can I say -- I just love those pink ties). I can attribute my experience only to a depraved
joke of God's or to strange mutations in my DNA. At any rate, I have learned much from my experience and I hope that others may benefit from my confusion. Mathies, read this! Hand it out to your
boyfriends and girlfriends, and to your prospective dates. Better yet, simply pin it to a visible portion of your clothing as a warning label.
|
{"url":"http://www.math.toronto.edu/007/007-news.html","timestamp":"2014-04-21T05:47:59Z","content_type":null,"content_length":"38436","record_id":"<urn:uuid:5483bd5e-ce0c-440e-a623-563eacfc62eb>","cc-path":"CC-MAIN-2014-15/segments/1397609539493.17/warc/CC-MAIN-20140416005219-00612-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Sizing Conductors, Part XXXII
With a lot of work from many dedicated individuals, the 2014 edition of the National Electrical Code (NEC) became available at the end of August 2013. The Code is revised every three years, but the
revision cycle has not always been three years. Revision cycles have ranged from one to four years.
The NEC has been in existence for almost 117 years. The first edition of the Code book was published in 1897. The National Fire Protection Association (NFPA) has sponsored the NEC since 1911. The
2014 edition is the 53rd edition of the NEC. Look on the first page for a complete list of all 53 editions. A lot has changed since the first edition.
There were more than 3,500 proposals submitted to revise the 2011 edition to the 2014 edition. The number of proposals is actually down from the last three editions, which averaged more than 4,500
proposals for each of those. In this new edition, there are global changes, new articles, new sections, relocated sections and revisions to existing sections.
Changing the voltage threshold from 600 to 1,000 volts (V) in many locations throughout the Code was one of the global changes. This change was because of voltage levels used in wind generation and
photovoltaic systems.
Four new articles were added: Article 393 Low-Voltage Suspended Ceiling Power Distribution Systems, Article 646 Modular Data Centers, Article 728 Fire-Resistive Cable Systems, and Article 750 Energy
Management Systems.
Section 110.21(B) is a new section that has been added. It contains provisions for field-applied hazard markings, such as caution, warning, or danger signs or labels.
An example of a relocated section is the definition of an effective ground-fault current path. In the 2011 edition, this term was defined in 250.2. This definition was moved to Article 100 because—in
accordance with the scope of Article 100 and with the NEC Style Manual—only those terms that are used in two or more articles are defined in Article 100. Besides being used in Article 250, the term
“effective ground-fault current path” is also used in 404.9(B) and 517.13(A). Expanding the requirements for ground-fault circuit-interrupter (GFCI) protection for personnel and arc-fault
circuit-interrupter protection (AFCI) are examples of revisions to existing sections. See 210.8 for the expanded GFCI requirements and 210.12 for the expanded AFCI requirements.
There is a significant change that pertains to sizing branch-circuit, feeder and service conductors. This change is more of a clarification. The clarification is in 210.19(A)(1) for branch-circuit
conductors, 215.2(A)(1) for feeder conductors and 230.42(A) for service conductors. There was no change to the first sentence in 210.19(A)(1), which states branch-circuit conductors shall have an
ampacity not less than the maximum load to be served. Regardless of anything else, the ampacity of the conductors shall not be less than the load.
Before the 2014 NEC, the second sentence in this section was often misinterpreted. It stated: “where a branch circuit supplies continuous loads or any combination of continuous and noncontinuous
loads, the minimum branch-circuit conductor size, before the application of any adjustment or correction factors, shall have an allowable ampacity not less than the noncontinuous load plus 125
percent of the continuous load.” This appeared to be saying to multiply continuous loads by 125 percent before, or in addition to, the application of any adjustment or correction factors. The revised
wording in the 2014 edition clearly states that these are two separate calculations (see Figure 1).
In accordance with the second sentence of 210.19(A)(1), conductors shall be sized to carry not less than the larger of 210.19(A)(1)(a) or (b). These sections, (a) and (b), contain two calculation
procedures that are to be performed separately. The larger of the two sizes calculated is, therefore, the minimum size conductor.
For example, what size THHN copper conductors are required to supply a branch circuit under the following conditions? The load will be a 39 amperes (A), nonmotor, continuous load. These
branch-circuit conductors will be in a raceway. There will be a total of eight current-carrying conductors and an equipment grounding conductor in this raceway. All the terminations in this branch
circuit are rated 75°C. The maximum ambient temperature will be 40°C.
In accordance with 210.19(A)(1)(a), the minimum size conductors for a branch circuit that supplies continuous loads or any combination of continuous and noncontinuous loads shall have an allowable
ampacity not less than the noncontinuous load plus 125 percent of the continuous load. Since this entire load is continuous, multiply the entire load by 125 percent. The minimum ampacity after
multiplying by 125 percent is 49A (39 × 125% = 48.75 = 49). Although the conductors are rated 90°C, the allowable ampacity shall not exceed the 75°C column because of the terminations [see 110.14(C)
(1)(a)]. An 8 AWG copper conductor, in the 75°C column of Table 310.15(B)(16), has an allowable ampacity of 50A. Based only on the temperature ratings of the terminations and on the load being a
continuous load, the minimum size conductors are 8 AWG copper conductors (see Figure 2).
After performing the first of two calculation procedures in 210.19(A)(1), an 8 AWG conductor is required for the example in Figure 2. The second calculation procedure is used when there are more than
three current-carrying conductors and/or when the ambient temperature is something other than 30°C. In the example, the ambient temperature will be higher than 30°C, and there will be more than three
current-carrying conductors in the raceway. In accordance with 210.19(A)(1)(b), the minimum branch-circuit conductor size shall have an allowable ampacity not less than the maximum load to be served
after the application of any adjustment or correction factors. Use the exact load with this calculation, even if there are continuous loads.
There is more than one way to perform this calculation. One way is to divide the actual load of 39A by the correction and adjustment factors and then select a conductor. But, since there is already a
minimum size that has been selected to satisfy the requirements for continuous loads and for the terminations, check to see if the ampacity of those conductors will equal or exceed the load after
applying correction and adjustment factors. The Table 310.15(B)(16) ampacity for an 8 AWG THHN conductor, in the 90°C column, is 55A.
A good question usually comes up at this point. Since the terminations are only rated 75°C, why was the ampacity of a 90°C conductor selected? In accordance with the last sentence of 110.14(C),
conductors with temperature ratings higher than specified for terminations shall be permitted to be used for ampacity adjustment, correction or both. Although the terminations limit the ampacity to
the 75°C column, it is permissible to use the ampacity in the 90°C column for correction and adjustment. Be careful because—while it is permissible to start with the ampacity in the 90°C column—it is
not permissible to exceed the temperature rating, which, in this example, is the 75°C column. The ambient temperature in this example will be 40°C. The Table 310.15(B)(2)(a) correction factor, in the
90°C column, for an ambient temperature of 40°C is 0.91. The Table 310.15(B)(3)(a) adjustment factor for eight current-carrying conductors in the raceway is 70 percent (or 0.70). After applying the
correction and adjustment factors (which is often referred to as derating), 8 AWG THHN conductors have a maximum ampacity of only 35A (55 × 0.91 × 0.70 = 35). Since the load is 39A, this 8 AWG THHN
conductor will not be permitted because the ampacity is only 35A after derating. Therefore, select the next larger size conductor, and perform the calculation again. The next larger size conductor is
6 AWG THHN. The Table 310.15(B)(16) ampacity for a 6 AWG THHN conductor, in the 90°C column, is 75A. After applying the correction and adjustment factors, 6 AWG THHN conductors have an ampacity of
48A (75 × 0.91 × 0.70 = 47.775 = 48). Although the continuous load is 49A, because of 210.19(A)(1)(b), the conductors are only required to have a rating of the actual load of 39A (see Figure 3).
There is a similar change for sizing feeder conductors and service conductors. See 215.2(A)(1) for feeder conductors and 230.42(A) for service conductors.
Next month’s column continues the discussion of sizing conductors.
THHN (90°C) conductor
Motor marked with a design Letter D
THWN conductor (larger than 1 AWG)
1/0 AWG THHN
conductor
Do not exceed the 75°C for this conductor.
THHN (90°C) conductor
Motor marked with a design Letter D
Comment Count Comments
|
{"url":"http://www.ecmag.com/section/codes-standards/sizing-conductors-part-xxxii","timestamp":"2014-04-23T17:46:34Z","content_type":null,"content_length":"50147","record_id":"<urn:uuid:2868dd23-2418-423b-b5ef-c64a8b06befe>","cc-path":"CC-MAIN-2014-15/segments/1398223203235.2/warc/CC-MAIN-20140423032003-00385-ip-10-147-4-33.ec2.internal.warc.gz"}
|
A Few Extra Points Part II
Board: Macro Economics
Author: yodaorange
Last week I posted about setting realistic investment goals and suggested using long-term market timing to improve performance. Specifically I discussed lowering your allocation to equities when
expected returns are low and vice-versa. The target was to achieve an extra 2% to 6% annual portfolio return over long periods, i.e. decades. Research by Ed Easterling/Crestmont, Robert Shiller,
Andrew Smithers and others suggest long term market timing can add extra “alpha” to the portfolio returns.
The next logical question is: “Yes, market timing is fine and good but what are some other methods of generating extra portfolio returns?”
Originally this post was going to be about one published technique to hopefully generate extra returns in a systematic manner. After reviewing the post it read like “Men are from Venus, women are
from Mars” and might not have been productive at this point in time. What I thought was missing was an understanding of the history and understanding of “alpha” and overall portfolio management/
measurement. Some METARites probably know everything in this post and could have written it themselves. If so, my apologies for wasting your time. This post is written for the METARites that are not
as familiar with the history and what implications it has for portfolio management. To keep this post under 100 pages and not require a degree in math/statistics to understand, I have simplified many
points. For those anal folks like me that want to see the gory details, I have provided links to all of the papers.
The first seminal event in portfolio management was when William Sharpe published “"Capital Asset Prices: A Theory of Market Equilibrium under Conditions of Risk” in 1964. The model that Shape
proposed was latter called Capital Asset Pricing Model aka CAPM. It was and is widely used as the starting point to make financial decisions in many different industries, not just stock portfolio
choices. Sharpe was an operations research professor at the University of Washington when he first submitted the paper for publication in 1962 to the Journal of Finance. It was rejected as being
“irrelevant.” Sharpe had to wait until the editorial staff changed over before the paper was accepted. There is an interesting “pioneers are the ones with arrows in their backs” aspect to this
rejection. Sharpe ended up winning the 1990 Nobel Prize in economics for this single paper! He shared the prize with Harry Markowitz and Merton Miller.
Link to Nobel Prize announcement.
Sharpe’s paper was written from the perspective of making future investment choices. It is an entirely theoretical paper. There is no discussion of how the model works “ex-post” i.e. after the fact.
Nor is there any mention of whether the model applies to individual assets like a single stock or a group of stocks (portfolio). There are several assumptions that I am not going to present here. The
model is:
Expected return = Risk free interest rate + Beta *(Market return-risk free interest rate)
1) Risk free interest rate is generally considered to be the yield on short term US treasuries
2) Market return is generally considered to be the SP 500 for US equity investments
The main point to take away from CAPM is the concept of Beta. There is a mathematical definition of Beta which I will not show. Under conditions we are interested in, Beta is ~ how much the asset
price changes divided by the market return price change. I think this is the widely understood view of what Beta is. Beta is also called “systematic risk” implying that the asset movement directly
correlates to the underlying market movement. For example a Beta of 1.1 means the asset price moves up at 1.10X or 10% greater than the market moves. Sharpe has a discussion about non-systematic
risk, which I am also not going to cover. This formula is the basis for “high risk correlates to high reward and vice versa.” While the model does not perfectly fit individual assets, it has stood
the test of time and is valuable for an understanding of risk/reward. Note that there is no mention of “alpha” in the CAPM model.
Link to original Sharpe CAPM paper.
BOTTOM LINE 1: is that if you knew nothing other than the CAPM model, you could combine the concept with market timing I outlined last week. Instead of altering the portfolio allocation to equities,
you could use higher Beta funds/ETF’s when high market returns are projected. THIS IS NOT A SUGGESTION THAT YOU PLAY RUSSIAN ROULETTE WITH YOUR PORTFOLIO AND BUY THE 2X AND 3X LEVERAGED ETF’S. But
going to an ETF with a Beta of 1.1 to 1.3X is reasonable IMO. Just remember to NOT get addicted to the higher Beta ETF’s and have the discipline to switch to lower Beta ones when the predicted market
return is low. Note that lowering the allocation to equities has essentially the same effect, as lowering the Beta of the specific ETF you are using. (Slight simplification here.)
After Sharpe published the CAPM model, the question was did it work in the real world? The next seminal paper is “The Performance of Mutual Funds in the Period 1945-1964” by Michael Jensen. The paper
was an outgrowth of his Economics PHD at the University of Chicago. Later on, he was a professor at the Harvard Business School. Jensen started with the CAPM model and modified it to make ex-post
(after the fact) measurements of actual portfolios, in this case mutual funds. Jensen introduces the concept of “alpha” and in many places, it is still referred to as “Jensen’s alpha.”
Expected return = Risk free interest rate + Beta *(Market return-risk free interest rate) +
Alpha + error term
1) Risk free interest rate is generally considered to be the yield on short term US treasuries
2) Market return is generally considered to be the SP 500 for US equity investments
3) Alpha is “the average incremental rate of return on the portfolio per unit time which is due solely to the manager’s ability to forecast future security prices.”
4) Error term in simple terms is used to make the model fit the measured data better.
Jensen uses this model to fit the measured results for 115 mutual funds over 20 years. His model fits pretty well after the fact. Beta’s range from .219 to 1.405 with a median of .848. Alpha’s range
from -8.0% to +5.8% with a median of -1.1%, most likely due to fund expenses. To my knowledge this is the first paper that showed actively managed mutual funds underperform in a systematic and
consistent way. Calling John Bogle!
The key points for METARites are that you can accurately model alpha and beta for portfolios of equities. Beta’s are probably more persistent than alphas. I.e., a high risk, high volatility portfolio
is likely to remain high. The converse is also true. For the first time, we see that achieving positive alpha is NOT easy. John Bogle and the indexers definitely have a strong case against active
managers. And it has been that way since at least 1945! Pretty amazing IMO that the active fund managers have been able to keep the public investing all of these years.
Link to Jensen paper.
The final seminal paper is “The Cross-Section of Expected Stock Returns” by Eugene Fama and Ken French published in 1992. This paper introduces the “Fama French three factor model” for the first
time. Long story short, Fama/French add an additional two parameters to the CAPM model. They show this substantially improves the ability to model portfolio returns. Reported numbers are that the
CAPM beta explains about 70% of a portfolios performance. Adding the two additional Fama/French factors improves that to about 90%. The model is:
Expected return = Risk free interest rate + Beta1 *(Market return-risk free interest rate) +
Beta2 * SMB + Beta3 * HML
1) Risk free interest rate is generally considered to be the yield on short term US treasuries
2) Market return is generally considered to be the SP 500 for US equity investments
3) Beta1 is the same as beta in the CAPM model, the systematic risk
4) SMB is short for “Small minus big” This is the difference between the returns on diversified portfolios of small and big stocks.” Small/big is the market capitalization of the stocks.
5) HML is short for “High minus low” is the difference between the returns on diversified portfolios of high and low book value/price stocks,”
6) All three of the Betas are fitted to the model for each portfolio. They are NOT theoretically derived in advance.
Here are a few selected quotes from the Fama/French paper:
“ . . when the portfolios are formed on size along, we observe the familiar strong negative relation between size and average return. . . Average returns fall from 1.64% per month for the smallest ME
portfolio to .90% per month for the largest.”
The more striking evidence is the strong positive relation between average return and book to market equity. Average returns rise from .30% for the lowest BE/ME portfolio to 1.83% for the highest, a
different of 1.53% per month.
In fact, if stock prices are rational, BE/ME, the ratio of book value of a stock to the market’s assessment of its value, should be a direct indicator of the relative prospects of firms. For example,
we expect that high BE/ME firms have low earnings on assets relative to low BE/ME firms.”
This paper says that small cap stocks tend to outperform large cap stocks and high book value/price tend to outperform low book value/price stocks. The paper looked at several other factors to see if
they improve the model fit, but they were discarded in the end. Fama/French settle on these two additional factors as the best ones to add.
Link to Fama/French paper.
Since Fama/French published this paper in 1992, a lot of further research has been done. To the best of my knowledge, I have NOT seen any subsequent papers that substantially claim the Fama/French
model is invalid. William Bernstein, the neurologist turned financial researcher did a short paper on how casual investors could determine the three Beta’s for their own portfolios.
Link to William Bernstein.
Ken French regularly updates all of the data you would ever want to see regarding the model. This data allows enterprising folks to evaluate their own portfolios for the three betas used in the
BOTTOM LINE 2 is that over long periods of time, you can improve performance by “tilting” your portfolio towards lower market cap and higher book value/price stocks. The key phrase is “over long
periods of time” which is generally agreed to be years to decades. Using the Fama/French model is 180 degrees opposite and about a million miles away from ultra short term techniques like high
frequency trading. If you are looking for sure fire ways to out perform over 1 minute, 1 hour, 1 day, 1 week, 1 month and 1 year then you can ignore this post. If you are looking for high probability
ways to out perform over years to decades, the model is promising.
It would be nice if someone published the Fama/French factors for all mutual funds and/or ETF’s. I am not aware of any source of that data. Morningstar could easily do it, but I suspect it is too
complicated for the average, non-METARite investor to comprehend. If you had the Fama/French factors readily available, it would make it easy to compare different ETF’s and choose ones that are
appropriate. Lacking the specific model factors, we are left choosing ETF’s and hoping their charter fits them into the optimal model factors. I.e., small market cap and/or “value” ETF’s.
Obviously there is a lot more that can and needs to be said on selecting these ETF’s. I wanted to set the background so that everyone would have a common starting point as we discuss specific
portfolio strategies/investments. One other important point is that it is possible to achieve the extra few percent of gain WITHOUT having a positive alpha. Stated differently, you do not have to
rely on some hot shot manager du jour to make better stock selections. You can rely on some statistically proven techniques to “tilt” your portfolio for the few extra percent. This is good because it
seems that every time we deify a fund manager like Bill Miller for example, they go out and fall back to earth.
On any approach we take for selecting investments using either CAPM or Jensen’s alpha or Fama/French, we are assuming persistence of these factors. So a high beta fund based on historical data is
assumed to be a high beta fund going forward, etc. Clearly, if the world gets turned upside down, this may not be true. Even more clearly is that an actively managed portfolio can change its colors
over time. A portfolio manager could easily make enough changes to materially impact the model parameters.
My apologies for the length of this post. Congratulations if you made it this far. Next week I plan to post more details on the path of seeking a few extra few percent of return. There are several
more points/approaches that I think are pertinent on this path. Next week’s post will have more actionable investment choices for METARites.
|
{"url":"http://caps.fool.com/Blogs/a-few-extra-points-part-ii/642841","timestamp":"2014-04-17T07:45:26Z","content_type":null,"content_length":"113624","record_id":"<urn:uuid:3a4637f5-13a3-4941-b51e-73c80d787f12>","cc-path":"CC-MAIN-2014-15/segments/1397609526311.33/warc/CC-MAIN-20140416005206-00642-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Got Homework?
Connect with other students for help. It's a free community.
• across
MIT Grad Student
Online now
• laura*
Helped 1,000 students
Online now
• Hero
College Math Guru
Online now
Here's the question you clicked on:
What is the area of the rhombus below? (Picture below)
• one year ago
• one year ago
Your question is ready. Sign up for free to start getting answers.
is replying to Can someone tell me what button the professor is hitting...
• Teamwork 19 Teammate
• Problem Solving 19 Hero
• Engagement 19 Mad Hatter
• You have blocked this person.
• ✔ You're a fan Checking fan status...
Thanks for being so helpful in mathematics. If you are getting quality help, make sure you spread the word about OpenStudy.
This is the testimonial you wrote.
You haven't written a testimonial for Owlfred.
|
{"url":"http://openstudy.com/updates/512bfe70e4b02acc415d9043","timestamp":"2014-04-18T03:56:12Z","content_type":null,"content_length":"40411","record_id":"<urn:uuid:bd9cb0a1-3f77-4699-a694-1407dbaad6bf>","cc-path":"CC-MAIN-2014-15/segments/1397609532480.36/warc/CC-MAIN-20140416005212-00237-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Very Applied Math
“Applied math” has come to mean math that could be applied someday. I use the term “very applied math” to denote that subset of applied mathematics that actually is applied to real problems.
My graduate advisor used to say “Applied mathematics is not a subject classification. It's an attitude.” By that token, I'd say that very applied math is also an attitude, an interest in the grubby
work required to see the math actually used and a willingness to carry it out. This involves not just math but also computing, consulting, managing, marketing, etc.
|
{"url":"http://www.johndcook.com/veryappliedmath.html","timestamp":"2014-04-20T10:47:20Z","content_type":null,"content_length":"2853","record_id":"<urn:uuid:facaefbc-6a05-40f9-b064-125720e08591>","cc-path":"CC-MAIN-2014-15/segments/1398223211700.16/warc/CC-MAIN-20140423032011-00541-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Likelihood Maximization on Phylogenic Trees
Seminar Room 1, Newton Institute
The problem of inferring the phylogenic tree of $n$ extant species from their genome is treated via mainly two approaches, the maximum parsimony and the maximum likelihood approach. While the latter
is thought to produce more meaningful results, it is harder to solve computationally. We show that under a molecular clock assumption, the maximum likelihood model has a natural formulation that can
be approached via branch-and-bound and polynomial optimization. Using this approach, we produce a counterexample to a conjecture relating to the reconstruction of an ancestral genome under the
maximum parsimony and maximum likelihood approaches.
The video for this talk should appear here if JavaScript is enabled.
If it doesn't, something may have gone wrong with our embedded player.
We'll get it fixed as soon as possible.
|
{"url":"http://www.newton.ac.uk/programmes/POP/seminars/2013071910001.html","timestamp":"2014-04-20T01:17:54Z","content_type":null,"content_length":"6289","record_id":"<urn:uuid:6203a050-05eb-4222-8fa6-beea8d62b380>","cc-path":"CC-MAIN-2014-15/segments/1397609537804.4/warc/CC-MAIN-20140416005217-00533-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Adjusted Jump Ball Win Probability: Who Are The Best Jump Ballers in the NBA?
Update (1/05/12 2:10 PM EST): Added Andris Biedrins, Shaquille O'Neal, and Jermaine O'Neal to list.
Yesterday, over at Weak Side Awareness (a great NBA stats blog that you should check out, btw), "wiLQ" (@Exploring_NBA on twitter) posted jump ball data for the last 4 seasons:
New blog post where you can finally find out... who did win and attempt most jump balls in each of last 4 years weaksideawareness.wordpress.com/2012/01/04/jum…
— Mike wiLQ (@Exploring_NBA) January 4, 2012
I thought this was really neat, so I asked him for the raw matchup data, so that I could calculate an "Adjusted Jump" probability or odds ratio. Without going into too much technical detail, I set it
up the way I do my football ratings, except instead of margin of victory, the result of each jump ball is simply a "1" or a "0". For each jump, one player is arbitrarily assigned an indicator of +1
and the other -1. Trust me, it all works out. (Ok, in general, you should be leary of people who say "trust me", so go check this for yourself, if you don't actually trust me.) I then imported the
data into R (my favorite statistical programming language) and ran a multiple logistic regression. There were 552 players in the data set, but I only used a small subset of players (46) who had a
large number of jump ball opportunities last season as factors in the model. My thought is that if all 552 players were used, there would be a lot more noise and less significance.
Here is the function call in R (in case you might be interested in doing similar analysis):
Call: glm(formula = RESULT ~ Brook.Lopez + Dwight.Howard +
Amare.Stoudemire + Marc.Gasol + Al.Jefferson + Spencer.Hawes +
Tim.Duncan + Josh.Smith + Nene.Hilario + Darko.Milicic +
JaVale.McGee + Tyson.Chandler + Andrew.Bogut + Emeka.Okafor +
DeAndre.Jordan + Andrea.Bargnani + Luis.Scola + Robin.Lopez +
Nenad.Krstic + Serge.Ibaka + Joakim.Noah + Kwame.Brown +
David.Lee + DeMarcus.Cousins + Ben.Wallace + Marcus.Camby +
Samuel.Dalembert + Zydrunas.Ilgauskas + Josh.McRoberts +
Andrew.Bynum + Pau.Gasol + LaMarcus.Aldridge + Roy.Hibbert +
Kurt.Thomas + Anderson.Varejao + Nazr.Mohammed +
Marcin.Gortat + Erick.Dampier + Amir.Johnson + Chris.Wilcox +
Channing.Frye + Ryan.Hollins + Chris.Kaman + Andris.Biedrins +
Blake.Griffin + Joel.Anthony, family = binomial(link = "logit"), '
data = jumpballs_R)
The output of such a calculation is the log odds ratio, which I have turned into the odds ratio (ODDS). I also give the "Adjusted Win %" (50% is average, of course). The CODE column denotes whether
the coefficient was statistically significant (more *** are more highly significant). If a player does not have any *, then he is arguably an average jump baller, even if his numbers appear slightly
above or below 50%. Here is the full list:
The results at the top are not super surprising, but it's nice to see a confirmation of what we already believe. Bynum and Howard are the best. I'm a little surprised to see Dalembert that high. And
also Ben Wallace. My guess is that the reasons for success in getting the jump ball have to do with timing and quickness of leaping as much as length and reach. If we turn to the bottom of the list,
I am not at all surprised to see David Lee dwelling there. Most of those must have been from his NYK days, because I can't even recall him jumping a ball in GSW.
Once again, I'd like to thank wiLQ for making these data available - and be sure to check out his blog!
3 thoughts on “Adjusted Jump Ball Win Probability: Who Are The Best Jump Ballers in the NBA?”
1. So (almost) all the below-average jumpers are in the other 500 players? I would have guessed based on opening jump balls alone that more centers would be below average just by going against other
1. That wouldn't be that surprising to me considering this list contains most of the "quality" starting centers over the past few seasons.
|
{"url":"http://www.d3coder.com/thecity/2012/01/05/adjusted-jump-ball-win-probability-who-are-the-best-jump-ballers-in-the-nba/","timestamp":"2014-04-20T08:23:58Z","content_type":null,"content_length":"40295","record_id":"<urn:uuid:d6fafc07-29ec-4348-9380-63deda510560>","cc-path":"CC-MAIN-2014-15/segments/1398223203422.8/warc/CC-MAIN-20140423032003-00074-ip-10-147-4-33.ec2.internal.warc.gz"}
|
[Numpy-discussion] problems with numdifftools
Pauli Virtanen pav@iki...
Tue Oct 26 17:59:43 CDT 2010
Tue, 26 Oct 2010 14:24:39 -0700, Nicolai Heitz wrote:
> > http://mail.scipy.org/mailman/listinfo/scipy-user
> I contacted them already but they didn't responded so far and I was
> forwarded to that list which was supposed to be more appropriated.
I think you are thinking here about some other list -- scipy-user
is the correct place for this discussion (and I don't remember seeing
your mail there).
> 1) Can I make it run/fix it, so that it is also going to work for the SI
> scaling?
Based on a brief look, it seems that uniform scaling will not help you,
as you have two very different length scales in the problem,
1/sqrt(m w^2) >> C
If you go to CM+relative coordinates you might be able to scale them
separately, but that's fiddly and might not work for larger N.
In your problem, things go wrong when the ratio between the
length scales approaches 1e-15 which happens to be the machine epsilon.
This implies that the algorithm runs into some problems caused by the
finite precision of floating-point numbers.
What exactly goes wrong and how to fix it, no idea --- I didn't look into
how Numdifftools is implemented.
> 2) How can I be sure that increasing the number of ions or adding a
> somehow more complicated term to the potential energy is not causing the
> same problems even in natural units?
> 3) In which range is numdifftools working properly.
That depends on the algorithm and the problem. Personally, I wouldn't
trust numerical differentiation if the problem has significantly
different length scales, it is important to capture all of them
accurately, and it is not clear how to scale them to the same size.
Writing ND software that works as expected all the time is probably
not possible even in theory.
Numerical differentiation is not the only game in the town. I'd look
into automatic differentiation (AD) -- there are libraries available
for Python also for that, and it is numerically stable.
has a list of Python libraries. I don't know which of them would be
the best ones, though.
Pauli Virtanen
More information about the NumPy-Discussion mailing list
|
{"url":"http://mail.scipy.org/pipermail/numpy-discussion/2010-October/053564.html","timestamp":"2014-04-20T23:52:51Z","content_type":null,"content_length":"4862","record_id":"<urn:uuid:a9cef3bc-bc42-4f07-8feb-e4c7d9a255cc>","cc-path":"CC-MAIN-2014-15/segments/1397609539337.22/warc/CC-MAIN-20140416005219-00437-ip-10-147-4-33.ec2.internal.warc.gz"}
|
[Numpy-discussion] Objected-oriented SIMD API for Numpy
Pauli Virtanen pav+sp@iki...
Wed Oct 21 03:24:09 CDT 2009
Wed, 21 Oct 2009 16:48:22 +0900, Mathieu Blondel wrote:
> My original idea was to write the code in C with Intel/Alvitec/Neon
> intrinsics and have this code binded to be able to call it from Python.
> So the SIMD code would be compiled already, ready to be called from
> Python. Like you said, there's a risk that the overhead of calling
> Python is bigger than the benefit of using SIMD instructions. If it's
> worth trying out, an experiment can be made with Vector4f to see if it's
> even worth continuing with other types.
The overhead is quickly checked for multiplication with numpy arrays of
varying size, without SSE:
Overhead per iteration (ms): 1.6264549101
Time per array element (ms): 0.000936947636565
Cross-over point: 1735.90801303
import numpy as np
from scipy import optimize
import time
import matplotlib.pyplot as plt
def main():
data = []
for n in np.unique(np.logspace(0, 5, 20).astype(int)):
print n
m = 100
reps = 5
times = []
for rep in xrange(reps):
x = np.zeros((n,), dtype=np.float_)
start = time.time()
for k in xrange(m):
x *= 1.1
end = time.time()
times.append(end - start)
t = min(times)
data.append((n, t))
data = np.array(data)
def model(z):
n, t = data.T
overhead, per_elem = z
return np.log10(t) - np.log10(overhead + per_elem * n)
z, ier = optimize.leastsq(model, [1., 1.])
overhead, per_elem = z
print ""
print "Overhead per iteration (ms):", overhead*1e3
print "Time per array element (ms):", per_elem*1e3
print "Cross-over point: ", overhead/per_elem
n = np.logspace(0, 5, 500)
plt.loglog(data[:,0], data[:,0]/data[:,1], 'x',
plt.loglog(n, n/(overhead + per_elem*n), 'k-',
label=r'fit to $t = a + b n$')
if __name__ == "__main__":
More information about the NumPy-Discussion mailing list
|
{"url":"http://mail.scipy.org/pipermail/numpy-discussion/2009-October/046114.html","timestamp":"2014-04-19T23:20:56Z","content_type":null,"content_length":"4866","record_id":"<urn:uuid:da41aaef-ebee-4a37-b3c5-8e05443ec088>","cc-path":"CC-MAIN-2014-15/segments/1397609537754.12/warc/CC-MAIN-20140416005217-00290-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Algorithmic correspondence and completeness in modal logic
Abstract This thesis takes an algorithmic perspective on the correspondence between modal and hybrid logics on the one hand, and first-order logic on the other. The canonicity of formulae, and by
implication the completeness of logics, is simultaneously treated. Modal formulae define second-order conditions on frames which, in some cases, are equiv- alently reducible to first-order
conditions. Modal formulae for which the latter is possible are called elementary. As is well known, it is algorithmically undecidable whether a given modal formula defines a first-order frame
condition or not. Hence, any attempt at delineating the class of elementary modal formulae by means of a decidable criterium can only consti- tute an approximation of this class. Syntactically
specified such approximations include the classes of Sahlqvist and inductive formulae. The approximations we consider take the form of algorithms. We develop an algorithm called SQEMA, which computes
first-order frame equivalents for modal formulae, by first transforming them into pure formulae in a reversive hybrid language. It is shown that this algorithm subsumes the classes of Sahlqvist and
inductive formulae, and that all formulae on which it succeeds are d-persistent (canonical), and hence axiomatize complete normal modal logics. SQEMA is extended to polyadic languages, and it is
shown that this extension succeeds on all polyadic inductive formulae. The canonicity result is also transferred. SQEMA is next extended to hybrid languages. Persistence results with respect to
discrete general frames are obtained for certain of these extensions. The notion of persistence with respect to strongly descriptive general frames is investigated, and some syntactic sufficient
conditions for such persistence are obtained. SQEMA is adapted to guarantee the persistence with respect to strongly descriptive frames of the hybrid formulae on which it succeeds, and hence the
completeness of the hybrid logics axiomatized with these formulae. New syntactic classes of elementary and canonical hybrid formulae are obtained. Semantic extensions of SQEMA are obtained by
replacing the syntactic criterium of nega- tive/positive polarity, used to determine the applicability of a certain transformation rule, by its semantic correlate—monotonicity. In order to guarantee
the canonicity of the formulae on which the thus extended algorithm succeeds, syntactically correct equivalents for monotone formulae are needed. Different version of Lyndon’s monotonicity theorem,
which guarantee the existence of these equivalents, are proved. Constructive versions of these theorems are also obtained by means of techniques based on bisimulation quantifiers. Via the standard
second-order translation, the modal elementarity problem can be at- tacked with any second-order quantifier elimination algorithm. Our treatment of this ap- proach takes the form of a study of the
DLS-algorithm. We partially characterize the for- mulae on which DLS succeeds in terms of syntactic criteria. It is shown that DLS succeeds in reducing all Sahlqvist and inductive formulae, and that
all modal formulae in a single propositional variable on which it succeeds are canonical.
|
{"url":"http://wiredspace.wits.ac.za/handle/10539/4569","timestamp":"2014-04-17T21:37:07Z","content_type":null,"content_length":"25426","record_id":"<urn:uuid:bef8ddb1-d055-4567-94f1-4d660ee5b2d3>","cc-path":"CC-MAIN-2014-15/segments/1397609532128.44/warc/CC-MAIN-20140416005212-00275-ip-10-147-4-33.ec2.internal.warc.gz"}
|
could anyone recommend some good books on string?
I am a graduated student but not so good at learning. we have learned Quantum Fields Theory (I) and are learning (II)(especially path integral).but to learn string, I still feel difficult to go on.
some name seems so strange, such as light coordination.Planck length,Planck mass,p-brane,reparametrization invariance,and I still couldn't understand what is the meaning of weyl invariance.and etc.
And when reading book,I found my knowledge on group theory is not enough, but to read all of them seems to be not possible.the same as differential geometry.
thanks anyway!
|
{"url":"http://www.physicsforums.com/showthread.php?t=15519","timestamp":"2014-04-20T11:30:55Z","content_type":null,"content_length":"25840","record_id":"<urn:uuid:f87e9499-dfd5-4da9-b18e-259dac6cc43c>","cc-path":"CC-MAIN-2014-15/segments/1398223206147.1/warc/CC-MAIN-20140423032006-00461-ip-10-147-4-33.ec2.internal.warc.gz"}
|
PrimeGrid 12 sub-projects and counting
PrimeGrid: 12 sub-projects all surrounding Prime number discovery.
PrimeGrid's primary goal is to bring the excitement of prime finding to the "everyday" computer user.
3*2^n-1 Search (LLR)
Searching for mega primes of the form 3*2^n-1
Building on work done on primes of the form k*2^n-1 by Wilfred Keller et al:
http://www.prothsearch.net/riesel2.htmlCullen Prime Search (LLR)
Cullen Numbers are positive integers of the form Cn = n * 2^n + 1, where n is also a positive integer greater than zero. It has been shown that almost all Cullen numbers are composite - prime Cullen
Numbers are very rare. Only sixteen Cullen Primes are known to exist and they are when n = 1, 141, 4713, 5795, 6611, 18496, 32292, 32469, 59656, 90825, 262419, 361275, 481899, 1354828, 6328548, and
6679881. It is conjectured but not yet proven that there are an infinite number of Cullen Primes and it is also unknown whether or not n and Cn can be simultaneously prime.
Prime Sierpinski Project (LLR)
LLR support for the Prime Sierpinski Project
Proth Prime Search (LLR)
Proth primes of the form k*2^n+1, for k<1200; n<200K.
Sophie Germain Prime Search (LLR)The Riesel Problem (LLR)Woodall Prime Search (LLR)
Woodall Numbers are positive integers of the form Wn = n * 2^n - 1, where n is also a positive integer greater than zero. It is conjectured that there are infinitely many such primes. The Woodall
numbers Wn are primes for n=2, 3, 6, 30, 75, 81, 115, 123, 249, 362, 384, 462, 512, 751, 822, 5312, 7755, 9531, 12379, 15822, 18885, 22971, 23005, 98726, 143018, 151023, 22971, ... 2013992, 2367906,
3752948 and composite for all other n less than 2,367,906.
Generalized Cullen/Woodall Sieve
A combined sieve in support of the Cullen and Woodal prime searches.
Prime Sierpinski Project (Sieve)
- Suspended for now
Combined sieve support for the Prime Sierpinski Project and the 17 or Bust Project
Proth Prime Search SieveThe Riesel Problem (Sieve)
PrimeGrid currently has the following applications. When you participate in PrimeGrid, work for one or more of these applications will be assigned to your computer. The current version of the
application will be downloaded to your computer. This happens automatically; you don't have to do anything.
PrimeGrid ApplicationsCurrent Starting Point, except for AR HistorianS
Thanks Joe!!!
ARS is currently 16th by total credit:
Rank Name Members Recent Total Country
1 SETI.Germany 200 18,743 5,025,169 Germany
2 BOINC Synergy 194 10,909 4,069,648 International
3 SETI.USA 90 6,459 3,297,730 United States
4 TeAm AnandTech 35 2,462 2,777,962 International
5 BOINCstats 73 8,394 2,334,502 International
6 AMD Users 75 16,658 2,247,816 International
7 L'Alliance Francophone 234 16,300 2,155,764 International
8 BOINC@Heidelberg 49 1,624 1,711,692 Germany
9 PC Perspective Killer Frogs 5 2,863 1,222,207 International
10 Picard 22 6,921 1,094,754 International
11 USA 43 6,168 1,034,167 United States
12 Chicopee 6 4,201 995,819 United States
13 Canada 32 4,100 979,972 Canada
14 Team AlienWare 2 0 948,103 United States
15 Team Art Bell 12 3,430 925,882 International
16 Ars Technica 22 2,503 825,133 International
17 UK BOINC Team 92 4,211 811,502 United Kingdom
18 BOINC.BE 49 1,072 804,251 Belgium
19 Lithuania 12 4,241 659,537 Lithuania
20 The Knights Who Say Ni! 39 4,392 647,200 International
But 19th by Recent average credit:
Rank Name Members Recent Total Country
1 SETI.Germany 200 18,753 5,024,227 Germany
2 AMD Users 75 16,722 2,247,536 International
3 L'Alliance Francophone 234 16,218 2,154,023 International
4 BOINC Synergy 194 10,920 4,069,148 International
5 BOINCstats 73 8,429 2,334,386 International
6 Picard 22 6,960 1,094,710 International
7 SETI.USA 90 6,469 3,297,467 United States
8 USA 43 6,152 1,033,664 United States
9 Free-DC 15 5,370 522,613 International
10 Thylacinus.net 1 4,906 357,154 Spain
11 The Knights Who Say Ni! 39 4,377 646,871 International
12 Lithuania 12 4,246 659,354 Lithuania
13 Chicopee 6 4,207 995,644 United States
14 UK BOINC Team 92 4,119 810,343 United Kingdom
15 Canada 32 4,060 979,310 Canada
16 Team Art Bell 12 3,442 925,864 International
17 PC Perspective Killer Frogs 5 2,868 1,222,102 International
18 BOINC@AUSTRALIA 75 2,651 624,537 Australia
19 Ars Technica 22 2,511 825,072 International
20 TeAm AnandTech 35 2,462 2,777,962 International
I continue to have problems with WUs restarting constantly. Until this problem is solved, I cannot leave a machine on this to just go in a loop.
Originally posted by outlnder:
I continue to have problems with WUs restarting constantly. Until this problem is solved, I cannot leave a machine on this to just go in a loop.
Which applications do you have enabled?
What OS are you running?
If you are running a 64 bit Linux, do you have the 32 bit libraries downloaded? MyCat had to install:
The packages compat-libstdc++-33.x86_64, compat-libstdc++-296.i386, and compat-libstdc++-33.i386 installed the following files:
on his Fedora 7 system.
Both Kubuntu 7.1 64 and Win2K.
Unfortunately, I am not a *nix guy, so I don't know how to fix it.
Just enable LLR (Woodall) on your Unix box*(es) and disable the other applications/subprojects. LLR is statically linked so it should work. I'll see if I can determine the UBUNTU commands to install
the 32 bit libs so you can get back to sieving.
Are you having any trouble with your Windows 2000 boxes?
isn't it the same "apt-get install ia32-libs" as for Folding at Home ( http://forum.folding-community.org/ftopic17223.html ) ?
Originally posted by fractal:
isn't it the same "apt-get install ia32-libs" as for Folding at Home ( http://forum.folding-community.org/ftopic17223.html ) ?
Yes, but you forgot the sudo part of the command to type in (unless you are running as root)
sudo apt-get install ia32-libs
Decided to add what I've found so far:
As root:
Ubuntu and Debian: #apt-get install ia32-libs
Arch Linux: #pacman -Sy lib32-glibc
Fedora: #yum -y install compat-libstdc++-33
Gentoo: emul-linux-x86-sdl or
emul-linux-x86-compat and emul-linux-x86-baselibs
Centos or RHEL 4: compat-db
Still 16th by Total Credit, but dropped to 22nd by Recent Average Credit. Wrong direction folks!
Just found my 8th reportable prime from PG-TPS (also have 6 double checks)!
.... and now it's 9...
I've got all of my x86-BOINC machines on this right now for a little push. It's not a lot, but it's all I've got.
Another application has been added to PrimeGrid:
3*2^n-1 prime search
Building on work done on primes of the form k*2^n-1 by Wilfred Keller et al:
Please be sure to check your preferences to include/exclude this application.
Originally posted by Joe O:
Still 16th by Total Credit, but dropped to 22nd by Recent Average Credit. Wrong direction folks!
What gives? We've dropped to 17th by Total Credit and 26th by Recent Average Credit.
Well, speaking for myself, I've been rotating my x86 BOINC machines. They did a couple of days of RS-BOINC, then a few more days of Rectilinear Crossing Number, and are now going to do some SZTAKI.
I know that project-hopping screws with the averages, but sometimes it's the only thing that keeps me going.
Also, PrimeGrid is adding PSP-LLR. Currently Windows-only, and very few work units, but it should be going full speed soon.
Originally posted by BlisteringSheep:
Well, speaking for myself, I've been rotating my x86 BOINC machines. They did a couple of days of RS-BOINC, then a few more days of Rectilinear Crossing Number, and are now going to do some
I know that project-hopping screws with the averages, but sometimes it's the only thing that keeps me going.
Also, PrimeGrid is adding PSP-LLR. Currently Windows-only, and very few work units, but it should be going full speed soon.
Well having fun is job one!
Rank Name Total credit
1 SETI.Germany 6,441,937
2 BOINC Synergy 4,558,772
3 SETI.USA 3,511,652
4 L'Alliance Francophone 2,867,662
5 TeAm AnandTech 2,850,534
6 BOINCstats 2,685,926
7 AMD Users 2,583,920
8 Picard 2,071,573
9 BOINC@Heidelberg 1,773,917
10 USA 1,390,163
11 PC Perspective Killer Frogs 1,282,745
12 Canada 1,183,017
13 Chicopee 1,176,223
14 UK BOINC Team 1,043,669
15 Team Art Bell 1,027,969
16 Team AlienWare 948,103
17 Ars Technica 938,956
18 BOINC.BE 932,283
19 BOINC@AUSTRALIA 886,081
20 Invaders 877,364
Originally posted by BlisteringSheep:
Also, PrimeGrid is adding PSP-LLR. Currently Windows-only, and very few work units, but it should be going full speed soon.
Still Windows-only, but a lot more work units have been released.
If you want to find a large prime, this is the sub project for you!
PrimeGrid has found another Woodall Prime!
13 938237*2^3752950-1 1129757 L521 2007 Woodall
Yes, this is 13th on the Largest Known Primes List!
This is the largest, non Gimps(Mersenne), non SB known prime.
The finder posted:
"It was only the 8th Woodall I looked at.
You could be next.
18th by Total credit
37th by RAC
Hey Joe, which procs are best at which projects?
If I have Intel Q6600s, which projects should I run?
Originally posted by IronBits:
Hey Joe, which procs are best at which projects?
If I have Intel Q6600s, which projects should I run?
Like the proverbial 900 pound gorilla, any one you want! <G>
If you are running XP 64 or Server 2004 64 then I would run GCW Sieve for the most points.
If you are running Windows 32 bit then my favorites would be
LLR (Cullen) and/or PSP LLR
The first because whoever finds a prime there will join a very elite group. There are only 14 known so far.
The second because I like the sub-project and would like to see some progress on it.
If you are running Linux, then
LLR (3*2^n-1) has a very good chance at a mega-
or LLR (Woodall) to join an elite (only 33 known) group of prime finders. This is where the 13th largest prime was just found. http://primes.utm.edu/top20/page.php?id=7
and PSP LLR because I like the sub-project.
Hmm, well, I've put the 2 sieve projects in Primegrid on my 6 quads and 6 HT machines. It shares the quads with cosmology@home, though.
How much will this help the sieve projects? Is there a lot to sieve?
Originally posted by outlnder:
Hmm, well, I've put the 2 sieve projects in Primegrid on my 6 quads and 6 HT machines. It shares the quads with cosmology@home, though.
How much will this help the sieve projects? Is there a lot to sieve?
It will help a lot. The two sieve projects have a *lot* to sieve.
User and Team stats by sub-project are now available at http://boinc.aqstats.com/primegrid.php
Overtake of AMDUsers in only 999 days!!
For anyone comparing 32 bit v. 64 bit OS's, the 2 sieve projects run much better on a 64 bit OS.
I will be installing Win XP Pro 64 on my quads.
Overtake of AMDUsers in only 206 days!!
Where do the days go?
Overtake of AMDUsers in only 141 days.
All team stats:
Overtake at current "recent credit figures":
On another note, why do we care about passing AMD Users in particular?
Originally posted by Beyond:
On another note, why do we care about passing AMD Users in particular?
Don't really. It just seems to piss them off whenever we target them. Kinda like poking a beehive with a stick.
Originally posted by Beyond:
On another note, why do we care about passing AMD Users in particular?
First, AMDU and Ars have some recent history (<1 year); ie PI Segment.
Explained: we came from nothing but threw such a smack-down - whoops of victory still resound through the halls.
Second, they are the #2 vault team; it's a method of keeping motivation going for the many year journey we ride here.
That we respond is counting coup (and by extension - flattery).
Originally posted by outlnder:
Originally posted by Beyond:
On another note, why do we care about passing AMD Users in particular?
Don't really. It just seems to piss them off whenever we target them. Kinda like poking a beehive with a stick.
Suppose that's as good a reason as any
Good to see that you're crunching like a crazy man,
which sub-projects are you concentrating on?
Originally posted by digital_concepts:
Originally posted by Beyond:
On another note, why do we care about passing AMD Users in particular?
First, AMDU and Ars have some recent history (<1 year); ie PI Segment.
Explained: we came from nothing but threw such a smack-down - whoops of victory still resound through the halls.
Second, they are the #2 vault team; it's a method of keeping motivation going for the many year journey we ride here.
That we respond is counting coup (and by extension - flattery).
I thought the PI Segment thing got to be a bit over the top in the end. Kind of like kicking a wounded puppy.
They are however #2 in DCV, but way way behind in the Math Category.
Originally posted by Beyond:
Good to see that you're crunching like a crazy man,
which sub-projects are you concentrating on?
I'm doing the 2 sieve projects. Figure I'd try to get some factors out of the way.
I thought the PI Segment thing got to be a bit over the top in the end. Kind of like kicking a wounded puppy.
Hey - no dog harming metaphors - proud TDOW member here.
We could do a lot better, we aren't perfect.
But it is a team we been known to receive friendly jibes from too.
Thread hijack over.
Originally posted by digital_concepts:
I thought the PI Segment thing got to be a bit over the top in the end. Kind of like kicking a wounded puppy.
Hey - no dog harming metaphors - proud TDOW member here.
Me too, WOOF! That's why it's so horrendous...
Adding to the derailment...
Poking AMD is nice, but I'd like to turn The Dogs of War loose on Seti.USA. I was always told, "You don't get better by competing against weaker opponents." It may seem like they have us severely
outgunned, but you have to remember that they're nearly all Boinc all the time. I'm not too concerned with the Seti trophy (only speaking for myself), but I'm happy taking them down a peg in any
project we can. Not to mention pointing out that they couldn't Fold their way out of a paper bag.
Overtake of AMDUsers in 93 days.
Overtake of Seti.USA in 159 days.
As you say, being 100% BOINC leaves a lot of projects out of consideration.
What measurement do we have inclusive of non-BOINC?
Combined Boinc
Team Rank RAC Total Credit
SETI.USA 1 1 807.5M
Ars Technica 14 9 208.8M
AMD Users 34 23 120.8M
Free-DC 36 72 118.3M
DC Vault
Team Rank Total Credit
Ars Technica 1 412,359
AMD Users 2 398,092
Free-DC 3 396,357
SETI.USA 17 294,578
Team Rank Boinc Overall
SETI.USA 2 22,554 22,554
Free-DC 4 7,941 20,848
Ars Technica 6 6,965 19,139
AMD Users 7 14,334 18,218
IMO, you guys use puppy/weak too strongly.
Wow d_c, I didn't mean to make you go digging up a bunch of numbers.
Cross-project stats are a murky pool. I think all the ones you listed (and others I have seen) have strengths and weaknesses. I tip my hat to those who create and maintain them. I think the *closest*
1:1 comparison is boinc combined, but there lies plenty of variation even within boinc. If you could somehow correct that, then apply it to *all* projects, that would be awesome.
Anyway, I say we keep crunching, keep growing, keep taking down teams on all fronts, and see what happens. Go PrimeGrid!
|
{"url":"http://arstechnica.com/civis/viewtopic.php?f=18&t=40541&view=unread","timestamp":"2014-04-16T11:22:24Z","content_type":null,"content_length":"101958","record_id":"<urn:uuid:4c7311dc-715d-4cff-9d3d-4d39dfe7f159>","cc-path":"CC-MAIN-2014-15/segments/1397609523265.25/warc/CC-MAIN-20140416005203-00114-ip-10-147-4-33.ec2.internal.warc.gz"}
|
optimisation problem
September 10th 2010, 04:19 AM #1
Junior Member
Sep 2009
assume we havesive species which can be in one of three classes: infant, immuture, or mature. The dynamics of the species is governed by the equation $N_{t+1}$= L $N_{t}$
where $N_{t}$ is a vector of length 3 giving the area occupied by each class, and L is 3*3 matrix which specifies the fecundity and survival of the species in terms of spread.
Give perunit area costs $c_{k}$ of culling individuals of type k, we wish to determine the optimal amount of class k to remove at the end of each year t=1,2-we denote these variables $H_{k,t}$ in
order to minimise the total area occupied by the invasive at the end of year 2.
Assume we have a maximum budget of $\$C in each year
thanks in advance
|
{"url":"http://mathhelpforum.com/advanced-applied-math/155748-optimisation-problem.html","timestamp":"2014-04-16T11:34:27Z","content_type":null,"content_length":"30354","record_id":"<urn:uuid:0e5d1a18-f892-42d3-adb3-172493085e37>","cc-path":"CC-MAIN-2014-15/segments/1397609523265.25/warc/CC-MAIN-20140416005203-00641-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Transportation and Assignment Models in Operations Research - MBA Knowledge Base
Transportation and Assignment Models in Operations Research
Transportation and assignment models are special purpose algorithms of the linear programming. The simplex method of Linear Programming Problems(LPP) proves to be inefficient is certain situations
like determining optimum assignment of jobs to persons, supply of materials from several supply points to several destinations and the like. More effective solution models have been evolved and these
are called assignment and transportation models.
The transportation model is concerned with selecting the routes between supply and demand points in order to minimize costs of transportation subject to constraints of supply at any supply point and
demand at any demand point. Assume a company has 4 manufacturing plants with different capacity levels, and 5 regional distribution centres. 4 x 5 = 20 routes are possible. Given the
transportation costs per load of each of 20 routes between the manufacturing (supply) plants and the regional distribution (demand) centres, and supply and demand constraints, how many loads can be
transported through different routes so as to minimize transportation costs? The answer to this question is obtained easily through the transportation algorithm.
Similarly, how are we to assign different jobs to different persons/machines, given cost of job completion for each pair of job machine/person? The objective is minimizing total cost. This is best
solved through assignment algorithm.
Uses of Transportation and Assignment Models in Decision Making
The broad purposes of Transportation and Assignment models in LPP are just mentioned above. Now we have just enumerated the different situations where we can make use of these models.
Transportation model is used in the following:
• To decide the transportation of new materials from various centres to different manufacturing plants. In the case of multi-plant company this is highly useful.
• To decide the transportation of finished goods from different manufacturing plants to the different distribution centres. For a multi-plant-multi-market company this is useful.
• To decide the transportation of finished goods from different manufacturing plants to the different distribution centres. For a multi-plant-multi-market company this is useful. These two are
the uses of transportation model. The objective is minimizing transportation cost.
Assignment model is used in the following:
• To decide the assignment of jobs to persons/machines, the assignment model is used.
• To decide the route a traveling executive has to adopt (dealing with the order inn which he/she has to visit different places).
• To decide the order in which different activities performed on one and the same facility be taken up.
In the case of transportation model, the supply quantity may be less or more than the demand. Similarly the assignment model, the number of jobs may be equal to, less or more than the number of
machines/persons available. In all these cases the simplex method of LPP can be adopted, but transportation and assignment models are more effective, less time consuming and easier than the LPP.
Bookmark the permalink.
|
{"url":"http://www.mbaknol.com/management-science/transportation-and-assignment-models-in-operations-research/","timestamp":"2014-04-17T09:36:18Z","content_type":null,"content_length":"49378","record_id":"<urn:uuid:ae55abcf-3929-442a-8e62-17f37bfad0da>","cc-path":"CC-MAIN-2014-15/segments/1398223206672.15/warc/CC-MAIN-20140423032006-00166-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Reverse Polish Notation
What is Reverse Polish Notation?
Parentheses, Exponentiation (roots and powers), Multiplication, Division, Addition, Subtraction) to tell you the order in which to do sums. For example, if you are given:
1 + 2 x 3 = ?
you know to multiply 2 by 3 before adding the 1. Early pocket calculators were not acquainted with Aunt Sally, and would evaluate this expression as 9.
Another school-days mnemonic, BODMAS, tells us to attend to the bracketed expressions first, followed by the usual algebraic operator precedence. Brackets (or more properly parentheses) remove any
ambiguity about the order of evaluation of expressions. Most complex expressions could not be expressed in conventional arithmetic notation without brackets, and they are often used in computer
programming even when they are not strictly required, simply to eliminate confusion. For example:
(1+2) x 3 = ?
is guaranteed to evaluate to 9, whatever precedence rules are applied (or if the programmer has forgotten them). The BODMAS rule is applied recursively; brackets can contain other complex expressions
which themselves contain brackets, and so on. But in the early days of electronic calculators these rules proved fiendishly difficult to implement in calculator hardware.
Polish Notation was invented in the 1920's by Polish mathematician Jan Lukasiewicz, who showed that by writing operators in front of their operands, instead of between them, brackets were made
unnecessary. Although Polish Notation was developed for use in the fairly esoteric field of symbolic logic, Lukasiewicz noted that it could also be applied to arithmetic. In the late 1950's the
Australian philosopher and early computer scientist Charles L. Hamblin proposed a scheme in which the operators follow the operands (postfix operators), resulting in the Reverse Polish Notation. This
has the advantage that the operators appear in the order required for computation. RPN was first used in the instruction language used by English Electric computers of the early 1960's. Engineers at
the Hewlett-Packard company realised that RPN could be used to simplify the electronics of their calculators at the expense of a little learning by the user. The first "calculator" to use RPN was the
HP9100A, which was introduced in 1968, although this machine is now regarded by many as the first desktop computer.
Once mastered, RPN allows complex expressions to be entered with relative ease and with a minimum of special symbols. In the 1960's that initial effort would have been regarded as a reasonable
trade-off. For most calculator users of the time, the alternative was the error-prone practice of writing down intermediate results. Using RPN, it is possible to express an arbitrarily complex
calculation without using brackets at all. In RPN the simple example "(1+2) x 3" becomes:
3 2 1 + x
This notation may look strange at first, and clearly if the numbers were entered as shown you would get the number three-hundred and twenty-one! To make this work you need an extra key that tells the
calculator when you have finished entering each number. On most RPN calculators this is called the "Enter" key and fulfills a similar function to an equals key on a conventional calculator but in
reverse. So the example would actually be input as:
3 enter 2 enter 1 + x
This gives the correct answer, 9. If you wanted to work out "1 + 2x3" you would input this as:
1 enter 2 enter 3 x + (answer: 7)
You need to think of entering numbers as being like putting plates into one of those spring loaded plate stacking trolleys you get in canteens. Every time you enter a number, it is pushed onto the
stack. When you eventually start using arithmetic operators, numbers start "popping" off the stack as needed. You can also push more numbers onto the stack. At the end of the calculation you will
have "used up" all the numbers and the stack will be empty.
A calculator using conventional logic will internally convert the expression to the RPN form above. This may be achieved by parsing the bracketed expression before carrying out the calculation. But
it is more likely that the calculator logic will be pushing numbers down onto the stack every time a pair of brackets is opened or is implied by the operator precedence. So in effect an RPN
calculator is offloading this work to the user, resulting in simpler logic design in the calculator. The technical barriers to using conventional bracket notation in an electronic calculator no
longer exist, and yet users of RPN calculators rarely seem to want to move over to the more conventional algebraic logic. Although RPN seems strange to the uninitiated, people who overcome the
initial hurdle find it a powerful and elegant tool which is ultimately easier to use. Luckily for RPN devotees, Hewlett-Packard continues to develop RPN calculators, although these tend to be the
higher-end models, which may also support algebraic logic. And of course Calc98 continues to have RPN as a user configurable option.
|
{"url":"http://www.calculator.org/rpn.aspx","timestamp":"2014-04-20T18:38:40Z","content_type":null,"content_length":"12008","record_id":"<urn:uuid:69a9f871-45a6-44f8-9748-7c39afdb8909>","cc-path":"CC-MAIN-2014-15/segments/1397609539066.13/warc/CC-MAIN-20140416005219-00655-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Become Maths Detectives
Copyright © University of Cambridge. All rights reserved.
'Become Maths Detectives' printed from http://nrich.maths.org/
Have a go at becoming a detective.
Watch the video below:
You can now explore this further. Click on the picture to get started.
When you've explored what you can do with $3 \times 4 - 5$ then it's time to explore further.
You could change just one part of the number plumber, for example the $-5$ bit.
You might try $3 \times 4 - 6$ or $3 \times 4 + 5$ or $3 \times 3 - 5$ and compare the results.
You'll have lots of your own ideas about things to explore too.
Mathematicians like to ask themselves questions about what they notice.
What possible questions could you ask?
These questions may lead you to make conjectures. A conjecture is something which you believe to be true but need to investigate further in order to convince yourself.
|
{"url":"http://nrich.maths.org/6928/index?nomenu=1","timestamp":"2014-04-19T20:24:04Z","content_type":null,"content_length":"4433","record_id":"<urn:uuid:31e878be-c610-4fa0-92d2-2a497324e579>","cc-path":"CC-MAIN-2014-15/segments/1398223202548.14/warc/CC-MAIN-20140423032002-00164-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Open and Closed set topology question
April 15th 2009, 07:06 PM #1
Junior Member
Jan 2008
Open and Closed set topology question
Let be a collection of subset of the set X, satisfying:
i. and X contained in ,
ii. finite unions of elements in are in , and
iii. arbitrary unions of elements in are in
We know that is a topology on X
Give an example of a topology when X = . What are the open sets in this topology?What are the closed sets? What is the basis for the topology?
Last edited by mr fantastic; May 22nd 2009 at 04:10 AM. Reason: Restored question deleted by OP
Follow Math Help Forum on Facebook and Google+
I assume you mean arbitrary intersections because both "finite unions" and "arbitrary unions" conditions at the same time do not make much sense.
Cofinite topology on R
What are the open sets in this topology?What are the closed sets?
Finite point sets are closed in the cofinite topology on $\mathbb{R}$.
What is the basis for the topology?
A subbasis of a cofinite topology on R is the set of complements of singletons in R. You can get the basis for a cofinite topology on R using the subbasis.
Last edited by aliceinwonderland; April 17th 2009 at 03:52 PM.
Follow Math Help Forum on Facebook and Google+
April 17th 2009, 12:42 PM #2
Senior Member
Nov 2008
|
{"url":"http://mathhelpforum.com/differential-geometry/83984-open-closed-set-topology-question.html","timestamp":"2014-04-17T21:44:04Z","content_type":null,"content_length":"37671","record_id":"<urn:uuid:d2aa7b6a-5935-4750-8b08-de6309b25dab>","cc-path":"CC-MAIN-2014-15/segments/1397609532128.44/warc/CC-MAIN-20140416005212-00161-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Experiment 4. GAMMA-RAY ABSORPTION
In this experiment you will measure the transmission of gamma rays through different absorbers. Theoretically, there should be an exponential decrease of the transmitted counts with thickness of
absorber, determined by a mass absorption coefficient. You will determine mass absorption coefficients and evaluate how they vary as a function of gamma energy and of the atomic number of the
Equipment: NaI scintillation detector, amplifier, pulse-height analyzer; gamma sources, sets of absorber plates.
Readings: Interactions of photons with matter, concept of crosssections, sect. 5.2.1 and 5.2.5; Absorption measurements, sect. 5.4.3.
Key concepts: Absorption and scattering crosssections for photons; absorption lengths, mass absorption coefficients.
4.1 Transmitted counts vs. absorber thickness
Absorbers of Al, Cu, Cd and Pb are available in plates that can be stacked to produce a range of thicknesses. Various gamma sources are available, including ^137Cs (662 keV), ^60Co (1.17 and 1.33
MeV) , ^57Co (122 keV), ^22Na (511 keV, 1.27 MeV) , and ^241Am (59.7 keV) may be available. Also, some sources emit x-rays of lower energy, e.g. K x-rays of Ba followind decay of ^137Cs. For various
paired choices of source and absorber, make measurements of transmitted counts as a function of absorber thickness. Choose a range of absorber thicknesses that result in a large change in the number
of transmitted counts and make a series of measurements to help experimentally to determine how the number of transmitted counts decreases with absorber thickness . Measure absorber thicknesses with
a micrometer or vernier caliper.
An important issue is to determine the "background" counting rate that is irrelevant to your measurements. Background radiation may come from cosmic rays and environmental radioactivity, but also,
e.g., from scattering of radiation from your source off of nearby objects into the detector. You need to use judgement to determine best the background counting rate. After subtraction of background
counting rates, plot count rates on semi-log paper versus absorber thickness. Determine absorption lengths and coefficients from slopes using the theory of gamma ray absorption (exponential
absorption law).
Compare your measured coefficients with those obtained from the attached graph of mass absorption coefficients or from the net (e.g. see http://www.csrri.iit.edu/periodic-table.html or other links on
the course home page). Ascertain whether or not your coefficients are consistent with those reported elsewhere. If they are not consistent, try to figure out why and to explain how better values
might be obtained.
4.2 Mass absorption coefficient versus gamma energy
Using one absorber determine absorption coefficients for a wide range of gamma energies. Al, Cu or Cd is recommended over Pb. Plot coefficients versus energy and try to establish a qualitative or
quantitative dependence,
4.3 Mass absorption coefficients vs. absorber atomic number Z
Using one gamma source (preferably ^57Co or ^241Am) determine absorption coefficients for absorbers having a wide range of atomic numbers Z. Plot mass absorption coefficients versus Z and try to
establish a qualitative or quantitative dependence.
Questions and Considerations
a. How can you determine the "true" background counting rate? By interposing a very thick absorber? By moving the source far away? Are there contributions to "background" from the source itself? If
so, could they be reduced through a better experimental design?
b. The total crosssection for removal of photons from the beam, or 'extinction' crosssection, is the sum of crosssections for photoelectric absorption, Compton scattering and pair production. How
might you try to distinguish between these contributions experimentally?
c. Consider the simple pulse-height spectrum of 137Cs, which emits a single gamma ray with an energy of 662 keV. To determine the counting rate, you have the choice of integrating counts over the
entire spectrum or only over the photopeak. What are the tradeoffs in each choice?
Diagram of Apparatus
Mass absorption coefficients for selected elements
Energy units BeV= Billion of electron-volts= GeV. Note that lower curves are extensions of the upper curves to higher energies and that there are different energy scales at top and bottom of the
Copyright Gary S. Collins, 1997-2002..
|
{"url":"http://public.wsu.edu/~collins/Phys415/writeups/gammaabs.htm","timestamp":"2014-04-17T21:24:32Z","content_type":null,"content_length":"5779","record_id":"<urn:uuid:a1dfdb9e-2bd2-4523-aacc-c871db8ba4d2>","cc-path":"CC-MAIN-2014-15/segments/1397609532128.44/warc/CC-MAIN-20140416005212-00338-ip-10-147-4-33.ec2.internal.warc.gz"}
|
\setkeys{Gin}{width=0.9\textwidth} % \setlength{\parindent}{0in}
<>= options(continue = " ") @
\usepackage{amsmath,pstricks,fullpage} \usepackage[authoryear,round]{natbib} \usepackage{hyperref}
%% \parindent 0in % Left justify
\newcommand{\scscst}{\scriptscriptstyle} \newcommand{\scst}{\scriptstyle}
\newcommand{\code}[1]{\texttt{#1}} \newcommand{\Rpackage}[1]{\textsf{#1}} \newcommand{\Rfunction}[1]{\texttt{#1}} \newcommand{\Robject}[1]{\texttt{#1}}
\newcommand{\incfig}[3]{% \begin{figure}[htbp] \begin{center} \includegraphics[width=#2]{#1} \caption{\label{#1}#3} \end{center} \end{figure} }
\title{Synthesis of Microarray Experiments}
\author{Robert Gentleman and Deepayan Sarkar} \date{January 12, 2007}
With many different investigators studying the same disease and with a strong commitment to publish supporting data in the scientific community, there are often many different datasets available for
any given disease. Hence there is interest in finding methods for combining these datasets to provide better and more detailed understanding of the underlying biology. In this tutorial we will
briefly cover $p$-value based approaches to combining multiple studies, and then move on to more general methods which are usually more appropriate for microarray studies.
\section*{Combining $p$-values: an artificial example}
We start with an artificial example to illustrate voting and $p$-value based methods. The following code generates some random data that we can work with: <>=
k <- 7 # number of experiments n <- 10 # number of samples in each group
mu1 <- 0; mu2 <- 1; sigma <- 2.5
x <- matrix(rnorm(k n, mean = mu1, sd = sigma), n, k) y <- matrix(rnorm(k n, mean = mu2, sd = sigma), n, k)
@ Each paired column in \code{x} and \code{y} represent samples from one study. We can perform a $t$-test for the first experiment as follows: <>= t.test(x[, 1], y[, 1]) @
The $p$-values for all \code{k} experiments (along with direction) can be obtained as a vector: <>= pvals <- sapply(1:k, function(i) t.test(x[, i], y[, i])$p.value) pvals.less <- sapply(1:k, function
(i) t.test(x[, i], y[, i], alternative = "less")$p.value) direction <- sapply(1:k, function(i) sign(mean(x[, i]) - mean(y[, i]))) pvals.less direction @ The number of significant votes (out of \Sexpr
{k}) at level $0.05$: <>= sum(pvals.less < 0.05) @ \begin{exc} What do you conclude about the effect? Would it make sense to use \code{pvals} instead of \code{pvals.less}? Repeat the process a few
times (regenerating the random numbers). \end{exc} Let random variables $U_1, \dotsc, U_K$ represent $K$ $p$-values. A new statistic derived from these is \[ U = U_1 \dotsc U_K \] which we expect to
be small (i.e. smaller than it would be by chance) when there is a weak signal. We can perform a consensus test if we knew the null distribution of $U$. Here are a couple of facts from probability
theory: \begin{itemize} \item If $X$ has a $\mathcal{U}(0, 1)$ distribution, the $-\log(X)$ has an exponential distribution with rate parameter $1$. \item If $Y_1, \dotsc, Y_n$ are independent
exponentials with rate $1$, then $\sum Y_i$ has a Gamma distribution with rate $1$ and shape parameter $n$ \end{itemize} Since under the null of no difference each $U_i \sim \mathcal{U}(0, 1)$, $-\
log(U) = \sum -\log(U_i) \sim \mathcal{G}amma(1, k)$. \begin{exc} Compute the consensus $p$-value $P(-\log(U) > -\log(u))$, where $u$ is the product of the observed $p$-values. Repeat with different
parameters. \end{exc} See \code{R} file for solution. In our example, the $p$-value was <>= pgamma(sum(-log(pvals.less)), shape = k, rate = 1, lower.tail = FALSE) @
\section*{Microarray studies}
The usual application of meta-analysis is to analyze a single outcome, or finding, using published data where typically only summary statistics are available. With microarray experiments, we are
often in the more fortuitous situation of having the complete set of primary data available, not just the summary statistics. By phrasing the synthesis in terms of standard statistical models, many
of the recently developed $p$-value adjustment methods for multiple comparisons can be applied directly.
\subsection*{The experimental data}
We will use three data sets as examples. One is a study of breast cancer reported by \cite{West2001} in which 46 patients were assayed and two phenotypic conditions were made public, the estrogen
receptor (ER) status and the lymph node (LN) status. We will refer to this as the Nevins data in the remainder of the text. The samples were arrayed on Affymetrix HuGeneFL GeneChips. ER status was
determined by immunohistochemistry and later by a protein immunoblotting assay. We have used 46 samples, of which 4 gave conflicting evidence of ER status depending on the test used. Lymph node
status was determined at the time of diagnosis. Tumors were reported as negative when no positive lymph nodes were discovered and as positive when at least three identifiably positive lymph nodes
were detected.
A second breast cancer data set was made public by \cite{vantVeer2002} in which tumors from 116 patients were assayed on Hu25K long oligomer arrays. Among other covariates the authors published the
ER status of the tumors. Their criterion was a negative immunohistochemistry staining, a sample was deemed negative if fewer than 10\% of the nuclei showed staining and positive otherwise. We refer
to this as the van't Veer data.
The third experiment is one published by \cite{Holstege2005}, which assayed patients with primary head and neck squamous cell carcinoma using long oligomer arrays. Lymph node status of the
individuals involved was determined by clinical examination followed by computed tomography and/or magnetic resonance imaging. Any nodes that were suspected of having metastatic involvement were
aspirated and a patient was classified as lymph node positive if the aspirate yielded any metastatic tumor cells. We refer to this as the Holstege data.
In our first example, the goal is to combine the two breast cancer data sets that report on the estrogen receptor (ER) status. In the second comparison, we combine the Holstege data and the Nevins
data on the basis of LN status.
Some of the issues that arise in combining experiments can already be seen. For the comparison on the basis of ER status we see that the two used similar, but different methods for assessing ER
status. One might want to revert the Nevins data to the classifications based only on immunohistochemistry staining to increase comparability across the two experiments. This is likely to come at a
loss of sensitivity since one presumes that the ultimate (and in four cases different) classification of samples was the correct one.
For the synthesis of experiments on the basis of lymph node status the situation is even more problematic. One might wonder whether approximately the same effort was expended in determining lymph
node status in the two experiments. The value of any synthesis of experiments will have a substantial dependency on the comparability of the patient classifications. If the classifications of samples
across experiments are quite different then it is unlikely that the outputs will be scientifically relevant.
The data are available in the compendium package \Rpackage{GeneMetaEx}. <>= library("nlme") library("GeneMeta") library("GeneMetaEx")
data("NevinsER") data("VantER") data("NevinsLN") data("HolstegeLN")
## test to make sure that we have matching data stopifnot(all(featureNames(VantER) == featureNames(NevinsER))) stopifnot(all(featureNames(NevinsLN) == featureNames(HolstegeLN)))
@ One problem that must be dealt with when combining experiments is the matching problem. For this data, probes were matched on the basis of GenBank or UniGene identifiers. For the Nevins -- van't
Veer synthesis we have \Sexpr{length(geneNames(NevinsER))} mRNA targets in common, while for the Nevins -- Holstege synthesis there are \Sexpr{length(geneNames(NevinsLN))} common mRNA targets.
\subsection*{Effect size models}
In situations where potentially different scales of measurement have been used it will be necessary to estimate an index of effect magnitude that does not depend on the scaling or units of the
variable used. For two-sample problems the scale-free index that is commonly used is the so-called \textit{effect size}, which is the difference in means divided by the pooled estimate of standard
deviation (note that this is not the $t$-statistic, which would use the standard error of the mean difference). Other measures include the correlation coefficient and the log odds ratio, but we do
not consider them here. \citet{choi} has proposed the use of meta-analytic tools for combining microarray experiments and argued in favor of synthesis on the basis of estimated effects. The \
Rfunction{zScores} function from the \Rpackage{GeneMeta} package can be used to compute various per experiment and combined summaries: <>=
eSER <- list(NevinsER, VantER) eCER <- list(NevinsER$ERstatus, VantER$ERstatus) eCER <- lapply(eCER, function(x) ifelse(x == "pos", 1, 0))
wSFEM <- zScores(eSER, eCER, useREM=FALSE) wSREM <- zScores(eSER, eCER, useREM=TRUE) @ See the help page \code{?zScores} for the meaning of the columns.
<>= t(head(wSFEM, 4)) @ \begin{exc} Are the effect sizes similar in the two studies? Would you expect them to be? Code to produce the plot below is given in the \code{R} file. How do you interpret
this plot? Repeat this process with the lymph node data sets \code{NevinsLN} and \code{HolstegeLN}. Do you get similar results? \end{exc}
\begin{center} \setkeys{Gin}{width=0.7\textwidth}
library("geneplotter") smoothScatter(wSFEM[,"Effect_Ex_1"], wSFEM[,"Effect_Ex_2"], xlab="Nevins", ylab="van't Veer", main = "per gene effect sizes in the two ER studies") abline(0, 1)
\setkeys{Gin}{width=0.9\textwidth} \end{center}
There are two candidate models, with and without random effects for experiments, to combine the effects for each gene. The usual procedure is to first assess which of the two models is appropriate
and to then subsequently fit that model. This determination is often based on Cochran's $Q$ statistic, if the value of this statistic is large then the hypothesis that the per-study measured effects
are homogeneous is rejected and a random effects model is needed. The value returned by the \Rfunction{zScores} function includes the values of $Q$. Below we plot a Q-Q plot comparing these $Q$
values to the reference $\chi^2_1$ distribution: \begin{center} \setkeys{Gin}{width=0.65\textwidth}
library("lattice") plot(qqmath(wSFEM[, "Qvals"], type = c("p", "g"), distribution = function(p) qchisq(p, df = 1), panel = function(...) { panel.abline(0, 1) panel.qqmath(...) })) @
\setkeys{Gin}{width=0.9\textwidth} \end{center} \begin{exc} There appears to be a substantial deviation --- the observed values are too large. Can we conclude that the random effect model (REM) is
required for all genes? Or should we use it only for the ones with a sufficiently low $p$-value? Does it really matter? Reproduce this plot for the LN data. \end{exc}
In the following graphic, we plot the estimated effect sizes using the two methods: \begin{center} \setkeys{Gin}{width=0.7\textwidth}
%% zSco
<>= plot(wSFEM[, "MUvals"], wSREM[, "MUvals"], pch = ".") abline(0, 1, col = "red") @
\setkeys{Gin}{width=0.9\textwidth} \end{center}
\begin{exc} This suggests that the estimated combined effect size is not particularly affected by the choice of model. Is the same true for the standardized $z$-scores? (Hint: replace \code{"MUvals"}
in the code with the relevant column name.) \end{exc}
\section*{Modeling individual observations}
We next consider a formal random effects model for each gene comparison. We note that in general the different genes are not independent and hence a \textit{gene at a time} approach will not be
optimal. However, in the absence of any knowledge about which genes are correlated with which other genes it is not clear how to approach a genuinely multivariate analysis. Here we describe the
gene-at-a-time approach. When the raw data is available, this is the recommended approach as it is easily generalizable to more complex situations.
Following \cite{coxsolomon} we write the model for each gene as: $$\label{eq:mixedeffects} Y_{tjs} = \beta_0 + \beta_t + b_j + \xi_{jt} + \epsilon_{tjs},$$ where $Y_{tjs}$ represents the expression
value for the $s^{\mbox{\scriptsize th}}$ sample in the $j^{\mbox{\scriptsize th}}$ experiment, which is on treatment $t$. Note that we use the term \emph{treatment} interchangeably with what would
be called the disease condition or phenotype in the current application. $\beta_0$ is the overall mean expression, $\beta_t$ is the effect for the $t^{\mbox{\scriptsize th}}$ treatment, $b_j$ is a
random effect characterizing the $j^{\mbox{\scriptsize th}}$ experiment, $\xi_{jt}$ is a random effect characterizing the treatment by experiment interaction. We assume that the $b_j$ have mean zero
and variance $\tau_b$, that the $\xi_{jt}$ have mean zero and variance $\tau_\xi$, and that $\epsilon_{tjs}$ are random variables with mean zero and variance $\tau_\epsilon$ that represent the
internal variability. The \Rfunction{lme} function in the \Rpackage{nlme} package can be used to fit such models.
\subsection*{Constructing per-gene data frames}
Since \Rfunction{lme} is not designed to work with expression sets directly, one needs to construct suitable data frames for each gene. It is useful to write a function that creates such data frames,
e.g. <>=
makeDf <- function(i, expr1, expr2, cov1, cov2, experiment.names = c("1", "2")) { y <- c(expr1[i, ], expr2[i, ]) # i-th gene treatment <- c(as.character(cov1), as.character(cov2)) experiment <- rep
(experiment.names, c(length(cov1), length(cov2))) ans <- data.frame(y = y, treatment = factor(treatment), experiment = factor(experiment)) rownames(ans) <- NULL ans }
makeDf.ER <- function(i) { makeDf(i, exprs(NevinsER), exprs(VantER), NevinsER$ERstatus, VantER$ERstatus, experiment.names = c("Nevins", "Vant")) }
df3 <- makeDf.ER(3) summary(df3)
@ This is not a particularly useful function, as it can only combine two studies at a time. Here is a more general version that we use below. <>=
makeDf2 <- function(i, ...) { args <- list(...) exprlist <- lapply(args, function(x) x$exprs[i, ]) covlist <- lapply(args, function(x) as.character(x$cov)) nsamples <- sapply(covlist, length) #
number of samples experiment.names <- names(args) ans <- data.frame(y = unlist(exprlist), treatment = factor(unlist(covlist)), experiment = factor(rep(experiment.names, nsamples))) rownames(ans) <-
NULL ans }
One point to keep in mind is that random effect models assume similar intrinsic variability ($\tau_{\epsilon}$) in all experiments. If there is no prior reason to believe that this is the case, it is
often useful to scale the data beforehand. This can be done globally or on a per-gene basis. In the latter case, any filtering to leave out genes with low variability needs to be done before this
step. See \code{R} code for examples. <>= scaled.exprs <- function(eset) { x <- exprs(eset) t(scale(t(x))) }
scaled.exprs.2 <- function(eset) { x <- exprs(eset) vx <- as.vector(x) x[] <- scale(vx, center = median(vx), scale = mad(vx)) x } @
<>= NevinsER.info <- list(exprs = scaled.exprs(NevinsER), cov = NevinsER$ERstatus) VantER.info <- list(exprs = scaled.exprs(VantER), cov = VantER$ERstatus) df3 <- makeDf2(3, Nevins = NevinsER.info,
Vant = VantER.info) @
<>= NevinsLN.info <- list(exprs = scaled.exprs(NevinsLN), cov = NevinsLN$LNstatus) HolstegeLN.info <- list(exprs = scaled.exprs(HolstegeLN), cov = HolstegeLN$LNstatus) @
<>= str(df3) @
\subsection*{Testing for interaction}
We fit the model in Equation~(\ref{eq:mixedeffects}) where the treatment effect is a fixed effect and experiment is considered to be a random effect. We also fit a model that includes a treatment by
experiment interaction, and test the hypothesis that no interaction term is needed. There are essentially two ways in which the interaction could be important. In one situation the treatment has an
opposite effect in the two experiments, we can also detect this by simply comparing the estimated effects for each experiment estimated separately. For such probes, or genes, it would not be
appropriate to combine estimates. In the other case, the interaction suggests that the magnitude of the effect is different in one experiment, versus the other. For these probes it may simply be the
case that the model is incorrect. For example, we might be looking for a change in mean abundance while the magnitude of the effect is a function of the abundance, and hence in samples where the
abundance of mRNA transcript is larger a larger effect is observed. In cases where an interaction is absent, the model without an interaction will have more power to detect a treatment effect.
The following code snippet can fit these models, one gene at a time, and store the $p$-value for a likelihood ratio test: <>=
## ngenes <- nrow(exprs(NevinsER)) # takes very long ngenes <- 5
ERpvals <- numeric(ngenes)
for(i in 1:ngenes) { cat(sprintf("\r%5g / %5g", i, ngenes)) dfi <- makeDf2(i, Nevins = NevinsER.info, Vant = VantER.info) fm.null <- lme(y ~ 1 + treatment, data = dfi, random = ~ 1 | experiment,
method = "ML") fm.full <- lme(y ~ 1 + treatment, data = dfi, random = ~ 1 | experiment/treatment, method = "ML") ERpvals[i] <- anova(fm.null, fm.full)[["p-value"]][2] } @ This will take a fairly long
while to run for all genes, but the results are already available (from slightly different fits) in the \Rpackage{GeneMetaEx} package. Next we plot these $p$-values in a Q-Q plot against the uniform
distribution as reference.
\begin{center} \setkeys{Gin}{width=0.7\textwidth}
data("ERpvs") ERpvals <- unlist(eapply(ERpvs, function(x) x[2]))
plot(qqmath(~ ERpvals, outer = TRUE, main = "p-values for interaction effect", xlab = "Quantiles of Uniform", ylab = "Observed Quantiles", distribution = qunif, pch = "."))
\setkeys{Gin}{width=0.9\textwidth} \end{center}
Perhaps the most striking feature in this plot is the very large number of $p$-values close to 1. This is a reflection of the fact that the hypothesis test here is being performed under non-standard
conditions. The test is that the variance of the random effect is zero, and hence is on the boundary of the parameter space. In this case the asymptotics can be delicate \citep{Crainiceanu} and
further study is needed to fully interpret the output.
\subsection*{Combined estimates of difference}
The following code fits the combined mixed effect model as well as models (using \Rfunction{lm}) for the individual experiments. <>=
## ngenes <- nrow(exprs(NevinsER)) # slow ngenes <- 20
ER.meandiff.combined <- data.frame(matrix(NA, nrow = ngenes, ncol = 4)) ER.meandiff.Nevins <- data.frame(matrix(NA, nrow = ngenes, ncol = 4)) ER.meandiff.Vant <- data.frame(matrix(NA, nrow = ngenes,
ncol = 4))
colnames(ER.meandiff.combined) <- colnames(ER.meandiff.Nevins) <- colnames(ER.meandiff.Vant) <- c("Value", "Std.Error", "t.value", "p.value")
for(i in 1:ngenes) { cat(sprintf("\r%5g / %5g", i, ngenes)) dfi <- makeDf2(i, Nevins = NevinsER.info, Vant = VantER.info) fm.lme <- lme(y ~ 1 + treatment, data = dfi, random = ~ 1 | experiment,
method = "ML") fm.Nevins <- lm(y ~ 1 + treatment, data = dfi, subset = (experiment == "Nevins")) fm.Vant <- lm(y ~ 1 + treatment, data = dfi, subset = (experiment == "Vant"))
ER.meandiff.combined[i, ] <- summary(fm.lme)$tTable[2, -3] ER.meandiff.Nevins[i, ] <- summary(fm.Nevins)$coefficients[2, ] ER.meandiff.Vant[i, ] <- summary(fm.Vant)$coefficients[2, ] }
This takes a fairly long time as well (though less than before), and the results are available in a supplied \textsf{R} data file. We use this data to draw a Q-Q plot of the siginificance of the
treatment effects in the three models: \begin{center}
comb.df <- make.groups(Combined = ER.meandiff.combined$p.value, Nevins = ER.meandiff.Nevins$p.value, Vant = ER.meandiff.Vant$p.value)
plot(qqmath(~ data | which, comb.df, pch = ".", distribution = qunif, aspect = "iso"))
We can also compare the estimated $t$-statistics pairwise:
\begin{center} \setkeys{Gin}{width=0.8\textwidth}
<>= cvERB1 <- ER.meandiff.combined$t.value cvERB2 <- ER.meandiff.Nevins$t.value cvERB3 <- ER.meandiff.Vant$t.value
pairs(data.frame(combined = cvERB1, Nevins = cvERB2, Vant = cvERB3), pch = ".")
\setkeys{Gin}{width=0.9\textwidth} \end{center}
Finally, we can study how many new (integration-driven) discoveries were made by looking at genes that were significant in the combined but not the individual studies.
\begin{center} \setkeys{Gin}{width=0.5\textwidth}
pvERB1 <- ER.meandiff.combined$p.value pvERB2 <- ER.meandiff.Nevins$p.value pvERB3 <- ER.meandiff.Vant$p.value
vennDiag <- function(p1, p2, p3, c1, c2, c3, pthresh = 0.01, labels = TRUE) { p <- cbind(p1,p2,p3)0 e <- sign(cbind(c1,c2,c3)[sel, ]) p[sel, ] ordfun <- function(x) { order(rowSums(x matrix(2^(ncol
(x):1), ncol = ncol(x), nrow = nrow(x), byrow = TRUE))) } image(y = 1:nrow(e), x = 1:ncol(e), z = t(e[ordfun(e), ]), ylab = paste(nrow(e), "probes"), xlab="", xaxt="n", col=c
("skyblue","white","firebrick1"), main = "comparison of\nsignificant gene sets") axis(1, at = 1:ncol(e), labels = labels) }
vennDiag(pvERB1, pvERB3, pvERB2, cvERB1, cvERB3, cvERB2, labels=c("C", "V", "N"))
\setkeys{Gin}{width=0.9\textwidth} \end{center}
\begin{exc} How would you interpret these results? Perform the similar analysis for the LN data sets. How do the two analyses compare? \end{exc} For a more complete discussion of these issues, see
the \Rpackage{GeneMetaEx} vignette: <>= openVignette("GeneMetaEx") @ The standard reference for fitting mixed effect models using the \Rpackage{nlme} package is \citet{batespinhiero}. The next
generation software being developed for such models is available in the \Rpackage{lme4} package.
|
{"url":"http://www.bioconductor.org/help/course-materials/2007/biocadv/Labs/Synthesis/synthesis.Rnw","timestamp":"2014-04-21T02:21:15Z","content_type":null,"content_length":"60619","record_id":"<urn:uuid:100d83f4-6124-4073-b5bf-76fc0833a12e>","cc-path":"CC-MAIN-2014-15/segments/1397609539447.23/warc/CC-MAIN-20140416005219-00053-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Ricci Flow on Surfaces of High Genus
I’ve been a bit busy lately, and so I missed last week posting. So what I’ve decided to do is to take the talks I’ve given in various graduate student seminars over the last year or so and convert
them into posts. This one is a particularly tough prospect, as the talk didn’t go very well. I’m following a paper of Hamilton‘s titled “Ricci Flow on Surfaces” and only present the high genus case.
Comments and (especially!) corrections are encouraged.
Let $M$ be a compact surface, $g_{ij}$ a Riemannian metric and $R$ the Ricci curvature of $g_{ij}$. Then the Ricci Flow equation is given by $\frac{\partial}{\partial t}g_{ij}=(r-R)g_{ij}$ where $r$
is the average value of $R$. Ricci Flow can be used to prove many classical result, including the Uniformization Theorem, that every Riemann surface admits a metric of constant curvature.
This is actually slightly different from the standard Ricci flow, it is what is called the volume-preserving Ricci flow, as, unlike the standard, it doesn’t cause Spheres to shrink. To show this, set
$\mu=\sqrt{\det g_{ij}}$, and see $\frac{\partial}{\partial t}\mu=(r-R)\mu$, so if $A$ is total area, $\frac{d}{dt}A=\frac{d}{dt}\int 1d\mu=\int(r-R)d\mu=0$, because $r=\int Rd\mu/\int 1d\mu$. We
also note that the flow is pointwise a multiple of the metric, and so preserves conformal structure.
In face, we can see that $\int Rd\mu=4\pi \chi(M)$ by the Gauss-Bonnet Theorem, and so $r=4\pi \chi(M)/A$.
The curvature is a function of the metric, and so it will itself satisfy some evolution equation. This equation happens to be $\frac{\partial R}{\partial t}=\Delta R+R^2-rR$. (The gist is that for
surfaces $R=R_{1212}$, and we can then take the time derivative, and substitute in the Ricci Flow equation for $\dot{g}$, but doing it out is messy. Consider it an exercise.)
By the Maximum Principle, we have that if $R\geq 0$ at the start, it will remain so for all time. Likewise, if $R\leq 0$, it will remain so. Thus, both positive and negative curvature are preserved
for surfaces.
We will only prove that $R\geq 0$, and the other case follows from a similar argument. We will proceed by contradiction, and assume that at some time, for some point $p$, $R<0$. Let $t_0$ be the
first time such that $R(t_0,p)=0$ and $R(t_0+\epsilon,p)<0$ for some $\epsilon>0$. We define $m(t)=\min_{p\in M}(R(t))$. At $t_0$, we see that $\partial_t R\leq 0$ at $p$, by our assumption. Now, at
the same point and time, we look at $\Delta R+R^2-rR$. $R^2-rR=0$, as $R=0$, and so we look at $\Delta R$. As we are at a minimum with $m(t)$, $\Delta R\geq 0$, and as $\partial_tR=\Delta R+R^2-rR$,
this means we have a contradiction (citing the maximum principle, which gives one of these a strict inequality).
If $R\leq 0$, we can strengthen this, and get that if $-C\leq R\leq -\epsilon<0$ at the start, then it remains so, and $re^{-\epsilon t}\leq r-R\leq C e^{rt}$, so $R$ approaches $r$ exponentially.
To see it, let $R_{\max}$ be the maximum of $R$. Then it satisfies $\frac{d}{dt}R_{\max}\leq R_{\max}(R_{\max}-r)\leq -\epsilon (R_{\max} -r)$ and if $R_{\min}$ is the minimum of $R$, it satisfies $\
frac{d}{dt}R_{\min}\geq R_{\min}(R_{\min}-r)\geq r(R_{\min}-r)$
This implies immediately that on a compact surface, if $R<0$, a solution exists for all time and converges exponentially to a metric of constant negative curvature. For $R\geq 0$, things are harder,
as $R=r$ is a repulsive fixed point of $\frac{dR}{dt}=R^2-rR$. The best this method gives is the following:
If $r>0$ and $R/r\geq c>0$ at the start, then $c<1$ and for all time $\frac{R}{r}\geq\frac{1}{1-(1-\frac{1}{c})e^{rt}}$ and if $r>0$ and $R/r\leq C$ at the start, then $C>1$ and $\frac{R}{r}\leq \
frac{1}{1-(1-\frac{1}{c})e^{rt}}$, at least for $t<\frac{1}{r}\log\frac{c}{c-1}$.
These don’t give good bounds, as the lower bound goes to zero at infinity, and the upper bound goes to infinity in finite time.
If $R>0$ anywhere, we need better methods. We first define the potential $f$ to be the solution to $\Delta f=R-r$ with mean value zero. This equation can always be solved as $R-r$ has mean value
zero, and the solution is unique up to a constant, so $f$ can have mean value zero. Then $f$ satisfies the following equation:
$\frac{\partial f}{\partial t}=\Delta f+rf-b$ where $b=\int |Df|^2d\mu/\int 1d\mu$ with $b$ a constant on the surface and merely relying on time.
As $\Delta f=R-r$, we can get $\Delta \frac{\partial f}{\partial t}=\Delta(\Delta f+rf)$ by differentiating, and so $\frac{\partial f}{\partial t}=\Delta f+rf-b$ for some number $b$ which is only a
function of time. $b$ can be computed from the relation $\int fd\mu=0$.
To make more progress, we will need the function $h=\Delta f+|Df|^2$ and the tensor $M_{ij}=D_iD_j f-\frac{1}{2}\Delta f\cdot g_{ij}$, that is, the trace-free part of the second covariant derivative
of $f$.
We get the following equation for $h$, where $|M_{ij}|^2=M^{ij}M_{ij}$:
$\frac{\partial h}{\partial t}=\Delta h-2|M_{ij}|^2+rh$
If $h\leq C$ at the start, then $h\leq Ce^{rt}$ for all time. This is of value, as $R=h-|Df|^2+r$, so now $R\leq Ce^{rt}+r$, which gives us a bound on $R$ from above which goes to $\infty$ as $t$
increases if $r>0$. We can also get a lower bound, if $r\geq 0$ and the minimum of $R$ is negative, it increases. If $r\leq 0$ and the minimum of $R$ is less than $r$, it increases. This gives us
that for any initial metric, there is a $C$ such that $-C\leq R\leq Ce^{rt}+r$. Thus, the Ricci Flow has solutions for all time for any initial metric. In fact, if $r\leq 0$, then $R$ remains bounded
both above and below such that when $r<0$, $R<0$ for large time. Applying our earlier result for the situation when $R<0$, this gives the following result:
On a compact surface with $r<0$, for any initial metric the solution exists for all time and converges to a metric with constant negative curvature.
And as the Gauss-Bonnet Theorem relates $r$ to the Euler characteristic, we have proved that on a riemann surface with $g\geq 2$ there exists a metric of constant curvature. The other finitely many
cases can be checked by hand (in fact, restricting to compact surfaces, the only things remaining are the sphere, torus, projective plane and klein bottle).
|
{"url":"http://rigtriv.wordpress.com/2007/10/30/ricci-flow-on-surfaces-of-high-genus/","timestamp":"2014-04-19T17:28:05Z","content_type":null,"content_length":"65931","record_id":"<urn:uuid:371110b2-773d-488d-9dc2-b1a020b20820>","cc-path":"CC-MAIN-2014-15/segments/1397609537308.32/warc/CC-MAIN-20140416005217-00606-ip-10-147-4-33.ec2.internal.warc.gz"}
|
proof about complex conjugate of a function
Thanks for the idea!
[tex]g\left( z^{\ast }\right) =\sum _{n=0}^{\infty }\left( z^{\ast }-z_{0}\right) ^{n}[/tex]
This is actually a very specific function. In general, you will have coefficients a
[tex]g\left( z \right) =\sum _{n=0}^{\infty }a_n \left( z -z_{0}\right) ^{n}[/tex]
Depending on whether you look at real or complex analytic functions (also see
this Wikipedia page
and related pages), a
are real or complex numbers and the proof will be more or less straightforward.
[tex]g^{\ast }\left( z^{\ast }\right) = \left( \sum _{n=0}^{\infty }\left( z^{\ast }-z_{0}\right) ^{n}\right) ^{\ast }= \sum _{n=0}^{\infty }\left( z-z_{0}\right) ^{n}= g(z) [/tex]
Is this right?
You are pretty quick jumping from ## \left( \sum _{n=0}^{\infty }\left( z^{\ast }-z_{0}\right) ^{n}\right) ^{\ast }## to ##\sum _{n=0}^{\infty }\left( z-z_{0}\right) ^{n}##. You are trying to prove
whether this equality holds, and it almost seems like you are assuming it now. At the moment, your 'proof' basically says "this theorem is true because it's true.". To convince a mathematician, you
will need some more intermediate steps.
For example: is the conjugate of the sum the sum of the conjugate? If so, you can write
$$\left( \sum _{n=0}^{\infty }\left( z^{\ast }-z_{0}\right) ^{n}\right) ^{\ast } = \sum _{n=0}^{\infty } \left(\left( z^{\ast }-z_{0}\right) ^{n}\right) ^{\ast }$$
Then, how do you get from ##\left( (z^\ast - z_0)^n \right)^\ast## to ##(z - z_0)^n##?
You will find that you actually need some restrictions!
|
{"url":"http://www.physicsforums.com/showthread.php?s=6d80cb3f5caf01dd0b1c96705fafb29d&p=4462504","timestamp":"2014-04-21T04:46:37Z","content_type":null,"content_length":"40943","record_id":"<urn:uuid:0d2e61bc-4b9b-48af-aaa0-37abd85112e0>","cc-path":"CC-MAIN-2014-15/segments/1398223211700.16/warc/CC-MAIN-20140423032011-00326-ip-10-147-4-33.ec2.internal.warc.gz"}
|
How president of India elected
The process of election of president of India is complicated and today I will explain it in very easy language and by examples.
The voters of the election are:-
1. MPs of Lok Sabha
2. MPs of Rajya Sabha
3. MLAs of vidhan sabha
The Value of vote of MPs and each MLA is different.
~ Finding Value of vote of MLAs
The value of vote of MLA depends state by state or we can say MLA of each state has different value of his vote.
Divide the total population of the state by Total seat in Vidhansabha of that state and again divide it by 1000 and the result will be the value of vote of MLA.
In Rajasthan Suppose total population is 5,64,73,122 and Number of MLA are 200.
5,64,73,122/200 = 282365.61
Again divide it by 1000
282365.61/1000 = 282.365
The value will be 282.
~ Finding Value of vote of MPs
Note:- The value of vote of MPs of Rajya Sabha and Lok Sabha are same.
Now to find the value of vote of MPs
- Multiply the value of vote of MLAs in a state with number of MLA in that state(do it for each state)
In rajasthan it will be 282 * 200 = 56400
- Add them
- Divide it by total number of MPs (Lok Sabha + Rajya Sabha)
This will be the value of vote of MPs
The Voter will give the preference to each candidate.
It will look like :-
1. Bhairo singh shekhawat
2. Pratibha Patil
3. Ram Lal
4. Mohan singh
~ Now I will explain how counting is done …..
To win the election a candidate must get (total number of valid votes divided by 2) +1
Suppose candidate get first prefernce votes like:-
1. Bhairo singh shekhawat---- 5,250
2. Pratibha Patil --- ----------------4,800
3. Ram Lal---------------------------- 2,700
4. Mohan singh------- ------------2,250
Total Number of Valid votes = 5250 + 4800 + 2700 + 2250 = 15000
15000/2 = 7500
7500+1= 7501
So a candidate need 7501 votes to win the election, as you can see no one has got that
Now Last candidate will be out of the race and his votes will be distributed between remaining three on the basis of second preference.
Now Mohan singh is out of the race his first preference votes are 2250 now suppose in these 2250 ballot papers the second preference is recorded as :-
Bhairo singh shekhawat - 300
Pratibha Patil - 1050
Ram Lal – 900
These will be transferred and added to the first preferences in favour of 1, 2 and 3 as follows
A . Bhairo singh shekhawat.. 5,250 + 300 = 5,550 B Pratibha Patil.. 4,800 + 1050 = 5,850 C . Ram Lal-. 2,700 + 900 = 3,600 Now in the second count, therefore, C having obtained the last number of
votes is eliminated and 3,600 votes secured by him are once again transferred to A and B in the order of third preferences recorded thereon. Suppose the third preferences on the 3,600 ballot papers
recorded in favour of A and B are 1700 and 1900 respectively the result of this second transfer would then be as under: A Bhairo singh shekhawat. 5,550 + 1,700 = 7,250 B Pratibha Patil 5,850 + 1,900
= 7,750 Now Pratibha patil have votes more than 7501 so she wins. As you can see in first preference Bhairo singh shekhawat was ahead but at last Pratibha patil wins. Note :- The statistics used in
this post is imaginary and its aim is to make people understand the process easily.
53 comments:
1. wow!! beautifully explained..
i visited no. of sites but couldn't understand n finally my search ends here..
good job..
2. @ medha...
Thnx ...
3. its simple and easy to understand. thanx
4. Thanx sushant....
5. thanxxx.sir you explained the process in very very simple language
thanx for it
6. thank u so much .....friend
7. thanks a ton dude!
you've been grt help!
8. Thanks. but You have mentioned 'Add Them'. Adding what? you should have illustrated for value of M.P. votes like the one you did for value of M.L.A. vote.... Further, do you have any idea as to
why this cumbersom arthmatics?...D.C.Sekhar.
9. amazing process,magic
10. Good Explanation...
Thanks ......
11. Clearly and beautifully explained the process of Presidential Election of India. Thanks
12. good one keep it up!
13. realy verry nice...thnx.
14. Thanks.....u have made it so easy..
15. thank you for ur important and valuable sharing
16. Thanks............good job keep it.
17. Thanks............good job keep it.
18. Thanks a lot, very detailed explanation
19. sir, u have explained in detail. Thank u very much.
20. This comment has been removed by the author.
21. Really excellent way to explain a difficult topic in such an easy manner. I was looking for this kind of post for a long time. Thanks a lot and congratulations for a nice post.
22. Keep the good work going!!!
23. The process was intact & simplified, Really appreciate the way of expression in a such a lucid manner.
24. First of all thanks for an attempt to explain the Presidential elections in India and congratulations as u succeeded in doing so.
Please explain How one will get first preference votes for a candidate say 5250, 4800, 2700 & 2250 etc.
25. Thanks all for your appreciation ..
26. its beautiful calcution ,many people in india dnt understand this process,so give a easy way to understand,so many many thanks
AKHAYA KUMAR DASH
27. nice one ,
28. can u tell the process of electing american president in this ,easy to undestand way.
29. really a great work.....thanxxx
30. well explained
31. Perfect explanation.. Thanks you very much.
32. great job.. thanks a ton
33. Thanks everybody for appreciating my efforts..
34. under which rule or process the lowest votes are trasfer to the higher vote obtainers to meet the quota in presidential election .
35. good job..
36. Good Job ...
37. Very good Job ...Thanks..
38. Thank You for the Info which I got from your Blog by Google search
39. Thanks all for your appreciation ..
40. in punjabi .."22 siraa lata"... means did a tremendous work
41. don't u have any fb page?
if its there then msg me its link at mramitgarg@gmail.com
42. Thanks for appreciation ..
No, I dont have any FB page ..
You can follow me on twitter. My handle is Tirchi_nazar
43. This comment has been removed by the author.
44. Everything is already mentioned in http://pib.nic.in/archieve/others/pr.html
45. very very nice friend wish u very bright career. sunil
46. very nice serve them as a legend
47. really lucid explanation, in other words i can say that it is the easiest way to explain the election procedure of the persident of india. thanks buddy.
Manoj kumar singh
48. I m preparing for upsc.Your explanation about single transferable vote is superb .even in my coaching centre they didn't teach me about this clearly as u explained.thanks for this
49. Exuberant would be a small word to reply.. I tried understanding the process of electing the president earlier but I couldn't... Finally bcoz of you, it got imprinted in my mind.. Tx a ton
50. Thanks all for your appreciation. Really honoured ..
51. Y such complications...I know there shall be uniformity in the scale of representation of different states as well as parity between the states as a whole and the union...wat does this mean....ur
exp lucid and saved my time !!
52. still unclear......how 3600 votes of c in second phase is shared between A & B as the third preference of 2700 voters giving 1st preference to C may be D.
please do clear.....
53. amazing it was..great effort.. :)
|
{"url":"http://indiansushant.blogspot.com/2011/01/how-president-of-india-elected.html","timestamp":"2014-04-20T11:32:18Z","content_type":null,"content_length":"202045","record_id":"<urn:uuid:b38d17a6-d9e0-4cd1-ba95-c62ccce160f7>","cc-path":"CC-MAIN-2014-15/segments/1397609538423.10/warc/CC-MAIN-20140416005218-00334-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Functions, Graphs, and Limits
Now we will shake things up a bit. Here's a piecewise-defined function:
What is
If we draw the graph of this function, we see that it looks like the line y = x + 1 except at one point. When x = 1, instead of having y = 2 like we would expect, the point has jumped off the line up
to y = 3.
How does a function like this affect what we know about limits? Imagine we're taking Bruno, the Chinese crested dog, for a walk. We would expect him to stay on the sidewalk. We wouldn't expect him to
suddenly teleport to Middle-earth, then reappear and continue on his path. He may look like Gollum, but still...When talking about limits, we're talking about what we expect the function to be doing.
We assume Bruno is approaching solid ground.
In the example above, because that's what we would expect the value of the function to be if we looked at values of x close to (but not equal to) 1.
We can think of as the value that f(x) gets "close" to as x gets close to 1.
|
{"url":"http://www.shmoop.com/functions-graphs-limits/piecewise-functions-limits.html","timestamp":"2014-04-21T15:01:32Z","content_type":null,"content_length":"29286","record_id":"<urn:uuid:73d2fe65-8584-41a7-bde1-3b1a699af679>","cc-path":"CC-MAIN-2014-15/segments/1398223206120.9/warc/CC-MAIN-20140423032006-00607-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Riegelsville Trigonometry Tutor
...My approach is straight-forward exercising patience to assist each student with self-discovery of both the specific answer to a problem and how to apply the general concept to solve similar
problems.Experienced college chemistry instructor with strong math skills to tutor students in algebra I an...
36 Subjects: including trigonometry, chemistry, reading, physics
...Solve problems involving decimals, percents, and ratios. 4. Solve problems involving exponents. 5. Solve problems involving radicals. 6.
27 Subjects: including trigonometry, calculus, statistics, geometry
...Rev. Lett., Proc. Natl Acad.
13 Subjects: including trigonometry, reading, physics, writing
...I graduated from the Stevens Institute of Technology with a Bachelor of Engineering in Civil Engineering and a Master of Engineering in Structural Engineering. I have a strong knowledge of
physical and mathematical foundations and feel that I can be of help to anyone who needs help in most field...
14 Subjects: including trigonometry, calculus, physics, algebra 1
...I have been working in education since 1988 when I started substitute teaching. I have worked in various districts in the area. I also worked at Sylvan Learning Center for seven years as a
9 Subjects: including trigonometry, geometry, algebra 1, algebra 2
|
{"url":"http://www.purplemath.com/riegelsville_trigonometry_tutors.php","timestamp":"2014-04-18T18:49:41Z","content_type":null,"content_length":"23888","record_id":"<urn:uuid:ec6c6003-64e6-4a6d-a3a5-f3981159a1d3>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.7/warc/CC-MAIN-20140416005215-00495-ip-10-147-4-33.ec2.internal.warc.gz"}
|
removable singularity
removable singularity
Let $U\subset\mathbb{C}$ be an open neighbourhood of a point $a\in\mathbb{C}$. We say that a function $f:U\backslash\{a\}\rightarrow\mathbb{C}$ has a removable singularity at $a$, if the complex
derivative $f^{{\prime}}(z)$ exists for all $zeq a$, and if $f(z)$ is bounded near $a$.
Removable singularities can, as the name suggests, be removed.
Theorem 1.
Suppose that $f:U\backslash\{a\}\rightarrow\mathbb{C}$ has a removable singularity at $a$. Then, $f(z)$ can be holomorphically extended to all of $U$, i.e. there exists a holomorphic $g:U\rightarrow\
mathbb{C}$ such that $g(z)=f(z)$ for all $zeq a$.
Proof. Let $C$ be a circle centered at $a$, oriented counterclockwise, and sufficiently small so that $C$ and its interior are contained in $U$. For $z$ in the interior of $C$, set
$g(z)=\frac{1}{2\pi i}\oint_{C}\frac{f(\zeta)}{\zeta-z}d\zeta.$
Since $C$ is a compact set, the defining limit for the derivative
converges uniformly for $\zeta\in C$. Thanks to the uniform convergence, the order of the derivative and the integral operations can be interchanged. Hence, we may deduce that $g^{{\prime}}(z)$
exists for all $z$ in the interior of $C$. Furthermore, by the Cauchy integral formula we have that $f(z)=g(z)$ for all $zeq a$, and therefore $g(z)$ furnishes us with the desired extension.
Mathematics Subject Classification
no label found
|
{"url":"http://planetmath.org/RemovableSingularity","timestamp":"2014-04-17T06:54:47Z","content_type":null,"content_length":"68806","record_id":"<urn:uuid:76c3cd10-f74a-4509-a02c-142d5eebaffe>","cc-path":"CC-MAIN-2014-15/segments/1398223204388.12/warc/CC-MAIN-20140423032004-00180-ip-10-147-4-33.ec2.internal.warc.gz"}
|
My 8-year-old daughter is doing something about combinations of 3 which equal to her age. Please help.
favours4u asks:
My 8-year-old daughter is doing something about combinations of 3 which equal to her age. Please help.
In Topics: Helping my child with math
> 60 days ago
Subscribing to a question lets you receive e-mails when there are updates. Click here to subscribe and adjust the e-mail frequency.
Hi Favours4u,
Your question is a bit vague so I am going to guess at the answer.
In math, a combination is an arrangement in which order does not matter. In other words, combination can only applies to addition and multiplication but not subtraction and division. Based on this
definition of combination, I think the three combinations that equal your child age of 8 is
Combination 1:
1 + 7 = 8
7 + 1 = 8
Combination 2
2 + 6 = 8
6 + 2 = 8
Combination 3
3 + 5 = 8
5 + 3 = 8
I hope these are the answers.
Hi there,
Thanks for using JustAsk! In addition to the previous answer you received, I also want to suggest that you try editing your answer by adding a little more information. We're better able to assist if
we're clear on what the question is asking.
We can take q= something r=3 and qCr=8
qCr= q!/3!(q-3)!
This sounds a little like the brain teaser about the party, where one guest asks the age of the host's 3 children.
The host tells the guest that the product of his kids ages is 72 and the sum of their ages is the same as the the house number. They know the house number but ask for more information. The host says
that the oldest likes strawberry ice cream. This provided enough information for the guest is able to figure out the ages.
Answer: The oldest child is 8.
First factor is 72. 72= 1*2*2*2*3*3. Combine these factors into 3 ages, you should find 11 combinations of the factor of 72.
By adding all the combinations we find (2 6 6 ) and (3 3 8 ) both add up to 14 so the house number is 14, then they ask for more information when the host says that the oldest likes strawberry ice
cream which tells us that there is a oldest. 2 6 6 - shows that there is a youngest and the elder are the same age, so then we know that the ages of the 3 kids are 3 3 and 8.
I'm not sure that this is what you are looking for and I'm not sure that I even understand this problem, however, it is one that I found as a brain teaser for 8 year old kids and it seems like
something that your daughter may have seen in the classroom.
I've included a link that has this brain teaser along with a few others that you may find interesting.
|
{"url":"http://www.education.com/question/8-year-daughter-combinations-3/","timestamp":"2014-04-21T03:50:06Z","content_type":null,"content_length":"79767","record_id":"<urn:uuid:b759c28b-55c9-4743-b94d-37accc0abfce>","cc-path":"CC-MAIN-2014-15/segments/1397609539447.23/warc/CC-MAIN-20140416005219-00262-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Section: LAPACK routine (version 1.5) (l) Updated: 12 May 1997 Local index Up
PCPBTRS - solve a system of linear equations A(1:N, JA:JA+N-1) * X = B(IB:IB+N-1, 1:NRHS)
UPLO, N, BW, NRHS, A, JA, DESCA, B, IB, DESCB, AF, LAF, WORK, LWORK, INFO )
CHARACTER UPLO
INTEGER BW, IB, INFO, JA, LAF, LWORK, N, NRHS
INTEGER DESCA( * ), DESCB( * )
COMPLEX A( * ), AF( * ), B( * ), WORK( * )
PCPBTRS solves a system of linear equations
where A(1:N, JA:JA+N-1) is the matrix used to produce the factors stored in A(1:N,JA:JA+N-1) and AF by PCPBTRF.
A(1:N, JA:JA+N-1) is an N-by-N complex
banded symmetric positive definite distributed
matrix with bandwidth BW.
Depending on the value of UPLO, A stores either U or L in the equn A(1:N, JA:JA+N-1) = U'*U or L*L' as computed by PCPBTRF.
Routine PCPBTRF MUST be called first.
This document was created by man2html, using the manual pages.
Time: 21:52:08 GMT, April 16, 2011
|
{"url":"http://www.makelinux.net/man/3/P/pcpbtrs","timestamp":"2014-04-21T02:08:26Z","content_type":null,"content_length":"8657","record_id":"<urn:uuid:e6ff8b7a-4086-4a52-97ca-a9ac9a212299>","cc-path":"CC-MAIN-2014-15/segments/1398223203235.2/warc/CC-MAIN-20140423032003-00110-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Post a reply
Air resistance is a tough nut to crack, because it can behave in complex ways. My physics textbook gives only two:
R = bv
where R is the resistive force, a vector quantity directed opposite the direction of motion; v is the velocity of the object; and b is a constant (you know, one of those "constants" that varies in
every situation) that represents the properties of the medium and shape/size of the object.
R = (1/2) DpAv²
where D is the "drag coefficient", p (should be a rho) is the density of air, A is the cross-sectional area of the object, and v is again the velocity. This formula is for objects moving at high
speeds, such as planes, cars, and meteors.
|
{"url":"http://www.mathisfunforum.com/post.php?tid=1864&qid=17557","timestamp":"2014-04-16T13:27:34Z","content_type":null,"content_length":"24811","record_id":"<urn:uuid:4d714b71-7bbe-4776-a9e7-002e632c495e>","cc-path":"CC-MAIN-2014-15/segments/1397609523429.20/warc/CC-MAIN-20140416005203-00170-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Numerics topics
From 6.006 Wiki
For the exam, you should:
• Be able to express a given problem as a set of linear equations and/or a set of least-squares constraints
• Be able to eliminate constraints of the form x[i] = b[i] from a set of constraints.
• Understand how to solve linear equations and least-squares when the coefficient matrix is upper triangular with no zeros on the diagonal.
• Understand the mechanics of a Givens rotation (which elements of A and b are replaced by what linear combinations).
• Understand how a sequence of Givens rotations can zero all the elements of a system below the main diagonal, leaving the matrix upper triangular.
You DO NOT need to be able to:
• Derive the equations for a Givens rotation.
• Understand how to treat singular problems (e.g., a triangular A but with zeros on the diagonal).
• Know anything about numerical stability issues
|
{"url":"http://6.006.scripts.mit.edu/~6.006/fall08/wiki/index.php?title=Numerics_topics","timestamp":"2014-04-18T06:05:07Z","content_type":null,"content_length":"12124","record_id":"<urn:uuid:9e7fad18-6d00-4e2d-9f82-c02cc9087124>","cc-path":"CC-MAIN-2014-15/segments/1397609532573.41/warc/CC-MAIN-20140416005212-00441-ip-10-147-4-33.ec2.internal.warc.gz"}
|
High Obesity levels found among fat-tailed distributions
In my never ending quest to find the perfect measure of tail fatness, I ran across this recent paper by Cooke, Nieboer, and Misiewicz. They created a measure called the “Obesity index.” Here’s how it
• Step 1: Sample four times from a distribution. The sample points should be independent and identically distributed (did your mind just say “IID”?)
• Step 2: Sort the points from lowest to highest (that’s right, order statistics)
• Step 3: Test whether the sum of the smallest and greatest number is larger than the sum of the two middle.
The Obesity index is the probability that the sum of these end points is larger than the sum of the middle numbers. In mathy symbols:
$Ob(X) = P (X_1 + X_4 > X_2 + X_3 | X_1 \leq X_2 \leq X_3 \leq X_4), X_i~IID$
The graph at the top of this post shows how the Obesity index converges for different distributions. As always, I’ve included my R code at the end of this article, so you can run this simulation for
yourself (though, as usual, I forgot to set a random seed so that you can run it exactly like I did).
The dots in the graph represent the mean results from 8, 16, 32, and so on, up to 4096 trials from each of the distributions I tested. Note that each trial involves taking 4 sample points. Confused?
Think of it this way: each sample of 4 points gives us one Bernoulli trial from a single distribution, which returns a 0 or 1. Find the average result after doing 4096 of these trials, and you get
one of the colored dots at the far right of the graph. For example, the red dots are averages from a Uniform distribution. The more trials you do, the closer results from the Uniform will cluster
around 0.5, which is the “true” Obesity value for this distribution. The Uniform distribution is, not coincidentally, symmetric. For symmetric distributions like the Normal, we only consider positive
The graph gives a feel for how many trials would be needed to distinguish between different distributions based on their Obesity index. I’ve done it this way as part of my Grand Master Plan to map
every possible distribution based on how it performs in a variety of tail indices. Apparently the Obesity index can be used to estimate quantiles; I haven’t done this yet.
My initial impressions of this measure (and these are very initial!) are mixed. With a large enough number of trials, it does a good job of ordering distributions in a way that seems intuitively
correct. On the other hand, I’d like to see a greater distance between the Uniform and Beta(0.01, 0.01) distribution, as the latter is an extreme case of small tails.
Note that Obesity is invariant to scaling:
$Ob(x) = Ob(k*X)$
but not to translations:
$Ob(X) eq Ob(X+c)$
This could be a bug or a feature, depending on what you want to use the index for.
Extra special karma points to the first person who comes up with a distribution whose Obesity index is between the Uniform and Normal, and that isn’t a variant of one I already tested.
Here’s the code:
# Code by Matt Asher for StatisticsBlog.com
# Feel free to redistribute, but please keep this notice
# Create random varaibles from the function named in the string
generateFromList = function(n, dist, ...) {
match.fun(paste('r', dist, sep=''))(n, ...)
# Powers of 2 for testAt
testAt = 3:12
testAtSeq = 2^testAt
testsPerLevel = 30
distros = c()
distros[1] = 'generateFromList(4,"norm")'
distros[2] = 'generateFromList(4,"unif")'
distros[3] = 'generateFromList(4,"cauchy")'
distros[4] = 'generateFromList(4,"exp")'
distros[5] = 'generateFromList(4,"chisq",1)'
distros[6] = 'generateFromList(4,"beta",.01,.01)'
distros[7] = 'generateFromList(4,"lnorm")'
distros[8] = 'generateFromList(4,"weibull",1,1)'
# Gotta be a better way to do this.
dWords = c("Normal", "Uniform", "Cauchy", "Exponential", "Chisquare", "Beta", "Lognormal", "Weibull")
plot(0,0,col="white",xlim=c(min(testAt),max(testAt)), ylim=c(-.5,1), xlab="Sample size, expressed in powers of 2", ylab="Obesity index measure", main="Test of tail fatness using Obesity index")
colorList = list()
# Create the legend
for(d in 1:length(distros)) {
x = abs(rnorm(20,min(testAt),.1))
y = rep(-d/16,20)
points(x, y, col=colorList[[d]], pch=20)
text(min(testAt)+.25, y[1], dWords[d], cex=.7, pos=4)
dCounter = 1
for(d in 1:length(distros)) {
for(l in testAtSeq) {
for(i in 1:testsPerLevel) {
count = 0
for(m in 1:l) {
# Get the estimate at that level, plot it testsPerLevel times
x = sort(abs(eval(parse( text=distros[dCounter] ))))
if ( (x[4]+x[1])>(x[2]+x[3]) ) {
count = count + 1
# Tiny bit of scatter added
ratio = count/l
points(log(l, base=2), ( ratio+rnorm(1,0,ratio/100)), col=colorList[[dCounter]], pch=20)
dCounter = dCounter + 1
Tags: kurtosis, obesity, tail probabilities, tails
Hey I just tested with t(0.01) using your code the obesity is .99 seems right dont you think?
A parabolic distr or cosine distr might lie in between uniform and normal with regards to Obesity. Like X ~ f(X) = 0.75(1-x)(x+1), I’ll have a go.
How about you take the average between a uniform and a normal dist?
|
{"url":"http://www.statisticsblog.com/2013/04/high-obesity-levels-found-among-fat-tailed-distributions/","timestamp":"2014-04-17T07:33:21Z","content_type":null,"content_length":"59592","record_id":"<urn:uuid:a1ac5098-7836-4ff3-b332-23ef95817d4b>","cc-path":"CC-MAIN-2014-15/segments/1397609526311.33/warc/CC-MAIN-20140416005206-00472-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Berwyn, PA Calculus Tutor
Find a Berwyn, PA Calculus Tutor
...I will never quote a solution that I can't explain thoroughly. I live in Plymouth Meeting, PA. I like to write in my free time: I write comedy sketches and scripts.
25 Subjects: including calculus, chemistry, physics, writing
...I have a quiet study where we can work, or I will travel to your home, school or local library. Algebra 1 is frequently a stumbling block because it is the first time many students start
looking at math in a different way - using variables, equations, and the like. I have used math professionally for over 20 years.
10 Subjects: including calculus, physics, geometry, algebra 1
...This includes two semesters of elementary calculus, vector and multi-variable calculus, courses in linear algebra, differential equations, analysis, complex variables, number theory, and
non-euclidean geometry. I taught Algebra 2 with a national tutoring chain for five years. I have taught Algebra 2 as a private tutor since 2001.
12 Subjects: including calculus, writing, geometry, algebra 1
Hi! My name is Helen. I have been teaching for over four years now and have some background in private and public tutoring.I earned my M.S. in Computer Science and B.S. in Statistics and minor in
24 Subjects: including calculus, accounting, Chinese, algebra 1
...Currently my 4th grader is doing 5th grade math in school and is completely comfortable with it. My approach is one of measured urgency, which allows me to be a very patient, yet result
oriented and encouraging tutor. This approach allows me to adapt my tutoring to suit the students' needs and level of understanding.
14 Subjects: including calculus, algebra 1, algebra 2, precalculus
Related Berwyn, PA Tutors
Berwyn, PA Accounting Tutors
Berwyn, PA ACT Tutors
Berwyn, PA Algebra Tutors
Berwyn, PA Algebra 2 Tutors
Berwyn, PA Calculus Tutors
Berwyn, PA Geometry Tutors
Berwyn, PA Math Tutors
Berwyn, PA Prealgebra Tutors
Berwyn, PA Precalculus Tutors
Berwyn, PA SAT Tutors
Berwyn, PA SAT Math Tutors
Berwyn, PA Science Tutors
Berwyn, PA Statistics Tutors
Berwyn, PA Trigonometry Tutors
|
{"url":"http://www.purplemath.com/Berwyn_PA_Calculus_tutors.php","timestamp":"2014-04-18T18:57:41Z","content_type":null,"content_length":"23838","record_id":"<urn:uuid:fe4c5ef3-14df-4a25-b814-7712f0af8968>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.7/warc/CC-MAIN-20140416005215-00568-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Classifying continuous characters $X \to \mathbb{Z}_p^*$, $X=\mathbb{Z}_p^*$ or $(1+p\mathbb{Z}_p)^{\times}$ ?
up vote 4 down vote favorite
Question : are the continuous characters of the form
• $\eta : \mathbb{Z}_p^* \to \mathbb{Z}_p^*$, or
• $\eta : (1+p\mathbb{Z}_p)^{\times} \to \mathbb{Z}_p^*$ (i.e., on the principal units in $\mathbb{Z}_p^*$)
well understood? Can such characters be classified in either case ?
I'm hoping to find an analytic classification ; i.e. to describe such characters as functions, or more precisely, how the functions $z\mapsto z^s$ for $s\in\mathcal{O}_{\mathbb{C}_p}$ 'sit' inside
the set of characters $\eta : (1 + p\mathbb{Z}_p)^\times \to \mathbb{Z}_p^*$ (i.e., how 'far' is a generic character from some character of this type ?).
add comment
1 Answer
active oldest votes
Yes, these are very well-understood! Here's what they are. If $p$ is odd then $\mathbf{Z}_p^\times$ is a direct product of $\mu$, the subgroup of $p-1$th roots of unity, and $1+p\mathbf{Z}
_p$, the principal units. A continuous character of the product is a product of continuous characters, so that reduces the first part to the second part. As for the second part, the
principal units are topologically generated by $1+p$ so it suffices to say where $1+p$ should go. Note however that $1+p$ can't go to an arbitrary element of $\mathbf{Z}_p^\times$ because
you need that if $(1+p)^{n_i}$ tends to 1 in $\mathbf{Z}_p$ then $s^{n_i}$ tends to 1 in $\mathbf{Z}_p$, where $s$ is the image of $1+p$. You can check that, for example, $s=-1$ does not
have this property (because the $n_i$ can be even or odd and still tend to zero $p$-adically). But it's also not hard to check that $s$ has this property iff $s$ is a principal unit. I do
this in Lemma 1 of my paper "On p-adic families of automorphic forms" here but this is most certainly standard and not due to me.
up vote 8
down vote So in summary, for $p>2$, characters of the principal units biject with $1+p\mathbf{Z}_p$ non-canonically, the dictionary being "image of $1+p$", and characters of the full unit group
accepted biject with the product of this and the cyclic group of order $p-1$, that being the characters of $\mu_{p-1}$.
For $p=2$ the two questions are the same, and the same trick, appropriately modified, works. The group $1+4\mathbf{Z}_2$ is procyclic, generated by 5, and its characters biject with the
principal units, the dictionary being "image of 5". For the full unit group the characters biject with the principal units product +-1, because $\mathbf{Z}_2^\times$ is just a product $\
pm1\times (1+4\mathbf{Z}_2)$.
Maybe it's a bit vague (and perhaps I should have said so to begin with -- I'll edit it in), but I'm hoping to be able to classify the characters analytically. It would be nice to
really describe them as functions, or more precisely, how the functions $z\mapsto z^s$ for $s\in\mathcal{O}_{\mathbb{C}_p}$ 'sit' inside the set of characters $\eta : (1 + p\mathbb{Z}
_p)^\times \to \mathbb{Z}_p^*$ (i.e., how 'far' is a generic character from some character of this type ?). – xuros Apr 27 '10 at 11:37
Again this is easy. What did you try? Hint: if p>2 then log and exp give topological isomorphisms of topological groups 1+pZ_p = pZ_p. – Kevin Buzzard Apr 27 '10 at 12:36
A warning is worth giving if you're going to use the ``$z^s$ for $s \in \mathcal{O}_{\mathbf{C}_p}$'' description. If $s \notin \mathbf{Z}_p$ then $\chi(z)=z^s$ takes values in $\
1 mathcal{O}_{\mathbf{C}_p}^\times$, not necessarily in $\mathbf{Z}_p^\times$, so you might consider this larger class of characters. And for these more general characters $\chi$, then $\
chi$ admits an "exponential" description ($\chi(z)=z^s$) if and only if $|\chi(1+p)-1|_p \leq p^{-1/(p-1)}$. This is related to the $p$-adic radius of convergence of the exponential
power series $\exp(X)$ in the last comment. – Jay Pottharst Apr 27 '10 at 14:09
add comment
Not the answer you're looking for? Browse other questions tagged p-adic-analysis or ask your own question.
|
{"url":"http://mathoverflow.net/questions/22702/classifying-continuous-characters-x-to-mathbbz-p-x-mathbbz-p-or","timestamp":"2014-04-21T16:01:12Z","content_type":null,"content_length":"54545","record_id":"<urn:uuid:2f69ddb2-13e4-40cc-ab02-1ab2a3845d69>","cc-path":"CC-MAIN-2014-15/segments/1397609540626.47/warc/CC-MAIN-20140416005220-00412-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Chemistry Question
April 23rd 2010, 11:25 PM
A Beautiful Mind
Chemistry Question
Question: In particle accelerators, protons can be accelerated to speeds near that of light. Estimate the wavelength in nm of such a proton moving at $2.40 * 10^8$ m/s. (mass of proton = $1.673 *
Here's the work I've done:
It says to use deBrogile's equation:
$\lambda = \frac{h}{mu}$
h stands for Planck's constant = $6.63*10^{-34} {kgm^2}{s}$
m for the mass = $1.673*10^{-27}$ kg
u for the speed = $2.40*10^8$ m/s
Plugging in...
$\frac {6.63*10^{-34} kgm^2}{(2.40*10^8 m/s)(1.673*10^{-27}kg})$
= $1.65*10^{-15}$ m
Conversion factors for nm:
$\frac{1m}{1*10^{-9}nm} = \frac{1*10^{-9}nm}{1m}$
And then I get wrong answers all over the place. I keep constantly getting new wrong answers and even on Yahoo they're giving me wrong answers. Totally don't know what I'm doing wrong.
April 24th 2010, 12:36 AM
Question: In particle accelerators, protons can be accelerated to speeds near that of light. Estimate the wavelength in nm of such a proton moving at $2.40 * 10^8$ m/s. (mass of proton = $1.673 *
Here's the work I've done:
It says to use deBrogile's equation:
$\lambda = \frac{h}{mu}$
h stands for Planck's constant = $6.63*10^{-34} {kgm^2}{s}$
m for the mass = $1.673*10^{-27}$ kg
u for the speed = $2.40*10^8$ m/s
Plugging in...
$\frac {6.63*10^{-34} kgm^2}{(2.40*10^8 m/s)(1.673*10^{-27}kg})$
= $1.65*10^{-15}$ m (from here , just divide by 10^(-9))
Conversion factors for nm:
$\frac{1m}{1*10^{-9}nm} = \frac{1*10^{-9}nm}{1m}$
And then I get wrong answers all over the place. I keep constantly getting new wrong answers and even on Yahoo they're giving me wrong answers. Totally don't know what I'm doing wrong.
April 24th 2010, 12:39 AM
Question: In particle accelerators, protons can be accelerated to speeds near that of light. Estimate the wavelength in nm of such a proton moving at $2.40 * 10^8$ m/s. (mass of proton = $1.673 *
Here's the work I've done:
It says to use deBrogile's equation:
$\lambda = \frac{h}{mu}$
h stands for Planck's constant = $6.63*10^{-34} {kgm^2}{s}$
m for the mass = $1.673*10^{-27}$ kg
u for the speed = $2.40*10^8$ m/s
Plugging in...
$\frac {6.63*10^{-34} kgm^2}{(2.40*10^8 m/s)(1.673*10^{-27}kg})$
= $1.65*10^{-15}$ m
Conversion factors for nm:
$\frac{1m}{1*10^{-9}nm} = \frac{1*10^{-9}nm}{1m}$
And then I get wrong answers all over the place. I keep constantly getting new wrong answers and even on Yahoo they're giving me wrong answers. Totally don't know what I'm doing wrong.
Dont you think the mass would change if speed nears speed of light
I guess its related to the first line of your question
April 24th 2010, 12:51 AM
A Beautiful Mind
April 24th 2010, 01:31 AM
Is your answer
$=0.99 \cdot 10^{-6}nm$ ?
if yes than its from the relative mass relation I had mentioned.
Mass in special relativity - Wikipedia, the free encyclopedia
We have to take the realtive mass/momentum in consideration for your question,since the speed is comparable to speed of light.
|
{"url":"http://mathhelpforum.com/math-topics/141025-chemistry-question-print.html","timestamp":"2014-04-19T22:12:06Z","content_type":null,"content_length":"14358","record_id":"<urn:uuid:92dae96b-4b6d-40af-8d56-f6b2f6b46863>","cc-path":"CC-MAIN-2014-15/segments/1397609537754.12/warc/CC-MAIN-20140416005217-00021-ip-10-147-4-33.ec2.internal.warc.gz"}
|
• MarkovProcessProperties can be used for finite state Markov processes such as DiscreteMarkovProcess and ContinuousMarkovProcess.
• MarkovProcessProperties[mproc, "Properties"] gives a list of available properties.
• MarkovProcessProperties[mproc, "property", "Description"] gives a description of the property as a string.
• "InitialProbabilities" initial state probability vector
"TransitionMatrix" conditional transition probabilities m
"TransitionRateMatrix" conditional transition rates q
"TransitionRateVector" state transition rates
"HoldingTimeMean" mean holding time for a state
"HoldingTimeVariance" variance of holding time for a state
"SummaryTable" summary of properties
• For a continuous-time Markov process gives the transition matrix of the embedded discrete-time Markov process.
• The holding time is the time spent in each state before transitioning to a different state. This takes into account self-loops which may cause the process to transition to the same state several
• "CommunicatingClasses" sets of states accessible from each other
"RecurrentClasses" communicating classes that cannot be left
"TransientClasses" communicating classes that can be left
"AbsorbingClasses" recurrent classes with a single element
"PeriodicClasses" communicating classes with finite period greater than 1
"Periods" period for each of the periodic classes
"Irreducible" whether the process has a single recurrent class
"Aperiodic" whether all classes are aperiodic
"Primitive" whether the process is irreducible and aperiodic
• The states of a finite Markov process can be grouped into communicating classes where from each state in a class there is a path to every other state in the class.
• A communicating class can be transient when there is a path from the class to another class or recurrent when there isn't. A special type of recurrent class, called absorbing, consist of a single
• A state is periodic is if there is a non-zero probability that you return to the state after two or more steps. All the states in a class have the same period.
• "TransientVisitMean" mean number of visits to each transient state
"TransientVisitVariance" variance of number of visits to each transient state
"TransientTotalVisitMean" mean total number of transient states visited
• A Markov process will eventually enter a recurrent class. The transient properties characterize how many times each transient state is visited or how many different transient states are visited.
• "ReachabilityProbability" probability of ever reaching a state
"LimitTransitionMatrix" Cesaro limit of the transition matrix
"Reversible" whether the process is reversible
• If a property is not available, this is indicated by Missing["reason"].
New in 9
|
{"url":"http://reference.wolfram.com/mathematica/ref/MarkovProcessProperties.html","timestamp":"2014-04-20T16:29:47Z","content_type":null,"content_length":"40313","record_id":"<urn:uuid:7d4d42c1-96c3-4d06-91d0-6ced017c65b3>","cc-path":"CC-MAIN-2014-15/segments/1397609538824.34/warc/CC-MAIN-20140416005218-00526-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Simplified and Abstracted Geometry for Forward Dynamics
From GICL Wiki
Geometric simplification has played an important role in the computer graphics field by allowing believable viewing of scenes too complex for timely computation. However, the approaches used in
computer graphics for geometric simplification have as their goal the realistic portrayel of the scene to a viewer, not the similarity between the simplified or abstracted system and the physical
ground truth. They are not then appropriate for simulations of robots in which the geometry plays a role.
Model Simplification(PDF) Adaptive Dynamics(PDF) Milling Machine Simplification(PDF) Geometric Model Simplification for CAD(PDF)
Example Problem
Empirical Results
|
{"url":"http://gicl.cs.drexel.edu/index.php?title=Simplified_and_Abstracted_Geometry_for_Forward_Dynamics&oldid=2948","timestamp":"2014-04-21T00:53:27Z","content_type":null,"content_length":"19819","record_id":"<urn:uuid:d889dfb6-4c93-4d73-80e6-f49d1cd42f03>","cc-path":"CC-MAIN-2014-15/segments/1397609539337.22/warc/CC-MAIN-20140416005219-00293-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Lax Presheaves and Exponentiability
Susan Niefield
The category of Set-valued presheaves on a small category B is a topos. Replacing Set by a bicategory S whose objects are sets and morphisms are spans, relations, or partial maps, we consider a
category Lax(B, S) of S-valued lax functors on B. When S = Span, the resulting category is equivalent to Cat/B, and hence, is rarely even cartesian closed. Restricting this equivalence gives rise to
exponentiability characterizations for Lax(B, Rel) by Niefield and for Lax(B, Par) in this paper. Along the way, we obtain a characterization of those B for which the category UFL/B is a coreflective
subcategory of Cat/B, and hence, a topos.
Keywords: span, relation, partial map, topos, cartesian closed, exponentiable, presheaf
2000 MSC: 18A22, 18A25, 18A40, 18B10, 18B25, 18D05, 18F20
Theory and Applications of Categories, Vol. 24, 2010, No. 12, pp 288-301.
TAC Home
|
{"url":"http://www.emis.de/journals/TAC/volumes/24/12/24-12abs.html","timestamp":"2014-04-21T14:48:18Z","content_type":null,"content_length":"2438","record_id":"<urn:uuid:7dff3529-b8e5-4af1-82f1-c0544d433d49>","cc-path":"CC-MAIN-2014-15/segments/1397609540626.47/warc/CC-MAIN-20140416005220-00346-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Frederick, CO ACT Tutor
Find a Frederick, CO ACT Tutor
...I began tutoring by helping other students in my high school classes. In college, I worked as a private math and physics tutor with the Physics department at the University of Louisiana. I was
then hired by the ULL Athletic Department to help fellow student athletes maintain high GPA's and academic eligibility.
16 Subjects: including ACT Math, chemistry, calculus, physics
...If you are looking for some help with trigonometry, I look forward to helping get you back on track. I have a PhD in Physics with a minor in Math. I have been working with students of all
levels from Algebra 1 to Calculus 3 covering all the various areas covered by the SAT Math test.
14 Subjects: including ACT Math, calculus, GRE, physics
...I put particular emphasis on incorporating my students' personal interests into the lesson plan. Math is important for understanding many topics in today's complex world. Good math skills
start with strong base concepts.
30 Subjects: including ACT Math, reading, Spanish, English
...I am experienced in many methods of studying and preparing for standardized tests and would love the opportunity to help you succeed on your exams as well. I scored in the 95th percentile on
the ACT, and have recent standardized test taking experience succeeding on the MCAT. I am experienced in...
22 Subjects: including ACT Math, English, chemistry, algebra 2
...My student population for Algebra I and II (also known college algebra) ranges between 8 years old - 18 years old. As a chemistry teacher, I often have to review algebra with my students. My
background major is chemical engineering, graduated with high honors.
7 Subjects: including ACT Math, chemistry, algebra 1, algebra 2
Related Frederick, CO Tutors
Frederick, CO Accounting Tutors
Frederick, CO ACT Tutors
Frederick, CO Algebra Tutors
Frederick, CO Algebra 2 Tutors
Frederick, CO Calculus Tutors
Frederick, CO Geometry Tutors
Frederick, CO Math Tutors
Frederick, CO Prealgebra Tutors
Frederick, CO Precalculus Tutors
Frederick, CO SAT Tutors
Frederick, CO SAT Math Tutors
Frederick, CO Science Tutors
Frederick, CO Statistics Tutors
Frederick, CO Trigonometry Tutors
Nearby Cities With ACT Tutor
Brighton, CO ACT Tutors
Dacono ACT Tutors
East Lake, CO ACT Tutors
Eastlake, CO ACT Tutors
Erie, CO ACT Tutors
Evergreen, CO ACT Tutors
Firestone ACT Tutors
Fort Lupton ACT Tutors
Johnstown, CO ACT Tutors
Lafayette, CO ACT Tutors
Longmont ACT Tutors
Louisville, CO ACT Tutors
Mead, CO ACT Tutors
Platteville, CO ACT Tutors
Superior, CO ACT Tutors
|
{"url":"http://www.purplemath.com/Frederick_CO_ACT_tutors.php","timestamp":"2014-04-18T21:31:52Z","content_type":null,"content_length":"23686","record_id":"<urn:uuid:b28aec56-edab-414f-a5af-fad71d7f53b5>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.9/warc/CC-MAIN-20140416005215-00529-ip-10-147-4-33.ec2.internal.warc.gz"}
|
188 helpers are online right now
75% of questions are answered within 5 minutes.
is replying to Can someone tell me what button the professor is hitting...
• Teamwork 19 Teammate
• Problem Solving 19 Hero
• Engagement 19 Mad Hatter
• You have blocked this person.
• ✔ You're a fan Checking fan status...
Thanks for being so helpful in mathematics. If you are getting quality help, make sure you spread the word about OpenStudy.
This is the testimonial you wrote.
You haven't written a testimonial for Owlfred.
|
{"url":"http://openstudy.com/users/silentkill96/asked","timestamp":"2014-04-16T13:10:04Z","content_type":null,"content_length":"112208","record_id":"<urn:uuid:d67020f0-c24c-477d-ae1a-755f2a821333>","cc-path":"CC-MAIN-2014-15/segments/1397609523429.20/warc/CC-MAIN-20140416005203-00366-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Systems of Equations - Graphical Method (with worked solutions & videos)
Systems of Equations: Graphical Method
In this lesson, we will learn how to solve systems of equations or simultaneous equations by graphing.
At the end of this lesson, we have a systems of equations calculator that can solve systems of equations graphically and algebraically. Use it to check your answers.
Related Topics:
Solve Systems of Equations
by Substitution
Solve Systems of Equations
by Addition Method (Opposite-Coefficients Method)
More Algebra Lessons
Solve System of Equations by Graphing
To solve systems of equations or simultaneous equations by the graphical method, we draw the graph for each of the equation and look for a point of intersection between the two graphs. The
coordinates of the point of intersection would be the solution to the system of equations. If the two graphs do not intersect - which means that they are parallel - then there is no solution.
Example :
Using the graphical method, find the solution of the systems of equations
y + x = 3
y = 4x - 2
Solution :
Draw the two lines graphically and determine the point of intersection from the graph.
From the graph, the point of intersection is (1, 2)
Solving Systems of Equations Graphically
Some examples on solving systems of equations graphically.
Solving a Linear System of Equations by Graphing.
Have a look at the following video for another example on how to solve systems of equations using the graphical method:
Systems of Equations Calculator
This math tool will determine the intersection point of two lines or curves. Enter in the two equations and submit. The graphs of the two equations will be shown. Select step-by-step solution if you
want to see the equations solved algebraically.
We welcome your feedback, comments and questions about this site or page. Please submit your feedback or enquiries via our Feedback page.
|
{"url":"http://www.onlinemathlearning.com/systems-of-equations.html","timestamp":"2014-04-17T21:39:21Z","content_type":null,"content_length":"37619","record_id":"<urn:uuid:49a5ef1e-ef32-4241-b346-28a828f77343>","cc-path":"CC-MAIN-2014-15/segments/1397609532128.44/warc/CC-MAIN-20140416005212-00076-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Correct bug in symmetric functions caused by Symmetrica using integers longer than 32 bits
Reported by: saliola Owned by: sage-combinat
Priority: critical Milestone: sage-5.13
Component: combinatorics Keywords: symmetric functions, symmetrica, memleak
Cc: sage-combinat, aschilling, zabrocki, mguaypaq, darij, tscrim, mhansen Merged in: sage-5.13.beta2
Authors: Jeroen Demeyer Reviewers: Mike Zabrocki
Report Upstream: Reported upstream. No feedback yet. Work issues:
Branch: Commit:
Dependencies: Stopgaps:
There are two examples of bugs that this patch corrects.
The first is in the conversion of integers between Symmetrica and Sage:
sage: from sage.libs.symmetrica.symmetrica import test_integer
sage: test_integer(2^76)==2^76
sage: test_integer(2^75)==2^75
The second is that coefficients in symmetric function package are not always correct.
sage: s = SymmetricFunctions(QQ).s()
sage: p = SymmetricFunctions(QQ).p()
sage: c = s(p([1]*36)).coefficient([7,6,5,4,3,3,2,2,1,1,1,1])
sage: c==StandardTableaux([7,6,5,4,3,3,2,2,1,1,1,1]).cardinality()
This bug is corrected by modifying the Symmetrica spkg to ensure that computations are done assuming that INT's are 4 bytes.
spkg: http://boxen.math.washington.edu/home/jdemeyer/spkg/symmetrica-2.0.p8.spkg (spkg diff)
apply 13413_symmetrica.patch
Attachments (4)
Change History (58)
• Description modified (diff)
• Cc aschilling zabrocki added
Running "sage symmetrica-test.py" may exhibit the bug.
Running the attached file symmetrica-test.py exhibits the problem on some machines (e.g., a Mac of some sort, I don't have the specs with me) but on my own laptop (ancient Dell laptop running Ubuntu
10.04 LTS) the test runs out of memory before finding a problem. Curiously enough, for the Mac I tried the problem appears at size 47, same as in the problem description.
I think this shows that the problem is in (or extremely close to) Symmetrica, not Sage itself.
Also, given that the coefficients change even without any intervening computations, I strongly suspect that we're seeing a memory leak in Symmetrica, not an integer overflow error. Most likely,
1. Symmetrica computes some result,
2. caches a pointer to the result,
3. frees the memory containing the result,
4. this memory eventually gets reused,
5. so the cached result now points to garbage.
It's interesting to note that the error only seems to happen when dealing with coefficients in QQ, not in ZZ, as Anne's testing shows. However, given the state of the Symmetrica source code, I'm not
optimistic about actually finding (and fixing) the bug.
Do we know if there are other changes of basis which exhibit similar problems?
For convenience, note that the content of symmetrica-test.py is simply:
from sage.all import QQ, ZZ, Partition
from sage.libs.symmetrica.all import t_POWSYM_SCHUR as convert
from time import time
one = QQ(1)
bad_input = {Partition([ZZ(2)] * 2): one}
good_output = convert(bad_input)
start_time = time()
k = 1
while True:
dummy = convert({Partition([ZZ(1)] * k): one})
if convert(bad_input) != good_output: break
print 'Size %d seems fine after %d seconds.' % (k, time() - start_time)
k += 1
print 'Found a problem at size %d:' % k
for k in range(10):
print convert(bad_input)
• Milestone changed from sage-5.11 to sage-5.12
This problem is much worse than originally stated. Mercedes Rosas contacted me and said that they are getting strange answers for calculations in the symmetric function package. I started to do some
sage: s = SymmetricFunctions(QQ).s()
sage: p = SymmetricFunctions(QQ).p()
sage: %time s(p([1]*36))==sum(StandardTableaux(la).cardinality()*s(la) for la in Partitions(36))
CPU times: user 91.50 s, sys: 0.26 s, total: 91.75 s
Wall time: 91.65 s
sage: s(p([1]*35))==sum(StandardTableaux(la).cardinality()*s(la) for la in Partitions(35))
sage: s(p[2,2])
s[1, 1, 1, 1] - s[2, 1, 1] + 2*s[2, 2] - s[3, 1] + s[4]
The first statement should return True and the second and third are not returning the clearly false coefficients that we are seeing when the calculation goes as high as degree 47.
WARNING: calculations using the symmetric function package should not be believed beyond a certain degree. I will continue to see what we can do to track down this bug.
Hmmm. I wonder if this has something to do with Mac's being a 64 bit architecture? I checked the size of the coefficients for [1]*35 v. [1]*36 and found:
sage: s = SymmetricFunctions(QQ).s()
sage: p = SymmetricFunctions(QQ).p()
sage: g = s(p([1]*35))
sage: max(float(log(c)/log(2)) for c in g.coefficients())
sage: g = s(p([1]*36))
sage: max(float(log(c)/log(2)) for c in g.coefficients())
I suspect the function _py_longint in symmetrica.pxi is at fault but I haven't been able to track down some of the definitions in that file.
If others are experiencing/not experiencing the same problems please let me know. I am running sage on mac running OSX 10.8.5 with Intel Core i7 processors. I am seeing the same bug at degree 36 on a
linux machine at work.
There is a function test_integer in the symmetrica package. It passes on some integers, but not all!
sage: from sage.libs.symmetrica.symmetrica import test_integer
sage: s = SymmetricFunctions(QQ).s()
sage: p = SymmetricFunctions(QQ).p()
sage: g = s(p([1]*35))
sage: all( test_integer(c)==c for c in g.coefficients())
sage: g = s(p([1]*36))
sage: all( test_integer(c)==c for c in g.coefficients())
sage: c=StandardTableaux([10,9,8,7,6,5,4,3,2,1]).cardinality()
sage: test_integer(c)==c
Simple test:
sage: from sage.libs.symmetrica.symmetrica import test_integer
sage: test_integer(2^76)==2^76
sage: test_integer(2^75)==2^75
I have a fix (see patch) for *only* one problem. A two line change fixes the issue with test_integer.
But sadly :( this does not fix the issue with s(p([1]*36)).
The error seems to be within Symmetrica. I computed s(p([1]*36)) and stuck in a print statement to print out the intermediate symmetrica object. I found that the coefficient of the partition
111122334567 is 5.061293.630159.021056 and this agrees with what is returned by the command g = s(p([1]*36))
sage: g.coefficient([7,6,5,4,3,3,2,2,1,1,1,1])
however the coefficient should be
sage: StandardTableaux([7,6,5,4,3,3,2,2,1,1,1,1]).cardinality()
• Cc darij added
• Keywords symmetrica memleak added
• Priority changed from major to critical
I can't help with understanding C code, but if anyone needs some of the German in Symmetrica translated, I can help.
Has the error so far only occurred in high degrees or with big coefficients only, or should all of Sym be considered a minefield until further notice?
This is bad. Technically any calculation that involves the p basis over degree 20 and other bases of degree 30+ should be suspect. I make this assessment because it seems that the integer
calculations with coefficients using 64 bits may be a problem. If you check sage.combinat.sf.sfa.zee(la) for la partitions of 21 and the number of standard tableaux of degree 35 have values > 2**64.
I can understand C, but it is a bit rusty for me. The fact that the C is written in German makes it slightly more of a challenge since for some functions I really have to make wild guesses.
Mike H. posted a test.c program to try out in Symmetrica.
#include "def.h"
#include "macro.h"
t_POWSYM_SCHUR(a, b);
Run this program with input "36" followed by thirty-six "1"'s, followed by "1" (to indicate an integer coefficient), followed by "1" (to indicate that the coefficient is 1) "n" and the output says
again 5.061293.630159.021056 111122334567. This means that the error is within Symmetrica.
I shouldn't be too rusty with my C, but this might take a combined effort at Sage Days to fully complete it. I'll take a look at the code as well.
Here's my two guesses about the problem from the discussion:
• It is an overflow error and the big fix will be to convert Symmetrica to use mpz instead of the usual machine integers.
• Symmetrica is using its own high precision integers (perhaps with a fixed maximum size) and it's overflowing that block of memory.
I've spent some time looking through the Symmetrica code. It is not hard to figure out the basic ideas behind the library, but it uses some very terse C, non-obvious abbreviations, and a mix of
German and English. It also has very little documentation.
Symmetrica "longints" are a structure of three INTs w2,w1,w0 with 0<=w_i<=2^15-1 plus a pointer to the next batch of 3 or NULL.
I've played with some programs that do some addition and multiplication with these longints and I haven't been able to make them fail, but it must be that in the calculation of p([1]*36) that some
arithmetic fails when some overflow occurs. It will almost definitely easier to identify the bug in the library than to adapt it to another data structure.
Please report this upstream then.
• Report Upstream changed from N/A to Reported upstream. No feedback yet.
So I wrote to symmetric(at)symmetrica.de and told them about the bug and pointed them to the ticket for the example.
Maybe it's good to indicate in the ticket description where/how you reported this upstream.
Yea! I think I found it! I thought about it and realized that the coefficient that I have been concentrating on as my example must be coming from adding the number of standard tableaux of shape
formed by removing an outer corner together. I tried the following as a test.c.
#include "def.h"
#include "macro.h"
So above the line we will see the numbers of standard tableaux formed by removing a corner (which I calculated in sage, but compared them to the output of the program in comment 15. And the output
SYMMETRICA VERSION 3.0 - STARTING
Thu Feb 26 14:58:10 MET 1998
SYMMETRICA VERSION 3.0 - ENDING
last changed: Thu Feb 26 14:58:10 MET 1998
Lets do the same thing by hand in sage:
sage: 1003805253100650000+1357888495095475200
sage: _+1472208106707261000
sage: _+1336535106226320000
sage: _+2194567264800768000
sage: _+2252538987957858600
Notice that as soon as I add two 'integer' objects together so that they become a 'longint' object (marked by where the program starts printing out periods in the number) the answer is wrong. So now
I just have to find the routine which adds two ints and gives an overflow into longints.
Here is a mini test.c to verify if the programs are working
#include "def.h"
#include "macro.h"
printf("but the answer should be....\n");
Currently on my computer the answer is... 7.365004.528389.219328
I sent a second bug report to symmetrica(at)symmetrica.de with the above example.
Here seems to be the problem. If I take an integer and convert it into a long integer then it isn't right.
#include "def.h"
#include "macro.h"
printf("but when I convert it into a long integer...\n");
The output on my computer:
SYMMETRICA VERSION 3.0 - STARTING
Thu Feb 26 14:58:10 MET 1998
but when I convert it into a long integer...
SYMMETRICA VERSION 3.0 - ENDING
That makes sense since it's always expecting ints to be at most 32 bits. I'm testing out a patch now.
It seems as though ints can be as large as 45 bits and the function t_int_longint will still work properly.
Okay, the fix is to make def.h look like the following:
#include <stdint.h>
#ifdef __alpha
typedef int32_t INT;
typedef uint32_t UINT;
#else /* __alpha */
typedef int32_t INT;
typedef uint32_t UINT;
#endif /* __alpha */
(and you probably don't need to change the first part.
Thanks Mike.
The first two lines of def.h read:
/* file def.h SYMMETRICA */
/* INT should always be 4 byte */
Now it is becoming clearer to me...
So I haven't the first clue about how to muck with the spkg's. Will you post your fix? I haven't even been able to check that it corrects the problems with the symmetric function calculations in
sage. Essentially there are two problems that need to be checked (or three if you count the first issue that I provide a fix for above).
The first one is that at degree 36 the calculation is not correct. The second one is that after a calculation at degree 47 the coefficients start to become random.
I can easily make a spkg.
• Authors set to Jeroen Demeyer
• Description modified (diff)
I'm a bit worried by some of the compiler warnings, but these already existed before the patches...
I'm having some issues installing it because I seem to be getting errors and not just warnings.
Are you getting the same error messages?
Extracting package /Users/zabrocki/Downloads/symmetrica-2.0.p8.spkg
-rw-r--r--@ 1 zabrocki staff 680547 Oct 17 09:26 /Users/zabrocki/Downloads/symmetrica-2.0.p8.spkg
Finished extraction
Host system:
Darwin Mikes-MacBook-Pro.local 12.5.0 Darwin Kernel Version 12.5.0: Sun Sep 29 13:33:47 PDT 2013; root:xnu-2050.48.12~1/RELEASE_X86_64 x86_64
C compiler: gcc
C compiler version:
Using built-in specs.
Target: x86_64-apple-darwin12.3.0
Configured with: ../src/configure --prefix=/Users/dehayebuildbot/build/sage/dehaye/dehaye_binary/build/sage-5.10/local --with-local-prefix=/Users/dehayebuildbot/build/sage/dehaye/dehaye_binary/build/sage-5.10/local --with-gmp=/Users/dehayebuildbot/build/sage/dehaye/dehaye_binary/build/sage-5.10/local --with-mpfr=/Users/dehayebuildbot/build/sage/dehaye/dehaye_binary/build/sage-5.10/local --with-mpc=/Users/dehayebuildbot/build/sage/dehaye/dehaye_binary/build/sage-5.10/local --with-system-zlib --disable-multilib --disable-nls
Thread model: posix
gcc version 4.7.3 (GCC)
patching file de.c
patching file def.h
patching file macro.h
patching file bar.c
patching file def.h
Hunk #1 succeeded at 3100 (offset -5 lines).
Hunk #2 succeeded at 3266 (offset -5 lines).
patching file di.c
patching file ga.c
patching file galois.c
patching file macro.h
patching file nc.c
patching file nu.c
patching file part.c
patching file perm.c
patching file rest.c
patching file ta.c
patching file zyk.c
gcc -O2 -g -DFAST -DALLTRUE -c -o bar.o bar.c
gcc -O2 -g -DFAST -DALLTRUE -c -o bi.o bi.c
gcc -O2 -g -DFAST -DALLTRUE -c -o boe.o boe.c
gcc -O2 -g -DFAST -DALLTRUE -c -o bruch.o bruch.c
gcc -O2 -g -DFAST -DALLTRUE -c -o classical.o classical.c
gcc -O2 -g -DFAST -DALLTRUE -c -o de.o de.c
gcc -O2 -g -DFAST -DALLTRUE -c -o di.o di.c
gcc -O2 -g -DFAST -DALLTRUE -c -o ff.o ff.c
gcc -O2 -g -DFAST -DALLTRUE -c -o galois.o galois.c
gcc -O2 -g -DFAST -DALLTRUE -c -o ga.o ga.c
gcc -O2 -g -DFAST -DALLTRUE -c -o gra.o gra.c
gcc -O2 -g -DFAST -DALLTRUE -c -o hash.o hash.c
gcc -O2 -g -DFAST -DALLTRUE -c -o hiccup.o hiccup.c
gcc -O2 -g -DFAST -DALLTRUE -c -o io.o io.c
gcc -O2 -g -DFAST -DALLTRUE -c -o ko.o ko.c
gcc -O2 -g -DFAST -DALLTRUE -c -o list.o list.c
gcc -O2 -g -DFAST -DALLTRUE -c -o lo.o lo.c
lo.c:84:5: error: conflicting types for 'loc_index'
In file included from lo.c:3:0:
macro.h:559:13: note: previous declaration of 'loc_index' was here
lo.c:85:5: error: conflicting types for 'loc_size'
In file included from lo.c:3:0:
macro.h:559:24: note: previous declaration of 'loc_size' was here
lo.c:86:5: error: conflicting types for 'loc_counter'
In file included from lo.c:3:0:
macro.h:559:33: note: previous declaration of 'loc_counter' was here
lo.c:88:5: error: conflicting types for 'mem_counter_loc'
In file included from lo.c:3:0:
macro.h:562:35: note: previous declaration of 'mem_counter_loc' was here
lo.c:89:5: error: conflicting types for 'longint_speicherindex'
In file included from lo.c:3:0:
macro.h:562:13: note: previous declaration of 'longint_speicherindex' was here
lo.c:90:5: error: conflicting types for 'longint_speichersize'
In file included from lo.c:3:0:
macro.h:562:51: note: previous declaration of 'longint_speichersize' was here
make: *** [lo.o] Error 1
Error building Symmetrica.
real 0m28.966s
user 0m27.970s
sys 0m0.759s
Error installing package symmetrica-2.0.p8
Please email sage-devel (http://groups.google.com/group/sage-devel)
explaining the problem and including the relevant part of the log file
Describe your computer, operating system, etc.
If you want to try to fix the problem yourself, *don't* just cd to
/Applications/sage/spkg/build/symmetrica-2.0.p8 and type 'make' or whatever is appropriate.
Instead, the following commands setup all environment variables
correctly and load a subshell for you to debug the error:
(cd '/Applications/sage/spkg/build/symmetrica-2.0.p8' && '/Applications/sage/sage' --sh)
When you are done debugging, you can type "exit" to leave the subshell.
Any ideas?
This is work in progress, not yet needs_review.
• Description modified (diff)
• Status changed from new to needs_review
• Description modified (diff)
The version you just posted seems to compile (I was having issues before) but it looks like the fix only solves 1/2 of the problem.
sage: s = SymmetricFunctions(QQ).s()
sage: p = SymmetricFunctions(QQ).p()
sage: g = s(p([1]*36))
sage: g.coefficient([7,6,5,4,3,3,2,2,1,1,1,1])
sage: g = s(p([1]*47))
sage: s(p[2,2])
s[1, 1, 1, 1] - s[2, 1, 1] + 1589352607*s[2, 2] - s[3, 1] + s[4]
Are you getting similar results? We may have to go back to the drawing board.
I get
sage: s = SymmetricFunctions(QQ).s()
sage: p = SymmetricFunctions(QQ).p()
sage: g = s(p([1]*36))
sage: g.coefficient([7,6,5,4,3,3,2,2,1,1,1,1])
sage: g = s(p([1]*47))
sage: s(p[2,2])
s[1, 1, 1, 1] - s[2, 1, 1] + 2*s[2, 2] - s[3, 1] + s[4]
Sorry to ask the obvious, but you did run ./sage -b after installing the spkg, right?
I did. I just touched all the files in the sage/libs/symmetrica directory and re-ran the sage -b. It fixes the problem so now both problems are fixed. I'll do a few more tests but that looks
fantastic! Thanks.
I don't know if this kicks the can down the road or if something else is going wrong. I am still finding failing tests if I go higher in degree.
sage: s = SymmetricFunctions(QQ).s()
sage: p = SymmetricFunctions(QQ).p()
sage: g = s(p([1]*47))
sage: g.coefficient([9,8,7,6,5,4,3,2,2,1])
sage: StandardTableaux([9,8,7,6,5,4,3,2,2,1]).cardinality()
sage: g = s(p([1]*55))
sage: g.coefficient([10,9,8,7,6,5,4,3,2,1])
sage: StandardTableaux([10,9,8,7,6,5,4,3,2,1]).cardinality()
sage: s(p[2,2])
s[1, 1, 1, 1] - s[2, 1, 1] + 2*s[2, 2] - s[3, 1] + s[4]
sage: g = s(p([1]*61))
sage: g.coefficient([10,9,8,7,6,5,5,4,3,2,1,1])
sage: StandardTableaux([10,9,8,7,6,5,5,4,3,2,1,1]).cardinality()
sage: s(p[2,2])
s[1, 1, 1, 1] - s[2, 1, 1] + 498984676*s[2, 2] - s[3, 1] + s[4]
Note that computing the degree 61 calculations takes a long time. Are you finding the same errors?
I get
jdemeyer@boxen:/release/merger/sage-5.12$ ./sage
│ Sage Version 5.12, Release Date: 2013-10-07 │
│ Type "notebook()" for the browser-based notebook interface. │
│ Type "help()" for help. │
sage: s = SymmetricFunctions(QQ).s()
sage: p = SymmetricFunctions(QQ).p()
sage: g = s(p([1]*47))
sage: g.coefficient([9,8,7,6,5,4,3,2,2,1])
sage: StandardTableaux([9,8,7,6,5,4,3,2,2,1]).cardinality()
sage: g = s(p([1]*55))
sage: g.coefficient([10,9,8,7,6,5,4,3,2,1])
sage: StandardTableaux([10,9,8,7,6,5,4,3,2,1]).cardinality()
sage: s(p[2,2])
s[1, 1, 1, 1] - s[2, 1, 1] + 2*s[2, 2] - s[3, 1] + s[4]
sage: g = s(p([1]*61))
sage: g.coefficient([10,9,8,7,6,5,5,4,3,2,1,1])
sage: StandardTableaux([10,9,8,7,6,5,5,4,3,2,1,1]).cardinality()
sage: s(p[2,2])
s[1, 1, 1, 1] - s[2, 1, 1] + 2*s[2, 2] - s[3, 1] + s[4]
Trials on linux machine give correct answers, on Mac I am getting errors. I am running sage -ba to see if this makes a difference.
On two different macs am getting:
sage: s = SymmetricFunctions(QQ).s()
sage: p = SymmetricFunctions(QQ).p()
sage: sage: g = s(p([1]*47))
sage: g.coefficient([9,8,7,6,5,4,3,2,2,1])
sage: sage: g = s(p([1]*47))
sage: g.coefficient([9,8,7,6,5,4,3,2,2,1])
The first time the value is computed it seems to be correct. The second time it is random. This does not happen on linux.
So I ran on an old copy of sage (no correction installed) on the same linux machine:
sage: s = SymmetricFunctions(QQ).s()
sage: p = SymmetricFunctions(QQ).p()
sage: g = s(p([1]*47))
sage: s(p([1]*5))
s[1, 1, 1, 1, 1] + 4*s[2, 1, 1, 1] + 5*s[2, 2, 1] + 6*s[3, 1, 1] + 5*s[3, 2] + 4*s[4, 1] + s[5]
On an old copy of sage on mac (correction installed or not):
sage: s = SymmetricFunctions(QQ).s()
sage: p = SymmetricFunctions(QQ).p()
sage: g = s(p([1]*47))
sage: s(p([1]*5))
s[1, 1, 1, 1, 1] - 1004621421*s[2, 1, 1, 1] - 2015815354*s[2, 2, 1] - 2018818386*s[3, 1, 1] - 2013245610*s[3, 2] - 1015866049*s[4, 1] + s[5]
This means that there are two (at least partially) independent problems going on here. One of them does not seem to appear on linux machines. When I run the examples in the patch description, I do
not see the random coefficients in the expansion of s(p([2,2])). Do you see it on boxen?
I am running on the hypothesis that on (at least certain) linux machines that the bug described in the patch description never existed. It was only a bug on Macs. However starting in comment 5, I
identified a second bug that we can recreate at degree 36. This second bug is all corrected by installing the spkg and the patch (maybe there is a third bug around test_integer?).
I've only tested this on a couple of Macs and one linux machine and Jeroen has been posting the results from boxen. All evidence that I have says that the first bug is not resolved, but is only a
problem on Macs. Can someone recreate the bug in the patch description on a linux machine?
The original description points to "some wrapper functions around symmetrica." This is probably still the case.
Ubuntu in a virtualbox, 64bit, sage 5.13.beta0 with patches installed (hopefully correctly: I -i'd symmetrica 2.0p8, but I never explicitly uninstalled 2.0p7).
Exhibit A:
sage: s = SymmetricFunctions(QQ).s()
sage: p = SymmetricFunctions(QQ).p()
sage: g = s(p([1]*47))
sage: g.coefficient([9,8,7,6,5,4,3,2,2,1])
sage: g = s(p([1]*47))
sage: g.coefficient([9,8,7,6,5,4,3,2,2,1])
Exhibit B:
sage: p = SymmetricFunctions(QQ).powersum()
sage: s = SymmetricFunctions(QQ).schur()
sage: s(p[2,2])
s[1, 1, 1, 1] - s[2, 1, 1] + 2*s[2, 2] - s[3, 1] + s[4]
sage: time g = s(p([1]*47))
/home/darij/sage-5.13.beta0/local/lib/python2.7/site-packages/sage/misc/sage_extension.py:371: DeprecationWarning: Use %time instead of time.
See http://trac.sagemath.org/12719 for details.
line = f(line, line_number)
CPU times: user 19.23 s, sys: 0.06 s, total: 19.29 s
Wall time: 19.81 s
sage: s(p[2,2])
s[1, 1, 1, 1] - s[2, 1, 1] + 2*s[2, 2] - s[3, 1] + s[4]
This speaks for it being a Mac problem. (No, I don't have one to test.)
Seeing that there really seem to be two separate issues here, can we get the current patch into beta1 without waiting for the Mac issue to be resolved? I am scared of working with Sym now...
EDIT: And to verify that the other issue has been resolved, Exhibit C:
sage: s = SymmetricFunctions(QQ).s()
sage: p = SymmetricFunctions(QQ).p()
sage: %time s(p([1]*36))==sum(StandardTableaux(la).cardinality()*s(la) for la in Partitions(36))
CPU times: user 122.02 s, sys: 0.52 s, total: 122.53 s
Wall time: 129.03 s
Many thanks for this, Mike and Jeroen!!
• Description modified (diff)
• Status changed from needs_review to positive_review
• Summary changed from fix integer overflow (?) in conversion of powersums to Schur functions to Correct bug in symmetric functions caused by Symmetrica using integers longer than 32 bits
Since 95% of this discussion really is about the correction of the second bug, I decided to move the original description to a new trac ticket #15312.
Replying to zabrocki:
Since 95% of this discussion really is about the correction of the second bug, I decided to move the original description to a new trac ticket #15312.
Good idea.
• Reviewers set to Mike Zabrocki
• Merged in set to sage-5.13.beta2
• Resolution set to fixed
• Status changed from positive_review to closed
• Reviewers changed from Mike Zabrocki to Mike Zabrocki
|
{"url":"http://trac.sagemath.org/ticket/13413","timestamp":"2014-04-17T01:26:22Z","content_type":null,"content_length":"87767","record_id":"<urn:uuid:6ba3c2ab-0428-4ba1-a896-3f7b2e4d1b04>","cc-path":"CC-MAIN-2014-15/segments/1397609526102.3/warc/CC-MAIN-20140416005206-00129-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Tree‐structured Data Regeneration with Network Coding
, 2010
"... During the last decades, a host of efficient algorithms have been developed for solving the minimum spanning tree problem in deterministic graphs, where the weight associated with the graph
edges is assumed to be fixed. Though it is clear that the edge weight varies with time in realistic applicatio ..."
Cited by 10 (5 self)
Add to MetaCart
During the last decades, a host of efficient algorithms have been developed for solving the minimum spanning tree problem in deterministic graphs, where the weight associated with the graph edges is
assumed to be fixed. Though it is clear that the edge weight varies with time in realistic applications and such an assumption is wrong, finding the minimum spanning tree of a stochastic graph has
not received the attention it merits. This is due to the fact that the minimum spanning tree problem becomes incredibly hard to solve when the edge weight is assumed to be a random variable. This
becomes more difficult, if we assume that the probability distribution function of the edge weight is unknown. In this paper, we propose a learning automata‐based heuristic algorithm to solve the
minimum spanning tree problem in stochastic graphs wherein the probability distribution function of the edge weight is unknown. The proposed algorithm taking advantage of learning automata determines
the edges that must be sampled at each stage. As the presented algorithm proceeds, the sampling process is concentrated on the edges that constitute the spanning tree with the minimum expected
weight. The proposed learning automata‐based sampling method decreases the number of samples that need to be taken from the graph by reducing the rate of unnecessary samples. Experimental results
show the superiority of the proposed algorithm over the well‐known existing methods both in terms of the number of samples and the running time of algorithm.
"... Abstract—Distributed storage systems provide large-scale reliable data storage by storing a certain degree of redundancy in a decentralized fashion on a group of storage nodes. To recover from
data losses due to the instability of these nodes, whenever a node leaves the system, additional redundancy ..."
Cited by 2 (0 self)
Add to MetaCart
Abstract—Distributed storage systems provide large-scale reliable data storage by storing a certain degree of redundancy in a decentralized fashion on a group of storage nodes. To recover from data
losses due to the instability of these nodes, whenever a node leaves the system, additional redundancy should be regenerated to compensate such losses. In this context, the general objective is to
minimize the volume of actual network traffic caused by such regenerations. A class of codes, called regenerating codes, has been proposed to achieve an optimal tradeoff curve between the amount of
storage space required for storing redundancy and the network traffic during the regeneration. In this paper, we jointly consider the choices of regenerating codes and network topologies. We propose
a new design, referred to as RCTREE, that combines the advantage of regenerating codes with a tree-structured regeneration topology. Our focus is the efficient utilization of network links, in
addition to the reduction of the regeneration traffic. With the extensive analysis and quantitative evaluations, we show that RCTREE is able to achieve a both fast and stable regeneration, even with
departures of storage nodes during the regeneration.
- Applied Soft Computing , 2011
"... Due to the hardness of solving the minimum spanning tree (MST) problem in stochastic environments, the stochastic MST (SMST) problem has not received the attention it merits, specifically when
the probability distribution function (PDF) of the edge weight is not a priori known. In this paper, we fir ..."
Cited by 2 (2 self)
Add to MetaCart
Due to the hardness of solving the minimum spanning tree (MST) problem in stochastic environments, the stochastic MST (SMST) problem has not received the attention it merits, specifically when the
probability distribution function (PDF) of the edge weight is not a priori known. In this paper, we first propose a learning automata‐based sampling algorithm (Algorithm 1) to solve the MST problem
in stochastic graphs where the PDF of the edge weight is assumed to be unknown. At each stage of the proposed algorithm, a set of learning automata is randomly activated and determines the graph
edges that must be sampled in that stage. As the proposed algorithm proceeds, the sampling process focuses on the spanning tree with the minimum expected weight. Therefore, the proposed sampling
method is capable of decreasing the rate of unnecessary samplings and shortening the time required for finding the SMST. The convergence of this algorithm is theoretically proved and it is shown that
by a proper choice of the learning rate the spanning tree with the minimum expected weight can be found with a probability close enough to unity. Numerical results show that Algorithm 1 outperforms
the standard sampling method. Selecting a proper learning rate is the most challenging issue in learning automata theory by which a good trade off can be achieved between the cost and efficiency of
algorithm. To improve the efficiency (i.e., the convergence speed and convergence rate) of Algorithm 1, we
, 2010
"... A learning automata-based heuristic algorithm for solving the minimum spanning tree problem in stochastic graphs ..."
Add to MetaCart
A learning automata-based heuristic algorithm for solving the minimum spanning tree problem in stochastic graphs
"... A learning automata-based heuristic algorithm for solving the minimum spanning tree problem in stochastic graphs ..."
Add to MetaCart
A learning automata-based heuristic algorithm for solving the minimum spanning tree problem in stochastic graphs
|
{"url":"http://citeseerx.ist.psu.edu/showciting?cid=9787743","timestamp":"2014-04-18T00:51:05Z","content_type":null,"content_length":"23959","record_id":"<urn:uuid:67ffde5a-3ea3-4993-83a5-175d820126c3>","cc-path":"CC-MAIN-2014-15/segments/1397609532374.24/warc/CC-MAIN-20140416005212-00130-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Got Homework?
Connect with other students for help. It's a free community.
• across
MIT Grad Student
Online now
• laura*
Helped 1,000 students
Online now
• Hero
College Math Guru
Online now
Here's the question you clicked on:
Find an equation in standard form for the ellipse with the vertical major axis of length 16, and minor axis of length 10.
• 10 months ago
• 10 months ago
Your question is ready. Sign up for free to start getting answers.
is replying to Can someone tell me what button the professor is hitting...
• Teamwork 19 Teammate
• Problem Solving 19 Hero
• Engagement 19 Mad Hatter
• You have blocked this person.
• ✔ You're a fan Checking fan status...
Thanks for being so helpful in mathematics. If you are getting quality help, make sure you spread the word about OpenStudy.
This is the testimonial you wrote.
You haven't written a testimonial for Owlfred.
|
{"url":"http://openstudy.com/updates/51a4cf0fe4b0aa1ad887e5ee","timestamp":"2014-04-17T19:24:20Z","content_type":null,"content_length":"45083","record_id":"<urn:uuid:7aca7e2f-862e-47c7-8bc9-ee817dc63826>","cc-path":"CC-MAIN-2014-15/segments/1397609530895.48/warc/CC-MAIN-20140416005210-00643-ip-10-147-4-33.ec2.internal.warc.gz"}
|
An Invitation to Nonstandard Analysis
- In Nonstandard analysis (Edinburgh
"... this paper is to describe the essential features of the resulting frameworks without getting bogged down in technicalities of formal logic and without becoming dependent on an explicit
construction of a specific field ..."
Cited by 10 (2 self)
Add to MetaCart
this paper is to describe the essential features of the resulting frameworks without getting bogged down in technicalities of formal logic and without becoming dependent on an explicit construction
of a specific field
, 1991
"... The first chapter presents Bayesian confirmation theory. We then construct infinitesimal numbers and use them to represent the probability of unrefuted hypotheses of standard probability zero.
Popper's views on the nature of hypotheses, of probability and confirmation are criticised. It is shown tha ..."
Cited by 1 (1 self)
Add to MetaCart
The first chapter presents Bayesian confirmation theory. We then construct infinitesimal numbers and use them to represent the probability of unrefuted hypotheses of standard probability zero.
Popper's views on the nature of hypotheses, of probability and confirmation are criticised. It is shown that Popper conflates total confirmation with weight of evidence. It is argued that Popper's
corroboration can be represented in a Bayesian formalism. Popper's propensity theory is discussed. A modified propensity interpretation is presented where probabilities are defined relative to
descriptions of generating conditions. The logical interpretation is briefly discussed and rejected. A Bayesian account of estimating the values of objective probabilities is given, and some of its
properties are proved. Belief functions are then compared with probabilities. It is concluded that belief functions offer a more elegant representation of the impact of evidence. Both measures are
then discussed in re...
|
{"url":"http://citeseerx.ist.psu.edu/showciting?cid=2791569","timestamp":"2014-04-20T15:01:47Z","content_type":null,"content_length":"14777","record_id":"<urn:uuid:a1c94351-5a4f-490a-9542-1356c0627275>","cc-path":"CC-MAIN-2014-15/segments/1397609538787.31/warc/CC-MAIN-20140416005218-00604-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Wolfram Demonstrations Project
Typical Bifurcations of Wavefront Intersections
This Demonstration shows all generic bifurcations of
intersections of wavefronts generated by a hypersurface with or without a boundary in a smooth
dimensional manifold for
. The time can be varied with a slider.
In this Demonstration, stable reticular Legendrian unfoldings and generic bifurcations of wavefronts are generated by a hypersurface germ with a boundary, a corner, or an r-corner (cf. [4]).
For the case , the hypersurface has no boundary; the fronts are described as perestroikas (in [1] the figures are given on p. 60). A one-parameter family of wavefronts is given by a generating
family defined on such that .
For the case , the hypersurface has a boundary; a reticular Legendrian unfolding gives the wavefront , where the set is the wavefront generated by the hypersurface at time and the set is the
wavefront generated by the boundary of the hypersurface at time .
A reticular Legendrian unfolding has a generating family. Then the wavefront is given by the generating family defined on such that .
Typical bifurcations of wavefronts in 2D and 3D are defined by generic reticular Legendrian unfoldings for the cases . Their generating families are stably reticular ---equivalent to one of the
For :
For :
Typical wavefronts in 2D and 3D are shown for singularities while typical bifurcations in 2D and 3D are shown for singularities.
The author also applies the theory of multi-reticular Legendrian unfoldings in order to construct a generic classification of semi-local situations.
A multi-reticular Legendrian unfolding consists of products of reticular Legendrian unfoldings. Its wavefronts are unions of wavefronts of the reticular Legendrian unfoldings.
A multi-generating family of a generic multi-reticular Legendrian unfolding () is reticular ---equivalent to one of the following:
In this Demonstration all generic bifurcations of intersections are given for wavefronts in an -dimensional manifold for , .
[1] V. I. Arnold,
Singularities of Caustics and Wave Fronts
, Dordrecht: Kluwer Academic Publishers, 1990.
[2] V. I. Arnold, S. M. Gusein–Zade, and A. N. Varchenko,
Singularities of Differential Maps I
, Basel: Birkhäuser, 1985.
[3] T. Tsukada, "Genericity of Caustics and Wavefronts on an r-Corner,"
Asian Journal of Mathematics
(3), 2010 pp. 335–358.
[4] T. Tsukada, "Bifurcations of Wavefronts on r-Corners: Semi-Local Classifications,"
Methods and Applications of Analysis
(3), 2011, pp. 303–334.
|
{"url":"http://www.demonstrations.wolfram.com/TypicalBifurcationsOfWavefrontIntersections/","timestamp":"2014-04-18T13:20:27Z","content_type":null,"content_length":"50356","record_id":"<urn:uuid:e86d2d76-20e0-4685-ae72-1ce286c0386c>","cc-path":"CC-MAIN-2014-15/segments/1397609533689.29/warc/CC-MAIN-20140416005213-00587-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Questions About Hash Tables
I am making a Hash Table with separate chaining as a project in school.
I am mainly confused about the size that I need/want the hash table to be. The general rule my teacher has given me is that a hash table should have twice as many buckets as expected to be filled,
and should never exceed 75% capacity. He also said something along the lines of "don't be greedy".
So the first question is, how do I know when I am being greedy with the capacity of my table? Secondly, we are making the table have separate binding, and thus, are ensured of a max size. Think I
should still double it?
To put a little context into it, our input is 100 items. Each item has three defining (I need to be able to find them quick) properties. The first property has 26 possibilities, the second has 5, and
the third also has 5. So, that's a table of 650 buckets. Why even have separate chaining at that point? When I double that, it seems even more ridiculous.
Topic archived. No new replies allowed.
|
{"url":"http://www.cplusplus.com/forum/general/87352/","timestamp":"2014-04-19T07:05:38Z","content_type":null,"content_length":"6519","record_id":"<urn:uuid:6ff4947b-4504-4316-bbae-024d252feb84>","cc-path":"CC-MAIN-2014-15/segments/1398223205137.4/warc/CC-MAIN-20140423032005-00280-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Volume and Tone Controls
How Volume and Tone Controls Work
I've combined the discussion of tone and volume controls because, as they are used in Strat circuits, they are so tightly coupled that it is important to understand one to understand the other.
Volume Circuits:
Whenever a current flows through a resistor a voltage potential is developed across that resistor. Further, the voltage is proportional to the amount of resistance and the amount of current. There is
a handy little rule, called Ohm's law which states that the voltage across the resistor is equal to the resistance times the current flowing through it:
E = IR
Where: E = voltage (in volts)
I = current (in amps)
R = resistance (in ohms)
In a series circuit exactly the same amount of current flows through all of the components in the circuit. Since that is the case, and since we know that voltage across a component is proportionate
to the current flowing through it and its resistance, we can skip a lot of the interim math and consider two or more resistors in series a simple voltage divider. That is exactly what we do in a
volume control, except that we use a single resistor with a sliding tap, called a potentiometer. Consider the schematic below:
Voltage Divider and Volume Control
Let's do a little simple math to see how this works (assume a signal input of 1 volt):
• When the pot is turned almost all the way down R1 might be 1k and R2 249k. The output voltage then would be Vin(1k / (1k + 249k)) or Vin(1/250) or 0.004 volts (4mV).
• When the pot is turned almost all the way up R1 might be 249k and R2 1k. The output voltage then would be Vin(249k / (249k + 1k)) or Vin(249/250) or 0.996 volts.
Since the pot wiper can be smoothly varied from one end to the other we can select 0 percent of the input, 100 percent of the input, or anything in between. Note, though, that at the physical center
of the pot we would not be taking 50% of the input signal. Because of the way the human ear responds to sound pressure (amplitude) we have to approximately double the signal strength to make a barely
audible difference. For this reason, audio volume controls use "log" or "audio" taper pots. If you built a volume control with a linear taper potentiometer it would be so sensitive that it would be
Tone Control Circuits:
First, understand that all "passive" controls, such as those used in Strats, are "cut" circuits. You are not "boosting" mid-range when you turn a tone control down -- you are "cutting" or throwing
away high frequencies!
Capacitors have a very useful property, the impedance (total resistance) of a capacitor varies with frequency. At high frequencies the impedance is low, while at low frequencies the impedance is
quite high. We combine capacitors and variable resistors (potentiometers) to make adjustable tone controls. There are several ways such controls could be wired, but we will look only at circuits as
they appear in Stratocasters. Below is a simplified schematic showing a single tone control:
Remember that the impedance of a capacitor varies with frequency? And that the impedance (resistance) of a resistor does not? In a nutshell, that is why the tone control works. As the variable
resistor is cranked up to a very high value, the difference in the capacitive impedance for high and low frequencies becomes insignificant in comparison. Without getting heavily into the math let's
use some easy numbers to see how it works. Let's say just for the sake of argument that the impedance of the capacitor is 25k ohms at 4khz, 12.5k at 8khz, and 50k at 2khz and that the pot is a 250k
ohm pot:
• When the pot is turned down there is 0k resistance (a short circuit) across it. Therefore, the impedance of the entire circuit will be 50k at 2khz and only 12.5k at 8khz. Thus, high frequencies
have a much "shorter" or "easier" path to ground than low frequencies do.
• When the pot is turned up there is 250k resistance across it. Therefore, the impedance of the entire circuit will be 300k at 2khz and 262.5k at 8khz. The impedance is still lower for a high
frequency signal -- but only by a very small ratio.
Note that this circuit would not work in a "perfect" world! If the input signal were regulated such that the voltage never varied regardless of the load impedance and the amplifier on the output had
infinite impedance and no padding -- this circuit would not vary the tone. However, magnetic pickups are not regulated and they are capable of delivering only a tiny amount of current. I'm trying to
avoid complex math in these pages so I'll oversimplify a bit and simply say that as the load impedance (represented by the tone control) goes down, the voltage that is output by the pickup goes down
too. Since the first gain stage of the amplifier is basically a voltage amplifier the final signal decreases with the voltage at the tone circuit.
In short, the tone control works because the impedance of the tone control is lower for high frequencies then it is for low frequencies -- which "pulls down" the high frequency output of the pickup.
Finally, note that the figures I used in the example above are "blue sky" figures intentionally chosen to emphasize the significance of the varying resistance of the pot. In the real world, a tone
control with that significant a difference between the resistance of the pot and the impedance of the capacitor would function more as a volume control than as a tone control!
|
{"url":"http://www.guitarnuts.com/wiring/voltonecon.php","timestamp":"2014-04-19T19:44:24Z","content_type":null,"content_length":"21764","record_id":"<urn:uuid:4872c0c6-24a0-42b1-9f10-d3084b8e9ea6>","cc-path":"CC-MAIN-2014-15/segments/1397609537376.43/warc/CC-MAIN-20140416005217-00624-ip-10-147-4-33.ec2.internal.warc.gz"}
|
limit 1/sqrt(n)
February 14th 2013, 06:32 AM #1
Junior Member
Nov 2012
United Kingdom
limit 1/sqrt(n)
We've been doing proving limits and sequences in class and have been given this as one of the practice questions.
prove lim 1/sqrt(n)=0
I've done |1/sqrt(n)-0|=1/sqrt(n)<E so N(E)=[1/E^2]+1 but that seems waaay to simple to the ones we did before, is there another way of doing this or that literally all I need to do?
Re: limit 1/sqrt(n)
These proofs must begin with: Suppose that $\epsilon>0$.
Then we know that $\epsilon^2>0$ so $\left( {\exists N \in \mathbb{N}} \right)\left[ {n \geqslant N \Rightarrow \frac{1}{n} < \epsilon ^2 } \right]$.
Can you finish?
Re: limit 1/sqrt(n)
I'm a little confused by what definition that is. In class we had the definition of convergence to be:
Let (an)n∈N be a sequence of real numbers. The sequence is called convergent to a ∈ R if for every ε > 0 there exists N =N(ε)∈N such that
|an−a|<ε for all n≥N(ε).
If (an)n∈N converges to a we call a the limit of (an)n∈N and we writelim an = a.
so we start with a[n]-a and solve for N(ε)
Is that not the definition I need to prove this limit?
Re: limit 1/sqrt(n)
there nothing wrong with your proof but follow Plato's method it is more rigorous...besides your sequence is a null sequence therefore a=0.
February 14th 2013, 07:02 AM #2
February 14th 2013, 07:48 AM #3
Junior Member
Nov 2012
United Kingdom
February 14th 2013, 10:33 AM #4
Senior Member
Feb 2013
Saudi Arabia
|
{"url":"http://mathhelpforum.com/calculus/213106-limit-1-sqrt-n.html","timestamp":"2014-04-18T08:11:05Z","content_type":null,"content_length":"39734","record_id":"<urn:uuid:0ed947cc-12c4-4a72-8798-7da67f084142>","cc-path":"CC-MAIN-2014-15/segments/1397609533121.28/warc/CC-MAIN-20140416005213-00016-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Dome Glossary
Basket Weave Dome
A modified geodesic dome design that omits some of the edges, leaving a network of lines that interlock in a pattern of triangles, pentagons, and (in the higher frequencies) hexagons. These domes
are made with long, continuous pieces of material (or edges that are joined end-to-end so they act as continuous pieces). The long pieces of material in the dome pattern can be woven over and
under each other like strips of wicker in a basket. The basket weave domes use only about half the number of edges of a regular dome, so they are well suited to large, lightweight, portable
structures such as tents. The Small PVC Dome and Large PVC Dome are examples of basket weave domes.
A structure consisting of a partial sphere (usually about half of a sphere) used to enclose space. Domes are commonly used for homes, greenhouses, or to enclose equipment such as radar.
When talking about domes, frequency refers to the number of pieces that each edge of the base figure is divided into in the process of triangulating its sides. For instance, we might start with a
base figure of an icosahedron and divide each edge of each triangular face into 3 equal lengths. Those new points are connected to divide the original triangle into 9 smaller triangles. Since the
original edges were divided into 3 parts, we call this a 3-frequency dome. The frequency is commonly abbreviated as "f", so a 2-frequency dome is called 2f, 3-frequency is 3f, and so forth.
You can find more about frequency at the Synergetics Home Page section on geodesic dome geometry. In the discussion of Domes come in classes, the figure labeled Class 1shows a triangular face
subdivided to 5-frequency. The figure to its right shows an icosahedron triangulated to 3-frequency.
A circular line that goes around the full width of a sphere (like the equator or any of the longitude lines that go all the way around the earth). Geodesic domes get their name because in most of
these domes, the edges lie on geodesic lines. Geodesics are also called "great circles" because they are the largest circles you can draw on a sphere.
Golden Proportion
Sometimes called the golden mean or the golden ratio. An irrational number (but then, who's to say what isrational in this crazy world!). It has a special symbol, the Greek letter Tau (or, in
some texts, the Greek letter Phi), and is calculated as 1 plus the square root of 5, divided by 2 (approximately equal to 1.618033989). It shows up many places in geometry and nature. For
instance, it is the ratio of one of the diagonals of a regular pentagon to the length of its edge.
(plural: icosahedra) A polyhedron consisting of twenty equilateral triangles which are grouped with 5 triangles around each vertex. Icosahedra are commonly used as a basis for geodesic dome
designs because they are fairly round to begin with.
(plural: polyhedra) Any 3-dimensional geometrical figure with many sides. Pyramids, cubes, and geodesic domes are all polyhedra. From the Greek roots poly(meaning many) and hedron(meaning side).
Small Rhombicosidodecahedron
Say thatten times, fast! A 62-sided polyhedron with twelve pentagonal (5-sided) faces, 20 triangular (3-sided) faces, and 30 square (or sometimes rectangular) faces. This is the shape used for
the connectors in the Zometool modeling kits.
In a dome context, triangulation is used to mean the process of subdividing a triangle into smaller triangles. So if the base figure (usually an icosahedron) has large triangular faces, then
triangulation involves subdividing each of the icosahedron's triangular faces into a grid of smaller triangles. (Note: triangulationis the term I've always used, if anybody knows a more accepted
term for this concept, please let me know!) See also Frequency.
To "truncate" a polyhedron means to cut off part of it, usually along some natural dividing line (like the middle of a sphere). Here it means cutting a full geodesic sphere off at some "latitude
line" to make a dome out of it. For example, cutting the sphere exactly in half at the "equator" results in a dome which is 1/2 of a sphere, which is called a "1/2 truncation."
(plural: vertices) The mathematical term for one of the corners of a polyhedron, where the edges come together. At the vertices of a geodesic dome, edges typically come together in groups of 5 or
If there's any terms you don't see here that you think need to be defined (or if the definitions themselves are unclear!), please email Walt with your suggestions. Thanks for your help!
(return to Walt's Dome Page main page)
|
{"url":"http://www.alt-eng.com/DomePage/DomeGlossary/DomeGlossary.html","timestamp":"2014-04-18T02:57:58Z","content_type":null,"content_length":"8219","record_id":"<urn:uuid:7d71ed6a-49a0-4b13-97af-2eb2086ab54a>","cc-path":"CC-MAIN-2014-15/segments/1397609532480.36/warc/CC-MAIN-20140416005212-00157-ip-10-147-4-33.ec2.internal.warc.gz"}
|
square roots and i
December 6th 2006, 02:36 PM #1
square roots and i
i have always thought $i$ was defined as $\sqrt{-1}$, where $\sqrt{}$ is the principal square root. Now I heard that you can't take the square root of negative numbers. That makes me confused.
How do you solve this equation for example:
$x^2 = -4$
Do you have to specify that the solution can be complex as well? Is it okay then to take the square root of negative numbers? (Or is it called the complex square root, and how do you know when it
is the complex square root that's intended?)
And when is a number negative? is it when the real part of it is negative? Or is negative numbers only possible for real numbers?
That is correct you cannot take square roots of negative numbers. Unless you redefine the square root function for a complex co-domain. But since I have never seen it done, and am familar with
the construction of numbers. I consider it improper (mathematically illegal) to define it for negative numbers. Go here and try to understand what I say.
$x^2 = -4$
Theorem: The equation $x^2=a$ has two solutions when $a>0$ and they are $\pm\sqrt{a}$. A unique solution for $a=0$. And non-real solutions for $a<0$ which are $\pm i\sqrt{-a}$.
Proof: Check that for $a>0,a<0$ those solutions satisfy the equation. Furthermore because the complex numbers are a field (a type of algebra) there is at most 2 solutions. Thus those need to be
it. If $a=0$ then, $xx=0$ since it has no zero divisor (no non-zero numbers that give zero) we conclude that $x=0$ is the only solution.
(NOTE: No negative square roots were used).
Mathematical definitions evolve, as we all know.
The evolution is more often from the vague to the precise.
The complex number i has a fascinating evolution.
It is fair to say the historically $i = \sqrt { - 1}$ was the normal definition.
The foundations of complex numbers have changed greatly in the last forty years.
I think the changes are a result of what we know about model theory. (We even know how to define infinitesimals on a firm foundation as a result of model theory.)
Having said that, how is the number i now defined?
Define i to be a number that solves the equation $x^2 + 1 = 0.$
In other words, add one number to the real numbers.
The complete traditional ‘complex number field’ results; everything is preserved.
We have two square roots of –4: 2i and -2i.
We do not have to allow the use of $\sqrt { - 4}:$there is no need of it!
Are you familar with that? That is one thing I really wanted to know, non-standard analysis.
Define i to be a number that solves the equation $x^2 + 1 = 0.$
But the problem is there are 2 numbers that solve this
Yes very conversant with it. I have taught graduate classes in it.
Here is a great resource: Elementary Calculus
This is a complete calculus book by Jerome Keisler that is free for downloading from Keisler’s website at the University of Wisconsin. Keisler and his students developed the ideas of non-standard
analysis (infinitesimals) into a standard calculus in the 1970’s. You can download chapters. The questions about small and big (infinitesimal and infinite) are in chapter 1 on hyperreal numbers.
You have completely missed the point!
Of course there are two numbers that solve the equation!
But we need only define one of them. Then we naturally have the other: it negative.
You are probably no physisit but you should be familar with infinitesmal arguments from physics. They can be manipulated to make errors. However, is it possible that nonstandard analysis be part
of a Calculus course of physics so they will use them properly?
So, what you are saying really is that usally when you use the principal square root, it is the real principal square root rather than the complex principal square root, since you say that $\sqrt
{n}$ is a non-negative number and negativeness or positiveness does only exist for the rational numbers. And the real principal square root does only result in real numbers while the complex
principal square root doesn't. Have I understood this right?
But what if i said that when I wrote $\sqrt{-1}$, I was refering to the complex principal square root? Would it be okay then? And if it is okay, some additional questions will arise. How do you
know if someone intends to use the common square root or the complex square root when he writes $\sqrt{}$ ? Why do we not always use the complex square root if it is not undefined for negative
At last, is there some place where I can read about the true definitions of different things in mathematics?
Last edited by TriKri; December 7th 2006 at 05:19 AM.
No, because definition are not, in general, universal. The beautiful, but also sometimes annoying, thing about mathematics is that you're perfectly allowed to define things the way you want. Of
course, some definition have proven to be more useful, logical, consistent, ... than others. For example, the definition of the natural logarithm is done in a few ways which are all frequently
used: some authors prefer one definition, other authors use another. None of those definitions is wrong but also none is the "official" one - there is no such thing.
If we define i to be the solution to the equation x² = -1, then you immediately have a second solution, namely -i since if i² = -1 by definition, then (-i²) = (-i)(-i) = i² = -1 as well. Thus, if
we define i to be a solution to x² = -1, there's still some ambiguity left because we have two solutions which aren't really distict, yet. If you introduce the complex numbers as ordered pairs of
real numbers, you can define i to be (0,1), then -i = (0,-1) and you have a clear difference. I wrote more about this here.
Also, to return to your initial question, it actually is possible to define the square root for negative (real) numbers, as you can also read in the post I referred to. We don't usually do this
for a good reason: it appears that if we use that definition, we lose some interesting properties of this function which we did have before.
Like TD! each author uses his own definitions (but that results in the end are all the same).
For example, I am sure you are familar with sine and cosine. Here are ways of defining them....
1)In high school it is just a basic geometric explanation (though not mathematically sound).
2)It is somtimes defined to be the infinite series expansaions.
3)Another way is to define them as the solutions (independent) to y''+y=0. (This is a differencial equation, if you know anything about derivatives).
4)We can define them through their inverse functions. But how do we define inverse sine and cosine? We define it through an integral.
Note, #1,#2,#3,#4 are all equivalent. Meaning they lead to the same conclusions. It is just a matter which is the easiet to deal with. I belive #2 and #3 are the best ways. #4 is just too messy.
Okay, that made some things a bit clearer. I am still wondering though how you know if it's the common square root or the complex square root that is used. Since the complex principal square root
has the same sign as the real or the "common" principal square root, there would be nothing wrong with writing $\sqrt{-4}=i\cdot2$.
In the commonly used definition for the complex square root, negative numbers aren't allowed.
Luckily, the complex square root returns the same value as the classic real square root when applied to a real number.
So when the symbol is used, you just have to know whether you are working in R or C, but the square root is well-defined either way.
December 6th 2006, 03:05 PM #2
Global Moderator
Nov 2005
New York City
December 6th 2006, 03:33 PM #3
December 6th 2006, 04:08 PM #4
Global Moderator
Nov 2005
New York City
December 6th 2006, 04:18 PM #5
December 6th 2006, 04:28 PM #6
Global Moderator
Nov 2005
New York City
December 6th 2006, 04:40 PM #7
December 6th 2006, 04:55 PM #8
December 6th 2006, 05:16 PM #9
December 7th 2006, 04:43 AM #10
Senior Member
Jan 2006
Brussels, Belgium
December 7th 2006, 06:51 AM #11
Global Moderator
Nov 2005
New York City
December 7th 2006, 12:28 PM #12
December 7th 2006, 01:25 PM #13
Senior Member
Jan 2006
Brussels, Belgium
|
{"url":"http://mathhelpforum.com/math-topics/8513-square-roots-i.html","timestamp":"2014-04-16T14:52:33Z","content_type":null,"content_length":"84512","record_id":"<urn:uuid:8be0cd4e-20ba-4b5a-8f8e-2f9a341b33eb>","cc-path":"CC-MAIN-2014-15/segments/1397609523429.20/warc/CC-MAIN-20140416005203-00585-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Future Worth Method Problem
October 9th 2008, 01:32 PM #1
Sep 2008
Future Worth Method Problem
Your uncle has almost convinced you to invest in his peach farm. It would require a $10,000 initial investment on your part. He promises you revenue (before expenses) of $1800 per year the first
year, and increasing by $100 per year thereafter. Your share of the estimated annual expenses is $500. You are planning to invest for six years. Your uncle has promised to buy out your share of
the business at that time for $12000. You have decided to set a personal MARR of 15% per year. Use the FW method to determine the profitability of this investment project. Include a cash
First of all, I made a table of the total profit every year:
Year, Revenue, Profit (Revenue-$500)
1, 1800, 1300
2, 1900, 1400
3, 2000, 1500
4, 2100, 1600
5, 2200, 1700,
6, 2300, 1800
Using this data, there is a geometric gradient of G=100, therefore A=1300. Here is my cash flow diagram:
Now when it comes to finding the actual FW, (assuming I've done everything right up to this point,) I'm not sure how to handle the Gradient. I'm not sure whether the Gradient should be positive,
or negative. Also, I made it present value first, then made a future value. This is what I've done, (making the Gradient positive). :
FW(15%) = 12000 + 1300(F/A, 15%, 6) + 100(P/G, 15%, 6)(F/P, 15%, 6) - 10000(F/P, 15%,6) =
FW(15%) = 12000 + 1300(8.7537) 100(7.937)(2.3131) - 10000(2.3131) =
2084.72 > 0, project is acceptable.
I'm fairly confident I did everything, other than the Gradient, correctly. Any and all help would be greatly appreciated!
Sorry for the bump, but is there anyone that could help?
October 12th 2008, 08:03 PM #2
Sep 2008
|
{"url":"http://mathhelpforum.com/business-math/52876-future-worth-method-problem.html","timestamp":"2014-04-16T17:06:13Z","content_type":null,"content_length":"31778","record_id":"<urn:uuid:5c76ed04-c8a8-4475-889e-6d783e225b1b>","cc-path":"CC-MAIN-2014-15/segments/1397609524259.30/warc/CC-MAIN-20140416005204-00047-ip-10-147-4-33.ec2.internal.warc.gz"}
|
For the Love of Math
Spring 2013
On a recent return flight from a math conference with nearly 7,000 attendees, I overheard the usual polite chatter amongst strangers forced into close proximity for a cross-country trek. As this
conference had only ended late the day before, there was a larger than usual number of mathematically inclined individuals around, and I heard the familiar questions posed once someone encounters a
mathematician: “What can you do in math?” and “Don’t we already know everything in math?” One can easily imagine that there is more to discover in biology, chemistry, medicine and the like as results
of studies from those fields regularly make their way into the popular media, but many have a difficult time believing the same is true in math. In fact, the online database of reviews of
mathematical books and peer-refereed articles, MathSciNet, catalogs more than 100,000 additions per year.
Researchers in some disciplines can draw on familiar or widespread terminology, history or science to illuminate some basic ideas of research, giving us a taste of their work, but leaving us knowing
that there is still great depth and detail beyond our reach. In mathematics, the language and material introduced in upper-division undergraduate math courses only hint at the basic building blocks
of research, complicating a mathematician’s ability to convey his or her work. Nonetheless, there are similarities across disciplines as the nature of mathematics research also requires invested
amounts of large blocks of time to investigate and pose questions, to sift through and learn other’s work, and to discuss ideas with fellow researchers.
The advancement of mathematical knowledge involves such things as extending or generalizing previously known theories, making previously unknown connections between known results, constructing or
characterizing objects, and answering open questions. These are accomplished by producing new mathematics through reason, logic, proof and rigor. Furthermore, a necessary component for a math
research publication is that this rigorous argument and development must be deemed significant enough by peer referees to advance the field.
While there are many exciting areas of mathematics research that are more immediately applicable to medicine, industry and government, there are just as many amazing and valuable avenues of pure
mathematics research that advance our knowledge. Moreover, there is a great deal of give and take between pure and applied mathematics, making each useful to the other. My research as a pure
mathematician is in the field of complex analysis with a specialization in geometric function theory. My motivation to understand and discover more mathematics is grounded in the search for knowledge
itself rather than being driven by an application, a perplexing thought for my parents wondering what I was doing all those years in graduate school and how I could make a living from it!
Since I study the geometric properties of mappings, I have the advantage of being able to use mathematical software and graphics programs to explore conjectures and pose new questions, a luxury not
available to many other mathematicians. However, the time comes to turn off the computer and pick up a pencil to pursue a rigorous argument. This process often requires creativity and can sometimes
be unsuccessful no matter how many examples and graphs seem to indicate a conjecture’s truthfulness. For example, in 1958, at the conclusion of a groundbreaking paper in my research niche, the two
authors posed an open question, now known as the Pólya-Schoenberg Conjecture. Though many outstanding mathematicians worked on this conjecture, making progress on special cases, it was not until
2003, through an imaginative and unexpected approach using a differential equation, that this conjecture was proved. In 2007, I published a paper on an extension of these results. One of the
hypotheses of my main theorem involves a geometric condition that I believe will be true in a more general setting. While I have yet to produce an example that fails to support my belief, the general
result remains elusive and is an ongoing work in progress. In 2008-2009, I mentored an honors student who investigated specific changes to the differential equation and the geometry of the resulting
solution graphs. The work this student conducted in her honors thesis just to understand the problem goes well beyond what our typical undergraduate majors learn, and she produced an additional
example related to this research problem in support of my more general hypothesis.
Surfaces from Parking Ramps to Chips
Recently, my work has led me into interesting connections between two areas that might seem disparate: minimal surfaces and planar harmonic mappings. Surfaces are two-dimensional objects in a
three-dimensional space. For example, the boundaries of a ball, an empty paper towel roll, or a donut are examples of the surfaces of a sphere, a cylinder and a torus, respectively. Just as a line
segment gives the minimum length amongst all curves connecting two points in a plane, loosely, a minimal surface is a surface with smallest area amongst all surfaces bounded by a given frame. In
fact, dipping a wire frame into a soap solution naturally forms a soap film that is a minimal surface. There are also examples of minimal surfaces in everyday life. A spiral parking ramp is an
example of a helicoid, and a Pringle™ potato crisp looks like a portion of Enneper’s surface.
Minimal surfaces are investigated from many different approaches in varying areas of mathematics. One way is through planar harmonic mappings, which I also study. Loosely speaking, planar harmonic
mappings are formulas with certain mathematical conditions that transform or assign a region in a plane onto another region in a plane. Through a process called “lifting,” minimal surfaces are
generated from some planar harmonic mappings where the geometry of the harmonic mapping and the surface are related. While this lifting process has been known since the 1860s, it does not always lead
to a clear-cut or usable formula for the surface; thus understanding the harmonic mappings from which the surfaces are constructed has been important to the development of minimal surface results.
Indeed, it is not always clear whether the surface produced is a portion or version of a known surface or a new one. Recently, a collaborator and I have provided additional identifications between
the mappings and surfaces. Therefore, this work also may possibly allow us to characterize previously unknown surfaces in this manner.
Overall, mathematics research is a dynamic and creative journey often with great moments of frustration and reward. There is great beauty and breadth in mathematics
Stacey Muir, Ph.D.
|
{"url":"http://www.scranton.edu/academics/ignite/issues/2013/spring/love-math.shtml","timestamp":"2014-04-21T04:54:20Z","content_type":null,"content_length":"33448","record_id":"<urn:uuid:f7e6250a-864a-42e8-9a99-f7c95b3a6ffe>","cc-path":"CC-MAIN-2014-15/segments/1398223206647.11/warc/CC-MAIN-20140423032006-00170-ip-10-147-4-33.ec2.internal.warc.gz"}
|