Dividing Mixed Fractions
"Math Salamanders Free Math Sheets"
Welcome to the Math Salamanders 5th Grade Dividing Mixed Fractions page.
Here you will find a range of free printable Fifth Grade Fractions sheets about how to divide mixed fractions, including support sheets and practice sheets which will help your child learn to divide
mixed fractions by other fractions.
How to Print or Save these sheets
• Follow the 3 easy steps below to get your worksheets printed out perfectly!
Step 1
Click on the sheet you would like.
It will open in a new browser window.
Step 2
Click on the magnifier to get your sheet full size.
Enable 'Shrink-to-fit' and set your page margins to zero.
Step 3
Go to the Print menu and select 'Print' to print a copy.
Right-click and select 'Save image as...' to save a copy.
Need further help? Use our How to Print Support Page
Math Salamanders Copyright Information.
Thank you for honoring our copyright. The Math Salamanders hope you enjoy using our collection of free printable Math Worksheets for kids.
5th Grade Math Learning
At Fifth Grade, children enjoy exploring Math with these free 5th Grade Math problems and Math games. Children will enjoy completing these 5th Grade Math Worksheets and playing these Math games
whilst learning at the same time.
During Fifth Grade, children learn about factors and prime numbers. They build on their learning of long division and can divide numbers up to 1000 by two digit numbers. They are able to multiply
decimals by whole numbers, and are able to work out powers of a number.
Children are able to add and subtract fractions, decimals and mixed numbers, and learn to multiply and divide fractions. They are able to solve multi-step problems involving whole numbers, fractions
and decimals.
These free printable Fifth Grade Math Games, and other Math Worksheets for fifth grade will help your child to achieve their Elementary Math benchmark set out by Achieve, Inc.
In the UK, 5th Grade is equivalent to Year 6.
Dividing Mixed Fractions
Here you will find support on how to divide mixed fractions, and also some practice worksheets designed to help your child master this skill.
The sheets are carefully graded so that the easiest sheets come first, and the most difficult sheet is the last one.
Before your child tackles dividing mixed fractions, they should be confident with dividing and multiplying fractions, and also converting mixed fractions to improper fractions and reducing fractions
to simplest form.
Using these sheets will help your child to:
• divide one mixed fraction by another;
• divide an integer by a mixed fraction;
• divide a mixed fraction by an integer;
• apply their understanding of simplest form.
All the Printable Fraction sheets in this section support the Elementary Math Benchmarks for Fifth Grade.
Dividing Fractions Calculator
If you want to divide fractions, you can use the calculator below.
To enter a fraction, you have to enter the numerator followed by '/' followed by the denominator. E.g. 4/5 or 23/7
To enter a mixed fraction, first type the whole number followed by space followed by the numerator followed by '/' followed by the denominator. E.g. 3 1/4 (3 and a quarter).
This calculator will work out Fraction1 ÷ Fraction2.
If you need support to find out how to divide fractions, there is more help further down this page!
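The input format described above is straightforward to parse programmatically. Here is a minimal sketch in Python using the standard library's `fractions` module (the function name is our own and is not part of the site's calculator):

```python
from fractions import Fraction

def parse_fraction(text):
    """Parse '4/5' or a mixed fraction written as '3 1/4'."""
    parts = text.split()
    if len(parts) == 2:  # mixed: whole number, then numerator/denominator
        return int(parts[0]) + Fraction(parts[1])
    return Fraction(parts[0])

print(parse_fraction("3 1/4") / parse_fraction("4/5"))  # 65/16
```

Because `Fraction` reduces automatically, the quotient comes back already in simplest form.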
How to Divide Mixed Fractions Support
Frazer says "To divide a mixed fraction by another fraction, follow these four easy steps..."
Step 1
Convert any mixed fractions into improper fractions, and write any integers (whole numbers) as fractions with a denominator of 1.
Step 2
Swap the numerator and denominator of the divisor fraction (the fraction after the ÷ sign) and change the '÷' to a '×'.
Step 3
Multiply the numerators of the fractions together, and the denominators of the fractions together. This will give you the answer.
Step 4 (Optional)
You may want to convert the fraction into its simplest form or convert it back to a mixed fraction (if it is an improper fraction).
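The four steps can be mirrored in Python with the standard library's `fractions` module, which keeps results in simplest form automatically (a sketch; the function name is our own):

```python
from fractions import Fraction

def divide_mixed(whole1, num1, den1, whole2, num2, den2):
    # Step 1: convert each mixed number to an improper fraction.
    a = Fraction(whole1 * den1 + num1, den1)
    b = Fraction(whole2 * den2 + num2, den2)
    # Steps 2 and 3: multiply by the reciprocal of the divisor.
    # Fraction reduces to simplest form automatically (Step 4).
    return a * Fraction(b.denominator, b.numerator)

# Example 1 from this page: 3 1/3 ÷ 1 5/6
print(divide_mixed(3, 1, 3, 1, 5, 6))  # 20/11, i.e. 1 9/11
```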
How to Divide Mixed Fractions Printable Sheet
Here is a printable support sheet to show you how it all works!
Example 1) 3 1/3 ÷ 1 5/6
Step 1)
3 1/3 ÷ 1 5/6 = 10/3 ÷ 11/6
Step 2)
10/3 ÷ 11/6 = 10/3 × 6/11
Step 3)
10/3 × 6/11 = 60/33
Step 4)
Answer in simplest form is 20/11, or 1 9/11.
Example 2) 4 ÷ 2 1/3
Step 1)
4 ÷ 2 1/3 = 4/1 ÷ 7/3
Step 2)
4/1 ÷ 7/3 = 4/1 × 3/7
Step 3)
4/1 × 3/7 = 12/7
Step 4)
This answer is already in simplest form: 12/7, or 1 5/7.
Dividing Mixed Fractions Worksheets
Use these sheets to practice your division of mixed fractions.
How to Divide Fractions by Fractions Worksheets
Here you will find a selection of Fraction worksheets designed to help your child understand how to divide a fraction by another fraction.
You will also find a printable resource sheet and some practice sheets which will help you understand and practice this math skill.
All the free Math sheets in this section support the Elementary Math Benchmarks for Grade 5.
Dividing Fractions by Whole Numbers
Here you will find a selection of Fraction worksheets designed to help your child understand how to divide fractions by integers or whole numbers.
Using these sheets will help your child to:
• divide a fraction by a whole number;
All the Printable Fractions worksheets in this section support the Elementary Math Benchmarks for Fifth Grade.
Converting Improper Fractions
Here you will find a selection of Fraction worksheets designed to help your child understand how to convert an improper fraction to a mixed number.
Using these sheets will help your child to:
• convert an improper fraction to a mixed number;
• convert a mixed number to an improper fraction.
All the free printable Fraction worksheets in this section support the Elementary Math Benchmarks for 4th Grade.
How to Convert to Simplest Form
Here you will find a selection of Fraction worksheets designed to help your child understand how to convert a fraction to its simplest form.
Using these sheets will help your child to:
• develop an understanding of equivalent fractions;
• know when a fraction is in its simplest form;
• convert a fraction to its simplest form.
All the free Equivalent Fractions worksheets in this section support the Elementary Math Benchmarks for 4th Grade.
Whether you are looking for a free Homeschool Math Worksheet collection, banks of useful Math resources for teaching kids, or simply wanting to improve your child's Math learning at home, there is
something here at Math-Salamanders.com for you!
The Math Salamanders hope you enjoy using these free printable Math worksheets and all our other Math games and resources.
We welcome any comments about our site on the Facebook comments box at the bottom of every page.
□ Jump in the air […] Why did you come down again?
□ I live 'ere.
explaining gravity: from “The Goon Show”
Relativity and Gravitation
My pages (generally fragmentary):
General Relativity
General Relativity is Einstein's (classical) theory of gravitation and the geometry of the universe on large scales. The beauty of the theory is that it is obtained by careful reasoning from some
very simple grounds:
Locally Inertial Frames
exist: to put it another way, there are frames of reference with respect to which, within some (possibly small) domain, the momentum of any isolated small system (small on the scale of the
domain) within the domain does not vary (or, rather, varies only to a degree small in relation to the system's size, relative to the domain's scale, and to the distance of the system from the
centre of the domain).
Space is Locally Euclidean.
For any point in the universe, there is some locally inertial frame whose domain contains a neighbourhood of that point which is locally Euclidean: which means that it can be mapped by a chart
using a portion of an actual Euclidean space to represent the neighbourhood of the point. The dimension of the Euclidean space involved is the same for all points of the universe. This leads to
the conclusion that geodesics are the trajectories along which mass and energy flow when subject to no other force than gravitational (i.e. mass-interaction).
The Speed of Light
is the same, regardless of the source of the light or the direction in which it is propagating, in all locally inertial frames. This leads to the conclusion that the natural metric of space-time,
which defines the geodesics above, is not positive-definite: all light-like geodesics have zero length (proper time) with respect to this metric. This leads to a partition of the collection of
geodesics into: space-like (imaginary proper time), forward and backward time-like (real proper time: positive if forward, negative backward) and light-like. The latter form the boundary between
time-like and space-like domains and can be subdivided into forward and backward by consideration of which time-like domain they border.
Inertial and Gravitational mass are the same thing. (The theory ends up inferring that they're also the same thing as energy.)
Inevitably, there's more to it than that but this is what I can remember off the top of my head as an aside while writing about quantum mechanics.
Reasoning from such simple premises leads ultimately to Einstein's field equations for general relativity, which relate the energy-momentum-stress tensor, T, (which describes the presence of matter)
to the Ricci tensor, R, (which describes the curvature of space-time) according to:
• κT = R −g.(trace(g\R)/2 + Λ)
wherein κ is Einstein's gravitational constant (equal to 8.π.G (times a suitable power of the speed of light), where G is Newton's gravitational constant), Λ is the Cosmological Constant, g is the
metric of space-time and g\R (pronounced “g under R” by analogy with “R over g” for R/g) is the result of contracting g's inverse on the left of R, a.k.a. ig·R if ig were g's inverse. Note that Λ
appears in such a rôle that it may be treated as though it were (half) the (negative) diagonal entry of g\R in an extra dimension.
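In conventional index notation, the same relation reads as follows (keeping this page's sign convention for Λ, which is opposite to the more common one, and taking the "suitable power of the speed of light" to be c⁴ so that T is in energy-density units):

```latex
\kappa\, T_{\mu\nu} \;=\; R_{\mu\nu} \;-\; g_{\mu\nu}\left(\tfrac{1}{2}\,R + \Lambda\right),
\qquad \kappa = \frac{8\pi G}{c^{4}},
```

where R = gᵘᵛR_{μν} is the scalar curvature, i.e. trace(g\R) in the notation above.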
Note, correspondingly, the equation connects T's components in our macroscopic dimensions to R's (small) components in these dimensions combined with a term in the metric scaled by R's trace – which
may contain large terms due to any microscopic dimensions of space-time. Consequently, analysis of macroscopic space (which only tells us the (small) portion of g\R's trace due thereto) may be
expected to give a radically different value of Λ from analyses influenced by the microscopic dimensions (e.g. quantum mechanical analyses based on the background energy of free space): the
difference is exactly the contribution to g\R's trace due to any microscopic dimensions. [This all presumes a widely-expressed view that space-time has four slightly curved dimensions and some other
dimensions, to make up a total of about 10 or about 26, which are tightly curved, so that we never notice them.]
T contains a contribution from the electromagnetic field, which I discuss elsewhere. This is quadratic in the electromagnetic field tensor, which encodes the electric and magnetic fields. In general,
it is supposed that T satisfies τ[*,0,*](DT/g) = 0, which I should check for the electromagnetic contribution.
The weak field limit
The metric, g, includes a c.dt×c.dt.exp(2.φ/c/c) term with φ being the gravitational potential, φ = −G.M/r in the Newtonian approximation with M being the mass of the system. Now, where did I derive
that in my notes…
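For small φ/c², expanding the exponential in that term gives the standard weak-field approximation (sketched here with the Newtonian potential as quoted above; sign conventions for the metric vary):

```latex
e^{2\varphi/c^{2}} \;\approx\; 1 + \frac{2\varphi}{c^{2}},
\qquad \varphi = -\frac{G M}{r},
```

so the time-time metric coefficient reduces to c²(1 + 2φ/c²)dt×dt, from which Newtonian gravity is recovered in the slow-motion limit.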
See also
Wikipedia has plenty of exact solutions to the general relativistic field equations, collected as a category.
Stray notes
Autumn 1997, minor detail … Jeremy tells me Yang-Mills takes a general Lie group (or its Lie algebra) and produces chromodynamics without the quantisation on a smooth manifold, along with an account
of the non-linearities that make up the boson-boson interactions. Yumm ;^)
Late summer 2007, detail: a technical explanation of technical explanation tells me, in passing, that the perihelion advance of Mercury predicted by Newtonian mechanics, using the best Victorian
data, was 5557 seconds of arc per century; and the measured advance was 5600 seconds of arc per century. General relativity accounts for the other 43 seconds of arc per century. I am impressed that
the Victorians had the computational power to determine their prediction, the astronomical expertise to measure the result, the confidence in the precision of each to be aware of the tiny difference
and the intellectual integrity to acknowledge that it presented a problem for the theory.
Written by Eddy.
Seminar on motivic integration
Motivic integration originated in a 1995 talk by M. Kontsevich, and since then has developed in several directions. Historically, the first theory that appeared (in the works of J. Denef and F.
Loeser dated 1996 -- 1998) was the theory of integration on arc spaces (what is nowadays called "geometric motivic integration"). This theory is designed for varieties defined over an algebraically
closed field. If the base field is not algebraically closed, motivic integration is still possible, but acquires a totally different flavour. The theory of arithmetic motivic integration was
developed by J. Denef and F. Loeser in 1999, and that's when they first introduced the machinery from logic into the construction. Arithmetic motivic integration provides a different point of view on
the classical integration over p-adic fields.
The most recent theory along these lines, due to Cluckers and Loeser, combines arithmetic motivic integration with geometric motivic integration, and takes it a step further by expanding the class of
functions that can be integrated. There is also an alternative construction by Hrushovski and Kazhdan.
It should be noted here that the values of motivic measure are not numbers but geometric objects (such as, roughly speaking, isomorphism classes of varieties, or, sometimes, Chow motives). In the
case of arithmetic motivic integration the way to get back to a classical, number-valued, measure, is roughly by counting points on the varieties over finite fields.
This seminar will be mostly focused on geometric motivic integration and its applications; we will also discuss some of the most modern unified approach, as it yields some very important results,
such as an analogue of Fubini theorem for the motivic measure.
Seminar schedule:
Currently scheduled on Wednesdays, 3:30-5pm in MATX 118. We will try to move it earlier starting next week. (please watch e-mail announcements).
• January 5: organizational meeting and overview. At this meeting, we came up with an approximate sequence of talks for the whole semester: please look at the topics and sources below, and
volunteer to talk! (Please note that some of these topics are independent and can be permuted).
Here is a tentative schedule of the first few talks:
• January 12: NO MEETING
• January 19: Guillermo Mantilla-Soler, Jet spaces and cylindrical sets; the values of the motivic measure.
• January 26: Robert Klinzmann, the motivic measure and the statement of the change of variables formula.
• February 2: Lance Robson, p-adic numbers and measures.
• February 9: Andrew Morrison, p-adic and motivic Igusa zeta-functions.
• February 16: BREAK, NO MEETING.
• February 23: Andrew Morrison, the monodromy conjecture.
• March 2: Atsushi Kanazawa, Stringy E-function and McKay correspondence.
• March 9: Julia Gordon, Quantifier elimination and rationality of Poincare series.
• March 16: Julia Gordon, cell decomposition and the "universal" theory of motivic integration.
• March 23: Andrew Staal, Mustata's work on the invariants of singularities and jet spaces.
• March 30, April 6: it would be good if someone returned to the questions related to monodromy conjecture that we left unfinished... Any other topic is fine, too. If you'd like to give a talk,
please e-mail me.
Topics and sources:
Motivic Integration in other contexts:
Lesson Plan:Maxima and Minima Problems
Lesson Title: Maxima and Minima Problems
Topic/Focus Area: A.P. Calculus (Application of Derivatives)
Ken Smith
Mathematics(Geometry, Algebra-2, AP Calculus)
Phone: E-mail: ihs03@icoe.k12.cs.us
School: Imperial High
Lesson Overview
Subject : Mathematics
Grade : Eight through Twelve
Strand : Calculus
When taught in high school, calculus should be presented with the same level of depth and rigor as are entry-level college and university calculus courses. These standards outline a complete college
curriculum in one variable calculus. Many high school programs may have insufficient time to cover all of the following content in a typical academic year. For example, some districts may treat
differential equations lightly and spend substantial time on infinite sequences and series. Others may do the opposite. Consideration of the College Board syllabi for the Calculus AB and Calculus BC
sections of the Advanced Placement Examination in Mathematics may be helpful in making curricular decisions. Calculus is a widely applied area of mathematics and involves a beautiful intrinsic
theory. Students mastering this content will be exposed to both aspects of the subject.
Students use differentiation to solve optimization (maximum-minimum problems) in a variety of pure and applied contexts.
Student Learning Objectives
• Many problems that arise in science and mathematics require finding the largest or smallest values that a differentiable function can assume on a given domain. In the previous lesson, students
were introduced to the Maxima and Minima Theory, and learned how to use derivatives to determine where functions take on minimum or maximum values. In this lesson, students will utilize the
Minima and Maxima Theory and develop a five step strategy to solve applied optimization (maxima and minima) problems.
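A classic applied problem of this kind — an open box of maximum volume — can be checked directly once the derivative's critical values are known (a sketch in Python; the 12×12 sheet size is illustrative and not taken from the lesson):

```python
# Open box from a 12x12 sheet: cut squares of side x from each corner
# and fold up the sides.  Volume V(x) = x*(12 - 2x)**2 on 0 <= x <= 6.
# V'(x) = (12 - 2x)(12 - 6x), so the critical values are x = 2 and x = 6.
def volume(x):
    return x * (12 - 2 * x) ** 2

# Final step of the strategy: evaluate V at the critical values and endpoints.
candidates = [0, 2, 6]
best = max(candidates, key=volume)
print(best, volume(best))  # 2 128
```

Here x = 2 gives the maximum volume of 128 cubic units; x = 0 and x = 6 give degenerate (zero-volume) boxes.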
1. See Attachments: Lecture Notes (Sec 3.5) Minima and Maxima Application Problems, for detailed description of this three day activity.
Content Resources (books, articles, etc.)
Course Textbook: Thomas-Finney, "Elements of Calculus and Analytic Geometry": Addison-Wesley Publishing, 1989, p.135-203.
Supplemental Text: Leithold, "The Calculus 7 of a Single Variable" : Harper Collins College Publishers, 1996, p.219-228.
Supplemental Text: Larson-Hostetler-Edwards, "Calculus of a Single Variable", Houghton Mifflin Company, 1998, p.205-214.
Supplemental Text: Larson-Hostetler, "Calculus With Alalytic Geometry", D.C.Heath and Company, 1986, p.215-223.
Supplemental Reference: REA's Problem Solvers 'Calculus' p.239-294.
Web Resources URL:
URL: (147.4.150.5)
URL: (www.math.montana.edu)
URL: (www.exambot.com)
Hardware/Software Resources (computers, CD-ROMs, TV, VCR, etc.)
Computer, large screen(36 inch) TV with converter or Projector, Ti-83 Graphing Calculator w/o'head viewscreen, Chalkboard, Ruler, Vernier Caliper, String, Clear Plastic Sphere(must be able to
separate into two halves), Scissors, Sand, Balance Beam, Scanner/Copier, Digital Camera.
Miscellaneous Software: Power Point, Word, Math Type, Photo Delux, Ti-GraphLink, Geometer's SketchPad.
File Attachments
Additional Comments (Lecture 3.5) Maxima and Minima Application Problems
(21.5 KB)
Optimizing an open box (power point demo presentation)
(651 KB)
Maximum volume of cone in sphere (power point lab)
(1004 KB)
Students will be assessed on mastery of this topic via a series of evaluations as follows:
(1) Textbook Homework Assignment (p. 199, problems 1, 2, 11, 12, 22, and 24).
(2) Lab Construction: Maximum Volume of a Cone inside a Sphere.
(3) (Optional) Development of Power Point presentation of Lab Construction project. ((See sample evaluation rubric in attachments.)) ((See Sample Student ppt Presentations in attachments.))
(4) Graphing Calculator determination of a maximum value, derived from graphs of the Primary Function and Derivative, using the calculator's maximum and zero menus.
(5) Solution of Optimization Application Problems located at teacher-assigned Internet sites.
(6) Formal topic test covering Chapters 3.1 through 3.5. (See copy of AP Calculus Test (Ch 3.1 thru 3.5) in attachments.)
(7) (Optional) Individual student-generated Optimization Application Problem (maximum volume of an inscribed shape), using a Power Point presentation and physical construction of the calculated maximum dimensions/shape. (Note: This makes a good weekend follow-up assignment.) ((See sample student ppt presentations in attachments.))
(8) Student competition. The class splits into groups to judge student Power Point presentations, based on a specific, teacher-designed judging rubric.
Additional Comments
See Additional Comments: (Lecture 3.5) Maxima and Minima Application Problems, in the attachments.
Posts by
Total # Posts: 70
Chads buys peanuts in 2 pound bags. How many 2 pound bags of peanut should chad buy so that he can fill the 5/6 pound bags without having any peanuts left over
What is 18/32ths of a dollar.
4/5 of an hour
mr. smith paints 5 wooden chairs in 4 hours. if each chair takes the same amount of time to paint, what fraction of an hour does it take mr. smith to paint one chair?
1 4/6
3/5 meters of fabric
Lisa makes 6 identical flags from 10 meters of fabric. How many meters of fabric does she use for each flag?
A group of 8 office workers order 12 packs of sushi form the local Japanese restaurant for lunch. If the packs of sushi are shared equally, what fraction of a pack will each office worker get for
Jessica and her 7 friends share 3 liters of apple juice equally. How much juice does each friend get?
A preimage includes a line segment of length x and slope m. If the preimage is dilated by a scale factor of n, what are the length and slope of the corresponding line segment in the image
At what temperature does 150 mL of n2 at 300 K and 1.13 atm occupy a volume of 550 mL at a pressure of 1.89 atm?
How is a drop of water an element? PLEASE HELP!!!
what in nature could you find a source of the same energy as that provided by batteries?
6 thousand 38 hundreds =
5 3/100 into a decimal simplest form
old forge math
write an inequality to show the following. 4.90 was spent. there is still over 60$
how does tea flavor spread from a tea bag throughout a cup of hot water? PLEASE HELP!!!
Heat Transfer
I do not know
College chemistry
C3H8 + 5O2 → 3[CO2](g) + 4[H2O](l)
thank you so much
:) last one: She is successful in mimicking others' voices. the phrase would be mimicking others and would be Object of the Preposition?
thank you and i have one more the sentence is: Her favorite pastime is entertaining friends. the phase would be is entertaining friends and would be a Predicate Nominative?
thank you and my question is on this sentence:He must like studying calculus. the gerund phase would be studying calculus and would be the direct object?
i need help on some help on gerund phrase and the noun function of it
a chemist discovered an ore and analyzed its composition to contain 2% iron, 5% phosphor, and 93% sodium. What is the correct formula?
a chemist discovered an ore and analyzed its composition to contain 2% iron, 5% phosphor, and 93% sodium. What is the correct formula?
4. Brick A is hurled vertically upward from a bridge at an initial speed of 4.90 m/s. One second later, Brick B is thrown horizontally from the same bridge with an initial speed of 9.80 m/s. Which
hits the water first?
4. Brick A is hurled vertically upward from a bridge at an initial speed of 4.90 m/s. One second later, Brick B is thrown horizontally from the same bridge with an initial speed of 9.80 m/s. Which
hits the water first?
Calculate %tage by mass of all component element NaNo3 given,(Na=23,N=14,o=16)
oops posted it twice XD
4 future reference, try posting the names of the four memoirs, and make the question more natural. it sounds like it is coming right off the paper. and for anyone that cares, the name of the four
memoirs are: "Cub Pilot" "No Gumption" an excerpt from "...
I need to find the 5 kingdoms of life animals chart? PLEASE HELP:)
Graph for Y=(x+2)2-3
solve Y=(x+2)2-3
business math
gordon rosel went to his bank to find out how long it will take for $2,300 to amount to $2, 860 simple interest.
business math
lane french had a bad credit rating and went to a local cash center. he took out a $119 loan payable in five weeks at $129. what is the percent of interest paid on this loan?
business math
margie pagano is buying a car. her june payment monthly interest at 13.2% was $208. what was margie principal balance at the beginning of june
business math
abe wolf brought a new kitchen set at sears. abe paid off the loan after 60 days with as interest of $9. if sears charges 8% interest. What dis abe pay for the kitchen set (assume 360 days)
I dont really know.. If I wasted your time. I just wanted to say Hi to the world! :D
When energy changes forms, the total amount of energy is conserved. However, the amount of useful energy is almost always less than the total amount of energy. Explain the energy conversions and
unwanted energy that might be produced by the hinges of a squeaky door.
67% and 375%
What is .67 and .375 equal in percent?
what is .09 in percentage?
how much is .5 in percent?
3rd grade
Can you divide 14 shirts into 2 equal groups?Why or why not?
does a baby squirrel have a name (cub,joey,etc.)?
what is the family name for the male,female,and young of the squirrel?
is a squirrel a vertabrate or invertabrate?
what are some things about wood spiders?
4th grade math
my anwser is 1/3 because it has been real tough for my guess would be 1/3 and also wheni was in school i didnt now what to say but i just did my best guess so thats why i need help thank you for
helping me
4th grade math
what is the equivalent fraction of 8/9
write a equivalent fraction of 3/9 and multiply or divide the numerator or denominator by the same number
Write an equivalent fraction for 3/9 and multiply or divide the numerator and the denominator by the same number
but I don't wont to do a experiment.
if I wanted to test how slope effected the amount of soil deposited by erosion,what are four steps to tests this?
what does slope do to the process of erosion?
4th grade
how do particles react for solids liquids and gasses?
science and social studies
Is there more to Neil Armstrong's speech on the moon than One small step for man One giant leap for mankind?
science and social studies
Is Neil Armstrong's on the moon speech really only one line? One small step for man one giant step for mankind?
Do you like French?
how to do what's my rule?
7th grade science hypothesis question
i tried the salt in boiling water it dosent work it just gave taste to the heated water
5th Grade Science
5th grade science
How do astronomers learn about space?
life science
What would happen if cytokinesis occurred before mitosis? One daughter cell would have a nucleus and the other would not.
About Math Club Mystery
Library Home || Primary || Math Fundamentals || Pre-Algebra || Algebra || Geometry || Discrete Math || Trig/Calc
Author: Steve Risberg
Description: Algebra, difficulty level 2. Given information about the total
number of people, ticket prices, and money spent, determine how
many students, teachers, and parents went on the Math Club field
Please Note: Use of the following materials requires membership. Please see the
Problem of the Week membership page for more information.
Problem page: /library/go.html?destination=4036
Solution page: Problem #4036
© 1994-2012 Drexel University. All rights reserved. http://mathforum.org/ The Math Forum is a research and educational enterprise of the Drexel University School of Education.
Talks and Stuff
Many of the works here have benefitted from the support of various NSF grants
Here is a talk, Knotted foams, their isotopy moves, and invariants presented at the Lloyd Roeling Conference University of Louisiana, Lafayette, Nov. 8, 2013.
Here is a talk, Examples of Non-orientable surfaces in 4-dimensional space presented at the Department of Mathematics at Kyungpook National University.
Here is a talk, Reidemeister/Roseman moves for knotted foams that I gave at Knots in Washington XXXV, Dec. 2012 .
I gave a similar talk, Reidemeister/Roseman moves for knotted foams at a TQFT workshop in Lisbon, Portugal, Sept. 2012.
After some strange things happened, I wrote this essay about some childhood experiences and the 4th dimension.
Here are two versions of a manuscript that will be posted at the arXiv soon. The smaller version has the same information that the larger version does. But the quality of the
figures is not as good. My plan for this page, or the math art page, is to have pdf versions of these figures readily available for those who wish to use such figures in their
own work. As of this writing, I don't have the time to sort through this, but stay tuned.
Here are the pdfs from my recent sabbatical to Kyungpook National University where I visited the Department of Mathematics from December 25, 2011 through August 20, 2012.
Not all items are available here yet. Updates to appear shortly.
Print version.
J. Scott Carter
Professor of Mathematics
Department of Mathematics and Statistics
ILB 325
University of South Alabama
Mobile, AL 36688-0002
(334) 460-6264 / 460-7969 FAX
click here for e-mail.
Back to Scott Carter's home page
The views and opinions expressed in these web page(s) are strictly those of the author. The contents of these page(s) have not been reviewed or approved by the University of South Alabama. I am not responsible for content in linked material. Come to think of it, I am not responsible for much!
Homework Help
Posted by Paul--please help!! on Saturday, December 29, 2012 at 10:10pm.
I have posted this question earlier and had the answer given to me this way. But my teacher needs to know what type of factorization I used and I have tried to figure it out but have NO clue!!
This is the question.
Solve the following inequality. write your answer in interval notation.
x^3+9x^2-108 less than or equal to 0
My answer:
we have double zeros at x= -6
graph comes from way low and bounces back down off the x-axis at x= -6
• calculus - Paul, Saturday, December 29, 2012 at 10:16pm
Here is the rest of the answer:
then dropping down negative again
then it comes back up again and goes positive and crosses the x-axis at x=3
and from then on is positive
**Now she says note that the last term does not contain an x. What type of method did I use in this posting. Explain this method. Can someone help me please
• calculus - Steve, Saturday, December 29, 2012 at 11:21pm
Not sure what you're going on about.
What do you mean "type of factorization"?
I used synthetic division to come up with the roots. If you have no clue, then you need to review finding roots of a function.
The roots are where the graph touches or crosses the x-axis.
The question is:
solve f(x) <= 0
The answer is:
f(x) <= 0 when x <= 3.
In interval notation, x is in (-oo,3]
Entanglement and Bell's theorem. Is the non-locality real?
Sylvia Else
The results of measurements of phase entangled particles together with Bell's theorem provide pretty convincing evidence that the Universe contains non-local interactions.
Depends on what you mean by nonlocal interactions. If you mean instantaneous action at a distance, then that is physically meaningless. If you mean faster than light propagations, then that has not
been demonstrated. It is true that, so far, only nonlocal hidden variable models of quantum entanglement are unquestionably viable. But that doesn't mean that they're a true description of reality.
There are local hidden variable models of quantum entanglement that are open to question and interpretation. Whether nature is local or nonlocal is still an open question. All that's known for sure
is that Bell-type models of quantum entanglement are ruled out, both mathematically and experimentally. Whether or not there might be another class of models that might be considered local realistic
remains an open question.
The bottom line is that it cannot be definitively said, from Bell tests, that nature is nonlocal. Not because of experimental loopholes, but because the Bell formulation of local hidden variable
models might not be general.
Sylvia Else
Let's imagine the usual idealised experimental scenario, where there is an emitter of particles in a twin state and two measuring devices on opposite sides of the system performing measurements in a
space-like separated way. The measurements on one side of the system are not interesting in themselves. They are just random. They only become interesting when they are compared with the measurements
from the other side, with a correlation being observed. We know that when performed appropriately, this will show that the measurement results are correlated in a way that, by Bell's theorem, cannot
be explained by any local interaction - the measurements appear to be non-locally linked.
We only know that these experimental results can't be explained by Bell's formulation of a local hidden variable supplement to QM. We don't know that this is general. We don't know that there might
not be other ways of formulating viable local hidden variable supplements to QM. It might seem to you that Bell has covered all the bases. But has he? Those who say that Bell's formulation is general
posit, from experimental violations of Bell inequalities, that nature is nonlocal. Yet, there's no physical evidence for this. The fact of the matter is that it's currently an interpretational,
philosophical issue that experimental results can't definitively provide the answer to.
I've omitted the rest of your post because I think it's irrelevant. Even if all experimental loopholes in Bell tests are eventually closed, then what does that mean? It means that Bell-type
formulations of quantum entanglement are not viable. And whether or not nature is nonlocal remains an open question.
Why Bell-type formulations of quantum entanglement preparations are nonviable has been the subject of numerous publications.
Here's another thing to consider. Suppose that it's found that absolutely no local model of quantum entanglement can be formulated. Does that mean that quantum entanglement is nonlocal? No, it
doesn't. This is because the correspondence of theory to reality is, and will always be, essentially unknown.
Interview Questions
About Analogy
1. Auger : carpenter :: awl : cobbler
2. Ode : song :: chant : (can't remember)
3. Alarm : trigger :: trap : spring
4. Scales : justice :: torch : liberty
5. Witch : coven :: actor : troupe
I can't recall more questions from this, but go through all the analogy practice exercises in the GRE 13th edition.
Some data sufficiency questions:
1. A two-digit number is given. (a) By adding we get 5. (b) By subtracting we get 2. Ans: both (a) and (b) required.
2. Given a quadrilateral ABCD, determine whether it is a rectangle. (a) AB = CD. (b) Angle B = 90 degrees. Ans: both required.
3. Determine the number of rolls of wallpaper needed, given 16 feet width and 12 feet length. (a) The area covered is 20 feet. (b) The room has no windows. (Don't know.)
4. A bookshelf has some books; find the number of books it has. (a) If 2 books are removed, it gives a total of 12. (b) If 4 books are added, it gives a total of 17.
Some similar questions are given on geometry. There are 20 questions in total, to be completed in 10 minutes.
A comprehension passage is given, but I can't remember it exactly. Roughly: water resources have become insufficient, due to over-erosion, over-use for irrigation, or the water being occupied by waste materials. The passage is about that; I think you get the idea, so read the questions first so you can find the answers quickly. Each question carries 5 marks, with negative marking of 2.5 marks.
Arithmetic questions (answer from the last one onwards; the last questions are very easy):
1. 2m + n = 10, n = 5; what is m?
2. If x, y are positive and x/y < 1, then ...
Some questions like that are given; you can do them, but start from the last question, not the first. The first ones are very tough and are nothing but profit and loss, average, percentage and so on.
In analytical reasoning two passages were given; I can provide only one.
1. "To obtain a government post in the republic of Malbar you must ...." You can find the paragraph on page 391 of the Analytical Ability section of the GRE Barron's book, 13th edition. The answers for this are 1. (c), 2. (e), 3. (d), 4. (a). Please verify against that book thoroughly; no need to look elsewhere. Some sentences in that paragraph run like this: "...ruling party or a personal associate of President Zamir...", "...party members seeking government posts must either give a substantial donation of gold bullion...", and it goes on like that.
2. A project consolidating a large university and a small college is set up. It is agreed that the representatives work in small committees of three, with two representatives from the large university. It was also agreed that no committee be represented by faculty members of the same subject area. The large university was represented by the following professors: J, who teaches English; K, who teaches maths; and L, who teaches natural science. The small college appointed M, who teaches maths; N, who teaches Latin; and O and P, who teach English.
1. Which of the following represents a committee? Ans: K, L, N
2. Which two can serve with P? Ans: K and L
3. Which must be true? (a) If J serves on a committee, P must be assigned to that committee. (b) If J does not serve on a committee, then M cannot be assigned to that committee. (c) If J serves on a committee, then L must serve on that committee. Ans: (b) and (c)
4. If L is not available for service, who must be on the committee? Ans: N and O
5. Which must be true? (a) N and O always serve on the same committee. (b) M and O never serve on the same committee. (c) When M serves, L must serve. Ans: (b) and (c)
In logical reasoning we were given:
1. "In 1798 Thomas Malthus published 'An Essay on the Principle of Population', in which he postulated that the food supply can never keep pace..."
Q1. Which of the following statements, if true, would tend to weaken Malthus's argument?
1. The total human population has risen at a rapid rate because of the removal of natural checks on population.
2. In many nations the increase in human population has outstripped...
You can find this in one of the model papers given in GRE Barron's 13th edition, or any GRE book. Two questions are given on this passage; the answer to the first is mostly (c), I think, and the answer to the second is (a), i.e. wars. Verify this from Barron's too.
2. "If Elaine is on the steering committee then she is on the central committee." This statement can be logically deduced from which of the following statements? Ans: Everyone who is on the steering committee is also on the central committee.
3. "Frank must be a football player. He is wearing a football jersey." Ans: Only football players wear jerseys.
The question paper pattern is as follows. There are 3 sections.
1) Section 1 contains analogy, comparisons and reading comprehension. The analogy questions are not from the GRE book. Data comparisons are very easy but the time is very short; the time for the first section is 15 minutes. The number of questions from analogy and comparison is 20 each, plus 5 questions on comprehension.
2) Section 2 contains arithmetic. The number of questions is 20 and the time is 5 minutes. These are also very easy. Each question carries 4 marks.
1) Number of different permutations of the word "COFFEE". Ans: 180
2) One question on the "exterior angle = sum of the two opposite interior angles" property.
These are also very easy. Do the last 8 and the first 8 problems.
3) Section 3 contains analytical and logical reasoning and data sufficiency. The time is 25 minutes. There are two analytical passages; each question carries 4 marks. Logical reasoning carries 6 marks, and data sufficiency carries 4 marks.
Logical reasoning (Barron's 13th edition):
pg 28, question 7: My father, my three uncles.....
pg 31, question 25: I am afraid that Roger.......
pg 35, question 25: Television convinces.......
pg 434, question 7: Wilbur is over six feet tall...
pg 437, question 5: If Elaine is on the ........
: Today's high school students............
Analytical reasoning (two passages, but only one is given):
pg 34: Mathematics 11 is a prerequisite......
For the second one I will give some information: it's about films, i.e. movies, and the order in which they are shown; the films are from different countries. Search for it.
Data sufficiency questions are easy. After this test they will filter and allow [candidates] into the technical test.
The technical test contains DBMS, OS, S.E., C, and OOP. These are 45 bits to be answered in 30 min. There is a special C test: it contains 15 bits and the time is 10 min. Read Thimoti J Williams for S.E. and O.S.
1) If we use a front-end processor for I/O operations, then the load will be reduced on: Ans: CPU
2) One question on DMA. Ans: DMA
3) In a list of 4 numbers, the maximum number of comparisons to find the maximum and the immediate (second) maximum number.
4) Configuration management does not consider: Ans: hardware devices.
5) The most important factor in coding: Ans: readability.
6) Which of the following testing methods is used as an acceptance test? Ans: functional testing
7) If the number of conditions in a decision table is n, the maximum number of rules is: Ans: 2^n
8) What is meant by swapping?
9) For the tree A (left B, right D), D (left E, right F):
if (node != null) { write(node); traverse(right subtree); write(node); traverse(left subtree) }
10) A question on functional dependencies: which of the following is not an FD?
11) If T(n) = 2T(n/2) + 1, T(1) = 1, and n is a power of 2, then T(n) = ?
Ans: 2n-1
12) If we want the data members of a class to be accessible to the immediately derived class, which access specifier is used? Ans: protected
13) Two questions on queries (SQL).
The technical test is easy. You must attempt data comparisons first, then arithmetic, then data sufficiency.
14) Windows NT is:
1) an extension to Windows 95
2) a multiprocessing system
3) provides a GUI
4) none of the above
C Questions (read "Exploring C"; bitwise operators, precedence):
1) main()
   {
   int x= ,y=5,p,q;
   p=x>9;
   q=x>3&&y!=3;
   printf("p=%d q=%d",p,q);
   } Ans: 1,1
2) main()
   {
   int x= ,y=6,z;
   z=x==5||y!=4;
   printf("z=%d",z);
   } Ans: 1
3) main()
   {
   int c=0,d=5,e= ,a;
   a=c>1?d>1||e>1?100:200:300;
   printf("a=%d",a);
   } Ans: 300
4) main()
   {
   int i=-5,j=-2;
   junk(i,&j);
   printf("i=%d,j=%d",i,j);
   }
   junk(i,j)
   int i,*j
   {
   i=i*i;
   *j=*j**j;
   } Ans: -5,4
5) #define NO
   #define YES
   main()
   {
   int i=5,j;
   if(i>5)
   j=YES;
   else
   j=NO;
   printf("%d",j);
   } Ans: error message
6) main()
   {
   int a=0xff;
   if(a<<4>>12)
   printf("leftist");
   else
   printf("rightist");
   } Ans: rightist
7) main()
   {
   int i=+1;
   while(~i)
   printf("vicious circles");
   } Ans: continuous loop
8) One question on assigning two different structures, i.e. structure1 variable1 = structure1 variable2
CMC Ltd.
CMC Ltd Test Paper
ANALYTICAL REASONING SECTION
Directions for questions 1-5: The questions are based on the information given below
There are six steps that lead from the first to the second floor. No two people can be on the same step.
Mr. A is two steps below Mr. C
Mr. B is on a step next to Mr. D
Only one step is vacant (no one standing on that step)
Denote the first step by step 1 and the second step by step 2, etc.
1. If Mr. A is on the first step, which of the following is true? (a) Mr. B is on the second step (b) Mr. C is on the fourth step. (c) A person Mr. E could be on the third step (d) Mr. D is on a higher step than Mr. C. Ans: (d)
2. If Mr. E was on the third step & Mr. B was on a higher step than Mr. E, which step must be vacant? (a) step 1 (b) step 2 (c) step 4 (d) step 5 (e) step 6 Ans: (a)
3. If Mr. B was on step 1, which step could A be on? (a) 2 & 5 only (b) 3 & 5 only (c) 3 & 4 only
(d) 4 & 5 only (e) 2 & 4 only Ans: (c)
4. If there were two steps between the step that A was standing on and the step that B was standing on, and A was on a higher step than D, A must be on step (a) 2 (b) 3 (c) 4 (d) 5 (e) 6 Ans: (c)
5. Which of the following is false?
i. B & D can both be on odd-numbered steps in one configuration
ii. In a particular configuration A and C must be either both on odd-numbered steps or both on even-numbered steps
iii. A person E can be on a step next to the vacant step.
(a) i only (b) ii only (c) iii only (d) both i and iii Ans: (c)
Directions for questions 6-9: The questions are based on the information given below
Six swimmers A, B, C, D, E, F compete in a race. The outcome is as follows.
i. B does not win.
ii. Only two swimmers separate E & D
iii. A is behind D & E
iv. B is ahead of E, with one swimmer intervening
v. F is ahead of D
6. Who stood fifth in the race? (a) A (b) B (c) C
(d) D (e) E Ans: (e)
7. How many swimmers separate A and F? (a) 1 (b) 2 (c) 3 (d) 4 (e) cannot be determined Ans: (d)
8. The swimmer between C & E is (a) none (b) F (c) D (d) B (e) A Ans: (a)
9. If at the end of the race swimmer D is disqualified by the judges, then swimmer B finishes in which place? (a) 1 (b) 2 (c) 3 (d) 4 (e) 5 Ans: (b)
Directions for questions 10-14: The questions are based on the information given below
Five houses lettered A, B, C, D, & E are built in a row next to each other. The houses are lined up in the order A, B, C, D, & E. Each of the five houses has a colored chimney. The roof and chimney of each house must be painted as follows.
i. The roof must be painted either green, red, or yellow.
ii. The chimney must be painted either white, black, or red.
iii. No house may have the same color chimney as the color of its roof.
iv. No house may use any of the same colors that an adjacent house uses.
v. House E has a green roof.
vi. House B has a red roof and a black chimney
10. Which of the following is true? (a) At least two houses have black chimneys. (b) At least two houses have red roofs. (c) At least two houses have white chimneys (d) At least two houses have green roofs (e) At least two houses have yellow roofs Ans: (c)
11. Which must be false? (a) House A has a yellow roof (b) Houses A & C have different color chimneys (c) House D has a black chimney (d) House E has a white chimney (e) Houses B & D have the same color roof. Ans: (b)
12. If house C has a yellow roof, which must be true? (a) House E has a white chimney (b) House E has a black chimney (c) House E has a red chimney (d) House D has a red chimney (e) House C has a black chimney Ans: (a)
13. Which possible combinations of roof & chimney can a house have? I. A red roof & a black chimney II. A yellow roof & a red chimney III. A yellow roof & a black chimney (a) I only (b) II only (c) III only (d) I & II only (e) I & II & III Ans: (e)
14. What is the maximum total number of green roofs for the houses? (a) 1 (b) 2 (c) 3 (d) 4 (e) 5 Ans: (c)
NOTE: Questions 15-27 are multiple choice in the paper.
15. There are 5 red shoes, 4 green shoes. If one draws a shoe at random, what is the probability of getting a red shoe? Ans: 5c1/9c1 = 5/9
16. What is the selling price of a car? If the cost of the car is Rs.60 and a profit of 10% over selling price is earned Ans: Rs 66/-
17. 1/3 of the girls and 1/2 of the boys go to the canteen. What fraction of the total number of classmates goes to the canteen? Ans: Cannot be determined.
18. The price of a product is reduced by 30%. By what percentage should it be increased to restore the original price? Ans: 42.857%
19. There is a square of side 6 cm. A circle is inscribed inside the square. Find the ratio of the area of the circle to the square. Ans: 11/14 (i.e. pi/4 with pi = 22/7)
20. There are two candles of equal length and of different thickness. The thicker one lasts six hours; the thinner one, 2 hours less than the thicker one. Ramesh lights the two candles at the same time. When he went to bed he saw the thicker one was twice the length of the thinner one. How long ago did Ramesh light the two candles? Ans: 3 hours.
21. If M/N = 6/5, then 3M+2N = ? Ans: cannot be determined.
22. If p/q = 5/4, then 2p+q = ? Ans: cannot be determined.
23. If PQRST is a parallelogram, what is the ratio of triangle PQS to parallelogram PQRST? Ans: 1:2
24. The cost of an item is Rs 12.60. If the profit is 10% over the selling price, what is the selling price? Ans: Rs 14/-
25. There are 6 red shoes & 4 green shoes. If two red shoes are drawn, what is the probability of getting red shoes? Ans: 6c2/10c2
26. To 15 lts of water containing 20% alcohol, we add 5 lts of pure water. What is the % alcohol? Ans: 15%
27. A worker is paid Rs.20/- for a full day's work. He works 1, 1/3, 2/3, 1/8, 3/4 days in a week. What is the total amount paid to that worker? Ans: 57.50
28. If the value of x lies between 0 & 1, which of the following is the largest? (a) x (b) x^2 (c) -x (d) 1/x Ans: (d)
DATA SUFFICIENCY SECTION
Directions: For questions in this section mark (a) if condition (i) alone is sufficient, (b) if condition (ii) alone is sufficient, (c) if both conditions together are sufficient, (d) if condition (i) alone & (ii) alone are each sufficient, (e) if the information is not sufficient.
1. A man 6 feet tall is standing near a light on the top of a pole. What is the length of the shadow cast by the man? (i) The pole is 18 feet high (ii) The man is 12 feet from the pole Ans: (c)
2. Two pipes A and B empty into a reservoir; pipe A can fill the reservoir in 30 minutes by itself. How long will it take for pipe A and pipe B together to fill up the reservoir? (i) By itself, pipe B can fill up the reservoir in 20 minutes (ii) Pipe B has a larger cross-sectional area than pipe A Ans: (a)
3. K is an integer. Is K divisible by 12? (i) K is divisible by 4 (ii) K is divisible by 3 Ans: (c)
4. What is the distance from A to B (i) A is 15 miles from C (2) C is 25 miles from B Ans: (e) 5. Was Melissa Brown's novel published? (i). If Melissa Brown's novel was published she would receive atleast $1000 in royalities during 1978 (ii). Melissa Brown's income for 1978 was over $1000 Ans: (e) 6. Does every bird fly? (i) Tigers do not fly. (ii) Ostriches do not fly Ans: (b)
7. How much does John weigh? Jim weighs 200 pounds. (i) Tom's weight plus Moe's weight equals John's weight. (ii) John's weight plus Moe's weight equals twice Tom's weight. Ans: (c)
8. Is the figure ABCD a rectangle? (i) angle ABC = 90 (degrees) (ii) AB = CD. Ans: (c)
9. Find x+2y (i). x+y=10 (ii). 2x+4y=20 Ans: (b)
10. Is angle BAC a right angle? (i) AB = 2BC (ii) BC = 1.5AC Ans: (e)
11. Is x greater than y? (i) x = 2k (ii) k = 2y Ans: (e)
12. A piece of string 6 feet long is cut into three smaller pieces. How long is the longest of the three pieces? (i) Two pieces are the same length. (ii) One piece is 3 feet 2 inches long Ans: (b)
13. How many rolls of wallpaper are necessary to cover the walls of a room whose floor and ceiling are rectangles 12 feet wide and 15 feet long? (i) A roll of paper covers 20 sq feet (ii) There are no windows in the walls Ans: (e)
14. x and y are integers that are both less than 10. Is x > y? (i) x is a multiple of 3 (ii) y is a multiple of 2 Ans: (e)
15. Fifty students have signed up for at least one of the courses GERMAN & ENGLISH. How many of the 50 students are taking GERMAN but not ENGLISH? (i) 16 students are taking GERMAN & ENGLISH (ii) The number of students taking ENGLISH but not GERMAN is the same as the number of students taking GERMAN
Ans: (c)
16. Is ABCD a square? (A figure of quadrilateral ABCD is given, with an angle x marked.) (i) AD = AB (ii) x = 90 (degrees) Ans: (e)
17. How much cardboard will it take to make a rectangular box with a lid whose base has length 7 inches? (i) The width of the box is 5 inches (ii) The height of the box will be 4 inches Ans: (c)
18. Did ABC company make a profit in 1980? (i) ABC company made a profit in 1979. (ii) ABC company made a profit in 1981. Ans: (e)
19. How much is Jane's salary? (i) Jane's salary is 70% of John's salary (ii) John's salary is 50% of Mary's salary Ans: (e)
20. Is x > 1? (i) x + y = 2 (ii) y < 0 Ans: (c)
21. How many of the numbers x and y are positive? Both x and y are less than 20. (i) x is less than 5 (ii) x + y = 24
Ans: (b)
22. Is angle ACB a right angle? (i) AC = CB (ii) AC^2 + CB^2 = AB^2 Ans: (b)
23. How far is it from town A to town B? Town C is 12 miles east of town A. (i) Town C is south of town B (ii) It is 9 miles from town B to town C Ans: (c)
24. A rectangular field is 40 yards long. Find the area of the field. (i) A fence around the boundary of the field is 140 yards long (ii) The field is more than 20 yards wide Ans: (a)
25. An industrial plant produces bottles. In 1961 the number of bottles produced by the plant was twice the number produced in 1960. How many bottles were produced altogether in the years 1960, 61, & 62? (i) In 1962 the number of bottles produced was 3 times the number produced in 1960 (ii) In 1963 the number of bottles produced was one half the total produced in the years 1960, 1961, 1962. Ans: (e)
26. Is xy > 1 if x & y are both positive? (i) x is less than 1 (ii) y is greater than 1 Ans: (e)
27. Is it a rhombus? (i) All four sides are equal (ii) Total internal angle is 360 Ans: (e)
28. How many books are in the bookshelf? (i) The bookshelf is 12 feet long (ii) The average weight of each book is 1.2 pounds Ans: (e)
29. What is the area of the circle? (i) Radius r is given (ii) Perimeter is 3 times the area Ans: (a)
ARITHMETIC SECTION
1. The total distance of a journey is 120 km. If one goes at 60 kmph and comes back at 40 kmph, what is the average speed during the journey? Ans: 48 kmph
2. A school has 30% students from Maharashtra. Out of these, 20% are Bombay students. Find the total percentage of Bombay students. Ans: 6%
3. An equilateral triangle of sides 3 inches each is given. How many equilateral triangles of side 1 inch can be formed from it? Ans: 9
4. If A/B = 3/5, then 15A = ? Ans: 9B
5. Each side of a rectangle is increased by 100%. By what percentage does the area increase? Ans: 300%
6. Perimeter of the back wheel = 9 feet, front wheel = 7 feet. Over a certain distance, the front wheel gets 10 revolutions more than the back wheel. What is the distance? Ans: 315 feet
7. Perimeter of front wheel = 30, back wheel = 20. If the front wheel revolves 240 times, how many revolutions will the back wheel take? Ans: 360 times
8. 20% of a 6 litre solution and 60% of a 4 litre solution are mixed. What percentage solution is the mixture? Ans: 36%
9. City A's population is 68000, decreasing at a rate of 80 people per year. City B, having population 42000, is increasing at a rate of 120 people per year. In how many years will both cities have the same population? Ans: 130 years
10. Two cars are 150 kms apart. One is travelling at a speed of 50 kmph and the other at 40 kmph. How much time will it take for the two cars to meet? Ans: 150/(50+40) = 5/3 hours
11. A person wants to buy 3 paise and 5 paise stamps costing exactly one rupee. If he buys which of the following number of stamps will he not be able to buy 3 paise stamps? Ans: 9
12. There are 12 boys and 15 girls. How many different dancing groups can be formed with 2 boys and 3 girls? Ans: 12c2 x 15c3
13. Which of the following fractions is less than 1/3? (a) 22/62 (b) 15/46 (c) 2/3 (d) 1 Ans: (b)
14. There are two circles: one circle is inscribed in and another circle is circumscribed over a square. What is the ratio of the area of the inner to the outer circle? Ans: 1 : 2
Directions for questions 15-17: The questions are based on the information given below
Miss Dean wants to renovate her house. She hires a plumber, a carpenter, a painter, an electrician and an interior decorator. The work is to be finished in one working week (Monday - Friday). Each worker will take a full day to do his job. Miss Dean permits only one person to work each day.
I. The painter can work only after the plumber and the carpenter have finished their jobs
II. The interior decorator must do his job before the electrician.
III. The carpenter cannot work on Monday or Tuesday
15. If the painter works on Thursday, which one of the following alternatives is possible?
(a) The electrician works on Tuesday. (b) The electrician works on Friday. (c) The interior decorator works after the painter does. (d) The painter works on consecutive days. (e) Miss Dean cannot fit all of the workers into the schedule Ans: (b)
16. If the painter works on Friday, which of the following must be false? (a) The carpenter may work on Wednesday (b) The carpenter and the electrician may work on consecutive days (c) If the carpenter works on Thursday, the electrician has to work on Wednesday (d) The plumber may work before the electrician does (e) The electrician may work on Tuesday Ans: (c)
17. Which arrangement is possible? (a) The electrician will work on Tuesday and the interior decorator on Friday (b) The painter will work on Wednesday and the plumber on Thursday (c) The carpenter will work on Tuesday and the painter on Friday (d) The painter will work on Monday and the carpenter on Thursday (e) The carpenter will work on Wednesday and the plumber on Thursday Ans: (e)
Copyright © 2001 Cassius Technologies Pvt Ltd. All rights reserved. Plain Text Attachment [ Download File | Save to my Yahoo! Briefcase ] CMC Analytical Reasoning -------------------(1-5) steps problem There are six steps that lead from the first to the second floor. No two people can be on the same step.
Conditions:
- Mr A is two steps below Mr C.
- Mr B is on a step next to Mr D.
- Only one step is vacant (no one is standing on that step).
- Denote the first step by step 1, the second step by step 2, and so on.

(1) If Mr A is on the first step, which of the following is true?
(A) Mr B is on the second step (B) Mr C is on the fourth step (C) A person, Mr E, could be on the third step (D) Mr D is on a higher step than Mr C
Ans: (D)

(2) If Mr E was on the third step and Mr B was on a higher step than Mr E, which step must be vacant?
(A) step 1 (B) step 2 (C) step 4 (D) step 5 (E) step 6
Ans: (A)

(3) If Mr B was on step 1, which steps could A be on?
(A) 2 & e only (B) 3 & 5 only (C) 3 & 4 only (D) 4 & 5 only (E) 2 & 4 only
Ans: (C)

(4) If there were two steps between the step that A was standing on and the step that B was standing on, and A was on a higher step than D, A must be on step
(A) 2 (B) 3 (C) 4 (D) 5 (E) 6
Ans: (C)

(5) Which of the following is false?
i. B & D can both be on odd-numbered steps in one configuration
ii. In a particular configuration, A and C must either both be on odd-numbered steps or both be on even-numbered steps
iii. A person E can be on a step next to the vacant step
(A) i only (B) ii only (C) iii only
Ans: (C)

Swimmers problem (Questions 6-9)

Six swimmers A, B, C, D, E, F compete in a race. There are no ties. The outcomes are as follows:
1. B does not win.
2. Only two swimmers separate E & D.
3. A is behind D & E.
4. B is ahead of E, with one swimmer intervening.
5. F is ahead of D.

(6) Who is fifth?
(A) A (B) B (C) C (D) D (E) E
Ans: (E)

(7) How many swimmers separate A and F?
(A) 1 (B) 2 (C) 3 (D) 4 (E) not determinable from the given info
Ans: (D)

(8) The swimmer between C & E is
(A) none (B) F (C) D (D) B (E) A
Ans: (A)

(9) If, at the end of the race, swimmer D is disqualified by the judges, then swimmer B finishes in which place?
(A) 1 (B) 2 (C) 3 (D) 4 (E) 5
Ans: (B)

Chimney problem (Questions 10-14)

Five houses lettered A, B, C, D & E are built in a row next to each other, lined up in the order A, B, C, D & E. Each of the five houses has a coloured chimney. The roof and chimney of each house must be painted as follows:
1. The roof must be painted either green, red or yellow.
2. The chimney must be painted either white, black or red.
3. No house may have the same colour chimney as the colour of its roof.
4. No house may use any of the same colours that the house next to it uses.
5. House E has a green roof.
6. House B has a red roof and a black chimney.

(10) Which of the following is true?
(A) At least two houses have black chimneys (B) At least two houses have red roofs (C) At least two houses have white chimneys (D) At least two houses have green roofs (E) At least two houses have yellow roofs
Ans: (C)

(11) Which must be false?
(A) House A has a yellow roof (B) Houses A & C have different colour chimneys (C) House D has a black chimney (D) House E has a white chimney (E) Houses B & D have the same colour roof
Ans: (B)

(12) If house C has a yellow roof, which must be true?
(A) House E has a white chimney (B) House E has a black chimney (C) House E has a red chimney (D) House D has a red chimney (E) House C has a black chimney
Ans: (A)

(13) Which possible combinations of roof & chimney can a house have?
I. A red roof & a black chimney
II. A yellow roof & a red chimney
III. A yellow roof & a black chimney
(A) I only (B) II only (C) III only (D) I & II only (E) I & II & III
Ans: (E)

(14) What is the maximum total number of green roofs for the houses?
Ans: (C)

(15) There are 5 red shoes and 4 green shoes. If one draws a shoe at random, the probability of getting a red shoe is 5C1/9C1.

(16) What is the selling price of a car? The cost of the car is Rs 60 and the profit is 10% over the selling price.
Ans: Rs 66/-

(17) 1/3 of the girls and 1/2 of the boys go to the canteen. What fraction of the total number of classmates go to the canteen?
Ans: cannot be determined

(18) The price of a product is reduced by 30%. By what percentage should it be increased to make it 100% again?
Ans: 42.857%

(19) There is a square of side 6 cm. A circle is inscribed inside the square. Find the ratio of the area of the circle to the square.
r = 3; circle/square = 11/14

(20) There are two candles of equal length but different thickness. The thicker one lasts six hours; the thinner one, two hours less than the thicker one. Ramesh lights the two candles at the same time. When he went to bed he saw that the thicker one was twice the length of the thinner one. For how long did Ramesh keep the two candles lit?
Ans: 3 hours

(21) M/N = 6/5; 3M + 2N = ?
Ans: cannot be determined

(22) p/q = 5/4; 2p + q = ?
Ans: cannot be determined

(23) If PQRS is a parallelogram, what is the ratio of triangle PQS to parallelogram PQRS?
Ans: 1:2

(24) The cost of an item is Rs 12.60 and the profit is 10% over the selling price. What is the selling price?
Ans: Rs 13.86/-

(25) There are 6 red shoes & 4 green shoes. If two shoes are drawn, what is the probability that both are red?
Ans: 6C2/10C2

(26) 15 litres of water containing 20% alcohol; then 5 litres of water are added. What is the percentage of alcohol?
Ans: 15%

(27) A worker is paid 20/- per day. He works 1, 1/3, 2/3, 1/8 and 3/4 days in a week. What is the total amount paid to that worker?
Ans: 57.50

(28) The value of x is between 0 & 1. Which is the largest?
(A) x (B) x^2 (C) -x (D) 1/x
Ans: (D)

DATA SUFFICIENCY

(A) (1) alone sufficient
(B) (2) alone sufficient
(C) both together are sufficient
(D) (1) alone & (2) alone sufficient
(E) information not sufficient

1) A man 6 feet tall is standing near a light on the top of a pole. What is the length of the shadow cast by the man?
(1) The pole is 18 feet high (2) The man is 12 feet from the pole
Ans: (C)

2) Two pipes A and B empty into a reservoir. Pipe A can fill the reservoir in 30 minutes by itself. How long will it take for pipes A and B together to fill the reservoir?
(1) By itself, pipe B can fill the reservoir in 20 minutes (2) Pipe B has a larger cross-sectional area than pipe A
Ans: (A)

3) K is an integer. Is K divisible by 12?
(1) K is divisible by 4 (2) K is divisible by 3
Ans: (C)

4) How far is it from A to B?
(1) It is 15 miles from A to C (2) It is 25 miles from C to B
Ans: (E)

5) Was Melissa Brown's novel published?
(1) If Melissa Brown's novel was published, she would receive at least $1000 in royalties during 1978 (2) Melissa Brown's income for 1978 was over $1000
Ans: (E)

6) Does every bird fly?
(1) Tigers do not fly (2) Ostriches do not fly
Ans: (B)
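Two of the data-sufficiency answers above can be sanity-checked numerically. A minimal Python sketch (assuming the intended reading of question 1's statement (2) is that the man stands 12 feet from the pole):

```python
# Question 2: pipe A fills the reservoir in 30 min; statement (1)
# says pipe B alone fills it in 20 min.  Fill rates add.
rate_a = 1 / 30                     # reservoirs per minute
rate_b = 1 / 20
together = 1 / (rate_a + rate_b)
print(together)                     # ≈ 12 minutes

# Question 1: light at the top of an 18 ft pole, man 6 ft tall,
# standing 12 ft from the pole (assumed reading of statement (2)).
# Similar triangles: 18 / (12 + s) = 6 / s  =>  s = 6 ft.
pole, man, dist = 18, 6, 12
shadow = man * dist / (pole - man)
print(shadow)                       # 6.0 feet
```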
7) How much does John weigh? Jim weighs 200 pounds.
(1) Tom's weight plus Moe's weight equals John's weight (2) John's weight plus Moe's weight equals twice Tom's weight
Ans: (C)

8) Is the figure ABCD a rectangle? (A figure of quadrilateral ABCD is given, with an angle x marked.)
(1) x = 90 degrees (2) AB = CD
Ans: (E)

9) Find x + 2y.
(1) x + y = 10 (2) 2x + 4y = 20
Ans: (B)

10) Is angle BAC a right angle? (A triangle is given with angles x, y and z marked.)
(1) x = 2y (2) y = 1.5z
Ans: (E)

11) Is x greater than y?
(1) x = 2k (2) k = 2y
Ans: (E)

12) A piece of string 6 feet long is cut into three smaller pieces. How long is the longest of the three pieces?
(1) Two pieces are the same length (2) One piece is 3 feet 2 inches long
Ans: (B)

13) How many rolls of wallpaper are necessary to cover the walls of a room whose floor and ceiling are rectangles 12 feet wide and 15 feet long?
(1) A roll of paper covers 20 sq feet (2) There are no windows in the walls
Ans: (E)

14) x and y are integers that are both less than 10. Is x > y?
(1) x is a multiple of 3 (2) y is a multiple of 2
Ans: (E)
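The reasoning behind question 12's answer can be made concrete in a few lines of Python (a sketch; lengths converted to inches):

```python
total = 6 * 12          # the string: 6 feet = 72 inches
piece = 3 * 12 + 2      # statement (2): one piece is 3 ft 2 in = 38 inches
rest = total - piece    # the other two pieces together: 34 inches
# However the remaining 34 inches are split, neither of those two
# pieces can reach 38 inches, so the 38-inch piece must be the longest.
print(piece, rest)      # 38 34
assert piece > rest
```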
15) Fifty students have signed up for at least one of the courses German 1 and English 1. How many of the 50 students are taking German 1 but not English 1?
(1) 16 students are taking German 1 & English 1 (2) The number of students taking English 1 but not German 1 is the same as the number of students taking German 1
Ans: (C)

16) Is ABCD a square? (A figure of quadrilateral ABCD is given, with an angle x marked.)
(1) AD = AB (2) x = 90 degrees
Ans: (E)

17) How much cardboard will it take to make a rectangular box with a lid whose base has length 7 inches?
(1) The width of the box is 5 inches (2) The height of the box will be 4 inches
Ans: (C)

18) Did the ABC company make a profit in 1980?
(1) ABC company made a profit in 1979 (2) ABC company made a profit in 1981
Ans: (E)

19) How much is Jane's salary?
(1) Jane's salary is 70% of John's salary (2) John's salary is 50% of Mary's salary
Ans: (E)

20) Is x > 1?
(1) x + y = 2 (2) y < 0
Ans: (C)

21) How many of the numbers x and y are positive? Both x and y are less than 20.
(1) x is less than 5 (2) x + y = 24
Ans: (B)

22) Is the angle ACB a right angle? (A triangle ABC is given, with angles x and y at the base and z at the apex.)
(1) y = z (2) AC^2 + CB^2 = AB^2
Ans: (B)

23) How far is it from town A to town B? Town C is 12 miles east of town A.
(1) Town C is south of town B (2) It is 9 miles from town B to town C
Ans: (C)

24) A rectangular field is 40 yards long. Find the area of the field.
(1) A fence around the boundary of the field is 140 yards long (2) The field is more than 20 yards wide
Ans: (A)

25) An industrial plant produces bottles. In 1961 the number of bottles produced by the plant was twice the number produced in 1960. How many bottles were produced altogether in the years 1960, 1961 & 1962?
(1) In 1962 the number of bottles produced was 3 times the number produced in 1960 (2) In 1963 the number of bottles produced was one half the total produced in the years 1960, 1961, 1962
Ans: (E)

26) Is xy > 1? x & y are both positive.
(1) x is less than 1 (2) y is greater than 1
Ans: (E)

27) Is it a rhombus? (A figure of a parallelogram is given.)
(1) All four sides are equal (2) The total internal angle is 360
Ans: (E)

28) How many books are in the bookshelf?
(1) The bookshelf is 12 feet long (2) The average weight of each book is 1.2 pounds
Ans: (E)

29) What is the area of the circle?
(1) The radius r is given (2) The perimeter is 3 times the area
Ans: (A)

ARITHMETIC

1) The total distance is 120 km. Going at 60 kmph and coming back at 40 kmph, what is the average speed?
Ans: 48 kmph

2) A school has 30% of its students from Maharashtra. Out of these, 20% are from Bombay. Find the total percentage from Bombay.
Ans: 6%

3) An equilateral triangle of side 3 inches is given. How many equilateral triangles of side 1 inch can be formed from it?
Ans: 9

4) A/B = 3/5; 15A = ?
Ans: 9B

5) Each side of a rectangle is increased by 100%. By what percentage will the area increase?
Ans: 300%

6) The perimeter of the back wheel is 9 feet and of the front wheel 7 feet. Over a certain distance the front wheel makes 10 revolutions more than the back wheel. What is the distance?
Ans: 315 feet

7) The perimeter of the front wheel is 30 and of the back wheel 20. If the front wheel revolves 240 times, how many revolutions will the back wheel take?
Ans: 360 times

8) 20% of a 6 litre solution and 60% of a 4 litre solution are mixed. What percentage of the mixture is the solution?
Ans: 36%

9) City A has a population of 68000, decreasing at a rate of 80 per year. City B has a population of 42000, increasing at a rate of 120 per year. In how many years will both cities have the same population?
Ans: 130 years

10) Two cars are 15 km apart, one travelling at a speed of 50 kmph and the other at 40 kmph. How long will it take for the two cars to meet?
Ans: 3/2 hours

11) A person wants to buy 3 paise and 5 paise stamps costing exactly one rupee. If he buys which of the following numbers of stamps will he not be able to buy 3 paise stamps?
Ans: 9

12) There are 12 boys and 15 girls. How many different dancing groups can be formed?
Ans: 180

13) Which of the following fractions is less than 1/3? (1) 22/62 (2) 15/46
Ans: 15/46

14) Two circles: one circle is inscribed in and another is circumscribed about a square. What is the ratio of the area of the inner to the outer circle?
Ans: 1 : 2

Plumber problem (Questions 15-17)

Miss Dean wants to renovate her house. She hires a plumber, a carpenter, a painter, an electrician and an interior decorator. The work is to be finished in one working week (Monday-Friday). Each worker will take a full day to do his job. Miss Dean permits only one person to work each day.
I. The painter can work only after the plumber and the carpenter have finished their jobs.
II. The interior decorator must do his job before the electrician.
III. The carpenter cannot work on Monday or Tuesday.

15) If the painter works on Thursday, which one of the following alternatives is possible?
(A) The electrician works on Tuesday (B) The electrician works on Friday (C) The interior decorator works after the painter does (D) The painter works on consecutive days (E) Miss Dean cannot fit all of the workers into the schedule
Ans: (B)

16) If the painter works on Friday, which of the following must be false?
(A) The carpenter may work on Wednesday (B) The carpenter and the electrician may work on consecutive days (C) If the carpenter works on Thursday, the electrician has to work on Wednesday (D) The plumber may work before the electrician does (E) The electrician may work on Tuesday
Ans: (C)

17) Which arrangement is possible?
(A) The electrician will work on Tuesday and the interior decorator on Friday (B) The painter will work on Wednesday and the plumber on Thursday (C) The carpenter will work on Tuesday and the painter on Friday (D) The painter will work on Monday and the carpenter on Thursday (E) The carpenter will work on Wednesday and the plumber on Thursday
Ans: (E)

****************************************************************

There is one section on Figures: 4 figures are given and we have to find the next one. In this section 10 questions are given.
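The plumber problem is small enough to check exhaustively. A minimal Python sketch (the worker names and day encoding are mine) enumerates every schedule satisfying rules I-III and verifies the answers to questions 15 and 17:

```python
from itertools import permutations

workers = ["plumber", "carpenter", "painter", "electrician", "decorator"]

def valid(order):
    # order[i] is the worker on day i (0 = Monday .. 4 = Friday)
    day = {w: i for i, w in enumerate(order)}
    return (day["painter"] > day["plumber"]            # rule I
            and day["painter"] > day["carpenter"]      # rule I
            and day["decorator"] < day["electrician"]  # rule II
            and day["carpenter"] >= 2)                 # rule III: not Mon/Tue

schedules = [p for p in permutations(workers) if valid(p)]

# Q15, answer (B): with the painter on Thursday, the electrician
# can work on Friday.
print(any(s[3] == "painter" and s[4] == "electrician" for s in schedules))  # True

# Q17, answer (E): carpenter on Wednesday and plumber on Thursday
# is a possible arrangement.
print(any(s[2] == "carpenter" and s[3] == "plumber" for s in schedules))    # True
```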
Relativity and Cosmology
1002 Submissions
[26] viXra:1002.0056 [pdf] submitted on 28 Feb 2010
Deceleration Parameter Q(Z) in Four and Five Dimensional Geometries, and Implications of Graviton Mass in Mimicking DE in Both Geometries
Authors: Andrew Beckwith
Comments: Eight pages, two figures. Template for submission to Beyond the Standard Model 2010 conference proceedings. May be cut to five pages, pending a decision on submission length by
Professor Hans Klapdor-Kleingrothaus, overall chair of Beyond the Standard Model, as given in http://www.phy.uct.ac.za/beyond2010/
The case for a four-dimensional graviton mass (non-zero) influencing reacceleration of the universe in both four and five dimensions is stated, with particular emphasis upon whether the four- and
five-dimensional geometries given below yield new physical insight into cosmological evolution. The author finds that both cases give equivalent reacceleration one billion years ago, which leads to
an inquiry into whether other cosmological criteria can determine the benefits of adding additional dimensions to cosmology models.
Category: Relativity and Cosmology
[25] viXra:1002.0053 [pdf] replaced on 25 Feb 2010
An Astounding Not-So-Simple Retro-Causal Hologram Universe Simultaneous Solution to the Cosmological Constant & Arrow of Time Enigmas?
Authors: Jack Sarfatti
Comments: 7 pages.
The bias against Wheeler-Feynman retro-causal advanced waves from a future absorber, a general lack of understanding of when the asymptotically constant de Sitter horizon is in our subjective
observable causal diamond piece of the multiverse, Hawking's chronology protection conjecture, and the lack of comprehension of the strange implications of the 't Hooft-Susskind hologram principle [i]
have not allowed us to see what is in front of our eyes since the discovery of dark energy accelerating the expansion rate of 3D space ten years or so ago. Bernard Carr [ii] has already
published a brief account of my idea that retrocausality is the key to understanding the biggest problem in physics today: why the dark energy density is so small. My paper with Creon Levit (NASA
AMES) [iii], based on my brief talk at DICE 2008, further developed that idea. This paper is a still simpler explanation of why the virtual boson dark energy density is so small and how it is
intimately connected to the Arrow of Time of the Second Law of Thermodynamics. [iv] The basic idea is so simple that any bright, curious schoolboy or girl can grasp it without too much difficulty. Our
universe grows from one qubit at the moment of inflation to an asymptotically constant de Sitter horizon hologram screen of ~ 10^123 qubits, which is also the upper limit to the total thermodynamic
entropy of our observable universe in the precise sense of Tamara Davis's 2004 Ph.D. dissertation at the University of New South Wales. The early universe is obviously not de Sitter; therefore, we
already have there an obvious temporal asymmetry explaining the Arrow of Time. The dark energy density we see in our past light cone is proportional to the inverse area of our future de Sitter
horizon at its intersection with our future light cone, in accord with the Wheeler-Feynman retrocausal absorber principle. [v] Our future de Sitter null horizon is the Wheeler-Feynman total future
absorber of last resort, giving us "retrocausality without retrocausality" similar to the "nonlocality without nonlocality" of the no-cloning of a quantum, or "passion at a distance", of orthodox
quantum theory's signal locality. The link between our future and our past is a globally self-consistent time loop in the sense of Igor Novikov. Indeed, this is a bootstrap of self-creation from
future to past. The past dark energy density is indeed the Planck density at the moment of inflation, but Tamara Davis's Fig. 5.1 shows that this density quickly drops to the small constant value
that has been dominant in the past few billion years, bearing in mind that what matters is not the spacelike intersection at a constant conformal time but, rather, the intersection of the observer's
future light cone with his future dark energy horizon. Although I have not yet proved that the dark energy seen in our past light cone is really advanced Hawking radiation from our future
observer-dependent de Sitter cosmic horizon (which is, in addition, likely to be a holographic (post-)quantum computer not in sub-quantal equilibrium), I have given a plausible argument that this may
turn out to be true.
Category: Relativity and Cosmology
[24] viXra:1002.0048 [pdf] replaced on 12 Mar 2010
Why Does the Electron and the Positron Possesses the Same Rest Mass But Different Charges of Equal Modulus and Opposite Signs??.and Why Both Annihilates??
Authors: Fernando Loup
Comments: 26 Pages. An equation of a 5D General Relativity ansatz is included at the beginning of section 2, with minor changes in the text
We demonstrate how rest masses and electric charges are generated by the 5D extra dimension of a Universe possessing a higher-dimensional nature, using the Hamilton-Jacobi equation in agreement with
the point of view of Ponce De Leon. In the generation process we explain how and why antiparticles have the same rest mass m[0] but charges of equal modulus and opposite signs when compared to
particles, and we also explain why both annihilate.
Category: Relativity and Cosmology
[23] viXra:1002.0047 [pdf] submitted on 21 Feb 2010
Gravitational Field of a Condensed Matter Model of the Sun: The Space Breaking Meets the Asteroid Strip
Authors: Larissa Borissova
Comments: 37 pages, Published in "The Abraham Zelmanov Journal", vol.2, pp. 224-260 (2009).
This seminal study deals with the exact solution of Einstein's field equations for a sphere of incompressible liquid without the additional limitation initially introduced in 1916 by Karl
Schwarzschild, according to which the space-time metric must have no singularities. The obtained exact solution is then applied to the Universe, the Sun, and the planets, by the assumption that these
objects can be approximated as spheres of incompressible liquid. It is shown that gravitational collapse of such a sphere is permitted for an object whose characteristics (mass, density, and size)
are close to the Universe. Meanwhile, there is a spatial break associated with any of the mentioned stellar objects: the break is determined as the approaching to infinity of one of the spatial
components of the metric tensor. In particular, the break of the Sun's space meets the Asteroid strip, while Jupiter's space break meets the Asteroid strip from the outer side. Also, the space breaks
of Mercury, Venus, Earth, and Mars are located inside the Asteroid strip (inside the Sun's space break).
Category: Relativity and Cosmology
[22] viXra:1002.0046 [pdf] submitted on 21 Feb 2010
On the Speed of Rotation of the Isotropic Space: Insight into the Redshift Problem
Authors: Dmitri Rabounski
Comments: 16 pages, Published in "The Abraham Zelmanov Journal", vol.2, pp. 208-223 (2009).
This study applies the mathematical method of chronometric invariants, which are physically observable quantities in the four-dimensional space-time (Zelmanov A.L., Soviet Physics Doklady, 1956,
vol.1, 227-230). The isotropic region of the space-time is considered (it is known as the isotropic space). This is the home of massless light-like particles (e.g. photons). It is shown that the
isotropic space rotates with a linear velocity equal to the velocity of light. The rotation slows in the presence of gravitation. Even under the simplified conditions of Special Relativity, the
isotropic space still rotates with the velocity of light. A manifestation of this effect is the observed Hubble redshift explained as energy loss of photons with distance, for work against the
non-holonomity (rotation) field of the isotropic space wherein they travel (Rabounski D. The Abraham Zelmanov Journal, 2009, vol.2, 11-28). It is shown that the light-speed rotation of the isotropic
space has a purely geometrical origin due to the space-time metric, where time is presented as the fourth coordinate, expressed through the velocity of light.
Category: Relativity and Cosmology
[21] viXra:1002.0045 [pdf] submitted on 21 Feb 2010
Hubble Redshift due to the Global Non-Holonomity of Space
Authors: Dmitri Rabounski
Comments: 18 pages, Published in "The Abraham Zelmanov Journal", vol.2, pp. 11-28 (2009).
In General Relativity, the change in energy of a freely moving photon is given by the scalar equation of the isotropic geodesic equations, which manifests the work produced on a photon being moved
along a path. I solved the equation in terms of physical observables (Zelmanov A. L., Soviet Physics Doklady, 1956, vol. 1, 227-230) and in the large scale approximation, i.e. with gravitation and
deformation neglected, while supposing the isotropic space to be globally non-holonomic (the time lines are non-orthogonal to the spatial section, a condition manifested by the rotation of the
space). The solution is E = E[0] exp(-Ωat/c), where Ω is the angular velocity of the space (it meets the Hubble constant H[0] = c/a = 2.3x10^-18 sec^-1), a is the radius of the Universe, t = r/c is
the time of the photon's travel. Thus, a photon loses energy with distance due to the work against the field of the space non-holonomity. According to the solution, the redshift should be
z = exp(H[0]r/c) - 1 ≈ H[0]r/c. This solution explains both the redshift z = H[0]r/c observed at small distances and the non-linearity of the empirical Hubble law due to the exponent (at large r). The
ultimate redshift in a non-expanding universe, according to the theory, should be z = exp(π)-1 = 22.14.
Category: Relativity and Cosmology
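A quick numeric check of the figures quoted in this abstract (a sketch in Python; the value of x below is a hypothetical small argument, not a quantity from the paper):

```python
import math

# Ultimate redshift in a non-expanding universe, per the abstract:
z_max = math.exp(math.pi) - 1
print(round(z_max, 2))        # 22.14

# For small H0*r/c, the law z = exp(H0*r/c) - 1 reduces to the
# linear Hubble law z ≈ H0*r/c:
x = 1e-4
print(math.exp(x) - 1)        # ≈ 1e-4
```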
[20] viXra:1002.0042 [pdf] replaced on 19 Feb 2010
Concept and Method of Physimatics
Authors: Robert Gallinat
Comments: 5 pages, v1 is in German, v2 is in English
Conceptual approach and heuristic method for an investigation of the possible algebraic structure of the interdependence between mathematical and physical reality and about the connection between
local, non-local and global properties in physics and mathematics, expressed by a general n-fold algebra
Category: Relativity and Cosmology
[19] viXra:1002.0041 [pdf] submitted on 19 Feb 2010
Absence of Significant Cross-Correlation Between WMAP and SDSS
Authors: Martín López-Corredoira, F. Sylos Labini, J. Betancort-Rijo
Comments: 5 pages, accepted to be published in A&A
Aims. Several authors have claimed to detect a significant cross-correlation between microwave WMAP anisotropies and the SDSS galaxy distribution. We repeat these analyses to determine the different
cross-correlation uncertainties caused by re-sampling errors and field-to-field fluctuations. The first type of error concerns overlapping sky regions, while the second type concerns nonoverlapping
sky regions. Methods. To measure the re-sampling errors, we use bootstrap and jack-knife techniques. For the field-to-field fluctuations, we use three methods: 1) evaluation of the dispersion in the
cross-correlation when correlating separated regions of WMAP with the original region of SDSS; 2) use of mock Monte Carlo WMAP maps; 3) a new method (developed in this article), which measures the
error as a function of the integral of the product of the self-correlations for each map. Results. The average cross-correlation for b > 30 deg. is significantly stronger than the re-sampling errors
- both the jack-knife and bootstrap techniques provide similar results - but it is of the order of the field-to-field fluctuations. This is confirmed by the cross-correlation between anisotropies and
galaxies being null, within re-sampling errors, in more than half of the sample. Conclusions. Re-sampling methods underestimate the errors. Field-to-field fluctuations dominate the detected
signals. The ratio of signal to re-sampling errors is larger than unity in a way that strongly depends on the selected sky region. We therefore conclude that there is no evidence yet of a significant
detection of the integrated Sachs-Wolfe (ISW) effect. Hence, the value of Ω[Λ] ≈ 0.8 obtained by the authors who assumed they were observing the ISW effect would appear to have originated from noise.
Category: Relativity and Cosmology
[18] viXra:1002.0038 [pdf] submitted on 18 Feb 2010
The Deflection of Light in the Dynamic Theory of Gravity
Authors: Ioannis Iraklis Haranas
Comments: 8 pages. Published Romanian Astronomical Journal, Vol. 14, No. 1, pp. 3-9, 2004 and SAO and NASA Astrophysics Data System
In a new theory of gravity called the dynamic theory, which is derived from thermodynamic principles in a five-dimensional space, the deflection of a light signal is calculated and compared to that of
general relativity. This is achieved by using the dynamic gravity line element, which is the usual four-dimensional space-time element of Newtonian gravity modified by a negative inverse radial
exponential term. The dynamic theory of gravity predicts this modification of the original Newtonian potential by this exponential term.
Category: Relativity and Cosmology
[17] viXra:1002.0037 [pdf] submitted on 18 Feb 2010
The Temperature of a Black Hole in a De-Sitter Space-Time
Authors: Ioannis Iraklis Haranas
Comments: 5 pages. Published Romanian Astronomical Journal, Vo. 12 No. 2, 2002
A relation for the black-hole temperature in a De-Sitter type universe is determined in the first step of this paper. As a result, the upper and lower temperature limits of the black hole
are calculated, and then the limits of the radius of the universe containing the black hole. All these calculations are based upon the present values of the cosmological constant Λ. Further relations
for the dependence of this temperature on Hubble's constant and the gravitational energy of the hadrons are also derived.
Category: Relativity and Cosmology
[16] viXra:1002.0036 [pdf] submitted on 18 Feb 2010
Sakharov's Temperature Limit in a Schwarzschild Metric Modified by the Cosmological Constant Λ
Authors: Ioannis Iraklis Haranas
Comments: 9 pages. Published Romanian Astronomical Journal, Vol. 12 No. 1, 2002 and SAO/NASA Astrophysics Data System.
In this paper we examine the effect, if any, that a modification of the Schwarzschild metric by a lambda term could have on the so-called Sakharov upper temperature limit. It is
known that Sakharov's limit is the maximum possible black-body temperature that can occur in our universe.
Category: Relativity and Cosmology
[15] viXra:1002.0035 [pdf] submitted on 19 Feb 2010
Two-World Background of Special Relativity. Part II
Authors: Akindele O. J. Adekugbe
Comments: 19 pages, 13 pages, published in Progress in Physics, 2010, vol.1, 49-61
The two-world background of the Special Theory of Relativity started in part one of this article is continued in this second part. Four-dimensional inversion is shown to be a special Lorentz
transformation that transforms the positive spacetime coordinates of a frame of reference in the positive universe into the negative spacetime coordinates of the symmetry-partner frame of reference
in the negative universe in the two-world picture, contrary to the conclusion that four-dimensional inversion is impossible as actual transformation of the coordinates of a frame of reference in the
existing one-world picture. By starting with the negative spacetime dimensions in the negative universe derived in part one, the signs of mass and other physical parameters and physical constants in
the negative universe are derived by application of the symmetry of laws between the positive and negative universes. The invariance of natural laws in the negative universe is demonstrated. The
derived negative sign of mass in the negative universe is a conclusion of over a century-old effort towards the development of the concept of negative mass in physics.
Category: Relativity and Cosmology
[14] viXra:1002.0034 [pdf] submitted on 19 Feb 2010
Two-World Background of Special Relativity. Part I
Authors: Akindele O. J. Adekugbe
Comments: 19 pages, published in Progress in Physics, 2010, vol.1 30-48
A new sheet of spacetime is isolated and added to the existing sheet, thereby yielding a pair of co-existing sheets of spacetimes, which are four-dimensional inversions of each other. The separation
of the spacetimes by the special-relativistic event horizon compels an interpretation of the existence of a pair of symmetrical worlds (or universes) in nature. Furthermore, a flat two-dimensional
intrinsic spacetime that underlies the flat four-dimensional spacetime in each universe is introduced. The four-dimensional spacetime is outward manifestation of the two-dimensional intrinsic
spacetime, just as the Special Theory of Relativity (SR) on four-dimensional spacetime is mere outward manifestation of the intrinsic Special Theory of Relativity (φSR) on two-dimensional intrinsic
spacetime. A new set of diagrams in the two-world picture that involves relative rotation of the coordinates of the two-dimensional intrinsic spacetime is drawn and intrinsic Lorentz transformation
derived from it. The Lorentz transformation in SR is then written directly from intrinsic Lorentz transformation in φSR without any need to draw diagrams involving relative rotation of the
coordinates of four-dimensional spacetime, as usually done until now. Indeed every result of SR can be written directly from the corresponding result of φSR. The non-existence of the light cone
concept in the two-world picture is shown and good prospect for making the Lorentz group SO(3,1) compact in the two-world picture is highlighted.
Category: Relativity and Cosmology
[13] viXra:1002.0030 [pdf] submitted on 16 Feb 2010
Einstein's Field Equations in Cosmology Using Harrison's Formula
Authors: Ioannis Iraklis Haranas
Comments: Published, Galilean Electrodynamics, vol. 18, SPI/3,pp. 49-53, 2007
The most important tool for the study of the gravitational field in Einstein's theory of gravity is his field equations. In this short paper, we demonstrate the derivation of the Einstein field
equations for the Friedmann cosmological model using the Robertson-Walker metric, and furthermore Harrison's formula for the Ricci tensor. The difference is that Harrison's formula is a genuinely
shorter way of obtaining the field equations. The advantage is that the Christoffel symbols do not have to be calculated directly one by one. This can be a very useful demonstration for somebody who
would like to understand a slightly different but faster way of deriving the field equations, something that is rarely seen in many undergraduate and even graduate textbooks.
Category: Relativity and Cosmology
[12] viXra:1002.0028 [pdf] submitted on 16 Feb 2010
"Let There Be h"! An Existence Argument for Planck's Constant
Authors: Constantinos Ragazas
Comments: 4 pages
Planck's constant h is considered to be a fundamental universal constant of physics. And although we can experimentally determine its value to great precision, the reason for its existence and what
it really means is still a mystery. Quantum Mechanics has adopted it in its mathematical formalism, as it also has the Quantum Hypothesis. But QM does not explain its meaning or prove its existence.
Why does the Universe need h and energy quanta? Why does the mathematical formalism of QM so accurately reflect physical phenomena and predict these with great precision? Ask any physicist and
uniformly the answer is "that's how the Universe works". The units of h are energy-time, and the conventional interpretation of h is as a quantum of action. But in this brief note we take a
different view. We interpret h as the minimal accumulation of energy that can be manifested in our measurements. Certainly the units of h agree with such an interpretation. Based on this we provide a
plausible explanation for the existence of Planck's constant, what it means and how it comes about. We show that the existence of Planck's constant is not so much dictated by the Universe but rather
by Mathematics and the internal consistency and calibrations of Physics.
Category: Relativity and Cosmology
[11] viXra:1002.0025 [pdf] submitted on 14 Feb 2010
Radar Time Delays in the Dynamic Theory of Gravity
Authors: Ioannis Iraklis Haranas
Comments: 6 pages, Published: Serbian Astronomical Journal, no. 168, 2004, 49-54.
There is a new theory of gravity called the dynamic theory, which is derived from thermodynamic principles in a five-dimensional space. Radar signal travel times and delays are calculated for the
major planets in the solar system and compared to those of general relativity. This is done by using the usual four-dimensional spherically symmetric space-time element of classical general
relativistic gravity, now slightly modified by a negative inverse radial exponential term due to the dynamic theory of gravity potential.
Category: Relativity and Cosmology
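For orientation, the classical general-relativistic (Shapiro) radar delay that such calculations are compared against can be estimated in a few lines. This sketch uses the standard textbook formula and round solar-system values; the dynamic-theory correction itself is not reproduced here:

```python
import math

G = 6.674e-11          # m^3 kg^-1 s^-2
M_SUN = 1.989e30       # kg
C = 2.998e8            # m/s
R_SUN = 6.96e8         # m

def shapiro_round_trip(r1, r2, b):
    """Round-trip Shapiro delay (s) for a radar signal passing the Sun
    at impact parameter b, between bodies at distances r1 and r2 (b << r1, r2)."""
    return 2 * (2 * G * M_SUN / C**3) * math.log(4 * r1 * r2 / b**2)

# Earth-Mars at superior conjunction, signal grazing the solar limb:
dt = shapiro_round_trip(1.496e11, 2.279e11, R_SUN)
print(f"{dt * 1e6:.0f} microseconds")   # roughly 250 microseconds, the textbook value
```

The logarithmic dependence on the impact parameter is why the delay is largest, and was first measured, near superior conjunction.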
[10] viXra:1002.0023 [pdf] submitted on 14 Feb 2010
The Dark Energy Problem
Authors: Michael Harney, Ioannis Iraklis Haranas
Comments: 3 pages, Published: Progress in Physics, vol. 4, pp. 16-18, 2008.
The proposal for dark energy based on Type Ia supernovae redshift is examined. It is found that the linear and non-linear portions of the Hubble redshift are easily explained by the use of the Hubble
Sphere model, where two interacting Hubble spheres sharing a common mass-energy density result in a decrease in energy as a function of distance from the object being viewed. Interpreting the
non-linear portion of the redshift curve as a decrease in interacting volume between neighboring Hubble spheres removes the need for dark energy.
Category: Relativity and Cosmology
[9] viXra:1002.0022 [pdf] submitted on 14 Feb 2010
Quantizing Torsion Effects in a de Sitter Universe
Authors: Ioannis Iraklis Haranas, Michael Harney
Comments: 8 pages, Romanian Astronomical Journal, vol. 10, no. 1, 2009, and SAO/NASA Astrophysics Data System.
We derive quantization relations in the case when torsion effects are added in a de Sitter spacetime metric with or without a black hole at the Planck mass and Planck length limit. To this end we use
Zeldovich's definition of the cosmological constant.
Category: Relativity and Cosmology
[8] viXra:1002.0020 [pdf] submitted on 14 Feb 2010
Satellite Motion in a Non-Singular Potential
Authors: Ioannis Iraklis Haranas, Spiros Pagiatakis
Comments: 7 pages, Published: Astrophys Space Sci., Jan 22, 2010, DOI 10.1007/s10509-010-0274-5.
We study the effects of a non-singular gravitational potential on satellite orbits by deriving the corresponding time rates of change of its orbital elements. This is achieved by expanding the
non-singular potential into power series up to second order. This series contains three terms, the first being the Newtonian potential and the other two, here R1 (first order term) and R2 (second
order term), express deviations of the singular potential from the Newtonian. These deviations from the Newtonian potential are taken as disturbing potential terms in the Lagrange planetary equations
that provide the time rates of change of the orbital elements of a satellite in a non-singular gravitational field. We split these effects into secular, low and high frequency components and we
evaluate them numerically using the low Earth orbiting mission Gravity Recovery and Climate Experiment (GRACE). We show that the secular effect of the second-order disturbing term R2 on the perigee
and the mean anomaly are 4".307*10^-9/a, and -2".533*10^-15/a, respectively. These effects are far too small and most likely cannot easily be observed with today's technology. Numerical evaluation of
the low and high frequency effects of the disturbing term R2 on low Earth orbiters like GRACE are very small and undetectable by current observational means.
Category: Relativity and Cosmology
[7] viXra:1002.0019 [pdf] replaced on 11 Aug 2011
On the Precession of Mercury's Orbit
Authors: R. Wayte
Comments: 7 pages
The Sun's orbital motion around the Solar System barycentre contributes a small quadrupole moment to the gravitational energy of Mercury. The effect of this moment has until now gone unnoticed, but
it actually generates some precession of Mercury's orbit. Therefore the residual 43 arcsec/cy, currently allocated to general relativity, has to account for this new component as well as a reduced
relativity component.
Category: Relativity and Cosmology
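For reference, the 43 arcsec/cy residual cited in this abstract is the standard general-relativistic perihelion advance, which can be reproduced from textbook orbital values (a sketch, independent of the paper's quadrupole argument):

```python
import math

GM_SUN = 1.327e20                    # m^3 s^-2
C = 2.998e8                          # m/s
A = 5.791e10                         # Mercury's semi-major axis, m
E = 0.2056                           # orbital eccentricity
ORBITS_PER_CENTURY = 36525 / 87.969  # Mercury's period is 87.969 days

# GR perihelion advance per orbit: 6*pi*G*M / (c^2 * a * (1 - e^2))
dphi = 6 * math.pi * GM_SUN / (C**2 * A * (1 - E**2))

arcsec_per_cy = dphi * ORBITS_PER_CENTURY * (180 / math.pi) * 3600
print(f"{arcsec_per_cy:.1f} arcsec/century")   # prints 43.0
```

Any extra solar quadrupole contribution, as the abstract argues, would have to fit inside this same observed residual.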
[6] viXra:1002.0016 [pdf] submitted on 12 Feb 2010
Detection of the Relativistic Corrections to the Gravitational Potential Using a Sagnac Interferometer
Authors: Ioannis Iraklis Haranas, Michael Harney
Comments: 6 pages, Published: Progress in Physics, vol. 3, pp. 3-8 , 2008.
General Relativity predicts the existence of relativistic corrections to the static Newtonian potential which can be calculated and verified experimentally. The idea leading to quantum corrections at
large distances is that of the interactions of massless particles, which only involve their coupling energies at low energies. In this short paper we propose the Sagnac interferometric
technique as a way of detecting the relativistic correction suggested for the Newtonian potential, and thus obtaining an estimate of the phase difference using a satellite orbiting at an altitude of 250
km above the surface of the Earth.
Category: Relativity and Cosmology
[5] viXra:1002.0015 [pdf] submitted on 12 Feb 2010
Geodetic Precession of the Spin in a Non-Singular Gravitational Potential
Authors: Ioannis Iraklis Haranas, Michael Harney
Comments: 6 pages, Published, Progress in Physics, vol. pp. 1-5, 2008
Using a non-singular gravitational potential which appears in the literature we analytically derived and investigated the equations describing the precession of a body's spin orbiting around a main
spherical body of mass M. The calculation has been performed using a non-exact Schwarzschild solution, and further assuming that the gravitational field of the Earth is more than that of a rotating
mass. The general theory of relativity predicts that the direction of the gyroscope will change at a rate of 6.6 arcsec/year for a gyroscope in a 650 km high polar orbit. In our case a precession rate of
the spin of a very similar magnitude to that predicted by general relativity was calculated, resulting in ΔS[geo]/S[geo] = -5.570*10^-2.
Category: Relativity and Cosmology
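The 6.6 arcsec/year figure quoted above can be checked against the textbook geodetic (de Sitter) precession rate for a circular orbit, Omega = (3/2)(GM)^{3/2} / (c^2 r^{5/2}). This sketch is a consistency check of the GR number only, not the paper's non-singular-potential calculation:

```python
import math

GM_E = 3.986e14        # Earth's GM, m^3 s^-2
C = 2.998e8            # m/s
R_E = 6.371e6          # mean Earth radius, m
r = R_E + 650e3        # 650 km circular polar orbit

omega = 1.5 * GM_E**1.5 / (C**2 * r**2.5)             # rad/s
arcsec_per_year = omega * 3.156e7 * (180 / math.pi) * 3600
print(f"{arcsec_per_year:.1f} arcsec/year")           # prints 6.6
```

This is the same prediction Gravity Probe B was designed to test; the paper's claim is that its non-singular potential gives a rate of very similar magnitude.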
[4] viXra:1002.0014 [pdf] submitted on 11 Feb 2010
Particles Here and Beyond the Mirror
Authors: Borissova L., Rabounski D.
Comments: 118 pages, 2nd edition, published by Svenska fysikarkivet, 2008
This is a research on all kinds of particles, which could be conceivable in the space-time of General Relativity. In addition to mass-bearing particles and light-like particles, zero-particles are
predicted: such particles can exist in a fully degenerate space-time region (zero-space). Zero-particles appear as standing light waves, which travel instantly (non-quantum teleportation of photons);
they might be observed in a further development of the "stopped light experiment" which was first conducted in 2001, at Harvard, USA. The theoretical existence of two separate regions in the
space-time is also shown, where the observable time flows into the future and into the past (our world and the mirror world). These regions are separated by a space-time membrane wherein the
observable time stops. Certain other problems are also considered. It is shown, through Killing's equations, that geodesic motion of particles is a result of stationary geodesic rotation of the
space which hosts them. Concerning the theory of gravitational wave detectors, it is shown that both free-mass detector and solid-body detector may register a gravitational wave only if such a
detector bears an oscillation of the butt-ends.
Category: Relativity and Cosmology
[3] viXra:1002.0013 [pdf] submitted on 11 Feb 2010
Fields, Vacuum, and the Mirror Universe
Authors: Borissova L., Rabounski D.
Comments: 260 pages, 2nd edition, published by Svenska fysikarkivet, 2009
In this book, we build the theory of non-geodesic motion of particles in the space-time of General Relativity. Motion of a charged particle in an electromagnetic field is constructed in curved
space-time (in contrast to the regular considerations held in Minkowski's space of Special Relativity). Spin particles are explained in the framework of the variational principle: this approach
distinctly shows that elementary particles should have masses governed by a special quantum relation. Physical vacuum and forces of non-Newtonian gravitation acting in it are determined through the
lambda-term in Einstein's equations. A cosmological concept of the inversion explosion of the Universe from a compact object with the radius of an electron is suggested. Physical conditions inside a
membrane that separates space-time regions where the observable time flows into the future and into the past (our world and the mirror world) are examined.
Category: Relativity and Cosmology
[2] viXra:1002.0010 [pdf] submitted on 5 Feb 2010
Angular Size Test on the Expansion of the Universe
Authors: Martín López-Corredoira
Comments: 44 pages, accepted to be published in Int. J. Mod. Phys. D
Assuming the standard cosmological model as correct, the average linear size of galaxies with the same luminosity is six times smaller at z = 3.2 than at z = 0, and their average angular size for a
given luminosity is approximately proportional to z^-1. Neither the hypothesis that galaxies which formed earlier have much higher densities nor their luminosity evolution, mergers ratio, or massive
outflows due to a quasar feedback mechanism are enough to justify such a strong size evolution. Also, at high redshift, the intrinsic ultraviolet surface brightness would be prohibitively high with
this evolution, and the velocity dispersion much higher than observed. We explore here another possibility to overcome this problem by considering different cosmological scenarios that might make the
observed angular sizes compatible with a weaker evolution. One of the models explored, a very simple phenomenological extrapolation of the linear Hubble law in a Euclidean static universe, fits the
angular size vs. redshift dependence quite well, which is also approximately proportional to z^-1 with this cosmological model. There are no free parameters derived ad hoc, although the error bars
allow a slight size/luminosity evolution. The type Ia supernovae Hubble diagram can also be explained in terms of this model with no ad hoc fitted parameter. WARNING: I do not argue here that the
true Universe is static. My intention is just to discuss which theoretical models provide a better fit to the data of observational cosmology.
Category: Relativity and Cosmology
[1] viXra:1002.0007 [pdf] submitted on 4 Feb 2010
La Teoría Conectada Soluciona el Problema de la Materia Oscura de la Relatividad General de Einstein. (Dark Matter)
Authors: Xavier Terri Castañé
Comments: 38 pages, Spanish language
The connected theory solves the dark-matter problem of Einstein's theory of general relativity. What is this matter? Do we see the world and create theories, or do we create theories
and observe the world? The real solution to the crisis of contemporary physics will be a physico-philosophical question or...
Category: Relativity and Cosmology
Math Tools
Fractions - Adding
Reviewer: arhard, Jun 10 2008 10:57:48:263AM
Review based on:
I am taking a course geared towards using technology to teach math.
Appropriate for:
introduction to a concept, practice of skills and understandings
Other Comments:
I would use this to introduce a concept and maybe for remediation for those who still struggle.
What math does one need to know to use the resource?
Multiplication, fractions
What hardware expertise does one need to learn to use the resource?
Basic keyboarding skills
What extra things must be done to make it work?
How hard was it for you to learn?
I feel the game was easy for me to use. However, I feel that a third grader may have a little bit of a hard time getting the hang of it. Although they are growing up in a time when computers are
readily available, children at this age still have a limited knowledge of the Internet and how to maneuver through it.
Ability to meet my goals:
Recommended for:
Math 3: Fractions
Math 4: Fractions
Math 5: Fractions
Reviewer: crobrad0709, Jun 9 2008 04:19:55:807PM
Review based on:
personal experience
Appropriate for:
practice of skills and understandings, applications of a concept or technique
Other Comments:
This tool is very useful for practicing skills already learned. This activity asks you to find equivalent fractions. If students have no prior understanding of equivalent fractions, then they will have trouble
knowing how to find the equivalent fractions in this activity. However, to practice adding fractions once already learned, this is a great visual for students.
What math does one need to know to use the resource?
You need to have an understanding of fractions, equivalent fractions, and a beginning knowledge for adding fractions.
What hardware expertise does one need to learn to use the resource?
Need to use the mouse to find the equivalent fractions, and number pad to key in their answers.
How hard was it for you to learn?
Everything is clearly labeled. There are not a lot of directions that are needed to be read in order to understand the activity. The most difficult part is being able to see if the fractions have
become equivalent.
Ability to meet my goals:
Recommended for:
Math 4: Fractions
Reviewer: Mike013, Nov 28 2005 07:40:10:510PM
Review based on:
I was designing a unit lesson for a college course and used it to help teach students adding fractions. I needed a way to demonstrate common denominators to the students, and this applet seemed
like a good choice.
Appropriate for:
introduction to a concept, practice of skills and understandings
What math does one need to know to use the resource?
The student should know operations on whole numbers and be familiar with fractions.
What hardware expertise does one need to learn to use the resource?
You need to know how to use the internet.
How hard was it for you to learn?
Very Easy
It was easy to find a common denominator because you just had to find a common base and make sure your fraction worked for the base.
Ability to meet my goals:
Recommended for:
Math 6: Fractions
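The common-denominator skill these reviews describe can be sketched with Python's standard `fractions` module; this is only an illustration of the arithmetic the applet practices, not the applet itself:

```python
from fractions import Fraction
from math import lcm

# Add 1/4 and 1/6 by first rewriting both over a common denominator.
a, b = Fraction(1, 4), Fraction(1, 6)
common = lcm(a.denominator, b.denominator)                    # 12

# Equivalent fractions with the common base, as (numerator, denominator):
a_equiv = (a.numerator * (common // a.denominator), common)   # (3, 12)
b_equiv = (b.numerator * (common // b.denominator), common)   # (2, 12)

total = a + b   # Fraction finds the common denominator and reduces for us
print(a_equiv, b_equiv, total)   # (3, 12) (2, 12) 5/12
```

The intermediate `a_equiv`/`b_equiv` tuples make the "common base" step visible, which is exactly the step the applet asks students to perform by hand.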
Statistics question
May 21st 2011, 05:06 PM
I am wondering if you could please explain these questions to me.
Count the fraction of samples from the forest fire data where loge(area) lies between 0 and 2.
Does this literally mean count the fractions in log form between 0 and 2?
So, for example, if there are 200 numbers and 1 of them is between 0 and 2, would I answer it as 1/200 in fraction form? Is that what they are asking?
Assuming that loge(area) follows a Normal distribution with a population mean and population standard deviation equal to the mean and standard deviation from part (i),
find the probability that a randomly sampled value from the population of loge(area) will be between 0 and 2.
I don't understand this question, would it be the same as question 1?
Is there a difference in the previous two answers? Explain why.
I don't think there will be a difference.
a fraction is not a whole number
a probability is something probable
If you could kindly help thanks heaps!
May 21st 2011, 05:30 PM
It would be most helpful here if we could see the data set.
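Since the data set is not posted in the thread, here is a sketch with simulated values that shows why the two answers differ in kind: part (i) is an empirical sample fraction, part (ii) is a probability from a fitted Normal model. The simulated mean and standard deviation below are placeholders; with the real forest-fire data they would come from the actual log_e(area) values.

```python
import math
import random

random.seed(0)
# The forest-fire data set isn't shown in the thread, so simulate
# 200 log_e(area) values; with the real data, replace this list.
log_area = [random.gauss(1.1, 1.4) for _ in range(200)]

# (i) Empirical fraction of samples with 0 < log_e(area) < 2.
frac = sum(1 for x in log_area if 0 < x < 2) / len(log_area)

# (ii) Probability under a Normal model fitted with the sample
#      mean and (n-1) standard deviation, via the Normal CDF.
n = len(log_area)
mu = sum(log_area) / n
sd = math.sqrt(sum((x - mu) ** 2 for x in log_area) / (n - 1))

def phi(x):
    """Standard Normal CDF."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

prob = phi((2 - mu) / sd) - phi((0 - mu) / sd)
print(frac, prob)   # similar values, but generally not identical
```

The two numbers are usually close but need not agree: the first counts the data directly, the second assumes the data really are Normal with those parameters.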
Mathematics with Love: The Courtship Correspondence of Barnes Wallis, Inventor of the Bouncing Bomb
This is a fascinating book, even though there isn't much mathematics in it. (Love with Mathematics might have been a better title.) It gives us a glimpse of life and courtship in the 1920s, shows
what it was like to be a creative engineer at the time, and tells us a little bit about mathematical education and how mathematics was understood.
The first main character in the story is Barnes Wallis, a talented young engineer who was pretty much self-taught and was working on the development of airships (i.e., Zeppelin-style dirigibles).
Barnes' father, having lost his first wife, remarried, and so Barnes got to know the family of Arthur Bloxam, brother to his father's new wife. Arthur Bloxam's daughter Molly is the other main
character of the story.
Barnes met Molly in 1922, when he was 34 and she was 17. He seems to have fallen in love instantly. He was about to leave England, however, the airship job having (temporarily, as it turned out)
disappeared. He decided to strike up a correspondence with Molly. Of course, this required obtaining her father's permission. This was granted, but conditions were imposed: it was to be a pen
friendship only, nothing that would pose a threat to Molly's college career (which she was about to start at University College in London) nor to "her open-minded contact with men her own age."
So began a long courtship by correspondence. In the early phase, perhaps in order to have an excuse for writing, Barnes offered to help Molly with her mathematics. As a result, the letters include
lessons in calculus, some elementary trigonometry, and a very little bit of physics.
These initial letters are fascinating. Both writers work under serious restrictions as to what they might say, and so both spend lots of time telling each other how wonderful the other's last letter
was and encouraging each other to continue writing. Barnes is a good teller of stories, and keeps things interesting.
As the courtship evolved, Molly's father became more and more concerned. He seems to have felt Molly was too young for this sort of thing, and worried that Barnes' age and experience would allow him
to convince her to make a decision too early and too easily.
Still, about halfway through the book Mr. Bloxam accepts the fact that Barnes really is courting his daughter, and relaxes the rules a little. He allows Barnes to express his feelings, but not Molly!
She is to withhold any hint of how she feels, any decision, until her 19th birthday. So we are treated to a rather strange correspondence. Barnes' letters become little more than repeated
protestations about how much he loves her (and therefore make for much less interesting reading). Her replies are, as her father stipulated, guarded, though there are ample hints about how it all will turn out.
Of course, Barnes gets the girl in the end, and the author, his daughter, tells us a little — very little — about their life together.
So what's interesting? First of all, these are two interesting people. Were it not for that, the book would be unreadable. Barnes is smart but unsure of himself, especially at first. Molly is
sensitive but intelligent. We grow to like them... and to share their impatience with the rules under which they find themselves.
Second, the book gives a fascinating glimpse of life in the 1920s. Because Barnes and Molly must write without talking about feelings, they tell each other what they are doing, and so we see what it
was like to be an engineer, the ups and downs of the airship business (we don't quite get to see the final down, but there are hints), and the difficulties of university. We learn a little about
courtship and what it entails, and come to realize that the restrictions imposed by Arthur Bloxam actually helped these two young people get to know each other quite well before making their final decision.
Finally, there are a few interesting bits in the mathematics. The notation for limit was apparently "Lt." Barnes uses bars over expressions where we would put them in parentheses. Molly seems to have
no trouble following an exposition that, though witty, is fairly serious. We find out that Molly's pre-university mathematical education was pretty dismal, contra all the legends about the good old days.
Barnes argues, at one point, that 0 is not simply "nothing", not a number in the sense that 2 is a number, but rather a code for a variable that becomes arbitrarily small. In other words, he takes 0
and ∞ as dual concepts. And we get some nice quotes. Here is Barnes writing about a temporary teaching job he had (I have preserved Barnes' idiosyncratic spelling and punctuation):
I've been getting so cross with some of my people — I thumped a desk today. People seem so stupid about maths. I don't mind how much explaining I do, or what pains I take to make them understand,
but inattention and wilful stupidity I cannot tolerate.
Or, getting ready to explain some calculus to Molly:
...perhaps it is best to say that I do not propose to give the full mathematical treatment of the subject — I do not see that that is at all necessary (also it is rather difficult). The calculus
is a very beautiful and simple means of performing calculations, which either cannot be done at all in any other way, or else can only be performed by very clumsy, roundabout, and approximate
methods. No one would suggest that you must be able personally to manufacture a needle, before you are allowed to sew, or that I must be able to make a watch, before I am allowed to tell the
time, — these things are tools, or instruments, put at our disposal by the accumulated experience of our forefathers, and we are quite justified in making such use of them as our skill and
ingenuity can contrive. So with the calculus — it is a mental tool left at our disposal by the great mathematicians of the past.
Here he is meditating on how hard it is to make trigonometry fun:
Somehow one can not make any fun out of trig. its all so matter of fact — its difficult to say just what I mean — Calculus is an art — it endows you with wonderful powers; you can let your
imagination go to all sorts of lengths and not pass out of the realm of reality — Calculus is like chocolate meringues — elementary trig is like very thick stale bread and margarine. It improves
a lot later on, and like everything else, merges into calculus...
At the end, the reader feels happy for Barnes and Molly, and perhaps wishes to know more. I was a little frustrated to find no explanation at all of the "bouncing bomb" business from the subtitle,
and no account of what happened to Barnes' airships. But that frustration only shows how well I got to know and like these people, and underscores that this curious book is well worth reading.
Fernando Q. Gouvêa is the Secret Master of MAA Reviews, the editor of FOCUS online, and Professor of Mathematics at Colby College in Waterville, ME. He thinks trigonometry is more like rice cakes and
calculus is more like rice and beans. It is number theory that is chocolate meringues, while history of mathematics is regular meringues.
Advances in Mathematical Physics
Volume 2010 (2010), Article ID 945460, 11 pages
Review Article
Two Versions of the Projection Postulate: From EPR Argument to One-Way Quantum Computing and Teleportation
School of Mathematics and Systems Engineering, International Center of Mathematical Modeling in Physics and Cognitive Sciences, Växjö University, 35195, Sweden
Received 17 August 2009; Accepted 29 December 2009
Academic Editor: Shao-Ming Fei
Copyright © 2010 Andrei Khrennikov. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any
medium, provided the original work is properly cited.
Nowadays it is practically forgotten that for observables with degenerate spectra the original von Neumann projection postulate differs crucially from the version of the projection postulate which
was later formalized by Lüders. The latter (and not that due to von Neumann) plays the crucial role in the basic constructions of quantum information theory. We start this paper with the presentation
of the notions related to the projection postulate. Then we remind that the argument of Einstein-Podolsky-Rosen against completeness of QM was based on the version of the projection postulate which
is nowadays called Lüders postulate. Then we recall that all basic measurements on composite systems are represented by observables with degenerate spectra. This implies that the difference in the
formulation of the projection postulate (due to von Neumann and Lüders) should be taken into account seriously in the analysis of the basic constructions of quantum information theory. This paper is
a review devoted to such an analysis.
1. Introduction
We recall that for observables with nondegenerate spectra the two versions of the projection postulate, see von Neumann [1] and Lüders [2], coincide. We restrict our considerations to observables
with purely discrete spectra. In this case each pure state is projected as the result of measurement onto another pure state, the corresponding eigenvector. Lüders postulated that the situation is
not changed even in the case of degenerate spectra; see [2]. By projecting a pure state we obtain again a pure state, the orthogonal projection on the corresponding eigen-subspace. However, von
Neumann pointed out that in general the postmeasurement state is not pure, it is a mixed state. The difference is crucial! And it is surprising that so little attention was paid up to now to this
important problem. It is especially surprising if one takes into account the fundamental role which is played by the projection postulate in quantum information (QI) theory. QI is approaching the
stage of technological verification and the absence of a detailed analysis of the mentioned problem is a weak point in its foundations.
This paper is devoted to such an analysis. We start with a short recollection of the basic facts on the projection postulates and conditional probabilities in QM. Then we analyze the EPR argument
against completeness of QM [3]. Since Einstein et al. proceeded on the physical level of rigorousness, it is a difficult task to extract from their considerations which version of the projection
postulate was used. We did this in [4, 5]. Now we shortly recall our previous analysis of the EPR argument. We will see that they really applied the Lüders postulate. They used the fact that a
measurement on a composite system transforms a pure state into another pure state, the orthogonal projection of the original state. The formal application of the original von Neumann postulate blocks
the EPR considerations completely.
We analyze the quantum teleportation scheme. We will see again that it is fundamentally based on the use of the Lüders postulate. The formal application of the von Neumann postulate blocks the
teleportation type schemes; see for more detail [6].
Finally, we remark that "one-way quantum computing," for example, [7–9] (an exciting alternative to the conventional scheme of quantum computing), is irreducibly based on the use of the Lüders projection postulate.
The results of this analysis ought to be an alarm signal for people working in the quantum foundations. If one assumes that von Neumann was right, but Lüders as well as Einstein et al. were wrong,
then all basic schemes of QI should be reanalysed. However, a deeper study of von Neumann’s considerations on the projection postulate [1] shows that, in fact, under quite natural conditions the von
Neumann postulate implies the Lüders postulate. The detailed (rather long and technical) proof of this unexpected result can be found in preprint [10]. In this paper we just formulate the above
mentioned conditions and the theorem on the reduction of one postulate to another. Thus the basic QI schemes seem to be safe in their appeal to the Lüders version of the projection postulate.
However, additional analysis is still needed to understand the adequacy of the conditions of the theorem on the reduction of one postulate to another to the original considerations of von Neumann
in his book [1]. He wrote on the physical level of rigorousness. To make a mathematically rigorous reformulation of his arguments is not an easy task!
The main conclusion of the present paper is that the study of the foundations of QM and QI is far from being completed; see also the recent monograph of Jaeger [11]. (We can also point to the recent
study on teleportation of Asano et al. [12]. It is the teleportation scheme in the infinite-dimensional Hilbert space, known as Kossakowski-Ohya scheme. It would be interesting to analyze this scheme
to understand the role of the projection postulate in its realization. We emphasize that measurements on composite systems are the crucial point of QI.) We remark that the operational approach to
QM, see, for example, [13], considers not only the von Neumann and Lüders versions of the projection postulate, but general theory of postmeasurement states. Formally, one may say that from the
viewpoint of the operational approach it is not surprising that, for example, the von Neumann postulate can be violated for some measurement. It is neither surprising that even both projection
postulates can be violated. But this viewpoint is correct only on the level of formal mathematical considerations. If we turn to the real physical situation, that is, experiments, we should carefully
analyze concrete experiments to understand which type of postmeasurement state is produced. Finally, we mention the viewpoint of De Muynck [14, 15] who emphasized that all projection type postulates
are merely about conditional probabilities. In principle, I agree with him, compare with my recent monograph [16]. However, experimenters are interested not only in probabilities of results of
measurements, but also in the postmeasurement states. We can mention the quantum teleportation schemes or one-way quantum computing.
2. Projection Postulate
2.1. Nondegenerate Case
Everywhere below $H$ denotes a complex Hilbert space. Let $\psi \in H$ be a pure state, that is, $\|\psi\| = 1$. We remark that any pure state $\psi$ induces a density operator
$$\rho_\psi = P_\psi,$$
where $P_\psi$ denotes the orthogonal projector onto the vector $\psi$. This operator describes an ensemble of identically prepared systems, each of them in the same state $\psi$.
For an observable $a$ represented by the self-adjoint operator $\hat{a}$ with nondegenerate spectrum, von Neumann's and Lüders' projection postulates coincide. For simplicity we restrict our considerations to operators with
purely discrete spectra. In this case the spectrum consists of the eigenvalues $\alpha_m$ of $\hat{a}$. Nondegeneracy of the spectrum means that the subspaces consisting of eigenvectors corresponding to different eigenvalues
are one dimensional. The following definition was formulated by von Neumann [1] in the case of nondegenerate spectrum. It coincides with Lüders' definition (we remark once again that Lüders did not
distinguish the cases of degenerate and nondegenerate spectra).
PP: Let A be an observable described by a self-adjoint operator having purely discrete nondegenerate spectrum, with eigenvalues α_m and corresponding normalized eigenvectors e_m. Measurement of the observable A on a system in the (pure) quantum state ψ producing the result α_m induces the transition from the state ψ into the corresponding eigenvector e_m of the operator A.
If we select only systems with the fixed measurement result α_m, then we obtain an ensemble described by the density operator ρ_m = P_{e_m}. Any system in this ensemble is in the same state e_m. If we do not perform selections, we obtain an ensemble described by the density operator

ρ' = Σ_m |⟨ψ, e_m⟩|² P_{e_m},

where P_{e_m} is the projector on the eigenvector e_m.
2.2. Degenerate Case
Lüders generalized this postulate to the case of operators having degenerate spectra. Let us consider the spectral decomposition for a self-adjoint operator A having purely discrete spectrum:

A = Σ_m α_m P_m,

where the α_m are the different eigenvalues of A (so α_m ≠ α_{m'} for m ≠ m'), and P_m is the projector onto the subspace H_m of eigenvectors corresponding to α_m.
By Lüders' postulate, after a measurement of an observable represented by the operator A that gives the result α_m, the initial pure state ψ is transformed again into a pure state, namely,

ψ_m = P_m ψ / ‖P_m ψ‖.

Thus for the corresponding density operator we have ρ_m = P_{ψ_m}.
If one does not make selections corresponding to the values α_m, the final postmeasurement state is given by ρ' = Σ_m P_m P_ψ P_m, or simply ρ' = Σ_m ‖P_m ψ‖² P_{ψ_m}. This is the statistical mixture of the pure states ψ_m. Thus by Lüders there is no essential difference between measurements of observables with degenerate and nondegenerate spectra.
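The Lüders rules are easy to check numerically. The sketch below (plain NumPy; the 3-dimensional example and the state vector are my own illustration, not from the paper) projects a pure state into a doubly degenerate eigenspace and forms the non-selective mixture:

```python
import numpy as np

# Toy model in C^3: observable A has a doubly degenerate eigenvalue a1
# with eigenspace span{e0, e1} and a simple eigenvalue a2 with span{e2}.
P1 = np.diag([1.0, 1.0, 0.0])   # projector onto the degenerate eigenspace
P2 = np.diag([0.0, 0.0, 1.0])

psi = np.array([0.6, 0.48, 0.64])   # a normalized pure state
rho = np.outer(psi, psi)            # the density operator P_psi

# Luders, with selection of the result a1: project and renormalize.
proj = P1 @ psi
p1 = float(proj @ proj)             # Born probability of the result a1
psi1 = proj / np.sqrt(p1)           # Luders post-measurement state

# Luders, without selection: the mixture sum_k P_k rho P_k (unit trace).
rho_mix = P1 @ rho @ P1 + P2 @ rho @ P2
```

The post-measurement state `psi1` lies entirely in the selected eigenspace, and `rho_mix` is again a density operator.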
von Neumann had a completely different viewpoint on the postmeasurement state [1]. Even for a pure state ψ, the postmeasurement state (for a measurement with selection with respect to a fixed result α_m) will not be a pure state again. If A has degenerate (discrete) spectrum, then according to von Neumann [1]:
A measurement of an observable A giving the value α_m does not induce a projection of ψ onto the subspace H_m.
The result will not be a fixed pure state, in particular not Lüders' state ψ_m. Moreover, the postmeasurement state, say ρ_m, is not uniquely determined by the formalism of QM! Only a subsequent measurement of an observable D such that A = f(D) and D is an operator with nondegenerate spectrum (a refinement measurement) will determine the final state.
Following von Neumann, we choose an orthonormal basis {e_{mi}} in each subspace H_m. Let us take a sequence of real numbers {d_{mi}} such that all these numbers are distinct. We define the corresponding self-adjoint operator D having eigenvectors e_{mi} and eigenvalues d_{mi}:

D = Σ_{m,i} d_{mi} P_{e_{mi}}.

A measurement of the observable represented by the operator D can be considered as a measurement of the observable A, because A = f(D), where f is some function such that f(d_{mi}) = α_m. The D-measurement (without postmeasurement selection with respect to eigenvalues) produces the statistical mixture

ρ' = Σ_{m,i} |⟨ψ, e_{mi}⟩|² P_{e_{mi}}.

By selection for the value α_m of A (its measurements realized via measurements of a refinement observable D) we obtain the statistical mixture described by normalization of the operator

Σ_i |⟨ψ, e_{mi}⟩|² P_{e_{mi}}.
von Neumann emphasized that the mathematical formalism of QM cannot describe in a unique way the postmeasurement state for measurements (without refinement) in the case of degenerate observables. He did not discuss the properties of such states directly; he described them only indirectly, via refinement measurements. (For him this state was a kind of hidden variable. It might even be that he had in mind that it “does not exist at all,” i.e., that it could not be described by a density operator.) We would like to proceed by considering this (hidden) state under the assumption that it can be described by a density operator, say ρ_m. We formalize a list of properties of this hidden (postmeasurement) state, each of which can be extracted from von Neumann's considerations on refinement measurements. Finally, we prove, see Theorem 5.3, that ρ_m should coincide with the postmeasurement state postulated by Lüders in [2].
Consider the A-measurement without refinement. By von Neumann, for each quantum system in the initial pure state ψ, the A-measurement with the α_m-selection transforms ψ into one of the states belonging to the eigensubspace H_m. Unlike Lüders' approach, this implies that, instead of one fixed state, namely ψ_m, such an experiment produces a probability distribution of states on the unit sphere of the subspace H_m.
3. von Neumann’s Viewpoint on the EPR Experiment
Consider a composite system s = (s1, s2) with state space H = H1 ⊗ H2. Let A and B be observables of s1 and s2, represented by operators with purely discrete nondegenerate spectra, eigenvectors e_n and f_m, and eigenvalues α_n and β_m:

A e_n = α_n e_n,  B f_m = β_m f_m.

Any state Ψ ∈ H can be represented as

Ψ = Σ_{n,m} c_{nm} e_n ⊗ f_m,

where Σ_{n,m} |c_{nm}|² = 1. Einstein, Podolsky, and Rosen claimed that a measurement of A giving the result α_n induces a projection of Ψ onto one of the states e_n ⊗ g_n, where g_n is proportional to Σ_m c_{nm} f_m.
In particular, for a state of the form

Ψ = Σ_n c_n e_n ⊗ f_n,

one of the states e_n ⊗ f_n is created.
Thus by performing a measurement on s1 with the result α_n the “element of reality” β_n is assigned to s2. This is the crucial point of the considerations of Einstein et al. [3]. Now, by selecting another observable, say A', acting on s1, we can repeat our considerations for the corresponding operators. This operator induces another decomposition of the state Ψ. Another element of reality can be assigned to the same system s2. If the operators A and A' do not commute, then the observables A and A' are incompatible.
Nevertheless, EPR were able to assign to the system s2 elements of reality corresponding to these observables. This contradicts the postulate of QM that such an assignment is impossible (because of the Heisenberg uncertainty relations). To resolve this paradox, EPR proposed that QM is incomplete; that is, in spite of Heisenberg's uncertainty relation, two elements of reality corresponding to incompatible observables can be assigned to a single system. As an absurd alternative to incompleteness, they considered the possibility of action at a distance: by performing a measurement on s1 we change the state of s2 and assign it a new element of reality.
However, the EPR considerations did not match von Neumann's projection postulate, because, as an operator on the state space of the composite system, the observable measured on s1 has degenerate spectrum. Thus, by von Neumann, to obtain an element of reality one should perform a measurement of a “nonlocal observable” given by a nonlocal refinement of the local observables of, for example, s1 and s2.
Finally (after the consideration of operators with discrete spectra), Einstein et al. considered the operators of position and momentum, which have continuous spectra. According to von Neumann [1], one should proceed by approximating operators with continuous spectra by operators with discrete spectra.
In Section 5 we will show that, under quite natural conditions, von Neumann's postulate implies Lüders' postulate, even for observables with degenerate spectra. This closes the “loophole” in the EPR considerations.
4. von Neumann’s Viewpoint on the Canonical Teleportation Scheme
We will proceed through the quantum teleportation scheme, see, for example, [11], and point to applications of the projection postulate. In this section, following the QI tradition, we will use Dirac's symbols to denote the states of systems. There are Alice (A) and Bob (B), and Alice has a qubit in some arbitrary quantum state |ψ⟩. Assume that this quantum state is not known to Alice and she would like to send this state to Bob. Suppose Alice has a qubit that she wants to teleport to Bob. This qubit can be written generally as |ψ⟩ = α|0⟩ + β|1⟩.
The quantum teleportation scheme requires Alice and Bob to share a maximally entangled state beforehand, for instance, one of the four Bell states:

|Φ+⟩ = (|00⟩ + |11⟩)/√2, |Φ−⟩ = (|00⟩ − |11⟩)/√2, |Ψ+⟩ = (|01⟩ + |10⟩)/√2, |Ψ−⟩ = (|01⟩ − |10⟩)/√2.

Alice takes one of the particles in the pair, and Bob keeps the other one. We will assume that Alice and Bob share the entangled state |Φ+⟩. So, Alice has two particles (the one she wants to teleport, and one of the entangled pair), and Bob has one particle. In the total system, the state of these three particles is given by

|ψ⟩ ⊗ |Φ+⟩ = (α|0⟩ + β|1⟩) ⊗ (|00⟩ + |11⟩)/√2.

Alice will then make a partial measurement in the Bell basis on the two qubits in her possession. To make the result of her measurement clear, we rewrite the two qubits of Alice in the Bell basis via the following general identities (these can be easily verified):

|00⟩ = (|Φ+⟩ + |Φ−⟩)/√2, |01⟩ = (|Ψ+⟩ + |Ψ−⟩)/√2, |10⟩ = (|Ψ+⟩ − |Ψ−⟩)/√2, |11⟩ = (|Φ+⟩ − |Φ−⟩)/√2.

Evidently the result of her (local) measurement is that the three-particle state collapses to one of the following four states (with equal probability of obtaining each):

|Φ+⟩ ⊗ (α|0⟩ + β|1⟩), |Φ−⟩ ⊗ (α|0⟩ − β|1⟩), |Ψ+⟩ ⊗ (α|1⟩ + β|0⟩), |Ψ−⟩ ⊗ (α|1⟩ − β|0⟩).

The four possible states for Bob's qubit are unitary images of the state to be teleported. The crucial step, the local measurement done by Alice in the Bell basis, is done. It is clear how to proceed further. Alice now has complete knowledge of the state of the three particles; the result of her Bell measurement tells her which of the four states the system is in. She simply has to send her results to Bob through a classical channel. Two classical bits can communicate which of the four results she obtained.
After Bob receives the message from Alice, he will know which of the four states his particle is in. Using this information, he performs a unitary operation on his particle to transform it to the desired state |ψ⟩.
If Alice indicates that her result is |Φ+⟩, Bob knows that his qubit is already in the desired state and does nothing. This amounts to the trivial unitary operation, the identity operator.
If the message indicates |Φ−⟩, Bob sends his qubit through the unitary gate given by the Pauli matrix σ_z to recover the state. If Alice's message corresponds to |Ψ+⟩, Bob applies the σ_x gate to his qubit. Finally, for the remaining case |Ψ−⟩, the appropriate gate is given by the product σ_z σ_x. Teleportation is thereby achieved.
The main problem is that Alice's measurement is represented by a degenerate operator in the 3-qubit space: it is nondegenerate with respect to her 2 qubits, but not in the total space. Thus the standard conclusion that, by obtaining, for example, the result |Φ+⟩, Alice can be sure that Bob obtained the right state, does not match quantum measurement theory. According to von Neumann, to get this state Bob should perform a refinement measurement. In order to perform it, Bob should know the state |ψ⟩. Thus from von Neumann's viewpoint there is a loophole in the quantum teleportation scheme. It will be closed (under quite natural conditions) in the next section.
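As a sanity check on the algebra of the scheme, here is a small NumPy sketch (my own illustration; the amplitudes and variable names are invented). It projects Alice's two qubits onto each Bell state and verifies that Bob's qubit, after the stated Pauli correction, has unit fidelity with the teleported state:

```python
import numpy as np

# Single-qubit basis states and Pauli gates.
ket0 = np.array([1, 0], dtype=complex)
ket1 = np.array([0, 1], dtype=complex)
I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

# An arbitrary (invented) qubit to teleport: |psi> = a|0> + b|1>.
a, b = 0.6, 0.8j
psi = a * ket0 + b * ket1

# Shared Bell pair |Phi+> between Alice's second qubit and Bob's qubit.
phi_plus = (np.kron(ket0, ket0) + np.kron(ket1, ket1)) / np.sqrt(2)

# Three-qubit state: qubits 1, 2 are Alice's, qubit 3 is Bob's.
total = np.kron(psi, phi_plus)

# The four Bell states on Alice's pair, with Bob's correction gates.
bell = {
    "Phi+": ((np.kron(ket0, ket0) + np.kron(ket1, ket1)) / np.sqrt(2), I2),
    "Phi-": ((np.kron(ket0, ket0) - np.kron(ket1, ket1)) / np.sqrt(2), Z),
    "Psi+": ((np.kron(ket0, ket1) + np.kron(ket1, ket0)) / np.sqrt(2), X),
    "Psi-": ((np.kron(ket0, ket1) - np.kron(ket1, ket0)) / np.sqrt(2), Z @ X),
}

results = {}
for name, (b_vec, gate) in bell.items():
    # <Bell| on qubits 1, 2 (a 2x8 matrix) leaves Bob's conditional state.
    proj = np.kron(b_vec.conj().reshape(1, 4), I2)  # shape (2, 8)
    bob = proj @ total
    prob = np.vdot(bob, bob).real                   # each outcome: 1/4
    bob = gate @ (bob / np.sqrt(prob))              # classical correction
    fidelity = abs(np.vdot(psi, bob)) ** 2          # should be 1
    results[name] = (prob, fidelity)
```

Each of the four outcomes occurs with probability 1/4, and each corrected state reproduces the input qubit exactly.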
5. Reduction of von Neumann’s Postulate to Lüders’ Postulate
In this section we try to formalize von Neumann’s considerations on the measurement of observables with degenerate spectra.
Consider an A-measurement without refinement. By von Neumann, for each quantum system in the initial pure state ψ, the A-measurement with the α_m-selection transforms ψ into one of the states belonging to the eigensubspace H_m. This implies that, instead of one fixed state, namely ψ_m = P_m ψ/‖P_m ψ‖, such an experiment produces a probability distribution of states on the unit sphere of the subspace H_m.
We postulate (this is one of the steps in the formalization of von Neumann's considerations):
DO: For any value α_m such that ⟨P_m ψ, ψ⟩ ≠ 0, the postmeasurement probability distribution on the unit sphere of H_m can be described by a density operator, say ρ̄_m, acting in H_m.
Consider now the corresponding density operator ρ_m in H. Its restriction to H_m coincides with ρ̄_m; in particular, this implies its property (5.1). We remark that ρ_m is determined by the initial state ψ, so ρ_m = ρ_m(ψ).
We would like to present the list of other properties of ρ_m induced by von Neumann's considerations on refinement. Since, for each refinement measurement D, the operators A and D commute, the measurement of A with the refinement D can be performed in two ways. First we perform the D-measurement and then obtain the value of A as α_m = f(d_{mi}). However, we can also first perform the A-measurement, obtain the postmeasurement state described by the density operator ρ_m, then measure D and, finally, we again find the value of A.
Take an arbitrary unit vector e ∈ H_m and consider a refinement measurement D such that e is an eigenvector of D; thus D e = d e. Then for the two cases — [direct measurement of D] and [first A and then D] — we get probabilities which are coupled in a simple way. In the first case (by Born's rule)

P(D = d) = |⟨ψ, e⟩|².

In the second case, after the A-measurement, we obtain the state ρ_m with probability

P(A = α_m) = ‖P_m ψ‖².

Performing the D-measurement for the state ρ_m, we get the value d with probability P(D = d | A = α_m) = ⟨ρ_m e, e⟩. By (classical) Bayes' rule, we have P(D = d) = P(A = α_m) P(D = d | A = α_m). Finally, we obtain

⟨ρ_m e, e⟩ = |⟨ψ, e⟩|² / ‖P_m ψ‖².   (5.7)

This is one of the basic features of the postmeasurement state (for the A-measurement with the α_m-selection, but without any refinement). Another basic equality we obtain in the following way. Take an arbitrary unit vector y orthogonal to H_m and consider a measurement of the observable described by the orthogonal projector P_y in the state ρ_m. Since the latter describes a probability distribution concentrated on H_m, we have

⟨ρ_m y, y⟩ = 0.   (5.9)

This is the second basic feature of the postmeasurement state. Our aim is to show that (5.7) and (5.9) imply that, in fact, ρ_m = P_{ψ_m}, that is, to derive Lüders' postulate, which is a theorem in our approach.
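A quick numerical illustration (entirely my own construction, not from the paper): taking ρ to be the projector onto the normalized vector P_m ψ/‖P_m ψ‖ (the Lüders state), one can check that it satisfies both features — ⟨ρe, e⟩ = |⟨ψ, e⟩|²/‖P_m ψ‖² for a unit vector e in the eigenspace, and ⟨ρy, y⟩ = 0 for y orthogonal to it:

```python
import numpy as np

rng = np.random.default_rng(0)

# H = C^4; the degenerate eigenspace H_m = span{e0, e1}.
P_m = np.diag([1.0, 1.0, 0.0, 0.0])

psi = rng.normal(size=4) + 1j * rng.normal(size=4)
psi = psi / np.linalg.norm(psi)

proj = P_m @ psi
psi_m = proj / np.linalg.norm(proj)        # Luders state
rho = np.outer(psi_m, psi_m.conj())        # candidate rho_m

# Feature (5.7): for a random unit vector e in H_m,
# <rho e, e> = |<psi, e>|^2 / ||P_m psi||^2.
e = np.zeros(4, dtype=complex)
e[:2] = rng.normal(size=2) + 1j * rng.normal(size=2)
e = e / np.linalg.norm(e)
lhs = (e.conj() @ rho @ e).real
rhs = abs(np.vdot(e, psi)) ** 2 / np.linalg.norm(proj) ** 2

# Feature (5.9): for y orthogonal to H_m, <rho y, y> = 0.
y = np.array([0.0, 0.0, 1.0, 0.0], dtype=complex)
zero = abs(y.conj() @ rho @ y)
```

The theorem asserts the converse direction: these two features, together with positivity and unit trace, already force ρ to be this projector.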
Lemma 5.1. The postmeasurement density operator ρ_m maps H into H_m.
Proof. By (5.1) it is sufficient to show that ρ_m y = 0 for any y ∈ H_m^⊥. By (5.9) we obtain ⟨ρ_m y, y⟩ = 0 for any y ∈ H_m^⊥. Since ρ_m is a positive operator, this immediately implies that ⟨ρ_m y_1, y_2⟩ = 0 for any pair of vectors y_1, y_2 from H_m^⊥. The latter implies that ρ_m y = 0 for any y ∈ H_m^⊥.
Consider now the A-measurement without refinement and selection. The postmeasurement state can be represented as

ρ' = Σ_m ‖P_m ψ‖² ρ_m.

Proposition 5.2. For any pure state ψ and self-adjoint operator A with purely discrete (degenerate) spectrum, the postmeasurement state (in the absence of a refinement measurement) can be represented as

ρ' = Σ_m ‖P_m ψ‖² ρ_m,

where each ρ_m is a density operator with ρ_m(H) ⊂ H_m and, for any unit vector e ∈ H_m,

⟨ρ_m e, e⟩ = |⟨ψ, e⟩|² / ‖P_m ψ‖².

Theorem 5.3. Let ρ_m be a density operator described by Proposition 5.2. Then ρ_m = P_{ψ_m}; that is, ρ_m coincides with Lüders' postmeasurement state.
6. Conclusion
We performed a comparative analysis of two versions of the projection postulate, due to von Neumann and Lüders. We recalled that for observables with degenerate spectra these versions imply consequences which are, at least formally, different. In the case of a composite system, any measurement on a single subsystem is represented by an operator with degenerate spectrum. Such measurements play a fundamental role in quantum foundations and quantum information: from the original EPR argument to schemes of quantum teleportation and quantum computing. We formulated natural conditions reducing the von Neumann projection postulate to the Lüders projection postulate; see Theorem 5.3. This theorem closes the mentioned loopholes in QI schemes. However, the conditions of this theorem are a subject for further analysis.
1. J. von Neumann, Mathematische Grundlagen der Quantenmechanik, Springer, Berlin, Germany, 1932.
2. G. Lüders, “Über die Zustandsänderung durch den Meßprozeß,” Ann. Phys. Lpz., vol. 8, p. 322, 1951.
3. A. Einstein, B. Podolsky, and N. Rosen, “Can quantum-mechanical description of physical reality be considered complete?” Physical Review, vol. 47, p. 777, 1935.
4. A. Khrennikov, “The role of von Neumann and Lüders postulates in the Einstein, Podolsky, and Rosen considerations: comparing measurements with degenerate and nondegenerate spectra,” Journal of Mathematical Physics, vol. 49, no. 5, Article ID 052102, 5 pages, 2008.
5. A. Khrennikov, “EPR “Paradox”, projection postulate, time synchronization “nonlocality”,” International Journal of Quantum Information, vol. 7, no. 1, pp. 71–78, 2009.
6. A. Khrennikov, “Analysis of the role of von Neumann's projection postulate in the canonical scheme of quantum teleportation,” Journal of Russian Laser Research, vol. 29, no. 3, pp. 296–301, 2008.
7. R. Raussendorf and H. J. Briegel, “A one-way quantum computer,” Physical Review Letters, vol. 86, no. 22, pp. 5188–5191, 2001.
8. G. Vallone, E. Pomarico, F. De Martini, and P. Mataloni, “One-way quantum computation with two-photon multiqubit cluster states,” Physical Review A, vol. 78, no. 4, Article ID 042335, 2008.
9. N. C. Menicucci, S. T. Flammia, and O. Pfister, “One-way quantum computing in the optical frequency comb,” Physical Review Letters, vol. 101, no. 13, Article ID 130501, 2008.
10. A. Khrennikov, “The role of von Neumann and Lüders postulates in the EPR-Bohm-Bell considerations: did EPR make a mistake?” http://arxiv.org/abs/0801.0419.
11. G. Jaeger, Quantum Information: An Overview, Springer, New York, NY, USA, 2007.
12. M. Asano, M. Ohya, and Y. Tanaka, “Complete m-level teleportation based on Kossakowski-Ohya scheme,” in Proceedings of QBIC-2, Quantum Probability and White Noise Analysis, vol. 24, pp. 19–29,
13. M. Ozawa, “Conditional probability and a posteriori states in quantum mechanics,” Publications of Research Institute for Mathematical Sciences, vol. 21, no. 2, pp. 279–295, 1985.
14. W. M. De Muynck, “Interpretations of quantum mechanics, and interpretations of violations of Bell's inequality,” Quantum Probability and White Noise Analysis, vol. 13, pp. 95–104, 2001.
15. W. M. De Muynck, Foundations of Quantum Mechanics, an Empiricist Approach, Kluwer Academic Publishers, Dordrecht, The Netherlands, 2002.
16. A. Khrennikov, Contextual Approach to Quantum Formalism, Springer, Berlin, Germany, 2009.
Math Forum Discussions
Topic: Understanding Confidence Intervals. Please comment.
Replies: 1 Last Post: Nov 9, 2009 3:20 PM
Bacle
Posted: Nov 9, 2009 2:30 AM
Could someone please tell me if I am understanding confidence intervals correctly? Here is a problem I am trying to answer. (I will mark my answers with a ------- to make it easier to recognize. Please feel free to check just one or two of the answers if this seems too long.) I would appreciate your comments:
Here is the problem:
Software analysis of the salaries of a random sample of 288 Nevada teachers produced the confidence interval shown below. Which conclusion is correct? What's wrong with the others?
t-Interval for m: with 90.00% Confidence, 38944 < m(TchPay) < 42893
a)If we took many random samples of Nevada teachers, about 9 out of 10 of them would produce this confidence interval.
a)False: the confidence interval would depend on the value of sampling mean. Since we are using t-intervals, we must be using the sample error, which makes intervals even more
variable than if we knew the true pop. standard deviation.
All we can say is that there is a 95% probability that
the true average salary lies in a 95% confidence interval, whatever interval we construct.
b)If we took many random samples of Nevada teachers, about 9 out of 10 of them would produce a confidence interval that contained the mean salary of all Nevada teachers.
True, if we constructed 95% confidence t-intervals with the sampling data given.
c)About 9 out of 10 Nevada teachers earn between $38,944 and $42,893.
False. The confidence interval is about the true population mean, about the probability that the true mean lies in the interval, not about the probability that a teacher earns an
amount in this range.
d)About 9 out of 10 of the teachers surveyed earn between $38,944 and $42,893.
e)We are 90% confident that the average teacher salary in the United States is between $38,944 and $42,893.
e)True. This is the actual meaning of a confidence interval.
Thanks For Any Comments.
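The frequentist reading of the interval (statement b above) can be checked by simulation. The sketch below is my own illustration: the population mean and standard deviation are invented, and the t quantile for df = 287 is approximated; across many samples of size 288, about 9 in 10 of the resulting 90% intervals cover the true mean.

```python
import random
import statistics

random.seed(1)
true_mean, sigma, n = 41000.0, 9000.0, 288
t_crit = 1.650  # approx. two-sided t quantile for 90% confidence, df = 287

covered = 0
trials = 2000
for _ in range(trials):
    sample = [random.gauss(true_mean, sigma) for _ in range(n)]
    m = statistics.mean(sample)
    se = statistics.stdev(sample) / n ** 0.5
    if m - t_crit * se <= true_mean <= m + t_crit * se:
        covered += 1

coverage = covered / trials  # close to 0.90
```

Note that the confidence statement is about the procedure's long-run coverage, not about any one computed interval.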
Date Subject Author
11/9/09 Understanding Confidence Intervals. Please comment. Bacle
11/9/09 Re: Understanding Confidence Intervals. Please comment. Richard Ulrich
null system
A null system in a triangulated category is a triangulated subcategory whose objects may consistently be regarded as being equivalent to the zero object. Null systems give a convenient means for
encoding and computing localization of triangulated categories.
A null system of a triangulated category $C$ is a full subcategory $N \subset C$ such that
• $N$ is saturated: every object $X$ in $C$ which is isomorphic in $C$ to an object in $N$ is in $N$;
• the zero object is in $N$;
• $X$ is in $N$ precisely if $T X$ is in $N$;
• if $X \to Y \to Z \to T X$ is a distinguished triangle in $C$ with $X, Z \in N$, then also $Y \in N$.
The point about null systems is the following:
for $N$ a null system, let $N Q$ be the collection of all morphisms in $C$ whose “mapping cone” is in $N$, precisely: set
$N Q := \{ X \stackrel{f}{\to} Y | \exists dist. tri. X \to Y \to Z in C with Z \in N\} \,.$
Then $N Q$ admits a left and right calculus of fractions in $C$.
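The motivating example (standard in the literature, e.g. Kashiwara–Schapira, though not spelled out above) illustrates the point:

```latex
% Let A be an abelian category and C = K(A) the homotopy category of
% complexes in A. The acyclic complexes form a null system:
N = \{ X \in K(A) \;\mid\; H^n(X) = 0 \text{ for all } n \},
% and the morphisms whose cone lies in N are exactly the
% quasi-isomorphisms:
N Q = \{ \text{quasi-isomorphisms in } K(A) \}.
% Localizing at N Q yields the derived category:
D(A) \;\simeq\; K(A)[N Q^{-1}].
```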
David Roberts: Would Serre classes fit in here? Perhaps that’s one step back.
For instance section 10.2 of
Revised on April 24, 2009 21:37:53 by Toby Bartels
ALEX Lesson Plans
Subject: Mathematics (7), or Science (7)
Title: SpongeBob RoundPants? What's the Chance?
Description: What are the chances of SpongeBob having kids with round pants? Working in cooperative learning groups, students explore the concept of probability. Using interactive websites, students
explore the possibilities of an organism having a particular trait by completing a virtual lab using Punnett squares. Students will apply their knowledge to predict possible outcomes of the offspring
of the residents of Bikini Bottom.
Subject: Mathematics (6 - 8)
Title: How Tall Is Hagrid?
Description: This activity uses data collection method for students to mathematically compute the height and shoulder width of the character Hagrid from Harry Potter. Students will measure their own
heights and shoulder widths to come up with a class average. They will use this average to find an approximation of the size of Hagrid. This lesson plan was created as a result of the Girls Engaged in
Math and Science University, GEMS-U Project.
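The computation the students perform reduces to averaging and scaling. In the sketch below, the class measurements are invented, and the scale factors follow the book's description of Hagrid as almost twice as tall and several times as wide as a normal person (treated here as approximate assumptions):

```python
# Invented class measurements, in centimeters.
class_heights_cm = [152, 149, 158, 161, 155, 150]
class_shoulders_cm = [36, 34, 38, 40, 37, 35]

avg_height = sum(class_heights_cm) / len(class_heights_cm)
avg_shoulder = sum(class_shoulders_cm) / len(class_shoulders_cm)

hagrid_height = 2 * avg_height      # "almost twice as tall"
hagrid_shoulder = 5 * avg_shoulder  # "five times as wide" (approximate)
```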
Subject: Mathematics (7)
Title: Wheel of Fortune and Probability
Description: This activity will lead students to discover a real life application of probability. This activity utilizes various skills, such as data organization, data analysis, and probability
computations. Students will work in cooperative groups to complete the lesson. This lesson plan was created as a result of the Girls Engaged in Math and Science University, GEMS-U Project.
Subject: Mathematics (4 - 7), or Science (5), or Technology Education (6 - 8)
Title: Questioning NASA
Description: In this lesson students will work collaboratively to explore the "Big Question" that led up to this lesson: "Why are there two solid rocket boosters used to launch the space shuttle instead of one with the same amount of fuel?" This lesson plan was created as a result of the Girls Engaged in Math and Science University, GEMS-U Project.
Subject: Mathematics (7 - 8), or Science (8)
Title: Transverse Waves
Description: Students will classify waves as mechanical or electromagnetic. Students will describe longitudinal and transverse waves. Students will show a transverse wave using a slinky. This lesson plan was created as a result of the Girls Engaged in Math and Science, GEMS Project funded by the Malone Family Foundation.
Subject: Mathematics (7 - 12), or Technology Education (9 - 12)
Title: Dice Roll Project
Description: This project is a fun way for students to observe the integration of a probability lesson with spreadsheet software. Students will record 36 rolls of a pair of dice. After they record
their data, students will manually calculate the mean, median, mode and range. Students will then observe how quickly a computer can do those same calculations and many more things with that same
data. Students will also compare experimental outcomes to the theoretical outcome.
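The project's calculations can be sketched without spreadsheet software. This is my own illustration of the described steps (36 rolls of a pair of dice, then mean, median, mode, and range, plus an experimental-vs-theoretical comparison for rolling a 7):

```python
import random
import statistics

random.seed(7)
rolls = [random.randint(1, 6) + random.randint(1, 6) for _ in range(36)]

mean_ = statistics.mean(rolls)
median_ = statistics.median(rolls)
mode_ = statistics.mode(rolls)
range_ = max(rolls) - min(rolls)

# Compare the observed frequency of a 7 with its theoretical probability.
observed_sevens = rolls.count(7) / 36
theoretical_sevens = 6 / 36  # six of the 36 ordered outcomes sum to 7
```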
Thinkfinity Lesson Plans
Subject: Mathematics
Title: Playing Games
Description: In this unit of five lessons, from Illuminations, students participate in activities in which they focus on the uses of numbers. The activities use the theme of games to develop concepts
of measurement and statistics. Students are asked to measure distances using standard and nonstandard units and to record their measurement in various tables. Then they are asked to use descriptive
statistics to report the results. These lessons include an individual activity for four different levels plus one for parents to complete with their child at home.
Thinkfinity Partner: Illuminations
Grade Span: K,PreK,1,2
Subject: Mathematics
Title: Shopping Mall Math
Description: In this two-lesson unit, from Illuminations, students participate in activities in which they develop number sense in and around the shopping mall. Two grade-level activities deal with
size and space, estimation, measurement and applications involving percent.
Thinkfinity Partner: Illuminations
Grade Span: 3,4,5,6,7,8
Subject: Mathematics
Title: Building Height
Description: In this Illuminations lesson, students use a clinometer (a measuring device built from a protractor) and isosceles right triangles to find the height of a building. The class compares
measurements, talks about the variation in their results, and selects the best measure of central tendency to report the most accurate height.
Thinkfinity Partner: Illuminations
Grade Span: 6,7,8
Subject: Mathematics
Title: Spinning Tops
Description: In this lesson, one of a multi-part unit from Illuminations, students participate in games and activities that develop concepts of measurement and statistics. Students are asked to
measure distances using standard and nonstandard units and to record their measurements in various tables. Then they are asked to use descriptive statistics to report the results.
Thinkfinity Partner: Illuminations
Grade Span: 6,7,8
Subject: Mathematics
Title: Birthdays and the Binary System: Exploring Binary Numbers in a Real-World Application
Description: This lesson, from Illuminations, revolves around patterns and place value in the binary system. Students are drawn into mathematics by the magical ability to guess an unknown number and
by the use of birthdays.
Thinkfinity Partner: Illuminations
Grade Span: 6,7,8
Subject: Mathematics,Social Studies
Title: State Names
Description: In this Illuminations lesson, students use multiple representations to analyze the frequency of letters that occur in the names of all 50 states. In the process, they learn how various
representations, including stem-and-leaf plots, box-and-whisker plots, and histograms, can be used to organize the data.
Thinkfinity Partner: Illuminations
Grade Span: 3,4,5,6,7,8
Subject: Mathematics
Title: Finding the Balance
Description: In this lesson for grades 7 and 8, one of a multi-part unit from Illuminations, students participate in activities in which they focus on patterns and relations that can be developed
from the exploration of balance, mass, length of the mass arm, and the position of the fulcrum.
Thinkfinity Partner: Illuminations
Grade Span: 6,7,8
Subject: Mathematics
Title: The Game of SKUNK
Description: In this lesson, from Illuminations, students practice decision-making skills while playing a dice game called Skunk. This allows them to develop a better understanding of mathematical
probability and of the concept of choice versus chance.
Thinkfinity Partner: Illuminations
Grade Span: 6,7,8
Subject: Mathematics
Title: Boxing Up
Description: In this lesson, from Illuminations, students explore the relationship between theoretical and experimental probabilities. They use an interactive box model that allows them to simulate
standard probability experiments such as flipping a coin or rolling a die.
Thinkfinity Partner: Illuminations
Grade Span: 6,7,8
Subject: Language Arts,Mathematics
Title: Can It Be?
Description: In this lesson, one of a multi-part unit from Illuminations, students participate in activities in which they focus on connections between mathematics and children s literature. They
listen to the story The Phantom Tollbooth, by Norton Juster, and then explore and interpret the concept of averages.
Thinkfinity Partner: Illuminations
Grade Span: 6,7,8
Subject: Mathematics
Title: Sticks and Stones
Description: In this Illuminations lesson, students play Sticks and Stones, a game based on the Apache game Throw Sticks, which was played at multi-nation celebrations. Students collect data,
investigate the likelihood of various moves, and use basic ideas of expected value to determine the average number of turns needed to win a game.
Thinkfinity Partner: Illuminations
Grade Span: 3,4,5,6,7,8
Subject: Mathematics,Science
Title: The Beat of Your Heart
Description: This unit of five lessons, from Illuminations, gives students the opportunity to explore applications involving their own heart. The lessons, which span grades Pre-K-8, focus on
measuring and data collection.
Thinkfinity Partner: Illuminations
Grade Span: K,PreK,1,2,3,4,5,6,7,8
Subject: Mathematics
Title: Count on Math
Description: In this unit of two lessons, from Illuminations, students develop number sense through activities involving collection, representation, and analysis of data. In addition, students
practice reading and writing large numbers and use estimation to arrive at appropriate answers.
Thinkfinity Partner: Illuminations
Grade Span: 3,4,5,6,7,8
Subject: Mathematics
Title: Sticks and Stones Demo
Description: This student interactive, from an Illuminations lesson, allows students to generate random throws for the game '' Sticks and Stones.'' In the game, three sticks are tossed and a player
moves his or her marker according to how the sticks land.
Thinkfinity Partner: Illuminations
Grade Span: 3,4,5,6,7,8
Subject: Health - Disease - Mathematics - Applied Mathematics - Science - Biology - Social Studies - Geography
Title: Africa's Struggle With AIDS
Description: In this Xpeditions lesson, students come to understand the enormity of the impact of AIDS on the population of Africa by comparing its effect there with its effect on the population of
the world in general, and especially on that of the United States. After locating Africa on a world map, and individual sub-Saharan nations on a map of Africa, students examine charts and graphs to
find and compare data about AIDS in Africa, the world, and the United States.
Thinkfinity Partner: National Geographic Education
Grade Span: 6,7,8
Subject: Mathematics,Science
Title: Travel in the Solar System: Lesson 2
Description: In this lesson, one of a multi-part unit from Illuminations, students consider the amount of time that space travelers need to travel to the four terrestrial planets. Students also think
about what kinds of events might occur on Earth while the space travelers are on their journey.
Thinkfinity Partner: Illuminations
Grade Span: 6,7,8
Subject: Mathematics
Title: Combinations
Description: This unit of two lessons, from Illuminations, focuses on combinations, a subject related to the probability-and-statistics strand of mathematics. Students are encouraged to discover all
the combinations for a given situation using problem-solving skills (including elimination and collection of organized data) and drawing conclusions. The use of higher-level thinking skills
(synthesis, analysis, and evaluations) is the overall goal.
Thinkfinity Partner: Illuminations
Grade Span: 3,4,5,6,7,8
Subject: Mathematics
Title: Measuring Shadows
Description: In this Science NetLinks lesson, students determine the pattern (length and direction) of shadows cast by sunlight during a several month period. They develop an interpretation of the
daily and seasonal patterns and variations observed.
Thinkfinity Partner: Science NetLinks
Grade Span: 6,7,8
Subject: Mathematics
Title: A Swath of Red
Description: In this lesson, one of a multi-part unit from Illuminations, students estimate the area of the country that voted for the Republican candidate and the area that voted for the Democratic
candidate in the 2000 presidential election using a grid overlay. Students then compare the areas to the electoral and popular vote election results. Ratios of electoral votes to area are used to
make generalizations about the population distribution of the United States.
Thinkfinity Partner: Illuminations
Grade Span: 6,7,8
Subject: Mathematics
Title: Information Represented Graphically
Description: In this three-lesson unit, from Illuminations, students participate in activities in which they analyze information represented graphically. Students are asked to discuss, describe,
read, and write about the graphs and the information they contain.
Thinkfinity Partner: Illuminations
Grade Span: K,PreK,1,2,3,4,5,6,7,8
Subject: Mathematics
Title: Birthday Paradox
Description: This Illuminations lesson demonstrates the birthday paradox, using it as a springboard into a unit on probability. Students use the TI-83 graphing calculator to run a Monte Carlo
simulation with the birthday paradox and engage in a graphical analysis of the birthday-problem function.
Thinkfinity Partner: Illuminations
Grade Span: 6,7,8,9,10,11,12
Subject: Mathematics, Social Studies
Title: First Class First? Using Data to Explore the Tragedy of the Titanic
Description: In this Science NetLinks lesson, students analyze and interpret data related to the crew and passengers of the Titanic. They draw conclusions to better understand the people who were
lost or saved as a result of the disaster, and whether or not social status affected the outcome.
Thinkfinity Partner: Science NetLinks
Grade Span: 9,10,11,12
Subject: Mathematics
Title: Explorations with Chance
Description: In this lesson, from Illuminations, students analyze the fairness of certain games by examining the probabilities of the outcomes. The explorations provide opportunities for the learning
phases of predicting results, playing the games, and calculating probability ratios.
Thinkfinity Partner: Illuminations
Grade Span: 9,10,11,12
Subject: Mathematics
Title: Stick or Switch?
Description: This lesson, from Illuminations, presents a version of a classic game-show scenario. You pick one of three doors in hopes of winning the prize. The host opens one of the two remaining
doors, which reveals no prize, and then asks if you wish to stick or switch. Which choice gives you the best chance to win? Students explore different approaches to this problem including guesses,
experiments, computer simulations, and theoretical models.
Thinkfinity Partner: Illuminations
Grade Span: 6,7,8,9,10,11,12
Thinkfinity Learning Activities
Subject: Mathematics
Title: Marble Mania Facilitator Page
Description: This Science NetLinks Afterschool activity introduces kids to probability and chance with a fun interactive. By flipping coins and pulling marbles out of a virtual bag, afterschool
facilitators will help students begin to develop a basic understanding of probabilities, how they are determined, and how the outcome of an experiment can be affected by the number of times it is conducted.
Thinkfinity Partner: Science NetLinks
Grade Span: 3,4,5,6,7,8
Subject: Mathematics
Title: Marble Mania Student Page
Description: This Science NetLinks Afterschool activity introduces kids to probability and chance with a fun interactive. By pulling marbles out of a virtual bag, students begin to develop a basic
understanding of probabilities, how they are determined, and how the outcome of an experiment can be affected by the number of times it is conducted.
Thinkfinity Partner: Science NetLinks
Grade Span: 3,4,5,6,7,8
Subject: Mathematics
Title: Tower of Hanoi
Description: This student interactive, from Illuminations, presents a tower of three to 20 disks, initially stacked in increasing size on one of three pegs. The goal is to move all the disks
from the left peg to the right one using the smallest number of moves possible.
Thinkfinity Partner: Illuminations
Grade Span: 6,7,8,9,10,11,12
Subject: Mathematics
Title: Adjustable Spinner
Description: This student interactive, from Illuminations, allows students to create their own spinners and examine the outcomes given a specified number of spins. Students learn that experimental
probabilities differ according to the characteristics of the model.
Thinkfinity Partner: Illuminations
Grade Span: K,PreK,1,2,3,4,5,6,7,8,9,10,11,12
Subject: Mathematics
Title: Canada Data Map
Description: Investigate data for the Canadian provinces and territories with this interactive tool. Students can examine data sets contained within the interactive, or they can enter their own data.
Thinkfinity Partner: Illuminations
Grade Span: 3,4,5,6,7,8,9,10,11,12
Subject: Mathematics
Title: Fire
Description: In this student interactive, from Illuminations, students can see the results of a fire if a forest is densely planted in a rectangular grid. Students are able to choose a starting place
for the fire and enter the probability that a given tree will burn.
Thinkfinity Partner: Illuminations
Grade Span: 3,4,5,6,7,8,9,10,11,12
Subject: Mathematics
Title: Random Drawing Tool
Description: This student interactive, from Illuminations, allows students to explore the relationship between theoretical and experimental probabilities. Students use this "box model" as a
statistical device to simulate standard probability experiments such as flipping a coin or rolling a die.
Thinkfinity Partner: Illuminations
Grade Span: 3,4,5,6,7,8,9,10,11,12 | {"url":"http://alex.state.al.us/all.php?std_id=53948","timestamp":"2014-04-19T14:33:16Z","content_type":null,"content_length":"232942","record_id":"<urn:uuid:46220db8-a421-4b07-a40a-3d6dd73ef70e>","cc-path":"CC-MAIN-2014-15/segments/1397609537271.8/warc/CC-MAIN-20140416005217-00367-ip-10-147-4-33.ec2.internal.warc.gz"} |
Math Forum Discussions
Views expressed in these public forums are not endorsed by Drexel University or The Math Forum.
Topic: Sum of Interior Angles of Polygons: A general Formula, and other updates
Replies: 3 Last Post: Jan 31, 2012 3:35 PM
Sum of Interior Angles of Polygons: A general Formula, and other updates
Posted: Feb 1, 2011 5:39 AM
Just to announce for those interested that my homepage at http://mysite.mweb.co.za/residents/profmd/homepage4.html has been updated with the following new items:
1) PME (1989) paper "A comparative study of two Van Hiele testing instruments"
2) Feb 2011, Math e-Newsletter with info regarding new books, websites, conferences, etc.
3) mathematical/mathematics education quote
4) mathematics/science cartoon
My dynamic geometry sketches Link at http://math.kennesaw.edu/~mdevilli/JavaGSPLinks.htm has been updated with the following (new & revised) sketches:
1) De Villiers points & Hyperbola of a triangle (updated)
2) A generalization of Neuberg & Simson line (updated)
3) Interior Angle Sum of Polygons: A general formula to include crossed ones (new)
4) Some unproved conjectures (updated)
and the Student Explorations section with:
1) Cyclic quadrilateral rectangle result (new)
2) Interior Angle Sum of Polygons: A general formula to include crossed ones (new), with introductory Logo (Turtle Geometry) activity
3) Quadrilateral Inequality involving Perimeter & Diagonals (updated)
4) Added links to 'Mathematical Digest' and 'School Maths'
Message was edited by: Michael de Villiers
Date Subject Author
2/1/11 Sum of Interior Angles of Polygons: A general Formula, and other updates Michael de Villiers
2/1/11 Re: Sum of Interior Angles of Polygons: A general Formula, and other updates Michael de Villiers
1/31/12 Re: Sum of Interior Angles of Polygons: A general Formula, and other updates Michael de Villiers
1/31/12 Re: Moved - Sum of Interior Angles of Polygons: A general Formula, Michael de Villiers | {"url":"http://mathforum.org/kb/thread.jspa?threadID=2232926","timestamp":"2014-04-16T22:37:03Z","content_type":null,"content_length":"21159","record_id":"<urn:uuid:806b15b0-c9be-495c-80ec-0c84374a945a>","cc-path":"CC-MAIN-2014-15/segments/1398223206118.10/warc/CC-MAIN-20140423032006-00046-ip-10-147-4-33.ec2.internal.warc.gz"} |
Woodbridge, VA Calculus Tutor
Find a Woodbridge, VA Calculus Tutor
...I value education and know that the future of this country, or any country, is in a well-educated population. When I take a tutoring assignment I know I am helping shape the future of a young
man or woman. I see this as a high calling.
64 Subjects: including calculus, reading, chemistry, writing
...I passed the qualification exam for Geometry with an excellent grade. Also, I have taught and tutored Geometry for both high school and college students, and I got very good feedback from my previous
students. Currently, I am teaching the course for my college students.
12 Subjects: including calculus, statistics, algebra 2, algebra 1
...Test anxiety is cured through confidence. Confidence is gained through repetitive success. Most of all, I aim to teach in a way that ensures the student retains knowledge and can build on it
in the future.
28 Subjects: including calculus, reading, English, writing
...Science Marine biology has always been a hobby of mine. While I've been stationed around the country, I've taken the opportunity to work at the Boothbay Aquarium in Maine and California's
Monterey Bay Aquarium, where I worked inside the penguin exhibit. I've chosen to take additional courses ou...
51 Subjects: including calculus, chemistry, English, writing
...Additionally, I work in a school with a significant population of students who require overt consistent work on their study skills in each subject. I have had professional training including
Dr. Mel Levine's All Kinds of Minds, Understanding by Design, and Universal Design.
19 Subjects: including calculus, statistics, geometry, algebra 1
Related Woodbridge, VA Tutors
Woodbridge, VA Accounting Tutors
Woodbridge, VA ACT Tutors
Woodbridge, VA Algebra Tutors
Woodbridge, VA Algebra 2 Tutors
Woodbridge, VA Calculus Tutors
Woodbridge, VA Geometry Tutors
Woodbridge, VA Math Tutors
Woodbridge, VA Prealgebra Tutors
Woodbridge, VA Precalculus Tutors
Woodbridge, VA SAT Tutors
Woodbridge, VA SAT Math Tutors
Woodbridge, VA Science Tutors
Woodbridge, VA Statistics Tutors
Woodbridge, VA Trigonometry Tutors
Nearby Cities With calculus Tutor
Alexandria, VA calculus Tutors
Annandale, VA calculus Tutors
Arlington, VA calculus Tutors
Bethesda, MD calculus Tutors
Burke, VA calculus Tutors
Centreville, VA calculus Tutors
Fairfax, VA calculus Tutors
Falls Church calculus Tutors
Herndon, VA calculus Tutors
Hyattsville calculus Tutors
Lorton, VA calculus Tutors
Manassas, VA calculus Tutors
Occoquan calculus Tutors
Springfield, VA calculus Tutors
Washington, DC calculus Tutors | {"url":"http://www.purplemath.com/Woodbridge_VA_Calculus_tutors.php","timestamp":"2014-04-16T13:43:25Z","content_type":null,"content_length":"24018","record_id":"<urn:uuid:de70a877-0fdf-4040-ad81-32353150e723>","cc-path":"CC-MAIN-2014-15/segments/1398223207046.13/warc/CC-MAIN-20140423032007-00457-ip-10-147-4-33.ec2.internal.warc.gz"} |
Kentfield Algebra Tutor
...I have acted as a tutor for MBA students in every course they took in their graduate school curriculum. I have a strong background in statistics and econometrics. I have an undergraduate degree
in biology and math and have worked many years as a data analyst in a medical environment.
49 Subjects: including algebra 1, algebra 2, calculus, physics
...I taught an after school program through the Berkeley Chess School. I have a love for the game, and can show and explain all of the rules. I can play blindfolded too.
12 Subjects: including algebra 2, algebra 1, chemistry, physics
...I also teach people how to excel at standardized tests. Unfortunately for many people with test anxiety, test scores are very important in college and other school admissions and can therefore
have a huge impact on your life. If you approach test-taking in a way that makes it fun, it takes a lot of the anxiety out of the process, and your scores will improve.
48 Subjects: including algebra 2, algebra 1, Spanish, English
...Upon successful tutoring experience I received the opportunity to work with undergraduate students at University of California, Irvine. I have a B.A in Economics from UCI and a Master's Degree
from UC Santa Barbara. My infinite desire to learn and my enthusiasm toward teaching are the sources of positive energy that I share with my students.
29 Subjects: including algebra 2, algebra 1, reading, calculus
Drew is a current Honors student pursuing a double degree in English Literature and Applied Linguistics at the University of California, Berkeley. He is currently taking a yearlong absence from
formal schooling to pursue independent research in 19th Century American Poetry. He also serves as a Man...
53 Subjects: including algebra 1, Spanish, reading, English | {"url":"http://www.purplemath.com/Kentfield_Algebra_tutors.php","timestamp":"2014-04-20T04:35:31Z","content_type":null,"content_length":"23921","record_id":"<urn:uuid:d04510aa-c273-4230-95de-2be4f246eb24>","cc-path":"CC-MAIN-2014-15/segments/1397609537864.21/warc/CC-MAIN-20140416005217-00387-ip-10-147-4-33.ec2.internal.warc.gz"} |
Probability of a Random Walk crossing a straight line
up vote 19 down vote favorite
Let $(S_n)_{n=1}^{\infty}$ be a standard random walk with $S_n = \sum_{i=1}^n X_i$ and $\mathbb{P}(X_i = \pm 1) = \frac{1}{2}$. Let $\alpha \in \mathbb{R}$ be some constant. I would like to know the
value of
$$\mathcal{P}(\alpha) := \mathbb{P}\left(\exists \ n \in \mathbb{N}: S_n > \alpha n\right)$$
In other words, I am interested in the probability that a random walk $(S_n)_{n=1}^{\infty}$ crosses the straight line through the origin with slope $\alpha$.
Since the standard random walk is recurrent, it follows that $\mathcal{P}(\alpha) = 1$ for $\alpha \leq 0$, while obviously $\mathcal{P}(\alpha) = 0$ for $\alpha \geq 1$. Hence the non-trivial part
and the part I am interested in is the region $\alpha \in (0,1)$. For this region we know that $\mathbb{P}(S_1 > \alpha) = \frac{1}{2}$, hence $\mathcal{P}(\alpha) \geq \frac{1}{2}$, but finding an
exact value seems difficult.
Note: One way to explicitly calculate $\mathcal{P}(0)$ is by
$$\mathcal{P}(0) = \sum_{n=1}^{\infty} \frac{C_{n-1}}{2^{2n-1}} = 1$$
where $C_m$ is the $m$-th Catalan number (with $C_0 = 1$), as was pointed out on e.g. http://oeis.org/A000108 by Geoffrey Critzer. For general $\alpha$ however such a summation does not seem to give a nice
expression, since the coefficients are uglier and the exponents of $2$ depend on $\alpha$ (for $\alpha = 0$ one gets exponents $2n - 1$, but for irrational $\alpha$ this exponent becomes something
ugly). But finding a closed form for these coefficients for general $\alpha$ might also help solve this problem.
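As a sanity check on this series, its partial sums can be computed cheaply. The Python sketch below is my own (the term ratio $(2n-1)/(2(n+1))$ follows from the Catalan recurrence and is an assumption of the example, not something stated above):

```python
def first_passage_partial_sum(N):
    # Partial sum of the first-passage series for P(0): the n-th term is the
    # probability that the walk first becomes positive at step 2n-1.
    # Consecutive terms satisfy t_{n+1} = t_n * (2n-1) / (2(n+1)),
    # using C_n / C_{n-1} = 2(2n-1)/(n+1) for Catalan numbers.
    term, total = 0.5, 0.5          # n = 1: first step is +1, probability 1/2
    for n in range(1, N):
        term *= (2 * n - 1) / (2 * (n + 1))
        total += term
    return total
```

The partial sums creep toward $1$ with a tail of order $1/\sqrt{N}$, which is why finite truncations converge slowly.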
Edit: As it may be too much to ask for a nice formula for $\mathcal{P}(\alpha)$, I would also be very happy if someone could provide (good) bounds on or approximations of the value of $\mathcal{P}(\
alpha)$. For example: Does $\mathcal{P}(\alpha)$ decrease linearly in $\alpha$? Any insight is very much appreciated!
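For rational $\alpha = p/q$, one cheap way to approximate $\mathcal{P}(\alpha)$ from below is a finite-horizon backward recursion (a Python sketch of my own; the function name and horizon are arbitrary choices). Crossings after the horizon are missed, but for $\alpha > 0$ those are exponentially unlikely:

```python
from functools import lru_cache

def crossing_prob_dp(p, q, horizon=300):
    # P(exists n <= horizon with S_n > (p/q) n), by backward recursion on (n, S_n).
    # The integer comparison q*s > p*n keeps "touching" the line distinct
    # from strictly crossing it, with no floating-point trouble.
    @lru_cache(maxsize=None)
    def f(n, s):
        if q * s > p * n:                 # strict crossing
            return 1.0
        if n == horizon:                  # give up: a lower bound on P(p/q)
            return 0.0
        return 0.5 * (f(n + 1, s + 1) + f(n + 1, s - 1))
    return f(0, 0)
```

For $\alpha = 1/3$ this already appears to agree with $(\sqrt{5}-1)/2 \approx 0.618034$ to about six decimals, since late crossings of a line of positive slope are exponentially rare.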
pr.probability random-walk catalan-numbers co.combinatorics
What is $k$ in the equation for $\mathcal{P}(0)$? – Konstantinos Panagiotou May 9 '11 at 13:54
Thanks Konstantinos for pointing that out. The $k$ was supposed to be an $n$, so I changed it now. – TMM May 9 '11 at 14:26
Not an answer, just an observation. If you instead write $S_n=\epsilon+\sum_{i=1}^n X_i$ then for irrational $\alpha$ I expect that $\mathcal{P}_{\epsilon}(\alpha)$ will be everywhere discontinuous
as a function of $\epsilon$. This gives me some doubt that there will be a nice, closed-form answer, since in any recurrence-relation type approach to the problem you will end up having to consider
terms like $\mathcal{P}_{\epsilon}(\alpha)$. You might find something of use in the literature on combinatorics on words, but I don't know this literature very well at all so I'm not sure. – Louigi
Addario-Berry May 9 '11 at 15:54
5 Answers
I wrote a Maple program in 2003 which computes $P(\alpha)$ (both the "upper" and "lower" values) for a given rational number $\alpha$, much in line with the method described by Johan. I
think I was able to compute it for all $\alpha=a/b$ for integers $1\le a\lt b\le 300$ in a couple of hours, so the program could probably compute fairly good approximations to $P(1-2/\
log_2(p))$ when $p$ is not too large. (Edit: When $p$ is large, $1/2$ will be a very good approximation. For example, if $p\gt1000$ then $0.499\lt P(1-2/\log_2(p))\lt1/2$.)
I didn't publish any of this, but much of it is contained in the article "The Maximum Average Gain in a Sequence of Bernoulli Games" by Wolfgang Stadje (American Mathematical Monthly,
December 2008).
EDIT: Since I don't have access to Maple anymore, I translated the program to Matlab:
function p=p(s,t);
d=gcd(s+t,2*t); s=(s+t)/d; t=2*t/d;  % reduce P(s/t) to Q((s+t)/(2t)) in lowest terms
pol([1 s+1 t+1])=[1 -2 1];           % coefficients of z^t-2z^(t-s)+1, highest degree first
r=roots(pol); p=1-real(prod(1-r(abs(r)<1-1e-9)));  % zeroes strictly inside the unit circle
The program computes $P(s/t)$ for integers $0\lt s\lt t$, by computing $1$ minus the product of $1-r$ for those zeroes $r$ of $z^{2t/d}-2z^{(t-s)/d}+1$ that have absolute value less than
$1$ (there are $(t-s)/d$ of them), where $d=\gcd(s+t,2t)$.
By the way, in my notes I found the lower bound $P(\alpha)\ge A$, where $\alpha=1-\frac{2\log A}{\log(2A-1)}$, with equality for $\alpha=(k-2)/k$. I think Johan was involved in obtaining
this bound.
EDIT (May 14): So, here is a fairly detailed proof of the formula used in the program above.
Lemma 1. For integers $0\lt 2b\lt a$, the polynomial $g(z)=z^a-2z^b+1$ has no multiple zeros and exactly $b$ zeroes inside the unit circle.
up vote 9 down vote accepted
Proof. If $g(z)=g'(z)=0$ then $z\ne 0$ and $$0=g'(z)=az^{a-1}-2bz^{b-1}=z^{b-1}(az^{a-b}-2b),$$ so $az^{a-b}-2b=0$. Hence, $0=g(z)-g'(z)*z/a=1-2(1-b/a)z^b$ so that $|z|^b=\frac1{2(1-b/a)}=\frac a{2(a-b)}$. Also, $az^{a-b}-2b=0$ so $|z|^{a-b}=2b/a$. This implies that $$\left(\frac a{2(a-b)}\right)^{a-b}=\left(\frac{2b}a\right)^b.$$ This can be rewritten as $2(1-b/a)^{1-b/a}(b/a)^{b/a}=1$. But it is easily checked that $2(1-y)^{1-y}y^y\gt1$ for $0\lt y\lt1/2$, a contradiction.
The second part can be proved by a straight forward application of Rouché's theorem.
Lemma 2. Suppose that $\sum_{i=1}^kA_ir_i^{-j}=1$ for $1\le j\le k$. Then $\sum_{i=1}^kA_i=1-\prod_{i=1}^k(1-r_i)$.
Proof. The equations can be seen as a system of linear equations in $A_1,\dots,A_k$, and the result can be obtained by using Cramer's rule and Vandermonde determinants. I skip the
details. (Actually, I think I had a simpler proof of this lemma, but I could neither find it in my notes, nor figure it out right now.)
Now, let $L=\max_{n\gt0}{S_n/n}$, so that $P(\alpha)=P(L\gt\alpha)$. Also, let $Y_i=(X_i+1)/2$ and $T_n=\sum_{i=1}^nY_i$ (so that $T_n$ is a random walk with steps $0$ and $1$ instead of
$\pm 1$). (The main reason for this is that my original result was for $T_n$, but I also think that the formulae get a little simpler in this case.)
Let me restate the result I am going to prove:
Theorem. For integers $0\lt s\lt t$, $P(s/t)=1-\prod_{i=1}^{t-s}(1-r_i)$, where $r_1,\dots,r_{t-s}$ are the zeroes of $z^{2t}-2z^{(t-s)}+1$ that have absolute value less than $1$.
Proof. Let $M=\max_{n\gt0}{T_n/n}$ and $Q(\alpha)=P(M\gt\alpha)$. I will prove the corresponding result for $Q(s/t)$, and then translate it to $P$, using the fact that $T_n=(S_n+n)/2$
which implies that $P(\alpha)=Q((\alpha+1)/2)$.
Suppose then that $1/2\lt s/t\lt1$ and consider $Q(s/t)$. Just as Johan did, we define another random walk $U_n=\sum_{i=1}^nZ_i$ with steps $Z_i=s$ or $Z_i=s-t$ so that $Q(s/t)$ equals
the probability that $U_n$ will ever reach $-1$, define $f(j)$ as the probability that $U_n$ will reach $-1$ when it is currently at $j$, and find that $f(j)=f(j+s)/2+f(j+s-t)/2$ for $j\
ge0$. The characteristic equation of this recursion is $g(z)=z^t-2z^{t-s}+1=0$. By Lemma 1 and since $f(j)$ tends to $0$ as $j$ tends to infinity, we must have $f(j)=\sum_{i=1}^{t-s}
A_ir_i^j$, where $r_1,\dots,r_{t-s}$ are the (necessarily simple) zeroes of $g(z)$ inside the unit circle. Since $f(j)=0$ for $s-t\le j\le -1$, Lemma 2 implies that $Q(s/t)=f(0)=\sum_{i=
Now, if $0\lt s/t\lt1$ then $P(s/t)=Q((s+t)/(2t))$, which, by what we have just proved, equals $1-\prod_{i=1}^{t-s}(1-r_i)$, where $r_1,\dots,r_{t-s}$ are the zeroes of $z^{2t}-2z^{t-s}
+1$ inside the unit circle. This concludes the proof.
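For readers without Maple or Matlab, the theorem can also be checked numerically in plain Python. The sketch below is mine, and it substitutes a bare Durand-Kerner iteration for a library root finder (adequate here because Lemma 1 guarantees the roots are simple):

```python
def poly_eval(c, z):
    # Horner evaluation; c lists coefficients, highest degree first
    r = 0j
    for a in c:
        r = r * z + a
    return r

def all_roots(c, iters=500):
    # Durand-Kerner: simultaneous Newton-like updates for all roots
    # of a monic polynomial with simple roots
    n = len(c) - 1
    roots = [(0.4 + 0.9j) ** k for k in range(n)]
    for _ in range(iters):
        new = []
        for i, r in enumerate(roots):
            den = 1
            for j, s in enumerate(roots):
                if j != i:
                    den *= r - s
            new.append(r - poly_eval(c, r) / den)
        roots = new
    return roots

def crossing_prob(s, t):
    # P(s/t) = 1 - prod(1 - r) over the zeroes of z^(2t) - 2 z^(t-s) + 1
    # lying strictly inside the unit circle
    c = [0] * (2 * t + 1)
    c[0], c[t + s], c[2 * t] = 1, -2, 1
    p = 1
    for r in all_roots(c):
        if abs(r) < 1 - 1e-6:
            p *= 1 - r
    return (1 - p).real
```

Here `crossing_prob(1, 3)` should reproduce $(\sqrt{5}-1)/2$ and `crossing_prob(1, 2)` should reproduce $0.543689\dots$, matching the values quoted elsewhere in this thread.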
For example, I can approximate $P(1-2/\log_2(5))$ fairly quickly by $0.823974215430924=P(127/916)<P(1-2/\log_2(5))<P(207/1493)=0.823974215437916$ (modulo rounding errors). – Pontus von
Brömssen May 11 '11 at 19:21
This looks really impressive, and basically solves the problem! But could you please add a bit more detail as to how and why the program works, i.e. how you derived this program? I see
big similarities with Johan's post with the functional equations (e.g. $x^3 - 2x + 1 = 0$) but I do not see exactly how you solved this for general $s$ and $t$. – TMM May 12 '11 at
Sorry, the inequalities should go the other way: $P(207/1493)\lt P(1-2/\log_2(5))\lt P(127/916)$, so there obviously are some rounding errors in the computation. When I did the same
computation in Octave I got $P(127,916)=0.823974215438941$ and $P(207/1493)=0.823974215435109$. (Yes, I will try to explain how I derived the formula when I have more time. In the
meantime, I suggest that you look at the article I referred to above. The solution is not expressed in exactly the same way there, but I think it's more or less equivalent to my
solution.) – Pontus von Brömssen May 12 '11 at 20:41
Thanks for all the effort you have put into this, and of course thanks for the answer and the explanation. I will accept your answer as I don't think anyone will come up with a better
answer (at least not in 2 days). Again, thanks to you and Johan for everything. – TMM May 14 '11 at 23:46
I believe that leonbloy's comment/hint is relevant, and whenever $\alpha$ is rational, $P(\alpha)$ is algebraic. For instance, $P(1/3)$ is simply the probability that a random walk on $\
mathbb{Z}$ starting at the origin and taking steps of $+2$ or $-1$ with equal probability will ever reach $-1$. If $f(n)$ is the probability of ever reaching a negative point given that the
walk is currently at $n$, then $f(n)$ satisfies $$f(n) = \frac{f(n+2)+f(n-1)}2.$$ The standard ansatz $f(n) = x^n$ gives three solutions for $x$: $x=1$ or $x=(-1\pm \sqrt{5})/2$. It is easily
seen that the only solution of the form $f(n) = Ax_1^n + Bx_2^n + Cx_3^n$ that satisfies the boundary conditions (at $-1$ and infinity) is $f(n) = (-1/2+\sqrt{5}/2)^{n+1}$, from which it
follows that $$P(1/3) = \frac{\sqrt{5}-1}2.$$ Similarly, $P(1/2)$ is equal to the unique root of $x = (x^4+1)/2$ in $(0,1)$, and in general, $P(k/(k+2))$ is the unique relevant root of $x = (x^{k+2}+1)/2$ in $(0,1)$.
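Assuming the pattern continues as $x = (x^{k+2}+1)/2$ (an extrapolation from the two cases just computed, not verified here for general $k$), the relevant root is easy to isolate by bisection, since $g(x) = x^{k+2} + 1 - 2x$ is positive at $0$ and negative just below $1$:

```python
def root_for_k(k, tol=1e-12):
    # Unique root of x = (x**(k+2) + 1)/2 in (0, 1);
    # a candidate value for P(k/(k+2)) under the assumed pattern.
    g = lambda x: x ** (k + 2) + 1 - 2 * x   # g(0) = 1 > 0, g(1) = 0
    lo, hi = 0.0, 1.0 - 1e-9                 # stay clear of the trivial root x = 1
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if g(mid) > 0:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2
```

For $k=1$ this returns $(\sqrt{5}-1)/2 \approx 0.618034$, and for $k=2$ approximately $0.543689$.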
If $\alpha$ is rational but not of this form, the boundary conditions become a little more complicated. For instance, consider $\alpha=1/5$. This can be modeled by a random walk on $\mathbb
{Z}$ where a particle takes steps of $+3$ or $-2$. The corresponding ansatz gives $x^2 = (x^5+1)/2$, which has two roots of absolute value smaller than 1, one positive and one negative. Since
the walk can now jump to the left, it can reach a first negative value both at $-1$ and at $-2$. Apart from $f(n)\to 0$ at infinity, we get the two boundary conditions $f(-1)=1$ and $f(-2)=
1$. We can now find $f$ explicitly (at least numerically) as $f(n) = Ax_1^n+Bx_2^n$ where $x_1$ and $x_2$ are the roots in $(-1,1)$ and $A$ and $B$ are determined by the boundary conditions.
As has already been pointed out, $P(\alpha)$ makes a jump at every rational number. With the approach outlined above, one can in principle compute both the "lower" and "upper" values of $P(\
alpha)$ (the upper value being the probability that $S_n$ reaches, though possibly without crossing, the line of slope $\alpha$) whenever $\alpha$ is rational. This is feasible only when $\alpha$ is a
relatively simple fraction, but it should still be possible to obtain a good plot of $P(\alpha)$ as a function of $\alpha$.
I recall that after this problem was discussed at the open problem session of FPSAC 2003, Pontus von Brömssen made some such plots. I haven't been in touch with him in the last few years, but
apparently he has an (inactive) MO-account. I will notify him of this question.
up vote 16 down vote
ADDED: One might worry about whether in general, a solution obtained as indicated above is the correct one. For instance, in the example with $\alpha = 1/5$, the equation $x^2 = (x^5+1)/2$ has, apart from the three real roots, also two non-real roots, and even after finding a solution $f$ involving only the two real roots other than 1, it might not be totally obvious that there
is no other real function of the form $g(n) = A_1x_1^n+\cdots+A_5x_5^n$ that satisfies the boundary conditions (this would be possible if the two non-real roots had absolute value smaller
than 1).
There is a simple application of Brownian motion that shows that anything that satisfies the recursion $g(n+2) = (g(n) + g(n+5))/2$ as well as the boundary conditions, must be the correct
solution. I picked this up recently from Jeff Steif (in a slightly different context), who told me he heard it from Yuval Peres twenty years ago. Here is how it goes:
Suppose that someone gives us a function $g$ that satisfies $g(-1) = g(-2) = 1$, $g(n+2) = (g(n) + g(n+5))/2$ for $n\geq 0$, and $g(n)\to 0$ as $n\to\infty$. Since such a function must have
the form $A_1x_1^n+\cdots+A_5x_5^n$, it is easy to see that all the values have to be in the interval $[0,1]$ (there could not be a smallest value of $g$).
Now start a Brownian motion on the real line from the point $g(0)$, and run it until it hits either $g(-2)$ or $g(3)$. If it hit $g(3)$, continue until it hits $g(1)$ or $g(6)$, etc. In
finite time, the particle will reach either $1=g(-1) = g(-2)$ or 0 (in case it went through $g(n)$ for some sequence of $n$'s tending to infinity).
It follows from basic properties of the Brownian motion that the probability that it reaches 1 before reaching zero is $g(0)$. Since the process correctly emulates the discrete random walk of
steps $+3$ and $-2$, it follows that the probability that the emulated walk on the integers reaches $-1$ or $-2$ before going to infinity is also $g(0)$.
Therefore the boundary conditions uniquely specify the solution (and the argument obviously generalizes to any rational $\alpha$).
Nice! And running some calculations with series similar to $P(0)$ above confirms that $P(1/3 + \epsilon) \approx 0.618034\ldots$ and $P(1/2 + \epsilon) \approx 0.543689\ldots$ which matches
your results. And your general solution strategy seems doable, although solving the equations for $f(n)$ becomes more difficult when the denominator in rational $\alpha$ increases. But
calculating $P(\alpha_{+})$ and $P(\alpha_{-})$ for some good rational approximations $\alpha_{-} \leq \alpha \leq \alpha_{+}$ should be possible, and should give reasonably tight bounds on
$P(\alpha)$ for any real $\alpha$. – TMM May 10 '11 at 9:28
The amount by which $P$ can vary in an interval not containing any rationals with denominator greater than some $N$ will be exponentially decaying in $N$. So, if you want to calculate $\
mathcal{P}(\alpha)$ by approximating with $\mathcal{P}(\alpha^-)$ then you only need to choose $\alpha^-$ with denominator of the order of the logarithm of the required error bound, and the
degree of the polynomial you have to solve is also of the order of the logarithm of the error. – George Lowther May 10 '11 at 20:34
@George: Yes, but the exact denominators we use may of course depend on good approximations for $\alpha$, e.g. you don't approximate $\pi$ from above with denominator $8$ but with
denominator $7$. And there are some pitfalls to watch out for, e.g. approximating $P(1/2)$ with $P(1/2 - \eps)$ will never give a good approximation, but as long as the interval $[\alpha_
{-},\alpha_{+}]$ does not contain any rationals with small denominator the approximations should be fine. And with irrational $\alpha$ we can always guarantee this. – TMM May 11 '11 at 9:42
Ah, mathoverflow does not have \newcommand{\eps}{\epsilon} in its preamble ;( – TMM May 11 '11 at 9:45
Here's a gem I heard from Olle Häggström in 2003. It doesn't completely answer the question, but it's a surprisingly simple (imo) way to obtain an upper bound on $P(\alpha)$.
Suppose the walk does cross a line of slope $\alpha$. Then at some point it is above that line for the last time. Conditioning on that last time, we can permute the steps before that point
any way we like, so it follows that the conditional probability that the first step of the walk was a $+1$ is at least $(1+\alpha)/2 $. But that probability is just $$\frac{1/2}{P(\
alpha)},$$ and it follows that $$P(\alpha) \leq \frac{1/2}{(1+\alpha)/2} = \frac{1}{1+\alpha}.$$
up vote 8 down vote
ADDED: It is possible to obtain quite sharp estimates of $P(\alpha)$ by combining this type of argument with some combinatorics of the first $n$ steps of the walk. On the other hand, the reason the estimate is not sharp is that the walk sometimes overshoots (actually always if $\alpha$ is irrational) so that the last point on or above the line of slope $\alpha$ is strictly
above. This indicates that the exact value of $P(\alpha)$ will depend in a complicated way on the extent to which $\alpha$ can be approximated by rational numbers, and a simple formula may
be too much to ask for. I'm sure that interesting things can be said about $P(\alpha)$, but it might require specifying what we want to know.
Actually, for $\alpha < 1$ the random walk will always cross the line if the first step is a $+1$, which happens with probability $\frac{1}{2}$. So the probability of staying below the
line is trivially at most $\frac{1}{2}$, giving $\mathcal{P}(\alpha) \leq \frac{1}{2}$ for any $\alpha < 1$. This is a sharper upper bound than $\mathcal{P}(\alpha) \leq \frac{1}{1 + \
alpha}$ for any $\alpha < 1$. So although the derivation is nice, the upper bound is not so useful. – TMM May 3 '11 at 11:17
Sorry, I think I was visualizing the walk the wrong way and thought of the "slope" as something else than what is defined as $\alpha$. I'll edit my post. – Johan Wästlund May 3 '11 at
No, wait, I thought of $P(\alpha)$ as the probability that the walk does cross, but it's the probability of not crossing. But I don't have time to edit right now. – Johan Wästlund May 3
'11 at 12:31
I apologize for the confusion caused by the title (probability of crossing) and question (probability of not crossing). I have updated the definition of $P(\alpha)$ so that now it does
mean the probability of crossing. – TMM May 3 '11 at 17:18
Thanks, no need to apologize! – Johan Wästlund May 3 '11 at 18:35
(Just a hint. Should be a comment more than an answer, but don't have enough rep)
The slightly more general problem, in which the initial distance to the line is greater than zero (so the line has the equation $an + b$), seems interesting. Considering $a$ fixed and $b$ variable, one can get (if I'm not mistaken) the following equation for $P(b)$ (the probability that the process crosses the line):
$P(b)= \begin{cases} \frac{1}{2}\left[ P(b+a-1)+P(b+a+1) \right] & \mbox{if } b \geq 0 \\ 1 & \mbox{if } b < 0 \end{cases}$
We want a solution (apart from the trivial $P(b)=1$) that goes to zero as $b \to \infty$, and we are especially interested in $P(0^+)$. It does not seem easy, and it seems very sensitive to the
parameter $a$. I believe that if $a$ is rational, $a=m/n$, the function has discontinuities at the points $k/n$.
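For readers who want to experiment numerically, here is a Monte Carlo sketch (illustrative code added here, not part of the original thread; it takes "crossing" to mean $S_n \geq \alpha n$ for some $n$, and the finite horizon makes the estimate a lower bound on the true crossing probability):

```python
import random

def crossing_prob(alpha, n_steps=2000, trials=20000, seed=1):
    """Monte Carlo estimate of the probability that a +/-1 random walk
    satisfies S_n >= alpha * n for some n <= n_steps."""
    random.seed(seed)
    hits = 0
    for _ in range(trials):
        s = 0
        for n in range(1, n_steps + 1):
            s += 1 if random.random() < 0.5 else -1
            if s >= alpha * n:  # walk touched or crossed the line of slope alpha
                hits += 1
                break
    return hits / trials

# For alpha < 1, any walk whose first step is +1 crosses immediately,
# so the estimate should come out at least roughly 1/2.
print(crossing_prob(0.5, n_steps=500, trials=5000))
```

For $\alpha \geq 1$ the walk can never get strictly above the line after the first step, so the simulation should return values at or near zero there.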
An upper estimate is given in Appendix A of "The Probabilistic Method" by Alon/Spencer. This gives $P(S_n > a) < e^{-a^2/2n}$ for $a > 0$ (Theorem A.1.1). There are lots of bounds
like this one, if you are interested in simple upper bounds.
Sorry - I didn't read the 'exists' part of your question carefully, so this isn't quite what you're looking for. It is a bound for a particular $n$, though. – John Jun 10 '11 at
Reference Database
This is the Prime Pages' interface to our BibTeX database. Rather than being an exhaustive database, it just lists the references we cite on these pages. Please let me know of any
errors you notice.
S. Yates, "Titanic primes," J. Recreational Math., 16:4 (1983-84) 250-262. [Here Yates defines titanic primes to be those with at least 1,000 digits.]
Design Life- Kinito's Perspective
How to model gears and chains
This is a tutorial on how to model gears and chains. I made it for a fixie bike I am currently working on, but you can use this method to model things like tank tracks, snowmobile tracks, and any other
type of gear as well. It's fairly advanced, so I skip certain modeling steps, but if you need help send me an email at Raymundo302@gmail.com. I'll be happy to help a fellow modeler.
The important thing to remember is the distance between the two cylinders, which in gear lingo would be the rollers. In most bicycle chains the distance is .5 inches. The other thing to
consider is the size of these rollers. All these variables depend on what type of chain you want to build and how thick the gears are going to be, so you need to plan this out in advance.
Building the chain
When building the chain you have to remember that there are two main parts: an inside link and an outside link.
Add a curve that is .5 inches long, center the pivot point, then move the curve to 0,0,0
Add two circles to the ends of the curve, these will be your rollers, I chose a diameter of .26
Add a bigger circle on one corner, using the detach tool, detach it at about ¾ of the circle, like in the screen shot, mirror the circle to the other side and use a blend curve to connect both sides
Using the planar surface tool, skinning, and duplication, I made the inner link. I would also only do one side of the link and mirror it along the xz plane so that everything stays symmetrical. Remember
to group the surfaces so that later you can select them all as one object.
Remember, you can make it as complicated or as simple as you want. Keep in mind that every surface will be duplicated hundreds of times, so it can really push your system, especially when rendering
things like global illumination. For my model I'm going for something simpler, since the only reason I really did it was to figure out how to model it.
When I modeled the outer link, I did it by duplicating the same surfaces and then just moving them to the outside. I also added a little cap on the corners to make them look better.
Put the curves and surfaces in layers and hide them for now. Below is an example of how the chain will look later, when we finish the gear. Don't do this now, but if you want to check that all your
measurements line up and see how the chain will look, just move the outer link .5 inches to the right, then duplicate them both a couple of times by one inch. Now it really does look like a chain.
Building the guide curves
When building a gear, the first thing we must consider is how many teeth the gear will have. For my main gear I chose a 46-tooth gear.
To figure out the mathematics, you just need to find your old trigonometry book and get a couple sheets of paper. I'm kidding; luckily we live in the age of Google, and after a quick search I found
an online triangle calculator. First thing we need to do is some simple math.
We need to figure out the angles, which is fairly easy: just 360/46, which equals 7.826086956521739 (A in the diagram); you can simplify the numbers after we do the calculations. The next numbers we need
to know are the other two angles. Since we know they are going to be equal, and the angles of a triangle always sum to 180 degrees, all we have to do is subtract and divide: (180-7.826086956521739)/2, which equals
86.086956521739 (B and C in the diagram). The last number we need to add is the distance between your teeth, which equals .5 inches (a in the diagram)
Insert all these numbers into the calculator, remembering to choose AAS in the option box, then enter the numbers where they belong. The number we were looking for is the distance between the center
and the edge of the triangle, which is 3.6634108858967 (b and c in the diagram).
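If you'd rather script the calculation than use an online calculator, here's a short Python sketch of the same trigonometry (an illustration added for this tutorial; the function name is just for this example, and it uses the law of sines for the AAS case):

```python
import math

def pitch_radius(teeth, pitch=0.5):
    """Triangle solve for a chain sprocket: each tooth subtends an apex angle
    A = 360/teeth degrees at the center, the chain pitch is the opposite side,
    and the two equal sides are the radius we want."""
    angle_a = 360.0 / teeth                    # apex angle A (degrees)
    angle_b = (180.0 - angle_a) / 2.0          # base angles B = C (degrees)
    # Law of sines: radius / sin(B) = pitch / sin(A)
    radius = pitch * math.sin(math.radians(angle_b)) / math.sin(math.radians(angle_a))
    return angle_a, angle_b, radius

a, b, r = pitch_radius(46)
print(round(a, 4), round(b, 4), round(r, 4))  # 7.8261 86.087 3.6634
```

The same function works for any tooth count and pitch, so you can reuse it for the smaller rear sprocket.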
Now it's time to transfer those numbers into Alias.
Go to the left view and add an edit point curve; type 0,0,0, press enter, then type 0,0,3.6634. Depending on your settings, Alias will probably only register 3.6634 anyway
Go to Edit>Duplicate>Object. Duplicate it by 45; in rotation, add 7.8261 to the Y column. The program will automatically round the last digit anyway.
Group all the curves and rotate them as an object by half of 7.8261, which equals 3.91305. I'll explain why later.
Building the Gear
First thing we need to do is add a circle that is the size of the roller, so add a circle in the center of the grid and scale it down to .26 inches (remember to enter A if the scale tool is set to
relative). Move the circle to 0,0,3.6634
Use the detach tool to cut the curve. Remember, when using the detach tool, hold down alt to cut it in even places. Cut it like it's shown in the example
Delete the top section. Select the bottom section and move the pivot point to 0,0,0. Rotate the curve by 0,3.91305,0, then mirror it to the other side. We are starting to develop the teeth of the gear.
Add another circle in the middle and detach the bottom portion. This will be the highest part of the tooth.
Connect all the curves using edit curves or blend curves; it's all up to you and your design.
For my design I chose to have sort of a bevel. To make the side surface I just skinned two circles.
Using project curve, I trimmed the surface so I only needed to work on one section. Then I modeled my teeth section to my design specs and duplicated them all by 45.
By now I assume you have a good understanding of how to use math to work out whatever angle and repetitions your design needs. Here's my finished design.
Adding it all together
The first thing I do is move the pivot points of the links. Right now they are centered, but I need them to be at the center of the left or right roller; to do that I just use my original .5 curve
as a guide for my pivot points.
Then, using the curve snap tool (holding down ctrl+alt), I use my guide curves to move the first link where it needs to go. This is the reason we rotated all the guide curves at the beginning of the
lesson: so they would line up with the links.
I did this for both links.
I go back to the left view and move both pivot points to 0,0,0
I choose the outer link and rotate it by 7.8261
Then I choose both links and go to Edit>Duplicate>Object. Since they are two objects, I only need to duplicate them by half of 46, which is 23, minus the one you already have, so 22; the angle will
be twice as big as 7.8261, so it will be 15.6522
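These duplication numbers generalize to any even tooth count; here's a quick sketch to double-check them (illustrative code, not part of the original Alias workflow):

```python
def link_duplication(teeth):
    """Extra duplicates of an inner+outer link pair to place around an
    even-toothed gear, and the rotation angle between successive pairs."""
    tooth_angle = 360.0 / teeth      # degrees spanned by one tooth
    pairs = teeth // 2               # each inner+outer pair covers two teeth
    copies = pairs - 1               # one pair is already placed by hand
    return copies, 2 * tooth_angle   # duplicate count, pair-to-pair angle

copies, angle = link_duplication(46)
print(copies, round(angle, 4))  # 22 15.6522
```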
After that, you are done with the hard part, now all you have to do is add more chains and gears and connect them using the duplicate tool. Here is my finished example.
I'm going to be busy the next couple of weeks so I won't be able to finish my next tutorial, How to model G3 surface transitions between cylinders, but make sure to check back. I'm currently working at
Fisker Automotive as a Surface Designer. Reach me at Raymundo302@gmail.com. Take care and good luck.
5 comments:
1. thank u so much!
2. Excellent tutorial! Many thanks!!
3. Excellent tutorial! Many thanks!!
4. Excellent...thanks a lot for this tutorial...
Mohamed Rafi. S
5. Excellent tutorial..
appreciate the effort you might be taking to make it so better for learners...
Using hospitalization for ambulatory care sensitive conditions to measure access to primary health care: an application of spatial structural equation modeling
In data commonly used for health services research, a number of relevant variables are unobservable. These include population lifestyle and socio-economic status, physician practice behaviors,
population tendency to use health care resources, and disease prevalence. These variables may be considered latent constructs of many observed variables. Using health care data from South Carolina,
we show an application of spatial structural equation modeling to identify how these latent constructs are associated with access to primary health care, as measured by hospitalizations for
ambulatory care sensitive conditions. We applied the confirmatory factor analysis approach, using the Bayesian paradigm, to identify the spatial distribution of these latent factors. We then applied
cluster detection tools to identify counties that have a higher probability of hospitalization for each of the twelve adult ambulatory care sensitive conditions, using a multivariate approach that
incorporated the correlation structure among the ambulatory care sensitive conditions into the model.
For the South Carolina population ages 18 and over, we found that counties with high rates of emergency department visits also had less access to primary health care. We also observed that in those
counties there are no community health centers.
Locating such clusters will be useful to health services researchers and health policy makers; doing so enables targeted policy interventions to efficiently improve access to primary care.
Hospitalization for Ambulatory Care Sensitive Conditions (ACSCs) is a health care indicator that has been used extensively to study the accessibility of health care (AHC). The measure has been
endorsed by the United States Institute of Medicine [1] and the Agency for Healthcare Research and Quality [2]. Accessible and reasonably effective primary health care can potentially reduce the risk
of hospitalization for ACSCs. Thus, a higher rate of hospital admissions for ACSCs in an area may provide evidence of underlying problems with population access to health care. The theory underlying
the ACSC indicator has been supported empirically; lower availability of primary care has been associated with higher rates of ACSC admissions [3-6]. Mobley et al. [7] showed the spatial distribution
of ACSC admissions for the entire United States and observed clustering. This result suggested geographic variation of access to health care. Spatial analysis provides a tool to control this
variation, thereby improving estimates of associations between ACSCs and other factors.
One notable reason for the usefulness of the ACSC indicator is that it is often applied using readily available population rates of hospitalization. Models that estimate the risk of ACSC admissions
can account for a range of factors in addition to access to health care, such as population lifestyle, physician practice behaviors, population tendency to use health care resources, and disease
prevalence [8-10]. Using administrative health care data most commonly used to study hospitalizations for ACSCs, many of these factors are not measurable quantities, i.e., they are latent. The
complex relationships among these factors have received little attention [11]. One way to conceptualize their relationship with access to health care is as a complex latent construct of observable
and potentially observable variables, i.e. the ACSC hospitalization rate and other variables that are often unobservable in a given data set. Because of the unobservable nature of many factors,
structural equation modeling may be the best way to understand the intricate relationships among these factors.
We are specifically interested in applying the confirmatory factor analysis (CFA) approach in the context of structural equation modeling to identify how population lifestyle, physician practice
behaviors, population tendency to use health care resources, and disease prevalence are associated with access to health care. In CFA, the structure for the latent variables is prespecified and,
thus, determines how the model parameters should be constrained. Here, our primary purpose is to model the relationships among the multiple latent variables, whereas we are not interested in the
distributional properties of the latent variables. This enables us to standardize the manifest variables that are related to exogenous factors to have zero means and unit variances. In addition, some
of the regression coefficient parameters in the measurement models will be constrained according to a prespecified structure.
Structural equation models are well established for multivariate Gaussian response variables [12]. Generalization to the exponential family of distributions is more recent [13]. For manifest
variables that are spatially referenced, structural equation models have been proposed for continuous variables in [14,15]. Liu et al. [16] and Wang and Wall [17] generalize this application to the
exponential family of distributions. Congdon et al. [18] extended the generalized spatial structure equation models to incorporate spatially-structured and unstructured random effects at the
measurement level.
The conceptual model for access to health care (AHC)
Researchers have rarely noted that high ACSC admission rates at a geographical unit of measurement (e.g. county or zip code) may not exclusively indicate inadequate access to primary health care.
They may also indicate unhealthful population lifestyles, physician practice behaviors that vary among geographic areas due to differences in training or the cultures of local medical communities,
the tendency of the area population to use preventive health care, and/or high rates of disease [8,9,19]. These facts challenge the use of ACSCs as a measure of AHC, unless the analysis adjusts for
such factors. This framework for understanding the dynamics of health care access resulted in the development of a conceptual model (Figure 1), where ovals indicate underlying factors, rectangles
indicate observed variables, and an arrow with a solid line indicates the direction of flow of information.
Figure 1. Conceptual model to assess the underlying factor, access to health care.
A number of alternative models can also be conceptualized along these lines. Our purpose in the present study is not to identify a "perfect" theoretical model of ACSC hospitalization or to include
all observable variables that might be suggested for such a model, but rather to illustrate the usefulness of a statistical method for identifying areas with poor access to health care. Nonetheless,
the model presented in this study should be adequate to suggest geographical areas where further research should be concentrated to reduce potential barriers to the accessibility of primary health
care. The methods used in this paper could be usefully applied to other geographical areas as well as a wide variety of questions in public health and health services research.
Instead of modeling hospital admissions for ACSCs as a single measure of health care access, we propose to model twelve adult ACSCs individually and adopt a multivariate approach. To our knowledge
this is the first work that treats ACSCs as a multivariate concept, rather than a univariate one, in a spatial factor analytic approach. These twelve manifest variables represent ACSCs: short-term
diabetes complications, long term diabetes complications, uncontrolled diabetes, lower extremity amputation in individuals with diabetes, adult asthma, hypertension, dehydration, urinary tract
infection (UTI), bacterial pneumonia, angina without procedure, chronic obstructive pulmonary disease (COPD) and congestive heart failure (CHF). In Figure 1, these twelve ACSCs correspond to ACSC1
through ACSC12. The multivariate approach will allow us to incorporate the correlation structure among the ACSCs into the model. This is useful because some of the ACSCs share common comorbidities,
and others share common behavioral risk factors. Aggregating all ACSCs into a single variable would lose this information, introducing potentially substantial bias into the estimates. The latter
approach has been used in almost all previous research that relies on the ACSC indicator. Thus, the present method may provide a notable opportunity to improve research that relies on this
widely-used indicator.
The above conceptual model will be validated at the county level by a multivariate spatial factor analysis. The analysis will then potentially involve two confounded dimensions of dependency: between
different variables and between different spatial locations. The research question that we will address is how population lifestyle, physician practice behaviors, population tendency to use health
care resources, and disease prevalence are associated with a common spatial factor underlying ACSC admissions. We will look for a regression relationship among these variables by a confirmatory
factor analysis approach, where the factor underlying the twelve ACSC admissions is the dependent variable, and population lifestyle, physician practice behaviors, population tendency to use health
care resources, and disease prevalence are independent variables. We assume that the independent variables and the common factor (access to care) underlying the twelve ACSC admission types are
complex latent constructs rather than measurable quantities. Structural equation modeling treats these constructs as underlying latent factors and finds their relationships through the manifest
variables used to measure them.
Manifest variables
The manifest variables are the observed data used to measure the latent factors and examine the causal connections between these factors. In our model, all of the manifest variables are measured at
the county level.
Four variables are used to measure population lifestyle or socio-economic status (SES): household income, percentage of the population below the poverty level, unemployment rate per 1000 population,
and ethnicity. The measure of household income is the median household income. Ethnicity is measured by the percentage of the population that is African-American. This ethnicity definition is
reasonable in the South Carolina context; a large majority of residents are either African American or non-Hispanic white, both statewide and within each county, and the proportion that is African
American is substantial in every county. Other socio-economic variables, e.g., education level (measured by years of educational attainment), could be included among the measures for this latent factor.
Three variables measure physician practice behavior: physician supply per 1000 population, hospital beds per 1000 population, and hospitalizations for high variation conditions per 1000 population.
The first two measures can affect practice patterns due to supplier-induced demand; when the supply of physicians or hospital beds grows to a level where the individual physician or hospital must
compete to maintain income, the likelihood of supplier-induced demand may rise [9]. High variation conditions are those for which hospitalizations vary greatly among areas [8,20]. Hospitalization for
these conditions involves physician discretion in treatment options; high rates of hospitalization for these conditions in a county may suggest underlying problems in medical decision making or
differences associated with physician training or local practice cultures. We use the list of medical DRGs for high-variation conditions provided by the Dartmouth Atlas of Health Care [21].
Three variables are used to measure population tendency to use health care: rural residence, the penetration of Health Maintenance Organizations (HMOs) in the area, and elective procedures. Rural
residence is a proxy measure of travel time and other barriers to accessing physicians. This can be conceptualized as an ordinal variable, with 10 categories of rurality. One previous study used an
ordinal definition of rurality of this sort, and found a notable gradient of hospitalization across levels of rurality [22]. HMO penetration rate influences physician practice behavior. Physicians in
areas with high HMO penetration tend to practice in a more preventative way (according to the HMO guidelines) than physicians in low HMO penetration areas, even when the patient is covered by
fee-for-service insurance [23]. Elective procedures are planned, non-emergency surgical procedures. They may be either medically required (e.g., cataract surgery) or optional (e.g., breast
augmentation or implant) surgery. Elective surgeries may extend life or improve the quality of life physically and/or psychologically. However, they nonetheless provide a measure of population
tendency to use health care since rates of such surgeries vary notably among both small areas and large geographical regions.
Four variables measure disease prevalence: disabled population per 1000, mortality per 1000 population, hospitalizations for marker conditions per 1000 population, and hospitalizations for chronic
conditions per 1000 population. Disability is measured by the number of people who receive Social Security benefits for disability. Instead of a blanket 'mortality' measure, we use mortality for
liver disease as a measure of excessive alcohol consumption. We also use mortality for heart disease, COPD, and diabetes [5]; the latter three mortality measures are for ACSCs. The rationale for
using these measures is to control for disease severity, which is presumably associated with mortality for these diseases. Death rates for these diseases may also indicate health care access
barriers; areas with inadequate access may have higher death rates. Thus, including these death rates may over-adjust ACSC rates, providing conservative estimates. Hospitalizations for marker
conditions are taken to be measures of population health. Marker conditions include hospitalizations for appendicitis with appendectomy, acute myocardial infarction (AMI), gastrointestinal
obstruction and hip fracture. Hospitalizations for these conditions are not typically associated with physician supply, physician practice patterns, or related variables. Another important predictor
for population health is the proportion of the population with chronic conditions. For a list of these conditions, we used the Chronic Condition Data Warehouse User Manual [24].
Figure 2 displays thematic maps of these manifest variables that are used for constructing the exogenous variables. In this display, all of these manifest variables are transformed to have mean zero
and standard deviation one. The first row shows the four manifest variables that measure population lifestyle/SES. The map for household income depicts an opposite pattern from the maps for the other
three variables. The second row shows the three manifest variables that measure physician practice behavior. These three maps do not show any common pattern. The third row shows the three manifest
variables that measure population tendency to use health care. The map for the HMO penetration rate shows an opposite pattern from the maps for the other two variables. The fourth row shows the four
manifest variables that measure disease prevalence. These four maps show similar patterns.
Figure 2. Thematic maps of the observed variables for underlying factors population lifestyle/SES (first row), physician practice behavior (second row), population tendency to use health care (third
row) and disease prevalence (fourth row).
Statistical models for access to health care
In the statistical model corresponding to the conceptual model for AHC, we have used the generalized spatial structural equation models proposed by Liu et al. [16] and Wang and Wall [17]. It is a
two-level hierarchical model; the first-level is a measurement model that can accommodate any distributions from the exponential family. The second-level is a structural equation model.
In the example below, we illustrate the implementation details of this model for the modeling of AHC, the use of cluster detection tools to find the counties with notable access risks for each type
of ACSC admissions, and use of a model selection criterion to validate the model.
Generalized spatial structural equation models for AHC
In the above conceptual model, the total number of latent factors is five (i.e., q = 5). Among them, one is an endogenous variable (i.e., q[1 ]= 1) and four are exogenous variables (i.e., q[2 ]= 4).
The total number of manifest variables is twenty-six, for which p[1 ]= 12, p[2 ]= 4, p[3 ]= 3, p[4 ]= 3 and p[5 ]= 4.
The observed numbers of hospital visits for ACSC1, ..., ACSC12 are modeled with Poisson distributions,

$O_{ij} \sim \text{Poisson}(E_{ij}\,\theta_{ij}), \quad j = 1, \ldots, 12, \qquad (1)$

where $E_{ij}$ is the expected count of hospitalizations for the $j$th ACSC in the $i$th county and $\theta_{ij}$ is the corresponding relative risk. The other observed data are given Gaussian measurement models. The joint mean structure, with constraint values applied to some of the factor loadings (one loading per latent factor is fixed at one for identifiability), links each manifest variable to its own latent factor; in particular, for the ACSCs,

$\log(\theta_{ij}) = \lambda_{1j}\, f_{1i}. \qquad (2)$
The structural equation model for the relationship between latent factors for county $i$ is

$f_{1i} = \gamma_1 f_{21,i} + \gamma_2 f_{22,i} + \gamma_3 f_{23,i} + \gamma_4 f_{24,i} + \kappa_i, \qquad (3)$

where $\kappa_i$ has a univariate proper conditional autoregressive (CAR) distribution, defined by

$\kappa_i \mid \kappa_{-i} \sim N\!\left(\frac{\rho_\kappa}{n_i} \sum_{l \in \partial_i} \kappa_l, \; \frac{\sigma^2_\kappa}{n_i}\right), \qquad (4)$

where $\kappa_{-i}$ is the set of $\kappa$'s that share a common boundary with the $i$th region, $\partial_i$ indexes those neighboring regions, and $n_i$ is their number.
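To make the proper CAR specification concrete: the conditional distributions above correspond to a joint zero-mean multivariate normal with precision matrix $(D - \rho W)/\sigma^2$, where $W$ is the binary county adjacency matrix and $D$ is diagonal with the neighbor counts. A small NumPy sketch (our illustrative code, not the authors' implementation):

```python
import numpy as np

def proper_car_precision(W, rho, sigma2):
    """Joint precision matrix of a proper CAR field kappa ~ N(0, sigma2*(D - rho*W)^(-1)).
    The implied conditionals are
        kappa_i | kappa_{-i} ~ N(rho * mean(neighbor kappas), sigma2 / n_i),
    where n_i is the number of neighbors of region i."""
    D = np.diag(W.sum(axis=1))          # diagonal matrix of neighbor counts
    return (D - rho * W) / sigma2

# Toy adjacency for 4 regions in a chain: 1-2-3-4
W = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
Q = proper_car_precision(W, rho=0.9, sigma2=1.0)
# |rho| < 1 makes Q strictly diagonally dominant, hence positive definite,
# so the joint distribution is proper.
print(np.all(np.linalg.eigvalsh(Q) > 0))  # True
```

Restricting $|\rho| < 1$ is what makes this CAR "proper" (an integrable joint density), which is why the uniform priors on the correlation parameters below are supported on (0, 1).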
The joint distribution for $f_{2i} = (f_{21,i}, \ldots, f_{24,i})'$ is defined by the linear model of coregionalization method [25-27] as

$f_{2i} = A u_i, \qquad (5)$

where $A$ is a $4 \times 4$ lower-triangular matrix with free elements $a_1, \ldots, a_{10}$ (diagonal elements $a_1, a_3, a_6, a_{10}$), $u_i = (u_{1i}, \ldots, u_{4i})'$, and the $u_j$'s have proper CAR distributions as defined in (4).
Prior specifications and Posterior distribution
Under the Bayesian paradigm, it is essential to set a prior distribution for each parameter to be estimated. Five factor loadings in (2), one per latent factor, are fixed at one for identifiability; vague normal priors are assigned to the remaining factor loadings ($\lambda_{kj}$, $j = 1, \ldots, p_k - 1$, $k = 1, \ldots, 5$) in (2), and to the $\gamma$'s in the structural equation model in (3). Uninformative inverse-gamma priors are assigned to the scale parameters $\sigma_\kappa$ in (4), and $a_1$, $a_3$, $a_6$ and $a_{10}$ in (5). Uniform distributions on the interval (0, 1) are taken as prior distributions for the spatial correlation parameters $\rho_\kappa$ in (4), and the corresponding correlation parameters of the $u_j$'s in (5).
Let $\Gamma$ be the vector that contains all the unknown parameters, $O$ be a vector of order $26n$ of all the manifest variables, $f_1$ be a vector of order $n$ of endogenous factors and $f_2$ be a vector of order $4n$ of exogenous factors. The joint posterior distribution of all the unknowns is defined as

$p(\Gamma, f_1, f_2 \mid O) \propto p(O \mid f_1, f_2, \Gamma)\, p(f_1 \mid f_2, \Gamma)\, p(f_2 \mid \Gamma)\, p(\Gamma),$

where $p(f_1 \mid f_2, \Gamma)$ is determined by (3)-(4), $p(f_2 \mid \Gamma)$ by (5), and $p(\Gamma)$ is the product of the individual prior distributions.
Spatial cluster detection
In the measurement model, the Poisson models for the twelve adult ACSCs are given as $O_{ij} \sim \text{Poisson}(E_{ij}\,\theta_{ij})$, where $\theta_{ij}$ is the relative risk for the $j$th ACSC, $j = 1, \ldots, 12$. It is of interest to find the counties where the rate of hospitalization is high for specific ACSCs, as this has clinical relevance for the design of targeted interventions to improve the medical management of those conditions.
In order to find these counties, we apply a cluster detection tool that is developed in Hossain and Lawson [28] for spatial data. A cluster is a geographically and/or temporally bounded group of
occurrences of sufficient size and concentration that it is unlikely to have occurred by chance [29]. Some cluster detection tools proposed in Hossain and Lawson [28] are based on neighborhood
information, with the belief that clustering could have spatial integrity, and some are based on error rates (e.g., misclassification rate, mean square error).
From the maps, we will be interested in identifying the counties with excess risks for ACSC hospitalizations, i.e., clusters. We first calculate the posterior exceedence probability (PEP), i.e., the probability that the ACSC-specific relative risk estimate exceeds a given threshold value. This is often used to assess localized single-region hot-spot clusters. The PEP is estimated as

$q_{ij} = \Pr(\theta_{ij} > c \mid \text{data}) \approx \frac{1}{G} \sum_{g=1}^{G} I(\theta_{ij}^{(g)} > c),$

where $\theta_{ij}^{(g)}$ is the $g$th sample value from the converged posterior sampling output, $G$ is the posterior sample size, and $c$ is a factor-specific threshold value. The choice of a value for $c$, which is critical, can be made according
to the study objectives. One choice could be the value one. This probability estimate is commonly used to provide evidence of notable excess risk in individual counties [30]. Notable excess risk can
be regarded as a criterion for identifying 'hot-spot' clusters.
We could use PEP to examine a single county. However, it may be reasonable to believe that clustering should have some spatial integrity, in which case criteria that also examine county-level
neighborhoods around points could be useful. Define a set $\{q_{ijk};\, k = 0, 1, \ldots, n_i\}$ of first-order-neighbor $q$ values for the $i$th county and $j$th ACSC, where $q_{ijk}$ is the $q$ value in the $k$th neighboring county, $n_i$ is the number of first-order neighbors of the $i$th county that share a common geographical boundary, and $q_{ij0}$ is the $q$ value of the $i$th county itself for the $j$th ACSC.
A local measure $R_{ij}$ can be proposed as

$R_{ij} = I(q_{ij0} > 0.95)\, \frac{1}{n_i + 1} \sum_{k=0}^{n_i} I(q_{ijk} > 0.95),$

to calculate the proportion having exceedence probability greater than 0.95 based on the first-order neighbors. The first indicator function on the right-hand side, $I(q_{ij0} > 0.95)$, ensures that only counties that themselves have excess risk are used to find clusters. The measure $R_{ij}$ shows the grouping of excess-risk regions where the posterior probability of excess risk is
greater than 0.95. In this way, a surface of R[ij ]can be derived, which will give evidence of clusters of excess risk and can be used to detect unique clusters. Note that there is a trade off
between the choice of c and the chosen critical probability value (here defined as 0.95). Higher values of c will lead to fewer regions signaling, while lower critical probability values will admit
more regions.
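To make the computation concrete, the following sketch (our illustrative code, not the authors' WinBUGS implementation; array and variable names are assumptions) estimates $q_{ij}$ and $R_{ij}$ from posterior samples for a single ACSC:

```python
import numpy as np

def exceedence_prob(theta_samples, c=1.0):
    """q_i = Pr(theta_i > c | data), estimated from a (G x n) array of
    posterior samples of county-level relative risks."""
    return (theta_samples > c).mean(axis=0)

def local_cluster_measure(q, neighbors, cutoff=0.95):
    """R_i: among county i and its first-order neighbors, the proportion with
    q > cutoff -- counted only if county i itself exceeds the cutoff."""
    R = np.zeros(len(q))
    for i, nbrs in neighbors.items():
        if q[i] > cutoff:
            idx = [i] + list(nbrs)
            R[i] = np.mean(q[idx] > cutoff)
    return R

# Toy example: 3 counties in a chain, 1000 posterior draws.
# County 0 has relative risk centered near 2.0; counties 1 and 2 near 1.0.
rng = np.random.default_rng(0)
theta = rng.gamma(shape=[40, 4, 4], scale=[0.05, 0.25, 0.25], size=(1000, 3))
q = exceedence_prob(theta)
R = local_cluster_measure(q, {0: [1], 1: [0, 2], 2: [1]})
```

Here only the elevated county signals, and its $R$ value reflects that just one of the two counties in its neighborhood (itself) exceeds the cutoff.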
Model estimation and validation
To estimate the models, we used software written by the first author in the WinBUGS programming language [31]. The computer code used for this research is available from the first author on request.
The maps were produced in R [32]. The reported model results are the posterior mean over 20,000 MCMC samples after a burn-in period of 1,000,000 samples for each estimated unknown parameter. Because
the model is complex, this relatively long burn-in period was used to ensure convergence. We also checked the density plot and the trace plot of each parameter.
To validate our conceptual model, we will consider a number of alternative models based on spatial and/or independent effects at different hierarchical levels; the best model will be chosen by a
model selection criterion. As an aid to model selection we use the deviance information criterion (DIC) [33]. In a Bayesian paradigm, DIC seems the most appropriate model selection criterion since it
exploits the deviance statistics of GLM as a measure of goodness-of-fit, and then penalizes it by the effective number of parameters. Another possibility is to use the mean square prediction error
(MSPE). The MSPE is the posterior predictive loss under the squared error loss function as described in Gelfand and Ghosh [34]. The MSPE is the mean squared difference between the observed and the
predicted values of the outcome variable. Thus, the model that results in predicted values closest to the observed values will produce the lowest MSPE. Unlike DIC, in MSPE the role of the effective
number of parameters as a measure of model complexity is not clear; this suggests using the DIC for model validation. Formally, the DIC for model M is defined as
where Θ[M] is the set of all parameters under model M and p[M] is the effective number of parameters, a measure of model complexity. The effective number of parameters is calculated by
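The two display equations in this paragraph were dropped during extraction. The standard DIC definitions, which the surrounding text appears to describe (a reconstruction, not recovered from the source), are:

```latex
\mathrm{DIC}_M \;=\; \overline{D(\Theta_M)} \;+\; p_M ,
\qquad
p_M \;=\; \overline{D(\Theta_M)} \;-\; D\!\left(\overline{\Theta}_M\right),
```

where \(\overline{D(\Theta_M)}\) is the posterior mean of the deviance under model M and \(D(\overline{\Theta}_M)\) is the deviance evaluated at the posterior means of the parameters.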
Data sources
The above conceptual model was tested at the county level for the 2001 population of South Carolina ages 18 and over. The county specific observed numbers of hospital admissions for twelve adult
ACSCs for the state of South Carolina were obtained from the State Inpatient Database (SID) for South Carolina. The nationwide numbers of hospital admissions for the reference year, year 2000, for
the twelve adult ACSCs for different age- and gender-groups, were obtained from the Nationwide Inpatient Sample (NIS), with adjustment for the sampling weights. The total population in each age- and
gender-group for the South Carolina state population for the reference year was obtained from the US census bureau website. The case-mix adjusted county and ACSC specific expected counts were
obtained by the indirect method of standardization. In this case-mix adjustment, two important confounders were considered, age and sex, because the preliminary analysis indicated some degree of
variation in these two groups for the ACSCs hospitalization rates.
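The indirect standardization step can be illustrated with a small sketch. All numbers below are invented for illustration; the paper's actual strata are its age- and gender-groups, with reference rates taken from the 2000 NIS.

```python
# Indirect standardization: a county's expected count is the sum over
# age-sex strata of (reference rate in stratum) x (county population in
# stratum).  Rates and populations below are hypothetical.

def expected_count(reference_rates, county_pop):
    return sum(reference_rates[s] * county_pop[s] for s in county_pop)

# Reference ACSC admission rates per person (hypothetical)
ref = {("18-44", "F"): 0.010, ("18-44", "M"): 0.012,
       ("45+", "F"): 0.040, ("45+", "M"): 0.050}
# One county's adult population by stratum (hypothetical)
pop = {("18-44", "F"): 2000, ("18-44", "M"): 1800,
       ("45+", "F"): 1500, ("45+", "M"): 1200}

E = expected_count(ref, pop)   # 20 + 21.6 + 60 + 60 = 161.6
SAHR = 210 / E                 # observed / expected, if 210 admissions observed
print(round(E, 1), round(SAHR, 2))
```

An SAHR above 1 means the county saw more ACSC admissions than its age-sex mix would predict from the reference rates.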
The county specific data were obtained from Area Resource File (ARF) for the following manifest variables: urban-rural continuum; physicians per 1000 population; HMO penetration rate; hospital beds
per 1000 population; median household income; mortality rates for liver disease, CHF, COPD and diabetes; percentage of the population that is disabled; unemployment rate; and percentage of population
below the poverty level. County specific hospital visits for marker conditions, chronic conditions, elective procedures, and high-variation conditions were obtained from the SID for South Carolina.
The state of South Carolina has forty-six counties (i.e., n = 46) with various degrees of racial and economic diversity. It has twenty federally-funded community health centers (CHCs); county-wise
numbers are given in the top-left of Figure 3. CHCs are widely regarded as easily accessible primary health care centers for economically disadvantaged populations. Charleston County in the east has
the largest number of CHCs. The thematic map for the number of emergency department (ED) visits in 2001 is given in the top-right of Figure 3. Standardized ACSC hospitalization rates (SAHR) are given
in the bottom-left of Figure 3. For the presentation in Figure 3, observed and expected ACSCs are obtained using the rates of the combined twelve adult ACSCs; this approach is used in most research
that uses the ACSC indicator. The highest SAHRs are observed in the northeast region, constituted by Marlboro, Dillon and Marion counties, and Union County in the north; the lowest SAHR is observed
for McCormick County in the west.
Figure 3. Map for the county-wise number of CHCs in operation (top-left), and thematic maps of ED admissions (top-right), standardized ACSC hospital visit rates (bottom-left), and endogenous
variable, access to health care (bottom-right).
The DIC for the current model (Model3) is 8407.14, with an effective number of parameters of 121.96. To validate the current model, we have also fitted two other models and observed their DIC values.
The DIC values and the values for the effective number of parameters for each model are presented in Table 1. In Model1, no spatial dependence was assumed for the endogenous and exogenous factors.
Model2 and Model3 considered spatial dependence for all factors. The difference lies in the priors: in Model2, flat normal priors were assigned to all the χ's in (3), whereas in Model3, the prior
distributions for the χ's are χ[1] ~ U(0, 10), χ[2] ~ U(0, 10), χ[3] ~ U(-10, 0) and χ[4] ~ U(0, 10). The minimum and maximum values for the parameters of the uniform priors were
selected based on our preliminary understanding of the influence of exogenous factors on AHC. We can see an improvement in the DIC value for Model3 under these prior specifications. The results
presented hereafter are for Model3.
Table 1. The deviance information criteria (DIC) and the effective number of parameters for the competing models
The thematic map of the posterior mean of the endogenous variable representing access to health care (AHC) is given in the bottom-right portion of Figure 3. The darker regions show counties with
lower rates of AHC (corresponding to higher rates of hospitalization for ACSCs); lighter colors indicate higher AHC rates. We can also see a clustering pattern; there are three distinct clusters of
various sizes and shapes: one in the north, one in the south, and one extended from north to east. The strong similarity between the maps of SAHR and AHC justifies using the ACSC hospitalization rate
as a manifest variable for AHC. In general, the four maps in Figure 3 are quite similar.
Table 2 gives the posterior mean estimates with the 95% credible interval (CI) of factor loadings for the endogenous variable, AHC, at measurement level. Uncontrolled diabetes, hypertension and
dehydration are the most significant ACSCs for the construction of AHC. Table 3 gives the posterior mean estimates with the 95% CI of factor loadings and standard deviations for the four exogenous
variables: population lifestyle/SES, physician practice behavior, population tendency to use health care and disease prevalence, at the same level. In these two tables, the first column shows the
name of manifest variables, and the second and third columns show the corresponding factor loading parameters and their estimates. The third and fourth columns of Table 3 show the standard deviations
of the measurement models for the exogenous factors and their estimates. All the factor loadings for the latent factor population lifestyle/SES are significant since none of the estimated credible
intervals include zero. Hospital bed supply and elective procedures are significant manifest variables for the construction of physician practice behavior and population tendency to use health care,
respectively. For the construction of disease prevalence, disabled and mortality are significant manifest variables. The significant loading factors always have low standard deviation.
Table 2. Posterior mean (95% credible interval) estimates of measurement model parameters of an endogenous variable for the conceptual model for South Carolina population ages 18 and over
Table 3. Posterior mean (95% credible interval) estimates of measurement model parameters of four exogenous variable for the conceptual model for South Carolina population ages 18 and over
The posterior means with 95% credible intervals for all parameters in the structural equation model are given in Table 4. All of the regression coefficients are significant. Among them, the latent
factors (population lifestyle/SES, physician practice behavior and disease prevalence) contribute positively to the lack of AHC. The other latent factor, population tendency to use health care,
contributes positively to the increase of AHC. The spatial correlation for the latent factor for AHC is close to one, indicating strong similarities among the spatial distributions of ACSCs. The
spatial correlations for the other latent factors are moderate.
Table 4. Posterior mean (95% credible interval) estimates of structural equation model parameters of conceptual model for South Carolina population ages 18 and over
Figure 4 displays the thematic maps of four exogenous variables: population lifestyle/SES, physician practice behavior, population tendency to use health care, and disease prevalence. In all of these
four maps, darker counties indicate unhealthful lifestyle/SES, inadequate physician practice behavior, a low tendency to use health care resources, and high rates of disease prevalence.
Figure 4. Thematic maps of four exogenous variables, disease prevalence (top-left), population tendency to use health care resources (top-right), physician practice behaviors (bottom-left), and
population lifestyle (bottom-right).
Figure 5 displays the exceedance probability for each ACSC where c = 1.5. The value for the threshold is chosen arbitrarily. A darker color indicates excess risk of ACSC admission. The maps tend to
show a clustering pattern. The largest clusters are obtained for uncontrolled diabetes and hypertension; factor loading estimates for these two ACSCs were 1.690 and 1.559, respectively. For these two
ACSCs, one cluster in the east extends to the state's center; one appears in the north and one in the south. Similar clustering is also shown for short-term diabetes complications, long-term diabetes
complications, lower extremity amputation in diabetic patients, adult asthma, dehydration, UTI, bacterial pneumonia, COPD and CHF. The smallest cluster is obtained for angina without procedure, for
which the loading factor estimate was 0.4434. Figure 6 displays the maps for R after using a cluster detection tool. Figure 6 signals clustering patterns similar to Figure 5; the tendency is for
counties with the highest exceedance probabilities in Figure 5 to have slightly weaker signals in Figure 6.
Figure 5. Thematic maps of exceedance probability of twelve adult ACSC hospital visits.
Figure 6. Thematic maps of R[i], i = 1,...,46 of twelve adult ACSC hospital visits.
By using generalized spatial structural equation modeling, we attempted to identify how population lifestyle/SES, physician practice behaviors, population tendency to use health care resources, and
disease prevalence are associated with access to primary health care, as measured by hospitalizations for ACSCs. We observed that counties having low access to primary health care also have
unhealthful lifestyles, inadequate physician practice behaviors, a low tendency to use health care and high rates of disease prevalence.
The overall strength of this research lies in the importance of showing the geographical distributions (i.e., maps) of each latent factor: access to health care, population lifestyle/SES, physician
practice behaviors, population tendency to use health care resources, and disease prevalence. Because of the unobservable nature of these factors, we used a multivariate spatial structural equation
modeling approach. To measure the underlying factor for AHC, we used all of the ACSCs individually, an approach that retains useful information in the modeling. By doing this for South Carolina
hospital discharge data for the year 2001, we confirmed a similar spatial distribution of AHC and ED visits. These two maps also have strong resemblance to the spatial distribution of CHC locations.
Counties that had no CHC had the least access to primary health care and more ED visits. This finding is consistent with the limited relevant research literature on the effectiveness of CHCs for
improving access [35-37] and a large body of research on factors associated with ED visits. The CHC finding has substantial policy relevance, as it is often anticipated that CHCs will be located in
counties having the greatest need to improve the accessibility or quality of primary health care. The results suggest that the counties that had the lowest estimated levels of access to health care
might benefit from having CHCs, which can reduce rates of expensive ED utilization.
This research also proposed to find the clusters of counties with excess risk for ACSC hospitalization, utilizing a cluster detection tool. In the computation of exceedance probability, we set the
threshold value to 1.5. Higher threshold values could also be of interest (e.g., 3) to find high-risk counties. The result would locate counties where the accessibility or quality of primary health
care may be particularly inadequate; these counties would be especially appropriate for targeted policy actions to enhance primary health care. This result illustrates the practical value of
identifying spatial clusters with a relatively high likelihood of having barriers to primary health care.
Access to health care can also be viewed as a dynamic process; besides varying spatially, it may also vary temporally. In our future work, we propose to extend the multivariate spatial
structural equation models to space-time data, since health care data are now regularly available for repeated years at the level of geographical units. The space-time analysis will show the spatial
and temporal distribution of those latent factors, and will locate clusters of under-served regions that are persistent over time. The extension to space-time analysis will be useful for examining
effects of policy changes designed to improve access to primary health care. It will also be useful for examining effects of state reductions in health care for vulnerable populations in the United
States Medicaid program.
Authors' contributions
MMH conceived the paper, performed the statistical analysis and drafted the manuscript. JNL provided health services research expertise, assisted in the development of the conceptual model, and
contributed to the manuscript. Both authors read and approved the final manuscript.
The first author would like to gratefully acknowledge the support of the National Institutes of Health (NIH grant # 1 R03CA125828-01 and NIH-CTSA award 1U54RR023417-01).
Sign up to receive new article alerts from International Journal of Health Geographics
FOM: Ontology of Mathematics
Harvey Friedman friedman at math.ohio-state.edu
Tue Jun 27 14:44:42 EDT 2000
Reply to Mycielski 6/24/00 3:09PM:
I find this posting extremely misleading. In fact, its effect
would be to discount much of the most interesting and important work done
in f.o.m.
> I believe that the solution of this problem is:
This is already misleading. Age-old philosophical problems are just not
rationally subject to statements like this.
> A rational ontology of pure mathematics tells us that the finite
>structure of mathematical ojects (say sets as containers designed to
>contain other containers) which are actually imagined by mathematicians.
This sentence does not seem to me to be grammatical.
>A mathematical counterpart of this structure is a finite (growing) segment
>of the algebra of epsilon terms (of the Hilbert epsilon extension of the
>first-order language of set theory) modulo the equations which have been
>proved (that is those about which we know).
This is a technical construction that simply serves to complicate the usual
model of mathematical reasoning through axiomatic set theory. Nothing is to
be gained, foundationally, by creating these epsilon terms, as they would
very rapidly become unreadable and unusable already in extremely elementary
> I believe that this is so simple and so compelling, that I do not
>see any significance of the intuitionistic critique of classical
It is complicated and uncompelling. In no way does it bear on the
significance of intuitionism.
>Indeed the above ontology is perfectly finitistic and fully constructive.
It is infinitistic and fully nonconstructive.
>It explains also our feeling of concreteness of mathematical
>objects, since they are things which we have imagined or at least named.
Giving a name to an object does not make it more concrete. E.g., you can
give a name to a well ordering of the reals, but that doesn't make it concrete.
>And it explains our feeling that, say, ZFC is consistent, since those
>finite segments grow in such a regular way that we cannot imagine that the
>process of their construction (which is described by the axioms of ZFC)
>could lead to 0 = 1.
It does not add any confidence that we might have that ZFC is consistent.
The epsilon terms are a very general process that applies equally well to
any first order theory, and therefore has nothing whatsoever to do with set
theory in particular. What happens when you apply it to an inconsistent set
theory like ZFC + j:V into V?
> The above ontology is briefly mentioned by Hilbert in 1904, however
>at that time he does not have yet his epsilon symbols. He introduced the
>latter no later than in 1924, but in a paper of 1925 he does not mention
>his ontology of 1904. (My readings are from van Heijenoort and a book
>(thesis) by Leisenring.)
It is a nice proof theoretic idea that has yielded results in proof theory
through the so called epsilon substitution method. And it can be made
closely related to cut elimination. But you are trying to use it for
something that it is not suited for.
>I have not seen anywhere a clear (as above)
>definition of this ontology, but I have read a lot of confusing and
>confused (I believe) papers and books. It looks to me as if everybody
>forgot about Hilbert's 1904 paper, and although his epsilons are
>remembered, their fundamental significance (as tools for describing the
>structures which are really present in human imaginations) is never
>recalled. Am I right about this silence?
I think so, and perhaps you can see some good reasons for this silence.
> Now it is also obvious to me (long ago in a conversation with R.
>M. Solovay we were in agreement about this) that there is no known
>distinction between the degree of abstractness of any imagined objects.
This is ignoring most of what we have learned from f.o.m. That there is
indeed a robust hierarchy of levels of abstraction - proof theoretic,
definability theoretic, interpretability theoretic, etcetera. And these are
remarkably related and robust. In dismissing the fundamental significance
of this hierarchy, you are dismissing much of f.o.m.
And where do you get the idea that this is "obvious"? By the way, I doubt
very much if Solovay would defend such a statement on the FOM.
>All are equally imaginary until they are applied in descriptions
>of physical reality.
Why "equally"? Again, you are dismissing vitally important robust
hierarchies. And why do you take descriptions of physical reality as having
special significance? Such descriptions may be - and frequently are - even
more murky than abstract set theory.
>Thus, contrary to Brouwer or H. Weyl or E. Bishop or some statements of
>Hilbert (where he wanted to distinguish in mathematics the concrete from the
>abstract), we think that in pure mathematics nothing distinguishes the
>integers and their algebra from other mathematical objects and their
E.g., the integers are normally distinguished from the space of continuous
functions from the reals to the reals.
>We can only say that not all of the latter are used in natural
>science and some (like a well ordering of the real line) are unlikely to
>get such uses.
Why does natural science have such special status for you? If you are going
to make special distinctions, why not adopt the usual ones, say, between
integers and continuous functions on the reals?
More information about the FOM mailing list
Does nature impose limits on what we can know? But why?
» Does nature impose limits on what we can know? But why?
Does nature impose limits on what we can know? But why?
September 12, 2013 Posted by News under News, Physics 3 Comments
From an article in Nature, on a variety of efforts to come to terms with the quantum world:
… entanglement and all the other strange phenomena of quantum theory are not a completely new form of physics. They could just as easily arise from a theory of knowledge and its limits.
To get a better sense of how, Fuchs has rewritten standard quantum theory into a form that closely resembles a branch of classical probability theory known as Bayesian inference, which has its
roots in the eighteenth century. In the Bayesian view, probabilities aren’t intrinsic quantities ‘attached’ to objects. Rather, they quantify an observer’s personal degree of belief of what might
happen to the object. Fuchs’ quantum Bayesian view, or QBism (pronounced ‘cubism’), is a framework that allows known quantum phenomena to be recovered from new axioms that do not require
mathematical constructs such as wavefunctions. QBism is already motivating experimental proposals, he says. Such experiments might reveal, for example, new, deep structures within quantum
mechanics that would allow quantum probability laws to be re-expressed as minor variations of standard probability theory.
Knowledge — which is typically measured in terms of how many bits of information an observer has about a system — is the focus of many other approaches to reconstruction, too. As physicists
Caslav Brukner and Anton Zeilinger of the University of Vienna put it, “quantum physics is an elementary theory of information”. Meanwhile, physicist Marcin Pawlowski at the University of Gdansk
in Poland and his colleagues are exploring a principle they call ‘information causality’. This postulate says that if one experimenter (call her Alice) sends m bits of information about her data
to another observer (Bob), then Bob can gain no more than m classical bits of information about that data — no matter how much he may know about Alice’s experiment.
Well, if a theory of information underlies the universe, then intelligence must also, not?
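The Bayesian view sketched in the excerpt — probabilities as an observer's degrees of belief, updated by evidence — is ordinary Bayes' rule. A minimal illustration (our example, not from the article):

```python
# Bayes' rule: posterior ∝ likelihood × prior.  An observer's degree of
# belief about a coin (fair vs. heads-weighted) after seeing one head.

priors = {"fair": 0.5, "weighted": 0.5}            # initial beliefs
likelihood_heads = {"fair": 0.5, "weighted": 0.8}  # P(head | hypothesis)

unnorm = {h: priors[h] * likelihood_heads[h] for h in priors}
total = sum(unnorm.values())
posterior = {h: p / total for h, p in unnorm.items()}
print(posterior)    # belief shifts toward the weighted hypothesis
```

Nothing about the coin itself changed; only the observer's knowledge did — which is the point QBism presses about quantum states.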
3 Responses to Does nature impose limits on what we can know? But why?
1. Anyone not familiar with the writings of Stanley L. Jaki could not harm themselves by reading everything he’s written that they can get their grubby little hands on.
Knowledge — which is typically measured in terms of how many bits of information an observer has about a system … if one experimenter (call her Alice) sends m bits of information about her
data to another observer (Bob)…
Ah yes, the aboutness of information.
2. OT: “Functioning ‘mechanical gears’ seen in nature for the first time”
This could be up there with the bacterial flagellum as an icon of ID.
3. As to this comment from the article:
“What is it about this world that forces us to navigate it with the help of such an abstract entity?” wonders physicist Maximilian Schlosshauer of the University of Portland in Oregon,
referring to the uncertainty principle; the wave function that describes the probability of finding a system in various states;,,
Over the past decade or so, a small community of these questioners have begun to argue that the only way forward is to demolish the abstract entity and start again.,,
his larger goal was to show how quantum physics might be reframed as a general theory of probability.,,
Two problems with their proposal. One problem is that the infinite dimensional wave-function is not an ‘abstract’ entity but is real:
On the reality of the quantum state – Matthew F. Pusey, Jonathan Barrett & Terry Rudolph – May 2012
Abstract: Quantum states are the key mathematical objects in quantum theory. It is therefore surprising that physicists have been unable to agree on what a quantum state truly represents. One
possibility is that a pure quantum state corresponds directly to reality. However, there is a long history of suggestions that a quantum state (even a pure state) represents only knowledge or
information about some aspect of reality. Here we show that any model in which a quantum state represents mere information about an underlying physical state of the system, and in which
systems that are prepared independently have independent physical states, must make predictions that contradict those of quantum theory. (i.e. Any model that holds the Quantum wave state as
merely a abstract representation of reality, i.e. as not a real representation of reality, must make predictions that contradict those of quantum theory.)
- per Nature
The following establishes the quantum wave function as ‘real’ from another angle of logic;
Does the quantum wave function represent reality? April 2012
Excerpt: “Similarly, our result that there is a one-to-one correspondence between the wave function and the elements of reality means that, if we know a system’s wave function then we are
exactly in such a favorable situation: any information that there exists in nature and which could be relevant for predicting the behavior of a quantum mechanical system is represented
one-to-one by the wave function. In this sense, the wave function is an optimal description of reality.”
per Physorg
Moreover wave functions are not abstract for they have been directly measured:
Direct measurement of the quantum wavefunction – June 2011
Excerpt: The wavefunction is the complex distribution used to completely describe a quantum system, and is central to quantum theory. But despite its fundamental role, it is typically
introduced as an abstract element of the theory with no explicit definition.,,, Here we show that the wavefunction can be measured directly by the sequential measurement of two complementary
variables of the system. The crux of our method is that the first measurement is performed in a gentle way through weak measurement so as not to invalidate the second. The result is that the
real and imaginary components of the wavefunction appear directly on our measurement apparatus. We give an experimental example by directly measuring the transverse spatial wavefunction of a
single photon, a task not previously realized by any method.
per Nature
As well, the following experiment actually encoded information into a photon while it was in its infinite dimensional quantum wave state, thus destroying the notion, held by many, that the wave
function was not ‘physically real’ but was merely ‘abstract’. i.e. How can information possibly be encoded into something that is not physically real but merely abstract?
Ultra-Dense Optical Storage – on One Photon
Excerpt: Researchers at the University of Rochester have made an optics breakthrough that allows them to encode an entire image’s worth of data into a photon, slow the image down for storage,
and then retrieve the image intact.,,, As a wave, it passed through all parts of the stencil at once,,,
Information in a Photon – Robert W. Boyd – 2010
Excerpt: By its conventional definition, a photon is one unit of excitation of a mode of the electromagnetic field. The modes of the electromagnetic field constitute a countably infinite set
of basis functions, and in this sense the amount of information that can be impressed onto an individual photon is unlimited.
Information In Photon – Robert W. Boyd – slides from presentation (slide 17)
It is also of interest to note that slide 15 and 17 in the preceding presentation has an uncanny resemblance to Euler’s Equation (and to the DNA helix) as is plotted in the following graph:
Graph of Euler’s Equation (page down just a bit)
The second problem with trying to declare the wave function abstract and trying to interpret quantum theory probabilistically is that it leads to the insanity of many worlds where there will be
a quasi-infinite number of versions of you, and everyone else, in a quasi-infinite number of parallel universes,,
You don’t exist in an infinite number of places, say scientists – January 25, 2013
Quantum probability and many worlds – 2007
Abstract: We discuss the meaning of probabilities in the many worlds interpretation of quantum mechanics. We start by presenting very briefly the many worlds theory, how the problem of
probability arises, and some unsuccessful attempts to solve it in the past. Then we criticize a recent attempt by Deutsch to derive the quantum mechanical probabilities from the
non-probabilistic parts of quantum mechanics and classical decision theory. We further argue that the Born probability does not make sense even as an additional probability rule in the many
worlds theory. Our conclusion is that the many worlds theory fails to account for the probabilistic statements of standard (collapse) quantum mechanics.
Nonlocality and free will vs. many-worlds and determinism: The material world emerges from outside space-time – Antoine Suarez – video
I think Dr. Suarez has a very good grasp on how to properly look at the issue of probability and quantum mechanics in this following paper:
What Does Quantum Physics Have to Do with Free Will? July 2013
Excerpt: True quantum randomness cannot happen without nonmaterial (nonlocal) control. Even in the most simple quantum interference experiments in the lab, the random and unpredictable firing
of the detectors is accompanied by an invisible coordination.
Algebra - Inequality word problem (hard)?
The toll to a bridge is $3.00. A three-month pass costs $7.50 and reduces the toll to $0.50. A six-month pass costs $30 and permits crossing the bridge for no additional fee. How many crossings per
three-month period does it take for the three month pass to be the best deal?
The answer is: more than 3 and less than 15 crossings per 3 month period.
I don't understand how to produce that answer. I set up a compound inequality to solve:
7.5 + .5x < 3x and 7.5 + .5x < 30
x > 3 and x < 45.
This looks like you would need more than 3 and less than 45 crossings to make the three month pass the better deal. Where is my confusion?
Re: Algebra - Inequality word problem (hard)?
Divine wrote:The toll to a bridge is $3.00. A three-month pass costs $7.50 and reduces the toll to $0.50. A six-month pass costs $30 and permits crossing the bridge for no additional fee. How
many crossings per three-month period does it take for the three month pass to be the best deal?
You need the 3-month pass to be less than each of the other two options for the same period of time. So you need to compare the base rate for six months to the six-month pass for six months and to
the three-month pass for six months. So you need two of the three-month passes!
($7.50 /3-mo period)*(2 3-mo periods) + ($0.50 / crossing)*(x crossings) < ($3.00 / crossing)*(x crossings)
($7.50 /3-mo period)*(2 3-mo periods) + ($0.50 / crossing)*(x crossings) < ($30.00 flat fee)
This gives you 15 + 0.5x < 3x and 15 + 0.5x < 30. See how that works out! (Remember to work back to the "crossings per three-month period" part for the actual answer.) | {"url":"http://www.purplemath.com/learning/viewtopic.php?f=8&t=2375","timestamp":"2014-04-17T10:55:44Z","content_type":null,"content_length":"20013","record_id":"<urn:uuid:e269c091-4c9d-4ace-8943-3992a0263d4c>","cc-path":"CC-MAIN-2014-15/segments/1398223201753.19/warc/CC-MAIN-20140423032001-00241-ip-10-147-4-33.ec2.internal.warc.gz"} |
Wolfram Demonstrations Project
Seeing the Chinese Remainder Theorem
Given the remainders r1 and r2 when a number is divided by relatively prime moduli m1 and m2, the Chinese remainder theorem can be used to find the result mod m1*m2. Use the controls to select r1, r2, m1, and m2. The darkest square contains the solution, where the two sets of differently colored bands intersect.
In the first snapshot,
54≡4 (mod 10)
54≡3 (mod 17) | {"url":"http://www.demonstrations.wolfram.com/SeeingTheChineseRemainderTheorem/","timestamp":"2014-04-16T16:06:51Z","content_type":null,"content_length":"42804","record_id":"<urn:uuid:30294e2e-8809-44f4-a755-384c7faaed37>","cc-path":"CC-MAIN-2014-15/segments/1397609524259.30/warc/CC-MAIN-20140416005204-00579-ip-10-147-4-33.ec2.internal.warc.gz"} |
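The snapshot's numbers can be confirmed with a minimal brute-force CRT search (a sketch, not the Demonstration's own code):

```python
def crt(r1, m1, r2, m2):
    """Smallest non-negative x with x = r1 (mod m1) and x = r2 (mod m2),
    assuming m1 and m2 are relatively prime."""
    for x in range(m1 * m2):
        if x % m1 == r1 and x % m2 == r2:
            return x

print(crt(4, 10, 3, 17))  # 54, matching the snapshot
```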
Homework Help
Recent Homework Questions About First Aid
For the first answer I am going with #2; I think #1 for the other.
Tuesday, December 17, 2013 at 12:32pm
Here's one interpretation of Donne's sonnet: http://www.shmoop.com/death-be-not-proud-holy-sonnet-10/summary.html You marked answer #2 in your first post. Is that still what you'd answer? Here's a
modern-English translation of Shakespeare's Sonnet ...
Tuesday, December 17, 2013 at 12:12pm
A stone is thrown vertically upward at a speed of 27.70 m/s at time t=0. A second stone is thrown upward with the same speed 2.140 seconds later. At what time are the two stones at the same height?
At what height do the two stones pass each other? What is the downward speed of...
Tuesday, December 17, 2013 at 11:57am
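The two-stone question above goes unanswered in this feed; under the usual constant-gravity model (g = 9.8 m/s², both stones launched from the same height) the meeting time falls out of setting the two height equations equal (a sketch):

```python
g = 9.8
v0 = 27.70   # launch speed of both stones, m/s
dt = 2.140   # delay of the second stone, s

# Heights: y1 = v0*t - g*t^2/2 and y2 = v0*(t-dt) - g*(t-dt)^2/2.
# Setting y1 = y2 and simplifying gives t = v0/g + dt/2.
t = v0 / g + dt / 2
y = v0 * t - 0.5 * g * t**2
v1 = v0 - g * t              # first stone's velocity (negative = downward)

print(round(t, 2), round(y, 2), round(-v1, 2))  # 3.9 33.54 10.49
```

So the stones meet about 3.90 s after the first throw, roughly 33.5 m up, with the first stone moving downward at about 10.5 m/s.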
ANY ANSWERS WILL HELP! THANKS! 1. Isolated points of electric charge are called ____. 4b. ____ was the American physicist who first measured this fundamental unit of charge. 5. This type of
electrification occurs when the charging rod touches an electroscope. 6. This type of ...
Tuesday, December 17, 2013 at 11:22am
9. Read these lines, from "The Passionate Shepherd to His Love": "The country swains shall dance and sing / For thy delight each May morning. / If these delights thy mind may move, / Then live with me, and be my love." What sort of life is the shepherd offering the nymph? 1....
Tuesday, December 17, 2013 at 10:51am
the third term of an arithmetic sequence is 14 and the ninth term is -1. Find the first four terms of the sequence
Tuesday, December 17, 2013 at 10:34am
A stone is thrown vertically upward at a speed of 27.70 m/s at time t=0. A second stone is thrown upward with the same speed 2.140 seconds later. At what time are the two stones at the same height?
At what height do the two stones pass each other? What is the downward speed of...
Tuesday, December 17, 2013 at 7:22am
The measure of the angles in a parallelogram total 360. Write and solve a system of equations to determine the values of x and y. The parallelogram shows x and 5.5y. The triangle shows x and y. I
have the first equation at 180= x + 5.5y. What would be another equation for the ...
Monday, December 16, 2013 at 10:26pm
You have to write a balanced equation first, do that, then look at the mole balance. You have .125*.75 moles of copperII sulfate.
Monday, December 16, 2013 at 10:18pm
yup @flO That's correct ! so did u done for this question Two thin, infinitely long, parallel wires are lying on the ground a distance d=3cm apart. They carry a current Io=200A going into the page. A
third thin, infinitely long wire with mass per unit length λ=5g/m ...
Monday, December 16, 2013 at 10:00pm
Use z-scores. First problem: z = (x - mean)/(sd/√n) Find two z-scores: z = (2.4 - 2.5)/(0.2/√50) = ? z = (2.8 - 2.5)/(0.2/√50) = ? Finish the calculations. Next, check a z-table to find the
probability between the two scores. Find mean and standard deviation...
Monday, December 16, 2013 at 6:46pm
Statistics 2
Formulas: CI90 = mean ± 1.645 (sd/√n) CI95 = mean ± 1.96 (sd/√n) For the first part, substitute the mean, standard deviation, and sample size into the appropriate formulas to determine the confidence
intervals. This will help you answer the questions ...
Monday, December 16, 2013 at 6:32pm
I agree with your first two answers, but not your last one,
Monday, December 16, 2013 at 5:39pm
Alright I changed my answers to B, C, A. I just did some quick research and went with my first instinct. So I don't know if these will be right.
Monday, December 16, 2013 at 5:19pm
Health (Ms. Sue)
1. Why should all types of sexual activity be included when talking about abstinence? A: All types of sexual activity should be included when talking about abstinence because the mistaken idea that
one can participate in other forms of sexual activity and still be considered ...
Monday, December 16, 2013 at 4:30pm
The first and third ones are correct. Rethink the second.
Monday, December 16, 2013 at 4:28pm
A charged particle with charge q is moving with speed v in a uniform magnetic field. A second identical charged particle is moving with speed 2v perpendicular to the same magnetic field. The time to complete one full circular revolution for the first particle is T1. The ...
Monday, December 16, 2013 at 3:24pm
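The cyclotron question above is cut off, but the standard result is that the period T = 2πm/(qB) does not depend on the particle's speed, so both particles have the same period (a sketch; the electron-like sample values are assumptions, not from the problem):

```python
import math

def cyclotron_period(m, q, B):
    """Period of circular motion in a uniform field: T = 2*pi*m / (q*B).
    The speed cancels out, so it does not appear here at all."""
    return 2 * math.pi * m / (q * B)

m, q, B = 9.11e-31, 1.6e-19, 0.5   # assumed sample values
T1 = cyclotron_period(m, q, B)     # particle moving at speed v
T2 = cyclotron_period(m, q, B)     # identical particle at speed 2v
print(T2 / T1)  # 1.0 -> same period for both
```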
Simplifying trig expression
sin(x+90) = ? cos(x+90) = ? Would the first equal like -sin(x)? Would the second equal uh -cos(x)? I'm not quite sure :/
Monday, December 16, 2013 at 2:36pm
hai any one got the ans of this Two thin, infinitely long, parallel wires are lying on the ground a distance d=3cm apart. They carry a current Io=200A going into the page. A third thin, infinitely
long wire with mass per unit length λ=5g/m carries current I going out of ...
Monday, December 16, 2013 at 12:27pm
First, you have to have the same power of 10, so 4.3x10^2 = .0043x10^5 Now subtract the coefficients to get (7.1-.0043)x10^5 = 7.0957x10^5
Monday, December 16, 2013 at 11:56am
well, first off, note that there's an (a+b+c) everywhere, so factor that out to get (16x+7y+8w)(a+b+c)
Monday, December 16, 2013 at 11:41am
See the first of the Related Questions below. Both sentences are correct.
Monday, December 16, 2013 at 6:33am
In 2008 (Leap year) KIT Hospital had 150 licensed beds for adults and children from January 1st through June 30th. From July 1st through December 31, licensed beds increased to 175. During the first
half of the year there were 23114 inpatient service days. During the second ...
Monday, December 16, 2013 at 5:46am
An automobile moving at a constant velocity of 15 m/s passes a gasoline station. Two seconds later, another automobile leaves the station and accelerates at a constant rate of 2 m/s^2 in the same
direction as the 1st automobile. How soon does the 2nd automobile overtake the ...
Monday, December 16, 2013 at 4:36am
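For the overtaking question just above, a hedged sketch: taking t as the time after the second car starts, the first car is at 15(t+2) m and the second at (1/2)(2)t² = t² m, so they meet where t² = 15t + 30:

```python
import math

# t^2 = 15*t + 30  ->  t^2 - 15*t - 30 = 0, take the positive root.
t = (15 + math.sqrt(15**2 + 4 * 30)) / 2
print(round(t, 2))  # 16.79 -> about 16.8 s after the second car starts
```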
Two thin, infinitely long, parallel wires are lying on the ground a distance d=3cm apart. They carry a current I_0=200A going into the page. A third thin, infinitely long wire with mass per unit
length lambda=5g/m carries current I going out of the page. What is the value of ...
Monday, December 16, 2013 at 1:21am
Yes first number is x value and second number is y value.
Sunday, December 15, 2013 at 9:53pm
ok, so the first number is the x value and the second is the y value? Thank you for explaining that.
Sunday, December 15, 2013 at 9:43pm
This is a simple stochiometric question first convert to grams using density. 2813(mL)*1.45(g/mL) = 4078.85g then convert this to moles of Nitroglycerin (requires molar mass) (4078.85g)/(227.087g/
mol)= 17.96mol of C3H5N3O9 Next, use your mol ratios to convert to moles of M2 (...
Sunday, December 15, 2013 at 8:14pm
The reason you got stuck is that you hadn't planned your essay first. Your first paragraph is the last paragraph to write. I urge you to read some of the websites I posted for you. Also, follow the
instructions I gave you first.
Sunday, December 15, 2013 at 6:23pm
Two thin, infinitely long, parallel wires are lying on the ground a distance d=3cm apart. They carry a current I_0=200A going into the page. A third thin, infinitely long wire with mass per unit
length lambda=5g/m carries current I going out of the page. What is the value of ...
Sunday, December 15, 2013 at 5:35pm
from the first equation y = 4 x - 4 use that in the second (substitution) 2 (4x - 4) = 3 x + 17 8 x - 8 = 3 x + 17 5 x = 25 x = 5 now go back for y y = 4 x - 4 y = 4(5) - 4 y = 20 - 4 y = 16
Sunday, December 15, 2013 at 5:09pm
goes up two when it goes right one goes up 6 when it goes right 3 so The slope is always 2. We can use a simple straight line y = m x + b where the slope: m = 2 Now put a point in, like the first one
3 = 2 (2) + b so b = -1 so I claim y = 2 x - 1
Sunday, December 15, 2013 at 5:04pm
A stone is thrown vertically upward at a speed of 27.70 m/s at time t=0. A second stone is thrown upward with the same speed 2.140 seconds later. At what time are the two stones at the same height?
At what height do the two stones pass each other? What is the downward speed of...
Sunday, December 15, 2013 at 4:44pm
You may have to search and research, but once you learn some good sources and methods, you should have success. In addition to searching on the Internet, you also need to make best friends with the
reference librarian(s) in your local or college library. Libraries these days ...
Sunday, December 15, 2013 at 3:07pm
Thank you so much, this really helped me. I didn't understand how you got .7 at first but now that you sort of explained it I get it now!
Sunday, December 15, 2013 at 11:33am
"In most situations, the first-born male has priority to inherit the chieftainship (primogeniture)" (Nowak & Laird, 2010, p. 177
Sunday, December 15, 2013 at 11:26am
What does the qoute mean. Can a counter example or counter claim be made? "To enjoy good wealth , to bring true happiness to ones family, to bring peace to all, one must first discipline and control
ones own mind. If a person can control his mind he can find his way to ...
Sunday, December 15, 2013 at 2:20am
First of all, I am sure you meant f(x) = (x-2)/(x-3) , those brackets are essential! or y = (x-2)/(x-3) step1 in forming the inverse: switch the x and y variables ---> x = (y-2)/(y-3) step2: solve
this new equation for y xy - 3x = y-2 xy - y = 3x - 2 y(x-1) = 3x-2 y = (3x-2...
Saturday, December 14, 2013 at 10:47pm
math - inequalities
first one: x+7 ≥ 3x -2x ≥ -7 x ≤ 7/2 x ≤ 3.5 2nd one: 3x + 4 ≤ 5x -2x ≤ -4 x≥ 2 so "common" implies belonging to first AND the second, or 2 ≤ x ≤ 3.5 which for integers would be {2,3}
Saturday, December 14, 2013 at 9:53pm
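The integer answer {2, 3} above is easy to confirm by brute force over a small range (a sketch):

```python
# Integers satisfying both x + 7 >= 3x and 3x + 4 <= 5x.
common = [x for x in range(-10, 11)
          if x + 7 >= 3 * x and 3 * x + 4 <= 5 * x]
print(common)  # [2, 3]
```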
Finite Math
I know some or even all of your answers must be wrong, since the sum of all the probabilities has to add up to 1. The sum of your first 2 results is already over 1. The first one is correct; I assume you did C(48,4)/C(52,4) = .718736.. Doing the 2nd the same way, you want to choose 1 ...
Saturday, December 14, 2013 at 9:35pm
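The value quoted above can be reproduced with `math.comb`, assuming (as the tutor does) that the first probability is "no aces among 4 cards drawn from 52" (a sketch):

```python
from math import comb

# 4 cards from the 48 non-aces, over all 4-card hands.
p_no_ace = comb(48, 4) / comb(52, 4)
print(round(p_no_ace, 4))  # 0.7187, matching the .718736.. above
```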
volume of first jar = π(1.5)^2 (4) = 9π inches^3 volume of 2nd jar = π(3^2)(3) = 27π which is 3 times the volume of the first, so it should cost 3 times as much 3(0.60) = $1.80
Saturday, December 14, 2013 at 9:19pm
first off, you move along a heading. If you see something and determine its position, then the bearing is the direction from where you are. Now on to the calculations. #1. ok #2. ok #3. I do mine by
converting each position to rectangular coordinates, then adding the vectors, ...
Saturday, December 14, 2013 at 6:04pm
Science help, Ms.Sue
1. After a rock layer is eroded, which additional event must occur to create an unconformity, or a gap in the geologic record? an earthquake or motion along a fault new deposits of sediment must
build up on the eroded area folding of sedimentary rock layers a volcanic eruption...
Saturday, December 14, 2013 at 5:41pm
Super quick english question
I agree with Lana on the first one -- it's d. The second sentence is A. The comma sets off the introductory clause.
Saturday, December 14, 2013 at 5:32pm
Quick english help
In which sentence is who used correctly? Who did you discuss with him? About who were you discussing? Who is the person that you were discussing? Who were you discussing? Would it be the first one?
Saturday, December 14, 2013 at 3:19pm
Science help,plz:)
Okay, these are my last questions of the day.... 1. After a rock layer is eroded, which additional event must occur to create an unconformity, or a gap in the geologic record? an earthquake or motion
along a fault new deposits of sediment must build up on the eroded area ...
Saturday, December 14, 2013 at 2:53pm
science i reall need help i am stuggling
How many times are you going to post this same bunch of questions? Which of the responses you got did you not understand? http://www.jiskha.com/display.cgi?id=1386758380 http://www.jiskha.com/display.cgi?id=1386782045 http://www.jiskha.com/display.cgi?id=138...
Saturday, December 14, 2013 at 10:35am
Calculus (Please check my work.)
This is my last chance to make at least a C in my AP Calculus course, so please tell me if I got these right or wrong so that I can fix them before it's due. Also, there was one I wasn't sure how to
do, so if you can help me with that, it would be much appreciated! 1)...
Saturday, December 14, 2013 at 9:25am
Both are grammatically correct, but have slightly different meanings. The first one implies that you can see more of her body, while the second one means that you can see her, that person, more.
Saturday, December 14, 2013 at 2:39am
business law
I think that usually there can be only one policy of insurance on this building. The insurance company could never issue multiple policies because of the fear of then having something like this
happening and they could be obligated to pay out money to multiple claimants for ...
Friday, December 13, 2013 at 10:25pm
Well, you have to give me all the first parts before I can do the last. However: Vi = initial speed when the push stops; mg = weight down; -.6mg = friction force, the only horizontal force left; ma = friction force = -.6mg, so a = -.6g (deceleration). So when does it stop? v = Vi - .6...
Friday, December 13, 2013 at 7:19pm
college Algebra
none of those lines is parallel, so I should be able to find a solution. first work on the first two by elimination x + y + z = 7 2x + y + z = 14 ---------------- subtract - x + 0 + 0 = -7 so x = 7
well that gives us a running start 7 + y + z = 7 14 + y + z = 14 both yield y...
Friday, December 13, 2013 at 7:06pm
Saw this when you posted before, looks incomplete My first reaction is to say, "an infinite number of them" since you state no conditions for your square. e.g. Do the vertices of the square have to
land on the sides of the rectangle? IF so, then there would be 3, ...
Friday, December 13, 2013 at 6:04pm
The United States Postal Service charges a "nonmachinable surcharge" for first-class mail if the length of the envelope (parallel to the address) divided by the height of the envelope is less than 1.3 or more than 2.5. Charlene has an envelope with a height of 3.5 in...
Friday, December 13, 2013 at 5:04pm
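The envelope question is truncated above, but with height 3.5 in the no-surcharge condition 1.3 ≤ L/3.5 ≤ 2.5 pins down the allowed lengths (a sketch):

```python
h = 3.5                      # envelope height, inches
low, high = 1.3 * h, 2.5 * h # bounds on the length L
print(round(low, 2), round(high, 2))  # 4.55 8.75
```

So lengths between 4.55 in and 8.75 in avoid the surcharge.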
Earth's atmosphere, oceans, and continents began to form during the first several hundred million years of a) the Precambrian time b) the Paleozoic era c) the Mesozoic era d) the Cenozoic era
Friday, December 13, 2013 at 5:01pm
the parts of speech
First person = I or we Second person = you Third person = he, she, it, or any singular noun Try again.
Friday, December 13, 2013 at 4:13pm
13. Since there are only 12 months, the first 12 people might all have different months. The 13th person must have the same month as one of the other 12.
Friday, December 13, 2013 at 12:16am
The two sentences mean about the same thing. The first one reads better because of the verb tenses. The second isn't incorrect, just very awkward.
Thursday, December 12, 2013 at 10:15pm
Quick math help
use the first 2 points to find the slope How is the slope of your new line related to this slope ? follow the method I just used in the previous reply to your other question. Let me know what you
Thursday, December 12, 2013 at 9:10pm
3x-2y=8 2x=2y=22 =========== I assume you mean 3x-2y=8 2x-2y=22 ????? if so then get y = some function of x from the second equation 2 x - 2 y = 22 is x - y = 11 or y = (x - 11) substitute that in
the first equation 3 x - 2 (x - 11) = 8 3 x - 2 x + 22 = 8 x = - 14 then go back...
Thursday, December 12, 2013 at 7:41pm
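Substituting the solution back into both equations confirms the work above (a sketch):

```python
x = -14
y = x - 11                    # second equation rearranged: y = x - 11
assert 3 * x - 2 * y == 8     # first equation:  3x - 2y = 8
assert 2 * x - 2 * y == 22    # second equation: 2x - 2y = 22
print(x, y)  # -14 -25
```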
First you add the equations. The -5y and 5y cancel out, so you are left with 3x = -40, x = -40/3. Plug x back into one of the equations: -3x - 5y = -6 → -3(-40/3) - 5y = -6 → 40 - 5y = -6 → -5y = -46 → y = 46/5
Thursday, December 12, 2013 at 7:13pm
the -8 should always be first here.
Thursday, December 12, 2013 at 6:25pm
Physics Classical Mechanics
@jessica b) Find Mu_s first Mu_s= (m_p/3 + m_l/2) *cotan (theta)/(m_p+m_l) a) Force = Mu_s*g(m_p+m_l)
Thursday, December 12, 2013 at 4:53pm
a) Use ratio and proportion... 500 tagged birds/n total birds = 20/215 by cross products: 20n = 500 * 215 20n = 107,500 n = 107,500 / 20 = 5,375 b) ? c) In early spring the recently hatched baby
birds are probably not available for capture, as they are still in the nest. ...
Thursday, December 12, 2013 at 2:35pm
science i really really need help
Hypothesis: There will be a large difference between the number of maggots in a dark environment and the number in a brightly illuminated one. I want to rule out other possible reasons for the
maggots clustering in one end. First put the light on in the left end. ...
Thursday, December 12, 2013 at 2:33pm
the first is correct. On the second, I have no idea what the diagram is, however, I suspect your "ancestor" is correct.
Thursday, December 12, 2013 at 11:07am
Social studies
I agree with the first three. I'm not sure about the last one as to whether it is C or D.
Thursday, December 12, 2013 at 11:05am
Social studies
what is your thinking? I will give you one...the first...nation states
Thursday, December 12, 2013 at 11:04am
MATH Albera
Sales at a certain department store follow the model where y is the total sales in thousands of dollars and x is the number of years after 2001. What was the first year that sales fell below $50,000?
Thursday, December 12, 2013 at 10:35am
First, Ampere's law: 2π * r * B = mu * current; solve that for B. Now Faraday's law: change in area/time = speed * 0.1 m, so EMF = B * speed * 0.1. Check my thinking. Notice in the description, r = a as I understand the "written" diagram.
Thursday, December 12, 2013 at 10:30am
Two thin, infinitely long, parallel wires are lying on the ground a distance d=3cm apart. They carry a current Io=200A going into the page. A third thin, infinitely long wire with mass per unit
length λ=5g/m carries current I going out of the page. What is the value of ...
Thursday, December 12, 2013 at 8:28am
Two thin, infinitely long, parallel wires are lying on the ground a distance d=3cm apart. They carry a current Io=200A going into the page. A third thin, infinitely long wire with mass per unit
length λ=5g/m carries current I going out of the page. What is the value of ...
Thursday, December 12, 2013 at 8:27am
3 workers can do a job in 12 days. 2 of the workers work twice as fast as the third. How long would it take one of the faster workers to do the job alone?
Thursday, December 12, 2013 at 5:18am
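A sketch for the three-workers question using rates, assuming "twice as first" means "twice as fast": let the slow worker's rate be r jobs/day, so r + 2r + 2r = 5r = 1/12 of the job per day.

```python
from fractions import Fraction

slow = Fraction(1, 12) / 5   # 1/60 job per day for the slow worker
fast = 2 * slow              # 1/30 job per day for each fast worker
print(1 / fast)              # 30 -> a faster worker alone needs 30 days
```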
From first principles, find the derivative of (3x-1/2x)
Thursday, December 12, 2013 at 5:12am
Problem 1- In one day, a 85 kg mountain climber ascends from the 1440 m level on a vertical cliff to the top at 2360 m . The next day, she descends from the top to the base of the cliff, which is at
an elevation of 1310 m . a) What is her gravitational potential energy on the ...
Thursday, December 12, 2013 at 4:57am
Barb is making a big necklace. She strings one white bead, then 3 blue beads and one white bead, and so on. Write the numbers of the first 8 beads that are white. What is the rule for the pattern?
Wednesday, December 11, 2013 at 9:39pm
6th grade math
1, 2, and 3 are ok. #4 has the right answer, but I don't like the way you stated your solution. First of all, don't put equal signs in front of each new line when solving. Your 2nd-last line has no relation to the problem; what happened to the 5? Here is how I would ...
Wednesday, December 11, 2013 at 9:24pm
25th term = 7 1/2 a + 24 d = 15/2 sum of first 23 terms = 98 1/2 (23/2)(2a + 22d) = 197/2 23(2a+22d) = 197 2a+22d = 197/23 a+11d = 197/46 subtract the two equations 13d = 15/2 - 197/46 = 74/23 d = 74
/299 sub back into first equation a + 24(74/299) = 15/2 a = 933/598 so sum of ...
Wednesday, December 11, 2013 at 9:03pm
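Exact fractions confirm the a and d found above, and finish the truncated last step (a sketch):

```python
from fractions import Fraction

a = Fraction(933, 598)   # first term found above
d = Fraction(74, 299)    # common difference found above

assert a + 24 * d == Fraction(15, 2)                           # 25th term = 7 1/2
assert Fraction(23, 2) * (2 * a + 22 * d) == Fraction(197, 2)  # S_23 = 98 1/2

s30 = Fraction(30, 2) * (2 * a + 29 * d)  # sum of the first 30 terms
print(s30)  # 46185/299, about 154.46
```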
KUAI or MS SUE HELP!!
I'll do the first one for you. 1) 5 - 1 1/4 = 3 3/4 If you post your answers for the other two, I'll check them for you.
Wednesday, December 11, 2013 at 8:24pm
Health (Ms. Sue)
1. Why do you think some teens believe that they can't get pregnant the first time they are sexually active? A: I think some teenagers believe that they can't become pregnant the first time they are sexually active because they generally believe that nothing terrible ...
Wednesday, December 11, 2013 at 7:35pm
The 25 term of an Arithmetic Progression is 7 1/2 and the sum of the first 23 terms is 98 1/2. Find the i. 1st term ii. common difference iii. sum of the first 30 terms
Wednesday, December 11, 2013 at 7:25pm
Quick math help
In which step below does a mistake first appear in simplifying the expression? 0.5(-12c+6)-3(c+4)+10(c-5) Step 1: -6c+3-3(c+4)+10(c-5) Step 2: -6c+3-3c-12+10(c-5) Step 3: -6c+3-3c-12+10c-50 Step 4:
7c-41 Please help me!!! Thank you!!
Wednesday, December 11, 2013 at 6:46pm
For the first one: if you have to, just list em all out Mugs: 6,12,18,24,30,36,42,48,54,60.... Shirts: 14,28,42,56,70.... See any numbers in common so far? Yep: the 42. So the 42nd guest will get
both Second one: Can't figure it out, cuz not enough information, did you ...
Wednesday, December 11, 2013 at 5:19pm
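Listing the multiples works; the LCM shortcut gives the same answer directly (a sketch):

```python
from math import gcd

# lcm(6, 14): first guest number divisible by both 6 and 14.
lcm = 6 * 14 // gcd(6, 14)
print(lcm)  # 42 -> guest 42 is the first to get both a mug and a T-shirt
```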
ax^2 + bx + c = 1 plug in first value 4a -2b + c = 1 do the same for the others: 0 + 0 + c = 5 c = 5 16a + 4b + c = -59 so we plug in c into the equations with and b: 16a + 4b + 5 = -59 4a -2b + 5 =
1 4a = 2b -4 2a = b - 2 16a + 4b = -64 8(b-2) + 4b = -64 2(b-2) + b = -16 2b...
Wednesday, December 11, 2013 at 5:16pm
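Plugging the resulting parabola back through the three points checks the algebra above (a sketch; the points being fit, (-2, 1), (0, 5), and (4, -59), are inferred from the substituted equations):

```python
a, b, c = -3, -4, 5   # coefficients the elimination above leads to

f = lambda x: a * x**2 + b * x + c
assert f(-2) == 1     # matches 4a - 2b + c = 1
assert f(0) == 5      # matches c = 5
assert f(4) == -59    # matches 16a + 4b + c = -59
print(a, b, c)  # -3 -4 5
```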
Ellis plans to hand out door prizes as the guests arrive. She has 256 guests; every 6th guest will receive a mug and every 14th guest will receive a T-shirt. Which guest will be the first to receive both a mug and a T-shirt? Next question: Elli has 8 and 1/2 pound bags of mints...
Wednesday, December 11, 2013 at 5:14pm
Health (Ms. Sue)
1. Explain why it is important to abstain from all types of sexual activity. A: It is important to abstain from all types of sexual activity because, by avoiding all sexual activities, an adolescent
does not risk becoming pregnant or being infected with a sexually transmitted ...
Wednesday, December 11, 2013 at 4:43pm
Check my Science questions?
1. Why is it rare for the soft parts of an organism to become a fossil? The soft parts take a long time to decay. The soft parts can be eaten by other animals. The soft parts cannot be buried in
sediment. The soft parts form petrified fossils. 2. Which of the following terms ...
Wednesday, December 11, 2013 at 1:36pm
Sorry, as a teacher I'm opposed to giving you answers to homework questions because that's called helping you cheat. I will however help you figure out the answer. Question 1 is a logic question.
don't read to much into it. if you still have problems figuring out ...
Wednesday, December 11, 2013 at 1:23pm
Early Childhood Education
A disadvantage of large group times is: A. language is enjoyed as a group. B. children develop an understanding of being a group member. C. the difficulty of having intimate conversations. D. they
are often seen as sharing times. In a study about kindergarten readiness, which ...
Wednesday, December 11, 2013 at 12:42pm
Early Childhood Education
I agree with your first answer but not the other two.
Wednesday, December 11, 2013 at 12:36pm
Urgent Math~!
#1. I'd say elimination, since you already have 7x and -7x, so if you add the equations, the x's will be easily eliminated. #2. Did you actually test your answer to see whether it satisfied the
equations? From #1, if you add the equations, you get 3y=3, or y=1. In fact...
Wednesday, December 11, 2013 at 12:26pm
math(check answers please)
the answer for the first one is C. the answer for the second one is C to. are those correct?
Wednesday, December 11, 2013 at 11:29am
math(check answers please)
The first two are wrong. The third is correct.
Wednesday, December 11, 2013 at 11:20am
x + 2x + 3x = 54 6x = 54 x = 9 The first piece is 9 cm long.
Wednesday, December 11, 2013 at 11:13am
a wire 54cm long is cut into three pieces. the second piece is twice the length of the first, and the third piece is three times the length of the first. Find the length of each piece?
Wednesday, December 11, 2013 at 10:59am
Whenever you are writing a comparison/contrast paper (paragraph, essay, research paper), you need to plan it out very carefully on paper first. Try this: 1. List all the information about one of your
topics on one page. 2. List all the information about the other topic on ...
Wednesday, December 11, 2013 at 10:17am
American history
My guesses 1. Which of the following is one of the reasons that Native American slavery declined quickly? (4 points) The Spanish and Native American populations mixed. Native Americans did not know
how to farm European crops. The Spanish converted many Native Americans to ...
Wednesday, December 11, 2013 at 5:40am
Calculating acceleration science
i dont know but can u help with this plzz Multiple Choice 1. Which of the following is one of the reasons that Native American slavery declined quickly? (4 points) The Spanish and Native American
populations mixed. Native Americans did not know how to farm European crops. The ...
Wednesday, December 11, 2013 at 2:03am
please help me!! with this science!!!!!
Multiple Choice: 1. If a kestrel eats a mouse that eats grass, the kestrel is a (1 point) producer. second-level consumer. first-level consumer. decomposer. 2. Which of the following best explains
why a species of lizard is able to live in the desert but not in tundra regions...
Wednesday, December 11, 2013 at 1:17am
SORRY BUT I DONT KNOW. CAN U HELP ME WITH THIS Multiple Choice: 1. If a kestrel eats a mouse that eats grass, the kestrel is a (1 point) producer. second-level consumer. first-level consumer.
decomposer. 2. Which of the following best explains why a species of lizard is able ...
Wednesday, December 11, 2013 at 1:12am
[Numpy-svn] r8128 - in trunk: doc/source/reference doc/source/user numpy numpy/core/code_generators numpy/doc
numpy-svn@scip... numpy-svn@scip...
Wed Feb 17 17:55:17 CST 2010
Author: jarrod.millman
Date: 2010-02-17 17:55:16 -0600 (Wed, 17 Feb 2010)
New Revision: 8128
updated documentation from pydoc website (thanks to everyone who contributed!)
Modified: trunk/doc/source/reference/arrays.classes.rst
--- trunk/doc/source/reference/arrays.classes.rst 2010-02-17 23:53:04 UTC (rev 8127)
+++ trunk/doc/source/reference/arrays.classes.rst 2010-02-17 23:55:16 UTC (rev 8128)
@@ -13,8 +13,8 @@
several tools for simplifying how your new object interacts with other
array objects, and so the choice may not be significant in the
end. One way to simplify the question is by asking yourself if the
-object you are interested in can be replaced as a single array or does it
-really require two or more arrays at its core.
+object you are interested in can be replaced as a single array or does
+it really require two or more arrays at its core.
Note that :func:`asarray` always returns the base-class ndarray. If
you are confident that your use of the array object can handle any
@@ -42,10 +42,10 @@
This method is called whenever the system internally allocates a
new array from *obj*, where *obj* is a subclass (subtype) of the
- :class:`ndarray`. It can be used to change attributes of *self* after
- construction (so as to ensure a 2-d matrix for example), or to
- update meta-information from the "parent." Subclasses inherit a
- default implementation of this method that does nothing.
+ :class:`ndarray`. It can be used to change attributes of *self*
+ after construction (so as to ensure a 2-d matrix for example), or
+ to update meta-information from the "parent." Subclasses inherit
+ a default implementation of this method that does nothing.
.. function:: __array_prepare__(array, context=None)
@@ -66,10 +66,10 @@
the output object if one was specified. The ufunc-computed array
is passed in and whatever is returned is passed to the user.
Subclasses inherit a default implementation of this method, which
- transforms the array into a new instance of the object's class. Subclasses
- may opt to use this method to transform the output array into an
- instance of the subclass and update metadata before returning the
- array to the user.
+ transforms the array into a new instance of the object's class.
+ Subclasses may opt to use this method to transform the output array
+ into an instance of the subclass and update metadata before
+ returning the array to the user.
.. data:: __array_priority__
@@ -96,21 +96,21 @@
unexpected results when you use matrices but expect them to act like
-1. Matrix objects can be created using a string notation to allow Matlab-
- style syntax where spaces separate columns and semicolons (';')
- separate rows.
+1. Matrix objects can be created using a string notation to allow
+ Matlab-style syntax where spaces separate columns and semicolons
+ (';') separate rows.
2. Matrix objects are always two-dimensional. This has far-reaching
- implications, in that m.ravel() is still two-dimensional (with a 1 in
- the first dimension) and item selection returns two-dimensional
+ implications, in that m.ravel() is still two-dimensional (with a 1
+ in the first dimension) and item selection returns two-dimensional
objects so that sequence behavior is fundamentally different than
3. Matrix objects over-ride multiplication to be
matrix-multiplication. **Make sure you understand this for
functions that you may want to receive matrices. Especially in
- light of the fact that asanyarray(m) returns a matrix when m is a
- matrix.**
+ light of the fact that asanyarray(m) returns a matrix when m is
+ a matrix.**
4. Matrix objects over-ride power to be matrix raised to a power. The
same warning about using power inside a function that uses
@@ -119,8 +119,8 @@
5. The default __array_priority\__ of matrix objects is 10.0, and
therefore mixed operations with ndarrays always produce matrices.
-6. Matrices have special attributes which make calculations easier. These
- are
+6. Matrices have special attributes which make calculations easier.
+ These are
.. autosummary::
:toctree: generated/
@@ -132,11 +132,12 @@
.. warning::
- Matrix objects over-ride multiplication, '*', and power, '**', to be
- matrix-multiplication and matrix power, respectively. If your
- subroutine can accept sub-classes and you do not convert to base-class
- arrays, then you must use the ufuncs multiply and power to be sure
- that you are performing the correct operation for all inputs.
+ Matrix objects over-ride multiplication, '*', and power, '**', to
+ be matrix-multiplication and matrix power, respectively. If your
+ subroutine can accept sub-classes and you do not convert to base-
+ class arrays, then you must use the ufuncs multiply and power to
+ be sure that you are performing the correct operation for all
+ inputs.
The matrix class is a Python subclass of the ndarray and can be used
as a reference for how to construct your own subclass of the ndarray.
@@ -194,10 +195,10 @@
.. note::
- Memory-mapped arrays use the the Python memory-map object which (prior
- to Python 2.5) does not allow files to be larger than a certain size
- depending on the platform. This size is always < 2GB even on 64-bit
- systems.
+ Memory-mapped arrays use the Python memory-map object which
+ (prior to Python 2.5) does not allow files to be larger than a
+ certain size depending on the platform. This size is always
+ < 2GB even on 64-bit systems.
.. autosummary::
:toctree: generated/
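As an illustration of the memory-mapped arrays described above, the following sketch writes and re-reads a small `np.memmap` (the file name is arbitrary scratch space, not part of the original text):

```python
import os
import tempfile

import numpy as np

# a small array memory-mapped to a scratch file
path = os.path.join(tempfile.mkdtemp(), 'example.dat')
mm = np.memmap(path, dtype=np.float64, mode='w+', shape=(3, 4))
mm[:] = 1.0
mm.flush()                    # push changes out to the file

# reopening read-only sees the persisted data
ro = np.memmap(path, dtype=np.float64, mode='r', shape=(3, 4))
print(ro.sum())               # 12.0
```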
@@ -228,10 +229,11 @@
single: character arrays
.. note::
- The chararray module exists for backwards compatibility with
- Numarray, it is not recommended for new development. If one needs
- arrays of strings, use arrays of `dtype` `object_`, `str` or
- `unicode`.
+ The `chararray` class exists for backwards compatibility with
+ Numarray; it is not recommended for new development. Starting from
+ numpy 1.4, if one needs arrays of strings, it is recommended to use
+ arrays of `dtype` `object_`, `string_` or `unicode_`, and to use the
+ free functions in the `numpy.char` module for fast vectorized string
+ operations.
These are enhanced arrays of either :class:`string_` type or
:class:`unicode_` type. These arrays inherit from the
@@ -240,8 +242,8 @@
operations are not available on the standard :class:`ndarray` of
character type. In addition, the :class:`chararray` has all of the
standard :class:`string <str>` (and :class:`unicode`) methods,
-executing them on an element-by-element basis. Perhaps the easiest way
-to create a chararray is to use :meth:`self.view(chararray)
+executing them on an element-by-element basis. Perhaps the easiest
+way to create a chararray is to use :meth:`self.view(chararray)
<ndarray.view>` where *self* is an ndarray of str or unicode
data-type. However, a chararray can also be created using the
:meth:`numpy.chararray` constructor, or via the
@@ -255,8 +257,8 @@
Another difference with the standard ndarray of str data-type is
that the chararray inherits the feature introduced by Numarray that
-white-space at the end of any element in the array will be ignored on
-item retrieval and comparison operations.
+white-space at the end of any element in the array will be ignored
+on item retrieval and comparison operations.
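The whitespace behavior described above, and the recommended `numpy.char` free functions, can be illustrated with a minimal sketch:

```python
import numpy as np

# np.char.array creates a chararray; trailing whitespace is ignored
# on item retrieval and in comparisons (the Numarray-inherited feature)
c = np.char.array(['abc  ', 'de'])
print(c[0])              # 'abc'
print(c == 'abc')        # [ True False]

# the recommended alternative: plain arrays plus numpy.char functions
a = np.array(['alpha', 'beta'])
print(np.char.upper(a))  # ['ALPHA' 'BETA']
```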
.. _arrays.classes.rec:
@@ -341,7 +343,8 @@
for i in range(arr.shape[0]):
val = arr[i]
-This default iterator selects a sub-array of dimension :math:`N-1` from the array. This can be a useful construct for defining recursive
+This default iterator selects a sub-array of dimension :math:`N-1`
+from the array. This can be a useful construct for defining recursive
algorithms. To loop over the entire array requires :math:`N` for-loops.
>>> a = arange(24).reshape(3,2,4)+10
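The default iteration described above can be demonstrated directly (a small sketch using the same array as the doctest):

```python
import numpy as np

a = np.arange(24).reshape(3, 2, 4) + 10
for sub in a:             # iterates over the first axis
    print(sub.shape)      # (2, 4) each time: an (N-1)-dimensional sub-array
```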
Modified: trunk/doc/source/reference/arrays.dtypes.rst
--- trunk/doc/source/reference/arrays.dtypes.rst 2010-02-17 23:53:04 UTC (rev 8127)
+++ trunk/doc/source/reference/arrays.dtypes.rst 2010-02-17 23:55:16 UTC (rev 8128)
@@ -148,8 +148,8 @@
.. admonition:: Example
- >>> dt = np.dtype(np.int32) # 32-bit integer
- >>> dt = np.dtype(np.complex128) # 128-bit complex floating-point number
+ >>> dt = np.dtype(np.int32) # 32-bit integer
+ >>> dt = np.dtype(np.complex128) # 128-bit complex floating-point number
Generic types
@@ -305,9 +305,9 @@
.. admonition:: Example
- >>> dt = np.dtype((np.int32, (2,2))) # 2 x 2 integer sub-array
- >>> dt = np.dtype(('S10', 1)) # 10-character string
- >>> dt = np.dtype(('i4, (2,3)f8, f4', (2,3))) # 2 x 3 record sub-array
+ >>> dt = np.dtype((np.int32, (2,2))) # 2 x 2 integer sub-array
+ >>> dt = np.dtype(('S10', 1)) # 10-character string
+ >>> dt = np.dtype(('i4, (2,3)f8, f4', (2,3))) # 2 x 3 record sub-array
``(base_dtype, new_dtype)``
@@ -321,7 +321,7 @@
32-bit integer, whose first two bytes are interpreted as an integer
via field ``real``, and the following two bytes via field ``imag``.
- >>> dt = np.dtype((np.int32, {'real': (np.int16, 0), 'imag': (np.int16, 2)})
+ >>> dt = np.dtype((np.int32,{'real':(np.int16, 0),'imag':(np.int16, 2)}))
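A brief, illustrative use of the ``(base_dtype, new_dtype)`` form above (the field values written here are chosen arbitrarily):

```python
import numpy as np

# a 32-bit integer whose first two bytes are viewable as field 'real'
# and the following two bytes as field 'imag'
dt = np.dtype((np.int32, {'real': (np.int16, 0), 'imag': (np.int16, 2)}))
x = np.zeros(2, dtype=dt)
x['real'] = 1             # writes the first two bytes of each item
x['imag'] = 2             # writes the following two bytes
print(x['real'], x['imag'])
```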
32-bit integer, which is interpreted as consisting of a sub-array
of shape ``(4,)`` containing 8-bit integers:
@@ -333,8 +333,6 @@
>>> dt = np.dtype(('i4', [('r','u1'),('g','u1'),('b','u1'),('a','u1')]))
-.. note:: XXX: does the second-to-last example above make sense?
.. index::
triple: dtype; construction; from list
@@ -428,7 +426,8 @@
byte position 0), ``col2`` (32-bit float at byte position 10),
and ``col3`` (integers at byte position 14):
- >>> dt = np.dtype({'col1': ('S10', 0), 'col2': (float32, 10), 'col3': (int, 14)})
+ >>> dt = np.dtype({'col1': ('S10', 0), 'col2': (float32, 10),
+ ...                'col3': (int, 14)})
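The offset-dict form above can be checked by inspecting the resulting dtype (a minimal sketch; each field maps to a ``(dtype, byte offset)`` pair):

```python
import numpy as np

dt = np.dtype({'col1': ('S10', 0), 'col2': (np.float32, 10),
               'col3': (int, 14)})
# each entry in dt.fields is (field dtype, byte offset within a record)
print(dt.fields['col2'])       # (dtype('float32'), 10)
print(dt.names)                # ('col1', 'col2', 'col3')
```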
Modified: trunk/doc/source/reference/arrays.indexing.rst
--- trunk/doc/source/reference/arrays.indexing.rst 2010-02-17 23:53:04 UTC (rev 8127)
+++ trunk/doc/source/reference/arrays.indexing.rst 2010-02-17 23:55:16 UTC (rev 8128)
@@ -318,13 +318,6 @@
Also recognize that ``x[[1,2,3]]`` will trigger advanced indexing,
whereas ``x[[1,2,slice(None)]]`` will trigger basic slicing.
-.. note::
- XXX: this section may need some tuning...
- Also the above warning needs explanation as the last part is at odds
- with the definition of basic indexing.
.. _arrays.indexing.rec:
Record Access
Modified: trunk/doc/source/reference/arrays.ndarray.rst
--- trunk/doc/source/reference/arrays.ndarray.rst 2010-02-17 23:53:04 UTC (rev 8127)
+++ trunk/doc/source/reference/arrays.ndarray.rst 2010-02-17 23:55:16 UTC (rev 8128)
@@ -9,9 +9,9 @@
An :class:`ndarray` is a (usually fixed-size) multidimensional
container of items of the same type and size. The number of dimensions
and items in an array is defined by its :attr:`shape <ndarray.shape>`,
-which is a :class:`tuple` of *N* positive integers that specify the sizes of
-each dimension. The type of items in the array is specified by a
-separate :ref:`data-type object (dtype) <arrays.dtypes>`, one of which
+which is a :class:`tuple` of *N* positive integers that specify the
+sizes of each dimension. The type of items in the array is specified by
+a separate :ref:`data-type object (dtype) <arrays.dtypes>`, one of which
is associated with each ndarray.
As with other container objects in Python, the contents of an
@@ -32,7 +32,8 @@
.. admonition:: Example
- A 2-dimensional array of size 2 x 3, composed of 4-byte integer elements:
+ A 2-dimensional array of size 2 x 3, composed of 4-byte integer
+ elements:
>>> x = np.array([[1, 2, 3], [4, 5, 6]], np.int32)
>>> type(x)
@@ -44,10 +45,11 @@
The array can be indexed using Python container-like syntax:
- >>> x[1,2] # i.e., the element of x in the *second* row, *third* column
- 6
+ >>> x[1,2]  # the element of x in the *second* row, *third* column
+ 6
- For example :ref:`slicing <arrays.indexing>` can produce views of the array:
+ For example :ref:`slicing <arrays.indexing>` can produce views of
+ the array:
>>> y = x[:,1]
>>> y
@@ -96,14 +98,15 @@
the bytes are interpreted is defined by the :ref:`data-type object
<arrays.dtypes>` associated with the array.
-.. index:: C-order, Fortran-order, row-major, column-major, stride, offset
+.. index:: C-order, Fortran-order, row-major, column-major, stride,
+   offset
A segment of memory is inherently 1-dimensional, and there are many
-different schemes for arranging the items of an *N*-dimensional array in
-a 1-dimensional block. Numpy is flexible, and :class:`ndarray` objects
-can accommodate any *strided indexing scheme*. In a strided scheme,
-the N-dimensional index :math:`(n_0, n_1, ..., n_{N-1})` corresponds
-to the offset (in bytes)
+different schemes for arranging the items of an *N*-dimensional array
+in a 1-dimensional block. Numpy is flexible, and :class:`ndarray`
+objects can accommodate any *strided indexing scheme*. In a strided
+scheme, the N-dimensional index :math:`(n_0, n_1, ..., n_{N-1})`
+corresponds to the offset (in bytes):
.. math:: n_{\mathrm{offset}} = \sum_{k=0}^{N-1} s_k n_k
@@ -116,7 +119,8 @@
.. math::
- s_k^{\mathrm{column}} = \prod_{j=0}^{k-1} d_j , \quad s_k^{\mathrm{row}} = \prod_{j=k+1}^{N-1} d_j .
+ s_k^{\mathrm{column}} = \prod_{j=0}^{k-1} d_j ,
+ \quad s_k^{\mathrm{row}} = \prod_{j=k+1}^{N-1} d_j .
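The offset formula above can be verified numerically for a C-ordered array (a small sketch; the concrete stride values assume a 4-byte integer dtype):

```python
import numpy as np

x = np.arange(24, dtype=np.int32).reshape(2, 3, 4)   # C (row-major) order
print(x.strides)                 # (48, 16, 4) bytes for this layout

# byte offset of element (1, 2, 3) from the start of the buffer,
# computed from the strides as in the formula above
index = (1, 2, 3)
offset = sum(s * n for s, n in zip(x.strides, index))
print(offset)                    # 92 == 48*1 + 16*2 + 4*3
print(x[index] == x.ravel()[offset // x.itemsize])   # True
```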
.. index:: single-segment, contiguous, non-contiguous
@@ -172,8 +176,6 @@
-.. note:: XXX: update and check these docstrings.
Data type
@@ -187,8 +189,6 @@
-.. note:: XXX: update the dtype attribute docstring: setting etc.
Other attributes
@@ -223,9 +223,6 @@
-.. note:: XXX: update and check these docstrings.
.. _array.ndarray.methods:
Array methods
@@ -241,11 +238,12 @@
:func:`argmin`, :func:`argsort`, :func:`choose`, :func:`clip`,
:func:`compress`, :func:`copy`, :func:`cumprod`, :func:`cumsum`,
:func:`diagonal`, :func:`imag`, :func:`max <amax>`, :func:`mean`,
-:func:`min <amin>`, :func:`nonzero`, :func:`prod`, :func:`ptp`, :func:`put`,
-:func:`ravel`, :func:`real`, :func:`repeat`, :func:`reshape`,
-:func:`round <around>`, :func:`searchsorted`, :func:`sort`, :func:`squeeze`,
-:func:`std`, :func:`sum`, :func:`swapaxes`, :func:`take`,
-:func:`trace`, :func:`transpose`, :func:`var`.
+:func:`min <amin>`, :func:`nonzero`, :func:`prod`, :func:`ptp`,
+:func:`put`, :func:`ravel`, :func:`real`, :func:`repeat`,
+:func:`reshape`, :func:`round <around>`, :func:`searchsorted`,
+:func:`sort`, :func:`squeeze`, :func:`std`, :func:`sum`,
+:func:`swapaxes`, :func:`take`, :func:`trace`, :func:`transpose`,
+:func:`var`.
Array conversion
@@ -268,8 +266,6 @@
-.. note:: XXX: update and check these docstrings.
Shape manipulation
@@ -323,8 +319,8 @@
float32, float64, etc., whereas a 0-dimensional array is an ndarray
instance containing precisely one array scalar.)
-- If *axis* is an integer, then the operation is done over the given axis
- (for each 1-D subarray that can be created along the given axis).
+- If *axis* is an integer, then the operation is done over the given
+ axis (for each 1-D subarray that can be created along the given axis).
.. admonition:: Example of the *axis* argument
@@ -393,9 +389,6 @@
Arithmetic and comparison operations
-.. note:: XXX: write all attributes explicitly here instead of relying on
- the auto\* stuff?
.. index:: comparison, arithmetic, operation, operator
Arithmetic and comparison operations on :class:`ndarrays <ndarray>`
@@ -435,9 +428,9 @@
:meth:`ndarray.__nonzero__`, which raises an error if the number of
elements in the array is larger than 1, because the truth value
of such arrays is ambiguous. Use :meth:`.any() <ndarray.any>` and
- :meth:`.all() <ndarray.all>` instead to be clear about what is meant in
- such cases. (If the number of elements is 0, the array evaluates to
- ``False``.)
+ :meth:`.all() <ndarray.all>` instead to be clear about what is meant
+ in such cases. (If the number of elements is 0, the array evaluates
+ to ``False``.)
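A short sketch of the note above:

```python
import numpy as np

a = np.array([1, 0, 1])
try:
    bool(a)                      # more than one element: ambiguous
except ValueError as e:
    print('ambiguous:', e)

print(a.any())                   # True  -- at least one element is nonzero
print(a.all())                   # False -- not every element is nonzero
```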
Unary operations:
Modified: trunk/doc/source/reference/c-api.coremath.rst
--- trunk/doc/source/reference/c-api.coremath.rst 2010-02-17 23:53:04 UTC (rev 8127)
+++ trunk/doc/source/reference/c-api.coremath.rst 2010-02-17 23:55:16 UTC (rev 8128)
@@ -125,7 +125,8 @@
.. cvar:: NPY_EULER
- The Euler constant (:math:`\lim_{n\rightarrow \infty}{\sum_{k=1}^n{\frac{1}{k}} - \ln n}`)
+ The Euler constant
+ :math:`\lim_{n\rightarrow\infty}({\sum_{k=1}^n{\frac{1}{k}}-\ln n})`
Low-level floating point manipulation
Modified: trunk/doc/source/reference/c-api.types-and-structures.rst
--- trunk/doc/source/reference/c-api.types-and-structures.rst 2010-02-17 23:53:04 UTC (rev 8127)
+++ trunk/doc/source/reference/c-api.types-and-structures.rst 2010-02-17 23:55:16 UTC (rev 8128)
@@ -416,7 +416,8 @@
functions can (and must) deal with mis-behaved arrays. The other
functions require behaved memory segments.
- .. cmember:: void cast(void *from, void *to, npy_intp n, void *fromarr, void *toarr)
+ .. cmember:: void cast(void *from, void *to, npy_intp n, void *fromarr,
+ void *toarr)
An array of function pointers to cast from the current type to
all of the other builtin types. Each function casts a
@@ -442,7 +443,8 @@
a zero is returned, otherwise, a negative one is returned (and
a Python error set).
- .. cmember:: void copyswapn(void *dest, npy_intp dstride, void *src, npy_intp sstride, npy_intp n, int swap, void *arr)
+ .. cmember:: void copyswapn(void *dest, npy_intp dstride, void *src,
+ npy_intp sstride, npy_intp n, int swap, void *arr)
.. cmember:: void copyswap(void *dest, void *src, int swap, void *arr)
@@ -468,7 +470,8 @@
``d1`` < * ``d2``. The array object arr is used to retrieve
itemsize and field information for flexible arrays.
- .. cmember:: int argmax(void* data, npy_intp n, npy_intp* max_ind, void* arr)
+ .. cmember:: int argmax(void* data, npy_intp n, npy_intp* max_ind,
+ void* arr)
A pointer to a function that retrieves the index of the
largest of ``n`` elements in ``arr`` beginning at the element
@@ -477,7 +480,8 @@
always 0. The index of the largest element is returned in
- .. cmember:: void dotfunc(void* ip1, npy_intp is1, void* ip2, npy_intp is2, void* op, npy_intp n, void* arr)
+ .. cmember:: void dotfunc(void* ip1, npy_intp is1, void* ip2, npy_intp is2,
+ void* op, npy_intp n, void* arr)
A pointer to a function that multiplies two ``n`` -length
sequences together, adds them, and places the result in
@@ -527,7 +531,8 @@
computed by repeatedly adding this computed delta. The data
buffer must be well-behaved.
- .. cmember:: void fillwithscalar(void* buffer, npy_intp length, void* value, void* arr)
+ .. cmember:: void fillwithscalar(void* buffer, npy_intp length,
+ void* value, void* arr)
A pointer to a function that fills a contiguous ``buffer`` of
the given ``length`` with a single scalar ``value`` whose
@@ -542,7 +547,8 @@
:cdata:`PyArray_MERGESORT` are defined). These sorts are done
in-place assuming contiguous and aligned data.
- .. cmember:: int argsort(void* start, npy_intp* result, npy_intp length, void \*arr)
+ .. cmember:: int argsort(void* start, npy_intp* result, npy_intp length,
+ void \*arr)
An array of function pointers to sorting algorithms for this
data type. The same sorting algorithms as for sort are
@@ -666,11 +672,12 @@
.. cmember:: int PyUFuncObject.identity
- Either :cdata:`PyUFunc_One`, :cdata:`PyUFunc_Zero`, or :cdata:`PyUFunc_None`
- to indicate the identity for this operation. It is only used
- for a reduce-like call on an empty array.
+ Either :cdata:`PyUFunc_One`, :cdata:`PyUFunc_Zero`, or
+ :cdata:`PyUFunc_None` to indicate the identity for this operation.
+ It is only used for a reduce-like call on an empty array.
- .. cmember:: void PyUFuncObject.functions(char** args, npy_intp* dims, npy_intp* steps, void* extradata)
+ .. cmember:: void PyUFuncObject.functions(char** args, npy_intp* dims,
+ npy_intp* steps, void* extradata)
An array of function pointers --- one for each data type
supported by the ufunc. This is the vector loop that is called
@@ -764,8 +771,8 @@
.. ctype:: PyArrayIterObject
The C-structure corresponding to an object of :cdata:`PyArrayIter_Type` is
- the :ctype:`PyArrayIterObject`. The :ctype:`PyArrayIterObject` is used to keep
- track of a pointer into an N-dimensional array. It contains associated
+ the :ctype:`PyArrayIterObject`. The :ctype:`PyArrayIterObject` is used to
+ keep track of a pointer into an N-dimensional array. It contains associated
information used to quickly march through the array. The pointer can
be adjusted in three basic ways: 1) advance to the "next" position in
the array in a C-style contiguous fashion, 2) advance to an arbitrary
@@ -928,8 +935,9 @@
.. ctype:: PyArrayNeighborhoodIterObject
- The C-structure corresponding to an object of :cdata:`PyArrayNeighborhoodIter_Type` is
- the :ctype:`PyArrayNeighborhoodIterObject`.
+ The C-structure corresponding to an object of
+ :cdata:`PyArrayNeighborhoodIter_Type` is the
+ :ctype:`PyArrayNeighborhoodIterObject`.
@@ -1183,4 +1191,3 @@
``arrayobject.h`` header. This type is not exposed to Python and
could be replaced with a C-structure. As a Python type it takes
advantage of reference-counted memory management.
Modified: trunk/doc/source/reference/c-api.ufunc.rst
--- trunk/doc/source/reference/c-api.ufunc.rst 2010-02-17 23:53:04 UTC (rev 8127)
+++ trunk/doc/source/reference/c-api.ufunc.rst 2010-02-17 23:55:16 UTC (rev 8128)
@@ -63,7 +63,9 @@
-.. cfunction:: PyObject* PyUFunc_FromFuncAndData(PyUFuncGenericFunction* func, void** data, char* types, int ntypes, int nin, int nout, int identity, char* name, char* doc, int check_return)
+.. cfunction:: PyObject* PyUFunc_FromFuncAndData(PyUFuncGenericFunction* func,
+ void** data, char* types, int ntypes, int nin, int nout, int identity,
+ char* name, char* doc, int check_return)
Create a new broadcasting universal function from required variables.
Each ufunc builds around the notion of an element-by-element
@@ -102,9 +104,6 @@
:param nout:
The number of outputs
- :param identity:
- XXX: Undocumented
:param name:
The name for the ufunc. Specifying a name of 'add' or
'multiply' enables a special behavior for integer-typed
@@ -127,7 +126,8 @@
structure and it does get set with this value when the ufunc
object is created.
-.. cfunction:: int PyUFunc_RegisterLoopForType(PyUFuncObject* ufunc, int usertype, PyUFuncGenericFunction function, int* arg_types, void* data)
+.. cfunction:: int PyUFunc_RegisterLoopForType(PyUFuncObject* ufunc,
+ int usertype, PyUFuncGenericFunction function, int* arg_types, void* data)
This function allows the user to register a 1-d loop with an
already-created ufunc to be used whenever the ufunc is called
@@ -140,7 +140,9 @@
in as *arg_types* which must be a pointer to memory at least as
large as ufunc->nargs.
-.. cfunction:: int PyUFunc_ReplaceLoopBySignature(PyUFuncObject* ufunc, PyUFuncGenericFunction newfunc, int* signature, PyUFuncGenericFunction* oldfunc)
+.. cfunction:: int PyUFunc_ReplaceLoopBySignature(PyUFuncObject* ufunc,
+ PyUFuncGenericFunction newfunc, int* signature,
+ PyUFuncGenericFunction* oldfunc)
Replace a 1-d loop matching the given *signature* in the
already-created *ufunc* with the new 1-d loop newfunc. Return the
@@ -150,7 +152,8 @@
signature is an array of data-type numbers indicating the inputs
followed by the outputs assumed by the 1-d loop.
-.. cfunction:: int PyUFunc_GenericFunction(PyUFuncObject* self, PyObject* args, PyArrayObject** mps)
+.. cfunction:: int PyUFunc_GenericFunction(PyUFuncObject* self,
+ PyObject* args, PyArrayObject** mps)
A generic ufunc call. The ufunc is passed in as *self*, the
arguments to the ufunc as *args*. The *mps* argument is an array
@@ -179,7 +182,8 @@
Clear the IEEE error flags.
-.. cfunction:: void PyUFunc_GetPyValues(char* name, int* bufsize, int* errmask, PyObject** errobj)
+.. cfunction:: void PyUFunc_GetPyValues(char* name, int* bufsize,
+ int* errmask, PyObject** errobj)
Get the Python values used for ufunc processing from the
thread-local storage area unless the defaults have been set in
@@ -206,21 +210,29 @@
functions stored in the functions member of the PyUFuncObject
-.. cfunction:: void PyUFunc_f_f_As_d_d(char** args, npy_intp* dimensions, npy_intp* steps, void* func)
+.. cfunction:: void PyUFunc_f_f_As_d_d(char** args, npy_intp* dimensions,
+ npy_intp* steps, void* func)
-.. cfunction:: void PyUFunc_d_d(char** args, npy_intp* dimensions, npy_intp* steps, void* func)
+.. cfunction:: void PyUFunc_d_d(char** args, npy_intp* dimensions,
+ npy_intp* steps, void* func)
-.. cfunction:: void PyUFunc_f_f(char** args, npy_intp* dimensions, npy_intp* steps, void* func)
+.. cfunction:: void PyUFunc_f_f(char** args, npy_intp* dimensions,
+ npy_intp* steps, void* func)
-.. cfunction:: void PyUFunc_g_g(char** args, npy_intp* dimensions, npy_intp* steps, void* func)
+.. cfunction:: void PyUFunc_g_g(char** args, npy_intp* dimensions,
+ npy_intp* steps, void* func)
-.. cfunction:: void PyUFunc_F_F_As_D_D(char** args, npy_intp* dimensions, npy_intp* steps, void* func)
+.. cfunction:: void PyUFunc_F_F_As_D_D(char** args, npy_intp* dimensions,
+ npy_intp* steps, void* func)
-.. cfunction:: void PyUFunc_F_F(char** args, npy_intp* dimensions, npy_intp* steps, void* func)
+.. cfunction:: void PyUFunc_F_F(char** args, npy_intp* dimensions,
+ npy_intp* steps, void* func)
-.. cfunction:: void PyUFunc_D_D(char** args, npy_intp* dimensions, npy_intp* steps, void* func)
+.. cfunction:: void PyUFunc_D_D(char** args, npy_intp* dimensions,
+ npy_intp* steps, void* func)
-.. cfunction:: void PyUFunc_G_G(char** args, npy_intp* dimensions, npy_intp* steps, void* func)
+.. cfunction:: void PyUFunc_G_G(char** args, npy_intp* dimensions,
+ npy_intp* steps, void* func)
Type specific, core 1-d functions for ufuncs where each
calculation is obtained by calling a function taking one input
@@ -235,21 +247,29 @@
but calls out to a C-function that takes double and returns
-.. cfunction:: void PyUFunc_ff_f_As_dd_d(char** args, npy_intp* dimensions, npy_intp* steps, void* func)
+.. cfunction:: void PyUFunc_ff_f_As_dd_d(char** args, npy_intp* dimensions,
+ npy_intp* steps, void* func)
-.. cfunction:: void PyUFunc_ff_f(char** args, npy_intp* dimensions, npy_intp* steps, void* func)
+.. cfunction:: void PyUFunc_ff_f(char** args, npy_intp* dimensions,
+ npy_intp* steps, void* func)
-.. cfunction:: void PyUFunc_dd_d(char** args, npy_intp* dimensions, npy_intp* steps, void* func)
+.. cfunction:: void PyUFunc_dd_d(char** args, npy_intp* dimensions,
+ npy_intp* steps, void* func)
-.. cfunction:: void PyUFunc_gg_g(char** args, npy_intp* dimensions, npy_intp* steps, void* func)
+.. cfunction:: void PyUFunc_gg_g(char** args, npy_intp* dimensions,
+ npy_intp* steps, void* func)
-.. cfunction:: void PyUFunc_FF_F_As_DD_D(char** args, npy_intp* dimensions, npy_intp* steps, void* func)
+.. cfunction:: void PyUFunc_FF_F_As_DD_D(char** args, npy_intp* dimensions,
+ npy_intp* steps, void* func)
-.. cfunction:: void PyUFunc_DD_D(char** args, npy_intp* dimensions, npy_intp* steps, void* func)
+.. cfunction:: void PyUFunc_DD_D(char** args, npy_intp* dimensions,
+ npy_intp* steps, void* func)
-.. cfunction:: void PyUFunc_FF_F(char** args, npy_intp* dimensions, npy_intp* steps, void* func)
+.. cfunction:: void PyUFunc_FF_F(char** args, npy_intp* dimensions,
+ npy_intp* steps, void* func)
-.. cfunction:: void PyUFunc_GG_G(char** args, npy_intp* dimensions, npy_intp* steps, void* func)
+.. cfunction:: void PyUFunc_GG_G(char** args, npy_intp* dimensions,
+ npy_intp* steps, void* func)
Type specific, core 1-d functions for ufuncs where each
calculation is obtained by calling a function taking two input
@@ -261,25 +281,29 @@
of one data type but cast the values at each iteration of the loop
to use the underlying function that takes a different data type.
-.. cfunction:: void PyUFunc_O_O(char** args, npy_intp* dimensions, npy_intp* steps, void* func)
+.. cfunction:: void PyUFunc_O_O(char** args, npy_intp* dimensions,
+ npy_intp* steps, void* func)
-.. cfunction:: void PyUFunc_OO_O(char** args, npy_intp* dimensions, npy_intp* steps, void* func)
+.. cfunction:: void PyUFunc_OO_O(char** args, npy_intp* dimensions,
+ npy_intp* steps, void* func)
One-input, one-output, and two-input, one-output core 1-d functions
- for the :cdata:`NPY_OBJECT` data type. These functions handle reference count
- issues and return early on error. The actual function to call is *func*
- and it must accept calls with the signature ``(PyObject*)(PyObject*)``
- for :cfunc:`PyUFunc_O_O` or ``(PyObject*)(PyObject *, PyObject *)``
- for :cfunc:`PyUFunc_OO_O`.
+ for the :cdata:`NPY_OBJECT` data type. These functions handle reference
+ count issues and return early on error. The actual function to call
+ is *func* and it must accept calls with the signature
+ ``(PyObject*)(PyObject*)`` for :cfunc:`PyUFunc_O_O` or
+ ``(PyObject*)(PyObject *, PyObject *)`` for :cfunc:`PyUFunc_OO_O`.
-.. cfunction:: void PyUFunc_O_O_method(char** args, npy_intp* dimensions, npy_intp* steps, void* func)
+.. cfunction:: void PyUFunc_O_O_method(char** args, npy_intp* dimensions,
+ npy_intp* steps, void* func)
This general purpose 1-d core function assumes that *func* is a string
representing a method of the input object. For each
iteration of the loop, the Python object is extracted from the array
and its *func* method is called, returning the result to the output
array.
-.. cfunction:: void PyUFunc_OO_O_method(char** args, npy_intp* dimensions, npy_intp* steps, void* func)
+.. cfunction:: void PyUFunc_OO_O_method(char** args, npy_intp* dimensions,
+ npy_intp* steps, void* func)
This general purpose 1-d core function assumes that *func* is a
string representing a method of the input object that takes one
@@ -288,7 +312,8 @@
function. The output of the function is stored in the third entry
of *args*.
-.. cfunction:: void PyUFunc_On_Om(char** args, npy_intp* dimensions, npy_intp* steps, void* func)
+.. cfunction:: void PyUFunc_On_Om(char** args, npy_intp* dimensions,
+ npy_intp* steps, void* func)
This is the 1-d core function used by the dynamic ufuncs created
by umath.frompyfunc(function, nin, nout). In this case *func* is a
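At the Python level this machinery is reached through `np.frompyfunc`; a minimal sketch (using the built-in `hex` as the wrapped function):

```python
import numpy as np

# np.frompyfunc turns a Python callable into an object-dtype ufunc
hexify = np.frompyfunc(hex, 1, 1)
out = hexify(np.array([10, 255]))
print(out)            # ['0xa' '0xff']
print(out.dtype)      # object
```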
Modified: trunk/doc/source/reference/index.rst
--- trunk/doc/source/reference/index.rst 2010-02-17 23:53:04 UTC (rev 8127)
+++ trunk/doc/source/reference/index.rst 2010-02-17 23:55:16 UTC (rev 8128)
@@ -34,7 +34,8 @@
Public Domain in August 2008). The reference documentation for many of
the functions are written by numerous contributors and developers of
Numpy, both prior to and during the
-`Numpy Documentation Marathon <http://scipy.org/Developer_Zone/DocMarathon2008>`__.
+`Numpy Documentation Marathon
+<http://scipy.org/Developer_Zone/DocMarathon2008>`__.
Please help to improve NumPy's documentation! Instructions on how to
join the ongoing documentation marathon can be found
Modified: trunk/doc/source/reference/internals.code-explanations.rst
--- trunk/doc/source/reference/internals.code-explanations.rst 2010-02-17 23:53:04 UTC (rev 8127)
+++ trunk/doc/source/reference/internals.code-explanations.rst 2010-02-17 23:55:16 UTC (rev 8128)
@@ -99,7 +99,7 @@
dataptr member of the iterator object structure and call the macro
:cfunc:`PyArray_ITER_NEXT` (it) on the iterator object to move to the next
element. The "next" element is always in C-contiguous order. The macro
-works by first special casing the C-contiguous, 1-d, and 2-d cases
+works by first special casing the C-contiguous, 1-D, and 2-D cases
which work very simply.
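A Python-level analogue of this C-contiguous "next element" order is the `.flat` iterator (a small sketch using a transposed, non-contiguous view):

```python
import numpy as np

a = np.arange(6).reshape(2, 3).T   # a non-contiguous (3, 2) view
# .flat walks the view in C-contiguous ("next element") order
print(list(a.flat))                # [0, 3, 1, 4, 2, 5]
```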
For the general case, the iteration works by keeping track of a list
@@ -196,7 +196,7 @@
The implementation of advanced indexing represents some of the most
difficult code to write and explain. In fact, there are two
-implementations of advanced indexing. The first works only with 1-d
+implementations of advanced indexing. The first works only with 1-D
arrays and is implemented to handle expressions involving a.flat[obj].
The second is a general-purpose implementation that works for arrays of
"arbitrary dimension" (up to a fixed maximum). The one-dimensional indexing
@@ -222,7 +222,7 @@
After these optimizations, the array_subscript function itself is
called. This function first checks for field selection which occurs
-when a string is passed as the indexing object. Then, 0-d arrays are
+when a string is passed as the indexing object. Then, 0-D arrays are
given special-case consideration. Finally, the code determines whether
or not advanced, or fancy, indexing needs to be performed. If fancy
indexing is not needed, then standard view-based indexing is performed
@@ -330,12 +330,12 @@
single: ufunc
Universal functions are callable objects that take :math:`N` inputs
-and produce :math:`M` outputs by wrapping basic 1-d loops that work
+and produce :math:`M` outputs by wrapping basic 1-D loops that work
element-by-element into full easy-to-use functions that seamlessly
implement broadcasting, type-checking and buffered coercion, and
output-argument handling. New universal functions are normally created
in C, although there is a mechanism for creating ufuncs from Python
-functions (:func:`frompyfunc`). The user must supply a 1-d loop that
+functions (:func:`frompyfunc`). The user must supply a 1-D loop that
implements the basic function taking the input scalar values and
placing the resulting scalars into the appropriate output slots as
explained in the implementation.
@@ -349,7 +349,7 @@
even though the actual calculation of the ufunc is very fast, you will
be able to write array and type-specific code that will work faster
for small arrays than the ufunc. In particular, using ufuncs to
-perform many calculations on 0-d arrays will be slower than other
+perform many calculations on 0-D arrays will be slower than other
Python-based solutions (the silently-imported scalarmath module exists
precisely to give array scalars the look-and-feel of ufunc-based
calculations with significantly reduced overhead).
@@ -366,9 +366,9 @@
dictionary the current values for the buffer-size, the error mask, and
the associated error object. The state of the error mask controls what
happens when an error-condition is found. It should be noted that
-checking of the hardware error flags is only performed after each 1-d
+checking of the hardware error flags is only performed after each 1-D
loop is executed. This means that if the input and output arrays are
-contiguous and of the correct type so that a single 1-d loop is
+contiguous and of the correct type so that a single 1-D loop is
performed, then the flags may not be checked until all elements of the
array have been calculated. Looking up these values in a
thread-specific dictionary takes time which is easily ignored for all but
@@ -378,11 +378,11 @@
evaluated to determine how the ufunc should proceed and the input and
output arrays are constructed if necessary. Any inputs which are not
arrays are converted to arrays (using context if necessary). Which of
-the inputs are scalars (and therefore converted to 0-d arrays) is
+the inputs are scalars (and therefore converted to 0-D arrays) is
-Next, an appropriate 1-d loop is selected from the 1-d loops available
-to the ufunc based on the input array types. This 1-d loop is selected
+Next, an appropriate 1-D loop is selected from the 1-D loops available
+to the ufunc based on the input array types. This 1-D loop is selected
by trying to match the signature of the data-types of the inputs
against the available signatures. The signatures corresponding to
built-in types are stored in the types member of the ufunc structure.
@@ -394,10 +394,10 @@
input arrays can all be cast safely (ignoring any scalar arguments
which are not allowed to determine the type of the result). The
implication of this search procedure is that "lesser types" should be
-placed below "larger types" when the signatures are stored. If no 1-d
+placed below "larger types" when the signatures are stored. If no 1-D
loop is found, then an error is reported. Otherwise, the argument_list
is updated with the stored signature --- in case casting is necessary
-and to fix the output types assumed by the 1-d loop.
+and to fix the output types assumed by the 1-D loop.
If the ufunc has 2 inputs and 1 output and the second input is an
Object array then a special-case check is performed so that
@@ -406,7 +406,7 @@
method. In this way, Python is signaled to give the other object a
chance to complete the operation instead of using generic object-array
calculations. This allows (for example) sparse matrices to override
-the multiplication operator 1-d loop.
+the multiplication operator 1-D loop.
For input arrays that are smaller than the specified buffer size,
copies are made of all non-contiguous, mis-aligned, or out-of-
@@ -441,7 +441,7 @@
compilation, then the Python Global Interpreter Lock (GIL) is released
prior to calling all of these loops (as long as they don't involve
object arrays). It is re-acquired if necessary to handle error
-conditions. The hardware error flags are checked only after the 1-d
+conditions. The hardware error flags are checked only after the 1-D
loop is calculated.
@@ -449,10 +449,10 @@
This is the simplest case of all. The ufunc is executed by calling the
-underlying 1-d loop exactly once. This is possible only when we have
+underlying 1-D loop exactly once. This is possible only when we have
aligned data of the correct type (including byte-order) for both input
and output and all arrays have uniform strides (either contiguous,
-0-d, or 1-d). In this case, the 1-d computational loop is called once
+0-D, or 1-D). In this case, the 1-D computational loop is called once
to compute the calculation for the entire array. Note that the
hardware error flags are only checked after the entire calculation is
@@ -462,13 +462,13 @@
When the input and output arrays are aligned and of the correct type,
-but the striding is not uniform (non-contiguous and 2-d or larger),
+but the striding is not uniform (non-contiguous and 2-D or larger),
then a second looping structure is employed for the calculation. This
approach converts all of the iterators for the input and output
arguments to iterate over all but the largest dimension. The inner
-loop is then handled by the underlying 1-d computational loop. The
+loop is then handled by the underlying 1-D computational loop. The
outer loop is a standard iterator loop on the converted iterators. The
-hardware error flags are checked after each 1-d loop is completed.
+hardware error flags are checked after each 1-D loop is completed.
Buffered Loop
@@ -476,12 +476,12 @@
This is the code that handles the situation whenever the input and/or
output arrays are either misaligned or of the wrong data-type
-(including being byte-swapped) from what the underlying 1-d loop
+(including being byte-swapped) from what the underlying 1-D loop
expects. The arrays are also assumed to be non-contiguous. The code
-works very much like the strided loop except for the inner 1-d loop is
+works very much like the strided loop except that the inner 1-D loop is
modified so that pre-processing is performed on the inputs and post-
processing is performed on the outputs in bufsize chunks (where
-bufsize is a user-settable parameter). The underlying 1-d
+bufsize is a user-settable parameter). The underlying 1-D
computational loop is called on data that is copied over (if it needs
to be). The setup code and the loop code are considerably more
complicated in this case because it has to handle:
@@ -497,10 +497,10 @@
- special-casing Object arrays so that reference counts are properly
handled when copies and/or casts are necessary.
-- breaking up the inner 1-d loop into bufsize chunks (with a possible
+- breaking up the inner 1-D loop into bufsize chunks (with a possible
remainder).
-Again, the hardware error flags are checked at the end of each 1-d
+Again, the hardware error flags are checked at the end of each 1-D
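The user-settable buffer size is exposed in Python as ``numpy.getbufsize`` and ``numpy.setbufsize``; a minimal sketch:

```python
import numpy as np

old = np.getbufsize()             # typically 8192 by default
np.setbufsize(16384)              # buffered loops now work in larger chunks
x = np.arange(5, dtype=np.float32)
y = x.astype(np.float64)
z = x + y                         # mixed dtypes: the float32 input is cast
                                  # through buffers before the 1-D loop runs
np.setbufsize(old)                # restore the previous setting
print(z.dtype, z)
```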
@@ -544,7 +544,7 @@
This function creates a reducing loop object and fills it with
parameters needed to complete the loop. All of the methods only work
on ufuncs that take 2 inputs and return 1 output. Therefore, the
-underlying 1-d loop is selected assuming a signature of [ ``otype``,
+underlying 1-D loop is selected assuming a signature of [ ``otype``,
``otype``, ``otype`` ] where ``otype`` is the requested reduction
data-type. The buffer size and error handling is then retrieved from
(per-thread) global storage. For small arrays that are mis-aligned or
@@ -573,10 +573,10 @@
.. index::
triple: ufunc; methods; reduce
-All of the ufunc methods use the same underlying 1-d computational
+All of the ufunc methods use the same underlying 1-D computational
loops with input and output arguments adjusted so that the appropriate
reduction takes place. For example, the key to the functioning of
-reduce is that the 1-d loop is called with the output and the second
+reduce is that the 1-D loop is called with the output and the second
input pointing to the same position in memory and both having a step-
size of 0. The first input is pointing to the input array with a step-
size given by the appropriate stride for the selected axis. In this
@@ -594,7 +594,7 @@
:math:`o` is the output, and :math:`i[k]` is the
:math:`k^{\textrm{th}}` element of :math:`i` along the selected axis.
This basic operation is repeated for arrays with greater than 1
-dimension so that the reduction takes place for every 1-d sub-array
+dimension so that the reduction takes place for every 1-D sub-array
along the selected axis. An iterator with the selected dimension
removed handles this looping.
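The reduce behavior described above can be sketched from Python:

```python
import numpy as np

x = np.array([[1, 2, 3],
              [4, 5, 6]])
# The same 1-D loop runs once per sub-array along the selected axis, with
# the output and second input aliased (step-size 0) to form the reduction.
print(np.add.reduce(x, axis=0))               # [5 7 9]
print(np.add.reduce(x, axis=1))               # [ 6 15]
print(np.add.reduce(x, axis=1, dtype=float))  # requested otype selects the loop
```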
@@ -625,9 +625,10 @@
o[k] & = & i[k]\textrm{<op>}o[k-1]\quad k=1\ldots N.
-The output has the same shape as the input and each 1-d loop operates
-over :math:`N` elements when the shape in the selected axis is :math:`N+1`. Again, buffered loops take care to copy and cast the data before
-calling the underlying 1-d computational loop.
+The output has the same shape as the input and each 1-D loop operates
+over :math:`N` elements when the shape in the selected axis is :math:`N+1`.
+Again, buffered loops take care to copy and cast the data before
+calling the underlying 1-D computational loop.
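The accumulate recurrence can be checked directly; a minimal sketch:

```python
import numpy as np

i = np.array([1, 2, 3, 4])
o = np.add.accumulate(i)     # running sum: [ 1  3  6 10]
# o[0] = i[0] and o[k] = i[k] + o[k-1], matching the recurrence.
for k in range(1, len(i)):
    assert o[k] == i[k] + o[k - 1]
print(o)
```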
@@ -645,21 +646,21 @@
loop implementation is handled using code that is very similar to the
reduce code repeated as many times as there are elements in the
indices input. In particular: the first input pointer passed to the
-underlying 1-d computational loop points to the input array at the
+underlying 1-D computational loop points to the input array at the
correct location indicated by the index array. In addition, the output
-pointer and the second input pointer passed to the underlying 1-d loop
-point to the same position in memory. The size of the 1-d
+pointer and the second input pointer passed to the underlying 1-D loop
+point to the same position in memory. The size of the 1-D
computational loop is fixed to be the difference between the current
index and the next index (when the current index is the last index,
then the next index is assumed to be the length of the array along the
-selected dimension). In this way, the 1-d loop will implement a reduce
+selected dimension). In this way, the 1-D loop will implement a reduce
over the specified indices.
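A minimal sketch of this indexed reduce, via ``numpy.ufunc.reduceat``:

```python
import numpy as np

x = np.arange(8)             # [0 1 2 3 4 5 6 7]
ind = np.array([0, 4, 6])
# Each output element reduces x[ind[k]:ind[k+1]] (the last slice runs to
# the end of the axis), sized as the difference between adjacent indices.
r = np.add.reduceat(x, ind)  # [ 6  9 13]
print(r)
```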
Mis-aligned data, or a loop data-type that does not match the input and/or
output data-type, is handled using buffered code wherein data is
copied to a temporary buffer and cast to the correct data-type if
-necessary prior to calling the underlying 1-d function. The temporary
+necessary prior to calling the underlying 1-D function. The temporary
buffers are created in (element) sizes no bigger than the user
settable buffer-size value. Thus, the loop must be flexible enough to
-call the underlying 1-d computational loop enough times to complete
+call the underlying 1-D computational loop enough times to complete
the total calculation in chunks no bigger than the buffer-size.
Modified: trunk/doc/source/reference/maskedarray.generic.rst
--- trunk/doc/source/reference/maskedarray.generic.rst 2010-02-17 23:53:04 UTC (rev 8127)
+++ trunk/doc/source/reference/maskedarray.generic.rst 2010-02-17 23:55:16 UTC (rev 8128)
@@ -19,11 +19,19 @@
What is a masked array?
-In many circumstances, datasets can be incomplete or tainted by the presence of invalid data. For example, a sensor may have failed to record a data, or
-recorded an invalid value.
-The :mod:`numpy.ma` module provides a convenient way to address this issue, by introducing masked arrays.
+In many circumstances, datasets can be incomplete or tainted by the presence
+of invalid data. For example, a sensor may have failed to record data, or
+recorded an invalid value. The :mod:`numpy.ma` module provides a convenient
+way to address this issue, by introducing masked arrays.
-A masked array is the combination of a standard :class:`numpy.ndarray` and a mask. A mask is either :attr:`nomask`, indicating that no value of the associated array is invalid, or an array of booleans that determines for each element of the associated array whether the value is valid or not. When an element of the mask is ``False``, the corresponding element of the associated array is valid and is said to be unmasked. When an element of the mask is ``True``, the corresponding element of the associated array is said to be masked (invalid).
+A masked array is the combination of a standard :class:`numpy.ndarray` and a
+mask. A mask is either :attr:`nomask`, indicating that no value of the
+associated array is invalid, or an array of booleans that determines for each
+element of the associated array whether the value is valid or not. When an
+element of the mask is ``False``, the corresponding element of the associated
+array is valid and is said to be unmasked. When an element of the mask is
+``True``, the corresponding element of the associated array is said to be
+masked (invalid).
The package ensures that masked entries are not used in computations.
@@ -38,7 +46,8 @@
>>> mx = ma.masked_array(x, mask=[0, 0, 0, 1, 0])
-We can now compute the mean of the dataset, without taking the invalid data into account::
+We can now compute the mean of the dataset, without taking the invalid data
+into account::
>>> mx.mean()
@@ -48,8 +57,9 @@
-The main feature of the :mod:`numpy.ma` module is the :class:`MaskedArray` class, which is a subclass of :class:`numpy.ndarray`.
-The class, its attributes and methods are described in more details in the
+The main feature of the :mod:`numpy.ma` module is the :class:`MaskedArray`
+class, which is a subclass of :class:`numpy.ndarray`. The class, its
+attributes and methods are described in more detail in the
:ref:`MaskedArray class <maskedarray.baseclass>` section.
The :mod:`numpy.ma` module can be used as an addition to :mod:`numpy`: ::
@@ -138,30 +148,40 @@
The underlying data of a masked array can be accessed in several ways:
-* through the :attr:`~MaskedArray.data` attribute. The output is a view of the array as
- a :class:`numpy.ndarray` or one of its subclasses, depending on the type
- of the underlying data at the masked array creation.
+* through the :attr:`~MaskedArray.data` attribute. The output is a view of the
+ array as a :class:`numpy.ndarray` or one of its subclasses, depending on the
+ type of the underlying data at the masked array creation.
-* through the :meth:`~MaskedArray.__array__` method. The output is then a :class:`numpy.ndarray`.
+* through the :meth:`~MaskedArray.__array__` method. The output is then a
+ :class:`numpy.ndarray`.
-* by directly taking a view of the masked array as a :class:`numpy.ndarray` or one of its subclass (which is actually what using the :attr:`~MaskedArray.data` attribute does).
+* by directly taking a view of the masked array as a :class:`numpy.ndarray`
+ or one of its subclasses (which is actually what using the
+ :attr:`~MaskedArray.data` attribute does).
* by using the :func:`getdata` function.
-None of these methods is completely satisfactory if some entries have been marked as invalid. As a general rule, where a representation of the array is required without any masked entries, it is recommended to fill the array with the :meth:`filled` method.
+None of these methods is completely satisfactory if some entries have been
+marked as invalid. As a general rule, where a representation of the array is
+required without any masked entries, it is recommended to fill the array with
+the :meth:`filled` method.
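A minimal sketch of the :meth:`filled` recommendation:

```python
import numpy as np
import numpy.ma as ma

x = ma.array([1, 2, 3, 4], mask=[0, 0, 1, 0])
filled = x.filled(-1)         # masked entries replaced by the given value
print(filled)                 # [ 1  2 -1  4]
print(type(filled).__name__)  # a plain ndarray, safe for any computation
```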
Accessing the mask
-The mask of a masked array is accessible through its :attr:`~MaskedArray.mask` attribute.
-We must keep in mind that a ``True`` entry in the mask indicates an *invalid* data.
+The mask of a masked array is accessible through its :attr:`~MaskedArray.mask`
+attribute. We must keep in mind that a ``True`` entry in the mask indicates
+*invalid* data.
-Another possibility is to use the :func:`getmask` and :func:`getmaskarray` functions. :func:`getmask(x)` outputs the mask of ``x`` if ``x`` is a masked array, and the special value :data:`nomask` otherwise.
-:func:`getmaskarray(x)` outputs the mask of ``x`` if ``x`` is a masked array.
-If ``x`` has no invalid entry or is not a masked array, the function outputs a boolean array of ``False`` with as many elements as ``x``.
+Another possibility is to use the :func:`getmask` and :func:`getmaskarray`
+functions. :func:`getmask(x)` outputs the mask of ``x`` if ``x`` is a masked
+array, and the special value :data:`nomask` otherwise. :func:`getmaskarray(x)`
+outputs the mask of ``x`` if ``x`` is a masked array. If ``x`` has no invalid
+entry or is not a masked array, the function outputs a boolean array of
+``False`` with as many elements as ``x``.
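A minimal sketch of these two functions:

```python
import numpy as np
import numpy.ma as ma

x = ma.array([1, 2, 3], mask=[0, 1, 0])
y = np.array([1, 2, 3])                 # not a masked array

print(ma.getmask(x))        # [False  True False]
print(ma.getmask(y))        # the special value nomask
print(ma.getmaskarray(y))   # [False False False]
```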
@@ -169,7 +189,9 @@
Accessing only the valid entries
-To retrieve only the valid entries, we can use the inverse of the mask as an index. The inverse of the mask can be calculated with the :func:`numpy.logical_not` function or simply with the ``~`` operator::
+To retrieve only the valid entries, we can use the inverse of the mask as an
+index. The inverse of the mask can be calculated with the
+:func:`numpy.logical_not` function or simply with the ``~`` operator::
>>> x = ma.array([[1, 2], [3, 4]], mask=[[0, 1], [1, 0]])
>>> x[~x.mask]
@@ -177,9 +199,10 @@
mask = [False False],
fill_value = 999999)
-Another way to retrieve the valid data is to use the :meth:`compressed` method,
-which returns a one-dimensional :class:`~numpy.ndarray` (or one of its subclasses,
-depending on the value of the :attr:`~MaskedArray.baseclass` attribute)::
+Another way to retrieve the valid data is to use the :meth:`compressed`
+method, which returns a one-dimensional :class:`~numpy.ndarray` (or one of its
+subclasses, depending on the value of the :attr:`~MaskedArray.baseclass`
+attribute)::
>>> x.compressed()
array([1, 4])
@@ -194,7 +217,8 @@
Masking an entry
-The recommended way to mark one or several specific entries of a masked array as invalid is to assign the special value :attr:`masked` to them::
+The recommended way to mark one or several specific entries of a masked array
+as invalid is to assign the special value :attr:`masked` to them::
>>> x = ma.array([1, 2, 3])
>>> x[0] = ma.masked
@@ -226,10 +250,15 @@
but this usage is discouraged.
.. note::
- When creating a new masked array with a simple, non-structured datatype, the mask is initially set to the special value :attr:`nomask`, that corresponds roughly to the boolean ``False``. Trying to set an element of :attr:`nomask` will fail with a :exc:`TypeError` exception, as a boolean does not support item assignment.
+ When creating a new masked array with a simple, non-structured datatype,
+ the mask is initially set to the special value :attr:`nomask`, that
+ corresponds roughly to the boolean ``False``. Trying to set an element of
+ :attr:`nomask` will fail with a :exc:`TypeError` exception, as a boolean
+ does not support item assignment.
-All the entries of an array can be masked at once by assigning ``True`` to the mask::
+All the entries of an array can be masked at once by assigning ``True`` to the
+mask::
>>> x = ma.array([1, 2, 3], mask=[0, 0, 1])
>>> x.mask = True
@@ -238,7 +267,8 @@
mask = [ True True True],
fill_value = 999999)
-Finally, specific entries can be masked and/or unmasked by assigning to the mask a sequence of booleans::
+Finally, specific entries can be masked and/or unmasked by assigning to the
+mask a sequence of booleans::
>>> x = ma.array([1, 2, 3])
>>> x.mask = [0, 1, 0]
@@ -250,7 +280,8 @@
Unmasking an entry
-To unmask one or several specific entries, we can just assign one or several new valid values to them::
+To unmask one or several specific entries, we can just assign one or several
+new valid values to them::
>>> x = ma.array([1, 2, 3], mask=[0, 0, 1])
>>> x
@@ -264,12 +295,12 @@
fill_value = 999999)
.. note::
- Unmasking an entry by direct assignment will silently fail if the masked array
- has a *hard* mask, as shown by the :attr:`hardmask` attribute.
- This feature was introduced to prevent overwriting the mask.
- To force the unmasking of an entry where the array has a hard mask, the mask must first
- to be softened using the :meth:`soften_mask` method before the allocation. It can be re-hardened
- with :meth:`harden_mask`::
+ Unmasking an entry by direct assignment will silently fail if the masked
+ array has a *hard* mask, as shown by the :attr:`hardmask` attribute. This
+ feature was introduced to prevent overwriting the mask. To force the
+ unmasking of an entry where the array has a hard mask, the mask must first
+ be softened using the :meth:`soften_mask` method before the assignment.
+ It can be re-hardened with :meth:`harden_mask`::
>>> x = ma.array([1, 2, 3], mask=[0, 0, 1], hard_mask=True)
>>> x
@@ -290,7 +321,9 @@
>>> x.harden_mask()
-To unmask all masked entries of a masked array (provided the mask isn't a hard mask), the simplest solution is to assign the constant :attr:`nomask` to the mask::
+To unmask all masked entries of a masked array (provided the mask isn't a hard
+mask), the simplest solution is to assign the constant :attr:`nomask` to the
+mask::
>>> x = ma.array([1, 2, 3], mask=[0, 0, 1])
>>> x
@@ -308,9 +341,13 @@
Indexing and slicing
-As a :class:`MaskedArray` is a subclass of :class:`numpy.ndarray`, it inherits its mechanisms for indexing and slicing.
+As a :class:`MaskedArray` is a subclass of :class:`numpy.ndarray`, it inherits
+its mechanisms for indexing and slicing.
-When accessing a single entry of a masked array with no named fields, the output is either a scalar (if the corresponding entry of the mask is ``False``) or the special value :attr:`masked` (if the corresponding entry of the mask is ``True``)::
+When accessing a single entry of a masked array with no named fields, the
+output is either a scalar (if the corresponding entry of the mask is
+``False``) or the special value :attr:`masked` (if the corresponding entry of
+the mask is ``True``)::
>>> x = ma.array([1, 2, 3], mask=[0, 0, 1])
>>> x[0]
@@ -323,7 +360,9 @@
If the masked array has named fields, accessing a single entry returns a
-:class:`numpy.void` object if none of the fields are masked, or a 0d masked array with the same dtype as the initial array if at least one of the fields is masked.
+:class:`numpy.void` object if none of the fields are masked, or a 0d masked
+array with the same dtype as the initial array if at least one of the fields
+is masked.
>>> y = ma.masked_array([(1,2), (3, 4)],
... mask=[(0, 0), (0, 1)],
@@ -337,7 +376,11 @@
dtype = [('a', '<i4'), ('b', '<i4')])
-When accessing a slice, the output is a masked array whose :attr:`~MaskedArray.data` attribute is a view of the original data, and whose mask is either :attr:`nomask` (if there was no invalid entries in the original array) or a copy of the corresponding slice of the original mask. The copy is required to avoid propagation of any modification of the mask to the original.
+When accessing a slice, the output is a masked array whose
+:attr:`~MaskedArray.data` attribute is a view of the original data, and whose
+mask is either :attr:`nomask` (if there were no invalid entries in the original
+array) or a copy of the corresponding slice of the original mask. The copy is
+required to avoid propagation of any modification of the mask to the original.
>>> x = ma.array([1, 2, 3, 4, 5], mask=[0, 1, 0, 0, 1])
>>> mx = x[:3]
@@ -356,31 +399,39 @@
array([ 1, -1, 3, 4, 5])
-Accessing a field of a masked array with structured datatype returns a :class:`MaskedArray`.
+Accessing a field of a masked array with structured datatype returns a
+:class:`MaskedArray`.
Operations on masked arrays
Arithmetic and comparison operations are supported by masked arrays.
-As much as possible, invalid entries of a masked array are not processed, meaning that the
-corresponding :attr:`data` entries *should* be the same before and after the operation.
+As much as possible, invalid entries of a masked array are not processed,
+meaning that the corresponding :attr:`data` entries *should* be the same
+before and after the operation.
.. warning::
- We need to stress that this behavior may not be systematic, that masked data may be affected
- by the operation in some cases and therefore users should not rely on this data remaining unchanged.
+ We need to stress that this behavior may not be systematic: masked
+ data may be affected by the operation in some cases and therefore users
+ should not rely on this data remaining unchanged.
The :mod:`numpy.ma` module comes with a specific implementation of most
-Unary and binary functions that have a validity domain (such as :func:`~numpy.log` or :func:`~numpy.divide`) return the :data:`masked` constant whenever the input is masked or falls outside the validity domain::
+ufuncs. Unary and binary functions that have a validity domain (such as
+:func:`~numpy.log` or :func:`~numpy.divide`) return the :data:`masked`
+constant whenever the input is masked or falls outside the validity domain::
>>> ma.log([-1, 0, 1, 2])
masked_array(data = [-- -- 0.0 0.69314718056],
mask = [ True True False False],
fill_value = 1e+20)
-Masked arrays also support standard numpy ufuncs. The output is then a masked array. The result of a unary ufunc is masked wherever the input is masked. The result of a binary ufunc is masked wherever any of the input is masked. If the ufunc also returns the optional context output (a 3-element tuple containing the name of the ufunc, its arguments and its domain), the context is processed and entries of the output masked array are masked wherever the corresponding input fall outside the validity domain::
+Masked arrays also support standard numpy ufuncs. The output is then a masked
+array. The result of a unary ufunc is masked wherever the input is masked. The
+result of a binary ufunc is masked wherever any of the inputs is masked. The
+ufunc also returns the optional context output (a 3-element tuple containing
+the name of the ufunc, its arguments and its domain), the context is processed
+and entries of the output masked array are masked wherever the corresponding
+inputs fall outside the validity domain::
>>> x = ma.array([-1, 1, 0, 2, 3], mask=[0, 0, 0, 0, 1])
>>> np.log(x)
@@ -396,8 +447,9 @@
Data with a given value representing missing data
-Let's consider a list of elements, ``x``, where values of -9999. represent missing data.
-We wish to compute the average value of the data and the vector of anomalies (deviations from the average)::
+Let's consider a list of elements, ``x``, where values of -9999. represent
+missing data. We wish to compute the average value of the data and the vector
+of anomalies (deviations from the average)::
>>> import numpy.ma as ma
>>> x = [0.,1.,-9999.,3.,4.]
@@ -423,7 +475,8 @@
Numerical operations
-Numerical operations can be easily performed without worrying about missing values, dividing by zero, square roots of negative numbers, etc.::
+Numerical operations can be easily performed without worrying about missing
+values, dividing by zero, square roots of negative numbers, etc.::
>>> import numpy as np, numpy.ma as ma
>>> x = ma.array([1., -1., 3., 4., 5., 6.], mask=[0,0,0,0,1,0])
@@ -431,13 +484,16 @@
>>> print np.sqrt(x/y)
[1.0 -- -- 1.0 -- --]
-Four values of the output are invalid: the first one comes from taking the square root of a negative number, the second from the division by zero, and the last two where the inputs were masked.
+Four values of the output are invalid: the first one comes from taking the
+square root of a negative number, the second from the division by zero, and
+the last two where the inputs were masked.
Ignoring extreme values
-Let's consider an array ``d`` of random floats between 0 and 1.
-We wish to compute the average of the values of ``d`` while ignoring any data outside the range ``[0.1, 0.9]``::
+Let's consider an array ``d`` of random floats between 0 and 1. We wish to
+compute the average of the values of ``d`` while ignoring any data outside
+the range ``[0.1, 0.9]``::
>>> print ma.masked_outside(d, 0.1, 0.9).mean()
Modified: trunk/doc/source/reference/routines.array-creation.rst
--- trunk/doc/source/reference/routines.array-creation.rst 2010-02-17 23:53:04 UTC (rev 8127)
+++ trunk/doc/source/reference/routines.array-creation.rst 2010-02-17 23:55:16 UTC (rev 8128)
@@ -44,7 +44,8 @@
Creating record arrays (:mod:`numpy.rec`)
-.. note:: :mod:`numpy.rec` is the preferred alias for :mod:`numpy.core.records`.
+.. note:: :mod:`numpy.rec` is the preferred alias for
+ :mod:`numpy.core.records`.
.. autosummary::
:toctree: generated/
@@ -60,7 +61,8 @@
Creating character arrays (:mod:`numpy.char`)
-.. note:: :mod:`numpy.char` is the preferred alias for :mod:`numpy.core.defchararray`.
+.. note:: :mod:`numpy.char` is the preferred alias for
+ :mod:`numpy.core.defchararray`.
.. autosummary::
:toctree: generated/
Modified: trunk/doc/source/reference/routines.rst
--- trunk/doc/source/reference/routines.rst 2010-02-17 23:53:04 UTC (rev 8127)
+++ trunk/doc/source/reference/routines.rst 2010-02-17 23:55:16 UTC (rev 8128)
@@ -2,6 +2,16 @@
+In this chapter routine docstrings are presented, grouped by functionality.
+Many docstrings contain example code, which demonstrates basic usage
+of the routine. The examples assume that NumPy is imported with::
+ >>> import numpy as np
+A convenient way to execute examples is the ``%doctest_mode`` mode of
+IPython, which allows for pasting of multi-line examples and preserves
.. toctree::
:maxdepth: 2
Modified: trunk/doc/source/user/basics.indexing.rst
--- trunk/doc/source/user/basics.indexing.rst 2010-02-17 23:53:04 UTC (rev 8127)
+++ trunk/doc/source/user/basics.indexing.rst 2010-02-17 23:55:16 UTC (rev 8128)
@@ -6,11 +6,4 @@
.. seealso:: :ref:`Indexing routines <routines.indexing>`
-.. note::
- XXX: Combine ``numpy.doc.indexing`` with material
- section 2.2 Basic indexing?
- Or incorporate the material directly here?
.. automodule:: numpy.doc.indexing
Modified: trunk/doc/source/user/basics.rec.rst
--- trunk/doc/source/user/basics.rec.rst 2010-02-17 23:53:04 UTC (rev 8127)
+++ trunk/doc/source/user/basics.rec.rst 2010-02-17 23:55:16 UTC (rev 8128)
@@ -1,3 +1,5 @@
+.. _structured_arrays:
Structured arrays (aka "Record arrays")
Modified: trunk/doc/source/user/basics.rst
--- trunk/doc/source/user/basics.rst 2010-02-17 23:53:04 UTC (rev 8127)
+++ trunk/doc/source/user/basics.rst 2010-02-17 23:55:16 UTC (rev 8128)
@@ -2,11 +2,6 @@
Numpy basics
-.. note::
- XXX: there is overlap between this text extracted from ``numpy.doc``
- and "Guide to Numpy" chapter 2. Needs combining?
.. toctree::
:maxdepth: 2
Modified: trunk/doc/source/user/basics.types.rst
--- trunk/doc/source/user/basics.types.rst 2010-02-17 23:53:04 UTC (rev 8127)
+++ trunk/doc/source/user/basics.types.rst 2010-02-17 23:55:16 UTC (rev 8128)
@@ -4,11 +4,4 @@
.. seealso:: :ref:`Data type objects <arrays.dtypes>`
-.. note::
- XXX: Combine ``numpy.doc.indexing`` with material from
- "Guide to Numpy" (section 2.1 Data-Type descriptors)?
- Or incorporate the material directly here?
.. automodule:: numpy.doc.basics
Modified: trunk/doc/source/user/c-info.beyond-basics.rst
--- trunk/doc/source/user/c-info.beyond-basics.rst 2010-02-17 23:53:04 UTC (rev 8127)
+++ trunk/doc/source/user/c-info.beyond-basics.rst 2010-02-17 23:55:16 UTC (rev 8128)
@@ -159,17 +159,18 @@
.. index::
single: broadcasting
-When multiple arrays are involved in an operation, you may want to use the same
-broadcasting rules that the math operations ( *i.e.* the ufuncs) use. This can
-be done easily using the :ctype:`PyArrayMultiIterObject`. This is the object
-returned from the Python command numpy.broadcast and it is almost as easy to
-use from C. The function :cfunc:`PyArray_MultiIterNew` ( ``n``, ``...`` ) is
-used (with ``n`` input objects in place of ``...`` ). The input objects can be
-arrays or anything that can be converted into an array. A pointer to a
-PyArrayMultiIterObject is returned. Broadcasting has already been accomplished
-which adjusts the iterators so that all that needs to be done to advance to the
-next element in each array is for PyArray_ITER_NEXT to be called for each of
-the inputs. This incrementing is automatically performed by
+When multiple arrays are involved in an operation, you may want to use the
+same broadcasting rules that the math operations (*i.e.* the ufuncs) use.
+This can be done easily using the :ctype:`PyArrayMultiIterObject`. This is
+the object returned from the Python command numpy.broadcast and it is almost
+as easy to use from C. The function
+:cfunc:`PyArray_MultiIterNew` ( ``n``, ``...`` ) is used (with ``n`` input
+objects in place of ``...`` ). The input objects can be arrays or anything
+that can be converted into an array. A pointer to a PyArrayMultiIterObject is
+returned. Broadcasting has already been accomplished which adjusts the
+iterators so that all that needs to be done to advance to the next element in
+each array is for PyArray_ITER_NEXT to be called for each of the inputs. This
+incrementing is automatically performed by
:cfunc:`PyArray_MultiIter_NEXT` ( ``obj`` ) macro (which can handle a
multiterator ``obj`` as either a :ctype:`PyArrayMultiObject *` or a
:ctype:`PyObject *`). The data from input number ``i`` is available using
@@ -233,15 +234,19 @@
built-in data-types is given below. A different mechanism is used to
register ufuncs for user-defined data-types.
-.. cfunction:: PyObject *PyUFunc_FromFuncAndData( PyUFuncGenericFunction* func, void** data, char* types, int ntypes, int nin, int nout, int identity, char* name, char* doc, int check_return)
+.. cfunction:: PyObject *PyUFunc_FromFuncAndData( PyUFuncGenericFunction* func,
+ void** data, char* types, int ntypes, int nin, int nout, int identity,
+ char* name, char* doc, int check_return)
A pointer to an array of 1-d functions to use. This array must be at
- least ntypes long. Each entry in the array must be a ``PyUFuncGenericFunction`` function. This function has the following signature. An example of a
- valid 1d loop function is also given.
+ least ntypes long. Each entry in the array must be a
+ ``PyUFuncGenericFunction`` function. This function has the following
+ signature. An example of a valid 1d loop function is also given.
- .. cfunction:: void loop1d(char** args, npy_intp* dimensions, npy_intp* steps, void* data)
+ .. cfunction:: void loop1d(char** args, npy_intp* dimensions,
+ npy_intp* steps, void* data)
@@ -269,7 +274,8 @@
.. code-block:: c
static void
- double_add(char *args, npy_intp *dimensions, npy_intp *steps, void *extra)
+ double_add(char *args, npy_intp *dimensions, npy_intp *steps,
+ void *extra)
npy_intp i;
npy_intp is1=steps[0], is2=steps[1];
@@ -320,9 +326,9 @@
- Either :cdata:`PyUFunc_One`, :cdata:`PyUFunc_Zero`, :cdata:`PyUFunc_None`.
- This specifies what should be returned when an empty array is
- passed to the reduce method of the ufunc.
+ Either :cdata:`PyUFunc_One`, :cdata:`PyUFunc_Zero`,
+ :cdata:`PyUFunc_None`. This specifies what should be returned when
+ an empty array is passed to the reduce method of the ufunc.
@@ -458,7 +464,8 @@
these functions with the data-type descriptor. A low-level casting
function has the signature.
-.. cfunction:: void castfunc( void* from, void* to, npy_intp n, void* fromarr, void* toarr)
+.. cfunction:: void castfunc( void* from, void* to, npy_intp n, void* fromarr,
+ void* toarr)
Cast ``n`` elements ``from`` one type ``to`` another. The data to
cast from is in a contiguous, correctly-swapped and aligned chunk
@@ -531,7 +538,8 @@
this function is ``0`` if the process was successful and ``-1`` with
an error condition set if it was not successful.
-.. cfunction:: int PyUFunc_RegisterLoopForType( PyUFuncObject* ufunc, int usertype, PyUFuncGenericFunction function, int* arg_types, void* data)
+.. cfunction:: int PyUFunc_RegisterLoopForType( PyUFuncObject* ufunc,
+ int usertype, PyUFuncGenericFunction function, int* arg_types, void* data)
@@ -661,10 +669,6 @@
Some special methods and attributes are used by arrays in order to
facilitate the interoperation of sub-types with the base ndarray type.
-.. note:: XXX: some of the documentation below needs to be moved to the
- reference guide.
The __array_finalize\__ method
Modified: trunk/doc/source/user/c-info.python-as-glue.rst
--- trunk/doc/source/user/c-info.python-as-glue.rst 2010-02-17 23:53:04 UTC (rev 8127)
+++ trunk/doc/source/user/c-info.python-as-glue.rst 2010-02-17 23:55:16 UTC (rev 8128)
@@ -13,7 +13,7 @@
Many people like to say that Python is a fantastic glue language.
Hopefully, this Chapter will convince you that this is true. The first
adopters of Python for science were typically people who used it to
-glue together large applicaton codes running on super-computers. Not
+glue together large application codes running on super-computers. Not
only was it much nicer to code in Python than in a shell script or
Perl, in addition, the ability to easily extend Python made it
relatively easy to create new classes and types specifically adapted
@@ -123,8 +123,7 @@
interfaces to routines in Fortran 77/90/95 code. It has the ability to
parse Fortran 77/90/95 code and automatically generate Python
signatures for the subroutines it encounters, or you can guide how the
-subroutine interfaces with Python by constructing an interface-
-defintion-file (or modifying the f2py-produced one).
+subroutine interfaces with Python by constructing an interface-definition-file (or modifying the f2py-produced one).
.. index::
single: f2py
@@ -175,7 +174,7 @@
This command leaves a file named add.{ext} in the current directory
(where {ext} is the appropriate extension for a python extension
module on your platform --- so, pyd, *etc.* ). This module may then be
-imported from Python. It will contain a method for each subroutin in
+imported from Python. It will contain a method for each subroutine in
add (zadd, cadd, dadd, sadd). The docstring of each method contains
information about how the module method may be called:
@@ -586,7 +585,7 @@
One final note about weave.inline: if you have additional code you
want to include in the final extension module such as supporting
-function calls, include statments, etc. you can pass this code in as a
+function calls, include statements, etc. you can pass this code in as a
string using the keyword support_code: ``weave.inline(code, variables,
support_code=support)``. If you need the extension module to link
against an additional library then you can also pass in
@@ -784,7 +783,7 @@
-The two-dimensional example we created using weave is a bit uglierto
+The two-dimensional example we created using weave is a bit uglier to
implement in Pyrex because two-dimensional indexing using Pyrex is not
as simple. But, it is straightforward (and possibly faster because of
pre-computed indices). Here is the Pyrex-file I named image.pyx.
@@ -873,7 +872,7 @@
4. Multi-dimensional arrays are "bulky" to index (appropriate macros
may be able to fix this).
-5. The C-code generated by Prex is hard to read and modify (and typically
+5. The C-code generated by Pyrex is hard to read and modify (and typically
compiles with annoying but harmless warnings).
Writing a good Pyrex extension module still takes a bit of effort
@@ -1126,8 +1125,8 @@
area of an ndarray. You may still want to wrap the function in an
additional Python wrapper to make it user-friendly (hiding some
obvious arguments and making some arguments output arguments). In this
-process, the **requires** function in NumPy may be useful to return the right kind of array from
-a given input.
+process, the **requires** function in NumPy may be useful to return the right
+kind of array from a given input.
Complete example
Modified: trunk/doc/source/user/howtofind.rst
--- trunk/doc/source/user/howtofind.rst 2010-02-17 23:53:04 UTC (rev 8127)
+++ trunk/doc/source/user/howtofind.rst 2010-02-17 23:55:16 UTC (rev 8128)
@@ -4,6 +4,4 @@
.. seealso:: :ref:`Numpy-specific help functions <routines.help>`
-.. note:: XXX: this part is not yet written.
.. automodule:: numpy.doc.howtofind
Modified: trunk/doc/source/user/install.rst
--- trunk/doc/source/user/install.rst 2010-02-17 23:53:04 UTC (rev 8127)
+++ trunk/doc/source/user/install.rst 2010-02-17 23:55:16 UTC (rev 8128)
@@ -12,29 +12,30 @@
Good solutions for Windows are, The Enthought Python Distribution `(EPD)
-<http://www.enthought.com/products/epd.php>`_ (which provides binary installers
-for Windows, OS X and Redhat) and `Python (x, y) <http://www.pythonxy.com>`_.
-Both of these packages include Python, NumPy and many additional packages.
-A lightweight alternative is to download the Python installer from
-`www.python.org <http://www.python.org>`_ and the NumPy installer for your
-Python version from the Sourceforge `download site
+<http://www.enthought.com/products/epd.php>`_ (which provides binary
+installers for Windows, OS X and Redhat) and `Python (x, y)
+<http://www.pythonxy.com>`_. Both of these packages include Python, NumPy and
+many additional packages. A lightweight alternative is to download the Python
+installer from `www.python.org <http://www.python.org>`_ and the NumPy
+installer for your Python version from the Sourceforge `download site <http://
Most of the major distributions provide packages for NumPy, but these can lag
behind the most recent NumPy release. Pre-built binary packages for Ubuntu are
-available on the `scipy ppa <https://edge.launchpad.net/~scipy/+archive/ppa>`_.
-Redhat binaries are available in the `EPD
+available on the `scipy ppa
+<https://edge.launchpad.net/~scipy/+archive/ppa>`_. Redhat binaries are
+available in the `EPD <http://www.enthought.com/products/epd.php>`_.
Mac OS X
A universal binary installer for NumPy is available from the `download site
-The `EPD <http://www.enthought.com/products/epd.php>`_ provides NumPy binaries.
+package_id=175103>`_. The `EPD <http://www.enthought.com/products/epd.php>`_
+provides NumPy binaries.
Building from source
@@ -62,21 +63,22 @@
2) Compilers
- To build any extension modules for Python, you'll need a C compiler. Various
- NumPy modules use FORTRAN 77 libraries, so you'll also need a FORTRAN 77
- compiler installed.
+ To build any extension modules for Python, you'll need a C compiler.
+ Various NumPy modules use FORTRAN 77 libraries, so you'll also need a
+ FORTRAN 77 compiler installed.
- Note that NumPy is developed mainly using GNU compilers. Compilers from other
- vendors such as Intel, Absoft, Sun, NAG, Compaq, Vast, Porland, Lahey, HP,
- IBM, Microsoft are only supported in the form of community feedback, and may
- not work out of the box. GCC 3.x (and later) compilers are recommended.
+ Note that NumPy is developed mainly using GNU compilers. Compilers from
+ other vendors such as Intel, Absoft, Sun, NAG, Compaq, Vast, Porland,
+ Lahey, HP, IBM, Microsoft are only supported in the form of community
+ feedback, and may not work out of the box. GCC 3.x (and later) compilers
+ are recommended.
3) Linear Algebra libraries
- NumPy does not require any external linear algebra libraries to be installed.
- However, if these are available, NumPy's setup script can detect them and use
- them for building. A number of different LAPACK library setups can be used,
- including optimized LAPACK libraries such as ATLAS, MKL or the
+ NumPy does not require any external linear algebra libraries to be
+ installed. However, if these are available, NumPy's setup script can detect
+ them and use them for building. A number of different LAPACK library setups
+ can be used, including optimized LAPACK libraries such as ATLAS, MKL or the
Accelerate/vecLib framework on OS X.
FORTRAN ABI mismatch
@@ -87,8 +89,8 @@
should avoid mixing libraries built with one with another. In particular, if
your blas/lapack/atlas is built with g77, you *must* use g77 when building
numpy and scipy; on the contrary, if your atlas is built with gfortran, you
-*must* build numpy/scipy with gfortran. This applies for most other cases where
-different FORTRAN compilers might have been used.
+*must* build numpy/scipy with gfortran. This applies for most other cases
+where different FORTRAN compilers might have been used.
Choosing the fortran compiler
@@ -110,9 +112,9 @@
One relatively simple and reliable way to check for the compiler used to build
a library is to use ldd on the library. If libg2c.so is a dependency, this
-means that g77 has been used. If libgfortran.so is a a dependency, gfortran has
-been used. If both are dependencies, this means both have been used, which is
-almost always a very bad idea.
+means that g77 has been used. If libgfortran.so is a a dependency, gfortran
+has been used. If both are dependencies, this means both have been used, which
+is almost always a very bad idea.
Building with ATLAS support
Modified: trunk/doc/source/user/misc.rst
--- trunk/doc/source/user/misc.rst 2010-02-17 23:53:04 UTC (rev 8127)
+++ trunk/doc/source/user/misc.rst 2010-02-17 23:55:16 UTC (rev 8128)
@@ -2,8 +2,6 @@
-.. note:: XXX: This section is not yet written.
.. automodule:: numpy.doc.misc
.. automodule:: numpy.doc.methods_vs_functions
Modified: trunk/doc/source/user/performance.rst
--- trunk/doc/source/user/performance.rst 2010-02-17 23:53:04 UTC (rev 8127)
+++ trunk/doc/source/user/performance.rst 2010-02-17 23:55:16 UTC (rev 8128)
@@ -2,6 +2,4 @@
-.. note:: XXX: This section is not yet written.
.. automodule:: numpy.doc.performance
Modified: trunk/numpy/add_newdocs.py
--- trunk/numpy/add_newdocs.py 2010-02-17 23:53:04 UTC (rev 8127)
+++ trunk/numpy/add_newdocs.py 2010-02-17 23:55:16 UTC (rev 8128)
@@ -3985,6 +3985,12 @@
Functions that operate element by element on whole arrays.
+ To see the documentation for a specific ufunc, use np.info(). For
+ example, np.info(np.sin). Because ufuncs are written in C
+ (for speed) and linked into Python with NumPy's ufunc facility,
+ Python's help() function finds this page whenever help() is called
+ on a ufunc.
A detailed explanation of ufuncs can be found in the "ufuncs.rst"
file in the NumPy reference guide.
Modified: trunk/numpy/core/code_generators/ufunc_docstrings.py
--- trunk/numpy/core/code_generators/ufunc_docstrings.py 2010-02-17 23:53:04 UTC (rev 8127)
+++ trunk/numpy/core/code_generators/ufunc_docstrings.py 2010-02-17 23:55:16 UTC (rev 8128)
@@ -1641,7 +1641,7 @@
x1 : array_like of integer type
Input values.
x2 : array_like of integer type
- Number of zeros to append to `x1`.
+ Number of zeros to append to `x1`. Has to be non-negative.
@@ -1849,6 +1849,16 @@
handles the floating-point negative zero as an infinitesimal negative
number, conforming to the C99 standard.
+ Examples
+ --------
+ >>> x = np.array([0, 1, 2, 2**4])
+ >>> np.log2(x)
+ array([-Inf, 0., 1., 4.])
+ >>> xi = np.array([0+1.j, 1, 2+0.j, 4.j])
+ >>> np.log2(xi)
+ array([ 0.+2.26618007j, 0.+0.j , 1.+0.j , 2.+2.26618007j])
add_newdoc('numpy.core.umath', 'logaddexp',
Modified: trunk/numpy/doc/constants.py
--- trunk/numpy/doc/constants.py 2010-02-17 23:53:04 UTC (rev 8127)
+++ trunk/numpy/doc/constants.py 2010-02-17 23:55:16 UTC (rev 8128)
@@ -75,8 +75,8 @@
isnan : Shows which elements are Not a Number
- isfinite : Shows which elements are finite (not one of
- Not a Number, positive infinity and negative infinity)
+ isfinite : Shows which elements are finite (not one of Not a Number,
+ positive infinity and negative infinity)
@@ -214,7 +214,7 @@
Euler's constant, base of natural logarithms, Napier's constant.
- `e = 2.71828182845904523536028747135266249775724709369995...`
+ ``e = 2.71828182845904523536028747135266249775724709369995...``
See Also
@@ -246,8 +246,8 @@
isnan : Shows which elements are Not a Number
- isfinite : Shows which elements are finite (not one of
- Not a Number, positive infinity and negative infinity)
+ isfinite : Shows which elements are finite (not one of Not a Number,
+ positive infinity and negative infinity)
@@ -322,20 +322,20 @@
- >>> np.newaxis is None
+ >>> newaxis is None
>>> x = np.arange(3)
>>> x
array([0, 1, 2])
- >>> x[:, np.newaxis]
+ >>> x[:, newaxis]
- >>> x[:, np.newaxis, np.newaxis]
+ >>> x[:, newaxis, newaxis]
- >>> x[:, np.newaxis] * x
+ >>> x[:, newaxis] * x
array([[0, 0, 0],
[0, 1, 2],
[0, 2, 4]])
@@ -343,20 +343,20 @@
Outer product, same as ``outer(x, y)``:
>>> y = np.arange(3, 6)
- >>> x[:, np.newaxis] * y
+ >>> x[:, newaxis] * y
array([[ 0, 0, 0],
[ 3, 4, 5],
[ 6, 8, 10]])
``x[newaxis, :]`` is equivalent to ``x[newaxis]`` and ``x[None]``:
- >>> x[np.newaxis, :].shape
+ >>> x[newaxis, :].shape
(1, 3)
- >>> x[np.newaxis].shape
+ >>> x[newaxis].shape
(1, 3)
>>> x[None].shape
(1, 3)
- >>> x[:, np.newaxis].shape
+ >>> x[:, newaxis].shape
(3, 1)
More information about the Numpy-svn mailing list | {"url":"http://mail.scipy.org/pipermail/numpy-svn/2010-February/003910.html","timestamp":"2014-04-18T07:29:07Z","content_type":null,"content_length":"94450","record_id":"<urn:uuid:9f593cee-ddeb-46fe-8aea-0ee3898436f4>","cc-path":"CC-MAIN-2014-15/segments/1398223206118.10/warc/CC-MAIN-20140423032006-00004-ip-10-147-4-33.ec2.internal.warc.gz"} |
Homework Help
Posted by Missy on Saturday, April 18, 2009 at 5:14pm.
How do you find out the median in a set of numbers?
(Median is with mode, mean, and range) I know how to do them I just don't know how to find the median?
• 4th grade math - Writeacher, Saturday, April 18, 2009 at 6:07pm
Give me a set of numbers and then tell me what you think the median number is.
• algebra - Anonymous, Monday, April 20, 2009 at 7:47pm
i need help with algebra!!!
• 4th grade math - Anonymous, Wednesday, April 29, 2009 at 7:27pm
Put the numbers from least to greatest, then cross numbers out starting from the outside. So, if I had the numbers 57, 8, 79, 58, 38, 29, and 42, first I would put them from least to greatest so that it
would be 8, 29, 38, 42, 57, 58, 79. Then I would cross out numbers in this order: 8, 79, 29, 58, 38, 57. Now I only have one number, 42. That is your median. Now let's say that we had another number, 92. We
would now put the order like this: 8, 29, 38, 42, 57, 58, 79, 92. We would cross out in this order: 8, 92, 29, 79, 38, 58. Now we have two numbers, and if we cross those numbers out we wouldn't have anything
left. What we do now is add the two numbers together (42 and 57), take the answer to that (99), and divide by two (this would be finding the average of the two). This would find the median (in our
problem it would be 49.5).
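The procedure described above, sort the numbers and take the middle value (or the average of the two middle values), can be written directly; this is a small illustrative function, not code from the thread:

```python
def median(numbers):
    """Sort, then take the middle value, or average the two middle values."""
    ordered = sorted(numbers)
    n = len(ordered)
    mid = n // 2
    if n % 2 == 1:
        return ordered[mid]
    return (ordered[mid - 1] + ordered[mid]) / 2.0

print(median([57, 8, 79, 58, 38, 29, 42]))      # 42, the odd-count example above
print(median([57, 8, 79, 58, 38, 29, 42, 92]))  # 49.5, the even-count example
```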
• 5th grade math - mahiem, Thursday, May 14, 2009 at 5:47pm
• 4th grade math - Anonymous, Wednesday, September 2, 2009 at 9:19pm
200 +10000
• 4th grade math - chasity, Wednesday, September 2, 2009 at 9:19pm
200 +10000
• 4th grade math - Emily, Thursday, January 7, 2010 at 6:46pm
i have a worksheet from my teacher that i dont really understand i was wondering if someone could help me please
thank you very much
Emily 4th grade
Math - Find the median more and range 1.20,42,45,48,50,50 Median=46.5 MOde=50 ... | {"url":"http://www.jiskha.com/display.cgi?id=1240089290","timestamp":"2014-04-18T22:03:13Z","content_type":null,"content_length":"11148","record_id":"<urn:uuid:c89b4f25-ff6b-462a-ae10-109a4bd33a2b>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.9/warc/CC-MAIN-20140416005215-00261-ip-10-147-4-33.ec2.internal.warc.gz"} |
Live Help and Online Tutoring for Math Problems - Transtutors
Live Help and Online Tutoring for Math Problems
Math offers answers to all kinds of everyday problems as well as those that are highly specialized and sophisticated. From counting the number of candies you have in your pocket to making complicated calculations using new-age numerical analysis, Math is everywhere. Despite the common notion, Math is a very interesting and challenging subject. Besides the pure sciences and life sciences, Math techniques are used in virtually every subject, including the social sciences and careers such as law and business.
Transtutors.com offers you the best live help online for all kinds of Math problems for students at school, college and graduate levels. Our live Math tutors can help you solve the full range of problems and pick up mathematical concepts in a jiffy. From adding two and two to teaching differential equations and their applicability in the life sciences, our Math tutors are equipped to handle all your Math queries effectively.
Live Help and Live Online Tutor to help with Math problems
Math experts at Transtutors offer you live help for all your problems. They are accurate and precise, as well as sensitive to students of all age groups, grades, and nationalities. We offer
personalized one-on-one online tutoring that is tailored to suit the student's level of understanding. With a thorough grounding and experience in handling students of different types, live Math
tutors at Transtutors.com can easily adjust their way of communication and the level of the topic discussed to suit the age, understanding and learning style of a particular student.
Live online help for Math at Transtutors can be used as an online tutoring facility to cover the entire syllabus of a particular grade or college-level, or just as a one-time service to understand a
difficult-to-understand Math concept. Our Math tutors have a few tips and tricks up their sleeve to help you analyze a problem easily, learn all those Math formulas and solve problems easily.
Math Tutor Online to offer Live Help at Transtutors.com
Transtutors.com offers you exceptional online tutoring experience and service that will not only help students understand Math but will also help them to enjoy the subject. Math tutors at Transtutors
are not only well qualified but they also have an infectious enthusiasm for all Math problems, concepts, and challenges.
With the Internet and the latest technology at their disposal, our Math online tutors are fully equipped to help you acquire broad basic knowledge of Mathematics at school level (K to 12) for topics like fractions, decimals, integers, square roots, and introductory Algebra, Geometry, Trigonometry and Statistics. They have vast experience in academia and industry and are the best mentors to offer you a specialized understanding of advanced topics like Real Analysis, Number Theory, Algebraic Geometry and Stochastic Calculus.
Math Live help at Transtutors is a two-way process, where students can interact with the live Math tutors online and master more subtle aspects of Mathematics as well. Our Math experts and tutors
have been handpicked and have been trained well to cater to international audiences.
Live Math tutors at Transtutors can help you excel in your school and college level tests and examinations through live one-on-one interaction, easy-to-follow tutorials, and step-by-step detailed analysis of solutions to Math problems. Transtutors offers live help for Math problems at a very affordable price. Check the pricing section to know more about our rates.
more assignments » | {"url":"http://www.transtutors.com/live-online-tutor/math-help/","timestamp":"2014-04-16T16:07:45Z","content_type":null,"content_length":"77883","record_id":"<urn:uuid:68e9a688-f01e-4798-aa46-a6a03331b7db>","cc-path":"CC-MAIN-2014-15/segments/1398223202548.14/warc/CC-MAIN-20140423032002-00466-ip-10-147-4-33.ec2.internal.warc.gz"} |
When quoting this document, please refer to the following
DOI: 10.4230/LIPIcs.FSTTCS.2012.362
URN: urn:nbn:de:0030-drops-38739
URL: http://drops.dagstuhl.de/opus/volltexte/2012/3873/
Boker, Udi ; Henzinger, Thomas A.
Approximate Determinization of Quantitative Automata
Quantitative automata are nondeterministic finite automata with edge weights. They value a run by some function from the sequence of visited weights to the reals, and value a word by its minimal/
maximal run. They generalize boolean automata, and have gained much attention in recent years. Unfortunately, important automaton classes, such as sum, discounted-sum, and limit-average automata,
cannot be determinized. Yet, the quantitative setting provides the potential of approximate determinization. We define approximate determinization with respect to a distance function, and investigate
this potential. We show that sum automata cannot be determinized approximately with respect to any distance function. However, restricting to nonnegative weights allows for approximate
determinization with respect to some distance functions. Discounted-sum automata allow for approximate determinization, as the influence of a word's suffix is decaying. However, the naive approach,
of unfolding the automaton computations up to a sufficient level, is shown to be doubly exponential in the discount factor. We provide an alternative construction that is singly exponential in the
discount factor, in the precision, and in the number of states. We prove matching lower bounds, showing exponential dependency on each of these three parameters. Average and limit-average automata
are shown to prohibit approximate determinization with respect to any distance function, and this is the case even for two weights, 0 and 1.
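The "decaying suffix influence" that makes discounted-sum values approximable can be seen in a few lines. This is an illustrative sketch only; the function name, weights, and discount factor are invented here and are not taken from the paper:

```python
def discounted_sum(weights, factor):
    """Value of a run: weights[0] + factor*weights[1] + factor**2*weights[2] + ..."""
    total = 0.0
    for i, w in enumerate(weights):
        total += w * (factor ** i)
    return total

value = discounted_sum([1, 2, 3], 0.5)   # 1 + 1.0 + 0.75 = 2.75

# After reading k letters, any suffix can change the value by at most
# max_weight * factor**k / (1 - factor), a geometrically shrinking bound,
# which is what a deterministic approximation can exploit.
tail = 3 * 0.5 ** 10 / (1 - 0.5)
```

A (non-discounted) sum of weights has no such decay, which matches the abstract's claim that sum automata cannot be determinized approximately with respect to any distance function.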
BibTeX - Entry
author = {Udi Boker and Thomas A. Henzinger},
title = {{Approximate Determinization of Quantitative Automata}},
booktitle = {IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2012) },
pages = {362--373},
series = {Leibniz International Proceedings in Informatics (LIPIcs)},
ISBN = {978-3-939897-47-7},
ISSN = {1868-8969},
year = {2012},
volume = {18},
editor = {Deepak D'Souza and Telikepalli Kavitha and Jaikumar Radhakrishnan},
publisher = {Schloss Dagstuhl--Leibniz-Zentrum fuer Informatik},
address = {Dagstuhl, Germany},
URL = {http://drops.dagstuhl.de/opus/volltexte/2012/3873},
URN = {urn:nbn:de:0030-drops-38739},
doi = {http://dx.doi.org/10.4230/LIPIcs.FSTTCS.2012.362},
annote = {Keywords: Quantitative; Automata; Determinization; Approximation}
Keywords: Quantitative; Automata; Determinization; Approximation
Seminar: IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2012)
Issue Date: 2012
Date of publication: 10.12.2012
DROPS-Home | Fulltext Search | Imprint | {"url":"http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=3873","timestamp":"2014-04-20T08:51:56Z","content_type":null,"content_length":"6047","record_id":"<urn:uuid:a1f6b38d-3554-41a5-b2c8-bf6d3ceb10c9>","cc-path":"CC-MAIN-2014-15/segments/1397609538110.1/warc/CC-MAIN-20140416005218-00000-ip-10-147-4-33.ec2.internal.warc.gz"} |
Quantum mechanics without wavefunctions...
...that's the title of a recent communication in the Journal of Chemical Physics[1] (as communication it is free access to everyone):
The authors perform some rather clever looking algebraic transformations and arrive at a formulation of nonrelativistic spinless QM (without fermionic or bosonic symmetry constraints, though) which
looks a lot like a classical Hamilton-Jacobi formulation of mechanics with one additional quantum coordinate/momentum pair (additionally to the classical position/momentum coordinates) for each
particle. That's basically the hidden variable, as far as I understand.
I was wondering if any of you had an opinion on this approach, especially the experts on QM interpretations. I am myself not quite sure what to make of that. On the one hand it does look very clever
and leads to an entirely real formulation (i.e., no complex numbers) of QM without wave functions and with a clear action principle. On the other hand, it looks a bit like just replacing the
N-dimensional complex Schroedinger equation by a 2N-dimensional set of real differential equations. The chief advantage seems to be that the latter is easier to combine with classical approximations
and easier to integrate numerically. But is this also a novel way of looking at things? Is there more to it? (The authors seem to think so and make some rather grand claims normally not found in J.
Chem. Phys.). Any opinions welcome.
[1]: If you are concerned about the journal: J. Chem. Phys. is highly respected and the #1 journal for chemical physics and many parts of molecular physics---in particular for all kinds of numerical
quantum dynamics and quantum mechanics applied to real atomic and molecular systems. If the authors had actually found a reformulation of QM which is helpful for numerical computations, it would make
a lot
of sense to publish it there first, because that is the journal the people read who would likely use it. | {"url":"http://www.physicsforums.com/showthread.php?p=3761653","timestamp":"2014-04-18T00:25:05Z","content_type":null,"content_length":"66980","record_id":"<urn:uuid:9d629f04-2979-431f-a509-acc1d5b217bd>","cc-path":"CC-MAIN-2014-15/segments/1397609532374.24/warc/CC-MAIN-20140416005212-00229-ip-10-147-4-33.ec2.internal.warc.gz"} |
Fourier Series
Normal Modes and Fourier Series
If the (n-1) point masses on an n-segment string are given initial displacements of the form
where the integer k is known as the mode number, periodic motion results when the string is released from rest. The next example uses the Feynman algorithm to show the oscillation and find the
frequency for each of the normal modes for the case when n=8.
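The displacements referred to above take the standard normal-mode form for a loaded string; in the notation of this page (mass index j, mode number k, amplitude A) they can be written as:

```latex
y_j \;=\; A\,\sin\!\left(\frac{j k \pi}{n}\right),
\qquad j = 1,\dots,n-1,\quad k = 1,\dots,n-1 .
```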
As the mode number k increases, the frequency values increase, and the sine curve drawn through them is the analytic result obtained below. The normal modes, although interesting in their own right,
owe their importance to the fact that any displacement of the (n-1) point masses can be written as a superposition of the (n-1) normal modes. For an n-segment string clamped at both ends the
superposition is known as a Finite Fourier Sine Series (FFSS)
The Finite Fourier Transform (FFT) is defined by
The proof that the relations are valid involves only simple geometry in the complex plane. If we are dealing with an odd sequence of real numbers, i.e.
we need be concerned only with the first half of the sequence, and the FFT takes the form of the FFSS.
The FFSS is very useful for dealing with systems with "zero boundary conditions":
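Written out, the FFSS and the matching coefficient formula take the standard discrete sine-transform pair; normalization conventions vary, and the factor 2/n shown here is one common choice:

```latex
y_j \;=\; \sum_{k=1}^{n-1} b_k \,\sin\!\left(\frac{j k \pi}{n}\right),
\qquad
b_k \;=\; \frac{2}{n} \sum_{j=1}^{n-1} y_j \,\sin\!\left(\frac{j k \pi}{n}\right).
```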
To show that each term in the FFSS is a normal mode, we substitute into Newton's law, and use "twice sine half-sum cosine half-difference" to add sines:
The expression relating frequency to mode number is known as the dispersion relation, and is the curve plotted in the previous example. A dispersion relation contains the "physics" of a problem dealt
with by Fourier analysis. We will say more about this in what follows.
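The substitution can be sketched as follows, assuming masses m a distance a apart on a string under tension T (these symbols are not fixed by the text above, so they are chosen here):

```latex
m\,\ddot y_j \;=\; \frac{T}{a}\,\bigl(y_{j+1} - 2y_j + y_{j-1}\bigr),
\qquad
y_j(t) \;=\; \sin\!\Bigl(\frac{j k \pi}{n}\Bigr)\cos(\omega_k t),
\\[4pt]
\sin\!\Bigl(\frac{(j+1)k\pi}{n}\Bigr) + \sin\!\Bigl(\frac{(j-1)k\pi}{n}\Bigr)
\;=\; 2\,\sin\!\Bigl(\frac{j k \pi}{n}\Bigr)\cos\!\Bigl(\frac{k\pi}{n}\Bigr),
\\[4pt]
\omega_k^2 \;=\; \frac{2T}{m a}\Bigl(1 - \cos\frac{k\pi}{n}\Bigr)
\quad\Longrightarrow\quad
\omega_k \;=\; 2\sqrt{\frac{T}{m a}}\;\sin\!\Bigl(\frac{k\pi}{2n}\Bigr),
```

which gives the sine-shaped curve relating frequency to mode number.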
At this point we would like to show how the FFSS can be used to represent our pluck and pulse displacements. The next applet does this and shows the connection between the FFSS and the more familiar
Fourier Sine Series (FSS).
The FFSS fits a finite number of points exactly, whereas the FSS fits a continuous function over a specified range, but requires an infinite number of terms to do so exactly. As n increases, the FFSS
approaches the FSS. The applet shows the individual terms and their sum when we use a FFSS and a FSS with the same number of terms to represent the pluck and pulse displacements of our 8-segment string.
If the particles are at rest in the initial configuration (a "standing" pulse), the time dependence is generated by multiplying each term in the series by a cosine time dependence with the
appropriate normal-mode frequency. The main advantage of this approach over the Feynman algorithm is that the displacement at any future time can be displayed directly without stepping time in small
increments (this advantage is lost in simulations that require small time steps). The next applet treats the motion of the n-segment string with pluck and pulse initial displacements using the FFSS.
It offers the option of replacing the sinusoidal dispersion relation with a linear "harmonic" dispersion relation with the initial slope of the sine curve.
With the true dispersion relation the FFSS results agree with those from the Feynman algorithm. With the harmonic dispersion relation the FFSS results preserve the linear segments observed in the
applet in the introductory section. It is, therefore, not necessary to use a large value for n to model a continuous string: it is only necessary to modify the dispersion relation.
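The analysis-then-synthesis scheme described above can be sketched in a few lines. This is an illustrative sketch, not the applets' code: the function names are invented, and units are chosen so the frequency prefactor is 1:

```python
import math

def ffss_coefficients(y, n):
    """Sine-series coefficients b_k fitted to displacements y[0..n]
    (y[0] and y[n] are the clamped ends and must be zero)."""
    return [(2.0 / n) * sum(y[j] * math.sin(j * k * math.pi / n)
                            for j in range(1, n))
            for k in range(1, n)]

def displacement(b, n, t):
    """Superpose the normal modes, each mode k oscillating at the
    dispersion frequency 2*sin(k*pi/(2n)) (prefactor taken as 1)."""
    omega = [2.0 * math.sin(k * math.pi / (2 * n)) for k in range(1, n)]
    return [sum(b[k - 1] * math.sin(j * k * math.pi / n) * math.cos(omega[k - 1] * t)
                for k in range(1, n))
            for j in range(n + 1)]

n = 8
# Triangular "pluck": maximum displacement at the centre mass.
pluck = [min(j, n - j) / (n / 2) for j in range(n + 1)]
b = ffss_coefficients(pluck, n)
# At t = 0 the finite series reproduces the pluck exactly at every mass,
# and, as noted below for standing pulses, the even-k coefficients vanish.
```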
The FFSS can also be used to describe motion when the initial configuration is not at rest ("travelling" pulses). A second FFSS with a sine time-dependence is added with coefficients chosen to fit
the initial velocities. The next applet uses this approach to treat the two pulses whose motion was studied previously using the Feynman algorithm.
In terms of the dispersion relation, the velocity of the envelope of a series of wave crests is the slope of the dispersion curve at the mode number of the wave crests, and the higher the value of n,
the closer this is to the initial (harmonic) slope. The final applet plots the square of each FFSS coefficient versus mode number for the pulses used in the two previous applets (Pluck, Pulse, Pulse A, and
Pulse B), and for the quantum analog of Pulse B. (Quantum wave pulses are dealt with in the next section.) The calculation uses n=384, and the mode scale is plotted only up to 64. The vertical scale
is adjusted to make the area shown in red the same for all five pulses. (In the case of Pluck and Pulse, the first coefficient goes off scale by an amount that can be estimated using this fact.) The
two standing pulses, Pluck and Pulse, have only odd non-zero coefficients. The travelling classical pulses, Pulse A and Pulse B, have the initial velocity shown in green along with the displacement
in blue. The quantum pulse is complex. Its real part is identical to Pulse B, and its imaginary part is what is plotted in green. Both Pulse B and the quantum pulse have FFSS coefficient
distributions centered about mode 32, the mode that modulates the pulse envelope.
With n=48, the peak in the distribution for Pulse B is 2/3 of the way along the dispersion curve, so it is not surprising that the pulse is quickly obliterated. When n=384, the high-order FFSS
coefficients become vanishingly small, and the series need not extend over the full spectrum. The applet prints out the upper limit needed for the sum of the squares of the coefficients to reach 99.99%
of the total. | {"url":"http://www.kw.igs.net/~jackord/bp/n3.html","timestamp":"2014-04-20T23:26:43Z","content_type":null,"content_length":"7763","record_id":"<urn:uuid:403d8c94-34f6-492d-bc0d-2ac6dfb81aba>","cc-path":"CC-MAIN-2014-15/segments/1397609539337.22/warc/CC-MAIN-20140416005219-00345-ip-10-147-4-33.ec2.internal.warc.gz"} |
Summary: DELETING DIFFEOMORPHISMS WITH PRESCRIBED
Abstract. We show that, for every infinite-dimensional Banach space X with
a Schauder basis, the following are equivalent: (1) X has a Cp
smooth bump
function; (2) for every compact subset K and every open subset U of X with
K U, there exists a Cp
diffeomorphism h : X X \ K such that h is the
identity on X \ U.
A subset K of X is said to be topologically negligible provided there exists a home-
omorphism h : X X \ K. The homeomorphism h is usually required to be the
identity outside a given neighborhood U of K. Here X can be a Banach space, a
manifold, or just a topological space, but we will only consider the case when X is
an infinite-dimensional Banach space and h is a diffeomorphism (recall that points
are not topologically negligible in finite-dimensional spaces). Such h will be called
a deleting diffeomorphism, and we will say that h has its support on U.
Deleting diffeomorphisms are very powerful tools in infinite-dimensional global
analysis and nonlinear analysis. We do not intend to make a history of the development of topological negligibility and its applications, and we refer the reader to the
Factoring response time equation
August 8th 2007, 02:39 PM #1
Aug 2007
Factoring response time equation
Remember the calculation of response time in the lecture?
R = SQ + S    (2)
Based on one Queuing Theorem:
Q = a * R    (3)
Manipulating equations (2) and (3) using the Factoring method, we obtain:
R = S / (1 - aS)    (4)
Show how you manipulate the two equations (2) and (3) in order to get (4).
The lecture notes remind us that Q = a(SQ + S), if that helps anyone out there. Thanks so much!
Last edited by Heather; August 8th 2007 at 03:30 PM. Reason: clarification and additional info
I doubt most of us are taking this class with you!
Can you give us some background in regard to what you are talking about?
Most of what we are studying right now is factoring and finding GCF. Somehow with this problem you are supposed to be doing factoring, but the only way i see it possible is by starting with
Oh okay, from the way you phrased the question I thought there was more to it.
So we have:
$R = SQ + S$
$Q = aR$
So put the bottom equation into the top:
$R = S(aR) + S = aSR + S$
Now we want to solve for R:
$R - aSR = S$
There is a common R to both terms on the left, so factor it:
$(1 - aS)R = S$
Now divide:
$R = \frac{S}{1 - aS}$
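The derivation above is easy to verify mechanically. Here is a small check using sympy (my own addition; the thread itself is tool-free):

```python
import sympy as sp

R, S, a = sp.symbols('R S a', positive=True)

# Equation (2) with Q = a*R from equation (3) substituted in: R = S*(a*R) + S
sol = sp.solve(sp.Eq(R, S * (a * R) + S), R)[0]

# Confirm the result matches equation (4), R = S / (1 - a*S)
print(sp.simplify(sol - S / (1 - a * S)) == 0)  # True
```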
Thank you soooooo much this has been driving me crazy! But now i feel dumb because this was very simple
Complete Orthogonal Factorization
Next: Other Factorizations Up: Orthogonal Factorizations and Linear Previous: QR Factorization with Column
The QR factorization with column pivoting does not enable us to compute a minimum norm solution to a rank-deficient linear least squares problem unless the off-diagonal block R12 of the triangular factor is zero; R12 can, however, be eliminated by applying further orthogonal (or unitary) transformations from the right to the upper trapezoidal matrix [R11 R12].
This gives the complete orthogonal factorization
from which the minimum norm solution can be obtained as
The matrix Z is not formed explicitly but is represented as a product of elementary reflectors, as described in section 3.4. Users need not be aware of the details of this representation, because
associated routines are provided to work with Z: PxORMRZ (or PxUNMRZ) can pre- or post-multiply a given matrix by Z or its (conjugate) transpose.
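In practice these routines are usually reached through a driver: LAPACK's xGELSY, for example, applies a complete orthogonal factorization to return the minimum norm least squares solution. A sketch through SciPy (a usage assumption on my part, since this page documents ScaLAPACK itself):

```python
import numpy as np
from scipy.linalg import lstsq

# A rank-1 matrix: infinitely many x satisfy Ax = b;
# the 'gelsy' driver returns the minimum norm solution.
A = np.array([[1.0, 1.0],
              [2.0, 2.0]])
b = np.array([2.0, 4.0])

x, _, rank, _ = lstsq(A, b, lapack_driver='gelsy')
print(rank)  # 1
print(x)     # [1. 1.] -- the smallest-norm solution of x1 + x2 = 2
```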
Susan Blackford
Tue May 13 09:21:01 EDT 1997
PharmPK Discussion
• On 16 Aug 2007 at 16:28:50, "Wichittra Tassaneeyakul" (wichittra.tassaneeyakul.-at-.gmail.com) sent the message
Dear All,
I would like to hear comments about how we should calculate the AUC
for a bioequivalence study.
If the drug concentrations at 72 and 96 hr post dose for the Test
formulation are below the detection limit, but those at 72 and 96 hr for
the Reference formulation are still above the detection limit, should we
calculate the AUC of the Reference up to 96 hr, or do we need to ignore
the Reference data at 72 and 96 hr as we did for the Test formulation?
Best Regards,
Back to the Top
• On 16 Aug 2007 at 12:34:56, "Robert Lepage" (rlepage.at.pharmamedica.com) sent the message
The following message was posted to: PharmPK
If the measured concentrations for the test product are BLOQ and if the
test and reference formulations are similar then the values you obtain
for the reference product at 72 and 96 hours should be quite small. In
this case, they will not contribute significantly to the overall AUC and
not significantly affect the estimation of bioequivalence.
I believe that all regulatory authorities would require that you include
the measured data in your calculations. This could be fundamental in
the assessment of bioequivalence. Remember, the elimination rate (or
t1/2) of a drug substance from the body should be the same, regardless
of the formulation. If you notice large differences in the elimination
(e.g. different t1/2 causing measurable levels in one product) then it
is likely that the drug product is different. If the formulation is
significantly different, then the bioequivalent results should reflect
that difference.
Robert Lepage, M.Sc., CCRP
Manager, Biopharmaceutics & Assistant Study Director
Pharma Medica Research Inc.
E-mail rlepage.-at-.pharmamedica.com
Back to the Top
• On 16 Aug 2007 at 10:56:51, "Toufigh Gordi" (tgordi.-a-.Depomedinc.com) sent the message
The following message was posted to: PharmPK
Hi Wichittra,
The aim with bioequivalence studies (as it is now) is to test whether the
test and the reference product (tablet, capsule,...) result in similar
maximum concentrations and total exposure of the drug. You need to
calculate the total AUC, i.e. AUC 0-infinity, for both formulations and
compare them. Obviously, you cannot ignore data. If the equivalency test
fails because your test formulation shows consistently higher
concentrations toward the end, resulting in higher AUC0-infinity values,
you simply need to go back to the laboratory and change the formulation.
Back to the Top
• On 17 Aug 2007 at 09:40:37, "paresh" (paresh.aaa.accutestindia.com) sent the message
The following message was posted to: PharmPK
dear wichittra,
the purpose of establishing BE is based on the results you get in the
experiment/study.
you cannot ignore the result at the last time point, and don't worry
about the result produced: a value below the detection limit should be
treated as zero, as per your in-house procedure.
regarding the reference formulation you mentioned as above the detection
limit, i can't understand the term as you mentioned it.
before that, you can go for a PK repeat if your procedure at lab scale
allows it, or consider any procedure you have for statistical outliers.
but as per my understanding, the last time points at 72 and 96 hr don't
make any difference to your result.
for AUC 0 to infinity i share gordi's view.
hope this is close to your concern;
experts can comment on this issue.
Back to the Top
• On 19 Aug 2007 at 11:00:09, "Wichittra Tassaneeyakul" (wichittra.tassaneeyakul.-a-.gmail.com) sent the message
Dear All,
Thank you for the comments on my concern about AUC calculation. I agree with
you (Robert and Paresh) that AUC from 0-inf won't make any
difference and should not affect the estimation of bioequivalence.
What I am concerned about is the correct way to calculate AUC0-t
if the t-last that gives a concentration above BLOQ differs between
reference and test.
For example, suppose that for one subject the Test formulation
concentrations from 0 to 96 hr are all above BLOQ, but for the
Reference formulation only the concentrations from 0 to 48 hr are above
BLOQ. In this case, is it correct to calculate AUC0-t for the Reference
using data only from 0 to 48 hr, and for the Test from 0 to 96 hr?
Someone suggested to me that we need to use data from 0 to 48 hr to
calculate AUC from 0 to 48 hr for both Reference and Test. In my
opinion, I don't understand why we have to ignore the data from 48-96
hr of the Test formulation.
Thanks for sharing your knowledge with me. I really like this PharmPK
Best Regards,
Assoc. Prof.Dr. Wichittra Tassaneeyakul
Department of Pharmacology
Faculty of Medicine
Khon Kaen University
Khon Kaen 40002
Back to the Top
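For readers following along, the quantities being debated are straightforward to compute. Here is a minimal sketch of AUC0-t by the linear trapezoidal rule, with AUC0-inf extrapolated as Clast/kel; the concentration profiles are made up for illustration:

```python
import numpy as np

def auc_0_t(t, c):
    """Linear trapezoidal AUC from time zero to the last quantifiable point."""
    t = np.asarray(t, dtype=float)
    c = np.asarray(c, dtype=float)
    return float(np.sum(np.diff(t) * (c[:-1] + c[1:]) / 2.0))

def auc_0_inf(t, c, kel):
    """AUC0-t plus the standard tail extrapolation Clast / kel."""
    return auc_0_t(t, c) + float(c[-1]) / kel

# Hypothetical profiles: Test quantifiable to 96 h, Reference only to 48 h
t_test = [0, 12, 24, 48, 72, 96]; c_test = [0, 10, 8, 4, 2, 1]
t_ref = [0, 12, 24, 48]; c_ref = [0, 11, 8, 3]

print(auc_0_t(t_test, c_test))  # 420.0 -- each formulation uses its own t-last
print(auc_0_t(t_ref, c_ref))    # 312.0
```

Under the definition quoted later in the thread, each formulation's AUC0-t runs to its own last measurable time point, as in the two calls above.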
• On 19 Aug 2007 at 00:03:22, "Rhishikesh Mandke" (mandke.rhishikesh.aaa.gmail.com) sent the message
The following message was posted to: PharmPK
Dear Dr. Wichittra,
FDA guidelines state that in Area under the plasma/serum/blood
concentration-time curve from time zero to time t (AUC0-t), t is the
last time point with a measurable concentration for the INDIVIDUAL formulation.
I guess, this definition becomes self-explanatory in your case.
Thanks and best regards,
Back to the Top
• On 19 Aug 2007 at 09:28:45, Yegnaraman (ryraman2661.-at-.yahoo.co.in) sent the message
Dear Prof .Wichittra,
In your example, for the reference formulation after 48 hrs, it is
below BLOQ, whereas for the test formulation, it goes below only
after 96 hrs. This shows there is a difference in the value of K el,
T 1/2, and maybe in C max.
For bioequivalence studies we have to compare the AUC (0 to t) for Test
and Reference taking the last time t. Thus we have to compare both
Test and Reference formulations taking the time as 96 hrs.
Then with the K el and the last quantifiable concentration (of course
above BLOQ), we have to extrapolate and calculate AUC (0 to inf) for
Test and Ref, and then a comparison has to be made to ascertain whether
it is bioequivalent.
Hence I feel, we have to include 96 hrs for both test and reference
for the AUC (0 to t).
Azidus Laboratories limited,
Chennai-600 048.
Back to the Top
MathGroup Archive: June 2012 [00361]
Re: Can someone read these line "aloud"
• To: mathgroup at smc.vnet.net
• Subject: [mg127052] Re: Can someone read these line "aloud"
• From: Richard Fateman <fateman at cs.berkeley.edu>
• Date: Wed, 27 Jun 2012 04:09:05 -0400 (EDT)
• Delivered-to: l-mathgroup@mail-archive0.wolfram.com
• References: <jsbt1n$6on$1@smc.vnet.net>
On 6/26/2012 1:48 AM, McHale, Paul wrote:
> Can anyone help me with proper Wolfram "lingo"?
> If you were reading these aloud to someone, what words would you use?
> For instance. I have heard "/; " or "/." is pronounced "given that".
> Not sure which. Is there any list of the verbal equivalents?
Well, which is it? I think neither reflects the proper meaning
of rule application etc. So my answer is: neither .
> list /. x_ :> SuperStar[x] /; x > 9
ReplaceAll of list and RuleDelayed of Pattern of x blank and
Condition of SuperStar of x when Greater of x and 9.
This is yet another reason to use FullForm, from which
the above is a trivial modification. You can have a program
speak this out loud in some browsers equipped suitably.
see http://www.w3.org/TR/voice-tts-reqs/
or see tts (text to speech) add-ons for implementation.
I think you need to decide how to vocalize the mathematical expression
f(x,y) or Mathematica f[x,y].
It could be f of x and y or for special forms you might
use different filler words.
Thus Table[q,{i,a,b}] would by default be
Table of q and list i and a and b, but could be improved as, say,
Table of q using iteration for i from a to b.
If you want to speak
you need to be able to distinguish
Times of a and Plus of b and c and d
Times of a and Plus of b and c and d
perhaps by
Times of a and Plus of b and c EndPlus and d.
You could also try to improve some of the other vocalizations from the
default, even going so far as to vocalize Times[a,b]
as "a times b".
and a*(b+c)*d as a times quantity b plus c end times d.
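The FullForm-style reading is mechanical enough to sketch in a few lines. Here is a toy Python version; the nested-tuple encoding stands in for Mathematica's FullForm, and the End<Head> markers implement the disambiguation suggested above (both are my assumptions, not part of any real TTS API):

```python
def speak(expr):
    """Read a FullForm-like expression tree aloud: ('Plus', 'b', 'c')
    becomes 'Plus of b and c EndPlus'."""
    if isinstance(expr, str):
        return expr
    head, *args = expr
    spoken = " and ".join(speak(a) for a in args)
    # The End<Head> marker resolves the ambiguity discussed above:
    # without it, Times[a, Plus[b,c], d] and Times[a, Plus[b,c,d]]
    # would be vocalized identically.
    return f"{head} of {spoken} End{head}"

print(speak(("Times", "a", ("Plus", "b", "c"), "d")))
# Times of a and Plus of b and c EndPlus and d EndTimes
```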
You may be disappointed that I haven't told you how to pronounce /; and
:>, but I think that kind of pales in comparison to the fact that you
can't unambiguously speak expressions with ordinary arithmetic without
some help.
There is a modest literature on TTS for math, starting from the
Aster program.
I think that the default TTS program for Mathematica, assuming a
speech API is available, would be less than 1/3 a page of code.
An alternative is of course to spell everything out like
ay asterisk open paren bee plus see plus dee close paren slash dot x
blank right arrow blah blah.
There are worse alternatives like using MathML, OpenMath, ...
Chunks of Uncertainty
Theories of the Universe
Universal Constants
Angular momentum can be thought of as momentum moving in a circle. It's what keeps a ball moving on the end of a rope as you spin it over your head. In classical physics it is defined by the mass and
the speed at which it is spinning. In quantum mechanics, the angular momentum, like everything else, is quantized.
All quantum physicists will agree that they can't explain quantum theory and that it doesn't make sense. So why would they use something to define the fundamental structure of the universe if they
don't understand what it is? Well, physicists who work in the field use the straightforward formulas handed down to them by the brilliant minds that developed them, but often don't understand why
they work or even what they mean. But what is known about it has produced remarkably accurate results. And that's why it's used. Some even feel that quantum mechanics offers some of the most accurate
numerical predictions that science has ever developed. And until something else comes along, the mathematical system used to define the quantum world is all that is available. For now, let's pick up
where we left off in the last section and follow the unfolding of quantum cosmology.
Waves of Matter
One of the reasons de Broglie wanted to develop a mechanical model was to show that the theories developed so far could be verified by experimental means. Up until then, the Rutherford-Bohr atomic
model was just theory; no one really knew whether the atoms looked like that. If he could show that his predictions could be confirmed by experiment, it would solidify the new ground on which quantum
theory was developing.
Neils Bohr came up with a description of how light was radiated from inside an atom. He also has shown why an atom was stable. His explanation of what the structure of the atom was like was close to
but not quite like the one envisioned by Rutherford that we touched upon earlier. Remember that he theorized that the atom's structure was very much like a planetary system with the electrons
orbiting the nucleus the way planets orbit the Sun. Bohr adopted this basic configuration, but couldn't imagine the electrons orbiting the nucleus as some cosmic cloud that was indefinable. So he had
arranged the orbiting electrons into layers or shells. Have you seen Russian nesting dolls or Chinese stacking boxes? In those you have one complete doll or box contained within another. Every time
you open up one there's a smaller one inside. The shells Bohr described were just like that, except that each shell had a specific number of electrons.
Bohr was able to calculate mathematically the diameter of each electron orbit along with the maximum number of electrons in each shell. The angular momentum of the electron in orbit was counteracted
by the attraction of the nucleus. In other words, since unlike electrical charges attract each other, the positive charge inside the nucleus attracted the negative charge of the electron. This theory
explained the structure of the atom and why it remained stable.
A French Prince Discovers Matter Waves
Universal Constants
When you create a standing wave with a jump rope, each point on the rope where there is no movement, a resting place, is called a node. It's the point at the end of each wave and also the point
between the waves that are moving up and down. For example, there are two nodes on the lowest frequency standing wave (a half wave), the two endpoints of the rope. The next higher frequency, (a whole
wave) has three nodes, the ones on each end and a third in the middle, which is the point that separates the crest from the trough. The next higher frequency (1 1/2 waves) has four nodes, the two end
points and two in the middle and so on. Get the idea? As the number of nodes on the rope increases, the frequency of the standing wave increases. If this were a vibrating violin string, the pitch
would also increase.
In 1923, a graduate student at the Sorbonne in Paris, Prince Louis de Broglie (1892-1987), introduced the remarkable idea that particles may exhibit wave properties. He had been strongly influenced
by Einstein's arguments that light had a dual nature. He was also deeply impressed by Einstein's particles of light that could cause the photoelectric effect (knock electrons out of metal) while also
producing the interference patterns caused by waves as in the double slit experiment. He proposed one of the great unifying principles in quantum physics. He was convinced that the wave/particle
duality discovered by Einstein in his theory of light quanta was a general principle that extended to all forms of matter. In other words, the propagation of a wave is associated with the motion of a
particle of any kind—photon, electron, proton, or any other.
De Broglie wished to develop a mechanical explanation for the wave/particle duality of light and to extend this to all forms of matter. He needed to find a mechanical reason for the photons in the
wave to have an energy that was determined by the frequency of that wave.
De Broglie noticed a connection between the angular momentum of the electron in a Bohr orbit and the number of nodes in a standing wave pattern (remember you created those with the jump rope in the
last section). The orbiting electrons could only have one unit of h (Planck's constant) or two units, etc. Could these discontinuous changes in the electron's angular momentum, these changes in the
amount of h allowed, be due somehow to a similar change in standing wave patterns?
De Broglie realized that the Bohr orbit could be seen as a circular violin string, like a snake swallowing its own tail. Would the orbit size predicted by his standing matter waves correspond to
Bohr's calculated electron shells? What would his wave do if confined to a circle? Well, what he discovered was that his matter waves fit Bohr's orbits exactly. And when he calculated the wavelength
of the lowest orbit, he discovered another astonishing mathematical connection between the wave and the particle. The momentum of the orbiting electron equaled Planck's constant divided by the wavelength of the wave.
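De Broglie's fit can be stated in two lines (a standard reconstruction, not taken from the excerpt itself). Requiring a whole number of wavelengths to wrap around a circular orbit of radius r, together with his relation between wavelength and momentum, gives:

```latex
n\lambda = 2\pi r, \qquad \lambda = \frac{h}{p}
\;\Longrightarrow\;
L = p\,r = \frac{h}{\lambda}\cdot\frac{n\lambda}{2\pi} = n\,\frac{h}{2\pi} = n\hbar
```

so each allowed orbit carries one more unit of h/2π of angular momentum than the last, which is exactly Bohr's quantization rule.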
Excerpted from The Complete Idiot's Guide to Theories of the Universe © 2001 by Gary F. Moring. All rights reserved including the right of reproduction in whole or in part in any form. Used by
arrangement with Alpha Books, a member of Penguin Group (USA) Inc.
Math Forum - Problems Library - Math Fundamentals, Number Sense: Large and Small Numbers
Large and Small Numbers
These problems encourage students to develop their number sense about large and small numbers.
Related Resources
Interactive resources from our Math Tools project:
Math 4: Number Sense
The closest match in our Ask Dr. Math archives:
Elementary Number Sense/About Numbers
Elementary Large Numbers
NCTM Standards:
Number and Operations Standard for Grades 3-5
Access to these problems requires a Membership.
Taylor's Theorem with Remainder
May 2nd 2009, 02:33 PM #1
Oct 2007
Taylor's Theorem with Remainder
But, I'm not understanding the Lagrange Remainder.
$R_n(x) = \frac{f^{(n+1)}(c)}{(n+1)!}\,x^{n+1}$ for some $c$ between $0$ and $x$.
Here's a problem:
"Use the Lagrange form of the remainder to prove that the Maclaurin series converges to the generating function from the given function"
$xe^x$
I got the Maclaurin series of that function.
$xe^x = x + x^2 + \frac{x^3}{2!} + \cdots$
The general term is $\frac{x^{n+1}}{n!}$, with the proper sum notation.
What I don't get is how to put this into the equation at the top. I might be able to correctly put a few things in the formula, such as the function, but I don't know how to solve or finish it.
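For reference, here is a sketch of the argument the exercise is asking for, using the Lagrange remainder for $f(x) = xe^x$; the derivative formula follows by induction from the product rule:

```latex
f^{(n+1)}(x) = (x + n + 1)e^{x}
\quad\Longrightarrow\quad
|R_n(x)| = \left|\frac{f^{(n+1)}(c)}{(n+1)!}\,x^{n+1}\right|
\le \frac{(|x| + n + 1)\,e^{|x|}\,|x|^{n+1}}{(n+1)!} \longrightarrow 0
```

as $n \to \infty$ for every fixed $x$, since the factorial outgrows the numerator; hence the Maclaurin series converges to $xe^x$ on all of $\mathbb{R}$.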
Calculate sites number from sectors numbers [Archive] - Wire Free Alliance
View Full Version : Calculate sites number from sectors numbers
If you are working with the Siemens vendor, you can get it by adding up the number of BTSMs per BSC
What about the cell IDs: do you use sequential IDs, like for site ABC the cell IDs are 86521, 86522, 86523, 86524, 86525, 86526? If so, you can still use the LEFT and LEN functions to get the
number of sites.....
another idea: filter your file to only the cells with BTS number equal to zero (the first cell of each site normally has BTS number = 0), then apply your coordinate filter.
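Both suggestions reduce to deduplicating a key derived from each cell. A quick sketch in Python; the assumption that the last digit of the cell ID is the sector index comes from the example IDs above and may not hold for every numbering plan:

```python
# Count distinct sites from a flat list of cell IDs, assuming the
# sequential scheme above: cell ID = site ID * 10 + sector number.
cell_ids = [86521, 86522, 86523, 86524, 86525, 86526,
            90311, 90312, 90313]

sites = {cid // 10 for cid in cell_ids}  # strip the sector digit
print(len(sites))  # 2
```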
tell me the type of CIs you are using in your company and i can help you...
Girls, boys, and math.
You’ve probably already heard the news last week that a study published in Science indicates that the gender gap between girls and boys in mathematical performance may be melting faster than the
polar ice caps. The study, “Gender Similarities Characterize Math Performance” by Janet S. Hyde et al., appears in the July 25, 2008 issue of Science (behind a paywall). [1]
Hyde et al. revisit results of a meta-analysis published in 1990 (J. S. Hyde, E. Fennema, S. Lamon, Psychol. Bull. 107, 139 (1990).) that found negligible gender differences in math ability in the
general population but significant differences (favoring boys) in complex problem solving that appeared during the high school years. This 1990 meta-analysis drew on data collected in the 1970s and
1980s. The present study asked whether more recent data would support the same findings.
In previous decades, girls took fewer advanced math and science courses in high school than boys did, and girls’ deficit in course taking was one of the major explanations for superior male
performance on standardized tests in high school. By 2000, high school girls were taking calculus at the same rate as boys, although they still lagged behind boys in the number of them taking
physics. Today, women earn 48% of the undergraduate degrees in mathematics, although gender gaps in physics and engineering remain large.
The researchers used as their data results from the state assessment tests mandated by No Child Left Behind given to students in grades 2 through 11. Specifically, they drew on results from
California, Connecticut, Indiana, Kentucky, Minnesota, Missouri, New Jersey, New Mexico, West Virginia, and Wyoming (since these states were able to break down the results by grade level, gender, and
ethnicity), a pool of more than 7 million students.
In this data, Hyde et al. found no evidence of difference in performance on the mathematical assessments between boys and girls on average.
Effect sizes for gender differences, representing the testing of over 7 million students in state assessments, are uniformly <0.10, representing trivial differences. Of these effect sizes, 21
were positive, indicating better performance by males; 36 were negative, indicating better performance by females; and 9 were exactly 0. From this distribution of effect sizes, we calculate that
the weighted mean is 0.0065, consistent with no gender difference.
Even if there is no gender difference on average, it has long been claimed that males show more variance in mathematical abilities — in other words, that more boys will be found far below and far
above the average, while girls will be more tightly clustered around the average. This claim of greater variance is sometimes invoked to explain gender disparities in science and engineering careers,
where presumably you need people at the very top of the range of intellectual abilities, and there just happen to be more males that fit the bill.
So Hyde et al. examined the variance in their data:
The variance ratio (VR), the ratio of the male variance to the female variance, assesses these differences. Greater male variance is indicated by VR > 1.0. All VRs, by state and grade, are >1.0
[range 1.11 to 1.21]. Thus, our analyses show greater male variability, although the discrepancy in variances is not large.
At the top of the range on the test results (in data from Minnesota 11th graders), what kind of gender differences do the data show?
For whites, the ratios of boys:girls scoring above the 95th percentile and 99th percentile are 1.45 and 2.06, respectively, and are similar to predictions from theoretical models. For Asian
Americans, ratios are 1.09 and 0.91, respectively.
If these differences in the top echelons of math ability are being used to explain the gender imbalances in science, engineering, and mathematics as courses of university study or career paths, do
these numbers support the explanation?
If a particular specialty required mathematical skills at the 99th percentile, and the gender ratio is 2.0, we would expect 67% men in the occupation and 33% women. Yet today, for example, Ph.D.
programs in engineering average only about 15% women.
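The arithmetic in that quote is worth making explicit. Under the simplifying (and certainly contestable) assumptions of normally distributed scores with equal means and the variance ratio reported in the study, the tail ratio and the implied occupational share can be sketched as follows; note that the equal-means assumption understates the observed ratios, and the point is only that variance alone yields a modest skew, far short of explaining a 15% female share:

```python
from scipy.stats import norm

VR = 1.16            # male:female variance ratio reported in the study
sigma_m = VR ** 0.5  # male SD, with the female SD normalized to 1

# Cutoff at the female distribution's 99th percentile, equal means assumed
z = norm.ppf(0.99)
tail_ratio = norm.sf(z / sigma_m) / norm.sf(z)
print(round(tail_ratio, 2))  # about 1.5: variance alone gives a modest skew

# If a field drew only from above a cutoff with a 2:1 male:female ratio,
# the expected male share would be ratio / (ratio + 1):
ratio = 2.0
print(ratio / (ratio + 1))
```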
Not to mention, we would expect more Asian American women than Asian American men in these engineering programs if these educational choices simply followed ability. So at the very least, greater
variance among males is not sufficient to explain the current gender breakdown in math, science, and engineering. There must be other factors at work.
Earlier studies had reported gender differences in complex problem solving, favoring boys. (They also reported gender differences in computation and grasp of concepts, favoring girls, although these
differences never seemed to play much of a role in the coverage of these studies in the popular press.) It seems at least possible that some of this was connected to coursework — if you’re in a
course that introduces you to complex problems and strategies for solving them, you might then be more likely to perform better in complex problem solving tasks.
Hyde et al. tried to see what their more recent data indicated about complex problem solving and gender:
[W]e coded test items from all states where tests were available, using a four-level depth of knowledge framework. Level 1 (recall) includes recall of facts and performing simple algorithms.
Level 2 (skill/concept) items require students to make decisions about how to approach a problem and typically ask students to estimate or compare information. Level 3 (strategic thinking)
includes complex cognitive demands that require students to reason, plan, and use evidence. Level 4 (extended thinking) items require complex reasoning over an extended period of time and require
students to connect ideas within or across content areas as they develop one among alternate approaches. We computed the percentage of items at levels 3 or 4 for each state for each grade, as an
index of the extent to which the test tapped complex problem-solving. The results were disappointing. For most states and most grade levels, none of the items were at levels 3 or 4. Therefore, it
was impossible to determine whether there was a gender difference in performance at levels 3 and 4.
Here’s where we get into the debate about whether NCLB’s linkage of test results and school funding has been a positive or pernicious influence on test design, what our kids are learning, etc. Let me
just reiterate what teachers should already know: the assessment tells the students what they really have to learn. If you think a particular competency or bit of knowledge is important but you don’t
assess it, your students are smart enough not to waste their effort absorbing it.
I suppose this means that even if you’re not trying to teach to the test, to the extent that the students have reasonable information about what the test items will be like and what they will cover,
the students are going to “learn to the test”.
Also, to the extent that complex problem solving is a competence that it would be good for students to develop prior to starting a college level program in mathematics, science, or engineering, if
the NCLB tests are the assessments driving secondary math education, we may be in trouble. I'm hopeful that there are still old school teachers giving weekly quizzes focused on complex problem
solving, and still significant populations of kids preparing for Advanced Placement exams or math team competitions. But it sure would be nice to raise the bar to a level including complex problem
solving — and figure out how to get more kids over that bar — rather than acquiescing to high stakes tests that apparently view complex problem solving as a frill.
So, in summary, when girls are taking the same math coursework as boys, the gender differences in mathematical performance seem to shrink to insignificance. And, NCLB assessments may be dumbing down
math for everyone.
What impact that will have on careers in math, science, and engineering remains to be seen.
[1] Janet S. Hyde, Sara M. Lindberg, Marcia C. Linn, Amy B. Ellis, Caroline C. Williams, “Gender Similarities Characterize Math Performance.” Science 25 July 2008:Vol. 321. no. 5888, pp. 494 – 495.
DOI: 10.1126/science.1160364
1. #1 Super Sally July 30, 2008
Oooooooooooooooooooooh, I’d love to do C. Pine’s item analysis on these tests with over half a million students to determine what the upper level math test items are testing in the various states.
When I was last doing that item analysis on NJ Basic Skills tests given to students entering all levels of state funded colleges (and some private ones) I was working with 10k students a year
(over 8 years). I don’t know that any of our M/F analysis was ever published, but even back then when we controlled for HS math taken, girls were performing even with or better than boys.
Actually, in the earlier years (late ’70s) girls who took the same higher level courses performed better than boys, and by the mid-’80s that was down to even with boys, as more girls were taking
upper level HS math. Note that the NJBS tests of that era only covered math through the 8th grade curriculum and the full range of 1st year algebra. Like the tests under discussion we did not
have extended complex tasks. It was interesting that we did see significant correlation between high performance on the more difficult equation solving items and high scores on the essay portion
of the tests.
Back in the mid-’80s when I left that work to ETS, the analysis was running on main-frames. It could probably be done nicely on a PC now, even for student numbers like these.
Maybe we have a retirement project here…Some of us are just data junkies, particularly with data on a topic about which we are passionate.
2. #2 Academic July 30, 2008
Nice synopsis. I blogged about this study yesterday as well. Naysayers are already coming out of the woodwork.
3. #3 Tony Jeremiah July 30, 2008
1. negligible gender differences in math ability in the general population but significant differences (favoring boys) in complex problem solving that appeared during the high school years.
2. We computed the percentage of items at levels 3 or 4 for each state for each grade, as an index of the extent to which the test tapped complex problem-solving. The results were disappointing.
For most states and most grade levels, none of the items were at levels 3 or 4.
3. when girls are taking the same math coursework as boys, the gender differences in mathematical performance seem to shrink to insignificance. And, NCLB assessments may be dumbing down math for everyone.
Taken together, these statements seem to suggest that the observed attenuation of gender differences at the secondary level may be due to tests at the secondary level having a low
item-discrimination index (d).
d is defined as the difference in the proportion of high scorers answering particular items on a test correctly, and the proportion of low scorers answering the same questions correctly. For
example, assuming math questions have some type of gender-bias, and a typical math question on a grade 12 exam asks something like, 1+1=?; that question will have a low d since presumably, any person making it to grade 12 math should have no difficulty answering that question. However, if a typical question is something like, 1+Number of times the Yankees have won the World Series=?,
the d should be more substantial (assuming males have a greater fascination with sports trivia than females).
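A quick sketch of how d is computed under this definition (my own illustration; the item scores below are made up, not from any actual test):

```python
# Sketch of the item-discrimination index d defined above: the proportion
# of high scorers answering an item correctly minus the proportion of low
# scorers answering it correctly. All scores here are invented.

def discrimination_index(high_scorers, low_scorers):
    """high_scorers/low_scorers: 0/1 item scores (1 = answered correctly)."""
    p_high = sum(high_scorers) / len(high_scorers)
    p_low = sum(low_scorers) / len(low_scorers)
    return p_high - p_low

# An easy item like "1+1=?": nearly everyone gets it, so d is near zero.
easy_d = discrimination_index([1, 1, 1, 1, 1], [1, 1, 1, 1, 0])
# An item only strong students get right: d is substantially larger.
hard_d = discrimination_index([1, 1, 1, 1, 0], [1, 0, 0, 0, 0])
```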
The above example is probably a ridiculous exaggeration, but is meant as a concrete analogy to statement #2, which suggests that at the secondary level, tests across various states appear to be
assessing lower level (e.g., recall) and not upper level (e.g., problem-solving) cognitive skills. If a gender-difference exists in problem-solving (due to genetic or likely environmental
factors; see Dave Munger’s Why aren’t there more women in math and science for a more detailed commentary), and, that is really the primary gender difference, it won’t be detected by secondary
school math tests since they appear to have a low d as it concerns math-related, problem solving skills.
If this goes undetected at the secondary level, it may show up at the post-secondary level (which appears to be the focus of Dave’s posting on this issue) as a gender difference, especially if
tests at the post-secondary level are primarily focused on assessing problem-solving ability.
This ultimately holds some great significance for your last comment: What impact that will have on careers in math, science, and engineering remains to be seen.
4. #4 Matt Brodhead July 31, 2008
I would imagine that behavioral history is the main factor, not gender as the significant variable. However, the Onion ran a fantastic article about the subject: http://www.theonion.com/content/
5. #5 Tony Jeremiah July 31, 2008
I would imagine that behavioral history is the main factor, not gender as the significant variable. However, the Onion ran a fantastic article about the subject:
Gender and behavioral history are probably confounded/intertwined though, especially when one considers gender schema theory in conjunction with reciprocal determinism.
6. #6 Dylab July 31, 2008
Super Sally: I had a thought on controlling for classes taken. Is it possible that the change in the statistics is a result of self-selection rather than an effect from the classes? If a smaller group of girls decide to take increased math coursework, it doesn't seem particularly implausible to me that they would come from a higher end of the intelligence spectrum of the same female cohort relative to the males who decide to take extra course work.
I’m curious what male/female differences would be where math courses are less optional.
7. #7 Alice July 31, 2008
Nice post, Janet. Thanks for blogging about it. | {"url":"http://scienceblogs.com/ethicsandscience/2008/07/30/girls-boys-and-math/","timestamp":"2014-04-16T11:19:05Z","content_type":null,"content_length":"71570","record_id":"<urn:uuid:a18c5926-64c9-4e06-ae08-49d64431868e>","cc-path":"CC-MAIN-2014-15/segments/1397609523265.25/warc/CC-MAIN-20140416005203-00216-ip-10-147-4-33.ec2.internal.warc.gz"} |
This web page summarizes SUTRA's capabilities and limitations, input data requirements, and associated preprocessing and postprocessing utilities. For more information about SUTRA and its utilities,
see the formal documentation for each code, and take advantage of the various other resources offered on this website.
Capabilities and Limitations of SUTRA
SUTRA simulates saturated and/or unsaturated, constant-density or density-dependent groundwater flow and either single-species reactive solute transport or thermal energy transport. Simulations can
be either two-dimensional or fully three-dimensional. Solutions can be either steady-state or transient. SUTRA's main features and limitations are summarized below:
• Groundwater flow
□ saturated/unsaturated
□ constant-density or density-dependent
□ user-programmable unsaturated flow functions
• Transport
□ single solute or thermal energy
□ zeroth- and first-order solute production/decay; zeroth-order energy production/decay
□ linear, Freundlich, or Langmuir adsorption
• Time dependence
□ steady-state or transient solution
□ time-varying boundary conditions specified via input files or programmable subroutine
• Two-dimensional (2D) models
□ Cartesian coordinates or
□ radial/cylindrical coordinates
• Three-dimensional (3D) models
□ Cartesian coordinates
□ fully 3D
• Discretization
□ hybrid Galerkin-finite-element and integrated-finite-difference method
□ quadrilateral (2D) or generalized hexahedral (3D) finite elements
□ fully implicit finite-difference time discretization
• Nonlinear problems
□ Variable-density and/or unsaturated flow problems are nonlinear
□ Picard iteration available to resolve nonlinearities
• Matrix equation solvers
□ Gaussian elimination (direct)
□ preconditioned CG (iterative; only for flow equation without upstream weighting)
□ preconditioned GMRES (iterative)
□ preconditioned ORTHOMIN (iterative)
• Input
□ all input data are contained in text files
□ input data are grouped into "datasets"
• Output
□ output is written to text files and to the screen
□ flexible, columnwise listing of pressure, concentration or temperature, saturation, and velocity
□ observation well output (pressure, concentration or temperature, and saturation vs. time)
□ fluid mass and solute mass or energy budgets
• Preprocessing and postprocessing software (details below)
□ facilitates formulation of input datasets
□ helps with interpretation of results
Input data requirements for SUTRA
For each SUTRA simulation, the user must specify the type of simulation, mesh structure, physical properties, simulation and output controls, boundary conditions, and initial conditions. This is done
through two or more data files:
• the main input (".inp") file
• the initial conditions (".ics") file
• one or more (".bcs") files for time-dependent boundary conditions
The input data requirements are summarized below:
• Solute transport or
• Energy transport
• Node and element numbering scheme
• Coordinates in space of each node
□ (x, y) in 2D
□ (x, y, z) in 3D
• Thickness of the mesh at each node (2D only)
• For each element, a list of nodes that form its corners
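As a rough illustration of the mesh data just listed (node coordinates plus a corner-node list for each element), the sketch below builds connectivity for a small rectangular grid of quadrilateral elements. It shows only the kind of data structure involved; the row-major numbering scheme is an assumption, and this is not SUTRA's actual input format.

```python
# Illustrative sketch of 2D finite-element mesh data of the kind SUTRA
# reads: (x, y) coordinates for each node, and for each quadrilateral
# element the list of nodes forming its corners. The row-major node
# numbering here is an assumption, not SUTRA's required scheme.

def build_quad_mesh(nx, ny, dx=1.0, dy=1.0):
    """Return (nodes, elements) for an nx-by-ny grid of quad elements."""
    nodes = [(i * dx, j * dy) for j in range(ny + 1) for i in range(nx + 1)]
    elements = []
    for j in range(ny):
        for i in range(nx):
            n0 = j * (nx + 1) + i              # lower-left corner node
            elements.append((n0, n0 + 1, n0 + nx + 2, n0 + nx + 1))
    return nodes, elements

nodes, elements = build_quad_mesh(2, 2)        # 9 nodes, 4 elements
```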
• Simulation title
• Simulation dimensions
□ number of nodes
□ number of elements
□ number of boundary conditions
□ number of observation nodes
• Simulation modes
□ saturated/unsaturated
□ steady/transient
□ run initialization
□ storage of restart information
• Numerical controls
□ upstream weighting
□ enforcement of boundary conditions
□ time stepping
□ solution cycling
□ Picard iterations
□ matrix solver parameters
• Output controls
□ nodewise results (pressure, concentration, saturation)
□ elementwise results (velocity)
□ fluid mass and solute mass or energy budgets
□ results at observation nodes
□ screen output
• Four types of boundary conditions
□ Fluid sources or sinks
□ Solute mass or energy sources or sinks
□ Specified pressures
□ Specified solute concentrations or temperatures
• Specification of boundary conditions
□ All boundary conditions are specified at nodes.
□ Boundary conditions are not restricted to nodes at the physical boundary of the model; they can be specified at any node.
□ By default, there is no flux of mass or energy in or out of the model domain at any node for which a boundary condition is not explicitly specified.
□ Time-varying boundary conditions can be specified using the optional ".bcs" input files or programmed by the user in subroutine BCTIME.
• Initial conditions
□ starting time
□ initial pressure at each node
□ initial solute concentration or temperature at each node
• Restarting a run
□ A restart (".rst") file contains initial conditions, plus additional initialization information saved by SUTRA at the end of an earlier run.
□ Using the information saved in a restart file, an earlier run can be continued as though it had never been interrupted. This is called a "warm" start.
□ The restart file can also be used in a "cold start" to continue a run after changes have been made to the model input (e.g., different boundary conditions).
Preprocessing and postprocessing utilities
Preprocessing software facilitates the formulation of large, complex input datasets. Once a simulation is completed, postprocessing software can assist in the interpretation of results. The
SutraSuite software package includes a number of preprocessing and postprocessing programs designed for use with SUTRA. These are summarized in the table below:
• SutraPrep
□ Simple preprocessor for 3D SUTRA problems
□ Creates input for SUTRA 2.0 [2D3D.1], which can be read by SUTRA 2.1 and 2.2
□ Creates 3D (VRML) mesh plots
□ Text-based interface
□ Compiled version available for Windows®
□ Can be run on any platform with a standard FORTRAN-90 compiler
• SutraGUI
□ Powerful graphical user interface (GUI) for 2D and 3D SUTRA problems
□ Features interactive graphical input and ability to import GIS maps of physical properties and boundary conditions
□ Also does postprocessing (2D only) -- see below
□ Windows®-based
□ Requires ArgusONE® software
• SutraPlot (a version compatible with SUTRA releases later than 2.0 [2D3D.1] is currently under development)
□ Plots results of 2D and 3D SUTRA simulations
□ Creates mesh plots, contour plots, and velocity vector plots
□ Windows®-based
• SutraGUI
□ Plots results of 2D SUTRA simulations
□ Creates contour plots of pressure, concentration/temperature, and saturation, and plots velocity vectors
□ Also does preprocessing (2D & 3D) -- see above
□ Windows®-based
• Model Viewer
□ Creates visualizations of 2D and 3D SUTRA output that the user can manipulate on-screen
□ Plots isosurfaces or solid, 3D maps of pressure, concentration/temperature, or saturation
□ Plots velocity vectors
□ Saves visualizations as bitmaps (.bmp) or VRML plots (.wrl)
□ Displays animations of results and exports frames for creating animations outside of Model Viewer
□ Windows®-based
• GW_Chart
□ Plots results of 2D and 3D SUTRA simulations as functions of time: fluid, solute, and energy budgets; and pressure and concentration or temperature at observation nodes
□ Windows®-based
• CheckMatchBC
□ Aids in the proper formulation of specified-pressure, specified-concentration, and specified-temperature boundary conditions.
□ Windows®-based | {"url":"http://water.usgs.gov/nrp/gwsoftware/sour/overview/overview.htm","timestamp":"2014-04-21T07:05:37Z","content_type":null,"content_length":"23747","record_id":"<urn:uuid:cc6994d6-3d30-48a4-b7dd-5935ca824691>","cc-path":"CC-MAIN-2014-15/segments/1398223211700.16/warc/CC-MAIN-20140423032011-00220-ip-10-147-4-33.ec2.internal.warc.gz"}
Prisoner's dilemma
prisoner’s dilemma, Imaginary situation employed in game theory. One version is as follows. Two prisoners are accused of a crime. If one confesses and the other does not, the one who confesses will
be released immediately and the other will spend 20 years in prison. If neither confesses, each will be held only a few months. If both confess, they will each be jailed 15 years. They cannot
communicate with one another. Given that neither prisoner knows whether the other has confessed, it is in the self-interest of each to confess himself. Paradoxically, when each prisoner pursues his
self-interest, both end up worse off than they would have been had they acted otherwise. See egoism. | {"url":"http://www.britannica.com/print/topic/477240","timestamp":"2014-04-19T07:19:18Z","content_type":null,"content_length":"6762","record_id":"<urn:uuid:33429982-fc4a-4fb9-a647-40e68dd90cd5>","cc-path":"CC-MAIN-2014-15/segments/1397609536300.49/warc/CC-MAIN-20140416005216-00383-ip-10-147-4-33.ec2.internal.warc.gz"} |
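The dominance argument above can be checked with a small sketch (payoffs are years in prison, so lower is better; 0.25 stands in for "a few months"):

```python
# Payoff table for the version described above, as (years for A, years
# for B); lower is better, and 0.25 stands in for "a few months".
payoff = {
    ("confess", "silent"): (0, 20),
    ("silent", "confess"): (20, 0),
    ("silent", "silent"): (0.25, 0.25),
    ("confess", "confess"): (15, 15),
}

def best_reply(opponent_choice):
    """Prisoner A's self-interested choice against a fixed choice by B."""
    return min(("confess", "silent"),
               key=lambda a: payoff[(a, opponent_choice)][0])

# Confessing is best for A whatever B does -- yet mutual confession
# (15, 15) leaves both worse off than mutual silence (0.25, 0.25).
reply_if_b_confesses = best_reply("confess")
reply_if_b_silent = best_reply("silent")
```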
Distance and Midpoint Formulas
Slapping coordinates on a line makes any line look like the number line. Just the same, this system of coordinates can make any plane look like the x-y plane from algebra.
Since lines only require one coordinate, they're one-dimensional. Planes require two, so they're two-dimensional. If we kick it up one more notch, we can put points in space (three-dimensional space,
not the Final Frontier).
Finally, if we already know the exact point we're talking about, there's no need to add any coordinates to specify anything. That's why points are called zero-dimensional.
Going back to two dimensions, we can use coordinates to find the distance between points using the distance formula:

d = √((x₂ − x₁)² + (y₂ − y₁)²)
Sample Problem
If M = (3, 4) and N = (5, -2), find the length of the segment MN.
As always, it's a good idea to sketch the problem first.
Since the length of MN is just the distance between the endpoints, we can use the distance formula.
d = √((5 − 3)² + (−2 − 4)²) = √(4 + 36) = √40

Plugging this into a calculator, we get approximately 6.32 units, which looks about right on the sketch.
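Here's a quick sketch of that same calculation in code (a generic helper of ours, not anything specific to this page):

```python
# Sketch of the distance formula: works for 2D or 3D points alike.
import math

def distance(p, q):
    """Euclidean distance between two points of the same dimension."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(p, q)))

mn = distance((3, 4), (5, -2))   # sqrt(4 + 36) = sqrt(40), about 6.32
```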
There's also a three-dimensional version of the distance formula (no 3D glasses required). It looks a lot like the two-dimensional version but with one addition.

d = √((x₂ − x₁)² + (y₂ − y₁)² + (z₂ − z₁)²)
You might be wondering why computing one-dimensional distance looks so different from these more complicated distance formulas in higher dimensions. Well, if we follow the pattern with the square
root and everything using only one coordinate, we get:
d = |x₂ − x₁|
The square root cancels out the exponent, but we have to put absolute values since square roots, just like distances, are always positive. Basically, they're the same concepts extended to multiple
The distance formula is really just an adaptation of the infamous Pythagorean theorem: a^2 + b^2 = c^2. We'll talk about this more when we get into right triangles. We just said it because Greek
influence extends even to Descartes's work. Geometry really is all about the Greeks, isn't it.
Coordinates also make it easy to find the midpoint of a segment. Finding the number in the middle of two others means finding their average (the average of a and b is (a + b) ÷ 2). Luckily, midpoint
works the same way. Average each of the coordinates of the endpoints, and you've got a midpoint. Here's a formula, if you prefer to remember those:

M = ((x₁ + x₂)/2, (y₁ + y₂)/2)
That formula is for two dimensions, but if you're working in three dimensions (or somehow transcend into yet higher dimensional realms), follow the same pattern.
Sample Problem
PQ has length 10 and has M as a midpoint. If P has coordinates (2, 4) and M has coordinates (5, 8), what are the coordinates of point Q?
It is tempting to plug (2, 4) and (5, 8) into the midpoint formula, but what would that give us? We'd just get the midpoint of P and M, which doesn't answer the question at all.
Instead, we are given a midpoint and need to solve for an endpoint. Let's just call the coordinates of Q (x, y) for now. We can still use the midpoint formula, but not in the same way we'd want to.

((2 + x)/2, (4 + y)/2) = (5, 8)
This looks like a single equation, but it's actually two equations disguised as one. We know that the first coordinates have to be equal, and also that the second coordinates are equal. In other
words, we have these two equations.

(2 + x)/2 = 5
(4 + y)/2 = 8
Solving for x and y (or just guessing and checking) yields (x, y) = (8, 12), and we're done.
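The endpoint trick generalizes: each coordinate of Q is twice the midpoint's coordinate minus P's. A quick sketch (generic helpers of ours, not the book's):

```python
# Sketch of the midpoint formula and of solving for a missing endpoint:
# midpoint(P, Q) = M means each coordinate of Q is 2*M - P.
def midpoint(p, q):
    return tuple((a + b) / 2 for a, b in zip(p, q))

def other_endpoint(p, m):
    """Given endpoint p and midpoint m, return the other endpoint q."""
    return tuple(2 * b - a for a, b in zip(p, m))

q = other_endpoint((2, 4), (5, 8))    # the sample problem: gives (8, 12)
check = midpoint((2, 4), q)           # recovers the midpoint (5, 8)
```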
Hold up. We never used the length of PQ. Does that mean our answer is wrong? Not necessarily. Many times in geometry, we're given extra information to try to throw us off. Sometimes it's completely
irrelevant ("If 1 + x = 83 and an orangutan's arms can extend up to 7 feet, find the value of x"). Sometimes it's actually pertinent to the question, but not needed to solve the problem. When that
happens, we can use this extra information to check our answer.
In this case, we can check and see if the distance between P and Q is 10. If it is, then we have more proof (not proofs again!) that our answer is right. If we're wrong, we may want to go back and
check our work.
PQ = √((8 − 2)² + (12 − 4)²) = √(36 + 64) = √100 = 10
We're right, so there's no need to worry. | {"url":"http://www.shmoop.com/points-lines-angles-planes/distance-midpoint-formulas.html","timestamp":"2014-04-16T08:24:45Z","content_type":null,"content_length":"40704","record_id":"<urn:uuid:38e7a316-076b-4efd-9263-64450d14cf67>","cc-path":"CC-MAIN-2014-15/segments/1397609521558.37/warc/CC-MAIN-20140416005201-00456-ip-10-147-4-33.ec2.internal.warc.gz"} |
CDS 101/110 - State Feedback
From MurrayWiki
Monday: Reachability and State Feedback (Slides, MP3)
This lecture introduces the concept of reachability and explores the use of state space feedback for control of linear systems. Reachability is defined as the ability to move the system from one
condition to another over finite time. The reachability matrix test is given to check if a linear system is reachable, and the test is applied to several examples. The concept of (linear) state space
feedback is introduced and the ability to place eigenvalues of the closed loop system arbitrarily is related to reachability. A cart and pendulum system and the predator prey problem are used as examples.
Wednesday: State Feedback Design: Lecture notes (MP3)
This lecture will present more advanced analysis on reachability and on control using state feedback.
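A minimal numpy sketch of the reachability-matrix test and of eigenvalue placement by state feedback; the double-integrator system and the gain below are my own illustration, not taken from the lectures:

```python
# Sketch: for x' = Ax + Bu, the system is reachable iff the reachability
# matrix W_r = [B, AB, ..., A^(n-1)B] has full rank n; state feedback
# u = -Kx then moves the closed-loop eigenvalues to those of A - BK.
# The double integrator and the gain K here are illustrative examples.
import numpy as np

def reachability_matrix(A, B):
    n = A.shape[0]
    cols = [B]
    for _ in range(n - 1):
        cols.append(A @ cols[-1])
    return np.hstack(cols)

def is_reachable(A, B):
    return np.linalg.matrix_rank(reachability_matrix(A, B)) == A.shape[0]

A = np.array([[0.0, 1.0], [0.0, 0.0]])   # double integrator
B = np.array([[0.0], [1.0]])
reachable = is_reachable(A, B)           # rank of [B, AB] is 2

K = np.array([[2.0, 3.0]])               # places closed-loop poles at -1, -2
closed_loop_eigs = np.linalg.eigvals(A - B @ K)
```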
• K. J. Åström and R. M. Murray, Feedback Systems: An Introduction for Scientists and Engineers, Princeton University Press, 2008.
This homework set covers reachability and state feedback. The Whipple bicycle model is used as an example to illustrate state feedback with pole placement, and the dependence of both the tracking
behaviour and the command response on the location chosen for the closed-loop poles. | {"url":"http://www.cds.caltech.edu/~murray/wiki/index.php/CDS_101/110,_Week_5_-_State_Feedback","timestamp":"2014-04-17T21:23:42Z","content_type":null,"content_length":"18904","record_id":"<urn:uuid:ff2cb3be-fe0f-4c1a-bb3a-ec715fd7ae16>","cc-path":"CC-MAIN-2014-15/segments/1397609532128.44/warc/CC-MAIN-20140416005212-00659-ip-10-147-4-33.ec2.internal.warc.gz"} |
how many corners does a cube have in 4-D? How many 3-D faces? How many edges? A typical corner is (0,0,1,0), a typical edge goes to (0,1,0,0).
@estudier but how do i calculate it?
Ok, this is something I kinda learned in graph theory..
it's an algebra question from Strang's book
2 dots make a line, two lines make a plane, if you connect the corresponding edges of two planes, you get a 3D cube, and finally, if you connect the corresponding edges of two 3D cubes you get a
4d cube.
that looks soo messy, but that's what a 4D cube should look like.
it has twice as many corners as a 3D cube
and uhm.. let me count the number of edges..
So it should have 32 edges
where does the 12 come from?
12 edges of a cube
thanks @bahrom7893
and you're welcome
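The counts in this thread can be checked against the standard formula for an n-dimensional cube, sketched here in code:

```python
# An n-dimensional cube has C(n, k) * 2**(n - k) faces of dimension k:
# k = 0 counts corners, k = 1 edges, and k = 3 the 3-D cube "cells".
from math import comb

def faces(n, k):
    return comb(n, k) * 2 ** (n - k)

corners = faces(4, 0)    # 16 corners, like (0, 0, 1, 0)
edges = faces(4, 1)      # 32 edges, matching the count above
cells = faces(4, 3)      # 8 three-dimensional faces
```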
| {"url":"http://openstudy.com/updates/5059bd0be4b0cc122893575d","timestamp":"2014-04-16T08:00:11Z","content_type":null,"content_length":"783651","record_id":"<urn:uuid:646acea9-670a-4cfc-916b-cdadf6c92e01>","cc-path":"CC-MAIN-2014-15/segments/1397609521558.37/warc/CC-MAIN-20140416005201-00557-ip-10-147-4-33.ec2.internal.warc.gz"}
Patent US4976619 - Passive location method
The present invention pertains to simulated battlefield position location and more particularly to passive position location for determining the effects of simulated munitions deployment on a target.
Position detectors may be used to find the location of individuals or vehicles with respect to simulated munitions to determine whether injury or damage was inflicted on the individuals or vehicles.
A battlefield for a war game is established. Boundaries of this battlefield are marked by actuators. These actuators emit coded transmissions which cover the area of the battlefield. These location
detectors may be active or passive. Active location detectors interact with the actuators to both receive messages from the actuators and transmit messages to the actuators. Passive location
detectors receive transmissions from the actuators and determine their position relative to a predetermined impact point of the munition.
Active location devices contain transmitters and receivers. Passive location devices include receivers only. By eliminating the transmitter portion of a position locator, passive location detectors
may be made to be less expensive, more reliable and more operable in a hostile environment.
Doerfel et al. U.S. Pat. No. 4,682,953, issued on July 28, 1987 and Doerfel et al. U.S. Pat. No. 4,744,761, issued on May 17, 1988 teach and describe one such passive location detection device. This
passive location device uses a time-windowing technique which generates a polygon-shaped area in which the target is probably located. The number of sides of this polygon is related to the number of
actuators used. For example, three actuators generate a six-sided polygon. When the target PDD is located along a border of the polygon area encompassed by the actuators, the polygon produced is
substantially elongated and distorted. Such distortion of the area in which the target Passive Detection Device (PDD) is located results in a possible indication of harm or damage when none was truly
inflicted or a possible indication of no damage when damage was actually inflicted. Because the target PDD is located only within a bounded area and because of the distortion when a PDD is near the
boundary of the actuator area, the passive location device described in the references can produce many false readings.
As a result, it is an object of the present invention to provide a passive location detector which determines the actual coordinate position (rather than position within a window) of a target PDD
with minimal error.
In accomplishing the object of the present invention, a novel method for passive location is shown.
A passive location method determines the coordinate position of a passive detection device relative to an impact point of a round of munition in a simulated battlefield exercise. A number of
actuators transmit messages to the passive detection device.
First, the passive location method initializes a plurality of parameters by means of an initialization message. Next, the method sets the initial coordinate position estimate of the PDD as the
coordinate position of the impact point of the munition. Next, a sequence of new coordinate positions with respect to the initial coordinate position is examined to establish a position error
For each of the coordinate positions examined, an error metric is found. The smallest error metric is determined and its corresponding coordinate direction determined. Then a new PDD position
estimate is located a predetermined distance from the previous coordinate position in the coordinate direction of the smallest error metric.
Lastly, the method is iterated until a new coordinate position is produced which converges to the true position of the passive detection device by continuous minimization of the error metric. A
determination may then be made whether the PDD and its associated personnel or vehicle is killed, wounded or damaged.
FIG. 1 is a geometric diagram showing the preferred embodiment of the present invention.
FIG. 2 is a timing diagram of the passive location detector operation;
FIGS. 3A, 3B and 3C are a flow chart of the location detection method of the present invention.
FIG. 4 is a block diagram of a portion of a PDD.
U.S. Pat. Nos. 4,682,953 and 4,744,761 are hereby incorporated by reference.
FIG. 1 depicts a layout of a war game battlefield encompassed by lines 10, 11 and 12. This area includes actuators 1, 2 and 3. The number of actuators shown here in FIG. 1 is not by way of
limitation, but by way of explanation. Current design and technology permit up to five actuators to encircle the war game battlefield, but at least three actuators are required for establishing
location within the war game battlefield bounded by lines 10, 11 and 12. PDD location outside the bounded area is also possible, but degraded location accuracy results.
Point 4 within the encircled area of FIG. 1 denotes the location of a particular target. This target includes a PDD (Passive Detection Device). This basic configuration is similar to the above
incorporated references. The difference is that the system shown in the references locates targets only within a certain area within the war game battlefield as delineated by the time-windows
established by the actuator timing sequence. This area may be distorted if the target PDD is near a boundary, such as 10, 11 or 12. The present invention provides an actual pinpoint location of a
target by solution of the location equations in a suitably programmed microprocessor. The combination of location equations, a developed software algorithm and incorporation of microprocessing
technology comprise this invention.
Point IP within the encircled area of FIG. 1 is the predetermined impact point of a particular round of exploded munition. People or equipment within a basically circular zone around the impact point
are either damaged or mortally wounded. People and equipment within a larger radius of the impact point may be injured or slightly damaged. Personnel and equipment outside these zones survive
The actuators 1, 2 and 3 may be separated by as much as 20 km, as long as there is line of sight contact with the impact point IP.
In the following explanation, actuator 1 acts as the primary or reference transmitter. An initialization message is transmitted by the primary transmitter, in this case actuator 1. The target 4
(personnel or vehicle) contains the passive detection device (PDD). This initialization message includes the coordinate location of each of the actuators and the impact point. In an alternate form of
the invention, these initialization messages may be prestored in an electronically erasable programmable read-only-memory (EEPROM) located within the PDD unit. See FIG. 4.
The initializing actuator 1 radiates a coded pulse into the battlefield (exercise) area shown as "Act#1" in FIG. 2. The remaining actuator transmissions are delayed by predetermined amounts, T[d2], T
[d3], etc. This coded pulse is match-filtered by the PDD to produce a narrow synchronization pulse A[1] in FIG. 2, or as IP[1'], if the PDD is located at the impact point. A high-speed counter within
the PDD begins counting when the coded pulse is detected. The counter of the PDD then continuously counts at a predetermined rate. The value of this count is determined when a pulse from a second
actuator is detected by the PDD at time A[2]. Thus, a Δ time difference is measured from the last full count, T[0], beginning when the PDD received a coded pulse of actuator 1 until the pulse is
received by the PDD from the second actuator; this time difference is denoted by TA[12].
The time interval between actuator pulses received at the PDD location is known to be precisely T[0] representing an integral number of counts of the system clock (not shown).
The PDD then continues counting awaiting a pulse from actuator 3. Upon the arrival of this coded pulse, a second time difference is computed, TA[13]. This procedure is repeated for each of the
actuators employed in the particular configuration. When this information is collected by the PDD, the passive location method is executed by the PDD based upon the information measured according to
the counts obtained.
Referring to FIGS. 3A-C, the solution to the location equations (1) through (5) is determined via the disclosed software algorithm, and the system development computer simulation also shown in FIG.
The passive location method iteratively calculates a position error gradient. As a result of this calculated position error gradient, the estimated location coordinates of the PDD may be moved
successively closer to the true location of the PDD. The convergence parameter which is iteratively minimized is given by equation (1): ##EQU1## where TA[i] is the measured timing difference error given by equation (2):
TA[i] = ΔT[i] - ΔT[1] (2)
where ΔT[i] is the number of clock cycles T[0] measured from the time the PDD receives the coded pulse of the first actuator to respond, ta[i] is the estimated timing difference error as determined by the latest position estimate of the PDD, and N is the number of actuators.
Equation (1) given above is solved choosing an optimal starting position and then moving the solution in the direction which minimizes the error. For the purposes of this explanation, it will be
assumed that the surfaces within the encircled war game area are approximately of equal altitude and therefore no Z-axis coordinate exists. A two-dimensional X-Y coordinate plane is assumed. However,
the method shown applies equally well to three-dimensional solutions as revealed by the disclosed system simulation.
The impact point IP is chosen as the starting point. Incremental steps are taken from the impact point and the resultant error measured in the four coordinate (±X,±Y) directions. A candidate error
metric is computed by the microprocessor at a new location given by equation (3) below; this metric is seen to be the second range difference (the difference of the timing differences): ##EQU2##
where β refers to each of the four candidate directions (+X, -X, +Y and -Y) with coordinates
X[β] = X[0] ± ΔX[β] (4)
Y[β] = Y[0] ± ΔY[β] (5)
and X[0] and Y[0] refer to the present position of the estimate in the iterative solution, and the increments of position change ΔX[β] and ΔY[β] assume 1 of 3 successively decreasing magnitudes of distance as the method converges to the true position. More than 3 convergence parameters may be used. The four values of ERROR[β] are compared with the present value of ERROR given by equation (1) above. Movement of an assumed position is allowed to occur in the direction of the smallest ERROR[β] as long as this ERROR[β] value is smaller than the present value of ERROR. When this test fails, the next smaller movement increment (one of a predetermined set of convergence distance parameters) is implemented until the method detects that no further movement will decrease ERROR.
As movement in the above four directions converges to the true position of the PDD, movement in four additional directions is added to supplement the estimation of the true position of the PDD. These
four added directions are located 45° from each of the principal X-Y coordinate directions. As a result, the passive detection method is able to converge to the true position of the PDD as the value
of ERROR approaches zero.
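The iterative search just described can be sketched as a small gradient-free coordinate descent (a sketch, not the patented implementation: the function names, the squared-error metric, and treating timing differences directly as range differences are all assumptions):

```python
import math

def locate_pdd(actuators, measured_ta, impact_point, steps=(70.0, 10.0, 3.0)):
    """Estimate the PDD position starting from the impact point, probing
    candidate moves and shrinking the step when no move reduces the error,
    loosely following the patent's description."""
    def predicted_ta(p):
        d = [math.dist(p, a) for a in actuators]
        return [di - d[0] for di in d[1:]]

    def error(p):
        # sum of squared differences between measured and predicted
        # timing (range) differences -- stands in for equation (1)
        return sum((m - t) ** 2 for m, t in zip(measured_ta, predicted_ta(p)))

    x, y = impact_point
    for k, h in enumerate(steps):
        # four principal directions first; the 45-degree diagonals are
        # added for the later, finer steps as the estimate converges
        dirs = [(h, 0.0), (-h, 0.0), (0.0, h), (0.0, -h)]
        if k > 0:
            dirs += [(h, h), (h, -h), (-h, h), (-h, -h)]
        while True:
            best = min(dirs, key=lambda d: error((x + d[0], y + d[1])))
            if error((x + best[0], y + best[1])) < error((x, y)):
                x, y = x + best[0], y + best[1]
            else:
                break  # no improving move at this step size: shrink the step
    return (x, y)
```

With noiseless measurements and well-spread actuators, the estimate typically lands within the smallest step of the true position.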
As mentioned above, a microprocessor of the PDD supports and controls the execution of the passive location method of the present invention. FIGS. 3A-C show the details of this method. Initially, the PDD is in the standby mode in order to conserve power. The reception of the initialization pulse acts to "wake up" the PDD. The PDD determines at the first stage of processing whether or not its location is within an acceptable radius of lethality. The PDD returns to the standby mode to conserve power if the radius of convergence is too large (block 27); the metric in this stage is the total actuator count error, and the threshold is a predetermined operator input. FIG. 3A primarily shows initialization of the computer simulation; FIGS. 3B and 3C primarily show the actual convergence algorithm to be stored in the PDD.
First, the number of actuators, the location of the actuators, the convergence movement parameters and the PDD location errors relative to the IP are initialized by the method, block 21. The PDD
location errors relative to the IP are peak-to-peak location errors of the PDD relative to the impact point IP. The three convergence parameters are read from the input message of actuator 1. As an
example, these convergence parameters may be initialized at 70 meters, 10 meters and 3 meters in that order. These parameters determine the step size or distance of the movement from the impact point
toward the true location of the PDD and are chosen to minimize the total computation of the position estimate.
Next, the true range of the actuator to the PDD is determined for simulation purposes. The true range of the impact point IP to the actuator is also determined. Next, the timing error due to noise is
calculated by the simulation. Lastly, a second range difference including timing error is calculated. All of the above functions are performed by block 23 and represent true data to be measured by
the PDD counters if actually deployed.
The range errors of the actuators relative to the impact point are measured and the corresponding clock count range error is stored, block 24. Next, the measured range difference or errors relative
to the impact point are found and the total count error including timing noise is found, block 25.
Next, it is determined whether this particular PDD is close enough to the impact point. That is, it is determined whether the total clock count is less than a predetermined threshold value. This
threshold value may be either transmitted in the initialization message or may be preprogrammed within the PDD. If the number of counts actually determined is less than the threshold value, block 27
transfers control to block 29 and the location method is continued. If the number of counts is greater than the threshold, this indicates that the particular PDD is out of the range of any effect of
the particular munition described by this message transmission. As a result, block 27 transfers control to block 28 via the NO path. Block 28 powers down the PDD and places it in the standby mode.
Block 28 then transfers control to block 21 and waits for a subsequent initialization of the method when another message indicating a simulated munitions detonation is received.
If the PDD is within range of the detonated munition, block 27 transfers control to block 29, where the initial estimate of the PDD location is set to the coordinate location of the impact point; this begins the actual location algorithm. Next, a new coordinate position is selected relative to the impact point IP. This represents a step using the first convergence parameter from the impact point
along the ±X and ±Y coordinate directions. The second difference given by equation (1) above is calculated using these range coordinates, block 31. The ranges from the actuators to the new position
are determined for the movement along the X-Y coordinate directions. Next, for the new coordinate position estimate, an error sum is found by applying equation (3) given above. For each of the four coordinate directions, an error sum is found and a total DRE is calculated, block 35. Next, the smallest error sum of the four, SMALL, is found, block 37.
Block 39 compares the value SMALL to the value of the total ERROR sum of the new position estimate for all actuators relative to the impact point, DRE. If the value of SMALL is less than the value of DRE, block 39 transfers control to block 40 via the YES path. Block 40 moves the position estimate in the direction corresponding to SMALL (small error), which indicates the coordinate position most likely to be nearer the true location of the PDD. Block 40 then transfers control to block 33 to iterate the process described above.
If the value of SMALL was greater than or equal to the value of DRE, block 39 transfers control via the NO path to block 41. Since, in this case, the value of the smallest error sum was greater than
or equal to the error sum of the new position, steps of this size or distance (convergence parameter) will not bring the coordinate position closer to the true position of the PDD. Therefore, block
41 determines whether this convergence parameter is the first parameter of the set. If this is the first convergence parameter, block 41 transfers control to block 42 via the YES path. Block 42
selects the second convergence parameter, which represents a smaller convergence distance, and transfers control to block 35 to iterate the above process.
If this is not the first convergence parameter, block 41 transfers control to block 43 via the NO path.
Block 43 updates the range values for the second difference given by equation (3) for the new position. These new positions are such that they are selected at ±45° relative to the X-coordinate
positions. As a result, the passive location method now includes eight degrees of freedom in which to move to obtain the true position of the PDD.
Next, block 45 finds the smallest error sum, SMALL. Then block 47 determines whether the value of SMALL is less than the value of DRE which is the error sum for a new position using the convergence
parameter selected. If SMALL is less than DRE, block 47 transfers control to block 48 via the YES path. Block 48 moves the position estimate along a 45° line centered upon the last location obtained. Block 48 then transfers control to block 31 to iterate the above process.
If DRE is greater than SMALL, block 47 transfers control to block 49 via the NO path. Block 49 determines whether the second convergence parameter is being used. If the second convergence parameter is being used, block 49 transfers control to block 50 via the YES path. Block 50 selects the third convergence parameter, which corresponds to steps of a smaller distance than the second convergence parameter. Block 50 transfers control to block 35 to iterate the above process.
If block 49 determines that the second convergence parameter is not being used (i.e., the third and final parameter has been used), it transfers control via the NO path to block 51. This terminates the actual PDD algorithm. Blocks 51 and 52 are performed for simulation purposes only. Since all three of the convergence parameters have been applied with the eight degrees of freedom, block 51 now
indicates that the residual error in the X-Y-Z coordinate directions is suitably minimized and the X-Y-Z coordinates calculated now pinpoint the true location of the PDD. These coordinates precisely
locate the PDD and its corresponding personnel or vehicle, so that the effects of the munition can be more accurately determined. Since the PDD location in terms of precise coordinates has been
determined, the location may be passed to other software which determines the engagement probabilities. These probabilities include the probability of a kill, of damage, and the probability of escape
given the coordinate position of a particular PDD.
Each of the probabilities depends upon the target type (personnel or vehicle), the weapon type (e.g. tank, cannon, Howitzer), range and angle with respect to the impact point IP. Block 51 then
transfers control to block 53 where the method is ended, until the next initialization message is received.
A simplified view of damage assessment is that a weapon's effective area may be divided into two zones, the outer zone comprising the near-miss area and the inner zone comprising the lethal or kill
area. Given the coordinate location of the PDD and the coordinate location of the impact point IP, which it received in the initialization message, the PDD may apply one of a number of techniques to
determine the zone of this particular PDD relative to the impact of the specified munition. The parameters for various types of munitions may be stored in an EEPROM contained in the target PDD.
FIG. 4 depicts a portion of the electronics of a PDD. The PDD includes a microprocessor 100 which is connected to both PROM (Programmable Read-Only-Memory) 101 and EEPROM (Electronically Erasable Programmable Read-Only-Memory) 102. PROM 101 provides the storage for the instructions comprising the passive location method. Microprocessor 100 retrieves these instructions from PROM 101 and executes the passive location method described above. Antenna 106 receives actuator transmissions and sends the information to PDD receiver 104. PDD receiver 104 is connected to decode data 105 and to
position timing error computation 103. Switch 107 allows data from the actuators through decode data 105 or previously stored data of EEPROM 102 to be transmitted to microprocessor 100 for analysis.
In addition, position timing error computation 103 is connected to microprocessor 100. Initially, and from time to time, the input parameters are programmed into EEPROM 102. These parameters describe the various munitions types and the kill and near-miss regions associated with each type. Microprocessor 100 reads these parameters and performs probability estimates in order to determine the damage assessment of each PDD from a particular round of munitions. These parameters also include the convergence increments and the ranges of the IP to the actuators.
It is possible to employ mobile actuators in vehicles as shown in the references mentioned above. In addition, aircraft may provide actuators for the present invention. The detection accuracy of the
present system would depend heavily on the accuracy of the aircraft navigation system. Instantaneous aircraft position data may be down-linked in a message to the PDD. More than one aircraft may be
employed in a given location as long as the interrogation periods are separated, as in a TDMA format, for example.
Although the preferred embodiment of the invention has been illustrated and described in detail, it will be readily apparent to those skilled in the art that various modifications may be made therein without departing from the spirit of the invention or from the scope of the appended claims.
CWRU Physics Faculty
Claudia de Rham
Baldwin Assistant Professor
Ph. D. Cambridge University, UK (2006)
Dark Energy
Interests
I am a cosmologist working on very early Universe Cosmology and Dark energy. More precisely, my main research interests cover
• Dark Energy and the CC Problem: Particle physics theories typically predict the presence of a vacuum energy, at worst of the order of the Planck scale, M[Pl]^4 ~ 10^72 GeV^4, which would cause the Universe to accelerate with a Hubble parameter H ~ 10^61 km/s/Mpc, or at best of the order of the supersymmetry breaking scale, M[susy]^4 ~ TeV^4. Since this is clearly incompatible with observations, physicists have for a long time assumed that some symmetry sets this vacuum energy to zero. However, in 1998, observations of type Ia supernovae measured for the first time the acceleration of the Universe with precision, corresponding to a Hubble parameter of order H ~ 70 km/s/Mpc, i.e. consistent with a cosmological constant of order 10^-48 GeV^4, that is 120 orders of magnitude smaller than predicted from particle physics. This measurement was soon confirmed by many other sources, such as the CMB, matter density and gravitational lensing, and is to date arguably the most embarrassing puzzle of modern cosmology.
• Cosmological Perturbations: The power spectrum and non-gaussianities present in the Cosmic Microwave Background provide strong signatures of models of brane inflation.
Click here for a list of papers on the subject.
• Modified / Massive Gravity: A potential idea to tackle the cosmological constant problem is to weaken the effect the vacuum energy has on the geometry by modifying gravity at large distances (of the order of the Hubble scale today). Effectively, this is possible if the graviton were massive or had a resonance.
This is the idea behind the degravitation model suggested by N. Arkani-Hamed, S. Dimopoulos, G. Dvali and G. Gabadadze, in 2002 and by G. Dvali, S. Hofmann, and J. Khoury, in 2007
• SLED: The SLED, or Supersymmetric Large extra dimensions, first proposed by Y. Aghababaie, C. Burgess, S. Parameswaran and F. Quevedo in 2003 provides another potential resolution to the
Naturalness Problem of the Cosmological Constant using two compactified supersymmetric extra dimensions of submillimeter size, hence potentially tackling the gauge hierarchy problem.
See Cliff Burgess's review
• EFT of codimension-2 objects: Exploring the effective physics for observers localized on codimension-2 branes or cosmic strings, and understanding the consequences for the Hierarchy problem and the Cosmological Constant problem.
Bio: In January 2012, I will be joining the Particle/Astrophysics Group at Case Western Reserve University as an Assistant Professor. Currently I am an FNS assistant professor in the
Cosmology group
at Geneva University. Before that I was a joint postdoctoral researcher at the Perimeter Institute and at McMaster University, and a postdoc at McGill University. I obtained my PhD in 2005 at Cambridge University, UK.
Physics Forums - Integration of dirac delta composed of function of integration variable
kmdouglass Dec18-09 12:59 PM
Integration of dirac delta composed of function of integration variable
Hi all,
I'm working through Chandrasekhar's
Stochastic Problems in Physics and Astronomy
and can not understand the steps to progress through Eq. (66) in Chapter 1. The integral is:
[tex]\prod^{N}_{j=1} \frac{1}{l^{3}_{j}|\rho|}\int^{\infty}_{0} sin(|\rho|r_{j})r_{j}\delta (r^{2}_{j}-l^{2}_{j})dr_{j} = \prod^{N}_{j=1} \frac{sin(|\rho|l_{j})}{|\rho|l_{j}}[/tex]
Could anyone show the steps on how this result was obtained? I am aware of how to simplify a dirac delta that is composed of a function, but it does not lead me to the above result. Thanks.
phsopher Dec18-09 06:51 PM
Re: Integration of dirac delta composed of function of integration variable
Weird, I didn't get that one either. I got
[tex]\prod^{N}_{j=1} \frac{sin(|\rho|l_{j})}{2|\rho|l_{j}^3}[/tex]
diazona Dec18-09 10:07 PM
Re: Integration of dirac delta composed of function of integration variable
Quote by phsopher (Post 2498164)
Weird, I didn't get that one either. I got
[tex]\prod^{N}_{j=1} \frac{sin(|\rho|l_{j})}{2|\rho|l_{j}^3}[/tex]
That seems more reasonable. In the equation posted by the OP, the units are inconsistent between the two sides, so it can't be right.
kmdouglass Dec19-09 01:21 AM
Re: Integration of dirac delta composed of function of integration variable
Yes, you are right about the units. And someone else aside from myself got phsopher's result as well.
A few equations back, the author defines the probability distribution that he is using, and if I integrate over all angles and radial distances, I don't get unity. I think there are significant typos
in this section. Thanks for the help.
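For the record, the composition rule δ(g(r)) = Σ_i δ(r − r_i)/|g′(r_i)|, applied with g(r) = r² − l², which has the single root r = l on (0, ∞) and g′(l) = 2l, gives ∫₀^∞ sin(|ρ|r) r δ(r² − l²) dr = sin(|ρ|l)/2, reproducing phsopher's result once the 1/(l³|ρ|) prefactor is included. A quick numerical sanity check using a Gaussian nascent delta (the values ρ = 1.3, l = 2 are arbitrary):

```python
import numpy as np

rho, l = 1.3, 2.0
eps = 1e-4                               # width of the nascent delta in u = r^2 - l^2
r = np.linspace(0.5, 3.5, 2_000_001)
dr = r[1] - r[0]
# Gaussian approximation of delta(r^2 - l^2)
delta = np.exp(-((r**2 - l**2) ** 2) / (2 * eps**2)) / (np.sqrt(2 * np.pi) * eps)
integral = np.sum(np.sin(rho * r) * r * delta) * dr
print(integral, np.sin(rho * l) / 2)     # the two values agree to high accuracy
```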
Summary: Notes on the Deuring-Heilbronn Phenomenon
Jeffrey Stopple
864 NOTICES OF THE AMS VOLUME 53, NUMBER 8
Analytic number theory studies L-functions, generalizations of the Riemann zeta function ζ(s). It can be difficult to see why this is number theory. In fact, the Class Number Formula (6) of Dirichlet gives the number h(-d) of classes of binary quadratic forms of discriminant -d as the value of such an L-function at s = 1. The location of the zeros is important: since the functions are continuous, the value is influenced by any zero of the function near s = 1. Such a zero would of course contradict the Generalized Riemann Hypothesis (GRH).
The Deuring-Heilbronn phenomenon says that such a counterexample to the GRH for one L-function would influence the horizontal and vertical
A restatement, in terms of the semi-group product of the left-invariant completion of a Polish group, of http://mathoverflow.net/questions/71389
This is a re-statement, of sorts, of Is there a relational countable ultra-homogeneous structure whose countable substructures do not have the amalgamation property?, so far unanswered.
Let $G$ be a Polish group, $d_L$ a compatible left-invariant metric on $G$. This metric is usually not complete, so let $\hat G$ be the completion of $G$ with respect to $d_L$. If $(g_i)$ and $(h_i)$
are Cauchy sequences in $(G,d_L)$ then so is $(g_i h_i)$, endowing $\hat G$ with a semi-group structure.
Since any two left-invariant compatible metrics on $G$ are uniformly equivalent, none of this depends on the precise choice of $d_L$.
Question: say $a,b \in \hat G$. Are there always $c,d \in \hat G$ such that $ca = db$? (No idea why this should be true, but then what is a counter-example?)
Motivation: $G$ can always be viewed as the automorphism group of some complete separable approximately ultra-homogeneous metric structure $M$, and $G$ is a closed subgroup of $S_\infty$ if and only
if $M$ can be taken to be a countable ultra-homogeneous discrete structure (what logicians usually understand by "structure"). Then $\hat G$ is the semi-group of embeddings of $M$ in itself. Now the
question becomes very close (and in the discrete case, possibly equivalent) to the one cited above: can any two copies of $M$ be amalgamated over a common copy of $M$, with the result embeddable in $M$?

lo.logic descriptive-set-theory topological-groups automorphism-groups
1 Answer
In the end it was the original question which was answered first.
The answer to Is there a relational countable ultra-homogeneous structure whose countable substructures do not have the amalgamation property? by Ali Enayat shows that there exists
a countable ultra-homogeneous structure $M$ with embeddings of $f_i\colon M \to M$, $i = 0,1$, which do not amalgamate inside $M$.
Taking $G = \textrm{Aut}(M)$, $G$ is a Polish group (and moreover homeomorphic to a closed subgroup of $S_\infty$), $f_i \in \hat G$, and there are no $g_i \in \hat G$ such that $g_0 f_0 = g_1 f_1$. This gives the desired counter-example.
(Thank you, Ali!)
You are quite welcome (but you did all the work on this one!). – Ali Enayat Jul 30 '11 at 16:13
Journal of Engineering
Volume 2013 (2013), Article ID 937596, 7 pages
Research Article
Coupling between Transverse Vibrations and Instability Phenomena of Plates Subjected to In-Plane Loading
^1Department of Engineering, Institute of Applied Mechanics (IMA), Universidad Nacional del Sur (UNS), Alem 1253, B8000CPB Bahía Blanca, Argentina
^2Consejo Nacional de Investigaciones Científicas y Técnicas, (CONICET), Bahía Blanca, Argentina
Received 27 November 2012; Accepted 31 January 2013
Academic Editor: Gabriele Milani
Copyright © 2013 D. V. Bambill and C. A. Rossit. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and
reproduction in any medium, provided the original work is properly cited.
As it is known, the problems of free transverse vibrations and instability under in-plane loads of a plate are two different technological situations that have similarities in their approach to
elastic solution. In fact, they are two eigenvalue problems in which we analyze the equilibrium situation of the plate in configurations which differ very slightly from the original, undeformed
configuration. They are coupled in the event where in-plane forces are applied to the edges of the transversely vibrating plate. The presence of forces can have a significant effect on structural and
mechanical performance and should be taken into account in the formulation of the dynamic problem. In this study, distributed forces of linear variation are considered and their influence on the
natural frequencies and corresponding normal modes of transverse vibration is analyzed. It also analyzes their impact for the case of vibration control. The forces' magnitude is varied and the first
natural frequencies of transverse vibration of rectangular thin plates with different combinations of edge conditions are obtained. The critical values of the forces which cause instability are also
obtained. Due to the analytical complexity of the problem under study, the Ritz method is employed. Some numerical examples are presented.
1. Introduction
The transverse-free vibrations and buckling of plates which are subjected to edge loads acting in their middle planes are areas of research which have received a great deal of attention in the past
As it was stated experimentally by Hearmon [1] for the case of a beam, bifurcation buckling may be regarded as a special case of the vibration problem, that is, determining the in-plane stresses
which cause vibration frequencies to reduce to zero.
Most of the work has dealt with rectangular plates having uniformly distributed in-plane edge loads. In that case, the governing differential equations of motion and equilibrium have constant
coefficients, yielding exact solutions for frequencies and buckling loads straightforwardly when two opposite edges of the plates are simply supported.
Many researchers have analyzed both the buckling and vibration of rectangular plates subjected to in-plane stress field. Among them, one can mention Kang and Leissa [2]; Leissa and Kang [3]; Bassily
and Dickinson [4]; Dickinson [5]; Kielb and Han [6]; Kaldas and Dickinson [7].
For the linearly varying loading, the governing differential equations have variable coefficients.
Leissa and Kang [3] found exact solutions for the free vibration and buckling problems of the SS-C-SS-C isotropic plate loaded at its simply supported edges by linearly varying in-plane stresses.
They also found the exact solution [8] for the buckling of rectangular plates having linearly varying in-plane loading on two opposite simply supported edges, with different boundary conditions at
the other opposite edges.
Within the realm of the classical theory of plates, the case of buckling and vibrations problems for all the possibilities of boundary conditions and linearly varying in-plane forces offers
considerable difficulty. This is the reason why it is quite common to make use of the Ritz variational method.
2. Approximate Analytical Solution
In the case of a transversely vibrating, thin, isotropic plate subjected to in-plane forces N_x, N_y, and N_xy (Figure 1 and (5)), the maximum value of the potential energy due to bending deformation is

U_max = (D/2) ∫∫ {(∂²W/∂x² + ∂²W/∂y²)² − 2(1 − ν)[(∂²W/∂x²)(∂²W/∂y²) − (∂²W/∂x∂y)²]} dx dy, (1)

where W is the deflection amplitude of the middle plane of the plate, D = Eh³/[12(1 − ν²)] is the well known flexural rigidity, E is the Young modulus, and ν is the Poisson coefficient.

While the maximum of the kinetic energy is

T_max = (ρhω²/2) ∫∫ W² dx dy, (2)

where ρ is the density of the plate material, ω is the circular frequency, and h is the thickness of the plate.

And the maximum potential energy of the internal stresses caused by the in-plane loading is

V_max = (1/2) ∫∫ [N_x(∂W/∂x)² + N_y(∂W/∂y)² + 2N_xy(∂W/∂x)(∂W/∂y)] dx dy. (3)
The lengths of the sides of the rectangular plate are a in the x direction and b in the y direction. The coordinates are written in the dimensionless form as follows:

X = x/a,  Y = y/b. (4)

And the in-plane forces are expressed as linear functions of the dimensionless coordinates (Bambill et al. [9]) in (5).

Then, the governing functional of the system is

J[W] = U_max + V_max − T_max. (6)

Equation (6) satisfies, if W is the exact solution, the stationarity condition

δJ[W] = 0. (7)
Following the Ritz method, the expression of the deflection of the plate is approximated in the form of a truncated series:

W(X, Y) ≈ Σ_{m=1}^{M} Σ_{n=1}^{N} A_mn X_m(X) Y_n(Y), (8)

where X_m and Y_n are the characteristic functions for the normal modes of vibration of beams with end conditions nominally similar to those of the opposite edges of the plate in each coordinate direction [10]. Consequently, they satisfy the essential boundary conditions, as the method requires.
The variational equation (7) is replaced by the homogeneous linear system of equations

∂J/∂A_mn = 0,  m = 1, …, M;  n = 1, …, N, (9)

which, written in nondimensional variables (10), finally yields a homogeneous linear system of equations in terms of the A_mn parameters.
The nontriviality condition of the system (10) requires the determinant of its coefficient matrix to be zero (11), where the Ω are the frequency coefficients.
The elements of the matrices involved in (11) are given by (12), where the aspect ratio a/b appears (13), together with a factor that indicates the magnitude of the in-plane loading system, regarding the relative values of the forces (14).
As it is known, the condition in (11) yields the critical value of the in-plane loading.
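As a sanity check on this formulation (a sketch, not the paper's equations (12): the symbols and the uniform-load case are assumptions), for a fully simply supported plate under a uniform compressive load N_x the sine modes satisfy the boundary conditions exactly and the Ritz system becomes diagonal, so each frequency is available in closed form and drops to zero exactly at the buckling load of its mode — the coupling between vibration and instability the paper studies:

```python
import numpy as np

def ssss_omega(m, n, a, b, D, rho_h, Nx=0.0):
    """Natural frequency (rad/s) of mode (m, n) of an SS-SS-SS-SS plate of
    sides a, b, flexural rigidity D and mass per unit area rho_h, under a
    uniform compressive in-plane load Nx (classical thin-plate theory)."""
    k = (m * np.pi / a) ** 2 + (n * np.pi / b) ** 2
    val = D * k**2 - Nx * (m * np.pi / a) ** 2
    return float(np.sqrt(max(val, 0.0) / rho_h))

def ssss_nx_critical(m, n, a, b, D):
    """Uniform in-plane load at which mode (m, n) buckles (frequency -> 0)."""
    k = (m * np.pi / a) ** 2 + (n * np.pi / b) ** 2
    return D * k**2 / (m * np.pi / a) ** 2
```

For a unit square plate with D = rho_h = 1, the fundamental frequency coefficient is 2π² and it vanishes at Nx = 4π², mirroring the role of condition (11).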
3. Numerical Evaluations
Hearmon [1] has experimented on a fixed-free strip. Admittedly, the problem is analytically simpler in the case of one-dimensional domains. As an example, let us try with a pinned-pinned transversely vibrating beam, subjected to an axial compressive force P. The expression of the frequency coefficient is

Ω_n = ω_n L² √(ρA/(EI)) = n²π² √(1 − PL²/(n²π²EI)), (15)

where ρ is the density of the material, A is the cross-section, L is the length, and EI is the flexural rigidity of the beam.

All the Euler buckling loads are determined making zero expression (15). For n = 1, the critical buckling load of the beam, P_cr = π²EI/L², is obtained.
Plotting the values of the first three frequency coefficients depending upon the ratio yield regular curves as is shown in Figure 2. The presence of the compressive axial load does not alter the
order of the modal shapes of the beam.
In the case of a plate, in general, and due to the bidimensional behavior induced by the torsional rigidity, the compressive in-plane load may alter both the order and shape of the modal shapes
associated to each natural frequency.
This situation has an important technological significance from the point of view of vibration control.
Certainly, the modal shape of a natural resonant frequency must be known in order to suppress it. In the case of in-plane loading, this shape can be different from the expected one.
Due to the quantity and variability of the parameters involved in the description of the behaviour of these kinds of structures, just a few representative cases will be considered to demonstrate the
convenience of the procedure and the importance of the situation.
All the values are determined with the truncated series (8).
Table 1 shows the values of the first natural frequency coefficients for a CCFF plate subjected to a general in-plane loading: a linear load in the x direction (bending moment), a constant load in the y direction, and a constant shear force.
In order to show the influence of the in-plane loading, the next two examples are presented.
Table 2 shows the natural frequency coefficients for a C-C-SS-SS square plate under uniform compression in the direction (, , and ).
Figure 3 shows that a minimal presence of in-plane loading (10% of the critical value) dramatically modifies the mode shapes, while changes in the values of frequencies may not be noticed (0.3% in
the sixth frequency). It is important to point out that such a small load can originate from thermal variations and from restrictions on in-plane displacements imposed by the external supports.
Finally, Table 3 shows the results for a rectangular C-C-SS-SS plate subjected to shear in-plane forces.
Figure 4 shows that the third and fourth natural frequencies interchange their normal modes as increases. This situation is noticeable from Figure 5.
This means that for a given value of , between 0.25 and 0.4 of the critical value, there are two normal modes for the same natural frequency (repeated frequency). This is an important point in
vibration control, since when repeated frequencies arise in a system, the related vibration mode shape cannot be uniquely determined. Any linear combination of the modes is still valid for the
repeated frequency.
In order to evaluate the accuracy of the expounded procedure, comparison is made with the results obtained in [3] for a SS-C-SS-C plate loaded at its simply supported edges by linearly varying
in-plane stresses (Tables 4 and 5).
In Table 4, values of are compared for three different cases of the direction load: constant (), linear with null value at one extreme (), and bending moment (), and different aspect ratios of the
plate. A convergence study is also made. As can be seen, taking provides excellent accuracy from an engineering viewpoint.
4. Conclusions
The classical, variational method of Ritz has been successfully used in the present study to obtain an approximate, yet quite accurate, solution to a difficult elastodynamics problem.
Natural frequencies and mode shapes of transverse vibration are obtained for a meaningful combination of the boundary conditions of a thin plate subjected to general in-plane loads. The critical
values of the in-plane forces which cause instability of the plates are also obtained.
The obtained values are the outcome of an algorithm that is relatively simple to implement [11] and allows these problems to be studied with only the assistance of a PC.
Additional complexities like orthotropic material characteristics can be taken into account [12].
The agreement with results available in the literature is excellent. Nevertheless, it is also possible to increase the number of terms in the summation in (8) to further increase the accuracy.
No claim of originality is made, but it is hoped that the present work draws attention to the effect that the presence of a plane stress state may have on the effectiveness of vibration control of plates.
The present work has been sponsored by the Secretaría General de Ciencia y Tecnología of Universidad Nacional del Sur, UNS, at the Department of Engineering and by Consejo Nacional de Investigaciones
Científicas y Técnicas, CONICET. The authors wish to thank Dr. D. H. Felix from Universidad Nacional del Sur.
1. R. F. S. Hearmon, “The frequency of vibration and the elastic stability of a fixed-free strip,” British Journal of Applied Physics, vol. 7, no. 11, pp. 405–407, 1956.
2. J.-H. Kang and A. W. Leissa, “Vibration and buckling of SS-F-SS-F rectangular plates loaded by in-plane moments,” International Journal of Stability and Dynamics, vol. 1, no. 4, pp. 527–543, 2001.
3. A. W. Leissa and J.-H. Kang, “Exact solutions for vibration and buckling of an SS-C-SS-C rectangular plate loaded by linearly varying in-plane stresses,” International Journal of Mechanical Sciences, vol. 44, no. 9, pp. 1925–1945, 2002.
4. S. F. Bassily and S. M. Dickinson, “Buckling and lateral vibration of rectangular plates subject to in-plane loads—a Ritz approach,” Journal of Sound and Vibration, vol. 24, no. 2, pp. 219–239, 1972.
5. S. M. Dickinson, “The buckling and frequency of flexural vibration of rectangular isotropic and orthotropic plates using Rayleigh's method,” Journal of Sound and Vibration, vol. 61, no. 1, pp. 1–8, 1978.
6. R. E. Kielb and L. S. Han, “Vibration and buckling of rectangular plates under in-plane hydrostatic loading,” Journal of Sound and Vibration, vol. 70, no. 4, pp. 543–555, 1980.
7. M. M. Kaldas and S. M. Dickinson, “Vibration and buckling calculations for rectangular plates subject to complicated in-plane stress distributions by using numerical integration in a Rayleigh-Ritz analysis,” Journal of Sound and Vibration, vol. 75, no. 2, pp. 151–162, 1981.
8. J.-H. Kang and A. W. Leissa, “Exact solutions for the buckling of rectangular plates having linearly varying in-plane loading on two opposite simply supported edges,” International Journal of Solids and Structures, vol. 42, no. 14, pp. 4220–4238, 2005.
9. D. V. Bambill, C. A. Rossit, and D. H. Felix, “Comments on ‘Buckling behavior of a graphite/epoxy composite plate under parabolic variation of axial loads’,” International Journal of Mechanical Sciences, vol. 47, no. 9, pp. 1473–1474, 2005.
10. R. P. Felgar Jr., Formulas for Integrals Containing Characteristic Functions of a Vibrating Beam, Circular No. 14, The University of Texas Publication, 1951.
11. D. H. Felix, D. V. Bambill, and C. A. Rossit, “Desarrollo de un algoritmo de cálculo para la implementación del método de Rayleigh-Ritz en el cálculo de frecuencias naturales de vibración de placas rectangulares con complejidades diversas” [Development of a computational algorithm implementing the Rayleigh-Ritz method for the calculation of natural vibration frequencies of rectangular plates with various complexities], Revista Internacional de Métodos Numéricos para Cálculo y Diseño en Ingeniería, vol. 20, no. 2, pp. 123–138, 2004.
12. D. H. Felix, D. V. Bambill, and C. A. Rossit, “A note on buckling and vibration of clamped orthotropic plates under in-plane loads,” Structural Engineering and Mechanics, vol. 39, no. 1, pp. 115–123, 2011. | {"url":"http://www.hindawi.com/journals/je/2013/937596/","timestamp":"2014-04-16T19:10:45Z","content_type":null,"content_length":"218064","record_id":"<urn:uuid:2505d469-7063-4273-b6cd-2013ce79f076>","cc-path":"CC-MAIN-2014-15/segments/1398223202548.14/warc/CC-MAIN-20140423032002-00532-ip-10-147-4-33.ec2.internal.warc.gz"}
Equity Allocation vs. Age
Most commentators advise people to reduce the percentage of stocks in their portfolios as they age. Some popular rules of thumb are to make the stock percentage 100 minus your age or 120 minus your age.
Larry Swedroe, in his excellent book “Rational Investing in Irrational Times”, offers his own advice. Swedroe expresses his advice in terms of how long until you need the money (time horizon) rather than age. (Canadian Capitalist wrote on this subject last month.)
Here is Swedroe’s table of time horizon vs. percentage in stocks:
0-3 years: 0%
4 years: 10%
5 years: 20%
6 years: 30%
7 years: 40%
8 years: 50%
9 years: 60%
10 years: 70%
11-14 years: 80%
15-19 years: 90%
20 years or longer: 100%
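As a quick illustration (my own encoding, not from the book), Swedroe's table can be written as a lookup function:

```python
def swedroe_stock_pct(years_to_goal):
    """Suggested stock percentage for a time horizon in years,
    per Swedroe's table above (horizons rounded down to whole years)."""
    y = int(years_to_goal)
    if y >= 20:
        return 100
    if y >= 15:
        return 90
    if y >= 11:
        return 80
    if y <= 3:
        return 0
    return (y - 3) * 10          # 4..10 years -> 10%..70%

print(swedroe_stock_pct(7))      # 40
```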
How do we test this advice?
Unlike almost everything else in his book, Swedroe offers this table with no analysis of where the numbers came from. I decided to try to come up with my own answer to this question.
It is surprisingly difficult to come up with criteria for optimizing a portfolio for some end time. The best I have come up with so far is to optimize a portfolio for a particular target dollar amount.
This is similar, but admittedly not exactly the same thing.
So, given a particular dollar amount you are trying to save up, what portfolio mix should you use to minimize the expected time before reaching this goal? Of course, the goal amount should grow with
inflation so that you will end up with some fixed amount of purchasing power regardless of how long it takes.
I assumed you start with an initial investment without adding any more money along the way. I also assumed that you were limited to stocks, bonds, and risk-free short-term government debt with
returns and volatility as compiled by John Norstad in this paper.
The results
Before letting my computer run for a couple of days to get the answer, I guessed that when you were far from your goal, the portfolio would be heavy with stocks and would start shifting money to
bonds and risk-free investments as the goal came nearer.
When the results came in, my guess was more or less correct, but not in the way I expected. The optimal portfolio mix is 100% stocks until you get to 93% of the goal portfolio value. From 93% to
99.8% of the goal, stocks are smoothly shifted into risk-free investments. From 99.8% of the goal onwards, the portfolio is entirely risk-free investments.
At a few points the optimal portfolio had 5% bonds, but for the most part, bonds were completely absent.
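A sketch of the resulting policy as a function of progress toward the goal; the linear interpolation between the 93% and 99.8% break points is my assumption standing in for "smoothly shifted":

```python
def stock_fraction(portfolio_value, goal):
    """Optimal stock weight found by the simulation: all stocks far from
    the goal, all risk-free just before it, linear in between (assumed)."""
    progress = portfolio_value / goal
    if progress <= 0.93:
        return 1.0
    if progress >= 0.998:
        return 0.0
    return (0.998 - progress) / (0.998 - 0.93)
```

For example, a portfolio at 96% of its goal would hold a bit over half stocks under this reading of the result.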
Expressed in terms of time, the portfolio mix is 100% stocks until 18 months from the goal. Then stocks are sold steadily until the goal is two months away, and after that everything is in risk-free investments.
What does this mean?
Obviously the answer I came up with is radically different from Swedroe’s table. My table would look something like this:
0-2 months: 0%
6 months: 25%
10 months: 50%
14 months: 75%
18 months or longer: 100%
I don’t recommend using this table. I don’t think you should have any money you will need within 3 years in stocks. A retired person should have at least 3 years worth of living expenses in fixed
income investments. This gives you time for planning and adjusting to big upward or downward swings in the stock market.
However, I think it is likely that Swedroe’s table is too conservative. I would like to have seen how he came up with it.
12 comments:
1. Hi Michael,
great post as usual.
When doing your number crunching, what kind of equity portfolio did you use? Do you use one basic equity index like the S&P500 or a globally diversified one?
2. Jay:
The numbers I used came from Norstad's paper which is based on the S&P 500 from 1926 to 1994. I'd be happy to re-run the simulation with other numbers, but I haven't dug around enough to find
parameters for global equities over a long enough period.
3. Be careful using expected values when devising a strategy for something you'll only do once. The entire notion of "expected value" is based on the law of large numbers; if you're talking about
your retirement plans, you only get one shot, and so you can't use expected values to combine risk and reward into a single metric.
4. Patrick:
I agree with your comment, but I'm not exactly sure how it applies to my post. All my computations took full account of the variability of returns.
Since I computed a strategy that minimized the expected time to reaching a target amount of money, perhaps that is what you are referring to. It would be interesting to key on say the 99th
percentile of time to reaching the target and devise a strategy that minimizes this time. I'm not immediately sure of how to do this, but I'll give it some thought.
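One way to make that concrete is to simulate the whole distribution of the time to reach the target and read off a high percentile alongside the mean. The return model below is an illustrative assumption (lognormal annual real stock returns, a 1% risk-free rate), not the Norstad parameters used in the post:

```python
import numpy as np

# Simulate the time needed to double a portfolio under a fixed stock
# weight, then report both the mean and the 99th percentile.
rng = np.random.default_rng(1)

def years_to_target(stock_weight, start=50.0, target=100.0, trials=5_000):
    years = np.empty(trials)
    for t in range(trials):
        v, y = start, 0
        while v < target:
            # assumed lognormal real stock return: mu=7%, sigma=20%
            stock_ret = np.exp(rng.normal(0.07, 0.20)) - 1.0
            v *= 1.0 + stock_weight * stock_ret + (1.0 - stock_weight) * 0.01
            y += 1
        years[t] = y
    return years.mean(), np.percentile(years, 99)

mean_t, p99_t = years_to_target(1.0)
```

Because the time-to-goal distribution is strongly right-skewed, the 99th percentile sits far above the mean, which is exactly why a strategy chosen to minimize the expected time can still look bad to an unlucky investor.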
5. Hi Michael,
Well, I'm not sure what I'm suggesting either. Maybe start with your minimum-expected-time as a baseline, then use a confidence-interval kind of calculation that says "this strategy gets you
within X% of the minimum expected time Y% of the time" and then fix X (or Y) and maximize Y (or minimize X). That math would be way beyond me but I imagine it's possible in principle.
6. Patrick:
I've got an idea to make it possible to compute an answer in a reasonable amount of time. Look for something next week on this. I'll get my PC crunching on it this weekend.
7. One way to look at Patrick's question would be to test your theoretical asset allocation for people retiring every quarter from 1926 to 1994. How many retirees of the roughly 270 would be up or
down x% from the time they started shifting away from equities, to the time they retire? I'd hate to be the guy with the target retirement date of Sept '01. Would that guy be better off using the more traditional & gradual rotation away from equities?
8. AAI:
Testing on historical data is an interesting idea. Maybe I'll try to put this together sometime soon.
As for the guy who retired in September 2001, as long as he had 3-5 years worth of money he needed to live on in fixed income, he could have weathered the storm reasonably well. It's always
possible to find a period where some strategy works well. Even the all penny stock portfolio works if you choose the right penny stock at the right time.
9. I guess now, just a few short months later, we're pitying the guy who retired in October 2008...
10. Patrick: ... unless he cashed out all his stocks in July! I'm not a fan of selling all stocks at the start of retirement, but it certainly makes sense to have at least 3 years of living expenses
in fixed income. At least then you can plan to make 3 years worth of cash stretch to cover 4 years and avoid selling any stocks for at least a year.
11. I just learned of your blog and was checking it out and found this post.
Here is how I came up with the table.
First, you should not have money in the market if you know that you will need your money with certainty within a three year period. Just too risky, as the last few years have demonstrated.
Second, at horizons of 10 years stocks had outperformed bonds about 70% of the time. So that seemed like a reasonable guideline as the MOST risk one should take.
Third, at 20 years stocks have beaten bonds almost all the time. So you could take UP TO 100 percent risk. But one should only do so if you also had the ability and need to take it.
For years 4-10 I simply added 10% a year. Then beyond 10 years added another 10 percent in two more brackets.
I hope that is helpful
BTW-feel free to email anytime at lswedroe@bamstl.com. Always happy to answer questions from readers.
And note that I have a new blog that I hope you will find of interest at http://moneywatch.bnet.com/investing/blog/wise-investing/?tag=content;col2
12. Larry: Thanks for the information about how you came up with the table. I've been following and enjoying your blog for a while now. | {"url":"http://www.michaeljamesonmoney.com/2008/03/equity-allocation-vs-age.html","timestamp":"2014-04-18T08:25:35Z","content_type":null,"content_length":"139487","record_id":"<urn:uuid:a9e9f803-47c5-4e55-b181-879000cccd71>","cc-path":"CC-MAIN-2014-15/segments/1397609533121.28/warc/CC-MAIN-20140416005213-00298-ip-10-147-4-33.ec2.internal.warc.gz"} |
Wolfram Demonstrations Project
Ratings of NFL Teams from 1985 to 2012
There are numerous ways to decide which teams were better than others in a given year, and this Demonstration shows four methods across 28 seasons of American professional football. Massey's method
relies on points scored and points allowed, and the overall difficulty of the schedule. Colley's method also incorporates the strength of the schedule, but relies only on wins and not on points. One
thing is clear: the 2007 and 2012 Giants are two of the worst teams to ever win the Super Bowl.
Let be the Laplacian of the game graph; that is, is the number of games played between team and team , and is the number of games played by team . Let be the column vector whose component is the
number of wins less the number of losses of team . Let be the column vector whose component is the number of points scored less the number of points allowed by team .
The Massey ratings are the vector that solves the equation and whose components have mean 21 and standard deviation 7. The Colley ratings are given by the vector that solves the equation . The
Colleyized Massey ratings are given by the vector that solves the equation . The Masseyized Colley ratings are the vector that solves the equation .
These four methods are detailed in the first chapters of [1].
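A minimal sketch of the two base rating systems in their standard form, as described in Langville and Meyer; the function name and the three-team example schedule are mine, not the Demonstration's:

```python
import numpy as np

def massey_colley(n_teams, games):
    """games: iterable of (winner, loser, winner_pts, loser_pts) tuples."""
    M = np.zeros((n_teams, n_teams))   # Laplacian of the game graph
    p = np.zeros(n_teams)              # cumulative point differentials
    b = np.ones(n_teams)               # Colley RHS: 1 + (wins - losses)/2
    for w, l, pw, pl in games:
        M[w, w] += 1; M[l, l] += 1
        M[w, l] -= 1; M[l, w] -= 1
        p[w] += pw - pl; p[l] += pl - pw
        b[w] += 0.5; b[l] -= 0.5
    # Massey: M r = p is singular (rows sum to 0); pin one row to fix it.
    Mm, pm = M.copy(), p.copy()
    Mm[-1, :] = 1.0; pm[-1] = 0.0
    massey = np.linalg.solve(Mm, pm)
    # Rescale so the ratings have mean 21 and standard deviation 7.
    massey = 21 + 7 * (massey - massey.mean()) / massey.std()
    # Colley: (2I + M) r = b is nonsingular as it stands.
    colley = np.linalg.solve(2 * np.eye(n_teams) + M, b)
    return massey, colley

# Three hypothetical teams: 0 beats 1 and 2, 1 beats 2.
m, c = massey_colley(3, [(0, 1, 28, 14), (1, 2, 21, 17), (0, 2, 35, 10)])
```

The "Colleyized" and "Masseyized" variants swap the right-hand sides between the two linear systems in the same way.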
[1] A. N. Langville and C. D. Meyer, Who's #1?, Princeton, NJ: Princeton University Press, 2012. | {"url":"http://demonstrations.wolfram.com/RatingsOfNFLTeamsFrom1985To2012/","timestamp":"2014-04-17T16:14:40Z","content_type":null,"content_length":"44527","record_id":"<urn:uuid:e223fced-f237-4957-9728-b6109deaf674>","cc-path":"CC-MAIN-2014-15/segments/1397609530136.5/warc/CC-MAIN-20140416005210-00456-ip-10-147-4-33.ec2.internal.warc.gz"}
Download Points Calculator 1.2.0 Free
calculate the points for an item of food...
NOTE: You are now downloading Points Calculator 1.2.0. This download is provided to you free of charge.
HINT: You can use WinZip or WinRAR to open rar, zip, and iso files.
Points Calculator 1.2.0 description
If you are on Weight Watchers and want a portable program that can fit anywhere, this program is for you. It is a simple program to calculate the points for an item of food.
Points Calculator 1.2.0 Copyright
WareSeeker.com do not provide cracks, serial numbers etc for Points Calculator 1.2.0. Any sharing links from rapidshare.com, yousendit.com or megaupload.com are also prohibited.
Related Software
Spine Calculator will calculate the spine of a book given the paper thickness and number of pages
Free Download
This calculator simply calculates the money you can earn. It is localized in English and Czech, so understanding it should not be a problem.
Free Download
The Interest Calculator can calculate virtually all types of interest.
Free Download
With the help of VAT Calculator you can easily calculate VAT and Tax
Free Download
Winall Calculator will calculate the possibility of a Sports Betting Arbitrage Profit. Simply enter the number of possible outcomes and their respective odds, click calculate, and you will know whether a risk-free profit exists. No more messy spreadsheets.
Free Download
Compound Interest Calculator - calculate the future value of your investment
Free Download
Scan Calculator is a program to calculate scanning resolutions
Free Download | {"url":"http://wareseeker.com/download/points-calculator-1.2.0.rar/1fe71440e","timestamp":"2014-04-18T16:18:30Z","content_type":null,"content_length":"27156","record_id":"<urn:uuid:c9fc294f-4063-43c8-ade1-fc4ac7a7461a>","cc-path":"CC-MAIN-2014-15/segments/1398223205137.4/warc/CC-MAIN-20140423032005-00525-ip-10-147-4-33.ec2.internal.warc.gz"} |
Importance sampling and the two-locus model with subdivided population structure
Advances in Applied Probability
Robert C. Griffiths, Paul A. Jenkins, and Yun S. Song
The diffusion-generator approximation technique developed by De Iorio and Griffiths (2004a) is a very useful method of constructing importance-sampling proposal distributions. Being based on general
mathematical principles, the method can be applied to various models in population genetics. In this paper we extend the technique to the neutral coalescent model with recombination, thus obtaining
novel sampling distributions for the two-locus model. We consider the case with subdivided population structure, as well as the classic case with only a single population. In the latter case we also
consider the importance-sampling proposal distributions suggested by Fearnhead and Donnelly (2001), and show that their two-locus distributions generally differ from ours. In the case of the
infinitely-many-alleles model, our approximate sampling distributions are shown to be generally closer to the true distributions than are Fearnhead and Donnelly's.
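The coalescent proposals themselves are beyond a short example, but the importance-sampling idea the paper builds on can be illustrated with a generic toy: estimate an expectation under a target density f using draws from a proposal g, reweighted by f/g (the normal target, wider normal proposal, and h(x) = x² below are all illustrative assumptions, not the paper's model):

```python
import numpy as np

# Toy importance-sampling estimator of E_f[h(X)]:
# sample X ~ g, weight by w = f(X)/g(X), and average w * h(X).
rng = np.random.default_rng(0)

def f(x):  # target density: standard normal
    return np.exp(-x**2 / 2) / np.sqrt(2 * np.pi)

def g(x):  # proposal density: N(0, 2^2), wider than the target
    return np.exp(-x**2 / 8) / np.sqrt(8 * np.pi)

x = rng.normal(0.0, 2.0, size=200_000)   # draws from the proposal
w = f(x) / g(x)                           # importance weights
estimate = np.mean(w * x**2)              # E_f[X^2] = 1
```

A good proposal keeps the weights from varying wildly; the paper's contribution is a principled way of constructing such proposals for coalescent histories under recombination.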
• Bahlo, M. and Griffiths, R. C. (2000). Inference from gene trees in a subdivided population. Theoret. Pop. Biol. 57, 79--95.
• Beaumont, M. (1999). Detecting population expansion and decline using microsatellites. Genetics 153, 2013--2029.
• Cornuet, J. M. and Beaumont, M. A. (2007). A note on the accuracy of PAC-likelihood inference with microsatellite data. Theoret. Pop. Biol. 71, 12--19.
• De Iorio, M. and Griffiths, R. C. (2004a). Importance sampling on coalescent histories. I. Adv. Appl. Prob. 36, 417--433.
• De Iorio, M. and Griffiths, R. C. (2004b). Importance sampling on coalescent histories. II: subdivided population models. Adv. Appl. Prob. 36, 434--454.
• Ethier, S. N. and Griffiths, R. C. (1990). On the two-locus sampling distribution. J. Math. Biol. 29, 131--159.
• Fearnhead, P. and Donnelly, P. (2001). Estimating recombination rates from population genetic data. Genetics 159, 1299--1318.
• Fearnhead, P. and Smith, N. G. C. (2005) A novel method with improved power to detect recombination hotspots from polymorphism data reveals multiple hotspots in human genes. Amer. J. Human
Genetics 77, 781--794.
• Golding, G. B. (1984). The sampling distribution of linkage disequilibrium. Genetics 108, 257--274.
• Griffiths, R. C. and Marjoram, P. (1996). Ancestral inference from samples of DNA sequences with recombination. J. Comput. Biol. 3, 479--502.
• Griffiths, R. C. and Tavaré, S. (1994a). Ancestral inference in population genetics. Statist. Sci. 9, 307--319.
• Griffiths, R. C. and Tavaré, S. (1994b). Sampling theory for neutral alleles in a varying environment. Proc. R. Soc. London B 344, 403--410.
• Griffiths, R. C. and Tavaré, S. (1994c). Simulating probability distributions in the coalescent. Theoret. Pop. Biol. 46, 131--159.
• Hudson, R. R. (2001). Two-locus sampling distributions and their application. Genetics 159, 1805--1817.
• Kuhner, M. K., Yamato, J. and Felsenstein, J. (1995). Estimating effective population size and mutation rate from sequence data using Metropolis--Hastings sampling. Genetics 140, 1421--1430.
• Kuhner, M. K., Yamato, J. and Felsenstein, J. (2000). Maximum likelihood estimation of recombination rates from population data. Genetics 156, 1393--1401.
• Li, N. and Stephens, M. (2003). Modeling linkage disequilibrium and identifying recombination hotspots using single-nucleotide polymorphism data. Genetics 165, 2213--2233.
• McVean, G., Awadalla, P. and Fearnhead, P. (2002). A coalescent-based method for detecting and estimating recombination from gene sequences. Genetics 160, 1231--1241.
• McVean, G. et al. (2004). The fine-scale structure of recombination rate variation in the human genome. Science 304, 581--584.
• Myers, S. et al. (2005). A fine-scale map of recombination rates and hotspots across the human genome. Science 310, 321--324.
• Stephens, M. and Donnelly, P. (2000). Inference in molecular population genetics. J. R. Statist. Soc. Ser. B 62, 605--655.
• Wilson, I. J. and Balding, D. J. (1998). Genealogical inference from microsatellite data. Genetics 150, 499--510. | {"url":"http://projecteuclid.org/euclid.aap/1214950213","timestamp":"2014-04-21T02:01:05Z","content_type":null,"content_length":"44683","record_id":"<urn:uuid:e223fced-f237-4957-9728-b6109deaf674>","cc-path":"CC-MAIN-2014-15/segments/1398223202774.3/warc/CC-MAIN-20140423032002-00512-ip-10-147-4-33.ec2.internal.warc.gz"} |
Analytic geometry
May 19th 2009, 11:23 AM #1
Jan 2009
Analytic geometry
The line segment joining A(1,3) and B(-2,-1) is extended through each end by a distance equal to its original length. Find the co-ordinates of the new endpoints.
I don't know what to do.
May 19th 2009, 11:30 AM #2
Work in terms of vectors.
The coordinates of a vector are the differences of the corresponding coordinates of its endpoints:
$\vec{AB} ~:~ (x_B-x_A ~,~y_B-y_A)=(-3,-4)$
And $\vec{CA}=\vec{AB}=\vec{BD}$
That should be enough ^^
Seeing your other thread, you can also work with midpoints.
A is the midpoint of CB. Find the coordinates of C.
B is the midpoint of AD. Find the coordinates of D.
(draw a sketch if you can't see what I'm talking about) | {"url":"http://mathhelpforum.com/geometry/89674-anayltic-geometry.html","timestamp":"2014-04-20T13:30:14Z","content_type":null,"content_length":"34074","record_id":"<urn:uuid:f2fe7f16-68d3-4f8a-b040-4057914a0184>","cc-path":"CC-MAIN-2014-15/segments/1397609538787.31/warc/CC-MAIN-20140416005218-00548-ip-10-147-4-33.ec2.internal.warc.gz"}
When buckyballs go quantum
It’s widely believed that, whereas the macroscopic world is governed by the intuitive and predictable rules of classical mechanics, the nanoscale world operates in an anarchy of quantum weirdness . I
explained here why this view isn’t right; many changes in material behaviour at small scales have their origin in completely classical physics. But there’s another way of approaching this question,
which is to ask what you would have to do to be able to see a nanoscale particle behaving in a quantum mechanical way. In fact, this needn’t be a thought experiment; Anton Zeilinger at the University
of Vienna specialises in experiments about the foundations of quantum mechanics, and one of the themes of his research is in finding out how large an object he can persuade to behave quantum
mechanically. In this context, the products of nanotechnology are large, not small, and among the biggest things he’s looked at are fullerene molecules – buckyballs. The results are described in this
paper on the interference of C70 molecules.
What Zeilinger is looking for, as the signature of quantum mechanical behaviour, is interference. Quantum interference is that phenomenon which arises when the final position of a particle depends,
not on the path it’s taken, but on all the paths it could have taken. Before the position of the particle is measured, the particle doesn’t exist at a single place and time; instead it exists in a
quantum state which expresses all the places at which it could potentially be. But it isn’t just measurement which forces the particle (to anthropomorphise) to make up its mind where it is; if it
collides with another particle or interacts with some other kind of atom, then this leads to the phenomenon known as decoherence, by which the quantum weirdness is lost and the particle behaves like
a classical object. To avoid decoherence, and see quantum behaviour, Zeilinger’s group had to use diffuse beams of particles in a high vacuum environment. How good a vacuum do they need? By adding
gas back into the vacuum chamber, they can systematically observe the quantum interference effect being washed out by collisions. The pressures at which the quantum effects vanish are around one
billionth of atmospheric pressure. Now we can see why nanoscale objects like bucky-balls normally behave like classical objects, not quantum mechanical ones. The constant collisions with surrounding
molecules completely wash out the quantum effects.
What, then, of nanoscale objects like quantum dots, whose special properties do result from quantum size effects? What’s quantum mechanical about a quantum dot isn’t the dot itself, it’s the
electrons inside it. Actually, electrons always behave in a quantum mechanical way (explaining why this is so is a major part of solid state physics), but the size of the quantum dot affects the
quantum mechanical states that the electrons can take up. The nanoscale particle that is the quantum dot itself, in spite of its name, remains resolutely classical in its behaviour.
I have an historical question to ask. Then some computational chemistry questions.
First, when Eric Drexler came out in the late 1980s/early 1990s with Nanotech, one of the most persistent criticisms was that quantum rules were fundamental at the nanolevel. Why was this false argument ever put up by serious scientists?
Now for the technical questions. The main difference between quantum systems and classical systems is their phase spaces. Each quantum particle has an enormous number of states, and to do quantum calculations for molecules, one has to take into account the interactions of ALL of these states. For classical systems, you only have to do calculations at the particle level.
However, you have indicated that the properties of molecules are essentially classical unless one is working with isolated electrons!
My 3 questions are:
1. Why are so many chemists doing computational quantum chemistry if molecules are classical!
2. Classical systems are of course mostly nonlinear. However, the biggest reason you give for the failure of Drexlerian Nanotech is that the nanoworld is governed by brownian motion. But Brownian
motion occurs due to forces which are quantum in origin (van der Waals, London forces)! I understand that there is a classical theory of Brownian motion, but this was for the behaviour of pollen in
solution, not nanoparticles.
3. Finally, in what domain is the main difficulties of carrying out Nanomanipulations, is it in the Classical or Quantum domains?
An amateur mathematician
Good questions all.
Firstly, why do quantum chemistry? What sticks atoms together to make molecules are electrons, and as I said, electrons always are quantum mechanical objects. So, to understand the forces that hold
molecules and solids together, you do need to use quantum mechanics.
Secondly, what is the origin of Brownian motion? This is classical, whether for pollen or any other nanoparticles – the physics was worked out by Einstein and Smoluchowski. There are quantum
corrections to the classical theory of Brownian motion which are important at low temperature. How low is low here depends on the characteristic frequency of the vibrations. These may be significant
in calculating some of the higher frequency thermally excited vibrations of very stiff materials.
However, you are right to say that the origins of van der Waals forces are quantum mechanical in character.
Now to phase space – actually statistical mechanics can be done either on the
basis of quantum mechanics or classical mechanics. In fact, the quantum version is actually easier because you only have to sum over discrete quantum states. In classical stat mech, you have to do
integrals over a multidimensional phase space (how many dimensions is this space? For N particles, it’s 6N dimensional – 3N for all the position coordinates and 3N for all the momenta). So it isn’t
actually clear that life is much easier for classical systems, at least when they get to be non-linear.
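The contrast between the two formulations can be seen in a one-particle toy: the quantum partition function of a harmonic oscillator is a sum over discrete levels, while the classical one is a phase-space integral (in units where hbar = omega = k_B = 1, my choice for brevity):

```python
import numpy as np

# Quantum partition function: a discrete sum over energy levels
# E_n = (n + 1/2), truncated once the terms are negligible.
def z_quantum(T, n_max=2000):
    n = np.arange(n_max)
    return np.sum(np.exp(-(n + 0.5) / T))

# Classical partition function: the phase-space integral
# (1/h) * integral of exp(-beta * (p^2/2m + m*omega^2*x^2/2)) dp dx,
# which evaluates to kT/(hbar*omega) = T in these units.
def z_classical(T):
    return T
```

At high temperature the discrete sum converges to the classical integral; at low temperature the two differ sharply because the discreteness of the levels matters, which is the quantum-correction regime mentioned above for Brownian motion.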
I can’t answer the historical question – I’m only responsible for my own criticisms of Drexler!
Thank you for your reply.
I have now a good think about Quantum versus Classical representations.
I have come to the conclusion that the main problem is the atomic model of chemistry itself!
The reason that Drexlerians are so convinced of their position is that the visual picture of atoms stuck together with elastic forces is so powerful that it leads to serious oversimplifications.
Now is this a correct description of your position on this?
Molecules are better thought of as many electron systems (as opposed to atomic systems) which can be modeled via classical or quantum methods given the problem at hand.
This means that Drexler’s naive pictures of the nanoscale is only useful if one can ignore electron / electron interactions which Drexler in Nanosystems seems to treat as a pertubation!
My final comments.
If the above is true, it then seems that the teaching of chemistry at the undergraduate level would need to be changed to give a more realistic description of molecular chemistry (which gives the appearance that ANY molecule can be synthesized).
Also, it appears that for complex molecular systems, that the atomic model has had its day, as it is too easy to fall into a drexlerian trap!
An amateur mathematician
The models Drexler uses treat atoms as classical objects, but the interaction between the atoms is described by force fields which are quantum mechanical in origin. The parameters for the force
fields need to be fixed by a combination of first principles calculations and fitting to experiment. This is a perfectly correct and above board method of proceeding, as long as one is aware of the
limitations. One of the major limitations is that it is very difficult to find force fields that accurately represent what goes on when chemical reactions take place. Another limitation is that it
isn’t particularly easy to correctly include the effects of finite temperature; this leads to a tendency to confuse mechanical stability with thermodynamic stability. Both of these problems are
particularly pointed in systems with a lot of surface (as all nanoscale objects have). | {"url":"http://www.softmachines.org/wordpress/?p=119","timestamp":"2014-04-17T03:50:17Z","content_type":null,"content_length":"15561","record_id":"<urn:uuid:1e0aa069-a0e1-4eb0-aadb-ad16af9d36fb>","cc-path":"CC-MAIN-2014-15/segments/1397609526252.40/warc/CC-MAIN-20140416005206-00438-ip-10-147-4-33.ec2.internal.warc.gz"} |
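As a concrete illustration of the force-field picture described above (classical atoms interacting through potentials whose parameters are fit to quantum calculations and experiment), here is a minimal sketch using the textbook Lennard-Jones pair potential. The argon-like parameter values are illustrative only; this is not one of the force fields discussed in Nanosystems.

```python
def lennard_jones(r, epsilon=0.0103, sigma=3.4):
    """Lennard-Jones pair potential V(r) = 4*eps*((sigma/r)**12 - (sigma/r)**6).

    Illustrative argon-like parameters: epsilon in eV, sigma in Angstroms.
    The 1/r**12 term models Pauli repulsion and the 1/r**6 term dispersion,
    both quantum mechanical in origin, but here folded into two fitted numbers.
    """
    sr6 = (sigma / r) ** 6
    return 4.0 * epsilon * (sr6 ** 2 - sr6)

# The potential minimum sits at r = 2**(1/6) * sigma, with depth -epsilon.
r_min = 2.0 ** (1.0 / 6.0) * 3.4
print(lennard_jones(r_min))  # ~ -0.0103 (the well depth, -epsilon)
```

The point of the sketch is the one made in the text: everything quantum mechanical has been compressed into a couple of fitted parameters, which is fine until one asks about bond breaking or finite temperature.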
Charlton City Math Tutor
Find a Charlton City Math Tutor
...For some people, studying comes naturally. For the rest of us, it can take some effort. Just finding the rhythm and getting into the habit can be difficult.
66 Subjects: including SAT math, statistics, ACT Math, reading
...I have an M.Ed. in History from Westfield State University. I am a licensed history teacher in Massachusetts. Along with over 2 years of experience teaching history and geography, I also have
experience tutoring students on World and U.S.
34 Subjects: including algebra 1, precalculus, reading, prealgebra
...Once you understand it, science and math are fun and easy. My style is not one who instructs. Instead, I lead and guide.
12 Subjects: including algebra 1, algebra 2, calculus, chemistry
...That is all. If I am familiar with it, I can teach you. In most cases, if I don't know it, I can teach myself and help you improve.
24 Subjects: including calculus, reading, chess, psychology
I am a graduate student working on my PhD in Transportation Engineering at the University of Connecticut with a BS and MS in Civil Engineering and a BA in English. Like most engineers, I am
interested in the practical and social applications of math and science, and I bring that perspective to my teac...
34 Subjects: including prealgebra, precalculus, ACT Math, SAT math | {"url":"http://www.purplemath.com/Charlton_City_Math_tutors.php","timestamp":"2014-04-17T04:56:47Z","content_type":null,"content_length":"23438","record_id":"<urn:uuid:97967581-264c-4e40-8b45-56a3fd611dff>","cc-path":"CC-MAIN-2014-15/segments/1397609526252.40/warc/CC-MAIN-20140416005206-00452-ip-10-147-4-33.ec2.internal.warc.gz"} |
Material Results
The study of sequences, although seen as an incipient numerical progression, is the foundation of mathematical analysis. This...
see more
Material Type:
Learning Object Repository
Date Added:
Nov 30, 2013
Date Modified:
Dec 08, 2013
This is a free textbook that is offered by Amazon for reading on a Kindle. Anybody can read Kindle books—even without a...
see more
Material Type:
Open Textbook
Lewis Carroll
Date Added:
Nov 26, 2013
Date Modified:
Dec 08, 2013
This is a free textbook that is offered by Amazon for reading on a Kindle. Anybody can read Kindle books—even without a...
see more
Material Type:
Open Textbook
John Stuart Mill
Date Added:
Nov 26, 2013
Date Modified:
Dec 08, 2013
This is a free textbook by Boundless that is offered by Amazon for reading on a Kindle. Anybody can read Kindle books—even...
see more
Material Type:
Open Textbook
Date Added:
Nov 20, 2013
Date Modified:
Nov 21, 2013
Dosage calculation and basic conversion is very important for nurses to know because a serious damage or even death can occur...
see more
Material Type:
Drill and Practice
Connie Houser
Date Added:
Nov 02, 2013
Date Modified:
Nov 02, 2013
Learning and practicing math is always more fun when it's part of a game. We like these apps and other learning tools for how...
see more
Material Type:
Anice Stansberry
Date Added:
Oct 28, 2013
Date Modified:
Mar 18, 2014
This is a free textbook offered by BookBoon.'This book is a guide through a playlist of Calculus instructional videos. The...
see more
Material Type:
Open Textbook
Frédéric Mynard
Date Added:
Oct 22, 2013
Date Modified:
Nov 05, 2013
This is a free textbook from Book Boon.'A Handbook for Statistics provides readers with an overview of common statistical...
see more
Material Type:
Open Textbook
Darius Singpurwalla
Date Added:
Oct 22, 2013
Date Modified:
Oct 22, 2013
This textbook is designed for the first course in the college mathematics curriculum that introduces students to the process...
see more
Material Type:
Open Textbook
Ted Sundstrom
Date Added:
Oct 18, 2013
Date Modified:
Nov 05, 2013
'This free online textbook (e-book in webspeak) is a one semester course in basic analysis. This book started its life as my...
see more
Material Type:
Open Textbook
Jirí Lebl
Date Added:
Oct 14, 2013
Date Modified:
Nov 05, 2013 | {"url":"http://www.merlot.org/merlot/materials.htm?nosearchlanguage=&pageSize=&page=16&nosearchlanguage=&nosearchlanguage=&category=2513&sort.property=dateCreated","timestamp":"2014-04-23T18:33:04Z","content_type":null,"content_length":"186997","record_id":"<urn:uuid:00ec17b4-0ed9-4dee-848c-758967479425>","cc-path":"CC-MAIN-2014-15/segments/1398223203235.2/warc/CC-MAIN-20140423032003-00568-ip-10-147-4-33.ec2.internal.warc.gz"} |
From Bohr to Bayes: Causality, Probability, and Statistics in Quantum Theory.
This paper critically examines the view of quantum mechanics that emerged shortly after the theory's introduction and that has been widespread ever since. Although N. Bohr, P. A. M. Dirac, and W. Heisenberg advanced this view earlier, it is best exemplified by J. von Neumann's argument in Mathematical Foundations of Quantum Mechanics (1932) that the transformation of 'a [quantum] state ... under the action of an energy operator ... is purely causal,' while, 'on the other hand, the state ... which may measure a [given] quantity ... undergoes in a measurement a non-causal change.' Accordingly, while the paper discusses all four of these arguments, it will especially focus on that of von Neumann. The paper also offers an alternative, radically noncausal, view of the quantum-mechanical situation and considers the differences between the ensemble and the Bayesian understandings of quantum mechanics. It will also discuss the Bayesian approach to quantum information theory in this set of contexts.
Studies on Divergent Series and Summability and The Asymptotic Developments of Functions Defined by Maclaurin Series

AMS Chelsea Publishing
1960; 342 pp; Volume: 143
ISBN: 0-8284-0143-8
ISBN-13: 978-0-8284-0143-2
List Price: US$41
Member Price: US$36.90
Order Code: CHEL/143

This book is based on several courses taught by the author at the University of Michigan between 1908 and 1912. It covers two main topics: asymptotic series and the theory of summability. The discussion of nowhere convergent asymptotic series includes the so-called MacLaurin summation formula, determining asymptotic expansions of various classes of functions, and the study of asymptotic solutions of linear ordinary differential equations. On the second topic, the author discusses various approaches to the summability of divergent series and considers in detail applications to Fourier series.

Graduate students and research mathematicians.

Studies on Divergent Series and Summability
• The MacLaurin sum-formula, with introduction to the study of asymptotic series
• The determination of the asymptotic developments of a given function
• The asymptotic solutions of linear differential equations
• Elementary studies on the summability of series
• The summability and convergence of Fourier series and allied developments
• Appendix
• Bibliography

The Asymptotic Developments of Functions Defined by MacLaurin Series
• Preliminary considerations. First general theorem
• The theorem of Barnes
• MacLaurin series whose general coefficient is algebraic in character
• Second general theorem
• Auxiliary theorems
• MacLaurin series whose general coefficient involves the reciprocal of a single gamma function; Functions of exponential type
• MacLaurin series whose general coefficient involves the reciprocal of the product of two gamma functions; Functions of Bessel type
• Determination of the asymptotic behavior of the solutions of differential equations of the Fuchsian type
• Bibliography | {"url":"http://ams.org/bookstore?fn=20&arg1=chelsealist&ikey=CHEL-143","timestamp":"2014-04-21T13:30:43Z","content_type":null,"content_length":"15701","record_id":"<urn:uuid:50f5e863-f206-47ed-b6f1-8fc469a16eef>","cc-path":"CC-MAIN-2014-15/segments/1397609539776.45/warc/CC-MAIN-20140416005219-00620-ip-10-147-4-33.ec2.internal.warc.gz"} |
Burt Ovrut
• Professor, University of Pennsylvania, (1990- present)
• Associate Professor, University of Pennsylvania (1985-1990)
• Assistant Professor, The Rockefeller University (1983-1985)
• Member, The Institute for Advanced Study (1980-1983)
• Research Associate, Brandeis University (1978-1980)
Ph.D., University of Chicago (1978)
Research Interests:
I am a high energy particle physicist with strong interests in cosmology. My early research involved the introduction of supersymmetry and supergravity into particle physics theories. Some of my key
contributions were the introduction of Wilson lines in string vacua to break gauge symmetry to the standard model, a five-dimensional superspace called heterotic M-theory that serves as a string
vacuum for realistic models of particle physics and the development of algebraic geometric methods necessary for the construction of gauge connections and explicit evaluation of the low energy
spectrum. Within this context, I constructed heterotic vacua with exactly the particle spectrum of the minimal supersymmetric standard model with right-handed neutrinos. These theories were applied
to cosmology, where a new theory of the early universe, ekpyrotic cosmology, was introduced, based on colliding branes. More recently, I have given detailed renormalization group analyses of realistic
superstring particle theories, studying phase transitions in the gauge group and exploring their low energy predictions for the LHC. Within this context I have also considered aspects of late-time
cosmology, including gauge domain walls, dark matter and baryogenesis.
Selected Publications:
• On the Four-Dimensional Effective Action of Strongly Coupled Heterotic String Theory. Andre Lukas, Burt A. Ovrut, Daniel Waldram. Nucl.Phys. B532 (1998) 43-82. hep-th/9710208
• The Universe as a Domain Wall. Andre Lukas, Burt A. Ovrut, K.S. Stelle, Daniel Waldram. Phys.Rev. D59 (1999) 086001. hep-th/9803235
• Heterotic M-theory in Five Dimensions. Andre Lukas, Burt A. Ovrut, K. S. Stelle, Daniel Waldram. Nucl.Phys. B552 (1999) 246-290. hep-th/9806051
• Standard Models from Heterotic M-theory. Ron Donagi (UPenn), Burt A. Ovrut (UPenn), Tony Pantev (UPenn), Daniel Waldram (Princeton University and CERN). Adv.Theor.Math.Phys. 5 (2002) 93-137.
• Cosmology and Heterotic M-Theory in Five-Dimensions. Andre Lukas, Burt A. Ovrut, Daniel Waldram. UPR-825T, OUTP-98-85P. hep-th/9812052
• Boundary Inflation. Andre Lukas, Burt A. Ovrut, Daniel Waldram. Phys.Rev. D61 (2000) 023506. hep-th/9902071
• The Ekpyrotic Universe: Colliding Branes and the Origin of the Hot Big Bang. Justin Khoury, Burt A. Ovrut, Paul J. Steinhardt, Neil Turok. Phys.Rev. D64 (2001) 123522. hep-th/0103239
• Visible Branes with Negative Tension in Heterotic M-Theory. Ron Y. Donagi, Justin Khoury, Burt A. Ovrut, Paul J. Steinhardt, Neil Turok. JHEP 0111 (2001) 041. , hep-th/0105199
• From Big Crunch to Big Bang. Justin Khoury, Burt A. Ovrut, Nathan Seiberg, Paul J. Steinhardt, Neil Turok. Phys.Rev. D65 (2002) 086007. hep-th/0108187
• Non-Perturbative Vacua and Particle Physics in M-Theory. Ron Donagi, Andre Lukas, Burt A. Ovrut, Daniel Waldram. JHEP 9905 (1999) 018. hep-th/9811168
Courses Taught:
Phys 601: Intro to Field Theory
Phys 632: Relativistic Quantum Field Theory | {"url":"http://www.physics.upenn.edu/people/standing-faculty/burt-ovrut","timestamp":"2014-04-19T09:54:23Z","content_type":null,"content_length":"21795","record_id":"<urn:uuid:789bbb78-9e95-4dd0-8611-81891e2e0fb6>","cc-path":"CC-MAIN-2014-15/segments/1398223206647.11/warc/CC-MAIN-20140423032006-00194-ip-10-147-4-33.ec2.internal.warc.gz"}
Posts by
Total # Posts: 4
Hello! I was wondering if I could get some help solving this problem? A car travels at a constant speed of 10.0 m/s along a straight highway that is inclined 6.0 degrees for 11.0 s. How high
vertically does the car travel?
A ball rolls horizontally with a speed of 7.6 m/s off the edge of a tall platform. If the ball lands 8.7 m from the point on the ground directly below the edge of the platform what is the height of
the platform? Any help would be greatly appreciated! Thank You! :) | {"url":"http://www.jiskha.com/members/profile/posts.cgi?name=Gusty","timestamp":"2014-04-19T14:57:44Z","content_type":null,"content_length":"7003","record_id":"<urn:uuid:699c90e8-ae80-4158-a2b7-0dbc3bdf1f07>","cc-path":"CC-MAIN-2014-15/segments/1397609537271.8/warc/CC-MAIN-20140416005217-00165-ip-10-147-4-33.ec2.internal.warc.gz"}
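For what it's worth, both homework questions above reduce to one-line kinematics. Here is a sketch under the usual textbook assumptions (constant speed along the incline; horizontal launch with g = 9.8 m/s^2 and air resistance ignored):

```python
import math

# Car on an incline: constant 10.0 m/s along a 6.0-degree slope for 11.0 s.
distance_along_slope = 10.0 * 11.0                       # 110 m traveled
height_car = distance_along_slope * math.sin(math.radians(6.0))
print(round(height_car, 1))        # ~11.5 m of vertical rise

# Ball rolling off a platform: 7.6 m/s horizontal, lands 8.7 m out.
t_flight = 8.7 / 7.6                                     # time of flight, s
height_platform = 0.5 * 9.8 * t_flight ** 2              # free-fall drop
print(round(height_platform, 2))   # ~6.42 m
```

The key idea in each case is that the motion separates: distance along the slope times sin(angle) gives height, and horizontal range divided by horizontal speed gives the fall time.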
Post a reply
hi zee-f
When you rotate 270 clockwise, that's the same as 90 anticlockwise. (90, 180, 270, 360)
Look at the point (2,1) I've boxed it into a rectangle and then showed the whole rectangle rotating.
That corner moves to (-1,2). For 90 degrees that sort of thing always happens: the x and y coordinates swap over and one becomes negative.
It happens because the across distance in the rectangle becomes the up distance and the up becomes the across (but now going to the left, i.e. it is now a negative amount).
2 ----> -1
1 ----> 2
So you should be able to do the other points the same way. | {"url":"http://www.mathisfunforum.com/post.php?tid=18566&qid=243232","timestamp":"2014-04-21T14:58:04Z","content_type":null,"content_length":"18464","record_id":"<urn:uuid:4be5688c-1293-4751-893b-4371d3072076>","cc-path":"CC-MAIN-2014-15/segments/1397609540626.47/warc/CC-MAIN-20140416005220-00020-ip-10-147-4-33.ec2.internal.warc.gz"} |
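The rule described above can be written as (x, y) -> (-y, x) for a 270-degree clockwise (= 90-degree anticlockwise) rotation about the origin. A tiny sketch, just to check the boxed corner:

```python
def rotate_270_clockwise(x, y):
    """Rotate a point 270 degrees clockwise (= 90 degrees anticlockwise)
    about the origin: (x, y) -> (-y, x)."""
    return (-y, x)

print(rotate_270_clockwise(2, 1))   # (-1, 2), matching the boxed corner
```

Applying the same function to each vertex rotates the whole shape, which is exactly how the rectangle picture works.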
If asked to solve for x for the following:
`sin(x)tan(x) +tan(x) - 2sin(x) + cos(x) = 0`
are the following solutions correct:
`x = (Pi)/(2) , and (3Pi)/(2)`
`2sin^3(x)-2sin^2(x)+sin(x)+1=0` (i)
So one of your answers is correct: x=(3pi)/2
and x=pi/2 is not correct because it will not satisfy (i). | {"url":"http://www.enotes.com/homework-help/asked-solve-x-following-sin-x-tan-x-tan-x-2sin-x-445163","timestamp":"2014-04-16T22:44:19Z","content_type":null,"content_length":"26137","record_id":"<urn:uuid:70960df5-58e7-44bc-bfe8-2b98d22dcf27>","cc-path":"CC-MAIN-2014-15/segments/1398223204388.12/warc/CC-MAIN-20140423032004-00212-ip-10-147-4-33.ec2.internal.warc.gz"}
Posts from October 2008 on The Spectre of Math
So it turns out I can’t get rid of this damn radical in my theorem, so it’s more like a proposition now. I have to replace “factorial” with “prefactorial” (i.e. every prime ideal of height one is a
radical of a principal ideal). The trouble is, very few people know what prefactorial is, and I have no clue how large the class of germs of varieties with ${\mathcal O}_{X,0}$ being normal and
prefactorial is. It seems to be larger than factorial and smaller than normal.
That’s the problem with ideals and radicals. Just like in politics, they just cause problems. And just like in politics, if you could find the generators of those radical ideals, you could really
solve some problems.
Question: Why are we all socialists?
Answer: Let’s consider $Co$ to be communism and $Ca$ to be capitalism. Anywhere between is really socialism. I.e. The model of any country is $tCo + (1-t) Ca$ for $t \in [0,1]$. Now socialism then
corresponds to $t \in (0,1)$. Since no country in the world is either really communist nor really capitalist, the whole world is full of socialists. In particular, both Obama and McCain are
socialists, though I guess $t_{obama} < t_{mccain}$.
Hmmm … this must be a conspiracy theory, but: Military (and the president) probably mostly support McCain. It seems generally agreed that international crisis (especially one requiring the military)
is good for McCain (in terms of the electorate). So why are we attacking things inside Syria one week before the election sparking a possible shooting confrontation? Why now?
On stocks (price of)
Value in our main portfolio dipped below $100k today. I feel sort of light-headed when I think of how much money we’ve lost these past few weeks. I keep telling myself it’s monopoly money. I bought some extra shares of AINV today, thinking it can’t go any lower; I might as well make some money if it does another dead cat bounce and I sell.
So someone should explain to me why republicans are supposed to be better for the markets. Investors are still voting republican. For the lower taxes? That doesn’t make sense to me. If the taxes are
lower, but your investments are where they were a decade ago (not counting inflation) how is this good for you? 0% of 0 is not much less than 30% of 0.
So planetmath is descending deeper and deeper into crankiness. There is a lot of quality content on planetmath (the vast majority), but as it allows total cranks to write entries the overall quality
of the site deteriorates. The most recent example, which is so far unchallenged by the admins (and is the reason why I am not writing new or updating my old entries there), is bc1. A good example of
what I am talking about is this entry. Note that I am talking about version 1, in case he has edited the entry, which he does very often.
This guy even wrote on some CR geometry and of course got it wrong. But I got tired of the fight.
Anyway, I have just found the link above and thought it may be amusing. It should also serve as a warning about sites like planetmath.
Hmmmm …
OK, this is mostly a test. I figure I should find a better “blog” site than advogato. And since I might want to write about math, it is nice that I can insert some LaTeX such as $\int_\Omega d\omega = \int_{\partial \Omega} \omega$ | {"url":"http://jlebl.wordpress.com/2008/10/","timestamp":"2014-04-16T22:10:20Z","content_type":null,"content_length":"24813","record_id":"<urn:uuid:868f7dd5-ba73-44b7-a574-42974003f117>","cc-path":"CC-MAIN-2014-15/segments/1398223205375.6/warc/CC-MAIN-20140423032005-00012-ip-10-147-4-33.ec2.internal.warc.gz"}
Statistical properties of Lorentz processes in a tube
Peter Nandori
Sinai billiard and its extended version to the plane (periodic Lorentz
process) are among the most interesting examples of hyperbolic dynamical
systems. Since the pioneering results of Chernov and Dolgopyat, it became
realistic to prove delicate statistical properties for these models. In the
talk, I will shortly review the development of the theory and focus on some
recent results for periodic Lorentz processes in a strip (i.e. a Sinai
billiard configuration extended in one direction). In particular, I will
discuss the scaling limit of the trajectory of the particle in the presence
of an almost reflecting wall in the tube (joint result with D. Szasz) and
mention some work in progress (with D. Dolgopyat) on the limiting density
profile of non-interacting particles in a long tube with absorbing | {"url":"http://www.cims.nyu.edu/~lsy/seminars/abstracts/nandori2013.html","timestamp":"2014-04-21T14:47:28Z","content_type":null,"content_length":"1701","record_id":"<urn:uuid:13371aa3-8803-45d2-828e-5755a2efd3d2>","cc-path":"CC-MAIN-2014-15/segments/1397609540626.47/warc/CC-MAIN-20140416005220-00099-ip-10-147-4-33.ec2.internal.warc.gz"} |
How to describe the probability in this case?
If we receive a flow of bits of a particular length, how can the probability be described as a function of the length L?
I am interested in the description of the probability of receiving the sequence "10".
p1: Probability of receiving 1
p0: Probability of receiving 0
px: Probability of receiving whatever
If L=3, we can have:
so, the probability is p=2*p1*p0*px
If L=4,
Here, the probability is p=3*p1*p0*(px^2)-(p1*p0)^2
If L=5,
In this case, p=4*p1*p0*(px^3)-3*((p1*p0)^2)*px
How can the probability be described in general, as a function of L? | {"url":"http://mathhelpforum.com/advanced-statistics/202402-how-describe-probabilty-case.html","timestamp":"2014-04-17T16:46:23Z","content_type":null,"content_length":"40880","record_id":"<urn:uuid:3fed411a-208c-4a98-998a-be2a32820724>","cc-path":"CC-MAIN-2014-15/segments/1397609530136.5/warc/CC-MAIN-20140416005210-00330-ip-10-147-4-33.ec2.internal.warc.gz"}
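One way to answer the general question, assuming the bits are independent and that "whatever" means px = p1 + p0 = 1: work with the complement. A length-L string containing no "10" must consist of zeros followed by ones, so P(no "10") = sum over a of p0^a * p1^(L-a), and the probability of seeing "10" at least once is one minus that. The sketch below (function names are mine, not from the thread) checks this closed form against brute-force enumeration:

```python
from itertools import product

def prob_contains_10(L, p1):
    """Brute force: sum the probabilities of all length-L bit strings
    (independent bits, P(1)=p1, P(0)=1-p1) that contain '10' somewhere."""
    p0 = 1.0 - p1
    total = 0.0
    for bits in product("01", repeat=L):
        s = "".join(bits)
        if "10" in s:
            total += (p1 ** s.count("1")) * (p0 ** s.count("0"))
    return total

def prob_contains_10_closed(L, p1):
    """Complement: a string with no '10' is 0^a 1^(L-a) for some a in 0..L."""
    p0 = 1.0 - p1
    return 1.0 - sum(p0 ** a * p1 ** (L - a) for a in range(L + 1))

for L in (3, 4, 5):
    assert abs(prob_contains_10(L, 0.6) - prob_contains_10_closed(L, 0.6)) < 1e-12
```

With p1 = 0.6 this gives 0.48, 0.6624 and 0.7872 for L = 3, 4, 5, which also matches the inclusion-exclusion expressions above (sum of single placements minus compatible pairs, since two occurrences can overlap only as "1010").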
Middleboro SAT Math Tutor
Find a Middleboro SAT Math Tutor
I'm a semi-retired lawyer, with years of trial experience. As you might expect from a lawyer, I teach primarily by the Socratic method, leading students to find the right answers themselves. I
have excelled in every standardized test I have taken: SAT 786M/740V, LSAT 794, National Merit Finalist.
20 Subjects: including SAT math, English, reading, writing
I have been teaching high school and middle school math courses for about 20 years. I currently teach at a local high school and teach on Saturdays at a school for Asian students in Boston. I am
currently teaching Honors Algebra 2, Senior Math Analysis, and MCAS prep courses, as well as 7-8 grade math, and SAT Prep courses.
12 Subjects: including SAT math, geometry, algebra 2, algebra 1
...I primarily tutored probability and statistics, single and multi variable calculus, and college algebra. I have also done a considerable amount of private tutoring in a variety of different
areas of math. I am qualified to tutor nearly all areas of high school and college math and can also ass...
14 Subjects: including SAT math, calculus, geometry, GRE
...I currently hold a master's degree in math and have used it to tutor a wide array of math courses. In addition to these subjects, for the last several years, I have been successfully tutoring
for standardized tests, including the SAT and ACT.I have taken a and passed a number of Praxis exams. I even earned a perfect score on the Math Subject Test.
36 Subjects: including SAT math, English, reading, calculus
...I studied Latin at a German Gymnasium (college preparatory school) for six years followed by four years in college. I continued to read Latin in my professional life during leisure times. After
20 years of practicing law I am returning to teaching.
21 Subjects: including SAT math, English, writing, ESL/ESOL | {"url":"http://www.purplemath.com/Middleboro_SAT_Math_tutors.php","timestamp":"2014-04-17T07:45:05Z","content_type":null,"content_length":"24008","record_id":"<urn:uuid:f1f7c13b-713d-41a4-adaf-5e7b42b25380>","cc-path":"CC-MAIN-2014-15/segments/1398223202774.3/warc/CC-MAIN-20140423032002-00549-ip-10-147-4-33.ec2.internal.warc.gz"} |
Predictability in International Asset Returns: A Re-examination

WORKING PAPERS SERIES WP99-03

Christopher J. Neely* and Paul Weller†

Original Version: April 1997
Current Version: February 19, 1999

* Senior Economist, Research Department, Federal Reserve Bank of St. Louis, St. Louis, MO 63011. (314) 444-8568 (o), (314) 444-8731 (f), neely@stls.frb.org
† Department of Finance, College of Business Administration, University of Iowa, Iowa City, IA 52240. (319) 335-1017 (o), (319) 335-3690 (f), paul-weller@uiowa.edu

Primary Subject Code: C32 - Time Series Modeling
Secondary Subject Code: F30 - General International Finance
Keywords: Vector Autoregression, Asset Price, Exchange Rate, Forecasting

The views expressed are those of the authors and do not necessarily reflect official positions of the Federal Reserve Bank of St. Louis, or the Federal Reserve System. The authors wish to thank Robert Hodrick for supplying the data used in Bekaert and Hodrick (1992), Morgan Stanley for providing us with additional financial data and Kent Koch for excellent research assistance. We also thank Chuck Whiteman, Dick Anderson, Bob Rasche, Gene Savin and Dan Thornton for helpful comments and Robert Hodrick and Paul Sengmuller both for helpful comments and for pointing out an error in one of our programs. Paul Weller would like to thank the Research Department of the Federal Reserve Bank of St. Louis for its hospitality during his stay as a Visiting Scholar, when this work was initiated.
This paper argues that inferring long-horizon asset-return predictability from the properties of vector autoregressive (VAR) models on relatively short spans of data is potentially unreliable. We illustrate the problems that can arise by re-examining the findings of Bekaert and Hodrick (1992), who detected evidence of in-sample predictability in international equity and foreign exchange markets using VAR methodology for a variety of countries over the period 1981-1989. The VAR predictions are significantly biased in most out-of-sample forecasts and are conclusively outperformed by a simple benchmark model at horizons of up to six months. This remains true even after corrections for small sample bias and the introduction of Bayesian parameter restrictions. A Monte Carlo analysis indicates that the data are unlikely to have been generated by a stable VAR. This conclusion is supported by an examination of structural break statistics. Implied long-horizon statistics calculated from the VAR parameter estimates are shown to be very unreliable.
1. Introduction

Early empirical work on stock prices and exchange rates found it very difficult to reject the hypothesis that these prices followed a random walk (Fama, 1970; Meese and Rogoff, 1983). Subsequently, however, research has produced evidence for the existence of transitory, or mean-reverting, components in both equity and foreign exchange markets (Summers, 1986; Campbell, 1987; Fama and French, 1988; Poterba and Summers, 1988; Mark, 1995). Although the earlier findings were the subject of some dispute on statistical grounds (Lo and MacKinlay, 1988; Nelson and Kim, 1993; Richardson, 1993), more recent research has tended to support and further refine the evidence for return predictability (Hodrick, 1992; Lamont, 1998). Most of these studies, however, have focused exclusively on in-sample evidence of predictability.[1] Evidence of in-sample predictability implies out-of-sample predictability only if the estimated structural relationships are sufficiently stable over time. This is an argument for focusing on data collected over long time spans. If a relationship has persisted over fifty, or better yet, a hundred years, then one will clearly have more confidence that the relationship will continue to hold in the future. But recently a number of authors (Hodrick, 1992; Bekaert and Hodrick, 1992; Campbell, Lo and MacKinlay, 1997) have advocated an alternative approach, succinctly described as follows: “An alternative approach [to looking directly at the long-horizon properties of the data] is to assume that the dynamics of the data are well described by a simple time-series model; long-horizon properties can then be imputed from the short-run model rather than estimated directly.” (Campbell, Lo and MacKinlay, 1997, p. 280). These authors are careful to point out that such procedures are critically dependent on the assumption of stability of the estimated parameters. However, it is rare in studies of asset price predictability to find any formal investigation of this issue.

The objective of this paper is to point out the potential pitfalls inherent in this approach, and to argue that more attention needs to be paid to the question of structural stability in studies of asset price predictability. Meese and Rogoff (1983) made this point forcefully in the context of structural models of the exchange rate. They showed that good in-sample performance of such models was accompanied by very poor out-of-sample performance.[2] We proceed by reexamining the findings of Bekaert and Hodrick (1992) on international asset price predictability. Using two-country VARs including the U.S. and either the U.K., Germany or Japan, Bekaert and Hodrick analyze the performance of dividend yields, forward premia and lagged excess stock returns as predictors of excess returns in stock and foreign exchange markets over the period 1981-89. They find evidence supportive of previous findings that domestic dividend yields forecast excess stock returns and that forward premia forecast excess foreign currency returns, and new evidence both that dividend yields have predictive power for currency excess returns, and that forward premia have predictive power for excess stock returns. Bekaert and Hodrick (1992) also use the parameter estimates from the VARs to produce estimates of implied long horizon statistics such as slope coefficients from OLS regressions, variance ratios and R2's. This is an application of a methodology advocated in Hodrick (1992). That paper presented Monte Carlo evidence showing that, if the assumed VAR structure is correct, the approach can produce accurate estimates of long horizon slope coefficients and R2's without the need to use a very long series of data.

[1] Mark (1995) and Lamont (1998) are exceptions.
[2] See Hansen (1992) and Stock and Watson (1996) for more recent evidence on instability in macroeconomic time series.

We are interested in investigating the stability of the relationships estimated by Bekaert and Hodrick (1992). If they are indeed stable, we should be able to detect evidence of predictability out-of-sample. To determine whether this is the case, we examine the forecasting performance of the two-country VARs over the out-of-sample period 1990-96. In all cases it is rather poor, and the predictions from the VARs are inferior to those from a simple benchmark model. We also find that modifying the forecasting procedure to use rolling and expanding data samples, a Bayesian approach and/or endogenous-regressor bias corrections fails to generate results that outperform the benchmark model. Monte Carlo analysis indicates that the data over the full sample period were unlikely to have been generated by a stable VAR. This conclusion is supported by an examination of structural break statistics. Finally, further experiments show that long-horizon regression coefficients, R2's and variance ratios implied by the VAR parameter estimates are subject to great uncertainty.
2. The Data

The data set consists of monthly data from four countries: the United States, Japan, the United Kingdom and Germany. The sample begins in 1981:1 and ends in 1996:10 for all countries except Germany, where the data run to 1995:6. Thus we extend the nine-year sample used in Bekaert and Hodrick (1992) by adding roughly seven years of additional data. The variables from each country will be identified by subscripts: 1 for the U.S., 2 for Japan, 3 for the United Kingdom and 4 for Germany. Let ijt be the one-month nominal interest rate from time t to t + 1 in country j (j = 1, 2, 3, 4). Let rjt+1 be the continuously compounded one-month rate of return in excess of ijt on the stock market in country j, denominated in the currency of that country. The log of the spot exchange rate for country j is sjt (dollars per unit of currency in
country j) and the excess return in dollars to a long position in currency j held from t to t + 1 is rsjt+1, where

rs_{jt+1} = s_{jt+1} - s_{jt} + i_{jt} - i_{1t}    (1)
Denoting the log of the one-month forward rate at time t as fjt, then the one-month forward premium for currency j against the dollar is defined as fpjt = fjt - sjt. Using the covered interest parity relation, fjt - sjt = i1t - ijt, the dollar excess return to holding currency j can be calculated from spot and forward rates as rsjt+1 = sjt+1 - fjt. Finally, the dividend yield in country j is denoted dyjt. More details on the data sources and construction are provided in the data appendix.
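As a concrete check of these definitions, the sketch below computes the excess currency return both ways: from equation (1) directly, and from spot and forward rates via rs_{jt+1} = s_{jt+1} - f_{jt}. All numerical values are made up for illustration and are not taken from the paper's data set.

```python
import numpy as np

# Hypothetical one-month observations in the paper's notation; all numbers
# here are illustrative, not taken from the actual data.
s_t = np.log(0.0070)        # log spot rate at t (dollars per unit of foreign currency)
s_t1 = np.log(0.0072)       # log spot rate at t + 1
i_us = 0.08 / 12            # one-month U.S. nominal interest rate i_1t
i_fx = 0.05 / 12            # one-month foreign nominal interest rate i_jt

# Equation (1): dollar excess return to a long position in the foreign currency.
rs = s_t1 - s_t + i_fx - i_us

# Under covered interest parity the log forward rate is f_t = s_t + i_us - i_fx,
# so the same excess return is recovered from spot and forward rates alone.
f_t = s_t + i_us - i_fx
rs_from_forward = s_t1 - f_t

# The forward premium fp_t = f_t - s_t equals the interest differential i_us - i_fx.
fp = f_t - s_t
```

The two routes to the excess return agree exactly, which is the identity the text exploits when constructing rsjt+1 from the spot and forward data.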
3. The Vector Autoregressions

We follow Bekaert and Hodrick (1992) in estimating a six-variable VAR for each of three two-country pairs, U.S.-Japan, U.S.-U.K. and U.S.-Germany.3 The variables included in each VAR are the excess return on equity in each country {r1t, rjt}, the excess return to a long position in the foreign currency from the viewpoint of a U.S. investor {rsjt}, the dividend yield in each country {dy1t, dyjt} and the forward premium {fpjt} (equal to the one-month interest differential). We can write the first-order VAR regressions in vector notation as:

Y_t = α_0 + A Y_{t-1} + u_t    (2)
In the case of the U.S.-Japan VAR, Yt = {r1t, r2t, rs2t, dy1t, dy2t, fp2t}, α_0 is a vector of constants, A is the matrix of regression coefficients on the first lag of Yt and ut is an error vector. We estimate the VARs by ordinary least squares over the whole sample (1981:1–1995:6/1996:10) and over each of the two subperiods (1981:1–1989:12 and 1990:1–1995:6/1996:10). The latter two periods correspond to our in-sample and out-of-sample periods
for evaluating forecasting performance. For each variable in a given VAR we report the statistic for the joint test of whether all six coefficients on the lagged variables are zero. The results, presented in Table 1, conform closely to the findings of Bekaert and Hodrick (1992) over the period 1981–89. There is strong evidence of predictability for the U.S. excess stock return and for all three foreign currency returns. There is also evidence of predictability for the Japanese excess stock return. This pattern of predictability in stock and currency returns is broadly reproduced during the period from 1990 to the end of the sample. However, over the whole sample period, support for predictability emerges only in the currency markets, which suggests the presence of parameter instability. To throw further light on this issue we turn to considering the out-of-sample forecast performance of the VARs.
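The estimation step amounts to equation-by-equation OLS of each variable on a constant and one lag of every variable in the system. The sketch below is a minimal illustration of that mechanic on a simulated two-variable system; the parameter values are invented for the check and have nothing to do with the paper's estimates.

```python
import numpy as np

def estimate_var1(Y):
    """OLS estimates of Y_t = a0 + A @ Y_{t-1} + u_t for a (T x n) data matrix Y."""
    X = np.column_stack([np.ones(len(Y) - 1), Y[:-1]])  # constant plus first lag
    B, *_ = np.linalg.lstsq(X, Y[1:], rcond=None)       # (n+1) x n stacked coefficients
    a0, A = B[0], B[1:].T                               # intercept vector, lag matrix
    resid = Y[1:] - X @ B                               # one-step residuals u_t
    return a0, A, resid

# Simulate a small stable system as a check (illustrative parameters only).
rng = np.random.default_rng(0)
A_true = np.array([[0.9, 0.1],
                   [0.0, 0.5]])
Y = np.zeros((2000, 2))
for t in range(1, len(Y)):
    Y[t] = A_true @ Y[t - 1] + 0.1 * rng.normal(size=2)
a0_hat, A_hat, resid = estimate_var1(Y)
```

With a long simulated sample the routine recovers the lag matrix closely; the paper's samples are far shorter (108 in-sample months), which is exactly why small-sample properties matter in what follows.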
4. Out-of-sample forecast performance

If the VAR is well-specified and the covariance structure stable, then predictability implies that we should be able to use the parameter estimates from the period 1981-89 to forecast asset prices during the out-of-sample period 1990-95/96. To investigate the out-of-sample forecast performance of the VARs, we carry out the following exercise. We use the coefficient estimates from the sample period 1981:1-1989:12 to construct forecasts for each of the variables in the system over the out-of-sample period (1990:1-1996:10 for the US-UK and US-Japan, 1990:1-1995:6 for the US-Germany), at horizons of one, three and six months. That is, at each period we use the actual data at date t and the parameters as estimated over the fixed sample period and project the path of the system at dates t + 1, t + 3 and t + 6. We then update the data for the next period’s set of forecasts. This gives us a set of 82 one-period-ahead forecasts, 80
Lag length was selected according to the Schwarz criterion. We found that optimal lag length
overlapping three-period-ahead forecasts and 77 overlapping six-period-ahead forecasts for the US-UK and US-Japan.4 There are 16 fewer forecasts for US-Germany at each horizon. We first consider whether we can reject the hypothesis that the out-of-sample forecast errors have a mean of zero. The VAR residuals are constructed to provide an unbiased, in-sample forecast of the future value of the dependent variables. But if the estimated relationship is unstable, they may not have this property out-of-sample. Denoting the k-period-ahead forecast of
variable Yt, conditional on data at time t - k, as Ŷ_{t|t-k}, we ask if the data are consistent with the following hypothesis:

E(Ŷ_{t|t-k} - Y_t | Ω_{t-k}, θ_0) = E(û_{t|t-k} | Ω_{t-k}, θ_0) = 0    (3)
where û_{t|t-k} is the forecast error at time t, based on the information set Ω_{t-k} through period t - k, and θ_0 is the set of parameters estimated from the fixed sample period. Panel A of Table 2 shows the mean errors and significance levels for tests of unbiasedness of the forecasts of the variables in each VAR. The hypothesis that the forecasts are unbiased is rejected at the five per cent level in 30 out of 54 cases, and at the one per cent level in 25 out of 54 cases. We also perform two further tests. The first considers whether the six forecasts from each VAR are jointly unbiased. The second considers whether the forecasts of each variable are jointly unbiased across the three VARs. Panels B and C of Table 2 report these results. The hypothesis of unbiasedness is rejected at any reasonable level of significance at all forecast horizons for the first test, and in all but two of eighteen cases for the second test.5 This evidence of bias means that the VAR forecasts are consistently under- or
was one in all cases, as Bekaert and Hodrick did for the shorter sample period. 4 The overlapping three- and six-period-ahead forecast errors will have at least second- and fifth-order serial correlation, which must be taken into account in the tests that follow. 5 The statistical tests used in the paper are described in detail in the appendix.
overpredicting the change in a variable during the out-of-sample period. For example, the forecasts fail to predict the very poor performance of the Japanese stock market during the out-of-sample period. The mean forecast error of 35.44 at the six-month horizon means that the VAR forecast overpredicts the excess return to holding Japanese stocks by 35.44 per cent per annum. The forecasts also miss the strong performance of the U.S. stock market. Two of the three six-month forecasts of excess returns in the U.S. stock market underpredict by ten percentage points or more. It is important to emphasize that evidence of bias is not a sufficient reason on its own for concluding that a forecast is of no use, or that the variables in the system display no predictability. For example, the one-month forecast of the US dividend yield in the US-Germany VAR is quite accurate, with a mean forecast error of only 0.07 per cent per year. But the forecast is significantly biased because of the low variability of the error. Thus, even if the out-of-sample forecasts are biased, they may be valuable in the sense of being relatively more accurate (having a smaller average prediction error) than other forecasts available. Conversely, even an unbiased forecast may be of little use if it is very noisy. Notwithstanding these observations, it certainly appears that the magnitude of the forecast errors for all asset returns at all horizons is economically as well as statistically significant. In addition to the figures for US and Japanese stock returns mentioned above, we find that the mean forecast error for the excess return to German equity at the six-month horizon is 10.97%, and for the excess return to holding yen at the six-month horizon is 13.09%. In most cases the shorter horizon forecasts are even more inaccurate. One way of providing further evidence on the economic significance of the forecast errors is to ask whether the VAR forecasts outperform a suitably chosen simple benchmark
model. We choose the natural one in which expected excess returns in equity and foreign exchange markets are assumed to be constant, and are therefore unpredictable. This assumption on expected excess returns is equivalent to imposing a constant risk premium, and is consistent with the standard representative agent asset pricing model. Expected excess returns are set equal to their respective sample means over the period 1981-89. In addition, in the light of the well-documented persistence of dividend yields and forward premia, we assume that these variables follow random walks.6 We compare the VAR forecasts to the benchmark forecasts with the mean-squared prediction error (MSPE) criterion.7 A simple way to compare the VAR and benchmark forecasts is to calculate the ratio of the MSPE from the VAR model to that from the benchmark model. A ratio less than one indicates that the forecasts from the VAR are more accurate, on average, than the benchmark forecasts. Table 3 shows that the out-of-sample prediction error ratios are greater than one in 52 out of 54 cases, across variables, countries and forecast horizons. That is, the VAR consistently predicts more poorly than the benchmark forecast. In the case of the forward premium against the DM, the out-of-sample, one-month MSPE for the VAR forecast is 41 times that of the benchmark forecast. In 29 of 54 cases, the differences have p-values of 0.01 or less. These figures are in stark contrast to those for the in-sample MSPE ratios, which range from 0.49 to 0.99 at all horizons. It is possible that the poor performance of the VAR forecasts is at least in part a consequence of ignoring the presence of heteroscedasticity in the error term. To investigate this, we perform a Lagrange multiplier test for autocorrelation in the squared errors.
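The MSPE comparison, together with a significance test on the squared-error differentials, can be sketched in a scalar setting as follows. This is a generic Diebold-Mariano-style construction under our assumptions, not the paper's exact test code; for the overlapping three- and six-month forecasts the Newey-West lag count would be set to at least two and five respectively.

```python
import numpy as np

def newey_west_lrv(d, lags):
    """Newey-West (Bartlett-kernel) long-run variance estimate for series d."""
    d = d - d.mean()
    T = len(d)
    lrv = (d @ d) / T
    for j in range(1, lags + 1):
        w = 1.0 - j / (lags + 1)
        lrv += 2.0 * w * (d[j:] @ d[:-j]) / T
    return lrv

def mspe_comparison(e_model, e_bench, lags=0):
    """MSPE ratio (model over benchmark) and a Diebold-Mariano-type t-statistic
    on the squared-error differential; positive values favor the benchmark."""
    d = e_model**2 - e_bench**2
    ratio = np.mean(e_model**2) / np.mean(e_bench**2)
    t_stat = d.mean() / np.sqrt(newey_west_lrv(d, lags) / len(d))
    return ratio, t_stat

# Illustrative check with 82 artificial one-step errors, the model's being noisier.
rng = np.random.default_rng(1)
e_bench = rng.normal(size=82)
e_model = 2.0 * rng.normal(size=82)
ratio, t_stat = mspe_comparison(e_model, e_bench, lags=2)
```

A ratio above one with a large positive t-statistic is the pattern Table 3 reports for the actual VAR forecasts against the benchmark.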
First, we
In our data, first-order autocorrelations for the dividend yields and forward premia are all greater than 0.92, while those for the excess return variables are all less than 0.13.
estimate the VARs over the 1981-1990 sample period and select the optimal lag length for an autoregression of squared errors using the Schwarz criterion. Next, we rerun an autoregression of the squared residuals using this optimal lag length and calculate the TR2 statistic, which is distributed as a chi-square random variable with degrees of freedom equal to the number of autoregressors under the null of no autocorrelation.8 The test rejects the null of no autocorrelation in the squared errors at the 10% level in three of the 18 equations: the Japanese dividend yield and forward premium equations in the U.S.-Japan VAR and the forward premium equation in the U.S.-U.K. VAR. For each of these equations, a visual inspection of the error process revealed that a structural break in the variance process seemed at least as likely as a GARCH process to have generated the high LM statistics.9 To distinguish these hypotheses, we compared the Schwarz criterion and Akaike information criterion obtained by modeling the variance process in three ways: 1) as a GARCH(1,1) process; 2) as having a structural break; or 3) as a homoscedastic process.10 The date for the structural break was chosen in each case to maximize the likelihood function. In all three cases, the Schwarz criterion and Akaike information criterion lead one to prefer the structural break model to either the homoscedastic or the GARCH model. Reestimating the three equations permitting a structural break in the variance term results in very little change in the forecasting performance of the VARs. We conclude that the poor forecasting
All of the forecasting results in this paper are robust to using mean-absolute prediction error (MAPE) as the measure of forecast accuracy. These results are available from the authors. 8 Engle (1983) proposed this test to detect ARCH errors. The results of this test are available upon request. 9 It is well known that the presence of structural breaks in the variance process can bias the autoregressive coefficients of the squared error process just as breaks in the level of a series can bias unit root tests to the nonstationary alternative (see Perron (1990)). 10 The GARCH(1,1) model is a parsimonious representation that has been shown to fit a number of data series well.
performance cannot be attributed to GARCH effects or to structural breaks in the error variance process. The figures in Table 3 on the relative performance of currency excess return forecasts are worth noting. Bekaert and Hodrick found particularly strong evidence of predictability over the period 1981-89 for the dollar against the pound and the mark. They reported confidence levels of 0.999 or above. But the benchmark model outperforms the VAR at all horizons except six months for the mark. At the one-month horizon the MSPE ratio for the mark is 3.59. This suggests that findings on the predictability of foreign exchange excess returns are heavily dependent on the experience of the 1980s. The results presented in this section show that although the VAR model is clearly superior to the benchmark model in-sample, the benchmark model consistently outperforms the VAR model during the out-of-sample period, and that the gain in performance is frequently statistically significant. We next turn to examining possible explanations for these results.
5. Modifications to the forecasting procedure

We will refer to the forecasting procedure used in the previous section, using a fixed data sample and OLS estimates of the parameters, as the baseline case. We examine several modifications designed to improve forecasting performance: the imposition of Bayesian restrictions on coefficient estimates, correction for the small sample bias caused by the presence of lagged endogenous regressors11 and the use of additional data as the forecast date changes. There is considerable evidence that combining Bayesian techniques with VARs is helpful
Mankiw and Shapiro (1986) and Stambaugh (1986) discuss the small sample bias imparted by lagged endogenous regressors. Bekaert, Hodrick and Marshall (1997) use Monte Carlo procedures to correct for such a bias in term structure tests.
in forecasting (Litterman, 1986). We therefore consider a modified forecasting procedure in which we choose prior means for the coefficients of the A matrix to conform with our benchmark forecasting model. Thus, the own-lag coefficients on excess return variables have a prior mean of zero and the corresponding coefficients on dividend yields and forward premia have a prior mean of one. This is an adaptation of the well-known “Minnesota Prior” to the differenced variables in the model. In addition we assume a diffuse prior distribution for the constant term α_0. The standard deviation of the prior distribution for all own-lag coefficients is set to 0.2. So in the case of equity and foreign exchange excess returns the forecaster is assumed to attach a prior probability of 0.95 to the hypothesis that the own-lag coefficient lies between -0.4 and 0.4. The standard deviation of the prior distribution for all off-diagonal elements of the A matrix is set equal to 0.1 multiplied by an appropriate scale correction (Litterman, 1986; Doan, 1996). It is also important to recognize that estimating the coefficients from a fixed sample does not provide the VAR with all the information that would be available to the econometrician. So it may be possible to improve forecasting performance by updating the sample on which the forecasts are based, as new information becomes available. To investigate this question we consider two cases, an expanding data sample and a rolling data sample. If we are estimating a stable VAR relation, prediction with expanding sample sizes will provide the model with more useful information with which to make predictions. On the other hand, if instability in the underlying parameters is the cause of the inferior forecasting performance, using rolling samples may alleviate the problem. In the case of an expanding sample, we re-estimate the VAR on all data available up to time t to generate forecasts at time t.
This contrasts with the fixed sample approach used in the baseline case, where the VAR was estimated only once on data from 1981-89. In the case of a
rolling sample, for a forecast at time t the VAR is re-estimated on a rolling window of data (up to time t) equal to the original sample size of 108 observations. We summarize the results of comparing Bayesian and classical forecasts for fixed, expanding and rolling samples in Table 4. Since we have already demonstrated the substantial superiority of the benchmark model over the baseline VAR model, it is not surprising that we should see some improvement in forecast performance in the Bayesian case, whose priors push the predictions in the direction of the benchmark model. The use of a rolling sample leads to the greatest improvement in quality of forecast as measured by the number of cases in which the prediction error ratio is significantly greater than one. But the results are still in all cases inferior to those from the benchmark model. We also test forecasts formed with coefficients adjusted for the bias arising from the presence of lagged endogenous regressors. Bekaert, Hodrick and Marshall (1997) calculate the adjustments in a similar procedure. Our adjustment procedure is as follows:
1. Using an OLS estimate A_OLS of the VAR parameter matrix A and the covariance matrix from the period 1981-89, we generate 100,000 data sets of 108 observations, with initial conditions drawn from the unconditional distribution of the data.
2. We estimate the parameter matrix A of the VAR for each simulated data set using OLS, and calculate the average of those matrices, A_MC.
3. The bias-adjusted coefficient matrix is computed as A_BA = A_OLS + (A_OLS - A_MC).
We find in all cases that the bias-adjusted matrices possess eigenvalues greater than unity, indicating that the VAR may be misspecified, but for completeness report the forecasts with the adjusted parameter matrices. The results are presented in the bottom line of the two
panels of Table 4. In comparison to the baseline case, we find that the adjustment does not increase the number of MSPE ratios less than one at any horizon. The number of ratios significantly greater than one is reduced from 40 to 38. It is clear that this modification to the forecasting procedure produces virtually no improvement. Given that the benchmark model outperforms the VAR, it is reasonable to consider whether the VAR should be re-estimated, with the variables treated as integrated, in an error-correction framework. Complicating such an approach, however, is the uncertainty about the nature of the processes generating dividend yields and forward premia. For example, all four dividend-yield series show evidence of a time-varying (declining) mean in the sample until 1988 (see Figure 1). It is not clear whether these series are simply very persistent or whether it would be appropriate to model them with a time trend, a structural break or by differencing. Indeed, research in the unit root and structural break literature has shown that no finite amount of data will enable the observer to distinguish between an integrated series and a “close” highly persistent stationary alternative. The lack of an obvious way to model these series prevents us from reformulating the model with any confidence that it is correctly specified. An alternative approach to improving forecasting performance is to restrict the VAR coefficients by stepwise elimination of insignificant parameter estimates. Li and Schadt (1995) analyzed the effect of this procedure on parameter estimates for the two-country VARs considered by Bekaert and Hodrick (1992). They examined the asset allocation strategy implied by out-of-sample forecasts at the six-month horizon over the period 1987-93. Based on an examination of Sharpe ratios they concluded that there was no evidence that the forecasts were informative. These results are consistent with our findings.
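The three-step bias adjustment described above can be sketched in the simplest possible setting: a univariate "VAR" with a persistent root, where the downward small-sample bias of OLS is well known. The replication count and parameter value below are illustrative stand-ins (the paper uses 100,000 draws on the full six-variable systems, with intercepts).

```python
import numpy as np

def simulate_ar(A, T, rng):
    """Simulate T observations of y_t = A @ y_{t-1} + u_t with standard normal errors."""
    n = A.shape[0]
    Y = np.zeros((T, n))
    for t in range(1, T):
        Y[t] = A @ Y[t - 1] + rng.normal(size=n)
    return Y

def ols_lag_matrix(Y):
    """OLS estimate of the lag matrix (intercept suppressed for brevity)."""
    B, *_ = np.linalg.lstsq(Y[:-1], Y[1:], rcond=None)
    return B.T

rng = np.random.default_rng(2)
A_ols = np.array([[0.95]])          # stand-in for the estimated parameter matrix

# Steps 1 and 2: simulate many samples of the original size (108 months) from the
# estimated model, re-estimate on each, and average the estimates.
draws = [ols_lag_matrix(simulate_ar(A_ols, 108, rng)) for _ in range(2000)]
A_mc = np.mean(draws, axis=0)

# Step 3: the adjustment pushes the estimate away from the Monte Carlo mean.
A_ba = A_ols + (A_ols - A_mc)
```

Because the average simulated estimate lies below the persistent root, the adjusted value lies above it, which is how near-unit roots in the original estimates can end up pushed past unity, as the text reports.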
6. A Monte Carlo analysis of the forecasting performance of a stable VAR

The dividend yields and the forward premia series are very persistent (see Figure 1), with first-order autocorrelations over 0.92. The dividend yields also may have a time-varying (declining) mean in the sample. The extreme persistence found in dividend yields and forward premia is likely to produce poor small sample properties for the parameter estimates. This suggests that even a VAR estimated on stationary data with no structural breaks would not forecast particularly well. To investigate whether this persistence affects the forecasting performance of the VARs, we carry out the following Monte Carlo analysis. Using the VAR parameter estimates derived from the full sample (1981-1995/6) we generate 1000 new data sets of the same size as the full sample, using the data from 1980:12 as initial conditions.12 We then produce the set of forecasting statistics shown in Table 4 using the simulated data sets. The results from the simulated VAR data are summarized in Table 5. The forecasting performance of the stable VAR is not very impressive. At all horizons the Bayesian expanding sample case performs best, but beats the benchmark model on average in only 54 per cent of cases (see Panel A). However, despite this fact, we still find strong evidence against the hypothesis that a stable VAR process, even one with very persistent variables, generated the actual data. For one-month forecasts with an expanding sample (Panel C) we find that in both classical and Bayesian cases only five per cent of MSPE ratios from the simulated data are greater than those in the actual data. As one lengthens the forecast horizon these numbers increase. But this is simply a reflection of the reduced power of such comparisons at longer horizons to detect departures from a stable VAR process. Panel D compares the number of significance levels from the VAR-generated data that are less than those observed in the actual
data. We again find that at short horizons there is strong evidence against the hypothesis that a stable VAR process generated the actual data. The very poor forecasting performance relative to that of a stable VAR indicates that the estimated VAR does not well describe stable dynamic relationships between the variables. If the short-run dynamics are misspecified, this leads one to believe that the complex, nonlinear functions of the VAR parameters characterizing the long-run behavior of the system are of dubious usefulness.
7. Implications for Long-Horizon Statistics

Hodrick (1992) and Bekaert and Hodrick (1992) have argued that one of the advantages of the VAR-based approach is that, where evidence of predictability is present, one may use estimates of the parameters of the VAR to construct implied long-horizon statistics such as slope coefficients, R2 values and variance ratios without having to rely on a long series of data. Hodrick (1992) presents evidence from a Monte Carlo study indicating that implied long-horizon statistics have good small sample properties. He considers a three-variable VAR in which lagged stock return, dividend yield and Treasury bill rate are used to predict stock returns. His data sample contains 431 monthly observations on US data, and so is substantially larger than the one we have available. We investigate the small sample properties of the implied long-horizon statistics for a data set of the size we have. Thus we pose the following question: under the assumption that the estimated VAR is the correct model, will a sample of 174/190 monthly observations provide us with satisfactory estimates of the long-horizon statistics of interest? We estimate the VAR on the given data set (1981:1-1996:10 for US-Japan and US-UK
Drawing initial conditions from the unconditional distribution of the data made no significant
and 1981:1-1995:6 for US-Germany) and use the parameter estimates to generate 10,000 simulated data sets, drawing initial conditions from the unconditional distribution. For each simulated data set we obtain parameter estimates for the VAR and use them to construct long-horizon statistics. In Table 6 we compare, for the U.S.-Japan VAR (1981:1-1996:10), the actual values of selected long-horizon statistics with their simulated distributions. In Panel A we see that implied coefficients in a regression of stock return on dividend yield for both the U.S. and Japan are very imprecisely estimated and subject to severe bias. In the U.S. case the 50th percentile of the empirical distribution is two to three times the actual value. The bias is similar in the Japanese case.13 Only for the regression of foreign currency excess return on forward premium do we find that the empirical distributions are reasonably well centered on the actual value, although imprecisely estimated. A similar pattern emerges when we examine the implied R2 values in Panel B. When we consider implied variance ratios we find that they are again very imprecisely estimated for U.S. and Japanese stock returns, but fare better in the case of currency excess returns. The results from the other two VARs are comparable and we do not report them. So we find that even if the estimated VAR coefficients correspond to the true model, the implied long-horizon statistics will not be reliable for the sample size we consider. Our results stand in strong contrast to those of Hodrick (1992). The difference in findings can be attributed to two factors: Hodrick’s (1992) data set had over twice the number of observations of ours, and the VAR he analyzed had only half the number of variables.
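The flavor of this small-sample problem can be reproduced in a two-equation predictive system, a stand-in for the full VAR with made-up parameter values: a one-period return equation driven by a highly persistent predictor, with the strongly negatively correlated innovations typical of prices and dividend yields. The k-period slope implied by the one-period estimates, β̂(1 + ρ̂ + ... + ρ̂^{k-1}), is then computed on each simulated sample.

```python
import numpy as np

rng = np.random.default_rng(4)
beta, rho, T, k = 0.05, 0.98, 190, 48   # illustrative parameters, not estimates from the paper

def implied_long_horizon_slope():
    """Simulate the system, estimate beta and rho by OLS, and return the implied
    k-period slope beta_hat * (1 - rho_hat**k) / (1 - rho_hat)."""
    cov = [[1.0, -0.7], [-0.7, 1.0]]               # negatively correlated shocks
    shocks = rng.multivariate_normal([0.0, 0.0], cov, size=T)
    dy = np.zeros(T + 1)                           # persistent predictor (e.g. dividend yield)
    r = np.zeros(T + 1)                            # one-period return
    for t in range(1, T + 1):
        r[t] = beta * dy[t - 1] + shocks[t - 1, 0]
        dy[t] = rho * dy[t - 1] + 0.1 * shocks[t - 1, 1]
    x = dy[:-1] - dy[:-1].mean()
    beta_hat = (x @ (r[1:] - r[1:].mean())) / (x @ x)
    w = dy[1:] - dy[1:].mean()
    rho_hat = (x @ w) / (x @ x)
    return beta_hat * (1 - rho_hat**k) / (1 - rho_hat)

draws = np.array([implied_long_horizon_slope() for _ in range(400)])
true_slope = beta * (1 - rho**k) / (1 - rho)       # implied by the true parameters
```

The simulated distribution of the implied slope is both wide and centered well above the true value, the same pattern of severe bias and imprecision that Table 6 documents for the actual VARs.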
difference to the results. 13 We found that it was necessary to increase the sample size to 5000 in order to approximately center the empirical distribution on the observed values in these two regressions.
8. Tests for Structural Breaks

The poor out-of-sample forecast performance of the VAR and the relative success of rolling forecasts indicate that the estimated parameters are unstable. This problem, a form of model misspecification, is common in time series regressions. To quantify the extent of the problem we test for a structural break at an unknown date by calculating the standard Wald test statistics for a structural break at each observation in the middle third of each sample. The supremum of these test statistics identifies a possible structural break in the series but will have a nonstandard distribution (Andrews, 1993). The critical value for the supremum is calculated from a Monte Carlo experiment.14 Figure 2 shows a plot of these structural break statistics, along with the 1 per cent Monte Carlo critical values for the supremum of each series over the period from approximately the end of 1985 to the end of 1990.15 In all three VARs the supremum lies comfortably above the 1 per cent critical value, indicating the presence of a structural break. The strongest evidence of a break for the US-Japan VAR and US-UK VAR occurs in the period from February to April 1987. To help identify the relationships in the VAR that are changing in this period, we calculate structural break statistics for each equation and for the individual coefficients in each equation.16 The evidence is not conclusive, but the equations for the US and Japanese equity returns and the Japanese dividend yield have large structural break statistics in this period.17
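A scalar sketch of the sup-Wald calculation (the paper applies it system-wide to each VAR; here a single regression on a constant, with an artificial mean shift, illustrates the mechanics). The critical values in the paper come from a Monte Carlo experiment, since the supremum does not follow a chi-square distribution.

```python
import numpy as np

def chow_wald(y, X, tau):
    """Chow/Wald-type statistic for a break in all coefficients at observation tau."""
    def ssr(yy, XX):
        b, *_ = np.linalg.lstsq(XX, yy, rcond=None)
        e = yy - XX @ b
        return e @ e
    T, kk = X.shape
    s_pooled = ssr(y, X)
    s_split = ssr(y[:tau], X[:tau]) + ssr(y[tau:], X[tau:])
    return (T - 2 * kk) * (s_pooled - s_split) / s_split

def sup_wald(y, X):
    """Supremum of the break statistics over the middle third of the sample
    (Andrews, 1993), returned with the arg-max break date."""
    T = len(y)
    taus = list(range(T // 3, 2 * T // 3))
    stats = [chow_wald(y, X, tau) for tau in taus]
    i = int(np.argmax(stats))
    return stats[i], taus[i]

# Artificial series with a deliberate mean shift at observation 110.
rng = np.random.default_rng(5)
y = rng.normal(size=180)
y[110:] += 2.0
X = np.ones((180, 1))            # regression on a constant only
stat, tau_hat = sup_wald(y, X)
```

Restricting the search to the middle third avoids the ill-conditioned split regressions near the sample ends, and the date at which the statistic peaks is the candidate break date reported in the text.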
Ghysels, Guay and Hall (1998) propose another approach to examining structural instability with an unknown break point that is less computationally demanding than the Andrews procedures. This advantage of their procedure is less helpful in the VAR environment. 15 The Statistical Appendix describes the Monte Carlo procedures. 16 Bekaert and Harvey (1998) examine the micro and macro determinants of instability in the volatility process of emerging markets. 17 Evidence from the individual equations and coefficients may be misleading in that the equation statistics lose covariance information compared to the statistics for the whole VAR. This may be important, as there is evidence of strong correlation in the coefficient estimators for
The US-Germany VAR structural break statistics are comfortably above the one per cent critical value from early 1989 on, perhaps due to changes brought about by reunification. The forward premium equation statistic was quite high from 1989 through 1990 but the equations showing the strongest evidence of an increase in instability in August 1990 were those of the US equity return and dividend yield, which are highly correlated in each VAR.
9. Discussion and Conclusion

We have argued that the issue of structural stability of parameter estimates is generally ignored in studies that seek to demonstrate predictability of asset returns. This issue particularly complicates the inference of long-horizon properties of the data from relatively short time series. We have illustrated the problems that can arise by re-examining the findings of Bekaert and Hodrick (1992) on the predictability of international asset returns. We examine the out-of-sample forecasting performance of the VAR, and find it to be very poor. The VAR forecasts are conclusively outperformed by a simple benchmark model which assumes that excess returns in equity and foreign exchange markets are constant, and that dividend yields and forward premia follow random walks. We consider several explanations for the very poor forecasting performance of the VAR.
1. Poor parameter estimates may be caused by pure sampling error.
2. The extreme persistence found in dividend yields and forward premia may result in poor small sample properties for the parameter estimates.
3. The VAR coefficients may exhibit small-sample bias due to the presence of lagged
the US equity return and US dividend yield equations in all three VARs. Similarly, the structural break statistics for individual coefficients may be misleading if there is significant correlation between the coefficient estimators.
endogenous regressors.
4. The VAR parameters may have been subject to some underlying structural change.
The strong evidence of bias in the out-of-sample forecasts and their very poor performance relative to the benchmark model suggest that pure sampling error is not a plausible explanation. This is further supported by the results of the Monte Carlo analysis in which we generate simulated data sets with a stable VAR estimated over the sample period. Although Monte Carlo forecasts based on VAR parameter estimates are not very good, they significantly outperform the benchmark model and certainly do far better than forecasts based on actual data. The Monte Carlo analysis also indicates that while highly persistent variables may contribute to the poor forecasting performance of the VAR, they cannot adequately explain our results. The lack of an obvious way to model the highly persistent series prevents us from reformulating the model with any confidence that it is correctly specified. The failure of bias adjustment to produce any detectable improvement in the VAR forecasts casts doubt on the third possible explanation, that lagged endogenous variables cause small sample bias. This leaves us with the fourth possibility, that one or more structural breaks occurred in the data. This hypothesis is strongly supported by the structural break statistics shown in Figure 2. Examination of long horizon statistics inferred from the VAR parameter estimates has shown that even if the VAR were stable the estimates from a sample of 190 monthly observations would in general be badly biased and very imprecise. Thus the methodology advocated by Hodrick (1992) is shown to be sensitive to sample size and model size. The unreliability of the estimates is here exacerbated by the evident instability of the VAR.
Data Appendix

Excess equity returns were constructed by subtracting the continuously compounded 1-month Eurocurrency interest rate, collected by the Bank for International Settlements at the end of each month, from the total equity return provided by Morgan Stanley Capital International (MSCI). The MSCI total return series is subdivided into separate income return and capital appreciation series. The MSCI income return and capital appreciation series were used to calculate dividend yields as annualized dividends divided by current price for the U.S., Japan and the U.K. The German dividend yield series is taken from various issues of the Monthly Report of the Deutsche Bundesbank, from the column labeled "yields on shares including tax credit." Prior to 1993, this series could be found in Section VI, Table 6 of the statistical section. Starting in January 1993, this series was displayed in Section VII, Table 5 until June 1995. The spot and 1-month forward exchange rates were obtained from DRI/McGraw-Hill's DRIFACS database. All bid and ask exchange rate data were sampled at the end of the month. Transaction costs were also incorporated by taking account of the fact that a currency is bought at the ask price and sold at the bid.
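The return and yield constructions above are simple monthly arithmetic. The sketch below illustrates them with made-up numbers; it assumes, purely for illustration, that the Eurocurrency rate is quoted as an annualized continuously compounded percentage, so its monthly equivalent is one twelfth of the quote:

```python
def excess_equity_return(total_return_pct, euro_rate_annual_pct):
    """Monthly excess equity return in percent: total equity return
    minus the continuously compounded 1-month Eurocurrency rate."""
    monthly_rate = euro_rate_annual_pct / 12.0  # annualized -> monthly
    return total_return_pct - monthly_rate

def dividend_yield_pct(annual_dividends, price):
    """Dividend yield: annualized dividends divided by current price."""
    return 100.0 * annual_dividends / price

# Hypothetical observations for one month
excess = excess_equity_return(1.8, 6.0)   # 1.8% total return, 6% annual rate
dy = dividend_yield_pct(2.4, 120.0)       # 2.40 in dividends on a 120 index
```

Bid/ask transaction costs would enter at the exchange-rate stage (a currency is bought at the ask and sold at the bid), which this sketch omits.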
Statistical Appendix

A.1. Testing for bias in the VAR forecasts

To determine whether the mean errors that are reported in Table 2 are statistically significantly different from zero, we calculate the following statistic (Hansen and Singleton, 1982; Diebold and Mariano, 1995):

$$ T\,\bar{e}'\,S_T^{-1}\,\bar{e} \;\overset{A}{\sim}\; \chi^2(r) $$

which is asymptotically distributed as a $\chi^2$ random variable with $r$ degrees of freedom under the hypothesis that the errors have zero mean, i.e. the forecasts are unbiased. In this expression, $\bar{e} = \frac{1}{T}\sum_{t=1}^{T}\hat{e}_t$ is the average forecast error during the out-of-sample period, $\hat{e}_t$ is the $r \times 1$ forecast error at time $t$, and $S_T/T$ is the $(r \times r)$ covariance matrix of $\bar{e}$. The integer $r$ is the number of forecasts being tested by the statistic. For example, for the tests of individual variables in Panel A of Table 2, $r = 1$, but for the tests over a VAR in Panel B, $r = 6$ because we are testing six
forecasts at a time. Similarly, for the tests across VARs, $r = 3$. Because the $\{\hat{e}_t\}$ series may be autocorrelated, we must take account of this in the construction of our estimate of the variance-covariance matrix of $\hat{e}_t$, $S_T$. We do this with the Newey-West estimator:

$$ S_T = \Gamma_{0,T} + \sum_{v=1}^{q}\left[1 - \frac{v}{q+1}\right]\left(\Gamma_{v,T} + \Gamma_{v,T}'\right), \qquad \Gamma_{v,T} = \frac{1}{T}\sum_{t=v+1}^{T}\left(\hat{e}_t - \bar{e}\right)\left(\hat{e}_{t-v} - \bar{e}\right)' $$

The parameter $q$ is chosen to match the order of the serial correlation in the data with a sample-dependent formula suggested by Newey and West (1994). Experimentation with other lag length and weighting procedures produced similar results.
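For a single variable ($r = 1$) the bias statistic reduces to scalar arithmetic. The sketch below is a minimal illustration, not the paper's code, and it assumes the truncation lag q is supplied by the caller rather than chosen by the automatic Newey-West (1994) rule:

```python
def newey_west_variance(errors, q):
    """Long-run variance S_T of a (possibly autocorrelated) scalar
    error series, using Bartlett weights with truncation lag q."""
    T = len(errors)
    ebar = sum(errors) / T
    dev = [e - ebar for e in errors]
    gamma = lambda v: sum(dev[t] * dev[t - v] for t in range(v, T)) / T
    s = gamma(0)
    for v in range(1, q + 1):
        s += (1.0 - v / (q + 1.0)) * 2.0 * gamma(v)
    return s

def bias_statistic(errors, q):
    """T * ebar^2 / S_T: asymptotically chi-square(1) under the
    null that the forecast errors have zero mean."""
    T = len(errors)
    ebar = sum(errors) / T
    return T * ebar * ebar / newey_west_variance(errors, q)
```

The resulting statistic would be compared with a chi-square(1) critical value (3.84 at the 5 percent level).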
A.2. Testing for differences in the prediction errors of VAR and random walk models

To determine whether the prediction error ratios in Table 3 are statistically significant, we follow a similar procedure and calculate the following statistic (Hansen and Singleton, 1982):

$$ T\left[\left(\bar{e}^2_{VAR} - \bar{e}^2_{B}\right) S_T^{-1} \left(\bar{e}^2_{VAR} - \bar{e}^2_{B}\right)'\right] \;\overset{A}{\sim}\; \chi^2(r) $$

where $\bar{e}^2_{VAR} = \frac{1}{T}\sum_{t=1}^{T}\left(Y_t - \hat{Y}_t^{VAR}\right)^2$ is the MSPE from the VAR forecast ($\hat{Y}_t^{VAR}$), $\bar{e}^2_{B}$ is the comparable statistic from the benchmark forecast, $S_T/T$ is the Newey-West estimate of the covariance matrix of the mean square difference $\left(\bar{e}^2_{VAR} - \bar{e}^2_{B}\right)$, and $r$ denotes the dimension of $\bar{e}^2_{VAR}$ and $\bar{e}^2_{B}$, the number of forecasts we are testing. For the individual variables tested, $r = 1$. If the MSPEs from the two prediction methods are equal, the statistic is asymptotically distributed as a chi-square random variable with $r$ degrees of freedom.
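For $r = 1$ the comparison amounts to testing whether the average difference of squared forecast errors is zero, in the spirit of Diebold and Mariano (1995). A scalar sketch follows; the function names are illustrative, not from the paper:

```python
def mspe_difference_statistic(errors_var, errors_bench, q):
    """Chi-square(1) statistic for equal MSPE of two forecasts.
    d_t = e_VAR,t^2 - e_B,t^2; statistic is T * dbar^2 / S_T."""
    d = [a * a - b * b for a, b in zip(errors_var, errors_bench)]
    T = len(d)
    dbar = sum(d) / T
    # Newey-West long-run variance of d with Bartlett weights
    dev = [x - dbar for x in d]
    gamma = lambda v: sum(dev[t] * dev[t - v] for t in range(v, T)) / T
    s = gamma(0) + sum((1.0 - v / (q + 1.0)) * 2.0 * gamma(v)
                       for v in range(1, q + 1))
    return T * dbar * dbar / s

def mspe_ratio(errors_var, errors_bench):
    """VAR MSPE divided by benchmark MSPE, as reported in Table 3."""
    num = sum(e * e for e in errors_var)
    den = sum(e * e for e in errors_bench)
    return num / den
```

A ratio above one with a small p-value from the statistic indicates that the benchmark forecasts dominate.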
A.3. Testing for a structural break at an unknown break point

We test for an unknown break point in the middle third of the sample by first constructing a series of statistics for each observation in the subsample. The statistic at a given point, $T_0$, in a sample running from time zero to $T$, is calculated by estimating the VAR from time zero to $T_0$ and from $T_0$ to $T$, obtaining two sets of coefficient estimates, $\hat{\beta}_1$ and $\hat{\beta}_2$. If $\pi = T_0/T$ denotes the fraction of the total observations which are from the first part of the sample, then

$$ \sqrt{T}\left(\hat{\beta}_1 - \beta_1\right) \sim N\!\left(0,\, V_1/\pi\right), \qquad \sqrt{T}\left(\hat{\beta}_2 - \beta_2\right) \sim N\!\left(0,\, V_2/(1-\pi)\right) $$

where $V_1 = \Sigma_1 \otimes Q_1^{-1}$, $V_2 = \Sigma_2 \otimes Q_2^{-1}$,

$$ Q_1 = \frac{1}{T_0}\sum_{t=1}^{T_0} y_{t-1}\,y_{t-1}', \qquad Q_2 = \frac{1}{T - T_0}\sum_{t=T_0+1}^{T} y_{t-1}\,y_{t-1}', $$

$$ \Sigma_1 = \frac{1}{T_0}\sum_{t=1}^{T_0} u_t\,u_t', \qquad \Sigma_2 = \frac{1}{T - T_0}\sum_{t=T_0+1}^{T} u_t\,u_t' $$

(Hamilton, 1994). The test statistic for a break at $T_0$ is given by the quadratic function of the difference between the parameter estimates weighted by the inverse of their covariance matrix:

$$ F = T\left(\hat{\beta}_1 - \hat{\beta}_2\right)'\left[\left(\Sigma_1 \otimes Q_1^{-1}\right)/\pi + \left(\Sigma_2 \otimes Q_2^{-1}\right)/(1-\pi)\right]^{-1}\left(\hat{\beta}_1 - \hat{\beta}_2\right) $$
The null of no structural break within a given subsample is rejected for sufficiently high values of the supremum of $F$ over the subsample. For tests of structural breaks in one equation or in an individual coefficient, statistics similar to $F$ can be calculated using the appropriate coefficient estimate(s) and the variance-covariance matrix of those estimate(s). We calculate the 1 per cent critical values from the following Monte Carlo experiment.

1. We estimate each VAR over the whole sample, saving coefficients and the covariance matrix.
2. Using the initial conditions (1980:12 data), estimated coefficients and covariance matrix, we create 1000 new data sets of length T, where T is the length of the whole sample.
3. We compute 1000 time series of structural break statistics over the middle third of each of the simulated data sets, one series for each generated data set. Each time series of structural break statistics is approximately of length 63 (0.33T).
4. The ninety-ninth percentile of the distribution of suprema over these 1000 time series is the one per cent critical value used in Figure 2.
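As an illustration of the mechanics, the sketch below applies the same split-sample logic to the simplest possible case, a univariate AR(1) without intercept. It is a simplified scalar analogue of the VAR statistic, not the procedure used in the paper:

```python
def ar1_fit(y):
    """OLS slope of y_t on y_{t-1} (no intercept).
    Returns (beta, residual variance, second-moment matrix Q)."""
    num = sum(y[t] * y[t - 1] for t in range(1, len(y)))
    den = sum(y[t - 1] ** 2 for t in range(1, len(y)))
    beta = num / den
    n = len(y) - 1
    resid = [y[t] - beta * y[t - 1] for t in range(1, len(y))]
    sigma = sum(r * r for r in resid) / n
    return beta, sigma, den / n

def sup_break_statistic(y):
    """Supremum over the middle third of the sample of the scalar
    break statistic F = T (b1 - b2)^2 / (V1/pi + V2/(1-pi)),
    where V_i = sigma_i / Q_i is the scalar analogue of
    Sigma_i (x) Q_i^{-1}."""
    T = len(y)
    best = 0.0
    for t0 in range(T // 3, 2 * T // 3):
        b1, s1, q1 = ar1_fit(y[: t0 + 1])   # first subsample
        b2, s2, q2 = ar1_fit(y[t0:])        # second subsample
        pi = t0 / T
        v = s1 / q1 / pi + s2 / q2 / (1.0 - pi)
        best = max(best, T * (b1 - b2) ** 2 / v)
    return best
```

Monte Carlo critical values would then come from applying `sup_break_statistic` to many data sets simulated from the full-sample parameter estimates and taking the 99th percentile of the suprema.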
References

Andrews, Donald W. K., "Tests for Parameter Instability and Structural Change with Unknown Change Point," Econometrica (July 1993), 821-56.
Bekaert, Geert and Campbell R. Harvey, "Emerging Equity Market Volatility," Journal of Financial Economics (January 1997), 29-77.
Bekaert, Geert and Robert J. Hodrick, "Characterizing Predictable Components in Excess Returns on Equity and Foreign Exchange Markets," Journal of Finance (June 1992), 467-509.
Bekaert, Geert, Robert J. Hodrick and David A. Marshall, "On Biases in Tests of the Expectations Hypothesis of the Term Structure of Interest Rates," Journal of Financial Economics (June 1997), 309-48.
Campbell, John Y., "Stock Returns and the Term Structure," Journal of Financial Economics (June 1987), 373-99.
Campbell, John Y., Andrew W. Lo and A. Craig MacKinlay, The Econometrics of Financial Markets, Princeton University Press, 1997.
Diebold, Francis X. and Roberto S. Mariano, "Comparing Predictive Accuracy," Journal of Business and Economic Statistics (July 1995), 253-63.
Doan, Thomas A., "RATS User's Manual Version 4," Estima, 1996.
Engle, Robert F., "Estimates of the Variance of U.S. Inflation Based upon the ARCH Model," Journal of Money, Credit, and Banking (August 1983), 286-301.
Fama, Eugene F., "Efficient Capital Markets: A Review of Theory and Empirical Work," Journal of Finance (May 1970), 383-417.
Fama, Eugene F. and Kenneth R. French, "Permanent and Temporary Components of Stock Prices," Journal of Political Economy (April 1988), 246-73.
Garcia, Rene and Eric Ghysels, "Structural Change and Asset Pricing in Emerging Markets," Journal of International Money and Finance (June 1998), 455-73.
Ghysels, Eric, "On Stable Factor Structures in the Pricing of Risk: Do Time Varying Betas Help or Hurt?," Journal of Finance (April 1998), 549-73.
Ghysels, Eric, Alain Guay and Alastair Hall, "Predictive Tests for Structural Change with Unknown Breakpoint," Journal of Econometrics (February 1998), 209-33.
Hamilton, James D., Time Series Analysis, Princeton University Press, 1994.
Hansen, Bruce E., "Tests for Parameter Instability in Regressions with I(1) Processes," Journal of Business and Economic Statistics (July 1992), 321-35.
Hansen, Lars Peter and Kenneth J. Singleton, "Generalized Instrumental Variables Estimation of Nonlinear Rational Expectations Models," Econometrica (September 1982), 1269-86.
Hodrick, Robert J., "Dividend Yields and Expected Stock Returns: Alternative Procedures for Inference and Measurement," Review of Financial Studies (1992), 357-86.
Lamont, Owen, "Earnings and Expected Returns," Journal of Finance (October 1998), 1563-87.
Li, Hong and Rudi Schadt, "Multivariate Time Series Study of Excess Returns on Equity and Foreign Exchange Markets," Journal of International Financial Markets, Institutions & Money (1995), 3-35.
Litterman, Robert B., "Forecasting with Bayesian Vector Autoregressions - Five Years of Experience," Journal of Business and Economic Statistics (January 1986), 25-38.
Lo, Andrew W. and A. Craig MacKinlay, "Stock Market Prices Do Not Follow Random Walks: Evidence from a Simple Specification Test," Review of Financial Studies (Spring 1988), 41-66.
Mankiw, N. Gregory and Matthew D. Shapiro, "Do We Reject Too Often? Small Sample Properties of Tests of Rational Expectations Models," Economics Letters (1986), 139-45.
Mark, Nelson C., "Exchange Rates and Fundamentals: Evidence on Long-Horizon Predictability," American Economic Review (March 1995), 201-18.
Meese, Richard A. and Kenneth Rogoff, "Empirical Exchange Rate Models of the Seventies: Do They Fit Out of Sample?," Journal of International Economics (February 1983), 3-24.
Nelson, Charles R. and Myung J. Kim, "Predictable Stock Returns: The Role of Small Sample Bias," Journal of Finance (June 1993), 641-61.
Newey, Whitney K. and Kenneth D. West, "Automatic Lag Selection in Covariance Matrix Estimation," Review of Economic Studies (October 1994), 631-53.
Perron, Pierre, "Testing for a Unit Root in a Time Series with a Changing Mean," Journal of Business and Economic Statistics (April 1990), 153-62.
Poterba, James M. and Lawrence H. Summers, "Mean Reversion in Stock Prices," Journal of Financial Economics (October 1988), 27-59.
Richardson, Matthew, "Temporary Components of Stock Prices: A Skeptic's View," Journal of Business and Economic Statistics (April 1993), 199-207.
Stambaugh, Robert F., "Bias in Regressions with Lagged Stochastic Regressors," Center for Research in Security Prices, working paper number 156 (January 1986).
Stock, James H. and Mark W. Watson, "Evidence on Structural Instability in Macroeconomic Time Series Relations," Journal of Business and Economic Statistics (January 1996), 11-30.
Summers, Lawrence H., "Does the Stock Market Rationally Reflect Fundamental Values?," Journal of Finance (July 1986), 591-601.
Table 1
Tests for predictability in the vector autoregressions

                         1981-89           1990-end of sample   1981-end of sample
                     chi2(6)  p-value      chi2(6)  p-value      chi2(6)  p-value
U.S.-Japan VAR
  r1                   30.43    0.00        13.64    0.03          4.69    0.58
  r2                   14.75    0.02        16.81    0.01          8.87    0.18
  rs2                  15.27    0.02        15.73    0.02         19.07    0.00
  dy1                2858.67    0.00      1973.64    0.00       5049.14    0.00
  dy2               16076.11    0.00       521.27    0.00      13763.07    0.00
  fp2                 350.45    0.00      3213.09    0.00       1057.94    0.00
U.S.-U.K. VAR
  r1                   17.57    0.01        16.06    0.01          6.07    0.42
  r3                    7.10    0.31        25.22    0.00          5.99    0.42
  rs3                  26.55    0.00         9.41    0.15         24.85    0.00
  dy1                2511.45    0.00      1942.97    0.00       4410.49    0.00
  dy3                2401.98    0.00       551.30    0.00       4002.55    0.00
  fp3                 518.91    0.00      2528.28    0.00       1310.00    0.00
U.S.-Germany VAR
  r1                   10.08    0.12        17.70    0.01          2.79    0.83
  r4                    7.19    0.30         8.01    0.24          7.23    0.30
  rs4                  36.02    0.00        11.39    0.08         27.24    0.00
  dy1                1901.93    0.00       755.96    0.00       4433.16    0.00
  dy4                4557.45    0.00       308.15    0.00       4791.59    0.00
  fp4                  74.60    0.00      5144.52    0.00       2631.97    0.00

The variables from each country are identified by subscripts: 1 for the U.S., 2 for Japan, 3 for the United Kingdom and 4 for Germany. ri is the excess equity return for country i; rsi is the excess return in dollars to a long position in the currency of country i; dyi is the dividend yield in country i; fpi is the forward premium for the currency of country i. End-of-sample is 1996:10 for U.S.-Japan and U.S.-U.K., and 1995:6 for U.S.-Germany. The chi-square test is a Wald test with heteroskedastic-consistent standard errors for the null hypothesis that the six slope coefficients in each equation are jointly equal to zero. Low p-values reject the null of no predictability in the VAR.
Table 2
Mean forecast errors and tests for unbiasedness

Panel A: Individual variables

                       1 month            3 month            6 month
                     Mean  p-value      Mean  p-value      Mean  p-value
U.S.-Japan VAR
  r1               -11.25   0.23      -18.11   0.01      -10.98   0.03
  r2                57.75   0.00       42.27   0.00       35.44   0.00
  rs2               26.71   0.00       19.86   0.00       13.09   0.00
  dy1                0.03   0.44        0.15   0.10        0.26   0.05
  dy2               -0.06   0.00       -0.15   0.00       -0.24   0.00
  fp2                0.15   0.16        0.40   0.15        0.67   0.15
U.S.-U.K. VAR
  r1               -12.89   0.11      -13.35   0.02      -10.04   0.04
  r3                 4.78   0.51       -1.85   0.76       -2.18   0.69
  rs3                4.89   0.49        6.43   0.25        6.16   0.24
  dy1                0.06   0.03        0.19   0.01        0.32   0.00
  dy3               -0.04   0.12       -0.06   0.36       -0.08   0.42
  fp3               -0.10   0.46       -0.17   0.62       -0.28   0.59
U.S.-Germany VAR
  r1                18.24   0.00        1.54   0.76       -4.84   0.34
  r4                32.48   0.00       20.05   0.01       10.97   0.18
  rs4               57.74   0.00       15.13   0.01        5.31   0.29
  dy1               -0.07   0.00       -0.11   0.01       -0.06   0.35
  dy4               -0.12   0.00       -0.27   0.00       -0.35   0.00
  fp4                2.26   0.00        4.16   0.00        5.13   0.00
Forecast errors are measured in percentage points per annum. The p-values report the probability that the mean forecast errors were drawn from a distribution with zero mean. They were calculated with a Newey-West correction for serial correlation. See the appendix for a more detailed discussion of the tests. Low p-values reject the null of unbiased predictions by the VAR.
Table 2 (continued)
Mean forecast errors and tests for unbiasedness

Panel B: Variables pooled within each VAR

                     1 month            3 month            6 month
VAR                Mean  p-value      Mean  p-value      Mean  p-value
U.S.-Japan        73.33   0.00       44.41   0.00       38.24   0.00
U.S.-U.K.         -3.30   0.00       -8.81   0.00       -6.10   0.00
U.S.-Germany     110.53   0.00       40.50   0.00       16.17   0.00
The mean errors are averaged over all the variables in the VAR. The p-values are calculated for the null hypothesis that the mean errors for all variables within each VAR are zero. Low p-values reject the null of unbiased predictions by the VAR.
Panel C: Variables pooled across VARs

                   1 month            3 month            6 month
Variable           ME  p-value        ME  p-value        ME  p-value
r1              16.21   0.00      -13.90   0.00      -15.68   0.77
rj             106.95   0.00       74.40   0.00       57.41   0.00
rsj             90.01   0.00       39.14   0.01       20.82   0.36
dy1             -0.06   0.00        0.02   0.00        0.20   0.00
dyj             -0.23   0.00       -0.53   0.00       -0.78   0.00
fpj              2.47   0.00        4.90   0.00        6.43   0.00
The mean errors for each variable are averaged across the three VARs. The p-values are calculated for the null hypothesis that the mean errors for each variable in all VARs are zero. Low p-values reject the null of unbiased predictions across the VARs.
Table 3
A comparison of the mean square prediction error (MSPE) of VAR forecasts to those of benchmark forecasts

                       1 month            3 month            6 month
                     MSPE  p-value      MSPE  p-value      MSPE  p-value
U.S.-Japan VAR
  r1                 1.86   0.00        1.54   0.02        1.18   0.02
  r2                 1.28   0.00        1.15   0.00        1.09   0.01
  rs2                1.45   0.01        1.23   0.05        1.07   0.42
  dy1                2.56   0.00        4.20   0.02        5.04   0.04
  dy2                2.09   0.00        3.38   0.00        3.85   0.00
  fp2                1.55   0.01        2.62   0.00        2.70   0.01
U.S.-U.K. VAR
  r1                 1.58   0.00        1.30   0.01        1.15   0.02
  r3                 1.25   0.00        1.07   0.10        1.02   0.45
  rs3                1.31   0.16        1.09   0.54        1.02   0.89
  dy1                2.33   0.00        3.64   0.01        4.61   0.01
  dy3                1.39   0.01        1.29   0.23        1.40   0.34
  fp3                2.06   0.00        2.89   0.00        2.60   0.01
U.S.-Germany VAR
  r1                 1.35   0.01        1.03   0.48        1.04   0.10
  r4                 1.17   0.02        1.04   0.11        0.98   0.10
  rs4                3.59   0.00        1.10   0.52        0.94   0.44
  dy1                1.63   0.00        1.51   0.08        1.14   0.68
  dy4                1.46   0.01        1.64   0.06        1.34   0.29
  fp4               41.12   0.00       30.34   0.00       13.37   0.00
Columns headed MSPE give the MSPE ratio. Ratios are constructed as VAR MSPE/benchmark model MSPE. A ratio less than one indicates that the VAR forecast is more accurate on average than the benchmark forecast. The columns labeled "p-value" show the probability of obtaining at least as extreme a test statistic given that the MSPEs from each model are equal. Ratios greater than one indicate that the simple benchmark forecasts are better than those of the VAR; low p-values reject the null of equal MSPE. See the appendix for a more detailed discussion of the tests.
Table 4
Summary of the performance of alternative forecasting methods relative to the benchmark model

Panel A: MSPE ratios < 1
Method                       Sample       1 month  3 month  6 month  Total (of 54)
Classical (baseline case)    fixed           0        0        2          2
Classical                    rolling         0        0        3          3
Classical                    expanding       0        0        2          2
Bayesian                     fixed           0        0        2          2
Bayesian                     rolling         0        0        2          2
Bayesian                     expanding       0        0        2          2
Bias corrected               fixed           0        0        1          1

Panel B: MSPE ratios, number significant
Method                       Sample       1 month  3 month  6 month  Total (of 54)
Classical (baseline case)    fixed          17        9        9         35
Classical                    rolling         7        3        2         12
Classical                    expanding      10        7        8         25
Bayesian                     fixed          14        9        8         31
Bayesian                     rolling         4        1        2          7
Bayesian                     expanding       9        5        5         19
Bias corrected               fixed          17       12        9         38
Panel A: Ratios are constructed as forecast MSPE/benchmark model MSPE. A ratio less than one indicates that the particular forecast is more accurate on average than the benchmark forecast. The panel records the number of MSPE ratios less than one over all variables and country pairs, by horizon and forecasting method. Ratios less than one indicate that the VAR outperforms the simple benchmark. Panel B: Number of MSPE ratios significantly greater than one at the 5% level over all variables and country pairs, by horizon and forecasting method. Significant MSPE ratios indicate that the benchmark model is superior to the particular VAR model in a statistically significant way.
Table 5
Monte Carlo forecasting performance with a stable VAR generating the data

Panel A: % MSPE ratios < 1
Method      Sample       1 month  3 month  6 month   Total
Classical   fixed         37.34    42.73    46.30    42.12
Classical   rolling       35.49    36.66    41.23    37.79
Classical   expanding     48.67    49.51    51.04    49.74
Bayesian    fixed         46.73    46.39    48.30    47.14
Bayesian    rolling       48.51    45.50    47.69    47.23
Bayesian    expanding     57.34    52.78    52.84    54.32

Panel B: MSPE ratios, % significant
Method      Sample       1 month  3 month  6 month   Total
Classical   fixed         17.65    16.54    15.14    16.44
Classical   rolling        6.79     7.37     7.21     7.12
Classical   expanding      4.01     4.05     4.53     4.20
Bayesian    fixed         12.78    13.11    13.04    12.98
Bayesian    rolling        4.03     4.59     5.38     4.66
Bayesian    expanding      2.71     3.25     4.12     3.36

Panel C: % MSPE ratios > actual
Method      Sample       1 month  3 month  6 month   Total
Classical   fixed          7.21    14.18    23.60    14.99
Classical   rolling       10.82    21.19    25.88    19.30
Classical   expanding      5.25    12.87    19.82    12.65
Bayesian    fixed          8.16    15.36    23.53    15.68
Bayesian    rolling        9.10    17.31    23.86    16.76
Bayesian    expanding      5.12    13.29    20.48    12.96

Panel D: % MSPE significance levels < actual
Method      Sample       1 month  3 month  6 month   Total
Classical   fixed          7.41    16.11    19.73    14.41
Classical   rolling       11.19    25.75    26.44    21.13
Classical   expanding      3.92     4.01     4.52     4.15
Bayesian    fixed          8.57    17.15    19.87    15.20
Bayesian    rolling       13.75    22.91    25.08    20.58
Bayesian    expanding      6.04    13.48    16.01    11.85
1000 data sets of 190 observations were generated from VAR parameter estimates for the sample 1981-96. Out-ofsample forecasts were constructed as in Table 4 and MSPE ratios relative to the benchmark model were calculated. Panel A reports the percentage of MSPE ratios less than one in VAR-generated data. Panel B reports the percentage of MSPE ratios significantly greater than one at the 5% level. Panel C reports the percentage of MSPE ratios from VAR-generated data greater than the corresponding ratios from the actual data. A low percentage of MSPE ratios greater than the actual ratios indicates that the data were unlikely to have been generated under the null of a stable VAR. Panel D reports the percentage of significance levels from VAR-generated data less than corresponding significance levels from the actual data. A low percentage of MSPE significance levels less than the actual ratios indicates that the data were unlikely to have been generated under the null of a stable VAR.
Table 6
Long-horizon statistics: comparison of actual values with empirical distribution assuming that the estimated VAR is the true model: US-Japan VAR, 1981:1-1996:10

Panel A: US-Japan slope coefficients
Horizons (months): 1, 3, 6, 12, 24, 36, 48, 60. Rows: Actual, 5%, 50%, 95% for the U.S. excess stock return and dividend yield; the Japan excess stock return and dividend yield; and the currency excess return and forward premium.
1.08 -1.34 4.51 17.16 14.01
4.29 -2.96 14.96 51.71 43.38
50.46 8.03 126.39 237.74 288.09
73.11 13.25 161.93 259.78 387.01
88.82 16.38 178.16 263.12 461.31
98.45 17.05 182.82 260.85 515.48
-3.74 -1.29 32.02 67.47 97.22 165.66 84.57 160.43
-2.61 -4.78 -4.60 4.32 38.08 73.70 101.87 122.50 29.04 89.31 174.19 326.50 558.05 700.09 774.63 810.13 92.84 266.73 475.29 761.04 1055.17 1183.15 1244.56 1273.63 -4.56 -12.83 -23.18 -37.65 -6.74 -18.79 -33.97 -56.76 -4.45 -12.47 -22.27 -35.43 -1.95 -5.43 -9.23 -13.07 -49.57 -83.07 -44.73 -10.80 -49.49 -94.49 -43.90 -3.99 -45.14 -40.27
-99.73 -101.63 -40.31 -37.32 2.15 6.57
Table 6 (continued)
Long-horizon statistics: comparison of actual values with empirical distribution assuming that the estimated VAR is the true model: US-Japan VAR, 1981:1-1996:10

Panel B: US-Japan implied R²'s

Horizon (months):       1      3      6     12     24     36     48     60
U.S. excess stock return
  Actual             0.02   0.03   0.04   0.06   0.07   0.07   0.07   0.07
  5%                 0.02   0.03   0.03   0.05   0.05   0.05   0.04   0.04
  50%                0.05   0.09   0.13   0.18   0.22   0.22   0.21   0.20
  95%                0.10   0.19   0.28   0.36   0.42   0.43   0.42   0.39
Japan excess stock return
  Actual             0.04   0.03   0.04   0.05   0.08   0.09   0.09   0.10
  5%                 0.03   0.03   0.04   0.05   0.05   0.05   0.04   0.04
  50%                0.07   0.09   0.13   0.18   0.21   0.21   0.20   0.18
  95%                0.14   0.18   0.27   0.35   0.41   0.41   0.40   0.39
Currency excess return
  Actual             0.07   0.15   0.21   0.24   0.21   0.17   0.14   0.11
  5%                 0.03   0.05   0.07   0.08   0.06   0.05   0.04   0.03
  50%                0.09   0.17   0.23   0.26   0.23   0.18   0.14   0.12
  95%                0.18   0.34   0.45   0.51   0.49   0.45   0.40   0.35
Panel C: US-Japan variance ratios

Horizon (months):       1      3      6     12     24     36     48     60
U.S. excess stock return
  Actual             1.00   1.05   1.06   1.07   1.04   0.98   0.91   0.85
  5%                 1.00   0.88   0.82   0.71   0.53   0.42   0.35   0.30
  50%                1.00   1.04   1.04   1.01   0.90   0.78   0.68   0.60
  95%                1.00   1.22   1.33   1.48   1.57   1.53   1.44   1.35
Japan excess stock return
  Actual             1.00   1.01   0.99   0.94   0.86   0.79   0.73   0.68
  5%                 1.00   0.86   0.79   0.68   0.52   0.44   0.38   0.34
  50%                1.00   1.01   0.98   0.92   0.81   0.71   0.64   0.58
  95%                1.00   1.18   1.22   1.26   1.30   1.27   1.21   1.15
Currency excess return
  Actual             1.00   1.19   1.40   1.73   2.17   2.41   2.54   2.60
  5%                 1.00   0.99   1.05   1.15   1.25   1.29   1.32   1.32
  50%                1.00   1.18   1.38   1.70   2.10   2.32   2.42   2.47
  95%                1.00   1.41   1.86   2.63   3.76   4.46   4.88   5.14
10,000 data sets of 190 observations were generated from VAR parameter estimates for the sample 1981-96. The VAR was estimated on each data set and the parameter estimates were used to calculate the implied long-horizon regression slope coefficients, R²s and variance ratios. The panels of the table report the actual value of the long-horizon statistic and the percentiles of the empirical distribution.
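Panel C's variance ratios have a familiar sample analogue: the variance of overlapping k-period returns divided by k times the variance of one-period returns, which equals one (in population) for serially uncorrelated returns, exceeds one under positive autocorrelation (as for the currency excess return above), and falls below one under mean reversion. A minimal sketch of that sample computation follows; it is illustrative only, since the table's ratios are implied by VAR parameter estimates rather than computed this way:

```python
def variance(xs):
    """Population variance of a list."""
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / len(xs)

def variance_ratio(returns, k):
    """Var of overlapping k-period sums over k times Var of
    one-period returns; ~1 for serially uncorrelated returns."""
    ksums = [sum(returns[t : t + k]) for t in range(len(returns) - k + 1)]
    return variance(ksums) / (k * variance(returns))
```

For example, a strongly alternating series (negative autocorrelation) drives the ratio toward zero, while a series with runs of same-signed returns pushes it above one.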
Figure 1: Time series of the dividend yields (top panel) and the forward premia (bottom panel), in annualized percentage terms.
Figure 2: Structural break statistics for the VARs. The horizontal lines show one-percent Monte Carlo critical values, as described in the statistical appendix.
List of other working papers:

1999

1. Yin-Wong Cheung, Menzie Chinn and Ian Marsh, How do UK-Based Foreign Exchange Dealers Think Their Market Operates?, WP99-21
2. Soosung Hwang, John Knight and Stephen Satchell, Forecasting Volatility using LINEX Loss Functions, WP99-20
3. Soosung Hwang and Steve Satchell, Improved Testing for the Efficiency of Asset Pricing Theories in Linear Factor Models, WP99-19
4. Soosung Hwang and Stephen Satchell, The Disappearance of Style in the US Equity Market, WP99-18
5. Soosung Hwang and Stephen Satchell, Modelling Emerging Market Risk Premia Using Higher Moments, WP99-17
6. Soosung Hwang and Stephen Satchell, Market Risk and the Concept of Fundamental Volatility: Measuring Volatility Across Asset and Derivative Markets and Testing for the Impact of Derivatives Markets on Financial Markets, WP99-16
7. Soosung Hwang, The Effects of Systematic Sampling and Temporal Aggregation on Discrete Time Long Memory Processes and their Finite Sample Properties, WP99-15
8. Ronald MacDonald and Ian Marsh, Currency Spillovers and Tri-Polarity: a Simultaneous Model of the US Dollar, German Mark and Japanese Yen, WP99-14
9. Robert Hillman, Forecasting Inflation with a Non-linear Output Gap Model, WP99-13
10. Robert Hillman and Mark Salmon, From Market Micro-structure to Macro Fundamentals: is there Predictability in the Dollar-Deutsche Mark Exchange Rate?, WP99-12
11. Renzo Avesani, Giampiero Gallo and Mark Salmon, On the Evolution of Credibility and Flexible Exchange Rate Target Zones, WP99-11
12. Paul Marriott and Mark Salmon, An Introduction to Differential Geometry in Econometrics, WP99-10
13. Mark Dixon, Anthony Ledford and Paul Marriott, Finite Sample Inference for Extreme Value Distributions, WP99-09
14. Ian Marsh and David Power, A Panel-Based Investigation into the Relationship Between Stock Prices and Dividends, WP99-08
15. Ian Marsh, An Analysis of the Performance of European Foreign Exchange Forecasters, WP99-07
16. Frank Critchley, Paul Marriott and Mark Salmon, An Elementary Account of Amari's Expected Geometry, WP99-06
17. Demos Tambakis and Anne-Sophie Van Royen, Bootstrap Predictability of Daily Exchange Rates in ARMA Models, WP99-05
18. Christopher Neely and Paul Weller, Technical Analysis and Central Bank Intervention, WP99-04
19. Christopher Neely and Paul Weller, Predictability in International Asset Returns: A Re-examination, WP99-03
20. Christopher Neely and Paul Weller, Intraday Technical Trading in the Foreign Exchange Market, WP99-02
21. Anthony Hall, Soosung Hwang and Stephen Satchell, Using Bayesian Variable Selection Methods to Choose Style Factors in Global Stock Return Models, WP99-01
1998

1. Soosung Hwang and Stephen Satchell, Implied Volatility Forecasting: A Comparison of Different Procedures Including Fractionally Integrated Models with Applications to UK Equity Options, WP98-05
2. Roy Batchelor and David Peel, Rationality Testing under Asymmetric Loss, WP98-04
3. Roy Batchelor, Forecasting T-Bill Yields: Accuracy versus Profitability, WP98-03
4. Adam Kurpiel and Thierry Roncalli, Option Hedging with Stochastic Volatility, WP98-02
5. Adam Kurpiel and Thierry Roncalli, Hopscotch Methods for Two State Financial Models, WP98-01
Physics Forums - View Single Post - Unique basis of relativistic field equations for arbitrary spin?
Hi Tom,
You're after a unified description of scalar, fermion and gauge fields… very ambitious. But don't forget the gravitational spin connection and frame.
Let [tex]A[/tex] be a 1-form gauge field valued in a Lie algebra, say spin(10) if you like GUTs, [tex]\omega[/tex] be the gravitational spin connection 1-form valued in spin(1,3), [tex]e[/tex] be the gravitational frame 1-form valued in the 4 vector representation space of spin(1,3), and [tex]\phi[/tex] be a scalar Higgs field valued in, say, the 10 vector representation space of spin(10). Then, avoiding Coleman-Mandula's assumptions by allowing e to be arbitrary, possibly zero, we can construct a unified connection valued in spin(11,3):
[tex]H = {\scriptsize \frac{1}{2}} \omega + \frac{1}{4} e \phi + A[/tex]
and compute its curvature 2-form as
[tex]F = d H + \frac{1}{2} [H,H] = \frac{1}{2}(R - \frac{1}{8}ee\phi\phi) + \frac{1}{4} (T \phi - e D \phi) + F_A [/tex]
in which [tex]R[/tex] is the Riemann curvature 2-form, [tex]T[/tex] is torsion, [tex]D \phi[/tex] is the gauge covariant 1-form derivative of the Higgs, and [tex]F_A[/tex] is the gauge 2-form curvature -- all the pieces we need for building a nice action as a perturbed [tex]BF[/tex] theory. To include a generation of fermions, let [tex]\Psi[/tex] be an anti-commuting (Grassmann) field valued in the positive real 64 spin representation space of spin(11,3), and consider the "superconnection":
[tex]A_S = H + \Psi[/tex]
The "supercurvature" of this,
[tex]F_S = d A_S + A_S A_S = F + D \Psi + \Psi \Psi[/tex]
includes the covariant Dirac derivative of the fermions in curved spacetime, including a nice interaction with the Higgs,
[tex]D \Psi = (d + \frac{1}{2} \omega + \frac{1}{4} e \phi + A) \Psi[/tex]
We can then build actions, including Dirac, as a perturbed [tex]B_S F_S[/tex] theory.
Once you see how all this works, the kicker is that this entire algebraic structure, including spin(11,3) + 64, fits inside the E8 Lie algebra.
Experiments for integration CLP and abduction
Results 1 - 10 of 11
, 2000
"... The goal of this paper is to extend classical logic with a generalized notion of inductive definition supporting positive and negative induction, to investigate the properties of this logic, its
relationships to other logics in the area of non-monotonic reasoning, logic programming and deductiv ..."
Cited by 58 (38 self)
The goal of this paper is to extend classical logic with a generalized notion of inductive definition supporting positive and negative induction, to investigate the properties of this logic, its
relationships to other logics in the area of non-monotonic reasoning, logic programming and deductive databases, and to show its application for knowledge representation by giving a typology of
definitional knowledge.
- University of Parma , 2004
"... Abstract. We describe a system implementing a novel extension of Fung and Kowalski’s IFF abductive proof procedure which we call CIFF, and its application to realise intelligent agents that can
construct (partial or complete) plans and react to changes in the environment. CIFF extends the original I ..."
Cited by 12 (5 self)
Abstract. We describe a system implementing a novel extension of Fung and Kowalski’s IFF abductive proof procedure which we call CIFF, and its application to realise intelligent agents that can
construct (partial or complete) plans and react to changes in the environment. CIFF extends the original IFF procedure in two ways: by dealing with constraint predicates and by dealing with
non-allowed abductive logic programs.
- Proceedings of the 7th International Conference on Logic for Programming and Automated Reasoning, volume 1955 of Lecture Notes in Artificial Intelligence, 2000
"... . Many logic programming based approaches can be used to describe and solve combinatorial search problems. On the one hand there is constraint logic programming which computes a solution as an
answer substitution to a query containing the variables of the constraint satisfaction problem. On the ..."
Cited by 9 (2 self)
Many logic programming based approaches can be used to describe and solve combinatorial search problems. On the one hand there is constraint logic programming which computes a solution as an answer substitution to a query containing the variables of the constraint satisfaction problem. On the other hand there are systems based on stable model semantics, abductive systems, and first order logic model generators which compute solutions as models of some theory. This paper compares these different approaches from the point of view of knowledge representation (how declarative are the programs) and from the point of view of performance (how good are they at solving typical problems). 1 Introduction. Consistency techniques are widely used for solving finite domain constraint satisfaction problems (CSP) [19]. These techniques have been integrated in logic programming, resulting in finite domain constraint logic programming (CLP) [20]. In this paradigm, a program typically creates a
- 8th International Workshop on Non-Monotonic Reasoning, Special Session on Abduction , 2000
"... Many logic programming based approaches can be used to describe and solve combinatorial search problems. On the one hand there are definite programs and constraint logic programs that compute a
solution as an answer substitution to a query containing the variables of the constraint satisfaction prob ..."
Cited by 7 (3 self)
Many logic programming based approaches can be used to describe and solve combinatorial search problems. On the one hand there are definite programs and constraint logic programs that compute a
solution as an answer substitution to a query containing the variables of the constraint satisfaction problem. On the other hand there are approaches based on stable model semantics, abduction, and
first-order logic model generation that compute solutions as models of some theory. This paper compares these different approaches from the point of view of knowledge representation (how declarative are the programs) and from the point of view of performance (how good are they at solving typical problems).
- 8TH INT. WORKSHOP ON NON-MONOTONIC REASONING (NMR2000), SESSION ON ABDUCTION , 2000
"... The goal of the LP+ project at the K.U.Leuven is to design an expressive logic, suitable for declarative knowledge representation, and to develop intelligent systems based on Logic Programming
technology for solving computational problems using the declarative specifications. The ID-logic is an integr ..."
Cited by 2 (2 self)
The goal of the LP+ project at the K.U.Leuven is to design an expressive logic, suitable for declarative knowledge representation, and to develop intelligent systems based on Logic Programming
technology for solving computational problems using the declarative specifications. The ID-logic is an integration of typed classical logic and a definition logic. Different abductive solvers for this
language are being developed. This paper is a report of the integration of high order aggregates into ID-logic and the consequences on the solver SLDNFA.
- Proceedings of the Fourth International Workshop on Computational Semantics , 2001
"... Texts in natural language contain a lot of temporal information, both explicit and implicit. Verbs and temporal adjuncts carry most of the explicit information, but for a full understanding
general world knowledge and default assumptions have to be taken into account. We will present a theory fo ..."
Cited by 1 (1 self)
Texts in natural language contain a lot of temporal information, both explicit and implicit. Verbs and temporal adjuncts carry most of the explicit information, but for a full understanding general
world knowledge and default assumptions have to be taken into account. We will present a theory for describing the relation between, on the one hand, verbs, their tenses and adjuncts and, on the
other, the eventualities and periods of time they represent and their relative temporal locations, while allowing interaction with general world knowledge.
, 2000
"... A logic programming paradigm which expresses solutions to problems as stable models has recently been promoted as a declarative approach to solving various combinatorial and search problems,
including planning problems. In this paradigm, all program rules are considered as constraints and solut ..."
Cited by 1 (0 self)
A logic programming paradigm which expresses solutions to problems as stable models has recently been promoted as a declarative approach to solving various combinatorial and search problems,
including planning problems. In this paradigm, all program rules are considered as constraints and solutions are stable models of the rule set. This is a rather radical departure from the standard
paradigm of logic programming. In this paper we revisit abductive logic programming and argue that it allows a programming style which is as declarative as programming based on stable models.
However, within abductive logic programming, one has two kinds of rules. On the one hand predicate definitions (which may depend on the abducibles) which are nothing else than standard logic programs
(with their nonmonotonic semantics when containing negation); on the other hand rules which constrain the models for the abducibles. In this sense abductive logic programming is a smooth
extension of the standard paradigm of logic programming, not a radical departure. keywords: planning, abduction, non-monotonic reasoning.
"... Besides temporal information explicitly available in verbs and adjuncts, the temporal interpretation of a text also depends on general world knowledge and default assumptions. We will present a
theory for describing the relation between, on the one hand, verbs, their tenses and adjuncts and, on the ..."
Besides temporal information explicitly available in verbs and adjuncts, the temporal interpretation of a text also depends on general world knowledge and default assumptions. We will present a
theory for describing the relation between, on the one hand, verbs, their tenses and adjuncts and, on the other, the eventualities and periods of time they represent and their relative temporal
locations. The theory is formulated in logic and is a practical implementation of the concepts described in Ness Schelkens et al. (this volume). We will show how an abductive resolution procedure can
be used on this representation to extract temporal information from texts. 1 Introduction This article presents some work conducted in the framework of Linguaduct, a project on the temporal
interpretation of Dutch texts by means of abductive reasoning. A natural language text contains a lot of both explicit and implicit temporal information, mostly in verbs and adjuncts. The purpose of
the theory presente...
"... The SLDNFA-system results from the LP+ project at the K.U.Leuven, which investigates logics and proof procedures for these logics for declarative knowledge representation. Within this project
inductive definition logic (ID-logic) is used as representation logic. Different solvers are being developed fo ..."
The SLDNFA-system results from the LP+ project at the K.U.Leuven, which investigates logics and proof procedures for these logics for declarative knowledge representation. Within this project
inductive definition logic (ID-logic) is used as representation logic. Different solvers are being developed for this logic and one of these is SLDNFA. A prototype of the system is available and used
for investigating how to solve efficiently problems represented in ID-logic. General Information The LP+ project at the K.U.Leuven aims at developing and investigating logics suitable for declarative
knowledge representation. To be able to represent problem domains in a declarative way, the logic must be capable of expressing the knowledge of the expert in a natural and graceful way. Therefore a
suited logic has to deal with two major types of knowledge: definitional and assertional knowledge (Denecker 1995). This view is incorporated in ID-logic, a conservative extension of classical logic
with a ge...
- IN: PROC. INTERNATIONAL CONFERENCE ON LOGIC PROGRAMMING , 2001
"... We extend a theorem by Francois Fages about the relationship between the completion semantics and the answer set semantics of logic programs to a class of programs with nested expressions
permitted in the bodies of rules. Fages' theorem is important from the perspective of answer set programming ..."
We extend a theorem by Francois Fages about the relationship between the completion semantics and the answer set semantics of logic programs to a class of programs with nested expressions permitted
in the bodies of rules. Fages' theorem is important from the perspective of answer set programming: whenever the two semantics are equivalent, answer sets can be computed by propositional solvers,
such as sato, instead of answer set solvers, such as smodels. The need to extend Fages' theorem to programs with nested expressions is related to the use of choice rules in the input language of | {"url":"http://citeseerx.ist.psu.edu/showciting?cid=566517","timestamp":"2014-04-21T12:11:00Z","content_type":null,"content_length":"37954","record_id":"<urn:uuid:f67b39b3-2bc5-4595-ab60-f254da093da5>","cc-path":"CC-MAIN-2014-15/segments/1397609539776.45/warc/CC-MAIN-20140416005219-00045-ip-10-147-4-33.ec2.internal.warc.gz"} |
[SOLVED] Linear Programming problem
March 19th 2008, 08:50 AM #1
[SOLVED] Linear Programming problem
Hi there,
I was wondering what the solution to this problem is. I play basketball with my friends. Sometimes we are 16 people. We do teams of 5 people. What is the best efficient way to proceed so everyone
gets the maximum amount of playing time? 3 teams of 5 and then how do we juggle the last person?
Hi there,
I was wondering what the solution to this problem is. I play basketball with my friends. Sometimes we are 16 people. We do teams of 5 people. What is the best efficient way to proceed so everyone
gets the maximum amount of playing time? 3 teams of 5 and then how do we juggle the last person?
Not a linear programming problem.
March 20th 2008, 12:26 AM #2
Grand Panjandrum
Nov 2005 | {"url":"http://mathhelpforum.com/advanced-math-topics/31432-solved-linear-programming-problem.html","timestamp":"2014-04-17T07:18:49Z","content_type":null,"content_length":"32594","record_id":"<urn:uuid:d10d70ea-62f9-485d-9065-f05fcc21015e>","cc-path":"CC-MAIN-2014-15/segments/1397609526311.33/warc/CC-MAIN-20140416005206-00170-ip-10-147-4-33.ec2.internal.warc.gz"} |
El Cerrito ACT Tutor
Find an El Cerrito ACT Tutor
...Because English spelling is far less regular than, for example, Spanish, students of English need to not only learn spelling rules, but also need to spend time memorizing and practicing
vocabulary words. Reviewing and learning 5, 10, or 20 words a week, when combined with reading a few minutes e...
42 Subjects: including ACT Math, English, reading, writing
...I am very effective in helping students to not just get a better grade, but to really understand the subject matter and the reasons why things work the way they do. I do this in a way that is
positive, supportive, and also fun. I explain difficult math and science concepts in simple English, and continue working with the students until they understand the concepts really well.
14 Subjects: including ACT Math, calculus, statistics, geometry
...To accept these funds, I was required to administer standardized tests to the students before and after tutoring. Those students who completed a 12-19 hour course of tutoring universally
improved their scores, most by more than an entire grade level! That's a whole grade level of improvement over the course of 6-10 weeks!
29 Subjects: including ACT Math, English, reading, writing
...My goal for you isn't to just memorize formulas and regurgitate facts, but to fall in love with the subject the same way I did. I'll bring excitement to the subject that previously made you
fall asleep in class. Hopefully, I'll hear from you soon and we can start tackling those challenging problems together!
19 Subjects: including ACT Math, physics, calculus, writing
...Specifically, I can tutor in all math up to calculus (pre-algebra, algebra, geometry, pre-calculus, trigonometry, calculus, etc.) and science (physics and chemistry, including science
projects). Whatever I am tutoring, I use positive reinforcement to encourage students and let them know when the...
14 Subjects: including ACT Math, chemistry, calculus, physics | {"url":"http://www.purplemath.com/El_Cerrito_ACT_tutors.php","timestamp":"2014-04-17T21:55:56Z","content_type":null,"content_length":"23766","record_id":"<urn:uuid:4f09fc0f-7b12-4889-a634-0a307d7b30d8>","cc-path":"CC-MAIN-2014-15/segments/1397609532128.44/warc/CC-MAIN-20140416005212-00577-ip-10-147-4-33.ec2.internal.warc.gz"} |
Is there a simple proof for the volume of a cone?
- I think I know the answer!
- That was fast!
- It is not so difficult when you think a bit.
- I am all ears.
- I filled a cone with rice and emptied it into a cylinder. They had the same size circle at the base.
- Did it fill the cylinder?
- It might have, but the cylinder and cone had the same height.
- So?
- I filled the cylinder with exactly 3 cones full of rice.
- So, what does that prove?
- That the volume of a cone is a third of the volume of a cylinder with same radius and height.
- You mean to say that the volume of a cone is \(\frac{\pi r^2 h }{3}\)?
- Exactly.
- Impressive! But I don’t think it counts as a mathematical proof.
- Why not? Are you not convinced?
- Maybe it is not a third, maybe it is 1/2.99!
- That would be ugly!
- So what?
- I live in a beautiful universe.
- That is a strange image! Who made it?
- It is called The Persistence Of Memory by Salvador Dali.
- If you didn’t like my first proof, here is a better one.
- The first was not a proof. But it did suggest a hypothesis.
- What is a hypothesis?
- Something you believe is true, but you haven’t found a proof yet.
- If you find a proof what is it called then?
- A theorem.
- Like the Pythagorean Theorem?
- Exactly. For that one we have a proof.
- OK. Are you ready for my proof number 2?
- Shoot!
- The area of a triangle is \(\frac{base * height}{2}\).
- I totally agree.
- The triangle and the rectangle live in 2 dimensions.
- OK.
- The cone and the cylinder live in 3 dimensions.
- OK.
- The area of the triangle = 1/2 * area of rectangle.
- So?
- So?! Isn’t it obvious? The volume of the cone = 1/3 * the volume of the cylinder.
- Me oh my! You are right. I’ve never thought of that!
- I told you I had a simple proof.
- But, on second thought. Is this really a proof? You are using an analogy that might be right, but it could also be wrong.
- What is an analogy?
- Reasoning by analogy is to use that if two or more things agree with one another in some respects they will probably agree in others.
- But the result could be right?
- Yes, of course, but it could also be wrong.
- Give me an example!
- Listen to this.
Rats are like people in many ways: They have very similar systems of enzymes and hormones, they adapt well to a wide variety of environments, they are omnivores, etc. People carry umbrellas. So, rats
carry umbrellas, too. – Source
- But that is nonsense. Rats don’t carry umbrellas!
- Are you sure?
- That is wrong! Mickey Mouse is not a rat.
- We better proceed.
- With what?
- Finding a simple proof for the volume of a cone.
- Oh that. I thought I had given you two proofs already.
- I know.
- So, what is there to do?
- Let’s try to be modest. Whats the volume of these three cylinders inside the cone?
- You mean rectangles, not cylinders!?
- No, they are cylinders. They are seen from the side. That’s why they look like rectangles.
- OK. Now I see.
- So, what is their volume.
- I have no idea. I don’t know how tall they are. Neither do I know their radius.
- OK. That makes sense.
- Thank you.
- Let’s say their height is a quarter of the height of the cone.
- OK.
- The radius question is a bit more interesting.
- You mean more difficult?
- Look at this drawing.
- Look at the triangle in the bottom left corner.
- You mean the one marked h/4 and r/4?
- That’s the one.
- I understand that its height is a quarter of the height of the cone, h/4, but why is its base r/4?
- The ratio between h and r has to be the same as the ratio between the height and base of the triangle.
- And since h/4 : r/4 = h : r everything is in harmony.
- You said it!
- So the radius of the bottom cylinder is \(r-\frac{r}{4}=\frac{3r}{4}\)?
- Correct.
- And the next cylinder has radius \(\frac{r}{4}\) less than that, or \(\frac{2r}{4}\).
- You’re a genius!
- Then I know what the total volume of the three cylinders are!
- Let’s hear it.
- \(\pi (\frac{3r}{4})^2 \frac{h}{4} + \pi (\frac{2r}{4})^2 \frac{h}{4}+ \pi (\frac{r}{4})^2 \frac{h}{4}\)
- Let’s make it simpler.
- Be my guest!
- What about this?
- \(\pi\frac{h}{4}(\frac{9r^2}{16}+\frac{4r^2}{16}+\frac{r^2}{16})\)
- Good! Can you make it even simpler?
- \(\pi r^2 h\frac{9+4+1}{64}\)
- So the three cylinders fill what fraction of the full cylinder \(\pi r^2 h\)?
- \(\frac{9+4+1}{64}\).
- But that is not a third!
- The idea is to fill the cone completely with cylinders.
- Is that possible?
- That’s a good question! We will know soon.
- How many cylinders do we need?
- Let’s start with n.
- How much is n?
- n is n. Or, to put it differently, n is any number.
- I can dig that. I think…
- The fraction then becomes \(\frac{n^2 + … + 1^2}{(n+1)^3}\).
- It does?!
- In our example we had n = 3 and \(\frac{3^2 + … + 1^2}{4^3} = 0.21875\).
- So, if the patterns holds, we get what you just said. I will check the details later, but it seems reasonable.
- Great.
- But how much is that?
- Let’s ask WolframAlpha.
- Ask who?
- There is a web site called WolframAlpha that can help us.
- How much does it cost to use the site?
- It is free.
- That I like!
- Let’s type (n^2+…+1^2)/(n+1)^3 into the box on the site.
- Look! There is 0.21875 for n=3. Amazing!
- I agree!
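- By the way, we could also check the same numbers ourselves with a few lines of Python. (The function name is just one I made up.)

```python
# Total volume of n stacked cylinders inside the cone, written as a
# fraction of the circumscribing cylinder's volume pi * r^2 * h.
def fraction(n):
    return sum(k * k for k in range(1, n + 1)) / (n + 1) ** 3

print(fraction(3))     # 0.21875, the same number as before
print(fraction(1000))  # already quite close to 1/3
```

- Nice, it prints the same 0.21875!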
- But how do we fill the cone completely with cylinders?
- Let’s try to let n go to infinity.
- What do you mean?
- Let’s see what happens as n grows and grows.
- Can WolframAlpha help us with that too?
- Sure can. We just ask: lim n-> infinity (n^2+…+1^2)/(n+1)^3.
- Can it hear us too?!
- No. What I meant was, “let’s type it into the box” and press Enter.
- It did it! The volume of the cone is a third of the volume of the cylinder. Are you convinced now?
- Actually I am not.
- Maybe after all maths is not for you. Have you tried golf?
- How do we know that the cylinders fill the cone completely?
- Are you blind?! Didn’t you see the third? WolframAlpha got the correct formula. End of discussion.
- Yes, I agree. It found the correct formula, but how do we know that it is the correct one?
- You mean, if we didn’t know already that it was the correct formula?
- Exactly. Like if we were the first ones to find it.
- I see your point.
- You know what, let’s take a break. Maybe you’ll find a way when you wash the dishes.
- Is it related to water?
- I don’t think so, but often when I have a problem I see the solution when I do quite different things.
- How come?
- I don’t know, but I believe that my brain works on the problem unconsciously and suddenly the solution pops up.
- Really?
- Henri Poincaré has described a similar experience he had.
- Who is he?
- He was a famous mathematician. Look him up in Wikipedia.
“Just at this time I left Caen, where I was then living, to go on a geologic excursion under the auspices of the school of mines. The changes of travel made me forget my mathematical work. Having
reached Coutances, we entered an omnibus to go some place or other. At the moment when I put my foot on the step the idea came to me, without anything in my former thoughts seeming to have paved the
way for it, that the transformations I had used to define the Fuchsian functions were identical with those of non-Euclidean geometry. I did not verify the idea; I should not have had time, as upon
taking my seat in the omnibus, I went on with a conversation already commenced, but I felt a perfect certainty. On my return to Caen, for conscience’ sake, I verified the result at my leisure.”
- What are Fuchsian functions?
- I have no idea, but that is not important. What is important is that the mind works in mysterious ways.
(To be continued in Milking the solution) | {"url":"http://www.twowayacademy.com/is-there-a-simple-proof-for-the-volume-of-a-cone/","timestamp":"2014-04-17T16:02:06Z","content_type":null,"content_length":"28030","record_id":"<urn:uuid:3b0094f8-696a-4a14-b90f-a3cce10c2b28>","cc-path":"CC-MAIN-2014-15/segments/1398223207046.13/warc/CC-MAIN-20140423032007-00022-ip-10-147-4-33.ec2.internal.warc.gz"} |
Formula Auditing
Trace Precedents | Remove Arrows | Trace Dependents | Show Formulas | Error Checking | Evaluate Formula
Formula auditing in Excel allows you to display the relationship between formulas and cells. The example below helps you master Formula Auditing quickly and easily.
Trace Precedents
You have to pay $96.00. To show arrows that indicate which cells are used to calculate this value, execute the following steps.
1. Select cell C13.
2. On the Formulas tab, in the Formula Auditing group, click Trace Precedents.
As expected, Total cost and Group size are used to calculate the Cost per person.
3. Click Trace Precedents again.
As expected, the different costs are used to calculate the Total cost.
Remove Arrows
To remove the arrows, execute the following steps.
1. On the Formulas tab, in the Formula Auditing group, click Remove Arrows.
Trace Dependents
To show arrows that indicate which cells depend on a selected cell, execute the following steps.
1. Select cell C12.
2. On the Formulas tab, in the Formula Auditing group, click Trace Dependents.
As expected, the Cost per person depends on the Group size.
Show Formulas
By default, Excel shows the results of formulas. To show the formulas instead of their results, execute the following steps.
1. On the Formulas tab, in the Formula Auditing group, click Show Formulas.
Note: instead of clicking Show Formulas, you can also press CTRL + (`). You can find this key above the tab key.
Error Checking
To check for common errors that occur in formulas, execute the following steps.
1. Enter the value 0 into cell C12.
2. On the Formulas tab, in the Formula Auditing group, click Error Checking.
Result. Excel finds an error in cell C13. The formula tries to divide a number by 0.
Evaluate Formula
To debug a formula by evaluating each part of the formula individually, execute the following steps.
1. Select cell C13.
2. On the Formulas tab, in the Formula Auditing group, click Evaluate Formula.
3. Click Evaluate four times.
Excel shows the formula result.
The result is an array of type integer. If kind is present, the kind parameter of the result is that specified by kind; otherwise, the kind parameter of the result is that of default integer. If the
processor cannot represent the result value in the kind of the result, the result is undefined.
The following rules apply if dim is omitted:
● The array result has rank one and a size equal to the rank of array.
● If MINLOC(array) is specified, the elements in the array result form the subscript of the location of the element with the minimum value in array.
The ith subscript returned lies in the range 1 to e[i], where e[i] is the extent of the ith dimension of array.
● If MINLOC(array, MASK= mask) is specified, the elements in the array result form the subscript of the location of the element with the minimum value corresponding to the condition specified by mask.
The following rules apply if dim is specified:
● The array result has a rank that is one less than array, and shape (d[1], d[2],...d[dim-1], d[dim+1],...d[n]), where (d[1], d[2],...d[n]) is the shape of array.
● If array has rank one, MINLOC( array, dim[, mask]) has a value equal to that of MINLOC( array[,MASK = mask]). Otherwise, the value of element (s[1], s[2],...s[dim-1], s[dim+1],...s[n]) of MINLOC(
array, dim[, mask]) is equal to MINLOC( array(s[1], s[2],...s[dim-1], :, s[dim+1],...s[n]) [, MASK = mask(s[1], s[2],...s[dim-1], :, s[dim+1],...s[n])]).
If more than one element has minimum value, the element whose subscripts are returned is the first such element, taken in array element order. If array has size zero, or every element of mask has the
value .FALSE., the value of the result is controlled by compiler option assume [no]old_maxminloc, which can set the result to either 1 or 0.
The setting of compiler options specifying integer size can affect this function.
The value of MINLOC ((/3, 1, 4, 1/)) is (2), which is the subscript of the location of the first occurrence of the minimum value in the rank-one array.
A is the array
[ 4 0 -3 2 ]
[ 3 1 -2 6 ]
[ -1 -4 5 -5 ].
MINLOC (A, MASK=A .GT. -5) has the value (3, 2) because these are the subscripts of the location of the minimum value (-4) that is greater than -5.
MINLOC (A, DIM=1) has the value (3, 3, 1, 3). 3 is the subscript of the location of the minimum value (-1) in column 1; 3 is the subscript of the location of the minimum value (-4) in column 2; and
so forth.
MINLOC (A, DIM=2) has the value (3, 3, 4). 3 is the subscript of the location of the minimum value (-3) in row 1; 3 is the subscript of the location of the minimum value (-2) in row 2; and so forth.
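For readers more comfortable outside Fortran, the examples above can be mirrored in plain Python. This is a sketch with helper names of our own, not part of any Fortran or Python API; subscripts are 1-based and, on ties, the first occurrence found in our scan order wins.

```python
# A rough Python mirror of the MINLOC examples above.

A = [[ 4,  0, -3,  2],
     [ 3,  1, -2,  6],
     [-1, -4,  5, -5]]

def minloc_dim1(a):
    # For each column: the 1-based row index of that column's minimum.
    rows, cols = len(a), len(a[0])
    return [min(range(rows), key=lambda r: a[r][c]) + 1 for c in range(cols)]

def minloc_dim2(a):
    # For each row: the 1-based column index of that row's minimum.
    return [min(range(len(row)), key=lambda c: row[c]) + 1 for row in a]

def minloc_masked(a, keep):
    # 1-based (row, col) of the smallest element satisfying the mask.
    best = None
    for r, row in enumerate(a):
        for c, v in enumerate(row):
            if keep(v) and (best is None or v < best[0]):
                best = (v, (r + 1, c + 1))
    return best[1]

print(minloc_dim1(A))                      # [3, 3, 1, 3]
print(minloc_dim2(A))                      # [3, 3, 4]
print(minloc_masked(A, lambda v: v > -5))  # (3, 2)
```

The three printed results match MINLOC(A, DIM=1), MINLOC(A, DIM=2), and MINLOC(A, MASK=A .GT. -5) from the text.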
The following shows another example:
INTEGER i, minl(1)
INTEGER array(2, 3)
INTEGER, ALLOCATABLE :: AR1(:)
! put values in array
array = RESHAPE((/-7, 1, -2, -9, 5, 0/),(/2, 3/))
! array is -7 -2 5
! 1 -9 0
i = SIZE(SHAPE(array)) ! Get the number of dimensions
! in array
ALLOCATE (AR1 (i) ) ! Allocate AR1 to number
! of dimensions in array
AR1 = MINLOC (array, MASK = array .GT. -5) ! Get the
! location (subscripts) of
! smallest element greater
! than -5 in array
! MASK = array .GT. -5 creates a mask array the same
! size and shape as array whose elements are .TRUE. if
! the corresponding element in array is greater than
! -5, and .FALSE. if it is not. This mask causes MINLOC
! to return the index of the element in array with the
! smallest value greater than -5.
!array is -7 -2 5 and MASK= array .GT. -5 is F T T
! 1 -9 0 T F T
! and AR1 = MINLOC(array, MASK = array .GT. -5) returns
! (1, 2), the location of the element with value -2
minl = MINLOC((/-7,2,-7,5/)) ! returns 1, first
! occurrence of minimum | {"url":"http://scc.qibebt.cas.cn/docs/compiler/intel/11.1/Intel%20Fortran%20Compiler%20for%20Linux/main_for/lref_for/source_files/rfminloc.htm","timestamp":"2014-04-17T16:50:48Z","content_type":null,"content_length":"11647","record_id":"<urn:uuid:747d45fe-6571-4b36-87bd-1fabcb34cf8d>","cc-path":"CC-MAIN-2014-15/segments/1397609530136.5/warc/CC-MAIN-20140416005210-00483-ip-10-147-4-33.ec2.internal.warc.gz"} |
Phase transition thresholds for some Friedman-style independence results
"... Abstract The article starts with a brief survey of Unprovability Theory as of autumn 2006. Then, as an illustration of the subject's model-theoretic methods, we re-prove exact versions of
unprovability results for the Paris-Harrington Principle and the Kanamori-McAloon Principle using indiscernibles. ..."
Cited by 1 (0 self)
Add to MetaCart
Abstract The article starts with a brief survey of Unprovability Theory as of autumn 2006. Then, as an illustration of the subject's model-theoretic methods, we re-prove exact versions of
unprovability results for the Paris-Harrington Principle and the KanamoriMcAloon Principle using indiscernibles. In addition, we obtain a short accessible proof of unprovability of the
Paris-Harrington Principle. The proof employs old ideas but uses only one colouring and directly extracts the set of indiscernibles from its homogeneous set. We also present modified, abridged
statements whose unprovability proofs are especially simple. These proofs were tailored for teaching purposes. The article is intended to be accessible to the widest possible audience of
mathematicians, philosophers and computer scientists as a brief survey of the subject, a guide through the literature in the field, an introduction to its model-theoretic techniques and, finally, a
model-theoretic proof of a modern theorem in the subject. However, some understanding of logic is assumed on the part of the readers. The intended audience of this paper consists of logicians,
logic-aware mathematicians and thinkers of other backgrounds who are interested in unprovable mathematical statements.
, 2011
"... Finding the phase transition for Friedman’s long finite sequences ..."
"... Using standard methods of analytic combinatorics we elaborate critical points (thresholds) of phase transitions from provability to unprovability of arithmetical well-partial-ordering assertions
in several familiar theories occurring in the reverse mathematics program. ..."
Add to MetaCart
Using standard methods of analytic combinatorics we elaborate critical points (thresholds) of phase transitions from provability to unprovability of arithmetical well-partial-ordering assertions in
several familiar theories occurring in the reverse mathematics program. | {"url":"http://citeseerx.ist.psu.edu/showciting?cid=955230","timestamp":"2014-04-20T17:43:03Z","content_type":null,"content_length":"16158","record_id":"<urn:uuid:1fd84ece-a811-40ed-84ab-63a5d4fc28d6>","cc-path":"CC-MAIN-2014-15/segments/1397609538824.34/warc/CC-MAIN-20140416005218-00493-ip-10-147-4-33.ec2.internal.warc.gz"} |
Galois Theory Help
April 27th 2009, 09:06 AM #1
Apr 2009
Galois Theory Help
Hi, I have an examination for Galois theory coming up shortly, and realised there's a question in one of our past papers I don't know how to answer, it reads as follows:
Let $K$ be a subfield of $\mathbb{C}$ and let $f \in K[t]$ be an irreducible polynomial. Prove that $f$ has no repeated roots in $\mathbb{C}$.
It's worth a fair few marks so I expect the answer's fairly long, but if anyone can provide any ideas how to deal with that I'd be very greatful.
Thanks in advance for your help.
Hi, I have an examination for Galois theory coming up shortly, and realised there's a question in one of our past papers I don't know how to answer, it reads as follows:
Let $K$ be a subfield of $\mathbb{C}$ and let $f \in K[t]$ be an irreducible polynomial. Prove that $f$ has no repeated roots in $\mathbb{C}$.
It's worth a fair few marks so I expect the answer's fairly long, but if anyone can provide any ideas how to deal with that I'd be very greatful.
Thanks in advance for your help.
if $a \in \mathbb{C}$ is a repeated root of $f,$ then $f(a)=f'(a)=0.$ since $f$ is irreducible over $K,$ we have $\gcd(f,f')=1,$ i.e. $u(t)f(t)+v(t)f'(t)=1,$ for some $u,v \in K[t] \subseteq \
mathbb{C}[t].$ let $t=a$ to get $0=1. \ \Box$
note that since $\text{char} \ \mathbb{C} = 0,$ the derivative of a non-constant polynomial is not identically 0.
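as a concrete sanity check of the key step ($\gcd(f,f')=1$ for irreducible $f$), here is a small Python sketch computing the polynomial gcd over $\mathbb{Q}$ for the irreducible example $f = t^3 - 2$. the example and helper names are mine, not part of the proof:

```python
from fractions import Fraction

def deriv(p):
    # p = coefficient list, lowest degree first; returns p'
    return [Fraction(i) * c for i, c in enumerate(p)][1:]

def polymod(a, b):
    # Remainder of a divided by b, over the rationals.
    a = a[:]
    while len(a) >= len(b) and any(a):
        q = a[-1] / b[-1]
        for i in range(len(b)):
            a[len(a) - len(b) + i] -= q * b[i]
        while a and a[-1] == 0:
            a.pop()
    return a

def polygcd(a, b):
    # Euclidean algorithm; the result is normalized to be monic.
    while b:
        a, b = b, polymod(a, b)
    return [c / a[-1] for c in a]

f = [Fraction(c) for c in (-2, 0, 0, 1)]   # f = t^3 - 2, irreducible over Q
print(polygcd(f, deriv(f)))                # [Fraction(1, 1)], i.e. gcd = 1
```

so the gcd really is 1, which is exactly what forces $0 = 1$ at a hypothetical repeated root.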
April 28th 2009, 04:16 AM #2
MHF Contributor
May 2008 | {"url":"http://mathhelpforum.com/advanced-algebra/85996-galois-theory-help.html","timestamp":"2014-04-18T12:32:19Z","content_type":null,"content_length":"38110","record_id":"<urn:uuid:3a2d78cb-680a-493c-9e0c-f37925e79c9e>","cc-path":"CC-MAIN-2014-15/segments/1397609533308.11/warc/CC-MAIN-20140416005213-00589-ip-10-147-4-33.ec2.internal.warc.gz"} |
sin, sinh (MATLAB Function Reference)
Y = sin(X)
Y = sinh(X)
The sin and sinh commands operate element-wise on arrays. The functions' domains and ranges include complex values. All angles are in radians. Y = sin(X) returns the circular sine of the elements of
X. Y = sinh(X) returns the hyperbolic sine of the elements of X.
Graph the sine function over the domain -pi <= x <= pi:
x = -pi:0.01:pi; plot(x,sin(x))
Graph the hyperbolic sine function over the domain -5 <= x <= 5:
x = -5:0.01:5; plot(x,sinh(x))
The expression sin(pi) is not exactly zero, but rather a value the size of the floating-point accuracy eps, because pi is only a floating-point approximation to the exact value of π.
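The same behavior shows up in any IEEE double-precision environment, not just MATLAB. For instance, a quick Python check (our example, not from the MATLAB documentation):

```python
import math

# math.pi is only the nearest double to the true value of pi, so the sine
# of it is a tiny nonzero residual on the order of machine epsilon, not 0.
print(math.sin(math.pi))          # about 1.2e-16, not exactly 0
print(math.sin(math.pi) == 0.0)   # False
```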
See Also
asin, asinh
Spy vs Spy
Card Games Home Page | Other Invented Games
Spy vs Spy
Contributed by Jeff Dorak (Jeff_d52000@yahoo.com), who writes:
"I invented a simple and fun single-player / multiplayer / tournament card game called Spy, using a normal deck of cards, with an unlimited number of players. The game is fast-paced and rewards quick thinking and smart planning. A game usually lasts from 15 to 90 minutes. My goal in this creation is to get this game known to everyone everywhere. The reason I chose the name "SPY" is because its game play is like war: you are in a race, using mind and speed to outdo the other spy, which gives it the name "SPY vs. SPY". (SPY is short for SPY vs. SPY.)"
Basic Multiplayer: It's pretty simple: the object of the game is to reach 0 points or negative points. You can begin either by dealing the deck in half (half the deck for you, half for the other player; only for a 2-player game) or with all players using their own full deck of cards. (If you want a really short game use half a deck, but it's more fun with a full deck.) Which you choose determines the amount of bonus points you win and lose, as well as the strategy.
Goal: Your goal is to lose points. The maximum winning bonus you can get with a full deck of cards is -20 points. If you're playing with 2 players, both using half a deck, then it's -10 points.
Start: The game begins when all players shuffle and deal out 5 cards from their deck and set them aside to begin their points pile: these are your beginning 5 points (cards). Remember that you are not supposed to look at these cards at any time, but you can count how many cards you have there.
Remember reaching 0 or a negative number means you win the game.
How to Race: After dealing out 5 cards face down, pick up the remaining deck: this is your racing hand (you cannot look at your racing hand). Now deal out 4 cards face up from the racing hand (I call these the 4 piles). Spread them out in a square. From there, you and the other players must start together, like a race. When you start, you immediately flip the top card of your racing hand and place it on one of the four face-up cards of your square. You must either match the rank of the card you place it on, or place it on top of a card whose rank is one above or below it - e.g. 6 goes on 5, 5 goes on 5, 5 goes on 6, a 7 will not go on 5, a queen goes on a king, a king goes on a queen, an ace goes on a king, a king goes on an ace, a 2 goes on an ace, etc. The ranking order wraps around: -ace-2-3-4- to -10-jack-queen-king-ace-2- etc. (there are no jokers). Just keep flipping cards from the racing hand, adding each card to the 4 piles. Very straightforward.
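Since the placement rule is the heart of the race, here is a small Python sketch (my own illustration, not part of Jeff's rules) of the "same rank, or one step above or below, with ace and king adjacent" check:

```python
# Ranks in cycle order: ace-2-...-10-jack-queen-king-ace-... (no jokers).
RANKS = ['A', '2', '3', '4', '5', '6', '7', '8', '9', '10', 'J', 'Q', 'K']

def can_place(card, pile_top):
    """A card may be placed on a pile whose top card has the same rank,
    or a rank one step above or below it, with king and ace adjacent."""
    i, j = RANKS.index(card), RANKS.index(pile_top)
    diff = abs(i - j)
    return diff <= 1 or diff == len(RANKS) - 1  # wrap-around: A next to K

print(can_place('6', '5'))   # True
print(can_place('5', '5'))   # True
print(can_place('7', '5'))   # False
print(can_place('A', 'K'))   # True (wrap-around)
print(can_place('2', 'A'))   # True
```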
The Obstacles: Now you know the basics. In the case that you come across a card you cannot put down, you have to start a new pile, meaning that you can now play on 5 piles of cards. If it happens again, you have to make a 6th pile of cards (6 is the maximum number of piles). The race is won when you have no cards left in your racing hand to put on the 4, 5, or 6 piles, or when you are stuck and can't put any more cards down. (I will explain how you get stuck, and what happens then, later.) Now, if you have 6 piles of cards and you get to a card you can't put down, you must tell everyone what that card is (call it out), remember that card's rank and suit, and place it on the bottom of your racing hand while racing. Then continue flipping cards up and putting them on the piles, but this time, for every card you can't put down, just put it on the bottom without remembering it or calling it out.
Finishing the Race: Once you reach the card that you called out, then if at any point after that card you cannot put the next card down, you are stuck (so stop and say "finished"). What this means is that even though you're stuck, you still finished the race. The leftover cards in your hand get put into your points pile. When everyone has finished racing, everyone must pick one of the six piles created through the race and add it to their own points pile too. Now the person who finished the race first (#1) gets to remove 5 points (cards) from their points pile (hurray - remember those 5 cards set aside; that's your points). The cards you remove go back into the face-up piles, to be used in the race again.
Repeat race: Now, if no one has won the game, you race again: collect the remaining 4, 5, or 6 face-up piles, shuffle well, deal out 4 piles, and commence the race again (3, 2, 1, go).
Scoring: If during the race you finish at any rank (first, second, third) with only 4 or 5 piles of cards down and one or two empty spaces, then that's an automatic bonus of -20 points (cards), and you're the only person who does not have to pick from the 6 piles to add to their points - unless someone else also won the -20 bonus, in which case they don't have to add a pile either. (Full deck = -20; half deck = -10.) (WOW, that's 20 cards out of the deck; usually people win the game right here, unless you just happen to have over 20 points in your points pile - that's got to suck.) Remember you can't add the winning -5 points to the -20 bonus points to make -25; that would make the game too easy to win for many people.
Important Ending Info: You can play with as many people as you want. If there are more than 2, it's recommended that everyone uses their own normal full deck of cards. (No jokers.) Remember there's no trading points; you can talk during the race; you can play the race as fast as you want, but you have to put the cards in the right spots; if you have the option to choose which pile to put a card on, you may put it anywhere. You cannot skip a card on purpose - if a card can be put down, or used to make a new pile, you must do so. Do not add up the points till everyone finishes the race.
Tie Breakers: To break a tie, consider who has the better negative score: if you reach 0 and the person beside you reaches -1 at the same time you did, then the person with -1 wins. If you both have the same score (-1, 0, etc.), then you both have to run a new race, where there's a good chance you could both gain enough points to start the game up again and race to go negative first again.
Strategy: You could win in any number of ways: keep winning the races and end with few enough points (cards) that you keep getting rid of a few cards, most likely reaching zero; or have exactly 20 points and go for the -20 bonus. The more points you have, the better your chances of finishing the race first, and with a huge pile of points you have a greater chance of getting the bonus points. Remember, when you have the option to choose which pile to put a card on, you may put it anywhere. Try to keep all the numbers together - it's a good strategy for winning - e.g. keep all the 5's in one pile and all the kings in another, etc.
Cheating: Cheating is not allowed, but it seems very simple to cheat, so when the race is over you are allowed to check the other players' piles. If anything does not match up, the cheater automatically loses, or you can choose a pile from the six for the cheater to add to their points.
Single Player: All the same rules apply. Your goal is to reach 0 or negative. The only differences are that you don't have to go fast, you always get -5 at the end of every round, and you must deal yourself 10 cards at the start (the bonus is the same: -20 with a full deck). Cheat if you want, because it is single player.
Fun Level (1-10): 9.8
Downside: racing makes you think and move your hands fast, so it can get tiring.
Thank you for taking the time to learn and play the game Spy.
e-mail me comments please to Jeff_d52000@yahoo.com and thank you.
Last updated 5th November 2003
Posts about Game Theory on A Fine Theorem
How’s this for fortuitous timing: I’d literally just gone through this paper by Gentzkow and Kamenica yesterday, and this morning it was announced that Gentzkow is the winner of the 2014 Clark Medal!
More on the Clark in a bit, but first, let’s do some theory.
This paper is essentially the multiple sender version of the great Bayesian Persuasion paper by the same authors (discussed on this site a couple years ago). There are a group of experts who can
(under commitment to only sending true signals) send costless signals about the realization of the state. Given the information received, the agent makes a decision, and each expert gets some utility
depending on that decision. For example, the senders might be a prosecutor and a defense attorney who know the guilt of a suspect, and the agent a judge. The judge convicts if p(guilty)>=.5, the
prosecutor wants to maximize convictions regardless of underlying guilt, and vice versa for the defense attorney. Here’s the question: if we have more experts, or less collusive experts, or experts
with less aligned interests, is more information revealed?
A lot of our political philosophy is predicated on more competition in information revelation leading to more information actually being revealed, but this is actually a fairly subtle theoretical
question! For one, John Stuart Mill and others of his persuasion would need some way of discussing how people competing to reveal information strategically interact, and to the extent that this
strategic interaction is non-unique, they would need a way for “ordering” sets of potentially revealed information. We are lucky in 2014, thanks to our friends Nash and Topkis, to be able to nicely
deal with each of those concerns.
The trick to solving this model (basically every proof in the paper comes down to algebra and some simple results from set theory; they are clever but not technically challenging) is the main result
from the Bayesian Persuasion paper. Draw a graph with the agent’s posterior belief on the X-axis, and the utility (call this u) the sender gets from actions based on each posterior on the y-axis. Now
draw the smallest concave function (call it V) that is everywhere greater than u. If V is strictly greater than u at the prior p, then a sender can improve her payoff by revealing information. Take
the case of the judge and the prosecutor. If the judge has the prior that everyone brought before them is guilty with probability .6, then the prosecutor never reveals information about any suspect,
and the judge always convicts (giving the prosecutor utility 1 rather than 0 from an acquittal). If, however, the judge’s prior is that everyone is guilty with .4, then the prosecutor can mix such
that 80 percent of criminals are convicted by judiciously revealing information. How? Just take 2/3 of the innocent people, and all of the guilty people, and send signals that each of these people is
guilty with p=.5, and give the judge information on the other 1/3 of innocent people that they are innocent with probability 1. This is plausible in a Bayesian sense. The judge will convict all of
the folks where p(guilty)=.5, meaning 80 percent of all suspects are convicted. If you draw the graph described above with u=1 when the judge convicts and u=0 otherwise, it is clear that V>u if and
only if p<.5, hence information is only revealed in that case.
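To make the prosecutor's calculation concrete, here is a quick Python check of the numbers in the example above (the code itself is my illustration, not from the paper):

```python
prior = 0.4  # judge's prior that a suspect is guilty

# Signal "guilty": sent for every guilty suspect and 2/3 of innocent ones;
# the remaining 1/3 of innocent suspects get an "innocent" signal.
p_signal_guilty = prior + (1 - prior) * (2 / 3)

# Posterior after a "guilty" signal -- exactly the judge's threshold of .5:
posterior = prior / p_signal_guilty

# Conviction rate = probability the "guilty" signal is sent at all:
conviction_rate = p_signal_guilty

# Bayes plausibility: the posteriors must average back to the prior.
avg_posterior = p_signal_guilty * posterior + (1 - p_signal_guilty) * 0.0

print(abs(posterior - 0.5) < 1e-12)        # True: judge just convicts
print(abs(conviction_rate - 0.8) < 1e-12)  # True: 80 percent convicted
print(abs(avg_posterior - prior) < 1e-12)  # True: Bayes-plausible
```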
What about when there are multiple senders with different utilities u? It is somewhat intuitive: more information is always, almost by definition, informative for the agent (remember Blackwell!). If
there is any sender who can improve their payoff by revealing information given what has been revealed thus far, then we are not in equilibrium, and some sender has the incentive to deviate by
revealing more information. Therefore, adding more senders increases the amount of information revealed and “shrinks” the set of beliefs that the agent might wind up holding (and, further, the
authors show that any Bayesian plausible beliefs where no sender can further reveal information to improve their payoff is an equilibrium). We still have a number of technical details concerning
multiplicity of equilibria to deal with, but the authors show that these results hold in a set order sense as well. This theorem is actually great: to check equilibrium information revelation, I only
need to check where V and u diverge sender by sender, without worrying about complex strategic interactions. Because of that simplicity, it ends up being very easy to show that removing collusion
among senders, or increasing the number of senders, will improve information revelation in equilibrium.
September 2012 working paper (IDEAS version). A brief word on the Clark medal. Gentzkow is a fine choice, particularly for his Bayesian persuasion papers, which are already very influential. I have
no doubt that 30 years from now, you will still see the 2011 paper on many PhD syllabi. That said, the Clark medal announcement is very strange. It focuses very heavily on his empirical work on
newspapers and TV, and mentions his hugely influential theory as a small aside! This means that five of the last six Clark medal winners, everyone but Levin and his relational incentive contracts,
have been cited primarily for MIT/QJE-style theory-light empirical microeconomics. Even though I personally am primarily an applied microeconomist, I still see this as a very odd trend: no prizes for
Chernozhukov or Tamer in metrics, or Sannikov in theory, or Farhi and Werning in macro, or Melitz and Costinot in trade, or Donaldson and Nunn in history? I understand these papers are harder to
explain to the media, but it is not a good thing when the second most prominent prize in our profession is essentially ignoring 90% of what economists actually do.
“The Explanatory Relevance of Nash Equilibrium: One-Dimensional Chaos in Boundedly Rational Learning,” E. Wagner (2013)
The top analytic philosophy journals publish a surprising amount of interesting game and decision theory; the present article, by Wagner in the journal Philosophy of Science, caught my eye recently.
Nash equilibria are stable in a static sense, we have long known; no player wishes to deviate given what others do. Nash equilibria also require fairly weak epistemic conditions: if all players are
rational and believe the other players will play the actual strategies they play with probability 1, then the set of outcomes is the Nash equilibrium set. A huge amount of work in the 80s and 90s
considered whether players would “learn” to play Nash outcomes, and the answer is by and large positive, at least if we expand from Nash equilibria to correlated equilibria: fictitious play (best respond as if each opponent will play according to the empirical distribution of their past actions) works pretty well, rules that are based on the relative payoffs of various strategies in the past work with certainty, and a
type of Bayesian learning given initial beliefs about the strategy paths that might be used generates Nash in the limit, though note the important followup on that paper by Nachbar in Econometrica
2005. (Incidentally, a fellow student pointed out that the Nachbar essay is a great example of how poor citation measures are for theory. The paper has 26 citations on Google Scholar mainly because
it helped kill a literature; the number of citations drastically underestimates how well-known the paper is among the theory community.)
A caution, though! It is not the case that every reasonable evolutionary or learning rule leads to an equilibrium outcome. Consider the “continuous time imitative-logic dynamic”. A continuum of
agents exist. At some exponential time for each agent, a buzzer rings, at which point they randomly play another agent. The agent imitates the other agent in the future with probability exp(beta*pi
(j)), where beta is some positive number and pi(j) is the payoff to the opponent; if imitation doesn’t occur, a new strategy is chosen at random from all available strategies. A paper by Hofbauer and
Weibull shows that as beta grows large, this dynamic is approximately a best-response dynamic, where strictly dominated strategies are driven out; as beta grows small, it looks a lot like a
replicator dynamic, where imitation depends on the myopic relative fitness of a strategy. A discrete version of the continuous dynamics above can be generated (all agents simultaneously update rather
than individually update) which similarly “ranges” from something like the myopic replicator to something like a best response dynamic as beta grows. Note that strictly dominated strategies are not
played for any beta in both the continuous and discrete time i-logic dynamics.
Now consider a simple two-strategy game with the following payoffs:
          Left    Right
Left     (1,1)   (a,2)
Right    (2,a)   (1,1)
The unique Nash equilibrium is X=1/A. Let, say, A=3. When beta is very low (say, beta=1), and players are “relatively myopic”, and the initial condition is X=.1, the discrete time i-logic dynamic
converges to X=1/A. But if beta gets higher, say beta=5, then players are “more rational” yet the dynamic does not converge or cycle at all: indeed, whether the population plays left or right follows
a chaotic system! This property can be generated for many initial points X and A.
The dynamic here doesn’t seem crazy, and making agents “more rational” in a particular sense makes convergence properties worse, not better. And since play is chaotic, a player hoping to infer what
the population will play next is required to know the initial conditions with certainty. Nash or correlated equilibria may have some nice dynamic properties for wide classes of reasonable learning
rules, but the point that some care is needed concerning what “reasonable learning rules” might look like is well taken.
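To get a feel for how such a dynamic can misbehave, here is a rough Python sketch. This is a generic discrete logit-imitation update, not necessarily the exact discrete i-logic dynamic from Wagner's paper, applied to the payoffs of the game above with A=3:

```python
import math

A = 3.0  # the payoff parameter "a" in the game above

def pi_left(x):
    # expected payoff to playing Left when a share x of the population plays Left
    return x * 1 + (1 - x) * A

def pi_right(x):
    return x * 2 + (1 - x) * 1

def step(x, beta):
    """One round of a discrete logit-imitation update: each strategy's share
    grows in proportion to exp(beta * payoff).  Low beta is close to
    payoff-proportional imitation; high beta is close to best response."""
    wl = x * math.exp(beta * pi_left(x))
    wr = (1 - x) * math.exp(beta * pi_right(x))
    return wl / (wl + wr)

for beta in (1.0, 5.0):
    x = 0.1
    for _ in range(20):
        x = step(x, beta)
    print(f"beta={beta}: share playing Left after 20 rounds = {x:.3f}")
# With beta=1 this particular update settles near the mixed equilibrium
# share 2/3; with beta=5 the trajectory keeps bouncing around instead.
```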
Final 2013 preprint. Big thumbs up to Wagner for putting all of his papers on his website, a real rarity among philosophers. Actually, a number of his papers look quite interesting: Do cooperate and
fair bargaining evolve in tandem? How do small world networks help the evolution of meaning in Lewis-style sender-receiver games? How do cooperative “stag hunt” equilibria evolve when 2-player stag
hunts have such terrible evolutionary properties? I think this guy, though a recent philosophy PhD in a standard philosophy department, would be a very good fit in many quite good economic theory
“Long Cheap Talk,” R. Aumann & S. Hart (2003)
I wonder if Crawford and Sobel knew just what they were starting when they wrote their canonical cheap talk paper – it is incredible how much more we know about the value of cheap communication even
when agents are biased. Most importantly, it is not true that bias or self-interest means we must always require people to “put skin in the game” or perform some costly action in order to prove the
true state of their private information. A colleague passed along this paper by Aumann and Hart which addresses a question that has long bedeviled students of repeated games: why don’t they end right
away? (And fair notice: we once had a full office shrine, complete with votive candles, to Aumann, he of the antediluvian beard and two volume tome, so you could say we’re fans!)
Take a really simple cheap talk game, where only one agent has any useful information. Row knows what game we are playing, and Column only knows the probability distribution of such games. In the
absence of conflict (say, where there are two symmetric games, each of which has one Pareto optimal equilibrium), Row first tells Column which game is the true one; this is credible, and so
Column plays the Pareto optimal action. In other cases, we know from Crawford-Sobel logic that partial revelation may be useful even when there are conflicts of interest: Row tells Column with some
probability what the true game is. We can also create new equilibria by using talk to reach “compromise”. Take a Battle of the Sexes, with LL payoff (6,2), RR (2,6) and LR=RL=(0,0). The equilibria of
the simultaneous game without cheap talk are LL, RR, or randomizing 3/4 on your preferred location and 1/4 on your opponent's preferred location. But a new equilibrium is possible if we can use talk to
create a public randomization device. We both write down 1 or 2 on a piece of paper, then show the papers to each other. If the sum is even, we both go LL. If the sum is odd, we both then go RR. This
gives ex-ante payoff (4,4), which is not an equilibrium payoff without the cheap talk.
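The parity trick is easy to check by brute force; here is a short Python sketch (mine, not Aumann and Hart's) verifying that the write-1-or-2 scheme is a fair coin that neither player can bias unilaterally, and that it delivers the ex-ante payoff (4,4):

```python
from itertools import product

# Battle of the Sexes payoffs: LL -> (6,2), RR -> (2,6), mismatches -> (0,0).
PAYOFF = {('L', 'L'): (6, 2), ('R', 'R'): (2, 6)}

# Each player privately writes 1 or 2, then both reveal. Even sum -> both
# play L, odd sum -> both play R.
pairs = list(product((1, 2), repeat=2))
p_even = sum((m + n) % 2 == 0 for m, n in pairs) / len(pairs)
print(p_even)   # 0.5: a fair coin when both mix uniformly

# Unilateral deviation check: whatever m one player writes, if the other
# mixes 50/50 the parity is still 50/50, so neither can bias the "coin".
for m in (1, 2):
    assert sum((m + n) % 2 == 0 for n in (1, 2)) / 2 == 0.5

row = p_even * PAYOFF[('L', 'L')][0] + (1 - p_even) * PAYOFF[('R', 'R')][0]
col = p_even * PAYOFF[('L', 'L')][1] + (1 - p_even) * PAYOFF[('R', 'R')][1]
print((row, col))   # (4.0, 4.0): the new ex-ante equilibrium payoff
```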
So how do multiple rounds help us? They allow us to combine these motives for cheap talk. Take an extended Battle of the Sexes, with a third action A available to Column. LL still pays off (6,2), RR
still (2,6) and LR=RL=(0,0). RA or LA pays off (3,0). Before we begin play, we may be playing extended Battle of the Sexes, or we may be playing a game Last Option that pays off 0 to both players
unless Column plays A, in which case both players get 4; both games are equally probable ex-ante, and only Row learns which game we actually in. Here, we can enforce a payoff of (4,4) if, when the
game is actually extended Battle of the Sexes, we randomize between L and R as in the previous paragraph, but if the game is Last Option, Column always plays A. But the order in which we publicly
randomize and reveal information matters! If we first randomize, then reveal which game we are playing, then whenever the public randomization causes us to play RR (giving row player a payoff of 2 in
Battle of the Sexes), Row will afterwards have the incentive to claim we are actually playing Last Option. But if Row first reveals which game we are playing, and then we randomize if we are playing
extended Battle of the Sexes, we indeed enforce ex-ante expected payoff (4,4).
Aumann and Hart show precisely what can be achieved with arbitrarily long strings of cheap talk, using a clever geometric proof which is far too complex to even summarize here. But a nice example of
how really long cheap talk of this fashion can be used is in a paper by Krishna and Morgan called The Art of Conversation. Take a standard Crawford-Sobel model. The true state of the world is drawn
uniformly from [0,1]. I know the true state, and get utility which is maximized when you take action on [0,1] as close as possible to the true state of the world plus .1. Your utility is maximized
when you take action as close as possible to the true state of the world. With this “bias”, there is a partially informative one-shot cheap talk equilibrium: I tell you whether we are in [0,.3] or
[.3,1] and you in turn take action either .15 or .65. How might we do better with a string of cheap talk? Try the following: first I tell you whether we are in [0,.2] or [.2,1]. If I say we are in
the low interval, you take action .1. If I say we are in the high interval, we perform a public randomization which ends the game (with you taking action .6) with probability 4/9 and continues the
game with probability 5/9; for example, to publicly randomize we might both shout out numbers between 1 and 9, and if the difference is 4 or less, we continue. If we continue, I tell you whether we
are in [.2,.4] or [.4,1]. If I say [.2,.4], you take action .3; else you take action .7. It is easy to calculate that both players are better off ex-ante than in the one-shot cheap talk game. The probabilities 4/9 and 5/9 were chosen so as to make each player indifferent between following the proposed equilibrium after the randomization and not.
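One way to see the ex-ante improvement is to compute the receiver's expected quadratic loss under each scheme. This assumes the standard Crawford-Sobel quadratic-loss setup described above (receiver utility -(action - state)^2, state uniform on [0,1]); the code is my own check, not Krishna and Morgan's:

```python
def loss(a, b, action):
    """Integral of (x - action)^2 dx over [a, b] -- the receiver's expected
    loss contribution when the state lies in [a, b] and she plays `action`."""
    return ((b - action) ** 3 - (a - action) ** 3) / 3

# One-shot cheap talk: sender reveals [0,.3] vs [.3,1]; receiver plays .15 or .65.
one_shot = loss(0, 0.3, 0.15) + loss(0.3, 1, 0.65)

# Multi-stage: reveal [0,.2] vs [.2,1]; on the high interval the jointly
# controlled lottery ends the game (action .6) with prob 4/9, and with
# prob 5/9 the sender further reveals [.2,.4] (action .3) vs [.4,1] (action .7).
multi = (loss(0, 0.2, 0.1)
         + (4 / 9) * loss(0.2, 1, 0.6)
         + (5 / 9) * (loss(0.2, 0.4, 0.3) + loss(0.4, 1, 0.7)))

print(round(one_shot, 6))  # 0.030833
print(round(multi, 6))     # 0.03
print(multi < one_shot)    # True: the receiver is strictly better off ex-ante
```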
The usefulness of the lotteries interspersed with the partial revelation are to let the sender credibly reveal more information. If there were no lottery, but instead we always continued with
probability 1, look at what happens when the true state of nature is .19. The sender knows he can say in the first revelation that, actually, we are on [.2,1], then in the second revelation that,
actually, we are on [.2,.4], in which case the receiver plays .3 (which is almost exactly the sender's ideal point .29). Hence without the lotteries, the sender has an incentive to lie at the first
revelation stage. That is, cheap talk can serve to give us jointly controlled lotteries in between successive revelation of information, and in so doing, improve our payoffs.
Final published Econometrica 2003 copy (IDEAS). Sequential cheap talk has had many interesting uses. I particularly enjoyed this 2008 AER by Alonso, Dessein and Matouschek. The gist is the following:
it is often thought that the tradeoff between decentralized firms and centralized firms is more local control in exchange for more difficult coordination. But think hard about what information will
be transmitted by regional managers who only care about their own division’s profits. As coordination becomes more important, the optimal strategy in my division is more closely linked to the optimal
decision in other divisions. Hence I, the regional manager, have a greater incentive to freely share information with other regional managers than in the situation where coordination is less
important. You may prefer centralized decision-making when cooperation is least important because this is when individual managers are least likely to freely share useful information with each other.
“Between Proof and Truth,” J. Boyer & G. Sandu (2012)
In the previous post, I promised that game theory has applications in pure philosophy. Some of these applications are economic in nature – a colleague has written a paper using a branch of mechanism
design to link the seemingly disjoint methodologies of induction and falsification – but others really are pure. In particular, there is a branch of zero-sum games which deals with the fundamental
nature of truth itself!
Verificationist theories of truth like those proposed by Michael Dummett suggest that a statement is true if we can prove it to be true. This may seem trivial, but it absolutely is not: for one, to
the extent that there are unprovable statements, as in some mathematical systems, we ought to say “that statement is neither true nor false,” not simply “we cannot prove that statement true or false.”
Another school of thought, beginning with Tarski’s formal logics and expanded by Hintikka, says that truth is a property held by sentences. The statement “Snow is white” is true if and only if there
is a thing called snow, and it is necessarily white. The statement “A or B” is true if and only if the thing denoted by “A” is true or the thing denoted by “B” is true.
These may seem very different conceptions. The first is a property requiring action – something is true if someone can verify it. The second seems more in the air – something is true if its sentence
has certain properties. But take any sentence in formal logic, like “There exists A such that for all B either C or D is true”. We can play a game between Verifier and Falsifier. Move left to right
across the sentence, letting the Verifier choose at all existentials and “or” statements, and the Falsifier choose at all universals and “and” statements. Verifier wins if he can get to the end of
sentence with the sentence remaining true. That is, semantic games take Tarski truth and make it playable by someone with agency, at least in principle.
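The Verifier/Falsifier game is easy to make concrete for sentences over a finite domain. Here is a toy Python evaluator of my own (a finite-domain illustration only; the philosophical action, of course, is on infinite, recursive structures):

```python
# A formula is a nested tuple:
#   ('exists', f)  -- Verifier picks a domain element
#   ('forall', f)  -- Falsifier picks a domain element
#   ('or', f, g)   -- Verifier picks a disjunct
#   ('and', f, g)  -- Falsifier picks a conjunct
#   ('atom', pred) -- pred(env) decides the base case
DOMAIN = range(3)

def verifier_wins(formula, env=()):
    """True iff Verifier has a winning strategy -- which, by construction,
    coincides with Tarski truth over the finite domain."""
    tag = formula[0]
    if tag == 'atom':
        return formula[1](env)
    if tag == 'exists':   # Verifier needs SOME choice to work
        return any(verifier_wins(formula[1], env + (d,)) for d in DOMAIN)
    if tag == 'forall':   # Falsifier picks, so EVERY choice must work
        return all(verifier_wins(formula[1], env + (d,)) for d in DOMAIN)
    if tag == 'or':
        return verifier_wins(formula[1], env) or verifier_wins(formula[2], env)
    if tag == 'and':
        return verifier_wins(formula[1], env) and verifier_wins(formula[2], env)
    raise ValueError(tag)

# "There exists x such that for all y, x <= y" -- true over {0,1,2} (x = 0):
f1 = ('exists', ('forall', ('atom', lambda e: e[0] <= e[1])))
# "For all x there exists y with y < x" -- false (fails at x = 0):
f2 = ('forall', ('exists', ('atom', lambda e: e[1] < e[0])))
print(verifier_wins(f1))  # True
print(verifier_wins(f2))  # False
```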
The paper by Boyer and Sandu takes this as a starting point, and discusses when Dummett's truth coincides with Tarski and Hintikka's truth, restricting ourselves to semantic games played on recursive structures (nonconstructive winning strategies in the semantic game seem problematic if we want to relate truth in semantic games to verificationist truth!). Take statements in Peano arithmetic where all objects chosen are natural numbers (it happens to be true that every recursive model of PA is isomorphic to the natural numbers). When is every statement I can prove also true in the sense of a winning strategy in the recursive semantic game? Conversely, when can the semantic-game truth of a sentence be given by a proof? The answer to both is negative. For the first, consider the sentence stating that for all programs x1 and inputs x2, there exists a number of steps y such that the system either halts within y steps or does not halt at all. This is the halting problem. It is not decidable, hence there is no recursive winning strategy for Verifier, but the sentence is trivially provable in Peano arithmetic by the law of the excluded middle.
Boyer and Sandu note (as known in an earlier literature) that we can relate the two types of truth by extending the semantic game to allow backward moves. That is, at any node, or at the end of the
game, Verifier can go back to any node she played and change her action. Verifier wins if she has a finite winning strategy. It turns out that Verifier can win in the game with backward moves if and
only if she can win in the standard game. Further, if a statement can be proven, Verifier can win in the game with backward moves using a recursive strategy. This has some interesting implications
for Godel sentences (“This sentence is not provable within the current system.”) which I don’t wish to discuss here.
Note that all of this is just the use of game theory in “games against nature”. We usually think of game theory as being a tool for the analysis of situations with strategic interaction, but the
condition that players are rational perfect optimizers means that, in zero sum games, checking whether something is possible for some player just involves checking whether a player called Nature has
a winning strategy against him. This technique is so broadly applicable, in economics and otherwise, that we ought really be careful about defining game theory as solely a tool for analyzing the
strategies of multiple “actual” agents; e.g., Wikipedia quotes Myerson’s definition that game theory is “the study of mathematical models of conflict and cooperation between intelligent rational
decision-makers”. This is too limiting.
Final copy. This article appeared in Synthese 187/March 2012. Philosophers seem to rarely put their working papers online, but Springer has taken down the paywall on Synthese throughout December, so
you can read the above link even without a subscription.
Game Theory and History, A. Greif & Friends (1993, 1994)
(This post refers to A. Greif, “Contract Enforceability and Economic Institutions in Early Trade: The Maghribi Traders’ Coalition”, AER 1993, and A. Greif, P. Milgrom & B. Weingast, “Coordination,
Commitment and Enforcement: The Case of the Merchant Guild,” JPE 1994.)
Game theory, after a rough start, may actually be fulfilling its role as proposed by Herbert Gintis: unifier of the sciences. It goes without saying that game theoretic analysis is widespread in
economics, political science (e.g., voter behavior), sociology (network games), law (antitrust), computer science (defending networks against attacks), biology (evolutionary strategies), pure
philosophy (more on this in a post tomorrow!), with occasional appearances in psychology, religion (recall Aumann’s Talmud paper), physics (quantum games), etc. But history? Surely game theory,
particularly the more complex recent results, has no place there? Yet Avner Greif, an economic historian at Stanford, has shown that games can play a very interesting role indeed in understanding
historical events.
Consider first his Maghribi traders paper. In the 11th and 12th century, a group of Judeo-Arabic traders called the Maghribis traded across the Mediterranean. Two institutional aspects of their trade
are interesting. First, they all hired agents in foreign cities to carry out their trade, and second, they generally used other Maghribi merchants as their agents. This is quite different from, for
instance, Italy, where merchants tended to hire agents in foreign cities who were not themselves merchants. What explains that difference, and more generally, how can long distance traders ensure that their agents do not rip them off? For instance, how do I keep an agent from claiming he sold at a low price when actually he sold at a high one?
To a theorist, this looks like a repeated reputational game with imperfect monitoring. Greif doesn’t go the easy route and just assume there are trustworthy and untrustworthy types. Rather, he
assumes that there are a set of potential agents who can be hired in each period, that agents are exogenously separated from merchants with probability p in each period, and that merchants can choose
to hire and fire at any wage they choose. You probably know from economics of reputation or from the efficiency wage literature that I need to offer wages higher than the agent’s outside option to
keep him from stealing, so that the value of the continuation game exceeds the one-time gain from stealing now. Imagine that I fire anyone who steals and never hire him again. How do I ensure that other
firms do not then hire that same agent (perhaps the agent will say, “Look, give me a second chance and I will work at a lower wage”)? Well, an agent who has cheated one merchant will never be hired
by that merchant again. This means that when he is in the unemployed pool, even if other merchants are willing to hire him, his probability of getting hired is lower, since one merchant will
definitely not hire him. That means that the continuation value of the game if he doesn’t steal from me is lower. Therefore, the efficiency wage I must pay him to keep him from stealing is higher
than the efficiency wage I can pay someone who hasn’t ever stolen, so I strictly prefer to hire agents who have never stolen. This allows the whole coalition to coordinate. Note that the fewer agents
there are, the higher the continuation value from not stealing, and hence the lower the efficiency wage I can pay: it is optimal to keep the set of potential agents small.
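The efficiency-wage logic above can be checked with a toy calculation. This is a minimal sketch, not Greif's full model: I assume an agent employed at wage w is separated with probability p each period, an unemployed agent is rehired with probability h, there is a discount factor delta, a cheater grabs g, and (as in the coalition equilibrium) a caught cheater is never rehired, so his post-cheating value is zero. The parameter values are illustrative.

```python
def min_wage(h, delta=0.9, p=0.1, g=1.0):
    # Unemployment value as a fraction of employment value:
    #   V_u = delta*(h*V_e + (1-h)*V_u)  =>  V_u = a*V_e
    a = delta * h / (1 - delta * (1 - h))
    # Employment value: V_e = w + delta*((1-p)*V_e + p*V_u)
    #   =>  V_e = w / (1 - delta*(1-p) - delta*p*a)
    # No-cheat condition at equality: delta*((1-p) + p*a)*V_e = g
    return g * (1 - delta * (1 - p) - delta * p * a) / (delta * ((1 - p) + p * a))

w_clean = min_wage(h=0.5)   # agent no merchant refuses to hire
w_cheat = min_wage(h=0.3)   # past cheater: one merchant never rehires him, so h is lower
assert w_cheat > w_clean > 0
```

A lower rehire probability lowers the honest continuation value while the punishment (never rehired) stays the same, so deterring a past cheater requires a strictly higher wage, exactly the blog's argument; the same comparative static (higher h, lower required wage) rationalizes keeping the agent pool small.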
What of the Italian merchants? Why do they not hire only each other? Maghribi merchants tended to be involved only in long distance trade, while Italian merchants were also involved in real estate
and other pursuits. This means the outside option (continuation value after cheating if no one hires me again) is higher for Italian merchants than Maghribi merchants, which means that hiring
merchants at the necessary efficiency wage will be relatively more expensive for Italians than for Maghribis.
A followup by Greif, with Milgrom and Weingast, considers the problem of long distance trade from the perspective of cities. Forget about keeping your agent from ripping you off: how do you keep the
city from ripping you off? For instance, Genoans in Constantinople had their district overrun by a mob at one point, with no compensation offered. Sicilians raised taxes on sales by Jews at one point
after they had brought their goods for sale. You may naively think that reputation alone will be enough; I won’t rip anyone off because I want a reputation as a safe and fair city for trade.
But again, the literature of repeated games tells us this will not work. Generally, I need to punish deviations from the efficient set of strategies, and punish those who themselves do not punish
deviators. In terms of medieval trade, to keep a city from ripping me off, I need not only to punish the city by bringing it less trade, but I also need to make sure the city doesn’t make up for my
lost trade by offering a special deal to some other trader. That is, I need to get information about violation against a single trader to other traders, and I need to make sure they are willing to
punish the deviating city.
The merchant guild was the institution that solved this problem. Merchant guilds were able to punish their own members by, for example, keeping them from earning rents from special privileges in
their own city. In the most general setting, when a guild orders a boycott, cities may be able to attract some trade, but less than the efficient amount, because only by offering a particularly good deal can the city entice merchants to come during a boycott and credibly convince them it will not steal.
This is all to say that strong guilds may be in the best interest of cities since they allow the city to solve its commitment problem. The historical record confirms many examples of cities
encouraging guilds to come trade, and encouraging the strengthening of guilds. Only a reputational model like the above one can explain such city behavior; if guilds are merely extracting rents with
monopoly privilege, cities would not encourage them at all. Both of these papers, I think, are quite brilliant.
1993 AER (IDEAS version) and 1994 JPE (IDEAS version). Big thumbs up to Avner for having the final published versions of these papers on his website.
“Unbeatable Imitation,” P. Duersch, J. Oechssler & B. C. Schipper (2012)
People, particularly in relatively unimportant situations, rely on heuristics rather than completely rational foresight. But using heuristics in modeling seems to me undesirable, because players
using heuristics can easily be abused by more strategic players. For instance, consider the game of fighter pilot Chicken as in the movie Top Gun. Both players prefer going straight while the
opponent swerves to swerving when the opponent swerves (hence showing lack of nerve) to swerving when the opponent goes straight (hence showing a unique lack of nerve) to going straight when the
opponent goes straight (hence crashing). Consider playing Chicken over and over against a heuristic-based opponent. Perhaps the opponent simply best responds to whatever you did in the previous
period. In this case, if I go straight in period 1, the opponent swerves in the next period, and if I swerve, the opponent goes straight. Therefore, I’ll go straight in periods 1 through infinity,
knowing my opponent will swerve in every period except possibly the first. The sophisticated player will earn a much higher payoff than the unsophisticated one.
Duersch et al show that, in every 2×2 symmetric game and in a large class of N-by-N symmetric two-player games, a simple heuristic called “imitation” has an undiscounted average payoff identical to
that which can be achieved by an opponent playing any strategy at all. In imitation, I retain my strategy each period unless the opponent earned strictly more than I did in the previous period, in
which case I copy him. Consider Chicken again. If I go straight and the opponent swerves, then I know he will go straight in the following period. In the next period, then, I can either crash into
him (causing him to swerve two periods on) or swerve myself (causing him to go straight two periods on). In any case, I can at best get my opponent to swerve while I go straight once every two
periods. By symmetry, in the periods where this doesn’t happen, I can at best get the payoff from swerving when my opponent goes straight, meaning my average payoff is no better than my
heuristic-based opponent! This is true no matter what strategy is used against the imitating opponent.
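The "unbeatable" property is easy to see in a quick simulation. The payoff numbers below (4 for straight-against-swerve, 3 for mutual swerve, 2 for swerving against straight, 0 for a crash) and the imitator's initial action are my own illustrative choices, consistent with the preference ordering above; the claim checked is that no opponent strategy earns an average payoff above the imitator's.

```python
# action 0 = swerve, 1 = straight; U[(a, b)] = payoff to a player choosing a against b
U = {(1, 0): 4, (0, 0): 3, (0, 1): 2, (1, 1): 0}

def run(opponent, T=1000):
    imi = 0                              # imitator starts by swerving
    opp_total = imi_total = 0
    for t in range(T):
        opp = opponent(t, imi)
        po, pi = U[(opp, imi)], U[(imi, opp)]
        opp_total += po
        imi_total += pi
        if po > pi:                      # copy only a strictly more successful opponent
            imi = opp
    return opp_total / T, imi_total / T

strategies = {
    "always straight": lambda t, imi: 1,
    "always swerve":   lambda t, imi: 0,
    "alternate":       lambda t, imi: t % 2,
    "best respond":    lambda t, imi: 1 - imi,   # straight vs swerve, swerve vs straight
}
for name, s in strategies.items():
    opp_avg, imi_avg = run(s)
    assert opp_avg <= imi_avg + 0.01, name       # no strategy beats the imitator
```

Notice how the best-reply exploiter, which destroys the naive heuristic from the earlier paragraph, ends up swerving forever against the imitator, who goes straight and collects the highest average payoff.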
Now imitation will fail in many games, of course. Consider Rock-Paper-Scissors. If I play Rock when you play Scissors, then since you imitate, you will switch to Rock in the next period. Knowing
this, I will play Paper, and so on, winning every period. Games that have this type of cycling possibility allow me to extract an arbitrarily higher payoff than the imitating opponent. What’s
interesting is that, in finite symmetric two-player games between an imitator and an agent with perfect rationality, games with a possibility of cycling in some subgame are the only ones in which the
imitator does not earn the same average payoff per period as the rational player! Checking this condition is difficult, but games with no pure equilibrium in the relative payoff game (i.e., the game
where payoffs for each player are equal to the difference in payoffs between players in the original game, hence making the original game zero-sum) always have a cycle, and games which are
quasiconcave never do. Many common games (oligopoly competition, Nash bargaining, etc.) can be written as quasiconcave games.
Imitation is really pretty unique. The authors give the example of a 3×3 symmetric oligopoly game, where strategy 1 is “produce Cournot quantity”, strategy 2 is “produce Stackelberg follower
quantity” and strategy 3 is “produce Stackelberg leader quantity.” The game has no subgames with cycles as defined above, and hence imitators and the rational player earn the same average payoff (if
rational player plays Stackelberg leader and I play something else, then he earns more than me, hence I imitate him next period, hence he best responds by playing Stackelberg follower). Other
heuristics do much worse than imitation. A heuristic where you simply best reply just plays Stackelberg follower forever, for example.
This result is quite interesting, and the paper is short; the “useful insight on the worst page” test of a quality paper is easily satisfied. I like this work too because it is related to some ideas
I have about the benefits of going first. Consider shifting a symmetric simultaneous game to a symmetric sequential game. Going first has no benefit except that it allows me to commit to my action
(and many negatives, of course, including the inability to mix strategies). Likewise a heuristic rule allows the heuristic player to commit to actions without assuming perfection of the equilibrium.
So there is a link between “optimal” heuristics and the desire of a rational player to commit to his action in advance if he could so choose.
November 2011 Working Paper (IDEAS version). Final version published in the September 2012 issue of Games and Economic Behavior.
“Aggregate Comparative Statics,” D. Acemoglu & M. K. Jensen (2011)
Enormous parts of economic theory are devoted to “comparative statics”: if one variable changes (one chickpea firm’s cost structure decreases, demand for pineapples goes up, firms receive better
information about the size of an oil field before bidding on rights to that field), does another variable increase or decrease (chickpea prices rise or fall, equilibrium supply of pineapples rises or
falls, revenue for the oil field auctioneer rises or falls)?
There are a couple of ways we can check comparative statics. The traditional way is using the implicit function theorem. Call t the variable you wish to change, and x the endogenous variable whose comparative static you wish to check. If you have a model for which you can solve for x as a function of t, then finding the sign of dx/dt (or the nondifferential analogue) is straightforward. This
is rare indeed in many common economic models; for instance, if x is supply of firm 1 in a Cournot duopoly, and t is the marginal cost of firm 2, and all I know is that demand satisfies certain
properties, then since I haven’t even specified the functional form of demand, I will not be able to take an explicit derivative. We may still be in luck, however. Let f(x(t),t) be the function an
agent is maximizing, which depends on x, which the agent can choose, and t, which is exogenous. If the optimal choice x* lies in the interior of a convex strategy set, and f is concave and twice differentiable, then the first-order approach applies, and hence at any optimum f_x(x*,t)=0. Differentiating this first-order condition with respect to t gives f_xx(dx*/dt)+f_xt=0, or dx*/dt=-f_xt/f_xx; since f_xx<0 at an interior maximum, the comparative static takes the sign of the cross-partial f_xt.
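To make the implicit-function-theorem logic concrete: differentiating the first-order condition f_x(x*(t),t)=0 gives dx*/dt = -f_xt/f_xx. Here is a toy numerical check, with a quadratic objective chosen purely for illustration (analytically x*(t) = t, so dx*/dt = 1):

```python
def f(x, t):
    return t * x - 0.5 * x * x        # concave in x; analytically x*(t) = t

def argmax(t, lo=-10.0, hi=10.0):
    # ternary search, valid because f is concave in x
    for _ in range(200):
        m1, m2 = lo + (hi - lo) / 3, hi - (hi - lo) / 3
        if f(m1, t) < f(m2, t):
            lo = m1
        else:
            hi = m2
    return (lo + hi) / 2

h = 1e-4
numeric = (argmax(2 + h) - argmax(2 - h)) / (2 * h)   # dx*/dt at t = 2
f_xt, f_xx = 1.0, -1.0                                # cross-partial and own second partial
assert abs(numeric - (-f_xt / f_xx)) < 1e-3           # both equal 1
```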
So far, so good. But often, our maximand is neither differentiable nor concave, our strategy set is not convex, or our solution is not interior. What are we to do? Here, the monotone comparative
statics approach (Athey and Milgrom are the early reference here) allows us to sign comparative statics without many assumptions. We still don’t have many good results for games of strategic substitutes, however; in
these types of games, my best response is decreasing in your action, so if a parameter change causes you to increase your action, I will want to decrease my action. Further, there are many games
where player actions are neither strategic complements nor strategic substitutes. A good example is a patent race. If you increase your effort, I may best respond by increasing my effort (in order to
“stay close”) or decreasing my effort (because I am now so far behind that I have no chance of inventing first).
Acemoglu and Jensen show that both types of games can be handled nicely if the games are “aggregative games” where my action depends only on the aggregate output of all firms rather than the
individual actions of other firms. My choice of production in a Cournot oligopoly depends only on aggregate production by all firms, and my choice of effort in an independent Poisson patent race
depends only on the cumulative hazard rate of invention. In such a case, if the game is one of strategic substitutes in the choice variable x, and the aggregate quantity is some increasing function
of the sum of all x and t, then an increase in t leads to a decrease in equilibrium choices x, and entry by another player increases the aggregate quantity. If t(i) is an idiosyncratic exogenous
variable that affects i’s choice, and only affects other players’ actions through the aggregate x, then an increase in t(i) increases x(i) and decreases x(j) for all other j. [When I say "increase"
and "decrease" for the equilibrium quantities, when there are multiple equilibria, this means that the entire set of equilibria quantities shift up or down.]
For example, take a Cournot game, where my profits are s(i)*P(X+t)-c(i)(s(i),t(i)), where s(i) is my supply choice, P(X+t) is price given the total industry supply plus a demand shifter t, and c(i)
is my cost function for producing s(i) given some idiosyncratic cost shifter t(i) such that s(i) and t(i) obey the single crossing property. Assume P is twice differentiable. Then taking the
derivative of the profit function with respect to other players' supply, we have that my supply is a strategic substitute for others' supply iff P'+s(i)P''<0. Note that this condition does not depend
on any assumption about the cost function c. Then using the theorem in the last paragraph, we have that, in a Cournot game with *any* cost function, a decrease in the demand curve decreases total
industry supply, an additional entrant decreases equilibrium supply from existing firms, and a decrease in costs for one firm increases that firm's equilibrium output and decreases the output of all
other firms. All we assumed to get this was the strategic substitutes property, twice-differentiability of P, and the single crossing property of the cost function in s and t. As with many
comparative statics results, checking strategic substitutability and making sure the signs of s and t are correct for the interpretation of results is often easier than taking implicit derivatives
even if your assumptions are such that the implicit approach could conceivably work.
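These predictions can be verified in the simplest special case. With linear demand P(X)=a-bX we have P'=-b<0 and P''=0, so the strategic substitutes condition holds for any supplies; the demand intercept, slope, and costs below are my own illustrative numbers, and the equilibrium is found by best-response iteration rather than a closed form.

```python
def cournot(a, b, c1, c2, iters=2000):
    # best-response iteration for a two-firm Cournot game with P(X) = a - b*X
    s1 = s2 = 0.0
    for _ in range(iters):
        s1 = max(0.0, (a - c1 - b * s2) / (2 * b))
        s2 = max(0.0, (a - c2 - b * s1) / (2 * b))
    return s1, s2

base = cournot(100, 1, 10, 10)           # symmetric baseline
cheap1 = cournot(100, 1, 5, 10)          # firm 1's idiosyncratic cost falls
lower_demand = cournot(90, 1, 10, 10)    # demand curve shifts down

assert cheap1[0] > base[0] and cheap1[1] < base[1]   # firm 1 up, rival down
assert sum(lower_demand) < sum(base)                 # lower demand, lower total supply
```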
If the strategic substitutes property does not hold, as in a patent race, then compact, convex strategy sets, a payoff function pseudoconcave in own strategies, a boundary condition, and
a “local solvability” condition are sufficient to get the same results. Local solvability is roughly the condition which guarantees that the player’s own effect on the aggregate when she reallocates
after a change in t or t(i) does not alter the result from the previous paragraph enough to flip the comparative static’s sign.
April 2011 Working Paper, as yet unpublished (IDEAS version).
“Evolutionary Dynamics and Backward Induction,” S. Hart (2002)
Let’s follow up yesterday’s post with an older paper on evolutionary games by the always-lucid Sergiu Hart. As noted in the last post, there are many evolutionary dynamics for which the rest points
of the evolutionary game played by completely myopic agents and the Nash equilibria of the equivalent static game played by strategic agents coincide, which is really quite phenomenal (and since you know there are payoff-suboptimal Nash equilibria, results of this kind have, since Maynard Smith (1973), fundamentally changed our understanding of biology). Nash equilibrium is a low bar, however.
Since Kuhn (1953), we have also known that every finite game of perfect information has a backward induction equilibrium, what we now call the subgame perfect equilibrium, in pure strategies. When does the invariant limit
distribution of an extensive form evolutionary game coincide with the backward induction equilibrium? (A quick mathematical note: an evolutionary system with mutation, allowing any strategy to
“mutate” on some agent with some probability in each state, means that by pure luck the system can move from any state to any other state. We also allow evolutionary systems to have selection,
meaning that with some probability in each state an agent switches from his current strategy to one with a higher payoff. This process defines a Markov chain, and since the game is finite and the
mutations allow us to reach any state, it is a finite irreducible Markov chain. Such Markov chains have a unique invariant distribution in the limit.)
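The parenthetical claim is standard Markov chain theory: mutation makes every state reachable, so the chain is irreducible and has a unique invariant distribution. A two-state toy chain (transition probabilities entirely my own) shows the idea: selection pushes mass toward state 0 while a small mutation rate keeps state 1 reachable.

```python
eps = 0.01                       # mutation: small chance of leaving the selected state
P = [[1 - eps, eps],             # transition probabilities from state 0
     [0.9, 0.1]]                 # selection moves state 1 back to state 0 w.p. 0.9

pi = [0.5, 0.5]                  # any starting distribution works by irreducibility
for _ in range(5000):
    pi = [pi[0] * P[0][0] + pi[1] * P[1][0],
          pi[0] * P[0][1] + pi[1] * P[1][1]]

# invariance: pi*P = pi, and almost all mass sits on the selected state
step = [pi[0] * P[0][0] + pi[1] * P[1][0], pi[0] * P[0][1] + pi[1] * P[1][1]]
assert abs(step[0] - pi[0]) < 1e-9 and abs(step[1] - pi[1]) < 1e-9
assert pi[0] > 0.98
```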
In general, we can have limit distributions of evolutionary processes that are not the backward induction equilibrium. Consider the following three step game. Agent 1 chooses C or B, then if C was
chosen, agent 2 (in the agent-normal form) chooses C2 or B2, then if C and B2 were chosen, agent 3 chooses C3 or B3. The payoff to each agent when B is chosen is (4,0,0), when C and C2 are chosen is
(5,9,0), when C, B2 and C3 are chosen is (0,0,0), and when C, B2 and B3 are chosen is (0,10,1). You can see that (C,C2,C3) and (B,B2,B3) are both Nash, but only (B,B2,B3) is subgame perfect, and
hence the backward induction equilibrium. Is (B,B2,B3) the limit distribution of the evolutionary game? In the backward induction equilibrium, agent 1 chooses B at the first node, and hence nodes 2
and 3 are never reached, meaning only mutation, and not selection, affect the distribution of strategies at those nodes. Since the Markov chain is ergodic, with probability 1 the proportion of agents
at node 2 playing B2 will fall below .2; when that happens, selection at node 1 will push agents toward C instead of B. When this happens, now both nodes 2 and 3 are reached with positive
probability. If less than .9 of the agents in 3 are playing B3, then selection will push agents at node 2 toward C2. Selection can therefore push the percentage of agents playing B2 down to 0, and
hence (C,C2,C3) can be part of the limit invariant distribution even though it is not the backward induction solution.
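The backward induction in this three-node example is mechanical enough to script (the representation of the game is mine; the payoffs are exactly those in the text):

```python
# outcomes of the three-node game, keyed by the path of moves actually taken
payoff = {
    ("B",):            (4, 0, 0),
    ("C", "C2"):       (5, 9, 0),
    ("C", "B2", "C3"): (0, 0, 0),
    ("C", "B2", "B3"): (0, 10, 1),
}

# solve node by node, from the last mover back to the first
b3 = max(["C3", "B3"], key=lambda a: payoff[("C", "B2", a)][2])

def out2(a2):  # outcome after C, given agent 3 plays b3
    return payoff[("C", a2)] if a2 == "C2" else payoff[("C", "B2", b3)]

b2 = max(["C2", "B2"], key=lambda a: out2(a)[1])

def out1(a1):  # outcome of the whole game, given agents 2 and 3 play b2, b3
    return payoff[("B",)] if a1 == "B" else out2(b2)

b1 = max(["C", "B"], key=lambda a: out1(a)[0])
assert (b1, b2, b3) == ("B", "B2", "B3")   # the unique backward induction solution
```

Agent 3 prefers B3 (1 over 0), so agent 2 prefers B2 (10 over 9), so agent 1 prefers B (4 over 0), confirming that (B,B2,B3), not (C,C2,C3), is the backward induction equilibrium.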
So is backward induction unjustifiable from an evolutionary perspective? No! Hart shows that if the number of agents goes to infinity as the probability of mutation goes to zero, then the backward
induction solution, when it is unique, is also the only element in the limit invariant distribution of the evolutionary game. How does letting the number of agents go to infinity help? Let Bi be an
element of the backward induction equilibrium at node i somewhere in the game tree. Bi must be a best reply in the subgame beginning with i if Bj is played in all descendant nodes by a sufficiently
high proportion of the population, so if Bi is not a best reply (and hence selection does not push us toward Bi) it must be that Bj is not being played further down the game tree. If Bi is a best
reply in the subgame beginning with i, then most of the population will play Bi because of selection pressures.
Now here’s the trick. Consider the problematic case in the example, when node i is not being reached in a hypothesized limit distribution (if i is reached, then since the probability of mutation goes
to zero, selection is much stronger than mutation, and hence non best replies will go away in the limit). Imagine that there is another node g preceding i which is also not reached, and that i is
only reached when some strategy outside the backward induction equilibrium is played in g. When g and i are not reached, there is no selection pressure, and hence no reason that the backward
induction equilibrium node will be played. Large populations help here. With some small probability, an agent in g mutates such that he plays the node which reaches i. This still has no effect unless
there is also a mutation in the node before g that causes g to be reached. The larger the population, the lower the probability the specific individual who mutated in g mutates back before any
individual in the node before g mutates. Hence larger populations make it more likely that rare mutations in unreached nodes will “coordinate” in the way needed for selection to take over.
Final GEB version (IDEAS version). Big up to Sergiu for posting final pdfs of all of his papers on his personal website.
“Survival of Dominated Strategies Under Evolutionary Dynamics,” J. Hofbauer & W. Sandholm (2011)
There is a really interesting tension in a lot of economic rhetoric. On the one hand, we have results that derive from optimal behavior by agents with rational foresight: “price equals marginal cost
in competitive markets because of profit-maximizing behavior” or “Policy A improves welfare in a dynamic general equilibrium setting with utility-maximizers”. Alternatively, though, we have
explanations that rely on dynamic consequences to even non-maximizing agents: “price equals marginal cost in competitive markets because firms who price above MC are driven out by competition” or
“Policy A improves welfare in a dynamic general equilibrium, and the dynamic equilibrium is sensible because firms adjust myopically as if in a tatonnement process.”
These two types of explanation, without further proof, are not necessarily the same. Profit-maximizing firms versus firms disciplined by competition give completely different welfare results under
monopoly, since the non-profit-maximizing monopolist can be very wasteful and yet still make positive profits. In a dynamic context, firms that adjust myopically to excess demand in some markets, rather than profit-maximizing according to rational expectations, will not necessarily converge to equilibrium (a friend mentioned that Lucas made precisely this point in a paper in the 1970s).
How can we square the circle? At least in static games, there has been a lot of work here. Nash and other strategic equilibrium concepts are well known. There is also a branch of game theory going
back to the 1950s, evolutionary games, where rather than choosing strategically, a probability vector lists what portion of the players are playing a given strategy at a given time, resulting in some
payoffs. A revision rule, perhaps stochastic to allow for “mutations” as in biology, then tells us how the vector of strategies updates conditional on payoffs in the previous round. Fudenberg and
Kreps’ learning model from the 1980s is a special case.
Amazingly, it is true for almost all sensible revision rules that the set of rest points of the dynamic includes every Nash equilibrium of the underlying static game, and further that for many
revision rules the dynamic rest points are exactly equivalent to the set of Nash equilibria. We have one problem, however: dynamic systems needn’t converge to points at all, but rather may converge
to cycles or other outcomes.
Hofbauer and Sandholm – Sandholm being both a graduate of my institution and probably the top economist in the world today on evolutionary games – show that for any revision rule satisfying a handful
of common properties, we can construct a game where strictly dominated strategies are played with positive probability. This includes any dynamic meeting the following four properties: the population
law of motion is continuous in payoffs and the current population vector, there is positive correlation between strategy growth rates and current payoffs, the dynamic is at rest iff the strategy
vector is a Nash equilibrium of the underlying static game, and if an unplayed strategy has sufficiently high rewards, then with positive probability some agents begin using it. These criteria are
satisfied by “excess payoff dynamics” like BNN where strategies with higher than average payoffs have higher than average growth rates, and by “pairwise comparison dynamics” where agents switch with
positive probability to strategies which have higher payoff than their own current payoff. A myopic best response is not continuous, and indeed, myopic best response has been shown to eliminate
strictly dominated strategies.
The proof involves a quite difficult topological construction which I don’t discuss here, but it’s worth discussing the consequence of this result. In strategic situations where we may think agents
lack full rationality or rational foresight, and where we observe cycles or other non-rest behavior over time, we should be hesitant to ignore strictly dominated actions (particularly ones that are only dominated by a small amount) in our analysis of the situation. There is also scope for policy improvements: if agents are learning using a dynamic which does not rule out strictly dominated strategies, we may be able to provide information which induces an alternative dynamic that does rule out such strategies.
Final version in Theoretical Economics 6 (IDEAS version). Another big thumbs up to the journal Theoretical Economics, the Lake Wobegon of econ, where the submissions are free, the turnaround time is
fast, and all the articles are totally ungated.
“The Nash Bargaining Solution in Economic Modeling,” K. Binmore, A. Rubinstein & A. Wolinsky (1986)
If we form a joint venture, our two firms will jointly earn a profit of N dollars. If our two countries agree to this costly treaty, total world welfare will increase by the equivalent of N dollars.
How should we split the profit in the joint venture case, or the costs in the case of the treaty? There are two main ways of thinking about this problem: the static bargaining approach developed
first by John Nash, and bargaining outcomes that form the perfect outcome of a strategic game, for which Rubinstein (1982) really opened the field.
The Nash solution says the following. Let us have some pie of size 1 to divide. Let each of us have a threat point, S1 and S2. Then if certain axioms are followed (symmetry, invariance to unimportant
transformations of the utility function, Pareto optimality and something called the IIA condition), the bargain is the one that maximizes (u1(p)-u1(S1))*(u2(1-p)-u2(S2)), where p is the share of the
pie of size 1 that accrues to player 1. So if we both have linear utility, player 1 can leave and collect .3, and player 2 can leave and collect 0, but a total of 1 is earned by our joint venture,
the Nash bargaining solution is the p that maximizes (p-.3)*(1-p-0); that is, p=.65. This is pretty intuitive: 1-.3-0=.7 of surplus is generated by the joint venture, and we each get our outside
option plus half of that surplus.
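This maximization is easy to verify directly; the grid search below is just for illustration, since calculus gives p = 0.65 exactly (the first-order condition 1.3 - 2p = 0).

```python
# grid-search the Nash product (p - 0.3) * (1 - p - 0) over feasible splits
candidates = [i / 10000 for i in range(3000, 10001)]
p_star = max(candidates, key=lambda p: (p - 0.3) * (1 - p))

assert abs(p_star - 0.65) < 1e-3
# each side gets its outside option plus half of the 0.7 surplus
assert abs((p_star - 0.3) - (1 - p_star)) < 1e-3
```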
The static outcome is not very compelling, however, as Tom Schelling long ago pointed out. In particular, the outside option looks like a noncredible threat: If player 2 refused to offer player 1
more than .31, then player 1 would accept given his outside option is only .3. That is, in a one-shot bargaining game, any p between .3 and 1 looks like an equilibrium. It is also not totally clear
how we should interpret the utility functions u1 and u2, and the threat points S1 and S2.
Rubinstein bargaining began to fix this. Let players make offers back and forth, and let there be a time period D between each offer. If no agreement is reached after T periods, we both get our
outside options. Under some pretty compelling axioms, there is a unique perfect equilibrium whereby player 1 gets p* if he makes the first offer, and p** if player 2 makes the first offer. Roughly,
if the time between offers is D, player 1 must offer player 2 a high enough share that player 2 is indifferent between that share today and the amount he could earn when he makes an offer in the next
period. Note that the outside options do not come into play unless, say, player 1's outside option is higher than min{p*,p**}. Note also that as D goes to 0, all of the difference in bargaining power
has to do with who is more patient. Binmore et al modify this game so that, instead of discounting the future, there is a small chance that the gains from negotiation will disappear
(“breakdown”) in between every period; for instance, we may want to form a joint venture to invent some product, but while we negotiate, another firm may swoop in and invent it. It turns out that
this model, with von Neumann-Morgenstern utility functions for each player (though perhaps differing levels of risk aversion), is a special case of Rubinstein bargaining.
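The "patience is bargaining power" claim can be checked with the standard closed form for the proposer's share in Rubinstein bargaining, p* = (1-d2)/(1-d1*d2), where d_i = exp(-r_i*D) is player i's per-offer discount factor; the discount rates below are illustrative assumptions.

```python
from math import exp

def proposer_share(r1, r2, D):
    d1, d2 = exp(-r1 * D), exp(-r2 * D)     # per-offer discount factors
    return (1 - d2) / (1 - d1 * d2)         # player 1's share when he proposes first

# as the delay D between offers vanishes, shares converge to r2/(r1 + r2):
assert abs(proposer_share(1.0, 1.0, 1e-6) - 0.5) < 1e-3     # equal patience: 50/50 split
assert abs(proposer_share(1.0, 2.0, 1e-6) - 2 / 3) < 1e-3   # more patient player 1 gets 2/3
```

With equal patience the limit is an even split, regardless of any outside option below one half, which is exactly why the .3 outside option in the joint-venture example turns out to be inconsequential.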
Binmore et al prove that as D goes to zero, both strategic cases above have unique perfect equilibria equal to a Nash bargaining solution. But a Nash solution for what utility functions and threat
points? The Rubinstein game limits to Nash bargaining where the difference in utilities has to do with time preference, and the threat points S1 and S2 are equal to zero. The breakdown game limits to
Nash bargaining where the difference in utilities has to do with risk aversion, and the threat points S1 and S2 are equal to whatever utility we would get from the world after breakdown.
Two important points: first, it was well known that a concave transformation of a utility function leads to a worse outcome in Nash bargaining for that player. But we know from the previous paragraph
that this concave transformation is equivalent to a more impatient Rubinstein bargainer: a concave transformation of the utilities in the Nash outcome has to do with changing the patience, not the
risk aversion, of players. Second, Schelling was right when he argued that the Nash threat points involve noncredible threats. As long as players prefer their Rubinstein equilibrium outcome to their
outside option, the outside option does not matter for the bargaining outcome. Take the example above where one player could leave the joint venture and still earn .3. The limit of Rubinstein
bargaining is for each player to earn .5 from the joint venture, not .65 and .35. The fact that one player could leave the joint venture and still earn .3 is totally inconsequential to the
negotiation, since the other player knows that this threat is not credible whenever the first player could earn at least .31 by staying. This point is often wildly misunderstood when people apply
Nash bargaining solutions: properly defining the threat point matters!
Final RAND version (IDEAS). There has been substantial work since the 80s on the problem of bargaining, particularly in trying to construct models where delay is generated, since Rubinstein
guarantees agreement immediately and real-world bargaining rarely ends in one step; unsurprisingly, these newer papers tend to rely on difficult manipulation of theorems using asymmetric information. | {"url":"http://afinetheorem.wordpress.com/category/game-theory/","timestamp":"2014-04-21T07:58:34Z","content_type":null,"content_length":"123368","record_id":"<urn:uuid:44bd76df-c34a-4e75-88c8-12786b1a9533>","cc-path":"CC-MAIN-2014-15/segments/1397609539665.16/warc/CC-MAIN-20140416005219-00481-ip-10-147-4-33.ec2.internal.warc.gz"} |
Got Homework?
Connect with other students for help. It's a free community.
• across
MIT Grad Student
Online now
• laura*
Helped 1,000 students
Online now
• Hero
College Math Guru
Online now
Here's the question you clicked on:
Simplify the radical expression.
• one year ago
• one year ago
Your question is ready. Sign up for free to start getting answers.
is replying to Can someone tell me what button the professor is hitting...
• Teamwork 19 Teammate
• Problem Solving 19 Hero
• Engagement 19 Mad Hatter
• You have blocked this person.
• ✔ You're a fan Checking fan status...
Thanks for being so helpful in mathematics. If you are getting quality help, make sure you spread the word about OpenStudy.
This is the testimonial you wrote.
You haven't written a testimonial for Owlfred. | {"url":"http://openstudy.com/updates/50a9a9f9e4b06b5e4932d88d","timestamp":"2014-04-20T13:49:52Z","content_type":null,"content_length":"95456","record_id":"<urn:uuid:bb91ca86-2606-4c34-a5bd-2ea866415712>","cc-path":"CC-MAIN-2014-15/segments/1397609538787.31/warc/CC-MAIN-20140416005218-00094-ip-10-147-4-33.ec2.internal.warc.gz"} |
Bremen, GA ACT Tutor
Find a Bremen, GA ACT Tutor
...I have taught advanced students and challenged students with equal, excellent success. I have taught Advanced Placement Literature for the past eight consecutive years while also teaching
remediation classes for reluctant readers, students who fail the Georgia High School Writing and Graduation ...
15 Subjects: including ACT Math, English, Spanish, grammar
...I have taught in the RESA psychiatric/special needs program in Georgia for 8 years. These students were all diagnosed with one of the following: Asperger's, autism, bi-polar, split
personality, ADD, ADHD, as well as other unique disorders. My students ranged from 4th through 12th grade.
47 Subjects: including ACT Math, chemistry, English, physics
I am attending Jacksonville State University to complete my degree in secondary math education. I have tutored many students in algebra who are high school students and students in college. I
enjoy teaching math and helping others learn math.
9 Subjects: including ACT Math, calculus, geometry, algebra 1
...Mathematics is not always easy to grasp. Effort is always required on the part of the student and the teacher for this guarantee to become a reality. I taught a unit of logic every year for 14
years that I taught geometry.
12 Subjects: including ACT Math, calculus, geometry, ASVAB
I have 33 years of Mathematics teaching experience. During my career, I tutored any students in the school who wanted or needed help with their math class. I usually tutored before and after
school, but I've even tutored during my lunch break and planning times when transportation was an issue.
13 Subjects: including ACT Math, calculus, algebra 1, algebra 2
| {"url":"http://www.purplemath.com/bremen_ga_act_tutors.php","timestamp":"2014-04-19T23:23:26Z","content_type":null,"content_length":"23441","record_id":"<urn:uuid:a4f31c3f-736b-4def-8dbc-fbc7739bf0ca>","cc-path":"CC-MAIN-2014-15/segments/1398223201753.19/warc/CC-MAIN-20140423032001-00315-ip-10-147-4-33.ec2.internal.warc.gz"}
Browsing Mathematics publications (MU) by Title
Preserving the intellectual output & resources of the University of Missouri.
On the dimension of the Jacquet module of a certain induced representation [1]
On the ionization of a Keplerian binary system by periodic gravitational radiation [1]
On the Morgan-Shalen compactification of the SL(2,C) character varieties of surface groups [1]
On the norm of an idempotent Schur multiplier on the Schatten class [1]
On the Number of Sparse RSA Exponents [1]
On the Value Set of n! Modulo a Prime [1]
Orlicz-Lorentz Spaces [1]
Oscillators: resonances and excitations [1]
p-summing operators on injective tensor products of spaces [1]
Prime divisors of palindromes [1]
Relativistic motion of spinning particles in a gravitational field [1]
The resonance counting function for Schrödinger operators with generic potentials [1]
Roughly squarefree values of the Euler and Carmichael functions [1]
Short Kloosterman Sums for Polynomials over Finite Fields [1]
Significance of c /sqrt2 in relativistic physics [1]
Some conjectures about integral means of ∂f and ∂¯f [1]
Some Divisibility Properties of the Euler Function [1]
Some unusual identities for special values of the Riemann zeta function [1]
Some upper bounds on the number of resonances for manifolds with infinite cylindrical ends [1]
The spectrum of the kinematic dynamo operator for an ideally conducting fluid [1] | {"url":"https://mospace.umsystem.edu/xmlui/handle/10355/5221/browse?order=ASC&rpp=20&sort_by=1&etal=-1&offset=79&type=title","timestamp":"2014-04-16T04:17:45Z","content_type":null,"content_length":"26240","record_id":"<urn:uuid:1b8e72a8-6431-421c-b992-eeb613801eb1>","cc-path":"CC-MAIN-2014-15/segments/1397609521512.15/warc/CC-MAIN-20140416005201-00399-ip-10-147-4-33.ec2.internal.warc.gz"} |
Efficiently Load A Sparse Matrix in R
I'm having trouble efficiently loading data into a sparse matrix format in R.
Here is an (incomplete) example of my current strategy:
for(i in 1:5000)
Where x is usually around length 20. This is not efficient and eventually slows to a crawl. I know there is a better way but wasn't sure how. Suggestions?
r sparse-matrix
This is a good question. I have similar problems. – suncoolsu Mar 10 '12 at 22:54
2 Answers
You can populate the matrix all at once:
n <- 5000
m <- 1e5
k <- 20
idxOfCols <- sample(1:m, k)
x <- rnorm(k)
a2 <- sparseMatrix(
  i=rep(1:n, each=k),
  j=rep(idxOfCols, n),
  x=rep(x, n),
  dims=c(n, m))
# Compare
a1 <- Matrix(0,5000,100000,sparse=T)
for(i in 1:n) {
  a1[i,idxOfCols] <- x
}
sum(a1 - a2) # 0
You don't need to use a for-loop. You can just use standard matrix indexing with a two-column matrix:
a1[ cbind(i,idxOfCols) ] <- x
| {"url":"http://stackoverflow.com/questions/9650851/efficiently-load-a-sparse-matrix-in-r","timestamp":"2014-04-21T02:43:42Z","content_type":null,"content_length":"66656","record_id":"<urn:uuid:ccba06de-e262-473c-96bb-6af5d2bb01ce>","cc-path":"CC-MAIN-2014-15/segments/1397609539447.23/warc/CC-MAIN-20140416005219-00349-ip-10-147-4-33.ec2.internal.warc.gz"}
PERC 2013 Abstract Detail Page
Previous Page | New Search | Browse All
Abstract Title: Facilitating thinking and learning in physics classrooms
Abstract: Learning physics is challenging because there are only a few fundamental principles in physics that are condensed in compact mathematical forms. Learning physics requires unpacking these fundamental principles and understanding their applicability in a variety of contexts. Cognitive theory can be used to design instruction and facilitate thinking and learning in the physics classrooms. In this poster gallery and discussion session, we will showcase research-based strategies that can be effective in improving students' problem solving and meta-cognitive skills. These approaches include helping students use different representations of knowledge and helping them learn to categorize physics problems appropriately. Improved cognitive abilities can make learning physics a positive experience for students.
Abstract Type: Poster Symposium
Author/Organizer Information
Andrew Mason
University of Central Arkansas
Primary Contact: Department of Physics & Astronomy, University of Central Arkansas, 201 Donaghey Avenue, Lewis Science Center, Rm 171, Conway, AR 72035
Phone: 4126249045
Fax: 4126246381
and Chandralekha Singh
Symposium Specific Information
Discussant: Chandralekha Singh
Moderator: Chandralekha Singh
Presentation 1 Title: Problem Solving and Motivation – Getting our Students in Flow
Presentation 1 Authors: N. Sanjay Rebello, Kansas State University
Presentation 1 Abstract: Csíkszentmihályi proposed the psychological concept of flow as signifying a state of complete involvement and enjoyment in an activity. When learners are in flow they are motivated, engaged, and completely focused on the task at hand, resulting in effortful learning. In this poster we explore the connections between the concept of flow and our model of transfer of learning as applied to problem solving. Our model of transfer purports two cognitive mechanisms – horizontal and vertical – that learners use to construct knowledge. Further, it proposes that carefully designed sequences of horizontal and vertical learning which provide scaffolding within a learner's zone of proximal development can facilitate learners to navigate an optimal adaptability corridor and foster progress toward adaptive expertise as characterized by Bransford & Schwartz. By exploring the connections between flow and our model of transfer, we hope to gain insights into what can motivate learners to become better problem solvers.
Presentation 2 Title: Using categorization task to improve expertise in introductory physics
Presentation 2 Authors: Andrew Mason, University of Central Arkansas and Chandralekha Singh, University of Pittsburgh
Presentation 2 Abstract: The ability to categorize problems based upon underlying principles, rather than surface features or contexts, is considered one of several proxy predictors of expertise. Giving students a categorization task and then discussing experts' ways of categorizing problems can be used to help students develop expertise in physics and help them focus on deep features of problems. Inspired by the classic study of Chi, Feltovich, and Glaser [1], we revisited the categorization study in large introductory physics classes. Some problems in the categorization task posed to students included those available from the prior study by Chi et al. Our findings, which contrast with those of Chi et al., suggest that there is a much wider distribution of expertise in mechanics among introductory students than previously believed. Implications for pedagogical interventions will be discussed.
[1] M.T.H. Chi, P. J. Feltovich, and R. Glaser, Categorization and representation of physics knowledge by experts and novices. Cognitive Science, 5, 121-152 (1981).
Presentation 3 Title: Teaching problem categorization using computer-based feedback
Presentation 3 Authors: Jennifer Docktor, University of Wisconsin – La Crosse; Jose Mestre, University of Illinois at Urbana-Champaign; Brian Ross, University of Illinois at Urbana-Champaign
Presentation 3 Abstract: Categorization tasks are commonly used as a measure of problem solving expertise, but they might also be useful pedagogical tools for highlighting the concepts and principles needed to solve problems. In this study, introductory physics students viewed several pairs of problems on a computer screen and were asked to judge whether the problems would be solved similarly. We found that students who received elaborate principle-based feedback on their answer then increased their use of physics principles when explaining their choices, whereas students who did not receive detailed feedback continued to make decisions based on quantities and surface-level problem features. Additional study findings will be discussed and instructional implications will be proposed.
Presentation 4 Title: The role of representations in research-based instructional practice in physics
Presentation 4 Authors: David E. Meltzer, Arizona State University
Presentation 4 Abstract: Decades of physics education research and of research-based instructional practice have demonstrated convincingly the crucial role played by multiple representations in the learning of physics. Conceptual understanding is both reflected in and promoted by facility in the use of graphical, mathematical, diagrammatic, and verbal representations as well as in the ability to translate between and among different representations. Similarly, familiarity with topic-specific representations such as PV diagrams, free-body diagrams, motion graphs, and field-vector and potential-line diagrams is virtually indispensable for thorough understanding of particular concepts. I will review examples of research that illustrate some of the learning issues that arise with use of multiple representations, and will present examples of instructional strategies that have proved effective in guiding students to deeper understanding through use of representations in different contexts.
*Supported in part by NSF DUE #0817282 and DUE #1256333
Presentation 5 Title: Challenges in developing effective scaffolding supports to help introductory students learn physics
Presentation 5 Authors: Alex Maries and Chandralekha Singh, University of Pittsburgh
Presentation 5 Abstract: Helping students develop facility with problem representation is a major goal of many introductory physics courses. We discuss two studies related to representations in which we investigated strategies for improving students' performance on problem solving. In one study, we investigated students' difficulties in translating between mathematical and graphical representations and the effect of scaffolding on students' performance. Analysis of the student performance with different levels of scaffolding reveals that the appropriate level of scaffolding is not necessarily the one that involves lots of guidance and support from an expert's perspective and that the optimal level of support for a given student population can only be determined by research. In another study, we investigated whether students perform better when a diagram is provided with the problem or when they are explicitly asked to draw a diagram. We find that students who draw a good diagram perform better regardless of whether they use a diagrammatic approach to problem solving or mainly use a mathematical approach to problem solving. Instructional implications will be discussed. | {"url":"http://www.compadre.org/per/perc/2013/detail.cfm?id=5161","timestamp":"2014-04-17T18:39:46Z","content_type":null,"content_length":"21117","record_id":"<urn:uuid:2f0856da-2e71-4945-bdc5-44e7d9fbb68e>","cc-path":"CC-MAIN-2014-15/segments/1397609530895.48/warc/CC-MAIN-20140416005210-00267-ip-10-147-4-33.ec2.internal.warc.gz"}
Improving math performance
Peer tutoring of a profoundly deaf girl, age 13, by a hearing tutor, age 12, in mathematics
Prior to beginning tutoring, the hearing peer tutor was taught approximately 20 basic signs, such as plus and minus, to facilitate communication with the deaf tutee. Each tutoring session lasted 20
minutes and occurred prior to the beginning of the school day. All sessions began with the tutor presenting ten written math problems on worksheets and giving directions to the tutee on how to take
the test. These ten math problems were changed each day so that no single problem was tested more than once. The tutee was allowed 10 minutes to complete the test. The tutor was furnished with an
answer sheet to determine the number of correct responses. The time remaining in the session was used by the tutor to provide instruction for incorrect answers. If time allowed, the tutor also worked
with the tutee on the problems she had answered correctly or played a game with the tutee until each session ended. A sign language interpreter was in the room during each session in case the tutor
and tutee were unable to communicate. With the exception of showing the tutor a few math signs, such as divide, during the first few sessions, the interpreter did not participate in the tutoring sessions.
As a result of baseline testing, four objectives were identified for tutoring. When the tutee obtained a score of 70% correct or better for three consecutive days on an instructional objective, the
next instructional objective was introduced the next day. This changing criterion design was followed for 19 days until all objectives were mastered. The results of this study indicate that a deaf
child can be instructed successfully in mathematics by a hearing child using peer tutoring.
Burley, S., Gutkin, T., & Naumann, W. (1994). Assessing the efficacy of an academic hearing peer tutor for a profoundly deaf student. American Annals of the Deaf, 139, 4, 415-419.
Judith Powell, ETSU
Improving helping behavior and math achievement
20 general educators from grades two through four and their entire classes
This study investigated the effects of providing training and practice in helping behaviors to students during peer tutoring in mathematics. From each class, teachers identified one average-achieving student and one student with a learning disability. The 20 classrooms were randomly assigned to two treatments. First, peer-tutoring experience with additional training in how to help. Second,
peer-tutoring experience without training in how to help. The method of training included teacher examples, role plays by adults and students, and student practice with teacher feedback. The trained
tutors and tutees used the techniques of offering and asking for help as taught to them in the helping behavior training. The training for the classwide peer tutoring included examples of appropriate
behavior for the students to engage in while working with a partner. These included things like talk only to your partner and talk only about peer tutoring. The training also included instruction on
how to give corrective feedback and offer specific positive reinforcement for correct answers, as well as how to practice using an interactive, mediated verbal rehearsal routine.
The trained and untrained tutors' performance was compared. Also, the trained and untrained tutees' performance was compared for the variables "offers help" and "asks for help". The students that
received the helping training engaged in more of the directly trained helping behaviors than the untrained students. Students trained in basic peer-tutoring procedures can give their
partners explanations of a more conceptual nature. The study showed that children working with a trained tutor had more improved math ability than students working with untrained tutors.
Fuchs, L. & Bentz, J. (1996). Improving peers' helping behavior to students with learning disabilities during mathematics peer tutoring. Learning Disability Quarterly, 19, 202-215.
Gina Sandidge, ETSU
Improving math performance
Elementary grade students with moderate mental retardation
Students will be trained to make self-instruction statements during math problem solving activities. The statements pertain to general work habits (e.g., "Remember to work slowly and carefully" and
"Keep your eyes on your paper") and task-specific behaviors (e.g., "Which is the biggest number?" and "Write the biggest number and put marks next to it for the other number"). Training should be
conducted on an individual basis, in ten-minute sessions, five days per week. The instructor should state the self-instruction statements and have the student repeat them. Then the student will make
the self-instruction statements with prompting from the instructor. Finally, the prompts should be faded to the point at which the student can make the statements independently and consistently. The
task-specific statements should be accompanied by the physical behaviors that the statement describes. These statements should also pertain to the order of the steps necessary to complete the problem
(i.e., "First I..., then I..., etc."). The instructor should conduct daily practice trials after training is completed to make sure students are maintaining self-instruction skills. Appropriate
reinforcement should be given for improved performance.
Percent of daily worksheet problems completed correctly.
Albion, F. M., & Salzberg, C. L. (1982). The effect of self-instructions on the rate of correct addition problems with mentally retarded children. Education and Treatment of Children, 5(2), 121-131.
Allison Rice and Nicole Fraye, UVA
Correctly solving math problems which involve finding missing addends
Eight to eleven year old students with learning disabilities
The student was taught a self-instruction procedure by the teacher during two training sessions lasting approximately 30 minutes. First, the teacher modeled the use of self-instruction. The modeled
instructions were as follows: First, I have to point to the problem. Second, I have to read the problem. Third I have to circle the sign. So far I'm OK! Above the smaller number I put that many
sticks. Above the square I put some more sticks so that the number of sticks altogether equals the larger number. I count the number of sticks above the square. This is the answer. I write the number
in the square. I read (again) the problem statement. I tell myself,"Nice work!" A tape recorder was turned on during the demonstration of the self-instruction steps; then the teacher, aided by the
taped instructions, solved the remaining problems on the training sheet. The teacher, in collaboration with the student, wrote the self-instruction cues together in the student's own words. The
student then read the instructions into the tape recorder. The written instructions were removed and the student was asked to solve the problems on a training worksheet with the aid of the taped
instructions listened to through earphones. The student was praised for correct use of the self instruction procedure and the teacher provided modeling, and rehearsal following incorrect responses.
This process was repeated until the student correctly and independently applied the self-instruction procedure to one arithmetic problem. During the second training session, the teacher briefly
demonstrated two problems using the taped instructions. The student was asked to solve 20 problems on a worksheet, using the taped instructions and earphones. Again the student was praised for
correct use of the self-instruction procedure. Feedback, modeling, and rehearsal followed any incorrect responses. Within a short period of time the student can be expected to use the
self-instruction procedure without needing the taped cues.
Record the percentage of problems completed correctly.
Wood, D.A., Rosenberg, M.S., & Carran, D.T. (1993). The effects of tape-recorded self-instruction cues on the mathematics performance of students with learning disabilities. Journal of Learning
Disabilities, 26(4), 250-258,269.
Linda Glover,UVA
Improving students' mathematics performance on homework assignments
Middle school students (grades 6-8) with learning disabilities or emotional disturbance
Students were assigned to heterogeneous cooperative homework teams (CHT) of three or four members. At the end of each day's math lesson (except Fridays), an instructionally relevant homework
assignment was given to the homework teams. Each assignment consisted of eight computation and two story problems. Each member of the team was to complete the homework independently. The next day of
class, CHT groups met for about ten minutes. Students gave their assignments to the team's checker whose job it was to grade the homework for that day. The responsibility of being a checker rotated
to a different team member each day. On the day they were checkers, students were responsible for: grading teammates' homework quickly using a teacher-made answer sheet, reporting the grades to the teacher, and returning the papers to the team members for review and corrections. With individual papers in hand, team members were encouraged to help each other understand and correct mistakes.
Corrected papers were then collected and turned into the teacher by the checker. At the end of the week, team scores were determined by averaging each member's daily score and using these to
calculate a team mean. Awards in the form of certificates were presented to teams who met predetermined criteria.
Record the percentage correct on each individual's daily assignment, and record the weekly team mean.
O'Melia, M.C., & Rosenberg, M.S. (1994). Effects of cooperative homework teams on the acquisition of mathematics skills by secondary students with mild disabilities. Exceptional Children, 60(6),
Linda Glover, UVA | {"url":"http://special.edschool.virginia.edu/information/ints/perform.html","timestamp":"2014-04-18T05:30:38Z","content_type":null,"content_length":"11773","record_id":"<urn:uuid:2db608ef-b82e-46c5-a88a-5f08ab79e9c5>","cc-path":"CC-MAIN-2014-15/segments/1397609532573.41/warc/CC-MAIN-20140416005212-00403-ip-10-147-4-33.ec2.internal.warc.gz"} |
Size of pointer-to-pointer
Should I use vector-of-vectors instead of pointer-to-pointer, if my purpose is to do matrix arithmetic in C++?
"pointer-to-pointer" is something you'd use when learning C, and not because it makes a good matrix (it doesn't!), but because it teaches a beginner C programmer to juggle dynamic memory allocation
of two kinds of arrays with nested lifetimes.
You can do vector of vectors in C++, it has the same runtime characteristics, makes a whole lot more sense for a C++ program, but doesn't make a good matrix either - as with the "pointer to pointer"
approach, the individual rows of your matrix are allocated separately all over the RAM.
If you want to go one step closer to a real matrix, try building a Matrix class, which holds a 1D storage container internally -- std::valarray or std::vector -- and converts between (row, column)
notation and the 1D index in getters/setters. See C++ FAQ Lite for some useful suggestions:
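A minimal sketch of that suggestion (the class name, row-major layout, and `double` element type here are illustrative choices, not quoted from the thread): one contiguous `std::vector` as storage, with (row, column) mapped to a single 1D index inside the accessors.

```cpp
#include <cstddef>
#include <vector>

// Matrix backed by a single contiguous 1D container, row-major:
// element (r, c) lives at index r * cols + c.
class Matrix {
public:
    Matrix(std::size_t rows, std::size_t cols)
        : rows_(rows), cols_(cols), data_(rows * cols, 0.0) {}

    // (row, column) -> 1D index conversion in the getters/setters
    double& operator()(std::size_t r, std::size_t c) { return data_[r * cols_ + c]; }
    double operator()(std::size_t r, std::size_t c) const { return data_[r * cols_ + c]; }

    std::size_t rows() const { return rows_; }
    std::size_t cols() const { return cols_; }

private:
    std::size_t rows_, cols_;
    std::vector<double> data_;  // all elements in one allocation
};
```

Because every row lives in the same allocation, row traversals are cache-friendly and there is only one new/delete pair to manage, unlike the pointer-to-pointer layout where rows are scattered across the heap.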
If you want to do matrix arithmetic in C++, use a matrix library: boost.ublas, Eigen, etc - they use expression templates for matrix arithmetic, which aren't easy to write yourself.
Topic archived. No new replies allowed. | {"url":"http://www.cplusplus.com/forum/beginner/104139/","timestamp":"2014-04-19T10:01:34Z","content_type":null,"content_length":"16823","record_id":"<urn:uuid:81d4cacd-8416-4334-b861-ed625a8664d2>","cc-path":"CC-MAIN-2014-15/segments/1398223203235.2/warc/CC-MAIN-20140423032003-00131-ip-10-147-4-33.ec2.internal.warc.gz"} |
Solving A Nonlinear ODE
This section discusses these aspects of a nonlinear ODE problem:
You can run this example: "Solving a Nonlinear ODE with a Boundary Layer by Collocation".
Approximation Space
Seek an approximate solution by collocation from C^1 piecewise cubics with a suitable break sequence; for instance,
breaks = (0:4)/4;
Because cubics are of order 4, you have
k = 4;
Obtain the corresponding knot sequence as
knots = augknt(breaks,k,2);
This gives a quadruple knot at both 0 and 1, which is consistent with the fact that you have cubics, i.e., have order 4.
This implies that you have
n = length(knots)-k;
n = 10;
You collocate at two sites per polynomial piece, i.e., at eight sites altogether. This, together with the two side conditions, gives us 10 conditions, which matches the 10 degrees of freedom.
Choose the two Gaussian sites for each interval. For the standard interval [–0.5,0.5] of length 1, these are the two sites
gauss = .5773502692*[-1/2; 1/2];
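The constant in that line is 1/sqrt(3): these are the two-point Gauss–Legendre nodes, rescaled from the standard interval [-1,1] to the interval [-0.5,0.5] of length 1 (a standard quadrature fact, noted here for context).

```latex
% Two-point Gauss--Legendre nodes on [-1,1]:
x_{1,2} = \pm\frac{1}{\sqrt{3}} \approx \pm 0.5773502692
% Mapping [-1,1] onto [-1/2,1/2] scales the nodes by 1/2:
x_{1,2} = \pm\frac{1}{2\sqrt{3}} = 0.5773502692 \times \left(\pm\tfrac{1}{2}\right)
```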
From this, you obtain the whole collection of collocation sites by
ninterv = length(breaks)-1;
temp = ((breaks(2:ninterv+1)+breaks(1:ninterv))/2);
temp = temp([1 1],:) + gauss*diff(breaks);
colsites = temp(:).';
Numerical Problem
With this, the numerical problem you want to solve is to find y in S[4,knots] that satisfies the nonlinear system

Dy(0) = 0
epsilon*(D^2y)(x) + (y(x))^2 = 1,   x in colsites
y(1) = 0

If y is your current approximation to the solution, then the linear problem for the supposedly better solution z by Newton's method reads

Dz(0) = 0
w[0](x)z(x) + epsilon*(D^2z)(x) = b(x),   x in colsites
z(1) = 0

with w[0](x)=2y(x), b(x)=(y(x))^2+1. In fact, by choosing

w[1](0) = 1,   w[0](1) = 1,   w[2](x) = epsilon for x in colsites,   b(0) = b(1) = 0,

and choosing all other values of w[0], w[1], w[2], b not yet specified to be zero, you can give your system the uniform shape

w[0](x)z(x) + w[1](x)(Dz)(x) + w[2](x)(D^2z)(x) = b(x),   x in sites

where
sites = [0,colsites,1];
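The coefficients w[0](x)=2y(x) and b(x)=(y(x))^2+1 come from a standard first-order linearization of the nonlinear equation about the current approximation y, sketched here for completeness.

```latex
% Writing z for the new iterate and expanding the square about y:
z^2 \approx y^2 + 2y\,(z - y) = 2yz - y^2
% Substituting into \epsilon\,D^2 z + z^2 = 1 gives the linear equation for z:
\epsilon\,(D^2 z)(x) + 2y(x)\,z(x) = (y(x))^2 + 1
% i.e.\ w_0(x) = 2y(x) and b(x) = (y(x))^2 + 1.
```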
Linear System to Be Solved
Because z∊S[4,knots], convert this last system into a system for the B-spline coefficients of z. This requires the values, first, and second derivatives at every x∊sites and for all the relevant
B-splines. The command spcol was expressly written for this purpose.
Use spcol to supply the matrix
colmat = ...
From this, you get the collocation matrix by combining the row triple of colmat for x using the weights w[0](x),w[1](x),w[2](x) to get the row for x of the actual matrix. For this, you need a current
approximation y. Initially, you get it by interpolating some reasonable initial guess from your piecewise-polynomial space at the sites. Use the parabola x^2–1, which satisfies the end conditions as
the initial guess, and pick the matrix from the full matrix colmat. Here it is, in several cautious steps:
intmat = colmat([2 1+(1:(n-2))*3,1+(n-1)*3],:);
coefs = intmat\[0 colsites.*colsites-1 0].';
y = spmak(knots,coefs.');
Plot the initial guess, and turn hold on for subsequent plotting:
legend('Initial Guess (x^2-1)','location','NW');
axis([-0.01 1.01 -1.01 0.01]);
hold on
You can now complete the construction and solution of the linear system for the improved approximate solution z from your current guess y. In fact, with the initial guess y available, you now set up
an iteration, to be terminated when the change z–y is small enough. Choose a relatively mild ε = .1.
tolerance = 6.e-9;
epsilon = .1;
while 1
   vtau = fnval(y,colsites);
   weights = [0 1 0;
              [2*vtau.' zeros(n-2,1) repmat(epsilon,n-2,1)];
              1 0 0];
   colloc = zeros(n,n);
   for j=1:n
      colloc(j,:) = weights(j,:)*colmat(3*(j-1)+(1:3),:);
   end
   coefs = colloc\[0 vtau.*vtau+1 0].';
   z = spmak(knots,coefs.');
   maxdif = max(max(abs(z.coefs-y.coefs)));
   fprintf('maxdif = %g\n',maxdif)
   if (maxdif<tolerance), break, end
   % now reiterate
   y = z;
end
legend({'Initial Guess (x^2-1)' 'Iterates'},'location','NW');
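Written out, the inner loop forms row j of the collocation matrix as a weighted combination of the value, first-derivative, and second-derivative rows of colmat at site x_j (notation added here for clarity; B_i denotes the i-th B-spline in the basis).

```latex
% Row j of colloc combines the three colmat rows belonging to site x_j:
A_{j,i} = w_0(x_j)\,B_i(x_j) + w_1(x_j)\,(DB_i)(x_j) + w_2(x_j)\,(D^2 B_i)(x_j)
% Solving A\,c = b then yields the B-spline coefficients c of z.
```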
The resulting printout of the errors is:
maxdif = 0.206695
maxdif = 0.01207
maxdif = 3.95151e-005
maxdif = 4.43216e-010
If you now decrease ε, you create more of a boundary layer near the right endpoint, and this calls for a nonuniform mesh.
Use newknt to construct an appropriate finer mesh from the current approximation:
knots = newknt(z, ninterv+1); breaks = knt2brk(knots);
knots = augknt(breaks,4,2);
n = length(knots)-k;
From the new break sequence, you generate the new collocation site sequence:
ninterv = length(breaks)-1;
temp = ((breaks(2:ninterv+1)+breaks(1:ninterv))/2);
temp = temp([1 1], :) + gauss*diff(breaks);
colpnts = temp(:).';
sites = [0,colpnts,1];
Use spcol to supply the matrix
colmat = spcol(knots,k,sort([sites sites sites]));
and use your current approximate solution z as the initial guess:
intmat = colmat([2 1+(1:(n-2))*3,1+(n-1)*3],:);
y = spmak(knots,[0 fnval(z,colpnts) 0]/intmat.');
Thus set up, divide ε by 3 and repeat the earlier calculation, starting with the statements
while 1
Repeated passes through this process generate a sequence of solutions, for ε = 1/10, 1/30, 1/90, 1/270, 1/810. The resulting solutions, ever flatter at 0 and ever steeper at 1, are shown in the
example plot. The plot also shows the final break sequence, as a sequence of vertical bars. To view the plots, run the example "Solving a Nonlinear ODE with a Boundary Layer by Collocation".
In this example, at least, newknt has performed satisfactorily. | {"url":"http://www.mathworks.com/help/curvefit/solving-a-nonlinear-ode.html?nocookie=true","timestamp":"2014-04-23T17:36:45Z","content_type":null,"content_length":"50535","record_id":"<urn:uuid:a24aedd1-1ebd-478c-91ac-8d4632a00e14>","cc-path":"CC-MAIN-2014-15/segments/1398223203235.2/warc/CC-MAIN-20140423032003-00625-ip-10-147-4-33.ec2.internal.warc.gz"} |
Lévy Processes
Lévy Processes: Theory and Applications
Ole E Barndorff-Nielsen, Thomas Mikosch, Sidney I. Resnick
A Lévy process is a continuous-time analogue of a random walk, and as such, is at the cradle of modern theories of stochastic processes. Martingales, Markov processes, and diffusions are extensions
and generalizations of these processes. In the past, representatives of the Lévy class were considered most useful for applications to either Brownian motion or the Poisson process. Nowadays the need
for modeling jumps, bursts, extremes and other irregular behavior of phenomena in nature and society has led to a renaissance of the theory of general Lévy processes. Researchers and practitioners in
fields as diverse as physics, meteorology, statistics, insurance, and finance have rediscovered the simplicity of Lévy processes and their enormous flexibility in modeling tails, dependence and path
This volume, with an excellent introductory preface, describes the state of the art of this rapidly evolving subject with special emphasis on the non-Brownian world. Leading experts present surveys of recent developments, or focus on some of the most promising applications. Despite its special character, every topic is aimed at the non-specialist, keen on learning about the new exciting face of a rather aged class of processes. An extensive bibliography at the end of each article makes this an invaluable comprehensive reference text. For the researcher and graduate student, every article contains open problems and points out directions for future research.
The accessible nature of the work makes this an ideal introductory text for graduate seminars in applied probability, stochastic processes, physics, finance, and telecommunications, and a unique
guide to the world of Lévy processes.
Cryptography questions
November 9th 2009, 01:02 PM #1
Explain why not all encryptors have corresponding decryptors, providing at least two examples of encryptors which cannot be decrypted.
Of the $26^2 = 676$ possible encryptors, determine precisely which have decryptors and which don't. How many decryptable encryptors are there?
Explain a procedure for finding a decryptor, if one exists, for any given encryptor.
Help would be greatly appreciated
For these problems we are using linear congruences. A linear encryptor is a function of the form E(w)=Aw+B (mod N)
Last edited by MichaelG; November 9th 2009 at 04:33 PM. Reason: More Detail
From the encryption law...
$E(w)= A\cdot w + B \mod N$ (1)
... we derive the 'decryption law'...
$w = A^{-1}\cdot (E - B) \mod N$ (2)
The problem in this case is represented by the term $A^{-1}$, that is, the multiplicative inverse of the element $A \in W$. But $N=26$ is not prime, so not all $A \in W$ have a multiplicative inverse...
Kind regards
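A short sketch following the reply above (assuming the convention $W=\{0,\dots,25\}$ and $N=26$): $E(w)=Aw+B \pmod{26}$ is decryptable exactly when $\gcd(A,26)=1$, which enumeration confirms gives $12 \cdot 26 = 312$ decryptable encryptors out of 676.

```python
from math import gcd

N = 26

# E(w) = A*w + B (mod N) is decryptable iff A has a multiplicative
# inverse mod N, i.e. gcd(A, N) == 1.
invertible_A = [A for A in range(N) if gcd(A, N) == 1]
print(len(invertible_A))        # 12 valid choices of A
print(len(invertible_A) * N)    # 312 decryptable encryptors (any B works)

# Decryptor for a given decryptable encryptor: w = A^{-1} * (E - B) mod N
A, B = 5, 8                     # example encryptor; gcd(5, 26) == 1
A_inv = pow(A, -1, N)           # modular inverse (Python 3.8+)
assert all(A_inv * ((A * w + B) - B) % N == w for w in range(N))
```

The example pair `(A, B) = (5, 8)` is arbitrary; any `A` from `invertible_A` works the same way.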
Frequency Response
The frequency response of an LTI filter may be defined as the spectrum of the output signal divided by the spectrum of the input signal. In this section, we show that the frequency response of any LTI filter is given by its transfer function evaluated on the unit circle, which connects it to the sine-wave analysis of Chapter 1.
Beginning with Eq. (6.4), we have

$$Y(z) = H(z)X(z),$$

where $X(z)$ is the $z$ transform of the filter input signal and $Y(z)$ is the $z$ transform of the output signal.
A basic property of the $z$ transform is that, over the unit circle $z = e^{j\omega T}$, the $z$ transform reduces to the spectrum [84]. To show this, we set $z = e^{j\omega T}$ in the $z$ transform definition, Eq. (6.1), to obtain

$$X(e^{j\omega T}) = \sum_{n=-\infty}^{\infty} x(n) e^{-j\omega n T},$$
which may be recognized as the definition of the bilateral discrete-time Fourier transform (DTFT). When $x(n)$ is causal, this definition reduces to the usual (unilateral) DTFT definition:

$$X(e^{j\omega T}) = \sum_{n=0}^{\infty} x(n) e^{-j\omega n T}.$$
Applying this relation to $Y(z) = H(z)X(z)$ gives

$$Y(e^{j\omega T}) = H(e^{j\omega T})X(e^{j\omega T}).$$

Thus, the spectrum of the filter output is just the input spectrum times the spectrum of the impulse response.
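This relation between time-domain convolution and frequency-domain multiplication can be checked numerically. The sketch below (plain Python, with a hypothetical toy FIR filter `h` and input `x`) evaluates the DTFT of a finite sequence directly from its definition and verifies that the DTFT of the filter output equals the product of the DTFTs of the impulse response and the input:

```python
import cmath

def dtft(x, w):
    """Evaluate the DTFT of a finite causal sequence x at radian
    frequency w (w stands for omega*T, in radians per sample)."""
    return sum(xn * cmath.exp(-1j * w * n) for n, xn in enumerate(x))

def convolve(x, h):
    """Linear convolution: the output of the FIR filter h driven by x."""
    y = [0.0] * (len(x) + len(h) - 1)
    for n, xn in enumerate(x):
        for k, hk in enumerate(h):
            y[n + k] += xn * hk
    return y

h = [1.0, 0.5, 0.25]        # hypothetical impulse response (toy FIR filter)
x = [1.0, -1.0, 2.0, 0.5]   # hypothetical input signal

y = convolve(x, h)
# Y(e^{jwT}) = H(e^{jwT}) * X(e^{jwT}) at every frequency
for w in (0.0, 0.3, 1.0, 2.5):
    assert abs(dtft(y, w) - dtft(h, w) * dtft(x, w)) < 1e-12
```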
This immediately implies that the frequency response of the filter is its transfer function evaluated on the unit circle. We can express this mathematically by writing

$$H(e^{j\omega T}) = \frac{Y(e^{j\omega T})}{X(e^{j\omega T})}.$$

By Eq. (7.2), the frequency response specifies the gain and phase shift applied by the filter at each frequency. Since $\omega$ is real, $H(e^{j\omega T})$ is a complex-valued function of a real variable. The response at frequency $f$ Hz, for example, is $H(e^{j 2\pi f T})$, where $T$ is the sampling period in seconds. It might be more convenient to define new functions such as $H^\prime(\omega) \triangleq H(e^{j\omega T})$ and write simply $H^\prime(\omega)$ for the frequency response. Notice that defining the frequency response as a function of $e^{j\omega T}$ places the frequency axis on the unit circle in the complex $z$ plane. Since the frequency response is periodic in $\omega$ with period $2\pi/T$, equal to the sampling rate, we may restrict $\omega T$ to one period, e.g., $\omega T \in [-\pi, \pi)$.
We have seen that the spectrum is a particular slice through the transfer function. It is also possible to go the other way and generalize the spectrum (defined only over the unit circle) to the entire $z$ plane by analytic continuation (§D.2). Since analytic continuation is unique (for all filters encountered in practice), we get the same result going either direction.
Because every complex number can be expressed as a magnitude times a complex exponential of its angle, the frequency response may be decomposed as $H(e^{j\omega T}) = G(\omega)e^{j\Theta(\omega)}$, viz., into the amplitude response $G(\omega)$ and the phase response $\Theta(\omega)$.
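As an illustration (not from the text), the polar decomposition can be computed for a simple two-tap filter $h = [1, 1]$, whose transfer function is $H(z) = 1 + z^{-1}$. Evaluating on the unit circle gives the closed forms $G(\omega) = 2|\cos(\omega T/2)|$ and $\Theta(\omega) = -\omega T/2$ for $\omega T \in (-\pi, \pi)$:

```python
import cmath
import math

T = 1.0  # sampling period in seconds (assumed, for illustration)

def H(w):
    """Frequency response of the two-tap filter h = [1, 1]:
    evaluate H(z) = 1 + z^{-1} on the unit circle z = e^{jwT}."""
    return 1 + cmath.exp(-1j * w * T)

for w in (0.0, 0.5, 1.0, 2.0):   # frequencies with w*T in [0, pi)
    G = abs(H(w))                # amplitude response G(w)
    Theta = cmath.phase(H(w))    # phase response Theta(w)
    # Closed forms for this filter: G(w) = 2|cos(wT/2)|, Theta(w) = -wT/2
    assert abs(G - 2 * abs(math.cos(w * T / 2))) < 1e-12
    assert abs(Theta - (-w * T / 2)) < 1e-12
```

The phase factors out because $1 + e^{-j\omega T} = 2\cos(\omega T/2)\,e^{-j\omega T/2}$.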
About the Author: Julius Orion Smith III
Julius Smith's background is in electrical engineering (BS Rice 1975, PhD Stanford 1983). He is presently Professor of Music and Associate Professor (by courtesy) of Electrical Engineering at
Stanford's Center for Computer Research in Music and Acoustics (CCRMA)
, teaching courses and pursuing research related to signal processing applied to music and audio systems.
Cluster decomposition and variational counting
Suppose we want to count the number of independent sets in a graph below.
There are 9 independent sets.
Because the components are disjoint, we could simplify the task by counting independent sets in each connected component separately and multiplying the results.
The variational approach is one way of extending this decomposition idea to connected graphs.
Consider the problem of counting independent sets in the following graph.
There are 7 independent sets. Let $x_i=1$ indicate that node $i$ is occupied, and $P(x_1,x_2,x_3,x_4)$ be a uniform distribution over independent sets, meaning it is either 1/7 or 0 depending on whether $x_1,x_2,x_3,x_4$ forms an independent set.
The entropy of a uniform distribution over $n$ independent sets is $H=-\log(1/n)$, hence $n=\exp H$, so to find the number of independent sets just find the entropy of distribution $P$ and exponentiate it.
We can represent P as follows
Entropy of P similarly factorizes
$$H=H_a + H_b - H_c$$
Once you have $P_a$, $P_b$ and $P_c$ representing the local distributions of our factorization, you can forget the original graph and compute the number of independent sets from the entropies of these local distributions.
To find factorization of $P$ into $P_a,P_b$, and $P_c$ minimize the following objective
$$KL(\frac{P_a P_b}{P_c},P)$$
This is the KL divergence between our factorized representation and the original representation. Message passing schemes like cluster belief propagation can be derived as one approach to solving this minimization problem. In this case, cluster belief propagation takes 2 iterations to find the minimum of this objective. Note that the particular form of the distance is important. We could reverse the order and instead minimize $KL(P, P_a P_b/P_c)$, and while this makes sense from an approximation standpoint, minimizing the reversed form of the KL divergence is generally intractable.
This decomposition is sufficiently rich that we can model P exactly using proper choice of $P_a,P_b$ and $P_c$, so the minimum of our objective is 0, and our entropy decomposition is exact.
Now, using the decomposition above, the number of independent sets factorizes as
$$n=\exp H=\frac{\exp H_a \exp H_b}{\exp H_c} = \frac{\left(\frac{7}{2^{4/7}}\right)^2}{\frac{7}{2\ 2^{1/7}}} = \frac{4.75 \times 4.75}{3.17}=7$$
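This identity is easy to verify by brute force. The sketch below (nodes renumbered 0-3 for zero-based indexing, so cluster $a=\{1,2,3\}$ becomes $\{0,1,2\}$, cluster $b$ becomes $\{0,2,3\}$, and the separator $c$ becomes $\{0,2\}$) enumerates the independent sets of the 4-cycle and checks that $\exp(H_a+H_b-H_c)=7$:

```python
from itertools import product
from math import exp, log

# 4-cycle (nodes renumbered 0..3, edges 0-1, 1-2, 2-3, 3-0)
edges = [(0, 1), (1, 2), (2, 3), (3, 0)]

ind_sets = [x for x in product([0, 1], repeat=4)
            if all(not (x[i] and x[j]) for i, j in edges)]
n = len(ind_sets)  # 7 independent sets

def marginal_entropy(nodes):
    """Entropy of the uniform-over-independent-sets distribution,
    marginalized onto the given node subset."""
    counts = {}
    for x in ind_sets:
        key = tuple(x[i] for i in nodes)
        counts[key] = counts.get(key, 0) + 1
    return -sum(c / n * log(c / n) for c in counts.values())

H_a = marginal_entropy([0, 1, 2])  # cluster a
H_b = marginal_entropy([0, 2, 3])  # cluster b
H_c = marginal_entropy([0, 2])     # separator c

assert n == 7
assert abs(exp(H_a) - 7 / 2 ** (4 / 7)) < 1e-9   # matches the formula above
assert abs(exp(H_a + H_b - H_c) - n) < 1e-9      # exact decomposition
```

The decomposition is exact here because, given $x_1$ and $x_3$, nodes 2 and 4 are conditionally independent in the 4-cycle.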
Our decomposition can be schematically represented as the following cluster graph
We have two regions, and we can think of our decomposition as counting the number of independent sets in each region, then dividing by the number of independent sets in the vertex set shared by each pair of connected regions. Note that region A gets "4.75 independent sets", so this is not as intuitive as the decomposition into connected components.
Here's another example of graph and its decomposition.
Using 123 to refer to $P(x_1,x_2,x_3)$ the formula representing this decomposition is as follows
$$\frac{124\times 236\times 2456 \times 4568 \times 478\times 489}{24\times 26 \times 456 \times 48 \times 68}$$
There are 63 independent sets in this graph, and using decomposition above the count decomposes as follows:
$$63=\frac{\left(\frac{21\ 3^{5/7}}{2\ 2^{19/21} 5^{25/63}}\right)^2 \left(\frac{3\ 21^{1/3}}{2^{16/21} 5^{5/63}}\right)^4}{\frac{21\ 3^{3/7}}{2\ 2^{1/7} 5^{50/63}} \left(\frac{3\ 21^{1/3}}{2\ 2^{3/7} 5^{5/63}}\right)^4}$$
Finding an efficient decomposition that models the distribution exactly requires that the graph has small tree-width. This is not the case for many graphs, such as large square grids, but we can apply the same procedure to get a cheap, inexact decomposition.
Consider the following inexact decomposition of our 3x3 grid
The corresponding decomposition has 4 clusters and 4 separators, and the following factorization formula
$$\frac{1245\times 2356 \times 4578 \times 5689}{25\times 45 \times 56 \times 58}$$
We can use the minimization objective as before to find the parameters of this factorization. Since the original distribution cannot be represented exactly in this factorization, the result will be approximate, unlike the previous two examples.
Using message-passing to solve the variational problem takes about 20 iterations to converge to an estimate of 46.97 independent sets.
To try various decompositions yourself, unzip the archive and look at usage examples in indBethe3-test.nb
12 comments:
interesting post and nice examples!
Can you explain how you got $\exp H_a =\frac{7}{2^{4/7}}$ ?
To my understanding, the reason your first decomposition took only two iterations to reach the optimal solution, is because your cluster graph was actually a tree. Hence BP works like
forward-backward and reaches the correct solution. Your other cluster graph is not a tree, and hence BP is just an approximation.
H_a is the entropy of the distribution over nodes 1,2,3. In other words it's the entropy with one node of the cycle marginalized out. Factor of 7 comes out because there are 7 outcomes before
marginalization. Writing out the entropy explicitly you get
$$H_a = -\left(\frac{2}{7} \log \left(\frac{2}{7}\right)+\frac{1}{7} \log \left(\frac{1}{7}\right)+\frac{2}{7} \log \left(\frac{2}{7}\right)+\frac{1}{7} \log \left(\frac{1}{7}\right)+\frac{1}{7} \log \left(\frac{1}{7}\right)\right)$$
To the other point -- yes -- for exact results, the cluster graph must be a junction tree. This also seems to also be a necessary condition for exactness if we allow more general belief
propagation schemes -- http://stats.stackexchange.com/questions/4564/when-is-generalized-belief-propagation-exact
Thanks! I thought there was a trick in calculating this entropy without explicitly enumerating the outcomes.
From this example it seems that computing the entropy of a marginal is as hard as computing the entropy of the joint distribution (or counting independent sets). If so, what's the point of such a decomposition?
Roman -- to compute cluster entropy, you sum over all outcomes in a cluster, which scales exponentially with the size of the cluster. Brute force scales exponentially with the size of the whole graph.
Yes, but for each cluster outcome one still needs to compute its probability w.r.t. the whole graph (either 1/7 or 2/7 in the example). Is there an efficient way to do this?
Indeed there is. You get marginals by solving the KL-divergence optimization problem I described above. That objective can be evaluated in time that's only linear in the size of the graph (exponential in the size of a cluster), and you can find a local minimum efficiently. One approach gives Cluster BP, derivation on pages 388-389 of Koller's book. When cluster graphs are trees like the ones above, the objective is convex and cluster BP update steps are identical to the junction tree algorithm, so you get the exact result.
I see now. Thank you!
This is nice blog. | {"url":"http://yaroslavvb.blogspot.com/2011/02/cluster-decomposition-and-variational.html?showComment=1298540686524","timestamp":"2014-04-19T11:56:14Z","content_type":null,"content_length":"94489","record_id":"<urn:uuid:a698ed5d-0b82-402c-bba3-ba5aaa7ae598>","cc-path":"CC-MAIN-2014-15/segments/1397609537186.46/warc/CC-MAIN-20140416005217-00032-ip-10-147-4-33.ec2.internal.warc.gz"} |