How is the percentage decay of uranium found with the equation below?
October 27th 2012, 05:57 PM
How is the percentage decay of uranium found with the equation below?
How would you go about solving this? (it's question 13 in the image)
The decay of uranium is modelled by $D = D_0 \cdot 2^{-kt}$. If it takes 6 years for the mass of uranium to halve, find the percentage remaining after
a) 2 years
b) 5 years
c) 10 years
The answers are a) 79%, b) 56% and c) 31%
October 27th 2012, 06:16 PM
Re: How is the percentage decay of uranium found with the equation below?
We are told the half-life is 6 years, so $2^{-6k}=\frac{1}{2}$, and taking the natural logarithm of both sides:
$\ln\left(\frac{1}{2} \right)=-6k\ln(2)$
Hence $k=\frac{1}{6}$. The percentage $P$ remaining after time $t$ is then $P(t)=100\cdot2^{-\frac{t}{6}}$, giving:
a) $P(2)=100\cdot2^{-\frac{2}{6}}=100\cdot2^{-\frac{1}{3}}\approx79$
b) $P(5)=100\cdot2^{-\frac{5}{6}}\approx56$
c) $P(10)=100\cdot2^{-\frac{10}{6}}=100\cdot2^{-\frac{5}{3}}\approx31$
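A quick numerical check of these values, sketched in Python (the helper name percent_remaining is an illustrative choice, not part of the original working):

```python
# Numerical check of the three answers, using P(t) = 100 * 2**(-t/6)
# with k = 1/6 found from the 6-year half-life.
def percent_remaining(t, half_life=6):
    return 100 * 2 ** (-t / half_life)

for t in (2, 5, 10):
    print(t, round(percent_remaining(t)))
# prints: 2 79, 5 56, 10 31 -- matching the answers a), b), c)
```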
October 29th 2012, 12:31 AM
Re: How is the percentage decay of uranium found with the equation below?
Also, how did you get the unique text and layout in your equations? I'd like to be able to do that myself.
October 29th 2012, 12:38 AM
Re: How is the percentage decay of uranium found with the equation below?
It is done with $\LaTeX$.
Do a search here and online and you will find plenty of information and tutorials on its usage.
Karen L. Collins
"Truth for authority, not authority for truth."
-- Lucretia Mott
Discrete Mathematics Day at Wesleyan University, October 5, 2013
Discrete Mathematics Day at Wesleyan University, Feb. 26, 2005,
Haiku. A poem in honor of Joan Hutchinson's 60th birthday, and in memory of Lucia Krompart, presented at Graph Theory with Altitude, May 18, 2005 in Denver, CO. Some pictures from the conference.
Preprint, with Ann Trenk, on the distinguishing chromatic number.
The poem Pi, by Wislawa Szymborska, read by Frank Lepkowski in memory of Lucia Krompart, at her memorial service, March 2002.
• I am a Professor of Mathematics at Wesleyan University.
• You can reach me by e-mail, at kcollins (at) wesleyan.edu,
• or at kcollins (at) member.ams.org,
• by telephone at (860) 685-2169,
• by fax at (860) 685-2571,
• or by mail at: Department of Mathematics and Computer Science, Wesleyan University, Middletown CT 06459-0128.
• I am married to Mark Hovey, seen in the picture far below, who is a Professor of Mathematics at Wesleyan.
• I received my B. A. from Smith College in 1981,
• and my Ph.D. from the Mathematics Department of MIT in 1986.
• My specialty is Combinatorics and Graph Theory.
• Here is my publications list.
• I am on the steering committee for the combinatorics conference series, Discrete Mathematics Days in the Northeast.
• I was on the steering committee for the combinatorics conference series, Discrete Mathematics in New England, 2002-2004. The CoNE meetings ran from April, 1992 until May, 2001.
• Workshop in Teaching Combinatorics by Guided Discovery, co-leader with Ken Bogart, August 11-15, 2003 at Dartmouth College.
• Graph Coloring and Symmetry, an AMS Summer Research Conference, Sunday, July 21--Thursday, July 25, 2002, at Mt. Holyoke College, co-organizers Danny Krizanc (Wesleyan), and Alex Russell (UConn).
• Judge for the Siemens Westinghouse Science and Technology Competition, December 2001, a national competition for high school seniors.
• Benjamin Shemmer, my latest undergraduate thesis student and co-author, gave a poster in the undergraduate poster session sponsored by the MAA in New Orleans in January 2001, and achieved his B. A. in May 2001. He will be a graduate student in the mathematics program at Emory this fall.
• Zhongyuan Che graduated with a Ph. D. in mathematics (with me) and a Master's degree in Computer Science (with Mike Rice) in May 2003, and now is an assistant professor of mathematics at Penn State Beaver. She gave a talk at the national AMS meetings in Baltimore in January 2003, and a talk at the Horizons in Combinatorics meeting, held at Vanderbilt University, May 21-24, 2001.
• Kimber Tysdal graduated with her Ph. D. in mathematics (with me) in May 2001 and now is an assistant professor at Hood College. She gave a poster presentation in the AWM Workshop at the Joint Meetings in New Orleans in January, 2001.
Fall 2006
Spring 2007
• Graph Homomorphisms and Graph Cores, Retrospective in Combinatorics: Honoring Stanley's 60th Birthday, June 22-26, 2004.
• Graph Cores and Kneser Graphs, seminar talk at the Université du Québec à Montréal, May 2003.
• Constructions of 3-Colorable Cores, in the minisymposium Graph Homomorphisms at the SIAM Discrete Math meeting, August 11-14, 2002.
• Constructions of 3-chromatic Graph Cores, at the Joint Math Meetings at San Diego, January 2002, in the special session, The Many Lives of Lattice Theory and the Theory of Ordered Sets, with
Connections to Combinatorics, organized by Jonathan Farley and Stefan Schmidt.
• Graph Homomorphisms and Coloring was the very last talk of the very first Conference in Celebration of Smith College Alumnae Mathematicians, April 21-22, 2001 at Smith College. A good time was had by all.
At the beach in Narragansett, RI, 2004
At the Isaac Newton Institute in Cambridge, England, August--December 2002:
The four of us at Block Island, June 2002:
For Grace's and Patrick's art work at the Blue Circle Studio, click here.
Magma Coercion Syntax
Any Magma experts who know what I should write below in place of "Roots(ysoln)"? I basically want the roots of a FldFunRatMElt (variable here happens to show as "$.2"). I would guess there's some
simple coercion needed, but I've had no luck finding examples for this on the internet. I really just need to know the rational roots. Thanks!
CU:=Curve(PP, (z + x - y)*(z - x + y)*(-z + x + y)*(z + x + y));
ysoln; Type(ysoln);
// Other things really happen above, but denominator becomes 1
ysoln; Type(ysoln);
// I would like to just do this
Roots(ysoln);
// but Magma refuses...
// I know the Roots function works if set-up properly:
Usually you want to Evaluate to reduce the ring dimension. Here it seems you also need to move from a function field to a polynomial ring with Numerator:
R<t>:=PolynomialRing(Rationals());
Roots(Numerator(Evaluate(ysoln,[1,t,1])));
[ <-2, 1>, <0, 2>, <2, 1> ]
– Junkie Feb 1 '12 at 20:43
Those two lines worked! Thanks! I had tried something similar, but didn't think to use the Numerator function. Magma is a bit clunky, but hopefully improving... – bobuhito Feb 3 '12 at 7:35
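For readers without Magma access, the same computation translates to an open-source system. Here is a SymPy sketch (the variable names and the choice of SymPy are illustrative assumptions, not the original code): the quartic from the curve is evaluated at x = 1, z = 1, and its rational roots are read off with multiplicities.

```python
from sympy import symbols, roots, Poly

x, y, z = symbols('x y z')
quartic = (z + x - y) * (z - x + y) * (-z + x + y) * (z + x + y)

# Evaluate at x = 1, z = 1 (the analogue of Evaluate(ysoln, [1, t, 1]))
# and read off the rational roots with multiplicities.
p = Poly(quartic.subs({x: 1, z: 1}), y)
print(roots(p))  # root -> multiplicity: 0 (twice), 2, and -2,
                 # matching Magma's [ <-2, 1>, <0, 2>, <2, 1> ]
```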
Rounding Numbers
Rounding numbers are discussed here.
Real life examples on rounding numbers:
(i) Ken’s new neighbor asked him his age. Ken said that he was fourteen years old. His actual age was fourteen years two months and seven days.
(ii) Shelly told Keri that she weighs about 50 kg. Shelly’s exact weight was 52 kg.
(iii) Victor paid $45 for a pair of sports shoes. He told Ron, this pair of shoes cost me nearly $50.
In all these statements, the numbers have been rounded. A rounded number is easy to remember and is a convenient figure for calculation.
While rounding numbers, the number 10 is very useful. We use it to think about place value. The number 5 is also important.
5 is half of 10. 5 is halfway between 0 and 10.
These number lines show the halfway point between two numbers.
Label the number line to show the halfway point between each pair of numbers.
Look at the folded number line.
Each peak shows a number ending in 5. These numbers are halfway between the two tens.
The tens are the base of each fold in the valleys.
The halfway number between 10 and 20 is 15.
The halfway number between 20 and 30 is 25.
If we place a counter on number 17, it would slide to 20 because 20 is the ten that is nearest to 17.
A counter placed on 27 will slide to 30.
A counter placed on 36 will slide to 40.
A counter placed on 14 will slide to 10.
A counter placed on 33 will slide to 30.
Halfway number such as 5 and 15 are rounded to the higher ten.
25 would be rounded to 30.
35 would be rounded to 40.
Numbers can be rounded to the nearest hundred, thousand, ten thousand and so on.
The halfway numbers on all the number lines contain a 5. The place of the digit 5 varies.
The rules for rounding numbers are the same for rounding to the nearest ten, hundred, thousand ……
If the given number is less than the halfway number, then round down. The rounded number will be less than the given number.
How do you round numbers?
Here we will discuss how to round numbers.
(i) Round 435 to the nearest hundred.
435 is less than the halfway number 450, so it is rounded down.
The rounded number is 400, which is less than the original number 435. When rounding, we replaced the digits in the ones and tens places by zeros. The digit in the hundreds place remains unchanged.
If the given number is halfway or greater than the halfway number, then it is rounded up.
The rounded number will be greater than the given number.
(ii) Round 675 to the nearest hundred.
675 is greater than the halfway number, so it is rounded up to 700. The rounded number is greater than the original number 675. When rounding, we replaced the digits in the ones and tens places by
zeros and increased the digit in the hundreds place by 1.
A 3 digit number can be rounded to the nearest TEN or to the nearest HUNDRED.
(iii) Round 446 to the nearest ten.
Color the digit in the tens place 446
Look at the digit to the right of the colored digit. If it is greater than or equal to 5, then round up. If it is less than 5, then round down.
6 > 5
446 will be rounded up to 450.
(iv) Round 726 to the nearest hundred.
Color the digit in the hundreds place 726
Look at the digit to the right. If it is greater than or equal to 5, then round up. If it is less than 5, then round down.
2 < 5
726 will be rounded down to 700.
The above method is convenient and helps us to remember very large numbers easily. It also simplifies calculation of large numbers.
We will learn more about how a large number may be rounded off to the nearest 10, 100, 1000, 10000, 100000, etc.
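The rule described above ("5 or more rounds up, less than 5 rounds down") can be written as a short function. This is an illustrative sketch, not part of the lesson; note that Python's built-in round() breaks ties differently (round-half-to-even), so the lesson's round-half-up rule is written out directly.

```python
def round_to(n, place):
    """Round n to the nearest `place` (10, 100, 1000, ...),
    with halfway values rounding up, as in the lesson."""
    return ((n + place // 2) // place) * place

print(round_to(446, 10))    # 450  (6 >= 5, round up)
print(round_to(726, 100))   # 700  (2 < 5, round down)
print(round_to(435, 100))   # 400
print(round_to(675, 100))   # 700
print(round_to(25, 10))     # 30   (halfway rounds up)
```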
Katherine Levinson
Curriculum Design
Grade 5 Math
Part of a collaborative oceanography unit-curriculum design
I. Commencement content standard
• Standard 2: Students will access, generate, process, and transfer information using appropriate technologies.
• Standard 3: Students will understand mathematics and become mathematically confident by communicating and reasoning mathematically, by applying mathematics in real world settings, and by solving
problems through the integrated study of number systems, geometry, algebra, data analysis, probability, and trigonometry.
• Standard 5: Students will apply technological knowledge and skills to design, construct, use, and evaluate products and systems to satisfy human and environmental needs.
• Standard 6: Students will understand the relationships and common themes that connect mathematics, science, and technology and apply the themes to these and other areas of learning.
II. Benchmark Standards: Intermediate
Content Standard:
• Standard 2 Information technology is used to retrieve, process, and communicate information and as a tool to enhance learning.
• Standard 3 Students use measurement in both metric and English measure to provide a major link between the abstractions of mathematics and the real world in order to describe and compare objects
and data.
• Standard 5 Engineering design is an iterative process involving modeling and optimization used to develop technological solutions to problems within given constraints.
• Standard 5 Computers, as tools for design, modeling, information processing, communication, and system control, have greatly increased human productivity and knowledge.
• Standard 6 Identifying patterns of change is necessary for making predictions about future behavior and conditions.
Performance Standards:
Standard 2
• Use a range of equipment and software to integrate several forms of information in order to create good quality audio, video, graphic, and text-based presentations
Standard 3
• Use spreadsheets and database software to collect, process, display, and analyze information. Students access needed information from electronic data bases and on-line telecommunication services
Standard 5
• Explore and produce graphic representations of data using calculators/computers
Standard 5
• use a computer system to connect to and access needed information from various Internet sites
Standard 6
• observe patterns of change in trends or cycles and make predictions on what might happen in the future
III. Content outcome
Math lessons will be done in conjunction with units on Oceanography (Fenninger & Tortora), Social Studies/Language Arts (Cafaro), and Technology and Research (O'Brien)
Student will explore changes in scale and proportional relationships.
Students will know how to:
• collect and represent data electronically
• design and interpret graphs
• work cooperatively
IV. Performance measures:
• Students will produce original graphs after accessing sites on the Internet to obtain data.
• Students will take a test correctly plotting teacher-generated data. They will do this on the intermediate level or higher.
Rubric for original graph:
Can collect data, design questions and use information to create graphs that accurately represent collected information
Can, with minimal help, collect data, design questions, and use information to create graphs that accurately represent collected information
Needs help with some parts of the assignment but can use data to create graphs that accurately represent collected information
Cannot collect data, design questions, or use information to create graphs
Enabling Activities
Activity #1 Graphing
A. Grouping - pairs
B. Time - two to three 40-minute class and computer periods.
C. Materials - pencil, graph paper, text or lists of data
D. Directions - Classroom: Introduce graphs using text. Explain rubric that will be used to assess graphs.
Practice: Children can vote on a favorite (pet, flavor of ice cream, color etc.)
Note: Limit choices to 4 or 5 pets, flavors, etc.
• Using the overhead or board, use class data to demonstrate how to fill in a tally sheet and a frequency table. Have children copy on their paper.
• To set up graph paper, children must understand what an appropriate numerical scale with equal intervals looks like. For example, if favorite pets are dog 11, cat 7, bird 5, fish 3, an appropriate scale would be 0, 1, 2, 3, 4, ...
• An appropriate scale for 33, 21, 12, 7, 14 would be 0, 5, 10, 15, ...
• Have class complete graph.
Technology Lab: With help of LMS (Library Media Specialist) students will examine graphs and use data to accurately plot information on a bar graph or a line graph. Children will use data from class
assignment on favorite pet etc.
E. Questions
1. How can data be collected? (surveys, almanacs)
2. For what information are graphs most useful?
3. How are bar and line graphs set up?
4. What can be learned from this graph?
5. Why is one graph better for representing data than another?
F. Rubric Key: 1= Novice 2= Apprentice 3=Worker 4= Expert
1 = student can explain use of graphs and data collection but is unable to use data to create graph
2 = student can explain use of graphs and data collection and is able with help to set up x and y axes Student is unable to complete task.
3 = student can explain use of graphs and data collection and is able with a minimal amount of help to construct graph.
4= student can explain use of graphs and data collection and is able to construct graph.
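As an illustration of what the finished tally-to-graph step produces (a plain-text sketch, not part of the lesson plan; the pet counts are the sample data from the practice vote above):

```python
# Turn a class tally into a simple text bar graph, using the
# favorite-pet sample data from the practice vote.
data = {"dog": 11, "cat": 7, "bird": 5, "fish": 3}

def bar_graph(counts):
    width = max(len(name) for name in counts)
    return "\n".join(
        f"{name:<{width}} | {'*' * n} ({n})" for name, n in counts.items()
    )

print(bar_graph(data))
```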
Activity #2 - Use Internet site
http://www.idis.com/teachweb/cohasset/infocard2.htm to access information on gray and Sei Whales.
A. Grouping - pairs
B. Time - one 40 minute computer period and two 40 minute class periods
C. Materials - computer with Internet access, pencils, paper
D. Directions - If necessary, teacher or LMS (Library Media Specialist) will explain how to log on to the Internet and access information.
Students will use math notebooks to record data on whale lengths.
They will graph this information using spreadsheet software.
E. Questions
1. What title will you give your graph?
2. What information on the cards is relevant to your graph?
3. How will you collect data?
4. How will you design your graph?
5. How will you use the computer to produce a graph?
F. Rubric Key:
1 = Novice 2 = Apprentice 3 = Worker 4 = Expert
Student can use Internet to access data on whales, can choose an appropriate graph, and can accurately represent data on graph.
1 = Student is unable to use Internet to access information and is unable to create graph
2 = Student is able to use Internet to access information but is unable to create graph
3= Student is able to use internet to access information and is able with minimal amounts of help to create graph
4= Student is able to access information and create graph.
Activity #3 - Working cooperatively, students will accurately use a metric tape to measure partner's wrist. A class data sheet will be produced.
Using the data on board, students will graph their results using a bar graph.
A. Grouping - pairs
B. Time - 40 minute class period
C. Materials - pencils, paper, metric tape (or string and metric rulers)
D. Directions:
Divide class into pairs. Each student will measure partner's wrist.
Results will be recorded on the chalkboard. Children will record at their seats. They will order measurements from least to greatest.
Children will graph results.
E. Questions:
1. What is an appropriate numerical scale?
2. How can this data be displayed? Is there more than one way to construct a bar graph?
F. Rubric Key: 1-Novice 2-Apprentice 3 - Worker 4-Expert
Students can use class data to accurately create a bar graph
1 = An attempt was made to label axis and place bars accurately
2 = Graph was correctly titled and labeled but bars not placed accurately
3 = Graph was correctly titled; x- and y-axes were correctly labeled with variables and units but were not evenly spaced. Most points were plotted accurately.
4 = Graph was titled correctly; x- and y-axes were correctly labeled with variables and units and were evenly spaced; points were plotted accurately.
Activity #4 Use Internet site http://bev.net/education/SeaWorld/homepage.html
Click on Educational Resources then Ocean Olympians then Record Breakers
A. Grouping: none
B. Time-two computer periods
C. Materials- computer with Internet access, pencils, paper
D. Directions-
Choose one of the record breakers from the list provided.
Review graphs and choose the one best suited to present data.
Use a spreadsheet program to create graph.
E. Questions How can you best present the information on the record breaker, line graph or bar graph?
F. Rubric Key: 1-Novice 2-Apprentice 3-Worker 4-Expert
Students can access information , choose an appropriate graph, and accurately represent the information.
1= Student could not collect data or use information to create graph
2= Student could collect data but needed help in using data to create graph
3= Student could, with a minimal amount of help, collect data and use data to create graph
4= Student could collect data and use data to create graph.
Activity #5: Design and create a line or bar graph. Use almanacs, surveys, or Internet sites to gather data on households with cable TV, average television viewing time, leading US advertisers, or a
topic chosen by teacher or children.
A. Grouping: pairs
B. Time: two periods, one class, one computer
C. Materials: computer with internet access, pencils, paper, almanacs
D. Directions: Children should be given a minimal amount of direction.
Choose area of interest; collect data, design graph.
E. Questions: What topic will you chose? Why?
How can you best represent this information on a graph?
F. Rubric Key: 1 = Novice 2 = Apprentice 3 = Worker 4 = Expert
Students will produce original graphs after accessing sites on the Internet to obtain data. Students will take a test correctly plotting teacher-generated data. They will do this on the intermediate
level or higher.
1 = Student cannot collect data, design questions, or use information to create graphs
2 = Student needs help with some parts of the assignment but can use data to create graphs that accurately represent collected information.
3 = Student can with minimal help, collect data, design questions, and use information to create graphs that accurately represent collected information.
4 = Student can collect data, design questions and use information to create graphs that accurately represent collected information.
Hopf algebra
The notion of Hopf algebra is an abstraction of the properties of a group algebra,
where not only the associative algebra structure is remembered, but also the natural coalgebra structure, making it a bialgebra, as well as the algebraic structure induced by the inverse-operation in the group, called the antipode.
the group, called the antipode.
More intrinsically, a Hopf algebra structure on an associative algebra is precisely the structure such as to make its category of modules into a rigid monoidal category equipped with a fiber functor
– this is the statement of Tannaka duality for Hopf algebras.
Hopf algebras and their generalization to Hopf algebroids arise notably as groupoid convolution algebras. Another important source of Hopf algebras is combinatorics, see at combinatorial Hopf algebra.
There is a wide variety of variations of the notion of Hopf algebra, relaxing properties or adding structure. Examples are weak Hopf algebras, quasi-Hopf algebras, (quasi-)triangular Hopf algebras,
quantum groups, hopfish algebras etc. Most of these notions are systematized via Tannaka duality by the properties and structures on the corresponding categories of modules, see at Tannaka duality below.
A $k$-bialgebra $(A,m,\eta,\Delta,\epsilon)$ with multiplication $m$, comultiplication $\Delta$, unit $\eta: k\to A$ and counit $\epsilon:A\to k$ is called a Hopf algebra if there exists a $k$-linear
$S : A \to A$
called the antipode or coinverse such that
$m\circ(\mathrm{id}\otimes S)\circ \Delta = m\circ(S\otimes\mathrm{id})\circ\Delta = \eta\circ\epsilon$
(as a map $A\to A$).
The antipode is an antihomomorphism both of algebras and coalgebras (i.e. a homomorphism $S:A\to A^{cop}_{op}$).
Proof (algebra part)
In Sweedler notation, for any $g,h\in A$,
$S((h g)_{(1)}) (h g)_{(2)} = \epsilon(h g)$
$S((h g)_{(1)}) h_{(2)}g_{(2)} = \epsilon(h)\epsilon(g)$
$S((h g)_{(1)}) h_{(2)}g_{(2)} S g_{(3)} S h_{(3)} = \epsilon(h_{(1)})\epsilon(g_{(1)}) S g_{(2)} S h_{(2)}$
$S(h_{(1)}g_{(1)}) \epsilon(h_{(2)})\epsilon(g_{(2)}) = (S g) (S h)$
$S(h_{(1)}\epsilon(h_{(2)})g_{(1)}\epsilon(g_{(2)})) = (S g)(S h)$
Therefore $S(h g) = (S g) (S h)$.
For the coalgebra part, notice first that $\epsilon(h)1\otimes 1 = \tau\circ\Delta(\epsilon(h)1)=\tau\circ\Delta(S h_{(1)}h_{(2)})$. Expand this as
$(S h_{(1)}\otimes S h_{(2)})(h_{(4)}\otimes h_{(3)}) = ((S h_{(1)})_{(2)}\otimes (S h_{(1)})_{(1)})(h_{(4)}\otimes h_{(3)})$
$(S h_{(1)}\otimes S h_{(2)})(h_{(4)}\otimes h_{(3)}) (S h_{(5)}\otimes S h_{(6)}) = ((S h_{(1)})_{(2)}\otimes (S h_{(1)})_{(1)})(h_{(3)}\otimes h_{(2)})(S h_{(4)}\otimes S h_{(5)})$
$((S h_{(1)}\otimes S h_{(2)})\epsilon(h_{(3)}) = ((S h_{(1)})_{(2)}\otimes (S h_{(1)})_{(1)})(1\otimes\epsilon(h_{(2)}))$
$((S h_{(1)}\otimes S h_{(2)})\epsilon(h_{(3)}) = (\tau\circ\Delta)(S h_{(1)})(1\otimes\epsilon(h_{(2)})1) = (\tau\circ\Delta)(S h_{(1)}\epsilon(h_{(2)}))$
$(S h_{(1)}\otimes S h_{(2)})=(\tau\circ\Delta)(S h) = (S h)_{(2)}\otimes (S h)_{(1)}.$
The axiom that must be satisfied by the antipode looks like a $k$-linear version of the identity satisfied by the inverse map in a group bimonoid: taking a group element $g$, duplicating by the
diagonal map $\Delta$ to obtain $(g,g)$, taking the inverse of either component of this ordered pair, and then multiplying the two components, we obtain the identity element of our group.
Just as an algebra is a monoid in Vect and a bialgebra is a bimonoid in $Vect$, a Hopf algebra is a Hopf monoid in $Vect$.
Relation to Hopf groups
Note that the definition of Hopf algebra (or, really, of Hopf monoid) is self-dual: a Hopf monoid in a symmetric monoidal category $V$ is the same as a Hopf monoid in $V^{op}$ (i.e. a “Hopf
comonoid”). Thus we can view a Hopf algebra as “like a group” in two different ways, depending on whether the group multiplication corresponds to the multiplication or the comultiplication of the
Hopf algebra. The formal connections between Hopf monoids and group objects are:
1. A Hopf monoid in a cartesian monoidal category $V$ is the same as a group object in $V$. Such Hopf monoids are always cocommutative (that is, their underlying comonoid is cocommutative). This is
because every object of a cartesian monoidal category is a cocommutative comonoid object in a unique way, and every morphism is a comonoid homomorphism.
2. A commutative Hopf monoid in a symmetric monoidal category $V$ is the same as a group object in $CMon(V)^{op}$, where $CMon(V)$ is the category of commutative monoids in $V$. This works because
the tensor product of commutative algebras is the categorical coproduct, and hence the product in its opposite category. In particular, a commutative Hopf algebra is the same as a group object in
the category $Alg^{op}$ of affine schemes.
Corresponding to these two, an ordinary group $G$ gives us two different Hopf algebras (here $k$ is the ground ring):
1. The group algebra $k[G]$ (the free vector space on the set $G$), with multiplication given by the group operation of $G$ and comultiplication given by the diagonal $g\mapsto g\otimes g$. This
Hopf algebra is always cocommutative, and is commutative iff $G$ is abelian. It can be viewed as the result of applying the strong monoidal functor $k[-]:Set \to k Mod$ to the Hopf monoid $G$ in $Set$.
2. The function algebra $k(G)$ (the set of functions $G\to k$), with comultiplication given by precomposition with the group operation
$k(G) \to k(G\times G) \cong k(G)\otimes k(G),$
and multiplication given by pointwise multiplication in $k$. In this case we need some finiteness or algebraicity of $G$ in order to guarantee $k(G\times G) \cong k(G)\otimes k(G)$. This Hopf
algebra is always commutative, and is cocommutative iff $G$ is abelian.
Note that if $G$ is finite, then $k[G]\cong k(G)$ as $k$-modules, but the Hopf algebra structure is quite different.
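To make the antipode axiom concrete in these two examples, here is a small self-contained check for $G = \mathbb{Z}/3$ (an illustration only, not part of the article; the encoding of elements as coefficient dictionaries is an ad-hoc choice):

```python
# Check the antipode axiom  m (S tensor id) Delta = eta eps  for the two
# Hopf algebras attached to the cyclic group G = Z/3 (written additively,
# with identity element 0 and inverse g -> -g mod 3).
n = 3

# --- k[G]: an element is a dict {group element: coefficient} --------------
x = {1: 2, 2: 5}                                      # the element 2g + 5g^2
Dx = {(g, g): c for g, c in x.items()}                # Delta(g) = g tensor g
SDx = {((-g) % n, h): c for (g, h), c in Dx.items()}  # S(g) = g^{-1} on the left leg
m_SDx = {}
for (g, h), c in SDx.items():                         # m: g tensor h -> g.h
    m_SDx[(g + h) % n] = m_SDx.get((g + h) % n, 0) + c
assert m_SDx == {0: sum(x.values())}                  # equals eps(x) * 1

# --- k(G): an element is a function G -> k, stored as a dict --------------
f = {g: g * g + 1 for g in range(n)}                  # an arbitrary function
Df = {(g, h): f[(g + h) % n] for g in range(n) for h in range(n)}  # Delta f(g,h) = f(gh)
SDf = {(g, h): Df[((-g) % n, h)] for g in range(n) for h in range(n)}
m_SDf = {g: SDf[(g, g)] for g in range(n)}            # pointwise product = restrict to diagonal
assert m_SDf == {g: f[0] for g in range(n)}           # equals eps(f) * 1, the constant f(0)

print("antipode axiom verified in k[Z/3] and k(Z/3)")
```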
One can make a group into a Hopf algebra in at least $2$ very different ways. Both ways have a discrete version and a smooth version.
Given a (finite, discrete) group $G$ and a ground ring (field?) $K$, then the group ring $K[G]$ is a cocommutative Hopf algebra, with $M(g_0,g_1) = g_0 g_1$, $I = 1$, $D(g) = g \otimes g$, $E(g)
= 1$, and the nifty Hopf antipodal operator $S(g) = g^{-1}$. Notice that the coalgebra operations $D,E$ depend only on $Set|G|$.
Given a (finite, discrete) group $G$ and a ground ring (field?) $K$, then the function ring $Fun(G,K)$ is a commutative Hopf algebra, with $M(f_0,f_1)(g) = f_0(g)f_1(g)$, $I(g) = 1$, $D(f)(g,h) =
f(g h)$, $E(f) = f(1)$, and the nifty Hopf antipodal operator $S(f)(g) = f(g^{-1})$. Notice that the algebra operations $M,I$ depend only on $Set|G|$.
Given a (simply connected) Lie group $G$ and the complex (real?) field $K$, then the universal enveloping algebra $U(G)$ is a cocommutative Hopf algebra, with $M(\mathbf{g}_0,\mathbf{g}_1) = \mathbf{g}_0 \mathbf{g}_1$, $I = 1$, $D(\mathbf{g}) = \mathbf{g} \otimes 1 + 1 \otimes \mathbf{g}$, $E(\mathbf{g}) = 0$, and the nifty Hopf antipodal operator $S(\mathbf{g}) = -\mathbf{g}$. Notice that the coalgebra operations $D,E$ depend only on $K Vect|\mathfrak{g}|$.
Given a (compact) Lie group $G$ and the complex (real?) field $K$, then the algebraic function ring $Anal(G)$ is a commutative Hopf algebra, with $M(f_0,f_1)(g) = f_0(g) f_1(g)$, $I(g) = 1$, $D(f)(g,h) = f(g h)$, $E(f) = f(1)$, and the nifty Hopf antipodal operator $S(f)(g) = f(g^{-1})$. Notice that the algebra operations $M,I$ depend only on $Anal Man|G|$.
The theorem of Hopf modules
Hopf algebras can be characterized among bialgebras by the fundamental theorem on Hopf modules: the category of Hopf modules over a bialgebra is canonically equivalent to the category of vector
spaces over the ground ring iff the bialgebra is a Hopf algebra. This categorical fact enables a definition of Hopf monoids in some setups that do not allow a sensible definition of antipode.
Tannaka duality
The category of finite-dimensional modules over the underlying associative algebra of a Hopf algebra canonically inherits the structure of a rigid monoidal category such that the forgetful fiber
functor to vector spaces over the ground field is a strict monoidal functor.
The statement of Tannaka duality for Hopf algebras is that this property characterizes Hopf algebras. (See for instance (Bakke))
For generalization of this characterization to quasi-Hopf algebras and hopfish algebras see (Vercruysse).
Tannaka duality for categories of modules over monoids/associative algebras
2-Tannaka duality for module categories over monoidal categories
3-Tannaka duality for module 2-categories over monoidal 2-categories
As 3-vector spaces
A Hopf algebra structure on an associative algebra $A$ canonically defines on $A$ the structure of an algebra object internal to the 2-category of algebras, bimodules and bimodule homomorphisms.
By the discussion at n-vector space this allows to identify Hopf algebras with certain 3-vector spaces .
(For instance (FHLT, p. 27)).
More general 3-vector spaces are given by hopfish algebras and generally by sesquiunital sesquialgebras.
For a diagrammatic definition of a Hopf algebra, see the Wikipedia entry.
• Eiichi Abe, Hopf algebras, Cambridge UP 1980.
• Pierre Cartier, A primer on Hopf algebras, IHES 2006, 81p (pdf)
• V. G. Drinfel'd, Quantum groups, Proceedings of the International Congress of Mathematicians 1986, Vol. 1, 2 798–820, AMS 1987, djvu:1.3 M, pdf:2.5 M
• G. Hochschild, Introduction to algebraic group schemes, 1971
• S. Majid, Foundations of quantum group theory, Cambridge University Press 1995, 2000.
• John Milnor, John Moore, The structure of Hopf algebras, Annals of Math. 81 (1965), 211-264.
• Susan Montgomery, Hopf algebras and their action on rings, AMS 1994, 240p.
• B. Parshall, J.Wang, Quantum linear groups, Mem. Amer. Math. Soc. 89(1991), No. 439, vi+157 pp.
• M. Sweedler, Hopf algebras, Benjamin 1969.
• William C. Waterhouse, Introduction to affine group schemes, Graduate Texts in Mathematics 66, Springer 1979. xi+164 pp.
Tannaka duality for Hopf algebras and their generalization is alluded to in
• Joost Vercruysse, Hopf algebras—Variant notions and reconstruction theorems (arXiv:1202.3613)
and discussed in detail in
• Tørris Koløen Bakke, Hopf algebras and monoidal categories (2007) (pdf)
Discussion with an eye towards stable homotopy theory and the Steenrod algebra is in | {"url":"http://ncatlab.org/nlab/show/Hopf+algebra","timestamp":"2014-04-16T16:34:27Z","content_type":null,"content_length":"93744","record_id":"<urn:uuid:6dc8fdd8-ec27-43e6-8392-1917972c39c2>","cc-path":"CC-MAIN-2014-15/segments/1397609524259.30/warc/CC-MAIN-20140416005204-00057-ip-10-147-4-33.ec2.internal.warc.gz"} |
American Mathematical Society
Bulletin Notices
AMS Sectional Meeting Program by Day
Current as of Saturday, March 10, 2012 00:27:20
Program · Deadlines · Registration/Housing/Etc.
Inquiries: meet@ams.org
2012 Spring Western Section Meeting
University of Hawaii at Manoa, Honolulu, HI
March 3-4, 2012 (Saturday - Sunday)
Meeting #1078
Associate secretaries:
Michel L Lapidus, AMS lapidus@math.ucr.edu, lapidus@mathserv.ucr.edu
Sunday March 4, 2012
• Sunday March 4, 2012, 7:30 a.m.-12:00 p.m.
Book Sale and Exhibit
Keller Hall, Keller Hall
• Sunday March 4, 2012, 7:30 a.m.-12:00 p.m.
Meeting Registration
Keller Hall, Keller Hall
• Sunday March 4, 2012, 8:00 a.m.-10:50 a.m.
Special Session on Algebraic Combinatorics, I
Room 127, Pacific Ocean Science and Technology
Federico Ardila, San Francisco State University federico@math.sfsu.edu
Sara Billey, University of Washington billey@math.washington.edu
Kelli Talaska, University of California, Berkeley talaska@math.berkeley.edu
Lauren Williams, University of California, Berkeley williams@math.berkeley.edu
□ 8:00 a.m.
Analysis of casino shelf shuffling machines.
Persi Diaconis, Stanford University
Jason Fulman*, University of Southern California
Susan Holmes, Stanford University
□ 8:30 a.m.
Construction and enumeration of Franklin magic circles.
Rebecca Garcia*, Sam Houston State University
□ 9:00 a.m.
Projective equivalence classes of vector configurations.
Andrew Berget*, University of California, Davis
Alex Fink, North Carolina State University
□ 9:30 a.m.
$h$-vectors of small matroid complexes.
Steven Klee*, University of California, Davis
□ 10:00 a.m.
□ 10:30 a.m.
Cointerval simplicial complexes and cellular resolutions.
Benjamin Braun, University of Kentucky
Jonathan Browder*, University of Washington
Steven Klee, University of California, Davis
• Sunday March 4, 2012, 8:00 a.m.-10:40 a.m.
Special Session on Algebraic Geometry: Singularities and Moduli, III
Room 306, Kuykendall Hall
Jim Bryan, University of British Columbia jbryan@math.ubc.ca
Jonathan Wise, Stanford University jonathan@math.stanford.edu
• Sunday March 4, 2012, 8:00 a.m.-10:50 a.m.
Special Session on Algebraic Number Theory, Diophantine Equations and Related Topics, III
Room 305, Kuykendall Hall
Claude Levesque, Université de Laval, Quebec, Canada Claude.Levesque@mat.ulaval.ca
• Sunday March 4, 2012, 8:00 a.m.-10:50 a.m.
Special Session on Arithmetic Geometry, III
Room 210, Kuykendall Hall
Xander Faber, University of Hawaii xander@math.hawaii.edu
Michelle Manes, University of Hawaii mmanes@math.hawaii.edu
Gretel Sia, University of Hawaii gsia@math.hawaii.edu
• Sunday March 4, 2012, 8:00 a.m.-10:50 a.m.
Special Session on Automorphic and Modular Forms, III
Room 209, Kuykendall Hall
Pavel Guerzhoy, University of Hawaii pavel@math.hawaii.edu
Zachary A. Kent, Emory University kent@mathcs.emory.edu
• Sunday March 4, 2012, 8:00 a.m.-10:40 a.m.
Special Session on C*-algebras and Index Theory, II
Room D101, Business Administration Building
Erik Guentner, University of Hawaii at Manoa erik@math.hawaii.edu
Efren Ruiz, University of Hawaii at Hilo ruize@hawaii.edu
Erik Van Erp, University of Hawaii at Manoa jhamvanerp@gmail.com
Rufus Willett, University of Hawaii at Manoa rufus.willett@vanderbilt.edu
• Sunday March 4, 2012, 8:00 a.m.-10:50 a.m.
Special Session on Geometry and Analysis on Fractal Spaces, III
Room 301, Kuykendall Hall
Michel Lapidus, University of California, Riverside lapidus@gmail.com
Hung Lu, Hawaii Pacific University hlu@hpu.edu
John A. Rock, California State Polytechnic University, Pomona jarock@csupomona.edu
Machiel van Frankenhuijsen, Utah Valley University vanframa@uvu.edu
• Sunday March 4, 2012, 8:00 a.m.-10:20 a.m.
Special Session on Holomorphic Spaces, III
Room C102, Business Administration Building
Hyungwoon Koo, Korea University koohw@korea.ac.kr
Wayne Smith, University of Hawaii wayne@math.hawaii.edu
□ 8:00 a.m.
A Volterra type operator on $H^{\infty}$.
Austin Anderson*, University of Hawaii
□ 8:30 a.m.
Duality in Segal-Bargmann Spaces.
Will Gryc, Muhlenberg College
Todd Kemp*, UCSD
□ 9:00 a.m.
Toeplitz operators on Bergman spaces of polyanalytic functions.
Zeljko Cuckovic*, University of Toledo, Ohio
Trieu Le, University of Toledo, Ohio
□ 9:30 a.m.
□ 10:00 a.m.
Geodesic zippers.
Donald E Marshall*, University of Washington, Seattle, WA 98195
• Sunday March 4, 2012, 8:00 a.m.-10:40 a.m.
Special Session on Kaehler Geometry and Its Applications, III
Room A102, Business Administration Building
Zhiqin Lu, University of California Irvine zlu@uci.edu
Jeff Streets, University of California Irvine jstreets@uci.edu
Li-Sheng Tseng, University of California Irvine lstseng@math.uci.edu
Ben Weinkove, University of California San Diego weinkove@math.ucsd.edu
• Sunday March 4, 2012, 8:00 a.m.-10:50 a.m.
Special Session on Kernel Methods for Applications on the Sphere and Other Manifolds, III
Room C101, Business Administration Building
Thomas Hangelbroek, University of Hawaii at Manoa Hangelbr@math.hawaii.edu
□ 8:00 a.m.
□ 9:00 a.m.
Stable computations with Gaussians.
Michael McCourt*, Cornell University
Greg Fasshauer, Illinois Institute of Technology
□ 9:30 a.m.
Numerical solutions to a boundary value problem on the sphere using radial basis functions.
Quoc Thong Le Gia*, Unversity of New South Wales, Sydney, NSW 2052
Kerstin Hesse, Leipzig Graduate School of Management
□ 10:00 a.m.
Radial Basis Functions: Developments and applications to planetary scale flows.
Natasha Flyer*, National Center for Atmospheric Research, Institute for Mathematics Applied to Geosciences
□ 10:30 a.m.
A partition of unity radial basis function collocation method for partial differential equations.
Elisabeth Larsson*, Uppsala University/Dept. of Information Technology
Alfa Heryudono, University of Massachusetts Dartmouth/Dept. of Mathematics
• Sunday March 4, 2012, 8:00 a.m.-10:50 a.m.
Special Session on Knotting in Linear and Ring Polymer Models, III
Room A101, Business Administration Building
Tetsuo Deguchi, Ochanomizu University deguchi@phys.ocha.ac.jp
Kenneth Millett, University of California, Santa Barbara millett@math.ucsb.edu
Eric Rawdon, University of St. Thomas ericrawdon@gmail.com
Mariel Vazquez, San Francisco State University arsuaga3@gmail.com
• Sunday March 4, 2012, 8:00 a.m.-10:20 a.m.
Special Session on Model Theory, III
Roomm 310, Kuykendall Hall
Isaac Goldbring, University of California Los Angeles isaac@math.ucla.edu
Alice Medvedev, University of California Berkeley alice@math.berkeley.edu
• Sunday March 4, 2012, 8:00 a.m.-10:50 a.m.
Special Session on Noncommutative Algebra and Geometry, III
Room D103, Business Administration Building
Jason Bell, Simon Fraser University jpb@sfu.ca
James Zhang, University of Washington zhang@math.washington.edu
• Sunday March 4, 2012, 8:00 a.m.-10:50 a.m.
Special Session on Singularities, Stratifications and Their Applications, III
Room 307, Kuykendall Hall
Terence Gaffney, Northeastern University t.gaffney@neu.edu
David Trotman, Université de Provence David.Trotman@cmi.univ-mrs.fr
Leslie Charles Wilson, University of Hawaii at Manoa les@math.hawaii.edu
• Sunday March 4, 2012, 8:00 a.m.-10:50 a.m.
Special Session on Universal Algebra and Lattice Theory, III
Room 110, Hawaii Institute of Geophysics
Ralph Freese, University of Hawaii ralph@math.hawaii.edu
William Lampe, University of Hawaii bill@math.hawaii.edu
J. B. Nation, University of Hawaii JB@math.hawaii.edu
• Sunday March 4, 2012, 8:30 a.m.-10:50 a.m.
Special Session on Applications of Nonstandard Analysis, III
Room D106, Business Administration Building
Tom Lindstrom, University of Oslo, Norway t.l.lindstrom@cma.uio.no
Peter Loeb, University of Illinois at Urbana-Champaign loeb@math.uiuc.edu
David Ross, University of Hawaii, Honolulu ross@math.hawaii.edu
• Sunday March 4, 2012, 8:30 a.m.-10:50 a.m.
Special Session on Asymptotic Group Theory, III
Room 112, Webster Hall
Tara Davis, Hawaii Pacific University tara.c.davis@gmail.com
Erik Guentner, University of Hawaii erik@math.hawaii.edu
Michael Hull, Vanderbilt University michael.b.hull@vanderbilt.edu
Mark Sapir, Vanderbilt University m.sapir@vanderbilt.edu
• Sunday March 4, 2012, 8:30 a.m.-10:50 a.m.
Special Session on Linear and Permutation Representations, III
Room 104, Webster Hall
Robert Guralnick, University of Southern California guralnic@usc.edu
Pham Huu Tiep, University of Arizona tiep@math.arizona.edu
• Sunday March 4, 2012, 8:30 a.m.-10:50 a.m.
Special Session on Mathematical Coding Theory and its Industrial Applications, III
Room 103, Webster Hall
J. B. Nation, University of Hawaii JB@math.hawaii.edu
Manabu Hagiwara, National Institute of Advanced Industrial Science and Technology, Japan hagiwara.hagiwara@aist.go.jp
• Sunday March 4, 2012, 8:30 a.m.-10:50 a.m.
Special Session on Nonlinear Partial Differential Equations at the Common Interface of Waves and Fluids, III
Room D203, Business Administration Building
Ioan Bejenaru, University of Chicago bejenaru@math.uchicago.edu
Vlad Vicol, University of Chicago vicol@math.uchicago.edu
• Sunday March 4, 2012, 9:00 a.m.-10:40 a.m.
Special Session on Computability and Complexity, III
Room 126, Pacific Ocean Science and Technology
Cameron E. Freer, Massachusetts Institute of Technology
Bjorn Kjos-Hanssen, University of Hawaii at Manoa bjoern@math.hawaii.edu
• Sunday March 4, 2012, 9:00 a.m.-10:50 a.m.
Special Session on Nonlinear Partial Differential Equations of Fluid and Gas Dynamics, III
Room G103, Business Administration Building
Elaine Cozzi, Oregon State University cozzie@math.oregonstate.edu
Juhi Jang, University of California Riverside juhijang@math.ucr.edu
Jim Kelliher, University of California Riverside kelliher@math.ucr.edu
• Sunday March 4, 2012, 9:00 a.m.-10:40 a.m.
Special Session on Transformation Groups in Topology, III
Room 113, Webster Hall
Karl Heinz Dovermann, University of Hawaii at Manoa heiner@math.hawaii.edu
Daniel Ramras, New Mexico State University ramras@nmsu.edu
• Sunday March 4, 2012, 11:10 a.m.-12:00 p.m.
Invited Address
Combinatorics of the real Grassmannian and shallow water waves.
Room 152, Bilger Hall
Lauren Williams*, University of California, Berkeley
• Sunday March 4, 2012, 2:00 p.m.-2:50 p.m.
Invited Address
Geometry of Calabi-Yau moduli.
Room 152, Bilger Hall
Zhiqin Lu*, University of California Irvine
• Sunday March 4, 2012, 3:15 p.m.-6:05 p.m.
Special Session on Algebraic Combinatorics, II
Room 127, Pacific Ocean Science and Technology
Federico Ardila, San Francisco State University federico@math.sfsu.edu
Sara Billey, University of Washington billey@math.washington.edu
Kelli Talaska, University of California, Berkeley talaska@math.berkeley.edu
Lauren Williams, University of California, Berkeley williams@math.berkeley.edu
• Sunday March 4, 2012, 3:15 p.m.-6:05 p.m.
Special Session on Algebraic Number Theory, Diophantine Equations and Related Topics, IV
Room 305, Kuykendall Hall
Claude Levesque, Université de Laval, Quebec, Canada Claude.Levesque@mat.ulaval.ca
• Sunday March 4, 2012, 3:15 p.m.-5:35 p.m.
Special Session on Applications of Nonstandard Analysis, IV
Room D106, Business Administration Building
Tom Lindstrom, University of Oslo, Norway t.l.lindstrom@cma.uio.no
Peter Loeb, University of Illinois at Urbana-Champaign loeb@math.uiuc.edu
David Ross, University of Hawaii, Honolulu ross@math.hawaii.edu
• Sunday March 4, 2012, 3:15 p.m.-4:35 p.m.
Special Session on Arithmetic Geometry, IV
Room 210, Kuykendall Hall
Xander Faber, University of Hawaii xander@math.hawaii.edu
Michelle Manes, University of Hawaii mmanes@math.hawaii.edu
Gretel Sia, University of Hawaii gsia@math.hawaii.edu
□ 3:15 p.m.
Integral points on elliptic curves and explicit valuations of division polynomials.
Katherine E Stange*, Stanford University
□ 3:45 p.m.
Ranks of elliptic curves in families of quadratic twists.
Zev Klagsbrun, University of Wisconsin, Madison
Barry Mazur, Harvard University
Karl Rubin*, UC Irvine
□ 4:15 p.m.
Using elliptic curve with CM by $\sqrt{-7}$ to test primality.
Alexander Abatzoglou, UCI
Alice Silverberg*, University of California, Irvine
Andrew V. Sutherland, MIT
Angela Wong, UCI
• Sunday March 4, 2012, 3:15 p.m.-4:35 p.m.
Special Session on Automorphic and Modular Forms, IV
Room 209, Kuykendall Hall
Pavel Guerzhoy, University of Hawaii pavel@math.hawaii.edu
Zachary A. Kent, Emory University kent@mathcs.emory.edu
• Sunday March 4, 2012, 3:15 p.m.-5:55 p.m.
Special Session on C*-algebras and Index Theory, III
Room D101, Business Administration Building
Erik Guentner, University of Hawaii at Manoa erik@math.hawaii.edu
Efren Ruiz, University of Hawaii at Hilo ruize@hawaii.edu
Erik Van Erp, University of Hawaii at Manoa jhamvanerp@gmail.com
Rufus Willett, University of Hawaii at Manoa rufus.willett@vanderbilt.edu
• Sunday March 4, 2012, 3:15 p.m.-5:35 p.m.
Special Session on Geometry and Analysis on Fractal Spaces, IV
Room 301, Kuykendall Hall
Michel Lapidus, University of California, Riverside lapidus@gmail.com
Hung Lu, Hawaii Pacific University hlu@hpu.edu
John A. Rock, California State Polytechnic University, Pomona jarock@csupomona.edu
Machiel van Frankenhuijsen, Utah Valley University vanframa@uvu.edu
• Sunday March 4, 2012, 3:15 p.m.-5:55 p.m.
Special Session on Kaehler Geometry and Its Applications, IV
Room A102, Business Administration Building
Zhiqin Lu, University of California Irvine zlu@uci.edu
Jeff Streets, University of California Irvine jstreets@uci.edu
Li-Sheng Tseng, University of California Irvine lstseng@math.uci.edu
Ben Weinkove, University of California San Diego weinkove@math.ucsd.edu
• Sunday March 4, 2012, 3:15 p.m.-6:05 p.m.
Special Session on Kernel Methods for Applications on the Sphere and Other Manifolds, IV
Room C101, Business Administration Building
Thomas Hangelbroek, University of Hawaii at Manoa Hangelbr@math.hawaii.edu
• Sunday March 4, 2012, 3:15 p.m.-6:05 p.m.
Special Session on Knotting in Linear and Ring Polymer Models, IV
Room A101, Business Administration Building
Tetsuo Deguchi, Ochanomizu University deguchi@phys.ocha.ac.jp
Kenneth Millett, University of California, Santa Barbara millett@math.ucsb.edu
Eric Rawdon, University of St. Thomas ericrawdon@gmail.com
Mariel Vazquez, San Francisco State University arsuaga3@gmail.com
• Sunday March 4, 2012, 3:15 p.m.-6:05 p.m.
Special Session on Mathematical Coding Theory and its Industrial Applications, IV
Room 103, Webster Hall
J. B. Nation, University of Hawaii JB@math.hawaii.edu
Manabu Hagiwara, National Institute of Advanced Industrial Science and Technology, Japan hagiwara.hagiwara@aist.go.jp
□ 3:15 p.m.
Approximately counting the number of constrained arrays via belief propagation.
Pascal O. Vontobel*, Hewlett-Packard Laboratories, 1501 Page Mill Road, Palo Alto, CA 94304, USA
□ 3:45 p.m.
Rational maps and maximum likelihood decoding: dynamical system and invariant theory in decodings.
Yasuaki Hiraoka*, Kyushu University, IMI
□ 4:15 p.m.
On nonbinary parity-check codes.
Gretchen L. Matthews*, Clemson University
□ 4:45 p.m.
Algebraic LDPC codes based on non-commuting permutation matrices.
Christine A. Kelley*, University of Nebraska-Lincoln
□ 5:15 p.m.
A transform approach for construction and analysis of quasi-cyclic low-density parity-check codes.
Qiuju Diao, Department of Electrical and Computer Engineering, University of California at Davis
Qin Huang, Beihang University, Beijing, China
Shu Lin*, Department of Electrical and Computer Engineering, University of California at Davis
Khaled Abdel-Ghaffar, Department of Electrical and Computer Engineering, University of California at Davis
□ 5:45 p.m.
• Sunday March 4, 2012, 3:15 p.m.-5:05 p.m.
Special Session on Nonlinear Partial Differential Equations of Fluid and Gas Dynamics, IV
Room G103, Business Administration Building
Elaine Cozzi, Oregon State University cozzie@math.oregonstate.edu
Juhi Jang, University of California Riverside juhijang@math.ucr.edu
Jim Kelliher, University of California Riverside kelliher@math.ucr.edu
• Sunday March 4, 2012, 3:15 p.m.-6:05 p.m.
Special Session on Singularities, Stratifications and Their Applications, IV
Room 307, Kuykendall Hall
Terence Gaffney, Northeastern University t.gaffney@neu.edu
David Trotman, Université de Provence David.Trotman@cmi.univ-mrs.fr
Leslie Charles Wilson, University of Hawaii at Manoa les@math.hawaii.edu
□ 3:15 p.m.
Singularities of projective Gauss images of surfaces in Anti de Sitter 3-space.
Liang Chen*, School of Mathematics and Statistics, Northeast Normal University, Changchun, P. R. China.
Shyuichi Izumiya, Department of Mathematics, Hokkaido University, Sapporo, Japan
Masaki Kasedou, Department of Mathematics, Hokkaido University, Sapporo, Japan
□ 3:45 p.m.
Lightlike geometry of spacelike surfaces in Minkowski space-time.
Shyuichi Izumiya*, Department of Matheamtics, Faculty of Science, Hokkaido University
□ 4:15 p.m.
□ 4:45 p.m.
The families of Gauss indicatrices on Lorentzian hypersurfaces in pseudo-spheres in semi-Euclidean 4 space.
Jianguo Sun*, Northeast Normal University
□ 5:15 p.m.
Whitney umbrellas and swallowtails.
Takashi Nishimura*, Yokohama National University
□ 5:45 p.m.
• Sunday March 4, 2012, 3:15 p.m.-5:05 p.m.
Special Session on Transformation Groups in Topology, IV
Room 113, Webster Hall
Karl Heinz Dovermann, University of Hawaii at Manoa heiner@math.hawaii.edu
Daniel Ramras, New Mexico State University ramras@nmsu.edu
• Sunday March 4, 2012, 3:15 p.m.-6:05 p.m.
Special Session on Universal Algebra and Lattice Theory, IV
Room 110, Hawaii Institute of Geophysics
Ralph Freese, University of Hawaii ralph@math.hawaii.edu
William Lampe, University of Hawaii bill@math.hawaii.edu
J. B. Nation, University of Hawaii JB@math.hawaii.edu
• Sunday March 4, 2012, 4:00 p.m.-5:40 p.m.
Special Session on Computability and Complexity, IV
Room 126, Pacific Ocean Science and Technology
Cameron E. Freer, Massachusetts Institute of Technology
Bjorn Kjos-Hanssen, University of Hawaii at Manoa bjoern@math.hawaii.edu
Inquiries: meet@ams.org | {"url":"http://ams.org/meetings/sectional/2190_program_sunday.html","timestamp":"2014-04-17T05:02:36Z","content_type":null,"content_length":"114318","record_id":"<urn:uuid:1fb670f6-ca91-49fe-a845-dc280a8f511b>","cc-path":"CC-MAIN-2014-15/segments/1397609526252.40/warc/CC-MAIN-20140416005206-00408-ip-10-147-4-33.ec2.internal.warc.gz"} |
Time and space e cient pose clustering
- Parallel Computing , 1995
"... Hierarchical clustering is a common method used to determine clusters of similar data points in multidimensional spaces. O(n 2 ) algorithms are known for this problem [3, 4, 10, 18]. This paper
reviews important results for sequential algorithms and describes previous work on parallel algorithms f ..."
Cited by 80 (1 self)
Add to MetaCart
Hierarchical clustering is a common method used to determine clusters of similar data points in multidimensional spaces. O(n 2 ) algorithms are known for this problem [3, 4, 10, 18]. This paper
reviews important results for sequential algorithms and describes previous work on parallel algorithms for hierarchical clustering. Parallel algorithms to perform hierarchical clustering using
several distance metrics are then described. Optimal PRAM algorithms using n log n processors are given for the average link, complete link, centroid, median, and minimum variance metrics. Optimal
butterfly and tree algorithms using n log n processors are given for the centroid, median, and minimum variance metrics. Optimal asymptotic speedups are achieved for the best practical algorithm to
perform clustering using the single link metric on a n log n processor PRAM, butterfly, or tree. Keywords. Hierarchical clustering, pattern analysis, parallel algorithm, butterfly network, PRAM
algorithm. 1 In...
- Computer Vision and Image Understanding , 1999
"... INTRODUCTION Despite recent advances in computer vision the recognition and localization of 3D objects from a 2D image of a cluttered scene is still a key problem. The reason for the difficulty
to progress mainly lies in the combinatorial aspect of the problem. This difficulty can be bypassed if t ..."
Add to MetaCart
INTRODUCTION Despite recent advances in computer vision the recognition and localization of 3D objects from a 2D image of a cluttered scene is still a key problem. The reason for the difficulty to
progress mainly lies in the combinatorial aspect of the problem. This difficulty can be bypassed if the location of the objects in the image is known. In that case, the problem is to compare
efficiently a region of the image to a viewer-centered object database. (See Fig. 1 for the figures used in our experiments.) Recent proposed solutions are, for example, based on principal component
analysis [1, 2], modal matching [3], or template matching [4]. But Grimson [5] emphasized that the hard part of the recognition problem is in separating out subsets of correct data from the spurious
data that arise from a single object. Recent researchesinthisfieldhave focused on the various components of the recognition problem: which features are invariant and discriminant [6], how it is | {"url":"http://citeseerx.ist.psu.edu/showciting?cid=1948998","timestamp":"2014-04-19T20:52:31Z","content_type":null,"content_length":"15632","record_id":"<urn:uuid:a5dbdf26-fa76-4740-a3c7-9a9bb3897ed2>","cc-path":"CC-MAIN-2014-15/segments/1397609537376.43/warc/CC-MAIN-20140416005217-00085-ip-10-147-4-33.ec2.internal.warc.gz"} |
Chapter 1.
Labor Force Data Derived from the Current Population Survey
Limitations of the Data
Geographic. Although the present CPS sample is a State-based design, the sample size of the CPS is sufficient to produce reliable monthly estimates at the national level only. The sample does not
permit the production of reliable monthly estimates for the States. However, demographic, social, and economic detail is published annually for the census regions and divisions, all States and the
District of Columbia, 50 large metropolitan areas, and selected central cities. The production of subnational labor force and unemployment estimates is discussed in more detail in chapter 4 of this
Sources of errors in the survey estimates. There are two types of errors possible in an estimate based on a sample survey — sampling and nonsampling. The mathematical discipline of sampling theory
provides methods for estimating standard errors when the probability of selection of each member of a population can be specified. The standard error, a measure of sampling variability, can be used
to compute confidence intervals that indicate a range of differences from true population values that can be anticipated because only a sample of the population has been surveyed. Nonsampling errors
such as response variability, response bias, and other types of bias occur in complete censuses as well as sample surveys. In some instances, nonsampling error may be more tightly controlled in a
well-conducted survey, through which it is feasible to collect and process the data more skillfully. Estimation of other types of bias is one of the most difficult aspects of survey work, and
adequate measures of bias often cannot be made.
Nonsampling error. The full extent of nonsampling error is unknown, but special studies have been conducted to quantify some sources of nonsampling error in the CPS. The effect of nonsampling error
should be small on estimates of relative change, such as month-to-month change. Estimates of monthly levels would be more severely affected by nonsampling error.
Nonsampling errors in surveys can be attributed to many sources, including the inability to obtain information about all persons in the sample; differences in the interpretation of questions;
inability or unwillingness of respondents to provide correct information; inability to recall information; errors made in collecting and processing the data; errors made in estimating values for
missing data; and failure to represent all sample households and all persons within sample households (undercoverage).
The effects of some components of nonsampling error in the CPS data are reflected in the variation in some labor force measures among the rotation groups, each of which is designed to be a
representative sample of the population. For example, unemployment estimates from a rotation group tend to be higher in the first and fifth months of interviewing.
Undercoverage in the CPS results from missed housing units and missed persons within sample households. The noninterview adjustment procedure accounts for missed households. It also is known that the
CPS undercoverage of persons varies with age, sex, race, and Hispanic ethnicity. Generally, undercoverage is greater for men than for women and greater for blacks, Hispanics, and other races than for
whites. Ratio adjustment to independent age-sex-race-origin population controls, as described previously, partially corrects for the biases due to survey undercoverage. Biases still exist in the
estimates to the extent that persons in missed households or missed persons in interviewed households have characteristics different from those of interviewed persons in the same age-sex-race-origin
The independent population estimates used in the estimation procedure may be a source of error, although, on balance, their use substantially improves the statistical reliability of many of the
figures. Errors may arise in the independent population estimates because of underenumeration of certain population groups or errors in age reporting in the decennial census (which serves as the base
for the estimates) or similar problems in the components of population change (mortality, immigration, and so forth) since that date.
Sampling error. When a sample, rather than the entire population, is surveyed, estimates differ from the true population values that they represent. This difference, or sampling error, occurs by
chance, and its variability is measured by the standard error of the estimate. Sample estimates from a given survey design are unbiased when an average of the estimates from all possible samples
would yield, hypothetically, the true population value. In this case, the sample estimate and its standard error can be used to construct approximate confidence intervals, or ranges of values, that
include the true population value with known probabilities. If the process of selecting a sample from the population were repeated many times and an estimate and its standard error were calculated
for each sample, then:
1. Approximately 68 percent of the intervals from 1 standard error below the estimate to 1 standard error above the estimate would include the true population value.
2. Approximately 90 percent of the intervals from 1.6 standard errors below the estimate to 1.6 standard errors above the estimate would include the true population value.
3. Approximately 95 percent of the intervals from 2 standard errors below the estimate to 2 standard errors above the estimate would include the true population value.
Although the estimating methods used in the CPS do not produce unbiased estimates, biases for most estimates are believed to be small enough that these confidence interval statements are
approximately true.
Standard error estimates computed using generalized variance functions are provided in Employment and Earnings and other publications. Using replicate variance techniques, standard error estimates
are generated. As computed, these standard error estimates reflect contributions not only from sampling error, but also from some types of nonsampling error, particularly response variability.
Because replicate variance techniques are somewhat cumbersome, simplified formulas called generalized variance functions (GVFs) have been developed for various types of labor force characteristics.
The GVF can be used to approximate an estimate's standard error, but this indicates only the general magnitude of the standard error, rather than a precise value.
Next: Technical References | {"url":"http://www.bls.gov/opub/hom/homch1_k.htm","timestamp":"2014-04-18T08:26:27Z","content_type":null,"content_length":"38275","record_id":"<urn:uuid:05724e95-8011-452e-b213-5d97df720b17>","cc-path":"CC-MAIN-2014-15/segments/1398223211700.16/warc/CC-MAIN-20140423032011-00377-ip-10-147-4-33.ec2.internal.warc.gz"} |
Yesterday's Other Moment Of Clarity:
May 14th, 2008 by Dan Meyer
The Law of Cosines is a beastly formula, which, yesterday, for the first time in five years, I didn't ask my students to memorize.
I gave them my reasoning: basically, that ten years down the road, ten months, maybe ten days, they'd forget this formula. It's inevitable. I'd rather them pour their guts into creatively operating
the formula than memorizing it, since, in the Google era, that's an appropriation of resources I could no longer defend.
At the end of my monologue, I wrote the formula on the board and started passing out tests. One student in the front held up her hand, smiling, the Law of Cosines written brazenly across it.
12 Responses to “Yesterday’s Other Moment Of Clarity:”
1. on 14 May 2008 at 11:28 am1 Mr K.
I’ve gotten to the point where I regularly have a “cheat board” – a section of the white board where there are hints and formulas. They stay there for a couple of weeks, until I rotate them out
to make room for new hints/formulas.
I find that by the time I have to erase them, most students have internalized them already…
2. on 14 May 2008 at 11:34 am2
I came to the same realization the first time I taught an “advanced” class and had to familiarize myself with the formulas that I forgot since high school and college.
3. I’m sure there are others, but the AP Stats test has never required memorization of formulas. It’s all about, “Yes, but what can you do with that formula? Do you know when to use it and when to
use another one?”
If I need to use the Law of Cosines in my daily work (heaven forfend…) then I know where to find it and what to do with it.
A lot of good it did me in college stats to memorize the (ridiculous) formula for standard deviation. I mean, does anyone calculate s.d. by hand?!?
BTW – the same goes for the periodic table and historical dates (other than a select few, perhaps).
4. It’s actually a bit more intimidating when I go into a test and they give you an equation sheet or make it open book. If you spent time memorizing, you were hoping for a couple points just for
writing down the right equation correctly, but having the equation for free takes the wind out of those free-loading sails.
So the value of “equation dropping” goes down while the value of equation using goes up. Win win. Unless you didn’t study.
Nice work though, I wonder how much cumulative stress temporarily memorizing things like the law of cosines has caused us. On the other hand, I can still remember the phone numbers of my friends
from 4th grade that I dialed by hand, but I don’t really know anyone’s phone number now because it’s in my cell phone. Does it matter? I guess if my cell phone breaks…
5. on 14 May 2008 at 2:56 pm Mark
I just finished reviewing about 400 vocabulary words that have been plastered on my walls since September. A bunch of the terms are people throughout time period we studied, many of which my
students won’t remember next year. Even I had to refresh my memory on some of the words. How am I supposed to convince kids that they need to know all of these concepts, when I find myself in
need of a refresher course before their refresher course?
6. @Steven – I don’t think the internet will break any time soon, so no worries there. And if it does break, then we have bigger issues…
In my IB math course, students are given a comprehensive formula booklet at the beginning of the year. They are told that it can be used on any test, including the Exam. My rationale: I’m testing
your math skills, not your memory skills.
@Steven again – And, on those IB Exams, you are still given marks for choosing to use the correct formula.
7. I gave a test/quest/quiz/assessment on the laws of sines/cosines last week. The formulas were up on the screen the whole time. Some of the cherubs thought I forgot to change the slide. :)
I gave the seniors a sheet on card-stock with the trig identities. Heck, I don’t even remember the half-angle formulas, why should they?
8. on 14 May 2008 at 6:38 pm dan
Hmph. Thought I was being pretty subversive here but apparently I’m the last to board this train. Why don’t y’all post more?
9. on 14 May 2008 at 8:06 pm Glenn
Well, we don’t post more because we are just getting around to realizing how awesome a tool blogs are!
You discovered that years ago, which is why we look up to you and steal so much from you.
By the way, just started my blog at mrwaddell.edublogs.org, and just posted the graphs I used for a graphing lesson.
Feel free to take a peek and let me know what you think.
10. on 15 May 2008 at 8:26 am
My state gives the Chemistry students a reference tables and I give them one in class as well. Regardless, my students keep asking whether or not they need to memorize the periodic table. :: sigh
11. on 15 May 2008 at 11:57 am
Brian Cormier
I have my students memorize the formula when I feel the formula reflects the concept I want them to understand, like area of a rectangle and triangle, and even the pythagorean theorem. Otherwise,
they’re just memorizing letters, numbers and symbols in a seemingly arbitrary sequence.
Furthermore, if you make them memorize the formula and they write it incorrectly, but otherwise plug in and solve correctly, how much credit should they get?
12. on 06 Aug 2008 at 11:06 am
Interesting. In Finland, at the school level resembling high school (classes 10-12, non-compulsory), the students are allowed the benefit of a whole book of formulas and data from maths, physics
and chemistry. However, few and far between are the students who actually know how to use it… | {"url":"http://blog.mrmeyer.com/2008/yesterdays-other-moment-of-clarity/","timestamp":"2014-04-21T02:32:56Z","content_type":null,"content_length":"41556","record_id":"<urn:uuid:d737012c-b18d-4253-a627-a287006ee037>","cc-path":"CC-MAIN-2014-15/segments/1398223206672.15/warc/CC-MAIN-20140423032006-00322-ip-10-147-4-33.ec2.internal.warc.gz"} |
Stony Brook University Mathematics Professor Recognized with Steele Prize for Lifetime Achievement by the AMS
STONY BROOK, N.Y., January 18, 2011 – John W. “Jack” Milnor, Ph.D., Professor of Mathematics and Co-director of the Institute for Mathematical Sciences at Stony Brook University, has been awarded The
American Mathematical
Society’s prestigious Steele Prize for Lifetime Achievement. The award, presented during the recent AMS Joint Mathematics Meetings in New Orleans, is among the world's most important honors given for
outstanding contributions to mathematics. The award includes a $5,000 prize.
"The AMS Lifetime Achievement Award is a well-deserved honor and recognition," said Samuel L. Stanley, Jr., M.D., President of Stony Brook University. "Jack's continuous contributions to his field
constitute a tremendous legacy, he is an outstanding member of the Stony Brook community, and we are very proud of his accomplishments."
The citation that accompanied the Lifetime Achievement award noted that Dr. Milnor “stands out from the list of great mathematicians in terms of his overall achievements and his influence on
mathematics in general, both through his work and through his excellent books”.
“It is a particular pleasure to receive an award for what one enjoys doing anyway,” Dr. Milnor said. “I have been very lucky to have had so many years to explore and enjoy some of the many highways
and byways of mathematics, and I want to thank the three institutions that have supported and inspired me for most of the past 60 years: Princeton University, where I learned to love mathematics; the
Institute for Advanced Study for many years of uninterrupted research; and Stony Brook University where I was able to reconnect with students and, to some extent, with teaching.
“I am very grateful to my many teachers, from Ralph Fox and Norman Steenrod long ago to Adrien Douady in more recent years, and I want to thank the family, friends, students, colleagues, and
collaborators who have helped me over the years. Finally, my grateful thanks to the selection committee for this honor.”
Dr. Milnor had previously won two other Steele Prizes from the AMS – for a Mathematical Exposition (2004) and for a Seminal Contribution to Research (1982).
Founded in 1888 to further mathematical research and scholarship, the more than 30,000-member American Mathematical Society today fulfills its mission through programs and services that promote
mathematical research and its uses, strengthen mathematical education, and foster awareness and appreciation of mathematics and its connections to other disciplines and to everyday life.
Dr. Milnor spent his undergraduate and graduate student years at Princeton, studying knot theory under the supervision of Ralph Fox. He received an A.B. and a Ph.D. in Mathematics from Princeton.
After many years at Princeton University and the Institute for Advanced Study, with shorter stays at UCLA and MIT, he has settled at Stony Brook University, where he is now co-director of the
Institute for Mathematical Sciences. Over the years, he has studied game theory, differential geometry, algebraic topology, differential topology, quadratic forms, and algebraic K-theory. For the
past 25 years, his main focus has been on dynamical systems and particularly on low dimensional holomorphic dynamical systems. Among his current projects is the preparation of a book to be called
Dynamics, Introductory Lectures. Five volumes of his older collected papers have been published by the AMS.
A member of the National Academy of Sciences, he has also won the Fields Medal – the International Medal for Outstanding Discoveries in Mathematics awarded to mathematicians under the age of 40 by
the International Congress of the International Mathematical Union (IMU) – and the Wolf Prize in Mathematics, Israel’s highest honor in mathematics.
According to the AMS, “Dr. Milnor’s discovery of 28 nondiffeomorphic smooth structures on the 7-dimensional sphere and his further work developing the surgery techniques for manifolds shaped the
development of differential topology beginning in the 1950s. Another of his famous results from this period is a counterexample to the Hauptvermutung: an example of homeomorphic but not
combinatorially equivalent complexes. This counterexample is a part of a general big picture of the relation between the topological, combinatorial, and smooth worlds developed by Milnor. Jointly
with M. Kervaire, Milnor proved the first results showing that the topology of 4-dimensional manifolds is exceptional, by revealing obstructions for the realization of 2-dimensional spherical
homology classes by smooth embedded 2-spheres. This is one of the founding results of 4-dimensional topology.
“In this way, Milnor opened several fields: singularity theory, algebraic K-theory, and the theory of quadratic forms. Although he did not invent these subjects, his work gave them completely new
points of view. For instance, his work on isolated singularities of complex hypersurfaces presented a great new topological framework for studying singularities, and at the same time provided a rich
new source of examples of manifolds with different extra structures. The concepts of Milnor fibers and Milnor number are today among the most important notions in the study of complex singularities.
“The significance of Milnor’s work goes much beyond his own spectacular results. He wrote several books (Morse Theory (Princeton University Press, Princeton, 1963), Lectures on h-Cobordism Theorem
(Princeton University Press, Princeton, 1965), and Characteristic Classes (Princeton University Press, Princeton, 1974), among others) which became classical, and several generations of
mathematicians have grown up learning beautiful mathematical ideas from these excellent books. Milnor’s survey “Whitehead torsion” (Bull. Amer. Math. Soc. 72 (1966), no. 3, 358–426) provided an entry
point for topologists to algebraic K-theory. This was followed by a number of Milnor’s own important discoveries in algebraic K-theory and related areas: the congruence subgroup theorem, the
computation of Whitehead groups, the introduction and study of the functor K2 and higher K-functors, numerous contributions to the classical subject of quadratic forms and in particular his complete
resolution of the theory of symmetric inner product spaces over a field of characteristic 2, just to name a few. Milnor’s introduction of the growth function for a finitely presented group and his
theorem that the fundamental group of a negatively curved Riemannian manifold has exponential growth was the beginning of a spectacular development of the modern geometric group theory and eventually
led to Gromov’s hyperbolic group theory.
“During the past 30 years, Milnor has been playing a prominent role in development of low-dimensional dynamics, real and complex. His pioneering work with Thurston on the kneading theory for interval
maps laid down the combinatorial foundation for the interval dynamics putting it into the focus of intense research for decades. Milnor and Thurston’s conjecture on the entropy monotonicity brought
together real and complex dynamics in a deep way, prompting a firework of further advances. And of course, his book Dynamics in One Complex Variable (Friedr. Vieweg & Sohn, Braunschweig, 1999)
immediately became the most popular gateway to this field.
“The Steele Prize honors John Willard Milnor for all of these achievements.”
About Stony Brook University
Part of the State University of New York system, Stony Brook University encompasses 200 buildings on 1,450 acres. In the 53 years since its founding, the University has grown tremendously, now with
nearly 24,700 students and 2,200 faculty and is recognized as one of the nation’s important centers of learning and scholarship. It is a member of the prestigious Association of American
Universities, and ranks among the top 100 national universities in America and among the top 50 public national universities in the country according to the 2010 U.S. News & World Report survey. One
of four University Centers in the SUNY system, Stony Brook University co-manages Brookhaven National Laboratory, joining an elite group of universities, including Berkeley, University of Chicago,
Cornell, MIT, and Princeton that run federal research and development laboratories. SBU is a driving force of the Long Island economy, with an annual economic impact of $4.65 billion, generating
nearly 60,000 jobs, and accounts for nearly 4% of all economic activity in Nassau and Suffolk counties, and roughly 7.5 percent of total jobs in Suffolk County. | {"url":"http://commcgi.cc.stonybrook.edu/am2/publish/General_University_News_2/Stony_Brook_University_Mathematics_Professor_Recognized_with_Lifetime_Achievement_Award_by_the_AMS.shtml","timestamp":"2014-04-16T07:13:16Z","content_type":null,"content_length":"20574","record_id":"<urn:uuid:44143b98-fc8c-4495-8ce6-232dd2b22a43>","cc-path":"CC-MAIN-2014-15/segments/1397609521558.37/warc/CC-MAIN-20140416005201-00310-ip-10-147-4-33.ec2.internal.warc.gz"} |
Frequently Asked Questions
This page contains answers to questions that are sent to me about calculators on a regular basis.
I have a calculator but I lost the manual. Can you help me?
Probably not. I won't bite your head off if you ask, but a) I don't have manuals for all the calculators in my possession; b) the manuals I have are mostly paper copies, and I have neither the
time, nor the stamina to make copies/scans on demand (as you can imagine, I get a lot of requests); c) even if I have a manual scanned, I may be violating the manufacturer's copyright by sending
you a copy.
Where can I find a manual for my HP calculator?
For current models, why, contact HP of course! For vintage models, I recommend ordering a CD-ROM set from the Museum of HP Calculators; this CD-ROM set contains a near complete collection of all
manuals for older HP models.
Where can I find a manual for my TI calculator?
Go to the TI Web site. They have the manuals for most of their current models available for download in PDF format. (You can also order paper copies.) Unfortunately, manuals for older models are
not available.
Where can I find a manual for my Casio calculator?
Manuals for select Casio calculators are available for download at Casio's Web site. A number of Casio manuals are also available online at silrun Systems. For other, older models, you are
advised to try and call Casio; several people reported success, as Casio was able to send them at least a photocopy of the requested manual. You can also try usersmanualguide.com.
Where can I find a manual for my Citizen calculator?
Some Citizen calculator manuals (actually, manuals for most of their contemporary models) are now available online at http://www.citizen-systems.co.jp/english/support/download/electronic/
Where can I find a manual for my Sharp calculator?
Sharp USA has a manuals Web site at http://www.sharp-usa.com/products/ProductOperationManuals/, and some calculator manuals are available here. More manuals can be found at the Sharp UK Web site:
http://www.sharp.co.uk/Manuals.aspx. You can also try usersmanualguide.com. or better yet, Sharp Austria at http://esupport.sharp.at/html/om/index.php?ProdLine=30&TemplateLang=en.
Where can I find a manual for my Radio Shack/Tandy calculator?
You can find some manuals by searching the Radio Shack Web site: e.g., try http://www.radioshack.com/search/manualResults.jsp?kw=calculator+manual.
Where can I find a manual for other calculator brands?
I have no idea! Presumably you already tried contacting the manufacturer. Your next best bet is to keep an eye on auction sites such as eBay, just in case a manual shows up. (Unfortunately, it is
a lot more common to see machines there without manuals than the other way around.)
Where can I get my calculator or accessory serviced?
For in-production or recently discontinued models, you need to contact the manufacturer. For older models, you're probably out of luck.
Can you repair my old calculator or accessory for me?
Technically, yes, but it's probably not a good idea business-wise. Repairing old machines is made more difficult than necessary by the fact that original replacement parts are no longer
available. Substitutes need to be found or jury-rigged. Success cannot be guaranteed, and diagnosis and repair can take many hours. If I charged you the true value of my time, you'd end up paying
many times the value of an equivalent new calculator. If I charged you less, it'd just not be worth my time.
Can you provide advice for my calculator repairs?
I wrote fairly detailed accounts of some of my repair "war stories", these are published right here on my Web site. For instance, look at my HP-25C or HP-91 pages.
How do I multiply and divide complex numbers on my calculator?
If your calculator has complex number support, you must consult your manual to find out how complex arguments can be entered. On all calculators, however, you can multiply two complex numbers, a+bi and c+di, by computing (ac-bd)+(ad+bc)i. Dividing a+bi by c+di is computed as (ac+bd)/(c²+d²) + (bc-ad)i/(c²+d²). Or, you can use your calculator's polar/rectangular conversion functions and convert a+bi to p·(cos q + i·sin q), and c+di to r·(cos s + i·sin s). The product of the two numbers is then pr·(cos(q+s) + i·sin(q+s)), whereas the quotient is (p/r)·(cos(q-s) + i·sin(q-s)). I.e., to compute the product, multiply the absolute values and add the phase angles; for division, divide the absolute values and subtract the phase angles.
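If you want to check the arithmetic above, here is a small sketch in Python (not calculator-specific; the sample numbers are arbitrary) that verifies the hand formulas against built-in complex arithmetic, in both rectangular and polar form:

```python
import cmath

def mul(a, b, c, d):
    """(a+bi)(c+di) via the rectangular formula above."""
    return (a * c - b * d, a * d + b * c)

def div(a, b, c, d):
    """(a+bi)/(c+di) via the rectangular formula above."""
    den = c * c + d * d
    return ((a * c + b * d) / den, (b * c - a * d) / den)

z, w = 3 + 4j, 1 - 2j
assert mul(3, 4, 1, -2) == ((z * w).real, (z * w).imag)   # 11 - 2i
assert div(3, 4, 1, -2) == ((z / w).real, (z / w).imag)   # -1 + 2i

# Polar route: multiply the absolute values, add the phase angles.
p, q = cmath.polar(z)
r, s = cmath.polar(w)
assert abs(cmath.rect(p * r, q + s) - z * w) < 1e-12
```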
Why are your calculator images defaced with a copyright/not for sale notice?
Most people respect other people's work, even on the Internet. Some don't. In particular, some unscrupulous sellers on eBay and other auction sites used my calculator images without permission,
without naming the source, and without making it clear to buyers that the picture is not that of the actual item being sold. The copyright notice is meant to serve as a deterrent, although
lately, some eBay sellers began to blatantly use cropped versions of my images. Needless to say, when I come across such an eBay auction, I immediately request its removal through eBay's Verified
Rights Owner (VeRO) program. | {"url":"http://www.rskey.org/CMS/index.php/the-library/5","timestamp":"2014-04-20T11:56:53Z","content_type":null,"content_length":"22008","record_id":"<urn:uuid:e4baec84-33ca-4a1a-afba-befb828828c9>","cc-path":"CC-MAIN-2014-15/segments/1397609538423.10/warc/CC-MAIN-20140416005218-00329-ip-10-147-4-33.ec2.internal.warc.gz"} |
Artesia, CA Math Tutor
Find an Artesia, CA Math Tutor
...You can rest assured that the student will be up to speed in a short period of time. My special-needs focus is learning disabilities such as ADD or ADHD. It is important to do an assessment of the individual's essential functioning: basic comprehension, hand-eye coordination, the ability to pay attention well enough to concentrate on the subject, etc.
20 Subjects: including prealgebra, geometry, trigonometry, English
...Three of my SAT Math II students this past year were boarding school students in Russia; I tutored these girls remotely via Skype-esque video conferencing and advanced document mark-up &
screen-sharing. All of my students saw their SAT Math II scores improve after just two weeks of rigorous tutor...
60 Subjects: including SAT math, calculus, chemistry, geometry
...I also created lesson plans and worked closely with the standards, so I have a good understanding of what is required of the student in their own classroom. I am eager to use all the
skills I have learned in college and start working with students. I am very enthusiastic and always trying to make learning fun and interesting.
12 Subjects: including algebra 2, elementary (k-6th), geometry, phonics
...My approach to tutoring is to help the student understand his or her best study methods and to make the learning experience practical and organic. Overall, I seek to help students become or
remain self-motivated and disciplined regardless of whether they are pursuing a major field of study, pass...
9 Subjects: including SPSS, reading, writing, piano
...I can use concrete objects and ideas to do this. My goal is not just an understanding, but to stir curiosity and therefore an interest in math. I tutored at College of the Siskiyous for 4
years working with both groups and individuals, tutoring students in Basic Math to Calculus.
4 Subjects: including algebra 1, algebra 2, prealgebra, elementary math
Hometown, IL Calculus Tutor
Find a Hometown, IL Calculus Tutor
...As one of these tutors it was my responsibility to assist students with questions and to periodically check in on students who were studying in the lab. Algebra 1 (or Elementary Algebra)
introduces the use of variables in equations and usually concludes with the basic concept of functions. The ...
7 Subjects: including calculus, physics, geometry, algebra 1
...Art demands that we tap into our cultural side to understand how we as people create and share ideas visually that are central to our experiences and beliefs. Archaeology takes a third
approach of interpreting the role of objects in our lives. Collectively, these three fields are the critical f...
10 Subjects: including calculus, geometry, algebra 1, algebra 2
...I look forward to hearing from you. I took discrete math as an undergraduate at Tufts and received an A. Topics included set theory, graph theory, and combinatorics. Since then I have worked as a TA for
"Finite Mathematics for Business" which had a major component of counting (combinations, permutations) problems, and linear programming, both of which are common in discrete math.
22 Subjects: including calculus, geometry, statistics, precalculus
...For example, the calculus of vector fields is necessary to understand how electric and magnetic fields behave. Astronomy: My undergraduate degrees are in both physics and astronomy; I graduated from the University of Maryland (College Park) with degrees in both subjects. Probability and statistics: Particle physics research is mostly statistical data analysis.
13 Subjects: including calculus, physics, geometry, statistics
...Then I tutored freshman students, later I became a graduate student and taught the section of physics laboratories for science majors for three years. During my postdoctoral appointment at Los
Alamos, I tutored a student in physics for the MCAT test. Twice I worked as an instructor at a summer camp...
18 Subjects: including calculus, chemistry, physics, geometry
Iterative Methods for Determining the k Shortest Paths in a Network, 2002
Cited by 72 (20 self)
We define general algebraic frameworks for shortest-distance problems based on the structure of semirings. We give a generic algorithm for finding single-source shortest distances in a weighted
directed graph when the weights satisfy the conditions of our general semiring framework. The same algorithm can be used to solve efficiently classical shortest paths problems or to find the
k-shortest distances in a directed graph. It can be used to solve single-source shortest-distance problems in weighted directed acyclic graphs over any semiring. We examine several semirings and
describe some specific instances of our generic algorithms to illustrate their use and compare them with existing methods and algorithms. The proof of the soundness of all algorithms is given in
detail, including their pseudocode and a full analysis of their running time complexity.
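As an illustration of the framework the abstract describes (my own sketch, not the paper's pseudocode): a Bellman-Ford-style relaxation can be parameterized by the semiring operations, so one routine covers several shortest-distance problems. Instantiating it with the tropical semiring (min, +, infinity, 0) recovers classical shortest distances; convergence in general depends on conditions on the semiring, which the paper makes precise.

```python
def shortest_distance(graph, source, plus, times, zero, one):
    """Single-source 'shortest distance' over a semiring (plus, times, zero, one).

    graph maps each node to a list of (neighbor, weight) edges, with weights
    drawn from the semiring. This is a naive relaxation loop; it terminates
    here because the example semiring is idempotent and the graph is acyclic.
    """
    nodes = set(graph) | {v for edges in graph.values() for v, _ in edges}
    dist = {v: zero for v in nodes}
    dist[source] = one
    for _ in range(len(nodes)):
        changed = False
        for u, edges in graph.items():
            for v, w in edges:
                candidate = plus(dist[v], times(dist[u], w))
                if candidate != dist[v]:
                    dist[v] = candidate
                    changed = True
        if not changed:
            break
    return dist

# Tropical semiring: plus = min, times = +, zero = infinity, one = 0.
graph = {'a': [('b', 1), ('c', 4)], 'b': [('c', 2)]}
d = shortest_distance(graph, 'a', min, lambda x, y: x + y, float('inf'), 0)
print(sorted(d.items()))  # [('a', 0), ('b', 1), ('c', 3)]
```

Swapping in (max, *, 0, 1) with probabilities as weights would instead compute most-reliable paths, which is the kind of reuse the semiring abstraction buys.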
Pacman using GeoGebra??!!
Here's how I did it:
1. Go to GeoGebra.
2. Make an angle with given size.
3. A popup will show. Type 324 in the box. Then, an angle will appear.
4. Put a point.
5. Play pacman! Right click the angle.
It is like chasing, right?
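By the way, the 324 in step 3 isn't magic (my reading of it, not stated in the post): the Pac-Man body is the part of a full circle left over after the mouth, so with a 36-degree mouth you get

```python
full_circle = 360           # degrees in a full turn
mouth = 36                  # assumed opening for Pac-Man's mouth
body = full_circle - mouth  # the sector angle typed into GeoGebra
print(body)                 # 324
```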
Last edited by julianthemath (2012-12-26 18:10:45) | {"url":"http://www.mathisfunforum.com/viewtopic.php?pid=267584","timestamp":"2014-04-21T02:00:17Z","content_type":null,"content_length":"16428","record_id":"<urn:uuid:6b420139-8631-44aa-981d-8a153977f657>","cc-path":"CC-MAIN-2014-15/segments/1397609539447.23/warc/CC-MAIN-20140416005219-00059-ip-10-147-4-33.ec2.internal.warc.gz"} |
RE: st: RE: estimate 95%CIs of the mean for different sample sizes
From "Nick Cox" <n.j.cox@durham.ac.uk>
To <statalist@hsphsun2.harvard.edu>
Subject RE: st: RE: estimate 95%CIs of the mean for different sample sizes
Date Thu, 10 Jun 2010 11:39:58 +0100
If you bootstrap you have to invent extra methodology for contracting or
expanding to different sample sizes other than that in hand. The results
you get then conflate side-effects or variability associated with that
methodology with those of bootstrapping. That doesn't sound crazy, but
it sounds like an elaborate exercise for what seems like a fairly simple
question. Also, I think you would have to explain in a bit of detail
what you're doing. All sounds a bit over the top to me, but it's your call.
With -cii- you could say I have this mean and standard deviation; what's
the effect of changing the sample size? In fact, you don't need -cii- to
do that, as it's an elementary calculation; but if a calculator would
serve in place of -cii-, then so also -cii- would serve in place of a calculator.
Miranda Kim
I had in mind something like bootstrap originally, but am not sure what
method is most appropriate in this case. The purpose is to see what
precision is obtained for different sample sizes.
Nick Cox
> -cii-
Miranda Kim
> I have data for a sample of 300 which is approximately normally
> distributed (slightly skewed). I would like to derive 95% confidence
> intervals around the estimate of the mean for sample sizes of 50, 80,
> 100, 150, and 400.
> Does anyone have any suggestions what command I can use to do this?
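For what it's worth, the "elementary calculation" Nick mentions can be sketched outside Stata too. Under a normal approximation, the 95% CI half-width for a mean is about 1.96·s/√n (z = 1.96 here stands in for the exact Student-t quantile, and sd = 10 is an arbitrary illustration value, not a number from the thread):

```python
import math

def ci_halfwidth(sd, n, z=1.96):
    """Approximate 95% confidence-interval half-width for a mean."""
    return z * sd / math.sqrt(n)

sd = 10.0  # hypothetical sample standard deviation
for n in (50, 80, 100, 150, 400):
    print(n, round(ci_halfwidth(sd, n), 2))
```

This makes the precision-versus-sample-size trade-off in the original question directly visible: quadrupling n halves the interval width.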
Unpublished works of Woodin on SCH and Radin forcing
There are many unpublished results of Hugh Woodin on the singular cardinals hypothesis and Radin forcing. Some of his results were later published by others, but it seems that there are still many unpublished results.
Does anyone know some of them to state here?
Is there any way to get a copy of Woodin's unpublished results?
set-theory bibliography reference-request
Why not just ask him? His web page (math.berkeley.edu/~woodin) has his email address. – Nate Eldredge May 30 '13 at 12:34
What is a $T$-degree? (I presume it's not a Turing degree?) – Noah S May 30 '13 at 18:11
(I've moved and expanded my comments to an answer.) – Andres Caicedo May 31 '13 at 6:24
1 Answer
(Below, I refer to the Handbook. This is the Handbook of Set Theory, Foreman, Kanamori, eds., Springer, 2010.)
The bulk of these results appears in notes by James Cummings. You may want to ask him about them. Originally these notes were intended for a book on Radin forcing to be coauthored by him
and Hugh, but significant portions of it appear in his Handbook article. (The book exists in draft form.)
There are a few additional facts about Radin forcing that did not make it into the article, but James has write ups of them, and of course the theory has expanded since.
For $\mathsf{SCH}$ specifically, some details appear in Gitik's paper The negation of the singular cardinal hypothesis from $o(\kappa)=\kappa^{++}$, APAL 43 (1989), 209-234. The state of
the art in this regard is described in Gitik's Handbook article, his papers, and those of his co-authors, particularly Carmy Merimovich.
Some applications of Radin/Prikry-like forcing in the context of determinacy are not in any of the above. You can see some in the Koellner-Woodin Handbook article, and yet others in the
proofs of the derived model theorem, most of which can be seen (perhaps in preliminary form) here.
The one thing I do not think is in either place is a discussion of $T$-degrees, or constructibility degrees. Hugh discusses some of this (briefly) in The cardinals below $|[\omega_1]^{<\omega_1}|$, APAL 140 (2006), 161–232, but there is a bit more than this.
Richard Ketchersid wrote up the basics in a nice article, More structural consequences of $\mathsf{AD}$, in Set Theory and Its Applications, Contemporary Mathematics, vol. 533, Amer. Math.
Soc., Providence, RI, 2011, pp. 71-106. To define these degrees, assume the axiom of determinacy, so we have Martin's cone measure on Turing-invariant sets of reals. Given sets of ordinals
$T$ and $S$, define $S\lt T$ iff for almost every $x$ (in the sense of Martin's measure) $$ L[T,x]\cap\mathbb R\setminus L[S,x]\ne\emptyset. $$ It turns out that for any two sets of
ordinals $S,T$, precisely one of the following holds:
1. $S\lt T$, in which case for almost every $x$, the reals of $L[S,x]$ are in $L[T,x]$.
2. $T\lt S$, in which case for almost every $x$, $\mathbb R\cap L[T,x]\subset L[S,x]$.
3. For almost every $x$, $\mathbb R\cap L[T,x]=\mathbb R\cap L[S,x]$.
If option 3 holds, we say that $S$ and $T$ have the same degree. The relation $\lt$ induces a well-ordering of degrees. This is established, together with the basic properties of these
degrees, via Prikry-like arguments.
It is possible that there are yet additional results not in any of the above. In that case, I doubt there are formal written notes, but there may be accounts in notes from seminar talks.
Thanks for your nice answer. – Mohammad Golshani Jun 8 '13 at 10:36
{"url":"http://mathoverflow.net/questions/132326/unpublished-works-of-woodin-on-sch-and-radin-forcing","timestamp":"2014-04-19T02:23:59Z","content_type":null,"content_length":"58369","record_id":"<urn:uuid:cc32a423-b65b-490d-83b5-0e2c2221f548>","cc-path":"CC-MAIN-2014-15/segments/1397609535745.0/warc/CC-MAIN-20140416005215-00173-ip-10-147-4-33.ec2.internal.warc.gz"}
Larkspur, CA Algebra 1 Tutor
Find a Larkspur, CA Algebra 1 Tutor
...Please let me reverse your feelings toward it. I would welcome the opportunity and look forward to hearing from you soon. Regards, Misae. Algebra 1: Crystal Clear Explanation!
3 Subjects: including algebra 1, Japanese, prealgebra
...Since repetition is very important in studying math, we work on extra problems either from the book or that I offer myself. The student should work on as much of the current homework before a
session as possible. This allows us to focus on concepts or specific problems that the student is having a problem with.
12 Subjects: including algebra 1, calculus, algebra 2, geometry
...I'll help you identify where your strengths and weaknesses are and find any specific problems with your understanding of the material. Then, we'll break your problem areas down into simple
steps and walk through their applications. Finally, we'll reinforce your new skills while challenging you to apply your skill set to ever more subtle and challenging applications.
29 Subjects: including algebra 1, English, reading, writing
...Incidentally, I have passed the rigorous CSET math sections I & II, which includes Algebra, Geometry, Trigonometry and more advanced math as well. I've tutored many kids in math. Having been a
substitute teacher in the classroom and having passed the CSET for math levels 1 & 2 most of my tutori...
37 Subjects: including algebra 1, English, reading, writing
...I've supplemented my degree with additional courses in literature, drama, writing, psychology and music. My teaching style can best be described as tailored to each student's needs and
learning modalities.I began teaching elementary school twenty years ago. I have an elementary teaching credential from the state of California.
15 Subjects: including algebra 1, reading, English, elementary (k-6th)
Related Larkspur, CA Tutors
Larkspur, CA Accounting Tutors
Larkspur, CA ACT Tutors
Larkspur, CA Algebra Tutors
Larkspur, CA Algebra 2 Tutors
Larkspur, CA Calculus Tutors
Larkspur, CA Geometry Tutors
Larkspur, CA Math Tutors
Larkspur, CA Prealgebra Tutors
Larkspur, CA Precalculus Tutors
Larkspur, CA SAT Tutors
Larkspur, CA SAT Math Tutors
Larkspur, CA Science Tutors
Larkspur, CA Statistics Tutors
Larkspur, CA Trigonometry Tutors | {"url":"http://www.purplemath.com/Larkspur_CA_algebra_1_tutors.php","timestamp":"2014-04-17T10:45:06Z","content_type":null,"content_length":"24106","record_id":"<urn:uuid:679144fd-ea5b-44f4-825f-a0acbceab1de>","cc-path":"CC-MAIN-2014-15/segments/1398223205375.6/warc/CC-MAIN-20140423032005-00430-ip-10-147-4-33.ec2.internal.warc.gz"} |
Tetrahedron Faces
Copyright © University of Cambridge. All rights reserved.
'Tetrahedron Faces' printed from http://nrich.maths.org/
One face of a regular tetrahedron is painted blue and each of the remaining faces are painted using one of the colours red, green or yellow.
How many different possibilities are there, if we count those that just differ by a rotation as the same?
How many different possibilities would there be if the tetrahedron was irregular? | {"url":"http://nrich.maths.org/485/index?nomenu=1","timestamp":"2014-04-18T03:01:52Z","content_type":null,"content_length":"3400","record_id":"<urn:uuid:dd0f364a-1dee-46fb-ba3a-74933e3bc332>","cc-path":"CC-MAIN-2014-15/segments/1397609532480.36/warc/CC-MAIN-20140416005212-00226-ip-10-147-4-33.ec2.internal.warc.gz"} |
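A brute-force check of the counting question above (a sketch added here, not part of the problem page): a rotation matching two such paintings must fix the single blue face, and the rotations of a regular tetrahedron fixing one face just cyclically permute the other three. An irregular tetrahedron has no rotational symmetry, so every assignment of the three remaining colours is distinct.

```python
from itertools import product

def regular_count():
    """Paintings of the three non-blue faces with R/G/Y, counted up to the
    cyclic rotations about the blue face's axis."""
    seen, orbits = set(), 0
    for c in product("RGY", repeat=3):
        if c in seen:
            continue
        orbits += 1
        for k in range(3):            # identity plus two 120-degree turns
            seen.add(c[k:] + c[:k])
    return orbits

print(regular_count())   # regular tetrahedron, paintings up to rotation
print(3 ** 3)            # irregular tetrahedron: no symmetry to quotient by
```

This gives 11 paintings for the regular tetrahedron and 27 for the irregular one.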
CMS Winter 2007 Meeting
History and Philosophy of Mathematics
Org: Tom Archibald (SFU) and Deborah Kent (Hillsdale College)
Many historians have observed that the eighteenth century was a time when the recognition of patterns produced results directly from hand calculations. Around 1840 Jacobi remarked that the
limitations of this calculational method as a source of discovery were becoming apparent. Reading with hindsight, his remarks seem prophetic, directly preceding a move to a more "modern",
conceptual viewpoint. This shift has been described by several writers in terms of a change from a formula-based to a concept-based mathematics. However, formulas have a variety of purposes: they
may be used to classify objects or to represent generic objects. The distinction between the older and the newer views is thus difficult to summarize in terms of the distinction between formulas
and concepts or structures.
In this paper, we consider the distinction between formulas and concepts against the background of differing views about the ontological status of mathematical objects. Some writers viewed
mathematical objects as naturally occurring, so that their relations were objectively given, and not merely subject to the whim of the mathematician and the requirements of consistency. This
viewpoint has several advantages for the historian, while still shedding light on the distinction in question.
In my talk I will trace the development of the idea that the continuum is indecomposable - that it cannot be split into disjoint nonempty parts - an idea, I contend, that goes back at least as far as Aristotle. Time permitting, I will also describe how indecomposable continua are modelled in contemporary mathematics.
The first life annuity evaluation in England was done by Edmund Halley in 1693 related to his construction of the Breslau life table. The work had little impact until Abraham De Moivre published
his work on life annuities in 1725. After that point several English mathematicians, most notable among them Thomas Simpson, became interested in life annuity valuations. The topic is covered
from two perspectives. First, some eighteenth century annuity tables have been recalculated to try to discover the author's intentions and mistakes, if any. The second thread of enquiry is to try
to answer the question why books on annuities were published by mathematicians when some modern historians have doubted whether they were ever used in practice.
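De Moivre's valuations rested on his hypothesis of a linearly decreasing survival function. A modern reconstruction of a year-end life-annuity value under that assumption might look like the sketch below; the limiting age and interest rate are illustrative choices, not historical figures:

```python
def annuity_value(age, omega=86, i=0.05):
    """Expected present value of 1 paid at each year-end the annuitant is
    alive, with survival decreasing linearly to the limiting age omega
    (De Moivre's hypothesis)."""
    n = omega - age            # remaining possible whole years of life
    v = 1.0 / (1.0 + i)        # annual discount factor
    # probability of being alive t years from now is (n - t) / n
    return sum(v**t * (n - t) / n for t in range(1, n + 1))

print(round(annuity_value(40), 4))
```

With zero interest the value collapses to (n - 1)/2, a handy sanity check on the recalculation of old tables.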
Mathematics is often claimed to be essential for science. While extensive use is made of mathematics, it is far from clear that it plays the same sort of role in explaining and understanding that
the rest of science plays. An interesting recent example concerns the 17-year life cycle of the cicada. Why that long? Because 17 is a prime number. Of course, more is involved, but it is claimed that primality is part of the explanation. I shall argue that this is misconceived. Mathematics, in this sense of explanation, explains nothing in the natural world. On the other hand, there is a
second sense of explanation - understanding, as in "I want to understand your theory; please explain it to me." What do we understand of, say, physical properties such as the spin of an electron?
Can it be explained, in the sense of providing understanding? I would say that we know nothing, except that electron spin is in some sense analogous to a certain matrix, and so on. In this sense
of explanation, namely, understanding, mathematics is often essential. The only understanding we have of the spin of an electron (and of much in the world), is by means of mathematical structures
that we do understand and that we conjecture to be structurally similar to the natural world. So, mathematics is essential to science, but only in the second sense.
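The primality claim about the cicada can be put in concrete terms with a toy computation (an illustration added here, not the talk's argument): against any hypothetical predator on a short cycle, a prime-length life cycle pushes coincident emergences as far apart as possible.

```python
from math import gcd

def lcm(a, b):
    return a * b // gcd(a, b)

# For each candidate cicada cycle, the soonest coincidence (in years) with
# the worst-case predator cycle of between 2 and 9 years.
for cycle in range(12, 19):
    soonest = min(lcm(cycle, p) for p in range(2, 10))
    print(cycle, soonest)
```

Within this range only the primes 13 and 17 avoid an early coincidence, and 17 fares best: a prime cycle shares no factor with any shorter period, so the lcm is maximal.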
In 1866, Canada's first business mathematics textbook was published. Within a year, it had been replaced by an Americanized version. While it went through at least a dozen editions - several published in Canada - within the next decade, the original version was never republished. Some editions were "published" by business school principals who were hardly involved with the production
of the book; in one case, even the supposed author was an impostor. In this talk, we will examine the background to these irregularities. We will also consider the difficulties of studying the
publication history of a book for which misleading title pages were the rule rather than the exception.
Many reasons have been advanced for being a pluralist about logic, i.e., for holding that more than one system of logic is correct, and some of these have something to do with mathematics. For
instance, it is sometimes claimed that the logic of natural language must be non-classical, due to the vagueness of the predicates involved, while classical logic is correct for mathematical
reasoning. On the other hand, while it is common enough to find people willing to advocate pluralism about mathematics in the pub, it is harder to find ones who will do so in print. The reason is
that it is simply harder to make sense of the claim that more than one mathematics is correct. This talk will describe what I take to be the best bets for making good the claims of mathematical
pluralism. This will involve closer investigation of the relationship between mathematical and logical pluralism, and of whether a view can be genuinely pluralist if one holds that there are two
correct systems, but one is a subsystem of the other.
Claudius Ptolemy is best known for writing the Almagest, his mammoth and influential compendium of astronomical hypotheses. For decades now, scholars have debated whether Ptolemy merely intended
to present mathematical fictions, with the aim of saving the phenomena of planetary motion, or whether he endeavored to expound a cosmological system that he truly believed to exist. In other
words, was Ptolemy an instrumentalist or a realist? Examination of Ptolemy's astronomical hypotheses in the context of his more methodological and philosophical expositions suggests that Ptolemy
did believe in the reality of mathematical objects, astronomical and otherwise. To begin with, in Almagest 1.1, Ptolemy adopts Aristotle's classification of the three theoretical sciences:
physics, mathematics, and theology. He describes the objects that each of the sciences studies, and he characterizes mathematical objects as form, shape, number, size, place, time, and motion
from place to place. In adopting an Aristotelian ontology, Ptolemy demonstrates that he believes in the existence of mathematical entities. Moreover, his realism is evident in the method he
utilizes in the Harmonics. Ptolemy introduces the concept of harmonia, which he defines as an active power in the cosmos that enforms rational objects. Music, human souls, and heavenly bodies all
exhibit the same harmonious ratios. Ptolemy's mathematical correlation of these diverse phenomena is proof that he believed that the mathematical entities heard in music, posited in the soul, and
observed in the heavens really do exist.
Vettius Valens wrote a treatise on astrology in Antioch during the late 2nd century of our era, about the same time that Ptolemy was active in Alexandria; the book is among the richest sources we
have for the use of mathematical methods in ancient astrology. Vettius Valens had a special enthusiasm for calculations predicting the length of a person's life, and he illustrates them and their
efficacy through examples drawn from the lives of his clients. The details of these examples cast interesting light both on the role of mathematical methods in Greek astrology and on the
manipulation, whether conscious or unconscious, of data to obtain exact agreement between theories and empirical data.
This talk will investigate the variety of ways the mobilization to enter World War impacted the mathematical community in the United States. It will also explore the efforts of individuals working
to aid the war effort with their mathematical training in colleges and universities, as well as in military and industrial contexts.
Greek mathematical texts do not consist purely of solutions to problems and proofs of theorems; they often begin with introductions in which the mathematicians speak explicitly about their work.
The statements of Greek geometers about geometry shed light on their motivations, their attitudes towards their work, and, most fundamentally, what exactly they think that mathematics is.
The direction toward which Muslim faithful must face for prayer, the qibla, garnered a great deal of attention from medieval astronomers. But, of course, the mathematical astronomy they inherited
from India and Greece did not instantly provide a solution. One approach, attributed to one of the earliest and greatest Muslim scientists Habash al-Hasib, solved the problem geometrically,
borrowing from the Greek tradition of the analemma. This technique, a clever reduction of the problem from three dimensions to two using several rotations within the celestial sphere, would be
transformed into a popular trigonometric tool that may be thought of as a sequence of coordinate transformations on the celestial sphere. We will survey the relevant history, and emphasize the
beautiful mathematics of this and related methods. | {"url":"http://cms.math.ca/Events/winter07/abs/hpm.html","timestamp":"2014-04-16T04:12:00Z","content_type":null,"content_length":"15811","record_id":"<urn:uuid:564d4f56-92ac-435e-bcd8-eb5c6fb5e575>","cc-path":"CC-MAIN-2014-15/segments/1397609539493.17/warc/CC-MAIN-20140416005219-00527-ip-10-147-4-33.ec2.internal.warc.gz"} |
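In modern terms, the qibla is the initial great-circle bearing toward Mecca, and the coordinate-transformation view described above corresponds to the standard spherical-trigonometry formula. The sketch below is a reconstruction for illustration, not Habash's analemma construction, and the Kaaba coordinates are approximate:

```python
from math import atan2, cos, degrees, radians, sin

MECCA_LAT, MECCA_LON = 21.42, 39.83   # approximate Kaaba coordinates

def qibla_bearing(lat, lon):
    """Initial great-circle bearing, in degrees east of north, from the
    point (lat, lon) toward Mecca."""
    p1, p2 = radians(lat), radians(MECCA_LAT)
    dl = radians(MECCA_LON - lon)
    x = sin(dl) * cos(p2)
    y = cos(p1) * sin(p2) - sin(p1) * cos(p2) * cos(dl)
    return degrees(atan2(x, y)) % 360
```

As a sanity check, from a point on Mecca's meridian due south of it the bearing comes out 0 (due north), and from due north it comes out 180.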
The MAXIMUM number of amino acid molecules which can be coded for by the triplet code is : a. 3 b. 20 c. 60 d. 64
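(Added note, not a reply from the thread.) The arithmetic behind the largest answer choice: a triplet over the 4 RNA bases allows 4^3 distinct codons, which is the ceiling on how many amino acids a triplet code could distinguish; the standard genetic code in fact specifies only 20.

```python
from itertools import product

codons = list(product("ACGU", repeat=3))  # all triplets over the 4 bases
print(len(codons))   # 4 ** 3 = 64
```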
{"url":"http://openstudy.com/updates/50bd0743e4b0de42629f9788","timestamp":"2014-04-20T18:35:29Z","content_type":null,"content_length":"37175","record_id":"<urn:uuid:fc45f5a3-7ad3-42a1-b824-faa169a26e9d>","cc-path":"CC-MAIN-2014-15/segments/1397609539066.13/warc/CC-MAIN-20140416005219-00177-ip-10-147-4-33.ec2.internal.warc.gz"}
Math Forum Discussions
Date Subject Author
3/25/13 Mathematics and the Roots of Postmodern Thought David Petry
3/25/13 Re: Mathematics and the Roots of Postmodern Thought fom
3/25/13 Re: Mathematics and the Roots of Postmodern Thought David Petry
3/25/13 Re: Mathematics and the Roots of Postmodern Thought fom
3/25/13 Re: Mathematics and the Roots of Postmodern Thought fom
3/25/13 Re: Mathematics and the Roots of Postmodern Thought David Petry
3/25/13 Re: Mathematics and the Roots of Postmodern Thought scattered
3/25/13 Re: Mathematics and the Roots of Postmodern Thought David Petry
3/25/13 Re: Mathematics and the Roots of Postmodern Thought fom
3/25/13 Re: Mathematics and the Roots of Postmodern Thought David Petry
3/25/13 Re: Mathematics and the Roots of Postmodern Thought fom
3/25/13 Re: Mathematics and the Roots of Postmodern Thought David Petry
3/25/13 Re: Mathematics and the Roots of Postmodern Thought fom
3/26/13 Re: Mathematics and the Roots of Postmodern Thought fom
3/30/13 Re: Mathematics and the Roots of Postmodern Thought Jesse F. Hughes
3/25/13 Re: Mathematics and the Roots of Postmodern Thought dan.ms.chaos@gmail.com
3/25/13 Re: Mathematics and the Roots of Postmodern Thought David Petry
3/25/13 Re: Mathematics and the Roots of Postmodern Thought fom
3/25/13 Re: Mathematics and the Roots of Postmodern Thought David Petry
3/25/13 Re: Mathematics and the Roots of Postmodern Thought fom
3/30/13 Re: Mathematics and the Roots of Postmodern Thought Jesse F. Hughes
3/30/13 Re: Mathematics and the Roots of Postmodern Thought fom
3/25/13 Re: Mathematics and the Roots of Postmodern Thought dan.ms.chaos@gmail.com
3/25/13 Re: Mathematics and the Roots of Postmodern Thought fom
3/25/13 Re: Mathematics and the Roots of Postmodern Thought David Petry
3/26/13 Re: Mathematics and the Roots of Postmodern Thought fom
3/26/13 Re: Mathematics and the Roots of Postmodern Thought Virgil
3/26/13 Re: Mathematics and the Roots of Postmodern Thought David Petry
3/26/13 Re: Mathematics and the Roots of Postmodern Thought fom
3/26/13 Re: Mathematics and the Roots of Postmodern Thought dan.ms.chaos@gmail.com
3/26/13 Re: Mathematics and the Roots of Postmodern Thought dan.ms.chaos@gmail.com
3/26/13 Re: Mathematics and the Roots of Postmodern Thought fom
3/26/13 Re: Mathematics and the Roots of Postmodern Thought dan.ms.chaos@gmail.com
3/26/13 Re: Mathematics and the Roots of Postmodern Thought fom
3/26/13 Re: Mathematics and the Roots of Postmodern Thought David Petry
3/26/13 Re: Mathematics and the Roots of Postmodern Thought dan.ms.chaos@gmail.com
3/26/13 Re: Mathematics and the Roots of Postmodern Thought Frederick Williams
4/2/13 Re: Mathematics and the Roots of Postmodern Thought Jesse F. Hughes
3/26/13 Re: Mathematics and the Roots of Postmodern Thought fom
3/26/13 Re: Mathematics and the Roots of Postmodern Thought fom
3/26/13 Re: Mathematics and the Roots of Postmodern Thought fom
3/29/13 Re: Mathematics and the Roots of Postmodern Thought Shmuel (Seymour J.) Metz
3/29/13 Re: Mathematics and the Roots of Postmodern Thought fom
3/30/13 Re: Mathematics and the Roots of Postmodern Thought Shmuel (Seymour J.) Metz
3/31/13 Re: Mathematics and the Roots of Postmodern Thought fom
3/30/13 Re: Mathematics and the Roots of Postmodern Thought Jesse F. Hughes
3/26/13 Re: Mathematics and the Roots of Postmodern Thought fom
3/30/13 Re: Mathematics and the Roots of Postmodern Thought Jesse F. Hughes
4/1/13 Re: Mathematics and the Roots of Postmodern Thought David Petry
4/1/13 Re: Mathematics and the Roots of Postmodern Thought fom
4/1/13 Re: Mathematics and the Roots of Postmodern Thought dan.ms.chaos@gmail.com
4/1/13 Re: Mathematics and the Roots of Postmodern Thought Frederick Williams
4/1/13 Re: Mathematics and the Roots of Postmodern Thought dan.ms.chaos@gmail.com
4/1/13 Re: Mathematics and the Roots of Postmodern Thought fom
4/1/13 Re: Mathematics and the Roots of Postmodern Thought dan.ms.chaos@gmail.com
4/1/13 Re: Mathematics and the Roots of Postmodern Thought Frederick Williams
4/1/13 Re: Mathematics and the Roots of Postmodern Thought dan.ms.chaos@gmail.com
4/1/13 Re: Mathematics and the Roots of Postmodern Thought David Petry
4/1/13 Re: Mathematics and the Roots of Postmodern Thought David Petry
4/1/13 Re: Mathematics and the Roots of Postmodern Thought dan.ms.chaos@gmail.com
4/1/13 Re: Mathematics and the Roots of Postmodern Thought David Petry
4/1/13 Re: Mathematics and the Roots of Postmodern Thought dan.ms.chaos@gmail.com
4/1/13 Re: Mathematics and the Roots of Postmodern Thought David Petry
4/1/13 Re: Mathematics and the Roots of Postmodern Thought dan.ms.chaos@gmail.com
4/1/13 Re: Mathematics and the Roots of Postmodern Thought David Petry
4/1/13 Re: Mathematics and the Roots of Postmodern Thought dan.ms.chaos@gmail.com
4/1/13 Re: Mathematics and the Roots of Postmodern Thought dan.ms.chaos@gmail.com
4/1/13 Re: Mathematics and the Roots of Postmodern Thought David Petry
4/1/13 Re: Mathematics and the Roots of Postmodern Thought dan.ms.chaos@gmail.com
4/1/13 Re: Mathematics and the Roots of Postmodern Thought David Petry
4/1/13 Re: Mathematics and the Roots of Postmodern Thought dan.ms.chaos@gmail.com
4/1/13 Re: Mathematics and the Roots of Postmodern Thought fom
4/1/13 Re: Mathematics and the Roots of Postmodern Thought fom
4/1/13 Re: Mathematics and the Roots of Postmodern Thought David Petry
4/1/13 Re: Mathematics and the Roots of Postmodern Thought Virgil
4/2/13 Re: Mathematics and the Roots of Postmodern Thought fom
4/1/13 Re: Mathematics and the Roots of Postmodern Thought fom
4/1/13 Re: Mathematics and the Roots of Postmodern Thought fom
4/1/13 Re: Mathematics and the Roots of Postmodern Thought fom
4/1/13 Re: Mathematics and the Roots of Postmodern Thought Virgil
4/1/13 Re: Mathematics and the Roots of Postmodern Thought Jesse F. Hughes
4/1/13 Re: Mathematics and the Roots of Postmodern Thought David Petry
4/1/13 Re: Mathematics and the Roots of Postmodern Thought Jesse F. Hughes
4/1/13 Re: Mathematics and the Roots of Postmodern Thought rt servo
4/1/13 Re: Mathematics and the Roots of Postmodern Thought fom
4/1/13 Re: Mathematics and the Roots of Postmodern Thought Jesse F. Hughes
4/1/13 Re: Mathematics and the Roots of Postmodern Thought fom
4/1/13 Re: Mathematics and the Roots of Postmodern Thought Virgil
4/1/13 Re: Mathematics and the Roots of Postmodern Thought David Petry
4/1/13 Re: Mathematics and the Roots of Postmodern Thought fom
4/1/13 Re: Mathematics and the Roots of Postmodern Thought Jesse F. Hughes
4/6/13 Re: Mathematics and the Roots of Postmodern Thought Rotwang
4/7/13 Re: Mathematics and the Roots of Postmodern Thought David Petry
4/7/13 Re: Mathematics and the Roots of Postmodern Thought Jesse F. Hughes
4/1/13 Re: Mathematics and the Roots of Postmodern Thought fom
4/1/13 Re: Mathematics and the Roots of Postmodern Thought David Petry
4/1/13 Re: Mathematics and the Roots of Postmodern Thought fom
3/25/13 Re: Mathematics and the Roots of Postmodern Thought bacle
3/29/13 Re: Mathematics and the Roots of Postmodern Thought HOPEINCHRIST | {"url":"http://mathforum.org/kb/message.jspa?messageID=8800925","timestamp":"2014-04-19T08:09:55Z","content_type":null,"content_length":"134403","record_id":"<urn:uuid:af9a1d65-314d-413d-bbb0-340afa07fd06>","cc-path":"CC-MAIN-2014-15/segments/1398223203841.5/warc/CC-MAIN-20140423032003-00658-ip-10-147-4-33.ec2.internal.warc.gz"} |
Winston, GA Trigonometry Tutor
Find a Winston, GA Trigonometry Tutor
...I have taught in the special RESA program in Georgia for students that have psychiatric/severe special needs including aspergers, autism, bi-polar for more than 8 years. The grade levels
include grades 4 thru 12. I have developed special individualized programs for each of these special needs students.
47 Subjects: including trigonometry, chemistry, English, physics
...It is my mission to help students understand math, to see how it fits together, and to become independent, successful learners. I know that takes time and consistency, both of which I am more
than willing to provide. I am a former math teacher, and a former teacher educator.
8 Subjects: including trigonometry, statistics, algebra 1, algebra 2
...Helped individuals with the math sections of the GRE, SAT, and ACT. Awarded a full tuition scholarship for graduate work in Business Administration at the College of William and Mary in
Virginia. At the end of the first year my peers gave me the Mason Gold Standard Award which is a recognition of one who unselfishly contributes to the academic achievement of others through
28 Subjects: including trigonometry, calculus, physics, linear algebra
...After graduation, via a strange and winding path, I began to take an interest in becoming a health care actuary. While enjoying the classroom again, I also passed 6 actuarial exams covering
Calculus (again), Probability, Applied Statistics, Numerical Methods, and Compound Interest. It's this sp...
21 Subjects: including trigonometry, calculus, statistics, geometry
I am a Georgia-certified educator with 12+ years of teaching math. I have taught a wide range of comprehensive math for grades 6 through 12 and have experience prepping students for EOCT, CRCT, SAT
and ACT. Unlike many others who know the math content, I know how to employ effective instructional strategies to help students understand and achieve mastery.
12 Subjects: including trigonometry, statistics, algebra 1, algebra 2 | {"url":"http://www.purplemath.com/Winston_GA_trigonometry_tutors.php","timestamp":"2014-04-20T11:11:40Z","content_type":null,"content_length":"24400","record_id":"<urn:uuid:7cf4ea30-a075-4b0e-a280-0f0fdd9cc1c3>","cc-path":"CC-MAIN-2014-15/segments/1397609538423.10/warc/CC-MAIN-20140416005218-00645-ip-10-147-4-33.ec2.internal.warc.gz"} |
Question attached below
{"url":"http://openstudy.com/updates/502f359fe4b0ac2883165a6e","timestamp":"2014-04-17T12:53:39Z","content_type":null,"content_length":"147400","record_id":"<urn:uuid:1e77eea9-5b73-4f21-a887-2416d4c43d71>","cc-path":"CC-MAIN-2014-15/segments/1398223211700.16/warc/CC-MAIN-20140423032011-00113-ip-10-147-4-33.ec2.internal.warc.gz"}
Geometry has been associated with motion, either implicitly or explicitly, from very early times in human history. There are relationships between motion and geometry both in how motion is described
and in how it is harnessed and directed. Geometric notions underlie such mechanical devices as the potter's wheel and the wheeled cart, the ramp (or inclined plane), the lever, the pulley, and the
coil. Although formal geometrical descriptions and explicit functionality principles were not supplied until centuries after such mechanisms came into widespread use, their connections with linked
linear and circular motion, horizontal and vertical or forward and sideways motion, and winding-in and-out (spiral) or winding-up and-down (helical) motion are unmistakable. The substantial
interrelationships between motion and geometry have been a continuing focus of scientific study and technological development from the eras of Archimedes of Syracuse, Leonardo da Vinci and Galileo
Galilei, Rene Descartes, Isaac Newton, Pierre-Louis Moreau de Maupertuis, James Clerk Maxwell, and Albert Einstein right through to the present time. Those linkages bear heavily on how motion is
modeled and ultimately controlled, be it by mechanical contrivance (for instance, in a pendulum clock) or through the discovery of how prevailing conditions influence outcomes (for example, finding
the trajectory of an object that is subject to gravity and that is thrown horizontally off a cliff).
From the construction of the Great Pyramids and of Stonehenge, which both involved the transport and careful positioning of massive blocks or lintels, to the reckoning of celestial motions; from the
Renaissance design or engineering of a prototype submarine, bicycle, or helicopter to latter-day satellite positioning or in vivo intestinal exploration and examination; from the movements of
subatomic particles to the meanderings of computer-modeled sidewinding snakes, geometry supplies an indispensable vocabulary for the mathematical description of whatever motions are observed,
achievable, or sought. As mathematics is the language of science, so geometry is the language of motion. The motivation may have changed from a desire to understand, predict, or direct motions by way
of geometric modeling and analysis to a focus on designing and controlling the mechanical generation of particular motions on the basis of their geometric description, computer simulation, and
robotic replication. However, the value of this geometric language is undiminished.
Some of the modern developments described in the following chapters include the geometric control of robot motion and craft orientation, how high-power precision micromotors are engineered for less
invasive surgery and self-focusing lens applications, what a mobile robot on a surface has in common with one moving in three dimensions, and how the motion-control problem is simplified by a coupled
oscillator's geometric grouping of degrees of freedom and motion time scales.
The four papers in these proceedings provide a view through the scientific portal of today's motion-control geometric research into tomorrow's technology. The mathematics needed to carry out this
research is that of modern differential geometry, and the questions raised in the field of motion-control geometry go directly to the research frontier. Some of the mathematical tools that are useful here are Lie algebras of vector fields, differential forms and exterior algebra, and affine connections. Another tool that has proven useful is gauge theory - remarkably, the same sort of geometry that is
is used in elementary-particle physics. It is fortunate that mathematicians have developed the mathematical tools in a general context so that they can be used for many purposes. In particular, the
mathematical notion of the holonomy of a connection has been around for some time—an idea that links locomotion generation | {"url":"http://www.nap.edu/openbook.php?record_id=5772&page=1","timestamp":"2014-04-16T07:44:07Z","content_type":null,"content_length":"40105","record_id":"<urn:uuid:d7cfd3ff-e542-4975-907c-6a0424125d65>","cc-path":"CC-MAIN-2014-15/segments/1397609521558.37/warc/CC-MAIN-20140416005201-00496-ip-10-147-4-33.ec2.internal.warc.gz"} |
Algebra 2
Geodesic Dome Project
Geodesic Domes
For my math project I decided, after much thought and research, to build a geodesic dome. This is a dome constructed solely from different triangles, causing it to be extremely stable. My original
idea when I came across geodesic domes was to build one out of toothpicks or small sticks in order to demonstrate the strength of the shape but after doing some research I realized that it would be
much more reasonable to build it out of triangles cut from paper. This is the idea I stuck with because it still demonstrates strength but also does a much better job of using geometry.
The dome I built is known as an Icosa Alternate and is based on the icosahedron, one of the five solids discovered by the ancient Greeks. It is constructed from a combination of equilateral
and isosceles triangles that fit together inside the icosahedron. The side lengths of the equilateral triangle, which are all A sides, are found by multiplying the chord factor of A, which is 0.61803,
by the radius of the dome you plan to build. The two B side lengths of the isosceles triangle are found in the same way, except that the chord factor of B, which is 0.54653, is used.
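The strut-length arithmetic just described can be sketched in a few lines of Python (the chord factors are the ones given above; the 10-unit radius is just an example value):

```python
# Strut lengths for the Icosa Alternate dome: each side length is the
# chord factor for that strut multiplied by the dome radius.
CHORD_FACTORS = {"A": 0.61803, "B": 0.54653}

def strut_lengths(radius):
    """Return the A and B strut lengths for a dome of the given radius."""
    return {side: factor * radius for side, factor in CHORD_FACTORS.items()}

for side, length in strut_lengths(10.0).items():
    print(side, round(length, 4))
```

For a 10-unit radius this gives A struts of about 6.18 units and B struts of about 5.47 units.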
This dome made up of triangles derived from the icosahedron is actually used worldwide. It is the design used for many greenhouses and art structures placed around gardens. Not only that, but after
Walther Bauersfeld invented the dome following World War One, the army took it on as the structure for many of its buildings due to its structural stability. The smaller dome that I have constructed
is merely an ant-sized model of some of the domes out there, but it is still able to display the genius that is found in geometry.
- I am very proud of the fact that I have aced all of my tests in algebra and am passing the class very easily. I am currently very close to the top of my class which includes half sophomores. | {"url":"http://students.animashighschool.com/~ekoppdevol/Algerbra%202.html","timestamp":"2014-04-18T02:58:11Z","content_type":null,"content_length":"6976","record_id":"<urn:uuid:e81e994b-32dd-4ee5-9b24-a9aa6550026b>","cc-path":"CC-MAIN-2014-15/segments/1397609532480.36/warc/CC-MAIN-20140416005212-00076-ip-10-147-4-33.ec2.internal.warc.gz"} |
References for geometric K-homology
Can anyone give me some good references to read geometric K-homology? I know a bit of Kasparov's KK theory and analytic K-homology.
If I remember correctly one can use Martin Jakob's geometric approach to generalized homology theories:
Jakob, Martin An alternative approach to homology. Une dégustation topologique [Topological morsels]: homotopy theory in the Swiss Alps (Arolla, 1999), 87–97, Contemp. Math.,
265, Amer. Math. Soc., Providence, RI, 2000.
Jakob, Martin A bordism-type description of homology. Manuscripta Math. 96 (1998), no. 1, 67–80.
Baum-Douglas geometric K-homology groups are defined in this way, this is very well explained in (sections 4 and 5 in the arXiv version):
Baum, Paul; Higson, Nigel; Schick, Thomas On the equivalence of geometric and analytic $K$-homology. Pure Appl. Math. Q. 3 (2007), no. 1, part 3, 1–24.
Useful Proofs
Take a break from the drudgery with some of these jokes, song parodies, anecdotes and assorted humor that has been collected from friends & from websites across the Internet. This humor is
light-hearted and sometimes slightly offensive to the easily-offended, so you are forewarned. I have taken care to censor "humor" with overt sexual overtones (or undertones), degrading political
taunts, and hateful tirades, so it is all workplace-safe. I have also tried to warn of any links that will result in audio clips so you can take appropriate precautions. Please send any potential
candidates for this humor page to the e-mail link above.
Proof by example:
The author gives only the case n = 2 and suggests that it contains most of the ideas of the general proof.
Proof by intimidation:
'Trivial.'
Proof by vigorous hand-waving:
Works well in a classroom or seminar setting.
Proof by cumbersome notation:
Best done with access to at least four alphabets and special symbols.
Proof by exhaustion:
An issue or two of a journal devoted to your proof is useful.
Proof by omission:
"The reader may easily supply the details." "The other 253 cases are analogous." "..."
Proof by obfuscation:
A long plotless sequence of true and/or meaningless syntactically related statements.
Proof by wishful citation:
The author cites the negation, converse, or generalization of a theorem from the
literature to support his claims.
Proof by funding:
How could three different government agencies be wrong?
Proof by eminent authority:
"I saw Karp in the elevator and he said it was probably NP- complete."
Proof by personal communication:
"Eight-dimensional colored cycle stripping is NP-complete [Karp, personal
Proof by reduction to the wrong problem:
"To see that infinite-dimensional colored cycle stripping is decidable, we reduce it to
the halting problem."
Proof by reference to inaccessible literature:
The author cites a simple corollary of a theorem to be found in a privately circulated
memoir of the Slovenian Philological Society, 1883.
Proof by importance:
A large body of useful consequences all follow from the proposition in question.
Proof by accumulated evidence:
Long and diligent search has not revealed a counterexample.
Proof by cosmology:
The negation of the proposition is unimaginable or meaningless. Popular for proofs of the
existence of God.
Proof by mutual reference:
In reference A, Theorem 5 is said to follow from Theorem 3 in reference B, which is
shown to follow from Corollary 6.2 in reference C, which is an easy consequence of
Theorem 5 in reference A.
Proof by metaproof:
A method is given to construct the desired proof. The correctness of the method is
proved by any of these techniques.
Proof by picture:
A more convincing form of proof by example. Combines well with proof by omission.
Proof by vehement assertion:
It is useful to have some kind of authority relation to the audience.
Proof by ghost reference:
Nothing even remotely resembling the cited theorem appears in the reference given.
Proof by forward reference:
Reference is usually to a forthcoming paper of the author, which is often not as
forthcoming as at first.
Proof by semantic shift:
Some of the standard but inconvenient definitions are changed for the statement of
the result.
Proof by appeal to intuition:
Cloud-shaped drawings frequently help here.

The above material is by Dana Angluin and was published in Sigact News, Winter-Spring, 1983, Volume 15 #1.
Returns the imaginary coefficient of a complex number in x + yi or x + yj text format.
If this function is not available and returns the #NAME? error, install and load the Analysis ToolPak add-in.
1. On the Tools menu, click Add-Ins.
2. In the Add-Ins available list, select the Analysis ToolPak box, and then click OK.
3. If necessary, follow the instructions in the setup program.
Inumber is a complex number for which you want the imaginary coefficient.
● Use COMPLEX to convert real and imaginary coefficients into a complex number.
The example may be easier to understand if you copy it to a blank worksheet.
4. Create a blank workbook or worksheet.
5. Select the example in the Help topic.
Note Do not select the row or column headers.
Selecting an example from Help
6. Press CTRL+C.
7. In the worksheet, select cell A1, and press CTRL+V.
8. To switch between viewing the results and viewing the formulas that return the results, press CTRL+` (grave accent), or on the Formulas tab, in the Formula Auditing group, click the Show Formulas button.
A B
Formula Description (Result)
=IMAGINARY("3+4i") Imaginary coefficient of the complex number 3+4i (4)
=IMAGINARY("0-j") Imaginary coefficient of the complex number 0-j (-1)
=IMAGINARY(4) Imaginary coefficient 4 (0) | {"url":"http://office.microsoft.com/en-us/excel-help/imaginary-HP005209120.aspx?CTT=5&origin=HP005204211","timestamp":"2014-04-16T22:36:01Z","content_type":null,"content_length":"23625","record_id":"<urn:uuid:ab73b203-d11a-48a1-8d55-48d46996871a>","cc-path":"CC-MAIN-2014-15/segments/1397609525991.2/warc/CC-MAIN-20140416005205-00237-ip-10-147-4-33.ec2.internal.warc.gz"} |
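The examples above can be reproduced outside Excel. The hypothetical Python helper below (not part of Excel) mimics IMAGINARY by translating the x + yi or x + yj text format into Python's built-in complex type, which uses the j suffix:

```python
def imaginary(inumber):
    """Return the imaginary coefficient of a complex number given in
    x + yi or x + yj text format, mimicking Excel's IMAGINARY."""
    # A plain real number such as 4 has an imaginary coefficient of 0.
    if isinstance(inumber, (int, float)):
        return 0.0
    # Python's complex() understands the j suffix, so translate i to j.
    return complex(inumber.replace("i", "j")).imag

print(imaginary("3+4i"))  # 4.0
print(imaginary("0-j"))   # -1.0
print(imaginary(4))       # 0.0
```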
Math Forum - Ask Dr. Math Archives: High School Equations/Graphs/Translations
Browse High School Equations, Graphs, Translations
Stars indicate particularly interesting answers or good places to begin browsing.
Selected answers to common questions:
Direct and indirect variation.
Slope, slope-intercept, standard form.
Could you give me three practical uses for computing parabolas and tell me the significance of computing the parabola in regard to its application?
Does the function f(x)=e^(1/x) have a local minimum?
How can you prove that two lines (neither vertical) are perpendicular if and only if the product of the gradients is equal to -1?
What is a quadrant?
Given 6x^2 + x - 1 = 0, how do I find the roots, the vertex, some coordinates, and from these graph it?
When I solve (x^2 + 1)/4 greater than or equal to (x + 2)/2 , of my final two answers, only one works.
What does Quintic mean?
How do I know when there is a function on a graph?
The graph of a rectangular hyperbola looks nothing like a rectangle. Where does the name come from?
What effect does multiplicity [e.g. (x+1)(x-2)^2 where -1 has a multiplicity of 1 and 2 of 2] have on a polynomial function?
The Roses of Grandi are given by polar equations r = a*sin(n*theta) and r = a*cos(n*theta) where n is an integer. How can I show that they are algebraic?
How do you find a parabola that has a line of symmetry other than a line parallel to the x-axis or y-axis?
When the equation y = 5sinx is graphed, the curve looks almost linear except near the points where y = 5. Since the sine curve originates from the circle, why isn't it completely curvy like a circle?
Why is the graph of y=f(x-c) just the graph of y=f(x) shifted c units to the right?
How would you describe how the two graphs of y = one over the square root of x squared and y = one over the square root of x squared minus 4 are similar and dissimilar?
z=(3-2i)^1/2, then find z^.
Can you help me piece a function together so that the following hold? It is increasing and concave up on (-infinity, 1) ...
Why should a curve change its position or sign at roots? Can't a curve have a positive value for all roots?
How do I sketch the curve y = log natural of x/ (1-x)?
Why does y = mx + b have to be that way?
What is the slope of the line y = -px + 5 if the line goes through (0, - 3)? Why does the y-axis have an infinite slope, and the x-axis have a slope of 0?
Why are the slopes of perpendicular lines negative reciprocals?
Given a circle through three points, what is the equation for the intersection of the perpendicular bisectors?
Can you help me with this? (M+N)/(N-M) + (M^2+N^2)/(N^2-M^2) - (2M)/(M-N) = ?
Can you show me the steps for solving conic equations and putting them in hyperbola, ellipse, or parabola form?
Solve the inequality and express the solution in terms of intervals.
Where is standard form used?
I need a formal proof showing that the sum of a positive number and its reciprocal is at least 2. I can prove it algebraically, but I need a visual justification.
How can you know whether a graph is symmetric to the x-axis, y-axis, or the origin? What does the symmetry mean?
How many tangent lines to the curve y = x/(x+1) pass through the point (1,2)? At which points do those tangent lines touch the curve?
Make a table of values for the line y = x/4+4 using x-values of 1,2,3,4, and 5. Which graph shows the line that passes through the ordered pairs of the table?
Is there a rule for testing whether or not an equation has a horizontal asymptote?
Find f(2+h), f(x+h), and f(x+h)-f(x)/h where h cannot = 0 for f(x) = x/(x + 1). Explain how the following graphs are obtained from the graph of y = f(x)...
What does translation mean?
If two TV transmitters are within 150 kilometers of each other, they must be assigned different channels. Construct a graph that shows the given problem and use it to find the number of channels
What is meant when a type of linear system is said to be a consistent or an inconsistent system?
Can you explain the basic principles of parametric equations?
I have a graph of time vs. 1/distance. What units do I use for 1/ distance - cm^-1?
What is a parametric equation used for? Can you give me an example?
How can I show that, although the cubic equation x^3 - 6x = 4 has three real solutions, Cardan's formula can find them by subtracting appropriate cube roots of complex numbers?
Controlled Trial
Time Series vs Other Controlled Trials
Posts-Only vs Fully Controlled Trial
Fully Controlled Trial vs Crossover
Fully Controlled vs Simple Crossovers
Appendix 1: Sample Size for Posts-Only Trial vs Fully-Controlled Trial
Appendix 2: Sample Size for Fully Controlled Crossover vs Simple Crossover
Appendix 3: Confidence Limits for the Individual Responses in a Posts-Only Trial
A study in which you measure the effect of a treatment or other intervention is usually called an experimental trial. Inevitably the study is a controlled experimental trial, because you include
measurements to control or account for what would have happened if you hadn't intervened. The difference or change between the measurements is the effect of the treatment. Such studies can give
definitive estimates of effects, especially when the subjects represent a random sample of a population, when the subjects are randomized to the treatments, and when subjects and researchers do not
know which treatment is being administered (Hopkins, 2000; see also Altman et al., 2001, for an explanation of randomization, blinding, and other strategies to avoid bias in controlled trials).
Figure 1. Examples of the five kinds of controlled trial. Symbols (○∆) represent measurements on two groups of subjects. Color of symbols represents effect of experimental and control treatments (shades of red and blue respectively). Small arrows represent the change or difference scores used in the analysis of the effect of the experimental treatment. Four post-tests are shown to emphasize the role of the washout in crossovers.

Figure 2. Decision tree for choosing between the different kinds of randomized controlled trial, showing typical sample sizes (n; see Appendix 1 and 2 for formulae).

^a Typical error over the intervention period <0.7x or <0.5x the between-subject standard deviation for studies limited by number of subjects or number of tests, respectively (equivalent to test-retest correlation >0.50 or >0.75 respectively).

^b Typical error over the washout+intervention period <1.4x or <2x the typical error over the intervention period for studies limited by number of subjects or number of tests, respectively (equivalent to washout+intervention test-retest correlation >0.90 or >0.80 respectively, if the intervention test-retest correlation is 0.95).
In a first draft of this article we identified four kinds of controlled trial, which we named posts-only trials, fully controlled (parallel-groups) trials, fully controlled crossovers, and simple
crossovers. The reviewer suggested that we include quasi-experimental or time-series trials. Figure 1 shows a schematic for these five kinds of trial. In this article we provide and explain a
decision tree (Figure 2) for choosing between them when you plan an intervention, and we give suggestions for the analyses.
The present article should be read in conjunction with the article about spreadsheets for fully controlled trials and simple crossovers, where there is a detailed treatment of the analyses (Hopkins,
2003). See also the In-brief item that introduces the spreadsheet for fully controlled crossovers in this issue. These articles and the spreadsheets apply mainly to outcome measures (dependent
variables) that are numeric and continuous: the primary measures of distance, mass, time and current, and derived measures such as force, power, concentration and voltage. The articles and
spreadsheets apply also to variables representing counts and proportions after appropriate transformation, which is provided in the spreadsheets.
Controlled trials in which the outcome measure is a nominal or binary variable (such as ill or healthy, injured or uninjured, winner or loser) are almost invariably performed as posts-only trials,
with all subjects starting off on the same level (e.g., uninjured). The methods of analysis for binary variables are usually logistic regression and other forms of generalized linear modeling that
are beyond the scope of this article. However, coded as 0 and 1, these variables can be analyzed using spreadsheets, because the central limit theorem ensures that the t statistic provides accurate
confidence limits with the large-ish sample sizes that such variables need. This t-test approach will work with these variables for all types of controlled trials, but the considerations about sample
size and individual responses in the present article do not apply. Inclusion of covariates in the analysis requires generalized linear modeling.
The first decision in the decision tree concerns the use of a control group or treatment. In some situations there is no opportunity to use a control, but you are still interested in quantifying the
effect of a treatment. If you perform only one test before and one test after the treatment, you can estimate the change in the outcome measure, but you won't know how much of the change would have
occurred in the absence of the treatment. You can address this problem to some extent by performing a series of measurements to establish a baseline, then estimating the deviation from this baseline
during or after the treatment. The baseline measurements serve as a kind of control for the experimental treatment. Deviations from the baseline during or after the treatment could still be
coincidental rather than due to the treatment, but only a properly controlled trial or crossover will remove the doubt. Hence this type of controlled trial is the weakest and should be considered a
last resort.
The intuitive way to account for any trend in the baseline measurements is to extrapolate the trend beyond the baseline to the measurements taken during or after the experimental treatment. The
statistical analysis that reflects this intuitive approach is within-subject modeling. You fit a separate straight line (or a curve, if necessary) to the baseline points for each subject, then use
it to predict what each subject's baseline measurements would have been at the time of the later measurements. A series of paired t tests provides confidence limits for the differences. The
spreadsheet for simple crossovers (Hopkins, 2003) will perform these analyses. It is also possible to fit a line or curves to the points during or after the treatment, then use a series of paired t
tests to compare predictions at chosen times. A more sophisticated approach involves mixed modeling to account for different magnitudes of error at different time points and any fixed effects, such
as subject characteristics.
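A minimal sketch of this within-subject approach, assuming numpy and scipy are available (the times and measurements are invented, and only three subjects are shown for brevity; this illustrates the method described, not the authors' analysis):

```python
import numpy as np
from scipy import stats

baseline_times = np.array([0.0, 1.0, 2.0, 3.0])   # weeks of baseline testing
post_time = 5.0                                   # week of the post test
baseline = np.array([[10.1, 10.3, 10.2, 10.5],    # one row per subject
                     [12.0, 12.1, 12.3, 12.2],
                     [ 9.5,  9.6,  9.8,  9.7]])
post = np.array([11.4, 13.0, 10.6])               # observed post-test values

# Fit a straight line to each subject's baseline and extrapolate it to the
# time of the post test.
predicted = np.array([np.polyval(np.polyfit(baseline_times, y, 1), post_time)
                      for y in baseline])

# The deviation from the extrapolated baseline estimates the treatment
# effect; a paired t test gives its confidence limits.
deviation = post - predicted
t, p = stats.ttest_rel(post, predicted)
print(deviation.mean(), p)
```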
To the extent that the analysis for a time series is the same as that for a crossover, the sample size can also be similar. The crucial factor is the error of measurement between the extrapolated
baseline and the post test, which will depend on the number of baseline measurements and the extent of extrapolation (time between baseline tests and post tests), as well as the usual error of
measurement over the time frame of the repeated measurements in the baseline and between baseline and post test. The computations are complex and the problem might be better addressed by performing
simulations. If the resulting error is small relative to smallest worthwhile effects, a sample size <10 is possible, but to ensure representativeness, minimum sample size should be ~10. Larger
errors can result in a sample size of hundreds.
The next decision in the decision tree may surprise many researchers: it is possible and sometimes more efficient to perform an intervention by randomizing subjects to control and treatment groups
that are measured once only, after their treatment. This posts-only randomized controlled trial requires a large sample size–typically several hundred–but depending on the error of measurement, the
sample size may still be less than in a trial with pre and post tests (Figure 2 and Appendix 1). The effect of the treatment is simply the difference in the means of the two groups, and the unequal
variances unpaired t statistic provides its confidence limits. It is important that the assignment of subjects to the groups is random, to ensure that differences in the groups in the post test are
due to the treatment rather than to bias in the assignment. The spreadsheet for fully controlled trials (Hopkins, 2003) can be used for the analysis by inserting the data as if they were for pre
tests; the panels for comparison of the groups in the pre-test contain the outcome statistics.
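The posts-only analysis amounts to a Welch (unequal-variances) unpaired t statistic on the single post test. Here is a sketch with invented data (normally distributed scores for 200 subjects per group and a true effect of 2 units):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
control = rng.normal(50.0, 5.0, size=200)     # post-test scores, control group
treatment = rng.normal(52.0, 5.0, size=200)   # post-test scores, treated group

# The treatment effect is simply the difference in the group means.
effect = treatment.mean() - control.mean()

# Welch standard error and degrees of freedom (unequal variances).
v1 = treatment.var(ddof=1) / treatment.size
v2 = control.var(ddof=1) / control.size
se = np.sqrt(v1 + v2)
df = (v1 + v2)**2 / (v1**2 / (treatment.size - 1) + v2**2 / (control.size - 1))

tcrit = stats.t.ppf(0.975, df)  # 95% confidence limits
print(f"effect = {effect:.2f}, 95% CI {effect - tcrit*se:.2f} to {effect + tcrit*se:.2f}")
```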
A disadvantage of the posts-only design is that subjects will not know how well the treatments work for them as individuals, because they do not have a pretest. Nevertheless, a researcher can derive
a statistic representing the overall magnitude of individual responses by comparing the between-subject standard deviations (SD) in the groups. If there are individual responses to a treatment, the
SD of that group will be inflated relative to the SD of the control group, and the individual responses expressed as a standard deviation is simply the square root of the difference in the squares of
the SDs. The uncertainty (confidence limits) in the estimate of this standard deviation has been incorporated into the spreadsheet for fully controlled trials. The uncertainty is acceptable when
the sample size in each group is at least 100 (see Appendix 3). If individual responses are substantial, you can attempt to account for them by including subject characteristics as covariates
interacting with the treatment effect in an ANOVA or mixed-model analysis. The outcome of such analyses can provide limited information about effects on individuals (e.g., a trivial effect on males
but a moderate effect on females). Potential mechanism variables (that is, those that change as a result of the treatment) can also be identified by including them as covariates in the analysis,
whether or not there are individual responses.
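The individual-responses statistic described above is simple to compute. The sketch below carries a negative sign when sampling variation makes the control SD the larger one, purely as a flag that no real individual responses are evident; that sign convention is the sketch's own choice:

```python
import math

def individual_responses_sd(sd_treatment, sd_control):
    """SD representing individual responses: the square root of the
    difference in the squared post-test SDs of the two groups."""
    diff = sd_treatment**2 - sd_control**2
    # Sampling variation can make the difference negative; carrying the
    # sign flags that the estimate is not interpretable as a real response.
    return math.copysign(math.sqrt(abs(diff)), diff)

print(individual_responses_sd(6.0, 5.0))  # sqrt(36 - 25), about 3.32
```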
In his commentary, the reviewer called our attention to a design known as the Solomon 4-group, which combines the two posts-only groups with the two parallel groups of a fully controlled trial. You
would use this design only if you wanted to estimate the extent to which a pre-test modifies the effect of the experimental treatment relative to the control. The magnitude of the modification is
given by the effect of the treatment in the fully controlled trial minus its effect in the posts-only trial, with confidence limits given by combining the sampling standard errors of the two effects.
The existence of this design highlights a further advantage of the posts-only design: it produces the least disturbance of the subjects and must therefore be regarded as providing the criterion
measure of the effect of a treatment.
Designs in which subjects are tested before and after an intervention in principle allow the researcher to assess the success of the intervention with each subject. In practice the ability to make
an assessment depends on the relative magnitudes of error of measurement and smallest worthwhile effect (Hopkins, 2004), and the outcome may be unclear for many subjects. Nevertheless, the
possibility of individual assessment can be a powerful motivator for participation in the study. In this respect, crossovers are better than a fully controlled trial. In a fully controlled trial,
only one group of subjects receives the treatment that the researchers think might work. Bias can therefore arise from exclusion of subjects who will not consent to the chance of ending up in a
control or other group. In unblinded fully controlled trials, subjects who end up in a control group may show resentful demoralisation (Dunn, 2002) by failing to comply with the requirements of the
study or by opting out before the post test. Motivation to perform well in a physically demanding test may also be lower for such subjects and for control subjects generally. Resentful
demoralization may be balanced to some extent by its converse, compensatory rivalry (also known as the Avis effect), but the end result is effectively individual responses to the control treatment,
which increase the error of measurement. Crossovers eliminate or reduce the biases arising from these patient preference effects, because all subjects receive all treatments, so it is in their
interest to comply with and perform well for all treatments, if they want to know how well the treatments work for them.
Crossovers are not without problems. As shown in Figure 2, the main impediment to their use is the time required to wash out the effects of the treatment(s). You can't perform an experimental study
to determine the period, because it would amount to an extended fully controlled trial! Instead, you opt for what seems a reasonable washout period based on related studies and on what is known
about the reversibility of the physiological changes the intervention may cause. For a fully controlled crossover in certain conditions, the washout need not be perfect, because the residual effect
of a treatment is measured in the next pre-test and is subtracted off the post-test measurement. The conditions are that the effect of the following treatment is not modified by the residual effect
of the previous treatment, and that the period of the intervention is too brief relative to the washout period for any further appreciable washout of the previous treatment.
The need for a washout means that the subjects will be in a crossover study for a longer period than in a fully controlled trial, so there is more likelihood that some will not be available for their
subsequent treatment and measurements. Subjects may also drop out before the second treatment because of side effects of the first treatment or because they are reluctant to commit to another
treatment and set of measurements, particularly if the intervention or measurements are arduous. Such withdrawals may introduce bias if the withdrawals are more common for one treatment.
Balancing the advantages and disadvantages, the US Food and Drug Administration once suggested that crossovers be abandoned in clinical studies (Cornfield and O’Neill, 1976). In our view, crossovers
are preferable, when there are no problems with recruitment and retention of subjects. Having opted for a crossover, you then have a choice between the fully controlled and simple versions.
A fully controlled crossover is a better design than a simple crossover in all except one respect: a simple crossover ideally requires only one-quarter the sample size. The simple crossover is
therefore an option when you are short of subjects or resources.
The sample size in any pre-post design is determined by the reliability of the outcome measure over the time between tests. In the case of a fully controlled crossover, the relevant time between
tests is the duration of the intervention, whereas in a simple crossover the time between tests is the duration of the washout plus the duration of the intervention. When the duration of the washout
is weeks or months, the reliability is likely to deteriorate, because consistent changes will develop in the subjects that vary randomly from subject to subject. The required sample size for the
simple crossover will therefore increase and may even exceed the sample size for a fully controlled crossover. The break-even point is shown in the footnote to Figure 2 and explained in Appendix 2.
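As a rough illustration of that break-even point (an illustrative reading of the footnote to Figure 2, not the Appendix 2 formulae): if required sample size scales with the square of the typical error, and the simple crossover ideally needs one-quarter the sample of the fully controlled crossover, then the advantage vanishes once the washout doubles the typical error, matching the <2x criterion for studies limited by number of tests.

```python
def simple_vs_fully_controlled(te_intervention, te_washout_plus_intervention):
    """Ratio of simple-crossover to fully-controlled-crossover sample size,
    assuming n scales with typical error squared and the ideal 1:4 ratio
    stated in the text."""
    return 0.25 * (te_washout_plus_intervention / te_intervention)**2

print(simple_vs_fully_controlled(1.0, 1.0))  # 0.25: the ideal quarter-size sample
print(simple_vs_fully_controlled(1.0, 2.0))  # 1.0: break-even when error doubles
```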
A simple crossover is usually preferable to a fully controlled crossover when there are more than two treatments and when each treatment washes out quickly. To reduce bias arising from the
interaction of any carryover of one treatment with the next, the order of treatments needs to be randomized using a Latin square, which ensures every treatment follows every other treatment an equal
number of times.
The main drawback of the simple crossover is that it does not provide an estimate of individual responses. This drawback can be overcome by including an extra control treatment as if it were another
experimental treatment, at the cost of the additional time and resources for an extra treatment and measurement. The two control treatments can then be analyzed as a reliability study to provide an
estimate of the typical error, which is needed to assess the changes that each individual experiences with the other treatment or treatments. The assessment can be performed quantitatively using a
spreadsheet for assessment of individuals. Estimation of the standard deviation representing the individual responses and its confidence limits requires mixed modeling.
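Estimating the typical error from the two control treatments is a standard reliability calculation: the typical error is the standard deviation of the control-minus-control change scores divided by √2. A sketch with invented values:

```python
import math
import statistics

control1 = [61.0, 58.5, 64.2, 59.8, 62.1]   # first control treatment
control2 = [60.2, 59.4, 63.5, 60.9, 61.0]   # repeated control treatment

# Change scores between the duplicate control treatments.
diffs = [b - a for a, b in zip(control1, control2)]

# Typical error = SD of the change scores / sqrt(2).
typical_error = statistics.stdev(diffs) / math.sqrt(2)
print(round(typical_error, 2))  # 0.73 for these invented data
```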
The spreadsheet for analysis of simple crossovers available at this site does not allow for a systematic change in the mean of the dependent variable that would have occurred from test to test in the
absence of any intervention. Also known as order, familiarization or learning effects, such changes can be due to the subjects becoming more proficient with test protocols or to external influences
such as changes in environmental conditions. If there are equal numbers in the crossover groups, an order effect does not bias the estimate of the treatment effect, but it reduces the precision of
the estimate by effectively adding noise to the change scores. By including the order effect in a more sophisticated ANOVA or mixed-model analysis, you remove any bias arising from unequal numbers
in the groups, and you do not lose precision. StatsDirect and other medical statistical packages include order effects in their procedures for analysis of crossovers.
Order effects are the most frequently debated analysis issue in the literature on crossover trials (e.g., Senn, 1994). The debate focuses on the extent to which failure to fully wash out a treatment
manifests as an order effect and on strategies for dealing with it. The safest strategy is to use a simple crossover only in situations where the likelihood of carryover is negligible. Order
effects are not an issue in a fully controlled parallel-groups or crossover trial, because they disappear completely from the difference or change in the change scores.
The decision about which kind of controlled trial to use depends on the availability of subjects for a control group or treatment, the washout time for the treatments, and when resources are limited,
the reliability of the outcome measure over the treatment and washout times. The weakest design is a time series, because there is no control group or treatment. A fully controlled parallel-groups
trial is the industry standard, but omitting the pre-test makes posts-only controlled trials more efficient when the outcome measure is sufficiently unreliable. When washout time is acceptable, the
benefits of assessing the effects of treatments on every subject make the best designs arguably either fully controlled crossovers or simple crossovers with an extra control treatment.
Altman DG, Schulz KF, Moher D, Egger M, Davidoff F, Elbourne D, Gøtzsche PC, Lang L (2001). The revised CONSORT statement for reporting randomized trials: explanation and elaboration. Annals of
Internal Medicine 134, 663-694
Cornfield J, O’Neill RT (1976). Minutes of the Biostatistics and Epidemiology Advisory Committee meeting of the Food and Drug Administration. June 23
Dunn G (2002). The challenge of patient choice and nonadherence to treatment in randomized controlled trials of counseling or psychotherapy. Understanding Statistics 1, 19–29
Hopkins WG (2000). Quantitative research design. Sportscience 4, sportsci.org/jour/0001/ wghdesign.html
Hopkins WG (2003). A spreadsheet for analysis of straightforward controlled trials. Sportscience 7, sportsci.org/jour/03/wghtrials.htm
Hopkins WG (2004). How to interpret changes in an athletic performance test. Sportscience 8, 1-7
Senn S (1994). The AB/BA crossover: past, present and future. Statistical Methods in Medical Research 3, 303-324
Published Dec 2005
Appendix 1: Sample Size for Posts-Only Trial vs Fully-Controlled Trial
• The uncertainty (confidence interval) in the estimate of an outcome statistic is proportional to the sampling standard error of the statistic. Therefore sample sizes are equal when standard errors
are equal.
• Let n be the number of subjects in each of two groups (control and intervention).
• Let SD be the between-subject standard deviation in the control group.
• Let sd be the typical error (within-subject standard deviation) over the time frame of the intervention. Assume additional error due to individual responses can be neglected.
• Then the standard error of the difference between the means in the posts-only trial is √2SD/√n.
• And the standard error of the difference in the change in the means in a fully controlled trial is 2sd/√n.
• For the same number of subjects, the fully controlled trial gives better precision than the posts-only trial when 2sd/√n<√2SD/√n, i.e. when sd<SD/√2, i.e. the typical error is less than ~0.7 of the
between-subject standard deviation. The formula for the test-retest (intraclass) correlation coefficient is ICC=(SD^2-sd^2)/SD^2, so the fully controlled trial is superior when ICC>0.5.
• For the same number of tests, sample size in the fully controlled trial is half that of the posts-only trial, so the fully controlled trial is superior when 2sd/√(n/2)<√2SD/√n, i.e., when sd<SD/2,
i.e. when the typical error is less than half the between subject standard deviation, or when ICC>0.75.
• The estimate of the sample size for a posts-only trial, based on acceptable 90% confidence limits for trivial effects, is 2x 2(1.65^2)(SD/d)^2, or 2x ~5.5(SD/d)^2, where d is the smallest
worthwhile effect.
• For a fully controlled trial, the sample size is 2x 4(1.65^2)(sd/d)^2, or 2x ~11(sd/d)^2.
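The Appendix 1 arithmetic is easy to check with a short script (a sketch only; the function names and worked values are mine, not the article's):

```python
# Total sample sizes from the Appendix 1 formulas, based on acceptable 90%
# confidence limits for trivial effects.
# SD = between-subject SD, sd = typical error over the intervention,
# d = smallest worthwhile effect.

def n_posts_only(SD, d):
    # two groups of ~5.5*(SD/d)^2 subjects each
    return 2 * 2 * 1.65**2 * (SD / d) ** 2

def n_fully_controlled(sd, d):
    # two groups of ~11*(sd/d)^2 subjects each
    return 2 * 4 * 1.65**2 * (sd / d) ** 2

SD = 1.0
d = 0.2 * SD    # Cohen's default smallest standardized effect
print(round(n_posts_only(SD, d)))                 # 272 subjects in total
print(round(n_fully_controlled(SD / 2**0.5, d)))  # 272 at the break-even sd = SD/sqrt(2)
```

At the break-even reliability sd = SD/√2 (ICC = 0.5) the two designs need the same total number of subjects, as the bullets above state; for more reliable measures the fully controlled trial needs fewer.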
Appendix 2: Sample Size for Simple Crossover vs Fully Controlled Crossover
• Let n be the number of subjects.
• As above, let sd be the typical error over the time frame of the intervention.
• And let e be the typical error over the time frame of the washout plus intervention. Assume additional error due to individual responses can be neglected.
• Then the standard error of the change in the change in the means in the fully controlled crossover is 2sd/√n.
• And the standard error of the change in the means in the simple crossover is √2e/√n.
• For the same number of subjects, the simple crossover gives better precision than the fully controlled crossover when √2e/√n<2sd/√n, i.e. when e<√2sd, i.e. when the typical error over the
washout+intervention period is less than 1.4x the typical error over the intervention period only.
• For the same number of tests, sample size in the fully controlled crossover is half that of the simple crossover, so the simple crossover gives better precision than the fully controlled crossover
when √2e/√n<2sd/√(n/2), i.e. when e<2sd, i.e. when the typical error over the washout+intervention period is less than twice the typical error over the intervention period only.
• The comparison of the ICCs for the fully controlled vs simple crossover depends on the magnitude of the between-subject SD. If SD=√20sd=~4.5sd, the short-term (intervention period) ICC is 0.95, and the simple crossover will give better precision for the same number of subjects if its ICC is >0.90. For the same number of tests, the simple crossover will give better precision if its ICC is >0.80.
• The sample size for a fully controlled crossover is 4(1.65^2)(sd/d)^2, or ~11(sd/d)^2, where d is the smallest worthwhile effect.
• For a simple crossover, the sample size is ~2(1.65^2)(e/d)^2, or ~5.5(e/d)^2.
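The same check works for the two crossover designs (again a sketch with my own names; sd is the typical error over the intervention, e the typical error over washout plus intervention):

```python
# Sample sizes for the two crossover designs, 90% confidence limits for
# trivial effects; d = smallest worthwhile effect.

def n_fully_controlled_crossover(sd, d):
    # ~11*(sd/d)^2 subjects
    return 4 * 1.65**2 * (sd / d) ** 2

def n_simple_crossover(e, d):
    # ~5.5*(e/d)^2 subjects
    return 2 * 1.65**2 * (e / d) ** 2

sd, d = 1.0, 0.2
print(round(n_fully_controlled_crossover(sd, d)))  # 272
print(round(n_simple_crossover(2**0.5 * sd, d)))   # 272 at the break-even e = sqrt(2)*sd
```

The simple crossover wins on subject numbers whenever the washout does not inflate the typical error past √2 times its short-term value, matching the break-even point stated above.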
Appendix 3: Sample Size for Characterizing Individual Responses
• Let n be the number of subjects in each of a control and intervention group.
• Let SD[c] and SD[i] be the between-subject standard deviations in these groups.
• The sampling standard error of any SD is approximately SD/√(2n).
• The standard deviation representing individual responses is √(SD[c]^2-SD[i]^2).
• Therefore the standard error of this standard deviation is √((SD[c]^2+SD[i]^2)/2n) = SD/√n, if SD[c] and SD[i] are approximately equal.
• For n=100, the 90% confidence limits of the standard deviation representing individual responses are therefore ±1.7SD/10 = 0.17SD.
• The default smallest important change in the mean resulting from the intervention is Cohen's standardized effect size of 0.2SD, which is greater than the uncertainty in the individual responses.
The sample size of 100 is therefore adequate for characterizing individual responses. | {"url":"http://www.sportsci.org/jour/05/wghamb.htm","timestamp":"2014-04-18T01:28:29Z","content_type":null,"content_length":"108591","record_id":"<urn:uuid:489735d9-4f79-4a3e-9fb5-8d971e4c4161>","cc-path":"CC-MAIN-2014-15/segments/1397609532374.24/warc/CC-MAIN-20140416005212-00506-ip-10-147-4-33.ec2.internal.warc.gz"} |
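The closing claim that a sample size of 100 is adequate can be verified in a few lines (names mine):

```python
# 90% confidence limits on the individual-response SD for n subjects per group.
n = 100
SD = 1.0                    # between-subject SD (units cancel out)
cl90 = 1.65 * SD / n**0.5   # +/- limits: 1.65 * (standard error SD/sqrt(n))
print(cl90)                 # 0.165, i.e. ~0.17*SD
assert cl90 < 0.2 * SD      # smaller than the default smallest important effect
```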
Angular deceleration of wheels.
February 14th 2011, 08:31 AM #1
Feb 2011
Angular deceleration of wheels.
Right, so I have been stuck on this question for a while now but not sure where to start. I'm almost pulling my hair out, any guidance or pointers would be appreciated.
I need to calculate the angular deceleration of wheels of a lorry. It needs to stop in 0.5km using constant braking force.
It's weight is 5 tonnes and velocity is 80 km h-1. The wheels are 1.5m in diameter.
So far I've worked out the deceleration of the lorry (-0.494 ms-2), the braking force required (-2.5 kN), the time taken to come to rest (45 s), and the initial angular velocity of the wheels
(29.63 rad/s).
Just can't get my head around the "angular deceleration" part. Can someone please tell me where to start.
You can use the same formula as with linear speed acceleration/deceleration.
$v^2 = u^2 + 2as$
v = 0,
u = initial angular velocity
a = acceleration or deceleration
s = angular displacement
Hi, thanks for the reply.
Could I use alpha=dw/dt?
You do not have d$\omega$/dt in order to do this. However you have the initial angular speed (you know the radius of the wheel and the linear speed, so $v_0 = r \omega _0$ ). You know the final angular speed, and you know the angular distance the wheel has gone through. ( $s = r \theta$ ) So you can use the angular analogue of Unknonw008's equation: $\omega ^2 = \omega _0^2 + 2 \alpha \theta$ .
By the way, the two equations $v_0 = r \omega _0$ and $s = r \theta$ are known as the "rolling without slipping" conditions. There is one more: $a = r \alpha$. Whenever a wheel is moving without
slipping these relations relate the linear variables (s, v, and a) to the rotating variables ( $\theta$, $\omega$, and $\alpha$.)
Hi Dan,
Thanks for the reply.
So rearranging your formula for the deceleration I get:
α = (ω^2 - ω_0^2)/(2θ)
α ≈ -0.66 rad/s^2
p.s. what software/program do you use to display your formulas like you do?
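For reference, the thread's numbers can be checked with a short script (a sketch; the symbols follow the kinematics above, and the radius is half the quoted 1.5 m diameter):

```python
v0 = 80 / 3.6    # initial speed in m/s (80 km/h)
s = 500.0        # stopping distance in m (0.5 km)
r = 1.5 / 2      # wheel radius in m

a = -v0**2 / (2 * s)                   # linear deceleration from v^2 = u^2 + 2as
omega0 = v0 / r                        # initial angular velocity (rolling: v = r*omega)
theta = s / r                          # angular displacement (rolling: s = r*theta)
alpha = (0 - omega0**2) / (2 * theta)  # from omega^2 = omega0^2 + 2*alpha*theta

print(a, omega0, alpha)                # ~ -0.494 m/s^2, ~29.6 rad/s, ~ -0.658 rad/s^2
assert abs(alpha - a / r) < 1e-9       # consistent with the rolling condition a = r*alpha
```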
Vector Functions
It is always moving in the same direction. For instance, one of the formulas was
(Cos t)i - (Sin t)j.
It would have helped if you told us that to begin with!
You raised an interesting question. What if the question asked which way is the particle moving at t=0. What method would you apply in that case? Please tell me if I am right:
You find the tangent to the curve at that point and find the direction of that vector with respect to the xy-plane.
Am I right?
Yes, that would work. Another way would be to just look at one component: y= -sin(t) so y'= -cos(t). At t= 0, the point is (1, 0) and y is decreasing: the particle is moving clockwise.
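A quick numeric check of that component argument (a sketch; the velocity of (cos t, -sin t) is (-sin t, -cos t)):

```python
import math

def velocity(t):
    # derivative of the position (cos t, -sin t)
    return (-math.sin(t), -math.cos(t))

vx, vy = velocity(0.0)
print(vx, vy)    # ~ (0, -1): at the point (1, 0) the motion points straight down
```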
What is meant by saying that the Goldstone-bosons are "eaten" by gauge bosons?
To see how this works, let's consider a specific example of a complex scalar field, [itex]\phi[/itex], coupled to an abelian gauge field. The complex scalar has 2 real degrees of freedom, while the
massless gauge field also has 2 real degrees of freedom after imposing gauge invariance. A massive abelian vector field has 3 real degrees of freedom, which will become important below.
If the scalar potential only depends on the modulus of the scalar field, [itex]V(\phi) = V(|\phi|)[/itex], then the Lagrangian has a continuous symmetry amounting to rescaling [itex]\phi[/itex] by a
phase, [itex] \phi \rightarrow e^{i\theta} \phi[/itex]. Now suppose that this potential has a minimum at [itex]|\phi|=\upsilon[/itex]. We say that the symmetry is spontaneously broken because the
vacuum state [itex]\langle \phi \rangle = \upsilon[/itex] is no longer invariant under the phase symmetry of the Lagrangian.
If we parameterize
[itex]\phi = (\rho + \upsilon) e^{i\alpha},[/itex]
we find that the Lagrangian only depends on the derivatives [itex]\partial_\mu \alpha[/itex] of the phase field. So [itex]\alpha[/itex] is a massless real scalar, while [itex]\rho[/itex] is a massive
real scalar field. Furthermore, there is an continuous invariance where [itex]\alpha \rightarrow \alpha + c[/itex], which is nothing more than the phase symmetry of the theory. If there were no gauge
field coupled to [itex]\phi[/itex], we would identify [itex]\alpha[/itex] with the Goldstone boson corresponding to the spontaneous breaking of the phase symmetry of the complex field.
However, in the presence of the gauge field, the total theory has a local gauge invariance [itex] \phi \rightarrow e^{i\theta(x)} \phi[/itex], [itex]A_\mu \rightarrow A_\mu - \partial_\mu \theta(x)[/itex]. We are free to use this gauge invariance to set [itex]\theta = -\alpha[/itex]. This eliminates the field [itex]\alpha[/itex] from the Lagrangian entirely, leaving terms for the massive [itex]
\rho[/itex] and massive vector field [itex]A_\mu[/itex] and their interactions. The 2+2 real degrees of freedom we started with are now distributed as 1 real d.o.f. for [itex]\rho[/itex] and the 3
real d.o.f. for the massive gauge field.
The use of the gauge symmetry to eliminate the phase [itex]\alpha[/itex] in favor of the extra degree of freedom for the massive gauge field is what's referred to as "eating" the Goldstone boson. | {"url":"http://www.physicsforums.com/showthread.php?p=3503216","timestamp":"2014-04-19T22:51:00Z","content_type":null,"content_length":"30742","record_id":"<urn:uuid:d85a95b1-0b8c-421c-86ef-b1344477366c>","cc-path":"CC-MAIN-2014-15/segments/1398223211700.16/warc/CC-MAIN-20140423032011-00434-ip-10-147-4-33.ec2.internal.warc.gz"} |
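The statement that the phase field can be removed entirely is easiest to see in the kinetic term. Assuming the convention [itex]D_\mu = \partial_\mu - iA_\mu[/itex] with unit charge (the post does not fix a convention), substituting the parameterization above gives

```latex
|D_\mu \phi|^2 = (\partial_\mu \rho)^2 + (\rho + \upsilon)^2 \left( \partial_\mu \alpha - A_\mu \right)^2 .
```

Defining [itex]B_\mu = A_\mu - \partial_\mu \alpha[/itex], which is just a gauge transformation of [itex]A_\mu[/itex], removes [itex]\alpha[/itex] from the Lagrangian, and expanding [itex](\rho + \upsilon)^2[/itex] exposes the mass term [itex]\upsilon^2 B_\mu B^\mu[/itex] for the vector field.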
Mathematical English Usage - a Dictionary
by Jerzy Trzeciak
[see also: appropriate, suitable, useful, suit]
This realization is particularly convenient for determining......
It is convenient to view G as a nilpotent group.
For these considerations it will be convenient to state beforehand, for easy reference, the following variant of......
Therefore, whenever it is convenient, we may assume that......
We identify A and B whenever convenient.
We shall, by convenient abuse of notation, generally denote it by x[t] whatever probability space it is defined on.
We shall find it convenient not to distinguish between two such sequences which differ only by a string of zeros at the end.
rate of change
February 6th 2010, 08:16 AM
rate of change
Hey, I'm doing some homework and I'm a little confused with this question.
Find the rate of change of the volume V of a cube with respect to
(a) the length w of a diagonal on one of the faces.
(b) the length z of the one of the diagonals of the cube.
So i'm trying to find $\frac{dv}{dw}$
I know that the Volume of a cube is $v=x^3$
What i need to do is find a way of expressing V in terms of w.
I know that $x^2 + x^2 = w^2$ and solving for x I get $x= \frac{w}{2}$. Unless my math is incorrect (which is very possible).
The next step is to plug that into $v=x^3$ so I get $v=(\frac{w}{2})^3$
Do I just find the derivative of that to get the answer? I know that's wrong somewhere since the answer is $\frac{3\sqrt{2}}{4}w^2$
Can anyone tell me what i'm doing wrong? thank you very much
February 6th 2010, 08:44 AM
$V = x^3$
$w^2 = x^2 + x^2$
$w^2 = 2x^2$
$w = \sqrt{2} \cdot x$
$x = \frac{w}{\sqrt{2}}$
$V = \frac{w^3}{2\sqrt{2}}$
find $\frac{dV}{dw}$
$z^2 = w^2 + x^2$
$z^2 = w^2 + \frac{w^2}{2}$
$z^2 = \frac{3w^2}{2}$
$w = \sqrt{\frac{2}{3}} \cdot z$
use the chain rule ...
$\frac{dV}{dz} = \frac{dV}{dw} \cdot \frac{dw}{dz}$ | {"url":"http://mathhelpforum.com/calculus/127444-rate-change-print.html","timestamp":"2014-04-16T17:04:33Z","content_type":null,"content_length":"8697","record_id":"<urn:uuid:0ccee3ca-06eb-43eb-90a6-54055230f752>","cc-path":"CC-MAIN-2014-15/segments/1397609524259.30/warc/CC-MAIN-20140416005204-00195-ip-10-147-4-33.ec2.internal.warc.gz"} |
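A quick numeric check of part (a) (a sketch; note that 3/(2√2) = 3√2/4, the form quoted in the question):

```python
import math

def V(w):
    # volume of the cube in terms of the face diagonal w
    return w**3 / (2 * math.sqrt(2))

def dV_dw(w):
    # the derived result: dV/dw = 3w^2/(2*sqrt(2)) = (3*sqrt(2)/4) * w^2
    return 3 * math.sqrt(2) / 4 * w**2

w, h = 2.0, 1e-6
numeric = (V(w + h) - V(w - h)) / (2 * h)   # central-difference derivative
print(numeric, dV_dw(w))                    # both ~ 4.2426
```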
Re: Re:{4} Maintainable code is the best code -- principal components
in reply to Re:{4} Maintainable code is the best code -- principal components
in thread Maintainable code is the best code
Without going into too many specifics given the nature of my work, I'm using it to try to break down chemical spectra into identifiable components. When/if I get to publish this, I'll try to let
folks know, though time in the peer-reviewed journal world is only an illusion... :-)
For those that are curious, principal component analysis or factor analysis or a number of other different names describes a method for breaking down sets of data into key basis sets. It assumes that
all experimental data is a linear combination of collected data, and thus, if your collected data is N units long with M total sets, you can use singular value decomposition to get M basis sets N
units long, and a square M x M weight matrix. This is an 'exact' specification. However, we typically want only C components, with C << M. Because during singular value decomposition, we generate M
eigenvalues, we can use empirical, statistical, or other methods to determine what C is, and which of those M basis sets are the most important.
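As a concrete illustration of that SVD step (a sketch only, not the poster's code; all names are mine), with the M collected sets as the rows of a matrix:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "spectra": M = 20 mixtures of C = 2 underlying components,
# each N = 50 points long.
N = 50
x = np.linspace(0.0, 1.0, N)
components = np.vstack([np.sin(2 * np.pi * x), np.exp(-5 * x)])  # C x N
weights = rng.random((20, 2))                                    # M x C
D = weights @ components                                         # M x N data matrix

# Singular value decomposition: D = U @ diag(s) @ Vt.
U, s, Vt = np.linalg.svd(D, full_matrices=False)

# Only the first C singular values are (numerically) nonzero, so C rows of
# Vt form a basis that reproduces every collected set.
C = 2
D_approx = (U[:, :C] * s[:C]) @ Vt[:C, :]
print(s[:4])                 # two large values, the rest ~ 0
assert np.allclose(D, D_approx)
```

In real data the trailing singular values are small rather than exactly zero, and choosing C from their magnitudes is exactly the empirical step the post describes.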
Note that these basis sets may not have any actual meaning; as jeroenes indicates, the method breaks out these basis sets so as to attempt to minimize the variation of the data along one C-dimensional
vector. However, there are ways to transform the data from the PCA basis set to a set of vectors that have some meaning. In my case, it's going from a basis set of spectra that represent no real
substance to spectra of real substances; I can then get an idea of the composition of all the other non-basis set data that I started with.
As jeroenes also indicated, you can use the basis sets and weights to find out where clusters of data exist, and use those to guide the selection of basis sets and transformations to understand the
data better.
It's a very elegant method for large-scale data analysis and very easy to do with help from computers (there's enough empirical analysis that has to be done that a human needs to guide the end result, though).
Dr. Michael K. Neylon - mneylon-pm@masemware.com || "You've left the lens cap of your mind on again, Pinky" - The Brain
It's not what you know, but knowing how to find it if you don't know that's important | {"url":"http://www.perlmonks.org/?node_id=116443","timestamp":"2014-04-19T13:37:16Z","content_type":null,"content_length":"18815","record_id":"<urn:uuid:5b509a65-fa88-48d6-a20a-2a7484bb2c00>","cc-path":"CC-MAIN-2014-15/segments/1397609537186.46/warc/CC-MAIN-20140416005217-00069-ip-10-147-4-33.ec2.internal.warc.gz"} |
1. A drama club is planning a bus trip to New York City to see a Broadway play. The cost per person for the bus rental varies inversely as the number of people going on the trip. It will cost $30 per person if 44 people go on the trip. How much will it cost per person if 60 people go on the trip? Round your answer to the nearest cent, if necessary.
A. $22.00
B. $40.91
C. $1,320.00
D. $21.29
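Inverse variation means cost per person times number of people is a constant k; a one-line check:

```python
k = 30 * 44       # $30 per person with 44 people -> k = 1320
cost_60 = k / 60  # cost per person with 60 people
print(cost_60)    # 22.0, i.e. choice A ($22.00)
```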
ref-syn-ops-ove - SICStus Prolog
4.1.5.1 Overview
Operators in Prolog are simply a notational convenience. For example, ‘+’ is an infix operator, so
2 + 1
is an alternative way of writing the term +(2, 1). That is, 2 + 1 represents the data structure
  +
 / \
2   1
and not the number 3. (The addition would only be performed if the structure were passed as an argument to an appropriate procedure, such as is/2; see ref-ari-eae.)
Prolog syntax allows operators of three kinds: infix, prefix, and postfix. An infix operator appears between its two arguments, while a prefix operator precedes its single argument and a postfix
operator follows its single argument.
Each operator has a precedence, which is a number from 1 to 1200. The precedence is used to disambiguate expressions in which the structure of the term denoted is not made explicit through the use of
parentheses. The general rule is that the operator with the highest precedence is the principal functor. Thus if ‘+’ has a higher precedence than ‘/’, then
a+b/c a+(b/c)
are equivalent, and denote the term +(a,/(b,c)). Note that the infix form of the term /(+(a,b),c) must be written with explicit parentheses:
If there are two operators in the expression having the same highest precedence, the ambiguity must be resolved from the types of the operators. The possible types for an infix operator are xfx, xfy, and yfx.
Operators of type ‘xfx’ are not associative: it is required that both of the arguments of the operator be subexpressions of lower precedence than the operator itself; that is, the principal functor
of each subexpression must be of lower precedence, unless the subexpression is written in parentheses (which gives it zero precedence).
Operators of type ‘xfy’ are right-associative: only the first (left-hand) subexpression must be of lower precedence; the right-hand subexpression can be of the same precedence as the main operator.
Left-associative operators (type ‘yfx’) are the other way around.
An atom named Name is declared as an operator of type Type and precedence Precedence by the command
:-op(Precedence, Type, Name).
An operator declaration can be cancelled by redeclaring the Name with the same Type, but Precedence 0.
The argument Name can also be a list of names of operators of the same type and precedence.
It is possible to have more than one operator of the same name, so long as they are of different kinds: infix, prefix, or postfix. Note that the ISO Prolog standard contains the restriction that
there should be no infix and postfix operators with the same name, however, SICStus Prolog lifts this restriction.
An operator of any kind may be redefined by a new declaration of the same kind. This applies equally to operators that are provided as standard, except for the ',' operator. Declarations for all
these built-in operators can be found in ref-syn-ops-bop. For example, the built-in operators ‘+’ and ‘-’ are as if they had been declared by (A) so that (B) is valid syntax, and means (C) or
pictorially (D).
:-op(500, yfx, [+,-]). (A)
a-b+c (B)
(a-b)+c (C)
    +
   / \
  -   c
 / \
a   b               (D)
The list functor ./2 is not a standard operator, but we could declare it to be (E) and then (F) would represent the structure (G).
:-op(600, xfy, .). (E)
a.b.c (F)
    .
   / \
  a   .
     / \
    b   c           (G)
Contrasting this with the diagram above for a-b+c shows the difference between ‘yfx’ operators where the tree grows to the left, and ‘xfy’ operators where it grows to the right. The tree cannot grow
at all for ‘xfx’ operators; it is simply illegal to combine ‘xfx’ operators having equal precedences in this way.
The possible types for a prefix operator are fy and fx, and for a postfix operator they are xf and yf.
The meaning of the types should be clear by analogy with those for infix operators. As an example, if not were declared as a prefix operator of type fy, then
not not P
would be a permissible way to write not(not(P)). If the type were fx, the preceding expression would not be legal, although
not P
would still be a permissible form for not(P).
If these precedence and associativity rules seem rather complex, remember that you can always use parentheses when in any doubt.
Physics Forums - View Single Post - Schrodinger's Equation
Quote by
would ask what motivates acceptance of the de Broglie hypothesis in general?
Experimental evidence. We can produce diffraction and interference effects in beams of electrons, neutrons, atoms, buckyballs... which are in agreement with wavelengths predicted by de Broglie's
equation [itex]\lambda = h / p[/itex]. | {"url":"http://www.physicsforums.com/showpost.php?p=1145604&postcount=65","timestamp":"2014-04-20T03:21:27Z","content_type":null,"content_length":"7873","record_id":"<urn:uuid:c1dde5d4-a7f6-4f8a-af26-8429fd752123>","cc-path":"CC-MAIN-2014-15/segments/1397609537864.21/warc/CC-MAIN-20140416005217-00332-ip-10-147-4-33.ec2.internal.warc.gz"} |
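For scale, a sketch of the [itex]\lambda = h/p[/itex] arithmetic for an electron (the speed is an arbitrary illustrative value and the constants are rounded):

```python
h = 6.626e-34     # Planck constant, J*s
m_e = 9.109e-31   # electron mass, kg

v = 2.0e6                   # illustrative speed, m/s (non-relativistic)
wavelength = h / (m_e * v)  # de Broglie wavelength, m
print(wavelength)           # ~3.6e-10 m, comparable to atomic spacings in crystals
```

That the wavelength is comparable to interatomic distances is exactly why crystal lattices act as diffraction gratings for electron beams.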
exponential type series question
April 5th 2011, 03:41 PM #1
Junior Member
Aug 2008
exponential type series question
this looks really simple but might end up quite complicated. i'm currently stuck though :/
suppose that for every k, $\sum_{n=0}^\infty \frac{a_{k+n}}{n!}=e$. does this imply that $a_k \to 1$?
the above condition can also be rewritten as $\sum_{n=0}^\infty \frac{1-a_{k+n}}{n!}=0$ of course, which makes it even more painfully obvious, but I still can't prove this...
I also need to figure out a more general question: does $\lim_{k\to\infty}\sum_{n=0}^\infty \frac{a_{k+n}}{n!}=e$ imply that $a_k \to 1$? But clearly knowing the answer for the initial question would be a good start...
any partial ideas or references will be appreciated too!..
New York Lottery Numbers General Information
New York Lottery Numbers General Information and Links
The New York Numbers lottery is a 3-digit game with six ways to play, which makes it one of the 3-digit lottery games in the US with the highest number of choices of playing methods. You can
play Numbers for as low as 50¢. With an additional $1 you can add a seventh way to win by playing "Lucky Sum". Moreover, there is also an "Instant Win" option available for you with an
additional $1.
New York Numbers drawing schedule: Numbers drawings are held twice a day, a day draw and an evening draw. Day drawings are held at approximately 12:30 p.m. every day, and evening drawings are held at approximately 7:30 p.m. daily.
New York Numbers drawing method: Drawings for Numbers are televised live on various TV stations listed here.
How to Play New York Numbers Lottery
From a New York Lottery retailer (a gas station, a convenience store, a food store, etc. which plays NY Lottery) get a Numbers playslip. A playslip contains four different play areas (GAME A through
GAME D). Each play area represents a different Numbers play. If you want to play only one number, fill out only the first panel, Game A; if you want to play 12 numbers, then get three playslips and
fill all the panels, A to D of all of them.
Each Numbers play area contains three rows of the digits 0 to 9. Mark the first digit of your number on the first row, the second digit on the second row, and the third digit on the third row. So, if
you want to play the number 409, then 4 will be marked in the first row, 0 in the second row, and 9 in the third row. Next select and mark the amount (50¢ or $1), then select the type of play -
Straight, Box, Straight/Box, or Combo.
If you are playing Front Pair, mark only the first two rows of the digits and leave the last one blank. If you are playing Back Pair, mark only the last two rows of the digits and leave the first one
The numbers you are playing are valid only for the next draw. You may specify either the day or evining draw, or both. In fact, playing both draws doubles the price.
Next, present the completed playslip to the lottery retailer to receive your printed ticket. Check your ticket immediately to ensure that the numbers, dates, and play type match your request.
You may also look at a Numbers playslip here.
Play Types and How You Win the New York Numbers Lottery
There are six ways you can play and win the New York Numbers game - four basic types and two shortcuts of the basic types. These types together with how much is paid for a winning New York Numbers
game type are listed below.
The basic Numbers play types are:
1. Straight: You win by matching the winning Numbers number in the exact order drawn. For example, if you play the Numbers number 409 as Straight, you'll win only if the winning number is exactly
409; you don't win anything if the winning number is 049 or 408.
The prize of New York Numbers Straight game is $500 for a $1 game and $250 for a 50¢ game.
2. Box: You win by matching the winning Numbers number in any order. For example, if you play 409 as Box, you'll win if the Numbers winning number contains the three digits 0, 4, and 9 in any order,
such as 049 or 940; but not if any one is missing like 408.
The prize of New York Numbers Box play depends on whether one of the digits is repeated or not. A number such as 409, where none of the digits repeats is called a 6-way box and pays $80 for a $1
game and $40 for a 50¢ game. If one of the digits is double, like 007, then it is called a 3-way box and pays $160 for a $1 game and $80 for a 50¢ game. There is no Box play for numbers with all
three identical digits such as 777. These numbers can only be played straight.
3. Front Pair: Just select the first two digits to win $50 for a $1 game and $25 for a 50¢ game. For example, if you play the front pair 40 (i.e., 40X), you'll win as long as the first two digits of
the winning number are 40 in exact order. So, you'll win with 400, 401, etc., but not with 940 or 049.
4. Back Pair: Just select the last two digits to win $50 for a $1 game and $25 for a 50¢ game. For example, if you play the back pair 09 (i.e., X09), you'll win as long as the last two digits of the
winning number are 09 in exact order. So, you'll win with 009, 109, etc., but not with 490 or 094.
And the shortcut play types are:
5. Straight/Box: is simply a combination of a 50¢ Straight play and a 50¢ Box play. This play type is designed simply to save you time; it is a shortcut of playing one game of straight and another
game of box of the same number. That is, instead of filling one panel for 50-cents Straight and another panel for 50-cents box, of the same number, you might as well fill out just one panel and
mark it as Straight/Box. Note that the minimum amount for a Straight/Box game is $1.
The prize of a Straight/Box play is the sum of a 50¢ Straight game and a 50¢ Box game. For example, if you play 409 as Straight/Box, and if the winning number is 409, then you'll win $290 ($250
for the Straight part + $40 for the Box part). If the winning number is 490, you'll win only $40 for the box part since you did not match it Straight. How much do you win with Straight/Box of the
number 007?
6. Combo (Combination): This is the same as playing every possible ordering of the selected number as a Straight. For example, if you want to win $500 on the number 409 in any order, then you have to
play all the possible combinations, namely 049, 940, 094, 409, 490, and 904, at a price of $6 ($1 for each). The shortcut is simply to fill out 409 and mark Combo for $1; you'll still pay $6. A
$1 Combo play of a double number such as 007 costs $3, since there are only 3 possible combinations (007, 070, 700).
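The payout and pricing arithmetic in the play types above can be checked with a short script (illustrative Python; the dollar amounts are the ones quoted in the rules above, and the function names are mine):

```python
from itertools import permutations

STRAIGHT_50C = 250  # 50-cent Straight prize, per the rules above

def box_50c_prize(number):
    """50-cent Box prize: $40 for a 6-way box, $80 for a 3-way box."""
    distinct = len(set(number))
    if distinct == 3:
        return 40   # all digits different: 6-way box
    if distinct == 2:
        return 80   # one digit doubled: 3-way box
    return 0        # triples such as 777 cannot be played Box

def straight_box_win(played, drawn):
    """Prize for a $1 Straight/Box play (a 50-cent Straight plus a 50-cent Box)."""
    win = STRAIGHT_50C if played == drawn else 0
    if sorted(played) == sorted(drawn):
        win += box_50c_prize(played)
    return win

def combo_cost(number, price_per_line=1):
    """A Combo play buys one Straight line per distinct ordering of the digits."""
    return len({"".join(p) for p in permutations(number)}) * price_per_line

print(straight_box_win("409", "409"))        # 250 + 40 = 290
print(straight_box_win("409", "490"))        # box part only: 40
print(straight_box_win("007", "007"))        # 250 + 80 = 330
print(combo_cost("409"), combo_cost("007"))  # 6 and 3 dollars
```

So, under these rules, a Straight/Box play of 007 pays $330 on an exact match, and $80 when the digits come up in another order.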
Lucky Sum
Numbers Lucky Sum is an add-on feature to Numbers: for an extra $1, you can win a prize even if your exact numbers aren't drawn. Just tell your retailer you want to play Lucky Sum in addition to your
regular Numbers wager.
Numbers Lucky Sum prizes:
0 ... $500.00
1 ... $166.00
2 ... $83.00
3 ... $50.00
4 ... $33.00
5 ... $23.50
6 ... $17.50
7 ... $13.50
8 ... $11.00
9 ... $9.00
10 ... $7.50
11 ... $7.00
12 ... $6.50
13 ... $6.50
14 ... $6.50
15 ... $6.50
16 ... $7.00
17 ... $7.50
18 ... $9.00
19 ... $11.00
20 ... $13.50
21 ... $17.50
22 ... $23.50
23 ... $33.00
24 ... $50.00
25 ... $83.00
26 ... $166.00
27 ... $500.00
The prizes are inversely proportional to the odds of each sum. Sums of 0 and 27 are the least likely, since each covers only one number (000 and 999, respectively), and are therefore allocated the
highest prize. Sums of 12, 13, 14, and 15 are the most likely to be drawn and thus carry the lowest prizes.
The odds and all possible combinations of 3-digit games are available on this web site at 3-digit Sums.
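The shape of the Lucky Sum prize table can be reproduced by enumerating all 1,000 three-digit outcomes and tallying their digit sums (a sketch; the actual prize schedule is set by the lottery):

```python
from collections import Counter
from itertools import product

# Tally how many of the 1,000 draws (000 through 999) produce each digit sum.
sum_counts = Counter(a + b + c for a, b, c in product(range(10), repeat=3))

print(sum_counts[0], sum_counts[27])   # 1 and 1: only 000 and 999
print(sum_counts[13], sum_counts[14])  # 75 and 75: the most likely sums
print(sum_counts[12], sum_counts[15])  # 73 and 73: close behind
print(sum(sum_counts.values()))        # 1000 outcomes in total
```

The counts confirm why sums 0 and 27 pay $500 while the middle sums pay the least.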
Prizes and Odds of New York Lottery Numbers
How much does New York Numbers pay for Straight or Box? The following is a summary of winner prize payouts and the odds of winning.
Play Type              Odds           Prize (for $1 bet)
Straight               1 in 1,000     $500
3-way Box              1 in 333       $160
6-way Box              1 in 167       $80
Pair (Front or Back)   1 in 100       $50
Note that a 6-way Box is a number with three non-identical digits, a 3-way Box has one of the digits doubled. Pairs are printed on your ticket with X representing "any", e.g. front pair 40X and back
pair X09, which means that X can be any digit.
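These odds follow from counting favorable outcomes among the 1,000 equally likely draws: 1 for a Straight, 3 or 6 for a Box, and 10 for a fixed pair. A quick check (illustrative):

```python
from itertools import permutations

TOTAL = 1000  # equally likely three-digit draws, 000 through 999

def one_in(favorable):
    """Express 'favorable out of TOTAL' as rounded 1-in-N odds."""
    return round(TOTAL / favorable)

box_3way = len({"".join(p) for p in permutations("007")})  # 3 winning draws
box_6way = len({"".join(p) for p in permutations("409")})  # 6 winning draws

print(one_in(1))         # Straight: 1 in 1000
print(one_in(box_3way))  # 3-way Box: 1 in 333
print(one_in(box_6way))  # 6-way Box: 1 in 167
print(one_in(10))        # Front or Back Pair: 1 in 100
```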
For New York Lottery general rules and how to claim prizes, please see New York Lottery general info page.
Links to some Numbers pages of the New York Lottery Official Web Site
Although we have tried to cover all the pertinent information of the New York Numbers, we recommend that you also look at the following pages at the New York Lottery official web site.
New York Numbers at US-Lotteries.com
The main purpose of this web site (US-Lotteries.com) is to present well-formatted latest and past results; analysis and statistics of past winning numbers, digits, pairs, and triads; a search to see
if your favorite numbers have ever won or have come close to winning the jackpot; and mathematical tools such as combination generators and wheels. The complete list of all New York Numbers pages of
this web site is given below.
Suppose you must find the termperature in degrees Fahrenheit F when it is 15º C. You would solve the equation F = (9/5)C + 32 for F. Your next logical step would be F = (9/5)(15) + 32. Why? A:
Addition Property of Equality B: Multiplication Property of Equality C: Substitution Property of Equality D: Division Property of Equality
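For reference, the step from F = (9/5)C + 32 to F = (9/5)(15) + 32 replaces the variable C with its known value, which is what makes substitution the operative move here; the resulting arithmetic can be checked in a couple of lines (illustrative):

```python
def celsius_to_fahrenheit(c):
    # F = (9/5)C + 32; written as 9*c/5 so the division stays exact for nice inputs
    return 9 * c / 5 + 32

print(celsius_to_fahrenheit(15))  # 59.0 degrees F
```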
water is flowing from a shallow concrete conical reservoir...
September 20th 2009, 10:24 AM
water is flowing from a shallow concrete conical reservoir...
Water is flowing at a rate of 50 m^3/min from a shallow concrete conical reservoir, vertex down, of base radius 40 m and height 6 m. How fast, in cm per minute, is the water level falling when the
water is 3 ft deep?
I know the volume of a cone is V = (pi/3)r²h
and I know to use the chain rule: dV/dt = (dh/dt)(dV/dh)
but I don't know how to find dh/dt or dv/dh
September 20th 2009, 11:23 AM
Water is flowing at a rate of 50 m^3/min from a shallow concrete conical reservoir, vertex down, of base radius 40 m and height 6 m. How fast, in cm per minute, is the water level falling when the
water is 3 ft deep?
I know the volume of a cone is V = (pi/3)r²h
and I know to use the chain rule: dV/dt = (dh/dt)(dV/dh)
but I don't know how to find dh/dt or dv/dh
$\frac{r}{h} = \frac{40}{6} = \frac{20}{3}$
$r = \frac{20h}{3}$
$V = \frac{\pi}{3} r^2 h$
$V = \frac{\pi}{3} \left(\frac{20h}{3}\right)^2 h$
$V = \frac{400\pi}{27} h^3$
take the time derivative of the last equation to get the relationship between $\frac{dV}{dt}$ and what you are looking for ... $\frac{dh}{dt}$
September 20th 2009, 02:35 PM
$\frac{dV}{dt} = \frac{400\pi}{9} h^2 \frac{dh}{dt}$
i know you plug 50 into dv/dt but I'm not sure what to plug in for h... 6m or 3ft?
and i know after that you solve for dh/dt
September 20th 2009, 02:47 PM
the question asks for dh/dt in cm/min when h = 3 ft(?) ... (sure it's in feet? if so, then you'll have to convert to meters to get it in units of m/min)
I wouldn't convert m to cm until the very end.
September 20th 2009, 02:50 PM
yes the question says "How fast in cm per minutes, is the water level falling when the water is 3ft deep?"
so I don't need to incorporate the 3ft into the equation?
September 20th 2009, 02:53 PM
yes you do ... h = 3ft in the derivative equation. you'll need to convert to meters like I told you previously.
September 20th 2009, 02:53 PM
so i plugged 3 ft, or 0.9144 meters, into h^2
and found $\frac{dh}{dt} = \frac{450}{365.76\pi}$
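As a numeric check of the relation derived above, dV/dt = (400π/9) h² dh/dt, note that h enters squared; with |dV/dt| = 50 m³/min and h = 3 ft, a few lines of Python give the magnitude of the rate (a sketch, not part of the thread):

```python
import math

dV_dt = 50.0     # m^3/min, magnitude of the outflow
h = 3 * 0.3048   # 3 ft expressed in meters (0.9144 m)

# From V = (400*pi/27) h^3:  dV/dt = (400*pi/9) h^2 dh/dt
dh_dt_m = dV_dt * 9 / (400 * math.pi * h ** 2)  # m/min

print(round(dh_dt_m * 100, 1))  # water level drop, ~42.8 cm/min
```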
Exact Multiplicity of Positive Solutions for a Class of Second-Order Two-Point Boundary Problems with Weight Function
1. Introduction
Consider the problem
The existence and multiplicity of positive solutions for ordinary differential equations have been studied extensively in the literature; see, for example, [1–3] and the references therein. Several
different approaches, such as Leray-Schauder theory, fixed-point theory, the theory of lower and upper solutions, and the shooting method, have been applied in these works. In [4, 5], Ma
and Thompson obtained multiplicity results for a class of second-order two-point boundary value problems depending on a positive parameter
Exact multiplicity of positive solutions have been studied by many authors. See, for example, the papers by Korman et al. [6], Ouyang and Shi [7, 8], Shi [9], Korman and Ouyang [10, 11], Korman [12],
Rynne [13], Bari and Rynne [14] (for 15]. In these papers, bifurcation techniques are used. The basic method of proving their results can be divided into three steps: proving positivity of solutions
of the linearized problems; studying the direction of bifurcation; showing uniqueness of solution curves.
Ouyang and Shi [7] obtained the curves of positive solutions for the semilinear problem
where 7], the following two cases were considered:
Korman and Ouyang [10] studied the problem
under the conditions
They obtained a full description of the positive solution set of (1.3) and proved that all positive solutions of (1.3) lie on a single smooth solution curve bifurcating from the point
Of course a natural question is how about the structure of the positive solution set of (1.3) when
It is extremely difficult to answer such a question in general. So we shift our study to the problem (1.1) in this paper. We are interested in discussing the exact multiplicity of positive solutions
of (1.1) with a weight function
Suppose the following.
(H1)One has
(H2)concave convex that is, there exists
(H3)The limits
In this paper, we obtain exactly two disjoint smooth curves of positive solutions of (1.1) under conditions (H1)–(H4). According to this, we can conclude the existence and exact numbers of positive
solutions of (1.1) for
Remark 1.1.
Korman and Ouyang [10] obtained the unique positive solution curve of (1.3) under the condition (1.4). However they gave no information when 7], they did not treat the case that the equation contains
a weight function.
On the other hand, suppose the following.
()One has
Remark 1.2.
If 4] that the assumptions (H15]). Hence, the essential role is played by the fact of whether ∖
Our main tool is the following bifurcation theorem of Crandall and Rabinowitz.
Theorem 1.3 (see [16]).
2. Notations and Preliminaries
and let
with the norm
equipped with the norm
Define the operator
Definition 2.1.
For a nontrivial solution of (1.1), degenerate if the linearized problem
has a nontrivial solution; otherwise, it is nondegenerate.
Lemma 2.2.
Let (H1) and (H4) hold. For any degenerate positive solution
The proof is motivated by Lemma 11].
Suppose to the contrary that
respectively. We claim that
Multiplying (2.9) by
Note that the right side of (2.11) is zero, which is a contradiction.
Repeating the above proof on
Finally, integrating the differential equation in (2.13), we can choose
In view of (H4),
The following lemma is an important result in this paper.
Lemma 2.3.
Let (H1) and (H4) hold. Suppose that
(i)All solutions of (1.1) near
(ii)One has
(i) The proof is standard. Let
Now, we show that
Note that
Multiplying (2.17) by
However, the left side of (2.21) is equal to zero according to boundary conditions (2.18) and (2.20). This implies that
(ii) Substituting
Evaluating at
Multiplying (2.24) by
According to (H1), (H4), and Lemma 2.2, we see that
We first prove that
Differentiating (1.1) and (2.19) with respect to
Multiplying, (2.27) by
From (2.29), we get
Solving the equation
Combining with (2.32), we obtain (2.26).
The following proof is motivated by the proof of Theorem 8].
Next, we claim that there exists
We get
We first prove the above statement. On the contrary, suppose that
Consider the problem
Now let us consider the claim related to
From the claim and
3. The Main Results and the Proofs
In this section we state our main results and proofs.
Definition 3.1.
Remark 3.2.
It is well known that the eigenvalues of (3.2) are given by
For each
Definition 3.3 (see [7]).
Let superlinear (resp., sublinear) on sup-sub (resp., sub-sup) on
Lemma 3.4.
(i) Let
(ii) Let
The proof is similar to that of Proposition 7], so we omit it.
Lemma 3.5.
Let (H1)–(H4) hold, let
Let us consider
as a bifurcation problem from 17], we have (i).
Let us consider
as a bifurcation problem from infinity. Note that (3.7) is also the same as to (1.1). The proof of Theorem 5] ensures that (ii) is correct.
Lemma 3.6.
Let (H1), (H4) hold. Suppose that
Suppose to the contrary that
By (1.1) and (H1), we have
has a unique solution
The following Lemma is an interesting and important result.
Lemma 3.7.
Let (H1)–(H4) hold. Suppose that
From conditions (H1)–(H3), we can check easily that
In fact, let
Now, we give the proof in two cases.
Case I (
On the contrary, suppose that
Multiplying (1.1) by
By Lemma 2.2, (3.13), and
Case II (
On the contrary, suppose that
Note that
We can extend evenly
By the strong maximum principle, we conclude that
By Lemma 2.3, we get
Our main result is the following.
Theorem 3.8.
Let (H1)–(H4) hold. Then the following are considered.
(i)All positive solutions of (1.1) lie on two continuous curves
(ii)Equation (1.1) has no positive solution for 1).
(i) Since 17],
On the other hand, Lemma 3.7 and the implicit function theorem ensure that
From the above discussion, we see that
Now, we consider positive solutions of (1.1), for which the maximum value on
Let us return to consider (3.6) as the bifurcation problem from infinity. Note that (3.6) is also the same as to (1.1). Since 18], there exists a subcontinuum
Hence, ∖
Finally, we show that both curves
It follows that
Similarly, we can show that every positive solution of (1.1), the maximum value on
(ii) The result (ii) is a corollary of (i).
Next, we will give directly other theorems. Their proofs are similar to that of Theorem 3.8. So, we omit them.
Theorem 3.9.
(i)All positive solutions of (1.1) lie on a single continuous curve
(ii)Equation (1.1) has no positive solution for 2).
Remark 3.10.
In fact, if we reverse the inequalities in (H1), (H1
Also using the method in this paper, we can obtain the exact numbers of positive solutions for the Dirichlet problem
Definition 3.11
Theorem 3.12.
Let (H1
(i)All positive solutions of (3.19) lie on a single continuous curve
(ii)Equation (1.1) has no positive solution for
Theorem 3.13.
Let (H1), (H2), (H3), (H4
(i)All positive solutions of (3.19) lie on two continuous curves
(ii)Equation (3.19) has no positive solution for
Remark 3.14.
Theorems 3.12 and 3.13 extend the main result Theorem 10], where
4. Examples
In this section, we give some examples.
Example 4.1.
Example 4.2.
Example 4.3.
The authors are very grateful to the anonymous referees for their valuable suggestions. An is supported by SRFDP (no. 20060736001), YJ2009-16 A06/1020K096019, 11YZ225. Luo is supported by grant no.
San Leandro Algebra 1 Tutor
...I taught pre-calculus sections as a TA at UC Santa Cruz for two years. I have taken classes in teaching literacy at Mills College. I have a degree in Sociology which required me to read
thousands of pages from close to 100 different authors.
15 Subjects: including algebra 1, reading, statistics, SAT math
...I have taught students at all levels of competency and comfort with these concepts. I earned a B.A. in Philosophy (Cum Laude) from the University of California-Santa Cruz. During that time, I
served as an Undergraduate Tutor.
29 Subjects: including algebra 1, English, reading, writing
...I have been a consultant in the marketing field to top companies in the Bay Area. I have multiple degrees in electrical engineering. I worked as an Electrical Engineer at one of the premier R&D
institutions - Bell Laboratories.
39 Subjects: including algebra 1, English, chemistry, writing
...I have studied English, literature, mathematics & finance, and I have applied them in the real world. I can help students in all of these areas. My goal always is to teach for a mastery of the
subject because that has much longer-term benefit to the student.Marketing, sales & business development are the most significant roles on my resume, which is summarized below.
17 Subjects: including algebra 1, English, reading, writing
I am a highly successful high school physics and earth science teacher and tutor since 1996. I have been certified to teach physics and earth science/geosciences in three states including
California. Courses that I have taught include Conceptual Physics, General Physics, Honors Physics, Advanced P...
32 Subjects: including algebra 1, reading, ACT Math, elementary math | {"url":"http://www.purplemath.com/san_leandro_algebra_1_tutors.php","timestamp":"2014-04-17T13:42:04Z","content_type":null,"content_length":"23986","record_id":"<urn:uuid:81f6fba1-ac89-4756-ad99-23a59d6a4e35>","cc-path":"CC-MAIN-2014-15/segments/1397609530131.27/warc/CC-MAIN-20140416005210-00167-ip-10-147-4-33.ec2.internal.warc.gz"} |
Converting C into MIPS Assembly
Converting C into MIPS Assembly
Hi, I wanted to convert this code into assembly but I'm having trouble. Here's what I got :]
int b = 0, x;
while (x != 0) {
b += ((x & 03) == 2);
x >>= 1;
I believe this is just a basic instruction to count the number of occurrences of the bit sequence “10” (one zero) in x.
Now to convert this, here's what I got:
L1: bne $s1, $zero, DONE # branch if ! ( x ==0j )
addu $s0, $ra, $0 #b = 0 store ra
add $s1, $s1, $s2 # b+=$s2
beq $s2, $s2, 2 b += ((x & 03) == 2
addi $s3, $s3, 1 # x>>=1
j L1 # jump back to top of loop
but I'm thinking that there are some logical issues with this ASM code. Is there a different way to approach this conversion? Thanks :]
Use a compiler, and study the generated assembler; patch up to match your processor.
It's even easier if you just get a cross-compiler that targets MIPS - the job is as good as done :)
MSVC 2008 has the option to compile for MIPS.
Just a small "thing". You wouldn't be using the "s" registers, since they're designed for "long" use. You'd want to be using the "t" registers to store the temporary values. Doesn't make much
sense you're adding to "b" before you evaluate the expression? :\
Not sure why you have a branch-on-equal instruction (beq). I don't see where that would come into your loop, since you'll then skip "x >>= 1;". So yes, your logic is wrong.
As others have said, generating the optimised MIPS assembly would be easier for learning, but don't get in the habit, especially if you have to do this in an exam. You may find the compiler might
even be putting things on the stack if it's not optimised.
I would approach this problem by rewriting the C code as simply as possible. Any line that has more than one operator on it probably needs to be simplified (with a few exceptions). Get it to the
point where you can rewrite every line of C as a line of MIPS. A few obvious problems that jump out are the fact that you use the << and & operators, but haven't used the corresponding
instructions. Also, you're using bne, but the comment right next to it implies the opposite comparison.
If you want to go the compiler route, MIPS-SDE is a cross-compiler available for free download from the MIPS website, but if you already have VS as mentioned above - that's certainly easier.
Though I'd second the recommendation not to do so if you're doing this for a class. I don't think there's anything wrong with your approach - you just need to have a better understanding of how
each operation in C breaks down into the operations done in Assembly.
If you turn off all optimizations in the MSVS compiler and set it for MIPS you may be able to learn something via its output. I would not turn this code in for credit but it may give you a push
in the right direction. | {"url":"http://cboard.cprogramming.com/c-programming/123817-converting-c-into-mips-assembly-printable-thread.html","timestamp":"2014-04-23T11:14:32Z","content_type":null,"content_length":"10260","record_id":"<urn:uuid:c9c290b2-8174-4ebe-aa4a-0b1643e09530>","cc-path":"CC-MAIN-2014-15/segments/1398223202457.0/warc/CC-MAIN-20140423032002-00190-ip-10-147-4-33.ec2.internal.warc.gz"} |
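As a reference when checking any hand-written assembly against the original C, the loop's behavior can be mirrored in a few lines of Python (the bit trick counts overlapping "10" pairs in the binary expansion):

```python
def count_10_pairs(x):
    """Port of the C loop: b += ((x & 3) == 2); x >>= 1; repeated while x != 0."""
    b = 0
    while x != 0:
        b += (x & 3) == 2  # low two bits equal binary "10"?
        x >>= 1
    return b

print(count_10_pairs(0b1010))  # 2: "1010" contains "10" twice
print(count_10_pairs(0b110))   # 1
print(count_10_pairs(0))       # 0
```

Running a hand-translated MIPS version against this reference on a simulator is an easy way to spot logic errors before submitting.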
Integrability aspects of the inverse problem of the calculus of variations
W. Sarlet
E-mail: Willy.Sarlet@rug.ac.be
Abstract. For a long time, a paper by J. Douglas of 1941 has been the only contribution to the question of classifying second-order ordinary differential equations for which a non-singular multiplier
matrix exists which turns the given system into an equivalent system of Euler-Lagrange equations. It was based on the Riquier-Janet theory of formal integrability of partial differential equations
and limited to systems with two degrees of freedom. Quite recently, a geometrical calculus of derivations of tensor fields along projections has been developed, which in the study of second-order
differential equations is primarily related to the existence of a canonically defined linear connection on a suitable bundle. It turns out that this calculus provides the right tools for closely
monitoring the process of Douglas's analysis in a coordinate free way. After a survey of the integrability analysis which can be carried out this way, we briefly sketch how subcases belonging to each
of the three classes in the main classification scheme of Douglas can be generalised to an arbitrary number of degrees of freedom.
AMS classification. 58F05, 58G99, 70H35
Keywords. Lagrangian systems, inverse problem, integrability | {"url":"http://www.emis.de/proceedings/7ICDGA/V/sarlet.htm","timestamp":"2014-04-21T12:09:51Z","content_type":null,"content_length":"2051","record_id":"<urn:uuid:3f9ccbf6-e480-4a93-899b-9cc35edc0b96>","cc-path":"CC-MAIN-2014-15/segments/1397609539776.45/warc/CC-MAIN-20140416005219-00291-ip-10-147-4-33.ec2.internal.warc.gz"} |
Draws an arbitrary geometric path. Paths are described by a series of pathOperations:
moveTo[x,y]: adds a point to the path by moving to the specified coordinates (x,y)
lineTo[x,y]: adds a point to the path by drawing a straight line from the current coordinates to the new specified coordinates (x,y)
curveTo[x1,y1,x2,y2,x3,y3]: adds a curved segment, defined by three new points, to the path by drawing a Bézier curve that intersects both the current coordinates and the specified coordinates (x3,y3), using the specified points (x1,y1) and (x2,y2) as Bézier control points
quadTo[x1,y1,x2,y2]: adds a curved segment, defined by two new points, to the path by drawing a quadratic curve that intersects both the current coordinates and the specified coordinates (x2,y2), using the specified point (x1,y1) as a quadratic parametric control point
hline[x]: adds a point to the path by drawing an horizontal line to the specified coordinates (x,current.y)
vline[y]: adds a point to the path by drawing a vertical line to the specified coordinates (current.x,y)
shapeTo[shape,connect]: appends the geometry of the specified Shape, shape operation or outline operation to the path, possibly connecting the new geometry to the existing path segments with a line segment
close: closes the current subpath by drawing a straight line back to the coordinates of the last moveTo
The first operation must be a moveTo.
path( borderColor: 'darkBlue', fill: 'blue', borderWidth: 4 ){
moveTo( x: 50, y: 50 )
quadTo( x1: -30, y1: 100, x2: 50, y2: 150 )
quadTo( x1: 100, y1: 230, x2: 150, y2: 150 )
quadTo( x1: 230, y1: 100, x2: 150, y2: 50 )
quadTo( x1: 100, y1: -30, x2: 50, y2: 50 )
}
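The semantics of these operations can also be sketched outside GraphicsBuilder; the minimal tracker below (illustrative Python, not part of the library) shows how moveTo sets the current point, how each segment ends at a new current point, and how close returns to the last moveTo:

```python
class PathBuilder:
    """Minimal model of the path operations described above (illustrative only)."""

    def __init__(self):
        self.segments = []         # recorded operations
        self.current = None        # current point
        self.subpath_start = None  # point of the last moveTo, used by close

    def move_to(self, x, y):
        self.current = self.subpath_start = (x, y)
        self.segments.append(("moveTo", (x, y)))
        return self

    def line_to(self, x, y):
        self.segments.append(("lineTo", (x, y)))
        self.current = (x, y)
        return self

    def quad_to(self, x1, y1, x2, y2):
        # (x1, y1) is the control point; the curve ends at (x2, y2)
        self.segments.append(("quadTo", (x1, y1), (x2, y2)))
        self.current = (x2, y2)
        return self

    def close(self):
        # straight line back to the coordinates of the last moveTo
        self.segments.append(("close", self.subpath_start))
        self.current = self.subpath_start
        return self

# The four-petal example above, re-traced:
p = (PathBuilder().move_to(50, 50)
                  .quad_to(-30, 100, 50, 150)
                  .quad_to(100, 230, 150, 150)
                  .quad_to(230, 100, 150, 50)
                  .quad_to(100, -30, 50, 50))
print(p.current)        # (50, 50): back where the path started
print(len(p.segments))  # 5 operations recorded
```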
Parents, Are You Ready To Teach Your Kids Arithmetic? -- Phyllis Schlafly March 29, 2000 column.
March 29, 2000
Parents are starting to realize that "fuzzy" math courses (variously called "whole math," "new math" or "new new math") are producing kids who can't do arithmetic, much less algebra. The U.S.
Department of Education responded last October by officially endorsing ten new math courses for grades K-12, calling them "exemplary" or "promising" and urging local school districts to "seriously
consider" adopting one of them.
The recommended programs were approved by an "expert" panel commissioned by the Department of Education. But many parents believe that the "experts" are subtracting rather than adding to the skills
of schoolchildren.
Scholars are criticizing the new courses, too. They say that most of the panel's "field reviewers" who made the initial recommendations were teachers, not math experts, and that the panel making the
final decisions did not include "active research mathematicians."
Within six weeks of the Department of Education's announcement, more than 200 mathematicians and scholars banded together to denounce the government-anointed curricula because they fail to teach
basic skills. The group wrote a joint letter to Education Secretary Richard Riley criticizing the "exemplary" programs and asking the Department to reconsider its choices.
The group then published the letter as a full-page ad in the November 18th Washington Post. Despite the prestige of the letter's signers, including four Nobel Laureates and two winners of the Fields
Medal (the highest mathematics honor), Riley refused to back away from the Department's endorsements.
Riley defended his Department's recommendations because they conform to the so-called "standards" adopted in 1989 by the National Council of Teachers of Mathematics (NCTM). But the nationally created
math "standards" are just as off the mark as the nationally created history standards that caused such an uproar when they were released in 1995.
The history standards were denounced in the U.S. Senate by a vote of 99 to 1, but that didn't faze the educators determined to indoctrinate students with "politically correct" history. After a few
cosmetic changes, revisionist history masquerading under the label "standards" has infected nearly all new social studies textbooks.
The schools appear just as determined to force fuzzy math on children despite its obvious failures and the opposition of scholars and parents. In Illinois, parents have clashed with schools over one
of these "exemplary" courses called "Everyday Math," or "Chicago Math" because it was produced by the University of Chicago Mathematics Project, complaining that the curriculum neglects basic
Last August, parents in Plano, Texas filed a lawsuit against their school district over another of these Department-approved courses, "Connected Math," accusing the district of failing to give their
children basic math instruction. In December, parents in Montgomery County, Maryland kicked up vigorous opposition to Connected Math even though the district was being enticed into using it by the
prospect of a $6 million federal grant.
Another of these Department-approved courses, "Mathland," directs the children to meet in small groups and invent their own ways to add, subtract, multiply and divide. It's too bad they don't know
that adults wiser than those now in school have already discovered how to add, subtract, multiply and divide.
Critics charge that these fuzzy math programs, which are touted as complying with "standards," do not teach traditional or standard arithmetic at all and actually give the word "standards" a bad
name. They are based on such theories as that "process skills" are more important than computational skills and that correct solutions are not important so long as the student feels good about what
he is doing.
The arguments for fuzzy math are that it is supposed to spare children the rigors of teacher-imposed rules and teach them that all they need is a calculator. Fuzzy math omits drill in basic math
facts, fails to systematically build from one math concept to another, and encourages children to work in groups to "discover" math and construct their own math language.
According to mathematician Joel Hass of the University of California-Davis, one of the signers of the letter to Riley, "Saying that we don't need to teach children how to compute now that we have
calculators is like saying we don't need to teach them how to draw now that we have cameras or we don't need to teach them how to play music now that we have CD players." Mathematician William G.
Quirk, whose career includes teaching 26 different math and computer science courses at three universities, says, "Nowhere in the NCTM's 258 pages of standards do they suggest that kids should
remember any specific math facts."
Critics complain that failing to teach children the division of fractions precludes their moving on to algebra. David Klein of California State University, another signer of the letter to Riley,
said, "In shutting the door to algebra, Connected Math also closes doors to careers in engineering and science."
In 1989 23 percent of freshmen entering California colleges needed remedial help in math. This figure has now risen to 55 percent. If parents want their children to learn arithmetic, they will have
to teach them at home. | {"url":"http://www.eagleforum.org/column/2000/mar00/00-03-29.html","timestamp":"2014-04-18T03:32:13Z","content_type":null,"content_length":"23796","record_id":"<urn:uuid:4792e3b8-7f1a-4ec3-b136-0b79c8f19f44>","cc-path":"CC-MAIN-2014-15/segments/1397609532480.36/warc/CC-MAIN-20140416005212-00088-ip-10-147-4-33.ec2.internal.warc.gz"} |
Understanding Specific Gravity and Extract
by Martin P. Manning
Republished from BrewingTechniques' September/October 1993.
Understanding the basis of extract potential enables you to refine your method, fine-tune your bill of materials, and ensure success in achieving the beer you seek. Simple conversion charts make
easy work of converting specific gravity to degree extract and back again.
Brewers use two primary systems for measuring the amount of extract or alcohol dissolved in wort or beer -- specific gravity and percent extract by weight. Because published methods, recipes, and
product information often provide extract data in only one of the two possible forms, knowing how to convert from one system to the other provides a powerful tool for recipe design. This article
defines and explains the two main systems for measuring extract and provides the tools that will enable you to convert from one system to the other, calculate extract yield, and make adjustments
to your brewing process.
SPECIFIC GRAVITY
Specific gravity is the density (weight per unit volume) of a substance divided by the density of water. A specific gravity (SG) of 1.050 ("50 points") indicates that the substance is 5% heavier
than an equal volume of water. The specific gravity of liquids is often measured with a hydrometer, whose weight (a constant) displaces different volumes of liquid as the liquid's density varies.
The typical hydrometer consists of a weighted bulb with a slender graduated stem rising above it. Once the bulb is submerged, the increment of displacement with depth is determined only by the
cross section of the stem, which must be very small to ensure a high degree of accuracy.
Variation with temperature: The densities of water and wort vary with temperature. When using specific gravity in speaking or writing, reference temperatures for the wort sample and for water
must be specified along with the value of specific gravity. A wort may have, for example, a specific gravity of 1.050 at 60 degrees F (15.5 degrees C) relative to water at 60 degrees F, or 1.050
@ 60 degrees F/60 degrees F (1.050 @ 15.5 degrees C/15.5 degrees C). The density of water peaks at 39 degrees F (4 degrees C), and that temperature is sometimes used as a reference temperature
for the density of water. Brewers use reference temperatures closer to room (cellar?) temperature. The wort reference temperature and water reference temperature are always the same, so the
measured specific gravity of wort with no extract content is 1.000. It is therefore sufficient to say that a given wort has a specific gravity of 1.050 @ 60 degrees F, the convention used in this
article. Typical reference temperatures are 60 degrees F (15.5 degrees C), 64 degrees F (17.5 degrees C), and 68 degrees F (20 degrees C).
Readings taken when the wort or beer is at a temperature other than the reference temperature can be adjusted back to the reference temperature, and some hydrometers have a built-in thermometer
and correction read-out. Alternatively, a correction table or curve, usually supplied with the instrument, can be used to adjust readings. Such corrections must include the test sample's change
in density with temperature and also the change in the volume of the instrument itself. Strictly speaking, the necessary wort density adjustment is a function of the extract content and
temperature (1), but this is a small effect. If the correction data assumes a nominal extract content, the error is at most 0.0002 (0.2 points) in specific gravity, if the sample is 10 degrees F
(5.5 degrees C) from the reference temperature. This diminishes to zero as the sample is brought to the reference temperature.
PERCENT EXTRACT BY WEIGHT -- PLATO AND BALLING SCALES
Tables constructed in 1843 by the German chemist Karl Balling, and later improved upon by Plato for the German Imperial Commission, correlate specific gravity with the percent by weight of
extract as sucrose in solution. Of the common sugars, sucrose produces the largest increase in specific gravity for a given percentage by weight in solution. Both tables were derived by making up
solutions of sucrose (cane sugar) in water -- each solution containing different percentages by weight -- and weighing a known volume of the resulting solutions to determine their densities.
Plato's table is slightly more accurate than Balling's, but for practical purposes they are the same. Stating that a wort is 10 degrees Plato (or Balling) means that if the extract in solution
were 100% sucrose, it would be 10% of the total weight. In the typical wort, however, only a small fraction of the extract is actually sucrose. This is not a problem; sucrose was merely selected
as the reference because it produces the largest increase in specific gravity for a given percentage by weight in solution. Had another, "lighter" sugar been selected, sucrose would, confusingly,
yield more than 100% of its weight as extract (see below). Throughout the following discussion, degrees P will be used to indicate the percent by weight of extract in solution.
Measuring extract using the Plato scale: A hydrometer that is calibrated in degrees Plato (degrees P) is properly called a saccharometer, because it directly measures the percent sugar as sucrose
by weight in solution. Degrees Plato do not vary with temperature, because the weight of the water and of the extract contained in the water do not vary. The statement that a wort is 10 degrees
P, therefore, needs no accompanying reference temperature. When measuring extract using the Plato scale, however, the wort must be at the reference temperature. Otherwise, a correction must be
applied because the saccharometer is still comparing a weight to a displaced volume. Adjustments are made in the same way as for instruments that measure specific gravity.
Although the Plato and Balling tables were derived empirically, approximate conversion formulas can be found. Roughly, for specific gravity @ 60 degrees F (15.5 degrees C),
degrees P = (SG - 1)/0.004 [1]
degrees P = SG points/4 [2]
and conversely,
SG points = 4 degrees P
A much more accurate, yet still easily calculated conversion, also for specific gravity @ 60 degrees F (15.5 degrees C), is given by
degrees P = 259 - 259/SG [3]
SG = 259/(259 - degrees P)
This form is given by de Clerck (1) for specific gravity @ 17.5 degrees C, using 260 instead of 259 as the constant in the equation. I have found the value 259 to give good accuracy for specific
gravity @ 60 degrees F (15.5 degrees C).
When extreme accuracy is desired, a polynomial curve fit to the table values can be derived using linear regression techniques. I obtained the following third-degree fit using a table that charts
specific gravity @ 60 degrees F (15.5 degrees C) against the percentage (w/w) (measured in degrees Plato) found in Hough, et al. (2):
degrees P = -676.67 + 1286.4 SG - 800.47 SG^2 + 190.74 SG^3 [4]
The curves shown in Figure 1 were generated using equation 4 and provide a graphical means of accurate conversion between specific gravity and degrees Plato.
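These conversions are also easy to check numerically. Below is a minimal Python sketch (the function names are mine, not from the article) comparing equations 1, 3, and 4 at a typical wort gravity of 1.050 @ 60 degrees F:

```python
def plato_eq1(sg):
    # Equation 1: rough rule of thumb, degrees P = (SG - 1) / 0.004
    return (sg - 1.0) / 0.004

def plato_eq3(sg):
    # Equation 3: degrees P = 259 - 259/SG
    return 259.0 - 259.0 / sg

def plato_eq4(sg):
    # Equation 4: third-degree polynomial fit to the Plato table
    return -676.67 + 1286.4 * sg - 800.47 * sg**2 + 190.74 * sg**3

sg = 1.050
print(plato_eq1(sg))  # ~12.5  (the rough rule reads slightly high)
print(plato_eq3(sg))  # ~12.33
print(plato_eq4(sg))  # ~12.34 (equations 3 and 4 agree closely)
```

As Figure 2 suggests, the divergence of equation 1 grows with gravity, while equations 3 and 4 track each other over the normal brewing range.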
Figure 2 shows the relative accuracies of the three conversion equations 1, 3, and 4. For most purposes, equation 3 is adequate and much preferred over equation 1, especially at high specific gravities.
Weight of extract in solution: The weight of extract, WE (as sucrose), per unit volume of solution can be calculated as
WE = (degrees P/100) SG r [5]
where r is the density of water at the reference temperature for specific gravity. Any of the previous equations relating specific gravity and degrees Plato can be substituted into the above to
obtain an equation for the weight of extract in solution as a function of either specific gravity or degrees Plato. Assuming degrees P = 259 - 259/SG, the weight of extract per unit volume can be
written as
WE = 2.59(SG - 1)r [6]
The density of water at 60 degrees F (15.5 degrees C) is 8.338 lb/gal (0.9990 kg/L) (3). Substituting the appropriate value yields the weight of extract in either pounds per gallon or kilograms
per liter. Alternatively, values read from Figure 1 can be used directly in equation 5.
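A quick numeric sanity check of equations 5 and 6 (my own sketch, not from the article): for a 1.050 wort at 60 degrees F the two forms must agree, since equation 6 is just equation 5 with equation 3 substituted in.

```python
RHO_60F = 8.338  # density of water at 60 degrees F, lb/gal

def extract_weight_eq5(plato, sg, rho=RHO_60F):
    # Equation 5: WE = (degrees P / 100) * SG * rho
    return (plato / 100.0) * sg * rho

def extract_weight_eq6(sg, rho=RHO_60F):
    # Equation 6: WE = 2.59 * (SG - 1) * rho
    return 2.59 * (sg - 1.0) * rho

sg = 1.050
plato = 259.0 - 259.0 / sg        # equation 3
we5 = extract_weight_eq5(plato, sg)
we6 = extract_weight_eq6(sg)
# Both give about 1.08 lb of extract per gallon of 1.050 wort.
```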
Extract yield potential: A common measure of extract yield potential is the weight percent of raw material that will appear as extract (assumed to be sucrose) when that material is mixed or
mashed with water. This value is found by dividing the weight of extract found in solution (equation 5) by the weight of the raw material used per unit volume. The exact number can vary depending
upon how the material is used. For grains, the yield depends upon the grind (fine or coarse), the starch conversion temperature, and whether the weight of the raw material is taken "as is" or
adjusted for its moisture content.
Because it is the reference, only sucrose will yield 100% of its weight as extract when dissolved in water. Other sugars, even though they will completely dissolve, produce a smaller increase in
density and yield something less than 100%. Dry malt extract, for example, yields about 97%, dextrose (corn sugar) about 90%. In the case of grains, only a portion of the raw material dry weight
can be dissolved, which reduces the typical yield to between 66 and 83%, depending upon the type of grain and how it is prepared. Malted barley yields at most about 80% of its dry weight in extract.
Another common measure of extract yield potential is specific gravity increase (in points) when a unit weight of raw material is mixed or mashed with water to yield a unit volume of solution.
Typical units are points per pound per gallon (or points per kilogram per liter). Brewers use both extract potential by weight and specific gravity increase in points per pound per gallon in
recipe formulation to determine the amount of malts and adjuncts required to reach the desired wort gravity. Laboratory analyses of malts and adjunct grains often give the extract potential by
weight percent only, so it is useful to develop a conversion between the two.
Relationship between extract potential by weight and specific gravity: For a unit weight of raw material, the weight of extract in a unit volume of solution (equation 6) must be equal to the unit
weight times the extract potential by weight. In English units, this gives
1.0 lb (%yield) = 2.59 (SG - 1)8.338
46.31 (%yield) = points/lb/gal
In metric units,
1.0 kg (%yield) = 2.59 (SG - 1)0.9990
386.5 (%yield) = points/kg/L
From these equations it is evident that sucrose produces a specific gravity increase of 46.31 points/lb/gal (386.5 points/kg/L). Also, points per kilogram per liter = 8.346 (points/ lb/gal),
since 386.5/46.31 = 8.346.
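The yield figures quoted above can be turned into gravity contributions with the 46.31 factor; a small sketch (helper names are mine):

```python
def points_per_lb_per_gal(yield_fraction):
    # Sucrose (yield 1.00) raises 1 gal of solution by 46.31 points per pound.
    return 46.31 * yield_fraction

def points_per_kg_per_l(yield_fraction):
    # Metric form: sucrose gives 386.5 points per kg per liter.
    return 386.5 * yield_fraction

sucrose  = points_per_lb_per_gal(1.00)  # 46.31
dme      = points_per_lb_per_gal(0.97)  # ~44.9 (dry malt extract)
dextrose = points_per_lb_per_gal(0.90)  # ~41.7 (corn sugar)
```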
Change in specific gravity with dilution or evaporation: As the volume of a wort is increased by dilution or decreased by evaporation, the weight of extract within it remains constant. Using the
previously derived relationships, a simple expression can be found to predict the resulting specific gravity. If a volume of wort V1 of strength degrees P1 is diluted or evaporated to a new volume V2 and strength degrees P2, conservation of extract weight (equation 5) requires

V1 (degrees P1/100) SG1 r = V2 (degrees P2/100) SG2 r

which reduces to

V1 degrees P1 SG1 = V2 degrees P2 SG2

Assuming that degrees P = 259 - 259/SG, and rearranging the equation gives

SG2 - 1 = (V1/V2)(SG1 - 1) (Equation 7)
Equation 7 can be used, for example, to estimate wort gravity at the end of the boil based on readings taken at the end of the lautering process. If the target gravity is not indicated,
adjustments such as increasing or decreasing the boil time or adding water to the boil can be made to correct the situation. For example, suppose you collect 11.5 gal of runoff from your lauter
tun at SG 1.042. You schedule a boil time of 1.5 h, over which time 1.5 gal (@ 60 degrees F) will evaporate, leaving a final volume of 10.0 gal. The predicted gravity at the end of the boil is
then (11.5/10.0)42 = 48.3. One must be careful to correct the measured volume of the wort as well as its specific gravity to the reference temperature of the target specific gravity.
If the target specific gravity is 1.050 (higher than the predicted value), an increase in boil time is needed to further reduce the final volume. Solving equation 7 for the new final volume gives
V2 = 11.5(42/50) = 9.66 gal. Assuming a standard evaporation rate of 1.0 gal/h, the adjusted boil time is then (11.5-9.66)/1.0 = 1.84 h.
If the target specific gravity is 1.046 (lower than the predicted value), some water can be added to the kettle, or the boil time can be reduced. In either case, the final volume is V2 = 11.5 (42
/46) = 10.5 gal. If we choose to preserve the planned 1.5 h boil, evaporating 1.5 gal, the initial volume V1 must be adjusted to 10.5 + 1.5 = 12.0 gal. Alternatively, the boil time could be
adjusted to (11.5-10.5)/1.0 = 1.0 h. It should be noted, however, that if the boil time becomes too short, problems with hop utilization, trub formation, and elevated levels of dimethylsulfide
(DMS) may result.
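The worked example above is easy to script. This sketch (variable names mine) leans on the fact that gravity points times volume stays constant while extract is conserved:

```python
def predicted_points(v1, points1, v2):
    # Equation 7: points2 = (V1/V2) * points1
    return v1 * points1 / v2

def volume_for_target(v1, points1, target_points):
    # Solve equation 7 for the final volume V2.
    return v1 * points1 / target_points

v1, points1 = 11.5, 42.0                            # 11.5 gal of runoff at SG 1.042
end_of_boil = predicted_points(v1, points1, 10.0)   # 48.3 points after boiling to 10 gal
v2 = volume_for_target(v1, points1, 50.0)           # 9.66 gal needed for SG 1.050
boil_time = (v1 - v2) / 1.0                         # 1.84 h at 1.0 gal/h evaporation
```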
Whenever the final volume or boil time are other than originally planned, the weight and timing of all hop additions may need to be adjusted to maintain the planned bitterness level. This is a
good reason to assess the situation and make corrections at the start of the boil rather than at the end.
(1) J. de Clerck, A Textbook of Brewing, Vol. 1 (Chapman Hall, 1957).
(2) J.S. Hough, D.E. Briggs, and R. Stevens, Malting and Brewing Science (Chapman and Hall, 1971).
(3) CRC Handbook of Chemistry and Physics, 66th ed., R.C. Weast, Ed. (CRC Press, 1985). | {"url":"http://morebeer.com/brewingtechniques/library/backissues/issue1.3/manning.html","timestamp":"2014-04-19T04:22:55Z","content_type":null,"content_length":"17051","record_id":"<urn:uuid:106998ec-ecad-4993-ac0d-2a326c11f18d>","cc-path":"CC-MAIN-2014-15/segments/1397609535775.35/warc/CC-MAIN-20140416005215-00101-ip-10-147-4-33.ec2.internal.warc.gz"} |
Results 1 - 10 of 235
, 1994
"... This paper shows how a large class of interprocedural dataflow-analysis problems can be solved precisely in polynomial time. The only restrictions are that the set of dataflow facts is a finite set, and that the dataflow functions distribute over the confluence operator (either union or intersection). ..."
Cited by 373 (33 self)
This paper shows how a large class of interprocedural dataflow-analysis problems can be solved precisely in polynomial time. The only restrictions are that the set of dataflow facts is a finite set, and that the dataflow functions distribute over the confluence operator (either union or intersection). This class of problems includes---but is not limited to---the classical separable problems (also known as "gen/kill" or "bit-vector" problems)---e.g., reaching definitions, available expressions, and live variables. In addition, the class of problems that our techniques handle includes many non-separable problems, including truly live variables, copy constant propagation, and possibly-uninitialized variables. A novel aspect of our approach is that an interprocedural dataflow-analysis problem is transformed into a special kind of graph-reachability problem (reachability along interprocedurally realizable paths). The paper presents three polynomial-time algorithms for the realizable-path reachability problem: an exhaustive version, a second exhaustive version that may be more appropriate in the incremental and/or interactive context, and a demand version. The first and third of these algorithms are asymptotically faster than the best previously known realizable-path reachability algorithm. An additional benefit of our techniques is that they lead to improved algorithms for two other kinds of interprocedural analysis problems: interprocedural flow-sensitive side-effect problems (as studied by Callahan) and interprocedural program slicing (as studied by Horwitz, Reps, and Binkley).
, 1996
"... SLD resolution with negation as finite failure (SLDNF) reflects the procedural interpretation of predicate calculus as a programming language and forms the computational basis for Prolog
systems. Despite its advantages for stack-based memory management, SLDNF is often not appropriate for query evalu ..."
Cited by 260 (27 self)
SLD resolution with negation as finite failure (SLDNF) reflects the procedural interpretation of predicate calculus as a programming language and forms the computational basis for Prolog systems.
Despite its advantages for stack-based memory management, SLDNF is often not appropriate for query evaluation for three reasons: a) it may not terminate due to infinite positive recursion; b) it may
not terminate due to infinite recursion through negation; c) it may repeatedly evaluate the same literal in a rule body, leading to unacceptable performance. We address these three problems for
goal-oriented query evaluation of general logic programs by presenting tabled evaluation with delaying (SLG resolution).
- SYMPOSIUM ON PRINCIPLES OF DATABASE SYSTEMS , 1997
"... The evaluation of path expression queries on semistructured data in a distributed asynchronous environment is considered. The focus is on the use of local information expressed in the form of
path constraints in the optimization of path expression queries. In particular, decidability and complexity ..."
Cited by 147 (6 self)
The evaluation of path expression queries on semistructured data in a distributed asynchronous environment is considered. The focus is on the use of local information expressed in the form of path
constraints in the optimization of path expression queries. In particular, decidability and complexity results on the implication problem for path constraints are established.
- SIGMOD Record , 2000
"... Many problems encountered when building applications of database systems involve the manipulation of models. By “model, ” we mean a complex structure that represents a design artifact, such as a
relational schema, object-oriented interface, UML model, XML DTD, web-site schema, semantic network, comp ..."
Cited by 134 (21 self)
Many problems encountered when building applications of database systems involve the manipulation of models. By “model, ” we mean a complex structure that represents a design artifact, such as a
relational schema, object-oriented interface, UML model, XML DTD, web-site schema, semantic network, complex document, or software configuration. Many uses of models involve managing changes in
models and transformations of data from one model into another. These uses require an explicit representation of “mappings ” between models. We propose to make database systems easier to use for
these applications by making “model ” and “model mapping ” first-class objects with special operations that simplify their use. We call this capability model management. In addition to making the
case for model management, our main contribution is a sketch of a proposed data model. The data model consists of formal, object-oriented structures for representing models and model mappings, and of
high-level algebraic operations on those structures, such as matching, differencing, merging, function application, selection, inversion and instantiation. We focus on structure and semantics, not
implementation. 1
, 1997
"... This paper describes how a number of program-analysis problems can be solved by transforming them to graph-reachability problems. Some of the program-analysis problems that are amenable to this
treatment include program slicing, certain dataflow-analysis problems, and the problem of approximating th ..."
Cited by 119 (8 self)
This paper describes how a number of program-analysis problems can be solved by transforming them to graph-reachability problems. Some of the program-analysis problems that are amenable to this
treatment include program slicing, certain dataflow-analysis problems, and the problem of approximating the possible "shapes" that heap-allocated structures in a program can take on. Relationships
between graph reachability and other approaches to program analysis are described. Some techniques that go beyond pure graph reachability are also discussed.
- Journal of Logic Programming , 1988
"... We consider a bottom-up query-evaluation scheme in which facts of relations are allowed to have nonground terms. The Magic Sets query-rewriting technique is generalized to allow arguments of
predicates to be treated as bound even though the rules do not provide ground bindings for those arguments. I ..."
Cited by 117 (13 self)
We consider a bottom-up query-evaluation scheme in which facts of relations are allowed to have nonground terms. The Magic Sets query-rewriting technique is generalized to allow arguments of
predicates to be treated as bound even though the rules do not provide ground bindings for those arguments. In particular, we regard as "bound" any argument containing a function symbol or a variable
that appears more than once in the argument list. Generalized "magic " predicates are thus defined to compute the set of all goals reached in a top-down exploration of the rules, starting from a
given query goal; these goals are not facts of constants as in previous versions of the Magic Sets algorithm. The magic predicates are then used to restrict a bottom-up evaluation of the rules so
that there are no redundant actions; that is, every step of the bottom-up computation must be performed by any algorithm that uses the same sideways information passing strategy (sips). The price
paid, compared to prev...
- Sci. of Comp. Prog , 2003
"... Abstract. Recently, pushdown systems (PDSs) have been extended to weighted PDSs, in which each transition is labeled with a value, and the goal is to determine the meet-over-allpaths value (for
paths that meet a certain criterion). This paper shows how weighted PDSs yield new algorithms for certain ..."
Cited by 106 (35 self)
Abstract. Recently, pushdown systems (PDSs) have been extended to weighted PDSs, in which each transition is labeled with a value, and the goal is to determine the meet-over-allpaths value (for paths
that meet a certain criterion). This paper shows how weighted PDSs yield new algorithms for certain classes of interprocedural dataflow-analysis problems. 1
, 1992
"... Languages of declarative logic programming differ from other modal nonmonotonic formalisms by lack of syntactic uniformity. For instance, negation as failure can be used in the body of a rule,
but not in the head; in disjunctive programs, disjunction is used in the head of a rule, but not in the bod ..."
Cited by 103 (9 self)
Languages of declarative logic programming differ from other modal nonmonotonic formalisms by lack of syntactic uniformity. For instance, negation as failure can be used in the body of a rule, but
not in the head; in disjunctive programs, disjunction is used in the head of a rule, but not in the body; in extended programs, negation as failure can be used on top of classical negation, but not
the other way around. We argue that this lack of uniformity should not be viewed as a distinguishing feature of logic programming in general. As a starting point, we take a translation from the
language of disjunctive programs with negation as failure and classical negation into MBNF---the logic of minimal belief and negation as failure. A class of theories based on this logic is defined,
theories with protected literals, which is syntactically uniform and contains the translations of all programs. We show that theories with protected literals have a semantics similar to the answer
set semantics us...
- In Proceedings of the ACM Symposium on Principles of Database Systems , 1990
"... ..."
- JOURNAL OF LOGIC PROGRAMMING , 1993
"... The area of deductive databases has matured in recent years, and it now seems appropriate to reflect upon what has been achieved and what the future holds. In this paper, we provide an overview
of the area and briefly describe a number of projects that have led to implemented systems. ..."
Cited by 100 (6 self)
The area of deductive databases has matured in recent years, and it now seems appropriate to reflect upon what has been achieved and what the future holds. In this paper, we provide an overview of the
area and briefly describe a number of projects that have led to implemented systems. | {"url":"http://citeseerx.ist.psu.edu/showciting?cid=248206","timestamp":"2014-04-20T02:54:31Z","content_type":null,"content_length":"37329","record_id":"<urn:uuid:ba64bbf4-9c6b-447c-88d4-bd8661970e8a>","cc-path":"CC-MAIN-2014-15/segments/1397609537804.4/warc/CC-MAIN-20140416005217-00097-ip-10-147-4-33.ec2.internal.warc.gz"} |
Faculty Publications
Electrical Engineering and Computer Sciences
COLLEGE OF ENGINEERING UC Berkeley
Faculty Publications - Kannan Ramchandran
Book chapters or sections
Information for:
Articles in journals or magazines
Articles in conference proceedings
Technical Reports
Support Services:
Facilities & Safety
My EECS Info
Technical Reports | {"url":"http://www.cs.berkeley.edu/Pubs/Faculty/ramchandran.html","timestamp":"2014-04-17T01:29:39Z","content_type":null,"content_length":"42083","record_id":"<urn:uuid:4eb67a62-7bb4-40ac-b1e1-03af7f8e2f91>","cc-path":"CC-MAIN-2014-15/segments/1398223206120.9/warc/CC-MAIN-20140423032006-00456-ip-10-147-4-33.ec2.internal.warc.gz"} |
This file contains an explanation of the difference between implicit and explicit time integration schemes. The content is intended for those who want to learn a bit more than what the ForwardandBackwardEulerExplorer GUI can offer.
clear all;
Start with a simple linear ODE,

u'(t) = lambda u(t), u(0) = u0

From ODE stability analysis we know that our eigenvalues need to be in the left half plane. In this case we only have one eigenvalue, lambda itself.

Now we can analyze the stability for a finite time step Delta t. The Forward Euler method is expressed as:

v(n+1) = v(n) + Delta t lambda v(n) = (1 + lambda Delta t) v(n)

Applying Forward Euler to this model problem, the solution has the following form,

v(n) = (1 + lambda Delta t)^n v0

We want to determine under what conditions v(n) remains bounded as n grows. For stability using Forward Euler we need

|1 + lambda Delta t| <= 1
theta = linspace(0,2*pi,100);
lambda_delta_t = exp(i*theta)-1;
grid on;
axis([-3 3 -3 3])
xlabel('real(\lambda\Delta t)')
ylabel('imag(\lambda\Delta t)')
title('Stability Region for Forward Euler')
For a given problem, i.e. with a given lambda, eigenvalue stability therefore limits the size of the time step Delta t.

As an example let's take lambda = -2 with u(0) = 1, so the exact solution is u(t) = exp(-2t). The stability condition |1 - 2 Delta t| <= 1 requires Delta t <= 1.
v0 = 1;
lambda = -2;
tex = linspace(0,20);
uex = exp(-2*tex);
dt = [0.1 0.9 1.1];
for ind=1:3
N = ceil(20/dt(ind));
t{ind} = dt(ind)*[0:N];
v{ind} = (1+lambda*dt(ind)).^[0:N] * v0;
xlabel('t'); ylabel('v'); xlim([0 20]);
text(15,0.5,'\Delta t = 0.1')
xlabel('t'); ylabel('v'); xlim([0 20]);
text(15,0.5,'\Delta t = 0.9')
xlabel('t'); ylabel('v'); xlim([0 20]);
text(15,25,'\Delta t = 1.1')
If we have a stiff system with large negative real eigenvalues, using an explicit time integration scheme can be very inefficient as we will need to use very small time steps. A more efficient approach to numerically integrate a stiff problem would be to use a method with eigenvalue stability for large negative real eigenvalues. Implicit methods often have good stability along the negative real axis. The simplest implicit method is the Backward Euler Method,

v(n+1) = v(n) + Delta t lambda v(n+1)

The amplification factor for Backward Euler is

g = 1 / (1 - lambda Delta t)

and for stability using Backward Euler we need

|1 / (1 - lambda Delta t)| <= 1

which is shown below
theta = linspace(0,2*pi,100);
lambda_delta_t = exp(i*theta)+1;
grid on;
axis([-3 3 -3 3])
xlabel('real(\lambda\Delta t)')
ylabel('imag(\lambda\Delta t)')
title('Stability Region for Backward Euler')
Returning to the example problem (lambda = -2, u(0) = 1), Backward Euler remains stable for all three time steps:
v0 = 1;
lambda = -2;
tex = linspace(0,20);
uex = exp(-2*tex);
dt = [0.1 0.9 1.1];
for ind=1:3
N = ceil(20/dt(ind));
t{ind} = dt(ind)*[0:N];
v{ind} = (1/(1-lambda*dt(ind))).^[0:N] * v0;
xlabel('t'); ylabel('v'); xlim([0 20]);
text(15,0.5,'\Delta t = 0.1')
xlabel('t'); ylabel('v'); xlim([0 20]);
text(15,0.5,'\Delta t = 0.9')
xlabel('t'); ylabel('v'); xlim([0 20]);
text(15,0.5,'\Delta t = 1.1')
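The contrast between the two schemes for this example comes down to the per-step amplification factor. Here is a short Python sketch (not part of the original MATLAB file) for lambda = -2:

```python
lam = -2.0

def forward_euler_gain(dt):
    # Forward Euler amplification: v(n+1) = (1 + lam*dt) * v(n)
    return abs(1.0 + lam * dt)

def backward_euler_gain(dt):
    # Backward Euler amplification: v(n+1) = v(n) / (1 - lam*dt)
    return abs(1.0 / (1.0 - lam * dt))

for dt in (0.1, 0.9, 1.1):
    print(dt, forward_euler_gain(dt), backward_euler_gain(dt))
# Forward Euler gains:  0.8, 0.8, 1.2   -- unstable once dt > 1
# Backward Euler gains: ~0.83, ~0.36, ~0.31 -- below 1 for every dt
```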
So implicit methods are WAY better. Why don't we use them all the time?
Consider the Forward Euler method applied to a linear system of ODEs, u' = A u:

v(n+1) = v(n) + Delta t A v(n)

In this explicit algorithm, the largest computational cost is the matrix-vector multiply, A v(n). For Backward Euler the update is implicit, v(n+1) = v(n) + Delta t A v(n+1). Re-arranging to solve for v(n+1) gives

(I - Delta t A) v(n+1) = v(n)
Thus to find v(n+1), a linear system must be solved at every time step, which is generally far more expensive than the single matrix-vector multiply of the explicit scheme. | {"url":"http://www.mathworks.com/matlabcentral/fileexchange/42662-forwardandbackwardeulerexplorer/content/ForwardandBackwardEulerExplorer/html/WhyExplicitvsImplicit.html","timestamp":"2014-04-16T16:46:59Z","content_type":null,"content_length":"35466","record_id":"<urn:uuid:cc396ec0-6e21-455b-b8fc-b986d0f95082>","cc-path":"CC-MAIN-2014-15/segments/1397609524259.30/warc/CC-MAIN-20140416005204-00521-ip-10-147-4-33.ec2.internal.warc.gz"}
Degree Sequences of $F$-Free Graphs
Let $F$ be a fixed graph of chromatic number $r+1$. We prove that for all large $n$ the degree sequence of any $F$-free graph of order $n$ is, in a sense, close to being dominated by the degree
sequence of some $r$-partite graph. We present two different proofs: one goes via the Regularity Lemma and the other uses a more direct counting argument. Although the latter proof is longer, it
gives better estimates and allows $F$ to grow with $n$.
As an application of our theorem, we present new results on the generalization of the Turán problem introduced by Caro and Yuster [Electronic J. Combin. 7 (2000)].
Full Text: | {"url":"http://www.combinatorics.org/ojs/index.php/eljc/article/view/v12i1r69","timestamp":"2014-04-17T07:53:54Z","content_type":null,"content_length":"14644","record_id":"<urn:uuid:05260a10-7d5b-4844-b104-e4adb0bf4889>","cc-path":"CC-MAIN-2014-15/segments/1398223206647.11/warc/CC-MAIN-20140423032006-00190-ip-10-147-4-33.ec2.internal.warc.gz"} |
Centre of an arc
February 18th 2006, 02:43 AM #1
Feb 2006
Centre of an arc?
Can ne 1 help me how to find the centre (Xc,Yc) for an arc having end points A(x1,y1)[say (2,5)] and B(x2,y2)[say (6,2)]?.........
I tried to calculate the centre imagining those end points of an arc as the end points of the diagonal of a square..........If we assume that the side of the square as 'a' then it becomes the
radius 'r' for the arc which i found to be 5/sqrt(2) units with the given points (2,5) and (6,2)
The problem is, even if I knew the radius and the end points i'm unable to calculate the center C(Xc,Yc)...........
Was my calculation wrong or the approach was wrong?
And i also want to find the length 'l' of the arc and height 'h' of the curved surface................Please help me out.
I'm sending the figure as an attachment for clear picture of what i'm asking.
Last edited by ragsk; February 18th 2006 at 02:51 AM.
Can ne 1 help me how to find the centre (Xc,Yc) for an arc having end points A(x1,y1)[say (2,5)] and B(x2,y2)[say (6,2)]?.........
through 2 points you can draw many circles. All centres of these circles form a line perpendicular to AB.
So you need an additional value to get only 1 specific circle. Look at the attached drawing.
As earboth says there are an infinity of circles matching your two points. A more algebraic approach will give you an idea of how to solve the problem in general and will show you why this
I will assume the two points you gave: 1=(2,5) and 2=(6,2). The general equation for a circle is: $(x-x_c)^2+(y-y_c)^2=r^2$ where $(x_c,y_c)$ is the center of your circle and r is the radius. We
have two points on that circle, 1 and 2, so we get two equations that $x_c$, $y_c$, and r must satisfy:

$(2-x_c)^2+(5-y_c)^2=r^2$

$(6-x_c)^2+(2-y_c)^2=r^2$
We need one more equation relating the three variables in order to solve this system in general. This is why we need more information to solve the problem.
However, if you are clever, it may appear we have such information: the center of the circle lies on the perpendicular bisector of the chord. To find this line, start by extending the chord to a
line and find the equation of this line. Points 1 and 2 are on this line, so we can find it. Skipping the steps to the final result, this line is $y=- \frac{3}{4} x + 13/2$.
The slope of the perpendicular to this line is $m'=4/3$. Knowing the coordinates of the midpoint of the chord, M(4, 7/2), we can find the line that is the perpendicular bisector: $y= \frac{4}{3}
x -11/6$.
The center of the circle lies on this line so thus $y_c= \frac{4}{3} x_c -11/6$.
This is, in fact, another relationship between the required variables, so our system is now:

$(2-x_c)^2+(5-y_c)^2=r^2$

$(6-x_c)^2+(2-y_c)^2=r^2$

$y_c= \frac{4}{3} x_c - \frac{11}{6}$

It may appear this is the solution to our difficulties. However there is a snag. Solve the last equation for $y_c$ and plug it into the top two equations. In both cases we find:

$\frac{25}{9}x_c^2 - \frac{200}{9}x_c + \frac{1825}{36} = r^2$

i.e. we get the same equation in both cases. This means our system is still indeterminate.
(The point of going through all of this is that you are going to need the perpendicular bisector of the chord to find h once you have the complete system of equations to solve your problem.)
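If, as in the original question, you fix the radius (for example r = 5/sqrt(2) from the square construction), the center follows directly: it lies on the perpendicular bisector of chord AB, at distance sqrt(r^2 - (c/2)^2) from the midpoint, where c is the chord length. A Python sketch with my own helper names:

```python
import math

def circle_centers(ax, ay, bx, by, r):
    """Return the two candidate centers of a circle of radius r through A and B."""
    mx, my = (ax + bx) / 2.0, (ay + by) / 2.0   # midpoint of the chord
    c = math.hypot(bx - ax, by - ay)            # chord length
    d = math.sqrt(r * r - (c / 2.0) ** 2)       # distance from midpoint to center
    ux, uy = (by - ay) / c, -(bx - ax) / c      # unit vector perpendicular to AB
    return (mx + d * ux, my + d * uy), (mx - d * ux, my - d * uy)

r = 5.0 / math.sqrt(2.0)
c1, c2 = circle_centers(2, 5, 6, 2, r)
# c1 is (2.5, 1.5) and c2 is (5.5, 5.5); each is distance r from both endpoints.
```

Once a center is chosen, the arc height is h = r - d and the arc length is r * theta with theta = 2*asin(c/(2r)); for these numbers theta = pi/2, so the arc length is about 5.55 and h is about 1.04.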
February 18th 2006, 05:23 AM #2
February 18th 2006, 07:04 AM #3 | {"url":"http://mathhelpforum.com/geometry/1914-centre-arc.html","timestamp":"2014-04-20T02:18:29Z","content_type":null,"content_length":"43371","record_id":"<urn:uuid:f4e7b554-2747-469f-affa-af5310938e87>","cc-path":"CC-MAIN-2014-15/segments/1397609538824.34/warc/CC-MAIN-20140416005218-00159-ip-10-147-4-33.ec2.internal.warc.gz"} |
Post a reply
Stefy just looked for a formula of the form y = mx + c so that when x = -2, then y = 3, and similarly when x = 8, y = 5.
My answer used a similar idea but I think his is the right way.
If you think about the straight line graph this produces, every point between -2 and 8 will map onto every point between 3 and 5.
And the mapping is reversible.
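The line described above is easy to write down explicitly; a quick Python sketch (mine, not from the thread):

```python
def interval_map(x):
    # y = m*x + c chosen so that x = -2 gives y = 3 and x = 8 gives y = 5.
    m = (5 - 3) / (8 - (-2))   # slope 0.2
    c = 3 - m * (-2)           # intercept 3.4
    return m * x + c

def inverse_map(y):
    # Inverting the line shows the mapping is 1:1 in both directions.
    return (y - 3.4) / 0.2

# interval_map sends -2 to 3, 8 to 5, and every x in [-2, 8] into [3, 5].
```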
To prove it, you'd have to show the mapping is 1:1 in both directions, and that a point inside the first interval maps to a point inside the resulting interval. | {"url":"http://www.mathisfunforum.com/post.php?tid=18479&qid=242534","timestamp":"2014-04-19T14:47:56Z","content_type":null,"content_length":"23494","record_id":"<urn:uuid:933db7d6-e007-4d88-ac67-2d2ffdbb3325>","cc-path":"CC-MAIN-2014-15/segments/1397609537271.8/warc/CC-MAIN-20140416005217-00025-ip-10-147-4-33.ec2.internal.warc.gz"}
SAT Skills Insight
Select a score band
Problem Solving
Skills needed to score in this band
SKILL 1: Solve problems using multiple strategies, including the following:
— Visualization
— Estimation skills
— Recognizing relevant information
— Function notation
SKILL 2: Use insight in solving nonroutine geometric problems involving the following:
— Triangles
— Patterns
— Perimeter
— The Pythagorean Theorem
— Properties of circles
1. 1
Solve problems using multiple strategies, including the following: Visualization
2. 2
Solve problems using multiple strategies, including the following: Estimation skills
3. 3
Solve problems using multiple strategies, including the following: Recognizing relevant information
The two semicircles in the figure above have centers
4. 4
Solve problems using multiple strategies, including the following: Function notation
5. 5
Use insight in solving nonroutine geometric problems involving the following: Triangles
6. 6
Use insight in solving nonroutine geometric problems involving the following: Patterns
In each of the three square figures above, only the small 1-by-1 squares that border the outside edges of the figure are shaded. If a square whose side measures 100 were divided into 1-by-1
squares that are shaded in a similar fashion, how many squares in that figure would be shaded ?
7. 7
Use insight in solving nonroutine geometric problems involving the following: Perimeter
8. 8
Use insight in solving nonroutine geometric problems involving the following: The Pythagorean Theorem
9. 9
Use insight in solving nonroutine geometric problems involving the following: Properties of circles
In the coordinate plane, the points
Skills needed to score in the next band
SKILL 1: Solve the first stage of a problem, and then apply that solution to solve the next stage of the problem
SKILL 2: Recognize complexity in problems that appear at first to be routine
SKILL 3: Develop and apply an effective strategy and keep track of information in solving a nonroutine problem
SKILL 4: Identify relevant and irrelevant information when choosing a solution strategy
SKILL 5: Solve multistep problems involving properties of integers | {"url":"http://sat.collegeboard.org/practice/sat-skills-insight/math/band/600/skill/5","timestamp":"2014-04-17T07:41:17Z","content_type":null,"content_length":"90458","record_id":"<urn:uuid:6622854d-0881-4b54-a62d-2ee6fa7b7c3a>","cc-path":"CC-MAIN-2014-15/segments/1397609526311.33/warc/CC-MAIN-20140416005206-00115-ip-10-147-4-33.ec2.internal.warc.gz"} |
3,146pages on
this wiki
Googology is the study and nomenclature of large numbers.^[1]
One who studies and invents large numbers and large number names is known as a googologist. A mathematical object relevant to googology is known as a googologism; the term googolism is similar but
only applies to numbers. Googology is known for the rather comic names given to the googologisms, such as "meameamealokkapoowa oompa", "a-ooga", and "wompogulus".
Googology is not to be confused with googlology, the study of the Google search engine and its various other services.
The antithesis to googology is ultrafinitism, which states that large numbers simply do not exist.
The term was coined by Andre Joyce, formed by combining googol (the classic large number) + -logos (Greek suffix, meaning "study"). Joyce's googology involved devising a system of names for numbers
based on wordplay and whimsical extrapolation. Ironically, the term does not appear to be well-known even among its own practitioners, and few "googologists" use the term to describe themselves. Of
the major figures in the field, only Sbiis Saibian extensively uses the word "googology."
Although the term googology is modern, the subject has existed for as long as humans have been fascinated by large numbers.
The earliest known work by a "googologist" is probably the Sand Reckoner written by Archimedes, a Greek polymath, sometime in the 3rd century B.C. In it he develops a system of numbers extending to
10^8 × 10^16. There is other examples in ancient history that illustrate mankind's fascination, and even adeptness, with large numbers. Some religious texts contain some very large numbers. Although
the Bible contains no definite numbers greater than 10^8, it uses figurative language in many places to describe very large numbers such as "the stars in the sky" or "the sands of the sea."
With the advent of modern mathematics, and the impending invention of the computer, mathematicians of the 19th and 20th centuries had access to numbers larger than ever before. This fascination was
relayed to the laymen through popular books on mathematics. "Googol," "googolplex," and "mega" were all introduced in books of popular mathematics, written by mathematicians who wanted to explain to
the laymen what mathematicians meant when they invoked infinity.
Eventually, the fascination of large numbers spread to a class of amateurs who took it upon themselves to extend the ideas hinted at in these popular books on mathematics. These became the early
googologists. This took on something of a form of a hobby that still continues today, with amateurs writing papers claiming to have "invented the largest number ever." That being said, not everything
produced is brilliant, nor is it all crank mathematics. There is a variety of skill levels, and some of googology actually comes from professional mathematicians, not amateurs. In particular, there
seem to be three classes:
1. Googologisms that arise in professional mathematics as side-effects of more serious math problems, such as Graham's number and Skewes' number.
2. Googologisms devised recreationally by professional mathematicians, such as chained arrow notation and Steinhaus-Moser Notation.
3. Googologisms created solely by amateurs, such as BEAF and Hyper-E notation.
During most of the 20th century, early googologists worked in isolation. Since the advent of the internet however, there has been a greater confluence of ideas, and several websites have sprung up to
gather the loose bits of information that form the body of knowledge, methodology, and conventions known as googology. Perhaps most important of these sites are Googology Wiki, Robert Munafo's site,
and One to Infinity.
Furthermore, within the last 11 years (2002 through 2013) a loosely knit community of large number enthusiasts, dubbing themselves googologists, has emerged, building websites, sharing information,
and developing a culture with a unique approach to one particular challenge: "What is the largest number you can come up with?" Googologists generally avoid many of the common responses such as "
infinity," "anything you can come up with plus 1," "the largest number that can be named in ten words," "the largest number imaginable," "a zillion," "a hundred billion trillion million googolplex"
or other indefinite, infinite, ill-defined, or inelegant responses. Rather googologists are interested in defining definite numbers using efficient and far reaching structural schemes, and don't
attempt to forestall the unending quest for larger numbers, but rather encourage it. So perhaps a more accurate description of the challenge is: "What is the largest number you can come up with using
the simplest tools?"
As far as mathematical fields go, googology is an oddball. It precariously teeters on the edge of what we call "science," becoming more of an art form as opposed to a mathematical study.
Although googology remains, and will probably always be, an obscure, esoteric, and impractical study, it at least now has a name, a history, and a community.
List of googologists
See also: Category:People
Below is a list of people who have contributed significantly to large number studies.
1. ↑ Googology
Big numbers.
See also | {"url":"http://googology.wikia.com/wiki/Googology","timestamp":"2014-04-19T09:46:51Z","content_type":null,"content_length":"73547","record_id":"<urn:uuid:4616c951-5f0f-49af-8c22-e6ac8fbe5704>","cc-path":"CC-MAIN-2014-15/segments/1397609537097.26/warc/CC-MAIN-20140416005217-00347-ip-10-147-4-33.ec2.internal.warc.gz"} |
Identity of the Weyl-Tensor
up vote 3 down vote favorite
Let $(M^n,g)$ be a Riemannian manifold and let $W$ be its Weyl tensor. For a given ONB, does the identity $$W_{ijkl}W_{ijkm}=\frac{1}{n}|W|^2g_{lm}$$ hold? I think I've seen it somewhere but I'm not
sure whether this is valid only in dimension four (in this case, this is certainly true).
dg.differential-geometry riemannian-geometry
add comment
1 Answer
active oldest votes
This does not hold for $n>4$. To see this, start with a Ricci-flat 4-manifold $N^4$ that is not flat, and let $M^{n}= N^4 \times \mathbb{R}^{n-4}$, endowed with the product metric.
Then the metric on $M$ is Ricci-flat, so its Riemann curvature tensor is its Weyl tensor and $W_{ijkl}=0$ when any index is greater than $4$. However it is now clear that we can't
have your equation when $l = m > 4$.
up vote 5 down
vote accepted (This argument doesn't start to work by $n=4$ because the Weyl curvature is identically zero until you get to dimension $4$.)
add comment
Not the answer you're looking for? Browse other questions tagged dg.differential-geometry riemannian-geometry or ask your own question. | {"url":"http://mathoverflow.net/questions/96234/identity-of-the-weyl-tensor","timestamp":"2014-04-19T07:15:53Z","content_type":null,"content_length":"49809","record_id":"<urn:uuid:6bfce81c-6b0b-4bf2-937c-01d9213f279e>","cc-path":"CC-MAIN-2014-15/segments/1397609536300.49/warc/CC-MAIN-20140416005216-00250-ip-10-147-4-33.ec2.internal.warc.gz"} |
complexity Big O
just make sure answer correct..
(i) n2 + 6n + 4
(ii) 5n3 + 2n + 8
(iii) (n2 + 1) (3n + 5)
(iv) 5(6n + 4)
(v) 4n log2n + 3n + 8
answer :
correct me if wrong
Last edited on
it is
4n · log 2( the two is right corner beside log ) n + 3n + 8
check this . at using calculator there the
log2(n) = ln(n)/ln(2) = log(n)/log(2)
O( n·log[2](n) + 3n + 8) = n · O(log[2](n)) = n · O(log(n)) = O(n·log(n))
Edit: small fix.
Last edited on
err for me i think the complexity
should be O(n)?
choose the biggest 1 isnt?
okay so i just wrong 1 question? which is last question?
Answer will be :
this ya?
ask u last quesiton for this topic
i press calculator. the value correct
from left to right.
keep bigger
(1/3)n,log (log n), log n,log2 n,√n, n,n!
place correct rite?
O(1/3 · n) ~ O(n)
so it should be right to the left of n, if I got you right.
sorry. because i cnanot type the power there
how did u plot this actualy?
do u have the software intro?
Topic archived. No new replies allowed. | {"url":"http://www.cplusplus.com/forum/beginner/97080/","timestamp":"2014-04-17T01:26:30Z","content_type":null,"content_length":"19533","record_id":"<urn:uuid:351e19c5-e50b-4286-a181-5b637606b464>","cc-path":"CC-MAIN-2014-15/segments/1398223210034.18/warc/CC-MAIN-20140423032010-00247-ip-10-147-4-33.ec2.internal.warc.gz"} |
of the Measures of Gaul
Articles by John Neal
All contents Copyright © 2003 John Neal
In a work entitled Arithmetical Books, written by Augustus De Morgan in 1847
(Augustus De Morgan, Arithmetical Books, sub. From The Invention Of Printing Till The Present Time, Taylor and Walton, London 1847), he neatly identified the primary cause of modern confusion
regarding ancient metrology. He stated:
"There runs through all these national systems a certain resemblance in the measures of length; and, if a bundle of rods were made of foot rules, one from every nation, ancient and modern, there
would not be a very unreasonable difference in the lengths of the sticks."
The difficulties inherent in the clear definition, indeed, the very identification of ancient modules of linear measure is obvious to any who have researched the subject. The reason for this is that
all of the measurement systems of antiquity display a variation in the specific modules, and, as De Morgan observed, many of the modules attributed to separate cultures have very closely related
values. In the course of comparisons this often results in the lesser variation of a distinct measure - that is essentially longer than the measure of comparison - to be shorter in length than the
greater variations of the lesser measure.
Through an extensive and protracted study of ancient metrology I have been able to propose that, in a general sense, the extent of the variations of a particular module is as much as one part in
forty. Yet the Roman scale compared to the common Egyptian measures is forty-nine to fifty, and the common Egyptian is forty-eight to forty-nine of the Greek. They therefore overlap in magnitude at
certain of their variations, but by using the methods that I have discovered we are now able to state which of the modules is the target of the writer, architect or surveyor. This is because the
variations of the individual modules are intentional, exact and mathematical; not, as has been previously believed, due to an off-hand or arbitrary approach to metrology on the part of historical peoples.
Although it may be stated in a general sense that the range of variations of an individual module are in the region of one in forty, certain of the evidence would imply that this fractional range
might be greater; in some cases it can definitely be stated to be the case. This is certainly true of some of the modules termed by historians as Gallic, Belgic, Germanic, Saxon or Northern. As these
are mostly taken to be the same basic measurement, it underlines the lack of common agreement even as to terminology, let alone concurrence on what the precise values are.
When comparing the linear measures of ancient nations, one encounters many modules that are multiples of the basic digits. Here again, De Morgan's statement contains a broad hint at the methods of
analysis to be employed in comparative evaluation of these modules. They should all be compared at the basic foot length; far too often a unit is compared with a foreign multiple-digit (or compound)
module to little avail. For example the foot of one nation is often compared with a cubit, pygon, palimpes or some other unit containing a number of the basic digits other than the sixteen to the
foot, of another nation. This has been so since recorded history, Pliny the Elder noted that the Roman foot was five ninths of a royal Egyptian cubit. This proves to be the case, but only at certain
of its variations and not by the tested method of the addition or subtraction of single fractions, which means that by Pliny's method of comparison, the wrong classifications are linked. (The correct
way to view Pliny's observation would be to compare a two-feet cubit of the Roman feet with the royal Egyptian cubit - the linking fraction is then nine Egyptian to ten Roman. This was the practise
of ancient metrologists who strove to link number by single fractions, multiple fractions such as five-ninths could not be expressed, but would have to be articulated as one half plus one
eighteenth). The importance of this will later be appreciated during the explanation of the difficulties posed in the classification of the Germanic and Belgic modules. The modern search for the
definition of the "Megalithic Yard" has been dogged by such misguided comparisons. This is largely caused by the contemporary custom of comparing ancient measures in terms of the yard, inch and even
more hopelessly with the metre. Comparative ancient metrology would forever have remained impenetrable when expressed in the metric system. This is because there is no sensible sub-division of the
metre that compares to any module in the ancient world, and the extremely close correspondences of certain of the ancient feet cannot be appreciated in terms of the millimetre. This is also true of
the habit of the researchers who use the English language - of expressing the modules in decimal inches. They have carried the subdivision one step too far; the comparisons become much clearer when
expressed in decimal feet.
The variations of the same module.
Very early in my studies of metrology it was realised that in the course of comparisons that it was an advantage to express the ancient modules in terms of the decimal foot; largely because the
English foot is itself an extremely ancient measurement, it therefore lent itself to comparative valuations perfectly. It soon became apparent that the preserved length of the English foot was
pivotal to the whole structure of metrology.
It has often been remarked in recent historical texts that the English foot more closely resembles the Grecian than the Roman. This is quite correct, for after extensive comparisons of the varying
recorded values of the Greek foot, it became obvious that the English foot was one in the series of the variations to which the Greek foot is subject.
It would appear from most of the empirical evidence that the full range of the variations in a single module, here given in terms of the variations of the Greek-English foot, are as follows:
             Least      Reciprocal  Root       Canonical  Geographic
Root         0.98867    0.994318    1          1.0057143  1.0114612
Standard     0.990916   0.996578    1.002272   1.008      1.01376
Table 1
For reasons dealt with elsewhere, the above terminology is used as descriptive in the classification of the variations. It was realised from the beginning that all of these variations were impossible
to express in an ascending order. They must be tabulated in two rows, the fraction linking each of the variations across the rows is 175:176, and each of the values in the top row is linked to the
value directly below as 440:441. "Root" prefixes the descriptive terminology from Least to Geographic in the top row and "Standard" in the bottom row. For example, 1.008 is Standard Canonical and
1.0114612 is Root Geographic etc. As well as these values being measurements, they are also regarded as the formulae by which any other module is classified.
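As a check on these figures, the entire grid can be generated from the Root value and the two fractions alone. The short Python sketch below (the function name is mine) reproduces Table 1; note that Geographic is simply (176/175)² of Root, and each Standard value is 441/440 of the value above it:

```python
from fractions import Fraction

# The two fractions the text identifies as linking the variations
STEP = Fraction(176, 175)  # separates neighbouring columns along a row
ROW = Fraction(441, 440)   # lifts the Root row to the Standard row

def variation_grid(root):
    """Return the Root and Standard rows of a module's five variations,
    in the order Least, Reciprocal, Root, Canonical, Geographic."""
    root = Fraction(root)
    root_row = [root * STEP ** k for k in range(-2, 3)]
    standard_row = [v * ROW for v in root_row]
    return root_row, standard_row

# The Greek/English foot, Root = 1, reproduces Table 1:
root_row, standard_row = variation_grid(1)
print([round(float(v), 7) for v in root_row])
# [0.9886686, 0.9943182, 1.0, 1.0057143, 1.0114612]
print([round(float(v), 7) for v in standard_row])
# [0.9909156, 0.996578, 1.0022727, 1.008, 1.01376]
```

Note that Standard Canonical comes out as exactly 126/125 = 1.008 and Standard Geographic as exactly 1.01376, which is why these two values recur so often in the text.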
It may be argued that the closeness of the disparities would allow for any proposed value to be categorised by using such an inclusive set of variations; the foot length difference between the Root
and Standard classification is only .69 of a millimetre and the difference between Root and Root Canonical is 1.74 millimetres, and the same close relationships occur between all of the
neighbouring values.
This argument is countered in several ways:
Very close and reliably reported estimates of lengths between the lesser and greater values agree to such a level of equivalence with the proposed values from the above tables, often to a single part
in many thousands. Similarly, with the lengthening of the modules, to the level of yards, paces or fathoms etc, the differences become distinctly measurable. It becomes apparent that we are dealing
with absolute theoretical values, therefore, without margins of error.
The fractional differences, which have been identified as separating these variations of the same module, have practical mathematical purposes. This purpose would seem to be the maintenance of
integers in all aspects of a design. Circular designs in particular, because these fractional variations are related to the values of pi (the ratio between the diameter and perimeter of a circle)
that were used in the ancient world. The fractional difference of 175:176 serves the purpose of maintaining integers of the same module in both the diameters and perimeters of circles if the
diameters are multiples of four. For example, in discussing the odometer, Vitruvius stated that a wheel of four feet in diameter travels 12 ½ feet in one revolution. This is a very inaccurate pi
ratio of 25/8 or 3.125, but is a value that is frequently encountered throughout the ancient world. Far more commonly used is the quite accurate 22/7 or 3.1428571, and the difference between 25/8 and
22/7 is exactly 175 to 176. There is thus an eminently practical reason for this fractional separation in the modules: if Vitruvius' carriage wheel were four Roman feet of .96768ft (175) then the
perimeter is indeed exactly 12 ½ feet, but of the related foot of .9732096ft (176).
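The Vitruvius example can be verified directly. The Python sketch below uses exact fractions; the foot values .96768ft and .9732096ft are those quoted in the text:

```python
from fractions import Fraction

pi_vitruvius = Fraction(25, 8)  # 3.125, implied by the odometer passage
pi_common = Fraction(22, 7)     # 3.142857..., the common ancient value

# The two pi values differ by exactly the fraction 175:176
assert pi_vitruvius / pi_common == Fraction(175, 176)

# A wheel of 4 Roman feet of .96768ft (the "175" foot) rolls out, with
# pi = 22/7, exactly 12 1/2 feet of .9732096ft (the "176" foot)
diameter = 4 * Fraction(96768, 100000)
circumference = diameter * pi_common
print(circumference / Fraction(9732096, 10000000))  # 25/2, i.e. 12.5 feet
```

So a measurer using the inaccurate 25/8 ratio on the smaller foot, and one using 22/7 on the related larger foot, both report whole or half-foot figures.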
The fraction 440/441 also serves the purpose of maintaining integers in diameters and perimeters, but of different modules. This will be explained in the text as the examples arise.
Due to the fractional integration of the ancient national systems, once a module has been identified as an absolute value, all others become so expressible. A widely acknowledged example of this
integration was demonstrated by the noted metrologist, Livio Stecchini, as so:
Mycenaean or Italic foot : Roman foot = 15 : 16
Mycenaean or Italic foot : Greek foot = 9 : 10
Roman foot : Greek foot = 24 : 25
Many more, if not all of the national systems are similarly linked, by a single fraction, but have escaped recognition because of the variations of the individual feet. In order to see the
integration of the modules they must be compared at the correct classification, i.e. Root Geographic to Root Geographic etc. Otherwise one is looking at a compound fraction - the fraction that
separates the individual modules of comparison, plus the variational fraction(s), they then exhibit no sensible mathematical relationship. As an example of this extended single fraction connection,
the Persian foot of 1.05ft (half the "cubit of Darius the Great" of 2.1ft) is added to the above list:
Mycenaean or Italic foot : Roman foot = 15 : 16
Mycenaean or Italic foot : Greek foot = 9 : 10
Mycenaean or Italic foot : Persian foot = 6 : 7
Roman foot : Greek foot = 24 : 25
Greek foot : Persian foot = 20 : 21
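This single-fraction network can be checked mechanically. In the sketch below the Root values, in English feet, are those stated or implied in the text - Mycenaean 0.9, Roman 0.96, Greek 1.0, Persian 1.05:

```python
from fractions import Fraction

# Root values in English feet, as stated or implied in the text
feet = {
    "Mycenaean": Fraction(9, 10),   # 0.9
    "Roman": Fraction(24, 25),      # 0.96
    "Greek": Fraction(1),           # 1.0, identical to the English foot
    "Persian": Fraction(21, 20),    # 1.05
}

# Each pairwise ratio reduces to the single fraction given in the list
assert feet["Mycenaean"] / feet["Roman"] == Fraction(15, 16)
assert feet["Mycenaean"] / feet["Greek"] == Fraction(9, 10)
assert feet["Mycenaean"] / feet["Persian"] == Fraction(6, 7)
assert feet["Roman"] / feet["Greek"] == Fraction(24, 25)
assert feet["Greek"] / feet["Persian"] == Fraction(20, 21)
print("all single-fraction links hold")
```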
The potential measures
The above expansion of Stecchini's original three comparative measures can be continued through virtually all of the foot measures that are recorded from antiquity. Seldom are all of the values of
any module found as examples, but where certain of the values are absent, its acknowledged relative may be present. For example, it is one of the better-established relationships that illustrate this
point, that of the Roman to Greek association of 24 to 25. There is no accepted value of the Greek foot recorded (that I thus far know of) at 1.002272ft (1 1/440th), yet 24 to 25 of this value at
.9621818ft is a very well attested length of a Roman foot. And so it is through all of the national modules, enough examples of all of them are given to establish the general theory. Where one
variant is missing in the record, it is inferentially there because of the presence of its relative neighbour.
The Persian foot was selected to illustrate the expansion of the measures from Stecchini's foundation of three related national modules because it is the most important of the Gallic measures, in
spite of the fact that we term the Root value of 1.05ft "Persian":
             Least      Reciprocal  Root       Canonical  Geographic
Root         1.038102   1.044034    1.05       1.056      1.062034
Standard     1.040461   1.046406    1.052386   1.0584     1.064448
Table 2
A fact that I have noted on the nature of cubits is that when they achieve or exceed a length of 1.8ft they become divisible by two, as opposed to the 1½ ft multiplication that is normally associated
with the cubit definition. This is the case with the Persian cubit of Darius; it divides into two feet, each of 1.05 feet. I will refer to it, and its variations found throughout Asia, Europe and North Africa, as Persian because at this value it is one and one twentieth of an English foot, therefore a Root value. All of its variations, up to 1.064448ft, which was termed a Hashimi foot throughout the Arabian Empire, and pied de roi throughout the Frankish Empire, will be appended Persian. For example, this value of 1.064448ft would be termed a Standard Geographic Persian foot (i.e.
1.05 x 1.01376); similarly, a variant of this foot survives in the English system as being 5000 to the English mile of 5280ft. At this value, 1.056ft, it would be termed a Root Canonical Persian foot
(i.e. 1.05 x 1.005714). (That this is indeed the derivation of the English mile will be firmly established by analysing the Gallic leagues).
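The mile derivation claimed here is simple arithmetic; a sketch:

```python
from fractions import Fraction

persian_root = Fraction(21, 20)  # 1.05 English feet
root_canonical = persian_root * Fraction(176, 175)

print(float(root_canonical))  # 1.056 - the Root Canonical Persian foot

# 5000 such feet make the English mile of 5280 English feet exactly
assert 5000 * root_canonical == 5280
```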
This has been my method of classifying the variable modules; in the majority of cases, whichever of the variants comes to correspondence with the English foot, or Root, by a single substantial fraction, the other variants are appended with that nomenclature. For example, a well attested value of the royal Egyptian foot is one and one seventh of an English foot, therefore wherever its variants
be found, be they quite out of context with anything "Egyptian", it will still be termed royal Egyptian because of this association at Root level with the English foot. Classification of measurement
has always been the difficulty with metrology, for example what are universally known as "Roman" feet are often referred to as "Attic", implying a Greek, specifically Athenian origin. Almost
invariably, measures are called after the location in which they are proven to have been used, implying an exclusive origin from that nation or state. This is misleading, as the evidence points to
the fact that all ancient measures form a single organisation and were used universally and concurrently.
Prehistoric Measures of Europe
Persian foot: Ancient Persian measure was based upon the royal Persian cubit of Darius the Great; its length was 2.1ft. The measurement standard later adopted by the maturing Arabic Empire was the
Hashimi cubit, given as 649mm. This is two Persian feet at the Standard Geographic value, as described above. (2.1 x 1.01376) This was the measure adopted by Charlemagne as one of the cementing links
between his Frankish Empire and that of the great Arabian league of states represented by the Caliph Harun-al-Rashid. Many other values as predicted from the above tables have been very accurately
noted, principally by Flinders Petrie, but also recorded by John Greaves as early as 1640. Its variants are known to have been in continuous use from the most ancient records throughout the Middle
East and Egypt.
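The 649mm figure for the Hashimi cubit follows from the grid; a sketch, taking the English foot as exactly 304.8mm (the modern definition, which may differ very slightly from the value Petrie or Greaves worked to):

```python
# Standard Geographic factor from Table 1: (176/175)^2 * (441/440) = 1.01376
factor = (176 / 175) ** 2 * (441 / 440)

hashimi_cubit_ft = 2.1 * factor     # two Standard Geographic Persian feet
hashimi_cubit_mm = hashimi_cubit_ft * 304.8

print(round(hashimi_cubit_mm, 1))   # 648.9 - the reported 649mm
```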
It is widely believed that the Hashimi cubit first made its appearance in Europe upon its adoption by the Franks, in 785 AD, as two pied de roi. But Jacques Dassié has produced
some very convincing proofs that the lineage of the pied de roi in France extends into prehistory.
The Roman Foot: M. Dassié has written extensively upon the Great Gallic Leagues. Miles and leagues were the principal itinerary measures used throughout Roman and, apparently, pre Roman Gaul. The
term "mile" stems from the Latin mille - thousand; the Roman mile, or mille passus, is therefore one thousand paces.
A pace is composed of two steps and is five feet, the Roman mile being 5000 feet; the league is one and a half miles, or 7500 feet (5000 cubits). He notes that the earliest researches into these distances in Gaul were conducted by Bourguignon d'Anville in 1760, who calculated, from the distances between the cities of Gaul, a Roman league that equates to 2211 metres. The Standard Canonical value of the
Roman foot is .96768ft and 7500 of them equal 2212 metres. But the methods employed by him for his estimates of the distance are considered unreliable. However, he knew of this Roman value.
Using more methodical techniques, in 1770, De La Sauvagére arrived at a value of 2225 metres for the Roman league and 7500 feet of the Standard Geographic classification is 2224.75 metres. There is
much to be said in favour of using this Standard Geographic classification as it is often found in the itinerary modules in Britain. Below is the table of the Roman foot variations, the majority of
which have been positively confirmed from analysis of buildings, etc. and surviving measuring devices: (table 3)
             Least      Reciprocal  Root       Canonical  Geographic
Root         0.949121   0.954545    0.96       0.965485   0.971002
Standard     0.951278   0.956714    0.9621818  0.96768    0.9732096
However, M. Dassié is also critical of De La Sauvagére's interpretations, although I suspect he may be vindicated.
Gallic Leagues other than "Roman": Now things become very interesting. Pistollet de Saint-Ferjeux, in 1858, became the first to propose a longer league, of pre-Roman origin. He is stated to have
calculated the league as 2415 metres, and one and a half English miles is 2414 metres. Therefore the basic foot may be stated to have been 1.056ft - which is the Root Canonical value of the Persian
foot and directly related to the original pied de roi by a ratio of 1:1.008.
Aurés, the author of 14 memoirs concerning the Gallic league, in 1865 proposed a value of 2436 metres. Liévre later confirmed this distance in 1893; he described the methods he used in this thorough
determination following the route from Tours to Poitiers, a distance of 102.3 kilometres. He consulted an ancient document known as the Table of Peutinger, a 13th century copy of a map of the Roman
Empire, which stated that this distance was 42 leagues, giving the same length for the league as Aurés' 2436 metres. This is within 2½ metres of the original reckoning of the pied de roi adopted by Charlemagne.
This may be regarded as total accuracy; the margin of error over the entire distance of nearly 64 miles is within 100 metres, the original positions of the milliary posts being debatable. Due to the
fact that this same level of accuracy is displayed by distances between all of the major cities of Gaul, as measured by Dassié from information given on the Roman milliary columns, it proves two
major claims. Firstly the extreme longevity of measures - defined in remote antiquity and surviving into modern times, secondly the accuracy of the ancient surveyors being equal to those of the
modern. It also demonstrates how the extremely close values of the original feet become distinctly measurable when reaching these itinerary multiplications. The league of the Persian foot that yields the 1½ English mile is 2414m, and the league of the Persian foot 1.008 times greater, the pied de roi, is about 20 metres longer (2414 and 2436 metres respectively).
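All the league lengths cited in this section follow from 7500 feet of the respective values. A sketch, with the foot values from the tables above and 1 English foot = 0.3048m exactly (the surveyors' reported figures are noted in the labels):

```python
FT_TO_M = 0.3048  # exact modern definition of the English foot

league_feet = {
    "Roman, Standard Canonical (d'Anville, ~2211m)": 0.96768,
    "Roman, Standard Geographic (De La Sauvagere, 2225m)": 0.9732096,
    "Persian, Root Canonical (Saint-Ferjeux, 2415m)": 1.056,
    "Persian, Standard Geographic (pied de roi, ~2436m)": 1.064448,
}

for name, foot in league_feet.items():
    league_m = 7500 * foot * FT_TO_M
    print(f"{name}: {league_m:.1f} m")
```

The computed values come out at roughly 2212, 2225, 2414 and 2433 metres, each within a few metres of the surveyed figures, in keeping with the accuracy the text claims for the milliary distances.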
Dassié is something of a pioneer regarding the methods of accurate identification of these itinerary distances. He extensively uses aerial photography in conjunction with global positioning
satellites, then verifies these distances on large-scale maps, making all due allowances for curvature etc. He has proven that the Roman inscriptions relating to these itinerary distances are not all
recorded in terms of the Roman league. As we have seen, he has accurately identified a variety of distances that were termed "leagues", because they have been repetitively found in the course of many
hundreds of comparisons. There are more.
The Belgic Foot: In 15 BC, Nero Claudius Drusus, brother of Tiberius, became governor of the states north of Italy. Gaul had three distinct divisions; they encompassed all of France and overlapped
what are now parts of Spain, Germany and the Low Countries. He adopted the foot of the Germanic Tungri as an exchange linear standard with these northern territories and Rome. Their administrative
centre was at Tongeren, in what is now Belgium, where their standards were kept. The reason for this adoption is that the Belgic foot was 18 digits to the 16-digit Roman foot. This was a unit already familiar to the Romans through the similarity of the Greek system, in which the 18-digit module was termed a pygme.
Once again, the acceptance of the 9 to 8 ratio with the Roman is not, strictly speaking, quite correct. This is because, the evidence suggests, they achieve this correspondence at the wrong
classifications, as a perusal of the potential values of the Belgic (also called Drusian) foot, depicted below, illustrates:
             Least      Reciprocal  Root       Canonical  Geographic
Root         1.059288   1.065341    1.071429   1.077551   1.083708
Standard     1.061695   1.067762    1.073864   1.08       1.086171
Table 4
The Root value of the Roman foot is .96ft; 9 to 8 of this length is 1.08ft, which is a Standard Canonical value. This classification shift is identical to that of Pliny's comparison, previously mentioned, of the Roman module with the royal Egyptian: the 5 of the ratio belongs to the Root Roman foot, the 9 to the Standard Canonical royal Egyptian cubit. This comparative integral classification shift is quite
common in the science of metrology and hints at depths to the study, which have yet to be fully understood.
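Both classification shifts described here check out exactly. A sketch, with the Root values in English feet as given elsewhere in the text (Roman 24/25, Belgic 15/14, royal Egyptian cubit 12/7):

```python
from fractions import Fraction

roman_root = Fraction(24, 25)  # .96 English feet
standard_canonical = Fraction(441, 440) * Fraction(176, 175)  # the 1.008 step

# Drusus' exchange: 9/8 of the Root Roman foot lands on the Standard
# Canonical Belgic foot (Root 15/14 English feet), i.e. 1.08ft
belgic_root = Fraction(15, 14)
assert roman_root * Fraction(9, 8) == belgic_root * standard_canonical

# Pliny's 5:9: 9/5 of the Root Roman foot lands on the Standard Canonical
# royal Egyptian cubit (Root 12/7 English feet), i.e. 1.728ft
egyptian_cubit_root = Fraction(12, 7)
assert roman_root * Fraction(9, 5) == egyptian_cubit_root * standard_canonical

print("both comparisons land on the Standard Canonical classification")
```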
A perusal of the above "Germanic" table in comparison to the previous "Persian or Gallic" table perfectly illustrates the points raised in the opening paragraph of this article, concerning the lesser values of the above table overlapping the greater values of the previous one. In the past, this made the accurate identification of the intended module a virtual impossibility, leading people to
believe the subject is more complicated than it actually is. First, one must identify what is the Root value of a module by comparing it to the English foot; as stated the Persian is 1 1/20th to the
English and the Germanic - Belgic is 1 1/14th. The use of these units may be fully substantiated by analyses of the modules used in the construction of ancient monuments, one of the functions of
which is the embodiment of standards; they are constructed with such extreme accuracy that it leaves little doubt as to this purpose.
This overlapping of values in the variations of the distinctly separate feet may explain the confusion regarding metrology in pre revolutionary France - because the accepted value of the Hashimi
cubit, two original pied de roi, is a maximum, or Standard Geographic classification, yet the values used in France at that time often exceeded this length, where they strayed into lesser values of
the Belgic standards. This unsatisfactory definition of standards was the primary reason that led to the development of the metre, a length designed for universal scientific agreement.
Had the French, or the scientific community at large, realised that three feet of the Standard Geographic Belgic foot was more accurately a metre than that which they eventually devised after a
hundred years of experiment, much expense would have been spared. But, the Belgic foot would be taken from the table below:
│        │  Least  │Reciprocal│   Root  │Canonical│Geographic│
│Root    │1.067762 │ 1.073864 │ 1.080000│ 1.086171│ 1.092378 │
│Standard│1.070189 │ 1.076304 │ 1.082455│ 1.088640│ 1.094861 │
Table 5
If the exchange rate of Drusus is taken literally, and the tables are arranged so that they begin at Root 1.08ft in order for them to be a ratio of 9 to 8 with all of the values of the Roman feet,
then the below is an extension of the original table (4). In certain cases therefore, there are more than 10 potential variations of the same measure:
1.059288  1.065341  1.071429  1.077551  1.083708
1.061695  1.067762  1.073864  1.080000  1.086171  1.092378
          1.070189  1.076304  1.082455  1.088640  1.094861
The duplicated values - 1.067762, 1.073864, 1.080000 and 1.086171 - appear in both tables 4 & 5, illustrating how in certain cases the directly related measures extend beyond the classifications of the ten values.
That these extended values are valid is confirmed by the fact that an additional value of the Gallic league proposed by Dassié is one of 2490m, and 7500 Belgic feet of (1.08864ft a/a) is 2488.6m.
This is a doubly interesting value because, as stated, three of these Belgic feet from the above at 1.094861ft would be the metre that is more precise than that devised by the pre revolutionary
French from their measurement of the meridian degree. Many other exact examples of both these modules exist.
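Both figures are easy to verify. The sketch below uses the modern conversion of 0.3048 m to the foot (an assumption of convenience, treating the English foot as identical to the international foot):

```python
FT_TO_M = 0.3048  # modern international foot, assumed equal to the English foot

# Dassie's league of 2490 m against 7500 Belgic feet of 1.08864 ft.
league_m = 7500 * 1.08864 * FT_TO_M
print(round(league_m, 1))  # -> 2488.6

# Three Standard Geographic Belgic feet against the metre.
metre_candidate = 3 * 1.094861 * FT_TO_M
print(round(metre_candidate, 4))  # -> 1.0011
```

Three such feet miss the modern metre by a little over a millimetre, which is the sense in which the claim above should be read.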
The Northern or Saxon foot: This is yet another quite separate module that is often confused with the Gallic-Persian and Belgic-Germanic measures. Close variants of this module are found throughout
Eurasia and North Africa. Its rating in England was exactly 1.1 feet and was used almost exclusively for land measurement. The English acre was a strip of land one furlong by one chain; 660 x 66
English feet and the rod or perch was 16.5 feet. In terms of the Saxon foot of 1.1ft these distances make better sense; the furlong is a 600ft stadium, as in all other cultures, the chain is the
sexagesimal 60ft and the perch is 10 cubits. Although this measure seems quite straightforward at 1 1/10th English feet, its derivation is rather more complex.
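The acre units above resolve neatly in Saxon feet; a short rational check (a sketch, not from the source) makes the point:

```python
from fractions import Fraction

SAXON_FT = Fraction(11, 10)              # 1.1 English feet, the English rating

assert 660 / SAXON_FT == 600             # furlong = a 600 ft stadium
assert 66 / SAXON_FT == 60               # chain = the sexagesimal 60 ft
assert Fraction(33, 2) / SAXON_FT == 15  # perch 16.5 ft = 10 cubits of 1.5 Saxon ft
print("acre units rational in Saxon feet")
```

Using `Fraction` rather than floats keeps the divisions exact, so each assertion is an identity rather than an approximation.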
It is the measure that is broadly known as Sumerian, although it must be stressed that all of these nationalistic terminologies as applied to the modules are a modern convention, about which, there
is little universal agreement. These Sumerian values are quite accurately represented by examples of every classification. German archaeologists, principally directed by Robert Koldeway, worked
intensively in Mesopotamia for many years in the early 20th century. Between them they closely identified the foot lengths represented in the table below:
│        │  Least  │Reciprocal│   Root  │Canonical│Geographic│
│Root    │1.084711 │ 1.090909 │ 1.097143│ 1.103412│ 1.109717 │
│Standard│1.087176 │ 1.093388 │ 1.099636│ 1.105920│ 1.112240 │
Table 6
As can be seen, none of these values is exactly 1.1ft. Extraordinarily close is the Standard value at 1.099636 and this was rendered into rational correspondence with the English foot by the addition
of another repetitively found fraction, that of 3025 to 3024. There can be little doubt that this convention was observed by the ancient metrologists, so often is it found. Possibly the best example
is from that of the most studied and clearly defined ancient measurement - the royal Egyptian cubit. For simplicity, the side of the Great Pyramid is 756ft or 440 royal cubits of 1.71818ft,
(Standard); an extremely close value to this is the often-quoted 20.625 inches, or 1.71875ft, and the difference between them is 3024 to 3025. Although the difference at the cubit length is minute, it
would add exactly .25 of a foot to the side of the pyramid were the greater length used; reiterating the statement that the greater the distance measured, the clearer these distinctions become.
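The arithmetic can be checked exactly with rational numbers. A sketch using only the values stated above:

```python
from fractions import Fraction

std_cubit = Fraction(756, 440)   # Standard royal cubit, 1.7181818... ft
quoted    = Fraction(55, 32)     # 20.625 inches = 1.71875 ft

# The two values differ by the recurrent 3025:3024 fraction...
print(quoted / std_cubit)        # -> 3025/3024

# ...and at 440 cubits the greater value adds exactly 1/4 foot to the side.
print(440 * quoted - 756)        # -> 1/4
```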
The operative perception in these observations is that these "Standard" values of the modules are rendered into rational numbers in terms of the English foot, and from a protracted study of metrology
my conclusion is that the English foot is the base one, or datum, from which all metrological calculations stem. This minute adjustment is observable from geographic distances to individual modules.
Once again, it is often encountered as an adjustment to the pi value - it is the difference between the universal 22/7 or 3.1428571 and 864/275 or 3.1418181, (the pi value deduced by Fibonacci). It
was not ignorance of the nature of pi in the ancient world, but conventions of convenience that governed the choice of the values; they were used in the maintenance of integers or rational numbers.
Just one other example will be given because it is pertinent to the next module to be considered. The British exchange of the variable Spanish vara was that of Madrid, taken as 2.74285ft. As an
absolute value, this may be calculated as (Root) 2.7428571 and 441 to 440 of this is (Standard) 2.74909ft - (the vara of Mexico is given as 2.749ft). 3025 to 3024 of this Standard value is exactly
2.75ft. Thus, all of the Standard values of the modules, such as the Saxon foot, could be made numerically rational in terms of the English foot.
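Both the pi relationship and the vara chain reduce to the same 3025:3024 fraction. In the sketch below, the fractional forms 96/35 for the Root vara and 864/275 for Fibonacci's pi are exact restatements of the decimals in the text, not quotations from it:

```python
from fractions import Fraction

# The two pi conventions differ by the 3025:3024 fraction.
print(Fraction(22, 7) / Fraction(864, 275))   # -> 3025/3024

# Vara chain: Root -> Standard (x 441/440) -> rational 2.75 ft (x 3025/3024).
root_vara = Fraction(96, 35)                  # 2.7428571... ft
standard  = root_vara * Fraction(441, 440)    # 2.7490909... ft (vara of Mexico)
print(standard * Fraction(3025, 3024))        # -> 11/4, exactly 2.75 ft
```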
The Spanish Vara: Gallia Belgica, northern Gaul, was of mixed Celtic and Germanic nations and Gallia Lugdunensis, central Gaul, principally Celtic, is the general area associated with the above
modules. Southern Gaul - Gallia Aquitania, was a mix of Celtic and Iberian nations and the measurement most associated with Iberia is the vara. The equivalent of the vara would be 2 ½ feet of the
above Sumerian table from which the Saxon measures derived, and this step or half pace was a module that was used widely in northern Gaul and Britain. That it is a step may be established by the fact
that it has been demonstrated to have a 40 digit sub division. Its range of values, many of which survived into modern times as variations of the vara, are as follows:
│        │  Least  │Reciprocal│   Root  │Canonical│Geographic│
│Root    │2.711777 │ 2.727273 │ 2.742857│ 2.758531│ 2.774294 │
│Standard│2.717940 │ 2.733471 │ 2.749091│ 2.764800│ 2.780599 │
Table 7
However, in Iberia these modules had a three feet sub division, so may more properly be called a yard. Additionally, each foot was divided into 12 pulgadas, or inches rather than digits. Was this
division of the vara adopted at the time of the Roman occupation in order to bring the module into line with Roman subdivisions? Probably not, as the foot of the vara integrates very well with many
other modules - it is exactly 20 to 21 of the Roman. An alternative division of the vara was one of four palmos, which far better suits the northern interpretation of 2 ½ feet divisions because it is
forty digits. This is quite common in ancient metrology; subdivisions of the modules being dictated by convenience; for example, if a module needed to be one and one third of the constituent foot,
then it must be sub divided into 12 inches because one cannot have an integral third of 16 digits. I term such modules nodes, where a multiple of one unit exactly corresponds with a different
multiple of another.
This correspondence is common at itinerary lengths; for example, a widely used Arabic measurement was a Black cubit, first recorded in Egypt, with a value of 1.77408ft. There were 4000 cubits to an
Arabic mile that was also exactly 7000 Greek feet of 1.01376ft. The league in Iberia was 5000 varas, roughly 2.6 miles or 4.2 kilometres. This length is exactly 2/3 of the Egyptian schoenus of 12000
royal Egyptian cubits. The Spanish also used a league that exactly corresponds to the ancient Egyptian schoenus. It was known as the legua géographique of 7500 varas; taken to be 17 ½ to the degree of the meridian, it was made obligatory for use on the scales of maps. This is truly remarkable because Phillip V passed the statute as early as 1718, a full 70 years before the French measured the
degree to similar accuracy; and this because at certain of its known values it is indeed 17 ½ to the meridian degree. Did the Spanish pre-empt the French in their laborious determination of this
length? Or did the Spanish refer to traditions that stretched back to ancient Egypt?
Five palmos of four to the vara is three royal Egyptian feet. There is little doubt that we are viewing a single association of international measures whose origins or dispersal have nothing to do
with the Roman Empire.
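These node correspondences are exact, as a rational check confirms. The sketch below restates the decimals as fractions (the Root royal Egyptian foot as 8/7 English feet, the Standard Geographic Greek foot 1.01376 as 3168/3125, the vara as 96/35); these fractional forms are inferred from the values quoted here:

```python
from fractions import Fraction

greek_geo   = Fraction(3168, 3125)            # 1.01376 ft
black_cubit = Fraction(7, 4) * greek_geo      # 1.77408 ft
royal_foot  = Fraction(8, 7)                  # 1.1428571... ft
vara        = Fraction(96, 35)                # 2.7428571... ft

# Arabic mile: 4000 Black cubits = 7000 Greek feet.
assert 4000 * black_cubit == 7000 * greek_geo

# Five palmos (quarter varas) = three royal Egyptian feet.
assert 5 * vara / 4 == 3 * royal_foot

# Iberian league of 5000 varas = 2/3 of the 12000-cubit schoenus.
schoenus = 12000 * Fraction(3, 2) * royal_foot
assert 5000 * vara == Fraction(2, 3) * schoenus
print("all nodes exact")
```

That the Black cubit is exactly 1 ¾ Standard Geographic Greek feet is what makes the 4000:7000 mile node close without remainder.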
The Exemplar of the Temple: The first millennium BC is remarkable for mass migrations and wars on what must be a previously unprecedented scale, culminating in the subjugation of Gaul and Britain by
the Romans. At the time of the Roman invasion of Gaul the Gallic nations were under extreme pressure from the insurgent Helvetians. In 58 BC, Caesar slaughtered 30000 of them at the battle by the
Arar River, later that same year some 150000 were massacred before the remnants of their nation retreated back to Switzerland. In the course of his 8 years Gallic campaigns and his expeditions
against the Germanic tribes across the Rhine, upwards of a further million died at the hands of the Romans. Prior to this the Greeks had migrated and slaughtered throughout the world known to them,
such unsustainable expansions with casualties of modern war proportions seem to have typified the staggered closing of what we term the Bronze Age.
Even before this it would seem that things were very different and far better organised. All of the disparate nations of Gaul, Germany, Scandinavia, Britain and Iberia would seem to have been under
the sway of a unified culture. Even this view is limited as to area, but let us stay within the bounds of the sceptic. Roughly, between 5000 and 1500 BC, throughout this area the megalithic monuments
were erected, the most impressive of these megaliths are clustered in Britain and Brittany. There is such uniformity in design and orientation of these great stones they had to be products of a
single culture that governed this enormous territory.
At the time of the Roman invasion of Britain, Gallic and Belgic tribes inhabited the southern part of the country; overall, the culture was Celtic whose traditions were maintained by the Druids.
Although the Druids conducted ceremonies at megalithic sites, they had had little to do with their erection or design. They had been abandoned as inhabited college sites at least a millennium before
the arrival of the Romans; the destruction, and then the deterioration, of Stonehenge into a ruin was well under way, and the arts associated with monumental masonry long disused.
Among its other functions, Stonehenge is the monument that best exhibits the role of the temple as a repository of measure. It evolved into its final form over a period of around 1500 years, begun in
the fourth millennium BC and completed in the second. Through metrology it may be established that this development adhered to a single theme. Those from the scientific community who have evaluated
the geometrical-astronomical orientation of the Henge have established that its precise latitude north governed the foundation of the station stone rectangle: by sighting along the lesser sides they are oriented to the most northerly sunrise and the most southerly sunset, and by sighting along the greater sides of the rectangle they are aligned upon the most northerly moonset and, in the reverse direction, the most southerly moonrise. Furthermore, by sighting along the diagonal in a westerly direction it is aligned to the most southerly moonset. If Stonehenge were sited just a
few miles north or south of its location, the rectangle would lose its integrity as to this purpose and become a parallelogram.
Most often in ancient structures an inbuilt proportion determines its module of construction. Significantly, the Stonehenge rectangle is Pythagorean in as much that two adjacent sides and the
diagonal are in the proportion of 5, 12 and 13; one unit of these integers is a length of ten Standard Geographic Belgic two feet cubits - 21.72342ft (table 4), or 8 steps of 2 ½ of these feet.
Overall, the shorter side is 100 Belgic feet, the longer side 240 and the diagonal 260. This rectangle was one of the earliest features of the whole complex; its diagonal length is identical to the
diameter of the Aubrey hole circle upon which it is founded. Originally there were four station stones of which only two remain, it is estimated that that they were placed as early as 3000 BC and it
was a full 1200 years from this date that the final phase, the erection of the sarsen circle, was complete. Yet, by means of the metrological evidence, all of the phases may be demonstrated to form
an elegant and coherent whole.
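The numbers can be verified directly. A sketch, taking the Standard Geographic Belgic foot of 1.086171 ft from table 4:

```python
import math

BELGIC_SG = 1.086171                 # ft, Standard Geographic Belgic foot
unit = 10 * (2 * BELGIC_SG)          # ten two-foot cubits = 21.72342 ft

short, long_side, diagonal = 5 * unit, 12 * unit, 13 * unit

# 5-12-13 is Pythagorean, so the diagonal closes exactly.
assert math.isclose(math.hypot(short, long_side), diagonal)

# The same lengths read as 100, 240 and 260 Belgic feet.
assert math.isclose(short, 100 * BELGIC_SG)
assert math.isclose(long_side, 240 * BELGIC_SG)
assert math.isclose(diagonal, 260 * BELGIC_SG)  # Aubrey circle diameter
print(round(diagonal, 5))  # -> 282.40446 ft
```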
No more ingenious device could be conceived for the preservation and exemplification of metrological standards than the simple outline of the sarsen circle and the station stone rectangle.
As we have seen, the station stone rectangle is laid out with values of the Standard Geographic variation of the modules; therefore, one would seek to find other modules throughout the monument at
this classification, which proves to be the case.
The most important component of the massive stone circle is the surmounting ring of the lintel stones. Unlike the rough-hewn sarsen uprights, the lintel circle is carefully dressed; it may therefore be measured with accuracy. There were thirty uprights and thirty lintels and the lintel width was one thirtieth of the circle's outer diameter; as measures in ancient monuments are invariably linked to
ratios, one should seek the modules of the design in these in-built numbers. Taking as the inner diameter of the sarsens the length from Flinders Petrie's careful examination, it is 100 Roman feet of
.9732096ft, which is the Standard Geographic classification (table 3). The outer diameter may therefore be computed to be 100 common Greek feet of 1.0427245 feet, also of the Standard Geographic. The
width of the lintel, at 3.4757485ft is a double royal Egyptian cubit of the same classification; this could also be legitimately termed a three feet "royal" yard.
Immediately one can recognise the integrated nature of ancient metrology and also gain an insight into its purpose - which appears to be the maintenance of integers in all aspects of a design. This
integration continues through all the features of this simple design - the outer edge of the lintel stones, or one thirtieth of the perimeter, is exactly 10 Belgic feet of 1.092378ft the "Drusian
foot" at 9/8ths of the Roman (it is one pertica or perch taken from extended table 4). However, this is a Root Geographic value; which is 440 to 441 of the Standard Geographic. These niceties can be
appreciated when it is realised that one is dealing with absolute values. Another way to interpret the module of the outer perimeter is to divide the overall length by a number that is traditionally
associated with circular design. In this case the canonical 360 - which yields the value of the Mycenaean foot of .910315ft, also a Root Geographic classification. (This value of the Mycenaean foot is that tendered by Stecchini from the 100 feet diameter of the Grave Circle adjacent to the Lion Gate of Mycenae).
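Taken together, these lintel relationships form a closed arithmetical scheme that can be checked exactly. The sketch below works in rationals, restating the decimals as fractions (Roman foot 24/25, common Greek 36/35, royal Egyptian 12/7 and common Egyptian 48/49 English feet, with 3168/3125 for the Geographic factor 1.01376); these fractional equivalents are inferences, not quotations from the text:

```python
from fractions import Fraction

GEO = Fraction(3168, 3125)             # the Standard Geographic factor, 1.01376
PI  = Fraction(22, 7)                  # the conventional pi of the design

inner  = 100 * Fraction(24, 25) * GEO  # 100 Roman feet = 97.32096 ft
outer  = inner * Fraction(15, 14)      # follows from lintel = outer / 30
lintel = outer / 30

assert lintel == 2 * Fraction(12, 7) * GEO     # a double royal cubit
assert outer  == 100 * Fraction(36, 35) * GEO  # 100 common Greek feet
assert inner  == 98  * Fraction(48, 49) * GEO  # 98 common Egyptian feet
assert outer  == 105 * Fraction(48, 49) * GEO  # 105 common Egyptian feet

# Outer perimeter / 30 = 10 Belgic feet of the Root Geographic class.
belgic_root_geo = Fraction(27, 25) * GEO * Fraction(440, 441)  # 1.0923781... ft
assert outer * PI / 30 == 10 * belgic_root_geo
print(float(lintel))   # the lintel width, 3.4757485714... ft
```

Every assertion holds as an exact rational identity, which is why the same physical dimensions can be read in integers of several different feet.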
Only one module fits all of the lintel dimensions at the same Standard Geographic Classification, and this is the common Egyptian foot of .993071ft (6/7ths of the royal Egyptian), at 98 in the inner
diameter, 105 in the outer, 308 in the inner perimeter and 330 in the outer. Feet that fit the dimensions in integers:
The inner diameter 100 Roman feet of .9732096ft
98 Common Egyptian feet of .993071 ft
96 Greek feet of 1.01376 ft
84 Royal Egyptian feet of 1.15858 ft
The outer diameter 105 Common Egyptian feet of .993071 ft
100 Common Greek feet of 1.0427245ft
96 Belgic feet of 1.086171 ft
90 Royal Egyptian feet of 1.158583 ft
Many other modules fit these dimensions but they may all be demonstrated to be compound measures, by which it is meant they are multiples of the 6 types of feet as above, or digit multiples based
upon these feet. Of particular interest is the Belgic foot regarding the expansion of the monument's proportions beyond the sarsen circle.
It is illustrated in the - not strictly to scale - diagram below. The module taken from the numerical proportion is a cubit of two of these Belgic feet - there are 48 in the outer sarsen diameter and 50 in the shorter side of the station stone rectangle; therefore one cubit separates the two, above and below the circle:
Many other modules fit this rectangle in integers but the above is the most satisfactory and is the probable design intention, being x 10 the rectangle and diagonal proportion of 5,12,13.
Thus we have seen that the measures of Germania, Gaul and Iberia are identical to the units used in Southern England during pre history, and had been anciently used here long before the intrusions of
those who used them at the time of the Roman invasion.
- - - - -
Copyright © 2003 John Neal
Ancient Measurement Systems
- Their fractional integration
From the beginning of the 20th century the study of ancient metrology has faded into the background of academic research. Before this, it was a topic of lively debate among the scientific and
archaeological communities. It was considered important to clearly define the ancient modules in order to interpret architectural intentions in the ancient monuments, understand itinerary distances,
the statements of the classical writers and even biblical descriptions that abound with reference to measures.
Interest in the subject was briefly revived during the 1960s by the claim of Alexander Thom who asserted that the builders of the Megalithic structures had consistently used a common unit of measure
throughout the British Isles and Brittany. The claim, to this day, has been neither confirmed nor disproved. This is entirely due to the fact that there is a prevailing ignorance of the subject of
ancient metrology. The statisticians who number-crunched the Megalithic data, Thom, Broadbent, Kendall, Freeman and dozens of highly qualified authors, simply failed to recognise the modules that
their analyses produced. None of them had made a detailed study of ancient systems and methods of mensuration.
At first glance, the subject seems formidable, causing one learned academic to exclaim that "Ancient metrology is not a science, it is a nightmare". It may seem so: there were a great many modules in common use - various spans, feet and digit multiples such as the pygme, remen and cubit; feet multiples such as the step, yard, pace, fathom, pertica and the various bracchia; the intermediate measures of the various furlongs and stadia; and the itineraries of miles, leagues, schoenus etc. However, the approach to the subject may be simplified by considering the basic measure of each system, which is invariably the foot. Augustus De Morgan gave a broad hint at this method of assessment in 1847, when he stated:
"There runs through all these national systems a certain resemblance in the measures of length; and, if a bundle of rods were made of foot rules, one from every nation, ancient and modern, there
would not be a very unreasonable difference in the lengths of the sticks."
It was simply by comparing the foot lengths of the different national systems that a very elegant order was perceived, leading to the conclusion that all of these "national systems" formed a single
organisation. It has become customary for us to name a unit from the society in which it is found to have been used, but most often, the bureaucracies have adopted particular units during the
historical period. Obviously a universal system had been fragmented into these various cultures. The difficulties of research are compounded by the lack of agreement as to nomenclature, for example,
what are universally known as Roman feet are often called Attic, and at one of its variations, Pelasgo.
Which brings us to the most confusing of all the aspects of metrology - the variations. In all ancient societies, there is a quite broad variation throughout the modules, which has been wrongly
regarded as slackness in the maintenance of standards. It would seem that the range of variations and the fractions by which they vary, are not merely similar from nation to nation, but identical.
Once these fractions - and the simple mathematical reasons for them - had been established, it became possible to then classify these dissimilarities of the same module. The feet of the various
national standards could then be compared at their correct relationships. By seeing the fractional integration through the basic foot structure, many modules could be discarded for comparative
purposes, until very few Root feet remained. In fact, there are probably only twelve distinct feet from which all other "feet" are extrapolated. For example the Pythic foot is a half Saxon cubit, and
many modules attributed to different cultures are in fact variations of the same basic foot, such as Saxon and Sumerian, or pied de roi and Persian.
These feet in ascending order, in terms of the English foot are as follows:
• Assyrian .9ft
When cubits achieve a length of 1.8ft such as the Assyrian cubit they are divisible by two, instead of the 1 ½ ft division normally associated with the cubit length. Variations of this measure
are distinctively known as Oscan, Italic and Mycenaean measure.
• Iberian .9142857ft
This is the foot of 1/3rd of the Spanish vara, which survived as the standard of Spain from prehistory to the present.
• Roman .96ft
Most who are interested in metrology would consider this value to be too short as a definition of the Roman foot, but examples survive as rulers very accurately at this length.
• Common Egyptian .979592ft
One of the better-known measures, being six sevenths of the royal Egyptian foot.
• English/Greek 1ft
The English foot is one of the variations of what are accepted as Greek measure, variously called Olympian or Geographic.
• Common Greek 1.028571ft
This was a very widely used module recorded throughout Europe; it survived in England at least until the reforms of Edward I in 1305. It is also the half sacred Jewish cubit upon which Newton
pondered and Berriman referred to as cubit A.
• Persian 1.05ft
Half the Persian cubit of Darius the Great. Reported in its variations throughout the Middle East, North Africa and Europe, survived as the Hashimi foot of the Arabian league and the pied de roi
of the Franks.
• Belgic 1.071428ft
Develops into the Drusian foot or foot of the Tungri. Detectable in many Megalithic monuments.
• Sumerian 1.097142ft
Perhaps the most widely dispersed module of all, recorded throughout Europe, Asia and North Africa, commonly known as the Saxon or Northern foot.
• Yard and full hand 1.111111ft
This is the foot of the 40 inch yard widely used in mediaeval England until suppressed by statute in 1439. It is the basis of Punic measure and variables are recorded in Greek statuary from Asia.
• Royal Egyptian 1.142857ft
The most discussed and scrutinised historical measurement. Examples of the above length are plentiful.
• Russian 1.166666ft
One half of the Russian arshin, one sixth of the sadzhen. One and one half of these feet as a cubit would be the Arabic black cubit, also the Egyptian cubit of the Nilometer.
Variants and variables in the above descriptions are in no wise arbitrary regional fluctuations but follow a distinct discipline. The extent of the variations covers a range of values that amounts to
about one fortieth part. Immediately one can see one of the prime difficulties in the identification of ancient modules, because some of the distinct foot values are related by lesser fractions; the
Roman is 48 to 49 of the common Egyptian and the common Egyptian is 49 to 50 of the Greek/English. They therefore overlap at certain of their variations, in the course of comparisons this often
results in the lesser variation of a distinct measure - that is essentially longer than the measure of comparison - to be shorter in length than the greater variations of the lesser measure.
Metrologists continually confuse the Belgic, Frankish and Saxon/Sumerian, the latter has also been appended Ptolemaic. But, the differences become distinctively identifiable at the lengths of the
pertica, chain, furlong, stadium, mile etc.
It would appear from most of the empirical evidence that the full range of the variations in a single module, here given in terms of the variations of the Greek-English foot, (the English foot being
one of the series of the Greek foot) are as follows:
│        │ Least  │Reciprocal│  Root  │Canonical│Geographic│
│Root    │ .98867 │ .994318  │   1    │1.0057143│1.0114612 │
│Standard│ .990916│ .996578  │1.002272│  1.008  │ 1.01376  │
The above terminology is used as descriptive in the classification of the values. It was realised from the beginning that all of these variations were impossible to express in an ascending order.
They must be tabulated in two rows, the fraction linking each of the variations across the rows is 175:176, and each of the values in the top row is linked to the value directly below as 440:441.
"Root" prefixes the descriptive terminology from Least to Geographic in the top row and "Standard" in the bottom row. For example, 1.008 is Standard Canonical and 1.0114612 is Root Geographic etc.
As well as these values being measurements, they are also regarded as the formulae by which any other module is classified. That is, any of the listed feet could occupy the Root position in the above
table, and all of its variants would be subject to the multiplications of the tabulated values. As an example, the Persian foot when subjected to this process:
│        │  Least │Reciprocal│  Root  │Canonical│Geographic│
│Root    │1.038102│ 1.044034 │  1.05  │  1.056  │ 1.062034 │
│Standard│1.040461│ 1.046406 │1.052386│ 1.0584  │ 1.064448 │
Thus, whichever of the measures shows a direct fractional link to the English foot, such as the one and one twentieth, as above, is Root, then the maximum value of 1.064448ft is both the Hashimi foot
and the original pied de roi, both could be classified as a Standard Geographic Persian foot (1.05 x 1.01376). Or the given length of the Mycenaean foot at .910315ft could be classified as a Root
Geographic Assyrian foot (.9 x 1.0114612ft) and so forth. Then, whenever one is making cultural comparisons of modules, the correct classification must be selected, Root Reciprocal to Root Reciprocal
etc. otherwise one is looking at a compound fraction, i.e. the fraction separating the distinctive foot plus the fraction of the variation(s), which may then show no apparent rational relationship.
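The classification procedure described here is mechanical enough to express as a short function. The sketch below is an illustrative implementation (the function name and structure are mine, not the author's); the column step is 176:175 and the row step 441:440, so the Geographic column is (176/175)² of Root:

```python
from fractions import Fraction

COLUMNS = ("Least", "Reciprocal", "Root", "Canonical", "Geographic")

def classify(root):
    """Expand a Root foot value into its ten classified variants."""
    row = [Fraction(root) * Fraction(176, 175) ** k for k in range(-2, 3)]
    return {
        "Root":     dict(zip(COLUMNS, row)),
        "Standard": dict(zip(COLUMNS, (v * Fraction(441, 440) for v in row))),
    }

persian = classify(Fraction(21, 20))             # Root Persian foot, 1.05 ft
print(float(persian["Standard"]["Geographic"]))  # Hashimi foot / pied de roi

assyrian = classify(Fraction(9, 10))             # Root Assyrian foot, .9 ft
print(float(assyrian["Root"]["Geographic"]))     # the Mycenaean .910315 ft
```

Both of the worked classifications in the text drop out of the same two fractions, which is the sense in which the table is a formula rather than a list.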
These fractional separations of the rows and columns have a practical purpose; they are designed to maintain integers in circular structures and artefacts such as storage and measuring vessels. If a
diameter is a multiple of four or a decimal, then using 22/7 as pi results in a fractured-number perimeter. Therefore 3.125, or 25/8, would be used to give an integer or rational fraction for the
perimeter. Accuracy is maintained by using the longer version - by the 176th part - of the measure in the perimeter; this is because 25/8 differs from 22/7 as 175 to 176. Similarly, the fraction 441
to 440 maintains integrity of number in diameter and perimeter, but of different modules. If one has a canonical perimeter number such as 360 English feet, then the diameter will be exactly 100 royal
Egyptian feet, but, the royal Egyptian foot that is directly related by a ratio 8 to 7 of the English at 1.142857ft (Root), is supplanted in the diameter by the foot that is the 440th part longer at
1.145454ft (Standard). Another example is to use as a diameter 100 Standard common Greek feet, then the perimeter is 360 Assyrian feet but of the Root classification - the 440th part less. This is
clearly indicative of the integrated nature of the original system, the purpose of which was the maintenance of integers in what would mostly be fractured numbers were a single standard measure used,
which is what we have today. Ancient metrology is very simply based upon how number itself behaves.
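These claims are exact and can be verified in rational arithmetic. A sketch (the fractional restatements - 8/7 for the Root royal Egyptian foot, 36/35 for the Root common Greek, 9/10 for the Root Assyrian - are inferred from the values given in these articles):

```python
from fractions import Fraction

# 25/8 and 22/7 differ by the 175:176 column fraction.
assert Fraction(22, 7) / Fraction(25, 8) == Fraction(176, 175)

# A 360 English-foot perimeter with pi = 22/7 gives a diameter of
# exactly 100 Standard royal Egyptian feet (1.1454545... ft).
diameter = 360 / Fraction(22, 7)
assert diameter == 100 * Fraction(8, 7) * Fraction(441, 440)

# 100 Standard common Greek feet as diameter -> perimeter of exactly
# 360 Root Assyrian feet of .9 ft.
greek_standard = Fraction(36, 35) * Fraction(441, 440)
assert 100 * greek_standard * Fraction(22, 7) == 360 * Fraction(9, 10)
print("integer scheme holds")
```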
Copyright © 2003 John Neal
The existence of a rigidly maintained prehistoric system of measurements, based upon a unit he called the Megalithic yard, was first proposed by Alexander Thom in 1968. He claimed that the Megalithic
yard and associated modules governed the construction of Megalithic monuments throughout a wide geographic area. For the first time in 100 years an ancient unit of measurement became internationally
newsworthy, attracting both approving and condemnatory responses from the surprisingly wide spectrum of those who were interested. This situation persists until the present day. For far too long, the
existence or otherwise of this Megalithic yard has been an unresolved topic of debate among archaeologists. Despite the apparent dryness of the subject, the debate is often acrimonious. The orthodox
view is that the idea of precision measurement among Neolithic and Bronze Age societies is preposterous; the progressive view is that a high level of sophistication governed both their science and
social structure.
Because metrology has been more or less dropped from the study of both archaeology and the history of science, there are no longer any experts to consult on the matter. This has led to the argument
on the validity of the Megalithic yard being conducted by virtually unqualified people in this field. Representatives of either faction act out an absurd "Oh yes it is", "Oh no it isn't", circular
impasse of academic bickering; the "evidence" called upon to support either view is scientifically flimsy.
The resolution of the validity of the Megalithic yard can only be reached via a proper understanding of the rules governing ancient metrology. Megalithic culture extends far beyond the regions
explored by Thom and therefore overlaps with ancient territories whose systems of metrology are understood. This article is therefore more than an explanation of the Megalithic yard; it requires a
brief introductory account of the principles that the evidence suggests, govern historical measurement systems generally.
The Metrological Background
Gaetano de Sanctis, a lecturer at the University of Rome during the 1930s, remarked: "Ancient metrology is not a science, it is a nightmare".^1 The only leap of faith required of the reader is the
acceptance that far from being a nightmare, the basic subject might be both simple and sublime. A plethora of modules were used concurrently in the ancient world and there are many differing values
for each of them. At first sight the subject appears arbitrary, random and confused. The variable values have for long been regarded as evidence for slackness in the maintenance of standards, but
this is not the case. Luca Petto, the renaissance antiquarian, remarked that if any of these variables were found to exactly agree, then this would imply an intended standard.^2 Seen from this angle,
so many of the ancient modules are of precisely the same length that one must conclude that the variations are indeed deliberate. But what exactly are these variations - and what is their purpose?
W.M.F. Petrie was the most prolific, accurate and engaged researcher into ancient metrology. From his Inductive Metrology, (1877), to the publication of his Measures and Weights, (1934), he noted
many of the variations evidenced by singular modules, certain of these variations were regularly of the 450th and the 170th part.^3 (As evidence now stands, these figures should be exactly refined as
variations of the 440th and the 175th part). Petrie had reached his very accurate results by measurement alone, but the solution to ancient metrology is partially numerical in terms of certain
absolute measurements, which leaves no tolerance at all in the definition and identification of related modules. That is, Petrie, and the majority of metrologists, have had to express their given
values between certain narrow parameters, but once certain measures are unequivocally identified, in an integrated system, all others become expressible as absolutes in the same way.
It is ultimately thanks to an insight of John Michell that it has been possible to make significant advances. In his Ancient Metrology, (1981), he unwittingly established some of the governing
principles that reveal the subject as a true branch of science. Much of his work concerns the geodetic appearance of certain of the modules, particularly the Greek, in that certain values of the
Greek foot were sexagesimal fractions of the meridian degree. He was not of course the first to point this out, for the hypothesis has been regularly raised over the past 200 years.
Most importantly, Michell recognised that the ancient systems were most easily interpretable through the medium of the decimally expressed English foot. That is, fractional relationships in
comparisons of differing measures are virtually undetectable when expressed in either millimetres or feet and inches, for this reason the systems of metrology have remained impenetrable. (Expressed
in decimal feet, one regularly sees the recurring elevenths fraction .1818, the septenary .142857, or duodecimal multiples such as 1.728). Most significantly, he was also able to identify the
fractional difference between singular measures as 175 to 176 - for example the Roman foot of .96768ft relates as 175 to 176 of the Roman foot of .9732096ft. Furthermore, he noted that this regular
separation also occurred in the Greek, the royal Egyptian, the common Egyptian and sacred Jewish systems and modules. Thus, there were absolute values able to be expressed without margins of error,
whereby one is able to detect direct linkages between cross-cultural systems.^4
Livio Stecchini, probably the best-informed metrologist of the latter 20th century, noted many of these fractions linking ancient national standards. He graphically expressed certain of these links
in the following manner:^5
• Mycenaean or Italic foot : Roman foot = 15 : 16
• Roman foot : Greek foot = 24 : 25
• Mycenaean or Italic foot : Greek foot = 9 : 10
Had he continued to regard other systems from this point of view he might have recognised that they are all similarly related. Furthermore, of the measures just listed, he never understood that it is
the Greek foot that is pivotal to these interrelationships. He was thus unable to recognise the purely numerical structure of the metrological values that he could identify. From his wide experience
and ability to make comparisons from an enormously broad array of values, he was able to very closely identify certain definitive values of the Roman and Greek feet. The Roman foot, he claimed, was
exactly .9709501ft, and the Greek at the universally accepted 25 to 24 of this value he gave as 1.0114064ft. These values are almost exactly 440 to 441 of the values claimed by Michell, and for
reasons adequately dealt with elsewhere it is the values proposed by Michell that proved to be the absolutes.^6 The higher value Michell stated of the Greek foot is 1.01376ft and 440 to 441 of this
value is 1.0114612ft; there is little doubt, therefore, that the value of the Greek foot so nearly given by Stecchini is one of a series. Because in numerical terms of the English foot this Greek foot is (176/175)², which is the fraction identified by Michell applied twice, there are obviously considerably more than the two values of each module that he identified.
And, most importantly, what we call the English foot is shown to be one of the variables of the acknowledged series of the Greek foot: 1ft x 1.0057143 x 1.0057143 = 1.0114612ft.
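The arithmetic of these two fractions can be checked directly. The sketch below (Python, used here purely as a calculator; the variable names are mine, not the author's) generates the whole grid of Greek-English foot variations from 176/175 and 441/440 alone:

```python
from fractions import Fraction

F175 = Fraction(176, 175)   # links neighbouring variations (Least ... Geographic)
F440 = Fraction(441, 440)   # links the Root row to the Standard row

names = ["Least", "Reciprocal", "Root", "Canonical", "Geographic"]
grid = {}
for name, exp in zip(names, range(-2, 3)):
    root_val = F175 ** exp              # the Root-row value of the English/Greek foot
    grid[name] = (float(root_val), float(root_val * F440))

for name, (r, s) in grid.items():
    print(f"{name:<11} Root {r:.7f}   Standard {s:.7f}")
```

Every figure in the tabulation below, including the round 1.008 for Standard Canonical, falls out of these two fractions exactly.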
It was the numerical structure underlying ancient metrology, had he but recognised it, that first attracted Algernon Berriman to the subject. (His Historical Metrology, (1953), for all its faults,
errors and conjecture, remains the most frequently quoted authority on the subject of mensuration, a fact which simply underscores the prevailing ignorance of this highly important branch of our historical inheritance). He noted that the sexagesimal 129.6 English inches, taken as the perimeter of a circle, yields a closely accepted value of the royal Egyptian cubit as the radius.^7
Unfortunately he used true pi to divide the sexagesimal "canonical" number, 129.6, his resultant cubit thereby lost its precise definition. Had he used 22/7 as pi, a figure he clearly knew was an
approximation used in ancient Egypt, then, the cubit would have been the numerical absolute. The value arrived at by Berriman is 1.71887ft, but the precise value is 1.7181818ft. This is exactly the
length of the lesser value identified by Michell (related to a cubit as 175 to one of 176 equalling 1.728 ft). This is also the value arrived at by Petrie, principally from his examination of the
King's Chamber of the Great Pyramid, but he was obliged to give the solution as 20.62 ± .005 inches; yet 1.7181818ft is exactly 20.61818 inches.^8 (This point of canonical numbers as perimeters in
terms of the English values will be laboured here because of its importance in the solutions of the stone circle designs).
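Berriman's construction is easy to reproduce. The following sketch (an illustrative calculation only; the names are mine) shows how 22/7 yields the exact cubit where true pi merely approximates it:

```python
from fractions import Fraction
import math

# Berriman's circle: the canonical 129.6 English inches as a perimeter.
perimeter = Fraction(1296, 10)
pi_egyptian = Fraction(22, 7)          # the approximation used in Egypt

radius = perimeter / (2 * pi_egyptian)
print(float(radius))                   # 20.61818... inches
print(float(radius / 12))              # 1.7181818... ft, the royal Egyptian cubit

# True pi blurs the definition to Berriman's 1.71887 ft:
print(float(perimeter) / (2 * math.pi) / 12)

# Reduced by its 441st part, the cubit is exactly 12/7 English feet:
assert (radius / 12) * Fraction(440, 441) == Fraction(12, 7)
```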
If we apply the rule proposed in the explanation of the variations of the Greek foot and reduce the 1.71818ft cubit by its 441st part, then it is exactly 12/7 or 1.714285 English feet. This value has
also been recognised as one of the standard expressions of the royal cubit. This royal Egyptian cubit is therefore exactly one English cubit of eighteen inches plus its seventh part. Consequently,
since the English foot is one of a series of the Greek feet, then the seventh part added to any of the many values of the Greek cubit will equal a known value of the royal Egyptian cubit.
For many reasons, none of them patriotic, it is the foot called English that is the basis or "Root" from which all calculations involving ancient metrology should begin. It would appear from most of
the empirical evidence that the full range of the variations in a single module, here given in terms of the variations of the Greek-English foot, are as follows:
            Least      Reciprocal  Root       Canonical   Geographic
Root        .98867     .994318     1          1.0057143   1.0114612
Standard    .990916    .996578     1.002272   1.008       1.01376
For reasons dealt with elsewhere, the above terminology is used as descriptive in the classification of the variations. It was realised from the beginning that all of these variations were impossible
to express in an ascending order. They must be tabulated in two rows, the fraction linking each of the variations across the rows is 175:176, and each of the values in the top row is linked to the
value directly below as 440:441. "Root" prefixes the descriptive terminology from Least to Geographic in the top row and "Standard" in the bottom row. For example, 1.008 is Standard Canonical and
1.0114612 is Root Geographic etc.^9 As well as these values being measurements, they are also regarded as the formulae by which any other module is classified. For example, an established value of the
Persian foot of Darius is 1.05ft. This is one and one twentieth of an English foot, therefore a Root value (at Root, the modules relate to the English foot by a single fraction, 1 1/10th, 1 1/14th,
or 9/10 etc.). This Persian foot multiplied by 1.01376 = 1.064448ft which is the recorded value of the Arabian Hashimi foot and the basis of the French pied de roi, this would therefore be classified
as a Standard Geographic Persian foot (the Persian being the "Root" value). Few ancient modules are encountered that cannot be classified by the above methods. The principal reason that the regular
interrelationships of ancient systems have not previously been recognised is that researchers have been making their comparisons at the wrong classification. They are therefore viewing a compound
fraction that appears to possess no mathematical relationship, (i.e. the fraction that separates the modules plus the fraction(s) of the module variation). The modules must be compared at the correct
variation, that is, in the same column. For example, Friedrich Hultsch spent a lifetime attempting to link the common Egyptian foot of approx. 300mm with the Roman foot of approx. 296mm, but since the former is a Root Canonical (300.28mm) value and the latter a Root Geographic (295.96mm), it was an exercise in futility.^10 The ratio connecting the two at the same classification is exactly 49 Roman
to 50 common Egyptian.
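Hultsch's impasse can be shown numerically. In this hypothetical check (names and layout mine), the two values he pursued are rebuilt from the Root Roman foot and shown to sit in different columns of the grid:

```python
from fractions import Fraction

MM_PER_FT = Fraction(3048, 10)                 # millimetres per English foot

roman_root = Fraction(24, 25)                  # Root Roman foot, 0.96 English ft
egyptian_root = roman_root * Fraction(50, 49)  # Root common Egyptian foot

# The two values Hultsch pursued sit at DIFFERENT classifications:
egy_canonical = egyptian_root * Fraction(176, 175)      # Root Canonical
rom_geographic = roman_root * Fraction(176, 175) ** 2   # Root Geographic

print(float(egy_canonical * MM_PER_FT))        # ~300.28 mm
print(float(rom_geographic * MM_PER_FT))       # ~295.96 mm

# At the SAME classification the link is the single fraction 50:49:
assert egyptian_root / roman_root == Fraction(50, 49)
```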
All of the ancient systems to which we erroneously give nationalistic names are similarly connected - by a single fraction. This is in keeping with the ancient systems of mathematics that are a
matter of record (multiple fractions such as 4/7ths would have to be expressed as ½ + 1/14th; and depending on the complexity of the original fraction, series of diminishing single fractions approach
the solution). This is how ancient metrology fits together, the variables of the singular measures relate to their neighbours by a single fraction, and the measures relate to differing measures at
the correct classification also by a single fraction. As the research progressed, such comparisons provoked the obvious conclusion - that the disparate "national" metrological structures were
branches of a single system that was originally designed to be used concurrently.
The fractional difference of 175:176 serves the purpose of maintaining integers of the same module in both the diameters and perimeters of circles if the diameters are multiples of four. For example,
in discussing the odometer, Vitruvius stated that a wheel of four feet in diameter travels 12 ½ feet in one revolution.^11 This is a very inaccurate pi ratio of 25/8 or 3.125, but is a value that is
known to have been used throughout the ancient world. Far more commonly used is the quite accurate 22/7 or 3.1428571, and the difference between 25/8 and 22/7 is exactly 175 to 176. There is thus an
eminently practical reason for this fractional separation in the modules: if Vitruvius' carriage wheel were four Roman feet of .96768ft (175) then the perimeter is indeed exactly 12 ½ feet, but of
the related foot of .9732096ft (176).
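A quick calculation confirms the wheel example, using the two Roman foot values given above:

```python
# Vitruvius' odometer wheel: 4 ft diameter, 12.5 ft per revolution implies
# pi = 25/8; with the accurate 22/7 the integer-and-a-half result survives
# because (22/7)/(25/8) = 176/175 -- the spacing between related feet.
roman_175 = 0.96768       # the lesser Roman foot (the "175" value)
roman_176 = 0.9732096     # the related Roman foot, its 175th part longer

diameter = 4 * roman_175              # wheel of four lesser Roman feet
perimeter = diameter * 22 / 7         # accurate ancient pi

revs = perimeter / roman_176          # measure the perimeter in the LONGER foot
print(revs)                           # ~12.5, Vitruvius' figure
```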
The fraction 440:441 also serves the purpose of maintaining integers in diameters and perimeters but of differing modules, particularly if the perimeters are canonical numbers, i.e. sexagesimal and
duodecimal solutions - numbers that are traditionally associated with circle perimeters. We saw this with Berriman's resolution of the royal Egyptian cubit, where the canonical number 129.6ins
(correctly 10.8ft, also canonical) as a perimeter rightly yields a radius of 1.71818ft. Thus, there is a perimeter expressed in English feet and a radius expressed in royal Egyptian cubits, but the
cubit related to the English is 12/7ft or 1.714285ft (1½ft plus its seventh part), and this is the 440th part less than 1.7181818ft. The best example of this phenomenon is the division of the perimeter by 360; if this is regarded as feet, then the diameter is 114.5454ft, or 100 royal Egyptian feet each of (1ft + 1/7th) + 1/440th. Therefore the module of the perimeter must have the 440th part added to its
related module of the diameter, so there is Root classification perimeter and Standard classification diameter, again, the separation of the 440th part is shown to be a corrective fraction related to
the pi ratio and designed to maintain integers. This may be difficult to grasp but will be further explained as we analyse Megalithic circles. This rule is consistently observable with all of the
canonical perimeter numbers in feet and the differing modules of the diameters.
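The 360-unit perimeter just described can be checked the same way, in exact rational arithmetic:

```python
from fractions import Fraction

# The canonical 360 as a perimeter (in English feet), with pi = 22/7:
perimeter = Fraction(360)
diameter = perimeter * Fraction(7, 22)
print(float(diameter))                          # 114.5454... English feet

# ...which is exactly 100 royal Egyptian feet of (1 + 1/7) ft
# raised by its 440th part: a Standard diameter against a Root perimeter.
royal_foot = Fraction(8, 7) * Fraction(441, 440)
assert diameter == 100 * royal_foot
```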
The Vigesimal Counting Base
The last aspect of metrology to take into account before considering the Megalithic measurements is the nature of these modules other than feet and cubits. The basic unit of any system of measurement
is the foot. It may be sub-divided into either inches, or with more versatility, into digits of which there would be sixteen, various multiples of these digits form other modules. Many of these
modules persisted in the British imperial system into the 20th Century, but their antiquity was not remarked upon, being largely obscured by the sub-division of the foot exclusively into inches. On
modern steel tapes the sixteen-inch division is still marked, but no longer has a terminology such as pygon, remen or pygme. The five-foot pace was also commonly used until recent times. The yard, of
course, is a double cubit, and a cubit is simply a device to alter the counting base to twos instead of threes, as in terms of the yard and fathom.
In Egyptian metrology there was a 20-digit measurement called a remen. The measurement systems of the Greeks and the Romans are directly related to the Egyptian and both had a 20-digit module, (this
is not to say that the digit was the same length - they all differed by the same proportion as did their respective feet). In Greek, this unit was called a pygon and in Rome a palimpes. The double
remen was therefore a step (gradus) of 40 digits and the double step was a passus or pace. As there are 80 digits in a pace, it is therefore five feet, each of sixteen digits. There are many digit
multiples that were recognised as modules of measure. Knuckle, palm, hand, lick, handlength, pygme, cubit and so forth, and of these it is the handlength that interests us. From the heel of the hand
to the fingertips this length, at ten digits, is a half remen, this, to this day, is the basis of the counting system that is still used by the Welsh.
A twenty, or vigesimal, counting base is not peculiar to the Welsh, but was relatively common in the historical world; it is basically made up from two tens. In Welsh, twenty is di-deg - two tens;
forty is then two two-tens, or di di-deg; sixty is tri di-deg, or three two-tens and eighty is pedwar di-deg, four two-tens, and so forth. That this method of counting - by the score - has survived
from at least the Neolithic and Bronze Age to the present day should come as no surprise; our societies are permeated with ancient cultural distinctions that we unconsciously preserve. As well as
cultural differences in the ancient world, there were also very distinct cultural similarities and these are nowhere as obvious as in the counting systems and measurements that have survived from
pre-history. It is the measurement system used by the megalith builders that reveals them as "fully paid up members" of their contemporary world. This is because their method of measurement is
identical in all respects to that of the majority of the systems of the ancient world; it is not unique to the Megalithic arena.
Alexander Thom, although he detected a module that had been consistently used in the construction of Megalithic monuments, failed to identify it accurately. The modern thought processes require us
to conform to a single standard; heretofore it has been the yard, rapidly being supplanted by the metre. One must not assume that the inhabitants of the ancient world viewed the subject of
mensuration in this fashion. To them the subject was a science in its own right, rather than a simple method of quantifying. It was Thom's modern thought processes that forced him to try to identify
a single module and an invariable value of that module. He mistakenly believed the basic measure to be the Megalithic yard, but this is essentially a compound measure. From his examinations of the
monuments he identified rather more than the basic Megalithic yard, and taken altogether, these multiples and sub-divisions which he noted distinguish the branch of ancient metrology from whence they derive.
He noted that the stone circles, egg shapes and elliptical structures were often an odd number in terms of his Megalithic yard in their diameters; therefore there must be a half-yard module in the
radius. A length of two Megalithic yards was detectable in the longer distances, which he termed a Megalithic fathom, and he claimed that the perimeters of the circles were designed to be in
multiples of 2½ yards, which he termed a Megalithic rod. If this were all of the information available, then the Megalithic series would have remained an irresolvable enigma, believable only by
statistical mathematicians who would be unable to clearly demonstrate its existence to the layman, which has been the case. But Thom recognised another unit that seems to have escaped scrutiny and
clearly identifies the system to which the measures belong.
The great stones are often patterned with "cup and ring" markings which are thought to be contemporary with their host megaliths, and it was through analysis of their spacing and geometrical layout
that Thom identified the basic sub-division of the longer modules. This was the Megalithic inch - forty to the Megalithic yard. With crystal clarity this enables the whole series to be categorised.
All of Thom's appellations of his modules are misnomers. The "Megalithic inch" is clearly a digit, the half-yard is a remen, pygon or palimpes, the yard is a step or gradus, the fathom is
the 5 feet pace and the rod is 10 palms, which suggests that the 10 digit palm was the basic module of the Megalithic engineers.
│Module name     │Digits│Feet │Megalithic equivalent      │
│Finger          │1     │     │Megalithic inch            │
│Knuckle         │2     │     │                           │
│Palm            │4     │     │                           │
│Hand            │5     │     │                           │
│Lick            │8     │     │                           │
│Handlength      │10    │     │1/4 Megalithic yard (palmo)│
│Span            │12    │     │                           │
│Foot            │16    │1    │                           │
│Pygme           │18    │1.125│                           │
│Pygon (Remen)   │20    │1.25 │1/2 Megalithic yard        │
│Cubit           │24    │1.5  │                           │
│Step            │40    │2.5  │Megalithic yard            │
│Xylon           │72    │4.5  │                           │
│Pace            │80    │5    │Megalithic fathom          │
│Fathom          │96    │6    │                           │
│(10 handlengths)│100   │6.25 │Megalithic rod             │
│Pole            │160   │10   │2 Megalithic fathoms       │
Note how the versatile digit structure allows for many counting bases, particularly duodecimal, but the Megalithic is clearly the remen, or groups of twenty: two-tens.
The Sumerian Cubit and the Megalithic Inch
When the Megalithic "inch" is compared with known ancient metrological systems at the commonly used multiples of 16 to the foot and 24 to the cubit, Thom's measurements are clearly what we call
Sumerian measures.
Roman and Greek measures relate to each other by the ratio 24 to 25, Sumerian and royal Egyptian measures relate to each other by the same ratio, the Greek and royal Egyptian measures relate to each
other by the ratio of 7 to 8. The Roman measures are therefore 7 to 8 of the Sumerian. In addition to the Megalithic yard, all these "systems" (among others) are significantly represented in British
stone circles with the most minuscule margins of error; indeed, to ratios that far exceed the accuracy of 1:400 of which Thom claimed the builders were often capable.
Although no measuring devices survive from Sumeria, the varying values of the Sumerian cubit have been accurately established from the dimensions of buildings, the measurement of bricks, and from
cuneiform tablets that record these dimensions. The most widely acknowledged value of the cubit is taken from that of the half-cubit represented as a rule on the statue of Gudea from Lagash, and
given as 248mm. Thureau-Dangin had previously given a related value of 495mm from cuneiform texts and plans of temple dimensions, which he subsequently excavated. As an absolute value it may be stated as 495.93mm, which is the classification Root Least (1.627066ft).^12 The other values of the Sumerian cubit drop into their proposed positions of the tabular arrangement (at the ratios forwarded as above for the Greek feet) with high degrees of accuracy, all the way to the maximum, Standard Geographic value, of 1.66836ft. This final measure is accurately given by the eastern side of the Ziggurat of Etemenanki at Babylon, as excavated under Robert Koldewey, and given as 91.52 metres, or 180 cubits (calculated here as 91.532 metres).^13 These are the proposed values of the
Sumerian cubit that are convincingly in agreement with the empirical evidence:
            Least      Reciprocal  Root       Canonical   Geographic
Root        1.627066   1.636363    1.645714   1.655118    1.664576
Standard    1.630764   1.640082    1.649454   1.65888     1.668359
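These values can be cross-checked against the two archaeological benchmarks just cited. The sketch below (conversions and figures as given in the text; variable names are mine) recovers both the Gudea half-cubit and the Etemenanki side from the Root Megalithic yard alone:

```python
from fractions import Fraction

MM_PER_FT = Fraction(3048, 10)     # millimetres per English foot

meg_yard = Fraction(96, 35)        # Root Megalithic yard, 2.742857... English ft
digit = meg_yard / 40              # Thom's "Megalithic inch" = one Sumerian digit
cubit = 24 * digit                 # Root Sumerian cubit = 1.645714... ft

# Gudea of Lagash half-cubit rule, given as 248 mm -- the Root Least value:
least = cubit * Fraction(175, 176) ** 2
print(float(least * MM_PER_FT) / 2)            # ~247.96 mm

# Etemenanki's eastern side, 180 cubits of the Standard Geographic value:
geo = cubit * Fraction(176, 175) ** 2 * Fraction(441, 440)
print(float(180 * geo * MM_PER_FT) / 1000)     # ~91.53 m (Koldewey: 91.52 m)
```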
Although the Sumerian cubit was most commonly sub-divided into 30 shu-si, Stecchini remarked:
"The texts do not draw any distinction between different types of cubits, except to state that the cubit usually divided sexagesimally into 30 fingers is at times divided into 24 fingers as in the
rest of the ancient world".
It is this 24th division of the Sumerian cubit that is the Megalithic inch.
Although the royal Egyptian cubit had 28 divisions, they are not regular, the reasons for which have not yet been satisfactorily explained. However, Petrie, largely by his analysis of artist's grids
that were accurately chalk-marked onto the walls of tombs, was able to state that they used a unit in multiples of 1/25th of a royal Egyptian cubit:
"Of these engraved lists the first two have a unit of a decimal division of the cubit; in No. I the spaces are 16/100 of a cubit wide, and 20/100 high, or 4/25 and 5/25; and in No.2 the spaces are 14/100 wide, and 16/100 high, or 7/50 and 8/50. The cubit of No. I would be 20.45 ± .05, and of No.2, 20.58 ± .08. This is of course inferior as a cubit standard to the determinations from large
buildings but it is very valuable as showing the decimal division of this cubit, which is also found in other countries".
(Additionally, Petrie has here accurately pinpointed two of the deduced values of the royal cubit: Root Reciprocal is 20.4545 inches and Root is 20.57142 inches).
Obviously this digit is the Sumerian, 24 to that cubit, 25 to the royal Egyptian and 40 to the Megalithic yard. More examples of the application of this Megalithic "inch" are cited below in the
course of the Stonehenge description.
But rather than labour through all the given values of the Sumerian cubit to substantiate the general theory, let us move directly on to the module which is the survival of the Megalithic yard into
modern times.
The Spanish Vara
In his comparisons of the Megalithic yard with surviving measures, Thom gave much attention to the varied values of the Spanish vara.^14 After millennia of non-recognition, the most widely accepted
value of the vara, that of Madrid and Castile proves to be the principal, or Root value, of the Megalithic yard. The exchange value of this vara is given as 2.7425ft and the value of a 40-digit
module from the Root 24 digit Sumerian cubit is 2.7428571ft, this is a ratio of accuracy in the region of 1:7700. Even this negligible discrepancy is probably to be accounted for by the rounding down
of a decimal. The vara is divided into three feet of 12 pulgadas or inches, and in all probability these divisions were adopted at the time of the Roman occupation so as to come into line, so far as
counting bases go, with the Roman uncia. The alternative division of the vara is a length of 4 palmos; although these would each be of 9 pulgadas, the length would also be that of 10 Sumerian digits
or Megalithic inches; ten digits being the palm length of the Greco-Roman measurement systems, and the basis of an essentially decimal digit count. Having identified the Castilian vara as the Root
value of the Megalithic yard, the related values may be expressed as follows:
            Least      Reciprocal  Root       Canonical   Geographic
Root        2.711777   2.727272    2.742857   2.758531    2.774293
Standard    2.717939   2.733470    2.749090   2.764800    2.780598
As well as regional variations of the vara within Spain, the Spaniards took them to all of the countries where they settled. Many of them subsequently fell from general use in the home country, but
were preserved in the colonies. Thom noted some of these variations, and with additional variations from elsewhere they are tabulated below with their ratios of accuracy to the absolute values
(derived from the Sumerian 40 digit module) as listed above:
│Madrid │2.7425ft related to│Root 2.74285 error as 1:7700 │
│Burgos │2.766ft related to │Standard Canonical 2.7648 error as 1:2300 │
│Almeria │2.7329ft related to│Standard Reciprocal 2.73347 error as 1:5160 │
│Mexico │2.749ft related to │Standard 2.74909 error as 1:30000 │
│California│2.781ft related to │Standard Geographic 2.78059 error as 1:6800 │
│Texas-Peru│2.75ft related to │Standard 2.74909 error as 1:3000 │
│Canaries │2.7625ft related to│Standard Canonical 2.7648 error as 1:1185 │
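The error ratios in the table can be recomputed from the Root value alone. A sketch for two of the entries (the others follow the same pattern; the dictionary layout is mine):

```python
from fractions import Fraction

root = Fraction(96, 35)                            # Root vara, 2.742857... ft
standard = root * Fraction(441, 440)               # 2.749090...
standard_geo = standard * Fraction(176, 175) ** 2  # 2.780598...

surveyed = {"Mexico": (2.749, standard), "California": (2.781, standard_geo)}

ratios = {}
for place, (measured, absolute) in surveyed.items():
    error = abs(measured - float(absolute))
    ratios[place] = measured / error
    print(f"{place}: error about 1:{ratios[place]:,.0f}")
```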
There are many more variations of the vara that accurately represent all of the proposed values and it is the values of the Root Least, Standard Least and Root Reciprocal that were very closely
identified by Thom from his surveys. He admitted that his definitive value of 2.72 feet was obtained by averaging, not realising that the variations that he witnessed were quite deliberate. The
widespread practice of averaging in the study of ancient metrology has totally obscured its structure.
Land areas and itinerary distances also yield close values to those proposed in the above table. The length of the legua, a distance of 5000 varas, as used in the American Southwest is given as 2.6305 miles,^15 which differs from 5000 Root Geographic varas by only 17.5 feet. The slight discrepancy may be accounted for by the acceptance of this value as 33 1/3 inches for convenience of conversion,
instead of the 33.291 inches, which is the absolute. For a measure that is certainly of prehistoric derivation, it is a great tribute to the artisans, civil servants and scientists, who, over
countless years, have maintained these standards.
The Survival of Standards
Petrie speculated that constant copying had caused many of the variations of singular modules. Reasoning that standards, over the years, must lengthen, the copyists would err on the side of
generosity because an error could be corrected were the rod cut too long, but would have to be discarded were it too short. This is obviously not the case. It was the custodians of the temples that
manufactured and issued weights and measuring devices, as Petrie himself established. Ritual stone rods were kept in the temples as standards from which others were copied or compared. Were these
rods ever lost or destroyed, they could be reconstituted from the accurately known dimensions of the temple itself, which among its other functions, was regarded as the permanent repository of
measure. Additionally, civic buildings would be of known dimensions and were often engraved with standards of measure upon officially approved stones as checks for market traders and artisans. By
such methods, but principally by constant usage, standards remained unchanged for millennia. A new rod could be calculated from known constants.
These facts immediately answer the primary criticisms of Megalithic measures theory:
"How could such a unit be kept standard over more than a millennium and a geographic area of thousands of square miles? What would the standard be made of (wood? stone?) and where would it be kept?
How could a population of dispersed migratory tribes maintain standardized measures, or even want to?" ^16
The answer to the first question is that the units would be kept standard in the dimensions of permanent monuments. The second answer is stone, the stones of the monuments. The final criticism is
specious, because to hypothesise that the Neolithic people were "dispersed and migratory tribes", remains just that, a hypothesis.
The massiveness and uniformity of the structures that have survived indicate that these people were highly organised from centralised points throughout their territorial range. E.W. Mackie has
acutely observed that the situation could not have come about unless there was a centralised training centre whereby instruction would be imparted in the necessary skills to achieve such
uniformity.^17 He saw the whole scheme as analogous to the Roman Church, with its training colleges and rigidly hierarchical organisation. These centralised training points would be attended by the
most able to receive instruction in the necessary skills of astronomy, geometry, geography and mensuration; of necessity, this would be accomplished under the umbrella of religion.
Although statisticians may be convinced of the regular unit construction of the Megalithic monuments, their analytical data is not comprehensible to the non-mathematician. This is because the
analysts seem to show no particular necessity for the units that they identify within the monuments - and with which they were supposed to be designed - in any sensible integers within the
constructions. This situation is entirely unsatisfactory; far better is John Michell's formulation:
"A tradition which has been credited by many learned men over the centuries is that the ancients encoded their knowledge of the world in the dimensions of their sacred monuments. If that is so, any
attempt to elicit that knowledge must be preceded by a study of ancient metrology, for to interpret any set of dimensions it is of course necessary to establish the units of measure in which they
were originally framed".
Naturally, we should first establish the precise magnitude of that which is under investigation, then seek the sensible integers that fit those measurements. Should those round numbers be discovered
in modules that have been previously identified, these may then be considered to be a basis for a sensible theorem. Considered from this perspective, the Megalithic yard, fathom and rod at the values
forwarded by Thom should, quite rightly, be dismissed. But if a consideration of the variables of the basic modules that he deliberated shows that they are present in the arrangements of the
megaliths to great degrees of accuracy, this should help to convince the sceptical.
Stonehenge is the monument most amenable to such analysis. It is an intricate and unique structure, but to establish the regular modules used in its design it is not necessary to analyse every
feature. The most important element of this complex structure is the sarsen lintel circle, which proves to be extremely informative. In its function as the repository of constant modules, no more ingenious a device could be imagined. The majority of the stones of Stonehenge are roughly shaped, but the imposts of the trilithons and the lintels of the sarsen circle are carefully dressed. The
lintels are far above the ground and by accident or design have been preserved from damage and wear. They are carefully mortised and tenoned to the uprights and tongued and grooved together, and
this circle may therefore be calculated in both its intended inner and outer dimensions with the utmost accuracy. Although the lintel circle itself is reduced to six stones of the original thirty,
its inner diameter is identical to the inner diameter of the sarsen circle, the most accurate estimate of whose intended length is that of Petrie's examination of 1877.18 He gives this dimension as
1167.9 ± .7 inches, which he identified as 100 Roman feet; and since 100 Roman feet of the Standard Geographic classification (97.32096ft) is 1167.851 inches, it must be recognised as such. (That such
information may be deduced from a monument four millennia after its construction, should answer the question "Where would such a standard be kept"?).
Although the widths of the lintels have been roughly measured, no fully accurate estimate of the outer diameter of the lintels was made until the publication of Michell's "Ancient Metrology". He
gave it as 104.2724571ft, an absolute that can be expressed within the parameters of the estimates of other surveys.19 It is a correct solution for a number of reasons, not the least of which is the
fact that temples were constructed to certain proportions whereby the modules of their design were deduced from ratios. It is significant that there are 30 lintels in the perimeter and that the width
of the lintel is also one 30th of the overall diameter; the ratio of the inner to outer diameter is therefore 14 to 15. One fourteenth of 97.32096ft and one fifteenth of 104.2724571ft is 6.951497ft,
and as a single module this is one Megalithic rod. The constituent Megalithic yard of this "rod" is the surviving length of the Standard Geographic Spanish vara whose absolute value is 2.780598ft.
Because there are 14 rods in the inner diameter, there are exactly 44 in the inner perimeter, but the outer diameter being of 15 rods yields a perimeter that is not integral in terms of this module -
there are 47.25 rods of the Root Geographic classification of 6.93573ft. Another solution must therefore be sought, in terms of a different module, for the outer perimeter.
The Megalithic yard also fits these diameters very informatively. Because there are exactly 35 in the inner diameter, it betrays the fact that there is a half yard module (remen) in the radius. But
most interestingly, there are exactly 37 ½ Megalithic yards in the outer diameter; therefore, as predicted from the appearance of the Megalithic yard module as a double remen, with a vigesimal
counting base there would be a ten-digit palm in this radius.
The dimensions of Stonehenge incorporating Thom's Megalithic rod are entirely unsatisfactory. Although he had identified a value of the Spanish vara in excess of 2.78ft, it never seemed to occur to
him to experiment with these increased lengths, as opposed to those that he thought definitive of the Megalithic yard module. He stated that the intention of the design was to contain the sarsens (or
lintels) between concentric circles with circumferences of 45 and 48 Megalithic rods. Since the difference between the two is .48 of a rod of 6.806ft, this implies a lintel width of 3.26688ft, which
as every survey of the stones reveals, is far too small.20 He thus had to interpolate a whole rod into the inner circumference by using too small a value. The inner diameter he took as 97.41ft and
employed a Megalithic yard of 2.7224ft, which fits this length in no sensible integer at all. Small wonder that the rationalists have dismissed his results.
It is beyond dispute that Stonehenge in addition to whatever its other functions may have been, served as a repository of measures. The principal dimensions are designed in values of the Standard
Geographic classification, (which means that in order to obtain the parity of the modules with the English foot, they must be divided by 1.01376, which reduces them to Root). Exactly how all the
measurement systems of remote antiquity once formed an integrated system is beautifully exemplified in Stonehenge with some of the more interesting modules:
The inner diameter
100 Roman feet of .9732096ft
98 Common Egyptian feet of .993071ft
96 Greek feet of 1.01376ft
56 Royal Egyptian cubits of 1.737874ft
35 Megalithic yards (varas) of 2.780598ft
14 Megalithic rods of 6.951497ft
The outer diameter
105 Common Egyptian feet of .993071ft
100 Common Greek feet of 1.0427245ft
90 Royal Egyptian feet of 1.158583ft
60 Royal Egyptian cubits of 1.737874ft
50 Jewish sacred cubits of 2.0854491ft
37.5 Megalithic yards (varas) of 2.780598ft
15 Megalithic rods of 6.951497ft
Interesting connections become clearer by such methods of tabulation. Frank Skinner, who had charge of the Weights and Measures Department of the Science Museum during the 1950s, noted that the
common Greek foot related as 3/5ths of the royal Egyptian cubit, and since the sacred Jewish cubit relates to the royal Egyptian as 6 to 5 it is clear that the common Greek foot is a half sacred
cubit.21 More such loops can be easily spotted in the above table. The sacred Jewish cubit, that of Moses and Ezekiel, expressed as "a cubit and a hand's breadth", refers to the royal Egyptian as the basic cubit. Since a hand's breadth is 5 digits, the implication is that 5 such hands comprise the royal Egyptian and 6 the sacred Jewish; this digit is therefore the Megalithic "inch": 15 to the "common" Greek foot, 24 to the Sumerian cubit, 25 to the royal Egyptian cubit, 30 to the sacred Jewish cubit and 40 to the Megalithic yard. There is little doubt that we are regarding a single
organization of measurement in what has previously been viewed as quite separate systems.
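The arithmetic behind the two tables can be checked mechanically. The short script below (a sketch, not part of the original text) multiplies each count by its module, compares the product with the stated diameter, and confirms the 14 to 15 proportion between the two diameters:

```python
inner, outer = 97.32096, 104.2724571   # inner and outer diameters, feet

inner_modules = [        # (count, module length in feet)
    (100, 0.9732096),    # Roman feet
    (98, 0.993071),      # common Egyptian feet
    (96, 1.01376),       # Greek feet
    (56, 1.737874),      # royal Egyptian cubits
    (35, 2.780598),      # Megalithic yards (varas)
    (14, 6.951497),      # Megalithic rods
]
outer_modules = [
    (105, 0.993071),     # common Egyptian feet
    (100, 1.0427245),    # common Greek feet
    (90, 1.158583),      # royal Egyptian feet
    (60, 1.737874),      # royal Egyptian cubits
    (50, 2.0854491),     # Jewish sacred cubits
    (37.5, 2.780598),    # Megalithic yards (varas)
    (15, 6.951497),      # Megalithic rods
]

# Largest discrepancy between count * module and the stated diameter
worst_inner = max(abs(n * m - inner) for n, m in inner_modules)
worst_outer = max(abs(n * m - outer) for n, m in outer_modules)
ratio = outer / inner    # should be 15/14, the lintel proportion
```

Every product agrees with its diameter to well under a thousandth of a foot, and the ratio matches 15/14 to seven decimal places.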
One of the most intriguing solutions to these numerical harmonies is that of the outer circumference of Stonehenge, 327.713ft, being exactly 360 Italic or Mycenaean feet of .910315ft as an absolute.
This canonical perimeter number is of the Root Geographic classification, the 440th part less than the Standard Geographic measures of the diameter; this is the corrective fraction, previously mentioned, that maintains integers and arises arithmetically. There are 100 common Greek feet in the diameter, and the common Greek foot is a Mycenaean foot plus the seventh part. The modules of
diameters are therefore composite measures if the perimeters are basic measures in canonical multiples. A pertinent example is to view the compound module of 100 Standard Megalithic yards as a diameter of 274.90909ft; the perimeter is then the canonical number 864 English feet. It is to be noted that the composite measures of the diameters are invariably expressed decimally and the
perimeters duodecimally. These dual counting bases in harness have other metrological properties, but space precludes an explanation here. It would thus appear that the system had been observed or
discovered through simple arithmetic, rather than contrived.
The Rollright Stones
Stonehenge is often thought of as a great temple with a surrounding necropolis of the burial mounds of Neolithic Bronze Age royalty or heroes. The monument is supposedly unique, with no ancestors or descendants. But a metrological examination of other Megalithic monuments reveals similar, though lesser, solutions, in precisely defined values to such degrees of accuracy that they may only be
recognised as intentional, and thereby directly related to the design of Stonehenge.
Without considering the complexities of the repetitive shapes of the rings identified by Thom, the existence of a regular measurement system is more simply demonstrable if confined to the
straightforward analysis of true circles. The Rollright Stones of Oxfordshire22 have a very obvious and direct metrological relationship with Stonehenge. Although many of the stones had been removed
early in the 19th century, they were replaced in 1866 when such "restorations" were fashionable, and their intended dimensions may thus be accurately assessed. Given by Thom as 103.6ft, the diameter is within .96 of an inch of its presumed value of 103.68ft, because 103.68ft is exactly 175 to 176 of the outer diameter of Stonehenge at 104.27245ft. The modules in the Rollrights are therefore all
calculated at the Standard Canonical classification of measures, 1.008 greater than Root. All the elements which fit the Stonehenge dimension therefore fit the Rollrights at this classification. This
indicates a margin of error in the region of one part in 1300, which of course is no error at all; any surveyor would allow such latitude. The Megalithic yard of this classification is that of the
vara of Burgos, 2.7648ft, at 37.5 in the diameter; the principal "Megalithic" measurement here is likely to be in counts of the 10 digit palmo at 75 in the radius, or, as at Stonehenge, 15 Megalithic
rods diameter. The most likely intention at the Rollrights is 100 common Greek feet diameter and 360 Mycenaean feet perimeter. Thom claims that the diameter is 38.1 Megalithic yards; this yields a yard of 2.71916ft, which is numerically unsatisfactory and unlikely as part of any design.
The Merry Maidens
The Merry Maidens at St. Buryan is another true circle, also connected to the Stonehenge measurements inasmuch as it is constructed with the maximum, Standard Geographic modules. Thom remarks that it was re-erected, but it comprises stones of such sturdy simplicity that it is certain that it reflects the original design. He gives the diameter as 77.8ft or 28.6 Megalithic yards and the perimeter as 35.9 Megalithic rods, which implies an arbitrary Megalithic yard of 2.72027ft, and it is little wonder that his reasoning of a constant unit is disputed.23 The diameter of this circle is 80 Roman feet of
.9732096ft to within less than ¾ of an inch, but because this is a multiple of four it is unlikely that this is the module with which to interpret the metrological design. (The four multiple cannot
give an integral perimeter by the established method of adding the 175th part, because at the Standard Geographic this classification is already the apparent maximum). 80 Roman feet is, however, also
70 Sumerian feet, which yields an eminently suitable 220 Sumerian feet of Standard Geographic of 1.11224ft as a perimeter. This is also 88 Megalithic yards of 2.78059ft (the vara of California) and
35.2 Megalithic rods. Both 88 and 352 are significant numbers in metrology, and it would seem from this observation that Thom was correct in much of his reasoning; in this case, in his claims that
the designers were obsessed with maintaining perimeter integrity. Thom believed that they "sacrificed" diameter integrity to sustain this; but with a variable base measure both may be obtained with
exactitude. Again, it would seem from this interpretation of the Merry Maidens that the addition of the seventh part to a module has a function related to pi in the maintenance of integers.
(Mycenaean plus the seventh is common Greek, Roman plus the seventh is Sumerian and "geographic" Greek - of which the English is a component - plus the seventh is royal Egyptian).
Perhaps the most widely acknowledged connection between what are often wrongly believed to be separate systems of measure, is that of the Roman to Greek at 24 to 25. A Roman furlong of 625 feet is
the equivalent of a Greek stade of 600 feet. This too, is related to the pi ratio. One Roman foot radius is six Greek feet perimeter. But once again, there is a classification change in the modules;
the foot of the perimeter is the 175th part greater than its relative of the radius. (Berriman noted this exact relationship). Therefore, one Sumerian foot as a radius is six Egyptian feet as a
perimeter. Six royal Egyptian feet (or four royal cubits) is therefore 6.25 Sumerian feet, which is exactly the Megalithic rod (2 ½ Megalithic yards which are each 2 ½ Sumerian feet).
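Taking pi as 22/7, the approximation implied by the recurring "seventh part" remarks, these relationships verify exactly; the sketch below encodes the quoted foot values as fractions:

```python
from fractions import Fraction

pi_approx = Fraction(22, 7)          # the "seventh part" value of pi

roman = Fraction(9732096, 10**7)     # Roman foot, 0.9732096 ft
greek = Fraction(101376, 10**5)      # Greek foot, 1.01376 ft

# Roman relates to Greek as 24 to 25, as stated.
assert roman * 25 == greek * 24

# One Roman foot radius gives a perimeter of six Greek feet,
# provided the perimeter foot is the 175th part greater.
perimeter = 2 * pi_approx * roman
six_greek = 6 * greek * Fraction(176, 175)
```

With exact rational arithmetic the two perimeter expressions are identical, not merely close.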
The Ring of Brodgar
The Ring of Brodgar on mainland Orkney provides a perfect example of the kind of ambiguities thrown up in the interpretation of measurements. Thom found the diameter to be 340.66 ± .44ft and rightly gave the distance as 125 Megalithic yards. However, he took the lesser estimate and made the Megalithic rod 6.813ft, stating:
"The reason for using a circle diameter 125 Megalithic yards is that it gives a perimeter of 392.7 Megalithic yards which is very nearly an integral number of Megalithic rods".24
Of course, it is nothing of the sort; at 157.08 Megalithic rods it is quite meaningless. The correct interpretation of the geometry of Brodgar, however, yields perfect integers in previously
identified modules. The diameter is intended 125 Megalithic yards of the Root Reciprocal 2.7272ft and 340.909ft is perfectly within the measured length. As well as being exactly 50 m rods it is also
200 royal Egyptian cubits. Because both are a decimal count the resultant perimeter will be in modules of the Root classification, the 175th part longer than those of the diameter. The ensuing
perimeter is 1071.4285ft and the temptation is to leave it there, in that it is 1000 feet of 1 1/14th English feet, and although this module is difficult to categorise, similar and related lengths
are reported from many ancient sources. It is in perfect agreement with other aspects of metrology by being at Root an increase of the English foot by a single fraction. At certain of its values it
overlaps, exactly, the so-called Germanic or Drusian foot, which is 18 digits to the Roman 16.25 In Greek terminology this Germanic foot would be termed a pygme at its Roman reduction. The perimeter
of Brodgar is also exactly 625 Royal Egyptian cubits of 12/7 English feet, but this is a numerically unsatisfactory number for a circumference. Certainly, this length cannot be interpreted in terms
of either the Megalithic yard or the Megalithic rod, but the most interesting aspect of Brodgar is perhaps that it has exactly the same diameter as the two inner circles of Avebury.
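With the same 22/7 value for pi, and taking the Root Reciprocal yard as the exact fraction 30/11 ft (an assumption consistent with the quoted 2.7272ft and 340.909ft), Brodgar's figures reconcile exactly:

```python
from fractions import Fraction

pi_approx = Fraction(22, 7)
yard = Fraction(30, 11)                      # Root Reciprocal Megalithic yard, ~2.7272 ft

diameter = 125 * yard                        # 3750/11 ft, ~340.909 ft
rods = diameter / (Fraction(5, 2) * yard)    # a rod is 2.5 yards, so 50 rods
perimeter = pi_approx * diameter             # 7500/7 ft, ~1071.4285 ft

# The equivalent readings of the perimeter given in the text:
feet_15_14 = 1000 * (1 + Fraction(1, 14))    # 1000 feet of 1 1/14 English feet
cubits_625 = 625 * Fraction(12, 7)           # 625 royal Egyptian cubits of 12/7 ft
```

All three perimeter expressions reduce to the same fraction, 7500/7 ft.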
This gives credence to the claims of cultural regularity and wide dispersal of monuments that had an obvious scientific purpose, if only as an exercise in geometry. Obviously it was much more. This
is apparent by what we know of the enormous difficulties that were experienced in the development of the metric system, which can only be used to quantify. In comparison to the system of antiquity,
it is childishly simplistic.
From these few examples we can see how modules that have been precisely defined elsewhere, fit British Megalithic geometry without margins of error. The plausibility of the argument is supported by
the fact that the modules that have been identified also fit the monuments in rational sets of numbers. Such is the integration of metrology. Are we then dealing with a "Megalithic" system of
measures? Probably not - nor any other that could be labelled. It would seem that all of the systems which we brand with a nationalistic nomenclature are simply contrived from singular standards
which have been drawn from an older complete cosmology by the bureaucrats who sought to quantify, tax and regulate the societies into which it was fragmented. Were it not for the survival of the vara
into modern times then Thom's claims of a consistent unit would have remained forever enigmatic. For all of the evidence that Thom brings to bear on the subject, from his own statistical methods to
those of Broadbent, Kendall and Freeman, whom he cites, they remain virtually useless in the definite establishment of the Megalithic yard.26 For all the fine terminologies in which statistical methodologies are cloaked, they simply remain other methods of averaging. Averages are the enemy of precise definition, and in most cases are misleading; they have certainly obscured the
clarity of ancient metrology.
Although the few examples that have been included here are persuasively indicative of a regular and systematic usage of a single system across a wide geographic area, only around 30% or so of
Megalithic monuments (apparently) conform to the numerical scheme as outlined. So the argument for the Megalithic yard remains open, but a greater beast has been unleashed.
1 Stecchini, L.C. http://www.metrum.org/measures/whystud.htm
2 http://www.metrum.org/measures/romegfoot.htm
(Few works are published by Stecchini, but his memorial web site is the equivalent of several books)
3 Petrie, W.M.F. Encyclopaedia Britannica, Eleventh Edition, 1910, (Ancient Historical) Weights and Measures, p 481
4 Michell, J. 1981 Ancient Metrology, p 17
5 Stecchini, appendix to Tompkins, P. Secrets of the Great Pyramid, 1971, p 352
6 Neal, J. All Done With Mirrors, 2000, pp 69-75
7 Berriman, A.E. 1953 Historical Metrology, pp18-19
8 Petrie, W.M.F. 1883 The Pyramids and Temples of Gizeh, ch 20, para137
9 Neal, J. http://www.secretacademy.com/pages/greeksystem.htm
10 Stecchini, appendix to Tompkins, P. Secrets of the Great Pyramid, 1971, p 309
11 Vitruvius, On Architecture, Book 10 ch. 9 (Trans Frank Granger, 1934, from the Harleian manuscript).
12 Skinner, F. Weights and Measures, 1967, HMSO Science Museum, p 41
13 Stecchini, L. http://www.metrum.org/measures/length_u.htm para 7
14 Thom, A. and Thom, A.S. 1978, Megalithic Remains in Britain and Brittany, p 43
15 Rowlett, R. How Many? A Dictionary of Units of Measurement, (revised 2001) University of N. Carolina at Chapel Hill
16 Knorr, W. R. The geometer and the archaeoastronomers: on the prehistoric origins of mathematics. Review of: Geometry and algebra in ancient civilizations [Springer, Berlin, 1983; MR: 85b:01001] by
B. L. van der Waerden. British J. Hist. Sci. 18 (1985), no. 59, part 2, 197--212. SC: 01A10, MR: 87k:01003.
17 MacKie, E. 1977, The Megalith Builders, Phaidon Press
18 Petrie, W.M.F. 1877 Stonehenge: Plans, Description, and Theories. p 23
19 Michell, J. 1981 Ancient Metrology, p 20
20 Stone, E. Herbert. 1924 The Stones of Stonehenge, p 6
21 Skinner, F. Weights and Measures, 1967, HMSO Science Museum, p 35
22, 23, 24 Thom, A. and Burl, A. 1980, Megalithic Rings
25 Skinner, F. Weights and Measures, 1967, HMSO Science Museum, p 40
26 Thom, A. and Thom, A.S. 1978, Megalithic Remains in Britain and Brittany, p 4
The author, John Neal, has no qualifications whatever, either as a mathematician or as a writer. This may make it difficult for academicians either to read this book, or to accept the findings
herein. But were I a scholar, then I may have had a vested interest in not writing it at all. The content so contradicts the prevailing orthodoxy regarding our historical origins that it may have been career-threatening to reveal it from within the ranks. Having no career to threaten, I communicate this good news with enthusiasm.
This is written in my sixtieth year, and I have had nothing more than a pure interest in history and archaeology since childhood. Ancient buildings, monuments, standing stones and earthworks have
always exerted a solemn fascination upon me, and I was ever aware that they embodied something communicable within their mystery. This communication was finally established through an understanding of the units of measurement enshrined in their proportions; they speak in numbers. The architects speak to us across the ages through this forgotten language, as clearly as though their voices were heard in their antique tongue. Although the final solution proved to be utterly simple, the convoluted paths that had to be followed towards the resolution of the enigmas of metrology defy description. Countless hours of calculation and comparison over a period of some thirty years finally yielded a priceless pearl that becomes brighter the more it is contemplated and handled. If the
reader does not appreciate this, it is entirely due to the inexperience and the presentation of the author. I recommend that those who have difficulties with numbers resolve them and read on until
they do understand.
Solving Quadratic Equations by the Quadratic Formula
May 6th 2013, 11:25 PM #1
Junior Member
May 2013
Hey Guys,
I am a bit confused on what to do for this problem. If I can get some help I'd appreciate it.
Suppose that f(x) = x^4 + 7x^2+ 12. Find the values of x such that
a. f(x) = 12
b. f(x) = 6
I figured part b already but I am having trouble on part (a) which is
12 = x^4 + 7x^2+ 12
-12 -12
Now I'm left with x^4 + 7x^2
I use "Let u = x^2 , Let u^2 = x^4"
u^2 + 7u
Do I get a third term by completing the square method? I am lost with this step.
Last edited by Cake; May 6th 2013 at 11:27 PM.
Re: Solving Quadratic Equations by the Quadratic Formula
It is not necessary that a quadratic polynomial always have three terms. It may be a monomial, binomial, or trinomial.
Now you have u(u + 7) = 0
In the second case we will have
u^2 + 7u + 6 = 0
Re: Solving Quadratic Equations by the Quadratic Formula
What is the quadratic formula? What are the values of a, b, and c?
Re: Solving Quadratic Equations by the Quadratic Formula
Hey ibdutt,
You are correct
Honestly, I did factor the problem and ended up with u(u + 7) = 0, but I thought it looked wrong and I wasn't used to seeing a quadratic polynomial with two terms. Thanks for clearing things up
for me!
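Putting the thread's substitution together, a quick sketch (mine, not from the thread) solves both parts and shows which roots are real:

```python
import math

def real_x_from_u(u):
    """Real solutions of x^2 = u."""
    if u < 0:
        return []            # x^2 cannot be negative for real x
    if u == 0:
        return [0.0]
    r = math.sqrt(u)
    return [-r, r]

# Part (a): f(x) = 12  ->  x^4 + 7x^2 = 0  ->  u(u + 7) = 0, with u = x^2
roots_a = [x for u in (0, -7) for x in real_x_from_u(u)]

# Part (b): f(x) = 6  ->  u^2 + 7u + 6 = 0, by the quadratic formula
a, b, c = 1, 7, 6
disc = b * b - 4 * a * c                      # 25
u1 = (-b + math.sqrt(disc)) / (2 * a)         # -1
u2 = (-b - math.sqrt(disc)) / (2 * a)         # -6
roots_b = [x for u in (u1, u2) for x in real_x_from_u(u)]
```

Only x = 0 is real for part (a); part (b)'s u-values are both negative, so its solutions are complex.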
hi BarandaMan
Multiply by 5
Add 2z
Subtract 6
Divide by 2
I suspect you ran into difficulty because of this
Subtract 2
Divide by -2
when you multiply or divide by a negative amount you must reverse the inequality sign
If you want to explore this rule some more, post back.
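The sign-flip rule is easy to demonstrate with a made-up example; the inequality -2x < 6 below is my own stand-in, since the thread's original problem isn't shown:

```python
# A made-up inequality to illustrate the sign flip: -2x < 6.
# Dividing both sides by -2 must flip "<" to ">", giving x > -3.

def satisfies_original(x):
    return -2 * x < 6

def satisfies_flipped(x):
    return x > -3

# Brute-force check over a range of values: the two conditions agree
# everywhere, so flipping the sign was the correct move.
agree = all(satisfies_original(t / 10) == satisfies_flipped(t / 10)
            for t in range(-100, 101))
```

If the sign were not flipped (x < -3), the check would fail immediately, for instance at x = 0.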
hi bobbym, sorry didn't know you were posting too. | {"url":"http://www.mathisfunforum.com/post.php?tid=18411&qid=239892","timestamp":"2014-04-21T10:05:36Z","content_type":null,"content_length":"21319","record_id":"<urn:uuid:e0f2d15e-dab0-43a6-b94b-a674b3cc7949>","cc-path":"CC-MAIN-2014-15/segments/1397609539705.42/warc/CC-MAIN-20140416005219-00174-ip-10-147-4-33.ec2.internal.warc.gz"} |
How to Use the Law of Sines with a Triangle
When you already know two angles in a triangle as well as one of the sides, such as in the case of ASA or AAS, you can use the law of sines to find the measures of the other two sides. This law uses
the ratios of the sides of a triangle and their opposite angles. The bigger the side, the bigger its opposite angle. The longest side is always opposite the largest angle. Here’s how it goes.
The law of sines for triangle ABC with sides a, b, and c opposite those angles, respectively, says

a/sin(A) = b/sin(B) = c/sin(C)

So the law of sines says that in a single triangle, the ratio of each side to the sine of its opposite angle is equal to the ratio of any other side to the sine of its opposite angle.
For example, consider a triangle where side a is 86 inches long and angles A and B are 84 and 58 degrees, respectively. The following figure shows a picture of the triangle, and the following steps
show you how to find the missing three parts.
1. Find the measure of angle C.
The sum of the measures of a triangle’s angles is 180 degrees. So find the sum of angles A and B, and subtract that sum from 180.
180 – (84 + 58) = 180 – 142 = 38
Angle C measures 38 degrees.
2. Find the measure of side b.
Using the law of sines and the proportion b/sin(B) = a/sin(A), fill in the values that you know: b/sin(58°) = 86/sin(84°).
Use the given values, not those that you’ve determined yourself. That way, if you make an error, you can spot it easier later.
Use a calculator to determine the values of the sines (in this case, rounded to three decimal places): sin(58°) ≈ 0.848 and sin(84°) ≈ 0.995.
Multiply each side by the denominator under b to solve for that length: b = 86(0.848)/0.995 ≈ 73.3. Because the original measures are whole numbers, round this answer to the nearer whole number.
Side b measures about 73 inches.
3. Find the measure of side c.
Using the law of sines and the proportion c/sin(C) = a/sin(A), fill in the values that you know: c/sin(38°) = 86/sin(84°).
Again, it’s best to use the given values, not those that you determined. In this case, however, you have to use a computed value, the angle C.
Use a calculator to determine the values of the sines: sin(38°) ≈ 0.616 and sin(84°) ≈ 0.995.
Multiply each side by the denominator under c to solve for that length: c = 86(0.616)/0.995 ≈ 53.2. Because the original measures were given as whole numbers, round this answer to the nearer whole number.
Side c measures about 53 inches. | {"url":"http://www.dummies.com/how-to/content/how-to-use-the-law-of-sines-with-a-triangle.html","timestamp":"2014-04-19T06:18:56Z","content_type":null,"content_length":"54961","record_id":"<urn:uuid:1f9106ef-dcfe-457a-9757-71b2f1b2eab2>","cc-path":"CC-MAIN-2014-15/segments/1397609535775.35/warc/CC-MAIN-20140416005215-00488-ip-10-147-4-33.ec2.internal.warc.gz"} |
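The three steps can also be scripted; this sketch reproduces the rounded answers directly from the given values:

```python
import math

A, B = math.radians(84), math.radians(58)   # the two given angles
a = 86                                      # the given side, opposite A

C = math.pi - A - B                         # step 1: the angles sum to 180 degrees
b = a * math.sin(B) / math.sin(A)           # step 2: b/sin(B) = a/sin(A)
c = a * math.sin(C) / math.sin(A)           # step 3: c/sin(C) = a/sin(A)
```

Rounding gives C = 38 degrees, b ≈ 73 inches, and c ≈ 53 inches, matching the worked answers.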
ASTR 3730: Problem Set 1
(due Tuesday September 6th)
(1) The FERMI space telescope has recently detected flares from the Crab Nebula in
gamma-rays with an energy of around 100 MeV. Suppose that the quiescent flux
from the Crab in photons of this energy is 10^-10 erg cm^-2 s^-1 at Earth.
(a) What is the frequency and wavelength of 100 MeV photons?
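As a sketch of part (a), using E = hν and λ = c/ν with rounded constants (so the results are approximate):

```python
h = 6.626e-34    # Planck constant, J s
c = 2.998e8      # speed of light, m/s
eV = 1.602e-19   # joules per electron-volt

E = 100e6 * eV   # 100 MeV expressed in joules
nu = E / h       # frequency, roughly 2.4e22 Hz
lam = c / nu     # wavelength, roughly 1.2e-14 m
```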
(b) If the effective collecting area of FERMI for photons of 100 MeV energy is
, how many photons from the Crab does the telescope collect per day,
on average?
(2) Estimate the theoretical resolution of the human eye, assuming a pupil diameter
of 0.5 cm and a wavelength corresponding to that of green light (λ = 0.5 µm).
Express the answer in arcminutes.
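A sketch of problem (2), assuming the Rayleigh criterion θ ≈ 1.22λ/D for the diffraction limit:

```python
import math

lam = 0.5e-6          # green light, metres
D = 0.5e-2            # pupil diameter, metres

theta_rad = 1.22 * lam / D                   # about 1.2e-4 radians
theta_arcmin = math.degrees(theta_rad) * 60  # about 0.42 arcminutes
```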
(3) A globular cluster has 10^5 stars distributed within a sphere of characteristic radius
The Purplemath Forums
Hi Guys,
I'm having trouble working out some equations mostly dealing with exponential and logs.
the equations are:
y= 2x + 3^(-12x)
y= x^(-2x) - (e^2x)/5
I need to find both the first and second derivative. Any help and explanation would be much appreciated.
Thanks guys.
y= 2x + 3^(-12x) y= x^(-2x) - (e^2x)/5 I need to find both the first and second derivative. Where are you getting stuck? Please be complete. Thank you! :wink:

From my understanding the first one has to do with ln, so dy/dx = 2 - 12ln3(3^-12x). I'm not sure whether I multiply the -12ln3 with the (3^-1...

Ouch! They were supposed to teach you that stuff before assigning homework on it! :shock: To learn how to work with exponential and logarithmic functions and their derivatives, try here. :wink:

Thanks for helping. I'm doing a master of finance course with no background knowledge on calculus. The le...
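For the first function, the derivative proposed in the thread, dy/dx = 2 - 12·ln(3)·3^(-12x), can be checked against a numerical derivative (my sketch):

```python
import math

# f(x) = 2x + 3^(-12x); proposed derivative: f'(x) = 2 - 12*ln(3)*3^(-12x)
def f(x):
    return 2 * x + 3 ** (-12 * x)

def fprime(x):
    return 2 - 12 * math.log(3) * 3 ** (-12 * x)

# Centred finite difference as an independent check at a sample point.
x0, h = 0.1, 1e-6
numeric = (f(x0 + h) - f(x0 - h)) / (2 * h)
analytic = fprime(x0)
```

The two agree closely, so the -12·ln(3) factor multiplies the whole 3^(-12x) term. Differentiating again multiplies by another -12·ln(3), giving the second derivative 144·(ln 3)²·3^(-12x).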
Liman Corporation has a single product whose selling
This question has been answered by Expert on Jul 7, 2012.
Liman Corporation has a single product whose selling price is $140 and whose variable expense is $60 per unit. The company’s monthly fixed expense is $40,000.
Using the formula method, determine for the dollar sales that are required to earn a target profit of $8,000. (Do not round intermediate calculations. Omit the "$" sign in your response.)
Dollar sales $
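The formula method the question asks for can be sketched in a few lines (my own illustration of the standard CVP formula, not the graded solution):

```python
# Formula method: dollar sales to reach a target profit
#   = (fixed expenses + target profit) / contribution margin (CM) ratio
selling_price = 140.0
variable_expense = 60.0
fixed_expenses = 40_000.0
target_profit = 8_000.0

cm_ratio = (selling_price - variable_expense) / selling_price  # 80/140
dollar_sales = (fixed_expenses + target_profit) / cm_ratio     # 84,000
```

With these inputs the required dollar sales come to 84,000.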
Numbers Geekery
I was supposed to be reading A Storm of Swords (I have to catch my sister, and stay ahead of my wife…) but I got caught up in thinking some more about a geeky bit of number play (aka what I think maths
should be) that I started thinking about yesterday. So I thought I’d type it up and post it, because I think it’s neat. – Not that it necessarily has any point, or is original, or even especially
clever; it’s just a fun neat pattern, and that’s what I like about maths: patterns and fun.
On a related note, I would suggest reading A Mathematician’s Lament by Paul Lockhart.
Anyway, what did I notice? (I’m glad you asked … or bothered to keep reading, at any rate).
The first thing I noticed was that if I subtracted the sum of 1 and 3 (i.e. 4) from 13, I got 9. Then I thought about other "teens", and found the same thing; e.g. 18 – (1 + 8) = 9. I thought that
was pretty cool, and started wondering about other 2 digit numbers:
Pretty neat: if you take a two digit number, add the digits, and subtract that from the original number, you get a multiple of 9. Which multiple of 9 do you get? Why, 9 times however many tens you had in the original number. I like that. So, what about 3 digit numbers?
So … you still get multiples of 9, but no longer 9 times the number of tens. In fact, after some thought it became apparent that what you get is the highest multiple of 9 that is less than the lowest number of that decade. That is, for 100-109, the highest multiple of 9 that is less than 100 is 99, or 11 x 9; for 200-209 the highest multiple of 9 that is lower than 200 is 198, or 22 x 9; for 360-369 the highest is 351 (39 x 9), and so on.
Heh – and it didn’t take much at all to work out that last multiple of 9: I simply subtracted (3 + 6) from 360 – being very sure the result would be a multiple of nine.
Fail. Once I started looking at the higher 200s I found that my "highest multiple of 9" idea was wrong. Clearly wrong. Maybe I shouldn’t have been surprised, as it didn’t seem especially easy to
explain, nor elegant. Rather, what seems to be the rule is that the multiple of nine is found with the hundreds and tens columns: the hundreds becomes tens, and then adding the digits in the hundreds and tens columns gives the ones for the multiplier. Gawd that sounds messy. Example then: 345 – (3 + 4 + 5) = 333 = 37 x 9, where the 3 hundreds give the 30 and 3 + 4 = 7 gives the ones.
No easier to explain, perhaps, but it "feels" to me to be more elegant.
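The rule can be brute-forced in a few lines (my own check, not part of the original post); the loops raise no errors, so the pattern holds for every two- and three-digit number:

```python
def digit_sum(n):
    return sum(int(d) for d in str(n))

# Two-digit numbers: n minus its digit sum is 9 times the tens digit.
for n in range(10, 100):
    assert n - digit_sum(n) == 9 * (n // 10)

# Three-digit numbers: the multiplier is 11*hundreds + tens, i.e. the
# hundreds digit supplies the tens of the multiplier and hundreds + tens
# supplies its ones (with carrying when that sum exceeds 9).
for n in range(100, 1000):
    h, t = n // 100, (n // 10) % 10
    assert n - digit_sum(n) == 9 * (11 * h + t)
```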
So. Fun with numbers. I said earlier that it might not have any point – but in a way that’s the point: it’s just fun, and it doesn’t need to have any sort of application.
Posted in Cool stuff, Maths and tagged as cool, fun, geeky, maths, numbers | {"url":"http://www.tsuken.co.nz/numbers-geekery/","timestamp":"2014-04-18T13:09:13Z","content_type":null,"content_length":"20600","record_id":"<urn:uuid:eed6c913-f321-4261-89a4-56fda342a97e>","cc-path":"CC-MAIN-2014-15/segments/1397609533689.29/warc/CC-MAIN-20140416005213-00283-ip-10-147-4-33.ec2.internal.warc.gz"} |
Fibbo-something sequences
Does anybody understand the concept of analyzing a word problem and turning it into an equation?
Mr. Yazaki discovered that there were 225 dandelions in his garden on the first Saturday of spring. He had time to pull out 100, but by the next Saturday, there were twice as many as he had left.
Each Saturday in spring he removed 100 dandelions, only to find that the number of remaining dandelions had doubled the following Saturday.
So can somebody explain how to solve this?
4mase wrote:Does anybody understand the concept of analyzing a word problem and turning it into an equation?
In general, sure. But the techniques, etc, are covered in various courses, generally over a period of years.
4mase wrote:Mr. Yazaki discovered that there were 225 dandelions in his garden on the first Saturday of spring. He had time to pull out 100, but by the next Saturday, there were twice as many as
he had left. Each Saturday in spring he removed 100 dandelions, only to find that the number of remaining dandelions had doubled the following Saturday.
So can somebody explain how to solve this?
Um... So far, all I can see is a story. I see no equation to "solve", or even any instructions on what to do with this...?
Please reply with clarification. Thank you!
Re: Fibbo-something sequences
I think the "Fibbo-something" is "Fibonacci", like "fib a knot chee". I think you start with 1 and 1, and then you get your next numbers by adding the two before. Maybe you're supposed to do
something like that, and come up with some kind of formula for how many you have each Sunday.
Re: Fibbo-something sequences
The problem appears to be asking for the sequence of the number of dandelions each Saturday. The problem gives that on the first Saturday, the number of dandelions is 225. So
n[0] = 225
To get from any Saturday's number to the next Saturday's number, you subtract 100 and double the result. The equation for that is
n[k+1] = 2(n[k]-100)
The above is called a recursion equation, because it tells you how to get the next in the sequence from the last value or values. Crunching the numbers for the first few in the sequence, I get
n[1] = 250
n[2] = 300
n[3] = 400
The real question is, can you find a closed expression for n[k]? That is, if I tell you which member of the sequence I want to know the value of, is there a way, without calculating all the n's up to
that point, that you can compute for me the value of that member of the sequence. Look at the following table of expanding the calculation of the first few values, and you can see a pattern:
n[0] = 225
n[1] = 2^1(225) - 2^1(100)
n[2] = 2^2(225) - 2^2(100) - 2^1(100)
n[3] = 2^3(225) - 2^3(100) - 2^2(100) - 2^1(100)
In general, for the k'th value, you take 225 times 2^k and subtract the sum of all the powers of 2 from 2^k down to 2^1, times 100. The sum of the powers of 2 from 2^1 to 2^k is 2^(k+1) - 2. So the general formula
for the k'th value is
n[k] = 2^k(225) - 100(2^(k+1) - 2)
I think this is the kind of work the problem was asking for.
Challenge: Can you prove that the above formula is equivalent to
n[k] = 200 + 2^k(25) | {"url":"http://www.purplemath.com/learning/viewtopic.php?f=15&t=79&p=211","timestamp":"2014-04-16T04:53:59Z","content_type":null,"content_length":"24882","record_id":"<urn:uuid:495d4315-ecb7-4718-840c-c4cb38a4bdde>","cc-path":"CC-MAIN-2014-15/segments/1397609521512.15/warc/CC-MAIN-20140416005201-00097-ip-10-147-4-33.ec2.internal.warc.gz"} |
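For anyone who wants to check the algebra numerically, here's a quick Python sketch (the function names are mine) comparing the recursion with both closed forms:

```python
def n_recursive(k, n0=225):
    # n[k+1] = 2*(n[k] - 100), starting from n[0] = n0
    n = n0
    for _ in range(k):
        n = 2 * (n - 100)
    return n

def n_closed(k, n0=225):
    # n[k] = 2^k * n0 - 100*(2^(k+1) - 2)
    return 2**k * n0 - 100 * (2**(k + 1) - 2)

for k in range(12):
    assert n_recursive(k) == n_closed(k) == 200 + 25 * 2**k

print([n_recursive(k) for k in range(4)])  # [225, 250, 300, 400]
```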
Unit vectors in Spherical Coordinates
Ok, this is fairly trivial.
Assume that some vector [itex]\vec{u}[/itex] (dependent on some independent variables) has unit size irrespective of the values of the independent variables, i.e:
[tex]\vec{u}\cdot\vec{u}=1 \qquad (1)[/tex]
Then, labeling an independent variable as [itex]x_{i}[/itex], we get by differentiating (1) with respect to that variable:
[tex]2\frac{\partial\vec{u}}{\partial{x}_{i}}\cdot\vec{u}=0[/tex], i.e., the derivatives of the unit vector are orthogonal to it!
Thus, starting out with the radial vector,
[tex]\vec{i}_{r}=\sin\phi\cos\theta\vec{i}+\sin\phi\sin\theta\vec{j}+\cos\phi\vec{k}[/tex], we perform the two differentiations here:
[tex]\frac{\partial\vec{i}_{r}}{\partial\phi}=\cos\phi\cos\theta\vec{i}+\cos\phi\sin\theta\vec{j}-\sin\phi\vec{k}=\vec{i}_{\phi}[/tex]
[tex]\frac{\partial\vec{i}_{r}}{\partial\theta}=-\sin\phi\sin\theta\vec{i}+\sin\phi\cos\theta\vec{j}=\sin\phi\,\vec{i}_{\theta}[/tex]
where the appropriate forms of the unit vectors [itex]\vec{i}_{\phi},\vec{i}_{\theta}[/itex] have been indicated.
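Both claims are easy to verify numerically. The following plain-Python sketch (my own, with phi as the polar angle as above) checks the unit length and the orthogonality at an arbitrary pair of angles:

```python
import math

def i_r(phi, theta):
    # radial unit vector (phi = polar angle, theta = azimuth)
    return (math.sin(phi) * math.cos(theta),
            math.sin(phi) * math.sin(theta),
            math.cos(phi))

def d_i_r_dphi(phi, theta):
    # its partial derivative with respect to phi (this is i_phi)
    return (math.cos(phi) * math.cos(theta),
            math.cos(phi) * math.sin(theta),
            -math.sin(phi))

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

phi, theta = 0.7, 1.9  # arbitrary test angles
u, du = i_r(phi, theta), d_i_r_dphi(phi, theta)
assert abs(dot(u, u) - 1.0) < 1e-12   # unit length
assert abs(dot(u, du)) < 1e-12        # derivative orthogonal to u
```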
360 Assembly/360 Instructions/DR
From Wikibooks, open books for an open world
DR - Divide by Register - Opcode 1D
DR 2,7
The specific syntax is
DR target register, source register.
RR Instruction (2 bytes)
Byte 1: opcode (8 bits) - 1D
Byte 2: target register (4 bits, 0..F), source register (4 bits, 0..F)
• The first argument is the even-numbered register of the even-odd pair of target registers whose value is affected by the instruction.
• The second argument is the source value register.
The DR instruction is available on all models of the 360, 370 and z/System.
The DR instruction divides the dividend - a 64-bit signed value stored in the pair of registers T and T+1, where T is the target register number (T contains the most significant part and T+1 the
least significant part) - by the divisor, a 32-bit signed value in the source register. The target register number T shall be even. The instruction places the remainder in register T and the
quotient in register T+1, both as 32-bit signed values.
The divisor shall not be zero. The quotient shall fit into limits of 32-bit signed value (-2147483648 till 2147483647).
The DR instruction performs so-called T-division, in which the quotient is truncated toward zero; the remainder sign is equal to the dividend sign if both values are nonzero; in other words, (remainder *
dividend >= 0).
The Condition Code field in the Program Status Word is not changed.
Exceptions and Faults[edit]
• If an odd register number is specified as the target register, a specification exception occurs.
• If the divisor is zero, or if the quotient does not fit into a 32-bit signed value, a fixed-point divide exception occurs; this exception is not maskable.
Consider that register 4 contains 0, register 5 contains 13 and register 11 contains 4. The instruction "DR 4,11" will divide 13 by 4 and place the remainder (equal to 1) in register 4 and the quotient (equal to 3) in register 5.
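The arithmetic of T-division (though not the register mechanics) can be modelled in a few lines of Python. This sketch is my own, not IBM code; note that it has to avoid Python's floor-dividing `//` operator, which rounds toward minus infinity rather than toward zero:

```python
def dr(dividend, divisor):
    # T-division: quotient truncated toward zero,
    # remainder takes the sign of the dividend
    if divisor == 0:
        raise ZeroDivisionError("fixed-point divide exception")
    q = abs(dividend) // abs(divisor)
    if (dividend < 0) != (divisor < 0):
        q = -q
    if not -2**31 <= q < 2**31:
        raise OverflowError("quotient does not fit in 32 bits")
    r = dividend - q * divisor
    return q, r

assert dr(13, 4) == (3, 1)     # the example above
assert dr(-13, 4) == (-3, -1)  # remainder sign follows the dividend
assert dr(13, -4) == (-3, 1)
assert (-13) // 4 == -4        # Python's own floor division differs
```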
Related instructions[edit]
Previous Instruction Next Instruction
DP 360 Assembly Instructions DSG
Previous Opcode Next Opcode
1C 1E | {"url":"http://en.wikibooks.org/wiki/360_Assembly/360_Instructions/DR","timestamp":"2014-04-17T03:57:16Z","content_type":null,"content_length":"31354","record_id":"<urn:uuid:869779db-51e0-45ff-9d83-86ddaa6df4ab>","cc-path":"CC-MAIN-2014-15/segments/1397609526252.40/warc/CC-MAIN-20140416005206-00403-ip-10-147-4-33.ec2.internal.warc.gz"} |
st: Manipulation of the distribution
[Date Prev][Date Next][Thread Prev][Thread Next][Date Index][Thread Index]
st: Manipulation of the distribution
From <linnrenee@hotmail.com>
To <statalist@hsphsun2.harvard.edu>
Subject st: Manipulation of the distribution
Date Mon, 10 Nov 2008 12:10:43 +0000
Hi,

I have a dataset of hourly prices. Prices are identified by hour, weekday, week number, month and year. So within each day I have 24 hourly prices. What I’d like to do is to manipulate the prices by changing the distribution (changing the volatility of the prices following some assumptions on how the market will be changed). Let’s say I want to increase the daily price volatility by, for example, increasing the standard deviation within each day. We’ve figured out a way to do this systematically, using a group variable and directly changing the original standard deviation for each day:

egen group_var=group(day week)
capture drop new_no1
gen new_no1=no1
sum group_var
forvalues i=1(1)`r(max)' {
    sum no1 if group_var==`i'
    ret list
    local org_mean `r(mean)'
    local org_sd `r(sd)'
    capture drop std_no1
    gen std_no1=(no1-`org_mean')/`org_sd' if group_var==`i'
    sum std_no1 if group_var==`i'
    local new_mean `org_mean'
    local new_sd 1.2*`org_sd'
    replace new_no1=`new_sd'*std_no1 + `new_mean' if group_var==`i'
    sum new_no1 if group_var==`i'
}

However, using this method will only systematically “stretch” the distribution, so that the originally highest prices increase and the lowest prices decrease, making the original distribution of prices more extreme.

My trouble is to find a way to change the prices by randomly drawing a “price change” for each hourly price. I think what I need is to generate a variable, for example one that is normally distributed with a given mean and standard deviation. Then I want to draw randomly from this distribution for each price in my dataset and add this element to the original price in order to have a “new price”.

Well, I have some trouble understanding how to actually generate new variables with a given distribution, say a normally or binomially distributed variable (could really use some instruction on this). Second, I need to figure out how to tell Stata to draw randomly from this new variable distribution (one draw for each price observation) and then assign the element drawn to the new price (well, this last thing would be easy as soon as I have the draws). I’ve looked into for example the rnd command written by Joe Hilbe, and also some other commands like normal, drawnormal etc. (I’m running Stata 9), but I’m far from “fluent in Stata”, so I could need some help.

Thanks
Linn
* For searches and help try:
* http://www.stata.com/help.cgi?search
* http://www.stata.com/support/statalist/faq
* http://www.ats.ucla.edu/stat/stata/ | {"url":"http://www.stata.com/statalist/archive/2008-11/msg00340.html","timestamp":"2014-04-19T07:02:22Z","content_type":null,"content_length":"7721","record_id":"<urn:uuid:d2003342-a49b-497b-895e-032241f74b46>","cc-path":"CC-MAIN-2014-15/segments/1398223202548.14/warc/CC-MAIN-20140423032002-00475-ip-10-147-4-33.ec2.internal.warc.gz"} |
Why Net Present Value is the Best Measure for Investment Appraisal?
“Why is net present value (NPV) the best measure for investment appraisal?” This question is as good as asking “How is NPV better than other methods of investment appraisal?” There are
many methods for investment appraisal, such as the accounting (book) rate of return, payback period (PBP), internal rate of return (IRR), and profitability index (PI).
Before comparing NPV, let’s recapitulate the concept. The net present value method calculates the present value of the cash flows based on the opportunity cost of capital and derives the value
that will be added to the wealth of the shareholders if the project is undertaken.
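As a rough illustration (the figures below are made up, not from the article), the calculation is a one-liner in Python:

```python
def npv(rate, cash_flows):
    # cash_flows[0] is the outlay at t = 0; rate is the
    # opportunity cost of capital
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows))

project = [-1000, 400, 400, 400, 400]  # hypothetical project
print(round(npv(0.10, project), 2))    # 267.95
```

A positive result means the project adds that much value at the given discount rate; a negative one means it destroys value.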
Let us discuss each of these methods in comparison with net present value (NPV) to reach at the conclusion.
Net Present Value (NPV) vs. Payback Period (PBP)
Payback period calculates the period within which the initial investment of the project is recovered. The criterion for acceptance or rejection is just a benchmark decided by the firm, say 3 years. If
the PBP is less than or equal to 3 years, the firm will accept the project; otherwise, it will reject it. There are two major drawbacks with this technique:
1. It does not consider the cash flows after the PBP.
2. It ignores the time value of money.
The second drawback is partly addressed by an extended version of PBP, commonly called the Discounted Payback Period. The only difference it makes is that the cash flows used are discounted, but it still does not consider the cash flows after the PBP.
Net present value considers the time value of money and also takes care of all the cash flows till the end of the project's life.
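To make the first drawback concrete, here is a small hypothetical comparison (the interpolation convention in this sketch is one common choice, not the only one): two projects with identical paybacks but wildly different later cash flows.

```python
def payback_period(cash_flows):
    # years until the initial outlay is recovered from
    # undiscounted inflows (fractional year by interpolation)
    outstanding = -cash_flows[0]
    for year, cf in enumerate(cash_flows[1:], start=1):
        if cf >= outstanding:
            return year - 1 + outstanding / cf
        outstanding -= cf
    return None  # never recovered

a = [-300, 100, 100, 100, 0]     # made-up figures
b = [-300, 100, 100, 100, 1000]  # same payback, far better project
assert payback_period(a) == payback_period(b) == 3.0
```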
Net Present Value (NPV) vs. Internal Rate of Return (IRR)
Internal rate of return (IRR) calculates the rate of return offered by the project itself, irrespective of the required rate of return. It also has certain disadvantages, discussed below:
1. It does not account for economies of scale and ignores the dollar value of the project. It cannot differentiate between two projects with the same IRR but vastly different dollar returns. On the
other hand, NPV talks in absolute terms, and therefore this point is not missed.
2. IRR assumes discounting and reinvestment of cash flows at the same rate. If the IRR of a very good project is, say, 35%, it is practically impossible to reinvest money at this rate in the market.
NPV, on the other hand, assumes rates of borrowing and lending near the market rates, which is not impractical.
3. IRR runs into the problem of multiple IRRs when the cash flows change sign more than once; the NPV equation is then satisfied by more than one rate. Such a problem does not
exist with NPV.
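Point 3 is easy to demonstrate with a made-up cash flow stream that changes sign twice: both 10% and 20% set its NPV to zero, so the "internal rate of return" is ambiguous.

```python
def npv(rate, cash_flows):
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows))

flows = [-100, 230, -132]   # outflow, inflow, outflow (illustrative)
for rate in (0.10, 0.20):   # two different IRRs for the same project
    assert abs(npv(rate, flows)) < 1e-9
```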
Net Present Value (NPV) vs. Profitability Index (PI)
Profitability index is the ratio of the discounted cash inflows to the initial cash outflow. It presents a value that says how many times the investment is returned in the form of discounted cash flows.
The disadvantage associated with this method, again, is its relativity. Two projects can have the same profitability index with different investments and a vast
difference in absolute dollar return. NPV has the upper hand in this case.
We have noted that net present value survives almost all of these difficulties, and that is why it is considered the best way to analyze, evaluate, and select big investment projects. At the
same time, the estimation of cash flows requires care, because if the cash flow estimates are wrong, NPV is bound to be misleading.
A small problem with NPV is that it, too, uses the same discounting rate for both cash inflows and outflows. We know that there are differences between borrowing and lending rates. Modified
internal rate of return is another method, a little more complex but improved, which also takes care of the difference between borrowing and lending rates, as it discounts cash inflows at the lending
rate and cash outflows at the borrowing rate.
Wayland, MA Trigonometry Tutor
Find a Wayland, MA Trigonometry Tutor
...I strive to help students understand the core concepts and building blocks necessary to succeed not only in their current class but in the future as well. I am a second year graduate student
at MIT, and bilingual in French and English. I earned my high school diploma from a French high school, as well as a bachelor of science in Computer Science from West Point.
16 Subjects: including trigonometry, French, elementary math, algebra 1
...I am an extremely responsible, detail-oriented and very organized person. Using interesting samples to demonstrate a difficult concept is my strength. Math was always my best subject.
11 Subjects: including trigonometry, geometry, accounting, Chinese
...As a scholar of Latin and Greek along with several modern languages, I have developed a thorough understanding of the grammar and mechanics that make the English language work. Although the
writing portion of the SAT did not exist when I was in high school, I excelled on this section of the PSAT...
43 Subjects: including trigonometry, English, reading, ESL/ESOL
...Regardless of a student's current situation, there is a basic plan that I always follow to ensure success. If you have any questions, please don't hesitate to contact me. Most students can
achieve results in one or two sessions a week for 1-2 hours each week which can be usually be reduced as progress is made.
13 Subjects: including trigonometry, calculus, algebra 1, algebra 2
...All my students continued after the first lesson, through the end of the school year and showed significant improvement. I am a postdoctoral research fellow in biophysics at Harvard Medical
School and Boston Children's Hospital. I do research in protein mechanics and cell signaling.
27 Subjects: including trigonometry, reading, writing, physics
About the 4 potential
[tex]\partial^\mu[/tex] is a four vector. In order for the continuity equation to be a scalar, this requires [tex]j^\mu[/tex] to be a four vector.
In fact it is possible for [itex]\partial_{\mu} j^{\mu} = 0[/itex] to be true in all frames without [itex]j^{\mu}[/itex] being a four vector. For example, there could be a 2nd-rank tensor [itex]T^{\mu\nu}[/itex] that satisfies [itex]\partial_{\mu} T^{\mu\nu}=0[/itex]. If you then define [itex]j^{\mu}:=T^{\mu 0}[/itex], we have a counterexample.
Anyway, the current density [itex]j^{\mu}[/itex] can be assumed to be a four vector for other reasons too.
Since [tex]\partial^\mu[/tex] is a four vector, the d'Alembertian
[tex]\partial^\mu\partial_\mu[/tex] is a scalar.
The wave equation then shows that [tex]A^\mu[/tex] is a four vector in the Lorentz gauge.
This conclusion was logically valid, but I'll restate it for clarity:
"If we assume that the equation of motion is Lorentz invariant, then it follows that the four potential must transform as a four vector."
Alternatively we could draw the conclusion in the other direction:
"We postulate that the four potential transforms as a four vector, and therefore the equation of motion is Lorentz invariant."
Whatever you like. IMO the transformation properties of different fields are something that should be postulated, and the invariance of the equations are something that should be proven. | {"url":"http://www.physicsforums.com/showthread.php?t=206391","timestamp":"2014-04-20T05:48:53Z","content_type":null,"content_length":"67120","record_id":"<urn:uuid:f07e30a3-ada3-4717-9dee-897d31b13c40>","cc-path":"CC-MAIN-2014-15/segments/1397609538022.19/warc/CC-MAIN-20140416005218-00239-ip-10-147-4-33.ec2.internal.warc.gz"} |
sequence multiplied by -1
On 25 Sep, 09:22, Yingjie Lan <(E-Mail Removed)> wrote:
> Hi,
> I noticed that in python3k, multiplying a sequence by a negative integer is the same as multiplying it by 0, and the result is an empty sequence. It seems to me that there is a more meaningful
> Simply put, a sequence multiplied by -1 can give a reversed sequence.
> Then for any sequence "seq", and integer n>0, we can have
> "seq * -n" producing "(seq * -1) * n".
> Any thoughts?
> Yingjie
If [1, 2]*-1 is correct, then, arguably, so should be -[1, 2]
Some answers have invoked mathematics to weigh the value of this
proposal, e.g. likening lists to vectors. But the obvious
mathematical analogy is that the set of all lists forms a monoid under
the operation of concatenation, which (unfortunately?) is performed
with the "+" operator in Python. So it is natural that "*" represents
repeated concatenation.
Now under concatenation, non-empty lists do not have an inverse, i.e.
for any non-empty list l, there does not exist a list l' such that l +
l' == []. So there is no natural interpretation of -l and therefore
of l*-1.
However, by using "+" for list (and string) concatenation, Python
already breaks the mathematical pledge of commutativity that this
operator implies. | {"url":"http://www.velocityreviews.com/forums/t734095-sequence-multiplied-by-1-a.html","timestamp":"2014-04-23T22:53:48Z","content_type":null,"content_length":"35729","record_id":"<urn:uuid:f895cb76-931a-4699-a8e8-072f7581b00a>","cc-path":"CC-MAIN-2014-15/segments/1398223203841.5/warc/CC-MAIN-20140423032003-00152-ip-10-147-4-33.ec2.internal.warc.gz"} |
Stephen W. Hawking - My Life in Physics
I did my first degree in Oxford. In my final examination, I was asked about my future plans. I replied, if you give me a first class degree, I will go to Cambridge. If I only get a second, I will
stay in Oxford. They gave me a first. I arrived in Cambridge as a graduate student in October 1962. I had applied to work with Fred Hoyle, the principal defender of the steady state theory and the
most famous British astronomer of the time. I say astronomer because cosmology was at that time, hardly recognized as a legitimate field, yet that was where I wanted to do my research, inspired by
having been on a summer course with Hoyle's student, Jayant Narlikar. However, Hoyle had enough students already, so to my great disappointment, I was assigned to Dennis Sharma, of whom I had not
heard. But it was probably for the best. Hoyle was away a lot, seldom in the department, and I wouldn't have had much of his attention. Sharma, on the other hand, was usually around and ready to
talk. I didn't agree with many of his ideas, particularly on Mach's principle, but that stimulated me to develop my own picture.
When I began research, the two areas that seemed exciting were cosmology and elementary particle physics. Elementary particles was the active, rapidly changing field that attracted most of the best
minds, while cosmology and general relativity were stuck where they had been in the 1930s. Feynman has given an amusing account of attending the conference on general relativity and gravitation in
Warsaw in 1962. In a letter to his wife, he said, “I am not getting anything out of the meeting. I am learning nothing. Because there are no experiments, this field is not an active one, so few of
the best men are doing work in it. The result is that there are hosts of dopes here (126) and it is not good for my blood pressure. Remind me not to come to any more gravity conferences!”
Of course, I wasn't aware of all this when I began my research. But I felt that elementary particles at that time was too like botany. Quantum electrodynamics, the theory of light and electrons
that governs chemistry and the structure of atoms, had been worked out completely in the 40s and 50s. Attention had now shifted to the weak and strong nuclear forces between particles in the nucleus
of an atom, but similar field theories didn't seem to work. Indeed, the Cambridge school, in particular, held that there was no underlying field theory. Instead, everything would be determined by
unitarity, that is, probability conservation, and certain characteristic patterns in the scattering. With hindsight, it now seems amazing that it was thought this approach would work, but I remember
the scorn that was poured on the first attempts at unified field theories of the weak nuclear forces. Yet it is these field theories that are remembered and the analytic S matrix work is forgotten.
I'm very glad I didn't start my research in elementary particles. None of my work from that period would have survived.
Cosmology and gravitation, on the other hand, were neglected fields that were ripe for development at that time. Unlike elementary particles, there was a well defined theory, the general theory of
relativity, but this was thought to be impossibly difficult. People were so pleased to find any solution of the field equations, they didn't ask what physical significance, if any, it had. This was
the old school of general relativity that Feynman encountered in Warsaw. But the Warsaw conference also marked the beginning of the renaissance of general relativity, though Feynman could be forgiven
for not recognizing it at the time.
A new generation entered the field and new centers of general relativity appeared. Two of these were of particular importance to me. One was in Hamburg under Pascual Jordan. I never visited it, but I
admired their elegant papers which were such a contrast to the previous messy work on general relativity. The other center was at Kings College, London, under Hermann Bondi, another proponent of the
steady state theory but not ideologically committed to it, like Hoyle.
I hadn't done much mathematics at school or in the very easy physics course at Oxford, so Sciama suggested I work on astrophysics. But having been cheated out of working with Hoyle, I wasn't going to
do something boring like Faraday rotation. I had come to Cambridge to do cosmology, and cosmology I was determined to do. So I read old textbooks on general relativity and traveled up to lectures at
Kings College, London each week with three other students of Sciama. I followed the words and equations, but I didn't really get a feel for the subject. Also, I had been diagnosed with motor neurone
disease, or ALS, and given to expect I didn't have long enough to finish my PhD. Then suddenly, towards the end of my second year of research, things picked up. My disease wasn't progressing much and
my work all fell into place, and I began to get somewhere.
Sciama was very keen on Mach's principle, the idea that objects owe their inertia to the influence of all the other matter in the universe. He tried to get me to work on this, but I felt his
formulations of Mach's principle were not well defined. However, he introduced me to something a bit similar with regard to light, the so-called Wheeler-Feynman electrodynamics. This said that
electricity and magnetism were time symmetric. However, when one switched on a lamp, it was the influence of all the other matter in the universe that caused light waves to travel outward from the
lamp, rather than come in from infinity and end on the lamp. For Wheeler Feynman electro dynamics to work, it was necessary that all the light traveling out from the lamp should be absorbed by other
matter in the universe. This would happen in a steady state universe in which the density of matter would remain constant, but not in a big bang universe where the density would go down as the
universe expanded. It was claimed that this was another proof, if proof were needed, that we live in a steady state universe. There was a conference on Wheeler Feynman electro dynamics and the arrow
of time in Cornell in 1963. Feynman was so disgusted by the nonsense that was talked about the arrow of time that he refused to let his name appear in the proceedings. He was referred to as Mr. X,
but everyone knew who X was.
I found that Hoyle and Narlikar had already worked out Wheeler Feynman electro dynamics in expanding universes and had then gone on to formulate a time symmetric new theory of gravity. Hoyle unveiled
the theory at a meeting of the royal society in 1964. I was at the lecture, and in the question period, I said that the influence of all the matter in a steady state universe would make his masses
infinite. Hoyle asked why I said that, and I replied that I had calculated it. Everyone thought I had done it in my head during the lecture, but in fact, I was sharing an office with Narlikar and had
seen a draft of the paper. Hoyle was furious. He was trying to set up his own institute, and threatening to join the brain drain to America if he didn't get the money. He thought I had been put up to
it, to sabotage his plans. However, he got his institute and later gave me a job, so he didn't harbor a grudge against me.
The big question in cosmology in the early 60s, was did the universe have a beginning? Many scientists were instinctively opposed to the idea, because they felt that a point of creation would be a
place where science broke down. One would have to appeal to religion and the hand of God to determine how the universe would start off. Two alternative scenarios were therefore put forward. One was
the steady state theory, in which as the universe expanded, new matter was continually created to keep the density constant on average. The steady state theory was never on a very strong theoretical
basis because it required a negative energy field to create the matter. This would have made it unstable, to run away production of matter and negative energy. But it had the great merit as a
scientific theory of making definite predictions that could be tested by observations. By 1963, the steady state theory was already in trouble. Martin Ryle's radio astronomy group at the Cavendish
did a survey of faint radio sources. They found the sources were distributed fairly uniformly across the sky. This indicated that they were probably outside our galaxy because otherwise, they would
be concentrated along the Milky Way. But the graph of the number of sources against source strength did not agree with the prediction of the steady state theory. There were too many faint sources
indicating that the density of sources was higher in the distant past. Hoyle and his supporters put forward increasingly contrived explanations of the observations, but the final nail in the coffin
of the steady state theory came in 1965 with the discovery of a faint background of microwave radiation. This could not be accounted for in the steady state theory, though Hoyle and Narlikar tried
desperately. It was just as well I hadn't been a student of Hoyle, because I would have had to have defended the steady state.
The microwave background indicated that the universe had had a hot dense stage in the past. But it didn't prove that was the beginning of the universe. One might imagine that the universe had had a
previous contracting phase, and that it had bounced from contraction to expansion at a high, but finite density. This was clearly a fundamental question, and it was just what I needed to complete my
PhD thesis.
Gravity pulls matter together, but rotation throws it apart. So my first question was, could rotation cause the universe to bounce? Together with George Ellis, I was able to show that the answer was
no, if the universe was spatially homogeneous, that is, if it was the same at each point of space. However, two Russians, Lifshitz and Khalatnikov, had claimed to have proved that a general
contraction without exact symmetry would always lead to a bounce, with the density remaining finite. This result was very convenient for Marxist Leninist dialectical materialism, because it avoided
awkward questions about the creation of the universe. It therefore became an article of faith for Soviet scientists.
Lifshitz and Khalatnikov were members of the old school in general relativity. That is, they wrote down a massive system of equations and tried to guess a solution. But it wasn't clear that the
solution they found was the most general one. However, Roger Penrose introduced a new approach which didn't require solving the field equations explicitly, just certain general properties such as
that energy is positive and gravity is attractive. Penrose gave a seminar in Kings College, London, in January 1965. I wasn't at the seminar, but I heard about it from Brandon Carter, with whom I
shared an office in the then new DAMTP premises in Silver Street. At first, I couldn't understand what the point was. Penrose had showed that once a dying star had contracted to a certain radius,
there would inevitably be a singularity, a point where space and time came to an end. Surely, I thought, we already knew that nothing could prevent a massive cold star collapsing under its own
gravity until it reached a singularity of infinite density. But in fact, the equations had been solved, only for the collapse of a perfectly spherical star. Of course, a real star won't be exactly
spherical. If Lifshitz and Kalatnikov were right, the departures from spherical symmetry would grow as the star collapsed and would cause different parts of the star to miss each other and avoid a
singularity of infinite density. But Penrose showed they were wrong. Small departures from spherical symmetry will not prevent a singularity.
I realized that similar arguments could be applied to the expansion of the universe. In this case, I could prove there were singularities where spacetime had a beginning. So again, Lifshitz and
Khalatnikov were wrong. General relativity predicted that the universe should have a beginning, a result that did not pass unnoticed by the Church.
The original singularity theorems of both Penrose and myself required the assumption that the universe had a Cauchy surface, that is, a surface that intersects every timelike curve once, and only
once. It was therefore possible that our first singularity theorems just proved that the universe didn't have a Cauchy surface. While interesting, this didn't compare in importance with time having a
beginning or end. I therefore set about proving singularity theorems that didn't require the assumption of a Cauchy surface. In the next five years, Roger Penrose, Bob Geroch and I developed the
theory of causal structure in general relativity. It was a glorious feeling, having a whole field virtually to ourselves. How unlike particle physics, where people were falling over themselves to
latch onto the latest idea. They still are.
Up to 1970, my main interest was in the big bang singularity of cosmology, rather than the singularities that Penrose had shown would occur in collapsing stars. However, in 1967, Werner Israel
produced an important result. He showed that unless the remnant from a collapsing star was exactly spherical, the singularity it contained would be naked, that is, it would be visible to outside
observers. This would have meant that the breakdown of general relativity at the singularity of a collapsing star would destroy our ability to predict the future of the rest of the universe.
At first, most people, including Israel himself, thought that this implied that because real stars aren't spherical, their collapse would give rise to naked singularities and a breakdown of
predictability. However, a different interpretation was put forward by Roger Penrose and John Wheeler. It was that there is Cosmic Censorship. This says that Nature is a prude and hides singularities
in black holes where they can't be seen. I used to have a bumper sticker, black holes are out of sight, on the door of my office in DAMTP. This so irritated the head of department, that he engineered
my election to the Lucasian professorship, moved me to a better office on the strength of it, and personally tore off the offending notice from the old office.
My work on black holes began with a Eureka moment in 1970, a few days after the birth of my daughter, Lucy. While getting into bed, I realized that I could apply to black holes, the causal structure
theory I had developed for singularity theorems. In particular, the area of the horizon, the boundary of the black hole, would always increase. When two black holes collide and merge, the area of the
final black hole is greater than the sum of the areas of the original holes. This and other properties that Jim Bardeen, Brandon Carter and I discovered, suggested that the area was like the entropy
of a black hole. This would be a measure of how many states a black hole could have on the inside, for the same appearance on the outside. But the area couldn't actually be the entropy, because as
everyone knew, black holes were completely black and couldn't be in equilibrium with thermal radiation.
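As a minimal numerical sketch of the area statement above (my illustration, not part of the lecture): for Schwarzschild black holes in units with G = c = 1, the horizon area is A = 16πM², so a merger that conserves mass strictly increases the total area. (A real merger radiates energy, so the final mass is smaller; the theorem only guarantees the area does not decrease.)

```python
import math

def horizon_area(mass):
    # Schwarzschild horizon area in units G = c = 1: A = 16 * pi * M^2
    return 16.0 * math.pi * mass ** 2

m1, m2 = 1.0, 2.0
# (m1 + m2)^2 > m1^2 + m2^2 for positive masses, so the merged area is larger.
print(horizon_area(m1 + m2) > horizon_area(m1) + horizon_area(m2))  # True
```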
There was an exciting period culminating in the Les Houches summer school in 1972, in which we solved most of the major problems in black hole theory. This was before there was any observational
evidence for black holes, which shows Feynman was wrong when he said an active field has to be experimentally driven. Just as well for M theory. The one problem that was never solved was to prove the
Cosmic Censorship hypothesis, though a number of attempts to disprove it, failed. It is fundamental to all work on black holes, so I have a strong vested interest in it being true. I therefore have a
bet with Kip Thorne and John Preskill. It is difficult for me to win this bet, but quite possible to lose, by finding a counter example with a naked singularity. In fact, I have already lost an
earlier version of the bet by not being careful enough about the wording. They were not amused by the t-shirt I offered in settlement.
We were so successful with the classical general theory of relativity that I was at a bit of a loose end in 1973 after the publication with George Ellis, of The Large Scale Structure of Spacetime. My
work with Penrose had shown that general relativity broke down at singularities. So the obvious next step would be to combine general relativity, the theory of the very large, with quantum theory,
the theory of the very small. I had no background in quantum theory, and the singularity problem seemed too difficult for a frontal assault at that time. So as a warm up exercise, I considered how
particles and fields governed by quantum theory would behave near a black hole. In particular, I wondered, can one have atoms in which the nucleus is a tiny primordial black hole, formed in the early universe?
To answer this, I studied how quantum fields would scatter off a black hole. I was expecting that part of an incident wave would be absorbed, and the remainder, scattered. But to my great surprise, I
found there seemed to be emission from the black hole. At first, I thought this must be a mistake in my calculation. But what persuaded me that it was real, was that the emission was exactly what was
required to identify the area of the horizon with the entropy of a black hole. I would like this simple formula to be on my tombstone.
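For reference, the simple formula meant here is the Bekenstein–Hawking entropy (written out as my aside; the lecture itself does not display it):

$$S = \frac{k\,c^{3}A}{4\,G\,\hbar}$$

where $A$ is the horizon area, $k$ is Boltzmann's constant, $G$ is Newton's constant, and $\hbar$ is the reduced Planck constant.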
Work with Jim Hartle, Gary Gibbons, and Malcolm Perry uncovered the deep reason for this formula. General relativity can be combined with quantum theory in an elegant manner, if one replaces ordinary
time by imaginary time. I have tried to explain imaginary time on other occasions with varying degrees of success. I think it is the name, imaginary, that makes it so confusing. It is easier if you
accept the positivist view that a theory is just a mathematical model. In this case, the mathematical model has a minus sign whenever time appears twice. The Euclidean approach to quantum gravity,
based on imaginary time, was pioneered in Cambridge. It met a lot of resistance, but is now generally accepted.
The radiation from a black hole will carry away energy, so the black hole will lose mass and shrink. Eventually, it seems the black hole will evaporate completely and disappear. This raised a
problem that struck at the heart of physics. My calculation showed that the radiation was exactly thermal and random, as it has to be, if the area of the horizon is to be the entropy of the black
hole. So how could the radiation left over carry all the information about what made the black hole? But if information is lost, this is incompatible with quantum mechanics. This paradox had been
argued for thirty years without much progress, until I found what I think is its resolution. Information is not lost, but it is not returned in a useful way. It is like burning an encyclopedia.
Information is not lost, but it is very hard to read. In fact, Kip Thorne and I had a bet with John Preskill on the information paradox. I gave John a baseball encyclopedia. Maybe I should have just
given him the ashes.
Between 1970 and 1980, I worked mainly on black holes and the Euclidean approach to quantum gravity. But the suggestion that the early universe had gone through a period of inflationary expansion
renewed my interest in cosmology. Euclidean methods were the obvious way to describe fluctuations and phase transitions in an inflationary universe. We held a Nuffield workshop in Cambridge in 1982,
attended by all the major players in the field. At this meeting, we established most of our present picture of inflation, including the all important density fluctuations which give rise to galaxy
formation and so to our existence. This was ten years before fluctuations in the microwave were observed, so again in gravity, theory was ahead of experiment.
The scenario for inflation in 1982 was that the universe began with a big bang singularity. As the universe expanded, it was supposed somehow to get into an inflationary state. I thought this was
unsatisfactory because all equations would break down at a singularity. But unless one knew what came out of the initial singularity, one could not calculate how the universe would develop. Cosmology
would not have any predictive power.
After the workshop in Cambridge, I spent the summer at the Institute of Theoretical Physics, Santa Barbara, which had just been set up. We stayed in student houses and I drove in to the institute in
a rented electric wheel chair. I remember my younger son, Tim aged three, watching the Sun set on the mountains, and saying, it's a big country.
While in Santa Barbara, I talked to Jim Hartle about how to apply the Euclidean approach to cosmology. According to DeWitt and others, the universe should be described by a wave function that obeyed
the Wheeler–DeWitt equation. But what picked out the particular solution of the equation that represents our universe? According to the Euclidean approach, the wave function of the universe is given
by a Feynman sum over a certain class of histories in imaginary time. Because imaginary time behaves like another direction in space, histories in imaginary time can be closed surfaces, like the
surface of the Earth, with no beginning or end. Jim and I decided that this was the most natural choice of class, indeed the only natural choice. We had side stepped the scientific and philosophical
difficulty of time beginning by turning it into a direction in space.
Most people in theoretical physics have been trained in particle physics rather than general relativity. They have therefore been more interested in calculations of what they observe in particle
accelerators than in questions about the beginning and end of time. The feeling was that if they could find a theory that in principle, allowed them to calculate particle scattering to arbitrary
accuracy, everything else would somehow follow. In 1985, it was claimed that string theory was this ultimate theory. But in the years that followed, it emerged that the situation was more complicated
and more interesting. It seems that there's a network called M theory. All the theories in the M theory network can be regarded as approximations to the same underlying theory, in different limits.
None of the theories allow calculation of scattering to arbitrary accuracy, and none can be regarded as the fundamental theory of which others are reflections. Instead, they should all be regarded as
effective theories, valid in different limits. String theorists have long used the term, effective theory, as a pejorative description of general relativity. However, string theory is equally an
effective theory, valid in the limit that the M theory membrane is rolled into a cylinder of small radius. Saying that string theory is only an effective theory isn't very popular, but it's true.
Because they had the dream of a theory that would allow calculation of scattering to arbitrary accuracy, people rejected quantum general relativity and supergravity on the grounds that they were non
renormalizable. This means that one needs undetermined subtractions at each order to get finite answers. In fact, it is not surprising that naïve perturbation theory breaks down in quantum gravity.
One can not regard a black hole as a perturbation of flat space.
I have done some work recently on making supergravity renormalizable, by adding higher derivative terms to the action. This apparently introduces ghosts, states with negative probability. However, I
have found this is an illusion. One can never prepare a system in a state of negative probability. But the presence of ghosts means that one can not predict with arbitrary accuracy. If one can accept
that, one can live quite happily with ghosts.
This approach to higher derivatives and ghosts allows one to revive the original inflation model of Starobinski and other Russians. In this, the inflationary expansion of the universe is driven by
the quantum effects of a large number of matter fields. Based on the no boundary proposal, I picture the origin of the universe as like the formation of bubbles of steam in boiling water. Quantum
fluctuations lead to the spontaneous creation of tiny universes, out of nothing. Most of the universes collapse to nothing, but a few that reach a critical size will expand in an inflationary manner
and will form galaxies and stars, and maybe beings like us.
It has been a glorious time to be alive and doing research in theoretical physics. Our picture of the universe has changed a great deal in the last 40 years, and I'm happy if I have made a small
contribution. I want to share my excitement and enthusiasm. There's nothing like the Eureka moment of discovering something that no one knew before. I won't compare it to sex, but it lasts longer. | {"url":"http://www.kalvisolai.com/2010/12/stephen-w-hawking-my-life-in-physics.html","timestamp":"2014-04-20T08:14:07Z","content_type":null,"content_length":"387077","record_id":"<urn:uuid:a16741f7-ae45-4db8-9bc9-1c352125615c>","cc-path":"CC-MAIN-2014-15/segments/1397609538110.1/warc/CC-MAIN-20140416005218-00299-ip-10-147-4-33.ec2.internal.warc.gz"} |
Formula for the Area of a Circle
Date: 12/17/98 at 23:37:57
From: Kismet
Subject: How do you get the area of a circle?
I haven't figured any of it out, but I want to know how to do it.
Please help.
Date: 12/18/98 at 12:03:44
From: Doctor Peterson
Subject: Re: How do you get the area of a circle?
Hi, Kismet. I'm not sure whether you're asking for the formula for the
area of a circle, or for an explanation of how it works. I'll give you both.
The formula is very simple:
A = pi * r^2
which means the area is Pi (3.14159...) times the square of the radius.
In a book it would look more like this:
         __  2
   A  =  || r
To use this formula, just measure the radius of the circle (which is
half the diameter), square it (multiply it by itself), and then
multiply the result by 3.14.
There's an interesting way to see why this is true, which may help you
remember it. (Though the easiest way to remember the formula is the
old joke: "Why do they say "pie are square" when pies are round?")
Picture a circle as a slice of lemon with lots of sections (I'll only
show 6 sections, but you should imagine as many as possible):
       *     *
    *  \     /  *
   *    \   /    *
  *      \ /      *
  *      / \      *
   *    /   \    *
    *  /     \  *
       *     *
Now cut it along a radius and unroll it:
   /\      /\      /\      /\      /\      /\
  /  \    /  \    /  \    /  \    /  \    /  \
 /    \  /    \  /    \  /    \  /    \  /    \
/      \/      \/      \/      \/      \/      \
All those sections (technically called sectors of the circle) are close
enough to triangles (if you make enough of them) that we can use the
triangle formula to figure out their area; all together they are
A = 1/2 b * h = 1/2 C * r
since the total base length is C, the circumference of the circle, and
the height of all the triangles is r, the radius (if the triangles are
thin enough). You should know that the circumference is pi times the
diameter, or
C = 2 * pi * r
(this is actually the definition of pi), so the area is just
A = 1/2 (2 * pi * r) * r = pi * r^2
In other words, the area of a circle is just the area of a triangle
whose base is the circumference of the circle, and whose height is
the radius of the circle.
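A quick numerical check of the two formulas above (a hypothetical Python sketch, not part of the original answer): computing the area directly as pi * r^2 and as the triangle 1/2 * C * r gives the same number.

```python
import math

def circle_area(r):
    # Direct formula: A = pi * r^2
    return math.pi * r ** 2

def circle_area_as_triangle(r):
    # Triangle with base C = 2*pi*r (the circumference) and height r
    circumference = 2 * math.pi * r
    return 0.5 * circumference * r

r = 3.0
print(circle_area(r))              # about 28.274
print(circle_area_as_triangle(r))  # same value, up to rounding
```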
What I've just done gets pretty close to algebra, which you haven't
learned yet, but if you think about it (and maybe try actually
measuring some real circles, or even make some lemonade) you should
be able to see what I mean.
You probably didn't know that the area of a circle is the same as the
area of a triangle!
- Doctor Peterson, The Math Forum | {"url":"http://mathforum.org/library/drmath/view/57660.html","timestamp":"2014-04-18T03:04:48Z","content_type":null,"content_length":"7770","record_id":"<urn:uuid:492c0f6a-7f98-456c-a566-c614856d0dea>","cc-path":"CC-MAIN-2014-15/segments/1397609532480.36/warc/CC-MAIN-20140416005212-00129-ip-10-147-4-33.ec2.internal.warc.gz"} |
FOM: The meaning of truth
Matt Insall montez at rollanet.org
Mon Nov 6 11:36:30 EST 2000
Professor Silver:
Of course, the above "proof" is faulty. But, in establishing
that there's at least one sentence, G ("This sentence is unprovable"),
that is true but unprovable, this same model is alluded to. The
model is singled out in order to establish what it is that the
sentence G is true of. I am imagining that Kanovei objects
to this reference to the "standard model" (as being similar
to referring to unicorns), yet this reference is needed to
establish the truth of G.
I would like Kanovei to address whether the above loose argument
characterizes one aspect of his objection to the notion of Truth, as I
would like to better understand his view. I would also be interested
in refutations of this loosely stated argument, in order to better
understand precisely why it fails--supposing that it does.
Matt Insall:
If I understand your question correctly, you would like to eliminate
reference to
the standard model in Gödel's proof of the first incompleteness
theorem. However, I
seem to recall having seen this done already. Here's how I understand the
Someone please tell me if I am wrong: You are correct that some arguments
for the
incompleteness theorems appeal to the existence of a model, and the
proof of the existence of a Gödel sentence G is a diagonal argument that
does not give
much information about the specific types of sentences that one can
substitute for G.
As I understand it, the non-constructive version just produces an instance
of a modified
version of the Liars' Paradox. The proof that I think satisfies your query
shows the following:
(S) If PA is satisfiable, then PA does not deduce Con(PA).
The point of this which is relevant to professor Kanovei's argument, I
guess, is that one need
not actually commit to the existence of a model of PA in order to prove the
above statement.
The argument proceeds by taking the existence of a model of PA as an
hypothesis, and showing
that Con(PA) does not follow from the axioms of PA in standard FOL. Of
course, he has thrown
cold water on this type of argument by his appeal to the parallel between
hypothetical statements
and what he seems to think is meaningless gibberish because of its
reference to unicorns. In fact,
rather than allow professor Kanovei to trouble himself to bring up what he
considers to be the
``meaninglessness'' of statement (S), I will draw the parallel for him, by
writing what I think would be
part of his response: ``Statement (S) is similar to the following:
(U) If the theory of unicorns is satisfiable, then the theory of unicorns
does not deduce its own consistency.''
Of course, the truth value I assign (S) is the value true, because we have,
as mathematicians, developed
a terminology in which statement (S) refers to certain facts. Since I am
not completely sure what is the
status of the theory (or theories) of unicorns, I do not know what truth
value should be assigned to statement
(U). In this sense, statement (S) is ``more meaningful'' to me than is
statement (U).
Matt Insall
More information about the FOM mailing list | {"url":"http://www.cs.nyu.edu/pipermail/fom/2000-November/004545.html","timestamp":"2014-04-19T17:07:10Z","content_type":null,"content_length":"5438","record_id":"<urn:uuid:3cd87fa2-86e2-4cdd-bb03-ac1df992e59b>","cc-path":"CC-MAIN-2014-15/segments/1397609537308.32/warc/CC-MAIN-20140416005217-00455-ip-10-147-4-33.ec2.internal.warc.gz"} |
Cauchy Sequences
November 12th 2009, 04:55 PM #1
Cauchy Sequences
1. Can anyone provide an example of a sequence that satisfies the following definition, but is not Cauchy:
for every $\epsilon>0$ there exists a natural number $N$ such that for every $n\ge N$, $|x_n-x_{n+1}|<\epsilon$
November 12th 2009, 05:35 PM #2
How about the sequence of partial sums $H_n=\sum_{k=1}^{n}\frac{1}{k}$. It's not Cauchy since it's divergent, but $H_{n+1}-H_n=\frac{1}{n+1}$, which can be made as small as desired.
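A quick numerical illustration of the answer above (a hypothetical Python sketch): consecutive partial sums of the harmonic series get arbitrarily close, yet $H_{2n}-H_n$ stays above $1/2$ for every $n$, so the sequence is not Cauchy.

```python
def H(n):
    # n-th partial sum of the harmonic series
    return sum(1.0 / k for k in range(1, n + 1))

n = 1000
print(H(n + 1) - H(n))   # 1/(n+1): tiny, and -> 0 as n grows
print(H(2 * n) - H(n))   # stays above 0.5 for every n (tends to ln 2)
```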
1911 Encyclopædia Britannica/Infinite
1911 Encyclopædia Britannica, Volume 14
INFINITE (from Lat. in, not, finis, end or limit; cf. findere, to cleave), a term applied in common usage to anything of vast size. Strictly, however, the epithet implies the absence of all
limitation. As such it is used specially in (1) theology and metaphysics, (2) mathematics.
1. Tracing the history of the world to the earliest date for which there is any kind of evidence, we are faced with the problem that for everything there is a prior something: the mind is unable to
conceive an absolute beginning (“ex nihilo nihil”). Mundane distances become trivial when compared with the distance from the earth of the sun and still more of other heavenly bodies: hence we infer
infinite space. Similarly by continual subdivision we reach the idea of the infinitely small. For these inferences there is indeed no actual physical evidence: infinity is a mental concept. As such
the term has played an important part in the philosophical and theological speculation. In early Greek philosophy the attempt to arrive at a physical explanation of existence led the Ionian thinkers
to postulate various primal elements (e.g. water, fire, air) or simply the infinite τὸ ἄπειρον (see Ionian School). Both Plato and Aristotle devoted much thought to the discussion as to which is most
truly real, the finite objects of sense, or the universal idea of each thing laid up in the mind of God; what is the nature of that unity which lies behind the multiplicity and difference of
perceived objects? The same problem, variously expressed, has engaged the attention of philosophers throughout the ages. In Christian theology God is conceived as infinite in power, knowledge and
goodness, uncreated and immortal: in some Oriental systems the end of man is absorption into the infinite, his perfection the breaking down of his human limitations. The metaphysical and theological
conception is open to the agnostic objection that the finite mind of man is by hypothesis unable to cognize or apprehend not only an infinite object, but even the very conception of infinity itself;
from this standpoint the infinite is regarded as merely a postulate, as it were an unknown quantity (cf. √−1 in mathematics). The same difficulty may be expressed in another way if we regard the
infinite as unconditioned (cf. Sir William Hamilton’s “philosophy of the unconditioned,” and Herbert Spencer’s doctrine of the infinite “unknowable”); if it is argued that knowledge of a thing arises
only from the recognition of its differences from other things (i.e. from its limitations), it follows that knowledge of the infinite is impossible, for the infinite is by hypothesis unrelated.
With this conception of the infinite as absolutely unconditioned should be compared what may be described roughly as lesser infinities which can be philosophically conceived and mathematically
demonstrated. Thus a point, which is by definition infinitely small, is as compared with a line a unit: the line is infinite, made up of an infinite number of points, any pair of which have an
infinite number of points between them. The line itself, again, in relation to the plane is a unit, while the plane is infinite, i.e. made up of an infinite number of lines; hence the plane is
described as doubly infinite in relation to the point, and a solid as trebly infinite. This is Spinoza’s theory of the “infinitely infinite,” the limiting notion of infinity being of a numerical,
quantitative series, each term of which is a qualitative determination itself quantitatively little, e.g. a line which is quantitatively unlimited (i.e. in length) is qualitatively limited when
regarded as an infinitely small unit of a plane. A similar relation exists in thought between the various grades of species and genera; the highest genus is the “infinitely infinite,” each
subordinated genus being infinite in relation to the particulars which it denotes, and finite when regarded as a unit in a higher genus.
2. In mathematics, the term “infinite” denotes the result of increasing a variable without limit; similarly, the term “infinitesimal,” meaning indefinitely small, denotes the result of diminishing
the value of a variable without limit, with the reservation that it never becomes actually zero. The application of these conceptions distinguishes ancient from modern mathematics. Analytical
investigations revealed the existence of series or sequences which had no limit to the number of terms, as for example the fraction 1/(1−x), which on division gives the series 1 + x + x² + …; the
discussion of these so-called infinite sequences is given in the articles Series and Function. The doctrine of geometrical continuity (q.v.) and the application of algebra to geometry, developed in
the 16th and 17th centuries mainly by Kepler and Descartes, led to the discovery of many properties which gave to the notion of infinity, as a localized space conception, a predominant importance. A
line became continuous, returning into itself by way of infinity; two parallel lines intersect in a point at infinity; all circles pass through two fixed points at infinity (the circular points); two
spheres intersect in a fixed circle at infinity; an asymptote became a tangent at infinity; the foci of a conic became the intersections of the tangents from the circular points at infinity; the
centre of a conic the pole of the line at infinity, &c. In analytical geometry the line at infinity plays an important part in trilinear co-ordinates. These subjects are treated in Geometry. A notion
related to that of infinitesimals is presented in the Greek “method of exhaustion”; the more perfect conception, however, only dates from the 17th century, when it led to the infinitesimal calculus.
A curve came to be treated as a sequence of infinitesimal straight lines; a tangent as the extension of an infinitesimal chord; a surface or area as a sequence of infinitesimally narrow strips, and a
solid as a collection of infinitesimally small cubes (see Infinitesimal Calculus). | {"url":"http://en.wikisource.org/wiki/1911_Encyclop%C3%A6dia_Britannica/Infinite","timestamp":"2014-04-20T15:18:23Z","content_type":null,"content_length":"30554","record_id":"<urn:uuid:2f6e1992-459e-44cd-bc14-c2f4bc898d71>","cc-path":"CC-MAIN-2014-15/segments/1398223203235.2/warc/CC-MAIN-20140423032003-00127-ip-10-147-4-33.ec2.internal.warc.gz"} |
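As a modern numerical aside on the series mentioned in the article (a hypothetical Python sketch, not part of the 1911 text): for |x| < 1 the partial sums of 1 + x + x² + … approach 1/(1 − x).

```python
def partial_sum(x, n):
    # First n terms of the geometric series 1 + x + x^2 + ...
    return sum(x ** k for k in range(n))

x = 0.5
print(partial_sum(x, 50))  # approaches 1/(1 - x) = 2.0
```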
Car Buying-Compound interest
March 29th 2011, 04:14 PM
Car Buying-Compound interest
I'm checking answer for my sister. But since I no longer have the background in how to appropriately use the financial formulas, I post it here for help.
Problem: A car that costs $10,000. I want to buy this car.
-I have $2,500 to deposit right now toward a down payment. I can get a 6.5% interest rate if I deposit this $2,500 into a CD. This is a one time deposit of $2,500 compounded monthly and it is
locked in for 2 years.
-When I purchase this car in 2 years, I want to make $100 monthly payments for 3 years, and I project this loan will be at 4.2% interest.
Question: What do I need to deposit monthly into an annuity for the next 24 months to obtain the rest of the down payment? I can earn 5% interest compounded monthly with this annuity.
My first attempt I got $149.97, but my sister's answer is different ($145.85).
I got confused after the $100 monthly payments part; and switched between different equations for PV and FV, thus different answers. There's a play of words here and it confuses me. I don't know
if the rest of the down payment after 2-year-CD will have the 4.2% interest or the $10,000.
March 29th 2011, 05:07 PM
Here is my other attempt at this:
- After the CD, I have $2846.07
So when I purchase the car, I have $10,000-$2846.07 = $7,153.93 left to pay off.
-According to the wording of the problem, I assume I will be depositing $100 monthly for 3 years.
So for 3 years I will make the total payment of: $100 x 12 months x 3 years = $3,600
-But the (assumed annual) year interest for the $7,153.93 is at 4.2%
So after 3 years it will be: $7153.93(1.042)^3= $8093.71
Subtract it from the total payment, I have $8093.71 - 3600 = $4493.71 left to pay off.
-So now how much I need to deposit monthly into an annuity for the next 24 months to obtain the $4493.73? I can earn 5% interest compounded monthly with this annuity.
-I'm not sure what it means by "I can earn 5% interest compounded monthly with the annuity"
-Hit the wall.
March 29th 2011, 05:55 PM
Okay, seem like I incorrectly assume the $3,600 after 3 years as payment toward the $7,153.93
-So I make a monthly payment of $100 for 3 years to pay off the $7,153.93. And this amount also have a 4.2% interest rate.
-After much research, the amount left after 3 years of interest and my monthly payment of $100 is called the balloon payment. I need an equation to relate the rate, PMT, and FV.
March 29th 2011, 06:41 PM
Means you're paid .05/12 monthly.
Monthly deposit required is 178.4223...
Formula: F * i / [(1 + i)^n - 1]
d = monthly deposit (?)
F = future value (4493.73)
i = interest (.05/12)
n = number of deposits (24)
d = 4493.73 * .05/12 / [(1 + .05/12)^24 - 1] = 178.4223....
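The calculation above can be checked with a short script (a hypothetical Python sketch of the same sinking-fund formula):

```python
def monthly_deposit(future_value, annual_rate, months):
    # Sinking-fund formula: d = F * i / ((1 + i)^n - 1), with i the monthly rate
    i = annual_rate / 12.0
    return future_value * i / ((1 + i) ** months - 1)

print(round(monthly_deposit(4493.73, 0.05, 24), 2))  # 178.42
```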
March 29th 2011, 06:53 PM
Okay, seem like I incorrectly assume the $3,600 after 3 years as payment toward the $7,153.93
-So I make a monthly payment of $100 for 3 years to pay off the $7,153.93. And this amount also have a 4.2% interest rate.
-After much research, the amount left after 3 years of interest and my monthly payment of $100 is called the balloon payment. I need an equation to relate the rate, PMT, and FV.
IF YOU GOT THAT RIGHT (!) then balloon payment will be 4283.28.
That's from (the FV of $7153.93) minus (the FV of 36 monthly payments of $100) | {"url":"http://mathhelpforum.com/business-math/176233-car-buying-compound-interest-print.html","timestamp":"2014-04-19T13:31:02Z","content_type":null,"content_length":"8511","record_id":"<urn:uuid:935eb95f-b539-4623-86ec-384c05bc6776>","cc-path":"CC-MAIN-2014-15/segments/1397609537186.46/warc/CC-MAIN-20140416005217-00601-ip-10-147-4-33.ec2.internal.warc.gz"} |
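That balloon figure can be reproduced numerically (a hypothetical Python sketch of the FV-minus-FV computation described above):

```python
def fv_lump(pv, annual_rate, months):
    # Future value of a single amount compounded monthly
    i = annual_rate / 12.0
    return pv * (1 + i) ** months

def fv_annuity(pmt, annual_rate, months):
    # Future value of a stream of end-of-month payments
    i = annual_rate / 12.0
    return pmt * ((1 + i) ** months - 1) / i

balloon = fv_lump(7153.93, 0.042, 36) - fv_annuity(100, 0.042, 36)
print(round(balloon, 2))  # about 4283.28, up to rounding
```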
What is the number field analogue of the Narasimhan-Seshadri theorem?
In his famous 1940 letter from prison in Rouen to his sister Simone, André Weil talks about the analogy between number fields and functions fields (in one variable) over finite fields, and the
analogy between these functions fields and functions fields over $\mathbf{C}$ (or equivalently compact connected curves over $\mathbf{C}$). This letter is reproduced in his Scientific papers and has
been recently translated into English (Notices of the AMS 52(3) 2005).
Question What is the number field analogue of the Narasimhan-Seshadri theorem (Stable and unitary vector bundles on a compact Riemann surface. Ann. of Math. (2) 82 1965 540–567) ?
Addendum (in response to Felipe's comment) The original paper of Narasimhan and Seshadri is available on JSTOR. An excerpt from their introduction : D. Mumford has defined the notion of a stable
vector bundle on a compact Riemann surface $X$ and proved that the set of equivalence classes of stable bundles (of fixed rank and degree) has a natural structure of a non-singular, quasi-projective,
algebraic variety [13]. We prove in this paper that, if $X$ has genus $\ge2$, the stable vector bundles are precisely the holomorphic vector bundles on $X$ which arise from certain irreducible
unitary representations of suitably defined fuchsian groups acting on the unit disc and having $X$ as quotient (Theorem 2, $\S12$). [...] A particular case of our result is that a holomorphic vector
bundle of degree zero on $X$ is stable if and only if it arises from an irreducible unitary representation of the fundamental group of $X$. As a consequence one sees that a holomorphic vector bundle
on $X$ arises from a unitary representation of the fundamental group of $X$ if and only if each of its indecomposable components is of degree zero and stable.
Their main result is summarised by Atiyah (MR0170350) and Le Potier (Séminaire Bourbaki Exposé 737) as follows :
Atiyah: Let $X$ be a compact Riemann surface. If $W$ is a (holomorphic) vector bundle of rank $n$ over $X$ we define $d(W)$ to be the degree of the associated line bundle $\bigwedge^n W$. A bundle
$W$ is stable, in the sense of Mumford, if $(\mathrm{rank}W)d(V)<(\mathrm{rank}V)d(W)$ for all proper sub-bundles $V$ of $W$. According to Mumford [Proc. Internat. Congr. Mathematicians (Stockholm,
1962), pp. 526--530, Inst. Mittag-Leffler, Djursholm, 1963], the set of isomorphism classes of stable bundles of rank $n$ and degree $q$ over $X$ has a natural structure of an algebraic variety. In
this paper the authors give a complete characterization of stable bundles in terms of unitary representations of a certain discrete group (provided genus $X \ge 2$).

Their main theorem runs as follows. Given integers $n$ and $q$, with $-n < q \le 0$, we can choose (i) a discrete group $\pi$ acting on a simply connected Riemann surface $Y$ with $Y/\pi=X$ and with the map $p:Y\to X$ being ramified over only one point $x_0\in X$; (ii) a representation $\tau:\pi_{y_0}\to\mathrm{GL}(n,\mathbf{C})$ of the isotropy group of $\pi$ at a point $y_0\in p^{-1}(x_0)$ by scalars such that the following holds. A vector bundle over $X$ of rank $n$ and degree $q$ is stable if and only if the corresponding sheaf is isomorphic to a sheaf of the form $p_*^\pi(\mathbf{V})$, where $\mathbf{V}$ denotes the $\pi$-sheaf of holomorphic mappings $Y\to V$, $V$ is an irreducible unitary representation of $\pi$ coinciding with $\tau$ when restricted to $\pi_{y_0}$, $p_*$ is the direct image functor and $p_*^\pi$ denotes the subsheaf invariant under $\pi$. Moreover, two such stable bundles are isomorphic on $X$ if and only if the corresponding unitary representations of $\pi$ are equivalent.
It should be observed that the inequality $-n< q \le0$ presents no essential restriction since it can always be realized by tensoring with a line bundle $L$ and, on the other hand, the definition of
stable bundle shows that $W$ is stable if and only if $W\otimes L$ is stable.
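For readers less used to this notation, Mumford's inequality above is just the usual slope condition (a standard reformulation, not part of the quoted texts):

```latex
\mu(V) \;=\; \frac{d(V)}{\operatorname{rank} V},
\qquad
W \text{ is stable} \iff \mu(V) < \mu(W)
\ \text{ for every subbundle } 0 \neq V \subsetneq W .
```

In particular, for a degree-zero bundle $W$ stability says that every proper subbundle has negative degree, which matches the degree-zero statement in the excerpt above.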
Le Potier: In 1965, Narasimhan and Seshadri established a bijective correspondence between the set of equivalence classes of irreducible unitary representations of the fundamental group $\pi$ of a compact Riemann surface $X$, and the set of isomorphism classes of stable vector bundles of degree $0$ on $X$: to a representation $\rho:\pi\to\mathbf{U}(r)$ they associate the holomorphic vector bundle $E_\rho$ defined by $$ E_\rho=\tilde X\times_\pi\mathbf{C}^r $$ where $\tilde X$ is the universal cover of $X$, and where the product above is the quotient of $\tilde X\times\mathbf{C}^r$ by the action of $\pi$ defined by $(\gamma,(x,v))\mapsto(x\gamma^{-1},\gamma v)$ for $\gamma\in\pi$ and $(x,v)\in \tilde X\times\mathbf{C}^r$.
1 I think you should state their theorem here in addition to giving a link. – Felipe Voloch May 7 '12 at 12:54
2 Toy model (i.e. GL_1 and "infinitesimal") for Narasimhan-Seshadri theorem is Hodge theory. As far as I understand this step is a major problem to generalize from C to function fields and moreover
to alg.numbers. I guess if this would be clear, then proper exponentiating and non-abelianization would be not so difficult as the first step... See also: mathoverflow.net/questions/90928/…
mathoverflow.net/questions/85943/… – Alexander Chervov May 7 '12 at 15:41
4 Let us look on the "abelian" NS-theorem: the moduli space of line bundles (which we know to be the Jacobian I.e. torus of real dimension 2g) is identified as topological manifold (not as
algebraic) with unitary 1-dimensional irreps of the fundamental group (which is also a 2g-dim torus, obviously). Is there an analogue for this in the alg.num setup? It smells like class field theory... –
Alexander Chervov May 7 '12 at 19:25
1 Let us go a little further analysing "abelian" version. Consider "infinitesimal" versions (i.e. tangent spaces on the both sides). Consider "holomorphic side". Moduli space of line bundles is H^1
(O^*), so the tangent space is H^1(O), by Serre's duality it is H^0(K) - holomorphic differentials. Consider the "unitary side": the moduli space of unitary 1-dimensional representations of \pi_1.
It is the same as H^1(\pi_1, S^1), so the tangent space is H^1(\pi_1, R), the same as H^1(X,R), where X is our Riemann surface. Continued... – Alexander Chervov May 8 '12 at 7:34
3 You might look at the articles of Deninger and Werner: wwwmath.uni-muenster.de/u/deninger/about/publikat/cd49.ps – Jason Starr May 8 '12 at 12:52
1 Answer
The theorem of Narasimhan and Seshadri is a special case of what Carlos Simpson calls nonabelian Hodge theory, developed by Hitchin and Simpson. This theory was generalized to the characteristic
$p$ case in the paper of Ogus and Vologodsky, Nonabelian Hodge Theory in Characteristic p. Hope this helps.
Update: Below, on Alexander's request, is a brief explanation of relation between nonabelian Hodge theory and NS theorem. Consider vector bundles with vanishing 1st and 2nd Chern classes (I
will call this condition ($\star$)), then the story in higher dimensions is exactly the same as for complex curves. The detailed explanation is in the pages 12-19 of Simpson's 1992 paper
[S1992]. From there, it follows that flat unitary connections correspond exactly to vanishing Higgs fields (subject to ($\star$)). Briefly, every semistable Higgs bundle $E=(V, \bar\partial,
\theta)$ has a hermitian YM metric $K$. Define the connection $D_K$ (as in [S1992], page 13); then $D_K$ is flat (subject to ($\star$); page 17 of [S1992]).

If the Higgs field $\theta$ vanishes, then $D_K=\partial_K+ \bar\partial$ and hence, by definition of $\partial_K$, the connection $D_K$ preserves the metric $K$. Thus, our bundle reduces to a flat unitary bundle. Conversely, if the bundle is flat unitary (with unitary metric denoted $K$) then the associated (multivalued) map $\Phi_K$ defined on page 16 of [S1992] is constant, so it has zero derivative. But its derivative is $0=d\Phi_K=\theta+ \bar\theta$ (here $\theta$ is the Higgs field determined by $K$). Since $\theta$ and $\bar\theta$ have different types, the only way we can have $\theta+ \bar\theta=0$ is that $\theta=0$.
G. Faltings "A p-adic Simpson correspondence" imperium.lenin.ru/~kaledin/math/falt.pdf , see also mathoverflow.net/questions/21100/… , see also Deninger-Werner e.g. arxiv.org/abs/math/
0403516 (as Jason Starr commented above). Small survey of surrounding works: sfb45.de/graduate-school/Mainz.pdf – Alexander Chervov May 9 '12 at 5:57
I am not great expert, but "NS is SPECIAL CASE of Simpson correspondence" seems to me too strong claim, although they are certainly related... Simposon's correspondence is for manifolds of
arbitrary dimensions and takes irreps of pi_1 to GL to pairs vector bunle "F" + Higgs Field "Phi". While in NS you have 1-dimensional manifolds and instead of GL you have U(n) and there is
NO Higgs field Phi. However the for 1-dimensional manifolds NS and Simposon (which was done by Hitchin before Simpson's N-dim case) are COMPATIBLE ... continued ... – Alexander Chervov May
9 '12 at 6:04
Continued ... In the sense that if we take Simposon's map with zero Phi we will get U(n) irrep of \pi_1. (If I remember correctly), however it seems to me it will require additional work
to prove this COMPATIBILITY starting from Hitchin-Simpson... – Alexander Chervov May 9 '12 at 6:07
@Alexander: Nonabelian Hodge theory (in the context of Riemann surfaces) contains Narasimhan-Seshadri theorem as a special case: $\Phi=0$ iff the representation of the fundamental group
has pre-compact image, in the case of interest, conjugate to a unitary representation, which is exactly NS theorem. You are right, however, in the sense that this was not done in the
characteristic $p$ case. – Misha May 9 '12 at 13:54
@Misha yes, it is the same that I wrote in the last comment, however it seems to this will require some addtional work, comparing with Simpson correspondence, am I wrong ? By the way what
will happen in higher dims if $ \Phi =0$ ? – Alexander Chervov May 9 '12 at 19:57
Next: Classifier Systems and the Up: A Local Learning Algorithm Previous: A Local Learning Algorithm
Various algorithms for supervised learning in recurrent non-equilibrium networks with non-stationary inputs and outputs have been proposed [Robinson and Fallside, 1987] [Williams and Zipser, 1988] [
Pearlmutter, 1988] [Gherrity, 1989] [Rohwer, 1989]. Apart from the fact that these algorithms require explicit teaching signals for the output units, there is a second reason which makes them
biologically implausible: They depend on global computations.
What are the differences between local and global computations in the context of neural networks? We would like to make the distinction between two kinds of local computations in systems consisting
of a large number of connected units:
`Local in space' is meant to say that changes of a unit's weight vector should depend solely on activation information from the unit itself and from connected units. The update complexity for a
unit's weight vector at a given time should be only proportional to the dimensionality of the weight vector. This implies that for a completely recurrent network the weight update complexity at a
given time is of order n^2 for a network with n units (each of the n units carries a weight vector of dimension n).
`Local in time' is meant to say that weight changes should take place continually, and that changes should depend only on information about units and weights from a fixed recent time interval. This
contrasts to weight changes that take place only after externally defined episode boundaries, which require additional a priori knowledge and in some cases high peaks of computation time. The
expression `local in time' corresponds to the notion of `on-line' learning.
As far as we can judge today, biological systems use completely local computations to accomplish complex spatio-temporal credit assignment tasks. However, the local learning rules proposed so far
(like Hebb's rule) make sense only if there are no `hidden units'.
In this paper (which is based on [Schmidhuber, 1989]) we want to demonstrate that local credit assignment with `hidden units' is no contradiction by itself, by giving a constructive example: We
propose a method local in both space and time which is designed to deal with `hidden units' and with units whose past activations are `hidden in time'.
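To make the two locality notions concrete, here is a schematic sketch (my own illustration, not the algorithm proposed in the paper): a fully recurrent network in which every weight is updated at every time step (local in time) using only the activations of its own pre- and post-synaptic units (local in space), for a total of order n^2 work per step.

```python
import numpy as np

def local_update(W, x_prev, x_now, eta=0.1):
    # Hebbian-style rule: W[i, j] changes using only unit i's current
    # activation and unit j's previous activation -- local in space.
    return W + eta * np.outer(x_now, x_prev)

n = 5
rng = np.random.default_rng(0)
W = rng.standard_normal((n, n)) * 0.1
x = rng.standard_normal(n)
for _ in range(3):                  # updates happen at every step: local in time
    x_next = np.tanh(W @ x)
    W = local_update(W, x, x_next)  # O(n) per unit, O(n^2) for the whole net
    x = x_next
```

Such a rule is local but, as noted above, plain Hebbian learning of this kind cannot by itself assign credit to hidden units - that is exactly the gap the proposed method addresses.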
Juergen Schmidhuber 2003-02-21
Lincoln Park, NJ Math Tutor
Find a Lincoln Park, NJ Math Tutor
...I also have experience tutoring Pre-algebra, Algebra 1 and 2, and Geometry. When tutoring, I usually go over the students' homework, quizzes and tests, and bring extra problems for them to
work on in the areas where they seem to struggle most. I also can tutor conversational French.I have a str...
7 Subjects: including algebra 1, algebra 2, geometry, prealgebra
...I teach at my home OR go to the students' place, whichever is preferable. I am a math tutor.I make my students confident in their subject. My qualifications are a high school diploma and 103
credits of electronics and communication engineering.
8 Subjects: including algebra 2, geometry, prealgebra, trigonometry
...I first began to tutor at an after school center. There I was able to work with students who had challenging learning needs. I greatly enjoyed my time there because I was able to work with
students who were not typically given personal academic attention.
27 Subjects: including algebra 1, elementary (k-6th), grammar, writing
...I took the MCAT and received a score of 40 (14PS-14VR-12B), which placed me in approximately the 99.75 percentile of all test takers. I took the test July 27th 2013. I have been accepted to
medical school and am matriculating in August, though I currently tutor full time.
24 Subjects: including algebra 1, algebra 2, ACT Math, chemistry
...I just love to see my students enjoy the experience of learning. I was born in Guangzhou, China. I was educated in Guangzhou, China, before I came to the US. I am fluent (native proficiency)
in Mandarin and Cantonese Chinese.
6 Subjects: including algebra 2, prealgebra, chemistry, physics
Evesham Twp, NJ Geometry Tutor
Find an Evesham Twp, NJ Geometry Tutor
...I find knowing why the math is important goes a long way towards helping students retain information. After all, math IS fun!In the past 5 years, I have taught differential equations at a
local university. I hold degrees in economics and business and an MBA.
13 Subjects: including geometry, calculus, statistics, algebra 1
...The problem was, he couldn't remember how to put it back together. (That feat took me 3 days!) My first IT job was solving issues for people who couldn't print, or the keyboard wasn't
responding, or the computer froze, etc. A few years of troubleshooting in person evolved into troubleshooting...
26 Subjects: including geometry, reading, algebra 1, algebra 2
...As for my credentials, I've received the EducationWorks Service Award for one year of service in the Supplemental Educational Services (SES) Tutoring Program. I specialize in tutoring
elementary math, geometry, and algebra for success in school. I tutored a student who switched from a public sc...
5 Subjects: including geometry, reading, algebra 1, elementary math
I am a career changer and 12 years ago I went into education because I felt that I had something to give to students that they would find useful. I've spent the last 10 years teaching middle
grade students in Math and Science. My students were very successful on the state PSSA tests by increasing ...
6 Subjects: including geometry, algebra 1, American history, elementary math
...Student success determined how many sessions were needed, and student feedback was an integral part of the program. An important part of the development and implementation of the program was
not only ensuring that students' needs were met, but that relationships were built between the students a...
19 Subjects: including geometry, reading, algebra 1, algebra 2
ch 6 vocab
CMS Winter 2007 Meeting
We will discuss the structure of hyperelliptic Hodge integrals (integrals of λ and ψ classes on the hyperelliptic locus of curves). We start by giving a purely combinatorial proof of Faber-Pandharipande's well-known formula for λ[g] λ[g-1], and proceed from there to describe several nice combinatorial features and formulas of more general classes of such integrals.
We propose the graph description of Teichmüller theory of surfaces with marked points on boundary components (bordered surfaces). Introducing new parameters, we formulate this theory in terms of
hyperbolic geometry. We can then describe both classical and quantum theories having the proper number of Thurston variables (foliation-shear coordinates), mapping-class group invariance (both
classical and quantum), Poisson and quantum algebra of geodesic functions, and classical and quantum braid-group relations. These new algebras can be defined on the double of the corresponding
graph related (in a novel way) to a double of the Riemann surface (which is a Riemann surface with holes, not a smooth Riemann surface). We enlarge the mapping class group allowing
transformations relating different Teichmüller spaces of bordered surfaces of the same genus, same number of boundary components, and same total number of marked points but with arbitrary
distributions of marked points among the boundary components. We describe the classical and quantum algebras and braid group relations for particular sets of geodesic functions corresponding to A
[n] and D[n] algebras.
A matrix is called totally positive if all of its minors are positive. Recently, such minors have been expressed in terms of parameters associated with a corresponding bidiagonal factorization,
and consequently, these minors can be written as subtraction-free expressions in these parameters. This new representation has been recast in a number of different settings, and it continues to
arise as a useful tool involving totally positive matrices.
In this talk, I will review this combinatorial representation and discuss some of the interesting resulting applications, including determinantal inequalities and an implication to Jordan
canonical forms.
We discuss combinatorial properties of Coxeter polytopes in terms of their missing faces. This leads to a set of simplicial complexes which is useful for distinguishing combinatorial types of simple
polytopes. We describe properties of this set and state some conjectures.
This is a report on an ongoing joint project with Michael Shapiro and Dylan Thurston devoted to the study of cluster-algebraic structures in decorated Teichmuller spaces of bordered Riemann
surfaces with marked points.
Let Q be a quiver without oriented cycles, n the positive part of the symmetric Kac-Moody algebra of type Q and Λ = C Q̄ / ⟨ ∑_{a ∈ Q} [a, ā] ⟩ the corresponding preprojective algebra, where Q̄ denotes the double quiver of Q. Let M = ⊕_{i=1}^r M[i] be a terminal representation of Q, i.e., the summands of M are a family of indecomposable preinjective representations, closed under successors. Then the full subcategory
C[M] = {X ∈ Λ-mod[0] with X |[Q] ∈ Add(M)}
is a Frobenius category and its stable category is 2-Calabi-Yau. Moreover it has a canonical cluster tilting object. We can describe C[M] conveniently as the Δ-good modules of a quasi-hereditary algebra; in this setting, mutation can be described via Δ-dimension vectors.
We have a cluster character φ from Λ-mod[0] to the commutative ring U(n)^* which is compatible with our subcategories, so we get a cluster algebra structure on the "image" of C[M]. These cluster algebras are closely related to the coordinate rings of certain reduced double Bruhat cells.
This is a report on joint work with B. Leclerc and J. Schröer.
We give a new general solution to the KP equations, that specializes to Okounkov's double Hurwitz series, and also has other specializations of interest to combinatorialists.
A map is a 2-cell embedding of a graph in a Riemann surface. The generating series for a class of maps is called the map series for the class. I shall discuss two questions, one from mathematical
physics and the other from algebraic geometry, where map theory reveals the presence of deeper structure and connexions between the two.
I) The φ^4-model and log(1-φ)^-1-model (due to Penner) are early models of topological quantum field theory. The relationship between the partition functions for these two models may be explained
as a consequence of a functional relationship between two classes of maps in orientable surfaces, one in which all vertices have degree 4 and the other in which there is no restriction on vertex
degrees. Moreover, comparable relationships hold for other classes of maps and there is evidence of a natural bijection accounting for these relationships.
II) The generating series for the virtual Euler characteristics for the moduli spaces of complex and for real algebraic curves, respectively, may be shown to be specialisations of the map series
for all surfaces through an algebraic parameter associated with Jack symmetric functions. This parameter is conjectured to have an interpretation as an invariant of maps, which then opens the
possibility of passing it through the Strebel derivative construction used by Harer and Zagier, to the level of the moduli spaces.
In this talk I shall show how these conjectures arose in the first place.
We give a unified explanation of various factorization phenomena involving loop groups, affine Grassmannians, and representations of affine Lie algebras. Our main tool is a geometric fusion
product analogous to the Feigin-Loktev fusion of affine representations.
Let G be a graph whose edge set E has been totally ordered. Consider the corresponding NBC complex Δ consisting of all subsets of E which do not contain a broken circuit with respect to the ordering. Let R be the Stanley-Reisner ring of Δ. Jason Brown gave an explicit description of a homogeneous system of parameters for R in terms of fundamental edge-cuts in G. So R modulo this h.s.o.p. is a finite-dimensional vector space. We conjecture an explicit monomial basis for this vector space in terms of the circuits of G and prove that the conjecture is true for several
families of graphs.
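A tiny worked example (mine, not from the abstract) may help fix the definitions. For the triangle K_3 with its single circuit, the NBC subsets can be listed by brute force; by Whitney's theorem their counts by size recover, up to sign, the coefficients of the chromatic polynomial q^3 - 3q^2 + 2q. Here a broken circuit is taken to be a circuit with its smallest edge removed (conventions differ; some authors remove the largest).

```python
from itertools import combinations

edges = range(3)                      # triangle K_3, edges ordered 0 < 1 < 2
circuits = [frozenset({0, 1, 2})]     # the unique circuit: all three edges
broken = [c - {min(c)} for c in circuits]  # broken circuit: drop smallest edge

nbc = [set(s)
       for k in range(4)
       for s in combinations(edges, k)
       if not any(b <= set(s) for b in broken)]

# Sizes 0, 1, 2 occur with multiplicities 1, 3, 2 -- the unsigned
# coefficients of chi(K_3, q) = q^3 - 3q^2 + 2q.
counts = [sum(1 for s in nbc if len(s) == k) for k in range(3)]
print(counts)   # [1, 3, 2]
```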
We show that the set of cluster monomials for the cluster algebra of type D[4] forms a basis of the Z-module Z [x[1,1],...,x[3,3]]. We also show that the transition matrices relating the cluster
basis of this module to the natural and the dual canonical bases are unitriangular and nonnegative. These results support a conjecture of Fomin and Zelevinsky on the equality of the cluster and
dual canonical bases of Z [x[1,1],...,x[3,3]]. In the event that this conjectured equality is true, our results also imply an explicit factorization of each dual canonical basis element of the
module as a product of cluster variables.
Given a Coxeter group W, a W-graph is a combinatorial structure that encodes a W-module, or more generally, a module for the associated Iwahori-Hecke algebra. By a theorem of Gyoja, it is known
that every irreducible representation of a finite Coxeter group may be realized as a W-graph. Of special interest are the W-graphs that encode the Kazhdan-Lusztig cell representations of Hecke
algebras, and more generally, the cell representations associated to blocks of irreducible representations of real Lie groups.
In this talk, our goal is to isolate some basic features of the W-graphs of cell representations, and use these to create a class of "admissible" W-graphs that is amenable to combinatorial
analysis and (we hope) classification. In this direction, we will describe two theorems. First, a Dynkin diagram classification of all rank 2 admissible W-cells, and second, in the simply-laced
case, a combinatorial characterization of all admissible W-graphs.
The space of smooth genus 0 curves in projective space has a natural smooth compactification: the moduli space of stable maps, which may be seen as the generalization of the classical space of
complete conics. It has a beautiful combinatorial structure. In arbitrary genus, no such natural smooth model is expected, as the space satisfies "Murphy's Law". In genus 1, however, the
situation remains beautiful and combinatorial. I will describe a natural smooth compactification of the space of elliptic curves in projective space.
This space is a blow-up of the space of stable maps. It can be interpreted as blowing up the most singular locus first, then the next most singular, and so on, but with a twist: these loci are
often entire components of the moduli space. I will give a number of applications in enumerative geometry and Gromov-Witten theory. For example, it has been used by Aleksey Zinger to prove
physicists' famous mirror symmetry prediction for genus 1 Gromov-Witten invariants of a quintic threefold.
This is joint work with Zinger.
The (anti-ferromagnetic) q-state Potts model of a graph reduces to the number of proper q-colourings of the graph (when q is a natural number and the temperature is zero). The random-cluster
expansion gives an interpretation of this partition function for any q ³ 0. When q ³ 1, the FKG inequality yields positive correlations among any increasing functions on the state space. (At q=1
all the fundamental events are independent.) In the range 0 £ q £ 1 negative correlations are known to hold in some forms, but not in others, and are conjectured to hold in many more. I will
survey the current state of the problem, highlighting recent progress and potential applications. | {"url":"http://cms.math.ca/Events/winter07/abs/ca.html","timestamp":"2014-04-16T04:14:16Z","content_type":null,"content_length":"19033","record_id":"<urn:uuid:436d0c1b-4100-498a-b7ed-c820b7d2ab3b>","cc-path":"CC-MAIN-2014-15/segments/1397609538110.1/warc/CC-MAIN-20140416005218-00568-ip-10-147-4-33.ec2.internal.warc.gz"} |
Posts Tagged with 'Complex Analysis'—Wolfram|Alpha Blog
Why did the mathematician name his dog Cauchy? Because he left a residue at every pole!
But could the mathematician find the poles and their residues for a given function? He certainly could, with the help of Wolfram|Alpha.
We are proud to announce that Wolfram|Alpha has added residues and poles to its ever-expanding library of mathematical data that it can compute! To showcase this behavior, let’s first recall just
what a pole is.
In the study of complex analysis, a pole is a singularity of a function where the function behaves like 1/z^n at z == 0 .
More technically, a point z_0 is a pole of a function if the Laurent series expansion of the function about the point z_0 has only finitely many terms of negative degree in z – z_0, with at least one such term. As an
example, let’s look at the Laurent expansion of 1/Sin[z] at z == 2 Pi:
Computability and the Limits of Human Knowledge
In 1928, German mathematician David Hilbert asked a profound question about the nature of mathematical knowledge now known as the “Entscheidungsproblem”, or “decision problem”. Given the success of
generalized problem solving methods developed by Blaise Pascal, Johann Gauss and others, Hilbert asked whether there is a general method, an algorithm, that can be used to determine whether any given
mathematical statement is true or false. The answer to this seemingly theoretical question eventually led to the development of the computer.
The idea of a process, or algorithm, had physical meaning long before the advent of the computer. The method of “long division” we all learn in grade school for “doing” division is itself an
algorithm that can be used by humans, without a computer, that calls for physical actions to be applied to inputs (the scribbling you do based on the numerator and denominator) according to a set of
rules, ultimately producing the result sought after (the quotient written above “the line”). Though taken for granted by most grade school students and adults alike, the fact that there is a physical
process, a “dance” of long division, that produces consistently correct results is actually quite shocking, and appreciating this is a good first step to understanding the apparent connection between
mathematics and physical reality that gives mathematics inherent physical meaning.
Hilbert’s Entscheidungsproblem was an important piece of an epistemological movement in mathematics that demanded ever greater “rigor”, or precision. This new mathematical precision was not concerned
with calculating numbers more accurately. Rather, the precision sought after related to how rigorously the author stated and proved the premises presented. If the answer to the Entscheidungsproblem
were “yes”, then there would be a mechanical way of achieving this rigor - a process that divides the universe into true and false statements. Ironically, the fruits of this effort undermined the
effort itself, as Alonzo Church and Alan Turing both proved, independently of the other and using different methodologies, that the answer to the Entscheidungsproblem was “no”, by constructing
examples that led to irreconcilable paradoxes.
Turing Machines
Although Church and Turing both answered the Entscheidungsproblem in the negative, Turing’s answer was notable for another reason - he had essentially invented the computer in order to prove his
answer by rigorously describing machines now known as “Turing Machines” that are capable of performing any mathematical task. He then proved that there are tasks that no Turing Machine could perform.
While a rigorous statement of Turing’s answer to the Entscheidungsproblem requires a solid, but by no means scholarly, background in mathematics, understanding Turing’s answer requires knowledge of
only long division and common sense.
Turing Machines run programs (e.g., the algorithm for long division) that have inputs (the numerator and denominator) and generate output (the quotient). If we call a given program “P” and its inputs
“i”, we could write P(i) to denote the output of a Turing Machine running program P when fed i as input. To make matters more concrete, this would be like running an Instagram filter (P) on one of
your photos (i) on your iPhone (the Turing Machine), and P(i) would denote the image after being processed by the filter.
The Halting Problem
When dividing certain numbers by others, the algorithm for long division can get stuck in an “infinite loop”, causing you to scribble indefinitely as you work out the quotient to ever greater
precision (e.g., 1 divided by 3). Imagine looking at the numerator and denominator of a fraction and knowing beforehand, without doing long division, whether or not the number you generate will have
a finite number of decimal places. You would know, without doing the work, that the long division algorithm will either come to a halt on those inputs or run forever. Because Turing Machines are
capable of performing all mathematical calculations, they can do long division, and therefore Turing Machines can get stuck in an infinite loop. But Turing Machines do a lot more than just long
division, and so Turing asked whether it was possible to predict, as a general matter, whether any algorithm running on a Turing Machine would halt or run forever. This question is known as Turing’s
“Halting Problem”.
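For the special case of long division, this prediction really is possible: a reduced fraction has a terminating decimal exactly when its denominator's only prime factors are 2 and 5. A minimal sketch of such a halting predictor (the function name is illustrative, not from the article):

```python
from math import gcd

def long_division_halts(numerator, denominator):
    # Reduce the fraction, then strip factors of 2 and 5 from the
    # denominator; long division halts iff nothing else remains.
    d = denominator // gcd(numerator, denominator)
    for p in (2, 5):
        while d % p == 0:
            d //= p
    return d == 1

long_division_halts(1, 2)   # True:  1/2 = 0.5, the algorithm halts
long_division_halts(1, 3)   # False: 1/3 = 0.333..., it runs forever
```

Crucially, this predictor works only for this one algorithm; Turing's question was whether a single predictor could do the same for every program.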
To answer the question posed by the Halting Problem, Turing began by assuming that there was in fact a program that could decide whether or not any other program would halt, outputting “yes” for
programs that halt and “no” for programs that don’t. Let’s do the same and call that program “H”. So if “D” is the algorithm for long division, it follows that H(D(1/2)) = yes, while H(D(1/3)) = no.
That is, H knows, without running the numbers, that D(1/2) will halt (since 1/2 = .5), while D(1/3) will run forever.
Now let’s define a new program called “AMT” as follows:
AMT takes P as input, and then as Step 1 AMT asks whether H(P(P)) is yes or no. That is, AMT asks whether P, when run with itself as input, will halt or not. To make matters concrete, this would be
like running an Instagram filter on the code for the filter itself, instead of running it on an image.
As Step 2, AMT does one of two things:
(1) if P(P) does not halt but instead runs forever, then AMT halts;
(2) if P(P) halts, then AMT computes D(1/3), running forever.
Now let’s examine AMT(AMT), running AMT when fed itself as input. As Step 1, AMT would ask whether H(AMT(AMT)) is yes or no. That is, will AMT halt when run as a program given itself as input or run
forever? We don’t know the answer to this question, but we do know there are only two possibilities - it either halts or it runs forever. Let’s examine both possibilities:
(1) Assuming AMT(AMT) runs forever puts us in prong (1), in which case AMT halts. But remember, we assumed AMT(AMT) runs forever. Yet when we run AMT(AMT) assuming it runs forever, it halts. Since
assuming AMT(AMT) runs forever leads to a contradiction, we are forced to conclude that AMT(AMT) halts, bringing us to prong (2).
(2) Assuming AMT(AMT) halts puts us in prong (2), in which case AMT runs forever computing D(1/3). But we are again confronted with a contradiction, because we assumed AMT(AMT) halts, yet when we run AMT(AMT) assuming it halts, it runs forever.
In short, assuming AMT(AMT) runs forever causes it to halt; assuming it halts causes it to run forever. Because all programs either halt or run forever, prongs (1) and (2) represent all possible
outcomes for AMT(AMT). Yet both prongs lead to an irresolvable contradiction. Because the existence of H implies the existence of AMT, we are forced to conclude that H does not exist. Because H does
not exist, the Halting Problem is an example of a mathematical question that cannot be answered by an algorithm, making the answer to the Entscheidungsproblem “no”.
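The contradiction can be simulated concretely. In this sketch (hypothetical code, with “runs forever” approximated by a step budget), the oracle's verdict about AMT(AMT) is a parameter, and whichever verdict it gives turns out to be wrong:

```python
def amt_simulated(oracle_says_halts, budget=10_000):
    # Simulate AMT fed itself, given H's claimed verdict about that run.
    # Returns True if the run halts, False if it exhausts the budget
    # (our stand-in for "runs forever").
    if oracle_says_halts:        # prong (2): compute D(1/3) endlessly
        steps = 0
        while steps < budget:
            steps += 1
        return False
    else:                        # prong (1): halt immediately
        return True

amt_simulated(True)    # False: H said "halts", but AMT(AMT) loops
amt_simulated(False)   # True:  H said "runs forever", but it halts
```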
Out of the Crooked Timber
In the same breath, Turing unlocked an era of unprecedented human productivity, and simultaneously circumscribed its limits. By this measure, Turing was the greatest economist that ever lived, albeit
unintentionally. By any measure, Turing produced one of the greatest ideas in human history. Around the same time as Church’s and Turing’s answers to the Entscheidungsproblem, Kurt Gödel, Werner
Heisenberg and other mathematicians and scientists reached similar, remarkable conclusions about the limits of inference and observation. Though inadvertent pioneers of things we cannot know, they
all demonstrated that truly rigorous and intellectually honest epistemology is so powerful, it can point to its own boundaries, showing us where human knowledge ends and the perennially unknown begins.
kalleskultur posted this | {"url":"http://kalleskultur.tumblr.com/post/42465090186/computability-and-the-limits-of-human-knowledge","timestamp":"2014-04-17T04:20:23Z","content_type":null,"content_length":"52700","record_id":"<urn:uuid:16f07341-184e-46cf-a793-0c93941709b3>","cc-path":"CC-MAIN-2014-15/segments/1398223207985.17/warc/CC-MAIN-20140423032007-00447-ip-10-147-4-33.ec2.internal.warc.gz"} |
A Convexity Question in Matrix Analysis (Open)
Summary: The reader is asked to prove that for a real matrix $B$ with maximal rank $m \le n$, in the orthant where each $\lambda_i$ is positive, the scalar-valued function ${\cal C}(\lambda_1, \ldots, \lambda_n) = \mathrm{Tr}((B^{\ast}\,\mathrm{diag}(\lambda_1, \ldots, \lambda_n)\, B)^{-2})$ is convex.
Classification: Primary, Algebra; Secondary, Matrices and Determinants
David L. Russell
Mathematics Department
Virginia Tech
Blacksburg, VA 24061-0123
e-mail: russell@math.vt.edu | {"url":"http://www.siam.org/journals/categories/09-002.php","timestamp":"2014-04-20T10:48:13Z","content_type":null,"content_length":"9205","record_id":"<urn:uuid:198db187-4531-4cf1-b100-3fb13494b8cf>","cc-path":"CC-MAIN-2014-15/segments/1397609538423.10/warc/CC-MAIN-20140416005218-00415-ip-10-147-4-33.ec2.internal.warc.gz"} |
Plotting Module
Pyglet Plotting Module
This is the documentation for the old plotting module that uses pyglet. This module has some limitations and is not actively developed anymore. For an alternative you can look at the new plotting module.
The pyglet plotting module can do nice 2D and 3D plots that can be controlled by console commands as well as keyboard and mouse, with the only dependency being pyglet.
Here is the simplest usage:
>>> from sympy import var, Plot
>>> var('x y z')
>>> Plot(x*y**3-y*x**3)
To see lots of plotting examples, see examples/pyglet_plotting.py and try running it in interactive mode (python -i plotting.py):
$ python -i examples/pyglet_plotting.py
And type for instance example(7) or example(11).
See also the Plotting Module wiki page for screenshots.
Plot Window Controls
│ Camera │ Keys │
│Sensitivity Modifier │SHIFT │
│Zoom │R and F, Page Up and Down, Numpad + and - │
│Rotate View X,Y axis │Arrow Keys, A,S,D,W, Numpad 4,6,8,2 │
│Rotate View Z axis │Q and E, Numpad 7 and 9 │
│Rotate Ordinate Z axis│Z and C, Numpad 1 and 3 │
│View XY │F1 │
│View XZ │F2 │
│View YZ │F3 │
│View Perspective │F4 │
│Reset │X, Numpad 5 │
│ Axes │Keys│
│Toggle Visible │F5 │
│Toggle Colors │F6 │
│ Window │ Keys │
│Close │ESCAPE │
│Screenshot │F8 │
The mouse can be used to rotate, zoom, and translate by dragging the left, middle, and right mouse buttons respectively.
Coordinate Modes
Plot supports several curvilinear coordinate modes, and they are independent for each plotted function. You can specify a coordinate mode explicitly with the ‘mode’ named argument, but it can be
automatically determined for cartesian or parametric plots, and therefore must only be specified for polar, cylindrical, and spherical modes.
Specifically, Plot(function arguments) and Plot.__setitem__(i, function arguments) (accessed using array-index syntax on the Plot instance) will interpret your arguments as a cartesian plot if you
provide one function and a parametric plot if you provide two or three functions. Similarly, the arguments will be interpreted as a curve if one variable is used, and a surface if two are used.
Supported mode names by number of variables:
• 1 (curves): parametric, cartesian, polar
• 2 (surfaces): parametric, cartesian, cylindrical, spherical
>>> Plot(1, 'mode=spherical; color=zfade4')
Note that function parameters are given as option strings of the form “key1=value1; key2 = value2” (spaces are truncated). Keyword arguments given directly to plot apply to the plot itself.
Specifying Intervals for Variables
The basic format for variable intervals is [var, min, max, steps]. However, the syntax is quite flexible, and arguments not specified are taken from the defaults for the current coordinate mode:
>>> Plot(x**2) # implies [x,-5,5,100]
>>> Plot(x**2, [], []) # [x,-1,1,40], [y,-1,1,40]
>>> Plot(x**2-y**2, [100], [100]) # [x,-1,1,100], [y,-1,1,100]
>>> Plot(x**2, [x,-13,13,100])
>>> Plot(x**2, [-13,13]) # [x,-13,13,100]
>>> Plot(x**2, [x,-13,13]) # [x,-13,13,100]
>>> Plot(1*x, [], [x], 'mode=cylindrical') # [unbound_theta,0,2*Pi,40], [x,-1,1,20]
Using the Interactive Interface
>>> p = Plot(visible=False)
>>> f = x**2
>>> p[1] = f
>>> p[2] = f.diff(x)
>>> p[3] = f.diff(x).diff(x)
>>> p
[1]: x**2, 'mode=cartesian'
[2]: 2*x, 'mode=cartesian'
[3]: 2, 'mode=cartesian'
>>> p.show()
>>> p.clear()
>>> p
<blank plot>
>>> p[1] = x**2+y**2
>>> p[1].style = 'solid'
>>> p[2] = -x**2-y**2
>>> p[2].style = 'wireframe'
>>> p[1].color = z, (0.4,0.4,0.9), (0.9,0.4,0.4)
>>> p[1].style = 'both'
>>> p[2].style = 'both'
>>> p.close()
Using Custom Color Functions
The following code plots a saddle and color it by the magnitude of its gradient:
>>> fz = x**2-y**2
>>> Fx, Fy, Fz = fz.diff(x), fz.diff(y), 0
>>> p[1] = fz, 'style=solid'
>>> p[1].color = (Fx**2 + Fy**2 + Fz**2)**(0.5)
The coloring algorithm works like this:
1. Evaluate the color function(s) across the curve or surface.
2. Find the minimum and maximum value of each component.
3. Scale each component to the color gradient.
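Steps 2 and 3 amount to a linear rescale onto the gradient. A rough sketch of that scaling (illustrative names, not the actual module code):

```python
def scale_to_gradient(values, c0=(0.4, 0.4, 0.4), c1=(0.9, 0.9, 0.9)):
    # Step 2: find the range of the evaluated color function.
    lo, hi = min(values), max(values)
    span = (hi - lo) or 1.0          # avoid dividing by zero
    # Step 3: map each value linearly onto the two-color gradient.
    return [tuple(a + (v - lo) / span * (b - a) for a, b in zip(c0, c1))
            for v in values]

scale_to_gradient([0.0, 5.0, 10.0])
# smallest value maps to (0.4, 0.4, 0.4), largest to (0.9, 0.9, 0.9)
```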
When not specified explicitly, the default color gradient is f(0.0)=(0.4,0.4,0.4) -> f(1.0)=(0.9,0.9,0.9). In our case, everything is gray-scale because we have applied the default color gradient
uniformly for each color component. When defining a color scheme in this way, you might want to supply a color gradient as well:
>>> p[1].color = (Fx**2 + Fy**2 + Fz**2)**(0.5), (0.1,0.1,0.9), (0.9,0.1,0.1)
Here’s a color gradient with four steps:
>>> gradient = [ 0.0, (0.1,0.1,0.9), 0.3, (0.1,0.9,0.1),
... 0.7, (0.9,0.9,0.1), 1.0, (1.0,0.0,0.0) ]
>>> p[1].color = (Fx**2 + Fy**2 + Fz**2)**(0.5), gradient
The other way to specify a color scheme is to give a separate function for each component r, g, b. With this syntax, the default color scheme is defined:
>>> p[1].color = z,y,x, (0.4,0.4,0.4), (0.9,0.9,0.9)
This maps z->red, y->green, and x->blue. In some cases, you might prefer to use the following alternative syntax:
>>> p[1].color = z,(0.4,0.9), y,(0.4,0.9), x,(0.4,0.9)
You can still use multi-step gradients with three-function color schemes.
Plotting Geometric Entities
The plotting module is capable of plotting some 2D geometric entities like line, circle and ellipse. The following example plots a circle and a tangent line at a random point on the circle.
In [1]: p = Plot(axes='label_axes=True')
In [2]: c = Circle(Point(0,0), 1)
In [3]: t = c.tangent_line(c.random_point())
In [4]: p[0] = c
In [5]: p[1] = t
Plotting polygons (Polygon, RegularPolygon, Triangle) is not supported directly. However a polygon can be plotted through a loop as follows.
In [6]: p = Plot(axes='label_axes=True')
In [7]: t = RegularPolygon(Point(0,0), 1, 5)
In [8]: for i in range(len(t.sides)):
....: p[i] = t.sides[i] | {"url":"http://docs.sympy.org/0.7.2/modules/plotting.html","timestamp":"2014-04-20T17:13:01Z","content_type":null,"content_length":"91176","record_id":"<urn:uuid:edabe096-2914-4d1d-85fb-9a9370da4d88>","cc-path":"CC-MAIN-2014-15/segments/1397609538824.34/warc/CC-MAIN-20140416005218-00037-ip-10-147-4-33.ec2.internal.warc.gz"} |
A square matrix `A` is called orthogonal if `A^TA = I_n`. Let `v_1,v_2,ldots,v_n` be the columns of an... - Homework Help - eNotes.com
A square matrix `A` is called orthogonal if `A^TA = I_n`.
Let `v_1,v_2,ldots,v_n` be the columns of an orthogonal matrix `A` . Show that the `v_i"'s"` are mutually perpendicular and unit vectors.
Let's look at the element in the `i`-th row and `j`-th column, `delta_(ij)`, of the matrix `A^TA`. That element is the scalar product of the vectors `v_i` and `v_j`, that is
`delta_(ij)=v_i cdot v_j`. (1)
Also since `A` is orthogonal it follows that
`delta_(ij)={(0 if i ne j),(1 if i=j):}`
Hence from (1) we get
`v_i cdot v_j={(0 if i ne j),(1 if i=j):}`
So if we multiply two different vectors `v_i` and `v_j` we get 0 which means they are perpendicular (scalar product is equal to 0 only if vectors are perpendicular or if one of them is zero-vector),
and if we multiply vector by itself `v_i cdot v_i` we get 1 which means `v_i` is vector with length 1 that is a unit vector.
Hence all vectors `v_1,v_2,ldots,v_n` are mutually perpendicular unit vectors.
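A quick numeric check (a sketch, not part of the original answer): for a rotation matrix, which is orthogonal, the entry `(i,j)` of `A^TA` is exactly the dot product `v_i cdot v_j`, and the product comes out as the identity:

```python
import math

def transpose(m):
    return [list(row) for row in zip(*m)]

def matmul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(len(b)))
             for j in range(len(b[0]))] for i in range(len(a))]

t = math.radians(30)
A = [[math.cos(t), -math.sin(t)],      # columns are v_1 and v_2
     [math.sin(t),  math.cos(t)]]

G = matmul(transpose(A), A)            # entry (i, j) is v_i . v_j
# G is (numerically) the identity: unit columns, zero cross products.
```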
Multiplying both sides of the relation:
(1) A'A = I (where A' is the transposed matrix of A)
on the right by the unit vector u`^k`, we get:
(2) A' v`^k` = u`^k`
Now, multiplying both sides again by u`^h`, h `!=` k:
v`^h` x v`^k` = u`^k` x u`^h` = 0
Therefore the vectors v`^h` and v`^k` are orthogonal for h `!=` k.
Multiplying the relation (2) by v`^h` on the right:
A' v`^k` x v`^h` = u`^k` x v`^h`
since the left side is zero, thus:
u`^k` x v`^h` = 0 for h `!=` k
Foothill Online Course Outline System
1. Description -
Introduces the mathematical tools of quantum information theory and the design of elementary quantum circuits and algorithms. The first of a sequence, it develops the quantum mechanical
foundation needed to understand how quantum computers can beat ordinary computers in certain problem classes by using quantum entanglement and teleportation under the ideal condition of a noiseless
channel. The endpoint of the course is a working knowledge of the quantum Fourier transform and Shor algorithm, which can be used to break RSA encryption, the basis of current Internet security. No
prior knowledge of quantum mechanics is required.
Prerequisite: None
Co-requisite: None
Advisory: C S 1B, 18 and MATH 1B.
2. Course Objectives -
The student will be able to:
A. Define and solve problems that involve the basic linear algebra and complex arithmetic needed in quantum computing theory.
B. Use Fourier theory to write a Fourier sum for a given a periodic function or a Fourier transform for a given general function.
C. State the axioms of traditional quantum mechanics in Dirac notation and compute the probabilities associated with a measurement on a spin-1/2 atom in a magnetic field.
D. Given a classical circuit diagram containing basic logic gates (AND, OR, etc.), compute the outputs given a set of input bits.
E. Define a qubit and, given a quantum circuit diagram containing single-qubit logic gates, compute its output given an input qubit.
F. Explain the difference between reversible and irreversible gate logic and give examples demonstrating how entangled states can be exploited in order to emulate a classical irreversible circuit using a reversible quantum circuit.
G. Write out a quantum circuit that will perform quantum teleportation of a qubit using entangled states and explain why the same cannot be done using a classical logic or information exchange.
H. Explain the difference between the discrete Fourier transform (DFT) and the fast Fourier transform (FFT), and define the quantum Fourier transform (QFT).
I. Define the phase estimation problem and give an analytical summary of how a QFT can solve it.
J. Define the order-finding problem and prove that it can be reduced to a phase estimation problem.
K. Show the steps that reduce the prime-number factorization problem to the order-finding problem, and explain why this enables a QFT to do factorization with improved time complexity, thus
breaking RSA encryption.
3. Special Facilities and/or Equipment -
A. access to a networked computer laboratory which can access quantum computer simulators and tools either via the Web or loaded onto lab computers.
B. a website or course management system with an assignment posting component, a forum component. This applies to all sections, including on-campus (i.e., face-to-face) offerings.
C. When taught via Foothill Global Access on the Internet, the college will provide a fully functional and maintained course management system through which the instructor and students can interact.
D. When taught via Foothill Global Access on the Internet, students must have currently existing e-mail accounts and ongoing access to computers with internet capabilities.
4. Course Content (Body of knowledge) -
A. Linear Algebra and Complex Arithmetic
1. Vector spaces, bases, inner products, orthogonality, orthonormality
2. Linear transformations, matrices, inverses and the transpose
3. Direct sums and direct products
4. Eigenvectors and eigenvalues
5. Complex arithmetic, the complex plane and conjugation
6. Complex vector spaces and unitary operators
7. The exponential operator applied to a complex matrix
B. Elementary Fourier Theory and Probability
1. Trigonometric functions and identities
2. Survey and definitions of derivatives and integrals of “well behaved” real-valued functions.
3. Real and complex Fourier series
4. The Fourier transform
5. The convolution theorem
6. Probability distribution and density functions
C. Basic Quantum Mechanical Results
1. The traditional state-vector formulation of quantum mechanics
2. Dirac notation
3. Superposition of states
4. Observables and orthogonal measurements
5. Completeness relations
6. Unitary transformation of states
7. 2-D Hilbert space
8. Spin ½ systems
9. Survey of laboratory and industry spin-1/2 computers
D. Classical Computation Theory
1. The classical bit
2. Logic gates and truth tables
3. Universal gates
4. Reversible and irreversible circuits
5. Fredkin and Toffoli gates
6. Emulating irreversibility with reversible logic
7. Time complexity of algorithms
E. Quantum Computation: A Single Qubit
1. The Stern-Gerlach experiment
2. Different 2-D bases for a single-qubit Hilbert space
3. Bloch sphere representation of qubits
4. Unitary (reversible) evolution of a qubits
5. Pauli-matrices and Bloch-sphere interpretation of unitary operators
6. Single-qubit logic gates
7. Quantum wires, circuit elements and interference
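As a taste of the single-qubit arithmetic covered in this unit (an illustrative sketch, not course material): a qubit is a length-2 complex vector and a gate is a 2x2 unitary matrix acting on it.

```python
import math

s = 1 / math.sqrt(2)
H = [[s, s], [s, -s]]     # Hadamard gate
X = [[0, 1], [1, 0]]      # Pauli-X, the quantum NOT

def apply(gate, qubit):
    return [sum(gate[i][j] * qubit[j] for j in range(2)) for i in range(2)]

zero = [1, 0]             # the basis state |0>
one = apply(X, zero)      # X flips |0> to |1>
plus = apply(H, zero)     # equal superposition of |0> and |1>
back = apply(H, plus)     # H is its own inverse, so back is |0> again
```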
F. Theory of Two Qubits
1. Bipartite system as a 4-D tensor product space
2. Bases, Bell states and quantum entanglement
3. Unitary evolution in bipartite system vs. evolution in single qubit system
4. Quantum gates from classical irreversible gates
5. Quantum gate emulation of irreversible classical gates
6. The no-cloning theorem
G. Two Qubit Circuits and Early Algorithms
1. Universal quantum gates
2. Quantum teleportation
3. Superdense coding
4. Deutsch's algorithm
5. Time complexity speedup of Deutch's algorithm: theoretical vs. actual
H. The Quantum Fourier Transform
1. The discrete FT
2. The fast Fourier transform (FFT)
3. FFT improvement of polynomial multiplication
4. Time complexities of FFT and DFT
5. The QFT
6. Circuit and time complexity for QFT
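The DFT that the FFT and QFT both compute can be written in a few lines; this naive sketch runs in O(n^2) time, which is exactly what the FFT's O(n log n) and the QFT's polynomial-size circuit improve on (sign conventions vary by text):

```python
import cmath

def dft(x):
    n = len(x)
    return [sum(x[k] * cmath.exp(-2j * cmath.pi * j * k / n)
                for k in range(n))
            for j in range(n)]

dft([1, 0, 0, 0])   # an impulse transforms to a flat spectrum of 1s
```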
I. QFT and Phase Estimation
1. Two-register implementation of phase estimation of an “ideal phi”
2. Relationship between phase estimation and inverse FT
3. Phase estimation when phi is not ideal
4. Quantum phase estimation algorithm
J. QFT and Order-Finding
1. The order-finding problem
2. Euclid's algorithm
3. The continued fraction algorithm
4. Reduction of order-finding to phase estimation
K. QFT and Cryptography-Breaking
1. Prime number factorization and RSA encoding
2. Shor's reduction of factorization to order finding
3. The Shor algorithm
4. Breaking RSA cryptography with quantum computers
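Shor's reduction can be sketched end-to-end with order-finding done classically by brute force; that brute-force loop is precisely the step the quantum Fourier transform replaces with an efficient circuit (illustrative code, not course material):

```python
from math import gcd

def order(a, n):
    # Smallest r > 0 with a**r = 1 (mod n); assumes gcd(a, n) == 1.
    # This brute-force search is the step Shor's algorithm makes fast.
    r, x = 1, a % n
    while x != 1:
        x = (x * a) % n
        r += 1
    return r

def factor_via_order(a, n):
    # Shor's classical post-processing: an even order r gives the
    # factors gcd(a**(r/2) - 1, n) and gcd(a**(r/2) + 1, n)
    # (they may be trivial for unlucky choices of a).
    r = order(a, n)
    if r % 2:
        return None
    y = pow(a, r // 2, n)
    return gcd(y - 1, n), gcd(y + 1, n)

factor_via_order(7, 15)   # order of 7 mod 15 is 4, yielding (3, 5)
```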
5. Repeatability - Moved to header area.
6. Methods of Evaluation -
A. Tests and quizzes
B. Written assignments which include algorithms, mathematical derivations, logical circuits, and essay questions.
C. Final examination
7. Representative Text(s) -
Nielsen, M. and Chuang, I.: Quantum Computation and Quantum Information, Cambridge University Press, 2010.
Kaye, P., Laflamme, R. and Mosca, M.: An Introduction to Quantum Computing, Oxford University Press, 2007.
8. Disciplines -
Computer Science
9. Method of Instruction -
A. Lectures which include mathematical foundations, theoretical motivation and coding implementation of quantum mechanical algorithms.
B. Detailed review of assignments which includes model solutions and specific comments on the student submissions.
C. In person or on-line discussion which engages students and instructor in an ongoing dialog pertaining to all aspects of designing, implementing and analyzing programs.
D. When course is taught fully on-line:
1. Instructor-authored lecture materials, handouts, syllabus, assignments, tests, and other relevant course material will be delivered through a college hosted course management system or
other department-approved Internet environment.
2. Additional instructional guidelines for this course are listed in the attached addendum of CS department on-line practices.
10. Lab Content -
Not applicable.
11. Honors Description - No longer used. Integrated into main description section.
12. Types and/or Examples of Required Reading, Writing and Outside of Class Assignments -
A. Reading
1. Textbook assigned reading averaging 30 pages per week.
2. Reading the supplied handouts and modules averaging 10 pages per week.
3. Reading on-line resources as directed by instructor though links pertinent to programming.
4. Reading library and reference material directed by instructor through course handouts.
B. Writing
1. Writing technical prose documentation that supports and describes the programs that are submitted for grades.
13. Need/Justification -
This course is a restricted support course for the AS degree in Computer Science. | {"url":"http://www.foothill.edu/schedule/outlines.php?act=1&rec_id=5786","timestamp":"2014-04-17T15:58:54Z","content_type":null,"content_length":"20368","record_id":"<urn:uuid:3d2c80e2-c701-4b94-916e-9ab8d64875c9>","cc-path":"CC-MAIN-2014-15/segments/1397609530136.5/warc/CC-MAIN-20140416005210-00263-ip-10-147-4-33.ec2.internal.warc.gz"} |
generalised pigeonhole principle
December 28th 2009, 04:03 AM #1
The generalised pigeonhole principle is proved by using contradiction. The proof is given below.
consider there are no pigeonholes that contain more than ceil(m/n) - 1.
Then the total number of pigeons m <= n * (ceil(m/n) - 1)
m < n*(m/n) + 1 - 1
m = m which is a contradiction.
Can anyone please tell how it is a contradiction.
I think there is an error in what you wrote.
"m = m which is a contradiction"
it has to be m < m which is a contradiction. This follows directly from the preceding step.
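A small numeric check of the principle itself (a sketch for intuition, not a proof): even the most even distribution of m pigeons into n holes leaves some hole with ceil(m/n) pigeons, so no distribution can keep every hole at ceil(m/n) - 1 or fewer:

```python
import math

def fullest_hole(m, n):
    # Distribute m pigeons round-robin over n holes -- the most even
    # split possible -- and report the fullest hole.
    counts = [0] * n
    for i in range(m):
        counts[i % n] += 1
    return max(counts)

fullest_hole(10, 3)   # 4, which equals ceil(10/3)
fullest_hole(12, 4)   # 3, which equals ceil(12/4)
```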
December 28th 2009, 04:13 AM #2
Santa Cruz LC Outreach
WE NEED YOUR HELP!!
Simultaneous with the Santa Cruz Linear Collider Retreat, two separate groups (totalling 200 or more) of science oriented students will be in residence on the Santa Cruz campus for intensive summer
The COSMOS http://epc.ucsc.edu/cosmos/ (California State Summer School in Mathematics and Science) program will bring approximately 150 students between the ages of 13-15 to the Santa Cruz campus for
four weeks of science and math immersion beginning June 23. By the way, we will share dining space with these students for breakfast and lunch.
The UC Santa Cruz campus is one of the five national sites for the annual summer programs of the Johns Hopkins Center for Talented Youth (CTY) http://www.jhu.edu/gifted/, which cater to the top 2% of the nation's high school students. The first session of this summer's program extends from June 23 - July 12, overlapping the Linear Collider Retreat. Roughly half of the 200 students registered for this session's programs are scientifically inclined, with roughly half again of those specifically interested in physical science and mathematics.
The Organizing Committee of the Linear Collider Retreat have met with the leaders of these two programs, and together we have arranged two functions designed to introduce these students to Particle Physics in general, the Linear Collider in particular, and the physicists (that's us!) who make up this field.
On Wednesday night (the retreat's arrival night), there will be two 45 minute presentations: one on Particle Physics in general followed by one specifically on the Linear Collider. This will be
attended by all of the COSMOS students and the science-oriented CTY students: approximately 250 students altogether. Retreat participants are encouraged to attend the presentations and to mingle with
the students afterward.
On Friday, between breakfast and the morning coffee break, physicists will meet in small groups with the COSMOS students who are specifically interested in physical science and mathematics. Between
lunch and afternoon coffee, there will be meetings with the corresponding group of CTY students. HERE IS WHERE WE NEED YOUR HELP
To keep these latter meetings intimate, we would like the groups to consist of 8-10 students, a staff member, and two physicists. Since each of the two groups will consist of 50-75 students, we will need 10-20 physicists for EACH of these sessions. PLEASE CONSIDER volunteering to take part in this outreach opportunity (subject to your availability once the parallel session agendas come together) by checking the appropriate box on the registration form.
Finally, as mentioned, we will be eating breakfast and lunch with the COSMOS students. The less introverted of us may want to take this opportunity to circulate amongst them! | {"url":"http://scipp.ucsc.edu/LC/outreach.html","timestamp":"2014-04-21T04:43:07Z","content_type":null,"content_length":"5164","record_id":"<urn:uuid:10084983-901b-4326-8083-20d86eb976cf>","cc-path":"CC-MAIN-2014-15/segments/1397609539493.17/warc/CC-MAIN-20140416005219-00639-ip-10-147-4-33.ec2.internal.warc.gz"} |
Bywood, PA Calculus Tutor
Find a Bywood, PA Calculus Tutor
...I favor the Socratic Method of teaching, asking questions of the student to help him/her find her/his own way through the problem rather than telling what the next step is. This way the
student not only learns how to solve a specific proof, but ways to approach proofs that will work on problems ...
58 Subjects: including calculus, reading, geometry, biology
Hey Everybody! I am an experienced tutor teaching subjects like physics, calculus, algebra, Spanish and even guitar. I was originally an engineer for a helicopter company for nearly 4 years and I
resigned to start a career in education.
16 Subjects: including calculus, Spanish, physics, algebra 1
...Able to help focus students on necessary grammar rules and help them with essay composition. I majored in Operations Research and Financial Engineering at Princeton, which involved a great
deal of higher level math similar to that seen on the Praxis test. To earn the degree I had to take a number of upper-level math courses.
19 Subjects: including calculus, statistics, algebra 2, geometry
...With a physics and engineering background, I encounter math at and above this level every day. With my experience, I walk the student through what a concept in math is about, how to execute
it, and how to tackle a problem when it comes time for a test. I am a tutor with a primary focus in math and science who has worked with students at this level of math for multiple years.
9 Subjects: including calculus, physics, geometry, algebra 1
...While in high school I scored a 780 on the math SAT and an 800 on the math SAT II. I took AP Calculus BC and scored a 5. Science: I am available to tutor chemistry, physics and any electrical related topics.
15 Subjects: including calculus, chemistry, physics, algebra 1
Southlake Algebra 1 Tutor
Find a Southlake Algebra 1 Tutor
I am a Mechanical Engineer by education, currently working as a Mechanical, Electrical, and Plumbing coordinator on a large commercial construction project. Advanced mathematics is applied every day in my work. Someone once said, “Everything should be made as simple as possible, but not simpler.” My approach to tutoring is to “make it simple” to help students understand math.
8 Subjects: including algebra 1, physics, calculus, geometry
...The SAT Reading Test can be approached in a very logical way. Let me help you conquer this section of the test. Running out of time?
29 Subjects: including algebra 1, Spanish, reading, GRE
...I believe that success in biochemistry really boils down to a truly functional understanding of both chemistry and biology. I have taught swimming lessons for the better part of 10 years. Additionally, I swam on varsity for Allen High School in 2000-2001.
15 Subjects: including algebra 1, chemistry, physics, algebra 2
I have been tutoring and teaching high school math and SAT, ACT, PSAT, GRE, and GMAT preparation for seven years. My approach is to create a positive and supportive environment to help students succeed. By breaking a problem down into smaller, manageable steps, I can help identify and resolve the areas that need improvement.
11 Subjects: including algebra 1, geometry, GRE, SAT math
...I'm more than willing to work with parents to find an arrangement that works for everyone involved. My goal for every session is to ensure that both student and tutor are satisfied with our progress. I'll be learning from my students as much as (or more than) they learn from me.
28 Subjects: including algebra 1, reading, English, chemistry