Paper 02-27:
id 02-27
authors Neun, Winfried
year 2002
title MATH-NET - THE STATE OF THE ART OF A DISTRIBUTED INFORMATION & COMMUNICATION SYSTEM IN MATHEMATICS
source elpub2002 - Technology Interactions. Proceedings of the 6th International ICCC/IFIP Conference on Electronic Publishing held in Karlovy Vary, Czech Republic, 6–8 November 2002. Editors:
Carvalho, Joao Álvaro; Hübler, Arved; Baptista, Ana Alice. Publisher: VWF Berlin, 2002. ISBN 3-89700-357-0. 395 pages.
summary The Math-Net Initiative steered by the International Mathematical Union (IMU) tries to improve and to coordinate the mathematical scholarly information in the World Wide Web. Math-Net is a
community-based information and communication system in mathematics. It is based on the information which is provided by persons and institutions taking part in Math-Net on their local
servers (Math-Net Members). They should make their information resources electronically available in a standardised fashion. Currently the Math-Net activities are focused on three topics:
Math-Net Pages, Mathematical Preprints, and Personal Information about Mathematicians. The talk will give an overview of the idea of Math-Net. The Math-Net Page, a standardised portal
to the information of mathematical institutions, will be explained in more detail.
series ELPUB:2002
email neun@zib.de
more http://www.math-net.org
content (2,184,816 bytes)
urn:nbn urn:nbn:se:elpub-02-27
last 2003/07/09 15:06
Calculus: Effective Function Decompositions Video | MindBites
Calculus: Effective Function Decompositions
About this Lesson
• Type: Video Tutorial
• Length: 11:42
• Media: Video/mp4
• Use: Watch Online & Download
• Access Period: Unrestricted
• Download: MP4 (iPod compatible)
• Size: 127 MB
• Posted: 06/26/2009
This lesson is part of the following series:
Calculus (279 lessons, $198.00)
Calculus: Basics of Integration (14 lessons, $23.76)
Calculus: Integration by Substitution Illustrated (4 lessons, $7.92)
Taught by Professor Edward Burger, this lesson comes from a comprehensive Calculus course. This course and others are available from Thinkwell, Inc. The full course can be found at http://
www.thinkwell.com/student/product/calculus. The full course covers limits, derivatives, implicit differentiation, integration or antidifferentiation, L'Hopital's Rule, functions and their inverses,
improper integrals, integral calculus, differential calculus, sequences, series, differential equations, parametric equations, polar coordinates, vector calculus and a variety of other AP Calculus,
College Calculus and Calculus II topics.
Edward Burger, Professor of Mathematics at Williams College, earned his Ph.D. at the University of Texas at Austin, having graduated summa cum laude with distinction in mathematics from Connecticut College.
He has also taught at UT-Austin and the University of Colorado at Boulder, and he served as a fellow at the University of Waterloo in Canada and at Macquarie University in Australia. Prof. Burger has
won many awards, including the 2001 Haimo Award for Distinguished Teaching of Mathematics, the 2004 Chauvenet Prize, and the 2006 Lester R. Ford Award, all from the Mathematical Association of
America. In 2006, Reader's Digest named him in the "100 Best of America".
Prof. Burger is the author of over 50 articles, videos, and books, including the trade book, "Coincidences, Chaos, and All That Math Jazz: Making Light of Weighty Ideas" and of the textbook "The
Heart of Mathematics: An Invitation to Effective Thinking". He also speaks frequently to professional and public audiences, referees professional journals, and publishes articles in leading math
journals, including The "Journal of Number Theory" and "American Mathematical Monthly". His areas of specialty include number theory, Diophantine approximation, p-adic analysis, the geometry of
numbers, and the theory of continued fractions.
Prof. Burger's unique sense of humor and his teaching expertise combine to make him the ideal presenter of Thinkwell's entertaining and informative video lectures.
About this Author
2174 lessons
Founded in 1997, Thinkwell has succeeded in creating "next-generation" textbooks that help students learn and teachers teach. Capitalizing on the power of new technology, Thinkwell products prepare
students more effectively for their coursework than any printed textbook can. Thinkwell has assembled a group of talented industry professionals who have shaped the company into the leading provider
of technology-based textbooks. For more information about Thinkwell, please visit www.thinkwell.com or visit Thinkwell's Video Lesson Store at http://thinkwell.mindbites.com/.
Thinkwell lessons feature a star-studded cast of outstanding university professors: Edward Burger (Pre-Algebra through...
Recent Reviews
This lesson has not been reviewed.
Please purchase the lesson to review.
The Basics of Integration
Illustrating u-Substitution
Choosing Effective Function Decompositions Page [1 of 1]
Well I thought we'd close this section like we began the introduction to antidifferentiation, with a game. We looked at Math Jeopardy before, where you had to actually produce the right question for
the answer. Here I'd like to be more traditional. I want you to actually give me the answer to my question. And the question's always going to be, what is a good choice of u in each of the integrals
I'm going to show you. And the reason for this, that I want you to think about this is a few-fold. First of all, it's good practice to get in the habit of seeing if you can guess what the right
choice of u is. And secondly, to know what to look for when you're looking for those choices of u. And thirdly, to point out the basic fact that in reality, when you're trying these problems, whether
it's in the library or in your dorm room, or where ever, inevitably you might make some mistakes in your choice of u. You might make a guess for u, do some work, and realize that wasn't a good choice
of u. You have to realize it's okay to go back and try again with a different choice of u. And in fact, it's only after you practice and practice as I have that you can look at these things and
actually make a pretty good guess at what u might be. So here's an opportunity for you to practice that. We're not going to actually evaluate the integrals. All we're going to do is make a guess as
to what u should be. You want to go out and try these integrals on your own, great. The goal is to figure out what u should be. And here's a chance for you to actually contribute and interact.
The first question deals with this long integral: the integral of (4x^3+x^2-x+1)(12x^2+2x-1) dx. What is a good choice for u? Here are your options. Should it be this first thing right here in this first
product? Should it be the second term here, all this stuff? Or should it be the product of all these things. What is a good choice for u? Make a guess right now. Well, you may have noticed that if we
take a look at this term and differentiate it, notice what we get. We get 12x^2+2x-1. And that's exactly what's here, 12x^2+2x-1. And so, if you let u be this, the derivative is sitting right over
here. So this is a good choice for u. You let u be that, you will succeed in actually evaluating this derivative--this antiderivative.
Okay let's try another one. The integral of x^3 · sin(4x^4). So here are your choices right now. Should u be x^3? Should u be sin(4x^4)? Or should u be 4x^4? Or should u be the whole thing, x^3 · sin(4x^4)? Think about it. Make your guess now. Well if I look at just the inside term right here, you'll notice the derivative is 16x^3. Even though that's not exactly what this term is, it's just off
by a constant multiple. So in fact, this choice of u, 4x^4, would be one that would actually lead us to an easier but equivalent integral to evaluate. We will have to take care of that 16 by dividing by 16, but we could certainly do that. And therefore, this is the right choice for u. Why? Because its derivative is basically sitting right here.
Okay let's try another one. Here's a long fraction. (6x^2-8x+6)/(x^3-2x^2+3x+1). What is a good choice of u here? Is it the top, (6x^2-8x+6)? Is it the bottom, (x^3-2x^2+3x+1)? Or should u be the
whole thing, this quotient? Enter your answer now. Well, you may have noticed that if you look at the bottom here, and take the derivative, I see 3x^2-4x+3. And that is exactly the top if I were to
multiply the answer by 2. So the derivative of this bottom is actually equal to half of the top. So since they differ only by a constant multiple, it appears that this is a good choice for u. And by
the way, if you were to actually perform this integration, you would see that you would have something over u, a natural log of u. You try that. Anyway, this question though, is just to find out the
right choice of u, and I believe it's the bottom here.
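The pattern in this example — the numerator is a constant multiple of the derivative of the denominator — can be checked symbolically. Here is a minimal sketch using sympy (assuming it is available), verifying the antiderivative by differentiating it:

```python
import sympy as sp

x = sp.symbols('x')

# u = the bottom; the top is exactly 2 times u's derivative.
u = x**3 - 2*x**2 + 3*x + 1
integrand = (6*x**2 - 8*x + 6) / u
assert sp.simplify(integrand - 2 * sp.diff(u, x) / u) == 0

# So an antiderivative is 2*ln(u); differentiating it recovers the integrand.
assert sp.simplify(sp.diff(2 * sp.log(u), x) - integrand) == 0
```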
Okay great. I didn't make it out very well. I hope you're faring okay. I'm getting these, which is sort of surprising. Okay, here's a green one, and let's make sure that you understand what this says
here. This says x multiplied by e^(x^2), so that x^2 is in the power, divided by e^(x^2)+5. What is a good choice for u? One answer is x. The next answer is x·e^(x^2). The next answer is just e^(x^2). The next answer is just x^2. And the last answer is e^(x^2)+5, a lot of choices there. Take a thought about it. Think about it for a second. And now enter your answer when you're ready. Well, there
are a lot of possibilities there. Which one is the one that might actually pan out? Well, if we let u be this bottom here, what's the derivative of that? Well the derivative of the 5, +5 just drops
out. And what's the derivative of this? Well that actually requires a little chain rule. I see e to the blop. And the derivative of e to the blop is e to the blop--and then I have to multiply by the
derivative of the blop, which is 2x. So I see that the derivative of this entire bottom is almost equal to this. It's just off by a factor of 2 in the front. And so really the choice for u might be
e^(x^2)+5. If you let u be that, the derivative is basically all across here, except you have to modify it with the 2. This guess might have been a good guess by the way, if you would have let u be
that. And that actually would simplify things dramatically, but then you would have a u+5, and you'd have to do another u substitution again. So if you said that, that's a good runner up. Feel good
about that. But I think this choice of u would actually simplify things even more.
Okay, let's try just a couple more here. This is a penultimate one. We're almost done. This is the orange one. It's the integral of sin of the square root of x all divided by the square root of x,
dx. What are the choices for u here? Well should u be sin of the square root of x? Should u be just the square root of x? Or should u be 1/the square root of x? Think about this and make your guess
when you're ready. Well let's think about the following possibility. What if we let u equal just that square root of x right here? What's the derivative of the square root of x? Well the derivative
of the square root of x is 1 divided by 2 square root of x. Let me write that down for you. Remember that the derivative of the square root of x is equal to 1 over 2 square root of x. And so that 1
over the square root of x actually is up here, 1 over the square root of x. So in fact, this will simplify quite nicely with a u choice of square root of x. Then you just have sin(u), and I guess you
have to take care of that factor so you'll have a 2sin(u). And integrate 2sin(u), you know that'd be -2 cos(u). So in fact, the right choice here will be the square root of x. That was a hard one.
And I'm going to close with a really hard one now. So if you don't get this, don't be discouraged. Look at that: sec^3 x · tangent x. We want to find the antiderivative. So we're looking now for a good
choice of u here. This is, I think, really tricky. Let me give you some choices. Should u be tangent x? Should u be sec x? Should u be sec^3 x? Should u be sin x? Should u be cos x? A lot of choices
there, this one, I think is hard. Think about it and, if you can, if you've been watching these videos sort of in succession, try to think about trig functions the way I think about trig functions.
That may be a hint. If you don't know what I'm talking about, I'll explain that in a second. But first try your darnedest to make a guess at which of these things you think might work. Don't feel bad
if you don't get this one right, but give it a try. All right. There may be a lot of ways of looking at this, but let me tell you the sort of naïve, silly way that I look at these problems. When it's
not completely apparent what to do, I usually take a trig problem and convert it back into sin's and cos's and hope that things go well. And if I try that, if I convert this back into sin's and
cos's, what I would see is--I'd remember that sec is actually 1/cos. So I would see 1/cos^3 x. And tangent I would revert to sin/cos. And if I now combine this, I see sin x divided by cos^3 x times cos x, which is cos^4 x. So this is identical to that, and now it's a little bit clearer. I'm making it slightly clearer for you. I write this as sin x/(cos x)^4. This is the same thing, but I'm trying to write it
so you can really see the inside there. Notice that if I call that quantity u, the inside stuff right there, cos, the derivative of it is sort of sitting on the top right there, because the
derivative of cos is -sin. So as long as I prefer things with a -sin in front, I'm okay. So really this is a tricky problem. I first converted back to sin's and cos's, simplified and realized that
there's an inside and the derivative is over here. So the choice here would be the cos x, very sneaky, very tricky, but at least I wanted to see what one of these looks like, and now try these on
your own if you want, it would be a good idea to actually work through. This one and the other ones, in fact, too, I invite you to try that. Anyway, well congratulations on conquering this idea of
substitution and the notion of finding integrals and antiderivatives. And up next and lastly, for this course, we're going to take a look at that second fundamental question of calculus. I'll remind
you what that question is, and I'll remind you where we are in this course, and we're finally going to begin to see the big picture and head toward the finish line. Congratulations, and I'll see you
in just a bit.
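The last two substitution choices above can also be verified symbolically; a hedged sketch with sympy (assuming it is installed), checking each claimed antiderivative by differentiating it:

```python
import sympy as sp

x = sp.symbols('x', positive=True)

# sin(sqrt(x))/sqrt(x): with u = sqrt(x), the antiderivative is -2*cos(sqrt(x)).
f = sp.sin(sp.sqrt(x)) / sp.sqrt(x)
F = -2 * sp.cos(sp.sqrt(x))
assert sp.simplify(sp.diff(F, x) - f) == 0

# sec(x)^3 * tan(x): with u = cos(x), the antiderivative works out to sec(x)^3 / 3.
g = sp.sec(x)**3 * sp.tan(x)
G = sp.sec(x)**3 / 3
assert sp.simplify(sp.diff(G, x) - g) == 0
```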
Binary is just another way of counting. Almost every single number you encounter in everyday life is in base 10. In base 10, you count each digit up, from 0 to 9. When you add 1 to 9 though, the 9 in
the ones place changes to a 0, and the 0 in the tens place changes to a 1. So when you have 099, and you increase it by one, the 9 in the ones place changes to a zero and adds 1 to the 9 in the tens
place, which then changes to a 0 and adds 1 to the 0 in the hundreds place, giving you 100. If you have a number like 5473, that means you have 3 ones, 7 tens, 4 hundreds, and 5 thousands. The value of a place is 10^n, where n is the number of places it sits to the left of the ones place. So the ones place is 10^0, the tens place is 10^1, the hundreds place is 10^2, the thousands place is 10^3, etc.
But in binary, you only have two digits, a 0 and a 1. So when you have a 1 and add another 1 to it, you need to shift up a place, to get 10. In binary, 1 + 1 = 10. With base 10, you have 10^n, but in
base 2, you have 2^n. So the first place is 2^0, the second is 2^1, the third 2^2, the fourth 2^3, etc. So with a binary number 101101, you have one 2^0 (1), one 2^2 (4), one 2^3 (8), and one 2^5
(32), which gives you 45.
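The place-value expansion just described is easy to mirror in code; a short illustrative Python sketch (the function name is just for this example):

```python
# Sum digit * 2^place, right to left, exactly as in the 101101 example above.
def from_binary(bits: str) -> int:
    total = 0
    for place, digit in enumerate(reversed(bits)):
        total += int(digit) * 2 ** place
    return total

print(from_binary("101101"))  # 45 = 32 + 8 + 4 + 1
```

Python's built-in `int("101101", 2)` performs the same conversion.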
These are both different types of positional notations for numbers. There are other types of positional systems, one for every real base greater than 1 in fact, but some other common types include unary (base 1),
quinary (base 5), octal (base 8), hexadecimal (base 16), and duodecimal (base 12). The first known positional notation was made by the Babylonians, and was base 60. Hexadecimal is actually used in
computers as well, with each possible group of 4 binary digits signifying one hexadecimal digit. The 6 digits past 9 in hexadecimal are A through F. This makes it easier to write numbers when dealing
with computers, since binary numbers use too many digits for relatively small numbers. An example of a non positional notation would be Roman numerals. With Roman numerals, you need new digits the
higher you go up, which makes it hard to symbolize large numbers. With positional systems, you can literally represent any possible number.
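The four-bits-per-hex-digit grouping mentioned above can be sketched as follows (the helper name is illustrative):

```python
# Pad to a multiple of 4 bits, then map each 4-bit group to one hex digit.
def bits_to_hex(bits: str) -> str:
    width = (len(bits) + 3) // 4 * 4
    bits = bits.zfill(width)
    groups = [bits[i:i + 4] for i in range(0, len(bits), 4)]
    return "".join(format(int(g, 2), "X") for g in groups)

print(bits_to_hex("10110100"))  # B4: 1011 -> B, 0100 -> 4
```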
The reason humans use base 10 is thought to be because we have 10 fingers. But some cultures (like the Babylonians mentioned before) use different bases. Some parts of Africa use binary, some
Aboriginal Australian languages use a quinary system, the Native American Yuki and Pamean languages use octal systems, and some Nigerian and Indian languages use duodecimal.
The reason computers use binary is because using only two different voltages is much less prone to error and easier to construct than using any more than that. However, if we develop technology so
it’s possible to make some type of higher base computer which has mechanics that can compute just as quickly and compactly as the system we use now, it will be able to process things with much less
information (for example, if we develop a base 10 computer, the binary string 10000000 will become 128, saving 5 places). Boolean algebra is used in computers in the form of logic gates to manipulate
binary numbers.
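As one tiny illustration of that Boolean-algebra point, a half adder — the basic circuit that adds two bits — is just an XOR gate and an AND gate:

```python
# XOR produces the sum bit; AND produces the carry bit.
def half_adder(a: int, b: int) -> tuple[int, int]:
    return a ^ b, a & b

print(half_adder(1, 1))  # (0, 1): in binary, 1 + 1 = 10
```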
Calculate the wavelength of the radiation released when an electron moves from n=5 to n=2
\[1/\lambda = R ( 1/n_a^{2} - 1/n_b^{2})\] \[R = 1.097 \times 10^{7}\ \mathrm{m}^{-1}\] \[1/\lambda = R (1/(2)^{2}-1/(5)^{2}) = R(21/100)\] \[\lambda = 100/(21R) \approx 4.34 \times 10^{-7}\ \mathrm{m} \approx 434\ \mathrm{nm}\]
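Plugging in numbers as a quick check (with R ≈ 1.097×10⁷ m⁻¹):

```python
# Rydberg formula for the n=5 -> n=2 (Balmer series) transition.
R = 1.097e7  # Rydberg constant, in m^-1

inv_wavelength = R * (1 / 2**2 - 1 / 5**2)  # = R * 21/100
wavelength = 1 / inv_wavelength             # in meters

print(wavelength)  # ~4.34e-07 m, i.e. about 434 nm
```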
thank you!
Clermont, GA Math Tutor
Find a Clermont, GA Math Tutor
...I have worked for several small businesses including start-ups and have set them up using Quickbooks. I have over 25 years of experience in Office Management/Controller positions. I can help
you set up your program, learn how to use it and be available for assistance anytime.
8 Subjects: including prealgebra, English, reading, accounting
...I am proficient in Microsoft Office 2007 and Medical Transcription. I also love very much to work with individuals that have Special Needs and have worked approximately three years with
developmentally-disabled children and adults in Georgia, Alabama, and Arkansas. I am very willing to try to meet any need you may have and to gain new knowledge in areas in which the need arises.
38 Subjects: including algebra 1, SAT math, reading, discrete math
I am prior military-- I served in the Army for four years. I am currently a Gainesville State College student studying Psychology. However, I have taken the initiative to take extra math classes,
since math comes easy to me.
6 Subjects: including calculus, precalculus, algebra 1, algebra 2
I have a BS and MS in Physics from Georgia Tech and a Ph.D. in Mathematics from Carnegie Mellon University. I worked for 30+ years as an applied mathematician for Westinghouse in Pittsburgh.
During that time I also taught as an adjunct professor at CMU and at Duquesne University in the Mathematics Departments.
10 Subjects: including algebra 1, algebra 2, calculus, geometry
I am a current freshman at University of North Georgia, with a major in Nursing and a minor in Spanish. I have tutored throughout high school with excellent results (reference available if
requested). I prefer to work with middle-school through high-school levels, as that is my average student age-...
10 Subjects: including algebra 1, reading, Spanish, writing
Related Clermont, GA Tutors
Clermont, GA Accounting Tutors
Clermont, GA ACT Tutors
Clermont, GA Algebra Tutors
Clermont, GA Algebra 2 Tutors
Clermont, GA Calculus Tutors
Clermont, GA Geometry Tutors
Clermont, GA Math Tutors
Clermont, GA Prealgebra Tutors
Clermont, GA Precalculus Tutors
Clermont, GA SAT Tutors
Clermont, GA SAT Math Tutors
Clermont, GA Science Tutors
Clermont, GA Statistics Tutors
Clermont, GA Trigonometry Tutors
Method for teaching critical thinking
Patent application title: Method for teaching critical thinking
Inventors: Kevin S. Winterrowd (Yukon, OK, US)
IPC8 Class: AG09B1900FI
USPC Class: 434236
Class name: Education and demonstration psychology
Publication date: 2009-04-30
Patent application number: 20090111076
A method for teaching critical thinking to students, using pre-selected teaching materials relating to a pre-selected subject having at least one concept relating to the pre-selected subject,
consists of teaching the students to identify the concept under consideration (Step 1), teaching the student to analyze the significance of the concept in relation to at least one relatively narrower
context (Step 2), and teaching the student to evaluate the significance of the concept in relation to at least one relatively broader context (Step 3).
A method for teaching critical thinking to students using pre-selected teaching materials relating to a pre-selected subject having at least one concept relating to the pre-selected subject, the method for teaching critical thinking comprising the steps of: teaching the students to identify the concept under consideration; teaching the student to analyze the significance of the concept in relation to at least one relatively narrower context; and teaching the student to evaluate the significance of the concept in relation to at least one relatively broader context.
The method of claim 1, wherein each step of the method is represented in a critical thinking diagram for reference by the students, the critical thinking diagram comprising: a triangle having two legs, a base, and at least one line parallel to the base positioned between the two legs above the base; wherein the concept is represented on the critical thinking diagram as the point of intersection of the two legs above the base; wherein the analysis step is represented on the critical thinking diagram by the at least one line parallel to the base; and wherein the evaluation step is represented on the critical thinking diagram by the base.
The method of claim 2, wherein the critical thinking diagram further comprises a vertical line connecting the point at the meeting of the legs of the triangle to the base, wherein the vertical line
is perpendicular to the base.
The method of claim 2 wherein the pre-selected teaching materials relate to the fine arts.
The method of claim 2 wherein the pre-selected teaching materials relate to the sciences.
The method of claim 2 wherein the pre-selected teaching materials relate to business.
The method of claim 2 wherein the pre-selected teaching materials relate to education.
The method of claim 2, wherein the pre-selected teaching materials relate to mathematics.
The method of claim 2, wherein the pre-selected teaching materials relate to history.
The method of claim 2, wherein the pre-selected teaching materials relate to earth sciences.
The method of claim 2, wherein the pre-selected teaching materials relate to language arts.
A method for learning critical thinking by students using pre-selected teaching materials relating to a pre-selected subject having at least one concept relating to the pre-selected subject, the method for learning critical thinking comprising the steps of: referring to a critical thinking diagram, the critical thinking diagram further comprising a triangle having two legs, a base, and at least one line parallel to the base positioned between the two legs above the base; identifying the concept under consideration in relation to the critical thinking diagram, wherein the concept is represented on the critical thinking diagram as the point of intersection of the two legs above the base; analyzing the significance of the concept in relation to at least one relatively narrower context, wherein the analysis is represented on the critical thinking diagram by the at least one line parallel to the base; and evaluating the significance of the concept in relation to at least one relatively broader context, wherein the evaluation is represented on the critical thinking diagram by the base.
The method of claim 12, wherein the critical thinking diagram further comprises a vertical line connecting the point at the meeting of the legs of the triangle to the base, wherein the vertical line
is perpendicular to the base.
The method of claim 12, wherein the pre-selected teaching materials relate to the fine arts.
The method of claim 12, wherein the pre-selected teaching materials relate to the sciences.
The method of claim 12, wherein the pre-selected teaching materials relate to business.
The method of claim 12, wherein the pre-selected teaching materials relate to education.
The method of claim 12, wherein the pre-selected teaching materials relate to the social sciences.
The method of claim 12, wherein the pre-selected teaching materials relate to history.
The method of claim 12, wherein the pre-selected teaching materials relate to earth sciences.
BACKGROUND OF THE INVENTION [0001]
1. Field of the Invention
This invention relates to education, and, more particularly, but not by way of limitation, to a method for teaching critical thinking. Optionally, the critical thinking process is represented by a critical thinking triangle. From the student's perspective, the invention is a method for learning how to engage in critical thinking using the critical thinking triangle.
2. Discussion
Following the 1948 Convention of the American Psychological Association, B. S. Bloom took a lead in formulating a classification of the goals of the educational process. Three domains of educational
activities were identified. The first of these, the Cognitive Domain, is a knowledge-based domain consisting of six levels. The second, the Affective Domain, is an attitudinal-based domain consisting
of five levels. The third, the Psychomotor Domain, is a skills-based domain consisting of six levels. Eventually, Bloom and his co-workers established a hierarchy of educational objectives, generally
referred to as Bloom's Taxonomy, which divides cognitive objectives ranging from the simplest behavior to the most complex.
Bloom's Taxonomy is a multi-tiered model of classifying thinking according to six cognitive levels of complexity. Over the years, the levels have often been depicted as a stairway, leading many
teachers to encourage their students to "climb to a higher level of thought." The lowest three levels are knowledge, comprehension, and application. The highest three levels are analysis, synthesis,
and evaluation.
Bloom's Taxonomy has been condensed, expanded, and reinterpreted in a variety of ways. It has provided a point of departure for numerous research projects and papers. During the 1990s, a former
student of Bloom's, Lorin Anderson, led a new assembly which met for the purpose of updating Bloom's Taxonomy, hoping to add relevance for 21st-century students and scholars. The effort resulted in a Revised Bloom's Taxonomy (RBT). Published in 2001, the revision includes several changes in three broad categories: terminology, structure, and emphasis.
Changes in terminology between the two versions are the most obvious differences. Bloom's six major categories (evaluation, synthesis, analysis, application, comprehension, and knowledge) were
changed from nouns to verbs. The "lowest" level of the original taxonomy, knowledge, was renamed "remembering." Finally, comprehension and synthesis were renamed understanding and creating,
respectively. The six categories in the RBT are remembering, understanding, applying, analyzing, evaluating, and creating.
According to the Revised Bloom's Taxonomy, remembering is defined as retrieving, recognizing, and recalling relevant knowledge from long-term memory. Understanding is defined as constructing meaning
from oral, written, and graphic messages through interpreting, exemplifying, classifying, summarizing, inferring, comparing, and explaining. Applying is defined as carrying out or using a procedure
through executing or implementing. Analyzing is defined as breaking material into constituent parts, determining how the parts relate to one another and to an overall structure or purpose through
differentiating, organizing, and attributing. Evaluating means making judgments based on criteria and standards through checking and critiquing. Finally, Creating means putting elements together to
form a coherent or functional whole (reorganizing elements into a new pattern or structure through generating, planning, or producing).
As history has shown, this well known, widely applied scheme filled a void and provided educators with one of the first systematic classifications of the process of thinking and learning. As teachers
struggle to help students acquire both knowledge and critical thinking skills, Bloom's Taxonomy and the Revised Bloom's Taxonomy remain easy for teachers to understand. Teachers must measure their students' ability, and accurate measurement requires a classification of the levels of intellectual behavior important in learning. Bloom's Taxonomy and the Revised Bloom's Taxonomy provide that measurement tool for thinking.
Today's teachers must make tough decisions about how to spend their classroom time. The use of Bloom's Taxonomy and the Revised Bloom's Taxonomy provides a framework for the teacher to identify the fit of each lesson plan's purpose, essential question, goal, or objective.
Yet, neither Bloom's Taxonomy nor the Revised Bloom's Taxonomy translates directly into classroom activities. Accordingly, what is needed is a method of teaching (and learning) which incorporates the
teachings of Bloom's Taxonomy (and the RBT) in a form which can be applied systematically and effectively in the classroom.
SUMMARY OF THE INVENTION [0012]
According to a method for teaching critical thinking, the student is first taught to identify the concept or fact under consideration (Step 1). Next, the student is taught to analyze the significance of the
concept or fact in relation to at least one relatively narrower context (Step 2). Then, the student is taught to evaluate the significance of the concept in relation to at least one relatively
broader context (Step 3). The method also provides a triangular critical thinking diagram wherein each step is represented and the student progresses from Step 1 (a single point) to Step 2 to Step 3
within the critical thinking diagram.
An object of the present invention is to provide a classroom-appropriate method of teaching critical thinking to students.
Yet another object of the present invention is to provide a framework in which students can learn critical thinking skills.
Yet another object of the present invention is to provide a graphic representation for use by students in evaluating their responses to questions regarding concepts relating to pre-selected teaching materials.
Other objects, features, and advantages of the present invention will become clear from the following description of the preferred embodiment when read in conjunction with the accompanying drawings
and appended claims.
BRIEF DESCRIPTION OF THE DRAWINGS [0017]
FIG. 1 is a diagram showing the steps of applicant's method for teaching critical thinking.
FIG. 2 is a diagram showing the steps of another method for teaching critical thinking according to applicant's invention.
FIG. 3 is a diagram showing the steps of a method for learning critical thinking according to applicant's invention.
FIG. 4 is a view of a triangular critical thinking diagram according to applicant's invention.
FIG. 5 is a view of another critical thinking triangle according to applicant's invention.
FIG. 6 is a view of another critical thinking triangle according to applicant's invention.
FIG. 7 is a view of another critical thinking triangle according to applicant's invention.
FIG. 8 is a view of another critical thinking triangle according to applicant's invention.
FIG. 9 is a view of another critical thinking triangle according to applicant's invention.
DETAILED DESCRIPTION OF THE INVENTION [0026]
In the following description of the invention, like numerals and characters designate like elements throughout the figures of the drawings.
Referring now to FIG. 1, shown therein are the related steps of applicant's method for teaching critical thinking to students. In Step 1, the students are taught to identify the concept under
consideration. In Step 2, the students are taught to analyze the significance of the concept in relation to at least one relatively narrower context. In Step 3, the students are taught to evaluate
the significance of the concept in relation to at least one relatively broader context.
Referring now to FIG. 2, shown therein are the related steps of another method for teaching critical thinking to students according to applicant's invention. Each step is represented on a critical thinking diagram (See
FIG. 4). In Step 1, the students are taught to identify the concept under consideration wherein the concept is represented by a point on the critical thinking diagram shown in FIG. 4. In Step 2, the
students are taught to analyze the significance of the concept in relation to at least one relatively narrower context, wherein the analysis step is represented on the critical thinking diagram shown
in FIG. 4. In Step 3, the students are taught to evaluate the significance of the concept in relation to at least one relatively broader context. The evaluation step is represented on the critical
thinking diagram shown in FIG. 4.
Referring now to FIG. 3, shown therein are the related steps of applicant's method for learning critical thinking. In Step 1, the students learn to identify the concept under consideration. In Step
2, the students learn to analyze the significance of the concept in relation to at least one relatively narrower context. In Step 3, the students learn to evaluate the significance of the concept in
relation to at least one relatively broader context.
Referring now to FIG. 4, a critical thinking triangle 50 according to the present invention is defined by points 52, 54, and 56 forming a left leg 52-54, a right leg 52-56, and a base 54-56. Points
58, on the left leg 52-54, and 60, on the right leg 52-56, define a line 58-60 positioned between the point 52 and the base 54-56 and parallel to the base 54-56. A point 62 on the base 54-56
cooperates with the point 52 to define a vertical line 52-62 perpendicular to the base 54-56. The vertical line 52-62 intersects the line 58-60 at a point 64. As will be discussed more fully below,
an arrow 66 represents a progression from the point 52 toward the point 64 on the line 58-60. An arrow 68 represents a further progression from the point 64 toward the point 62 on the base 54-56.
Still referring to FIG. 4, points 52, 58, and 60 define a contained triangle 70 having legs 52-58, 52-60, and a base 58-60. The contained triangle 70 further contains a left triangular portion 72 defined by legs 52-58, 52-64 and a base 58-64 and a right triangular portion 74 defined by legs 52-60, 52-64 and a base 64-60. The points 54, 56, 58, and 60 define a contained trapezoid 76 having a left leg 54-58, a right leg 56-60, a top 58-60, and a base 54-56. The contained trapezoid 76 further contains a left trapezoidal portion 78, defined by lines 54-58, 54-62, 58-64, and 62-64, and a
right trapezoidal portion 80, defined by lines 62-56, 56-60, 60-64, and 62-64.
Still referring to FIG. 4, a downwardly-pointing arrow 66 indicates progression from the point 52 toward the line 58-60. Another downwardly-pointing arrow 68 indicates progression from the line 58-60
toward the base 54-56.
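The plane geometry described above is straightforward to check numerically. The short sketch below is illustrative only; the coordinates, and the choice of placing the midline halfway down the legs, are our own assumptions, not values taken from the patent drawings:

```python
# Illustrative coordinates for an isosceles triangle like FIG. 4:
# apex 52 at the top, base 54-56 at the bottom, midline 58-60 parallel
# to the base, and a perpendicular 52-62 dropped from the apex.
apex = (0.0, 2.0)        # point 52 (Step 1: the identified concept)
p54 = (-2.0, 0.0)        # left base vertex
p56 = (2.0, 0.0)         # right base vertex

def lerp(p, q, t):
    """Point a fraction t of the way from p to q."""
    return (p[0] + t * (q[0] - p[0]), p[1] + t * (q[1] - p[1]))

p58 = lerp(apex, p54, 0.5)   # midline left endpoint (Step 2: analysis)
p60 = lerp(apex, p56, 0.5)   # midline right endpoint
p62 = (apex[0], p54[1])      # foot of the perpendicular on the base (point 62)
p64 = (apex[0], p58[1])      # perpendicular meets the midline at point 64
```

Because the triangle is isosceles and the perpendicular passes through the apex, point 64 bisects the midline and point 62 bisects the base, which matches the "balanced analysis" and "balanced evaluation" the symmetric figure is meant to suggest.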
It will be understood by one skilled in the art that the present method for teaching critical thinking is applicable to all disciplines, including, but not limited to, arts and sciences (including
the fine arts and the earth sciences), engineering, mathematics, business, education, history, the social sciences, the study of law, the study of medicine, the study of dentistry, the study of
accounting, and language arts. Specifically included are the following: algebra, geometry, calculus, biology, zoology, geology, chemistry, physics, world history, U.S. history, government, political
science, chemical engineering, mechanical engineering, electrical engineering, English, literature, composition, social studies, psychology, economics, health, physical education, and reading. It
will be further understood that, while the present method for teaching critical thinking is universally applicable across all disciplines, the present method, with its utilization of a critical thinking
diagram, is especially well suited for secondary school students.
To further illustrate the present method for teaching critical thinking, we will discuss an example involving a World History class wherein the pre-selected teaching materials include a unit on World
War I. Within the unit on World War I, a section is devoted to the Treaty of Versailles, which brought an official end to World War I. Step 1 questions typically begin with verbs such as tell, list,
describe, relate, locate, write, find, state, name, identify, or define.
TABLE-US-00001

Step 1. Identify the concept/event under consideration.
    Teacher request: Define/describe the Treaty of Versailles.
    Desired student response: Signed in 1919, the Treaty of Versailles was a treaty between the Allies and Germany (from pre-selected teaching materials).

Step 2. Analyze the significance of the concept/event in relation to a relatively narrower context.
    Teacher request: Analyze the treaty's significance with respect to World War I.
    Desired student response: The Treaty of Versailles brought World War I to an official close (from materials); or the Treaty of Versailles imposed payment of harsh reparations on post-War Germany (from materials); or the Treaty of Versailles set the stage for the rise of a nationalistic Germany and the Nazi Party (from materials).

Step 3. Evaluate the significance of the concept/event in relation to a relatively broader context.
    Teacher request: Evaluate the significance of the Treaty of Versailles on U.S. history and policy.
    Desired student response: The U.S. learned an important lesson. Instead of requiring harsh reparations, a better approach is to help defeated enemies to rebuild and convert them to trading partners and allies (not included in materials). Discuss later U.S. actions following World War II.
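For illustration only, the three-step lesson in the table above can be encoded as a simple data structure and walked in order. The field names and the `run_lesson` helper are our own inventions for this sketch, not part of the claimed method:

```python
# Hypothetical encoding of the Treaty of Versailles lesson (table above).
lesson = [
    {"step": 1, "verb": "identify",
     "request": "Define/describe the Treaty of Versailles.",
     "source": "pre-selected teaching materials"},
    {"step": 2, "verb": "analyze",
     "request": "Analyze the treaty's significance with respect to World War I.",
     "source": "teaching materials"},
    {"step": 3, "verb": "evaluate",
     "request": "Evaluate the significance of the Treaty of Versailles "
                "on U.S. history and policy.",
     "source": "student evaluation beyond the materials"},
]

def run_lesson(steps):
    """Yield teacher requests in the fixed identify -> analyze -> evaluate order."""
    for s in sorted(steps, key=lambda s: s["step"]):
        yield s["verb"], s["request"]

order = [verb for verb, _ in run_lesson(lesson)]
```

The point of the encoding is simply that the progression is fixed: identification always precedes analysis, and analysis always precedes evaluation.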
In the first step (see FIG. 1 and the table above), the teacher would teach the student to identify the 1919 Treaty of Versailles. In Step 2, the student would be taught to analyze the 1919 Treaty of Versailles with
respect to World War I in 2-3 sentences drawing on information contained in the teaching materials. The student might point out that Germany was kept under blockade until she signed, that the Treaty
of Versailles declared Germany responsible for the war, that the Treaty of Versailles required Germany to pay enormous war reparations and cede territory to the victors, that the harsh reparations
and territorial losses caused enormous bitterness in Germany, giving rise to nationalist movements, especially the Nazis, or that the treaty contributed to one of the worst economic collapses in
German history, sparking runaway inflation in the 1920s. In Step 3, the students might be asked to evaluate the concept in relation to a relatively broader context, preferably the students' lifetime
experience or expectations. The students might be taught to consider lessons learned by the United States and its allies regarding treatment of defeated enemies, with costs for rebuilding paid, at
least in part, by the American taxpayer, and with long-term economic and political commitments.
Referring now to FIGS. 1 and 4 and the Treaty of Versailles example, the student would be taught that Step 2 (analysis) is represented by the line 58-60 in FIG. 4 and, further, that Step 3
(evaluation) is represented by the base 54-56 in FIG. 4. The relatively narrower context (Step 2) is the impact on post-war Germany. The relatively broader context (Step 3) would be the long-lasting
impact on U.S. policy with respect to defeated enemies. The arrow 66 indicates a progression of the student's thinking from a statement of identification of the concept (represented by the point 52), with little else, to analysis of the significance of the concept in relation to a relatively narrower context (represented by the line 58-60). The arrow 68 indicates a progression of the student's
thinking from the analysis of the significance of the concept in relation to a relatively narrower context (Step 2) to evaluation of the significance of the concept in a relatively broader context
(represented by the base 54-56; Step 3).
Referring now to FIG. 5, another critical thinking triangle 150 according to the present invention is defined by points 152, 154, and 156 forming a left leg 152-154, a right leg 152-156, and a base 154-156. Points 158, on
the left leg 152-154, and 160, on the right leg 152-156, define a line 158-160 positioned between the point 152 and the base 154-156 and parallel to the base 154-156. A point 162 on the base 154-156
cooperates with the point 152 to define a vertical line 152-162 perpendicular to the base 154-156. The vertical line 152-162 intersects the line 158-160 at a point 164. An arrow 166 represents a
progression from the point 152 toward the point 164 on the line 158-160. An arrow 168 represents a further progression from the point 164 toward the point 162 on the base 154-156.
Still referring to FIG. 5, points 152, 158, and 160 define a contained triangle 170 having legs 152-158, 152-160, and a base 158-160. The contained triangle 170 further contains a left triangular portion 172 defined by legs
152-158, 152-164 and a base 158-164 and a right triangular portion 174 defined by legs 152-160, 152-164 and a base 164-160. The points 154, 156, 158, and 160 define a contained trapezoid 176 having a
left leg 154-158, a right leg 156-160, a top 158-160, and a base 154-156. The contained trapezoid 176 further contains a left trapezoidal portion 178, defined by lines 154-158, 154-162, 158-164, and
162-164, and a right trapezoidal portion 180, defined by lines 162-156, 156-160, 160-164, and 162-164.
Still referring to FIG. 5, a downwardly-pointing arrow 166 indicates progression from the point 152 toward the line 158-160. Another downwardly-pointing arrow 168 indicates progression from the line 158-160 toward the base 154-156.
Referring now to FIG. 5 in conjunction with FIG. 1, the concept to be identified by the students in Step 1 is represented on the critical thinking diagram 150 as the point of intersection 152 of the two legs 152-154 and
152-156 above the base 154-156. In Step 2, the students are taught to analyze the significance of the concept in relation to at least one relatively narrower context. The analysis is represented on
the critical thinking diagram by the line 158-160 parallel to the base 154-156. Finally, in Step 3 the students are taught to evaluate the significance of the concept in relation to at least one
relatively broader context, wherein the evaluation is represented on the critical thinking diagram by the base 154-156.
Referring now to FIGS. 4 and 5, the distance between the point 52 and the point 64 in FIG. 4 is greater than the distance between the point 152 and the point 164 in FIG. 5, thereby suggesting a somewhat decreased progression between the Steps 1 and 2 in the critical thinking triangle 150 in FIG. 5 as compared to the progression between the Steps 1 and 2 as represented by the critical thinking triangle 50 in FIG. 4. In some cases and with certain teaching materials, the progress from concept
identification/definition to analysis will be minimal. These differences are not significant. Rather, what is significant is the process by which the students learn to advance from a statement of
fact (Step 1) to analysis (Step 2), wherein the students consider the significance of the fact in relation to a relatively narrower context, and then to evaluation (Step 3), wherein the students
consider the significance of the fact in relation to a relatively broader context.
It will be understood by one skilled in the art that the length of the lines 58-60 (FIG. 4) and 158-160 (FIG. 5), although shown as terminating at the legs 52-54, 52-56 of the critical thinking triangle 50 and the legs 152-154, 152-156 of the critical thinking triangle 150, respectively, is not intended to
indicate a limitation on the relatively narrower context in which the concept is considered by the students in the analysis Step 2. The breadth of the students' analysis is determined by the teaching
materials provided to the student (since the student response is drawn from the teaching materials) and the instructions given by the teacher. For example, the teacher may limit the students'
response to 1-2 sentences in some cases, or the teacher may ask the students to provide 3 sentences in the analysis Step 2.
It will also be understood by one skilled in the art that, whereas Step 1 asks for a factual answer which is either right or wrong, Step 2 may have numerous answers all of which are acceptable so
long as they are supported by the teaching materials. Step 3 of the present method for teaching critical thinking involves a combination of teaching materials content with student reaction,
interaction, and evaluation, none of which are contained in the teaching materials. Step 3 may invoke cultural responses which may vary from student to student. The goal of Step 3 is to teach the
student to evaluate the significance of a concept/fact under consideration in a relatively broader context, thereby facilitating retention of both the concept/fact under consideration and developing
the student's ability to think beyond the bare concept/fact.
Referring still to FIGS. 4 and 5, the critical thinking triangles 50, 150 are isosceles triangles. Therefore, the distances to the left and right of the vertical lines 52-62 and 152-162,
respectively, are equal, thereby suggesting a balanced analysis (Step 2) and a balanced evaluation (Step 3).
Referring now to FIG. 6, another critical thinking triangle 250 according to the present invention is defined by points 252, 254, and 256 forming a left leg 252-254, a right leg 252-256, and a base
254-256. Points 258, on the left leg 252-254, and 260, on the right leg 252-256, define a line 258-260 positioned between the point 252 and the base 254-256 and parallel to the base 254-256. A point
262 on the base 254-256 cooperates with the point 252 to define a vertical line 252-262 perpendicular to the base 254-256. The vertical line 252-262 intersects the line 258-260 at a point 264. An
arrow 266 represents a progression from the point 252 toward the point 264 on the line 258-260. An arrow 268 represents a further progression from the point 264 toward the point 262 on the base 254-256.
Still referring to FIG. 6, points 252, 258, and 260 define a contained triangle 270 having legs 252-258, 252-260, and a base 258-260. The contained triangle 270 further contains a left triangular portion 272 defined by legs 252-258, 252-264 and a base 258-264 and a right triangular portion 274 defined by legs 252-260, 252-264 and a base 264-260. The points 254, 256, 258, and 260 define a
contained trapezoid 276 having a left leg 254-258, a right leg 256-260, a top 258-260, and a base 254-256. The contained trapezoid 276 further contains a left trapezoidal portion 278, defined by
lines 254-258, 254-262, 258-264, and 262-264, and a right trapezoidal portion 280, defined by lines 262-256, 256-260, 260-264, and 262-264.
Still referring to FIG. 6, a downwardly-pointing arrow 266 indicates progression from the point 252 toward the line 258-260. Another downwardly-pointing arrow 268 indicates progression from the line
258-260 toward the base 254-256.
Referring now to FIG. 7, another critical thinking triangle 350 according to the present invention is defined by points 352, 354, and 356 forming a left leg 352-354, a right leg 352-356, and a base
354-356. Points 358, on the left leg 352-354, and 360, on the right leg 352-356, define a line 358-360 positioned between the point 352 and the base 354-356 and parallel to the base 354-356. A point
362 on the base 354-356 cooperates with the point 352 to define a vertical line 352-362 perpendicular to the base 354-356. The vertical line 352-362 intersects the line 358-360 at a point 364. An
arrow 366 represents a progression from the point 352 toward the point 364 on the line 358-360. An arrow 368 represents a further progression from the point 364 toward the point 362 on the base 354-356.
Still referring to FIG. 7, points 352, 358, and 360 define a contained triangle 370 having legs 352-358, 352-360, and a base 358-360. The contained triangle 370 further contains a left triangular
portion 372 defined by legs 352-358, 352-364 and a base 358-364 and a right triangular portion 374 defined by legs 352-360, 352-364 and a base 364-360. The points 354, 356, 358, and 360 define a
contained trapezoid 376 having a left leg 354-358, a right leg 356-360, a top 358-360, and a base 354-356. The contained trapezoid 376 further contains a left trapezoidal portion 378, defined by
lines 354-358, 354-362, 358-364, and 362-364, and a right trapezoidal portion 380, defined by lines 362-356, 356-360, 360-364, and 362-364.
Still referring to FIG. 7, a downwardly-pointing arrow 366 indicates progression from the point 352 toward the line 358-360. Another downwardly-pointing arrow 368 indicates progression from the line
358-360 toward the base 354-356.
Referring now to FIG. 8, another critical thinking triangle 450 according to the present invention is defined by points 452, 454, and 456 forming a left leg 452-454, a right leg 452-456, and a base
454-456. Points 458, on the left leg 452-454, and 460, on the right leg 452-456, define a line 458-460 positioned between the point 452 and the base 454-456 and parallel to the base 454-456. A point
462 on the base 454-456 cooperates with the point 452 to define a vertical line 452-462 perpendicular to the base 454-456. The vertical line 452-462 intersects the line 458-460 at a point 464. An
arrow 466 represents a progression from the point 452 toward the point 464 on the line 458-460. An arrow 468 represents a further progression from the point 464 toward the point 462 on the base 454-456.
Still referring to FIG. 8, points 452, 458, and 460 define a contained triangle 470 having legs 452-458, 452-460, and a base 458-460. The contained triangle 470 further contains a left triangular
portion 472 defined by legs 452-458, 452-464 and a base 458-464 and a right triangular portion 474 defined by legs 452-460, 452-464 and a base 464-460. The points 454, 456, 458, and 460 define a
contained trapezoid 476 having a left leg 454-458, a right leg 456-460, a top 458-460, and a base 454-456. The contained trapezoid 476 further contains a left trapezoidal portion 478, defined by
lines 454-458, 454-462, 458-464, and 462-464, and a right trapezoidal portion 480, defined by lines 462-456, 456-460, 460-464, and 462-464.
Still referring to FIG. 8, a downwardly-pointing arrow 466 indicates progression from the point 452 toward the line 458-460. Another downwardly-pointing arrow 468 indicates progression from the line
458-460 toward the base 454-456.
Referring now to FIG. 9, another critical thinking triangle 550 according to the present invention is defined by points 552, 554, and 556 forming a left leg 552-554, a right leg 552-556, and a base
554-556. Points 558, on the left leg 552-554, and 560, on the right leg 552-556, define a line 558-560 positioned between the point 552 and the base 554-556 and parallel to the base 554-556. A point
562 on the base 554-556 cooperates with the point 552 to define a vertical line 552-562 perpendicular to the base 554-556. The vertical line 552-562 intersects the line 558-560 at a point 564. An
arrow 566 represents a progression from the point 552 toward the point 564 on the line 558-560. An arrow 568 represents a further progression from the point 564 toward the point 562 on the base 554-556.
Still referring to FIG. 9, points 552, 558, and 560 define a contained triangle 570 having legs 552-558, 552-560, and a base 558-560. The contained triangle 570 further contains a left triangular
portion 572 defined by legs 552-558, 552-564 and a base 558-564 and a right triangular portion 574 defined by legs 552-560, 552-564 and a base 564-560. The points 554, 556, 558, and 560 define a
contained trapezoid 576 having a left leg 554-558, a right leg 556-560, a top 558-560, and a base 554-556. The contained trapezoid 576 further contains a left trapezoidal portion 578, defined by
lines 554-558, 554-562, 558-564, and 562-564, and a right trapezoidal portion 580, defined by lines 562-556, 556-560, 560-564, and 562-564.
Still referring to FIG. 9, a downwardly-pointing arrow 566 indicates progression from the point 552 toward the line 558-560. Another downwardly-pointing arrow 568 indicates progression from the line
558-560 toward the base 554-556.
Still referring to FIG. 9, points 582 and 584 on the left leg 552-554 and the right leg 552-556, respectively, of the critical thinking triangle 550 define a line 582-584 representing an additional analysis (Step 2) between the point 552 (representing a Step 1 concept/fact) and the line 558-560 (representing the Step 2 analysis). The representation of an additional analysis step reflects a common occurrence wherein progressive analyses are appropriate. Points 586 and 588 on the left leg 552-554 and the right leg 552-556, respectively, of the critical thinking triangle 550 define a line
586-588 representing an additional evaluation Step 3 according to the present method. The representation of an additional evaluation step reflects a common occurrence wherein progressive evaluations
are appropriate.
To further illustrate the method for teaching critical thinking according to the present invention, consider the case of students studying To Kill A Mockingbird, by Harper Lee, as part of an American
Literature class. In a particular passage of interest for the purpose of this illustration, Scout, the young girl who narrates the story, tells of an incident wherein the Sheriff is faced with a
rabid dog. To Scout's surprise, the Sheriff asks Atticus Finch, Scout's father, to take the Sheriff's rifle and shoot the dangerous dog. Scout was surprised because, in her life experience, she had
never known Atticus Finch to have anything to do with guns. Atticus takes the rifle from the Sheriff, aims, fires, and kills the rabid dog with a single shot. Recognizing Scout's amazement, the
Sheriff informs Scout that Atticus had long been the best shot in the county.
Applying applicant's method for teaching critical thinking to the students' study of the passage described above, the instructor would first (in Step 1), teach the students to identify the concept/
fact under consideration. In this case, the concept/fact under consideration is the killing of the rabid dog by Atticus Finch. The Step 2 analysis (using the selected passage in conjunction with
related materials from the book) would include a short discussion of Scout's lack of knowledge of her father's reputation as a crack shot. The Step 3 evaluation might include, among others, a
discussion of the idea that we often do not know a person as well as we think--even members of our own family. Stated another way, the student might conclude that we should be open to seeing
previously unseen aspects of those around us.
The foregoing descriptions of specific embodiments of the present invention have been presented for purposes of illustration and description. They are not intended to be exhaustive or to limit the
invention to the precise forms disclosed, and obviously many modifications and variations are possible in light of the above teaching. The embodiments were chosen and described in order to best
explain the principles of the invention and its practical application, to thereby enable others skilled in the art to best utilize the invention and various embodiments with various modifications as
are suited to the particular use contemplated. It is intended that the scope of the invention be defined by the claims appended hereto and their equivalents.
Comment about this patent or add new information about this topic: | {"url":"http://www.faqs.org/patents/app/20090111076","timestamp":"2014-04-18T03:47:59Z","content_type":null,"content_length":"62442","record_id":"<urn:uuid:8d688ac4-9a3c-4c45-8ce5-81e28a1411ac>","cc-path":"CC-MAIN-2014-15/segments/1397609532480.36/warc/CC-MAIN-20140416005212-00065-ip-10-147-4-33.ec2.internal.warc.gz"} |
Suspended Decoupler: A New Design of Hydraulic Engine Mount
Advances in Acoustics and Vibration
Volume 2012 (2012), Article ID 826497, 11 pages
Research Article
^1MTS Systems Corporation, 14000 Technology Drive, Eden Prairie, MN 55344-2290, USA
^2Department of Mechanical Engineering, Milwaukee School of Engineering, Milwaukee, WI 53202, USA
^3School of Aerospace, Mechanical, and Manufacturing Engineering, RMIT University, Melbourne, VIC 8083, Australia
Received 28 June 2011; Accepted 14 September 2011
Academic Editor: Mohammad Tawfik
Copyright © 2012 J. Christopherson et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in
any medium, provided the original work is properly cited.
Because of the density mismatch between the decoupler and surrounding fluid, the decoupler of all hydraulic engine mounts (HEM) might float, sink, or stick to the cage bounds, assuming static
conditions. The problem appears in the transient response of a bottomed-up floating decoupler hydraulic engine mount. To overcome the bottomed-up problem, a suspended decoupler design for improved
decoupler control is introduced. The new design does not noticeably affect the mechanism's steady-state behavior, but improves start-up and transient response. Additionally, the decoupler mechanism
is incorporated into a smaller, lighter, yet more tunable and hence more effective hydraulic mount design. The steady-state response of a dimensionless model of the mount is examined utilizing the
averaging perturbation method applied to a set of second-order nonlinear ordinary differential equations. It is shown that the frequency responses of the floating and suspended decoupler designs are similar and functional. For more realistic modeling, utilizing nonlinear finite elements in conjunction with a lumped-parameter modeling approach, we evaluate the nonlinear restoring
characteristics of the components and implement them in the equations of motion.
1. Introduction and Statement of Problem
Modern vehicles show a trend toward lighter, higher-performance, aluminum-based engines, thereby increasing the potential for vibration. The engine is the largest concentrated mass in a vehicle
and causes vibration if it is not properly isolated and constrained. The trend for many years to isolate vibrations was to simply connect the engine and frame by means of an engine mount made of
elastomeric materials such as rubber [1–3]. Modeling the rubber isolator as a linear, base-excited single-degree-of-freedom system, the frequency response curves of the acceleration transmitted to the isolated mass exhibit a crossing point at a frequency ratio of √2, at which all the curves representing systems with differing damping ratios converge [4]. This is a switching point at which the system behavior reverses depending upon excitation frequency. This paradoxical behavior indicates that for optimum isolation of a structure from acceleration, and hence force, a mount is needed in which high damping is provided at low excitation frequencies and low damping at increased excitation frequencies.
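This crossover can be checked numerically. For a base-excited single-degree-of-freedom system, the acceleration transmissibility is |T(r)| = sqrt[(1 + (2ζr)²) / ((1 − r²)² + (2ζr)²)] with r = ω/ωn. The short illustrative script below (not part of the original paper) confirms that every damping ratio gives |T| = 1 at r = √2, and that the effect of damping reverses across that point:

```python
import math

def transmissibility(r, zeta):
    """Acceleration transmissibility of a base-excited SDOF system
    at frequency ratio r = omega/omega_n with damping ratio zeta."""
    num = 1.0 + (2.0 * zeta * r) ** 2
    den = (1.0 - r * r) ** 2 + (2.0 * zeta * r) ** 2
    return math.sqrt(num / den)

r_cross = math.sqrt(2.0)
# All damping ratios give |T| = 1 at the crossing point r = sqrt(2).
values = [transmissibility(r_cross, z) for z in (0.05, 0.2, 0.5, 1.0)]
print(values)  # each entry is 1.0

# Below the crossover, higher damping lowers transmissibility near resonance;
# above it, higher damping *raises* transmissibility -- the paradox in the text.
low = transmissibility(1.0, 0.1) > transmissibility(1.0, 0.5)
high = transmissibility(3.0, 0.1) < transmissibility(3.0, 0.5)
print(low, high)  # True True
```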
Because there is a need for a vibration isolator that can exhibit a frequency-dependent damping ratio, the hydraulic engine mount was introduced. The hydraulic engine mount is a
device that approximately provides the desired damping characteristics via the implementation of a mechanical switching mechanism known as the decoupler in conjunction with a narrow, highly
restrictive fluid path known as the inertia track [1, 2, 5–9]. These two mechanisms act together, assuming an appropriately designed system, to provide a passive means of variable damping dependent
upon excitation characteristics [10]. More specifically, when a large pressure differential is imparted to the fluid chambers, by means of a substantial outside perturbation, the decoupler will
bottom out in its cage bounds and cause the pressure differential within the mount to be equalized via the inertia track. Due to the inertia track's dimensions, it provides an increased damping
coefficient to the engine mount. However, when the external perturbation is low in intensity, or at increased frequency, the decoupler does not bottom out, and hence the inertia track is effectively
short-circuited; therefore, due to the decoupler’s large dimensions, the system provides a low damping coefficient.
Figure 1(a) illustrates a schematic of a typical floating-decoupler hydraulic engine mount (HEM) [9, 10]. The engine mount is named as such because the decoupler “floats” freely inside of its
housing. The basic premise of operation of the HEM is relatively straightforward. The engine is supported by a rubber structure acting as the main load carrying component, and a means by which to
induce fluid motion within the engine mount [2]. The fluid motion induced in the engine mount, due to external excitation, is then forced through a system of passageways of inertia track and
decoupler. The preferred pathway is dependent upon the nature of the excitation.
The decoupler and its housing are shown in Figure 1(b). Large amplitude, low frequency excitations impart a significant enough fluid motion that the decoupler plate is forced to bottom out on its
surrounding cage thereby forcing the fluid to flow through the inertia track into the compliant lower chamber. The inertia track is a long, small-diameter tube that runs circumferentially around the
engine mount providing a very restrictive flow path between upper and lower chambers. Due to the restrictive nature of the inertia track, an increased viscous damping coefficient is realized for the
system. This increased damping acts to reduce the acceleration transmissibility of the mount at low excitation frequencies. However, at increased excitation frequencies, the decoupler plate does not
bottom out on the cage bounds. Instead it moves back and forth freely providing a relatively low flow restriction. Because the decoupler provides a low restriction to fluid flow, it becomes the
preferred flow path, and acts on reducing the damping coefficient of the engine mount.
This system works quite well and is in place in the large majority of automotive applications to date. It has been analyzed and modeled by researchers from different viewpoints since the 1980s. Adiguna and coworkers determined the dynamic behavior of HEMs in the time domain [11] and frequency domain [5], utilizing linear and nonlinear lumped models [12]. The nonlinear function of the decoupler was successfully
modeled, examined, and applied by Golnaraghi and Jazar [13, 14] utilizing a third-degree equation to describe a nonlinear damping. Adapting their model, Christopherson and Jazar [15, 16] optimized a
sprung mass suspended by an HEM and provided a design method.
In the present investigation we study two common assumptions and explore their effects on the modeling and dynamics of hydraulic engine mounts. The first assumption is that, in the lumped model of the system, the nonlinearities involved in the elastomechanical parts may be ignored and linear behavior assumed. The second assumption is that, in both transient and steady-state responses, the decoupler is settled in its neutral position, exactly in the middle of the gap of the decoupler duct. Two questions therefore arise: what are the effects of the nonlinearities involved in the elastomechanical parts, and what happens on initial start-up if the decoupler is bottomed up?
This investigation will utilize finite element analysis to determine the mechanical behavior of the components, and will employ perturbation analysis to determine transient and steady-state behaviors
of the mount.
Because of the density mismatch between the decoupler and surrounding fluid, the decoupler will float, sink, twist, or stick to the cage bounds, assuming static conditions. The problem is what happens if, for a given random or initial excitation, the decoupler is in a nonoptimal location: neither open to provide low damping nor closed to allow for high damping.
We introduce a supported decoupler mechanism illustrated in Figure 2. By supporting the decoupler it is ensured to be in a neutral location upon startup. However, the trick to designing such a
mechanism is to ensure that the nature of the support does not influence the previously mentioned steady-state operation of the mechanism, while still maintaining the advantage of the supported
decoupler in the initial transient response. Here the decoupler disk is supported and forced to remain in a neutral position, equidistant between the cage limits; however, the decoupler is not fixed against motion. This design requires the decoupler to be made of an elastomeric material that provides sufficient stiffness to keep the decoupler located during nonexcited (static) conditions and sufficient flexibility to allow normal operation during dynamic events.
Up to the present, very few researchers have looked at the start-up or transient behavior of the hydraulic mount, Adiguna et al. being among the few [11]. The behavior of the mount with a bottomed-up decoupler has never been investigated, although after a short period of time the decoupler recovers its intended function. In the current floating-decoupler design, however, the decoupler might simply sit against one of the cage bounds while not excited, depending upon the mounting configuration and the density mismatch with the surrounding fluid, causing the system to initially utilize only the inertia track. After every removal of excitation, the decoupler might again sink or float and recreate the problem. In either condition it becomes apparent that, because it is the decoupler position that allows the mount to act as either a low-damping or a high-damping mechanism, the position of the decoupler during the aforementioned excitations is quite important.
2. Suspended Decoupler HEM Model Description
The floating-decoupler HEM is well described in the literature [5–16]. For comparison with the floating type, here we describe a suspended-decoupler HEM. Noting the disadvantages of the floating decoupler compared to the suspended-decoupler mount, it seems advantageous to design a new mount utilizing such a suspended decoupler mechanism. Such a mount should provide effective isolation characteristics
through a broad frequency spectrum while maintaining or surpassing existing hydraulic engine mount benchmarks for performance.
Figure 3 illustrates a schematic of the proposed design intended to meet the aforementioned criteria. The mount utilizes the same decoupler mechanism (1) as illustrated in Figure 2. In addition, the
mount does away with the traditional upper rubber structure common to practically every modern hydraulic engine mount. Instead the proposed mount makes use of a Belleville spring (2) to provide the
primary axial stiffness and a thick circumferential rubber band (3) surrounding the upper structure of the mount to limit transverse motions of the mount. The volumetric compliance of the upper
chamber of the mount is provided through a relatively thick rubber chamber (4) which is mechanically fastened to the upper moving head of the engine mount (5). The advantage of such a structure over
the traditional rubber structure is twofold. First, the stiffness of the engine mount is more tunable and is as simple as appropriate spring sizing as compared to complicated geometrical designs
required for the current rubber structure. Second, the damping of the system can be allowed to rest with the fluid motion inside of the mount thereby allowing more precise tuning by means of inertia
track and decoupler geometry [10]. Such a method is far simpler than trying to design an upper rubber structure with a specified amount of hysteretic-type damping.
Figure 4 illustrates the three-dimensional representation of the design. Here the decoupler geometry becomes clearer in conjunction with the design of the upper structure. Figure 5 provides a better
illustration of the decoupler geometry required to achieve the aforementioned requirements. As illustrated in Figure 5, the decoupler support tabs are thinned regions with slots on either side so that they do not dramatically influence the overall decoupler dynamics while still maintaining the required stiffness to ensure proper decoupler position under static conditions.
3. Dynamic Parameters Evaluations
In every HEM there are two rubber-type components, in the upper and lower chambers, to collect the moving fluid. These rubbery components produce the compliances of the system, which appear in the equations of motion. Besides the two chambers, the suspended decoupler also shows elastic behavior. Utilizing FEM, we show how to determine the elastic behavior of the decoupler, the upper bellows, and the lower collector compliances.
To begin the analysis of the engine mount it is paramount that the necessary geometric and material parameters be identified. To accomplish this, finite element analysis is utilized as a tool to
provide knowledge of component load-deflection relationships, volumetric expansion properties, and so forth. By creating a finite element model based upon the geometry illustrated in Figure 5,
information regarding the load-deflection behavior of the decoupler mechanism is readily obtainable. Figure 6 illustrates the discretized finite element model. The model was discretized using 10 node
tetrahedral elements with a total of 42,794 active degrees of freedom for the model.
To simulate the impact condition between the decoupler and the surrounding cage bounds Lagrangian type-contact elements were imposed upon potential impacting surfaces (see Figure 7). The contact
region at the decoupler support points was simulated using a rough-style interface between the two materials thereby allowing no slippage [17]. Whereas the surfaces contacting after sufficient
decoupler deformation were treated as frictionless thereby allowing relative motion between the two bodies. To simplify the analysis and determine the effectiveness of the new design compared to
floating decoupler design, we ignore the fluid-solid interaction as is done in modeling HEM [8–16].
Because the decoupler is to be made of an elastomeric material, the three-parameter Mooney-Rivlin model, illustrated in (1), is utilized [18–20]. The three-parameter Mooney-Rivlin model expresses the strain energy density as a function of the material constants (C10, C01, and C11) and the first two invariants (I1 and I2) of the right Cauchy-Green deformation tensor [19–21]. The material constants that we adopted are shown in Table 1, and may be obtained from experiment by means of a least-squares curve-fitting procedure [15, 22, 23].
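As an illustration of how the strain energy density of (1) behaves, the following sketch evaluates the three-parameter Mooney-Rivlin model for an incompressible material under uniaxial stretch. The material constants used here are hypothetical placeholders, not the Table 1 values:

```python
def mooney_rivlin_W(lam, C10, C01, C11):
    """Three-parameter Mooney-Rivlin strain energy density for an
    incompressible material under uniaxial stretch lam:
    W = C10*(I1-3) + C01*(I2-3) + C11*(I1-3)*(I2-3)."""
    I1 = lam**2 + 2.0 / lam          # first invariant of C (uniaxial, J = 1)
    I2 = 2.0 * lam + 1.0 / lam**2    # second invariant
    return C10 * (I1 - 3.0) + C01 * (I2 - 3.0) + C11 * (I1 - 3.0) * (I2 - 3.0)

# Hypothetical constants in MPa (NOT the Table 1 values):
C10, C01, C11 = 0.4, 0.1, 0.02
W_rest = mooney_rivlin_W(1.0, C10, C01, C11)   # undeformed state: W = 0
W_20 = mooney_rivlin_W(1.2, C10, C01, C11)     # 20% stretch: W > 0
print(W_rest, W_20)
```

Note that at lam = 1 both invariants equal 3, so the stored energy vanishes in the undeformed state, as required.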
To solve the finite element model an applied numerical solution method must be employed. Such an approach was required due primarily to two factors. First, the material for the decoupler is nonlinear
and requires full geometric nonlinearity options to be utilized. Second, the contact between the rubber and metallic cage bounds is asymmetric noting the differences in material responses between the
two structures; therefore, the full Newton-Raphson approach must be employed to deal with the unsymmetrical nature of the assembled matrices [17].
The fluid is assumed to be incompressible relative to the elastic and flexible parts. To simulate fluid-induced pressure, an evenly distributed pressure of 20 kPa was applied to one side of the entire exposed surface of the decoupler. To constrain the entire assembly from motion, the lower surface of the cage was fixed in all degrees of freedom. In order to obtain information regarding the load-deflection relationship of the supported decoupler, the applied pressure was resolved into a force component by multiplying by the area upon which the pressure was applied. The corresponding
deflection measurement was taken in the vertical direction from the center node (exposed due to symmetry conditions) of the decoupler disk. The results of the finite element analysis are illustrated
in Figure 8.
Notice from Figure 8 that even after the decoupler disk impacts the cage bounds, it continues to displace with a corresponding increase in applied load due to the elastic nature of the decoupler material. In addition, notice that a third-order polynomial, expressed in (2), approximates the data with reasonably good accuracy.
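The least-squares cubic fit used to produce an equation like (2) can be sketched as follows. The load-deflection data below are synthetic stand-ins for the finite element results of Figure 8, and the coefficients are illustrative only:

```python
import numpy as np

# Synthetic load-deflection data standing in for the FE results of Figure 8
# (coefficients are illustrative, not the paper's):
x = np.linspace(0.0, 2.0, 40)                    # deflection, mm
F_true = 50.0 * x + 4.0 * x**2 + 30.0 * x**3     # force, N
rng = np.random.default_rng(0)
F_meas = F_true + rng.normal(0.0, 1.0, x.size)   # a little "FE noise"

# Least-squares cubic fit, as used to produce an equation like (2):
coeffs = np.polyfit(x, F_meas, 3)                # returns [a3, a2, a1, a0]
F_fit = np.polyval(coeffs, x)
rms_err = float(np.sqrt(np.mean((F_fit - F_true) ** 2)))
print(coeffs, rms_err)
```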
Consider the upper structure of the engine mount, with material properties shown in Table 2. The structure is considered as a whole because of both the nonlinearity inherent in the load-deflection relationship of the spring and the material nonlinearity of the rubber components. Because of the nonlinearity, the principle of superposition is not applicable; therefore, the stiffness of the
upper structure will be modeled by one nonlinear spring element (as compared to multiple springs in parallel). Figure 9 illustrates the model geometry and corresponding finite element mesh which
consisted of 20-node hexahedral elements and 10-node tetrahedral elements with a total of 71,211 degrees of freedom. Additionally, contact surfaces were specified everywhere metallic components are in contact or were expected to come into contact. Bonded-type contact surfaces were specified everywhere that elastomeric materials were in contact with metallic components, as the design intent was to have said
metallic components bonded to the rubber parts as a part of the manufacturing process.
The finite element model was constrained on the lower surface with load being applied in the form of a specified displacement in the axial direction on the opposing surface. In addition, fixed
constraints were applied to the lower and outer surfaces of the surrounding rubber component as illustrated in Figure 9.
To solve the finite element model a nonlinear simulation was utilized allowing for finite strains. Figure 10 illustrates the resulting load-deflection relationship obtained from the finite element
analysis. Because of the nonlinearity, a third-degree polynomial is fit to the data. Equation (3) is the result of a least-squares curve fit to the finite element results. In this equation, the input deflection has units of mm.
Next, consider the upper bellows and its corresponding volumetric compliance. The corresponding finite element model is illustrated in Figure 11. The mesh consisted of 8 node quadrilateral-type
elements with 768 total degrees of freedom. The analysis allowed for finite strains to account for the hyperelastic behavior of the rubber upper compliance, and therefore required solution by the
Newton-Raphson approach. The finite element model illustrated in Figure 11 was constrained from motion on the bottom and top surfaces while an evenly distributed pressure was applied on the internal
surface of the upper compliance to simulate fluid pressure.
Figure 12 illustrates the volume-pressure relationship for the upper bellows structure. Note the relative linearity of the relationship; a least-squares linear fit to the data yields a slope (in m^5/N) which corresponds to the volumetric compliance of the upper bellows structure.
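The extraction of a volumetric compliance as the slope of a linear least-squares fit can be sketched as follows; the pressure-volume data and the compliance value below are hypothetical, not the paper's:

```python
import numpy as np

# Illustrative pressure-volume data for a bellows-like chamber
# (numbers are hypothetical, not the paper's):
p = np.linspace(0.0, 20e3, 15)               # pressure, Pa
C_true = 3.0e-9                              # compliance, m^3/Pa = m^5/N
V = C_true * p + 1.0e-8 * np.sin(p / 7e3)    # nearly linear response

slope, intercept = np.polyfit(p, V, 1)
print(slope)   # recovered volumetric compliance, m^5/N
```

Since m^3/Pa = m^3/(N/m^2) = m^5/N, the slope carries exactly the units quoted in the text.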
Determination of the lower chamber volumetric compliance is accomplished much the same as for the upper chamber. Figure 13 illustrates the finite element model for the lower chamber. However, to model the rubber compliance, shell elements were utilized, noting the constant thickness of the part and the large deformations this structure is intended to undergo. In addition, due to the large
deformations expected the rubber compliance has the tendency to buckle outwards. This buckling is difficult to model using solid hexahedral elements noting such deformations can result in
unacceptable element shapes and potentially inaccurate solutions; therefore, 4 node shell elements were employed as such deformations do not necessarily cause such element shape problems [24, 25].
Figure 14 illustrates the results of the analysis of the model illustrated in Figure 13.
Notice that the behavior of the lower compliance illustrated in Figure 14 is also nonlinear; however, it appears approximately bilinear. On closer investigation, the initial portion of the volume-pressure curve represents the chamber's initial expansion until it contacts the surrounding structural walls. At the point where contact between the two bodies initiates, the slope of the
volume-pressure curve drastically changes indicating a less compliant structure. It is the slope of this segment of line that is used to approximate the volumetric compliance of the lower structure
noting the small amount of fluid pressure required to move the system operating point into this region. Such an assumption in regards to the system operating point being located in said region can be
validated by noting that the static load of the engine is sufficient to cause such an operating-point shift. Table 3 introduces the complete complement of hydraulic engine mount parameters [10].
4. Mathematical Analysis
By introducing the support to the decoupler, the momentum balance equation for the decoupler exhibits a restoring-force term. Additionally, the nonlinear damping term first introduced by Golnaraghi and Jazar is utilized [13, 14]. However, to fully describe the system dynamics, the inertia track momentum equation is needed along with the fluid continuity equations [10–14, 26, 27]. In this commonly accepted modeling, the fluid-solid interaction is ignored.
Equations (4) and (5) are the momentum balances of the fluid mass in the decoupler canal and the inertia track, while (6) and (7) are the continuity equations for the upper and lower chambers, respectively. Utilizing (4) through (7) results in the equations of motion (8), which describe the internal dynamics of the hydraulic mount, with the grouped coefficients defined in (9). In order to make the analysis general, the nondimensional parameters in (10) are introduced; using them, (8) is expressed in the nondimensional forms (11) and (12), with the auxiliary terms defined in (13). Introducing the small parameter ε as a measure of the nonlinearity, the nondimensional parameters in (14) are instituted, and the equations of motion from (11) and (12) are expressed as (15).
To obtain a frequency-domain solution of (15), the averaging method is employed by introducing assumed solutions of the form given in (16) and (17) [28, 29]. Expressing the first derivatives as in (18) and (19) requires the two constraint equations (20) to maintain the validity of the solution. The second time derivatives (21) can then be obtained directly from (18) and (19). Equations (16) through (19) and (21) are substituted directly into the equations of motion and utilized in conjunction with (20) to transform the second-order differential equations in (15) into a system of four first-order differential equations. After extracting the slow terms of the resulting first-order equations and averaging over one period of oscillation, the averaged equations of motion (22) through (25) are obtained, with the auxiliary terms defined in (26).
In order for equations (22) through (25) to be usable in the frequency domain, consider the transformation (27), which converts (22) through (25) to an autonomous system of equations. Utilizing (27) in equations (22) through (25), and noting that for steady-state conditions to prevail the time derivatives must vanish, results in the implicit frequency response functions (28) for the system. Equations (28) are identical to the frequency response functions obtained in [13, 16] for a floating-decoupler mount if the decoupler restoring term is allowed to equal zero, thereby validating the solution, since that term is the only mathematical difference between the two systems.
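Since the full parameter set of (28) is not reproduced here, the character of an averaging-method frequency response can be illustrated on a textbook stand-in: a lightly damped Duffing oscillator, whose averaged equations yield an implicit amplitude relation of the same type. All numbers below are illustrative:

```python
import numpy as np

# Averaging-method frequency response of a Duffing oscillator
#   x'' + 2*zeta*x' + x + eps*x**3 = f*cos(Omega*t),
# used as a textbook stand-in for the mount equations: first-order
# averaging gives the implicit amplitude relation
#   a^2 * [ (1 - Omega^2 + (3/4)*eps*a^2)^2 + (2*zeta*Omega)^2 ] = f^2.
zeta, eps, f = 0.1, 0.1, 0.2

def amplitude(Omega, a_max=3.0, n=200001):
    """Steady-state amplitude: smallest root of the implicit relation,
    located by scanning for the first sign change of the residual."""
    a = np.linspace(1e-6, a_max, n)
    res = a**2 * ((1 - Omega**2 + 0.75 * eps * a**2)**2
                  + (2 * zeta * Omega)**2) - f**2
    i = np.argmax(np.sign(res[:-1]) != np.sign(res[1:]))
    return float(a[i])

a_low = amplitude(0.3)    # far below resonance: a ~ f / (1 - Omega^2)
a_res = amplitude(1.0)    # near resonance: a ~ f / (2*zeta)
print(a_low, a_res)
```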
5. Dynamic Responses
Figure 15 illustrates the frequency response function for both the supported decoupler introduced in this investigation and the unsupported decoupler model from [10]. It is seen that there is no discernible difference between the two models, indicating that supporting the decoupler disk does not substantially affect the steady-state function of the mechanism.
Figure 16 illustrates the inertia track frequency response function for the mount obtained from the averaging solution above in conjunction with the solution from [10] indicating no appreciable
difference or effect on its behavior due to the decoupler modification.
Noting that the supported decoupler design is motivated by the initial transient response of the system, consider the force transmitted through the engine mount due to a 1 mm pulse input held for a period of 0.1 seconds. To calculate the transmitted force, consider equation (29), developed in [14], which describes the response to a step input. The transmitted force captures the mount dynamics, including the nonlinear stiffness of the upper rubber.
Determining the solution to the equations of motion in (8) numerically allows determination of the pressure term in (29) by means of numerical integration of the continuity equations, thereby
allowing determination of the transmitted force by means of (29).
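A minimal sketch of this kind of transient calculation is given below, using a generic single-degree-of-freedom mount model with cubic stiffness rather than the paper's full equations (8) and (29); all parameter values are hypothetical:

```python
# Generic single-DOF stand-in for the mount transient (NOT the paper's
# Eq. (8)): m*x'' + c*x' + k*x + kc*x**3 = F(t), with transmitted force
# F_T = c*x' + k*x + kc*x**3 and a pulse input held for 0.1 s.
m, c, k, kc = 100.0, 1.5e3, 2.5e5, 1.0e9   # hypothetical SI values

def F_in(t):
    # step force equivalent to a 1 mm static deflection, held 0.1 s
    return k * 1.0e-3 if t < 0.1 else 0.0

def simulate(dt=2.0e-5, t_end=0.3):
    """Explicit RK4 integration; returns peak transmitted force (N)."""
    def deriv(t, x, v):
        return v, (F_in(t) - c * v - k * x - kc * x**3) / m
    x, v, t, peak = 0.0, 0.0, 0.0, 0.0
    while t < t_end:
        k1x, k1v = deriv(t, x, v)
        k2x, k2v = deriv(t + dt / 2, x + dt / 2 * k1x, v + dt / 2 * k1v)
        k3x, k3v = deriv(t + dt / 2, x + dt / 2 * k2x, v + dt / 2 * k2v)
        k4x, k4v = deriv(t + dt, x + dt * k3x, v + dt * k3v)
        x += dt / 6 * (k1x + 2 * k2x + 2 * k3x + k4x)
        v += dt / 6 * (k1v + 2 * k2v + 2 * k3v + k4v)
        t += dt
        peak = max(peak, abs(c * v + k * x + kc * x**3))
    return peak

peak = simulate()
print(peak)   # peak transmitted force for the pulse, N
```

For these illustrative values (natural frequency 50 rad/s, damping ratio 0.15) the underdamped overshoot puts the peak transmitted force well above the 250 N static value of the pulse.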
Figure 17 illustrates the force transmitted through the engine mount for the supported decoupler mount and the free decoupler mount. In this analysis we assumed the initial condition of the floating decoupler to be the bottomed-up position. The supported decoupler mount transmits substantially less force (~200 N) at startup as compared to the free decoupler mount, and the maximum amplitude of force transmitted via the supported decoupler mount is 716 N. The supported decoupler mount provides a reduction in peak amplitude on startup of 32.5% over the free decoupler mount, thereby indicating the effectiveness of the supported decoupler design. In addition, Figures 16 and 17 illustrate that utilizing the supported decoupler does not affect the steady-state dynamics of the engine mount by a measurable amount; therefore, the supported decoupler design has been shown to be superior in improving the overall system dynamics and mount isolation characteristics.
6. Conclusion
This study has introduced a decoupler design motivated by the desire to improve upon the current floating-decoupler design. Using nonlinear finite elements, information regarding the structural elastic behavior was obtained. This information was then readily utilized in the lumped-parameter modeling approach employed by practically all researchers investigating hydraulic engine mounts. Using the lumped-parameter model, the frequency response of the system was investigated utilizing the averaging method and compared to previously published results describing floating-decoupler-type mounts, with excellent agreement. The agreement between the two models indicates that, by supporting the decoupler on thin, low-stiffness tabs, the overall steady-state response of the system is practically unaffected. Additionally, numerical analysis of the transient response shows that the supported decoupler substantially improves the engine mount's response to sudden excitations. Future work should focus on optimizing the supported decoupler design illustrated in this investigation utilizing the RMS optimization method.
: Area
: Equivalent viscous damping coefficient
: Volumetric compliance
: Nonlinear decoupler damping coefficient
: Nonlinear decoupler force coefficient
: Force
: Inverse sum of compliances
: Upper rubber load-deflection coefficient
: Upper rubber load-deflection coefficient
: Upper rubber equivalent stiffness
: Mass
: Pressure
: Flow rate
: RMS of acceleration transmissibility
: Time
: Position
: Excitation
: Gap size
: Excitation frequency
: Damping ratio
: Natural frequency
: Nondimensional amplitude
: Nondimensional frequency
, : Tensor invariants
, , : Material constants
: Strain energy density.
: Inertia track
: Decoupler
: Piston
: Rubber
1: Upper chamber
2: Lower chamber
: Atmosphere
: Transmitted.
1. W. C. Flower, “Understanding hydraulic mounts for improved vehicle noise, vibration and ride qualities,” SAE Technical Paper Series 850975, 1985.
2. M. Bernuchon, “A new generation of engine mounts,” SAE Technical Paper Series 840259, 1984.
3. J. C. Snowdon, “Vibration isolation: use and characterization,” Journal of the Acoustical Society of America, vol. 66, no. 5, pp. 1245–1274, 1979.
4. J. P. Den Hartog, Mechanical Vibrations, McGraw-Hill, New York, NY, USA, 5th edition, 1956.
5. R. Singh, G. Kim, and P. V. Ravindra, “Linear analysis of automotive hydro-mechanical mount with emphasis on decoupler characteristics,” Journal of Sound and Vibration, vol. 158, no. 2, pp. 219–243, 1992.
6. M. Clark, “Hydraulic engine mount isolation,” SAE Technical Paper Series 851650, 1986.
7. M. Sugino and E. Abe, “Optimum application for hydroelastic engine mount,” SAE Technical Paper Series 861412, 1986.
8. K. H. Lee, Y. T. Choi, and S. P. Hong, “Performance analysis of hydraulic engine mount by using bond graph method,” SAE Technical Paper Series 951347, 1995.
9. P. E. Corcoran and G. H. Ticks, “Hydraulic engine mount characteristics,” SAE Technical Paper Series 840407, 1984.
10. J. Christopherson and G. N. Jazar, “Optimization of classical hydraulic engine mounts based on RMS method,” The Shock and Vibration Digest, vol. 12, no. 2, pp. 119–147, 2005.
11. H. Adiguna, M. Tiwari, R. Singh, H. E. Tseng, and D. Hrovat, “Transient response of a hydraulic engine mount,” Journal of Sound and Vibration, vol. 268, no. 2, pp. 217–248, 2003.
12. M. Tiwari, H. Adiguna, and R. Singh, “Experimental characterization of a nonlinear hydraulic engine mount,” Noise Control Engineering Journal, vol. 51, no. 1, pp. 36–49, 2003.
13. M. F. Golnaraghi and R. N. Jazar, “Development and analysis of a simplified nonlinear model of a hydraulic engine mount,” Journal of Vibration and Control, vol. 7, no. 4, pp. 495–526, 2001.
14. R. N. Jazar and M. F. Golnaraghi, “Nonlinear modeling, experimental verification, and theoretical analysis of a hydraulic engine mount,” Journal of Vibration and Control, vol. 8, no. 1, pp. 87–116, 2002.
15. J. Christopherson and R. N. Jazar, “Dynamic behavior comparison of passive hydraulic engine mounts. Part 2: finite element analysis,” Journal of Sound and Vibration, vol. 290, no. 3–5, pp. 1071–1090, 2006.
16. J. Christopherson and R. N. Jazar, “Dynamic behavior comparison of passive hydraulic engine mounts. Part 1: mathematical analysis,” Journal of Sound and Vibration, vol. 290, no. 3–5, pp. 1040–1070, 2006.
17. SAS IP, Inc., ANSYS 8.0 Help Documentation, 2003.
18. T. Belytschko, W. K. Liu, and B. Moran, Nonlinear Finite Elements for Continua and Structures, John Wiley & Sons, New York, NY, USA, 2000.
19. D. J. Charlton, Y. Yang, and K. K. Teh, “Review of methods to characterize rubber elastic behavior for use in finite element analysis,” Rubber Chemistry and Technology, vol. 67, no. 3, pp. 481–503, 1994.
20. D. J. Seibert and N. Schocke, “Direct comparison of some recent rubber elasticity models,” Rubber Chemistry and Technology, vol. 73, no. 2, pp. 366–384, 2000.
21. M. M. Attard and G. W. Hunt, “Hyperelastic constitutive modeling under finite strain,” International Journal of Solids and Structures, vol. 41, no. 18-19, pp. 5327–5350, 2004.
22. W. B. Shangguan and Z. H. Lu, “Experimental study and simulation of a hydraulic engine mount with fully coupled fluid-structure interaction finite element analysis model,” Computers and Structures, vol. 82, no. 22, pp. 1751–1771, 2004.
23. W. B. Shangguan and Z. H. Lu, “Modelling of a hydraulic engine mount with fluid-structure interaction finite element analysis,” Journal of Sound and Vibration, vol. 275, no. 1-2, pp. 193–221, 2004.
24. A. Düster, S. Hartmann, and E. Rank, “p-FEM applied to finite isotropic hyperelastic bodies,” Computer Methods in Applied Mechanics and Engineering, vol. 192, no. 47-48, pp. 5147–5166, 2003.
25. L. A. D. Filho and A. M. Awruch, “Geometrically nonlinear static and dynamic analysis of shells and plates using the eight-node hexahedral element with one-point quadrature,” Finite Elements in Analysis and Design, vol. 40, no. 11, pp. 1297–1315, 2004.
26. G. Kim and R. Singh, “Nonlinear analysis of automotive hydraulic engine mount,” ASME Journal of Dynamic Systems, Measurement and Control, vol. 115, no. 3, pp. 482–487, 1993.
27. G. Kim and R. Singh, “A study of passive and adaptive hydraulic engine mount systems with emphasis on non-linear characteristics,” Journal of Sound and Vibration, vol. 179, no. 3, pp. 427–453, 1995.
28. A. H. Nayfeh and D. T. Mook, Nonlinear Oscillations, John Wiley & Sons, New York, NY, USA, 1993.
29. A. H. Nayfeh, Introduction to Perturbation Techniques, John Wiley & Sons, New York, NY, USA, 1993.
Reference to the Existence and Uniqueness of the PDE system
Hi all
I have the following problem on systems of partial differential equations. I have $N$ physical variables, and I form the equations on a bounded domain having regular boundary in $\mathbb{R}^d$ ($d=2$):

$$\operatorname{div}(W_i)=f_i, \qquad i=1,\dots,N,$$
$$W_i = \sum_{j=1}^{N} A_{ij}\,\operatorname{grad}(P_j),$$

where each $A_{ij}$ is a $2\times 2$ non-constant matrix and the $N$ unknowns are $P_1,\dots,P_N$. For $N=1$, based on the existing theory of elliptic PDEs, one can ascertain existence and uniqueness by looking at the coefficient matrix. But can someone kindly give a reference on the existence and uniqueness of this kind of problem? And moreover, if not, is there any reference/idea on whether existing DN-elliptic systems can be modified to tackle this kind of problem?
regards ram
differential-equations elliptic-pde ap.analysis-of-pdes
You should $\TeX$ the equations. – timur Aug 23 '12 at 1:23
1 Answer
A good keyword here is strongly elliptic systems. There is an original paper by Nirenberg. Also have a look at McLean's book Strongly Elliptic Systems and Boundary Integral Equations. Folland's Introduction to PDE has a good treatment too.
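To connect with that keyword, here is a sketch (my addition, not part of the original answer) of the Legendre-Hadamard (strong ellipticity) condition applied to the system in the question; whether it actually holds depends on the given matrices $A_{ij}$. In components, with Greek indices for the two spatial directions, the system reads

$$\partial_\alpha\big((A_{ij})_{\alpha\beta}\,\partial_\beta P_j\big) = f_i, \qquad i=1,\dots,N,$$

and the strong ellipticity condition asks for a constant $c>0$ such that

$$\sum_{i,j,\alpha,\beta} (A_{ij})_{\alpha\beta}\,\xi_\alpha\,\xi_\beta\,\eta_i\,\eta_j \;\ge\; c\,|\xi|^2\,|\eta|^2 \quad\text{for all } \xi\in\mathbb{R}^2,\ \eta\in\mathbb{R}^N.$$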
Thanks a lot. And moreover, if the $A_{ij}$ are not constants, then is there any way to decouple the system into $N$ PDEs of one variable, for example by a basis transformation or something like that? – Ramu_Dull_Boy Aug 22 '12 at 14:00
Millennium Prize Problems
Even as new discoveries are constantly being made in the field of mathematics, there are still a number of unsolved problems. Many of these problems, once solved, will help to improve the quality of our daily lives.
In 2000, the Clay Mathematics Institute selected 7 different unsolved problems and offered a prize of $1 million per problem for those who find a solution. They chose to call these problems the
“millennium problems”. (http://www.claymath.org/millennium/)
These problems include:
• Birch and Swinnerton-Dyer Conjecture
Ever tried to solve a quadratic equation? You just used the quadratic formula, right? How about an equation of the form: y^2 = x^3 - x
The graph of this equation is called an elliptic curve. It turns out you can describe these curves using algebraic terms and geometric terms. The Birch Swinnerton-Dyer Conjecture says there is a
connection between these two descriptions.
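To make the curve concrete, here is a short Python sketch (an illustration added here, not from the original page) that searches for integer points with y >= 0 on y^2 = x^3 - x; only three points show up in the search window:

```python
import math

# Search for integer points (x, y) with y >= 0 on the curve y^2 = x^3 - x.
def integer_points(bound):
    points = []
    for x in range(-bound, bound + 1):
        rhs = x**3 - x
        if rhs < 0:
            continue  # y^2 can't be negative
        y = math.isqrt(rhs)
        if y * y == rhs:
            points.append((x, y))
    return points

print(integer_points(100))  # [(-1, 0), (0, 0), (1, 0)]
```

In fact these three affine points (plus the point at infinity) are known to be the only rational points on this particular curve, and describing rational points on such curves is exactly the kind of structure the conjecture is about.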
• Hodge Conjecture
• Navier-Stokes Equations
• P vs NP
Ever played minesweeper? Ever wondered how fast a computer could beat the game? So do computer scientists! There are very many problems for which it is not known how fast they can be solved. The P vs NP problem asks whether every problem whose solution can be quickly verified (the class NP) can also be quickly solved, that is, solved in polynomial time (the class P).
• Poincaré Conjecture
Solved by Grigori Perelman in the early 21st century.
If something looks like a sphere, smells like a sphere, and tastes like a sphere, is it a sphere? In dimension two, the answer is easy. In dimensions four and greater, the answer is also yes. For a long time, the answer for dimension three was unknown, until Perelman solved the problem in the early 2000s.
• Riemann Hypothesis
In calculus you learn that the series

1/1^p + 1/2^p + 1/3^p + ...

converges when p > 1. People have asked what happens when we treat p as a variable in the complex plane. It turns out this new function (the Riemann zeta function) is related to prime numbers. The function has zeros at the negative even integers and in a small strip. Riemann's Hypothesis is that all these extra zeros have real part equal to ½. If this is true, it implies that the primes are well spaced.
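The convergence claim is easy to check numerically; this small Python sketch (an illustration added here, not from the original page) sums the p-series for p = 2, whose limit Euler showed to be pi^2/6:

```python
import math

# Partial sums of the p-series: 1/1^p + 1/2^p + 1/3^p + ...
def p_series(p, terms):
    return sum(1.0 / n**p for n in range(1, terms + 1))

approx = p_series(2, 100_000)
print(approx)             # close to the limit
print(math.pi**2 / 6)     # the known limit for p = 2
```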
• Yang-Mills Theory
[Numpy-discussion] Handling of numpy.power(0, <something>)
Stuart Brorson sdb@cloud9....
Wed Feb 27 17:10:03 CST 2008
I have been poking at the limits of NumPy's handling of powers of
zero. I find some results which are disturbing, at least to me.
Here they are:
In [67]: A = numpy.array([0, 0, 0])
In [68]: B = numpy.array([-1, 0, 1+1j])
In [69]: numpy.power(A, B)
Out[69]: array([ 0.+0.j, 1.+0.j, 0.+0.j])
IMO, the answers should be [Inf, NaN, and NaN]. The reasons:
** 0^-1 is 1/0, which is infinity. Not much argument here, I would think.
** 0^0: This is problematic. People smarter than I have argued for
both NaN and for 1, although I understand that 1 is the preferred
value nowadays. If the NumPy gurus also think so, then I buy it.
** 0^(x+y*i): This one is tricky; please bear with me and I'll walk
through the reason it should be NaN.
In general, one can write a^(x+y*i) = (r exp(i*theta))^(x+y*i) where
r, theta, x, and y are all reals. Then, this expression can be
rearranged as:
(r^x) * (r^i*y) * exp(i*theta*(x+y*i))
= (r^x) * (r^i*y) * exp(i*theta*x) * exp(-theta*y)
Now consider what happens to each term if r = 0.
-- r^x is either 0^<positive> = 0, or 0^<negative> = Inf.
-- r^(i*y) = exp(i*y*ln(r)). If y != 0 (i.e. complex power), then taking
the ln of r = 0 is -Inf. But what's exp(i*-Inf)? It's probably NaN,
since nothing else makes sense.
Note that if y == 0 (real power), then this term is still NaN (y*ln(r)
= 0*ln(0) = NaN). However, by convention, 0^<real> is something other
than NaN.
-- exp(i*theta*x) is just a complex number.
-- exp(-theta*y) is just a real number.
Therefore, for 0^<complex> we have Inf * NaN * <complex> * <real>,
which is NaN.
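As a side note (added here for comparison, not part of the original post), plain CPython without NumPy traps two of these corner cases by raising ZeroDivisionError rather than returning Inf or NaN, while keeping the 0^0 = 1 convention:

```python
# CPython's conventions for powers of zero (no NumPy involved).
assert 0 ** 0 == 1  # 0^0 is defined as 1, the convention discussed above

try:
    0 ** -1         # 0^-1: CPython raises instead of returning Inf
except ZeroDivisionError as exc:
    print("0 ** -1 ->", exc)

try:
    0 ** (1 + 1j)   # 0^(complex power): also raises, rather than NaN
except ZeroDivisionError as exc:
    print("0 ** (1+1j) ->", exc)
```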
Another observation to chew on. I know NumPy != Matlab, but FWIW,
here's what Matlab says about these values:
>> A = [0, 0, 0]
A =
     0     0     0
>> B = [-1, 0, 1+1*i]
B =
-1.0000 0 1.0000 + 1.0000i
>> A .^ B
ans =
Inf 1.0000 NaN + NaNi
Any reactions to this? Does NumPy just make library calls when
computing power, or does it do any trapping of corner cases? And
should the returns from power conform to the above suggestions?
Stuart Brorson
Interactive Supercomputing, inc.
135 Beaver Street | Waltham | MA | 02452 | USA
More information about the Numpy-discussion mailing list
Reference for nef coherent sheaves?
The definition and basic properties of nef locally free sheaves appear for instance in the second volume of Lazarsfeld's book "Positivity in Algebraic Geometry" (beginning of chapter 6).
However, I am in a situation where some of the sheaves I deal with are not locally free, but only coherent; so I would like to know whether there is a well-behaved notion of nefness for coherent
sheaves. The only mention of this that I found is at the end of section 1 of Kodaira Dimension of Subvarieties by Peternell-Schneider-Sommese, but it is just the definition with no references and no
discussion of basic properties.
So my question is: is there a reference that gives an analogue of Theorem 6.2.12 in Lazarsfeld for nef coherent sheaves? (The results I'm mostly interested in are: (a) quotient of nef is nef; (b)
pullback of nef is nef; and (c) extension of nef by nef is nef.)
ag.algebraic-geometry vector-bundles
1 Answer
Definition: A coherent sheaf $\mathcal{F}$ on an algebraic variety $X$ is nef if the following condition holds: for every irreducible curve $C\subset X$, the line bundle $O(1)$ is nef on $\mathbb{P}(\mathcal{F}|_C)$.

I haven't seen this definition of a nef coherent sheaf either, but I think most of the properties you mention just follow formally from the properties of ordinary nefness. Here is a proof for a) and b):

a) A quotient of a nef sheaf is nef. Let $C$ be as in the definition. If $\mathcal{F}\to \mathcal{E}\to 0$ is a surjection, this restricts to a surjection $\mathcal{F}|_C\to \mathcal{E}|_C\to 0$ and hence gives an embedding $\mathbb{P}(\mathcal{E}|_C)\hookrightarrow \mathbb{P}(\mathcal{F}|_C)$ such that $O(1)$ on $\mathbb{P}(\mathcal{F}|_C)$ restricts to $O(1)$ on $\mathbb{P}(\mathcal{E}|_C)$. Since the restriction of a nef line bundle is nef, $\mathcal{E}$ is also nef.

b) A pullback of nef is nef. Similarly, if $f:X\to Y$ is finite, then $f$ restricts to a finite map $f:C'=f^{-1}(C)\to C$ and hence there is a finite map $F:\mathbb{P}(f^*\mathcal{F}|_{C'})\to \mathbb{P}(\mathcal{F}|_C)$ such that $O(1)=F^*O(1)$. Now the claim just follows from the corresponding statement for line bundles.
Some Notes About the Solar System
I began this page in July 2001 while doing research related to orrery development. It would be years until I actually began to build orreries.
The fundamental question I was trying to answer was: what are the most accurate available figures for the length of time it takes each planet to orbit the Sun? As I looked more deeply into the question, I discovered many complex variables that make it more and more difficult to answer as the desired accuracy increases.
Exact Resonances
Mercury solar day to Mercury solar year: 2:3. There are exactly two points on Mercury's surface where the Sun is directly overhead at perihelion.
Venus day, Venus year and Earth year: Every time Venus comes into inferior conjunction (closest approach) to Earth, the same side of Venus is facing Earth.
Io sidereal month to Europa sidereal month: 1:2. Opposition of Io and Europa always occurs at the same point in their orbits (relative to the stars)
Europa sidereal month to Ganymede sidereal month: 1:2. Opposition of Europa and Ganymede always occurs at the same point in their orbits (relative to the stars)
Mimas sidereal month to Tethys sidereal month: 1:2. Opposition of Mimas and Tethys always occurs at the same point in their orbits (relative to the stars)
Enceladus sidereal month to Dione sidereal month: 1:2. Opposition of Enceladus and Dione always occurs at the same point in their orbits (relative to the stars)
Titan sidereal month to Hyperion sidereal month: 3:4. Titan completes 4 orbits in precisely the same time it takes Hyperion to complete 3.
Neptune year to Pluto year: 2:3. Pluto completes 2 orbits in precisely the same time it takes Neptune to complete 3 orbits. When Pluto is at perihelion, Neptune's position is always either 1/4 orbit
further ahead or 1/4 orbit behind Pluto's position.
Pluto argument of perihelion precession rate to Pluto ascending node precession rate: Pluto's perihelion is locked at the point in its orbit where it is furthest "below" the plane of the solar
(Most of this info is also on this page. This page has some interesting facts about resonances.)
Chaotic Motion
Except when there are resonances, the motion of the planets is generally chaotic, in the sense that any tiny perturbation today can and will cause a great change in position at some time in the future. For example, the positions of the inner planets cannot be accurately predicted more than about 20,000 years into the future, nor can they be accurately estimated for more than 20,000 years into the past.
Sidereal Periods
The sidereal period of a planet is the amount of time it takes for the planet to complete one orbit, when viewed in relation to the stars. This is the most common definition of the planet orbits, and
the most useful if you are looking at the whole solar system and don't want to treat any one planet as "special". There are other definitions, including:
• The synodic period (beginning of each orbit defined as when the planet crosses an imaginary line drawn through the Sun and Earth; determines how long it will be until the planet reappears in the
evening sky).
• The tropical year (time from one a planet's equinox to the next; factors in the precession of the planet's rotational axis, and tells how long it takes the given planet to go through a full cycle
of its weather seasons). The ecliptic coordinate system in astronomy uses the Earth's equinox as its basis.
Since I wanted to build solar system models (orreries) it seemed logical to choose a year definition that would be the same for all the planets, and that makes the sidereal year the most obvious
choice. To make a precise definition, you can define the sidereal year in terms of a particular star (presumably one with little proper motion), draw a line from the center of the solar system to
that star, and define the year according to when the planet crosses that line.
When measuring a sidereal period, you can either use the Sun as the center of the orbit, or the barycenter of the Solar System. Both are useful, although the Sun is the one more commonly used.
Even though the Sun is over 300,000 times as massive as the Earth, the distances involved are so great that the Sun's position varies by 450 km (almost 300 miles) in either direction due to its gravitational interaction with the orbiting Earth.
The position of the Sun can vary from the barycenter of the Solar System by as much as twice the Sun's radius in any direction, and most of that movement is due to Jupiter. To balance Jupiter, the
Sun moves 750,000 km in either direction, a total wobble (1.5 Gm) that is greater than the Sun's diameter. The other large planets have similar effects and they all add together. As the sun moves in
its lissajous-like "orbit", the inner planets, with their comparatively shorter orbital periods, move with it. Thus, due to Jupiter's influence the Earth's orbit shifts over a range of 1,500,000 km,
and more if you count Saturn, Uranus and Neptune.
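The 750,000 km figure can be reproduced from the two-body barycenter formula, r = a * m_jup / (m_sun + m_jup). The sketch below (an added illustration; the orbital radius and mass ratio are rounded reference values) confirms the offset exceeds the Sun's radius:

```python
# Offset of the Sun's center from the Sun-Jupiter barycenter.
A_JUPITER_KM = 778.5e6        # Jupiter's mean distance from the Sun (rounded)
MASS_RATIO = 1047.6           # m_sun / m_jupiter (rounded)
SUN_RADIUS_KM = 695_700

r_sun = A_JUPITER_KM / (MASS_RATIO + 1)
print(round(r_sun))           # a bit over 740,000 km
print(r_sun > SUN_RADIUS_KM)  # True: so the full wobble exceeds the Sun's diameter
```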
As a result, if you're looking at an inner planet it makes more sense to define the sidereal year in terms of the sun's location, but if you're looking at the planets beyond Jupiter it makes more
sense to use the Solar System's barycenter.
The sidereal periods of each planet vary from one orbit to the next, due to the gravitational influence of the other planets. I recently spent a while trying to find very accurate (8 significant figures or better) estimates of the sidereal periods of the planets and discovered that they vary so much from one orbit to the next that it is hard to get a figure with more than 4 digits of precision.
JPL (Jet Propulsion Laboratory in Pasadena CA) has made their ephemeris and tracking system available to the public. It is probably the most accurate model available anywhere (it is used by JPL for
tracking of their spacecraft). Using this system I found the time (Julian Date) of the beginning of each orbit for the 8 major planets (Mercury through Neptune) according to the following
• The X, Y and Z axes cross at 90-degree angles at the solar system's barycenter or sun's center (depending on which type of sidereal year is being measured). Thus they define a reference frame
that is accelerated, but nonrotating. NOTE: When sun's center is used, the axes move along with the sun mostly in response to the Jovian planets.
• The X-Y plane is parallel to the plane of the Earth's orbit at Julian date 2451545.0 (2000-01-01 12:00 UTC)
• The X axis is parallel to the intersection of the plane of the Earth's equator and the plane of the Earth's orbit at Julian date 2451545.0 (2000-01-01 12:00 UTC), and the positive end of the X
axis points in the same direction as the ascending node.
• The positive end of the Z axis is the end closest to the direction of the Earth's north pole.
• For any given planet position (X,Y,Z), a phase angle theta is defined as arctan(X/Y)
• The beginning of a planet orbit is defined to be the moment when theta=0. (For the Earth, this approximately coincides with the fall equinox, around Sep 22nd)
Two tables were generated, one for solar system barycenter and one for the sun as the center.
Sidereal Year Relative To Solar System Barycenter
│ │ during │ │ average │ │ range / │ │
│ planet │ the years │ orbits │ sidereal │ range │ orbits │ notes │
│ │ │ │ period │ │ │ │
│ Mercury │ 1800-2199 │ 1660 │ 87.9691205 │ 0.054 │ 0.0000162 │ main oscillation has period 11.9 years, amplitude 0.04 days │
│ Venus │ 1800-2200 │ 650 │ 224.700295 │ 0.199 │ 0.000153 │ main oscillation has period 11.9 years, amplitude 0.15 days │
│ Earth │ 1800-2199 │ 399 │ 365.25519 │ 0.38 │ 0.00047 │ main oscillation has period 11.9 years, amplitude 0.3 days │
│ Mars │ 1800-2199 │ 211 │ 686.97882 │ 0.78 │ 0.00184 │ main oscillation has period 11.9 years, amplitude 0.6 days │
│ Ceres │ - │ - │ - │ - │ - │ - │
│ Jupiter │ 1500-2497 │ 84 │ 4332.6045 │ 3.1 │ 0.0186 │ main oscillation has period 59 years, amplitude currently near maximum of 3 days (varies over an 880-year period) │
│ │ │ │ │ │ │ currently alternating back and forth between "odd" and "even" orbits, "even" orbit is 14 days longer; "odd" orbit gradually │
│ Saturn │ 1525-2497 │ 34 │ 10759.17 │ 19.2 │ 0.28 │ getting longer until the 25th century when they will be of equal length and the "odd" orbit will stop getting longer and the │
│ │ │ │ │ │ │ "even" orbit will begin getting shorter (total cycle lasts 880 years) │
│ Uranus │ 1002-2935 │ 23 │ 30684.45 │ 22.4 │ 0.49 │ currently shortening by an average of 2 days per orbit; complete cycle takes 4300 years │
│ Neptune │ 1037-2849 │ 11 │ 60189.7 │ 47.8 │ 2.2 │ currently lengthening by an average of 5 days per orbit; complete cycle takes 4300 years │
│ Pluto │ - │ - │ 90284. │ - │ - │ I belong to the "Pluto is not a major planet" camp; however it has been shown that Pluto's orbital period is precisely 3/2 that of │
│ │ │ │ │ │ │ Neptune due to orbital resonance. │
Sidereal Year Relative To the Sun
│ │ during │ │ average │ │ │ │
│ planet │ the years │ orbits │ sidereal │ range │ range / orbits │ notes │
│ │ │ │ period │ │ │ │
│ Mercury │ 1800-2199 │ 1660 │ 87.96927077 │ 0.00150 │ 0.00000045 │ main oscillation has period 1.1 years, amplitude 0.001 days (2 minutes) │
│ Venus │ 1800-2200 │ 650 │ 224.7007826 │ 0.0091 │ 0.0000070 │ irregular │
│ Earth │ 1800-2199 │ 399 │ 365.2562953 │ 0.0113 │ 0.0000141 │ irregular │
│ Mars │ 1800-2199 │ 211 │ 686.98228 │ 0.104 │ 0.00024 │ irregular │
│ Ceres │ - │ - │ - │ - │ - │ │
│ Jupiter │ 1500-2497 │ 84 │ 4332.608 │ 3.8 │ 0.022 │ same pattern as in barycenter table above │
│ Saturn │ 1525-2497 │ 34 │ 10759.01 │ 16.8 │ 0.25 │ same pattern as in barycenter table above │
│ Uranus │ 1002-2935 │ 23 │ 30684.53 │ 22.4 │ 0.49 │ same pattern as in barycenter table above │
│ Neptune │ 1037-2849 │ 11 │ 60189.8 │ 48.0 │ 2.2 │ same pattern as in barycenter table above │
│ Pluto │ - │ - │ 90284. │ - │ - │ same note as in barycenter table above │
The variations in year length are real, but depend on your definition of a year. In particular, they depend a lot on when the year starts.
For example, consider the Earth's orbit with the Sun (not the barycenter) as the center, and look at the perturbation that would be caused by Jupiter. Let's divide Earth's orbit into four parts:
│ part │ description │
│ 1 │ Earth-Jupiter distance is less than Sun-Jupiter distance and Earth is moving towards Jupiter │
│ 2 │ Earth-Jupiter distance is less than Sun-Jupiter distance and Earth is moving away from Jupiter │
│ 3 │ Earth-Jupiter distance is more than Sun-Jupiter distance and Earth is moving away from Jupiter │
│ 4 │ Earth-Jupiter distance is more than Sun-Jupiter distance and Earth is moving towards Jupiter │
Jupiter is pulling on both Earth and the Sun. When Earth is in sections 1 and 2, the acceleration of the Earth by Jupiter is greater than the acceleration of the Sun due to Jupiter. Similarly, in
sections 3 and 4 acceleration of the Earth by Jupiter is less than the acceleration of the Sun due to Jupiter. This means that the Earth will be effectively speeding up in its orbit (relative to the
Sun) during sections 1 and 3 and will be slowing down during sections 2 and 4.
Since Jupiter moves, an Earth year does not consist of 3 months in each section 1,2,3 and 4. Instead, an Earth year will include partial sections, and sometimes the acceleration during sections 1 and
3 does not completely cancel out the deceleration during sections 2 and 4. The same effect happens with all other planets, and the total of all these perturbations makes the length of the year vary
by about 16 minutes from one year to another.
The Jupiter orbit oscillation period of 59.6 years corresponds to 5.02 orbits of Jupiter, which is very nearly equal to 2.02 orbits of Saturn. During this period Jupiter and Saturn are in opposition
three times. This same 59.6-year period is the reason for the every-other-orbit pattern in Saturn's sidereal year. Because it's not exactly 5-to-2, the points of opposition move slowly around the
orbits and take about 880 years to return to the point where they were.
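That 59.6-year figure can be recovered from the sidereal periods in the tables above: the Jupiter-Saturn synodic (conjunction-to-conjunction) period is about 19.86 years, and three of them span almost exactly 59.6 years. A quick check (an added sketch):

```python
# Jupiter-Saturn synodic period from the sidereal periods (days) listed above.
JUPITER_DAYS = 4332.6045
SATURN_DAYS = 10759.17

synodic_days = 1 / (1 / JUPITER_DAYS - 1 / SATURN_DAYS)
print(synodic_days / 365.25)      # ~19.86 years between conjunctions
print(3 * synodic_days / 365.25)  # ~59.6 years, the oscillation period
```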
There is a weaker 1-to-3 ratio between Saturn and Uranus. Saturn completes 3.08 orbits in the same time (about 91 years) it takes Uranus to complete 1.08 orbits. During this time Saturn is in
opposition with Uranus twice. After about 6.5 repetitions of this period (about 570 years) the opposition happens in the same place again.
Uranus and Neptune have a close to 2:1 ratio. Uranus completes 2.04 orbits in the time it takes Neptune to complete 1.04. Thus, each opposition (point of nearest approach) between the two planets
occurs 1/25 of an orbit further along in the orbit. It takes about 4300 years for this point of nearest approach to make one full cycle around the Sun.
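The 4300-year figure also follows from the tabulated periods: at each conjunction Neptune has moved about 1/25 of an orbit past a whole revolution, and those drifts (with conjunctions about 171 years apart) accumulate to a full circle. A sketch of the arithmetic (added here):

```python
# How long until Uranus-Neptune conjunctions return to the same spot.
URANUS_DAYS = 30684.45    # sidereal periods (days) from the table above
NEPTUNE_DAYS = 60189.7

synodic = 1 / (1 / URANUS_DAYS - 1 / NEPTUNE_DAYS)  # days between conjunctions
drift = synodic / NEPTUNE_DAYS - 1                  # orbit fraction of drift each time
cycle_years = (synodic / drift) / 365.25
print(round(cycle_years))                           # roughly 4300 years
```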
Leap Seconds
During the 20th century, thanks to the quartz clock (and later the atomic clock), it became possible to measure time so accurately that the fluctuations in the rate of the Earth's rotation could be measured. These fluctuations are caused by such things as convection currents in the mantle and seasonal movements of ice and water (polar ice caps and tide/current resonances). It became clear that the period of rotation of the Earth had a significant unpredictable component: it was generally slowing from one year to the next, but not at a steady rate and with no pattern.
In 1967 the second was defined in terms of the Cesium atom used in atomic clocks. The length of a second has not been redefined to accommodate the slowing of the Earth. However, since 1967 the
Earth's rate of spin slowed down so much that, if no correction had been made, the atomic clocks would now be reading 12:00:00 noon an average of 25 seconds before the point when the sun is overhead
(after adjusting for the known astronomical effects such as the eccentricity of the Earth's orbit, etc.) Therefore, in order to prevent the atomic clocks (from which all other clocks are set) from
drifting too far away from astronomical reality, "leap seconds" have been added about once every 500 days (starting in 1972). The leap second always comes at midnight on either June 30th or Dec 31st.
For more about this go to the NIST Time and Frequency website.
JPL HORIZONS ephemeris data
Sidereal and tropical year, solstices and equinoxes: webexhibits calendar page
Re: True North - Sorry - 1 hour error in original calc.
Re: True North - Sorry - 1 hour error in original calc.
• Subject: [amsat-bb] Re: True North - Sorry - 1 hour error in original calc.
• From: brianm@xxxxxxxxxxxxxxxxx (Brian Mitton)
• Date: Wed, 7 Feb 2001 17:54:52 -0600 (CST)
Sorry, but you have a false assumption there. Solar time is apparent time,
based on the apparent sun. UTC is mean time, based on the mean sun, a
mathematical construct which casts no shadow. Your method may be off by
as much as 15 minutes (twice a year). It will be correct four times a year.
The difference between Apparent time (sundials) and Mean time (clocks)
is given by "The Equation of Time". (Cool name huh?)
I just did a search for "Equation of Time" and these came up
For a description of Cooks voyage
If I were going to find true south by looking at shadows, I might try;
Mark the location of the shadow at sun rise, either the azimuth from the
gnomon (tower) or the shadow's tip. Repeat at sunset. Bisect the angle,
or find the perpendicular bisector of the tip to tip length. You should
be real close.
N8NPA Brian
>Date: Thu, 8 Feb 2001 09:09:42 +1100
>From: "Murray Peterson VK2KGM" <vk2kgm@ihug.com.au>
>Subject: [amsat-bb] Re: True North - Sorry - 1 hour error in original calc.
>Hi David,
>I will try to explain this more clearly and correctly.
>The thing I was looking for was solar noon (12:00).
>Solar noon at Greenwich (0 degrees of longitude) is at 12:00 UTC or, as it
>used to be known, GMT (Greenwich Mean Time). (Greenwich is a place in London
>which the British Navy, centuries ago, decided was the place that
>longitude should be measured from, and so they went to considerable expense
>(British Board of Longitude) setting up observatories at many places around
>the world which they used to determine the longitude of that location. Since
>clocks several centuries ago did not keep time all that accurately, they
>used astronomical events to calibrate their measurements. Australia (the
>east coast) was discovered by Captain James Cook and his crew in 1770. They
>were sent on a mission by the British Board of Longitude to observe the
>transit of the planet Venus across the Sun from the island of Tahiti and
>the position of the sun in the sky in order to accurately determine the
>longitude of Tahiti. The transit of Venus could be accurately calculated, and
>so time in Greenwich and Tahiti could be accurately compared, and therefore
>the longitude difference determined.)
>We now have accurate clocks and we can usually find out our longitude,
>and so calculating solar noon is not so difficult.
>As I previously said, the thing we are looking for is solar noon, as this
>will directly give us the direction of North or South. We know solar noon at
>Greenwich is 12:00 UTC. My longitude is 151.06278 degrees. The
>Earth rotates once (360 degrees - actually almost 361, as the Earth moves
>almost 1 degree around the Sun each day, but we are interested in angles with
>respect to the position of the Sun, not the rest of the Universe) in 24
>hours, and so that is 15 degrees per hour (360/24=15). My home is
>151.06278/15 = 10.070852 hours ahead of UTC. Therefore my solar noon will
>occur 10.070852 hours before 12:00 UTC, and so that is 12.000000 - 10.070852
>= 1.929148 hours, or 01:55 and 45 seconds. [In my previous email there is a 1 hour
>forward 1 hour during summer - dumb idea - I think it would be good for the
>whole world to set clocks to UTC!]
>Murray Peterson
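Murray's arithmetic can be checked with a few lines of Python (a sketch added here; it computes mean solar noon only, deliberately ignoring the equation-of-time correction discussed earlier in the thread):

```python
# Mean solar noon in UTC for a given longitude (east positive).
# The Earth turns 15 degrees of longitude per hour (360/24).
def mean_solar_noon_utc(longitude_deg):
    hours = (12.0 - longitude_deg / 15.0) % 24.0
    h = int(hours)
    m = int((hours - h) * 60)
    s = round(((hours - h) * 60 - m) * 60)
    return h, m, s

print(mean_solar_noon_utc(151.06278))  # (1, 55, 45), matching the message
print(mean_solar_noon_utc(0.0))        # (12, 0, 0) at Greenwich
```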
Via the amsat-bb mailing list at AMSAT.ORG courtesy of AMSAT-NA.
To unsubscribe, send "unsubscribe amsat-bb" to Majordomo@amsat.org
Preparation and Validation of an Achievement Test in Filipino

PREPARATION AND VALIDATION OF AN ACHIEVEMENT
TEST IN FILIPINO FOR THIRD YEAR
HIGH SCHOOL STUDENTS
Angelita A. Kuizon
THE PROBLEM
General objectives
The study is descriptive-developmental in nature; it aimed to prepare and validate an achievement test in Filipino for third year high school students.
Specific Objectives
The study also attempted to:
1. Identify specific skills in Filipino 3 which will be included in the test
2. Prepare a table of specifications for the skills to be tested
3. Construct test items for each skill
4. Try out the test to determine its validity
5. Analyze each item of the test
6. Revise the test based on the weaknesses found during the try-out stage
7. Compute the validity and reliability indices of the test
8. Prepare a test manual
Scope and Delimitation
The study is concerned only with the preparation and validation of an achievement test in Filipino for third year students of ANHS, Butuan City, SY 2003-2004.
The achievement test prepared measures the skills developed during the four grading periods. The validity and reliability of the test were determined with the help of the Filipino 3 teachers in the school and selected third year students in Filipino from four sections.
Design of the Study
The study underwent four stages: planning, test construction, administration, and evaluation.
The descriptive-developmental method is used in this study. Its objective is to develop and validate an achievement test in Filipino 3.
Respondents of the Study
The respondents of the study were the fourth year and third year students of Agusan National High School, Butuan City, during SY 2003-2004
The subjects of the pilot testing stage were twenty (20) fourth year students chosen randomly from twenty (20) sections in the fourth year. Two hundred (200) fourth year students were the subjects for the first and second trial runs. Two hundred (200) third year students were used in the third trial run, the final run of the study. Of these, one hundred (100) were chosen to take the re-test; the same students were also the ones who took the Division test.
The teachers of ANHS teaching Filipino evaluated the appropriateness or suitability of the items, clarity of directions, and language used in the test.
The fourth year students utilized for the pilot testing were chosen through the random sampling method. Two hundred fourth year students were chosen to take the first and second trial runs. The third year students for the final run were chosen using purposive random sampling.
Preparation of the Table of Specification
In this study a pre-survey was done to identify the skills to be tested, as an aid for the development of the test items. All skills included in the test were the result of the pre-survey and the researcher's own observation.
The tests were based on the Philippine Secondary Schools Learning Competencies in grammar and literature in Filipino 3.
Item Writing
The test items in this study were developed using the analytic method, and only the multiple-choice type of test was used, since its validity and reliability are measurable.
Content Validation and Pilot Testing
After the items were developed, they were inspected by experts and Filipino teachers of Agusan National High School, Butuan City. The test was tried out on twenty (20) fourth year students, the purpose of which was to determine the language suitability of the items and ease in following directions from the point of view of the examinees. The average length of time to finish the test was also determined: during the pilot testing, the test was finished in two (2) hours and 10 minutes.
First Try out
After the pilot testing and content validation, the test was given for the first time to two hundred (200) fourth year students. It was found out that there were only ninety-four (94) acceptable items; eleven (11) needed revision and eighty-three (83) were not accepted. The test was finished in two hours during the first try out.
Second try out
There was a total of ninety-four (94) items included in the test. After it was given again to two hundred (200) students, the item analyses showed that there were fifty-six (56) good items, which were included in the final form of the test; eleven (11) needed revision, and seventeen (17) were not accepted.
Final run of the test (third try out)
There were fifty-six (56) items included in the final form of the test. It was conducted with third year students to find out the effectiveness of the revisions done during the preceding try outs. It was given to two hundred (200) students selected purposively from the third year level on February 11, 2004.
Evaluating the test
The concurrent validity of the test was calculated using the Pearson product-moment method by comparing the test re-test results to the Division Achievement Test. The reliability coefficient was determined by comparing the scores from the final run to the test re-test results. The calculated concurrent validity coefficient was 0.658, which means that the constructed test was valid, while the test re-test reliability coefficient was 0.802, which means that the test is reliable.
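The validity and reliability figures reported here are Pearson product-moment correlation coefficients; a minimal sketch of that computation in Python (the score lists below are invented for illustration and are not the study's data):

```python
import math

def pearson_r(xs, ys):
    # Pearson product-moment correlation coefficient.
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical scores: final-run test vs. re-test for five students.
final_run = [42, 50, 35, 48, 40]
retest = [44, 52, 33, 47, 41]
print(round(pearson_r(final_run, retest), 3))  # 0.974
```

A coefficient near 1, as in this toy data, indicates that students who scored high on one administration also scored high on the other, which is the sense in which the study's 0.802 supports test re-test reliability.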
Preparing the Test Manual
Based on the results of the different try outs, the final form of the test was prepared. There were fifty-six (56) items included in its final form.
Findings
1. The test is based on the learning skills found in the PSSLC used in teaching Filipino 3, and the scientific steps in constructing the test were observed.
2. It was found out that the students were not at ease with the items on getting information, noting details, symbolism of the poem, and literary theories.
3. In the test results, it was noted that the students scored low in figures of speech, conventions of drama, and interpreting the lines of the poem.
4. There were not enough theses on literary and language testing in Filipino that could be used as a basis for the present study.
5. It was found out that the test is reliable because those who got higher scores in the final form of the test also got higher scores in the test re-test.
6. The test is valid because the scores of the students in the test re-test were consistent and they also scored high in the Division Achievement Test; this type of test is really intended for the third year level.
Recommendations
1. The test can be given to other students in barangay high schools in or outside the CARAGA Region for further validation.
2. Wider use of the multiple-choice type of test and the analytic method is encouraged so that students become accustomed to such tests.
3. Teachers should not focus only on preparing workbooks; they are encouraged to prepare other teaching materials, like a validated test.
4. It is suggested in this study that teachers in the secondary level should be updated on test construction.
Soap bubbles offer key to maximizing efficiency
Frank Morgan, Atwell Professor of Mathematics at Williams College, is chiefly concerned with optimization. He investigates how to optimize shapes with minimal surfaces. Morgan will speak at the UA on
Feb. 10 during a sponsored talk. (Photo courtesy of the Worcester Polytechnic Institute)
(PhysOrg.com) -- People seek out shortcuts just about everywhere -- in traffic, at grocery stores, in weight loss regimens and on keyboards. But Frank Morgan, an upcoming UA guest speaker, said soap bubbles present the simplest example of heightened efficiency.
Some people are motivated to find the shortest drive home, the quickest way to weight loss or the fastest line in the grocery store.
One motive for such behaviors is the desire for greater efficiency. It turns out, certain soap bubbles have the same intention.
Mathematician Frank Morgan and his colleagues have determined that certain types of double bubbles maximize efficiency, a finding that can be philosophically tied to human efforts.
"Soap bubbles are a serious math topic," said Morgan, the Atwell Professor of Mathematics at Williams College. "We spend our lives trying to minimize and maximize things, and mathematicians are
trying to solve that issue and look for the simplest examples."
Morgan, also vice president of the American Mathematical Society, will speak about his research this week during a University of Arizona colloquium sponsored by the mathematics department.
Morgan said the reason soap bubbles are round is not that the molecules that constitute them are round, but because they "just want to be efficient, to enclose a given volume of air with the least
surface area or energy," Morgan said.
The round shape, observed for thousands of years, was mathematically proved optimal in 1884.
"It's more of a geometric issue because a sphere is the least-area way that you can enclose that amount of air," said Morgan, who has published six books and more than 100 scientific articles. In
1999, he and his collaborators proved the "Double Bubble Conjecture."
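The underlying single-bubble fact, that a sphere encloses a given volume with the least surface area, can be checked numerically against a rival shape; a minimal sketch (the cube comparison and the volume value are our choices for illustration, not Morgan's):

```python
import math

VOLUME = 1.0  # arbitrary fixed volume to enclose

# Sphere of volume V: V = (4/3) * pi * r**3, so r = (3V / (4*pi))**(1/3)
r = (3 * VOLUME / (4 * math.pi)) ** (1 / 3)
sphere_area = 4 * math.pi * r ** 2

# Cube of the same volume: side = V**(1/3), surface area = 6 * side**2
cube_area = 6 * VOLUME ** (2 / 3)

print(round(sphere_area, 3), round(cube_area, 3))  # 4.836 6.0
```

The sphere encloses the same unit of volume with roughly 19% less surface area than the cube, and the 1884 proof mentioned above shows no shape can do better than the sphere.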
The double bubble theorem effectively states that the double bubble "provides the least-perimeter way to enclose and separate two prescribed volumes," according to a paper Morgan co-authored that was
published in 2004.
"The new theorem is based on the philosophy that soap bubbles are trying to minimize their energy as you do when deciding what route to take in the morning," Morgan said.
Morgan said the same goes for many practical questions: How do I minimize expenses? How do I maximize my profits? How do I maximize my happiness?
"So, the soap bubbles solve a math problem at minimizing energy, which is one of the biggest questions in mathematics: How do you minimize something?"
This is promising research, he said.
"You can understand anything in nature by understanding its effort to minimize energy," Morgan said.
"If we can understand soap bubbles, we can solve other problems," he added. "What's amazing about the soap bubbles is that we can now prove beyond any doubt what's best. Isn't math amazing?"
More information: Morgan's talk, "The Double Soap Bubble Theorem," is free and open to the public and will be held on Thursday, Feb. 10 at 4 p.m. at Flandrau: The UA Science Center, 1601 E.
University Blvd.
His talk coincides with the UA Flandrau Soap Bubble Math Fun and Family Fun Time, which will be held Feb. 10 at 5:30 and 6 p.m. and again Feb. 11 at 6:30 p.m.
As part of Flandrau's event, Morgan also will host hands-on demonstrations on Feb. 10: one at 5:30 p.m. that is free and open to the public, and another at 6 p.m. for which Flandrau admission fees apply.
He also will provide a demonstration Feb. 11 at 6:30 p.m. Flandrau admission fees apply.
Equivalence Classes
Date: 02/19/99 at 03:41:21
From: Wayne Chow
Subject: Discrete Mathematics
Is there an equivalence class containing exactly 271 elements?
Date: 02/19/99 at 19:22:14
From: Doctor Kate
Subject: Re: Discrete Mathematics
To define an equivalence class, one needs to define an equivalence
relation. There are all sorts of equivalence relations one could use,
and to answer your question, it would help to know if you had a
particular equivalence relation in mind. I will assume you do not.
Let's define an equivalence relation. Let a and b be numbers.
To say a is equivalent to b, I will write a ~ b.
An equivalence relation is any relation that satisfies these rules:
1) a ~ a always (this rule is called reflexivity)
2) if a ~ b, then b ~ a (this is called symmetry)
3) if a ~ b and b ~ c, then a ~ c (called transitivity)
So there are all sorts of equivalences:
Example 1:
Equals (=) is an equivalence relation because:
1) a = a all the time
2) if a = b, b = a
3) if a = b and b = c, a = c
Example 2:
Let us define a ~ b if a and b have the same sign (let's pretend 0 has
positive sign for the purpose of this example).
For example, 3 ~ 5, 7 ~ 2, -5 ~ -18, but -6 is NOT equivalent to 3.
1) a has the same sign as a, so a ~ a
2) if a has the same sign as b, b has the same sign as a
3) if a has the same sign as b, and b has the same sign as c, a and c
must have the same sign!
Example 3:
We will say a ~ b if a and b have the same remainder when you divide by
three. (Notice that I've just changed what ~ (equivalent) means - it's
different now than in the last example. I can define it in lots of
ways, as long as I follow the three rules for equivalence relations.)
For example, 3 ~ 6, because both have remainder 0 when you divide by
three. Further, 1 ~ 16, because both have remainder 1 when you divide
by three, but 2 is NOT equivalent to 7 because 2 has remainder 2, but
7 has remainder 1.
For fun, try to check the three rules yourself. Here's the first one:
1) a ~ a because you always get the same remainder when you divide a
by three (yes, it's really easy).
Back to your question:
An equivalence class is a set of things that are equivalent to each other.
So for '=', every equivalence class is size 1, since the only thing
equivalent (equal) to a is a itself. For the second example, the
equivalence classes are infinitely big, because there are infinitely
many things with positive sign, and infinitely many things with
negative sign.
Now in the third example, what possibilities do we have? We can have
a remainder of 0, 1 or 2, so there are really three equivalence
1) the things with remainder 0 when you divide by three (multiples
of 3)
2) the things with remainder 1 when you divide by three (like 1, 4,
7, 10...)
3) the things with remainder 2 when you divide by three (like 2, 5,
8, 11...)
But how big are they? They're infinite again.
These are pretty normal examples of equivalence classes, but if you
want to find one with an equivalence class of size 271, what could you
do? Well, we could be silly, for a moment, and define an equivalence
relation like this:
Let's talk about the integers. Let a and b be integers.
If a and b are both between 1 and 271, a ~ b. If a and b are both
outside the interval 1 to 271, a ~ b. If one of a and b is between 1
and 271 and the other is not, a is NOT equivalent to b.
I know it is silly! But it's a perfectly good equivalence relation.
I will let you check the three rules yourself. Now, what equivalence
classes do we have here? Clearly, we have two:
1) 1, 2, 3, .... 271
2) everything else
And what size does the first one have? You guessed it.
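The silly relation can be checked directly in code by partitioning a finite window of integers by which side of the interval they fall on (the window bounds are an arbitrary choice for illustration):

```python
# The "silly" relation from above: a ~ b iff both lie in 1..271,
# or both lie outside that interval.
def key(n):
    return 1 <= n <= 271  # which of the two equivalence classes n belongs to

sample = range(-500, 501)  # an arbitrary finite window of integers
classes = {}
for n in sample:
    classes.setdefault(key(n), []).append(n)

print(len(classes[True]), len(classes[False]))  # 271 730
```

The first class is exactly {1, 2, ..., 271}, with 271 elements, while the second class grows with the window and is infinite over all the integers.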
I can imagine you asking whether there is a less silly example. Dodging
the question of what "silly" means exactly, I will say that there is
not really (well, maybe slightly better, but nothing as nice as my
examples above), if you want to work with integers. However, you can
define equivalence relations on all sorts of sets. You can define
equivalence relations on your socks: all socks of the same colour are
equivalent. So you may have three equivalence classes: grey, blue, and
pink. These equivalence classes probably won't have size 271, but you
never know. They're your socks, not mine.
So the answer to your question is yes.
- Doctor Kate, The Math Forum | {"url":"http://mathforum.org/library/drmath/view/54266.html","timestamp":"2014-04-19T09:58:36Z","content_type":null,"content_length":"9266","record_id":"<urn:uuid:b5d02063-27b3-442d-b061-febea36dcf57>","cc-path":"CC-MAIN-2014-15/segments/1397609537097.26/warc/CC-MAIN-20140416005217-00628-ip-10-147-4-33.ec2.internal.warc.gz"} |
News aggregator
Why isn’t there a clamp function in Data.Ord? Something like that:
clamp :: (Ord a) => a -> a -> a -> a
clamp mn mx = max mn . min mx
We have min and max, clamp is a very useful function as well, and very famous.
submitted by _skp
I haven't looked into how the Haskell language is standardized, but I have noticed that articles and libraries seem to liberally use extensions specific to GHC, or at least that GHC lists as
syntactic extensions. In comparison, the Hugs compiler looks pretty boring and no one talks about it much.
I learned to program with C++, and that community cares very much about standard conformance, but we also have multiple compilers on multiple OS's and we want to write code that conforms to everyone
at once. So when referring to Haskell, should one assume that everyone conforms to GHC, even if GHC does not define Haskell? On a related note: does anyone use a compiler other than GHC, including
Hugs, and why?
submitted by SplinterOfChaos
'I' is for isomorphic
Howdy, howdy, howdy! ('H' is for Howdy, but howdy ends with the Y-combinator, so we'll talk about then we cover the letter 'Y,' as in 'Y' is for Recursion.)
(I laugh, but the
isn't necessarily ... well, anything, and not even recursive, because it could be its opposite: inductive) (and by 'opposite' you do know I mean 'dual.')
(Uh, yeah, that.)
So, this new math, this new way of counting, the new way of proving everything, the new way of proving you can't prove everything, like, oh, location and velocity at one point.
What's really neat is these new maths prove the coast of Britain is how long, precisely?
Trick question. You may say, 'Oh, the coast of Britain is about 2,000 miles long,' because you're forgetting that the British don't use the British Imperial system to measure distance anymore, 'cause that's how those wild brits roll.
The metric system, it's so perfectly square, it's cool! (Square, geddit?)
... for those cool brits, because I guarantee you it's below 30° outside on the 2,000 miles of Britain's coast.
It's also a trick question, because using fractal geometries ('fract': 'to break' into smaller and smaller pieces), Mandelbröt, the inventor of the geometry, has measured a very precise length of the British coast:
It's infinite length. Immeasurably so. Mandelbröt knows. He measured it.
Bully for him. Question: did he actually measure those zillion + 1 metres? Huh? Did he? Huh?
All these mathematicians making these bold proclamations, like 'oh, the British coast, when more and more exactly measured, is of infinite length!'
But do they put their moneys (pounds? why is money so heavy? or euros? Why is money so cute and feminine? euro from Europa. And why is Zues always turning himself into a bull to chase the ladiez?
These are imponderables that one faces, from time to time.)
(I mean like all over Greek mythology, girlz be havin' the hots for bulls and having bull-kids tha spawn really, really bad fanfiction ... you do know what The Hunger Games and
Catching Fire
are based on, don't you? "Oh, I, Medea will be tribute for my little sister, 'cause I suddenly totally kick ass with the bow, because Battle Royale was made a decade earlier and a decade times
better, but nobody knows Battle Royale nor Greek Mythology, so I'll just crib it for some tons of moneys!" Like
Fifty Shades of Grey
was originally really bad
fanfiction, which isn't hard to find: really bad
came about because Connecticut kicked a girl to the curb who had really bad self-esteem issues, so she wrote a book about that ... and about sparkly vampires. And twenty-eight publishing companies
told her: 'You're joking, right?' then Summit made a really bad movie about it, and all the goth girls lined up to see it, and that's when people said, 'Huh, I guess it's got something, like
Harry Potter,
which is a story that, as far as I can tell, about a fat uncle who wants to punch this nerdlinger who fed his owl
in seven books in the face, but he never got that pleasure.)
(The magic of
Harry Potter?
He has a zero-maintenance owl, and more magic? "Gryffindor wins!" Because why? Because of PFM, that's why!)
(But don't worry if you missed it the first time, because the exact same thing repeats in the next six books, but with just more and more pages, and a kitchen elf with mismatched socks and pro-labour leanings.)
Yeah. 'I' is for isomorphic, ... that's what I was talking about. I'm sure of it.
Kinda. Sorta.
So, the question of equality comes up in this new world of not caring what your operating on, as the operators or functions are the bees' knees.
So how do you tell if two things are 'equal' and what does 'equal' mean now? I mean, saying
5 == 5
is simple enough in the category of numbers, but then what about of the categories of functions on numbers, how do you measure equality? Or how about where the units are categories? In the category
of categories how do you know that one category is 'equal' to another category? Or, testing the identity function, for example, when an object is transformed for a function that may or may not be the
identity function, how can you tell the object in the codomain is 'the same' as the object in the domain before the function was applied?
It's actually pretty simple, actually. What's hard is saying two things are equal without a clear definition of what equality is.
So let's give the categorical definition of equality: isomorphism. That's a word from the Greek, iso, meaning 'the same' and morphism, meaning 'shape.'
Equality is that object A is equivalent to object B if they both have the 'same' 'shape.'
Now, for things without a shape that you're familiar with ...
For example, numbers have 'shape' in their magnitude, ...
and parallelograms have shape in their ... well, shape.
But what is the 'shape' of a function? You can't look 'inside' a function to inspect its shape, just as you don't look inside a number to see that the number 3 has the same shape as the sum of the numbers 1 and 2.
Question: what's 'inside' a number?Answer: Nothing.
Solved that problem that's been bugging mathematicians since Plato and Euclid, so, moving on.
That problem's solved, but that still leaves the question of the shape of functions, and then the shape of categories.
Well, in some cases the shape of things is undecidable, because, for example, one way to determine (if you can) the shape of functions, or to see if two functions are equivalent, is to feed the same
arguments to each of the functions, then to see that if, for the same input values, both functions return the same output values (because the output values have the same shape) (again, that
aggravatingly ambiguous test!) then we have equivalence.
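Tried in code, that pointwise test can only ever be approximated over a finite sample of inputs, and it quietly assumes both functions terminate on every sample point; a sketch in Python (the sample functions are ours, for illustration):

```python
# Approximate extensional equality: compare two functions on a finite sample.
# This can show that two functions DIFFER, but can never prove them equal
# over an infinite domain, and it assumes both terminate on every input tried.
def agree_on(f, g, sample):
    return all(f(x) == g(x) for x in sample)

double = lambda x: x * 2
add_self = lambda x: x + x

print(agree_on(double, add_self, range(-100, 100)))          # True
print(agree_on(double, lambda x: x ** 2, range(-100, 100)))  # False
```

Feed it a non-terminating function like f below and agree_on never returns at all, which is exactly the problem the next paragraphs raise.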
The problem is if you have something like this function:
let f x = f (x + 1)
Then what is the answer to that?
f 0 = f 1 which is f 2 which is f 3 which is f 4 ...
You'll never be able to determine the shape of that function, because for each value you give to f, it seeks the solution by monotonically increasing the argument,
ad infinitum.
So what shape is f? And what if g is
let g y = f (y + 1)
does g have the same shape as f? Sure, ... right? But how do you verify that?
Or how about
let h z = h (g (f z))
What shape is h? Is it the same shape as f or g? or neither of them?
These are valid formulations, but how can we reason about them and then across them to make statements of truth of how these functions relate to each other that we can prove? For many numbers (but
not all!) (take that!), many functions, and even many categories, determining equality, or, correctly, isomorphism, is decidable: we inspect the shape (usually with a shape-defining function), see
that they are the same, and say, behold, objects A and B are isomorphic! And we're done.
Which is a vast improvement to what we had before:
"I assume for every p, p == p is true.."
Which is saying something equals itself because it equals itself.
But then how do you know p == q is true if you don't know what q may be: either p, or not p, or something in the same shape as p? If q is the same shape as p, inhabiting it, is it 'equal' to p?
In Category Theory, we say, 'yes.'
Why do we say 'yes' so easily?
Because we don't care what p and q are.
We care what the functions do. And a function taking either p or q as argument returns the same value, every time.
Good enough (verifiably so) for me. Our functions behave consistently. Let's move on.
To 'j'.
Oh, in Latin, there is no 'j.' Not that I'm being IVDGMENTAL or anything.
Here's a neat, little number. Tiny.
, in fact.
The number: *.
That's right. * ('star') is a number. In
Game Theory
it is the number denoting that in a two-player game, the person who has the next move has only a bad move to make.
The neat thing about * is this. No other number is equal to it. In fact * is the only number there this statement is true:
* =/= *
Star is not even equal to itself.
So where
1 = 1
is a provable (and proved) statement of truth, this infinitesimal has it's own statement of truth:
* || *
This reads: "star is
to star."
Okay, so, now you have a conversation-starter at all the cocktail parties and soirées that you're attending, you stunning socialite, you.
Don't say I never gave you anything. Have a star.
geophf to move and win.
After all, I did give you the *-move.
So: 'I' for isomorphic ... or is it for the incomparable infinitesimal that isn't?
'M' is for 'mum's the word' from me.
Sometimes, I have a bug in a complex map and filter chain, such as a wrong value coming out the end, or a value that should have filtered remains in the list. How do you deal with this? I've been
creating a pair of the value and itself, so that I can see which input resulted in the wrong output, then rewriting my chain to work on the first element of a tuple, but this is tedious. Is there a
better way?
submitted by Dooey
Hello guys,
I just wrote my first "real" haskell program: a game of life implementation. Would anyone be willing to critique my code? it's here
Regarding the code:
• I ran into trouble with IO (random and writeout)
• I probably should have used a 2d array to represent the board
• There are some performance issues (related to all the concatenations in the svg code?)
Any thoughts/comments are most welcome, please don't hold back! Thanks.
Thanks for the great comments everybody! I have started updating the code with the suggested changes. In particular: fixing the randomness bug, making evolution and writeboard "pure(r)" and
simplifying functions.
I've updated the link above to point to the new version, the original code can be found here.
submitted by stmu
As a spin-off from teaching programming to my 10 year old son and his friends, we have published a sprite and pixel art editor for the iPad, called BigPixel, which you can get from the App Store. (It has a similar feature set to the earlier Haskell version, but is much prettier!)
The migration is now complete. Currently, I have:
* Atom feed support
* Hosted on https:// (with
* Sources hosted on
* Main blog over at https://blog.cppcabrera.com
* All content from here ported over
* All posts tagged appropriately
I've documented the process as my first new post at the new location:
Learning Hakyll and Setting Up
At this point, the one feature I'd like to add to my static blog soon is the ability to have a Haskell-only feed. I'll be working on that over the coming week.
Thanks again for reading, and I hope you'll enjoy visiting the new site!
Classical proofs, typed processes and intersection types
- MATHEMATICAL STRUCTURES OF COMPUTER SCIENCE, 2008
"... X is an untyped continuation-style formal language with a typed subset which provides a Curry-Howard isomorphism for a sequent calculus for implicative classical logic. X can also be viewed as a
language for describing nets by composition of basic components connected by wires. These features make X ..."
Cited by 16 (16 self)
X is an untyped continuation-style formal language with a typed subset which provides a Curry-Howard isomorphism for a sequent calculus for implicative classical logic. X can also be viewed as a
language for describing nets by composition of basic components connected by wires. These features make X an expressive platform on which algebraic objects and many different (applicative)
programming paradigms can be mapped. In this paper we will present the syntax and reduction rules for X and in order to demonstrate the expressive power of X, we will show how elaborate calculi can
be embedded, like the λ-calculus, Bloo and Rose’s calculus of explicit substitutions λx, Parigot’s λµ and Curien and Herbelin’s λµ ˜µ.
, 2004
"... We prove the confluence of λµ˜µT and λµ˜µQ, two well-behaved subcalculi of the λµ˜µ calculus, closed under call-by-name and call-by-value reduction, respectively. ..."
, 2005
"... X is an untyped language for describing circuits by composition of basic components. This language is well suited to describe structures which we call “circuits ” and which are made of parts
that are connected by wires. Moreover X gives an expressive platform on which algebraic objects and many diff ..."
X is an untyped language for describing circuits by composition of basic components. This language is well suited to describe structures which we call “circuits ” and which are made of parts that are
connected by wires. Moreover X gives an expressive platform on which algebraic objects and many different (applicative) programming paradigms can be mapped. In this paper we will present the syntax
and reduction rules for X and some of its potential uses. To demonstrate the expressive power of X, we will show how, even in an untyped setting, elaborate calculi can be embedded, like the naturals, the λ-calculus, Bloo and Rose's calculus of explicit substitutions λx, Parigot's λµ and Curien and Herbelin's λµ˜µ. Keywords: Language design, mobility, circuits, classical logic, Curry-Howard correspondence
Kinetics of calcium-dependent inactivation of calcium current in voltage-clamped neurones of Aplysia californica
Chad, J., Eckert, R. and Ewald, D. (1984) Kinetics of calcium-dependent inactivation of calcium current in voltage-clamped neurones of Aplysia californica. Journal of Physiology, 347, (1), 279-300.
Ca currents flowing during voltage-clamp depolarizations were examined in axotomized Aplysia neurones under conditions that virtually eliminated other currents. Moderate to large currents exhibited a
two-component time course of relaxation that can be approximated reasonably well by the sum of two exponentials. The rapid phase (tau 1 approximately equal to 70 ms at 0 mV) plus the slower phase
(tau 2 approximately equal to 300 ms at 0 mV) ride upon a steady, non-inactivating current, I infinity. Conditions that diminish the peak current amplitude, such as reduced stimulus depolarization,
inactivation remaining from a prior depolarization, or partial blockade of the Ca conductance by Cd, slowed both phases of inactivation, and all selectively eliminated the tau 1 phase, such that weak
currents exhibited only the slower phase of decline. Injection of EGTA slowed both phases of inactivation, decreased the extent of the tau 1 phase, and increased the intensity of I infinity and of
the current during the tau 2 phase. For a given voltage, the rate of inactivation increased as the peak current strength was increased, and decreased as the peak current strength was decreased. For a
given peak current the rate of inactivation decreased as depolarization was increased. The relation of inactivation to prior Ca2+ entry was essentially linear for small currents, but decreased in
slope with time during strong currents. The relation also became shallower with increasing depolarization, suggesting an apparent decrease in the efficacy of Ca in causing inactivation at more
positive potentials. The basic kinetics of Ca current inactivation along with experimentally induced changes in those kinetics were simulated with a binding-site model in which inactivation develops
during current flow as a function of the entry and accumulation of free Ca2+. This demonstrated that a single Ca-mediated process can account for the two-component time course of inactivation, and
that the nearly bi-exponential shape need not arise from two separate processes. The two-component time course emerges as a consequence of a postulated hyperbolic reaction between diminishing
probability of channels remaining open and the accumulation of intracellular free Ca2+. The occurrence of a single- or a two-component time course of inactivation thus appears to depend on the levels
of internal free Ca2+ traversed during current flow.
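The two-component relaxation described in the abstract can be sketched numerically. The time constants are the reported values at 0 mV; the amplitudes and the steady current are arbitrary illustrative numbers, not values from the paper:

```python
import math

def ca_current(t_ms, a1=1.0, a2=0.5, tau1=70.0, tau2=300.0, i_inf=0.2):
    """Two-exponential relaxation riding on a steady current I_inf.

    tau1 (~70 ms) and tau2 (~300 ms) are the values reported at 0 mV;
    a1, a2 and i_inf are hypothetical amplitudes for illustration only.
    """
    return a1 * math.exp(-t_ms / tau1) + a2 * math.exp(-t_ms / tau2) + i_inf

peak = ca_current(0.0)       # a1 + a2 + i_inf at the moment of peak current
late = ca_current(1000.0)    # essentially i_inf plus a small tau2 tail
```

After roughly five times tau1 the rapid phase has vanished, leaving only the slow phase and the steady component — the behaviour the abstract describes for weak currents.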
| {"url":"http://eprints.soton.ac.uk/55974/","timestamp":"2014-04-17T02:40:31Z","content_type":null,"content_length":"27017","record_id":"<urn:uuid:5d1c2dd0-d558-49c0-b638-dc222aa9d71b>","cc-path":"CC-MAIN-2014-15/segments/1398223207046.13/warc/CC-MAIN-20140423032007-00593-ip-10-147-4-33.ec2.internal.warc.gz"} |
Physics GRE
Classical Mechanics - 20%
Kinematics, Newton's laws, work and energy, oscillatory motion, rotational motion about a fixed axis, dynamics of systems of particles, central forces and celestial mechanics, three-dimensional
particle dynamics, Lagrangian and Hamiltonian formalism, non-inertial reference frames, elementary topics in fluid dynamics.
Electromagnetism - 18%
Electrostatics, currents and DC circuits, magnetic fields in free space, Lorentz force, induction, Maxwell's equations and their applications, electromagnetic waves, AC circuits, magnetic and
electric fields in matter.
Optics and Wave Phenomena - 9%
Wave properties, superposition, interference, diffraction, geometrical optics, polarization, Doppler effect.
Thermodynamics and Statistical Physics - 10%
Laws of thermodynamics, thermodynamic processes, equations of state, ideal gases, kinetic theory, ensembles, statistical concepts and calculation of thermodynamic quantities, thermal expansion and
heat transfer.
Quantum Mechanics - 12%
Fundamental concepts, solutions of the Schrodinger equation (including square wells, oscillators, and hydrogenic atoms), spin, angular momentum, wave function symmetry, elementary perturbation theory.
Atomic Physics - 10%
Properties of electrons, Bohr model, energy quantization, atomic structure, atomic spectra, selection rules, black-body radiation, x-rays, atoms in electric and magnetic fields.
Special Relativity - 6%
Introductory concepts, time dilation, length contraction, simultaneity, energy and momentum, four-vectors and Lorentz transformation, velocity addition.
Laboratory Methods - 6%
Data and error analysis, electronics, instrumentation, radiation detection, counting statistics, interaction of charged particles with matter, lasers and optical interferometers, dimensional
analysis, fundamental applications of probability and statistics.
Specialized Topics - 9%
Nuclear and Particle physics, Condensed Matter physics, mathematical methods, computer applications, astrophysics. | {"url":"https://www.msu.edu/~wamps/mentoring/Physics_GRE.html","timestamp":"2014-04-18T23:18:35Z","content_type":null,"content_length":"11235","record_id":"<urn:uuid:ac86fadb-a894-4474-bf1d-d3440185a35e>","cc-path":"CC-MAIN-2014-15/segments/1398223207985.17/warc/CC-MAIN-20140423032007-00453-ip-10-147-4-33.ec2.internal.warc.gz"} |
[Maxima] simp:false after load(unit)
Stavros Macrakis macrakis at alum.mit.edu
Sun Jun 28 15:12:31 CDT 2009
On Sun, Jun 28, 2009 at 2:44 PM, Douglas A Edmunds<dae at douglasedmunds.com>
> I am using wxMaxima. I find that if simp:false then
> I can get a 'natural display' of the entry. The order
> of variables is not jumbled, and values (such as 25^2) remain
> as 25^2.
> I find this very helpful to verify the accuracy of the entry.
This sounds like a reasonable goal. To make it work, you will need to both
turn off simplification and not try to evaluate the expression with
simplification off. For example, you could define something like this:
verifying_read(varname) :=
  block( [simp:false, display2d:true, val],
    val: readonly(concat("Value of ",varname,"? ")),
    print("Setting ",varname," to:"),
    print(val),                 /* echoed unsimplified, since simp is false */
    print("which evaluates and simplifies to:"),
    val: ev(val, simp),         /* the simp flag re-enables simplification */
    print(val),
    varname :: val );
(%i25) a: 23$ <<< setting parameters
(%i26) b: 55$
(%i27) verifying_read('q); <<< quoting because q is the literal name of
the variable
Value of q?
a*x^3-b/x+sin(3/2*%pi); <<< user input
Setting q to:

        3   b        3
     a x  - - + sin(- %pi)      <<< unsimplified, unevaluated
            x        2

which evaluates and simplifies to:

          3   55
      23 x  - -- - 1            <<< simplified, evaluated
              x

Is this what you had in mind?
> Apparently simp:false won't always work.
> Is there some other alternative way to do this? I can replace
> everything with a dummy variable, (x^y; then x:25; y:2), but that
> adds a lot of steps.
> Doug Edmunds
> _______________________________________________
> Maxima mailing list
> Maxima at math.utexas.edu
> http://www.math.utexas.edu/mailman/listinfo/maxima
| {"url":"http://www.ma.utexas.edu/pipermail/maxima/2009/017652.html","timestamp":"2014-04-20T11:03:45Z","content_type":null,"content_length":"4967","record_id":"<urn:uuid:07c574d0-44ff-4ada-b9a8-a1c505998e3f>","cc-path":"CC-MAIN-2014-15/segments/1397609538423.10/warc/CC-MAIN-20140416005218-00005-ip-10-147-4-33.ec2.internal.warc.gz"} |
New Chicago, IN
Find a New Chicago, IN Calculus Tutor
...I've tutored many people I've encountered, including friends, roommates, and people who've sat next to me on trains, aside from people who have contacted me for my services. And I've been
assistant-teaching math online for more than three years. I'm used to people of many levels, from people struggling with prealgebra to people who do well at national and international math
13 Subjects: including calculus, statistics, geometry, algebra 1
Since graduating summa cum laude in mathematics education, my passion for teaching math has only grown stronger. I have two and a half years experience teaching high school math and three and a
half years experience tutoring college math. Some of my favorite moments as a teacher have been when I w...
12 Subjects: including calculus, geometry, algebra 1, algebra 2
...Daniel Amen and his books on the subject are very interesting and informative. I have worked in Special Education as an instructional assistant for the past 10 years. In that time I have had
the opportunity to work with approximately 10-15 students with Asperger's or Autism.
24 Subjects: including calculus, chemistry, special needs, study skills
...Our program was based in a small room on campus, called the 'Math Lab' where young college students taking algebra classes were welcome to come in and work on their homework with free
resources, including on-duty math tutors. As one of these tutors it was my responsibility to assist students wit...
7 Subjects: including calculus, physics, geometry, algebra 1
...During college, I tutored the 13 year old daughter of the cook at our fraternity for two years, usually for two hours a week. We studied mostly math, but I also helped her with science and
history questions. After college, I have worked with my girlfriend's 10 year old niece a few times in different areas that she needed help in.
28 Subjects: including calculus, chemistry, geometry, algebra 1
Related New Chicago, IN Tutors
New Chicago, IN Accounting Tutors
New Chicago, IN ACT Tutors
New Chicago, IN Algebra Tutors
New Chicago, IN Algebra 2 Tutors
New Chicago, IN Calculus Tutors
New Chicago, IN Geometry Tutors
New Chicago, IN Math Tutors
New Chicago, IN Prealgebra Tutors
New Chicago, IN Precalculus Tutors
New Chicago, IN SAT Tutors
New Chicago, IN SAT Math Tutors
New Chicago, IN Science Tutors
New Chicago, IN Statistics Tutors
New Chicago, IN Trigonometry Tutors
Nearby Cities With calculus Tutor
Beverly Shores calculus Tutors
Boone Grove calculus Tutors
Gary, IN calculus Tutors
Hebron, IN calculus Tutors
Hobart, IN calculus Tutors
Kouts calculus Tutors
La Crosse, IN calculus Tutors
Lake Station calculus Tutors
Leroy, IN calculus Tutors
Lowell, IN calculus Tutors
Ogden Dunes, IN calculus Tutors
Pottawattamie Park, IN calculus Tutors
Wanatah calculus Tutors
Wheeler, IN calculus Tutors
Whiting, IN calculus Tutors | {"url":"http://www.purplemath.com/New_Chicago_IN_Calculus_tutors.php","timestamp":"2014-04-17T15:57:43Z","content_type":null,"content_length":"24322","record_id":"<urn:uuid:549bb0b7-57e2-4f85-a7fb-2eca690d645e>","cc-path":"CC-MAIN-2014-15/segments/1397609530136.5/warc/CC-MAIN-20140416005210-00630-ip-10-147-4-33.ec2.internal.warc.gz"} |
FOM: Re: Categorical "foundations"?
Colin McLarty cxm7 at po.cwru.edu
Fri Jan 23 12:07:07 EST 1998
Friedman wrote a reply to Pratt
Including one important point I will reply to now. Pratt referred
to mathematical thought:
>Mathematical thought? Whose mathematical thought? Philosophers are going to
>want to understand and examine very carefully some of the relevant
>mathematical thought to see if it is conceptually coherent.
This is crucial to the disagreement. When I say "mathematical
thought" I mean high school math but also the thoughts of people generally
considered the best mathematicians. I do not mean that "mathematics" is just
by definition "whatever mathematicians do". I mean that I actually like, by
and large, the usual judgements: Gauss, Poincare, Hilbert, Serre, Atiyah,
Thurston are great mathematicians.
Friedman considers much of what those people do as mere sport, "not
stupid in the ordinary sense" but also not respectable. He means a kind of
"mathematical thought" validated by a particular philosophic view. Well, I
am interested in his philosophic view. That's what interests me most on fom.
But understand that he intends it as oppositional to what nearly all
professional mathematicians mean by "mathematics".
| {"url":"http://www.cs.nyu.edu/pipermail/fom/1998-January/000940.html","timestamp":"2014-04-16T13:40:04Z","content_type":null,"content_length":"3615","record_id":"<urn:uuid:bfaafa73-624d-409f-b53c-f15c8b883566>","cc-path":"CC-MAIN-2014-15/segments/1397609523429.20/warc/CC-MAIN-20140416005203-00027-ip-10-147-4-33.ec2.internal.warc.gz"} |
Non-geometric approach to gravity impossible?
It might be helpful to read Einstein's description of rulers on a heated slab
Einstein points out that being able to tile a surface with squares that don't overlap at all is possible only on a plane.
Einstein doesn't specifically mention the surface of a sphere as a counterexample, but you can imagine trying to do it, and realize that it won't work - for instance, the circumference of the Earth
at the equator (0 degrees latitude) won't equal the circumference of the Earth a short distance above it (say 1 minute of arc above the equator).
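The equator example can be checked with one formula: on a sphere of radius R, the circle of latitude φ has circumference 2πR·cos(φ), so it shrinks as soon as you move off the equator (a mean Earth radius of ~6371 km is assumed here):

```python
import math

def latitude_circumference_km(lat_deg, radius_km=6371.0):
    """Circumference of the circle of latitude on a spherical Earth."""
    return 2.0 * math.pi * radius_km * math.cos(math.radians(lat_deg))

equator = latitude_circumference_km(0.0)
one_arcmin = latitude_circumference_km(1.0 / 60.0)  # 1 minute of arc north

# The two circles differ (by a couple of metres over ~40,000 km), so rows of
# identical square tiles cannot fit both - the hallmark of a curved surface.
```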
The point is that with the actual rulers we use, observable rulers, the geometry of space-time is measurably curved - at least according to General Relativity (and light bending experiments agree
with this prediction).
We can't tile space with perfect cubes that fit perfectly together, nor can we tile space-time with perfect hypercubes. This happens because space-time isn't flat (and spatial slices of constant
Schwarzschild time aren't flat either).
It turns out you can make such a "heated ruler" theory to describe gravity. You wind up with imaginary rulers and clocks that perfectly cover an unobservable flat background space-time with squares,
like the marble slab, and real rulers that expand and contract and clocks that speed up and slow down due to "extra fields" that affect all matter uniformly (like the heated rulers), so that actual
rulers can't tile the geometry (with hypercubes for the example of space-time).
Note that in a space-time geometry, clocks play the role of rulers, in that they measure "distances in time".
More formally, one actually uses the Lorentz interval of special relativity rather than the usual concept of distance, but it probably won't be too confusing to gloss over this point.
There are some limits to this approach that Weinberg didn't mention. For instance, you can't make a flat background spacetime have wormholes, because the topology isn't the same. You also tend to
run into problems trying to model black holes (a black hole, fully extended with the Kruskal extensions, is equivalent to a wormhole, so the topology is basically different). | {"url":"http://www.physicsforums.com/showthread.php?p=3789551","timestamp":"2014-04-16T16:09:02Z","content_type":null,"content_length":"87352","record_id":"<urn:uuid:4c0f5e7f-ae99-476d-a33b-1d769c9f9ed2>","cc-path":"CC-MAIN-2014-15/segments/1398223205375.6/warc/CC-MAIN-20140423032005-00340-ip-10-147-4-33.ec2.internal.warc.gz"} |
Berwyn, IL Calculus Tutor
Find a Berwyn, IL Calculus Tutor
My tutoring experience ranges from grade school to college levels, up to and including Calculus II and College Physics. I've tutored at Penn State's Learning Center as well as students at home.
My passion for education comes through in my teaching methods, as I believe that all students have the a...
34 Subjects: including calculus, reading, writing, statistics
...During the four season with the team, I played both singles and doubles at every match. By my senior year, I was named captain of the Women's varsity team and the number 1 singles player. As
the oldest member of the team, other girls looked to me as their leader and my coaches expected me to lead practices and team warm-ups.
13 Subjects: including calculus, chemistry, geometry, biology
...As a tutor, I make a personal effort to become familiar with a student's 'learning personality' so that I may steer clear of lecturing (something already available in a conventional
classroom). A lecture can be recorded and replayed perpetually; however, if your instructor is not speaking your l...
7 Subjects: including calculus, physics, geometry, algebra 1
...I have experience helping beginning, intermediate, and advanced English speakers improve their grammar, vocabulary, pronunciation. I'm also happy to do purely conversational classes. I can
also teach beginning or intermediate Spanish as I reached a high level of fluency while living abroad.
28 Subjects: including calculus, Spanish, reading, chemistry
...Graduated High School with an A in Calculus. Received a 4 on the Calculus BC exam. Received a BSME from the University of Toledo.
20 Subjects: including calculus, physics, statistics, geometry
Related Berwyn, IL Tutors
Berwyn, IL Accounting Tutors
Berwyn, IL ACT Tutors
Berwyn, IL Algebra Tutors
Berwyn, IL Algebra 2 Tutors
Berwyn, IL Calculus Tutors
Berwyn, IL Geometry Tutors
Berwyn, IL Math Tutors
Berwyn, IL Prealgebra Tutors
Berwyn, IL Precalculus Tutors
Berwyn, IL SAT Tutors
Berwyn, IL SAT Math Tutors
Berwyn, IL Science Tutors
Berwyn, IL Statistics Tutors
Berwyn, IL Trigonometry Tutors
Nearby Cities With calculus Tutor
Bellwood, IL calculus Tutors
Broadview, IL calculus Tutors
Brookfield, IL calculus Tutors
Cicero, IL calculus Tutors
Forest Park, IL calculus Tutors
Forest View, IL calculus Tutors
La Grange Park calculus Tutors
Lyons, IL calculus Tutors
Maywood, IL calculus Tutors
North Riverside, IL calculus Tutors
Oak Park, IL calculus Tutors
River Forest calculus Tutors
Riverside, IL calculus Tutors
Stickney, IL calculus Tutors
Westchester calculus Tutors | {"url":"http://www.purplemath.com/Berwyn_IL_Calculus_tutors.php","timestamp":"2014-04-17T00:51:01Z","content_type":null,"content_length":"23886","record_id":"<urn:uuid:ac7152b3-5c73-4460-a550-b6d12b80130b>","cc-path":"CC-MAIN-2014-15/segments/1397609526102.3/warc/CC-MAIN-20140416005206-00202-ip-10-147-4-33.ec2.internal.warc.gz"} |
| {"url":"http://openstudy.com/users/spndsh/answered/1","timestamp":"2014-04-20T13:54:52Z","content_type":null,"content_length":"96186","record_id":"<urn:uuid:ed19c673-bc4d-4cbd-a011-0de3435a8047>","cc-path":"CC-MAIN-2014-15/segments/1397609538787.31/warc/CC-MAIN-20140416005218-00409-ip-10-147-4-33.ec2.internal.warc.gz"} |
Riverside, RI SAT Math Tutor
Find a Riverside, RI SAT Math Tutor
...I have excellent qualifications for tutoring ACT Math. I have spent several years teaching high school Math and tutoring students in Math at the middle school, high school, and college level.
I have also tutored many students privately to prepare them for the ACT.
25 Subjects: including SAT math, geometry, statistics, algebra 1
...From my hard work in high school, I received the Dean's Scholarship from the University of New Hampshire from which I recently graduated from in 3.5 years and was the given the honor of being
a University Scholar as well as graduating magna cum laude. These achievements I received from hard work...
18 Subjects: including SAT math, reading, English, writing
I just completed my undergraduate program in Elementary Education and Psychology, with a minor in Spanish. I have completed student teaching in 2nd grade and am certified in grades 1-6. I have
substitute taught in grades K-12 and have taught private Spanish lessons.
29 Subjects: including SAT math, Spanish, reading, English
...I have 16 years experience tutoring children in grades K-10. Let's work together to help your child do well in school and enjoy learning with multi-sensory tools. I will work efficiently with
your child's time and your family's financial resources, and am well qualified to support students in Math and Reading.
14 Subjects: including SAT math, reading, biology, algebra 1
...Very familiar with current curriculum guides and Common Core Standards and can easily assist your student at any point where they are struggling with class material or seek enrichment. Use a
variety of tools and visuals to assist students that may need different learning needs, including special...
13 Subjects: including SAT math, calculus, geometry, algebra 1
Related Riverside, RI Tutors
Riverside, RI Accounting Tutors
Riverside, RI ACT Tutors
Riverside, RI Algebra Tutors
Riverside, RI Algebra 2 Tutors
Riverside, RI Calculus Tutors
Riverside, RI Geometry Tutors
Riverside, RI Math Tutors
Riverside, RI Prealgebra Tutors
Riverside, RI Precalculus Tutors
Riverside, RI SAT Tutors
Riverside, RI SAT Math Tutors
Riverside, RI Science Tutors
Riverside, RI Statistics Tutors
Riverside, RI Trigonometry Tutors | {"url":"http://www.purplemath.com/Riverside_RI_SAT_math_tutors.php","timestamp":"2014-04-18T05:40:03Z","content_type":null,"content_length":"24055","record_id":"<urn:uuid:ac06d9ae-bd3d-4ecb-a126-1abf54ea13f9>","cc-path":"CC-MAIN-2014-15/segments/1397609532573.41/warc/CC-MAIN-20140416005212-00177-ip-10-147-4-33.ec2.internal.warc.gz"} |
Uncertainty Analyses Applied to the UAM/TMI-1 Lattice Calculations Using the DRAGON (Version 4.05) Code and Based on JENDL-4 and ENDF/B-VII.1 Covariance Data
Science and Technology of Nuclear Installations
Volume 2013 (2013), Article ID 437854, 21 pages
Research Article
^1Department of Nuclear Chemistry, Chalmers University of Technology, 412 96 Gothenburg, Sweden
^2Department of Nuclear Engineering, Chalmers University of Technology, 412 96 Gothenburg, Sweden
Received 31 July 2012; Accepted 3 November 2012
Academic Editor: Alejandro Clausse
Copyright © 2013 Augusto Hernández-Solís et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and
reproduction in any medium, provided the original work is properly cited.
The OECD/NEA Uncertainty Analysis in Modeling (UAM) expert group organized and launched the UAM benchmark. Its main objective is to perform uncertainty analysis in light water reactor (LWR)
predictions at all modeling stages. In this paper, multigroup microscopic cross-sectional uncertainties are propagated through the DRAGON (version 4.05) lattice code in order to perform uncertainty
analysis on k∞ and 2-group homogenized macroscopic cross-sections. The chosen test case corresponds to the Three Mile Island-1 (TMI-1) lattice, which is a 15 × 15 pressurized water reactor (PWR) fuel
assembly segment with poison and at full power conditions. A statistical methodology is employed for the uncertainty assessment, where cross-sections of certain isotopes of various elements belonging
to the 172-group DRAGLIB library format are considered as normal random variables. Two libraries were created for such purposes, one based on JENDL-4 data and the other one based on the recently
released ENDF/B-VII.1 data. Therefore, multigroup uncertainties based on both nuclear data libraries needed to be computed for the different isotopic reactions by means of ERRORJ. The uncertainty
assessment performed on k∞ and the macroscopic cross-sections, based on JENDL-4 data, was much higher than the assessment based on ENDF/B-VII.1 data. It was found that the computed Uranium-235
fission covariance matrix based on JENDL-4 is much larger at the thermal and resonant regions than, for instance, the covariance matrix based on ENDF/B-VII.1 data. This can be the main cause of
significant discrepancies between different uncertainty assessments.
1. Introduction
The significant increase in capacity of new computational technology made it possible to switch to a newer generation of complex codes, which are capable of representing the feedback between core
thermal-hydraulics and neutron kinetics in detail. The coupling of advanced, best estimate (BE) models is recognized as an efficient method of addressing the multidisciplinary nature of reactor
accidents with complex interfaces between disciplines. However, code predictions are uncertain due to several sources of uncertainty, like code models as well as uncertainties of plant, materials,
and fuel parameters. Therefore, it is necessary to investigate the uncertainty of the results if useful conclusions are to be obtained from BE codes.
In the current procedure for light water reactor analysis, during the first stage of the neutronic calculations, the so-called lattice code is used to calculate the neutron flux distribution over a
specified region of the reactor lattice by solving deterministically the transport equation. Lattice calculations use nuclear libraries as input basis data, describing the properties of nuclei and
the fundamental physical relationships governing their interactions (e.g., cross-sections, half-lives, decay modes and decay radiation properties, rays from radionuclides, etc.). Experimental
measurements on accelerators and/or estimated values from nuclear physics models are the source of information of these libraries. Because of the huge amount of sometimes contradictory nuclear data,
the data need to be evaluated before they can be used for any reactor physics calculations. Once evaluated, the nuclear data are added in a specific format to so-called evaluated nuclear data files,
such as ENDF-6 (Evaluated Nuclear Data File-6). The information of the evaluation files can differ because they are produced by different working groups all around the world (e.g., ENDF/B for the
USA, JEFF for Europe, JENDL for Japan, BROND for Russia, etc.). The data can be of different type, containing an arbitrary number of nuclear data sets for each isotope, or only one recommended
evaluation made of all the nuclear reactions for each isotope. Finally, these data are fed to a cross-sectional processing code such as NJOY99 [1], which produces the isotopic cross-section library
used by the lattice code. This process can create a multigroup library specifically formatted for the lattice code in use. For instance, Hébert [2] developed a nuclear data library production system
that recovers and formats nuclear data required by the advanced lattice code DRAGON version 4 [3] and higher versions. For these purposes, a new postprocessing module known as DRAGR was included in
NJOY99, which is thus capable of creating the so-called DRAGLIB nuclear data library for the DRAGON v 4.05 code.
In the major nuclear data libraries (NDLs) created around the world, the evaluation of nuclear data uncertainty is included as data covariance matrixes. The covariance data files provide the
estimated variance for the individual data as well as any correlation that may exist. The uncertainty evaluations are developed utilizing information from experimental cross-section data, integral
data (critical assemblies), and nuclear models and theory. The covariance is given with respect to point-wise cross-section data and/or with respect to resonance parameters. Thus, if such
uncertainties are intended to be propagated through deterministic lattice calculations, a processing method/code must be used to convert the energy-dependent covariance information into a multigroup
format. For example, the ERRORJ module of NJOY99 or the PUFF-IV code is able to process the covariance for cross-sections including resonance parameters and generate any desired multigroup
correlation matrix.
Among the different approaches to perform uncertainty analysis, the one based on statistical techniques begins with the treatment of the code input uncertain parameters as random variables.
Thereafter, values of these parameters are selected according to a random or quasirandom sampling strategy and then propagated through the code in order to assess the output uncertainty in the
corresponding calculations. This framework has been highly accepted by many scientific disciplines not only because of its solid statistical foundations, but also because it is affordable in practice
and its implementation is relatively easy thanks to the tremendous advances in computing capabilities. In this paper, the microscopic cross-sections of certain isotopes of various elements, belonging
to the 172-group DRAGLIB library format, are considered as normal random variables. Two different DRAGLIBs are created, one based on JENDL-4 and the other on ENDF/B-VII.1 data, because a
large number of isotopic covariance matrices has been compiled for these two major NDLs [4, 5]. The aim is to propagate the multigroup uncertainties through the DRAGON v 4.05 code, in order to
assess and compare the different code output uncertainties while using both JENDL-4 and ENDF/B-VII.1 data. Uncertainty assessment is performed on k∞ and on the different 2-group homogenized macroscopic cross-sections of a PWR fuel assembly segment with poison (UO2-Gd2O3). This test case corresponds to the Three Mile Island-1 (TMI-1) Exercise I-2 that is included in the neutronics phase (Phase
I) of the “Benchmark for Uncertainty Analysis in Modeling (UAM) for design, operation, and safety analysis of LWRs,” organized and led by the OECD/NEA UAM scientific board [6].
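The perturbation machinery described above — drawing correlated normal samples of multigroup cross-sections from a covariance matrix — can be sketched with a Cholesky factor. The 3-group means and relative covariance below are invented for illustration; the study itself works with 172 groups and library-derived covariance data:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 3-group mean cross-sections (barns) and relative covariance.
mean_xs = np.array([10.0, 2.0, 0.5])
rel_cov = np.array([[0.0004, 0.0002, 0.0000],
                    [0.0002, 0.0009, 0.0003],
                    [0.0000, 0.0003, 0.0016]])

# Absolute covariance: Cov_ij = rel_ij * mean_i * mean_j.
cov = rel_cov * np.outer(mean_xs, mean_xs)

# Cholesky factorization turns independent standard normals z into
# correlated perturbations: x = mean + L z, where L L^T = Cov.
L = np.linalg.cholesky(cov)
samples = mean_xs + rng.standard_normal((450, 3)) @ L.T

# In the paper's scheme, each of the 450 rows would feed one perturbed
# DRAGLIB library and hence one DRAGON lattice calculation.
```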
The preferred sampling strategy for the current study corresponds to the quasirandom Latin Hypercube Sampling (LHS). This technique allows a much better coverage of the input uncertainties than
simple random sampling (SRS) because it densely stratifies across the range of each input probability distribution. In fact, LHS was created in the field of safety analysis of nuclear reactors [7],
and the benefits and efficiency of using LHS over SRS have already been proved in both LWR neutronic and thermal-hydraulic predictions [8, 9]. Output uncertainty assessment is based on the
multivariate tolerance limits concept. Because the output space formed by k∞ and some of the two-group homogenized macroscopic cross-sections is correlated, the univariate analysis does
not apply anymore. By statistically perturbing the different isotopic microscopic cross-sections 450 times, 450 different DRAGLIB libraries are created. Therefore, the output sample formed by the 450
code calculations is inferred to cover 95% of the multivariate output population with at least 95% confidence. All this is performed twice, once for libraries based on JENDL-4 data and once
for libraries based on ENDF/B-VII.1 data, for their further comparison.
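The stratification property that distinguishes LHS from SRS can be shown in a few lines. This is a generic standard-uniform LHS generator, not the sampler used in the study; mapping the output through an inverse normal CDF would yield the normally distributed perturbations:

```python
import numpy as np

def latin_hypercube(n_samples, n_dims, rng):
    """Standard-uniform LHS: exactly one point per equal-probability stratum
    in every dimension, with strata paired by independent permutations."""
    u = (rng.random((n_samples, n_dims)) + np.arange(n_samples)[:, None]) / n_samples
    for d in range(n_dims):
        u[:, d] = rng.permutation(u[:, d])
    return u

rng = np.random.default_rng(1)
u = latin_hypercube(450, 2, rng)

# Each column now has one sample in each of the 450 strata
# [k/450, (k+1)/450) - a coverage guarantee simple random sampling lacks.
```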
In the next sections, the multigroup microscopic cross-section uncertainties computed with ERRORJ are shown for some important nuclides. Thereafter, a deeper review on how to perform a statistical
uncertainty analysis is presented, with emphasis on a developed methodology to properly sample the scattering kernel and the fission spectrum. This allows a correct uncertainty propagation through
the lattice code since the neutron balance is preserved in the transport equation. Finally, results of the uncertainty analyses are shown for the test case and discussed.
2. Multigroup Uncertainty Based on JENDL-4 and ENDF/B-VII.1
2.1. Main Features
The uncertainty information in the major NDLs is included in the so-called “covariance files” within the ENDF-6 formalism. The following covariance files are defined:
(i) data covariances obtained from parameter covariances and sensitivities (MF30),
(ii) data covariances for number of neutrons per fission (MF31),
(iii) data covariances for resonance parameters (MF32),
(iv) data covariances for reaction cross-sections (MF33),
(v) data covariances for angular distributions (MF34),
(vi) data covariances for energy distributions (MF35),
(vii) data covariances for radionuclide production yields (MF39),
(viii) data covariances for radionuclide production cross-sections (MF40).
To propagate nuclear data uncertainties in reactor lattice calculations, it is necessary to begin by converting energy-dependent covariance information in ENDF format into multigroup form. This task
can be performed conveniently within the latest updates of NJOY99 by means of the ERRORJ module. In particular, ERRORJ is able to process the covariance data of the Reich-Moore resolved resonance
parameters, the unresolved resonance parameters, the angular component of the elastic scattering cross-section, and the secondary neutron energy distributions of the fission reactions [5]. ERRORJ was originally developed by Kosako and Yamano [10] as an improvement of the original ERRORR module in order to calculate self-shielded multigroup cross-sections, as well as the associated correlation coefficients. These data are obtained by combining absolute or relative covariances from the ENDF files with an already existing cross-section library, which contains the multigroup data produced by the GROUPR module.
In the presence of narrow resonances, GROUPR handles self-shielding through the use of the Bondarenko model [1]. To obtain the part of the flux that provides self-shielding for isotope i, it is assumed that all other isotopes are represented with a constant background cross-section σ0. Therefore, at resonances the flux takes the following form:

φ_i(E) ≈ C(E) / (σ_t,i(E) + σ0),

where C(E) is a smooth weighting function and σ_t,i(E) is the total microscopic cross-section of isotope i. The most important input parameters to ERRORJ are the smooth weighting function C(E) and the background cross-section σ0. It should be noticed that these are assumed to be free of uncertainty.
2.2. Computation of Uncertainties and Correlation Matrices of Important Isotopes
In this section, results of the ERRORJ module are shown in Figures 1-7 for different reactions of five important nuclides; results for one of them are based on JENDL-3.3 data, since JENDL-4 does not contain uncertainty information for that isotope. The values of the microscopic cross-sections and their relative variances in percentage were computed for an energy grid of 172 groups by using a standard weighting-flux shape (selected through the corresponding GROUPR option of NJOY). For all cases, an infinite-dilution condition was assumed (i.e., a very large background cross-section σ0) and the temperature was considered to be 293 K.
Each of the following figures contains three main plots. The plot on the right corresponds to the value of a certain reaction cross-section, while the plot at the top corresponds to the relative variance (i.e., the variance of the cross-section divided by the actual value of the cross-section at a certain energy group). These two plots are presented in multigroup format as a function of energy (eV). Finally, the plot at the center represents the correlation that exists among the 172 energy groups for that type of reaction.
From the isotopic composition of the TMI-1 exercise, only five nuclides carry uncertainty information in both the JENDL-4 and ENDF/B-VII.1 libraries. Therefore, only the corresponding reactions of these nuclides were statistically perturbed. It has to be mentioned that the fission spectrum uncertainty could not be computed by ERRORJ for the ENDF/B-VII.1 library for either of the fissioning isotopes. The code gave an error message about the I/O format of the file and, since this could not be resolved, the fission spectrum covariance matrices from JENDL-4 were used instead. This problem has already been reported to the ENDF/B research group.
As seen in the previous figures, for each cross-section of a given nuclide, the variability of the probability of interaction at a certain energy group is related to the probability of interactions
at other energy groups since the same measuring equipment was used when determining such probabilities. Such correlation can be studied through the self-reaction covariance matrix. In the same way,
the variability of the probability of interaction at a certain energy group of a certain type of reaction is also related to the probability of interaction of a second type of reaction at the same
energy group due to the same reason as above. Such correlation can be studied through the multireaction covariance matrix.
It should be noted that in the modern JENDL libraries, covariances for mu-bar (which allow performing an uncertainty analysis up to a linear degree of anisotropy) are defined for actinides. However, this is not the case for the new ENDF/B-VII.1 library, and thus the uncertainty analysis was only performed on the isotropic components of the scattering matrix. Another important issue noticed while computing the different reaction covariances is that resonance uncertainties in JENDL-4 are absolute. This means that the self-shielded relative variances (or relative standard deviations) change as a function of temperature and dilution at the resonant groups. To illustrate this issue, relative standard deviations at the resonant groups were computed for different background cross-sections for two representative reactions, as shown in Figures 8 and 9, respectively. Small relative standard deviations are obtained with large background cross-section values and vice versa. This behavior is supported by the results obtained by Chiba and Ishikawa [11], where a dependency between relative multigroup covariances and background cross-sections at the resonances was observed when JENDL-3.2 data were employed.
Regarding the ENDF/B-VII.1 resonance uncertainties, only an absolute dependency was observed, leaving the relative terms intact for any temperature and/or dilution condition. This is an important point because, as will be seen in Section 3, it is very easy to implement the perturbation methodology based on relative uncertainties. Nevertheless, an exception must be made at the actinide resonances for the JENDL-4 case.
3. Statistical Uncertainty Analysis
3.1. Uncertainty Assessment Using Nonparametric Tolerance Limits
The first step of the standard statistical framework is to identify, among the code inputs, the most important uncertain parameters, which can be models, boundary conditions, initial conditions, closure parameters, and so forth. They should be characterized by a sequence of probability distribution functions (PDFs) known as the uncertain input space. Then, a sampling strategy is used to generate a sample of size N from such an input space, which is propagated through the code in order to treat the output calculations as random variables. This scheme is shown in Figure 10.
Once a sample of the code output has been taken, a statistical inference of the output population parameters is performed. During recent years, it has been common in the field of nuclear reactor
safety to use the theory of nonparametric tolerance limits for the assessment of code output uncertainty. This approach, proposed by Gesellschaft für Anlagen-und Reaktorsicherheit (GRS) [12], is
based on the work done by Wilks [13, 14] to obtain the minimum sample size needed to infer a certain coverage of a population with a certain confidence. Let us assume that the uncertainty assessment is performed on only one output parameter. For the two-sided case, where a coverage β of the output population is to be inferred between the lowest and highest sampled values with a confidence γ, the minimum sample size N is given by the following implicit equation [15]:

1 − β^N − N(1 − β)β^(N−1) ≥ γ.  (2)

For example, if the 5th and 95th percentiles of the population are to be inferred with a 95% confidence, a sample size of 93 elements is required. It should be noticed that this analysis is solely
based on the number of samples and applies to any kind of PDF the output may follow. Also, since the input space is only used as an indirect way to sample the output space, the use of nonparametric
tolerance limits is independent of the number of uncertain input parameters. When the code output is composed of several variables that depend on each other, the uncertainty assessment should be based on the theory of multivariate tolerance limits. Wald [16, 17] was the first to analyze the statistical coverage of a joint distribution-free PDF. In Guba et al. [18], the concern about assessing separate tolerance limits to statistically dependent outputs was raised within the nuclear reactor safety community. In that work, it was shown that the general equation developed by Noether [19] for simultaneous upper and lower tolerance limits can be used to determine the minimum sample size required to cover, in a distribution-free manner, a joint PDF depending on the number of output variables. Such equation reads as follows:

Σ_{j=0}^{N−(p+r)} C(N, j) β^j (1 − β)^(N−j) ≥ γ,  (3)

where p is the number of upper tolerance limits, r is the number of lower tolerance limits to be assessed, and C(N, j) is the binomial coefficient. For instance, in the case of two-sided tolerance limits for a single variable, p = r = 1 and (3) turns out to be the same as (2). Therefore, if a two-sided uncertainty assessment is going to be performed on 2 statistically dependent output variables, then p = r = 2, and so on. It should be noticed that the sample size in the multivariate case depends on the correlation among the different parameters. Guba et al. [18] exemplified this fact for a
bivariate normal distribution. It was then shown that if the variables were highly correlated, the required sample size to cover the joint PDF is smaller than for the poorly correlated case.
Nevertheless, if nothing is known about the output space PDF, (3) would give the required sample size for the desired multivariate coverage with a desired confidence independently of the correlation
(or covariance) among the output parameters. This is a very powerful statistically significant way to assess uncertainty in the design of computational experiments since in general, nothing is known
about the PDF where the calculations are coming from.
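These sample-size rules are easy to verify numerically. The sketch below (function names are ours) scans N until the Noether binomial sum reaches the desired confidence; with p = r = 1 it reproduces the classical two-sided Wilks result of 93 samples.

```python
from math import comb

def noether_confidence(n, beta, p, r):
    """Confidence that p upper and r lower order statistics of n
    distribution-free samples cover a fraction beta of the population:
    the binomial sum over j = 0 .. n - (p + r)."""
    return sum(comb(n, j) * beta**j * (1.0 - beta)**(n - j)
               for j in range(n - (p + r) + 1))

def min_sample_size(beta=0.95, gamma=0.95, p=1, r=1):
    """Smallest n whose confidence reaches gamma."""
    n = p + r
    while noether_confidence(n, beta, p, r) < gamma:
        n += 1
    return n

# Classical one-variable results: 59 (one-sided) and 93 (two-sided).
```

For the multivariate cases quoted later, the same scan applies with p = r equal to the number of two-sided output variables, although tabulated values such as those of Ackermann and Abt may differ slightly from this exact binomial solution.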
Other authors have derived the minimum sample size for multivariate nonparametric tolerance limits, such as the approximation presented by Scheffe and Tukey [20]:

N ≈ (1/4) · χ²(2(p + r); γ) · (1 + β)/(1 − β) + (p + r − 1)/2,  (4)

where χ²(2(p + r); γ) is the γ-quantile of the chi-square distribution with 2(p + r) degrees of freedom. Ackermann and Abt [21] tabulated (4) as a function of the desired coverage and confidence, respectively, for a large number of tolerance limits. These tables are in agreement with, for instance, Table 4 of [18] with respect to the solution of (3) for the two-sided case and up to 3 variables.
3.2. Latin Hypercube Sampling
The simplest sampling procedure for developing a mapping from the input space to the output space is simple random sampling (SRS). In this procedure, each sample element is generated independently from all other sample elements; however, there is no assurance that a sample element will be generated from any particular subset of the input space. In particular, important subsets with low probabilities but high
consequences are likely to be missed if the sample is not large enough [7]. Even though in the theory of nonparametric tolerance limits the minimum sample size is independent of the dimension of the input space, if an efficient coverage of the different inputs can be achieved with the same sample size that is needed to cover the output space in a statistically significant way, then the code nonlinearities are better handled and the output uncertainty assessment becomes more efficient as well. This goal can be achieved if Latin hypercube sampling (LHS) is employed instead of simple random sampling.
LHS can be viewed as a compromise, since it incorporates many of the desirable features of both random and stratified sampling. To generate a sample of size N from the input space in consistency with the input PDFs, LHS proceeds according to the following scheme. The range of each variable is exhaustively divided into N disjoint intervals of equal probability, and one value is selected at random from each interval. The N values thus obtained for the first variable are paired at random, without replacement, with the N values obtained for the second variable. These N pairs are combined in a random manner, without replacement, with the N values of the third variable to form triples. This process is continued until a set of N K-tuples is formed, where K is the number of input variables. In this way, a good coverage of all the subsets defining the uncertain input space can be achieved. This procedure is exemplified in Figure 11 for two different possible input distributions, one corresponding to a uniform distribution and the second to a normal distribution.
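The stratify-then-pair scheme just described can be sketched as follows (function and variable names are ours; the normal-distribution mapping uses the standard library's inverse CDF):

```python
import numpy as np
from statistics import NormalDist

def lhs(n_samples, n_vars, seed=None):
    """Latin hypercube sample of uniforms on [0, 1): for each variable,
    draw one value per equiprobable stratum, then shuffle the pairing
    across variables (random pairing without replacement)."""
    rng = np.random.default_rng(seed)
    u = np.empty((n_samples, n_vars))
    for k in range(n_vars):
        # one point inside each stratum [i/N, (i+1)/N)
        strata = (np.arange(n_samples) + rng.random(n_samples)) / n_samples
        u[:, k] = rng.permutation(strata)
    return u

# Map the stratified uniforms through an inverse CDF, e.g. a hypothetical
# normally distributed cross-section with mean 10.0 and sigma 0.5:
u = lhs(450, 2, seed=1)
xs = np.array([NormalDist(mu=10.0, sigma=0.5).inv_cdf(p) for p in u[:, 0]])
```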
In the field of computational experiments, the concept of tolerance limits applied to the code uncertainty assessment remains valid even if the input space is sampled with LHS. This is because the theory does not assume any kind of parametric distribution for the code output space and is founded only on the ranking of a statistically significant number of samples. Therefore, since the theory is independent of the dimensionality of the input space, it does not matter how the input space is sampled as long as the minimum sample size requirement is fulfilled. In other words, LHS is used to cover the input space much better and, hence, to better handle the code nonlinearities, in order to infer more realistic output percentiles than the ones SRS might infer for the same sample size and the same level of confidence. Examples of LHS applied to the inference of code output tolerance limits in a nonparametric way can be found in [7, 22, 23]. Moreover, it should be recalled that the estimation of the output cumulative distribution function (CDF) is unbiased when LHS is employed [24].
3.3. Determination of the Sample Size according to Two-Group Diffusion Theory
Since the uncertainty analysis in this work is performed on both k-infinity and the homogenized two-group macroscopic cross-sections, the minimum sample size required to assess multivariate uncertainty based on nonparametric tolerance limits depends on the number of macroscopic cross-sections that enter the calculation of k-infinity. For example, by solving the two-group diffusion equation in a homogeneous system and applying vacuum boundary conditions [25], the well-known four-factor formula can be derived:

k∞ = (νΣf,1 + νΣf,2 · Σ1→2/Σa,2) / ΣR,

where the removal cross-section is given by

ΣR = Σa,1 + Σ1→2 − (φ2/φ1) Σ2→1.

It is common that thermal upscattering is not present and thus ΣR = Σa,1 + Σ1→2. Therefore, when assessing the covariances between k-infinity and the two-group macroscopic cross-sections, a minimum of 6 output parameters is in question (i.e., k∞, νΣf,1, νΣf,2, Σa,1, Σa,2, and Σ1→2). According to Table 1b of [21], for a two-sided 95% coverage of 6 variables with a 95% confidence, a minimum of 361 samples is required. Nevertheless, if the uncertainty assessment is extended to other parameters such as the diffusion coefficients, a sample size of 410 elements is needed, because the diffusion coefficients are related to the other parameters through the transport cross-section. Therefore, since one of the main goals of performing lattice calculations is to prepare a homogenized and energy-collapsed set of parameters for any further core analysis, the output sample for the multivariate uncertainty analysis should contain at least 410 elements.
4. The Input Uncertain Space: Sampling Procedure of the DRAGLIB Library
4.1. Main Features of the DRAGON Code and the DRAGLIB Library
The DRAGON code is the result of an effort made at École Polytechnique de Montréal to rationalize and unify the different models and algorithms used to solve the neutron transport equation into a
single code.
Advanced lattice codes essentially feature self-shielding models with capabilities to represent distributed and mutual resonance shielding effects, leakage models with space dependent isotropic or
anisotropic streaming effect, availability of the characteristics method and burnup calculation with energy-resolved reaction rates. The advanced self-shielding models available in DRAGON version
4.05 are based on two main approaches: equivalence in dilution or subgroup models. State-of-the-art resonance self-shielding calculations with such models require dilution-dependent microscopic cross-sections for all resonant reactions, and for more than 10 specific dilutions. Ultrafine multigroup cross-section data are also required in the resolved energy domain. Thus, the cross-section library energy structure should comprise at least 172 groups. Since these capabilities require information that is not currently available in, for example, the WIMS-formatted library, a nuclear data library production system was written by Hébert [2] to recover and format the nuclear data needed to feed the DRAGON v4.05 code.
The management of a cross-section library requires capabilities to add, remove, or replace an isotope, and the capability to reconfigure the burnup data without recomputing the complete library. For
these purposes, DRAGR was developed by Hébert [2] and is an interface module to perform all these functions while maintaining full compatibility with NJOY99 and its further improvements. DRAGR
produces DRAGLIB, a direct-access cross-section library in a self-described format that is compatible with DRAGON or with any lattice code supporting that format. The DRAGR Fortran module was written as a clean and direct utility that makes use of the PENDF and GENDF files produced by NJOY. For each nuclide within DRAGLIB, the cross-sections of the main neutron-interaction reactions are described. Also, Nu-Sigma-Fission, the released neutron energy spectrum (CHI), and the P0 and P1 scattering matrices are included. Since the uncertainty study reported hereafter is based on JENDL-4 data, a 172-group DRAGLIB library needed to be produced from JENDL-4 information for different temperatures and background cross-sections. The first 79 groups correspond to the thermal region, the next 46 groups to the resonant region, and the last 47 groups to the fast region. Examples of microscopic cross-sections for different reactions included in DRAGLIB can be found in Figures 1, 2, and 3. These cross-sections were calculated at 293 K and considering an infinite dilution.
The DRAGON code solves the multigroup criticality equation at the pin-cell level using collision probability theory, and at the fuel-assembly level by means of the method of characteristics. In its integro-differential form, the zero-level transport-corrected multigroup equation is given by

Ω·∇φg(r, Ω) + Σtr,g(r) φg(r, Ω) = (1/4π) [ Σ_{g'} Σs,tr,g'→g(r) φg'(r) + χg Σ_{g'} νΣf,g'(r) φg'(r) ].  (7)

The left-hand side of (7) describes how neutrons disappear in space by leakage and by any absorption or scattering reaction at group g, while the right-hand side describes how neutrons are produced at that energy level through the sum of the scattering and fission contributions coming from the different neutron energy groups. The input uncertain space is then composed of the different microscopic cross-sections, nu-bar, and the fission spectrum. If any statistical perturbation of a type of reaction is made on one side of the transport equation, it should somehow be propagated to the other side as well, in order to preserve the neutron balance. However, some uncertainty information (depending on the type of reaction and the nuclide in question) cannot be computed directly from the NDLs. For example, straightforward covariances cannot be obtained for the scattering matrices. Therefore, the different methodologies needed for a proper propagation of microscopic cross-section uncertainty are detailed in the next subsections.
4.1.1. Uncertainty Analysis of the Scattering Cross-Section
The scattering source can be expanded over reaction types and nuclides, where an x-index indicates whether the reaction is elastic or inelastic, and an i-index refers to the nuclide. In general, the P0 and P1 scattering matrices in multigroup format computed by NJOY are based, within the ENDF-6 formalism, on the files that account for the energy-angle distributions of the different reactions. For example, the MT=2 reaction is considered for elastic scattering, while all the reactions present between MT=51 and MT=91 should be taken into account for inelastic scattering.
Let us analyze the P0 matrix. For the nominal case, the following relationship between the energy-integrated cross-section and the scattering matrix holds:

σs0,g = Σ_{g'} σs0,g→g'.

Since uncertainties are only given for the isotropic scattering reaction, any sampled value σ*s0,g can be propagated to the scattering matrix if the nominal transfer probabilities are kept constant, that is,

σ*s0,g→g' = (σ*s0,g / σs0,g) · σs0,g→g'.

In the nominal case of the transport-corrected version, a degree of linear anisotropy can be taken into account by modifying the diagonal of the scattering matrix as follows:

σs,tr,g→g = σs0,g→g − μ̄g · σs0,g.  (11)
As shown in Section 2, uncertainties for mu-bar, the average cosine of the scattering angle, are defined in JENDL-4 only for some actinides. Since this is not the case for the ENDF/B-VII.1 library, perturbations of mu-bar were not considered in this paper; otherwise, a fair comparison between the two uncertainty assessments would not be possible.
If it is considered that any nondiagonal element of the scattering matrix is isotropic, any perturbation can be balanced in the transport equation, since the total microscopic cross-section is given by the sum of the absorption and the corrected scattering cross-sections. This means that

σ*t,g = σ*c,g + σ*f,g + σ*s,tr,g,  (12)

where the capture and fission perturbations can be directly sampled from the covariance matrices computed with ERRORJ.
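As a minimal sketch of this bookkeeping (array names are ours), the sampled scattering factors scale whole rows of the P0 matrix, and the total cross-section is then rebuilt from its parts so that absorption plus scattering stays consistent on both sides of the balance:

```python
import numpy as np

def perturb_scattering(scatter_matrix, factors):
    """Scale each departure-group row g of the P0 scattering matrix by the
    sampled factor f_g = sigma_s*(g) / sigma_s(g); the nominal transfer
    probabilities within each row are left untouched."""
    return scatter_matrix * factors[:, None]

def rebalanced_total(sigma_c, sigma_f, scatter_matrix):
    """Total cross-section rebuilt as absorption + (row-summed) scattering,
    so the perturbed transport equation stays balanced."""
    return sigma_c + sigma_f + scatter_matrix.sum(axis=1)
```

Because each row is scaled by a single factor, the perturbed row sums reproduce the sampled scattering cross-sections exactly.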
4.1.2. Uncertainty Analysis of the Fission Spectrum
Equation (7) is expressed in such a way that the fission spectrum should always satisfy the following normalization condition:

Σ_g χg = 1.

If a sample is to be drawn for the different spectrum groups, the perturbed spectrum must be carefully renormalized to unity. In the statistical uncertainty approach, this can be achieved by dividing each perturbed group term of the spectrum by the sum of all the perturbed group terms. For a given sample,

χ̃g = χ*g / Σ_{g'} χ*g',

where the new perturbed fission spectrum χ̃g will satisfy the normalization condition, that is, Σ_g χ̃g = 1.
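In code, this renormalization is a one-liner (a sketch with our names):

```python
import numpy as np

def renormalize_chi(chi_sampled):
    """Divide each perturbed group term by the sum over all groups so the
    sampled fission spectrum sums back to unity."""
    return chi_sampled / chi_sampled.sum()
```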
4.2. Sampling the DRAGLIB Library
For our study, the multigroup microscopic cross-sections of certain isotopes are treated as random variables following a normal PDF. Therefore, for each cross-section of a given nuclide, the nominal
cross-section value at each energy group corresponds to the mean value. Since the LHS methodology described in the previous section assumes that the different variables are independent, the Latin
hypercube procedure developed by Iman and Conover [26] for sampling correlated variables was followed. This procedure is based not directly on the covariance matrix but, instead, on the correlation
matrix. Nevertheless, it can be applied in a very straightforward manner because the ERRORJ output can be processed by the NJOYCOVX [27] program in order to obtain directly, for each reaction, the
variance of each group and the associated correlation matrices.
A final total correlation matrix needs to be assembled from all the individual self- and mutual-reaction correlation matrices. This corresponds to a square matrix of size 172 × (number of individual correlation matrices). Before starting the sampling procedure, the total correlation matrix must be positive definite. If it is not, the negative eigenvalues on the diagonal of its eigenvalue matrix Λ should be made slightly positive (creating Λ+). Then, the new positive definite total correlation matrix takes the form

C+ = Q Λ+ QT,

where Q is a matrix containing the eigenvectors of the original correlation matrix.
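A possible implementation of this eigenvalue fix (our names; the unit diagonal is restored after the reconstruction, a detail the text leaves implicit):

```python
import numpy as np

def make_positive_definite(corr, eps=1e-8):
    """Clip negative eigenvalues of a symmetric correlation matrix to a
    small positive value, rebuild the matrix, and restore its unit
    diagonal by symmetric rescaling."""
    w, q = np.linalg.eigh(corr)
    w_clipped = np.where(w < eps, eps, w)
    fixed = q @ np.diag(w_clipped) @ q.T
    d = np.sqrt(np.diag(fixed))
    return fixed / np.outer(d, d)
```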
For each nuclide, the procedure for correlated variables begins by taking an LHS sample based on the individual group variances, under the assumption that the group cross-section values are independent. This yields a matrix X of size N × K, where K is the total number of multigroup cross-sections and N the number of samples. The aim of the procedure is to rearrange the values in the individual columns of X so that a desired rank correlation structure results among the individual variables. This can be achieved by relating the correlation coefficients of the X matrix to the total correlation matrix C.
If the correlation matrix of X is called M, the method applies a Cholesky decomposition to both C and M in order to obtain, respectively, the lower triangular matrices P and Q that satisfy the following relationships:

C = P PT,  M = Q QT.

Then, the target or desired matrix T can be computed as

T = X ST,

where the matrix S relates P and Q as follows:

S = P Q−1.

In the end, T has a correlation matrix equal to C, and the values of each variable in X must be rearranged so that they have the same rank (order) as in the target matrix T. That is why this method is known as the rank-induced method.
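The rank-induced method can be sketched as follows (our variable names; the target correlation matrix is assumed to be positive definite already):

```python
import numpy as np

def iman_conover(x, target_corr):
    """Rearrange each column of the independent sample matrix x (N x K)
    so that its rank correlation structure approaches target_corr."""
    n, k = x.shape
    p = np.linalg.cholesky(target_corr)      # C = P P^T
    m = np.corrcoef(x, rowvar=False)         # sample correlation of x
    q = np.linalg.cholesky(m)                # M = Q Q^T
    s = p @ np.linalg.inv(q)                 # S = P Q^-1
    t = x @ s.T                              # T has correlation ~ C
    # give each column of x the same rank order as the target matrix T
    x_star = np.empty_like(x)
    for j in range(k):
        ranks = np.argsort(np.argsort(t[:, j]))
        x_star[:, j] = np.sort(x[:, j])[ranks]
    return x_star
```

The rearrangement preserves each column's marginal sample exactly, which is why the method induces a rank correlation structure rather than an exact Pearson one.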
Since ERRORJ can only evaluate one dilution at a time, a methodology was developed in this work to shield the cross-section covariances at all dilutions and temperatures. Because ERRORJ gives both the relative and the absolute covariance matrices, only one evaluation is necessary, at one temperature and one dilution (i.e., infinite dilution and 293 K). Afterwards, it is only required to scale the relative multigroup covariance matrix by the self-shielded cross-section values at each energy group. This scheme is exemplified in Figure 12.
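This rescaling amounts to converting the (assumed dilution-independent) relative covariance into an absolute one using the shielded cross-section values; as noted in Section 2.2, the shortcut does not hold at the JENDL-4 actinide resonances, whose uncertainties are absolute. A sketch with our names:

```python
import numpy as np

def absolute_covariance(rel_cov, sigma_shielded):
    """cov[g, g'] = sigma[g] * sigma[g'] * rel_cov[g, g'] for the
    self-shielded cross-sections at a given temperature and dilution."""
    return rel_cov * np.outer(sigma_shielded, sigma_shielded)
```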
For moderators and some other materials, only the capture cross-section and the scattering matrix are to be perturbed, directly in the DRAGLIB format. It is important to modify the total cross-section according to the different capture and scattering perturbations, since the total cross-section is used by the code and the neutron balance must be preserved. For the important actinides present in LWRs, the fission cross-section, Nu-Sigma-Fission, and the fission spectrum should be modified in DRAGLIB as well. The total cross-section for these cases should be modified and transport corrected according to (11) and (12). In principle, according to the code developers [3], the transport correction is made at the code level and thus the total cross-section included in DRAGLIB should be based only on isotropic terms. However, in the implemented statistical methodology, DRAGLIB is modified to include the transport-corrected version for each sample; therefore, while performing lattice calculations, a flag must be raised at the input-deck level in order to inform the code not to perform the transport correction.
5. Results
5.1. Uncertainty Analysis
The TMI-1 test case corresponds to a PWR fuel assembly segment with burnable poison at full-power conditions (i.e., a pellet temperature of 900 K). Four fuel pins are doped with gadolinia as a burnable poison. The UO2-Gd2O3 fuel has a density of 10.144 g/cm^3, the fuel enrichment is 4.12 w/o, and the Gd2O3 concentration is 2 wt%. Important geometrical rod parameters are presented in Table 1; more information, such as the isotopic composition, can be found in [6].
The nominal solution to this exercise is shown in Tables 2 and 3, where the fast and thermal macroscopic cross-sections and k-infinity are presented using libraries based on both JENDL-4 and ENDF/B-VII.1 data. For reference, Ball [28] computed a k-infinity value of 1.40340 for this exercise based on the 69-group IAEA library. All these nominal values can be used as a point of comparison for the uncertainty results.
The final sample of 450 elements is sufficient to cover 95% of the output space formed by k-infinity, the different homogenized macroscopic cross-sections, and the diffusion coefficients with a 95% confidence, since a sample size of 410 suffices, as previously explained. The relative uncertainty of a parameter is defined as its sample standard deviation divided by its sample mean, in percent.
Uncertainty results for k-infinity are presented in Table 4. For the two-group macroscopic cross-sections and diffusion coefficients, uncertainty results based on JENDL-4 are shown in Tables 5, 6, and 7, while those based on ENDF/B-VII.1 are shown in Tables 8, 9, and 10.
The correlation matrices among the different output parameters are shown, respectively, in Figures 13 and 14.
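Each relative-uncertainty entry in Tables 4-10 reduces to a simple sample statistic over the 450 code outputs (a sketch; ddof=1 selects the unbiased sample variance):

```python
import numpy as np

def relative_uncertainty_pct(sample):
    """Relative uncertainty in percent: sample standard deviation
    divided by the sample mean."""
    sample = np.asarray(sample, dtype=float)
    return 100.0 * sample.std(ddof=1) / sample.mean()
```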
5.2. Analysis of the Results
As can be appreciated from the previous tables, the computed uncertainties in the output parameters are much higher for the JENDL-4 case than for the ENDF/B-VII.1 case. For example, the standard deviation of the Nu-Sigma-Fission cross-section for JENDL-4 is 78 times larger than its ENDF/B-VII.1 counterpart. In a previous sensitivity study applied to a PWR fuel segment and based on JENDL-4 [9], it was found that the most dominant input parameter corresponded to one particular reaction. The computed ERRORJ variances from both NDLs for that reaction are compared in Figure 15.
It can be seen that up to 1000 eV, uncertainties based on JENDL-4 data are much larger than those based on ENDF/B-VII.1. This creates a large sampling variability of the corresponding microscopic cross-section. This effect at 293 K and infinite dilution is presented in Figure 16, where two different samples of 100 elements each were drawn based on JENDL-4 and ENDF/B-VII.1 covariance data, respectively.
A large difference is observed in the spread of the samples for thermal energies and almost up to the last resonant energies. Having large relative variances in JENDL-4 at the thermal groups (~7%) compared to the small relative variances in ENDF/B-VII.1 (~0.5%), together with variance differences of up to a factor of 10 at the resonances, is the cause of this large sampling variability between the two libraries.
Since the uncertainties included in JENDL-4 for this reaction are very high compared with, for instance, the ones included in the ENDF/B-VII.1 library, such a reaction becomes the most dominant. Other studies based on the SCALE 44-group covariance matrices [28, 29] suggested that a different microscopic cross-section is the most influential one. Indeed, it is natural to think that capture cross-sections have a big impact on lattice calculations, since capture is the only reaction that unbalances just one side of the neutron transport equation (i.e., disappearance at a certain energy group). Nevertheless, inconsistent uncertainties among the different input reactions make the uncertainty computations very biased.
6. Conclusions
In this paper, a statistical uncertainty analysis was performed on lattice calculations using the DRAGON v4.05 code. The input uncertainty space corresponded to the microscopic cross-sections of the different nuclides of the DRAGLIB library. This work is one of the first attempts to process, in multigroup format, uncertainties from modern nuclear libraries such as JENDL-4 and ENDF/B-VII.1 so that they can be applied to the uncertainty assessment of lattice calculations. Thus, confidence in the results of advanced lattice codes can be obtained through the use of a statistical uncertainty analysis.
By comparing the relative uncertainties obtained from the two different NDLs, a large difference was observed. It can be concluded that large differences in the computed covariances, such as the ones existing between JENDL-4 and ENDF/B-VII.1 for the dominant reaction, are the cause of the biases in the uncertainty results. This fact was supported by comparing the spread of the different samples of that microscopic cross-section; much larger spreads were obtained at the thermal and resonant regions when the sampling was based on JENDL-4 than when it was based on ENDF/B-VII.1 data.
The results obtained in this work are important because they demonstrate that it is feasible to statistically perturb and propagate basic uncertainty data through lattice calculations with the
current computational technology. This is also the first step to develop an integral statistical uncertainty methodology for nuclear reactor predictions using advanced models, since the lattice code
outputs are to be used as inputs to the core simulators. Further studies may include a global and nonparametric sensitivity analysis, where the correlation between the different microscopic and
macroscopic cross-sections can be assessed. Also, geometrical uncertainties, as well as state-variable uncertainties can be included.
Uncertainty analysis applied to lattice calculations is very important for trusting LWR core designs, because the computation of the homogenized and energy-collapsed macroscopic cross-sections is the first step in the modeling of LWRs. Therefore, confidence in the subsequent calculation of the effective neutron multiplication factor is directly tied to the computed uncertainties of the lattice code output parameters.
: Fast fission factor
: Resonance escape probability
: Thermal utilization factor
: Thermal fission factor
: Removal macroscopic cross-section (1/cm)
: Fast down-scattering macroscopic cross-section (1/cm)
: Thermal upscattering macroscopic cross-section (1/cm)
: Fast absorption macroscopic cross-section (1/cm)
: Thermal absorption macroscopic cross-section (1/cm)
: Fast Nu-sigma-fission macroscopic cross-section (1/cm)
: Thermal Nu-sigma-fission macroscopic cross-section (1/cm)
: Scalar neutron flux at the energy group (neutrons/)
: Transport-corrected total macroscopic cross-section at the energy group (1/cm)
: Transport-corrected scattering macroscopic cross-section at the energy group (1/cm)
: scattering matrix at the -inelastic or elastic reaction, from the -nuclide from energy group to
: Capture microscopic cross-section, from the -nuclide and at the energy group
: Fission microscopic cross-section, from the -nuclide and at the energy group
: Nu-bar at the energy group
: Mu-bar at the energy group
: Normalized fission spectrum at the energy group .
1. R. E. MacFarlane and A. C. Kahler, “Methods for processing ENDF/B-VII with NJOY,” Nuclear Data Sheets, vol. 111, no. 12, pp. 2739–2890, 2010.
2. A. Hébert, “A nuclear data library production system for advanced lattice codes,” in International Conference on Nuclear Data for Science and Technology, pp. 701–704, 2007.
3. G. Marleau and A. Hébert, “A user guide for DRAGON version 4,” Institute of Nuclear Energy Internal Report IGE-294, École Polytechnique de Montréal, 2009.
4. K. Shibata, O. Iwamoto, T. Nakagawa et al., “JENDL-4.0: a new library for nuclear science and engineering,” Journal of Nuclear Science and Technology, vol. 48, no. 1, pp. 1–30, 2011.
5. M. B. Chadwick, M. Herman, P. Oblozinsky et al., “ENDF/B-VII.1 nuclear data for science and technology: cross sections,” Nuclear Data Sheets, vol. 112, no. 12, pp. 2887–2996, 2011.
6. K. Ivanov, et al., “Benchmark for Uncertainty Analysis in Modeling (UAM) for Design, Operation and Safety Analysis of LWRs vol. I: Specification and Support Data for the Neutronic Cases (Phase I)
,” NEA/NSC/DOC(2011), Version 2, 2011.
7. J. C. Helton and F. J. Davis, “Latin hypercube sampling and the propagation of uncertainty in analyses of complex systems,” Reliability Engineering and System Safety, vol. 81, no. 1, pp. 23–69, 2003.
8. A. Hernandez-Solis, C. Ekberg, A. Ö. Jensen, et al., “Statistical uncertainty analyses of void fraction predictions using two different sampling strategies: Latin hypercube and random sampling,”
in Proceedings of the 18th International Conference on Nuclear Engineering (ICONE '10), Xi’an, China, May 2010.
9. A. Hernandez-Solis, Uncertainty and sensitivity analysis applied to LWR neutronic and thermal-hydraulic calculations [Ph.D. thesis], Chalmers University of Technology, 2012.
10. K. Kosako and N. Yamano, “Preparation of a Covariance Processing System for the Evaluated Nuclear Data File JENDL (III),” JNC TJ9440 99-003, 1999.
11. G. Chiba and M. Ishikawa, “Revision and application of the covariance data processing code, ERRORJ,” in International Conference on Nuclear Data for Science and Technology, pp. 468–471, October
2004.
12. H. Glaeser, “GRS method for uncertainty and sensitivity evaluation of code results and applications,” Science and Technology of Nuclear Installations, vol. 2008, Article ID 798901, 6 pages, 2008.
13. S. S. Wilks, “Determination of sample sizes for setting tolerance limits,” Annals of Mathematical Statistics, vol. 12, no. 1, pp. 91–96, 1941.
14. S. S. Wilks, “Statistical prediction with special reference to the problem of tolerance limits,” Annals of Mathematical Statistics, vol. 13, no. 4, pp. 400–409, 1942.
15. S. S. Wilks, Mathematical Statistics, Wiley, New York, NY, USA, 1962.
16. A. Wald, “An extension of Wilks’ method for setting tolerance limits,” Annals of Mathematical Statistics, vol. 14, pp. 44–55, 1943.
17. A. Wald and J. Wolfowitz, “Tolerance limits for a normal distribution,” Annals of Mathematical Statistics, vol. 17, pp. 208–215, 1946.
18. A. Guba, M. Makai, and L. Pál, “Statistical aspects of best estimate method-I,” Reliability Engineering and System Safety, vol. 80, no. 3, pp. 217–232, 2003.
19. G. E. Noether, Elements of Nonparametric Statistics, Wiley, New York, NY, USA, 1967.
20. H. Scheffe and J. W. Tukey, “A formula for sample sizes for population tolerance limits,” Annals of Mathematical Statistics, vol. 15, no. 2, p. 217, 1944.
21. H. Ackermann and K. Abt, “Designing the sample size for non-parametric multivariate tolerance regions,” Biometrical Journal, vol. 26, no. 7, pp. 723–734, 1984.
22. A. Matala, “Sample size requirement for Monte Carlo simulations using Latin hypercube sampling,” Internal Report 60968, Department of Engineering Physics and Mathematics, Helsinki University of Technology, 2008.
23. A. Hernandez-Solis, C. Ekberg, C. Demaziere, A. Ödegård Jensen, and U. Bredolt, “Uncertainty and sensitivity analyses as a validation tool for BWR bundle thermal-hydraulic predictions,” Nuclear
Engineering and Design, vol. 241, no. 9, pp. 3697–3706, 2011.
24. M. Stein, “Large sample properties of simulations using Latin Hypercube Sampling,” Technometrics, vol. 29, no. 2, pp. 143–151, 1987.
25. W. M. Stacey, Nuclear Reactor Physics, Wiley-VCH, Weinheim, Germany, 2004.
26. R. L. Iman and W. J. Conover, “A distribution-free approach to inducing rank correlation among input variables,” Communication in Statistics-Simulation and Computation B, vol. 11, no. 3, pp.
311–334, 1982.
27. OECD/NEA Databank, “ERRORJ, Multigroup covariance matrices generation from ENDF-6 format,” Package No. NEA-1676/07, 2010.
28. M. Ball, Uncertainty analysis in lattice reactor physics calculations [Ph.D. thesis], McMaster University, 2012.
29. M. Pusa, “Incorporating sensitivity and uncertainty analysis to a lattice physics code with application to CASMO-4,” Annals of Nuclear Energy, vol. 40, pp. 153–162, 2012.
Write a polynomial, Make up a rational function, & z varies jointly as u
March 24th 2009, 08:52 AM #1
Mar 2009
Write a polynomial, Make up a rational function, & z varies jointly as u
Please help me with the following 3 questions:
- Write a polynomial of degree 4 that has exactly 3 distinct x-intercepts and whose graph rises to the left and right.
- Make up a rational function f (x) that has vertical asymptotes at x = 2 and x = −1, a horizontal asymptote at y = 1, a y-intercept at (0, 2), and x-intercept at 4.
- z varies jointly as u and the cube of v and inversely as the square of w; z = 9 when u = 4, v = 3, and w = 2. Find w when u = 27, v = 2, and z = 8.
Thank you so kindly for the help,
If there are three x-intercepts for a degree-4 polynomial, what must be true of the multiplicity of one of the zeroes?
If the polynomial is "up" on both ends, is the leading coefficient positive or negative?
Use this information to invent any polynomial you like that fits the requirements: three linear factors, one of which is repeated; and a positive leading coefficient.
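Following these hints, one concrete candidate (by no means the only one) is p(x) = x²(x − 1)(x + 1): degree 4, three distinct x-intercepts at −1, 0 (a double root, giving the repeated factor), and a positive leading coefficient so the graph rises to the left and right. A quick numeric sanity check:

```python
def p(x):
    # one candidate: x^2 (x - 1)(x + 1) = x^4 - x^2
    return x**2 * (x - 1) * (x + 1)

# exactly three distinct x-intercepts: -1, 0 (double root), and 1
assert p(-1) == p(0) == p(1) == 0
# positive leading coefficient: the graph rises on both ends
assert p(-10) > 0 and p(10) > 0
```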
If there are vertical asymptotes at x = 2 and at x = -1, what two factors must be in the denominator?
If the horizontal asymptote is at y = 1, then how must the degrees of the numerator and denominator compare, and how must their leading coefficients compare?
If y = 0 when x = 4, what must be a factor of the numerator?
If y = 2 when x = 0, then what must be the constant term of the numerator?
Use this to create a rational function which fits the requirements.
To learn how to set up and solve variation equations, try here.
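For the third question, the hinted setup is z = k·u·v³/w² for some constant k. The short script below is one sketch of the standard method — solve for k from the first data point, then for w:

```python
def solve_variation():
    # z varies jointly as u and v^3 and inversely as w^2: z = k*u*v**3 / w**2
    z1, u1, v1, w1 = 9, 4, 3, 2
    k = z1 * w1**2 / (u1 * v1**3)    # k = 9*4 / (4*27) = 1/3
    # now solve z2 = k*u2*v2**3 / w**2 for w
    z2, u2, v2 = 8, 27, 2
    w_squared = k * u2 * v2**3 / z2  # = (1/3)*27*8 / 8 = 9
    return w_squared ** 0.5
```

This gives w = 3.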
March 24th 2009, 09:42 AM #2
MHF Contributor
Mar 2007
Critical Points - Problem 3
Critical points of a function are where the derivative is 0 or undefined. To find critical points of a function, first calculate the derivative. Remember that critical points must be in the domain of
the function. So if x is undefined in f(x), it cannot be a critical point, but if x is defined in f(x) but undefined in f'(x), it is a critical point.
On a graph, critical points can mean one of two things: that there is a horizontal tangent at that point (if f'(x)=0 at that x), or there is a vertical tangent at that point (if f'(x) is undefined at
that x).
We are talking about finding critical points for a function. Now here’s another example. Consider h(x) equals x to the 2/3 times the quantity x minus 4. And I’ve graphed that function here. Find the
critical points of h and explain their geometric significance.
Let’s start by recalling that a critical point, is a point with a derivative equal to 0 or is undefined. So I want to look at the derivative of this function. And this function is a product. So I’ll
use the product rule, the product rule on this.
So it's first times the derivative of the second. And the derivative of the second is just 1, plus the second times the derivative of the first. I’ll use the power rule on x to the 2/3. So it’s going
to be 2/3x to the 2/3 minus 1. And 2/3 minus 1 is -1/3. So this is 2/3x to the -1/3. Now, let me simplify this a little bit.
I have x to the 2/3 here, plus, now this x to the -1/3 it means 1 over x to the 1/3. I have a 3 in the denominator as well. So I’m going to show you what’s in the denominator here, there’s 3 and x to
the 1/3. And in the numerator there’s a 2 and x minus 4. Now, in order to find the critical points, I need to factor this completely. But I also have to get a single fraction.
I have two different expressions here; I need to combine them into a single one. The way to do that is to get a common denominator. So I need to get this guy to have the same denominator as this guy.
This denominator is 3x to the 1/3. So I multiply top and bottom of this. Think of this as x to the 2/3 over 1. I multiply the top and bottom by 3x to the 1/3. So x to the 2/3 times 3x to the 1/3 over
3x to the 1/3. And that’s plus, remember over here I have 2x minus 8, over the same denominator; 3x to the 1/3.
Now when I multiply these here, I’m going to get x to the 2/3 times x to the 1/3. You add the exponents in a case like that. You get x to the 1, 3x to the 1. So this is 3x over 3x to the 1/3 that’s a
3, plus 2x minus 8 over the same denominator. And now I’m ready to combine these two. 3x plus 2x minus 8 is 5x minus 8. Over 3x to the 1/3.
Critical points. I have to look for two things; first where is the derivative equal to 0, and where is it undefined? And this time there is a place where the derivative is undefined. The derivative
is undefined at x equals 0. I need to do a quick reality check.
Critical points have to be in the domain of a function. So x equals 0 will not count as a critical point, if it's not in the domain of the original function. It is in the domain of the original
function. X equals 0 works fine here. So x equals 0 is in the domain and the derivative is undefined there.
So that counts as a critical point. So let me just make the note. So this is h(x), h'(0) is undefined. So x equals 0 is a critical point and I’ll abbreviate that c.p. And now looking back here again,
I also need to find whether the derivative equals 0. The derivative equals 0, when the numerator equals 0.
So h'(x) equals o when 5x minus 8 equals 0, and that happens when x equals 8/5. And so this is another critical point. Let me just write that another critical point. And so my two critical points are
x equals 0 when x equals 8/5. I was also asked to explain the geometric significance of this critical points. So I need to go back to the graph and show you that.
If you take a look at this graph, you can figure out what point we are talking about here. This is the point at x equals 0. Here the significance isn’t the same as before. Before we had a horizontal
tangent at our critical point here. Here this would represent, because the derivative is undefined here, a vertical tangent. So imagine if you are trying to draw a tangent, a line that went the same
direction as the curve, the best you could do is draw a vertical line here.
And that means that these slopes actually do go vertical when they meet at the point 0,0. So this blue line represents a vertical tangent. The other critical point is x equals 8/5, which I’m guessing
is the x coordinate of this point 8/5. Because at that point, the derivative was 0. So at this point we do have a horizontal tangent.
So in some cases the critical point is going to represent a place where the graph has a horizontal tangent. But in some cases, it will represent a vertical tangent.
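The worked example in this lesson can be verified numerically. The Python sketch below (my addition, not part of the lesson) uses the closed-form derivative derived above, h'(x) = (5x − 8)/(3x^(1/3)), confirms the horizontal tangent at x = 8/5, and cross-checks the formula against a finite difference; it stays at x > 0 to avoid fractional powers of negative numbers in floating point.

```python
def h(x):
    # h(x) = x^(2/3) * (x - 4); valid as written for x >= 0
    return x ** (2 / 3) * (x - 4)

def h_prime(x):
    # closed form from the lesson: (5x - 8) / (3 x^(1/3)), for x > 0
    return (5 * x - 8) / (3 * x ** (1 / 3))

# critical point with a horizontal tangent: x = 8/5
assert abs(h_prime(8 / 5)) < 1e-12
# finite-difference cross-check at an ordinary point x = 2
x0, dx = 2.0, 1e-6
numeric = (h(x0 + dx) - h(x0 - dx)) / (2 * dx)
assert abs(numeric - h_prime(x0)) < 1e-5
```

The other critical point, x = 0, shows up as the derivative formula dividing by zero — the vertical tangent.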
[FOM] Mathematical conceptualism
Nik Weaver nweaver at dax.wustl.edu
Mon Sep 19 18:10:59 EDT 2005
I had expected that my previous message, together with the papers
that I posted to the ArXiv, would spark some discussion on this
list. I would be disappointed to think that because I am unknown
in foundational circles (my specialty is functional analysis) people
in this area consider these papers not worth reading. Perhaps
everyone already has a favorite foundational stance and is not
interested in looking at work that supports an opposing view? If
so, this would make my readership very small, as I represent a view
that I gather is almost universally rejected. (Jeremy Avigad says
that "it is an awkward fact that there seems to be no strong case
that predicativity is a notion worthy of our attention".)
To the contrary, I think there is a strong case for predicativism
given the natural numbers, or as I prefer to call it, mathematical
The case for predicativism: (1) the idea that there exists an abstract,
metaphysical, Platonic world of sets is nonsensical and is discredited
by the set-theoretic paradoxes; (2) once this is accepted, any domain in
which set-theoretic reasoning is to take place must be in some sense
constructed; (3) it would be absurd to allow such a construction to
be circular.
The case for "given the natural numbers": (1) the scope of mathematical
reasoning is the realm of logical possibility; (2) once this is accepted,
a necessary and sufficient condition that a putative construction be
considered legitimate is that we be able to concretely imagine how
it could be carried out; (3) we have such a concrete picture for
constructions of length omega, but not for constructions carried out
along larger cardinals.
These arguments are made in greater detail in my paper "Mathematical
conceptualism".  (I also give reasons there for the terminological choice.)
It is my impression that predicativism is rejected not because of any
logical defects in the arguments summarized above, but rather because
one has been told that it fails to support important mainstream results
(e.g., Kruskal's theorem) and that it is limited by a rather small
ordinal, Gamma_0. However, neither of these statments is correct, as
I demonstrate at some length in my paper "Predicativity beyond Gamma_0".
As the latter assertion has been conventional wisdom for the past forty
years, and I am claiming to have decisively refuted it, I should think
that this paper would be of interest to a number of people on this list.
I have also written a paper, "Analysis in J_2", in which I show how
core mathematics can be straightforwardly developed in a predicatively
valid system which is faithful to classical intuition and avoids some
of the coding machinery involved in subsystems of second order
arithmetic. All three papers are available at
Nik Weaver
Math Dept.
Washington University
St. Louis, MO 63130 USA
nweaver at math.wustl.edu
More information about the FOM mailing list | {"url":"http://www.cs.nyu.edu/pipermail/fom/2005-September/009083.html","timestamp":"2014-04-21T10:53:41Z","content_type":null,"content_length":"5429","record_id":"<urn:uuid:7eb9fc33-a085-47fe-ad7e-6c1d66c1505c>","cc-path":"CC-MAIN-2014-15/segments/1397609539705.42/warc/CC-MAIN-20140416005219-00121-ip-10-147-4-33.ec2.internal.warc.gz"} |
Roll ‘em out…
December 31st, 2006
The discovery of new planets is rarely clear cut. No sooner does a new world (Vesta, Neptune, Pluto) emerge, than the wrangling for the credit or the naming rights starts. And it’s usually possible
to find a reason why the prediction (or even the planet itself) wasn’t really valid in the first place.
The trans-Uranian planet predicted by Urbain J. J. Le Verrier and John Couch Adams happened to coincide quite closely with Neptune’s actual sky position in September 1846, but the orbital periods of
their models were too long by more than 50 years. Le Verrier’s predicted planetary mass, furthermore, was too large by nearly a factor of three, and Adams’ mass prediction was off by close to a
factor of two.
In England, following the announcement of Neptune’s discovery, and with the glory flowing to Le Verrier in particular and France in general, the Rev. James Challis and the Astronomer Royal George
Airy were denounced for not doing enough to follow up Adams’ predictions, “Oh! curse their narcotic Souls!” wrote Adam Sedgwick, professor of geology at Trinity College.
Nowadays, with the planet count up over 200, the prediction and discovery of a new world doesn’t quite carry the same freight as it did in 1846. No editorial cartoons, no Orders of Empire, and no
extravagant public praise to the discoverer, such as that heaped by Camille Flammarion on Le Verrier, who wrote, “This scientist, this genius, has discovered a star with the tip of his pen, without
other instrument than the strength of his calculations alone!”
Nevertheless, I don’t want to be shoehorned into the ranks of the “narcotic souls” as a result of not properly encouraging the bringing to light of any potential planetary discoveries in the systemic
catalog of real stellar radial velocity data sets. As of Dec. 30th, 2006, over 3,680 orbital fits have been uploaded to the systemic backend. It’s definitely time to start sifting carefully through
the results that the 518 registered systemic users have produced. Over the next few weeks we’ll be introducing a variety of analysis and cataloging tools that will make this job easier, but there are
some interesting questions that can be answered right away. Foremost among these is: what are the most credible (previously unannounced) planets in the database?
The backend uses the so-called reduced chi-square statistic as a convenient metric for rank-ordering fits:

$$\chi^2_{\mathrm{red}} = \frac{1}{N-M}\sum_{i=1}^{N}\frac{\left(v_{\mathrm{obs},i}-v_{\mathrm{model},i}\right)^{2}}{\sigma_i^{2}}$$
In the above expression, N is the number of radial velocity data points, and M is the number of activated fitting parameters. As a rule of thumb, a reduced chi-square value near unity is indicative
of a “good” fit to the data, but this rule is not exact, and should hence be applied with caution. The observational errors likely depart from a normal distribution, and more importantly, the
tabulated errors don’t incorporate the astrophysical radial velocity noise produced by activity on the parent star. Furthermore, it’s almost always possible to lower the reduced chi-square statistic
by introducing an extra low-mass planet.
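In code, the statistic described here is straightforward to compute; the sketch below is a generic implementation (the variable names are mine, not the console's):

```python
def reduced_chi_square(v_obs, v_model, sigma, n_params):
    """Reduced chi-square of a fit: chi^2 divided by (N - M) degrees of freedom,
    where N is the number of data points and M the number of fitted parameters."""
    n = len(v_obs)
    chi2 = sum(((o - m) / s) ** 2 for o, m, s in zip(v_obs, v_model, sigma))
    return chi2 / (n - n_params)
```

When the residuals are comparable to the quoted errors, the value comes out near unity — subject to all the caveats above.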
Eugenio recently implemented the downloadable console‘s F-test, which can provide help in evaluating whether an additional planet is warranted. The F-test is applied to two saved fits and returns a
probability that the two fits are statistically identical. As an example, pull up the HD 69830 data set and obtain the best two-planet fit that includes the 8.666-day and 31-day planets. Save
this fit to disk. Next, add the 200-day outer planet and save the resulting 3-planet fit to disk (using a separate name). Clicking on the console’s F-test button allows the F-test to be computed
using the two saved fits.
In the case of HD 69830, there’s a 1.7% probability that the 2-planet fit and the 3-planet fit are statistically identical. This low probability indicates that the third planet is providing a
significant improvement to the characterization of the data. It’s likely really out there orbiting the star.
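For reference, one common form of the nested-model F statistic compares the chi-square improvement per added parameter against the richer model's chi-square per remaining degree of freedom. Whether the console computes exactly this variant is my assumption; the probability quoted above would then come from the survival function of the F(ΔM, N − M₂) distribution at this statistic (e.g. scipy.stats.f.sf), omitted here to keep the sketch dependency-free.

```python
def f_statistic(chi2_simple, m_simple, chi2_complex, m_complex, n_data):
    """F statistic for nested fits: chi-square improvement per added parameter,
    scaled by the complex model's chi-square per remaining degree of freedom."""
    numerator = (chi2_simple - chi2_complex) / (m_complex - m_simple)
    denominator = chi2_complex / (n_data - m_complex)
    return numerator / denominator
```

A large F (small tail probability) means the extra planet buys a real improvement, not just extra parameters.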
So here’s the plan: Let’s comb through the systemic “Real Star” catalog, and find the systems that (1) contain an unannounced planet(s) in addition to the previously announced members of the system
(see the exoplanet.eu catalog for the up-to-date list). (2) have a F-test probability of less than 2% of being statistically identical, and (3) are dynamically stable for at least 10,000 years. If
you find a system that meets these requirements, post your findings to the comments section of this post.
Disclaimer: this exercise is for the satisfaction of obtaining a better understanding of the planetary census, and also for fun. When the planets do turn up, I’m going to sit back with a bottle full
of bub and enjoy any scrambles for priority from a safe distance.
Happy New Year, y’all!
1. January 16th, 2007 at 12:18 | #1
Posted on Systemic 1/5/07. HD50499 2 planets classified stable by the BOT. Commented F-test prob = 0.0012 that 2 planet fit is better than the published HD50499b. Stable for 10000 years by
Systemic stability checker. Is this the information you need?
2. January 16th, 2007 at 20:39 | #2
Hi Bruce, Absolutely!
3. January 16th, 2007 at 21:29 | #3
Here are a few more:
Posted on Systemic 1/5/07. HD134987_B06 two planet fit. Consists of published HD134987b plus a 9720 day planet. F-test result = 2.4468, prob = 0.0000, stability tested for 10,000 years.
Posted on Systemic 1/6/07. HD142415 single planet improvement over published HD142415b. F-test result = 1.9287, Prob = 0.0001. No stability test run since it is a single planet. Thiessen posted a
nearly identical fit on 9/1/06. This fit has the same period as the published, but mass and eccentricity differ significantly.
Posted on Systemic 1/8/07. HD177830_B06K two planet fit. Consists of HD177830b plus another planet with a 111 day period. F-test result 3.0648, prob = 0.0001, stability tested for 22,000 years.
Posted on Systemic 1/8/07. HD183263_B06K two planet fit. Consists of HD183263b plus another planet with a 3896 day period. F-test result 10.6355, prob = 0.0000. Stability tested for 10,000 years.
Posted on Systemic 1/8/07. HD183263 two planet fit. Consists of HD183263b plus another planet with a 3896 day period. Agrees well with HD183263_B06K. F-test result = 5.4317, prob = 0.0000,
stability tested for 10,000 years.
4. January 20th, 2007 at 01:13 | #4
HD19994 – 2 planet fit.
Published HD19994b plus a planet with a period of 35 days. F-test result = 2.0992, probability = 0.0115. Stable for 10,000 years. A fit including a planet with a period of ~35 days was first
uploaded by eugenio on 6/26/06.
The criteria (2) for planet finding using the Systemic F-test needs to be restated to eliminate confusion:
“have a F-test probability of less than 2% of being statistically identical”
means the F-test result should be large enough so that the probability is 0.02 or smaller. Correct?
5. January 23rd, 2007 at 06:38 | #5
Hi Bruce,
Very interesting results, and sorry that I didn’t notice this earlier. We’ve been getting so much comment spam that I sometimes miss comments with actual content!
Would it be okay if I write up a couple of your finds for a front-end post? (with full credit to you, of course!)
You’re correct wrt to the statement on the F-test. The F-test result should be large enough so that the probability is 0.02 or smaller.
6. January 23rd, 2007 at 19:50 | #6
Sure, write up anything you think interesting. May I suggest that the team aspect should be emphasized; I am just one of many users. And without the Systemic development, there would be little to
7. January 27th, 2007 at 00:17 | #7
Published planet HD208487b: Period 123 days, Mass 0.45 Jupiter masses, Eccentricity 0.32.
This fit:
Planet 1: Period 129.436 days, Mass 0.4601 Jupiter masses, Eccentricity 0.2840.
Planet 2: Period 873.897 days, Mass 0.4926 Jupiter masses, Eccentricity 0.2635.
Statistics: Chi^2 = 1.0556
F-test result = 3.8425, probability = 0.0001
Stablity tested for 10,000 years.
Systemic postings of a ~873 day planet:
mikevald: 1013 days on 2006-09-04, 20:22:53
dstew: 997 days on 2006-09-07, 13:24:27
greg: 980 days on 2006-09-11, 13:27:00
andy: 1000 days on 2006-09-11, 13:39:59
bruce01: 873 days on 2007-01-26, 15:40:42
Published planet HD208487b: Period 123 days, Mass 0.45 Jupiter masses, Eccentricity 0.32.
This fit:
Planet 1: Period 129.338 days, Mass 0.4857 Jupiter masses, Eccentricity 0.2025.
Planet 2: Period 1006.670 days, Mass 0.7637 Jupiter masses, Eccentricity 0.4612.
Statistics: Chi^2 = 0.6163
F-test result = 4,7131, probability = 0.0000
Stablity tested for 17,000 years.
Systemic postings of a ~1006 day planet:
eugenio: 959 days on 2006-07-12, 22:16:43
mikevald: 1006 days on 2006-09-04, 20:22:16
mikehall: 1006 days on 2006-09-26, 13:42:59
irpoorman: 999 days on 2006-10-23, 03:02:03
bruce01: 1006 days on 2007-01-26, 15:41:08
8. January 28th, 2007 at 12:00 | #8
Published planet HD216770b: Period 118.45 days, Mass 0.68 Jupiter masses, Eccentricity 0.37.
This fit:
Planet 1: Period 118.169 days, Mass 0.6491 Jupiter masses, Eccentricity 0.3887.
Planet 2: Period 12.456 days, Mass 0.1886 Jupiter masses, Eccentricity 0.4741.
Statistics: Chi^2 = 0.6383
F-test result = 3.7618, probability = 0.0116
Stablity tested for 13,900 years.
Systemic postings of a ~12.4 day planet:
goldrake on 2006-09-19, 13:17:26
glenn on 2006-09-29, 22:21:01
EricFDiaz on 2006-09-30, 10:43:55
flanker on 2006-10-03, 11:30:07
EricFDiaz on 2006-11-11, 14:06:09
EricFDiaz on 2006-11-11, 14:06:09
flanker on 2006-11-12, 01:21:13
flanker on 2006-11-12, 01:24:04
bruce01 on 2007-01-27, 06:14:25
9. January 29th, 2007 at 13:34 | #9
Single planet fit:
Published planet HD222582b: Period 572 days, Mass 5.11 Jupiter masses, Eccentricity 0.76.
Systemic single planet fit to this data set:
Planet 1: Period 572.318 days, Mass 7.8547 Jupiter masses, Eccentricity 0.7346.
Statistics: Chi^2 = 1.7991
F-test result = 21.5919, probability = 0.0000
No stability test.
Systemic postings of a 572 day planet with mass ~7.85:
andy: 2006-08-06, 14:29:15
markk: 2006-09-27, 06:23:16
flanker: 2006-11-11, 10:32:59
khaslag: 2006-11-12, 11:43:10
Two planet fit:
Published planet HD222582b: Period 572 days, Mass 5.11 Jupiter masses, Eccentricity 0.76.
Planet 1: Period 572.727 days, Mass 7.8334 Jupiter masses, Eccentricity 0.7290.
Planet 2: Period 15.444 days, Mass 0.0513 Jupiter masses, Eccentricity 0.5632.
Statistics: Chi^2 = 1.2187
F-test result = 28.2239, probability = 0.0000
Stablity tested for 17,200 years.
Systemic postings of a ~15.4 day planet:
flanker: 2006-11-11, 10:32:59
bruce01: 2006-12-30, 16:14:41
10. February 8th, 2007 at 08:11 | #10
GJ 876: 4-planets fit by EricFDiaz
F-Test 1.0% probability.
New Planet Data:
Period: 361.64 days
Mass: 0.06 MJup
Eccentricity: 0.09
Omega: 195.28°
11. February 11th, 2007 at 18:56 | #11
My fit for GJ 876 is a bad fit. I did not understand how to properly interpret the F-test at the time, which led me to erroneously conclude that I had discovered a new planet. I was wrong. The
actual F-test probability for this fit is 82%, meaning that there is no significant statistical difference between it and the 3-planets fit. I have to do this fit over again starting from
scratch. I apologize for any confusion that I may have caused. Sincerely, Eric F. Diaz
Recursively counting numbers with fixed bit counts
I ran across this problem in a reddit side-bar job-ad, and was intrigued by the task (description paraphrased to decrease googleability):
Write a function
uint64_t bitsquares(uint64_t a, uint64_t b);
such that it return the number of integers in [a,b] that have a square number of bits set to 1. Your function should run in less than O(b-a).
I think I see how to do it in something like logarithmic time. Here’s how:
First off, we notice that we can list all the squares between 0 and 64: these are 0, 1, 4, 9, 16, 25, 36, 49, and 64. The function I will propose will run through a binary tree of depth 64, shortcutting
through branches whenever it can. In fact; changing implementation language completely, I wonder if I cannot even write it comprehensively in Haskell.
The key insight I had was that whenever you try to find the number of numbers with a bitcount matching some element of some list within the bounds of 0b0000…0000 and 0b000…01111…11, then it reduces
to a simple binomial coefficient — n choose k gives the number of numbers with k bits set among the n last. Furthermore, we can reduce the total size of the problem by removing a matching prefix from
the two numbers we test from.
Hence, we trace how many bits off the top agree between the two numbers. We count the set bits among these, subtract them from each representative in the list of squares, giving us the counts we need
to hit in the remainder.
Write a’ for a with the agreeing prefix removed, and similarly for b’. Then the total count is the count for the reduced things from a’ to 0b000…01…111 plus the count for the reduced things from 0 to
b’. The reduction count for b’ needs to be 1 larger than the one for a’ since in one case, we are working with the prefix before the varying bit increases, and in the other, we work with the prefix
after the varying bit increases — the latter count is not really from 0 to b’, but this is a useful proxy for the count from 0b0000…010…000 to b’ with the additional high bit set.
In code, I managed to boil this down to:
import Data.Word
import Data.Bits
import Data.List (elemIndices)
bitsquare :: Word64 -> Word64 -> Word64bitsquare a b = bitcountin a b squares -- # integers in [a,b] with square # of 1
squares = [1,4,9,16,25,36,49,64] :: [Word64]
allones = [fromIntegral (2^k - 1) | k <- [1..64]]
choose n 0 = 1
choose 0 k = 0
choose n k = (choose (n-1) (k-1)) * n `div` k
popCount :: Word64 -> Word64
popCount w = sum [1 | x <- [0..63], testBit w x]
-- # integers in [a,b] with 1-counts in counts
bitcountin :: Word64 -> Word64 -> [Word64] -> Word64
bitcountin a b counts
  | a > b = 0
  | a == b = if popCount b `elem` counts then 1 else 0
  | (a == 0) && (b `elem` allones) = sum [choose n k | n <- [popCount b], k <- counts]
  | otherwise = (bitcountin a' low [c-lobits | c <- counts, c >= lobits]) +
                (bitcountin hi b' [c-hibits | c <- counts, c >= hibits])
  where
    agreements = [(testBit a n) == (testBit b n) | n <- [0..63]]
    agreeI = elemIndices False agreements
    prefixIndex = last agreeI
    prefixCount = sum [1 | x <- [prefixIndex..63], testBit a x]
    a' = a .&. (2^prefixIndex - 1)
    b' = b .&. (2^prefixIndex - 1)
    low = 2^prefixIndex - 1
    hi = 0
    lobits = prefixCount
    hibits = prefixCount + 1
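The full-range guard — when a is 0 and b is all ones — is where the binomial coefficients do the work. A brute-force check of that identity for small widths (a Python sketch; the function names are mine):

```python
# Over a full range [0, 2**n), the number of integers whose popcount
# lies in `counts` should equal the sum of C(n, k) for k in counts.
from math import comb

def brute(n, counts):
    return sum(1 for x in range(2 ** n) if bin(x).count("1") in counts)

square_counts = [1, 4, 9]          # the square popcounts reachable for n <= 11
for n in range(1, 12):
    assert brute(n, square_counts) == sum(comb(n, k) for k in square_counts)
print(brute(10, square_counts))    # 230 = C(10,1) + C(10,4) + C(10,9)
```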
• James Cook
• May 14th, 2012
• 17:49
For what it’s worth, it’s much simpler if you start off by defining a single-variable version of the problem (i.e., calculate bitSquareLTE n = bitsquare 0 n), and then just define the range version by subtraction.
An implementation of that approach is here: https://github.com/mokus0/junkbox/blob/master/Haskell/Math/BitSquares.hs
• Sam
• May 14th, 2012
• 18:00
I solved this in C instead. It is a bit wonky since you need to be careful to avoid integer overflow when you compute binomial coefficients (it’s better to just memoize Pascal’s triangle), but gcc’s
__builtin_clzl() comes in handy for the actual puzzle. You can simplify the algorithm at the cost of slightly increased runtime by defining the function `bitCount 0 b counts’ instead and then
defining the full function by subtraction.
• Noah Easterly
• May 14th, 2012
• 18:41
popCount is already defined in Data.Bits.
• Sam
• May 14th, 2012
• 19:41
P.S. I know that redefining bitCount to subtract is cheating, because the problem asks for an O(log(b-a)) solution. However, ‘bitCount a a k’ tells you whether the population count of a is k or
not; I don’t think that this can be done in constant time if k and a go to infinity.
• May 15th, 2012
• 18:38
@Noah: I know; however, it wasn’t in the version of the Haskell Platform I was working on. Hence my inclusion of the code.
Everyone else: Sure, I can see how the one-parameter version simplifies thinking about the problem. But it seems a bit wasteful to traverse the entire (underlying) bit decision tree when you only
need to look at the span between the numbers given… | {"url":"http://blog.mikael.johanssons.org/archive/2012/05/recursively-counting-numbers-with-fixed-bit-counts/","timestamp":"2014-04-18T23:15:01Z","content_type":null,"content_length":"41102","record_id":"<urn:uuid:344e313a-3fde-4252-a526-4205fa3faf87>","cc-path":"CC-MAIN-2014-15/segments/1397609535535.6/warc/CC-MAIN-20140416005215-00336-ip-10-147-4-33.ec2.internal.warc.gz"} |
Nokesville Math Tutor
Find a Nokesville Math Tutor
...I am a recently retired Federal government employee, who worked in Information Technology for over 35 years. I am currently working as a substitute teacher for Fairfax County Public Schools,
for secondary school and high school math and computer/information technology; and, as a tutor for mathem...
7 Subjects: including algebra 1, algebra 2, geometry, prealgebra
...As well I have also spent quite a few years tutoring at my former elementary school. There I tutored in math and technology for part of the robotics program. When they encountered a certain
problem that they could not solve, I gave them the tools necessary to figure out the problem without solving it for them.
7 Subjects: including algebra 2, geometry, precalculus, trigonometry
...I believe that there is a way to learn math for everyone and I look forward to finding out which way works best for you. Even if you just need a little reminder of math you used to know, I'm
happy to help you remember the fundamentals. I feel very strongly about helping students succeed in math be...
22 Subjects: including linear algebra, logic, ACT Math, GRE
...My master's degree in Educational Administration has helped me understand education on a broader range of levels. A in College Advanced Grammar and Composition. Degree in English Education.
Extensive experience tutoring students in grammar.
16 Subjects: including prealgebra, Spanish, reading, English
...Imagine, hiring a tutor that cares about your success and the money in your pocket! Hello, I am Shannon. I am a graduate of Mississippi State University with a Bachelor's of Science in
Geoscience with a concentration in Professional Meteorology.
13 Subjects: including calculus, spelling, algebra 1, algebra 2 | {"url":"http://www.purplemath.com/nokesville_math_tutors.php","timestamp":"2014-04-20T11:24:01Z","content_type":null,"content_length":"23728","record_id":"<urn:uuid:08abf74e-5093-43b8-9576-04762e8e3949>","cc-path":"CC-MAIN-2014-15/segments/1398223206770.7/warc/CC-MAIN-20140423032006-00529-ip-10-147-4-33.ec2.internal.warc.gz"} |
Find a vector3 that is perpendicular to another vector3 and (0,1,0)?
July 11th 2012, 07:35 AM
Find a vector3 that is perpendicular to another vector3 and (0,1,0)?
Hi everyone,
recently I was trying to solve this problem:
Attachment 24268
If I know 3D position vectors P0 and P1 how would I go about solving for P2 and P3?
I assumed it would be by taking the cross product of the vector P1 - P0 with an arbitrary basis vector like (0,1,0)
something like
Vector3 perpendicularVector = Vector3.Cross((P1-P0), Vector3.Up);
but I wasn't able to make that work.
Any help on why my understanding is flawed and what I need to study to be able to understand a problem like this would be greatly appreciated!
July 11th 2012, 07:55 AM
Re: Find a vector3 that is perpendicular to another vector3 and (0,1,0)?
Hi everyone,
recently I was trying to solve this problem:
Attachment 24268
If I know 3D position vectors P0 and P1 how would I go about solving for P2 and P3?
I assumed it would be by taking the cross product of the vector P1 - P0 with an arbitrary basis vector like (0,1,0)
something like
Vector3 perpendicularVector = Vector3.Cross((P1-P0), Vector3.Up);
I don't really understand what is going on here.
But $\overrightarrow {{P_0}{P_1}} = \left\langle {1,1,0} \right\rangle$
There is no unique answer to this $\overrightarrow {{P_0}{P_2}} = \left\langle {a,b,c} \right\rangle$ where $a+b=0$ and $c$ can be any real.
If that is true then $\overrightarrow {{P_0}{P_1}} \bot \overrightarrow {{P_0}{P_2}}$.
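For what it's worth, the cross product from the original post does yield one valid choice; a plain-Python sketch (no game-engine Vector3 type assumed):

```python
# Cross product of d = P1 - P0 with the "up" basis vector (0, 1, 0).
def cross(u, v):
    return (u[1]*v[2] - u[2]*v[1],
            u[2]*v[0] - u[0]*v[2],
            u[0]*v[1] - u[1]*v[0])

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

d = (1, 1, 0)                  # the P0 -> P1 direction from the thread
up = (0, 1, 0)
perp = cross(d, up)
print(perp)                     # (0, 0, 1), perpendicular to both d and up
assert dot(perp, d) == 0 and dot(perp, up) == 0

# Caveat: when d is parallel to `up` the cross product degenerates to the
# zero vector, which may be why the original attempt "didn't work".
print(cross((0, 2, 0), up))     # (0, 0, 0)
```

Note that (0, 0, 1) fits the $\left\langle {a,b,c} \right\rangle$ form above with $a+b=0$.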
July 11th 2012, 08:13 AM
Re: Find a vector3 that is perpendicular to another vector3 and (0,1,0)?
Hi Plato,
Thanks for your answer! I was pretty sure I was going about this problem the incorrect way...curse my art degree!
Maybe I can clarify what I'm attempting and you can tell me the proper way to go about solving a problem like this.
Say a user can draw a line by choosing a starting location on a 2d grid and an ending location on that same 2d grid.
I'd like to have that action result in the pictured rectangle and not just the 2 points that the user provided.
Does that make any more sense?
Off topic: If there is a text that would help me more accurately understand these types of problems that you can recommend I'd love any suggested readings!
July 11th 2012, 08:22 AM
Re: Find a vector3 that is perpendicular to another vector3 and (0,1,0)?
Is it a 2d grid or a 3d grid? If it is a 2d grid, then why did you post a 3d example?
July 11th 2012, 08:29 AM
Re: Find a vector3 that is perpendicular to another vector3 and (0,1,0)?
I'm sorry,
I constrained my "clarification example" to 2 dimensions because I thought it would make the explanation easier...
The ultimate solution I was looking for would be in 3 dimensions.
I want to find the corners of a cube with arbitrary length, width, and depth but I only have to 2 positions in 3D space, and the Basis vectors as input.
If that problem is nonsensical (and the more I say it, it sure sounds like it) then I apologize for wasting your time, but appreciate the knowledge.
July 11th 2012, 09:01 AM
Re: Find a vector3 that is perpendicular to another vector3 and (0,1,0)?
I constrained my "clarification example" to 2 dimensions because I thought it would make the explanation easier... The ultimate solution I was looking for would be in 3 dimensions.
I want to find the corners of a cube with arbitrary length, width, and depth but I only have to 2 positions in 3D space, and the Basis vectors as input. If that problem is nonsensical (and the
more I say it, it sure sounds like it) then I apologize for wasting your time, but appreciate the knowledge.
$(0,0,0),~(1,0,0),~(1,1,0),~(0,1,0),~(0,1,1),~(0,0,1),~(1,0,1),~(1,1,1)$ are the eight vertices of a unit cube.
Its faces are parallel to the principal planes.
You need a good deal of mathematics to adjust the lengths of the sides, and to rotate the cube in 3d space.
July 11th 2012, 10:07 AM
Re: Find a vector3 that is perpendicular to another vector3 and (0,1,0)?
Thanks yeah I got that far as well,
made the cube at the origin with the arbitrary lengths then transformed that result to the position P0 and oriented to look at P2,
I just feel like there is a major bit of Vector transformations and indeed Vector's themselves that I'm missing something fundamental here.
Thanks for your help! | {"url":"http://mathhelpforum.com/geometry/200874-find-vector3-perpendicular-another-vector3-0-1-0-a-print.html","timestamp":"2014-04-21T08:07:19Z","content_type":null,"content_length":"12395","record_id":"<urn:uuid:23e4864a-6026-49d1-9748-2ad19a7ffad3>","cc-path":"CC-MAIN-2014-15/segments/1397609539665.16/warc/CC-MAIN-20140416005219-00623-ip-10-147-4-33.ec2.internal.warc.gz"} |
st: [Fwd: Hierarchical model]
[Date Prev][Date Next][Thread Prev][Thread Next][Date Index][Thread Index]
st: [Fwd: Hierarchical model]
From Evans Jadotte <evans.jadotte@uab.es>
To statalist@hsphsun2.harvard.edu
Subject st: [Fwd: Hierarchical model]
Date Fri, 02 Oct 2009 18:26:37 +0200
Hello listers!
I find it hard to believe that no one out there has ever used a hierarchical model. The last 3 inquiries I made here were left with ZERO feedback. Maybe something is wrong with my email. Anyway, if
somebody has any idea I would very much appreciate some feedback on the thread in this mail.
--- Begin Message ---
From Evans Jadotte <evans.jadotte@uab.es>
To statalist@hsphsun2.harvard.edu
Subject Hierarchical model
Date Fri, 02 Oct 2009 11:26:36 +0200
Dear statalisters,
I am estimating a linear three-level hierarchical model via "restricted maximum likelihood-REML" with the /xtmixed/ command, with households (level-1) nested into villages (level-2) nested into
regions (level-3). Specification seems alright. However, the empirical Bayes random intercept at level-3, which according to theory should be inferior or equal to the corresponding calculated
REML estimate, do not reflect the REML estimate that I calculated at that level. Specifically, I have an empirical Bayes (the BLUP r.e. computed by Stata) that is more than 300 times the REML
estimate calculated at level-3. At level-2 it seems reasonable, empirical Bayes = 0.87 REML estimate.
Moreover, I expected the sum of the variances (that of the raw residuals at level-1 + that of the REML residuals at level-2 + that of the REML residuals at level-3) to match the total variance. When
I compute them, however, the result gives an estimated total variance that is lower than that of the raw residuals at level-1.
I am pretty confident about the model specification and the computation but evidently things do not match as they should. Can anyone familiar with these models give me any feedback? I would so
much appreciate and thanks in advance.
--- End Message --- | {"url":"http://www.stata.com/statalist/archive/2009-10/msg00079.html","timestamp":"2014-04-17T18:53:25Z","content_type":null,"content_length":"7846","record_id":"<urn:uuid:32da85bd-70bb-4794-92e8-0c13302d87a4>","cc-path":"CC-MAIN-2014-15/segments/1397609530895.48/warc/CC-MAIN-20140416005210-00030-ip-10-147-4-33.ec2.internal.warc.gz"} |
Definite Integrals
The trapezoid sum is a good one to have some shortcuts for. We'll call the trapezoid sum with n sub-intervals TRAP(n).
Here's our favorite shortcut: TRAP(n) is the average of LHS(n) and RHS(n).
These rectangles are, respectively, a left-hand sum and a right-hand sum!
Remember that the trapezoid sum is the average of the left- and right-hand sums. However, there's an even shorter way to get a trapezoid sum out of your calculator.
Remember that
LHS(n) = [f(x_0) + f(x_1) + ... + f(x_{n-1})]Δx

RHS(n) = [f(x_1) + ... + f(x_{n-1}) + f(x_n)]Δx.

The trapezoid sum is the average of the right- and left-hand sums, so

TRAP(n) = ([f(x_0) + f(x_1) + ... + f(x_{n-1})]Δx + [f(x_1) + ... + f(x_{n-1}) + f(x_n)]Δx) / 2

This is kind of a mess. It gets better if we factor out the Δx:

TRAP(n) = (Δx / 2)[f(x_0) + f(x_1) + ... + f(x_{n-1}) + f(x_1) + ... + f(x_{n-1}) + f(x_n)]

Now look carefully at what we have inside the brackets. The quantities f(x_0) and f(x_n) only show up once each, because f(x_0) is only used in the left-hand sum and
f(x_n) is only used in the right-hand sum.

However, every term from f(x_1) to f(x_{n-1}) is used in both the left-hand sum and the right-hand sum, so each of these terms will show up twice!

That means

TRAP(n) = (Δx / 2)[f(x_0) + 2f(x_1) + 2f(x_2) + ... + 2f(x_{n-1}) + f(x_n)].

If we're estimating the area between f and the x-axis on [a,b] with TRAP(n), the first thing we do is divide [a,b] up into n equal sub-intervals and find the endpoints.

The value f(x_0) is only used as a height of the left-most trapezoid. Similarly, the value f(x_n) is only used as a height of the right-most trapezoid. However, the value of f at every endpoint
in between these shows up in two trapezoids.

When we add the areas of all these trapezoids we get

(Δx / 2)[f(x_0) + f(x_1)] + (Δx / 2)[f(x_1) + f(x_2)] + ... + (Δx / 2)[f(x_{n-1}) + f(x_n)]

Factoring out the Δx / 2 gives us

(Δx / 2)[f(x_0) + 2f(x_1) + ... + 2f(x_{n-1}) + f(x_n)]

Now we have a much better way to find a trapezoid sum:

TRAP(n) = (Δx / 2)[f(x_0) + 2f(x_1) + ... + 2f(x_{n-1}) + f(x_n)]
In words,
• divide the interval into sub-intervals
• find the value of f at each endpoint
• multiply each value by 2 unless it's the value of f at one of the original endpoints
• add everything up, divide by 2, and multiply by the width of a sub-interval! | {"url":"http://www.shmoop.com/definite-integrals/trapezoid-sum-shortcut.html","timestamp":"2014-04-17T04:32:16Z","content_type":null,"content_length":"30143","record_id":"<urn:uuid:7ceea716-1713-4313-a6a8-69cbc16cfe00>","cc-path":"CC-MAIN-2014-15/segments/1397609526252.40/warc/CC-MAIN-20140416005206-00291-ip-10-147-4-33.ec2.internal.warc.gz"} |
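The recipe is easy to check numerically; here's a quick sketch (the function and interval are arbitrary choices of mine):

```python
# TRAP(n) via the shortcut, compared against the average of LHS(n) and RHS(n).
def lhs(f, a, b, n):
    dx = (b - a) / n
    return sum(f(a + i * dx) for i in range(n)) * dx

def rhs(f, a, b, n):
    dx = (b - a) / n
    return sum(f(a + i * dx) for i in range(1, n + 1)) * dx

def trap(f, a, b, n):
    dx = (b - a) / n
    inner = sum(f(a + i * dx) for i in range(1, n))   # the doubled middle terms
    return (dx / 2) * (f(a) + 2 * inner + f(b))

f = lambda x: x * x
avg = (lhs(f, 0, 1, 100) + rhs(f, 0, 1, 100)) / 2
assert abs(trap(f, 0, 1, 100) - avg) < 1e-12          # TRAP = (LHS + RHS)/2
print(trap(f, 0, 1, 100))    # ≈ 0.33335, close to the true area 1/3
```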
Difference Between Sample Mean and Population Mean
Sample Mean vs Population Mean
“Mean” is the average of all the values in a sample. It can be calculated by adding up all the values and then dividing the sum total by the number of values in the sample.
Population Mean
When the provided list represents a statistical population, then the mean is called the population mean. It is usually denoted by the letter “µ.”
Sample Mean
When the provided list represents a statistical sample, then the mean is called the sample mean. The sample mean is denoted by “X.” It is a satisfactory estimate of the population mean.
For a sample, a population mean may be defined as:
µ = Σ x / n where;
Σ represents the sum of all the observations in the population;
n represents the number of observations taken for the study.
When frequency is also included in the data, then the mean may be calculated as:
µ = Σ f x / n where;
f represents the class frequency;
x represents class value;
n represents the size of the population, and
Σ represents the summation of the products “f” with “x” all over the classes.
In the same way the sample mean will be;
X = Σ x / n or
X = Σ f x / n where “n” is the number of observations.
In a more elaborate way it may be represented as;
X = (x₁ + x₂ + x₃ + ……. + xn) / n or
X = 1/n (x₁ + x₂ + x₃ + ……. + xn) = Σ x / n
This can be cleared with the following example:
Suppose the data has the following observations of a study.
1, 2, 2, 3, 3, 4, 5, 6, 7, 8
For these samples to take out the sample mean, we will consider several samples and consider the mean.
For 1, 2, 3, the mean will be calculated as (1 + 2 + 3)/3 = 2;
For 3, 4, 5, the mean will be calculated as (3 + 4 + 5)/3 = 4;
For 4, 5, 6, 7, 8, the mean will be calculated as (4 + 5 + 6 + 7 + 8)/5 = 6;
And for 3, 3, 4, 5, the mean will be calculated as (3 + 3 + 4 + 5)/4 = 3.75.
Thus the total mean of these samples is (2 + 4 + 6 + 3.75)/4 = 3.9375, or approximately 4.
This value is called the sample mean.
Now for the population, the population mean can be calculated as:
(1 + 2 + 2 + 3 + 3 + 4 + 5 + 6 + 7 + 8)/10 = 4.1
Thus the sample mean is very close to the population mean. The accuracy increases with an increase in the number of samples taken.
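The same worked example in a few lines of Python (statistics.mean is just the sum divided by the count):

```python
from statistics import mean

population = [1, 2, 2, 3, 3, 4, 5, 6, 7, 8]
samples = [[1, 2, 3], [3, 4, 5], [4, 5, 6, 7, 8], [3, 3, 4, 5]]

sample_means = [mean(s) for s in samples]
print(sample_means)           # [2, 4, 6, 3.75]
print(mean(sample_means))     # 3.9375, roughly 4
print(mean(population))       # 4.1
```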
1.A sample mean is the mean of the statistical samples while a population mean is the mean of the total population.
2.The sample mean provides an estimate of the population mean.
3.A sample mean is more manageable data while a population mean is difficult to calculate.
4.The sample mean increases its accuracy to the population mean with the increased number of observations.
| {"url":"http://www.differencebetween.net/science/difference-between-sample-mean-and-population-mean/","timestamp":"2014-04-18T18:15:36Z","content_type":null,"content_length":"44770","record_id":"<urn:uuid:2ea82d9a-2038-4366-94a8-6b85bdeb56e1>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.7/warc/CC-MAIN-20140416005215-00058-ip-10-147-4-33.ec2.internal.warc.gz"}
[Python-ideas] [Python-Dev] Inclusive Range
MRAB python at mrabarnett.plus.com
Tue Oct 5 21:43:29 CEST 2010
On 05/10/2010 20:23, spir wrote:
> On Tue, 05 Oct 2010 13:45:56 +0200
> Boris Borcic<bborcic at gmail.com> wrote:
>> Nick Coghlan wrote:
>>> [...] Being able to say things like
>>> "10:00"<= x< '12:00", 10.0<= x< 12.0, "a"<= x< "n" are much
>>> clearer than trying to specify their closed range equivalents.
>> makes one wonder about syntax like :
>> for 10<= x< 20 :
>> blah(x)
>> Mh, I suppose with rich comparisons special methods, it's possible to turn
>> chained comparisons into range factories without introducing new syntax.
>> Something more like
>> for x in (10<= step(1)< 20) :
>> blah(x)
> About notation, even if I loved right-hand-half-open intervals, I would wonder about [a,b] noting it. I guess 99.9% of programmers and novices (even purely amateur) have learnt about intervals at school in math courses. Both notations I know of use [a,b] for closed intervals, while half-open ones are noted either [a,b[ or [a,b). Thus, for me, the present C/python/etc notation is at best misleading.
> So, what about a hypothetical language using directly math *unambiguous* notation, thus also letting programmers chose their preferred semantics (without fooling others)? End of war?
[Oops! Post sent to wrong list!]
Dijkstra came to his conclusion after seeing the results of students
using the programming language Mesa, which does support all 4 forms of interval notation.
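A sketch of the chained-comparison idea floated above: rich-comparison methods on a small helper object can turn `10 <= step(1) < 20` into an iterable range (the `step` class here is a hypothetical toy, not a real proposal):

```python
class step:
    """Accumulates bounds from a chained comparison, then iterates them."""
    def __init__(self, stride):
        self.stride = stride
        self.lo = None            # inclusive lower bound
        self.hi = None            # exclusive upper bound

    # `10 <= s` falls back to s.__ge__(10); returning self (which is truthy)
    # lets the second half of the chained comparison fire on the same object.
    def __ge__(self, other):
        self.lo = other
        return self

    def __lt__(self, other):
        self.hi = other
        return self

    def __iter__(self):
        x = self.lo
        while x < self.hi:
            yield x
            x += self.stride

print(list(10 <= step(1) < 14))   # [10, 11, 12, 13]
print(list(0 <= step(3) < 10))    # [0, 3, 6, 9]
```

Whether such comparison-overloading is desirable is, of course, exactly what the thread is debating.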
More information about the Python-ideas mailing list | {"url":"https://mail.python.org/pipermail/python-ideas/2010-October/008207.html","timestamp":"2014-04-16T17:24:36Z","content_type":null,"content_length":"4666","record_id":"<urn:uuid:1a802abb-efb0-4b2d-bfea-6f3112e93738>","cc-path":"CC-MAIN-2014-15/segments/1397609524259.30/warc/CC-MAIN-20140416005204-00171-ip-10-147-4-33.ec2.internal.warc.gz"} |
A Graph Theoretic Formula for the Steady State Distribution of Finite Markov Processes
James J. Solberg
Additional contact information
James J. Solberg: Purdue University
Management Science, 1975, vol. 21, issue 9, pages 1040-1048
Abstract: This paper presents a formula which expresses the solution to the steady-state equations of a finite irreducible Markov process in terms of subgraphs of the transition diagram of the
process. The formula is similar in spirit to well-known flowgraph formulas, but possesses several unique advantages. The formula is the same whether the process is discrete or continuous in time; it
is efficient in the sense that no cancellation of terms can occur (it is a simple sum of positive terms); and it is both conceptually and computationally simple. Because these advantages are gained
by exploiting properties of Markov processes, the formula is not applicable to linear equations in general, as are the flowgraph methods. The paper states and proves the theorem for both the discrete
and continuous cases, gives examples of each, and cites computational experience with the formula.
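To give the flavor of such subgraph formulas (the details of Solberg's own formula may differ), here is the two-state case of the Markov chain tree theorem, where steady-state probabilities are proportional to the weights of the spanning in-trees rooted at each state:

```python
# Two-state chain: p = P(0 -> 1), q = P(1 -> 0); example values are mine.
p, q = 0.3, 0.2

# The only in-tree rooted at state 0 is the edge 1 -> 0 (weight q);
# the only in-tree rooted at state 1 is the edge 0 -> 1 (weight p).
pi = (q / (p + q), p / (p + q))        # normalized sums of tree weights

# Cross-check against the steady-state equations pi = pi * P.
P = [[1 - p, p], [q, 1 - q]]
check = (pi[0] * P[0][0] + pi[1] * P[1][0],
         pi[0] * P[0][1] + pi[1] * P[1][1])
assert all(abs(a - b) < 1e-12 for a, b in zip(pi, check))
print(pi)                               # (0.4, 0.6)
```

Note the formula is a simple sum of positive terms, with no cancellation — the property the abstract highlights.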
Date: 1975
Downloads: (external link)
http://dx.doi.org/10.1287/mnsc.21.9.1040 (application/pdf)
Persistent link: http://EconPapers.repec.org/RePEc:inm:ormnsc:v:21:y:1975:i:9:p:1040-1048
More articles in Management Science from INFORMS
| {"url":"http://econpapers.repec.org/article/inmormnsc/v_3a21_3ay_3a1975_3ai_3a9_3ap_3a1040-1048.htm","timestamp":"2014-04-16T07:26:16Z","content_type":null,"content_length":"12877","record_id":"<urn:uuid:3cafc5c2-d7f1-4f80-9efb-9fea342f9013>","cc-path":"CC-MAIN-2014-15/segments/1397609521558.37/warc/CC-MAIN-20140416005201-00260-ip-10-147-4-33.ec2.internal.warc.gz"}
Peter Suber, "Infinite Reflections"
Originally presented as an all-college address at St. John's College in October 1996. Published in the St. John's Review, XLIV, 2 (1998) 1-59. Copyright © 1998, Peter Suber.
Infinite Reflections
Peter Suber, Philosophy Department, Earlham College
Galileo's Paradox
Here's a paradox of infinity noticed by Galileo in 1638. It seems that the even numbers are as numerous as the evens and the odds put together. Why? Because they can be put into one-to-one
correspondence. The evens and odds put together are called the natural numbers. The first even number and the first natural number can be paired; the second even and the second natural can be
paired, and so on. When two finite sets can be put into one-to-one correspondence in this way, they always have the same number of members.
Supporting this conclusion from another direction is our intuition that "infinity is infinity", or that all infinite sets are the same size. If we can speak of infinite sets as having some number
of members, then this intuition tells us that all infinite sets have the same number of members.
Galileo's paradox is paradoxical because this intuitive view that the two sets are the same size violates another intuition which is just as strong. Clearly, the even numbers seem less numerous
than the natural numbers, half as numerous to be precise. Why? Because we can obtain the evens by starting with the naturals and deleting every other member. Needless to say, when we delete every
other member of a finite set, the result is a set which is half as numerous as the original set.
If the evens and the naturals were finite sets, then these two verdicts would form a strict contradiction. If two finite sets can be put into one-to-one correspondence, then they have the same
number of members; but if one can be produced by deleting every other member of the other, then they do not have the same number of members and cannot be put into one-to-one correspondence. So do
we have a strict contradiction here?
The evens and the naturals are not finite but infinite sets. By this I only mean that counting them one at a time will never come to the end; there is no greatest even number, and no greatest
natural number.
At this point let's introduce the technical term cardinality to refer to the number of members in a set. For example, the set of fingers on one hand has cardinality five. The set of faces on Mt.
Rushmore has cardinality four. The set of stooges has cardinality three.
In the language of cardinality, we may say that any two sets which can be put into one-to-one correspondence are equal in cardinality; they have the same number of members. This is easily
verified for finite sets, and we will regard it as the definition of equal magnitude for infinite sets. Using the same language of cardinality, our intuition has given us two additional
propositions: (1) that all infinite sets are equal in cardinality,[Note 1] and (2) that if one set can be obtained by deleting members of another, then they have unequal cardinalities. The latter
verdict can be paraphrased thus: some infinite sets have a larger cardinality than other infinite sets, or not all infinite sets are equal in cardinality. Therefore, these two verdicts of
intuition directly contradict one another and cannot both be true.
Let us introduce one more technical term, our last for a long time. One set is a subset of a second set if all its members belong to the second set. It is a proper subset if all its members
belong to that second set and if it omits or excludes some of the members of that second set. The evens are a proper subset of the naturals because they form a subset of the naturals which omits
some naturals, namely, the odds. The set of Moe and Larry makes a proper subset of stooges because Moe and Larry are some but not all the stooges; they omit a stooge, namely, Curly. With this
terminology, we can offer one more paraphrase of the second verdict of intuition: a set must have a larger cardinality than its proper subsets.
If we add Curly to the set of Moe and Larry, then the set grows in cardinality from two to three. What would happen if we added the odd numbers to the set of the even numbers? Would the set grow
in cardinality, or would it retain the same cardinality as the set of evens alone? This is the original question in a new form. The first verdict of intuition says no; all infinite sets are equal
in cardinality, so adding the evens to the odds would not increase cardinality. The second verdict of intuition says yes, for this verdict is just another way of saying that adding new members to
a given set, and especially adding an infinite number of new members, will always increase cardinality.
So which verdict is correct? Before we answer this question, note that we cannot have it both ways. Either all infinite sets are equal in cardinality, or all infinite sets have a larger
cardinality than their proper subsets, but not both. Therefore, the truth on this question will violate at least one of our intuitions. For my purposes here, this lesson is at least as important
as the mathematical details of the correct answer, for it implies that we should not trust our intuitions in this domain, nor should we expect to confirm mathematical results about infinity with
our intuitions. Some true results will violate our intuitions and some false results will be ratified by them.
Now we can point out that both the verdicts of intuition are false. First, it is false that all infinite sets are equal in cardinality. We can prove that some infinities are larger than others
(for example, see Theorems 3, 4, 5, and 16 in the Appendix). Second, it is false that all sets have a larger cardinality than their proper subsets. We can prove that some additions to a given
set, even infinite additions, do not increase the cardinality of the given set (for example, see Theorems 1, 2, 7, 14, 15, 19, and 22 in the Appendix).
In his original statement of the paradox, Galileo did not use the even numbers; he used the perfect squares, 0, 1, 4, 9, 16....[Note 2] Like the evens, this set is infinite and the set of its
natural number omissions is also infinite. But it seems much less likely than the even numbers to equal the naturals in cardinality because, as we move along the series of squares, the interval
between members becomes increasingly large. In fact, as we move outward the ratio of perfect squares to natural numbers approaches zero. The evens never peter out, but the squares become
infinitely sparse.
Nevertheless, we can put the natural numbers and the perfect squares into one-to-one correspondence. Every distinct natural number has a distinct perfect square; and every distinct square number
has a distinct natural number as its square root. Hence every member of one sequence has a unique counterpart on the other, and vice versa. (The same is true of the evens and the naturals.)
This fact is the key to the solution. If the two sets can be put into one-to-one correspondence, then they have the same cardinality, by definition. One intuition ratified this result (namely,
that all infinite sets are equal in cardinality) and one opposed it (namely, that all infinite sets have a greater cardinality than their proper subsets). Both intuitions are false in general,
but one was accidentally true in this case. The lesson for intuition is: get used to it.
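The correspondence can be made concrete in a few lines (a sketch; a finite prefix can only illustrate the pairing rule, since the sets themselves are infinite):

```python
from math import isqrt

naturals = list(range(10))
squares = [n * n for n in naturals]       # n-th natural <-> n-th perfect square

# The pairing is invertible in both directions, so nothing is left unpaired:
assert [isqrt(s) for s in squares] == naturals
print(list(zip(naturals, squares)))   # [(0, 0), (1, 1), (2, 4), (3, 9), ...]
```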
Galileo's paradox is paradoxical only in the weak sense: it violates our intuitions. It is not a contradiction. It is weird and amazing; it is literally counter-intuitive; but it is not
contradictory. We were able to choose between the competing intuitions and eliminate the appearance of contradiction once we held fast to the definition of equal cardinality for infinite sets
provided by the principle of one-to-one correspondence.
This innovation is due to Georg Cantor, as is set theory itself, the theory of infinite sets, and the modern concept of infinite cardinality. Cantor lived from 1845 to 1918, and worked out his
theory of infinite sets from roughly 1870 to 1895. Cantor's verdict is that the set of even numbers, the set of odd numbers, the set of perfect squares, and the set of all the natural numbers
have the same cardinality. The key to this solution is simply to define equal cardinality through one-to-one correspondence, and then to show that these sets can be put into one-to-one
correspondence with one another. Similarly, we can prove that some infinite sets have a larger cardinality than others by showing that they cannot be put into one-to-one correspondence.
You may know that many mathematicians and philosophers have objected to the very idea of a completed or actual infinity, as opposed to a potential infinity. Cantor's mathematics, however, boldly
posits complete infinities. The natural numbers make a potential infinity when we think of counting them out, and never coming to an end; we could always add one more and keep going. They
constitute a completed or actual infinity when they are all bundled together and said to form a set of some definite cardinality. Cantor not only flew in the face of the traditional objection to
completed infinities, he used completed infinities in the form of infinite sets as an intrinsic part of his solution to the classical paradoxes of the infinite.
There are many other classical paradoxes of the infinite. But Galileo's is enough to get us started. The infinite has been a perennial source of mathematical and philosophical wonder, in part
because of its enormity anything that large is grand, and provokes awe and contemplation and in part because of the paradoxes like Galileo's. Infinity seems impossible to tame intellectually,
and to bring within the confines of human understanding. I will argue, however, that Cantor has tamed it. The good news is that Cantor's mathematics makes infinity clear and consistent but does
nothing to reduce the awe-inspiring grandeur of it.
I'll offer reflections on just a handful of the specific questions mathematicians and philosophers have asked about the infinite over the centuries. Has modern mathematics allowed us to speak
coherently of "complete" or "actual" infinities, as opposed to merely "potential" ones? Is the very idea of an infinite set (which can be put into one-to-one correspondence with some of its
proper subsets) self-contradictory? Can infinite collections be "imagined" or only "conceived" or not even that? Do we have an idea of infinity or only the idea of finitude and its negation? I
will discuss how we go about "unlearning" some intuitions, cultivated in our experience of the finite, which make some consistent and demonstrable results literally counter-intuitive. Finally, I
will examine why the deep explorers of the infinite, even in its strictly mathematical forms, recurringly find it to be (in Kant's term) sublime.
Contradictory or Counter-Intuitive?
Cantor forces us to see that the intuitive notion of a set's size is ambiguous. When we say that one set is smaller than another set, we might mean two distinctly different things. First, we
might mean that the "smaller" set is a proper subset of the "larger" set. Second, we might mean that one set has a smaller cardinality than the other set in the sense that one-to-one
correspondence between them fails.
These two notions of size are distinct and independent. A set may be smaller than another set by one measure and not smaller by the other measure. Galileo's paradox is a perfect illustration. The
set of perfect squares is a proper subset of the set of natural numbers; in that sense it is a "smaller" set. However, the two sets can be put into one-to-one correspondence; in that sense, it's
not smaller at all but the same "size".
With finite sets, these two notions of size always and necessarily agree; that may be why they are so easy to confuse with one another when we are dealing with infinite sets. I believe that all
the classical paradoxes of the infinite rest on just this confusion of the two notions of a set's size, a symptom of the unwarranted expectation that infinite magnitudes should behave like finite
magnitudes. The classical paradoxes set up two infinite sets which are unequal by one test, but equal by the other, and present this counter-intuitive but consistent possibility as a
contradiction or impossibility. The classical objections to completed infinities[Note 3] rest on the same confusion. Those who argued that completed infinities are self-contradictory appealed to
the apparent contradictions contained in the classical paradoxes like Galileo's. When we recognize the two distinct and compatible notions of size which are at work in these paradoxes, then, we
show that the apparent contradiction is not a real one, we dissolve the paradox, and we answer the objections based on it against completed infinities.
To repeat, then, for the sake of explicitness: Cantor's solution to Galileo's paradox is that the set of perfect squares and the set of natural numbers have the same cardinality even though one
of these sets is a proper subset of the other.
It follows (have courage!) that some infinite sets can be put into one-to-one correspondence with proper subsets of themselves. This can never happen with finite sets. But it happens, for
example, with the natural numbers and its proper subset, the even natural numbers, and again with the natural numbers and its proper subset, the perfect squares.
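The correspondence in question can be made concrete. Here is a short Python sketch (my illustration, not anything in Cantor's own notation) that pairs each natural number with its square and checks, on a finite prefix, that the pairing repeats no square and skips none:

```python
# Illustrative sketch: the one-to-one correspondence resolving Galileo's
# paradox pairs each natural number n with its perfect square n*n.
def square_of(n):
    return n * n

# On any finite prefix, the pairing is one-to-one: distinct naturals get
# distinct squares, and no perfect square up to n*n is skipped.
naturals = range(1, 11)
pairs = [(n, square_of(n)) for n in naturals]
assert len({s for _, s in pairs}) == len(pairs)   # no square repeats
assert [s for _, s in pairs] == [1, 4, 9, 16, 25, 36, 49, 64, 81, 100]
```

The same shape of pairing (n with 2n) works for the even numbers.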
The very idea that a set can be put into one-to-one correspondence with one of its proper subsets is deeply counter-intuitive. If you're feeling a barrier of resistance, this is probably the
cause. For example, an infinite set with this property will not grow in cardinality as we add members to it, one at a time (see Theorem 7 in the Appendix), and will not shrink in cardinality as
we subtract members from it, one at a time (see Theorems 8 and 9 in the Appendix).
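The content of Theorem 7 can be illustrated with a simple shift pairing (my sketch; the Appendix's own proof may proceed differently): add one new element to the natural numbers, pair it with 0, and shift every natural up by one.

```python
# Sketch: adding one element to an infinite set does not raise its
# cardinality. Pair the hypothetical new element with 0 and shift every
# natural number up by one; nothing is repeated and nothing is missed.
def pair_for(x):
    return 0 if x == "new" else x + 1   # x ranges over "new", 0, 1, 2, ...

# On a finite prefix, the images hit 0, 1, ..., 10 exactly once each:
images = [pair_for("new")] + [pair_for(n) for n in range(10)]
assert sorted(images) == list(range(11))
```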
For the sake of future discussion, let us say that a set which can be put into one-to-one correspondence with at least one of its proper subsets is self-nesting. (Unfortunately, mathematicians
have given no name to this property, so I have to invent one.)
Self-nesting sets seemed impossible or contradictory as soon as they were conceived. In the sixth century, John Philoponus of Alexandria argued that if the world were infinitely old, then an
infinite number of months would have passed. But thirty times as many days would also have passed. But either the infinite number of months and the infinite number of days are equal or unequal.
If equal, then in our terms the infinite set of past days is self-nesting and can be put into one-to-one correspondence with its proper subset, the infinite set of past months. If unequal, then
there would be infinities of different sizes. Because Philoponus thought both options contradictory, he concluded that the world must be finitely old.[Note 4]
Cantor's theory faced intense opposition in the late 19th century, from mathematicians as well as from philosophers and theologians. It wasn't just denied and disbelieved; it was hated. Yet
despite this heat, no opponent of the theory has been able to show that self-nesting is contradictory for infinite sets. The objections that self-nesting is contradictory for finite sets, or
counter-intuitive for infinite sets, are clearly beside the point. Today, Cantor's theory is standard mathematics even though there are still a few holdouts. Beyond consistency, it has the virtue
of eliminating the apparent contradiction from puzzles like Galileo's paradox.
When a theory with these virtues is opposed by intuition, the remedy is not to deny the theory but to unlearn our old intuitions.[Note 5] In the task of re-educating our intuitions, I've found
three strategies to be helpful.
First, study the proofs for the basic mathematical results. When your intuition is opposed only by someone's say-so, like mine or a teacher's, then intuition can easily win, and perhaps in that
case it ought to win. When it is opposed by an articulate chain of reasoning, then it starts to give, and it ought to give.
Second, remember that our intuitions were cultivated by our experience of finite sets: sets of fingers, sets of coins, sets of people. And for finite sets, self-nesting is a flat contradiction.
When we deal with infinite sets, we must accept the fact that most of our "common sense" or "rules of thumb" will either be inapplicable or false, evolving as they did for the more tractable
domains of finite experience. This is not a license to disregard or negate our intuitions, which are often valuable clues to mathematically coherent theories. It is simply a reason to put them to
one side when they conflict with a consistent theory supported by strong proofs which solves otherwise insoluble mathematical problems.
In the same vein, it is helpful to remember past cases in the history of mathematics in which we mistook counter-intuitive ideas for contradictory ones. The preeminent examples are
incommensurable quantities and instantaneous velocities; however, we could also cite negative numbers, the denial of Euclid's parallel postulate, and, more recently, incomputable numerical
functions. With the passage of time, the acceptance and utility of these ideas have only increased, and their consistency has been more firmly and clearly recognized, while the opposing
intuitions have faded away with the world-views which cultivated them.
Third, remember that our intuitions would not be satisfied any better by rejecting Cantor's self-nesting solution to Galileo's paradox. If we didn't accept Cantor's view that Galileo's two sets
had the same cardinality, then we'd have to accept the view that they had unequal cardinalities. But this result would contradict the intuitive principle that one-to-one correspondence
establishes equal cardinality. When we are at an impasse for intuition, then intuition is no longer a helpful guide, since it pulls as much (or as little) for one side as for the other. That is
when we should be looking for another guide, not clinging to the guide which has disqualified itself.
Imagination v. Conception
We've seen that intuition disqualifies itself in this domain by endorsing contradictory conclusions. Cantor's conclusions are rigorously proved, and so far (despite some strenuously motivated
effort), rigorous proof has not endorsed contradictory conclusions about the infinite. This is one good reason to prefer proof to intuition. The distinction between intuition and proof as reasons
for accepting a theory, and the inadequacy of intuition for dealing with the infinite, have many consequences for the philosophy and mathematics of the infinite. For example, even after
acknowledging the consistency of Cantor's theory, many people will still insist that we know nothing about infinity. What they seem to mean is that knowing requires some intuition, imagination,
or visualization.
I think I understand the origin of this objection, but I also believe it is easily answered. Just as our intuitions about sets, subsets, and cardinality are cultivated by finite sets, where
self-nesting is impossible, our ordinary knowledge of objects is limited to finite numbers of objects of finite size. (In a moment I will look at the question whether we ever experience anything
that is truly infinite.) We can visualize objects of finite size and we can visualize finite numbers of them. This means that virtually all of our ordinary knowledge of objects is accompanied by
this possibility of visualization. It's natural that we would come to expect that anything we can know, we can also visualize.
Even if this expectation is legitimate for the finite, it is entirely illegitimate for the infinite. Just as intuitions cultivated for the finite are likely to be inapplicable or false of the
infinite, so is the expectation that we be able to visualize.
Descartes asks us to imagine, that is, visualize, a chiliagon, a 1,000-sided regular polygon.[Note 6] Can you do it? Try it right now. Chances are, you are either visualizing something like a 20-
or 30-sided polygon and pretending it has 1,000 sides, or you are visualizing a circle and pretending the sides are too small to see with your mind's eye. We know exactly what a chiliagon is; we
can even compute the interior angle of its sides and, for a given edge, its area and perimeter. But we cannot visualize one.
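Those computations are entirely routine, which is part of the point. The Python sketch below (my illustration, using the standard formulas for regular polygons and an assumed edge length of 1) computes the interior angle, perimeter, and area we can never visualize:

```python
import math

# We cannot visualize a chiliagon, but we can compute its properties.
n = 1000   # number of sides
s = 1.0    # edge length (assumed for illustration)

interior_angle = (n - 2) * 180.0 / n            # degrees; 179.64 for n=1000
perimeter = n * s
area = n * s * s / (4 * math.tan(math.pi / n))  # standard regular-polygon area

assert abs(interior_angle - 179.64) < 1e-9
assert perimeter == 1000.0
```

At 179.64 degrees, each vertex is visually indistinguishable from a straight line, which is exactly why the mind's eye substitutes a circle.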
One reason I like Descartes' example is that it is finite. Philosophers who think the infinite utterly beyond human understanding often fail to notice that their arguments, once made specific,
apply to very large finite magnitudes as well. We cannot visualize infinitely many cherries in a tree, but neither can we visualize a billion. Does that disqualify us from using billions
intelligibly and accurately?
To Descartes, the chiliagon thought-experiment proved that we have at least two avenues to knowledge: imagination (which I've been calling visualization) and conception. We can conceive the
chiliagon, although we cannot imagine it. Once it is pointed out with a concrete example like the chiliagon, this is undeniable and we start to see other examples everywhere. To Descartes, the
distinction is more important in theology than in mathematics. The greatest obstacle to true faith, he thinks, is the attempt to imagine God when we can only conceive God.[Note 7]
Let me return to set theory. The power set of a given set is the set of all its subsets. For example, if I have a set of three stooges, then its power set is the set of all the subsets of stooges
I can make from that set of three. There is the set {Moe, Curly}, the set {Moe, Larry}, and the set {Curly, Larry}. There is also the set of {Moe} alone, {Curly} alone, and of {Larry} alone. For
technical reasons, we say that every set is a subset of itself, and the null set is a subset of every set. Hence we throw in {Moe, Curly, Larry} and {} to boot. This makes eight. Any set of three
objects (any set with a cardinality of three) will have a power set of cardinality eight.
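The count of eight is easy to verify mechanically. Here is a small Python sketch (mine, using the standard itertools recipe) that enumerates all subsets and confirms the cardinality 2 to the power 3:

```python
from itertools import chain, combinations

def power_set(s):
    """Return every subset of s, including the empty set and s itself."""
    items = list(s)
    return [set(c) for c in chain.from_iterable(
        combinations(items, r) for r in range(len(items) + 1))]

stooges = {"Moe", "Curly", "Larry"}
subsets = power_set(stooges)
assert len(subsets) == 8      # 2**3 subsets, as counted in the text
assert set() in subsets       # the null set is a subset of every set
assert stooges in subsets     # every set is a subset of itself
```

In general a set of cardinality n has a power set of cardinality 2**n, which is why the methods below grow so cumbersome for a set of 1,000 elements.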
We can imagine (visualize) many methods for systematically drawing out all the subsets of a given finite set. These methods will be extremely cumbersome for sets of cardinality 1,000, say, but
each method contains an algorithm which we can visualize working out.
Contemplate the set of natural numbers. Here is a set of infinite cardinality. What is the cardinality of its power set?
We saw that the evens had the same cardinality as the naturals, despite appearances to the contrary. We might cautiously generalize that all infinite sets have the same cardinality, but here we
find a counter-example. Cantor found an elegant proof that the power set of any set, finite or infinite, possesses a greater cardinality than the original set; this important result is simply
called Cantor's Theorem.
It has a short proof of marvelous beauty. The proof is negative, which means that Cantor assumed the negation of his conclusion and derived a contradiction from it. Since the theorem works for
any arbitrary set, let's apply it to the set of natural numbers. So, to set up the negative proof, let us assume that the set of natural numbers and its power set have the same cardinality. If
so, then they can be put into one-to-one correspondence. Let us suppose we have done so (even though we have no idea how to do so). Now by hypothesis each natural number is paired with exactly
one set of natural numbers, and vice versa. Some numbers will be paired with sets which happen to contain them. For example, 2 might be paired with the set of even numbers. Let us call such
numbers happy, and all other numbers sad. Now the set of all sad numbers is a bona fide set of natural numbers, and so has been paired with some natural number in our infinite list of
correspondences. Let's say it has been paired with x. Is x a happy number or a sad one? At this point, I know you'll start to get a little dizzy. That's good; it means you're following along. If
x is sad, then because it has been paired with the set of sad numbers, it has been paired with a set which includes it; but that means it would be happy. But if x is happy, then it would be a
member of the set to which it has been paired; but because it has been paired with the set of sad numbers, that means it would be sad. Hence, if x is happy, then x would be sad, and if x is sad,
then x would be happy. Our assumption implied this contradiction, and so must be false. But to deny our assumption is to conclude that the set of natural numbers and its power set have different
cardinalities. (See Theorem 4 in the Appendix.)
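The happy/sad construction can be run on a finite example. The Python sketch below (my illustration, with an arbitrary hypothetical pairing f) shows that whatever pairing of elements with subsets we attempt, the set of sad elements is always missed:

```python
# A finite sketch of the happy/sad argument: for ANY attempted pairing f
# of elements with subsets, the set of "sad" elements (those not belonging
# to the subset they are paired with) is never in the image of f.
S = {0, 1, 2}
f = {0: {0, 1}, 1: set(), 2: {0, 2}}   # a hypothetical pairing

sad = {x for x in S if x not in f[x]}  # 0 and 2 are happy; 1 is sad
assert sad == {1}
assert all(f[x] != sad for x in S)     # the pairing misses the sad set
```

Try changing f to any other assignment of subsets; the final assertion holds every time, which is the finite shadow of Cantor's Theorem.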
In my view there are two great counter-intuitive results in the mathematics of the infinite. The first is that some infinite sets are self-nesting. (It turns out that all are; see Theorem 10 in
the Appendix.) The second is that some infinities are larger (have a greater cardinality) than others. (See Theorems 3, 4, 5, and 16 in the Appendix.) Now we have seen proofs for both results.
The first was proved by one-to-one correspondence, the second by a technique that has been called diagonalization.
Cantor's Theorem is not very remarkable if we think only of finite sets. Of course for every finite set the power set is bigger than the original. But for infinite sets Cantor's Theorem is the
astounding proposition that for every infinite cardinality, there is a larger one, namely the cardinality of the power set. So if the cardinality of the set of natural numbers is one infinite
number, the cardinality of its power set is a distinct, larger infinite number, and the cardinality of that set's power set is a larger infinite number still, then it's clear that what Cantor
has really proved is that there exists an infinite sequence of infinite cardinal numbers.
Of this remarkable theorem and its remarkable proof, David Hilbert said, "This appears to me to be the most admirable flower of the mathematical intellect and one of the highest achievements of
purely rational human activity."
Infinity as a Positive Idea
Let us grant, then, that imagination and intuition are too feeble to grasp infinity. It does not yet follow that conception is strong enough, or indeed that any human faculty is strong enough. We
might understand infinity the way medieval Christian philosophers thought we understood God: via negativa, that is, by understanding what God, or infinity, is not. For example, I know what it is
like for a row of trees to come to an end. This exemplifies my concept of finitude. If I say that an infinite row of trees is just like the finite row "except that it never comes to an end", then
I am merely negating my concept of finitude.
Descartes, again, thought we did have a positive idea of infinity three centuries before Cantor. This was important to him because he thought that finite human resources could not suffice to
give us the idea of infinity, and therefore that the idea could only have been given to us by an infinite being; in short, it was part of another of his arguments for the existence of God.[Note
8] He has two theses here: first, that we do possess a positive idea of infinity, and second that we could not have obtained this idea from our own finite experience or creativity. If both are
true, this would be important for just the reason he thought. But are they both true?
In a moment I will take up the question whether we ever experience anything infinite. On the question whether we know infinity positively, or just via negativa, Descartes is very short. He argues
that he would not know that he is finite or imperfect unless he had prior, positive ideas of infinity and perfection.[Note 9] There are many follow-up questions a skeptic would like to ask at
this point, but Descartes does not pause for them.
Descartes does pause to ask himself the question: Is it possible that I am an infinite being, don't know it, and could therefore be the source of my idea of infinity?[Note 10] Although this is a
terribly interesting and important question, he is also very short with it. After a brief look, he answers like Steve Martin in a Saturday Night Live routine, "Naaaa!"
Etymologically, the word "infinite" is "non-finite". This supports the view that perhaps finitude is the primary notion here and our concept of infinity is the negation of our concept of
finitude. But we can't get any mileage from etymologies in this inquiry. Etymologically, the word "independent" is "non-dependent" as if unfreedom were the primary concept and freedom derivative.
But the word "unfreedom" is "non-freedom" as if freedom were the primary concept after all. Similarly, the continuum is one of the premier examples of infinity in mathematics, but it differs from
other infinities like the rational number series in being "unbroken" or "without gaps". This suggests that we only know the continuum via negativa, by negating the idea of gaps; but
etymologically the terms "continuous" and "discontinuous" suggest the opposite, that continuity is the primary concept here.
More telling than etymology is this exercise: define finitude. I often teach a course at Earlham with a unit on the mathematics of infinity, and every now and then I'll throw "finitude" or
"finite set" onto a quiz, as a term to define. Invariably, students lose more points trying to define it precisely than they do when defining various infinite cardinalities.
Do try this one at home, however. Define finitude with clarity and precision. There are ways, even brief ways, but they usually don't occur to people with no training in mathematics.
Infinity, by contrast, at least since Cantor, is easy to define with clarity and precision. Remember that Cantor proved that some infinite sets are self-nesting, or can be put into one-to-one
correspondence with at least one of their proper subsets. It's not hard to prove that all infinite sets, in fact, are self-nesting. (See Theorem 10 in the Appendix.) And we already knew that only
infinite sets are self-nesting, or that no finite sets have this property. Consequently, we can define infinite sets as just those which are self-nesting. Correspondingly, we can define finite
sets as just those which are not.
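For tiny sets the pigeonhole fact underlying this definition can even be checked by exhaustive search. The Python sketch below (my illustration; feasible only for very small sets, since it tries every map) confirms that no finite set is self-nesting:

```python
from itertools import combinations, product

def has_self_nesting(S):
    """Brute-force: can S be put into one-to-one correspondence with a
    proper subset of itself? Exhaustive search over all proper subsets T
    and all maps from S into T."""
    elems = list(S)
    for size in range(len(elems)):                 # proper subsets only
        for T in combinations(elems, size):
            # every function S -> T, encoded as a tuple of images
            for images in product(T, repeat=len(elems)):
                # an injective map here would be a correspondence with T
                if len(set(images)) == len(elems):
                    return True
    return False

# No finite set is self-nesting; by the definition above, that makes
# every finite set finite, and (per Theorem 10) every infinite set is.
assert not has_self_nesting({0, 1, 2})
```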
Note the neat turning of tables here. Infinite sets have the positive property of self-nesting; finite sets do not. Finitude is defined via negativa.[Note 11]
Charles Peirce in 1885, and Richard Dedekind in 1888, proposed to define infinity through self-nesting.[Note 12] According to this proposal, we don't know that infinite sets are self-nesting
because of some proof; we know it because infinite sets are defined as those which are self-nesting. However, we can prove that the Peirce-Dedekind definition is equivalent to a more traditional
one by which we know infinitude rather than finitude via negativa,[Note 13] and for me that fact makes the controversy about logical priority or primacy merely scholastic.
What is not merely scholastic is that we have now reduced the question whether we have a positive idea of infinity to the question whether we have a positive idea of self-nesting. I suggest that
we do have such an idea, or can, if we study Cantor's transfinite arithmetic. In my own experience, to understand self-nesting at all is to understand it positively. I'm quite sure I don't
understand it via negativa or as the negation of something else like "the failure or impossibility of self-nesting". The failure or impossibility of self-nesting definitely carries for me the
status of a derivative idea, one that never comes to mind when I think about self-nesting unless I make a great effort.[Note 14]
I realize that my reason derives from my experience putting sets into one-to-one correspondence with proper subsets of themselves, and studying the works of others who have done the same;
therefore it begs the question somewhat. I'm saying that if you study the mathematics of infinite sets, this positive idea will come, although perhaps not quickly or in a form you could
communicate easily to those who have not undertaken a similar study. But if you haven't studied Cantor this looks like hand-waving. I know that cultists of every stripe say virtually the same
thing: Study the book of our inspirational founder and you too will see the light, and until then shut up with your criticisms.
So let me try to do better. I think I can show that we have a positive idea of infinity, in the form of self-nesting, and even that self-nesting can be made somewhat intuitive or visualizable. I
owe the following idea to Josiah Royce,[Note 15] one of the first philosophers to make use of Cantor's mathematics. Imagine a perfect map of England, say, somewhere in London. By a "perfect" map
I mean one which shows not only the cities and roads, but also the houses, furniture, pennies behind the sofa cushions, bacteria, quarks: in fact, every last particle of matter. Now if the map is
perfect in this sense and if it is located in London, then somewhere on the map there will be a perfect image of the map itself. Again, by "perfect image" I mean that every detail of the outer
map will appear on the inner map. But if this is true, then like a hall of mirrors the map within the map will also contain a perfect image of the map, and so on ad infinitum.
To use Royce's term, the map will be self-representing. Of course we can't actually make such a map, and it is useful to think of the reasons why. One obstacle in our way is the fact that the
pixels we must use are larger than the smallest particles of matter we wish to represent. It may seem that this fact would not stop us from making a perfect map of England, but only require that
the map be larger than England. But if the map were larger than England, then it could not be located inside England, and therefore could not be self-representing or "perfect" in our sense.
Another obstacle in our way is that we can only arrange a finite number of pixels to make a picture. Such a 'finitist' map could be self-representing only imperfectly; if it didn't represent
London as a mere dot, it would represent the map within London as a mere dot, or the map within the map within London. With only a finite number of pixels to use in composing our picture, we will
inevitably run out of pixels before we run out of information. This would not be a problem with an ordinary or non-self-representing map. If we had as many pixels at our disposal as there are
quarks in England, then we could (in principle) arrange them to make a perfect map of England down to the quark level even if the resulting map were larger than England. But once the map itself
is put inside England and becomes one of the landmarks to represent on the map, then to be perfect the map would have to be perfectly self-representing and therefore infinitely nested; suddenly
the number of pixels needed rises from finite to infinite.
Now what if we could make a picture using dimensionless points as pixels, and use an infinite number of them? It is strange and wonderful that Leibniz posits just these two conditions in the
Monadology of 1714. In that work he outlines a new atomic theory in which conventional atoms are replaced by monads, "the true atoms of nature".[Note 16] Monads differ from conventional atoms in
many ways, but the most important for our purposes is that they have zero size. They are dimensionless points. And of course there are infinitely many of them. This allows a set of monads to
represent England perfectly even if there are infinitely many particles smaller than quarks which would have to appear on the map. It also means that in Leibniz's world it is physically possible
for some chunk of matter to achieve perfect self-representation, the way England does in Royce's scenario. It might contain within itself a perfect representation of itself, and hence an infinite
series of nested microcosms. But it might do better still: it might be a perfect representation of the universe as a whole, including itself as one of the parts, and therefore contain infinitely
many nested perfect representations of itself and the universe. Leibniz thought this was not only possible, but that every chunk of matter of every size is a perfect mirror of the universe, of
itself, and of all the other perfect mirrors, in just this way.[Note 17]
You don't have to agree with Leibniz that the world is really set up this way; however, if truth is beauty, and beauty truth, then there is a lot to be said for the idea. You only have to admit
that you grasp his theory, or Royce's.[Note 18] If you do, then you grasp the essence of self-nesting, which is the essence of infinite cardinality. You need no longer approach it via negativa.
[Note 19]
Do We Experience Anything Infinite?
So we agree with Descartes that we do possess a positive idea of infinity. If Descartes is correct in his second thesis that we could not have obtained the idea from our finite experience and
creative resources, then we feel the pressure he felt to posit an infinite being. So let us face directly the question whether we experience anything infinite.
The words "infinite" and "infinity" are often used loosely in street English to suggest that we do experience infinities. For example, we may say that a film is infinitely clever, a coral reef has
an infinite variety of wildlife, a spouse has infinite patience, or that a vinyl upholstery cleaner has infinitely many applications. (That's why it's called a miracle product.) Before cameras
were automated, they had a focal-length setting called "infinity", presumably for photographing the arrow Lucretius shot into the edge of space. In these cases we speak loosely, and "infinity"
means very many or very large, perhaps indefinitely many or large.[Note 20] On a clear day the sky may seem infinitely deep, but it's really just a wild blue yonder, an indefinitely deep 'out
there'.[Note 21]
Do we ever experience something which is literally infinite? If time, space, or matter are infinitely divisible, then to experience a finite chunk of any one of them is to experience its infinity
of parts. Having said this, I would like to put to one side the question whether time, space, or matter really are infinitely divisible. Not only is it very thorny, it is unnecessary to answer
the question on the table. For even if time, space, and matter are infinitely divisible, we experience their infinite parts bundled into chunks most of whose parts are indiscernible to us. When a
movie runs at 24 frames per second, it appears continuous, its separate frames indiscernible to us. We certainly experience 24 chunked frames, but not the 24-ness, or even the finitude, of the
chunking. Once the eye is fooled into seeing continuity, the number of frames per second could increase to a billion, or to an infinite number, and we would not notice the difference.[Note 22]
This is the sense in which we could experience something infinite without experiencing its infinitude. Similarly, if time, space, and matter were continuous and infinitely divisible, then the
spectacle of life would be like a movie run at an (uncountably) infinite number of frames per second; but while we would experience expanses, durations, and objects with infinitely many parts,
we would not experience the infinitude of those parts.
As the movie shows, the same is true of finite divisibility. If my car has (say) 5,000 parts, I experience it as an object with many parts; but I don't experience the 5,000-ness of the parts.
Motion seems to introduce new issues. If I open a pair of scissors and close them again, then the blades produce an infinite number of different angles, and in a sense I saw them all. But when we
think about it we realize that we are dealing with the same issues all over again. First, an infinite number of distinct angles is produced only if time and space are both continuous; if either
one is composed of irreducible quanta, then only a finite number of angles is produced. Second, even if time and space are continuous, and the angles infinite, we don't experience the infinitude
of the angles. This is shown by the fact that we cannot tell from the experiment whether time and space are continuous; that is, we cannot tell whether we saw an infinite or merely a huge finite
number of distinct angles.
Similarly, if space is continuous, then walking any distance at all is to traverse an infinity of spatial units. Or if time is continuous, then it is to traverse an infinity of temporal units.
But even if so, we only experience the chunked, finite meter we traversed, in the chunked finite second, not the infinitude of dimensionless points inside them.[Note 23]
When Descartes said we experience nothing infinite, I think he meant that we see nothing infinite in any given scene, and nothing infinite in a lifetime of scenes. But how do we know this?
Because we only live a finite time? Actually, it depends on how you count. If you count in years or months or days or seconds, then yes, the duration of our lives spans only a finite number of
those units. But if we divide time into dimensionless points, such as points on a time line, then we live an infinite number of them and we would still do so even if we lived for only one second.
The same holds spatially within a given scene. Whether a scene is finite depends on how we divide it. No panorama covers an infinite number of miles or meters or nanometers. But every scene, even
a pinhead, covers an infinite number of dimensionless points of space. Hamlet was thinking of something else at the time, but he made this point very well when he said, "O God, I could be bounded
in a nutshell and count myself king of infinite space...."[Note 24]
Still, while the spatial points would be infinite, our experience would never notice or recognize their infinitude.
Past time might be infinite. But even if it is, living in the present would be like treading water over an infinite depth. We would not experience the infinitude except in the form of buoyancy
which could, of course, have a finite explanation. The time in which we exist may rest upon, and be continuous with, an infinite prior time, but we will never know whether this is so simply from
our experience of present time.
Performing an infinite number of tasks in finite time has always been a mathematician's dream. If I could count one number in half a second, the next number in the next quarter second, the next
number in the next eighth of a second, and so on, then I could count an infinite number of numbers in one second flat. So far nobody has managed to pull this off. However, a mathematician at Bell
Labs named Peter Shor has come close by showing that the kind of parallelism possible on a quantum computer is indefinitely large if not infinite.[Note 25] We could in effect perform an
infinite number of simultaneous computations using only finite hardware, allowing us to compute otherwise intractable functions. Shor proved that quantum indeterminacy makes this kind of
parallelism mathematically possible; but notably, it has not yet been realized in a physical machine.
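The counting scheme described above is just the geometric series 1/2 + 1/4 + 1/8 + ..., whose sum is exactly one. A quick Python sketch (mine) shows the partial sums converging to one second while every finite prefix of the supertask stays strictly under it:

```python
# Supertask timing: if counting the k-th number takes 1/2**k seconds,
# the total time for all infinitely many numbers is the geometric series
# 1/2 + 1/4 + 1/8 + ... = 1. Partial sums approach 1 from below.
def time_for_first(n):
    """Seconds consumed counting the first n numbers at the doubling rate."""
    return sum(1.0 / 2 ** k for k in range(1, n + 1))

assert time_for_first(1) == 0.5
assert time_for_first(10) < 1.0                 # every finite prefix fits
assert abs(time_for_first(60) - 1.0) < 1e-15    # the limit is one second
```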
An analog signal, as opposed to a digital signal, contains an infinite amount of information. But when we make an audio recording of a single piano keystroke, the digital nature of the molecules of
air carrying the waves, and the digital nature of the molecules of the magnetic coating on the tape, mean that we can preserve and send to the ear only a finite subset of the information which
the keystroke would have registered in a continuous medium.[Note 26] And even if we could hear the note played back after being perfectly recorded in a continuous medium, we would at best hear an
analog signal with an infinite amount of information in it; we would not experience the infinitude of that information.
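One concrete way to see the information loss in finite sampling (a minimal sketch, assuming Python): a 1 Hz sine wave sampled once per second is indistinguishable from the constant zero signal, because every sample lands on a zero crossing. Infinitely many distinct continuous signals are compatible with any finite list of samples.

```python
import math

# Sample sin(2*pi*t) at t = 0, 1, 2, ..., 9 (one sample per second).
samples = [math.sin(2 * math.pi * k) for k in range(10)]

# Every sample is (numerically) zero -- exactly what sampling the
# zero signal would give. The finite record cannot tell the two
# continuous signals apart.
assert all(abs(s) < 1e-8 for s in samples)
```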
This is precisely why Leibniz posits a continuous medium (a plenum of monads) rather than discrete molecular air to mediate causal influences like the propagation of sound waves.[Note 27] Leibniz
thinks we are continuously bombarded by an infinite amount of information from the universe at large, and that we register all of it, although not all of it consciously. This is his famous
doctrine of minute perceptions.[Note 28] Without going into its details here, we can at least see that it unabashedly implies that we do experience something infinite; in fact, we do so.
Until we got to Leibniz, there was a pattern in these examples. There are several ways in which the objects or theaters of our experience might be infinite. But we can't tell from our experience
whether they are or are not infinite, and this means at the very least that we don't experience their infinitude. By positing minute perceptions, Leibniz posits the experience of infinitely faint
influences. He admits, even insists, that not all of these experiences are conscious,[Note 29] but likewise insists that without them conscious experiences would not exist, just as finite line
segments would not exist without their constitutive dimensionless points.
Elegance is the chief reason to believe Leibniz's theory. After positing an infinite number of infinitesimal monads a priori, Leibniz surprises us by making the theory remarkably subtle and adept
at explaining the world and experience. If Kant is right, however, we should hesitate to affirm or deny infinities a priori.
In Kant's diagnosis, Leibniz fell victim to a natural, even rational temptation. It's extremely tempting to think that time, space, and matter really are, in themselves, apart from the
limitations on human knowledge, either infinitely divisible or finitely divisible. We may not know which one they are, and we might not perceive their internal infinitude if they are infinitely
divisible, but they must really be one way or the other. Kant argued that this is a mistake; in fact, this assumption leads to a special kind of contradiction which he called an antinomy.[Note
30] It also leads to contradiction or antinomy to assume that past time is really either infinite or finite, or that space is really either infinite or finite.[Note 31] There are two reasons,
briefly, why these assumptions lead to contradiction: first, they treat the world as a thing in itself, rather than as a phenomenon partly constituted by the act of knowing it; second, they are a
priori claims, based on no empirical evidence, and the opposite a priori claims are equally compelling to reason. Kant concludes that to avoid these contradictions, we must regard the extent of
space, the depth of past time, and the divisibility of time, space, and matter as indeterminate. We know them as far as we have inquired into them, and tomorrow we may know more. We must speak of
the world (time, space, matter) as growing in extent, duration, and divisibility as we find it to be larger, older, or finer; to say that the world consists of something in and of itself which
fixes its size, age, and ultimate particles is a demand of reason but ultimately a contradictory one. This is one place where reason must be reined in, disciplined, or subject to critique.
What follows from all this for Kant is the strange-sounding doctrine that in its spatial extent, temporal duration, and material divisibility, the world is neither finite nor infinite.[Note 32]
For myself, I find that I am attracted to the view that time and space are continuous; at the same time I suspect that the question whether time and space are continuous cannot be settled
empirically. When I am inclined to soar in the sky of unfettered conjecture, I am attracted by the elegance of Leibniz's theory of minute perceptions, which arguably follows from the view that
time, space, and matter are continuous; when I am inclined to discipline my conjectures and hold them inside the bounds of verification, I heed Kant's admonition. I'm no closer to a resolution
than this.
So if we have a positive idea of the infinite, how do we obtain this idea? We make this question harder to answer, not easier, if we say that the world is neither finite nor infinite, or that if
it is infinite, then we do not experience its infinitude. My disappointing, pedestrian answer is that we may not possess the positive idea of infinitude until we study self-nesting, and during
that study, we get the positive idea of infinitude from the exercise of putting an infinite set into one-to-one correspondence with one of its proper subsets. This exercise, I should add, is a
finite experience. We take the first few even numbers, 2, 4, 6..., for example, and pair them off against the first few natural numbers, 1, 2, 3.... We know that each sequence is rule-governed,
because we know exactly how to generate the next member of each. Hence, we know that the nth member of one sequence will have a partner in the nth member of the other, no matter how large n is,
or no matter how far out we take the sequences. This is the finitistic way to put infinite sets into one-to-one correspondence. But if one set is the proper subset of the other, then we have
established self-nesting, which is impossible for finite sets. Until we undertake this exercise, and think about what it means, our notion of the infinite may well be nothing more than the
negation of the idea of finitude.
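The pairing exercise described above can be sketched in a few lines (a minimal illustration, assuming Python; the function name is mine). The rule n ↦ 2n matches every natural number with a distinct even number, with nothing left over on either side, even though the evens are a proper subset of the naturals:

```python
# Pair the first k natural numbers 1, 2, 3, ... with the first k
# even numbers 2, 4, 6, ... via the rule n -> 2n.
def pair_naturals_with_evens(k):
    return [(n, 2 * n) for n in range(1, k + 1)]

pairs = pair_naturals_with_evens(10)

# The rule is one-to-one: distinct naturals get distinct evens.
evens = [e for (_, e) in pairs]
assert len(set(evens)) == len(evens)

# Every even number up to 2k is hit, so the nth member of one
# sequence always has a partner in the nth member of the other,
# no matter how large n is. Yet {2, 4, 6, ...} is a proper subset
# of {1, 2, 3, ...} -- the self-nesting impossible for finite sets.
assert evens == list(range(2, 21, 2))
```

As the text notes, running this for any finite k is itself a finite experience; what we grasp is that the rule works for every n.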
While we do not experience the infinitude of time, space, or matter, even if they are infinite in extent or divisibility, neither do we experience large finite magnitudes. I've seen estimates of
the number of sub-atomic particles in the universe ranging from 10^65 to 10^85. But to be conservative, let's say that nothing in the universe, including the universe itself, has more than 10^100
parts. The name for 10^100, or 1 followed by 100 zeroes, is a googol. So even if there are more than a googol of ultimate particles, it's fair to say that no collection of physical objects that we
have ever experienced (grains of sand on a beach, snowflakes in a storm, stars in the sky) has more than a googol of members.[Note 33] If true, then we did not obtain our idea of a googol from
experience. But it does not follow that we must posit a very large finite being (Googolzilla) to be the source of our idea. We know exactly what a googol is as a concept, even if we have never
experienced it manifest in a sensation or image. We can list the million natural numbers which are its closest neighbors, we can do arithmetic with it, and we know infallibly whether an arbitrary
natural number is larger or smaller than it. If we may export the lesson of this to the infinite, then we may suggest that while we have no experience of the infinitude of anything, we have a
perfectly good concept of infinity, and that the ultimate explanation of this fact lies not so much in anything special about infinity as in the distinction between concepts and images.
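The claims about the googol are easy to make concrete (a sketch, assuming Python, whose integers are arbitrary-precision): we can write the number down exactly, do arithmetic with it, and infallibly compare any natural number against it, despite never having perceived a googol of anything.

```python
googol = 10 ** 100

# 1 followed by 100 zeroes.
assert str(googol) == "1" + "0" * 100
assert len(str(googol)) == 101

# Exact arithmetic with it poses no difficulty ...
assert googol + 1 - 1 == googol
assert googol * googol == 10 ** 200

# ... and we know infallibly whether an arbitrary natural number
# is larger or smaller than it (here, the high-end particle
# estimate from the text).
assert 10 ** 85 < googol < 10 ** 101
```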
The Sublimity of the Infinite
I am profoundly grateful that understanding infinity does not deprive it of its majesty. If the infinite were only interesting because of the paradoxes it generates, and the absorbing academic
issues raised by the need to resolve them, then it would not be studied any more than self-reference, a prolific but more pedestrian engine of paradox. But the infinite is also majestic, one
might say infinitely majestic.
An hour under a clear sky at night, looking up, gives some sense of this. The depth of space is a wild blue yonder, not a true, perceived infinity.[Note 34] But it inspires contemplation of the
true infinite, and the slightest brush with that idea is breath-taking, invigorating, expanding, lifting, calming, but also agitating, alluring, but also distant and magnificently indifferent.
One reason to study mathematics is that you can get these feelings in broad daylight or indoors.
There are many ways to become precise about these feelings, and many ways to praise and honor the infinite. I'd like to use Kant's term: it is sublime.[Note 35]
Just for comparison, Cantor had a different set of numinous feelings about the infinite. He was not only a great mathematician, but a very religious man and by some standards a mystic. Yet his
mysticism was supported by his mathematics, which to him was at least as strong an argument for the mathematics as for the mysticism.[Note 36] Apart from claiming divine inspiration for his work,
we don't know exactly what spiritual views he linked to his mathematics, but his theorems[Note 37] give support to the following. Measured in meters, we are tiny specks compared to the universe
at large. But measured in dimensionless points, we are as large as the universe: a proper subset, but one with the same cardinality as the whole. Similarly, measured in meters, we may be off in a
corner of the universe. But measured in points, the distance is equally great in all directions, whether the universe is finite or infinite; that puts us in the center, wherever we are. Measured in
days, our lives are insignificant hiccups in the expanse of past and future time. But measured in points of time, our lives are as long as the universe is old. We are as small as we seem, but
simultaneously, by a most reasonable measure, co-extensive with the totality of being in both space and time. This is truly (as Blake put it) "[t]o see the world in a grain of sand and a heaven
in a wild flower, hold infinity in the palm of your hand and eternity in an hour."[Note 38]
Kant's theory of the sublime does not rest on these Cantorian theorems. His chief thesis for our purposes is that, "That is sublime in comparison with which everything else is small."[Note 39]
Clearly the infinitely large is a perfect fit for this definition.[Note 40]
The sublime is not an easy notion, and the best approach to it may be via negativa, showing how it differs from something familiar, the beautiful. Sticking only to those differences which bear
most on the sublimity of the infinite, Kant says that the beautiful concerns a bounded object while the sublime object can be unbounded; the beautiful is compatible with charms while the sublime
is not; the beautiful attracts the mind while the sublime both attracts and repels it; and the beautiful "seems as it were predetermined for our power of judgment" while the sublime is
"incommensurate with our power of exhibition, and as it were violent to our imagination, and yet we judge it all the more sublime for that."[Note 41]
The infinitely large meets these criteria almost by design. The infinitely large is unbounded, incommensurate with our powers of imagination, and to engage and satisfy us it no more needs charm
than spring water needs sugar. It is so large that some of its proper subsets are just as large, a property shared by no finite magnitude.
What triggers the feeling of the sublime most is immensity. Immensity in turn makes us feel a tension between two aspects of ourselves. On the one hand it makes us feel the inadequacy of our
senses and imagination. On the other it makes us feel that there is more to us than senses and imagination, whose adequacy cannot be brought into question by immensity, no matter how spectacular
or infinite. This second dimension of ourselves is not conception but moral vocation. While physically the immensity dwarfs us into insignificance, this very fact highlights that within us which
is not dwarfed. As long as we are physically safe when viewing the sublime immensity, Kant argues, it helps us know our moral dignity and nonphysical invulnerability to be undiminished, even
accentuated, by our forceful acknowledgement of our physical smallness and frailty.[Note 42]
Properly understood, the idea of a completed infinity is no longer a problem in mathematics or philosophy. It is perfectly intelligible and coherent. Perhaps it cannot be imagined but it can be
conceived; it is not reserved for infinite omniscience, but knowable by finite humanity; it may contradict intuition, but it does not contradict itself. To conceive it adequately we need not
enumerate or visualize infinitely many objects, but merely understand self-nesting. We have an actual, positive idea of it, or at least with training we can have one; we are not limited to the
idea of finitude and its negation. In fact, it is at least as plausible to think that we understand finitude as the negation of infinitude as the other way around. The world of the infinite is
not barred to exploration by the equivalent of sea monsters and tempests; it is barred by the equivalent of motion sickness. The world of the infinite is already open for exploration, but to
embark we must unlearn our finitistic intuitions which instill fear and confusion by making some consistent and demonstrable results about the infinite literally counter-intuitive. Exploration
itself will create an alternative set of intuitions which make us more susceptible to the feeling which Kant called the sublime. Longer acquaintance will confirm Spinoza's conclusion that the
secret of joy is to love something infinite.[Note 43]
Mark Twain came to love mathematics as an adult and always regretted that he didn't have a stronger foundation for it. He once said that if he could live forever, he'd spend 8,000 years studying
mathematics. I've never been able to decide whether this remark shows his wit or his weak foundation in mathematics. If he could live forever, then he could spend infinitely many years studying
mathematics, and have infinitely many years left over for other pursuits. That's the way I'd like to do it.
Blake, William. The Viking Portable Blake. Ed. Alfred Kazin. Viking Press, 1946.
Cantor, Georg. Contributions to the Founding of the Theory of Transfinite Numbers. Trans. Philip E.B. Jourdain. Dover Publications, 1955. (Translation originally published 1915.)
Copeland, Jack. Artificial Intelligence: A Philosophical Introduction. Basil Blackwell, 1993.
Dauben, Joseph Warren. Georg Cantor: His Mathematics and Philosophy of the Infinite. Princeton University Press, 1979.
Dedekind, Richard. Essays on the Theory of Numbers. Trans. Wooster Woodruff Beman. Dover Publications, 1963. (Translation originally published 1901.)
Dedekind, Richard. Was sind und was sollen die Zahlen? 6th ed., Braunschweig, 1930 (original 1888).
Descartes, René. Philosophical Essays. Trans. Laurence J. Lafleur. Bobbs-Merrill, 1964. (Discourse on Method, original 1637; Meditations, original 1641.)
Fraenkel, Abraham A. Abstract Set Theory. North-Holland Pub. Co., 1953.
Hofstadter, Douglas R. Gödel, Escher, Bach: an Eternal Golden Braid. Basic Books, Inc., 1979.
Galilei, Galileo. Dialogues Concerning Two New Sciences. Trans. Henry Crew and Alfonso de Salvio. Dover Publications, 1954 (original trans. 1914; original work 1638.)
Kant, Immanuel. Critique of Judgment. Trans. Werner S. Pluhar. Hackett Pub. Co., 1987 (original 1790).
Kant, Immanuel. Critique of Pure Reason. Trans. Norman Kemp Smith. St. Martin's Press, 1968 (original 1781).
Kant, Immanuel. Foundations of the Metaphysics of Morals. Trans. Lewis White Beck. Bobbs-Merrill Co., 1959 (original 1785).
Kant, Immanuel. Observations on the Feeling of the Beautiful and Sublime. Trans. John T. Goldthwait. University of California Press, 1960 (original 1763-64).
Keyser, C.J., "Charles Sanders Peirce as a Pioneer," Galois Lectures (Scr. Math. Library, No. 5, 1941) 87-112
Kleene, Stephen Cole. Introduction to Metamathematics. North-Holland Pub. Co., 1988 (original 1952).
Kleene, Stephen Cole. Mathematical Logic. John Wiley & Sons, 1967.
Leibniz, Gottfried Wilhelm. New Essays on Human Understanding. Trans. Peter Remnant and Jonathan Bennett. Cambridge University Press, 1981 (original 1704).
Leibniz, Gottfried Wilhelm. Philosophical Papers and Letters. Ed. and Trans. by Leroy E. Loemker. D. Reidel Publishing Co., second ed., 1969. (Discourse on Metaphysics, original 1686; Monadology,
original 1714.)
Lloyd, Seth, "Quantum Mechanical Computers," Scientific American, (October 1995), pp. 140-145.
Locke, John. An Essay Concerning Human Understanding. Ed. Peter H. Nidditch. Oxford University Press, 1975 (original 1690).
Maor, Eli. To Infinity and Beyond: A Cultural History of the Infinite. Princeton University Press, 1991 (original 1987).
Moore, A.W. The Infinite. Routledge, 1990.
Peirce, Charles Sanders. Collected Papers. Ed. Charles Hartshorne and Paul Weiss. Vol. III, 1933.
Pickover, Clifford A. Keys to Infinity. John Wiley & Sons, 1995.
Royce, Josiah, "The One, The Many, and the Infinite," Supplementary Essay in his The World and the Individual, First Series, Dover Publications, 1959 (original 1899), pp. 473-588.
Rucker, Rudy. Infinity and the Mind: The Science and Philosophy of the Infinite. Birkhäuser, 1982.
Rucker, Rudy, "One of George Cantor's Speculations on Physical Infinities," Speculations in Science and Technology, (October 1978) pp. 419-421.
Shor, Peter W., "Algorithms for Quantum Computation: Discrete Logarithms and Factoring," Proceedings of the 35th Annual Symposium on the Foundations of Computer Science, IEEE Computer Society, 1994, pp.
124ff. This paper is available by FTP from the menu of papers at the Quantum Information Page, http://vesta.physics.ucla.edu/~smolin/.
Shakespeare, William. Hamlet. Ed. Edward Hubler. New American Library, 1963 (original 1600).
Spinoza, Benedict. Ethics, Treatise on the Emendation of the Intellect, and Selected Letters. Trans. Samuel Shirley. Hackett, 1992. (Treatise, original date unknown, posthumously published;
Ethics, original 1675.)
Vilenkin, N. Ya. In Search of Infinity. Trans. Abe Shenitzer. Birkhäuser, 1995.
1. Locke argued for this verdict of intuition thus: "[I]f a Man had a positive Idea of infinite...he could add two Infinites together; nay, make one Infinite infinitely bigger than another,
Absurdities too gross to be confuted." Locke, Essay Concerning Human Understanding (1690), at p. 222. [Resume]
2. Galileo, Dialogues Concerning Two New Sciences (1638) at 31-33. [Resume]
3. Here I mean the classical mathematical objections. In this paper I put to one side theological objections such as that completed infinities contradict the doctrine that God is both infinite
and unique. [Resume]
4. See Moore, The Infinite (1990), at p. 48. Many ancient and medieval scholars, however, accepted the view that infinite sets permit self-nesting; Kleene, Mathematical Logic (1967) at p.
176.n.121 cites various authors who point to Plutarch in the first century of the common era, Proclus in the fifth, Adam of Balsham in the twelfth, and Robert Holkot in the fourteenth. [Resume]
5. When a theory of lesser virtue is opposed by intuition, the remedy is not as clear. For example, when Zeno argued through his four paradoxes that motion and change were impossible, and hence
illusory, his conclusions were opposed by everyone's intuitions about the reality of motion and change. In this case it's not clear whether we should trust Zeno's logic more than our intuitions,
or vice versa. [Resume]
6. Descartes, Meditations (1641) at pp. 126-127 (Meditation VI). [Resume]
7. Descartes, Discourse on Method (1637) at p. 28 (Fourth Discourse); see also his Meditations (1641) at pp. 64, 69, 71, and 73. [Resume]
8. Descartes, Meditations (1641), at pp. 101-102 (Meditation III). Note that this argument would work just as well with very large finite magnitudes. [Resume]
9. Descartes, Meditations (1641), at p. 102 (Meditation III). [Resume]
10. Descartes, Discourse on Method (1637), at p. 26 (Discourse IV), and Meditations (1641), at p. 103. [Resume]
11. Descartes, who thought we had a positive and not merely a negative idea of the infinite, draws the same conclusion: "[M]y notion of the infinite is somehow prior to that of the finite...."
Descartes, Meditations (1641), at p. 102 (Meditation III). [Resume]
12. Peirce, Collected Papers (1885), at pp. 210-249, 360; and Dedekind, Essays on the Theory of Numbers (1888), at p. 109 (theorem 160). Bernard Bolzano may have been the first to suggest this
idea in his Paradoxien des Unendlichen, Section 20, published posthumously in 1851. [Resume]
13. For a proof that the Peirce-Dedekind ("reflexive") definition of infinity is equivalent to a more traditional ("inductive") one, see Fraenkel, Abstract Set Theory (1953), at pp. 41-42. [Resume]
14. One might argue that "the failure or impossibility of self-nesting" is simply a negative way of describing Euclid's positive principle that the whole is always greater than its (proper)
parts, and that therefore the idea of self-nesting is equivalent to the negation of the positive Euclidean idea. While this is true, it remains the case that self-nesting, at least after Cantor,
has taken on a positive life of its own and may be thought in its own terms, directly, and no longer as the mere failure of the Euclidean logic of parts and wholes. [Resume]
15. See Royce, "The One, The Many, and the Infinite," (1899), esp. pp. 503-507. [Resume]
16. Leibniz, Monadology (1714), at §3. For Cantor's physical speculations on similar topics, including his views on mass-monads (which were infinite but not continuous) and aether-monads (which
were infinite and continuous), see Rudy Rucker's translation of Cantor in Rucker, "One of George Cantor's Speculations on Physical Infinities," (1978). Cantor's views are briefly summarized in
Rucker, Infinity and the Mind (1982), at p. 90. [Resume]
17. Leibniz, Discourse on Metaphysics (1686), at §§8-9, 14, and Monadology (1714), at §§62-68. [Resume]
18. Leibniz is not alone in arguing for the truth of this vision, as opposed to its mere possibility or consistency. Royce, "The One, The Many, and the Infinite," (1899), at pp. 538-554 argues
that the entire "realm of reality" is a self-representing system, just like England conceived as the home and subject of its own perfect map. [Resume]
19. The positive idea of self-nesting not only frees us from the indirectness and incompleteness of knowing infinity via negativa, but as a bonus it decisively answers one line of objections to
the idea of a completed infinite. This line of objections asserts that the very idea of a completed infinite is unattainable by finite human beings, or incoherent and contradictory, or
meaningless. The positive idea of infinity, if it exists and we possess it, and its consistency, are standing refutations to this line of thought. [Resume]
20. The examples show that sometimes we want terms of indefinite largeness rather than infinitude. That is why the American Indian expression that a promise will hold as long as the grass grows
and the rivers flow is more accurate and credible than a declaration for eternity, even if it is still an overstatement. [Resume]
21. When Kant speaks of the human person as a being of "infinite worth", is this another figurative or exaggerated use of the term "infinite"? A tool may be used as a means to an end, and nothing
more, without violating its dignity; the reason is that a tool has only "finite worth". As Kant is wont to say, a tool has a price, while a person has a dignity. Kant, Foundations (1785) at p.
53. If we measure the "worth" of these entities with a unit of finite size, such as the dollar, then the tool has finite worth. But it's not clear whether the person has infinite worth or whether
the person is beyond measure the way she is beyond price. To say that a person is worth an infinite number of dollars may be as much a category mistake as to say she is worth a finite number of
dollars, and just as far from capturing Kant's meaning. This is why I don't use the human person as an example of something we experience which is literally infinite. [Resume]
22. At 18 frames per second, old silent films look jerky. The jerkiness alerts us to the fact that we are viewing a rapid succession of frames, not a continuously changing image. But our ability
to discern 1/18th second intervals of time, and see the jerks, is not the same as the ability to discern that we are seeing 18, rather than 17 or 19, frames per second. It is, however, enough to
tell us that we are experiencing a finite number of frames per second. But once the speed increases to the point where the jerkiness disappears, and the appearance of continuity sets in, we
cannot know whether the underlying pace of frames is infinite or finite but huge. [Resume]
23. Several of Zeno's paradoxes of motion are best solved by using the commonplace notion of the calculus that we can traverse an infinite number of spatial units in a finite time. Note, however,
that those who object to the use of completed infinities cannot answer Zeno in this way, for it is to appeal to a completed infinity of spatial units successfully traversed. [Resume]
24. Shakespeare, Hamlet, Act II, Scene II, line 258. The quotation continues: "...were it not that I have bad dreams." [Resume]
25. Peter Shor, "Algorithms for Quantum Computation: Discrete Logarithms and Factoring," (1994). See also Seth Lloyd, "Quantum Mechanical Computers," (1995). [Resume]
26. One unexpected reason why this matters is that if potential brain inputs through the senses are only finite, then artificial intelligence is definitely possible. That is, we could in
principle create a computable function that duplicated the brain's operation flawlessly. Whether AI is possible when potential brain inputs are infinite is still unsettled. See Copeland,
Artificial Intelligence (1993), pp. 233-238. [Resume]
27. Leibniz, Monadology (1714), at §§8, 61-62. [Resume]
28. Leibniz, New Essays (1704), at pp. 53-58. [Resume]
29. In my view, Leibniz is the first thinker to posit unconscious experience. It is important, then, that his theoretical motivation is not to explain memory, dream, or neurosis, but the
infinitely small sensory influences that constitute all sensation and the infinitely large number of sensory experiences. [Resume]
30. Kant, Critique of Pure Reason (1781), at B.462. [Resume]
31. Kant, Critique of Pure Reason (1781), at B.454. [Resume]
32. Kant, Critique of Pure Reason (1781), at B.533. The world would be either finite or infinite if it were a thing in itself, B.532.
Here is one way to paraphrase Kant's view here. There is no empirical way to ascertain whether time and space are infinite, or whether time, space, or matter are infinitely divisible. So on
empirical grounds we can say neither that they are infinite nor that they are finite. To try to decide these questions on a priori grounds is precisely what leads to contradiction. Hence on a
priori grounds as well we can say neither that they are infinite nor that they are finite. [Resume]
33. Even if this is not true of a googol, it is true of 10^googol. The point is that there is some large finite number which is larger than the cardinality of any collection we've ever
experienced. [Resume]
34. Kant, Critique of Judgment (1790), at p. 124: "[T]he infinite...for sensibility is an abyss." Cf. pp. 115, 130. [Resume]
35. In this section I will speak only of the infinitely large. [Resume]
36. See Dauben, Georg Cantor (1979), at pp. 288-291, 294-297. [Resume]
37. See Theorems 12, 13, 18, 20, 21, 23, and 24 in the Appendix. [Resume]
38. William Blake, Auguries of Innocence, lines 1-4, in Viking Portable Blake (1946) at p. 150. Also see his Marriage of Heaven and Hell: "If the doors of perception were cleansed every thing
would appear to man as it is, infinite. For man has closed himself up, till he sees all things thro' narrow chinks of his cavern," ibid. at p. 258. [Resume]
39. Kant, Critique of Judgment (1790), at p. 105. The italics are Kant's. [Resume]
40. Kant, Critique of Judgment (1790), at p. 114: "The infinite, however, is absolutely large (not merely large by comparison). Compared with it everything else...is small" at least if
"everything else" is limited to finitely large objects. Here Kant mistakenly assumes that all infinities are equal, a common mistake before Cantor. If one were larger than another, then the
latter, although infinite, would indeed be small in comparison with something. In Kant's defense we may offer Moore's view that Kant was one of the first thinkers to acknowledge that it is no
contradiction to suppose that one infinity can be larger than another; Moore, The Infinite (1990) at p. 90. (Moore does not make clear on which passages in Kant he bases his reading.) [Resume]
41. Kant, Critique of Judgment (1790), at pp. 98-99. In an earlier work Kant says the sublime brings "enjoyment" but sometimes with "horror", while the beautiful is a "pleasant sensation but one
that is joyous and smiling"; "[n]ight is sublime, day is beautiful"; the sublime "moves", the beautiful "charms"; the face of a person feeling the sublime is "earnest, sometimes rigid and
astonished" while the face of a person experiencing the beautiful shows "shining cheerfulness [and]...smiling features"; Observations on the Feeling of the Beautiful and Sublime (1763-64), at p.
47. [Resume]
42. These mixed feelings are in tension. Unlike the beautiful, the sublime does not yield pleasure. Because the mind is both attracted and repelled, it responds more with admiration than liking,
which Kant calls a "negative pleasure", Critique of Judgment (1790), at p. 98; cf. pp. 129, 131. It includes a note of displeasure, with our inadequate sensory and imaginative resources, pp. 114,
116, leading Kant to call it "a pleasure that is possible only by means of a displeasure", p. 117. [Resume]
43. Spinoza, Treatise on the Emendation of the Intellect, at p. 235. [Resume]
arXiv:1101.0586v6 [hep-th] 3 Sep 2011
Predictions of a fundamental statistical picture
Roland E. Allen
Department of Physics and Astronomy
Texas A&M University, College Station, Texas 77843, USA
A picture is presented in which standard physics and its extensions are obtained from statistical counting and stochastic fluctuations, together with the geography of our particular universe in D dimensions. The inescapable predictions include supersymmetry, SO(d) grand unification, Higgs-like bosons, vanishing of the usual cosmological constant, nonstandard behavior of scalar bosons, and Lorentz violation at extremely high energies.
For a theory to be viable, it must be mathematically consistent, its premises must lead to testable predictions, and these predictions must be consistent with experiment and observation. Here we will present a theory which appears to satisfy these requirements, but which starts with an unfamiliar point of view: There are initially no laws, and instead all possibilities are realized with equal probability. The observed laws of Nature are emergent phenomena, which result from statistical counting and stochastic fluctuations, together with the geography (i.e. specific features) of our particular universe in D dimensions.
Computing Dictionary
fractal definition
mathematics, graphics
A fractal is a rough or fragmented geometric shape that can be subdivided in parts, each of which is (at least approximately) a smaller copy of the whole. Fractals are generally self-similar (bits
look like the whole) and independent of scale (they look similar, no matter how close you zoom in).
Many mathematical structures are fractals; e.g. the Sierpinski triangle, Koch snowflake, Peano curve, Mandelbrot set, and Lorenz attractor. Fractals also describe many real-world objects that do not have simple geometric shapes, such as clouds, mountains, turbulence, and coastlines.
Benoit Mandelbrot, the discoverer of the Mandelbrot set, coined the term "fractal" in 1975 from the Latin fractus ("broken"). He defines a fractal as a set for which the Hausdorff-Besicovitch dimension strictly exceeds the topological dimension. However, he is not satisfied with this definition as it excludes sets one would consider fractals.
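The escape-time test behind pictures of the Mandelbrot set is short enough to sketch here (an illustrative Python snippet, not part of the dictionary entry): a point c belongs to the set if the iteration z → z² + c never escapes the disk of radius 2.

```python
def mandelbrot(c, max_iter=100):
    """Escape-time membership test: iterate z -> z*z + c starting from 0.
    If |z| ever exceeds 2, c is definitely outside the Mandelbrot set."""
    z = 0
    for _ in range(max_iter):
        z = z * z + c
        if abs(z) > 2:
            return False
    return True  # did not escape within max_iter: treat as inside
```

Points such as c = 0 and c = -1 stay bounded forever, while c = 1 escapes after a few steps; coloring a grid of c values by escape iteration count produces the familiar self-similar images.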
sci.fractals FAQ (ftp://src.doc.ic.ac.uk/usenet/usenet-by-group/sci.fractals/).
See also fractal compression, fractal dimension, Iterated Function System.
newsgroups: news:sci.fractals, news:alt.binaries.pictures.fractals, news:comp.graphics.
["The Fractal Geometry of Nature", Benoit Mandelbrot].
[Are there non-self-similar fractals?]
February 27th 2008, 09:43 PM #1
Feb 2008
Please help me with these 2 questions.
1. A party of $n$ men is to be seated round a circular table. Find the odds against the event that two particular men sit together.
2. X and Y stand in a ring with 12 other persons. If the arrangement of the 14 persons is at random, find the chance that there are exactly 5 persons between X and Y.
Seat everyone except one of the two particular men around the table. There are now n-1 places where the remaining man can sit, and two of these are next
to the other particular man. So the required probability is 2/(n-1) (at least
if n >= 4; if n < 4 the probability is 1)
So the required probability is $2/(n-1)$
That is, the odds against the event is $(n-1):2$. Is this right?
That is not the standard definition of odds used in statistics.
In statistics the odds of an event are defined to be $p/(1-p)$, so the odds of
the event not happening are $(1-p)/p$ .
or: $(n-3):2$.
Though I believe that in some parts of the gaming industry odds
are defined differently, UK bookies use them in the sense that statisticians
use them, or at least they did when I was a child.
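The 2/(n-1) answer for question 1 is easy to sanity-check with a quick simulation (illustrative Python, not from the thread):

```python
import random

def prob_adjacent(n, trials=100_000):
    """Estimate the probability that two particular people end up in
    adjacent seats when n people sit at random around a circular table."""
    hits = 0
    for _ in range(trials):
        seats = list(range(n))
        random.shuffle(seats)          # seats[i] = seat number of person i
        a, b = seats[0], seats[1]      # the two particular people
        if (a - b) % n in (1, n - 1):  # neighbours around the circle
            hits += 1
    return hits / trials
```

For n = 6 this returns roughly 0.4, matching 2/(n-1) = 2/5, so the odds against the event are (n-3):2 = 3:2, in agreement with the correction above.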
Roslyn Heights Calculus Tutor
Find a Roslyn Heights Calculus Tutor
...This involves learning how to strategize learning. Let me be your guide and companion in your next academic journey and you will find the trip far easier and more pleasant than you imagined!I
have taught algebra techniques not only as a topic on its own but also in conjunction with the physical ...
50 Subjects: including calculus, chemistry, physics, geometry
...I studied and worked for a major airline in Japan for almost 10 years, then I came to New York and finished my MBA degree in Finance. Therefore, I am fluent in Mandarin, Japanese and English. I
am also proficient to teach math to children up to 7th grade because I have tutored my own nieces so ...
6 Subjects: including calculus, Japanese, Chinese, precalculus
...In math, everything you learn builds on top of what you learned in previous years, and without that strong foundation, students can fall behind. When teachers explain something in class they
assume that the students have a certain knowledge about math based on what they learned in previous years...
21 Subjects: including calculus, geometry, statistics, accounting
...I have lead many study sessions for fellow students in many of my classes, all of which received higher test grades based on the study sessions we had. I enjoy teaching others and have been
told that the way I explain material and concepts are very understandable. I believe that it is important...
11 Subjects: including calculus, physics, geometry, accounting
...I have studied many areas of both analytical and phenomenological philosophy, which specific interest in life and death, the self, aesthetics, and philosophy of law. Since my other major was
mathematics, I also feel very comfortable with the principles of logic. While majoring in math I covered probability in many of the courses I took, but it was most prominent in discrete
22 Subjects: including calculus, geometry, trigonometry, algebra 2
Sample Problem – Insulation thickness calculation for a pipe
Sample Problem Statement
Determine the minimum insulation thickness required for a pipe carrying steam at 180^0C. The pipe size is 8″ and the maximum allowable temperature of outer wall of insulation is 50^0C. Thermal
conductivity of the insulation material for the temperature range of the pipe can be taken as 0.04 W/m·K. The heat loss from steam per meter of pipe length has to be limited to 80 W/m.
Solution to this sample problem is quite straightforward as demonstrated below.
As per EnggCyclopedia's heat conduction article, for radial heat transfer by conduction across a cylindrical wall, the heat transfer rate is expressed by the following equation:
Q = 2πkN(T[2] - T[1]) / ln(r[2]/r[1])
For the given sample problem,
T[1] = 50^0C
T[2] = 180^0C
r[1] = 8″ = 8 × 0.0254 m = 0.2032 m
k = 0.04 W/m·K
N = length of the cylinder
Q/N = Heat loss per unit length of pipe
Q/N = 80 W/m
Hence, inserting the given numbers in the radial heat transfer rate equation from above,
80 = 2π × 0.04 × (180-50) ÷ ln(r[2]/0.2032)
ln(r[2]/0.2032) = 2π × 0.04 × (180-50) / 80 = 0.4084
Hence, r[2] = r[1] × e^0.4084
r[2] = 0.2032 × 1.5044 = 0.3057 m
Hence, insulation thickness = r[2] – r[1]
thickness = 305.7 – 203.2 = 102.5 mm
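The same calculation can be scripted; the following is a sketch of the steps above (the function name and variable names are my own, not EnggCyclopedia's):

```python
import math

def insulation_thickness(q_per_len, k, t_hot, t_surf, r_inner):
    """Minimum insulation thickness (m) so that radial conduction loss per
    unit pipe length stays at q_per_len (W/m), from
    Q/N = 2*pi*k*(t_hot - t_surf) / ln(r_outer / r_inner)."""
    r_outer = r_inner * math.exp(2 * math.pi * k * (t_hot - t_surf) / q_per_len)
    return r_outer - r_inner

# Sample problem values: 80 W/m, k = 0.04 W/m·K, 180 -> 50 degC, r1 = 0.2032 m
thickness = insulation_thickness(80, 0.04, 180, 50, 0.2032)  # ~0.1025 m
```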
Some margin should be taken on the insulation thickness because if the conductive heat transfer rate happens to be higher than the convective heat transfer rate outside the insulation wall, the outer
insulation wall temperature will shoot up to higher values than 50^0C. Hence conductive heat transfer rate should be limited to lower values than estimates used in this sample problem. The purpose of
this sample problem is to demonstrate radial heat conduction calculations and practical calculations of insulation thickness also require consideration of convective heat transfer on the outside of
insulation wall.
in relation to convergent/divergent integrals
March 9th 2009, 02:53 PM #1
Feb 2009
in relation to convergent/divergent integrals
i was told that to be "convergent" an integral had to evaluate to a number
but the question says "determine if convergent or divergent; if convergent, evaluate"
so i'm just wondering..how does one determine if something is convergent WITHOUT evaluating it? is there something i'm missing?
Are we talking about integrals or series?
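For what it's worth (not part of the thread), the usual way to decide convergence without evaluating is a comparison test: bound the integrand by one whose improper integral is known. For example, $0 \le \frac{1}{x^2 + e^x} \le \frac{1}{x^2}$ for $x \ge 1$, and since $\int_1^\infty x^{-2}\,dx = 1$ converges, $\int_1^\infty \frac{dx}{x^2 + e^x}$ converges too, even though it has no elementary antiderivative. Divergence works the same way with a lower bound.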
Homework Help
Recent Homework Questions About Physics
let h = mass of hot dogs p = mass of potato salad l = mass of lemonade solve for x, position of lemonade 52/2 = (p*0 + l*x + h*52)/(p + l + h) 52/2 - x = distance from center and it will be towards
the hot dogs
Sunday, April 6, 2014 at 11:37pm
Discrete Math
There are 150 students taking Discrete Mathematics II, Calculus II, and Physics I courses. Of these 51 are taking Discrete Mathematics II, 111 are taking Calculus II, and 63 are taking Physics I.
There are 41 taking Discrete Mathematics II and Calculus II, 32 students taking ...
Sunday, April 6, 2014 at 11:29pm
pls let the answer be at the end of this - sad college student
Sunday, April 6, 2014 at 11:03pm
Sunday, April 6, 2014 at 10:53pm
If 1/C1 + 1/C2 + 1/C3 = 1/Ceq and Ceq = 3.18 mF, then replace the given variables as such: 1/6.54 + 1/8.19 + 1/C(unknown) = 1/3.18. Now it becomes an algebra problem: Multiply both sides of the eqn by 3.18, then
you'll have: 3.18/6.54 + 3.18/8.19 + 3.18/C(unknown) = 1. Simplify: .49 + .39 + 3.18/C = 1 ...
Sunday, April 6, 2014 at 10:51pm
Water towers store water above the level of consumers for times of heavy use, eliminating the need for high-speed pumps. How high above a user must the water level be to create a gauge pressure of 3.00 × 10^5 N/m^2?
Sunday, April 6, 2014 at 10:47pm
A cosmic-ray proton in interstellar space has an energy of 19.5 MeV and executes a circular orbit having a radius equal to that of Mars' orbit around the Sun (2.28 × 10^11 m). What is the magnetic field
in that region of space?
Sunday, April 6, 2014 at 10:21pm
You are given a four-wheeled cart of mass 11 kg, where the distance between a wheel and its nearest neighbors is 2.1 m. Suppose we load this cart with a crate of mass 109 kg, where the crate's center
of mass is located in the back-middle of the cart, 0.525 m from its ...
Sunday, April 6, 2014 at 9:47pm
Consider a coaxial cable with 2Amps in the center conductor coming out of the page, and 1Amp in the outer conductor going into the page. The center conductor has a radius of 1mm, the outer
conductor's inner radius is 2mm, and the outer conductor's outter radius is 3mm...
Sunday, April 6, 2014 at 9:32pm
1 C 2 less, maybe you run over the mark from the first one on the track just as the rear can explodes, and the two marks on the track are together :)
Sunday, April 6, 2014 at 8:57pm
On earth I assume weight on first line = (86.4+20.8)(9.81) weight on second (lower) line = 20.8 (9.81)
Sunday, April 6, 2014 at 8:53pm
in the horizontal direction the ball has no acceleration once let go (no horizontal force) so now it is 9 m/s^2 down by the way, g on Mars about the same as earth ? FALSE
Sunday, April 6, 2014 at 8:50pm
A train is traveling along a straight, horizontal track at a constant speed of 20 mph (i.e.,a non-relativistic speed). An observer on the train places paint cans at the front and back of one of the
cars. She then detonates the cans. Due to a malfunction, the can at the front ...
Sunday, April 6, 2014 at 8:45pm
Consider two masses that hang from an overhead beam. The first mass of 86.4 kg is attached to the beam using an ideal rope. The second mass of 20.8 kg is attached to the first mass with an ideal rope
and hangs directly under the first mass. (a) Find the tension in the lower ...
Sunday, April 6, 2014 at 8:44pm
A special train is traveling down a long, straight track with a constant acceleration of 9m/s^2 in the forward direction. The train is located on Mars where the acceleration due to gravity is 9 m/s^
2. If a passenger drops a ball and measures the acceleration of the ball, the ...
Sunday, April 6, 2014 at 8:43pm
You are given a four-wheeled cart of mass 11 kg, where the distance between a wheel and its nearest neighbors is 2.1 m. Suppose we load this cart with a crate of mass 109 kg, where the crate's center
of mass is located in the back-middle of the cart, 0.525 m from its ...
Sunday, April 6, 2014 at 8:43pm
college physics
Sunday, April 6, 2014 at 8:42pm
z = 5 * 10^5 ohms i = 1/(5*10^6) = .2 * 10^-6 = 2 * 10^-7
Sunday, April 6, 2014 at 8:18pm
Sunday, April 6, 2014 at 8:08pm
Va=63rev/min * 6.28rad/rev * 1min/60s= 6.594 rad/s. Angle=0.57 + 6.594rad/s*9.5s=63.21 rad. or 0.4132 rad.
Sunday, April 6, 2014 at 7:46pm
College Physics
Sunday, April 6, 2014 at 7:36pm
A series RLC circuit is made with a C=100pF capacitor, an L=100μH inductor, and an R=30Ω resistor. The circuit is driven at 5 × 10^6 rad/s with a 1V amplitude. What is the current oscillation amplitude
in amps
Sunday, April 6, 2014 at 5:04pm
In the device a 50g mass is hung from a pulley and released. A disk is placed on the platform. The platform has a radius of 5cm. The disk and platform go from rest to an angular velocity of 8.5rad/s
in 2.5 seconds. What is the moment of inertia of the disk? What is its mass? ...
Sunday, April 6, 2014 at 4:56pm
Sunday, April 6, 2014 at 4:50pm
conservation of momentum initial=final 1500*25East=(1500+2000)V solve for V
Sunday, April 6, 2014 at 3:42pm
Boxcar A, with a mass of 1500 kg, is travelling at 25 m/s to the east. Boxcar B has a mass of 2000 kg, and is initially at rest. The boxcars collide inelastically and move together after they get
stuck. What is their combined velocity?
Sunday, April 6, 2014 at 3:41pm
Isn't this Biot-Savat law? http://dev.physicslab.org/Document.aspx?doctype=3&filename=Magnetism_BiotSavartLaw.xml
Sunday, April 6, 2014 at 3:40pm
it takes time=.2 seconds to get there, so it falls h=1/2 g t^2=4.9*.04 meters
Sunday, April 6, 2014 at 3:29pm
A bullet is fired horizontally with a speed of 100 m/s aiming at a target 20m away. It misses the target by??
Sunday, April 6, 2014 at 2:58pm
9.8/8.4 answer: 1.16666
Sunday, April 6, 2014 at 2:34pm
Sunday, April 6, 2014 at 2:27pm
Sunday, April 6, 2014 at 1:31pm
when Tyson left at 11:00, Rashad had already gone 200km, so the two were 580 km apart. They approached each other at 180 km/hr, meaning they met in 3 2/9 hours. So, by the time they met at 2:13:20,
Tyson had gone 257.8 km.
Sunday, April 6, 2014 at 1:18pm
cape1 physics
thanks for the example.
Sunday, April 6, 2014 at 1:13pm
cape1 physics
a hammer is often used to force a nail into wood.the faster the hammer moves,the deeper the nail moves into the wood.
Sunday, April 6, 2014 at 12:56pm
Problem Question: Rashad started The 780km drive from Sault st. Marie to Ottawa at 09:00. His average speed was 100km/h. Two hours later Tyson left Ottawa for Sault St. Marie. He drove at 80km/h on
the same highway. How far from Ottawa, and what time of day did they meet?
Sunday, April 6, 2014 at 12:26pm
A lightning bolt may carry a current of 1.00 × 10^4 A for a short time interval. What is the resulting magnetic field 134 m from the bolt? Assume that the bolt extends far above and below the point of
Sunday, April 6, 2014 at 12:14pm
A series RLC circuit is made with a C=100pF capacitor, an L=100μH inductor, and an R=30Ω resistor. The circuit is driven at 5 × 10^6 rad/s with a 1V amplitude. What is the current oscillation amplitude
in amps
Sunday, April 6, 2014 at 10:40am
Consider a coaxial cable with 2Amps in the center conductor coming out of the page, and 1Amp in the outer conductor going into the page. The center conductor has a radius of 1mm, the outer
conductor's inner radius is 2mm, and the outer conductor's outter radius is 3mm...
Sunday, April 6, 2014 at 8:59am
Physics--PLZ HELP!
Sunday, April 6, 2014 at 7:11am
Sunday, April 6, 2014 at 2:01am
Correction: Vo = 460.6 rad/s V = 45 rad/s a = -84.3 rad/s^2
Saturday, April 5, 2014 at 9:49pm
Vo=4401rev/min * 6.28rad/rev * 1min/60s= 460.6 m/s V = 430 * 6.28/60 = 45 m/s. a=(V-Vo)/t = (45-460.6)/4.93=-84.3 m/s^2
Saturday, April 5, 2014 at 9:45pm
Depth = V*T = 1530 * 3.65/2 = 2792 m.
Saturday, April 5, 2014 at 9:15pm
V = Vo + g*Tr = 0 14 - 9.8Tr = 0 9.8Tr = 14 Tr = 1.43 s. = Rise time. h = ho + Vo*Tr + 0.5g*Tr^2 hmax = 70 + (14*1.43 -4.9*1.43^2)=80 m. Above gnd. hmax = 0.5g*t^2 = 80 4.9t^2 = 80 t^2 = 16.33 Tf =
4.04 s = Fall time. Tr+Tf = 1.43 + 4.04 = 5.47 s. = Time to strike gnd. V=Vo + ...
Saturday, April 5, 2014 at 9:01pm
A brass ring of diameter 10.00 cm at 22.2°C is heated and slipped over an aluminum rod of diameter 10.01 cm at 22.2°C. Assume the average coefficients of linear expansion are constant. (b) What if
the aluminum rod were 9.16 cm in diameter?
Saturday, April 5, 2014 at 8:53pm
Saturday, April 5, 2014 at 7:41pm
PE = m*V^2 = 0.164 J. .0508*V^2 = 0.164 V^2 = 3.228 V = 1.80 m/s.
Saturday, April 5, 2014 at 7:33pm
delta L/L = (T/area) / E = (215*9.81 / 2*10^-5) / 8*10^10 = (1054 * 10^5) / 8*10^10 = 132 * 10^-5, so delta L = (3.5)(132*10^-5) = 461 * 10^-5 = .00461 meters = .461 cm
Saturday, April 5, 2014 at 6:32pm
Tension in cable = T Force up at end from cable = T sin 37 compression Force toward wall = T cos 37 max friction force = .485 T cos 37 up vertical forces on rod sum to 0 2 Fg = .485 T cos 37 + T sin
37 2 Fg = .989 T T = 2.02 Fg take moments about intersection of rod and wall ...
Saturday, April 5, 2014 at 6:26pm
Physics 141
Thank you so much for your time and help!
Saturday, April 5, 2014 at 6:16pm
physics (forces)
Saturday, April 5, 2014 at 6:02pm
A 215-kg load is hung on a wire of length of 3.50 m, cross-sectional area 2.000x10^-5 m2, and Young's modulus 8.00x 10^10 N/m2. What is its increase in length?
Saturday, April 5, 2014 at 5:36pm
One end of a uniform 3.20-m-long rod of weight Fg is supported by a cable at an angle of θ = 37° with the rod. The other end rests against the wall, where it is held by friction as shown in the
figure below. The coefficient of static friction between the wall and the...
Saturday, April 5, 2014 at 5:35pm
(1/2) L i^2 = (1/2) C V^2; i^2 = (C/L) V^2; i^2 = (3.1*10^-6 / 5.6*10^-3)(961); i^2 = 532 * 10^-3 = 53.2 * 10^-2; i = 7.3 * 10^-1 = .73 amps
Saturday, April 5, 2014 at 5:21pm
period = 1/frequency = (1/7.01)*10^-18; wavelength = 3*10^8 * (1/7.01)*10^-18 = .428 * 10^-10 = 4.28 * 10^-11 meters
Saturday, April 5, 2014 at 5:12pm
Physics 141
Hey. I just did that.
Saturday, April 5, 2014 at 5:04pm
Physics 141
Distance axle above step edge = .33 - .125 = .205 Angle T is angle between straight down from Axle and the top edge of the step so cos T = .205/.33 T = 10.58 deg sin T = .184 Take moments about the
top edge of the step. F * .205 clockwise (my step is on the right) m g * .33 ...
Saturday, April 5, 2014 at 5:04pm
Physics 141
A bicycle wheel resting against a small step whose height is h = 0.125 m. The weight and radius of the wheel are W = 23.1 N and r = 0.330 m, respectively. A horizontal force vector F is applied to
the axle of the wheel. As the magnitude of vector F increases, there comes a ...
Saturday, April 5, 2014 at 4:54pm
Physics 141
THANK YOU SO MUCH!
Saturday, April 5, 2014 at 4:53pm
Physics 141
T = 1/4 so f = 4
Saturday, April 5, 2014 at 4:51pm
Physics 141
after one period T, (8 pi T) = 2 pi so T = 1/4 so f = 1/T = 4
Saturday, April 5, 2014 at 4:50pm
Physics 141
The drawing shows a bicycle wheel resting against a small step whose height is h = 0.125 m. The weight and radius of the wheel are W = 23.1 N and r = 0.330 m, respectively. A horizontal force vector
F is applied to the axle of the wheel. As the magnitude of vector F increases...
Saturday, April 5, 2014 at 4:30pm
Physics 141
x=3.5cos(8pi.t) Amplitude=3.5m Frequency?
Saturday, April 5, 2014 at 4:11pm
Physics 141
x=3.5Cos(8pi.t) Amplitude=3.5m What is the frequency?
Saturday, April 5, 2014 at 4:08pm
Acceleration due to gravity is independent of mass, but the force is not. Determine the final velocity of the following objects and the force generated by the same objects as they are dropped from
the given heights. 10 kg 2.5 meters 7 kg 5.5 meters 0.5 kg 12 meters 0.7 kg 5 ...
Saturday, April 5, 2014 at 2:17pm
Saturday, April 5, 2014 at 12:51pm
Saturday, April 5, 2014 at 12:06pm
You can solve for Emax from S = Em^2/(2*mu0*c). Then solve for Bmax from Em/Bm = speed of light. Then the rms value of each is (1/sqrt2) * max value
Saturday, April 5, 2014 at 12:05pm
An industrial laser is used to burn a hole through a piece of metal. The average intensity of the light is S = 1.13 × 10^9 W/m^2. What is the rms value of each of the following fields in the
electromagnetic wave emitted by the laser? (a) electric field (b) magnetic field
Saturday, April 5, 2014 at 11:41am
A 3.1-µF capacitor has a voltage of 31 V between its plates. What must be the current in a 5.6-mH inductor so that the energy stored in the inductor equals the energy stored in the capacitor?
Saturday, April 5, 2014 at 11:36am
In a dentist's office, an X-ray of a tooth is taken using X-rays that have a frequency of 7.01 × 10^18 Hz. What is the wavelength in vacuum of these X-rays?
Saturday, April 5, 2014 at 11:36am
I dont understand.
Saturday, April 5, 2014 at 9:18am
College Physics
Two forces, F⃗ 1 and F⃗ 2, act at a point, as shown in the picture. (Figure 1) F⃗ 1 has a magnitude of 8.20N and is directed at an angle of α = 55.0∘ above the negative x axis in the second quadrant. F⃗
2 has a magnitude of 6.40N and is ...
Saturday, April 5, 2014 at 6:48am
you need v such that 3i + 4j + 2(xi+yj) = 0 3+2x = 0 4+2y = 0 v = -3/2 i -2j now just get the magnitude and direction.
Saturday, April 5, 2014 at 5:46am
A bomb of mass M at rest explodes into three pieces in the ratio 1 : 1 : 2. The smaller ones are thrown off in perpendicular directions with velocities 3 ms−1 and 4 ms−1. What is the velocity of the
third piece after the explosion?
Saturday, April 5, 2014 at 1:33am
Saturday, April 5, 2014 at 1:17am
Friday, April 4, 2014 at 10:54pm
The liquid with the highest density will have the largest percentage of its' volume above the water. Find the density of each liquid and compare them.
Friday, April 4, 2014 at 10:28pm
d = V*t = 1500m/s * 1.5/2 = 1125 m.
Friday, April 4, 2014 at 10:03pm
T = d/V = 200/7 = 28.57 s.
Friday, April 4, 2014 at 9:56pm
P = F * d/t = mg * d/t P = 66*9.8 * 14/44 Joules/s.
Friday, April 4, 2014 at 9:47pm
Physics Help
L = V(1/F) + 0 Slope = V. V is a constant. y-intercept = 0
Friday, April 4, 2014 at 9:26pm
All of the parameters change.
Friday, April 4, 2014 at 8:51pm
A pitcher throws a 0.192-kg baseball, and it approaches the bat at a speed of 43.2 m/s. The bat does 69.8 J of work on the ball in hitting it. Ignoring air resistance, determine the speed of the ball
after the ball leaves the bat and is 32.6 m above the point of impact.
Friday, April 4, 2014 at 8:49pm
change in PE = min work; min work = GMm/r1 - GMm/r2, where M is the mass of the moon and m is the mass of the lander
Friday, April 4, 2014 at 8:35pm
Use symettry as you integrate across. Starting from one side, all you want to add is the horizontal component (the vertical component will be oposite direction when you get to the other side). So
integrate the cosine/sine Theta part of the angle only (theta equals an angle ...
Friday, April 4, 2014 at 8:32pm
I'm having trouble setting up/solving this physics problem? Can someone please help me? Thank you. A 4700 kg lunar lander is in orbit 45 km above the surface of the moon. It needs to move out to a
340-km-high orbit in order to link up with the mother ship that will take ...
Friday, April 4, 2014 at 8:02pm
An infinitely long thin metal strip of width w=12cm carries a current of I=10A that is uniformly distributed across its cross section. What is the magnetic field at point P a distance a=3cm above the
center of the strip? I have tried using the integration method but cannot get...
Friday, April 4, 2014 at 7:27pm
Two manned satellites approaching one another at a relative speed of 0.500 m/s intend to dock. The first has a mass of 3.50 × 10^3 kg, and the second a mass of 7.50 × 10^3 kg. Assume that the positive
direction is directed from the second satellite towards the ...
Friday, April 4, 2014 at 6:52pm
work done = increase in kinetic energy (here we have a decrease so the work is negative. The force is opposite to the direction of motion) work = (1/2)(4.5*10^4)(5000^2 - 7000^2) work = force
*distance moved in direction of force so Force = work/(1.4*10^6)
Friday, April 4, 2014 at 6:45pm
physics. PLEASE HELP
If it is 50 meters high, the pressure is the weight of a one meter square column 50 meters high. Water density is about 1000 Kilograms/ cubic meter So the mass of this column is 1000 Kg/m^3 * 50 m *
g or 1000 * 50 * 9.81 = 490,500 Newtons that weight is spread out over the one...
Friday, April 4, 2014 at 6:39pm
physics. PLEASE HELP
Where do I start? Calculate the water pressure at the bottom of the 50-m {\rm m}-high water tower shown in the photo. Express your answer to two significant figures and include the appropriate units.
The question says to neglect the pressure due to atmosphere when doing my ...
Friday, April 4, 2014 at 6:23pm
An asteroid is moving along a straight line. A force acts along the displacement of the asteroid and slows it down. The asteroid has a mass of 4.5× 104 kg, and the force causes its speed to change
from 7000 to 5000m/s. (a) What is the work done by the force? (b) If the ...
Friday, April 4, 2014 at 6:22pm
KE = 1/2 I w^2; I for a solid disk = 1/2 m r^2; w (rad/s) = 2*PI * 2300 rev/min * 1 min/60 s
Friday, April 4, 2014 at 5:50pm
energy stored in spring = PE lost by crate: 1/2 k (0.24)^2 = 5*9.8*(2.0+0.24); solve for k
Friday, April 4, 2014 at 5:45pm
A grinding wheel 0.3m in diametre has amass of 5kg is rotating at an angular velocity of 2300rev/min.what is the kinetic energy?
Friday, April 4, 2014 at 4:23pm
A grinding wheel 0.3m in diametre has amass of 5kg is rotating at an angular velocity of 2300rev/min.what is the kinetic energy?
Friday, April 4, 2014 at 4:16pm
a 5 kg crate falls from a height of 2.0 m into an industrial spring scale. When the crate comes to rest the compression of the spring is 24 cm. what is the spring constant?
Friday, April 4, 2014 at 4:09pm
I can't figure this out.. A physics student stands at the top of hill that has an elevation of 37 meters. He throws a rock and it goes up into the air and then falls back past him and lands on the
ground below. The path of the rock can be modeled by the equation y = ...
Friday, April 4, 2014 at 3:04pm
Baum-Welch Algorithm on Map-Reduce for Parallel Hidden Markov Model Training.
• Type:
• Status: Resolved
• Priority:
• Resolution: Won't Fix
• Affects Version/s: 0.4, 0.5
• Fix Version/s: None
Proposal Title: Baum-Welch Algorithm on Map-Reduce for Parallel Hidden Markov Model Training.
Student Name: Dhruv Kumar
Student E-mail: dkumar@ecs.umass.edu
Organization/Project: Apache Mahout
Assigned Mentor:
Proposal Abstract:
The Baum-Welch algorithm is commonly used for training a Hidden Markov Model because of its superior numerical stability and its ability to guarantee the discovery of a locally maximum, Maximum
Likelihood Estimator, in the presence of incomplete training data. Currently, Apache Mahout has a sequential implementation of the Baum-Welch which cannot be scaled to train over large data sets.
This restriction reduces the quality of training and constrains generalization of the learned model when used for prediction. This project proposes to extend Mahout's Baum-Welch to a parallel,
distributed version using the Map-Reduce programming framework for enhanced model fitting over large data sets.
Detailed Description:
Hidden Markov Models (HMMs) are widely used as a probabilistic inference tool for applications generating temporal or spatial sequential data. Relative simplicity of implementation, combined with
their ability to discover latent domain knowledge have made them very popular in diverse fields such as DNA sequence alignment, gene discovery, handwriting analysis, voice recognition, computer
vision, language translation and parts-of-speech tagging.
A HMM is defined as a tuple (S, O, Theta) where S is a finite set of unobservable, hidden states emitting symbols from a finite observable vocabulary set O according to a probabilistic model Theta.
The parameters of the model Theta are defined by the tuple (A, B, Pi) where A is a stochastic transition matrix of the hidden states of size |S| X |S|. The elements a_(i,j) of A specify the
probability of transitioning from a state i to state j. Matrix B is a size |S| X |O| stochastic symbol emission matrix whose elements b_(s, o) provide the probability that a symbol o will be emitted
from the hidden state s. The elements pi_(s) of the |S| length vector Pi determine the probability that the system starts in the hidden state s. The transitions of hidden states are unobservable and
follow the Markov property of memorylessness.
Rabiner [1] defined three main problems for HMMs:
1. Evaluation: Given the complete model (S, O, Theta) and a subset of the observation sequence, determine the probability that the model generated the observed sequence. This is useful for evaluating
the quality of the model and is solved using the so called Forward algorithm.
2. Decoding: Given the complete model (S, O, Theta) and an observation sequence, determine the hidden state sequence which generated the observed sequence. This can be viewed as an inference problem where the model and observed sequence are used to predict the value of the unobservable random variables. The Viterbi decoding algorithm is used for predicting the hidden state sequence.
3. Training: Given the set of hidden states S, the set of observation vocabulary O and the observation sequence, determine the parameters (A, B, Pi) of the model Theta. This problem can be viewed as
a statistical machine learning problem of model fitting to a large set of training data. The Baum-Welch (BW) algorithm (also called the Forward-Backward algorithm) and the Viterbi training algorithm
are commonly used for model fitting.
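For concreteness, the Forward (alpha) recursion used by problems 1 and 3 can be sketched in a few lines. This is an illustrative NumPy version only; Mahout's actual sequential implementation is in Java:

```python
import numpy as np

def forward(pi, A, B, obs):
    """Alpha recursion: alpha[t, s] = P(o_1..o_t, state_t = s | Theta).
    pi: (|S|,) start vector; A: (|S|,|S|) transition matrix;
    B: (|S|,|O|) emission matrix; obs: sequence of observed symbol indices."""
    alpha = np.zeros((len(obs), len(pi)))
    alpha[0] = pi * B[:, obs[0]]
    for t in range(1, len(obs)):
        alpha[t] = (alpha[t - 1] @ A) * B[:, obs[t]]
    return alpha  # P(obs | Theta) = alpha[-1].sum()
```

The Backward (beta) recursion is symmetric, and in practice both are run in log space or with per-step scaling to avoid numerical underflow on long sequences.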
In general, the quality of HMM training can be improved by employing large training vectors but currently, Mahout only supports sequential versions of HMM trainers which are incapable of scaling.
Among the Viterbi and the Baum-Welch training methods, the Baum-Welch algorithm is superior, accurate, and a better candidate for a parallel implementation for two reasons:
(1) The BW is numerically stable and provides a guaranteed discovery of the locally maximum, Maximum Likelihood Estimator (MLE) for model's parameters (Theta). In Viterbi training, the MLE is
approximated in order to reduce computation time.
(2) The BW belongs to the general class of Expectation Maximization (EM) algorithms which naturally fit into the Map-Reduce framework [2], such as the existing Map Reduce implementation of k-means in
Hence, this project proposes to extend Mahout's current sequential implementation of the Baum-Welch HMM trainer to a scalable, distributed case. Since the distributed version of the BW will use the
sequential implementations of the Forward and the Backward algorithms to compute the alpha and the beta factors in each iteration, a lot of existing HMM training code will be reused. Specifically,
the parallel implementation of the BW algorithm on Map Reduce has been elaborated at great length in [3] by viewing it as a specific case of the Expectation-Maximization algorithm and will be
followed for implementation in this project.
The BW EM algorithm iteratively refines the model's parameters and consists of two distinct steps in each iteration--Expectation and Maximization. In the distributed case, the Expectation step is computed by the mappers and the reducers, while the Maximization is handled by the reducers. Starting from an initial Theta^(0), in each iteration i the model parameter tuple Theta^(i) is input to the algorithm, and the end result Theta^(i+1) is fed to the next iteration i+1. The iteration stops on a user-specified convergence condition expressed as a fixpoint, or when the number of iterations exceeds a user-defined value.
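The iteration control just described can be sketched as a generic EM driver loop. The function names and the flat parameter representation below are illustrative only (they are not the patch's actual API):

```python
def train(theta0, e_step, m_step, epsilon=1e-4, max_iter=100):
    """Iterate EM until the largest parameter change drops below
    epsilon (the user-specified convergence delta) or max_iter is hit.
    In the MapReduce version, e_step is the mapper/reducer work and
    m_step the reducer-side normalization; here both are plain callables
    over a flat list of parameters.
    """
    theta = theta0
    for _ in range(max_iter):
        counts = e_step(theta)      # Expectation: expected event counts
        new_theta = m_step(counts)  # Maximization: updated parameters
        delta = max(abs(a - b) for a, b in zip(new_theta, theta))
        theta = new_theta
        if delta < epsilon:         # fixpoint reached
            break
    return theta
```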
Expectation computes the posterior probability of each latent variable for each observed variable, weighted by the relative frequency of the observed variable in the input split. The mappers process independent training instances and emit expected state transition and emission counts using the Forward and Backward algorithms. The reducers finish Expectation by aggregating the expected counts.
The input to a mapper consists of (k, v_o) pairs where k is a unique key and v_o is a string of observed symbols. For each training instance, the mappers emit the same set of keys corresponding to
the following three optimization problems to be solved during the Maximization, and their values in a hash-map:
(1) Expected number of times a hidden state is reached (Pi).
(2) Number of times each observable symbol is generated by each hidden state (B).
(3) Number of transitions between each pair of states in the hidden state space (A).
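A sequential sketch of the Expectation work described above follows: a forward pass, a backward pass, and then the three sets of expected counts corresponding to Pi, A and B. Variable names and the nested-list layout are illustrative, not the patch's code:

```python
def e_step(obs, pi, A, B):
    """Expected initial, transition and emission counts for one
    training instance (the per-mapper work, before reducer aggregation)."""
    n, T = len(pi), len(obs)
    # Forward pass: alpha[t][i] = P(obs[0..t], state_t = i)
    alpha = [[pi[i] * B[i][obs[0]] for i in range(n)]]
    for t in range(1, T):
        alpha.append([B[j][obs[t]] * sum(alpha[t - 1][i] * A[i][j]
                                         for i in range(n)) for j in range(n)])
    # Backward pass: beta[t][i] = P(obs[t+1..T-1] | state_t = i)
    beta = [[1.0] * n for _ in range(T)]
    for t in range(T - 2, -1, -1):
        for i in range(n):
            beta[t][i] = sum(A[i][j] * B[j][obs[t + 1]] * beta[t + 1][j]
                             for j in range(n))
    z = sum(alpha[T - 1][i] for i in range(n))  # P(obs) under the model
    # (1) expected initial-state counts -> Pi
    init = [alpha[0][i] * beta[0][i] / z for i in range(n)]
    # (3) expected transition counts -> A
    trans = [[sum(alpha[t][i] * A[i][j] * B[j][obs[t + 1]] * beta[t + 1][j]
                  for t in range(T - 1)) / z for j in range(n)]
             for i in range(n)]
    # (2) expected emission counts -> B
    emit = [[sum(alpha[t][i] * beta[t][i] for t in range(T) if obs[t] == k) / z
             for k in range(len(B[0]))] for i in range(n)]
    return init, trans, emit
```

The counts obey the usual sanity checks: the initial counts sum to 1, the transition counts to T-1, and the emission counts to T.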
The M step computes the updated Theta^(i+1) from the values generated during the E step. This involves aggregating the values (as hash-maps) for each key corresponding to one of the optimization problems. The aggregation summarizes the statistics necessary to compute a subset of the parameters for the next EM iteration. The optimal parameters for the next iteration are arrived at by computing the relative frequency of each event with respect to its expected count at the current iteration. The optimal parameters emitted by each reducer are written to the HDFS and are fed to the mappers in the next iteration.
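The Maximization step then reduces to relative-frequency normalization of the aggregated counts. A minimal sketch (illustrative, not the patch's actual reducer code):

```python
def m_step(init, trans, emit):
    """Normalize aggregated expected counts into probability
    distributions: the reducer-side Maximization described above."""
    def normalize(row):
        s = sum(row)
        return [x / s for x in row] if s > 0 else list(row)
    pi = normalize(init)                  # initial distribution Pi
    A = [normalize(row) for row in trans]  # row-stochastic transitions
    B = [normalize(row) for row in emit]   # row-stochastic emissions
    return pi, A, B
```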
The project can be subdivided into distinct tasks of programming, testing and documenting the driver, mapper, reducer and the combiner with the Expectation and Maximization parts split between them.
For each of these tasks, a new class will be programmed, unit tested and documented within the org.apache.mahout.classifier.sequencelearning.hmm package. Since k-means is also an EM algorithm,
particular attention will be paid to its code at each step for possible reuse.
A list of milestones, associated deliverable and high level implementation details is given below.
Time-line: April 26 - Aug 15.
April 26 - May 22 (4 weeks): Pre-coding stage. Open communication with my mentor, refine the project's plan and requirements, understand the community's code styling requirements, and expand my knowledge of Hadoop and Mahout internals. Thoroughly familiarize myself with the classes within the classifier.sequencelearning.hmm, clustering.kmeans, common, vectorizer and math packages.
May 23 - June 3 (2 weeks): Work on Driver. Implement, test and document the class HmmDriver by extending the AbstractJob class and by reusing the code from the KMeansDriver class.
June 3 - July 1 (4 weeks): Work on Mapper. Implement, test and document the class HmmMapper. The HmmMapper class will include setup() and map() methods. The setup() method will read in the HmmModel
and the parameter values obtained from the previous iteration. The map() method will call the HmmAlgorithms.backwardAlgorithm() and the HmmAlgorithms.forwardAlgorithm() and complete the Expectation
step partially.
July 1 - July 15 (2 weeks): Work on Reducer. Implement, test and document the class HmmReducer. The reducer will complete the Expectation step by summing over all the occurrences emitted by the mappers for the three optimization problems. Reuse the code from the HmmTrainer.trainBaumWelch() method if possible. Also, mid-term review.
July 15 - July 29 (2 weeks): Work on Combiner. Implement, test and document the class HmmCombiner. The combiner will reduce the network traffic and improve efficiency by aggregating the values for
each of the three keys corresponding to each of the optimization problems for the Maximization stage in reducers. Look at the possibility of code reuse from the KMeansCombiner class.
July 29 - August 15 (2 weeks): Final touches. Test the mapper, reducer, combiner and driver together. Give an example demonstrating the new parallel BW algorithm by employing the parts-of-speech
tagger data set also used by the sequential BW [4]. Tidy up code and fix loose ends, finish wiki documentation.
Additional Information:
I am in the final stages of finishing my Master's degree in Electrical and Computer Engineering from the University of Massachusetts Amherst. Working under the guidance of Prof. Wayne Burleson, as
part of my Master's research work, I have applied the theory of Markov Decision Process (MDP) to increase the duration of service of mobile computers. This semester I am involved with two course
projects involving machine learning over large data sets. In the Bioinformatics class, I am mining the RCSB Protein Data Bank [5] to learn the dependence of side chain geometry on a protein's
secondary structure, and comparing it with the Dynamic Bayesian Network approach used in [6]. In another project for the Online Social Networks class, I am using reinforcement learning to build an
online recommendation system by reformulating MDP optimal policy search as an EM problem [7] and employing Map Reduce (extending Mahout) to arrive at it in a scalable, distributed manner.
I owe much to the open source community as all my research experiments have only been possible due to the freely available Linux distributions, performance analyzers, scripting languages and
associated documentation. After joining the Apache Mahout's developer mailing list a few weeks ago, I have found the community extremely vibrant, helpful and welcoming. If selected, I feel that the
GSOC 2011 project will be a great learning experience for me from both a technical and professional standpoint and will also allow me to contribute within my modest means to the overall spirit of
open source programming and Machine Learning.
[1] A tutorial on hidden Markov models and selected applications in speech recognition by Lawrence R. Rabiner. In Proceedings of the IEEE, Vol. 77 (1989), pp. 257-286.
[2] Map-Reduce for Machine Learning on Multicore by Cheng T. Chu, Sang K. Kim, Yi A. Lin, Yuanyuan Yu, Gary R. Bradski, Andrew Y. Ng, Kunle Olukotun. In NIPS (2006), pp. 281-288.
[3] Data-Intensive Text Processing with MapReduce by Jimmy Lin, Chris Dyer. Morgan & Claypool 2010.
[4] http://flexcrfs.sourceforge.net/#Case_Study
[5] http://www.rcsb.org/pdb/home/home.do
[6] Beyond rotamers: a generative, probabilistic model of side chains in proteins by Harder T, Boomsma W, Paluszewski M, Frellsen J, Johansson KE, Hamelryck T. BMC Bioinformatics. 2010 Jun 5.
[7] Probabilistic inference for solving discrete and continuous state Markov Decision Processes by M. Toussaint and A. Storkey. ICML, 2006.
Dhruv Kumar added a comment -
As suggested by Ted, I'm creating this JIRA issue to foster feedback. Like I mentioned on the dev-list, while I'm working on this issue for a Bioinformatics class project, I'd be happy to extend it for a GSoC 2011 proposal.
Ted Dunning added a comment -
Sounds like a good project, especially if it helps the community learn about issues for code reuse in EM algorithms.
Dhruv Kumar added a comment -
How are the GSoC proposals discussed in Mahout? I was wondering if I should continue with the proposal here by editing the JIRA or email it to the dev list.
Robin Anil added a comment -
You can discuss on the group and update the JIRA with concrete plans.
Dhruv Kumar added a comment -
Added proposal here based on Shannon's feedback.
Dhruv Kumar added a comment -
Updated draft 2 after fixing typos and partial rewriting.
Dhruv Kumar added a comment -
Updated JIRA: added gsoc labels, edited the JIRA title to match the project's title.
Dhruv Kumar added a comment -
Submitted to the GSoC 2011 website under the Apache Software Foundation.
Dhruv Kumar added a comment -
Updated JIRA proposal: added references, fixed one typo, made the task list clearer. Updated on GSoC.
Hi Dhruv,
How goes progress on this?
Dhruv Kumar added a comment -
Hi Grant,
Since my last update, I have finished the first round of implementation of the end-to-end functionality, resulting in 7 new classes under the classifier.sequencelearning.baumwelchmapreduce package. The attached patch contains the following:
1. BaumWelchDriver, BaumWelchMapper, BaumWelchCombiner and BaumWelchReducer.
2. MapWritableCache, a general class to load MapWritable files from the HDFS.
3. BaumWelchUtils, a utility class for constructing the legacy HmmModel objects from a given HDFS directory containing the probability distributions (emission, transition and initial) as MapWritable types, stored as Sequence Files.
4. BaumWelchModel, a serializable version of HmmModel.
In addition to these classes, the patch also adds support for command line training of HMM using the BaumWelch MapReduce variant by modifying common/commandline/DefaultOptionCreator.java and the src/conf/driver.classes.props files.
In order to maximize parallelization, a total of 2|S| + 1 keys are emitted, where |S| is the number of hidden states. There are |S| keys, one for each hidden state, with corresponding values containing expected transition counts as a MapWritable object. Similarly, there are |S| keys and corresponding MapWritable values encoding the emission probability distribution. Finally, there is one key with a MapWritable value encoding the initial probability distribution vector. The large key space permits recruitment of more reducers, with each key being processed separately by one reducer in the best case.
The input and output formats are of Sequence File type. The keys for input are LongWritable and the values are ArrayWritable containing int[] observations, where each int in the observed sequence is a mapping of the training set's tokens defined by the user. The reducers write the distributions as Sequence Files with keys of type Text and values as MapWritable. I have extensively re-used the current sequential variant of HMM training, and a lot of my design decisions w.r.t. input and output types were guided by the legacy code's API.
Now that the driver-mapper-combiner-reducer chain's preliminary implementation is complete, the rest of the time will be actively spent in testing, debugging and refinement of the new trainer's features. In particular, I'm looking at alternative types to ArrayWritable for wrapping the observation sequence given to the mappers. For text mining, a simple mapping which encodes tokens in the text corpus as integer states could be performed, as expected by the current design. However, I'm not sure if this approach is the best w.r.t. scalability or whether it is at all applicable to domains other than Information Retrieval requiring scalable HMM training. I'm aware that a lot of other algorithms in Mahout require the input in the form of Vectors, packed into a Sequence File, and it will be useful to get feedback on this issue.
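(For illustration, the 2|S| + 1 key space described above can be sketched as follows; the key names here are hypothetical placeholders, not the literal strings emitted by the patch:)

```python
def emitted_keys(num_states):
    """Key space of size 2|S| + 1: one transition-count key per hidden
    state, one emission-count key per hidden state, and a single key for
    the initial distribution. Each key's value is a map of expected
    counts (a MapWritable in the actual Hadoop code)."""
    keys = ['INITIAL']  # one key -> initial probability vector Pi
    keys += ['TRANSIT_%d' % i for i in range(num_states)]  # rows of A
    keys += ['EMIT_%d' % i for i in range(num_states)]     # rows of B
    return keys
```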
Dhruv Kumar added a comment -
Uploaded a new patch:
1. Added a new method for building a random HmmModel, wrapping the distribution matrices' rows as MapWritable types, and writing them to the specified directory in a Sequence File format.
2. Updated common/DefaultOptionCreator for the new option in #1. Also added an option for the user to specify the directory containing a pre-written HmmModel object (as a Sequence File type containing MapWritable).
3. Updated the driver class to accommodate #1 and #2.
Sergey Bartunov added a comment -
Hey, Dhruv. I'd submitted some code in https://issues.apache.org/jira/browse/MAHOUT-734 which contains an HmmModel serialization utility and command-line tools for sequential HMM functionality, and it could be integrated into your code.
Dhruv Kumar added a comment -
Thanks Sergey.
I ended up creating a version of serialized HmmModel too. However, for the time being I'm not going to use it, per my design which relies heavily on MapWritables for storing the distributions with a large key space equal to 2|S| + 1, where |S| is the number of hidden states.
The sequential command line utils are convenient for validating the results against the MapReduce variant, so they are definitely useful in my case.
Dhruv Kumar added a comment -
Uploaded a new patch after a week's worth of testing:
□ Bug fixes for a few corner cases.
□ Refactoring of the BaumWelchUtils and BaumWelchMapper classes.
□ Added verbose loggers for debugging.
Dhruv Kumar added a comment -
Uploaded a new patch with refactorings and miscellaneous improvements. This concludes the chain's implementation and testing with manual inputs. The trainer works and provides a scalable variant of Baum-Welch. The next phase of the project will entail more testing of the chain via unit tests and implementation of the log-scaled variant.
The next patch will contain more documentation and unit tests for some of the methods of the trainer.
Any luck yet on the unit tests?
Dhruv Kumar added a comment -
Hi Grant,
I am finishing up some documentation and a few tests. The unit testing has led to a lot of refactoring in the BaumWelchMapper and the BaumWelchUtils classes, which was somewhat expected. I should be able to wrap this up before the pencils-down deadline though, with an example of POS tagging to follow.
Hey Dhruv, nearing pencils down, how are we doing?
Dhruv Kumar added a comment -
Hi Grant,
I have uploaded the first candidate patch for this issue's resolution and it will be great to get some feedback on it from you and the dev community. It contains:
1. Complete individual unit tests for the mapper, combiner and reducer to verify accurate summarization, normalization, and probability matrix and vector lengths.
2. Unit tests for the overall trainer. The trained model's probability values are validated against the sequential HMM implementation of Mahout (which in turn used the R and Matlab HMM packages for validation).
3. Documentation for each of the 8 classes under the new classifier.sequencelearning.baumwelchmapreduce package.
4. Command line BaumWelch MapReduce training utilities--can be invoked using "bin/mahout hmmBaumWelchMapReduce <args>". The driver.classes.props file was modified for the same.
On my system, which is an aging Pentium 4, the unit tests for baumwelchmapreduce took 57 seconds to complete.
Please let me know what you think and where things can be improved. I will be refactoring this based on yours and others' feedback until the firm pencils-down date next week on Monday the 22nd.
Thank you.
Dhruv
Dhruv Kumar added a comment -
First MapReduce based open-source Baum-Welch HMM Trainer!
I have attached the patch for inclusion into the trunk, keeping in line with the "firm pencils down date."
It is complete with all the deliverables as listed in the project's timeline: unit tests, documentation, and a POS example. Specifically, the following improvements have taken place in the last 5 days:
1. Created a new log-scaled training variant by refactoring the mapper, combiner, reducer and driver classes (and added a unit test for the same). The option for log scaling can be invoked via the command line and causes the trainer to operate in log space between mapper -> combiner -> reducer. This should provide numerical stability for extremely long sequences or large state spaces (or both).
2. Added a scalable Map-Reduce based Parts Of Speech tagger which uses the log-scaled training.
3. (Minor) Changed the input format from IntArrayWritable to Mahout's VectorWritable.
It will be awesome to get some feedback on the code, functionality, design etc.
I'm eager to keep improving the trainer, fix any bugs when they arise and make it more useful for users!
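(The numerical stability of a log-scaled trainer hinges on combining probabilities in log space. The standard log-sum-exp trick is sketched below for illustration; this is the textbook technique, not necessarily the exact routine used in the patch:)

```python
import math

def log_sum_exp(log_probs):
    """Stable computation of log(sum(exp(x) for x in log_probs)).
    Subtracting the maximum before exponentiating avoids the underflow
    that makes plain-probability sums collapse to zero for very long
    sequences or large state spaces."""
    m = max(log_probs)
    if m == float('-inf'):  # all probabilities are exactly zero
        return m
    return m + math.log(sum(math.exp(x - m) for x in log_probs))
```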
Dhruv Kumar added a comment -
In its current form, one should follow these steps to use the trainer:
1) Encode the hidden and emitted state tags to integer ids and store them as MapWritable SequenceFiles. The MapWritableCache.Save() method comes in handy here.
2) Convert the input sequence to the integer ids as described by the emitted states map in step 1. Wrap the input sequence as a VectorWritable.
3) Store the VectorWritable obtained in step 2 as a sequence file with an arbitrary LongWritable key and the integer sequence as the value. This forms the input to the trainer.
4) (Optional) Use the BaumWelchUtils.BuildHmmModelFromDistributions() method to store an initial model with given distributions. The model will be stored as a SequenceFile containing MapWritables as the distributions.
5) Invoke the trainer via the command line or using the API by calling the driver's run() method. The users can specify if they want to use the log-scaled variant by setting logScaled to true, and they can specify the convergence delta, the max number of iterations etc. In case step 4 is omitted, the users must ask the program to create a random initial model by setting buildRandom to true. This starts the iterative training using Maximum Likelihood Estimation.
6) At the end, as the result of the training, an HmmModel is stored as a MapWritable with probability distributions encoded as DoubleWritables. The utility method BaumWelchUtils.CreateHmmModel(Path) can be used to decode the result and obtain the HmmModel.
Design Discussion
The design uses MapWritables and SequenceFiles to freely convert between the legacy HmmModel and a serializable variant which also encodes the probability distributions. This design choice had the following advantages:
1. I could leverage a lot of the existing functionality of the legacy sequential HMM code by writing utility methods to encode and decode (the BaumWelchUtils class was made for this purpose).
2. The users ultimately get the legacy HmmModel at the end of step 6; they can then use it to decode the test sequence using the HmmAlgorithms.decodeSequence() method. They also have at their disposal all the other methods provided by the legacy code.
3. Since the trained model is persisted in a SequenceFile, one can store these models for future reference and use BaumWelchUtils.CreateHmmModel(Path) later to decode it and compare with other trained models (possibly with different initial seed values).
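(Steps 1-2 above, mapping raw tokens to the integer ids the trainer consumes, can be sketched as follows. The helper below is hypothetical and only illustrates the encoding; it does not show Mahout's Writable/SequenceFile plumbing:)

```python
def encode(tokens, vocab=None):
    """Map raw tokens to integer observation ids; the vocabulary is
    built on the fly if not supplied, so repeated tokens share an id."""
    if vocab is None:
        vocab = {}
    seq = []
    for tok in tokens:
        # setdefault assigns the next free id the first time a token appears
        seq.append(vocab.setdefault(tok, len(vocab)))
    return seq, vocab
```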
Grant Ingersoll added a comment -
Some minor changes to move the packaging around to be a bit more consistent w/ the rest of Mahout (I think). Also some minor other tweaks in style.
Dhruv, for the example, I think it would be good to have a shell script to run just like the other examples. Also, it should probably move the downloading of the test/train data out to that script (and only do it if it isn't already there.)
I am still reviewing the algorithm itself, but it looks pretty good and seems consistent with our sequential implementation.
Dhruv Kumar added a comment -
Hi Grant,
Thanks for the feedback and for fixing the code style.
I will create a script for the example and have it download the test data only if it is not already present.
In your testing, if you come across any corner case which has missed my testing, please let me know. I can add a test for it and refactor the code to eliminate the bug. I am traveling until Saturday for a job interview in Seattle but I should be able to roll out the patch soon after that!
Grant Ingersoll added a comment -
Dhruv, any progress on the last pieces here? I'd like to see this get committed relatively soon.
Hi Grant,
Sorry I was caught up with the job interviews and turning in the graduation documents.
Here is the patch with the changes which you wanted:
1. Created a new shell script which automatically downloads the training and test sets. If the sets are already present, it skips the download.
2. Modified the POS tagger example code to avoid the download and accept the training and test sets via command line arguments. These arguments are passed by the script in #1.
Please let me know if you need further changes to make it commit ready!
Dhruv Kumar
I'm going to look to commit this soon after ApacheCon (or perhaps during)
Grant Ingersoll
While reviewing the code in BaumWelchTrainer.java, noticed that we have a bunch of System.out.println() statements. Code needs some cleanup to replace these by SLF4j logger calls.
Suneel Marthi
I'm getting
1/12/18 17:21:02 WARN mapred.LocalJobRunner: job_local_0010
java.lang.IllegalArgumentException: The output state probability from hidden state 0 to output state 2 is negative
at com.google.common.base.Preconditions.checkArgument(Preconditions.java:88)
at org.apache.mahout.classifier.sequencelearning.hmm.HmmUtils.validate(HmmUtils.java:187)
at org.apache.mahout.classifier.sequencelearning.hmm.hadoop.BaumWelchMapper.setup(BaumWelchMapper.java:111)
at org.apache.hadoop.mapreduce.Mapper.run(Mapper.java:142)
at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:764)
at org.apache.hadoop.mapred.MapTask.run(MapTask.java:370)
at org.apache.hadoop.mapred.LocalJobRunner$Job.run(LocalJobRunner.java:212)
11/12/18 17:21:03 INFO mapred.JobClient: map 0% reduce 0%
11/12/18 17:21:03 INFO mapred.JobClient: Job complete: job_local_0010
11/12/18 17:21:03 INFO mapred.JobClient: Counters: 0
Exception in thread "main" java.lang.InterruptedException: Baum-Welch Iteration failed processing tmp/output/model-9
at org.apache.mahout.classifier.sequencelearning.hmm.hadoop.BaumWelchDriver.runIteration(BaumWelchDriver.java:315)
at org.apache.mahout.classifier.sequencelearning.hmm.hadoop.BaumWelchDriver.runBaumWelchMR(BaumWelchDriver.java:253)
at org.apache.mahout.classifier.sequencelearning.hmm.hadoop.BWPosTagger.trainModel(BWPosTagger.java:293)
at org.apache.mahout.classifier.sequencelearning.hmm.hadoop.BWPosTagger.main(BWPosTagger.java:364)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
at java.lang.reflect.Method.invoke(Method.java:597)
at org.apache.hadoop.util.ProgramDriver$ProgramDescription.invoke(ProgramDriver.java:68)
at org.apache.hadoop.util.ProgramDriver.driver(ProgramDriver.java:139)
at org.apache.mahout.driver.MahoutDriver.main(MahoutDriver.java:188)
when running the example. Otherwise, all tests pass.
Grant Ingersoll
Converts the trainer to be an AbstractJob, brings up to trunk.
Grant Ingersoll
Once I get past the example issue, I think this is ready to go for the most part.
Grant Ingersoll
Marking this as 0.7, as much as I would love to get it in for 0.6.
Grant Ingersoll
Dhruv, any luck on the example issue?
Still think this is useful, but we need a fix for the example. Marking for 0.8
Grant Ingersoll
Sorry for being MIA for a while. I have relocated to SF and was extremely busy coming up to speed with my new job. That being said, I do want to work on this, maintain it and make sure that this
feature makes it to Mahout's trunk.
This example is not entirely suitable for demonstrating the MR version of HMM training. The state space is very large which causes underflow very easily.
I'm searching for a good example for this feature.
Does anyone else have a recommendation for a HMM training example I can use?
Dhruv Kumar
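The underflow Dhruv describes is generic to HMM training: multiplying many per-step probabilities drives the joint probability toward zero in floating point long before a large state space is exhausted. A common remedy (a generic illustration, not code from this patch) is to work in log space and combine probabilities with a log-sum-exp reduction:

```python
import math

def log_sum_exp(log_vals):
    """Numerically stable log(sum(exp(v) for v in log_vals))."""
    m = max(log_vals)
    return m + math.log(sum(math.exp(v - m) for v in log_vals))

# Multiplying 2000 small per-step probabilities underflows to 0.0 ...
probs = [1e-3] * 2000
naive = 1.0
for p in probs:
    naive *= p

# ... but the sum of their logs stays perfectly representable.
log_joint = sum(math.log(p) for p in probs)

print(naive)      # 0.0 (underflow)
print(log_joint)  # about -13815.5

# log_sum_exp lets us add probabilities given only their logs,
# e.g. marginalizing over two equally likely paths:
two_paths = log_sum_exp([log_joint, log_joint])  # log_joint + log(2)
```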
Does anyone else have a recommendation for a HMM training example I can use?
Perhaps look at other HMM packages?
Grant Ingersoll
Hi all - hoping I can revive this discussion
I'm not yet well-versed with Mahout especially applied to HMM, but find the idea quite interesting - especially since I have massive amounts of data that would be ideal for it.
I'm more familiar with R, and am knowledgeable about some example datasets that can be used for testing (below). I've applied Dhruv's patch and currently rebuilding Mahout. I will see if I can get
some of these examples working on my local Hadoop instance, but there will be a slight learning curve.
Data: http://134.76.173.220/hmm-with-r/data/
README: http://134.76.173.220/hmm-with-r/data/00_README.txt
Thoughts? Hope this helps.
-Tim
Tim Schultz
Hi Tim,
Thanks a lot for trying out my patch and providing these examples. Please let me know if you need any help or clarification about the API. Like I've mentioned above, I need a good example to
demonstrate the capability so I'll look at your link to see if it fits the need here.
Dhruv Kumar
Bringing up to date with trunk by accounting for the new driver.classes.default.props file
Dhruv Kumar
Cool, thanks, Dhruv. Any luck on the examples?
Dhruv, can you update by chance? Otherwise, will remove this from 0.8.
Grant Ingersoll
Hi Grant,
As I understand the only blocker for this issue is a small, self contained example which the users can run in a reasonable amount of time and see the results. The parts of speech tagger example which
I originally adapted for this trainer can take hours to converge, and sometimes it fails with arithmetic underflow due to an unusually large set of states for the Observations (observed states are
the words of the corpus in the POS tagger's model).
When is 0.8 due? I can chip away on this issue for the next few days in the evenings and hunt for a short example from the book mentioned above. Should require a week or two at least to sign off from
my side.
There are also unit tests with the trainer which demonstrate that it works--the results of Map Reduce based training are identical to the ones obtained in the sequential version.
Dhruv Kumar
Hi Dhruv,
Thanks for the response. We are trying to get 0.8 in the next week or two. Any help on a short example as well as updating the code to trunk would be awesome.
Grant Ingersoll
Any chance this can get done?
Hi Dhruv,
Will this be ready for 0.9 (tentatively mid-November)? It would be great if you could address Grant's comments above.
Suneel Marthi
Moving this to Backlog per email from Grant.
Future Annuity with Compound Interest?
Future Annuity with Compound Interest?
Jackson deposits $210 each month into a savings account earning interest at the rate of 7% per year compounded monthly. How much will he have in this account at the end of 6 years?
This is almost the same as the other two. Really, throw us a bone, here. You can't have NO idea.
Sorry - I am confused as to which formula to use. I wanted to make sure this was Future Annuity with Compound Interest. I really DONT know anything lol! I don't know where to start :/
I'm hearing a voice of desperation. You need a voice of hope in your head.
You START with "Basic Principles". Really!! Forget the silly formulas.
Jackson deposits $210 each month into a savings account earning interest at the rate of 7% per year compounded monthly. How much will he have in this account at the end of 6 years?
"Jackson deposits $210"
P = 210
"each month"
n = 12
"earning interest at the rate of 7% per year"
i = 0.07
"compounded monthly"
j = i/n = i/12 = 0.00583333
Accumulate One Month
r = 1 + j = 1.00583333
I haven't even read the question, yet, and just look at all the stuff that is available! This is how you start. Collect the pertinent data.
"How much will he have in this account at the end of 6 years? "
6 years = 6*n = 72 months
We are ready to build!
P*r^72 + P*r^71 + ... + P*r = S = The desired result.
The rest is algebra.
P*(r^72 + r^71 + ... + r) = S
Your task is to add up the geometric series in the parentheses.
If you cannot do it, you may be in the wrong class. Give it some thought. If all you can do is tell us that you cannot do anything, that is not encouraging and you should have a very clear
conversation with your academic advisor. You will not get anywhere here or in school without the necessary background.
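Summing that geometric series gives the annuity-due future value S = P·r·(r^72 − 1)/(r − 1). A short Python check of the numbers above (the final figure is computed here, not taken from an answer key):

```python
P = 210          # monthly deposit
i = 0.07         # nominal annual rate
n = 12           # compounding periods per year
j = i / n        # periodic rate, about 0.00583333
r = 1 + j        # one-month accumulation factor
months = 6 * n   # 72 deposits

# Direct simulation: deposit at the start of each month, then accrue interest.
balance = 0.0
for _ in range(months):
    balance = (balance + P) * r

# Closed form for the same series P*r^72 + P*r^71 + ... + P*r
closed = P * r * (r**months - 1) / (r - 1)

print(round(balance, 2))  # about 18,833 after 6 years
```

Both routes agree, which is a useful sanity check on the algebra.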
[R] While loop working with TRUE/FALSE?
David Winsemius dwinsemius at comcast.net
Thu Feb 2 14:14:50 CET 2012
On Feb 2, 2012, at 6:55 AM, Chris82 wrote:
> Thanks to Berend and the others,
> I've found a solution which works fine for my problem.
> I have not only 2 vectors, but also 4.
> Question is, if q1 and q2 is equal to w1 and w2.
> The computational time is very short, also for large data.
> q1 <- c(9,5,1,5)
> q2 <- c(9,2,1,5)
> w1 <- c(9,4,4,4,5)
w2 <- c(9,4,4,4,5)
> v <- vector()
for (i in 1:(length(q1))) {
  v[i] <- any((q1[i] == w1) & (q2[i] == w2))
}
This suggests a lack of understanding re: how to use logical
functions. The any() function is completely superfluous here. It will
return exactly the same vector as would:
(q1[i] == w1) & (q2[i] == w2)
If you wanted to use any() to pick out cases where either q1[i] == w1
or q2[i]==w2 then do not put an ampersand between those arguments but
rather a comma.
David Winsemius, MD
Heritage Laboratories
West Hartford, CT
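The logic under discussion — whether the pair (q1[i], q2[i]) occurs among the pairs (w1, w2) — translates directly into other languages. A Python rendering of the same check (a translation for illustration; not part of the original thread):

```python
q1 = [9, 5, 1, 5]
q2 = [9, 2, 1, 5]
w1 = [9, 4, 4, 4, 5]
w2 = [9, 4, 4, 4, 5]

# For each position i, test whether the pair (q1[i], q2[i])
# matches some pair (w1[k], w2[k]).
v = [any(a == x and b == y for x, y in zip(w1, w2))
     for a, b in zip(q1, q2)]

print(v)  # [True, False, False, True]
```

Only (9, 9) and (5, 5) occur among the (w1, w2) pairs, so positions 1 and 4 match.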
The astroid is the hypocycloid for which the rolled circle is four times as large as the rolling circle. The curve can be written as a Whewell equation: s = cos 2φ ^2).
The curve can also be constructed as the envelope of the lines through the two points (cos t, 0) and (0, sin t): line segments of constant length sliding between the axes. A mechanical device composed of a fixed bar whose ends slide on two perpendicular tracks is called a trammel of Archimedes.
This is equivalent to a falling ladder, so the astroid can also be seen as a glissette.
The astroid is also the envelope of co-axial ellipses whose sum of major and minor axes is constant.
The length of this unit astroid curve is 6, and its area is 3π/8.
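Both values can be checked numerically from the standard parametrization x = cos³t, y = sin³t (the usual parametrization of the unit astroid, not stated explicitly above). A small Python sketch:

```python
import math

N = 200_000
dt = 2 * math.pi / N

length = 0.0
area = 0.0
for k in range(N):
    t = k * dt
    # derivatives of x = cos^3 t, y = sin^3 t
    dx = -3 * math.cos(t) ** 2 * math.sin(t)
    dy = 3 * math.sin(t) ** 2 * math.cos(t)
    x = math.cos(t) ** 3
    y = math.sin(t) ** 3
    length += math.hypot(dx, dy) * dt          # arc length element
    area += 0.5 * (x * dy - y * dx) * dt       # Green's theorem

print(round(length, 4))  # close to 6
print(round(area, 4))    # close to 3*pi/8, about 1.1781
```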
This sextic curve ^3) is also called the regular star curve ^4). 'Astroid' is an old word for 'asteroid', a celestial object in an orbit around the sun, intermediate in size between a meteoroid and a planet.
The curve acquired its astroid name from a book by Littrow, published in 1836 in Vienna, replacing existing names such as cubocycloid, paracycle and four-cusp-curve.
Because of its four cusps it is also called the tetracuspid, and the hypocycloid of four cusps
Abbreviation for a hypocycloid with four cusps (a=1/4) led to the name of H4.
Some relations with other curves:
The first to investigate the curve was Roemer (1674), during his search for gear teeth.
Those who also worked on the curve were:
• Johann Bernoulli (1691)
• Leibniz (who corresponded in 1715)
• Johann Bernoulli (1725)
• d'Alembert (1748)
The curve can be generalized into the super ellipse or Lamé curve.
Some authors call this generalization the astroid.
1) In Italian: astroide.
2) φ: inclination angle of the tangent; s: arc length.
3) In Cartesian coordinates: (x^2 + y^2 - 1)^3 + 27 x^2 y^2 = 0
4) Astrum (Lat.) = star
Physics Tutors
Schenectady, NY 12304
Accomplished Ph.D. Candidate with Teaching Experience
...Exams that I have taken within my listed specialties include: Math I (now Integrated Algebra I), Math II (now Geometry), Math III (now Algebra II and Trigonometry), Chemistry,
(2002, the year I took it, was the year of the
Exam Controversy, in which...
Offering 10+ subjects including physics
An Interactive Introduction to Randomized Evaluation
This activity introduces a Randomized Control Trial (RCT) in the classroom. The experiment is straightforward to implement, and provides students experiential learning opportunities to the nuts and
bolts of impact evaluation. Student choices in the experiment are used to demonstrate the effect of a treatment, and the critical notion of Average Treatment Effect (ATE).
Learning Goals
The economics discipline is now flush with impact evaluation exercises especially in the area of Development economics. The primary tool of impact evaluation is using the RCT framework. However, the
RCT procedure, and its effects are not very intuitive when introduced through standard chalk and talk methods. The Classroom RCT Game introduced here provides an interactive introduction to the
technique of randomized evaluation, and demonstrates the Average Treatment Effect using student generated data from the activity. By participating in the classroom game students get a first hand
insight into a randomized program evaluation which in turn helps them to understand the literature on RCT better.
Context for Use
Impact evaluation has become a common tool in economics. In particular, Randomized Control Trials (RCTs) are now the main investigative tool for development economists. This exercise can provide an interactive introduction to the topic in courses such as Development Microeconomics, Public Policy and Economics of Development,
interactive introduction to the topic in courses such as Development Microeconomics, Public Policy and Economics of Development, Economic Development and Growth, Econometrics and Impact Evaluation,
and Community Economic Development.
By participating in the experiment, students have the opportunity not only to learn the core measure in any evaluation program (the Average Treatment Effect) but also to gain an intuitive understanding of the impact evaluation technique. The Classroom RCT Game need not be restricted to undergraduate introductions to randomized evaluation. It can also be used in a graduate course, since it can be time-consuming, if not impossible, to take the whole class to the field to provide first-hand exposure to an actual randomized evaluation program.
measurement and experiment design. Here again, the classroom game can provide a personal experience into the design and process of an experiment which can make the logic of designing an experiment –
to evaluate and estimate "treatment differentials" more vivid to the participating student.
The experiment takes about 35-45 minutes for a class of 20 students. The activity is introduced before starting with lectures on RCT. Some simple preparation before hand is needed as indicated below.
Description and Teaching Materials
1. A set of poker chips of two different colors. Alternatively, one can use two different suits from a deck of cards (Diamonds and Clubs).
2. A list of words with an associated meaning written next to each. A GRE wordlist would be perfect; we use terms in economics as an example (Wordlist, PDF, 55 kB). Multiple copies of the list are needed, to be distributed to about half the students in class.
3. A quiz comprising these words (see example Quiz, PDF, 67 kB). Copies of the quiz need to be distributed to all students.
Overview of the experiment
In the experiment students participate in an intervention where a random subset of them are exposed to a list of words with associated meanings, while the rest are not. All students then participate
in a quiz on word meanings containing these words. It is expected that due to the "intervention", the students exposed to the wordlist will have a higher average score than the students who were not
exposed to the wordlist. The random placement of students in a treatment and the control group ensures that pre-existing differences are averaged out between the two groups.
The objective of this activity is to provide students an intuitive understanding of how treatment differences arise, and the concept of Average Treatment Effect (ATE). The Average Treatment Effect is
the foremost variable of interest in any randomized control trial, since it captures the impact of the treatment on the outcome-variable of interest.
Description of the classroom activity
1. Students need to be placed randomly in a Treatment and a Control group, first. To construct the treatment and the control group, poker chips are handed out to the students at the beginning of the
experiment. Students with red chips are assigned to the treatment group and are asked to sit on the right side of the classroom. Students with white chips are assigned to the control group and are
asked to sit on the left side of the classroom. Handing out the chips provides a useful depiction of random assignment into groups, a critical methodology for disentangling treatment effects from
pre-existing differences.
2. Each student in the Treatment group is given a copy of "Wordlist" to review for five minutes. Students in the Control group do not have any task at that time.
3. At the end of the review period, the instructor collects back the wordlists from the treatment group, distributes the "Quiz" to all students in the treatment as well as the control group. They are
allowed five minutes to complete the quiz.
4. At the end of five minutes, the instructor reads out the correct answers for students to score their tests. The students are asked to write their total points on the left hand corner of the test –
a point for each correct answer.
5. The instructor collects the scored quiz sheets and computes the average score for the treatment group, and the average score for the control group.
6. The difference in the average quiz scores of the two groups is the Average Treatment Effect of the intervention. A simple excel graph can be used for visual elaboration. This can be readily done
using an excel sheet.
Teaching Notes and Tips
Using the results
The natural way to use the activity is to start with the Excel graphs of the computed results (Excel 2007 .xlsx, 33 kB) before introducing the ATE formally (see concept below).
The fact that the students themselves have generated the data allows them to identify with all the components of the experiment design readily, and allows the instructor to describe and define the
core measure and other subsequent measures more naturally (see Mani and Dasgupta 2010 for some extensions).
Note, our intervention can also be used to allow students revise concepts that have been just covered in the lectures.
Post activity discussion
Definition of ATE: Consider a pool of applicants (N) for a job training program. A randomly selected subset N[T] gets assigned to the treatment group (T), and receives the treatment (for example: the
job training program). The remaining sample N[C] = N-N[T] gets assigned to the control (C) group which does not receive the training. In our example we are interested in measuring the impact of the
training program on some measurable outcome variable (Y) such as wage earnings. The Average Treatment Effect (ATE) measures the overall impact of a program on an observable outcome variable. Under
perfect compliance, it is defined to be the difference in the empirical means of the outcome variable (Y collected at the end of the program) between the treatment and the control group. Thus, under
perfect compliance,
ATE = Y-bar_T - Y-bar_C,
where Y-bar_T is the sample mean of the outcome variable for everyone in the treatment group and Y-bar_C is the sample mean of the outcome variable for everyone in the control group.
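The classroom mechanics map directly onto a tiny simulation. In this Python sketch the score distributions are invented for illustration (they are not the authors' data): random assignment plays the role of the poker chips, and the treated group gets a fixed "wordlist boost" that the ATE should recover:

```python
import random

random.seed(7)

N = 200                      # hypothetical class size
students = list(range(N))
random.shuffle(students)     # random assignment via "poker chips"
treatment = set(students[: N // 2])

def quiz_score(treated):
    # Invented model: baseline ability plus a boost from seeing the wordlist.
    base = random.gauss(5, 1.5)
    return base + (2.0 if treated else 0.0)

scores_T = [quiz_score(True) for s in students if s in treatment]
scores_C = [quiz_score(False) for s in students if s not in treatment]

ate = sum(scores_T) / len(scores_T) - sum(scores_C) / len(scores_C)
print(round(ate, 2))  # close to the true boost of 2.0
```

Because assignment is random, pre-existing differences average out between groups, and the difference in means estimates the boost.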
Related Reading
After introducing the formal concept of ATE, the instructor can follow up on some of the classic studies below to discuss some of the actual applications of RCT and the usage of average treatment
Conditional cash transfer program: In an effort to improve children's schooling outcomes (test scores, completed grades, and enrollment), cash transfer payments have been provided as incentives to
parents' who send their children regularly to school. A randomized control trial implemented to understand the effectiveness of conditional cash transfers find positive association between the
program and - schooling enrollment, and completed grades of schooling in Mexico (Parker, Rubalcava, & Teruel, 2008; Behrman, Sengupta & Todd, 2005).
Deworming pills program: In an attempt to improve children's health and schooling, Miguel & Kremer (2004) and Bobonis, Miguel, & Puri-Sharma (2006) evaluate the effectiveness of providing deworming
pills to school age children using a randomized control trial. Both papers find positive impact of the intervention on children's schooling attendance.
Microfinance program: Banerjee et al. (2009) conduct the first randomized evaluation study to assess the effectiveness of microcredit on poverty. The authors find that increased access to microcredit is associated with increased expenditure on durable goods, though not with improvements in average household per capita expenditure, an important measure of well-being.
There are no formal assessments. However, instructors who have used the experiment in their lectures report very positive student feedback.
References and Resources
This activity was based on the paper Mani, S., and Dasgupta, U. (2010): "Explaining Randomized Evaluation Techniques Using Classroom Games. Available at SSRN: http://ssrn.com/abstract=1676876 or
1. Banerjee, A.V., Duflo, E., Glennerster, R., & Kothari, D. (2010). Improving immunisation coverage in rural India: clustered randomised controlled evaluation of immunisation campaigns with and
without incentives. BMJ2010; 340:c2220.
2. Behrman, J. R., Sengupta, P., & Todd, P. (2005). Progressing through PROGRESA: An Impact Assessment of a School Subsidy Experiment in Rural Mexico, Economic Development and Cultural Change,
University of Chicago Press, vol. 54(1), pages 237-75, October.
3. Bobonis, G. J., Miguel, E., & Puri-Sharma, C. (2006). Anemia and School Participation. J. Human Resources, XLI(4), 692–721.
4. Miguel, E., & Kremer, M. (2004). Worms: Identifying Impacts on Education and Health in the Presence of Treatment Externalities. Econometrica, 72(1), 159–217.
5. Parker, S. W., Rubalcava, L., & Teruel, G. (2008). Evaluating Conditional Schooling and Health Programs. Handbook of Development Economics. | {"url":"http://serc.carleton.edu/sp/library/experiments/examples/70039.html","timestamp":"2014-04-17T12:43:07Z","content_type":null,"content_length":"38444","record_id":"<urn:uuid:e86cf277-c8bd-48ec-8eb7-c838be505ffe>","cc-path":"CC-MAIN-2014-15/segments/1398223201753.19/warc/CC-MAIN-20140423032001-00490-ip-10-147-4-33.ec2.internal.warc.gz"} |
universal battery system
Definitions for universal battery system
This page provides all possible meanings and translations of the word universal battery system
The Standard Electrical Dictionary
1. Universal Battery System
A term in telegraphy. If several equal and high resistance telegraphic circuits are connected in parallel with each other from terminal to terminal of a battery of comparatively low resistance
each circuit will receive the same current, and of practically the same strength as if only one circuit was connected. This is termed the universal battery system. It is a practical corollary of
Ohm's law. The battery being of very low resistance compared to the lines the joining of several lines in parallel practically diminishes the total resistance of the circuit in proportion to
their own number. Thus suppose a battery of ten ohms resistance and ten volts E. M. F. is working a single line of one hundred ohms resistance. The total resistance of the circuit is then one
hundred and ten ohms. The total current of the circuit, all of which is received by the one line is 10/110 = .09 ampere, or 90 milliamperes. Now suppose that a second line of identical resistance
is connected to the battery in parallel with the first. This reduces the external resistance to fifty ohms, giving a total resistance of the circuit of sixty ohms. The total current of the
circuit, all of which is received by the two lines in equal parts, is 10/60 = .166 amperes. But this is equally divided between two lines, so that each one receives .083 ampere or 83
milliamperes; practically the same current as that given by the same battery to the single line. It will be seen that high line resistance and low battery resistance, relatively speaking, are
required for the system. For this reason the storage battery is particularly available. The rule is that the resistance of the battery shall be less than the combined resistance of all the
circuits worked by it.
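The dictionary's worked example is a direct application of Ohm's law and can be reproduced numerically; the helper below is just an illustration of that arithmetic:

```python
def line_current(emf, r_battery, r_line, n_lines):
    """Current received by each of n identical lines wired in parallel
    across one battery: Ohm's law with the lines' parallel resistance."""
    r_external = r_line / n_lines              # n equal lines in parallel
    total = emf / (r_battery + r_external)     # total battery current
    return total / n_lines                     # split equally per line

# One 100-ohm line on a 10-volt, 10-ohm battery: 10/110 ~ 0.0909 A (~90 mA).
assert abs(line_current(10, 10, 100, 1) - 10 / 110) < 1e-12
# Two such lines: total 10/60 ~ 0.1667 A, so each gets ~0.0833 A (~83 mA).
assert abs(line_current(10, 10, 100, 2) - 10 / 120) < 1e-12
```

Note the rule at the end of the entry: the battery's 10 ohms stays well below the 50-ohm combined resistance of the two lines, which is why each line's current barely changes.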
Find a translation for the universal battery system definition in other languages:
Use the citation below to add this definition to your bibliography:
Are we missing a good definition for universal battery system? | {"url":"http://www.definitions.net/definition/universal%20battery%20system","timestamp":"2014-04-19T00:58:52Z","content_type":null,"content_length":"26230","record_id":"<urn:uuid:b4f62d7d-4379-41da-8cb3-adc920bc1ee2>","cc-path":"CC-MAIN-2014-15/segments/1397609535535.6/warc/CC-MAIN-20140416005215-00105-ip-10-147-4-33.ec2.internal.warc.gz"} |
Here's the question you clicked on:
can anyone help me with this problem find vertex and equation, line of symmetry and graph the function. f(x)= 1/3 x^2
• one year ago
f(x) = x^2 / 3

I might be reading it wrong, I am sorry.
the parabola or quadratic equation is f(x) = ax^2 + bx + c, with vertex at x = -b/(2a). If you have f(x) = (1/3)x^2, then a = 1/3, b = 0, and c = 0. The vertex is at x = -b/(2a) = -0/(2(1/3)) = 0; sub this into f(x) to get y = (1/3)x^2 = 0.
so the vertex is V(x, y) = V(0, 0), and the line of symmetry is the vertical line x = 0. For graphing use values of x = 0, +-1, +-2, +-3, etc... :D good luck now
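That vertex formula is easy to sanity-check numerically. The helper below is a hypothetical sketch (not part of the thread) that evaluates x = -b/(2a) for a general quadratic:

```python
def vertex(a, b, c):
    """Vertex (x, y) of f(x) = a*x^2 + b*x + c, using x = -b/(2a)."""
    x = -b / (2 * a)
    y = a * x**2 + b * x + c
    return (x, y)

# f(x) = (1/3) x^2  ->  a = 1/3, b = 0, c = 0
print(vertex(1/3, 0, 0))   # → (0.0, 0.0); the line of symmetry is x = 0
```

For a quick graph, the same function f gives the points the reply suggests plotting: f(±1) = 1/3, f(±2) = 4/3, and so on.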
You had it right, I think I was confused, but you really helped me understand the question...

ok good,,, good luck now and have fun ....:D

I will once someone looks at my problems lol..... One person is looking, I just want to ensure I am on the right track....

ok where is the prob? i may be able to help a little bit :D

How would I show you?

If anyone can help me check my work I would be very thankful. So it can be more understandable I attached my work in a word file. Thank you so very much, I just want to ensure I am on the right track.

Thats the question

hmm go to your prob site and notify my name there then i will click it and be there :D

like this: hi andjie

I say your name

yes, copy and paste my name there, then ill just click on it

ok I did that

hmm i didnt have a notification,, did you highlight and copy then paste my name there?

Paste it where exactly lol

in the type-your-reply area, because I did that

on where you posted your problem

ok go back to where you posted your problem then paste my name there

I posted your name in the question

So confusing

hmm i dont know why i didnt get a notification?

is it still open? why dont you close it then repost them new
| {"url":"http://openstudy.com/updates/50b6c57ce4b0c789d50f8d98","timestamp":"2014-04-19T20:01:12Z","content_type":null,"content_length":"84678","record_id":"<urn:uuid:4681fab4-a5ee-44f1-a7be-c1deed3a0896>","cc-path":"CC-MAIN-2014-15/segments/1398223211700.16/warc/CC-MAIN-20140423032011-00588-ip-10-147-4-33.ec2.internal.warc.gz"}
The higher math, to me, was poetry
I think that I shall never see
A polynomial lovely as a tree.
On parents' night, I sat in my son's algebra classroom, looked at the blackboard, and felt an old, familiar horror. I was back in my own high school algebra class, confronted by scary polynomials.
As my son's teacher talked about the curriculum, I recalled my freshman math mantra: "Please don't call on me!"
By the time I reached ninth-grade math, I had hoed a hard row from primary arithmetic. I had barely memorized the multiplication tables to the satisfaction of Miss McCormick and squeaked through long
division with Mr. Lynch. Fractions gave me a frisson.
But at the portal of higher math, after only a few algebra quizzes, I concluded that I was truly "pushing the envelope" of my numerical skills. Algebra was Greek to me, all its meaning quite lost in
I needed lots of translation: My ear was attuned to words and tone-deaf to numbers. Words entertain multiple possibilities; numbers, I thought, were restricted to one-dimensional answers. I foundered
on math syntax. I could not fathom the virtue in absolute answers when ambiguity seemed so much more attractive and interesting.
I yearned for the beautiful words of the problem-solving I heard in poetry, like the poetic equations that Archibald MacLeish wrote: "A poem should be equal to: Not true." Could I not earn
alternative algebra credit by "calculating" nontraditional equations such as this?
I decided to let the poets teach me math. A poem is a word problem, after all. And poems not only preserve ambiguity, they draw to within a fraction's fraction's fraction of a common denominator.
I count it a virtue that poems tantalize with this sense of almostness, for instance, when Wallace Stevens perceived "nothing that is not there and the nothing that is" while having the "mind of
winter." Emily Dickinson chose to "dwell in possibility" - and in all probability had problems kindred to mine with polynomials.
Yet poets do not shy away from variables, I learned. In fact, I loved the variables, verbal and mathematical, for which e.e. cummings was known. He "rejoice[d] in a purely irresistible truth" that
two times two is five, an equation which, I intuit, is not useful in building plumb houses or calculating gas mileage. However, it comforted my algebraic struggles to hear a poet wink at math's
tyrannical definiteness.
As I slouched furtively in the back row of algebra class ("Please don't call on me!"), I heard e.e. whisper to me of the universe next door. I went. Since I couldn't avoid math, I could at least
experience it from an obtuse angle.
Geometry modified the experience. It had a transmathematical vocabulary: Tangent, apex, acute - words a poet could appreciate. Might not geometry be a poetry of shape, proportion, and beauty simply
disguised as math? Imagine - numbers arising from the blackboard as pyramids, Parthenons, or Bauhaus towers!
Poet Rita Dove wrote: "I prove a theorem and the house expands." She heard language translating abstractions directly into sensible things, taking flight with poetry's prime theorem: metaphor:
... the windows have hinged into butterflies,
sunlight glinting where they've intersected.
Now we're talking - a foot in the door of the possibility of doing math with words.
A metaphor is an equation, after all. Robert Frost called poetry "the one permissible way of saying one thing and meaning another," but it's really just an equation that need not balance, an equation
containing the "pleasure of ulteriority."
Frost wrote: "Like a piece of ice on a hot stove the poem must ride on its own melting." I liked to think of my answers on algebra tests as just this sort of equation: poetic attempts to be "equal
to, not true."
But I earned no credit. Algebra was a hot stove. I was ice. That's my metaphor for failing the course.
Ultimately, I felt successful in math, once I found math problems such as Howard Nemerov had enumerated in "To David, About His Education":
The world is full of mostly invisible things,
And there is no way but putting the mind's eye,
Or its nose, in a book, to find them out,
Things like the square root of Everest
Or how many times Byron goes into Texas....
This is my adopted higher math: word problems with English poets rustling in Texas; all hat and no cattle - my kind of polynomial. Finally, I've decided, it comes down to a question of how to
describe these "invisible things." Numbers or words, for instance, if the Parthenon's beauty is to be described?
I much prefer descriptors like the phrase "the golden mean" rather than "Y = 5." "The poet is the priest of the invisible," according to Wallace Stevens.
Hence my second year slouching furtively in algebra while exploring the universe next door.
Hence my son's smug satisfaction on parents' night, as he deftly "solves for Y," while I enjoy my ulterior contentment by solving for "Why?"
(c) Copyright 2000. The Christian Science Publishing Society | {"url":"http://www.csmonitor.com/2000/1228/p22s1.html","timestamp":"2014-04-20T05:09:34Z","content_type":null,"content_length":"52385","record_id":"<urn:uuid:6fdf7100-abdd-48b6-885d-25bf634743e2>","cc-path":"CC-MAIN-2014-15/segments/1397609537864.21/warc/CC-MAIN-20140416005217-00260-ip-10-147-4-33.ec2.internal.warc.gz"} |
New Castle, DE Math Tutor
Find a New Castle, DE Math Tutor
...I have been playing the double bass and bass guitar for many years and have studied jazz bass and piano with seasoned performers. I have received several undergraduate poetry prizes, including
First Place in Christianity & Literature's Student Writing Contest. I was also a Research Assistant for Dr.
38 Subjects: including calculus, composition (music), ear training, elementary (k-6th)
...My Ph.D. thesis involves solving a set of simultaneous partial difference equations with boundary values. I used discrete math concepts to solve several projects I worked on in industry. In
fact I earned a US patent for one of those solutions.
39 Subjects: including trigonometry, ACT Math, discrete math, statistics
...I have been correcting the written assignments of friends and family members for as long as I can remember. I am able to precisely ascertain the area(s) of difficulty when assessing a student.
I am able to clearly explain a concept in a variety of ways to ensure understanding.
30 Subjects: including algebra 2, LSAT, English, geometry
...My approach to tutoring is simple. First I offer a free 1-hour consultation to any potential clients. From there, I develop a game plan to get the student back on track with his or her good
grades, and if the parent(s) agree to the terms, then I will use the sessions to help the student take a more positive approach to the math subject.
4 Subjects: including algebra 1, algebra 2, prealgebra, trigonometry
...I would feel comfortable teaching my own faith or helping students who want to learn about comparative religion. As a member of the clergy, I have been involved in public speaking all of my
adult life. It didn't come easy for me, but as time passed, I learned to feel more at ease and more adept at the art.
20 Subjects: including prealgebra, SAT math, algebra 1, reading
Related New Castle, DE Tutors
New Castle, DE Accounting Tutors
New Castle, DE ACT Tutors
New Castle, DE Algebra Tutors
New Castle, DE Algebra 2 Tutors
New Castle, DE Calculus Tutors
New Castle, DE Geometry Tutors
New Castle, DE Math Tutors
New Castle, DE Prealgebra Tutors
New Castle, DE Precalculus Tutors
New Castle, DE SAT Tutors
New Castle, DE SAT Math Tutors
New Castle, DE Science Tutors
New Castle, DE Statistics Tutors
New Castle, DE Trigonometry Tutors | {"url":"http://www.purplemath.com/new_castle_de_math_tutors.php","timestamp":"2014-04-20T01:56:42Z","content_type":null,"content_length":"23953","record_id":"<urn:uuid:b83138c0-96fa-4d55-9703-33437bee578d>","cc-path":"CC-MAIN-2014-15/segments/1397609537804.4/warc/CC-MAIN-20140416005217-00504-ip-10-147-4-33.ec2.internal.warc.gz"} |
A case study of the completion procedure: Proving ring commutativity problems
, 1996
Cited by 24 (5 self)
Introduction Many researchers who study the theoretical aspects of inference systems believe that if inference rule A is complete and more restrictive than inference rule B, then the use of A will
lead more quickly to proofs than will the use of B. The literature contains statements of the sort "our rule is complete and it heavily prunes the search space; therefore it is efficient". 2 These
positions are highly questionable and indicate that the authors have little or no experience with the practical use of automated inference systems. Restrictive rules (1) can block short, easy-to-find
proofs, (2) can block proofs involving simple clauses, the type of clause on which many practical searches focus, (3) can require weakening of redundancy control such as subsumption and demodulation,
and (4) can require the use of complex checks in deciding whether such rules should be applied. The only way to determ
- In Proceedings 7th IEEE Symposium on Logic in Computer Science
Cited by 18 (0 self)
A new algorithm for computing a complete set of unifiers for two terms involving associative-commutative function symbols is presented. The algorithm is based on a non-deterministic algorithm given
by the authors in 1986 to show the NP-completeness of associative-commutative unifiability. The algorithm is easy to understand, its termination can be easily established. More importantly, its
complexity can be easily analyzed and is shown to be doubly exponential in the size of the input terms. The analysis also shows that there is a double-exponential upper bound on the size of a
complete set of unifiers of two input terms. Since there is a family of simple associative-commutative unification problems which have complete sets of unifiers whose size is doubly exponential, the
algorithm is optimal in its order of complexity in this sense. This is the first associative-commutative unification algorithm whose complexity has been completely analyzed. The approach can also be
used to show a singl... | {"url":"http://citeseerx.ist.psu.edu/showciting?cid=774212","timestamp":"2014-04-20T05:01:47Z","content_type":null,"content_length":"17339","record_id":"<urn:uuid:9ccb1443-65e9-470b-acf9-288dc2de71ec>","cc-path":"CC-MAIN-2014-15/segments/1397609537864.21/warc/CC-MAIN-20140416005217-00062-ip-10-147-4-33.ec2.internal.warc.gz"} |
Combinatorial algorithms in C#
In this article will be presented a small number of classes that can be used to perform some basic combinatorial operations on collection of objects. Here you won’t find a detailed explanation of how
the code works, but mainly on how to use these utility classes. The source code is small (only 4 classes in 4 files) so if you want to see how the algorithms are implemented, have a look there.
The purpose of the utilities shown here is to present the programmer with an easy way of generating all the possible combinations, permutations, and variations from a collection of objects. For example, suppose that you have balls numbered from 1 to 35 in a black box, and want to pick 5 of them. The number of possible combinations of 5 balls taken from 35 is C(35, 5) = 324,632, or approximately 325,000. The presented classes will generate every single one of them. This is handy if, say, you have a theory of how the black box works and want to test all the combinations against it in order to figure out the most likely combination of 5 numbers to be drawn.
First write:
using Combinatorial;
Now declare your array of integers :
Array myIntArray = Array.CreateInstance(
Type.GetType("System.Int32"), 35);
for (int j = 0; j < myIntArray.Length; j++)
myIntArray.SetValue(j, j);
Then make a combinatorial object that manipulates the objects in this array (in this particular case the objects are of type System.Int32), like this:
Combinations combs = new Combinations(myIntArray, 5);
Now write a cycle to generate all the possible combinations of 5 integers from 35.
while (combs.MoveNext()) {
    Array thisComb = (Array)combs.Current;
    for (int i = 0; i < thisComb.Length; i++) {
        // Access the value as an int. This requires unboxing.
        int nVal00 = (int)thisComb.GetValue(i);
        // Access the value as an object. This requires no unboxing.
        Object nVal01 = thisComb.GetValue(i);
    }
}
The Combinations, Permutations and Variations classes all support the System.Collections.IEnumerator interface, so it is very easy to cycle through them. If you want to reset these objects just call
the Reset() member function. After that, all the combinations generation will start anew.
The collection that is going to be used is passed as a parameter to the constructor of the combinatorial object. This means that you cannot reinitialize the object with a new collection when you have
finished using the current one, but have to create an entirely new Combinations (or some of the others) object.
There are three possible constructors :
protected CombinatorialBase(Array arrayObjects, int nKlass );
protected CombinatorialBase(IList listObjects, int nKlass );
protected CombinatorialBase(IEnumerator enumeratorObjects,
int nKlass );
As you can see, you can pass any collection that supports either IList or IEnumerator interfaces. Or you can pass any array of objects. This means that you can use these classes on almost every
collection met in the .NET framework. Because the combinatorial classes support the IEnumerator interface themselves, you actually can create constructs like: combination of combinations or permutations
of combinations of variations and all the stuff like this, that you can think of. However I strongly advice you not to do so (unless for a situation with small number of combinations) because the
constructor cycles through all the objects that it have to manipulate ( the objects in the collection). If you pass another combination this process can take a lot of time.
Armed with all the constructors from above we can use code like this :
double[] doubles = new double[10];
for (int j = 0; j < doubles.Length; j++)
doubles[j] = (double)j;
// Generate the combinations of 5 numbers from a bunch of 10
Combinations combs = new Combinations(doubles, 5);
Or like this when using ArrayList :
ArrayList myArrayList = new ArrayList(15);
for (int j = 0; j < 10; j++)
    myArrayList.Add(j);
// Generate all the permutations of 10 objects.
Permutations perms = new Permutations(myArrayList);
And even some unusual constructs like this one here :
string myString = "abcdefghij";
// Generate all the possible five char combinations from the
// letters of this string.
Combinations combs = new Combinations(myString.GetEnumerator(), 5);
And now a little note for those of you who would say: Hey, isn't it true that we can generate all the permutations of 5 elements by generating all the variations of those five elements of size (class) 5?
This means that :
Permutations combs = new Permutations(myArrayList);
Variations combs = new Variations(myArrayList, myArrayList.Count);
actually do the same thing.
So why do we need a Permutations object when we have the more general Variations object? The truth is that mathematically they do the same thing. But the algorithms for generating combinations and permutations are so much easier to implement than those for variations that these two objects are the basis of this library. The Variations object actually uses the combinations and permutations to generate all the possible variations. If you need only permutations, please always use the Permutations class.
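As a quick numeric cross-check of that design (written in Python's itertools rather than the article's C#, so nothing here is the library's API): variations of class k are exactly the permutations of each k-combination, and full-size variations coincide with plain permutations.

```python
from itertools import combinations, permutations

def variations(items, k):
    """Variations (k-permutations) built the way the article describes:
    permute each k-combination."""
    for comb in combinations(items, k):
        yield from permutations(comb)

items = range(5)
# Full-size variations coincide with plain permutations: 5! = 120 of each.
assert sorted(variations(items, 5)) == sorted(permutations(items))
# For class k the count is n! / (n - k)!: here 5 * 4 * 3 = 60.
assert len(list(variations(items, 3))) == 60
```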
One last thing to mention : This library generates only combinations, variations and permutations without repetition. If you need repetition you have to implement it yourself. I do not have a need
for such functionality right now, so probably won’t write it very soon.
Some may ask why I haven't implemented the IEnumerable interface (like in the String or Array classes), but chose IEnumerator instead. The IEnumerable interface has just one method, GetEnumerator(), that returns an enumerator. Each time you request an enumerator, this interface should return a valid enumerator over the sequence. And one very important thing: if you have requested an enumerator over the same sequence previously, it must not become invalid, because it still may be in use. This is impossible with the current implementation of the combinatorial classes, and I don't see a way to easily modify this. So I supply just IEnumerator for now.
Plane's wings and Bernoulli's equation
How large an area is included in this velocity profile?
I'm not really sure what you're asking here...
Also isn't part of the pressure differential related to work done by a wing onto the air? The work done results in a downwards (lift) and somewhat forwards (drag) flow of air after a wing passes
through a volume of air (wrt air) (the exit velocity of the affected air when it's pressure returns to ambient). That work done on the air involves a mechanical interaction that violates Bernoulli,
but is responsible for part of the lift (but I don't know by how much).
You're trying to split up a single problem here, and the truth is that it's really just multiple ways of looking at the same answer. If you know the velocity distribution around a low speed airfoil
(< mach 0.3), bernoulli can give you a very accurate lift and induced drag calculation, and it does indeed account for the work done on the air. That isn't a separate term, it's merely a different
way of looking at the same problem (and it certainly doesn't "violate" bernoulli). You can get exactly the same answer for the lift by either taking a control volume around the airfoil and looking at
the momentum flux into and out of the control volume (effectively a newtonian analysis), and by looking at the pressure distribution on the airfoil surface itself. In oversimplified explanations,
these two approaches are often stated as conflicting, or different, but in reality, they are both valid.
The real flaw in popular belief about how airfoils work isn't in the fact that high velocity air has a low pressure - this is both true, and a valid way to analyze the problem. The problem with
popular belief is the explanation for why
the air is traveling faster over the top of the airfoil. This is commonly explained with the equal transit time assumption, which is fatally flawed in many ways. The true reason has to do with the
generation of circulation around the airfoil due to the need to have the flow attached at the trailing edge (known as the kutta condition). However, when this circulation is calculated and the
velocity profile around the airfoil is obtained, the bernoulli relation can absolutely be used to transform that velocity profile into a pressure distribution, and the force on the airfoil is (as
would be expected) simply the pressure integrated around the outside surface of the airfoil.
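That last step (velocity profile to pressure distribution via Bernoulli) is easy to sketch numerically. The snippet below illustrates only the incompressible Bernoulli relation, not an airfoil solver; all the numbers are made up:

```python
def bernoulli_pressure(v, v_inf, p_inf, rho):
    """Incompressible Bernoulli along a streamline:
    p + 0.5*rho*v^2 = p_inf + 0.5*rho*v_inf^2."""
    return p_inf + 0.5 * rho * (v_inf**2 - v**2)

rho, p_inf, v_inf = 1.225, 101325.0, 50.0      # sea-level air, 50 m/s freestream
p_top = bernoulli_pressure(60.0, v_inf, p_inf, rho)      # faster flow above
p_bottom = bernoulli_pressure(45.0, v_inf, p_inf, rho)   # slower flow below

# Faster air above the surface -> lower pressure there, hence a net upward push.
assert p_top < p_inf < p_bottom
lift_per_area = p_bottom - p_top   # net upward pressure on a unit area, in Pa
```

Integrating such pressure differences around the actual surface, with the real velocity distribution obtained from the circulation/Kutta-condition analysis, is what yields the lift the post describes.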
If you plot the pressure distribution around the airfoil, you will also notice a large low pressure bubble above the top surface of the airfoil, and a less substantial high pressure bubble below the
airfoil. Both of these will tend to turn the freestream flow farther from the airfoil downwards, so the downwash generated by the airfoil is not an independent effect from the pressure distribution -
rather the two effects are interlinked and inseparable from each other.
Hopefully this helps clear things up a bit... | {"url":"http://www.physicsforums.com/showthread.php?p=3862808","timestamp":"2014-04-17T15:36:06Z","content_type":null,"content_length":"91302","record_id":"<urn:uuid:bfc14757-e14d-499a-8342-ab9f229bec11>","cc-path":"CC-MAIN-2014-15/segments/1397609530136.5/warc/CC-MAIN-20140416005210-00047-ip-10-147-4-33.ec2.internal.warc.gz"} |
iHack, therefore iBlog
August 9, 2013 § Leave a comment
A Function of Scale
Draft 1, Ernest Prabhakar, 2013-08-08
The Premise
Real systems aren't linear, but have scales below which the cost is fixed and above which it is astronomical.
The Goal
Extend/Restrict the Minion Machine to capture what it means to operate at “optimal scale”.
The Concept
Define a Multi-Minion Machine as a Minion Machine with the following changes:
1. There is one minion for each bin (and thus each object) (M = N)
2. Minions never move; they just shoot objects to other minions.
3. The N objects are arranged in a ring of radius R, so bin "1" is next to bin "N".
4. The objects travel on independent tracks of size r << R, so they don’t collide, but take effectively the same distance to a given bin.
The Model
Assume the minions are smart enough to figure out the optimal route from one bin to another. Instead of specifying a distance, we can thus just specify a destination (and not have to worry about
‘overflow’ or ‘underflow’).
Our primitive commands only need specify the initial (b_i) and final (b_f) bins, giving a size of:
S1 = 2 log(N) := 2 k
All other quantities are the same, except that the average distance d will be less (half?) due to the ring topology.
Let us use bold characters to represent an action tuple (E, t) whose norm is E times t. For example, operation L has the action A_L = (E_L, t_L). The action of our system can be decomposed into C for
the communicator and M for movement.
If solving the puzzle requires n commands of size S1 and average distance d, we can write our action as:
A0 = n S1 C + n M(d)
[Errata: parallel operations could complete in a time proportional to max(d), independent of n. There is a complex dependency on the relative values of C_t and M_t which I overlooked].
Now we can ask: would higher order commands reduce the action?
To start, let us introduce a program with per-command cost T that interprets a command as a transposition instead of a move. For example, if N = 16, the command 0x1f (bins 0x1 and 0xf) is split into 0x1f and 0xf1 and
executed in parallel.
For a set of disjoint transpositions that would normally take n moves to solve, the action is now:
A1 = n/2 S1 C + n/2 M(d) + n/2 T
For this case, it is a net win when (substituting k = log(N) = S1 / 2):
T < 2 k C + M(d)
which holds for sufficiently large k.
However, that advantage only holds for disjoint permutations. Conjoined permutations (e.g., cycles) take the same number of steps as before, but most now pay the penalty T.
To solve that, we could replace T with a program L that describes loops (cycles) rather than mere transpositions. This gives us, for all (?) permutations:
A2 = n/2 S1 C + n/2 M(d) + n/2 L
with a similar constraint:
L < 2 k C + M(d)
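The same break-even structure governs both the transposition cost T and the loop cost L. A toy calculation makes it concrete; the sketch below collapses each (E, t) action tuple to a single scalar cost (a simplification of the draft's model), and all the numbers are made up:

```python
import math

def a0(n, k, C, M):
    """Baseline action: n primitive move commands of size S1 = 2k."""
    return n * (2 * k) * C + n * M

def a1(n, k, C, M, T):
    """Transposition program: n/2 commands, each doing two moves,
    plus the per-command interpretation cost T."""
    return (n / 2) * (2 * k) * C + (n / 2) * M + (n / 2) * T

n, N = 100, 256
k = math.log2(N)          # k = log(N) = 8 bits per bin index
C, M = 1.0, 5.0           # per-bit control cost and per-move cost

# The draft's break-even condition: the program wins iff T < 2kC + M(d).
threshold = 2 * k * C + M
assert a1(n, k, C, M, T=threshold - 1) < a0(n, k, C, M)
assert a1(n, k, C, M, T=threshold + 1) > a0(n, k, C, M)
```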
A particular command/program specification can be interpreted as a “strategy”.
For example [as Christy suggested], imagine two players Satan and God.
1. Each of them is given a Multi-Minion box for which they devise a fixed strategy behind closed doors.
2. When the curtain comes up, Satan & God get to see each other’s strategies.
3. Satan secretly feeds commands into his box to entangle a set of balls.
4. Those balls are teleported into God’s box, where he must dis-entangle them.
Every command costs some number of “action points” (great name, Christy :-). The winner is the player who spends the fewest action points.
This leads to a number of interesting questions:
• Are there optimal strategies for God and Satan? Is the optimal strategy the same for both players? Is there a meta-strategy for which commands Satan should use, after finding out God’s strategy?
• Does one player have an intrinsic advantage in this case? What about the case where the entanglement isn’t simple permutations, but some NP-complete problem?
• How should we calculate the per-command cost P for the program used to implement the strategy? Naively, L ought to be bigger than T, but by how much? Can we break all possible strategies down
into a “basis” of simpler components, allowing cost comparisons between them?
• Do any of these results change in interesting ways if we add baseline costs for any of the elements?
I’m not sure if we learned anything about scale, but we did develop a useful concept of strategy. It also implies that the action (which is perhaps closer to “difficulty” rather than mere
“complexity”) depends on interactions between the instruction set chosen and details of the input vectors.
Then again, maybe that is why we have different scales: to allow optimal instruction sets for different levels of representing a problem…
August 9, 2013 § 1 Comment
The Action of Complexity
Draft 2, Ernest Prabhakar, 2013-08-07
Inspired by a proposal from Christy Warren
The Premise
Using concepts derived from physics such as Energy and Time, we can gain insight into the nature of computational complexity.
The Goal
Devise the simplest possible physical system that captures the aspects of computation relevant to complexity theory.
The Concept
We are given a box containing:
1. A set of N distinguishable objects each of which occupies one bin in a linear array, also of size N.
2. An army of M “minions” that can move those objects (with some cost in energy and time)
3. Some mechanism for communicating with the minions (which also costs energy and time)
4. Some reliable way to measure both energy and time
The goal is to move the objects from an initial ordering I to final ordering F while consuming the least amount of time and energy. Importantly, the only way to accomplish this task is by giving
commands to the minions. Minions only understand commands of the form “Minion – Starting Bin – Direction – Distance”.
The Model
We start by making a number of simplifying assumptions. These can be revisited later as needed.
1. The energy required for the minions to live and move themselves is either negligible or from an external source. The only energy we care about is that required to i) move the objects and ii)
communicate with the minions.
2. All objects have significant mass (so it takes energy to move them) but negligible size (so we don’t need to worry about collisions).
3. All objects have the same mass m0 and top speed v0. The array has negligible friction, and the distance between bins is very large compared to the distance required to accelerate to top speed.
This allows us to assume that moving any object from one bin to another takes the same amount of energy (to accelerate & decelerate):
E0 = m0 v0^2
but a varying amount of time, proportional to the distance x:
t = x / v0
4. The communicator uses something like FM modulation, which requires energy E_c and time t_c, both proportional to the dimensionless size S of the command, e.g. in bits:
E_c = a S
t_c = b S
The sizes N and M are fixed, so we can specify that all primitive commands use a fixed-width bitfield of size S0:
S0 = log(M) + log(N) + 1 + log(N)
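As a rough sketch (Python; the values of M and N are illustrative, and base-2 logarithms rounded up to whole bits are an assumption, since the text leaves the log base unspecified):

```python
import math

def command_bits(M, N):
    """Fixed-width command size in bits:
    minion id (log M) + starting bin (log N) + direction (1 bit) + distance (log N)."""
    return math.ceil(math.log2(M)) + 2 * math.ceil(math.log2(N)) + 1

# e.g. M = 16 minions, N = 1024 bins: 4 + 10 + 1 + 10 = 25 bits
S0 = command_bits(16, 1024)
print(S0)  # → 25
```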
Say that it takes n steps to obtain the desired ordering. Step i traverses a distance x_i; summing these distances and dividing by n gives the average distance d.
Assuming serialized movements with no latency between them gives:
Energy = n a S0 + n E0 = n (a S0 + E0)
Time = n b S0 + n d / v0 = n (b S0 + d/v0)
We can multiply these to get the action:
Action = Energy * Time = n^2 (a S0 + E0)(b S0 + d/v0)
Since S0 is dimensionless, we can pull all the dimensions into a new constant h, the per-bit action of the communicator:
h = a b
giving us new dimensionless constants:
e = E0 / a
f = d / (b v0)
allowing us to write:
Action = n^2 h (S0 + e)(S0 + f) = n^2 h (S0^2 + (e + f)S0 + e f)
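A minimal numerical sketch of the serialized model (Python; the constants a, b, E0, v0, S0 and the step distances below are illustrative, not taken from the text). It checks that the summed energy and time reproduce the factored form of the action:

```python
def action(distances, S0, a, b, E0, v0):
    """Serialized moves: each step costs a*S0 + E0 in energy and
    b*S0 + x/v0 in time; the action is (total energy) * (total time)."""
    n = len(distances)
    energy = n * (a * S0 + E0)
    time = n * b * S0 + sum(x / v0 for x in distances)
    return energy * time

# Cross-check against the dimensionless form n^2 h (S0 + e)(S0 + f)
distances = [3.0, 5.0, 1.0]          # step distances x_i, in bin units
a, b, E0, v0, S0 = 2.0, 0.5, 8.0, 4.0, 25
n, d = len(distances), sum(distances) / len(distances)
h, e, f = a * b, E0 / a, d / (b * v0)
assert abs(action(distances, S0, a, b, E0, v0)
           - n**2 * h * (S0 + e) * (S0 + f)) < 1e-9
```

The equality holds because Time = n b S0 + Σ x_i / v0 = n (b S0 + d/v0), so the product factors exactly as n^2 a b (S0 + E0/a)(S0 + d/(b v0)).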
The action can be interpreted as a measure of the effort required to ‘disentangle’ a system from an initial ordering I to final ordering F.
The constant e is the ratio between the energy required for each step of movement (E0) and that for each bit of control (a).
The constant f is the ratio between the average time required for each step of movement (d/v0) and that to send each bit of control (b).
This tells us that the effort is primarily determined by:
• Movement, when e + f >> 1
• Control, when e + f << 1
• Energy, when e >> f
• Time, when f >> e
While those are perhaps obvious, this model also provides a precise way to measure the effort (action) in intermediate cases where e and f are comparable to 1 and each other. It also gives us a
mathematical formalism that can be used to minimize the action when varying some of the constants or extending the action.
• Is the action the right way to combine E and t? What are the alternatives, and their advantages and disadvantages?
• Right now having more minions doesn’t help (or hurt). What happens if we include their energy cost, but allow them to perform actions in parallel? What if we are allowed (at some cost) to send
the same command to multiple minions at once?
• What if the energy cost depends on the distance between bins, rather than being constant?
• What is the physical interpretation of h e f, the “pure movement” action d E0 / v0?
This model leads to a natural definition of action for computational systems that bears some interesting similarities to the idea of ‘complexity’. Fleshing this out, however, would require a mechanism for encoding (and costing) higher-order algorithms such as “sort the array”, rather than merely “move these objects between bins”.
May 21, 2013 § Leave a comment
In my opinion, BitC is the most innovative take on systems programming we’ve seen since the invention of C. While sad that it failed, I am deeply impressed by the thoughtful post-mortem by Jonathan
S. Shapiro. Here are links to the various threads of his analysis (and the equally thoughtful responses):
October 19, 2012 § Leave a comment
[A follow-on to Spreading Effective Vision and The Agile Church, addressed specifically to the Church Spread of Kingsway Community Church.]
In less than twelve months, together with the Holy Spirit, we have completely reinvented Kingsway Church. While our overall numbers may be the same, we have spread to two new neighborhoods,
dramatically expanded our pastoral staff, and filled much of our congregation with renewed vision for reaching our communities.
What if that was just the beginning?
October 17, 2012 § Leave a comment
Great summary of what “Designing for the Future” really involves.
As software developers, we know that our systems will evolve with time. We must understand the forces that drive this evolution, in order to design systems that are easily evolvable. Unfortunately,
many programmers have misconceptions about the real drivers of software evolution and about their own ability to predict how a system will evolve.
In my many years of experience as a professional software developer I had the luck to work on many projects from the start. But on other occasions I joined projects that were already mature and deployed to real users. In these projects I observed how difficult it is to maintain systems that were not planned for change. I condense my personal experience below in the form of four myths of software evolution.
Myth 1 – Changes are driven by improvements in implementation
Very often programmers believe that they will only have to change their code if they think about…
October 11, 2012 § 2 Comments
While discussing The Agile Church and Metrics versus Goals, I realized that our organization’s primary motivation for adopting Agile practices is to spread the ownership of effective vision.
That is, we start with a shared belief that vision ought to be:
1. Effective: timely, clear, actionable & aligned with the organization’s overall purpose
2. Spread: distributed from the core leadership out to every member
3. Owned: each person takes responsibility for how they implement that common vision
Working from there, we can adapt techniques from, e.g., Scrum, that will help our organization achieve that goal.
Here are what I consider the most powerful suggestions:
1. Adopt a mindset of continuously developing and implementing new visions
2. Care about improving how we do things, not just what we do
3. Maintain a written backlog of “things worth doing/changing”
4. Innovate in seasons of 4-8 weeks, tied to, e.g., a sermon series
5. The leader (pastor) prioritizes 1-3 items from the backlog to focus on each season
6. The team owns the vision (together) and its implementation (individually)
7. Define the conceptual goal and practical metrics in terms of the value delivered to the customer (e.g., God)
8. At the end of each season, celebrate what was accomplished (“thanksgiving”) and reflect on what did or did not go well (“confession”)
To me, the key is moving from strategic once-a-year vision-and-budgeting meetings for leaders towards tactical “sprints” that mobilize the entire organization (congregation).
This pace may sound a bit exhausting, but that very awareness forces us to alternate “productive” and “relaxing” sprints to keep the whole community healthy. It is already too easy to fall into ruts
where some people never do much while others are continually burning themselves out. A good process should make explicit important issues that were previously implicit, so we are forced to
consciously manage them.
Volume 17, Issue 12, December 1976
We study an N‐level system coupled linearly to an infinite quasifree Fermi or Bose reservoir in the vacuum state or in a state corresponding to an arbitrary temperature. We show that the singular
reservoir limit can be performed in the vacuum state and at infinite temperature, thus leading to a completely positive Markovian reduced time evolution for the system, which, in the infinite
temperature case, preserves the central state. On the other hand, no such limit is possible for KMS states (finite temperature) and at zero temperature. Some extension to norm‐continuous
semigroups of an infinite‐dimensional B (H) is possible.
In this paper, two general properties of the conserved discrete states CIR systems have been found: (1) the equal spacing of and a general expression for the characteristic roots of their
transition‐rates‐matrices; (2) a general formulation for their conditional probabilities. The discussion is also extended to the disappearing discrete‐states CIR systems which include the
infinite level harmonic oscillators as a special case.
Two new proofs are given of the Dyson and Lenard lower bound for the energy of matter with boson electrons. Another result is a new inequality for the two‐point correlation function.
We present a rigorous theory of magnetohydrodynamical shock waves in the framework of a given curved space–time, under general assumptions corresponding both to a plasma and to a condensed
medium. The results can be of use in astrophysics. We prove the timelike character of the wavefronts, the main thermodynamic inequalities, the relative location of the speeds of the shock waves
with respect to the magnetosonic and Alfvén speeds, and show some existence and uniqueness theorems. In particular, we show that there can exist initial states giving slow shocks, but no weak
We consider the two first order differential operators A[x] = μ(x) ∂/∂x + λ(x), B[x] = −(∂/∂x) μ(x) + λ(x), associate two kernels f, g satisfying both well‐defined boundary conditions and A[x] f = B[y] g, A[y] f = B[x] g, and construct the Fredholm determinants corresponding to these kernels. From these determinants we can build up solutions of second order differential equations. These solutions have an interpretation in the Schrödinger inversion formalism. For instance, for the inversion at fixed angular momentum l, these solutions for k = 0 correspond to the classical Gel’fand–Levitan and Marchenko equations, whereas, for k ≠ 0, they correspond to k‐dependent potentials. Similarly, for the inversion at fixed k, these solutions for l = 0 correspond to the classical Regge–Newton equations, whereas, for l ≠ 0, they correspond to l‐dependent potentials. More generally we show, in a generalized inversion formalism, how parameter‐dependent potentials appear very naturally in the theory.
Coupled gravitational and electromagnetic perturbations of a Reissner–Nordström black hole are analyzed using the Newman–Penrose formalism. It is shown that χ_1^B (≡ 3ψ_2 φ_0^B − 2φ_1 ψ_1^B) or χ_{−1}^B (≡ 3ψ_2 φ_2^B − 2φ_1 ψ_3^B) determines the perturbations except for those corresponding to an infinitesimal change in the mass, charge, and angular momentum parameters of the black hole.
A simple sufficiency condition for the zeros of a polynomial of grand partition function form to lie entirely on the unit circle in the complex fugacity (z) plane is rigorously proven. The condition has two parts: the canonical partition function Q_n(M) is symmetric, Q_n(M) = Q_{M−n}(M), and is bounded above by the binomial coefficient (M choose n). This represents a generalization of the condition given by Lee and Yang in the context of the Ising model and the proof is independent of theirs. Necessity of the condition is trivially proven.
We use the Frenet–Serret formalism to study the intrinsic geometry of Killing trajectories that are admitted by an arbitrary n‐dimensional Riemannian space. The intrinsic quantities associated
with these curves, i.e. their curvatures, are found to be constants of the motion that can be evaluated in terms of Hankel determinants. The results are then applied to curves in real quantum
Analytic results for radial integrals over products of Dirac–Coulomb functions and the radial part of the electromagnetic Green’s function are expressed in terms of a matrix generalization of the
gamma function. This matrix gamma function has many useful properties, including a recurrence relation similar to that of the gamma function, and provides a compact easily manipulated method of
evaluating the Dirac–Coulomb radial integrals. These results can be used to calculate the virtual and real photon spectra associated with electron scattering from the nucleus.
An abstract Hilbert space with a particularly convenient scalar product is introduced to permit a generalization of Feenberg’s rearrangement method of perturbation theory to be applied to thermal
Green’s function calculations. This method has the advantage of treating averages (either thermal or configuration) rigorously from the start. Explicit calculations are done for the frequency
dependent electrical conductivity for alloys with diagonal disorder at zero temperature. Three practical approaches are discussed: (1) the Gram–Schmidt orthogonalization procedure, (2) a trick
which depends on the Hermitian character of the polarization operator, and (3) a general procedure for using nonorthogonal basis vectors to expand the Feenberg formulas. To second order in the
scattering strength, a new expression for the conductivity is found which is valid for all frequencies. This expression agrees with earlier perturbation theory results when the frequency is very
small or very large.
Solutions of the Einstein–Maxwell equations are investigated for which the electromagnetic field is nonsingular and weakly parallelly propagated along its principal null congruences. It is shown
that a subclass of these solutions admits an invertible two‐dimensional Abelian group of motions. A weaker characterization of a recently found solution is thereby obtained.
An analytical expression is given for the multiple integral ∫···∫ dμ(x_{n+1}) dμ(x_{n+2}) ··· dμ(x_N) ∏_{1≤j<k≤N} |x_j − x_k|^β, where 0 ≤ n ≤ N, β = 1, 2, or 4, and the positive measure dμ(x) is such that all its moments exist, ∫ dμ(x) x^j < ∞, j = 0, 1, 2, ···. The case dμ(x) = dx for −1 ≤ x ≤ 1 and dμ(x) = 0 for |x| > 1 is given as an example. In the limit N → ∞ the correlation functions of this example, the so‐called Legendre ensembles, coincide with those of the circular or the Gaussian ensembles of random matrices.
The concept of Killing spinor is analyzed in a general way by using the spinorial formalism. It is shown, among other things, that higher derivatives of Killing spinors can be expressed in terms
of lower order derivatives. Conformal Killing vectors are studied in some detail in the light of spinorial analysis: Classical results are formulated in terms of spinors. A theorem on Lie
derivatives of Debever–Penrose vectors is proved, and it is shown that conformal motion in vacuum with zero cosmological constant must be homothetic, unless the conformal tensor vanishes or is of
type N. Our results are valid for either real or complex space–time manifolds.
Following Plebański and Robinson, complex V_4’s which admit a congruence of totally null surfaces are shown to have coordinates which, in pairs, have a spinor structure which generates the
usual spinor structure of the 2‐forms over the space. This structure allows Einstein’s vacuum equations to fracture into three triples and a singlet, which allow for easy reduction of the entire
set to one nonlinear partial differential equation needed for consistency. An inhomogeneous GL̃ (2,C) group of coordinate transformations, constrained to leave the tetrad form invariant, is
constructed and used to simplify the equations and clarify the geometrical meaning of the parameters introduced during the integration process.
Symplectic maps (canonical transformations) are treated from the Lie algebraic point of view using Lie series and Lie algebraic techniques. It is shown that under very general conditions an
analytic symplectic map can be written as a product of Lie transformations. Under certain conditions this product of Lie transformations can be combined to form a single Lie transformation by
means of the Campbell–Baker–Hausdorff theorem. This result leads to invariant functions and generalizes to several variables a classic result of Birkhoff for the case of two variables. It also
provides a new approach since the connection between symplectic maps, Lie algebras, invariant functions, and Birkhoff’s work has not been previously recognized and exploited. It is expected that
the results obtained will be applicable to the normal form problem in Hamiltonian mechanics, the use of the Poincaré section map in stability analysis, and the behavior of magnetic field lines in
a toroidal plasma device.
A topological classification of monopoles and vortices is formulated in terms of fibre bundles. The distinction between Dirac and ’t Hooft monopoles is made in the light of the energy finiteness
problem. Finite‐length vortices with Dirac monopoles at the end points are also discussed.
We show the existence of a formal identity between Einstein’s and Ernst’s stationary axisymmetric gravitational field equations and the Einstein–Maxwell and the Ernst equations for the
electrostatic and magnetostatic axisymmetric cases. Our equations are invariant under very simple internal symmetry groups, and one of them appears to be new. We also obtain a method for
associating two stationary axisymmetric vacuum solutions with every known electrostatic solution.
**CLOSED**Birthday Blessings Bash! Giveaway # 5 Charlie Banana............
Welcome to day 5 of the Birthday Blessings Bash Celebration! Today is another joint giveaway with And the Little Ones Too! Once you have entered my giveaway you can head over to And the Little Ones
Too and enter for a chance to win a Tiny Tush Elite one size pocket diaper in choice of color!!
Today's giveaway is something from Charlie Banana.... anyone guess what it might be?
Sure they make some cute cloth diapers, cloth wipes, diaper laundry bags, baby training pants, swim diapers and baby leg warmers.... and the list goes on. Today, I am going to be reviewing their
reusable feminine pads! Okay, before you all say Eeeewwww and quickly close out this page, let me tell you a little about 'reusable feminine pads' or mama cloth as I like to call it ;)
So why make the switch? Well, there are certain chemicals in pads and tampons that can be hazardous to your health. The most harmful of these are called 'dioxins.' While researching this term
'dioxins' I came across several articles that really opened my eyes to the dangers of using disposable pads and tampons.
I found some great information on the Natracare website regarding dioxins and the dangers of using tampons. I also found an article on The Health Wyze Report website that explains the link between
disposable pads and tampons to endometriosis.
Another great article (from Canada) describes how dioxins may be linked to cancer, endometriosis, low sperm count, and immune system suppression.
Are you shocked yet? Are you ready to make the change to protect your body from these potential life-threatening chemicals?
I am! I received some beautiful reusable feminine pads from Charlie Banana for review!
I received a pack of regular pads and a pack of super pads (and they each came with a cute little pink carry-case)!
The first thing that I thought when I opened the boxes was how cute these little pads were, and how soft too! Don't you just love the little pink butterfly print?
Yeah, ok so they are cute! But do they work? These pads came just in time (if you know what I mean). I washed them in Rockin Green and then hung them to dry.
So, without going into much detail, let me just say that these pads did the trick. I did not experience any leaks and they are sooooo comfortable to wear!
At first I was kinda grossed out by the thought of washing these pads after use, but after washing them, I am no longer grossed out. I just put them in a bag (you can use a small wet-bag, or a little
plastic bag to store them in until wash day), then wash them in the washer alone. Some people had told me that they wash them with their regular laundry, and even cloth diapers. I did not try that,
and am not sure I will. For some reason, I am just more comfortable washing them alone for now!
So you can buy these cute little pads at Mami's and Papi's (Target also carries some Charlie Banana products, but I have not seen the pads there *yet*), or you can go to Charlie Banana's website to
locate an authorized dealer!
I have been told that a good number of these pads to have in your 'stash' is about 15 for an average woman. However, every woman is different and so you may need a few more or less.
Charlie Banana has graciously offered a set of pads in choice of color/print and absorbency to one lucky LWML follower!
*Please note: Your e-mail address must be included in every comment so I can contact you if you win!
Mandatory Entries:
♥ You must follow my blog, and also follow And the Little Ones Too blog publicly via Google Friend Connect (1 entry)
♥ Optional/Extra Entries:
♥ Like Charlie Banana on Facebook (1 entry) *please leave your FB username in comment
♥ Copy this note on their wall "Mama of the Littles sent me, thanks for the Birthday Blessings Bash giveaway on her blog http://tiny.cc/5uzja" (Can be done once daily for 1 entry per day)
♥ Become a fan of my FB page and leave a comment stating you are a fan. If you already are a fan, just state that you already are! Please include your facebook username in the comment! (1 entry)
♥ Become a fan of And the Little Ones Too on Facebook and leave a comment stating you are a fan. If you already are a fan, just state that you already are! Please include your facebook username in
the comment! (1 entry)
♥ Follow me on Twitter (1 entry) *include your twitter username in comment
♥ Follow Charlie Banana on Twitter (1 entry) *include your twitter username in comment
♥ Follow And the Little Ones Too on Twitter (1 entry) *include your twitter username in comment
♥ Tweet about this giveaway. Tweet the following: "I entered to #win some reusable feminine pads @charliebanana from @oregongal1 and YOU can too! http://tiny.cc/5uzja #giveaway (14 Feb)" (can be
tweeted once per day for 1 extra entry per day)! leave your status link and twitter id in comment
The Kicker:
♥ Head over to And the Little Ones Too blog and enter her giveaway for a Tiny Tush Elite one size pocket diaper (your choice of color). Come back here and comment that you have entered (4 entries)
Today's giveaway will end on 14 Feb at midnight (EST). Winner will be chosen using random.org. All winners from the Birthday Blessings Bash will be e-mailed and announced on the blog on Monday,
February 21st. Winners will have 48 hours to contact me to claim their prize, or another winner will be chosen.
Good Luck and Have fun!
~Mama of the Littles~
Disclosure: I received the product mentioned above in exchange for this review, Thanks to Charlie Banana! No monetary compensation was received by me. This is my completely honest opinion above and
may differ from yours.
329 comments:
1. follow you both thru GFC
diaryofmomma00 at yahoo dot com
2. like CB on FB
ID: Katie Adams
diaryofmomma00 at yahoo dot com
3. placed your note on their FB page
ID: Katie Adams
diaryofmomma00 at yahoo dot com
4. already a fan of your's on FB
ID: Katie Adams
diaryofmomma00 at yahoo dot com
5. new fan of Ashley: And the Little Ones to on FB, left some love
ID: Katie Adams
diaryofmomma00 at yahoo dot com
6. follow you on twitter
diaryofmomma00 at yahoo dot com
7. follow cb on twitter
diaryofmomma00 at yahoo dot com
8. follow And the Little Ones Too on twitter
diaryofmomma00 at yahoo dot com
9. http://twitter.com/#!/diaryofmomma00/status/34655022415478784
diaryofmomma00 at yahoo dot com
10. #1 Entered giveaway on "And The Little Ones Too"
diaryofmomma00 at yahoo dot com
11. #2 Entered giveaway on "And The Little Ones Too"
diaryofmomma00 at yahoo dot com
12. #3 Entered giveaway on "And The Little Ones Too"
diaryofmomma00 at yahoo dot com
13. #4 Entered giveaway on "And The Little Ones Too"
diaryofmomma00 at yahoo dot com
14. I follow your blog, and also follow And the Little Ones Too blog publicly via Google Friend Connect.
aehixon at gmail dot com
15. I like Charlie Banana on Facebook (Amy Hixon)
aehixon at gmail dot com
16. I posted your note on Charlie Banana's wall.
aehixon at gmail dot com
17. I already follow you on facebook.
aehixon at gmail dot com
18. I'm a fan on And the Little Ones Too on Facebook (Amy Hixon)
aehixon at gmail dot com
19. I entered the giveaway for a Tiny Tush Elite one size pocket diaper at And the Little Ones Too.
aehixon at gmail dot com
20. I entered the giveaway for a Tiny Tush Elite one size pocket diaper at And the Little Ones Too.
aehixon at gmail dot com
21. I entered the giveaway for a Tiny Tush Elite one size pocket diaper at And the Little Ones Too.
aehixon at gmail dot com
22. I entered the giveaway for a Tiny Tush Elite one size pocket diaper at And the Little Ones Too.
aehixon at gmail dot com
23. I follow both via gfc
missanneperry@gmail dot com
24. I like cb on fb
Anne E. Perry
missanneperry at gmail dot com
25. posted note on fb
Anne E. Perry missanneperry at gmail dot com
26. already are a fan on fb
Anne E. Perry
missanneperry at gmail dot com
27. fan of And the Little Ones Too on Facebook
Anne E. PErry
missanneperry at gmail dot com
28. ♥ Follow you on Twitter@aeperry
missanneperry at gmail dot com
29. ♥ Follow Charlie Banana on Twitter (@aeperry
missanneperry at gmail dot com
30. ♥ Follow And the Little Ones Too on Twitter @aeperry
missanneperry at gmail dot com
31. tweet
missanneperry at gmail dot com
32. entered her giveaway for a Tiny Tush Elite
missanneperry at gmail dot com
33. entered her giveaway for a Tiny Tush Elite
missanneperry at gmail dot com
34. entered her giveaway for a Tiny Tush Elite
missanneperry at gmail dot com
35. entered her giveaway for a Tiny Tush Elite
missanneperry at gmail dot com
36. following your blog, and also following And the Little Ones Too blog publicly via Google Friend Connect (Rachel)
dakotaring at yahoo dot com
37. i'm following both blogs!!
karilynaley at yahoo
38. already a CB fan on facebook! they are my FAVORITE!!!
39. I Like Charlie Banana on Facebook (Fluffy Giveaways)
dakotaring at yahoo dot com
40. i'm already your facebook fan!! kari mee....
karilynaley at yahoo
41. fanned And the Little Ones, too on facebook!! :)
karilynaley at yahoo
42. Copied the note on their wall:
dakotaring at yahoo dot com
43. i follow your twitter
kari boo meek...
44. i follow CB on twitter! kari boo mee...
45. i follow the little ones, too on twitter
kari boo mee...
karilynaley at yahoo
46. already am your fb fan (Fluffy Giveaways)
dakotaring at yahoo dot com
47. Following you on Twitter (@Rachelhooey)
dakotaring at yahoo dot com
48. Following Charlie Banana on Twitter (@Rachelhooey)
dakotaring at yahoo dot com
49. Following And the Little Ones Too on Twitter (@Rachelhooey)
dakotaring at yahoo dot com
50. I'm a fan of And the Little Ones Too on Facebook (Fluffy Giveaways)
dakotaring at yahoo dot com
51. tweet:
dakotaring at yahoo dot com
52. Went over to And the Little Ones Too blog and entered her giveaway for a Tiny Tush Elite
(entry 1)
dakotaring at yahoo dot com
53. Went over to And the Little Ones Too blog and entered her giveaway for a Tiny Tush Elite
(entry 2)
dakotaring at yahoo dot com
54. Went over to And the Little Ones Too blog and entered her giveaway for a Tiny Tush Elite
(entry 3)
dakotaring at yahoo dot com
55. Went over to And the Little Ones Too blog and entered her giveaway for a Tiny Tush Elite
(entry 4)
dakotaring at yahoo dot com
56. Follow you both on GFC (Jen)
boomersoonermama at gmail dot com
57. Like Charlie Banana on FB (Jen Wyble Breedlove)
boomersoonermama at gmail dot com
58. FB fan of yours already (Jen Wyble Breedlove)
boomersoonermama at gmail dot com
59. New FB fan of And the little ones too (Jen Wyble Breedlove)
boomersoonermama at gmail dot com
60. Entered the Tiny Tush giveaway (#1)
boomersoonermama at gmail dot com
61. Entered the Tiny Tush giveaway (#2)
boomersoonermama at gmail dot com
62. Entered the Tiny Tush giveaway (#3)
boomersoonermama at gmail dot com
63. Entered the Tiny Tush giveaway (#4)
boomersoonermama at gmail dot com
64. GFC follower of you & And the Little Ones Too
65. Liked Charlie Banana on FB (Shakeeta W)
66. Posted comment to Charlie Banana's FB page (Shakeeta W)
67. Liked you on FB (Shakeeta W)
68. Liked And the Little Ones Too on FB (Shakeeta W)
69. Following you on Twitter (@clemson09)
70. Following And the Little Ones Too on Twitter (@clemson09)
71. Following Charlie Banana on Twitter (@clemson09)
72. I follow you and And the little ones too via GFC.
(my email is in my profile)
73. I follow you on twitter (corgipants)
74. I follow charlie banana on twitter (corgipants)
75. I follow Ashley T on twitter (corgipants)
76. tweeted:
77. I follow you and And The Little Ones Too on GFC
78. I like Charlie Banana on FB
heather irwin
79. Posted on their wall
80. I was already a fan on FB
heather irwin
81. I'm a fan of Ashley:And the little ones on FB
heather irwin
82. I follow you on twitter
83. following Charlie Banana on twitter
84. following And the little ones on twitter
85. tweet
86. I entered to win Tiny Tush Elite at And the little ones
comment #61
87. I entered to win Tiny Tush Elite at And the little ones
comment #61
88. I entered to win Tiny Tush Elite at And the little ones
comment #61
89. I entered to win Tiny Tush Elite at And the little ones
comment #61
90. Blog Follower via GFC to both blogs!
Mrs. Smitty
rasmith0506 at gmail dot com
91. Like Charlie Banana on Facebook
Rebecca Smith
rasmith0506 at gmail dot com
92. Posted on Charlie Banana's wall
rasmith0506 at gmail dot com
93. Like you on FB
Rebecca Smith
rasmith0506 at gmail dot com
94. Follow Charlie Banana on Twitter
rasmith0506 at gmail dot com
95. Tweet
rasmith0506 at gmail dot com
96. daily tweet
97. posted on Charlie Banana's wall
heather irwin
98. tweet:
dakotaring at yahoo dot com
99. wrote on Charlie Banana's wall:
dakotaring at yahoo dot com
100. I follow both blogs via GFC.
ayakers (at) gmail (dot) com
101. I like Charlie Banana on facebook.
Amanda Nordman
ayakers (at) gmail (dot) com
102. I'm a fan of your facebook page.
Amanda Nordman
ayakers (at) gmail (dot) com
103. I like And The Little Ones Too on facebook.
Amanda Nordman
ayakers (at) gmail (dot) com
104. I entered the Tiny Tush giveaway on And The Little Ones Too. #1
ayakers (at) gmail (dot) com
105. I entered the Tiny Tush giveaway on And The Little Ones Too. #2
ayakers (at) gmail (dot) com
106. I entered the Tiny Tush giveaway on And The Little Ones Too. #3
ayakers (at) gmail (dot) com
107. I entered the Tiny Tush giveaway on And The Little Ones Too. #4
ayakers (at) gmail (dot) com
108. tweet
diaryofmomma00 at yahoo dot com
109. left note on CB's wall today
ID: Katie Adams
diaryofmomma00 at yahoo dot com
110. I follow this blog & the Little Ones too. etwilkins at gmail dot com
111. I like Charlie Banana on FB. Tiffany.Poole.Wilkins
etwilkins at gmail dot com
112. Posted on Charlie Banana's wall. etwilkins at gmail dot com
113. Already a fan of you on FB. Tiffany.Poole.Wilkins
etwilkins at gmail dot com
114. FB fan of The Little Ones Too. Tiffany.Poole.Wilkins
etwilkins at gmail dot com
115. I follow you on twitter. IamMrsWilkins
etwilkins at gmail dot com
116. I follow Charlie Banana on twitter. IamMrsWilkins
etwilkins at gmail dot com
117. I follow the Little Ones Too on twitter. IamMrsWilkins
etwilkins at gmail dot com
118. Tweeted!
etwilkins at gmail dot com
119. I entered the Tiny Tush giveaway from And the Little Ones Too. 1
etwilkins at gmail dot com
120. I entered the Tiny Tush giveaway from And the Little Ones Too. 2
etwilkins at gmail dot com
121. I entered the Tiny Tush giveaway from And the Little Ones Too. 3
etwilkins at gmail dot com
122. I entered the Tiny Tush giveaway from And the Little Ones Too. 4
etwilkins at gmail dot com
123. tweet
missanneperry at gmail.com
124. Following And the Little Ones Too
125. follow you and the little ones via GFC (lulu)
126. like charlie banana on FB (lulu a-f)
127. wrote on charlie banana FB wall (lulu a-f)
128. like you on FB (lulu a-f)
129. like the little ones on FB (lulu a-f)
130. follow you on twitter (lulubunny1)
131. follow charlie banana on twitter (lulubunny1)
132. follow the little ones on twitter (lulubunny1)
133. #1. entered tiny tush on the littles on blog
134. #2. entered tiny tush on the littles on blog
135. #3. entered tiny tush on the littles on blog
136. #4. entered tiny tush on the littles one blog
137. This comment has been removed by the author.
138. I follow your blog, and also follow And the Little Ones Too blog publicly via Google Friend keidlog (at) yahoo (dot) com .
139. Following both.
trishabear1970 at yahoo dot com
140. I like CB on FB.
trishabear1970 at yahoo dot com
141. Tiny Tush #1
trishabear1970 at yahoo dot com
142. Tiny Tush #2
trishabear1970 at yahoo dot com
143. Tiny Tush #3
trishabear1970 at yahoo dot com
144. Tiny Tush #4
trishabear1970 at yahoo dot com
145. This comment has been removed by the author.
146. I follow your blog, and also follow And the Little Ones Too blog publicly via Google Friend Connect
MJMorphew at gmail dot com
147. I like CB on FB (Melissa Lavaty)
MJMorphew at gmail dot com
148. I Copied this note on Charlie Banana's wall "Mama of the Littles sent me, thanks for the Birthday Blessings Bash giveaway on her blog http://tiny.cc/5uzja (Melissa Lavaty)
MJMorphew at gmail dot com
149. I like your FB page (Melissa Lavaty)
MJMorphew at gmail dot com
150. I like And the Little Ones Too on FB (Melissa Lavaty)
MJMorphew at gmail dot com
151. I entered the And the Littles Too giveaway for a Tiny Tush Elite one size pocket diaper (Melissa Lavaty)
MJMorphew at gmail dot com
152. I entered the And the Littles Too giveaway for a Tiny Tush Elite one size pocket diaper (Melissa Lavaty)
MJMorphew at gmail dot com
153. I entered the And the Littles Too giveaway for a Tiny Tush Elite one size pocket diaper (Melissa Lavaty)
MJMorphew at gmail dot com
154. I entered the And the Littles Too giveaway for a Tiny Tush Elite one size pocket diaper (Melissa Lavaty)
MJMorphew at gmail dot com
155. You must follow my blog, and also follow And the Little Ones Too blog publicly via Google Friend Connect. (Leann)
ziggy28028 at yahoo dot com
156. Like Charlie Banana on Facebook (Leann LaPresti)
ziggy28028 at yahoo dot com
157. Copy this note on their wall "Mama of the Littles sent me, thanks for the Birthday Blessings Bash giveaway on her blog http://tiny.cc/5uzja"
ziggy28028 at yahoo dot com
158. Already a fan of your FB page. (Leann LaPresti)
ziggy28028 at yahoo dot com
159. Already a fan of And the Little Ones Too on Facebook. (Leann LaPresti)
ziggy28028 at yahoo dot com
160. Head over to And the Little Ones Too blog and enter her giveaway for a Tiny Tush Elite one size pocket diaper (your choice of color). (1)
ziggy28028 at yahoo dot com
161. Head over to And the Little Ones Too blog and enter her giveaway for a Tiny Tush Elite one size pocket diaper (your choice of color). (2)
ziggy28028 at yahoo dot com
162. Head over to And the Little Ones Too blog and enter her giveaway for a Tiny Tush Elite one size pocket diaper (your choice of color). (3)
ziggy28028 at yahoo dot com
163. Head over to And the Little Ones Too blog and enter her giveaway for a Tiny Tush Elite one size pocket diaper (your choice of color). (4)
ziggy28028 at yahoo dot com
164. Follow both through GFC (katie, kmogilevski)
kmogilevski at gmail dot com
165. Like CB on Facebook (katie schneider)
kmogilevski at gmail dot com
166. Posted on CB's wall (http://www.facebook.com/lovecharliebanana/posts/197039543655005)
kmogilevski at gmail dot com
167. Like you on Facebook (katie schneider)
kmogilevski at gmail dot com
168. Like Little Ones on Facebook (katie schneider)
kmogilevski at gmail dot com
169. Follow you on Twitter (kmogilevski)
kmogilevski at gmail dot com
170. Follow CB on Twitter (kmogilevski)
kmogilevski at gmail dot com
171. Follow Little Ones on Twitter (kmogilevski)
kmogilevski at gmail dot com
172. Tweeted (http://twitter.com/#!/kmogilevski/status/35125417505587200)
kmogilevski at gmail dot com
173. Entered Little Ones' Tiny Tush giveaway (kmogilevski) - 1
kmogilevski at gmail dot com
174. Entered Little Ones' Tiny Tush giveaway (kmogilevski) - 2
kmogilevski at gmail dot com
175. Entered Little Ones' Tiny Tush giveaway (kmogilevski) - 2
kmogilevski at gmail dot com
176. Entered Little Ones' Tiny Tush giveaway (kmogilevski) - 4 (the last one should have been entry 3)
kmogilevski at gmail dot com
177. Brooke T February 8, 2011 at 9:49 PM
I follow both on GFC
brookelynthomas (@) gmail. com
178. Brooke T February 8, 2011 at 9:50 PM
I follow Charlie Banana on Fb
brookelynthomas (@) gmail. com
179. Brooke T February 8, 2011 at 9:51 PM
I follow you on Twitter
brookelynthomas (@) gmail. com
180. Brooke T February 8, 2011 at 9:52 PM
I follow Charlie Banana on Twitter
brookelynthomas (@) gmail. com
181. I follow both on GFC (Madeline).
madelinemiller at gmail dot com
182. I like CB on Facebook (Madeline Doms Miller).
madelinemiller at gmail dot com
183. Wrote on the CB wall.
madelinemiller at gmail dot com
184. I like you on Facebook (Madeline Doms Miller).
madelinemiller at gmail dot com
185. I like And the Little Ones Too on Facebook (Madeline Doms Miller).
madelinemiller at gmail dot com
186. I follow you on Twitter (@MadelineMiller).
madelinemiller at gmail dot com
187. I follow CB on Twitter (@MadelineMiller).
madelinemiller at gmail dot com
188. I follow and the Little Ones Too on Twitter (@MadelineMiller).
madelinemiller at gmail dot com
189. Tweeted: http://twitter.com/#!/MadelineMiller/status/35228647346552832
madelinemiller at gmail dot com
190. Entered the Tiny Tush giveaway at And the Little Ones too.
madelinemiller at gmail dot com
191. Entered the Tiny Tush giveaway at And the Little Ones too.
madelinemiller at gmail dot com
192. Entered the Tiny Tush giveaway at And the Little Ones too.
madelinemiller at gmail dot com
193. Entered the Tiny Tush giveaway at And the Little Ones too.
madelinemiller at gmail dot com
194. Following you both! :)
195. Like charlie banana on FB -user Amanda Steed
196. #1 entered tiny tush giveaway
197. #2 entered tiny tush giveaway
198. #3 entered tiny tush giveaway
199. #4 entered tiny tush giveaway
200. tweet
New comments are not allowed. | {"url":"http://lifewithmylittles.blogspot.com/2011/02/birthday-blessings-bash-giveaway-5.html","timestamp":"2014-04-19T09:23:39Z","content_type":null,"content_length":"485296","record_id":"<urn:uuid:cc3c02bc-4fa6-4c8d-bb00-6c64a8402cf7>","cc-path":"CC-MAIN-2014-15/segments/1397609537097.26/warc/CC-MAIN-20140416005217-00478-ip-10-147-4-33.ec2.internal.warc.gz"} |
Szilassi polyhedron
A toroidal heptahedron (seven-faced polyhedron) first described in 1977 by the Hungarian mathematician Lajos Szilassi. It has 7 faces, 14 vertices, 21 edges, and 1 hole. The Szilassi polyhedron is the
dual of the Császár polyhedron and, like it, shares with the tetrahedron the property that each of its faces touches all the other faces. Whereas the tetrahedron demonstrates that four colors are
necessary for a map on a surface topologically equivalent to a sphere, the Szilassi and Császár polyhedra show that seven colors are necessary for a map on a surface topologically equivalent to a torus.
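A quick cross-check of the counts quoted above, via the Euler characteristic (a sketch using only the numbers in this entry):

```python
# For a closed orientable surface, V - E + F = 2 - 2g, where g is the genus.
# The Szilassi polyhedron's counts give Euler characteristic 0, hence g = 1:
# a torus, consistent with the "1 hole" above.
V, E, F = 14, 21, 7      # vertices, edges, faces
euler = V - E + F
genus = (2 - euler) // 2
print(euler, genus)      # 0 1
```

A genus of 1 confirms that the surface is a torus rather than a sphere.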
Related category | {"url":"http://www.daviddarling.info/encyclopedia/S/Szilassi_polyhedron.html","timestamp":"2014-04-21T12:09:35Z","content_type":null,"content_length":"6097","record_id":"<urn:uuid:b5d5d51b-e443-4f76-93b2-1c1272d5b0f3>","cc-path":"CC-MAIN-2014-15/segments/1398223203235.2/warc/CC-MAIN-20140423032003-00439-ip-10-147-4-33.ec2.internal.warc.gz"} |
MCS-287 Lab Project 2: Reduction Orders (Spring 2000)
Due: March 16, 2000
Your project report should reflect your final product, rather than focusing on each incremental step along the way. Your project report should explain the examples in which you compare applicative
and leftmost reduction orders, and the functionality of your reduction procedures that allow you to make these comparisons. Don't go into the details in English: your audience can read Scheme.
However, don't assume the audience knows what you are trying to accomplish or how you have gone about accomplishing it.
• Do exercises 4.2.2-4.2.4 on pages 106-107.
• Do exercise 4.3.1 and/or 4.3.2 from pages 112-113, as modified below. It will be most convenient to have both of these, but you can make do with either one of them for the crux of the lab (the
comparison of reduction orders), so if you are hung up on one but not the other, you might want to go on. The modifications I would make to these exercises are as follows:
□ The book's definition of what an answer is needs to be changed somewhat: we will say that any expression that reduce-once-appl fails to reduce is an answer. In most cases this is the same as
the book's definition, but they differ on some irreducible applications such as (x x).
□ Don't use the version of reduce-once-appl that is in the book. Instead, use my version, which is linked onto the web copy of this lab handout. Note that my version makes no use of the
procedure called answer?. Nor should you: don't type it in or any variant of it. Anywhere you might think it is useful, you can instead use reduce-once-appl with appropriate success and
failure continuations.
□ For 4.3.1, n should be allowed to be 0, not just positive, and the list you produce should start with the given expression, rather than the first reduced version. The list should then
continue with at most n reduced versions, but perhaps fewer (possibly even 0) if you reach a point where no further reduction is possible. Here are the book's examples with this modification:
> (reduce-history '((lambda (x) (x ((lambda (x) y) z))) w) 5)
(((lambda (x) (x ((lambda (x) y) z))) w)
(w ((lambda (x) y) z))
(w y))
> (reduce-history '((lambda (x) (x x)) (lambda (x) (x x))) 3)
(((lambda (x) (x x)) (lambda (x) (x x)))
((lambda (x) (x x)) (lambda (x) (x x)))
((lambda (x) (x x)) (lambda (x) (x x)))
((lambda (x) (x x)) (lambda (x) (x x))))
□ For 4.3.2, you may want to loosen the restriction that n be positive.
• Do exercise 4.3.5 from page 115. Now rename reduce-history to reduce-history-appl and reduce* to reduce*-appl and make corresponding reduce-history-leftmost and reduce*-leftmost procedures.
Rather than duplicating code between the applicative-order and leftmost versions, you should use a generalized or higher-order procedure to capture the commonality.
• Now, for the crux of the lab, use your programs to show examples where
□ The expression exp reaches the answer exp' after exactly n applicative-order reductions and reaches the normal form exp' after exactly n leftmost reductions, with n > 0. (As a note on
mathematical convention: the two occurrences of exp' mean that these should be equal, and similarly for the occurrences of n. However, moving from one bulleted goal to the next, you are
allowed to switch to a different exp, exp', etc.)
□ The expression exp reaches the answer exp' after exactly n applicative-order reductions and reaches the normal form exp' after exactly n' leftmost reductions, with n' > n > 0.
□ The expression exp reaches the answer exp' after exactly n applicative-order reductions and reaches the normal form exp' after exactly n' leftmost reductions, with n > n' > 0.
□ The expression exp reaches the normal form exp' after exactly n leftmost reductions but has not reached any answer after 100+n applicative-order reductions, with n > 0.
□ The expression exp reaches the answer exp' after some number of applicative-order reductions and reaches the normal form exp'' after some (possibly different) number of leftmost reductions,
with exp' different from exp''.
If you have trouble with one of these, you may want to do later ones and come back.
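One way to see the fourth bullet, as a loose Python analogy rather than in the lambda calculus (an illustration only, not part of the assignment): applicative order evaluates an argument even when the function never uses it, while leftmost order can skip it.

```python
def const_y(x):
    return "y"            # like (lambda (x) y): ignores its argument

def diverge():
    while True:           # like ((lambda (x) (x x)) (lambda (x) (x x)))
        pass

# Applicative order corresponds to const_y(diverge()), which never returns.
# Wrapping the argument in a thunk delays it, imitating leftmost order:
print(const_y(lambda: diverge()))   # prints: y
```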
Instructor: Max Hailperin | {"url":"https://gustavus.edu/+max/courses/S2000/MCS-287/labs/lab2/","timestamp":"2014-04-17T09:35:54Z","content_type":null,"content_length":"5425","record_id":"<urn:uuid:38e44411-8511-4f1c-ae16-dfcb9c20ae8b>","cc-path":"CC-MAIN-2014-15/segments/1397609527423.39/warc/CC-MAIN-20140416005207-00157-ip-10-147-4-33.ec2.internal.warc.gz"} |
Bremerton Math Tutor
Find a Bremerton Math Tutor
...The students took exams in all classes; and if they passed, they could go to college. Otherwise, they had to repeat 11th and 12th grade, so it was critical they performed well. The SAT is
easier to prepare for than the Cambridge Exams.
39 Subjects: including algebra 1, algebra 2, grammar, linear algebra
...As a perfect-score teacher, I get requests from all over the country, so I usually teach using online meeting software. The software allows me to talk to you and you talk to me (with our
voices), and we see and work the same problems together using a slide show and drawing board. I've taught in...
16 Subjects: including algebra 2, geometry, ACT Math, algebra 1
...You may read some of their responses. Not only did I help them to understand and apply concepts, I also guided them through their various projects and assisted them in writing statistical
research papers. Teaching SAT is a big chunk of of my 15 years for tutoring and teaching experience.
20 Subjects: including trigonometry, ACT Math, SAT math, algebra 1
...Previously, I worked in an academic biology lab on the UC Berkeley campus, while tutoring math and science one-on-one to high school students. I also taught 7th and 8th grade science at a
middle school in Oakland, CA through a program called Teach For America. Therefore, I have both teaching and tutoring experience, as well as hands-on expertise in the science field.
9 Subjects: including algebra 1, algebra 2, biology, chemistry
...My goal is to help all students meet grade-level standards, and I love it when students meet standards and no longer need my help!I have a valid Washington state teaching certificate and
endorsements in Elementary education, special education, and reading. I have a master's in assessment, teaching and learning, and a doctorate in instructional leadership. I love teaching!
22 Subjects: including prealgebra, reading, English, dyslexia | {"url":"http://www.purplemath.com/Bremerton_Math_tutors.php","timestamp":"2014-04-19T07:36:41Z","content_type":null,"content_length":"23779","record_id":"<urn:uuid:e95b643e-bf36-4ed3-b989-9e486157f339>","cc-path":"CC-MAIN-2014-15/segments/1398223201753.19/warc/CC-MAIN-20140423032001-00189-ip-10-147-4-33.ec2.internal.warc.gz"} |
Effects of Weather on Irrigation Requirements
Fact Sheet No. 4.721
by A. Andales
Quick Facts....
• Variability in evapotranspiration demand and precipitation causes irrigation requirements to change from year to year.
• Past records of seasonal evapotranspiration and precipitation can be used to plan ahead for irrigation requirements that will probably occur.
• The amount and timing of precipitation (P) and evapotranspiration (ET) demand are the two main weather-related variables that determine irrigation requirements.
• A crop will attain its yield potential as long as its ET demand is satisfied throughout the growing season and all other growth factors are non-limiting.
The irrigation requirements of a crop are affected by weather variability. The amount and timing of precipitation (P) and evapotranspiration (ET) demand are the two main weather-related variables
that determine irrigation requirements. The ET demand of a crop is a measure of how much water can be consumed via soil evaporation and plant transpiration assuming that plant-available water is
adequate. The ET demand varies from day-to-day depending on crop growth stage and weather variables such as solar radiation, air temperature, humidity, and wind conditions. The daily ET demand of a
crop can be estimated from daily measurements of the weather variables previously mentioned.
Assuming that all other growth factors are non-limiting – meaning conditions are such that these factors remain favorable to crop growth – a crop will attain its yield potential as long as its ET
demand is satisfied throughout the growing season. Yield reductions occur when the ET demand is not satisfied, especially during critical growth stages (for example, reproductive and grain filling
stages). The ET demand can be satisfied by precipitation, stored soil moisture in the root zone, and/or irrigation. Irrigation becomes necessary when natural precipitation and stored soil moisture
are not adequate to satisfy all of the ET demand.
An example of the seasonal variability of ET demand and precipitation is shown in Figure 1 for the Colorado State University (CSU) – Agricultural Research, Development and Education Center (ARDEC)
located northeast of Fort Collins, Colorado. The corn ET demand and precipitation from May to August of each year was obtained from the Colorado Agricultural Meteorological Network (CoAgMet) crop ET
access page (http://ccc.atmos.colostate.edu/cgi-bin/extended_etr_form.pl) for the available period of record (1992-2008). Instructions for using this online tool are available on the website above.
For this example, corn ET demand was calculated assuming a May 1 planting date each year.
There were 17 years between 1992 and 2008. However, 24 days of data were missing from the 2003 seasonal record at ARDEC and no nearby weather station could be used in its place. Therefore, 2003 was
excluded from the analysis. Also, 28 days of data were missing from the 2008 seasonal record because of tornado damage of the weather station. A complete record for 2008 from a nearby weather station
(Wellington) was used instead. For the 16 years of usable record available from CSU-ARDEC, the average seasonal (May to August) corn ET demand was 20.2 inches while average precipitation for the same
period was only 6.5 inches (only 32 percent of corn ET demand).
This meant that the average shortfall (ET − P) was 13.7 inches, which would have had to be satisfied by stored soil moisture and/or irrigation. The quantity ET − P (that is, ET minus P) can also be
used as a rough estimate of irrigation requirement. Actual stored soil moisture at planting must be subtracted from this quantity to get a better estimate of the seasonal irrigation requirement. It
is also important to note that not all precipitation amounts are effectively available to the crop because of runoff and deep percolation losses from the root zone. Figure 1 shows that ET demand,
precipitation, and irrigation requirements can vary greatly from year-to-year. This figure shows how the weather in each year (represented by ET and P) affects irrigation requirement (represented by
ET − P). The water shortfall was highest in 2006 (ET − P = 21.2 inches) and lowest in 1995 (ET − P = 3.5 inches).
Figure 1. Total corn evapotranspiration (ET) demand per season (May to August) at CSU-ARDEC near Fort Collins from 1992 to 2008. The year 2003 was not included because of missing data. Part of the ET
demand can be satisfied by precipitation (P) while the remainder (ET − P) must be satisfied by stored soil moisture or irrigation.
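As a back-of-the-envelope sketch of the arithmetic above (the stored-soil-moisture value below is a hypothetical illustration, not a number from this fact sheet):

```python
# Seasonal (May-August) averages for corn at CSU-ARDEC, from the text above.
et_demand = 20.2             # average seasonal corn ET demand, inches
precip = 6.5                 # average seasonal precipitation, inches
stored_soil_moisture = 2.0   # hypothetical plant-available water at planting, inches

shortfall = et_demand - precip                 # rough irrigation requirement
irrigation = shortfall - stored_soil_moisture  # after crediting stored water

print(f"Shortfall (ET - P): {shortfall:.1f} in")   # 13.7 in, as in the text
print(f"Irrigation needed:  {irrigation:.1f} in")
```

Because runoff and deep percolation make effective precipitation somewhat lower than total precipitation, a figure computed this way is a lower bound on the true requirement.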
Probable Irrigation Requirements
It is difficult to say with certainty what a crop’s irrigation requirement will be for the coming season. This is because weather, specifically precipitation and ET demand, are difficult to predict.
However, past records of P and ET can be used to estimate the probability (chance of occurrence) that certain amounts of P, ET, and corresponding shortfalls (P − ET) will occur at a location. Then,
depending on the level of risk we are willing to take; we can select a level of probability (50 percent for example) and determine the corresponding crop ET demand that will likely occur. We can then
plan ahead to ensure that we have enough water to supply the ET demand that will likely occur. Simple frequency analysis of P and ET can be performed to estimate the chances based on past weather
A time series of values (for example, a record of seasonal ET demand over many years) can be plotted graphically against the probabilities that these values will be exceeded. The “probability of
exceedance” (Pe) can be expressed as the percentage of time that the value being considered will be exceeded. The Weibull formula is a standard method of estimating a value’s probability of
exceedance and is commonly used in hydrology. It is given by the equation below [Source: Chow, V.T., Maidment, D.R., Mays, L.R. (1988) Applied Hydrology, McGraw-Hill, Inc., New York, p.396]:
Pe = [m / (n + 1)] × 100
where m is the rank of a value in a list arranged from highest to lowest and n is the total number of observations or values. For instance, the highest value will have a rank of 1 while the lowest
value will have a rank of n. As with any statistical procedure, having more data (large n) is better than having few data (small n).
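The calculation can be reproduced from the seasonal corn ET values listed in Table 1 below; a short sketch:

```python
# Seasonal (May-August) corn ET demand at CSU-ARDEC, 1992-2008 (2003 excluded).
et_values = [23.07, 23.01, 22.88, 21.80, 21.60, 21.58, 20.92, 20.91,
             20.77, 20.75, 19.37, 17.84, 17.49, 17.27, 17.22, 16.39]

ranked = sorted(et_values, reverse=True)        # rank 1 = highest value
n = len(ranked)
pe_values = [100.0 * m / (n + 1) for m in range(1, n + 1)]  # Weibull formula

for m, (et, pe) in enumerate(zip(ranked, pe_values), start=1):
    print(f"rank {m:2d}: ET = {et:5.2f} in, Pe = {pe:4.1f}%")
```

The first and last rows reproduce the 5.9 and 94.1 percent probabilities shown in Table 1.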
│Table 1. Ranked values of seasonal (May to August) corn ET demand at CSU-ARDEC and their assigned probability values. │
│Year │Corn ET (in) │Rank, m │Probability of exceedance, % │
│2006 │23.07 │1 │5.9 │
│2000 │23.01 │2 │11.8 │
│2002 │22.88 │3 │17.6 │
│2001 │21.80 │4 │23.5 │
│1994 │21.60 │5 │29.4 │
│1998 │21.58 │6 │35.3 │
│2007 │20.92 │7 │41.2 │
│1999 │20.91 │8 │47.1 │
│1993 │20.77 │9 │52.9 │
│2008 │20.75 │10 │58.8 │
│2005 │19.37 │11 │64.7 │
│1996 │17.84 │12 │70.6 │
│1995 │17.49 │13 │76.5 │
│1997 │17.27 │14 │82.4 │
│1992 │17.22 │15 │88.2 │
│2004 │16.39 │16 │94.1 │
As an example, the data from CSU-ARDEC (Figure 1) was plotted versus their probabilities of exceedance. Table 1 shows the ranking of seasonal corn ET demand from highest to lowest. The probabilities
in the right-most column were calculated using the Weibull formula. The same procedure was applied (data not shown) to seasonal precipitation and water shortfall (ET − P).
Figure 2 shows that the relationship between corn ET demand and exceedance probability can be approximated by a straight line (linear trend line fitted through the points by a graphing program like
Microsoft Excel®). The straight line accounts for about 94 percent of the variability of corn ET demand depending on exceedance probability (r2 = 0.94). From the graph, one can see that 50 percent of
the time, seasonal corn ET demand was equal to or greater than 20 inches of water. Seasonal corn ET demand was at least 17 inches 80 percent of the time while it was at least 22.5 inches 20 percent
of the time. From the graph, one can get an estimate of how often a certain value of corn ET demand at CSU-ARDEC was equaled or exceeded.
Figure 2. Probabilities (chances) of exceeding different values of seasonal corn ET (May to August) at CSU-ARDEC for the period 1992-2008.
For example, if we want to be 80 percent sure that our water supply (stored soil moisture + irrigation water) will be enough to satisfy corn ET demand, then we should determine the seasonal corn ET
that is exceeded only 20 percent of the time (Pe = 100 − 80 = 20%). Corn ET with 20 percent exceedance probability means that it will not be exceeded 80 percent of the time. From Figure 2 at 20
percent probability of exceedance, the expected seasonal corn ET is 22.5 inches. Therefore, we should make plans to have a total of 22.5 inches of water available for the season (stored soil moisture
and/or irrigation water). In this example, we are taking a 20 percent chance (risk) that our water supply will not be enough to satisfy corn ET demand. Producers who are willing to take more risks
can select a higher probability of exceedance.
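A sketch of this planning step, fitting a least-squares line to the (Pe, ET) pairs behind Figure 2 and reading off the ET demand at a chosen risk level (fitted values can differ slightly from reading the published graph by eye):

```python
# (exceedance probability %, seasonal corn ET in inches), from Table 1.
data = [(5.9, 23.07), (11.8, 23.01), (17.6, 22.88), (23.5, 21.80),
        (29.4, 21.60), (35.3, 21.58), (41.2, 20.92), (47.1, 20.91),
        (52.9, 20.77), (58.8, 20.75), (64.7, 19.37), (70.6, 17.84),
        (76.5, 17.49), (82.4, 17.27), (88.2, 17.22), (94.1, 16.39)]

n = len(data)
mean_pe = sum(pe for pe, _ in data) / n
mean_et = sum(et for _, et in data) / n
slope = (sum((pe - mean_pe) * (et - mean_et) for pe, et in data)
         / sum((pe - mean_pe) ** 2 for pe, _ in data))
intercept = mean_et - slope * mean_pe

pe_design = 20.0                           # accept a 20% risk of under-supply
et_design = intercept + slope * pe_design
print(f"ET demand at Pe = {pe_design:.0f}%: {et_design:.1f} in")  # about 22.5 in
```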
Likewise, seasonal precipitation (May to August) was plotted against probability (Figure 3). In this case, precipitation versus probability was not linear, so the horizontal axis was converted to a
logarithmic scale (base 10 logarithmic scale in Microsoft Excel®). This means that the probability changes rapidly as seasonal precipitation varies. In hydrology, a logarithmic scale is often used to
make the probability graph appear linear. Sometimes, we are interested in unknown values between two adjacent observations. Interpolation is the process of estimating unknown values between actual
observations based on observed trends. Converting data to their logarithmic values makes interpolation easier, since a straight trend line is much simpler than a curved trend line. From Figure 3, it
can be estimated that seasonal (May to August) precipitation at CSU-ARDEC was at least 5.5 inches 50 percent of the time. The plot shows that seasonal precipitation was at least 4 inches 80 percent
of the time while it was at least 9 inches 20 percent of the time.
Figure 3. Probabilities (chances) of exceeding different values of seasonal precipitation (May to August) at CSU-ARDEC for the period 1992-2008.
As mentioned earlier, the water shortfall represented by (ET − P) can be a rough estimate of irrigation requirements. The probability graph of this requirement for corn at CSU-ARDEC is linear (Figure
4). Half of the time (50 percent probability), the water shortfall was at least 13 inches. The water shortfall was at least 8.5 inches, 80 percent of the time, while it was at least 18 inches 20
percent of the time.
Figure 4. Probabilities (chances) of exceeding different values of seasonal (May to August) water shortfalls (ET − P) at CSU-ARDEC for the period 1992-2008.
Caution Needed in Interpreting Probabilities
Probability graphs, like the ones given above, are only as reliable as the individual data points used to make them. At times, there may be outliers – data points that are extremely high or low
because of errors in data collection (a malfunctioning rain gauge, for example). Outliers may need to be excluded from the data series to get a more reliable probability plot. Also, having more data
points in time gives more credibility to the probability graph. In the above example, the year 2003 was excluded because it had 24 days of missing records, which would have caused an under-estimation
of ET and P for that year. As more years are added to the historical record of ET and P at CSU-ARDEC (and all other CoAgMet stations), these can be included in updated versions of the probability
There is a danger in estimating probabilities outside of the available data range (extrapolation). For example, estimating the probability of 16 inches of seasonal precipitation from Figure 3 would
not be a good idea. Probability plots are most reliable in the middle of the data range, where more data have been recorded or observed. That is why longer periods of record are better, because more
extreme (very high or very low) values would have been recorded.
Statisticians use statistical tests of the data to improve the reliability of probability plots and to fit appropriate lines through the data points. Only a simplistic approach is given here to
illustrate how weather variability can affect irrigation water requirements.
¹A. Andales, Colorado State University Extension irrigation specialist and assistant professor, soil and crop sciences. 3/09.
Colorado State University, U.S. Department of Agriculture, and Colorado counties cooperating. CSU Extension programs are available to all without discrimination. No endorsement of products mentioned
is intended nor is criticism implied of products not mentioned.
Updated Wednesday, January 08, 2014 | {"url":"http://www.ext.colostate.edu/pubs/crops/04721.html","timestamp":"2014-04-16T10:20:07Z","content_type":null,"content_length":"24800","record_id":"<urn:uuid:98e2456f-a0b6-4a54-9b49-ef68d82709c3>","cc-path":"CC-MAIN-2014-15/segments/1397609523265.25/warc/CC-MAIN-20140416005203-00162-ip-10-147-4-33.ec2.internal.warc.gz"} |
NWT Literacy Council - Adult Literacy - What's the Problem? For Adults Who Like A Challenge Solving Word Problems - Page 8
WHAT'S THE OPERATION?
Mark the correct answer for each question with a check mark:
1. Which clue word suggests the operation add?
a) Left
b) Altogether
c) Each
d) Of
2. Which clue word suggests the operation subtract?
a) Left
b) Altogether
c) Each
d) Of
3. Which clue word suggests the operation divide?
a) Left
b) Altogether
c) Each
d) Of
4. Which clue word suggests the operation multiply?
a) Left
b) Altogether
c) Each
d) Of
5. What operation is suggested by this question: How many more does he have to do?
a) Addition
b) Subtraction
c) Multiplication
d) Division
6. What operation is suggested by this question: How much did each one cost?
a) Addition
b) Subtraction
c) Multiplication
d) Division | {"url":"http://www.nwt.literacy.ca/resources/adultlit/problems/p8.htm","timestamp":"2014-04-17T15:57:49Z","content_type":null,"content_length":"3948","record_id":"<urn:uuid:24f74d26-459a-4cc5-a757-56e6e7c463dd>","cc-path":"CC-MAIN-2014-15/segments/1397609530136.5/warc/CC-MAIN-20140416005210-00207-ip-10-147-4-33.ec2.internal.warc.gz"} |
I just wanted to lay out a perspective on how one might describe the engine in relation to the design of the exhaust system, as supportive of the whole frame of reference that is the engine.
The pipe is a resonant chamber which shapes the exhaust pulse train in a way which uses shock waves to constrain the release of the combustion.
Russell Grunloh (boatguy)
I mean it is not wholly certain for me that without perception, once realizing that potential recognizes that like some "source code" we are closer to recognizing the seed of our action, is an
expression of the momentum of our being. It is a stepping off of all that we have known, is an innate expression of our being in action.
So as souls, we are immortalized as expressions of, like a memory that tells a story about our life, our choices and the life we choose to live.
Dr. Mark Haskins
On a wider class of complex manifolds - the so-called Calabi-Yau manifolds - there is also a natural notion of special Lagrangian geometry. Since the late 1980s these Calabi-Yau manifolds have
played a prominent role in developments in High Energy Physics and String Theory. In the late 1990s it was realized that calibrated geometries play a fundamental role in the physical theory, and
calibrated geometries have become synonymous with "Branes" and "Supersymmetry".
Special Lagrangian geometry in particular was seen to be related to another String Theory inspired phemonenon, "Mirror Symmetry". Strominger, Yau and Zaslow conjectured that mirror symmetry could
be explained by studying moduli spaces arising from special Lagrangian geometry.
This conjecture stimulated much work by mathematicians, but a lot still remains to be done. A central problem is to understand what kinds of singularities can form in families of smooth special
Lagrangian submanifolds. A starting point for this is to study the simplest models for singular special Lagrangian varieties, namely cones with an isolated singularity. My research in this area
([2], [4], [6]) has focused on understanding such cones especially in dimension three, which also corresponds to the most physically relevant case.
So it is also about string theory in a way for me as well, and my attempts to understand those expressions in the valley. Poincare's description of a pebble, rolling down from the hilltop.
It follows then that not all comments will be accepted; yet I felt it important for one to recognize what Poincare was saying and what I am saying.
HENRI POINCARE, Mathematics and Science: Last Essays
Since we are assuming at this juncture the point of view of the mathematician, we must give to this concept all the precision that it requires, even if it becomes necessary to use mathematical
language. We should then say that the body of laws is equivalent to a system of differential equations which link the speed of variations of the different elements of the universe to the present
values of these elements.
Such a system involves, as we know, an infinite number of solutions. But if we take the initial values of all the elements, that is, their values at the instant t = 0 (which would correspond in
ordinary language to the "present"), the solution is completely determined, so that we can calculate the values of all the elements at any period
whatever, whether we suppose t > 0, which corresponds to the "future," or whether we suppose t < 0, which corresponds to the "past." What is important to remember is that the manner of inferring the
past from the present does not differ from that of inferring the future from the present.
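Poincaré's point can be made concrete: for such a deterministic system, running the same equations with dt < 0 infers the past exactly as dt > 0 infers the future. A minimal sketch (my own illustration, using a harmonic oscillator as the "body of laws"):

```python
# A deterministic system of differential equations: the state at t = 0
# ("the present") fixes both future (t > 0) and past (t < 0).
# Illustrative example: a harmonic oscillator x'' = -x, integrated
# with a 4th-order Runge-Kutta step.

def rk4_step(f, state, dt):
    """One Runge-Kutta step; dt may be negative (inferring the past)."""
    k1 = f(state)
    k2 = f([s + 0.5 * dt * k for s, k in zip(state, k1)])
    k3 = f([s + 0.5 * dt * k for s, k in zip(state, k2)])
    k4 = f([s + dt * k for s, k in zip(state, k3)])
    return [s + dt / 6.0 * (a + 2 * b + 2 * c + d)
            for s, a, b, c, d in zip(state, k1, k2, k3, k4)]

def oscillator(state):
    x, v = state
    return [v, -x]          # dx/dt = v, dv/dt = -x

present = [1.0, 0.0]        # "initial values of all the elements" at t = 0
future = present
for _ in range(1000):       # evolve forward to t = +1
    future = rk4_step(oscillator, future, 0.001)

past = future
for _ in range(1000):       # same law, dt < 0: recover the t = 0 state
    past = rk4_step(oscillator, past, -0.001)

print(past)  # ~ [1.0, 0.0]: the "present" recovered from the "future"
```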
Contrast the pebble as an issuance from symmetry at the top of the mountain (a sharpened pencil standing straight up), and the decay (asymmetry) as an expression of the solidification of who we are
in that valley, as a pebble? After the example, we are but human form with a soul encased. The present is our future? Our past, our presence?
Mathematics and Science: Last Essays, by Henri Poincare
8 Last Essays
"But it is exactly because all things tend toward death that life is
an exception which it is necessary to explain.
Let rolling pebbles be left subject to chance on the side of a
mountain, and they will all end by falling into the valley. If we
find one of them at the foot, it will be a commonplace effect which
will teach us nothing about the previous history of the pebble;
we will not be able to know its original position on the mountain.
But if, by accident, we find a stone near the summit, we can assert
that it has always been there, since, if it had been on the slope, it
would have rolled to the very bottom. And we will make this
assertion with the greater certainty, the more exceptional the event
is and the greater the chances were that the situation would not
have occurred."
Of course I do not believe our lives are just an expression of chance, but of choice, as "a memory" we choose. Of course too, how do you set up a life as an expression if you do not continue to learn?
In the pool of symmetry, how did we ever begin? I looked for such expressions as if mathematically deduced from a time where we might be closer to the idea of such a pool. Ramanujan comes to mind.
Then too, if we are to become spiritually immersed back again from where we came, then how can we individually be explained "as a spark of measure," each soul as a memory to be chosen from
all that has existed before, for such an expression in this life as the task of its future?
See Also:
A Model for Thought?
"We all are of the citizens of the Sky" Camille Flammarion
In 1858, through its set of relations, Camille Flammarion, then sixteen years of age, was able to enter the Paris Observatory as a student astronomer under the orders of Urbain Le Verrier, in
the bureau of calculations.
See:The Gravity Landscape and Lagrange Points
Now there is a reason that I am showing "this connection": so that the jokes that go around at the PI institute in regards to Tegmark (not that I am speaking for him; I have absolutely no affiliation
of any kind), and the "mathematical constructs," are recognized beyond just the jeering section, and, while I am not a party to it, can be shown some light.
Three-body problem
For n ≥ 3 very little is known about the n-body problem. The case n = 3 was most studied, for many results can be generalised to larger n. The first attempts to understand the 3-body problem were
quantitative, aiming at finding explicit solutions.
* In 1767 Euler found the collinear periodic orbits, in which three bodies of any masses move such that they oscillate along a rotating line.
* In 1772 Lagrange discovered some periodic solutions which lie at the vertices of a rotating equilateral triangle that shrinks and expands periodically. Those solutions led to the study of
central configurations, for which q̈ = kq for some constant k > 0.
The three-body problem is much more complicated; its solution can be chaotic. A major study of the Earth-Moon-Sun system was undertaken by Charles-Eugène Delaunay, who published two volumes on
the topic, each of 900 pages in length, in 1860 and 1867. Among many other accomplishments, the work already hints at chaos, and clearly demonstrates the problem of so-called "small denominators"
in perturbation theory.
The chaotic movement of 3 interacting particles
The restricted three-body problem assumes that the mass of one of the bodies is negligible; the circular restricted three-body problem is the special case in which two of the bodies are in
circular orbits (approximated by the Sun-Earth-Moon system and many others). For a discussion of the case where the negligible body is a satellite of the body of lesser mass, see Hill sphere; for
binary systems, see Roche lobe; for another stable system, see Lagrangian point.
The restricted problem (both circular and elliptical) was worked on extensively by many famous mathematicians and physicists, notably Lagrange in the 18th century and Poincaré at the end of the
19th century. Poincaré's work on the restricted three-body problem was the foundation of deterministic chaos theory. In the circular problem, there exist five equilibrium points. Three are
collinear with the masses (in the rotating frame) and are unstable. The remaining two are located on the third vertex of both equilateral triangles of which the two bodies are the first and
second vertices. This may be easier to visualize if one considers the more massive body (e.g., Sun) to be "stationary" in space, and the less massive body (e.g., Jupiter) to orbit around it, with
the equilibrium points maintaining the 60 degree-spacing ahead of and behind the less massive body in its orbit (although in reality neither of the bodies is truly stationary; they both orbit the
center of mass of the whole system). For sufficiently small mass ratio of the primaries, these triangular equilibrium points are stable, such that (nearly) massless particles will orbit about
these points as they orbit around the larger primary (Sun). The five equilibrium points of the circular problem are known as the Lagrange points.
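Two of the standard facts quoted above can be probed numerically: the collinear points solve a force balance along the primaries' axis, and the triangular points L4/L5 are stable only below Routh's critical mass ratio. A minimal sketch (my own illustration of textbook results, not anything from the quoted article; units chosen so the primary separation and rotation rate are 1):

```python
import math

def l1_offset(mu, lo=1e-6, hi=0.5):
    """Distance of L1 from the smaller primary (mass ratio mu = m2/(m1+m2)),
    found by bisection on the rotating-frame force balance for a massless
    test particle between the two primaries."""
    def f(r):
        x = (1.0 - mu) - r                     # particle position on the axis
        return (-(1.0 - mu) / (x + mu) ** 2    # pull of the larger primary
                + mu / r ** 2                  # pull of the smaller primary
                + x)                           # centrifugal term
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if f(lo) * f(mid) <= 0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

# Routh's critical mass ratio for stability of the triangular points
ROUTH = 0.5 * (1.0 - math.sqrt(23.0 / 27.0))   # ~0.03852

mu_earth_moon = 0.01215                        # Moon/(Earth+Moon)
print(l1_offset(mu_earth_moon))                # ~0.15 of the separation
print(mu_earth_moon < ROUTH)                   # True: L4/L5 are stable
```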
So the thing is that, while one may not have found an anomalous version of what is written into the pattern of WMAP (some alien signal, perhaps, in a dimension of space that results in star
manipulation), there remains the question of what comes out, and of how string theory plays this idea that some formulation exists in its over-calculated version of mathematical decor.
String Theory
In either case, gravity acting in the hidden dimensions affects other non-gravitational forces such as electromagnetism. In fact, Kaluza and Klein's early work demonstrated that general
relativity with five large dimensions and one small dimension actually predicts the existence of electromagnetism. However, because of the nature of Calabi-Yau manifolds, no new forces appear
from the small dimensions, but their shape has a profound effect on how the forces between the strings appear in our four dimensional universe. In principle, therefore, it is possible to deduce
the nature of those extra dimensions by requiring consistency with the standard model, but this is not yet a practical possibility. It is also possible to extract information regarding the hidden
dimensions by precision tests of gravity, but so far these have only put upper limitations on the size of such hidden dimensions.
Bold was added by me for emphasis. See also:Angels and Demons on a Pinhead
This is/was to be part of the hopes of people in research for a long time. I have seen it before, in terms of orbitals (the analogical version of the event in the cosmos) and how such events could
have been portrayed in those same locations in space. Contribute to the larger and global distinction of what the universe is actually doing. If it's speeding up, what exactly does this mean, and
what should we be looking for from what is being contributed to the "global perspective" of WMAP from these locations?
But lets move on here okay.
If you understand the "three body problem" and being on my own, and seeing things other then what people reveal in the reports that they write, how it is possible for a lone researcher like me to
come up with the same ideas about the universe having some kind of geometrical inclination?
You would have to know what "such accidents, while privy to data before us all," and what is written into the calculations by hand, would reveal. Well, I never did have that information. What I did
know is what Sean Carroll presented of the Lopsided Universe for consideration. This coincided nicely with my work to comprehend Poincaré in a historical sense. The relationship with Klein.
As mentioned before, at the time I was doing my own reading on Poincaré, and of course I had followed the work of Tegmark and John Baez's exposé on what the shape of the universe might look like.
This is recorded throughout my bloggery here for the checking.
What I want to say.
Given the mathematics with which one sees the universe, and whatever these mathematical constructs reveal of nature, nature always existed. What was shown is that the discovery of the mathematics made
it possible to understand something beautiful about nature. So in a sense the mathematics was always there; we just did not recognize it. :)
In an ordinary 2-sphere, any loop can be continuously tightened to a point on the surface. Does this condition characterize the 2-sphere? The answer is yes, and it has been known for a long time.
The Poincaré conjecture asks the same question for the 3-sphere, which is more difficult to visualize.
On December 22, 2006, the journal Science honored Perelman's proof of the Poincaré conjecture as the scientific "Breakthrough of the Year," the first time this had been bestowed in the area of mathematics.
I have been following the Poincaré work under the heading of the Poincaré Conjecture. It would serve to point out any relation that would be mathematically inclined to deserve a philosophical jaunt
into the "derivation of a mind in comparative views," so that one might come to some conclusion about the nature of the world, see its differences, and know that it arose from such
philosophical debate.
Poincaré, almost a hundred years ago, knew that a two dimensional sphere is essentially characterized by this property of simple connectivity, and asked the corresponding question for the three
dimensional sphere (the set of points in four dimensional space at unit distance from the origin). This question turned out to be extraordinarily difficult, and mathematicians have been
struggling with it ever since.
Previous links in the label index on the right, and relative associative posts, point out the basis of the Poincaré Conjecture and its consequences in developmental attempts at deduction about the
nature of the world in an abstract mathematical sense?
Jules Henri Poincare (1854-1912)
The scientist does not study nature because it is useful. He studies it because he delights in it, and he delights in it because it is beautiful.
Mathematics and Science:Last Essays
8 Last Essays
But it is exactly because all things tend toward death that life is
an exception which it is necessary to explain.
Let rolling pebbles be left subject to chance on the side of a
mountain, and they will all end by falling into the valley. If we
find one of them at the foot, it will be a commonplace effect which
will teach us nothing about the previous history of the pebble;
we will not be able to know its original position on the mountain.
But if, by accident, we find a stone near the summit, we can assert
that it has always been there, since, if it had been on the slope, it
would have rolled to the very bottom. And we will make this
assertion with the greater certainty, the more exceptional the event
is and the greater the chances were that the situation would not
have occurred.
How simple such a view, that one would speak about the complexity of the world in its relations. To know that any resting place on the mountain could have its descendants resting in some place
called such a valley?
Stratification and Mind Maps
Pascal's Triangle
By which path? Left to some "Pascalian idea" about comparing such mountains, in abstraction, to such a view, we are left to "numbered pathways," by such a design that we can call it "a resting,"
by nature's selection of all probable pathways?
Diagram 6. Zhu Shijie triangle, depth 8, 1303.
The so called 'Pascal' triangle was known in China as early as 1261. In '1261 the triangle appears to a depth of six in Yang Hui and to a depth of eight in Zhu Shijiei (as in diagram 6) in 1303.
Yang Hui attributes the triangle to Jia Xian, who lived in the eleventh century' (Stillwell, 1989, p136). They used it as we do, as a means of generating the binomial coefficients.
It wasn't until the eleventh century that a method for solving quadratic and cubic equations was recorded, although they seemed to have existed since the first millennium. At this time Jia Xian
'generalised the square and cube root procedures to higher roots by using the array of numbers known today as the Pascal triangle and also extended and improved the method into one useable for
solving polynomial equations of any degree' (Katz, 1993, p191.)
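The quoted passage says the triangle was "used as we do, as a means of generating the binomial coefficients." A minimal sketch of that use (my own illustration):

```python
# Build the triangle row by row: each interior entry is the sum of the
# two entries above it.  A row then gives the binomial coefficients
# needed to expand (x + 1)^n.

def pascal(depth):
    rows = [[1]]
    for _ in range(depth - 1):
        prev = rows[-1]
        rows.append([1] + [prev[i] + prev[i + 1]
                           for i in range(len(prev) - 1)] + [1])
    return rows

triangle = pascal(8)       # depth eight, as in the 1303 diagram quoted above
print(triangle[4])         # [1, 4, 6, 4, 1]: coefficients of (x + 1)^4

# cross-check: (x + 1)^4 at x = 2 should equal 3^4 = 81
coeffs = triangle[4]
assert sum(c * 2 ** k for k, c in enumerate(coeffs)) == 3 ** 4
```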
Even the wisest of us does not realize what Boltzmann, in his expressions, would leave for us: that such expression would leave to chance such pebbles in that valley for our consideration, that we
might call this pebble "some topological form," left to the preponderance of our descriptions of what nature shall reveal in those same valleys?
The Topography of Energy Resting in the Valleys
The theory of strings predicts that the universe might occupy one random "valley" out of a virtually infinite selection of valleys in a vast landscape of possibilities
Most certainly it should be understood that the "valley and the pebble" are two separate things, and yet, can we not say that the pebble is an artifact of the energy in expression that eventually
lies resting in one of the possible pathways of that energy at rest?
The mountain, "as a stratification" exists.
Here in mind then, such rooms are created.
The ancients would have us believe, in mind, that such "high mountain views do exist": your "Olympus," or the "Fields of Elysium." Today, are these not to be considered in such a way? Such a view is
part and parcel of our aspirations. The decomposable limits will be self-evident in what shall rest in the valleys of our views?
Such elevations are closer to a decomposable limit of the energy, in my view. The sun shall shine, and the matter will be described in such a view. Here we have reverted to a view that is
closer to the understanding that such particle disseminations are the pebbles, and that such expressions have pushed back our views on the nature of the cosmos, regardless of what the LHC does or
does not represent in minds with regard to the Big Bang. The push back to a micro-perspective view allows us to introduce examples of this analogy as artifacts of our considerations, and these
hold, in my view, a description closer to the source of that energy in expression.
To be bold here means to push on in the face of the limitations imposed by such statements as Lee Smolin's book represents, and the directions subsequently taken by 't Hooft in PI's status
of research and development.
It means to continue in the face of Witten's tiring of the abstraction of the landscape. It means to go past the "intellectual defeatism" expressed by a Woitian design held of that mathematical world.
At this point in the development, although geometry provided a common framework for all the forces, there was still no way to complete the unification by combining quantum theory and general
relativity. Since quantum theory deals with the very small and general relativity with the very large, many physicists feel that, for all practical purposes, there is no need to attempt such an
ultimate unification. Others however disagree, arguing that physicists should never give up on this ultimate search, and for these the hunt for this final unification is the ‘holy grail’. Michael
"No Royal Road to Geometry?"
Click on the Picture
Are you an observant person? Look at the above picture. Why ask such a question as to, "No Royal Road to Geometry?" This presupposes that a logic is formulated that leads one not only by the
"phenomenological values" but by the very principle of logic itself.
All those who have written histories bring to this point their account of the development of this science. Not long after these men came Euclid, who brought together the Elements, systematizing
many of the theorems of Eudoxus, perfecting many of those of Theatetus, and putting in irrefutable demonstrable form propositions that had been rather loosely established by his predecessors. He
lived in the time of Ptolemy the First, for Archimedes, who lived after the time of the first Ptolemy, mentions Euclid. It is also reported that Ptolemy once asked Euclid if there was not a
shorter road to geometry that through the Elements, and Euclid replied that there was no royal road to geometry. He was therefore later than Plato's group but earlier than Eratosthenes and
Archimedes, for these two men were contemporaries, as Eratosthenes somewhere says. Euclid belonged to the persuasion of Plato and was at home in this philosophy; and this is why he thought the
goal of the Elements as a whole to be the construction of the so-called Platonic figures. (Proclus, ed. Friedlein, p. 68, tr. Morrow)
I don't think I could have made it any easier for one but to reveal the answer in the quote. Now you must remember how the logic is introduced here, and what came before Euclid. The postulates are
self-evident in his analysis, but little did he know that there would be a "Royal Road" indeed to geometry, one much more complex and beautiful than the dry implication logic would reveal.
It's done for a reason, and all the geometries had to be leading, in this progressive view, to demonstrate that a "projective geometry" is the final destination, although still evolving?
Eventually it was discovered that the parallel postulate is logically independent of the other postulates, and you get a perfectly consistent system even if you assume that parallel postulate is
false. This means that it is possible to assign meanings to the terms "point" and "line" in such a way that they satisfy the first four postulates but not the parallel postulate. These are called
non-Euclidean geometries. Projective geometry is not really a typical non-Euclidean geometry, but it can still be treated as such.
In this axiomatic approach, projective geometry means any collection of things called "points" and things called "lines" that obey the same first four basic properties that points and lines in a
familiar flat plane do, but which, instead of the parallel postulate, satisfy the following opposite property instead:
The projective axiom: Any two lines intersect (in exactly one point).
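The projective axiom above can be made concrete with homogeneous coordinates (a standard construction, added here as my own illustration):

```python
# A projective line is a triple (a, b, c): the points (x, y, w) on it
# satisfy a*x + b*y + c*w = 0.  Two distinct lines always meet in
# exactly one projective point, given by their cross product.
# "Parallel" Euclidean lines meet at a point at infinity (w = 0).

def cross(u, v):
    return (u[1] * v[2] - u[2] * v[1],
            u[2] * v[0] - u[0] * v[2],
            u[0] * v[1] - u[1] * v[0])

# two parallel Euclidean lines: y = 0 and y = 1
l1 = (0, 1, 0)         # y = 0
l2 = (0, 1, -1)        # y - 1 = 0
p = cross(l1, l2)
print(p)               # (-1, 0, 0): up to scale, the point at infinity
                       # in the x-direction (w = 0)
```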
If you are "ever the artist" it is good to know in which direction you will use the sun, in order to demonstrate the shadowing that will go on into your picture. While you might of thought there was
everything to know about Plato's cave and it's implication I am telling you indeed that the logic is a formative apparatus concealed in the geometries that are used to explain such questions about,
"the shape of space."
The Material World
There are two reasons that having mapped E8 is so important. The practical one is that E8 has major applications: mathematical analysis of the most recent versions of string theory and supergravity
theories all keep revealing structure based on E8. E8 seems to be part of the structure of our universe.
The other reason is just that the complete mapping of E8 is the largest mathematical structure ever mapped out in full detail by human beings. It takes 60 gigabytes to store the map of E8. If you
were to write it out on paper in 6-point print (that's really small print), you'd need a piece of paper bigger than the island of Manhattan. This thing is huge.
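For scale, the root system underlying E8 is small enough to enumerate directly; it is the full map of the group's representations, not this list, that needs 60 GB. A sketch using the standard construction of the 240 roots (a well-known fact, not taken from the quoted article):

```python
from itertools import combinations, product

# Standard construction of the E8 root system in R^8:
# 112 roots of the form +-e_i +- e_j (i < j), plus
# 128 roots of the form (+-1/2, ..., +-1/2) with an even
# number of minus signs.

roots = []
for i, j in combinations(range(8), 2):
    for si, sj in product((1, -1), repeat=2):
        r = [0] * 8
        r[i], r[j] = si, sj
        roots.append(tuple(r))
for signs in product((0.5, -0.5), repeat=8):
    if sum(s < 0 for s in signs) % 2 == 0:
        roots.append(signs)

print(len(roots))        # 240 roots
print(8 + len(roots))    # dimension of E8 = rank + roots = 8 + 240 = 248
```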
Polytopes and allotropes are examples to me of "shapes in their formative compulsions." While very, very small in their continuing expression, "below Planck length" in our analysis of the world,
the allotrope has a "formative structure" in the material world; the polytope is an abstract structure of mathematical thinking about the world, as if in nature's other ways.
This illustration depicts eight of the allotropes (different molecular configurations) that pure carbon can take:
a) Diamond
b) Graphite
c) Lonsdaleite
d) Buckminsterfullerene (C60)
e) C540
f) C70
g) Amorphous carbon
h) single-walled carbon nanotube
Review of experiments
Graphite exhibits elastic behaviour and even improves its mechanical strength up to the temperature of about 2500 K. Measured changes in ultrasonic velocity in graphite after high temperature
creep shows marked plasticity at temperatures above 2200 K [16]. From the standpoint of thermodynamics, melting is a phase transition of the first kind, with an abrupt enthalpy change
constituting the heat of melting. Therefore, any experimental proof of melting is associated with direct recording of the temperature dependence of enthalpy in the neighbourhood of a melting
point. Pulsed heating of carbon materials was studied experimentally by transient electrical resistance and arc discharge techniques, in millisecond and microsecond time regime (see, e.g., [17,
18]), and by pulsed laser heating, in microsecond, nanosecond and picosecond time regime (see, e.g., [11, 19, 20]). Both kind of experiments recorded significant changes in the material
properties (density, electrical and thermal conductivity, reflectivity, etc. ) within the range 4000-5000 K, interpreted as a phase change to a liquid state. The results of graphite irradiation
by lasers suggest [11] that there is at least a small range of temperatures for which liquid carbon can exist at pressure as low as 0.01 GPa. The phase boundaries between graphite and liquid were
investigated experimentally and defined fairly well.
Sean Carroll:But if you peer closely, you will see that the bottom one is the lopsided one — the overall contrast (representing temperature fluctuations) is a bit higher on the left than on the
right, while in the untilted image at the top they are (statistically) equal. (The lower image exaggerates the claimed effect in the real universe by a factor of two, just to make it easier to
see by eye.)
See The Lopsided Universe-.
#36.Plato on Jun 12th, 2008 at 10:17 am
Thanks again.
“I’m a Platonist — a follower of Plato — who believes that one didn’t invent these sorts of things, that one discovers them. In a sense, all these mathematical facts are right there waiting to be
discovered.”Harold Scott Macdonald (H. S. M.) Coxeter
Moving to polytopes or allotropes seems to have value in science? Buckminster Fuller and Richard Smalley in terms of the allotrope.
I was looking at Sylvester surfaces and the Clebsch diagram. Cayley too. These configurations to me were about "surfaces," and if we were to allot a progression to the "projective geometries"
here in relation to higher-dimensional thinking, "as the polytope [E8]" (where Coxeter [I meant to apologize for misspelling earlier] drew us to abstraction to see "higher dimensional relations"
toward Plato's light.)
As the furthest extent of the Conjecture, how shall we place the dynamics of Sylvester surfaces and B-fields in relation to the timeline of these geometries? Historically this would seem in
order, but under the advancement of thinking in theoretics, does it serve a purpose? Going beyond "Planck length," what is a person to do?
Thanks for the clarifications on Lagrange points. This is how I see the WMAP.
Diagram of the Lagrange Point gravitational forces associated with the Sun-Earth system. WMAP orbits around L2, which is about 1.5 million km from the Earth. Lagrange Points are positions in
space where the gravitational forces of a two body system like the Sun and the Earth produce enhanced regions of attraction and repulsion. The forces at L2 tend to keep WMAP aligned on the
Sun-Earth axis, but requires course correction to keep the spacecraft from moving toward or away from the Earth.
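The caption's figure of about 1.5 million km can be sanity-checked with the standard first-order approximation for the distance of L1/L2 from the smaller primary (my own check, not part of the caption):

```python
# First-order (Hill-sphere) approximation: the collinear points L1/L2
# sit at distance r ~ R * (m / (3 M))^(1/3) from the smaller primary,
# where M and m are the two masses and R their separation.

M_sun = 1.989e30        # kg
m_earth = 5.972e24      # kg (Earth alone; the Moon would nudge this slightly)
R = 1.496e11            # m, Sun-Earth distance (1 AU)

r_L2 = R * (m_earth / (3.0 * M_sun)) ** (1.0 / 3.0)
print(r_L2 / 1e9)       # ~1.5 million km, matching the WMAP caption
```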
Such concentration in the view of Sean’s group of the total WMAP, while finding such a concentration, would be revealing, would it not, of this geometrical instance in relation to gravitational
gathering, or views of the bulk tendency? Another example showing this fascinating elevation to the non-Euclidean, gravitational lensing, could be seen in this same light.
Such mapping would be important to the context of “seeing in the whole universe.”
See:No Royal Road to Geometry
Allotropes and the Ray of Creation
Pasquale Del Pezzo and E8 Origination?
Projective Geometries
As I ponder the very basis of my thoughts about geometry, based on the very fabric of our thinking minds, it has always been a reductionist one in my mind: that the truth of reality would be a
geometrical one.
The emergence of Maxwell's equations had to be included in the development of GR? Any Gaussian interpretation was necessary, so that the UV coordinates were well understood from that perspective as
well. This would be inclusive in the approach to the developments of GR. As a hobbyist myself of the history of science, along with the developments of today, I might seem less than adequate to the
adventure, yet I persevere.
On the Hypotheses which lie at the Bases of Geometry.
Bernhard Riemann
Translated by William Kingdon Clifford
[Nature, Vol. VIII. Nos. 183, 184, pp. 14--17, 36, 37.]
It is known that geometry assumes, as things given, both the notion of space and the first principles of constructions in space. She gives definitions of them which are merely nominal, while the
true determinations appear in the form of axioms. The relation of these assumptions remains consequently in darkness; we neither perceive whether and how far their connection is necessary, nor a
priori, whether it is possible.
From Euclid to Legendre (to name the most famous of modern reforming geometers) this darkness was cleared up neither by mathematicians nor by such philosophers as concerned themselves with it.
The reason of this is doubtless that the general notion of multiply extended magnitudes (in which space-magnitudes are included) remained entirely unworked. I have in the first place, therefore,
set myself the task of constructing the notion of a multiply extended magnitude out of general notions of magnitude. It will follow from this that a multiply extended magnitude is capable of
different measure-relations, and consequently that space is only a particular case of a triply extended magnitude. But hence flows as a necessary consequence that the propositions of geometry
cannot be derived from general notions of magnitude, but that the properties which distinguish space from other conceivable triply extended magnitudes are only to be deduced from experience. Thus
arises the problem, to discover the simplest matters of fact from which the measure-relations of space may be determined; a problem which from the nature of the case is not completely
determinate, since there may be several systems of matters of fact which suffice to determine the measure-relations of space - the most important system for our present purpose being that which
Euclid has laid down as a foundation. These matters of fact are - like all matters of fact - not necessary, but only of empirical certainty; they are hypotheses. We may therefore investigate
their probability, which within the limits of observation is of course very great, and inquire about the justice of their extension beyond the limits of observation, on the side both of the
infinitely great and of the infinitely small.
For me the education comes when I myself am lured by interest into a history spoken to by Stefan and Bee of Backreaction: the "way of thought" that preceded the advent of General Relativity.
Einstein urged astronomers to measure the effect of gravity on starlight, as in this 1913 letter to the American G.E. Hale. They could not respond until the First World War ended.
Translation of letter from Einstein's to the American G.E. Hale by Stefan of BACKREACTION
Zurich, 14 October 1913
Highly esteemed colleague,
a simple theoretical consideration makes it plausible to assume that light rays will experience a deviation in a gravitational field.
[Grav. field] [Light ray]
At the rim of the Sun, this deflection should amount to 0.84" and decrease as 1/R (R = [strike]Sonnenradius[/strike] distance from the centre of the Sun).
[Earth] [Sun]
Thus, it would be of utter interest to know up to which proximity to the Sun bright fixed stars can be seen using the strongest magnification in plain daylight (without eclipse).
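Einstein's 0.84" is the pre-general-relativity value, roughly α = 2GM/(c²R); the completed 1915 theory doubles it to 4GM/(c²R), the famous 1.75" confirmed in 1919. A numerical check (my own, with modern constants, which is why it lands near 0.88" rather than the letter's 0.84"):

```python
import math

# Light deflection at the rim of the Sun, in arcseconds.
G = 6.674e-11           # gravitational constant, SI
M = 1.989e30            # solar mass, kg
c = 2.998e8             # speed of light, m/s
R = 6.957e8             # solar radius, m

arcsec = math.degrees(1.0) * 3600.0          # radians -> arcseconds
alpha_1913 = 2 * G * M / (c ** 2 * R) * arcsec   # the 1913 prediction
alpha_1915 = 2 * alpha_1913                      # the final GR value

print(round(alpha_1913, 2))   # ~0.88, near Einstein's quoted 0.84"
print(round(alpha_1915, 2))   # ~1.75
```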
Fast Forward to an Effect
Bending light around a massive object from a distant source. The orange arrows show the apparent position of the background source. The white arrows show the path of the light from the true position
of the source.
The fact that this does not happen when gravitational lensing applies is due to the distinction between the straight lines imagined by Euclidean intuition and the geodesics of space-time. In
fact, just as distances and lengths in special relativity can be defined in terms of the motion of electromagnetic radiation in a vacuum, so can the notion of a straight geodesic in general
relativity.
To me, gravitational lensing is a cumulative affair: such a geometry, borne into mind, could have passed beyond the postulates of Euclid and found its way to leaving an "indelible impression" that
the resources of the mind, in a simple system, intuit.
Einstein, in the paragraph below makes this clear as he ponders his relationship with Newton and the move to thinking about Poincaré.
The move to non-Euclidean geometries assumes that where Euclid leaves off, the basis of spacetime begins. So should such a statement as "where there is no gravitational field, the spacetime is flat"
be followed by "a Euclidean, physical constant of a straight line = c"?
I attach special importance to the view of geometry which I have just set forth, because without it I should have been unable to formulate the theory of relativity. ... In a system of reference
rotating relatively to an inert system, the laws of disposition of rigid bodies do not correspond to the rules of Euclidean geometry on account of the Lorentz contraction; thus if we admit
non-inert systems we must abandon Euclidean geometry. ... If we deny the relation between the body of axiomatic Euclidean geometry and the practically-rigid body of reality, we readily arrive at
the following view, which was entertained by that acute and profound thinker, H. Poincare:--Euclidean geometry is distinguished above all other imaginable axiomatic geometries by its simplicity.
Now since axiomatic geometry by itself contains no assertions as to the reality which can be experienced, but can do so only in combination with physical laws, it should be possible and
reasonable ... to retain Euclidean geometry. For if contradictions between theory and experience manifest themselves, we should rather decide to change physical laws than to change axiomatic
Euclidean geometry. If we deny the relation between the practically-rigid body and geometry, we shall indeed not easily free ourselves from the convention that Euclidean geometry is to be
retained as the simplest. (33-4)
It is never easy for me to see how I could have moved from what were Euclid's postulates, to have graduated my "sense of things," to have adopted this "new way of seeing" that is also cumulative to the inclusion of gravity as a concept relevant to all aspects of the way in which one can see reality.
So of course I am troubled by my inexperience, as well as by the interests of what could have been produced in the "new computers" of the future. So in some weird sense, how would you wrap up the dynamics of what led to "Moore's law," and find that this consideration is now in trouble? While having wrapped the "potential chaoticness" in a systemic feature here as deterministic? Is this appropriate?
In the presence of a gravitational field (or, in general, of any potential field) the molecules of a gas are acted upon by the gravitational forces. As a result the concentration of gas molecules is not the same at various points of space and is described by the Boltzmann distribution law: n(h) = n0*exp(-m*g*h/(k*T)).
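The distribution law the quoted passage invokes can be sketched numerically for the gravitational case, n(h) = n0·exp(-mgh/kT). The molecular mass and temperature below are illustrative choices of mine (nitrogen-like molecules at room temperature), not values from the text:

```python
import math

# Sketch of the Boltzmann (barometric) distribution n(h)/n(0) = exp(-m*g*h/(k*T))
# for gas molecules in a uniform gravitational field. Illustrative values only.
k = 1.381e-23   # J/K, Boltzmann constant
g = 9.81        # m/s^2, gravitational acceleration
m = 4.65e-26    # kg, mass of an N2-like molecule (assumed for illustration)
T = 300.0       # K, assumed temperature

def concentration_ratio(h):
    """n(h)/n(0): relative concentration at height h above the reference point."""
    return math.exp(-m * g * h / (k * T))

for h in (0.0, 1000.0, 8000.0):
    # density falls roughly as exp(-h / 9 km) for these values
    print(h, concentration_ratio(h))
```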
What happens exponentially in recognizing the avenues first debated between what was a consequence of "two paths": one that would more than likely be "the bazaar," while some would have considered the other "the cathedral"? Leftists should not be punished, Lubos :)
So what is Chaos then?
The roots of chaos theory date back to about 1900, in the studies of Henri Poincaré on the problem of the motion of three objects in mutual gravitational attraction, the so-called three-body
problem. Poincaré found that there can be orbits which are nonperiodic, and yet not forever increasing nor approaching a fixed point. Later studies, also on the topic of nonlinear differential
equations, were carried out by G.D. Birkhoff, A.N. Kolmogorov, M.L. Cartwright, J.E. Littlewood, and Stephen Smale. Except for Smale, who was perhaps the first pure mathematician to study
nonlinear dynamics, these studies were all directly inspired by physics: the three-body problem in the case of Birkhoff, turbulence and astronomical problems in the case of Kolmogorov, and radio
engineering in the case of Cartwright and Littlewood. Although chaotic planetary motion had not been observed, experimentalists had encountered turbulence in fluid motion and nonperiodic
oscillation in radio circuits without the benefit of a theory to explain what they were seeing.
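The defining feature of the chaos described above, sensitive dependence on initial conditions, can be shown in a few lines. The logistic map below is my choice of illustration (a standard chaotic system), not the three-body problem from the text:

```python
# Sketch of sensitive dependence on initial conditions, using the logistic map
# x -> r*x*(1-x) at r = 4, a standard chaotic case (illustrative choice,
# not the three-body problem discussed in the text).
def logistic_orbit(x0, r=4.0, steps=50):
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1 - xs[-1]))
    return xs

a = logistic_orbit(0.2)
b = logistic_orbit(0.2 + 1e-9)   # perturb the start by one part in a billion

print(abs(a[5] - b[5]))                           # still tiny after 5 steps
print(max(abs(x - y) for x, y in zip(a, b)))      # grows to order one once the orbits decorrelate
```

Nearby orbits track each other at first, then separate until they are effectively unrelated, which is why long-range weather forecasting (Lorenz's topic below) has hard limits.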
13:30 Lecture
Edward Norton Lorenz
Laureate in Basic Sciences
“How Good Can Weather Forecasting Become? – The Star of a Theory”
So this talk is then taken to "another level," and the distinction of Web 2.0 raises its head; and of course, if you read about the exponential growth highlighted in communities' dissemination of all information, how could it be only Web 1.0 if held to Netscape's design?
I mean, definitely, if we were to consider "the Pascalian triangle" and the emergence of numbered systems, what says the Riemann Hypothesis would not have emerged also? The "marble drop" as some inclusive designation of the development of curves in society, once raised from "an idea" drawn from some place?
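The link between Pascal's triangle and the "marble drop" (a Galton board) can be made concrete: row n of the triangle counts the paths a marble can take to each bin, and a normalized deep row traces the bell curve. A small sketch of my own:

```python
# Sketch linking Pascal's triangle to a marble drop (Galton board):
# row n of the triangle counts the left/right paths leading to each bin.
def pascal_row(n):
    """Binomial coefficients C(n, 0) .. C(n, n), built multiplicatively."""
    row = [1]
    for k in range(n):
        row.append(row[-1] * (n - k) // (k + 1))
    return row

print(pascal_row(4))   # [1, 4, 6, 4, 1]

# Normalizing a deeper row approximates the bell-shaped pile the marbles form.
total = sum(pascal_row(10))            # 2**10 paths in total
print([c / total for c in pascal_row(10)])
```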
True creativity often starts where language ends.
Arthur Koestler
For those who engaged the issue of intuition, can we say that this is very close to what creativity is, and that the quote supplied above is its essence? Has the mind, having come to a point on the Aristotelian arch, fully understood that to work with reason and insight would have been to see that the medium, whatever it is, is held to this regard?
If the question is held in the context of the mind and the focus is strong, in what probabilistic venue would we see such events as issuing from someplace? This I would say would be the "unconscious," and having diagrammed this in a schematic way, how is it that such causations might have been tied to the fisherman's line and lure, sent deep into a future for an examination result?
Of Koestler’s many books, his powerfully anti-Communist novel Darkness at Noon (1941) is still the most famous, but he wrote one book that focused squarely on the paranormal – The Roots of
Coincidence (1972). Here, he attempts to find a basis for paranormal events in coincidence, or more precisely synchronicity, so that there is only one phenomenon to explain rather than many. He
proceeds to seek the roots of coincidence in the Alice-in-Wonderland world of quantum physics, the infinitesimally small subatomic realm where our everyday logic no longer holds sway, where
particles can be waves and vice versa, where forces that only mathematical equations can glimpse swim in the dark, unfathomable ocean of probability before the manifestation of either matter or
mind. Towards the end of the book, Koestler pleads that parapsychology be made “academically respectable and attractive to students”, otherwise the “limitations of our biological equipment may
condemn us to the role of Peeping Toms at the keyhole of eternity”.
So it was gathered all around, everything that we were involved with, and out of it came a solution, abstractly engaging the mind in the symbolisms of a language not understood, but still relevant. What would this new language be, if the old one had run its course and we needed new insight? We were careful then in understanding that progress can be made in our expectancy, as well as having full confidence in the self, to explore these unknown regions.
Who better, then, to create the dialogue necessary to bring forth the creative flow, if we had acknowledged the teacher and student within ourselves?
Art Mirrors Physics Mirrors Art, by Stephen G. Brush
The French mathematician Henri Poincaré provided inspiration for both Einstein and Picasso. Einstein read Poincaré's Science and Hypothesis (French edition 1902, German translation 1904) and
discussed it with his friends in Bern. He might also have read Poincaré's 1898 article on the measurement of time, in which the synchronization of clocks was discussed--a topic of professional
interest to Einstein as a patent examiner. Picasso learned about Science and Hypothesis indirectly through Maurice Princet, an insurance actuary who explained the new geometry to Picasso and his
friends in Paris. At that time there was considerable popular fascination with the idea of a fourth spatial dimension, thought by some to be the home of spirits, conceived by others as an "astral
plane" where one can see all sides of an object at once. The British novelist H. G. Wells caused a sensation with his book The Time Machine (1895, French translation in a popular magazine
1898-99), where the fourth dimension was time, not space.
So, would we have recognized some of these features in the way the words are written, or how the question mark would transcend the inspiration sought and found from others who would propel us forward? The conditions and foundational attitudes then had to rely on what history had already gone through, so that we might have recognized also the work that Poincare might have relinquished in that dialogue. To have propelled other minds, like Picasso or Einstein, forward?
Is this where "time" became something of an issue with the space coordinates, such that its resolution might have paved the way for a spacetime? Answered what the fourth dimension actually was? Such progression would then have been important as we moved forward in a society where not only had Poincare provided the prospect, but Grossman, in the geometrically refined views he sought out, contributed as well to the troubles Einstein was facing?
Where these might have been thought to be random, are the events tied together? Are they seen in the actualization of what transcended these two random events? Or were they random?
We talk about the historical time, and what was happening around then, if we had seen information on Flatland and Abbott? Issues of mysticism, held in the context of what those extra dimensions might actually mean.
Out of this, a new-found responsibility as to how such mysticism, once held in the spookiness of Einstein, now has an explanation that has been further refined in what an Anton Zeilinger might have been doing for us?
As a child, Einstein, when given the gift of the compass, immediately recognized the mystery in nature. If such an impression could have instigated the work that unfolded over time in regard to relativity, then what work could ever instigate the understanding of the Pea as a constant reminder of what the universe became in the mind of a child, as we sleep on it?
Were hills and valleys, held in the context of Wayne Hu's explanations, a feasible product of the landscape to work with?
'The Princess & The Pea' from 'The Washerwoman's Child'
If strings abhor infinities, then the "Princess's Pea" was really a creation of "three spheres" emanating from the "fabric of spacetime"? It had to be reduced from spacetime to a three-dimensional framework?
Spheres can be generalized to higher dimensions. For any natural number n, an n-sphere is the set of points in (n+1)-dimensional Euclidean space which are at distance r from a fixed point of that
space, where r is, as before, a positive real number. Here, the choice of number reflects the dimension of the sphere as a manifold.
a 0-sphere is a pair of points
a 1-sphere is a circle
a 2-sphere is an ordinary sphere
a 3-sphere is a sphere in 4-dimensional Euclidean space
Spheres for n ≥ 3 are sometimes called hyperspheres. The n-sphere of unit radius centred at the origin is denoted S^n and is often referred to as "the" n-sphere. The notation S^n is also often used to denote any set with a given structure (topological space, topological manifold, smooth manifold, etc.) identical (homeomorphic, diffeomorphic, etc.) to the structure of S^n above.
An n-sphere is an example of a compact n-manifold.
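The definition just quoted, the set of points in (n+1)-dimensional space at distance r from a centre, can be turned into a small membership test (a sketch of my own, not from the source):

```python
import math

# Sketch of the n-sphere definition quoted above: an n-sphere of radius r is
# the set of points in (n+1)-dimensional Euclidean space at distance r from
# a fixed centre (here, the origin).
def on_n_sphere(point, r=1.0, tol=1e-9):
    """True if `point` (a tuple of n+1 coordinates) lies on the n-sphere of radius r."""
    return abs(math.sqrt(sum(x * x for x in point)) - r) < tol

print(on_n_sphere((1.0, 0.0)))             # on the 1-sphere (a circle): True
print(on_n_sphere((0.5, 0.5, 0.5, 0.5)))   # on the 3-sphere, since 4*(0.25) = 1: True
print(on_n_sphere((1.0, 1.0, 0.0)))        # off the 2-sphere (distance sqrt(2)): False
```

Note how the 3-sphere, the object of the Poincaré conjecture discussed below, lives in four-dimensional space even though it is itself a three-dimensional manifold.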
Was it really fantasy that Susskind was involved in, or were there motivated ideas held in mathematical structure? People like to talk about him without really understanding how such geometrical propensities might have motivated his mind to consider conjectures within the physics of our world.
Bernhard Riemann once claimed: "The value of non-Euclidean geometry lies in its ability to liberate us from preconceived ideas in preparation for the time when exploration of physical laws might
demand some geometry other than the Euclidean." His prophecy was realized later with Einstein's general theory of relativity. It is futile to expect one "correct geometry" as is evident in the
dispute as to whether elliptical, Euclidean or hyperbolic geometry is the "best" model for our universe. Henri Poincaré, in Science and Hypothesis (New York: Dover, 1952, pp. 49-50) expressed it
this way.
You had to realize that, working in these abstractions, such work was not to be abandoned because we might have thought such abstraction too far from the tangible thinking that topologies might make of it.
Poincaré Conjecture Proved--This Time for Real
By Eric W. Weisstein
In the form originally proposed by Henri Poincaré in 1904 (Poincaré 1953, pp. 486 and 498), Poincaré's conjecture stated that every closed simply connected three-manifold is homeomorphic to the
three-sphere. Here, the three-sphere (in a topologist's sense) is simply a generalization of the familiar two-dimensional sphere (i.e., the sphere embedded in usual three-dimensional space and
having a two-dimensional surface) to one dimension higher. More colloquially, Poincaré conjectured that the three-sphere is the only possible type of bounded three-dimensional space that contains
no holes. This conjecture was subsequently generalized to the conjecture that every compact n-manifold is homotopy-equivalent to the n-sphere if and only if it is homeomorphic to the n-sphere.
The generalized statement is now known as the Poincaré conjecture, and it reduces to the original conjecture for n = 3.
While it is very difficult for me "to see" how such movements are characterized in those higher spaces, it is not without some understanding that such topologies and genus figures would point to the continuity of expression, as "energy and matter" related in a most curious way? Let's consider the non-discretium way in which such continuities work, shall we?
From one perspective this circle would have some valuation in the makings of the universe in expression, and would identify itself where such potentials are raised from the singular function of the circular colliders. Those extra dimensions had to have some basis from which to evolve in those higher spaces, for such thinking to have excelled to more than mathematical conjectures?
We can also consider donuts with more handles attached. The number of handles in a donut is its most important topological information. It is called the genus.
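The genus just mentioned can be computed from a surface mesh: for a closed orientable surface, the Euler characteristic V - E + F equals 2 - 2g, so counting vertices, edges and faces recovers the number of handles. A small sketch of my own:

```python
# Sketch of the genus remark above: for a closed orientable surface,
# V - E + F = 2 - 2g, so a mesh's counts determine its number of handles g.
def genus(V, E, F):
    """Genus of a closed orientable surface from its mesh counts."""
    chi = V - E + F
    assert (2 - chi) % 2 == 0, "not a closed orientable surface?"
    return (2 - chi) // 2

print(genus(8, 12, 6))    # a cube's surface: chi = 2, so genus 0 (a sphere)
print(genus(16, 32, 16))  # a 4x4 quad mesh of a torus: chi = 0, so genus 1
```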
It might be expressed in the tubes of KK tower modes of measure? That such "differences of energies" might have held the thinking to the brane world, yet revealed a three-dimensional perspective in the higher-dimensional world of the bulk. These had to depart from the physics, and be held in context?
Clay Institute
If we stretch a rubber band around the surface of an apple, then we can shrink it down to a point by moving it slowly, without tearing it and without allowing it to leave the surface. On the
other hand, if we imagine that the same rubber band has somehow been stretched in the appropriate direction around a doughnut, then there is no way of shrinking it to a point without breaking
either the rubber band or the doughnut. We say the surface of the apple is "simply connected," but that the surface of the doughnut is not. Poincaré, almost a hundred years ago, knew that a two
dimensional sphere is essentially characterized by this property of simple connectivity, and asked the corresponding question for the three dimensional sphere (the set of points in four
dimensional space at unit distance from the origin). This question turned out to be extraordinarily difficult, and mathematicians have been struggling with it ever since.
While the three-sphere has been generalized in my point of view, I am somewhat perplexed by the Sklar potential when thinking about toruses and a hole, using a rubber band. If the formalization of Greene's statement so far were valid, then in such a case of the universe emblazoning itself within some mathematically inclined structure, what would have raised all these other thoughts towards the quantum?
In fact, in the reciprocal language, these tiny circles are getting ever smaller as time goes by, since as R grows, 1/R shrinks. Now we seem to have really gone off the deep end. How can this
possibly be true? How can a six-foot tall human being 'fit' inside such an unbelievably microscopic universe? How can a speck of a universe be physically identical to the great expanse we view in
the heavens above?
(Greene, The Elegant Universe, pages 248-249)
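The R and 1/R identification behind Greene's passage (string-theoretic T-duality) can be sketched in a line: in string units, a state with momentum number n and winding number w on a circle of radius R contributes (n/R)^2 + (wR)^2 to the mass-squared (oscillator terms omitted, a simplification of mine), and swapping n with w while sending R to 1/R leaves that unchanged:

```python
# Sketch of T-duality: the momentum/winding contribution to mass-squared on a
# circle of radius R (string units, oscillator terms omitted for simplicity).
def mass_sq(n, w, R):
    """(n/R)^2 + (w*R)^2 for momentum number n, winding number w, radius R."""
    return (n / R) ** 2 + (w * R) ** 2

R = 3.7
for n, w in [(1, 0), (0, 1), (2, 5)]:
    # each pair agrees up to floating-point rounding: R <-> 1/R with n <-> w
    print(mass_sq(n, w, R), mass_sq(w, n, 1 / R))
```

This is why a "speck of a universe" at radius 1/R can be physically indistinguishable from the great expanse at radius R: the spectra match.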
Were our thoughts based in a wonderful world, where such purity of math structure became the basis of our expressions while speaking to the nature of the reality of our world?
Bubble Nucleation
Some people do not like to consider the context of the universe and the suppositions that arose from insight drawn and held to possible scenarios. I like to consider these things because I am interested in how a geometrical consistency might be born into the cyclical nature. Where such expression might hold our thinking minds.
Science and its Geometries?
Have these already been dismissed by the physics assigned, so that we now say this scenario is not so likely? Yet we are held by the awe and specter of superfluids, whose origination might have been signalled by gravitational collapse?
Would we be so less inclined not to think about Dirac's sea of virtual particles, to think the origination might have issued from the very warm waters of a mother's creative womb, nestled?
Spheres that rise from the deep waters of our thinking, to have seen the basis of all maths and geometries designed from the heart. Subjective, yet in the realization of the philosophy imbued, the very voice speaks only from a pure mathematical realm, and is covered by the very cloaks of one's reason?
After doing so, they realized that all inflationary theories produced open universes in the manner Turok describes below. In the end, they created the Hawking-Turok Instanton theory.
The process is a bit like the formation of a bubble
in a boiling pan of water...the interior of this tiny
bubble manages to turn itself into an infinite open
universe. Imagine a bubble forming and expanding at the
speed of light, so that it becomes very big, very quickly.
Now look inside the bubble.
The peculiar thing is that in such a bubble, space and time
get tangled in such a way that what we would call today's
universe would actually include the entire future of the
bubble. But because the bubble gets infinitely large in
the future, the size of 'today's universe' is actually infinite.
So an infinite, open universe is formed inside a tiny, initially
microscopic bubble.
This is going to be quite the blog entry, because however little a response might have come from Clifford's links to artistic imagery and its relation to science, I definitely have more to say.
So, being short of time, the entries within this blog posting will seem disjointed, but believe me, it will show a historical significance that one would not have considered had one not seen the relevance of art and its implications alongside science.
Did Picasso Know About Einstein
Arthur Miller
Miller has since moved away from conventional history of science, having become interested in visual imagery through reading the German-language papers of Einstein, Heisenberg and Schrödinger -
"people who were concerned with visualization and visualizability". Philosophy was an integral part of the German school system in the early 1900s, Miller explains, and German school pupils were
thoroughly trained in the philosophy of Immanuel Kant.
Piece Depicts the Cycle of Birth, Life, and Death-Origin, Identity, and Destiny by Gabriele Veneziano
The Myth of the Beginning of Time
The new willingness to consider what might have happened before the big bang is the latest swing of an intellectual pendulum that has rocked back and forth for millennia. In one form or another, the issue of the ultimate beginning has engaged philosophers and theologians in nearly every culture. It is entwined with a grand set of concerns, one famously encapsulated in an 1897 painting by Paul Gauguin: D'où venons-nous? Que sommes-nous? Où allons-nous? (Where do we come from? What are we? Where are we going?)
Scientific American, The Time before Time, May 2004.
Sister Wendy's American Masterpieces":
"This is Gauguin's ultimate masterpiece - if all the Gauguins in the world, except one, were to be evaporated (perish the thought!), this would be the one to preserve. He claimed that he did not
think of the long title until the work was finished, but he is known to have been creative with the truth. The picture is so superbly organized into three "scoops" - a circle to right and to
left, and a great oval in the center - that I cannot but believe he had his questions in mind from the start. I am often tempted to forget that these are questions, and to think that he is
suggesting answers, but there are no answers here; there are three fundamental questions, posed visually.
"On the right (Where do we come from?), we see the baby, and three young women - those who are closest to that eternal mystery. In the center, Gauguin meditates on what we are. Here are two
women, talking about destiny (or so he described them), a man looking puzzled and half-aggressive, and in the middle, a youth plucking the fruit of experience. This has nothing to do, I feel
sure, with the Garden of Eden; it is humanity's innocent and natural desire to live and to search for more life. A child eats the fruit, overlooked by the remote presence of an idol - emblem of
our need for the spiritual. There are women (one mysteriously curled up into a shell), and there are animals with whom we share the world: a goat, a cat, and kittens. In the final section (Where
are we going?), a beautiful young woman broods, and an old woman prepares to die. Her pallor and gray hair tell us so, but the message is underscored by the presence of a strange white bird. I
once described it as "a mutated puffin," and I do not think I can do better. It is Gauguin's symbol of the afterlife, of the unknown (just as the dog, on the far right, is his symbol of himself).
"All this is set in a paradise of tropical beauty: the Tahiti of sunlight, freedom, and color that Gauguin left everything to find. A little river runs through the woods, and behind it is a great
slash of brilliant blue sea, with the misty mountains of another island rising beyond. Gauguin wanted to make it absolutely clear that this picture was his testament. He seems to have concocted a
story that, being ill and unappreciated (that part was true enough), he determined on suicide - the great refusal. He wrote to a friend, describing his journey into the mountains with arsenic.
Then he found himself still alive, and returned to paint more masterworks. It is sad that so great an artist felt he needed to manufacture a ploy to get people to appreciate his work. I wish he
could see us now, looking with awe at this supreme painting."
Art Mirrors Physics Mirrors Art, by Stephen G. Brush
Arthur Miller addresses an important question: What was the connection, if any, between the simultaneous appearance of modern physics and modern art at the beginning of the 20th century? He has
chosen to answer it by investigating in parallel biographies the pioneering works of the leaders of the two fields, Albert Einstein and Pablo Picasso. His brilliant book, Einstein, Picasso,
offers the best explanation I have seen for the apparently independent discoveries of cubism and relativity as parts of a larger cultural transformation. He sees both as being focused on the
nature of space and on the relation between perception and reality.
The suggestion that some connection exists between cubism and relativity, both of which appeared around 1905, is not new. But it has been made mostly by art critics who saw it as a simple causal
connection: Einstein's theory influenced Picasso's painting. This idea failed for lack of plausible evidence. Miller sees the connection as being less direct: both Einstein and Picasso were
influenced by the same European culture, in which speculations about four-dimensional geometry and practical problems of synchronizing clocks were widely discussed.
The French mathematician Henri Poincaré provided inspiration for both Einstein and Picasso. Einstein read Poincaré's Science and Hypothesis (French edition 1902, German translation 1904) and
discussed it with his friends in Bern. He might also have read Poincaré's 1898 article on the measurement of time, in which the synchronization of clocks was discussed--a topic of professional
interest to Einstein as a patent examiner. Picasso learned about Science and Hypothesis indirectly through Maurice Princet, an insurance actuary who explained the new geometry to Picasso and his
friends in Paris. At that time there was considerable popular fascination with the idea of a fourth spatial dimension, thought by some to be the home of spirits, conceived by others as an "astral
plane" where one can see all sides of an object at once. The British novelist H. G. Wells caused a sensation with his book The Time Machine (1895, French translation in a popular magazine
1898-99), where the fourth dimension was time, not space.
The Search for Extra Dimensions
OR Does Dzero Have Branes?
by Greg Landsberg
Theorists tell us that these extra spatial dimensions, if they exist, are curled up, or "compactified." In the example with the ant, we could imagine rolling the sheet of paper to form a cylinder. If the ant crawled in the direction of curvature, it would eventually come back to the point where it started--an example of a compactified dimension. If the ant crawled in a direction parallel to the length of the cylinder, it would never come back to the same point (assuming a cylinder so long that the ant never reaches the edge)--an example of a "flat" dimension. According to superstring theory, we live in a universe where our three familiar dimensions of space are "flat," but there are additional dimensions, curled up so tightly that they have an extremely small radius.
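The ant-on-a-cylinder picture above can be sketched directly: a cylinder has one "flat" coordinate z and one compact angular coordinate of radius R, and distance along the compact direction wraps around (a sketch of my own, with an arbitrarily chosen tiny R):

```python
import math

# Sketch of a compactified dimension: points on a cylinder are (z, theta),
# flat along z and periodic along theta with an assumed tiny radius R.
def compact_distance(a, b):
    """Shortest angular separation on a circle, accounting for wrap-around."""
    d = (a - b) % (2 * math.pi)
    return min(d, 2 * math.pi - d)

def cylinder_distance(p, q, R=1e-3):
    """Distance between (z, theta) points: flat along z, wrapped along theta."""
    (z1, t1), (z2, t2) = p, q
    return math.hypot(z1 - z2, R * compact_distance(t1, t2))

start = (0.0, 0.25)
# Crawling once around the curled-up direction returns the ant to its start:
print(cylinder_distance(start, (0.0, 0.25 + 2 * math.pi)))  # ~0
# ...whereas crawling along the flat direction never brings it back:
print(cylinder_distance(start, (5.0, 0.25)))                # 5.0
```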
Issues with Dimensionality
"Why must art be clinically “realistic?” This Cubist “revolt against perspective” seized the fourth dimension because it touched the third dimension from all possible perspectives. Simply put, Cubist
art embraced the fourth dimension. Picasso's paintings are a splendid example, showing a clear rejection of three dimensional perspective, with women's faces viewed simultaneously from several
angles. Instead of a single point-of-view, Picasso's paintings show multiple perspectives, as if they were painted by a being from the fourth dimension, able to see all perspectives simultaneously.
As art historian Linda Henderson has written, “the fourth dimension and non-Euclidean geometry emerge as among the most important themes unifying much of modern art and theory."
And who could not forget Salvador Dali?
In geometry, the tesseract, or hypercube, is a regular convex polychoron with eight cubical cells. It can be thought of as a 4-dimensional analogue of the cube. Roughly speaking, the tesseract is
to the cube as the cube is to the square.
Generalizations of the cube to dimensions greater than three are called hypercubes or measure polytopes. This article focuses on the 4D hypercube, the tesseract.
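The combinatorics of the tesseract just described are easy to enumerate: its 16 vertices are the 0/1 points in four dimensions, two vertices share an edge exactly when they differ in one coordinate, and each of its 8 cubical cells comes from freezing one coordinate. A small sketch of my own:

```python
from itertools import product

# Sketch of the tesseract (4-cube): vertices are 0/1 points in 4 dimensions;
# an edge joins two vertices that differ in exactly one coordinate.
vertices = list(product((0, 1), repeat=4))
edges = [(u, v) for i, u in enumerate(vertices) for v in vertices[i + 1:]
         if sum(a != b for a, b in zip(u, v)) == 1]

print(len(vertices))  # 16 vertices
print(len(edges))     # 32 edges

# Each cubical cell is obtained by freezing one of the 4 coordinates at 0 or 1.
cells = [[v for v in vertices if v[axis] == val]
         for axis in range(4) for val in (0, 1)]
print(len(cells), all(len(c) == 8 for c in cells))  # 8 cells, 8 vertices each
```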
So it is interesting nonetheless, isn't it, that we would find pictures and artists who engaged themselves with seeing in ways that art seems capable of, while there were fewer inclinations in other minds to grasp such opportunities, had they not had this vision of the artist? They of course added their flavor, as Salvador Dali did in the painting below this paragraph. It recognizes the greater value of assigning dimensionality to thinking, which leads us even further, had we not gone through a revision of a kind to understand that the graviton bulk perspective could have so much to do with the figures and the realization of what dimensionality means.
So while such lengths had been led to in what curvature parameters might do to our views of the cosmos, it wasn't too hard to envision the realistic valuation of gravitons as group gatherings whose curvature indications change greatly with what we saw of the energy determinations.
Beyond forms
Probability of all events(fifth dimension)
vvvvvvvvvvvvv Future-Time
vvvvvvvvvvv |
vvvvvvvvv |
vvvvvvv |
vvvvv |
vvv |
v |
<<<<<<<<<<<<>>>>>>>>>>>now -------|
flash fourth dimension with time |
A |
AAA |
AAAAA |
AAAAAAA |
AAAAAAAAA |
AAAAAAAAAAA |
AAAA ___AAAAA |
AAAAA/__/|AAAAA____Three dimension
AAAAAA|__|/AAAAAA |
AAAAAAAAAAAAAAAAAAA |
___ |
/__/ brane--------two dimension
\ /
.(U)1=5th dimension
I hope this helps explain. It certainly got me thinking, drawing it:)
Similarly a hypercube’s shadow cast in the third dimension becomes a cube within a cube and, if rotated in four dimensions, executes motions that would appear impossible to our three-dimensional
So hyperdimensional geometry must have found itself describable, having understood that Euclid's postulates lead to the understanding of the fifth. A->B, and the field becomes an interesting idea, not only from a number of directions (Inverse Square Law); the dimensional understanding of a string, which leads from the fifth-dimensional perspective, is a point with an energy value that describes for us the nature of curvature when extended to a string length (it also becomes, looking at the end, a sphere from a point, and at the same time a cylinder in its length).
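The Inverse Square Law mentioned in passing is itself a statement about dimensionality: if flux from a point source spreads over a sphere in d spatial dimensions, the field falls as 1/r^(d-1), and the familiar inverse square is the special case d = 3. A sketch of my own:

```python
# Sketch of the inverse-square law generalized to d spatial dimensions:
# flux spread over a (d-1)-sphere gives a field falling as 1/r^(d-1).
def field_falloff(r, d=3):
    """Relative field strength at distance r in d spatial dimensions."""
    return 1.0 / r ** (d - 1)

for d in (3, 4, 5):
    # doubling the distance weakens the field by a factor 2^(d-1)
    print(d, field_falloff(1.0, d) / field_falloff(2.0, d))
```

This falloff is one way extra-dimension searches are framed: gravity leaking into extra dimensions would deviate from the pure inverse square at short distances.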
In looking at Einstein's fourth dimension of time, the idea of gravity makes its appearance in respect of dimension.
So how is it that minds like ours could perceive a fifth-dimensional perspective, but to have been led to it? It is not always about points (a discrete perspective) but about the distance in between those points. We have talked about Gauss here before, and Riemann.
Who in Their Right Mind?
Penrose's Influence on Escher
During the latter half of the 1950’s, Maurits Cornelis Escher received a letter from Lionel and Roger Penrose. This letter consisted of a report by the father-and-son team that focused on impossible
figures. By this time, Escher had begun exploring impossible worlds. He had recently produced the lithograph Belvedere based on the “rib-cube,” an impossible cuboid named by Escher (Teuber 161).
However, the letter by the Penroses, which would later appear in the British Journal of Psychology, enlightened Escher to two new impossible objects; the Penrose triangle and the Penrose stairs. With
these figures, Escher went on to create further impossible worlds that break the laws of three-dimensional space, mystify one’s mind, and give a window to the artist heart.
Penrose and Quanglement
Order and Chaos, by Escher (lithograph, 1950) | {"url":"http://www.eskesthai.com/search/label/HENRI%20POINCARE","timestamp":"2014-04-18T11:27:24Z","content_type":null,"content_length":"340562","record_id":"<urn:uuid:bb6a28ad-3524-4204-9870-aa5e356df072>","cc-path":"CC-MAIN-2014-15/segments/1398223203422.8/warc/CC-MAIN-20140423032003-00053-ip-10-147-4-33.ec2.internal.warc.gz"} |
Sketch the graph of the following:
2x^2 + 2y^2 + x + y = 0 Show complete working out.
Your question is not at all clear. You could "sketch" the graph by calculating several points: if x= 0, $2y^2+ y= y(2y+1)= 0$ so y= 0 or -1/2. $(0, 0)$ and $(0, -\frac{1}{2})$ are points on the graph. Taking y= 0, we get, in exactly the same way, $(0,0)$, which we had before, and $(-\frac{1}{2}, 0)$. Get enough points that way and you can sketch the graph. But I suspect that you are intended to complete the square to see what kind of figure this is: If you divide through by 2 you get $x^2+ \frac{1}{2}x+ y^2+ \frac{1}{2}y= 0$. Presumably you know that $(x+ a)^2= x^2+ 2ax+ a^2$. Comparing that to $x^2+ \frac{1}{2}x$, they match if a= 1/4 and we can make this a "perfect square" by adding $\left(\frac{1}{4}\right)^2= \frac{1}{16}$. Doing that for both x and y, $x^2+ \frac{1}{2}x+ \frac{1}{16}+ y^2+ \frac{1}{2}y+ \frac{1}{16}= \frac{1}{16}+ \frac{1}{16}$ or $(x+\frac{1}{4})^2+ (y+\frac{1}{4})^2= \frac{1}{8}$. What kind of figure is that?
$2x^2+2y^2+x+y=0$
$2(x^2+\frac{1}{2}x)+2(y^2+\frac{1}{2}y)=0$
$2(x^2+\frac{1}{2}x+\frac{1}{16}-\frac{1}{16})+2(y^2+\frac{1}{2}y+\frac{1}{16}-\frac{1}{16})=0$
$2[(x+\frac{1}{4})^2-\frac{1}{16}]+2[(y+\frac{1}{4})^2-\frac{1}{16}]=0$
$2(x+\frac{1}{4})^2-\frac{1}{8}+2(y+\frac{1}{4})^2-\frac{1}{8}=0$
$2(x+\frac{1}{4})^2+2(y+\frac{1}{4})^2=\frac{1}{4}$
$(x+\frac{1}{4})^2+(y+\frac{1}{4})^2=\frac{1}{8}$
$(x+\frac{1}{4})^2+(y+\frac{1}{4})^2=\left(\frac{1}{2\sqrt 2}\right)^2$
This is a circle with its centre at $\left(-\frac{1}{4}, -\frac{1}{4}\right)$ and a radius of $\frac{1}{2\sqrt 2}$.
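The completed-square result, a circle with centre (-1/4, -1/4) and radius 1/(2*sqrt(2)), can be confirmed numerically: every point of that circle should satisfy the original equation 2x^2 + 2y^2 + x + y = 0. A quick check of my own:

```python
import math

# Numeric check of the completed square: sample the circle with centre
# (-1/4, -1/4) and radius 1/(2*sqrt(2)) and evaluate the original equation.
cx = cy = -0.25
r = 1 / (2 * math.sqrt(2))

def original(x, y):
    return 2 * x ** 2 + 2 * y ** 2 + x + y

checks = []
for k in range(8):
    t = 2 * math.pi * k / 8
    checks.append(abs(original(cx + r * math.cos(t), cy + r * math.sin(t))))

print(max(checks))   # ~1e-16: zero up to floating-point rounding
```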
Math.round...(for -ve no.)
Author Math.round...(for -ve no.)
Ranch Hand
Mar 15, public class math
2001 {
Posts: 108 public static void main(String a[])
System.out.println(Math.round(-2.5));// o/p is -2....line 1
System.out.println(Math.round(-2.4));// o/p is -2....line 2
System.out.println(Math.round(-2.6));// o/p is -3....line 3
System.out.println(Math.round(2.5)); // o/p is 3...as expected.
System.out.println(Math.round(2.4)); // o/p is 2...as expected.
System.out.println(Math.round(2.6)); // o/p is 3...as expected.
round returns a long for double, returns an int for float. (closest int or long value to the argument)
The result is rounded to an integer by adding 0.5, taking the floor of the result, and casting the result to type int / long.
Now look at the output of line 1 and line 3 (-2.5 gives -2 whereas -2.6 gives -3). For positive numbers this is not the rule: both 2.5 and 2.6 give
3. Now look at line 2 also.
Same problem here.
Can anyone tell me what the real rule is for rounding negative numbers?
I am simply not getting any rule.
Thanks In Advance.
<marquee> Ratul Banerjee </marquee>
Ranch Hand
Joined: Hi, Ratul.
Oct 31, Math.round() returns the integral value (int or long) that is closest to the input value. If that input value is equidistant from its neighboring integral values (e.g., -2.5, +104.5),
2000 Math.round() rounds up, towards positive infinity. Hence Math.round( -2.5 ) yields -2, and Math.round( +2.5 ) yields 3.
Posts: 241 Art
Ranch Hand
Mar 15, Thanks Art
2001 can u tell me then why line 3 is showing -3 on the stnd. o/p.
Posts: 108
Ranch Hand
Because -3 is the int that is closer to the float -2.6. As I mentioned earlier, as a general rule, Math.round() returns the integral value that is closer to the input value. The int closer
Joined: to -2.6 is -3. Similarly, the int closer to +2.6 is +3, so Math.round( +2.6 ) = 3.
Oct 31, Now that we've established that round() returns the integral value that's closer to the input, the question presents itself, what is the behavior if the input value is equidistant from its
2000 neighboring integral values? Which int is "closer" then? That is, if float f = ( 2n + 1 ) / 2 for integral n, what is Math.round( f )? The answer is, for these special cases where round()
Posts: 241 's input is "something-point-five", round() rounds up towards positive infinity.
Maybe this illustration will help:
If this hasn't cleared round() up for you, Ratul, let me know.
Ranch Hand
see the src code of Math.round ( i just pick the float->int version):
Joined: public static int round(float a) {
Mar 19, return (int)floor(a + 0.5f);
2001 }
Posts: 33 so round(-2.4f)=floor(-1.9f)=-2
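The floor(a + 0.5) definition quoted from the source can be checked outside Java as well. This Python sketch emulates Java's Math.round (an assumption: it mimics the Java rule; Python's own round() uses a different tie-breaking rule, so math.floor is called directly) and reproduces the outputs discussed in the thread:

```python
import math

def java_round(x):
    # Java's Math.round(double) is equivalent to (long) Math.floor(x + 0.5)
    return math.floor(x + 0.5)

results = {v: java_round(v) for v in (-2.6, -2.5, -2.4, 2.4, 2.5, 2.6)}
print(results)  # the tie case -2.5 rounds up, towards positive infinity, to -2
```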
Ranch Hand
Mar 15, THANKS ART ...It is clear now.
2001 Thanks shadow
Posts: 108
subject: Math.round...(for -ve no.) | {"url":"http://www.coderanch.com/t/199133/java-programmer-SCJP/certification/Math-ve","timestamp":"2014-04-19T04:39:34Z","content_type":null,"content_length":"29335","record_id":"<urn:uuid:dee2a5c4-709b-48ad-a012-c167663a8754>","cc-path":"CC-MAIN-2014-15/segments/1397609535775.35/warc/CC-MAIN-20140416005215-00641-ip-10-147-4-33.ec2.internal.warc.gz"} |
1. Given Int Variables K And Total That Have Already ... | Chegg.com
1. Given int variables k and total that have already been declared, use a for loop to compute the sum of the squares of the first 50 whole numbers, and store this value in total. Thus your code should put 1*1 + 2*2 + 3*3 + ... + 49*49 + 50*50 into total. Use no variables other than k and total.
2. Given an int variable n that has been initialized to a positive value and, in addition, int variables k and total that have already been declared, use a for loop to compute the sum of the cubes of the first n whole numbers, and store this value in total. Thus if n equals 4, your code should put 1*1*1 + 2*2*2 + 3*3*3 + 4*4*4 into total. Use no variables other than n, k, and total.
3. Given int variables k and total that have already been declared, use a do...while loop to compute the sum of the squares of the first 50 whole numbers, and store this value in total. Thus your code should put 1*1 + 2*2 + 3*3 + ... + 49*49 + 50*50 into total. Use no variables other than k and total.
Given an int variable n that has been initialized to a positive value and, in addition, int variables k and total that have already been declared, use a do...while loop to compute the sum of the cubes of the first n whole numbers, and store this value in total. Thus if n equals 4, your code should put 1*1*1 + 2*2*2 + 3*3*3 + 4*4*4 into total. Use no variables other than n, k, and total.
Assume the int variables i, lo, hi, and result have been declared and that lo and hi have been initialized.
Write a for loop that adds the integers between lo and hi (inclusive), and stores the result in result.
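The exercises ask for Java/C-style for and do...while loops; the quantities themselves are easy to check in any language. A Python sketch (the variable names mirror the exercise; the while True/break pattern stands in for do...while) compares both sums against their closed forms:

```python
# Sum of squares of the first 50 whole numbers: 1*1 + 2*2 + ... + 50*50
total = 0
for k in range(1, 51):
    total += k * k
assert total == 50 * 51 * 101 // 6       # closed form n(n+1)(2n+1)/6 = 42925

# Sum of cubes of the first n whole numbers, here n = 4: 1 + 8 + 27 + 64
n = 4
total = 0
k = 1
while True:                              # do...while: body runs at least once
    total += k ** 3
    k += 1
    if k > n:
        break
assert total == (n * (n + 1) // 2) ** 2  # closed form (n(n+1)/2)^2 = 100
```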
Computer Science | {"url":"http://www.chegg.com/homework-help/questions-and-answers/1given-int-variables-k-total-havealready-declared-use-loop-compute-sum-ofthe-squares-first-q63466","timestamp":"2014-04-24T20:33:23Z","content_type":null,"content_length":"24834","record_id":"<urn:uuid:30666649-ae56-4a30-86cc-75c613d6c999>","cc-path":"CC-MAIN-2014-15/segments/1398223206672.15/warc/CC-MAIN-20140423032006-00453-ip-10-147-4-33.ec2.internal.warc.gz"} |
Issaquah ACT Tutor
Find a Issaquah ACT Tutor
...I have completed over four years of University level coursework in Psychology ranging from Child Development to Neural basis of behaviors to Cognitive Psychology to Brain anatomy laboratories.
I have tutored University level Psychology courses for over two years and I truly enjoy teaching the su...
27 Subjects: including ACT Math, chemistry, reading, writing
...Finally, since pacing is the key on this section, we work on developing speed while maintaining accuracy. I have tutored K-6th grade students in English (reading, vocabulary, and writing),
math, and science. I also specialize in test prep and have ample experience with the ISEE and SSAT.
32 Subjects: including ACT Math, English, reading, geometry
...I tutored my kids in math and science through math competitions AMC, MathFest, and Intel Science Fairs (NWSE). Because of her solid foundation in math and science, my daughter was accepted at
MIT, Cambridge, MA. I trained for a year as a yoga instructor from the oldest institute for yoga trainin...
16 Subjects: including ACT Math, geometry, algebra 1, algebra 2
...I've constructed, researched, and presented biological fuel cells. I've presented a fuel-cell powered car at a national AIChE conference. I've researched and presented UREX designs.
62 Subjects: including ACT Math, chemistry, English, statistics
...To give you an example of my creative methods of teaching - I once taught math in an inner city New York 2nd grade class room. I took a class of 15 students that didn't know how to multiply. I
noticed they loved to compete and run around.
17 Subjects: including ACT Math, calculus, geometry, statistics | {"url":"http://www.purplemath.com/Issaquah_ACT_tutors.php","timestamp":"2014-04-20T04:30:29Z","content_type":null,"content_length":"23353","record_id":"<urn:uuid:1e3df2c1-30e1-42e1-ab62-bceb88763626>","cc-path":"CC-MAIN-2014-15/segments/1398223205137.4/warc/CC-MAIN-20140423032005-00288-ip-10-147-4-33.ec2.internal.warc.gz"} |
ESL Cafe's Idea Cookbook - Who's that girl?
Who's that girl?
Note: The chalkboards that I use are magnetic, which helps facilitate this game.
First I place a picture on the chalkboard. Then, without revealing what the picture is, I cover it with magnetic cards numbered from 1 to 16. When I am finished, the picture is completely covered
by the sixteen cards arranged in a 4 by 4 grid.
The game is simple. I ask a question pertaining to whatever the students are studying. A student who knows the answer raises his hand. If the answer is correct then he scores a point and chooses
a number from 1 to 16. I then remove that card from the board so that that part of the picture is revealed. The student may now guess what or who the picture is of. If the student is correct
then he scores an extra 5 points. If the student is incorrect then I ask the next question.
We can do this for a number of questions, and for any point that the students are studying. It just spices up various English subjects that are not so interesting on their own.
Dave's ESL Cafe Copyright © 2008 Dave Sperling. All Rights Reserved. | {"url":"http://www.eslcafe.com/idea/index.cgi?display:944552107-2894.txt","timestamp":"2014-04-21T10:04:08Z","content_type":null,"content_length":"4790","record_id":"<urn:uuid:ea70a625-4312-4429-91cc-92c003d12ad3>","cc-path":"CC-MAIN-2014-15/segments/1397609539705.42/warc/CC-MAIN-20140416005219-00342-ip-10-147-4-33.ec2.internal.warc.gz"} |
Hey, please help:)
• one year ago
• one year ago
| {"url":"http://openstudy.com/updates/51044807e4b03186c3f96976","timestamp":"2014-04-16T13:14:49Z","content_type":null,"content_length":"154569","record_id":"<urn:uuid:f435d527-9b13-4efc-8c97-f4c3ff0d71db>","cc-path":"CC-MAIN-2014-15/segments/1397609523429.20/warc/CC-MAIN-20140416005203-00596-ip-10-147-4-33.ec2.internal.warc.gz"}
Size of the intersection set
July 1st 2012, 10:42 PM #1
Mar 2006
Size of the intersection set
Hello guys
I am thinking about a problem involving intersection of sets.
If I have, for example, two sets A and B, and $|A|$ is the number of elements in A and $|B|$ the number of elements in B, then $|A \cap B|$ is the number of elements in the intersection $A \cap B$.
Now, if I add a third set C, with $|C|$ elements, the number of elements in $A \cap B \cap C$ will naturally be smaller than (or equal to) that of $A \cap B$. And if I add more sets, the number of elements
in the intersection will keep decreasing.
Are there any results / theorems about the rate of decrease? I mean, if I take sets with elements that are chosen at random, and I keep adding them, how strongly will the number of elements
in the intersection converge to zero?
Any insights / references will be most appreciated...
Last edited by WeeG; July 1st 2012 at 10:52 PM.
Re: Size of the intersection set
Well it depends on what elements $C$ and $A \cap B$ have in common. You can't show that the intersection converges to zero, because each additional set might as well be equal to the
intersection of A and B.
If A and B are fixed and are subsets of a universal set U, and if C, D, ... are subsets of fixed size, that's an entirely different question. Try small cases first.
Re: Size of the intersection set
What I am trying to prove is slightly different.
Let's say I have a set U which is a subset of the integers set Z. U contains around 10,000 elements.
Now I take a subset of U, let's call it A, and it has (just for illustration) 3000 elements. In the next step I will take another subset B, and it will have 2700 elements. I choose the elements
of the subsets by random (!). Now I look at the intersection of A and B, it will contain (for example) 300 elements. In the next step I will take another random subset, C, and will look at the
intersection of A, B and C. My intuition say that if I'll keep doing that, eventually the generalized intersection will converge to the empty set.
In the real world problem from which I took this challenge, I have seen the size of the generalized intersection decreases fast to 25 elements for 3 sets only. I do not have the 4th just yet, but
I am trying to find a mathematical explanation to my intuition.
Re: Size of the intersection set
Ohh I see...one way to think of it, if any element n appears in the intersection of sets A,B,C, it must be contained in sets A,B,C. What is the probability of that happening?
I haven't thought about this for much time, but it looks like you'll probably get some sort of probability distribution based on the number and size of your subsets, and the size of the universal
set U. Yes, intuitively, the intersection should "converge" to the empty set.
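If each subset is drawn independently and uniformly, the intuition above can be quantified: a fixed element of U lies in every A_i with probability prod(|A_i|/|U|), so the expected intersection size is |U| times that product, and it shrinks geometrically with each added set. A Python sketch (the 10000/3000/2700 figures mirror the post; the third subset size is a made-up illustration):

```python
import random

random.seed(1)
U = range(10_000)
sizes = [3_000, 2_700, 2_500]

# expected size of the intersection: |U| * prod(|A_i| / |U|)
expected = 10_000.0
for s in sizes:
    expected = expected * s / 10_000   # 10000 -> 3000 -> 810 -> 202.5

subsets = [set(random.sample(U, s)) for s in sizes]
inter = set(U).intersection(*subsets)
print(len(inter), expected)  # simulated count fluctuates around 202.5
```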
July 1st 2012, 11:05 PM #2
Super Member
Jun 2012
July 2nd 2012, 10:18 PM #3
Mar 2006
July 2nd 2012, 10:58 PM #4
Super Member
Jun 2012 | {"url":"http://mathhelpforum.com/discrete-math/200557-size-intersection-set.html","timestamp":"2014-04-18T15:11:04Z","content_type":null,"content_length":"38304","record_id":"<urn:uuid:e6cbd04c-bd8d-4c44-a597-064e9d56b548>","cc-path":"CC-MAIN-2014-15/segments/1397609533689.29/warc/CC-MAIN-20140416005213-00479-ip-10-147-4-33.ec2.internal.warc.gz"} |
Tinley Park Geometry Tutor
Find a Tinley Park Geometry Tutor
...I have also tutored Geometry and Calculus students. I have a degree in Mathematics from Augustana College. I am currently pursuing my Teaching Certification from North Central College.
7 Subjects: including geometry, algebra 1, algebra 2, trigonometry
...Outside of engineering, I also pursue a side career in music composition. I was raised on classical music and more contemporary music from the 50's, 60's, and I do keep up with current hits
too. My forte is using classical music theory in compositions that reach further than just the classical genre.
16 Subjects: including geometry, chemistry, English, algebra 1
...I wrote my own FFT programs. My master's project was on Remote monitoring of engine health using non intrusive methods. Dear Students, I currently work as Sr.
16 Subjects: including geometry, chemistry, physics, calculus
Struggling in your math class? Want to raise your standardized test score? Email me today - I want to help!
18 Subjects: including geometry, chemistry, GRE, algebra 1
...By my senior year, I was named captain of the Women's varsity team and the number 1 singles player. As the oldest member of the team, other girls looked to me as their leader and my coaches
expected me to lead practices and team warm-ups. Although, I no longer play competitively, I am always looking for opportunities to practice, keep up my skills, and play a friendly match.
13 Subjects: including geometry, chemistry, calculus, biology | {"url":"http://www.purplemath.com/Tinley_Park_Geometry_tutors.php","timestamp":"2014-04-16T10:58:01Z","content_type":null,"content_length":"23673","record_id":"<urn:uuid:f6285995-7695-4c8e-beb6-f020dc921065>","cc-path":"CC-MAIN-2014-15/segments/1397609523265.25/warc/CC-MAIN-20140416005203-00655-ip-10-147-4-33.ec2.internal.warc.gz"} |
3D Partial Differential Equation
March 14th 2009, 02:25 PM
3D Partial Differential Equation
A rectangular plate of sides a and b has its edges fixed in the xy plane and set into transverse vibration. If the initial displacement is f(x,y) and the initial velocity is g(x,y), find the
displacement u(x,y,t).
Does anybody know how to solve this partial differential equation?
March 14th 2009, 02:54 PM
A rectangular plate of sides a and b has its edges fixed in the xy plane and set into transverse vibration. If the initial displacement is f(x,y) and the initial velocity is g(x,y), find the
displacement u(x,y,t).
Does anybody know how to solve this partial differential equation?
It helps if you give us the PDE. Is it
$u_{tt} = c^2 \left( u_{xx} + u_{yy}\right)$ or $u_{tt} + c^2 \left( u_{xxxx} + 2 u_{xxyy} + u_{yyyy}\right)=0\;?$
March 14th 2009, 03:16 PM
I believe it's the first one you had, although I'm not sure. The textbook I'm reading just has the physical setting.
Let's assume the first.
March 14th 2009, 04:01 PM
Assume the usual separation of variables $u = T(t)X(x)Y(y)$. Substitute and separate into 3 ODE, i.e.
$\frac{1}{c^2} \frac{T''}{T} = \frac{X''}{X} + \frac{Y''}{Y}$
$\frac{X''}{X} = \lambda_1$, $\frac{Y''}{Y} = \lambda_2$, and $\frac{1}{c^2} \frac{T''}{T} =\lambda_1 + \lambda_2$
Now, bring in the BC's for the first two ODE's giving (details omitted)
$X_n = c_1 \sin \frac{ n\pi\, x }{a},\;\;\;Y_m = c_2 \sin \frac{ m\pi\, y}{b}$ for positive integers $n$ and $m$,
which leads to
$\frac{1}{c^2} \frac{T''}{T} = - \left( \frac{n\pi}{a} \right)^2 - \left( \frac{m\pi}{b} \right)^2$
which you can solve (again leading to sin's and cos's). Then match your initial condition. | {"url":"http://mathhelpforum.com/differential-equations/78689-3d-partial-differential-equation-print.html","timestamp":"2014-04-21T00:14:25Z","content_type":null,"content_length":"7696","record_id":"<urn:uuid:00876df7-1780-484c-951c-c54250bf3a2f>","cc-path":"CC-MAIN-2014-15/segments/1397609539337.22/warc/CC-MAIN-20140416005219-00127-ip-10-147-4-33.ec2.internal.warc.gz"} |
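Summing the separated modes gives the standard double-series solution (sketched here for completeness; $A_{nm}$ and $B_{nm}$ are the double Fourier sine coefficients determined by the initial displacement $f$ and initial velocity $g$):

```latex
u(x,y,t) = \sum_{n=1}^{\infty}\sum_{m=1}^{\infty}
  \sin\frac{n\pi x}{a}\,\sin\frac{m\pi y}{b}
  \left( A_{nm}\cos\omega_{nm}t + B_{nm}\sin\omega_{nm}t \right),
\qquad
\omega_{nm} = c\pi\sqrt{\frac{n^{2}}{a^{2}} + \frac{m^{2}}{b^{2}}},

A_{nm} = \frac{4}{ab}\int_{0}^{a}\!\!\int_{0}^{b}
  f(x,y)\sin\frac{n\pi x}{a}\sin\frac{m\pi y}{b}\,dy\,dx,
\qquad
B_{nm} = \frac{4}{ab\,\omega_{nm}}\int_{0}^{a}\!\!\int_{0}^{b}
  g(x,y)\sin\frac{n\pi x}{a}\sin\frac{m\pi y}{b}\,dy\,dx.
```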
Homework Help
Posted by Emmanuel on Thursday, May 9, 2013 at 6:52pm.
Jada, Nicholas, Nita, and Calvin shared $60. Jada received 1/2 of the total amount of money Nicholas, Nita, and Calvin received. Nicholas received 2/3 of the total amount of money Nita and Calvin
received. Nita received 3 times as much as calvin.
(a) how much money did jada receive?
(b) how much more money did nicholas receive than calvin?
• Math - bobpursley, Thursday, May 9, 2013 at 7:25pm
J=1/2 (Nk+Ni+C)
Nk=2/3 (Ni+C)
Since all four shares sum to 60, substitute J = 1/2 (Nk+Ni+C) into J + Nk + Ni + C = 60:
3/2 Nk + 3/2 Ni + 3/2 C = 60
or Nk + Ni + C = 40.
Then J = 20 *****
Now take
Nk=2/3 (Ni+C)
to get Nk = 2/3 (3C + C) = 2/3 (4C) = 8/3 C.
But Nk + Ni + C = 40, so
40 = 8/3 C + 3C + C = 20/3 C, giving C = 6 (then Ni = 18, Nk = 16, and Nicholas - Calvin = 10).
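The arithmetic above is easy to verify; this Python sketch (not part of the original answer) computes each share from the stated relations and checks them against the $60 total:

```python
# Nita = 3 * Calvin, Nicholas = 2/3 * (Nita + Calvin), Jada = 1/2 * (the rest)
# From Nk + Ni + C = 40 with Nk = 8/3 C and Ni = 3C: (20/3) C = 40, so C = 6.
calvin = 6
nita = 3 * calvin                        # 18
nicholas = 2 * (nita + calvin) // 3      # 16
jada = (nicholas + nita + calvin) // 2   # 20

assert jada + nicholas + nita + calvin == 60
print(jada, nicholas - calvin)  # (a) Jada received $20; (b) difference is $10
```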
| {"url":"http://www.jiskha.com/display.cgi?id=1368139955","timestamp":"2014-04-18T04:33:12Z","content_type":null,"content_length":"8518","record_id":"<urn:uuid:cfc5f73c-73d0-45e2-bcbb-7221a57a8e4a>","cc-path":"CC-MAIN-2014-15/segments/1397609532480.36/warc/CC-MAIN-20140416005212-00006-ip-10-147-4-33.ec2.internal.warc.gz"}
equation tattoos
Brittany writes:
“Someday i hope to be a wacky, flannel-sportin’ physicist. my tattoo is schroedinger’s equation for the wave function of a particle. i chose this equation because its elegance & symmetry reflect
that of our multiverse, & also because it describes the fundamental source of “quantum weirdness.” time travel, quantum computers…no matter what happens in my life, there is an infinitely
Glorious Plan swirling all about us….I would be honored to be included among the ranks of badass scientists all over the world. oh, & if you have any pull with any preeminent physicists, tell
brian greene to return my fan mail! :]”
Click here to go to the full Science Tattoo Emporium.
“This tattoo is the Zermelo-Fraenkel with Choice axioms of set theory. These nine axioms are the basis for ZFC set theory, which is the most commonly studied form of set theory and the most well
known set of axioms as well. From these nine axioms, one can derive all of mathematics. These provide the foundation of mathematics, a field that you can likely tell that I love dearly.”
Carl: Mark is making an encore appearance at the Emporium. See his Y combinator here…
“This, on my leg, is the incompressible form of the conservation of mass equation in a fluid, also known as the continuity equation. When people ask what it means, I say it defines flow.
Sometimes I say it means you should have studied more physics, but that is only when I am feeling like being funny. What it means in more detail is that, for an incompressible fluid, the partial
derivative of the velocity of the fluid in the three spatial dimensions must sum to zero. It therefore concisely states the fundamental nature of a fluid.”My advisor took this picture, and I
swear he is obsessed (in a good way) with this tattoo. He is giving a talk at Woods Hole next week as he is the recipient of an award, and he is planning to show off ‘how quantitative scripps
students are’ which i think is hilarious and only slightly mortifying. Speaking of mortifying, it is slightly mortifying to be sending this email at all–I have to admit I am a little embarrassed.
It is definitely the most vain thing i have done today. I do have an ulterior motive which I have no problem admitting: I want to stake a claim on this particular piece. I guess it might be a
little lame to want to claim ownership over something so silly but there it is and I guess at least I can admit it.”
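For reference, the incompressible continuity equation the quote describes, the statement that the velocity field $\mathbf{u} = (u, v, w)$ is divergence-free, is usually written

```latex
\nabla \cdot \mathbf{u}
  = \frac{\partial u}{\partial x}
  + \frac{\partial v}{\partial y}
  + \frac{\partial w}{\partial z}
  = 0 .
```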
Greg writes:
“I’m currently a Ph.D. student studying maths in Australia (submitting next week). The the tattoo on the top, I got about three years ago in Berkeley, CA. The other tattoo I got about a year
later in Sydney, Australia. Both these tattoos are closely related to the research I’ve done for my Ph.D., which is in the area of elliptic partial differential equations. The top equation is
called the Monge-Ampere equation and is the archetype of the equations I currently study. The bottom equation is called the ‘Infinity Laplacian’ and was chosen because it is correlated to
variational theories which I find to be beautiful. Loosely speaking these equations are correlated to how surfaces (in arbitrary dimension) bend and curve. I figured since I did half my Ph.D. in
the US and half in Australia, I would get at least one tattoo in each of those countries. The tattoos are meant to represent a memory of the time I spent in my studies.”
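For readers unfamiliar with the two equations named in the quote, their standard forms are sketched below (the tattoos themselves may use different notation):

```latex
\det\!\left(D^{2}u\right) = f(x, u, Du)
\quad\text{(Monge--Amp\`ere equation)},
\qquad
\Delta_{\infty} u = \sum_{i,j} u_{x_i}\, u_{x_j}\, u_{x_i x_j} = 0
\quad\text{(infinity Laplacian)}.
```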
time dilation formula is over my heart and represents my personal belief in life: the faster you go, the more you get to see and the more you get to live. maximum intensity and maximum velocity at
all times for maximum lifetime experience per life.”
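The time dilation formula mentioned, presumably in its standard special-relativistic form, relates the proper time $\Delta t_{0}$ of a moving clock to the dilated interval $\Delta t$:

```latex
\Delta t = \frac{\Delta t_{0}}{\sqrt{1 - v^{2}/c^{2}}} = \gamma\, \Delta t_{0}.
```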
this was one of the most beautiful sentences in that language — a medley of the five most important numbers. Through an odd turn of events, this is actually my own handwriting from a bar napkin.”
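"A medley of the five most important numbers" is the classic description of Euler's identity, which is very likely the sentence meant here (an inference; the post itself does not name it):

```latex
e^{i\pi} + 1 = 0 ,
```

linking $e$, $i$, $\pi$, $1$, and $0$ in a single equation.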
| {"url":"http://blogs.discovermagazine.com/loom/tag/equation-tattoos/","timestamp":"2014-04-17T21:06:16Z","content_type":null,"content_length":"116089","record_id":"<urn:uuid:5e180361-42f0-45c2-94b9-195f70981995>","cc-path":"CC-MAIN-2014-15/segments/1397609530895.48/warc/CC-MAIN-20140416005210-00208-ip-10-147-4-33.ec2.internal.warc.gz"}
Mplus Discussion >> Mean of the Latent Intercept Growth Factor
Scott Weaver posted on Friday, August 18, 2006 - 2:28 pm
I am exploring a semi-continuous latent growth model across 4 time points for two groups. In the semi-continuous growth model example in the Mplus User's Guide, the latent intercept factor mean for
the binary portion is fixed at 0 and thresholds are estimated and equated. Because I want to freely estimate the latent intercept factor mean so that I can test for equality of this mean across
groups, I reparameterized the model by fixing all thresholds at 0 and estimating the latent intercept mean. However, I am unsure as how to interpret the mean for the latent intercept factor.
Because the model is centered at the 1st time point, I thought that exp(M)/(1+exp(M)) should equal the estimated probability for endorsing the 2nd category at the 1st time point, where M is the latent
intercept mean. However, my calculation does not equal the estimated probability provided in the output.
Thank you!
Bengt O. Muthen posted on Friday, August 18, 2006 - 5:16 pm
In the binary part of the two-part model, random effects influence the outcome probability. The probability that you have computed is conditional on a person being at the mean of the random
intercept. That is not the same as the (marginal) probability. To get the marginal probability you have to numerically integrate over the random intercept and that is what Mplus does to get the
estimated probability.
Scott Weaver posted on Friday, August 18, 2006 - 5:48 pm
Thank you for the information!
So does this mean that the interpretation of the random intercept for a growth process with binary indicators is analogous to interpretation of a random intercept for a growth process with continuous
indicators? If not, what meaning or interpretation does the random intercept mean with binary indicators have? Or is there no substantively useful meaning or interpretation?
Bengt O. Muthen posted on Friday, August 18, 2006 - 6:17 pm
I think it has a meaningful substantive interpretation given that it directly influences the probability. It is similar to the interpretation of an intercept for a continuous outcome. The intercept
mean isn't the logit that gives the mean probability, but for people at the intercept mean it is the logit behind the probability for those people. This same phenomenon has been written about in the
binary growth literature, for instance in the context of population-averaged and subject-specific differences, e.g. in the GEE literature by Zeger and others. See also text books on it, including the
standard multilevel books.
Back to top | {"url":"http://www.statmodel.com/cgi-bin/discus/discus.cgi?pg=next&topic=23&page=1571","timestamp":"2014-04-20T00:46:26Z","content_type":null,"content_length":"21192","record_id":"<urn:uuid:514574ff-a052-4960-aaf5-5fc00d4e5f9d>","cc-path":"CC-MAIN-2014-15/segments/1397609537804.4/warc/CC-MAIN-20140416005217-00650-ip-10-147-4-33.ec2.internal.warc.gz"} |
Now with offline access functionality, CourseSmart offers instructors and students the freedom and convenience of online, offline, and mobile access using a single platform. CourseSmart eTextbooks do
not include media or supplements that are packaged with the bound textbook.
Elayn Martin-Gay's developmental math textbooks and video resources are motivated by her firm belief that every student can succeed. Martin-Gay's focus on the student shapes her clear, accessible
writing, inspires her constant pedagogical innovations, and contributes to the popularity and effectiveness of her video resources (available separately). This revision of Martin-Gay's algebra series
continues her focus on students and what they need to be successful.
Table of Contents
1. Review of Real Numbers
1.1 Tips for Success in Mathematics
1.2 Symbols and Sets of Numbers
1.3 Fractions and Mixed Numbers
1.4 Exponents, Order of Operations, Variable Expressions and Equations
1.5 Adding Real Numbers
1.6 Subtracting Real Numbers
1.7 Multiplying and Dividing Real Numbers
1.8 Properties of Real Numbers
2. Equations, Inequalities, and Problem Solving
2.1 Simplifying Algebraic Expressions
2.2 The Addition Property of Equality
2.3 The Multiplication Property of Equality
2.4 Solving Linear Equations
2.5 An Introduction to Problem Solving
2.6 Formulas and Problem Solving
2.7 Percent and Mixture Problem Solving
2.8 Further Problem Solving
2.9 Solving Linear Inequalities
3. Graphing
3.1 Reading Graphs and the Rectangular Coordinate System
3.2 Graphing Linear Equations
3.3 Intercepts
3.4 Slope and Rate of Change
3.5 Equations of Lines
3.6 Functions
4. Solving Systems of Linear Equations and Inequalities
4.1 Solving Systems of Linear Equations by Graphing
4.2 Solving Systems of Linear Equations by Substitution
4.3 Solving Systems of Linear Equations by Addition
4.4 Systems of Linear Equations and Problem Solving
4.5 Graphing Linear Inequalities
4.6 Systems of Linear Inequalities
5. Exponents and Polynomials
5.1 Exponents
5.2 Adding and Subtracting Polynomials
5.3 Multiplying Polynomials
5.4 Special Products
5.5 Negative Exponents and Scientific Notation
5.6 Dividing Polynomials
6. Factoring Polynomials
6.1 The Greatest Common Factor and Factoring by Grouping
6.2 Factoring Trinomials of the Form x^2 + bx + c
6.3 Factoring Trinomials of the Form ax^2 + bx + c and Perfect Square Trinomial
6.4 Factoring Trinomials of the Form ax^2 + bx + c by Grouping
6.5 Factoring Binomials
6.6 Solving Quadratic Equations by Factoring
6.7 Quadratic Equations and Problems Solving
7. Rational Expressions
7.1 Simplifying Rational Expressions
7.2 Multiplying and Dividing Rational Expressions
7.3 Adding and Subtracting Rational Expressions with Common Denominators and Least Common Denominators
7.4 Adding and Subtracting Rational Expressions with Unlike Denominators
7.5 Solving Equations Containing Rational Expressions
7.6 Proportion and Problem Solving with Rational Equations
7.7 Variation and Problem Solving
7.8 Simplifying Complex Fractions
8. Roots and Radicals
8.1 Introduction to Radicals
8.2 Simplifying Radicals
8.3 Adding and Subtracting Radicals
8.4 Multiplying and Dividing Radicals
8.5 Solving Equations Containing Radicals
8.6 Radical Equations and Problem Solving
8.7 Rational Exponents
9. Quadratic Equations
9.1 Solving Quadratic Equations by the Square Root Property
9.2 Solving Quadratic Equations by Completing the Square
9.3 Solving Quadratic Equations by the Quadratic Formula
9.4 Complex Solutions of Quadratic Equations
9.5 Graphing Quadratic Equations
Appendix A. Geometry
Appendix B. Additional Exercises on Proportion and Proportion Applications
Appendix C. Operations on Decimals
Appendix D. Mean, Median, and Mode
Appendix E. Tables
Purchase Info
With CourseSmart eTextbooks and eResources, you save up to 60% off the price of new print textbooks, and can switch between studying online or offline to suit your needs.
Once you have purchased your eTextbooks and added them to your CourseSmart bookshelf, you can access them anytime, anywhere.
Buy Access
Beginning Algebra, CourseSmart eTextbook, 6th Edition
Format: Safari Book
$81.99 | ISBN-13: 978-0-321-78520-6 | {"url":"http://www.mypearsonstore.com/bookstore/beginning-algebra-coursesmart-etextbook-0321785207","timestamp":"2014-04-19T12:38:07Z","content_type":null,"content_length":"17610","record_id":"<urn:uuid:51907e11-8fed-42ac-9ea3-5cd08b1daa8e>","cc-path":"CC-MAIN-2014-15/segments/1397609537186.46/warc/CC-MAIN-20140416005217-00415-ip-10-147-4-33.ec2.internal.warc.gz"} |
Contributions to Plasma Physics: recent article titles:
Effect of Anisotropic Ion Pressure on Solitary Waves in Magnetized Dusty Plasmas
Diagnostic of Plasma Produced by a Spark Plug at Atmospheric Pressure: Reduced Electric Field and Vibrational Temperature
Self-Force in 1D Electrostatic Particle-in-Cell Codes for NonEquidistant Grids
Potential Formation in Front of an Electrode Close to the Plasma Potential Studied by PIC Simulation
Angular Velocity Distribution of the Electric Microfield in Plasma
Wave Generation in a Warm Magnetized Multi-Component Plasma
Comparative Plasma Chemical Reaction Studies of CH4/Ar and C2Hm/Ar (m = 2,4,6) Gas Mixtures in a Dielectric Barrier Discharge
Ion Charge Separation in a Multi-Species Plasma Flowing through a Magnetic Transport System
Particle in Cell/Monte Carlo Collision Method for Simulation of RF Glow Discharges: Effect of Super Particle Weighting
Cover Picture: Contrib. Plasma Phys. 4/2014
Issue Information Picture: Contrib. Plasma Phys. 4/2014
Contents: Contrib. Plasma Phys. 4/2014
Preface: Contrib. Plasma Phys. 4/2014
Plasma Parameters in the COMPASS Divertor During Ohmic Plasmas
Characterization of Scrape-Off Layer Turbulence Changes Induced by a Non-Axisymmetric Magnetic Perturbation in an ASDEX Upgrade Low Density L-Mode
Langmuir Probe Evaluation of the Plasma Potential in Tokamak Edge Plasma for Non-Maxwellian EEDF
Electric Probe Measurements of the Poloidal Velocity in the Scrape-Off Layer of ASDEX Upgrade
Direct Plasma Potential Measurements by Ball-Pen Probe and Self-Emitting Langmuir Probe on COMPASS and ASDEX Upgrade
A New Deduction Method of Heat Flux Evolution From Thermal Probe Data
On Negative Slope of Probe Characteristics in Magnetized Plasmas
Propagator Computational Method for Drift-Diffusion Equations to Describe Plasma-Wall Interaction
Influence of Plasma-Neutral Collisions on Probe Measurements in Atmospheric Pressure Plasmas
The Plasma Stopping Power Velocity Moment Diagnostics
Propulsive Force in an Electric Solar Sail
The propagation of linear and nonlinear dust ion acoustic waves (DIAWs) is studied in a collisionless magnetized plasma which consists of warm ions having anisotropic thermal pressure, nonthermal (energetic) electrons, and static dust particles of positive and negative charge polarity. The anisotropic ion pressure is defined using the double adiabatic Chew-Goldberger-Low (CGL) theory. In the linear regime, the propagation properties of the two possible modes are investigated as functions of ion pressure anisotropy, dust particle polarity, and nonthermality of electrons. Using the reductive perturbation method, the Zakharov-Kuznetsov (ZK) equation is derived for the propagation of two-dimensional electrostatic dust ion acoustic solitary waves in dusty plasmas. It is found that both compressive and rarefactive solitons are formed in the presence of nonthermal electrons following the Cairns distribution [R.A. Cairns, A.A. Mamun, R. Bingham, R.O. Dendy, R. Bostrom, C.M.C. Nairn, and P.K. Shukla, Geophys. Res. Lett. 22, 2709 (1995)]. The ion pressure anisotropy, nonthermality of electrons, and charge polarity of the dust particles have significant effects on the amplitude and width of the dust ion acoustic solitary waves in such anisotropic nonthermal magnetized dusty plasmas. Numerical results are also presented for illustration. Our findings are applicable to space dusty plasma regimes having anisotropic ion pressure and nonthermal electrons. (© 2014 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim)
TY - JOUR
T1 - Defining Daylighting From Windows in Terms of Candlepower Distribution Curves
JF - IEEE/IAS 1984 Annual Meeting
Y1 - 1984
A1 - Mark Spitzglas
AB - This work describes a method for evaluating quantitatively the daylight admittance of windows under any outdoor conditions in terms that make it possible to calculate interior light distribution. The work is based on a new concept in quantitative daylight analysis, the Transmission Function Approach, developed by the author while preparing graduate theses (1976 and 1982) [2], [3], and [4].
The visible daylight flux introduced through a window (or other daylight-admitting aperture) can be considered, from the point of view of the internal space, as being emitted from a point source or from a finite-area uniform source. The photometric properties of those light sources are defined in terms of the well-known candlepower distribution curves. The ways in which this approach can be applied for different window designs are demonstrated.
This approach to the photometric properties of window systems allows one to translate typical daylighting calculation problems into a format in which they can be resolved using traditional electric lighting calculations or computer codes. Even daylighting-oriented computer codes are limited as to the geometric complexity of the windows they can model--this method eliminates such limitations. It will also contribute to a better understanding and visualization of the photometric properties of various windows and other daylight-admitting elements. This approach, therefore, may also serve as an educational tool.
CY - Chicago, IL
U1 - Windows and Daylighting Group
U2 - LBL-18087
ER -
Basic integration formulas
The fundamental use of integration is as a continuous version of summing. But, paradoxically, often integrals are computed by viewing integration as essentially an inverse operation to
differentiation. (That fact is the so-called Fundamental Theorem of Calculus.)
The notation, which we're stuck with for historical reasons, is as peculiar as the notation for derivatives: the integral of a function $f(x)$ with respect to $x$ is written as $$\int f(x)\;dx$$
The remark that integration is (almost) an inverse to the operation of differentiation means that if $${d\over dx}f(x)=g(x)$$ then $$\int g(x)\;dx=f(x)+C$$ The extra $C$, called the constant of
integration, is really necessary, since after all differentiation kills off constants, which is why integration and differentiation are not exactly inverse operations of each other.
Since integration is almost the inverse operation of differentiation, recollection of formulas and processes for differentiation already tells the most important formulas for integration: \begin
{align*}\int x^n\; dx &= {1\over n+1}x^{n+1}+C & \hbox{ unless $n=-1$ }\\ \int e^x \;dx&= e^x+C \\ \int {1\over x} \;dx&= \ln x+C \\ \int \sin x\;dx&=-\cos x+C \\ \int \cos x\;dx&= \sin x + C\\ \int
\sec^2 x\;dx&=\tan x+C \\ \int {1\over 1+x^2} \; dx&=\arctan x+C \end{align*}
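Each entry in this table can be spot-checked by differentiating the candidate antiderivative and comparing with the integrand. Here is a quick sketch using the sympy library (an illustrative check, not part of the original notes):

```python
# Spot-check the antiderivative table: the derivative of each claimed
# antiderivative should simplify back to the corresponding integrand.
import sympy as sp

x = sp.symbols('x', positive=True)

# (integrand, claimed antiderivative) pairs from the table above
pairs = [
    (x**3, x**4 / 4),               # power rule with n = 3
    (sp.exp(x), sp.exp(x)),
    (1 / x, sp.log(x)),
    (sp.sin(x), -sp.cos(x)),
    (sp.cos(x), sp.sin(x)),
    (sp.sec(x)**2, sp.tan(x)),
    (1 / (1 + x**2), sp.atan(x)),
]

for integrand, antiderivative in pairs:
    assert sp.simplify(sp.diff(antiderivative, x) - integrand) == 0
```

The constant of integration disappears under differentiation, which is exactly why every antiderivative is only determined up to an added $C$.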
And since the derivative of a sum is the sum of the derivatives, the integral of a sum is the sum of the integrals: $$ \int f(x)+g(x)\;dx=\int f(x)\;dx+\int g(x)\;dx$$ And, likewise, constants ‘go
through’ the integral sign: $$\int c\cdot f(x)\;dx=c\cdot \int f(x)\;dx$$
For example, it is easy to integrate polynomials, even including terms like $\sqrt{x}$ and more general power functions. The only thing to watch out for is terms $x^{-1}={1\over x}$, since these
integrate to $\ln x$ instead of a power of $x$. So $$\int 4x^5-3x+11-17\sqrt{x}+{3\over x}\;dx= {4x^6\over 6}-{3x^2\over 2}+11x-{ 17x^{3/2} \over 3/2 }+3\ln x+C$$ Notice that we need to include just
one ‘constant of integration’.
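The worked polynomial example above can likewise be verified mechanically. The following sympy sketch (an illustrative check, not from the original text) differentiates the claimed antiderivative and compares it with the integrand:

```python
# Verify the worked polynomial example: differentiating the claimed
# antiderivative should recover the original integrand exactly.
import sympy as sp

x = sp.symbols('x', positive=True)

integrand = 4*x**5 - 3*x + 11 - 17*sp.sqrt(x) + 3/x
claimed = (4*x**6/6 - 3*x**2/2 + 11*x
           - 17*x**sp.Rational(3, 2) / sp.Rational(3, 2)
           + 3*sp.log(x))

assert sp.simplify(sp.diff(claimed, x) - integrand) == 0
```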
Other basic formulas obtained by reversing differentiation formulas: \begin{align*} \int a^x \;dx&= {a^x\over \ln a}+C \\ \int {1\over \ln a}\cdot{1\over x}\;dx&=\log_a x+C \\ \int { 1 \over \sqrt{1-x^2 }} \; dx&=\arcsin x+C\\ \int { 1 \over x\sqrt{x^2-1 }} \; dx&=\hbox{ arcsec}\, x+C \end{align*}
Sums of constant multiples of all these functions are easy to integrate: for example, $$\int 5\cdot 2^x-{ 23 \over x\sqrt{x^2-1 }}+5x^2\;dx= {5\cdot 2^x\over \ln 2}-23\,\hbox{arcsec}\,x+{5x^3\over 3}+C$$

Exercises
1. $\int 4x^3-3\cos x+{ 7 \over x }+2\;dx=?$
2. $\int 3x^2+e^{2x}-11+\cos x\,dx=?$
3. $\int \sec^2 x\,dx=?$
4. $\int { 7 \over 1+x^2 }\; dx=?$
5. $\int 16x^7-\sqrt{x}+{ 3 \over \sqrt{x }}\; dx=?$
6. $\int 23 \sin x-{ 2 \over \sqrt{1-x^2 }}\; dx=?$
Observations of Magnetic Fields - J.P. Vallée
2.3. Magnetic Moment and Angular Momentum
The angular momentum A of a spherical object of radius r[surf] is
A = (2/5) · mass · r[surf]^2 · ω,
where ω is the angular velocity in sec^-1, and mass is the mass of the object. Thus for the Sun (ω ≈ 10^-6 sec^-1) one gets A ≈ 10^42 kg m^2 sec^-1; the Sun's magnetic moment is M[surf] ≈ 10^27 Gauss m^3. For Mercury (ω ≈ 10^-6 sec^-1), A ≈ 10^30 kg m^2 sec^-1, and M[surf] ≈ 10^16 Gauss m^3.
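The Sun's figure can be reproduced numerically. The sketch below assumes a uniform-density sphere, A = (2/5) m r² ω, with standard solar values (the real Sun is centrally condensed, so this is only an order-of-magnitude estimate):

```python
# Order-of-magnitude check of the Sun's angular momentum, assuming a
# uniform-density sphere: A = (2/5) * m * r**2 * omega.
import math

m_sun = 1.99e30      # kg
r_sun = 6.96e8       # m
omega_sun = 2.9e-6   # s^-1, rough solar equatorial rotation rate

A_sun = 0.4 * m_sun * r_sun**2 * omega_sun

# Should land near the quoted A ~ 1e42 kg m^2 s^-1.
print(f"A_sun ~ 1e{math.log10(A_sun):.0f} kg m^2 s^-1")
```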
A direct relationship is often but not always observed between the magnetic dipole moment M[surf] of an object and its angular momentum A. This often observed relation is given approximately by
Figure 4 shows this equation (dashed), along with the observational data. Originally proposed only for the planets, this empirical law has been extended to include the Sun (e.g., Blackett 1947;
Russell 1978), and big moons such as Ganymede and Io (Kivelson et al. 1996a, 1996b).
Figure 4. Observed relation between the magnetic moment and the angular momentum of moons, planets, and the Sun. The dashed line follows the equation for a dipolar dynamo; data are from Kivelson et al. (1996b), the rest from this text. Below a certain strength and a certain angular momentum (at bottom left), remanent magnetism is often found.
Such a law has been called a "magnetic Bode's law" (Russell 1978), was thought to be a "long-sought connection between electromagnetic and gravitational phenomena" (Blackett 1947), and has also been called "an effect more along 'meteorological' lines" (e.g., chapter 18 in Parker 1979).
This relationship may now be better called a "dipolar dynamo law", for three reasons. (1) All the moons, planets, and star(s) that so far obey this relation do have a dipolar dynamo. (2) All the moons and planets without a significant magnetic dynamo (Earth's Moon, Venus, Mars) do not follow this law - the observed data for Earth's Moon, Venus, and Mars fall significantly below the M[surf] values predicted by this law (e.g., Kivelson et al. 1996b). (3) The relationship may not work for other (not dipolar) dynamo types - thus our Milky Way galaxy has a planar disk with an axisymmetric spiral (not dipolar) dynamo magnetic field, with A ≈ 10^67 kg m^2 sec^-1 and M[surf] ≈ 10^55 Gauss m^3, yet the equation above would predict only M[surf] ≈ 10^47 Gauss m^3 - about 10^8 times lower than observed.
Physically, no direct justification for this law has been found. Mathematically, since M[surf] = B[surf] · r[surf]^3 for a sphere, and since A ~ (mass) · r[surf]^2 ~ (density) · r[surf]^5, it can be seen that M[surf] and A are strong powers of r[surf], so the apparent correlation of these two quantities should predict M[surf] ~ r[surf]^3 ~ A^0.60. This mathematical argument would predict that the data for all planets follow this law - it does not explain why the observed data for some planets or some moons fall below the predictions of this law.
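The scaling step in this argument - M ~ r³ and A ~ r⁵ at fixed surface field and density, hence M ~ A^0.60 - can be checked symbolically (an illustrative sketch, not from the original text):

```python
# Symbolic check of the scaling argument: with surface field B and density
# held fixed, M ~ r**3 and A ~ r**5, so the log-log slope is 3/5 = 0.60.
import sympy as sp

r = sp.symbols('r', positive=True)

M = r**3          # magnetic moment ~ B * r^3, B held fixed
A = r**5          # angular momentum ~ density * r^5, density held fixed

# Slope on a log-log plot: d(log M) / d(log A)
exponent = sp.expand_log(sp.log(M)) / sp.expand_log(sp.log(A))
assert sp.simplify(exponent) == sp.Rational(3, 5)
```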
Summary: Torsion of differentials on toric varieties
Klaus Altmann *
Institut für reine Mathematik, Humboldt-Universität zu Berlin
Ziegelstr. 13a, D-10099 Berlin, Germany.
Email: altmann@mathematik.hu-berlin.de
We introduce an invariant for semigroups with cancellation property. When the semigroup equals the set of lattice points in a rational, polyhedral cone, then this invariant describes the torsion of the differential sheaf on the associated toric variety.
Finally, as an example, we present the case of two-dimensional cones (corresponding to two-dimensional cyclic quotient singularities).
1 An invariant for semigroups
(1.1) Let S be a commutative semigroup with 0 and cancellation property (i.e.
a + s = b + s implies a = b for a, b, s ∈ S). In particular, S can be embedded into a
group, and the notion −a for a ∈ S makes sense. Assume that (inside this group)
S ∩ (−S) = {0}; then via
a ≥ b :⟺ a − b ∈ S,
S turns also into a partially ordered set.
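As a concrete illustration (a hypothetical example, not taken from the paper), let S be the lattice points of the first quadrant in Z²; the relation above is then the componentwise order, and the partial-order axioms can be verified mechanically on a finite sample:

```python
# Illustrative check: S = Z_{>=0} x Z_{>=0} (lattice points of a cone).
# The relation a >= b  <=>  a - b in S is a partial order on Z^2.
from itertools import product

def in_S(v):
    """Membership in S = Z_{>=0} x Z_{>=0}."""
    return all(c >= 0 for c in v)

def geq(a, b):
    """a >= b  iff  a - b lies in S."""
    return in_S((a[0] - b[0], a[1] - b[1]))

points = list(product(range(3), repeat=2))

for a in points:
    assert geq(a, a)                     # reflexivity
for a, b in product(points, repeat=2):
    if geq(a, b) and geq(b, a):
        assert a == b                    # antisymmetry: uses S ∩ (−S) = {0}
for a, b, c in product(points, repeat=3):
    if geq(a, b) and geq(b, c):
        assert geq(a, c)                 # transitivity: S is closed under +
```

Note the order is only partial: (1, 0) and (0, 2) are incomparable, since neither difference lies in S.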
For each ℓ ∈ S we will define a certain abelian group T_ℓ. Their direct sum T :=
Study Guide for First Semester Geometry Mastery, 6-5-95
(old version, questions 1,2,3,4,5,6,7,9,10,11,13,25,28,31,33,34)
Geometry, Nichols et al., Holt, 1991.
Chapters 1-7 & Pythagorean Theorem (Ch 9).
midpoints (1.3)
acute/obtuse/right/straight angles (1.4)
adjacent angles and angle bisectors (1.5)
supplements and complements of angles (1.6)
vertical angles (2.8)
parallel lines and transversals (3.2)
alternate & same side interior angles (3.3)
sum of angles in triangle (3.6)
exterior and remote interior angles (3.7)
congruence of triangles, SAS, SSS, ASA, AAS (4.2-4,6)
isosceles, equilateral triangles (5.1)
altitudes & medians of triangles (5.5)
triangle angle/side inequalities (5.7-8)
interior angles of polygons (6.2)
parallelogram conditions (6.4-6.5)
special quadrilateral conditions (7.2-3)
the pythagorean theorem (9.2)
Phrase Guide for Pre-Algebra, Nichols et al., Holt, 1992.
Unit 1, Chapters 1 through 3: Introduction to Algebra
Chapter 1, Lessons 1-1 through 1-9: Integers
Lesson 1-1, pgs 2-6: The Integers
the integers
bar graph
above sea level
below zero
absolute value
opposite of
number line
original integer
round number
nearest million
Lesson 1-2, pgs 7-10:
Comparing and Ordering Integers
number line
ascending order
descending order
compare integers
greater than
less than
left of
right of
locate the numbers
list the numbers
mathematical sentence
ordering integers
Lesson 1-3, pgs 11-12:
Problem Solving Exploration
Using an Addition Model
positive counter
negative counter
value of square
opposite signs
neutral pair
add counters
remove pairs
same sign
record the sum
final value
problem solving
addition model
Lesson 1-4, pgs 13-18:
Adding Integers
stock transactions
add integers
model this sum
number line
move right
move left
like signs
absolute values
neutral pair
property of opposites
set of integers
find the profit
greater value
difference of
addition sentence
change in
net change
two at a time
shortcut method
find it mentally
is always
successive plays
final position
total gained
series of plays
find the sum
only if
statement is false
true statement
Lesson 1-5, pgs 21-22: Problem Solving Exploration
Using a Subtraction Model
use counters
model subtraction
subtract from
change the value
greater than
neutral counters
negative number
final value
model shows
absolute value
subtraction model
Lesson 1-6, pgs 23-25:
Subtracting Integers
are related
subtracting integers
is the same as
adding the opposite
rewrite problem
subtraction rule
same value
difference is
difference of
calculator sequence
inequality sign
Lesson 1-7, pgs 26-27:
Problem Solving Strategies
Choosing a Method of Computation
formulate a plan
solve a problem
more efficient
use a calculator
mental math
use counters
use a model
paper & pencil
use a computer
exact answer
arrive at answer
large numbers
represent problem
apply strategies
divisible by
sum is closer
find the product
cheaper way
opposite direction
method of computation
Lesson 1-8, pgs 28-31:
Multiplying Integers
first factor
second factor
complete the pattern
different signs
same signs
like signs
unlike signs
multiplying integers
find the product
correct statement
times a loss
numbered consecutively
evenly spaced
directly opposite
Lesson 1-9, pgs 32-35: Dividing Integers
yards per minute
change in altitude
amount of change
fraction form
inverse operations
related rules
dividing integers
divide by zero
not defined
like signs
unlike signs
zero by zero
division statements
total decrease
inequality sign
equal sign
total decrease
average drop
Chapter 2, Lessons 2-1 through 2-8:
Variables and Expressions
Lesson 2-1, pgs 39-43:
Evaluating Expressions
algebraic expression
represent the total
numerical expression
the result is
word expression
added to
sum of
increased by
more than
subtracted from
decreased by
less than
product of
multiplied by
divided by
quotient of
same number
simplify expression
remove parentheses
substitute for
a given value
word descriptions
algebraic descriptions
cost in cents
for what value
evaluate expression
Lesson 2-2, pgs 44-45: Problem Solving Strategies
Using a Step-by-Step Process
look back
better buy
correct operations
answer the question
label a diagram
evaluate expression
how many different
exactly the same
distance around
step-by-step process
Lesson 2-3, pgs 46-50:
Order of Operations
compute the cost
one expression
two values
rules are needed
simplify expression
same result
order of operations
enclosed in parenthesis
perform operation
algebraic expression
indicates the product
scientific calculator
Lesson 2-4, pgs 51-52 Problem Solving Exploration
Making Generalizations
specific instance
only one instance
does not equal
every instance
Lesson 2-5, pgs 54-58:
Basic Properties and Mental Computations
commutative property
change the sum
associative property
identity property
of addition
of multiplication
simplify mentally
addition of opposites
numerical expression
commutative operation
share equally
Lesson 2-6, pgs 59-63: Using the Distributive Property
distributive property
multiplication over addition
numerical coefficient
like terms
add like terms
recognize instances
equivalent expressions
Lesson 2-7, pgs 64-68: Formulas: Perimeter, Area, Average
distance around
square units
length times width
square meters
Lesson 2-8, pgs 69-74:
Area Formulas:
Parallelograms, Triangles, and Trapezoids
geometric figure
opposite sides
same length
graph paper
base times height
size and shape
figure ABCD
double the height
Chapter 3, Lessons 3-1 to 3-9:
Equations and Problem Solving
Lesson 3-1, pgs 80-84:
Equations and Inequalities
mathematical sentence
number replacement
equal sign
solution set
each member of
Lesson 3-2, pgs 85-88:
Solving Equations
write an equation
what number
solve an equation
reasonable guess
mental math
added to
times a number
divided by
equivalent expressions
Lesson 3-3, pgs 89-90: Problem Solving Strategies
Choosing Strategies
total number of
draw a diagram
simpler problem
look for pattern
Lesson 3-4, pgs 91-94:
Solving Addition and Subtraction Equations
solve equations
substitute values
replacement set
mental math
general method
addition property
equivalent equations
should be added
Lesson 3-5, pgs 95-99:
Solving Multiplication and Division Equations
what is the question
choose a variable
in words
in algebra
multiplication property
of equality
division property
solve and check
multiply each side
original equation
not true for zero
multiply each side
divide each side
Lesson 3-6, pgs 101-104: Inverse Operations
inverse operations
undo each other
both sides
apply the inverse
perform operations
insert parentheses
Lesson 3-7, pgs 105-109:
Solving Multi-Step Equations
distributive property
remove parentheses
original equation
combine terms
taking the opposite
rewrite expression
Lesson 3-8, pgs 110-113:
Translating Word Expressions to Algebraic Expressions
word expression
algebraic expression
essential skill
same symbol
more than one
addition expression
sum of
add to
more than
increased by
total of
subtraction expression
difference of
subtract from
decreased by
less than
multiplication expression
product of
multiplied by
division expressions
divided by
quotient of
times the quantity
enclosed in parentheses
algebraic symbols
distance traveled
Lesson 3-9, pgs 114-118:
Problem Solving
Writing an Equation
deposited some money
what is question
what is given
consecutive numbers
one more than
number of
total number
write equations
connect conditions
check conditions
Unit 2, Chapters 4-5:
Rational Numbers
Chapter 4, Lessons 4-1 through 4-8:
Number Theory
Lesson 4-1, pgs 126-130:
Factors and Multiples
form a product
factor of
multiple of
divisible by
natural number
mentally divide
infinite set
continue without end
consecutive months
divide evenly
illustrate your answer
Lesson 4-2, pgs 131-136:
Tests for Divisibility
add the digits
is divisible by
even numbers
odd numbers
number formed by
members of set
Lesson 4-3, pgs 137-138:
Problem Solving Exploration
Factors and Patterns
unit square
all possible
ordered pair
one-rectangle number
Lesson 4-4, pgs 139-141:
Prime Numbers
different factors
prime number
composite number
natural number
twin primes
distinct parts
reversal primes
Lesson 4-5, pgs 143-146:
Prime Factorization
to factor
express as a
natural number
factor completely
prime factorization
factor tree
divisibility test
ascending order
Lesson 4-6, pgs 148-149:
Problem Solving Strategies
Solving a Simpler Problem
divisible by
related problem
simpler problem
Lesson 4-7, pgs 150-153:
prime factorization
fourth power
order of operation
key sequence
algebraic expression
Lesson 4-8, pgs 154-158:
LCM and GCF
multiples of
common multiple
least common multiple
natural number
divisible by
prime number
prime factors
common factor
greatest common factor
relatively prime
Chapter 5, Lessons 5-1 through 5-10:
Lesson 5-1, pgs 163-167:
Equivalent Fractions
one half
three quarters
whole number
natural number
equivalent fraction
lowest terms
relatively prime
common factor
even numbers
Lesson 5-2, pgs 169-170:
Problem Solving Exploration
Modeling Multiplication
horizontal lines
vertical lines
mathematical symbols
represented by
Lesson 5-3, pgs 171-174:
Multiplying Fractions
multiplication rule
common factor
lowest terms
divided equally
share what remains
Lesson 5-4, pgs 175-179:
Multiplying Mixed Numbers
mixed number
improper fraction
whole number
distributive property
common factors
at this rate
Lesson 5-5, pgs 180-181:
Problem Solving Strategies
Organizing Information
logical reasoning
must be
Lesson 5-6, pgs 183-188:
Using Reciprocals to Solve Equations
product is one
is its own
complex fraction
Lesson 5-7, pgs 189-193:
Dividing Fractions and Mixed Numbers
square section
mathematical model
division rule
multiplicative inverse
number line
inverse operation
division rule
mixed number
common factor
improper fraction
multiplication expression
Lesson 5-8, pgs 194-199:
Fractions with Like Denominators
organizing data
make comparisons
formulate questions
draw conclusions
make decisions
apply the property
commutative property
associative property
adding fractions
like denominators
express answer
lowest terms
fractional part
Lesson 5-9, pgs 200-205:
Adding and Subtracting Fractions:
Unlike Denominators
to the nearest
whole number
is about
equivalent fractions
least common denominator
Lesson 5-10, pgs 206-209:
Subtracting Mixed Numbers
density property
solution set
replacement set
Unit 3, Chapters 6-8:
Using Ratios
Chapter 6, Lessons 6-1 through 6-9:
Lesson 6-1, pgs 216-220:
Ratio and Proportion
compare data
3 out of 5
3 to 5
lowest terms
first term
second term
equivalent ratios
Lesson 6-2, pgs 221-225:
Problem Solving
Using Proportions
equivalent ratios
quality control
expected number
cents per ounce
unit rate
Lesson 6-3, pgs 226-230:
Ratio and Measurement
basic unit
fluid ounce
actual distance
map distance
represents the scale
scale drawing
actual length
compare ratios
Lesson 6-4, pgs 231-232:
Problem Solving Strategies
Collecting Data
prepare the report
gather information
analyze data
Lesson 6-5, pgs 234-235:
Problem Solving Exploration
Recording Chances
numbered sides
ordered pair
summary table
tally mark
chances of
how many different
are possible
successful throw
Lesson 6-6, pgs 236-240:
as likely to
fair way
equally likely
possible outcome
sample space
successful outcome
total number
more likely
certain to happen
less likely
cannot happen
drawn at random
odds in favor
Lesson 6-7, pgs 241-246:
Independent and Dependent Events
independent event
sample space
successful event
chosen at random
dependent event
mutually exclusive
Lesson 6-8, pgs 247-251:
Making Choices
different choices
tree diagram
fundamental counting principle
number of events
sample space
equally likely
random survey
Lesson 6-9, pgs 252-257:
tree diagram
possible outcome
ordered arrangement
number of ways
factorial notation
fundamental counting principle
Chapter 7, Lessons 7-1 through 7-10:
Lesson 7-1, pgs 262-266:
Rational Expressions
ratio of
algebraic expression
rational expression
values for
rational number
not equal zero
square root
expressed as
fractional part
lowest terms
identity property
Lesson 7-2, pgs 267-270:
Rational Numbers
number line
sum is zero
rational numbers
equivalent fractions
Lesson 7-3, pgs 271-274:
Decimals and Fractions
mileage indicator
extreme right
tenths of a mile
basic symbols
decimal system
in combination with
place value
expanded form
negative decimal
equivalent fraction
mixed number
lowest terms
base ten
Lesson 7-4, pgs 275-276:
Problem Solving Strategies
Drawing a Diagram
connecting lines
completely rhymed
draw diagram
Lady Murasaki
incompletely rhymed
opposite corners
right angle
90 degrees
Lesson 7-5, pgs 277-280:
Repeating Decimals
rational number
repeating decimal
repeats without end
use a bar
digits that repeat
exact value
round the decimal
irrational number
nearest hundredth
Lesson 7-6, pgs 282-286:
Estimating Sums and Differences
bar graph
compare data
round to tenths
break the record
front digits
exact answer
Lesson 7-7, pgs 287-288:
Problem Solving Exploration
Powers of Ten
number of zeros
powers of ten
whole numbers
Lesson 7-8, pgs 289-294:
Scientific Notation
approximate age
billion years
scientific notation
two factors
greater than
or equal
power of ten
standard notation
decimal point
positive exponent
zero exponent
negative exponent
scientific calculator
multiplying powers
represented by
calculator display
degrees Celsius
helium atom
decimal system
Lesson 7-9, pgs 295-296:
Problem Solving Exploration
The Metric System
decimal point
write a rule
test a rule
metric system
basic unit
converting units
metric units
Lesson 7-10, pgs 297-300:
Estimating Products and Quotients of Decimals
nearest whole number
estimated answer
calculated answer
a little less than
a little more than
fuel economy
miles per gallon
best describes
Chapter 8, Lessons 8-1 through 8-9:
Lesson 8-1, pgs 305-308:
The Meaning of Percent
write a ratio
to compare
denominator is 100
division property
Lesson 8-2, pgs 309-312:
Decimals and Percents
percent chance
specific designed area
chances are that
per hundred
percent symbol
equivalent to
Lesson 8-3, pgs 313-316:
Estimating the Percent of a Number
yearly budget
multiple of
use fractions
fractional equivalents
is between
closer estimate
difference between
memorize table
best estimate
about how many
solve by estimation
Lesson 8-4, pgs 317-318:
Problem Solving Strategies
Deciding on Estimates
bar graph
relative sizes
compare amounts
total daily usage
area of
square miles
how accurate
Lesson 8-5, pgs 319-322:
Finding the Percent of a Number
practical use
expected to use
exact answer
nearest tenth
actual percent
commission on sales
cannot be used
Lesson 8-6, pgs 324-328:
money paid for
the use of money
borrow money
rate of interest
percent of principal
yearly percent
simple interest
amount paid back
compound interest
money accumulated
compounded yearly
Lesson 8-7, pgs 330-333:
amount subtracted
regular price
list price
rate of discount
sale price
amount of discount
net price
marked down
Lesson 8-8, pgs 334-337:
Solving Percent Equations and Proportions
percent discount
original number
write an equation
write a proportion
let n equal
what percent of
what is 8% of
Lesson 8-9, pgs 338-341:
Percent Increase and Decrease
percent increase
amount of increase
original amount
percent decrease
whole percent
Unit 4, Chapters 9-11:
Using Graphs
Chapter 9, lesson 9-1 through 9-7:
Analyzing Data
Lesson 9-1, pgs 348-350:
Misleading Graphs
horizontal scale
vertical scale
Corpus Christi
missing information
four-month period
bar graph
line graph
Lesson 9-2, pgs 351-354:
Using Data from Graphs and Tables
organized in tables
inflation rate
estimate the cost
rate of increase
monthly payments
indicated rates
braking distance
presenting data
Lesson 9-3, pgs 355-357:
Organizing and Presenting Data
average time
frequency table
what percent
reasonable time
Lesson 9-4, pgs 359-362:
Measures of Central Tendency
most common
in the middle
measures of
central tendency
set of data
typical number
assumed mean
Lesson 9-5, pgs 364-365:
Problem Solving Strategies
Analyzing Sample Data
freshwater stream
total population
sample data
estimate size
make predictions
known values
good estimate
reliable sample
Lesson 9-6, pgs 366-369:
Stem and Leaf Plots
stem-leaf plot
highest score
lowest score
tens digit
vertical line
units digit
range interval
range difference
Lesson 9-7, pgs 370-372:
Box and Whisker Plots
box and whisker plot
central tendency
greatest value
least value
median above
upper quartile
lower quartile
extreme points
appropriate scale
what percent
Chapter 10, Lessons 10-1 through 10-7:
The Number Line
Lesson 10-1, pgs 377-381:
The Set of Real Numbers
repeat a pattern
infinite decimal
irrational number
the set of all
real numbers
number line
the graph of
open ray
not included
solution set
open circle
closed circle
closed ray
all real numbers
replacement set
comparison statements
decreased by one
Lesson 10-2, pgs 382-385:
The Addition Property of Inequality
logical reasoning
represent the unknown
let x represent
original amount
addition property
of inequality
solve and graph
Lesson 10-3, pgs 386-389:
The Multiplication Property of Inequality
square centimeters
Lesson 10-4, pgs 390-391:
Problem Solving Strategies
Using Generalizations
magic numbers
any integer
special numbers
entries in pattern
Lesson 10-5, pgs 393-397:
Solving Inequalities
understand problem
what question
what given
develop plan
carry out plan
represent unknown
write inequality
solve inequality
look back
reverse the symbol
consecutive numbers
whole numbers
multi-step process
Lesson 10-6, pgs 399-402:
the word "and"
join sentences
only if
both are true
graph the
solution set of
common to both
closed interval
open interval
empty set
Lesson 10-7, pgs 403-405:
special meaning
in mathematics
the word "or"
if either is true
either or both
chosen at random
Chapter 11, Lessons 11-1 through 11-8:
The Coordinate Plane
Lesson 11-1, pgs 410-413:
Coordinate Graphs
ordered pair
coordinate graph
coordinate system
perpendicular lines
horizontal axis
vertical axis
right is positive
left is negative
up is positive
down is negative
Lesson 11-2, pgs 414-417:
Graphing Linear Equations
let x represent
ordered pair
replacement set
set of real numbers
infinite number
pass through
graph the solution
straight line
linear function
Lesson 11-3, pgs 418-421:
The Standard Form of a Linear Equation
linear equation
graph is
straight line
standard form
represent integers
crosses the axis
parallel to
Lesson 11-4, pgs 422-426:
The Slope of a Line
rate of change
nearly flat
slope of a line
vertical units
horizontal units
as x increases by
change in corresponding
undefined slope
linear equation
two given points
Lesson 11-5, pgs 428-429:
Problem Solving Strategies
Revising the Solution
revise the solution
easier method
efficient method
Lesson 11-6, pgs 430-434:
Graphing Equations and Inequalities
coordinate plane
slope-intercept form
slope m
y-intercept b
rewrite the equation
draw the graph
graph of inequality
dashed line
open half-plane
shade the half-plane
closed half-plane
Lesson 11-7, pgs 435-439:
Problem Solving
Using Two Variables
solve equations
two variables
let x represent
let y represent
graph equations
same coordinate plane
solution to both
system of equations
solution of the system
solve by graphing
Lesson 11-8, pgs 440-443:
basic pattern
overall design
units to the right
image point
right is positive
upward is positive
left is negative
downward is negative
geometric figure
original point
congruent figures
same size
same shape
Unit 5, Chapters 12-13:
Using Real Numbers
Chapter 12, Lessons 12-1 through 12-6:
Square Roots and Right Triangles
Lesson 12-1, pgs 450-451:
Problem Solving Exploration
Square Roots
square meters
s squared
good estimate
nearest thousandth
decimal places
square root
irrational number
Lesson 12-2, pgs 452-456:
Using Square Roots
square root
radical symbol
perfect square
positive root
whole number
multiply radicals
rational expression
Lesson 12-3, pgs 457-462:
The Pythagorean Theorem
Nile River
right triangle
square corner
ancient Greeks
Pythagorean Theorem
simplify radicals
Pythagorean Triples
Lesson 12-4, pgs 463-464:
Problem Solving Strategies
Formulating Questions
how long is
how wide is
Lesson 12-5, pgs 466-472:
Similar Triangles
similar triangles
same shape
corresponding angles
equal measure
corresponding sides
same ratio
Lesson 12-6, pgs 474-480:
The Tangent Ratio
right triangle
acute angle
measure of
ninety degrees
similar triangles
decimal represents
right angle
opposite side
adjacent side
tangent ratio
tan A
tangent of angle
round nonterminating
four decimal places
Chapter 13, Lessons 13-1 through 13-9:
Lesson 13-1, pgs 485-490:
Adding Polynomials
combine like terms
tiles to model
x-squared tile
positive tile
negative tile
neutral pair
constant terms
Lesson 13-2, pgs 491-493:
Subtracting Polynomials
model subtraction
of polynomials
use tiles
remove tiles
x-squared tiles
the opposite of
each term of
like terms
missing variables
coefficient of zero
Lesson 13-3, pgs 494-495:
Problem Solving Exploration
Using a Multiplication Model
Lesson 13-4, pgs 496-500:
Multiplying Binomials
Lesson 13-5, pgs 501-502:
Problem Solving Strategies
Too Much/Too Little Data
not enough
can't tell
divide evenly
ratio of
square of
average speed
value of
Lesson 13-6, pgs 504-507:
Using the Distributive Property
distributive property
find the product
two binomials
vertical arrangement
FOIL method
first terms
outside terms
inside terms
last terms
shaded region
Lesson 13-7, pgs 508-511:
Special Products
use a model
distributive property
FOIL method
perfect square
sum and difference
same two terms
Lesson 13-8, pgs 512-515:
Common Factors
to factor
an expression
as a product
common factor
factor out
greatest common factor
original binomial
common binomial factor
monomial factor
rational expression
lowest terms
Lesson 13-9, pgs 516-519:
Factoring a Trinomial
product of
two binomials
last term
middle term
numerical coefficient
complete the pattern
pair of integers
quadratic trinomial
coordinate plane
linear graph
solution set
Unit 6, Chapters 14-15:
Using Equations
Chapter 14, Lessons 14-1 through 14-7:
Equations in Geometry
Lesson 14-1, pgs 526-531:
Angles and Angle Measures
flat surface
set of points
plane figure
common endpoint
angle ABC
measured in
right angle
acute angle
obtuse angle
pair of angles
ninety degrees
one hundred eighty
point of intersection
Lesson 14-2, pgs 532-536:
Parallel and Perpendicular Lines
single point
be parallel
right angles
opposite angles
vertical angles
exterior angles
interior angles
corresponding angles
are equal
alternate interior
alternate exterior
logical argument
Lesson 14-3, pgs 537-538:
Problem Solving Strategies
Extending Patterns
turned ninety degrees
identify pattern
sequence of
geometric figures
extend the sequence
next figure
Lesson 14-4, pgs 539-544:
rigid figure
measure of
triangle sum
one hundred eighty
opposite sides
equal measures
exterior angle
remote interior
parallel lines
intersected by
corresponding angles
alternate interior
equal measure
Lesson 14-5, pgs 546-550:
simple closed curve
crossing itself
line segments
regular polygon
same length
same measure
Lesson 14-6, pgs 551-556:
Circumference and Area
shape of a
set of points
in a plane
same distance
distance around
irrational number
pi d
2 pi r
area of circle
pi r squared
square units
square centimeters
Lesson 14-7, pgs 558-561:
Circle Graphs
percent of
each kind
circle graph
whole circle
three hundred sixty degrees
one hundred percent
center of circle
vertex of angle
labelled graph
pie-shaped region
amount budgeted
tally sheet
Chapter 15, Lessons 15-1 through 15-8:
Volume and Surface Area
Lesson 15-1, pgs 566-567:
Problem Solving Exploration
Surface Area and Volume
graph paper
surface area
x units long
y units wide
Lesson 15-2, pgs 568-571:
Volume of a Rectangular Prism
lateral face
rectangular prism
space occupied
determined by
cubic units
multiply by
number of layers
cubic feet
area of base
volume is
length times
width times
cubic meters
s cubed
cubic yards
Lesson 15-3, pgs 572-575:
Surface Area of a Rectangular Prism
rectangular prism
surface area
cubical box
six s squared
minimum amount
Lesson 15-4, pgs 576-580:
Volume of a Cylinder
right cylinder
perpendicular distance
form is rigid
area of base
pi r squared h
Lesson 15-5, pgs 582-585:
Volume of a Pyramid
Egyptian pyramids
2600 B.C.
lateral face
TransAmerica Building
perform experiment
compare volume
congruent base
one-third B h
Lesson 15-6, pgs 586-589:
Volume of a Cone
line segment
perpendicular to
one-third pi r squared h
Lesson 15-7, pgs 590-591:
Problem Solving Strategies
Making a Model
triangular pyramid
square pyramid
straight line
perpendicular line
Lesson 15-8, pgs 592-595:
Volume and Area of a Sphere
least surface area
given volume
spherical object
volume of sphere
four-thirds pi r cubed
surface area
four pi r squared
Phrase Guide for Algebra 1, Nichols et al., Holt, 1986.
Ernie's computer, file=c:\wp51\docs\algebra1.gui. 6-9-95, dml.
Chapter 1, Sections 1.1 - 1.8:
Operations, Variables, Formulas
Section 1.1, pgs 1-3:
Rules for Order of Operations
numerical expressions
order of operations
grouping symbols
times sign
raised dot
Section 1.2, pgs 4-6:
Expressions with Variables
algebraic expressions
represent the number
replaced by numbers
Section 1.3, pgs 7-9:
The Commutative and Associative Properties
Section 1.4, pgs 10-12:
The Distributive Property
Section 1.5, pgs 13-15:
Combining Like Terms
like terms
understood to be
complex expression
rearrange terms
grouped together
Section 1.6, pgs 16-17:
Factors and Exponents
whole number
power of
natural number
raise to a power
Section 1.7, pgs 18-19:
Formulas in Geometry
geometric figure
distance around
unit squares
square centimeters
Section 1.8, pgs 21-22:
Recognizing Solutions of Open Sentences
open sentence
true sentence
indicated value
algebraic sentence
Chapter 2, Sections 2.1 - 2.13:
Real Numbers
Section 2.1, pgs 28-31:
Integers and the Number Line
number line
compare integers
absolute value
sets of numbers
listing members
three dots
go on forever
natural numbers
counting numbers
whole numbers
shaded arrows
less than
greater than
Section 2.2, pgs 32-33:
Real Numbers
division by zero
not zero
terminating decimal
repeating decimal
nonzero digit
block of digits
real numbers
Section 2.3, pgs 34-36:
Adding Real Numbers
positive direction
negative direction
absolute value
same sign
different signs
even integers
odd integers
closed under
closure property
of addition
Section 2.4, pgs 37-39:
Properties for the Addition of Real Numbers
additive inverse
additive identity
property of addition
Section 2.5, pgs 40-43:
Multiplying Real Numbers
to subtract
add the opposite
Section 2.6, pgs 41-43:
Multiplying Real Numbers
product of
two positives
is a positive
property of zero
positive and negative
is a negative
two negatives
is positive
closure property
unique real number
like signs
positive product
different signs
negative product
Section 2.7, pgs 44-46:
Dividing Real Numbers
inverse operations
quotient of
like signs
is positive
different signs
is negative
division by zero
multiplicative inverse
unique real number
division rule
multiplication by
the reciprocal
expressed as
a product
Section 2.8, pgs 47-48:
Shortcuts in Adding and Subtracting Real Numbers
enclosing the
in parentheses
group numbers
like signs
innermost parentheses
rectangular solids
Section 2.9, pgs 49-50:
Evaluating Algebraic Expressions
negative value
two variables
Section 2.10, pgs 51-52:
Like Terms with Real Number Coefficients
combining like terms
real number coefficients
distributive property
unlike terms
Section 2.11, pgs 53-54:
Multiplication Properties of 1 and -1
product of
positive one
any number
multiplicative identity
negative one
opposite of the
Section 2.12, pgs 55-57:
Simplifying Expressions Containing Parentheses
algebraic expressions
containing parentheses
distributive property
Section 2.13, pgs 58-59:
Subtracting Algebraic Expressions
of negative one
subtract from
from something subtract
subtract the opposite
Chapter 3, Sections 3.1 - 3.7:
An Introduction
Section 3.1, pgs 64-66:
Solving Equations:
x+ b=c
mathematical sentence
solve equation
true statement
add same
each side
equivalent equations
addition property
for equations
undo the operation
add the opposite
subtraction property
x+ b=c
Section 3.2, pgs 67-68:
Solving Equations:
ax=c; x/a=c
division property
multiplication property
undo this operation
multiply each side
divide each side
Section 3.3, pgs 69-71:
Solving Equations:
ax+b=c; x/a+b=c
equivalent equations
x alone
two undoings
combine like terms
variable term
first step
to each side
solve and check
Section 3.4, pgs 72-73:
Conditionals in Logic
if-then form
truth table
Section 3.5, pgs 74-75:
The Language of Algebra
English phrases
mathematical form
mathematical symbols
various operations
word problems
key phrases
decreased by
made smaller by
increased by
made greater by
let a variable
represent the number
less than
more than
twice a number
times a number
divided by
Section 3.6, pgs 76-79:
Steps for Solving Word Problems
four basic steps
solve word problems
identity given
what to be found
analyze information
represent data
write equation
solve equation
check solution
decide reasonable
arranged compactly
Section 3.7, pgs 80-81:
Proving Statements
real numbers
assumed true
negative one
replaced by
Chapter 4, Sections 4.1 - 4.7:
Equations and Word Problems
Section 4.1, pgs 86-88:
Equations with Variables on Each Side
solve equations
variable term
on each side
add the opposite
combine like terms
Section 4.2, pgs 89-91:
Equations Containing Parentheses
solve equations
containing parentheses
remove parentheses
distributive property
combine like terms
is the same as
Section 4.3, pgs 92-95:
Number Problems
word problem
express relationship
variable represents
in terms of
let x equal
their sum is
separate into two parts
basis of comparison
the smaller of
Section 4.4, pgs 96-99:
Equations with Fractions
with fractions
without fractions
multiply by
the reciprocal
product is one
most efficient
only one term
least number
divisible by
least common multiple
Section 4.5, pgs 100-102:
Equations with Decimals
equations that
contain decimals
multiply by 10
equivalent to
move decimal point
decimals as fractions
convenient technique
digits to the right
Section 4.6, pgs 103-105:
Using Algebra in Percent Problems
per hundred
to a decimal
percent of is
is what percent of
decimal equation
selling price
Section 4.7, pgs 106-108:
Perimeter Formulas
draw a picture
distance around
represent the | {"url":"http://www.docstoc.com/docs/35515788/MATH-PHRASES","timestamp":"2014-04-17T00:05:21Z","content_type":null,"content_length":"93060","record_id":"<urn:uuid:a3b2e307-95ee-44c1-8341-d5588bbcec8e>","cc-path":"CC-MAIN-2014-15/segments/1397609525991.2/warc/CC-MAIN-20140416005205-00031-ip-10-147-4-33.ec2.internal.warc.gz"} |
I = PRT
Name: Ryan
Who is asking: Student
Level of the question: Secondary
Question: Use the formula to find the value of the variable that is not given:
I=PRT;I=$2880, R=0.08, P=$12,000
We quite often use one letter to stand for a variable, as in your equation. When we do then by placing two letters adjacent to each other we mean to multiply the values. Thus, for example your
I = PRT
I = P × R × T
Thus, with I = $2,880, R = 0.08 and P = $12,000 the equation becomes
$2,880 = $12,000 × 0.08 × T
$2,880 = $960 × T
T = $2,880/$960 = 3 | {"url":"http://mathcentral.uregina.ca/QQ/database/QQ.09.05/ryan1.html","timestamp":"2014-04-18T10:36:12Z","content_type":null,"content_length":"5851","record_id":"<urn:uuid:f723630e-07f6-4031-aac9-86e9d2bf130a>","cc-path":"CC-MAIN-2014-15/segments/1397609533308.11/warc/CC-MAIN-20140416005213-00068-ip-10-147-4-33.ec2.internal.warc.gz"}
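The arithmetic above is easy to check with a few lines of Python — a minimal sketch of ours, not part of the original answer:

```python
# Check the worked example: solve I = P x R x T for T.
I = 2880.0   # interest earned, in dollars
P = 12000.0  # principal, in dollars
R = 0.08     # annual interest rate

T = I / (P * R)  # rearranged: T = I / (P x R)
print(T)         # -> 3.0
```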
P. Cousot, Design of Syntactic Program Transformations
by Abstract Interpretation of Semantic Transformations
Abstract: Traditionally, static program analysis has been used for offline program transformation i.e. an abstraction of the subject program semantics is used to determine which syntactic
transformations are applicable. A classical example is binding-time analysis before partial evaluation.
We present a new application of abstract interpretation to the formalization of source to source program transformations:
• The semantic transformation is understood as an abstraction of the subject program semantics. The intuition is that the transformed semantics is an approximation of the subject semantics because,
most often, redundant elements of the subject semantics have been eliminated;
• The correctness of the semantic transformation is expressed by an observational abstraction. The intuition is that the subject and transformed semantics should be exactly the same when
abstracting away from irrelevant hence unobserved details;
• Finally, the syntax of a program is shown to be an abstraction of its semantics (in that details of the execution are lost) so that the transformed program is an abstraction of the transformed semantics.
Abstract interpretation theory provides the ingredients for designing a syntactic source-to-source transformation as an abstraction of a semantics-to-semantics transformation, which correctness is
formally established through an observational abstraction. In particular iterative transformation algorithms are abstraction of the fixpoint semantics of the subject program.
Several examples have been studied with this perspective such as blocking command elimination, program reduction, constant propagation, partial evaluation, etc.
\newblock Design of Syntactic Program Transformations by Abstract
Interpretation of Semantic Transformations (invited talk).
\newblock In \emph{Proceedings of the 17th International Conference, ICLP
2001}, Ph{.} Codognet (Ed.), LNCS 2237, Paphos, Cyprus, November 26 --
December 1, 2001, pp. 4--5, Springer, Berlin, 2001.
author = {Cousot, P{.}},
title = {Design of Syntactic Program Transformations by Abstract
Interpretation of Semantic Transformations (invited talk)},
editor = {Codognet, Ph{.}},
pages = {4--5},
booktitle = {Proceedings of the Seventeenth International Conference, ICLP 2001},
address = {Paphos, Cyprus},
publisher = {LNCS 2237, Springer, Berlin},
month = {November/December},
year = 2001, | {"url":"http://cs.nyu.edu/~pcousot/COUSOTpapers/ICLP01.shtml","timestamp":"2014-04-16T10:13:43Z","content_type":null,"content_length":"4341","record_id":"<urn:uuid:fe74d4cb-551b-4472-a8c5-3829332acec5>","cc-path":"CC-MAIN-2014-15/segments/1398223207046.13/warc/CC-MAIN-20140423032007-00588-ip-10-147-4-33.ec2.internal.warc.gz"} |
Understanding Cap Rates
Any seasoned real estate professional should be familiar with the term Capitalization Rate (or “Cap Rate” for short). But unfortunately, for both experienced investors and new investors alike, the
true meaning of cap rate is often not well understood. The goal of this article is to try to explain cap rates in a way that everyone can fully understand – and internalize – both their meaning and
their value to investors.
Before we discuss exactly what the Cap Rate is, let’s discuss why having an investment metric like Cap Rate is important...
Why Cap Rate is Important
Let’s say you were buying a property, and you wanted to know if the property were a good investment. What information would you ask the seller to determine if the property were one that you should
buy? For many investors, the first question that comes to mind is, “What is the cash flow of the property?”
Note: For those not familiar with the term “cash flow,” it is the amount of money the investor will have left over after collecting the monthly income from the property and paying all the expenses
(including property taxes, insurance, maintenance, mortgage, etc). So, a property with a $5000 per month cash flow will allow the property owner to pocket $60,000 per year.
While this seems like a reasonable question from the perspective of a potential buyer, there is one major problem with asking what the cash flow is on a property: the cash flow is going to be
different for each potential buyer. This is because the cash flow is directly affected by the expenses associated with the property; the higher the expenses, the lower the cash flow. And while a
number of the expenses associated with a property will not vary depending on the owner (for example, property taxes, insurance, maintenance, etc., will likely be the same regardless of who owns the
property), one key expense item will be very much dependent on the specific buyer – and that’s the debt service payments (mortgage payments).
The debt service payments are going to be directly related to the interest rate on the loan, the amortization period, and the down payment amount. Because two buyers will likely use different
financing mechanisms, one is likely to have higher debt service payments than the other. And because cash flow decreases as expenses increase, the one who has higher debt service payments will have
lower cash flow as well. In fact, on the same property, one buyer who gets a loan with a low interest rate and a big down payment could have positive cash flow (i.e., he’ll make money each month on
the property), while another buyer who gets a loan with a high interest rate and small down payment could have negative cash flow (i.e., he’ll lose money each month on the property).
Because my cash flow will likely be different than your cash flow on the same property, it doesn’t make sense to determine the investment value of the property using this metric. This
goes for several other popular metrics used to determine the investment value of the property as well – cash-on-cash return, total return, etc. That’s because these investment metrics are also
specific to the buyer’s specific circumstances (debt service payment, tax bracket, etc).
So, if none of these metrics is a good measure of the value of the investment independent of the specific buyer, what is? That’s where the Cap Rate comes in...
The Cap Rate is a measure of a property’s investment potential, independent of the specific buyer. Regardless of who is evaluating the property, the Cap Rate will remain the same, and therefore two
investors can do an apples-to-apples comparison of the same property using this measure.
Okay, so now you hopefully understand why the Cap Rate is an important metric when evaluating a property, but you still have no idea what it means…let’s jump into that now...
What Does the Cap Rate Mean?
In a nutshell, the Cap Rate is equivalent to the return on investment you would receive if you were to pay all cash for a property.
Here’s another way to think about that...
Most investors understand the basics of return on investment (ROI) for simple investments, like a Certificate of Deposit (CD). And most investors have a pretty good sense of what a good return on
investment is. For example, with a typical CD, you might get a 5% return on your money. So, if you were to invest $100,000 in CD for one year, you would receive a 5% return (or $5000) at the end of
the year. Likewise, if you were to invest $100,000 in a stock market S&P fund for one year, you might expect to receive about an 8% return, or $8000 (assuming the S&P returned its long-time average
amount in that year). People are very used to thinking about return on investment in this way – a simple return percentage that indicates how much your invested money will earn for you each year.
Unfortunately, calculating return on investment for a property is a little more complicated. This is because – unlike with a CD or a stock market investment where you pay for the total value of the
asset up-front – with a property investment you often only pay for a portion of the asset up-front and the rest of the asset is paid for using a loan. For example, to buy $100,000 property, you may
only have to put up $10,000. Because of this, figuring out the return on a property investment is more complicated.
But Cap Rate is our way of making the more complicated property investment look just as simple as a CD or stock market investment. Cap Rate assumes that you pay for the entire property up-front (just
like the CD or stock market fund), and indicates your return on that property investment. In addition to allowing you to compare one property to another, Cap Rate also allows you to compare a
property investment to other investments. For example, if a $100,000 property had a 12% Cap Rate, the property would return $12,000 per year to someone who paid all cash for the property. Compare
this to the 5% ($5000) that the CD returns and the 8% ($8000) that the stock market fund returns, and you see that the property investment is a pretty good deal (assuming you ignore all the extra
work that goes into owning a property versus owning a CD or a stock fund).
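To make that comparison concrete, here is a tiny Python sketch (ours, not from the article) of the three yearly returns discussed above:

```python
# $100,000 placed in a 5% CD, an 8% stock fund, and a property
# bought all-cash at a 12% cap rate.
principal = 100_000
for name, rate in [("CD", 0.05), ("S&P fund", 0.08), ("12% cap rate property", 0.12)]:
    print(f"{name}: ${principal * rate:,.0f} per year")
# -> CD: $5,000 per year
# -> S&P fund: $8,000 per year
# -> 12% cap rate property: $12,000 per year
```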
Important: You should keep in mind that the Cap Rate isn’t necessarily the amount a real-life investor will make on a property. Because Cap Rate assumes that an investor pays 100% up-front for a
property, and because an actual investor rarely pays 100% up-front, the actual return will differ from the Cap Rate of the property. In many cases, the actual return will be higher than the Cap Rate
(one advantage of leverage).
How is the Cap Rate Calculated?
Now that we know what the Cap Rate means and why it’s important, let’s discuss how it’s calculated.
The cap rate is calculated as follows:
Cap Rate = Net Operating Income / Property Price
Note: For those not familiar with the term “net operating income” (or “NOI”), it is the amount of money the investor will have left over after collecting the monthly income from the property and
paying all the expenses except for the debt service. Because the debt service amount should be the only property expense that is directly affected by the specific buyer, the NOI will be the same for
all potential buyers.
As an example, let’s say that a specific property has the following characteristics:
Purchase Price: $500,000
Income Per Month: $15,000
Expenses Per Year: $100,000
Here is the cap rate for our example property...
First, calculate NOI:
NOI = Annual Income – Annual Expenses
= (12 x $15,000) – ($100,000) = $80,000
Then calculate Cap Rate:
Cap Rate = NOI / Property Price
= $80,000 / $500,000 = 16%
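The same calculation in a short Python sketch (variable names are ours; the $100,000 figure is the annual expense amount used in the NOI calculation above):

```python
# Cap Rate = NOI / Property Price, where NOI excludes debt service.
price = 500_000
monthly_income = 15_000
annual_expenses = 100_000

noi = 12 * monthly_income - annual_expenses  # $80,000
cap_rate = noi / price                       # 0.16

print(f"NOI = ${noi:,}, Cap Rate = {cap_rate:.0%}")
# -> NOI = $80,000, Cap Rate = 16%
```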
The next question you might ask is, “What is a good Cap Rate?” While it really depends on the area of the country you’re in, in general, most areas see maximum Cap Rates in the 8-12% range. And just
like the value of single family houses are based on the prices of comparable houses in the area, the value of larger investment properties are usually based on the Cap Rate of comparable investment
properties in the area. So, if the average Cap Rate in your area is 10%, you should be looking for at least a 10% Cap Rate for your property (barring other more complex situations and | {"url":"http://www.threetypes.com/re/cap-rates.shtml","timestamp":"2014-04-18T00:13:40Z","content_type":null,"content_length":"15620","record_id":"<urn:uuid:722f8623-379e-4b6f-88a9-13540af9b0ff>","cc-path":"CC-MAIN-2014-15/segments/1397609532374.24/warc/CC-MAIN-20140416005212-00605-ip-10-147-4-33.ec2.internal.warc.gz"} |
vertex of parabola
October 7th 2008, 12:35 PM #1
Junior Member
Sep 2008
At a time t seconds after it is thrown up in the air, a tomato is at a height (in meters) of f(t)=−49t^2+60t+3 m.
What is the average velocity of the tomato during the first 5 seconds?
How high does the tomato go?
How long is the tomato in the air?
forgot a decimal in your function ...
$f(t) = -4.9t^2 + 60t + 3$
$V_{avg} = \frac{f(5) - f(0)}{5 - 0}$
t-value for the vertex of f(t) (time at the top of its trajectory) is $\frac{-b}{2a}$ ... then find $f\left(\frac{-b}{2a}\right)$
total time ... set f(t) = 0 and solve for t
How do you know what b and a are?
never mind i got it, thanks
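Putting the hints above together, a short Python sketch (ours, not from the thread) computes all three answers for the corrected function f(t) = -4.9t^2 + 60t + 3:

```python
import math

a, b, c = -4.9, 60.0, 3.0          # f(t) = -4.9t^2 + 60t + 3, height in meters

def f(t):
    return a * t**2 + b * t + c

v_avg = (f(5) - f(0)) / 5          # average velocity over the first 5 s
t_peak = -b / (2 * a)              # vertex: time at the top of the trajectory
h_max = f(t_peak)                  # maximum height
t_land = (-b - math.sqrt(b**2 - 4*a*c)) / (2*a)  # larger root of f(t) = 0

print(round(v_avg, 2), round(h_max, 2), round(t_land, 2))  # -> 35.5 186.67 12.29
```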
{"url":"http://mathhelpforum.com/pre-calculus/52472-vertex-parabola.html","timestamp":"2014-04-16T16:52:50Z","content_type":null,"content_length":"36932","record_id":"<urn:uuid:3c2c6b42-e395-4901-a4c0-0320c5d847b2>","cc-path":"CC-MAIN-2014-15/segments/1397609524259.30/warc/CC-MAIN-20140416005204-00151-ip-10-147-4-33.ec2.internal.warc.gz"}
Miami Algebra 1 Tutor
Find a Miami Algebra 1 Tutor
...This experience has expanded my biological background. I taught many graduate students basic skills in a wet laboratory. I feel like I have a strong Mathematical and Biological background with which I can help students with their assignments.
18 Subjects: including algebra 1, chemistry, calculus, geometry
...I am experienced in preparing and editing APA style papers on any subject and of any length.My geometry lessons include formulas for lengths, areas and volumes. The Pythagorean theorem will be
explained and applied. We will learn terms like circumference and area of a circle; also, area of a triangle, volume of a cylinder, sphere, and a pyramid.
46 Subjects: including algebra 1, Spanish, reading, writing
I began working as a tutor in High School as part of the Math Club, and then continued in college in a part time position, where I helped students in College Algebra, Statistics, Calculus and
Programming. After college I moved to Spain where I gave private test prep lessons to high school students ...
11 Subjects: including algebra 1, calculus, physics, geometry
...I am currently an undergraduate university student. Because of this, I have experienced filling various applications for different colleges and universities. I have also done a variety of
research on different schools when applying.
20 Subjects: including algebra 1, reading, English, chemistry
...Personally, I believe that such experience is very much in phase with my personality and has given me a very particular perspective about general psychological impediments to enjoying and
learning Physics and Math concepts. I also strongly believe that learning Math and Sciences is of the utmost...
11 Subjects: including algebra 1, Spanish, physics, calculus | {"url":"http://www.purplemath.com/miami_algebra_1_tutors.php","timestamp":"2014-04-18T00:31:29Z","content_type":null,"content_length":"23773","record_id":"<urn:uuid:7a63faa8-a3c4-4f40-aeab-8dc8b1f09790>","cc-path":"CC-MAIN-2014-15/segments/1397609532374.24/warc/CC-MAIN-20140416005212-00018-ip-10-147-4-33.ec2.internal.warc.gz"} |
Subgroups of the Rational Numbers Under Addition
Date: 02/01/2003 at 23:42:09
From: Cathie
Subject: Subgroups of the rational numbers
I need to describe all the subgroups of the rational numbers under
I know the definition of a subgroup:
1) It has to be closed under the operation (in this case, addition)
2) It has to contain the identity of the group
3) Every element has to have an inverse
I've tried obvious choices like the positive or negative rationals
plus zero, but there was no additive inverse.
I also tried {x, -x : x is in Q} plus zero, but that wasn't
necessarily closed under addition.
I'm pretty stuck, but I'm going to keep at it. I'd love some input.
Date: 02/02/2003 at 10:02:15
From: Doctor Tom
Subject: Re: Subgroups of the rational numbers
A couple of examples occurred to me right away:
1) { ..., -4/3, -3/3, -2/3, -1/3, 0, 1/3, 2/3, ...}
2) {x : x = k/2^n for all integers k and positive integers n}
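Example (2) — the dyadic rationals k/2^n — can be spot-checked numerically with Python's exact fractions module (an illustration of ours, not part of the answer):

```python
from fractions import Fraction
from random import randrange

def random_dyadic():
    """A random element k / 2^n of example (2)."""
    return Fraction(randrange(-50, 50), 2 ** randrange(0, 8))

def is_dyadic(q):
    d = q.denominator          # Fraction keeps lowest terms, so d >= 1
    return d & (d - 1) == 0    # True iff d is a power of two

for _ in range(1000):
    x, y = random_dyadic(), random_dyadic()
    assert is_dyadic(x + y)    # closed under addition
    assert is_dyadic(-x)       # contains additive inverses
assert is_dyadic(Fraction(0))  # contains the identity
print("subgroup axioms hold on 1000 random samples")
```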
Can we generalize these? For example, consider the set of all numbers
of the form:
where k is a positive or negative integer, and m and n are
non-negative integers. This is clearly an additive subgroup of the
In fact, you can pick any subset of the prime numbers to use as
denominators, for example, all the numbers of the form:
k/(2^m 5^n 17^p)
And, in fact, not just finite subsets - you can choose ANY subset of
the prime numbers, finite or infinite, and the set of all numbers that
look like:
k/(p1^n1 p2^n2 p3^n3 .... (forever))
such that p1, p2, ... are distinct prime numbers and n1, n2, ... are
non-negative integers such that only a finite number of them are non-zero.
Clearly we can form a subgroup from all the integer multiples of any
fixed rational number.
In addition, when you look at the various subgroups I mentioned above,
the numerators could also be any multiple of a number relatively prime
to all the primes in the denominator.
Also, the trivial subgroup {0} is closed under addition.
It also seems that you can restrict the exponents on the primes in the
denominators to be no greater than some fixed number if you want.
In other words, we could allow 3 and 3^2 down there together with any
power of 2, so the general form would be:
where k is a positive or negative integer, m is zero, 1, or 2, and n
is any non-negative integer.
Anyway, after talking this over with a couple of friends, we all agree
that the following is a complete answer to your question:
The only finite subgroup is {0}.
All the infinite groups can be characterized as follows:
Let p0, p1, p2, ... be all the prime numbers, so p0 = 2, p1 = 3,
p2 = 5, p3 = 7, and so on.
Let k0, k1, k2, ... be either non-negative integers or a special
number that I will call "infinity." In addition, let n be any number
that is relatively prime to pi^ki for all i. Each different list of
values of the ki together with the value of n generates a unique
subgroup of the rationals whose members are defined as follows:
Let m0 <= k0, m1 <= k1, m2 <= k2, ... mi <= ki, where at most a finite
number of the mi are non-zero. Let q be an integer, positive, negative
or zero. If the ki is the special number "infinity," then the mi can
be any non-negative finite integer. Then the number:
represents all the elements in that subgroup. (Notice that although
the thing in the denominator appears to be an infinite product, it is
not, since at most a finite number of the mi are non-zero.)
That's it! Much more complicated and interesting than I thought at first.
- Doctor Tom, The Math Forum | {"url":"http://mathforum.org/library/drmath/view/62187.html","timestamp":"2014-04-20T13:42:41Z","content_type":null,"content_length":"8566","record_id":"<urn:uuid:52fe1e99-da3e-44b9-85fe-eb8f5266af48>","cc-path":"CC-MAIN-2014-15/segments/1397609538787.31/warc/CC-MAIN-20140416005218-00179-ip-10-147-4-33.ec2.internal.warc.gz"} |
If the diameter of cross-section of a wire is decreased by 20%. How much percent should the length be increased,so... - Homework Help - eNotes.com
If the diameter of cross-section of a wire is decreased by 20%. How much percent should the length be increased,so that the volume remains the same ?
Since the shape of wire is cylindrical, you may evaluate its volume using the following formula, such that:
`V = pi*d^2/4*l`
d represents the diameter of cross section of wire
l represents the length of wire
Since the new diameter of cross section of wire is `d_1 = d - ` `20/100*d` , you need to substitute in equation of volume, such that:
`V = pi*(d_1^2)/4*l_1 => pi*d^2/4*l = pi*(d_1^2)/4*l_1`
Reducing duplicate factors yields:
`pi*d^2/4*l = pi*d^2(1 - 20/100)^2/4*l_1`
Reducing duplicate factors yields:
`l = (1 - 20/100)^2*l_1 => l_1 = l/(1 - 20/100)^2`
`l_1 = l*(100/80)^2 => l_1 = l*(5/4)^2 => l_1 = 1.5625*l => l_1 = 156.25/100*l = 156.25%*l`
Hence, under the given conditions, the new length must be 156.25% of the original length, i.e. the length of the wire should be increased by 56.25%.
Let the original diameter be d1 and the original length be l1. The original radius = d1/2, so the initial volume is V1 = pi*(d1/2)^2*l1 = pi*d1^2/4*l1.
The new diameter (after decreasing by 20% of the original) is d2 = d1 - (20/100)*d1 = 0.8*d1, so the new radius = (0.8*d1)/2 = 0.4*d1. Let the new length be l2. The new volume is V2 = pi*(0.4*d1)^2*l2 = pi*0.16*d1^2*l2.
Since the volume remained the same, V1 = V2:
pi*d1^2/4*l1 = pi*0.16*d1^2*l2 [pi and d1^2 cancel]
=> l1/4 = 0.16*l2
=> l2 = l1/(4*0.16) = l1/0.64 = (25/16)*l1
Increase in length = l2 - l1 = (25/16)*l1 - l1 = (9/16)*l1.
Percentage increase in length = ((9/16)*l1/l1)*100 = (9/16)*100 = 56.25%.
Hence, the percentage increase in the length of the wire = 56.25%.
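Both answers can be verified numerically; here is a short Python sketch (ours, not from the original page):

```python
import math

# Shrink the diameter by 20% and find the length factor that keeps
# the cylinder's volume V = pi * d^2 / 4 * l fixed.
d1, l1 = 1.0, 1.0                    # any starting values work
v1 = math.pi * d1**2 / 4 * l1

d2 = 0.8 * d1                        # diameter decreased by 20%
l2 = v1 / (math.pi * d2**2 / 4)      # length needed for the same volume

increase = (l2 - l1) / l1 * 100
print(f"{increase:.2f}% longer")     # -> 56.25% longer
```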
{"url":"http://www.enotes.com/homework-help/diameter-cross-section-wire-decreased-by-20-how-378767","timestamp":"2014-04-18T23:52:30Z","content_type":null,"content_length":"28178","record_id":"<urn:uuid:3f18a761-083e-4156-b10e-8fc5cb38941b>","cc-path":"CC-MAIN-2014-15/segments/1397609535535.6/warc/CC-MAIN-20140416005215-00365-ip-10-147-4-33.ec2.internal.warc.gz"}
Thanks to the Subspace Theorem of Wolfgang Schmidt, we have a solid understanding of norm form equations over number fields, at least from a ``qualitative'' viewpoint. The situation is much less satisfactory, however, if we desire to solve such equations ``effectively'' in all but the simplest cases. In this talk, I will sketch recent work on applications of linear forms in logarithms to such problems, generalizing work of Vojta, and on applications of these results.
For the anticyclotomic $p$-adic Rankin--Selberg L-function attached to a Hecke eigenform and an imaginary quadratic field, we explain the construction of the two-variable $p$-adic L-function, introducing the second $p$-adic variable by considering Hida and Coleman families of Hecke eigenforms parametrized by the weight.
We will report on our recent work on the relation between Heegner points and $p$-adic $L$-functions, both made to vary in $p$-adic families. By Kolyvagin's ``method of Euler systems", this leads
to a number of applications, in particular to the arithmetic of modular elliptic curves.
In this talk we will present a construction of real analytic Eisenstein series $E(z,s)$ attached to a totally real field $K$, $E(z,s)$ being real analytic in $z$ and holomorphic in $s$. We will
present a precise formula for its Fourier series expansion around $z$. Having such an explicit formula at our disposal, we will then prove a functional equation which relates $E(z,s)$ to its
so-called "dual Eisenstein series" $E^*(z,1-s)$. It turns out that the constant term of this Fourier series is a partial zeta function $\zeta(s)$ in the complex variable $s$ weighted by a sign
character. In the special case when $ord_{s=0}(\zeta(s))=1$, it is expected that $\zeta'(0)$ is equal to the logarithm of a global unit in an abelian extension of $K$. In order to get some
insights about a possible solution of this outstanding conjecture, we will present a (classical) proof of this conjecture in the special case when $K=\mathbf{Q}$, which involves Cauchy's classical residue theorem.
I will survey a number of results regarding bounding the ramification of the torsion fields of Drinfeld modules, with applications to Serre large image type results for the associated Galois representations.
Let $G$ be a connected reductive quasi-split algebraic group over a $p$-adic field. In this talk we introduce an abelian category of equivariant perverse sheaves on an ind-variety built from $\,^LG$, the L-group for $G$, and show that there is a canonical bijection between isomorphism classes of simple objects in this category and complete Langlands parameters. Joint work with Pramod
Achar, Masoud Kamgarpour and Hadi Salmasian. This group is currently working on a proof that this category is Koszul.
This geometric and categorical approach to complete Langlands parameters suggests a geometric and categorical approach to irreducible admissible representations and the local Langlands
Correspondence itself, which is already realised in joint work with David Roe when $G = GL(1)$, and under construction for other algebraic tori. Time permitting, I will also say a few words about
this work.
If $k$ is a number field, $p$ an odd prime number and $S$ a finite set of primes of $k$ containing the set $S_p$ of primes above $p$, the arithmetic curve $\operatorname{Spec}(\mathcal{O}_k)\setminus S$ is a $K(\pi,1)$ for $p$, i.e. the pro-$p$-completion of its étale homotopy type is weakly equivalent to the Eilenberg-MacLane space of $\pi_1^{et}(\operatorname{Spec}(\mathcal{O}_k)\setminus S)(p)$, the Galois group of the maximal pro-$p$-extension of $k$ unramified outside $S$. We discuss arithmetic consequences of the $K(\pi,1)$-property in the more difficult tame case
(i.e. $S\cap S_p=\varnothing$) due to A. Schmidt and show how the first explicit examples have been obtained by J. Labute using the theory of mild pro-$p$-groups. We investigate how these groups
can be constructed using higher cohomological Massey products and give an arithmetic interpretation for $k=\mathbb{Q}$ in terms of certain analogues of $p$-th power symbols.
In this report on joint work with Samit Dasgupta, I will describe a relationship between classes in the cohomology of $\mathfrak{p}$-arithmetic groups and intertwining operators between smooth or
$p$-adic representations of $GL_2(F_\mathfrak{p})$, $F$ a totally real field, and classical or completed cohomology groups of $\mathfrak{p}$-towers of quaternionic Shimura varieties over $F$.
This relationship can be used to generalize some results of Breuil and Emerton concerning $L$-invariants and $p$-adic $L$-functions to the setting of Hilbert modular forms.
Let $\rho_{A_i,\ell} : G_K \rightarrow \mbox{Aut}(V_\ell(A_i))$ be the $\ell$-adic Galois representation attached to an abelian variety $A_i/K$, and let $\tau_{A_i,\ell} : \mbox{End}(A_i)\otimes
\mathbb{Q}_\ell \stackrel{\sim}{\rightarrow} \mbox{End}_{\mathbb{Q}_\ell[G_K]}(V_\ell(A_i))$ be the canonical isomorphism (Tate/Faltings). The purpose of this talk is to study properties of the
tensor product $\rho_{A_1, \ell}\otimes \rho_{A_2, \ell}$ of two such representations, particularly in view of the following question: when is $\tau_{A_1,\ell}\otimes \tau_{A_2,\ell} : \mbox{End}
(A_1)\otimes \mbox{End}(A_2)\otimes\mathbb{Q}_\ell \rightarrow \mbox{End}_{\mathbb{Q}_\ell[G_K]}(V_\ell(A_1)\otimes V_\ell(A_2))$ an isomorphism? (This question is related to Tate's Conjecture
for codimension 2 cycles on products of abelian varieties.) In this talk I will give a solution in the case when $K = \mathbb{Q}$ and $A_i = A_{f_i}$ is a modular abelian variety attached to a
weight 2 newform $f_i$ on $\Gamma_1(N_i)$. If time permits, I will also discuss mod $\ell$ analogues.
In his foundational work on the theory of $p$-adic modular forms, N. Katz observed that there is a positive lower bound for the "growth condition" of an overconvergent $p$-adic modular eigenform
with nonzero $U_p$-eigenvalue. In more modern language, this states that any such form can be analytically continued from its initial domain of definition to a not "too small" region of the rigid
analytic modular curve. Years later, K. Buzzard, by adding $\Gamma_0(p)$ to the level, proved that such forms can be further extended to a certain "large" region of the modular curve. These
results were used by Buzzard and Taylor to prove modularity lifting results which led to a proof of certain cases of the Strong Artin conjecture.
It has been known for a while how to extend these results to the Hilbert case when $p$ is split in the totally real field of degree $g>1$, as the problem looks formally like a product of $g$
copies of the modular curve case. In the inert case, however, a mixing happens that fundamentally changes the nature of the problem. In this talk, I will explain new results on domains of
automatic analytic continuation for overconvergent Hilbert modular forms in the case $p$ is unramified in the totally real field. These results can be used to prove many cases of the strong Artin
conjecture for Hilbert modular forms. Some of the work that will be presented is joint with Sasaki and Tian.
The Coates-Sinnott Conjecture was formulated in 1974 as a K-theory analogue of Stickelberger's Theorem and proven for K2 for abelian number fields up to 2-torsion. In this talk we present recent
results about the general situation of higher K-groups, arbitrary relative abelian extensions of number fields and all primes including 2. The most complete general results for all primes are due
to R. Taleb, and in some more specific situations to Taleb and myself.
Let $F$ be a finite extension of $\mathbb Q_p$ of degree $d$ and let $E/F$ be a quadratic extension with ring of integers $O_E$. In this lecture I will explain how Drinfeld's formal `upper half plane' also arises as a moduli space for $p$-divisible groups of dimension $2d$ and height $4d$ with an action of $O_E$ and a polarization which may be principal or not, depending on whether $E/F$ is ramified or not. If time permits, I will explain how to use this model of Drinfeld space to give new examples of $p$-adic uniformization of certain Shimura varieties. This is joint work with Michael Rapoport.
Let $p$ be a prime, let $S$ be a finite set of primes $q\equiv1\ {\rm mod}\ p$ but $q\not\equiv1\ {\rm mod}\ p^2$ and let $G_S$ be the Galois group of the maximal $p$-extension of $\mathbb Q$
unramified outside of $S$. If $\rho$ is a continuous homomorphism of $G_S$ into ${\rm GL}_2(\mathbb Z_p)$ we use the Koch presentation of $G_S$ and the theory of mild pro-$p$-groups to show that
if $p>3$ then, under certain conditions on the linking numbers of the primes in $S$, either $\rho=1$ or $\rho(G_S)$ is a Sylow $p$-subgroup of ${\rm SL}_2(\mathbb Z_p)$. Under certain conditions
on $S$ with $|S|=2,3$, we show that $\rho=1$.
I will talk about my joint work with David Loeffler and Sarah Zerbes on the construction of cohomology classes for the Rankin-Selberg convolution of two weight-two modular forms, that is, to
construct a collection of elements in $H^1(\mathbb{Q}(\mu_m),V_f\otimes V_g)$ where $f$ and $g$ are modular forms of weight two and $V_f$ and $V_g$ are the corresponding Deligne representations.
I will also talk about how this construction is related to Perrin-Riou's conjecture on the existence of an Euler system for $V_f\otimes V_g$, and about other applications.
The purpose of this talk is to interpret results of Jakubec, Jakubec-Lassak, Marko and Jakubec-Marko on congruences of Ankeny-Artin-Chowla type for cyclic totally real fields as an elementary
version of the $p$-adic class number formula modulo powers of $p$. Explicit formulas for quadratic and cubic fields will be given.
A possible characterization of absolute Galois groups among profinite groups still seems to be a very difficult task.
Recently however several remarkable developments have opened up new ways of exploration. These include the Rost-Voevodsky proof of the Bloch-Kato conjecture, the advances of F. Bogomolov, F. Pop
and Y. Tschinkel, on a birational abelian program, and some advances on the structure and surprising anabelian character of rather small quotients of finite exponents of absolute Galois groups.
The interview and its aftermath will focus on the latter explorations carried out with S. Chebolu, I. Efrat, J. Swallow and A. Topaz.
We report on the work on endoscopic classification for quasi-split unitary groups, following Arthur's methods. We will highlight some local and global results that are corollaries of the theory.
We consider an abelian variety $A$ with complex multiplication defined over a number field $F$, and study the growth of the $p$-rank of the Selmer group of $A$ in certain classes of infinite $p$-extensions of $F$. This is joint work with Meng Fai Lim.
Noncommutative Iwasawa theory predicts congruences between twisted p-adic L-values arising from Artin representations. We shall discuss the background and present numerical evidence of such congruences.
Using Faltings' theory of the Hodge-Tate sequence of an abelian scheme we give a functorial construction of ``modular sheaves" $\Omega^\kappa$, where $\kappa$ is a not-necessarily integral
weight, attached to abelian schemes on which the canonical subgroup exists. These sheaves generalize the integral powers, $\omega^k$, of the sheaf $\omega$ of relative differentials on a modular
curve. Global sections of $\Omega^\kappa$ provide geometric realizations of overconvergent automorphic forms of non-integral weight. Applications of this approach to the theory of $p$-adic
Hilbert modular forms will be described. This is joint work with Fabrizio Andreatta and Adrian Iovita.
Alain Connes has proposed solving Hilbert's 12th problem by constructing points on certain `non-commutative varieties'. Such a variety is given by a C*-dynamical system, the Bost-Connes system,
made out of the adeles of a number field. The talk will discuss the construction, along with recent work by the presenter, M. Laca and S. Neshveyev which turns this construction into a functor
from algebraic number fields to C*-dynamical systems.
In Kronecker's Jugendtraum, abelian extensions of an imaginary quadratic field are constructed by special values of modular functions on classical modular curves. In practical terms, this allows
one to find generating polynomials for number fields arising from class field theory using numerical methods. We extend Kronecker's Jugendtraum to CM number fields in this explicit way by
computing special values of functions on Shimura curves over totally real fields. To do so, we exhibit a method to numerically compute power series expansions of modular forms on a cocompact
Fuchsian group, using the explicit computation of a fundamental domain and linear algebra.
Motivated by the Szpiro conjecture, we conjecture that for any prime $p$, any integer $M$, and any integer $l>6$, if $E$ is a semistable elliptic curve with minimal discriminant $p^r M^l$, then
$r\leq 6$. We prove this conjecture for many primes $p$ and $l=6k$ by looking for perfect powers in certain elliptic nets. | {"url":"http://cms.math.ca/Reunions/hiver12/res/ant","timestamp":"2014-04-18T10:47:44Z","content_type":null,"content_length":"29000","record_id":"<urn:uuid:1aa3b9c0-3e45-4177-a912-0e61650f6db0>","cc-path":"CC-MAIN-2014-15/segments/1397609533308.11/warc/CC-MAIN-20140416005213-00093-ip-10-147-4-33.ec2.internal.warc.gz"} |
La Puente Prealgebra Tutor
Find a La Puente Prealgebra Tutor
...Then in the basic painting course, I started working with different media, in addition to linseed oil, such as glazing medium, beeswax, poppy seed oil, and stand oil. I also got to try
different techniques like palette-knife painting and wet-into-wet. I continue to take these weekly classes, and I'm excited to see my skills develop.
24 Subjects: including prealgebra, reading, French, English
...I have experience with college, high school, and elementary school students in various subjects. I come from rough and humble beginnings, where dedication and talent allowed me to graduate from
college. I love passing on these values to students, particularly those interested in attending college.
11 Subjects: including prealgebra, reading, biology, algebra 1
...During my time at USC, I conducted astronomy research studying the flow of subsurface matter in the sun, and I also worked at Mt. Wilson Observatory. After graduation, I moved to Kiev, Ukraine
to teach English.
15 Subjects: including prealgebra, English, physics, writing
...These broad experiences and content knowledge allow me to teach key concepts and information in unique and meaningful ways to students. I believe that every student can learn. I am a patient
and resourceful teacher that engages students in meaningful learning.
11 Subjects: including prealgebra, geometry, biology, anatomy
Hello! I am currently a graduate student at California State University, Fullerton. I received my B.S. in Mathematics/Applied Chemistry in the summer of 2013 from the University of California,
Riverside with my major GPA just over 3.5.
7 Subjects: including prealgebra, chemistry, calculus, algebra 1 | {"url":"http://www.purplemath.com/la_puente_prealgebra_tutors.php","timestamp":"2014-04-17T11:10:47Z","content_type":null,"content_length":"23973","record_id":"<urn:uuid:a664804f-627e-43a0-9a1d-fdb8668f8d8f>","cc-path":"CC-MAIN-2014-15/segments/1397609527423.39/warc/CC-MAIN-20140416005207-00163-ip-10-147-4-33.ec2.internal.warc.gz"} |
At age 18, what is the normal way to feed a newborn infant, to you?
18 Thu, 06-09-2011 - 3:33pm
Think back to when you were 18 (or 16 if you are currently 18) and how you would have answered the following question:
1. What is the normal way to feed a newborn infant?
A. Breastfeeding
B. A bottle of infant formula.
C. Either breastfeeding or a bottle of infant formula, as both were normal.
2. Would your answer have been any different if you were asked at the age of 9 or 10?
| {"url":"http://www.ivillage.com/forums/print/117609575","timestamp":"2014-04-16T19:23:06Z","content_type":null,"content_length":"10715","record_id":"<urn:uuid:913a1cda-9e94-4f9f-89d7-909e8c92a71b>","cc-path":"CC-MAIN-2014-15/segments/1397609524644.38/warc/CC-MAIN-20140416005204-00069-ip-10-147-4-33.ec2.internal.warc.gz"} |