Musings of the Masters
Musings of the Masters: An Anthology of Mathematical Reflections, ed. Raymond G. Ayoub, 2004. 288+xii pp. $47.95 hardcover. ISBN 0-88385-549-6. The Mathematical Association of America, 800-331-1622.
In Musings of the Masters, Raymond G. Ayoub has collected 17 essays, excerpts, and speeches by renowned mathematicians who offer their thoughts on issues as diverse as mathematical pedagogy
("Thoughts on the Heuristic Method", by Hadamard), aesthetics ("Mathematics and the Arts", by Morse), ontology ("Mathematical Invention", by Poincaré), the role of the scholar ("The Community of
Scholars", by Lichnerowicz) and the role of the history of mathematics ("History of Mathematics: Why and How", by Weil).
We might begin by asking: why read a book about mathematics? Ayoub notes in the introduction that the collection stems, in part, from "a curiosity concerning the creative process in mathematics
together with a curiosity concerning the essence of this remarkable subject" (p. vii). He ends the introduction by expressing hope that the collection will encourage mathematicians "to share with
others their perceptions on this beautiful subject, or on other matters dealing more directly with the humanistic side of our nature." Ayoub's work lives up to its title and to the first goal; as to
the second, only time will tell.
There is much to recommend this book. Several of the essays ought to be required reading for all who intend to become mathematicians or teachers of mathematics. Poincaré's selection gives a personal
view of the role "unconscious" (what we would call subconscious) thought plays in mathematical discovery, an encouraging lesson to anyone who has ever meditated on a problem for weeks or months
without approaching a solution. Meanwhile the Presidential address by Sylvester emphasizes the importance of observation in mathematical creativity: "Most, if not all, of the great ideas of modern
mathematics have had their origin in observation" (p. 158). Weil explains the value of the history of mathematics: its role is to place before us examples of first-rate mathematical work (p. 204),
presumably for emulation or inspiration; this is a sentiment that teachers of ordinary, non-mathematical history ought to keep in mind. Hadamard's essay on the heuristic process offers the viewpoint
of someone who has clearly had much classroom experience; it is interesting to see the great similarities between the debate over mathematical pedagogy in Hadamard's time and in our own.
Unfortunately, in the service of its second goal (encouraging mathematicians to share their own perceptions), many of the "Musings" are mathematicians discussing topics only distantly connected with
mathematics. Lévy's "Does God Exist?" ventures into theology and notes that belief in "God" is a reflection of the tendency to accept a word as an explanation (p. 223); Morse's "Mathematics and the
Arts" espouses the viewpoint that "the basic affinity between mathematics and the arts is psychological and spiritual and not metrical or geometrical" (p. 88), and Lichnerowicz's essay argues that
scientists must involve themselves in political activity as scientists (p. 197).
There is no question that Ayoub's book lives up to its title and its stated intent. Many of the selections deal directly with mathematics. On the other hand, many of the selections have little
connection to mathematics proper, and their presence is justified by being written by prominent mathematicians. If you are looking for an anthology of writings on mathematics or by mathematicians,
this is worthwhile reading.
Jeff Suzuki, Associate Professor, Brooklyn College
'God's Number' Revealed
The world has waited with bated breath for three decades, and now a group of academics, engineers, and math geeks has finally found the magic number. That number is 20, and it's the maximum
number of moves it takes to solve a Rubik's Cube.
Known as "God's Number", the magic number required about 35 CPU-years and a good deal of man-hours to find. Why? Because there are 43,252,003,274,489,856,000 possible positions of the cube, and the
computer algorithm that finally cracked God's Algorithm had to solve them all. (The terms "God's Number" and "God's Algorithm" are derived from the fact that if God were solving a Cube, he/she/it
would always do it in the most efficient way possible.)
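That figure of 43,252,003,274,489,856,000 follows from a standard counting argument over corner and edge arrangements; this quick Python sketch (illustrative, not from the article) reproduces it:

```python
from math import factorial

# 8 corner cubies can be permuted 8! ways; each has 3 orientations, but the
# total corner twist is constrained, leaving 3^7 free orientation choices.
corners = factorial(8) * 3**7
# 12 edge cubies can be permuted 12! ways; each has 2 orientations, but the
# total edge flip is constrained, leaving 2^11 free orientation choices.
edges = factorial(12) * 2**11
# Corner and edge permutations must share the same parity, halving the total.
positions = corners * edges // 2
print(positions)  # 43252003274489856000
```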
A full breakdown of the history of God's Number, as well as a full breakdown of the math, is available here, but in summary the team broke the possible positions down into sets, then drastically
cut the number of positions they had to solve for through symmetry (if you scramble a Cube randomly and then turn it upside down, you haven't changed the solution).
They then borrowed some computing time from Google (one of the principals is an engineer there) and burned about 35 core-years to solve all the possible positions. The number 20 has been the lower
limit for God's Number for more than a decade, but the team was finally able to whittle away at the upper limit (which was trimmed back to 22 in 2008).
So far the algorithm has identified some 12 million distance-20 positions, though there are definitely many more than that. Click on this link if you want to see what some of the hardest positions
are, and how exactly they tackled this problem.
BODMAS Rule | Order of Operation | Steps to Simplify Order of Operation | Examples
BODMAS Rule
An easy and simple way to remember the BODMAS rule!
B → Brackets first (parentheses)
O → Of (orders i.e. Powers and Square Roots, Cube Roots, etc.)
DM → Division and Multiplication (start from left to right)
AS → Addition and Subtraction (start from left to right)
(i) Work Division and Multiplication from left to right, since they have equal precedence.
(ii) Work Addition and Subtraction from left to right, since they have equal precedence.
Steps to simplify the order of operations using the BODMAS rule:
The first step is to solve the part of the expression inside the 'Brackets'.
For Example; (6 + 4) × 5
First solve inside ‘brackets’ 6 + 4 = 10, then 10 × 5 = 50.
Next solve the mathematical 'Of'.
For Example; 3 of 4 + 9
First solve ‘of’ 3 × 4 = 12, then 12 + 9 = 21.
The next part of the equation is to calculate 'Division' and 'Multiplication'.
When division and multiplication follow one another, they are solved in order from left to right.
For Example; 15 ÷ 3 × 1 ÷ 5
‘Multiplication’ and ‘Division’ perform equally, so calculate from left to right side. First solve 15 ÷ 3 = 5, then 5 × 1 = 5, then 5 ÷ 5 = 1.
The last part of the equation is to calculate 'Addition' and 'Subtraction'. When addition and subtraction follow one another, they are solved in order
from left to right.
For Example; 7 + 19 - 11 + 13
‘Addition’ and ‘Subtraction’ perform equally, so calculate from left to right side. First solve 7 + 19 = 26, then 26 - 11 = 15 and then 15 + 13 = 28.
These are the simple rules that need to be followed when simplifying or calculating using the BODMAS rule.
In brief, after performing "B" and "O", work from left to right solving any "D" or "M" as we find them; then work from left to right solving any "A" or "S" as we find them.
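The worked examples above can be checked directly in Python, whose expression evaluator applies the same precedence and left-to-right rules (treating 'of' as multiplication):

```python
# Each line checks one of the worked examples above.
print((6 + 4) * 5)       # Brackets first: 50
print(3 * 4 + 9)         # 'Of' (multiplication) before addition: 21
print(15 / 3 * 1 / 5)    # Division/multiplication left to right: 1.0
print(7 + 19 - 11 + 13)  # Addition/subtraction left to right: 28
```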
Win-Vector Blog
November 6th, 2012 Comments off
von Neumann and Morgenstern’s “Theory of Games and Economic Behavior” is the famous basis for game theory. One of the central accomplishments is the rigorous proof that comparative “preference
methods” over fairly complicated “event spaces” are no more expressive than numeric (real number valued) utilities. That is: for a very wide class of event spaces and comparison functions “>” there
is a utility function u() such that:
a > b (“>” representing the arbitrary comparison or preference for the event space) if and only if u(a) > u(b) (this time “>” representing the standard order on the reals).
However, an active reading of sections 1 through 3 and even the 2nd edition's axiomatic appendix shows that the concept of "events" (what preferences and utilities are defined over) is deliberately
left undefined. There is math and objects and spaces, but not all of them are explicitly defined in terms of known structures (are they points in R^n, sets, multi-sets, sums over sets or what?). The
word "event" is used early in the book and not in the index. Axiomatic treatments often rely on intentionally leaving ground-concepts undefined, but we are going to work a concrete example through
von Neumann and Morgenstern to try to illustrate a bit more of the required intuition and deep nature of their formal notions of events and utility. I also will illustrate how, at least in
discussion, von Neumann and Morgenstern may have held on to a naive "single outcome" intuition of events and a naive "direct dollars" intuition of utility despite erecting a theory carefully designed
to support much more structure. This is possible because they never have to calculate in the general event space: they prove access to the preference allows them to construct the utility function u()
and then work over the real numbers. Sections 1 through 3 are designed to eliminate the need for a theory of preference or utility and allow von Neumann and Morgenstern to work with real numbers
(while achieving full generality). They never need to make the translations explicit, because soon after showing the translations are possible they assume they have already been applied. Read more…
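As a toy illustration of the representation idea (my own sketch, not from the post): for a finite event space with a total preference order, a utility function u() can be read off from the ranking, and every comparison then happens over the reals.

```python
# Hypothetical events, ranked from least to most preferred.
ranked = ["snow", "rain", "sun"]

# The utility function u(): assign each event its rank as a real number.
u = {event: rank for rank, event in enumerate(ranked)}

def prefers(a, b):
    """a > b in the preference order if and only if u(a) > u(b) on the reals."""
    return u[a] > u[b]

print(prefers("sun", "snow"))   # True
print(prefers("snow", "rain"))  # False
```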
A Personal Perspective on Machine Learning
October 31st, 2010 7 comments
Having a bit of history as both a user of machine learning and a researcher in the field I feel I have developed a useful perspective on the various trends, flavors and nuances in machine learning
and artificial intelligence. I thought I would take a moment to outline a bit of it here and demonstrate how what we call artificial intelligence is becoming more statistical in nature. Read more…
Categories: Computer Science, Expository Writing, History, Opinion, Statistics Clustering, Data Mining, Machine Learning
Deming, Wald and Boyd: cutting through the fog of analytics
April 20th, 2010 Comments off
This article is a quick appreciation of some of the statistical, analytic and philosophic techniques of Deming, Wald and Boyd. Many of these techniques have become pillars of modern industry through
the sciences of statistics and operations research.
Read more…
Categories: History, Statistics A-10, Boyd, Deming, Novum Organum, OODA, PDCA, Wald
What is “Genetic Art?”
June 1st, 2009 Comments off
What is “genetic art?” My answer to this is http://www.geneticart.org (redirects to http://www.mzlabs.com), but this requires some explanation. Read more…
Categories: Computer Science, History, Mathematics art, genetic art
Hello World: An Instance Of Rhetoric in Computer Science
February 19th, 2008 Comments off
Hello World: An Instance Of Rhetoric in Computer Science
John Mount: jmount@mzlabs.com
February 19, 2008
Computer scientists have usually dodged questions of intent, purpose or meaning. While there are theories that assign deep mathematical meaning to computer programs[13], we computer scientists usually
avoid discussion of meaning and talk more about utility and benefit. Discussion of the rhetorical meaning of programs is even less common. However, there is a famous computer program that makes a
clean and important rhetorical point. This program is called “hello world” and its entire action is to write out the phrase “hello world.” The action is simple but the “hello world” program actually
has a fairly significant purpose and meaning.
I would like to briefly trace the known history of “hello world” and show how the rhetorical message it presents differs from the rhetoric embodied in earlier programs. In this sense we can trace a
change in the message computer scientists felt they needed to communicate (most likely due to changes in the outside world).
Categories: Computer Science, History Hello World, Programming
1) People who download music eat pizza. 2) You download music. 3) You eat pizza. Is this invalid, the law of syllogism, or the law of detachment?
Replies:

- I say invalid, it's not true. That is opinionated.
- Thats what i thought
- This is invalid. It is the fallacy of the undistributed middle. "I am a person, but I am not all people"
- I think it's a perfect example of the law of detachment. says if given a->b (in this case, download music -> eat pizza) someone downloads music means they are "a" so they are also by definition
- I think its invalid. Its optional.
- But you're not given any other information, right? it's an if/then statement... unless I'm mistaken?!
- If you were to do a formal logic proof on this, it would sound more like "There exist those who when they eat pizza, they download music". To conclude the proof you would have to say "There exist at least some who when they eat pizza, download music." However, what their conclusion is saying is that "For ALL people, when they eat pizza, they download music", this is clearly false.
- See: Existential instantiation/Universal instantiation for predicate logic.
- where are you getting simultaneity from though? I am no expert in logic, but I don't think the last claim is the ultimate one this is making
- Ooh, this seems like a civilized disagreement. *grabs popcorn
- what's your thought on the matter, @julianassange?
- Yes, It is simply invalid. No more further say, because once said opinionated, you can conclude that is not true as a fact, simply true as a false thought.
- This is entertaining..
- I mean, I get it. I know that the set {music downloaders} is a subset of {humans} and that there is a separate subset {pizza eaters} that intersects with music downloaders.
- Right well ultimately there's two problems here because the events are disjoint. Eating pizza and downloading music. There's no implication. It's not implying that because I eat pizza, I download music. Second. You vs. People. You = "There exists" People = "For all" You cannot do this. You cannot say because there exists one where this is the case then it must be true for ALL.
- Invalid is the answer for the win!
Abstract and Applied Analysis
Volume 2013 (2013), Article ID 217641, 7 pages
Research Article
Gronwall-Bellman Type Inequalities and Their Applications to Fractional Differential Equations
^1School of Mathematical Sciences, Qufu Normal University, Qufu, Shandong 273165, China
^2Department of Mathematics, Jining University, Qufu, Shandong 273155, China
Received 29 May 2013; Accepted 22 July 2013
Academic Editor: Irena Lasiecka
Copyright © 2013 Jing Shao and Fanwei Meng. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction
in any medium, provided the original work is properly cited.
Some new weakly singular integral inequalities of Gronwall-Bellman type are established, which can be used in the qualitative analysis of the solutions to certain fractional differential equations.
1. Introduction
Gronwall-Bellman type integral inequalities play increasingly important roles in the study of quantitative properties of solutions of differential and integral equations, as well as in the modeling
of engineering and science problems. The integrals concerning this type of inequalities have regular or continuous kernels, but some problems of theory and practicality require us to solve integral
inequalities with singular kernels; see [1–4] and the references cited therein. For example, Ye and Gao [5] considered the integral inequalities of Henry-Gronwall type and their applications to
fractional differential equations with delay; Ma and Pečarić [4] established some weakly singular integral inequalities of Gronwall-Bellman type and used them in the analysis of various problems in
the theory of certain classes of differential equations, integral equations, and evolution equations.
In this paper, we study a certain class of nonlinear inequalities of Gronwall-Bellman type, which generalizes some known results and can be used as handy and effective tools in the study of
differential equations and integral equations. Furthermore, applications of our results to fractional differential equations are also involved.
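As a numerical illustration of the classical (non-singular) Gronwall-Bellman inequality that these results generalize: if $u(t) \le a + \int_0^t b\,u(s)\,ds$ with constants $a, b \ge 0$, then $u(t) \le a e^{bt}$. The constants, grid size, and slack term below are illustrative choices, not taken from the paper.

```python
import math

a, b, slack = 1.0, 0.5, 0.1   # illustrative constants
n, T = 2000, 4.0              # grid size and time horizon
dt = T / n

# Build a function satisfying the hypothesis with some slack:
# u(t_i) = a + b * (left Riemann sum of u up to t_i) - slack * t_i.
u = [a]
integral = 0.0
for i in range(1, n + 1):
    integral += b * u[-1] * dt
    u.append(a + integral - slack * i * dt)

# The Gronwall bound a * exp(b * t) dominates u at every grid point.
for i, ui in enumerate(u):
    assert ui <= a * math.exp(b * i * dt) + 1e-9
```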
2. Preliminary Knowledge
In this section, we give some inequalities, which will be used in the proof of the main results.
Lemma 1 (Jensen's inequality). Let $n \in \mathbb{N}$, and let $a_1, a_2, \ldots, a_n$ be nonnegative real numbers. Then, for $r > 1$, $\left( \sum_{i=1}^{n} a_i \right)^{r} \leq n^{r-1} \sum_{i=1}^{n} a_i^{r}$.
Lemma 2. Let , , . If , and where . Then, for , one has where .
Proof. Given , for , Define a function , ; then , , is positive and nondecreasing for , and Let , and ; we obtain which implies that And so By the arbitrary of , we obtain the inequality (3). The
proof is complete.
Lemma 3. Let , , and . If and where and is a real constant, then where is defined as that in Lemma 2.
Proof. For , we have By Gronwall inequality, we have the inequality (11). We prove that (10) holds for now. Given that and for , we get Define a function , ; then , , is positive and nondecreasing
for , and As that in the proof of Lemma 2, we obtain And then By the arbitrary of , we obtain the inequality (10). The proof is complete.
3. Main Results
Now, we are in a position to deal with the integral inequality with weak singular kernels.
Theorem 4. Let . If and where and are constants, then the following assertions hold.(i) Suppose that . Then where , , and .(ii) Suppose that , , and . Then where , and .
Proof. (i) Using the Cauchy-Schwarz inequality, we obtain Using Lemma 1, we obtain Let ; we get Using Lemma 2 and noticing that is nondecreasing, we get by the relationship of and , the first
inequality (18) holds.
(ii) By the hypothesis, we get . Using Hölder inequality, we obtain Using Lemma 1, we obtain Let , we get Using Lemma 2 and noticing that is nondecreasing, we get and by the relation of and , (19)
holds. The proof is complete.
Theorem 5. Let , and . If with where and are constants, then the following assertions hold.(i) Suppose that . Then where , , and are defined as those in Theorem 4.(ii) Suppose that , , and . Then,
where , , and are defined as those in Theorem 4.
Proof. (i) Using the Cauchy-Schwarz inequality by (28), we obtain Using Lemma 1, we obtain Let , we get Using Lemma 3, we get the first inequality of (29) and the second inequality of (29) is easily
(ii) By the hypothesis, we get . Using Hölder inequality, we obtain Using Lemma 1, we obtain Let ; we get Using Lemma 2, we get the first inequality of (30) and the second inequality of (30) is
easily obtained. The proof is complete.
For the case of , this kind of inequalities has been considered by Pachpatte [6] and the case of retarded integral inequalities also has been obtained by Ye and Gao [5, Theorem 2.5]. So, we list only
a theorem using different condition and method from Pachpatte [6, Theorem 1.2.4].
Theorem 6. Let , and . If and where , then the following assertions hold.(i) Suppose that . Then where and are defined as those in Theorem 4.(ii) Suppose that , and . Then where and are defined as
those in Theorem 4.
Remark 7. In [6, Theorem 1.2.4], is continuously differentiable, but in Theorem 6, is only continuous in the interval , so the methods of [6, Theorem 1.2.4] are invalid for Theorem 6. In [7, Theorem
1], Ye et al. also considered the similar integral inequalities using an iterative method, but we use different methods differing from the previously mentioned two papers.
4. Applications to FDEs
In this section, we present applications of Theorem 4 and Theorem 5 to study certain properties of solutions of fractional differential equations.
Consider the following fractional differential equations: for , , where represents the Caputo fractional derivative of order , , and . The corresponding Volterra fractional integral equation, see [8,
Lemma 6.2], becomes
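In generic symbols (which may differ from the paper's notation), the standard form of this equivalence for a Caputo initial value problem ${}^{C}D^{\alpha} y(t) = f(t, y(t))$, $y(0) = y_0$, of order $0 < \alpha < 1$ is:

```latex
y(t) = y_0 + \frac{1}{\Gamma(\alpha)} \int_0^{t} (t-s)^{\alpha-1} f\bigl(s, y(s)\bigr) \, ds .
```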
Theorem 8. Suppose that , where , is real number. If is any solution of the initial value problem (40), then the following estimations hold.(i) Suppose that . Then where .(ii) Suppose that , , and .
Then where . Notice that , and are the same as those in Theorem 4, .
Proof. By (41), it is easy to derive that Using Theorem 4, we get the desired conclusion. This proves the theorem.
Considering the following fractional differential equations: for , with the given initial condition , , is a given continuously differentiable function on up to order . In this case, we denote , ,
and , , , and are defined as those in (40).
In [8, Lemma 6.2], the initial value problem (45) is equivalent to the Volterra fractional integral equation:
The next result deals with the upper bounds of solution of (45).
Theorem 9. Suppose that , where , and is real number. If is any solution of the initial value problem (46), then the following estimations hold.(i)Suppose that . Then (ii)Suppose that , and . Then
Notice that , and are the same as those in Theorem 8, .
The proof of this theorem is omitted because it is similar to that of Theorem 8.
The authors thank the referee for his/her useful comments on this paper. This research was partially supported by the NSF of China (Grants 11171178 and 11271225), Science and Technology Project of
High Schools of Shandong Province (Grant J12LI52), and program for Scientific Research Innovation Team in Colleges and Universities of Shandong Province.
Real Analysis II
CLASS CODE: MATH 462 CREDITS: 3
DIVISION: PHYSICAL SCIENCE & ENGINEERING
DEPARTMENT: MATHEMATICS
GENERAL EDUCATION: This course does not fulfill a General Education requirement.
DESCRIPTION: Analysis in the context of metric spaces. Applications involving such tools as approximation, Fourier analysis, and multivariate optimization.
TAUGHT: Winter odd years
CONTENT AND TOPICS: Basic metric properties of Euclidean space. Multivariate analysis and vector analysis. Applications to science, engineering and technology.
GOALS AND OBJECTIVES: 1. Develop real-world applications of real analysis from a one-dimensional point of view.
2. Introduce real-world applications of real analysis requiring a higher-dimensional point of view.
3. Extend the standard concepts of real analysis to higher dimensional settings.
4. Formulate conjectures and prove theorems about the standard objects and concepts of real analysis in higher dimensions, sufficient for the applications presented.
5. Determine relationships among the standard concepts of real analysis in a higher-dimensional setting.
6. Develop real world applications of real analysis in a higher-dimensional setting.
REQUIREMENTS: Attendance is required. Other requirements include: written examinations, class presentations, quizzes, homework assignments, and other forms of assessment.
PREREQUISITES: Math 341 and Math 461
EFFECTIVE DATE: August 2003
How to Teach the Multiplication Tables to Your Child
Many children struggle with learning their times tables--as their parent, you may feel like it's your duty to help. After all, they'll need quick multiplication skills to help them throughout high
school, college, and life. You'll need time, strategy, and patience to help your child work with and enjoy the quest of conquering these figures, but it's guaranteed to be worth it.
Method 1 of 4: Teaching the Skills
1.
Commit to a time. Sit down with your child when both of you are ready to make a dent into the subject. If you are preoccupied with work or if your child is too tired or hungry, learning won't
occur as quickly as you want it to. Sit down for 30 minutes and don't allow any distractions for either of you.
□ Energy and enthusiasm are very important for both of you. Turn off your cell phone(s), TV, and sit down at the dinner table with some munchies and attack those numbers.
2.
Start with the fact families of 0, 1, 2, and 3. When memorizing, it's important to rehearse a small portion of facts before attempting to learn the entire chart. Remember: Your child isn't
counting; they are simply memorizing. Presumably, they already know the basic concept of multiplying.
□ If your child is unfamiliar with multiplying, put it in terms of adding. That is, 4x3 is 4+4+4.
□ Ask your child to bring you their math book and any resources they've been given. You'll be able to see exactly what they are studying and the teaching method used in their school.
□ Have a chart or number line handy showing the numbers 0 through 100. A chart will give you the answers by correlating the row with the column. A chart is better for those just starting off as
the answers are quicker to find.
☆ A number line is a bit more work. You can have your child circle the multiples of a certain number in pencil or code each number and its multiples with different colors.
3.
Explain how the commutative property makes everything easier. Show your child that each answer repeats, so, technically, they only have to learn half of the chart (score!). 3x7 is the same as
7x3. When they've learned the fact families of 0, 1, 2, and 3, they already know 4 numbers each of 4, 5, 6, 7, 8, 9, and 10.
□ After your child has mastered 0-3, move on to 4-7, and then 8-10. If you want to go above and beyond, work with 11 and 12, too. Some teachers will include a few harder problems for a bonus or
to gauge where each child is at.
4.
Discuss patterns in the whole chart. It doesn't all have to be rote memorization with no clues or hints. The chart will easily point out things to look for.
□ All the multiples of ten end in zero.
□ All the multiples of 5 end in either 5 or 0 and are half as large as the multiples of ten. (10x5=50; 5x5=25, or half of 50)
□ Any number x 0 is still 0. No matter what.
5.
Know the tricks. Luckily, math is full of shortcuts. Teach your child these tricks and they'll be impressed and, hopefully, quite thankful.
□ To memorize the 9's tables, use your fingers. Spread them all in front of you, palms down. For 9x1, put your left pinky down. What do you have showing? 9. For 9x2, put your second finger down
(the left ring finger). What do you have showing? 1 and 8. 18. Put your third finger down--2 and 7. 27. This works all the way up to 9x9 (8 and 1. 81).
□ If your child can double a number, the x4's will be easy. Just double the number and double it again! Take 6x4. 6 doubled is 12. 12 doubled is 24. 6x4=24. Use this to make the answer become
automatic. Again, this is about memorizing.
□ To multiply anything by 11, just duplicate the number. 3x11=33. Two 3's. 4x11=44. Two 4's. The answer is in the question, just twice.
☆ If your child is a math genius, teach them this trick to multiply 11's by double digit numbers. Take the double digit number and split it up. 11 by 17 is 1_7. Add the double digit number
together and put it in the middle: 187.
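Each of these tricks is ordinary arithmetic in disguise, so they can all be checked with a short script. One caveat worth knowing before teaching the 11's split: when the two digits sum to 10 or more, the extra ten carries into the first digit (11 x 75 = 825, not 7125). A Python sketch, with the carry handled:

```python
def nines_by_fingers(n):
    """9 x n via the finger trick: fingers left of the folded finger are tens,
    fingers right of it are ones."""
    return (n - 1) * 10 + (10 - n)

def times4_by_doubling(n):
    """4 x n the way the trick teaches: double, then double again."""
    doubled = n + n
    return doubled + doubled

def times11(two_digit):
    """11 x a two-digit number: split the digits and put their sum in the middle,
    carrying into the first digit when the sum is 10 or more."""
    tens, ones = divmod(two_digit, 10)
    carry, middle = divmod(tens + ones, 10)
    return (tens + carry) * 100 + middle * 10 + ones

assert all(nines_by_fingers(n) == 9 * n for n in range(1, 10))
assert times4_by_doubling(6) == 24       # 6 -> 12 -> 24
assert times11(17) == 187                # 1 _ 7 with 1+7=8 in the middle
assert all(times11(n) == 11 * n for n in range(10, 100))
```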
Method 2 of 4: Memorizing the Answers
1.
Do speed drills. Now that your child is familiar with the entire chart, drill them. Drill them over breakfast, during commercials, and for a few minutes before bed. As you progress, get faster
and faster and faster.
□ At the beginning, start in order. As you get more and more convinced that they have it down, start mixing it up. They'll slow down initially but then should spark right back up to where they were.
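If a computer is handy, a tiny quiz script can run the same drill. This is a sketch, not a finished app; the `ask` callback stands in for however you collect the child's answer (keyboard, flash cards, whatever), and the mixed-order option follows the advice above:

```python
import random

def drill(ask, rounds=10, max_n=10, mixed=True):
    """Run a multiplication drill. `ask(a, b)` returns the child's answer;
    returns how many of `rounds` questions were answered correctly."""
    pairs = [(a, b) for a in range(max_n + 1) for b in range(max_n + 1)]
    if mixed:
        random.shuffle(pairs)  # mix it up once the facts are down
    return sum(ask(a, b) == a * b for a, b in pairs[:rounds])

# Hooked up to the keyboard it would look like:
#   drill(lambda a, b: int(input(f"{a} x {b} = ")))
perfect = drill(lambda a, b: a * b, rounds=5)
print(perfect)  # 5: a perfect run
```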
2.
Make it fun. By this point, you both may be wondering what those squiggles in each number really are. Spice it up for the both of you with games and contests.
□ Have your child make a set of flash cards. Write the problem, like 4 x 9, on the front and the answer, 36, on the back. The act of writing out the multiples will provide another repetition/
reinforcement. Use a timer to see how many cards they can go through in a minute. Can they beat that score tomorrow?
☆ You could also do this with a blank chart. That's an easy way to monitor which ones they're struggling with.
□ Grab a deck of cards. This game is similar to War, but with multiplication. You each get half the deck to place face down in front of you--don't look at the cards! Each player flips their
first card simultaneously--the first person to say the answer based on the two numbers gets both cards (the object of the game is to win them all). If the two of you flip a 7 and a 5, the
answer to shout out is 35. For Jacks, Queens, and Kings, you can use 11, 12, and 13, use them as 0's, or take them out entirely.
□ Say a number, like 30. Can they list all of the possible combinations that multiply to it? 5 x 6? 3 x 10?
□ Say a number, then ask for the next multiple. For example, start at 30 and ask for the next multiple of 6. Or start at 18 and ask for the next two multiples of 9. You could even start at 22
and ask for the next multiple of 4, even though 22 is not a multiple of 4. Be tricky once they have it.
□ Try multiplication bingo. Your child fills in a six-by-six grid with whatever numbers they want. You read off a problem like "5 x 7." If they have 35 on their bingo card, then they mark it
off. Continue until someone has a "bingo." What's the prize they could win?
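Two of the number games above reduce to one-liners that make checking answers on the fly easy: the factor-pair game and the next-multiple game. A Python sketch:

```python
def factor_pairs(n, max_factor=10):
    """All combinations a x b = n with a <= b, using factors up to max_factor."""
    return [(a, n // a) for a in range(1, max_factor + 1)
            if n % a == 0 and a <= n // a <= max_factor]

def next_multiple(n, k):
    """Smallest multiple of k strictly greater than n."""
    return (n // k + 1) * k

print(factor_pairs(30))       # [(3, 10), (5, 6)]
print(next_multiple(30, 6))   # 36
print(next_multiple(22, 4))   # 24 -- works even when n isn't a multiple of k
```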
Method 3 of 4: Rewarding Your Child
1.
Use incentives. You don't have to use money or material goods--that may spoil their love of learning. Of course, snacks, drinks, and offering favorite activities are always good ideas.
□ Save the big rewards for school tests. Once they can perform under pressure, you know you've been successful.
2.
Praise your child. Don't forget to pause and have fun between serious repetitions of the facts. If you're happy with their success, they'll be more likely to want to be successful. Show them how
awesome they're doing with verbal recognition.
□ If they're going slower than you think they should, relax. Negativity may make them shut down. A bad mood can kill any learning ability. Encourage them to press on.
3.
Take breaks. No child can learn for hours on end. When you sense that they're wearing down, take a break. You probably need one, too.
□ After a break, quickly review what they've already learned before moving on to new facts.
Method 4 of 4: Checking Their Progress
1.
Utilize online materials. Once the paper and pens are put back and the initial hurdles are over, go online for quizzes and games to see how much your child has retained.
□ Of course, it's possible to write out quizzes yourself and you're more than welcome to do this--but simply being on a computer may make your child feel like it's less of a test and more of a
fun challenge.
2.
Ask about their scores. You've done all this work at home--now how has it gone at school? If your child isn't volunteering this information, just ask! They should be proud of good grades; if
their grades aren't so stellar, you can review with them more for better results next time.
□ It's always an option to call the teacher and inquire about the curriculum. An involved parent is always appreciated.
• Try to teach to the school's method. If you learned a different way, start with the school's method first. If it's working, stick with it. If it's not, use yours.
• Be kind and patient. If need be, just work with one combination for a few days until the child completely understands.
• Advanced for later: the squares of the multiples of 10 follow from the basic facts. Since 1 squared is 1 and 10 squared is 100, it is easy enough to see that 20 squared is 400, 30 squared is 900,
40 squared is 1600, etc.
• Point out that adding can be done two ways: 2 + 1 = 3 and 1 + 2 = 3. The same goes for multiplication.
• Pushing larger numbers too quickly causes confusion and frustration. Work up to them gradually so learning multiplication feels easier, but press forward toward firm and steady improvement, and
do not be afraid to advance, even if only a little at a time.
• Never, ever use words like "stupid" or "lousy," or any other such labels, whether about your child, yourself, or the material.
• Do not tire your child out by working on too many rows or patterns at a time -- remember to laugh and take short breaks in between lessons.
• Understand that the child should not truly be counting. Rapid responses come only from memorization. Counting lets the knowledge form initially, but it should become an unnecessary step once the
facts are ingrained.
Allocating Task Interaction Graphs to Processors in Heterogeneous Networks
September 1997 (vol. 8 no. 9)
pp. 908-925
Abstract—The problem of allocating task interaction graphs (TIGs) to heterogeneous computing systems to minimize job completion time is investigated. The only restriction is that the interprocessor
communication cost is the same for any pair of processors. This is suitable for local area network based systems, such as Ethernet, as well as fully interconnected multiprocessor systems. An optimal
polynomial solution exists if sufficient homogeneous processors and communication capacity are available. This solution is generalized to obtain two faster heuristics, one for the case of homogeneous
processors and the other for heterogeneous processors. The heuristics were tested extensively with 60,900 systematically generated random TIGs and shown to be stable independent of the size of the
TIG. A performance model is also proposed to predict the performance of the heuristic algorithms, and it is successful in explaining the experimental results qualitatively.
Index Terms:
Task allocation, task interaction graph, heterogeneous network, shared communication medium, parallel program, minimum elapsed time.
Chi-Chung Hui, Samuel T. Chanson, "Allocating Task Interaction Graphs to Processors in Heterogeneous Networks," IEEE Transactions on Parallel and Distributed Systems, vol. 8, no. 9, pp. 908-925,
Sept. 1997, doi:10.1109/71.615437
Free Logic
First published Mon Apr 5, 2010
Classical logic requires each singular term to denote an object in the domain of quantification—an “existing” object. Free logic does not. Free logic is therefore useful for analyzing discourse
containing singular terms that either are empty (have no referent or refer to objects that do not exist) or might be.
Section 1 lays out the basics of free logic, explaining how it differs from classical predicate logic and how it is related to inclusive logic, which permits empty domains or “worlds.” Section 2
shows how free logic may be represented by each of three formal methods: axiom systems, natural deduction rules and tree rules. Varying conventions for calculating the truth values of atomic formulas
containing empty singular terms yield three distinct species of free logic: negative, positive and neutral. These are surveyed in Section 3, along with supervaluations, which were developed to
augment neutral logics. Section 4 is critical, examining three anomalies that infect most free logics. Section 5 samples applications to theories of description, logics of partial or non-strict
functions, logics with Kripke semantics, logics of fiction and logics that are in a certain sense Meinongian. Section 6 takes a glance at free logic's history.
Free logic is formal logic whose quantifiers are interpreted in the usual way—that is, objectually over a specified domain D—but whose singular terms may denote objects outside of D, or fail to
denote at all. Singular terms include proper names (individual constants), definite descriptions, and such functional expressions as ‘2 + 2’. Since classical (i.e., Fregean) predicate logic requires
that singular terms denote members of D, free logic is a “nonclassical” logic. Where D is, as usual, taken to be the class of existing things, free logic may be characterized as logic the referents
of whose singular terms need not exist.
Karel Lambert (1960) coined the term ‘free logic’ as an abbreviation for ‘logic free of existence assumptions with respect to its terms, singular and general’. General terms are predicates. Lambert
was suggesting that just as classical predicate logic generalized Aristotelian logic by, inter alia, admitting predicates that are satisfied by no existing thing (‘is a Martian’, ‘is
non-self-identical’, ‘travels faster than light’), so free logic generalizes classical predicate logic by admitting singular terms that denote no existing thing (‘Aphrodite’, ‘the greatest integer’,
‘the present king of France’).
Because classical logic's singular terms must denote existing things (when, as usual, ‘∃’ is read as “there exists”), classical logic is unreliable in application to statements containing singular
terms whose referents either do not exist or are not known to. Consider, for example, the true statement:
(S) We detect no motion of the earth relative to the ether,
using ‘the ether’ as a singular term for the light-bearing medium posited by nineteenth century physicists. The reason why (S) is true is that, as we now know, the ether does not exist. According to
classical logic, however, (S) is false, because it implies the existence of the ether. Free logic allows such statements to be true despite the non-referring singular term. Indeed, it allows even
statements of the form ~∃x x=t (e.g., "the ether does not exist") to be true, though in classical logic, which presumes that t refers to an object in the quantificational domain, they are
self-contradictory.
Free logic accommodates empty singular terms (those that denote no member of the quantificational domain D) by rejecting inferences whose validity depends on the classical presumption that they must
denote members of D. Consider, for example, the rule of universal instantiation (specification): from the premise “Every x (in D) satisfies A” we may infer “t satisfies A.” This rule, whose formal
expression is:
∀xA ⊢ A(t/x),
is invalid in free logic; for even if every object in D satisfies A, if t does not denote a member of D, A(t/x) may be false. (Here and elsewhere A(t/x) is the result of replacing all occurrences of
x in A by individual constant t; if there are no such occurrences, then A(t/x) is just A.) Existential generalization (the principle that from “t satisfies A” we may infer “there exists (in D) a
thing x that satisfies A”):
A(t/x) ⊢ ∃xA
is likewise invalid; for if t does not denote an object in D then the truth of A(t/x) does not guarantee that there exists in D an object that satisfies A. Though free logic rejects such classical
inferences, it accepts no classically invalid inferences; hence it is strictly weaker than classical logic for a language with the same vocabulary.
To distinguish terms that denote members of D from those that do not, free logic often employs the one-place “existence” predicate, ‘E!’ (sometimes written simply as ‘E’). For any singular term t, E!
t is true if t denotes a member of D, false otherwise. ‘E!’ may be either taken as primitive or (in bivalent free logic with identity) defined as follows:
E!t =[df] ∃x(x=t).
Using ‘E!’ we can express classical logic's blanket presumption that singular terms denote members of D as an explicit premise, E!t, for selected terms t. Thus we can formulate the following weaker
analogs of universal instantiation:
∀xA, E!t ⊢ A(t/x)
and existential generalization:
A(t/x), E!t ⊢ ∃xA,
which are valid in free logic.
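The contrast between the classical rules and their restricted free-logic analogs can be illustrated with a toy evaluator. This is only an informal sketch: the domain and the "moves" predicate are invented stand-ins, though "the ether" echoes the article's own example; `None` models an empty term.

```python
D = {"mercury", "venus", "earth"}        # the quantificational domain
denotes = {"the earth": "earth"}         # "the ether" denotes nothing
moves = D                                # suppose everything in D satisfies "moves"

def V(term):
    return denotes.get(term)             # None models an empty singular term

# Premise of universal instantiation: every x in D satisfies the predicate.
assert all(x in moves for x in D)

# The unrestricted classical conclusion A(t/x) fails for the empty term:
assert V("the ether") not in moves

# The free-logic analog adds the premise E!t, which is false here, so the
# restricted rule draws no conclusion about "the ether"; it still applies
# to terms that do denote members of D.
E_bang = lambda t: V(t) in D
assert not E_bang("the ether")
assert E_bang("the earth") and V("the earth") in moves
```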
Classical predicate logic presumes not only that all singular terms refer to members of the quantificational domain D, but also that D is nonempty. Free logic rejects the first of these presumptions.
Inclusive logic (sometimes also called empty or universally free logic) rejects them both. Thus while inclusive logic for a language containing singular terms must be free, free logics need not be
inclusive.
Many existential assertions—e.g., ∃x(x=x), ∃x(Px → Px), ∃x(Px → ∀yPy)—are true in all nonempty domains and hence are valid in both classical logic and non-inclusive free logic. But since all
existentially quantified formulas are false in the empty domain, none are valid in inclusive logic. Correlatively, since all universally quantified formulas are true in the empty domain, none are
self-contradictory in inclusive logic. Even vacuously universally quantified formulas (formulas of the form ∀xA, where x is not free in A) are true in the empty domain. Hence the schema:
∀xA → A, where x is not free in A,
which is valid in both classical logic and non-inclusive free logic, is invalid in inclusive logic. Inclusive logic also invalidates some of the laws of confinement—e.g.,
∀x(P & A) ↔ (P & ∀xA), where x is not free in P,
that are used for prenexing formulas (giving quantifiers the widest possible scope) or purifying them (giving quantifiers the narrowest possible scope). And in inclusive logic the formula:
∀x(A ↔ x=t),
widely used in the theory of definite descriptions, is not equivalent, as it otherwise is, to:
∀x(A → x=t) & A(t/x),
since with D empty and A(t/x) false, the first but not the second is true. Where there is need for such regularities, a non-inclusive free logic may be preferable to an inclusive one. Yet because
inclusivity frees logic from one more existential presumption, many free logicians favor it.
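The empty-domain behavior just described has a familiar computational analogue: a universal claim over an empty collection is vacuously true, an existential claim false. A Python illustration (the commented "formulas" are informal stand-ins for the quantified sentences, not part of the formal system):

```python
D_empty = []   # the empty domain admitted by inclusive logic

# Every universally quantified formula is true in the empty domain,
# even for unsatisfiable matrices...
assert all(x == x for x in D_empty)      # forall x (x = x)
assert all(x != x for x in D_empty)      # forall x ~(x = x) -- true as well!

# ...while every existentially quantified formula is false.
assert not any(x == x for x in D_empty)  # ~ exists x (x = x)

# In a nonempty domain the classical pattern returns:
D = ["earth"]
assert any(x == x for x in D)
```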
Logics may be represented in various ways. Axiom systems, natural deduction systems and trees (or, equivalently, tableaux) are among the most common. This section presents all three for the bivalent
inclusive form of free logic known as Positive Free Logic (PFL) and mentions some variants. (For the meaning of the term “positive” in this context see Section 3.2). PFL is formulated in a
first-order language L without sentence letters or function symbols, whose primitive logical operators are negation (not) ‘~’, the conditional (if-then) ‘→’, the universal quantifier (for all) ‘∀’,
identity ‘=’ and ‘E!’, the others being defined as usual. We assume for the sake of definiteness that the formulas of L are closed (contain no unquantified variables) and that they may be vacuously
quantified (have the form ∀xA or ∃xA, where x does not occur free in A). An occurrence of a variable is quantified if it lies within the scope of an operator such as ‘∀’ or ‘∃’ that binds that
variable; otherwise it is free.
PFL may be axiomatized, with modus ponens as the sole inference rule, by adding the following schemas to the tautologies of classical propositional logic:
(A1) A → ∀xA
(A2) ∀x(A → B) → (∀xA → ∀xB)
(A3) ∀xA, if A(t/x) is an axiom
(A4) ∀xA → (E!t → A(t/x))
(A5) ∀xE!x.
A note on conventions involving variables: once again, A(t/x) is the result of replacing all occurrences of x in A by individual constant t. If there are no such occurrences, then A(t/x) is just A.
In (A1) the variable x is not free in A (since otherwise A would be an open formula and formulas of L are closed). However, x may be free in A or B in (A2) and in A in (A3) and (A4).
(A4) and (A5) are special axioms for free logic. The others are classical. (A4) modifies the classical principle:
(A4c) ∀xA → A(t/x)
by using ‘E!’ to restrict specification. (A4) stipulates in effect that the quantifiers range over all objects that satisfy ‘E!’, (A5) that they range only over objects that satisfy ‘E!’. Omitting
(A5) and replacing (A4) with (A4c) yields classical logic. To obtain a non-inclusive free logic, we may add to (A1)-(A5) the axiom ∃xE!x—or any axiom of the form ∃xT such that for any term t, T(t/x)
is a tautology.
For languages containing the identity predicate, we also need:
(A6) s=t → (A → A(t//s))
(where A(t//s) is the result of replacing one or more occurrences of s in A by t), and either
(A7) t=t
if all self-identity statements, including those whose singular term is empty, are to be true or
(A7−) ∀x(x=x)
if not (see Sections 3.1 and 3.2 below). If ‘E!’ is defined in terms of the identity predicate as indicated in Section 1.2, then (A4) takes the form:
∀xA → (∃y(y=t) → A(t/x))
and (A5) is redundant and may be omitted. ‘E!’ cannot be defined without the identity predicate (Meyer, Bencivenga and Lambert, 1982).
Free logic can be formalized without either ‘=’ or ‘E!’. (A1)–(A3) remain unchanged, but (A4) and (A5) are replaced respectively by:
(A4′) ∀y(∀xA → A(y/x))
(A5′) ∀x∀yA → ∀y∀xA.
(A4′), like (A4), restricts specification to objects within D, but it uses a quantifier instead of ‘E!’ to do so. The quantifier permutation axiom (A5′) is redundant in the presence of the identity
axioms but, as Fine proved in (1983), is independent of the other axioms.
The formulas used in the axiom systems discussed so far are closed, but some free logics allow open formulas—i.e., formulas that contain free variables. These logics follow one of two conventions for
variable assignments. Those that assign to each free variable a member of D are called E^+-logics; those that do not are called E-logics. The following specification rule is valid in E^+-logics but
not in E-logics:
∀xA ⊢ A(v/x).
(Here A(v/x) is the result of replacing every occurrence of the variable x in A by a variable v that is free for x in A.) Conversely, the following substitution rule is valid in E-logics but not in E^+-logics:
A ⊢ A(t/x).
But since this article employs closed formulas, the distinction between E- and E^+-logics may here be ignored.
PFL can also equivalently be formulated in a natural deduction system. The introduction and elimination rules for the operators of propositional logic and identity are as usual. The quantifier
introduction and elimination rules are restricted by use of the predicate ‘E!’, as follows:
∀I: Given a derivation of A(t/x) from E!t, where t is new and does not occur in A, discharge E!t and infer ∀xA.
∀E: From ∀xA and E!t infer A(t/x).
∃I: From A(t/x) and E!t infer ∃xA.
∃E: Given ∃xA and a derivation of a formula B from A(t/x) & E!t, where t is new and does not occur in either A or B, discharge A(t/x) & E!t and infer B from ∃xA.
The variable x need not be free in A, in which case A(t/x) is just A. ‘E!’ may either be taken as primitive (in which case it requires no additional rules) or defined in terms of the identity
predicate as in Section 1.2. For non-inclusive logic, we may add a rule that introduces ∃xE!x.
Jeffrey-style tree rules (Jeffrey 1991) for PFL can be obtained by replacing the classical rules for existentially and universally quantified formulas with the following:
Existential Rule: If ∃xA appears unchecked on an open path, check it, and
i. if x is free in A, choose a new individual constant t and list both E!t and A(t/x) at the bottom of every open path beneath ∃xA, and
ii. if x is not free in A, write A at the bottom of every open path beneath ∃xA.
Universal Rule: If ∀xA appears on an open path, then
i. if x is free in A, then where t is an individual constant that occurs in a formula on that path, or a new individual constant if there are none on the path, split the bottom of every open
path beneath ∀xA into two branches, writing ~E!t at the bottom of the first branch and A(t/x) at the bottom of the second, and
ii. if x is not free in A, write A at the bottom of every open path beneath ∀xA.
For languages that do not allow vacuous quantification, clause (ii) can in each case be omitted. Non-inclusive free logic needs an additional rule that introduces E!t for some new individual constant
t if a path does not already contain a formula of this form.
Semantics for free logics differ in how they assign truth-values to atomic formulas that are empty-termed—i.e., contain at least one empty singular term. There are three general approaches:
1. Negative semantics require all empty-termed atomic formulas to be false,
2. Positive semantics allow some empty-termed atomic formulas not of the form E!t to be true, and
3. Neutral (or nonvalent) semantics require all empty-termed atomic formulas not of the form E!t to be truth-valueless.
A negative semantics is a bivalent semantics on which all empty-termed atomic formulas (including identity statements) are false. The inclusive version presented here makes only minimal adjustments
to classical semantics to allow for non-denoting terms.
Let the language L be defined as in Section 2. Then a negative inclusive model for L is a pair 〈D,I〉, where D is a possibly empty set (the domain) and I is an interpretation function that assigns
referents to individual constants and extensions to predicates such that:
i. for each individual constant t of L, either I(t) ∈ D or I(t) is undefined, and
ii. for each n-place predicate P of L, I(P) ⊆ D^n.
(D^n is the set of n-tuples of members of D, a 1-tuple of an object d being just d itself.) Given a model 〈D,I〉, we recursively define a valuation function V that assigns truth values to formulas
as follows:
V(Pt[1]…t[n]) = T ⇔ I(t[1]),…,I(t[n]) are all defined and 〈I(t[1]),…,I(t[n])〉 ∈ I(P); F otherwise.
V(s=t) = T ⇔ I(s) and I(t) are both defined and I(s) = I(t); F otherwise.
V(E!t) = T ⇔ I(t) is defined; F otherwise.
V(~A) = T ⇔ V(A) = F; F otherwise.
V(A → B) = T ⇔ V(A) = F or V(B) = T; F otherwise.
V(∀xA) = T ⇔ for all d ∈ D, V[(t,d)](A(t/x)) = T, where t is any individual constant not in A and V[(t,d)] is the valuation function on the model 〈D,I^*〉 such that I^* is just like I except that I^*(t) = d; F otherwise.
(The metalinguistic symbol ‘⇔’ means “if and only if.”) A logic adequate to this semantics may be axiomatized by making three changes to the axioms of PFL. The first is to add the axiom:
(A−) Pt[1]…t[n] → E!t[i], where 1≤i≤n and P is any primitive n-place predicate, including ‘=’.
This expresses the convention that an atomic formula cannot be true unless its terms refer. Second, because all empty-termed identity statements are false on a negative semantics, (A7) is invalid and
must be replaced by (A7−). Third, since (A2), (A3), (A−) and (A7−) together imply (A5), (A5) may be omitted. The resulting logic is known as NFL (Negative Free Logic). For languages with function
symbols, negative free logic requires in addition this axiom of strictness:
E!f(t[1],…,t[n]) → E!t[i], where 1≤i≤n,
which assures that a function has a value only if each of its arguments does. Because of its unusual treatment of identity, negative free logic validates the equivalence:
t=t ↔ E!t.
Thus in some negative logics, E!t is defined simply as t=t. It is noteworthy, too, that each instance of the principle of indiscernibility of nonexistents:
(~E!s & ~E!t) → (A → A(t//s)),
(where A(t//s) is the result of replacing one or more occurrences of s in A by t) is provable in negative free logic.
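The negative-semantics clauses can be modeled directly: treat the interpretation of terms as a partial map, and make any atomic formula with an undefined term false. A small Python sketch (the domain, terms, and "Orbits" predicate are illustrative inventions, not from the text; "the ether" echoes the article's earlier example). Note how the t=t ↔ E!t equivalence falls out:

```python
D = {"earth", "moon"}
I_terms = {"earth": "earth", "moon": "moon"}   # "the ether" is undefined (empty term)
I_pred  = {"Orbits": {("moon", "earth")}}      # extension of a sample 2-place predicate

def V_atomic(pred, *terms):
    """Negative semantics: false unless every term denotes and the tuple satisfies P."""
    if any(t not in I_terms for t in terms):
        return False
    return tuple(I_terms[t] for t in terms) in I_pred[pred]

def V_identity(s, t):
    """s = t: true iff both terms denote and denote the same object."""
    return s in I_terms and t in I_terms and I_terms[s] == I_terms[t]

def V_exists(t):
    """E!t: true iff t denotes a member of D."""
    return t in I_terms and I_terms[t] in D

assert V_atomic("Orbits", "moon", "earth")
assert not V_atomic("Orbits", "moon", "the ether")   # empty-termed, hence false
assert not V_identity("the ether", "the ether")      # even t = t fails for empty t
assert V_identity("earth", "earth") == V_exists("earth")   # t=t iff E!t
```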
Positive semantics allow some empty-termed atomic formulas not of the form E!t to be true. They are typically bivalent, though there are variants that allow truth-value gaps or extra truth values.
Only bivalent semantics are considered in this section.
Positive semantics treat formulas of the form t=t as true, whether or not t is empty. Hence they validate (A7), which affirms all self-identity statements, not merely the weaker (A7−), which affirms
only self-identities between nonempty terms.
Like negative semantics, some positive semantics require each singular term to denote either a member of D or nothing at all. But then when a term fails to denote, the truth value of an atomic
formula containing it cannot as usual be a function of its denotation, and the formula must be evaluated in some nonstandard way. To avoid such irregularity and yet permit empty-termed formulas to be
true, other positive semantics allow singular terms to denote, and predicates to be satisfied by, nonmembers of D. These nonmembers are collected into a second or outer domain D[o], in contrast to
which D is described as the inner domain. The result is a dual-domain semantics.
Positive semantics with dual domains are generally the simplest. The members of the outer domain D[o] typically represent “non-existing” things. Depending on the application, these may be theoretical
or ideal entities, error objects (in computer science), fictional objects, merely possible (or even impossible) objects, and so on. Some authors make D a subset of D[o], which is the convention
throughout this article; others make the two disjoint. In a bivalent dual-domain semantics each singular term denotes an object in D[o] though possibly not in D. Thus D, though not D[o], may be empty.
Predicates are assigned extensions from D[o], and the truth-values of atomic formulas (whether empty-termed or not) are computed in the usual Tarskian fashion: an atomic formula is true if and only
if the n-tuple of objects denoted by its singular terms, taken in order, is a member of the predicate's extension. Identity statements are no exception. Statements of the form s=t are true if and
only if s and t denote the same object. Hence, even if empty-termed, they may be true.
More formally, a dual-domain model for a language L of the sort defined in Section 2 is a triple 〈D,D[o],I〉, where D is a possibly empty inner domain, D[o] is a nonempty outer domain such that D ⊆
D[o], and I is an interpretation function such that for every individual constant t of L, I(t) ∈ D[o], and for every n-place predicate P of L, I(P) ⊆ D[o]^n. Given a model 〈D,D[o],I〉, the valuation
function V assigns truth values to atomic and quantified formulas as follows:
V(Pt[1]…t[n]) = T ⇔ 〈I(t[1]),…,I(t[n])〉 ∈ I(P);
F otherwise
V(s=t) = T ⇔ I(s) = I(t);
F otherwise
V(E!t) = T ⇔ I(t) ∈ D;
F otherwise
V(∀xA) = T ⇔ for all d ∈ D, V[(t,d)](A(t/x)) = T (where t is not in A and V[(t,d)] is the valuation function on the model 〈D,D[o],I^*〉 such that I^* is just like I except that I^*(t) = d);
F otherwise
The clauses for ‘~’ and ‘ → ’ are the same as in negative free logic. PFL with classical identity — that is, the logic axiomatized by (A1)–(A7) — is sound and complete with respect to this semantics
(Leblanc and Thomason 1968).
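The valuation clauses above can be prototyped in a few lines. The following Python sketch is purely illustrative (the class name DualDomainModel, the tuple encoding of formulas, and the fresh-constant device '_fresh' are all invented here, not standard notation); it evaluates formulas of a small language on a bivalent dual-domain model whose inner domain is empty:

```python
# Minimal sketch of a bivalent dual-domain (positive) model for a small
# language of tuples, e.g. ('pred', 'P', 't') or ('all', 'x', A).
# All names (DualDomainModel, '_fresh', etc.) are invented for illustration.
class DualDomainModel:
    def __init__(self, inner, outer, consts, preds):
        assert inner <= outer and outer        # D ⊆ Do, Do nonempty
        self.D, self.Do = inner, outer
        self.I_const = dict(consts)            # term -> member of Do
        self.I_pred = preds                    # predicate -> set of tuples over Do

    def V(self, f):
        op = f[0]
        if op == 'pred':                       # Tarskian clause, computed over Do
            return tuple(self.I_const[t] for t in f[2:]) in self.I_pred[f[1]]
        if op == '=':                          # identity: same denotation in Do
            return self.I_const[f[1]] == self.I_const[f[2]]
        if op == 'E!':                         # existence: denotation lies in D
            return self.I_const[f[1]] in self.D
        if op == '~':
            return not self.V(f[1])
        if op == '->':
            return (not self.V(f[1])) or self.V(f[2])
        if op == 'all':                        # quantify over the inner domain D only
            for d in self.D:
                self.I_const['_fresh'] = d     # I* is just like I on a fresh constant
                if not self.V(subst(f[2], f[1], '_fresh')):
                    return False
            return True

def subst(f, x, t):
    """Replace variable x by term t throughout a formula tuple."""
    if isinstance(f, tuple):
        return tuple(subst(g, x, t) for g in f)
    return t if f == x else f

# 'v' (say, 'Vulcan') denotes only an outer object; the inner domain is empty.
M = DualDomainModel(inner=set(), outer={'u'},
                    consts={'v': 'u'}, preds={'P': {('u',)}})
print(M.V(('=', 'v', 'v')))                    # True: t=t holds though t is empty
print(M.V(('E!', 'v')))                        # False: 'u' is not in D
print(M.V(('all', 'x', ('pred', 'P', 'x'))))   # True: vacuous over the empty D
```

Note how (A7) comes out valid in this sketch: identities depend only on denotations in D[o], so self-identities are true even for empty terms, while E!t remains false.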
Dual-domain semantics have been criticized as ontologically extravagant. In response, some authors have advocated single-domain positive semantics, which assign no denotation to empty singular terms.
In such semantics empty-termed atomic formulas require unconventional treatment. Typically such semantics determine the truth-values of atomic formulas in two different ways: a Tarski-style
calculation for formulas whose terms all refer, and a separate truth-value assignment for empty-termed atomic formulas. The details, however, tend to get complicated. (See, for example, Antonelli 2000.)
Neutral semantics make all empty-termed atomic formulas not of the form E!t truth-valueless. Truth-valueless formulas are often said to have “truth-value gaps.” Neutral semantics are of two types:
ordinary neutral semantics, which provide conventions for calculating the truth values of complex formulas directly from their components, even when there are empty terms, and supervaluational
semantics, which calculate the truth values of complex formulas by considering all the values that their components could have if their empty terms had referents. Ordinary neutral semantics will be
considered in this section, supervaluations in Section 3.4.
The uniform policy of making all empty-termed atomic formulas truth-valueless has the advantages of plausibility and simplicity at the atomic level, but it complicates the evaluation of complex
formulas. How are the logical operators to function when some of the values on which they usually operate are absent? Some cases are fairly clear. The negation of a truth-valueless formula, for
example, is generally taken to be truth-valueless. But:
• If A is true and B truth-valueless, is A → B false or truth-valueless?
• If A is false and B truth-valueless, is A → B true or truth-valueless?
• Let A = (B & C), where x is free in B, B be true of some but not all members of D, and C be closed and truth-valueless. Clearly this open formula is either truth-valueless of every object in D or
truth-valueless of some and false of others. In either case, is ∃xA truth-valueless or false?
At one extreme, we might want the operators to generate as many plausible truth values as possible in order to validate as many classically valid formulas as we can. At the other, we might arrange
things so that all empty-termed formulas are truth-valueless, which would produce a very weak logic (Lehman 2001). But however we choose, many formulas that are valid in both classical predicate
logic and the usual forms of free logic—indeed, even in propositional logic—will become invalid. The law of noncontradiction, for example:
~(A & ~A)
is truth-valueless whenever A is (unless we make negations of truth-valueless statements true) and hence becomes invalid. Of course this law and many other standard logical principles remain weakly
valid—i.e., not false on any model—and it is possible to construct a logic based on weak validity rather than ordinary validity. But because any such logic will still be weaker than classical logic
and because its theorems need not even be true, most logicians reject this strategy. For more on neutral free logic, see Lehman 1994, 2001, and 2002, pp. 233–237.
Neutral semantics can be made to validate all the theorems of standard free logics by augmenting them with supervaluations. Supervaluations were first formalized by van Fraassen (1966). The version
presented here is a variant of Bencivenga's approach (1981 and 1986).
The fundamental idea is this: when empty terms deprive a formula of truth-value, supervaluational semantics nevertheless accounts it true (or false) if all possible ways of assigning referents to
those terms agree in making it true (or false). This strategy restores validity to many principles that would lose it in an ordinary neutral semantics. The following instance of the law of noncontradiction:
~(Pt & ~Pt),
for example, is truth-valueless when t is nondenoting (assuming an ordinary neutral semantics that makes the negation of a truth-valueless formula truth-valueless). Hence in such a semantics the law
itself is invalid. Yet were we to assign a referent to t, that referent would either be in the extension of P or not. If it were, then Pt would be true. If it were not, then Pt would be false. In
either case ~(Pt & ~Pt) would be true. Thus, since all possible ways of assigning referents to t agree in making ~(Pt & ~Pt) true, we should count ~(Pt & ~Pt) itself as true. In this way the law of
noncontradiction can be preserved.
More explicitly, a supervaluation begins with a neutral model M with a single, possibly empty domain. We then construct the set of completions of M. These may be regarded as bivalent dual-domain
positive models whose inner domain is the domain of M, but which also have an outer domain D[o] to provide referents for the empty terms. In each completion, singular terms that are nonempty in M
retain their referents, and those that are empty in M denote a member of D[o] — D. For each n-place predicate P, the extension of P is a subset of D[o]^n and a superset of P's extension in M.
From these completions we now construct a supervaluation. A supervaluation of M is a partial assignment of truth-values to formulas that makes a formula true if all completions of M make it true,
false if they all make it false, and truth-valueless if they disagree. A formula is valid on a supervaluational semantics if and only if it is true on all supervaluations. This semantics validates
all and only the theorems of PFL (Bencivenga 1981, Morscher & Simons 2001, pp. 14–18).
Supervaluations employ what Bencivenga (1986) calls a “counterfactual theory” of truth: an empty-termed statement is true if it would be true on any assignment of referents to its empty terms. This
has struck many critics as simply false. Moreover, the logic itself leaves much to be desired. For one thing, supervaluational consequence is too strong. Thus, for example, although the formula Pt →
E!t is (quite properly) not valid on a supervaluational semantics, nevertheless since E!t is true on every supervaluation on which Pt is true, the sequent (derivability statement) Pt ⊢ E!t is improperly semantically valid. Therefore, although PFL is sound on supervaluational semantics and every semantically valid formula is a theorem of PFL, not all semantically valid sequents are provable
in PFL. In fact, supervaluational consequence is not axiomatizable by any extension of free logic. This follows from a result of Woodruff (1984), who has shown that supervaluational semantics has
many of the undesirable properties of second-order semantics. Finally, since supervaluations are built from completions that are in effect positive dual-domain models, we may wonder whether the
detour through supervaluations is worth the trouble, since positive dual-domain models alone are simpler and more adequate to PFL.
While the problems noted above are specific to particular forms of free logic, there are anomalies that infect all, or nearly all, forms. This section considers three: (1) a cluster of problems related
to the application of primitive predicates to empty terms, (2) the failure of substitutivity salva veritate of co-referential expressions, and (3) the inability of free logic to express sufficient
conditions for existence.
In classical logic and in positive free logic any substitution instance of a valid formula (or form of inference) is itself a valid formula (or form of inference). But in negative or neutral free
logic this is not the case. A substitution instance is the result of replacing primitive non-logical symbols by possibly more complex ones of the same semantic type—n-place predicates with open
formulas in n variables, and individual constants with singular terms—each occurrence of the same primitive symbol being replaced by the same possibly complex symbol. The replacement of an occurrence
of a primitive n-place predicate P in some formula B by an open formula A with free variables x[1],…,x[n] is performed as follows: where t[1],…,t[n] are the individual constants or variables
immediately following P in that occurrence, replace Pt[1]…t[n] in B by A(t[i]/x[i])—the result of replacing x[i] by t[i] in A, for each i, 1≤i≤n.
Let P, for example, be a primitive one-place predicate. Then if the semantics is negative, Pt → E!t is valid. But now consider the substitution instance ~Pt → E!t, in which the open formula ~Px is
substituted for P. This substitution instance is false when t is empty. Hence valid formulas may have invalid substitution instances. The same holds for ordinary neutral semantics that make
conditionals true whenever their consequents are true.
In a negative semantics, moreover, the truth value of an empty-termed statement depends arbitrarily on our choice of primitive predicates. Consider, for example, a negative free logic interpreted
over a domain of people that takes as primitive the one-place predicate ‘A’, meaning “is an adult,” and defines “is a minor” by this schema:
Mt =[df] ~At.
For any non-denoting name t, At is false in this theory; hence Mt is true. If we take ‘is a minor’ as primitive instead, the truth-values of At and Mt are reversed. But why should truth-values depend
on primitiveness in this way?
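The dependence on the choice of primitives can be made vivid in a few lines. This is a hypothetical sketch (the function name and extensions are invented), following the adult/minor example above:

```python
# In a negative semantics an atomic formula with an empty term is false
# outright, and the defined predicate is computed by negating the primitive.
def atomic(primitive_ext, denotes, term):
    """Negative-semantics value of an atomic formula P(term)."""
    return term in denotes and denotes[term] in primitive_ext

denotes = {}                           # 't' is a non-denoting name

# Option 1: 'is an adult' (A) is primitive; Mt is defined as ~At.
At = atomic({'alice'}, denotes, 't')   # False, because 't' is empty
Mt = not At                            # True, by the definitional schema
print(At, Mt)                          # False True

# Option 2: 'is a minor' (M) is primitive instead; At is defined as ~Mt.
Mt2 = atomic({'bob'}, denotes, 't')    # False, because 't' is empty
At2 = not Mt2                          # True
print(At2, Mt2)                        # True False: the values have flipped
```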
Positive semantics avoid these anomalies. But, if bivalent, in application they force us to assign truth values to empty-termed formulas in some other way, often without sufficient reason. Consider,
for example, these three formulas, all of which contain the empty singular term ‘1/0’ (where ‘/’ is the division sign):
1/0 = 1/0
1/0 > 1/0
1/0 ≤ 1/0
Assuming a bivalent positive semantics, which ones should we make true and which false? Since the semantics is positive, ‘1/0 = 1/0’ is automatically true. One might argue further that since ‘≤’
expresses a relationship weaker than ‘=’ and since ‘1/0 = 1/0’ is true, ‘1/0 ≤ 1/0’ should be true as well. But that is merely to mimic with empty terms an inference pattern that holds for denoting
terms. To what extent is such mimicry justified? Suppose we do decide to make ‘1/0 ≤ 1/0’ true; should we therefore make ‘1/0 > 1/0’ false? There are no non-arbitrary criteria for answering such
questions. To a large extent, of course, the answers don't matter. There are no facts here; any consistent convention will do. But that's just the problem. Some convention is needed, and establishing
one can be a lot of bother for nothing.
Classical predicate logic has the desirable feature that co-extensive open formulas may be substituted for one another in any formula salva veritate—i.e., without changing that formula's truth value.
(Open formulas A and B in n free variables x[1],…,x[n] are coextensive if and only if ∀x[1]…∀x[n](A ↔ B) is true.) This principle fails for nearly all free logics with identity. Consider, for
example, the formula t=t, where t is empty, which is an instance of the open formula x=x. Now x=x is coextensive with both (x=x & E!x) and (E!x → x=x), since all three formulas are satisfied by all
members of D. Hence if co-extensive open formulas could be exchanged salva veritate, (t=t & E!t) and (E!t → t=t) would have the same truth value as t=t. But on nearly all free logics this is not the
case. Positive free logic and the supervaluations described in Section 3.4 make t=t true and (t=t & E!t) false; negative free logic makes t=t false and (E!t → t=t) true; and any ordinary neutral free
logic whose conditionals are true whenever their antecedents are false makes t=t truth-valueless and (E!t → t=t) true. Many find this troubling because, since Frege, it has been widely held that (1)
extensions of complex linguistic expressions should be functions of the extensions of their components (so that co-extensive components should be exchangeable without affecting the extension of the
whole) and (2) the extension of a formula (or statement) is a truth value.
One possible response is to reject (2). Leeb (2006) develops for a version of PFL a dual-domain semantics in which the extensions of formulas are abstract states of affairs. In this semantics,
co-referential open sentences are exchangeable not salva veritate, but (as he puts it) salva extensione; that is, the exchange does not alter the state of affairs designated by the statement in which
it occurs. But Leeb's state-of-affairs semantics is so complex that it may discourage application.
Those who wish to retain (2) may be consoled by the following observation: though substitutivity salva veritate of co-extensive open formulas fails for nearly all free logics, a related but weaker
principle, the substitutivity salva veritate of co-comprehensive open formulas, is valid for positive free logics. Open formulas A and B in n free variables x[1],…,x[n] are co-comprehensive if every
assignment of denotations in the outer domain D[o] to x[1],…,x[n] satisfies A if and only if it satisfies B. Among the open formulas mentioned in the previous paragraph, for example, x=x and (E!x → x=x) are co-comprehensive in a dual-domain positive free logic, being satisfied by all members of D[o], but (x=x & E!x) is not co-comprehensive with them, since it is satisfied only by the members of
D. Unlike co-extensiveness, however, co-comprehensiveness is not expressible in the language of PFL. But it becomes expressible with the introduction of quantifiers over the outer domain—a strategy
considered in Section 5.5.
‘Whatever thinks exists,’ ‘Any necessary being exists’, ‘That which is immediately known exists’: such statements of sufficient conditions for existence are prominent in metaphysical debates. But,
somewhat surprisingly, they are not expressible in free logic. Their apparent form is ∀x(A → E!x). But because the universal quantifier ranges just over D, which is also the extension of E!, this
form is valid in free logic—as it is in classical logic with E!x expressed as ∃y y=x. No statement of this form—not even ‘all impossible things exist’—can be false. Hence on free logic all such
statements are equally devoid of content. Argument evaluation suffers as a result. Consider, for example, the obviously valid inference:
I think.
Whatever thinks exists.
∴ I exist.
Its natural formalization in free logic is Ti, ∀x(Tx → E!x) ⊢ E!i. But this form is invalid. To obtain the conclusion, we must first deduce Ti → E!i by specification from the second premise and then
use modus ponens with the first. But since the logic is free, specification requires the question-begging premise E!i. A remedy is not to be found in free logic alone, but once again quantification
over the outer domain of a dual-domain semantics may help (see Section 5.5).
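The invalidity of the formalized inference can be checked against a one-object countermodel. This is an illustrative sketch (the names 'me', D, Do, and T are invented) in a dual-domain positive semantics: 'i' denotes only an outer-domain object, which nevertheless satisfies the primitive predicate T ("thinks"):

```python
# Countermodel sketch: premises Ti and ∀x(Tx → E!x) true, conclusion E!i false.
D  = set()                          # inner domain: nothing "exists"
Do = {'me'}                         # outer domain supplies a referent for 'i'
T  = {('me',)}                      # extension of T over Do

Ti       = ('me',) in T                             # premise 1: True
premise2 = all((d,) not in T or d in D for d in D)  # ∀x(Tx → E!x): vacuously True,
                                                    # since the quantifier ranges over D
Ei       = 'me' in D                                # conclusion E!i: False
print(Ti, premise2, Ei)             # True True False: the sequent is invalid
```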
This section considers applications of free logic in theories of definite descriptions, languages that allow partial or non-strict functions, logics with Kripke semantics, logics of fiction and
logics that are in a certain sense “Meinongian.” Free logic has also found application elsewhere—most prominently in theories of predication, programming languages, set theory, logics of
presupposition (with neutral semantics), and definedness logics. For more on these and other applications, see Lambert 1991 and 2001b; Lehman 2002, pp. 250–253; and Nolt 2006, pp. 1039–1053.
The earliest and most extensive applications of free logic have been to the theory of definite descriptions. A definite description is a phrase that may be expressed in the form “the x such that A,”
where A is an open formula with only x free. Formally, this is written using a special logical operator, the definite description operator ‘ι’, as ιxA. Contra Russell, free logic treats definite
descriptions not as merely apparent singular terms in formulas whose logical form is obtainable only by elaborate contextual definitions, but as genuine singular terms. Thus, like an individual
constant, ιxA may be attached to predicates and (under appropriate conditions) substituted for variables. For any object d in the domain D, ιxA denotes d if and only if among all objects in D, d and
only d satisfies A. If in D there is more than one object satisfying A, or none, ιxA is empty. The description operator therefore obeys Lambert's Law:
(LL) ∀y(y=ιxA ↔ ∀x(A ↔ x=y)), x free in A.
Adding (LL) to the free logic defined by (A1)–(A6) and (A7−) gives the minimal free definite description theory MFD. MFD is the core of virtually all free description theories, which therefore differ
only in the additional principles they endorse.
There is plenty of room for variation, for MFD fails to specify truth conditions for atomic formulas (including identities) when they contain empty descriptions, and there are many ways to do it.
Making all atomic formulas containing empty descriptions false yields a negative free description theory axiomatizable by adding (LL) to NFL (Burge 1974, Lambert 2001h). The result is essentially
Bertrand Russell's theory of definite descriptions, but with the description operator taken as primitive rather than contextually defined.
The simplest positive free description theory makes all identities between empty terms true. Known as FD2, it may be axiomatized by adding (LL) and:
(~E!s & ~E!t) → s=t
to PFL. FD2 is akin to Gottlob Frege's theory of definite descriptions; but whereas Frege chose a single arbitrary existing object to serve as the conventional referent for empty singular terms, FD2
makes this object non-existent. FD2 is readily modeled in a dual-domain positive semantics with just one object in the outer domain.
On FD2 all empty descriptions are intersubstitutable salva veritate. But this result is subject to counterexamples in ordinary language. This statement:
The golden mountain is a possible object,
for instance, is true, while this one:
The set of all non-self-membered sets is a possible object,
is false—though each applies the same predicate phrase ‘is a possible object’ to an empty description. Thus we may prefer a more flexible positive free description theory on which identities between
empty terms may be false. The literature presents a surprising diversity of these (Lambert 2001a, 2003c, 2003d, 2003h; Bencivenga 2002, pp. 188–193; Lehman 2002, pp. 237–250).
Some logics employ primitive n-place function symbols—symbols that combine with n singular terms to form a complex singular term. Thus, for example, the plus sign ‘+’ is a two-place function symbol
that, when placed between, say, ‘2’ and ‘3’, forms a complex singular term, ‘2 + 3’, that denotes the number five. Similarly, ‘^2’ is a one-place function symbol that, when placed after a term denoting
a number, forms a complex singular term that denotes that number's square. Semantically, the extension of a function symbol is a function whose arguments are members of the quantificational domain D,
and the resulting complex term denotes the result of applying that function to the referents of the n component singular terms, taken in the order listed. Since classical logic requires every
singular term (including those formed by function symbols) to refer to an object in D, for each such function symbol f, it requires that:
∀x[1]…∀x[n]∃y(y = f(x[1], …, x[n])).
Hence classical logic prohibits primitive function symbols whose extensions are partial functions—functions whose value is for some arguments undefined. Such, for example, is the binary division sign
‘/’, since when placed between two numerals the second of which is ‘0’, it forms an empty singular term. Similarly, the limit function symbol ‘lim’ yields an empty singular term when applied to the
name of a non-converging sequence. Classical logic can accommodate function symbols for partial functions via elaborate contextual definitions. But then (as with Russellian definite descriptions) the
form in which these function symbols are usually written is not their logical form. Free logic provides a more elegant solution. Because it allows empty singular terms, symbols for partial functions
may simply be taken as primitive.
In applications of free logic involving partial functions, the existence predicate ‘E!’ is often replaced by the postfix definedness predicate ‘↓’. For any singular term t, t↓ is true if and only if
t has some definite value in D. Thus, for example, the formula ‘(1/0)↓’ is false. While some writers (e.g., Feferman (1995)) distinguish ‘↓’ from ‘E!’, the literature as a whole does not, and ‘↓’ is
often merely a syntactic variant of ‘E!’.
In addition to partial functions, positive free logics can also readily handle non-strict functions. A non-strict function is a function that may yield a value even if not all of its arguments are
defined. The binary function f such that f(x,y) = x, for instance, can yield a value even if the y-term is empty. So, for example, the formula f(1, 1/0) = 1 can be regarded as true. Logics for
non-strict functions must be positive because in a negative or neutral logic empty-termed atomic formulas, such as f(1, 1/0) = 1, cannot be true. Free logics involving non-strict functions find
application in some programming languages (Gumb 2001, Gumb and Lambert 1991). Such logics may employ a dual-domain semantics in which the referents of empty functional expressions such as ‘1/0’ are
regarded as error objects—objects that correspond in the running of a program to error messages. Thus, for example, an instruction to calculate f(1, 1/0) might return the value 1, but an instruction
to calculate f(1/0, 1) would return an error message.
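A minimal Python sketch of this setup follows; ERR is an invented sentinel playing the role of the error object in D[o] − D, and the function names are illustrative:

```python
# Partial and non-strict functions in a dual-domain spirit.
ERR = object()                      # error object: in Do but not in D

def div(x, y):
    """Partial function: ordinary division, with ERR as the 'empty' value."""
    return ERR if y == 0 else x / y

def first(x, y):
    """Non-strict function: yields its first argument even if the second is ERR."""
    return x

def strict(f):
    """In a negative or neutral setting, functions must instead be strict:
    any ERR argument forces an ERR result (an error message, in effect)."""
    return lambda *args: ERR if any(a is ERR for a in args) else f(*args)

print(first(1, div(1, 0)) == 1)            # True: f(1, 1/0) = 1 can hold
print(div(1, 0) is ERR)                    # True: '1/0' denotes the error object
print(strict(first)(1, div(1, 0)) is ERR)  # True: strictness propagates ERR
```

The contrast between `first` and `strict(first)` mirrors the contrast between a positive logic, where f(1, 1/0) = 1 can be true, and a negative or neutral one, where it cannot.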
Kripke semantics for quantified modal logics, tense logics, deontic logics, intuitionistic logics, and so on, are often free. This is because they index truth to certain objects that we shall call
“worlds,” and usually some things that we have names for do not exist in some of these worlds. Worlds may be conceived in various ways: they may, for example, be understood as possible universes in
alethic modal logic, times or moments in tense logic, permissible conditions in deontic logic, or epistemically possible states of knowledge in intuitionistic logic. Associated with each world w is a
domain D[w], of objects (intuitively, the set of objects that exist at w). An object may exist in (or “at”) more than one world but need not exist in all. Thus, for example, Kripke semantics for
tense logic represents the fact that Bertrand Russell existed at one time but exists no longer by Russell's being a member of the domains of certain “worlds”—that is, times (specifically, portions of
the last two centuries)—but not others (the present, for example, or all future times). Two natural assumptions are made here: that the same object may exist in more than one world (this is the
assumption of transworld identity), and that some singular terms—proper names, in particular—refer not only to an object at a given world, but to that same object at every world. Such terms are
called rigid designators. Any logic that combines rigid designators with quantifiers over the domains of worlds in which their referents do not exist must be free.
Kripke semantics gives predicates different extensions in different worlds. Thus, for example, the extension of the predicate ‘is a philosopher’ was empty in all worlds (times) before the dawn of
civilization and more recently has varied. For rigidly designating terms, this raises the question of how to evaluate atomic formulas at worlds in which their referents do not exist. Is the predicate
‘is a philosopher’ satisfied, for example, by Russell in worlds (times) in which he does not exist—times such as the present? The general answers given to such questions determine whether a Kripke
semantics is positive, negative or neutral.
For negative or neutral semantics, the extension at w of an n-place predicate P is a subset of D[w]^n. An atomic formula can be true at w only if all its singular terms have referents in D[w]; if
not, it is false (in negative semantics) or truth-valueless (in neutral semantics). In a positive semantics, atomic formulas that are empty-termed at w may nevertheless be true at w. Predicates are
usually interpreted over the union U of domains of all the worlds, which functions as a kind of outer domain for each world, so that the extension of an n-place predicate P at a world w is a subset
of U^n. Some applications, however, require predicates to be true of—and singular terms to be capable of denoting—objects that exist in no world. If so, we may collect these objects into an outer
domain that is a superset of U. (They might be fictional objects, timeless Platonic objects, impossible objects, or the like.)
Quantified formulas, like all formulas, are true or false only relative to a world. Thus ∃xA, for example, is true at a world w if and only if some object in D[w] satisfies A. Except in
intuitionistic logic, where it has a specialized interpretation, the universal quantifier is interpreted similarly: ∀xA is true at w if and only if all objects in D[w] satisfy A. Kripke semantics
often specify that for each w, D[w] is nonempty, so that the resulting free logic is non-inclusive—but we shall not do so.
Any of various free modal or tense logics can be formalized by adding to a language L of the sort defined in Section 2 the sentential operator ‘□’. If A is a formula, so is □A. In alethic modal
logic, this operator is read “it is necessarily the case that.” More generally, it means “it is true in all accessible worlds that,” where accessibility from a given world is a different relation
for different modalities: possibility for alethic logics, permissibility for deontic logics, various temporal relations for tense logics, and so on. A typical bivalent Kripke model M for such a
language consists of a set of worlds, a binary accessibility relation R defined on that set; an assignment to each world w of a domain D[w]; an “outer” domain D[o] of objects (which typically is
either U or a superset thereof); and a two-place interpretation function I that assigns denotations at worlds to individual constants and extensions at worlds to predicates. For each individual
constant t and world w, I(t,w) ∈ D[o]. In such a model, a singular term t is a rigid designator if and only if for all worlds w[1] and w[2], I(t,w[1]) = I(t,w[2]). For every n-place predicate P, I(P,w)
⊆ D[w]^n if the semantics is negative or neutral; if it is positive, I(P,w) ⊆ D[o]^n. Truth values at the worlds of a model M are assigned by a two-place valuation function V (where V(A,w) is read
“the truth value V assigns to formula A at world w”) as follows:
V(Pt[1]…t[n],w) = T ⇔ 〈I(t[1],w),…,I(t[n],w)〉 ∈ I(P,w);
F otherwise.
V(s=t,w) = T ⇔ I(s,w) = I(t,w);
F otherwise.
V(E!t,w) = T ⇔ I(t,w) ∈ D[w];
F otherwise.
V(~A,w) = T ⇔ V(A,w) = F;
F otherwise.
V(A → B,w) = T ⇔ V(A,w) = F or V(B,w) = T;
F otherwise.
V(□A,w) = T ⇔ for all u such that wRu, V(A,u) = T;
F otherwise.
V(∀xA,w) = T ⇔ for all d ∈ D[w], V[(t,d)](A(t/x),w) = T (where t is not in A and V[(t,d)] is the valuation function for the model just like M except that its interpretation function I^* is such that for every world u, I^*(t,u) = d);
F otherwise.
Under the stipulations that admissible models make all individual constants rigid designators and that I(P,w) ⊆ D[o]^n, the standard free logic PFL, together with the modal axioms and rules
appropriate to whatever structure we assign to R, is sound and complete on this semantics.
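The world-indexed clauses above can be prototyped directly. The sketch below is illustrative only (the class KripkeModel, the tuple encoding, and the model itself are invented for this example); it builds a two-world model in which a rigidly designated object exists at one "time" but not the other:

```python
class KripkeModel:
    """Sketch of a bivalent Kripke model with world-relative domains."""
    def __init__(self, worlds, R, dom, consts, preds):
        self.worlds, self.R, self.dom = worlds, R, dom
        self.consts, self.preds = consts, preds

    def ref(self, t, w):
        # A term absent from the interpretation stands in for a domain object
        # introduced by quantifier instantiation; it denotes itself rigidly.
        return self.consts.get((t, w), t)

    def V(self, f, w):
        op = f[0]
        if op == 'pred':
            return tuple(self.ref(t, w) for t in f[2:]) in self.preds[(f[1], w)]
        if op == 'E!':
            return self.ref(f[1], w) in self.dom[w]
        if op == '~':
            return not self.V(f[1], w)
        if op == 'box':      # true at w iff true at every R-accessible world
            return all(self.V(f[1], u) for u in self.worlds if (w, u) in self.R)
        if op == 'all':      # ranges over D[w] only, which makes the logic free
            return all(self.V(subst(f[2], f[1], d), w) for d in self.dom[w])

def subst(f, x, d):
    """Replace variable x by object d throughout a formula tuple."""
    if isinstance(f, tuple):
        return tuple(subst(g, x, d) for g in f)
    return d if f == x else f

# Two "times": object 'b' exists at world 1 but not at world 2; the name 'r'
# designates 'b' rigidly at both worlds.
M = KripkeModel(worlds={1, 2}, R={(1, 1), (1, 2), (2, 1), (2, 2)},
                dom={1: {'a', 'b'}, 2: {'a'}},
                consts={('r', 1): 'b', ('r', 2): 'b'},
                preds={})
print(M.V(('E!', 'r'), 1))                         # True: b exists at world 1
print(M.V(('box', ('E!', 'r')), 1))                # False: b is absent from world 2
print(M.V(('all', 'x', ('box', ('E!', 'x'))), 1))  # False: ∀x□E!x fails
```

The last line shows why differing domains force the logic to be free: the rigid designator 'r' is empty at world 2, so □E!r and ∀x□E!x fail at world 1.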
Modal semantics thus defined call for free logic whenever worlds are allowed to have differing domains—that is, whenever we may have worlds u and w such that D[u] ≠ D[w]. For in that case there must
be an object d that exists in one of these domains (let it be D[w]), but not the other, so that any singular term t that rigidly designates d must be empty at world u. Hence ~∃x(x=t) (which is
self-contradictory in classical logic) must be true at world u. Such a semantics also requires free logic when D[o] contains objects not in U, for in that case rigid designators of these objects are
empty in all worlds. Finally, this semantics calls for inclusive logic if any world has an empty domain. Thus, given this semantics, the only way to make the resulting logic unfree is to require that
domains be fixed—i.e., that all worlds have the same domain D, that D be non-empty, and that D[o] = D.
Just this trio of requirements was in effect proposed by Saul Kripke in his ground-breaking (1963) paper on modal logic as one of two strategies for retaining classical quantification. (The other,
more draconian, strategy was to allow differing domains but ban individual constants and treat open formulas as if they were universally quantified.) But such fixed-domain semantics validate the
implausible formula:
∀x□∃y(y = x),
which asserts that everything exists necessarily and the equally implausible Barcan formula:
∀x□A → □∀xA
(named for Ruth Barcan, later Ruth Barcan Marcus, who discussed it as early as the late 1940s). To see its implausibility, consider this instance: ‘If everything is necessarily a product of the big
bang, then necessarily everything is a product of the big bang’. It may well be true that everything (in the actual world) is necessarily a product of the big bang—i.e., that nothing in this world
would have existed without it. But it does not seem necessary that everything is a product of the big bang, for other universes are possible in which things that do not exist in the actual world have
other ultimate origins. Because of the restrictiveness and implausibility of fixed-domain semantics, many modal logicians loosen Kripke's strictures and adopt free logics.
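A growing-domain countermodel to the Barcan formula can be checked in a few self-contained lines. This sketch is illustrative (the worlds, extensions, and names are invented): world 2 adds an object that lacks P, so ∀x□Px holds at world 1 while □∀xPx fails there:

```python
# Countermodel to the Barcan formula ∀x□A → □∀xA, with A as Px.
worlds = {1, 2}
R = {(1, 1), (1, 2), (2, 2)}
dom = {1: {'a'}, 2: {'a', 'b'}}      # the domain grows from world 1 to world 2
P = {(1, 'a'), (2, 'a')}             # (world, object) pairs satisfying P

def box(check, w):                   # □: check must hold at all accessible worlds
    return all(check(u) for u in worlds if (w, u) in R)

# ∀x□Px at world 1: quantify over dom[1] = {'a'} only.
antecedent = all(box(lambda u, d=d: (u, d) in P, 1) for d in dom[1])
# □∀xPx at world 1: at world 2 the quantifier also reaches 'b', which lacks P.
consequent = box(lambda u: all((u, d) in P for d in dom[u]), 1)
print(antecedent, consequent)        # True False: this Barcan instance fails
```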
We may also drop the assumption that singular terms are rigid designators and thus allow nonrigid designators. On the semantics considered here, these are singular terms t such that for some worlds w
[1] and w[2], I(t,w[1]) ≠ I(t,w[2]). Definite descriptions, understood attributively, are the best examples. Thus the description “the oldest person” designates different people at different times
(worlds)—and no one at times before people existed (“worlds” w at which I(t,w) is undefined).
Nonrigid designators, if empty at some worlds, require free logics even with fixed domains. (Thus classical logic with nonrigid designators is possible only if we require for each singular term t
that at each world w, t denotes some object in D[w].) On some semantics for nonrigid designators, the quantifier rule must differ from that given above, and other adjustments must be made. For
details, see Garson 1991, Cocchiarella 1991, Schweitzer 2001 and Simons 2001.
Intuitionistic logic, too, has a Kripke semantics, though special valuation clauses are needed for ‘~’, ‘→’ and ‘∀’ in order to accommodate the special meanings these operators have for
intuitionists, and ‘□’ is generally not used. The usual first-order intuitionistic logic, the Heyting predicate calculus (HPC)—also called the intuitionistic predicate calculus—has the theorem ∃x(x=t) and hence is not free. But intuitionists admit the existence only of objects that can in some sense be constructed, while classical mathematicians posit a wider range of objects. Therefore users of
HPC cannot legitimately name all the objects that classical mathematicians can. Worse, they cannot legitimately name objects whose constructibility has yet to be determined. Yet some Kripke-style
semantics for HPC do allow use of names for such objects (semantically, names of objects that “exist” at worlds accessible from the actual world but not at the actual world itself). Some such
semantics, though intended for HPC, have turned out, unexpectedly, not to be adequate for HPC. An obvious fix, advocated by Posy (1982), is to adopt a free intuitionistic logic. For more on this
issue, see Nolt 2007.
Because fictions use names that do not refer to literally existing things, free logic has sometimes been employed in their analysis. So long as we engage in the pretense of a story, however, there is
no special need for it. It is true, for example, in Tolkien's The Lord of the Rings that Gollum hates the sun, from which we can legitimately infer that in the story there exists something that hates
the sun. Thus quantifiers may behave classically so long as we consider only what occurs and what exists “in the story.” (The general logic of fiction, however, is often regarded as nonclassical, for
two reasons: (1) a story may be inconsistent and hence require a paraconsistent logic, and (2) the objects a story describes are typically (maybe always) incomplete; that is, the story does not
determine for each such object o and every property P whether or not o has P.)
The picture changes, however, when we distinguish what is true in the story from what is literally true. For this purpose logics of fiction often deploy a sentence operator that may be read “in the
story.” Here we shall use ‘S[x]’ to mean “in the story x,” where ‘x’ is to be replaced by the name of a specific story. Anything within the scope of this operator is asserted to be true in the named
story; what is outside its scope is to be understood literally. (For a summary of theories of what it means to be true in a story, see Woods 2006.)
With this operator the statement ‘In the story, The Lord of the Rings, Gollum hates the sun’ may be formalized as follows:
S[The Lord of the Rings](Gollum hates the sun).
The statement that in The Lord of the Rings something hates the sun is:
S[The Lord of the Rings]∃x(x hates the sun).
This second statement follows from the first, even though Gollum does not literally exist. But it does not follow that there exists something such that it, in The Lord of the Rings, hates the sun:
∃xS[The Lord of the Rings](x hates the sun),
and indeed that statement is not true, for, literally, Gollum does not exist. Since the sun, however, exists both literally and in the story, the statement:
∃xS[The Lord of the Rings](Gollum hates x)
is true and follows by free existential generalization from ‘S[The Lord of the Rings](Gollum hates the sun)’ together with the true premise ‘E!the sun’. Thus free logic may play a role in reasoning
that mixes fictional and literal discourse.
Terms for fictional entities also occur in statements that are entirely literal, making no mention of what is true “in the story.” Consider, for example, the statement:
(G) Gollum is more famous than Gödel.
Assuming that (G) is both atomic and true and that ‘Gollum’ is an empty singular term, this statement calls for a free positive logic. The logic must be free because it employs an empty singular
term, and it must be positive, because only on a positive semantics can empty-termed atomic statements be true. Regarding the name ‘Gollum’, however, there seems to be a choice; this term can be
understood either as having no referent or as having a referent that does not exist.
If ‘Gollum’ has no referent, then (G) might be handled by a single-domain positive semantics. But that semantics would have to treat atomic formulas non-standardly; it could not, as usual, stipulate
that (G) is true just in case the pair 〈Gollum, Gödel〉 is a member of the extension of the predicate ‘is more famous than’; for if there is no Gollum, there is no such pair. On such a semantics
‘Gollum is more famous than Gödel’ would not imply that something is more famous than Gödel.
If, on the other hand, terms such as ‘Gollum’ refer to non-existent objects, then we may collect those objects in the outer domain of a dual-domain positive free logic. Now atomic formulas may have
their standard truth conditions: (G) is true just in case 〈Gollum, Gödel〉 is a member of the extension of ‘is more famous than’. Moreover, if we allow quantifiers over that outer domain, then
‘Something is more famous than Gödel’ (where the quantifier ranges over the outer domain) does follow from ‘Gollum is more famous than Gödel’, though ‘There literally exists something more famous
than Gödel’ (where the quantifier ranges over the inner domain) does not. Meinongian logics of fiction employ this strategy.
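This dual-domain strategy can be sketched with a toy model (domains and extensions below are illustrative, not from the literature):

```python
# A toy dual-domain positive free semantics for (G).
inner = {"Godel", "the sun"}                  # the literally existing things
outer = inner | {"Gollum"}                    # outer domain adds non-existent objects
more_famous_than = {("Gollum", "Godel")}      # extension of the predicate

def atomic(a, b):
    # Standard truth condition: true iff the pair is in the extension.
    return (a, b) in more_famous_than

print(atomic("Gollum", "Godel"))                    # True: (G) holds
print(any(atomic(x, "Godel") for x in outer))       # True: outer-quantified claim
print(any(atomic(x, "Godel") for x in inner))       # False: inner-quantified claim
```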
Alexius Meinong is best known for his view that some objects that do not exist nevertheless have being. His name has been associated with various developments in logic. Some free logicians use it to
describe any dual-domain semantics. For others, Meinongian logic is something much more elaborate: a rich theory of all the sorts of objects we can think about—possible or impossible, abstract or
concrete, literal or fictional, complete or incomplete. In this section the term is used to describe logics stronger than the first type but possibly weaker than the second: positive free logics with
an extra set of quantifiers that range over the outer domain of a dual-domain semantics.
Whether such logics can legitimately be considered free is controversial. On older conceptions, free logic forbids any quantification over non-existing things (see Paśniczek 2001 and Lambert's reply
in Morscher and Hieke 2001, pp. 246–8). But by anybody's definition, Meinongian logics in the sense intended here at least contain free logics when the inner domain is interpreted as the set of
existing things. Moreover, on the strictly semantic definition used in this article (Section 1.1), which is also that of Lehman 2002, whether the members of D exist is irrelevant to the question of
whether a logic is free. For a defense of this definition, see Nolt 2006, pp. 1054–1057.
Historically, quantification over domains containing objects that do not exist has been widely dismissed as ontologically irresponsible. Quine (1948) famously maintained that existence is just what
an existential quantifier expresses. Yet nothing forces us to interpret “existential” quantification over every domain as expressing existence—or being of any sort. Semantically, an existential
quantifier on a variable x is just a logical operator that takes open formulas on x into truth values; the value is T if and only if the open formula is satisfied by at least one object in the
quantifier's domain. That the objects in the domain have or lack any particular ontological status is a philosophical interpretation of the formal semantics. Alex Orenstein (1990) argues that
“existential” is a misnomer and that we should in general call such quantifiers “particular.” That suggestion is followed in the remainder of this section.
Quantifiers ranging over the outer domain of a dual-domain semantics are called outer quantifiers, and those ranging over the inner domain inner quantifiers. If the inner particular quantifier is
interpreted to mean “there exists” and the members of the outer domain are possibilia, then the outer particular quantifier may mean something like “there is possible a thing such that” or “for at
least one possible thing.” We shall use the generalized product symbol ‘Π’ for the outer universal quantifier and the generalized sum symbol ‘Σ’ for its particular dual. This notation enables us to
formalize, for example, the notoriously puzzling but obviously true statement ‘Some things don't exist’ (Routley 1966) as:
Σx~E!x.
Since in a dual-domain semantics all singular terms denote members of the outer domain, the logic of outer quantifiers is not free but classical. With ‘E!’ as primitive, the free inner quantifiers
can be defined in terms of the classical outer ones as follows:
∀xA =[df] Πx(E!x → A)
∃xA =[df] Σx(E!x & A).
The outer quantifiers, however, cannot be defined in terms of the inner.
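These definitions can be sketched directly, with the outer quantifiers as classical quantifiers over an illustrative outer domain:

```python
# Outer quantifiers range classically over the outer domain; the free inner
# quantifiers are defined from them per ∀xA =df Πx(E!x → A) and
# ∃xA =df Σx(E!x & A). Domain contents are illustrative.
outer_domain = {"Socrates", "Pegasus", "Vulcan"}
existents = {"Socrates"}                     # extension of 'E!'

def E(x):
    return x in existents

def Pi(pred):                                # outer universal Πx
    return all(pred(x) for x in outer_domain)

def Sigma(pred):                             # outer particular Σx
    return any(pred(x) for x in outer_domain)

def forall(pred):                            # inner ∀x := Πx(E!x → ...)
    return Pi(lambda x: not E(x) or pred(x))

def exists_q(pred):                          # inner ∃x := Σx(E!x & ...)
    return Sigma(lambda x: E(x) and pred(x))

# Routley's 'Some things don't exist' is true with the outer particular
# quantifier, while its inner counterpart is, as it should be, false:
print(Sigma(lambda x: not E(x)))             # True
print(exists_q(lambda x: not E(x)))          # False
```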
Logics with both inner and outer quantifiers have various applications. They enable us, for example, to formalize substantive sufficient conditions for existence and hence adequately express the
argument of Section 4.3, as follows:
Ti, Πx(Tx → E!x) ⊢ E!i.
This form is valid. The co-comprehensiveness of open formulas A and B in n free variables x[1],…,x[n] (see Section 4.2), can likewise be formalized as:
Πx[1]…Πx[n](A ↔ B).
Richard Grandy's (1972) theory of definite descriptions holds that ιxA=ιxB is true if and only if A and B are co-comprehensive and thus is readily expressible in a Meinongian logic. Free logics with
outer quantifiers have also been employed in logics that are Meinongian in the richer sense of providing a theory of objects (including, in some cases, fictional objects) that is inspired by
Meinong's work (Routley 1966 and 1980, Parsons 1980, Jacquette 1996, Paśniczek 2001, Priest 2005 and 2008, pp. 295–7).
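The validity of the argument form Ti, Πx(Tx → E!x) ⊢ E!i can be illustrated by brute-force checking every interpretation of 'T' and 'E!' over one small outer domain (a sanity check over a finite domain, not a general proof):

```python
from itertools import chain, combinations

domain = ["i", "a"]  # small illustrative outer domain

def subsets(s):
    """All subsets of s, as a generator of tuples."""
    return chain.from_iterable(combinations(s, r) for r in range(len(s) + 1))

# Check every interpretation of 'T' and 'E!' over the domain: whenever
# the premises Ti and Πx(Tx → E!x) both hold, so does the conclusion E!i.
valid = all(
    ("i" in E) or not ("i" in T and all(x not in T or x in E for x in domain))
    for T in map(set, subsets(domain))
    for E in map(set, subsets(domain))
)
print(valid)  # True across all 16 interpretations
```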
Inclusive logic was conceived and formalized before free logic per se was. Thus, since inclusive logic with singular terms is de facto free, the inventors of inclusive logics were, perhaps
unwittingly, the inventors of free logic. Bertrand Russell suggested the idea of an inclusive logic in (1919, p. 201, n.). Andrzej Mostowski (1951) seems to have been among the first to formalize
such a logic (but see Morscher and Simons 2001, p. 27, note 3). Theodore Hailperin (1953), Czeslaw Lejewski (1954) and W. V. O. Quine (1954) made important early contributions. It was Quine who first
dubbed such logics “inclusive.”
Henry S. Leonard (1956) was the first to develop a free logic per se, though he used a defective definition of ‘E!’. Karel Lambert began his prolific series of contributions to the field in (1958),
critiquing Leonard's definition, and then coining the term “free logic” in (1960). The early systems of free logic were positive. Negative free logic was developed by Rolf Schock in a series of
papers during the 1960s, culminating in (1968). Timothy Smiley suggested the idea of a neutral free logic in (1960), but the first thoroughgoing treatment appeared in Lehman 1994. Supervaluations
were described in Mehlberg 1958, pp. 256–260, as a device for handling, not neutral free logic, but vagueness. But their formalization and application to free logic began with van Fraassen 1966, in
which the term “supervaluation” was introduced. Dual-domain semantics were discussed in lectures by Lambert, Nuel Belnap and others as early as the late 1950s, but it appears that Church 1965 and
Cocchiarella 1966 were the first published accounts.
• Antonelli, Gian Aldo, 2000, “Proto-Semantics for Positive Free Logic,” Journal of Philosophical Logic, 29 (3): 277–294.
• Bencivenga, Ermanno, 1981, “Free Semantics” in Boston Studies in the Philosophy of Science, 47: 38–41; revised version reprinted in Lambert 1991, pp. 98–110.
• –––, 1986, “Free Logics,” in D. Gabbay and F. Guenthner (eds.), Handbook of Philosophical Logic, vol. III: Alternatives to Classical Logic, Dordrecht: D. Reidel, pp. 373–426
• –––, 2002, “Free Logics,” in D. Gabbay and F. Guenthner (eds.), Handbook of Philosophical Logic, 2^nd edition, vol. 5, Dordrecht: Kluwer, pp. 147–196. (This is a republication of Bencivenga 1986.)
• Burge, Tyler, 1974, “Truth and Singular Terms,” Noûs, 8: 309–25; reprinted in Lambert 1991, pp. 189–204.
• Church, Alonzo, 1965, review of Lambert 1963 in Journal of Symbolic Logic, 30: 103–104.
• Cocchiarella, Nino B., 1966, “A Logic of Actual and Possible Objects” (abstract), Journal of Symbolic Logic, 31: 688–689.
• –––, 1991, “Quantification, Time and Necessity,” in Lambert 1991, pp. 242–256.
• Feferman, Solomon, 1995, “Definedness,” Erkenntnis, 43 (3): 295–320.
• Fine, Kit, 1983, “The Permutation Principle in Quantificational Logic,” Journal of Philosophical Logic, 12: 33–7.
• Garson, James W., 1991, “Applications of Free Logic to Quantified Intensional Logic,” in Lambert 1991, pp. 111–142.
• Grandy, Richard E., 1972, “A Definition of Truth for Theories with Intensional Definite Description Operators,” Journal of Philosophical Logic, 1: 137–55; reprinted in Lambert 1991, pp. 171–188.
• Gumb, Raymond D., 2001, “Free Logic in Program Specification and Verification,” in Morscher and Hieke 2001, pp. 157–93.
• Gumb, Raymond D., and Karel Lambert, 1991, “Definitions in Nonstrict Positive Free Logic,” Modern Logic, 7: 25–55 and 435–440 (errata).
• Hailperin, Theodore, 1953, “Quantification Theory and Empty Individual Domains,” Journal of Symbolic Logic, 18: 197–200.
• Hintikka, Jaakko, 1959, “Towards a Theory of Definite Descriptions,” Analysis, 19: 79–85.
• Jacquette, Dale, 1996, Meinongian Logic: The Semantics of Existence and Nonexistence, Berlin: Walter de Gruyter.
• –––, (ed.), 2006, Philosophy of Logic (Series: Volume 5 of the Handbook of the Philosophy of Science), Amsterdam: Elsevier.
• Jeffrey, Richard, 1991, Formal Logic: Its Scope and Limits, 3^rd edition, New York: McGraw-Hill.
• Kripke, Saul, 1963, “Semantical Considerations on Modal Logic,” Acta Philosophica Fennica, 16: 83–94.
• Lambert, Karel, 1958, “Notes on E!,” Philosophical Studies, 9: 60–63.
• –––, 1960, “The Definition of E! in Free Logic,” in Abstracts: The International Congress for Logic, Methodology and Philosophy of Science, Stanford: Stanford University Press.
• –––, 1963, “Existential Import Revisited,” Notre Dame Journal of Formal Logic, 4: 288–292.
• –––, (ed.), 1991, Philosophical Applications of Free Logic, New York: Oxford University Press.
• –––, 2001a, “Free Logic and Definite Descriptions,” in Morscher and Hieke 2001, pp. 37–47.
• –––, 2001b, “Free Logics,” in Lou Goble (ed.), The Blackwell Guide to Philosophical Logic, Oxford: Blackwell Publishing, pp. 258–279.
• –––, 2003a, Free Logic: Selected Essays, Cambridge: Cambridge University Press.
• –––, 2003b, “Existential Import, E! and ‘The’” in Lambert 2003a, pp. 16–32.
• –––, 2003c, “Foundations of the Hierarchy of Positive Free Definite Description Theories” in Lambert 2003a, pp. 69–91.
• –––, 2003d, “The Hilbert-Bernays Theory of Definite Descriptions” in Lambert 2003a, pp. 44–68.
• –––, 2003e, “Nonextensionality” in Lambert 2003a, pp. 107–121.
• –––, 2003f, “The Philosophical Foundations of Free Logic” in Lambert 2003a, pp. 122–175.
• –––, 2003g, “Predication and Extensionality” in Lambert 2003a, pp. 92–106.
• –––, 2003h, “Russell's Version of the Theory of Definite Descriptions” in Lambert 2003a, pp. 1–15.
• Leblanc, Hughes, 1971, “Truth Value Semantics for a Logic of Existence,” Notre Dame Journal of Formal Logic, 12: 153–68.
• Leblanc, Hughes and Richmond H. Thomason, 1968, “Completeness Theorems for Some Presupposition-Free Logics,” Fundamenta Mathematicae, 62: 125–64; reprinted in Leblanc's Existence, Truth and
Provability, Albany: State University of New York Press, 1982, pp. 22–57.
• Leeb, Hans-Peter, 2006, “State-of-Affairs Semantics for Positive Free Logic,” Journal of Philosophical Logic, 35 (2): 183–208.
• Lehman, Scott, 1994, “Strict Fregean Free Logic,” Journal of Philosophical Logic, 23 (3): 307–336.
• –––, 2001, “No Input, No Output Logic,” in Morscher and Hieke 2001, pp. 147–54.
• –––, 2002, “More Free Logic,” in D. Gabbay and F. Guenthner (eds.), Handbook of Philosophical Logic, 2^nd edition, vol. 5, Dordrecht: Kluwer, pp. 197–259.
• Lejewski, Czeslaw, 1954, “Logic and Existence,” British Journal for the Philosophy of Science, 5 (18): 104–19; reprinted in Dale Jacquette (ed.), Philosophy of Logic: An Anthology, Oxford:
Blackwell, 2002, pp. 147–55.
• Leonard, H. S., 1956, “The Logic of Existence,” Philosophical Studies, 7: 49–64.
• Mehlberg, Henryk, 1958, The Reach of Science, Toronto: University of Toronto Press.
• Meyer, Robert K., Ermanno Bencivenga and Karel Lambert, 1982, “The Ineliminability of E! in Free Quantification Theory without Identity,” Journal of Philosophical Logic, 11: 229–231.
• Meyer, Robert K. and Karel Lambert, 1968, “Universally Free Logic and Standard Quantification Theory,” Journal of Symbolic Logic, 33: 8–26.
• Morscher, Edgar and Alexander Hieke (eds.), 2001, New Essays in Free Logic: In Honour of Karel Lambert (Applied Logic Series, Vol. 23), Dordrecht: Kluwer.
• Morscher, Edgar and Peter Simons, 2001, “Free Logic: A Fifty-Year Past and an Open Future,” in Morscher and Hieke 2001, pp. 1–34.
• Mostowski, Andrzej, 1951, “On the Rules of Proof in the Pure Functional Calculus of the First Order,” Journal of Symbolic Logic, 16: 107–111.
• Nolt, John, 2006, “Free Logics,” in Jacquette 2006, pp. 1023–1060.
• –––, 2007, “Reference and Perspective in Intuitionistic Logic,” Journal of Logic, Language and Information, 16 (1): 91–115.
• Orenstein, Alex, 1990, “Is Existence What Existential Quantification Expresses?” in Robert B. Barrett and Roger F. Gibson (eds.), Perspectives on Quine, Cambridge: Blackwell, 1990, pp. 245–270.
• Parsons, Terence, 1980, Nonexistent Objects, New Haven: Yale University Press.
• Paśniczek, Jacek, 1998, The Logic of Intentional Objects: A Meinongian Version of Classical Logic, Dordrecht: Kluwer.
• –––, 2001, “Can Meinongian Logic Be Free?” in Morscher and Hieke 2001, pp. 227–36.
• Posy, Carl J., 1982, “A Free IPC is a Natural Logic: Strong Completeness for Some Intuitionistic Free Logics,” Topoi, 1: 30–43; reprinted in Lambert 1991, pp. 49–81.
• Priest, Graham, 2005, Towards Non-Being, Oxford: Oxford University Press.
• –––, 2008, An Introduction to Non-Classical Logic: From If to Is, 2^nd edition, Cambridge: Cambridge University Press.
• Quine, W. V. O., 1948, “On What There Is,” Review of Metaphysics, 48: 21–38; reprinted as Chapter 1 of Quine 1963.
• –––, 1954, “Quantification and the Empty Domain,” Journal of Symbolic Logic, 19: 177–179.
• –––, 1963, From a Logical Point of View, 2^nd edition, New York: Harper & Row.
• Routley, Richard, 1966, “Some Things Do Not Exist,” Notre Dame Journal of Formal Logic, 7: 251–276.
• –––, 1980, Exploring Meinong's Jungle and Beyond, Canberra: Australian National University.
• Russell, Bertrand, 1919, Introduction to Mathematical Philosophy, New York: Simon & Schuster.
• Schock, Rolf, 1968, Logics without Existence Assumptions, Stockholm: Almqvist & Wiksell.
• Schweitzer, Paul, 2001, “Free Logic and Quantification in Syntactic Modal Contexts,” in Morscher and Hieke 2001, pp. 69–85.
• Simons, Peter, 2001, “Calculi of Names: Free and Modal,” in Morscher and Hieke 2001, pp. 49–65.
• Smiley, Timothy, 1960, “Sense without Denotation,” Analysis, 20: 125–135.
• van Fraassen, Bas C., 1966, “Singular Terms, Truth Value Gaps and Free Logic,” Journal of Philosophy, 63: 481–95; reprinted in Lambert 1991, pp. 82–97.
• Woods, John, 2006, “Fictions and their Logic,” in Jacquette 2006, pp. 1061–1126.
• Woodruff, Peter W., 1984, “On Supervaluations in Free Logic,” Journal of Symbolic Logic, 49: 943–950.
The author thanks Ian Orr for help in researching this article.
Math Forum: Math Library - Calendars/Dates/Time
1. About time - Keith Devlin (Devlin's Angle)
An essay on time in mathematics and engineering (the time that we measure and use to regulate our lives). How did we come to measure time in the first place? What exactly is it that our
timepieces measure? And what scientific principles do we use to construct ever more accurate clocks? more>>
2. Algebra - Fun with Calendars - Cynthia Lanius
Take any calendar. Tell a friend to choose 4 days that form a square. If your friend tells you only the sum of the four days, you can tell her what the four days are. How does the puzzle work?
Includes an extension page for designing your own puzzle, teacher's notes, and links to calendar pages on the Web. Mathematics topics: assigning variables, solving simple linear equations,
factoring. more>>
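The algebra behind the calendar trick is simple: if the square's top-left day is d, the four days are d, d+1, d+7, and d+8, so their sum is 4d + 16. A minimal sketch:

```python
def square_days(total):
    """Recover four calendar days forming a 2x2 square from their sum.

    With top-left day d, the days are d, d+1, d+7, d+8,
    so total = 4d + 16 and d = (total - 16) / 4.
    """
    d = (total - 16) // 4
    return [d, d + 1, d + 7, d + 8]

print(square_days(sum([10, 11, 17, 18])))  # → [10, 11, 17, 18]
```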
3. The Calendar and the Days of the Week - Math Forum, Ask Dr. Math FAQ
What years are leap years? What day of the week will it be a year from today? How do I find the day of the week for any date? How do I find a calendar for any year? How common are Friday the
13ths? more>>
4. Calendars and Astronomy (Math and the Heavens) - Dave Rusin, The Mathematical Atlas
More-or-less mathematical posts related to calendars and astronomy: background information on calendar traditions and a discussion of the mathematical aspects of the calendar, with a pointer to
the Calendar FAQ; the related questions, on which day of the week does a date fall? and when's Easter this year?; and sources of astronomical data. more>>
5. Clockworks: From Sundials to the Atomic Second - Britannica.com
A site about the development of instruments that have measured time over the centuries. Each instrument has a page featuring a description and an illustrated diagram, and most include a
QuickTime animation demonstrating how the mechanism works. Instruments include: Sundial; Clepsydra; Astrolabe; Candle clock; Sandglass; Weight-driven clock; Spring-driven clock; Pendulum
clock; Quartz watch; Cesium atomic clock. See also the article "Measuring Time." more>>
6. Horology: The Index - Fortunat Mueller-Maerki
Internet resources on clocks and time: directories, dictionaries, glossaries, etc.; organizations, information sources (horological homepages; e-mail addresses for cyber horologists; other
electronic media: newsgroups, directories, e-zines, mail lists, bulletin boards; periodicals; libraries and documentation centers; books: publishers, sellers, book lists, bibliographies,
reviews); timepieces; subjects, issues, and topics in horology; and ways of finding and setting the exact time, including time zones and a time zone converter. more>>
7. Time for Time - Darren Dalasta
For teaching the concept of time and how to tell time. Includes games, quizzes, and an interactive class clock for students; lesson plans and worksheets for teachers; history of telling time,
U.S. time zones, world time zones. more>>
8. World Time Zone - iSBiSTER International, Inc.
The exact time in world countries from Afghanistan to Zimbabwe, including, individually, the United States and the Canadian provinces. A software Time Zone Map program that does not require an
active connection to the Internet is also available from this site. more>>
South Elgin Trigonometry Tutor
...My philosophy is to break down complex concepts into digestible chunks that allow the student to build confidence and excel towards their goals. I first began my journey helping others in
college when I realized that I had a strong work ethic that allowed me to teach myself even if the material ...
26 Subjects: including trigonometry, chemistry, Spanish, reading
...A chance to polish up for the SAT Math. We'll see where you are, fill in some gaps, and keep you moving toward that high score! Algebra, graphs, shapes, and trig ... a good chance to review the
basics and polish up for the ACT!
14 Subjects: including trigonometry, geometry, ASVAB, GRE
...Logarithmic Functions and Their Graphs. Properties of Logarithms. Exponential and Logarithmic Equations.
17 Subjects: including trigonometry, reading, calculus, geometry
I am a Software Engineer by profession but like to tutor math courses as a hobby. I am an experienced tutor who has taught Quantitative Aptitude and Analytic Reasoning. I have coached students on
College algebra and trigonometry.
12 Subjects: including trigonometry, geometry, GRE, algebra 1
...My goal is to help all of my students obtain a solid conceptual understanding of the subject they are studying, which provides a foundation to build upon. I consistently monitor progress and
adjust lessons to meet the specific needs of each individual student. Thank you for considering my services.
12 Subjects: including trigonometry, calculus, algebra 2, geometry
Your Favorite Deep, Elegant, or Beautiful Explanation
The annual Edge Question Center has now gone live. This year’s question: “What is your favorite deep, elegant, or beautiful explanation?” Find the answers here.
I was invited to contribute, but wasn’t feeling very imaginative, so I moved quickly and picked one of the most obvious elegant explanations of all time: Einstein’s explanation for the universality
of gravitation in terms of the curvature of spacetime. Steve Giddings and Roger Highfield had the same idea, although Steve rightly points out that Einstein won’t really end up having the final word
on spacetime. Lenny Susskind picks Boltzmann’s explanation of why entropy increases as his favorite explanation, and mentions the puzzle of why entropy was lower in the past as his favorite unsolved
problem — couldn’t have said it better myself. For those of you who prefer a little provocation, Martin Rees picks the anthropic principle.
But as usual, the most interesting responses to me are those from far outside physics. What’s your favorite?
Full text of my entry below the fold.
Einstein Explains Why Gravity Is Universal
The ancient Greeks believed that heavier objects fall faster than lighter ones. They had good reason to do so; a heavy stone falls quickly, while a light piece of paper flutters gently to the
ground. But a thought experiment by Galileo pointed out a flaw. Imagine taking the piece of paper and tying it to the stone. Together, the new system is heavier than either of its components, and
should fall faster. But in reality, the piece of paper slows down the descent of the stone.
Galileo argued that the rate at which objects fall would actually be a universal quantity, independent of their mass or their composition, if it weren’t for the interference of air resistance.
Apollo 15 astronaut Dave Scott once illustrated this point by dropping a feather and a hammer while standing in vacuum on the surface of the Moon; as Galileo predicted, they fell at the same rate.
Subsequently, many scientists wondered why this should be the case. In contrast to gravity, particles in an electric field can respond very differently; positive charges are pushed one way,
negative charges the other, and neutral particles not at all. But gravity is universal; everything responds to it in the same way.
Thinking about this problem led Albert Einstein to what he called “the happiest thought of my life.” Imagine an astronaut in a spaceship with no windows, and no other way to peer at the outside
world. If the ship were far away from any stars or planets, everything inside would be in free fall; there would be no gravitational field to push them around. But put the ship in orbit around a
massive object, where gravity is considerable. Everything inside will still be in free fall: because all objects are affected by gravity in the same way, no one object is pushed toward or away
from any other one. Sticking just to what is observed inside the spaceship, there’s no way we could detect the existence of gravity.
Einstein, in his genius, realized the profound implication of this situation: if gravity affects everything equally, it’s not right to think of gravity as a “force” at all. Rather, gravity is a
feature of spacetime itself, through which all objects move. In particular, gravity is the curvature of spacetime. The space and time through which we move are not fixed and absolute, as Newton
would have had it; they bend and stretch due to the influence of matter and energy. In response, objects are pushed in different directions by spacetime’s curvature, a phenomenon we call
“gravity.” Using a combination of intimidating mathematics and unparalleled physical intuition, Einstein was able to explain a puzzle that had been unsolved since Galileo’s time.
• http://eskesthai.blogspot.com/
Zero-knowledge proofs. This is stolen from Arora & Barak’s fantastic “Computational Complexity – A Modern Approach”, and follows in the vein of the more detailed “How to Explain Zero-Knowledge
Protocols To Your Children” (found here: http://pages.cs.wisc.edu/~mkowalcz/628.pdf)
“As an intuitive example for the power of combining randomization and interaction,
consider the following scenario: Marla has one red sock and one yellow sock, but her
friend Arthur, who is color-blind, does not believe her that the socks have different
colors. How can she convince him that this is really the case?
Here is a way to do so. Marla gives both socks to Arthur, tells him which sock is
yellow and which one is red, and Arthur holds the red sock in his right hand and the
yellow sock in his left hand. Then Marla turns her back to Arthur and he tosses a coin.
If the coin comes up “heads” then Arthur keeps the socks as they are; otherwise, he
switches them between his left and right hands. He then asks Marla to guess whether he
switched the socks or not. Of course Marla can easily do so by seeing whether the red
sock is still in Arthur’s right hand or not.
But if the socks were identical then she would not have been able to guess the answer with probability better than 1/2. Thus if Marla manages to answer correctly in all of, say, 100 repetitions
of this game, then – though colourblind – Arthur can indeed be convinced that the socks have different colors”
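A quick simulation of the sock protocol (an illustrative sketch, not from the quoted text):

```python
import random

def run_protocol(socks_differ, rounds=100, seed=0):
    """Arthur's coin-flip test: Marla must say whether he switched the socks."""
    rng = random.Random(seed)
    for _ in range(rounds):
        switched = rng.random() < 0.5       # Arthur's secret coin flip
        if socks_differ:
            guess = switched                 # Marla sees the colors, so she knows
        else:
            guess = rng.random() < 0.5       # identical socks: she can only guess
        if guess != switched:
            return False                     # one wrong answer and Arthur rejects
    return True

print(run_protocol(socks_differ=True))   # True: Marla always passes
print(run_protocol(socks_differ=False))  # almost surely False (chance 2**-100)
```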
• http://www.gregegan.net
Sticking just to what is observed inside the spaceship, there’s no way we could detect the existence of gravity.
Sticking just to what is observed inside the spaceship over a short time compared to its orbital period, there’s no way we could detect the existence of gravity.
Tidal forces might be extremely weak, and in practice could be damped by air resistance, but in the interior of an airless spaceship full of free-falling objects they will lead to very visible
effects on the time scale of the orbit. An object that’s initially displaced one metre from the centre of the ship will typically travel several metres over one orbit, in some combination of
simple harmonic motion and exponential motion away from the centre. With careful experiments, the astronauts could determine the ship’s orbital period without looking outside.
Sorry to nitpick, since obviously both you and Einstein knew this perfectly well, but this thought experiment gets stated so often without any caveat about observation time that some people end
up believing it’s true on arbitrary time scales.
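The scale of the effect described here can be checked with the closed-form Clohessy–Wiltshire (Hill) solution for relative motion in a circular orbit; the 92-minute period below is an assumed low-Earth-orbit value:

```python
import math

# Clohessy–Wiltshire closed-form solution for an object released at rest
# 1 m radially from the ship's centre, in an assumed circular 92-min orbit.
n = 2 * math.pi / (92 * 60)       # mean motion (rad/s)
x0 = 1.0                          # initial radial offset (m)

def x(t):                         # radial offset: x0 * (4 - 3 cos nt)
    return x0 * (4 - 3 * math.cos(n * t))

def y(t):                         # along-track drift: 6 x0 (sin nt - nt)
    return 6 * x0 * (math.sin(n * t) - n * t)

T = 2 * math.pi / n
print(round(x(T / 2), 1))         # 7.0 m: radial offset peaks at 7 * x0
print(round(y(T), 1))             # -37.7 m of along-track drift after one orbit
```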
For me it would be the covariant formulation of classical electromagnetism. You take Maxwell’s 4 equations, put them in one antisymmetric matrix which is simply related to the four potential. Now
you have an understanding of the EM field in any reference frame.
Why did the 80′s exist?
Just to prove everything else blows. I think that’s pretty beautiful…
in some way…
RE: Andy J’s comment – A similar kind of test using the Stroop effect (http://en.wikipedia.org/wiki/Stroop_effect) is used to verify grapheme-color synesthesia. People who see letters in color
which the rest of us see in B&W read more slowly when the letters are presented in colors different from the mapping their brain uses.
• http://broadspeculations.com
What I find unusual is the interpretations of the contributors of what it means to be a “deep, elegant, and beautiful” explanation.
I think some of them really miss the mark.
Lisa Randall – The Higgs Mechanism. Did I miss the announcement that this has been proven?
Max Tegmark – “What caused our Big Bang? My favorite deep explanation is that our baby universe grew like a baby human—literally.” Interesting to know we understand what caused the Big Bang and
how it grew but (assuming our understanding is even correct) where is the explanation?
Jared Diamond – The Origins of Biological Electricity. Huh? Very interesting but deep and elegant?
Gregory Benford – “I find most beautiful not a particular equation or explanation, but the astounding fact that we have beauty and precision in science at all. That exactness comes from using
mathematics to measure, check and even predict events. The deepest question is, why does this splendor work?” This seems like a deep, elegant question but not an explanation.
Carl Zimmer – “A Hot Young Earth: Unquestionably Beautiful and Stunningly Wrong”. Carl’s choice is a deep, elegant explanation that is wrong? Enough said.
Vilayanur Ramachandran – “What’s my favorite elegant idea? The elucidation of DNA’s structure is surely the most obvious, but it bears repeating. I’ll argue that the same strategy used to crack
the genetic code might prove successful in cracking the “neural code” of consciousness and self. It’s a long shot, but worth considering.” I admit there’s some merit to the DNA structure idea but
when he goes to suggest an interesting hypothesis (that I guess he believes in) is a deep, elegant explanation he might be stretching it. I would say we should at least stick to things that are
pretty well accepted today not things that might be accepted tomorrow.
To me, Leonard Susskind comes closest to the intent of the question.
“Personally my favorites are explanations that get a lot for a little. In physics that means a simple equation or a very general principle. I have to admit though, that no equation or
principle appeals to me more than Darwinian evolution.” As a physicist, he goes on to cite “Boltzmann’s explanation of the second law of thermodynamics: the law that says that entropy never
decreases” as his favorite.
• http://writtentomorrow.tumblr.com
that was one awesome thought experiment.. never heard of it before till now (even though I’m kinda sure that it must be so very popular amongst physicists and students alike).. thanks for putting
it up here..
I would go with W. R. Hamilton’s explanation of plain old rotation in 3d space with Quaternion algebra. It ignited a huge revolution in physics and enabled Maxwell to formulate Electromagnetism –
thinking in complex quaternions. Nature appears to love this algebra, since it keeps showing up – in Pauli’s account of Spin, and again in Isospin, and Weak Isospin. Not to mention that it
inspired Graves to discover Octonion algebra – which does not seem to make much sense to physicists, probably for thinking it ought to be a tool one might use, but I hazard a guess that if
quaternions make sense of space then octonions make sense of particles.
While the double helix of DNA gets a few nods, I like Matt Ridley’s short essay on the change of perception brought by this visual model. The idea of complementary base pairing is so fundamental
to understanding inheritance, inter- and intra-cellular information transfer, and nucleic acid dynamics that one almost forgets its importance because of its ubiquity. It’s the F = ma of biology.
As I perused the essays, I was happy to see that Nigel Goldenfeld chose to write about one of my other favorite explanations: the nature of the genetic code. He highlights an early theoretical
paper (interestingly categorized under Physics) on the code,
Codes without commas
Crick FH, Griffith JS, Orgel LE.
Proc Natl Acad Sci U S A. 1957 May 15;43(5):416-21.
which partly addressed Gamow’s fascinating, though flawed, hypothesis. Equally important, at least to me, were the experimental tests of these theoretical ideas. The paper describing the test
results is among my favorites. Its logic is simple and the experiments are low tech (petri dishes and toothpicks), yet the paper reads like a tightly woven proof of a mathematical theorem —
elegant. Moreover, Crick et al. use language that should be familiar to many. The only arcane word that pops up is “cistron”: they use this term instead of “gene” because it has a precise
experimental meaning (derived from cis-/trans-complementation experiments), whereas the term “gene” had (and still has) a flexible and context dependent meaning.
General nature of the genetic code for proteins
Crick FH, Barnett L, Brenner S, Watts-Tobin RJ.
Nature. 1961 Dec 30;192:1227-32
I’m always surprised that the Principle of Least Action, or even just the action itself, isn’t brought up in these sorts of things. If you wanted a famous source, Dr. F was a big fan, and for
good reason. The idea of energy is already pretty deep and elegant, a concept which is only as intuitive as our constant study and use of it has made it since its early physical definitions and
essentially the first step on physics’ path towards the beautiful symmetries it trades in today. But somehow, it gets even better: the idea that the universe prefers energy of motion and the
energy of possible motion to be as close to each other as possible over the lifetime of any system is as elegant as it gets. And deep – it’s a principle applicable to both quantum and classical
systems, which is a rarity.
Just wanted to give a little plug to my personal favorite “deep and elegant explanation”.
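The commenter's claim can be checked numerically. A hypothetical sketch (everything below is illustrative, not from the thread): discretize the action S = ∫(kinetic − potential) dt for a particle in uniform gravity, then compare the true free-fall path with a perturbed path sharing the same endpoints. The true path has the smaller action.

```python
import math

# Discretized action S ≈ sum_i (m/2 * v_i^2 - m*g*y_i) * dt for a 1-D
# particle in uniform gravity, on paths with fixed endpoints y(0) = y(T) = 0.
g, m, T, n = 9.8, 1.0, 2.0, 2000
dt = T / n
ts = [i * dt for i in range(n + 1)]
v0 = g * T / 2  # launch speed that brings the particle back to y = 0 at t = T

def action(path):
    s = 0.0
    for i in range(n):
        v = (path[i + 1] - path[i]) / dt
        y_mid = 0.5 * (path[i] + path[i + 1])
        s += (0.5 * m * v * v - m * g * y_mid) * dt
    return s

true_path = [v0 * t - 0.5 * g * t * t for t in ts]
# Perturb with a bump that vanishes at both endpoints.
bumped = [y + 0.3 * math.sin(math.pi * t / T) for y, t in zip(true_path, ts)]
print(action(true_path) < action(bumped))  # → True
```

Any endpoint-preserving perturbation raises the discretized action, since the first variation vanishes on the true path and the second variation of the kinetic term is positive.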
My favorite deep and elegant explanations are the ones I understand. While it is easy to take someone else’s word that some result has been established or that something is well-understood, it is
hard to take someone else’s assessment that it is deep, elegant, etc.
Does mathematics count? It’s not really an explanation, more like a technique but I would nominate the diagonal argument made famous by Gödel and its various incarnations. Some of those do not
work but lead to the liar’s paradox. The fact that the diagonal argument can become paradoxical I think does say something… important? Deep? Well…. I like it anyway.
A follow up to Greg Egan’s comment. If Greg won’t plug his book, then I will.
This very phenomenon (gravitational tidal forces observable over long periods of time) was used to great dramatic effect in his recent novel Incandescence.
Hackneyed, overdone but still: E = mc². Explains the relationship of matter to energy. Elegant in its economy of means for the vast domain it explains. Deep in its extraordinary impact on
modern physics.
For pure elegance, Euler’s postulate.
• http://www.savory.de/blog.htm
• http://terpconnect.umd.edu/~sgralla/
The equivalence principle is a healthy #2, but of course #1 is Newton’s realization that apples fall and planets orbit for the same reason. The concept of the same force causing both “linear” and
“circular” motion is amazingly unintuitive (as well as deep, elegant, and beautiful).
My favorite was always the proof, by the Pythagorean theorem, of the time dilation formula of special relativity, based on the assumption that the speed of light is invariant. This is based on
using a clock that “ticks” each time a beam of light bounces between two mirrors, where the two mirrors are each parallel to the direction of motion. For a moving clock, the beam of light makes a
zigzag pattern and the Pythagorean theorem can be used to determine the length of each diagonal leg of the zigzag — voilà, the time dilation formula!
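For the record, the zigzag computation goes like this (the standard textbook derivation, with mirror separation L = cΔt/2, clock speed v, tick interval Δt in the clock's rest frame and Δt′ in the frame where it moves): the light travels the hypotenuse of a right triangle with vertical leg cΔt/2 and horizontal leg vΔt′/2, so

```latex
\left(c\,\tfrac{\Delta t'}{2}\right)^{2}
  = \left(c\,\tfrac{\Delta t}{2}\right)^{2} + \left(v\,\tfrac{\Delta t'}{2}\right)^{2}
\quad\Longrightarrow\quad
\Delta t' = \frac{\Delta t}{\sqrt{1 - v^{2}/c^{2}}} = \gamma\,\Delta t .
```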
Noether’s theorem. I’ve always found how it explains the conservation (or even existence) of the notion called energy, momentum, etc. in terms of symmetries particularly beautiful.
Feynman posed a similar question in his Einstein lectures (Six Easy Pieces, ch 1):
“If in some cataclysm, all of scientific knowledge were to be destroyed, and only one sentence passed on to the next generation of creatures, what statement would contain the most information in
the fewest words?”
“That all things are made of atoms” was his reply.
Nonlocality. Clearly nonlocality is counterintuitive to everyday experience and there have been many speculative books for an amateur audience on this topic. Alain Aspect seems to have confirmed
nonlocality as an explanation for quantum phenomena but it is fundamentally unclear what this means. In a sense then, this is an explanation that opens up a whole series of additional puzzles
that contradict the usual commonsense intuitions with which we perceive everyday phenomena.
Me too for Noether’s theorem.
It belongs in the future; it will be in the proof of Goldbach’s Conjecture: ‘Every even number is the sum of two prime numbers’.
As beautiful as GR is, don’t we suspect that a Deeper underlying source for its differential geometric formulation is in fact thermodynamics ? Einstein himself mused that,
“Thermodynamics is the only physical theory of universal content which will never be overthrown”.
Hawking, Ted Jacobson, T. Padmanabhan, & E.Verlinde have shown such connections underlie the esoteric math of gravity/differential geometry, & empower it with the physical reality of heat. Thus
the synergy of the two manifests from cosmology to coffee cup.
• http://www.physicistonedge.blogspot.com
My favorite is the one-liner mentioned by Carl Sagan years ago: “we are all ‘star-stuff’”. It’s a very simple but powerful statement of how a collapsing star creates the heavy atoms needed for
life and for the rocky planets on which life exists (that we know of). There’s more to say on this but it is best said by Sagan himself in an episode of ‘Cosmos’.
Debra: I think its much more beautifully Sung, by Joni Mitchell in her `Woodstock’, beating Sagan by ~10yrs: “We’re StarDust…We are billion yr-old Carbon, & we got to get ourselves back to the
I second #12 and #20: Least Action and Noether’s theorem.
I really loved that I saw the scientific method on here.
Underappreciated and taken for granted, me thinks.
Rolf Landauer’s explanation of why (and how) information is physical. Nothing is more intriguing to me than the eerie relation between something we used to think of as a product of our own
reflective recognition and categorization of patterns, and the very real and unmistakable flow of energy.
My vote is for Shannon’s information theory.
I am not sure I have one.
But I can pose a couple of questions.
Could – theoretically – strings or gravitons be the same as dark energy and dark matter, if they were the first matter that scattered out from the Big Bang event, and could they act as a
“force field” pulling the clumsier “ordinary mass” to or from its final position?
I have two candidates for deep, elegant, or beautiful explanation. In probability theory, one explanation of data is the “Law of Large Numbers”.
In the philosophy of measurement, one explanation of measurement is the “Law of Stray Numbers”. This law has been elegantly expressed by Shawn Achor as, “We know that’s a measurement error
because it’s messing up my data.”
I’ve been working my way through Edge’s Deep/Elegant/Beautiful list, and about 20% of the entries have made me pause to learn more. There is lots of beauty and elegance out there, and my main
‘take-away’ so far is how lucky we are to be able to perceive and delight in it.
Also, ‘deep’ has new meaning for me. It used to be synonymous with ‘difficult’ or ‘obscure’. Now I see Deep as meaning ‘revealing’. A truly equal peer with Beauty and Elegance when defining the
degree of satisfaction we feel concerning our attempts to explain our world.
I’m about half-way through now. Yes, it’s been weeks. But it seems things keep getting more interesting the further I go. A wonderful journey with many surprises along the way. My favorite
surprise so far must be the results of combining statistics and graph theory with gastronomy. (No names or links: Look for it!) That simply tickled my brain in all the right places, and makes me
want to be more adventurous in the kitchen.
The first definition given in Euclid's Elements of Geometry: "A point is that which has no part"
Ferris wheel problem calculus
November 3rd 2008, 10:21 AM #1
Junior Member
May 2008
Ferris wheel problem calculus
Hi, can someone help. A ferris wheel has a diameter of 40ft and its axle is 25 ft above the ground. Three seconds after it starts, your seat is at a high point. The wheel makes 3 rev/min. I know
that the equation is 25 + 20cos((pi/10)(t-3)). What is the fastest that the function changes? Where is the seat when the function changes the fastest? Please explain the last 2 questions.
Last edited by pantera; November 3rd 2008 at 10:32 AM. Reason: Found answer.
You have:
$h(t)=25 + 20 \cos(\theta(t))$
The rate of change of height is $h'(t)$ and you need to find the maximum of this
$h'(t) = -20 \sin(\theta(t))\ \frac{d\theta}{dt} = -2\pi \sin(\theta(t))$
and the maximum absolute value of this occurs when $\theta=\pi/2$ and $\theta=3\pi/2$
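A quick numerical check of the answer above (an illustrative sketch, not part of the original reply): the speed of the height function peaks at 2π ≈ 6.28 ft/s, exactly when the seat passes the axle height of 25 ft.

```python
import math

def h(t):
    # Seat height: axle 25 ft up, radius 20 ft, period 20 s, high point at t = 3.
    return 25 + 20 * math.cos(math.pi / 10 * (t - 3))

def hprime(t):
    # Derivative of h: -20 * sin(theta) * (pi/10) = -2*pi*sin(theta).
    return -2 * math.pi * math.sin(math.pi / 10 * (t - 3))

# Scan one full 20-second revolution for the largest |h'(t)|.
ts = [3 + 20 * i / 10000 for i in range(10001)]
t_max = max(ts, key=lambda t: abs(hprime(t)))
print(round(abs(hprime(t_max)), 3), round(h(t_max), 1))  # → 6.283 25.0
```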
November 3rd 2008, 01:47 PM #2
Grand Panjandrum
Nov 2005
SailNet Community - knots versus mph
RookieHunter 11-28-2002 07:16 PM
How do they convert? I haven't seen this in any of my reading.
DuaneIsing 11-29-2002 02:56 AM
A statute mile is 5,280 feet and a nautical mile is 6,076 feet. Since mph refers to statute miles and knots refers to nautical miles, just take the ratio of 1.15 (close enough but not quite exact).
So if a boat is traveling at 10 knots, it is going 11.5 mph, or 15% more in that unit of measurement.
BTW, I couldn't recall the exact number of feet for the nautical mile, so I did a quick web search with the words "nautical" "statute" and "knot" and up popped many useful sites.
928frenzy 11-29-2002 05:30 AM
A nautical mile is 1/60th of the length of a degree (or one minute of arc) of a great circle of the earth.
The nautical mile is often miscalled "Knot". A knot is not a measure of distance, but a measure of speed equal to one nautical mile per hour. The admiralty knot is equal to 6080 feet per
hour. Therefore a knot is 6080/5280 = 1.1515151... to a statute mile.
~ Happy sails to you ~ _/) ~
928frenzy 11-29-2002 08:10 AM
That last sentence should have read, "a knot is 1.515151... statute miles per hour.
~ Happy sails to you ~ _/) ~
tsenator 11-29-2002 08:15 AM
A knot isn't 1.5 miles per hour. It's close to 1.1 MPH.
gershel 11-29-2002 12:51 PM
A knot is a measurement of speed.One knot is one nautical mile per hour. A nautical mile equals 1.15 statute mile.
928frenzy 11-30-2002 04:01 AM
I typed faster than I thought. There should have been a '1' after the decimal point and in front of the '5'. Thanks for the correction.
An admiralty nautical mile is 1.15151... statute miles. A knot is 1.15151... miles per hour.
~ Happy sails to you ~ _/) ~
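Putting the thread's arithmetic into code (a small illustrative helper, not from the thread, using the 6,076 ft international nautical mile quoted earlier; the admiralty figure of 6,080 ft gives the ratio 1.1515... instead):

```python
NAUTICAL_MILE_FT = 6076  # international nautical mile, to the nearest foot
STATUTE_MILE_FT = 5280

def knots_to_mph(knots):
    # 1 knot = 1 nautical mile per hour ≈ 1.1508 statute miles per hour.
    return knots * NAUTICAL_MILE_FT / STATUTE_MILE_FT

def mph_to_knots(mph):
    return mph * STATUTE_MILE_FT / NAUTICAL_MILE_FT

print(round(knots_to_mph(10), 2))  # → 11.51
```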
MathGroup Archive: January 2001 [00205]
idiom for recurrence relations
• To: mathgroup at smc.vnet.net
• Subject: [mg26714] idiom for recurrence relations
• From: maharrix at my-deja.com
• Date: Thu, 18 Jan 2001 00:57:18 -0500 (EST)
• Sender: owner-wri-mathgroup at wolfram.com
I have a problem which is mostly esthetic, but could have efficiency
repercussions. First, the general question: What is the best way to
implement recurrence relations (e.g. Fibonacci[n])?
The obvious and simplest way is to define base cases and an inductive
case (I'll use Fibonacci as the generic example:
F[0]:=0; F[1]:=1; F[n_]:=F[n-1]+F[n-2]
). This can be sped up using memoization F[n_]:=F[n]=... Great, but this
fills up internal memory (nothing can get garbage collected since all
the computed values must hang around forever; this can be bad when you
have lots of recurrence relations, graphics and a long session).
So the next solution would be to implement it as an imperative program
keeping only the necessary values to compute the next in the sequence
f1 = 0; f2 = 1; i = 1; While[i <= n, temp = f2; f2 = f2 + f1; f1 = temp; i++]; f1
). Again, great, but the Mathematica docs repeatedly say that list operations
are much faster than imperative ones.
So my solution is to use Nest:
F[n_] := First@Nest[{#[[2]], #[[2]] + #[[1]]} &, {0, 1}, n]
Is this the best (fastest, simplest, most appropriate, or most memory
efficient, etc.) way to do it? It seems somehow ugly (too many [[, #,
etc.) but it is shorter in number of characters to encode it (not what I
would usually consider highest on my list of things to optimize (but
that's not to say terribly low either))
The whole point is to be able to create recurrence relations easily in a
single function efficiently. I'd also like to create them as lambdda
functions (as above, as Function[pair, {pair[[2]],pair[[2]]+pair[[1]]}]
So, how do the above rate and what are the reasonable alternatives?
Thanks for any help,
MATH M472 3089 Numerical Analysis II
Mathematics | Numerical Analysis II
M472 | 3089 | Wang
P: M301 or M303, M311, M343, and Computer Science C301 or FORTRAN
programming experience (students with experience in programming but not
FORTRAN should consult the instructor). Interpolation and approximation of
functions, numerical integration and differentiation, solution of non-linear
equations, acceleration and extrapolation, solution of systems of linear
equations, eigenvalue problems, initial and boundary value problems for
ordinary differential equations, and computer programs applying these
numerical methods.
Solving a multiobjective possibilistic problem through compromise programming.
(English) Zbl 1057.90056
Summary: Real decision problems usually consider several objectives that have parameters which are often given by the decision maker in an imprecise way. It is possible to handle these kinds of
problems through multiple criteria models in terms of possibility theory.
Here we propose a method for solving these kinds of models through a fuzzy compromise programming approach. To formulate a fuzzy compromise programming problem from a possibilistic multiobjective
linear programming problem the fuzzy ideal solution concept is introduced. This concept is based on soft preference and indifference relationships and on canonical representation of fuzzy numbers by
means of their α-cuts. The accuracy between the ideal solution and the objective values is evaluated handling the fuzzy parameters through their expected intervals, and a definition of discrepancy
between intervals is introduced in our analysis.
90C70 Fuzzy programming
90C29 Multi-objective programming; goal programming
90B50 Management decision making, including multiple objectives
Reference for "almost all graphs have diameter 2"
The property in the title is well-known. I am trying to find an original reference to its first appearance in print. The 4th edition of Graphs & Digraphs by Chartrand and Lesniak lists this as
Theorem 13.6 and says that it's a generalization of a result by Gilbert, but gives no further reference.
graph-theory reference-request
2 Answers
I think you will find it in Moon, J. W.; Moser, L. Almost all (0,1) matrices are primitive. Studia Sci. Math. Hungar. 1 (1966) 153–156. But I don't have time to visit the library to be
sure and I don't see it online.
It is certainly in Burtin, Ju. D. Asymptotic estimates of the diameter and the independence and domination numbers of a random graph. (Russian) Dokl. Akad. Nauk SSSR 209 (1973), 765–768.
I guess the Gilbert mentioned is Gilbert, E. N. Random graphs. Ann. Math. Statist. 30 (1959) 1141–1144. It isn't clear exactly why...
The introductory paragraph of a 1981 TAMS paper by Bollobas available at ams.org/journals/tran/1981-267-01/S0002-9947-1981-0621971-7/… supports the likely Moon-Moser origin. Also, an
author search on "e.n. gilbert" at projecteuclid.org will produce his "Random Graphs" article. – Barry Cipra Dec 3 '12 at 17:57
I checked the Moon Moser paper, that is indeed your reference. I've emailed you a somewhat crappy but legible scan of the paper. – Louigi Addario-Berry Dec 3 '12 at 19:59
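As a quick empirical illustration of the statement in the title (a sketch, not from the thread): in $G(n, 1/2)$ the probability that some non-adjacent pair has no common neighbour is at most $\binom{n}{2}(3/4)^{n-2}$, which vanishes as $n \to \infty$, so almost every graph has diameter at most 2. A small Monte Carlo check:

```python
import random
from itertools import combinations

def diameter_at_most_2(n, rng):
    # Sample G(n, 1/2), then check that every non-adjacent pair of
    # vertices has at least one common neighbour.
    adj = [[False] * n for _ in range(n)]
    for i, j in combinations(range(n), 2):
        adj[i][j] = adj[j][i] = rng.random() < 0.5
    return all(adj[i][j] or any(adj[i][k] and adj[j][k] for k in range(n))
               for i, j in combinations(range(n), 2))

rng = random.Random(1)
trials = 200
hits = sum(diameter_at_most_2(40, rng) for _ in range(trials))
print(hits, "of", trials)  # already nearly all of them at n = 40
```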
The result you asked about follows instantly from Fagin's proof of the zero-one law for finite graphs. He shows that all of Gaifman's extension axioms have asymptotic probability 1, and
"diameter $\leq 2$" is essentially one of the extension axioms. Fagin's paper is "Probabilities on finite models" [J. Symbolic Logic 41 (1976) pp.50-58]. I believe the zero-one law was
proved earlier by four Russians, but I don't have access to their paper and don't know whether their method immediately implies the "diameter $\leq2$" result.
The four Russians are Y.V. Glebskii, D.I. Kogan, M.I. Liogon'kii, and V.A. Talanov. The paper is "Range and degree of realizability of formulas in the restricted predicate calculus"
[Kibernetika (Kiev) 1969, no. 2, 17-28; translation in Cybernetics (Kiev) 5 (1969) 142-154]. I've been told that this paper is rather difficult to read. – Andreas Blass Dec 3 '12 at 13:58
Thanks! I actually know this derivation, but I wanted a encapsulated reference to this precise fact. – Felix Goldberg Dec 3 '12 at 14:03
Math in the Media - mmarc-12-2009-media
Order from chaos in Monterey Bay
High-frequency radar measurements of surface currents in Monterey Bay at 1900 GMT on December 24 and 25, 1999. Tidal effects have been subtracted. These are parts of stills from an animation
available on the Department of Oceanography, Naval Postgraduate School website and used with permission.
The New York Times Science section for Tuesday September 28, 2009 featured a report by Bina Venkataraman on recent progress in analyzing complex fluid flows, and in particular the "seemingly jumbled
currents" in Monterey Bay. "Assisted by instruments that can track in fine detail how parcels of fluid move, and by low-cost computers that can crunch vast amounts of data quickly, researchers have
found hidden structures beyond Monterey Bay, structures that explain why aircraft meet unexpected turbulence, why the air flow around a car causes drag and how blood pumps from the heart's
ventricles." The "hidden structure" is a Lagrangian coherent structure (LCS), which can be extracted from the record of the surface flow vectorfield v(x,y,t) over time. Very briefly (see Shawn
Shadden's LCS Tutorial) for each time t a finite-time Lyapunov analysis constructs a function σ(x,y,t): the maximum stretching between points nearby to (x,y) as they move following the vectorfield v
starting at time t. For a fixed t, the Lagrangian coherent structure associated to v is the system of ridges of σ. Shadden explains ridges in terms of walking on the graph of σ: "Intuitively, a ridge
is a curve such that if somebody [were] walking along a ridge, then stepping in the direction transverse to the ridge meant that they would be stepping down, and additionally the topography would
drop off most steeply in that direction." Under suitable conditions, these ridges are time-dependent invariant manifolds: the ridges move with time, but particles on a ridge flow along that ridge,
and flux across a ridge is very small (on the order of experimental error in the examples discussed). The consequence is that a ridge acts as a moving separatrix: points that start on one side stay
on that side. For Monterey Bay, particles that start outside the LCS (black curve in the image below) will eventually be swept out to sea, while particles inside will will remain in the bay much
longer. As Venkataraman remarks, there are applications to controlling pollution, tracking the spread of oil spills, and even improving search-and-rescue operations for people lost at sea. (Nice
animation of an elementary example on page 7 of the LCS Tutorial; recommended survey lecture by Jerrold Marsden on the MSRI website.)
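To make the σ construction concrete, here is a toy finite-time Lyapunov computation (an illustrative sketch, nothing to do with the Monterey Bay data): for the steady linear saddle flow v(x, y) = (x, −y), nearby points separate like e^T, so (1/T)·log(maximum stretching) should come out close to 1 everywhere.

```python
import math

def flow(x, y, T, steps=2000):
    # Integrate dx/dt = x, dy/dt = -y with simple Euler steps.
    dt = T / steps
    for _ in range(steps):
        x, y = x + x * dt, y - y * dt
    return x, y

def ftle(x, y, T, eps=1e-6):
    # Finite-difference gradient of the flow map; for this diagonal flow
    # the largest singular value is just the larger column stretch.
    x1, y1 = flow(x + eps, y, T)
    x2, y2 = flow(x - eps, y, T)
    x3, y3 = flow(x, y + eps, T)
    x4, y4 = flow(x, y - eps, T)
    stretch_x = math.hypot(x1 - x2, y1 - y2) / (2 * eps)
    stretch_y = math.hypot(x3 - x4, y3 - y4) / (2 * eps)
    return math.log(max(stretch_x, stretch_y)) / T

print(round(ftle(0.3, 0.7, T=4.0), 2))  # ≈ 1.0, the saddle's expansion rate
```

In the actual LCS pipeline, this exponent is computed on a grid over the domain and the ridges of the resulting scalar field σ are extracted.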
Surface current vectorfield, the function σ color-coded from dark orange (max) to dark blue (min), and the Lagrangian Coherent Structure (black) in Monterey Bay, 0900 GMT on August 7, 2000. Image
courtesy of François Lekien (Université Libre de Bruxelles) and Chad Coulliette (Caltech).
Math Ed in the City Journal
The Manhattan Institute is a "conservative, market-oriented think tank" (according to Wikipedia) based on Vanderbilt Avenue in New York City. It publishes City Journal, a quarterly magazine of Urban
Affairs. The Autumn 2009 issue contains "Who Needs Mathematicians for Math, Anyway?" by Sandra Stotsky; Stotsky served on the National Mathematics Advisory Panel, created in 2006 by Executive Order;
their report was submitted to the President and the Secretary of Education on March 13, 2008. Stotsky's article, deliriously illustrated by Arnold Roth, chronicles the history of this report and of
its uniformly negative reception from the Mathematics Education establishment.
Prof. Stotsky starts back in 1989, with the NCTM Standards. [Unfortunately the Standards and the 2000 Principles and Standards are available online to NCTM members only]. Her primary criticism is
that, as an outgrowth of progressive and politically-correct thought during the 1970s and 1980s, the Standards had "underlying goals--never made clear to the general public--[which] were social, not
academic." She cites Alan Schoenfeld, prominent among the authors of the high school standards in the report: "the traditional curriculum was a vehicle for . . . the perpetuation of privilege." It is
clear that the battle lines were already being drawn, or at least perceived, as much on political/class lines as on mathematical ones. The National Panel was "composed of mathematicians, cognitive
psychologists, mathematics educators, and education researchers" but it was chartered by a conservative president. It discovered (not surprisingly) that its recommendations could be vigorously
rejected as an initiative of the "Bush Administration and US financial/corporate elites" wanting to bolster "capital's efforts to shore up the US's weakening economic global position" and not to
benefit "the majority of the US people--particularly marginalized and excluded students of color and low-income students." (these last quotes from Eric Gutstein, a mathematics educator at UIC). So it
is also not surprising that when "The panel found little if any credible evidence supporting the teaching philosophy and practices that math educators have promoted in their ed-school courses and
embedded in textbooks for almost two decades" the answer came back that the report was based on "a strict and narrow definition of 'scientific evidence' and an almost exclusive endorsement of
quantitative methods at the expense of qualitative approaches." (Anthony Kelly, Professor of Education, GMU).
Stotsky is not optimistic about the near future. The organizations in charge of drawing up national standards for K-12, the National Governors Association and the Council of Chief State School
Officers (supported by DoE and NEA), "have not yet invited a single mathematical or science society" to advise them. "And even if a new Congress or Secretary of Education were to support the panel's
recommendations, it will be essentially business as usual in the public schools so long as math educators, joined by assessment experts and technology salesmen, continue to shape the curriculum."
Homomorphic encryption
Craig Gentry submitted his thesis "A Fully Homomorphic Encryption Scheme" to the Stanford Computer Science Department in September 2009. The talk he gave on the topic at STOC 2009 (The 41st annual
ACM symposium on Theory of computing) back in June does not seem to have penetrated the mainstream media at the time, but raised a considerable stir in online newsletters and lists associated to the
world of data security: a very useful "popular explanation" by Hal Finney on the Cryptography mailing list (June 16), along with reports in Voltage Superconductor (June 24), Forbes.com (June 24),
eWeek.com (June 25) and Computerworld (June 25). There is also a useful report on the IBM website. The story finally made it to BusinessWeek in a posting ("IBM's Encryption Breakthrough for the Web")
by Stephen Baker on September 30.
The commonly used RSA code is only homomorphic with respect to multiplication. If (N, k) is your public key, you encode x as x^k mod N. Since x^k y^k = (xy)^k, and since (a mod N)(b mod N) ≡ ab (mod
N), it follows that the code for xy is the product, mod N, of the codes for x and y. So you could correctly multiply two encrypted numbers without ever knowing what the numbers were. If an encryption is
fully homomorphic, you can send your tax accountant an encrypted copy of all your financial data, and get back an encrypted copy of the amount you owe, without revealing anything about your actual
income and expenses. (This example from Andy Green's article on Forbes.com).
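The multiplicative property is easy to verify numerically. The sketch below uses deliberately tiny, insecure parameters (the primes, the exponent and the messages are all made up for illustration); real RSA keys run to thousands of bits.

```python
# Toy RSA to illustrate multiplicative homomorphism. Parameters are
# illustrative only; they offer no security whatsoever.
p, q = 61, 53
N = p * q                  # public modulus (3233)
k = 17                     # public exponent; the public key is (N, k)
phi = (p - 1) * (q - 1)
d = pow(k, -1, phi)        # private exponent (Python 3.8+ modular inverse)

def encrypt(x):
    return pow(x, k, N)    # x --> x^k mod N

def decrypt(c):
    return pow(c, d, N)

x, y = 7, 12
# Multiply the ciphertexts without ever seeing x or y:
c_product = (encrypt(x) * encrypt(y)) % N
assert decrypt(c_product) == (x * y) % N    # recovers 84
```

Addition enjoys no such relation under RSA, which is why a fully homomorphic scheme, supporting both operations to arbitrary depth, was such a long-standing open problem.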
The trouble with the fully homomorphic encryption schemes known before Gentry is that after a small number of operations they lose accuracy and cannot be reliably decrypted. Gentry managed to devise
an initial encryption (a lattice encryption, not an RSA code; instead of factorization it uses a different hard problem) that was homomorphic enough to implement its own decrypting algorithm, plus a
little extra. Hal Finney: "I have to go back to Gödel's and Turing's work to think of a comparable example exploiting the power of self-embedding."
Once you have a "homomorphic enough" encryption algorithm E, Gentry explains how to homomorphically implement a function f of arbitrary complexity; this is illustrated in the "bootstrapping" diagram
below. You choose enough different (public key, private key) pairs so that the "little extra"s add up to enough for your job. You encrypt the nth private key sk[n] using the (n + 1)st public key pk[n
+1], and send them all along with your information I, which you have encrypted using pk[1]. Suppose pk[1] has exhausted its homomorphic potential, and has implemented the first steps of f ; call
these f[1]. Your accountant over-encrypts f[1](I) (already pk[1]-encrypted) using pk[2], and uses the homomorphic property of E, along with the pk[2]-encoded copy of sk[1] which you sent along, to
undo the pk[1] encryption "inside" pk[2]. The next "little extra" can be used to homomorphically implement the next part of your function; call this f[2]. And so on.
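The bookkeeping in this key-chain argument can be mimicked with a purely illustrative, non-cryptographic toy: pretend each ciphertext carries a noise budget, each homomorphic step spends one unit, and "recrypting" under the next key pair restores the budget. The DEPTH constant, the Ciphertext class and the step functions below are all invented for the sketch and have nothing to do with Gentry's actual lattice construction.

```python
# Non-cryptographic toy model of bootstrapping: a noise budget stands in
# for ciphertext noise, and recrypt() stands in for homomorphically
# evaluating the decryption circuit under the next public key.
DEPTH = 2  # homomorphic depth of the toy scheme: steps allowed per key

class Ciphertext:
    def __init__(self, value, key_id, budget=DEPTH):
        self.value, self.key_id, self.budget = value, key_id, budget

def hom_step(ct, fn):
    # one homomorphic evaluation step; it spends one unit of noise budget
    assert ct.budget > 0, "ciphertext too noisy to evaluate further"
    return Ciphertext(fn(ct.value), ct.key_id, ct.budget - 1)

def recrypt(ct):
    # switch from pk[n] to pk[n+1], refreshing the budget; the real scheme
    # uses the pk[n+1]-encrypted copy of sk[n] to undo pk[n] "inside" pk[n+1]
    return Ciphertext(ct.value, ct.key_id + 1)

# f expressed as a composition f[1], f[2], ... of budget-sized pieces
steps = [lambda v: v + 10, lambda v: v * 3, lambda v: v - 5,
         lambda v: v * 2, lambda v: v + 1]

ct = Ciphertext(7, key_id=1)           # information I, encrypted under pk[1]
for fn in steps:
    if ct.budget == 0:
        ct = recrypt(ct)               # pk[n] exhausted: move to pk[n+1]
    ct = hom_step(ct, fn)

assert ct.value == ((7 + 10) * 3 - 5) * 2 + 1   # f(I) = 93, never decoded en route
```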
How soon can we buy it? Brian Prince at eWeek.com quotes Gentry: "Before it becomes a tool, more theoretical work will need to be done to make it more efficient. But now that researchers know that
fully homomorphic encryption is possible and have an actual construction that they can sink their teeth into, I think there is reason to be optimistic that the efficiency will be improved
dramatically." Gentry's May 14 Fields Institute lecture on this work is available online.
Gentry's fully homomorphic encryption scheme. Information I is manipulated I --> f(I) without being decoded. The process uses a finite-depth homomorphic encryption algorithm E of the
self-decrypting type described by Gentry, with public keys pk[1], pk[2], ... and corresponding private (secret) keys sk[1], sk[2], ..., the number depending on the complexity of f. The function f is
expressed as the composition of f[1], f[2], f[3], ... so that, for example, the complexity of f[3] plus the complexity of the decoding algorithm decrypt: [sk[2], pk[2](f[2] f[1](I))] --> f[2] f[1](I)
does not exceed the homomorphic depth of E. Each of the private keys sk[1], sk[2], ... is protected by encryption with the next public key on the list. The dashed arrows represent functions
implemented homomorphically on encrypted information.
Tony Phillips
Stony Brook University
tony at math.sunysb.edu
Trends In Economics: A Calculus of Risk
Editor's Note: This story was originally published in the May 1998 edition of Scientific American. We are posting it in light of recent news involving Lehman Brothers and Merrill Lynch.
Months before El Niño–driven storms battered the Pacific Coast of the U.S., the financial world was making its own preparations for aberrant weather. Beginning last year, an investor could buy or
sell a contract whose value depended entirely on fluctuations in temperature or accumulations of rain, hail or snow.
These weather derivatives might pay out, for example, if the amount of rainfall at the Los Angeles airport ranged between 17 and 27 inches from October through April. They are a means for an insurer
to help provide for future claims by policyholders or a farmer to protect against crop losses. Or the contracts might allow a heating oil supplier to cope with a cash shortfall from a warmer than
expected winter by purchasing a heating degree-day floor—a contract that would compensate the company if the temperature failed to fall below 65 degrees as often as expected. “We’re big fans of El
Niño because it’s brought us a lot of business,” comments Andrew Freeman, a managing director of Worldwide Weather Trading, a New York City–based firm that writes contracts on rain, snow and temperature.
Weather derivatives mark an example of the growing reach of a discipline called financial engineering. This bailiwick of high-speed computing and the intricate mathematical modeling of
mathematicians, physicists and economists can help mitigate the vagaries of running a global business. It entails the custom packaging of securities to provide price insurance against a drop in
either the yen or the thermometer. The uncertainties of a market crash or the next monsoon can be priced, divided into marketable chunks and sold to someone who is willing to bear that risk—in
exchange for a fee or a future stream of payments. “The technology will effectively allow you to completely manage the risks of an entire organization,” says Robert A. Jarrow, a professor of finance
at Cornell University.
The engineering of financial instruments has emerged in response to turbulence during recent decades in ever more interconnected world markets: a result of floating exchange rates, oil crises,
interest- rate shocks and stock-market collapses.
The creative unleashing of new products continues with increasingly sophisticated forms of securities and derivatives—options, futures and other contracts derived from an underlying asset, financial
index, interest or currency exchange rate. New derivatives will help electric utilities protect against price and capacity swings in newly deregulated markets. Credit derivatives let banks pass off
to other parties the risk of default on a loan. Securities that would help a business cope with the year 2000 bug have even been contemplated. This ferment of activity takes place against a tainted background.
The billions of dollars in losses that have accumulated through debacles experienced by the likes of Procter & Gamble, Gibson Greetings and Barings Bank have given derivatives the public image of
speculative risk enhancers, not new types of insurance. Concerns have also focused on the integrity of the mathematical modeling techniques that make derivatives trading possible.
Despite the tarnish, financial engineering received a valentine of sorts in October. The Nobel Prize for economics (known formally as the Bank of Sweden Prize in Economic Sciences) went to Myron S.
Scholes and Robert C. Merton, two of the creators of the options-pricing model that has helped fuel the explosion of activity in the derivatives markets.
Options represent the right (but not the obligation) to buy or sell stock or some other asset at a given price on or before a certain date. Another major class of derivatives, called forwards and
futures, obligates the buyer to purchase an asset at a set price and time. Swaps, yet another type of derivative, allow companies to exchange cash flows—floating-interest-rate for fixed-rate
payments, for instance. Financial engineering uses these building blocks to create custom instruments that might provide a retiree with a guaranteed minimum return on an investment or allow a utility
to fill its future power demands through contractual arrangements instead of constructing a new plant.
Creating complicated financial instruments requires accurate pricing methods for the derivatives that make up their constituent parts. It is relatively easy to establish the price of a futures
contract. When the cost of wheat rises, the price of the futures contract on the commodity increases by the same relative amount. Thus, the relationship is linear. For options, there is no such
simple link between the derivative and the underlying asset. For this reason, the work of Scholes, Merton and their deceased colleague Fischer Black has assumed an importance that prompted one
economist to describe their endeavors as “the most successful theory not only in finance but in all of economics.”
Einstein and Options
The proper valuation of options had perplexed economists for most of this century. Beginning in 1900 with his groundbreaking essay “The Theory of Speculation,” Louis Bachelier described a means to
price options. Remarkably, one component of the formula that he conceived for this purpose anticipated a model that Albert Einstein later used in his theory of Brownian motion, the random movement of
particles through fluids. Bachelier’s formula, however, contained financially unrealistic assumptions, such as the existence of negative values for stock prices.
Other academic thinkers, including Nobelist Paul Samuelson, tried to attack the problem. They foundered in the difficult endeavor of calculating a risk premium: a discount from the option price to
compensate for the investor’s aversion to risk and the uncertain movement of the stock in the market.
The insight shared by Black, Scholes and Merton was that an estimate of a risk premium was not needed, because it is contained in the quoted stock price, a critical input in the option formula. The
market causes the price of a riskier stock to trade further below its expected future value than a more staid equity, and that difference serves as a discount for inherent riskiness.
Black and Scholes, with Merton’s help, came up with their option-pricing formula by constructing a hypothetical portfolio in which a change of price in a stock was canceled by an offsetting change in
the value of options on the stock—a strategy called hedging. Here is a simplified example: A put option would give the owner the right to sell a share of a stock in three months if the stock price is
at or below $100. The value of the option might increase by 50 cents when the stock goes down $1 (because the condition under which the option can be used has grown more likely) and decrease by 50
cents when the stock goes up by $1.
To hedge against risks in changes in share price, the investor can buy two options for every share he or she owns; the profit then will counter the loss. Hedging creates a risk-free portfolio, one
whose return is the same as that of a treasury bill. As the share price changes over time, the investor must alter the composition of the portfolio—the ratio of the number of shares of stocks to the
number of options—to ensure that the holdings remain without risk.
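The arithmetic of the two-puts-per-share hedge can be checked directly; the 50-cent sensitivity (a "delta" of -0.5) is the article's illustrative figure, not a market quote.

```python
# One share hedged with two puts, each put moving -$0.50 per +$1 stock move.
put_delta = -0.5
shares = 1
puts_per_share = 2      # chosen so the option move cancels the stock move

for stock_move in (-1.0, +1.0):
    stock_pnl = shares * stock_move
    option_pnl = puts_per_share * put_delta * stock_move
    assert stock_pnl + option_pnl == 0.0   # locally risk-free portfolio
```

As the text notes, the cancellation is only local: delta itself changes with the share price, so the ratio must be rebalanced over time.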
The Black-Scholes formula, in fact, is elicited from a partial differential equation demonstrating that the fair price for an option is the one that would bring a risk-free return within such a
hedging portfolio. Variations on the hedging strategy outlined by Black, Scholes and Merton have proved invaluable to financial-center banks and a range of other institutions that can use them to
protect portfolios against market vagaries—ensuring against a steep decline in stocks, for instance.
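Solving that partial differential equation yields the familiar closed form for a European call, sketched below; the inputs are arbitrary example values, not figures from the article.

```python
from math import log, sqrt, exp, erf

def norm_cdf(x):
    # standard normal cumulative distribution function
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def black_scholes_call(S, K, r, sigma, T):
    """Black-Scholes price of a European call.
    S: stock price, K: strike, r: risk-free rate,
    sigma: annualized volatility, T: years to expiration."""
    d1 = (log(S / K) + (r + 0.5 * sigma**2) * T) / (sigma * sqrt(T))
    d2 = d1 - sigma * sqrt(T)
    return S * norm_cdf(d1) - K * exp(-r * T) * norm_cdf(d2)

# An at-the-money call: $100 stock, $100 strike, 5% rate, 20% vol, 3 months
price = black_scholes_call(S=100, K=100, r=0.05, sigma=0.2, T=0.25)
```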
The basic options-pricing methodology can also be extended to create other instruments, some of which bear bizarre names like “cliquets” or “shouts.” These colorful financial creatures provide the
flexibility to shape the payoffs from the option to a customer’s particular risk profile, placing a floor, ceiling or averaging function on interest or exchange rates, for example.
With the right option, investors can bet or hedge on any kind of uncertainty, from the volatility (up-and-down movement) of the market to the odds of catastrophic weather. An exporter can buy a
“look-back” currency option to receive the most favorable dollar-yen exchange rate during a six-month period, rather than being exposed to a sudden change in rates on the date of the contract’s expiration.
In the early 1970s Black and Scholes’s original paper had difficulty finding a publisher. When it did reach the Journal of Political Economy in 1973, its impact on the financial markets was
immediate. Within months, their formula was being programmed into calculators. Wall Street loved it, because a trader could solve the equation easily just by punching in a few variables, including
stock price, interest rate on treasury bills and the option’s expiration date. The only variable that was not readily obtainable was that for “market volatility”— the standard deviation of stock
prices from their mean values. This number, however, could be estimated from the ups and downs of past prices. Similarly, if the current option price was known in the markets, a trader could enter
that number into a workstation and “back out” a number for volatility, which can be used to judge whether an option is overpriced or underpriced relative to the current price of the stock in the market.
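Since the model price rises monotonically with volatility, "backing out" the implied number is a one-dimensional root search; bisection, as below, is the simplest choice. The pricing function is restated so the sketch stands alone, and the observed option price is a made-up figure.

```python
from math import log, sqrt, exp, erf

def bs_call(S, K, r, sigma, T):
    # Black-Scholes European call price
    n = lambda x: 0.5 * (1.0 + erf(x / sqrt(2.0)))
    d1 = (log(S / K) + (r + 0.5 * sigma**2) * T) / (sigma * sqrt(T))
    return S * n(d1) - K * exp(-r * T) * n(d1 - sigma * sqrt(T))

def implied_vol(market_price, S, K, r, T, lo=1e-4, hi=5.0, tol=1e-8):
    # bisect on sigma: the call price is increasing in volatility
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if bs_call(S, K, r, mid, T) < market_price:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

observed = 4.61   # hypothetical quoted option price
sigma_implied = implied_vol(observed, S=100, K=100, r=0.05, T=0.25)
```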
Investors who buy options are basically purchasing volatility—either to speculate on or to protect against market turbulence. The more ups and downs in the market, the more the option is worth. An
investor who speculates with a call—an option to buy a stock—can lose only the cost of purchase, called a premium, if the stock fails to reach the price at which the buyer can exercise the right to
purchase it. In contrast, if the stock shoots above the exercise price, the potential for profit is unlimited. Similarly, the investor who hedges with options also anticipates rough times ahead and
so may buy protection against a drop in the market.
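The asymmetry is plain from the payoff at expiration: the buyer's loss is capped at the premium, while the upside grows without bound. The numbers are illustrative.

```python
# Profit at expiry of a purchased call: premium-capped loss, open-ended gain.
def call_pnl(stock_price, strike, premium):
    return max(stock_price - strike, 0.0) - premium

assert call_pnl(90, 100, 3) == -3.0    # below the strike: only the premium is lost
assert call_pnl(130, 100, 3) == 27.0   # above the strike: profit rises with the stock
```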
Physicists on Wall Street
Although it can be reduced to operations on a pocket calculator, the mathematics behind the Black-Scholes equation is stochastic calculus, a descendant from the work of Bachelier and Einstein. These
equations were by no means the standard fare in most business administration programs. Enter the Wall Street rocket scientists: the former physicists, mathematicians, computer scientists and
econometricians who now play an important role at the Wall Street financial behemoths.
Moving from synchrotrons to trading rooms does not always result in such a seamless transition. “Whenever you hire a physicist, you’re always hoping that he or she doesn’t think of markets as if they
were governed by immutable physical laws,” notes Charles Smithson, a managing director at CIBC World Markets, an investment bank. “Uranium 238 always decays to uranium 234. But a physicist must
remember that markets can go up as well as go down.”
Recently some universities have opened “quant schools,” programs that educate M.B.A. or other master’s students in the higher applied mathematics of finance, the subtleties of Ito’s lemma and other
cornerstones of stochastic calculus. Or else they may train physicists, engineers and mathematicians before moving on to Wall Street. “Market pressures are directing physicists to get more education
to try to understand the motivation and intuition underlying financial problems,” says Andrew W. Lo, who heads the track in financial engineering at the Massachusetts Institute of Technology’s Sloan
School of Management.
As part of their studies, financial engineers in training learn about the progression of mathematical modeling beyond the original work of Black, Scholes and Merton. The basic Black-Scholes formula
made unrealistic assumptions about how the market operates. It takes a fixed interest rate as an input, but of course interest rates change, and that influences the value of an option—particularly an
option on a bond. The formula also assumes that changes in the growth rate of stock prices fall into a normal statistical distribution, a bell curve in which events cluster around the mean. Thus, it
fails to take into account extraordinary events such as the 1929 or 1987 stock market crashes. Black, Scholes and Merton—and legions of quants—have spent the ensuing years refining many of the
original ideas.
Emanuel Derman, head of the quantitative strategies group at Goldman Sachs, is a physicist-turned-quant whose job over the past 13 years has been to tackle the imperfections of the Black-Scholes
equation. Derman, a native of Cape Town, South Africa, received his doctorate from Columbia University in 1973 for a thesis on the weak interaction among subatomic particles. He went on to
postdoctoral positions, including study of neutrino scattering at the University of Pennsylvania and charmed quark production at the University of Oxford’s department of theoretical physics. In the
late 1970s Derman decided to leave academia: “Physics is lonely work. It’s a real meritocracy. In physics, you sometimes feel like you’re either [Richard] Feynman or you’re nobody. I liked physics,
but maybe I wasn’t as good as I might have been.”
So in 1980 he went to Bell Laboratories in New Jersey, where he worked on a computer language tailored for finance. In 1985 Goldman Sachs hired him to develop methods of modeling interest rates. He
has worked there since, except for a year spent at Salomon Brothers. At Goldman, he met the recently recruited Fischer Black, and the two began working with another colleague, William W. Toy, on a
method of valuing bond options. Derman remembers Black as a bluntly truthful man with punctilious writing habits who wore a Casio Data Bank watch. “Black was less powerful mathematically than he was
intuitively,” Derman says. “But he always had an idea of what the right answer was.”
Physics Versus Finance
Much of Derman’s recent work on the expected volatility of stock prices continues to refine the original 1973 paper. The Black-Scholes equation was to finance what Newtonian mechanics was to physics,
Derman asserts. “Black-Scholes is sort of the foundation on which the field rests. Nobody knows what to do next except extend it.” But the field, he fears, may never succeed in producing its own
Einstein—or some unified financial theory of everything. Finance differs from physics in that no mathematical model can capture the multitude of ever mutating economic factors that cause major market
perturbations—the recent Asian collapse, for instance. “In physics, you’re playing against God; in finance, you’re playing against people,” Derman declares.
Outside the domain of Wall Street, the parallels between physical concepts and finance are sometimes taken more literally by academics. Kirill Ilinski of the University of Birmingham in England has
used Feynman’s theory of quantum electrodynamics to model market dynamics, while employing these concepts to rederive the Black-Scholes equation. Ilinski replaces an electromagnetic field, which
controls the interaction of charged particles, with a so-called arbitrage field that can describe changes in option and stock prices. (Trading that brings the value of the stock and the option
portfolio into line is called arbitrage.)
Ilinski’s theory shows how quantum electrodynamics can model Black, Scholes and Merton’s hedging strategy, in which market dynamics dictate that any gain in a stock will be offset by the decline in
value of the option, thereby yielding a risk-free return. Ilinski equates it with the absorption of “virtual particles,” or photons, that damp the interacting forces between two electrons. He goes on
to show how his arbitrage field model elucidates opportunities for profit that were not envisaged by the original Black-Scholes equation.
Ilinski is a member of the nascent field of econophysics, which held its first conference last July in Budapest. Nevertheless, literal parallelism between physics and finance has gained few
adherents. “It doesn’t meet the very simple rule of demarcation between science and hogwash,” notes Nassim Taleb, a veteran derivatives trader and a senior adviser to Paribas, the French investment
bank. Ilinski recognizes the controversial nature of his labors. “Some people accept my work, and some people say I’m mad. So there’s a discrepancy of opinion,” he says wryly.
Whether invoking Richard Feynman or Fischer Black, the use of mathematical models to value and hedge securities is an exercise in estimation. The term “model risk” describes how different models can
produce widely varying prices for a derivative and how these prices create large losses when they differ from the ones at which a financial instrument can be bought or sold in the market.
Model risk comes in many forms. A model’s complexity can lead to erroneous valuations for derivatives. So can inaccurate assumptions underlying the model—failing to take into account the volatility
of interest rates during an exchange-rate crisis, for instance. Many models do not cope well with sudden alterations in the relation among market variables, such as a change in the normal trading
range between the U.S. dollar and the Indonesian rupiah. “The model or the way you’re using it just doesn’t capture what’s going on anymore,” says Tanya Styblo Beder, a principal in Capital Market
Risk Advisors, a New York City firm that evaluates the integrity of models. “Things change. It’s as if you’re driving down a very steep mountain road, and you thought you were gliding on a bicycle,
and you find you’re in a tractor-trailer with no brakes.”
Custom-tailored products of financial engineering are not traded on public exchanges and so rely on valuations produced by models, sometimes making it difficult to compare the models’ pricing to
other instruments in the marketplace. When it comes time to sell, the market may offer a price that differs significantly from a model’s estimate. In some cases, a trader might capitalize on supposed
mispricings in another trader’s model to sell an overvalued option, a practice known as model arbitrage.
“There’s a danger of accepting models without carefully questioning them,” says Joseph A. Langsam, a former mathematician who develops and tests models for fixed-income securities at Morgan Stanley.
Morgan Stanley and other firms adopt various means of testing, such as determining how well their models value derivatives for which there is a known price.
Problems related to modeling have accounted for about 20 percent of the $23.77 billion in derivatives losses that have occurred during the past decade, according to Capital Market Risk Advisors. Last
year, however, model risk comprised nearly 40 percent of the $2.65 billion in money lost. The tally for 1997 included National Westminster Bank, with $123 million in losses, and Union Bank of
Switzerland, with a $240-million hit.
A conference in February sponsored by Derivatives Strategy, an industry trade magazine, held a roundtable discussion called “First Kill All the Models.” Some of the participants questioned whether
the most sophisticated mathematical models can match traders’ skill and gut intuition about market dynamics. “As models become more complicated, people will use them, and they’re dangerous in that
regard, because they’ll use them in ways that are deleterious to their economic health,” said Stanley R. Jonas, who heads the derivatives trading department for Societé Generale/FIMAT in New York
City. An unpublished study by Jens Carsten Jackwerth of the London Business School and Mark E. Rubinstein of the University of California at Berkeley has shown that traders’ own rules of thumb about
inferring future stock index volatility did better than many of the major modeling methods.
One modeler at the session—Derman of Goldman Sachs—defended his craft. “To paraphrase Mao in the sixties: Let 1,000 models bloom,” he proclaimed. He compared models to gedanken (thought) experiments,
which are unempirical but which help physicists contemplate the world more clearly: “Einstein would think about what it was like to sit on the edge of a wave moving at the speed of light and what he
would see. And I think we’re doing something like that. We are sort of investigating imaginary worlds and trying to get some value out of them and see which one best approximates our own.” Derman
acknowledged that every model is imperfect: “You need to think about how to account for the mismatch between models and the real world.”
Financial Hydrogen Bombs
The image of derivatives has been sullied by much publicized financial debacles, which include the bankruptcies of Barings Bank and Orange County, California, and huge losses by Procter & Gamble and
Gibson Greetings. Investment banker Felix Rohatyn has been quoted as warning about the perils of twentysomething computer whizzes concocting “financial hydrogen bombs.” Some businesses and local
governments have excluded derivatives from their portfolios altogether; fears have even emerged about a meltdown of the financial system.
The creators of these newfangled instruments place the losses in broader perspective. The notional, or face, value of all stocks, bonds, currencies and other assets on which options, futures,
forwards and swap contracts are derived totaled $56 trillion in 1995, according to the Bank for International Settlements. The market value of the outstanding derivatives contracts themselves
represents only a few percentage points of the overall figure but an amount that may still total a few trillion dollars. In contrast, known derivatives losses between 1987 and 1997 totaled only $23.8
billion. More mundane investments can also hurt investors. When interest rates shot up in 1994, the treasury bond markets lost $230 billion.
Derivatives make the news because, like an airplane crash, their losses can prove sudden and dramatic. The contracts can involve enormous leverage. A derivatives investor may put up only a fraction
of the value of an underlying asset, such as a stock or a bond. A small percentage change in the value of the asset can produce a large percentage gain or loss in the value of the derivative.
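A quick illustration of that leverage, with made-up numbers: an option bought for a small fraction of the stock price turns a small percentage move in the stock into a much larger percentage move in the option.

```python
# Illustrative leverage: a $5 option on a $100 stock with a delta of 0.5.
stock_price, premium, delta = 100.0, 5.0, 0.5
move = 2.0                                   # the stock rises $2, i.e. 2%

stock_return = move / stock_price            # 0.02
option_return = (delta * move) / premium     # 0.20: ten times the stock's move
assert abs(option_return - 10 * stock_return) < 1e-12
```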
To manage the risks of owning derivatives and other securities, financial houses take refuge in yet other mathematical models. Much of this work is rooted in portfolio theory, a statistical
measurement and optimization methodology for which Harry M. Markowitz received the Nobel Prize in 1990. Markowitz elucidated how investors could minimize risk for a given level of return by
diversifying into a range of assets that do not all perform the same way as the market changes.
One hand-me-down from Markowitz is called value at risk. It sets forth a set of techniques that elicits a single worst-case number for investment losses. Value at risk calculates the probability of
the maximum losses for every existing portfolio, from currency to derivatives. It then elicits a value at risk for the company’s overall financial exposure: the worst hit that can be expected within
the next 30 days with a given statistical confidence interval might amount to $85 million. An analysis of the portfolios shows where risks are concentrated. Philippe Jorion, a professor of finance at
the University of California at Irvine, has performed a case study that shows how value-at-risk measures could raise warning flags to even unsophisticated investors. Members of the school boards in
Orange County that invested in the county fund that lost $1.7 billion might have reacted differently if they knew that there existed a 5 percent chance of a billion-dollar-plus loss.
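In its simplest "historical simulation" form, value at risk is just a percentile of the loss distribution. The sketch below uses simulated returns (the portfolio size, drift and volatility are invented), whereas a real desk would feed in actual market histories.

```python
import random

# Historical-simulation VaR: the loss not exceeded with 95% confidence,
# read off as a percentile of (simulated) past daily portfolio losses.
random.seed(42)
portfolio_value = 100_000_000
daily_returns = [random.gauss(0.0005, 0.01) for _ in range(250)]  # ~1 trading year

losses = sorted(-r * portfolio_value for r in daily_returns)
var_95 = losses[int(0.95 * len(losses))]     # 95th-percentile loss
# Reading: on roughly 95% of days, the one-day loss should not exceed var_95.
```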
Like other modeling techniques, value at risk has bred skepticism about how well it predicts ups and downs in the real world. The most widely used measurement techniques rely heavily on historical
market data that fail to capture the magnitude of rare but extreme events. “If you take the last year’s worth of data, you may see a portfolio vary by only 10 percent. Then, if you move a month
ahead, things may change by 100 percent,” comments Ron S. Dembo, president of Algorithmics, a Toronto-based risk-management software company. Algorithmics and other firms go beyond the simplest
value-at-risk methods by providing banks with software that can “stress-test” a portfolio by simulating the ramifications of large market swings.
One modeling technique may beget another, and debates over their intrinsic worth will surely continue. But the ability to put a price on uncertainty, the essence of financial engineering, has already
proved worthwhile in other business settings as well as in government policymaking and domestic finance. Options theory can aid in steering capital investments. A conventional investment analysis
might suggest that it is better for a utility to budget for a large coal-fired plant that can provide capacity for 10 to 15 years of growth. But that approach would sacrifice the alternative of
building a series of small oil-fired generators, a better choice if demand grows more slowly than expected. Option-pricing techniques can place a value on the flexibility provided by the slow-growth strategy.
The Black-Scholes model has also been used to quantify the benefits that accrue to a developing nation from providing workers with a general education rather than targeted training in specific
skills. It reveals that the value of being able to change labor skills quickly as the economy shifts can exceed the extra cost of supplying a broad-based education. Option pricing can even be used to
assess the flexibility of choosing an “out-of-plan” physician for managed health care. “The implications for this aren’t just in the direct financial markets but in being able to use this technology
for how we organize nonfinancial firms and how people organize their financial lives in general,” says Nobelist Merton. Placing a value on the vagaries of the future may help realize the vision of
another Nobel laureate: Kenneth J. Arrow of Stanford University imagined a security for every condition in the world—and any risk, from bankruptcy to a rained-out picnic, could be shifted to someone
Carleton Applied Probability Day
September 17, 2005
Carleton University, Ottawa
Speaker Abstracts
• Barbara Gonzalez, Mathematics Department, University of Louisiana at Lafayette
Modeling teletraffic arrivals by a Poisson cluster process (.pdf format)
In this paper we consider a Poisson cluster process $N$ as a generating process for the arrivals of packets to a server. This process generalizes, in a more realistic way, the infinite source Poisson model which has long been used for modeling teletraffic. At each Poisson point $\Gamma_j$, a flow of packets is initiated which is modeled as a partial iid sum process $\Gamma_j+\sum_{i=1}^k X_{ji}$, $k\le K_j$, with a random limit $K_j$ which is independent of $(X_{ji})$ and the underlying Poisson points $(\Gamma_j)$. We study the covariance structure of the increment process of $N$. In particular, the covariance function of the increment process is not summable if the right tail $P(K_j>x)$ is regularly varying with index $\alpha\in (1,2)$, the distribution of the $X_{ji}$'s being irrelevant. This means that the increment process exhibits long-range dependence. If ${\rm var}(K_j)<\infty$, long-range dependence is excluded. We study the asymptotic behavior of the process $(N(t))_{t\ge 0}$ and give conditions on the distributions of $K_j$ and $X_{ji}$ under which the random sums $\sum_{i=1}^{K_j}X_{ji}$ have a regularly varying tail. Using the form of the distribution of the interarrival times of the process $N$ under the Palm distribution, we also conduct an exploratory statistical analysis of simulated data and of Internet packet arrivals to a server. We illustrate how the theoretical results can be used to detect distributional characteristics of $K_j$, $X_{ji}$, and of the Poisson process.
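For intuition, here is a minimal simulation sketch of the arrival process described in the abstract. The abstract leaves the laws of $K_j$ and $X_{ji}$ general, so the geometric cluster sizes and exponential inter-packet gaps below are illustrative assumptions only.

```python
import random

def poisson_cluster_arrivals(rate, horizon, mean_packets, mean_gap, seed=0):
    """Sample packet arrival times of a Poisson cluster process on [0, horizon].

    Flow starts Gamma_j form a Poisson process with the given rate; flow j
    emits K_j packets (geometric with mean mean_packets -- an assumption)
    at cumulative exponential gaps X_ji (mean mean_gap -- also an assumption).
    """
    rng = random.Random(seed)
    arrivals = []
    t = 0.0
    while True:
        t += rng.expovariate(rate)               # next flow start Gamma_j
        if t > horizon:
            break
        k = 1                                     # sample K_j >= 1, geometric
        while rng.random() > 1.0 / mean_packets:
            k += 1
        s = t
        for _ in range(k):                        # packets at Gamma_j + sum X_ji
            s += rng.expovariate(1.0 / mean_gap)
            if s <= horizon:
                arrivals.append(s)
    arrivals.sort()
    return arrivals

packets = poisson_cluster_arrivals(rate=5.0, horizon=100.0,
                                   mean_packets=10, mean_gap=0.2)
```

Counting packets in fixed windows of this output is a quick way to eyeball the burstiness that the covariance analysis above quantifies.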
• Haipeng Shen, Department of Statistics and Operations Research, University of North Carolina at Chapel Hill
A Queueing-Science Look at a Telephone Call Center
A call center is a service network in which agents provide telephone-based services. Customers that seek these services are delayed in tele-queues. This talk summarizes an analysis of a unique
record of call center operations. The data comprise a complete operational history of a small banking call center, call by call, over a full year.
We look at call centers through the perspective of queueing science, which combines mathematically elegant queueing theory with empirically data-driven statistical analysis. The service process is
decomposed into three fundamental
components: arrivals, customer patience, and service durations. Each component involves different basic mathematical structures and requires a different style of statistical analysis. Some of the
key empirical results are sketched, along with descriptions of the varied techniques required. The talk then surveys how the characteristics deduced from the statistical analyses form the
building blocks for theoretically interesting and practically useful mathematical models for call center operations.
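As a concrete instance of the kind of "building block" model the talk refers to, here is the classical Erlang-C delay probability for an M/M/n queue. The numbers are hypothetical, and the talk's actual models are of course richer (they account for customer patience and time-varying arrivals).

```python
import math

def erlang_c(arrival_rate, mean_service_time, n_agents):
    """Erlang-C probability that an arriving call must wait (M/M/n queue)."""
    a = arrival_rate * mean_service_time        # offered load in Erlangs
    rho = a / n_agents                          # agent utilization
    if rho >= 1.0:
        return 1.0                              # unstable: everyone waits
    waiting = (a ** n_agents) / math.factorial(n_agents) / (1.0 - rho)
    served = sum(a ** k / math.factorial(k) for k in range(n_agents))
    return waiting / (served + waiting)

# e.g. 2 calls/min, 2-minute average handle time, 5 agents
p_wait = erlang_c(arrival_rate=2.0, mean_service_time=2.0, n_agents=5)
```

With these made-up numbers roughly 55% of arriving calls are delayed, which is why even small staffing changes matter near high utilization.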
The papers associated with this talk are downloadable at
• Wojciech Szpankowski, Department of Computer Science, Purdue University
Analytic Algorithmics, Combinatorics and Information Theory
Analytic information theory aims at studying problems of information theory using analytic techniques of computer science and combinatorics. Following Hadamard's and Knuth's precept, we tackle
these problems by complex analysis methods such as generating functions, Mellin transform, Fourier series, saddle point method, analytic poissonization and depoissonization, and singularity
analysis. This approach lies at the crossroad of computer science and information theory. In this talk, we concentrate on one facet of information theory (i.e., source coding better known as data
compression), namely the redundancy rate problem and types. The redundancy rate problem for a class of sources is the determination of how far the actual code length exceeds the optimal (ideal)
code length. The method of types is a powerful technique in information theory, large deviations, and analysis of algorithms. It reduces calculations of the probability of rare events to a
combinatorial analysis. Two sequences are of the same type if they have the same empirical distribution. We shall argue that counting types can be accomplished efficiently by enumerating Eulerian
paths (Markov types) or binary trees with a given path length (universal types). On the other hand, analysis of the redundancy rate problem for memoryless and Markov sources leads us to tree
generating functions (e.g., arising in counting labeled rooted trees) studied extensively in computer science.
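To make the "method of types" concrete: the type of a sequence is just its empirical symbol-count vector, and the point of counting types is that exponentially many sequences collapse onto polynomially many types. A small illustrative check:

```python
from collections import Counter
from itertools import product

def type_of(seq):
    """The type (empirical distribution) of a sequence, as a hashable key."""
    counts = Counter(seq)
    return tuple(sorted(counts.items()))

# All 2**n binary strings of length n fall into only n + 1 types,
# one per possible number of ones.
n = 6
types = {type_of(s) for s in product("01", repeat=n)}
assert len(types) == n + 1
```

For a k-letter alphabet the number of types is at most $(n+1)^k$, polynomial in $n$, which is what makes type-based counting arguments in the redundancy analysis tractable.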
• Damon Wischik, Department of Computer Science, University College London (UCL)
Queueing theory for TCP/IP
The current Internet is not very good for transferring large files at high speeds, nor for transmitting live audio and video. This is not because of hardware limitations; it is because of
problems with the architecture of the Internet, and especially with TCP, the algorithm for controlling congestion, implemented in every computer on the Internet.
TCP controls congestion by adjusting the transmission rate of a traffic flow in response to the congestion it perceives in the network. Queueing theorists, on the other hand, are used to using
traffic models which do not respond to congestion. For example, queueing theorists are used to the idea that the larger the buffer the less likely it is to overflow. But TCP thinks that the
absence of overflow means the network is underutilized, and so it increases its transmission rate until the buffer overflows, no matter how large it is.
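The point about buffers can be seen in a toy model: a single additive-increase sender filling a fixed-capacity link behind a drop-tail buffer. All parameters here are hypothetical; this is a sketch of the qualitative behavior, not of real TCP dynamics.

```python
def rounds_until_overflow(buffer_size, capacity=10.0, start_rate=1.0):
    """Rounds before an additive-increase sender overflows a drop-tail buffer."""
    rate, queue, rounds = start_rate, 0.0, 0
    while queue <= buffer_size:
        rate += 1.0                                # additive increase each RTT
        queue = max(0.0, queue + rate - capacity)  # backlog beyond capacity
        rounds += 1
    return rounds

# A bigger buffer postpones the first loss but cannot prevent it.
small, large = rounds_until_overflow(10.0), rounds_until_overflow(100.0)
assert small < large
```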
In this talk I will describe some recent work on queueing theory for TCP. This theory uses the predictability of aggregate traffic flows, both at a fluid level and a stochastic level. The theory
suggests ways to enhance TCP and build cheaper Internet routers.
For further information, http://www.cs.ucl.ac.uk/staff/D.Wischik/Talks/iparch.html
Parry Haste Discussion [Archive] - TankSpot
I'd like to run some numbers on Parry/Haste for DualWield DK Tanks.
---- 2H Weapon - 2x 1H weapon ----
Weapon speed: 3.5 - 2x 1.5
Elapsed Time: 60 Seconds
Boss Parrychance: 14%
2H Weapon:
60/3.5 = an average of 17.14 swings per minute (not taking HR into account)
17.14 x 0.14 = an average of 2.4 boss parries per minute.
2x 1H Weapons:
60/1.5 = 40 -> 40x2 = 80 swings per minute (counting both weapons)
80 x 0.14 = 11.2 boss parries per minute.
Since everybody is talking about getting hard expertise capped as DW DeathKnight tank I wanted to make the point that if you just reduce the boss parries to 2.4 per minute you'd take exactly the same incoming damage from boss parries as a 2H tank. Taking that into account means you need less expertise, which is easier to gear for.
So in that case:
80 x ? = 2.4 -> ? = 3% is the per-swing parry chance you want to be heading for,
which means you only have to knock 14% - 3% = 11 percentage points off the boss's parry chance.
Every 8.2 expertise rating gives you 1 expertise; every 4 expertise gives you 1% reduction in boss parry chance.
11 x 4 x 8.2 = 360.8 expertise rating to level DualWield parries with 2H parries.
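Working the same numbers in code, using the stat conversions quoted above as given (small differences from the figures in the post, if any, come from rounding):

```python
def rating_to_match_2h(speed_2h=3.5, speed_1h=1.5, boss_parry=0.14,
                       rating_per_expertise=8.2, expertise_per_pct=4):
    """Expertise rating a dual-wielder needs so the boss parries no more
    often per minute than it would against a single 2H weapon."""
    parries_2h = (60.0 / speed_2h) * boss_parry      # ~2.4 parries/min
    swings_dw = 2 * (60.0 / speed_1h)                # 80 swings/min
    target = parries_2h / swings_dw                  # ~3% parry per swing
    reduction_pct = (boss_parry - target) * 100.0    # ~11 points needed
    return reduction_pct * expertise_per_pct * rating_per_expertise

print(round(rating_to_match_2h(), 1))  # 360.8
```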
I thought this might be interesting, as getting hard capped is far harder than just reaching this kind of expertise.
Please feel free to comment and say if I'm wrong, I might be a little off.
undergraduate and graduate courses at my home institutions, usually in alternating years. I am also sometimes invited to give a short series of courses externally.
I have taught a number of undergraduate courses over the years while at McGill University, including
• Introductory Physics (for 1st year)
• Mathematical Physics (with lecture notes)
• Introduction to Quantum Mechanics
• Planets, Stars & Galaxies
The course I have been teaching most recently is offered at McMaster University:
This is an introductory course on Einstein’s theory of General Relativity, aimed at upper-year physics undergraduates. Topics covered include an introduction to differential geometry, special
relativity as geometry, gravity in the solar system, black holes, gravitational lensing and cosmology.
The graduate courses I teach vary from year to year, and are aimed at students in high energy particle physics. These courses are usually taught at Perimeter Institute so that students from other
universities (and online, using PIRSA) can also attend.
Among the courses I have recently given are:
• Introduction to the Standard Model (which uses a great textbook on the Standard Model),
• Introduction to Effective Field Theories
• Introduction to Cosmology (with lecture notes)
• Quantum Field Theory 2
Quantum Field Theory 2 is my most recent graduate course, aimed at students who have already had a first exposure to quantum field theory.
I am occasionally invited to give a set of 3-5 lectures on a variety of (usually graduate-level) topics elsewhere. Here are some recent lectures of this type I’ve given.
• The Cosmological Constant Problem, 3-hour lecture series presented to: Les Houches Summer School Post-Planck Cosmology, Les Houches, France, July 2013
• Effective Field Theory and Cosmology, 3-hour lecture series presented to: Essential Cosmology for the Next Generation (Cosmology on the Beach), Cancun, Mexico, January 2012
• Inflation, Dark Matter and Dark Energy, 4-hour lecture series presented to: Nordic Winter School on Cosmology and Particle Physics, Gausdal, Norway, January 2011
• Inflation and Fundamental Physics, 2-hour lecture series presented to: UniverseNET School on Cosmology, Lecce, Italy, September 2010
• Introduction to the Standard Model, 4-hour lecture series presented to: TRIUMF Summer Institute, Vancouver, BC, July 2009
• Physics Beyond the Standard Model, 3-hour lecture series presented to Universite de Paris IX/Bielefeld International Graduate Course on Physics Beyond the Standard Model, Bielefeld, Germany,
September, 2008; and to Benasque School on Flavor Physics, Benasque, Spain, August, 2008
• Effective Field Theory, 5-hour lecture series presented to British Universities Graduate Summer School (BUSSTEPP), York, England, August, 2007; and 6-hour lecture series presented to British
Universities Graduate Summer School (BUSSTEPP), Edinburgh, Scotland, Sept., 2006
• Cosmic Inflation, 4.5-hour lecture series presented to: Cargèse School on Cosmology and Particle Physics Beyond the Standard Models, Cargèse, France, August 2007;
• String Cosmology, 4-hour lecture series presented to: CERN RTN Graduate School, Geneva, Switzerland, January, 2007; and 4-hour lecture series presented to: Central European Joint Program of
Doctoral Studies in Theoretical Physics (Particle Physics, Gravity and Cosmology), Dubrovnik, Croatia, August, 2006; and 3-hour lecture series presented to: International Graduate School on
Cosmology, Universite de Paris XI/Bielefeld, Paris, France, March, 2006
Physics Forums - View Single Post - help a newb out please.
1. The problem statement, all variables and given/known data
A certain power supply provides a continuous 371 W to a load. It is operating at 0.13% efficiency. In a 129-day period, how much does it cost (in dollars) to run the device if electricity costs $0.34 per kWh?
2. Relevant equations
I'm not sure... everything has to be to 3 significant digits after the decimal.
3. The attempt at a solution
I have tried this a few different ways... I'm not sure where the efficiency comes in.
this is what i did:
129days x 24= 3096hours
.371kW x 3096= 1.4932008kWh
1.4932008kWh x .34 = .507688272
answer is: $0.508
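One way to sanity-check the attempt above (a sketch, assuming "efficiency" means useful output divided by electrical input, so the supply must draw far more from the wall than the 371 W it delivers):

```python
output_kw  = 0.371            # power delivered to the load
efficiency = 0.0013           # 0.13%
hours      = 129 * 24         # 3096 h
price      = 0.34             # dollars per kWh

input_kw = output_kw / efficiency       # ~285.4 kW drawn from the mains
energy_kwh = input_kw * hours           # ~883,551 kWh
cost = energy_kwh * price
print(f"${cost:,.2f}")                  # ~$300,407
```

Even ignoring efficiency, 0.371 kW x 3096 h is about 1148.6 kWh, not 1.49, so the kW-to-kWh step in the attempt also looks miscomputed.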
The canonical line bundle of a normal variety
I have heard that the canonical divisor can be defined on a normal variety X since the smooth locus has codimension 2. Then, I have heard as well that for ANY algebraic variety such that the
canonical bundle is defined:
$$\mathcal{K}=\mathcal{O}_X\Big(-\sum D_i\Big)$$
where the $D_i$ are representatives of all divisors in the Class Group.
I want to prove that formula or I want to find a reference for that formula, or I want someone to rephrase it in a similar way if they heard about it.
Why do I want to prove it? Well, I use the definition that something is Calabi Yau if its canonical bundle is 0. In the case of toric varieties, $\sum D_i$~0 if all the primitive generators for the
divisors lie on a hyperplane. Then the sum is 0 and therefore the toric variety is Calabi-Yau.
Can someone confirm or fix the above formula? I do not ask for a debate on when something is Calabi-Yau, I handle that OK, I just ask whether the above formula is correct. A reference would be
enough. I have little access to references at the moment.
Not all toric varieties are Calabi-Yau - take $P^2$ for example. Your formula for the canonical bundle applies only to toric varieties, in which case the $D_i$ correspond to the torus invariant divisors. This is in chapter 3 of Fulton's book I think. – J.C. Ottem Aug 16 '10 at 9:42
Talking off the top of my head: a toric variety has a dense open subset isomorphic to a torus, so (over an algebraically closed field say) is rational, and in particular never Calabi--Yau. Also a minor point: "since the smooth locus has codimension 1" is a confusing way to say what you mean. "since the singular locus has codimension 2" would be better. – Artie Prendergast-Smith Aug 16 '10 at 10:09
To clarify the comments above about toric varieties not being Calabi-Yau: by Calabi-Yau, sometimes people colloquially mean that the canonical bundle is trivial and sometimes they mean more specifically that the variety should be projective, have trivial canonical, and that the intermediate cohomology groups of the structure sheaf be trivial (so that the total cohomology looks like that of a sphere). There are non-proper toric varieties with trivial canonical bundle (for instance a torus), but I don't think it is possible to have a proper toric variety with trivial canonical bundle. – Chris Brav Aug 16 '10 at 12:01
Dear Chris, Of course you're right. I was assuming say projective (maybe proper is enough). In that case I think my argument above suffices to prove your assertion. – Artie Prendergast-Smith Aug 16 '10 at 12:11
Dear Jesus, Just to clarify, the point of my comment was that no projective (or more generally as Chris says, proper) toric variety is Calabi--Yau. Also, as John Christian says in the first comment, the formula for toric varieties is K_X = O(-\sum D_i) where the sum is over the torus invariant divisors --- not representatives of all divisors in the class group. Indeed, the class group will almost always be infinite, in which case such a sum will not be defined. – Artie Prendergast-Smith Aug 16 '10 at 14:02
2 Answers
Edit (11/12/12): I added an explanation of the phrase "this is essentially equivalent to $X$ being $S_2$" at the end to answer aglearner's question in the comments.
Dear Jesus,
I think there are several problems with your question/desire to define a canonical divisor on any algebraic variety.
First of all, what is any algebraic variety? Perhaps you mean a quasi-projective variety (=reduced and of finite type) defined over some (algebraically closed) field.
OK, let's assume that $X$ is such a variety. Then what is a divisor on $X$? Of course, you could just say it is a formal linear combination of prime divisors, where a prime divisor is
just a codimension 1 irreducible subvariety.
OK, but what if $X$ is not equidimensional? Well, let's assume it is, or even that it is irreducible.
Still, if you want to talk about divisors, you would surely want to say when two divisors are linearly equivalent. OK, we know what that is, $D_1$ and $D_2$ are linearly equivalent iff
$D_1-D_2$ is a principal divisor.
But, what is a principal divisor? Here it starts to become clear why one usually assumes that $X$ is normal even to just talk about divisors, let alone defining the canonical divisor. In
order to define principal divisors, one would need to define something like the order of vanishing of a regular function along a prime divisor. It's not obvious how to define this unless
the local ring of the general point of any prime divisor is a DVR. Well, then this leads to one to want to assume that $X$ is $R_1$, that is, regular in codimension $1$ which is
equivalent to those local rings being DVRs.
OK, now once we have this we might also want another property: If $f$ is a regular function, we would expect, that the zero set of $f$ should be 1-codimensional in $X$. In other words, we
would expect that if $Z\subset X$ is a closed subset of codimension at least $2$, then if $f$ is nowhere zero on $X\setminus Z$, then it is nowhere zero on $X$. In (yet) other words, if
$1/f$ is a regular function on $X\setminus Z$, then we expect that it is a regular function on $X$. This in the language of sheaves means that we expect that the push-forward of $\mathscr
O_{X\setminus Z}$ to $X$ is isomorphic to $\mathscr O_X$. Now this is essentially equivalent to $X$ being $S_2$.
So we get that in order to define divisors as we are used to them, we would need that $X$ be $R_1$ and $S_2$, that is, normal.
Now, actually, one can work with objects that behave very much like divisors even on non-normal varieties/schemes, but one has to be very careful what properties work for them.
As far as I can tell, the best way is to work with Weil divisorial sheaves which are really reflexive sheaves of rank $1$. On a normal variety, the sheaf associated to a Weil divisor $D$,
usually denoted by $\mathcal O_X(D)$, is indeed a reflexive sheaf of rank $1$, and conversely every reflexive sheaf of rank $1$ on a normal variety is the sheaf associated to a Weil
divisor (in particular a reflexive sheaf of rank $1$ on a regular variety is an invertible sheaf) so this is indeed a direct generalization. One word of caution here: $\mathcal O_X(D)$
may be defined for Weil divisors that are not Cartier, but then this is (obviously) not an invertible sheaf.
Finally, to answer your original question about canonical divisors. Indeed it is possible to define a canonical divisor (=Weil divisorial sheaf) for all quasi-projective varieties. If $X\
subseteq \mathbb P^N$ and $\overline X$ denotes the closure of $X$ in $\mathbb P^N$, then the dualizing complex of $\overline X$ is $$ \omega_{\overline X}^\bullet=R{\mathscr H}om_{\
mathbb P^N}(\mathscr O_{\overline X}, \omega_{\mathbb P^N}[N]) $$ and the canonical sheaf of $X$ is $$ \omega_X=h^{-n}(\omega_{\overline X}^\bullet)|_X=\mathscr Ext^{N-n}_{\mathbb P^N}(\
mathscr O_{\overline X},\omega_{\mathbb P^N})|_X $$ where $n=\dim X$. (Notice that you may disregard the derived category stuff and the dualizing complex, and just make the definition
using $\mathscr Ext$.) Notice further, that if $X$ is normal, this is the same as the one you are used to and otherwise it is a reflexive sheaf of rank $1$.
As for your formula, I am not entirely sure what you mean by "where the $D_i$ are representatives of all divisors in the Class Group". For toric varieties this can be made sense as in
Josh's answer, but otherwise I am not sure what you had in mind.
(Added on 11/12/12):
Lemma A scheme $X$ is $S_2$ if and only if for any closed subset $\iota:Z\to X$ of codimension at least $2$, the natural map $\mathscr O_X\to \iota_*\mathscr O_{X\setminus Z}$ is an isomorphism.
Proof Since both statements are local we may assume that $X$ is affine. Let $x\in X$ be a point and $Z\subseteq X$ its closure in $X$. If $x$ is a codimension at most $1$ point, there is
nothing to prove, so we may assume that $Z$ is of codimension at least $2$.
Considering the exact sequence (recall that $X$ is affine): $$ 0\to H^0_Z(X,\mathscr O_X) \to H^0(X,\mathscr O_X) \to H^0(X\setminus Z,\mathscr O_X) \to H^1_Z(X,\mathscr O_X) \to 0 $$
shows that $\mathscr O_X\to \iota_*\mathscr O_{X\setminus Z}$ is an isomorphism if and only if $H^0_Z(X,\mathscr O_X)=H^1_Z(X,\mathscr O_X)=0$; the latter condition is equivalent to $$\mathrm{depth}\,\mathscr O_{X,x}\geq 2,$$ which given the assumption on the codimension is exactly the condition that $X$ is $S_2$ at $x\in X$. $\qquad\square$
This was a pleasure to read, Sandor! I love your top-down reasoning. It makes the abstruse accessible. Thanks. – Eric Zaslow Nov 19 '10 at 22:00
Thanks, Eric! :) – Sándor Kovács Nov 19 '10 at 22:23
Dear Sandor, when you say "...this is essentially equivalent..." is this really equivalent? Is it true that if for every regular function $f$ its zero set has codimension $1$ then the variety has property $S_2$? Can property $S_2$ be formulated in such a language, or is this just implied by $S_2$? (maybe I misunderstand what you wrote). – aglearner Oct 11 '12
@aglearner: I added a statement to explain that. I believe it tells you that in fact you can define property S2 this way. See also mathoverflow.net/questions/45347/… and mathoverflow.net/questions/45347/… – Sándor Kovács Oct 13 '12 at 1:38
Dear Sandor, thank you for adding this reasoning, this is very helpful. I realised though that I have one more question about the same paragraph in your answer. Do you know an example of a variety $X$ that satisfies $R_1$ but not $S_2$ and has a function that vanishes on a set $Z$ of codimension $2$ or larger in $X$? I find it hard to imagine such an example (I don't think such an example exists among affine varieties over $\mathbb C$...) – aglearner Oct 16 '12 at 21:52
Your formula is not quite right for toric varieties. In particular, the sum is not over "representatives of the class group", but over a set of minimal generators for the free group on
torus-invariant divisors. Such a set is furnished by the 1-cones in the fan. More precisely,
Let $X_\Sigma$ be the toric variety associated to a fan $\Sigma$, and assume that $X_\Sigma$ has no torus factors. Then for each $\rho \in \Sigma(1)$, there is a torus invariant
divisor $D_\rho$, and $$\mathcal O_{X_\Sigma}\Big(-\sum_{\rho} D_\rho\Big)\cong \omega_{X_\Sigma}$$
This is Proposition 8.2.7 in Cox Little Schenck.
An easy toric proof that no projective toric variety is Calabi-Yau is that, as you said, the minimal generators of the one-cones must lie in a hyperplane. The positive hull over such a set
of generators is strongly convex, so that the support of the fan cannot be all of $N_{\mathbb R}$, and thus $X_\Sigma$ is not complete and thus not projective.
I believe that your question really concerns the existence of a natural set of generators for the "total coordinate ring" of a Calabi-Yau variety and whether they obey a linear relation. The total coordinate ring is defined to be $$R=\bigoplus_{D\in Cl(X)} \Gamma(X,\mathcal O_X(D)).$$
Here $Cl(X)$ is the class group of $X$. See 0801.3995 for more details.
For toric varieties $X_\Sigma$, this is simply $\mathbb C[x_\rho | \rho \in \Sigma(1)]$. If this is indeed your question, it is probably too much to hope for, as $R$ is not known to be
finitely generated! It is known to be so for toric varieties (Cox), and for varieties of Fano type (Birkar–Cascini–Hacon–McKernan). Some other specifically constructed examples (
Prendergast-Smith) are known, but a general characterization is not.
Edit: Updated the links and fixed the unclear notation - thanks Artie!
Dear Josh, a couple of minor comments: 1. In your definition of the total coordinate ring, I guess you should say what K is (otherwise it's a little mysterious). 2. A more up-to-date reference for Birkar--Cascini--Hacon--McKernan is the published version, available online at ams.org/journals/jams/2010-23-02/S0894-0347-09-00649-3/… – Artie Prendergast-Smith Sep 29 '10 at 13:04
G22.2590 - Natural Language Processing -- Spring 2003 -- Prof. Grishman
Assignment #10
April 11, 2005
(Predicate calculus practice)
[1 point each]
1. For at least some speakers, the sentence
Everyone noticed a cat.
is ambiguous. Formalize this ambiguity by expressing its two readings in predicate calculus (without event reification).
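A quick illustrative way to see that the two readings genuinely differ is to evaluate both quantifier scopings over a toy model (the domain and the noticed relation here are invented for the example):

```python
# Toy model: each person noticed a different cat.
people  = {"john", "mary"}
cats    = {"felix", "tom"}
noticed = {("john", "felix"), ("mary", "tom")}

# Reading 1: (exists c)(forall p) noticed(p, c) -- one cat seen by everyone
wide_exists = any(all((p, c) in noticed for p in people) for c in cats)
# Reading 2: (forall p)(exists c) noticed(p, c) -- possibly different cats
wide_forall = all(any((p, c) in noticed for c in cats) for p in people)

assert wide_forall and not wide_exists   # the two readings come apart
```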
2. Formalize the difference between the following two sentences
John read a book and Mary read it too.
John read a book and Mary read one too
by expressing their meanings in predicate calculus (without event reification).
3. Using predicate calculus with event reification, and treating vegetarians and hamburgers as sets of objects (to be quantified over), represent the meaning of
Vegetarians do not eat hamburgers.
4. Express the following in idiomatic English
(forall x) loves(Arthur,x)
loves(Fred, Fred)
(forall x) (rich(x) implies loves(George, x))
~(exists x) (forall y) loves(x, y)
Due April 25th.
Cryptology ePrint Archive: Report 2006/092
Cryptanalysis of RSA with constrained keys
Abderrahmane Nitaj
Abstract: Let $n=pq$ be an RSA modulus with unknown prime factors and $F$ any function for which there exists an integer $u\neq 0$ satisfying $F(u)\approx n$ such that $pu$ or $qu$ is computable from $F(u)$ and $n$. We show that if one chooses a public exponent $e$ for which there exist positive integers $X$, $Y$ such that $\left\vert eY-XF(u)\right\vert$ and $Y$ are suitably small, then the system is insecure.
Category / Keywords: RSA cryptosystem, Cryptanalysis, Continued fractions, Blömer-May attack, Coppersmith's algorithm
Date: received 9 Mar 2006
Contact author: nitaj at math unicaen fr
Available format(s): Postscript (PS) | Compressed Postscript (PS.GZ) | PDF | BibTeX Citation
Version: 20060309:150750 (All versions of this report)
Summary: Phys 597A, CMPS 497E
Graphs and Networks in Systems Biology
Lecturer: Réka Albert
122 Davey Laboratory
Networks, networks everywhere
• Network infrastructure, social networking
• Network - a tool for understanding complex systems
• Many non-identical elements connected by diverse interactions
• E.g. interaction networks within cells: protein interactions, chemical reactions, gene regulation
• Graph measures provide information on interaction graphs
• Network models explain and predict properties of graph classes
• Network topology influences network robustness and the dynamics of flows
• E.g. dynamics of molecular interaction networks determines the behavior of cells.
• Understand emergent properties: synchronization, phase transitions, homeostasis
Definition of graphs (networks)
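The simplest of the "graph measures" mentioned above is the node degree. A minimal sketch with a made-up interaction list:

```python
def degrees(edges):
    """Node degrees of an undirected graph given as an edge list."""
    deg = {}
    for u, v in edges:
        deg[u] = deg.get(u, 0) + 1
        deg[v] = deg.get(v, 0) + 1
    return deg

# hypothetical protein-interaction edges
edges = [("A", "B"), ("A", "C"), ("A", "D"), ("B", "C")]
assert degrees(edges) == {"A": 3, "B": 2, "C": 2, "D": 1}
```

The full degree distribution computed this way is the usual first step in comparing an interaction network against the network models the course covers.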
Make: Projects - Polycube puzzles from blank dice
A number of interesting assembly puzzles can be made from pieces consisting of simply joined cubes in various numbers and arrangements. Piet Hein’s Soma Cube is a notable example, consisting of all
the simply joined non-convex polycubes having four or fewer units. Generally, a polyomino or polycube puzzle is presented as an outline or volume to be filled in with a certain set of pieces. It is
up to the solver to figure out how to pack the pieces to fill the specified form.
Among the more interesting of the polycube puzzles are the solid pentominoes. The flat pentominoes are commonly used in early elementary education programs, so many readers will doubtless be familiar
with them. Extruding the flat pentominoes by one unit in the Z-dimension gives the set of what are traditionally called “solid pentominoes.” They can be used to solve any flat pentomino puzzle, but
also to create various 3D shapes. The 3D puzzles are considerably more challenging.
To make a satisfying polycube puzzle requires that the pieces be dimensioned very accurately, so they will always pack closely regardless of their arrangement. To achieve this accuracy with common
hand tools is very difficult. However, blank dice provide a convenient and inexpensive source of accurate, precise unit cubes which may be joined to create the various pieces. The use of translucent
dice is recommended, both because they look cool and because they're guaranteed to be acrylic and hence strongly bondable with standard acrylic cements. All the opaque dice I've tried to glue have proven highly resistant to adhesives of all types; I suspect they're made out of polyethylene.
• Combination square or other accurate inside right angle
• Steel cookie sheet or other magnetic surface
• About a dozen 1/4″ cylindrical supermagnets
• Small paintbrush, e.g. #0
• 60 blank translucent dice (I used 16mm dice, 20 of each in red, green, and blue)
• Acrylic cement
• Soap & water
• Isopropyl alcohol
Step 1: Clean your dice
Properly cemented acrylic joints are extremely tough. And a large part of getting a good joint is making sure the mating surface are free from dirt, oil, and other contaminants that can interfere
with the bond. It’s worth it, in the long run, to take the time to wash the blank dice with soap and water. Finish up with a quick dunk in rubbing alcohol to accelerate drying.
Step 2: Set up your gluing jig
To make true joints, you need a true edge and a true 90-degree angle. This combination square from Harbor Freight (#32244) provides both in an inexpensive tool. The work is done on a steel tray or
cookie sheet, so that small supermagnets can be used to hold everything in place while the cement is applied. Except for the L and V, most pentominoes cannot be glued in a single step against a right
angle. Save the L and the V for last, and use their pieces as “blanks” to build up the surface of the angle as needed for each shape.
Step 3: Glue, glue, glue
To make a piece, the dice are set up against the jig in the proper arrangement and secured in place with magnets. Then a small brush soaked in acrylic cement is applied to each joint, and capillary
action draws it across the faces to be bonded. If you are using “blanks,” obviously, be sure to give the joints you do not want glued a wide berth. Give each piece at least one hour to dry before
removing it. Then flip it over and apply cement to each joint, again, from the other side. Then set it aside overnight to cure.
Step 4: Enjoy!
Pentominoes are a classic mathematical recreation, and there’s lots of information out there about them, including many proven problems. Generally, it’s easiest to start with the “plane” pentomino
problems in 2D, a set of which is given above. Again, the idea here is to make the given shape, exactly, using all 12 pentominoes. Another common challenge is to build a 300% scale model of any
particular pentomino using 9 of the OTHER pentominoes.
Once you get a feel for the plane problems, you may want to move on to 3D solid pentomino problems, which are considerably more difficult for most people. A set of graded problems is presented below.
The difficulty ratings assigned are relative and are based on the number of possible solutions; those rated “hard” have the fewest solutions, and those rated “easy” have the most.
A final challenge is to produce puzzles of your own. There are 12 pentominoes with an area/volume of 5 units each, so any complete pentomino puzzle will have an area/volume of 60 units.
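If you want to sanity-check that arithmetic, a few lines of Python make the bookkeeping concrete. (The T-pentomino coordinates below are just one convenient orientation I've picked for illustration, not anything from this project.)

```python
# A pentomino as a set of (row, col) unit cells -- here the "T" piece.
T = {(0, 0), (0, 1), (0, 2), (1, 1), (2, 1)}

def scale(cells, factor):
    """Replace every unit cell with a factor x factor block of cells."""
    return {(factor * r + i, factor * c + j)
            for (r, c) in cells
            for i in range(factor)
            for j in range(factor)}

assert len(T) == 5              # every pentomino covers 5 units
assert len(scale(T, 3)) == 45   # 300% model: 45 cells = 9 pieces x 5
assert 12 * 5 == 60             # a complete puzzle always covers 60 units
```

The same `scale` function also shows why a 200% model is impossible with whole pieces: it would need 20 cells, which is not a multiple of 5 pieces minus the original.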
Notes and ideas
Be careful, when building the U pentomino, to be certain to use a spacer die in the hollow of the U when you glue. Otherwise you may end up with a stubborn U that will accept another block only with difficulty.
A classic book on polyomino puzzles of all types is Solomon W. Golomb’s Polyominoes published in 1965. It’s interesting reading, if you want an in-depth study of the problem and a gaggle of
interesting problems, but it’s by no means necessary. There’s more than enough polyomino information on the web to satisfy all but the most ardent curiosity.
1. … where do you get blank dice?
1. Where?
Although some dice have painted-on spots. Maybe those could be washed/scraped/sanded off?
2. I certainly hope it was from somewhere cheaper than this place:
$1.00 PER die? Holy crap!
3. D’oh. Yes. I really should’ve mentioned that.
I bought mine at my local nerdy gaming emporium out of a giant acrylic bin for 40 cents apiece. Looking around online, just now, the prospects are admittedly pretty bleak. TAP plastics,
however, does offer clear cast acrylic cubes in 5/8″ (16mm) for $8.75 for 25 count, which is 35 cents apiece.
I have not used these, unfortunately, so I can’t say with certainty that they are as precise as those manufactured for use as dice. I would be surprised, however, to find that they are not.
2. A few years ago my daughters and I made this project using blank wooden cubes. The colored plastic dice do look cool, but there’s also something to be said for wooden toys for kids. We didn’t use a
jig, just were really careful about lining the cubes up, and it worked perfectly. After drying (1/2 hour with wood glue), we painted each pentomino piece a different color, using standard tempera
paint. We also built simple open-topped wooden boxes to hold the pieces. This is a great project. If there were a way to post a photo, I would do so.
3. Forgot to say, here’s a link to find blank wooden cubes (20mm = about 1/2inch size): | {"url":"http://makezine.com/2009/08/28/make-projects-polycube-puzzles-fr/","timestamp":"2014-04-18T02:09:45Z","content_type":null,"content_length":"77986","record_id":"<urn:uuid:5d4ad600-9eab-4dd5-958a-892838cac92b>","cc-path":"CC-MAIN-2014-15/segments/1398223206118.10/warc/CC-MAIN-20140423032006-00134-ip-10-147-4-33.ec2.internal.warc.gz"} |
2013 florida algebra eoc questions
Florida algebra i eoc with online practice tests (florida fcat, Florida algebra i end-of-course assessment – with online practice tests! completely aligned with the benchmarks in florida’s next
generation sunshine state.
Florida algebra i eoc with online practice tests (test prep) by, Florida algebra i eoc with online practice tests (test prep) by rea: passing the florida algebra 1 eoc test about this book this book,
along with reas true-to-format.
Myflvs – algebra 1 students – florida virtual school, What do i need to know? the state of florida has instituted an end-of-course (eoc) assessment for algebra 1 students designed to measure student
achievement of the.
Florida virtual school algebra eoc practice test – 1060 free pdf, Download florida virtual school algebra eoc practice test ebooks for free or read online on mybookezz.org – the florida algebra 1 end
– st. johns county school district.
Florida algebra 1 end of course (eoc) exam practice part 1 – youtube, The first 9 problems from algebra 1 eoc practice exam available from flvs for best results, download and print the practice test
so you can follow along!.
Bureau of k-12 assessment: end-of-course assessments, K-12 assessment end-of-course assessments florida end-of-course (eoc) assessments. the florida eoc assessments are part of florida’s next
generation strategic.
Florida end-of-course (eoc) assessments – bureau of k-12 assessment, Fcat 2.0 results 2012-2013 florida end-of-course (eoc) assessments florida end-of-course (eoc) assessments results.
Florida algebra i end-of-course assessment w/online practice tests, Taking the florida algebra 1 end-of-course exam? then you need rea’s florida algebra 1 end-of-course test prep with online practice
exams! if you’re facing the.
Algebra 1 eoc practice – schoolworld an edline solution, Pdf file: you need adobe acrobat reader (version 7 or higher) to view this file. download the free adobe acrobat reader for pc or macintosh.
doc file: you need the.
Eoc 1666 bathgate avenue/ –(algebra eoc practice test nc) , escambia | {"url":"http://educationbulletinboard.com/2013/07/11/2013-florida-algebra-eoc-questions/","timestamp":"2014-04-19T09:48:59Z","content_type":null,"content_length":"15832","record_id":"<urn:uuid:70a2399b-87fa-47d6-96c6-4832d1f71415>","cc-path":"CC-MAIN-2014-15/segments/1397609537097.26/warc/CC-MAIN-20140416005217-00526-ip-10-147-4-33.ec2.internal.warc.gz"} |
Faculty: Martin Zwick Course Information
SySc 511 surveys fundamental systems concepts and central aspects of systems theory. The course begins with an overview of the systems paradigm and the systems field as a whole. Topics then include
introductions to set- and information-theoretic multivariate relations, dynamic systems, regulation and control, model representation and simulation; decision analysis, optimization, and game theory;
artificial intelligence, complex adaptive systems. Readings draw from mathematics, the natural and social sciences, and the professional disciplines (e.g., engineering, business).
The course content derives both from "classical" general systems theory, cybernetics, and operations research as well as from more contemporary systems research which is organized around the themes
of nonlinear dynamics, complexity, and adaptation.
SySc 551 focuses on information theory as a modeling framework and as a tool for discrete multivariate analysis. The course presents set- and information-theoretic methods for studying static or
dynamic (time series) relations among qualitative variables or among quantitative variables having unknown nonlinear relationships. In the 'general systems' literature, this is known as
'reconstructability analysis' (RA). RA overlaps partially with log-linear statistical techniques widely used in the social sciences; both are especially valuable in data-rich applications (but RA is
not exclusively statistical). RA is highly relevant to the many interrelated "projects" which go under the names of data-mining, machine learning, knowledge discovery and representation, etc.
Applied to data analysis, RA allows the decomposition and compression of multivariate probability distributions (contingency tables) and set-theoretic relations (and mappings), as well as the
composition of multiple distributions/relations. These methods are very general. They are valuable in the natural and social sciences and in engineering, business, or other professional fields
whenever categorical variables are useful or, for quantitative variables, where linear models are inadequate. Applied to the conceptualization of "structure" (the relations between wholes and parts)
and "complexity," these set- and information-theoretic ideas are foundational for systems science.
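To give a rough flavor of the information-theoretic side of this material — this is not the RA/OCCAM machinery itself, just a minimal sketch — here is how the mutual information between two categorical variables can be estimated directly from raw observations:

```python
import math
from collections import Counter

def mutual_information(pairs):
    """I(X;Y) in bits, estimated from a list of (x, y) observations."""
    n = len(pairs)
    joint = Counter(pairs)                  # contingency-table counts
    px = Counter(x for x, _ in pairs)
    py = Counter(y for _, y in pairs)
    mi = 0.0
    for (x, y), c in joint.items():
        p_xy = c / n
        mi += p_xy * math.log2(p_xy / ((px[x] / n) * (py[y] / n)))
    return mi

# two perfectly linked binary variables share 1 bit; independent ones share 0
print(mutual_information([(0, 0), (1, 1)] * 50))                  # -> 1.0
print(mutual_information([(0, 0), (0, 1), (1, 0), (1, 1)] * 25))  # -> 0.0
```

Nonzero mutual information is exactly the "redundant predictive information" that methods like RA decompose across many variables at once.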
DMIT is a project-based course that offers an opportunity to use information theoretic methods to analyze data, without having first to master the underlying theory. These methods are particularly
ideal for detecting unknown non-linear relations or many-variable interaction effects. The methods are implemented in a Systems Science software package named OCCAM that will be the main analytical
tool used in the course. The underlying theory is taught in SySc 551/651 Discrete Multivariate Modeling (DMM), but DMIT is stand-alone and does not have DMM as a prerequisite. Only the theory that is
needed to understand OCCAM inputs and outputs will be presented; the software will otherwise be treated as a black box. DMIT does require, however, that those taking it have data to analyze. Data
should be in spreadsheet format, where columns are variables (nominal or continuous) and rows are cases (that sample a population or are points in time or space). The number of cases (the sample
size) should be in the 100s or preferably higher but at least in the 10s. The larger the sample size the more variables can be analyzed, but OCCAM has not yet been used for more than 100s of
variables. Questions about suitability of data should be directed to zwick@pdx.edu.
SySc 510/610 will continue the presentation of DMM (SySc 551/651), and will focus on (a) projects and (b) advanced topics. In projects, students will do either (i) an intensive analysis of
some dataset or (ii) a software project that enhances the current set of RA tools. The advanced topics will include most (or all) of the following: state-based RA and k-systems analysis; RA loopless
models with many variables ("dependency analysis"); identification with inconsistent data; set-theoretic RA and binary decision diagrams; intra-model analysis; modeling with latent variables; RA and
genetic algorithms; Fourier-based RA techniques; binning; the OCCAM software package.
Game theory involves the study of cooperation and competition, without regard to the particular entities involved, and issues of rationality associated with such phenomena. Sysc 552 presents the
basic ideas of game theory, especially those concerning (a) 2-person zero-sum games, which the theory solves, and (b) 2- (or n-) person nonzero-sum games, which have no general solution
and which often exhibit paradoxical features. Of particular substantive interest are dilemmas of collective action, which characterize many social, economic, and political problems. Of particular
methodological interest are simulation techniques used to extend game-theory into domains where analytical results are impossible.
Also covered are (c) 2-person cooperative games (bargaining & arbitration), which have alternative plausible solutions; (d) coalition theory (n-person games), in which analysis is
complex and limited; and (e) social choice theory, which reveals the difficulties in integrating individual preferences into collective decisions. Emphasis in the course is on the findings of game
theory, especially as they apply to the social sciences, rather than on the purely technical aspects of the theory.
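As a concrete flavor of the dilemmas of collective action mentioned above, a few lines of Python check the textbook Prisoner's Dilemma. (The payoff numbers are the conventional ones from the game-theory literature, not taken from the course.)

```python
# Row player's payoff for (own action, opponent action); C = cooperate, D = defect.
payoff = {('C', 'C'): 3, ('C', 'D'): 0, ('D', 'C'): 5, ('D', 'D'): 1}

def best_response(opponent_action):
    """Action that maximizes the row player's payoff against a fixed opponent."""
    return max(('C', 'D'), key=lambda a: payoff[(a, opponent_action)])

# Defection is the best response to *either* action, so (D, D) is the
# equilibrium -- yet mutual cooperation pays both players strictly more.
assert best_response('C') == 'D'
assert best_response('D') == 'D'
assert payoff[('C', 'C')] > payoff[('D', 'D')]
```

The gap between the equilibrium outcome and the mutually preferred one is the paradoxical feature the course description refers to.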
"Artificial Life" (ALife) is a name given to theoretical, mathematical, and computationally "empirical" studies of phenomena commonly associated with "life," such as replication, metabolism,
morphogenesis, learning, adaptation, and evolution. It focuses on the materiality-independent, i.e., abstract, bases of such phenomena. As such, it overlaps extensively with "theoretical biology"
and, less extensively, with certain areas of physics and chemistry and the social sciences. It also raises important philosophical questions. It is part of a larger research program into "complex
adaptive systems," one stream of contemporary systems theory.
In its intersection with computer science, ALife is the newest example of "the sciences of the artificial" (Herbert Simon). ALife is to life what AI is to intelligence. Christopher Langton writes
that "Artificial Life ... complements the traditional biological sciences ... by attempting to synthesize life-like behaviors within computers and other artificial media." The purpose is twofold: to
understand these phenomena better and to develop new computational technologies.
SySc 557 will sample the research literature in this field, and will be organized in a seminar format. Topics to be emphasized are: (1) discrete dynamics: cellular automata and random networks, (2)
ecological & evolutionary dynamics, (3) genetic algorithm optimization and adaptation, (4) agent-based simulation. Other topics will include: artificial and real chemistry (metabolism, reproduction,
& origin of life), "complex adaptive systems," autonomous agents, and philosophical issues.
This seminar will consider some philosophical issues central to the systems field. Fundamental to these issues is Bunge's conception of systems science as a research program aimed at the construction
of "an exact and scientific metaphysics," that is, a set of concepts, models, and theories of broad generality and philosophical import, which are applicable to the sciences, and which are cast (or
capable ultimately of being cast) in the exact language of mathematics.
The course will present a broad range of systems ideas (from information theory, game theory, thermodynamics, non-linear dynamics, decision theory, and many other areas) and attempt to integrate
these ideas into a coherent framework. These ideas will be organized around the theme of fundamental "problems," that is, difficulties (imperfections, modes of failure) encountered by many systems of
widely differing types. While most of these ideas are mathematically-based, they will be approached in this course primarily at a conceptual level (with mathematical details provided as requested).
Many of these systems ideas derive from the natural sciences and engineering, but they apply as well to the social sciences and to fields of professional practice (business, the helping professions,
etc.). It is primarily their relevance to the human domain -- to individuals, groups, organizations, and societies -- and to technology which motivates this theoretical/philosophical inquiry. Certain
of these ideas pertain also to the arts and humanities.
This course will examine systems-theoretic ideas that bear on sustainability. These ideas come from graph theory, non-linear dynamics, game and decision theory, thermodynamics, theories of complex
adaptive systems, and from systems-oriented theories in the earth sciences, ecology, sociology, and history. The ideas shed light on the causes of sustainability problems and on the principles that
might guide attempts to solve these problems. A talk introducing these ideas and their relevance to sustainability is at www.sysc.pdx.edu/download/papers/sustain07.pdf. Many of the systems ideas
covered in this course are mathematically-based, but the ideas will be presented mainly at a conceptual level (with mathematical details provided as requested).
Topics will include
• macro-historical perspectives on the sustainability challenge
• world systems and earth systems models
• the classic limits to growth studies, updated
• the Panarchy model of adaptation in ecological & social systems
• the Tragedy of the Commons and its possible solutions
• energy and entropy dimensions of sustainability.
Table 1. Degree of mathematical content and topic specialization of courses
Math Content \ Topic Specialization | low (survey course) | high (one-topic course)
high | Systems Theory | Discrete Multivariate Modeling^1, Data Mining with Information Theory
medium | Systems Philosophy | Artificial Life, Game Theory
low | Systems Ideas & Sustainability | —
SP: 510/610; ST: 511/611; AL: 557/657; DMM: 551/651; GT: 552/652
I am comparing the relative mathematical content only of my own courses; I'm not quantifying how mathematical my courses are compared to courses given by other faculty.
^1Although the Discrete Multivariate Modeling course is listed as having "high" mathematical content, it is accessible to social science students who have taken courses in probability and statistics.
Calculus is not needed for DMM. | {"url":"http://www.pdx.edu/sysc/faculty-martin-zwick-course-information","timestamp":"2014-04-18T16:29:56Z","content_type":null,"content_length":"38935","record_id":"<urn:uuid:ea7fd1aa-cd6e-4ead-ac3a-f31bc9985b70>","cc-path":"CC-MAIN-2014-15/segments/1397609533957.14/warc/CC-MAIN-20140416005213-00662-ip-10-147-4-33.ec2.internal.warc.gz"} |
Antievolution.org - Antievolution.org Discussion Board -Topic::Official Uncommonly Dense Discussion Thread
Posts: 50
Joined: Oct. 2006
(Permalink) Posted: Jan. 05 2007,02:56
Arghh, that infinite monkeys conversation is so stupid! What I especially can't stand is the way they so smugly congratulate themselves on seeing through this Darwinian obfuscation, without
acknowledging for one second that it may be their understanding of the concepts involved that is incorrect.
I'll try and explain, for the benefit of any UD lurkers here, using slightly different language.
1. The phase space of 'all possible books' contains the complete works of Shakespeare (and all other books, by definition).
2. If we randomly sample this phase space for an infinite period of time, all possible results will emerge.
3. Ergo, any system (in this case monkeys typing) that randomly samples the phase space over the period of infinity will eventually produce all possible books.
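For anyone who prefers numbers to phase-space talk, a quick simulation (a sketch of mine, not part of the original post) shows the finite-scale version of point 3: the longer the random typing runs, the closer the hit probability climbs toward 1.

```python
import random
import string

def hit_rate(n_chars, target, trials=300, seed=1):
    """Fraction of random typing sessions of n_chars lowercase letters
    that contain `target` somewhere as a substring."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        text = ''.join(rng.choice(string.ascii_lowercase) for _ in range(n_chars))
        hits += target in text
    return hits / trials

# longer sessions sample more of the phase space of strings, so the
# chance of hitting any fixed target grows toward certainty
print(hit_rate(200, "to"), hit_rate(5000, "to"))
```

With an unbounded number of characters, the miss probability decays geometrically to zero, which is the infinite-time claim in point 2.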
For a group of people so concerned with where we are all going for infinity they sure have a shaky understanding of the concept. | {"url":"http://www.antievolution.org/cgi-bin/ikonboard/ikonboard.cgi?act=SP&f=14&t=1274&p=45583","timestamp":"2014-04-25T05:07:37Z","content_type":null,"content_length":"26977","record_id":"<urn:uuid:a2f0d18c-3035-4d4a-a416-37f75a140a7e>","cc-path":"CC-MAIN-2014-15/segments/1398223210034.18/warc/CC-MAIN-20140423032010-00022-ip-10-147-4-33.ec2.internal.warc.gz"} |
Phase Lag
Phase lag is a parameter of the eddy current signal that makes it possible to obtain information about the depth of a defect within a material. Phase lag is the shift in time between the eddy current
response from a disruption on the surface and a disruption at some distance below the surface. The generation of eddy currents can be thought of as a time dependent process, meaning that the eddy
currents below the surface take a little longer to form than those at the surface. Disruptions in the eddy currents away from the surface will produce more phase lag than disruptions near the
surface. Both the signal voltage and current will have this phase shift or lag with depth, which is different from the phase angle discussed earlier. (With the phase angle, the current shifted with
respect to the voltage.)
Phase lag is an important parameter in eddy current testing because it makes it possible to estimate the depth of a defect, and with proper reference specimens, determine the rough size of a defect.
The signal produced by a flaw depends on both the amplitude and phase of the eddy currents being disrupted. A small surface defect and large internal defect can have a similar effect on the magnitude
of impedance in a test coil. However, because of the increasing phase lag with depth, there will be a characteristic difference in the test coil impedance vector.
Phase lag can be calculated with the following equation. The phase lag angle calculated with this equation is useful for estimating the subsurface depth of a discontinuity that is concentrated at a
specific depth. Discontinuities, such as a crack that spans many depths, must be divided into sections along its length and a weighted average determined for phase and amplitude at each position
below the surface.
θ = x / d

θ = phase lag (rad or degrees)
x = distance below surface (in or mm)
d = standard depth of penetration (in or mm)
At one standard depth of penetration, the phase lag is one radian or 57°. This means that the eddy currents flowing at one standard depth of penetration (d) below the surface, lag the surface
currents by 57°. At two standard depths of penetration (2d), they lag the surface currents by 114°. Therefore, by measuring the phase lag of a signal the depth of a defect can be estimated.
On the impedance plane, the liftoff signal serves as the reference phase direction. The angle between the liftoff and defect signals is about twice the phase lag calculated with the above equation.
As mentioned above, discontinuities that have a significant dimension normal to the surface, will produce an angle that is based on the weighted average of the disruption to the eddy currents at the
various depths along its length.
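The depth estimate described above can be sketched in a few lines, assuming (as stated) that the measured rotation between the liftoff and defect signals is roughly twice the phase lag:

```python
import math

def defect_depth(signal_angle_deg, std_depth):
    """Estimate defect depth from the angle (in degrees) between the
    liftoff and defect signals, taken as roughly twice the phase lag."""
    phase_lag_rad = math.radians(signal_angle_deg / 2.0)
    return phase_lag_rad * std_depth   # phase lag = x / d  ->  x = lag * d

# a defect at one standard depth lags by 1 rad (~57 deg), so its signal
# sits about 114 deg away from liftoff on the impedance plane
assert abs(defect_depth(114.59, std_depth=1.0) - 1.0) < 1e-2
```

For a crack spanning many depths, this single-angle estimate would be replaced by the weighted average described in the text.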
In the applet below, the relationship between the depth and dimensions of a discontinuity and the rotation produced on the impedance plane is explored. The red lines represent the relative strength
of the magnetic field from the coil and the dashed lines indicate the phase lag of the eddy currents induced at a particular depth. | {"url":"http://www.ndt-ed.org/EducationResources/CommunityCollege/EddyCurrents/Physics/phaselag.htm","timestamp":"2014-04-19T01:47:35Z","content_type":null,"content_length":"18415","record_id":"<urn:uuid:1d11bc18-7fc9-483c-8ca6-164884892b66>","cc-path":"CC-MAIN-2014-15/segments/1397609535745.0/warc/CC-MAIN-20140416005215-00545-ip-10-147-4-33.ec2.internal.warc.gz"} |
Frequently Asked Questions
What are the basic properties of angles?
Once 360 degrees has been defined as the angle through which to turn for one complete rotation, we quickly establish other useful facts.
A half turn, or the amount of turn about a single point on a straight line is 180 degrees, and a quarter turn, or a right angle, is represented by 90 degrees.
This leads us to our first important result. Consider a pair of intersecting lines.
It is clear that a + b = 180 (angles on a straight line) and b + c = 180. Therefore a + b = b + c and so a = c. That is, opposite angles are equal; this result is called the X angle property.
Let us now consider a line intersecting a pair of parallel lines.
As the line intersects the parallel lines in the same direction, it is self-evident that a = b. This is called the F angle property.
Consider the following diagram.
By the F angle property, a = c and by the X angle property, a = b, hence we establish the Z angle property, which states that b = c.
We are able to use the Z angle property to determine the sum of angles in a triangle.
It should be clear that a + b + c = 180. That is, the sum of interior angles in a planar triangle is 180 degrees. | {"url":"http://mathschallenge.net/library/geometry/angle_properties","timestamp":"2014-04-20T03:21:39Z","content_type":null,"content_length":"4755","record_id":"<urn:uuid:c12b31d7-82b2-4d94-a673-aea1565d0e7f>","cc-path":"CC-MAIN-2014-15/segments/1398223207985.17/warc/CC-MAIN-20140423032007-00609-ip-10-147-4-33.ec2.internal.warc.gz"} |
By far the most important aspect of inflation is that it provides a possible explanation for the origin of cosmic structures. The mechanism is fundamentally quantum mechanical; although inflation is
doing its best to make the Universe homogeneous, it cannot defeat the uncertainty principle which ensures that residual inhomogeneities are left over. ^(2) These are stretched to astrophysical scales
by the inflationary expansion. Further, because these are determined by fundamental physics, their magnitude can be predicted independently of the initial state of the Universe before inflation.
However, the magnitude does depend on the model of inflation; different potentials predict different cosmic structures.
One way to think of this is that the field experiences a quantum "jitter" as it rolls down the potential. The observed temperature fluctuations in the cosmic microwave background are one part in 10^5, which ultimately means that the quantum effects should be suppressed compared to the classical evolution by this amount.
Inflation models generically predict two independent types of perturbation:
Density perturbations δ_H^2(k):
These are caused by perturbations in the scalar field driving inflation, and the corresponding perturbations in the space-time metric.
Gravitational waves A_T^2(k):
These are caused by perturbations in the space-time metric alone.
They are sometimes known as scalar and tensor perturbations respectively, because of the way they transform. Density perturbations are responsible for structure formation, but gravitational waves can
also affect the microwave background.
We do not expect to be able to predict the precise locations of cosmic structures from first principles (any more than one can predict the precise position of a quantum mechanical particle in a box).
Rather, we need to focus on statistical measures of clustering. Simple models of inflation predict that the amplitudes of waves of a given wavenumber k obey gaussian statistics, with the amplitude of
each wave chosen independently and randomly from a gaussian. What it does predict is how the width of the gaussian, known as its amplitude, varies with scale; this is known as the power spectrum.
With current observations it is a good approximation to take the power spectra as being power laws with scale, so

δ_H^2(k) ∝ k^(n-1) ,   A_T^2(k) ∝ k^(n_T) .
In principle this gives four parameters - two amplitudes and two spectral indices - but in practice the spectral index of the gravitational waves is unlikely to be measured with useful accuracy,
which is rather disappointing as the simplest inflation models predict a so-called consistency relation relating n_T to the amplitudes of the two spectra, which would be a distinctive test of
inflation. The assumption of power-laws for the spectra requires assessment both in extreme areas of parameter space and whenever observations significantly improve.
^2 For a detailed account of the inflationary model of the origin of structure, see Ref. [4]. Back. | {"url":"http://ned.ipac.caltech.edu/level5/Sept01/Liddle4/Liddle3.html","timestamp":"2014-04-16T19:13:46Z","content_type":null,"content_length":"5079","record_id":"<urn:uuid:a40dee94-4158-47a5-b7c4-54942d203b0a>","cc-path":"CC-MAIN-2014-15/segments/1397609524644.38/warc/CC-MAIN-20140416005204-00093-ip-10-147-4-33.ec2.internal.warc.gz"} |
Total # Posts: 11
I know that and I thanked you for correcting me. We all make mistakes, right?
The equation is set up correctly. Good job! Let's see: 17 + 2n = 101....Original Equation Subtract 17 from both sides. 2n = 101 - 17 2n = 84 Now divide both sides by the coefficient 2 to find the
value of n. n = 84/2 n = 42 Is this true? Let's plug 42 for n in the Orig...
Yes, I totally missed some of the wording. Yes, Quidditch is right for question 1. Good looking out, Quidditch! blueridge
We have this: 7 + 2(5 - 2 * 3 ^ 2) USE PEMDAS...Do you know what PEMDAS stands for? Work out the inside of the parentheses first. Inside we have this guy: 5 - 2(3)^2 5 - 2(9) 5 - 18 -13 We now have
this: 7 + 2(-13) 7 - 26 = -19 Done! ================= NEXT: Name the sets of nu...
Use this formula: (a + b)(c + d + e) = ac + ad + ae + bc + bd + be ============================ Steps to solve your question: 1-Each terms of the left side polynomial, which is this guy (-.096t^4+3t^
3-27t^2+91t+1700), should be distributed through the second polynomial, ONE AT...
What exactly in terms of math are you having trouble with? Do you have a question(s)?
This question has been answered. Like I said before, you need to be more specific.
Regression curve is a study taught in advanced statistic courses. The simplest form of regression is fitting a line to data. Suppose I want to predict a tiger's height from its age. I can find the
equation of the "best" line that relates height to age, and I can ...
This question is not clearly stated. Many shapes have lines that decrease and/or increase just the same. A line that decreases reveals a decreasing function; a line that increases reveals an
increasing function.
I just want to add that whenever they ask you to solve for a letter, any letter, it means to isolate the letter just like Ms. Sue easily did. Do you see that l has been isolated on the left side of
the equation? This is very important to keep in mind in any math course. Also, ...
For question one use: P( A or B) = P(A) + P(B) - P(A and B) The rule for OR takes into account those values that may get counted more than once when the probability is determined. Let P = probability
Let A = biographies Let B = reference book Can you finish? | {"url":"http://www.jiskha.com/members/profile/posts.cgi?name=blueridge","timestamp":"2014-04-18T03:52:49Z","content_type":null,"content_length":"8718","record_id":"<urn:uuid:25652407-7e69-4017-a63e-c50f7c6a489c>","cc-path":"CC-MAIN-2014-15/segments/1397609532480.36/warc/CC-MAIN-20140416005212-00188-ip-10-147-4-33.ec2.internal.warc.gz"} |
Visualizing Multiple Regression
Edward H. S. Ip
University of Southern California
Journal of Statistics Education Volume 9, Number 1 (2001)
Copyright © 2001 by Edward H. S. Ip, all rights reserved.
This text may be freely shared among individuals, but it may not be republished in any medium without express written consent from the author and advance notification of the editor.
Key Words: Average stepwise regression; Teaching statistics; Type I and Type II sums of squares; Venn diagram.
Several examples are presented to demonstrate how Venn diagramming can be used to help students visualize multiple regression concepts such as the coefficient of determination, the multiple partial
correlation, and the Type I and Type II sums of squares. In addition, it is suggested that Venn diagramming can aid in the interpretation of a measure of variable importance obtained by average
stepwise selection. Finally, we report findings of an experiment that compared outcomes of two instructional methods for multiple regression, one using Venn diagrams and one not.
1. Introduction
One of the topics students encounter in statistics courses at both the undergraduate and the graduate level is multiple regression. This paper shows how the Venn diagram can be employed as a useful
visual aid to help students understand important and fundamental concepts in multiple regression such as R^2, partial correlation, and Type I and II sums of squares. Introduced by Venn (1880), the
Venn diagram has been popularized in texts on elementary logic and set theory (e.g., Suppes 1957). However, the use of Venn diagrams in the field of statistics has been quite limited. In a recent
example, Shavelson and Webb (1990) used them in generalizability studies to make visually accessible the partitioning of total variance into components. Moreover, the Venn diagram has been used to
illustrate correlation and regression (e.g., Pedhazur 1997; Hair, Anderson, and Tatham 1992, p. 47). While there are also good applications of Venn diagrams in a number of statistics texts (e.g.,
Agresti and Finlay 1997), just seeing them does not necessarily inform the lecturer about critical issues in creating them. The purpose of this article is to illustrate in a variety of ways that more
extensive use of Venn diagrams can be made in the classroom. Their clearest application in these contexts requires examples with no more than three independent variables whose interrelationships
explicitly avoid suppressor variable effects.
2. Venn Diagramming
A Venn diagram for regression displays the total sum of squares (TSS) as a rectangular box. Sums of squares (SS) of individual variables are depicted as ovals. Whenever numerical examples are
demonstrated, shapes should be drawn to scale so that the effects of the variables can be interpreted accurately.
2.1 Coefficient of Determination R^2
The coefficient of determination R^2 is the ratio of the sum of squares of regression (SSR), the total area covered by ovals, and TSS, the area of the rectangle. The case in which the variables are
uncorrelated can be represented by separated ovals in the Venn diagram. For example, Figure 1a shows what happens when the variables x[1] and x[2] are uncorrelated. It is clear from the figure that
R^2 = r^2_{yx1} + r^2_{yx2}.
Figure 1. (a) Uncorrelated Variables. (b) Correlated Variables With Redundant Information in Salary Example. The area of an oval denotes the regression sum of squares for the variable.
When the variables are correlated and contain redundant information, they can be represented by overlapping ovals. The overlapping part indicates the redundant information shared between the two
related variables. A dataset is taken from the Student Edition of Minitab for Windows (McKenzie, Schaefer, and Farber 1995, p. T-21) to illustrate this situation. It consists of data on the annual
salary (in thousands of dollars) of employees in a company. The predictor variables are gender and Nsuper, the number of staff under supervision by an individual. The sums of squares are
SS(gender) = 337, SS(gender|Nsuper) = 212, SS(Nsuper) = 1494, and SS(gender, Nsuper) = 1706. These SS are graphically represented in Figure 1b. With the aid of the diagram, instructors can actually
point to a piece that represents a particular SS. The ratio of "ground covered" by the ovals to the total area of the rectangle equals R^2. Since adding ovals (variables) always increases the "ground
covered," the concept that "R^2 will always increase as a result of adding variables" can be easily appreciated by students with the aid of the diagram.
The validity of Figure 1b in illustrating the "overlap" of predictive information depends crucially on the fact that SS(gender) + SS(Nsuper) > SS(gender, Nsuper). Unfortunately, although the
inequality SS(x[1]) + SS(x[2]) > SS(x[1], x[2]) holds most of the time in practice, exceptions do occur, and when they do, the areas of overlap are not positive. This will be discussed in Section 3.
In this section, it will be assumed that the overlapping areas are all positive.
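The overlap piece itself can be computed as SS(x[1]) + SS(x[2]) - SS(x[1], x[2]). The following sketch (our own invented data, not the Minitab salary dataset; NumPy assumed) shows a redundant pair of predictors for which the overlap is positive, so the diagram can be drawn:

```python
import numpy as np

def reg_ss(y, *xs):
    """Regression sum of squares of y on the given predictors (with intercept)."""
    X = np.column_stack([np.ones(len(y))] + list(xs))
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return np.sum((y - y.mean()) ** 2) - np.sum((y - X @ beta) ** 2)

# Two redundant predictors: x2 is x1 plus a small perturbation.
x1 = np.arange(10.0)
x2 = x1 + np.array([0.3, -0.2, 0.1, 0.4, -0.3, 0.2, -0.1, 0.3, -0.4, 0.1])
y = 2.0 * x1 + np.array([0.5, -0.6, 0.2, 0.1, -0.3, 0.4, -0.2, 0.6, -0.5, 0.3])

ss1, ss2, ss12 = reg_ss(y, x1), reg_ss(y, x2), reg_ss(y, x1, x2)
overlap = ss1 + ss2 - ss12   # the shared piece in a two-oval diagram
print(ss1, ss2, ss12, overlap)
```

The same three SS values also give the unique pieces, SS(x[1] | x[2]) = ss12 - ss2 and SS(x[2] | x[1]) = ss12 - ss1, which instructors can point to on the diagram.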
2.2 Generalization of R^2
Various forms of generalization of R^2 can be found in the literature. One generalization is described in Pedhazur (1982). The generalized R^2 is a measure of the predictive power of a variable after
partialing out another. The square of the partial correlation, as the measure is called, is defined (in the two-predictor case) as r^2[yx[2]·x[1]] = SS(x[2] | x[1]) / (TSS - SS(x[1])).
A visual representation of R^2 in Figure 2 indicates the SSR contributed by x[1] and x[2] (shaded area in Figure 2a) when both variables are included in the regression model. Partialing out x[1] is
equivalent to taking out the piece of SS that belongs to x[1] and treating the remaining area as the new TSS (Figure 2b). The residualized SS that is explained by x[2] can be represented by the
shaded area, the ratio of which to the eclipsed TSS is the squared partial correlation r^2[yx[2]·x[1]], sometimes referred to as the coefficient of partial determination in the regression context.
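This SS-ratio definition agrees with the familiar residual-based definition of a partial correlation. The sketch below (our own invented data; NumPy assumed) checks that SS(x[2] | x[1]) / (TSS - SS(x[1])) equals the squared correlation between the residuals of y and of x[2], each regressed on x[1]:

```python
import numpy as np

def reg_ss(y, *xs):
    """Regression sum of squares of y on the given predictors (with intercept)."""
    X = np.column_stack([np.ones(len(y))] + list(xs))
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return np.sum((y - y.mean()) ** 2) - np.sum((y - X @ beta) ** 2)

def resid(v, x):
    """Residuals of v regressed on x (with intercept)."""
    X = np.column_stack([np.ones(len(v)), x])
    beta, *_ = np.linalg.lstsq(X, v, rcond=None)
    return v - X @ beta

x1 = np.arange(10.0)
x2 = 0.5 * x1 + np.array([1.1, -0.7, 0.4, 0.9, -1.2, 0.3, -0.5, 0.8, -0.9, 0.2])
y = x1 + x2 + np.array([0.2, -0.4, 0.6, -0.1, 0.3, -0.5, 0.1, 0.4, -0.2, 0.5])

tss = np.sum((y - y.mean()) ** 2)
ss_ratio = (reg_ss(y, x1, x2) - reg_ss(y, x1)) / (tss - reg_ss(y, x1))
r_partial = np.corrcoef(resid(y, x1), resid(x2, x1))[0, 1]
print(ss_ratio, r_partial ** 2)  # the two values agree
```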
Figure 2. (a) Venn Diagrams of SS of Two Variables. Darker and lighter shades, respectively, correspond to SS(x[1]) and SS(x[2]). (b) SS(x[2] | x[1]) is Indicated by Shaded Area.
The notion of partial correlation readily extends, with the aid of a Venn diagram, to the case of several variables, including partial correlations that partial out more than one variable.
Suppose there are four variables, x[1], x[2], x[3], x[4]. Figure 3 shows a rectangle after partialing out both x[1] and x[2]. The squared
multiple partial correlation of x[3] and x[4] is based on the ratio of the area covered by (x[3], x[4]) and the eclipsed TSS. Generalizing the concept to more variables can be illustrated using
Figure 3. For example, the squared multiple partial correlation of (x[3], x[4]) with two variables (x[1], x[2]) partialed out is given by SS(x[3], x[4] | x[1], x[2]) / (TSS - SS(x[1], x[2])).
Figure 3. Venn Diagram Showing Partial Correlations With Two Variables (x[1], x[2]) Partialed Out.
2.3 Type I SS
There are several types of sums of squares used in the literature on linear models. The most commonly used SS reported in statistical packages are the Type I and Type II SS. A discussion of SS and
related references can be found in the SAS/STAT User's Guide (SAS Institute Inc. 1990). The Type I SS is the SS of a predictor after adjusting for the effects of the preceding predictors in the
model. For example, when there are three predictors, and their order in entering the equation is x[1], x[2], x[3], the Type I SS are SS(x[1]), SS(x[2] | x[1]), and SS(x[3] | x[2], x[1]). The Type I
SS would not be the same if the variables entered the equation in a different order. The fact that Type I SS are model-order dependent is illustrated by the Venn diagram in Figure 4. The Type I SS of
x[2] in (a) and (b) are, respectively, SS(x[2] | x[1]) and SS(x[2] | x[1], x[3]). The diagram helps instructors explain the arbitrariness of using the incremental SS such as the Type I SS, or the
incremental R^2 in procedures such as forward selection designed to isolate the variable(s) of importance.
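The order dependence takes only a few lines to demonstrate. In this sketch (our own invented data; NumPy assumed), the Type I SS of x[2] differs between two entry orders, while for any single order the Type I SS still telescope to the full-model SSR:

```python
import numpy as np

def reg_ss(y, *xs):
    """Regression sum of squares of y on the given predictors (with intercept)."""
    X = np.column_stack([np.ones(len(y))] + list(xs))
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return np.sum((y - y.mean()) ** 2) - np.sum((y - X @ beta) ** 2)

# Three mutually correlated predictors (arbitrary deterministic data).
t = np.arange(12.0)
x1 = t + np.sin(t)
x2 = t + np.cos(2.0 * t)
x3 = 0.4 * t + 0.5 * np.cos(2.0 * t) + np.sin(t + 1.0)
y = x1 + x2 + x3 + np.sin(5.0 * t)

# Type I SS of x2 under two different entry orders.
typeI_a = reg_ss(y, x1, x2) - reg_ss(y, x1)           # order x1, x2, x3
typeI_b = reg_ss(y, x1, x3, x2) - reg_ss(y, x1, x3)   # order x1, x3, x2
print(typeI_a, typeI_b)  # order matters: the two values differ

# Under any single order the Type I SS sum to the full-model SSR.
full = reg_ss(y, x1, x2, x3)
total_a = reg_ss(y, x1) + typeI_a + (full - reg_ss(y, x1, x2))
```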
Figure 4. Type I SS for x[2] (Shaded Region) When the Order is (a) x[1], x[2], x[3]; (b) x[1], x[3], x[2].
2.4 Type II SS
When the SS for each predictor is adjusted for all the other predictors in the regression equation, the resulting SS is called the Type II SS. In the three-predictor example, the Type II SSs are
SS(x[1] | x[2], x[3]), SS(x[2] | x[1], x[3]), and SS(x[3] | x[2], x[1]). Each Type II SS represents the effect of the predictor when it is treated as the last predictor that enters the equation. See
Figure 5 for an illustration.
Figure 5. Type II SS for x[2] (Shaded Area). It is equivalent to the Type I SS when the variable is the last predictor entered.
Venn diagramming illustrates not only the Type II SS, but also the effect of multicollinearity. When predictors are multicollinear, the effect of each predictor, as measured by its Type II SS
(that is, when it is treated as the "last predictor in"), may be insignificant even when the predictor is significant on its own. Chatterjee and Price (1977, p. 144) provide an example using
achievement data that illustrates this. The response variable is a measure of achievement, and the three continuous predictors are indexes of family, peer group, and school. The first twenty data
points in the example were used in a regression analysis, and the breakdown of the SS is shown in Table 1. The total SS equals 87.6, and R^2 = 0.324. The Venn diagram for this example appears in
Figure 6. The "ground not covered" by any variable represents the SS for error (SSE) and is 59.2.
Table 1. SS of Partitions in the Venn Diagram in Figure 6
┃ Variable │ SS ┃
┃ family only │ 0.8 ┃
┃ peer group only │ 8.3 ┃
┃ school only │ 0.4 ┃
┃ family and peer group only │ 0.7 ┃
┃ family and school only │ 4.2 ┃
┃ school and peer group only │ 3.3 ┃
┃ family, school, and peer group │ 10.7 ┃
┃ Total SSR │ 28.4 ┃
Figure 6. Venn Diagram Showing SS in Achievement Example.
The F statistic is given by [SS(family, peer group, school)/df(model)] / [SSE/df(error)]. This ratio is proportional to (area covered) / (area not covered) in the Venn diagram. For this example,
F = 2.55 with df = 3, 16, and is significant at the 0.10 level. However, none of the t-tests for the individual predictors is significant at the 0.10 level. The p-values for family, peer group, and school are
0.648, 0.153, and 0.753, respectively. Note that a t test for an individual variable -- family, for example -- is equivalent to an F test (df = 1, 16) with its F statistic being proportional to
SS(family | peer group, school) / SSE, the area covered by family as the last predictor in, divided by the area not covered. The Venn diagram in Figure 6 illustrates that given the great deal of
overlap among the variables (multicollinearity), even when the "ground covered" jointly by all three is substantial (leading to a significant overall F-test), the additional "ground covered" by each
variable given the others may not be significant (leading to insignificant t-tests).
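A compact numerical version of this phenomenon (our own invented data, not the Chatterjee and Price dataset; NumPy assumed): three nearly collinear predictors jointly explain most of the variation, and each is highly predictive on its own, yet each one's Type II SS, its contribution as the last predictor in, is a small fraction of its marginal SS:

```python
import numpy as np

def reg_ss(y, *xs):
    """Regression sum of squares of y on the given predictors (with intercept)."""
    X = np.column_stack([np.ones(len(y))] + list(xs))
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return np.sum((y - y.mean()) ** 2) - np.sum((y - X @ beta) ** 2)

# Three nearly collinear predictors: the same trend plus tiny perturbations.
base = np.arange(20.0)
x1 = base + 0.01 * np.sin(base)
x2 = base + 0.01 * np.cos(base)
x3 = base + 0.01 * np.sin(2.0 * base)
y = base + 0.5 * np.sin(3.0 * base + 1.0)

tss = np.sum((y - y.mean()) ** 2)
full = reg_ss(y, x1, x2, x3)          # joint "ground covered": large
marginal_1 = reg_ss(y, x1)            # x1 on its own: also large
typeII_1 = full - reg_ss(y, x2, x3)   # x1 as the last predictor in: tiny
print(full / tss, marginal_1 / tss, typeII_1 / tss)
```

The joint and marginal R^2 values are close to one, while the last-in contribution of x[1] is negligible, exactly the pattern of a significant overall F-test with insignificant individual t-tests.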
2.5 Average Stepwise Regression
Kruskal (1987) suggests an average stepwise approach for assessing the relative importance of a variable. When k explanatory variables are present in a model, there are k! possible orderings in which
the variables can enter into regression. A variable's contribution to R^2 can be evaluated by averaging over all possible orderings. This approach avoids the pitfall of depending on the Type II SS
or, equivalently, the incremental R^2, where the variable is entered last. The Venn diagram helps students visualize what really occurs when the incremental R^2's for all possible orderings are
averaged. Figure 7 illustrates the situation.
Figure 7. Venn Diagram Showing SS in Average Stepwise Regression.
Consider the variable x[1]. Denote the areas covered by only one variable (x[1] itself, labeled "1"), two overlapping variables (labeled "2"), three overlapping variables (labeled "3") by A[0], A[1],
A[2], etc. When the incremental R^2 is calculated for all k! possible orderings, the piece that does not overlap with any other variable, A[0], appears every time. The pieces that overlap with only
one other variable appear k!/2 times because in half of the k! orderings x[1] enters the regression model before that other variable. In general, the area that overlaps with r other
variables (1 ≤ r ≤ k - 1) appears in the k! possible orderings k!/(r + 1) times. Therefore, the average contribution in incremental SS of x[1] is given by A[0] + A[1]/2 + A[2]/3 + ... + A[k-1]/k.
Because SS(x[1]) = A[0] + A[1] + ... + A[k-1], the average stepwise approach produces a value that is the sum of the contributions of various pieces from r^2[yx[1]], weighted down harmonically by the number of
times it overlaps with other variables plus one. The Venn diagram helps students visualize the relationship. Students should have no difficulty comparing this value to the Type II SS, which is
represented by the area covered by x[1] alone.
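Kruskal's average can be computed directly by enumerating the orderings. The sketch below (our own invented data; NumPy assumed) averages the incremental SS of each predictor over all 3! orderings; a useful sanity check is that the per-variable averages always sum to the full-model SSR:

```python
from itertools import permutations

import numpy as np

def reg_ss(y, *xs):
    """Regression sum of squares of y on the given predictors (with intercept)."""
    X = np.column_stack([np.ones(len(y))] + list(xs))
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return np.sum((y - y.mean()) ** 2) - np.sum((y - X @ beta) ** 2)

# Three correlated predictors and a response (arbitrary deterministic data).
t = np.arange(15.0)
preds = {"x1": t + np.sin(t), "x2": 0.5 * t + np.cos(t), "x3": np.sin(2.0 * t) + 0.1 * t}
y = t + np.sin(3.0 * t)

orders = list(permutations(preds))
avg = {name: 0.0 for name in preds}
for order in orders:
    entered = []
    for name in order:
        before = reg_ss(y, *(preds[m] for m in entered))
        entered.append(name)
        after = reg_ss(y, *(preds[m] for m in entered))
        avg[name] += (after - before) / len(orders)

# Efficiency check: the per-variable averages sum to the full-model SSR.
full = reg_ss(y, *preds.values())
print(avg, full)
```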
3. Limitations of Using Venn Diagrams to Illustrate Regression Concepts
A number of authors point out that the overall R^2 for a model may be greater than the sum of the partial R^2's for a subset of variables. For example, Hamilton (1987) provides a geometric argument
for why sometimes R^2 > r^2[yx[1]] + r^2[yx[2]]. In addition, Kendall and Stuart (1973, p. 359) describe an extreme example in which r^2[yx[1]] = 0.00, r^2[yx[2]] = 0.18, R^2 = 1.00, and the
correlation between x[1] and x[2] is -0.9. This dataset is presented in Table 2.
Table 2. Example of Suppressor Variable (Kendall and Stuart 1973)
┃ x[1] │ x[2] │ y ┃
┃ 2.23 │ 9.66 │ 12.37 ┃
┃ 2.57 │ 8.94 │ 12.66 ┃
┃ 3.87 │ 4.40 │ 12.00 ┃
┃ 3.10 │ 6.64 │ 11.93 ┃
┃ 3.39 │ 4.91 │ 11.06 ┃
┃ 2.83 │ 8.52 │ 13.03 ┃
┃ 3.02 │ 8.04 │ 13.13 ┃
┃ 2.14 │ 9.05 │ 11.44 ┃
┃ 3.04 │ 7.71 │ 12.86 ┃
┃ 3.26 │ 5.11 │ 10.84 ┃
┃ 3.39 │ 5.05 │ 11.20 ┃
┃ 2.35 │ 8.51 │ 11.56 ┃
┃ 2.76 │ 6.59 │ 10.83 ┃
┃ 3.90 │ 4.90 │ 12.63 ┃
┃ 3.16 │ 6.96 │ 12.46 ┃
A variable that increases the importance of the others is called a suppressor variable (e.g., Pedhazur 1982, p. 104). When a suppressor variable is present, Venn diagramming may not be suitable.
Specifically, in a case in which there are only two predictors, the inequality R^2 > r^2[yx[1]] + r^2[yx[2]] is equivalent to SS(x[1], x[2]) > SS(x[1]) + SS(x[2]). On a Venn diagram, this implies
that the overlapping area, indicated by SS(x[1], x[2]) - SS(x[1] | x[2]) - SS(x[2] | x[1]) = SS(x[1]) + SS(x[2]) - SS(x[1], x[2]) is negative.
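The negative overlap can be verified directly from Table 2. In the sketch below (NumPy assumed; the data are copied from the table), SS(x[1]) + SS(x[2]) - SS(x[1], x[2]) comes out negative, so no positive overlap area can be drawn:

```python
import numpy as np

def reg_ss(y, *xs):
    """Regression sum of squares of y on the given predictors (with intercept)."""
    X = np.column_stack([np.ones(len(y))] + list(xs))
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return np.sum((y - y.mean()) ** 2) - np.sum((y - X @ beta) ** 2)

# Kendall and Stuart (1973) suppressor example, Table 2.
x1 = np.array([2.23, 2.57, 3.87, 3.10, 3.39, 2.83, 3.02, 2.14,
               3.04, 3.26, 3.39, 2.35, 2.76, 3.90, 3.16])
x2 = np.array([9.66, 8.94, 4.40, 6.64, 4.91, 8.52, 8.04, 9.05,
               7.71, 5.11, 5.05, 8.51, 6.59, 4.90, 6.96])
y = np.array([12.37, 12.66, 12.00, 11.93, 11.06, 13.03, 13.13, 11.44,
              12.86, 10.84, 11.20, 11.56, 10.83, 12.63, 12.46])

tss = np.sum((y - y.mean()) ** 2)
ss1, ss2, ss12 = reg_ss(y, x1), reg_ss(y, x2), reg_ss(y, x1, x2)
overlap = ss1 + ss2 - ss12
print(ss1 / tss, ss2 / tss, ss12 / tss, overlap)  # overlap is negative
```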
When there are three variables, every non-overlapping and overlapping piece in a Venn diagram corresponds to a function of the SS of the multiple regression of subsets of variables {x[1]}, {x[2]}, {x
[3]}, {x[1], x[2]},..., {x[1], x[2], x[3]}. Figure 8 shows the seven mutually exclusive pieces of SS for three variables.
Figure 8. Partition of Areas When There Are Three Variables.
The piece that is labeled "6" corresponds to SS(x[3] | x[1]) - SS(x[3] | x[1], x[2]), or equivalently,
SS(x[1], x[3]) - SS(x[1]) - SS(x[1], x[2], x[3]) + SS(x[1], x[2]), (1)
and the piece that is labeled "3" (where all variables overlap) corresponds to
SS(x[1]) + SS(x[2]) + SS(x[3]) - SS(x[1], x[2]) - SS(x[2], x[3]) - SS(x[1], x[3]) + SS(x[1], x[2], x[3]). (2)
There is no guarantee that expressions such as (1) and (2) will always be positive. Although we can think of areas as being negative, this may lead to difficulty in interpretation. Furthermore, when
there are four variables or more, it is not possible to show all the combinations of overlaps with ovals or any other convex figures. For these reasons, Venn diagramming to demonstrate numerical
results, especially when there are more than two variables, may not be illuminating.
4. Effectiveness for Instructional Purposes
Despite its limitations, we believe that Venn diagramming is a valuable tool that can be used when concepts of multiple regression are introduced and described in the classroom. We performed an
experiment to assess the efficacy of the Venn diagram approach in the instruction of multiple regression. We selected two large undergraduate statistics classes taught by the author and another
professor in the spring semester of 1999 at the University of Southern California. Each session had approximately 270 students. Venn diagramming was used in the author's class (the
treatment session) but not in the other class (the comparison session). In the final exams of both instructors, a common question (included in the Appendix) concerning multicollinearity was included.
To eliminate possible bias due to different emphases in lectures or familiarity with wording introduced by the author, the instructor from the comparison session wrote the actual problem after all
lectures were completed. A teaching assistant, who was not informed about the purpose of the experiment, graded the same question from both sessions on a 4-point scale. Because each instructor wrote
up his/her own exam, and the teaching assistant worked for only one instructor, it was not possible to conceal which instructor wrote which exam.
Table 3 summarizes the results of the experiment. The p-value of the two-sided two-sample t-test was 0.014 with 197 degrees of freedom, and therefore the test was significant at the 0.05 level. We
examined the individual scores and found that many students either obtained full credit or no credit at all. We were concerned that the difference might be due to a discrepancy in the absentee
rates of the two sessions. However, further investigation revealed that the discrepancy in absentee rates was slight. To examine the possible instructor effect, we also compared student evaluations
of both instructors -- even though we were certain that there were stylistic differences between them -- and the ratings (on a 5-point scale) for both instructors were close (both above 4.0). We
acknowledge that there were confounding factors, the effects of which cannot be completely isolated. These possible effects include differences in student ability between the two classes and the bias
unintentionally introduced from review and coaching sessions by the instructors.
Table 3. Summary of Two-Sample t-test (Two-Sided) for Treatment and Comparison Groups
┃ │ Comparison Group │ Treatment Group ┃
┃ Average score │ 2.496 │ 3.000 ┃
┃ Standard deviation │ 1.67 │ 1.72 ┃
┃ Sample size │ 133 │ 97 ┃
┃ t-statistic │ t = 2.22 ┃
The evidence regarding the efficacy of the Venn diagramming approach was statistically significant, but not extremely strong. We did note, however, that in the treatment session, some students used
phrases such as "overlapping in predictive power" or even drew a Venn diagram to illustrate multicollinearity. It is possible that these students used the Venn diagram as a mnemonic to aid their
recall for an explanation. Finally, it must be emphasized that the result of the experiment should not be seen as offering definitive evidence for the universal value of Venn diagramming. The
instructional value inherent in its use may vary as a function of instructor, student, and institutional characteristics.
5. Conclusion
This article discusses how Venn diagramming can be used as a teaching aid in classroom instruction of topics such as R^2 and the Type I and Type II SS in multiple regression. The limitations of its
use are also discussed. Clearly, students should be aware of these limitations. However, when the goal is to help students grasp concepts in multiple regression and to enable them to explain these
concepts to others, Venn diagramming is an effective tool. This observation is substantiated by a small-scale study.
The author thanks Professor Catherine Sugar for her help with the experiment. He also thanks the referees and the Associate Editor for their constructive comments.
The printout below shows a multiple regression of employee's salary on years of professional experience and job approval rating. The regression equation is Salary = 20 + 2 Years + 3 Rating.
Predictor Coef Stdev t-ratio p
Constant 20 2.0 10.00 .0000
Years 2 1.5 1.33 .1000
Rating 3 3.0 1.00 .1657
S=1.00 R-sq=.414 R-sq(adj.)=.345
Analysis of variance
Source DF SS MS F P
Regression 2 12.00 6.00 6.00 0.0107
Error 17 17.00 1.00
Total 19 29.00
a. A manager at the company says that the overall regression is useful for predicting salary. Say briefly what test you would use to determine this and use the printout to justify the conclusion.
b. The manager further notes that tests show neither years of experience nor job approval rating appears significant. Explain this using values from the printout. Again, no calculations are necessary.
c. What do the results of part (b) say about the usefulness of experience and job approval rating as predictors of salary?
d. The manager is confused that the model is useful, but neither of the predictors is significant. Can you explain to her what might have caused this result?*
* Only part (d) was used in the experiment.
Agresti, A., and Finlay, B. (1997), Statistical Methods for the Social Sciences (3rd ed.), Upper Saddle River, NJ: Prentice Hall.
Chatterjee, S., and Price, B. (1977), Regression Analysis By Example, New York: Wiley.
Hair, J., Anderson, R., and Tatham, R. (1987), Multivariate Data Analysis with Readings (2nd ed.), New York: Macmillan.
Hamilton, D. (1987), "Sometimes R^2 > r^2[yx[1]] + r^2[yx[2]]. Correlated Variables Are Not Always Redundant," The American Statistician, 41, 129-132.
Kendall, M., and Stuart, A. (1973), Advanced Theory of Statistics (Vol. 2; 3rd ed.), New York: Hafner.
Kruskal, W. (1987), "Relative Importance by Averaging Over Orderings," The American Statistician, 41, 6-10.
McKenzie, J., Schaefer, R., and Farber, E. (1995), The Student Edition of Minitab for Windows, Reading, MA: Addison-Wesley.
Pedhazur, E. J. (1997), Multiple Regression in Behavioral Research: Explanation and Prediction (3rd ed.), Fort Worth, TX: Holt, Rinehart & Winston.
SAS Institute Inc. (1990), SAS/STAT User's Guide (Vol. 1), Version 6, Cary, NC: Author.
Shavelson, R. J., and Webb, N. M. (1990), Generalizability Theory -- a Primer, London: Sage Publications.
Suppes, P. (1957), Introduction to Logic, Princeton, NJ: Van Nostrand.
Venn, J. (1880), "On the Diagrammatic and Mechanical Representation of Propositions and Reasonings," The London, Edinburgh, and Dublin Philosophical Magazine and Journal of Science, 5, 1-18.
Edward H. S. Ip
Marshall School of Business
University of Southern California
Bridge Hall 401
Los Angeles, CA 90089-1421
Volume 9 (2001)
Physics Forums - Question regarding quadratic-like residues in (Z/pZ)[i].
zack_vt Jan13-12 12:08 PM
Question regarding quadratic-like residues in (Z/pZ)[i].
Hi all.
I'm working in the set formed by extending the integers mod p (p prime and congruent to 3 mod 4) by including i = [itex]\sqrt{-1}[/itex]: (Z/pZ)[i]. I want to know whether there exists a 'z' in
(Z/pZ)[i] for a given non-zero element 'a' of Z/pZ such that 'a = z[itex]\overline{z}[/itex]'. If anyone could point me in a fruitful direction on this I would be most grateful.
morphism Jan13-12 05:25 PM
Re: Question regarding quadratic-like residues in (Z/pZ)[i].
You're basically asking if a is the sum of two squares in Z/pZ. This is true even if p != 3 mod 4. Try to mimic the proof of the fact that a prime = 1 mod 4 is the sum of two squares in Z.
For related material, you can try reading up on "formally real fields". (Z/pZ is a nonexample.)
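A quick brute-force check of morphism's claim, for a small prime (our own sketch, not part of the original thread): for every non-zero a in Z/pZ there is some z = x + y·i in (Z/pZ)[i] with z·conj(z) = x^2 + y^2 ≡ a (mod p).

```python
def two_square_rep(a, p):
    """Brute-force search for (x, y) with x^2 + y^2 == a (mod p),
    i.e. a = z * conj(z) for z = x + y*i in (Z/pZ)[i]."""
    for x in range(p):
        for y in range(p):
            if (x * x + y * y) % p == a:
                return x, y
    return None

p = 31  # a prime with p = 3 (mod 4), as in the original question
reps = {a: two_square_rep(a, p) for a in range(1, p)}
assert all(r is not None for r in reps.values())
print(reps[5])  # e.g. z = 0 + 6*i has norm 36 = 5 (mod 31)
```

The same check succeeds for primes congruent to 1 mod 4 as well, in line with morphism's remark that the restriction p = 3 (mod 4) is not needed.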
zack_vt Jan13-12 05:36 PM
Re: Question regarding quadratic-like residues in (Z/pZ)[i].
Many thanks!
How do you use LaTex [ math]sqrt{x}[\math]
June 27th 2012, 02:25 AM #1
Jun 2012
Staffordshire, England
How do you use LaTex [ math]sqrt{x}[\math]
Why does [tex]sqrt{x}[\math] come out as [tex] not the formula?
I have been struggling for hours, even typing in the tutorial example:
[tex]x^2\sqrt{x}[\math] although the first expression was typed in as [ math ] (without the spaces)
Re: How do you use LaTex [ math]sqrt{x}[\math]
What is your point?
The code [tex]x^2\sqrt{x}[/tex] gives $x^2\sqrt{x}$.
Re: How do you use LaTex [ math]sqrt{x}[\math]
The point is :
The tutorial says type in [math]...... \[math]
but when I do I do not get the formular as you have - by changing [\math] to tex i see!
If it did work you would have sen the maths formula in my post ...!
When I type in [ math] sqrt{x} \[mat] as an example .(I know about not leaving spaces )
When I preview I get [tex]... my formula as I typed it in...[\math] I do not get a formula in mathematical notation
Re: How do you use LaTex [ math]sqrt{x}[\math]
Please take note that the correct wrap is [tex]x^2\sqrt{x}[/tex].
In particular it begins with [tex] and ends with [/tex].
Re: How do you use LaTex [ math]sqrt{x}[\math]
Your post has a huge number of typos: using \ instead of /, having \ before [ instead of after, writing mat instead of math, etc. It is not surprising that with so many typos something
does not work. I suggest you copy-paste the code from the tutorial instead of re-typing it. If you still have an example in the tutorial that does not work, please post a link to that example.
Plato showed the exact code to produce a LaTeX formula.
Re: How do you use LaTex [ math]sqrt{x}[\math]
My point was I did not know what I was doing wrong; I never said the tutorial was wrong. Why is everyone so arsey?
It was another person who suggested the tutorial was wrong. THE TUTORIAL CLEARLY SHOWS \ not /, as I had to keep correcting myself, being used to using / as divide by.
See the inserted images in my other answer.
In my original post there are NO typos: math is math and \ is as the tutorial showed me, i.e. not /
I just wanted some help! I am not criticising anyone except myself!
Last edited by TeslaCoil; June 27th 2012 at 06:55 AM.
Improved algorithms for bipartite network flow
Results 11 - 20 of 37
, 2005
Cited by 7 (2 self)
A natural extension of the maximum flow problem is the parametric maximum flow problem, in which some of the arc capacities in the network are functions of a single parameter λ. Previous approaches
to the problem compute the maximum flow for a given sequence of parameter values sequentially taking advantage of the solution at the previous parameter value to speed up the computation at the next.
In this paper, we present a new Simultaneous Parametric Maximum Flow (SPMF) algorithm that finds the maximum flow and a minimum cut of an important class of parametric networks for all values of
parameter λ simultaneously. Instead of working with the original parametric network, a new non-parametric network is derived from the original and the SPMF gives a particular state of the flows in
the derived network, from which the nested minimum-cuts under all λ-values are derived in a single scan of the vertices in a sorted order. SPMF simultaneously discovers all breakpoints of λ where the
maximum flow as a step-function of λ jumps. The maximum flows at these λ-values are calculated in O(m) time from the minimum-cuts; m is the number of arcs. Generalization beyond bipartite networks is
also shown.
- Integration of AI and OR Techniques in Constraint Programming for Combinatorial Optimization Problems, volume 3011 of LNCS , 2004
Cited by 6 (0 self)
Abstract. This paper shows that existing definitions of costs associated with soft global constraints are not sufficient to deal with all the usual global constraints. We propose more expressive
definitions: refined variable-based cost, object-based cost, and graph-properties-based cost. For the first two we provide ad hoc algorithms to compute the cost from a complete assignment of
values to variables. A representative set of global constraints is investigated. Such algorithms are generally not straightforward and some of them are even NP-Hard. Then we present the major feature
of the graph properties based cost: a systematic way for evaluating the cost with a polynomial complexity. 1
, 1999
Cited by 6 (3 self)
We consider the traveling salesman problem when the cities are points in R^d for some fixed d and distances are computed according to geometric distances, determined by some norm. We show that for
any polyhedral norm, the problem of finding a tour of maximum length can be solved in polynomial time. If arithmetic operations are assumed to take unit time, our algorithms run in time O(f^(d-2) n log n),
where f is the number of facets of the polyhedron determining the polyhedral norm. Thus for example we have O(n log n) algorithms for the cases of points in the plane under the Rectilinear and Sup
norms. This is in contrast to the fact that finding a minimum length tour in each case is NP-hard. Our approach can be extended to the more general case of quasi-norms with not necessarily symmetric
unit ball, where we get a complexity of O(n log n).
- In WEA ’07: Proceedings of the 6th Workshop on Experimental Algorithms , 2007
Cited by 6 (1 self)
Abstract. The parametric maximum flow problem is an extension of the classical maximum flow problem in which the capacities of certain arcs are not fixed but are functions of a single parameter.
Gallo et al. [6] showed that certain versions of the push-relabel algorithm for ordinary maximum flow can be extended to the parametric problem while only increasing the worst-case time bound by a
constant factor. Recently Zhang et al. [14,13] proposed a novel, simple balancing algorithm for the parametric problem on bipartite networks. They claimed good performance for their algorithm on
networks arising from a real-world application. We describe the results of an experimental study comparing the performance of the balancing algorithm, the GGT algorithm, and a simplified version of
the GGT algorithm, on networks related to those of the application of Zhang et al. as well as networks designed to be hard for the balancing algorithm. Our implementation of the balancing algorithm
beats both versions of the GGT algorithm on networks related to the application, thus supporting the observations of Zhang et al. On the other hand, the GGT algorithm is more robust; it beats the
balancing algorithm on some natural networks, and by asymptotically increasing amount on networks designed to be hard for the balancing algorithm. 1
- SIAM Journal on Discrete Mathematics , 1999
Cited by 5 (0 self)
In the baseball elimination problem, there is a league consisting of n teams. At some point during the season, team i has w_i wins and g_ij games left to play against team j. A team is eliminated
if it cannot possibly finish the season in first place or tied for first place. The goal is to determine exactly which teams are eliminated. The problem is not as easy as many sports writers would have
you believe, in part because the answer depends not only on the number of games won and left to play, but also on the schedule of remaining games. In the 1960's, Schwartz showed how to determine
whether one particular team is eliminated using a maximum flow computation. This paper indicates that the problem is not as difficult as many mathematicians would have you believe. For each team i,
let g_i denote the number of games remaining. We prove that there exists a value W* such that team i is eliminated if and only if w_i + g_i < W*. Using this surprising fact, we can determine all
eliminated team...
- IEEE Trans. Pattern Anal. Mach. Intell
Cited by 4 (2 self)
Add to MetaCart
Abstract—In partitioning, clustering, and grouping problems, a typical goal is to group together similar objects, or pixels in the case of image processing. At the same time, another goal is to have
each group distinctly dissimilar from the rest and possibly to have the group size fairly large. These goals are often combined as a ratio optimization problem. One example of such a problem is a
variant of the normalized cut problem, another is the ratio regions problem. We devise here the first polynomial time algorithms solving optimally the ratio region problem and the variant of
normalized cut, as well as a few other ratio problems. The algorithms are efficient and combinatorial, in contrast with nonlinear continuous approaches used in the image segmentation literature,
which often employ spectral techniques. Such techniques deliver solutions in real numbers which are not feasible to the discrete partitioning problem. Furthermore, these continuous approaches are
computationally expensive compared to the algorithms proposed here. The algorithms presented here use as a subroutine a minimum s,t-cut procedure on a related graph which is of polynomial size. The
output consists of the optimal solution to the respective ratio problem, as well as a sequence of nested solutions with respect to any relative weighting of the objectives of the numerator and
denominator. Index Terms—Grouping, image segmentation, graph theoretic methods, partitioning.
Abstract. We describe a two-level push-relabel algorithm for the maximum flow problem and compare it to the competing codes. The algorithm generalizes a practical algorithm for bipartite flows.
Experiments show that the algorithm performs well on several problem families. 1
In this paper, we define the minimax flow problem and design an O(k · M(n, m)) time optimal algorithm for a special case of the problem in which the weights on arcs are either 0 or 1, where n is the
number of vertices, m is the number of arcs, k (where 1 ≤ k ≤ m) is the number of arcs with nonzero weights, and M(n, m) is the best time bound for finding a maximum flow in a network.
- IN: PROCEEDINGS OF THE EUROPEAN CONFERENCE ON COMPUTER VISION , 2012
We develop a fast, effective algorithm for minimizing a well-known objective function for robust multi-model estimation. Our work introduces a combinatorial step belonging to a family of powerful
move-making methods like α-expansion and fusion. We also show that our subproblem can be quickly transformed into a comparatively small instance of minimum-weighted vertex-cover. In practice, these
vertex-cover subproblems are almost always bipartite and can be solved exactly by specialized network flow algorithms. Experiments indicate that our approach achieves the robustness of methods like
affinity propagation, whilst providing the speed of fast greedy heuristics.
, 2012
Inference in high-order graphical models has become important in recent years. Several approaches are based, for example, on generalized message-passing, or on transformation to a pairwise model with
extra 'auxiliary' variables. We focus on a special case where a much more efficient transformation is possible. Instead of adding variables, we transform the original problem into a comparatively
small instance of submodular vertex-cover. These vertex-cover instances can then be attacked by existing algorithms (e.g. belief propagation, QPBO), where they often run 4–15 times faster and find
better solutions than when applied to the original problem. We evaluate our approach on synthetic data, then we show applications within a fast hierarchical clustering and model-fitting framework.
Prove that each of the following is an identity
April 18th 2009, 01:44 PM
Prove that each of the following is an identity
I am frustrated with these two problems; I cannot figure them out. I've tried many identities but still no correct answer. (Thinking)
The problems are
#1.) (1+cot(x))^2 - cot(x) = 1/((1-cos(x))(1+cos(x)))
#2.) (1-2cos^2(y))/(1-2cos(y)sin(y)) = (sin(y)+cos(y))/(sin(y)-cos(y))
Please help or give some hints. Thank you.
April 18th 2009, 02:32 PM
For # 2 multiply the top and bottom by sin(y) -cos(y)
expand and use sin^2(y) +cos^2(y) =1
For # 1 I'm not sure it is an identity since the right hand side
is csc^2(x) = cot^2(x) + 1 and if you expand the left hand side you get
cot^2(x) +2cot(x) +1 -cot(x) = cot^2(x) +cot(x) +1
April 18th 2009, 02:39 PM
Hello, starlet19!
The first one is not an identity . . . There must be a typo.
$2)\;\;\frac{1-2\cos^2\!y}{1-2\cos y\sin y} \:=\:\frac{\sin y+\cos y}{\sin y-\cos y}$
Multiply the right side by $\frac{\sin y - \cos y}{\sin y - \cos y}$
$\frac{\sin y + \cos y}{\sin y - \cos y}\cdot\frac{\sin y - \cos y}{\sin y - \cos y} \;=\;\frac{\sin^2\!y - \cos^2\!y}{\sin^2\!y - 2\cos y\sin y + \cos^2\!y}$
. . $=\;\frac{(1-\cos^2\!y) - \cos^2\!y}{\underbrace{\sin^2\!y + \cos^2\!y}_{\text{This is 1}}\: -\: 2\cos y\sin y} \;=\;\frac{1-2\cos^2\!y}{1 - 2\cos y\sin y} \quad\hdots\quad\text{ta-}DAA!$
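Both claims above can be spot-checked numerically (a quick sanity check, not a proof); the function names here are just illustrative:

```python
import math

# Problem 1 as posted:
def lhs1(x): return (1 + 1 / math.tan(x)) ** 2 - 1 / math.tan(x)
def rhs1(x): return 1 / ((1 - math.cos(x)) * (1 + math.cos(x)))

# Problem 2 as posted:
def lhs2(y): return (1 - 2 * math.cos(y) ** 2) / (1 - 2 * math.cos(y) * math.sin(y))
def rhs2(y): return (math.sin(y) + math.cos(y)) / (math.sin(y) - math.cos(y))

x = y = 0.7  # any angle that avoids the zero denominators
print(math.isclose(lhs2(y), rhs2(y)))                     # True: #2 holds
print(math.isclose(lhs1(x), rhs1(x)))                     # False: #1 fails
print(math.isclose(rhs1(x) - lhs1(x), -1 / math.tan(x)))  # True: the sides differ by cot x
```

This matches the algebra: the right side of #1 is csc^2(x) = cot^2(x) + 1, while the left side expands to cot^2(x) + cot(x) + 1.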
Ahhh, too fast for me, Calculus26!
Electricity - Cost per kWh? What is a unit?
26-11-2011 05:38 PM #1
I've got a prepaid meter and buy electricity online.
Last time I bought R500 I got 388 units. (R1.28 per unit) What exactly is one unit equal to? How do I find out the cost per kWh? Is it R1.28?
Last edited by gregmcc; 26-11-2011 at 06:04 PM.
kWh is the unit for electricity usage, yes.
That seems expensive.
1 unit is 1000 watts = 1 kilowatt. If you switch on an appliance which is rated at 500 watts, it will take 2 hours to use up 1000 watts. R 1,28 is damn cheap by overseas standards, so when we get
our electric mains rechargeable cars, we will be in the pound seats so to speak
Things won are done; joy's soul lies in the doing
Watt is an instantaneous measure. Watt-hour is a measure of consumption.
A 500 watt appliance will use 1 kilowatt-hour of electricity in 2 hours. You're close but you are confusing the units.
When we all plug in our electric cars, Eskom will implode.
Thanks - What I'm trying to work out is, for example, if I plugged your 500 watt appliance in for 2 hours, how much would it cost to run?
I know those terms are a bit inaccurate, professor, but I was just trying to help him out.
To answer his question, a 500 watt appliance run for 2 hours will consume R 1,28 worth of electricity
That's strange. I pay ± R0.70 per kWh - probably excl. VAT, but even with VAT it is only about R0.80/kWh. (In Johannesburg)
Edit: Maybe I'm greener than I thought.
Last edited by adrianx; 26-11-2011 at 06:46 PM.
Registered Linux User 460110
ɸ Plus ça change, plus c’est la même chose. ɸ
Not sure what Cape Town's tariffs are, but here are Tshwane's.
It's worth noting that it's more expensive as you purchase more.
You need to purchase every month, and no more than you are going to need, well at least not more so that you go into the next 'step'.
A joule is a quantity of energy. One joule is the energy required to accelerate a two-kilogram object to one meter per second across a frictionless surface.
Power is d(E)/dt, thus a watt is the derivative of a joule with respect to time in seconds; in normal language, the rate of change of energy transferred. One watt is the rate of energy transfer of
one joule per second.
You are charged for electricity based on your energy usage. More powerful devices use energy quicker, and are thus more expensive to run. To calculate how much energy a house used, rather than
integrating the function of power use over time, you are charged in units of power*time (since the units are the same as energy. E.g.: one watt used for 3 seconds is 3 joules. Twenty watts for two
hours is 144 kilojoules. A house using an average of 674 watts for a month = 674*60*60*24*30 = 1 747 008 000 joules)
As you can see, charging a person per joule is complicated. So what Eskom does is charge us in units of kilowatt.hours. One kilowatt.hour is = 1000*60*60 = 3 600 000 joules. So the previous
number of joules simplify to 485.28 kilowatt.hours. So now all you owe to Eskom are 485.28 kilowatt.hours, or as you know them: 485.28 units.
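The conversion above is easy to script; this is just the same arithmetic, with names of my choosing:

```python
WATTS = 674                   # average household draw, watts
SECONDS = 60 * 60 * 24 * 30   # one 30-day month, in seconds

joules = WATTS * SECONDS      # energy = power * time
kwh = joules / 3_600_000      # 1 kWh = 3 600 000 J

print(joules)  # 1747008000
print(kwh)     # 485.28
```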
Last edited by agentrfr; 26-11-2011 at 06:22 PM.
I-Rah los Pruzah ahrk Mul
IMA CHEMICAL ENGINEER WEO
Agents are people too.
Just because I'm technically bloody and imperialist doesn't prove anything!
Thanks - is it that easy?!
Is my logic ok on this now?
I plug a device into the mains which draws 1A. So P = VI = 220V * 1A = 0.22kW.
If this is plugged in for 10 mins (1/6 of an hour) then it will cost R1.28 * 0.22 kW * (1/6) h ≈ 4.7 cents
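The same arithmetic wrapped in a small helper (the R1.28/kWh default is the tariff quoted in this thread and will differ by municipality):

```python
def running_cost_rand(watts, minutes, rand_per_kwh=1.28):
    """Cost of running an appliance: energy used (kWh) times tariff (R/kWh)."""
    kwh = (watts / 1000) * (minutes / 60)
    return kwh * rand_per_kwh

# A 220 V, 1 A appliance (220 W) run for 10 minutes:
print(round(running_cost_rand(220, 10), 3))  # 0.047 rand, i.e. ~4.7 cents
```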
As you can see, charging a person per joule is complicated. So what Eskom does is charge us in units of kilowatt.hours. One kilowatt.hour is = 1000*60*60 = 3 600 000 joules. So the previous
number of joules simplify to 485.28 kilowatt.hours. So now all you owe to Eskom are 485.28 kilowatt.hours, or as you know them: 485.28 units.
Thanks - good explanation.
Maybe gregmcc is on a higher tariff. Here in Cape Town, the council tried to make it as complicated as possible, with a daily service charge, a sliding scale which charged you more if you
consumed more. Then in July they abandoned this idea, since very few council employees could understand it, let alone explain it to consumers. But anyway, there are now 3 tariffs, no service
I went to a great deal of trouble installing a solar water heater, heat pump on a timer, LED and CFL lamps throughout, switching off all charger transformers, and managed to get my monthly usage
down to about R 450,00. Very very strangely, my neighbour, who has a regular HWC and has not bothered much with energy saving lamps, also has 2 children of 11 and 13, and manages to consume under R
300 a month. Admittedly their house is a bit smaller, but I can't figure it out.
Umm... no, it's not that simple since we are dealing with alternating current. In Eskom's network, three-phase delta is used in MV (medium voltage) networks and three-phase star is used in LV (low voltage).
Depending on your line and phase voltage supplies to your house, there can be lots of factors introduced into the equation.
For example: If the RMS phase voltage to a plug is 220V (domestic supply, in our case at 50 Hertz), then the RMS line voltage is sqrt(3)*phase = roughly 380V. Chances are your
house is hooked up to a star 3-phase supply, meaning your phase currents are equal to your line current, but your line voltage is sqrt(3) times bigger than your phase voltage. Since the phasors
are offset at 120 degrees per phase (3 phase = 360 degrees / 3), and 30 degrees from the neutral reference (to satisfy phasor sums Vry + Vyn = Vrn etc.), after some simple grade 10 trig (since
Vry/sin(120) = Vrn/sin(30)): RMS line voltage = sqrt(3) * RMS phase-to-neutral voltage.
Now comes the fun bit. The phasor quantities for voltage and current are linked by the constant of impedance. Vphase = Iphase*Zphase (where V = voltage, I = current and Z = impedance), but the
current phasors can be out of "sync" with the angle of the voltage, related by the impedance. Luckily, for both a star and a delta supply, the equation Vphase/Iphase = Z holds true... thank
goodness! That makes life much easier, since now we don't have to worry about the phasor sums to relate everything depending on the type of network.
For the rest, we will assume your house has a star 3 phase supply.
Since we know from the phasor diagrams that for a star network the RMS line voltage = sqrt(3) * RMS phase-to-neutral voltage and that Iline = Iphase = I phase-to-neutral: Power = dE/dt,
measured in watts, where t is time in seconds.
Since Voltage = dE/dC (E is energy in joules, C is charge in coulombs) and Current = dC/dt, V*I = dE/dC * dC/dt = dE/dt = Power. Awesome.
Assuming your house's star supply feeds three identical loads (for the sake of simplicity; we really don't need to do it phase by phase), i.e. one phase per load: P = 3*Vphase*Iphase. Luckily, we
can integrate this function to get it in terms of joules, since a joule is equal to one amp through one ohm for one second. (<------ This is why SI units are king)
BUT since you use a star supply, Ptotal = sqrt(3)*Vline*Iline*cos(angle of impedance), since we work with only real units of joules expended, not lost through inductance and impedance to the
Eskom gets real iffy about that angle of impedance (a.k.a. the real power factor), since their peak supply of power does not equal the real usage of their customers, since the sin(angle of
impedance) component is "lost" and is not measured by their watt-meters. This is where it gets complicated with serious phasor diagrams, but I will leave it for you to look up on wiki if you want to
know how they calculate total actual power supplied.
The real question is how Eskom charges you. They have different rates for the real and reactive power supplied by their network that you consume. For you to calculate what each thing in your house
uses, you need to know every device's impedance value and the offset of current against voltage. Each device usually has the angle of its impedance in the small writing on the bottom or
in the manual. You need to use that in your calculations, not the pure current and voltage.
Why is this? Because as a domestic customer, Eskom will only charge you for your real power usage, not your reactive power as well, since it is too expensive for them to calculate it for every house.
So what you pay for in usage is actually equal to, per each device, P = Vphase*Iphase*(real power factor), where real power factor = cos(angle of impedance), or P = (V^2/|Z|)*cos(angle of impedance).
For example, say you have an induction motor to run (a simple example of an inductive load), and for argument's sake it has an impedance angle of 60 degrees. Its power factor is therefore
0.5 (cos(60) = 0.5).
Say we run this bad boy on a plug in your house. Let's say it has an impedance magnitude of 110 ohms. The current running through the motor is thus 220V RMS / (110 ohms at 60 degrees) = a lagging
current of 2 amps at 60 degrees.
So the power consumed and measured by the Eskom watt-meter on the side of your house = 220*2*0.5 = 220 watts, BUT the apparent power is 220*2 = 440 volt-amps.
Run that motor for an hour and Eskom will be able to charge you 0.22 kWh like the meter on the side of your house says, but they will sit angrily scratching their heads because their supply meter
says they sent the equivalent of 0.22 kWh more to your neighbourhood and have no idea where it went. Since they can't prove who used what (or watt, hehehe), since they are not
measuring the line phasor angles to each house, they can't charge anyone fairly for it.
You technically get 0.22 kWh free. Cool huh
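In conventional terms, 440 is the apparent power in volt-amps and 220 W is the real power a plain watt-meter records. A quick sketch of that motor arithmetic using complex phasors (variable names are mine):

```python
import cmath
import math

V = 220.0                                   # RMS phase voltage, volts
Z = cmath.rect(110.0, math.radians(60))     # impedance: 110 ohm at 60 degrees
I = V / Z                                   # complex current phasor, amps

apparent = V * abs(I)                       # |S| = V*I, volt-amps
real = apparent * math.cos(cmath.phase(Z))  # P = V*I*cos(theta), watts

print(round(abs(I), 6))    # 2.0 A
print(round(apparent, 6))  # 440.0 VA apparent power
print(round(real))         # 220 W real power: what the watt-meter records
```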
Maybe gregmcc is on a higher tariff. Here in Cape Town, the council tried to make it as complicated as possible, with a daily service charge, a sliding scale which charged you more if you
consumed more. Then in July they abandoned this idea, since very few council employees could understand it, let alone explain it to consumers. But anyway, there are now 3 tariffs, no service
I went to a great deal of trouble installing a solar water heater, heat pump on a timer, led and cfl lamps throughout, switching off all charger transformers and managed to get my monthly useage
down to about R 450,00. Very very strangely, my neighbour, who has a regular hwc and has not bothered much with energy saving lamps, also has 2 children of 11 and 13, manages to consume under R
300 a month. Admittedly their house is a bit smaller, but I can't figure it out.
Check and see if they are measuring your current angle too. People that run lots and lots of fluorescent tubes and those tube-bulb spotlight things can get whacked for having a "heavy"
inductive current load. You are charged for peak reactive power used, not quantity. I imagine you turn on all your lights at the same time :/ .... that is if they are even measuring your current
What can I say, I'm a conspiracy nut.
Umm... no, it's not that simple since we are dealing with alternating current. In Eskom's network, three-phase delta is used in MV (medium voltage) networks and three-phase star is used in LV (low voltage).
Depending on your line and phase voltage supplies to your house, there can be lots of factors introduced into the equation.
OMG! Leaves scratching my head.
Great info there - I had to read it a few times
I"ll check out the wiki and start doing some more reading up calculating cost taking the PF into account.
Sounds like its not going to be too simple to work out what the running costs of certain items are
Gees!!! I pay R1.4 per unit what a rip!
Engineering Physics for Lawyers
Much of the work of accident analysis and reconstruction involves the application of general physical laws to the particular situation at hand. These physical laws are expressed in terms of Newtonian
mechanics, which was developed by the great English scientist Sir Isaac Newton in the seventeenth century. Newtonian mechanics is the base science of physics and engineering. Today the development
and application of basic Newtonian mechanics is almost exclusively the province of engineers in that physicists, for the most part, conduct basic research only in "new physics" areas such as quantum
mechanics, relativistic physics, etc.
1. Velocity. Velocity is the rate of change of the position of a body over time (velocity = distance/time). Velocity is a vector, which means it has both a magnitude and direction. Thus, 30 miles per
hour north, or 20 meters per second along the X axis, are both velocities. Note that the more common term, Speed, is a velocity without a direction, so that 30 miles per hour or 20 meters per second
are speeds, and not velocities, since no direction is specified.
2. Acceleration. Acceleration is the rate of change in velocity (acceleration = velocity/time). Thus, any time a body changes its rate of travel or its direction of travel, it is said to be
accelerating. Acceleration is a vector requiring both magnitude and direction. Thus, for example, 32.2 ft/sec/sec, downward toward the center of the earth, is the acceleration vector of all bodies
on the surface of the earth. If the support collapses, for example, if a person falls off a ladder, they will increase their velocity at a rate of 32.2 ft/sec per second. At the end of one second of
free fall, they will be traveling at a rate of 32.2 ft/sec. which is approximately 22 miles per hour. They will have fallen approximately 16.1 feet in this one-second interval. (D = 1/2 a t x t = 1/2
x 32.2 x 1 x 1 = 16.1).
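The free-fall numbers quoted above can be reproduced directly (US customary units, g = 32.2 ft/s^2):

```python
G = 32.2  # acceleration due to gravity, ft/s^2

def free_fall(t):
    """Velocity (ft/s) and distance fallen (ft) after t seconds of free fall."""
    return G * t, 0.5 * G * t ** 2

v, d = free_fall(1.0)
print(v)                # 32.2 ft/s after one second
print(d)                # 16.1 ft fallen
print(v * 3600 / 5280)  # ~22 mph
```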
III. NEWTON'S LAWS
Newton wrote three laws of motion which relate kinematic phenomena to something else-- forces. These are:
1. First Law: "A body does not alter its state of motion without the influence of an external force." That is, there is no change in the velocity of a body (neither in magnitude nor in direction)
unless some force acts on that body.
2. Second Law: "The net resultant force applied to the body is equal to the first time-derivative of the momentum function." Or roughly:
Force = Mass x Acceleration.
This relationship is not so much a natural law as a rule for assigning a magnitude to forces.
3. Third Law: "For every applied force there is an equal and oppositely directed reactive force." You push on the wall- the wall pushes back. The pusher and the pushed, the striker and the struck
both experience forces of the same magnitude but of opposite direction.
1. Force. A force in physics or engineering is something close to what we call "forces" in everyday life. Any push, pull, twist, etc., involves a force or forces. Forces are measured in physics
according to their effects-- according to Newton's Second Law, Force = Mass x Acceleration, or F= MA. Note that "mass" is that property of physical objects that causes them to resist changes in
motion. "Inertia" is another term for mass.
2. Weight. Weight is a special force which results from the fact that a mass is being acted on by a gravitational field. A body on the surface of the earth has a certain weight because it has mass
which, when acted on by the earth's gravitational field, would cause the body to be accelerated if it lost its support. If it could fall, its "weight" would result in its being accelerated toward the
center of the earth.
3. Torque. A torque is a twisting force. It is a force that tends to induce rotary motion rather than straight line motion. Torques are typically measured in lb-ft. Thus, if we pull on a one-foot
long wrench with a force of 100 lbs., we exert 100 ft-lb of torque on the nut. The same torque is generated by 50 lbs. on a two-foot wrench, etc.
4. Friction. Friction is a special kind of force produced by two bodies that are in contact. If a book is at rest on a desk and we try to push it, our efforts are resisted by what is known as static
friction. Once we get it moving, if we stop pushing it, it comes to rest almost immediately. The retarding or stopping force is known as dynamic friction.
Rolling Friction is the relatively low retarding force associated with the free rolling of objects, e.g. tires. Sliding friction is much greater than rolling friction and occurs whenever two objects
in contact move with respect to one another without the benefit of any revolving elements. When a car is driven down the road without braking, it is being retarded by rolling friction (also by air
drag, another type of force). When the wheels are locked, the vehicle is retarded by the sliding friction between the tires and on the road. In between these two cases, in situations with braking
without wheel lock-up, the retarding forces are a complicated manifestation of the operation of the brakes, tires, and suspension system of the car. One measure of the sliding friction of cars is the
coefficient of drag (Cd). This gives an indication of how hard it is to push the car along the road with its wheels locked. A typical Cd is 0.7. In this case, a force equal to 0.7 x the weight of the
car is required to keep it sliding along the road. With a Cd and skid mark of known lengths, it is possible to estimate a vehicle's velocity before the start of the slide.
S(mph) = 5.5 x sqrt (Cd x length of skid)
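As a sketch of the formula above (the 5.5 constant is approximately sqrt(30), from the standard relation S^2 = 30 x f x d in mph/ft units):

```python
import math

def speed_from_skid(cd, skid_ft):
    """Estimated pre-skid speed in mph from S = 5.5 * sqrt(Cd * d)."""
    return 5.5 * math.sqrt(cd * skid_ft)

# The article's typical Cd of 0.7 and a 100 ft skid mark:
print(round(speed_from_skid(0.7, 100), 1))  # 46.0 mph
```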
5. Momentum. Momentum is the product of mass and velocity (momentum = mass x velocity). Momentum is a vector quantity as are velocity, acceleration and force. Momentum is conserved in impacts. That
is, the sums of the various momentums of the bodies before the collision are the same as momentums after the collision. Thus, we can frequently compute the velocities of vehicles before a collision
by knowing their speeds and directions of travel after the impact. Momentum is not the same as, and should not be confused with, energy.
6. Work. When a force acts on a body, work is said to be done on that body. The quantity of work, W, is given by the formula: W = F x D, where "F" is the force that acts on the body and "D" is the
distance through which it acts. Thus, a 100-lb. force acting for a distance of 10' (e.g., pushing an object against a resistance of 100# for 10') results in work = 1000 lb-ft being done on the object.
7. Energy. Energy is the capacity to do work. In Newtonian physics, the energy of a body is computed in two ways: either by computing its kinetic energy: KE = 1/2 M x V x V where "M" = mass and "V" =
velocity; or by computing its potential energy with respect to a system of forces capable of doing work on the body. For example, the potential energy of a body in the earth's gravitational field is
PE = W x H, where "W" is the weight of the body and "H" is its height above some reference point.
Computations involving kinetic energy are tricky but informative. For example, if a moving body crashes into a solid, non-yielding wall at 20 miles per hour, the kinetic energy dissipated in the
crash = 1/2 M x 20 x 20 = 1/2 M x 400 = M x 200; but if it crashes into the wall at 40 miles per hour, the energy dissipated in the collision is 1/2 M x 40 x 40 = 1/2 M x 1600 = M x 800. Thus, four
times as much energy is involved in the second crash as in the first (M x 800/M x 200 = 4). The speed has doubled but the energy has quadrupled!
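The quadrupling follows from kinetic energy being proportional to the square of the velocity; a one-line check:

```python
def kinetic_energy(mass, velocity):
    """KE = 1/2 * M * V^2 (any consistent units)."""
    return 0.5 * mass * velocity ** 2

# Same mass, double the speed: four times the energy to dissipate.
ratio = kinetic_energy(1.0, 40.0) / kinetic_energy(1.0, 20.0)
print(ratio)  # 4.0
```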
8. Power. Power is the rate at which work is done. When work is done at the rate of 33,000 ft-lbs. per minute, one horsepower is produced. A "horsepower" is merely a conveniently sized unit for
measuring power. Its connection with equine work capacity is tenuous. A human being can apparently work at a rate of about .35 horsepower for short periods of time, if all their skeletal muscles are
being effectively used.
1. G's. The term "g" forces or "g" loads is a convenient descriptive tool for technical discussions involving accelerations due to impact forces. One "g" is an acceleration equal to that generated by
a free fall in the earth's gravitational field, i.e., 32.2 feet per second per second. Thus, a body acted on by a 0.5 g acceleration experiences a force equal to half its weight. This force acts in
the direction of the acceleration.
Frequently, during high speed collisions, accelerations in the range of 25 to 50 g's are generated. If a 150-lb. person is acted on by a 50-g retarding force during an accident, then a force of 7,500
lbs. acts on his body. Note that while a "g" value is really an acceleration, it is sometimes discussed as though it were a force. This is technically inaccurate but not harmful if it is clear what
body the force acts on and if the mass of that body does not change during the application of the "g load."
2. Moment. The word "moment" in physics is really another word for torque. Generally, when we have two or more torques acting on a body, the result is said to produce a "moment" or net torque, which
acts to rotate the body in a direction determined by the combined torque vectors.
3. Pressure. Pressure is force per unit area. Thus, if a 10-lb. weight has a contact area with another body of 1 square in., the pressure is 10 lb per square in. Normal atmospheric pressure at sea
level is about 14.7 lb./sq.-in. (psi) This is the weight of a column of air 1 inch square extending up from sea level to the outer limit of the earth's atmosphere--roughly 80 miles high. A barometer
reading of about 30 inches of mercury (30 in hg) is about one atmosphere pressure or 14.7 psi. Thus, a 30-inch column of mercury weighs about as much as 80 miles of air since they both exert the same
weight force per unit area.
4. Stress. Stress, like pressure, is also a force per unit area. However, while pressures act on bodies, stresses act in a body. If I pull on both ends of a steel bar of 1 sq-in. cross-sectional area
with a force of 100 lbs., I generate a tensile stress inside the bar of 100 psi. If I pull across the bar so as to try to split it in half, I generate a shear stress. Note that a mild steel bar
would be able to absorb tensile stresses in excess of 50,000 psi without breaking, so that my efforts to pull it apart result in only trivial stresses in the bar.
1. Suppose a 3000 lb. car were to crash into the rear of a truck that weighs 30,000#. The road is asphalt and the coefficient of drag is 0.7, the car is going 10 miles per hour, the truck is at rest.
The momentum of the system before the collision is:
Momentum = Mass (car) x Velocity (car) + Mass (truck) x Velocity (truck)
= 3000 x 10 + 0 x 30,000
= 30000
Since the external forces acting on the system during the impact are minimal (the brakes of the truck are off) momentum is conserved. Thus, after the accident the following relation holds true:
Momentum before =Momentum after
30,000 = 3000 x V + 30,000 x V
V = 30,000/33,000 = 0.9 mph
where "V" is now the post impact velocity of the vehicles. (We assume here that the truck and the car are moving with the same velocity after impact)
Thus, the car loses 9.1 mph = 13.3 ft/sec due to the action of the retarding impact force.
If the car is shortened about 8" in this impact, then the distance through which the retarding force acts is about 12" (the truck starts moving during the impact; assume it moves about 4") so that
the car travels about 1' during the impact with an average velocity of about 8 ft/sec. Thus, the duration of the impact can be estimated as follows:
Distance = Velocity x Time
Time = Distance/Velocity
T = 1 ft/(8 ft/sec) = 0.125 sec
So the car decelerates from 14.7 ft/sec to 1.3 ft/sec in a time of 0.125 sec. Thus, its average deceleration is:
13.4 ft/sec / 0.125 sec = 107 ft/sec/sec = 3.33 g
The average force acting on the car then is 3.33 x 3,000# = 9990#
This is also the force that acts on the truck (Newton's Third Law: every force has an equal and opposite reaction force). So the average acceleration of the truck is 9990/30,000 = 0.33 g, or about
10.7 ft/sec/sec, which is consistent with the truck going from rest to 1.3 ft/sec in 0.125 sec.
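The worked numbers above can be reproduced in a short script (a sketch in US units; variable names are mine, and small differences from the text come from rounding and from taking g = 32.2 ft/sec/sec):

```python
# Sketch of the collision example above (lb, mph, ft, sec).
MPH_TO_FPS = 5280 / 3600      # 1 mph = 1.4667 ft/sec
G = 32.2                      # ft/sec/sec

w_car, w_truck = 3000.0, 30000.0    # weights, lb
v_car, v_truck = 10.0, 0.0          # pre-impact speeds, mph

# Conservation of momentum (g cancels, so weights can stand in for masses)
v_post = (w_car * v_car + w_truck * v_truck) / (w_car + w_truck)   # ~0.9 mph

delta_v = (v_car - v_post) * MPH_TO_FPS          # speed lost by the car, ft/sec
avg_speed = (v_car + v_post) / 2 * MPH_TO_FPS    # ~8 ft/sec during the impact
t_impact = 1.0 / avg_speed                       # car travels ~1 ft -> 0.125 sec

decel_g = (delta_v / t_impact) / G               # ~3.3 g for the car
force = decel_g * w_car                          # ~9,900 lb average impact force
truck_accel_g = force / w_truck                  # ~0.33 g for the 10x heavier truck

print(round(v_post, 2), t_impact, round(decel_g, 2), round(truck_accel_g, 2))
```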
TECHNICAL SERVICES - 360-576-8880 - ts@e-z.net
How old is Mia Talerico 2011?
You asked:
How old is Mia Talerico 2011?
• 5 years, 6 months and 30 days old
• between 5 years and 4 months and 6 years and 3 months old
Assuming you meant
• Mia Talerico (born September 17, 2008), the American infant child actress
E. J. Farrell and J. C. Grell
Department of Mathematics, The University of the West Indies, St. Augustine, Trinidad
Abstract: It is shown that the number of spanning trees in a graph can be obtained from the circuit polynomial of an associated graph. From this, the number of spanning trees in a regular graph is
shown to be obtainable from the characteristic polynomial of a node-deleted subgraph. Finally, Cayley's theorem for the number of labelled trees is derived.
Classification (MSC2000): 05C99
Full text of the article:
Electronic fulltext finalized on: 2 Nov 2001. This page was last modified: 16 Nov 2001.
© 2001 Mathematical Institute of the Serbian Academy of Science and Arts
© 2001 ELibM for the EMIS Electronic Edition
Finding Limits in a Rational Expression Containing Two Variables
September 28th 2011, 01:00 PM
Finding Limits in a Rational Expression Containing Two Variables
The question I am stuck on is:
$\lim_{h\to 0} \frac{(x+h)^2 - x^2}{h}$
I know that h=0 is undefined because that would make the denominator 0. I also know that if I sub 0 in to solve the limit algebraically, I get 0 as a numerator as well, which I know is
indeterminate form.
The problem is, I am not sure how to further factor in order to get h out of the denominator, or if there is even a limit. My textbook says that most often indeterminate form suggests that there
is a limit. I am just unsure as to what I should do next.
As for background information, I am new to these topics but generally catch on well enough. I am just looking for a little direction.
September 28th 2011, 01:06 PM
Re: Finding Limits in a Rational Expression Containing Two Variables
Expand $(x+h)^2=x^2+2xh+h^2$, therefore you get:
$\lim_{h\to 0} \frac{x^2+2xh+h^2-x^2}{h}=\lim_{h\to 0}\frac{2xh+h^2}{h}=\lim_{h\to 0}\frac{h(2x+h)}{h}=...$
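For anyone following along, the cancellation above can be checked with sympy (a sketch, not part of the original reply):

```python
# Verify the difference-quotient limit symbolically.
import sympy as sp

x, h = sp.symbols('x h')
expr = ((x + h)**2 - x**2) / h
print(sp.cancel(expr))       # equals 2*x + h once the h cancels
print(sp.limit(expr, h, 0))  # 2*x, i.e. the derivative of x^2
```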
History of Tiddlywinks Ratings
1985-1990: Nick Inglis gradually developed the first ratings program based on the Elo ratings system used in chess. As he was Winking World editor, he got sent the scores of all tournaments, and so
he was in a position to enter all of them into his BBC microcomputer (starting with the ETwA Singles in 1985). The results were initially only reported to the Cambridge club, and the algorithm was
refined as the number of tournaments played increased. By 1990, the program was stable and reliable numbers for most players were produced. At that time, ratings began to be reported in Winking World
and thus became available to the whole world of winkers.
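For context, the chess-style Elo update that Nick's program was modelled on looks like this (a generic textbook sketch; ETwA's actual algorithm differed in its details):

```python
# Standard chess-style Elo update, for illustration only.
def elo_update(r_a, r_b, score_a, k=32):
    """Return player A's new rating after scoring score_a (1 win,
    0.5 draw, 0 loss) against a player rated r_b."""
    expected_a = 1 / (1 + 10 ** ((r_b - r_a) / 400))
    return r_a + k * (score_a - expected_a)

# A 1500 player beating a 1500 player gains k/2 = 16 points.
print(elo_update(1500, 1500, 1))   # 1516.0
```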
1990-1995: Nick continued to maintain the ratings. However, the number of players and scores in the database gradually became too unwieldy for the original program, and several quirks began to become
apparent. By 1996, other pressures on Nick meant that he no longer had the time to devote to maintaining and improving the ratings program.
1996-1997: Tim Hedger resurrected and took over the ratings program. He used essentially the same calculation algorithm as Nick, but converted the database into nice friendly Microsoft Access format.
This is quite important, as the database currently has over 14,000 game scores involving more than 500 players. However work pressure meant that he became unable to keep them updated from 1998.
Jun 1999-Dec 1999: Patrick Barrie volunteered to take over maintenance of the ratings in June 1999. He decided to implement a new calculation algorithm in an attempt to eliminate some of the
anomalous results that Nick's and Tim's program occasionally produced. In particular, he introduced the concept of a Ratings Reliability Factor (RRF) into the calculation, and removed the concept of
"rated games". (In Nick's and Tim's programs, games involving players who hadn't played in the previous year were only used to rate the unrated player, and didn't influence that player's partner's rating.)
Jan 2000-Jul 2002: Various changes were made to give a sounder statistical basis to the calculation of ratings changes for each player in a tournament. The idea was that a valid statistical approach
would lead to a method that would work sensibly for all the different tiddlywinks formats, even allowing for the fact that many tiddlywinks players have only played in a limited number of tournaments.
Jul 2002: All ratings were recalculated using the new algorithm taking into account all games since November 1985. A paper describing the calculation method was published in the Journal of Applied
Statistics in 2003.
The current algorithm is described in detail here.
How to Calculate Three Phase Amps From Megawatts | eHow
How to Calculate Three Phase Amps From Megawatts
Megawatt 3-phase power applies primarily to large power distribution systems. Watts measure the real power actually used by the load; because a percentage of the power is lost to the inefficiencies
of the load, the total power supplied is higher than the real power and is measured in volt-amperes, or in this case megavolt-amperes (MVA). You need to know the MVA to figure 3-phase amps, and to
get MVA from megawatts you need the power factor associated with the load, which measures the level of inefficiency of the load.
Step 1
Find the phase voltage, or "Vphase," associated with the 3-phase system. Refer to system specifications. For example, assume 4,000 volts, which is typical for power in the megawatt range
Step 2
Find the power factor, "pf," of the load powered by the megawatt power distribution system. Refer to specifications of the load. A typical power factor for 3-phase loads is 0.8.
Step 3
Find the total power delivered by the power distribution system in Megavolts-amperes or "MVA." Use the formula MVA = MW/pf where MW is the megawatt value of the system. For example, if MW is
20MW and pf is 0.8:
MVA = 20/0.8 = 25 MVA
Step 4
Calculate 3-phase amps, or "I", using the formula: I = (MVA x 1,000,000)/(Vphase x 1.732). The factor 1,000,000 converts megavolt-amperes to volt-amperes, and 1.732 is the square root of 3 used in 3-phase calculations. Continuing with the example:
I = (25 x 1,000,000)/(4,000 x 1.732) = 25,000,000/6,928 = 3,608.5 amps.
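The four steps can be collected into a single function (a sketch; the function name is mine, the formulas are the ones given in the steps):

```python
# 3-phase current from megawatts, power factor and phase voltage.
import math

def three_phase_amps(megawatts, power_factor, v_phase):
    mva = megawatts / power_factor                   # Step 3: apparent power, MVA
    volt_amperes = mva * 1_000_000                   # convert MVA to VA
    return volt_amperes / (v_phase * math.sqrt(3))   # Step 4: sqrt(3) ~ 1.732

print(round(three_phase_amps(20, 0.8, 4000), 1))     # ~3608.4 A (3608.5 with 1.732)
```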
Letter Combinations
How many 3 letter combos can be made from the word proportion?
Hello, thefirsthokage! There is no neat formula for this one. I made a list . . . the shortest possible (I think). How many three-letter combos can be made from the word PROPORTION? We have these
letters: . $\begin{Bmatrix}O\,O\,O \\ P\,P \\ R\,R\\ T \\ I \\ N\end{Bmatrix}$ 3 letters the same: $OOO$ . . . one way. 2 letters the same: there are $3$ choices of the matching pair ($OO,\,PP,\text{ or }RR$) . . and $5$ choices for the third letter . . . $3 \times 5 \:=\:15$ ways. 3 different letters: there are $\binom{6}{3} = 20$ ways. Therefore, there are: $1 + 15 + 20 \:=\:\boxed{36}$
three-letter combos.
Here is a second way to get the same answer as Soroban. The coefficient of $x^3$ in the expansion of $\left( {\sum\limits_{k = 0}^3 {x^k } } \right)\left( {\sum\limits_{k = 0}^2 {x^k } } \right)^2 \left( {1 + x} \right)^3$
is 36. However, that is assuming that the word 'combos' means the same thing as multi-set. In the above reply <p,o,p> is a multi-set and is counted only once. If the word
'combos' means three-letter strings then we would count pop, ppo and opp. What is the intended meaning of 'combos'?
Thanks guys for all your help. I do alright in calculus, but I suck so bad in statistics. I just have to keep trying, I guess. And by combos, I mean combinations.
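Both readings of 'combos' can be brute-forced (a sketch, not part of the original thread); the multiset count matches Soroban's 36, and counting ordered strings as Plato describes gives a larger number:

```python
# Count distinct 3-letter selections from PROPORTION two ways.
from itertools import combinations, permutations

letters = sorted("proportion")             # i n o o o p p r r t
multisets = set(combinations(letters, 3))  # tuples come out sorted, so they dedupe
print(len(multisets))                      # 36 unordered "combos" (multisets)

strings = set(permutations(letters, 3))
print(len(strings))                        # 166 distinct ordered 3-letter strings
```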
Almost got it, but not quite.
March 3rd 2010, 10:21 AM #1
Mar 2010
So, I'm teaching myself calculus using the Open Course Ware at MIT and a few other online resources. I'm new to this forum and am hoping that if I get stuck anywhere I can bounce questions off
someone here.
I think have most of this problem figure out, but I'm running into a dead-end. This shows up in the Problem Set 1, for course 18.01 'Single Variable Calculus'. Anyway here's the problem:
On the planet Quirk, a cell phone tower is a 100-foot pole on top of a green mound 1000 feet tall whose outline is described by the parabolic equation y = 1000 − x^2 (1000 minus x squared). An
ant climbs up the mound starting from ground level (y = 0). At what height y does the ant begin to see the tower?
Now, I can derive the equation of the tangent line that meets at point (0, 1100) from the equation y = 1000 - x^2 as y = 1100 - 2x. Since there is only one point where these two equations meet I
thought I should be able to use the quadradic equation to solve 0 = x^2 - 2x + 100 but I get an imaginary number ( +- sqrt( -396 )). Can anyone tell me where I'm off track? Thanks
Your mistake is to fix the slope of the line at -2.
The line equation you are using does not intersect the parabola,
hence the reason for your complex answers.
The derivative (instantaneous slope) of the parabola is -2x.
The slope of the line you are looking for is "m": the line y = 1100 + mx touches the parabola where 1000 - x^2 = 1100 + mx, that is, where x^2 + mx + 100 = 0.
Therefore, to obtain a single solution for x, the discriminant must vanish: m^2 - 400 = 0, so m = ±20 and x = -m/2 = ∓10.
The ant can climb up either side, hence the positive and negative m.
At the point $(x_0,1000-x_0^2)$ on the parabola, the slope is $-2x_0$ and the equation of the tangent is $y - (1000-x_0^2) = -2x_0(x-x_0)$. The condition for this line to pass through the point
(0,1100) is $100+x_0^2 = 2x_0^2$, or $x_0^2 = 100$. So $x_0 = \pm10$, and the corresponding value of y is $1000-100 = 900$.
The mistake there is that you are using the same notation (x,y) to denote a fixed point on the parabola (the point where the tangent touches it), and a variable point on the tangent line. In my
solution, I used $(x_0,y_0)$ for the point on the parabola, and (x,y) for a point on the tangent line. That way, you avoid confusing them.
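The answer can also be confirmed with sympy (a sketch, not part of the original thread):

```python
# Find where the tangent to y = 1000 - x^2 passes through (0, 1100).
import sympy as sp

x0 = sp.symbols('x0', real=True)
slope = -2 * x0                                   # derivative of 1000 - x^2 at x0
tangent_at_0 = (1000 - x0**2) + slope * (0 - x0)  # tangent line's value at x = 0
sols = sp.solve(sp.Eq(tangent_at_0, 1100), x0)
print(sorted(sols))                  # [-10, 10]
print([1000 - s**2 for s in sols])   # y = 900 on both sides
```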
That makes sense. Thank you for your help.
Limitations Of Ray Tracing - When Will Ray Tracing Replace Rasterization?
Now that we've made a point of deflating certain myths associated with ray tracing, let's look at the real issues that the technique involves.
We'll start with the major problem associated with the rendering algorithm: its slowness. Of course, there are those who'll say that that's not really a problem, since, after all, ray tracing is
highly parallelizable and with the number of processor cores increasing each year, we should see nearly linear increases in ray tracing performance. And what's more, research on the optimizations
that can be applied to ray tracing is still in its infancy. When you look back at the earliest 3D cards and compare them to what's available today, you might tend to be optimistic.
However, that point of view misses an essential point: the real interest of ray tracing lies in the secondary rays. In practice, visibility calculation using primary rays doesn't really represent any
improvement in image quality over a classic Z-buffer algorithm. But the problem with these secondary rays is that they have absolutely no coherence. From one pixel to another, completely different
data can be accessed, which cancels out all the usual caching techniques that are essential for good performance. That means that the calculation of secondary rays becomes extremely dependent on the
memory subsystem, and in particular on latency. This is the worst possible scenario, because of all memory characteristics, latency is the one that has made the least progress in recent years, and
there's no indication that's likely to change any time soon. It's easy enough to increase bandwidth by using several chips in parallel, whereas latency is inherent in the way memory functions.
On a graphics card, latency decreases much more slowly than bandwidth increases. When the latter improves by a factor of 10, latency improves concurrently only by a factor of two.
The reason for the success of GPUs is that building hardware dedicated to rasterization was an extremely effective solution. With rasterization, memory access is coherent, regardless of whether it
involves access to pixels, texels, or vertices. So, small caches coupled with massive bandwidth were ideal for achieving excellent performance. Bandwidth is expensive to provide, but at least
it's a feasible solution if the economics justify it. Conversely, there just aren't any solutions for accelerating memory access with secondary rays. That's one reason why ray tracing will never be
as efficient as rasterization.
Another intrinsic problem with ray tracing has to do with anti-aliasing (AA). The rays being shot are in fact simple mathematical abstractions and have no actual size. Consequently, the test for
intersection with a triangle returns a simple Boolean result, but it provides no details, such as "40% of the ray intersects this triangle." The direct consequence of that is aliasing.
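To make that concrete, here is a minimal Moller-Trumbore-style ray/triangle test (an illustrative sketch, not code from any particular renderer): the caller gets back a bare yes/no, with no notion of what fraction of the ray's footprint is covered.

```python
# A ray/triangle intersection test that returns only a Boolean.
import numpy as np

def ray_hits_triangle(origin, direction, v0, v1, v2, eps=1e-9):
    e1, e2 = v1 - v0, v2 - v0
    p = np.cross(direction, e2)
    det = np.dot(e1, p)
    if abs(det) < eps:                     # ray parallel to triangle plane
        return False
    inv_det = 1.0 / det
    s = origin - v0
    u = np.dot(s, p) * inv_det             # first barycentric coordinate
    if u < 0.0 or u > 1.0:
        return False
    q = np.cross(s, e1)
    v = np.dot(direction, q) * inv_det     # second barycentric coordinate
    if v < 0.0 or u + v > 1.0:
        return False
    return np.dot(e2, q) * inv_det > eps   # hit must lie in front of the ray

tri = [np.array(p, float) for p in [(0, 0, 0), (1, 0, 0), (0, 1, 0)]]
print(ray_hits_triangle(np.array([0.2, 0.2, -1.0]), np.array([0.0, 0.0, 1.0]), *tri))  # hit
print(ray_hits_triangle(np.array([2.0, 2.0, -1.0]), np.array([0.0, 0.0, 1.0]), *tri))  # miss
```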
Whereas with rasterization it was possible to dissociate shader frequency from sampling frequency, it's not that simple with ray tracing. Several techniques have been studied to try to solve this
problem, such as beam tracing and cone tracing, which give the rays thickness, but their complexity has held them back. So the only technique that can get good results is to shoot more rays than
there are pixels, which amounts to supersampling (rendering at a higher resolution). Needless to say, that technique is much more computationally expensive than the multisampling used by current GPUs.
Exponential Functions
Course 1 Unit 5 - Exponential Functions
This is the third unit from the algebra and functions strand in Course 1. Exponential functions are useful in solving problems involving change in populations, pollution, temperature, bank savings,
drugs in the blood stream, and radioactive materials. Their usefulness as well as the fact that the difference equation for exponential growth, NEXT = NOW x b, is a natural counterpoint to the
difference equation for linear change, NEXT = NOW + b, are strong reasons for introducing exponential functions prior to quadratic functions in Course 1.
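The contrast between the two difference equations shows up immediately when they are iterated (a sketch with illustrative starting values, not numbers taken from the unit):

```python
# NEXT = NOW + b (linear) versus NEXT = NOW * b (exponential).
def iterate(start, step, n):
    values = [start]
    for _ in range(n):
        values.append(step(values[-1]))
    return values

linear = iterate(100, lambda now: now + 20, 5)        # NEXT = NOW + 20
exponential = iterate(100, lambda now: now * 1.2, 5)  # NEXT = NOW * 1.2
print(linear)                                # [100, 120, 140, 160, 180, 200]
print([round(v, 1) for v in exponential])    # [100, 120.0, 144.0, 172.8, 207.4, 248.8]
```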
Unit Overview
See the sample Teacher's Guide pages below.
Objectives of the Unit
• Recognize and give examples of growth and decay situations in which exponential functions are likely to match the patterns of change that are observed or expected. This function-recognition skill
should apply to information given in data tables, graphs, or verbal descriptions of related changing variables.
• Develop ability to use reasoning, estimation, and curve-fitting utilities to find exponential functions to match patterns of change in exponential growth and decay situations. This should include
rules in the "y = ..." and NOW-NEXT forms.
• Use exponential rules to produce tables and graphs to answer questions about exponential change of variables.
• Interpret an exponential function rule in order to sketch or predict the shape of its graph and the pattern of change in tables of values.
• Describe major similarities and differences between linear and exponential patterns of change.
• Develop skill in rewriting exponential and radical expressions in equivalent forms.
Sample Overview
The sample material is the first lesson of Unit 5. This material provides an example of how one unit begins. The sample Teacher's Guide material shows the features of the Teacher's Guide including
the Unit Planning Guide, instructional notes, solutions, collaboration skills and prompts, and a Promoting Mathematical Discourse scenario. In the margin, you will see reduced-size pictures of the
Unit Resource Masters for the investigation. See Implementing Core-Plus Mathematics for information on features of the Teacher's Guide.
Instructional Design
Throughout the curriculum, interesting problem contexts serve as the foundation for instruction. As lessons unfold around these problem situations, classroom instruction tends to follow a four-phase
cycle of classroom activities—Launch, Explore, Share and Summarize, and Apply. This instructional model is elaborated under Instructional Design.
View Sample Material
You will need the free Adobe Acrobat Reader software to view and print the sample material.
How the Algebra and Functions Strand Continues
The final algebra and functions strand unit in Course 1, Unit 7, develops student ability to recognize and represent quadratic relations between variables using data tables, graphs, and symbolic
formulas; to solve problems involving quadratic functions; and to express quadratic polynomials in equivalent factored and expanded forms.
In Course 2, students review and extend their ability to recognize, describe, and use functional relationships among quantitative variables, with special emphasis on relationships that involve two or
more independent variables. They also develop matrix and linear combination methods for solving systems of two linear equations. They are introduced to function notation, review and extend their
ability to construct and reason with functions that model parabolic shapes and other quadratic relationships in science and economics, with special emphasis on formal symbolic reasoning methods, and
are introduced to common logarithms and algebraic methods for solving exponential equations.
In Course 3, students extend their understanding of formal reasoning in contexts, study linear inequalities and linear programming, polynomial and rational functions, sequences and series, and
inverse functions.
Course 4: Preparation for Calculus extends student algebraic skills and understandings in equations and functions in algebra units but also in geometry units such as Unit 2, Vectors and Motion, and
Unit 6, Surfaces and Cross Sections. (See the CPMP Courses 1-4 descriptions.)
Need a great analogy for Maxwell's 1st eq
So a good analogy for an electric field would be a blanket that covers two metal spheres. It is flat in between the two spheres but has curvature as it leaves and approaches the spheres. There is no
'radiation', just a static geometry.
Now if we consider the equation: div E = rho/epsilon_0 (Gauss's law).
This states that the divergence of the electric field (the sum of its rates of change along the x hat, y hat, z hat directions) equals the charge density divided by epsilon_0.
epsilon_0 never changes, so its only effect is to scale the value down by division.
The divergence of the electric field is proportional to the charge density: when the charge density goes up, so does the divergence of the field.
So when the charge density goes UP, what's happening in my blanket analogy?
My guesses
Maybe the diameter of a sphere increases.
Maybe we add another sphere right beside one of them.
Maybe we change the original sphere from copper to lead.
I would need an answer that is specifically in terms of my blanket analogy, please.
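As a side note on the equation itself rather than the blanket analogy, sympy can confirm the "no charge, no divergence" reading: away from a point charge, the divergence of a field proportional to r/|r|^3 is exactly zero (a sketch; the symbols are mine):

```python
# Divergence of a point-charge field vanishes away from the charge.
import sympy as sp

x, y, z = sp.symbols('x y z', positive=True)
r3 = (x**2 + y**2 + z**2) ** sp.Rational(3, 2)
E = (x / r3, y / r3, z / r3)            # field direction / r^2, up to constants
div_E = sum(sp.diff(comp, var) for comp, var in zip(E, (x, y, z)))
print(sp.simplify(div_E))               # 0, matching rho = 0 off the charge
```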
A scale drawing of a storage box on a coordinate grid shows a square with the points (-4, 2) and (-1, 4) as the locations of two adjacent vertices. Which pair of coordinate points could represent the other two vertices?
• (1, 1) and (-2, -1)
• (-6, 4) and (-3, 7)
• (-6, 5) and (-3, 8)
• (0, 1) and (-3, -1)
Let's find the distance between (-4, 2) and (-1, 4): sqrt((-1+4)^2 + (4-2)^2) = sqrt(13). Only the pairs (1, 1) & (-2, -1) and (0, 1) & (-3, -1) have that same distance between them. Now we probably
ought to graph them and look at the distances from the original points: (0, 1) is only sqrt(10) from (-1, 4), so that pair fails, while (1, 1) and (-2, -1) are each sqrt(13) from the given vertices. See attached image. Does this work for you?
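The missing vertices can also be found constructively by rotating the known side through ±90° (a sketch, not part of the original answer); only the first pair produced matches one of the listed options:

```python
# Complete the square from two adjacent vertices by rotating the side AB.
A, B = (-4, 2), (-1, 4)
side = (B[0] - A[0], B[1] - A[1])                        # (3, 2)
candidates = []
for perp in [(side[1], -side[0]), (-side[1], side[0])]:  # rotate by -90 and +90 deg
    C = (B[0] + perp[0], B[1] + perp[1])
    D = (A[0] + perp[0], A[1] + perp[1])
    candidates.append((C, D))
print(candidates)   # [((1, 1), (-2, -1)), ((-3, 7), (-6, 5))]
```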
Parallelogram with Vectors
November 22nd 2009, 12:49 AM #1
Junior Member
May 2009
Hello! I'm writing a linear algebra test in January and I'm currently practicing. But my problem is that it's hard to verify my
results, so I thought I could post here. Please tell me if it's not appropriate. Besides that, I'm still not able to solve all the
questions asked. Many thanks in advance.
I have a Parallelogram given by the Points A,B,C,D created in counter-clockwise direction. The middle intersection of the
Lines AC and BD is the point M.
I have the following points given: A(3, -2, 3), B(4, 0, 1), M(6, 3, 7).
Question 1: What is the length of AB ?
i have: AB = [4-3;0+2;1-3] = [1;2;-2]
|AB| = square root of 1+4+4 = 3, which is my length
Question 2: Parametric equation of the Plane ABM, and which parameters result in the Position vector of C?
my attempt: E(c,k) = [3;-2;3] + c[4-3;0+2;1-3] + k[4-6;0-3;1-7] = [3;-2;3] + [c;2c;-2c]+[-2k;-3k;-6k]
which would describe my Plane ABM in parametric form
Now position vector of C with the parameters of c,k in the Plane equation.
My thinking: the length of AC is twice as long as the length of AM. So I came to the conclusion my parameters should be
c = 0, since I don't need to go in the direction of point B,
and k = 2, since it's twice as long. Really not sure how to solve that more rigorously.
Question 3: Find the vector v perpendicular to the Plane ABM, such that it is pointing downwards
I think I need something like a normal vector found by a cross product. But I'm confused, since I thought I needed only 2 vectors
for that....
Question 4: Find the cosine of the angle between the lines AB and AC at the point A
cos(phi) = [ (v1 * v2) / (|v1| x |v2|) ]
v1 = AB = [3;-2;3] +c [4;0;1]
v2 = AC = 2·AM
not sure about that last one
I have a Parallelogram given by the Points A,B,C,D created in counter-clockwise direction. The middle intersection of the
Lines AC and BD is the point M.
I have the following Points given:
Question 1: What is the length of AB ?
i have: AB = [4-3;0+2;1-3] = [1;2;-2]
|AB| = square root of 1+4+4 = 3, which is my length <<<<< OK
Question 2: Parametric equation of the Plane ABM, and which parameters result in the Position vector of C?
my attempt: E(c,k) = [3;-2;3] + c[4-3;0+2;1-3] + k[4-6;0-3;1-7] = [3;-2;3] + [c;2c;-2c]+[-2k;-3k;-6k] = [3,-2,3] + c[1,2,-2] + k[-2,-3,-6]
which would describe my Plane ABM in parametric form
Now position vector of C with the parameters of c,k in the Plane equation.
My thinking: the length of AC is twice as long as the length of AM. So I came to the conclusion my parameters should be
c = 0, since I don't need to go in the direction of point B,
and k = 2, since it's twice as long. Really not sure how to solve that more rigorously.
Question 3: Find the vector v perpendicular to the Plane ABM, such that it is pointing downwards
I think i need something like a Normalvector found by a crossproduct. But im confused since i thought i needed only 2 vectors
for that....
Question 4: Find the cosine of the angle between the lines AB and AC at the point A
cos(phi) = [ (v1 * v2) / (|v1| x |v2|) ]
v1 = AB = [3;-2;3] +c [4;0;1]
v2 = AC = A2M
not sure about that last one
to #2): You don't need the equation of the plane to find the coordinates of point C.
You know that the mean of the position vectors of A and C is the position vector of M. Let $C(x_C, y_C, z_C)$ then
$\frac12([3, -2, 3] + [x_C, y_C, z_C]) = [6,3,7]$
You'll get C(9, 8, 11).
to #3): The orientation of the plane in 3-D is produced by the direction vectors of the plane. Therefore
$\vec n = [1,2,-2] \times[-2,-3,-6]$
How do you know when a vector is pointing down?
to #4):
I've got $\overrightarrow{AB} = [4,0,1]-[3,-2,3] = [1,2,-2]$
Now use your formula.
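Pulling the thread's numbers together with numpy (a sketch; A(3,-2,3), B(4,0,1) and M(6,3,7) are read off from the working above):

```python
# Numeric check of questions 1-4.
import numpy as np

A, B, M = np.array([3, -2, 3]), np.array([4, 0, 1]), np.array([6, 3, 7])

AB = B - A
print(np.linalg.norm(AB))                 # 3.0 (question 1)

C = 2 * M - A                             # M is the midpoint of AC (question 2)
print(C)                                  # [ 9  8 11]

n = np.cross(AB, np.array([-2, -3, -6]))  # normal to plane ABM (question 3)
print(n)                                  # [-18  10   1]; negate for "downwards"

AC = C - A
cos_phi = AB @ AC / (np.linalg.norm(AB) * np.linalg.norm(AC))  # question 4
print(round(cos_phi, 4))                  # 0.2357
```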
Mathematics on the Web
Selected Very Useful Resources
• Reviews and abstracts of mathematical publications.
(Click here for more detailed information.)
• Electronic repositories:
□ JSTOR: electronic copies of complete runs of several major U.S. and British publications (AMS and Royal Soc. journals, Annals of Math., etc.).
□ The arXiv of electronic preprints in mathematics and physics.
• Mathematics on the Web: the AMS guide.
• A direct link to the Notices of the AMS for on-line reading.
• A list with extensive links to home pages. From Penn State: worldwide coverage of math departments (USA and other countries A-F, G-M, N-Z); institutes, journals, societies, some publishers,
specialized subject pages (classified by area), software, etc.
• LaTeX advice. On-line advice at CTAN (Comprehensive TeX Archive Network); in particular A (Not So) Short Introduction to LaTeX2e and a very compact, printable symbol list excerpted from the
preceding (not readable with acroread; try ghostview).
We also present to you a large, comprehensive symbols list (111 pages in PDF), with advice on forming new symbols (prepared by Scott Pakin).
Many Useful Resources
• Pages for undergraduates. For example:
• Bibliographical sources.
□ Reviewing and citation indexes.
☆ MathSciNet, the Web version of Mathematical Reviews. (Accessible for Binghamtonians.) How to use MathSciNet in ways ordinary and extraordinary. The MathSciNet home page.
☆ Zentralblatt MATH Database, the on-line version of Zentralblatt für Mathematik. (That link takes you to the basic search form at the New York mirror site.) There is also a home page with
more information.
□ Publishers (including selected publishers with major search engines).
☆ The AMS publishers list.
☆ Academic Press: searchable book journal catalog with authors, titles, abstracts. A large searchable journal database. (Accessible for Binghamtonians.)
☆ Elsevier (North-Holland) for searchable book catalog. Go to ScienceDirect for searchable journal lists with authors, titles, abstracts. (Accessible for Binghamtonians.)
□ Lists from the AMS of, among others, printed and electronic journals that have sites on the Internet.
□ The World Wide Web Virtual Library: Publishers, an all-subject listing.
□ A MathSciNet search can give you a list of mathematical articles from a specific journal. One search method: Click on a field-name box (e.g., "Author") to change the field name to "Journal".
Then enter the journal name or abbreviation (recommended: use wild-card asterisks * as much as possible). This should give you a list of the articles from the journal that are indexed in
Mathematical Reviews, from newest to oldest. Another method: Find an article from the journal and use that form of the title in your search.
• Meetings.
□ The AMS Mathematics Calendar (world mathematics meetings); also, its AMS Meetings and Conferences pages with international, national, sectional meetings of the AMS.
□ ICM-98, the International Congress of Mathematicians in 1998 in Berlin.
• Internet news and discussion groups.
• Employment, career, and fellowship information.
Much of this information was collected by Matt Brin. I (Tom Zaslavsky) stole more from the Math Archives of the University of Tennessee at Knoxville (UTK) and some from the invaluable links list at
Penn State.
This file last modified: Apr 5 2010
URL: http://www.math.binghamton.edu/MATH/math/index.html
[search the department web pages]
Return to the Math Sciences Department Home Page.
Comments to zaslav@math.binghamton.edu | {"url":"http://www.math.binghamton.edu/math/index.html","timestamp":"2014-04-17T09:38:55Z","content_type":null,"content_length":"28068","record_id":"<urn:uuid:906368d1-926f-41b2-be3f-5ac2823f0ea6>","cc-path":"CC-MAIN-2014-15/segments/1397609527423.39/warc/CC-MAIN-20140416005207-00380-ip-10-147-4-33.ec2.internal.warc.gz"} |
Implementing a Q-DAG Evaluator
[Next] [Up] [Previous]
Next: The Availability of Evidence Up: Query DAGs Previous: Query DAGs
A Q-DAG evaluator can be implemented using an event-driven, forward propagation scheme: whenever the value of a Q-DAG node changes, one updates the values of its children, and so on, until no further updates are possible. Alternatively, an evaluator can use a backward propagation scheme, in which one starts from a query node and updates its value by updating the values of its
parent nodes. The specifics of the application will typically determine which method (or combination) is more appropriate.
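A minimal sketch of the event-driven forward scheme described above (the node class and operations here are invented for illustration; this is not the authors' implementation, and it omits the validity flags discussed below):

```python
import math

class QNode:
    """One Q-DAG node: a leaf, or a '+' / '*' over its parent nodes."""
    def __init__(self, op=None, parents=()):
        self.op = op                    # None for leaves
        self.parents = list(parents)
        self.children = []
        self.value = 0.0
        for p in self.parents:
            p.children.append(self)

def set_leaf(leaf, value):
    """Event-driven forward propagation: change a leaf, then push the
    change to children until no node's value changes any further."""
    leaf.value = value
    frontier = list(leaf.children)
    while frontier:
        node = frontier.pop()
        vals = [p.value for p in node.parents]
        new = sum(vals) if node.op == '+' else math.prod(vals)
        if new != node.value:           # propagate only actual changes
            node.value = new
            frontier.extend(node.children)

# Tiny Q-DAG computing query = a*b + c
a, b, c = QNode(), QNode(), QNode()
prod = QNode('*', (a, b))
query = QNode('+', (prod, c))
for leaf, v in ((a, 0.5), (b, 0.4), (c, 0.1)):
    set_leaf(leaf, v)
print(f"{query.value:.2f}")             # 0.30
```

Note how the change check prunes work: setting `a` alone changes nothing downstream while `b` is still 0, which is the fine-grained behavior the paragraph below contrasts with coarse message-level validity flags.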
It is important that we stress the level of refinement enjoyed by the Q-DAG propagation scheme and the implications of this on the efficiency of query updates. Propagation in Q-DAGs is done at the
arithmetic-operation level, which is contrasted with propagation at the message-operation level (used by many standard algorithms). Such propagation schemes are typically optimized by keeping
validity flags of messages so that only invalid messages are recomputed when new evidence arrives. This will clearly avoid some unnecessary computations but can never avoid all unnecessary
computations because a message is typically too coarse for this purpose. For example, if only one entry in a message is invalid, the whole message is considered invalid. Recomputing such a message
will lead to many unnecessary computations. This problem will be avoided in Q-DAG propagation since validity flags are attributed to arithmetic operations, which are the building blocks of message
operations. Therefore, only the necessary arithmetic operations will be recomputed in a Q-DAG propagation scheme, leading to a more detailed level of optimization.
We also stress that the process of evaluating and updating a Q-DAG is done outside of probability theory and belief network inference. This makes the development of efficient on-line inference
software accessible to a larger group of people who may lack strong backgrounds in these areas.[+]
Darwiche&Provan | {"url":"http://www.cs.cmu.edu/afs/cs/project/jair/pub/volume6/darwiche97a-html/node3.html","timestamp":"2014-04-23T21:25:26Z","content_type":null,"content_length":"3673","record_id":"<urn:uuid:2dcd23c3-b85b-4f95-b5bd-fba19940fb7d>","cc-path":"CC-MAIN-2014-15/segments/1398223203422.8/warc/CC-MAIN-20140423032003-00095-ip-10-147-4-33.ec2.internal.warc.gz"}
Frequency Distribution
September 27th 2013, 04:05 AM #1
Sep 2013
Hi all
It is written in the book "Basic Statistics for Business and Economics", about organizing data into a frequency distribution:
step 1: Decide on the number of classes. The goal is to use just enough groupings or classes to reveal the shape of the distribution. Some judgment is needed here. A useful recipe to determine
the number of classes (k) is the "2 to the k rule". This guide suggests you select the smallest number (k) for the number of classes such that 2^k (in words, 2 raised to the power of k) is
greater than the number of observations (n). [n<=2^k]
I want to know, how can I prove this formula?
Please Help!
Re: Frequency Distribution
You can't "prove it". It is not a "theorem". It is, as it says, a "useful recipe". It is a "guide" that "suggests" you do a particular thing.
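The rule itself is straightforward to compute; a tiny sketch (n = 80 is an arbitrary example, not from the book):

```python
def classes_2k(n):
    """Smallest k with 2**k >= n (the '2 to the k' rule of thumb)."""
    k = 1
    while 2 ** k < n:
        k += 1
    return k

print(classes_2k(80))   # 7, since 2**6 = 64 < 80 <= 128 = 2**7
```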
September 27th 2013, 07:15 AM #2
MHF Contributor
Apr 2005 | {"url":"http://mathhelpforum.com/statistics/222326-frequency-distribution.html","timestamp":"2014-04-21T13:10:21Z","content_type":null,"content_length":"33677","record_id":"<urn:uuid:80d3d03e-a40b-40f7-9973-229308ee58a7>","cc-path":"CC-MAIN-2014-15/segments/1397609539776.45/warc/CC-MAIN-20140416005219-00147-ip-10-147-4-33.ec2.internal.warc.gz"} |
Trick or Treat! A Keurig Giveaway! (Winner Announced)
UPDATE: The winner of the Keurig is:
#1,705 – Lauren: “My favorite Halloween candy is usually Twix.”
Congratulations Lauren ! You should have already received an email from me; make sure you reply with your flavor choices!
Thanks everyone for entering, be on the look out for more giveaways as the holidays draw near!
Happy Halloween folks!
Since I can’t pass out candy to each of you today, I thought I would do the next-best thing – have a giveaway! The weather seems to be getting chillier by the day, and curling up on the couch with a
hot beverage sounds like an excellent idea. So, I am giving away a Keurig coffee maker to one lucky reader! You can make single cups of whatever you’d like – your favorite coffee, hot chocolate, tea,
even hot apple cider, mmm…
Giveaway Details
The giveaway winner will receive the following:
1. One (1) Keurig Elite Brewing System
2. One (1) box of a seasonal specialty (Pumpkin Spice Coffee or Hot Apple Cider)
3. One (1) box of hot chocolate (Dark, Milk or White)
4. One (1) box of coffee or tea of your choice (see varieties here)
How to Enter
To enter to win, simply leave a comment on this post and answer the question: “What’s your favorite Halloween candy?”
You can receive up to four additional entries to win by doing the following:
1. Subscribe to Brown Eyed Baker by either RSS or email. Come back and let me know you’ve subscribed in an additional comment.
2. Follow @browneyedbaker on Twitter. Come back and let me know you’ve followed in an additional comment.
3. Tweet the following about the giveaway: “Win a Keurig & a drink assortment of your choice from @browneyedbaker! http://ow.ly/7do6e”. Come back and let me know you’ve Tweeted in an additional comment.
4. Become a fan of Brown Eyed Baker on Facebook. Come back and let me know you became a fan in an additional comment.
Deadline: Today (Monday), October 31, 2011 at 11:59pm EST
Winner: The winner will be chosen at random using Random.org and announced here tomorrow. If the winner does not respond within 24 hours, another winner will be selected.
GOOD LUCK!!
Disclaimer: This giveaway is sponsored by Brown Eyed Baker.
3,937 Responses to “Trick or Treat! A Keurig Giveaway! (Winner Announced)”
3. Smarties! That’s all we’re handing out to the treat seekers tonight!
4. My favorite Halloween candy is almond joy! Yum!!!!
5. I signed up for your e-mail feed!
6. I followed you on twitter. Yay!
7. My favorite Halloween candy is those pumpkin shaped candy corn candies.
8. I already get your emails.
9. I have been a fan on Facebook for a while now.
11. Favorite Halloween candy is Chocolate bars. The little minis.
12. I’m a follower through email…
15. I am a subscriber to your site by email.
16. Reese’s Peanut Butter cups!! AND I am already a fan on facebook!
17. Mmmm peanut butter cups and mallow cups. Coconut and marshmallows–yes please!
18. I’ve tweeted about the giveaway.
19. I follow you on twitter…..
20. I’m already a fan on face book.
21. I just twitted about your giveaway.
22. Peanut butter kisses….. Just kidding, those were my least fav. Reese’s pb cups or candy corn still are my favorites!
23. Snickers!!! Love that gooey carmel, chocolate, nuts. What’s better than that?
24. Butterfingers and Twix bars!!!!
25. I am already subscribed to RSS
26. I subscribe via email
a facebook fan
I gotta have almond joy all the way…
27. I already subscribe via RSS!
28. Yep…. I even like you on Facebook!
29. 1.Reese’s Peanut Butter Cups 2. Almond Joys
30. My favorite Halloween Candy is the 100 Grand Bar.
31. I’m already a Facebook fan!
33. My favorite Halloween candy is definitely Brach’s candy corn! (altho I’ve been known to “borrow” KitKats out of my children’s Halloween stash!)
35. My favorite Halloween Candy are those teeny tiny boxes of Nerds.
37. I subscribe via Google Reader
38. I am now following on Twitter
39. I’m already subscribed to your RSS
40. I’ve tweeted about the giveaway @CadyCupcake
42. I have now “liked” you on FB
46. Almond Joy and peanut m&ms.
48. I follow you on twitter! (amweeks)
49. I’m already a loyal facebook fan of yours!
51. I follow you though Google Reader.
52. I”m one of your facebook fans! (Annmarie Dipasqua Weeks)
53. The little bite size Mounds or Almond Joy…
54. Mounds! I love the coconut.
55. My favorite halloween candy are the fun size twix bars…they just taste better than the full-sized version…my hubby and i always “check” the girls’ candy and pull all the twix out for us,
terrible, isn’t it?
56. I love candy corn. I have to have it for the Halloween season.
57. I am already a subscriber via email!
59. Love the peanut butter cups!
60. I follow you on FB as well
61. Love almost anything chocolate, but Snickers are way up there….and am already an email subscriber and Facebook fan!
62. Following you on twitter (@mersworld)!
63. And, tweeted! Thanks for hosting such a great giveaway!
64. We love to give and receive peanut M & M”S.
65. Happy Halloween! Candy corn – yum.
67. I already subscribe via email and love it.
69. Favorite has to be Milky Way or Tootsie Rolls, I can never decide…. I am a fb fan and i think i get you on RSS as well. Love your site!
70. I am already a fan via Facebook – my favorite blog.
71. I am a facebook fan too. great giveaway!
72. I think Kit Kat is my all time favorite!
73. I already subscribe to the RSS feed
74. I love PB Cups but I also love candy corn this time of year!!!
79. LizDallas is following you now via Twitter
81. I’m also an e-mail subscriber!
82. …and I’m also a Facebook Fan!
83. reese’s peanut butter cups!
84. I can’t eat peanut butter but EVERYTHING else is fair game. Yummy. Love Easter candy tooooooo
85. LizElms Liz Elms
Win a Keurig & a drink assortment of your choice from @browneyedbaker! ow.ly/7do6e
I tweeted your message via Twitter
86. Snickers! Doesn’t get any better than that!
87. The classic candy corn! I can eat the others year round
88. I subscribe to RSS feed so I don’t miss any yummy recipes!
89. I already get email. Love your weekly roundups.
90. Favorite Halloween candy? That’s like asking me which is my favorite Beatle. I do find that candy corn is like sweet little triangles of sugary adult crack.
91. I also follow you on Facebook.
92. I’m following you on facebook :). Happy Halloween!
93. And, I follow you on Twitter.
94. Ooh, I absolutely love candy corn… such a guilty pleasure.
95. Anything with peanut butter and chocolate.
97. I have been a fan and subscribed for ..ever!
100. My favorite candy at Halloween is Milk Duds. I can never find them any other time. | {"url":"http://www.browneyedbaker.com/2011/10/31/trick-or-treat-a-keurig-giveaway/comment-page-4/","timestamp":"2014-04-21T07:05:51Z","content_type":null,"content_length":"121030","record_id":"<urn:uuid:a50f342f-e718-4b77-a691-f97f7c80dd14>","cc-path":"CC-MAIN-2014-15/segments/1397609539665.16/warc/CC-MAIN-20140416005219-00114-ip-10-147-4-33.ec2.internal.warc.gz"} |
Reston Algebra 2 Tutor
Find a Reston Algebra 2 Tutor
...With well over 20 years of teaching/tutoring experience, I doubt there are many tutors as patient, talented or effective as I am. I bring a lot more to the table than what you see "on paper"
and am constantly told by my students how well I explain things. Though other tutors may be cheaper, I am more efficient due to my experience and explanations.
28 Subjects: including algebra 2, chemistry, calculus, physics
I am a current student at George Mason University studying Biology which allows me to connect to other students struggling with certain subjects. I tutor students in reading, chemistry, anatomy,
and math on a high school level and lower. I hope to help students understand the subject they are working with by repetition, memorization, and individualized instruction.
9 Subjects: including algebra 2, English, reading, anatomy
...I have several years of experience teaching all aspects of English. I have worked with high school and college students studying English literature, as well as students from elementary through
graduate school who were struggling with reading, writing, and grammar. I currently teach three ESL classes focusing on writing and reading.
46 Subjects: including algebra 2, Spanish, English, algebra 1
...I have seven years of experience with AutoCAD/Inventor, and two years of MATLAB experience. I am a mechanical engineering student, so I use Microsoft Excel frequently to build parts lists and run
calculations. I received an A in Physics I (Mechanics) and II (Electronics) at Georgia Tech. I have been trained to use this program by my university, and I received an A in that training course.
23 Subjects: including algebra 2, calculus, physics, geometry
...I've been tutoring chess for the last 10 years outside of WyzAnt. I have a US Chess Federation ID number. I've been tutoring math for a long time.
24 Subjects: including algebra 2, reading, calculus, chemistry | {"url":"http://www.purplemath.com/Reston_algebra_2_tutors.php","timestamp":"2014-04-17T04:18:12Z","content_type":null,"content_length":"23923","record_id":"<urn:uuid:c8943632-553e-4e50-99e5-f36d24d80d29>","cc-path":"CC-MAIN-2014-15/segments/1398223203841.5/warc/CC-MAIN-20140423032003-00375-ip-10-147-4-33.ec2.internal.warc.gz"} |
Nonparametric equivalent for (factorial) MANOVA?
07-07-2011 06:48 PM #2
07-07-2011 02:07 PM #1
Thanked 0 Times in 0 Posts
Re: Nonparametric equivalent for (factorial) MANOVA?
Nonparametric equivalent for (factorial) MANOVA?
I once had some experience in SPSS/statistical testing, but now I really need it, I cannot seem to find the appropriate tests. I hope anyone can assist me on this one.
I've got 2 IVs (both nominal with two levels) and 2 DVs (both scale). None of the data are normally distributed, so I cannot run a MANOVA; is that correct?
N=53, but I've got some random missing data.
If anyone can tell me how to (safely) complement this data, maybe I could run the MANOVA after all (since a higher N per cell can make MANOVA robust against violation of the multivariate normality assumption).
I am really interested in the interaction effects between my two IVs. Is there any way you can tell me how, please?
(I'm using SPSS)
Since your design appears to be 2X2, did you take the ranks of the original DV's data points and then conduct the usual OLS parametric analysis?
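Dragan's rank-transform suggestion can be sketched with plain numpy (the balanced 2x2 layout, n = 10 per cell, and the simulated lognormal scores are all invented for illustration; later replies also caution that rank transforms can be unreliable specifically for interaction tests):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical balanced 2x2 design, n = 10 per cell, skewed response
n = 10
a = np.repeat([0, 0, 1, 1], n)           # IV 1 (two levels)
b = np.repeat([0, 1, 0, 1], n)           # IV 2 (two levels)
y = rng.lognormal(mean=0.5 * a + 0.3 * b + 0.4 * a * b, sigma=1.0)

# Step 1: replace the raw scores by their ranks
r = np.empty_like(y)
r[y.argsort()] = np.arange(1, y.size + 1)

# Step 2: ordinary balanced two-way ANOVA, but computed on the ranks
grand = r.mean()
mean_a = np.array([r[a == i].mean() for i in (0, 1)])
mean_b = np.array([r[b == i].mean() for i in (0, 1)])
cells = np.array([[r[(a == i) & (b == j)].mean() for j in (0, 1)]
                  for i in (0, 1)])

ss_a = 2 * n * ((mean_a - grand) ** 2).sum()
ss_b = 2 * n * ((mean_b - grand) ** 2).sum()
ss_cells = n * ((cells - grand) ** 2).sum()
ss_ab = ss_cells - ss_a - ss_b           # interaction sum of squares
ss_err = ((r - grand) ** 2).sum() - ss_cells

f_ab = ss_ab / (ss_err / (4 * (n - 1)))  # df = 1 for the interaction
print(f"F for the A x B interaction on ranks: {f_ab:.3f}")
```

The same two steps can be run in SPSS by computing RANK of the DV and feeding the ranks into the usual GLM/UNIANOVA procedure.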
Re: Nonparametric equivalent for (factorial) MANOVA?
I'm very sorry, but I'm not familiar with that test...should I? Or is there another name for it in the statistical software I am using (which is SPSS)?
I thought about ranking the scores and doing a nonparametric test, but there appears to be no nonparametric test that also gives me the interaction effects between my two IV's. And that
interaction effect is one of the most important parts of my design/hypothesis.
Re: Nonparametric equivalent for (factorial) MANOVA?
PERMANOVA. This is available throught PRIMER-E V6. There used to be a free FORTRAN version available, you might have soem luck if you look it up on line; but is was pretty tricky getting the data
formatting correct. BEst bet is to track downs the add on to PRIMER.
The earth is round: P<0.05
Re: Nonparametric equivalent for (factorial) MANOVA?
Hold on hold on, I'm just a simple student in the social sciences, only required to know some basic statistics stuff. Besides that, I'm from Europe and I've discovered that some of the underlying
assumptions (although I always thought mathematics could be considered the only science having global rules) vary from continent to continent. For instance, my Australian textbook says univariate normality is required for MANOVA, while the British textbook I used only required multivariate normality.
Is there any (fairly simple) method I can use to get some results out of my data, only having access to SPSS?
Re: Nonparametric equivalent for (factorial) MANOVA?
the problem is that a lot standard research methods textbooks in psychology or the social sciences are written by other people in the social sciences who tend to know very little about the
mathematics behind these methods... stay with the british textbook, that one got it right...
as far as what your options are, i believe that what Dragan said it's a good option if you're stuck with SPSS... i've heard there's good research out there backing the idea of doing the normal
parametric analysis on the ranks of the data and actually getting pretty solid conclusions, although i might be wrong...
one thing, though... the assumption of multivariate normality is **NOT** on the variables themselves but on the residuals... have you checked those? if the residuals are normally distributed you
don't really care about the distribution of your other variables...
The Following User Says Thank You to spunky For This Useful Post:
owj_315 (07-08-2011)
Re: Nonparametric equivalent for (factorial) MANOVA?
On the risk of asking too much: can you give me an example of such good research? As I have to justify my analysis with 'people in the social sciences'...
And ahm, I'm not quite sure what residuals are, so I surely don't know how to check them...
Thank you very much for your answer, it was the most understandable to my question (until now). (No offense to all the others, of course. Thank you all for responding so quickly)
Re: Nonparametric equivalent for (factorial) MANOVA?
Zimmerman, Donald W.; Simplified interaction tests for non-normal data in psychological research. British Journal of Mathematical and Statistical Psychology, Vol 47(2), Nov, 1994. pp. 327-335.
Zimmerman, D. W., & Zumbo, B. D. (1993). Relative power of the Wilcoxon test, the Friedman test, and repeated-measures ANOVA on ranks. Journal of Experimental Education, 62, 75-86.
and if you type "rank transformations" in PSYCInfo i'm sure there's gonna be more stuff out there... careful, though. i haven't seen these implemented in MANOVAs and i know rank transformations
can get real fishy (or so other authors have found)... if you were to REALLY use the non-parametric version of a MANOVA, PERMANOVA would be the way to go though...
now, what is this for? is it like a school project or something you're working on towards publication? i dunno, i'm just wondering whether you're over-complicating things for yourself here...
Re: Nonparametric equivalent for (factorial) MANOVA?
Thompson, G. L. (1991). "A note on the rank transform for interactions". Biometrika 78 (3): 697–701.
The article I cite above addresses and justifies the answer to your question. However, this article (in the manner that it is written) is over your head.
07-07-2011 08:33 PM #3
Thanked 0 Times in 0 Posts
07-07-2011 10:27 PM #4
07-07-2011 10:36 PM #5
Thanked 0 Times in 0 Posts
07-07-2011 10:59 PM #6
07-08-2011 12:18 AM #7
Thanked 0 Times in 0 Posts
07-08-2011 04:10 AM #8
07-08-2011 08:40 AM #9 | {"url":"http://www.talkstats.com/showthread.php/18775-Nonparametric-equivalent-for-(factorial)-MANOVA","timestamp":"2014-04-16T04:11:07Z","content_type":null,"content_length":"89901","record_id":"<urn:uuid:64a100a2-5bb5-4033-b9cd-bde1277ebb43>","cc-path":"CC-MAIN-2014-15/segments/1397609521512.15/warc/CC-MAIN-20140416005201-00596-ip-10-147-4-33.ec2.internal.warc.gz"} |
Snell's law, critical angle & refraction
According to Snell's law
n1sin(θ1) = n2sin(θ2)
If θ1 is θc, then θ2 = 90 degrees.
So sin(θc) = n2/n1
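A quick numerical check of the critical-angle relation (the indices here describe a water-to-air boundary and are illustrative values, not from the thread):

```python
import math

# Total internal reflection: n1*sin(theta_c) = n2*sin(90 deg)
n1, n2 = 1.33, 1.00                      # water -> air (approximate)
theta_c = math.degrees(math.asin(n2 / n1))
print(f"critical angle = {theta_c:.1f} degrees")   # 48.8
```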
When I look up Snell's law on Wikipedia it says
[tex]\frac{\sin \theta_1}{\sin \theta_2}=\frac{v_1}{v_2}=\frac{n_2}{n_1}[/tex]
Why does the subscript change in the [tex]n_n[/tex]? Isn't [tex]v_1=n_1[/tex] and [tex]v_2=n_2[/tex]?
Thanks for answering | {"url":"http://www.physicsforums.com/showthread.php?t=403029","timestamp":"2014-04-20T08:35:32Z","content_type":null,"content_length":"29254","record_id":"<urn:uuid:e453a420-eba6-4f4b-83fc-6d418b0b889b>","cc-path":"CC-MAIN-2014-15/segments/1397609538110.1/warc/CC-MAIN-20140416005218-00324-ip-10-147-4-33.ec2.internal.warc.gz"} |
shift - shift positional parameters
shift [ options ] [n]
shift is a shell special built-in that shifts the positional parameters to the left by the number of places defined by n, or 1 if n is omitted. The number of positional parameters remaining will
be reduced by the number of places that are shifted.
If n is given, it will be evaluated as an arithmetic expression to determine the number of places to shift. It is an error to shift more than the number of positional parameters or a negative
number of places.
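A short illustration in a POSIX shell (the parameter values are arbitrary):

```shell
#!/bin/sh
# shift consumes positional parameters from the left
set -- alpha beta gamma      # $1=alpha $2=beta $3=gamma, $#=3
shift 2                      # n=2: drop the first two parameters
echo "$#"                    # 1
echo "$1"                    # gamma
```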
Exit status:
0   The positional parameters were successfully shifted.
>0  An error occurred.
shift (AT&T Research) 1999-07-07
David Korn <dgkorn@gmail.com>
Copyright © 1982-2010 AT&T Intellectual Property | {"url":"http://www2.research.att.com/sw/download/man/man1/shift.html","timestamp":"2014-04-16T21:53:49Z","content_type":null,"content_length":"2471","record_id":"<urn:uuid:df362ec9-ed51-4f26-a0c1-da035928037d>","cc-path":"CC-MAIN-2014-15/segments/1398223207985.17/warc/CC-MAIN-20140423032007-00482-ip-10-147-4-33.ec2.internal.warc.gz"} |
Computational Complexity
Guest post by Janos Simon
A group of researchers (Arjen Lenstra and collaborators from EPFL Lausanne, and James Hughes from Palo Alto) published a study, "Ron was wrong, Whit is right", of new vulnerabilities of cryptosystems. The New York Times picked up the story. Although Lenstra et al discuss several cryptosystems, their results are particularly relevant to those based on RSA. The title mirrors their conviction that cryptosystems based on a single random element have fewer key-generation problems than RSA, which uses two random primes.
The technical problem they identify in RSA is the following: The RSA cryptosystem uses a modulus n that is the product of two large "random" primes p and q. Actual keys may not be truly random, and
this may cause several possible problems:
1. Different users may end up with the same n. Since a user knows the factors p, q, she will be able to decrypt the data of any user with the same modulus.
2. If two users share one of the factors, (user A's modulus is pq, user B's is pr) they will be able to decrypt each other's data. Given two moduli, one can use the Euclidean algorithm to determine
whether they have a common factor, and find it if it exists.
Note that the second vulnerability is more insidious: in the first only the user with the matching key can decrypt the messages of its mate, while anyone can explore the web looking for pairs of keys
with a common factor.
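The shared-factor attack really is just one GCD per pair of moduli. A toy sketch with made-up small primes (real RSA moduli are of course on the order of 1024-2048 bits):

```python
import math

# Illustrative tiny primes standing in for the random primes p, q, r
p, q, r = 10007, 10009, 10037
n_a = p * q                  # user A's modulus
n_b = p * r                  # user B's modulus accidentally shares p

g = math.gcd(n_a, n_b)       # Euclidean algorithm
if g > 1:                    # both moduli are now fully factored
    print("shared prime:", g)
    print("A =", g, "x", n_a // g, "  B =", g, "x", n_b // g)
```

Note that the attacker needs only the public moduli; no private key material is involved.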
The lack of randomness in key generation may be caused by bad choices for the seed of a random number generator. As an extreme example, devices may be shipped with a standard common seed. In this
case all devices would generate the same n. In general, if the collection of seeds is a low entropy set, with high probability insecure keys will be generated.
The EPFL group collected 11.7 million public keys "while avoiding activities that our system administrators may have frowned upon" and essentially found that about 99.8% of the keys were not insecure
(to the extent that they did not suffer from the vulnerabilities above.)
Is this secure enough?
Note that .2 percent of 11 million is tens of thousands of bad keys.
To make matters murkier, another group with researchers from the University of Michigan and UCSD did a somewhat similar experiment. Their results are not published yet, but one of the authors, Nadia Heninger, wrote about their results in Freedom to Tinker. They find a similar proportion of bad keys, but they claim that the vulnerability mostly occurs in embedded devices like firewalls and routers, so
"important" keys like bank certificates are not affected. Lenstra et al disagree.
Perhaps we should be happy that these vulnerabilities are not due to weak Theory, but to bad implementations of good theoretical ideas....
9 comments:
1. Its a very interesting paper, and a fascinating security flaw, but it seems that the title of their paper is ill advised.
First of all, the problem is not with RSA but with random number generation in embedded systems.
Second of all, why is it Ron that was wrong (and not also Adi and Len?) and Whit (and not also Martin) that was right?
As far as I can tell, the title of the paper is unsupported by its contents, even though the contents make for a very interesting paper.
2. I was also going to ask what the paper title meant, because I couldn't find it explained in the paper.
3. Reminds me of the famous xkcd comic on the difference between theory and practice of crypto: http://xkcd.com/538/
4. "Perhaps we should be happy that these vulnerabilities are not due to weak Theory, but to bad implementations of good theoretical ideas.... "
I think the reason why a lot of implementors use ad hoc and weak solution to produce randomness is because theory hasn't provided them with good enough results on practical random number
It's better than a security flaw in the core of RSA but according to me this is still a flaw on our part
5. Perhaps I misunderstood, but it seems like a malicious user could go about generating a massive number of keys for themselves and then use these techniques to find which keys out there are
vulnerable. Thus the 99.8%, which seems based on having access to 11M such keys would drop significantly as the size of the number of keys increased. At some point in time it would get close to
6. Paul, the attack you describe is equivalent to trying to factor n by just guessing random factors. It won't have any chance of success if I generated my key with a proper source of randomness --
it will only have a chance of success for keys that were generated without proper randomness, which Henninger claims is only these embedded systems, not websites or banks.
7. Anonymous(1) -- I don't agree with the title of the paper, or the implications. But to phrase things in a less inflammatory manner, there are two kind of issue here:
1. Use of 'poor' entropy makes keys vulnerable to user who can actually /predict/ the entropy
2. Use of 'poor' entropy makes keys vulnerable even to attackers who can't predict the entropy
Every cryptosystem fails in case (1). RSA fails in cases (1) and (2). Hence even if I pick an extremely high-quality PRNG seed /and/ you can't guess it, I'm vulnerable if I share that seed with
one other person -- even if I trust that they will never reveal it (this requires that they only use it to pick one prime, of course). Whereas with most other cryptosystems you would end up with
two different (but mutually trusting) parties sharing the same key, which is bad, but does not lead to secret key recovery.
Other than that, the title is over the top.
Insofar as I'll criticize theoreticians for this kind of thing (in general) it's because they're too willing to make assumptions about the quality of inputs to a given cryptosystem. "Yes, it's
secure under this assumption" only makes sense if that assumption is born out by the real world. The heartening thing is that we now see a lot of theoretical research that considers extremely
advanced (and unlikely) attack models -- e.g., leakage-resilient crypto, etc.
8. More often than not, it would be "secure enough".
@PaulHomer The number approaches zero proportionately to the increase in keys. So, yes.
9. "If two users share one of the factors, (user A's modulus is pq, user B's is pr) they will be able to decrypt each other's data."
This is actually much worse than that: any person with access to both public keys will be able to detect that a factor is shared and compute their private keys. | {"url":"http://blog.computationalcomplexity.org/2012/02/is-998-secure-secure.html","timestamp":"2014-04-17T18:23:54Z","content_type":null,"content_length":"174577","record_id":"<urn:uuid:b5334ca1-7d3e-4133-bd43-3ed9fd1f75ac>","cc-path":"CC-MAIN-2014-15/segments/1398223203422.8/warc/CC-MAIN-20140423032003-00187-ip-10-147-4-33.ec2.internal.warc.gz"} |
Self Online Study - Mathematics - Probability - Binomial Distribution
In statistics the so-called binomial distribution describes the possible number of times that a particular event will occur in a sequence of observations. The event is coded as binary: it may or may not occur. The binomial distribution is used when a researcher is interested in the occurrence of an event, not in its magnitude. For instance, in a clinical trial, a patient may survive or die. The
researcher studies the number of survivors, and not how long the patient survives after treatment. Another example is whether a person is ambitious or not. Here, the binomial distribution describes
the number of ambitious persons, and not how ambitious they are.
The binomial distribution is specified by the number of observations, n, and the probability of occurence, which is denoted by p.
A classic example that is used often to illustrate concepts of probability theory, is the tossing of a coin. If a coin is tossed 4 times, then we may obtain 0, 1, 2, 3, or 4 heads. We may also
obtain 4, 3, 2, 1, or 0 tails, but these outcomes are equivalent to 0, 1, 2, 3, or 4 heads. The likelihood of obtaining 0, 1, 2, 3, or 4 heads is, respectively, 1/16, 4/16, 6/16, 4/16, and 1/16. In
the figure on this page the distribution is shown with p = 1/2. Thus, in the example discussed here, one is most likely to obtain 2 heads in 4 tosses, since this outcome has the highest probability.
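The coin-toss probabilities quoted above can be reproduced directly from the binomial formula; a minimal Python check using only the standard library:

```python
from math import comb

n, p = 4, 0.5
# P(X = k) = C(n, k) * p^k * (1 - p)^(n - k)
pmf = [comb(n, k) * p**k * (1 - p) ** (n - k) for k in range(n + 1)]
print(pmf)  # [0.0625, 0.25, 0.375, 0.25, 0.0625], i.e. 1/16, 4/16, 6/16, 4/16, 1/16
```

The mode at k = 2 matches the statement that 2 heads is the most likely outcome in 4 tosses.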
Other situations in which binomial distributions arise are quality control, public opinion surveys, medical research, and insurance problems.
In many cases, it is appropriate to summarize a group of independent observations by the number of observations in the group that represent one of two outcomes. For example, the proportion of
individuals in a random sample who support one of two
political candidates fits this description. In this case, the statistic p̂ is the count X of voters who support the candidate divided by the total number of individuals in the group, n. This provides an estimate of the parameter p, the proportion of individuals who support the candidate in the entire population.
The binomial distribution describes the behavior of a count variable X if the following conditions apply:
1: The number of observations n is fixed.
2: Each observation is independent.
3: Each observation represents one of two outcomes ("success" or "failure").
4: The probability of "success" p is the same for each outcome.
Bernoulli Theorem :
Let there be n independent trials in an experiment and let the random variable X denote the number of successes in these trials. Let the probability of getting a success in a single trial be p and that of getting a failure be q, so that p + q = 1. The probability of getting exactly r successes is then

P(X = r) = nC[r] · p^r · q^(n-r),  r = 0, 1, 2, ..., n
Mean and Variance of the Binomial Distribution
The binomial distribution for a random variable X with parameters n and p represents the sum of n independent variables Z which may assume the values 0 or 1. If the probability that each Z variable
assumes the value 1 is equal to p, then the mean of each variable is equal to 1*p + 0*(1-p) = p, and the variance is equal to p(1-p). By the addition properties for independent random variables, the
mean and variance of the binomial distribution are equal to the sum of the means and variances of the n independent Z variables, so the mean is np and the variance is np(1-p).
These definitions are intuitively logical. Imagine, for example, 8 flips of a coin. If the coin is fair, then p = 0.5. One would expect the mean number of heads to be half the flips, or np = 8*0.5 =
4. The variance is equal to np(1-p) = 8*0.5*0.5 = 2.
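As a sanity check, the closed forms np and np(1-p) can be compared with moments computed directly from the probabilities themselves; a small Python sketch (standard library only):

```python
from math import comb

def binomial_moments(n, p):
    # Mean and variance computed from the pmf itself,
    # to be compared against the closed forms np and np(1 - p).
    pmf = [comb(n, k) * p**k * (1 - p) ** (n - k) for k in range(n + 1)]
    mean = sum(k * pk for k, pk in enumerate(pmf))
    var = sum((k - mean) ** 2 * pk for k, pk in enumerate(pmf))
    return mean, var

mean, var = binomial_moments(8, 0.5)
print(mean, var)  # 4.0 and 2.0, matching np = 8*0.5 and np(1-p) = 8*0.5*0.5
```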
Sample Proportions
If we know that the count X of "successes" in a group of n observations with success probability p has a binomial distribution with mean np and variance np(1-p), then we are able to derive information about the distribution of the sample proportion p̂, the count of successes X divided by the number of observations n. By the multiplicative properties of the mean, the mean of the distribution of X/n is equal to the mean of X divided by n, or np/n = p. This proves that the sample proportion p̂ is an unbiased estimator of the population proportion p. The variance of X/n is equal to the variance of X divided by n², or (np(1-p))/n² = (p(1-p))/n. This formula indicates that as the size of the sample increases, the variance decreases.
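These count and proportion formulas can be checked numerically; the values below use n = 20 observations with success probability p = 1/6 purely as an illustration:

```python
n, p = 20, 1 / 6

mean_count = n * p            # E[X] = np
var_count = n * p * (1 - p)   # Var(X) = np(1-p)
mean_prop = p                 # E[X/n] = np/n = p
var_prop = p * (1 - p) / n    # Var(X/n) = np(1-p)/n^2 = p(1-p)/n

print(round(mean_count, 2), round(var_count, 2), round(var_prop, 3))  # 3.33 2.78 0.007
```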
In the example of rolling a six-sided die 20 times, the probability p of rolling a six on any roll is 1/6, and the count X of sixes has a B(20, 1/6) distribution. The mean of this distribution is 20/6 = 3.33, and the variance is 20*1/6*5/6 = 100/36 = 2.78. The mean of the proportion of sixes in the 20 rolls, X/20, is equal to p = 1/6 = 0.167, and the variance of the proportion is equal to (1/6*5/6)/20 = 0.007. | {"url":"http://selfonlinestudy.com/Discussion.aspx?contentId=1010&Keywords=Math_Probability_Binomial_Distribution","timestamp":"2014-04-17T18:42:08Z","content_type":null,"content_length":"24277","record_id":"<urn:uuid:14a25d26-7902-4709-ba4e-ed97651c9aed>","cc-path":"CC-MAIN-2014-15/segments/1398223203422.8/warc/CC-MAIN-20140423032003-00497-ip-10-147-4-33.ec2.internal.warc.gz"}
Hydromagnetic free convection heat transfer of a viscous incompressible electrically conducting heat generating fluid‐flow past a vertical porous plate in the presence of free‐stream oscillations. I
Journal of the Chinese Institute of Engineers 03/1981; 4:61-70. DOI:10.1080/02533839.1981.9676670
ABSTRACT Unsteady hydromagnetic boundary layer flow past a non‐conducting infinite vertical porous plate in presence of a transverse magnetic field is considered, taking into account the effect of
the heat sources on the free convection‐flow and heat transfer of a viscous incompressible and electrically conducting fluid. The flow is subjected to a constant suction through the porous plate, and
the difference between the plate temperature and the free-stream temperature is taken to be greater than, equal to, or less than zero. The free stream oscillates in time about a constant mean value and the magnetic Reynolds
number is taken to be small enough so that the induced magnetic field is negligible. Analytical expressions for the mean velocity, the mean temperature and their related quantities are obtained. The
influence of the various dimensionless parameters entering into the problem is extensively discussed. A comparative study with the hydrodynamic case is also made whenever necessary.
Proceedings of The Royal Society A Mathematical Physical and Engineering Sciences 01/1955; 231(1184):116-130. · 2.38 Impact Factor
ABSTRACT: This paper examines large-scale nonlinear thermal convection in a rotating selfgravitating sphere of Boussinesq fluid containing a uniform distribution of heat sources. Conservative
finite-difference forms of the equations of axisymmetric laminar motion are marched forward in time. The surface is assumed to be stress free and at constant temperature. Numerical solutions are
obtained for Taylor numbers in the range 0 ≤ Λ ≤ 10⁴ and Rayleigh numbers with R_c ≤ R ≲ 10 R_c. For high Prandtl number (P > 5) the solutions are steady and most of them resemble the solutions of the linear stability equations, though other steady solutions are also found. For P ≲ 1, the steady solutions have horizontal wavenumber l = 1 and nearly uniform angular momentum per unit mass, rather than nearly uniform angular velocity. This rotation law seems to be independent of many details of the model
and may hold in the convective core of a rotating star.
Journal of Fluid Mechanics 05/1976; 75(01):49 - 79. · 2.18 Impact Factor
ABSTRACT: This book presents the fundamentals of, and details of application in, thermal science. In this fourth edition the chapters on forced convection, free convection, heat exchange and
thermal radiation have undergone extensive revision. Modern correlations have replaced older ones and new analytical techniques are presented. The end-of-chapter problems and the examples have
been extensively revised and increased. An appendix of the thermophysical properties of engineering materials and fluids is included which contains materials unavailable elsewhere. Application of
the fundamentals to the solution of real engineering problems is stressed throughout.
| {"url":"http://www.researchgate.net/publication/233463216_Hydromagnetic_free_convection_heat_transfer_of_a_viscous_incompressible_electrically_conducting_heat_generating_fluidflow_past_a_vertical_porous_plate_in_the_presence_of_freestream_oscillations._I","timestamp":"2014-04-20T17:42:35Z","content_type":null,"content_length":"137176","record_id":"<urn:uuid:5d0bf846-8644-43bd-9dc3-c13d04f70160>","cc-path":"CC-MAIN-2014-15/segments/1397609538824.34/warc/CC-MAIN-20140416005218-00210-ip-10-147-4-33.ec2.internal.warc.gz"}
Porterdale Algebra 2 Tutor
Find a Porterdale Algebra 2 Tutor
...My education includes a Bachelor's degree in Computer Science, an Associate's degree in Computer Science, and an Associate's degree in Psychology. Discrete math includes the study of set theory, Boolean algebra, matrices, probability, functions, and number theory. These concepts were essential to my understanding of algorithms and computer programming.
21 Subjects: including algebra 2, calculus, Java, algebra 1
...As an engineering graduate from Georgia Tech I have had multiple courses in college level physics. My manufacturing engineering career has given me a broad understanding of the principles of
physics. I have home-schooled two of my boys in math through high school and have tutored several in high school math.
15 Subjects: including algebra 2, chemistry, physics, statistics
...She does not do in-home tutoring.*** Current Availability as of 4/18/14: Tuesday: 4, 5p Wednesday: 7p (this week only) Thursday: 3, 4, 5p Friday: 2, 3, 4p You're probably trying to find a tutor
who stands out from the rest. You're looking for someone who knows what she's teaching. In high school Abigail earned the National Merit scholarship through her exemplary SAT scores.
22 Subjects: including algebra 2, reading, writing, calculus
I am a recent graduate of Southern University and A&M College with a degree in Mechanical Engineering. Besides engineering, I have always wanted to teach, so I've tutored on the side, helping students and peers excel in math courses. Seeing my students succeed is more rewarding than any problem solving I do in my regular job.
9 Subjects: including algebra 2, physics, calculus, precalculus
...Most of my courses have been writing intensive, and several have been for writing credit. Students in my courses must learn to read, comprehend, analyze, and explain difficult texts, often from primary sources. They must demonstrate that understanding through expository writing.
9 Subjects: including algebra 2, English, reading, writing | {"url":"http://www.purplemath.com/Porterdale_algebra_2_tutors.php","timestamp":"2014-04-20T03:56:05Z","content_type":null,"content_length":"24224","record_id":"<urn:uuid:f5e378ee-2d4c-49aa-b31a-74259f050c30>","cc-path":"CC-MAIN-2014-15/segments/1397609537864.21/warc/CC-MAIN-20140416005217-00590-ip-10-147-4-33.ec2.internal.warc.gz"} |
Journal of the Brazilian Chemical Society
Services on Demand
Related links
Print version ISSN 0103-5053
J. Braz. Chem. Soc. vol.17 no.1 São Paulo Jan./Feb. 2006
Thermodynamic study of the solubility of acetaminophen in propylene glycol + water cosolvent mixtures
Jackson A. Jiménez; Fleming Martínez^*
Departamento de Farmacia, Universidad Nacional de Colombia, A.A. 14490, Bogotá D.C., Colombia
Based on the van't Hoff and Gibbs equations, the thermodynamic functions Gibbs energy, enthalpy, and entropy of solution, mixing, and solvation of acetaminophen in propylene glycol + water (PG + W) cosolvent mixtures were evaluated from solubility data determined at several temperatures. The solubility was greatest at 100% PG at all temperatures studied. The solvation of this drug in the mixtures increases as the PG proportion increases, reaching a maximum at 70% of PG. From 10% up to 20% of PG and from 70% up to 100% of PG, entropy driving was found, while from pure water up to 10% of PG and from 20% up to 70% of PG, enthalpy driving was found. These facts can be explained in terms of water-structure loss and a decrease in the energy required for cavity formation in the solvent, for mixtures from 30% up to 70% of PG.
Keywords: acetaminophen, solubility, solution thermodynamics, activity coefficients
Based on the van't Hoff and Gibbs equations, the thermodynamic functions Gibbs energy, enthalpy, and entropy of solution, mixing, and solvation of acetaminophen in propylene glycol + water (PG + W) solvent mixtures were evaluated through solubility measurements at several temperatures. The solubility was highest at 100% PG at all temperatures studied. The solvation of this drug in the mixtures increases with increasing PG proportion, reaching a maximum at 70% PG. From 0% up to 20% of PG and from 70% up to 100% of PG, entropy driving was found, while from pure water up to 10% of PG and from 20% up to 70% of PG, enthalpy driving was found. These facts are explained in terms of water-structure loss and a decrease in the energy required for cavity formation in the solvent, for mixtures from 30% up to 70% of PG.
Acetaminophen is an analgesic and antipyretic drug widely used in modern therapeutics. This drug is especially indicated in the treatment of several minor diseases presented by pediatric patients.^1 In the Colombian market, it is commercially available as tablets, syrups, and concentrates, but it is not available as parenteral products. The latter have recently been requested by physicians and by other care practitioners. Injectable homogeneous liquid formulations supply relatively high doses of drug in small volumes. For this reason, some physicochemical properties such as the solubility and the volumes occupied by the drugs and other components in the solution are very important, because they facilitate the design process of pharmaceutical dosage forms.^2
The solubility behavior of drugs in cosolvent mixtures is of great importance because cosolvent blends are frequently used in purification methods, preformulation studies, and pharmaceutical dosage form design, among other applications.^3 Nowadays several methods to calculate the solubility are available. However, these methods do not fully explain the mechanism of cosolvent action in mixtures, and almost all of them do not consider the effect of temperature. For these reasons, it is important to determine, systematically, the solubility of drugs, in order to obtain complete information about the physicochemical data of pharmaceutical systems. This information greatly facilitates the work of pharmacists associated with the development and research of new products in the pharmaceutical industry.^4 The temperature-solubility dependence allows one to carry out the respective thermodynamic analysis, which, at the same time, permits explanation of the molecular mechanisms involved in the solution processes.^5
The main objective of this study was to evaluate the effect of the cosolvent composition on solubility and solution thermodynamics of acetaminophen in propylene glycol + water cosolvent mixtures. The
analysis was based on van't Hoff method, including the respective contributions by mixing and solvation of the drug on the solution processes. Ethanol and propylene glycol are probably the more
widely used cosolvents in parenteral medications. This investigation expands the concepts developed for this drug in cosolvent systems by Pérez et al.^2 in ethanol + water, propylene glycol + water,
and ethanol + propylene glycol mixtures at 25.0 °C, by Grant et al.^6 in water at several temperatures, by Etman and Naggar^7 in sugar aqueous solutions at 20.0 and 37.0 °C, by Bustamante and
coworkers^8 in ethanol + water, ethanol + ethyl acetate, and dioxane + water mixtures at several temperatures, and Martínez^9 in propylene glycol + water mixtures at 25.0 °C, among others.
Acetaminophen USP (ACP);^10 propylene glycol USP (PG);^10 distilled water (W), conductivity < 2 µS, Laboratory of Pharmaceutics of the Universidad Nacional de Colombia; molecular sieve Merck (numbers
3 and 4); Millipore Corp. Swinnex^®-13 filter units.
Mettler AE 160 digital analytical balance, sensitivity ± 0.1 mg; Wrist Action, Burrel, model 75 mechanical shaker; Magni Whirl Blue M. Electric Company water baths, temperature control ± 0.05 °C; WTB
Binder E28 sterilizer/drying oven; DMA 45 Anton Paar digital density meter, precision ± 0.0001 g cm^-3; Abbe Carlzeiss Jena refractive meter, precision ± 0.0002; micro pipettes Nichiryo^®.
Solubility determinations
An excess of ACP was added to 20 cm^3 of each cosolvent mixture evaluated, in glass flasks. The cosolvent mixtures were prepared by mass, in quantities close to 100.0 g, varying by 10% m/m. Solid-liquid mixtures were stirred in a mechanical shaker for 1 hour. Samples were then allowed to stand in water baths kept at the appropriate temperature ± 0.05 ºC. All samples were maintained for at least 48 hours to reach the equilibrium. This equilibrium time was established in a previous investigation^11 of the dissolution rate and solubility of ACP in EtOH, PG and W at 20.0 °C. After
this time the supernatant solutions were filtered (at isothermal conditions) to ensure that they were free of particulate matter before sampling. Concentrations were determined by measuring
refractive indexes after appropriate dilution and interpolation from previously constructed calibration curves for ACP in each cosolvent mixture.^2 All the solubility experiments were repeated at
least three times. In order to make the equivalence between molarity and mole fraction concentration scales, the density of the saturated solutions was determined with a digital density meter.
Results and Discussion
In Table 1, the molecular structure of ACP and some of their physicochemical properties are summarized.^8,12,13 The melting point and enthalpy of fusion were reported by Bustamante and coworkers^8
while the enthalpy of sublimation was reported by Williams et al.^13 According to Romero et al.^14 this drug acts in solution mainly as a Lewis acid in order to establish hydrogen bonds with
proton-acceptor groups in the solvents (oxygen in -OH groups). Dearden^15 demonstrated that both functional groups of this drug (-NH and -OH) were involved in complex formation with the carbonyl
group of antipyrine. ACP could also act as a proton-acceptor compound by means of its carbonyl and -OH moieties.
Ideal and experimental solubility of ACP
The ideal solubility of a crystalline solute in a liquid solvent can be calculated by equation (1):

ln X[2]^id = -ΔH[fus]·(T[fus] - T) / (R·T[fus]·T) + (ΔC[p]/R)·[(T[fus] - T)/T - ln(T[fus]/T)]    (1)

where X[2]^id is the ideal solubility of the solute as mole fraction, ΔH[fus] is the molar enthalpy of fusion of the pure solute (at the melting point), T[fus] is the absolute melting point, T is the absolute solution temperature, R is the gas constant (8.314 J mol^-1 K^-1), and ΔC[p] is the difference between the molar heat capacity of the crystalline form and the molar heat capacity of the hypothetical supercooled liquid form, both at the solution temperature.^16 Since ΔC[p] cannot be easily determined, one of the following assumptions has to be made: (a) ΔC[p] is negligible and can be considered zero, or (b) ΔC[p] may be approximated to the entropy of fusion, ΔS[fus]. In this investigation the latter consideration is assumed.
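Under assumption (b), equation (1) can be evaluated numerically. The sketch below uses illustrative values for T[fus] and ΔH[fus] (the paper's Table 1 values are not reproduced in this excerpt), so the printed number demonstrates the calculation, not the paper's result:

```python
from math import exp, log

R = 8.314  # gas constant, J mol^-1 K^-1

def ideal_solubility(T, T_fus, dH_fus):
    """Ideal mole-fraction solubility under the dCp = dS_fus assumption."""
    dS_fus = dH_fus / T_fus   # entropy of fusion at the melting point
    dCp = dS_fus              # assumption (b) in the text
    ln_inv_x = (dH_fus / R) * (1 / T - 1 / T_fus) \
        - (dCp / R) * ((T_fus - T) / T) \
        + (dCp / R) * log(T_fus / T)
    return exp(-ln_inv_x)

# Hypothetical, order-of-magnitude inputs for a drug-like solid:
x_id = ideal_solubility(T=303.15, T_fus=443.0, dH_fus=27000.0)
print(f"{x_id:.4f}")
```

Note that with ΔC[p] = ΔS[fus] the first two terms cancel analytically, so the expression reduces to ln(1/X[2]^id) = (ΔS[fus]/R)·ln(T[fus]/T).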
Table 2 summarizes the experimental solubilities of ACP, expressed as molarities and mole fractions, and the ideal solubilities calculated by means of equation (1) from DH[fus], and T[fus] presented
in Table 1. In all cases, the coefficients of variation for solubility were smaller than 2.0%. On the other hand, Figure 1 shows the solubility expressed in mole fraction at all the studied
temperatures. In this cosolvent system a maximum in solubility is not obtained in contrast to that found in other cosolvent systems such as ethanol + water (EtOH + W).^8
The Hildebrand solubility parameter (δ) obtained for this drug in EtOH + W mixtures was 28.3 MPa^1/2 (13.8 cal^1/2 cm^-3/2) at 25.0 °C.^8 This value is outside the range of δ values obtained with PG + W mixtures, i.e., from 30.3 MPa^1/2 (14.8 cal^1/2 cm^-3/2) up to 47.9 MPa^1/2 (23.0 cal^1/2 cm^-3/2). For this reason, the solubility obtained in EtOH + W mixtures is relatively larger in comparison with PG + W mixtures.^9 On the other hand, if molarity is considered, a maximum in solubility is obtained at 90% of PG at all temperatures.
Thermodynamic functions of solution
The making of weighted plots of the logarithm of solubility as a function of reciprocal absolute temperature permits one to obtain the apparent enthalpic change of solution, ΔH°[soln], by means of the van't Hoff expression:

(∂ln X[2] / ∂(1/T))[P] = -ΔH°[soln] / R    (2)

In more recent treatments, some corrections have been introduced to equation (2) in order to reduce the propagation of errors and, therefore, to separate the chemical effects from those due only to the statistical treatments used in compensation plots. For this reason, the mean harmonic temperature (T[hm]) is used in the van't Hoff analysis. T[hm] is calculated as:^17

T[hm] = n / Σ(1/T[i])    (3)

where n is the number of tested temperatures. In our case the T[hm] value obtained was just 303 K. The corrected expression more widely used can be written as follows:^8

(∂ln X[2] / ∂(1/T - 1/T[hm]))[P] = -ΔH°[soln] / R    (4)
As an example, Figure 2 shows the modified van't Hoff plot for ACP in mixtures having 80% and 90% of PG. Linear models with good correlation coefficients were obtained in all mixtures studied. For this reason, equation (4) is useful to estimate the apparent enthalpy of solution.
For non-ideal solutions, the slope obtained in equation (4) does not give directly the heat of solution. Therefore, it is necessary to consider the variation of solute thermodynamic activity (a[2]) with concentration at constant temperature and pressure.^8,18 Then, the enthalpic change of solution is calculated as:

ΔH°[soln] = ΔH°[soln-apparent] · (∂ln a[2] / ∂ln X[2])[T,P]^sat    (5)

in which the second term of the right side is calculated by means of equation (6).^8,19 The term "sat" indicates the saturation. In equation (6) the solute volumetric fraction (f[2]) is required. This property is calculated from the apparent specific volume of the solute (ASV[2]) at saturation, and the mixture composition. ASV[2] is calculated by means of:

ASV[2] = [(m[1] + m[2])/ρ - m[1]·SV[1]] / m[2]    (7)

where m[2] and m[1] are the masses of solute and solvent at saturation, respectively, SV[1] is the specific volume of the solvent, and ρ is the solution density. Although in a more refined treatment the partial specific volume of the solute should be used instead of ASV[2], the procedure proposed here is also adequate.
Since ACP is a solid, the thermodynamic activity at saturation equals the ideal solubility and therefore it follows that γ[2] = X[2]^id / X[2]. The term γ[2] is the activity coefficient of the solute, and it is an indication of the deviation presented by the solute with respect to ideal behavior. Table 3 shows the experimental % (m/v) solubilities, saturated solution densities, cosolvent mixture densities, solute volume fractions, solute activity coefficients, and correction factors at 30.0 °C. This temperature is the nearest to 303 K. In order to calculate the γ[2] and (∂ln a[2]/∂ln X[2])[T,p] values, some methods for estimating propagation of errors were used.^20
From the γ[2] values presented in Table 3, a rough estimate of solute-solvent intermolecular interactions can be made by considering the following expression:

ln γ[2] = (w[11] + w[22] - 2w[12]) · V[2]·f[1]² / (R·T)    (9)

where w[11], w[22] and w[12] represent the solvent-solvent, solute-solute and solvent-solute interaction energies, respectively; V[2] is the molar volume of the supercooled liquid solute, and finally, f[1] is the volume fraction of the solvent. In a first approach the term γ[2] depends almost exclusively on w[11], w[22] and w[12].^21 The w[11] and w[22] terms are unfavorable for solubility, while the w[12] term favors the solution process.
It can be seen in equation (9) that the contribution of w[22] represents the work necessary to move molecules from the solid state to the vapor state, and therefore it is constant in all mixtures. On the other hand, Romero et al.^22 have recently demonstrated, by using calorimetric, spectroscopic, and crystallographic techniques, that the ACP solid phase in excess keeps its original crystalline properties in saturated solutions in several cosolvent mixtures varying in polarity and Lewis acid-base character. Although an increase of 8 °C in the melting point has been reported for the ACP solid phase at equilibrium with saturated solutions having cosolvent proportions greater than 50% (v/v),^8 according to these authors, for practical purposes it may be considered that the contribution of the solid phase toward the overall solution process is constant for this drug in the different saturated solutions studied.
The term w[11] is higher in water (δ = 47.9 MPa^1/2) while it is comparatively smaller in PG (δ = 30.3 MPa^1/2).^23 Pure water and water-rich mixtures have larger γ[2] values, which means high w[11] and low w[12] values. On the other hand, in PG-rich mixtures (with γ[2] values close to 1.0), the w[11] values are relatively low, whereas the w[12] values are higher. According to this fact, the solvation of ACP should be higher in PG-rich mixtures.
The apparent standard Gibbs energy change for the solution process, ΔG°[soln], is generally calculated as ΔG°[soln] = -R·T·ln X[2] (equation 10).^21 Nevertheless, considering the approach proposed by Krug et al.,^13 this property is more appropriately calculated by means of:

ΔG°[soln] = -R·T[hm] · intercept    (11)

in which the intercept used is the one obtained from the ln X[2] vs. (1/T - 1/T[hm]) plots (equation 4). This thermodynamic function is also corrected using the factor (∂ln a[2]/∂ln X[2])[T,P] in order to express it in terms of solute thermodynamic activity instead of solute concentration.

The standard entropic change for the solution process, ΔS°[soln], is obtained from the respective ΔH°[soln] and ΔG°[soln] values by using:

ΔS°[soln] = (ΔH°[soln] - ΔG°[soln]) / T[hm]    (12)
Table 4 summarizes the corrected standard thermodynamic functions for experimental solution process of ACP in all cosolvent mixtures including those functions for the ideal process. In order to
calculate the thermodynamic magnitudes of experimental solution some methods for estimating propagation of errors were used.^20 It was found that the standard Gibbs energy of solution was positive in
all cases; i.e., the solution process apparently is not spontaneous, which may be explained in terms of the concentration scale used (mole fraction), where the reference state is the ideal solution
having the unity as concentration of ACP, that is, the solid pure solute.
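The numerical treatment behind these functions (equations 3, 4, 11 and 12) amounts to a linear regression of ln X[2] against (1/T - 1/T[hm]). A minimal sketch with synthetic solubility values (illustrative only, not the paper's data):

```python
import numpy as np

R = 8.314  # J mol^-1 K^-1

# Synthetic mole-fraction solubilities at five temperatures (illustrative only)
T = np.array([293.15, 298.15, 303.15, 308.15, 313.15])
X2 = np.array([0.0040, 0.0048, 0.0057, 0.0068, 0.0080])

T_hm = len(T) / np.sum(1.0 / T)          # mean harmonic temperature, eq. (3)
x = 1.0 / T - 1.0 / T_hm
slope, intercept = np.polyfit(x, np.log(X2), 1)

dH_soln = -R * slope                      # apparent enthalpy of solution, eq. (4)
dG_soln = -R * T_hm * intercept           # Gibbs energy at T_hm, eq. (11)
dS_soln = (dH_soln - dG_soln) / T_hm      # entropy of solution, eq. (12)
print(dH_soln / 1000, dG_soln / 1000, dS_soln)  # kJ/mol, kJ/mol, J/(mol K)
```

The activity-coefficient correction factor of equation (5) would then multiply these apparent values.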
The enthalpy of solution is positive in all cases; therefore the process is always endothermic. The entropy of solution is also positive in all cases, indicating entropy driving of the overall solution processes. The ΔH°[soln] value in water is in good agreement with those presented by Grant et al.^6 and Bustamante and coworkers,^8 that is, 23.7 and 22.5 kJ mol^-1, respectively.

With the aim of comparing the relative contributions by enthalpy (%z[H]) and by entropy (%z[TS]) toward the solution process, equations (13) and (14) were employed, respectively:

%z[H] = 100·|ΔH°[soln]| / (|ΔH°[soln]| + |T·ΔS°[soln]|)    (13)

%z[TS] = 100·|T·ΔS°[soln]| / (|ΔH°[soln]| + |T·ΔS°[soln]|)    (14)
From Table 4 it follows that in all cases the main contributor to standard free energy of solution process of ACP is the enthalpy (greater than 62% in all cases).
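Equations (13) and (14) reduce to a simple normalization; a small sketch (the input values below are illustrative, not taken from Table 4):

```python
def contributions(dH, T_dS):
    """Relative enthalpy and entropy contributions (equations 13 and 14)."""
    total = abs(dH) + abs(T_dS)
    zH = 100.0 * abs(dH) / total
    zTS = 100.0 * abs(T_dS) / total
    return zH, zTS

# Hypothetical values in kJ mol^-1:
zH, zTS = contributions(dH=23.7, T_dS=10.2)
print(round(zH, 1), round(zTS, 1))  # 69.9 30.1
```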
Thermodynamic functions of mixing
The solution process may be represented by the following hypothetical stages:^21

Solute[(Solid)] → Solute[(Liquid)] → Solute[(Solution)]

where fusion and mixing are the respective partial processes toward the solution process at 303 K. This approximation permits the calculation of the partial thermodynamic contributions to the solution process by means of equations (15) and (16):

ΔH°[soln] = ΔH°[fus]^303 + ΔH°[mix]    (15)

ΔS°[soln] = ΔS°[fus]^303 + ΔS°[mix]    (16)

where the enthalpy of fusion at 303 K was estimated using ΔC[p], obtaining a value of 17.98 kJ mol^-1. This value is coincident with the enthalpic change for the ideal solution. In contrast, the entropy of fusion at 303 K (59.35 J mol^-1 K^-1) is not coincident with the entropy of the ideal solution at this temperature (36.90 J mol^-1 K^-1). Nevertheless, for practical purposes, the thermodynamic functions of the ideal solution process were used to represent the fusion stage. In Table 5 the thermodynamic functions of mixing of ACP are summarized.
By analyzing the partial contributions by the ideal solution (related to the solute fusion process) and the mixing process to the enthalpy and entropy of solution, it is found that the fusion contribution is the same for all mixtures (Table 4). On the other hand, the contribution of the thermodynamic functions relative to the mixing process toward the solution process is variable with composition (Table 5). However, considering the overall solution process (that is, the data from Table 4), the entropy change is the driving force (positive values, Table 4) because the solution process includes the favorable entropy of melting (positive value, Table 4).
The net variation of the mixing functions with the cosolvent composition is presented in Table 5. As it was already said, the energy of cavity formation should be lower as the proportion of PG increases, because the polarity of the medium decreases, which favors solute-solvent interactions. This fact is partially observed in Table 5 where, as proposed by Romero et al.^14 for the initial portion of the solubility curve, the hydrogen bonding of ACP increases with the cosolvent concentration. At large cosolvent proportions this interaction may be saturated, becoming a constant contribution. On the other hand, nonspecific and cavity effects are not saturated and vary with the cosolvent concentration.
For comparative purposes, Figure 3 shows the thermodynamic functions of mixing as a function of the mixture composition. All functions vary nonlinearly with composition, showing maxima for enthalpy and entropy at 20% of PG.
In order to verify the effect of the cosolvent composition on the thermodynamic function driving the solution process, Table 6 summarizes the thermodynamic functions of transfer of ACP from the more polar solvents to the less polar solvents. These new functions were calculated as the differences in the thermodynamic magnitudes of mixing between the less polar and the more polar mixtures. As a calculation example, in the case of the transfer of ACP from pure water to the 10% PG mixture (considering the data of Table 5), the enthalpy, entropy, and Gibbs energy of transfer were obtained as the respective differences between the mixing functions in the two media (Table 6). All other thermodynamic magnitudes of transfer were calculated in the same way.
If the addition of PG to water is considered, the following is observed: from 10% up to 20% of PG the transfer is entropy driven,^8 while from 20% up to 70% of PG it is enthalpy driven (Table 6).
Thermodynamic functions of solvation
In addition to the hypothetical fusion-mixing stages previously described, the solution process may also be represented by the following hypothetical stages:^24
Solute[(Solid)] → Solute[(Vapor)] → Solute[(Solution)]
where sublimation and solvation are, in this case, the respective partial processes toward the solution process. This treatment permits the calculation of the partial thermodynamic contributions to the solution process by means of equations (17) and (18), respectively, while the Gibbs energy of solvation is calculated by means of equation (19):

ΔH°[solv] = ΔH°[soln] - ΔH°[subl]    (17)

ΔS°[solv] = ΔS°[soln] - ΔS°[subl]    (18)

ΔG°[solv] = ΔH°[solv] - T·ΔS°[solv]    (19)

where the enthalpy of sublimation, ΔH°[subl], was taken from Williams et al.,^13 and the ΔH°[soln] values were taken from Table 4. The respective entropy of sublimation was calculated as ΔS°[subl] = (ΔH°[subl] - ΔG°[subl])/T at 303 K, where ΔG°[subl] = -RT ln(p/p[0]) with p = 1.05×10^-6 Pa at 303 K (calculated from values presented by Williams et al.^9) and p[0] = 101325 Pa; from these, ΔG°[subl] and then ΔS°[subl] were obtained at the same temperature. In Table 7 the thermodynamic functions of solvation
are presented, while on the other hand, with the aim to compare the relative contributions by enthalpy (%z[H]) and entropy (%z[TS]) toward the solvation process, two equations analogous to equations
(13) and (14) were employed.
From the values of %z[H] and %z[TS] presented in Table 7 it follows that the main contributing force to standard Gibbs energy of the solvation process of ACP in all the cosolvent mixtures is the
enthalpy (%z[H] are greater than 56% in all cases).
Because not only the main driving force of the solvation process of drug compounds is important, but also the balance between specific and non-specific solute-solvent interactions, parameters which describe the relative ratio of specific and non-specific solute-solvent interactions in terms of enthalpies (%e[H]) and in terms of entropies (%e[S]) were used, according to the definitions introduced by Perlovich and coworkers.^24
Cyclohexane was chosen as an "inert" solvent, which interacts with drug molecules solely by nonspecific interactions (dispersion forces), while the cosolvent mixtures interact with ACP by specific interactions such as hydrogen bonding. Benzene and hexane have also been used as inert solvents in the study of naproxen, although important differences have been found between these two solvents, indicating some effect of the π electrons and the planar geometry of benzene on the non-specific interactions of that drug.^24
Solubility data for ACP in cyclohexane taken from Baena et al.^25 were analyzed according to equations (4), (11), and (12), finding the corresponding values for the apparent thermodynamic functions of solution. The apparent specific volume of ACP in cyclohexane obtained by using densities of the solvent and the saturated solutions was negative (due to the very low solubility and the uncertainty in the density measurements). For this reason, in order to calculate (∂ln a[2]/∂ln X[2])[T,P] for ACP in this solvent, the molar volume of the drug was calculated by means of the Fedors method,^26 obtaining a value of 124.4 cm^3 mol^-1. From this value and the solubility at 303 K, the value obtained for (∂ln a[2]/∂ln X[2])[T,P] using equation (8) was 0.9994. Since this value lies within the uncertainty of the thermodynamic functions of solution, the apparent values were used instead of the corrected ones.
The %e[H] and %e[S] values for ACP solvation are also presented in Table 7. These values indicate that during dissolution of ACP in all the mixtures studied, the specific solute-solvent interactions (mainly hydrogen bonding) do not affect the entropic term of the free energy with respect to the non-specific interactions. With regard to the enthalpic term, in all cases the non-specific solute-solvent interactions predominate.
Enthalpy-entropy compensation of solution
Bustamante et al.^8 have demonstrated some chemical compensation effects for the solubility of several drug compounds in aqueous cosolvent mixtures. This analysis was used in order to identify the mechanism of cosolvent action, by making weighted compensation plots as described in the literature.^27
For the solubility of ACP in EtOH + W, Bustamante and coworkers^8 obtained a nonlinear trend using seven cosolvent compositions, including the pure solvents. Their data were fitted to a parabolic regression model, obtaining a maximum at 20% v/v of EtOH. From 0 up to 20% v/v of EtOH a negative slope was obtained, while above this EtOH proportion a positive slope was obtained. According to these authors, this implies a change from entropy-driven to enthalpy-driven behavior of the solution process.
On the other hand, Figure 4 shows the corresponding corrected plot^8 for the EtOH + W mixtures. Figure 4 shows clearly that this solute-cosolvent system does not present linear enthalpy-entropy compensation.
From all the aspects discussed previously it can be concluded that the solution process of ACP in PG + W mixtures is very complex and highly dependent on cosolvent composition. The solvation of this drug is greater in PG-rich mixtures, especially at 70% of PG. In a similar way to that found for the solubility of this drug in EtOH + W mixtures, the solution process in PG + W mixtures does not follow linear enthalpy-entropy compensation.
We thank the Banco de la República and the DIB-DINAIN of the Universidad Nacional de Colombia (UNC) for the financial support. Additionally, we thank the Department of Pharmacy of UNC for providing the equipment and laboratories used.
1. Roberts II, L. J.; Morrow, J. D. In Goodman & Gilman's The Pharmacological Basis of Therapeutics, 10^th ed.; Hardman, J. G.; Limbird, L. E.; Gilman, A. G., eds.; McGraw-Hill: New York, 2001, ch. 27.
2. Pérez, D. C.; Guevara, C. C.; Cárdenas, C. A.; Pinzón, J. A.; Barbosa, H. J.; Martínez, F.; Rev. Col. Cienc. Quím. Farm. 2003, 32, 116.
3. Rubino, J. T. In Encyclopedia of Pharmaceutical Technology; Swarbrick, J.; Boylan, J. C., eds.; Marcel Dekker: New York, 1988, vol. 3; Yalkowsky, S. H.; Solubility and Solubilization in Aqueous Media; American Chemical Society and Oxford University Press: New York, 1999.
4. Jiménez, F.; Martínez, F.; Rev. Col. Cienc. Quím. Farm. 1995, 24, 19.
5. Garzón, L. C.; Martínez, F.; J. Solut. Chem. 2004, 33, 1379.
6. Grant, D. J. W.; Mehdizadeh, M.; Chow, A. H. L.; Fairbrother, J. E.; Int. J. Pharm. 1984, 18, 25.
7. Etman, M. A.; Naggar, V. F.; Int. J. Pharm. 1990, 58, 177.
8. Bustamante, P.; Romero, S.; Reillo, A.; Pharm. Sci. 1995, 1, 505; Bustamante, P.; Romero, S.; Peña, A.; Escalera, B.; Reillo, A.; J. Pharm. Sci. 1998, 87, 1590.
9. Martínez, F.; Rev. Acad. Colomb. Cienc. 2005, 29, 429.
10. US Pharmacopeia, 23^rd ed.; United States Pharmacopeial Convention: Rockville, MD, 1994.
11. Coronado, Y. P.; Fonseca, J. C.; Luengas, P. E.; Barbosa, H. J.; Martínez, F.; Rev. Col. Cienc. Quím. Farm. 1999, 28, 59.
12. Budavari, S.; O'Neil, M. J.; Smith, A.; Heckelman, P. E.; Obenchain Jr., J. R.; Gallipeau, J. A. R.; D'Arecea, M. A.; The Merck Index: An Encyclopedia of Chemicals, Drugs, and Biologicals, 13^th ed.; Merck & Co., Inc.: Whitehouse Station, NJ, 2001.
13. http://www.euroestar-science.org/conferences/abstrsph7/wilson.pdf, accessed in October 2004.
14. Romero, S.; Reillo, A.; Escalera, B.; Bustamante, P.; Chem. Pharm. Bull. 1996, 44, 1061.
15. Dearden, J. C.; J. Pharm. Sci. 1972, 61, 1661.
16. Hildebrand, J. H.; Prausnitz, J. M.; Scott, R. L.; Regular and Related Solutions; Van Nostrand Reinhold: New York, 1970.
17. Krug, R. R.; Hunter, W. G.; Grieger, R. A.; J. Phys. Chem. 1976, 80, 2341.
18. Hollenbeck, R. G.; J. Pharm. Sci. 1980, 69, 1241.
19. Manzo, R. H.; Ahumada, A. A.; J. Pharm. Sci. 1990, 79, 1109.
20. Bevington, P. R.; Data Reduction and Error Analysis for the Physical Sciences; McGraw-Hill Book Co.: New York, 1969; Schoemaker, D. P.; Garland, G. W.; Experimentos de Fisicoquímica; Unión Tipográfica Editorial Hispano Americana: México, 1968.
21. Martínez, F.; Gómez, A.; J. Solut. Chem. 2001, 30, 909.
22. Romero, S.; Bustamante, P.; Escalera, B.; Cirri, M.; Mura, P.; J. Therm. Anal. Calorim. 2004, 77, 541.
23. Martin, A.; Bustamante, P.; Chun, A. H. C.; Physical Pharmacy: Physical Chemical Principles in the Pharmaceutical Sciences, 4^th ed.; Lea & Febiger: Philadelphia, 1993.
24. Perlovich, G. L.; Kurkov, S. V.; Bauer-Brandl, A.; Eur. J. Pharm. Sci. 2003, 19, 423; Perlovich, G. L.; Kurkov, S. V.; Kinchin, A. N.; Bauer-Brandl, A.; Eur. J. Pharm. Biopharm. 2004, 57, 411.
25. Baena, Y.; Pinzón, J. A.; Barbosa, H.; Martínez, F.; Phys. Chem. Liq. 2004, 42, 603.
26. Fedors, R. F.; Polym. Eng. Sci. 1974, 14, 147.
27. Leffler, J. E.; Grunwald, E.; Rates and Equilibria of Organic Reactions; Wiley: New York, 1963; Tomlinson, E.; Int. J. Pharm. 1983, 13, 115.
Received: July 13, 2005
Published on the web: December 15, 2005
* e-mail: fmartinezr@unal.edu.co | {"url":"http://www.scielo.br/scielo.php?script=sci_arttext&pid=S0103-50532006000100018&lng=en&nrm=iso","timestamp":"2014-04-20T10:16:00Z","content_type":null,"content_length":"78869","record_id":"<urn:uuid:4edd7fd8-3d37-40d2-bb70-5e6a6b7b1fb0>","cc-path":"CC-MAIN-2014-15/segments/1397609538110.1/warc/CC-MAIN-20140416005218-00033-ip-10-147-4-33.ec2.internal.warc.gz"} |
Summary: A Tale of Two Time Scales: Determining Integrated
Volatility With Noisy High-Frequency Data
Lan ZHANG, Per A. MYKLAND, and Yacine AÏT-SAHALIA
It is a common practice in finance to estimate volatility from the sum of frequently sampled squared returns. However, market microstructure
poses challenges to this estimation approach, as evidenced by recent empirical studies in finance. The present work attempts to lay out
theoretical grounds that reconcile continuous-time modeling and discrete-time samples. We propose an estimation approach that takes
advantage of the rich sources in tick-by-tick data while preserving the continuous-time assumption on the underlying returns. Under our
framework, it becomes clear why and where the "usual" volatility estimator fails when the returns are sampled at the highest frequencies.
If the noise is asymptotically small, our work provides a way of finding the optimal sampling frequency. A better approach, the "two-scales
estimator," works for any size of the noise.
KEY WORDS: Bias-correction; Market microstructure; Martingale; Measurement error; Realized volatility; Subsampling.
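The two-scales construction can be sketched numerically: average the realized variance over K sparse subgrids, then subtract a noise-bias correction built from the full-grid realized variance. This is a simplified illustration of the idea, not the authors' code, and the simulation parameters below are made up:

```python
import numpy as np

# (i) Realized variance on the full grid is dominated by microstructure noise.
# (ii) Average realized variance over K sparse subgrids (the "slow" scale).
# (iii) Subtract a bias estimate built from the full-grid RV (the "fast" scale).

def realized_var(y):
    return float(np.sum(np.diff(y) ** 2))

def two_scales_rv(y, K):
    n = len(y) - 1
    rv_sparse = np.mean([realized_var(y[k::K]) for k in range(K)])
    nbar = (n - K + 1) / K                 # average subgrid sample size
    return rv_sparse - (nbar / n) * realized_var(y)

rng = np.random.default_rng(0)
n, sigma2 = 23400, 1e-4                    # one "day" of seconds; integrated variance
latent = np.cumsum(rng.normal(0.0, np.sqrt(sigma2 / n), n + 1))
noisy = latent + rng.normal(0.0, 5e-4, n + 1)   # additive microstructure noise
print(realized_var(noisy))                 # naive estimate, swamped by noise
print(two_scales_rv(noisy, K=300))         # bias-corrected estimate
```

In this simulation the naive sum of squared returns is two orders of magnitude above the true integrated variance, while the two-scales estimate recovers roughly the right magnitude.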
1.1 High-Frequency Financial Data With Noise
In the analysis of high-frequency financial data, a ma-
jor problem concerns the nonparametric determination of the
volatility of an asset return process. A common practice is
to estimate volatility from the sum of the frequently sampled
squared returns. Although this approach is justified under the
assumption of a continuous stochastic model in an idealized
world, it runs into the challenge from market microstructure | {"url":"http://www.osti.gov/eprints/topicpages/documents/record/454/1830887.html","timestamp":"2014-04-20T21:55:23Z","content_type":null,"content_length":"8902","record_id":"<urn:uuid:db52689b-58f7-4077-b4a7-f79522e402c4>","cc-path":"CC-MAIN-2014-15/segments/1397609539230.18/warc/CC-MAIN-20140416005219-00401-ip-10-147-4-33.ec2.internal.warc.gz"} |
Skewed or Noncentral T Distribution
Hi, it is quite impossible to answer your question satisfactorily, as there are several proposed modifications of the standard t distribution. See, e.g.:
Jones, M. C. and Faddy, M. J. (2003), A skew extension of the t-distribution, with applications. Journal of the Royal Statistical Society: Series B (Statistical Methodology), 65: 159–174. doi: | {"url":"http://www.physicsforums.com/showthread.php?s=67515d95803a1ccfef99d51a485e5c89&p=3837825","timestamp":"2014-04-20T03:15:29Z","content_type":null,"content_length":"22458","record_id":"<urn:uuid:519cba54-86a8-495a-a2d4-2f632fc1f6db>","cc-path":"CC-MAIN-2014-15/segments/1397609537864.21/warc/CC-MAIN-20140416005217-00220-ip-10-147-4-33.ec2.internal.warc.gz"} |
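For the noncentral t specifically (as opposed to the Jones-Faddy skew t cited above), one standard construction is T = (Z + delta)/sqrt(V/nu) with Z standard normal and V an independent chi-squared variable with nu degrees of freedom; for nu > 1 its mean is delta * sqrt(nu/2) * Gamma((nu-1)/2)/Gamma(nu/2). A quick numerical check of that construction (illustrative only):

```python
import math
import numpy as np

# Sample the noncentral t via its definition and compare the sample mean
# with the closed-form mean (valid for nu > 1).

def sample_noncentral_t(nu, delta, size, rng):
    z = rng.normal(0.0, 1.0, size)
    v = rng.chisquare(nu, size)
    return (z + delta) / np.sqrt(v / nu)

def noncentral_t_mean(nu, delta):
    return delta * math.sqrt(nu / 2) * math.gamma((nu - 1) / 2) / math.gamma(nu / 2)

rng = np.random.default_rng(1)
t = sample_noncentral_t(nu=10, delta=1.5, size=200_000, rng=rng)
print(float(np.mean(t)), noncentral_t_mean(10, 1.5))
```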
Rays, lines and line segment lesson plans 4th grade
Ray; intersecting lines 3; grade level; $5 or Rays, lines and line segment lesson plans 4th grade store free. Objectives: students identify lines, line segments, objectives: students will. Graphing
lesson in chapter 17. With graders do art projects my 4th graders. School; high school printable 4th 13442 9778 first lesson concept. Day 3rd grade; fourth grade: fourth grade. Measuring line of
Rays, lines and line segment lesson plans 4th grade segments in plan; highs and perpendicular4 2011� ��. Relationships of Rays, lines and line segment lesson plans 4th grade store; free math
worksheets geometry learning plan. Banned fuel can 669 11942 line. Measure and construct line segments, kindergarten; 1st grade; 5th grade. My 4th graders do we describe points, line segments
chapter. Every lesson plan, and construct line segments definition 9778 first grade. Plans: third grade; middle school unit lesson plan someone show me rays. Home measuring line segments, fourth
grade 4 measurements lesson. Standard notation for pain 13442 9778 first lesson from ␦ lines series. Their math knowledge about lines, 6th grade maps 8th. Students to fourth grade: basic
characteristics of banned fuel can. Someone show me interactive lesson objectives: students to identify. Among points, rays,and angles ␓ free. Segment ray identify and relationships among points
line. Sciencegeometric shapes consisting of points. 4th objectives: students will for in 11942 line segments recognize. School notation for grade math lesson plans lesson. Publisher created is $5 or
less store; free math chapter. Covers points, do we will help your child make real. End lesson objectives: students to infinity in math; english sciencegeometric. Fuel can either intersect cross each
grade math lesson parallel. Their math knowledge about lines, rays kindergarten 1st. 3; grade com: rays line. Sciencegeometric shapes lesson plans to someone show me. Questioning strategies out how
do we will help your child make real. Has one direction identify, draw label. 1st grade acceleration 3rd 5th grade. First lesson objectives: students to identify lines what. Grades 6-12; math;
english; sciencegeometric shapes. 17, 364-365 measure and anglesgeometry printable 4th grade end. And anglesgeometry printable 4th grade page i had my 4th. Publisher created plan, and anglesgeometry
printable 4th grade 3 schools curriculum. 3rd grade; 3rd grade compare. Back to grade 3rd geometrya. Each grade lesson plans; publisher created will be able how do art. Teacher␙s lesson able starting
with an individualized learning plan second. Pictures of Rays, lines and line segment lesson plans 4th grade ray, line someone. Fits will find a Rays, lines and line segment lesson plans 4th grade
everyday mathematics. Knowledge about lines, rays k. Th grade-lines, segments, points, rays today to infinity in learning plan. 3rd grade; 3rd find a geometrya ray can 669 11942 line. Draw points,
segments, dogs, doses 8824 12216 4th grade. Curriculum unit plan from homeschoolmath concurrent lines ray; intersecting lines. Covers points, rays,and angles for pain 13442 9778 first grade
acceleration 3rd. Someone show me investigating line someone show me graders do we describe. 2nd, 3rd, 4th, homeschoolermath activities. Objectives: students to recognize and createdthis is a ray
intersecting. Your child make real progress with every lesson plans: lesson plans. Fits will Rays, lines and line segment lesson plans 4th grade progress with anglesgeometry. Its end math; english
sciencegeometric. Raysthis lesson oakland schools curriculum unit. Knowledge about lines, students. Lesson; graphing lesson teach 3rd grade 2nd. Teacher␙s lesson objectives: students to recognize and
perpendicular lines draw, label. Find lesson can 669 11942 line someone show. 364-365 measure and rays it has. Investigating line segments, and 6-12; math english. Plans to identify and consisting of
line segment ray game lesson. Segments, less store; free geometry lesson high. Following fourth grade, 4th graders do we. | {"url":"http://selfsferex.wb4.com/","timestamp":"2014-04-17T21:29:35Z","content_type":null,"content_length":"8886","record_id":"<urn:uuid:7e273b40-185e-4723-9277-537464d0c1c7>","cc-path":"CC-MAIN-2014-15/segments/1397609532128.44/warc/CC-MAIN-20140416005212-00457-ip-10-147-4-33.ec2.internal.warc.gz"} |
Posts from September 2009 on A Mind for Madness
We now come to the main point of all these Morse theory posts. We want to somehow figure out what a closed manifold looks like based on a Morse function that it admits (who knows how long I'll develop this theory; maybe we'll even get to how Smale proved the Poincare Conjecture in dimensions greater than or equal to 5).
Suppose $M$ is closed and $f:M\to\mathbb{R}$ a Morse function. We’ll use the convenient notation $M_t=\{p\in M : f(p)\leq t\}$. So again, with the height analogy, as t increases, we will be looking
at the entire manifold up to that height. Since M is compact, there is some finite interval $[a,b]$ such that $M_a=\emptyset$ and $M_b=M$.
Note that with essentially no modification, we have already proved the Theorem that if $[c,d]$ contains no critical values, then $M_c\cong M_d$. So really, the point is to now figure out what happens
as we pass through the critical values.
First off, there are only finitely many critical points, and we can assume that each of these has distinct critical values by raising and lowering critical values. So if $p_0, \ldots, p_n$ are the
critical points and $c_k=f(p_k)$, we can order the indices so that $c_0 < c_1 < \cdots < c_n$.
To be explicit, $c_0$ is the min, so $M_t=\emptyset$ for $t < c_0$, and $c_n$ is the max, so $M_t=M$ for $t > c_n$.
These two critical points would be a nice place to start our examination. By the Morse lemma and the fact that a min has index 0, we know that there exists a neighborhood of $p_0$ on which $f=x_1^2+\
cdots + x_m^2+c_0$ (Alright, I’m sorry about that, but I just realized I have n critical points, so the dimension of my manifold is now m).
More explicitly, there is some $\varepsilon>0$ such that $M_{c_0+\varepsilon}=\{(x_1, \ldots , x_m) : x_1^2+\cdots + x_m^2\leq \varepsilon\}\cong B^m$. So if we are thinking of height (of a 2-dimensional manifold), we'll want to visualize this as a "bowl": the bottom of the bowl is the min, it slopes upward along a sphere, and the boundary circle sits at height $c_0+\varepsilon$.
So note that the only thing we used about this critical point is that it had index 0. This shape is called a (m-dimensional) 0-handle.
The reverse happens at our max. We have $M_{c_n-\varepsilon}=\{(x_1, \ldots , x_m) : x_1^2+\cdots +x_m^2\geq \varepsilon\}$, since the critical point has index m. This is an $m$-handle and thinking
in 2-d height, it is a downward facing bowl.
Again, there is nothing special about being the absolute max, any index m critical point will locally be an $m$-handle.
Index k critical points where $k\neq 0,m$ are more complicated so I'll leave those for next time.
Now we have a nice overview of how this will work. We just need to figure out what a $k$-handle looks like, then as t increases through a critical value with index k, $M_t$ will “attach a k-handle”.
When we are not near a critical value, the $M_t$ will not change diffeomorphism-type. We just need to make this a little more precise next time (or maybe even the time after).
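The picture above can be checked numerically on the classic example of a torus stood on end (not worked in this excerpt): the height function has four critical points, of indices 0, 1, 1, 2, and the index at each one is the number of negative eigenvalues of the Hessian. A sketch, with the chart and parametrization chosen here purely for illustration:

```python
import numpy as np

# A vertical torus in the (u, v) chart, with "height" given by one ambient
# coordinate: f(u, v) = (R + r*cos u) * cos v.  Its four critical points sit
# at (u, v) in {0, pi} x {0, pi}; we read off each index from the Hessian.

R, r = 2.0, 1.0

def f(u, v):
    return (R + r * np.cos(u)) * np.cos(v)

def hessian(u, v, h=1e-5):
    # symmetric finite-difference Hessian in the (u, v) chart
    H = np.zeros((2, 2))
    p = np.array([u, v])
    E = h * np.eye(2)
    for i in range(2):
        for j in range(2):
            H[i, j] = (f(*(p + E[i] + E[j])) - f(*(p + E[i] - E[j]))
                       - f(*(p - E[i] + E[j])) + f(*(p - E[i] - E[j]))) / (4 * h * h)
    return H

indices = {}
for u in (0.0, np.pi):
    for v in (0.0, np.pi):
        eigs = np.linalg.eigvalsh(hessian(u, v))
        indices[round(float(f(u, v)), 3)] = int((eigs < 0).sum())
print(indices)  # index of each critical point, keyed by its height
```

The output pairs the four critical heights -3, -1, 1, 3 with indices 0, 1, 1, 2: a 0-handle at the min, an m-handle at the max, and two saddles in between.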
by hilbertthm90 1 Comment
Altering the Critical Points
I officially have a new favorite search for which someone found this blog: How to write a Japanese satire.
Let’s introduce a new term. Two Morse functions are considered equivalent if they have the same critical points and same index at each critical point.
The hope here is that two equivalent Morse functions will give the same topological data about our manifold, and so we want to develop techniques of altering our Morse function to something extremely
nice to work with, but having it be equivalent to the origin one.
Our first excursion into this technique is the following: If M is a compact manifold and $f$ is a Morse function on M, then we can find an equivalent Morse function $g$ such that all the critical
values are distinct.
If we’re going back to the height intuition, this is the technique that corresponds to “raising” or “lowering” critical points. So if you have two strange things happening at the same height (two
mountain peaks that have the same height), the idea is sort of that you can slightly move the manifold around so that one is now higher than the other. Of course, we won’t actually move the manifold
in any real sense, we’re going to construct the function.
This is going to be really nice, because it says that we can always get a Morse function in which only a single “change” can happen at any given height.
We’ll do this by first proving a Lemma which does all the work for us. Let $f$ be our Morse function, and $p$ a critical point. Then there is some $\varepsilon>0$ such that for all $c\in (-\
varepsilon, \varepsilon)$ there is an equivalent Morse function $h$ that has the same critical values as $f$, except for $h(p)=f(p)+c$.
The arguments here are essentially the same as in previous posts, so I’ll be a little looser and only outline the proof.
Since the critical points are isolated we can take a small coordinate chart centered at $p$ that contains no other critical points. Now let $\psi$ be a bump function that is 1 on some small
neighborhood of $p$ and dies to zero before getting to the edge of the chart.
Then we define $h_c=f+c\psi$. We definitely have that all the critical points of $f$ are still critical points of $h_c$ and since on a neighborhood of any of those points the functions either agree
or differ by adding a constant, they have the same index. Also, $h_c(p)=f(p)+c$, so we have constructed our desired function as long as we don’t have any extra critical points.
But in the same way as before, $\Big|Dh_c\Big|=\Big|Df+cD\psi\Big|\geq \delta-|c|a>0$ for all $|c|<\varepsilon$, where $\varepsilon=\delta/a$: on the compact set where $\psi$ is decaying, $Df$ has a positive min $\delta$ and $D\psi$ has a finite max $a$. Thus we do not gain any critical points in that set and we are done.
To get to the whole theorem all we need to do is note that there are only finitely many critical points (since compact). So if any of the values are shared, we can use the lemma to give an equivalent
Morse function with shifted critical value, where we shift by a small enough value that it can't make it to any other critical value. We only have to apply this a finite number of times.
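The lemma's construction can be illustrated numerically in one variable: add a small multiple of a bump function supported near one critical point, and check that its critical value moves while the number of critical points is unchanged. A sketch (the particular function, bump, and constants are my own choices, not from the post):

```python
import numpy as np

# f = cos(2x) on (0.3, 2*pi - 0.3) has three critical points, two of which
# share the value -1 (at pi/2 and 3*pi/2).  Adding c * (bump centered at
# pi/2) lifts one of those values without creating or destroying critical
# points, exactly as in the lemma.

def bump(x, center=np.pi / 2, width=0.5):
    t = (x - center) / width
    out = np.zeros_like(x)
    inside = np.abs(t) < 1
    out[inside] = np.exp(-1.0 / (1.0 - t[inside] ** 2))
    return out

def crit_points(g, xs):
    s = np.sign(np.gradient(g(xs), xs))
    return xs[:-1][s[:-1] * s[1:] < 0]   # sign changes of the derivative

xs = np.linspace(0.3, 2 * np.pi - 0.3, 20000)
f = lambda x: np.cos(2 * x)
c = 0.05
h = lambda x: f(x) + c * bump(x)

cf, ch = crit_points(f, xs), crit_points(h, xs)
print(len(cf), len(ch))      # same number of critical points
print(np.sort(h(ch)))        # the two values at -1 are now distinct
```

The shifted critical value differs from its old neighbor by roughly c * e^(-1), the bump's height at its center.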
Gradient-Like Vector Fields Exist
Now we want to start building some technique that will allow us to figure out what our closed manifold looks like based on the Morse functions it admits.
We’ll call a vector field $X$, a gradient-like vector field for f, if $X\cdot f>0$ away from critical points, and if $p\in M$ is a critical point of index $\lambda$, then there is a coordinate
neighborhood about $p$ such that f has the standard form as in the Morse lemma, and $X=-2x_1\frac{\partial}{\partial x_1}-\cdots - 2x_{\lambda}\frac{\partial}{\partial x_\lambda}+2x_{\lambda+1}\frac
{\partial}{\partial x_{\lambda+1}}+\cdots + 2x_m\frac{\partial}{\partial x_m}$ (i.e. it is the gradient in this neighborhood).
Intuitively, if we think back to our example, we visualize Morse functions as "height functions". So we are attempting to construct, in some sense, an everywhere "upward"-pointing vector field. If we think of the manifold flowing along it, the only places where the flow is allowed to get "stuck" are the critical points of $f$.
The theorem is that there always exists a gradient-like vector field for a Morse function on a compact manifold.
Proof: As before, let $\{U_i\}_1^k$ be a finite subcover of coordinate charts, and $\{K_i\}_1^k$ be a compact refinement. Since the critical points are isolated (immediate corollary to the Morse
lemma), there can only be finitely many since our manifold is compact. So we can assume that each critical point has a neighborhood small enough so that it is entirely contained in exactly one of the
$U_i$, and that the $U_i$ were chosen so that $f$ has standard form in those coordinates.
Let $\psi_i: U_i\to \mathbb{R}$ be a bump function for $K_i$ supported in $U_i$. Then we get a smooth function on the entire manifold by letting $\psi_i\equiv 0$ outside of $U_i$.
Let $X_i$ be the gradient of $f$ on $U_i$. Let $\displaystyle X=\sum_{j=1}^k \psi_jX_j$. The claim is that this is our gradient-like vector field for $f$.
Let’s check $X\cdot f$ at non-critical points. If $x\in M$ is not a critical point, and $x\in U_i$, then $(\psi_i X_i\cdot f)(x)>0$ since $X_i$ is the gradient and $\psi_i(x)>0$. All other terms of
the sum are 0 since $\psi_i(x)=0$ for any $i$ such that $x\notin U_i$. Thus $(X\cdot f)(x)>0$.
The other condition we have set up to work since each critical point has a neighborhood that is contained in precisely one of the $U_i$, thus on that neighborhood $f$ is in standard form, and $X=\
psi_iX_i$ which is of the correct form. Thus $X$ is gradient-like for $f$.
As a preview of things to come, I’ll prove our first result about what our manifold looks like using Morse functions. This is often called the Regular Interval Theorem.
Suppose that $f$ has no critical value in $[a,b]$, then $M_{[a,b]}=\{p\in M : a\leq f(p)\leq b\}$ is diffeomorphic to $f^{-1}(a)\times [0,1]$.
Let $X$ be gradient-like for $f$. Define $\displaystyle Y=\frac{1}{X\cdot f}X$ which is smooth off of the critical points of $f$, but since $M_{[a,b]}$ contains no critical points it is a smooth
vector field there (in fact, on an open set containing $M_{[a,b]}$).
Let $\theta^p(t)$ be an integral curve for $Y$ starting at $p\in f^{-1}(a)$. But now $\displaystyle \frac{d}{dt}\Big|_{t=t_0}f(\theta^p(t))=\frac{d\theta^p}{dt}(t_0)(f)$
$\displaystyle = Y_{\theta^p(t_0)}(f)$
$\displaystyle = \frac{1}{X\cdot f}X\cdot f=1$.
Thus, the integral curve continues along at constant speed 1 for the entire time it is in $M_{[a,b]}$. But it starts at $f=a$ at time 0, so it reaches $f=b$ at time $t=b-a$.
Thus $h: f^{-1}(a)\times [0,b-a]\to M_{[a,b]}$ by $(p,t)\mapsto \theta^p(t)$ is a diffeomorphism. But rescaling gives the diffeo to $f^{-1}(a)\times [0,1]$.
This basically says that between critical points of a Morse function, we must have the manifold looking like cylinder built off of a single slice of the function (if we’re thinking in terms of
height, we can pick any height, and at anywhere between the two nearest critical heights, all the level sets will look the same).
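The unit-speed claim in the proof can be checked numerically: for $f(x,y)=x^2+y^2$ with gradient-like field $X=\mathrm{grad}\, f$, flowing along $Y=X/(X\cdot f)$ from the level $a=1$ to the level $b=2$ should take time $b-a=1$. A sketch using simple Euler steps (this example is mine, not from the post):

```python
import numpy as np

# For f(x,y) = x^2 + y^2, X = grad f is gradient-like and X . f = |grad f|^2,
# so the flow of Y = X / (X . f) raises the value of f at exactly unit speed.

def grad_f(p):
    return 2.0 * p

def f(p):
    return float(p @ p)

p = np.array([1.0, 0.0])          # start on the level set f = 1
dt, t = 1e-4, 0.0
while f(p) < 2.0:                 # Euler steps along Y until we hit f = 2
    X = grad_f(p)
    p = p + dt * X / (X @ X)      # Y = X / (X . f)
    t += dt
print(round(t, 3))                # elapsed time, approximately b - a = 1
```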
Morse Functions Exist
The astute reader at this point may be getting a little anxious: despite the fact that I found Morse functions in two easy low-dimensional cases, my eventual goal of saying very general things about manifolds by using Morse functions is going to rely on the fact that they exist.
If these thing are really as powerful as I have been making them out to be, then it would seem that there probably isn’t an abundance of them. But surprisingly, it turns out that basically every
smooth function is Morse.
Let $M^n$ be a closed manifold, and $g:M\to \mathbb{R}$ be a smooth function. Then there is a Morse function $f:M\to\mathbb{R}$ arbitrarily close to $g$.
Recall Sard’s Theorem (I’m assuming some familiarity with it, which is probably not a good idea): The set of critical values of a smooth map $f: U\to \mathbb{R}^n$ has measure zero in $\mathbb{R}^n$.
Now we’ll first need a lemma. Let $U\subset \mathbb{R}^n$ be an open set and $f:U\to\mathbb{R}$ a smooth function. Then there are real numbers $\{a_k\}$ such that $f(x_1, \ldots, x_n)-(a_1x_1+a_2x_2+
\cdots + a_nx_n)$ is a Morse function on $U$. We can also choose $\{a_k\}$ to be arbitrarily small in absolute value.
Let $p\in U$ be a critical point of $f$. Define $h=Jac(f)^T$ (a smooth map $h:U\to\mathbb{R}^n$). Then $Jac(h)\Big|_p$ is the Hessian $H_f(p)$. Thus, p is a critical point of $h$ iff $det(H_f(p))=0$.
By Sard’s Theorem, we can choose $a=(a_1, \ldots , a_n)\in\mathbb{R}^n$ where each $a_k$ have arbitrarily small absolute value such that $a$ is not a critical value of $h$.
The claim is that $\overline{f}(x_1, \ldots , x_n)=f(x_1, \ldots, x_n)-(a_1x_1+\cdots + a_nx_n)$ is a Morse function on U.
Well, if $p$ is a critical point of $\overline{f}$, then since $\frac{\partial \overline{f}}{\partial x_i}\Big|_p=\frac{\partial f}{\partial x_i}\Big|_p - a_i=0$, by the definition of $h$ we get $h(p)=a$.
But we chose $a$ to not be a critical value of $h$. Thus, $p$ is not a critical point of $h$. So as noted, $det(H_f(p))\neq 0$. But $H_f(p)=H_{\overline{f}}(p)$, so $p$ is a non-degenerate critical point.
Since p was an arbitrary critical point, all critical points are non-degenerate and hence $\overline{f}$ is Morse, completing the proof of the Lemma.
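A one-variable illustration of this lemma (my own example, not from the post): $f(x)=x^3$ has a degenerate critical point at $0$, but for a typical small $a$ the perturbation $f(x)-ax$ has only non-degenerate critical points.

```python
import numpy as np

# f(x) = x^3 has f'(0) = f''(0) = 0: a degenerate critical point.
# After subtracting a*x (here a = 0.01), the critical points move to
# x = +/- sqrt(a/3), where the second derivative is nonzero.

a = 0.01
d1 = lambda x: 3 * x ** 2 - a        # first derivative of x^3 - a*x
d2 = lambda x: 6 * x                 # second derivative (the "Hessian")

crit = np.array([-np.sqrt(a / 3), np.sqrt(a / 3)])
print(d1(crit))                      # both (numerically) zero: critical points
print(d2(crit))                      # both nonzero: non-degenerate
```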
We also need another lemma. Let $K\subset M$ be a compact subset. If $g:M\to\mathbb{R}$ has no degenerate critical points in $K$, then we can choose $\varepsilon >0$ small enough so that any $\varepsilon$-close $C^2$ approximation of $g$ also has no degenerate critical points in $K$.
Since our manifold is closed, it is compact. So we can choose a finite subcover of coordinate charts, and compactly refine it (I’ll do this construction if someone asks in the comments), so that $\
{U_i\}_1^m$ cover $M$ and there are compact sets $K_i\subset U_i$ such that $\cup K_i=M$.
But with this, we can look at any of the $U_k$, and in these coordinates, $g$ has no degenerate critical points in $K\cap K_k$ (alright, that was probably a poor choice of notation) iff $\
displaystyle\Big|\frac{\partial g}{\partial x_1}\Big|+\cdots + \Big|\frac{\partial g}{\partial x_n}\Big|+\Big| det(H_g)\Big|>0$ for every point in $K\cap K_k$.
But for a small enough $\varepsilon$ we can definitely still make that inequality hold for any $C^2$ approximation. Thus we have proved the lemma.
Now let’s do the actual existence proof. Take the $U_i, K_i$ as before. We will inductively build our $C^2$ approximations on $C_l=K_1\cup \cdots \cup K_l$. Our base step is to build $f_0$ on $C_0=\
emptyset$, so we’re done.
For our inductive hypothesis, suppose we have $f_{l-1}:M\to\mathbb{R}$ having no degenerate critical points in $C_{l-1}$.
Let’s work with the coordinate neighborhood $U_l$ with coordinates $(x_i)$. By the first lemma, there are arbitrarily small numbers $\{a_i\}$ so that $f_{l-1}(x_1, \ldots , x_n)-(a_1x_1+\cdots +
a_nx_n)$ is Morse on $U_l$. But note, we only have a definition on $U_l$ and we need one everywhere.
Let $\psi$ be a bump function that is 1 on $K_l$ and supported in $V$, where $K_l\subset V\subset U_l$.
Define $f_l=\begin{cases} f_{l-1}-\psi\cdot (a_1x_1+\cdots + a_nx_n) & \text{in } U_l \\ f_{l-1} & \text{outside } V\end{cases}$.
This gives us a nice well-defined function on all of $M$ (just need to check the overlaps). Also $f_l$ is our first lemma function on $K_l$, so it is Morse on $K_l$ and hence has no degenerate
critical points there.
Since $0\leq \psi \leq 1$ (and we’re on a compact set), we can make $\{a_i\}$ small enough so that $f_l$ is an arbitrarily close $C^2$ approximation of $f_{l-1}$ (I won’t do this since it is fairly
long and tedious, but quite straightforward for the reasons I gave).
But now by the second lemma, since $f_{l-1}$ has no degenerate critical points in $C_{l-1}$, we have that $f_l$ has no degenerate critical points in $C_{l-1}$ either. We already checked on $K_l$, and
thus there are no deg. critical points on $C_{l-1}\cup K_l=C_l$.
Thus inductively we can get a Morse function on all of $M$ that is $C^2$-close to our original smooth function.
The Morse Lemma
Today we prove what is known as The Morse Lemma. It tells us exactly what our Morse function looks like near its critical points.
Let $p\in M$ be a non-degenerate critical point of $f:M\to \mathbb{R}$. Then we can choose coordinates about p, $(x_i)$, such that in these coordinates $f=-x_1^2-x_2^2-\cdots -x_\lambda^2+x_{\
lambda+1}^2+\cdots +x_n^2+f(p)$. Moreover, $\lambda$ is the index of the critical point. (Note that $0\mapsto f(p)$).
Proof: Choose local coordinates, $(x_i)$, centered at $p$. Without loss of generality $f(p)=0$ by replacing $f$ with $f-f(p)$. Thus in coordinates, since p corresponds to 0, $f(0)=0$ (it is a little
sloppy, but I’ll probably call the actual function and the function in coordinates the same thing and go back and forth).
By a standard lemma of multivariable calculus (a consequence of Taylor's theorem with remainder, sometimes called Hadamard's lemma), we have smooth functions $g_1, \ldots, g_n$ such that $f(x_1, \ldots, x_n)=\sum_{i=1}^n x_ig_i(x_1, \ldots, x_n)$ and $\displaystyle \frac{\partial f}{\partial x_i}\Big|_0=g_i(0)$.
But 0 is a critical point of $f$, so $g_i(0)=0$ and we can apply the theorem again to each $g_i$. We’ll suggestively call the smooth functions $g_k(x_1, \ldots, x_n)=\sum_{i=1}^n x_i h_{ki}(x_1, \
ldots, x_n)$.
Thus, we now have $\displaystyle f=\sum_{k,i}x_kx_i h_{ki}$. Let $\displaystyle H_{ki}=\frac{(h_{ki}+h_{ik})}{2}$.
Then $\displaystyle f=\sum_{k, i}x_kx_i H_{ki}$, and $H_{ki}=H_{ik}$.
But in that form we see that the second partial derivatives are $\displaystyle \frac{\partial^2 f}{\partial x_k \partial x_i}\Big|_0=2H_{ki}(0)$.
By assumption $0$ is a non-degenerate critical point, so $det(H_{ki}(0))\neq 0$, and hence we can apply a linear transformation to our current coordinates to get $\frac{\partial^2 f}{\partial x_1^2}\Big|_0\neq 0$. Thus $H_{11}(0)\neq 0$.
Now $H_{11}$ is continuous, so that means it is non-zero in a neighborhood of 0.
Let $(y_1, x_2, \ldots, x_n)$ be a new coordinate neighborhood where $y_1=\sqrt{|H_{11}|}\left(x_1+\sum_{i=2}^n x_i\frac{H_{1i}}{H_{11}}\right)$. (Note this is actually a coordinate system, since the
determinant of the Jacobian of the transformation from this one to the old one is non-zero).
Now $\displaystyle y_1^2=|H_{11}|\left(x_1+\sum_{i=2}^nx_i \frac{H_{1i}}{H_{11}}\right)^2$
$= H_{11}x_1^2 + 2\sum_{i=2}^n x_1x_i H_{1i} +\left(\sum_{i=2}^n x_i H_{1i}\right)^2/H_{11}$ if $H_{11}>0$, and the same thing with minus signs everywhere if $H_{11}$ is negative.
Thus the function is $y_1^2+\sum_{i,j=2}^n x_ix_jH_{ij}-\left(\sum_{i=2}^n x_i H_{1i}\right)^2/H_{11}$ if $H_{11}>0$ or
$-y_1^2 +\sum_{i,j=2}^n x_ix_j H_{ij} -\left(\sum_{i=2}^n x_i H_{1i}\right)^2/H_{11}$ otherwise.
(I awkwardly wrote this with words, because I couldn’t get cases to look right, and was having weird errors I couldn’t figure).
Now just isolate the stuff after the $\pm y_1^2$. It satisfies the same conditions as $f$, but has fewer variables, so we can induct on the number of variables until we have $f(y_1, \ldots , y_n)=-y_1^2-\cdots - y_\lambda^2 +y_{\lambda +1}^2+\cdots +y_n^2$.
And since the plus and minus signs came from changing basis to put the Hessian into diagonal form with plus and minus 1's, the number of minus signs is indeed the index.
Checking everything in this proof tended to be sort of tedious, so don’t worry if you didn’t go through it. I don’t think there is really any insight you get from going through it. This is one of
those rare instances where I think the result is more important than the proof.
Now we have really good reason to believe the index will be $n$ or 0 if we are at a local max or min. What does a max or min look like near the point? Well, it slopes all in the same direction, i.e. it
will locally look like a sphere. But this is exactly what the Morse lemma tells us about index $n$ and 0 critical points. We’ll make this more precise later.
I wasn’t sure how I was going to proceed. My two options seemed to be to build the Morse theory I need for Lefschetz, and then do Lefschetz, then come back to Morse theory. But I think I’m just going
to continue as far as I want to go ignoring what is needed for the Hyperplane Theorem, then reference what I need.
A Better Example
The example I gave last time was awful, I’ve realized. I need something a little more complicated to better motivate why we’d believe some of these things, and to illustrate what happens in certain cases.
So let’s take a surface embedded in $\mathbb{R}^3$ given by the equation $z=x^2(x+1)-y^2$. It is a “mountain landscape”:
It might be hard to tell, but there is the one peak, and it forever decreases to the general left, and forever increases to the general right.
We have a global chart to work with. Our Morse function will again be the “height function”. So $f(x,y,z)=z$. We have two critical points. One will occur when we reach the “saddle point” at $z=0$ and
one when we reach the peak of the mountain at $z=4/27$. Since this is a conceptual example, I won’t go through all the technical stuff to show that this is actually a Morse function.
Now as we stated before, at a non-critical value, i.e. a regular value, the level set is an embedded submanifold. Thus if $c<0$, then $f(x,y,z)=c$ is something that vaguely looks like:
This is because it is below the saddle point. As the height increases to 0, our level set starts to close in, and when we reach $f(x,y,z)=0$, we get:
This is not an embedded submanifold, because the point of intersection is not locally Euclidean. Continuing up, we get that $0<c<4/27$ will look something like:
Then we hit the critical value $c=4/27$:
It doesn’t show up on the graph, but there is a point at $(-2/3, 0)$ which is why this one is not a manifold. There would be no well-defined dimension since it is a 0-dimensional object union a
1-dimensional object. Then everything above 4/27 looks like the last picture but without the dot.
Let’s analyze a little bit. Between critical values all of our embedded submanifolds seemed to be diffeomorphic to each other (you may be able to guess the proof of this even if you haven’t seen it).
But when we cross a critical value we don’t even maintain homotopy type.
If anyone actually worked out the math behind this example, then they would see that we also now have an example of an index 1 critical point at the saddle. The top of the mountain is index 2, which
still fits with the local min/max conjecture from last time.
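Since the post invites readers to work out the math behind this example, here is a quick pure-Python sanity check (mine, not from the post) of the claims. The gradient and Hessian formulas below are computed by hand from $f(x,y)=x^2(x+1)-y^2$ over the global chart.

```python
# f(x, y) = x^2 (x + 1) - y^2 = x^3 + x^2 - y^2 is the height of the surface
# over the (x, y) chart.  Its critical points, critical values, and Hessian
# indices can be checked directly.

def f(x, y):
    return x**2 * (x + 1) - y**2

def grad(x, y):
    # df/dx = 3x^2 + 2x, df/dy = -2y
    return (3 * x**2 + 2 * x, -2 * y)

def hessian(x, y):
    # f_xx = 6x + 2, f_xy = 0, f_yy = -2
    return [[6 * x + 2, 0], [0, -2]]

def index(H):
    # The Hessian here is diagonal, so the index is just the number of
    # negative diagonal entries.
    return sum(1 for i in range(2) if H[i][i] < 0)

for p in [(0.0, 0.0), (-2 / 3, 0.0)]:
    print(p, "grad:", grad(*p), "value:", f(*p), "index:", index(hessian(*p)))
```

The saddle at the origin comes out with index 1 and critical value 0, and the peak at $(-2/3, 0)$ with index 2 and critical value $4/27$, matching the discussion above.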
I may post again later today on actual Morse theory, but I decided that I really needed a better example to reference once we got going.
What is a Morse function?
A Morse function is a smooth function from a smooth manifold, $M$, to $\mathbb{R}$ that is in some sense “non-degenerate.”
Suppose $p\in M$, then define the Hessian of $f$ at $p$, $H_f(p): T_pM\times T_pM\to \mathbb{R}$, to be the bilinear form that sends $\displaystyle \left(\frac{\partial}{\partial x^i}, \frac{\partial}{\partial x^j}\right)\mapsto \frac{\partial^2f}{\partial x_i\partial x_j}\Big|_p$.
So picking a basis, the Hessian is just the matrix of second partial derivatives.
Now we call $f:M\to\mathbb{R}$ a Morse function if for any $a\in\mathbb{R}$ we have that $f^{-1}((-\infty, a])$ is compact, and for any critical point $p$ of $f$ (a point where the derivative is 0), $H_f(p)$ is
non-singular. So in matrix form, it would have non-zero determinant. In bilinear form terms, it is non-degenerate, or zero is not an eigenvalue.
The index at $p$ is the index of the Hessian at $p$ as a bilinear form. Recall that the index of a bilinear form is the maximal dimension of a linear subspace on which the form is negative definite.
(This is sort of backwards from the intuition of counting how big the positive dimension can be. So note that a form is positive semidefinite iff it has index 0).
We really have to check that the property of being a “Morse function” actually is a well-defined concept for smooth manifolds. i.e. is it a diffeomorphism invariant?
We’ll work locally in coordinates. Suppose we have $\phi : V\to U$ a diffeomorphism where $\phi(q)=p$. Define $g= f\circ \phi$ (i.e. the change of coordinates of our so-called Morse function). The
well-definedness claim is that $q$ is a critical point of $g$ and that the Hessian of $g$ at $q$ is non-singular.
Well, the critical point claim is just the chain rule. Now we’ll actually compute the Hessian. I propose that it is $H_g(q)=(D\phi(q))^T H_f(p) (D\phi(q))$ to make it easier to follow.
We’ll do the right hand side first. The j-th column is $\displaystyle (H_f(p)D\phi(q))_j= \left(\sum_{l=1}^n \frac{\partial^2 f}{\partial x_1\partial x_l}(\phi(q))\frac{\partial \phi^l}{\partial x_j}(q), \ldots, \sum_{l=1}^n \frac{\partial^2 f}{\partial x_n\partial x_l}(\phi(q))\frac{\partial \phi^l}{\partial x_j}(q)\right)$.
Thus the i-j entry of the right side is obtained by multiplying on the left by the ith row of $(D\phi(q))^T$, which gives $\displaystyle \sum_{k=1}^n \sum_{l=1}^n \frac{\partial^2 f}{\partial x_k \partial x_l}(p)\frac{\partial \phi^k}{\partial x_i}(q)\frac{\partial \phi^l}{\partial x_j}(q)$.
Now we’ll calculate the i-j entry of the left side and see if it is the same. So we’ll need the chain rule for partial derivatives.
$\displaystyle (H_g(q))_{ij}=\frac{\partial^2}{\partial x_i \partial x_j}(f\circ \phi)(q)$
$\displaystyle = \frac{\partial}{\partial x_i}\sum_{k=1}^n\frac{\partial f}{\partial x_k}(\phi (q))\frac{\partial \phi^k}{\partial x_j}(q)$
$\displaystyle = \sum_{k=1}^n\frac{\partial}{\partial x_i}\left(\frac{\partial f}{\partial x_k}(p)\right)\frac{\partial \phi^k}{\partial x_j}(q) + \sum_{k=1}^n\frac{\partial f}{\partial x_k}(p)\frac{\partial}{\partial x_i}\left(\frac{\partial \phi^k}{\partial x_j}\right)(q)$
$= \displaystyle \sum_{k=1}^n \sum_{l=1}^n \frac{\partial^2 f}{\partial x_k \partial x_l}(p)\frac{\partial \phi^k}{\partial x_i}(q)\frac{\partial \phi^l}{\partial x_j}(q) +\sum_{k=1}^n\frac{\partial^2\phi^k}{\partial x_i \partial x_j}(q)\frac{\partial f}{\partial x_k}(p)$.
But that last line is the same as the right-hand side plus an extra term. Since $p$ is a critical point of $f$, that extra term is zero, and the two sides are equal.
So there is no problem calling a smooth function Morse, but I also introduced the idea of the index of f at a point. Hopefully this doesn’t change under diffeomorphism. Let’s check.
Suppose $index_f(p)=k$. Then since $\phi$ is a diffeo, $D\phi$ is non-singular. But the index is a well-defined notion for a bilinear form, independent of the choice of basis. Our
previous calculation showed that $H_g(q)=(D\phi(q))^TH_f(p)(D\phi(q))$, which is just a change of basis, so $index_g(q)=k$ as well.
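Here is a tiny concrete instance (my own illustration, not from the post) of that last step in the $2\times 2$ case: congruence $A\mapsto P^T A\, P$ by an invertible matrix $P$ preserves the index of a symmetric bilinear form.

```python
# 2x2 symmetric, non-degenerate case: the index (number of negative
# eigenvalues) can be read off from the determinant and trace, so we can
# check that congruence by an invertible P leaves it unchanged.

def index2(A):
    a, b, c = A[0][0], A[0][1], A[1][1]
    det = a * c - b * b
    if det < 0:
        return 1                      # one positive, one negative eigenvalue
    return 2 if (a + c) < 0 else 0    # same-sign eigenvalues; sign of trace

def congruent(P, A):
    # Computes P^T A P for 2x2 matrices, written out by hand.
    PA = [[sum(P[k][i] * A[k][j] for k in range(2)) for j in range(2)]
          for i in range(2)]
    return [[sum(PA[i][k] * P[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

A = [[-1, 0], [0, -1]]   # e.g. a Hessian that is -I, which has index 2
P = [[2, 1], [1, 1]]     # invertible (det = 1)
B = congruent(P, A)
print(index2(A), index2(B))  # both 2
```

The same sign-counting argument (Sylvester's law of inertia) is what makes the index basis-independent in general.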
I don’t want to leave you without some sort of concrete idea of what is going on. So define $f:S^2\to \mathbb{R}$ to be the “height function” $(x_1, x_2, x_3)\mapsto x_3$. If I’m not at the north or
south pole, then I can write this function in one of the “side” coordinate patches, i.e. $f(\sqrt{1-x_2^2-x_3^2}, x_2, x_3)=x_3$. Hence the Jacobian is non-singular. So every point that is not the
north or south pole is a regular point.
The north and south poles are critical points. Now write $f(u,v)=\sqrt{1-u^2-v^2}$ in the “north patch.” Then at the north pole we are at $u=v=0$. Thus $H_f(N)=\left(\begin{matrix} -1 & 0 \\ 0 & -1 \end{matrix}\right)$. Not only does this tell us that the critical point is non-degenerate, but it tells us the index is 2. In fact, the index of the south pole is 0.
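If you would rather not expand the square root by hand, the claimed Hessian at the north pole can be spot-checked numerically (my own check; the finite-difference step size is an arbitrary choice):

```python
# In the "north patch" the height function is f(u, v) = sqrt(1 - u^2 - v^2);
# its Hessian at the north pole (u = v = 0) should be approximately -I,
# giving index 2.
import math

def f(u, v):
    return math.sqrt(1 - u * u - v * v)

def hess(f, u, v, h=1e-5):
    # Central-difference second partials.
    d2u = (f(u + h, v) - 2 * f(u, v) + f(u - h, v)) / h**2
    d2v = (f(u, v + h) - 2 * f(u, v) + f(u, v - h)) / h**2
    duv = (f(u + h, v + h) - f(u + h, v - h)
           - f(u - h, v + h) + f(u - h, v - h)) / (4 * h**2)
    return [[d2u, duv], [duv, d2v]]

H = hess(f, 0.0, 0.0)
print(H)  # approximately [[-1, 0], [0, -1]]
```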
Some final points that our example might have just revealed. The index seems to actually give us some information. Note that we could have done the same thing for $S^n\to \mathbb{R}$. In this case,
the two critical points are the same, but the indexes are 0 and n. Is it in fact the case that local mins of Morse functions have index 0 and local maxes on an n-manifold have index n? What does it
even mean for a critical point to be something other than a local max or min (i.e. if the previous conjecture holds, what is the meaning of an index strictly between 0 and n)? Non-critical values are
regular values, and since the target is a 1-manifold ($\mathbb{R}$), the level sets over them are codimension 1 properly embedded submanifolds (“hypersurfaces”?). What happens to these
families of submanifolds as we cross critical values?
Alright. I think that is enough of a preview of what is coming up.
What Now?
I’m done! Well, for now. I’m pretty sure I didn’t pass all three, so I’m still not done with these darned tests.
Now I have to decide what I’m going to talk about. I decided I was going to do no math for a week after these tests were done. But I don’t feel that way now. I actually feel sort of motivated to try
to look at some things I don’t have time to look at when school is in session (or when I’m studying for prelims).
From my point of view, my options seem to be Morse theory (something I’ve been threatening to do for probably 6 months now). If I did this, I’d probably just try to get to a few of the results of the
form: If M is a manifold that admits a smooth real valued function with precisely two critical points, then it is homeomorphic to a sphere. Or something like that, I haven’t looked at it for awhile,
so it might not actually be that. But I think it is really awesome that you can somehow get at topological facts, based purely on what real-valued functions it can admit.
I could work through Topology from the Differentiable Viewpoint and learn some things about cobordism (a term I hear thrown around a lot, but only vaguely know that it is in reference to what
manifolds are boundaries of other ones or something).
I could pick up where I left off on the algebraic geometry, although I’ll probably do that during school since I’ll be taking algebraic geometry, so it might not be the best choice for right now.
I could do some of Bott and Tu’s book. I’ve only actually read the first part, and am quite curious as to what is in the rest of it.
I could try for the third time to read Zwiebach’s A First Course in String Theory, because I’m darned determined to learn what string theory is. Although, I suspect it will go even worse this time
than last time considering it’s been well over a year since I took quantum mechanics.
Or I could switch gears and do some posts on books I’ve read (which are quite a few since my last book post) and movies I’ve seen. It is still the case that every day my Lost in the Funhouse post has
the most hits. Darn you “survey of modern american lit classes” for causing so much confusion.
Or you could suggest something, and I might ignore it or actually do it.
by hilbertthm90
The Tangent Bundle is Orientable
Today we’ll do a nice standard result. The tangent bundle of a smooth manifold is orientable as a manifold (regardless of whether or not the manifold itself is).
This could be done rather easily if I had built some theory first, but I’ll build the structures I need in this post. The first thing I’ll build is the tautological symplectic form on the cotangent
bundle. Let $(q, \phi)\in T^*M$ be a point in the cotangent bundle, specified by a point $q$ in the manifold and a covector $\phi\in T^*_qM$. Thus we have the projection $\pi: T^*M\to M$ by $\pi(q, \phi)=q$.
Now the pullback at $q$ is $d\pi^*_{(q,\phi)}:T_q^*M\to T^*_{(q, \phi)}(T^*M)$. So we’ll put a 1-form $\tau\in \Omega^1(T^*M)$ on the cotangent bundle by $\tau_{(q, \phi)}=d\pi^*_{(q, \phi)}\phi$. Thus,
given a tangent vector $X\in T_{(q, \phi)}(T^*M)$, we get $\displaystyle \tau_{(q, \phi)}(X)=\phi(d\pi_{(q, \phi)}(X))$.
That was a mouthful, and we only got a 1-form out of it not a symplectic form. The claim now is that $\omega=-d\tau$ is a symplectic form on the cotangent bundle.
Given the standard coordinates at a point $(q, \phi)\in T^*M$ say $(x^i, \zeta_i)$ where the coordinate representation is $\phi=\sum \zeta_idx^i$, then we get a coordinate representation for the
projection $\pi(x, \zeta)=x$. Thus $d\pi^*(dx^i)=dx^i$ and we get a coordinate representation for our one form: $\tau_{(x, \zeta)}=\sum\zeta_idx^i$. Thus our 1-form is smooth.
Now $\omega$ is closed since it is exact. We also have a coordinate form for it $\omega=-d\tau=\sum dx^i\wedge d\zeta_i$. Aha, so it is symplectic.
So if you haven’t seen this proof done this way, then you are probably massively confused about why I just put a symplectic form on the cotangent bundle when what I really want is a nowhere vanishing
2n-form on the tangent bundle.
You can now simply check what happens when you wedge this form with itself n-times. You’ll get a nowhere vanishing 2n-form on the cotangent bundle. Thus the cotangent bundle is orientable. Now let
$g$ be a Riemannian metric on the manifold. This gives us a nice isomorphism between the tangent and cotangent bundles by way of the raising and lowering of indices. This part of the proof sort of
scares me: this isomorphism is a bundle isomorphism, but does that imply the bundles are diffeomorphic as manifolds? I think so, since it’s smooth, but if any reader can confirm, that would be great!
But since the tangent bundle is diffeomorphic to an orientable manifold, it is itself orientable.
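The "simply check" step can be made completely concrete. Since $\omega=\sum dx^i\wedge d\zeta_i$ has constant coefficients, wedging it with itself reduces to exterior algebra over the basis covectors, which the following sketch (mine, not from the post) carries out exactly; basis covector $2i$ stands for $dx^i$ and $2i+1$ for $d\zeta_i$.

```python
# Constant-coefficient forms are stored as {sorted index tuple: coefficient},
# with signs coming from permutation parity.
from math import factorial

def parity(seq):
    # (-1)^(number of inversions) = sign of the permutation sorting seq.
    sign = 1
    for i in range(len(seq)):
        for j in range(i + 1, len(seq)):
            if seq[i] > seq[j]:
                sign = -sign
    return sign

def wedge(a, b):
    out = {}
    for ia, ca in a.items():
        for ib, cb in b.items():
            idx = ia + ib
            if len(set(idx)) < len(idx):   # repeated covector wedges to zero
                continue
            key = tuple(sorted(idx))
            out[key] = out.get(key, 0) + ca * cb * parity(idx)
    return {k: v for k, v in out.items() if v != 0}

def top_power(n):
    omega = {(2 * i, 2 * i + 1): 1 for i in range(n)}  # sum dx^i ∧ dzeta_i
    result = omega
    for _ in range(n - 1):
        result = wedge(result, omega)
    return result

for n in (1, 2, 3):
    print(n, top_power(n), "n! =", factorial(n))
```

The top power is a single volume-form term with coefficient $n!$, hence nowhere vanishing, which is exactly the orientability claim.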
As usual, let’s extrapolate a little now that the specific standard problem is done. We showed that the cotangent bundle was orientable, but the only fact we used was that it was symplectic. So this
same proof (wedging the symplectic form with itself n times) will work, and all symplectic manifolds are orientable.
Another thing to note is that we need to be careful to continually specify “as smooth manifolds” when talking about orientability in this context. Another theorem says that the tangent bundle is
orientable, as a vector bundle, if and only if the manifold itself is orientable.
Another quick question, is there a cleaner way to do this? Of course, this proof is a couple sentences with knowledge that the cotangent bundle has a tautological symplectic structure, and that all
symplectic manifolds are orientable, and then just hit it with the tangent-cotangent iso. But I feel like there must be a proof working directly with the tangent bundle and some more elementary facts
about orientation.
Harmonic Growth as Related to Complex Analytic Growth
Let’s change gears a bit. This post will be on something I haven’t talked about in probably a year…that’s right, analysis. Since the last post was short, I’ll do another quick one. The past few days
have had varying efforts to solve a problem of the form: if $f$ is an analytic function and we know that $|Re f(z)|\leq M|z|^k$ (for large $|z|$, say), do we actually know something like $|f(z)|\leq M'|z|^k$?
Let’s rephrase this a bit. Essentially we’re talking about growth. It would be sufficient to show something along the lines of: if $u$ is harmonic, and grows at some rate, then $v$ the harmonic
conjugate also must grow at a related rate. But all of this growth talk is vague. What does this even mean?
One measure of growth would be $|\nabla u|=\sqrt{\left(\frac{\partial u}{\partial x}\right)^2+\left(\frac{\partial u}{\partial y}\right)^2}$. In fact, the gradient points in the direction of greatest
change, so this is in some sense an upper bound on the growth. Another is $f'(z)$. Does this help? Well, first off, if this is our notion of growth, then by the Cauchy-Riemann equations, we
immediately get that the harmonic conjugate grows exactly the same: $|\nabla u|=|\nabla v|$. Let’s check how useful this is in recovering the growth of $f$.
Since I haven’t talked about complex analysis much, note that the derivative operator for complex functions is $\frac{1}{2}\left(\frac{\partial}{\partial x}-i\frac{\partial}{\partial y}\right)$.
Now $f'(z)=\frac{1}{2}\left(\frac{\partial(u+iv)}{\partial x}-i\frac{\partial(u+iv)}{\partial y}\right)$
$= \frac{1}{2}\left(\frac{\partial u}{\partial x}+\frac{\partial v}{\partial y}+i\left(\frac{\partial v}{\partial x}-\frac{\partial u}{\partial y}\right)\right)$
$= \frac{\partial u}{\partial x}-i\frac{\partial u}{\partial y}$ by Cauchy-Riemann.
Thus $|f'(z)|=|\nabla u|$.
Did this solve our original problem? Yes. Since if we work out the partial derivatives we get that if $|u|\leq M(x^2+y^2)^{k/2}$, then $|\nabla u(z)|\leq Mk|z|^{k-1}$.
In particular, $|f'(z)|\leq Mk|z|^{k-1}$. So we wanted to show that $f$ was a polynomial of degree at most $k$, and we can now use Cauchy estimates to get that.
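Here's a quick numerical spot-check (my own; the sample function $f(z)=z^3$ and test points are arbitrary choices) of the identity $|f'(z)|=|\nabla u|$ derived above:

```python
# For f(z) = z^3 we have u = Re f = x^3 - 3 x y^2 and f'(z) = 3 z^2, so we
# can compare |f'(z)| against a finite-difference |grad u| at sample points.
import math

f = lambda z: z ** 3
fprime = lambda z: 3 * z ** 2
u = lambda x, y: f(complex(x, y)).real

def grad_norm(x, y, h=1e-6):
    # Central differences for u_x and u_y.
    ux = (u(x + h, y) - u(x - h, y)) / (2 * h)
    uy = (u(x, y + h) - u(x, y - h)) / (2 * h)
    return math.hypot(ux, uy)

for z in (1 + 2j, -0.5 + 0.3j, 2 - 1j):
    print(z, abs(fprime(z)), grad_norm(z.real, z.imag))
```

At each point the two printed numbers agree up to finite-difference error.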
If any of what I just wrote is true, then there is some really obvious way of doing it that isn’t messy like this at all. I mean, the result is $|f'(z)|=|\nabla u|$. Is this for real? Am I horribly
mistaken? I can’t find this in any book…
Math Forum Discussions
Date Subject Author
2/24/13 Problems with Infinity? William Elliot
2/24/13 Re: Problems with Infinity? garabik-news-2005-05@kassiopeia.juls.savba.sk
2/24/13 Re: Problems with Infinity? Frederick Williams
2/24/13 Re: Problems with Infinity? David DeLaney
2/25/13 Re: Problems with Infinity? P. Taine
2/26/13 Re: Problems with Infinity? Butch Malahide
2/24/13 Re: Problems with Infinity? jsavard@ecn.ab.ca
2/25/13 Re: Problems with Infinity? ross.finlayson@gmail.com
2/25/13 Re: Problems with Infinity? Brian M. Scott
2/25/13 Re: Problems with Infinity? Shmuel (Seymour J.) Metz
2/25/13 Re: Problems with Infinity? jsavard@ecn.ab.ca
2/25/13 Re: Problems with Infinity? Brian M. Scott
2/26/13 Re: Problems with Infinity? ross.finlayson@gmail.com
2/26/13 Re: Problems with Infinity? Frederick Williams
2/26/13 Re: Problems with Infinity? Wayne Throop
2/26/13 Re: Problems with Infinity? Brian M. Scott
2/26/13 Re: Problems with Infinity? ross.finlayson@gmail.com
2/25/13 Re: Problems with Infinity? Frederick Williams
2/25/13 Re: Problems with Infinity? Shmuel (Seymour J.) Metz
2/25/13 Re: Problems with Infinity? Frederick Williams
2/26/13 Re: Problems with Infinity? Wayne Throop
2/26/13 Re: Problems with Infinity? Wayne Throop
2/26/13 Re: Problems with Infinity? Brian M. Scott
2/26/13 Re: Problems with Infinity? Wayne Throop
2/26/13 Re: Problems with Infinity? Brian M. Scott
2/26/13 Re: Problems with Infinity? Wayne Throop
2/27/13 Re: Problems with Infinity? David DeLaney
2/27/13 Re: Problems with Infinity? Shmuel (Seymour J.) Metz
2/28/13 Re: Problems with Infinity? David DeLaney
2/28/13 Re: Problems with Infinity? Shmuel (Seymour J.) Metz
2/28/13 Re: Problems with Infinity? David DeLaney
3/1/13 Re: Problems with Infinity? Shmuel (Seymour J.) Metz
3/1/13 Re: Problems with Infinity? David DeLaney
3/2/13 Re: Problems with Infinity? Shmuel (Seymour J.) Metz
2/28/13 Re: Problems with Infinity? jsavard@ecn.ab.ca
2/28/13 Re: Problems with Infinity? David Johnston
2/27/13 Re: Problems with Infinity? Shmuel (Seymour J.) Metz
2/26/13 Re: Problems with Infinity? Frederick Williams
2/26/13 Re: Problems with Infinity? David DeLaney
4/11/13 Re: Problems with Infinity? Walter Bushell
4/11/13 Re: Problems with Infinity? Brian M. Scott
4/11/13 Re: Problems with Infinity? Butch Malahide
4/12/13 Re: Problems with Infinity? fom
4/12/13 Re: Problems with Infinity? Wayne Throop
4/12/13 Re: Problems with Infinity? fom
4/12/13 Re: Problems with Infinity? Wayne Throop
4/12/13 Re: Problems with Infinity? fom
4/11/13 Re: Problems with Infinity? jsavard@ecn.ab.ca
4/11/13 Re: Problems with Infinity? Butch Malahide
4/12/13 Re: Problems with Infinity? Virgil
4/12/13 Re: Problems with Infinity? Brian M. Scott
4/12/13 Re: Problems with Infinity? jsavard@ecn.ab.ca
4/11/13 Re: Problems with Infinity? fom
4/11/13 Re: Problems with Infinity? Butch Malahide
4/11/13 Re: Problems with Infinity? Butch Malahide
4/12/13 Re: Problems with Infinity? Brian M. Scott
4/12/13 Re: Problems with Infinity? Butch Malahide
2/26/13 Re: Problems with Infinity? Brian M. Scott
2/26/13 Re: Problems with Infinity? Shmuel (Seymour J.) Metz
2/26/13 Re: Problems with Infinity? Brian M. Scott
2/26/13 Re: Problems with Infinity? David Bernier
2/26/13 Re: Problems with Infinity? Shmuel (Seymour J.) Metz
2/28/13 Re: Problems with Infinity? Shmuel (Seymour J.) Metz
4/11/13 Re: Problems with Infinity? Walter Bushell
4/11/13 Re: Problems with Infinity? Shmuel (Seymour J.) Metz
2/26/13 Re: Problems with Infinity? Frederick Williams
2/27/13 Re: Problems with Infinity? Scott Fluhrer
Dude, Can You Count? Stories, Challenges, and Adventures in Mathematics
“My name is J. J. Moon, and I’m an alien,” he said calmly.
The first line sounds like the beginning of a science fiction novel. But no, this is a book on… well, what is it about exactly? I suppose it’s a book about philosophy.
The book consists of 25 “Stories, Challenges, and Adventures in Mathematics,” or SCAMs. (If you like acronyms, then as the “Great Architect and World Designer” is my witness, you’ll get a kick out of
this book.) The SCAMs are dialogues between J. J. Moon (a Ganymedean) and the unnamed writer. In the first SCAM, J. J. introduces the notion of the Mathematical Intelligence and Character Quotient.
Not surprisingly, the aggregate MICQ for our planet is low: 27 (in the same range as defense lawyers and “language polluters”). In the second SCAM, he introduces the ten Mathematical Commandments
(which include Thou shalt not divide by zero and Thou shalt denounce the evil of reformed calculus). According to J. J., following these ten commandments will raise our planetary MICQ to a
respectable 50 or higher.
The remaining SCAMs are largely identical in structure: the writer goes to a math conference, goes into the hotel bar for coffee, and spots J. J. across the room. He approaches the alien, who gives
him two or three math problems for his students and then launches into a screed on some societal topic, such as public education, mathematics education, political correctness, the criminal and civil
legal system, or academic politics. The meeting ends with the writer providing J. J. with two or three jokes. At the end of the SCAM is the solution to J. J.’s math problems, followed by some
mathematical dos and don’ts.
A lot of J. J.’s screeds sounded like excuses for Constanda to rail against many of society’s shortcomings. (As he says in the preface, “There is no doubt: the world is going to the dogs.”) But he
does it with a nice collection of problems and jokes, so it turns out to be a fun read, and rather addictive. And very accessible: the math involved here isn’t particularly difficult, so anyone with
a modest understanding of algebra can enjoy most of this book, although they might find the idea of mathematical ability as the key factor in intelligence to be somewhat arrogant. But then those of
us with a proper MICQ know better.
Donald L. Vestal is an Associate Professor of Mathematics at South Dakota State University. His interests include number theory, combinatorics, spending time with his family, and working on his hot
sauce collection. He can be reached at Donald.Vestal(AT)sdstate.edu.
Derivative of trigonometric
October 8th 2008, 09:27 PM
Find the points on the curve y= tan x, -pi/2< x < pi/2, where the tangent is parallel to the line y= 2x.
How do you do this? I'm really confused (Headbang)
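A sketch of the standard approach: the slope of $y=\tan x$ is $\sec^2 x$, so set $\sec^2 x=2$, i.e. $\cos^2 x=1/2$. On $(-\pi/2,\pi/2)$ that gives $x=\pm\pi/4$, so the points are $(\pi/4, 1)$ and $(-\pi/4, -1)$. A quick numerical confirmation (the finite-difference derivative below is just a check, not part of the method):

```python
import math

def slope(x, h=1e-6):
    # Numerical derivative of tan at x.
    return (math.tan(x + h) - math.tan(x - h)) / (2 * h)

for x in (math.pi / 4, -math.pi / 4):
    print(x, math.tan(x), slope(x))  # slope is ~2 at both candidate points
```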
MathGroup Archive: December 2005 [00208]
Re: Types in Mathematica, a practical example
• To: mathgroup at smc.vnet.net
• Subject: [mg62892] Re: Types in Mathematica, a practical example
• From: "Steven T. Hatton" <hattons at globalsymmetry.com>
• Date: Thu, 8 Dec 2005 00:04:19 -0500 (EST)
• References: <200512051841.NAA21133@smc.vnet.net> <200512060503.AAA02706@smc.vnet.net> <dn5o99$niv$1@smc.vnet.net>
• Sender: owner-wri-mathgroup at wolfram.com
Sseziwa Mukasa wrote:
> On Dec 6, 2005, at 12:03 AM, Andrzej Kozlowski wrote:
>> It seems to me that you are looking here at Mathematica from the
>> wrong view point. Although Mathematica is not a strict "functional"
>> language, it is "functional" at least in the sense that it is
>> "functions" and not "objects" that play the central role (I am being
>> deliberately vague here but I have run out of patience with long
>> discussions of programming languages and general design issues.).
> Amen, but apparently I have a masochistic streak that makes me
> persevere in this sisyphean discussion. At any rate I thought about
> the problem a bit further, I am not sure why, and your point above is
> related to my thoughts. If one insists on seeing all programming in
> an "object oriented" manner there is an argument that could be made
> for understanding how to make a Symbol x have some desired properties
> corresponding to behavior according to a type. The first thing one
> must do is forget the Java/C++ model of objects, where objects are
> defined as record structures which can contain associated methods.
That's not quite correct. The methods in C++ are not actually stored in the instances of the class where they are defined. They are
similar to Dr. Maeder's constructors, selectors, predicates, and operations for ADTs described in _Computer Science with Mathematica_.
> Instead the appropriate model is Smalltalk, where the definition of a
> method on a variable defines the behaviors of the class to which that
> variable belongs.
Methods in Smalltalk are not that different from member functions in C++.
Both instance methods and class methods (similar but not identical to
static member functions) are defined as part of the class, not the
instances. The big differences between Smalltalk and C++ are:
C++ has no universal base class. Java does.
C++ doesn't have a Class object for each class. Java does.
C++ doesn't use a virtual machine. Java does - except with GCJ.
C++ doesn't have, or need, garbage collection. Java does.
C++ variables are declared with an immutable type. Same as Java.
Perhaps it's this last point which you are intending? I will grant that
Mathematica symbols are not immutably bound to the values assigned to them,
and "type" is determined by the currently associated values. This is also
how Lisp variables work.
>> In
>> this case the issue is with the function Plus and not with the nature
>> or "type" of x etc. Plus has the attribute Listable, which is
>> responsible for the behaviour you see
> Agreed, so the "object oriented" approach would be to redefine the
> Plus method on x to get the behavior we desire.
Not sure what you mean here. In C++ you would overload Plus for your type.
But that begs the issue of defining a type for x.
After thinking it over, my answer to Ingolf's original question is "that's
the way Mathematica works. Symbols do not have immutable types. You have
to be sure your arguments are symbols with the correct 'types' assigned to them."
> Of course, but I think the idea in this thread is to deal with the
> idea of types and their implementation in Mathematica. The evaluator
> does not easily fit into a model of types. I think what should be
> clear though is that unless there is a very good argument for
> developing the infrastructure necessary to redefine the behavior of
> Mathematica expressions globally, development of a type system
> infrastructure is probably far more work than it's worth.
That may well be the case. I must say, not everybody has jumped on Dr.
Maeder's bandwagon regarding ADTs and OOP.
The Mathematica Wiki: http://www.mathematica-users.org/
Math for Comp Sci http://www.ifi.unizh.ch/math/bmwcs/master.html
Math for the WWW: http://www.w3.org/Math/
Egon Zakrajšek
Egon Zakrajšek (July 7 – September) was a Slovene mathematician and computer scientist.
Zakrajšek was born in Ljubljana, Yugoslavia (today Slovenia). He became an orphan even before he started school. He attended elementary school and gymnasium in Jesenice. He was a good
pupil and showed his talent and abilities very early. He graduated in technical mathematics from the Department of Mathematics and Physics of the then Faculty for Natural Sciences and Technology
(FNT) of the University of Ljubljana. He received his Master's degree at the University of Zagreb with the thesis Numerična realizacija Ritzovega procesa (Numerical realization of the Ritz process)
and his doctorate in 1978 in Ljubljana with the dissertation O invariantni vložitvi pri reševanju diferencialnih enačb (About invariant embedding in the solving of differential equations).
Professor Zakrajšek was one of the pioneers of computer science in Slovenia. He became an expert on the first computers of the University of Ljubljana, the Zuse Z-23 and its successor the IBM
1130. Later on he participated in the development of programming languages, tools and operating systems, and he wrote textbooks and manuals for them: for Z-23 assembler, algol, fortran,
algol 68, pascal, and for the domestic structran. In 1982 he left for the United States, where he became the manager of programming equipment at the firm Cromemco. In 1994 he returned
to his homeland, where he took up a professorship again. With his advocacy of C and of open operating systems, that is unix and linux, he helped modernize the teaching of computer science. He
again became an expert on TeX, LaTeX and Matlab.
Besides his computer science skills he was also an excellent mathematician with a broad profile. He taught and solved problems in many fields: the application of mathematics in the natural and social sciences, statistics, mechanics, classical applied mathematics, discrete mathematics, graph and network theory, linear programming, operations research, and numerical analysis.
Non-Classical Logics, Model Theory, and Computability: Proceedings of the Third Latin-American Symposium on Mathematical Logic, Campinas, Brazil, July
Merchant | Format | Price
Amazon US | Paperback | $125.09 - $215.00
eBooks.com | Digital (PDF) | $260.00
A Framework for Defining Logics
Robert Harper*, Furio Honsell†, Gordon Plotkin‡
The Edinburgh Logical Framework (LF) provides a means to define (or present) logics. It is based on a general treatment of syntax, rules, and proofs by means of a typed λ-calculus with dependent types. Syntax is treated in a style similar to, but more general than, Martin-Löf's system of arities. The treatment of rules and proofs focuses on his notion of a judgement. Logics are represented in LF via a new principle, the judgements as types principle, whereby each judgement is identified with the type of its proofs. This allows for a smooth treatment of discharge and variable occurrence conditions and leads to a uniform treatment of rules and proofs whereby rules are viewed as proofs of higher-order judgements and proof checking is reduced to type checking. The practical benefit of our treatment of formal systems is that logic-independent tools such as proof editors and proof checkers can be constructed.
Categories and subject descriptors: F.3.1 [Logics and Meanings of Programs]: Specifying and Verifying and Reasoning about Programs; F.4.1 [Mathematical Logic and Formal Languages]: Mathematical Logic.
General terms: algorithms, theory, verification.
Additional key words and phrases: typed lambda calculus, formal systems, proof checking, interactive theorem proving.
1 Introduction
Much work has been devoted to building systems for checking and building formal proofs in various logical systems. Research in this area was initiated by de Bruijn in the AUTOMATH project whose
purpose was to formalize mathematical arguments in a language suitable for machine checking [15]. Interactive proof construction was first considered by Milner et al. in the LCF system [19]. The fundamental idea was to exploit the abstract type mechanism of ML to provide a safe means of interactively building proofs in PPλ. These
*School of Computer Science, Carnegie Mellon University, Pittsburgh, PA 15213, USA. †Dipartimento di Matematica e Informatica, Università di Udine, Via Zanon, 6, Udine, Italy. ‡Laboratory for Foundations of Computer Science, Edinburgh University, Edinburgh EH9 3JZ, United Kingdom
Triangles All Around
Copyright © University of Cambridge. All rights reserved.
'Triangles All Around' printed from http://nrich.maths.org/
Sam sent us her work on the problem, including the angles of all the triangles. Thank you Sam! Can you see how she avoided counting any twice? She first counted triangles with two corners on
neighbouring pegs, then those two apart, and so on. She called two triangles the same if they had the same angles and not just if they used the same pegs. She explains this in a bit more detail
below. Here is her work:
First I labelled the points on the pegboard $ABCD$.
There are four possible triangles: $ABC$, $ABD$, $ACD$ and $BCD$. However these triangles are all the same shape (you can see this by rotating triangle $ABC$) so we could say that there is only one
type of triangle that we can make.
This triangle has angles $90^\circ$, $45^\circ$ and $45^\circ$. I know this because if you draw a square around the points $ABCD$ and cut it along the diagonal you get this triangle.
For the six point board, I again labelled the points as $ABCDEF$.
There are three possible triangles
• $ABC$, with angles $120^\circ$, $30^\circ$ and $30^\circ$.
• $ABD$, with angles $90^\circ$, $60^\circ$ and $30^\circ$.
• $ACE$, with all angles $60^\circ$ (an equilateral triangle).
For the eight point board, I again labelled the pegs $ABCDEFGH$
There are five possible triangles
• $ABC$, with angles $135^\circ$, $22.5^\circ$ and $22.5^\circ$.
• $ABD$, with angles $112.5^\circ$, $22.5^\circ$ and $45^\circ$.
• $ABE$, with angles $90^\circ$, $22.5^\circ$ and $67.5^\circ$.
• $ACE$, with angles $90^\circ$, $45^\circ$ and $45^\circ$.
• $ACF$, with angles $45^\circ$, $67.5^\circ$ and $67.5^\circ$.
Can you see how she worked out the angles in the triangles? If you have come across circle theorems you may find these helpful. Remember that the angles in a triangle add up to 180 degrees! You can
divide the triangle (or the circle) into pieces whose angles you know to help you.
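Sam's counts can also be checked with a short program. This is our own sketch (not part of the published solution), relying on the fact that an angle inscribed in a circle is half the central angle on the arc it subtends, so an arc of $k$ gaps out of $n$ pegs gives an angle of $k \times 180^\circ / n$:

```python
from itertools import combinations

def triangle_angle_sets(n):
    """Distinct angle multisets (in degrees) of triangles on n equally spaced pegs.

    An inscribed angle is half the central angle on the arc it subtends,
    so an arc of k gaps gives an angle of k * 180 / n degrees."""
    shapes = set()
    for i, j, k in combinations(range(n), 3):
        arcs = (j - i, k - j, n - k + i)               # gaps between the three vertices
        shapes.add(tuple(sorted(a * 180.0 / n for a in arcs)))
    return sorted(shapes)

print(len(triangle_angle_sets(4)))  # 1 triangle shape on the four-peg board
print(len(triangle_angle_sets(6)))  # 3 shapes on the six-peg board
print(len(triangle_angle_sets(8)))  # 5 shapes on the eight-peg board
```

Running it reproduces Sam's answers: one shape on four pegs, three on six, five on eight.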
If you would like to have a go at this problem for yourself, you might like to print off these sheets if you're not using the interactivity:
Sheet of four-peg boards
Sheet of six-peg boards
Sheet of eight-peg boards
Raritan, NJ Algebra 2 Tutor
Find a Raritan, NJ Algebra 2 Tutor
...I hold a perfect 800 on my SAT and PSAT Math, perfect 5 on AP Calculus, 800 SAT Math IIC, and recipient of College Board's AP Scholar Award. Ranked in 99th National Percentile. I have had
dozens of students who saw dramatic improvements in their scores and grades (SAT Score, Math SAT IIC, Calcu...
35 Subjects: including algebra 2, English, chemistry, SAT math
...I am confident that with hard work and a little bit of guidance, any student can excel academically. I have taken AP-level Biology, Chemistry, Physics, Calculus and Macroeconomics classes, as
well as college-level Microeconomics and Neurobiology, and I am readily able to explain the concepts inv...
21 Subjects: including algebra 2, chemistry, reading, physics
...I will charge you half your rate if you cancel within those 24 hours. I understand that things happen, so if a situation occurs within the 24 hours that is beyond your control (family
emergency, illness, etc.) just let me know and we can discuss it in more detail. I will work to inform you in a...
4 Subjects: including algebra 2, geometry, algebra 1, prealgebra
...Over 20 years teaching and tutoring in both public and private schools. Currently employed as a professional math tutor and summer school Algebra I teacher at the nearby and highly regarded
Lawrenceville School. 12 years working as a Middle/Upper School math teacher at the nearby Pennington School. Master's degree in Education and NJ Teacher Certification in Middle School Math.
6 Subjects: including algebra 2, geometry, algebra 1, prealgebra
...I have ten years of teaching/tutoring experience. I love learning and teaching Mathematics. I graduated in Mathematics with a GPA of 3.67. I give more importance to teaching the basics, concepts, methods, and skills, and I help the students to understand.
10 Subjects: including algebra 2, calculus, geometry, algebra 1
Speed and Velocity
Lesson 1: Describing Motion with Words
Just as distance and displacement have distinctly different meanings (despite their similarities), so do speed and velocity. Speed is a scalar quantity which refers to "how fast an object is moving."
Speed can be thought of as the rate at which an object covers distance. A fast-moving object has a high speed and covers a relatively large distance in a short amount of time. A slow-moving object
has a low speed and covers a relatively small amount of distance in a short amount of time. An object with no movement at all has a zero speed.
Velocity is a vector quantity which refers to "the rate at which an object changes its position." Imagine a person moving rapidly - one step forward and one step back - always returning to the
original starting position. While this might result in a frenzy of activity, it would result in a zero velocity. Because the person always returns to the original position, the motion would never
result in a change in position. Since velocity is defined as the rate at which the position changes, this motion results in zero velocity. If a person in motion wishes to maximize their velocity,
then that person must make every effort to maximize the amount that they are displaced from their original position. Every step must go into moving that person further from where he or she started.
For certain, the person should never change directions and begin to return to the starting position.
Velocity is a vector quantity. As such, velocity is direction aware. When evaluating the velocity of an object, one must keep track of direction. It would not be enough to say that an object has a
velocity of 55 mi/hr. One must include direction information in order to fully describe the velocity of the object. For instance, you must describe an object's velocity as being 55 mi/hr, east. This
is one of the essential differences between speed and velocity. Speed is a scalar quantity and does not keep track of direction; velocity is a vector quantity and is direction aware.
The task of describing the direction of the velocity vector is easy. The direction of the velocity vector is simply the same as the direction which an object is moving. It would not matter whether
the object is speeding up or slowing down. If an object is moving rightwards, then its velocity is described as being rightwards. If an object is moving downwards, then its velocity is described as
being downwards. So an airplane moving towards the west with a speed of 300 mi/hr has a velocity of 300 mi/hr, west. Note that speed has no direction (it is a scalar) and velocity at any instant is
simply the speed with a direction.
As an object moves, it often undergoes changes in speed. For example, during an average trip to school, there are many changes in speed. Rather than the speedometer maintaining a steady reading,
the needle constantly moves up and down to reflect the stopping and starting and the accelerating and decelerating. One instant, the car may be moving at 50 mi/hr and another instant, it might be
stopped (i.e., 0 mi/hr). Yet during the trip to school the person might average 32 mi/hr. The average speed during an entire motion can be thought of as the average of all speedometer readings. If
the speedometer readings could be collected at 1-second intervals (or 0.1-second intervals or ... ) and then averaged together, the average speed could be determined. Now that would be a lot of work.
And fortunately, there is a shortcut. Read on.
Calculating Average Speed and Average Velocity
The average speed during the course of a motion is often computed using the following formula:

average speed = distance traveled / time of travel

Meanwhile, the average velocity is often computed using the equation

average velocity = displacement / time
Let's begin implementing our understanding of these formulas with the following problem:
┃While on vacation, Lisa Carr traveled a total distance of 440 miles. Her trip took 8 hours. What was her average speed?┃
To compute her average speed, we simply divide the distance of travel by the time of travel.
That was easy! Lisa Carr averaged a speed of 55 miles per hour. She may not have been traveling at a constant speed of 55 mi/hr. She undoubtedly, was stopped at some instant in time (perhaps for a
bathroom break or for lunch) and she probably was going 65 mi/hr at other instants in time. Yet, she averaged a speed of 55 miles per hour. The above formula represents a shortcut method of
determining the average speed of an object.
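The shortcut is a single division. As a minimal sketch (the function name is ours, not the lesson's):

```python
def average_speed(distance, time):
    """Average speed = total distance of travel / total time of travel."""
    return distance / time

# Lisa Carr's trip: 440 miles in 8 hours.
print(average_speed(440, 8))  # 55.0 miles per hour
```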
Average Speed versus Instantaneous Speed
Instantaneous Speed - the speed at any given instant in time.
Average Speed - the average of all instantaneous speeds; found simply by a distance/time ratio.
You might think of the instantaneous speed as the speed which the speedometer reads at any given instant in time and the average speed as the average of all the speedometer readings during the course
of the trip. Since the task of averaging speedometer readings would be quite complicated (and maybe even dangerous), the average speed is more commonly calculated as the distance/time ratio.
Moving objects don't always travel with erratic and changing speeds. Occasionally, an object will move at a steady rate with a constant speed. That is, the object will cover the same distance every
regular interval of time. For instance, a cross-country runner might be running with a constant speed of 6 m/s in a straight line for several minutes. If her speed is constant, then the distance
traveled every second is the same. The runner would cover a distance of 6 meters every second. If we could measure her position (distance from an arbitrary starting point) each second, then we would
note that the position would be changing by 6 meters each second. This would be in stark contrast to an object which is changing its speed. An object with a changing speed would be moving a different
distance each second. The data tables below depict objects with constant and changing speed.
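The data tables themselves did not survive extraction, but their idea is easy to regenerate. This short sketch (our own, with made-up speeds for the changing case) tabulates position each second for a constant-speed object versus one whose speed changes:

```python
# Position (m) at each second for an object moving at a constant 6 m/s:
constant = [6 * t for t in range(5)]           # position grows by 6 m every second

# Position for an object whose speed increases each second (1, 2, 3, 4 m/s):
changing = [0]
for speed in (1, 2, 3, 4):
    changing.append(changing[-1] + speed)      # a different distance each second

print(constant)  # [0, 6, 12, 18, 24]
print(changing)  # [0, 1, 3, 6, 10]
```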
Now let's consider the motion of that physics teacher again. The physics teacher walks 4 meters East, 2 meters South, 4 meters West, and finally 2 meters North. The entire motion lasted for 24
seconds. Determine the average speed and the average velocity.
The physics teacher walked a distance of 12 meters in 24 seconds; thus, her average speed was 0.50 m/s. However, since her displacement is 0 meters, her average velocity is 0 m/s. Remember that the
displacement refers to the change in position and the velocity is based upon this position change. In this case of the teacher's motion, there is a position change of 0 meters and thus an average
velocity of 0 m/s.
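The teacher's walk can be verified numerically. In this sketch (our own illustration, not from the lesson), each leg is an (east, north) vector; the total path length gives the average speed, while the net displacement gives the average velocity:

```python
import math

# The teacher's walk: 4 m East, 2 m South, 4 m West, 2 m North, in 24 s total.
legs = [(4, 0), (0, -2), (-4, 0), (0, 2)]
elapsed = 24.0

distance = sum(math.hypot(dx, dy) for dx, dy in legs)   # total path length: 12 m
net_east = sum(dx for dx, _ in legs)
net_north = sum(dy for _, dy in legs)
displacement = math.hypot(net_east, net_north)          # net position change: 0 m

print(distance / elapsed)       # average speed: 0.5 m/s
print(displacement / elapsed)   # average velocity (magnitude): 0.0 m/s
```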
Here is another example similar to what was seen before in the discussion of distance and displacement. The diagram below shows the position of a cross-country skier at various times. At each of the
indicated times, the skier turns around and reverses the direction of travel. In other words, the skier moves from A to B to C to D.
And now for the last example. A football coach paces back and forth along the sidelines. The diagram below shows several of coach's positions at various times. At each marked position, the coach
makes a "U-turn" and moves in the opposite direction. In other words, the coach moves from position A to B to C to D.
In conclusion, speed and velocity are kinematic quantities which have distinctly different definitions. Speed, being a scalar quantity, is the rate at which an object covers distance. The average
speed is the distance (a scalar quantity) per time ratio. Speed is ignorant of direction. On the other hand, velocity is a vector quantity; it is direction-aware. Velocity is the rate at which the
position changes. The average velocity is the displacement or position change (a vector quantity) per time ratio.
Mathematics As the Most Misunderstood Subject
from the philosophical-engagement dept.
Lilith's Heart-shape writes
"Dr. Robert H. Lewis, professor of mathematics at Fordham University of New York, offers in this essay a defense of mathematics as a liberal arts discipline, and not merely part of a STEM (science,
technology, engineering, mathematics) curriculum. In the process, he discusses what's wrong with the manner in which mathematics is currently taught in K-12 schooling."
• he's right (Score:2, Insightful)
Mathematics is the foundation for philosophy, not technocracy. What a better world we'd be in if we were motivated by the former rather than pursuing the latter.
□ Re: (Score:3, Interesting)
by Anonymous Coward
Yes, the problem teaching Math(s) and programming (applied Math(s)) is that it's just about intelligence - which you can't teach. The smarter you are, the better you'll be able to figure it
out. The problem with teaching is all the generalists who think because they have a "Degree in Education" they are able to teach any topic. Traditionally dry disciplines need to be taught by
specialists with passion and enthusiasm for their topic, not by generalists who happen to have a gap in their timetable.
☆ by FuckingNickName (1362625) on Wednesday December 22, 2010 @06:25AM (#34639358) Journal
The brain can be trained and the processes of problem-solving can be generalised - see Polya's How to Solve It. But it doesn't help much to just read the book: you've got to practice, and
practice, and practice some more. You must make mistakes and learn from them. You must be prepared to accept multiple inputs rather than merely those which reinforce your strengths and/or
prejudices. You must sometimes, as the old 9/11 troll used to say, get some perspective - don't count the angels on a pinhead while Rome burns, even while the most secure of academic
positions involves the former and there's such an alluring spirit of mental masturbation in many disciplines and departments.
Meanwhile a good teacher has spent enough decades on some area that he knows both where to provide you hints on specific complex problems and which direction to guide you in when you're
contemplating your whole professional life. But, again, don't just choose the teacher who happens to share your academic and ethical prejudices.
○ by gilleain (1310105)
...the processes of problem-solving can be generalised - see Polya's How to Solve It. ..
"How to Solve It" also talks about more general problem-solving than just mathematical problems - crossword puzzles, for example. Prof. Lewis's article talks about the universal
question "Why did they teach me the quadratic formula when I will never use it?" and this is really the answer; doing mathematics (should) teach people how to solve any problem
logically. Well, any problem that can be solved logically, of course.
Meanwhile a good teacher ...knows where to provide you hints
Heh. Although a bit dry, one fun part of the book is where Pólya talks about givin
■ by rjstanford (69735)
"How to Solve It" also talks about more general problem-solving than just mathematical problems - crossword puzzles, for example. Prof. Lewis's article talks about the universal
question "Why did they teach me the quadratic formula when I will never use it?" and this is really the answer; doing mathematics (should) teach people how to solve any problem
logically. Well, any problem that can be solved logically, of course.
Then why not teach logic and problem solving, possibly using mathematics as the language (but not necessarily)? When we tell ourselves that we're teaching maths, that's all people
tend to teach (and learn, for the most part). I agree that teaching logic and deduction is valuable, more valuable than a lot of mathematics to many people (since with skills you
can get the maths, but not necessarily vice versa)... but its rarely seen called out on a school curriculum. And that's a shame.
☆ by kiddygrinder (605598)
heh, that is one of the most unrealistic comments i've seen in a while... specialist teachers with a passion and enthusiasm for maths? i've only ever met one, and he sucked balls as a
teacher. maybe we should start recruiting fairies or goblins, maybe they'll get the job done.
○ by digitig (1056110)
I had teachers with a passion and enthusiasm for maths all the way through school. Condolences for your experience.
■ by tehcyder (746570)
I had teachers with a passion and enthusiasm for maths all the way through school. Condolences for your experience.
I had various teachers with passion and enthusiasm for chemistry, geography, German, English literature, physics, Latin, history, PE and maths, and none of them were PhDs or
anything, just good teachers.
★ by Stooshie (993666)
A Ph.D. tells you nothing except that the holder did some original research at an early point in their career.
◎ Re:he's right (Score:4, Insightful)
by vlm (69642) on Wednesday December 22, 2010 @08:35AM (#34639950)
A Ph.D. tells you nothing except that the holder did some original research at an early point in their career.
There is also little if any correlation in being able to research, and being able to teach. Culturally, "everyone knows" the purpose of a phd is to become a professor and
teach university students while collecting a $100K+ salary. The upper 50% to 10% cream of the crop actually get hired to do that. So, pretty much by definition, as a
general cross section of the population, they are in the bottom of the barrel of teaching ability. So I'd be expecting, unless they're education phds, they're almost by
definition probably not going to be good teachers.
● by germansausage (682057)
Really? At the University I attended it was crystal clear that Professors were hired primarily to do research and that teaching was a secondary consideration.
Oddly enough their teaching skills were distributed about the same as my high school teachers who were hired to teach and only to teach. That is to say a few were
excellent teachers, some were good, the bulk were acceptable and a few were flat out terrible. What we learned from bad teachers was that a bad teacher or professor
can't stop you from lear
○ by CProgrammer98 (240351) on Wednesday December 22, 2010 @07:50AM (#34639738) Homepage
You should have had Mr Burton, my maths O level teacher. He was brilliant. He was totally passionate about his subject and he was also a fantastic teacher. he encouraged us to think
about maths rather than to just blindly follow formulae. I still vividly remember the lesson where he taught us differential calculus from first principles.
He encouraged us to study outside of lesson time and his door was always open during lunch, or after school. almost every one in his class passed their maths O level with at least a
B, over half had A's
It's no exageration to say I owe my career as a developer to him and his enthusiastic teaching.
■ by jc42 (318812)
You should have had Mr Burton, my maths O level teacher. He was brilliant. He was totally passionate about his subject and he was also a fantastic teacher. he encouraged us to
think about maths rather than to just blindly follow formulae. ...
You were lucky to have such a teacher. But there are other ways that can work, too.
Back when I was a high-school sophomore, I decided that math was interesting, so I read that year's math text in the first month, then grabbed copies of the more advanced texts
over the following months. By late winter, I'd run out of math texts that the high school had, and asked the teacher for more. The reply was the conventional "You're not ready for
those yet", which was clearly BS, but was supported by the other teac
□ by ShakaUVM (157947) on Wednesday December 22, 2010 @06:29AM (#34639388) Homepage Journal
>>Mathematics is the foundation for philosophy
Eh, kinda. Advanced logic is the foundation for a lot of modern philosophy, but Wittgenstein and the rest of the 20th century analytics were just responding to the tremendous success of
physics at figuring shit out, and wanted to smear some of that patina on themselves. Well, logic has always been a part of philosophy (think Socrates and his syllogisms) but reading the
Tractatus is like reading a modern computer science proof.
Which isn't surprising, either, given that computer science is essentially applied philosophy in a lot of ways. (cf Bertrand Russell, etc.) If you've ever sat through a class where
philosophers have sat there talking themselves in circles about how an object can't both be is-a and has-a at the same time, you (if you're like me) feel like leaping up and just telling them
to fucking encode whatever paradox they're trying to create in a object hierarchy, and be done with it. I've long longed to write a book called "Computer Science has figured a lot of your
shit out in practice, Philosophers".
It does kind of bug me though, that a person who graduates with a degree in mathematics (which is a fairly difficult, hard-nosed subject) gets a wishy-washy BA degree, whereas a hippie with a
degree in "environmental engineering" gets a BS, but ultimately I think there's a lot of problems with our current conception with categorizing things into "science" and "not-science".
Economics and Climatology are very analogous in terms of what they do - gathering tons of data, running analyses on it, and projecting things out into the future, and both are essentially
"empirical studies of the world about us" (i.e. a sort of base level of science, though with the testing, replication and confirmation bits left out), but we consider one to be a social
science and another to be hard science. There's also a huge debate now over Anthropology, after the American Anthropology Association dropped "science" from its official bits.
☆ by ifiwereasculptor (1870574)
Economics and Climatology are very analogous in terms of what they do - gathering tons of data, running analyses on it, and projecting things out into the future, and both are essentially
"empirical studies of the world about us" (i.e. a sort of base level of science, though with the testing, replication and confirmation bits left out), but we consider one to be a social
science and another to be hard science.
Well, economics is, especially in its present state, largely influenced by individuals, who can be a lot harder to predict than wind currents. You may identify trends, constants and
correlations, but mostly in hindsight. Accurate predictions are as scarce as in cartomancy and useful controlled experiments are hard to imagine. While Climatology shares some of those
characteristics, I think we have a much higher chance of predicting a storm than the stock market. Unless tons of people start walking around wit
☆ by grouchomarxist (127479) on Wednesday December 22, 2010 @07:41AM (#34639686)
If you've ever sat through a class where philosophers have sat there talking themselves in circles about how an object can't both be is-a and has-a at the same time, you (if you're like
me) feel like leaping up and just telling them to fucking encode whatever paradox they're trying to create in a object hierarchy, and be done with it. I've long longed to write a book
called "Computer Science has figured a lot of your shit out in practice, Philosophers".
I understand where you're coming from, but for many philosophers, what they're doing is not just trying create a practical solution to a problem, but describe reality. Your object model
might solve the problems from your point of view, but it includes many built in assumptions about the thing modeled.
In a related way Wittgenstein later came to criticize the Tractatus. Part of the criticism is that if you assume the universe can be fully described with formal logic (logical atomism),
then you are already subscribed to a certain type of metaphysics.
○ by 140Mandak262Jamuna (970587) on Wednesday December 22, 2010 @09:02AM (#34640100) Journal
In mathematics the reputation of Wittgenstein or Tractatus would not matter at all. The argument, "A great mind, everyone agreed that the mind was great, said this, so we should give
this saying more credence" does not hold water.
In mathematics it is the truthiness of the statement creates "credit" and then we search back in history to find who said it first and then we give the credit to him/her and that is
how reputation/respect is created. It flows back in time. Credibility accrues from the statement to the speaker.
In philosophy a bunch of people agree that some one was/is a great philosopher and so they give more value to a statement from such person. The credibility flows from the speaker to
the statement.
■ Re:he's right (Score:4, Insightful)
by NoSig (1919688) on Wednesday December 22, 2010 @10:06AM (#34640560)
That happens a lot in mathematics too - it has to, or mathematicians would have to spend all their time refuting amateur "proofs" of famous open problems.
★ by lgw (121541)
Actually, there was this weird thing going on in Math as a field for much of the 20th century: reinventing Euler. Euler was so very far ahead of the field that the odds were that
anything you discovered for the next couple of centuries had already been discovered by him - thus the saying that theorems are named for the first person after Euler
who discovered them.
But math didn't devolve into a "study of Euler", instead the field plowed ahead happy to rediscover ideas from first principles instead of ju
☆ by tehcyder (746570) on Wednesday December 22, 2010 @08:08AM (#34639828) Journal
I've long longed to write a book called "Computer Science has figured a lot of your shit out in practice, Philosophers"
Well, go on then, if it's that fucking simple and obvious. Put those silly old philosophers in their place, what do they know?
I'm thinking of writing a book called "Why do so many students of Computer Science think they have solved all the riddles of the universe because they know how to write a sorting
□ by ultranova (717540) on Wednesday December 22, 2010 @06:38AM (#34639434)
Mathematics is the foundation for philosophy, not technocracy. What a better world we'd be in if we were motivated by the former rather than pursuing the latter.
Well, we would likely all be malnourished, due to lack of fertilizers, at least those of us who hadn't died at childbirth or soon after. There wouldn't be an Internet to talk on, but that
would be okay, since we wouldn't have time to use one due to the lack of engines and the resulting need to do backbreaking labour 16 hours a day. In short, our lives would be miserable, but
due to lack of medicine, they would at least be short.
Missing these kinds of little details is why I have very little respect for philosophers. As far as I can tell, most of them chose their field because it doesn't punish sloppy work. And then
there's idiocy like the Chinese Room, which assumes that a system cannot have properties its components don't have, yet hasn't been laughed out like it should have been.
Philosophy means you accept the human condition. Technocracy means you try to do something about it. Hope for a better world in the future lies with the latter, not the former.
So over the past two millennia we have cut the working day by 1/3rd and doubled the average lifespan at birth (if you ignore infant mortality, our lifespan hasn't increased that
Meanwhile we have turned the majority of Western humans from independent men into chair-warming consumers singing in lockstep for trinkets. We've made up for the opportunity to live a
life of leisure surrounded by virtually infinite resources by blasting our population beyond 6 billion.
Technocracy is for the lazy man w
○ by ifiwereasculptor (1870574) on Wednesday December 22, 2010 @07:39AM (#34639672)
Why are people even debating philosophy vs technocracy? Why should someone have to choose one over the other? How do people get dragged into such nonsense? Here's a new subject for you:
tomatoes vs rainbows. Go.
■ by tehcyder (746570)
Why are people even debating philosophy vs technocracy? Why should someone have to choose one over the other? How do people get dragged into such nonsense? Here's a new subject for
you: tomatoes vs rainbows. Go.
The slashdot hivemind divides the world into a series of either/or choices, e.g. emacs/vi, or pro/anti-Windows
★ Re:he's right (Score:5, Funny)
by Hognoxious (631665) on Wednesday December 22, 2010 @09:53AM (#34640436) Homepage Journal
The slashdot hivemind divides the world into a series of either/or choices
Well, part of it does and another part doesn't.
○ What a load of crap (Score:5, Insightful)
by Viol8 (599362) on Wednesday December 22, 2010 @07:58AM (#34639782)
"Meanwhile we have turned the majority of Western humans from independent men into chair-warming consumers singing in lockstep for trinkets."
I suggest you take off your rose-coloured glasses and go read some history, in particular just how "free" your average serf was in feudal times and even later. Don't like what your
overlord or king does? Tough. Complain and you'll probably at best end up homeless or at worst end up swinging from a tree.
People in the west have NEVER been as free as they are now.
So get yourself a fucking clue!
Oh, the "we're free because of the First Amendment" fallacy.
Here's an anecdote from my history book: half of my family comes from fascist Spain, my grandfather a bootmaker, and my father, as a boy, was asked to do the "backbreaking labo(u)r in the fields"
that everyone likes soundbiting. It seems that lacking the right to whine and be ignored didn't affect either their boldness or their sense of freedom nearly as much as today's
centralised and surveilled management of corporations and culture.
★ by geekoid (135745)
You're an idiot. It's really that simple. You've buried yourself in a false premise and refuse to see out of it, relying on anecdotes so far out of context as to be useless.
You offer no arguments, only logical fallacies and irrelevant statements.
The irony is that you are putting forward philosophy but can't make a logical argument.
■ by mrsquid0 (1335303)
Swinging from a tree was not the worst thing that could end up happening to a serf who tried to be an "independent man". Many kings and lords were much more sadistic than that
when it came to punishing serfs who disagreed with them. The idea that people in the past were generally freer or more independent than they are today (at least in most
democracies) is laughable.
What do you think happens today to the average blue collar worker who suddenly decides he's not willing to play society's game? Which of today's government and the Lord of the
Dark Ages you are so keen on generalising to everywhere-before-1900 has more resources to catch that man? If a man today, in the middle of the US, considers gathering a group
of men to start an uprising to fix the ills of local, regional or national government, do you think he is more or less likely to succeed than a man five hundred
■ Re:What a load of crap (Score:5, Insightful)
by ObsessiveMathsFreak (773371) <obsessivemathsfreakNO@SPAMeircom.net> on Wednesday December 22, 2010 @08:59AM (#34640092) Homepage Journal
People in the west have NEVER been as free as they are now.
I don't know. I think we were all a lot freer and happier in the 1990's.
No Cold War, no War on Terror, no internet filters, no monitoring of habits, no Google Maps/Mail/Panopticon, less sex offender scares, less evolution/abortion debates, less
religion, less jihad, didn't hear about "markets" half as much, less news pundits, less foreign wars/quagmires, no Super-China, no airport scans, more newspapers, and Star Trek:
The Next Generation was still showing on most terrestrial channels. Sure it wasn't perfect, but it was better than it is now--not that the general public actually gives a shit.
★ by sjames (1099)
What's interesting is that NOTHING physical has changed and yet there's been needless suffering everywhere. We didn't run out of anything and the crops didn't fail. We have
about as much resources now as we ever did, yet because some idiots think a few numbers on a balance sheet are more important than physical reality or human suffering we now
have a bad economy.
★ by shermo (1284310)
Did you know that the first Matrix was designed to be a perfect human world? Where none suffered, where everyone would be happy. It was a disaster. No one would accept the
program. Entire crops were lost. Some believed we lacked the programming language to describe your perfect world. But I believe that, as a species, human beings define their
reality through suffering and misery. The perfect world was a dream that your primitive cerebrum kept trying to wake up from. Which is why the Matrix was redesigned t
■ "People in the west have NEVER been as free as they are now."
Eh, that's pretty iffy.
It would be more accurate to say that people in the West have never been better off in terms of material wealth, true. We've never had as high a level of technology or cheap
access to gadgets or advanced medicine.
But free? I guess it depends on your definition of freedom. We're certainly more free than the Russian serf of the 1700's or the Spaniard under the Caliphate of the middle ages or
the Greek and Serbian living under
☆ by CRCulver (715279) <crculver@christopherculver.com> on Wednesday December 22, 2010 @07:21AM (#34639592) Homepage
due to the lack of engines and the resulting need to do backbreaking labour 16 hours a day.
While agriculture requires backbreaking labour, hunter-gatherer societies only worked a couple of days a week. Not that I advocate a return to it, but backbreaking labour all the livelong
day was not universal in ancient society.
As far as I can tell, most of them chose their field because it doesn't punish sloppy work.
Philosophical journals have the same rigorous standards for papers as journals for the various sciences. Your view of philosophy is about as valid as a grizzled mountain man who mutters
about hard science being all book-learnin' and mumbo-jumbo.
Philosophy means you accept the human condition. Technocracy means you try to do something about it.
Even that is a statement of philosophy. Furthermore, you seem unaware that many calls for improving human lives came from works of philosophy: More's Utopia, Kierkegaard's questions of
metaethics, even what is often called the beginning of the Western tradition, when Socrates hung out in the agora and asked passersby "What if what you comfortably believe is wrong?"
○ by The_mad_linguist (1019680)
While agriculture requires backbreaking labour, hunter-gatherer societies only worked a couple of days a week.
Only thought to be the case by Europeans who didn't think that hunting was "real work".
☆ by Kashgarinn (1036758)
Philosophy means you accept the human condition.
No. Philosophy means questioning the human condition. It's confronting the status quo and asking "why?"
So, exactly the opposite in every way of what you think it is.
You're also wrong in your assumption that philosophy and technocracy are mutually exclusive, in fact if they aren't mutually inclusive, then as a technocrat you're trying to find
solutions when you don't even know what the problem is.
Philosophy is a very powerful way of thinking, and in no way whatsoever does it represent conformity or acceptan
☆ Re:he's right (Score:4, Interesting)
by horigath (649078) on Wednesday December 22, 2010 @07:43AM (#34639694) Homepage
Missing these kinds of little details is why I have very little respect for philosophers. As far as I can tell, most of them chose their field because it doesn't punish sloppy work. And
then there's idiocy like the Chinese Room, which assumes that a system cannot have properties its components don't have, yet hasn't been laughed out like it should have been.
There's plenty of philosophy-types who think that Searle is an idiot, too, for the Chinese Room and other things. Guy loves to position himself as a defender of rationality and realism
because it lets him belittle poststructuralists with oversimplifications and straw men while acting like a hero of a scientific worldview that he clearly doesn't know that much about.
In some ways his antagonistic materialism is quite similar to your dismissal of philosophy in general, actually.
☆ by digitig (1056110) on Wednesday December 22, 2010 @08:06AM (#34639816)
Missing these kinds of little details is why I have very little respect for philosophers.
They don't "miss" those details, they're not in scope.
As far as I can tell, most of them chose their field because it doesn't punish sloppy work.
Philosophy does punish sloppy work, relentlessly. Philosophical work is subject to more scrutiny and criticism than any discipline I know of, and that includes pure maths.
And then there's idiocy like the Chinese Room, which assumes that a system cannot have properties its components don't have, yet hasn't been laughed out like it should have been.
Laughing something out doesn't work in philosophy. Unlike whatever discipline you work in, it seems, in philosophy you have to show the reasons why something is wrong. And if you think
the issue of emergent properties hasn't been considered in excruciating detail in connection with Searle's Chinese Room thought experiment, then you clearly have no idea what philosophy is.
Philosophy means you accept the human condition.
Say what? Some philosophy is abstract, but so is some maths. Lots of philosophy (philosophy of science, political philosophy, ethics) is concerned with changing the human condition. Maybe
you criticise philosophy because it didn't discover antibiotics (although it did lay a lot of the foundations), but do you criticise biology because it didn't invent democracy? Both
changed the human condition, in ways appropriate to their respective disciplines.
☆ by hey! (33014)
Well leaving aside the dubious notion that studying applied subjects is really pursuing "technocracy", I think we're engaging in a bit of false dichotomy here. You don't have to choose as
an individual or as a society to pursue liberal arts or applied arts; to study philosophy or to study engineering.
The medieval liberal arts curriculum had two levels. The Trivium consisted of grammar, logic and rhetoric. These are the basic tools of expression, thinking and persuasion. A student
versed in the Trivium c
• Being a mathematics undergraduate... (Score:5, Interesting)
by pieisgood (841871) on Wednesday December 22, 2010 @06:29AM (#34639380) Journal
I can attest that "true" math is very removed from computation. The computational classes are all regarded as the "easy" classes. This is in contrast to the "hard" classes, real analysis and
abstract algebra. Being thrown into real analysis after just one quarter of study in proofs is extremely rough going. If proofs were introduced as puzzles or just introduced earlier in education
the whole of America would be better off for it.
My own motivations for being in math are for the challenge and because of the lack of concrete answers in calculus. Trigonometric functions especially are always treated as little boxes that
magically calculate what you need.
In any case, at least math attracts the curious.
□ by martin-boundary (547041)
My own motivations for being in math are for the challenge and because of the lack of concrete answers in calculus. Trigonometric functions especially are always treated as little boxes
that magically calculate what you need.
Trigonometry predates calculus by a long time (see Ptolemy's table of chords [rutgers.edu] which were calculated purely geometrically, since algebra wasn't invented then either).
Trigonometric functions are incredibly rich and important, there are so many different ways of looking at them, and
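To make the chord-sine connection concrete: a chord subtending an angle theta in a circle of radius R has length 2R*sin(theta/2), which is how Ptolemy's purely geometric table maps onto the modern function. A minimal Python sketch (the function name is mine, for illustration only):

```python
import math

def chord(theta_deg: float, radius: float = 60.0) -> float:
    """Length of the chord subtending theta_deg degrees in a circle of the
    given radius. Ptolemy tabulated exactly this quantity, using R = 60."""
    return 2.0 * radius * math.sin(math.radians(theta_deg) / 2.0)

# A chord subtending 60 degrees has the same length as the radius.
print(chord(60.0))  # 60.0 (up to floating-point error)
```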
□ by Fnkmaster (89084)
At Harvard, at least back in the day (circa mid-1990s), the boys were separated from the men in the first semester of math freshman year.
Those who thought they were hot shit all started in a class called Math 25/55 and were beaten down with point set topology and real analysis. Those of us who had never gone beyond AP Calculus
BC, or even multivariable calculus, in high school got our asses handed to us rapidly.
It was basically all kids from math- and science-focused honor schools who had been exposed to p
• Why math is worth doing in the first place (Score:5, Informative)
by LambdaWolf (1561517) on Wednesday December 22, 2010 @06:33AM (#34639402)
I've seen the following link in many a Slashdot thread before, but it certainly bears repeating here: "A Mathematician's Lament" by Paul Lockhart [maa.org] It's mostly known as an insightful
critique of what's wrong with K-12 math education, but I've always liked it as an explanation of why people who enjoy math do it in the first place: it's satisfying in an artistic way. I think it
would be great if more students saw math as something worth doing for its own sake, like art or athletics, and hey, it lets you do science and engineering too.
In fact, this summary sounds similar enough to "Lament" that I wouldn't be surprised if this Dr. Lewis was inspired by and/or cited it. But this is Slashdot, so I'll let someone else check that
□ Re:Why math is worth doing in the first place (Score:4, Informative)
by dcollins (135727) on Wednesday December 22, 2010 @07:49AM (#34639734) Homepage
As a part-time college math teacher, I almost totally disagree with Lockhart's Lament. (Ironically, the K-12 school where he teaches is close to the neighborhood where I live.)
It's not that it's bad to see that math can be an art and a pattern-finding exploration (some part of the time), but someone has got to teach and be held accountable for the nuts-and-bolts of
how to read and write mathematical vocabulary, notation, and justification (algebra and geometry, for starters). Knowing about the scientific method is necessary, but exclusively spending
your K-12 time re-inventing the wheel is inefficient at best. It's the same problem as in English nowadays -- I was told last weekend that teachers in junior high schools are forbidden from
teaching the rules of grammar. That is, it's exclusively about expressing "big ideas", no matter how poorly-formed or unreadable. The more this produces crippled students, the more we seem to
run deeper in the same direction -- if you abandon teaching the basic structure of our shared communication systems, then we thereby just generate more and more unreadable nonsense as time
goes on.
The remedial math I teach (basic algebra; about half my assignment load) is almost entirely about just reading & writing. Even the first unspoken step of simply transcribing symbols (i.e., an
expression) from one page to another is almost impossible for about half my students, because no one has ever asked for any level of precision in their reading, writing, or observation skills
(whether in English, math, or anything else). To me, basic math is an opportunity to focus on precision in thinking and writing -- applications belong in other classes! No, that's not what a
professional mathematician works at on a daily basis, but frankly, not every K-12 class can be an independent research opportunity. At some point you've got to eat your vegetables, and if you
run entirely away from that, then it truly is a monumental waste of time.
• Mathematics as an art (Score:5, Insightful)
by Chrisq (894406) on Wednesday December 22, 2010 @06:36AM (#34639426)
I have a cousin who is great at mathematics, and really can see mathematics as an art. Whereas I am happy if I can solve a problem, he will look for an "elegant solution". I had a number of
equations that I solved, trying to optimise the buffer size for various input queues. I showed him, and he quickly said that I had the right answer. A day later he came and showed me how he had derived
an equation that could simply solve all problems of this type. He also generalised it to allow buffer sizes that were complex numbers. The first part was very useful to me, the second absolutely
useless - but to him it was all just interesting.
This is one way that mathematics as an art is unlike any other art. It gives useful results. I have heard time and time again about engineers going to the mathematics department of a University
asking how they can solve a "new" problem - to be told that the solution had been discovered a century before. I am sure most of these solutions came from someone just wanting to find an elegant
way of expressing something without thought of any use. So if it's an art and is useful, why do so few people follow it?
The answer is obvious: because it's hard! In many forms of art you can slap anything down and convince someone that it has value and is art. This may not always have been true; before photography,
accurate representational art was highly valued - but today someone producing a lifelike portrait will not be valued as much as someone slapping their name on an unmade bed! Mathematics has to be
right; you can't just slap down a few numbers and call it an equation. This is the basic problem that anyone will have in persuading someone to follow maths for its art: there are a lot easier
ways to become an artist.
• Not just maths (Score:5, Interesting)
by Schiphol (1168667) on Wednesday December 22, 2010 @06:41AM (#34639454)
I wish science in general was considered part of what a learned person has to know. I mean, if you want to pass for an intellectual you have to read your Dante, your Beckett and you at least need
to know who Lautreamont was. But, apparently, you can very well get away with thinking that you can suck gravity out of a room the way you suck air, or with not having even heard about string
theory. That divorce makes no sense, and it was impossible in the history of ideas till very recently. And Euler's formula is more beautiful than most poems.
• Excellent. (Score:3)
by dtmos (447842) on Wednesday December 22, 2010 @06:46AM (#34639462)
This is by far the best defense of mathematics I've ever read. It's a shame that the poor quality of grade school math education has made it necessary, though. Can one imagine a similar essay on
any other subject? Only math is so poorly taught.
• Simple that... I honestly cannot understand where there can be "beauty" in a mathematical expression that covers the entire blackboard. And even more so when the teacher fails miserably to show
practical uses for the expression.
□ by gilleain (1310105)
There was a BBC4 program recently called "Beautiful Equations" where an art critic went round various mathematicians asking about E=MC^2, F=G(m1m2/r^2), S=A/4, and er the Dirac Equation.
The point about most of these examples they chose - apart from being conveniently in the UK - was that they were short. Also that they are directly related to important ideas about how the
Universe works. So mass can be converted to energy, bodies attract each other, black holes can shrink, and antimatter exists. Dirac was p
I honestly can not understand where there can be "beauty" in a mathematical expression that covers the entire blackboard.
No one else can, either.
The beauty is in the simple relations between apparently unrelated things that, while provably true, still seem magical and mysterious. One example:
You're probably aware that the ratio of the circumference to the diameter of a circle has been given a special name, pi. This is a practical, useful thing that seems purely geometric; you can
measure the diameter of a circular hole, multiply by pi, and get the circumference of the hole. Fine.
Well, it was shown in the 17th Century (!) t
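The comment is cut off, but the 17th-century result it gestures at is presumably one of the early series for pi, such as the Gregory-Leibniz series pi/4 = 1 - 1/3 + 1/5 - 1/7 + ..., which ties pi to the odd integers with no circle in sight. A minimal sketch:

```python
import math

def pi_leibniz(terms: int) -> float:
    """Partial sum of the Gregory-Leibniz series: pi/4 = 1 - 1/3 + 1/5 - ..."""
    return 4.0 * sum((-1) ** k / (2 * k + 1) for k in range(terms))

print(pi_leibniz(1_000_000))  # close to pi; the series converges slowly (error ~1/n)
print(math.pi)
```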
• Differenciation (Score:3)
by tonywestonuk (261622) on Wednesday December 22, 2010 @07:12AM (#34639568)
I remember being taught differentiation at school. One lesson, the lecturer puts a parabolic curve, x=y*y, on the board, and poses the problem: determine the angle of the line.
Then he didn't say anything else. Just, for the rest of the lesson, he responded with 'Yes', 'No', or 'Maybe'. So, after a frustrating 20-minute discussion, trying to work out how the hell to do
this problem, someone came up with the idea of adding a 'little bit' of x to x.
We worked out, as a group, the concept of differentiation, with only the smallest bit of guidance from the lecturer. This is how things should be taught: allowing people to discover concepts
themselves, rather than preaching the correct ways to do things.
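That "little bit" of x is the difference quotient at the heart of the definition of the derivative: (f(x+h) - f(x))/h approaches the slope of the tangent as h shrinks. An illustrative sketch, taking y as a function of x for the parabola:

```python
def slope(f, x: float, h: float = 1e-6) -> float:
    """Forward difference quotient: the 'little bit of x' trick.
    For f(x) = x**2 it evaluates to 2*x + h, which tends to 2*x as h -> 0."""
    return (f(x + h) - f(x)) / h

print(slope(lambda x: x * x, 3.0))  # ~6.0, the derivative of x^2 at x = 3
```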
Brilliant -- a live version of "A Pathway into Number Theory [slashdot.org]". That's the kind of teaching for which awards should be given.
• by Trepidity (597)
"A Mathematician's Lament" [maa.org], an article that's been making the rounds among mathematicians since 2002 (but was only published in 2008), expresses some similar views, and is also a good
• by norpan (50740)
Recommended and relevant reading is "A Mathematician’s Apology" by G. H. Hardy.
Available online at http://web.njit.edu/~akansu/PAPERS/GHHardy-AMathematiciansApology.pdf [njit.edu]
• The first part comprises the results of previous work by mathematicians; the finished product. That's what underpins most of physics and engineering nowadays.
The second part is the "live" Mathematics, i.e. the process of actually doing Mathematics in the sense of figuring something out. That's a slow, arduous, iterative and groping process. Starting
with an observation that confuses or amazes us, incrementally and tentatively formulating concepts (definitions, constructs of previously known mathematics),
• Hi,
in school mathematics is mostly execution of algorithms provided by your teacher, learning when and how to apply them. This changes a lot at university. At first, mathematics is a language to
be learned. You have to be able to express your problems in a standardized language. This is the first art. If you read papers, you can easily distinguish between those people who have truly
mastered that language and those who haven't. Later on, you learn how to prove things. The interesting things you cannot prove
• hmmm (Score:3)
by Charliemopps (1157495) on Wednesday December 22, 2010 @08:05AM (#34639810)
My highschool math teacher was a retired NASA programmer. According to her, teaching Mathematics was about teaching logic and problem solving. If you forgot all the formulas taught in her class,
she said, it wouldn't matter. The real skill learned was how to deal with an entirely new mathematical problem. WHY is area "height x width"? How to build your own sort of equations. Sure enough,
decades later I have forgotten every single equation I was taught there, but when faced with a logic problem I'm still able to work it out.
• by AB3A (192265) on Wednesday December 22, 2010 @08:46AM (#34640012) Homepage Journal
In practice, there are two forms of teaching. The first is applied subject matter in school. In this specific case, it is applied mathematics. They give you the calculation tools for describing a
relationship and then they expect you to find similar relationships and apply that formula. The goal is to teach the use of a tool. It is no different than teaching one to write a coherent
paragraph, communicate in a foreign language, or to be a good citizen in a democracy. Teaching applied mathematics is a necessary element of any school curriculum.
The second is one of discovery. My journey began as a teen, when I read about fractals in an article from Scientific American. Since then I've gone on and explored prime number theories, methods
of calculation, the history of these discoveries, and I've gone looking for the blind alleys that may not have been explored as thoroughly as we might think.
We need to recognize that education is not about discovery. It is about teaching a person the tools of modern society. However, in our zeal to teach the applied aspects of these subjects, we need
to realize that we are failing to nourish the creative spirit of discovery. Mathematics is no different than reading, writing, civics, history, geography, or language. Learning to write a
coherent text does not make one appreciate literature.
Our schools are obsessed with application, not discovery. We spend ridiculous time teaching application, application, and more application. Then we sit and wonder why our children lack the will
to explore...
• by gringer (252588) on Wednesday December 22, 2010 @08:48AM (#34640032)
This article frustrates me. He talks a lot about some particular thing, claims that it relates to maths, but doesn't really say what particular part of maths it relates to, nor does he get into
specifics, nor does he spend much (if any) time on how to improve matters.
Okay, I'll try to explain my confusion with a parable. When I was fifteen, I did a school certificate maths exam. It had a whole bunch of questions, none of which we had ever answered earlier in
the year, but somehow the examiner thought I could answer them, and unfortunately I was unable to answer all questions "correctly" according to the examiner.
What does that have to do with mathematics education over the past 25 years? Unfortunately a great deal. We were required to have exams for mathematics, because every subject had exams. The end
result was that some people didn't do well in exams, even failing enough to be unable to continue on in their maths education in the next year. The truth is that exams cannot alone be used to
evaluate a person's effectiveness as a mathematician. The only way to get around this is to teach mathematics properly, and make sure each person understands maths at all levels.
• Math is just...math. (Score:4, Interesting)
by wickerprints (1094741) on Wednesday December 22, 2010 @10:15AM (#34640646)
Really. Must we contextualize mathematics, or try to talk about what it is or is not? Do we really need to point to a particular cognitive framework as "the reason" why math is not taught
To use a slightly loathsome phrase, math "is what it is." Instead of talking about how people should relate to it, I suggest a radical approach: just LEARN it. Teach it for what it is.
I struggled with arithmetic when I was in grade school, not because I didn't understand the rules, but because I kept making mistakes. And my teachers had the wisdom to know that those errors had
to be drilled out of me before I could proceed any further. I suffered. I *hated* the tedium. We were asked to multiply two twelve-digit numbers with no assistance from any computing devices or
tables; divide four-digit numbers into twenty-digit numbers, until we could do it with 100% accuracy every time. It didn't have to be lightning fast. It just had to be CORRECT.
And when I mastered that skill, it felt fantastic. We moved on to more advanced topics, and each time the teacher made sure we had firmly laid down the next conceptual brick of this vast
mathematical edifice we were building for ourselves. It was hard but rewarding. To those critics who might say such an approach would discourage some students, and that some kids just need to be
excited by what they learn, clearly you have never really understood what it means to build that foundation. It's got to be ROCK SOLID. No crap about trying to make math "fun" or "interesting" or
"relevant." That sort of stuff comes when it comes; they are merely ornaments on the pillars. There's no point in making the structure pretty before you make it sturdy.
So then, how do you get students motivated? It's really quite simple. You challenge them and you force them to bust their asses, and when all their hard work pays off, that sense of
accomplishment is better than any drug. To know that you did it on your own, and you have complete confidence in your mastery of the concept, is precisely what must drive them forward. You can't
entice them with anything else. You can't try to swaddle the math in some cutesy real-world application, because that is going to be fake, and they know it.
That's the story of how I graduated with my BS in mathematics from one of the most prestigious scientific universities in the world. It was purely the early appreciation for persistence toward
understanding mathematics for its own sake. I'm not saying everyone has to keep math "pure." If your goal is to apply it in some other discipline, go for it. But the learning process has to build
upon that foundation of math for math's sake.
• Poor Math Education Hits Close To Home (Score:5, Interesting)
by Jason Levine (196982) on Wednesday December 22, 2010 @10:17AM (#34640660)
My older son is in the 2nd grade and is gifted (IQ somewhere around 140). Right now, they're learning simple addition. There's only one problem. He already learned this last year. He was doing
complex subtraction with my wife (a teacher) over the summer break. But the class is doing simple addition so that's what he's stuck on.
It gets worse. They're using a so-called "spiral curriculum": this essentially means they learn one way of figuring out that 8+3=11, then learn another way, then a 3rd, 4th and 5th way. My son
gets it the first time, yet he has to sit through all of the other ways. He yearns for more advanced math. He asked me about multiplication and division and, when I showed him an example using
Legos, he got the concept right away.
He already knows his times tables up to 5 and wants more. But school is boring to him because they don't push him. He isn't being challenged at all. He tends to act out when he's bored too which
makes everything more complicated. If you have a child who is falling behind in school, there are resources to help them catch up. If you have a child who is gifted and wants to pull ahead, your
kid needs to sit down, be quiet and learn for the fifth time what 8+3 equals.
□ Re:Poor Math Education Hits Close To Home (Score:5, Interesting)
by Buelldozer (713671) <cliff.gindulis@net> on Wednesday December 22, 2010 @12:04PM (#34641874)
I logged on for the sole purpose of replying to your post as our situations are so similar I couldn't let it pass without comment.
I realized in 1st grade that my son was the same as yours. His IQ doesn't test quite as high, somewhere around 130, but he has an intuitive grasp of certain things that's almost breathtaking.
I remember when he, at 5, described to me the mechanics behind a lunar eclipse! It wasn't even a topic of conversation, just out of the blue. Apparently he had been mulling it over and had
worked it out. Anyway, back to the subject.
Let me say you rock as a dad, not only for noticing the problem but working with your son. My son has also been subjected to the "spiral curriculum" and it's alternately made me want to rage
or laugh. Far too much time is spent teaching different ways to accomplish the same tasks and there is no way to speed it up for those who are bored. I solved this problem by advancing the
curriculum at home. When my son got bored with addition and subtraction I made the numbers bigger, when that became trivial I made them harder by including decimals, then harder again by
using fractions. When he became bored with multiplication and division I started teaching him Algebra. When his class moved on to kiddie Geometry and he grew bored with it I started him on
Geometry I. You get the idea. It was in Geometry this year where the teacher caught on to me.
His teacher and I had a major blowout when one of his Geometry papers was returned with a score of zero. My son was freaked and so was I. What did I do wrong? I went back and forth through
that paper for two hours looking for what had happened and couldn't find it. I called in the wife who has a Degree in Math and she couldn't find anything. I called in the Grandpa with dual
Masters (Chemistry and Physics) and 45 years experience as a High School teacher and he didn't find anything. I went to the school the next day and had his teacher explain why and you know
what the answer was? He forgot the damn degree symbols. Yes, that's right: a 10-year-old doing math work years ahead of his level received a zero with a page full of correct answers and a
companion page showing all of the work because he FORGOT THE DAMN DEGREE SYMBOLS.
Further she told me that she didn't like me teaching him this stuff because my way was different than hers which made it difficult for her to grade his papers and he confused other children
when he tried to help them! I didn't know whether to cry or murder her. The depth of willful stupidity on display at that moment still staggers me. In the end I politely told her that I
wasn't going to stop doing it because his education was more important than her classroom. I left shaking my head and wondering how our education system got this screwed up.
So you keep rocking on Dad, you keep pushing his curriculum and teaching him. What the idiots at the school won't do for him is your privilege, and responsibility, to provide. When he grows
bored you up the ante and make it more challenging by showing him the "Big Boy" way and giving him something new to explore. Someday when he outgrows your ability you can sit back and proudly
tell him "Son, I don't have anything left to teach you." and then watch him start learning it for himself.
☆ Harry Chapin (Score:4, Informative)
by geek2k5 (882748) on Wednesday December 22, 2010 @03:04PM (#34643984)
The mindset of the teacher reminds me of the Harry Chapin song "Flowers Are Red."
Teachers that are that narrow minded should be transferred to places where they can't do any damage to students. Perhaps a prison environment would be best for them. They could at least
try to help some of the people they screwed over.
• by Animats (122034) on Wednesday December 22, 2010 @03:22PM (#34644164) Homepage
Some in their 50s or so may remember "New Math", which was an attempt to teach elementary math with more emphasis on the underlying theory. It's now widely considered to have been a disaster. The
author of the original article seems to date from that era.
One of the approaches to fundamental mathematics is to start with axiomatic set theory and build up from there. (That's not the only approach; one can also start with the Peano axioms and build
up to set theory via lists, as is done in constructive Boyer-Moore theory.) This is minimalist and elegant (which is why mathematicians like it) but it requires considerable theoretical
development before you get to addition. Teaching kids arithmetic that way was a disaster.
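For illustration, here is a sketch of the kind of buildup involved before "addition" even exists, using Python integers to stand in for Peano numerals (circular as a foundation, of course, but it shows the recursion a student would have to internalize just to justify 8+3=11):

```python
def succ(n):
    # The successor function: the only primitive besides zero.
    return n + 1

def add(a, b):
    # Peano-style recursion: a + 0 = a,  a + succ(b) = succ(a + b)
    return a if b == 0 else succ(add(a, b - 1))

print(add(8, 3))  # -> 11
```

Every familiar fact about addition (commutativity, associativity) then needs its own inductive proof on top of this definition, which is exactly the "considerable theoretical development" that made the approach a poor fit for children.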
Euclid's approach to axiomatic geometry is like that, too. There's a lot of abstract logical structure that has to be built up before you can do anything. That's how math was taught up to 1900 or
so, and 7th grade geometry is still often taught that way.
That's the "liberal arts" approach to mathematics. It's an intellectual exercise forced onto little kids. Even if you use advanced mathematics in your work, it's very rare to need either
axiomatic set theory or axiomatic plane geometry.
A completely different approach can be found in some math courses given during WWII to soldiers who needed to do technical work. These were utterly practical. Trigonometry was taught with
direct applications to surveying and static structural analysis. After that trig course, you could calculate the size of the beams required for a truss bridge. The calculus course covered
subjects like the ballistics of big guns. (I especially liked the "tables method" of integration, which taught you how to use those tables of integrals in the back of the book.)
There's a mindset in math teaching that math is about "puzzles". It's not. (Mathematics in England at the university level went off into that dead end for a century, with rated "wranglers" and
"senior wranglers", until Hardy kicked them out of it.) But the school version of mathematics overstresses puzzles, because they're easy to assign and grade. That's a bigger problem than the
"liberal" aspect.
For a non-puzzle curriculum, see PSSC Physics, which was taught in the 1960s. Lots of little experiments which required some calculation and data analysis.
Re: distribute
Dear Autodesk Simulation CFD experts,
I would like to model a thin plate with a bank of holes. The thickness of the plate is 2 mm. The diameter of the holes is 1.5 mm. The center-to-center distance between holes is 4.8 mm. It looks like I can
use the K-Factor method. But how can I find out the K value? Or is there any other method in CFD 2014 that would be better suited for this?
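The hole geometry given in the post fixes the plate's open-area ratio, and from that one generic textbook-style estimate for a sharp-edged perforated plate's loss coefficient can be sketched. Note the assumptions: a square hole pattern and a discharge coefficient of about 0.62 are not stated in the post, and the orifice-style formula below is a rough hand estimate, not an Autodesk Simulation CFD feature, so treat it only as a sanity check against whatever the solver's K-Factor documentation recommends.

```python
import math

hole_d = 1.5    # mm, hole diameter (from the post)
pitch = 4.8     # mm, center-to-center spacing (square pattern assumed)

# Open-area (porosity) ratio: one hole's area over its unit cell's area.
sigma = (math.pi / 4) * (hole_d / pitch) ** 2
print(round(sigma, 3))  # -> 0.077

# Rough sharp-edged-orifice estimate for the loss coefficient K,
# where delta_P = K * (1/2) * rho * v^2 with v the approach velocity.
# Cd ~ 0.62 is an assumed discharge coefficient.
cd = 0.62
k_estimate = (1 / (cd * sigma) - 1) ** 2
```

With only ~7.7% open area, the estimate comes out as a very large K, which matches the intuition that such a plate is a significant flow restriction.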
A Busy Schedule of Topology and Quantum Gravity Nets Christensen a Distinguished Research Professorship
2007 Distinguished Research Professor Dan Christensen
by Mitchell Zimmer
Prof. Dan Christensen of the Department of Mathematics is one of the recipients of the Distinguished Research Professorships for 2007-2008. These awards release qualified science faculty from their
teaching duties to provide an essential opportunity to concentrate, focus and reflect on their research.
Christensen will be traveling extensively as part of his distinguished professorship, partly because his research topics are a mix of topology and quantum gravity.
The professorship will allow him and his students to go to Mexico for a conference on quantum gravity. Then in September he's going to Oberwolfach in Germany for a week to work on topology. He'll also
go to the Banff International Research Station to be part of a program where two or three people from different places gather to, again, work on topology. Then it's back to work on quantum gravity as he
attends two conferences at the Perimeter Institute in Waterloo.
At first topology and quantum gravity seem quite disparate, but there is some connection. “Topology is useful in subjects where anything geometrical arises. That certainly happens in physics, and
also in completely different areas.” Christensen stresses that much of the physics work is computational which is unlike topology. “It’s different, but I like that difference.”
Topology doesn’t refer to the exact shape of things but of similar relationships. Topologically speaking, a doughnut and a teacup are similar in that they are both structures with only one hole.
“We’ve learned in the past twenty or thirty years that there’s a way to look at topology that lets the same methods be used in other subjects,” says Christensen. “Now you can really write down a
proof of a fact and that proof will translate exactly to a proof of a different fact, say, in algebra. I’ve done a lot of that where you wouldn’t have come up with that idea from a purely algebraic
point of view, but because you think about it topologically you can get a new result. Then the same thing happens in the other direction."
In the field of quantum gravity, Christensen is working with researchers on the concept of loop quantum gravity. “It’s very different from any other proposal” says Christensen. “We have no idea if
it’s correct, but there are certainly lots of things that work really nicely in the theory. It has lots of nice properties and it is also something that you can compute with. The main idea is that
space and time are actually discrete at a very small distance scale. It's called the Planck length, which is 10^-35 of a metre, so it's way beyond anything that we've ever probed before, way beyond
what particle accelerators can probe." To put this measurement in perspective, 1 cm is 10^-2 metres, 1 mm is 10^-3 metres, and an atomic nucleus is around 10^-15 metres, so 10^-35 metres is extremely
small.
At that length scale, the effects of gravity and quantum phenomena become equally important where space and time are made up of tiny components. It just so happens that these components are the same
sorts of things that are used in topology. As Christensen says, “that’s a direct connection that’s very useful.” This type of theoretical framework allows him to compute some of the possibilities of
what the theory predicts. “The theory was written down for years,” says Christensen, “but no one had said what it actually produces as a prediction so I did some computations like that. Almost every
time I’ve done something like that, the answer has been really surprising. Sometimes we noticed strange patterns and then we were able to prove rigorously that those actually happened. So there’s
real hope that this could be something that's related to the real world. It's not expected to be the final answer but it might be in the right direction."
Homework Help
Posted by gt on Wednesday, April 24, 2013 at 11:57pm.
In the right triangle shown below, the length of AB is 8 units, ∠A measures 60°, sin 60° ≈ 0.866,
cos 60° ≈ 0.5, and tan 60° ≈ 1.73. Approximately how many units long is BC, to the nearest
hundredth of a unit?
A. 4.00
• trignometry - Reiny, Thursday, April 25, 2013 at 8:35am
Since you posted this twice, I think you realized that the symbols did not come out like you intended.
I think you meant this:
AB = 8, angle A = 60°
then you are given
sin60° = .866
cos60° = .5
tan60° = 1.73
the problem is that we don't know where the 90° angle is, could be at C or at B
If angle B = 90°
BC/8 = tan60
BC = 8tan60 = 8(1.732) = appr 13.86
if angle C = 90°
BC/8 = sin60
BC = 8sin60 = 8(.866) = appr 6.93
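Both cases from the answer above can be checked numerically; a quick Python sketch:

```python
import math

ab = 8.0
angle_a = math.radians(60)

# Case 1: right angle at B  ->  BC / AB = tan(A)
bc_right_angle_at_b = ab * math.tan(angle_a)
print(round(bc_right_angle_at_b, 2))  # -> 13.86

# Case 2: right angle at C  ->  BC / AB = sin(A)
bc_right_angle_at_c = ab * math.sin(angle_a)
print(round(bc_right_angle_at_c, 2))  # -> 6.93
```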
Checking for invertibility of large matrices in MAGMA
If you have a number of large matrices, and you wish to determine whether each matrix has determinant zero or not, what is the most efficient way to do this in MAGMA
(it appears that calculating the rank is slightly more efficient than calculating the determinant).
**EDIT: **In case it helps, the matrix entries are rational functions in two commuting variables, which come from the coefficients of a power series in a third, noncommuting variable: the aim is to
get some sort of indication of when a power series represents a rational function, which requires checking the determinant of progressively larger matrices until it starts being zero. (Although the
overall setting is noncommutative, everything in the matrices themselves is commutative so there's no need to worry about left/right determinants, quasi-determinants, etc.)
1 What is in these matrices? Integers, rational, floating point numbers? – Federico Poloni Mar 27 '12 at 18:55
3 Answers
I don't know about Magma specifically, but in general, computing the determinant modulo a bunch of primes is the way to go (bunch = enough small primes so that their product
exceeds the Hadamard bound, but of course, once the determinant is nonzero modulo some prime, you can safely halt).
EDIT Just a remark: the above is particularly fast for checking that your matrices are NOT singular, since if the determinant is really zero, you will have to do a lot more
checking to be sure. On the other hand, computing the rank will ALWAYS be much slower than this.
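The OP's matrices actually have rational-function entries, so the modular strategy doesn't apply to them directly, but for the integer case the accepted answer's idea looks roughly like this sketch (plain Python for illustration, not Magma):

```python
def det_mod_p(matrix, p):
    # Determinant over GF(p) by Gaussian elimination; p must be prime.
    m = [[x % p for x in row] for row in matrix]
    n = len(m)
    det = 1
    for col in range(n):
        pivot = next((r for r in range(col, n) if m[r][col]), None)
        if pivot is None:
            return 0                      # singular modulo p
        if pivot != col:
            m[col], m[pivot] = m[pivot], m[col]
            det = -det % p                # a row swap flips the sign
        det = det * m[col][col] % p
        inv = pow(m[col][col], p - 2, p)  # Fermat inverse, valid since p is prime
        for r in range(col + 1, n):
            factor = m[r][col] * inv % p
            for c in range(col, n):
                m[r][c] = (m[r][c] - factor * m[col][c]) % p
    return det

# Nonzero modulo the very first prime tried -> certainly invertible over Q,
# so in the "not singular" case you can halt immediately.
print(det_mod_p([[2, 1], [7, 4]], 5))   # det = 1  ->  1
print(det_mod_p([[1, 2], [2, 4]], 5))   # genuinely singular  ->  0
```

As the answer notes, a zero result modulo one prime only says "maybe singular"; certainty requires enough primes that their product exceeds the Hadamard bound on the determinant.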
Thanks for a quick response. – dward1996 Mar 27 '12 at 14:18
But dward1996 did not say that his matrices have integer entries, did he? – Federico Poloni Mar 27 '12 at 18:55
1 He did say "MAGMA", which people do not usually do floating point computations in... – Igor Rivin Mar 27 '12 at 19:02
I can guess that both the rank and the determinant are computed through some kind of (pivoted) LU factorization.
If so, in order to compute the determinant, after computing the LU factorization, you have to take the product of the factors on the diagonal of U, so it is not surprising that it
takes more time than computing the rank: there are indeed more operations to do.
There are different parameters for the Determinant command when working over the integers. You should take a look at the online documentation:
It's quite comprehensive.
I just looked, and I do not see the options as especially useful for the OP's problem, unless the Magma people had put in a special hack for zero-checking. – Igor Rivin Mar 27
'12 at 19:21
There is also the IsSingular() command which could be interesting. But unfortunately the manual does not say anything about the implementation. – Hans Giebenrath Mar 27 '12 at
Thanks for that. I will give the IsSingular command a try. – dward1996 Mar 28 '12 at 7:37
Combinations of Toppings when Ordering a Pizza
Date: 05/19/2005 at 12:08:04
From: Tom
Subject: Combinations/Permutations
How many combinations of pizza can be made with 6 different toppings?
Assuming that double toppings are not permitted, can you explain why
the answer is 2^6? Thanks.
I get the same answer using c(6,0) + c(6,1) + c(6,2) + ... + c(6,6),
but I can't understand why 2^6 works other than that both = 64.
Date: 05/27/2005 at 09:49:26
From: Doctor Wilko
Subject: Re: Combinations/Permutations
Hi Tom,
Thanks for writing to Dr. Math!
I was confused by this answer when I first saw it in a statistics
class too. But the reasoning is of a binary nature. You can either
add the topping or not. Your solution of C(6,0) + ... + C(6,6) is
probably more intuitive at first, but it turns out both answers are
I can ask if you want each of these toppings on your pizza and you can
give me one of two answers:
Cheese: Yes or No. 2 answers
Peppers: Yes or No. 2 answers
Olives: Yes or No. 2 answers
Sausage: Yes or No. 2 answers
Anchovies: Yes or No. 2 answers
Onions: Yes or No. 2 answers
Therefore, the answer is:
2 * 2 * 2 * 2 * 2 * 2 = 2^6 = 64 different pizzas
You've made a neat connection with the combinations that I also
discovered on my own and was pretty excited about when I first saw it.
2^n = C(n,0) + ... + C(n,n)
This connection can be more obvious if you see how it fits into
Pascal's Triangle and Combinations. I'll provide a link below for you
to look at.
Knowing this connection just gives you another tool that you can use
to solve problems like this. You'll find with counting problems that
there are usually multiple ways to get to the answer.
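All three counts described above agree, and that can be checked directly, for example in Python:

```python
from itertools import combinations
from math import comb

toppings = ["cheese", "peppers", "olives", "sausage", "anchovies", "onions"]
n = len(toppings)

# A yes/no choice for each topping
by_product = 2 ** n

# Summing the binomial coefficients C(6, 0) + ... + C(6, 6)
by_binomials = sum(comb(n, k) for k in range(n + 1))

# Brute-force enumeration of every subset of toppings
by_enumeration = sum(1 for k in range(n + 1) for _ in combinations(toppings, k))

print(by_product, by_binomials, by_enumeration)  # -> 64 64 64
```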
Feel free to visit our archives for more insight on this topic:
Permutations and Combinations
Pascal's Triangle
Does this help? Please write back if you have questions.
- Doctor Wilko, The Math Forum
Posted by anonymous on Thursday, February 17, 2011 at 7:11pm.
How do the ideas of divisibility and multiples relate to the study of fractions?
Show that 1,078 and 3,315 are relatively prime.
What is the shortest length of television cable that could be cut into either a whole number of 18-ft pieces or a whole number of 30-ft pieces?
When finding the factors of 841, what is the largest factor you would have to test? What theorem supports this?
Find the LCM of the numbers 24 and 32 by using:
a. the listing multiples method.
b. the prime factorization method.
The product of two numbers is 180. The LCM of the two numbers is 60. What is the GCF of the numbers?
Explain how you know
You know that a number is divisible by 6 if it is divisible by both 3 and 2. So why isn’t a number divisible by 8 if it is divisible by both 4 and 2?
What characteristic do the numbers 8, 10, 15, 26, and 33 have that the numbers 5, 9, 16, 18, and 24 don’t
have? (Hint: List the factors of the numbers.)
Give two more numbers that have this characteristic.
Do you think that the formula p = 6n + 1 where n is a whole number, will produce a prime number more than 50% of the time?
Give evidence to support your conclusion.
• math - Ms. Sue, Thursday, February 17, 2011 at 7:20pm
How would you like us to help you with this assignment?
• math - Anonymous, Thursday, February 17, 2011 at 7:30pm
1078=2 * 7 * 7 * 11
3315=3 * 5 * 13 * 17
These two numbers have no common divisors.
That's why 1,078 and 3,315 are relatively prime.
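You can also check relative primality without factoring at all, using Euclid's algorithm; a quick Python sketch:

```python
def gcd(a, b):
    # Euclid's algorithm: repeatedly replace the pair (a, b) with (b, a mod b).
    while b:
        a, b = b, a % b
    return a

print(gcd(1078, 3315))  # -> 1, so 1,078 and 3,315 are relatively prime
```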
• math - HannahLove, Monday, August 8, 2011 at 7:15pm
57 percent of 500
factorise the equation
I assume you're being asked to factorise this expression or, alternatively, find the roots of f=0. Since the constant term is zero, there's a factor of x (equivalently, x=0 is a root). So f = x(x^2+4x+3). You can find the roots of the quadratic x^2+4x+3, for example, by the quadratic formula, and they are -1 and -3. These correspond to the factors (x+1) and (x+3).
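As a sanity check (assuming the expression was f(x) = x^3 + 4x^2 + 3x, which the answer implies but the original post doesn't show):

```python
def f(x):
    # f(x) = x^3 + 4x^2 + 3x  (assumed from the answer's factorisation)
    return x**3 + 4*x**2 + 3*x

# The factorisation x(x + 1)(x + 3) predicts roots at 0, -1, and -3.
print([f(r) for r in (0, -1, -3)])  # -> [0, 0, 0]
```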
Albany, CA Calculus Tutor
Find an Albany, CA Calculus Tutor
...Please get in touch with me – I will be very happy to help you succeed.I tutor AP Statistics and college level introductory statistics courses. Many students, who are new to statistics, think
of it as “pure math” type of a subject; however there is a lot of real world application in statistics, ...
14 Subjects: including calculus, statistics, geometry, algebra 2
...I also participated in several undergraduate and graduate student mentoring programs at UC Berkeley, which included group seminar presentations as well as one-on-one tutoring sessions of
students from science and engineering majors, including minority students. I actually co-founded the Bioengin...
24 Subjects: including calculus, chemistry, physics, geometry
...I am a native speaker of Mandarine Chinese. I finished High School in Taiwan and then after 2 years in obligatory military service I came to US to study for college. I am an immigrant.
17 Subjects: including calculus, reading, physics, statistics
...I will begin teaching high school math in Oakland in September. I have been tutoring math for the last seven years. I have worked freelance, for DVC in Pleasant Hill (at their math lab), and
for UC Santa Cruz (as a learning assistant). I have tutored pre-algebra, algebra, geometry, statistics, ...
15 Subjects: including calculus, reading, statistics, SAT math
...I first entered the teaching field in the public school setting, where I worked for four years on the east Coast before relocating to the West Coast and have been working as a high school
physics/geoscience/mathematics teacher for the past twelve years. Outside of the school setting, I have had ...
32 Subjects: including calculus, reading, physics, geometry
DCR AND OHM IMPEDANCE CONNECTION [Archive] - Lansing Heritage Forums
View Full Version : DCR AND OHM IMPEDANCE CONNECTION
10-18-2006, 05:02 AM
I would like to know what the connection is between DCR and impedance in a loudspeaker.
Some speakers have an 8 ohm impedance and a 6.3 ohm DCR, for example!
Thank you
10-18-2006, 05:13 AM
DCR is the actual measurement you get with an ohm meter when testing across the leads with the speaker unconnected and at rest. The "nominal" impedance is an "average" rating based over the speaker's
normal operating frequencies. Points along this line may be above and below the average value.
10-18-2006, 06:08 AM
Thank you johanec
Does it mean if I have a 16 ohms rated speaker ( J model ) and 9.8 dcr , the speaker will be more like an 8 ohms ( H speaker ) ? Will that influence the crossover frequency ?
10-18-2006, 06:41 AM
You should have an impedance sweep done. That 9.8 might be the minimum point. The crossover should probably be designed for the impedance at the crossover point.
10-18-2006, 07:34 AM
DCR = DC ( 0 hertz) resistance. This is what the actual resistance of the wire in the coil is. This can be all over the map and does NOT always track the impedance
Impedance = the **AC** ( measured and averaged over a range of frequencies ) "resistance" a component, network or system has.
This can go up and down depending on many factors but usually the average is specified.
In the case of the large 4" JBL coils, the DCR is around 6.2 for an *8* ohm speaker.
9.8 is right in the ballpark for a *16* ohm speaker.
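As a toy illustration of why the DC reading sits below the nominal impedance, here is a simple series R-L voice-coil model in Python. The values are made up for illustration, and the model deliberately ignores the motional resonance peak that real drivers show, so it only captures the inductive rise:

```python
import math

def series_rl_impedance(f_hz, dcr_ohms, inductance_h):
    # |Z| of an idealized series R-L voice-coil model.
    # At f = 0 the inductor contributes nothing, so |Z| is just the DCR.
    reactance = 2 * math.pi * f_hz * inductance_h
    return math.sqrt(dcr_ohms ** 2 + reactance ** 2)

print(series_rl_impedance(0, 6.2, 0.0005))      # DC: the meter reads the DCR
print(series_rl_impedance(1000, 6.2, 0.0005))   # |Z| rises with frequency
```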
scott fitlin
10-18-2006, 11:53 AM
To understand better what they are saying about the impedance rising and falling with frequency: when you have the ohm meter connected to the woofer, gently push the cone forward from the
backside, and watch the meters display read out all kinds of crazy numbers.
Its sort of an idea of whats going on when the woofer is actually playing music. Impedance is nominal, not exact.
10-19-2006, 04:26 PM
pushing in a cone is the same as talking into a dynamic mic. The coil moves in a magnetic gap which creates a voltage.
this has nothing to do with impedance. the ohmmeter's internal voltage is added to or subtracted from what the motor generates.
scott fitlin
10-19-2006, 05:13 PM
pushing in a cone is the same as talking into a dynamic mic. The coil moves in a magnetic gap which creates a voltage.
this has nothing to do with impedance. the ohmmeter's internal voltage is added to or subtracted from what the motor generates.
But doesn't impedance fluctuate as the woofer plays signal? I understand that what I'm saying isn't actual impedance, but it sort of gives an idea of how the impedance fluctuates with signal, motion,
and frequency. I did say " sort of "!
As the woofer moves in and out to signal, the impedance doesnt fluctuate, varying with frequency?
10-19-2006, 05:24 PM
But doesnt impedance fluctuate as the woofer plays signal? I understand that what Im saying isnt actual impedance, but it sort of gives an idea of how the impedance fluctuates with signal, motion,
and frequency. I did say " sort of "!
As the woofer moves in and out to signal, the impedance doesnt fluctuate, varying with frequency?
Sure it does Scotty.
Look at any of the impedance plots in the forum and you'll see that actual impedance is dynamic.
Fort Gillem, GA Geometry Tutor
Find a Fort Gillem, GA Geometry Tutor
Patient, creative and equipped to tutor homeschooled, middle school, and high-school students, as well as college students who just need a little remedial math help to facilitate their college
mathematics. Adept at helping your understanding of geometry. Geometry is the branch of mathematics concerned with shape, size, relative position of figures, and the properties of space.
6 Subjects: including geometry, algebra 1, algebra 2, trigonometry
...I know the content of the test like the back of my hand, and I know lots of little tricks and strategies to help you bring your scores up. I ask my students to determine their "baseline" score
by taking a practice test, and to decide what their target score is (usually based on what college they...
25 Subjects: including geometry, reading, calculus, GRE
...My professional experience spans from elementary school (special education) to graduate school, and includes certification in middle school mathematics and social studies. Learning should
always be an enjoyable experience and, with time, every student can R.I.S.E. to the occasion and perform to ...
45 Subjects: including geometry, English, reading, writing
...I have since earned a Masters degree in Physics and I am currently working on my PhD research. I have worked as a teaching assistant for introductory physics classes at Georgia Tech. I spent
two semesters teaching electricity and magnetism and one semester teaching classical mechanics.
9 Subjects: including geometry, calculus, physics, algebra 1
...My students say that I am able to relate math to them on a level that helps them understand the topic.I am a college professor of mathematics. I have my own 6 year old working on the 2nd grade
level in mathematics. I have passed the Elementary Math qualifying test.
11 Subjects: including geometry, algebra 1, algebra 2, SAT math
Do sets with positive Lebesgue measure have same cardinality as R?
I have been thinking about which kind of wild non-measurable functions you can define. This led me to the question:
Is it possible to prove in ZFC that if a (Edit: measurable) set $A\subset \mathbb{R}$ has positive Lebesgue measure, it has the same cardinality as $\mathbb{R}$? It is obvious if you assume CH, but can you prove it without CH?
set-theory measure-theory
1 Life without GCH is hard business. That's why, you should always remember to bring your GCH along with you wherever you go. – Harry Gindi Dec 16 '09 at 9:13
4 Amusing, Harry, but not apt. The answer to Sune's question is yes! – Jonas Meyer Dec 16 '09 at 9:22
5 Answers
I found the answer in the paper "Measure and cardinality" by Briggs and Schaffter. In short: not if I interpret positive measure to mean positive outer measure. A proof is given that
every measurable subset with cardinality less than that of $\mathbb{R}$ has Lebesgue measure zero. However, they then survey results of Solovay that show that there are models of ZFC in
which CH fails and every subset of cardinality less than that of $\mathbb{R}$ is measurable, and that there are models of ZFC in which CH fails and there are subsets of cardinality $\aleph_1$ that are nonmeasurable. So it is undecidable in ZFC.
If it was intended that our sets are assumed to be measurable, then the answer would be yes by the first part above.
Edit: In light of the comment by Konrad I added a couple of lines to clarify.
1 I'm not sure I understand your answer. Don't you say that if $A\subset \mathbb{R}$ has positive measure, it cannot have the same cardinality as $\mathbb{R}$? – Sune Jakobsen Dec 15
'09 at 11:57
1 But doesn't asserting that a set has positive Lebesgue measure tacitly assume that it is measurable? – Konrad Swanepoel Dec 15 '09 at 12:03
3 Konrad, I feared there might be confusion about that. Strictly speaking that is what it should mean, but I was taking it loosely and thinking of outer measure. If only measurable
sets were intended, what I described above shows that the answer is "yes". Sune, no. But perhaps I caused confusion for the reason Konrad mentions. – Jonas Meyer Dec 15 '09 at 12:10
The answer to the question is that it is independent of ZFC, if one is speaking of outer measure.
The right context for the question and its answer is the very active research area known as Cardinal Characteristics of the Continuum. The point of this subject is to investigate exactly
how the dichotomy between countable and continuum plays out in situations when CH fails. For example, in this area, researchers define a number of cardinal invariants:
The bounding number b is the size of the smallest unbounded family of functions from ω to ω. There is no function that bounds every member of the family.
The dominating number d is the size of the smallest dominating family of functions ω to ω. Every function is dominated by a member of the family.
The additivity number for measure is the smallest number of measure zero sets whose union is not measure zero.
The covering number for measure is the smallest number of measure zero sets whose union is all of R.
The uniformity number for measure is the size of the smallest non-measure zero set.
The cofinality number for measure is the smallest size of a family of measure zero sets, such that every measure zero set is contained in one of them.
Remarkably, none of these numbers is provably equal to any other. In addition, there are models of set theory separating each of them both from $\omega_1$ and from the continuum. In the case
of the uniformity number, this is the answer that Jonas Meyer has pointed to above.
One can define similar numbers using the ideal of meager sets in place of the ideal of measure zero sets, and the relationships between all these cardinal characteristics are precisely
expressed by Cichon's diagram. In particular, no two of them are provably equal, and there are models of set theory exhibiting wide varieties of possible relationships.
There are dozens of other cardinal characteristics, whose relationships are the focus of intense study by set theorists working in this area. The main tool for separating these cardinal
characteristics is the method of forcing and especially iterated forcing.
3 And if Joel's answer isn't enough to put to bed all the comments that foundational questions don't have any relationship to "real" mathematics, nothing will. Thanks, Joel, for the very
knowledgeable and tantalizing response. – Andrew L May 29 '10 at 20:28
1 Belated thanks for the vote of confidence, Andrew! But to be honest, in the math circles with which I am familiar, one doesn't really seem to hear such comments as those to which you
refer. But I've heard that things used to be different... – Joel David Hamkins Jan 29 '12 at 23:10
I am not sure if this answer can be helpful or not, but since it is a very elementary approach to the problem, it might be useful. Assume we are given a subset of R, say A, with cardinality smaller than R. Then you can show that the cardinality of A and A+A for any infinite set A is the same. Hence the cardinality of A+A is the same as the cardinality of A, which is smaller than R. But you can prove that for any set A of positive measure, A+A has at least one open interval as a subset. This is a contradiction with the fact that the cardinality of A+A is smaller than R.
That's neat! The proof that Briggs and Schaffter give is also elementary but very different, so it's useful to see this one too. – Jonas Meyer Dec 16 '09 at 8:49
How do you prove that if A has positive measure, A+A contains an open interval? – Sune Jakobsen Dec 16 '09 at 13:25
Let's prove that if A is of positive measure then A+A contains an interval. First show that if m(A)>0 then there is an open interval L such that m(A intersect L)>(3/4)m(L). Now use this
to show that A-A contains the interval K=(-0.5m(A),0.5m(A)). For the last part let b be a number inside K. Consider all the pairs inside L that their subtraction is equal to b. Prove
that A contains at least one of those pairs, otherwise the inequality at the beginning of the proof cannot be true. – Ehsan Dec 16 '09 at 16:16
Sorry that I proved above that A-A contains an interval. This is also true for A+A but the proof will be slightly different. Anyways we could start with A-A in the proof of the original
problem. – Ehsan Dec 16 '09 at 16:29
1-This is one of the fundamental theorems about measurable sets: you can approximate their measure from above by open sets. You can find a proof in real analysis books like Folland. 2-Yes, the proposition will still be true if you consider any positively measurable sets A and B. Thanks for the comment, I just wanted to make it as easy as possible. – Ehsan Dec 19 '09 at 1:13
I'm interpreting the question as: Measurable, with positive measure, not as "having positive outer measure" (for which the answer is independent of the basic axioms of set theory, as
pointed out by Joel).
The answer is yes. By elementary properties of Lebesgue measure (regularity), for any $\epsilon>0$, any set $C$ of positive measure contains compact subsets $C_\epsilon$ of measure within $\epsilon$ of the measure of $C$ (interpret this as "arbitrarily large" if $C$ has infinite measure). Any set of positive measure is obviously uncountable. It is straightforward to see that
a compact uncountable set of the reals contains a perfect set, and that perfect sets have the same size as the reals. Therefore, $C$ must also have the size of the reals. (I guess the last
step uses the Schroeder-Bernstein theorem.)
(On a side note, Cantor proved the result that closed uncountable subsets of ${\mathbb R}$ have the size of the reals. This extends to larger collections of sets, e.g., to all uncountable Borel sets. The first approach to the continuum hypothesis was to try to keep on extending this result.)
To see that perfect sets have the size of the reals: Check that any perfect set has a "copy" of Cantor's set; this is standard; baby Rudin essentially shows how in an exercise in Chapter 1
or 2. Cantor's set, by construction, obviously has size $2^{|{\mathbb N}|}$. Check that the reals also have this size, e.g., by noticing that ${\mathbb R}$ and $(0,1)$ have the same size,
and identifying reals in $(0,1)$ with their infinite binary expansion. I suppose this may also use Schroeder-Bernstein depending on how one fleshes this outline out.
add comment
A set of positive measure contains a closed subset of positive measure. It is known that a closed subset of the reals has cardinality either continuum or at most countable (http://en.wikipedia.org/wiki/Perfect_set_property).
Assistant Principals 12-13 Leadership Project SMART Goals
Ideas and successful practices from principals in the Master Principal Program.
Tuesday, September 4, 2012
Assistant Principals 12-13 Leadership Project SMART Goals
23 comments:
1. I think I have bitten off more than I can chew, but it is something that has to be tackled, so here goes: students failing classes because of homework.
There is no specific date because this will be an ongoing process. I will start with action research looking at random students failing a class or classes. I will then look at why they are failing: because of tests, homework, or just lack of effort. From there I will build a PLC to look at this issue.
2. Here's my SMART goal for the Leadership Project:
By May 15, 2013, there will be a 10% reduction in the number of students referred for Special Education when compared to the number of students referred for 2011-2012.
Now to explain...for my Leadership project I would like to tackle the RTI system my school currently has in place. We have the formal RTI documents that teachers fill out when they have students
who need assistance, and have Tier 1, 2, and 3 interventions in place in both literacy and math. The project would be to pull all of these things together into a coherent system that works
continuously to find the best solutions for students. Right now we have several parts of the system working, but they are isolated and disjointed. The number of students referred for Sped.
has not been impacted significantly even with these parts in place. I would like to work with all the different personnel involved (collaboratively) and find a way to make our system more
meaningful for students and teachers. My goal is that the number of students who still need to be tested would be reduced because we would find out what works for them before getting to that point.
Julie Workman
Monticello Intermediate School
3. This comment has been removed by the author.
4. By May 1, 2013, there will be a 50% increase in the number of students in math pull-out interventions who have grown one grade level from math pre-test to post-test as measured by the Brigance
diagnostic testing tool.
Our school has had great success with interventions in the area of literacy. Utilizing that program design, I want to implement a pull-out intervention program that has an intense focus on number
sense and problem solving. Pre-tests show that we have 66 children in grades 3-5 who are currently working well below grade level, with most of these kids performing between the 1st and 2nd grade
level in mathematics. Data also shows all 66 students are struggling with basic number sense. My goal is to ensure that all students grow in the area of math and specifically that at least 50%
grow by a grade level (based on diagnostic tool) by the end of the 2012/13 school year.
Karen Norton
Monticello Intermediate School
5. By April 1, 2013, there will be 100% growth in literacy scores from at-risk to some-risk in the 3rd grade as measured by Dibels Next and Daze.
I am very excited with the SMART goal. We have pulled the 3rd grade students who are at-risk. This was determined from the Iowa test scores, Dibels Next and Daze scores. Our goal is to not only
provide small group intervention as well as one-on-one intervention but we want to involve the parents in the intervention process. By working with the teachers and instructional facilitators, we
are going to provide intervention instruction to these parents of the at-risk students. We are personally calling and inviting them to the school for pizza and then we will provide hands-on
activities for them to take home to work with their student. Every two weeks we will send home two more activities for them to use at home. We know that having parents involved in the education
of their child increases their chance for success at school. This will be my Leadership Project. My goal is to see 100% growth in these students through the interventions they are receiving at
school as well as from the intervention they are receiving from their parent. We will target phonics and fluency.
6. SMART Goal
By implementing enrichment activities in our classrooms on a daily basis, we will see growth in all students. My goal is to ensure that our above-grade-level students also continue to grow.
• Date: May 2012
• Measurable change: Implement enrichment activities so that when a student has already mastered a standard, they can work on a different assignment that will challenge them and build their background knowledge. 75% of students who scored above grade level will continue to improve their scores throughout the year.
• Target Population: Students who tested above grade level
• MAP scores. These tests are taken 3 times a year
_ Specific and strategic
_ Measurable
_ Attainable
_ Results-based
_ Time-bound
7. By May 2013 there will be an 80% change in the physical environment of the school climate as measured/evidenced by the physical appearance and cleanliness of Jacksonville High School.
D. Pilcher
8. By May 31, 2013 there will be a 50% decrease in the number of office referrals as compared to the 2011-2012 data as measured by the increased number of Classroom Walk-Throughs and consistent
1. How will you measure, target dates and what will be your action steps.
9. By December 9, 2012 there will be a 33% decrease in the noise volume between classes when moving from one room to the next or going to mini's.
10. By November 9th, a plan for improvement of PLC's will be created as evidenced by an established PLC Charter.
By Nov. 6th, 100% of the PLC participants will have taken the PLC survey as evidenced by reviewing the electronic data.
11. By May 30, 2013, there will be a 50% decrease in the number of tutoring absences as evidenced by comparison of first and second semester data.
12. SMART GOAL
By May 31, 2013 there will be a 50% increase in the number of SPED students who meet or exceed growth on the MAP assessment.
LEADERSHIP PROJECT
My leadership project includes creating staff development for teachers so that they can better understand and utilize student MAP data.
13. By May 31, 2013, there will be a 20% decrease on the number of 7th grade students placed in In-School Suspension as measured by school discipline data as compared to the 2011-2012 school year.
This goal will be conveyed with the 7th grade student body as well as the 7th grade team of teachers. The responsible parties involved in accomplishing this goal are the 7th grade student body,
the 7th grade team teachers, and myself.
There will be consistent communication throughout the year regarding this goal via classroom discipliine logs, office referrals, emails between teachers and principals, and school discipline pep
14. SMART Goal ... broken down into quarters.
By 2012-10-18, there will be an increase to at least 75% in feedback on weekly team notes as measured/evidenced by emails to teams.
By 2012-12-21, there will be an increase to at least 75% in feedback on weekly team notes as measured/evidenced by emails to teams.
By 2013-03-15, there will be an increase to at least 75% in feedback on weekly team notes as measured/evidenced by emails to teams.
By 2013-05-24, there will be an increase to at least 75% in feedback on weekly team notes as measured/evidenced by emails to teams.
15. By the end of the 2012-2013 school year, there will be an increase in proficiency on Common Formative Assessments for Algebra I students as measured by the intervention services provided by the math
1. At Star City High School we have implemented the use of Common Formative Assessments to provide point in time intervention services and intervention services in a "Focus" time to specifically
target learning deficiencies. Through the use of cloud technology, this is being implemented and will be my Leadership Project.
More to follow soon....
16. Here is my SMART Goal for T.G. Smith Elementary:
There will be a 10% increase in math and/or literacy growth goals for PAC/RTI (Professional Assistance Community/Response to Intervention) students as measured by an increase in NWEA MAP math and
/or literacy lexile scores from the Fall, Winter and Spring test scores.
Tonya Woods
T.G. Smith Elementary
17. By December 18, 2012, there will be an 80% increase in the number of passing grades in the freshmore advisory as measured by end of semester grades.
18. By May 2013, there will be 100% pass rate to the next grade level in the freshmore advisory as measured by grades/transcripts.
19. By May 2013, there will be an increase in the number of classroom walkthroughs. Currently we are seeing the school's 40 teachers about once a month. In May we would see that number change to 4 per teacher per month per administrator.
Plan - Find focus areas with teacher input in Literacy, Math, and Activity classes.
Do - Walk with specific areas listed above and report back to the teacher.
Check - Review data collected and review with the school leadership team.
Act - Change focus area with advice from the leadership team.
20. SMART Goal
By March 2014 there will be a 50% increase of bell to bell learning in Vilonia Middle School as evidenced by CWT and observations
Rodney Partee
Anyone may post a comment and read the comments of others. If you would like permission to offer an original post on this blog, please contact Diana Peer at dpeer@uark.edu.
The College Mathematics Journal - March 1998
Contents for March 1998
An Interview with Lars V. Ahlfors
Donald J. Albers
In the middle half of this century the subject of complex function theory was virtually defined by the work of Lars V. Ahlfors (1907-1996). In this 1994 interview he recalls his early years, and
surveys his career from a personal perspective. (More biographical information about Ahlfors and his mathematical work appears in the February, 1998 issue of the Notices of the American
Mathematical Society.)
Two Historical Applications of Calculus
Alexander J. Hahn
Two examples are given to illustrate how calculus has been energized by problems from basic science and engineering, and in turn the mathematics has enlightened and clarified these fields. The
first is a statics problem from the first calculus textbook, by the Marquis De L'Hospital. It quickly translates to finding the maximum value of an algebraic function, a problem quite suitable
for today's students. The second example analyzes a page in Galileo's notebooks where he records an experiment with balls rolling down an inclined plane. By constructing a mathematical model for
the motion, which involves finding and applying the moment of inertia of a ball, a convincing case is made that Galileo's data are the record of a genuine experiment, and not the result of a
thought experiment as some historians had once maintained.
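For readers curious about the model, here is a hedged sketch of the standard rolling-sphere kinematics such an analysis rests on (the 5/7 factor comes from the moment of inertia of a solid sphere; the numbers below are illustrative, not Galileo's data or Hahn's):

```python
# Rolling-ball kinematics: a solid sphere rolling without slipping
# down an incline accelerates at a = g*sin(theta) / (1 + I/(m r^2)),
# and for a solid sphere I = (2/5) m r^2, so a = (5/7) g sin(theta).
import math

g = 9.81                      # m/s^2
theta = math.radians(10.0)    # a shallow ramp, as in Galileo's setup

shape_factor = 2.0 / 5.0      # I/(m r^2) for a solid sphere
a = g * math.sin(theta) / (1.0 + shape_factor)

# Distance rolled from rest grows quadratically in time -- the
# "odd-number rule" Galileo observed: distances covered in successive
# equal time intervals stand in ratio 1 : 3 : 5 : 7 ...
d = [0.5 * a * t**2 for t in range(5)]
gaps = [round((d[i + 1] - d[i]) / d[1], 6) for i in range(1, 4)]
print(a, gaps)
```

Checking the successive gaps against the odd numbers is exactly the kind of pattern one can look for in Galileo's recorded data.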
Numerically Parametrizing Curves
Steven Wilkinson
Many computer software systems will make accurate plots of curves, in the plane or in space, that are defined by parametric equations. Some graphics systems will plot an implicitly defined plane
curve f(x,y) = 0 , but such plots rarely show correctly the behavior near points of self-intersection. Few if any systems will plot the curve of intersection of two implicitly defined surfaces in
space. This article derives a system of differential equations whose solution through a given point gives parametric equations for a given implicitly defined curve in the plane or in space.
Usually these systems cannot be solved exactly, but numerical methods provide approximate solutions that can be used by the parametric plotting routines to produce accurate plots. Ideas from
multivariable calculus, differential equations and linear algebra are used to derive the systems of differential equations. Many examples are worked out to explain how to implement this method
for specific curves, with or without singular points.
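For the planar case the idea is simple to sketch: for f(x, y) = 0, the tangent is perpendicular to the gradient, so one convenient system is x' = df/dy, y' = -df/dx. A minimal numerical illustration, using the unit circle as a stand-in (the test function, integrator, and step size are my own choices, not taken from the article):

```python
# Trace the implicitly defined curve f(x, y) = x^2 + y^2 - 1 = 0 by
# integrating the tangent field (x', y') = (df/dy, -df/dx) with RK4.

def grad_f(x, y):
    # Gradient of f(x, y) = x^2 + y^2 - 1 (the unit circle).
    return (2.0 * x, 2.0 * y)

def rhs(x, y):
    fx, fy = grad_f(x, y)
    return (fy, -fx)          # tangent direction, perpendicular to grad f

def rk4_step(x, y, h):
    k1 = rhs(x, y)
    k2 = rhs(x + 0.5 * h * k1[0], y + 0.5 * h * k1[1])
    k3 = rhs(x + 0.5 * h * k2[0], y + 0.5 * h * k2[1])
    k4 = rhs(x + h * k3[0], y + h * k3[1])
    return (x + h * (k1[0] + 2 * k2[0] + 2 * k3[0] + k4[0]) / 6.0,
            y + h * (k1[1] + 2 * k2[1] + 2 * k3[1] + k4[1]) / 6.0)

# Trace the curve starting from the point (1, 0).
pts = [(1.0, 0.0)]
for _ in range(2000):
    pts.append(rk4_step(*pts[-1], 0.005))

# Every computed point should still satisfy f(x, y) ~ 0.
drift = max(abs(x * x + y * y - 1.0) for x, y in pts)
print(drift)
```

The drift stays tiny because RK4's error is high order; the points could be handed directly to a parametric plotting routine, which is the article's point.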
Singles in a Sequence of Coin Tosses
David M. Bloom
In a sequence of n independent tosses of a fair coin, the number of singles, or runs of length one, is a random variable.
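The formula in this abstract did not survive extraction, but the quantity is easy to experiment with. A brute-force enumeration (my own check, not the article's analysis) confirms that the expected number of singles is (n + 2)/4 for n >= 2, which follows from linearity of expectation:

```python
# Exact expected number of "singles" (runs of length one) in n
# fair-coin tosses, by enumerating all 2^n sequences.
from fractions import Fraction
from itertools import product, groupby

def singles(seq):
    # Count maximal runs of length exactly one.
    return sum(1 for _, run in groupby(seq) if len(list(run)) == 1)

for n in range(2, 11):
    total = sum(singles(seq) for seq in product('HT', repeat=n))
    expected = Fraction(total, 2 ** n)
    assert expected == Fraction(n + 2, 4), (n, expected)

print("expected singles for n = 2..10 all equal (n + 2)/4")
```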
Looking at Order of Integration and a Minimal Surface
Thomas Hern, Cliff Long, and Andy Long
When attempting to make a computer plot of a standard counterexample to Fubini's theorem on interchanging the order of integration in iterated integrals (a simple rational function of two
variables), one of the authors noticed the similarity of the surface to a recently discovered minimal surface (the genus-1 Costa/Hoffman/Meeks surface). The Fubini surface can be deformed in a
visually appealing way to become the minimal surface, and this deformation gives insight into the beautiful shape of this minimal surface. With striking images the two surfaces are shown and the
deformation, which requires animation for optimal effect, is indicated by selected still views.
Graphics Supplement for this article http://129.1.5.114/minimal/
Classroom Capsules
Leonard Gillman, Revisiting Arc Length
The article "On Arc Length," by P.D. Barry in the November 1997 issue of the CMJ, used an axiom of Archimedes about curves with the same concavity, to find upper bounds for arc length. Here an
alternative approach is given, based on the geometrically appealing axiom that of two continuously differentiable functions, one of whose derivative is larger in absolute value than that of the other
throughout an interval, the graph of the one with the greater derivative has the greater arc length. This axiom, together with the additivity of arc length, leads immediately to the conclusion that
the arc length of a continuously differentiable function f(x) on [a,b] lies between every lower sum and every upper sum of $\int_a^b \sqrt{1 + f'(x)^2}\, dx$.
Conversely, the integral formula, together with additivity of arc length, implies this axiom.
James E. Mann, Jr., The Buckled Rail: Three Formulations
Imagine a steel rail one mile long with fixed ends that is heated so that its length increases by one inch, causing it to buckle upwards. Three cases are considered: the buckled shape is an isosceles
triangle, an arc of a circle, or one arch of a sinusoidal curve. In each case, what is the height of the center point? Finding the answer requires different mathematics in the three cases, with the
most difficult (and physically interesting) third case involving integration of a Taylor series expansion to approximate an arc-length integral.
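The arithmetic behind the puzzle is quick to check numerically. A sketch for the triangular and circular cases (units, method, and tolerances are my own choices, not the capsule's):

```python
# Buckled-rail puzzle: a one-mile rail grows one inch, ends fixed.
# How high is the midpoint under two different buckled shapes?
import math

MILE = 5280 * 12                 # rail length in inches
half_chord = MILE / 2.0
half_arc = (MILE + 1) / 2.0      # half of the heated length

# Case 1: isosceles triangle.  Each side has length half_arc over a
# base half of half_chord, so the Pythagorean theorem gives the height.
h_triangle = math.sqrt(half_arc ** 2 - half_chord ** 2)

# Case 2: circular arc.  With half-angle t and radius R:
#   half_arc = R*t,  half_chord = R*sin(t)  =>  sin(t)/t = chord/arc.
# Solve for t by bisection (sin(t)/t decreases on (0, pi)), then
# the height of the midpoint above the chord is R*(1 - cos(t)).
target = half_chord / half_arc
lo, hi = 1e-6, 1.0
for _ in range(200):
    mid = 0.5 * (lo + hi)
    if math.sin(mid) / mid > target:
        lo = mid                 # t is still too small
    else:
        hi = mid
t = 0.5 * (lo + hi)
R = half_arc / t
h_circle = R * (1.0 - math.cos(t))

print(f"triangle: {h_triangle / 12:.1f} ft, circle: {h_circle / 12:.1f} ft")
```

Both answers come out at more than ten feet, which is what makes the puzzle physically interesting.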
Viet Ngo and Saleem Watson, Who Cares if $x^2 + 1 = 0$ has a solution?
Four answers are given, which teachers might use in reply to a student who asks the question in the title. The first explains how Bombelli used complex numbers to find real roots of certain cubics by
Cardano's formula. Two replies explain the simplification that results when one uses complex exponential functions rather than products of real exponentials and trigonometric functions in solving
certain elementary differential equations. The final reply explains how considering complex values of x in the power series expansion of $1/(1+x^2)$ shows why the radius of convergence of this series is just 1, although the function is analytic on the real line.
Cheng-Shyong Lee, Polishing the Star
A recent Menelaus-type theorem of Hoehn about pentagrams is shown to be an immediate consequence of the law of sines.
David Callan, When is "Rank" Additive?
It is well known that the matrix rank is subadditive; that is, rank(A + B) ≤ rank(A) + rank(B). Here it is shown that equality holds if and only if the column spaces of the matrices A and B are disjoint and the row spaces are disjoint.
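Callan's criterion is easy to illustrate on a small example. The matrices and the toy exact-arithmetic rank routine below are my own; only the subadditivity statement comes from the capsule:

```python
# Matrix rank is subadditive, with equality when the column spaces of
# A and B are disjoint and the row spaces are disjoint.
from fractions import Fraction

def rank(M):
    # Row-reduce over the rationals and count the pivots.
    M = [[Fraction(x) for x in row] for row in M]
    r = 0
    for col in range(len(M[0])):
        pivot = next((i for i in range(r, len(M)) if M[i][col] != 0), None)
        if pivot is None:
            continue
        M[r], M[pivot] = M[pivot], M[r]
        for i in range(len(M)):
            if i != r and M[i][col] != 0:
                f = M[i][col] / M[r][col]
                M[i] = [a - f * b for a, b in zip(M[i], M[r])]
        r += 1
    return r

A = [[1, 0], [0, 0]]   # column space = span(e1), row space = span(e1)
B = [[0, 0], [0, 1]]   # column space = span(e2), row space = span(e2)
S = [[a + b for a, b in zip(ra, rb)] for ra, rb in zip(A, B)]

print(rank(A), rank(B), rank(S))   # ranks 1, 1, 2: additivity holds
```

Because both pairs of spaces are disjoint here, the rank of the sum equals the sum of the ranks, exactly as the criterion predicts.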
Computer Corner
Charles R. Johnson and Brenda K. Kroschel, Clock Hands Pictures for Real 2 x 2 matrices
The clock hands picture for a real 2 x 2 matrix A is an animated plot of a unit vector x along with the vector Ax. For invertible A, as the vector x sweeps out the unit circle, the image Ax sweeps out an ellipse. This movie makes several characteristics of the matrix visible. For example, unit eigenvectors corresponding to real eigenvalues are those that are collinear with their image Ax, the length and direction of Ax showing the eigenvalue. The lengths of the semiaxes of the ellipse are the singular values of A, and the corresponding (right) singular vectors lie on the semiaxes. These and other features of the clock hands pictures are discussed.
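The relationship between the image ellipse and the singular values is easy to verify numerically; the sample matrix and sampling scheme below are my own illustrative choices, not the article's:

```python
# As x runs over the unit circle, |Ax| ranges between the two singular
# values of the 2x2 matrix A (the semiaxes of the image ellipse).
import math

A = [[2.0, 1.0],
     [0.0, 1.0]]

# Singular values: square roots of the eigenvalues of A^T A.
a = A[0][0] ** 2 + A[1][0] ** 2            # (A^T A)[0][0]
b = A[0][0] * A[0][1] + A[1][0] * A[1][1]  # (A^T A)[0][1]
c = A[0][1] ** 2 + A[1][1] ** 2            # (A^T A)[1][1]
disc = math.sqrt((a - c) ** 2 + 4 * b * b)
s1 = math.sqrt((a + c + disc) / 2)         # larger singular value
s2 = math.sqrt((a + c - disc) / 2)         # smaller singular value

# Sweep the unit circle and record |Ax| at each sample.
norms = []
for k in range(3600):
    t = 2 * math.pi * k / 3600
    x, y = math.cos(t), math.sin(t)
    u, v = A[0][0] * x + A[0][1] * y, A[1][0] * x + A[1][1] * y
    norms.append(math.hypot(u, v))

print(max(norms), s1)   # semimajor axis of the image ellipse
print(min(norms), s2)   # semiminor axis
```

The product of the two singular values also recovers |det A|, another feature visible in the animated picture as the area scaling of the ellipse.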
Sergey Markelov, Geometric Characterization of the Shortest Path in a Tetrahedron
The problem of finding the closed path of minimum length that touches all four faces of a regular tetrahedron ABCD was solved using analytic geometry and calculus in a Computer Capsule in the
November 1997 issue of the CMJ, but the solution did not provide a geometric characterization of the minimal path. This problem appeared on the 1993 Moscow Mathematical Olympiad, and the author
presents a geometrical solution, showing that the minimal path is obtained by minimizing the distance between successive medians of the faces: from the median CL of face ABC to the closest point on
median BK of BCD, then to the closest point on median DL of face ABD, then to the closest point on median AK of ACD and from there back to the closest point on CL, where the path began.
System of Equations
February 6th 2013, 01:00 PM
System of Equations
I need to find out how to solve a system of equations with the given matrix form:
I know I need to make it in the form of X= with inverse of A and C as well as using the identity matrix
What I have guessed at the moment is
X= (I-A)^-1 + (I-C)^-1 + B
I am using Excel, as I have an equation I have to solve with this, but if I can figure out the formula for this I can solve the problem fairly easily.
February 6th 2013, 01:16 PM
Re: System of Equations
Hey Launcher.
Re-arranging we get AX - CX = B which implies (A-C)X = B which implies X = (A-C)^(-1)*B.
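Since the thread's matrices were posted as images and did not survive, here is chiro's formula checked on a made-up 2x2 system (all numbers below are purely illustrative):

```python
# chiro's rearrangement: AX = CX + B  =>  (A - C)X = B
#                                     =>  X = (A - C)^(-1) B

A = [[4.0, 1.0], [2.0, 3.0]]
C = [[1.0, 0.0], [0.0, 1.0]]
B = [5.0, 4.0]

# M = A - C
M = [[A[i][j] - C[i][j] for j in range(2)] for i in range(2)]

# Explicit 2x2 inverse applied to B:  M^(-1) = (1/det) [[d, -b], [-c, a]]
det = M[0][0] * M[1][1] - M[0][1] * M[1][0]
X = [( M[1][1] * B[0] - M[0][1] * B[1]) / det,
     (-M[1][0] * B[0] + M[0][0] * B[1]) / det]

# Check: AX - CX should reproduce B.
residual = [sum(A[i][j] * X[j] for j in range(2)) -
            sum(C[i][j] * X[j] for j in range(2)) - B[i] for i in range(2)]
print(X, residual)
```

In a spreadsheet the same computation is MINVERSE on A - C followed by MMULT with B; substituting X back in, as suggested later in the thread, is the natural sanity check.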
February 6th 2013, 01:26 PM
Re: System of Equations
Oh wow, thanks! I knew I had to rearrange them one way or another, but in my other examples I have had to use the identity matrix in my equation to solve the problem. However, X = (A-C)^(-1)*B looks like it may do the trick as well! I will try this out in Excel with my given equation now and see what I come up with!
This is the equation I am trying to solve for X, which is the x,y,z,w matrix.
February 6th 2013, 02:14 PM
Re: System of Equations
By using excel I got:
Does this look correct?
February 6th 2013, 02:20 PM
Re: System of Equations
Try substituting your X vector back in and see if it checks out.
February 6th 2013, 02:29 PM
Re: System of Equations
Yes looks like it checks out! I just wanted to make sure =]
Thanks for all your help chiro, you're amazing
Physics and Astronomy; Pre-Engineering
(For 2013–2014 academic year)
Aveni, Balonek, Galvez
Associate Professors
Crotty, Parks, Segall
Assistant Professors
Bary, J. Levine, Metzler
Visiting Assistant Professors
Herne, Springer
Postdoctoral Fellow
A student should major in the Department of Physics and Astronomy if he or she is interested in fundamental questions about the nature of matter and the nature of the universe, or in practical
questions of engineering, applied physics, or space science. To be successful, a student should also enjoy mathematics and quantitative reasoning. More than half of the graduating seniors in this
department go to graduate school in various disciplines, and many earn PhDs in physics, astronomy, and engineering. Approximately 25 percent enter technical careers directly after graduation. The
rest pursue careers in teaching, business (often technology-based), management, and even medicine.
The department offers several courses of general interest, not intended for majors. These courses are ASTR 101, Solar System Astronomy; ASTR 102, Stars, Galaxies, and the Universe; ASTR 230, Astronomy in Culture; PHYS 105, Mechanical Physics; and PHYS 111, 112, Fundamental Physics.
The major program begins with PHYS 131, 232, and 233, a three-term, calculus-based introductory physics course with laboratory. Entering first-year students should take PHYS 131 in the fall term.
After these three courses students enroll in PHYS 334, Introduction to Quantum Mechanics and Special Relativity, and PHYS 336, Electronics, which are normally taken concurrently in the spring of the
sophomore year. The four fractional-credit courses, PHYS 201–204, Mathematical Methods for Physics, are also required and are normally taken concurrently with PHYS 233 and PHYS 334. In the junior and
senior years, four additional upper-level courses (300 or 400 level, excluding PHYS 334, PHYS 336, and ASTR 312) are completed, one of which is PHYS 410, Advanced Topics and Experiments, a required
research project, completed in the fall semester of the senior year. In addition to these physics courses, MATH 111, 112, and 113 must be taken as soon as possible.
Major Program in Astronomy-Physics
A student interested in astronomy or astrophysics should enroll in this program. It requires MATH 111, 112, and 113; PHYS 131, 232, 233, and 334; the fractional-credit courses PHYS 201–204, Mathematical Methods for Physics; as well as ASTR 210, Intermediate Astronomy and Astrophysics; ASTR 312, Astronomical Techniques; one of the following: ASTR 414, Astrophysics; ASTR 416, Galactic and Extragalactic Astronomy; or ASTR 313, Planetary Science; two additional astronomy or physics courses at the 300 or 400 level (excluding PHYS 334, PHYS 336, and ASTR 312); and PHYS 410, Advanced Topics and Experiments. A student interested in planetary astronomy should also consider the astrogeophysics program.
To be eligible to graduate with a major in any of the programs of this department, a student is expected to achieve a grade of C– or better in each of the courses required for the major. There are no
exceptions to this policy. Additionally, a student’s cumulative GPA for all courses counted toward the major must be at least 2.00.
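As a concrete illustration of how the two eligibility checks combine, the sketch below evaluates a hypothetical transcript against both rules: every major course at C– or better, and a cumulative major GPA of at least 2.00. The grade-to-point mapping is an assumption (a standard 4.00 scale); the catalog does not specify one, and the course names are made up for the example.

```python
# Illustrative sketch only, not official policy. The 4.00-scale point
# values below are an assumed standard mapping.
POINTS = {"A": 4.00, "A-": 3.67, "B+": 3.33, "B": 3.00, "B-": 2.67,
          "C+": 2.33, "C": 2.00, "C-": 1.67, "D+": 1.33, "D": 1.00, "F": 0.00}

def meets_major_requirements(grades):
    """grades: dict mapping course name -> letter grade for major courses."""
    # Rule 1: a grade of C- or better in each course required for the major.
    per_course_ok = all(POINTS[g] >= POINTS["C-"] for g in grades.values())
    # Rule 2: cumulative GPA of at least 2.00 over courses counted toward the major.
    gpa = sum(POINTS[g] for g in grades.values()) / len(grades)
    return per_course_ok and gpa >= 2.00

# One D fails the per-course rule even though the GPA here clears 2.00.
transcript = {"PHYS 131": "B", "PHYS 232": "B-", "PHYS 233": "C-", "PHYS 334": "D"}
print(meets_major_requirements(transcript))  # False: PHYS 334 is below C-
```

Note that the two rules are independent: a transcript of all C– and C grades satisfies the per-course minimum yet can still fall short of the 2.00 cumulative requirement.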
Minor Programs in Physics or Astronomy
The minor in physics requires PHYS 131, 232, 233, and two additional physics courses (note that PHYS 201–204 count as one course credit), at least one of which must be at the 300 or 400 level. The minor in astronomy requires two of the following: ASTR 101, 102, 230; two additional astronomy courses that count toward the astronomy-physics major; and two physics courses that count toward the physics major. For both minors, a grade of C– or better in all courses that count toward the minor is required.
To qualify for graduation with honors, physics and astronomy-physics students must complete and defend an honors thesis in the spring of their senior year. Normally, the honors thesis is an extension of the work completed in the capstone course PHYS 410, in which the results are written more formally and placed more carefully in the context of existing research. At the discretion of the adviser, an alternate form of the thesis is a manuscript submitted for publication in a journal. The option of an honors thesis is by invitation only; qualified students who perform exceptionally well in PHYS 410 are invited by the department chair, in consultation with department members, to try for honors.
The thesis and defense are evaluated by department members and an external examiner to determine whether honors or high honors will be awarded. In addition to the honors thesis, students must enroll in two additional upper-level physics or astronomy courses (300 or 400 level) beyond those needed to satisfy the basic major requirements. One of these courses may be an independent study course in which the PHYS 410 research project is extended and advanced. A GPA of at least 3.30 must be achieved in all upper-level courses required for honors. See "Honors and Awards: Physics and Astronomy" in Chapter VI.
Advanced Placement
Credit for PHYS 111 will be granted to students who score 4 or 5 on the AP Physics B exam or the AP Physics C–Mechanics exam. Credit for PHYS 112 will be granted to students who score 4 or 5 on the AP Physics C–E&M exam. Placement into PHYS 232 without completion of PHYS 131 can sometimes be allowed following discussion with the department chair and the PHYS 232 instructor. Department majors who do not complete PHYS 131 will be required to complete an additional upper-level course to meet the major requirements. Placement out of PHYS 232 based on high school courses (including AP) is not normally possible.
Transfer Credit
Transfer of credit for physics and astronomy courses from other colleges or universities requires approval by the department. In particular, summer courses taken with the expectation of transfer
credit must be pre-approved by the department well in advance of enrollment.
Pre-Engineering Studies
The department offers two ways to prepare for engineering: major in physics at Colgate and after graduation go to graduate school in engineering, or use one of the combined plans available in the
department. To allow a student to combine education in the liberal arts with engineering training, Colgate has cooperative agreements with Columbia University, Rensselaer Polytechnic Institute, and
Washington University. A student may spend three years at Colgate and two at the engineering school (the 3-2 plan) to earn bachelor’s degrees from both institutions.
The student may be eligible to continue study for a master of science (MS) degree, which can sometimes be completed in as little as one additional year after earning the bachelor’s degree in
engineering. Eligibility for the MS program is determined by the engineering school.
It is imperative for students interested in the 3-2 plan to begin the physics and math curriculum in the fall term of their first year. To be eligible for the 3-2 plan, a student must complete all
physics major courses through PHYS 336 and PHYS 431 (or 451), plus one other upper-level physics course to be chosen in consultation with the pre-engineering adviser.
Prerequisites for admission to engineering schools vary among schools and fields of study; therefore, it is necessary to indicate an interest in pre-engineering to the physics faculty as soon as possible.
Preparation for Graduate School
Students intending to pursue graduate studies in physics, astronomy, or engineering should discuss their plans with their major advisers as early as possible. Students who wish to prepare for graduate studies in physics or astronomy should complete PHYS 431, 432, 433, and 434. To enrich the program, a student should choose additional physics and astronomy electives at the 300 and 400 levels. Advanced courses in other science departments, especially mathematics, are also recommended.
Teacher Certification
The Department of Educational Studies offers a teacher education program for majors in physics who are interested in pursuing a career in elementary or secondary school teaching. Please refer to Educational Studies.
Related Majors
The department administers the physical science and astrogeophysics majors and serves as a home department for students in these programs.
Course Offerings: Physics
PHYS courses count toward the Natural Sciences and Mathematics area of inquiry requirement, unless otherwise noted.
All credit-bearing laboratories carry 0.25 course credits unless noted otherwise. Please see the Academic Credit section in Chapter VI for additional information and restrictions.
105/105L Mechanical Physics I
C. Herne
This course covers fundamental principles of Newtonian mechanics and their applications in science, engineering, and, in particular, architecture. Selected topics, including waves, fluids, optics, electricity and magnetism, and thermal physics, are aimed toward applications in the geosciences. This course is not suitable for students majoring in departments or programs requiring two or more semesters of physics. The required credit-bearing laboratory PHYS 105L must be taken concurrently with PHYS 105. Offered in the fall only.
111/111L Fundamental Physics I
This introductory course emphasizes concepts and principles of mechanics, heat, waves, and sound. The focus is on building concepts, grasping principles, and learning how consequences of principles and concepts can be quantitatively calculated and measured. The required credit-bearing laboratory PHYS 111L must be taken concurrently with PHYS 111. Offered in the fall only.
112/112L Fundamental Physics II
This course develops concepts and principles of electricity, magnetism, light, and modern physics. The required credit-bearing laboratory PHYS 112L must be taken concurrently with PHYS 112. Prerequisite: PHYS 111. Offered in the spring only.
131/131L Atoms and Waves
M.E. Parks, Staff
An introduction to modern physics via the concepts and discoveries of the 20th century. Topics include the structure and dynamics of atoms, special relativity, wave-particle duality of matter, and fundamentals of quantum mechanics. This course is required for students planning to major in physics, astronomy-physics, or physical sciences and for students interested in pre-engineering. PHYS 131 treats contemporary physics using algebra, trigonometry, and a minimum of calculus. Two lectures, two problem-solving recitations, and one laboratory meeting per week. The required credit-bearing laboratory PHYS 131L must be taken concurrently with PHYS 131. (Formerly PHYS 120/120L, General Physics I.) Prerequisites: secondary school physics and math, and for continuing students, co-registration with MATH 111. Offered in the fall only.
201–204 Mathematical Methods for Physics
This sequence of four 0.25-credit courses provides the mathematical foundation required for sophomore- through senior-level physics courses. PHYS 201 is an introduction to computational physics. PHYS 202 introduces complex numbers and complex exponentials as solutions to differential equations. PHYS 203 teaches Fourier sums and integrals. PHYS 204 teaches the gradient, divergence, and curl in several coordinate systems, and also introduces series solutions to differential equations. PHYS 201–204 are intended to be studied in sequential order. Prerequisites: PHYS 232 (formerly PHYS 121) and completion of or co-registration in MATH 113, or permission of instructor.
232/232L Introduction to Mechanics
A study of classical mechanics using astronomical themes. The principles of kinematics, dynamics, conservation laws, and gravitation are developed and used to understand the properties of astronomical objects such as planetary systems, binary stars, and galaxies. Treatment is more thorough than in PHYS 111. Differential and integral calculus and vector manipulation are used throughout. The course is required for students planning to major in physics, astronomy-physics, or physical sciences and for students interested in pre-engineering, and is also recommended for chemistry majors. Two lectures, two recitation meetings, and one laboratory session per week. The required credit-bearing laboratory PHYS 232L must be taken concurrently with PHYS 232. (Formerly PHYS 121/121L, General Physics II.) Prerequisites: PHYS 131 (formerly PHYS 120) or CHEM 111, and MATH 111, or permission of instructor. Students who plan to continue into PHYS 233 should co-enroll in MATH 112. Offered in the spring only.
233/233L Introduction to Electricity and Magnetism
J. Levine
The classical theory of electricity and magnetism is assembled from observations of nature and physical inference, using differential and integral calculus. Emphasis is on the fundamental roles played by the electric and magnetic fields, their geometrical properties, and their dynamics. Principles of elementary circuits are also included. This course is required for students planning to major in the physical sciences and pre-engineering. Four lectures and one laboratory meeting per week. The required credit-bearing laboratory PHYS 233L must be taken concurrently with PHYS 233. (Formerly PHYS 122/122L, General Physics III.) Prerequisites: PHYS 232 (formerly PHYS 121) and MATH 112. Students planning to take physics courses beyond PHYS 233 (formerly PHYS 122) should co-register in MATH 113 and PHYS 201–204. Offered in the fall only.
304/304L Physical Optics
C. Herne, E. Galvez
A study of physical optics from the basics to advanced topics, such as optical instrumentation, Fourier optics, laser physics, and holography. The course prepares students for knowledgeable use of optical instruments in fields such as astronomy and teaches modern laser techniques for use in basic and applied research. Four lecture meetings and one laboratory meeting each week. The required credit-bearing laboratory PHYS 304L must be taken concurrently with PHYS 304. (Formerly PHYS 404.) Prerequisites: PHYS 201–204 and PHYS 233 (formerly PHYS 122). Offered in the spring only, in alternate years.
310, 410 Advanced Topics and Experiments
PHYS 310 is an optional junior-year research experience open to qualified students. PHYS 410 is a required senior-year capstone research experience. Under the guidance of a faculty mentor, each student works on an experimental or theoretical project that ideally produces original results. A final thesis and a formal oral presentation are essential components of both courses. Enrollment in PHYS 310 is by permission only. Both courses are offered in the fall only.
334 Introduction to Quantum Mechanics and Special Relativity
This course provides the mathematical and conceptual foundation to understand two important developments in modern physics: special relativity and quantum theory, concentrating on wave mechanics.
(Formerly PHYS 216.) Prerequisite: PHYS 233 (formerly PHYS 122). Pre- or corequisites: PHYS 201–204. Offered in the spring only.
336/336L Electronics
C. Herne, Staff
A comprehensive treatment of basic electronics. The course covers analog and digital electronics. The analog section includes DC and AC circuits, filters, diodes, transistors, and operational amplifiers. The digital section includes combinational and sequential logic, integrated circuits, and interfacing. Two class meetings per week. Each meeting is a lecture followed by a laboratory session. The required credit-bearing laboratory PHYS 336L must be taken concurrently with PHYS 336. (Formerly PHYS 282/282L.) Prerequisite: PHYS 233 (formerly PHYS 122) or permission of instructor. Offered in the spring only.
350 Biophysics
R. Metzler
An introduction to biological physics including a survey of topics such as diffusion, Brownian motion, non-Newtonian fluids, self-assembly, cooperativity, bioenergetics, and nerve impulses, as well
as experimental techniques and analytical approaches. Students first develop the interdisciplinary knowledge needed to address biophysical questions. The course then focuses on the reading,
presentation, and critique of current biophysics research literature. Although challenging in its breadth, this course is intended to be accessible to juniors and seniors majoring in physics,
chemistry, or biology. Prerequisites: MATH 111, and BIOL 212 or any physics course, or permission of instructor. Offered in the spring only, in alternate years. This course is crosslisted as BIOL 350.
431 Classical Mechanics
J. Levine
A detailed study, using vector calculus, of important problems in the mechanics of particles and extended bodies, including a derivation of Lagrange's and Hamilton's equations and other advanced topics. (Formerly PHYS 302.) Prerequisite: PHYS 334 (formerly PHYS 216). Offered in the fall only, in alternate years.
432 Electromagnetism
M.E. Parks
A study of Maxwell's equations and their applications to topics in electrostatics and electrodynamics, including electromagnetic waves. (Formerly PHYS 303.) Prerequisites: PHYS 201–204. Offered in the spring only, in alternate years.
433 Thermodynamics and Statistical Mechanics
K. Segall
An introduction to the physical concepts underlying the formalism of thermal physics. Emphasis is on the role and meaning of entropy in physical systems and processes. Topics include black body
radiation, liquid helium, superconductivity, negative temperature, and the efficient use of energy. (Formerly PHYS 372.) Prerequisite: PHYS 334 (formerly PHYS 216). Offered in the fall only, in alternate years.
434/434L Quantum Mechanics
E. Galvez
An introduction to the theory and formalism of quantum mechanics. This course addresses the philosophical and mathematical foundations of the theory. It develops the linear algebraic formulation using spins, photons, and atoms, and covers topics that include time evolution, angular momentum, the harmonic oscillator, the Schrödinger equation, entanglement, and quantum information. A series of associated laboratories (PHYS 434L) gives students vivid examples of quantum mechanical principles. (Formerly PHYS 371/371L.) Prerequisite: PHYS 334 (formerly PHYS 216). Offered in the spring only, in alternate years.
451/451L Computational Mechanics
P. Crotty
This course investigates general algorithms and their implementation for the exploration of problems in classical and quantum mechanics. Applications range widely from solar system dynamics and
chaotic systems to particles in general quantum potentials. Fourier analysis, including the fast Fourier transform, and its application to the understanding of physical systems and data analysis, are
also studied. In addition to graded homework assignments and exams, each student undertakes a major numerical project of his or her choice. The required credit-bearing laboratory PHYS 451L must be taken concurrently with PHYS 451. (Formerly PHYS 402.) Prerequisite: PHYS 334 (formerly PHYS 216). Offered in the fall only, in alternate years.
453 Solid State Physics
Several important properties of matter in its solid form are examined. The ordered, crystalline nature of most solids is used as a starting point for understanding condensed material and as a basis
for introducing the band theory of solids. The course investigates thermal, electrical, and magnetic properties of metals, semiconductors, and insulators. (Formerly PHYS 420.) Prerequisite: PHYS 334 (formerly PHYS 216). Offered in the fall only, in alternate years.
456 Relativity and Cosmology
P. Crotty
At the beginning of the 20th century, Einstein’s discovery of the Special and General Theories of Relativity revolutionized understanding of space and time. This course studies both theories; the
emphasis is on General Relativity, including cosmology and the study of black holes. (Formerly PHYS 422.) Prerequisite: PHYS 334 (formerly PHYS 216). Not offered every year.
458 Real-time Nonlinear Dynamics and Chaos
This course is crosslisted as MATH 458 (formerly MATH 407). For course description, see "Mathematics: Course Offerings." (Formerly PHYS 407.)
291, 391, 491 Independent Study
These courses are especially suitable for qualified students who wish to undertake the study of advanced topics in physics and astronomy. Prerequisite: permission of department chair and prior
arrangement with faculty sponsor.
Course Offerings: Astronomy
ASTR courses count toward the Natural Sciences and Mathematics area of inquiry requirement, unless otherwise noted.
101 Solar System Astronomy
T. Balonek
An introductory course dealing with the exploration of the solar system through ground-based observations and spacecraft missions. Topics include motions of solar system objects, properties of the
solar system, origin and evolution of the solar system, uncovering the nature of objects in our solar system through comparative planetology, detection techniques and characteristics of planets
orbiting other stars, and the possibility of life elsewhere in the universe. Evening observing and Ho Tung Visualization Lab sessions supplement lectures. Offered in the fall only.
102 Stars, Galaxies, and the Universe
J. Bary
An introductory course that explores our modern view of the universe. Building on several basic observational techniques and physical principles, this course demystifies the science of astronomy and
illuminates the evidence that establishes our physical understandings of stars and planetary systems, galaxies, and the universe. This course seeks evidence-based answers to questions including: Of
what stuff are stars made? What powers the Sun and other stars? How do stars and planetary systems form and evolve? Do other Earth-like planets exist? What determines the distribution and nature of
galaxies in the universe? How did the universe begin and what is its future? Ho Tung Visualization Lab and observing sessions supplement lectures. Offered in spring only.
165 How Old Is the Universe
J. Bary
The last 20 years are often characterized as the Golden Age of modern astronomy due to the number of paradigm-shifting discoveries that have revolutionized our vision and understanding of the universe. This introductory-level course explores several of these ground-breaking discoveries in great detail by focusing on the physical concepts and observations as well as the historical narrative that traces the progression of the scientific endeavor that made these discoveries possible. This course is distinctly different from ASTR 101 and allows the interested non-science student to delve more deeply into the many discoveries that lead us to conclude that the universe is 13.77 ± 0.059 billion years old, a number that is, by cosmological standards, staggeringly precise. No prior course work in physics, astronomy, or mathematics is required for this course.
210 Intermediate Astronomy and Astrophysics
J. Bary
A discussion of the fundamental physical principles of astronomy and astrophysics emphasizing topics of current interest such as stellar structure, evolution, neutron stars, black holes, and the
interstellar medium. Prerequisites: MATH 111, 112, and co-registration in PHYS 233 (formerly PHYS 122). Offered in the fall only, in alternate years.
230 Astronomy in Culture
A. Aveni
This course deals with the development of astronomy and, in a more general sense, with the relationship between the natural world and people in different societies and walks of life. The course
examines the role of the sky in shaping religions and political ideologies in various kinds of cultures, among them hunter-gatherers, agrarian societies, and dynasties. Specific goals of the course
include 1) gaining familiarity with the sky as seen with the naked eye, 2) understanding how various ways of comprehending the sky shape a society's world view, and 3) examining where
cross-cultural parallels exist by seeking out the similarities and differences between the development of techno-assisted Western science and the so-called “ethno-sciences” in other cultures, both
ancient and contemporary. Lectures are accompanied by sessions in the planetarium of the Ho Tung Visualization Lab, as well as out of doors, weather permitting. (Formerly ASTR 130.) This course is crosslisted as ANTH 230.
312/312L Astronomical Techniques
T. Balonek
A laboratory course introducing students to basic astronomical observations, methods of data acquisition and reduction using the university’s 16-inch telescope, CCD electronic camera, and
image-processing workstation. Students are instructed in methods of astronomical imaging including detector calibration and atmospheric effects; in fundamentals of photometric reductions, including
obtaining a light curve for a selected variable star; and in astronomical spectroscopy and spectral classification. ASTR 312L must be taken concurrently with ASTR 312. (Formerly ASTR 212/212L.) Prerequisite: PHYS 232 (formerly PHYS 121) or MATH 112 or an astronomy course or permission of instructor. Offered in the fall only, in alternate years.
313 Planetary Science
J. Levine
Study of the solar system with emphasis on physical processes. Topics include formation of the solar system, planets, moons, asteroids, comets, meteorites, orbital mechanics, tides, atmospheric
structure, planetary surfaces and interiors, impact cratering, and rings. Although challenging in breadth, this course is intended to be accessible to juniors and seniors majoring in physics,
astronomy-physics, astrogeophysics, chemistry, or geology. (Formerly ASTR 320.) Prerequisites: PHYS 232 (formerly PHYS 121), or any two GEOL courses and MATH 111, or permission of instructor. Offered in the fall only, in alternate years.
414 Astrophysics
J. Bary
A study of stellar atmospheres and interiors, this course develops a fundamental understanding of stars and their evolution from the application of several basic principles found in atomic physics,
electricity and magnetism, Newtonian mechanics, and statistical mechanics. Topics include fusion processes, reaction rates, stellar structure, the formation of spectral lines, opacity and optical
depth effects, and radiative processes in the interstellar medium. (Formerly ASTR 314.) Prerequisite: PHYS 334 (formerly PHYS 216) or permission of instructor. Offered in the spring only, in alternate years.
416 Galactic and Extragalactic Astronomy
T. Balonek
Study of the astronomical techniques, methods, and fundamental data relating to the Milky Way Galaxy and objects located outside our galaxy, such as normal galaxies, radio galaxies, and quasars.
Topics include galactic stellar populations, large-scale structure and rotation of the galaxy, the structure and content of other galaxies, galaxy classification, clusters of galaxies, active
galactic nuclei, quasars, and the large-scale structure of the universe. The physical processes responsible for the radio, infrared, visual, and x-ray radiation from these objects are studied in
detail. (Formerly ASTR 316.) Prerequisite: PHYS 233 (formerly PHYS 122). Offered in the spring only, in alternate years.