Here's the question you clicked on:
Prove this identity sin(2A)=2sinAcosA.
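One standard proof (a sketch, not part of the original thread) applies the sine angle-addition formula with B = A:

```latex
\sin(A+B) = \sin A \cos B + \cos A \sin B
\quad\Longrightarrow\quad
\sin(2A) = \sin(A+A) = \sin A \cos A + \cos A \sin A = 2 \sin A \cos A .
```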
UNDERGRADUATE RESEARCH OPPORTUNITY
IN MATHEMATICAL BIOLOGY
Applications are solicited from undergraduate students interested in research experience in mathematical biology. Priority will be given to students with a strong interest and background in mathematics and in biology. Students in their junior or senior years are welcome to apply. A stipend of $6,000/year will be offered to the best qualified candidates.
The areas of focus are
• mathematics for cardiovascular interventions (mentors Prof. Canic, Mathematics, UH, and Prof. Rosenstrauch, M.D. with expertise in molecular biology, The Texas Heart Institute) and
• mathematical neuroscience (mentors Prof. Josic, Mathematics, UH, and Prof. Colbert, Biology, UH).
The students in this program will
• receive hands-on mathematical research experience by working on projects assigned at the beginning of the program,
• actively participate in experiments at the Molecular Biology Laboratory at the Texas Heart Institute in Houston, or participate in experiments and develop numerical models of neurons based on
experiments conducted in the Neuroscience Laboratory in the Department of Biology and Biochemistry at UH, and
• take elective courses in Biology, Biomedical Engineering and Mathematics that will enhance their interdisciplinary exposure and prepare them for interdisciplinary graduate studies.
Participants will be provided office and computer laboratory space. Contact:
Suncica Canic (canic@math.uh.edu) - math for cardiovascular interventions
Krešimir Josic (josic@math.uh.edu) - mathematical neuroscience
Costa Colbert (ccolbert@uh.edu) - mathematical neuroscience
[Figure: Vascular graft inside abdominal aneurysm]
This program is funded by the National Science Foundation.
Examples of research projects.
Suncica Canic 2003-08-25
North Bay Village, FL Algebra 2 Tutor
Find a North Bay Village, FL Algebra 2 Tutor
...We will learn terms like circumference and area of a circle; also, area of a triangle, volume of a cylinder, sphere, and a pyramid. Trigonometric functions and angle derivation will be
explained and applied. Geometric proofs are an important aspect of geometry and so these will be extensively explained.
46 Subjects: including algebra 2, Spanish, reading, writing
...I seem to have a knack not only for helping struggling students push through obstacles and "clear the fog" but for motivating talented students to reach beyond minimal standards and
expectations. I look forward to helping your child realize her/his potential, find a passion to pursue, or "get" s...
61 Subjects: including algebra 2, English, Spanish, reading
...I graduated from the Mechanical Engineering school at the Havana University in 1974. During my undergraduate period (1969-1974), I taught Calculus, Physics, and Technical Drawing as an Undergraduate Instructor. From 1974 to 1980 I taught Hydraulics, Fluid Mechanics, and Thermodynamics as an Assistant Professor and Graduate Instructor.
8 Subjects: including algebra 2, calculus, prealgebra, geometry
...I observe the student working out problems and I point out where and why they make mistakes. All of my algebra students have achieved their goals in a short period of time. I have had success
tutoring several students in algebra.
27 Subjects: including algebra 2, chemistry, physics, calculus
I began working as a tutor in High School as part of the Math Club, and then continued in college in a part time position, where I helped students in College Algebra, Statistics, Calculus and
Programming. After college I moved to Spain where I gave private test prep lessons to high school students ...
11 Subjects: including algebra 2, calculus, physics, geometry
Related North Bay Village, FL Tutors
North Bay Village, FL Accounting Tutors
North Bay Village, FL ACT Tutors
North Bay Village, FL Algebra Tutors
North Bay Village, FL Algebra 2 Tutors
North Bay Village, FL Calculus Tutors
North Bay Village, FL Geometry Tutors
North Bay Village, FL Math Tutors
North Bay Village, FL Prealgebra Tutors
North Bay Village, FL Precalculus Tutors
North Bay Village, FL SAT Tutors
North Bay Village, FL SAT Math Tutors
North Bay Village, FL Science Tutors
North Bay Village, FL Statistics Tutors
North Bay Village, FL Trigonometry Tutors
Nearby Cities With algebra 2 Tutor
Bal Harbour, FL algebra 2 Tutors
Bay Harbor Islands, FL algebra 2 Tutors
Biscayne Park, FL algebra 2 Tutors
El Portal, FL algebra 2 Tutors
Indian Creek Village, FL algebra 2 Tutors
Medley, FL algebra 2 Tutors
Mia Shores, FL algebra 2 Tutors
Miami Beach algebra 2 Tutors
Miami Shores, FL algebra 2 Tutors
Miami Springs, FL algebra 2 Tutors
Normandy Isle, FL algebra 2 Tutors
North Miami Bch, FL algebra 2 Tutors
North Miami, FL algebra 2 Tutors
Sunny Isles Beach, FL algebra 2 Tutors
Surfside, FL algebra 2 Tutors
Re: """Friends""""
But to change the subject, This is a friends thread.
In mathematics, you don't understand things. You just get used to them.
I have the result, but I do not yet know how to get it.
All physicists, and a good many quite respectable mathematicians are contemptuous about proof.
Re: """Friends""""
How many friends do you have? Do you count them?
The limit operator is just an excuse for doing something you know you can't.
“It's the subject that nobody knows anything about that we can all talk about!” ― Richard Feynman
“Taking a new step, uttering a new word, is what people fear most.” ― Fyodor Dostoyevsky, Crime and Punishment
Re: """Friends""""
No, but they are less than 1 and greater than 0.
Re: """Friends""""
That's kinda impossible.
Re: """Friends""""
It suits me though.
Re: """Friends""""
OMG, will you stop doing that!!!
Wait, bobby & I never could have been friends anyway. Apparently, people of the opposite gender cannot be friends.
Last edited by Tigeree (2012-04-20 20:16:43)
People don't notice whether it's winter or summer when they're happy.
~ Anton Chekhov
Cheer up, emo kid.
Re: """Friends""""
That is correct! You have finally said something smart and meaningful!
Re: """Friends""""
But you said we were friends, once.
Re: """Friends""""
When was that?
Re: """Friends""""
Never mind. You wouldn't want to be reminded anyway.
Re: """Friends""""
It is not possible. You are thinking one dimensionally. That is fatal when talking to me.
Re: """Friends""""
Well... would you like to know when you said that?
Re: """Friends""""
Yes! Of course it will just mean you misinterpreted me but...
Re: """Friends""""
Yeah, I did kind of zone out on that one.
But it was that horrible time a few weeks ago. When things were said... and apologies were made... mostly. Y'know...
Re: """Friends""""
We just have different definitions of friends.
Re: """Friends""""
What's your definition?
Re: """Friends""""
You will soon see it posted in a massive post.
Re: """Friends""""
His definition is simple:
Re: """Friends""""
Nope! I stand corrected.
Re: """Friends""""
Ohh, thanks, Stefy. I was looking forward to an explanation in a massive post.
It might've shed some light on things...
Re: """Friends""""
I deleted the massive post.
Re: """Friends""""
You never wrote one.
Re: """Friends""""
While I was writing it, I deleted it.
Re: """Friends""""
Whilst? Rubbish. That's ridiculous.
Re: """Friends""""
It was easy.
Earthquake Glossary - G or g
• G or g
g is the acceleration due to gravity, 9.8 m/s^2, which is equivalent to the strength of the gravitational field, 9.8 N/kg.
When acceleration acts on a physical body, the body experiences the acceleration as a force. The force we are most experienced with is the force of gravity, which causes us to have weight.
The equation for the force of gravity is F = mg at the surface of the earth, or F = GMm/r^2 at a distance r from the center of the earth (where r is greater than the radius of the earth). G is the proportionality constant, 6.67 x 10^-11 N-m^2/kg^2, in Newton's law of gravity.
When there is an earthquake, the forces caused by the shaking can be measured as a percentage of gravity, or percent g.
For example: the shaking at a particular location is measured as an acceleration of 11 feet per second per second, or 11 x 12 x 2.54 = 335 cm/sec/sec. The acceleration due to gravity is 980 cm/sec/sec, so the measured shaking is 335/980, or 0.34 g. As a percentage, this is 34% g.
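The worked example above can be reproduced with a short script (a sketch; the function name is mine):

```python
# Convert a shaking acceleration measured in ft/s^2 to "percent g",
# following the glossary's worked example.

FT_TO_CM = 12 * 2.54      # 1 foot = 30.48 cm
G_CM_S2 = 980.0           # acceleration due to gravity, cm/s^2

def percent_g(accel_ft_s2):
    """Express an acceleration given in ft/s^2 as a percentage of g."""
    accel_cm_s2 = accel_ft_s2 * FT_TO_CM
    return 100.0 * accel_cm_s2 / G_CM_S2

# The example above: shaking of 11 ft/s^2.
print(round(percent_g(11)))   # 34, i.e. 34% g
```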
MathGroup Archive: June 2002 [00451]
Re: Replacement question
• To: mathgroup at smc.vnet.net
• Subject: [mg35132] Re: [mg35114] Replacement question
• From: Sseziwa Mukasa <mukasa at jeol.com>
• Date: Tue, 25 Jun 2002 19:55:06 -0400 (EDT)
• Sender: owner-wri-mathgroup at wolfram.com
On Tuesday, June 25, 2002, at 03:41 AM, ginak wrote:
> Suppose I have something like
> In [100] := h[3]/f[2.33342]
> Out[100] := 24.12711
> and now I want to evaluate h[5]/f[2.33342], i.e. the same as in
> In[100] but replaceing h[3] with h[5]. This won't work
> In [101] := In[100] /. h[3]->h[5]
> Out[101] := 24.12711
> because In[100] is fully evaluated to 24.12711 before the rule is
> applied. (Generally, the expressions I'm interested in are more of a
> pain to type than h[5]/f[2.33342]). How do I tell Mathematica to
> evaluate
> In[100] only enough to apply the given substition rule, apply the
> substitution rule, and only then proceed with the evaluation?
> In fact, I don't even know how to do a replacement like
> In [102] := h[3]/f[2.33342] /. h[3]->h[5]
> for the same reason: the LHS is evaluated before the rule can be
> applied. (Of course, in this case this replacment task is pointless,
> since it is so easy to type out the desired expression, but there are
> situations in which one can obtain a complicated expression by cutting
> and pasting, and wants to apply a substitution rule to the complicated
> expression before Mathematica evaluates it.)
As far as I can tell the two situations are slightly different. In the
case of the expression In[100]/.h[3]->h[5], In[100] evaluates using the
value of h[3] before the rule gets applied, so the expression that the
rule is being applied to is not h[3]/f[2.33342] but the value of that
expression. The following block removes the downvalues of h so that
h[3] does not evaluate, then restores the downvalues and evaluates the
resulting expression:
In the other case you simply need to use HoldPattern to prevent the left
hand side of the rule from evaluating
There may be a simpler way to handle the first case but I can't think of
it right now.
Convert miles to klick - Conversion of Measurement Units
›› Convert mile to klick
›› More information from the unit converter
How many miles in 1 klick? The answer is 0.621371192237.
We assume you are converting between mile and klick.
You can view more details on each measurement unit:
miles or klick
The SI base unit for length is the metre.
1 metre is equal to 0.000621371192237 miles, or 0.001 klick.
Note that rounding errors may occur, so always check the results.
Use this page to learn how to convert between miles and klicks.
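The conversion itself is a single multiplication; a minimal sketch using the factor quoted above:

```python
# 1 klick = 1 kilometre; 1 statute mile = 1609.344 m.
MILES_PER_KLICK = 0.621371192237

def klicks_to_miles(klicks):
    return klicks * MILES_PER_KLICK

def miles_to_klicks(miles):
    return miles / MILES_PER_KLICK

print(klicks_to_miles(1))             # 0.621371192237
print(round(miles_to_klicks(1), 6))   # 1.609344 (metres in a mile / 1000)
```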
›› Definition: Mile
A mile is any of several units of distance or, in physics terminology, of length. Today, one mile is mainly equal to about 1609 m on land (the statute mile) and 1852 m at sea and in the air (the nautical mile). The abbreviation for mile is 'mi'. There are more specific definitions of 'mile' such as the metric mile, statute mile, nautical mile, and survey mile. On this site, we assume that if you only specify 'mile' you want the statute mile.
›› Definition: Klick
Klick (sometimes spelled click) is a common military term meaning kilometre (or sometimes kilometres per hour). Its use became popular among soldiers in Vietnam during the 1960s, although veterans of
the war recall its usage as early as the 1950s. Its origin is sometimes linked with the Australian army in Korea.
Mathematics, Grade 1: Numbers and Operations in Base Ten
Standards in this domain:
Extend the counting sequence.
Understand place value.
• 1.NBT.B.2 Understand that the two digits of a two-digit number represent amounts of tens and ones. Understand the following as special cases:
□ 10 can be thought of as a bundle of ten ones — called a “ten.”
□ The numbers from 11 to 19 are composed of a ten and one, two, three, four, five, six, seven, eight, or nine ones.
□ The numbers 10, 20, 30, 40, 50, 60, 70, 80, 90 refer to one, two, three, four, five, six, seven, eight, or nine tens (and 0 ones).
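The decomposition described in 1.NBT.B.2 is just integer division by ten; a tiny illustration:

```python
# Split a two-digit number into its tens and ones (1.NBT.B.2).
def tens_and_ones(n):
    """Return (tens, ones) for a two-digit whole number n."""
    if not 10 <= n <= 99:
        raise ValueError("expected a two-digit number")
    return divmod(n, 10)

print(tens_and_ones(17))   # (1, 7): a ten and seven ones
print(tens_and_ones(40))   # (4, 0): four tens and 0 ones
```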
Use place value understanding and properties of operations to add and subtract.
• 1.NBT.C.4 Add within 100, including adding a two-digit number and a one-digit number, and adding a two-digit number and a multiple of 10, using concrete models or drawings and strategies based on
place value, properties of operations, and/or the relationship between addition and subtraction; relate the strategy to a written method and explain the reasoning used. Understand that in adding
two-digit numbers, one adds tens and tens, ones and ones; and sometimes it is necessary to compose a ten.
• 1.NBT.C.5 Given a two-digit number, mentally find 10 more or 10 less than the number, without having to count; explain the reasoning used.
• 1.NBT.C.6 Subtract multiples of 10 in the range 10-90 from multiples of 10 in the range 10-90 (positive or zero differences), using concrete models or drawings and strategies based on place
value, properties of operations, and/or the relationship between addition and subtraction; relate the strategy to a written method and explain the reasoning used.
Working Papers
• "Testing for Bias from Censored Regressors" (with Roberto Rigobon), February 2006
□ We derive tests for the presence of bias from using censored regressors in linear regression analysis. The test follows from the principles of (Hausman) specification tests, and is applicable
in situations of exogenous censoring. We apply the test in two substantive empirical applications; the estimation of the effects of financial wealth on household consumption, and the
estimation of the impact of foreign denominated debt on firm investment decisions. In each application we find strong rejection of the absence of censoring bias.
• "Estimation with Censored Regressors: Basic Issues" (with Roberto Rigobon), September 2005, revised June 2007, forthcoming International Economic Review
□ We study issues that arise for estimation of a linear model when a regressor is censored. We discuss the efficiency losses from dropping censored observations, and illustrate the losses for
bound censoring. We show that the common practice of introducing a dummy variable to `correct for' censoring does not correct bias or improve estimation. We show how censored observations
generally have zero semiparametric information, and we discuss implications for estimation. We derive the likelihood function for a parametric model of mixed bound-independent censoring, and
apply that model to the estimation of wealth effects on consumption.
• "Bias from Censored Regressors" (with Roberto Rigobon), September 2005, revised October 2007
□ We study the bias that arises from using censored regressors in estimation of linear models. We present results on bias in OLS regression estimators with exogenous censoring, and IV
estimators when the censored regressor is endogenous. Bound censoring such as top-and bottom-coding result in expansion bias, or effects that are too large. Independent random censoring
results in bias that varies with the estimation method; attenuation bias in OLS estimators and expansion bias in IV estimators. We note how large biases can result when there are several
regressors, and how that problem is particularly severe when a 0-1 variable is used in place of a continuous regressor.
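A quick Monte Carlo sketch (not from the papers; the data-generating process and names are illustrative) shows the OLS attenuation that the abstract describes under independent random censoring of a regressor:

```python
import random

random.seed(0)

def ols_slope(x, y):
    """Slope of the OLS regression of y on x (with intercept)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    var = sum((a - mx) ** 2 for a in x)
    return cov / var

n = 50000
x = [random.gauss(2.0, 1.0) for _ in range(n)]
y = [2.0 * xi + random.gauss(0.0, 1.0) for xi in x]   # true slope = 2

# Independent random censoring: each x observation is replaced
# by 0 with probability 1/2, independently of everything else.
x_cens = [xi if random.random() < 0.5 else 0.0 for xi in x]

print(round(ols_slope(x, y), 2))       # close to 2.0
print(round(ols_slope(x_cens, y), 2))  # well below 2.0: attenuation
```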
The above papers update and replace the following two unpublished working papers
• "Censored Regressors and Expansion Bias" (with Roberto Rigobon), March 2005
• "Instrumental Variables Bias with Censored Regressors" (with Roberto Rigobon), March 2005
• "Set Identification with Tobin Regressors" (with Victor Chernozhukov and Roberto Rigobon), preliminary October 2007
□ We give semiparametric identification and estimation results for econometric models with a regressor that is endogenous, bound censored and selected, called a Tobin regressor. We show how
parameter sets are identified, and give generic estimation results as well as results on the construction of confidence sets for inference. The specific procedure uses quantile regression to
address censoring, and a control function approach for estimation of the final model. Our procedure is applied to the estimation of the effects on household consumption of changes in housing
wealth. Our estimates fall in plausible ranges, significantly above low OLS estimates and high IV estimates that do not account for the Tobin regressor structure.
• "Models of Aggregate Economic Relationships that Account for Heterogeneity" (with Richard Blundell), January 2007, forthcoming Handbook of Econometrics, Volume 6
□ This chapter covers recent solutions to aggregation problems in three application areas, consumer demand analysis, consumption growth and wealth, and labor participation and wages. Each area
involves treatment of heterogeneity and nonlinearity at the individual level. Three types of heterogeneity are highlighted: heterogeneity in individual tastes, heterogeneity in income and
wealth risks and heterogeneity in market participation. Work in each area is illustrated using results from empirical data. The overall aim is to present specific models that connect
individual behavior with aggregate statistics, as well as discuss the principles for constructing such models.
• "Aggregation (Econometrics)" entry forthcoming in New Palgrave Dictionary of Economics, 2nd Edition
• "Lectures on Semiparametric Econometrics " CORE Lecture Series, May 1991
□ These introductory lectures on semiparametric methods in econometrics are long overdue for a thorough rewrite and updating. The printed lectures have become hard-to-find, and are posted here
in case students find them useful. If you have any comments relevant to updating the lectures, please email them to me at tstoker@mit.edu, and accept my thanks in advance.
Re: st: Panel Multinomial Logistic Model
From David Jacobs <jacobs.184@sociology.osu.edu>
To statalist@hsphsun2.harvard.edu
Subject Re: st: Panel Multinomial Logistic Model
Date Mon, 24 May 2010 13:54:23 -0400
First, a random-effects multinomial logit routine may be available in Limdep. Although that program is difficult to use, employing it may be less trouble than what you propose.
Second, an alternative solution often recommended on the list when no pooled time series estimator is available in Stata is to simply cluster on cases. While there's no adjustment for separate
case-specific intercepts, at least the standard errors may be OK.
I can't evaluate your proposal, however. Let's see if you get any response to this interesting idea.
Dave Jacobs
At 01:38 PM 5/24/2010, you wrote:
I understand that there is not a stata command for multinomial logistic model for panel data estimation. I wonder if the following can be done for a three-outcome categorical dependent variable
(say, 0, 1, 2):
1. Estimate a panel logit for outcome=1, and predict exp(xb + random effects),
2. Estimate a panel logit for outcome=2, and predict exp(xg + random effects), and
3. Hand-calculate the panel multinomial probability Prob(outcome=1) by combining the predictions of the two panel logits, namely, exp(xb + random effects) / (1 + exp(xb + random effects) + exp(xg + random effects)).
I believe that this equivalence exists between multinomial logit and a series of binary logit regressions in a non-panel setting. I wonder if the same logic extends to panel data.
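For concreteness, step 3 above can be sketched numerically (xb and xg here stand for the fitted linear indices plus predicted random effects; the values are made up):

```python
from math import exp

def mlogit_probs(xb, xg):
    """Probabilities of outcomes (0, 1, 2) with outcome 0 as the base category."""
    denom = 1.0 + exp(xb) + exp(xg)
    return 1.0 / denom, exp(xb) / denom, exp(xg) / denom

p0, p1, p2 = mlogit_probs(xb=0.4, xg=-0.2)
print(round(p0 + p1 + p2, 10))   # 1.0: the three probabilities sum to one
print(round(p1, 3))              # Prob(outcome = 1)
```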
Thanks in advance,
Albert Lee
* For searches and help try:
* http://www.stata.com/help.cgi?search
* http://www.stata.com/support/statalist/faq
* http://www.ats.ucla.edu/stat/stata/
Grab the latest Math::GSL to get these spiffy features and bugfixes:
• Made Math::GSL play nice with latest and greatest GSL 1.12
• Added swap() to Vector objects with tests and docs
• Added p-norms to Vector objects via norm() and normalize()
• Added operator overloading so that
abs $vector == $vector->norm
• Added as_vector() to Matrix and MatrixComplex objects
• Added inverse(), is_square(), det(), lndet(), zero() and identity() to Matrix objects
• Added inverse(), is_square(), det(), lndet(), zero(), identity() and hermitian() to MatrixComplex objects
• Added dot product to Matrix objects
• Fixed various typos in documentation
• Fixed warnings about overloaded operators in Matrix and BLAS
• Overloaded '==' and '!=' for MatrixComplex and Matrix objects
• Fixed amd64 -fPIC compile failure
• Added tests to Monte and refactored Sort tests
• Refactored and improve error checking in callback interface
• Fixed 'NaN' test failures (thanks CPANtesters!)
Methodology for Estimating the Evolution of Capital Productivity
Text for Discussion
Simplified Methodology for Estimating the Evolution of Capital Productivity
The application of a fixed depreciation rate to the capital stock of the preceding year is used in several theoretical models of economic growth. When applied to real data for obtaining the capital stock, this procedure implies that the historical investments are ignored. Errors of up to 20% relative to the depreciation obtained from real investment data were observed. However, besides its theoretical ease, this simplification provides a quick method for evaluating the capital stock and consequently the capital productivity.
As previously demonstrated [1], it is possible to estimate the behavior of the capital productivity variable by applying a fixed depreciation rate to the capital stock of the previous year and adding the investments made in the previous year. In this methodology, the choice of the initial stock value and of a depreciation rate adequate for the lifetimes involved and their relative participation is important. In order to estimate this depreciation rate it is also necessary to have, or to estimate, the growth rate of investments in the previous years.
In the present paper (1) we describe a simplified methodology for obtaining the evolution of the capital productivity, (2) we evaluate the influence of varying the parameters used for determining the
capital productivity values and (3) we apply the methodology to six countries and compare the Brazilian results with those obtained using the linear depreciation methodology.
In procedure (3), the lack of investment data for the period preceding the available series is compensated for by assuming a regular behavior of the capital/product ratio at the beginning of the period over which the parameter is evaluated. The equivalent depreciation rate adopted is taken to be the same in all the studied countries.
Description of the Methodology
The simplified methodology for evaluating the capital productivity evolution consists of the following steps:
1. A depreciation rate is chosen considering the expected life of the goods and the investment growth rate in the past. An approximation is to assume for the preceding period the same growth rate as that of the GDP, for which longer series are generally available. The equivalent depreciation rate d is given by
where t is the investment growth rate and v is the lifetime (in years) of the depreciated goods.
2. An initial value of the capital/product ratio ρ0 is chosen.
3. The stock in year zero, K0, is estimated using the available GDP value Y0:
K0 = Y0 · ρ0
4. A provisional value K1 = K0 + I0 − d·K0 is obtained.
5. In an analogous way, successive values of Ki and ρi are obtained.
6. The value of ρ0 is iteratively chosen so that the behavior of ρ in the first years matches the expected one (slightly upward).
This procedure can be adopted both for the aggregate set of goods and for each category separately. In the aggregate case it is necessary to choose a depreciation rate adequate for the proportions of machinery and equipment and of construction in total investment.
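To make the recursion concrete, here is a minimal sketch in Python; the GDP and investment series and the parameter values below are illustrative placeholders, not the paper's data:

```python
def capital_productivity(Y, I, rho0, d):
    """Simplified capital-stock recursion: K[i+1] = K[i] + I[i] - d*K[i].

    Y    : list of GDP values per year
    I    : list of investment values per year
    rho0 : initial capital/product ratio (K0 = rho0 * Y0)
    d    : equivalent annual depreciation rate
    Returns the capital/product ratio K[i]/Y[i] for each year.
    """
    K = [rho0 * Y[0]]                        # step 3: initial stock
    for i in range(len(Y) - 1):
        K.append(K[i] + I[i] - d * K[i])     # steps 4-5: recursion
    return [K[i] / Y[i] for i in range(len(Y))]

# Hypothetical data: GDP growing 5%/yr, investment at 20% of GDP
Y = [100 * 1.05**i for i in range(10)]
I = [0.20 * y for y in Y]
ratios = capital_productivity(Y, I, rho0=1.66, d=0.0382)
```

With these placeholder series the ratio starts at ρ0 and drifts slowly upward, the "slightly upward" behavior used in step 6 to calibrate ρ0.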
We present the results for Brazil, with goods divided into residential construction, non-residential construction, machinery and equipment, and others, obtained by the linear depreciation methodology, and compare them with those of the simplified methodology applied to aggregated investments.
The initial parameters for the simplified methodology (capital/product ratio in year zero and depreciation rate) were the same as those obtained using linear depreciation [2].
The simplified methodology was applied to the set of goods and two tests were made: the first one applied to the whole series (from 1908 to 2003) and the second one to the 1950-2003 period.
For the first data set (1908 to 2003), the initial K/Y ratio ρ0 = 1.35 and the depreciation rate d = 4.12% per year were used. The value of ρ0 refers to the initial year and the depreciation rate corresponds to the average of the period.
For the second data set (1950 to 2003) the same procedure was used, with ρ0 = 1.66 and d = 3.82% per year.
The result is shown in Figure 1 and presents good agreement between the simplified method and the linear depreciation.
Figure 1: Comparison between the simplified method, applied to two different periods, and the linear depreciation method.
Figure 2 shows the effect of varying the initial values of K/Y and d by +20% and −20%.
Figure 2: Influence of variations in the initial parameters on the behavior of the capital/product ratio using the simplified method.
We have verified that the simplified methodology supplies a basis for constructing the capital/product curve in cases where data on investment prior to the studied period are not available.
The determination of the initial K/Y ratio (or of the initial stock) and of the asset lifetime depends on criteria that are more or less arbitrary, which in turn depend on the availability of other information about the preceding period.
In the example considered, results from the other procedure were available for determining the initial parameters. Figure 2 shows the effect on the determination of K/Y of changing the input parameters of the 1950-2003 period by 20%. Table 1 shows, for the final year and for 1960, the deviations relative to the calculation using the reference parameters.
Table 1: Deviation of the calculated values relative to those obtained using the parameters (for 1950) ρ0 = 1.66 and d = 3.82%

          Variation of the initial K/Y     Variation of the lifetime
          ρ0 × 1.2      ρ0 × 0.8          v × 1.2      v × 0.8
2003         0.1%         -0.1%            -9.8%        11.6%
1960         5.4%         -5.4%            -4.1%         4.4%
It can be observed that the K/Y value at the end of the period is practically unaffected by a 20% variation in the initial value. Even in 1960 (ten years after the initial year), a 20% variation in the initial capital/product ratio results in a difference lower than 5% in the projected K/Y values. By 1970, as can be observed in Figure 2, it is already impossible to distinguish the lines resulting from initial values that differ by 40%.
The effect of changing the depreciation rate, in contrast, grows with time. In the final year it reaches −10% to +12% of the estimated K/Y value. It is important to point out that variations of 20% and even 40% in the chosen depreciation rate (comparing the lower curve with the upper one) do not induce qualitative interpretation errors, just as was the case for the initial K/Y ratio.
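The washing-out of the initial condition can also be checked numerically. The sketch below (again with made-up series, not the paper's data) runs the same capital-stock recursion twice with initial ratios differing by 20% and shows the relative gap in K/Y shrinking over time:

```python
def capital_ratio(Y, I, rho0, d):
    # K[i+1] = K[i] + I[i] - d*K[i]; returns the K/Y series
    K = [rho0 * Y[0]]
    for i in range(len(Y) - 1):
        K.append(K[i] + I[i] - d * K[i])
    return [k / y for k, y in zip(K, Y)]

Y = [100 * 1.05**i for i in range(55)]   # hypothetical GDP, 5%/yr growth
I = [0.20 * y for y in Y]                # hypothetical investment, 20% of GDP
base = capital_ratio(Y, I, rho0=1.66, d=0.0382)
high = capital_ratio(Y, I, rho0=1.66 * 1.2, d=0.0382)

gap10 = abs(high[10] - base[10]) / base[10]   # gap after 10 years
gap50 = abs(high[50] - base[50]) / base[50]   # gap after 50 years
```

Since the perturbation in the initial stock decays geometrically at rate (1 − d) while the stock itself grows, gap50 comes out more than an order of magnitude smaller than gap10, consistent with the indistinguishable curves after 1970 in Figure 2.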
Application of the simplified methodology to some countries.
The International Monetary Fund publishes economic series for different countries [i]. In studies of the influence of capital productivity on the growth process, we are interested in those countries where, like Brazil, there have been significant variations in capital productivity over the last fifty years. The described method was applied to different countries considering total investment (without subdivision by type of investment) relative to the GDP; that is, GDP and investment were taken at their nominal values, and the GDP deflator was applied to them.
The annual depreciation rate adopted for all countries was 4%, and the initial capital/product ratio was chosen so that the variation in the first years of the series became similar to that of the following years. This procedure, whose limitations were discussed above, could in some cases be compared with the results obtained by more elaborate methodologies.
The countries chosen were those with a significant variation in the capital/product ratio within the period covered by the IMF data. It is no coincidence that many of these countries have experienced deep modifications of their productive systems. The countries chosen were Japan, South Korea, Italy, Brazil, Chile and India. The K/Y ratio curves are shown in Figure 3.
Concerning Brazil, it can be observed that the initial K/Y ratio was chosen by the criterion of reproducing the behavior of the following years, which has the effect of overestimating the initial value so that the first part of the curve remains constant. In the following years, which are less influenced by the initial choice, the curve is similar to that shown in Figure 2 for an overestimated ρ0 (the capital/product ratio in the initial year). Concerning the absolute values, it should be noted that the Figure 2 values are at constant prices while those of Figure 3 are at current prices. For the year 2000 the K/Y value by the simplified method is 2.8; using the linear depreciation method at current prices, the value is 2.9.
Figure 3: Evolution of the capital/product ratio for different countries using investment data published by the IMF.
In the case of Brazil, Figure 4 shows a comparison between the results calculated with the linear depreciation and with the simplified methodology. The largest difference is due to the choice of the initial K/Y value. It should be pointed out that the depreciation rate chosen (4%) is very close to that corresponding to the linear depreciation method.
Figure 4: Comparison of the results obtained for the capital/product ratio using the simplified and linear depreciation methods.
The simplified method proved useful and trustworthy for obtaining an approximation to the capital productivity of countries. Based on the described methodology, the evolution of capital productivity was calculated for six countries. The results are discussed in the present e&e issue under the title "Capital Productivity: a further Limitation to Brazilian Growth".
Interactive manipulation of rigid body simulations
Results 1 - 10 of 51
, 2002
Cited by 286 (15 self)
Dynamic textures are sequences of images of moving scenes that exhibit certain stationarity properties in time; these include sea-waves, smoke, foliage, whirlwind etc. We present a novel
characterization of dynamic textures that poses the problems of modeling, learning, recognizing and synthesizing dynamic textures on a firm analytical footing. We borrow tools from system
identification to capture the "essence" of dynamic textures; we do so by learning (i.e. identifying) models that are optimal in the sense of maximum likelihood or minimum prediction error variance.
For the special case of second-order stationary processes, we identify the model sub-optimally in closed-form. Once learned, a model has predictive power and can be used for extrapolating synthetic
sequences to infinite length with negligible computational cost. We present experimental evidence that, within our framework, even low-dimensional models can capture very complex visual phenomena.
- ACM Transactions on Graphics , 2004
Cited by 140 (13 self)
Optimization is an appealing way to compute the motion of an animated character because it allows the user to specify the desired motion in a sparse, intuitive way. The difficulty of solving this
problem for complex characters such as humans is due in part to the high dimensionality of the search space. The dimensionality is an artifact of the problem representation because most dynamic human
behaviors are intrinsically low dimensional with, for example, legs and arms operating in a coordinated way. We describe a method that exploits this observation to create an optimization problem that
is easier to solve. Our method utilizes an existing motion capture database to find a low-dimensional space that captures the properties of the desired behavior. We show that when the optimization
problem is solved within this low-dimensional subspace, a sparse sketch can be used as an initial guess and full physics constraints can be enabled. We demonstrate the power of our approach with
examples of forward, vertical, and turning jumps; with running and walking; and with several acrobatic flips.
- ACM Trans. Graph
Cited by 91 (8 self)
We consider the simulation of nonconvex rigid bodies focusing on interactions such as collision, contact, friction (kinetic, static, rolling and spinning) and stacking. We advocate representing the
geometry with both a triangulated surface and a signed distance function defined on a grid, and this dual representation is shown to have many advantages. We propose a novel approach to time
integration merging it with the collision and contact processing algorithms in a fashion that obviates the need for ad hoc threshold velocities. We show that this approach matches the theoretical
solution for blocks sliding and stopping on inclined planes with friction. We also present a new shock propagation algorithm that allows for efficient use of the propagation (as opposed to the
simultaneous) method for treating contact. These new techniques are demonstrated on a variety of problems ranging from simple test cases to stacking problems with as many as 1000 nonconvex rigid
bodies with friction as shown in Figure 1.
, 2003
Cited by 91 (3 self)
Optimization is a promising way to generate new animations from a minimal amount of input data. Physically based optimization techniques, however, are difficult to scale to complex animated
characters, in part because evaluating and differentiating physical quantities becomes prohibitively slow. Traditional approaches often require optimizing or constraining parameters involving joint
torques; obtaining first derivatives for these parameters is generally an O(D²) process, where D is the number of degrees of freedom of the character. In this paper, we describe a set of objective
functions and constraints that lead to linear time analytical first derivatives. The surprising finding is that this set includes constraints on physical validity, such as ground contact constraints.
Considering only constraints and objective functions that lead to linear time first derivatives results in fast per-iteration computation times and an optimization problem that appears to scale well
to more complex characters. We show that qualities such as squash-and-stretch that are expected from physically based optimization result from our approach. Our animation system is particularly
useful for synthesizing highly dynamic motions, and we show examples of swinging and leaping motions for characters having from 7 to 22 degrees of freedom.
, 2003
Cited by 77 (2 self)
We describe a method for controlling smoke simulations through user-specified keyframes. To achieve the desired behavior, a continuous quasi-Newton optimization solves for appropriate "wind" forces
to be applied to the underlying velocity field throughout the simulation. The cornerstone of our approach is a method to efficiently compute exact derivatives through the steps of a fluid simulation.
We formulate an objective function corresponding to how well a simulation matches the user's keyframes, and use the derivatives to solve for force parameters that minimize this function. For
animations with several keyframes, we present a novel multipleshooting approach. By splitting large problems into smaller overlapping subproblems, we greatly speed up the optimization process while
avoiding certain local minima.
- ACM TRANS. GRAPH. (SIGGRAPH PROC , 2004
Cited by 70 (1 self)
We describe a novel method for controlling physics-based fluid simulations through gradient-based nonlinear optimization. Using a technique known as the adjoint method, derivatives can be computed
efficiently, even for large 3D simulations with millions of control parameters. In addition, we introduce the first method for the full control of free-surface liquids. We show how to compute adjoint
derivatives through each step of the simulation, including the fast marching algorithm, and describe a new set of control parameters specifically designed for liquids.
, 2000
Cited by 60 (2 self)
Traditional collision intensive multi-body simulations are difficult to control due to extreme sensitivity to initial conditions or model parameters. Furthermore, there may be multiple ways to
achieve any one goal, and it may be difficult to codify a user's preferences before they have seen the available solutions. In this paper we extend simulation models to include plausible sources of
uncertainty, and then use a Markov chain Monte Carlo algorithm to sample multiple animations that satisfy constraints. A user can choose the animation they prefer, or applications can take direct
advantage of the multiple solutions. Our technique is applicable when a probability can be attached to each animation, with "good" animations having high probability, and for such cases we provide a
definition of physical plausibility for animations. We demonstrate our approach with examples of multi-body rigid-body simulations that satisfy constraints of various kinds, for each case presenting
animations that are true to a physical model, are significantly different from each other, and yet still satisfy the constraints. CR Descriptors: I.3.7 [Computer Graphics]: Three-Dimensional Graphics
and Realism - Animation; I.3.5 [Computer Graphics]: Computational Geometry and Object Modeling - Physically based modeling; I.6.5 [Simulation and Modeling]: Model Development - Modeling methodologies
G.3 [Probability and Statistics]: Probabilistic algorithms; Keywords: plausible motion, Markov chain Monte Carlo, motion synthesis, spacetime constraints 1
, 2002
Cited by 36 (1 self)
A new paradigm for rigid body simulation is presented and analyzed. Current techniques for rigid body simulation run slowly on scenes with many bodies in close proximity. Each time two bodies collide
or make or break a static contact, the simulator must interrupt the numerical integration of velocities and accelerations. Even for simple scenes, the number of discontinuities per frame time can
rise to the millions. An efficient optimization-based animation (OBA) algorithm is presented which can simulate scenes with many convex threedimensional bodies settling into stacks and other
“crowded” arrangements. This algorithm simulates Newtonian (second order) physics and Coulomb friction, and it uses quadratic programming (QP) to calculate new positions, momenta, and accelerations
strictly at frame times. The extremely small integration steps inherent to traditional simulation techniques are avoided. Contact points are synchronized at the end of each frame. Resolving contacts
with friction is known to be a difficult problem. Analytic force calculation can have ambiguous or non-existing solutions. Purely impulsive techniques avoid these ambiguous cases, but still require
an excessive and computationally expensive number of updates in the case of
, 2003
Cited by 35 (3 self)
For many systems that produce physically based animations, plausibility rather than accuracy is acceptable. We consider the problem of evaluating the visual quality of animations in which physical
parameters have been distorted or degraded, either unavoidably due to real-time frame-rate requirements, or intentionally for aesthetic reasons. To date, no generic means of evaluating or predicting
the fidelity, either physical or visual, of the dynamic events occurring in an animation exists. As a first step towards providing such a metric, we present a set of psychophysical experiments that
established some thresholds for human sensitivity to dynamic anomalies, including angular, momentum and spatio-temporal distortions applied to simple animations depicting the elastic collision of two
rigid objects. In addition to finding significant acceptance thresholds for these distortions under varying conditions, we identified some interesting biases that indicate non-symmetric responses to
these distortions (e.g., expansion of the angle between postcollision trajectories was preferred to contraction and increases in velocity were preferred to decreases). Based on these results, we
derived a set of probability functions that can be used to evaluate the visual fidelity of a physically based simulation. To illustrate how our results could be used, two simple case studies of
simulation levels of detail and constrained dynamics are presented.
- In PG ’03: Proceedings of the 11th Pacific Conference on Computer Graphics and Applications, IEEE Computer Society , 2003
Cited by 32 (5 self)
We implement a framework for animating interactive characters by combining kinematic animation with physical simulation. The combination of animation techniques allows the characters to exploit the
advantages of each technique. For example, characters can perform naturallooking kinematic gaits and react dynamically to unexpected situations. Kinematic techniques such as those based on motion
capture data can create very natural-looking animation. However, motion capture based techniques are not suitable for modeling the complex interactions between dynamically interacting characters.
Physical simulation, on the other hand, is well suited for such tasks. Our work develops kinematic and dynamic controllers and transition methods between the two control methods for interactive
character animation. In addition, we utilize the motion graph technique to develop complex kinematic animation from shorter motion clips as a method of kinematic control. 1. | {"url":"http://citeseerx.ist.psu.edu/showciting?cid=42863","timestamp":"2014-04-16T08:45:08Z","content_type":null,"content_length":"41406","record_id":"<urn:uuid:2ed7c280-0985-432b-96f8-2482e6df0c0f>","cc-path":"CC-MAIN-2014-15/segments/1398223206147.1/warc/CC-MAIN-20140423032006-00056-ip-10-147-4-33.ec2.internal.warc.gz"} |
Matlab input function
Hi all,
Can someone please help me with my MATLAB code? I am trying to get the program to quit if the user inputs a non-numeric value, and to display an error message. I have got it to work using the MATLAB input('prompt','s') command, but I don't want to use this command because then a user can't enter 5/100 etc.
Here is my code.
j = input('number?: ');
if isnan(j)
    disp('Input must be a number, Function will terminate')
    return
end
I would just like to know if there is a simple way to do this; if not, I will use the command I used before together with the eval function, I think...
Thank You
Regards Elbarto | {"url":"http://www.physicsforums.com/showthread.php?t=252457","timestamp":"2014-04-20T11:28:07Z","content_type":null,"content_length":"19988","record_id":"<urn:uuid:cba38ebb-5910-42e8-8bd9-7dcbfd527ed4>","cc-path":"CC-MAIN-2014-15/segments/1398223202774.3/warc/CC-MAIN-20140423032002-00259-ip-10-147-4-33.ec2.internal.warc.gz"} |
From OpenWetWare
10/1/05 - 10/7/05
Last week: 9/24/05-9/30/05
Next week: 10/8/05-10/14/05
Goals this week:
Canonical promoters
Intergenic sections of chrIII
Generated by interfeatureregions.py
List of all sections of the chromosome that are in between genes [but may overlap with other elements, like ARS etc], plus 100 bases on either end to show flanking genes: | {"url":"http://openwetware.org/wiki/AlexLabNotebook/ChrIIIRebuild/10/1/05-10/7/05","timestamp":"2014-04-19T01:29:08Z","content_type":null,"content_length":"82105","record_id":"<urn:uuid:9c79c85a-b107-4943-9e29-c70d619e09be>","cc-path":"CC-MAIN-2014-15/segments/1397609535535.6/warc/CC-MAIN-20140416005215-00072-ip-10-147-4-33.ec2.internal.warc.gz"} |
Formulas for Combinations and Permutations
The factorial is a convenient shorthand way to write the product of several consecutive positive whole numbers: n! = n × (n − 1) × ⋯ × 2 × 1. By convention we define 0! = 1.
As we will see, other formulas dealing with counting build upon the factorial formula.
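To make the formulas concrete, here is a small sketch (the function names are my own) computing permutations P(n, k) = n!/(n − k)! and combinations C(n, k) = n!/(k!(n − k)!):

```python
from math import factorial

def permutations(n, k):
    """Number of ordered arrangements of k items out of n: n!/(n-k)!"""
    return factorial(n) // factorial(n - k)

def combinations(n, k):
    """Number of unordered selections of k items out of n: n!/(k!(n-k)!)"""
    return factorial(n) // (factorial(k) * factorial(n - k))

# Example: choosing 3 of 5 items
# permutations(5, 3) == 60, combinations(5, 3) == 10
```

Integer division is exact here because both quotients are always whole numbers.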
Well Ordering
Re: Well Ordering
1) I looked at it. Don't see your point. Least element is not the same as smallest element? 2) Circular definition Frankly, I'm amazed that there should be so much confusion about the ordering of
the integers. It is foundational in every (the ones I have seen) text on algebra and analysis. Landau, for example, states that, for any x and y, either x<y, or x>y or x=y. Emakorov speaks of a
non-strict order: If he means either x>y or x less than or equal to y, the well-ordering principle states less than.
Re: Well Ordering
Two comments.
1. On the Wikipedia statement of well-ordering there is a link for "least element." I suggest you look at it.
2. "A set A with an order relation < is said to be well-ordered if every nonempty subset of A has a smallest element." - "Topology, a first course" Munkres, 1975, pg. 63.
Re: Well Ordering
The concept of the least element makes sense only when the order is specified. Without specifying the order, talking about the well-ordering principle or the least element is like talking about
the number of car doors without specifying the car make and model.
Re: Well Ordering
In mathematics, the well-ordering principle states that every non-empty set of positive integers contains a least element. (Wiki)
Re: Well Ordering
Sorry, this is not an answer. You mentioned three relations using only their notations, without saying what they are. The least element is defined for a partially ordered set. Please provide a
partial (or total) non-strict (i.e., reflexive) order.
Re: Well Ordering
Reply to post #2: n1>n2 or n1<n2 or n1=n2
Reply to post #2: The well-ordering principle specifies a least member; what is it?
Just out of curiosity, why is there so much space in your posts?
Re: Well Ordering
It looks to me like there is only one member so there is just one possible "largest member" or "smallest member"!
Re: Well Ordering
With respect to which order?
Well Ordering
What is the least member of {2}?
Re: Well Ordering
I suspect what you are getting at is that {2} has a glb and a lub. Perhaps "each set of integers contains its greatest lower bound" makes more sense as a principle than the well-ordering principle.
Thanks for your time and effort.
Re: Well Ordering
If by "the least" you mean "the least with respect to the usual non-strict order ≤ on natural numbers," then the least element of {2} is 2. In fact, as HallsofIvy pointed out, the least element
of {2} with respect to any order is 2 because 2 ≤ x for all x ∈ {2}. This is due to the reflexivity of ≤, which is one component of the definition of a non-strict partial order.
At first I asked which order you mean to preclude some tricks like providing a strict (non-reflexive) or some other unusual relation instead of the regular order. I kept on asking because I
wanted to make sure you know what "the least element" and "the well-ordering principle" are.
Re: Well Ordering
Least clearly means (not <=, but <) total ordering. b&m waffles on this by stating that any collection C of positive integers must contain some member m such that whenever c is in C, m<=c, which to me says it contains its glb. So you can't use the well-ordering principle to prove every set of integers contains its glb.
I was trying to track down a proof that assumes total order (literally least), in which case it would make a difference. I haven't been able to do so.
Re: Well Ordering
Re: Well Ordering
"Least" means en element that is less or equal to any other element. The important point is that "least" is an adjective, and it is applied to an element. "Least" does not mean an ordering,
whether strict or non-strict.
Waffles on what? This statement is true.
Yes, the well-ordering principle does not apply to arbitrary subsets of integers because not all of them are well-ordered and not all of them even have the lub.
I did not understand this. A proof of what? So, your initial question has been answered. What other questions do you have?
Wikipedia seems to explain well what the greatest element is, and the least element is similar. The well-ordering principle is here.
Re: Well Ordering
Ok. I agree. It's a matter of definition and wiki gives your definition. btw, i meant glb not lub, corrected in edit.
Perhaps one could say the concept of order is meaningless for a single integer. | {"url":"http://mathhelpforum.com/higher-math/211116-well-ordering-print.html","timestamp":"2014-04-20T10:04:41Z","content_type":null,"content_length":"20903","record_id":"<urn:uuid:25502ccd-00d1-417c-a180-1eba97d15ee1>","cc-path":"CC-MAIN-2014-15/segments/1397609538110.1/warc/CC-MAIN-20140416005218-00185-ip-10-147-4-33.ec2.internal.warc.gz"} |
Grey Box Modelling of Hydrologic Systems
Publication: Research › Ph.D. thesis – Annual report year: 2012
The main topic of the thesis is grey box modelling of hydrological systems, as well as the formulation and assessment of their embedded uncertainties. A grey box model is a combination of a white box model, a physically-based model that is traditionally formulated using deterministic ordinary differential equations, and a black box model, which relates to models that are obtained statistically from input-output relations. A grey box model consists of a system description, defined by a finite set of stochastic differential equations, and an observation equation. Together, the system and observation equations represent a stochastic state space model. In the grey box model the total noise is divided into a measurement noise and a process noise. The process noise is due to model approximations, undiscovered inputs and uncertainties in the input series. Estimates of the process noise can be used to highlight the lack of fit in the state space formulation, and further support decisions for a model expansion. By using stochastic differential equations to formulate the dynamics of the hydrological system, either the complexity of the model can be increased by including the necessary hydrological processes in the model, or the formulation of the process noise can be considered so that it meets the physical limits of the hydrological system and gives an adequate description of the embedded uncertainty in the model structure.

The thesis consists of two parts: a summary report and a part which contains six scientific papers. The summary report is divided into three distinct parts that introduce the main concepts and methods used in the following papers. The first part contains the basic concepts in hydrology and related hydrological models. The second part explains the grey box model by presenting stochastic differential equations and shows how the equations can be linked to the available measurements. Moreover, impulse response function models are introduced as an alternative to stochastic differential equation based models; by exploiting known hydrological models as the impulse response function, this model framework becomes partly physically-based. For estimating the parameters in the grey box models the maximum likelihood method is used. The third important part of the summary report is predictions, and with focus on the uncertainty of prediction intervals the corresponding performance measures have to include the intervals. The thesis illustrates three performance measures for this performance evaluation: reliability, sharpness and resolution. For decision making, a performance criterion is preferred that quantifies all of these measures in a single number, and for that the quantile skill score criterion is discussed in this thesis.

The second part of the thesis, which contains the papers, is divided into two different subjects. First are four papers, which consider the grey box model approach to a well field with several operating pumps. The model foundation is the governing equation for groundwater flow, which can be simplified and represented in a state space form that resembles the formulations used in numerical methods for well field modelling. The objective in the first two papers is to demonstrate how a simple grey box model is formulated and, subsequently, extended in terms of parameter estimation using statistical methods. The simple models in these papers consider only part of the well field, but data analysis reveals that the wells in the well field are highly correlated. In the third paper, all wells pumping from the same aquifer are included in the state space formulation of the model, but instead of extending the physical description of the system, the uncertainty is formulated to handle the spatio-temporal variation in the output. The uncertainty in the model is then evaluated by using the quantile skill score criterion. In the fourth paper, the well field is modelled using impulse response function models to describe the water level variation in the wells as a function of the available pumping rates in the well field. The paper illustrates, through a case study, how the model can be used to define and solve the well field management problem.

The second half of part II consists of two papers where the stochastic differential equation based model is used for sewer runoff from a drainage system. A simple model is used to describe a complex rainfall-runoff process in a catchment, but the stochastic part of the system is formulated to include the increasing uncertainty when rainwater flows through the system, as well as to describe the lower limit of the uncertainty when the flow approaches zero. The first paper demonstrates in detail the grey box model and all related transformations required to obtain a feasible model for the sewer runoff. In the last paper this model is used to predict the runoff, and the performance of the prediction intervals is evaluated by the quantile skill score criterion.
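To make the state space structure described above concrete, here is a minimal simulation sketch. The linear drift, the parameter values and the noise levels are illustrative assumptions of mine, not the thesis's actual model: a one-dimensional system equation integrated with Euler–Maruyama, plus an observation equation with separate measurement noise.

```python
import math
import random

# Minimal sketch of a grey box state space model (assumed form):
#   system equation (SDE):  dX_t = -a * X_t dt + sigma dW_t
#   observation equation:   Y_k  = X_k + e_k,  e_k ~ N(0, s2)
# Keeping the process noise (sigma dW_t) separate from the measurement
# noise (e_k) is the defining feature of the grey box formulation.

def simulate_grey_box(a=0.5, sigma=0.2, s2=0.01, dt=0.01, n=1000, seed=0):
    rng = random.Random(seed)
    x = [1.0]  # initial state
    for _ in range(n - 1):
        # Euler-Maruyama step: deterministic drift plus Brownian increment
        dw = math.sqrt(dt) * rng.gauss(0.0, 1.0)
        x.append(x[-1] - a * x[-1] * dt + sigma * dw)
    # The observation equation adds measurement noise on top of the state
    y = [xi + math.sqrt(s2) * rng.gauss(0.0, 1.0) for xi in x]
    return x, y

x, y = simulate_grey_box()
print(len(x), len(y))  # → 1000 1000
```

Estimating the process noise level from data would, as the abstract notes, indicate lack of fit and guide a model expansion; the maximum likelihood estimation itself is beyond this sketch.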
Original language English
Publication date 2011
Name IMM-PHD-2011
Number 263
ISSN (print) 0909-3192
ID: 5824835 | {"url":"http://orbit.dtu.dk/en/publications/grey-box-modelling-of-hydrological-systems(d88d46bd-8831-471b-9af2-cb2006aea622).html","timestamp":"2014-04-19T23:50:44Z","content_type":null,"content_length":"36042","record_id":"<urn:uuid:18d271f4-d943-430f-a523-81f7ccd24195>","cc-path":"CC-MAIN-2014-15/segments/1397609537754.12/warc/CC-MAIN-20140416005217-00350-ip-10-147-4-33.ec2.internal.warc.gz"} |
Biggest ball included in an intersection of balls
I would like to prove that for any family of balls $\{B(c_i,r_i)\}_i \subset \mathbb{R}^d$ such that $\{c_1, \dots, c_n\} \subset \bigcap_i B(c_i,r_i) $ and $\forall i, r_i \geq 1$, there exists a
ball of radius $1-\frac{\theta_d}{2}$ included in the intersection $\bigcap_i B(c_i,r_i)$ where $\theta_d/2 = \frac{1}{2}\sqrt{\frac{2d}{d+1}}$ denotes the ratio between the diameter and the radius
of the smallest enclosing ball of a regular simplex.
Intuitively, it seems that the way of making the smallest intersection is to assign all points $c_i$ to the vertices of a regular simplex of diameter $1$ and all $r_i$ to $1$. By doing so, one can
check that the ball of radius $1-\frac{\theta_d}{2}$ centered at $x$ the barycenter of $\{c_1,\dots,c_n\}$ is included in the intersection of balls $ \bigcap_i B(c_i,r_i)$. Indeed, in the case of a
simplex, the radius of the biggest ball centered at $x$ and included in the intersection of balls is $1-\text{Radius}(\sigma) = 1 - \frac{\theta_d}{2}$ (hence the constant is tight in this case).
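As a quick numerical sanity check of this tight case (my addition; it covers $d=2$ only and is of course not a proof), one can hard-code the unit equilateral triangle and compare the largest ball centered at the barycenter against $1-\theta_d/2$:

```python
import math

# Tight case for d = 2: centers at the vertices of a regular simplex of
# diameter 1 (unit equilateral triangle), all radii equal to 1.
d = 2
theta_half = 0.5 * math.sqrt(2 * d / (d + 1))  # theta_d / 2 = 1/sqrt(3)
centers = [(0.0, 0.0), (1.0, 0.0), (0.5, math.sqrt(3) / 2)]
bary = tuple(sum(c[i] for c in centers) / len(centers) for i in range(2))
# Largest ball centered at the barycenter that fits inside the
# intersection of the unit balls around the three vertices:
r = min(1.0 - math.dist(bary, c) for c in centers)
print(abs(r - (1 - theta_half)) < 1e-12)  # → True
```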
I am having difficulty proving that this case is indeed the worst case. I was only able to prove that the result holds when all balls have the same radius. Does this result seem familiar to anyone? I would really appreciate any comment, idea or reference.
ps: The topology tag is here for several reasons. One of them is that the biggest radius of a ball included in the intersection corresponds to the weak feature size of the complement of the intersection. Another one is that this result is linked to a collapsibility result.
Crossposted to MSE: math.stackexchange.com/questions/404006/… – Joseph O'Rourke May 27 '13 at 21:17
1 Answer
OK, let's try.
First, $n=d+1$ (Helly's theorem).
Second, if the balls of radii $r_i-\theta_d$ do not have a common point, there is $\theta<\theta_d$ such that the balls of radii $r_i-\theta$ have exactly one common point, say, the
origin. Also, we can ignore the balls such that $0$ is not on their boundary. Thus, we get $m\le d+1$ vectors $v_i$ of length $r_i-\theta$ with $|v_i-v_j|\le \min(r_i,r_j)$. Moreover,
the vectors $v_i$ cannot lie in one half-space (otherwise there would be more common points), so $\sum_i a_iv_i=0$ for some $a_i\ge 0$, not all of them $0$.
Now just note that as soon as the angle between $v_i$ and $v_j$ is obtuse, decreasing each length keeping $|v_i-v_j|$ constant will increase the angle, and that in the case of the equilateral triangle, setting $r_i=1$ will also increase the angle. Thus, if $e_i$ are unit vectors in the directions of $v_i$, we have $\langle e_i,e_j\rangle> -1/d$ and we still have $\sum b_i e_i=0$ with $b_j\ge 0$. However, the matrix with $1$'s on the diagonal and off-diagonal terms satisfying $0>A_{ij}>-1/d$ is positive definite. We thus run into a contradiction.
I hope this holds water. Check everything and let me know if there are any gaps. :)
Regarding the first (okay, second) sentence: assume $d = 2$. Certainly, we can construct $4$ balls whose intersection contains all the centers, but so that the intersection of any
$3$ is strictly larger than the intersection of all $4$. Therefore, the largest ball which fits into the intersection of $3$ will almost surely not fit into the intersection of all
$4$. How can we use Helly's theorem to just restrict attention to $d+1$ balls?? – Vidit Nanda May 29 '13 at 6:24
The question is equivalent to asking if the intersection of the balls of radii $r_i-\theta_d$ (in my notation; I just realized that the OP used $\theta_d/2$ where I used $\theta_d$) is non-empty. – fedja May 29 '13 at 10:50
@fedja Thank you for your answer. I don't understand the last paragraph and the transformation you described. "decreasing each length keeping $|v_i−v_j|$ constant" - can you explain to me what length you are referring to here? – jojo38 May 29 '13 at 12:43
All I wanted to say is that if $\theta$ is less than the radius you want, then in every triangle OIJ with $|OI|=r_i-\theta$, $|OJ|=r_j-\theta$, $|IJ|\le \min(r_i,r_j)$, the angle $O$ is at most as large as that in the triangle with $|OI|=|OJ|=1-\theta$, $|IJ|=1$. – fedja May 29 '13 at 18:03
Now I have it :) Thank you, I really appreciate your help. – jojo38 May 31 '13 at 15:13
Dartboard challenge solved - learn about matrices and circular series - E90E50fx
You can read about the challenge itself here:
Sum of the squares of each k consecutive terms in the dartboard sequence
First it was published here (Italian):
[Quizzone di Excel] Quesito 10
The challenge was posted on Linkedin a year later:
Excel Hero Group on Linkedin
You can download the sample files at the bottom of the page by clicking on the small arrow on the right-hand side.
First solution
The task was to compute the sum of the squares of each k consecutive terms in the dartboard numbering sequence - that means you need to create the k-element groups following the circle or, in other words, combine the first and last elements (circular shift).
The best and most general solution is the result of a long and entertaining discussion on this forum. The final and best formula is:
The importance of this formula is that it solves the base problem: how to shift k-element groups of an array when a circular shift is needed.
Note: In the examples - to make the matrixes smaller - we will use numbers from 1 to 10.
When you are thinking about the solution, it can be a great help to write the numbers in a 10x10 matrix, not in 3 columns - something like this for the k=3 case:
In the rows of the 10x10 matrix you can see the k-element groups we need to sum and square. The bottom-left elements are the key to creating the circular shift. Now we need to build up a helper matrix which will make it possible to do this shift on the original range.
If you think about the rules of matrix multiplication, you find out this is what you need:
How can you build up this shift-matrix? You only need to use the row and column index numbers and combine two simple matrices.
We set:
rng = A2:A11 (assume, for simplicity, the values from 1 to 10)
k = 3
Highlight the individual operations with different colors:
Below what happens:
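The shift-matrix construction above can also be sketched outside Excel. Here is a plain-Python analogue of the idea (my illustration, not the original spreadsheet formula):

```python
# Build the n x n 0/1 shift matrix S whose row i selects the k circularly
# consecutive elements starting at position i, then sum the squares of
# the row sums of S @ v -- the same computation the Excel formula performs.
def circular_sum_of_squares(values, k):
    n = len(values)
    shift = [[1 if (j - i) % n < k else 0 for j in range(n)] for i in range(n)]
    group_sums = [sum(shift[i][j] * values[j] for j in range(n)) for i in range(n)]
    return sum(s * s for s in group_sums)

# The post's small example: values 1..10, k = 3
print(circular_sum_of_squares(list(range(1, 11)), 3))  # → 3125
```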
Second solution
We found two other solutions which are not as general as the above mentioned, but could also be interesting.
The second solution is based on how we calculate the square of a sum of elements.
Here is the formula:
Let’s see how the squares look for k=3:
We have 3 times the square of the element (c), 2*2 times the product with the element above (b), 2*2 times the product with the element below (d), and 2*1 times the product with the elements two places above and below (a, e).
In Excel, if we multiply the vector by its transposed vector, we get the products of all the possible combinations of two elements (aa, ab, ac, … | ba, bb, bc, ...). We only need to “select” which combinations we need and how many times. The latter will be a coefficient matrix that depends on the value of k.
We created an example file to explain how to build up the coefficient matrix. It is a little bit similar to the method we used in the general solution. The difference is that in this case we need a symmetrical matrix, so we need to use the ABS function.
Third solution
This solution uses the fact that we have integer numbers. (This is not an “elegant” mathematical solution but a “smart” algorithmic solution. :) )
Here is the formula:
Because the key to the calculation is the order of the numbers, the idea is to “combine” the number with its serial number. We can add them together after dividing the number by 100 - this
way the integer part will be the serial number while the fractional part will be the number itself. Now we can easily read the k-element groups with the help of an array of serial numbers and the
SMALL function.
We will only need the fractional part, so we subtract the array of serial numbers and multiply the fractional part by 100. Only two things are left: sum the rows of this matrix (matrix multiplication by the sum vector), square the results and sum again.
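In plain Python, the serial-number encoding trick described above reads roughly like this (my sketch of the idea, not the original formula; like the Excel version, it relies on the values being integers below 100):

```python
# Encode each value as serial + value/100: the integer part carries the
# position, the fractional part the value itself. Reading k consecutive
# codes (wrapping around via the serial numbers) recovers the circular
# k-element groups, mirroring the SMALL-based Excel formula.
def circular_groups(values, k):
    n = len(values)
    coded = sorted(i + 1 + values[i] / 100 for i in range(n))
    groups = []
    for start in range(n):
        group = []
        for j in range(k):
            code = coded[(start + j) % n]
            group.append(round((code - int(code)) * 100))  # strip the serial
        groups.append(group)
    return groups

groups = circular_groups(list(range(1, 11)), 3)
print(groups[0], groups[-1])  # → [1, 2, 3] [10, 1, 2]
```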
Words by: Kris
Here's the question you clicked on:
For a class pizza party, Esther bought 9 bottles of juice and twelve bottles of water. Let j represent the cost of juice and w represent the cost of water. Write an expression to determine the total
cost for the drinks. Then, calculate the cost if juice costs two dollars and water costs one dollar
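A worked check of the requested arithmetic (my own answer sketch, not from the thread): the expression is 9j + 12w, and with the given prices it evaluates to 30 dollars.

```python
# Total drink cost: 9 bottles of juice at j dollars each plus
# 12 bottles of water at w dollars each.
j, w = 2, 1            # juice costs $2, water costs $1
total = 9 * j + 12 * w
print(total)  # → 30
```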
An Introduction to
Theoretical Chemistry, Second Edition
Jack Simons*
Chemistry Department
University of Utah
Salt Lake City, Utah
* Henry Eyring Scientist and Professor of Chemistry
Useful websites for further information:
http://simons.hec.utah.edu Jack's homepage
http://simons.hec.utah.edu/TheoryPage Jack's website on theoretical chemistry.
Second Edition Introductory Remarks and Table of Contents
I am very grateful to Cambridge University Press for agreeing to allow the copyrights on this text to be returned to me so that I could create a Second Edition and make it available at no cost in this online version. Please feel free to reproduce the files constituting this text, but I ask that you not sell any of this material to others; I want it to remain free to everyone.
In making the additions and changes needed to prepare this Second Edition, I corrected errors I found in the First Edition and I enhanced the level of presentation in several Chapters where readers
of the First Edition had told me such enhancements were needed. However, I tried to keep the overall level of the text appropriate to senior undergraduate or beginning graduate students in good U. S.
I apologize for the quality of the presentation made in this Edition; I did not have available the tools that Cambridge University Press had to assure that all the text, figures, and equations
appeared in wonderful format. Moreover, I removed the equation numbers to make it easier for me to make additional corrections and additions. I did the best I could, so I hope the readers will be OK
with my efforts.
Introductory Remarks
What is theoretical chemistry?
Let's begin by discussing what the discipline of theoretical chemistry is about. I think most students of chemistry think that theory deals with using computers to model or simulate molecular
behaviors. This is true, but it is only part of what theory does. Theory indeed contains under its broad umbrella the field of computational simulations, and it is such applications of theory that
have gained much recent attention especially as powerful computers and user-friendly software packages have become widely available. Today, every issue of the Journal of Physical Chemistry, the
Journal of the American Chemical Society, and the Journal of Chemical Physics contains numerous examples of such theoretical studies.
However, the discipline also involves analytical theory, which deals with how the fundamental equations used to perform the simulations are derived from the Schrödinger equation or from classical
mechanics. The discipline also has to do with obtaining the equations that relate laboratory data (e.g., spectra, heat capacities, reaction cross-sections, phase diagrams, conductivity) to molecular
properties (e.g., geometries, bond energies, activation energies, energy levels, intermolecular potentials). This analytical side of theory is also where the equations of statistical mechanics that
relate macroscopic properties of matter to the microscopic properties of the constituent molecules are obtained.
So, theory is a diverse field of chemistry that uses physics, mathematics and computers to help us understand molecular behavior, to simulate molecular phenomena, and to predict the properties of new
molecules and of new phases of matter. It is common to hear this discipline referred to as theoretical and computational chemistry. This text is focused more on the theory than on the computation.
That is, I deal primarily with the basic ideas upon which theoretical chemistry is centered, and I discuss the equations and tools that enter into the three main sub-disciplines of theory- electronic
structure, statistical mechanics, and reaction dynamics. I have chosen to emphasize these elements rather than to stress computational applications of theory because there are already many good
sources available that deal with computational chemistry.
Now, let me address the issue of "who does theory"? It is common for chemists whose primary research activities involve laboratory work to also use theoretical concepts, equations, simulations and
methods to assist in interpreting their data. Sometimes, these researchers also come up with new concepts or new models in terms of which to understand their laboratory findings. These experimental
chemists are using theory in the former case and doing new theory in the latter. Many of my experimental chemistry colleagues have evolved into using theory in this manner.
However, for several decades now there have also been chemists who do not carry out laboratory experiments but whose research focus lies in developing new theory (new analytical equations, new
computational tools, new concepts) and applying theory to understanding chemical processes as a full-time endeavor. These people are what we call theoretical chemists. I am proud to say that I am a member of this community of theorists, and that I believe this discipline offers a very powerful background for understanding essentially all other areas of chemistry. Even though people like me do
not perform laboratory experiments, it is essential that we understand how experiments are done and what elements of molecular behavior they probe. It is for this reason that I include in this text
significant discussion of experimental methods as they relate to the theory upon which I focus.
Where does one learn about theoretical chemistry? Most chemistry students in colleges and universities take classes in introductory chemistry, organic, analytical, inorganic, physical, and bio-
chemistry. It is extremely rare for students to encounter a class that has theoretical chemistry in its title. This book is intended to illustrate to students that the subject of theoretical
chemistry pervades most if not all of the other classes she/he takes in an undergraduate or graduate chemistry curriculum. It is also intended to offer students a modern introduction to the field of
theoretical chemistry and to illustrate how it has evolved into a discipline of its own and now stands shoulder-to-shoulder with the traditional experimental sub-disciplines of chemical science.
How to use this book
I have tried to write this book so it could be used in any of several ways:
1. As a text book that could be used to learn the quantum mechanics and many of the spectroscopy components of a typical junior-or senior-level undergraduate physical chemistry class. This would
involve covering Chapters 1-4 as well as Chapter 5, the latter of which offers a brief overview of how theoretical chemistry fits into the research areas of chemistry. It would also be wise to solve
many of the problems that I offer in pursuing such an avenue of study. Certainly, any student who has not yet taken an undergraduate class in physical chemistry should follow this route.
2. As a first-year graduate-level text in which selected topics in the areas of introductory quantum chemistry, spectroscopy, statistical mechanics, and the theory of reaction dynamics are surveyed.
This would involve covering Chapters 6-8 and solving many of the problems. Although the material of Chapters 1-4 should have been learned by such students in an undergraduate physical chemistry
class, it would be wise to read this material to refresh one's memory. It is likely that full-semester classes in statistical mechanics and in reaction dynamics will require more material than
offered in Chapters 7 and 8, but these Chapters should suffice for briefer classes and for gaining an introduction to these fields.
3. As an introductory survey source for experimental chemists interested in learning about the central concepts and many of the most common tools of theoretical chemistry. To pursue this avenue, the
reader should focus on Chapters 6-8 because the material of Chapters 1-5 covers what such readers probably already know.
Because of the flexibility in how this text can be used, some duplication of material occurs. However, it has been my experience that students benefit from encountering subjects more than one time,
especially if each subsequent encounter is at a deeper level or makes connections with different applications. I believe this is the case for subjects that are covered in more than one place in this
I have also offered many exercises (small problems) and problems to be solved by the reader as well as detailed solutions. Most of these problems deal with topics contained in Chapters 1-4 because it
is these subjects that are likely to be studied in an undergraduate classroom setting where homework assignments are common. Chapters 6-8 are designed to give the reader an introduction to electronic
structure theory, statistical mechanics, and reaction dynamics at the graduate and beginning-research level. In such settings, it is my experience that individual instructors prefer to construct
their own problems, so I offer fewer exercises and problems associated with these Chapters. Most, if not all, of the problems presented here require many steps to solve, so the reader is encouraged
not to despair when attempting them; they may be difficult, but they teach valuable lessons.
Other sources of information
Before launching into the subject of theoretical chemistry, allow me to mention other sources that can be used to obtain information at a somewhat more advanced level than is presented in this text.
Keep in mind that this is a text intended to offer an introduction to the field of theoretical chemistry, and is directed primarily at advanced undergraduate- and beginning graduate- level
readerships. It is my hope that such readers will, from time to time, want to learn more about certain topics that are especially appealing to them. For this purpose, I suggest two sources that I
have been instrumental in developing. First, a web site that I created can be accessed at http://simons.hec.utah.edu/TheoryPage. This site provides a wealth of information including
1. web links to home pages of a multitude of practicing theoretical chemists who specialize in many of the topics discussed in this text;
2. numerous education-site web links that allow students ranging from fresh-persons to advanced graduate students to seek out a variety of information;
3. textual information much of which covers at a deeper level those subjects discussed in this text at an introductory level.
Another major source of information at a more advanced level is my textbook Quantum Mechanics in Chemistry (QMIC) written with Dr. Jeff Nichols (Past Director of the High Performance Computing Group
at the Pacific Northwest National Lab and now Director of Mathematics and Computational Science at Oak Ridge National Lab). The full content of that book can be accessed in .pdf file format through
the TheoryPage web link mentioned above. In several locations within the present introductory text, I specifically refer the reader either to my TheoryPage or QMIC textbook, but I urge you to also
use these two sources whenever you want a more in-depth treatment of a subject.
To the readers who want to access up-to-date research-level treatments of many of the topics we introduce in this text, I suggest several recent monographs to which I refer throughout this text:
Molecular Electronic Structure Theory, T. Helgaker, P. Jørgensen, and J. Olsen, J. Wiley, New York, N.Y. (2000), and
Modern Electronic Structure Theory, D. R. Yarkony, Ed., World Scientific Publishing, Singapore (1999)
Theory of Chemical Reaction Dynamics, M. Baer, Ed., Vols. 1-4; CRC Press, Boca Raton, Fla. (1985)
Essentials of Computational Chemistry, C. J. Cramer, Wiley, Chichester (2002).
An Introduction to Computational Chemistry, F. Jensen, John Wiley, New York (1998)
Molecular Modeling, 2^nd ed., A. R. Leach, Prentice Hall, Englewood Cliffs (2001).
Molecular Reaction Dynamics and Chemical Reactivity, R. D. Levine and R. B. Bernstein, Oxford University Press, New York (1997)
Computer Simulations of Liquids, M. P. Allen and D. J. Tildesley, Oxford U. Press, New York (1997),
as well as a few longer-standing texts in areas covered in this work:
Statistical Mechanics, D. A. McQuarrie, Harper and Row, New York (1977)
Introduction to Modern Statistical Mechanics, D. Chandler, Oxford U. Press, New York (1987)
Quantum Chemistry, H. Eyring, J. Walter, and G. E. Kimball, John Wiley, New York (1944)
Introduction to Quantum Mechanics, L. Pauling and E. B. Wilson, Dover, New York (1963),
Molecular Quantum Mechanics, 3^rd Ed., P. W. Atkins and R. S. Friedman, Oxford U. Press, New York (1997).
Modern Quantum Chemistry, A. Szabo and N. S. Ostlund, McGraw-Hill, New York (1989).
R. N. Zare, Angular Momentum, John Wiley, New York (1988).
Because the science of theoretical chemistry makes much use of high-speed computers, it is essential that we appreciate to what extent the computer revolution has impacted this field. Primarily, the
advent of modern computers has revolutionized the range of problems to which theoretical chemistry can be applied. Before this revolution, the classical Newtonian or quantum Schrödinger equations in
terms of which theory expresses the behavior of atoms and molecules simply could not be solved for any but the simplest species, and then often only by making rather crude approximations. However,
present-day computers, which routinely perform 10^9 operations per second, have 10^9 bytes of memory and 50 times this much hard disk storage, have made it possible to solve these equations for large
collections of molecules and for molecules containing hundreds of atoms and electrons. Moreover, the vast improvement in computing power has inspired many scientists to develop better (more accurate
and more efficient) approximations to use in solving these equations.
Unfortunately, the undergraduate and beginning graduate-level education provided to most chemistry majors no longer requires students to learn how to write computer code to embody such new
theories. I strongly urge any student interested in pursuing theoretical chemistry to take a class in computer programming or find some other way to learn the basics of this field. Because this text
is intended for both an undergraduate and beginning graduate audience and is designed to offer an introduction to the field of theoretical chemistry, it does not devote much time to describing the
computer implementation of this subject. Nevertheless, I will attempt to introduce some of the more basic aspects of the computational aspects of theory especially when doing so will help the reader
understand the basic principles. In addition, the TheoryPage web site contains a large number of links to scientists and to commercial software providers that can give the reader more detail about
the computational aspects of theoretical chemistry.
LetÕs now begin the journey that I hope will give the reader a basic understanding of what theoretical chemistry is and how it fits into the amazing broad discipline of modern chemistry. I hope you
learn a lot and do so in a way that is enjoyable to you.
Table of Contents
Part 1. Background Material
Chapter 1. The Basics of Quantum Mechanics page 1
1.1 Why Quantum Mechanics is Necessary for Describing Molecular Properties
1.2 The Schrödinger Equation and Its Components
1.2.1 Operators
1.2.2 Wave Functions
1.2.3 The Schrödinger Equation
1.3 Your first application of quantum mechanics- motion of a particle in one dimension.
1.3.1 Classical Probability Density
1.3.2 Quantum Treatment
1.3.3 Energies and Wave functions
1.3.4 Probability Densities
1.3.5 Classical and Quantum Probability Densities
1.3.6 Time Propagation of Wave functions
1.4 Free Particle Motions in More Dimensions
1.4.1 The Schrödinger Equation
1.4.2 Boundary Conditions
1.4.3 Energies and Wave functions for Bound States
1.4.4 Quantized Action Can Also be Used to Derive Energy Levels
1.4.5 Action Can Also be Used to Generate Wave Functions
1.5 Chapter Summary
Chapter 2. Model Problems That Form Important Starting Points page 95
2.1 Free Electron Model of Polyenes
2.2 Bands of Orbitals in Solids
2.3 Densities of States in 1, 2, and 3 dimensions.
2.4 The Most Elementary Model of Orbital Energy Splittings: Hückel
or Tight-Binding Theory
2.5 Hydrogenic Orbitals
2.5.1 The Φ equation
2.5.2 The Θ equation
2.5.3 The R equation
2.6 Electron Tunneling
2.7 Angular Momentum
2.7.1 Orbital angular momentum
2.7.2 Properties of general angular momenta
2.7.3 Summary
2.7.4 Coupling of angular momenta
2.8 Rotations of Molecules
2.8.1 Rotational Motion For Rigid Diatomic and Linear Polyatomic Molecules
2.8.2 Rotational Motions of Rigid Non-Linear Molecules
2.9 Vibrations of Molecules
2.10 Chapter Summary
Chapter 3. Characteristics of Energy Surfaces page 201
3.1. Strategies for Geometry Optimization and Finding Transition States
3.1.1 Finding Local Minima
3.1.2 Finding Transition States
3.1.3 Energy Surface Intersections
3.2. Normal Modes of Vibration
3.2.1. The Newton Equations of Motion for Vibration
1. The Kinetic and Potential Energy Matrices
2. The Harmonic Vibrational Energies and Normal Mode Eigenvectors
3.2.2. The Use of Symmetry
1. Symmetry Adapted Modes
2. Point Group Symmetry of the Harmonic Potential
3.3 Intrinsic Reaction Paths
3.4 Chapter Summary
Chapter 4. Some Important Tools of Theory page 229
4.1. Perturbation Theory
4.1.1 An Example Problem
4.1.2 Other Examples
4.2 The Variational Method
4.2.1 An Example Problem
4.2.2 Another Example
4.3. Point Group Symmetry
4.3.1 The C3v Symmetry Group of Ammonia - An Example
4.3.2. Matrices as Group Representations
4.3.3 Characters of Representations
4.3.4. Another Basis and Another Representation
4.3.5 Reducible and Irreducible Representations
4.3.6. More Examples
4.3.7. Projector Operators: Symmetry Adapted Linear Combinations of Atomic Orbitals
4.3.8. Summary
4.3.9 Direct Product Representations
4.3.10 Overview
4.4 Character Tables
4.5 Time Dependent Perturbation Theory
4.6 Chapter Summary
Part 2. Three Primary Areas of Theoretical Chemistry
Chapter 5. An Overview of Theoretical Chemistry page 312
5.1 What is Theoretical Chemistry About?
5.1.1 Molecular Structure- bonding, shapes, electronic structures
5.1.2 Molecular Change- reactions and interactions
1. Changes in Bonding
2. Energy Conservation
3. Conservation of Orbital Symmetry: Woodward-Hoffmann Rules
4. Rates of change
5.1.3. Statistical Mechanics: Treating Large Numbers of Molecules in Close Contact
5.2. Molecular Structure: Theory and Experiment
5.2.1. Experimental Probes of Molecular Shapes
1. Rotational Spectroscopy
2. Vibrational Spectroscopy
3. X-Ray Crystallography
4. NMR Spectroscopy
5.2.2. Theoretical Simulation of Structures
5.3. Chemical Change
5.3.1. Experimental Probes of Chemical Change
5.3.2. Theoretical Simulation of Chemical Change
5.4 Chapter Summary
Chapter 6. Electronic Structures page 372
6.1 Theoretical Treatment of Electronic Structure: Atomic and Molecular Orbital Theory
6.1.1 Orbitals
1. The Hartree Description
2. The LCAO-Expansion
3. AO Basis Sets
a. STOs and GTOs
b. The Fundamental Core and Valence Basis
c. Polarization Functions
d. Diffuse Functions
4. The Hartree-Fock Approximation
a. Koopmans' Theorem
b. Orbital Energies and the Total Energy
5. Molecular Orbitals
a. Shapes, Sizes, and Energies of Orbitals
b. Bonding, Anti-bonding, Non-bonding, and Rydberg Orbitals
6.1.2 Deficiencies in the Single Determinant Model
1. Electron Correlation
2. Essential Configuration Interaction
3. Various Approaches to Electron Correlation
a. The CI Method
b. Perturbation Theory
c. The Coupled-Cluster Method
d. The Density Functional Method
e. Energy Difference Methods
f. The Slater-Condon Rules
g. Atomic Units
6.1.3 Molecules Embedded in Condensed Media
6.1.4 High-End Methods for Treating Electron Correlation
6.2. Experimental Probes of Electronic Structure
6.2.1 Visible and Ultraviolet Spectroscopy
1. The Electronic Transition Dipole and Use of Point Group Symmetry
2. The Franck-Condon Factors
3. Time Correlation Function Expressions for Transition Rates
4. Line Broadening Mechanisms
6.2.2 Photoelectron Spectroscopy
6.2.3 Probing Continuum Orbitals
6.3 Chapter Summary
Chapter 7. Statistical Mechanics page 491
7.1. Collections of Molecules at or Near Equilibrium
7.1.1. The Distribution of Energy Among Levels
1. Basis of the Boltzmann Population Formula
2. Equal a priori Probability Assumption
7.1.2. Partition Functions and Thermodynamic Properties
1. System Partition Functions
2. Individual-Molecule Partition Functions
7.1.3. Equilibrium Constants in Terms of Partition Functions
7.2 Monte Carlo Evaluation of Properties
7.2.1 Metropolis Monte Carlo
7.2.2 Umbrella Sampling
7.3 Molecular Dynamics Simulations
7.3.1 Trajectory Propagation
7.3.2 Force Fields
7.3.3 Coarse Graining
7.4 Time Correlation Functions
7.5 Some Important Chemical Applications of Statistical Mechanics
7.5.1 Gas-Molecule Thermodynamics
7.5.2 Einstein and Debye Models of Solids
7.5.3 Lattice Theories of Surfaces and Liquids
7.5.4 Virial Corrections to Ideal-Gas Behavior
7.6 Chapter Summary
Chapter 8. Chemical Dynamics page 585
8.1 Theoretical Treatment of Chemical Change and Dynamics
8.1.1 Transition State Theory
8.1.2 Variational Transition State Theory
8.1.3 Reaction Path Hamiltonian Theory
8.1.4 Classical Dynamics Simulation of Rates
8.1.5 RRKM Theory
8.1.6 Correlation Function Expressions for Rates
8.1.7 Wave Packet Propagation
8.1.8 Surface Hopping Dynamics
8.1.9 Landau-Zener Surface Jumps
8.2 Experimental Probes of Reaction Dynamics
8.2.1 Spectroscopic Methods
8.2.2 Beam Methods
8.2.3 Other Methods
8.3 Chapter Summary | {"url":"http://simons.hec.utah.edu/ITCSecondEdition/TableofContents.html","timestamp":"2014-04-19T22:06:13Z","content_type":null,"content_length":"69143","record_id":"<urn:uuid:87d72d16-98bf-40ee-b00f-2975f45c3ca2>","cc-path":"CC-MAIN-2014-15/segments/1397609537754.12/warc/CC-MAIN-20140416005217-00518-ip-10-147-4-33.ec2.internal.warc.gz"} |
ISNI 000000010866810X Cajori (1859-1930)
Butsurigaku no rekishi.
Butsurigakushi kōgi
chequered career of Ferdinand Rudolph Hassler, first superintendent of the United States Coast survey; a chapter in the history of science in America, The
early mathematical sciences in North and South America, The
Elementary algebra first[-second] year course
Grammar school book
great conversation, The : a reader's guide to Great books of the Western world
Gunter's scale and the slide rule during the seventeenth century
History of elementary mathematics, with hints on methods of teaching, by Florian Cajori,..., A
History of elementary mathematics with hints on methods of teaching. rev. ed., A
history of mathematical notations, A : two volumes bound as one
History of mathematical notations, by Florian Cajori,..., A
history of mathematics ... 1901., A
History of mathematicsby Florian Cajori,... 2nd edition, revised and enlarged, A
History of physics in its elementary branches, including the evolution of physical laboratories, by Florian Cajori,..., A
history of physics in its elementary branches (through 1925): including the evolution of physical laboratories., A
history of the conceptions of limits and fluxions in Great Britain from Newton to Woodhouse, A
History of the conceptions of limits and fluxions in Great Britain from Newton Woodhouse, by Florian Cajori,..., A
history of the logarithmic slide rule and allied instruments, and On the history of Gunter's scale and the slide rule during the seventeenth century, A
Introduction to the modern theory of equations
introduction to the theory of equations., An
Kajori shotō sūgakushi
list of Oughtred's mathematical symbols, with historical notes., A
Logarithmic slide rule and allied instruments
Mathematical principles of natural philosophy and his system of the world
Mathematical principles of natural philosophy ; Optics
Mathematics in liberal education; a critical examination of the judgments of prominent men of the ages
Newton's Principia
Notes on the history of geometry and algebra
On the history of Gunter's scale and the slide rule during the seventeenth century
On the history of Gunter's scale and the slide rule during the seventeeth century
Philosophiae naturalis principia mathematica
Shotō sūgakushi, 1928 (1943 printing):
Sir Isaac Newton's Mathematical principles of natural philosophy and his system of the world, translated into English by Andrew Motte in 1729, the translation revised... by Florian Cajori
String figures and other monographs : String figures, by W. W. R. Ball. Methods and theories for the solution of problems of geometrical construction, by J. Petersen. Non-Euclidean plane geometry and
trigonometry, by H. S. Carslaw. A History of the logarithmic slide rule, by F. Cajori
Sūgakushi kōgi.
teaching and history of mathematics in the United States..., The
Treatise on light
William Oughtred, a great seventeenth-century teacher of mathematics, by Florian Cajori,... | {"url":"http://isni-url.oclc.nl/isni/000000010866810X","timestamp":"2014-04-16T20:11:33Z","content_type":null,"content_length":"24806","record_id":"<urn:uuid:82fb6a03-13a4-4968-b6cb-703f45a57a7d>","cc-path":"CC-MAIN-2014-15/segments/1397609524644.38/warc/CC-MAIN-20140416005204-00448-ip-10-147-4-33.ec2.internal.warc.gz"} |
Graph of y = 2x + 2
November 19th 2011, 04:33 AM #1
Graph of y = 2x + 2
Please excuse my lack of understanding of Windows 7 and Excel.
I have included a sketch of a graph; y = 2x + 2
I have sketched this on graph paper and the appearance is the same as the attachment. I am happy that I have sketched the graph correctly; however, I can't get Excel to reproduce the same graph.
It's quite clear that when inputting the data, i.e. when x = 0, y = 2, and the coordinates (3,8) and (-5, -8), that I am getting it wrong; the graph looks like an uneven landscape.
I know the graph is a straight-line graph which cuts the y axis at 2, but I can't get Excel to do it.
Would somebody please show me how to input the data correctly.
Re: Graph of y = 2x + 2
try using a free, simple graphing program ...
Re: Graph of y = 2x + 2
Thank you for your link and advice.
The software package is very presentable, but please advise, how did you use that package to plot the points as I am struggling to find where to enter the data?
Re: Graph of y = 2x + 2
see inserting a series of points and calculating a trendline starting on page 43 of the linked pdf manual ...
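If it helps, the table of values can also be generated in a few lines of code before being entered into a spreadsheet. Here is a quick Python sketch (the particular x values chosen are just the points from the sketch plus one extra; any spacing works for a straight line):

```python
# Table of (x, y) values for y = 2x + 2 -- the same two columns
# you would type into a spreadsheet before inserting an XY
# (scatter) chart, which draws a straight line through the points.
def y(x):
    return 2 * x + 2

xs = [-5, -3, 0, 3]            # includes (-5,-8), (0,2) and (3,8)
table = [(x, y(x)) for x in xs]

for x_val, y_val in table:
    print("x = %3d   y = %3d" % (x_val, y_val))
```

The key point for Excel is the same as for the code: one column of x values, one column of computed y values, and an XY (scatter) chart rather than a line chart, so the x spacing is respected.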
Axiom of Choice and continuous functions
Do you know if the following statement is an equivalent form of the axiom of choice or not?
If $X$ is a compact metric space, then every continuous function $f: X \longrightarrow \mathbb{R}$ is uniformly continuous.
If you know any references, please let me know.
set-theory gn.general-topology axiom-of-choice metric-spaces
What notion of compactness do you use? I ask specifically since with the definition that seems most standard to me: every open cover contains finite subcover, it is not clear to me where the usual proof even appears to make use of AC. I might be missing something though. If so could you please say where AC is/seems to be needed. – quid Feb 17 '13 at 15:46
Quid, one might think that one has to use AC when choosing the ball around each point $x$, but in a metric space there is a canonical choice, so AC is not needed. The argument does not work, however, in a general topological space as opposed to a metric space. – Joel David Hamkins Feb 17 '13 at 15:52
@Joel David Hamkins: ah, yes, thank you! The usual 'for each $x$ esists $\delta_x$ such that...' does not really "give" a $\delta_x$ but so to say only the option to choose one (this is what I
overlooked). – quid Feb 17 '13 at 16:05
Of course "uniformly continuous" is not defined in general topological space. – Gerald Edgar Feb 17 '13 at 17:41
Uniform continuity is, however, defined in uniform spaces, namely those spaces $X$ whose topology is defined by a choice of a neighborhood basis for the diagonal in $X \times X$. So this question
would make sense in that context. – Lee Mosher Feb 17 '13 at 23:28
3 Answers
It seems to me that this is provable without using the axiom of choice.
Suppose that $X$ is a compact metric space and $f:X\to\mathbb{R}$ is continuous. Let's show it is uniformly continuous. Fix any $\epsilon\gt 0$. For each point $x\in X$, there is a small ball $B$ centered at $x$ such that $f(y)$ is within $\epsilon/2$ of $f(x)$ for all $y\in B$, and we may choose $B$ to have radius $1/{n_x}$, where we choose $n_x$ to be the smallest positive natural number for which this radius has the desired property. Thus, we have a canonical choice of radius here, and so we do not need the axiom of choice to define the map $x\mapsto n_x$.

Consider now the family $\{ B_{1/{2n_x}}(x) \mid x\in X \}$, consisting of the inner core of each of those balls, with the radius of each of them shrunk to half. This is an open cover of $X$, and so by compactness there is a finite subcover, which consists of finitely many balls, having some minimal radius $1/{2n}$. Now, if $y$ and $z$ are within $1/{2n}$ of each other, then they are both within $1/{n_x}$ of the center $x$ of the ball in which $y$ sits in the subcover, and so $f(y)$ and $f(z)$ are within $\epsilon/2$ of $f(x)$, and hence within $\epsilon$ of each other, showing that $f$ is uniformly continuous.
General case being a function to an arbitrary (locally compact?) metric space? – Asaf Karagila Feb 17 '13 at 17:28
I see, there is even no need to choose a particular $n_x$; just include all the balls! – Joel David Hamkins Feb 17 '13 at 23:39
Right, in fact if I wrote a book titled "How to avoid the axiom of choice", that would be trick number one: don't choose! – Andrej Bauer Feb 17 '13 at 23:43
Because every compact metric space is separable, the theorem can be formalized in second-order arithmetic, where it is provable in $\mathsf{WKL}_0$. – Carl Mummert Feb 18 '13 at 0:24
As far as I can see, the obvious argument for separability of compact metric spaces uses the countable axiom of choice, and further instances of choice may be hidden in the representation of compact metric spaces as quotients of $2^\omega$, which is what you need to translate the theorem into the language of $\mathrm{WKL}_0$. – Emil Jeřábek Feb 18 '13 at 19:37
Let us improve slightly on Joel's answer by avoiding not only choice but also excluded middle (which is used in assuming that the minima $n_x$ exist). In passing we also generalize to an
arbitrary metric codomain. Since the various notions of compactness are not equivalent intuitionistically, we have to specify which one we mean. We mean by "compact" the Heine-Borel finite
subcover property.
Theorem: If a map $f : X \to Y$ from a compact metric space to a metric space is continuous then it is uniformly continuous.
Proof. (No excluded middle, no choice.) Let $\epsilon > 0$ be given. Consider the family of open balls $$\mathcal{F} = \lbrace B(x,r) \mid x \in X, r > 0, \forall x', x'' \in B(x,2r) . d(f(x'), f(x'')) < \epsilon \rbrace.$$ Beware, we put $B(x,r)$ in $\mathcal{F}$ if the larger ball $B(x,2r)$ is mapped by $f$ to a sufficiently small set.

Because $f$ is continuous, $\mathcal{F}$ covers $X$. By the Heine-Borel property it has a finite subcover $$B(x_1, r_1), \ldots, B(x_n, r_n).$$ Let $\delta = \min (r_1, \ldots, r_n)$.
Suppose $d(y,z) < \delta$ for some $y, z \in X$. There is $i$ such that $d(x_i, y) < r_i$, hence $d(x_i, z) \leq d(x_i, y) + d(y, z) < r_i + \delta \leq 2 r_i$. Thus, since both $y$ and
$z$ are contained in $B(x_i, 2 r_i)$ we conclude $d(f(y), f(z)) < \epsilon$. QED.
As usual, the constructive proof is also the most elegant one. The above proof is an easy adaptation that avoids unnecessary use of choice of 4.3.31 and 4.3.32 of Engelking's famous General
Topology. Further reading: Hajime Ishihara and Peter Schuster, Compactness under constructive scrutiny. Math. Log. Quart. 50, No. 6, 540 – 550 (2004).
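The covering argument above can also be imitated numerically. In the following Python sketch everything concrete (the choice $f(x)=x^2$ on a finite grid in $[0,1]$, the dyadic radii, the value of $\epsilon$) is an illustrative assumption, not part of the proof: it computes, at each grid point, a radius on whose double the oscillation of $f$ stays below $\epsilon$, and takes the minimum over the finite grid as a uniform $\delta$.

```python
# Finite imitation of the subcover argument for f(x) = x*x on [0, 1]:
# for each sample point x, find a radius r with the oscillation of f
# below eps on B(x, 2r); then delta = min of r over the finite grid.
def f(x):
    return x * x

eps = 0.1
grid = [i / 200.0 for i in range(201)]   # finite stand-in for [0, 1]

def small_radius(x):
    # Halve r until f oscillates by less than eps on B(x, 2r).
    r = 0.5
    while True:
        pts = [t for t in grid if abs(t - x) <= 2 * r]
        if max(f(t) for t in pts) - min(f(t) for t in pts) < eps:
            return r
        r /= 2.0

delta = min(small_radius(x) for x in grid)

# With this delta, close grid points have close images, as predicted.
worst = max(abs(f(a) - f(b))
            for a in grid for b in grid if abs(a - b) <= delta)
print("delta =", delta, " worst oscillation =", worst)
```

Of course this only checks finitely many points; the theorem is what guarantees the construction works on all of $[0,1]$.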
That's a nice proof! – quid Feb 18 '13 at 0:01
The constructive skeleton in the closet is this: one cannot exhibit existence of interesting spaces with the Heine-Borel property, constructively. – Andrej Bauer Feb 18 '13 at 0:15
Look Ma, no axiom of choice!
THEOREM 0 Let $X$ be a compact space. Let $\Phi$ be a non-empty family of closed subsets of $X$, $F := \bigcap \Phi$, and $G\supseteq F$ an open subset of $X$. Then there exists a finite $
\Phi_0\subseteq \Phi$ such that $\bigcap\Phi_0\subseteq G$.
PROOF Family $\Gamma\ :=\ G\cup\{X\setminus A : A \in \Phi\}$ is an open covering of $X$. Etc. END of PROOF
As an instant corollary we get:
THEOREM 1 Let $X$ be a compact space. Let $\Phi$ be a non-empty family of closed subsets of $X$, $F := \bigcap \Phi$, and $G\supseteq F$ an open subset of $X$. Assume also that $\Phi$ is
linearly ordered by $\subseteq$. Then there exists $A\in \Phi$ such that $A \subseteq G$.
THEOREM 2 Let $(X\ d)$ be a compact metric space. Let $W$ be an open subset of $X^2$, such that $\Delta_X\subset W$, where $\Delta_X := \{(x\ x):x\in X\}$. Then there exists $\delta > 0$
such that $V_X(\delta)\subseteq W$, where $V_X(\delta) := \{(x\ y)\in X^2 : d(x\ y)\le\delta\}$.
PROOF Apply Theorem 1 to $X^2$ as a replacement of $X$ of Theorem 1; etc. END of PROOF
THEOREM 3 Let $f : X\rightarrow Y$ be an arbitrary continuous function of a metric compact space $(X\ d_X)$ into an arbitrary metric space $(Y\ d_Y)$. Then function $f$ is uniformly continuous.
PROOF Let $\epsilon > 0$. Let $W\subseteq X^2$ be the inverse image of $V_Y(\epsilon)$ under function $f\times f$. There exists, by Theorem 2, $\delta > 0$ such that $V_X(\delta)\subseteq
W$. Then $d_Y(f(x')\ f(x''))\le\epsilon$ for every $x'\ x''\in X$ such that $d_X(x'\ x'')\le\delta$. END of PROOF
This proof has no balls. Is it a drawback? – Wlodzimierz Holsztynski Feb 18 '13 at 19:24
I assume you require $F = \bigcap_{\phi \in \Phi}\phi$ instead of $F = \bigcap\Phi$. – Vidit Nanda Feb 18 '13 at 19:32
@Vel Nias: en.wikipedia.org/wiki/… – Emil Jeřábek Feb 18 '13 at 19:49
In the context of my writing (of my notation) the two are the same. I use pairs of related big operations: direct and indexed. Whenever the direct operation is suitable, it is simpler
(less clutter) hence preferable. It works for Cup, Sum, Max, Sup, Cap, Product, Min, Inf, ... (I also avoid coma whenever possible...; there was around here a thread devoted to
notation, and I enjoy notation and terminology, but ... oh, well). – Wlodzimierz Holsztynski Feb 18 '13 at 20:02
I miss the missing commas. You have excluded middle all over the place, but it might be avoidable: use closed sets instead of open to formulate topology and compactness, and in Theorem
1 replace linearity with directedness, or intuitionistic linearity $x < y \Rightarrow x < z \lor z < y$, I am not sure. Anyhow, your proof seems to generalize to uniform spaces, so it is
nice to have it. – Andrej Bauer Feb 18 '13 at 22:44
Object Oriented Javascript
As of late I have seen a lot of questions about Javascript that could be easily done with some Object Oriented programming, so I decided to write a short tutorial on it.
In this tutorial
We will be creating 2 objects, 1 called Dice and the other called Thrower. The Dice class will be an object that can be set with a maximum number of sides and rolled to get a number. The Thrower
object will take multiple dice and roll them all at once returning the resultant sum of all the dice rolled.
Lets get started
When starting out creating an object it is useful to know what the object will need. For instance, you wouldn't want to give the object a variable it will never use, so make sure you have thought your object through before creating it.
A note about coding
- To continue, or begin, creating good programming habits we will be naming all objects with a capital letter. As you saw with the Dice object above, the first letter is capitalized. This is because it is the standard for the time being. For anyone who has programmed in Java, Actionscript, C++ or any number of other languages, you know that this is the case, and has been for quite a while. Later on, when we start creating sub-functions we will be naming them starting with lower case and then using camel case for new words. That looks like so: getVal.
So, to get started with our code we create a function called Dice (this will be our object), and we give it these 2 variables: max and number. This code looks like so:
function Dice(){
    this.max;
    this.number;
}
NOTICE - When using variables that are for an object you must use this, otherwise the variables are deleted after they are done being used. This is of course unless they are global, but we don't want to have a whole bunch of global variables.
Because we want to be able to set the max we need to add a parameter to Dice that is set to max:
function Dice(maxNum){
    this.max = maxNum;
}
Now, however we have a problem. What if someone doesn't set the max?
This can easily be checked for by an if/else statement:
function Dice(maxNum){
    if(maxNum){
        this.max = maxNum;
    }else{
        this.max = 6; //some initial value, in this case a 6 sided dice.
    }
}
Good, now that we have the basics of our object complete let's start adding some sub-functions (methods in Java, class functions in C++).
Adding functions
Adding functions to an object is a little different than in other languages. This is because it is done inside of a function and acts like a sub-function.
This can be seen as such:
function Dice(maxNum){
    if(maxNum){
        this.max = maxNum;
    }else{
        this.max = 6; //some initial value, in this case a 6 sided dice.
    }
    this.roll = function(){
        // Do stuff here
    }
}
In the case of our roll function we will want to have it calculate a random number between 1 and its max. This is accomplished like so:
function Dice(maxNum){
    if(maxNum){
        this.max = maxNum;
    }else{
        this.max = 6; //some initial value, in this case a 6 sided dice.
    }
    this.roll = function(){
        this.number = Math.round(Math.random()*(this.max-1))+1;
        return this.number;
    }
}
Which simply gets a random number (Math.random() returns a number between 0 and 1), multiplies it by 1 less than max, rounds it to the nearest int, and adds 1 to it. Adding 1 sets the minimum the number can be to 1.
NOTICE - The function returns the number that is calculated.
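The arithmetic described above can also be sanity-checked outside the tutorial. This short snippet is Python rather than JavaScript purely so the formula can be tested in isolation; note that Python's round() breaks .5 ties differently from JavaScript's Math.round, so this only checks the range of results, not the exact distribution:

```python
import random

# Mirror of the tutorial's formula: round(random() * (max - 1)) + 1.
# random() is in [0, 1), so the result always lands in 1..max_sides.
def roll(max_sides):
    return int(round(random.random() * (max_sides - 1))) + 1

samples = [roll(6) for _ in range(2000)]
print("observed range:", min(samples), "to", max(samples))
```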
Now, just because we have returned a number once doesn't mean that we can just let it sit there until we roll again. What if someone wants to take another look at the number?
In that case we will need to let them see it. This is done by creating another sub-function called getVal:
PARTICIPATION OPP! - If you feel like trying out some coding on your own for this object create a sub-function that returns the number of the object. If you don't care to try things on your own at
this point continue reading.
So, now that you have made up your mind, here is the current code:
function Dice(maxNum){
    if(maxNum){
        this.max = maxNum;
    }else{
        this.max = 6;
    }
    this.roll = function(){
        this.number = Math.round(Math.random()*(this.max-1))+1;
        return this.number;
    }
    this.getVal = function(){
        return this.number;
    }
}
Creating a Thrower
PARTICIPATION OPP! - For those of you who feel like trying more OOJS on your own here is your chance to try creating an object on your own! The object name is Thrower, and it will take an array (or single) dice, be able to roll all the dice, returning the sum of all the rolled dice; and add extra dice to the array.
Now, we will be creating another object. This will be able to control the dice objects (in a sense of the word).
Starting out we will create the object:
function Thrower(){
    // Stuff here
}
To start out the Thrower function will need the variable myDice. After that is complete we need to get a function to roll the dice created.
For the moment the code looks like so:
function Thrower(dice){
    this.myDice = dice;
}
When creating the roll function it is important to remember that we need to have it loop through the whole length of the array myDice. We also want it to return the sum of the dice, which means we need to have it counting through the loop.
Here is what the Thrower object looks like with the roll function:
function Thrower(dice){
    this.myDice = dice;
    this.roll = function(){
        var sum = 0;
        for(var i=0; i<this.myDice.length; i++){
            sum += this.myDice[i].roll();
        }
        return sum;
    }
}
Now, what do we do if people want to add more dice?
This is easy enough, you add to the end of the array myDice with a simple function called addDie:
Here is the final code for the Thrower object:
function Thrower(dice){
    this.myDice = dice;
    this.roll = function(){
        var sum = 0;
        for(var i=0; i<this.myDice.length; i++){
            sum += this.myDice[i].roll();
        }
        return sum;
    }
    this.addDie = function(dice){
        this.myDice[this.myDice.length] = dice;
    }
}
Creating instances of an object
When creating instances of an object it is important to remember that you need to use the new keyword.
Here is a code snippet we can use to create a single instance of a Dice:
var dice = new Dice();
REALIZE - You can call all the functions and variables inside of the object instance like so: dice.(FUNCTION/VARIABLE NAME HERE). So, if we were to call dice.roll() it would roll the instance of the object that the variable dice is pointing to.
The Finished Product!!!
Here is a look at the whole code that will output the number and everything:
function Thrower(dice){
    this.myDice = dice;
    this.roll = function(){
        var sum = 0;
        for(var i=0; i<this.myDice.length; i++){
            sum += this.myDice[i].roll();
        }
        return sum;
    }
    this.addDie = function(dice){
        this.myDice[this.myDice.length] = dice;
    }
}

function Dice(maxNum){
    if(maxNum){
        this.max = maxNum;
    }else{
        this.max = 6;
    }
    this.roll = function(){
        this.number = Math.round(Math.random()*(this.max-1))+1;
        return this.number;
    }
    this.getVal = function(){
        return this.number;
    }
}

var dice = [new Dice(), new Dice()];
var hands = new Thrower(dice);
hands.addDie(new Dice());
for(var i=0; i<100; i++){
    document.write("Dice roll "+(i+1)+" = "+hands.roll()+"<br>");
}
The End
Enjoy your additional knowledge. | {"url":"http://www.dreamincode.net/forums/topic/63911-intro-to-object-oriented-javascript/page__pid__680217__st__0","timestamp":"2014-04-25T02:13:24Z","content_type":null,"content_length":"106180","record_id":"<urn:uuid:2282531e-89b8-4ba6-95db-ec1ed0a7da44>","cc-path":"CC-MAIN-2014-15/segments/1398223207046.13/warc/CC-MAIN-20140423032007-00250-ip-10-147-4-33.ec2.internal.warc.gz"} |
Following the pattern of the notion of (n,r)-category, an $(n,n)$-category is a higher category with non-trivial cells of at most dimension $n$ and none of them guaranteed to be invertible.
So this is what is usually simply called an n-category.
Note that it is possible to go on to an $(n,n+1)$-category, or $(n+1)$-poset. You can either consider that the $n$-cells are ordered, or else consider that there are irreversible $(n+1)$-cells which are indistinguishable. (Reversible indistinguishable $(n+1)$-cells are all identities and so might as well not exist.)
Revised on October 20, 2009 01:38:52 by
Toby Bartels | {"url":"http://www.ncatlab.org/nlab/show/(n%2Cn)-category","timestamp":"2014-04-18T18:57:10Z","content_type":null,"content_length":"13117","record_id":"<urn:uuid:4c2eca1a-28d5-4743-aaa2-c476491eb281>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.7/warc/CC-MAIN-20140416005215-00646-ip-10-147-4-33.ec2.internal.warc.gz"} |
mapping subintervals
Nis Jørgensen nis at superlativ.dk
Thu Jun 14 00:21:04 CEST 2007
Matteo skrev:
> OK - I'm going to assume your intervals are inclusive (i.e. 34-51
> contains both 34 and 51).
> If your intervals are all really all non-overlapping, one thing you
> can try is to put all the endpoints in a single list, and sort it.
> Then, you can use the bisect module to search for intervals, which
> will give you a logarithmic time algorithm.
> Here, I'm going to assume you just need the index of the containing
> interval. If you really need a name (i.e. 'a1' or 'a2'), you can use a
> list of names, and index into that.
> I hope those assumptions are valid! if so, the following should work:
I have taken the liberty of simplifying your code, using the fact that
tuples are sorted lexicographically. Note that this requires all
intervals to be tuples and not lists (since list(a) < tuple(b) is always True in Python 2).
from bisect import bisect
def test_interval(ivl, intervals):
    # Find where ivl would lie in the list
    # i.e. the index of the first interval sorting as larger than ivl
    idx = bisect(intervals, ivl)
    # Left endpoints equal is a special case - a matching interval will be
    # to the right of the insertion point
    if idx < len(intervals) and intervals[idx][0] == ivl[0]:
        if intervals[idx][1] >= ivl[1]:
            return idx
        return None
    # Otherwise, we need to check to the left of the insertion point
    if idx > 0 and intervals[idx-1][1] >= ivl[1]:
        return idx-1
    return None
>>> intervals = [(10, 21), (34, 51), (77, 101)]
>>> print test_interval((34,35),intervals)
1
>>> print test_interval((34,53),intervals)
None
>>> print test_interval((77,53),intervals)
2
>>> print test_interval((77,83),intervals)
2
>>> print test_interval((77,102),intervals)
None
>>> print test_interval((77,101),intervals)
2
Nis Jørgensen
More information about the Python-list mailing list | {"url":"https://mail.python.org/pipermail/python-list/2007-June/446711.html","timestamp":"2014-04-18T06:31:20Z","content_type":null,"content_length":"4437","record_id":"<urn:uuid:cb2895be-0ce6-431c-ada4-05fbd5a9588f>","cc-path":"CC-MAIN-2014-15/segments/1397609532573.41/warc/CC-MAIN-20140416005212-00009-ip-10-147-4-33.ec2.internal.warc.gz"} |
Design of Concrete Girder Bridge
Slide 1
Design of Concrete Girder Bridge
College of Engineering
Civil & Environmental Engineering Department
Graduation Project II
Team Member:
Ahmed Al-Shehhi 200000069
Waleed Al-Alawi 200101647
Abdullah Al-Neyadi 200101637
Hassan Al-Hassani 200005052
Project’s advisor : Bilal El-Ariss
Slide 2
Presentation outline
• Executive Summary
• Introduction
• Background theory
• Methods and Techniques :
□ Analysis of pier cap
□ Design of bridge deck, girders and pier cap
• Results and discussions
• Conclusions and recommendations
Slide 3
Executive Summary
• Analysis and design of a concrete girder bridge
• Graduation project I
• Graduation project II
□ Pier cap analysis
□ Design of bridge deck, girders and pier cap
Slide 4
Executive Summary
• Software used :
• SAP2000
□ Analyze the structure and determine bending moments and shear forces
• PROKON
□ Compute the reinforcement areas needed for the shear and moments, and the dimensions of the different components of the bridge
Slide 5
• Project description
• Bridge location
Slide 6
• Project description :
□ Continuous girder bridge.
□ Two lanes in each direction and two shoulders; carries traffic in two directions.
□ Two-span girders.
Slide 7
Slide 8
• AASHTO specifications
• American Concrete Institute (ACI) code
Slide 9
• Bridge location :
• Abu Samra Bridge is located on the high way between Abu Dhabi and Al-Ain .
Slide 10
Background Theory
• Reinforcement requirements:
□ Design method
□ Reinforcement requirements due to flexure
□ Reinforcement requirements due to Shear
□ T-Girder
Slide 11
Design method
• The method which will be used in our project is the ultimate-strength design method.
• It is now called strength design.
• The working dead and live loads are multiplied by certain load factors and the resulting values are called factored loads.
Slide 12
Reinforcement requirements due to flexure
• The reinforcing bars will be distributed as follows:
□ This reinforcing may not be spaced farther on center than 3 times the slab thickness.
□ A percentage of the main positive moment reinforcement which is perpendicular to the traffic shall be distributed in the parallel direction of the traffic
Slide 13
Reinforcement requirements due to flexure
• Spacing limits for reinforcement:
□ For cast-in-place concrete the clear distance between parallel bars in a layer shall not be less than 1.5 bar diameters.
□ Not less than 1.5 times the maximum size of the coarse aggregate or 1.5 inches.
Slide 14
Reinforcement requirements due to flexure
• Positive Moment Reinforcement:
□ At least one-third of the positive moment reinforcement in simple members, and one-fourth of the positive moment reinforcement in continuous members, shall extend along the same face of the member into the support; in beams, such reinforcement shall extend into the support at least 6 inches.
• The development length :
□ The reinforcement bars must be extended some distance back into the support and out into the beam to anchor them or develop their strength.
Slide 15
Reinforcement requirements due to Shear
• The failures of reinforced concrete beams in shear are quite different from their failures in bending.
• Shear failures occur suddenly with little or no advance warning.
• If pure shear is produced in a member, a principal tensile stress of equal magnitude will be produced on another plane.
Slide 16
Types of Shear Reinforcement
• Stirrups perpendicular to the axis of the member, or making an angle of 45 degrees or more with the longitudinal tension reinforcement.
• Welded wire fabric with wires located perpendicular to the axis of the member.
• Longitudinal reinforcement with a bent portion making an angle of 30 degrees or more with longitudinal tension reinforcement.
• Combinations of stirrups and bent longitudinal reinforcement.
• Spirals.
Slide 17
Shear strength
• Design of cross sections subject to shear shall be based on: φVn ≥ Vu
• Where Vn = nominal shear strength
• Vu = factored shear force at the section considered
Slide 18
Shear strength provided by concrete
• For members subjected to shear and flexure only, Vc is computed by: Vc = 2√(f'c) bw d (psi units)
Where bw = the width of web
d = the distance from the extreme compression fiber to the centroid of the longitudinal tension reinforcement.
Slide 19
Shear strength provided by Shear Reinforcement
• When shear reinforcement perpendicular to the axis of the member is used: Vs = Av fy d / s
• Where Av = the area of shear reinforcement within distance s.
• s = spacing between stirrups
• Shear strength Vs shall not be taken greater than 8√(f'c) bw d
Slide 20
Minimum shear reinforcement
• A minimum area of shear reinforcement shall be provided in all flexural members, except slabs and footings, where the factored shear force Vu exceeds one-half the shear strength provided by concrete.
• The area provided shall not be less than: Av = 50 bw s / fy
• Where bw and s are in inches.
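These equations chain together into a short design check. The sketch below is illustrative only; every numeric input (f'c, fy, bw, d, Vu, φ, Av) is an assumed value in psi/inch units, not a value from this bridge:

```python
from math import sqrt

# ACI-style shear check in psi/inch units (all values below are assumed).
fc, fy = 4000.0, 60000.0      # concrete and steel strengths, psi
bw, d = 14.0, 24.0            # web width and effective depth, in
Vu, phi = 60000.0, 0.85       # factored shear (lb) and strength-reduction factor

Vc = 2 * sqrt(fc) * bw * d            # shear strength provided by concrete, lb
Vs_req = Vu / phi - Vc                # the stirrups must carry the remainder
Av = 0.40                             # two legs of a #4 stirrup, in^2

s = Av * fy * d / Vs_req              # spacing required for strength, in
s = min(s, d / 2, 24.0)               # spacing caps noted on the slides

assert Av >= 50 * bw * s / fy         # minimum shear reinforcement satisfied
print(round(Vc), round(Vs_req), s)    # concrete share, steel share, spacing
```

For these assumed numbers the d/2 cap governs the stirrup spacing rather than strength.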
Slide 21
Minimum shear reinforcement
• Spacing of Shear Reinforcement
□ Spacing of shear reinforcement placed perpendicular to the axis of the member shall not exceed d/2 or 24 inches.
• Shrinkage temperature reinforcement:
□ Reinforcement for shrinkage and temperature stress shall be provided near exposed surfaces of walls and slabs not otherwise reinforced.
□ The total area of reinforcement provided shall be at least 1/8 square inch per foot in each direction.
□ The spacing of shrinkage and temperature reinforcement shall not exceed three times the wall or slab thickness, or 18 inches
Slide 22
Girder ( T – Section )
• The total width of slab effective as a T-girder flange shall not exceed one-fourth of the span length of the girder.
• The effective flange width overhanging on each side of the web shall not exceed six times the thickness of the slab or one-half the clear distance to the next web.
Slide 23
Recommended Minimum Depths for Constant Depth Members.
Slide 24
Slide 25
Analysis of Pier Cap
• Dead load of pier cap
• Live load of pier cap
Slide 26
Dead load of pier cap
• Estimate the thickness
□ L = 50.54 ft
□ Length of span = 25.27 ft
□ Minimum thickness of the bridge cap piers
□ Width (b) = 0.5 Depth = 3 ft
Slide 27
Slide 28
Dead load of pier cap
Dead load
Shear force diagram
Slide 29
Live load of pier cap
• Consider several cases by distributing the truck wheel loads.
• Take the maximum wheel load = 18,000 lb.
• Find the reactions at each support for all cases.
• Take the maximum values of the reactions.
Slide 30
Live load of pier cap
• These are the following cases:
□ Case 1: Full shift left
□ Case 2: Full shift right
□ Case 3: Centre to left
□ Case 4: Centre to right
□ Case 5: one truck centre to left
□ Case 6: one truck to left
□ Case 7: one truck centre to right
□ Case 8: one truck to right
Slide 31
Live load of pier cap
• Example of calculations: Case 3 (centre to left)
Slide 32
Live load of pier cap
Uniform wheel load
• ∑ M2 = 0
• R1 =( 26 * 2.95 ) / 7.22 = 10.6 k
• ∑ Fy = 0
• 10.6 + R2 – 26 = 0
• R2 = 15.4 k
• ∑ M3 = 0
• R2 =( 26 * 4.17 ) / 7.22 = 15 k
• ∑ Fy = 0
• 15 + R3 – 26 = 0
• R3 = 11 k
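The statics above can be replayed in a few lines (a sketch; the 26 k resultant and the 2.95 ft and 4.17 ft moment arms over the 7.22 ft spans are read directly from the slide):

```python
# Case 3 pier-cap reactions by static equilibrium, one simple span at a time.
P = 26.0       # resultant wheel load, kips (from the slide)
span = 7.22    # distance between supports, ft

# Span 1: moments about support 2 give R1; vertical equilibrium gives R2.
R1 = P * 2.95 / span     # ~10.6 k
R2 = P - R1              # ~15.4 k

# Span 2: moments about support 3 give R2; vertical equilibrium gives R3.
R2b = P * 4.17 / span    # ~15.0 k
R3 = P - R2b             # ~11.0 k

print(round(R1, 1), round(R2, 1), round(R2b, 1), round(R3, 1))
```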
Slide 33
Live load of pier cap
• Reactions for eight cases
Slide 34
Maximum Values in Dead Load
Slide 35
Maximum Values in Live Load
• Find the maximum at the same position as the maximum dead load
Slide 36
Maximum Values in Live Load
Maximum shear force in case 2
Maximum positive moment in case 2
Slide 37
Ultimate Moment & Shear
Slide 38
Slide 39
Design of girder bridge
• Design of slab by using Prokon software
• Design the girders using manual calculation method
• Design the pier cap by using Prokon software.
Slide 40
Design of Slab (Inputs)
• Use PROKON for slab
• Inputs: Slab cross section
Slide 41
Design of Slab (Inputs)
Slide 42
Design of Slab (Inputs)
Slide 43
Design of Slab (Inputs)
Slide 44
Design of Slab (Outputs)
Slide 45
Slide 46
Design of Girder (Inputs)
• Use Hand Calculations Method
Slide 47
Design of Girder
• Positive section
□ The following equations were used to compute Area of steel needed for the section (As):
Fy= 420 MPa
F’c= 21 MPa
Mu = 4745 KN-m
b= 2200.656 mm
d= 1601.4 mm
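A sketch of the positive-section computation using the standard rectangular-stress-block relations Mu = φ·As·fy·(d - a/2) with a = As·fy/(0.85·f'c·b). The slide's own equations are not reproduced in this transcript, so these relations and the strength-reduction factor φ = 0.9 are assumptions; the material and section values are from the slide:

```python
# Solve for the required steel area As (mm^2) by fixed-point iteration.
fy, fc = 420.0, 21.0        # MPa (from the slide)
Mu = 4745e6                 # N*mm, i.e. 4745 kN*m (from the slide)
b, d = 2200.656, 1601.4     # mm (from the slide)
phi = 0.9                   # assumed strength-reduction factor for flexure

As = Mu / (phi * fy * 0.9 * d)           # first guess: lever arm ~ 0.9 d
for _ in range(50):
    a = As * fy / (0.85 * fc * b)        # stress-block depth, mm
    As = Mu / (phi * fy * (d - a / 2))   # updated steel area, mm^2

print(round(As), "mm^2")
```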
Slide 48
Design of Girder
Slide 49
Design of Girder
• Minimum Spacing of stirrups = Maximum of
□ 600 mm
□ .
□ Use minimum spacing (S) = 600 mm
Slide 50
As required (mm2)
Required reinforcement
Main girder section
Design of Girder
Slide 51
Design of Pier Cap (Inputs)
• Using Prokon software to design
• Inputs:
□ Parameters:
☆ Fcu,
☆ Fy,
☆ D.L and L.L factors
☆ density of concrete
□ Length of each span = 7.7 m
Slide 52
Design of Pier Cap (Inputs)
Slide 53
Design of Pier Cap (Outputs)
Slide 54
Design of Pier Cap (Outputs)
• Minimum spacing s = maximum of:
□ (depth - cover)/2 = (1829 - 50)/2 ≈ 890 mm
□ 600 mm
• So minimum spacing (s) = 890 mm.
• Minimum number of bars = length of span / spacing = 7700 / 890 = 8.65, so use 9 bars.
• Take a 10 mm stirrup diameter for the pier cap.
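The spacing and bar-count arithmetic above in a couple of lines (a sketch):

```python
import math

depth, cover, span = 1829, 50, 7700   # mm (from the slides)

s = (depth - cover) / 2               # governing spacing: 889.5 -> 890 mm
n = math.ceil(span / round(s))        # 7700 / 890 = 8.65 -> 9 stirrups
print(round(s), n)
```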
Slide 55
Design of Pier Cap (Outputs)
Slide 56
Design of Pier Cap (Outputs)
Slide 57
• Completed the analysis of the pier cap.
• Completed the design of the superstructure for a girder bridge.
• Used the SAP2000 and Prokon programs in the design.
• The objective of GP II is fulfilled.
• Learned the main concepts of structural analysis and design.
Minimizing a double integral
What region R in the xy-plane minimizes the value of $\iint_{R}(x^{2}+y^{2}-9)\,dA$? What are the reasons?
I recognize the equation of a circle but am unsure of how to answer it. Would R be the set of points (x, y) such that $x^{2}+y^{2}$ is greater than 9?
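One way to test that guess: in polar coordinates the integral over a disk of radius r works out to 2*pi*(r^4/4 - 9*r^2/2), so a quick sweep shows where the minimum actually falls (a sketch, not a proof):

```python
import math

def disk_integral(r):
    # Integral of (x^2 + y^2 - 9) over the disk of radius r, in polar form:
    # 2*pi * integral_0^r (s^2 - 9) * s ds = 2*pi * (r^4/4 - 9*r^2/2)
    return 2 * math.pi * (r**4 / 4 - 9 * r**2 / 2)

values = {k / 10: disk_integral(k / 10) for k in range(1, 61)}
best_r = min(values, key=values.get)
print(best_r, values[best_r])   # the minimum lands at r = 3.0
```

So the integral is smallest over the disk x^2 + y^2 <= 9, the region where the integrand is negative (less than 9, not greater).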
Coplay Math Tutor
Find a Coplay Math Tutor
...Furthermore, I am proficient in econometrics, having taught several students how to use SAS and STATA to perform regressions and analyses. I am also available to tutor for the quantitative
section of the GRE, the entrance exam for graduate school, on which I scored in the 96th percentile.
19 Subjects: including calculus, Microsoft Excel, precalculus, statistics
My name is Karyn, and I currently live in Macungie, PA. I have lived in Lehigh County all of my life and attended schools here. I graduated high school in 1983 from William Allen High School, and
I graduated with a degree in Mathematics from Cedar Crest College.
9 Subjects: including trigonometry, linear algebra, probability, ACT Math
...I have been a practicing engineer for many years and thus I am familiar with many practical applications of math concepts to real world examples. My teaching philosophy is to maintain a
student-focused and student-engaged learning environment to ensure student comprehension and student success. ...
12 Subjects: including calculus, trigonometry, algebra 1, algebra 2
...For the ACT: you should go to act.org and try their practice test. This test has a science section and you are NOT penalized for wrong answers. Therefore, answer every single question.
22 Subjects: including trigonometry, probability, study skills, algebra 1
...I have passed the Chemistry and the Physical Sciences Praxis exams with excellence, scoring in the top 15% of all those who have taken the exams. Score reports are available upon request. Since
I am working to be a teacher, I have all my clearances in order (also available upon request) and updated annually, most recently in August, 2010.
14 Subjects: including trigonometry, biochemistry, algebra 1, algebra 2 | {"url":"http://www.purplemath.com/coplay_math_tutors.php","timestamp":"2014-04-18T00:30:18Z","content_type":null,"content_length":"23509","record_id":"<urn:uuid:82030148-6cfb-42cc-8638-82cd22bddb8c>","cc-path":"CC-MAIN-2014-15/segments/1397609532374.24/warc/CC-MAIN-20140416005212-00311-ip-10-147-4-33.ec2.internal.warc.gz"} |
Robert Saye
Phone: +1 510 486 5412
Robert Saye studied at the Australian National University and received a Bachelor of Philosophy with First Class Honours (2007), specializing in applied mathematics. He expects to graduate from UC
Berkeley with a Ph.D. in applied mathematics in 2013. His research interests include problems involving multiple moving interfaces, development of advanced numerical methods, multi-scale physics,
fluid flow and fluid-structure interaction, high performance computing methods and scientific visualization.
Journal Articles
We introduce a numerical framework, the Voronoi Implicit Interface Method for tracking multiple interacting and evolving regions (phases) whose motion is determined by complex physics (fluids,
mechanics, elasticity, etc.), intricate jump conditions, internal constraints, and boundary conditions. The method works in two and three dimensions, handles tens of thousands of interfaces and
separate phases, and easily and automatically handles multiple junctions, triple points, and quadruple points in two dimensions, as well as triple lines, etc., in higher dimensions. Topological
changes occur naturally, with no surgery required. The method is first-order accurate at junction points/lines, and of arbitrarily high-order accuracy away from such degeneracies. The method uses a
single function to describe all phases simultaneously, represented on a fixed Eulerian mesh. We test the method’s accuracy through convergence tests, and demonstrate its applications to geometric
flows, accurate prediction of von Neumann’s law for multiphase curvature flow, and robustness under complex fluid flow with surface tension and large shearing forces. | {"url":"http://crd.lbl.gov/about/staff/mcs/mathematics/robert-saye/","timestamp":"2014-04-18T20:58:42Z","content_type":null,"content_length":"27055","record_id":"<urn:uuid:d3ea1427-4c99-407c-a551-f122ebe3e300>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.9/warc/CC-MAIN-20140416005215-00657-ip-10-147-4-33.ec2.internal.warc.gz"} |
issue with stokes!
July 8th 2013, 03:10 AM #1
May 2013
issue with stokes!
What method should I employ here, and how do I do it? (Apparently this is a Stokes problem.) Cheers.
Given F = (2x + sin yz)i + (2x + xz cos yz)j + (y^2 + 2z + xy cos yz)k,
evaluate ∮F.dr where C is the curve consisting of the straight line segment from (0,0,0) to (0,1,0), followed by the quarter circle y²+z²=1 from (0,1,0) to (0,0,1), followed by the straight line segment from (0,0,1) to (0,0,0).
Re: issue with stokes!
Do you know Stokes' Theorem? \displaystyle \begin{align*} \int_C{ \mathbf{F} \cdot d \mathbf{r}} = \int{\int_S{\textrm{curl}\,\mathbf{F} \cdot d\mathbf{S}}} \end{align*}.
You should start by drawing a sketch of your region. Can you get an equation for the surface bounded by your contour?
Re: issue with stokes!
Yes, Stokes' theorem is probably simplest, but it is not all that hard to integrate directly, on the path. On the first part, (0, 0, 0) to (0, 1, 0), we can use the parameterization x= 0, y= t, z= 0, with t from 0 to 1, so that dx= dz= 0, dy= dt, so only the dy or "j" part is necessary. And the integrand for dy is xz cos(yz)= 0 cos(0)= 0, so the first integral is 0.
0, with t from 0 to1, so that dx= dz= 0, dy= dt so only the dy or "j" part is necessary. And the integrand for dy is xz cos(yz)= 0 cos(0)= 0 so the first integral is 0.
Similarly, on the third portion, the line from (0, 0, 1) to (0, 0, 0), we can take x= 0, y= 0, z= 1- t, dx=dy= 0, dz= -dt with t from 0 to 1. Now we need only look at the dz integral. The
integrand is $y^2+ 2z+ xy \cos(yz)= 0^2+ 2(1- t)+ 0\cos(0)= 2- 2t$. The integral is $\int_0^1 (2- 2t)(-dt)= \int_0^1(2t- 2)\, dt= \left[t^2- 2t\right]_0^1= 1- 2= -1$.
The middle portion is the "hard one"- but not that hard. The quarter circle, from (0, 1, 0) to (0, 0, 1) can be parameterized as x= 0, y= cos(t), z= sin(t) with t from 0 to $\frac{\pi}{2}$. dx=
0, dy= -sin(t)dt, and dz= cos(t)dt. 2x+ xzcos(yz)= 0 so that part is 0. $y^2+ 2z+ xy cos(yz)= cos^2(t)+ 2sin(t)$ so the integral becomes
$\int_0^{\pi/2} cos^3(t)+ 2sin(t)cos(t) dt= \int_0^{\pi/2} (cos^2(t)+ 2sin(t)) cos(t)dt$
$= \int_0^{\pi/2} (1- sin^2(t)+ 2sin(t))cos(t)dt$
Let $u= sin(t)$ so $du= cos(t)dt$ so that the integral becomes $\int_0^1 (1- u^2+ 2u)du$ which is easy to integrate.
(Actually, after trying to do it with Stokes' theorem, I think doing it directly is easiest!)
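A quick numerical sanity check of these segment values (a sketch in plain Python with composite Simpson's rule; the parameterizations are the ones above):

```python
from math import sin, cos, pi

def simpson(f, a, b, n=1000):
    # Composite Simpson's rule on [a, b] with n (even) subintervals.
    h = (b - a) / n
    s = f(a) + f(b) + sum((4 if i % 2 else 2) * f(a + i * h) for i in range(1, n))
    return s * h / 3

# Middle segment: x=0, y=cos t, z=sin t; only F_k * dz/dt survives since x = 0.
mid = simpson(lambda t: (cos(t)**2 + 2 * sin(t)) * cos(t), 0, pi / 2)

# Third segment: x=y=0, z=1-t; the integrand is (2z) * dz/dt = -(2 - 2t).
third = simpson(lambda t: -(2 - 2 * t), 0, 1)

print(round(mid, 6), round(third, 6), round(mid + third, 6))
```

This agrees with 5/3 for the quarter circle and -1 for the last segment, so the whole contour integral comes to 2/3 (the first segment contributes 0).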
Last edited by HallsofIvy; July 8th 2013 at 06:16 PM.
Re: issue with stokes!
Sorry about the late reply... How do you get the equation for the surface for Stokes? How do you do this using Stokes?
Re: issue with stokes!
You were given F...
July 8th 2013, 05:01 AM #2
July 8th 2013, 06:11 PM #3
MHF Contributor
Apr 2005
July 13th 2013, 08:30 PM #4
May 2013
July 14th 2013, 01:41 AM #5 | {"url":"http://mathhelpforum.com/calculus/220412-issue-stokes.html","timestamp":"2014-04-19T23:05:48Z","content_type":null,"content_length":"48216","record_id":"<urn:uuid:2ea7cdf8-8d45-4339-84dc-2e9d7a23d134>","cc-path":"CC-MAIN-2014-15/segments/1398223211700.16/warc/CC-MAIN-20140423032011-00314-ip-10-147-4-33.ec2.internal.warc.gz"} |
Embargo Period
Degree Name
Doctor of Philosophy (PhD)
Mathematical Sciences
Second Advisor
Tom Bohman
Fourth Advisor
Asaf Shapira
In graph theory, as in many fields of mathematics, one is often interested in finding the maxima or minima of certain functions and identifying the points of optimality. We consider a variety of functions on graphs and hypergraphs and determine the structures that optimize them.
functions on graphs and hypegraphs and determine the structures that optimize them.
A central problem in extremal (hyper)graph theory is that of finding the maximum number of edges in a (hyper)graph that does not contain a specified forbidden substructure. Given an integer n, we
consider hypergraphs on n vertices that do not contain a strong simplex, a structure closely related to and containing a simplex. We determine that, for n sufficiently large, the number of edges is
maximized by a star.
We denote by F(G, r, k) the number of edge r-colorings of a graph G that do not contain a monochromatic clique of size k. Given an integer n, we consider the problem of maximizing this function over
all graphs on n vertices. We determine that, for large n, the optimal structures are (k − 1)^2-partite Turán graphs when r = 4 and k ∈ {3, 4} are fixed.
We call a graph F color-critical if it contains an edge whose deletion reduces the chromatic number of F and denote by F(H) the number of copies of the specified color-critical graph F that a graph H
contains. Given integers n and m, we consider the minimum of F(H) over all graphs H on n vertices and m edges. The Turán number of F, denoted ex(n, F), is the largest m for which the minimum of F(H)
is zero. We determine that the optimal structures are supergraphs of Turán graphs when n is large and ex(n, F) ≤ m ≤ ex(n, F) + cn for some c > 0.
Recommended Citation
Yilma, Zelealem Belaineh, "Results in Extremal Graph and Hypergraph Theory" (2011). Dissertations. Paper 49. | {"url":"http://repository.cmu.edu/dissertations/49/","timestamp":"2014-04-18T08:05:24Z","content_type":null,"content_length":"22406","record_id":"<urn:uuid:6344b1b7-789f-4e1f-a24a-c58a3177d93d>","cc-path":"CC-MAIN-2014-15/segments/1398223211700.16/warc/CC-MAIN-20140423032011-00532-ip-10-147-4-33.ec2.internal.warc.gz"} |
Math Forum Discussions
User Profile: rj_@_ath.luc.edu
User Profile for: rj_@_ath.luc.edu
UserID: 35479
Name: Richard J Maher
Registered: 12/6/04
Total Posts: 80
Show all user messages | {"url":"http://mathforum.org/kb/profile.jspa?userID=35479","timestamp":"2014-04-18T08:42:55Z","content_type":null,"content_length":"11932","record_id":"<urn:uuid:14d56946-0b4d-4e38-b858-653ab5af5fb5>","cc-path":"CC-MAIN-2014-15/segments/1397609537271.8/warc/CC-MAIN-20140416005217-00190-ip-10-147-4-33.ec2.internal.warc.gz"} |
The current in an inductor when energy is stored. Please help!!!
1. The problem statement, all variables and given/known data
What's the current in a 15mH inductor when the stored energy is 42μJ
2. Relevant equations
E = 1/2 L i^2
3. The attempt at a solution
.000042J = (.5)(.015H)(i^2)
.000042J = (.0075)(i^2)
.0056 = i^2
√ (.0056) = i
.0748331 = i
I need to put the answer in with only two sig figs, so I put in .07, but it was wrong.
What am I doing wrong? | {"url":"http://www.physicsforums.com/showthread.php?t=654744","timestamp":"2014-04-19T12:44:11Z","content_type":null,"content_length":"26893","record_id":"<urn:uuid:d5bac503-aa40-434d-846f-8a79b100ef4a>","cc-path":"CC-MAIN-2014-15/segments/1397609537186.46/warc/CC-MAIN-20140416005217-00118-ip-10-147-4-33.ec2.internal.warc.gz"} |
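For what it's worth, the arithmetic itself checks out; a two-line script (a sketch) suggests the issue is the rounding, since 0.0748 A to two significant figures is 0.075, not 0.07:

```python
from math import sqrt

L = 15e-3    # inductance, H
E = 42e-6    # stored energy, J

i = sqrt(2 * E / L)          # from E = (1/2) * L * i**2
print(i, f"{i:.2g}")         # ~0.0748 A, i.e. 0.075 to two sig figs
```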
Topic: [math-learn] My approach to curriculum writing (Urner)
Replies: 2 Last Post: Feb 9, 2001 3:43 AM
[math-learn] My approach to curriculum writing (Urner)
Posted: Feb 5, 2001 1:33 AM
K. Urner, Oregon Curriculum Network (OCN), Feb 4, 2001
Sometime in the not so distant past, math went through a dark
age -- literally, in that it was "lights out" for
visualizations. Students were supposed to empty their minds of
any naive pictures or diagrams and just get used to operating
with symbols. This was considered the "in" way to think about
matters mathematical -- math, like the rest of culture, is not
immune from fads.
Now the pendulum is swinging back to where visualizations are
no longer considered naive. Semiotics and Wittgenstein have
taught us to see graphical and lexical elements as a part of
the same semantic continuum, neither intrinsically more
"abstract" than the other. Even "visual proofs" are coming
back into vogue.[1]
We also have the "left versus right brain" terminology, which
carries weight regardless of whether neuroscience justifies it
entirely (actually, there's a lot of evidence for hemispheric specialization).
Traffic between East and West has only increased in this last
century (1900s) and the right/left brain talk maps fairly well
to yin/yang talk -- there's an isomorphism here, or you can make
it work that way. And this concept of balance, of developing
both hemispheres, of not letting either atrophy, is having an
impact on math education theories, and on pedagogy in general.
So visualizations, the importance of the imagination, is making
a come back under the aegis of the right brain's champions.
As a curriculum writer, I too encourage right brain approaches
to mathematics. This hemisphere I associate not just with
visual/spatial/imaginative abilities, but with the concept of
cardinality (vs. ordinality) -- a link I learned about from
Midhat Gazale.[2]
On the spatial front, I encourage a kind of "mental geometry"
to develop alongside "mental arithmetic", and do so by
exploiting the ancient tradition of nesting polyhedra to form a
"maze" or "matrix". It's something the sacred geometers were
into, down through the theosophists in our own time. Given
this history, our graphics and visuals may tend to raise some
eyebrows.[3] But that's OK. I think it's more interesting to
students as well, to connect to these arts and traditions,
while developing their right brain abilities to consider
geometry in the mind's eye. Opportunities to bring up
historical topics, which offer insight into how we came to
be who we are, should not be ignored.
Where I part company from most polyhedron nesters is I'm not so
fixated on the Platonic Five, thanks to my being a long time
student of the Fuller syllabus, i.e. a reader of books by the
late R. Buckminster Fuller. Rather than the pentagonal
dodecahedron, Bucky, like Kepler, was more into the rhombic
dodecahedron, because the latter is defined by a sphere packing
known variously as the CCP or FCC (or IVM if you're reading
Fuller's 'Synergetics').
The rhombic dodeca isn't even an Archimedean, let alone a
Platonic, and yet chemistry really is a lot about lattices,
which may be referenced to packed spheres. The CCP, like the
XYZ coordinate system, provides a scaffolding, lines of
reference, a grid. It should be taught, starting early. We
need geoboards of both varieties: square-based and hexa-based
in flatland, and cube-based and dodeca-based in space. The
XYZ lattice corresponds to another way of packing spheres
by the way (less dense): the SCP (simple cubic packing).
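The density gap between the two packings is easy to quantify: in the CCP each unit-radius sphere occupies a rhombic dodecahedron of volume 4*sqrt(2), while in the SCP it occupies a cube of edge 2 (a quick sketch):

```python
from math import pi, sqrt

sphere = 4 * pi / 3        # volume of a unit-radius sphere

ccp_cell = 4 * sqrt(2)     # rhombic dodecahedron cell around each CCP sphere
scp_cell = 2.0 ** 3        # cube of edge 2 around each SCP sphere

print(round(sphere / ccp_cell, 4))   # 0.7405 = pi/sqrt(18), the CCP density
print(round(sphere / scp_cell, 4))   # 0.5236 = pi/6, the SCP density
```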
So yes, I'm into sphere packing and push it as a topic
accessible even to the very young -- that's made very obvious
at my website. And another thing I learned from Bucky:
there's a very streamlined and compact way of both nesting
and scaling a set of inter-related polyhedra so that not
only do they synch with the CCP (with rhombic dodeca = unit
radius sphere domain), but they have whole number volumes
to boot.
I find this intro to spatial geometry far less off-putting to
kids, than the way we phase in non-cubes today. Today, only
cubes get to have simple, whole number volumes, while just
about everything else is irrationally volumed, unless assembled
from cubes, i.e. unless rectilinear, all parallel lines.
Our cubism alienates the other polys, relegates them to the
back of the book, to obscurity. Yet this really basic stuff
should _not_ be rendered esoteric. It's our obsession with
the cube that makes it so. That was Bucky's critique of our
whole culture and I think there's enough truth in it that I'm
inclined to take decades of practically no discussion of the
issue as more evidence that these valuable diagnostic insights
have been willfully suppressed by those who should know better
than to stand in the way of evolution, yet cling, out of fear
and ignorance, to the status quo.
Here's the new volumes chart:[4]
Shape Volume
Tetrahedron 1
Cube 3
Octahedron 4
Rhombic Dodeca 6
Cubocta 20
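Those whole-number ratios can be verified from ordinary XYZ volume formulas: take a regular tetrahedron of edge 1 as the unit and divide (a sketch; the duo-tet cube's edge is 1/sqrt(2) of the tetrahedron's, since the tetra edges are its face diagonals):

```python
from math import sqrt

tetra = sqrt(2) / 12          # regular tetrahedron, edge 1
cube = (1 / sqrt(2)) ** 3     # duo-tet cube: its face diagonals are tetra edges
octa = sqrt(2) / 3            # octahedron, edge 1
rh_dodeca = 2 * cube          # rhombic dodeca = cube plus 6 shallow pyramids
cubocta = 5 * sqrt(2) / 3     # cuboctahedron, edge 1

for name, v in [("tetra", tetra), ("cube", cube), ("octa", octa),
                ("rh. dodeca", rh_dodeca), ("cubocta", cubocta)]:
    print(name, round(v / tetra, 6))   # 1.0, 3.0, 4.0, 6.0, 20.0
```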
Now, how does this work? First of all, we're nesting
polyhedra, ala the ancient tradition of doing so, going back to
Pythagoras and before. We're organizing them concentrically
(they all share the same center). The tetrahedron comprises
the face diagonals of the cube. Another tetrahedron, crossing
the edges of the first at 90 degrees, defines the other 4 of
the cube's 8 corners. These two intersecting tetrahedra are
what Kepler and others called the stella octangula. You can
also see it as an octahedron with tetrahedral tips (a
stellate), these 8 tips defining the corners of our cube. So
we call it a "duo-tet" cube -- because formed from these two
interpenetrating tetrahedra (I use an orange and black duo-tet
as the logo of my company by the way -- 4D Solutions, named
to emphasize continuity with Fuller's own use of '4D' as a
kind of logo).[5]
I mentioned an octahedron, defined by the intersection of the
two tetrahedra (you could use intersection notation at this
point). That octahedron has edges 1/2 those of the tetrahedron,
and therefore 1/8th the volume of an octahedron with edges
the same as the tetrahedron's (double the edges, and volume
goes up by 8 -- it's in the California standard: 7th graders
should all know about the edges:surface area:volume relationship).
I won't go into all the geometric dissections and rearrangements
of pieces (modules, parts), and just say that a series of wordless
geometry cartoons easily communicate a lot of the volume
The students will see, by means of "visual proofs", that the
octahedron has a volume of 4, and inscribes as the long diagonals
of the rhombic dodeca's 12 faces, while the cube's edges inscribe
as the short diagonals. It's a very tight arrangement then:
tetra + dual tetra = cube
cube + dual octa = rhombic dodeca
= domain of unit radius sphere
And 12 such spheres tightly packed around a nuclear sphere, CCP
style, form the vertices of a cuboctahedron of volume 20.
From here, you can keep packing outward, layer after layer of
spheres, always cuboctahedrally conformed, with 10 L^2 + 2
spheres in each layer (L = layer number, starting with 12 when
L=1). There's a way to use simultaneous equations, matrix
algebra, or even Bernoulli numbers, to derive a formula for
the cumulative number of spheres in all layers (plus nuclear).
Let's share that!
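A few lines of Python tabulate the shell and cumulative counts, and confirm one closed form for the running total, (10L^3 + 15L^2 + 11L + 3)/3 (a sketch; that closed form is one result such a derivation produces):

```python
def shell(L):
    # spheres in cuboctahedral layer L; the L = 0 "layer" is the lone nucleus
    return 10 * L * L + 2 if L >= 1 else 1

def cumulative(L):
    # nucleus plus every shell through layer L
    return 1 + sum(10 * k * k + 2 for k in range(1, L + 1))

def closed_form(L):
    return (10 * L**3 + 15 * L**2 + 11 * L + 3) // 3

for L in range(6):
    assert cumulative(L) == closed_form(L)
    print(L, shell(L), cumulative(L))   # e.g. 1: 12, 13   2: 42, 55   3: 92, 147
```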
So that's what I mean by "mental geometry". There's more to it
of course, but this is the basic framework of what's developed
as a kind of "home base" for the mind's eye. Clearly the
tetrahedron is prominent. This is a divergence from the cubist
standard, but is convergent with other trends and schools of
thought i.e. it may be unfamiliar, but it's certainly not out
of the ballpark, in the sense that tetrahedra are intrinsic to
Tetrahedra are the simplest volumes after all, if limiting
entrants to shapes made from edges, vertices and faces (the
sphere being a limiting shape, a very high frequency polyhedron
of some kind e.g. an icosasphere). The tetrahedron has only 6
edges whereas the cube uses 12. In this sense, the tetrahedron,
or simplex, is the most primitive enclosure. That needs to be
stated directly, in no uncertain terms. If it's not part of
the California standard, it really should be.
So how do we connect to "mental arithmetic" then? Now that
I've outlined the right brain's regimen (including mental
exercises), where do we connect to the other hemisphere?
Answer: through figurate numbers and Pascal's Triangle.
Figurate numbers may be envisioned as sphere packings. The
triangular numbers look like something you'd see in a Pool Hall
-- Pool Hall math. The square and cube-shaped numbers suggest
XY, or the SCP/XYZ lattice. So we're starting to talk about
number series, as the triangular numbers are likewise the sums
of consecutive positive integers. Lots of relationships might
be developed as the students consider and explore, plus we
include a derivation of the aforementioned 10 L^2 + 2, which
applies to icosahedral shells, not just cuboctahedral (links
to viruses, buckyballs, geodesic spheres).
And with Pascal's Triangle, we've got links to the Binomial
Theorem (factorials, probability, Bell Curve), Fibonacci
Numbers, Bernoulli Numbers, Triangular and Tetrahedral
Numbers. That's a very rich set of concepts to go forward
with. Fibonaccis connect to phi, Bernoullis to sums of the
form SIGMA N^c, where c = integer > 0 and N = 1,2,3..., or
to sums of the form SIGMA N^c where c = integer < 0 (Euler's
zeta function -- Riemann's if complex). Throw in continued
fractions, primes vs. composites, and some modulo arithmetic,
and you've set the stage with a lot of important and inter-
related concepts (gcd, lcm, fractions, Fermat's little
theorem, cyphers...) Plus phi gets you back to geometry, to
five-fold symmetry, and to the so-far missing Platonics
(the pentagonal dodecahedron and its dual, the icosahedron).
We also have Pascal's Tetrahedron, and the trinomial theorem,
if we like.[6]
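Several of those series drop straight out of Pascal's Triangle: the third diagonal is the triangular numbers and the fourth is the tetrahedral numbers (a sketch):

```python
def pascal_rows(n):
    # Yield the first n rows of Pascal's Triangle.
    row = [1]
    for _ in range(n):
        yield row
        row = [a + b for a, b in zip([0] + row, row + [0])]

rows = list(pascal_rows(10))
triangular = [r[2] for r in rows if len(r) > 2]    # C(n, 2)
tetrahedral = [r[3] for r in rows if len(r) > 3]   # C(n, 3)

print(triangular)    # [1, 3, 6, 10, 15, 21, 28, 36]
print(tetrahedral)   # [1, 4, 10, 20, 35, 56, 84]
```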
Functions and relations, polynomials, trig, vectors, and much
of what's conventionally covered in math class (plus more
that isn't), all have easy segues or hyperlinks from this
core network of key concepts. We don't lose content, and we
gain a more tightly integrated curriculum, with left and right
brain faculties co-functioning in far greater harmony, producing
more powerful synergies. Sure, that's hype, more pro-OCN
propaganda, but there's plenty of substance to back it up.
The left brain stuff gets developed in tandem with computer
programming (not just calculators), and the right brain stuff
gets developed in tandem with computer graphics -- driven
behind the scenes by the computer programs. Left brain numeric
methods result in right brain artistic renderings. We've got
the bridge between math and art. All we need now is to phase
in music (something computers are good for as well). That's
something I haven't gone into much yet, but some of my colleagues
are exploring in that direction.[7]
[1] James Robert Brown. Philosophy of Mathematics: an
introduction to the world of proofs and pictures
(London: Routledge, 1999)
[2] Re: Cardinality vs Ordinality see:
[3] see L. Gordon Plummer. The Mathematics of the Cosmic
Mind (Wheaton IL: Theosophical Publishing House, 1970)
-- a book both Kiyoshi and I agreed contains a lot of
racist elements, but I'm mentioning it here for the
"maze" depictions.
[4] More re volumes:
[5] 4D Solutions, Porland (PDX):
[6] http://www.inetarena.com/~pdx4d/ocn/numeracy0.html
[7] See Jay Kappraff. Connections: the geometric bridge
between art and science. (New York: McGraw-Hill, 1991)
for more music links. Sir Roger Penrose had an interesting
approach to music using a circle, which he shared with us
at the 1997 Oregon Math Summit.
Date Subject Author
2/5/01 [math-learn] My approach to curriculum writing (Urner) Kirby Urner
2/6/01 [math-learn] Re: My approach to curriculum writing (Urner) Kirby Urner
2/9/01 [math-learn] Re: My approach to curriculum writing (Urner) Kirby Urner | {"url":"http://mathforum.org/kb/thread.jspa?threadID=430023","timestamp":"2014-04-21T13:23:12Z","content_type":null,"content_length":"32063","record_id":"<urn:uuid:94141af4-847a-43e0-9eed-090321e1509c>","cc-path":"CC-MAIN-2014-15/segments/1397609539776.45/warc/CC-MAIN-20140416005219-00320-ip-10-147-4-33.ec2.internal.warc.gz"} |
South Plainfield SAT Math Tutor
...I have been tutoring and teaching since I was in high school myself because I love doing it! During and after earning a Master of Science in mathematics, I spent 8 years teaching at the
post-secondary level in universities and community colleges. I have also worked with middle and high school students.
10 Subjects: including SAT math, calculus, statistics, geometry
...Furthermore, I have tutored high school students in Algebra and SAT prep. My educational background is in Economics. I have a degree in Economics and a Master's degree in Business Economics.
9 Subjects: including SAT math, algebra 1, grammar, elementary (k-6th)
...A lot of the students I tutor with another company are SAT and PSAT students, and algebra is a major subject area on those tests. I received my BS in biology with a concentration in cell and
molecular biology from Hofstra University in 2013. My cumulative GPA was 3.76.
15 Subjects: including SAT math, chemistry, physics, geometry
...After my decision was made I had to figure out what I wanted to teach. I went back to KEAN university to get my second degree in mathematics. Graduating with a 3.5 made it very clear that this
was the right move for me.
10 Subjects: including SAT math, calculus, algebra 1, algebra 2
...In preparation for any of the above math-based tests, I focus on imparting deep-level mathematical understanding, and that can only be taught with individual attention. One of my great
pleasures is helping students reach moments of epiphany when the light dawns and perplexity gives way to clarity. Throughout our sessions, I focus on making you think for yourself and develop
flexible mastery.
55 Subjects: including SAT math, reading, English, writing | {"url":"http://www.purplemath.com/South_Plainfield_SAT_Math_tutors.php","timestamp":"2014-04-21T14:50:49Z","content_type":null,"content_length":"24183","record_id":"<urn:uuid:e89be5d9-cbc2-4a27-95ed-f0c9551b14c1>","cc-path":"CC-MAIN-2014-15/segments/1397609540626.47/warc/CC-MAIN-20140416005220-00104-ip-10-147-4-33.ec2.internal.warc.gz"} |
Tucker, GA Geometry Tutor
Find a Tucker, GA Geometry Tutor
...I am a mentor at my high school and involved in many honor societies as well as volunteered to teach at a homework club in an elementary school. If my students do not understand the way I am
teaching I will adjust my teachings to be more suitable for my students. I am flexible with my schedule and I am always punctual.
14 Subjects: including geometry, chemistry, biology, algebra 1
I am a junior Mathematics major at LaGrange College. I will be graduating in 2015. I have been doing private math tutoring since I was a sophomore in high school.
9 Subjects: including geometry, algebra 1, algebra 2, precalculus
...Concepts that I can share my expertise are: •Financial Statements and Cash Flow •Budgeting, Planning, and Financial Forecasting •The Time Value of Money •The Meaning and Measurement of Risk
and Rates of Return •Valuation of Stocks and Bonds •The Cost of Capital •Capital Budgeting: Technique...
18 Subjects: including geometry, accounting, ASVAB, finance
I have a BS and MS in Physics from Georgia Tech and a Ph.D. in Mathematics from Carnegie Mellon University. I worked for 30+ years as an applied mathematician for Westinghouse in Pittsburgh.
During that time I also taught as an adjunct professor at CMU and at Duquesne University in the Mathematics Departments.
10 Subjects: including geometry, calculus, physics, algebra 1
...I have experience in the following at the high school and college level:- pre algebra- algebra- trigonometry- geometry- pre calculus- calculusIn high school, I took and excelled at all of the
listed classes and received a 5 on the AB/BC Advanced Placement Calculus exams. As an undergraduate, I c...
16 Subjects: including geometry, calculus, algebra 1, algebra 2
Sunset Island, FL Trigonometry Tutor
Find a Sunset Island, FL Trigonometry Tutor
Hello there! With more than 15 years acting as a tutor, I tend to gravitate more to Science-based classes, specially Math (Algebra, Geometry, Trigonometry, Calculus I, II and III, Differential
Equations, Statistic, SAT Math, etc..) and Physics. I enjoy tutoring as it allows me to help students to ...
13 Subjects: including trigonometry, chemistry, physics, calculus
...I have taken up to AP Calculus AB in my high school and Trigonometry in my college. I have always loved teaching and I obtained over 500 community service hours from a daycare in Georgia over
summer vacation and worked with pre-schoolers who were special needs, emotionally handicapped, and foster children, amongst others. Teaching is my passion and I love to do it.
11 Subjects: including trigonometry, geometry, algebra 1, algebra 2
I began working as a tutor in High School as part of the Math Club, and then continued in college in a part time position, where I helped students in College Algebra, Statistics, Calculus and
Programming. After college I moved to Spain where I gave private test prep lessons to high school students ...
11 Subjects: including trigonometry, calculus, physics, geometry
...I pride myself in making analogies of complicated concepts to things in real life, as well as just breaking a hard problem down into many simpler problems. I hope to not only work with you, but
spark that appreciation and love for subjects that play such a huge role in my life.I currently teach ...
23 Subjects: including trigonometry, calculus, GRE, ASVAB
...I took Genetics while an undergraduate at Emory University, and received an A in the course. I have also taken genetics recently as I am medical student. I was a chemistry major in college, and
took organic chemistry my freshmen year, receiving A range grades for both semesters.
32 Subjects: including trigonometry, chemistry, calculus, physics | {"url":"http://www.purplemath.com/Sunset_Island_FL_trigonometry_tutors.php","timestamp":"2014-04-21T04:41:28Z","content_type":null,"content_length":"24728","record_id":"<urn:uuid:53b57d56-4f32-4e41-9754-ea4d64bc8ddd>","cc-path":"CC-MAIN-2014-15/segments/1397609539493.17/warc/CC-MAIN-20140416005219-00023-ip-10-147-4-33.ec2.internal.warc.gz"} |
Equilibrium Analysis
In the market for any particular good X, the decisions of buyers interact simultaneously with the decisions of sellers. When the demand for good X equals the supply of good X, the market for good X is said to be in equilibrium. Associated with any market equilibrium will be an equilibrium quantity and an equilibrium price. The equilibrium quantity of good X is that quantity for which the quantity demanded of good X exactly equals the quantity supplied of good X. The equilibrium price for good X is that price per unit of good X that allows the market to "clear"; that is, the price for which the quantity demanded of good X exactly equals the quantity supplied of good X. The determination of equilibrium quantity and price, known as equilibrium analysis, can be achieved in two different ways: by simultaneously solving the algebraic equations for demand and supply, or by combining the demand and supply curves in a single graph and determining the equilibrium price and quantity graphically.
The algebraic approach to equilibrium. The algebraic approach to equilibrium analysis is to solve, simultaneously, the algebraic equations for demand and supply. In the example given above, the
demand equation for good X was
and the supply equation for good X was
To solve simultaneously, one first rewrites either the demand or the supply equation as a function of price. In the example above, the supply curve may be rewritten as follows:
Substituting this expression into the demand equation, one can solve for the equilibrium price:
The equilibrium price of good X is found to be $2. Substituting the equilibrium price of 2 into the rewritten supply equation for good X, one has:
The equilibrium quantity is found to be 4 units of good X.
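The demand and supply equations of the example did not survive extraction above, so the pair used below is hypothetical, chosen only to be consistent with the stated result (equilibrium price $2, equilibrium quantity 4 units); the solution procedure is identical for any linear demand Qd = a - bP and supply Qs = c + dP:

```python
# Equilibrium of linear demand Qd = a - b*P and supply Qs = c + d*P.
# Setting Qd = Qs gives a - b*P = c + d*P, so P* = (a - c) / (b + d).
def equilibrium(a, b, c, d):
    p = (a - c) / (b + d)      # market-clearing price
    q = a - b * p              # quantity demanded (= supplied) at that price
    return p, q

# Hypothetical coefficients consistent with the text's result P* = 2, Q* = 4:
# Qd = 10 - 3P and Qs = -2 + 3P.
p_star, q_star = equilibrium(a=10, b=3, c=-2, d=3)
```

With these illustrative coefficients, setting demand equal to supply gives 10 - 3P = -2 + 3P, so P* = 12/6 = 2, and substituting back gives Q* = 4, matching the equilibrium reported in the text.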
A graphical depiction of equilibrium. The graphical approach to equilibrium analysis is illustrated in Figure . The equilibrium price and quantity are determined by the intersection of the two
curves. The equilibrium quantity is 4 units of good X, and the equilibrium price is $2 per unit of good X. This result is the same as the one obtained by simultaneously solving the algebraic
equations for demand and supply.
A price of $2 and a quantity of 4 units of X are the equilibrium price and quantity only when the demand and supply for good X are exactly as depicted in Figure . If either the demand curve or the supply curve shifts, the equilibrium price and quantity change. Examples of shifts in the demand and supply curves and the resultant changes in equilibrium are illustrated in Figures (a) and (b). In Figure (a), a shift to the left of the demand curve, from D[A] to D[B], leads to a decrease in both the equilibrium price and quantity of good X, while a shift to the right of the demand curve, from D[A] to D[C], leads to an increase in both the equilibrium price and quantity of good X, assuming supply is held constant (the ceteris paribus assumption). In Figure (b), a shift to the left of the supply curve, from S[A] to S[B], leads to an increase in the equilibrium price of good X but a decrease in the equilibrium quantity of good X, assuming demand is held constant. A shift to the right of the supply curve, from S[A] to S[C], leads to a decrease in the equilibrium price of good X but an increase in the equilibrium quantity of good X, again assuming that demand is held constant.
Towers of Hanoi
Date: 10/08/2000 at 10:22:15
From: Rakesh
Subject: Towers of Hanoi formula and how to get there
What is the general formula for finding the fewest number of moves
required to finish Towers of Hanoi with n disks, apart from 2^n - 1? I
also need to know how to prove it works and how you got there.
Thank you.
Date: 10/08/2000 at 10:35:39
From: Doctor Anthony
Subject: Re: Towers of Hanoi formula and how to get there
The rules for Towers of Hanoi can be found in our "Towers of Hanoi" FAQ:
The formula for the minimum number of moves with 3 pegs and n discs is
2^n - 1.
The recurrence relation that leads to this result is
u(n) = 2*u(n-1) + 1 .....................................[1]
where u(n) is number of moves with n disks present.
To see this, note that to transfer n disks to another peg we must
first transfer the top n-1 disks to the third peg (taking u(n-1)
moves) then transfer the largest disk to a vacant peg (1 move) and
then transfer n-1 disks back to the peg with the largest disk (taking
another u(n-1) moves.)
Adding these moves we get
u(n-1) + 1 + u(n-1) = 2*u(n-1) + 1
Solving this recurrence relation gives
u(n) = 2^n - 1
To prove this result you can either use a descending series of partial
sums based on the recurrence relation (1) above, leading to
1 + 2 + 2^2 + 2^3 + ..... + 2^(n-1) = (2^n -1 )/(2-1) = 2^n - 1
or you can prove the result by induction.
Inductive Proof
First show that the formula is true for n = 1.
For n = 1 the formula gives 2^1 - 1 = 1 and that is correct.
Next assume it is true for some value n = k, that is we assume the
truth of
u(k) = 2^k - 1
Now add one more disk, so that n = k+1.
From the recurrence relation (1) we have
u(k+1) = 2*u(k) + 1
= 2(2^k - 1) + 1
= 2^(k+1) - 2 + 1
= 2^(k+1) - 1
and this is the same equation we had before, except that k is replaced
by k+1. So if the result is true for n = k, then it is true for
n = k+1. But it is true for n=1, therefore it will be true for n = 2,
and if true for n = 2 it will be true for n = 3, and so on to all
positive integral values of n, by the Principle of Mathematical Induction.
- Doctor Anthony, The Math Forum | {"url":"http://mathforum.org/library/drmath/view/55956.html","timestamp":"2014-04-18T03:11:02Z","content_type":null,"content_length":"7380","record_id":"<urn:uuid:5cb1a189-5135-4395-a2cf-e3df377b6b48>","cc-path":"CC-MAIN-2014-15/segments/1397609532480.36/warc/CC-MAIN-20140416005212-00053-ip-10-147-4-33.ec2.internal.warc.gz"} |
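The counting argument above translates directly into code. This short Python sketch (an addition; the original exchange contains none) counts the moves made by the standard recursive solution and checks them against 2^n - 1:

```python
def hanoi_moves(n):
    """Count the moves made by the standard recursive Towers of Hanoi solution."""
    if n == 0:
        return 0
    # Move n-1 disks aside, move the largest disk, move n-1 disks back on top:
    # this is exactly the recurrence u(n) = u(n-1) + 1 + u(n-1) = 2*u(n-1) + 1.
    return hanoi_moves(n - 1) + 1 + hanoi_moves(n - 1)

counts = [hanoi_moves(n) for n in range(1, 11)]
```

For n = 1, 2, 3, ... this produces 1, 3, 7, 15, ..., agreeing with the closed form 2^n - 1.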
Least-Squares Approximation
Like the method of cubic splines, the least-square method attempts to fit a function through a set of data points without the wiggle problem associated with higher-order polynomial approximation.
But unlike the cubic spline technique, the derived least-square function does not necessarily pass through every data point. The method involves approximating a function such that the sum of the
squares of the differences between the approximating function and the actual values given by the data is a minimum.
The basis for the method is:
Given a set of data points (x_i, y_i), i = 1, ..., n, we wish to fit an approximating function f(x). The quantity to be minimized is the sum of squares S = sum over i of (f(x_i) - y_i)^2. The minimum value of S is found by setting each partial derivative of S with respect to the unknown coefficients equal to zero. This set of equations (the normal equations) is then solved for the coefficients, called the regression coefficients. These regression coefficients are then substituted into f(x) to give the desired approximating function.
□ Fit options fits the model y(x)
The technique used to estimate a linear relationship of the form y = a[1]*x + a[0] is known as simple regression. Geometrically, this amounts to finding the line in the plane that best fits the data points (x_i, y_i). This line is called the least squares line, and the coefficients a[1] and a[0] its regression coefficients. If all the points were exactly on the least squares line, we would have y_i = a[1]*x_i + a[0] for every i. Following the procedure given above, we get the normal equations for a[1] and a[0].
☆ LeastSquares computes a least-squares approximation to the given points; unless the curve option specifies another model, the fit is a linear function.
See also LinearFit in the Statistics package.
The least-squares method can be written in matrix form Ax = b, where the matrix A holds the values x_i together with a column of ones, x = <a[1], a[0]>, and b holds the values y_i. But since most of these points probably do not lie on the line, we have Ax = b - r, where r is the residual vector. The least squares solution of Ax = b is the vector x that minimizes the norm ||r|| = ||b - Ax||; it satisfies the normal equations A^T A x = A^T b (presented in Chapter 6.9). The system is solved for x by Gauss-Jordan elimination.
☆ LeastSquares(A, b) returns a least squares solution vector x that best satisfies the matrix equation Ax = b.
Example 8
Find the least squares line for the data points given in the table.
x: 0, 0.5, 1.0, 1.5, 2.0, 2.5, 3.0, 3.5, 4.0
y: -0.99, -0.70, -0.49, -0.24, 0.12, 0.25, 0.49, 0.83, 0.91
> restart:MathMaple:-ini():alias(GaussJord=ReducedRowEchelonForm):
> X:=[seq(0.5*i,i=0..8)]: Y:=[-0.99,-0.70,-0.49,-0.24,0.12,0.25,0.49,0.83,0.91]:
We build the matrix A and the vector b.
> A:=Matrix([[0,1],[0.5,1],[1.0,1],[1.5,1],[2.0,1],[2.5,1],[3.0,1],[3.5,1],[4.0,1]]): b:=<Y>:
Gauss-Jordan elimination of the augmented matrix gives the reduced form; the coefficient vector is given by
> av:=Column(%,3):
<a[1],a[0]> =map(evalf,av,3);
> ATA:=Transpose(A).A:
> B:=Matrix([ATA,ATb]):%;
The equation for the least squares line is given by the first command below; direct computation with LinearAlgebra:-LeastSquares gives the same coefficients:
> y=evalf(av[1]*x+av[2],3); > <a[1],a[0]> =map(evalf,LinearAlgebra:-LeastSquares(A,b),3);
> p1:=PlotData(X,Y,style=point,symbol=solidcircle,symbolsize=16): p2:=plot(yx,x=0..4,legend=typeset(y=yx)): display(p1,p2,thickness=2);
Let us also compute the regression coefficients using the basis method described in the first section. We have the data > S=Sum('(y(X[i])-Y[i])^2',i=1..9);
The least-squares procedure in the CurveFitting package gives the same line, or it can be found by using the Fit procedure:
> y=CurveFitting:-LeastSquares(X,Y,t); | {"url":"http://www.hpleym.no/MathWithMaple/Some%20Pages22.html","timestamp":"2014-04-18T13:20:45Z","content_type":null,"content_length":"45223","record_id":"<urn:uuid:8686fd01-9631-471e-8ae6-aa09f8acf8b3>","cc-path":"CC-MAIN-2014-15/segments/1397609533689.29/warc/CC-MAIN-20140416005213-00663-ip-10-147-4-33.ec2.internal.warc.gz"} |
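For readers without Maple, the same coefficients can be obtained from the closed-form simple-regression formulas in plain Python. This sketch is an addition, using the X and Y data from the session above:

```python
# Simple linear regression y = a1*x + a0 by the closed-form least-squares formulas.
X = [0.5 * i for i in range(9)]                      # 0, 0.5, ..., 4.0
Y = [-0.99, -0.70, -0.49, -0.24, 0.12, 0.25, 0.49, 0.83, 0.91]

n = len(X)
sx, sy = sum(X), sum(Y)
sxx = sum(x * x for x in X)
sxy = sum(x * y for x, y in zip(X, Y))

a1 = (n * sxy - sx * sy) / (n * sxx - sx ** 2)       # slope
a0 = (sy - a1 * sx) / n                              # intercept
```

This gives a[1] = 0.488 and a[0] = -0.956, i.e. the least squares line y = 0.488x - 0.956 for the tabulated data (values computed here; the numeric Maple output did not survive extraction).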
Baldwin Hills, CA Algebra 2 Tutor
Find a Baldwin Hills, CA Algebra 2 Tutor
...I'm here because I love education and I love seeing people, particularly those who are down on school or stressed about coursework, achieve those little lightbulb moments, when they realize
they really can be good at academics and achieve their goals. Please don't hesitate to reach out to me with questions! ShaneHello, I studied Mandarin Chinese as one of my majors at Cornell
30 Subjects: including algebra 2, reading, Spanish, English
...My specialties are Math (up to Calculus), English, and Chemistry. My approach is simple. I feel that it is important to be straightforward and emphasize important concepts in order for the
student to get the most help possible.
12 Subjects: including algebra 2, chemistry, English, reading
...I am a math major at Caltech currently doing research in graph theory and combinatorics with a professor at Caltech. I have taken several discrete math courses and I spent a summer solving
hard problems in discrete math with a friend. I began programming in high school, so the first advanced ma...
28 Subjects: including algebra 2, Spanish, chemistry, French
...And for the past five years, I have been volunteering as a full-time teacher. Mathematics (particularly pre-algebra and algebra) and English (particularly grammar and writing) are my strongest
subjects. I employ techniques such as mnemonic devices, word associations, connecting new information ...
13 Subjects: including algebra 2, reading, English, writing
...I first understand the student's weakness so that I do not waste time strengthening skills that a student already has mastered. Next, I demonstrate how to do certain problems or teach concepts
that I believe the student needs to work on. Finally, I present multiple problems relevant to the area of weakness to guarantee that the student has mastered the concept.
8 Subjects: including algebra 2, statistics, biology, algebra 1
Equation of a plane
July 19th 2008, 05:27 PM #1
Junior Member
Mar 2007
Equation of a plane
What is the equation of the plane with a basis of [2 3 0], [1 -1 3]. There is something in this problem that I just can't figure out. Can someone help me figure out how to get the coefficients of
x, y and z. Thanks
I figured it out
I figured it out. I have to use cross product to get (9, -6, -5) so the equation of the plane is 9x - 6y - 5z = 0. I do have a question if someone can help me. Do you always use the cross product
to find equations using a basis? Thanks
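The cross-product step can be double-checked numerically; a quick sketch, not from the original thread:

```python
def cross(u, v):
    """Cross product of two 3-vectors."""
    return (u[1] * v[2] - u[2] * v[1],
            u[2] * v[0] - u[0] * v[2],
            u[0] * v[1] - u[1] * v[0])

u, v = (2, 3, 0), (1, -1, 3)
n = cross(u, v)                      # normal vector to the plane
# The normal must be perpendicular to both basis vectors:
dots = (sum(a * b for a, b in zip(n, u)), sum(a * b for a, b in zip(n, v)))
```

The normal comes out to (9, -6, -5) and is perpendicular to both basis vectors, and since the plane contains the origin its equation is 9x - 6y - 5z = 0, confirming the poster's answer. As to the follow-up question: for a plane through the origin spanned by two vectors, taking their cross product is indeed the standard way to get the coefficients of x, y and z.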
act one
• 1. Where should we make a horizontal cut so we have equal amounts of cheese?
• 2. Draw a guess.
• 3. Draw a guess you know is too high.
• 4. Draw a guess you know is too low.
act two
• 5. What information would be useful to know here?
• 6. Create a formula that tells you where to make the horizontal cut given any sector of cheese of any angle and radius. | {"url":"http://threeacts.mrmeyer.com/luckycow/","timestamp":"2014-04-21T07:05:12Z","content_type":null,"content_length":"3422","record_id":"<urn:uuid:1e27492f-c082-4294-adfc-e8e6466e44c7>","cc-path":"CC-MAIN-2014-15/segments/1397609539665.16/warc/CC-MAIN-20140416005219-00481-ip-10-147-4-33.ec2.internal.warc.gz"} |
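No answer is reproduced in this excerpt. One natural reading of question 6, assuming the cut produces a smaller sector similar to the whole wedge (an arc concentric with the point), needs only scaling: area grows with the square of the linear scale, so the sector's angle drops out entirely. A hedged sketch:

```python
import math

def half_area_cut(radius):
    """Distance from the sector's point at which a concentric cut halves the area.

    A sector of radius r and angle theta has area (theta/2) * r**2, so the
    piece inside radius d is the fraction (d/r)**2 of the whole; the angle
    cancels.  Setting (d/r)**2 = 1/2 gives d = r / sqrt(2).
    """
    return radius / math.sqrt(2)

d = half_area_cut(1.0)
inner_fraction = d ** 2          # fraction of the area below the cut (for r = 1)
```

Under this reading, the cut sits at about 0.707 of the radius regardless of the wedge's angle; if the cut is instead a straight chord, the formula is messier and the angle no longer cancels.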
Find Kernel...
May 8th 2009, 11:16 PM #1
Apr 2009
Find Kernel...
3) Let T: P2 --> R be the linear transformation defined by int [p(x)] from 0 to 1
a) (2) Find the ker(T)
b) (2) Find a basis for basis for ker(T).
for part a, what i did is:
p(x) = a+bx+cx^2
then, int [ a+bx+cx^2] from 0 to 1 and i got: a+b/2+c/3. then set this equation =0
now let a=s, b=t, therefore, c= -3s - 3/2 t
Thus, I conclude that Ker(t) = {(-3s - 3/2 t): where s and t are any real numbers}
Is this process correct?
to continue the problem, for part b
I just simply substitute a set of number, say s=0, t=1, i get -3/2, then is this the basis for the kernel? or did i miss anything there?
Thanks in advance
The kernel are all the polynomials $s+tx+(-3s-\tfrac{3}{2}t)x^2 = s(1 - 3x^2) + t(x - \tfrac{3}{2}x^2)$.
We see that $1-3x^2, x - \tfrac{3}{2}x^2$ span the kernel.
Now show that they are linearly independent, which will mean you have a basis.
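Both kernel elements can be checked directly: each must integrate to zero over [0, 1], and they are independent because neither is a scalar multiple of the other. A quick sketch (an addition, not part of the thread), representing a + bx + cx^2 by its coefficient list [a, b, c]:

```python
from fractions import Fraction as F

def integrate01(coeffs):
    """Integral over [0, 1] of sum(c_k * x**k): each term contributes c_k / (k + 1)."""
    return sum(F(c) / (k + 1) for k, c in enumerate(coeffs))

p1 = [1, 0, -3]           # 1 - 3x^2
p2 = [0, 1, F(-3, 2)]     # x - (3/2)x^2

vals = (integrate01(p1), integrate01(p2))
```

Both integrals are exactly 0, so both polynomials lie in ker(T); since the coefficient vectors [1, 0, -3] and [0, 1, -3/2] are not proportional, the two polynomials are independent and form a basis, giving dim ker(T) = 2.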
Braingle: 'What time is it?' Brain Teaser
What time is it?
Math brain teasers require computations to solve.
Puzzle ID: #879
Category: Math
Submitted By: Michelle
Corrected By: shenqiang
Between two and three o'clock, someone looked at the clock and mistook the minute hand for the hour hand. Consequently the time appeared to be fifty-five minutes earlier than it actually was. What
time was it?
Physics -Conservation of Momentum-
Posted by Isis on Sunday, February 24, 2013 at 7:54pm.
1) A 1 kg mass moving at 1 m/s has a totally inelastic collision with a 0.7 kg mass. What is the speed of the resulting combined mass after the collision?
2) A cart of mass 1 kg moving at a speed of 0.5 m/s collides elastically with a cart of mass kg at rest. The speed of the second mass after the collision is 0.667 m/s. What is the speed 1 kg mass
after the collision?
3) A 0.010 kg bullet is shot from a 0.500 kg gun at a speed of 230 m/s. Find the speed of the gun.
4) Two carts with masses of 4 kg and 3 kg move toward each other on a frictionless track with speeds of 5.0 m/s and 4.0 m/s respectively. The carts stick together after colliding head on. Find the final speed.
5) A cart of mass 1.5 kg moving at a speed of 1.2 m/s collides elastically with a cart of mass 1.0 kg moving at a speed of 0.75 m/s. (the carts are moving at the same direction)The speed
of the second mass (1.0 kg) after the collision is 0.85 m/s. What is the speed
of the 1.5 kg mass after the collision?
please help me!
J = Ft or J = Δp
P= Fd/t
>> I'm so sorry, please help me!
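No answers appear in the thread. The fully specified parts (1, 3, 4 and 5) follow from conservation of momentum alone, sketched below; problem 2 additionally needs the elastic-collision relations, and the second cart's mass is missing from the post as shown, so it is left out:

```python
# Conservation of momentum: total m*v before = total m*v after.
# Signs encode direction (positive = initial direction of the first object).

# 1) Totally inelastic: 1 kg at 1 m/s sticks to 0.7 kg at rest.
v1 = (1.0 * 1.0) / (1.0 + 0.7)                      # ~0.588 m/s

# 3) Recoil: gun (0.500 kg) fires a 0.010 kg bullet at 230 m/s from rest.
v3 = (0.010 * 230.0) / 0.500                        # 4.6 m/s backwards

# 4) Head-on, perfectly inelastic: 4 kg at +5.0 m/s meets 3 kg at -4.0 m/s.
v4 = (4.0 * 5.0 - 3.0 * 4.0) / (4.0 + 3.0)          # ~1.14 m/s

# 5) 1.5 kg at 1.2 m/s and 1.0 kg at 0.75 m/s (same direction);
#    after the collision the 1.0 kg cart moves at 0.85 m/s.
v5 = (1.5 * 1.2 + 1.0 * 0.75 - 1.0 * 0.85) / 1.5    # ~1.13 m/s
```

Each line just equates total momentum before and after and solves for the one unknown speed.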
Multitransition homoclinic and heteroclinic solutions of the extended Fisher-Kolmogorov equation.
(English) Zbl 0872.34033
The aim is to investigate the structure of the homoclinic and heteroclinic solutions of the scalar fourth order equation
$\gamma u^{(4)} - \beta u'' + F'(u) = 0,$
with $\gamma, \beta > 0$. The two primary examples
$F_1(u) = \tfrac{1}{4}(u^2 - 1)^2 \quad\text{and}\quad F_2(u) = \tfrac{2}{\pi^2}(1 + \cos \pi u)$
are considered as a background of the theory. The authors construct a countable family of multitransition homoclinic and heteroclinic solutions. The glueing method applied in the paper can also be
used to obtain periodic orbits which are in an arbitrarily small neighborhood of the heteroclinic loop and are structurally similar to those which would be found using dynamical methods of Devaney.
Also, the existence of the family of homoclinic and heteroclinic solutions described is sufficient to show that the dynamics are chaotic.
34C37 Homoclinic and heteroclinic solutions of ODE | {"url":"http://zbmath.org/?q=an:0872.34033","timestamp":"2014-04-17T00:51:01Z","content_type":null,"content_length":"22449","record_id":"<urn:uuid:86597afb-a41c-4a8e-acf6-d00ed30abecf>","cc-path":"CC-MAIN-2014-15/segments/1397609526102.3/warc/CC-MAIN-20140416005206-00350-ip-10-147-4-33.ec2.internal.warc.gz"} |
Using SPSS for t-Tests
This tutorial will show you how to use SPSS version 12.0 to perform one-sample t-tests, independent samples t-tests, and paired samples t-tests.
This tutorial assumes that you have:
• Downloaded the standard class data set (click on the link and save the data file)
• Started SPSS (click on Start | Programs | SPSS for Windows | SPSS 12.0 for Windows)
One Sample t-Tests
One sample t-tests can be used to determine if the mean of a sample is different from a particular value. In this example, we will determine if the mean number of older siblings that the PSY 216
students have is greater than 1.
We will follow our customary steps:
1. Write the null and alternative hypotheses first:
H[0]: µ[216 Students] ≤ 1
H[1]: µ[216 Students] > 1
Where µ is the mean number of older siblings that the PSY 216 students have.
2. Determine if this is a one-tailed or a two-tailed test. Because the hypothesis involves the phrase "greater than", this must be a one tailed test.
3. Specify the α level: α = .05
4. Determine the appropriate statistical test. The variable of interest, older, is on a ratio scale, so a z-score test or a t-test might be appropriate. Because the population standard deviation is
not known, the z-test would be inappropriate. We will use the t-test instead.
5. Calculate the t value, or let SPSS do it for you!
The command for a one sample t tests is found at Analyze | Compare Means | One-Sample T Test (this is shorthand for clicking on the Analyze menu item at the top of the window, and then clicking
on Compare Means from the drop down menu, and One-Sample T Test from the pop up menu.):
The One-Sample t Test dialog box will appear:
Select the dependent variable(s) that you want to test by clicking on it in the left hand pane of the One-Sample t Test dialog box. Then click on the arrow button to move the variable into the
Test Variable(s) pane. In this example, move the Older variable (number of older siblings) into the Test Variables box:
Click in the Test Value box and enter the value that you will compare to. In this example, we are comparing if the number of older siblings is greater than 1, so we should enter 1 into the Test
Value box:
Click on the OK button to perform the one-sample t test. The output viewer will appear. There are two parts to the output. The first part gives descriptive statistics for the variables that you
moved into the Test Variable(s) box on the One-Sample t Test dialog box. In this example, we get descriptive statistics for the Older variable:
This output tells us that we have 46 observations (N), the mean number of older siblings is 1.26 and the standard deviation of the number of older siblings is 1.255. The standard error of the
mean (the standard deviation of the sampling distribution of means) is 0.185 (1.255 / square root of 46 = 0.185).
The second part of the output gives the value of the statistical test:
The second column of the output gives us the t-test value: (1.26 - 1) / (1.255 / square root of 46) = 1.410 (if you do the calculation, the values will not match exactly because of round-off error). The third column tells us that this t test has 45 degrees of freedom (46 - 1 = 45). The fourth column tells us the two-tailed significance (the 2-tailed p value.) But we didn't want a
two-tailed test; our hypothesis is one tailed and there is no option to specify a one-tailed test. Because this is a one-tailed test, look in a table of critical t values to determine the
critical t. The critical t with 45 degrees of freedom, α = .05 and one-tailed is 1.679.
6. Determine if we can reject the null hypothesis or not. The decision rule is: if the one-tailed critical t value is less than the observed t AND the means are in the right order, then we can
reject H[0]. In this example, the critical t is 1.679 (from the table of critical t values) and the observed t is 1.410, so we fail to reject H[0]. That is, there is insufficient evidence to
conclude that the mean number of older siblings for the PSY 216 classes is larger than 1.
If we were writing this for publication in an APA journal, we would write it as:
A t test failed to reveal a statistically reliable difference between the mean number of older siblings that the PSY 216 class has (M = 1.26, s = 1.26) and 1, t(45) = 1.410, p > .05, α = .05.
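The test statistic can be reproduced outside SPSS from the reported summary statistics alone; this Python check is an addition:

```python
import math

mean, s, n = 1.26, 1.255, 46       # summary statistics from the SPSS output
mu0 = 1.0                          # hypothesized population mean

se = s / math.sqrt(n)              # standard error of the mean
t = (mean - mu0) / se              # one-sample t statistic

critical_t = 1.679                 # one-tailed, alpha = .05, df = 45
reject = t > critical_t            # decision rule for H1: mu > 1
```

This gives t of about 1.405; the text's 1.410 differs only because SPSS carries the unrounded mean and standard deviation. Either way t falls below the one-tailed critical value 1.679, so H0 is not rejected, matching the conclusion above.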
Independent Samples t-Tests
Single Value Groups
When two samples are involved, the samples can come from different individuals who are not matched (the samples are independent of each other). Or the samples can come from the same individuals (the samples are paired with each other) and the samples are not independent of each other. A third alternative is that the samples can come from different individuals who have been matched on a variable
samples are paired with each other) and the samples are not independent of each other. A third alternative is that the samples can come from different individuals who have been matched on a variable
of interest; this type of sample will not be independent. The form of the t-test is slightly different for the independent samples and dependent samples types of two sample tests, and SPSS has
separate procedures for performing the two types of tests.
The Independent Samples t-test can be used to see if two means are different from each other when the two samples that the means are based on were taken from different individuals who have not been
matched. In this example, we will determine if the students in sections one and two of PSY 216 have a different number of older siblings.
We will follow our customary steps:
1. Write the null and alternative hypotheses first:
H[0]: µ[Section 1] = µ[Section 2]
H[1]: µ[Section 1] ≠ µ[Section 2]
Where µ is the mean number of older siblings that the PSY 216 students have.
2. Determine if this is a one-tailed or a two-tailed test. Because the hypothesis involves the phrase "different" and no ordering of the means is specified, this must be a two tailed test.
3. Specify the α level: α = .05
4. Determine the appropriate statistical test. The variable of interest, older, is on a ratio scale, so a z-score test or a t-test might be appropriate. Because the population standard deviation is
not known, the z-test would be inappropriate. Furthermore, there are different students in sections 1 and 2 of PSY 216, and they have not been matched. Because of these factors, we will use the
independent samples t-test.
5. Calculate the t value, or let SPSS do it for you!
The command for the independent samples t tests is found at Analyze | Compare Means | Independent-Samples T Test (this is shorthand for clicking on the Analyze menu item at the top of the window,
and then clicking on Compare Means from the drop down menu, and Independent-Samples T Test from the pop up menu.):
The Independent-Samples t Test dialog box will appear:
Select the dependent variable(s) that you want to test by clicking on it in the left hand pane of the Independent-Samples t Test dialog box. Then click on the upper arrow button to move the
variable into the Test Variable(s) pane. In this example, move the Older variable (number of older siblings) into the Test Variables box:
Click on the independent variable (the variable that defines the two groups) in the left hand pane of the Independent-Samples t Test dialog box. Then click on the lower arrow button to move the
variable in the Grouping Variable box. In this example, move the Section variable into the Grouping Variable box:
You need to tell SPSS how to define the two groups. Click on the Define Groups button. The Define Groups dialog box appears:
In the Group 1 text box, type in the value that determines the first group. In this example, the value of the 10 AM section is 10. So you would type 10 in the Group 1 text box. In the Group 2
text box, type the value that determines the second group. In this example, the value of the 11 AM section is 11. So you would type 11 in the Group 2 text box:
Click on the Continue button to close the Define Groups dialog box. Click on the OK button in the Independent-Samples t Test dialog box to perform the t-test. The output viewer will appear with
the results of the t test. The results have two main parts: descriptive statistics and inferential statistics. First, the descriptive statistics:
This gives the descriptive statistics for each of the two groups (as defined by the grouping variable.) In this example, there are 14 people in the 10 AM section (N), and they have, on average,
0.86 older siblings, with a standard deviation of 1.027 older siblings. There are 32 people in the 11 AM section (N), and they have, on average, 1.44 older siblings, with a standard deviation of
1.318 older siblings. The last column gives the standard error of the mean for each of the two groups.
The second part of the output gives the inferential statistics:
The columns labeled "Levene's Test for Equality of Variances" tell us whether an assumption of the t-test has been met. The t-test assumes that the variability of each group is approximately
equal. If that assumption isn't met, then a special form of the t-test should be used. Look at the column labeled "Sig." under the heading "Levene's Test for Equality of Variances". In this
example, the significance (p value) of Levene's test is .203. If this value is less than or equal to your α level for the test (usually .05), then you can reject the null hypothesis that the
variability of the two groups is equal, implying that the variances are unequal. If the p value is less than or equal to the α level, then you should use the bottom row of the output (the row
labeled "Equal variances not assumed.") If the p value is greater than your α level, then you should use the middle row of the output (the row labeled "Equal variances assumed.") In this example,
.203 is larger than α, so we will assume that the variances are equal and we will use the middle row of the output.
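The two rows of the output correspond to two standard formulas: the pooled-variance ("Equal variances assumed") t and Welch's ("Equal variances not assumed") t. A sketch in Python with illustrative summary numbers (not the class data):

```python
import math

def pooled_t(m1, s1, n1, m2, s2, n2):
    # "Equal variances assumed": pool the two sample variances, df = n1 + n2 - 2
    sp2 = ((n1 - 1) * s1**2 + (n2 - 1) * s2**2) / (n1 + n2 - 2)
    se = math.sqrt(sp2 * (1 / n1 + 1 / n2))
    return (m1 - m2) / se, n1 + n2 - 2

def welch_t(m1, s1, n1, m2, s2, n2):
    # "Equal variances not assumed": Welch's t with Satterthwaite's approximate df
    v1, v2 = s1**2 / n1, s2**2 / n2
    t = (m1 - m2) / math.sqrt(v1 + v2)
    df = (v1 + v2) ** 2 / (v1**2 / (n1 - 1) + v2**2 / (n2 - 1))
    return t, df

# Illustrative groups with very different spreads (s = 1.0 vs s = 3.0):
t_eq, df_eq = pooled_t(5.0, 1.0, 10, 4.0, 3.0, 10)
t_uneq, df_uneq = welch_t(5.0, 1.0, 10, 4.0, 3.0, 10)
# With equal group sizes the two t values coincide, but Welch's df is much
# smaller (about 11 here versus 18), giving a more conservative test.
```

Which row to report is decided exactly as in the text: use the Welch row when Levene's p is at or below α, and the pooled row otherwise.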
The column labeled "t" gives the observed or calculated t value. In this example, assuming equal variances, the t value is 1.461. (We can ignore the sign of t for a two tailed t-test.) The column
labeled "df" gives the degrees of freedom associated with the t test. In this example, there are 44 degrees of freedom.
The column labeled "Sig. (2-tailed)" gives the two-tailed p value associated with the test. In this example, the p value is .151. If this had been a one-tailed test, we would need to look up the
critical t in a table.
6. Decide if we can reject H[0]: As before, the decision rule is given by: If p ≤ α , then reject H[0]. In this example, .151 is not less than or equal to .05, so we fail to reject H[0]. That
implies that we failed to observe a difference in the number of older siblings between the two sections of this class.
If we were writing this for publication in an APA journal, we would write it as:
A t test failed to reveal a statistically reliable difference between the mean number of older siblings that the 10 AM section has (M = 0.86, s = 1.027) and that the 11 AM section has (M = 1.44, s =
1.318), t(44) = 1.461, p = .151, α = .05.
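The reported t(44) = 1.461 can be reproduced by hand from the descriptive statistics with the pooled-variance formula; a quick sketch (the rounded summary values happen to reproduce it to three decimals):

```python
import math

n1, m1, s1 = 14, 0.86, 1.027   # 10 AM section (from the descriptives table)
n2, m2, s2 = 32, 1.44, 1.318   # 11 AM section

sp2 = ((n1 - 1) * s1**2 + (n2 - 1) * s2**2) / (n1 + n2 - 2)  # pooled variance
t = (m1 - m2) / math.sqrt(sp2 * (1 / n1 + 1 / n2))
df = n1 + n2 - 2
print(f"|t({df})| = {abs(t):.3f}")   # about 1.461, matching the output
```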
Independent Samples t-Tests
Cut Point Groups
Sometimes you want to perform a t-test but the groups are defined by a variable that is not dichotomous (i.e., it has more than two values.) For example, you may want to see if the number of older
siblings is different for students who have higher GPAs than for students who have lower GPAs. Since there is no single value of GPA that specifies "higher" or "lower", we cannot proceed exactly as
we did before. Before proceeding, decide which value you will use to divide the GPAs into the higher and lower groups. The median would be a good value, since half of the scores are above the median
and half are below. (If you do not remember how to calculate the median see the frequency command in the descriptive statistics tutorial.)
1. Write the null and alternative hypotheses first:
H[0]: µ[lower GPA] = µ[higher GPA]
H[1]: µ[lower GPA] ≠ µ[Higher GPA]
Where µ is the mean number of older siblings that the PSY 216 students have.
2. Determine if this is a one-tailed or a two-tailed test. Because the hypothesis involves the phrase "different" and no ordering of the means is specified, this must be a two tailed test.
3. Specify the α level: α = .05
4. Determine the appropriate statistical test. The variable of interest, older, is on a ratio scale, so a z-score test or a t-test might be appropriate. Because the population standard deviation is
not known, the z-test would be inappropriate. Furthermore, different students have higher and lower GPAs, so we have a between-subjects design. Because of these factors, we will use the
independent samples t-test.
5. Calculate the t value, or let SPSS do it for you.
The command for the independent samples t tests is found at Analyze | Compare Means | Independent-Samples T Test (this is shorthand for clicking on the Analyze menu item at the top of the window,
and then clicking on Compare Means from the drop down menu, and Independent-Samples T Test from the pop up menu.):
The Independent-Samples t Test dialog box will appear:
Select the dependent variable(s) that you want to test by clicking on it in the left hand pane of the Independent-Samples t Test dialog box. Then click on the upper arrow button to move the
variable into the Test Variable(s) pane. In this example, move the Older variable (number of older siblings) into the Test Variables box:
Click on the independent variable (the variable that defines the two groups) in the left hand pane of the Independent-Samples t Test dialog box. Then click on the lower arrow button to move the
variable in the Grouping Variable box. (If there already is a variable in the Grouping Variable box, click on it if it is not already highlighted, and then click on the lower arrow which should
be pointing to the left.) In this example, move the GPA variable into the Grouping Variable box:
You need to tell SPSS how to define the two groups. Click on the Define Groups button. The Define Groups dialog box appears:
Click in the circle to the left of "Cut Point:". Then type the value that splits the variable into two groups. Group one is defined as all scores that are greater than or equal to the cut point.
Group two is defined as all scores that are less than the cut point. In this example, use 3.007 (the median of the GPA variable) as the cut point value:
Click on the Continue button to close the Define Groups dialog box. Click on the OK button in the Independent-Samples t Test dialog box to perform the t-test. The output viewer will appear with
the results of the t test. The results have two main parts: descriptive statistics and inferential statistics. First, the descriptive statistics:
This gives the descriptive statistics for each of the two groups (as defined by the grouping variable.) In this example, there are 23 people with a GPA greater than or equal to 3.01 (N), and they
have, on average, 1.04 older siblings, with a standard deviation of 1.186 older siblings. There are 23 people with a GPA less than 3.01 (N), and they have, on average, 1.48 older siblings, with a
standard deviation of 1.310 older siblings. The last column gives the standard error of the mean for each of the two groups.
The second part of the output gives the inferential statistics:
As before, the columns labeled "Levene's Test for Equality of Variances" tell us whether an assumption of the t-test has been met. Look at the column labeled "Sig." under the heading "Levene's
Test for Equality of Variances". In this example, the significance (p value) of Levene's test is .383. If this value is less than or equal to your α level for this test, then you can reject the
null hypothesis that the variabilities of the two groups are equal, implying that the variances are unequal. In this example, .383 is larger than our α level of .05, so we will assume that the
variances are equal and we will use the middle row of the output.
The column labeled "t" gives the observed or calculated t value. In this example, assuming equal variances, the t value is 1.180. (We can ignore the sign of t when using a two-tailed t-test.) The
column labeled "df" gives the degrees of freedom associated with the t test. In this example, there are 44 degrees of freedom.
The column labeled "Sig. (2-tailed)" gives the two-tailed p value associated with the test. In this example, the p value is .244. If this had been a one-tailed test, we would need to look up the
critical t in a table.
6. Decide if we can reject H[0]: As before, the decision rule is given by: If p ≤ α , then reject H[0]. In this example, .244 is greater than .05, so we fail to reject H[0]. That implies that there
is not sufficient evidence to conclude that people with higher or lower GPAs have a different number of older siblings.
If we were writing this for publication in an APA journal, we would write it as:
An equal variances t test failed to reveal a statistically reliable difference between the mean number of older siblings for people with higher (M = 1.04, s = 1.186) and lower GPAs (M = 1.48, s =
1.310), t(44) = 1.18, p = .244, α = .05.
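Outside SPSS, a cut-point grouping is just a comparison against the median. A sketch on hypothetical (GPA, older-siblings) pairs — not the class data — which follows SPSS's rule that group one is every score at or above the cut point:

```python
import math
from statistics import mean, median, stdev

# Hypothetical data: (GPA, number of older siblings)
data = [(2.1, 3), (2.5, 1), (2.8, 2), (2.9, 0), (3.0, 2),
        (3.1, 1), (3.3, 0), (3.5, 1), (3.7, 0), (3.9, 1)]

cut = median([g for g, _ in data])        # split at the median GPA
high = [o for g, o in data if g >= cut]   # group 1: GPA >= cut point
low = [o for g, o in data if g < cut]     # group 2: GPA <  cut point

def pooled_t(a, b):
    na, nb = len(a), len(b)
    sp2 = ((na - 1) * stdev(a)**2 + (nb - 1) * stdev(b)**2) / (na + nb - 2)
    return (mean(a) - mean(b)) / math.sqrt(sp2 * (1 / na + 1 / nb))

t = pooled_t(high, low)   # compare the two groups, as in the text
```

With real data you would read the Older and GPA variables out of the data file instead of the hypothetical list above.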
Paired Samples t-Tests
When two samples are involved and the values for each sample are collected from the same individuals (that is, each individual gives us two values, one for each of the two groups), or the samples
come from matched pairs of individuals, then a paired-samples t-test may be an appropriate statistic to use.
The paired samples t-test can be used to determine if two means are different from each other when the two samples that the means are based on were taken from the matched individuals or the same
individuals. In this example, we will determine if the students have different numbers of younger and older siblings.
1. Write the null and alternative hypotheses:
H[0]: µ[older] = µ[younger]
H[1]: µ[older] ≠ µ[younger]
Where µ is the mean number of siblings that the PSY 216 students have.
2. Determine if this is a one-tailed or a two-tailed test. Because the hypothesis involves the phrase "different" and no ordering of the means is specified, this must be a two tailed test.
3. Specify the α level: α = .05
4. Determine the appropriate statistical test. The variables of interest, older and younger, are on a ratio scale, so a z-score test or a t-test might be appropriate. Because the population standard
deviation is not known, the z-test would be inappropriate. Furthermore, the same students are reporting the number of older and younger siblings, so we have a within-subjects design. Because of
these factors, we will use the paired samples t-test.
5. Let SPSS calculate the value of t for you.
The command for the paired samples t tests is found at Analyze | Compare Means | Paired-Samples T Test (this is shorthand for clicking on the Analyze menu item at the top of the window, and then
clicking on Compare Means from the drop down menu, and Paired-Samples T Test from the pop up menu.):
The Paired-Samples t Test dialog box will appear:
You must select a pair of variables that represent the two conditions. Click on one of the variables in the left hand pane of the Paired-Samples t Test dialog box. Then click on the other
variable in the left hand pane. Click on the arrow button to move the variables into the Paired Variables pane. In this example, select Older and Younger variables (number of older and younger
siblings) and then click on the arrow button to move the pair into the Paired Variables box:
Click on the OK button in the Paired-Samples t Test dialog box to perform the t-test. The output viewer will appear with the results of the t test. The results have three main parts: descriptive
statistics, the correlation between the pair of variables, and inferential statistics. First, the descriptive statistics:
This gives the descriptive statistics for each of the two groups (as defined by the pair of variables.) In this example, there are 45 people who responded to the Older siblings question (N), and
they have, on average, 1.24 older siblings, with a standard deviation of 1.26 older siblings. These same 45 people also responded to the Younger siblings question (N), and they have, on average,
1.13 younger siblings, with a standard deviation of 1.20 younger siblings. The last column gives the standard error of the mean for each of the two variables.
The second part of the output gives the correlation between the pair of variables:
This again shows that there are 45 pairs of observations (N). The correlation between the two variables is given in the third column. In this example r = -.292. The last column gives the p value
for the correlation coefficient. As always, if the p value is less than or equal to the alpha level, then you can reject the null hypothesis that the population correlation coefficient (ρ) is
equal to 0. In this case, p = .052, so we fail to reject the null hypothesis. That is, there is insufficient evidence to conclude that the population correlation (ρ) is different from 0.
The third part of the output gives the inferential statistics:
The column labeled "Mean" is the difference of the two means (1.24 - 1.13 = 0.11 in this example; any discrepancy from the value SPSS prints is due to rounding). The next column is the standard
deviation of the difference between the two variables (1.98 in this example).
The column labeled "t" gives the observed or calculated t value. In this example, the t value is 0.377 (you can ignore the sign.) The column labeled "df" gives the degrees of freedom associated
with the t test. In this example, there are 44 degrees of freedom. The column labeled "Sig. (2-tailed)" gives the two-tailed p value associated with the test. In this example, the p value is
.708. If this had been a one-tailed test, we would need to look up the critical value of t in a table.
6. Decide if we can reject H[0]: As before, the decision rule is given by: If p ≤ α, then reject H[0]. In this example, .708 is not less than or equal to .05, so we fail to reject H[0]. That implies
that there is insufficient evidence to conclude that the number of older and younger siblings is different.
If we were writing this for publication in an APA journal, we would write it as:
A paired samples t test failed to reveal a statistically reliable difference between the mean number of older (M = 1.24, s = 1.26) and younger (M = 1.13, s = 1.20) siblings that the students have, t
(44) = 0.377, p = .708, α = .05. | {"url":"http://academic.udayton.edu/gregelvers/psy216/spss/ttests.htm","timestamp":"2014-04-17T21:58:39Z","content_type":null,"content_length":"27291","record_id":"<urn:uuid:441ab948-24e7-4ff2-9d89-43422ba76ba5>","cc-path":"CC-MAIN-2014-15/segments/1397609532128.44/warc/CC-MAIN-20140416005212-00150-ip-10-147-4-33.ec2.internal.warc.gz"} |
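A paired t test is simply a one-sample t test on the difference scores, so the reported value can be recovered from the paired-differences part of the output (rounded values, so the match is approximate):

```python
import math

mean_diff, sd_diff, n = 0.11, 1.98, 45   # difference statistics from the output

t = mean_diff / (sd_diff / math.sqrt(n))
df = n - 1
print(f"t({df}) = {t:.2f}")   # about 0.37; SPSS's 0.377 uses unrounded data
```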
- PROC. LONDON MATH. SOC., 1985
Cited by 14 (4 self)
It is a well-known conjecture that if a regular graph G of order 2n has degree d(G) satisfying d(G) ≥ n, then G is the union of edge-disjoint 1-factors. It is well known that this conjecture is true
for d(G) equal to 2n − 1 or 2n − 2. We show here that it is true for d(G) equal to 2n − 3, 2n − 4, or 2n − 5. We also show that it is true for d(G) ≥ (6/7)|V(G)|.
, 1997
Cited by 3 (1 self)
A chromatic-index-critical graph G on n vertices is non-trivial if it has at most ∆⌊n/2⌋ edges. We prove that there is no chromatic-index-critical graph of order 12, and that there are
precisely two non-trivial chromatic-index-critical graphs on 11 vertices. Together with known results this implies that there are precisely three non-trivial chromatic-index-critical graphs of order
≤ 12. 1 Introduction A famous theorem of Vizing [20] states that the chromatic index χ′(G) of a simple graph G is ∆(G) or ∆(G) + 1, where ∆(G) denotes the maximum vertex degree in G. A
graph G is class 1 if χ′(G) = ∆(G) and it is class 2 otherwise. A class 2 graph G is (chromatic index) critical if χ′(G − e) < χ′(G) for each edge e of G. If we want to stress the
maximum vertex degree of a critical graph G we say G is ∆(G)-critical. Critical graphs of odd order are easy to construct while not much is known about critical graphs of even order. One reas...
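Vizing's dichotomy is easy to check by brute force on tiny graphs. For instance, the triangle K3 (odd order) is class 2: ∆(K3) = 2, but all three edges pairwise share a vertex, so χ′(K3) = 3. A small illustrative script (not from the paper — just a check of the definitions):

```python
from itertools import product

edges = [(0, 1), (1, 2), (0, 2)]   # the triangle K3; every pair of edges meets

def is_proper(coloring):
    """Adjacent edges (those sharing a vertex) must receive different colors."""
    for i in range(len(edges)):
        for j in range(i + 1, len(edges)):
            if set(edges[i]) & set(edges[j]) and coloring[i] == coloring[j]:
                return False
    return True

def chromatic_index():
    k = 1
    while True:   # smallest k admitting a proper k-edge-coloring
        if any(is_proper(c) for c in product(range(k), repeat=len(edges))):
            return k
        k += 1

chi_prime = chromatic_index()   # Delta(K3) = 2 but chi' = 3, so K3 is class 2
```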
, 1997
A k-critical graph G has maximum degree k > 0, chromatic index χ′(G) = k + 1 and χ′(G − e) < k + 1 for each edge e of G. The Critical Graph Conjecture, Jakobsen [8] and Beineke, Wilson [1],
claims that every k-critical graph is of odd order. Fiorini and Wilson [6] conjectured that every k-critical graph of even order has a 1-factor. Chetwynd and Yap [4] stated the problem whether it is
true that if G is a k-critical graph of odd order, then G − v has a 1-factor for every vertex v of minimum degree. These conjectures are disproved and the problem is answered in the negative for
k ∈ {3, 4}. We disprove these conjectures and answer the problem in the negative for all k ≥ 3. We also construct k-critical graphs on n vertices with degree sequence 2³4^{n−3}, answering a
question of Yap [11]. 1 Introduction We consider connected multigraphs M = (V(M), E(M)) without loops, where V(M) (E(M)) denotes the set of vertices (edges) of M. The degree d_M(v) of a v...
, 2007
The circular chromatic number provides a more refined measure of colourability of graphs than does the ordinary chromatic number. Thus circular colouring is of substantial importance wherever graph
colouring is studied or applied, for example, to scheduling problems of periodic nature. Precisely, the circular chromatic number of a graph G, denoted by χc(G), is the smallest ratio p/q of positive
integers p and q for which there exists a mapping c: V (G) → {1, 2, ..., p} such that q ≤ |c(u) − c(v)| ≤ p − q for every edge uv of G. We present some known and new results regarding the computation
of the circular chromatic number. In particular, we prove a lemma which can be used to improve the ratio of some circular colourings. These results are later used to bound the circular chromatic
number of the plane unit-distance graph, the projective plane orthogonality graph, generalized Petersen graphs, and squares of graphs. Some of the computations in this thesis are
computer assisted. Nešetřil's "pentagon problem" asks whether the circular chromatic number of every cubic graph having sufficiently high girth is at most 5/2. We prove that the statement of the
, 2003
A graph is chromatic-index-critical if it cannot be edge-coloured with ∆ colours (with ∆ the maximal degree of the graph), and if the removal of any edge decreases its chromatic index. The Critical
Graph Conjecture stated that any such graph has odd order. It has been proved false and the smallest known counterexample has order 18 [18, 31]. In this paper we show that there are no
chromatic-index-critical graphs of order 14. Our result extends that of [5] and leaves order 16 as the only case to be checked in order to decide on the minimality of the counterexample given by
Chetwynd and Fiol. In addition we list all nontrivial critical graphs of order 13. Key words: critical graph, edge-colouring, graph generation. Math. Subj. Class (2001): 05C15, 05C30 | {"url":"http://citeseerx.ist.psu.edu/showciting?cid=933412","timestamp":"2014-04-21T02:49:44Z","content_type":null,"content_length":"24762","record_id":"<urn:uuid:683a021a-8500-47d8-89df-991ccc878cd0>","cc-path":"CC-MAIN-2014-15/segments/1397609539447.23/warc/CC-MAIN-20140416005219-00083-ip-10-147-4-33.ec2.internal.warc.gz"} |
Posted by Gina on Saturday, September 13, 2008 at 6:13pm.
Determine whether y is a function of x:
Please help. Thanks.
• pre-calculus - David Q, Saturday, September 13, 2008 at 6:23pm
Do you mean "Can y be expressed as a function of x?". If so then the answer is yes it can, if by "x^2y" you mean "x²y" and not "x raised to the power of 2y". The equation x²y - x² + 4y = 0
can be written as (x²+4)y = x², from which you can easily express either x in terms of y, or y in terms of x.
• pre-calculus - Damon, Saturday, September 13, 2008 at 6:28pm
well, you can write y as a function of x, and get one and only one value of y for every x
however x is not a function of y because there are two values of x for every y.
• pre-calculus - David Q, Saturday, September 13, 2008 at 6:42pm
Fair enough, though you could presumably define a function by restricting the range to just zero plus either the positive or negative real numbers.
• pre-calculus - Gina, Saturday, September 13, 2008 at 7:17pm
So the answer is yes?
• pre-calculus - David Q, Sunday, September 14, 2008 at 6:00am
The answer is yes: you can write y as a function of x. Damon and I were debating whether you can write x as a function of y, which wasn't what you were asked.
• pre-calculus - Gina, Sunday, September 14, 2008 at 11:28am
How exactly do you know that you can get one and only one value of y for every x?
• pre-calculus - Anonymous, Sunday, September 14, 2008 at 12:27pm
There's only one value for y because your original equation (x^2y-x^2+4y=0) can be rewritten as (x²+4)y = x². Or, by dividing both sides by (x²+4),
y = x²/(x²+4)
Feed any value of x into the right-hand side, and you'll get exactly one value for y.
• pre-calculus - Gina, Sunday, September 14, 2008 at 2:19pm
OH thanks, gotcha!
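The algebra in this thread can be checked numerically: solving for y gives y = x²/(x²+4), exactly one y per x, while solving for x gives x = ±2√(y/(1−y)), two x values per y — which is why x is not a function of y. A quick Python sketch:

```python
import math

def y_of(x):
    # x^2*y - x^2 + 4y = 0  =>  (x^2 + 4)*y = x^2  =>  exactly one y per x
    return x**2 / (x**2 + 4)

def xs_of(y):
    # solving instead for x: x^2 = 4y/(1 - y), so two x values for 0 < y < 1
    r = 2 * math.sqrt(y / (1 - y))
    return -r, r

y = y_of(2)          # 4/8 = 0.5
neg, pos = xs_of(y)  # -2.0 and 2.0: two different x values give the same y
```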
Workshop on Iterated Forcing and Large Cardinals
Joerg Brendle (Kobe University)
Methods in iterated forcing
We present some techniques for iterating forcing constructions. For example, we discuss Shelah's method of iterating by repeatedly taking ultrapowers of a forcing notion. We will also give a brief
outline of Shelah's technique of iterating along templates. While we shall mention some applications, the focus will be on illustrating the basic ideas underlying these techniques.
Moti Gitik
(Tel-Aviv University)
A weak generalization of SPFA to higher cardinals.
We apply a form of the Neeman iteration to finite structures with pistes. This allows to formulate a certain weak analog of SPFA for higher cardinals.
Martin Goldstern
(Technische Universität Wien)
Cichon's diagram and large continuum
I will sketch a forcing construction of a model in which several well-known cardinal characteristics if the continuum (in particular: continuum itself, cofinality of null, uniformity of null,
uniformity of meager, covering of meager) all have different values.
Joint work with Arthur Fischer, Kellner, Shelah. (Work in progress.)
John Krueger
Forcing with Models as Side Conditions
I describe a comparison of elementary substructures which allows for a uniform method of forcing with models as side conditions on $\omega_2$.
Heike Mildenberger
(Albert-Ludwigs-Universität Freiburg)
Forcings with block sequences
I will discuss some new preservation theorems for forcings with block sequences.
Tadatoshi Miyamoto (Nanzan University)
A study of iterating semiproper forcing
I would like to introduce a way to iterate semiproper forcing. Suppose we have an initial segment, of limit length, of an iterated forcing. We consider the set of conditions that have sort of
traceable countable stages. It turns out that this set of conditions forms a limit which sits between the direct and full limits. If we keep iterating semiproper p.o. sets under this limit, then
every tail of the iteration is semiproper in the intermediate stage. In particular, the iteration itself is semiproper. This is a generalization of an iteration lemma on proper forcing under
countable support.
Itay Neeman
(University of California, Los Angeles)
Higher analogs of the proper forcing axiom
I will present a higher analogue of the proper forcing axiom, and discuss some of its applications. The higher analogue is an axiom that allows meeting collections of $\aleph_2$ maximal antichains,
in specific classes of posets that preserve both $\aleph_1$ and $\aleph_2$.
This talk will include more details and proofs than my talk in the workshop on Forcing Axioms and their Applications. I will quickly survey the previous talk for audience members who were not present
in the
previous workshop.
Ralf Schindler
(WWU Münster)
An axiom.
We propose and discuss a new strong axiom for set theory.
Xianghui Shi
(Beijing Normal University)
Some consequences of I0 in Higher Degree Theory
We present some consequences of Axiom I0 in higher degree theory. These results indicate a connection between large cardinals and general degree structures. We shall also discuss more
evidence along this direction and raise some open questions. This is joint work with W. Hugh Woodin.
Matteo Viale
(University of Torino)
Absoluteness of theory of $MM^{++}$
Assume $\delta$ is a limit ordinal.
The category forcing $\mathbb{U}^\mathsf{SSP}_\delta$ has as objects the stationary set preserving partial orders in $V_\delta$ and as arrows the complete embeddings of its elements with a
stationary set preserving quotient.
We show that if $\delta$ is a super compact limit of super compact cardinals and $\mathsf{MM}^{++}$ holds, then
$\mathbb{U}^\mathsf{SSP}_\delta$ completely embeds into a pre saturated tower of height $\delta$.
We use this result to conclude that the theory of $\mathsf{MM}^{++}$ is invariant with respect to stationary set preserving posets that preserve this axiom.
For additional information contact thematic@fields.utoronto.ca
Christine Berkesch
What did you like about being at Butler?
I enjoyed all of the interaction I was able to have with the faculty. They were very helpful and available to me and challenged me to become a better mathematician.
What did you do after graduation?
I attended Purdue University and received a Ph.D. in Mathematics in 2010.
What are you doing now? Current Employment?
I am now an Assistant Research Professor at Duke University, where I am teaching and doing research in algebraic geometry and commutative algebra. | {"url":"http://www.butler.edu/math-actuarial/careers/career-profiles/christine-berkesch/","timestamp":"2014-04-17T15:31:42Z","content_type":null,"content_length":"1564","record_id":"<urn:uuid:43fa72ed-2a6b-4efd-a0a3-9536ce0d3fbf>","cc-path":"CC-MAIN-2014-15/segments/1397609530136.5/warc/CC-MAIN-20140416005210-00241-ip-10-147-4-33.ec2.internal.warc.gz"} |
Sequences and series
March 16th 2008, 02:43 AM #1
Dear math forum members,
I have a problem
How many integers divisible by 7 can you find in the interval [4500, 7000]?
Could you please give me just a hint on how to proceed?
Thank you in advance!
Firstly, I'll note that 7000 is a multiple of 7.
So we'll start from there.
The previous multiple of 7 is obtained by subtracting 7 from 7000, and so on.
So if you divide the interval into intervals of 7 numbers, ]a;b], each of these intervals will contain one and only one multiple of 7.
Hence you "just" have to see how many intervals of this sort there are in [4500;7000], and this number is given by (7000-4500)/7.
Except, that the number comes out as a decimal...
I know :-)
But it's as if you could restrict your study to the interval [first multiple of 7 coming after 4500 = N; 7000]
The number of multiples of 7 in [N;7000] will be (7000-N)/7 + 1 (because you count the extremity).
And this number is the same as the number of multiples of 7 in [4500;7000] because there is no multiple of 7 between 4500 and N.
So this shows that if you truncate (7000-4500)/7 down to an integer (because (7000-4500)/7 = (7000-N)/7 + APositiveNumber) and add 1, you'll have the number you want.
This is the same as taking the ceiling (the next integer above) of (7000-4500)/7
I don't know if it's clear enough
You can take a smaller example, such as [10;35]
Hello, Coach!
How many integers divisible by 7 can you find in the interval [4500, 7000]?
Since every seventh number is divisible by 7,
. . there are: . $\frac{7000}{7} \:=\:1000$ of them on the interval [0, 7000]
We must eliminate the multiples that are less than 4500.
How many are there?
. . There are: . $\frac{4500}{7} \:=\:642.857...\:\Rightarrow\:642$ of them.
Therefore, there are: . $1000 - 642 \:=\:358$ multiples of 7 on [4500, 7000]
Coach, do you know the floor function (aka: the greatest integer function)?
The floor of x, $\left\lfloor x \right\rfloor$, equals the largest integer which does not exceed x.
Some examples: $\left\lfloor \pi \right\rfloor = 3\,,\,\left\lfloor { - e} \right\rfloor = - 3\,\& \,\left\lfloor 3 \right\rfloor = 3$.
Most calculators have such a function built in.
For positive integers $d\,\& \,N,\,d < N$ the number of multiples of d in $[1,N]$ is $\left\lfloor {\frac{N}{d}} \right\rfloor$.
Thus to use a calculator to solve your problem we would calculate:
$\left\lfloor \frac{7000}{7} \right\rfloor - \left\lfloor \frac{4499}{7} \right\rfloor$.
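The counting formulas in this thread are easy to sanity-check numerically. A short Python sketch (integer floor division `//` plays the role of the floor function; the interval values are the ones from the question):

```python
lo, hi, d = 4500, 7000, 7

# Floor-function count: multiples of d up to hi, minus those strictly below lo.
count = hi // d - (lo - 1) // d

# Brute-force check over the closed interval [lo, hi].
brute = sum(1 for n in range(lo, hi + 1) if n % d == 0)

assert count == brute
print(count)  # 358
```

Both the floor-of-4499/7 approach and Soroban's 1000 − 642 count give the same answer, 358.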
Mathematics Software
Mathematics Software for Linux
Mathematics Packages
GNU Octave is a high-level language, primarily intended for numerical computations. It provides a convenient command line interface for solving linear and nonlinear problems numerically, and for
performing other numerical experiments using a language that is mostly compatible with Matlab. It may also be used as a batch-oriented language.
Octave has extensive tools for solving common numerical linear algebra problems, finding the roots of nonlinear equations, integrating ordinary functions, manipulating polynomials, and integrating
ordinary differential and differential-algebraic equations. It is easily extensible and customizable via user-defined functions written in Octave's own language, or using dynamically loaded modules
written in C++, C, Fortran, or other languages.
R is a language and environment for statistical computing and graphics. It is a GNU project which is similar to the S language and environment which was developed at Bell Laboratories (formerly AT&T,
now Lucent Technologies) by John Chambers and colleagues. R can be considered as a different implementation of S. There are some important differences, but much code written for S runs unaltered
under R.
R provides a wide variety of statistical (linear and nonlinear modelling, classical statistical tests, time-series analysis, classification, clustering, ...) and graphical techniques, and is highly
extensible. The S language is often the vehicle of choice for research in statistical methodology, and R provides an Open Source route to participation in that activity.
bc is an arbitrary precision numeric processing language. Syntax is similar to C, but differs in many substantial areas. It supports interactive execution of statements. bc is a utility included in
the POSIX P1003.2/D11 draft standard.
Scilab is a scientific software package for numerical computations providing a powerful open computing environment for engineering and scientific applications. It is developed since 1990 by
researchers from INRIA and ENPC. Distributed freely via the Internet since 1994, Scilab is currently being used in educational and industrial environments around the world.
Scilab includes hundreds of mathematical functions with the possibility to add interactively programs from various languages (C, Fortran...). It has sophisticated data structures (including lists,
polynomials, rational functions, linear systems...), an interpreter and a high level programming language.
Yorick is an interpreted programming language, designed for postprocessing or steering large scientific simulation codes. Smaller scientific simulations or calculations, such as the flow past an
airfoil or the motion of a drumhead, can be written as standalone yorick programs. The language features a compact syntax for many common array operations, so it processes large arrays of numbers
very efficiently. Unlike most interpreters, which are several hundred times slower than compiled code for number crunching, yorick can approach to within a factor of four or five of compiled speed
for many common tasks. Superficially, yorick code resembles C code, but yorick variables are never explicitly declared and have a dynamic scoping similar to many Lisp dialects. The yorick language is
designed to be typed interactively at a keyboard, as well as stored in files for later use. Yorick includes an interactive graphics package, and a binary file package capable of translating to and
from the raw numeric formats of all modern computers.
Algae is an interpreted language for numerical analysis. Algae was developed because we needed a fast and versatile tool, capable of handling large problems. Algae has been applied to interesting
dynamics problems in aerospace and related fields for more than a decade.
YACAS is an easy to use, general purpose Computer Algebra System, a program for symbolic manipulation of mathematical expressions. It uses its own programming language designed for symbolic as well
as arbitrary-precision numerical computations. The system has a library of scripts that implement many of the symbolic algebra operations; new algorithms can be easily added to the library. YACAS
comes with extensive documentation (320+ pages) covering the scripting language, the functionality that is already implemented in the system, and the algorithms we used.
Rlab is an interactive, interpreted scientific programming environment. Rlab is a very high level language intended to provide fast prototyping and program development, as well as easy
data-visualization, and processing. Rlab is not a clone of languages such as those used by tools like Matlab or Matrix-X/Xmath. However, as Rlab focuses on creating a good experimental environment
(or laboratory) in which to do matrix math, it can be called ``Matlab-like,'' since the programming language possesses similar operators and concepts.
EULER is a program for quickly and interactively computing with real and complex numbers and matrices, or with intervals, in the style of MatLab, Octave,... It can draw and animate your functions in
two and three dimensions.
Maxima is a descendant of DOE Macsyma, which had its origins in the late 1960s at MIT. It is the only system based on that effort still publicly available and with an active user community, thanks to
its open source nature. Macsyma was the first of a new breed of computer algebra systems, leading the way for programs such as Maple and Mathematica. This particular variant of Macsyma was maintained
by William Schelter from 1982 until he passed away in 2001. In 1998 he obtained permission to release the source code under GPL. It was his efforts and skill which have made the survival of Maxima
possible, and we are very grateful to him for volunteering his time and skill to keep the original Macsyma code alive and well. Since his passing a group of users and developers has formed to keep
Maxima alive and kicking. Maxima itself is reasonably feature complete at this stage, with abilities such as symbolic integration, 3D plotting, and an ODE solver, but there is a lot of work yet to be
done in terms of bug fixing, cleanup, and documentation. This is not to say there will be no new features, but there is much work to be done before that stage will be reached, and for now new
features are not likely to be our focus.
JACAL is an interactive symbolic mathematics program. JACAL can manipulate and simplify equations, scalars, vectors, and matrices of single and multiple valued algebraic expressions containing
numbers, variables, radicals, and algebraic, differential, and holonomic functions.
Symbolic calculations, carried out by computer algebra systems, have become an integral part in the daily work of scientists. The advance in algorithms and computer technology has led to remarkable
progress in several areas of natural sciences. gTybalt was developed as a tool for certain kinds of calculations. The characteristics of these calculations are: First of all, these tend to be "long"
calculations, e.g. the system needs to process large amounts of data and efficiency in performance is a priority. Secondly, the algorithms for the solution of the problem are usually developed and
implemented by the scientists themselves. This requires support from the computer algebra system for a programming language which allows them to implement complex algorithms for abstract mathematical
entities. In other words, it requires support of object oriented programming techniques from the system. On the other hand, these calculations usually do not require that the computer algebra system
provides sophisticated tools for all branches of mathematics. Thirdly, despite the fact that these calculations process large amounts of data, the time needed for the implementation of the algorithms
usually outweighs the actual running time of the program. Therefore convenient development tools are also important.
Symaxx/2 is a graphical frontend for Maxima.
SINGULAR is a Computer Algebra System for polynomial computations with special emphasis on the needs of commutative algebra, algebraic geometry, and singularity theory.
HartMath is an experimental computer algebra system written in Java.
The name GiNaC is an iterated and recursive abbreviation for GiNaC is Not a CAS, where CAS stands for Computer Algebra System. It has been developed to become a replacement engine for xloops which is
up to now powered by the Maple CAS. Its design is revolutionary in a sense that contrary to other CAS it does not try to provide extensive algebraic capabilities and a simple programming language but
instead accepts a given language (C++) and extends it by a set of algebraic capabilities.
The aim of this project is to provide a package that completely evaluates massive one- and two-loop Feynman diagrams to make calculations in high energy physics easier.
PARI-GP is a software package for computer-aided number theory. It consists of a C library, libpari (with optional assembler cores for some popular architectures), and of the programmable interactive
gp calculator. While you can write your own libpari-based programs, many people just start up a gp session, or have gp execute their scripts.
GRASS GIS (Geographic Resources Analysis Support System) is an open source, Free Software Geographical Information System (GIS) with raster, topological vector, image processing, and graphics
production functionality that operates on various platforms through a graphical user interface and shell in X-Windows. It is released under GNU General Public License (GPL).
Macaulay 2 is a software system devoted to supporting research in algebraic geometry and commutative algebra, whose development has been funded by the National Science Foundation.
NumExp is a family of open-source applications for numeric computation. When it was created, the idea was to make a powerful tool like Mathematica. Now, we know this is almost impossible without
more open-source hackers. Meanwhile, we are trying to make, at least, a useful tool!
GtkGraph is a simple graphing calculator written for X Windows using the Gtk+ widget set. It is intended as a replacement for a standalone graphing calculator, which typically costs over $80 USD, and
has a tiny monochrome display driven by a CPU running at around 6 MHz with no FPU. GtkGraph can plot functions and solve arithmetic expressions using double precision arithmetic.
surf is a tool to visualize some real algebraic geometry: plane algebraic curves, algebraic surfaces and hyperplane sections of surfaces. surf is script driven and has (optionally) a nifty GUI using
the Gtk widget set.
E is a a purely equational theorem prover for clausal logic. That means it is a program that you can stuff a mathematical specification (in clausal logic with equality) and a hypothesis into, and
which will then run forever, using up all of your machines resources. Very occasionally it will find a proof for the hypothesis and tell you so ;-).
TISEAN is a free software project for the analysis of time series with methods based on the theory of nonlinear deterministic dynamical systems, or chaos theory, if you prefer.
Plotting Software
gnuplot is a command-driven interactive function plotting program. It can be used to plot functions and data points in both two- and three-dimensional plots in many different formats, and will
accommodate many of the needs of today's scientists for graphic data representation. gnuplot is copyrighted, but freely distributable; you don't have to pay for it.
The NCAR Command Language (NCL) is a programming language designed specifically for the access, analysis, and visualization of data. NCL can be run in interactive mode, where each line is interpreted
as it is entered at your workstation, or it can be run in batch mode as an interpreter of complete scripts.
Gri is a language for scientific graphics programming. The word "language" is important: Gri is command-driven, not point/click. Some users consider Gri similar to LaTeX, since both provide extensive
power as a reward for tolerating a learning curve. Gri can make x-y graphs, contour graphs, and image graphs, in PostScript and (someday) SVG formats. Control is provided over all aspects of drawing,
e.g. line widths, colors, and fonts. A TeX-like syntax provides common mathematical symbols.
PLplot is a library of functions that are useful for making scientific plots. PLplot can be used from within compiled languages such as C, C++, FORTRAN and Java, and interactively from interpreted
languages such as Octave, Python, Perl and Tcl. The PLplot library can be used to create standard x-y plots, semilog plots, log-log plots, contour plots, 3D surface plots, mesh plots, bar charts and
pie charts. Multiple graphs (of the same or different sizes) may be placed on a single page with multiple lines in each graph.
The PGPLOT Graphics Subroutine Library is a Fortran- or C-callable, device-independent graphics package for making simple scientific graphs. It is intended for making graphical images of publication
quality with minimum effort on the part of the user. For most applications, the program can be device-independent, and the output can be directed to the appropriate device at run time.
The GNU plotutils package contains software for both programmers and technical users. Its centerpiece is libplot, a powerful C/C++ function library for exporting 2-D vector graphics in many file
formats, both vector and raster. It can also do vector graphics animations.
SciGraphica is a scientific application for data analysis and technical graphics. It intends to be a clone of the popular commercial (and expensive) application "Microcal Origin". It fully supplies
plotting features for 2D, 3D and polar charts. The aim is to obtain a fully-featured, cross-platform, user-friendly, self-growing scientific application. It is free and open-source, released under
the GPL license.
Grace is a WYSIWYG 2D plotting tool for the X Window System and M*tif.
Ptplot 5.2 is a 2D data plotter and histogram tool implemented in Java. Ptplot can be used as a standalone applet or application, or it can be embedded in your own applet or application.
DISLIN is a high-level plotting library for displaying data as curves, polar plots, bar graphs, pie charts, 3D-color plots, surfaces, contours and maps.
ImLib3D is an open source C++ library for 3D (volumetric) image processing. It contains most basic image processing algorithms, and some more sophisticated ones. It comes with an optional viewer that
features multiplanar views, animations, vector field views and 3D (OpenGL) multiplanar. All image processing operators can be interactively called from the viewer as well as from the UNIX
command-line. ImLib3D's goal is to provide a standard and easy to use platform for volumetric image processing research. Focus has been put on simplicity for the developer. ImLib3D has been carefully
designed, using modern, standards conforming C++. It intensively uses the Standard C++ Library, including strings, containers, and iterators.
GLgraph visualizes mathematical functions. It can handle 3 unknowns (x,z,t) and can produce a 4D function with 3 space and 1 time dimension.
MayaVi is a free, easy to use scientific data visualizer. It is written in Python and uses the amazing Visualization Toolkit (VTK) for the graphics. It provides a GUI written using Tkinter. MayaVi is
free and distributed under the conditions of the BSD license. It is also cross platform and should run on any platform where both Python and VTK are available (which is almost any *nix, Mac OSX or Windows).
Graph Drawing Programs from AT&T Research and Lucent Bell Labs
Numerical Libraries
The GNU Scientific Library (GSL) is a numerical library for C and C++ programmers. It is free software under the GNU General Public License.
The library provides a wide range of mathematical routines such as random number generators, special functions and least-squares fitting. There are over 1000 functions in total.
The "Simple Algebraic Math Library" is a C library for computer algebra, together with some application programs: a desktop calculator, a spreadsheet (sort of) and a program to factorize integers.
Numerical Python adds a fast, compact, multidimensional array language facility to Python.
The Visualization ToolKit (VTK) is an open source, freely available software system for 3D computer graphics, image processing, and visualization used by thousands of researchers and developers
around the world. VTK consists of a C++ class library, and several interpreted interface layers including Tcl/Tk, Java, and Python. VTK supports a wide variety of visualization algorithms including
scalar, vector, tensor, texture, and volumetric methods; and advanced modeling techniques such as implicit modelling, polygon reduction, mesh smoothing, cutting, contouring, and Delaunay
triangulation. In addition, dozens of imaging algorithms have been directly integrated to allow the user to mix 2D imaging / 3D graphics algorithms and data. The design and implementation of the
library has been strongly influenced by object-oriented principles.
PDL (``Perl Data Language'') gives standard Perl the ability to compactly store and speedily manipulate the large N-dimensional data arrays which are the bread and butter of scientific computing.
LAPACK is written in Fortran77 and provides routines for solving systems of simultaneous linear equations, least-squares solutions of linear systems of equations, eigenvalue problems, and singular
value problems. The associated matrix factorizations (LU, Cholesky, QR, SVD, Schur, generalized Schur) are also provided, as are related computations such as reordering of the Schur factorizations
and estimating condition numbers. Dense and banded matrices are handled, but not general sparse matrices. In all areas, similar functionality is provided for real and complex matrices, in both single
and double precision.
PARI-GP is a software package for computer-aided number theory. It consists of a C library, libpari (with optional assembler cores for some popular architectures), and of the programmable interactive
gp calculator. While you can write your own libpari-based programs, many people just start up a gp session, or have gp execute their scripts.
This page lists a number of packages related to numerics, number crunching, signal processing, financial modeling, linear programming, statistics, data structures, date-time processing, random number
generation, and crypto.
LINPACK is a collection of Fortran subroutines that analyze and solve linear equations and linear least-squares problems. The package solves linear systems whose matrices are general, banded,
symmetric indefinite, symmetric positive definite, triangular, and tridiagonal square. In addition, the package computes the QR and singular value decompositions of rectangular matrices and applies
them to least-squares problems. LINPACK uses column-oriented algorithms to increase efficiency by preserving locality of reference.
LINPACK was designed for supercomputers in use in the 1970s and early 1980s. LINPACK has been largely superseded by LAPACK, which has been designed to run efficiently on shared-memory, vector supercomputers.
ATLAS stands for Automatically Tuned Linear Algebra Software. ATLAS is both a research project and a software package. This FAQ describes the software package. ATLAS's purpose is to provide portably
optimal linear algebra software. The current version provides a complete BLAS API (for both C and Fortran77), and a very small subset of the LAPACK API. For all supported operations, ATLAS achieves
performance on par with machine-specific tuned libraries.
CLN is a library for computations with all kinds of numbers. It has a rich set of number classes... [see web page]
This distribution provides an infrastructure for scalable scientific and technical computing in Java. It is particularly useful in the domain of High Energy Physics at CERN: It contains, among
others, efficient and usable data structures and algorithms for Off-line and On-line Data Analysis, Linear Algebra, Multi-dimensional arrays, Statistics, Histogramming, Monte Carlo Simulation,
Parallel & Concurrent Programming. It summons some of the best concepts, designs and implementations thought up over time by the community, ports or improves them and introduces new approaches where
need arises. In overlapping areas, it is competitive or superior to toolkits such as STL, Root, HTL, CLHEP, TNT, GSL, C-RAND / WIN-RAND, (all C/C++) as well as IBM Array, JDK 1.2 Collections
framework, JGL (all Java), in terms of performance (!), functionality and (re)usability.
Programming Languages
Lush is an object-oriented programming language designed for researchers, experimenters, and engineers interested in large-scale numerical and graphic applications. Lush is designed to be used in
situations where one would want to combine the flexibility of a high-level, loosely-typed interpreted language, with the efficiency of a strongly-typed, natively-compiled language, and with the easy
integration of code written in C, C++, or other languages.
Nickle is a programming language based prototyping environment with powerful programming and scripting capabilities. Nickle supports a variety of datatypes, especially arbitrary precision numbers.
The programming language vaguely resembles C. Some things in C which do not translate easily are different, some design choices have been made differently, and a very few features are simply missing.
Nickle provides the functionality of UNIX bc, dc and expr in much-improved form. It is also an ideal environment for prototyping complex algorithms. Nickle's scripting capabilities make it a nice
replacement for spreadsheets in some applications, and its numeric features nicely complement the limited numeric functionality of text-oriented languages such as AWK and PERL.
ODE is a free, industrial quality library for simulating articulated rigid body dynamics - for example ground vehicles, legged creatures, and moving objects in VR environments. It is fast, flexible,
robust and platform independent, with advanced joints, contact with friction, and built-in collision detection.
Blitz++ is a C++ class library for scientific computing which provides performance on par with Fortran 77/90. It uses template techniques to achieve high performance. The current versions provide
dense arrays and vectors, random number generators, and small vectors and matrices.
FFTW is a C subroutine library for computing the discrete Fourier transform (DFT) in one or more dimensions, of arbitrary input size, and of both real and complex data (as well as of even/odd data,
i.e. the discrete cosine and sine transforms, the DCT and DST). We believe that FFTW, which is free software, should become the FFT library of choice for most applications.
Our benchmarks, performed on on a variety of platforms, show that FFTW's performance is typically superior to that of other publicly available FFT software, and is even competitive with vendor-tuned
codes. In contrast to vendor-tuned codes, however, FFTW's performance is portable: the same program will perform well on most architectures without modification. Hence the name, "FFTW," which stands
for the somewhat whimsical title of "Fastest Fourier Transform in the West."
GNU MP is a library for arbitrary precision arithmetic, operating on signed integers, rational numbers, and floating point numbers. It has a rich set of functions, and the functions have a regular interface.
Non-Uniform Rational B-Splines (NURBS) curves and surfaces are parametric functions which can represent any type of curves or surfaces. This C++ library hides the basic mathematics of NURBS. This
allows the user to focus on the more challenging parts of their projects. The library also offers a lot of features to help generate NURBS from data points.
SciPy is an open source library of scientific tools for Python. SciPy supplements the popular Numeric module, gathering a variety of high level science and engineering modules together as a single package.
SciPy includes modules for graphics and plotting, optimization, integration, special functions, signal and image processing, genetic algorithms, ODE solvers, and others.
Sites of Interest
The Linux lab project is intended to help people with development of data collection and process control software for LINUX. It should be understood as a software and knowledge pool for interested
people and application developers dealing with this stuff in educational or industrial environment.
This page deals with links to tutorials, documents, and Linux implementations for installing Linux on a PC, getting started with Linux, and then going a step further -- to optimise your PC for
processing power, using multiple processors (Symmetric Multi Processing - SMP); making a cheap, upgradeable, Supercomputing Linux cluster and finally links to software to do parallel programming on
SAL (Scientific Applications on Linux) is a collection of information and links to software that will be of interest to scientists and engineers. The broad coverage of Linux applications will also
benefit the whole Linux/Unix community. There are currently 3,070 entries in SAL.
Netlib is a collection of mathematical software, papers, and databases.
[Collection of GPL'd and other Free Software] | {"url":"http://karmak.org/2003/linux-scimath/math.html","timestamp":"2014-04-21T02:21:29Z","content_type":null,"content_length":"31680","record_id":"<urn:uuid:d1c3426f-4057-42ba-a546-cef84dc62b2d>","cc-path":"CC-MAIN-2014-15/segments/1397609539447.23/warc/CC-MAIN-20140416005219-00073-ip-10-147-4-33.ec2.internal.warc.gz"} |
What is the formula to calculate monthly payments on loan?
What is the formula to calculate monthly payments on loan? I need to use principal, annual interest rate, and loan term in years in the formula.
Example: $13,000 loan for three years, 8% interest
What is the monthly payment?
(I don't need to solve a problem, I would like to know the formula)
Re: What is the formula to calculate monthly payments on loan
Magen wrote:What is the formula to calculate monthly payments on loan?
(I don't need to solve a problem, I would like to know the formula)
You can find information online, such as here. | {"url":"http://www.purplemath.com/learning/viewtopic.php?p=6590","timestamp":"2014-04-20T21:00:59Z","content_type":null,"content_length":"18420","record_id":"<urn:uuid:598b3c98-474f-4c42-b779-01a8e07830d4>","cc-path":"CC-MAIN-2014-15/segments/1397609539230.18/warc/CC-MAIN-20140416005219-00351-ip-10-147-4-33.ec2.internal.warc.gz"} |
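For the record, the usual textbook amortization formula (not quoted from the linked page, just the standard form) is M = P·r / (1 − (1 + r)^(−n)), where P is the principal, r is the *monthly* interest rate, and n is the total number of monthly payments. A small Python sketch applied to the example in the question:

```python
def monthly_payment(principal, annual_rate, years):
    """Standard amortization formula: M = P*r / (1 - (1 + r)**-n)."""
    r = annual_rate / 12      # monthly interest rate
    n = years * 12            # total number of monthly payments
    return principal * r / (1 - (1 + r) ** -n)

# Example from the question: $13,000 over 3 years at 8% annual interest.
print(round(monthly_payment(13000, 0.08, 3), 2))  # about 407.37
```

This assumes monthly compounding at annual_rate/12, which is the usual convention for problems stated this way.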
Finite outer automorphism groups
It is a theorem of Gopal Prasad (which I hope I am not misquoting...) that lattices in higher rank linear semi-simple Lie groups have finite outer automorphism groups. Is there some other reasonable
class of groups with this finiteness property?
2 Answers
Here is a result of Frédéric Paulin: a hyperbolic group with Kazhdan's property (T) has a finite outer automorphism group. See Outer automorphisms of hyperbolic groups and small actions on R-trees. Arboreal group theory (Berkeley, CA, 1988), 331–343, Math. Sci. Res. Inst. Publ., 19, Springer, New York, 1991.
Thanks! Of course this begs the question of where such groups would come from... – Igor Rivin Apr 25 '11 at 19:24
More generally, relatively hyperbolic groups that don't split over elementary subgroups have this property by a result of Drutu-Sapir, see Theorem 1.12 in front.math.ucdavis.edu/0601.5305 – Igor Belegradek Apr 25 '11 at 19:28
@Igor Rivin: there are lots of relatively hyperbolic groups with no elementary splittings. What kind of groups are you looking for? – Igor Belegradek Apr 25 '11 at 19:30
@Igor Rivin: any countable group embeds into some Out(G) where G has property (T), see front.math.ucdavis.edu/0605.5553. In fact, I seem to recall that Minasyan-Osin showed that any countable group can be realized as Out(G) where G has property (T), but I cannot find a reference at the moment. – Igor Belegradek Apr 25 '11 at 19:41
@Igor Rivin: I meant to say that a relatively hyperbolic group G that doesn't split over elementary subgroups must have finite Out(G). – Igor Belegradek Apr 25 '11 at 19:42
The theorem Igor Belegradek mentions is a more general form of the theorem (due collectively to Bestvina, Feighn, Paulin, and Rips) that a one-ended hyperbolic group has infinite outer
automorphism group only if it splits over a two-ended subgroup. See
M. Bestvina, M. Feighn, Stable actions of groups on real trees. Invent. Math. 121 (1995), no. 2, 287–321.
@Richard: thanks! – Igor Rivin Apr 25 '11 at 20:17
[Numpy-discussion] Slicing a numpy array and getting the "complement"
Orest Kozyar orest.kozyar@gmail....
Mon May 19 13:21:33 CDT 2008
> If you don't mind fancy indexing, you can convert your index arrays
> into boolean form:
> complement = A==A
> complement[idx] = False
This actually would work perfectly for my purposes. I don't really
need super-fancy indexing.
>> Given a slice, such as s_[..., :-2:], is it possible to take the
>> complement of this slice? Specifically, s_[..., ::-2].
> Hmm, that doesn't look like the complement. Did you mean s_[..., -2:]
> and s_[..., :-2]?
Whoops, yes you're right.
> In general, for any given slice, there may not be a slice giving the
> complement. For example, the complement of arange(6)[1:4] should be
> array([0,4,5]), but there is no slice which can make that. Things get
> even more difficult with start:stop:step slices let alone simultaneous
> multidimensional slices. Can you be more specific as to exactly the
> variety of slices you need to support?
I think Anne's solution will work well for what I need to do. Thanks!
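For readers skimming the thread, the boolean-mask idea can be sketched like this (a minimal illustration, not from the original posts; it uses `np.ones` rather than the `A == A` trick, since `A == A` would be False at NaN entries):

```python
import numpy as np

# Build a boolean mask that is True everywhere except at the indexed
# positions, then use it to pull out the "complement" of the selection.
A = np.arange(6)
idx = np.array([1, 2, 3])

mask = np.ones(A.shape, dtype=bool)   # like A == A, but safe for NaNs
mask[idx] = False                     # knock out the selected positions

complement = A[mask]
print(complement)                     # [0 4 5]
```

Inverting the mask with `~mask` recovers the original selection, so one mask serves both halves.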
Posts by JEFF
Total # Posts: 560
Did Lyndon Johnson think Eisenhower was a good president?
The least common multiple of 96 and 108?
Calculus I
A tank in the shape of an inverted right circular cone has height 10 meters and radius 8 meters. It is filled with 6 meters of hot chocolate. Find the work required to empty the tank by
pumping the hot chocolate over the top of the tank. Note: the density of hot choco...
A ball of radius 10 has a round hole of radius 5 drilled through its center. Find the volume of the resulting solid.
calculus-differential equation
Thanks so much. I've been struggling with that thing for days!
calculus-differential equation
Consider the differential equation: (du/dt)=-u^2(t^3-t) a) Find the general solution to the above differential equation. (Write the answer in a form such that its numerator is 1 and its integration
constant is C). u=? b) Find the particular solution of the above differential e...
looking to reduce to lowest term 1) 3+x(4+x)/3+x i got 4+x 2)3a^2-13a-10/5+4a-a^2 i got could not reduce any help would be great
sorry next one is 3a^2-13-10/5+4a-a^2 i got cannot reduce
a couple more 1) 3+x(4+x)/3+x i got 4+x
thanks i got could't reduce on the second one the first one the denominator both numbers are to the second power does that matter
trying to reduce to simpilest form 1.t-a/t^2-a^2 i have 1/2(t+a) 2.3y^3+7y^2+4y/y^2+5y+4 i have cannot reduce thanks
Algebra 1
The formula for direct variation is y = kx, where k is the constant of variation. For slope, use y - y1 = m(x-x1)
Math x-int's
Wow... that's a long process. Thanks for the help!
Math x-int's
How do I algebraically find the x-intercepts for this equation y = x^3 - 3x^2 + 3. I know I need to plug in y = 0 to solve for x. x^3 - 3x^2 + 3 = 0 But where do I go from there? I don't think I can
factor, and I don't think I can use the quadratic formula.
Gave 2/3 majority of the 13 colonies.
1) Did not give enough power to the federal government 2) No national currency
also apply the rules of divisibility
I'm tutoring someone 1st year arts math and they have been given this difficult question that I cannot mind boggel. It goes as follows: There are 6 couples at a party. Each can shake another persons
hand once, but they can never shake their spouses hand. At the end of the ...
Im really unsure about this question i have not been taught this type of graphing before. It is a tough one.
tech math
if some one could solve this so i can see if i'am doing it right 8x+y+z=1 7x-2y+9z=-3 4x-6y+8z=-5 thanks
In alcohol fermentation, yeast converts glucose to ethanol and carbon dioxide: C6H12O6 -----> 2C2H5OH + 2CO2. If 5.97 g of glucose are reacted and 1.44 L of CO2 gas are collected at 293 K and .984 atm,
what is the percent yield of the reaction?
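No answer appears in this excerpt; as an unofficial worked check of the arithmetic (standard molar-mass and gas-constant values assumed), the ideal gas law gives a yield near 89%:

```python
# Unofficial worked check (not a posted answer): percent yield of CO2
# from 5.97 g of glucose, C6H12O6 -> 2 C2H5OH + 2 CO2.
M_GLUCOSE = 180.16          # g/mol, assumed molar mass
R = 0.08206                 # L*atm/(mol*K)
T, P = 293.0, 0.984         # conditions from the question

n_glucose = 5.97 / M_GLUCOSE
n_co2 = 2 * n_glucose                  # stoichiometry: 1 glucose -> 2 CO2
v_theoretical = n_co2 * R * T / P      # theoretical CO2 volume in litres

percent_yield = 100 * 1.44 / v_theoretical
print(round(percent_yield, 1))         # about 88.9
```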
What is the question?
It's been years since I read the book but wasn't Johnny cleared of charges in the book? Wasn't it ruled that he acted in self-defence?
I need the equilibrium constant for the following reaction @ 25 deg. C 2CaCO3 (calcite) + Mg2+ <==> CaMg(CO3)2 (dolomite) + Ca2+
larger land area- california or missouri?
what are the properties of protagonist
physics- what did i do wrong
If the bus is 100 meters ahead of the car and the bus is travling at 65 km/hr and the car at 78km/hr, how long will it take the car to catch the bus? I used the following formula, but was informed my
answer was wrong. 100m + (t) 65km/hr = 0 + (t)78km/hr and solved for (t). t =...
i need help with this one! A 100KHz oscillator thats followed by a doubler and two triplers will produce an output of frequency.... a.) 180 kHz b.) 500 " c.) 800 " d.) 1.8 "
construct a nondiagonal 2 x 2 matrix that is diagonalizable but not invertible. Just write down a diagonal matrix with one zero on the diagonal and then apply an orthogonal transformation. E.g. if you start with the matrix: A = [1,0 0,0] and take the orthogonal transformation...
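A concrete instance of that hint (the matrix below is my own choice): conjugating diag(1, 0) by a 45-degree rotation gives a non-diagonal matrix with eigenvalues 1 and 0, so it is diagonalizable but singular.

```python
# diag(1, 0) conjugated by a 45-degree rotation:
A = [[0.5, 0.5],
     [0.5, 0.5]]

trace = A[0][0] + A[1][1]                    # 1.0
det = A[0][0] * A[1][1] - A[0][1] * A[1][0]  # 0.0 -> not invertible

# Eigenvalues from the characteristic polynomial t^2 - trace*t + det = 0:
disc = (trace ** 2 - 4 * det) ** 0.5
eigenvalues = sorted([(trace - disc) / 2, (trace + disc) / 2])

print(det)          # 0.0
print(eigenvalues)  # [0.0, 1.0] -- distinct, so A is diagonalizable
```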
the lim [as x goes to infinity] (.25)^x =0?? Yes, the limit is zero. The more times you multiply a number by 0.25 (which is what is happening when x goes to infinity), the smaller is the result.
Wallis's method of tangents
consider the curve defined by the equation y=a(x^2)+bx+c. Take a point(h,k) on the curve. use Wallis's method of tangents to show that the slope of the line tangent to this curve at the point(h,k)
will be m = 2ah+b. I have to prove this for two cases: a>0 and a<0. Thank...
Cod+ H20 <==> HCod+ OH- 0.002M x M x M 0.002-x +x +x Kb=(HCod)(OH)/(Cod)=x^2/(0.002-x) 8.33x 10^-7=x^2/(0.002-x) x=4.08x 10^-5 (theres [HCod]) pOH= PKb + log (HB/B-) pOH= -log(8.33x 10^-7) + log
(4.08x10^-5/0.002) = 4.38 POH + pH = 14 : 14- pOH = pH 14-4.38= 9.62
I thinks i got the problem right this time P=MR=$750 TC =2,500,000+500q+0.005Q^2 (squared) Mc =500+0.01Q Calculate profit maximizing level Calculate the company's optimal profit and optimal profit as
a percentage of sales revenue Can anyone help here I am lost
P=MR+$750 TC =2,500,000+500q+0.005Q^2 (squared) Mc =500+0.01Q Calculate profit maximizing level Calculate the company's optimal profit and optimal profit as a percentage of sales revenue Can anyone
help here I am lost
Profit Maximation question help P=MR+$750 TC =2500000+500q+0.00Q squared Mc =500+0.01Q Calculate profit maximizing level Calculate the company's optimal profit and optimal profit as a percentage of
sales revenue Can anyone help here I am lost
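No reply appears in this excerpt; a sketch of the standard marginal-analysis answer, taking the first post's figures at face value (P = MR = $750, TC = 2,500,000 + 500Q + 0.005Q^2, MC = 500 + 0.01Q):

```python
MR = 750.0

# Set MR = MC:  750 = 500 + 0.01*Q  =>  Q = 25,000 units
Q = (MR - 500.0) / 0.01

revenue = MR * Q
total_cost = 2_500_000 + 500 * Q + 0.005 * Q ** 2
profit = revenue - total_cost

print(round(Q))                           # 25000
print(round(profit))                      # 625000
print(round(100 * profit / revenue, 2))   # 3.33 (profit as % of sales revenue)
```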
Please help, I am so bored by my science lesson that I can barely focus. I read the lesson, and 5 seconds later I don't even remember what I just read. Any ideas on somehow making the Atmosphere and
Solar Effects more interesting? I like history a thousand times better. Th...
Find Σx for the following set of numbers 2, 3, 5, 6, 8, 9, 11. If I knew what that funny looking symbol represents I may be able to do the problem. Can someone explain please. Ok I think I have the answer now. All that symbol means is the mean. The answer would be 6.
I am getting ready to start an Algebra class for the first time and I was wondering what challenges people have with learning and using algebra concepts. Also what are the best ways to over come math
anxiety? The best way? Learn the language of algebra.
Just a question
Is there such thing as a personologists? Yes, there are people who profess to be able to deduce people's personality from their facial behavior and body language. They seem to have coined the word
and created a profession with that name, even though it is not in most dicti...
Can someone explain to me how diversion programs are related to social process theories.
Will someone please show me how to do this problem. I have to write this ratios in simplest form. The ratio of 5 3/5 to 2 1/10. Change each of your mixed fractions to an improper fraction, then
divide the first by the second. Remember to divide fraction #1 by fraction #2, you ...
I am having a difficult time calculating rates and unit prices. Can someone describe a simple process for using rates and unit prices that will help me understand these concepts? I searched Google
under the key words "rates 'unit prices'" to get these possibl...
Can someone help me think of physical, environmental, and social factors that might affect criminal behavior. Any websites would be greatly appreciated. http://www.google.com/search?q=
Can someone explain rational choice theory to me. How is criminal behavior explained according to the rational choice theory? It is explained here well: http://en.wikipedia.org/wiki/
Rational_choice_theory Look at the (unrealistic) assumptions about the world in that document. ...
In an ionic compound, what is the net ionic charge? and what compound does an -ide ending generally indicate? what grade ur in 11th COOL!!! LIVE IN NYC??? IM IN 11TH TOO myspace?
Is this true Dispersion forces generally increase in strength as the number of electrons in a molecule increases. The number of electrons can only increase if more atoms are added to the molecule.
The molecule has to stay electrically neutral. Dispersion forces are of greater ...
What is a polyatomic ion? A polyatomic ion is an ion composed of more than one kind of atom. For example, the hydrogen carbonate ion, HCO3^- (also called bicarbonate), ammonium ion (NH4^+), oxalate
ion (C2O4^-2), phosphate ion (PO4^-3), sulfate ion (SO4^-2), sulfite (SO3^-2), ...
HELP,HELP,HELP,HELP How can you tell that cobalt(II) iodide is binary ionic compound formed by a transition metal with more than ionic charge? Thanx in Advance form jeff Cobalt(II) chloride tells you
it is binary (composed of two different atoms), cobalt and iodine; compare th...
When water mixes with carbon dioxide in the air, what is formed? H2O + CO2 --> H2CO3 carbonic acid I dont think this is correct because there is no aid in co2. It just ends up to be CO2 in liquid
form. Basically water with air is carbonated water. that is definitely correct...
In the banquet scence (Act III Scene iv), what complaint does Macbeth make about murdered men? I read it 10x through and found various complaints, but can't pinpoint which one. If he ignored his
father's words,he would repent for it
US History
Here is a prompt: Both Thomas Jefferson and Andrew Jackson were promoted as champions of the "common man". By looking at the long careers of both men it is quite obvious that only Thomas Jefferson
truly lived up to this title. To what extent is this true? I am suppos...
at one point in time native americans were called Marginal Americans does anyone know why? Please help Thank you for using the Jiskha Homework Help Forum. The term "Marginal Americans" refers to
those considered outsiders: ethnic minorities, the poor, the disabled, a...
Four roommates are planning to spend the weekend in their dorm room watching old movies, and they are debating how many to watch. Here is their willingness to pay for each film: Orson Alfred Woody
Ingmar Frist film 7 5 3 2 Second film 6 4 2 1 Third film 5 3 1 0 Fourth film 4 2...
social studies
Just a fun question for you to answer. Consider the choices of native Americans who decide to stay on their tribe's native land and those who consider to relocate to a city. If you were presented
with this decision, which would you chose and why? I would chose to leave the...
OK, I'm bookmarking this site, hope to see you all here. This is like a ETH 125 hideout! Way cool!!!
[R] Is there any R package that can find the maxima of a 1-D
(Ted Harding) Ted.Harding at manchester.ac.uk
Wed Mar 17 15:02:53 CET 2010
On 17-Mar-10 12:31:43, mauede at alice.it wrote:
> Is there any R package that can help me with digging out the
> maxima of a 1-D trajectory ?
> I have 975 1-D curves. They are only known as time series.
> That is a set of points ordered with respect to time. Some
> curves exhibit one only peak.
> We wish to find the number of peaks and their position along
> the time axis.
> Apparently it's a trivial problem solved looking at the zeros
> and the change of sign of the 1st derivative.
> In practice it is necessary to apply some criteria (which ones?)
> to discriminate between real peaks and noise oscillations.
> Presumably I ought to define the noise level with respect to
> peaks height in this application ...
> Maybe wavelets can help ?
> Thank you in advance,
> Maura
Precisely. You need to define what you wish "peak" to mean.
Then you can implement your wish in code.
The most inclusive definition:
(X[n] >= X[n-1])&(X[n] >= X[n+1])
will of course catch everything, including noisy fluctuations
(and as a result may hide real ("underlying") peaks hidden by
the noise).
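That "most inclusive definition" is easy to try out; a quick sketch (in Python rather than R, purely illustrative):

```python
# A point n is a "peak" under the most inclusive definition quoted above:
# X[n] >= X[n-1] and X[n] >= X[n+1].
def peaks(x):
    return [n for n in range(1, len(x) - 1)
            if x[n] >= x[n - 1] and x[n] >= x[n + 1]]

series = [0, 2, 1, 1.1, 1, 3, 2]
print(peaks(series))   # [1, 3, 5] -- the tiny bump at n=3 counts too,
                       # which is exactly the noise problem being discussed
```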
You might extend the above:
or you might apply a smoother (possibly wavelets) to reduce the
noise and then find the peaks of that. And so on ... Apparently
you already have some notion of what you want "peak" to mean,
since you say "Others have two peaks of different height",
and you also recognise an effect of "noise".
But the possibilities are endless!
Sir Hector Munro's classic "Tables of the 3000-feet Mountains
of Scotland" (first published 1891) did not give a formal
definition "owing to the impossibility of deciding what should
be considered separate mountains." On the other hand, J. Rooke
Corbett's later "Scottish Mountains 2500 Feet And Under 3000 Feet
In Height With Re-Ascent Of 500 Feet On All Sides" did use
the "re-ascent" definition given in the title: it is a separate
mountain if you have to climb at least 500 feet from any other
peak to reach its summit.
However, a single mountain may have more than one peak. For
example, the mountain of Lochnagar (overlooking the Balmoral
Estate and the theme of a novel by Prince Charles) is held to
have two separate peaks, marked on the Ordnance Survey Map as
Cac Carn Beag and Cac Carn Mor (don't ask ... ), at 3789 feet
and 3768 feet respectively, separated by a ridge of about 1/4
mile which dips by about 100 feet.
They can be seen at the right-hand end of the photo of Lochnagar
shown in
(to the right of the notch just right of centre)
On the other hand, look at the photo of part of Cairngorm mountain
and ask: Is there a peak here, or is it all noise?
However, here:
you can rather clearly distinguish between peak and noise!
So we can (more or less) make the distinction when we look.
But how to define this sort of thing so that R can understand?
Up to you!
E-Mail: (Ted Harding) <Ted.Harding at manchester.ac.uk>
Fax-to-email: +44 (0)870 094 0861
Date: 17-Mar-10 Time: 14:02:13
Regular Expressions and Automata
Regular expressions are patterns used to describe the lexical parts of languages, such as numbers and identifiers. Strings matching these expressions can be detected by non-deterministic finite
automata (NFAs), which can be transformed to (more efficiently implementable) deterministic finite automata (DFAs) and indeed optimal forms of DFA.
A description of the material can be found in the following paper.
The paper begins with definitions of regular expressions, and how strings are matched to them; this also gives our first Haskell treatment. After describing the abstract data type of sets we
define non-deterministic finite automata, and their implementation in Haskell. We then show how to build an NFA corresponding to each regular expression, and how such a machine can be optimised,
first by transforming it into a deterministic machine, and then by minimising the state space of the DFA. We conclude with a discussion of regular definitions, and show how recognisers for strings
matching regular definitions can be built.
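The paper's implementation is in Haskell; as a language-neutral illustration of the NFA half of that pipeline, here is a minimal Python sketch (the machine and names are mine) that matches strings by tracking the set of simultaneously reachable states, the same state sets that the subset construction later promotes to DFA states:

```python
# Hand-built NFA for the regular expression (a|b)*abb (illustrative only).
# delta maps (state, symbol) -> set of successor states; nondeterminism
# shows up as more than one successor for (0, 'a').
delta = {
    (0, 'a'): {0, 1}, (0, 'b'): {0},
    (1, 'b'): {2},
    (2, 'b'): {3},
}
start, accepting = 0, {3}

def accepts(word):
    states = {start}                      # all states we might be in
    for sym in word:
        states = set().union(*(delta.get((q, sym), set()) for q in states))
    return bool(states & accepting)

print(accepts("aabb"))   # True
print(accepts("abab"))   # False
```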
The material gives an illustration of many of the features of Haskell, including polymorphism (the states of an NFA can be represented by objects of any type); modularisation (the system is split
across a number of modules); higher-order functions (used in finding limits of processes, for example); and type classes amongst other features.
The Haskell libraries implementing the material can be found in the file archive below.
Thanks very much to Oliver Salazar for finding a bug in the implementation.
Last modified 11 April 2003. | {"url":"http://www.cs.kent.ac.uk/people/staff/sjt/craft2e/regExp.html","timestamp":"2014-04-21T12:22:55Z","content_type":null,"content_length":"2200","record_id":"<urn:uuid:c1844595-19d9-414d-8af0-f50e1e402189>","cc-path":"CC-MAIN-2014-15/segments/1397609539776.45/warc/CC-MAIN-20140416005219-00096-ip-10-147-4-33.ec2.internal.warc.gz"} |
Affine stochastic differential equations with finite and infinite delay
Summary: Affine stochastic differential equations
with finite and infinite delay
Markus Riedle
Humboldt University of Berlin
September, 22nd, 2003
· Affine stochastic differential equations with finite and infinite delay
Introduction
Examples
Differences in the theory of finite and infinite delay
· Stationary solutions
· Equations with infinite delay
reducible to ordinary stochastic differential equations
· Approximations of solutions
Finite delay: deterministic
x'(t) = ∫_[-r,0] x(t + s) μ(ds),   t ≥ 0,
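As a purely illustrative companion to the "Approximations of solutions" bullet (my own discretisation, not from the talk): an Euler scheme for the single-point-delay special case x'(t) = a * x(t - r), i.e. the measure concentrated at s = -r, with constant initial segment x = 1 on [-r, 0].

```python
# Illustrative Euler scheme for x'(t) = a * x(t - r) (single-point delay),
# with x(t) = 1 on [-r, 0]; the parameters are arbitrary choices of mine.
a, r, h, T = -1.0, 1.0, 0.01, 5.0

lag = round(r / h)            # number of grid steps spanned by the delay
steps = round(T / h)
x = [1.0] * (lag + 1)         # initial segment on [-r, 0]

for _ in range(steps):
    x.append(x[-1] + h * a * x[-1 - lag])   # x(t+h) = x(t) + h*a*x(t-r)

print(round(x[-1], 3))        # oscillatory decay: |x(T)| is well below 1
```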
Source: Applebaum, David - Department of Probability and Statistics, University of Sheffield
Collections: Mathematics
Relational model

The relational model for database management is a database model based on first-order predicate logic, first formulated and proposed in 1969 by Edgar Codd.

Its core idea is to describe a database as a collection of predicates over a finite set of predicate variables, describing constraints on the possible values and combinations of values. The content of the database at any given time is a finite (logical) model of the database, i.e. a set of relations, one per predicate variable, such that all predicates are satisfied. A request for information from the database (a database query) is also a predicate.
The purpose of the relational model is to provide a declarative method for specifying data and queries: we directly state what information the database contains and what information we want from it,
and let the database management system software take care of describing data structures for storing the data and retrieval procedures for getting queries answered.
IBM implemented Codd's ideas with the DB2 database management system; it introduced the SQL data definition and query language. Other relational database management systems followed, most of them
using SQL as well. A table in an SQL database schema corresponds to a predicate variable; the contents of a table to a relation; key constraints, other constraints, and SQL queries correspond to
predicates. However, it must be noted that SQL databases, including DB2, deviate from the relational model in many details; Codd fiercely argued against deviations that compromise the original
Alternatives to the relational model are the hierarchical model and the network model. Some databases using these older architectures are still in use today in data centers with high data volume needs, or where existing systems are so complex and abstract it would be cost prohibitive to migrate to systems employing the relational model; also of note are newer object-oriented databases, even though many of them are DBMS-construction kits rather than proper DBMSs.
A recent development is the Object-Relation type-Object model, which is based on the assumption that any fact can be expressed in the form of one or more binary relationships. The model is used in Object Role Modeling, in Notation 3 (N3) and in Gellish English.
The relational model was the first formal database model. After it was defined, informal models were made to describe hierarchical databases (the hierarchical model) and network databases (the network model). Hierarchical and network databases existed before relational databases, but were only described as models after the relational model was defined, in order to establish a basis for comparison.
There have been several attempts to produce a true implementation of the relational database model as originally defined by Codd and explained by Date, Darwen and others, but none have been popular successes so far. Rel is one of the more recent attempts to do this.
The relational model was invented by E.F. (Ted) Codd as a general model of data, and subsequently maintained and developed by Chris Date and Hugh Darwen among others. In The Third Manifesto (first published in 1995) Date and Darwen show how the relational model can accommodate certain desired object-oriented features.
Codd himself, some years after publication of his 1970 model, proposed a three-valued logic (True, False, Missing or NULL) version of it in order to deal with missing information, and in his The Relational Model for Database Management Version 2 (1990) he went a step further with a four-valued logic (True, False, Missing but Applicable, Missing but Inapplicable) version. But these have never been implemented, presumably because of attending complexity. SQL's NULL construct was intended to be part of a three-valued logic system, but fell short of that due to logical errors in the standard and in its implementations. See the section "SQL standard", above.
The model
The fundamental assumption of the relational model is that all data is represented as mathematical n-ary relations, an n-ary relation being a subset of the Cartesian product of n domains. In the mathematical model, reasoning about such data is done in two-valued predicate logic, meaning there are two possible evaluations for each proposition: either true or false (and in particular no third value such as unknown, or not applicable, either of which are often associated with the concept of NULL). Some think two-valued logic is an important part of the relational model, where others think a system that uses a form of three-valued logic can still be considered relational.
Data are operated upon by means of a relational calculus or relational algebra, these being equivalent in expressive power.
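To make that concrete, here is an illustrative-only sketch (the names are mine, not the article's) of two relational-algebra operators over relations represented as Python lists of attribute-to-value mappings:

```python
R = [{"name": "Alice", "age": 30}, {"name": "Bob", "age": 25}]

def select(rel, pred):
    """Restriction (sigma): keep the tuples satisfying a predicate."""
    return [t for t in rel if pred(t)]

def project(rel, attrs):
    """Projection (pi): keep only the named attributes, dropping duplicates."""
    out = []
    for t in rel:
        p = {a: t[a] for a in attrs}
        if p not in out:          # a relation is a set: no duplicate tuples
            out.append(p)
    return out

print(select(R, lambda t: t["age"] > 26))   # [{'name': 'Alice', 'age': 30}]
print(project(R, ["name"]))                 # [{'name': 'Alice'}, {'name': 'Bob'}]
```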
The relational model of data permits the database designer to create a consistent, logical representation of information. Consistency is achieved by including declared constraints in the database
design, which is usually referred to as the logical schema. The theory includes a process of database normalization whereby a design with certain desirable properties can be selected from a set of
logically equivalent alternatives. The access plans and other implementation and operation details are handled by the DBMS engine, and are not reflected in the logical model. This contrasts with
common practice for SQL DBMSs in which performance tuning often requires changes to the logical model.
The basic relational building block is the domain or data type, usually abbreviated nowadays to type. A tuple is an unordered set of attribute values. An attribute is an ordered pair of attribute
name and type name. An attribute value is a specific valid value for the type of the attribute. This can be either a scalar value or a more complex type.
A relation consists of a heading and a body. A heading is a set of attributes. A body (of an n-ary relation) is a set of n-tuples. The heading of the relation is also the heading of each of its
A relation is defined as a set of n-tuples. In both mathematics and the relational database model, a set is an unordered collection of items, although some DBMSs impose an order to their data. In
mathematics, a tuple has an order, and allows for duplication. E.F. Codd originally defined tuples using this mathematical definition. Later, it was one of E.F. Codd's great insights that using
attribute names instead of an ordering would be so much more convenient (in general) in a computer language based on relations . This insight is still being used today. Though the concept has
changed, the name "tuple" has not. An immediate and important consequence of this distinguishing feature is that in the relational model the Cartesian product becomes commutative.
A table is an accepted visual representation of a relation; a tuple is similar to the concept of row, but note that in the database language SQL the columns and the rows of a table are ordered.
A relvar is a named variable of some specific relation type, to which at all times some relation of that type is assigned, though the relation may contain zero tuples.
The basic principle of the relational model is the Information Principle: all information is represented by data values in relations. In accordance with this Principle, a relational database is a set
of relvars and the result of every query is presented as a relation.
The consistency of a relational database is enforced, not by rules built into the applications that use it, but rather by constraints, declared as part of the logical schema and enforced by the DBMS
for all applications. In general, constraints are expressed using relational comparison operators, of which just one, "is subset of" (⊆), is theoretically sufficient. In practice, several useful
shorthands are expected to be available, of which the most important are candidate key (really, superkey) and foreign key constraints.
To fully appreciate the relational model of data it is essential to understand the intended
of a relation.
The body of a relation is sometimes called its extension. This is because it is to be interpreted as a representation of the extension of some predicate, this being the set of true propositions that
can be formed by replacing each free variable in that predicate by a name (a term that designates something).
There is a one-to-one correspondence between the free variables of the predicate and the attribute names of the relation heading. Each tuple of the relation body provides attribute values to
instantiate the predicate by substituting each of its free variables. The result is a proposition that is deemed, on account of the appearance of the tuple in the relation body, to be true.
Contrariwise, every tuple whose heading conforms to that of the relation but which does not appear in the body is deemed to be false. This assumption is known as the closed world assumption.
For a formal exposition of these ideas, see the section Set Theory Formulation, below.
Application to databases
as used in a typical relational database might be the set of integers, the set of character strings, the set of dates, or the two boolean values
, and so on. The corresponding
type names
for these types might be the strings "int", "char", "date", "boolean", etc. It is important to understand, though, that relational theory does not dictate what types are to be supported; indeed,
nowadays provisions are expected to be available for
types in addition to the
ones provided by the system.
Attribute is the term used in the theory for what is commonly referred to as a column. Similarly, table is commonly used in place of the theoretical term relation (though in SQL the term is by no
means synonymous with relation). A table data structure is specified as a list of column definitions, each of which specifies a unique column name and the type of the values that are permitted for
that column. An attribute value is the entry in a specific column and row, such as "John Doe" or "35".
A tuple is basically the same thing as a row, except in an SQL DBMS, where the column values in a row are ordered. (Tuples are not ordered; instead, each attribute value is identified solely by the
attribute name and never by its ordinal position within the tuple.) An attribute name might be "name" or "age".
A relation is a table structure definition (a set of column definitions) along with the data appearing in that structure. The structure definition is the heading and the data appearing in it is the
body, a set of rows. A database relvar (relation variable) is commonly known as a base table. The heading of its assigned value at any time is as specified in the table declaration and its body is
that most recently assigned to it by invoking some update operator (typically, INSERT, UPDATE, or DELETE). The heading and body of the table resulting from evaluation of some query are determined by
the definitions of the operators used in the expression of that query. (Note that in SQL the heading is not always a set of column definitions as described above, because it is possible for a column
to have no name and also for two or more columns to have the same name. Also, the body is not always a set of rows because in SQL it is possible for the same row to appear more than once in the same body.)
SQL and the relational model
SQL, initially pushed as the standard language for relational databases, deviates from the relational model in several places. The current SQL standard doesn't mention the relational model or use relational terms or concepts. However, it is possible to create a database conforming to the relational model using SQL if one does not use certain SQL features.
The following deviations from the relational model have been noted in SQL. Note that few database servers implement the entire SQL standard and in particular do not allow some of these deviations.
Whereas NULL is nearly ubiquitous, for example, allowing duplicate column names within a table or anonymous columns is uncommon.

Duplicate rows: The same row can appear more than once in an SQL table. The same tuple cannot appear more than once in a relation.

Anonymous columns: A column in an SQL table can be unnamed and thus unable to be referenced in expressions. The relational model requires every attribute to be named and referenceable.

Duplicate column names: Two or more columns of the same SQL table can have the same name and therefore cannot be referenced, on account of the obvious ambiguity. The relational model requires every attribute to be referenceable.

Column order significance: The order of columns in an SQL table is defined and significant, one consequence being that SQL's implementations of Cartesian product and union are both noncommutative. The relational model requires there to be no significance to any ordering of the attributes of a relation.

Views without CHECK OPTION: Updates to a view defined without CHECK OPTION can be accepted, but the resulting update to the database does not necessarily have the expressed effect on its target. For example, an invocation of INSERT can be accepted but the inserted rows might not all appear in the view, or an invocation of UPDATE can result in rows disappearing from the view. The relational model requires updates to a view to have the same effect as if the view were a base relvar.

Columnless tables unrecognized: SQL requires every table to have at least one column, but there are two relations of degree zero (of cardinality one and zero), and they are needed to represent extensions of predicates that contain no free variables.

NULL: This special mark can appear instead of a value wherever a value can appear in SQL, in particular in place of a column value in some row. The deviation from the relational model arises from the fact that the implementation of this ad hoc concept in SQL involves the use of three-valued logic, under which the comparison of NULL with itself does not yield true but instead yields the third truth value, unknown; similarly, the comparison of NULL with something other than itself does not yield false but instead yields unknown. It is because of this behaviour in comparisons that NULL is described as a mark rather than a value. The relational model depends on the law of excluded middle, under which anything that is not true is false and anything that is not false is true; it also requires every tuple in a relation body to have a value for every attribute of that relation. This particular deviation is disputed by some, if only because E. F. Codd himself eventually advocated the use of special marks and a 4-valued logic, but this was based on his observation that there are two distinct reasons why one might want to use a special mark in place of a value, which led opponents of the use of such logics to discover more distinct reasons; at least 19 have been noted, which would require a 21-valued logic. SQL itself uses NULL for several purposes other than to represent "value unknown". For example, the sum of the empty set is NULL, meaning zero; the average of the empty set is NULL, meaning undefined; and NULL appearing in the result of a LEFT JOIN can mean "no value because there is no matching row in the right-hand operand".

Concepts: SQL uses the concepts "table", "column", and "row" instead of "relvar", "attribute", and "tuple". These are not merely differences in terminology. For example, a "table" may contain duplicate rows, whereas the same tuple cannot appear more than once in a relation.
Example database
An idealized, very simple example of a description of some relvars and their attributes:
• Customer(Customer ID, Tax ID, Name, Address, City, State, Zip, Phone)
• Order(Order No, Customer ID, Invoice No, Date Placed, Date Promised, Terms, Status)
• Order Line(Order No, Order Line No, Product Code, Qty)
• Invoice(Invoice No, Customer ID, Order No, Date, Status)
• Invoice Line(Invoice No, Line No, Product Code, Qty Shipped)
• Product(Product Code, Product Description)
In this design we have six relvars: Customer, Order, Order Line, Invoice, Invoice Line and Product. The bold, underlined attributes are candidate keys. The non-bold, underlined attributes are foreign keys. Usually one candidate key is arbitrarily chosen to be called the primary key and used in preference over the other candidate keys, which are then called alternate keys.
A candidate key is a unique identifier enforcing that no tuple will be duplicated; this would make the relation into something else, namely a bag, by violating the basic definition of a set. Both
foreign keys and superkeys (which includes candidate keys) can be composite, that is, can be composed of several attributes. Below is a tabular depiction of a relation of our example Customer relvar;
a relation can be thought of as a value that can be attributed to a relvar.
Example: customer relation
Customer ID Tax ID Name Address [More fields....]
1234567890 555-5512222 Munmun 323 Broadway ...
2223344556 555-5523232 SS4 Vegeta 1200 Main Street ...
3334445563 555-5533323 Ekta 871 1st Street ...
4232342432 555-5325523 E. F. Codd 123 It Way ...
If we attempted to insert a new customer with the ID 1234567890, this would violate the design of the relvar since Customer ID is a primary key and we already have a customer 1234567890. The DBMS
must reject a transaction such as this that would render the database inconsistent by a violation of an integrity constraint.
Foreign keys are integrity constraints enforcing that the value of the attribute set is drawn from a candidate key in another relation. For example in the Order relation the attribute Customer ID is
a foreign key. A join is the operation that draws on information from several relations at once. By joining relvars from the example above we could query the database for all of the Customers,
Orders, and Invoices. If we only wanted the tuples for a specific customer, we would specify this using a restriction condition.
If we wanted to retrieve all of the Orders for Customer 1234567890, we could query the database to return every row in the Order table with Customer ID 1234567890 and join the Order table to the
Order Line table based on Order No.
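The restriction-and-join just described can be sketched with plain Python, using dicts as tuples; the table contents below are made up for illustration:

```python
# Relations as lists of dicts; attribute names, not positions, identify values.
orders = [
    {"Order No": 1, "Customer ID": "1234567890", "Status": "open"},
    {"Order No": 2, "Customer ID": "2223344556", "Status": "shipped"},
]
order_lines = [
    {"Order No": 1, "Order Line No": 1, "Product Code": "P-1", "Qty": 3},
    {"Order No": 1, "Order Line No": 2, "Product Code": "P-2", "Qty": 1},
    {"Order No": 2, "Order Line No": 1, "Product Code": "P-1", "Qty": 5},
]

# Restriction: keep only the Order tuples for customer 1234567890.
customer_orders = [t for t in orders if t["Customer ID"] == "1234567890"]

# Join on the shared attribute Order No.
result = [{**o, **l} for o in customer_orders for l in order_lines
          if o["Order No"] == l["Order No"]]

print(len(result))  # 2: the two order lines of order 1
```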
There is a flaw in our database design above. The Invoice relvar contains an Order No attribute. So, each tuple in the Invoice relvar will have one Order No, which implies that there is precisely one
Order for each Invoice. But in reality an invoice can be created against many orders, or indeed for no particular order. Additionally the Order relvar contains an Invoice No attribute, implying that
each Order has a corresponding Invoice. But again this is not always true in the real world. An order is sometimes paid through several invoices, and sometimes paid without an invoice. In other words
there can be many Invoices per Order and many Orders per Invoice. This is a many-to-many relationship between Order and Invoice (also called a non-specific relationship). To represent this
relationship in the database a new relvar should be introduced whose role is to specify the correspondence between Orders and Invoices:
OrderInvoice(Order No, Invoice No)
Now, the Order relvar has a one-to-many relationship to the OrderInvoice table, as does the Invoice relvar. If we want to retrieve every Invoice for a particular Order, we can query for all orders
where Order No in the Order relation equals the Order No in OrderInvoice, and where Invoice No in OrderInvoice equals the Invoice No in Invoice.
Set-theoretic formulation
Basic notions in the relational model are relation names and attribute names. We will represent these as strings such as "Person" and "name", and we will usually use the variables $r, s, t, \ldots$ and $a, b, c$ to range over them. Another basic notion is the set of atomic values that contains values such as numbers and strings.

Our first definition concerns the notion of tuple, which formalizes the notion of row or record in a table:

Tuple: A tuple is a partial function from attribute names to atomic values.

Header: A header is a finite set of attribute names.

Projection: The projection of a tuple $t$ on a finite set of attributes $A$ is $t[A] = \{ (a, v) : (a, v) \in t, a \in A \}$.

The next definition defines relation, which formalizes the contents of a table as it is defined in the relational model:

Relation: A relation is a tuple $(H, B)$ with $H$, the header, and $B$, the body, a set of tuples that all have the domain $H$.
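These definitions map directly onto code; a minimal sketch with Python dicts standing in for the partial functions (the sample data is illustrative):

```python
def project(t, A):
    """The projection t[A]: keep only the pairs whose attribute name is in A."""
    return {a: v for a, v in t.items() if a in A}

# A relation (H, B): a header H plus a body B of tuples, all with domain H.
H = {"name", "age"}
B = [{"name": "Alice", "age": 35},
     {"name": "Bob", "age": 40}]
assert all(set(t.keys()) == H for t in B)  # every tuple has domain H

print(project(B[0], {"name"}))  # {'name': 'Alice'}
```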
Such a relation closely corresponds to what is usually called the extension of a predicate in first-order logic, except that here we identify the places in the predicate with attribute names. Usually in the relational model a database schema is said to consist of a set of relation names, the headers that are associated with these names, and the constraints that should hold for every instance of the database schema.

Relation universe: A relation universe $U$ over a header $H$ is a non-empty set of relations with header $H$.

Relation schema: A relation schema $(H, C)$ consists of a header $H$ and a predicate $C(R)$ that is defined for all relations $R$ with header $H$. A relation satisfies a relation schema $(H, C)$ if it has header $H$ and satisfies $C$.
Key constraints and functional dependencies
One of the simplest and most important types of relation constraints is the key constraint. It tells us that in every instance of a certain relational schema the tuples can be identified by their values for certain attributes.

Superkey: A superkey is written as a finite set of attribute names. A superkey $K$ holds in a relation $(H, B)$ if:
* $K \subseteq H$ and
* there exist no two distinct tuples $t_1, t_2 \in B$ such that $t_1[K] = t_2[K]$.
A superkey holds in a relation universe $U$ if it holds in all relations in $U$.

Theorem: A superkey $K$ holds in a relation universe $U$ over $H$ if and only if $K \subseteq H$ and $K \rightarrow H$ holds in $U$.

Candidate key: A superkey $K$ holds as a candidate key for a relation universe $U$ if it holds as a superkey for $U$ and there is no proper subset of $K$ that also holds as a superkey for $U$.
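The superkey condition is easy to check mechanically for a single relation; a sketch (names and data are illustrative):

```python
def holds_as_superkey(K, H, B):
    """True if K is a subset of H and no two distinct tuples of B agree on K."""
    if not set(K) <= set(H):
        return False
    seen = set()
    for t in B:
        key = tuple(sorted((a, t[a]) for a in K))
        if key in seen:
            return False          # two tuples share the same K-projection
        seen.add(key)
    return True

H = {"id", "name"}
B = [{"id": 1, "name": "a"}, {"id": 2, "name": "a"}]
print(holds_as_superkey({"id"}, H, B))    # True
print(holds_as_superkey({"name"}, H, B))  # False: both tuples agree on name
```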
Functional dependency: A functional dependency (FD for short) is written as $X \rightarrow Y$ for $X, Y$ finite sets of attribute names. A functional dependency $X \rightarrow Y$ holds in a relation $(H, B)$ if:
* $X, Y \subseteq H$ and
* for all tuples $t_1, t_2 \in B$, $t_1[X] = t_2[X] \Rightarrow t_1[Y] = t_2[Y]$.
A functional dependency $X \rightarrow Y$ holds in a relation universe $U$ if it holds in all relations in $U$.

Trivial functional dependency: A functional dependency is trivial under a header $H$ if it holds in all relation universes over $H$.
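The same style of check works for a functional dependency in a single relation (an illustrative sketch):

```python
def fd_holds(X, Y, H, B):
    """True if X, Y are subsets of H and tuples agreeing on X also agree on Y."""
    if not (set(X) <= set(H) and set(Y) <= set(H)):
        return False
    seen = {}
    for t in B:
        x = tuple(sorted((a, t[a]) for a in X))
        y = tuple(sorted((a, t[a]) for a in Y))
        if seen.setdefault(x, y) != y:
            return False          # same X-value, different Y-value
    return True

H = {"zip", "city"}
B = [{"zip": "77004", "city": "Houston"},
     {"zip": "10001", "city": "New York"},
     {"zip": "77004", "city": "Houston"}]
print(fd_holds({"zip"}, {"city"}, H, B))  # True: zip determines city here
```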
Theorem: An FD $X \rightarrow Y$ is trivial under a header $H$ if and only if $Y \subseteq X \subseteq H$.

Closure (Armstrong's axioms): The closure of a set of FDs $S$ under a header $H$, written as $S^+$, is the smallest superset of $S$ such that:
* $Y \subseteq X \subseteq H~\Rightarrow~X \rightarrow Y \in S^+$ (reflexivity),
* $X \rightarrow Y \in S^+ \land Y \rightarrow Z \in S^+~\Rightarrow~X \rightarrow Z \in S^+$ (transitivity), and
* $X \rightarrow Y \in S^+ \land Z \subseteq H~\Rightarrow~(X \cup Z) \rightarrow (Y \cup Z) \in S^+$ (augmentation).

Theorem: Armstrong's axioms are sound and complete; given a header $H$ and a set $S$ of FDs that only contain subsets of $H$, $X \rightarrow Y \in S^+$ if and only if $X \rightarrow Y$ holds in all relation universes over $H$ in which all FDs in $S$ hold.
Completion: The completion of a finite set of attributes $X$ under a finite set of FDs $S$, written as $X^+$, is the smallest superset of $X$ such that:
* $Y \rightarrow Z \in S \land Y \subseteq X^+~\Rightarrow~Z \subseteq X^+$.
The completion of an attribute set can be used to compute whether a certain dependency is in the closure of a set of FDs.

Theorem: Given a set $S$ of FDs, $X \rightarrow Y \in S^+$ if and only if $Y \subseteq X^+$.
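The completion can be computed by a simple fixed-point iteration, and the theorem then gives a membership test for the closure; a sketch with FDs represented as pairs of frozensets (the FD set is a toy example):

```python
def completion(X, S):
    """X+: the smallest superset of X closed under the FDs in S."""
    closure = set(X)
    changed = True
    while changed:
        changed = False
        for lhs, rhs in S:
            if lhs <= closure and not rhs <= closure:
                closure |= rhs     # Y -> Z in S and Y in X+ force Z into X+
                changed = True
    return closure

def in_closure(X, Y, S):
    """X -> Y is in S+ iff Y is a subset of X+ (the theorem above)."""
    return set(Y) <= completion(X, S)

S = [(frozenset("A"), frozenset("B")),
     (frozenset("B"), frozenset("C"))]
print(sorted(completion("A", S)))  # ['A', 'B', 'C']
print(in_closure("A", "C", S))     # True, by transitivity
```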
Irreducible cover: An irreducible cover of a set $S$ of FDs is a set $T$ of FDs such that:
* $S^+ = T^+$,
* there exists no $U \subset T$ such that $S^+ = U^+$,
* $X \rightarrow Y \in T~\Rightarrow~Y$ is a singleton set, and
* $X \rightarrow Y \in T \land Z \subset X~\Rightarrow~Z \rightarrow Y \notin S^+$.
Algorithm to derive candidate keys from functional dependencies
INPUT: a set S of FDs that contain only subsets of a header H
OUTPUT: the set C of superkeys that hold as candidate keys in
        all relation universes over H in which all FDs in S hold
C := ∅;           // found candidate keys
Q := { H };       // superkeys that contain candidate keys
while Q <> ∅ do
  let K be some element from Q;
  Q := Q - { K };
  minimal := true;
  for each X->Y in S do
    K' := (K - Y) ∪ X;   // derive new superkey
    if K' ⊂ K then
      minimal := false;
      Q := Q ∪ { K' };
    end if
  end for
  if minimal and there is not a subset of K in C then
    remove all supersets of K from C;
    C := C ∪ { K };
  end if
end while
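A direct transcription of this pseudocode into Python, with attribute sets as frozensets (the FD set at the bottom is a toy example):

```python
def candidate_keys(S, H):
    """Candidate keys from FDs S over header H, per the pseudocode above.
    S is a list of (lhs, rhs) frozenset pairs."""
    C = set()                       # found candidate keys
    Q = {frozenset(H)}              # superkeys that contain candidate keys
    while Q:
        K = Q.pop()
        minimal = True
        for X, Y in S:
            K2 = (K - Y) | X        # derive new superkey
            if K2 < K:              # proper subset of K
                minimal = False
                Q.add(K2)
        if minimal and not any(c <= K for c in C):
            C = {c for c in C if not c > K}   # remove supersets of K
            C.add(K)
    return C

S = [(frozenset("A"), frozenset("B")),
     (frozenset("B"), frozenset("C"))]
print(candidate_keys(S, "ABC"))     # {frozenset({'A'})}
```

With A -> B and B -> C, attribute A alone determines the whole header, so {A} is the only candidate key.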
Further reading
• Date, C. J., Darwen, H. (2000). Foundation for Future Database Systems: The Third Manifesto, 2nd edition, Addison-Wesley Professional. ISBN 0-201-70928-7.
• Date, C. J. (2003). Introduction to Database Systems. 8th edition, Addison-Wesley. ISBN 0-321-19784-4.
Spacetime: Special Relativity
First published February 13, 2008; last updated March 28, 2009.
The development of the spacetime idea actually arose from the necessity of creating a self-consistent description of Maxwell's electrodynamics that worked for all observers no matter their relative
motion. This led to the recognition by Lorentz that a special set of coordinate transformations allowed the FORM of Maxwell's equations to remain invariant. After Einstein published the special theory of relativity in 1905, it was Minkowski who took this one step further and showed that the Lorentz Transformation could be re-interpreted as a rotation within a 4-dimensional coordinate space that he
dubbed "spacetime."
In an ordinary coordinate rotation, we can express the endpoints of a meter stick of length L in one coordinate system as P1 = (x1, y1) and P2 = (x2, y2). We can perform a rotation about the origin by an angle theta to get a second set of coordinates for these points, e.g., x' = L cos(theta), y' = L sin(theta). The result is that, in both coordinate systems X and X', the invariant length of the meter stick remains the same, L. However, the rotation changes the specific names we assign to the coordinates.
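This invariance is easy to verify numerically; a sketch using the standard 2-D rotation matrix (the endpoints and angle are arbitrary):

```python
import math

def rotate(x, y, theta):
    """Rotate the point (x, y) about the origin by angle theta (radians)."""
    return (x * math.cos(theta) - y * math.sin(theta),
            x * math.sin(theta) + y * math.cos(theta))

# Endpoints of a stick of length 1 in the unrotated frame.
p1, p2 = (0.0, 0.0), (0.6, 0.8)

theta = 0.7                      # arbitrary rotation angle
q1, q2 = rotate(*p1, theta), rotate(*p2, theta)

L_before = math.hypot(p2[0] - p1[0], p2[1] - p1[1])
L_after = math.hypot(q2[0] - q1[0], q2[1] - q1[1])
print(abs(L_before - L_after) < 1e-12)  # True: the length is invariant
```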
Special Relativity & Velocity
In special relativity, in the spacetime description, the rotation angle depends on the relative velocity between the two observers, Theta = V/C where C is the speed of light. When V is very small,
Theta ~ 0 and so there is no significant difference between the two descriptions provided by the two observers, for electrodynamic phenomena covered by Maxwell's Equations. When V ~ C, however, the
rotation angle is large, and in general the coordinate descriptions will be very different, leading to phenomena such as time dilation and length contraction.
Mathematically, we can write the Pythagorean Theorem in 2 dimensions as:

ds^2 = Pxx dxdx + Pxy dxdy + Pyx dydx + Pyy dydy

where dx = (x2 - x1) and dy = (y2 - y1). For Euclidean geometry (a flat plane) the coefficients are Pxx = 1, Pxy = Pyx = 0, and Pyy = 1.
In general, for any kind of geometry, Gauss defined a quantity g[ij], the Fundamental Metric Tensor, that defines the geometric properties of a space, so that the generalized Pythagorean Theorem becomes

ds^2 = g[ij] dx[i]dx[j]

where i = 1, 2, 3 and j = 1, 2, 3, and the coordinates for a point in 3-dimensional space are given by (x[1], x[2], x[3]). These coordinates can be the ordinary Cartesian X, Y, and Z, or any other orthogonal space coordinate system (e.g., spherical, cylindrical, etc.). For a Euclidean flat space we have g[11] = 1, g[22] = 1, g[33] = 1, and all other terms are zero.
For spacetime, the analogous quantity to the 3-dimensional Metric Tensor is the 4-dimensional Minkowski Tensor, n[uv], which has the values n[xx] = 1, n[yy] = 1, n[zz] = 1, n[tt] = -1, and all other terms zero. This gives us:

ds^2 = -(c dt)^2 + dx^2 + dy^2 + dz^2
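The interval can be evaluated directly from the Minkowski metric; a small sketch with units chosen so that c = 1 (the separations are illustrative):

```python
def interval_squared(dt, dx, dy, dz, c=1.0):
    """ds^2 = -(c dt)^2 + dx^2 + dy^2 + dz^2 (signature -,+,+,+)."""
    return -(c * dt) ** 2 + dx ** 2 + dy ** 2 + dz ** 2

# A light signal covers a distance c*dt, so its interval is exactly zero.
print(interval_squared(dt=1.0, dx=1.0, dy=0.0, dz=0.0))        # 0.0

# A slower-than-light (timelike) separation gives ds^2 < 0 here.
print(interval_squared(dt=2.0, dx=1.0, dy=0.0, dz=0.0) < 0)    # True
```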
The interpretation for the metric tensor for Minkowski spacetime, a cornerstone for Special Relativity, is that it is a book-keeping tool to help us perform calculations in special relativity. There
is nothing about it, and the nature of spacetime at this level, that demands a more detailed explanation. However, the advent of General Relativity by Albert Einstein in 1915 introduced a whole new
way to regard spacetime.
An artist's concept of twisted space-time around Earth. (Source: Spacetime Vortex - NASA.)
Odenwald, Sten, Ph.D. (Contributing Author); Bernard Haisch (Topic Editor). 2009. "Spacetime: Special Relativity." In: Encyclopedia of the Cosmos. Eds. Bernard Haisch and Joakim F. Lindblom (Redwood
City, CA: Digital Universe Foundation). [First published February 13, 2008].
Information Display (ID Archive), January/February 2013, Frontline Technology
Characterization of 3-D Gray-to-Gray Crosstalk with a Matrix of Lightness Differences
Stereoscopic televisions, which are mainly striped-retarder displays with passive glasses or time-sequential displays with active glasses, are emerging in the consumer market. 3-D crosstalk is an
important characteristic that defines the quality of these displays. A new crosstalk metric is proposed that uses an intuitive matrix representation with perceptually relevant lightness-difference
values instead of the single percentage value that is often used.
by Hans Van Parys, Kees Teunissen, and Aleksandar Ševo
STEREOSCOPIC TVs are becoming commonplace in the consumer market. Available models are usually striped-retarder displays with passive glasses or time-sequential displays with active glasses. The most
important characteristic in defining the quality of 3-D image perception, and therefore the quality of the user experience, is inter-ocular crosstalk. The use of a good characterization method for
crosstalk is crucial to enable direct comparison of the performance of 3-D TVs and technologies. This means a characterization method that is well-defined and easy to measure, calculate, and
interpret. Only with a good characterization method can the performance of different stereoscopic displays be compared and insight be gained as to the source and nature of crosstalk, which will, in
turn, lead to improvements in 3-D performance.
Different 3-D crosstalk formulas are proposed in the literature.^1-3 A commonly used crosstalk definition is discussed in more detail in the next paragraph. The shortcomings of this characterization
are shown, and a better characterization method is derived in the following paragraphs.
Commonly Used 3-D Crosstalk Characterization
A 3-D crosstalk characterization commonly used in the industry today is provided in the equation below. It is based on the combination of white and black test images for the left and right views (see
Fig. 1) when the luminance is measured through, for instance, the left lens of the 3-D glasses.^4 The equation below assumes "identical" behavior for left and right views:

Crosstalk (%) = 100 × (L[B,W] − L[B,B]) / (L[W,B] − L[B,B]),

in which L[M,N] is the measured luminance with M in the observed and N in the unobserved image. M and N can be white (W) or black (B).
Fig. 1: A combination of left and right images while measured through the left-eye lens of 3-D glasses.
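Given the four luminance measurements, this kind of single-number metric is a one-liner; a sketch (one common form, with the leakage into the black view normalized by the observed white level; the luminance values below are hypothetical):

```python
def crosstalk_percent(L_BW, L_BB, L_WB):
    """Leakage into the observed black view, normalized by the observed
    white level; both terms corrected for the black offset L_BB."""
    return 100.0 * (L_BW - L_BB) / (L_WB - L_BB)

# Hypothetical luminances (cd/m^2) measured through the left lens.
print(crosstalk_percent(L_BW=3.0, L_BB=0.5, L_WB=50.5))  # 5.0
```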
However, this formula has several severe drawbacks, especially for the characterization of time-sequential 3-D LCDs. First of all, the characterization of 3-D crosstalk with only one number does not
make sense for many 3-D display types: 3-D crosstalk can be heavily dependent on the applied gray levels, and, as such, also on the image content. This has already been noticed and concluded by, for
instance, Shestak et al.^3 and Barkowsky et al.^5
A second drawback is that white-to-black and black-to-white crosstalk are mixed into one formula. This makes the interpretation of the result less than obvious. Moreover, it becomes problematic when
L[W,B] is higher than L[W,W] – this is possible in time-sequential 3-D displays: with L[W,B] in the denominator, the crosstalk will decrease with higher L[W,B], although more crosstalk will be visible.
An improvement can be made here by replacing L[W,B] with L[W,W].
Finally, Xia et al.^6 found a poor correlation between perceived crosstalk and crosstalk as determined by several crosstalk equations. In particular, the white-to-black crosstalk (see, e.g., Fig. 2)
is much more visible than the black-to-white crosstalk, although the crosstalk percentage values could be identical. This clearly demonstrates the necessity for a perceptually relevant
characterization method.
Fig. 2: The image at the far right shows the effect of visible crosstalk.
A New Method for 3-D Crosstalk Characterization
A proposed measurement setup is shown in Fig. 3. A luminance meter is directed perpendicularly toward the center on the display surface. The 3-D glasses are mounted in front of the luminance meter
with the meter measuring through one of the lenses. The glasses should be mounted in a position similar to what their position would be if a person was wearing them to watch the 3-D display.
Fig. 3: This measurement setup includes a stereoscopic display, 3-D glasses, and a luminance meter directed perpendicularly toward the display and measuring through one of the lenses of the glasses.
During the measurement, a range of test patterns are rendered on the display and for each test pattern, the luminance is measured through the glasses. These test patterns are generated with different
combinations of two gray levels for the left- and the right-eye image as shown in Fig. 4 (left).
Conventionally, only the four combinations of full black (B) and full white (W) are measured. Especially for the time-sequential 3-D LCDs, this leads to an incomplete characterization. For these
displays, the crosstalk is strongly dependent on the particular combination of gray levels for both eyes, due to intrinsic properties of LCDs and response-time compensation technologies. Thorough
characterization of time-sequential 3-D displays may require as many as 17 gray values per view. This leads to a 17 × 17 measurement grid containing 289 cells. However, for convenience, we will
restrict the examples in this paper to a 9 × 9 measurement grid.
Interpretation of the Measurement Grid
The measurement grid in Fig. 4 (right) shows the luminance values as recorded by the luminance meter. In this example, the applied gray values (in the gamma-corrected domain) on an 8-bit scale are 0,
32, 64, 96, 128, 160, 192, 224, and 255. The value of 0 corresponds to full black and 255 to full white. In the grid, the rows correspond to the values of the unobserved right-eye image and the
columns to the values of the observed left-eye image. Obviously, the measurement grid could also have been measured for the right-eye image as the observed image and the left-eye image as the
unobserved image. For most stereoscopic systems, however, the obtained measurement grid would be the same.
Fig. 4: At left are test images for the left eye (observed image) and the right eye (unobserved image); at right is a measurement grid for the combination of left-eye (observed image) and right-eye
(unobserved image) gray levels.
In the upper left corner, we find the level when full black is applied to both images (left and right view), so this number could be called the “black offset,” and it can have multiple origins in the
display as well as in the measurement setup. In the lower right corner, we find the full-white level.
On the diagonal, we find the luminance values for the observed left-eye image when the left and right images have equal gray levels. So, on the diagonal we find per definition the crosstalk-free
luminance values for the applied gray levels, or in other words, the “target luminance levels.”
When the system is crosstalk free, i.e., when the observed image is not influenced by the unobserved image, the luminance values should be constant down every column because in theory the gray level
of the unobserved image (right eye in this example) should have no impact on the gray-level measured from the observed image (left eye in this example). That would represent a case of no crosstalk at
all. In this example, this is apparently not the case; in some cells, the luminance is higher than the luminance on the diagonal. In other cells, it is lower.
Conversion to a Lightness Value
Instead of calculating crosstalk numbers by subtracting and dividing luminance values, we will first perform a conversion to a “lightness value.” This step will make the resulting crosstalk figure
more perceptually uniform.
To do this, we first subtract the “black offset” and normalize on the full-white luminance level. Then we apply the lightness formula from the CIELab colorspace.^7 We propose to use a scale factor of
255 (8-bit equivalent) instead of 100, as this makes the formula more intuitive for engineers working with image processing: the results can be interpreted as 8-bit (gamma-corrected) gray values.
Besides, the scale factor of 255 better suits the rounding used in the last step of the procedure.
For any cell at coordinate r,c (where r indexes the unobserved image and c the observed image) on the measurement grid, the conversion from luminance to 8-bit normalized lightness values (0..255) is expressed by the following formula:

L[r,c] = (255/100) × (116 × ((Y[r,c] − Y[0,0]) / (Y[N,N] − Y[0,0]))^(1/3) − 16),

where Y[r,c] is the luminance in each cell as measured by the luminance meter, L[r,c] is the corresponding lightness, Y[0,0] is the "black offset," and Y[N,N] is the full-white level. As an alternative, a simplified formula with a pure power law could also be used:

L[r,c] = 255 × ((Y[r,c] − Y[0,0]) / (Y[N,N] − Y[0,0]))^(1/γ).

The choice of the exponent 1/γ is debatable. We propose to use 1/2.2 because, although 1/2.4 is a closer match to the overall CIELab function, 1/2.2 is a better match where it matters most, i.e., for low light levels.
The conversion of the luminance grid to a lightness grid is shown in Fig. 5 (left). This conversion can be interpreted as follows. The numbers show what lightness is perceived for any combination of
gray-level values for the observed and unobserved image. Again, on the diagonal we find the “target lightness” for the columns. The difference between a cell’s lightness and the target lightness of
its column can be qualified as the visible crosstalk. Therefore, to construct the final crosstalk grid, we subtract from the value in every cell the value on the diagonal in the same column and round
the result to the nearest integer.
As a consequence, the result will show zeros on the diagonal, and this fits with our previous observation that there is no visible crosstalk for combinations on the diagonal, per definition. Please
notice that with our 8-bit representation, rounding leaves enough precision for practical applications and makes interpretation faster.
An additional enrichment is a small modification on the sign: for all cells above the diagonal, we will invert the sign. This will give a consistent relationship between the sign of the crosstalk
number and the direction of crosstalk: a positive number will always denote a type of crosstalk that has its luminance level between the observed and unobserved luminance levels.
Finally, the crosstalk grid can be made even more intuitive by applying a bipolar color map. For example, in Fig. 5 (right), crosstalk with a positive number obtains a blue color, crosstalk with a
negative number obtains a red color, and crosstalk-free cells are black. The more crosstalk, the more saturated the color. The result is a gray-to-gray crosstalk grid in a perceptually uniform
lightness domain that can be interpreted quickly, without the necessity for a three-dimensional graph.
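Putting the steps together — power-law lightness conversion, subtraction of the diagonal target, rounding, and the sign flip above the diagonal — yields a short procedure; a sketch in which the 3 × 3 luminance grid is made up:

```python
def crosstalk_matrix(Y, gamma=2.2):
    """Crosstalk grid of rounded lightness differences from a luminance
    grid Y, where Y[r][c] pairs unobserved gray r with observed gray c."""
    n = len(Y)
    black, white = Y[0][0], Y[n - 1][n - 1]
    # Simplified power-law lightness on an 8-bit scale.
    L = [[255.0 * ((Y[r][c] - black) / (white - black)) ** (1.0 / gamma)
          for c in range(n)] for r in range(n)]
    grid = [[0] * n for _ in range(n)]
    for r in range(n):
        for c in range(n):
            d = round(L[r][c] - L[c][c])     # difference to diagonal target
            grid[r][c] = -d if r < c else d  # invert sign above the diagonal
    return grid

# Hypothetical measurements (cd/m^2); rows: unobserved, columns: observed.
Y = [[0.5, 10.0, 50.0],
     [1.5, 11.0, 50.0],
     [4.0, 13.0, 50.5]]
M = crosstalk_matrix(Y)
print([M[i][i] for i in range(3)])  # [0, 0, 0]: the diagonal is crosstalk-free
```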
Fig. 5: At left is a measurement grid converted to lightness. A final crosstalk representation as a grid of lightness differences appears at right.
The bipolar color map in Table 1 is inspired by a submission in the Matlab Central File Exchange^8 and describes the exact color mapping. In Table 1, a color value of one is the maximum value for
that color. Outside the range of [-64, 64] colors are clipped to the values for -64 or 64. The colors for crosstalk numbers in between those mentioned in the table are linearly interpolated.
Table 1: This bipolar color map allows a quick interpretation of the crosstalk matrix.
┃ Crosstalk number │ red │ green │ blue │ Color ┃
┃ –64 │ 1 │ 1 │ 0 │ (yellow) ┃
┃ –32 │ 1 │ 0 │ 0 │ (red) ┃
┃ 0 │ 0 │ 0 │ 0 │ (black) ┃
┃ 32 │ 0 │ 0 │ 1 │ (blue) ┃
┃ 64 │ 0 │ 1 │ 1 │ (cyan) ┃
Interpretation of the Crosstalk Grid
Contrary to crosstalk percentages, the lightness-difference-based crosstalk numbers have a more perceptually intuitive meaning. The conversion to lightness makes the result an approximation for
perceptual uniformity. The absolute value of the crosstalk number is a measure of the visibility of the crosstalk – it denotes how many “gamma-corrected gray-level values” (on an 8-bit scale) the
crosstalk is away from the target level.
In the lower left corner of the grid, we find the white-to-black crosstalk, generally the most dominant crosstalk factor in the display, and in many other 3-D characterization methods the only
crosstalk number that is focused upon.
The sign of a crosstalk number denotes the direction of crosstalk. Striped-retarder stereoscopic displays will generally only show positive crosstalk numbers. This type of crosstalk is due to leakage
of the light intended for one view into the other view. In time-sequential stereoscopic displays, however, crosstalk with a negative number is also present. The origin of this is “overcompensated”
crosstalk or so-called “overshoots”.
This method could be seen as a simplification of the method using the DICOM standard and the concept of just-noticeable differences (JNDs) as proposed by Teunissen et al.^9 This is shown in Fig. 6,
where a comparison is made between the two methods. The middle grid shows the ΔJNDs calculated from the same luminance measurements and adapted with the sign and color conventions as proposed here.
In the right grid, the ΔJNDs are scaled for equal numbers on white-to-black crosstalk. The similarity between lightness differences and (scaled) ΔJNDs is clearly visible. This observation supports
the correspondence between our measured crosstalk values (Fig. 6, left) and the (relative) severity of the perceived crosstalk.
Fig. 6: Above are comparisons of the lightness differences (left) with ΔJNDs (middle) and scaled ΔJNDs (right).
The relationship between the level of measured crosstalk and acceptability is not straightforward. The concept of JND, as introduced by Teunissen et al.,^9 does provide an answer as to whether crosstalk is just visible (JND = 1), perceptible (JND ≥ 3), or easily visible (JND ≥ 10). However, this is calculated for the most critical case, e.g., a white bar on a black background. For natural images, this critical pattern may not occur or, if it occurs, may go unnoticed. Also, motion in the image may draw attention away from crosstalk. Finally, some image impairments remain unnoticed until someone points them out. After that, those impairments may become unacceptable, while they initially were unnoticed.
A New Way of Looking at Crosstalk
We presented a new method of crosstalk characterization that is suited for all types of stereoscopic displays and is particularly useful for time-sequential stereoscopic displays. The result is a
matrix of gray-to-gray crosstalk numbers to be interpreted as corresponding gray-level offset or lightness-based difference values. This representation is a good approximation for perceptual
uniformity and clearly shows visibility differences in perceived crosstalk for different gray-level transitions. It allows a quick calculation and analysis of the complete crosstalk behavior of a
stereoscopic display device. Although there are no clear guidelines for crosstalk in terms of acceptability, system developers may strive for lightness difference values less than 5.
^1A. J. Woods, “Understanding Crosstalk in Stereoscopic Displays,” Keynote Presentation at the 3DSA (Three-Dimensional Systems and Applications) Conference, Tokyo, Japan, May 2010.
^2A. Abileah, “3-D Displays: Technologies and Testing Methods,” J. Soc. Info. Display 19/11, 749–763 (2011).
^3S. Shestak et al., "Measuring the Gray-to-Gray Crosstalk in a LCD Based Time-Sequential Stereoscopic Displays," SID Symposium Digest Tech. Papers 41, 132–135 (2010).
^4J.-C. Liou, K. Lee, F.-G. Tseng, J.-F. Huang, W.-T. Yen, and W.-L. Hsu, “Shutter Glasses Stereo LCD with a Dynamic Backlight,” Proc. SPIE, Stereoscopic Displays and Applications XX 7237, 72370X
^5M. Barkowsky et al., “Crosstalk Measurements of Shutter Glasses 3-D Displays,” SID Symposium Digest Tech. Papers 42, 812–815 (2011).
^6Z. Xia, X. Li, Y. Cui, L. Chen, and K. Teunissen, "Perceptual Correspondence of Gray-to-Gray Crosstalk Equations for Stereoscopic Displays," Proc. IDW/AD '12, 581–584 (2012).
^7Colorimetry, 3rd edition. CIE 15:2004. ISBN 978-3-901906-33-6.
^8G. Ridgway, “Bipolar Colormap,” submission in the Matlab Central File Exchange, 04 Dec 2009.
^9K. Teunissen et al., "Perceptually Relevant Characterization of Stereoscopic Displays," SID Symposium Digest Tech. Papers 42, 994–997 (2011).
Hans Van Parys is with TP Vision in Belgium. Kees Teunissen (kees.teunissen@philips.com) and Aleksandar Ševo are with TP Vision in the Netherlands. | {"url":"http://informationdisplay.org/IDArchive/2013/JanuaryFebruary/FrontlineTechnologyCharacterizationof3DGray.aspx","timestamp":"2014-04-17T10:01:06Z","content_type":null,"content_length":"101393","record_id":"<urn:uuid:248adce9-61cb-4533-b83c-213d16c6b4d8>","cc-path":"CC-MAIN-2014-15/segments/1398223206647.11/warc/CC-MAIN-20140423032006-00121-ip-10-147-4-33.ec2.internal.warc.gz"} |
Physics Forums - View Single Post - cumulative distribution function question
EDIT: ok i figured it out but i need help on this one.
Let X~N(0,1) . Compute each in terms of function ϕ.
And evaluate it numerically.
for the first one i get
But how do i evaluate it? | {"url":"http://www.physicsforums.com/showpost.php?p=2923587&postcount=2","timestamp":"2014-04-16T22:12:23Z","content_type":null,"content_length":"6986","record_id":"<urn:uuid:07f166d8-bb79-4faf-af19-c1b5a34da220>","cc-path":"CC-MAIN-2014-15/segments/1397609525991.2/warc/CC-MAIN-20140416005205-00008-ip-10-147-4-33.ec2.internal.warc.gz"} |
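The exact expressions in the thread were lost in extraction, but the "evaluate it numerically" part has a standard answer: the standard normal distribution function satisfies Φ(x) = (1 + erf(x/√2))/2, so any error-function routine will do. This sketch is mine, not from the thread.

```python
from math import erf, sqrt

def std_normal_cdf(x):
    """P(X <= x) for X ~ N(0, 1), via Phi(x) = (1 + erf(x / sqrt(2))) / 2."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))
```

For instance, std_normal_cdf(1.96) evaluates to about 0.975.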
MathGroup Archive: September 2007 [00049]
Re: check inside a loop?
• To: mathgroup at smc.vnet.net
• Subject: [mg80828] Re: [mg80736] check inside a loop?
• From: bsyehuda at gmail.com
• Date: Mon, 3 Sep 2007 06:15:39 -0400 (EDT)
• References: <200708310341.XAA07769@smc.twtelecom.net>
For the specific example you posted: first, using f[x_]= rather than f[x_]:= forces Mathematica to evaluate the right-hand side at the time f is defined, so it is replaced by Csc[2 pi x], which does not generate the error message; calling it returns ComplexInfinity without the message. In addition, use Pi with a capital P.
Next, For and Do loops DO NOT return values; you need to use some sort of breaking mechanism, or collect the values of i at which the error is generated: for example, Catch and Throw (break at the first error) or Reap and Sow (collect all errors):
Reap[Do[If[Check[f[i], "zzz"] == "zzz", Sow[i]], {i, 0, 2, 1/4}]] // Rest
Catch[Do[If[Check[f[i], "zzz"] == "zzz", Throw[i]], {i, 0, 2, 1/4}]]
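The same "collect every failure" versus "stop at the first failure" distinction exists in most languages. Here is a rough Python analogue (the function f and the sample points are stand-ins of my own, not the poster's actual problem):

```python
def failing_indices(f, points):
    """Collect every point where f raises, analogous to Reap/Sow."""
    bad = []
    for p in points:
        try:
            f(p)
        except ZeroDivisionError:
            bad.append(p)
    return bad

def first_failing_index(f, points):
    """Stop at the first point where f raises, analogous to Catch/Throw."""
    for p in points:
        try:
            f(p)
        except ZeroDivisionError:
            return p
    return None
```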
On 8/31/07, Jeremy Price < cfgauss at u.washington.edu> wrote:
> I have a large loop that is ndsolving/nintegrating a bunch of things, and
> a
> lot of the results give me various errors due to the equations not being
> very nice. I'd like to have a way to check what values of my paramaters
> are
> causing the errors, but I can't find a way to do that inside a loop.
> For example, if I have something like,
> f[x_] = 1/Sin[2 pi x]
> For[ i=1, i < 1000, i++,
> f[i]
> ]
> I'm going to get a lot of "Power::infy: Infinite expression 1/0
> encountered." errors. I'd like to see what values of i these occur at.
> I've tried something like
> For[ i=1, i<1000, i++,
> Check[ f[i] , i]
> ]
> But this just returns "Power::infy: Infinite expression 1/0 encountered."
> errors without the i vaule, which is different than I get by evaluating
> something like
> Check[ f[0],0]
> Which returns:
> Power::infy: Infinite expression 1/0 encountered.
> 0
> Is there any way I can get it to return the index that the error occured
> at
> for every error that occurs inside of the loop? | {"url":"http://forums.wolfram.com/mathgroup/archive/2007/Sep/msg00049.html","timestamp":"2014-04-17T15:53:40Z","content_type":null,"content_length":"35665","record_id":"<urn:uuid:c88942cf-19d9-4da6-af3b-c6a03fb99505>","cc-path":"CC-MAIN-2014-15/segments/1397609530136.5/warc/CC-MAIN-20140416005210-00482-ip-10-147-4-33.ec2.internal.warc.gz"} |
Shoreline, WA Algebra 2 Tutor
Find a Shoreline, WA Algebra 2 Tutor
...If you are still interested in tutoring from me, it would need to be over Skype. Please feel free to contact me for more information. As I would not need to cover transportation costs, my
hourly rate will be lower and more consistent.*** I have a been tutoring for most of my life, from kindergarten-aged children up to college and university-level classes.
12 Subjects: including algebra 2, reading, geometry, accounting
...While working with a middle school student on ancient Egyptian history, for example, I make sure to emphasize how to use chapter summaries to pinpoint what to study for an upcoming exam, the
importance of learning bolded key terms, and the usefulness of section comprehension questions for focusin...
35 Subjects: including algebra 2, English, reading, calculus
...I later went on to teach Algebra 1, and found that there were many of my students who needed extra time and review with their Prealgebra learning. It was then that I started offering lunch
time tutoring for any and all students requiring extra help. I have an A.A. in Mathematics, and almost completed a B.S. in Mathematics as well.
11 Subjects: including algebra 2, reading, writing, geometry
...I am currently a design engineer for The Boeing Company and have a Bachelor's and Master's degree in engineering. I have been tutoring friends, family, and classmates for as long as I can
remember so I figured I should probably try getting paid for it for a change. My strong points would be Mat...
11 Subjects: including algebra 2, geometry, algebra 1, SAT math
...I also coach students through the college application process and enjoy helping them write their personal statement or essay. I've taught both beginning and intermediate SAT classes and also
have much experience working with ESL students, both children and adults. I'm a graduate of the University of Washington with a degree in neurobiology and I plan to attend dental school this
28 Subjects: including algebra 2, chemistry, writing, ESL/ESOL
Do the coefficients of these irreducible polynomials always become periodic?
Fix $n\in\mathbb N$ and a starting polynomial (or seed) $p_n=a_0+a_1x+\dots+a_nx^n$ with $a_k\in\mathbb Z\ \forall k$ and $a_0a_n\ne0$.
Define $p_{n+1},p_{n+2},\dots$ recursively by $p_r = p_{r-1}+a_rx^r$ such that $a_r\in \mathbb N$ is the smallest such that $p_r$ is irreducible over $\mathbb Q$.
What can be said about the behaviour of the sequence $(a_{n+1},a_{n+2},...)$ ?
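The recursion is easy to experiment with. Below is a sketch using SymPy; Poly.is_irreducible tests irreducibility over ℤ, which for these primitive integer polynomials coincides with irreducibility over ℚ by Gauss's lemma. The seed $p_0=1$ reproduces the first terms for the $p_n\equiv 1$ case mentioned below.

```python
from sympy import Poly, symbols

x = symbols('x')

def extend(asc_coeffs, steps):
    """Append `steps` new coefficients to [a_0, ..., a_n] (ascending powers),
    each chosen as the smallest natural number that keeps the polynomial
    irreducible over Q. Returns the list of appended coefficients."""
    coeffs = list(asc_coeffs)
    appended = []
    for _ in range(steps):
        a = 1
        # Poly expects coefficients from the highest degree down
        while not Poly(list(reversed(coeffs + [a])), x).is_irreducible:
            a += 1
        coeffs.append(a)
        appended.append(a)
    return appended
```

For the seed 1 this yields a_1 = 1, a_2 = 1, a_3 = 2 (since $1+x+x^2+x^3=(1+x)(1+x^2)$ is reducible but $1+x+x^2+2x^3$ is not).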
I have asked this question on Math SE for $p_n\equiv 1$, where it did receive some but not much attention, and it was suggested to ask it here.
For the vast majority of seeds, it looks like there is an $r_0$ such that $a_r=1$ for all $r\ge r_0$. Denote the set (=class) of these seeds by $C_0$.
For some, there is an $r_0$ such that $a_r=2$ for all $r\ge r_0$. (i.e. a priori such a polynomial with biggest coefficient $1$ instead of $2$ would have a factor $(1+x)$). Denote these by $C_1$.
For others, the sequence becomes periodic with repeating $[1,...,1,2]$ where there are $k-1$ ones. Denote these classes by $C_k$. The onset can be quite long, e.g. the seed $1-x+x^2-x^3+x^4\in C_3$
has its (supposedly) periodic part $[1,1,2]$ only from $a_{261}$ on, after $a_{260}=3$.
I suppose that it should be not too hard to show the periodicity for a given polynomial, if belonging to one of the above classes, on a case-by-case basis. But:
Are there whole families of polynomials that can be shown to belong to a class $C_k, k\ge 2$?
And then there are seeds yielding much more complicated patterns. Below are for example the coefficients for $p_n=-1+x-x^2+x^3-x^4$, from $a_5$ on, using a color code 1=grey, 2=red, 3=yellow and 4=
blue ($a_{32}=4$ is the only one). The 3's provide 'landmarks', and it turns out that almost all of what comes between them is symmetric (the braces labeled $a,b,c,...$), only for $c$ one 1 at the
margin is not included, and after $d$ the pattern 211 occurs three more times.
${\mathbf{\color{grey}{\color{red}||\color{red}||\color{red}||\color{red}||\color{red}|\color{red}|||||||||\color{red}|||||||||\color{darkblue}| \underbrace{||\color{red}|||\color{red}|||\color{red}|
||\color{red}|||\color{red}|||\color{red}||\color{red}||\color{red}|||\color{red}|||\color{red}|||\color{red}|||\color{red}|||\color{red}||| }_{a}\color{yellow}| \underbrace{|||||\color{red}|\color
{red}|||\color{red}|||\color{red}|||\color{red}|||\color{red}|||\color{red}|||\color{red}|\color{red}||||||}_{b} \color{yellow}| |\underbrace{|\color{red}|||\color{red}|||\color{red}|||\color{red}|||
\color{red}|||\color{red}||}_{c} \color{yellow}|\color{red}|\color{red}|\color{red}|\color{red}|\color{red}| \color{yellow}| \underbrace{|\color{red}|||\color{red}|||\color{red}|||\color{red}|||\
color{red}|||\color{red}||}_{c}|\color{yellow}| \underbrace{|||||\color{red}|\color{red}|||\color{red}|||\color{red}|||\color{red}|||\color{red}|||\color{red}|||\color{red}|\color{red}||||||}_{b} \
color{yellow}| \underbrace{||\color{red}|||\color{red}|||\color{red}|||\color{red}|||\color{red}|||\color{red}||\color{red}||\color{red}|||\color{red}|||\color{red}|||\color{red}|||\color{red}|||\
color{red}||| }_{a} \color{yellow}| \underbrace{|||||\color{red}|\color{red}|||\color{red}|||\color{red}|||\color{red}|||\color{red}|||\color{red}|||\color{red}|\color{red}||||||}_{b} \color{yellow}
| \underbrace{||\color{red}|||\color{red}|||\color{red}|||\color{red}|||\color{red}|||\color{red}||\color{red}||\color{red}|||\color{red}|||\color{red}|||\color{red}|||\color{red}|||\color{red}||| }_
{a} \color{yellow}| \underbrace{|||||\color{red}|\color{red}|||\color{red}|||\color{red}|||\color{red}|||\color{red}|||\color{red}|||\color{red}|\color{red}||||||}_{b} \color{yellow} | \underbrace{||
\color{red}|||\color{red}|||\color{red}|||\color{red}|||\color{red}||| \color{red}||\color{red}||\color{red}|||\color{red}|||\color{red}|||\color{red}|||\color{red}|||\color{red}||| }_{a} \color
{yellow}| \underbrace{|||||\color{red}|\color{red}|||\color{red}|||\color{red}|||\color{red}|||\color{red}|||\color{red}|||\color{red}|\color{red}||||||}_{b} \color{yellow} | \underbrace{||\color
{red}|||\color{red}|||\color{red}|||\color{red}|||\color{red}||| \color{red}||\color{red}||\color{red}|||\color{red}|||\color{red}||| \color{red}||\color{red}||\color{red}|||\color{red}|||\color{red}
|||\color{red}|||\color{red}|||\color{red}||| }_{d} \color{red}|||\color{red}|||\color{red}|||\\ \color{yellow}| \underbrace{||||||||\color{red}|||||||||\color{red}|||||||||}_{e} \color{yellow}| \
{red}|||\color{red}|||\color{red}|||\color{red}|||\color{red}|||\color{red}||| }_{f} }}}$
followed by at least four more identical lines $3e3f$. Might that be the period? Or will the sequence be only "quasi-periodic" with a kind of self-similarity (a quite exciting possibility...)?
The pattern for the seed $1$ is even more complex, but has again the property that most chunks between two 3's are symmetric.
Some questions about these sequences:
□ Can there be an arbitrary number of consecutive 1's (other than the $C_p$ case)?
We can force an arbitrary number of 1's immediately or shortly after $a_n$, which is a trivial exercise of no interest. But later on in the sequence? The seed $(x^6-1)/(x-1)$ exhibits nine blocks of
$14$ 1's, but after $a_{327}$, where supposedly the periodic part starts, there are only blocks of $2$ and of $5$ 1's. To wit, here are $a_6,...,a_{391}$ displayed in a way that shows the multiple
symmetries in the pre-periodic part.
$$\color{grey}{1\color{red}2 \\ {\color{red}2\color{red}2\color{red}2\color{red}2\color{red}2\color{yellow}311111} \\ {\color{red}211111\color{red}211111\color{red}211111\color{red}211111\color{red}
211111\color{red}2} \\ {111\color{red}211111111111111\color{red}211111111111111\color{red}2111}\\ {\color{red}211111\color{red}211111\color{red}211111\color{red}211111\color{red}211111\color{red}
211111\color{red}211111\color{red}2}\\ {\color{red}211111111111111\color{red}211111111111111\color{red}211111111111111\color{red}211111111111111\color{red}211111111111111\color{red}2}\\ {\color{red}
211111\color{red}211111\color{red}211111\color{red}211111\color{red}211111\color{red}211111\color{red}211111\color{red}2}\\ {111\color{red}211111111111111\color{red}211111111111111\color{red}2111} \\
{\color{red}211111\color{red}211111\color{red}211111\color{red}211111\color{red}211111\color{red}2} \\ {11111\color{yellow}3\color{red}2\color{red}2\color{red}2\color{red}2\color{red}2} \\ \color
The very last line is the periodic (?) part, which is of length 64.
Likewise, the seed $(x^{12}-1)/(x-1)$ produces several blocks of $26$ 1’s, and $1+x+x^{14}$ yields three blocks of $30$ 1's.
□ If the period is displayed as a cycle (i.e. a regular polygon), is there always an axis of symmetry?
This might follow from the fact that $P(x)$ of degree $m$ is irreducible iff $x^mP(\frac1x)$ is irreducible. Looking at the symmetry in the lines of the above examples (and I've encountered
similar patterns over and over again), there seems to be some deep principle at work here. It reminds me somewhat of continued fractions, even though I don't think there is a connection. As said
above, the tendency is that almost all chunks of only 1's and 2's (i.e. between two entries >2) are symmetric.
□ Can there be arbitrarily large numbers, except near the beginning?
Close to the beginning, we can force one arbitrarily large value (see here). What about later? Occasionally, 5's can occur (e.g. $a_{51}$ for the seed $1-x^8+x^{12}+x^{24}$, shortly before it gets periodic [1,2]), but I have not yet encountered more than a 5, and in what seems to be a period never more than a 3.
EDIT: even 6’s can appear: the seed $1-x^4+x^8+x^{12}$ yields the pattern $1_{22},3,2_9,3,1_{10} ,(3,2_{10} ,3,1_{10} )_{11},6,1,2,[2,1,1]$.
Curious things can happen. The seed $1-x^6+x^{12}+x^{24}$ for instance yields the sequence $1_{34},3,2_{34},3,3,2,3,[1,2]$. (Here indices denote repeated entries.)
This really looks like it is worth more investigations.
co.combinatorics polynomials factorization divisors-multiples
1 Wow, I sure hope you generated the $\TeX$ for for the first display by machine! :-) – Suvrit Nov 21 '13 at 20:15
Most of it, but not all. Still easier than using another software for creating a graphic, I guess :-) – Wolfgang Nov 22 '13 at 10:25
Convert torr to cm H2O - Conversion of Measurement Units
›› Convert torr to centimeter water [4 °C]
›› More information from the unit converter
How many torr in 1 cm H2O? The answer is 0.735559231358.
We assume you are converting between torr and centimeter water [4 °C].
You can view more details on each measurement unit:
torr or cm H2O
The SI derived unit for pressure is the pascal.
1 pascal is equal to 0.00750061673821 torr, or 0.0101971621298 cm H2O.
Note that rounding errors may occur, so always check the results.
Use this page to learn how to convert between torrs and centimeters water.
Type in your own numbers in the form to convert the units!
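Using the pascal as the common base unit, the two factors quoted above chain together directly; the sketch below mirrors that computation (the function and constant names are my own).

```python
TORR_PER_PASCAL = 0.00750061673821
CMH2O_PER_PASCAL = 0.0101971621298

def cmh2o_to_torr(cmh2o):
    """Convert centimeters of water (4 degrees C) to torr via pascals."""
    return cmh2o / CMH2O_PER_PASCAL * TORR_PER_PASCAL

def torr_to_cmh2o(torr):
    """Convert torr to centimeters of water (4 degrees C) via pascals."""
    return torr / TORR_PER_PASCAL * CMH2O_PER_PASCAL
```

One centimeter of water then comes out as roughly 0.7356 torr, matching the answer above.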
›› Definition: Torr
The torr is a non-SI unit of pressure, named after Evangelista Torricelli. Its symbol is Torr.
›› Metric conversions and more
ConvertUnits.com provides an online conversion calculator for all types of measurement units. You can find metric conversion tables for SI units, as well as English units, currency, and other data.
Type in unit symbols, abbreviations, or full names for units of length, area, mass, pressure, and other types. Examples include mm, inch, 100 kg, US fluid ounce, 6'3", 10 stone 4, cubic cm, metres
squared, grams, moles, feet per second, and many more!
Which transfer-level mathematics course should you take after Intermediate Algebra (101)?
Students should consult the preparation-for-the-major articulation agreements located in the Transfer Center at Oceanside and the Counseling Office at SEC for specific math requirements for their
transfer major.
103: Statistics
Statistics is the study of data. In statistics, analytical and critical thinking are more important than the symbol manipulation that is common in an algebra course, and the problems are from the
real world rather than from the abstract world. Graphing calculator and/or computer technology is an important ally in analyzing real world data. An understanding of the basic concepts and practices
of statistics is important for students in any discipline where data play an important role — such as social, behavioral, physical and biological sciences. This course is an introduction to
statistics which will prepare students to appreciate and understand the quantitative aspects of other disciplines and of life itself. Transfers to UC and CSU; meets Area B4 for CSU GE and Area 2 for
IGETC. Many baccalaureate majors in Social Science and Science require a course in Statistics.
105: Concepts and Structures of Elementary Mathematics I
In Math 105, students study the underlying concepts of arithmetic and other basic structures of mathematics. The class is usually taught using collaborative learning with little or no lecture, where
students work together in small groups both during and outside of class as they help each other discover and learn the material, often by exploring with hands-on objects (manipulatives). Parents of school-age children are often able to immediately apply the knowledge gained from this course to help their children understand their own studies. The foundations of elementary mathematics are
studied at a sophisticated level in an informal class setting. It requires a lot of work, critical thinking and learning on one’s own, but because there is a lot of peer and teacher support, the
drop-out rate is low and the class is a rewarding, relevant experience for most. Students probably even remember and use what they learn. There may be a service learning requirement helping in an
elementary school for 8-10 hours during the semester. Liberal arts majors are usually required to take Math 105. For more info., call Julie Harland at 757-2121, ext. 6387. This course may be used for
CSU GE, but does not qualify for IGETC. It transfers as an elective for UC, but may not be used to meet the math requirement for UC admissions.
115: Calculus with Applications
This course is designed for students majoring in business, economics and the life and social sciences. The goals are to present the basic concepts and techniques of calculus to students and to
demonstrate how calculus can be used to build models and solve problems in various disciplines. We start with a review of the material necessary for the understanding and manipulation of algebraic
expressions. We begin calculus as soon as possible and present it in an intuitive way so that students with a good intermediate algebra background will be able to follow. Verbalization of
mathematical concepts, results and processes is encouraged. This course is a giant leap from Intermediate Algebra but students feel a sense of accomplishment and an improvement in both their critical
thinking and their mathematical skills by the end of the semester. Students planning to transfer into Business majors are usually required to take this course. This course may be used for CSU GE and
125: College Algebra
This course is a continuation of the study of algebra. Students review solving algebraic equations and inequalities, study graphs of linear, quadratic, higher-degree polynomial, rational, and inverse
functions, and continue their study of exponential and logarithmic functions. College Algebra is required for some majors at various four year colleges and universities. This course may be used for
CSU GE and meets Area 2 for IGETC. It transfers to UCcampuses as an elective; however, there is a credit limitation of 4 units for UC when combined with Math 135.
130: Trigonometry
This course is a prerequisite for (or may be taken as a corequisite with) Math 135 (Precalculus). It is also a prerequisite for Math 150 (Calculus and Analytic Geometry). It involves the study of the
trigonometric functions, their graphs, trigonometric identities and equations. Trigonometry has evolved from its use by surveyors, navigators, and engineers to present applications involving ocean
tides, the rise and fall of food supplies in certain ecologies, brain wave patterns, and many other phenomena. This course meets Area B4 for CSU GE but does not qualify for IGETC.
135: Precalculus
This course is a prerequisite for Math 150 (Calculus and Analytic Geometry) which is usually required for Science, Computer Science, Engineering and some Psychology and Business majors. Topics
covered include linear, quadratic, polynomial, rational, trigonometric, exponential and logarithmic functions, systems of equations and inequalities, and conics. Students learn how algebra can be
used as a modeling language for real-life problems. This course meets CSU GE and IGETC math requirements. | {"url":"https://www.miracosta.edu/instruction/mathematics/aftermath101.html","timestamp":"2014-04-19T09:26:25Z","content_type":null,"content_length":"27913","record_id":"<urn:uuid:29135f05-efa9-4cd8-8374-e159f7ced9a7>","cc-path":"CC-MAIN-2014-15/segments/1397609537097.26/warc/CC-MAIN-20140416005217-00252-ip-10-147-4-33.ec2.internal.warc.gz"} |
Unique solution of a set of nonlinear differential equations.
The solution to a given set of differential equations is NEVER unique. The solution to a set of differential equations satisfying given additional conditions may be unique.
In particular, the usual "existence and uniqueness" theorem is this: if (t_0, X_0) is a point in R^(n+1) (that is, t_0 is a real number and X_0 is a vector in R^n) and, in some neighborhood of (t_0, X_0), F(t, X), where t is a real number and X is a function with values in R^n, is continuous in both variables and satisfies a "Lipschitz condition" (see below) in X, then there exists a unique solution to dX/dt = F(t, X) in some neighborhood of (t_0, X_0). Since X(t) is in R^n, so is dX/dt, and so must be F(t, X). If you write each component of F(t, X) as f_i(t, X), you have the system of equations dx_i/dt = f_i(t, x_1, x_2, ..., x_n). Requiring that the solution satisfy X(t_0) = X_0 means that the values of all the x_i must be given at the same t_0; this is what is called an "initial value problem."
Second- or higher-order problems can be handled in the same way by defining x_(n+1) = dx_1/dt, x_(n+2) = dx_2/dt, etc., so that each second derivative becomes the first derivative of a new variable and you have 2n first-order equations.
If you are not given all the values at the same t_0 (i.e. not an "initial value problem") then the question is much harder.
For example, the differential equation [itex]d^2y/dt^2= -y[/itex] has the general solution y(t) = C_1 cos(t) + C_2 sin(t), and it is easy to find a unique solution for the initial value problem y(0) = A, y'(0) = B for any numbers A and B. On the other hand, there exist infinitely many solutions to that equation that satisfy y(0) = 0, y([itex]\pi[/itex]) = 0, while there is NO solution to that equation satisfying y(0) = 0, y([itex]\pi[/itex]) = 1.
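The boundary-value claims are easy to check numerically: every y(t) = C sin(t) satisfies y'' = -y (verified below with a central finite difference) together with y(0) = 0 and y(π) = 0, for any constant C. This sketch is mine, not part of the original post.

```python
from math import sin, pi

def second_derivative(y, t, h=1e-4):
    """Central finite-difference approximation of y''(t)."""
    return (y(t + h) - 2.0 * y(t) + y(t - h)) / h**2

for C in (1.0, -3.0, 42.0):
    y = lambda t, C=C: C * sin(t)
    # both boundary conditions of the two-point problem hold...
    assert abs(y(0.0)) < 1e-12 and abs(y(pi)) < 1e-12
    # ...and so does the ODE y'' = -y, at a few sample points
    for t in (0.3, 1.0, 2.5):
        assert abs(second_derivative(y, t) + y(t)) < 1e-4
```

Since C is arbitrary, the two boundary conditions alone cannot pin down a unique solution.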
Ask Ethan #11: Why does gravity get weaker with distance?
This artist’s impression shows the exotic double object that consists of a tiny, but very heavy neutron star that spins 25 times each second, orbited every two and a half hours by a white dwarf star.
The neutron star is a pulsar named PSR J0348+0432 that is giving off radio waves that can be picked up on Earth by radio telescopes. Although this unusual pair is very interesting in its own right,
it is also a unique laboratory for testing the limits of physical theories. This system is radiating gravitational radiation, ripples in spacetime. Although these waves (shown as the grid in this
picture) cannot be yet detected directly by astronomers on Earth they can be sensed indirectly by measuring the change in the orbit of the system as it loses energy. As the pulsar is so small the
relative sizes of the two objects are not drawn to scale.
“I wouldn’t know a spacetime continuum or a warp core breach if they got into bed with me.” -Patrick Stewart
It’s the end of the week once again, and so it’s time for another Ask Ethan segment! There have been scores of good questions to choose from that were submitted this month alone (and you can submit
yours here), but this week’s comes from our reader garbulky, who asks:
Why does gravity decrease the further away you are from the object? I’ve read that it does decrease with distance squared but not why it does this.
This question seems so simple, and yet the answer — to the limits of our understanding — is nothing short of profound.
Physics, and science in general, doesn’t normally address the question of why when it comes to natural phenomena; it normally sticks to how. You give me an overarching theory, such as a set of laws,
and physical objects with specific properties, such as a set of particles, and science tells you how those objects behave according to the predictions of that theory. Gravity is no exception.
For centuries, Newtonian gravitation was the most successful theory describing forces on the largest scales, saying that every object in the Universe that has a mass exerts an attractive force on
every other object in the Universe with a mass, and that the magnitude of that force is proportional to the mass of both objects and inversely proportional to the square of the distance between them. That's what
Newton’s law of universal gravitation says, and what it tells us is — in principle — how any system of particles will behave under the influence of gravity.
This is relatively straightforward to simulate with modern computers, and the match between theory and observation is spectacular.
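As a minimal version of such a simulation, here is a leapfrog (kick-drift-kick) integration of a test body around a fixed central mass, in units where GM = 1. Everything here is my own illustration, not the article's code; the point is that an inverse-square pull produces a closed orbit, so after one period the body returns essentially to its starting point.

```python
from math import pi, hypot

def leapfrog_orbit(x, y, vx, vy, dt, steps):
    """Integrate a = -r/|r|^3 (inverse-square pull toward the origin, GM = 1)."""
    def acc(x, y):
        r3 = hypot(x, y) ** 3
        return -x / r3, -y / r3
    ax, ay = acc(x, y)
    for _ in range(steps):
        vx += 0.5 * dt * ax; vy += 0.5 * dt * ay   # half kick
        x += dt * vx;        y += dt * vy           # drift
        ax, ay = acc(x, y)
        vx += 0.5 * dt * ax; vy += 0.5 * dt * ay   # half kick
    return x, y

# Circular orbit of radius 1 and speed 1; the period is 2*pi.
dt = 1e-3
x_end, y_end = leapfrog_orbit(1.0, 0.0, 0.0, 1.0, dt, round(2 * pi / dt))
```

After integrating one full period, (x_end, y_end) lies very close to the starting point (1, 0), as a closed orbit requires.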
Can we say something intelligent about why gravity works this way, though? Let’s think about our own neighborhood for a minute.
The Sun, the largest mass in our Solar System, is orbited in circles and ellipses by practically every known object, from planets to asteroids and (most) comets. There’s something special about
circles and ellipses that we don’t normally think about as special: they’re stable, closed orbits, meaning that these objects return to the same point they started at after what we call a year.
That alone, mathematically, tells you something incredibly interesting. You see, all forces are vectors, meaning they have magnitudes and directions. In the case of our Solar System, the direction of
the force on each object is (to an excellent approximation) towards the center of the Sun. Want something to go around the Sun in a closed orbit? Guess what.
You only have two options! One is to have a force that obeys an inverse-square law (like gravity does), and the other is to have a force that increases linearly with distance (like a spring does),
and there’s a theorem that proves those are the only two possibilities!
So it could have gotten stronger or weaker as the distance increased, but only in one particular way, or we wouldn’t have stable, closed orbits.
And since those are the types of orbits required to have stable, moderate temperatures necessary for life, we sure did luck out that these are the laws governing our Universe!
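The theorem alluded to above is Bertrand’s theorem, and its content can be illustrated numerically: integrate an eccentric orbit under an inverse-square force and the perihelion stays put, while even a slightly different power law makes the orbit precess and never close. A rough sketch in dimensionless units (the force constant, initial conditions, and step sizes below are arbitrary illustrative choices, not from the article):

```python
import math

def perihelion_angles(power, steps=100_000, dt=2e-3):
    """Leapfrog-integrate motion under a central force |F| = 1/r**power
    (dimensionless units) and return the polar angle of each perihelion."""
    x, y, vx, vy = 1.0, 0.0, 0.0, 0.9   # eccentric, bound for both powers used below

    def accel(px, py):
        r = math.hypot(px, py)
        s = -1.0 / r ** (power + 1.0)   # so each component is -p / r**(power+1)
        return s * px, s * py

    ax, ay = accel(x, y)
    angles = []
    r2 = r1 = a1 = None                 # radii (and angle) of the last two steps
    for _ in range(steps):
        vx += 0.5 * dt * ax
        vy += 0.5 * dt * ay
        x += dt * vx
        y += dt * vy
        ax, ay = accel(x, y)
        vx += 0.5 * dt * ax
        vy += 0.5 * dt * ay
        r, ang = math.hypot(x, y), math.atan2(y, x)
        if r2 is not None and r1 < r2 and r1 < r:
            angles.append(a1)           # local minimum of r = a perihelion passage
        r2, r1, a1 = r1, r, ang
    return angles

def mean_drift(angles):
    """Average perihelion shift per orbit, in radians (wrap-aware)."""
    d = [((b - a + math.pi) % (2 * math.pi)) - math.pi
         for a, b in zip(angles, angles[1:])]
    return sum(map(abs, d)) / len(d)

drift_inverse_square = mean_drift(perihelion_angles(2.0))
drift_steeper = mean_drift(perihelion_angles(2.5))
print(drift_inverse_square)   # tiny: the perihelion stays put, the orbit closes
print(drift_steeper)          # order 1 radian per orbit: the orbit precesses
```

The linear (spring-like) force law mentioned above also gives closed orbits; only the exponent matters here, which is what the theorem pins down.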
Now there are some forces where the force increases as your distance from the object increases: the strong force is a great example! And there’s even an example of a type of force that has no
direction and is constant everywhere: that’s what dark energy is, permeating all of space equally!
The thing is, though, saying that gravity is an inverse-distance-squared force is an incomplete story. In fact, the very fact that we have an orbit in our Solar System that very clearly isn’t closed
is how we wound up replacing Newtonian gravity with our modern theory of gravity: General Relativity!
Because the orbit of Mercury doesn’t quite return to the same point with each revolution, or close on itself, that was our first major hint that something was not quite complete about Newton’s theory of gravity. It took about half a century to solve this problem and replace Newtonian gravity with Einstein’s General Relativity, and one of the things we realized from that is that gravity isn’t truly following an inverse-square law; that’s only a great approximation, valid when the involved distances are large and masses (and energies) are small.
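The Mercury anomaly has a famous leading-order GR formula for the extra perihelion advance, Δφ = 6πGM/(c²a(1−e²)) per orbit. As a check, plugging in standard orbital parameters (the constants below are textbook values supplied here, not taken from the article) recovers the well-known ≈43 arcseconds per century:

```python
import math

GM_SUN = 1.327e20      # standard gravitational parameter of the Sun, m^3 s^-2
C = 2.998e8            # speed of light, m/s
A_MERCURY = 5.791e10   # Mercury's semi-major axis, m
E_MERCURY = 0.2056     # Mercury's orbital eccentricity
T_MERCURY = 87.97      # Mercury's orbital period, days

def gr_precession_arcsec_per_century(gm, a, e, period_days):
    """Leading-order GR perihelion advance, converted to arcsec per century."""
    per_orbit = 6 * math.pi * gm / (C**2 * a * (1 - e**2))   # radians per orbit
    orbits_per_century = 36525.0 / period_days
    return per_orbit * orbits_per_century * (180.0 / math.pi) * 3600.0

print(gr_precession_arcsec_per_century(GM_SUN, A_MERCURY, E_MERCURY, T_MERCURY))
# ≈ 43 arcseconds per century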
We’ve come up with a whole host of predictions that have been borne out by experiment and observation, including the gravitational bending of light, the different orbital mechanics of systems with
large masses and small distances, gravitational redshift, and many, many others.
But the greatest advance that’s related to this question of the strength of gravity is the knowledge that all orbiting bodies do not technically obey an inverse-squared force law.
All orbits under General Relativity come from forces that behave ever so slightly stronger than inverse-square laws, and this means that they will eventually decay over long enough timescales. The
innermost planets will have their orbits decay first, followed by progressively outer worlds, because the distance is larger. Eventually, in the absence of all other phenomena, everything would
spiral into the gravitational source at the center of all orbital systems.
For an object like Earth that orbited an imaginary, infinitely-long-lived Sun, it would take something like 10^150 years for the orbit to decay, but it means that a true stable, closed orbit is a
phantasm, something that doesn’t really exist in this Universe!
At least, in a Universe governed by General Relativity, which is the best law of nature we have to describe gravity. In the weak-field limit (an approximation) — when masses are small and distances
are large — this can be shown to reduce to Newtonian gravity, which is where the inverse-square-law-with-distance comes from!
But why do we have General Relativity as the theory that governs gravitation in this Universe, with the particular details that it has? I can’t say for certain; no one can.
Which means I have to resort to the standard cop-out answer: the force of gravity is this way because the laws of nature cause it to be. We can imagine a Universe where those laws are different, but
this is the one we’ve got, and we don’t fully understand why the laws are this way any deeper than that. We can observe phenomena, infer the laws, test them in new and spectacular ways, and maybe
someday we’ll understand why the laws are this way. In the meantime, this is the best answer we’ve got!
1. #1 Brian November 15, 2013
Another way to look at it is that it’s because our space is three-dimensional. For example, when you make a noise, its volume level falls off with distance according to the inverse-square law,
simply because the same amount of energy gets spread out over the surface area of a sphere. As the sphere expands as the noise travels away from its origin, the surface area increases by the
square of the radius. In exactly the same way, light dims according to the inverse-square law, as the same energy is spread out over an increasing surface area. So one would expect gravity to
behave in the same way.
As Ethan points out, it’s not quite that simple, but then General Relativity shows that space isn’t really Euclidian….
2. #2 CatMat November 15, 2013
I’ve been playing around with a causally connected view of spacetime lately, and there the role of energy density and pressure is to shape the transition from past to future light cones.
So, my personal answer to the stated question would be that when the distance between two gravitating bodies increase, the portion of history they share (the overlap of their past light cones) to
their respective total histories decreases as well, which allows for a larger decorrelation of their futures and thus a smaller coupling.
3. #3 CatMat November 16, 2013
(Why does this site keep giving me “Service unavailable” errors when trying to comment?)
I’ve been playing around with a causally connected view of spacetime lately, and there the role of energy density and pressure is to shape the transition from past light cones to future ones.
So, my personal answer to the stated question would be that when the distance between two gravitating bodies increase, the portion of history they share (the overlap of their past light cones) to
their respective total histories decreases as well, which allows for a larger decorrelation of their futures and thus a smaller observed coupling.
4. #4 Zephir
November 16, 2013
The inverse square law (which even general relativity is using in unchanged way) has its roots in LeSage’s shielding mechanism, which has been originally developed by Newton friend, Nicolas Fatio
de Duillier, who was genial Swiss mathematician, living in the shadow of Newton. He was much smarter than him from certain perspective – for example he deduced with it, that the gravity must be
indirectly proportional to the square of distance, i.e. not linearly, how Newton assumed. The same opinion was occupied with Robert Hooke, who was a competitor and public enemy of Newton. Hooke
based his opinion on century old experience of old Arabian astronomers, who were actually first, who deduced the inverse square law. So when it turned out, he was right and Newton wrong in this
matter, the otherwise confident Newton got so upset and ashamed with it, he withdrew himself from scientific life and publications for further sixteen years.
Between others Nicolas Fatio correctly deduced, that the shielding must come from flux of corpuscles, which are spreading faster than the speed of light and he called them ultramundanne. Now we
know, these corpuscles are actually the gravitational waves and they manifest itself with CMBR noise, which is all around us. The AWT just extends this explanation to composite particle bodies
(virtually all fundamental forces can be explained with the same mechanism) and for explanation of cold dark matter (Allais effect), caused with shielding of this shielding with nearby massive
objects. The gravitational shielding of longitudinal waves has its supersymmetric counterpart in shielding of photons at short distances, which is known as a Cassimir force
5. #5 Wow November 16, 2013
CatMat, there’s another site on scienceblogs where they talk about the politics of climate science and deniers are buying time on a spamnet to nuke the site to get at them.
6. #6 David November 16, 2013
The inverse square law also is the low-energy approximation to a scattering problem, in which two fermions interchange virtual massless bosons. Works out that way for electromagnetism (spin 1
boson = photon), works out for gravity (spin 2 boson = graviton).
7. #7 John Duffield November 16, 2013
If you read the original Einstein you see him saying something like a gravitational is present when a concentration of energy tied up as say a massive star conditions the surrounding space,
altering its qualities. The effect of this diminishes with distance. Note that if you had long massive rod, the effect would diminish in a 1/r fashion. But stars are spherical and space is
three-dimensional, so the effect diminishes in a 1/r² fashion.
As regards the rubber sheet pictures, imagine you’ve placed a whole lot of parallel-mirror light-clocks in an equatorial slice through and around the Earth. When you plot all the clock rates,
your plot resembles the rubber sheet because clocks go slower when they’re lower. The curvature you can “see” relates to Riemann curvature, which relates to curved spacetime. And you measured
those clock rates, so it’s a curvature in your metric. It isn’t some curvature of space. And it’s important to remember that a clock that’s lower doesn’t run slower because your plot is curved.
It doesn’t actually run slower “because spacetime is curved”. It runs slower because a concentration of energy conditions the surrounding space, altering it.
8. #8 Uncle B November 16, 2013
Because of the way in which we measure it! only American scientists arrogant enough to “conclude” instead of leaving an open – ended set of observations?
9. #9 Artor November 16, 2013
Uncle B, are you arrogant too, or do you go through life without ever coming to any conclusions?
10. #10 ao9 November 16, 2013
I was also under the idea that it would be similar to a point of light spreading its energy on a sphere, but correct me if I’m wrong, it is implied by the omission of this common explanation that
that’s not the reason for gravity behaving this way, since (correct me if I’m wrong again) the object is not irradiating energy. That’s also why I understand Ethan mentions there’s no answer to
“why”, and Bertrand’s theorem.
11. #11 Mark McAndrew November 16, 2013
Quick heads-up to the webmaster – if climate denier freaks are indeed DDOSing the sites, join Cloudflare.
12. #12 Wow November 17, 2013
ao9, the particle explanation of the square law is that each virtual particle is massless and can only have the energy that is limited by the uncertainty principle. Since to go further takes more
time, the amount of energy the virtual force carrier particle can have by existing drops. And since that internal energy is the force felt between the two points packaged up, the force between
those two particles is also less.
13. #13 John Duffield November 17, 2013
Wow: gravity doesn’t work because of particles whizzing around. Not does electromagnetism. Virtual particles aren’t real particles, hydroigen atoms don’t twinkle, and magnets don’t shine. We’ve
talked about thisa before, see Ethan’s weak force blog and look at teh comments from #25:
14. #14 Wow November 17, 2013
John, particles are the QM version of forces.
If you’ve proven them wrong, where the hell is your prize?
15. #15 Lloyd Hargrove
Monroe, LA
November 17, 2013
Great work once again! As the old saying goes, science asks and seeks to answer “how” while philosophy asks and seeks to answer “why”. Little wonder the highest educational titles merge into
“Doctor of Philosophy” regardless of specialty.
16. #16 David November 17, 2013
John #13: “Wow: gravity doesn’t work because of particles whizzing around. Not does electromagnetism.”
Feynman, Schwinger & Tomonaga won the Nobel for showing how electromagnetism works by “particles whizzing around” – or to be more precise, how the exchange of virtual photons creates the force we
see as electromagnetism at larger, non-quantum scales. As confirmation that their theory is correct, they (at least, F & S) computed the magnetic moment of the electron. Without a theory of
virtual particles, this value should be 2.0; the measured value is 2.0023… The theory agrees with experiment to over 10 decimal digits, and no other theory (without virtual particles) can explain
the value correctly.
John, if you think you have a better theory than that, you should show us a prediction that is at least as accurate.
17. #17 Sinisa Lazarek November 18, 2013
@7 John
“so it’s a curvature in your metric. It isn’t some curvature of space.”
Actually, it is curvature of spacetime since the layout is coded into the metric. Gravitational lensing and bending of light rays is a clear show that yes, spacetime itself curves. It isn’t some
abstract mathematical curve.
18. #18 Denier November 18, 2013
@John #13
Sometimes you have to give up. I’ve tried to communicate in a half dozen threads to Wow that virtual particles are different from particles. He does not understand and is not open to
understanding the concept. Wow believe particles and virtual particles are the same thing except the virtual ones wink out of existence before they have to be real.
19. #19 Denier November 18, 2013
@David #16
The term ‘virtual’ is important. Particles and Virtual Particles are not the same thing. A Virtual Particle is a standing wave that links two points. It does not whiz around anywhere. It doesn’t
move. It can’t travel. It exists between two points, then it doesn’t exist.
20. #20 Wow November 18, 2013
No, you’ve *communicated* that often enough.
What you’ve failed to do is to argue the case for it.
21. #21 CB November 18, 2013
David: That’s technically true, but there is a difference between the virtual particles of a Perturbation Theory, and the full Quantum Field Theory of electromagnetism.
In Perturbation Theory virtual particles are identical to real particles, you just can’t observe them. The picture (specifically the Feynman diagrams) are of little electrons and photons popping
in and out of existence and zipping between objects with lifetimes and energies under the uncertainty principle limits.
In QFT, a real particle is a specific type of disturbance in the fields with well-defined momentums, energies, masses, etc, while virtual particles are mathematically different and really have
little in common with real ones at all. They are still responsible for transmitting forces between real particles, but the picture of a bunch of photons or electrons, identical to the ‘real’ ones
but unobservable, swimming around, is not a good one. They are still disturbances in the electron/photon fields, but not ones that look like an electron or photon.
What happened is John heard about this and the point that “virtual particle” is a misleading piece of jargon, and ran with it off to la-la land. It’s like when he heard about the shear terms in
the stress-energy tensor of GR and started saying “well to mean that means spacetime is…” It’s semantic-implication-aka-pun-based physics with no real understanding.
22. #22 CB November 18, 2013
Sinisa Lazarek: Ha, thanks for finding that gem, and perfect example of what I was talking about.
In GR “curvature of spacetime” and “metric” are the same thing, as in mathematically equivalent, the most literally “the same” two things can be. The metric *is* the geometry of spacetime.
“Spacetime” as opposed to “space” being another example. Taking “space” *not* to mean the Newtonian concept of space but rather the 3 spacial dimensions of our 3+1 dimensional spacetime, then
talking about curvature of space is correct, so long as it is understood that it may also be a curvature in time, or both, to varying degrees depending on relative observer. But space does curve.
That’s what GR says.
23. #23 OKThen
A scientist state your biases and prejudices clearly
November 18, 2013
John Duffield
Seems that you are on a personal quest that involves
“The Power of Intention is.. Divine Guidance.. A greater Will that drives you forward on your Life Mission…” from A Cry for Help 2009 by John Duffield
As well you John Duffield are on a scientific quest
“I ponder what might have been (had Einstien lived longer).. Ilkie to think the end product would have dispelled so much mystery that we could not have failed to grasp how the universe works.. If
only Einstein had somehow passed on what he knew to Feynman.. then things would have turned out different. So different that by now NASA.. wouldn’t be reaching for Mars, they’d be reaching for
the stars.” fromRelativity plus the Theory of Everything by John Duffield
Now John Duffield, you are entitled to think and believe whatever you wish. I have some unusual thoughts myself. But if you are truelly scientific minded; then it is my opinion that you must be
clear about your biases when you speak about science.
Thus, John Duffield, if you were honest; your preface to your comments on this blog would be: I have published at least two books that most scientist think are very speculative. A couple of those
unaccepted speculative ideas are thus and thus.
In another place you could clarify, this particular idea is not speculative; it is generally accepted by most scientist.
In my opinion John Duffield, you have not properly introduced yourself; because you have failed to give a sense of your strongly held personal biases.
The point John Duffield is this: a science discussion is not about blindsiding and misleading in order to convince a naive audience of your arguments. No a science minded person must make effort
not to mislead; as Feynman says, “The idea is to try to give all the information to help others to judge the value of your contribution; not just the information that leads to judgment in one
particular direction or another.”
And “give all the information” is exactly John Duffield what you do not do. No, you speak as if you are an expert; and you are not. You speak as if your pronouncements should be accepted; well
they have not been.
John Duffield you have published your two or three books. If they speak truth; then trust in the test of time. But if they do not speak truth; well, then I understand why you come out to this
blog and try to confuse those who are trying to be part of the honest science discussion.
Be honest that your ideas are seriously speculative or be quiet.
24. #24 dean November 18, 2013
You speak as if your pronouncements should be accepted; well they have not been.
If that troubles you, okthen, then don’t ever visit his blogsite: the foolishness will make your head explode.
25. #25 N. November 19, 2013
Tell us something about MOND, Ethan.
Or is this something that history has alredy deured?
You may take it as “Ask Ethan”, next episode?
26. #26 Sinisa Lazarek November 19, 2013
MOND doesn’t work. Or it works for galaxy rotation and nothing else, so it doesn’t work. It has already been covered here on several occasions. Search through the blog and you’ll find topics
dealing with it.
27. #27 CB November 19, 2013
Here’s a good one for bottom-lining why MOND, while still a neat idea, isn’t about to negate the need for Dark Matter:
28. #28 N. November 19, 2013
Thanks. Far as I know, there still are those who follow the idea. How so?
29. #29 Wow November 19, 2013
People would often prefer to keep a wrong idea than work out a new one, basically.
30. #30 Sinisa Lazarek November 19, 2013
There are still those who think Earth stand’s still… in the 21st century!! I wouldn’t have believed it if I hadn’t seen some of them even post on this blog. How so?
.. same as having people still believe all kinds of other things… our nature I guess.
“How so” can’t be really answered.
31. #31 Michael Kelsey
SLAC National Accelerator Laboratory
November 19, 2013
@Sinisa #30, N #28, etc.: Why do good physicists continue to explore MOND? My personal take (and note that I’m *NOT* an astrophysicist, or even a theorist, just an experimental particle
physicist!) is that there are two complementary effects.
First, psychology and sociology. A fair community of theorists have developed around Milgrom’s model, and have expanded and extended it. It’s rather difficult to give up years of research (or to
repudiate your adviser’s research) if you feel like it’s still viable.
Second, good science. The hallmark of a proper _scientific_ theory is that it is falsifiable: it makes concrete predictions for hitherto unobserved phenomena, which can be tested by appropriately
designed observations. (Note that does not require _experiment_: observational astronomy is a perfectly valid scientific effort, despite what crackpots and YECs might claim). However, working
through the math to actually make those predictions can be exceedingly difficult! Extending MOND to see what effects it could have on cluster/supercluster scales, making it compatible (or at
least parametrized) with GR, and so on, are not trivial.
Finally, there’s the potential payoff. Suppose we do, at some point, discover that observations actually support MOND (to the exculsion of existing GR/DM/DE predictions). That would be a pretty
clear indication of new physics beyond what is already known, something that all _real_ physicists would be extremely excited to find.
32. #32 OKThen
thoughts from the village idiot
November 19, 2013
About MOND, first I don’t disagree with Wow or SL or CB or Ethan.
Rather, I defer to their opinions about MoND.
Yes, yes, it is my turn to be the village idiot. Contradicting even myself.
My personal bias is against MOND; it seems at best to be a useful provocateur theory.
“MoND was proposed by Mordehai Milgrom in 1983″ and Milgrom is still publishing papers on it 40 years later http://arxiv.org/pdf/1311.2579v1.pdf. Unfortunately his papers are unreadable to me.
Wikipedia’s MoND summary says this, “Within the uncertainties of the data, MoND has remained valid.. the uncertainties on the velocity of galaxies within clusters and larger systems have been too
large to conclude in favor of or against MoND. Indeed, conditions for conducting an experiment that could confirm or disprove MoND may only be possible outside the Solar system. . A couple of
near-to-Earth tests of MoND have been proposed though.. A test that might disprove MoND would be to discover any of the theorized Dark Matter particles, such as the WIMPs.. Lee Smolin and
co-workers have tried unsuccessfully to obtain a theoretical basis for MoND from quantum gravity. His conclusion is “MoND is a tantalizing mystery, but not one that can be resolved now.”.. On the
other hand, another 2011 study observing the gravity-induced redshift of galactic clusters found results that strongly supported general relativity, but were inconsistent with MoND (Wojtak,
Hansen, and Hjorth). A recent work has found mistakes in the work by Wojtak, Hansen, and Hjorth, and confirmed that MoND can fit the determined redshifts only slightly worse than does general
relativity with dark halos.”
So that’s that or what is that?
And what in the world am I as a layman suppose to understand that MOND is proposing?
Scientific American in 2002 gave Milgrom space to describe MOND to us laymen http://www.astro.umd.edu/~ssm/mond/sad0802Milg6p.pdf
Note the first and other pages are blank so scroll down.
Even at this most lucid, Milgrom leaves my eyeballs rolled up and stuck looking at the inside of my skull.
And furthermore maybe quantum gravity will explain things
——- with dark matter or without dark matter (my bias)
——- with MoND or without MoND(my bias)
But hey, I have no, in the detail, reasons for my biases. So until the experts prove otherwise; I defer to the dark matter experts (my bias).
Yes, I notice that I contradict myself in that I am biased against dark matter but I defer to dark matter experts. I’ll tell you why!
At least the dark matter hypothesis doesn’t leave my eyeballs stuck looking upward in their skull sockets. Rather, just thinking of dark matter hypothesis, for me anyway, brings my eyeballs back
to their normal position in their skull sockets.
So I say, let the few experts who fiddle with MoND keep fiddling.
But I warn that it is a very tiresome, on my eyeballs, to even try to follow what MoND experts are arguing. Milgrom’s Sci Amer 2002 article leaves me quite unsatisfied; and, as previously noted,
the MoND effect, which leaves my eyeballs stuck looking at the inside of their skull sockets, is quite tiring.
33. #33 Wow November 20, 2013
Falsifiability isn’t really that huge a thing, though it’s needed to weed out the patently anti-scientific “Last Thursday Creationism”-type “theories”.
The point about falsifiability is more that you have no reason to believe you have it RIGHT if your theory cannot be falsified, since there’s nothing consequent from it that would demonstrate it
as being valid over any equally compelling theory.
Falsifiability is about weeding out the bad, not accepting the good.
But the existence of special pleading arguments means that in a colloquial sense it carries far more weight than it does for what is more centrally important: predictability.
Falsifiability requires a prediction to test against.
The use of a theory requires prediction to be used for.
Prediction is what the theory is all about and is the prime difference between a theory in science, which gives predictability, and curve fitting, which doesn’t.
God, nowadays, gives ZERO predictability. When it used to be “able” to predict stuff (tornadoes, flooding, lightning, etc), it was found that there was no God there.
Then NOMA tried to put a box around science so that predictions in science could not replace predictions in religion. However, that didn’t make God-predictions work any better, so the arguments
for any god becomes non-prediction.
“Shit happens” is not a scientific theory.
Neither is “Climate has always changed”.
34. #34 Wow November 20, 2013
“Yes, I notice that I contradict myself in that I am biased against dark matter but I defer to dark matter experts.”
I’m “against” Dark Matter too, if it’s reified like “cold” or “dark”. As a placeholder showing the phenomena’s characteristics, I’m 100% fine with it.
Those working on the theories of Dark Matter are, I hope exclusively, working on a theory of what that Dark Matter *is*, and then testing that theory against the rest of science and predicting
the results.
Failure then fails that theory of what the stuff is, but the *characteristics* remain.
MOND doesn’t display the require characteristics, so in that sense, it is at the very least incomplete in its explanation.
But a non-particle demonstration with the same *characteristics* as Dark Matter would be just as fine as a “matter” demonstration.
35. #35 OKThen
more thoughts
November 20, 2013
Wow #34 #35
Well said.
For those also wondering, Stephen Jay Gould’s idea of non-overlapping magisteria (NOMA).
So Stephen proposes two areas of human understanding (or misunderstanding) that don’t overlap. Really, seems impossible to me. I mean even sense and nonsense seem to overlap everywhere. Oh well,
I’m not only the village idiot; I’m a religious chameleon.
Of course, I believe in Santa Claus and a great deal else depending upon where, what and who I am talking to and whether I wish to agree or disagree with them.
36. #36 Michael Kelsey
SLAC National Accelerator Laboratory
November 20, 2013
@OKThen #35, and Wow #33, #34: Well said, indeed. Good clear statements; I’m not sure I completely agree with your take on falsifiability. The ability of a proper scientific theory or hypothesis
to make _concrete_ predictions, and specifically predictions which (at least in principle) can actually be observed, measured, detected, whatever, is critical.
Falsifiability weeds out not just the anti-science YECs, but also the crackpot “just so stories”, and the ubertheoretical models which make “predictions” about differences from our current models
at scales which are utterly unmeasurable even in principle (string theory, I’m looking at you!).
In any event, I think this is merely a difference in “scale,” not a disagreement of principle.
37. #37 Wow November 21, 2013
Astronomy is one place where concrete predictions and definite observation is often unavailable.
It’s still a science.
Falsifiability is Popper’s take, but there was a lot less indirect measurement necessary in his day for science.
Since then the progress of science to inferential propositions means the usefulness of “falsifiability” in determining if something should be considered pseudo or science is not that high.
Still plays a part, but not a central tenet.
38. #38 Wow November 21, 2013
Quantum gravity is falsified in the realm of General Relativity.
QG still scientific, but “to an extent” wrong.
An example.
39. #39 Sinisa Lazarek November 21, 2013
My issue with MOND is that in trying to make something work, it makes a whole bunch of other things not work.
If we didn’t have relativity, something like MOND could be considered. But GR showed us that ND is already an approximation of GR. So tweaking ND while breaking a more encompassive theory is
futile. Imagine a world without GR and only MOND. There would be dozens of phenomena in the world which we would have absolutely no explanation for.
So this brings us to the real issue. Weather or not to accept DM as something that really exists, but we haven’t detected it yet, or modifying GR, not Newton. There is also an issue of weather or
not we really “know” how GR works on something as large as a galaxy. Have we taken into account all the different components that contribute, have we missed some things.
40. #40 Wow November 21, 2013
SL, that’s not an issue with MOND, it’s an issue with trying to make MOND the only factor at play.
41. #41 CB November 22, 2013
TeVeS tries to correct that deficiency by bringing in relativity, which helps explain a lot of the direct problems with using MOND in a universe that appears to obey GR, like gravitational
It makes sense, as MOND was developed in the context of the anomalous galactic rotation curves, where the prediction of Newtonian gravity shoulda-woulda been enough and relativity could be safely
ignored. But when you want to explain other phenomenon you have to go beyond it.
It still doesn’t work to explain all the universe without something like Dark Matter. So as a Dark Matter alternative MOND and its offshoots aren’t panning out, but they are still interesting.
I mean, it’s not like it’s impossible that there are new particles we haven’t discovered but out-mass known particles 4:1 *and* our understanding of gravity is not quite right. Even if it just
provides another way of looking at things, it could be useful.
42. #42 Tommy Long
USA, NC
November 22, 2013
This is awesome info.
43. #43 Jeff
March 11, 2014
[Tutorial] Implementing the Advanced Encryption Standard
03-24-2007 #1
Note: the latest version of this tutorial can be found at: Advanced Encryption Standard (AES) Tutorial
To kick-start the forum being back online again, I'm starting with a tutorial about implementing the de facto standard encryption algorithm, recommended by the National Institute of Standards and
Technology, which is called "Advanced Encryption Standard", or commonly referenced as AES. I'll start with a short introduction about cryptography, followed by an explanation of the algorithm,
the idea and its creators. Finally, I'll guide you through all the necessary steps to implement this algorithm in C, followed by its integration into a mode of operation for block ciphers to
encrypt plaintext with more than 128 bits.
Introduction to cryptography:
Cryptography is the science of information and communication security. (Serge Vaudenay, A classical introduction to cryptography)
Cryptography is the science of secret codes, enabling the confidentiality of communication through an insecure channel. It protects against unauthorized parties by preventing unauthorized alteration or use. Generally speaking, it uses a cryptographic system to transform a plaintext into a ciphertext, most of the time using a key.
One has to notice that there exist certain ciphers that don't need a key at all. A famous example is ROT13 (short for "rotate by 13"), a simple Caesar cipher that obscures text by replacing each letter with the letter thirteen places further down the alphabet. Since our alphabet has 26 characters, it is enough to apply the cipher to the ciphertext a second time to retrieve the original message.
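As a quick illustration, here is a minimal ROT13 sketch in C (the function name rot13 is mine; non-letters are passed through unchanged):

```c
#include <ctype.h>

/* ROT13: replace each letter by the letter 13 places further down
 * the alphabet, wrapping around; non-letters pass through unchanged */
char rot13(char c)
{
    if (isupper((unsigned char)c))
        return (char)('A' + (c - 'A' + 13) % 26);
    if (islower((unsigned char)c))
        return (char)('a' + (c - 'a' + 13) % 26);
    return c;
}
```

Since 13 + 13 = 26, applying the function twice restores the original character, which is exactly why this cipher needs no key.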
Let me just mention briefly that there are secure public-key ciphers, like the famous Rivest-Shamir-Adleman (commonly called RSA), which uses a public key to encrypt a message and a secret key to decrypt it.
Cryptography is a very important domain in computer science with many applications. The most famous example is certainly the Enigma machine, the legendary cipher machine used by the German Third Reich to encrypt its messages, whose security breach ultimately led to the defeat of its submarine force.
Before continuing, please read carefully the legal issues involving cryptography as in several countries even the domestic use of cryptography is prohibited:
Cryptography has long been of interest to intelligence gathering agencies and law enforcement agencies. Because of its facilitation of privacy, and the diminution of privacy attendant on its
prohibition, cryptography is also of considerable interest to civil rights supporters. Accordingly, there has been a history of controversial legal issues surrounding cryptography, especially
since the advent of inexpensive computers has made possible widespread access to high quality cryptography.
In some countries, even the domestic use of cryptography is, or has been, restricted. Until 1999, France significantly restricted the use of cryptography domestically. In China, a license is
still required to use cryptography. Many countries have tight restrictions on the use of cryptography. Among the more restrictive are laws in Belarus, Kazakhstan, Mongolia, Pakistan, Russia,
Singapore, Tunisia, Venezuela, and Vietnam.[31]
In the United States, cryptography is legal for domestic use, but there has been much conflict over legal issues related to cryptography. One particularly important issue has been the export of
cryptography and cryptographic software and hardware. Because of the importance of cryptanalysis in World War II and an expectation that cryptography would continue to be important for national
security, many western governments have, at some point, strictly regulated export of cryptography. After World War II, it was illegal in the US to sell or distribute encryption technology
overseas; in fact, encryption was classified as a munition, like tanks and nuclear weapons.[32] Until the advent of the personal computer and the Internet, this was not especially problematic.
Good cryptography is indistinguishable from bad cryptography for nearly all users, and in any case, most of the cryptographic techniques generally available were slow and error prone whether good
or bad. However, as the Internet grew and computers became more widely available, high quality encryption techniques became well-known around the globe. As a result, export controls came to be
seen to be an impediment to commerce and to research.
Introduction to the Advanced Encryption Standard:
The Advanced Encryption Standard, in the following referenced as AES, is the winner of the contest held in 1997 by the US Government, after the Data Encryption Standard was found too weak because of its small key size and the technological advancements in processor power. Fifteen candidates were accepted in 1998 and, based on public comments, the pool was reduced to five finalists in 1999. In October 2000, one of these five algorithms was selected as the forthcoming standard: a slightly modified version of the Rijndael.
The Rijndael, whose name is based on the names of its two Belgian inventors, Joan Daemen and Vincent Rijmen, is a block cipher, which means that it works on fixed-length groups of bits, called blocks. It takes an input block of a certain size, usually 128 bits, and produces a corresponding output block of the same size. The transformation requires a second input, the secret key. It is important to know that the secret key can be of any size (depending on the cipher used) and that AES uses three different key sizes: 128, 192 and 256 bits.
To encrypt messages longer than the block size, a mode of operation is chosen, which I will explain at the very end of this tutorial, after the implementation of AES.
While AES supports only block sizes of 128 bits and key sizes of 128, 192 and 256 bits, the original Rijndael supports key and block sizes in any multiple of 32, with a minimum of 128 and a
maximum of 256 bits.
Further reading: Unlike DES, which is based on a Feistel network, AES is a substitution-permutation network: a series of mathematical operations that uses substitutions (also called S-Boxes) and permutations (P-Boxes), whose careful definition implies that each output bit depends on every input bit.
Description of the Advanced Encryption Standard algorithm
AES is an iterated block cipher with a fixed block size of 128 bits and a variable key length. The different transformations operate on an intermediate result, called the state. The state is a rectangular array of bytes and, since the block size is 128 bits (16 bytes), the array is of size 4x4. (In the Rijndael version with variable block size, the number of rows is fixed to four and the number of columns varies. The number of columns is the block size divided by 32 and is denoted Nb.) The cipher key is similarly pictured as a rectangular array with four rows. The number of columns of the cipher key, denoted Nk, is equal to the key length divided by 32.
A state:
| a0,0 | a0,1 | a0,2 | a0,3 |
| a1,0 | a1,1 | a1,2 | a1,3 |
| a2,0 | a2,1 | a2,2 | a2,3 |
| a3,0 | a3,1 | a3,2 | a3,3 |
A key:
| k0,0 | k0,1 | k0,2 | k0,3 |
| k1,0 | k1,1 | k1,2 | k1,3 |
| k2,0 | k2,1 | k2,2 | k2,3 |
| k3,0 | k3,1 | k3,2 | k3,3 |
It is very important to know that the cipher input bytes are mapped onto the state bytes in the order a0,0, a1,0, a2,0, a3,0, a0,1, a1,1, a2,1, a3,1, a0,2 ..., and the bytes of the cipher key are mapped onto the array in the order k0,0, k1,0, k2,0, k3,0, k0,1, k1,1, k2,1, k3,1, k0,2 ... At the end of the cipher operation, the cipher output is extracted from the state by taking the state bytes in the same order.
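This column-major mapping can be sketched as follows (blockToState is a helper name of my own choosing, not part of the tutorial's final code):

```c
/* map a 16-byte cipher input onto the 4x4 state, column by column:
 * byte i ends up in row i % 4, column i / 4 */
void blockToState(const unsigned char input[16], unsigned char state[4][4])
{
    int i;

    for (i = 0; i < 16; i++)
        state[i % 4][i / 4] = input[i];
}
```

The cipher output is extracted at the end by walking the state in the same column-by-column order.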
AES uses a variable number of rounds, which are fixed:
A key of size 128 has 10 rounds.
A key of size 192 has 12 rounds.
A key of size 256 has 14 rounds.
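This fixed mapping is trivial to express in code; a small helper along these lines (the name numberOfRounds is mine) will come in handy later:

```c
/* number of AES rounds for a given cipher key size in bytes */
int numberOfRounds(int keySizeInBytes)
{
    switch (keySizeInBytes) {
    case 16: return 10;   /* 128-bit key */
    case 24: return 12;   /* 192-bit key */
    case 32: return 14;   /* 256-bit key */
    default: return -1;   /* not a valid AES key size */
    }
}
```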
During each round, the following operations are applied on the state:
1. SubBytes: every byte in the state is replaced by another one, using the Rijndael S-Box
2. ShiftRow: every row in the 4x4 array is shifted a certain amount to the left
3. MixColumn: a linear transformation on the columns of the state
4. AddRoundKey: each byte of the state is combined with a round key, which is a different key for each round and derived from the Rijndael key schedule
In the final round, the MixColumn operation is omitted.
The algorithm looks like the following (pseudo-C):
AES(state, CipherKey)
{
    KeyExpansion(CipherKey, ExpandedKey);
    AddRoundKey(state, ExpandedKey);
    for (i = 1; i < Nr; i++)
        Round(state, ExpandedKey + Nb*i);
    FinalRound(state, ExpandedKey + Nb*Nr);
}
□ The cipher key is expanded into a larger key, which is later used for the actual operations
□ The roundKey is added to the state before starting with the loop
□ The FinalRound() is the same as Round(), apart from missing the MixColumns() operation.
□ During each round, another part of the ExpandedKey is used for the operations
□ The ExpandedKey shall ALWAYS be derived from the Cipher Key and never be specified directly.
AES operations: SubBytes, ShiftRow, MixColumn and AddRoundKey:
The AddRoundKey operation:
In this operation, a Round Key is applied to the state by a simple bitwise XOR. The Round Key is derived from the Cipher Key by means of the key schedule. The Round Key length is equal to the block length (= 16 bytes).
----------------------------- ----------------------------- -----------------------------
| a0,0 | a0,1 | a0,2 | a0,3 | | k0,0 | k0,1 | k0,2 | k0,3 | | b0,0 | b0,1 | b0,2 | b0,3 |
| a1,0 | a1,1 | a1,2 | a1,3 | XOR | k1,0 | k1,1 | k1,2 | k1,3 | = | b1,0 | b1,1 | b1,2 | b1,3 |
| a2,0 | a2,1 | a2,2 | a2,3 | | k2,0 | k2,1 | k2,2 | k2,3 | | b2,0 | b2,1 | b2,2 | b2,3 |
| a3,0 | a3,1 | a3,2 | a3,3 | | k3,0 | k3,1 | k3,2 | k3,3 | | b3,0 | b3,1 | b3,2 | b3,3 |
----------------------------- ----------------------------- -----------------------------
where: b(i,j) = a(i,j) XOR k(i,j)
A graphical representation of this operation can be found here.
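A minimal sketch of this operation on a flat 16-byte state could look like this (treating the state as a plain array, since the XOR is position-wise anyway):

```c
/* XOR a 16-byte round key into the 16-byte state */
void addRoundKey(unsigned char *state, const unsigned char *roundKey)
{
    int i;

    for (i = 0; i < 16; i++)
        state[i] ^= roundKey[i];
}
```

Because XOR is its own inverse, applying the same round key a second time restores the original state; the decryption side therefore uses the very same function.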
The ShiftRow operation:
In this operation, each row of the state is cyclically shifted to the left, depending on the row index.
The 1st row is shifted 0 positions to the left.
The 2nd row is shifted 1 position to the left.
The 3rd row is shifted 2 positions to the left.
The 4th row is shifted 3 positions to the left.
----------------------------- -----------------------------
| a0,0 | a0,1 | a0,2 | a0,3 | | a0,0 | a0,1 | a0,2 | a0,3 |
| a1,0 | a1,1 | a1,2 | a1,3 | -> | a1,1 | a1,2 | a1,3 | a1,0 |
| a2,0 | a2,1 | a2,2 | a2,3 | | a2,2 | a2,3 | a2,0 | a2,1 |
| a3,0 | a3,1 | a3,2 | a3,3 | | a3,3 | a3,0 | a3,1 | a3,2 |
----------------------------- -----------------------------
A graphical representation of this operation can be found here.
Please note that the inverse of ShiftRow is the same cyclic shift, but this time to the right. It will be needed later for decryption.
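A straightforward sketch shifts a single 4-byte row one position at a time; the row index is then passed in as the shift amount (shiftRowLeft is my own helper name):

```c
/* cyclically shift a single 4-byte row of the state "nbr" positions
 * to the left; the row index is used as the shift amount */
void shiftRowLeft(unsigned char *row, unsigned char nbr)
{
    unsigned char tmp;
    int i, j;

    for (i = 0; i < nbr; i++) {
        tmp = row[0];
        for (j = 0; j < 3; j++)
            row[j] = row[j + 1];
        row[3] = tmp;
    }
}
```

The inverse operation can either shift to the right, or simply call this same function with 4 - nbr.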
The SubBytes operation:
The SubBytes operation is a non-linear byte substitution, operating on each byte of the state independently. The substitution table (S-Box) is invertible and is constructed by the composition of
two transformations:
□ First, taking the multiplicative inverse in Rijndael's finite field.
□ Then, applying an affine transformation which is documented in the Rijndael documentation.
Since the S-Box is independent of any input, pre-calculated forms are used, if enough memory (256 bytes for one S-Box) is available.
Each byte of the state is then substituted by the value in the S-Box whose index corresponds to the value in the state:
a(i,j) = SBox[a(i,j)]
Please note that the inverse of SubBytes is the same operation, using the inverse S-Box, which is also pre-calculated.
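The operation itself is nothing more than a loop of table lookups. In the sketch below the table is passed as a parameter so the snippet can be exercised with a toy substitution table; the tutorial's final code will call getSBoxValue instead:

```c
/* substitute every byte of the 16-byte state through a 256-entry
 * lookup table; pass the forward S-Box to encrypt and the inverse
 * S-Box to decrypt */
void subBytesWith(unsigned char *state, const unsigned char box[256])
{
    int i;

    for (i = 0; i < 16; i++)
        state[i] = box[state[i]];
}
```

Passing the inverse S-Box to the same function performs the decryption-side substitution.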
The MixColumn operation:
I will keep this section very short since it involves a lot of advanced mathematics in Rijndael's finite field.
All you have to know is that it corresponds to a matrix multiplication with the fixed matrix

| 2 3 1 1 |
| 1 2 3 1 |
| 1 1 2 3 |
| 3 1 1 2 |

and that the addition and multiplication operations are a little different from the normal ones.
You can skip this part if you are not interested in the math involved:
Addition and Subtraction:
Addition and subtraction are performed by the exclusive or operation. The two operations are the same; there is no difference between addition and subtraction.
Multiplication in Rijndael's galois field is a little more complicated. The procedure is as follows:
* Take two eight-bit numbers, a and b, and an eight-bit product p
* Set the product to zero.
* Make a copy of a and b, which we will simply call a and b in the rest of this algorithm
* Run the following loop eight times:
1. If the low bit of b is set, exclusive or the product p by the value of a
2. Keep track of whether the high (eighth from left) bit of a is set to one
3. Rotate a one bit to the left, discarding the high bit, and making the low bit have a value of zero
4. If a's high bit had a value of one prior to this rotation, exclusive or a with the hexadecimal number 0x1b
5. Rotate b one bit to the right, discarding the low bit, and making the high (eighth from left) bit have a value of zero.
* The product p now has the product of a and b
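The shift-and-xor procedure above translates almost line by line into C. Together with the fixed matrix, one column of MixColumn can then be sketched as follows ({57} times {83} = {c1} and the column (db, 13, 53, 45) mapping to (8e, 4d, a1, bc) are the usual test values from the Rijndael documentation):

```c
/* multiply two bytes in Rijndael's Galois field GF(2^8),
 * following the shift-and-xor procedure described above */
unsigned char gmul(unsigned char a, unsigned char b)
{
    unsigned char p = 0;
    unsigned char hi;
    int i;

    for (i = 0; i < 8; i++) {
        if (b & 1)              /* low bit of b set: add a into p */
            p ^= a;
        hi = (unsigned char)(a & 0x80);
        a <<= 1;                /* shift a left, dropping the high bit */
        if (hi)
            a ^= 0x1b;          /* reduce modulo x^8 + x^4 + x^3 + x + 1 */
        b >>= 1;                /* shift b right, dropping the low bit */
    }
    return p;
}

/* MixColumn on a single 4-byte column: multiply by the fixed matrix
 * | 2 3 1 1 |
 * | 1 2 3 1 |
 * | 1 1 2 3 |
 * | 3 1 1 2 | */
void mixColumn(unsigned char *col)
{
    unsigned char a0 = col[0], a1 = col[1], a2 = col[2], a3 = col[3];

    col[0] = gmul(a0, 2) ^ gmul(a1, 3) ^ a2 ^ a3;
    col[1] = a0 ^ gmul(a1, 2) ^ gmul(a2, 3) ^ a3;
    col[2] = a0 ^ a1 ^ gmul(a2, 2) ^ gmul(a3, 3);
    col[3] = gmul(a0, 3) ^ a1 ^ a2 ^ gmul(a3, 2);
}
```

Note that addition inside the matrix product is the XOR described above, not ordinary integer addition.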
The Rijndael Key Schedule:
The Key Schedule is responsible for expanding a short key into a larger key, whose parts are used during the different iterations. Each key size is expanded to a different size:
An 128 bit key is expanded to an 176 byte key.
An 192 bit key is expanded to an 208 byte key.
An 256 bit key is expanded to an 240 byte key.
There is a relation between the cipher key size, the number of rounds and the ExpandedKey size. For an 128-bit key, there is one initial AddRoundKey operation plus 10 rounds, and each round needs a new 16-byte round key; therefore we require 10+1 RoundKeys of 16 bytes, which equals 176 bytes. The same logic can be applied to the two other cipher key sizes. The general formula is:
ExpandedKeySize = (nbrRounds+1) * BlockSize
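Expressed in code, with the AES block size of 16 bytes hard-coded (the helper name expandedKeySizeFor is mine):

```c
/* ExpandedKeySize = (nbrRounds + 1) * BlockSize, with a 16-byte block */
int expandedKeySizeFor(int nbrRounds)
{
    return (nbrRounds + 1) * 16;
}
```

This yields 176, 208 and 240 bytes for 10, 12 and 14 rounds respectively, matching the table above.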
The Key Schedule is made up of iterations of the Key schedule core, which works on 4-byte words. The core uses a certain number of operations, which are explained here:
Rotate: the 4-byte word is cyclically shifted 1 byte to the left:
--------------------- ---------------------
| 1d | 2c | 3a | 4f | -> | 2c | 3a | 4f | 1d |
--------------------- ---------------------
Rcon: this section is again extremely mathematical and I recommend everyone who is interested to read the description in the Rijndael documentation. Just note that the Rcon values can be pre-calculated, which results in a simple substitution (a table lookup) in a pre-calculated, fixed Rcon table (again, Rcon can also be calculated on-the-fly if memory is a design restriction).
S-Box: the Key Schedule uses the same S-Box substitution as the main algorithm body.
Now that we know what the operations are, let me show you the key schedule core (in pseudo-C):
rotate(word);
apply the S-Box substitution on every byte of word;
word[0] = word[0] XOR RCON[i];
In the above code, word has a size of 4 bytes and i is the iteration counter from the Key Schedule
The Key Expansion:
First, let me show you the keyExpansion function as you can find it in the Rijndael documentation (there are two versions, one for key sizes 128 and 192 and one for key size 256):
KeyExpansion(byte Key[4*Nk], word W[Nb*(Nr+1)])
{
   for(i = 0; i < Nk; i++)
      W[i] = (Key[4*i], Key[4*i+1], Key[4*i+2], Key[4*i+3]);

   for(i = Nk; i < Nb * (Nr + 1); i++)
   {
      temp = W[i - 1];
      if (i % Nk == 0)
         temp = SubByte(RotByte(temp)) ^ Rcon[i / Nk];
      W[i] = W[i - Nk] ^ temp;
   }
}
□ Nk is the number of columns in the cipher key (128-bit -> 4, 192-bit -> 6, 256-bit -> 8)
□ W is of type "word", which is 4 bytes
Let me try to explain this in an easier understandable way:
□ The first n bytes of the expanded key are simply the cipher key (n = the size of the encryption key)
□ The rcon value i is set to 1
□ Until we have enough bytes of expanded key, we do the following to generate n more bytes of expanded key (please note once again that "n" is used here, this varies depending on the key size)
☆ we do the following to generate four bytes
○ we use a temporary 4-byte word called t
○ we assign the previous 4 bytes to t
○ we perform the key schedule core on t, with i as rcon value
○ we increment i
○ we XOR t with the 4-byte word n bytes before in the expandedKey (where n is either 16, 24 or 32 bytes)
☆ we do the following x times to generate the next x*4 bytes of the expandedKey (x = 3 for n=16,32 and x = 5 for n=24)
○ we assign the previous 4-byte word to t
○ we XOR t with the 4-byte word n bytes before in the expandedKey (where n is either 16, 24 or 32 bytes)
☆ if n = 32 (and ONLY then), we do the following to generate 4 more bytes
○ we assign the previous 4-byte word to t
○ We run each of the four bytes in t through Rijndael's S-box
○ we XOR t with the 4-byte word 32 bytes before in the expandedKey
☆ if n = 32 (and ONLY then), we do the following three times to generate twelve more bytes
○ we assign the previous 4-byte word to t
○ we XOR t with the 4-byte word 32 bytes before in the expandedKey
□ We now have our expandedKey
Don't worry if you still have problems understanding the Key Schedule, you'll see that the implementation isn't very hard. What you should note is that:
□ the steps marked "if n = 32 (and ONLY then)" apply only to a 256-bit (32-byte) cipher key
□ for n=16, we generate: 4 + 3*4 bytes = 16 bytes per iteration
□ for n=24, we generate: 4 + 5*4 bytes = 24 bytes per iteration
□ for n=32, we generate: 4 + 3*4 + 4 + 3*4 = 32 bytes per iteration
The implementation of the key schedule is pretty straightforward, but since there is a lot of code repetition, it is possible to optimize the loop slightly and use the modulo operator to check when the additional operations have to be applied.
Implementation: The Key Schedule
We will start the implementation of AES with the Cipher Key expansion. As you can read in the theoretical part above, we intend to enlarge our input cipher key, whose size varies between 128 and
256 bits into a larger key, from which different RoundKeys can be derived.
I prefer to implement the helper functions (such as rotate, Rcon or S-Box first), test them and then move on to the larger loops. If you are not a fan of bottom-up approaches, feel free to start
a little further in this tutorial and move your way up, but I felt that my approach was the more logical one here.
Implementation: General comments
Even though some might think that integers were the best choice to work with, since their 32-bit size corresponds to one word, I strongly discourage you from using integers. It is wrong to assume that integers, or more specifically the "int" type, always have 4 bytes. The required ranges for signed and unsigned int are identical to those for signed and unsigned short. On compilers for 8- and 16-bit processors (including Intel x86 processors executing in 16-bit mode, such as under MS-DOS), an int is usually 16 bits and has exactly the same representation as a short. On compilers for 32-bit and larger processors (including Intel x86 processors executing in 32-bit mode, such as Win32 or Linux) an int is usually 32 bits long and has exactly the same representation as a long.
For this very reason, we will be using unsigned chars, since the width of a char (CHAR_BIT, defined in limits.h) is required to be at least 8 bits. Jack Klein wrote: "Almost all modern computers today use 8-bit bytes (technically called octets), but there are still some in production and use with other sizes, such as 9 bits. Also some processors (especially Digital Signal Processors) cannot efficiently access memory in smaller pieces than the processor's word size. There is at least one DSP I have worked with where CHAR_BIT is 32; the char types, short, int and long are all 32 bits."
Since we want to keep our code as portable as possible and since it is up to the compiler to decide if the default type for char is signed or not, we will specify unsigned char throughout the
entire code.
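Should you want to verify these assumptions on your own platform, a tiny check like the following does (the function name is mine):

```c
#include <limits.h>

/* sanity check: a char is at least 8 bits wide, and by definition
 * sizeof(unsigned char) is exactly 1 */
int charAssumptionsHold(void)
{
    return CHAR_BIT >= 8 && sizeof(unsigned char) == 1;
}
```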
Implementation: S-Box
The S-Box values can either be calculated on-the-fly to save memory, or the pre-calculated values can be stored in an array. Since I assume that every machine my code runs on has at least 2x 256 bytes to spare (there are 2 S-Boxes, one for encryption and one for decryption), we will store the values in an array. Additionally, instead of accessing the values directly from our program, I'll wrap a little function around the lookup, which makes for more readable code and would allow us to add additional code later on. Of course, this is a matter of taste; feel free to access the array directly.
Here's the code for the 2 S-Boxes. It's only a table lookup that returns the value in the array whose index is specified as a parameter of the function:
unsigned char sbox[256] = {
//0 1 2 3 4 5 6 7 8 9 A B C D E F
0x63, 0x7c, 0x77, 0x7b, 0xf2, 0x6b, 0x6f, 0xc5, 0x30, 0x01, 0x67, 0x2b, 0xfe, 0xd7, 0xab, 0x76, //0
0xca, 0x82, 0xc9, 0x7d, 0xfa, 0x59, 0x47, 0xf0, 0xad, 0xd4, 0xa2, 0xaf, 0x9c, 0xa4, 0x72, 0xc0, //1
0xb7, 0xfd, 0x93, 0x26, 0x36, 0x3f, 0xf7, 0xcc, 0x34, 0xa5, 0xe5, 0xf1, 0x71, 0xd8, 0x31, 0x15, //2
0x04, 0xc7, 0x23, 0xc3, 0x18, 0x96, 0x05, 0x9a, 0x07, 0x12, 0x80, 0xe2, 0xeb, 0x27, 0xb2, 0x75, //3
0x09, 0x83, 0x2c, 0x1a, 0x1b, 0x6e, 0x5a, 0xa0, 0x52, 0x3b, 0xd6, 0xb3, 0x29, 0xe3, 0x2f, 0x84, //4
0x53, 0xd1, 0x00, 0xed, 0x20, 0xfc, 0xb1, 0x5b, 0x6a, 0xcb, 0xbe, 0x39, 0x4a, 0x4c, 0x58, 0xcf, //5
0xd0, 0xef, 0xaa, 0xfb, 0x43, 0x4d, 0x33, 0x85, 0x45, 0xf9, 0x02, 0x7f, 0x50, 0x3c, 0x9f, 0xa8, //6
0x51, 0xa3, 0x40, 0x8f, 0x92, 0x9d, 0x38, 0xf5, 0xbc, 0xb6, 0xda, 0x21, 0x10, 0xff, 0xf3, 0xd2, //7
0xcd, 0x0c, 0x13, 0xec, 0x5f, 0x97, 0x44, 0x17, 0xc4, 0xa7, 0x7e, 0x3d, 0x64, 0x5d, 0x19, 0x73, //8
0x60, 0x81, 0x4f, 0xdc, 0x22, 0x2a, 0x90, 0x88, 0x46, 0xee, 0xb8, 0x14, 0xde, 0x5e, 0x0b, 0xdb, //9
0xe0, 0x32, 0x3a, 0x0a, 0x49, 0x06, 0x24, 0x5c, 0xc2, 0xd3, 0xac, 0x62, 0x91, 0x95, 0xe4, 0x79, //A
0xe7, 0xc8, 0x37, 0x6d, 0x8d, 0xd5, 0x4e, 0xa9, 0x6c, 0x56, 0xf4, 0xea, 0x65, 0x7a, 0xae, 0x08, //B
0xba, 0x78, 0x25, 0x2e, 0x1c, 0xa6, 0xb4, 0xc6, 0xe8, 0xdd, 0x74, 0x1f, 0x4b, 0xbd, 0x8b, 0x8a, //C
0x70, 0x3e, 0xb5, 0x66, 0x48, 0x03, 0xf6, 0x0e, 0x61, 0x35, 0x57, 0xb9, 0x86, 0xc1, 0x1d, 0x9e, //D
0xe1, 0xf8, 0x98, 0x11, 0x69, 0xd9, 0x8e, 0x94, 0x9b, 0x1e, 0x87, 0xe9, 0xce, 0x55, 0x28, 0xdf, //E
0x8c, 0xa1, 0x89, 0x0d, 0xbf, 0xe6, 0x42, 0x68, 0x41, 0x99, 0x2d, 0x0f, 0xb0, 0x54, 0xbb, 0x16 }; //F
unsigned char rsbox[256] =
{ 0x52, 0x09, 0x6a, 0xd5, 0x30, 0x36, 0xa5, 0x38, 0xbf, 0x40, 0xa3, 0x9e, 0x81, 0xf3, 0xd7, 0xfb
, 0x7c, 0xe3, 0x39, 0x82, 0x9b, 0x2f, 0xff, 0x87, 0x34, 0x8e, 0x43, 0x44, 0xc4, 0xde, 0xe9, 0xcb
, 0x54, 0x7b, 0x94, 0x32, 0xa6, 0xc2, 0x23, 0x3d, 0xee, 0x4c, 0x95, 0x0b, 0x42, 0xfa, 0xc3, 0x4e
, 0x08, 0x2e, 0xa1, 0x66, 0x28, 0xd9, 0x24, 0xb2, 0x76, 0x5b, 0xa2, 0x49, 0x6d, 0x8b, 0xd1, 0x25
, 0x72, 0xf8, 0xf6, 0x64, 0x86, 0x68, 0x98, 0x16, 0xd4, 0xa4, 0x5c, 0xcc, 0x5d, 0x65, 0xb6, 0x92
, 0x6c, 0x70, 0x48, 0x50, 0xfd, 0xed, 0xb9, 0xda, 0x5e, 0x15, 0x46, 0x57, 0xa7, 0x8d, 0x9d, 0x84
, 0x90, 0xd8, 0xab, 0x00, 0x8c, 0xbc, 0xd3, 0x0a, 0xf7, 0xe4, 0x58, 0x05, 0xb8, 0xb3, 0x45, 0x06
, 0xd0, 0x2c, 0x1e, 0x8f, 0xca, 0x3f, 0x0f, 0x02, 0xc1, 0xaf, 0xbd, 0x03, 0x01, 0x13, 0x8a, 0x6b
, 0x3a, 0x91, 0x11, 0x41, 0x4f, 0x67, 0xdc, 0xea, 0x97, 0xf2, 0xcf, 0xce, 0xf0, 0xb4, 0xe6, 0x73
, 0x96, 0xac, 0x74, 0x22, 0xe7, 0xad, 0x35, 0x85, 0xe2, 0xf9, 0x37, 0xe8, 0x1c, 0x75, 0xdf, 0x6e
, 0x47, 0xf1, 0x1a, 0x71, 0x1d, 0x29, 0xc5, 0x89, 0x6f, 0xb7, 0x62, 0x0e, 0xaa, 0x18, 0xbe, 0x1b
, 0xfc, 0x56, 0x3e, 0x4b, 0xc6, 0xd2, 0x79, 0x20, 0x9a, 0xdb, 0xc0, 0xfe, 0x78, 0xcd, 0x5a, 0xf4
, 0x1f, 0xdd, 0xa8, 0x33, 0x88, 0x07, 0xc7, 0x31, 0xb1, 0x12, 0x10, 0x59, 0x27, 0x80, 0xec, 0x5f
, 0x60, 0x51, 0x7f, 0xa9, 0x19, 0xb5, 0x4a, 0x0d, 0x2d, 0xe5, 0x7a, 0x9f, 0x93, 0xc9, 0x9c, 0xef
, 0xa0, 0xe0, 0x3b, 0x4d, 0xae, 0x2a, 0xf5, 0xb0, 0xc8, 0xeb, 0xbb, 0x3c, 0x83, 0x53, 0x99, 0x61
, 0x17, 0x2b, 0x04, 0x7e, 0xba, 0x77, 0xd6, 0x26, 0xe1, 0x69, 0x14, 0x63, 0x55, 0x21, 0x0c, 0x7d };
unsigned char getSBoxValue(unsigned char num)
{
    return sbox[num];
}

unsigned char getSBoxInvert(unsigned char num)
{
    return rsbox[num];
}
Implementation: Rotate
From the theoretical part, you should know already that Rotate takes a word (a 4-byte array) and rotates it 8 bits to the left. Since 8 bits correspond to one byte and our array type is char (whose size is one byte), rotating 8 bits to the left corresponds to cyclically shifting the array values one position to the left.
Here's the code for the Rotate function:
/* Rijndael's key schedule rotate operation
 * rotate the word eight bits to the left
 * rotate(1d2c3a4f) = 2c3a4f1d
 * word is a char array of size 4 (32 bits)
 */
void rotate(unsigned char *word)
{
    unsigned char c;
    int i;

    c = word[0];
    for (i = 0; i < 3; i++)
        word[i] = word[i + 1];
    word[3] = c;
}
Implementation: Rcon
Same as with the S-Box, the Rcon values can be calculated on-the-fly but once again I decide to store them in an array since they only require 255 bytes of space. To keep in line with the S-Box
implementation, I write a little access function.
Here's the code for Rcon:
unsigned char Rcon[255] = {
0x8d, 0x01, 0x02, 0x04, 0x08, 0x10, 0x20, 0x40, 0x80, 0x1b, 0x36, 0x6c, 0xd8,
0xab, 0x4d, 0x9a, 0x2f, 0x5e, 0xbc, 0x63, 0xc6, 0x97, 0x35, 0x6a, 0xd4, 0xb3,
0x7d, 0xfa, 0xef, 0xc5, 0x91, 0x39, 0x72, 0xe4, 0xd3, 0xbd, 0x61, 0xc2, 0x9f,
0x25, 0x4a, 0x94, 0x33, 0x66, 0xcc, 0x83, 0x1d, 0x3a, 0x74, 0xe8, 0xcb, 0x8d,
0x01, 0x02, 0x04, 0x08, 0x10, 0x20, 0x40, 0x80, 0x1b, 0x36, 0x6c, 0xd8, 0xab,
0x4d, 0x9a, 0x2f, 0x5e, 0xbc, 0x63, 0xc6, 0x97, 0x35, 0x6a, 0xd4, 0xb3, 0x7d,
0xfa, 0xef, 0xc5, 0x91, 0x39, 0x72, 0xe4, 0xd3, 0xbd, 0x61, 0xc2, 0x9f, 0x25,
0x4a, 0x94, 0x33, 0x66, 0xcc, 0x83, 0x1d, 0x3a, 0x74, 0xe8, 0xcb, 0x8d, 0x01,
0x02, 0x04, 0x08, 0x10, 0x20, 0x40, 0x80, 0x1b, 0x36, 0x6c, 0xd8, 0xab, 0x4d,
0x9a, 0x2f, 0x5e, 0xbc, 0x63, 0xc6, 0x97, 0x35, 0x6a, 0xd4, 0xb3, 0x7d, 0xfa,
0xef, 0xc5, 0x91, 0x39, 0x72, 0xe4, 0xd3, 0xbd, 0x61, 0xc2, 0x9f, 0x25, 0x4a,
0x94, 0x33, 0x66, 0xcc, 0x83, 0x1d, 0x3a, 0x74, 0xe8, 0xcb, 0x8d, 0x01, 0x02,
0x04, 0x08, 0x10, 0x20, 0x40, 0x80, 0x1b, 0x36, 0x6c, 0xd8, 0xab, 0x4d, 0x9a,
0x2f, 0x5e, 0xbc, 0x63, 0xc6, 0x97, 0x35, 0x6a, 0xd4, 0xb3, 0x7d, 0xfa, 0xef,
0xc5, 0x91, 0x39, 0x72, 0xe4, 0xd3, 0xbd, 0x61, 0xc2, 0x9f, 0x25, 0x4a, 0x94,
0x33, 0x66, 0xcc, 0x83, 0x1d, 0x3a, 0x74, 0xe8, 0xcb, 0x8d, 0x01, 0x02, 0x04,
0x08, 0x10, 0x20, 0x40, 0x80, 0x1b, 0x36, 0x6c, 0xd8, 0xab, 0x4d, 0x9a, 0x2f,
0x5e, 0xbc, 0x63, 0xc6, 0x97, 0x35, 0x6a, 0xd4, 0xb3, 0x7d, 0xfa, 0xef, 0xc5,
0x91, 0x39, 0x72, 0xe4, 0xd3, 0xbd, 0x61, 0xc2, 0x9f, 0x25, 0x4a, 0x94, 0x33,
0x66, 0xcc, 0x83, 0x1d, 0x3a, 0x74, 0xe8, 0xcb};
unsigned char getRconValue(unsigned char num)
{
    return Rcon[num];
}
Implementation: Key Schedule Core
The implementation of the Key Schedule Core from the pseudo-C is pretty easy. All the code does is apply the operations one after the other on the 4-byte word. The parameters are the 4-byte word
and the iteration counter, on which Rcon depends.
void core(unsigned char *word, int iteration)
{
    int i;

    /* rotate the 32-bit word 8 bits to the left */
    rotate(word);

    /* apply S-Box substitution on all 4 parts of the 32-bit word */
    for (i = 0; i < 4; ++i)
        word[i] = getSBoxValue(word[i]);

    /* XOR the output of the rcon operation with i to the first part (leftmost) only */
    word[0] = word[0] ^ getRconValue(iteration);
}
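As a quick sanity check, here is a standalone trace of the core on the all-zero word with iteration 1. The two lookup functions are deliberately reduced to stubs covering only the table entries this trace touches (sbox[0x00] = 0x63 and Rcon[1] = 0x01), so the snippet compiles on its own:

```c
/* stubs covering only the table entries this trace touches */
static unsigned char getSBoxValue(unsigned char num)
{
    return num == 0x00 ? 0x63 : 0x00;
}

static unsigned char getRconValue(unsigned char num)
{
    return num == 1 ? 0x01 : 0x00;
}

/* the key schedule core as above, with the rotation inlined */
static void core(unsigned char *word, int iteration)
{
    unsigned char c = word[0];
    int i;

    for (i = 0; i < 3; i++)      /* rotate one byte to the left */
        word[i] = word[i + 1];
    word[3] = c;

    for (i = 0; i < 4; i++)      /* S-Box substitution */
        word[i] = getSBoxValue(word[i]);

    word[0] ^= getRconValue(iteration);
}
```

Running it on {00, 00, 00, 00} with iteration 1 yields {62, 63, 63, 63}, which is exactly the first word generated when expanding an all-zero 128-bit cipher key.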
Implementation: Key Expansion
The Key Expansion is where it all comes together. As you can see in the pretty big list in the theory about the Rijndael Key Expansion, we need to apply several operations a number of times, depending on the key size.
As the key size can only take a very limited number of values, I decided to implement it as an enumeration type. Not only does that limit the key size to only three possible values, it also makes
the code more readable.
enum keySize {
    SIZE_16 = 16,
    SIZE_24 = 24,
    SIZE_32 = 32
};
Our key expansion function basically needs only two things:
□ the input cipher key
□ the output expanded key
Since in C, it is not possible to know the size of an array passed as pointer to a function, we'll add the cipher key size (of type "enum keySize") and the expanded key size (of type size_t) to
the parameter list of our function. The prototype looks like the following:
void expandKey(unsigned char *expandedKey, unsigned char *key, enum keySize size, size_t expandedKeySize);
While implementing the function, I try to follow the details in the theoretical list as closely as possible. As I already explained, since several parts of the code are repeated, I'll try to get rid of the code repetition and use conditions to decide when a certain operation has to be applied.
Instead of writing:
while (expanded_key_size < required_key_size)
for (i=0; i<4; i++)
I'll use a different structure:
while (expanded_key_size < required_key_size)
if (expanded_key_size%key_size == 0)
This structure comes down to the same thing, but allows me to be more flexible when it comes to adding the 256-bit cipher-key version that has those additional steps.
Let me show you the keyexpansion function and give explanations later on:
/* Rijndael's key expansion
 * expands an 128, 192 or 256-bit key into an 176, 208 or 240-byte key
 * expandedKey is a pointer to a char array of large enough size,
 * key is a pointer to the non-expanded key
 */
void expandKey(unsigned char *expandedKey, unsigned char *key, enum keySize size, size_t expandedKeySize)
{
    /* current expanded keySize, in bytes */
    int currentSize = 0;
    int rconIteration = 1;
    int i;
    unsigned char t[4] = {0};   // temporary 4-byte variable

    /* set the 16, 24 or 32 bytes of the expanded key to the input key */
    for (i = 0; i < size; i++)
        expandedKey[i] = key[i];
    currentSize += size;

    while (currentSize < expandedKeySize)
    {
        /* assign the previous 4 bytes to the temporary value t */
        for (i = 0; i < 4; i++)
            t[i] = expandedKey[(currentSize - 4) + i];

        /* every 16, 24 or 32 bytes we apply the core schedule to t
         * and increment rconIteration afterwards */
        if (currentSize % size == 0)
            core(t, rconIteration++);

        /* For 256-bit keys, we add an extra sbox to the calculation */
        if (size == SIZE_32 && ((currentSize % size) == 16))
        {
            for (i = 0; i < 4; i++)
                t[i] = getSBoxValue(t[i]);
        }

        /* We XOR t with the four-byte block 16, 24 or 32 bytes before
         * the new expanded key; this becomes the next four bytes in
         * the expanded key. */
        for (i = 0; i < 4; i++)
        {
            expandedKey[currentSize] = expandedKey[currentSize - size] ^ t[i];
            currentSize++;
        }
    }
}
As you can see, I never use inner loops to repeat an operation; the only inner loops iterate over the 4 parts of the temporary array t. I use the modulo operator to check whether I need to apply an operation:
□ if(currentSize % size == 0): whenever we have created n bytes of expandedKey (where n is the cipherkey size), we run the key expansion core once
□ if(size == SIZE_32 && ((currentSize % size) == 16)): if we are expanding a 256-bit (32-byte) cipherkey and have already generated 16 bytes since the last core (as I explained above, in the 256-bit version we run the first loop only 3 times, which generates 12 bytes + the 4 bytes from the core), we add one additional S-box substitution
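Since this excerpt doesn't repeat the core function or the S-box from the earlier sections, here is a rough, self-contained sketch of what core does: rotate the word left by one byte, substitute every byte through the S-box and XOR the round constant into the first byte. The helper names (gmul, rotl8, init_sbox) are mine, and I build the S-box arithmetically here instead of pasting the tutorial's 256-entry lookup table:

```c
/* Sketch of the key schedule core used by expandKey above. The S-box is
 * generated from its definition (multiplicative inverse in GF(2^8)
 * followed by an affine transform) instead of from a lookup table. */

unsigned char sbox[256];

/* GF(2^8) multiplication modulo the AES polynomial x^8+x^4+x^3+x+1 */
unsigned char gmul(unsigned char a, unsigned char b)
{
    unsigned char p = 0;
    while (b) {
        if (b & 1)
            p ^= a;
        a = (unsigned char)((a << 1) ^ ((a & 0x80) ? 0x1b : 0x00));
        b >>= 1;
    }
    return p;
}

/* rotate an 8-bit value left by n places (n between 1 and 7) */
unsigned char rotl8(unsigned char x, int n)
{
    return (unsigned char)((x << n) | (x >> (8 - n)));
}

/* build the S-box: inverse in GF(2^8), then the affine transform */
void init_sbox(void)
{
    int x, y;
    for (x = 0; x < 256; x++) {
        unsigned char inv = 0;
        for (y = 1; x != 0 && y < 256; y++) {
            if (gmul((unsigned char)x, (unsigned char)y) == 1) {
                inv = (unsigned char)y;
                break;
            }
        }
        sbox[x] = (unsigned char)(inv ^ rotl8(inv, 1) ^ rotl8(inv, 2)
                                  ^ rotl8(inv, 3) ^ rotl8(inv, 4) ^ 0x63);
    }
}

unsigned char getSBoxValue(unsigned char num)
{
    return sbox[num];
}

const unsigned char rcon[11] =
    {0x00, 0x01, 0x02, 0x04, 0x08, 0x10, 0x20, 0x40, 0x80, 0x1b, 0x36};

/* the key schedule core: rotate, substitute, fold in the round constant */
void core(unsigned char *word, int iteration)
{
    unsigned char t = word[0];
    word[0] = word[1];
    word[1] = word[2];
    word[2] = word[3];
    word[3] = t;

    word[0] = getSBoxValue(word[0]);
    word[1] = getSBoxValue(word[1]);
    word[2] = getSBoxValue(word[2]);
    word[3] = getSBoxValue(word[3]);

    word[0] ^= rcon[iteration];
}
```

With a zero input word and rconIteration 1, this produces 62 63 63 63.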
Implementation: Using the Key Expansion
Finally, we can test our newly created key expansion. I won't calculate the expandedKey size just yet but rather give it a fixed value (the calculation requires the number of rounds, which isn't needed at this point). Here's the code that would expand a given cipher key:
/* the expanded keySize */
int expandedKeySize = 176;
/* the expanded key */
unsigned char expandedKey[expandedKeySize];
/* the cipher key */
unsigned char key[16] = {0};
/* the cipher key size */
enum keySize size = SIZE_16;
int i;

expandKey(expandedKey, key, size, expandedKeySize);

printf("Expanded Key:\n");
for (i = 0; i < expandedKeySize; i++)
    printf("%2.2x%c", expandedKey[i], ((i + 1) % 16) ? ' ' : '\n');
Of course, this code uses several constants that will be generated automatically once we implement the body of the AES encryption.
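A quick note on where the hard-coded 176 comes from: AES runs 10, 12 or 14 rounds for 16-, 24- and 32-byte cipher keys, and the expanded key needs one 16-byte round key per round plus one extra for the initial round. If you'd rather compute the size than hard-code it, a small helper (my own naming, not part of the tutorial code so far) could look like this:

```c
#include <stddef.h>

/* number of rounds for a given cipher key size in bytes (16, 24 or 32) */
int numRounds(size_t keyBytes)
{
    switch (keyBytes) {
    case 16: return 10;   /* 128-bit key */
    case 24: return 12;   /* 192-bit key */
    case 32: return 14;   /* 256-bit key */
    default: return -1;   /* unsupported key size */
    }
}

/* expanded key size: one 16-byte round key per round, plus one extra */
size_t expandedKeyBytes(size_t keyBytes)
{
    int rounds = numRounds(keyBytes);
    return rounds < 0 ? 0 : (size_t)(rounds + 1) * 16;
}
```

For the three key sizes this gives 176, 208 and 240 bytes, matching the sizes quoted at the top of expandKey.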
Here are several test results:
The Key Expansion of a 128-bit key consisting of null characters (like the example above):
00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
62 63 63 63 62 63 63 63 62 63 63 63 62 63 63 63
9b 98 98 c9 f9 fb fb aa 9b 98 98 c9 f9 fb fb aa
90 97 34 50 69 6c cf fa f2 f4 57 33 0b 0f ac 99
ee 06 da 7b 87 6a 15 81 75 9e 42 b2 7e 91 ee 2b
7f 2e 2b 88 f8 44 3e 09 8d da 7c bb f3 4b 92 90
ec 61 4b 85 14 25 75 8c 99 ff 09 37 6a b4 9b a7
21 75 17 87 35 50 62 0b ac af 6b 3c c6 1b f0 9b
0e f9 03 33 3b a9 61 38 97 06 0a 04 51 1d fa 9f
b1 d4 d8 e2 8a 7d b9 da 1d 7b b3 de 4c 66 49 41
b4 ef 5b cb 3e 92 e2 11 23 e9 51 cf 6f 8f 18 8e
The Key Expansion of a 192-bit key consisting of null characters:
00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
00 00 00 00 00 00 00 00 62 63 63 63 62 63 63 63
62 63 63 63 62 63 63 63 62 63 63 63 62 63 63 63
9b 98 98 c9 f9 fb fb aa 9b 98 98 c9 f9 fb fb aa
9b 98 98 c9 f9 fb fb aa 90 97 34 50 69 6c cf fa
f2 f4 57 33 0b 0f ac 99 90 97 34 50 69 6c cf fa
c8 1d 19 a9 a1 71 d6 53 53 85 81 60 58 8a 2d f9
c8 1d 19 a9 a1 71 d6 53 7b eb f4 9b da 9a 22 c8
89 1f a3 a8 d1 95 8e 51 19 88 97 f8 b8 f9 41 ab
c2 68 96 f7 18 f2 b4 3f 91 ed 17 97 40 78 99 c6
59 f0 0e 3e e1 09 4f 95 83 ec bc 0f 9b 1e 08 30
0a f3 1f a7 4a 8b 86 61 13 7b 88 5f f2 72 c7 ca
43 2a c8 86 d8 34 c0 b6 d2 c7 df 11 98 4c 59 70
The Key Expansion of a 256-bit key consisting of null characters:
00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
62 63 63 63 62 63 63 63 62 63 63 63 62 63 63 63
aa fb fb fb aa fb fb fb aa fb fb fb aa fb fb fb
6f 6c 6c cf 0d 0f 0f ac 6f 6c 6c cf 0d 0f 0f ac
7d 8d 8d 6a d7 76 76 91 7d 8d 8d 6a d7 76 76 91
53 54 ed c1 5e 5b e2 6d 31 37 8e a2 3c 38 81 0e
96 8a 81 c1 41 fc f7 50 3c 71 7a 3a eb 07 0c ab
9e aa 8f 28 c0 f1 6d 45 f1 c6 e3 e7 cd fe 62 e9
2b 31 2b df 6a cd dc 8f 56 bc a6 b5 bd bb aa 1e
64 06 fd 52 a4 f7 90 17 55 31 73 f0 98 cf 11 19
6d bb a9 0b 07 76 75 84 51 ca d3 31 ec 71 79 2f
e7 b0 e8 9c 43 47 78 8b 16 76 0b 7b 8e b9 1a 62
74 ed 0b a1 73 9b 7e 25 22 51 ad 14 ce 20 d4 3b
10 f8 0a 17 53 bf 72 9c 45 c9 79 e7 cb 70 63 85
Implementation: AES Encryption
to be continued...
Last edited by laserlight; 11-23-2007 at 12:53 PM. Reason: Added external link on request of the author.
Excellent! That post about the Enigma Machine last week piqued my curiosity.
Throughout the pre-industrial age, I have learned, the art of "secret writing" was quite important, since communications involved messengers who could easily be intercepted (for the most part) and not perfectly trusted.
Thanks, KONI. I look forward to your other installments. :P
awesome post *votes to move to official Tutorial section*.
One has to notice that there exist certain ciphers that don't need a key at all. A famous example is ROT13 (an abbreviation of Rotation 13), a simple Caesar cipher that obscures text by replacing each letter with the letter thirteen places down in the alphabet. Since our alphabet has 26 characters, it is enough to encrypt the ciphertext again to retrieve the original message.
Umm, ROT13 has a key... it is an implied key of 13. Sure, it can be done with an algorithm. It is a reciprocal Caesar cipher.
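Either way, ROT13 is small enough to show in full; since rotating by 13 is its own inverse over a 26-letter alphabet, one function both encrypts and decrypts (a quick sketch, not part of the tutorial's code):

```c
/* ROT13: rotate each letter 13 places; applying it twice restores the input */
char rot13(char c)
{
    if (c >= 'a' && c <= 'z')
        return (char)('a' + (c - 'a' + 13) % 26);
    if (c >= 'A' && c <= 'Z')
        return (char)('A' + (c - 'A' + 13) % 26);
    return c;   /* digits, punctuation and spaces pass through unchanged */
}
```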
nice tute
after the Data Encryption Standard was found too weak because of his small key size
"its", maybe?
First,le me show you the keyExpansion function
"First, let" maybe?
Excellent tutorial.
25th March 2007, 14:37 (GMT+1): fixed and added Key Schedule implementation
I would like to let you know that I am still working on this tutorial. I converted it to an HTML version that I host on my website; you can find the latest version at http://www.progressive-coding.com/. During the last few days, I also updated some features of my website, adding a very handy print version of each document and the possibility to download the PDF version (coming soon; I couldn't find an html2pdf converter that had full CSS support).
I'm giving a basic lecture about algorithm complexity tomorrow and will put the document on my site as well.
New update: Today I implemented two modes of operation, one that uses only encryption and transforms the block cipher into a stream cipher, and the other a classical block cipher mode. I'll continue working on the code and put the next chapter of the tutorial up tomorrow.
29th March 2007, 13:08 (GMT +1): I uploaded the new version of the tutorial; it includes the new chapter about the entire AES encryption. You should now be able to encrypt a plaintext of 128 bits. I also added the printer-friendly version as well as the PDF version:
Direct Link: http://www.progressive-coding.com/tutorial.php?id=0
Printer version: http://www.progressive-coding.com/tu...p?id=0&print=1
PDF version: http://www.progressive-coding.com/pdf/AES.pdf
I should mention that the tutorial now has 22 pages; I will try to keep the decryption section rather short and only mention and implement the two most common modes of operation.
This topic should be pinned to the top of the page...
3rd April 2007, 11:41 (GMT +1): New version of the tutorial online. It concludes the AES implementation by adding the decryption section. The next and last part will cover modes of operation to
use the AES algorithm to encrypt messages of any size.
Direct Link: http://www.progressive-coding.com/tutorial.php?id=0
Printer version: http://www.progressive-coding.com/tu...p?id=0&print=1
PDF version: http://www.progressive-coding.com/pdf/AES.pdf
The tutorial has 26 pages now, most of the new pages are C code though, as the decryption is very easy to understand/implement once we have the encryption.
I expect to finish this tutorial this week (with 3 modes of operation).
Great tutorial, KONI!!
Thanks so much.
I am pleased to announce that my AES tutorial is finished. As I explained on my website, I decided to split the modes of operation material into its very own tutorial. I hope you enjoy reading both tutorials and that you manage to implement your very own version of AES to encrypt messages of any size.
Here are the links to the modes of operation tutorial:
Direct Link: http://www.progressive-coding.com/tutorial.php?id=4
Printer version: http://www.progressive-coding.com/tu...p?id=4&print=1
PDF version: http://www.progressive-coding.com/pd..._operation.pdf
I think my next project will be to write a tutorial about the MD5 hash.
Last edited by KONI; 04-04-2007 at 06:08 AM.
This thread is currently the 5th result in google for "Advanced Encryption Standard tutorial", which is kinda great. Since I can't edit my first post anymore, I would like to ask a mod to edit my
first post and add a header message saying that the latest version of this tutorial can be found here.
About Us
Student Learning Goals
Major student learning goals/objectives for Mathematics Majors/Minors, Secondary
Education Mathematics Majors/Minors, and Applied Mathematics Majors/Minors
Goal #1: Mathematical Reasoning
Objectives: Students should be able to perform complex tasks, to discern patterns, to apply intellectually demanding and rigorous mathematical reasoning to formulate organized and cogent mathematical
Goal #2: Understanding the Breadth of Mathematics
Objectives: Students should possess an understanding of the breadth of mathematics and its interconnecting principles. Students should understand the interplay among applications, problem-solving,
and theory. Students should be able to appreciate the different areas of mathematics and understand the relevance of mathematics to other disciplines. Students should be aware of the historical and
contemporary context in which mathematics is practiced.
Goal #3: Mathematical Modeling
Objectives: Students should be able to apply their mathematical knowledge to a broad spectrum of complex problems. They should be able to solve multi-step problems and utilize current technology in
so doing. Students should be aware of the process by which mathematical principles are applied to serve society.
Goal #4: Communicating Mathematics
Objectives: Students should be able to read, write, and speak mathematically. They should be able to work effectively in a group setting. They should have library and research skills sufficient to
locate, analyze, synthesize, and evaluate information relating to their area.
Major student learning goals for Elementary Education Mathematics Minors:
(These learning goals are adapted from requirements of the Michigan Department of Education for certification of elementary teachers and elementary teachers with a minor in mathematics.)
Goal #1 Problem Solving
• Students exhibit mature problem solving abilities.
• Students recognize and use patterns, quantities, and spatial relationships that can represent phenomena, solve problems, and manage data.
Goal #2 Reasoning
• Students make and evaluate mathematical conjectures and arguments and validate their own mathematical thinking.
Goal #3 Communication
• Students use both oral and written discourse to develop and extend mathematical understanding.
Goal #4 Connections
• Students demonstrate an understanding of mathematical relationships across disciplines and connections within mathematics.
Goal #5 Mathematical Content
• Students know, understand and apply concepts, procedures, and reasoning in mathematics that define number systems and number sense, geometry, measurement, statistics and probability, and algebra.
Accreditation/Certifying Body: The Department of Mathematical Sciences’ program for Secondary Teacher Certification (major and minor) and minor for Elementary Education follow NCATE/NCTM guidelines as well as requirements of the State of Michigan Department of Education. Other courses, such as the Calculus sequence, conform to ABET (Accreditation Board for Engineering & Technology) criteria.
Professional Organizations/Standards that informed development of learning outcomes:
Major and Minor programs of the Department of Mathematical Sciences follow guidelines developed by CUPM (Committee on the Undergraduate Program in Mathematics) of the Mathematical Association of
America and guidelines of CBMS (Conference Board of the Mathematical Sciences) of the American Mathematical Society in cooperation with the Mathematical Association of America.
Browse by Ulster Authors and Editors
Number of items: 99.
Hayes, L, Phair, J, McCormac, C, Marti-Villalba, M, Papakonstantinou, P and Davis, J (2013) Illustrating the invisible: engaging undergraduate engineers in explaining nanotechnology to the public
through flash poetry. Journal of Science Education, 14 . pp. 12-15. [Journal article]
Sun, N, McMullan, M, Papakonstantinou, P, Mihailovic, D and Li, M (2013) Amplified optical transduction of proteins derived fromMo6S9−xIx nanowires. Progress in Natural Science: Materials
International, 23 . pp. 326-330. [Journal article]
Wang, T, Gao, D, Zhuo, J, Zhu, Z, Papakonstantinou, P, Y, Li and M, Li (2013) Size-Dependent Enhancement of Electrocatalytic Oxygen-Reduction and Hydrogen-Evolution Performance of MoS2 Particles.
Chemistry A European Journal, 19 . pp. 11939-11948. [Journal article]
Wang, T, Zhu, H, Zhuo, J, Zhu, Z, Papakonstantinou, P, Lubarsky, G, Lin, J and Li, MX (2013) Biosensor Based on Ultrasmall MoS2 Nanoparticles for Electrochemical Detection of H2O2 Released by Cells
at the Nanomolar Level. Analytical Chemistry, 85 . pp. 10289-10295. [Journal article]
Wang , T, Liu, L, Zhu, Z, Papakonstantinou, P, Hu, J, Liu, H and Li, M (2013) Enhanced electrocatalytic activity for hydrogen evolution reaction from self-assembled monodispersed molybdenum sulfide
nanoparticles on an Au electrode. Energy & Environmental Science, 6 . pp. 625-633. [Journal article]
Chiou, JW, Ray, SC, Peng , SI, Chuang, CH, Wang, BY, Tsai, HM, Pao, CW, Lin, HJ, Shao, YC, Wang, HF, Chen, SC, Pong, WF, Yeh, YC, Chen, CW, Chen, LC, Chen, KH, Tsai, HM, Kuamar, A, Ganguly, A,
Papakonstantinou, P, Yamane, H, Kosugi, N, Regier, T, Liu, L and Sham, TK (2012) Nitrogen Functionalized Graphene Nanoflakes (GNFs:N): Tunable Photoluminescence and Electronic Structures. The Journal
of Physical Chemistry C, 116 . pp. 16251-16258. [Journal article]
Erdem, A, Muti, M, Papakonstantinou, P, Canavar, E, Karadeniz, H, Congur, G and Sharma, S (2012) Graphene oxide integrated sensor for electrochemical monitoring ofmitomycin C–DNA interaction.
ANALYST, 137 . pp. 2129-2135. [Journal article]
Kumar, A, Ganguly, A and Papakonstantinou, P (2012) Thermal stability study of nitrogen functionalities in a graphene network. Journal of Physics: Condensed Matter , 24 . 235503-6pages. [Journal
McCormac, C, Davis, J, Papakonstantinou, P and Ward, N (2012) Research Project Success: The Essential Guide for Science and Engineering Students. Royal Society of Chemistry. Cambridge. 120 pp ISBN
1849733821 [Book (authored)]
Shang, N, Kumar, A, Sun, N, Sharma, S, Papakonstantinou, P, Li, MX, Blackley, RA, Zhou, W, Kalrsson, LS and Silva, SRP (2012) Vertical graphene nanoflakes for the immobilization, electrocatalytic
oxidation andquantitative detection of DNA. Electrochemistry Communications, 25 . pp. 140-143. [Journal article]
Shang, NG, Papakonstantinou, P, Sharma, S, Lubarsky, G, Li, M, McNeill, DW, Quinn, AJ, Zhou, W and Blackley, R (2012) Controllable selective exfoliation of high-quality graphene nanosheets and
nanodots by ionic liquid assisted grinding. Chemical Communications , 48 . pp. 1877-1879. [Journal article]
Ganguly, A, Sharma, S, Papakonstantinou, P and Hamilton, JWJ (2011) Probing the Thermal Deoxygenation of Graphene Oxide Using High-Resolution In Situ X-ray-Based Spectroscopies. The Journal of
Physical Chemistry C, 115 . pp. 17009-17019. [Journal article]
Lin, H, Cheng, H, Liu, L, Zhu, Z, Shao, Y, Papakonstantinou, P, Mihailovic, D and Li, M (2011) Thionin attached to a gold electrode modified with self-assembly of Mo6S9−XIXnanowires for amplified
electrochemical detection of natural DNA. Biosensors and Bioelectronics, 26 . pp. 1866-1870. [Journal article]
McMullan, M, Sun, N, Papakonstantinou, P, Li, MX, Zhou, W and Mihalilovic, D (2011) Aptamer conjugated Mo6S9−xIx nanowires for direct and highly sensitive electrochemical sensing of thrombin.
Biosensors and Bioelectronics, 26 . pp. 1853-1859. [Journal article]
Muti, M, Sharma, S, Erdem, A and Papakonstantinou, P (2011) Electrochemical Monitoring of Nucleic Acid Hybridization by Single-Use Graphene Oxide-Based Sensor. Electroanalysis, 23 (1). pp. 272-279.
[Journal article]
Ney, A., Papakonstantinou, P., Ajay, K., Shang, NG and Peng, N. (2011) Irradiation enhanced paramagnetism on graphene nanoflakes. Applied Physics Letters, 99 . pp. 102504-1. [Journal article]
University of Ulster (2011) Oxygen reduction reaction catalyst. [Patent]
Ray, SC, Sahu, DR and Papakonstantinou, P (2011) Dia-Magnetic to Ferro-Magnetic Behavioral Change of Fe-Catalysts Based Nitrogenated Carbon Nanotubes (NCNTs) by the Process of Chlorination/Oxidation.
Journal of Nanoscience and Nanotechnology, 11 . pp. 8269-8273. [Journal article]
Shang, NG, Silva, SRP, Jiang, X and Papakonstantinou, P (2011) Directly observable G band splitting in Raman spectra from individual tubular graphite cones. Carbon, 49 . pp. 3048-3054. [Journal
Erdem, A, Papakonstantinou, P, Murphy, H, McMullan, M, Karadeniz, H and Sharma, S (2010) Streptavidin Modified Carbon Nanotube Based Graphite Electrode for Label-Free Sequence Specific DNA Detection.
Electroanalysis, 22 (6). pp. 611-617. [Journal article]
Pao, CW, Ray, SC, Tsai, HM, Chen , HC, Lin , IN, Pong, WF, Chiou, JW, Tsai, MH, Shang, NG, Papakonstantinou , P and GuO, GH (2010) Change of Structural Behaviors of Organo-Silane Exposed Graphene
Nanoflakes. The Journal of Physical Chemistry C, 114 . pp. 8161-8166. [Journal article]
Ray, SC, Ghosh, SK, Chiguvare, Z, Palnitkar, U, Pong, WF, Lin , IN, Papakonstantinou, P and Strydom, AM (2010) Electron Field Emission of Silicon-Doped Diamond-Like Carbon Thin Films. Japanese
Journal of Applied Physics, 49 . 111301-1-111301-6. [Journal article]
Shang, Naigui, Papakonstantinou, P, Wang, P and Silva, SRP (2010) Platinum Integrated Graphene for Methanol Fuel Cells. The Journal of Physical Chemistry C, 114 (35). pp. 15837-15841. [Journal
Shang, NG, Tan, YY, Stolojan, V, Papakonstantinou, P and Silva, SRP (2010) High-rate low-temperature growth of vertically aligned carbon nanotubes. Nanotechnology, 21 . p. 505604. [Journal article]
Sharma, S, Gunguly , A, Papakonstantinou, P, Miao, X, Li, M, Hutchison, JL, Delichatsios, M. and Ukleja, S (2010) Rapid Microwave Synthesis of CO Tolerant Reduced Graphene Oxide-Supported
PlatinumElectrocatalysts for Oxidation of Methanol. Journal Of Physical Chemistry C, 114 . pp. 19459-19466. [Journal article]
Iyer, GRS, Papakonstantinou, P, Abbas, G, Maguire, PD and Bakirtzis, D (2009) Dual Role of Purification and Functionalisation of Single Walled CNT by Electron Cyclotron Resonance (ECR) Nitrogen
Plasma. e-Journal of Surface Science and Nanotechnology, 7 . pp. 337-340. [Journal article]
Lin, H, Cheng, H, Miao, XP, Papakonstantinou, P, Mihailovic, D and Li, MX (2009) A Novel Hydrogen Peroxide Amperometric Sensor based on Thionin Incorporated onto a Mo6S9-xIx Nanowire Modified Glassy
Carbon Electrode. ELECTROANALYSIS, 21 (23). pp. 2602-2606. [Journal article]
Ray, SC, Palnitkar, U, Pao, CW, Tsai, HM, Pong, WF, Lin, IN, Papakonstantinou, P, Chen, LC and Chen, KH (2009) Enhancement of electron field emission of nitrogenated carbon nanotubes on chlorination.
DIAMOND AND RELATED MATERIALS, 18 (2-3). pp. 457-460. [Journal article]
Shang, NG, Papakonstantinou, P, Wang, P, Zakharov, A, Palnitkar, U, Lin, IN, Chu, M and Stamboulis, A (2009) Self-Assembled Growth, Microstructure, and Field-Emission High-Performance of Ultrathin
Diamond Nanorods. ACS NANO, 3 (4). pp. 1032-1038. [Journal article]
Ray, SC, Palnitkar, U, Pao, CW, Tsai, HM, Pong, WF, Lin, IN, Papakonstantinou, P, Ganguly, A, Chen, LC and Chen, KH (2008) Field emission effects of nitrogenated carbon nanotubes on chlorination and
oxidation. JOURNAL OF APPLIED PHYSICS, 104 (6). 063710. [Journal article]
Shang, NG, Papakonstantinou, P, McLaughlin, JAD, Chen, WC, Chen, LC, Chu, M and Stamboulis, A (2008) Fe catalytic growth, microstructure, and low-threshold field emission properties of open ended
tubular graphite cones. JOURNAL OF APPLIED PHYSICS, 103 (12). p. 124308. [Journal article]
Shang, NG, Papakonstantinou, P, McMullan, M, Chu, M, Stamboulis, A, Potenza, A, Dhesi, SS and Marchetto, H (2008) Catalyst-Free Efficient Growth, Orientation and Biosensing Properties of Multilayer
Graphene Nanoflake Films with Sharp Edge Planes. ADVANCED FUNCTIONAL MATERIALS, 18 (21). pp. 3506-3514. [Journal article]
Sun, N, McMullan, M, Papakonstantinou, P, Gao, H, Zhang, XX, Mihailovic, D and Li, MX (2008) Bioassembled nanocircuits of Mo6S9-xIx nanowires for electrochemical immunodetection of estrone hapten.
ANALYTICAL CHEMISTRY, 80 (10). pp. 3593-3597. [Journal article]
Tweedie, M, Soin, N, Kumari, P, Roy, SS, Mathur, A, Mahony, CMO, Papakonstantinou, P and McLaughlin, JAD (2008) The use of nanotube structures in reducing the turn-on voltage in micro-discharges and
micro-gas sensors - art. no. 70370Z. In: CARBON NANOTUBES AND ASSOCIATED DEVICES, San Diego, USA. SPIE-INT SOC OPTICAL ENGINEERING. Vol 7037 15 pp. [Conference contribution]
Abbas, GA, Papakonstantinou, P, Iyer, GRS, Kirkman, IW and Chen, LC (2007) Substitutional nitrogen incorporation through rf glow discharge treatment and subsequent oxygen uptake on vertically aligned
carbon nanotubes. PHYSICAL REVIEW B, 75 (19). p. 195429. [Journal article]
Li, ZL, Jaroniec, M, Papakonstantinou, P, Tobin, JM, Vohrer, U, Kumar, S, Attard, G and Holmes, JD (2007) Supercritical fluid growth of porous carbon nanocages. CHEMISTRY OF MATERIALS, 19 (13). pp.
3349-3354. [Journal article]
Ray, SC, Pao, CW, Tsai, HM, Chiou, JW, Pong, WF, Chen, CW, Tsai, MH, Papakonstantinou, P, Chen, LC and Chen, KH (2007) A comparative study of the electronic structures of oxygen- and chlorine-treated
nitrogenated carbon nanotubes by x-ray absorption and scanning photoelectron microscopy. APPLIED PHYSICS LETTERS, 91 (20). p. 202102. [Journal article]
Ray, SC, Pao, CW, Tsai, HM, Chiou, JW, Pong, WF, Chen, CW, Tsai, MH, Papakonstantinou, P, Chen, LC, Chen, KH and Graham, WG (2007) Electronic structures and bonding properties of chlorine-treated
nitrogenated carbon nanotubes: X-ray absorption and scanning photoelectron microscopy studies. APPLIED PHYSICS LETTERS, 90 (19). p. 192107. [Journal article]
Ray, SC, Pao, CW, Tsai, HM, Chiou, JW, Pong, WF, Tsai, MH, Okpalugo, TIT, Papakonstantinou, P and Pi, TW (2007) Enhancement of sp(3)-bonding in high-bias-voltage grown diamond-like carbon thin films
studied by x-ray absorption and photoemission spectroscopy. JOURNAL OF PHYSICS-CONDENSED MATTER, 19 (17). p. 176204. [Journal article]
Vohrer, U, Holmes, J, Li, Z, Teh, A, Papakonstantinou, P, Ruether , M and Blau, W (2007) Tailoring the Wettability of Carbon Nanotube Powders, Bucky Papers and Vertically Aligned Nanofibers by Plasma
Assisted Functionalization. Journal of Nanotechnology Online (Azojono), 3 . pp. 1-12. [Journal article]
Erdem, A, Papakonstantinou, P and Murphy, H (2006) Direct DNA hybridization at disposable graphite electrodes modified with carbon nanotubes. ANALYTICAL CHEMISTRY, 78 (18). pp. 6656-6659. [Journal
Fang, WC, Huang, JH, Sun, CL, Chen, LC, Papakonstantinou, P and Chen, KH (2006) Superior electrochemical performance of CNx nanotubes using TiSi2 buffer layer on Si substrates. JOURNAL OF VACUUM
SCIENCE & TECHNOLOGY B, 24 (1). pp. 87-90. [Journal article]
Fang, WC, Sun, CL, Huang, JH, Chen, LC, Chyan, O, Chen, KH and Papakonstantinou, P (2006) Enhanced electrochemical properties of arrayed CNx nanotubes directly grown on Ti-buffered silicon
substrates. ELECTROCHEMICAL AND SOLID STATE LETTERS, 9 (3). A175-A178. [Journal article]
Lemoine, P, Quinn, JP, Maguire, PD, Papakonstantinou, P and Dougan, N (2006) Rheological analysis of creep in hydrogenated amorphous carbon films. THIN SOLID FILMS, 514 (1-2). pp. 223-230. [Journal
Murphy, H, Papakonstantinou, P and Okpalugo, TIT (2006) Raman study of multiwalled carbon nanotubes functionalized with oxygen groups. JOURNAL OF VACUUM SCIENCE & TECHNOLOGY B, 24 (2). pp. 715-720.
[Journal article]
Roy, SS, McCann, R, Papakonstantinou, P, McLaughlin, JAD, Kirkman, IW, Bhattacharya, Basab and Silva, SRP (2006) Near edge x-ray absorption fine structure study of aligned pi-bonded carbon structures
in nitrogenated ta-C films. JOURNAL OF APPLIED PHYSICS, 99 (4). 043511-043515. [Journal article]
Roy, SS, Papakonstantinou, P, Okpalugo, TIT and Murphy, H (2006) Temperature dependent evolution of the local electronic structure of atmospheric plasma treated carbon nanotubes: Near edge x-ray
absorption fine structure study. JOURNAL OF APPLIED PHYSICS, 100 (5). 053703. [Journal article]
Abbas, GA, Papakonstantinou, P and McLaughlin, JAD (2005) Investigation of local ordering and electronic structure in Si- and hydrogen-doped tetrahedral amorphous carbon thin films. APPLIED PHYSICS
LETTERS, 87 (25). p. 251918. [Journal article]
Abbas, GA, Papakonstantinou, P, McLaughlin, JAD, Weijers-Dall, TDM, Elliman, RG and Filik, J (2005) Hydrogen softening and optical transparency in Si-incorporated hydrogenated amorphous carbon films.
JOURNAL OF APPLIED PHYSICS, 98 (10). p. 103505. [Journal article]
Abbas, GA, Papakonstantinou, P, Okpalugo, TIT, McLaughlin, JAD, Filik, J and Harkin-Jones, E (2005) The improvement in gas barrier performance and optical transparency of DLC-coated polymer by
silicon incorporation. THIN SOLID FILMS, 482 (1-2). pp. 201-206. [Journal article]
Abbas, GA, Roy, SS, Papakonstantinou, P and McLaughlin, JAD (2005) Structural investigation and gas barrier performance of diamond-like carbon based films on polymer substrates. CARBON, 43 (2). pp.
303-309. [Journal article]
Ahmad, I, Roy, SS, Maguire, PD, Papakonstantinou, P and McLaughlin, JAD (2005) Effect of substrate bias voltage and substrate on the structural properties of amorphous carbon films deposited by
unbalanced magnetron sputtering. THIN SOLID FILMS, 482 (1-2). pp. 45-49. [Journal article]
Maguire, PD, McLaughlin, JAD, Okpalugo, TIT, Lemoine, P, Papakonstantinou, P, McAdams, ET, Needham, M, Ogwu, AA, Ball, M and Abbas, GA (2005) Mechanical stability, corrosion performance and
bioresponse of amorphous diamond-like carbon for medical stents and guidewires. DIAMOND AND RELATED MATERIALS, 14 (8). pp. 1277-1288. [Journal article]
McCann, R, Roy, SS, Papakonstantinou, P, Abbas, GA and McLaughlin, JAD (2005) The effect of thickness and arc current on the structural properties of FCVA synthesised ta-C and ta-C : N films. DIAMOND
AND RELATED MATERIALS, 14 (3-7, Sp. Iss. SI). pp. 983-988. [Journal article]
McCann, R, Roy, SS, Papakonstantinou, P, Ahmad, I, Maguire, PD, McLaughlin, JAD, Petaccia, L, Lizzit, S and Goldoni, A (2005) NEXAFS study and electrical properties of nitrogen-incorporated
tetrahedral amorphous carbon films. DIAMOND AND RELATED MATERIALS, 14 (3-7, Sp. Iss. SI). pp. 1057-1061. [Journal article]
McCann, R, Roy, SS, Papakonstantinou, P, Bain, MF, Gamble, HS and McLaughlin, JAD (2005) Chemical bonding modifications of tetrahedral amorphous carbon and nitrogenated tetrahedral amorphous carbon
films induced by rapid thermal annealing. THIN SOLID FILMS, 482 (1-2). pp. 34-40. [Journal article]
McCann, R, Roy, SS, Papakonstantinou, P, McLaughlin, JAD and Ray, SC (2005) Spectroscopic analysis of a-C and a-CNx films prepared by ultrafast high repetition rate pulsed laser deposition. JOURNAL
OF APPLIED PHYSICS, 97 (7). pp. 73522-1. [Journal article]
Okapalugo, TIT, Papakonstantinou, P, Murphu, H, McLaughlin, JAD and Brown, NMD (2005) Oxidative functionalization of carbon nanotubes in atmospheric pressure filamentary dielectric barrier discharge
(APDBD). Carbon, 43 (14). p. 2951. [Journal article]
Okpalugo, TIT, Papakonstantinou, P, Murphy, H, McLaughlin, JAD and Brown, NMD (2005) High resolution XPS characterization of chemical functionalised MWCNTs and SWCNTs. Carbon, 43 (1). pp. 153-161.
[Journal article]
Okpalugo, TIT, Papakonstantinou, P, Murphy, H, McLaughlin, JAD, Brown, NMD and McNally, T (2005) Surface-to-Depth Analysis of Functionalized Multi-Wall Carbon Nanotubes (FMWCNTS). Fullerenes,
Nanotubes and Carbon Nanostructures, 13 (1). p. 477. [Journal article]
Papakonstantinou, P, Kern, R, Robinson, L, Murphy, H, Irvine, J, McAdams, ET, McLaughlin, JAD and McNally, T (2005) Fundamental Electrochemical Properties of Carbon Nanotube Electrodes. Fullerenes,
Nanotubes and Carbon Nanostructures, 13 (2). pp. 91-108. [Journal article]
Ray, SC, Okpalugo, TIT, Pao, CW, Tsai, HM, Chiou, JW, Jan, JC, Pong, WF, Papakonstantinou, P, McLaughlin, JAD and Wang, WJ (2005) Electronic structure and photoluminescence study of silicon doped
diamond like carbon (Si : DLC) thin films. MATERIALS RESEARCH BULLETIN, 40 (10). pp. 1757-1764. [Journal article]
Ray, SC, Okpalugo, TIT, Papakonstantinou, P, Bao, CW, Tsai, HM, Chiou, JW, Jan, JC, Pong, WF, McLaughlin, JAD and Wang, WJ (2005) Electronic structure and hardening mechanism of Si-doped/undoped
diamond-like carbon films. THIN SOLID FILMS, 482 (1-2). pp. 242-247. [Journal article]
Ray, SC, Pao, CW, Chiou, JW, Tsai, HM, Jan, JC, Pong, WF, McCann, R, Roy, SS, Papakonstantinou, P and McLaughlin, JAD (2005) Electronic properties of a-CNx thin films: An x-ray-absorption and
photoemission spectroscopy study. JOURNAL OF APPLIED PHYSICS, 98 (3). 033708. [Journal article]
Roy, SS, McCann, R, Papakonstantinou, P, Maguire, PD and McLaughlin, JAD (2005) The structure of amorphous carbon nitride films using a combined study of NEXAFS, XPS and Raman spectroscopies. THIN
SOLID FILMS, 482 (1-2). pp. 145-150. [Journal article]
Abbas, GA, Papakonstantinou, P and McLaughlin, JAD (2004) X-ray reflectivity, photoelectron and nanoindentation studies of tetrahedral amorphous carbon (ta-C) films synthesized by double bend
cathodic arc. DIAMOND AND RELATED MATERIALS, 13 (4-8). pp. 1486-1490. [Journal article]
McCann, R, Roy, SS, Papakonstantinou, P, Maguire, PD and McLaughlin, JAD (2004) An investigation of the structural changes of ta-C and ta-C:N nano films as a function of thickness and arc current.
Bulletin of the American Physics Society, 49 (9). p. 30. [Journal article]
Ray, SC, Bao, CW, Tsai, HM, Chiou, JW, Jan, JC, Kumar, KPK, Pong, WF, Tsai, MH, Wang, WJ, Hsu, CJ, Okpalugo, TIT, Papakonstantinou, P and McLaughlin, JAD (2004) Electronic structure and bonding
properties of Si-doped hydrogenated amorphous carbon films. Applied Physics Letters, 85 (18). pp. 4022-4024. [Journal article]
Roy, SS, Papakonstantinou, P, Abbas, GA, McCann, R, Quinn, JP and McLaughlin, JAD (2004) Bonding configurations in DBOP-FCVA nitrogenated tetrahedral amorphous carbon films studied by Raman and X-ray
photoelectron spectroscopies. Diamond and Related Materials, 13 (4-8). pp. 1459-1463. [Journal article]
Roy, SS, Papakonstantinou, P, McCann, R, McLaughlin, JAD, Klini, A and Papadogiannis, N (2004) Bonding configurations in amorphous carbon and nitrogenated carbon films synthesised by femtosecond
laser deposition. Applied Physics A, 79 (4-6). pp. 1009-1014. [Journal article]
Papakonstantinou, P, Zhao, JF, Lemoine, P, McAdams, ET and McLaughlin, JAD (2002) The effects of Si incorporation on the electrochemical and nanomechanical properties of DLC thin films. DIAMOND AND
RELATED MATERIALS, 11 (3-6, Sp. Iss. SI). pp. 1074-1080. [Journal article]
Papakonstantinou, P, Zhao, JF, Richardot, A, McAdams, ET and McLaughlin, JAD (2002) Evaluation of corrosion performance of ultra-thin Si-DLC overcoats with electrochemical impedance spectroscopy.
DIAMOND AND RELATED MATERIALS, 11 (3-6, Sp. Iss. SI). pp. 1124-1129. [Journal article]
Mckay, K, Papakonstantinou, P, Dodd, PM, Atkinson, R and Pollard, RJ (2001) Microstructure, magnetic and nanomechanical properties of FeTaN films prepared by co-sputtering. JOURNAL OF PHYSICS
D-APPLIED PHYSICS , 34 . pp. 41-47. [Journal article]
Papakonstantinou, P and Lemoine, P (2001) Influence of nitrogen on the structure and nanomechanical properties of pulsed laser deposited tetrahedral amorphous carbon. JOURNAL OF PHYSICS-CONDENSED
MATTER , 13 . pp. 2971-2987. [Journal article]
Papakonstantinou, P, Somasundram, K, Cao, X and Nevin, A (2001) Crystal surface defects and oxygen gettering in thermally oxidized bonded SOI wafers. JOURNAL OF THE ELECTROCHEMICAL SOCIETY , 148 .
G36-G42. [Journal article]
Papakonstantinou, P, Zeze, DA, Klini, A and McLaughlin, JAD (2001) Chemical bonding and nanomechanical studies of carbon nitride films synthesised by reactive pulsed laser deposition. Diamond and
Related Materials, 10 (3-7). pp. 1109-1114. [Journal article]
Papakonstantinou, P, Lemoine, P, McLaughlin, JAD, MacKay, K, Dodd, PM, Polard, RJ and Atkinson, R (2000) Nanoindentation studies of FeXN (X=Ta, Ti) soft magnetic films. JOURNAL OF APPLIED PHYSICS, 87
. pp. 6170-6172. [Journal article]
Mailis, S, Zergioti, I, Koundourakis, G, Ikiades, A, Patentalaki, A, Papakonstantinou, P, Vainos, NA and Fotakis, C (1999) Etching and printing of diffractive optical microstructures by a femtosecond
excimer laser. APPLIED OPTICS, 38 . pp. 2301-2308. [Journal article]
Papakonstantinou, P, Vainos, NA and Fotakis, C (1999) Microfabrication by UV femtosecond laser ablation of Pt, Cr and indium oxide thin films. Applied Surface Science, 151 . pp. 159-170. [Journal article]
Papakonstantinou, P, O'Neill, M.C., Atkinson, R, Al-Wazzan, R, Morrow, T and Salter, IW (1998) Influence of oxygen pressure on the expansion dynamic of Ba-hexaferrite ablation plumes and on the
properties of deposited thin films. Journal of Magnetism and Magnetic Materials, 189 . pp. 120-129. [Journal article]
Papakonstantinou, P, O'Neill, MC, Atkinson, R, Al-Wazzan, R, Morrow, T and Salter, IW (1998) Emission studies of Ba hexaferrite plume produced by a KrF excimer laser. Journal of Applied Physics, 83 .
6858-3 pages. [Journal article]
Zergioti , I, Mailis, S, Vainos, NA, Papakonstantinou, P, Kalpouzos, C, Grigoropoulos, CP and Fotakis, C (1998) Microdeposition of metal and oxide structures using ultrashort laser pulses. Applied
Physics A: Materials Science & Processing, 66 . pp. 579-582. [Journal article]
Zimsa, Z, Gerber, R, Reid, T, Tesar, R, Atkinson, R and Papakonstantinou, P (1998) OPTICAL ABSORPTION AND FARADAY ROTATION OF BARIUM HEXAFERRITE FILMS PREPARED BY LASER ABLATION DEPOSITION. Journal
of Physics and Chemistry of Solids, 59 . pp. 111-119. [Journal article]
Dodd, PM, Atkinson, R, Papakonstantinou, P, Araghi, MS and Gamble, HS (1997) Correlation between crystalline structure and soft magnetic properties in sputtered sendust films. JOURNAL OF APPLIED
PHYSICS , 81 . pp. 4104-4106. [Journal article]
Papakonstantinou, P, Teggart, B and Atkinson, R (1997) Characterisation of Pulsed Laser Deposited Bi Doped Dy Iron Garnet Thin Films on GGG(111), GGG(110), YSZ(100) and Si(100). JOURNAL DE PHYSIQUE
IV , 7 (C1). pp. 475-476. [Journal article]
Gerber, R, Ried, T, Atkinson, R, Papakonstantinou, P, Simsova, J, Cernansky, M, Gemperle, R, Jurek, K and Studnicka, V (1996) Structural and magnetic properties of BaCoxTiyFe12-x-yO19 films. Journal
of Magnetism and Magnetic Materials, 157-158 . pp. 295-296. [Journal article]
Kambersky, V, Simsova, J, Gemperle, R, Gerber, R and Papakonstantinou, P (1996) Experimental and theoretical domain periods in BaCoxTiyFe12-x-yO19. Journal of the Magnetics Society of Japan , 20
(Supplement No S1). pp. 365-367. [Journal article]
Papakonstantinou, P, O'Neill, M, Atkinson, R, Salter, IW and Gerber, R (1996) ORIENTED BARIUM, STRONTIUM FERRITE FILMS PULSED LASER DEPOSITED ON YSZ(100) AND Si(100) SUBSTRATES. Journal of the
Magnetics Society of Japan , 20 (Supplement No S1). p. 336. [Journal article]
Papakonstantinou, P, O'Neill , M, Atkinson, R, Salter, IW and Gerber, R (1996) Substrate temperature and oxygen pressure dependence of pulsed laser-deposited Sr ferrite films. Journal of Magnetism
and Magnetic Materials , 152 . pp. 401-410. [Journal article]
Papakonstantinou, P, Teggart, B and Atkinson, R (1996) The effects of substrate temperature and oxygen pressure on pulsed laser deposited Bi-substituted Dy iron garnet films. Journal of Magnetism and
Magnetic Materials, 163 . pp. 378-392. [Journal article]
Papakonstantinou, P, Teggart, B, Atkinson, R and Salter, IW (1996) STRUCTURAL AND MAGNETO-OPTICAL PROPERTIES OF PULSED LASER DEPOSITED Bi SUBSTITUTED IRON GARNET FILMS. Journal of the Magnetics
Society of Japan , 20 (Supplement No S1). pp. 337-340. [Journal article]
Simsa, Z, Zemek, J, Simsova, J, Gerber, R, Papakonstantinou, P and Atkinson, R (1996) Magneto-optical and xps spectra of cobalt and titanium substituted hexaferrite films. Journal of the Magnetics
Society of Japan , 20 (Supplement No S1). pp. 117-120. [Journal article]
Atkinson, R, Kubrakov, NF, O'Neill, M and Papakonstantinou, P (1995) Visualisation of magnetic domain structures through the interaction of their stray fields with magneto-optic garnet films. Journal
of Magnetism and Magnetic Materials, 149 . pp. 418-424. [Journal article]
Papakonstantinou, P, Atkinson, R, O'Neill, M, Salter, IW and Gerber, R (1995) Magneto-Optical Properties of Sr-Ferrite Films Produced by Pulsed Laser Ablation. IEEE TRANSACTIONS ON MAGNETICS, 31 .
pp. 3283-3285. [Journal article]
Papakonstantinou, P, Atkinson, R, Salter, IW and Gerber, R (1995) CoTi-SUBSTITUTED Ba-FERRITE FILMS PREPARED BY PULSED LASER DEPOSITION. Journal of the Magnetics Society of Japan , 19 (Supplement No
S1). pp. 177-180. [Journal article]
Simsova, J, Gemperle, R, Kambersky, R, Cernansky, M, Gerber, R, Papakonstantinou, P and Atkinson, R (1995) Thickness dependence of domain period in BaCoxTiyFe12-x-yO19. Journal of Magnetism and
Magnetic Materials , 148 . pp. 247-248. [Journal article]
Atkinson, R, Papakonstantinou, P, Salter, IW and Gerber, R (1994) Optical and magneto-optical properties of Co-Ti-substituted barium hexaferrite single crystals and thin films produced by laser
ablation deposition. Journal of Magnetism and Magnetic Materials, 128 . pp. 222-231. [Journal article]
Masterson, HG, Lunney, JG, Coey, JMD, Atkinson, R, Salter, IW and Papakonstantinou, P (1993) Thin films of barium ferrite with perpendicular magnetic anisotropy produced by laser ablation deposition.
Journal of Applied Physics, 73 . pp. 3917-3921. [Journal article]
Atkinson, R, Gerber, R, Papakonstantinou, P and Zimsa, Z (1992) Optical and magneto-optical properties of Co/Ti substituted barium ferrite. Journal of Magnetism and Magnetic Materials, 104-107 . pp.
1005-1006. [Journal article]
This list was generated on Wed Apr 16 05:50:44 2014 BST. | {"url":"http://eprints.ulster.ac.uk/view/author_or_editor/193.default.html","timestamp":"2014-04-16T04:50:44Z","content_type":null,"content_length":"61073","record_id":"<urn:uuid:3ab74465-db7b-490c-9fa3-bea070b81280>","cc-path":"CC-MAIN-2014-15/segments/1398223207046.13/warc/CC-MAIN-20140423032007-00085-ip-10-147-4-33.ec2.internal.warc.gz"} |
A New Kind of Science: The NKS Forum
All Liar, No Paradox
Posted by: Jon Awbrey
ALNP. Note 1
According to my understanding of it,
the so-called Liar Paradox is just the
most simple-minded of fallacies, involving
nothing more mysterious than the acceptance
of a false assumption, from which anybody can
prove anything at all.
Let us contemplate one of the shapes in which
the re*putative Liar Paradox is commonly cast:
Somebody writes down:
1. Statement 1 is false.
Then you are led to reason:
If Statement 1 is false then
by the principle that permits
the substitution of equals in
a true statement to obtain
yet another true statement,
you can derive the result:
"Statement 1 is false" is false.
Ergo, Statement 1 is true,
and so on, and so on,
ad nauseum infinitum.
Where did you go wrong?
Where were you misled?
As it happens, graphical reasoning does help
to clear this up -- at least, it did for me --
if only because the process of translating
the purported reasoning into another form
of representation gave me a crucial clue
as to where the wool was being pulled.
Just here, to wit, where it is writ:
1. Statement 1 is false.
What is this really saying?
Well, it's the same as writing:
Statement 1. Statement 1 is false.
And what the heck does this dot.comment say?
It is inducing you to accept this identity:
"Statement 1" = "Statement 1 is false".
That appears to be a purely syntactic indexing,
the sort of thing you are led to believe that
you can do arbitrarily, with logical impunity.
But you cannot, for syntactic identity implies
logical equivalence, and that is liable to find
itself constrained by iron bands of logical law.
And you cannot, not with logical impunity, assume the result
of this transmutation, which would be as much as to say this:
"Statement 1" = "Negation of Statement 1"
To write down the last step in the form that I like:
(( Statement_1 , ( Statement_1 ) ))
And this my friends, call it "Statement 0",
is purely and simply a false statement,
with no hint of paradox about it.
Here is Statement 0 in cactus syntax:
| ` ` ` ` ` ` ` ` ` ` ` ` ` ` |
| ` ` ` ` ` ` ` `s_1` ` ` ` ` |
| ` ` ` ` ` ` ` ` o ` ` ` ` ` |
| ` ` ` ` ` ` ` ` | ` ` ` ` ` |
| ` ` ` ` `s_1` ` | ` ` ` ` ` |
| ` ` ` ` ` o-----o ` ` ` ` ` |
| ` ` ` ` ` `\` `/` ` ` ` ` ` |
| ` ` ` ` ` ` \`/ ` ` ` ` ` ` |
| ` ` ` ` ` ` `o` ` ` ` ` ` ` |
| ` ` ` ` ` ` `|` ` ` ` ` ` ` |
| ` ` ` ` ` ` `|` ` ` ` ` ` ` |
| ` ` ` ` ` ` `@` ` ` ` ` ` ` |
| ` ` ` ` ` ` ` ` ` ` ` ` ` ` |
| ` ` ` (( s_1, (s_1) ))` ` ` |
Figure 0. Statement 0
Statement 0 was slipped into your drink
before you were even starting to think.
A bit before you were led to substitute
you should have examined more carefully
the site proposed for the substitution!
For the principle that you rushed to use
does not permit you to substitute unequals
into a statement that is false to begin with,
not just in the first place, but even before,
in the zeroth place of argument, as it were,
and still expect to come up with a truth.
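Awbrey's point here — that "Statement_1 <=> Not Statement_1" is simply false, not paradoxical — can be checked mechanically. A minimal sketch (a plain Python truth-table check, not part of the original post or its cactus syntax):

```python
# Exhaustive truth-table check that Statement 0 -- the biconditional
# "s1 if and only if not s1" -- is false under every assignment of s1,
# i.e. a plain contradiction rather than a paradox.
def statement_0(s1: bool) -> bool:
    return s1 == (not s1)

for s1 in (False, True):
    assert statement_0(s1) is False

print("Statement 0 is false for every truth value of s1")
```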
Now let that be the end of that.
Jon Awbrey
Posted by: Philip Ronald Dutton
According to my understanding of it,
the so-called Liar Paradox is just the
most simple-minded of fallacies, involving
nothing more mysterious than the acceptance
of a false assumption, from which anybody can
prove anything at all.
Let us pretend again that the universe has NKS rudiments. Let us assume that the human thinking within the universe is also a product of rudimentary NKS (simple programs, algorithmic, etc.).
Well apparently, the rudiments allowed the universe to "step" to this new universe configuration in which the above assumption, which was false, was accepted.
The act of accepting a false assumption is apparently allowed by the algorithmic rudiments.
Posted by: Jon Awbrey
ALNP. Discussion Note 1
JA = Jon Awbrey
PD = Philip Dutton
JA: According to my understanding of it,
the so-called Liar Paradox is just the
most simple-minded of fallacies, involving
nothing more mysterious than the acceptance
of a false assumption, from which anybody can
prove anything at all.
PD: Let us pretend again that the universe has NKS rudiments.
Let us assume that the human thinking within the universe
is also a product of rudimentary NKS (simple programs,
algorithmic, etc.)
I think I've heard this one before --
| If we pretend that a tail is a leg,
| then how many legs does a dog have?
As good as I am at pretending, this would be a difficult pretense,
even for me. The way I pretend to understand it, the universe of
percourse that we drum up when we talk of "algorithmic rudiments",
and the whole repertoire of flim-flam paradigm-a-doodles that go
along with it, is just the space of recursive partial functions,
and last I counted there were only a countable number of these.
So the "universe at large" (UAL), unless it turns out to be
a "very large but still finite automaton" (VLBSFA), is just
sure to have all sorts of non-algorithmic happenings in it.
But even Aristotle already grasped the circumstance that
not all happenings in the world of phenomena fall within
the purview of science, but only the happenings that are,
as we say, "goings-on", in the sense of having a general
distribution throughout the experience of a community of
inquiry, in particular, persisting without limit in time.
Every method has a limit, indeed, the limit that makes it a method.
The more that one peers at the respective boundaries and interiors,
the more it begins to appear that the same limitation that gives a
method to science could very well be the same limitation to finite
means that stakes out the horizon of the computable.
To make a long story short, there is a sense of pretending that I can
entertain following directions with here, the sense in which a bit of
play-acting on some formal stage or other provides us with a model of
what goes on in the world outside the stage doors. But that requires
intelligent interpretation, not to mention cognizing the abbreviation,
the bias, the compression, and the distortion that is part and parcel
to the relationship between a reality and its finitary representation.
Jon Awbrey
PD: Well apparently, the rudiments allowed the universe
to "step" to this new universe configuration in which
the above assumption, which was false, was accepted.
PD: The act of accepting a false assumption
is apparently allowed by the algorithmic rudiments.
Posted by: Gunnar Tomasson
Re. the following:
JA: According to my understanding of it,
the so-called Liar Paradox is just the
most simple-minded of fallacies, involving
nothing more mysterious than the acceptance
of a false assumption, from which anybody can
prove anything at all.
What is a "false assumption"?
Posted by: Jon Awbrey
ALNP. Discussion Note 2
In this particular case, the false assumption was the
proposition "Statement_1 <=> Not Statement_1", that
I labeled as "Statement_0", and that contradicts
an axiom of logic.
Jon Awbrey
Posted by: Gunnar Tomasson
From the vantage point of one interested in the epistemological aspects of modern physical science, the concept of a "false assumption" is problematic.
The nature of the problem is reflected in Einstein's statement in August 1954 that "it is quite possible that physics cannot be founded on the concept of field - that is to say, on continuous elements."
Here, the concept of "field" represents (a) an "assumption" about structural aspects of physical reality, and (b) a mathematical given for contemporary theoretical models thereof, where (b) is to (a)
as "map" is to "territory".
But, as indicated by Einstein's remark, it is quite possible for an "assumption" to be "true" with respect to theoretical models of physical reality (b) and "false" with respect to such reality
itself (a).
That is to say, an "assumption" - axiom - may be consistent with respect to (b) and inconsistent with respect to (a).
A case in point.
In contemporary physics, the concept of Black Hole grew out of the "assumption" that "physics can be founded on the concept of field - that is to say, on continuous elements".
And, while physicists will concede that, in principle, their theories can never be 'proved' but only 'falsified', in practice their modus operandi is such that it is logically impossible to
'falsify', say, the Black Hole theory because it is always and necessarily consistent or "true" with respect to its underlying "assumptions".
Einstein incurred the wrath of his peers by making the like point with respect to their Quantum Mechanical orthodoxy - and, as an intellectual outcast within the community of theoretical physicists
for the last three decades of his life, he called their spade a spade in 1949 as follows:
"Science without epistemology is - insofar as it is thinkable at all - primitive and muddled."
Posted by: Jon Awbrey
ALNP. Discussion Note 3
Just headed off to dreamland, where I usually
do all my best thinking, and so I will have to
sleep on this question for now.
But let me just mention the distinction between
descriptive sciences, like physics or psychology,
and normative sciences, like aesthetics, ethics,
and logic.
The so-called Liar Paradox is normally presented
as a difficulty within purely classical logic,
and so I analyzed it in that context.
This is a very different matter from the
approximate and even desirably defeasible
character of all contingent empirical laws.
(Tomorrow &)* ...
Jon Awbrey
Posted by: Jon Awbrey
ALNP. Note 2
| Algebraic Calculation
| For algebras, two rules
| are commonly accepted
| as implicit in the
| use of the sign =.
| Rules of Substitution and Replacement
| Rule 1. Substitution
| If e = f, and if h is an expression constructed
| by substituting f for any appearance of e in g,
| then g = h.
| Rule 2. Replacement
| If e = f, and if every token of a given independent
| variable expression v in e = f is replaced by an
| expression w, it not being necessary for v, w
| to be equivalent or for w to be independent
| or variable, and if as a result of this
| procedure e becomes j and f becomes k,
| then j = k.
| George Spencer Brown, 'Laws of Form',
| George Allen & Unwin, London, UK, 1969,
| combining texts at pp. 26-27 and p. 140.
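The two rules quoted above can be illustrated concretely. The sketch below is a toy Python rendering of Rule 1 (Substitution) over invented nested-tuple expressions, not Spencer Brown's own calculus:

```python
# Rule 1 says: if e = f, then replacing an appearance of e inside g
# by f yields an h with g = h.  Expressions here are hypothetical
# nested tuples evaluated over booleans.
def substitute(expr, e, f):
    """Return expr with every appearance of sub-expression e replaced by f."""
    if expr == e:
        return f
    if isinstance(expr, tuple):
        return tuple(substitute(sub, e, f) for sub in expr)
    return expr

def ev(expr, env):
    """Evaluate ('not', x) / ('and', x, y) trees over an environment."""
    if isinstance(expr, str):
        return env[expr]
    op = expr[0]
    if op == 'not':
        return not ev(expr[1], env)
    if op == 'and':
        return ev(expr[1], env) and ev(expr[2], env)

e = ('not', ('not', 'x'))      # e ...
f = 'x'                        # ... equals f for every x
g = ('and', ('not', ('not', 'x')), 'y')
h = substitute(g, e, f)        # ('and', 'x', 'y')

for x in (False, True):
    for y in (False, True):
        env = {'x': x, 'y': y}
        assert ev(g, env) == ev(h, env)   # g = h, as Rule 1 asserts
```

Rule 2 (Replacement) would be the analogous operation applied uniformly to a variable on both sides of an equation.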
Posted by: Jon Awbrey
ALNP. Note 3
| As far as the laws of mathematics refer to reality, they are not
| certain; and as far as they are certain, they do not refer to
| reality. It seems to me that complete clearness as to this
| state of things first became common property through that
| new departure in mathematics which is known by the name
| of mathematical logic or "Axiomatics". The progress
| achieved by axiomatics consists in its having neatly
| separated the logical-formal from its objective or
| intuitive content.
| Albert Einstein, "Geometry and Experience" (1921),
| in 'Sidelights on Relativity', Dover, 1983, p. 28-29.
| http://www.bun.kyoto-u.ac.jp/~suchii/EonGeometry.html
Posted by: Jon Awbrey
ALNP. Note 4
| Matter is potentiality (dynamis), while form is
| realization or actuality (entelecheia), and the
| word actuality is used in two senses, illustrated
| by the possession of knowledge (episteme) and the
| exercise of it (theorein).
| So the soul (psyche) must be substance (ousia)
| in the sense of being the form (eidos) of a
| natural body (soma), which potentially (dynamei)
| has life (zoe). And substance in this sense is
| actuality (entelecheia).
| Aristotle, "Peri Psyche", 2.1.
| The passage from power to entelechy takes place
| by means of change (kinesis). This is the
| imperfect energy, the perfected energy
| is the entelechy.
| C.S. Peirce, 'Chronological Edition', CE 5, p. 404.
| I shall, therefore, venture to call [Sum (mv^2)/2]
| the kinetic act or kinetic energy, and the negative
| of the potential, the kinetic power or kinetic
| potency. For the sum of the two I can think of
| no better term than 'motivity' or 'kinesis'.
| C.S. Peirce, 'Chronological Edition', CE 5, p. 275n.
| Tho' obscur'd, this is the form of the Angelic land.
| William Blake, "America",
| Inductory Envoi to the American Edition of:
| George Spencer Brown, 'Laws of Form', 1972.
| In arriving at proofs, I have often been struck
| by the apparent alignment of mathematics with
| psycho-analytic theory. In each discipline
| we attempt to find out, by a mixture of
| contemplation, symbolic representation,
| communion, and communication, what
| it is we already know.
| George Spencer Brown, 'Laws of Form', p. xix.
| One of the motives prompting the furtherance of
| the present work was the hope of bringing together
| the investigations of the inner structure of our
| knowledge of the universe, as expressed in the
| mathematical sciences, and the investigations of
| its outer structure, as expressed in the physical
| sciences. Here the work of Einstein, Schrodinger,
| and others seems to have led to the realization of
| an ultimate boundary of physical knowledge in the
| form of the media through which we perceive it.
| George Spencer Brown, 'Laws of Form', p. xxi.
| What is encompassed, in mathematics, is a
| transcedence from a given state of vision to a
| new, and hitherto unapparent, vision beyond it.
| When the present existence has ceased to make
| sense, it can still come to sense again through
| the realization of its form.
| George Spencer Brown, 'Laws of Form', p. xxiii.
| [ Yes, it's "transcedence", not "transcendence"!!! ]
| [ Sic transit gloria mundi, what a diff "n" makes! ]
| One of the most beautiful facts emerging from
| mathematical studies is this very potent relation-
| ship between the mathematical process and ordinary
| language. There seems to be no mathematical idea
| of any importance or profundity that is not
| mirrored, with an almost uncanny accuracy, in the
| common use of words, and this appears especially
| true when we consider words in their original,
| and sometimes long forgotten, senses.
| George Spencer Brown, 'Laws of Form', pp. 90-91.
| Thus we do not imagine the wave train emitted by
| an excited finite echelon to be exactly like the
| wave train emitted from an excited physical
| particle. For one thing the wave form from an
| echelon is square, and for another it is emitted
| without energy. (We should need, I guess, to make
| at least one more departure from the form before
| arriving at a conception of energy on these lines.)
| George Spencer Brown, 'Laws of Form', p. 100.
| Ladies and gentlemen, I have set this aspect
| of exact science before you because in it the
| affinity with the fine arts becomes most plainly
| visible, and because here one may counter the
| misapprehension that natural science and
| technology are concerned solely with precise
| observation and rational, discursive thought.
| To be sure, this rational thinking and careful
| measurement belong to the scientist's work, just
| as the hammer and chisel belong to the work of the
| sculptor. But in both cases they are merely the
| tools and not the content of the work.
| Werner Heisenberg,
|"The Meaning of Beauty in the Exact Sciences",
|'Across the Frontiers', p. 182.
Forum Sponsored by Wolfram Research
Do you know a website with...
Hi, I'm looking for some interesting integrals to evaluate, could someone please direct me towards some resources?
Did a Google search for: practice integrals interesting. Relevant finds:
- [PDF] Practice Integration Problems MATH 182: Fall 2006
- [thread on another forum] How Good Am I? - a lot of back and forth between people trying to find a good method; may or may not be what you're looking for
Other possible searches: practice integration problems list, difficult integrals, practice integration parts, practice integration partial fractions, etc.
U.S. Climate Normals 1971-2000, Products
CLIM81 CLIMATOGRAPHY OF THE U.S. NO. 81: Monthly Station Normals
CLIM84 CLIMATOGRAPHY OF THE U.S. NO. 84: Daily Station Normals
Secondary Products
CLIM85 CLIMATOGRAPHY OF THE U.S. NO. 85: Monthly Divisional Normals/Standard Deviations
CLIM20 CLIMATOGRAPHY OF THE U.S. NO. 20: Station Climatological Summaries
Supplemental Products
CLIM81-01 CLIMATOGRAPHY OF THE U.S. NO. 81 - Supplement 1: Monthly Precipitation Probabilities
CLIM81-02 CLIMATOGRAPHY OF THE U.S. NO. 81 - Supplement 2: Annual Degree Days to Selected Bases
CLIM20-01 CLIMATOGRAPHY OF THE U.S. NO. 20 - Supplement 1: Frost/Freeze Data
HCS 4-1;2 HISTORICAL CLIMATOLOGY SERIES 4-1, 4-2: Area-Weighted State, Regional, and National
Monthly Temperature and Precipitation
HCS 5-1;2 HISTORICAL CLIMATOLOGY SERIES 5-1, 5-2: Population-Weighted State, Regional, and National
Monthly Degree Days
CLIMATOGRAPHY OF THE U.S. NO. 81
Monthly Station Normals
This product includes normals of average monthly and annual maximum, minimum, and mean temperature (degrees F), monthly and annual total precipitation (inches), and heating and cooling degree
days (base 65 degrees F) for individual locations for the 1971-2000 period. There are temperature, precipitation, and/or degree day data for 7937 stations. The locations represent sites that
are part of the Cooperative Network, National Weather Service offices, and principal climatological stations in the 50 states, Puerto Rico, Virgin Islands, and Pacific Island locations. These
locations are shown in Figure 1 for (from top to bottom) the contiguous United States, Hawaii, Alaska, Puerto Rico and the Virgin Islands, and Pacific Island locations.
Figure 1
CLIM81 Station Locations
The monthly normals are published by state, with additional publications for Puerto Rico, Virgin Islands, and Pacific Islands (District of Columbia stations are included with Maryland). The
data are arranged in four tables representing temperature, precipitation, heating degree days, and cooling degree days. The locations are listed alphabetically within each table. A station
locator map and cross reference index (with station name, number, type, location, elevation, and flags) are included in the publication for each state.
Computational Procedures
A. Adjustments to the Data
A climate normal is defined, by convention, as the arithmetic mean of a climatological element computed over three consecutive decades (WMO, 1989). Ideally, the data record for such a 30-year
period should be free of any inconsistencies in observational practices (e.g., changes in station location, instrumentation, time of observation, etc.) and be serially complete (i.e. no missing
values). When present, inconsistencies can lead to a non-climatic bias in one period of a station’s record relative to another. In that case, the data record is said to be “inhomogeneous”. Since
records are frequently characterized by data inhomogeneities, statistical methods have been developed to identify and account for these data inhomogeneities. In the application of these methods,
adjustments are made so that earlier periods in the data record more closely conform to the most recent period. Likewise, techniques have been developed to estimate values for missing
observations. After such adjustments are made, the climate record is said to be “homogeneous” and serially complete. The climate normal can then be calculated simply as the average of the 30
values for each month observed over a normals period like 1971 to 2000. By using appropriately adjusted data records, where necessary, the 30-year mean value will more closely reflect the actual
average climatic conditions at all stations.
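The definition above reduces to simple arithmetic once the record is adjusted and serially complete; a hedged sketch with invented data:

```python
# Sketch of the convention: a monthly climate normal is the arithmetic
# mean of the 30 (adjusted, serially complete) values observed for that
# month over 1971-2000.  The January series below is made up.
def monthly_normal(values_1971_2000):
    if len(values_1971_2000) != 30:
        raise ValueError("a normals period spans exactly 30 years")
    return sum(values_1971_2000) / 30.0

# 30 hypothetical January mean temperatures (degrees F)
january_means = [30.0 + 0.1 * i for i in range(30)]
print(round(monthly_normal(january_means), 2))  # -> 31.45
```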
The methodology used to address inhomogeneity and missing data value problems is described in Figure 2. As with all automated quality control and statistical adjustment techniques, only
those data errors and inhomogeneities falling outside defined statistical limits can be identified and appropriately addressed. In addition, even the best procedures can occasionally apply
corrections where none are required or misidentify the exact year of a discontinuity. In the 1971-2000 monthly normals calculations, the sequential year-month data were adjusted to conform to a
common midnight-to-midnight observation schedule. This is necessary since changes in observation time also can lead to non-climatic biases in a station’s record. The data were then quality
controlled to identify suspect observations and missing or erroneous values were estimated. Finally, the serially complete data series were adjusted for non-climatic inhomogeneities. In the
1971-2000 normals, all stations were processed through the same procedures, whereas in the 1961-1990 normals only NWS First Order stations were evaluated for inhomogeneities. Each of the steps in
the data processing procedures used in the 1971-2000 normals calculations is described briefly below.
Figure 2
CLIM81 Processing Steps (Temperature)
In order to effectively compare records among various stations, the time of observation bias, if present, must be removed. While the practice at all NWS First Order stations is to use the
calendar day (midnight recording time) for daily summaries, Cooperative Network Station observers record observations once per day summarizing the preceding 24-hour period ending generally in the
local morning or evening hours. Observations based on observation times other than midnight can exhibit a bias relative to those based on a midnight observation time (see e.g., Baker, 1975).
Moreover, observation times at any one station may change during a station’s history resulting in a potential inhomogeneity at that station. To produce records that reflect a consistent
observational schedule, the technique developed by Karl et al. (1986) was used to adjust the monthly maximum and minimum temperature observations to conform to observations recorded on a
midnight-to-midnight schedule. However, no time of observation bias adjustments were applied to stations in Alaska, Hawaii, or the U.S. possessions since no model for adjustment presently exists
for these regions.
All monthly temperature averages and precipitation totals were cross-checked against archived daily observations to ensure internal consistency. In addition, each monthly observation was
evaluated using an adaptation of the quality control procedures described by Peterson et al.(1998). In this approach, observations at each station are expressed as a departure from the long-term
monthly mean. Then, monthly anomalies at a candidate station are compared with the anomalies observed at neighboring stations. Where anomalies at the candidate disagree substantially with those
of its neighbors, the observations at the candidate are flagged as suspect and an estimate for the candidate is calculated from neighboring observations (see below). If the original observation
and the estimate differ by a wide margin (standardized using the observed frequency distribution at the station), the original is discarded in favor of the estimate. Very few observations were
eliminated based on the quality control evaluation.
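The neighbor-comparison idea can be sketched as follows; the threshold, the omitted standardization against the station's observed frequency distribution, and the data are illustrative, not NCDC's actual procedure:

```python
# Sketch of neighbor-based quality control: express the candidate month
# as an anomaly from its long-term mean, compare it with the mean
# anomaly at neighbors, and flag (and replace) the observation when the
# two disagree by more than a chosen threshold.
def qc_check(candidate_anom, neighbor_anoms, threshold=3.0):
    estimate = sum(neighbor_anoms) / len(neighbor_anoms)
    suspect = abs(candidate_anom - estimate) > threshold
    return (estimate if suspect else candidate_anom), suspect

value, flagged = qc_check(8.5, [1.2, 0.9, 1.4, 1.1])  # wildly off: replaced
print(value, flagged)
value, flagged = qc_check(1.0, [1.2, 0.9, 1.4, 1.1])  # consistent: kept
print(value, flagged)
```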
To produce a serially complete data set, missing or discarded temperature and precipitation observations were replaced using the observed relationship between a candidate’s monthly observations
and those of up to 20 neighboring stations whose observations exhibited the highest correlation with those at the candidate site. Monthly estimates are calculated using the climatological
relationship between candidate and neighbor as well as a weighting function based on the neighbor’s correlation with the candidate. For temperature estimates, neighboring stations were drawn from
the pool of stations found in the U.S. Historical Climatology Network (USHCN; Karl et al. 1990) whereas for precipitation estimates, all available stations were potentially used as neighbors in
order to maximize station density for estimating the more spatially variable precipitation values.
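The estimation step above can be sketched in the same spirit: each neighbor's anomaly is weighted by its correlation with the candidate and the weighted anomaly is added back to the candidate's climatological mean. The actual NCDC weighting function and candidate-neighbor relationship are more elaborate; this shows only the general shape, with made-up numbers.

```python
# Hedged sketch of replacing a missing monthly value from neighbors:
# a correlation-weighted average of neighbor anomalies, restored to the
# candidate's climatological mean.

def estimate_missing(candidate_mean, neighbor_anoms, correlations):
    """Correlation-weighted neighbor estimate for a missing monthly value."""
    total_w = sum(correlations)
    weighted = sum(w * a for w, a in zip(correlations, neighbor_anoms))
    return candidate_mean + weighted / total_w

# Candidate's long-term mean 50.0; three neighbors with anomalies and
# correlations (highest-correlated neighbors get the most weight):
est = estimate_missing(50.0, [2.0, 1.0, 3.0], [0.9, 0.8, 0.7])
```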
Peterson and Easterling (1994) and Easterling and Peterson (1995) outline the method that was used to adjust for temperature inhomogeneities. This technique involves comparing the record of the
candidate station with a reference series generated from neighboring data. The reference series is reconstructed using a weighted average of first difference observations (the difference from one
year to the next) for neighboring stations with the highest correlation with the candidate. The underlying assumption behind this methodology is that temperatures over a region have
similar tendencies in variation. For example, a cold winter followed by a warm winter usually occurs simultaneously for a candidate and its neighbors. If this assumption is violated, the
potential discontinuity is evaluated for statistical significance. Where significant discontinuities are detected, the difference in average annual temperatures before and after the inhomogeneity
is applied to adjust the mean of the earlier block with the mean of the latter block of data. Such an evaluation requires a minimum of five years between discontinuities. Consequently, if
multiple changes occur within five years or if a change occurs very near the end of the normals period (e.g. after 1995), the discontinuity may not be detectable using this methodology.
The methodology employed to generate the 1971-2000 normals is not the same as in previous normals calculations. For example, in the calculation of the previous normals no attempt was made to
adjust Cooperative Network observer data records for inhomogeneities other than those associated with the time of observation bias. Therefore, serial year-monthly data for overlapping periods
between normals (e.g., for the 20 years in common between the 1961-90 and 1971-2000 normals) will not necessarily be identical.
The following white paper (United States Climate Normals, 1971-2000: Inhomogeneity Adjustment Methodology) [PDF] is available regarding procedures for adjusting station data to account for
inhomogeneities due to changes in station locations, instrumentation, time of observation, surrounding environment, observing practice, sensor drift, etc. The purpose of such adjustments is to
produce a time series and normals statistics that are representative of the observing practices as of the end of the normals period (December 2000), since these are the conditions under which
future observations will likely be compared.
B. Element Computations
The monthly normals for maximum and minimum temperature and precipitation are computed simply by averaging the appropriate 30 values from the 1971-2000 record. The monthly average temperature
normals are computed by averaging the corresponding maximum and minimum normals. The annual temperature normals were calculated by taking the average of the 12 monthly normals. The annual
precipitation normals were calculated by adding the 12 monthly normals. Note that monthly precipitation totals less than 0.005 inch are shown as zero, and that precipitation includes rain and the
liquid equivalent of frozen and freezing precipitation (e.g., snow, sleet, freezing rain, and hail).
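The element computations described above are simple averages and sums, and can be transcribed directly (the sample values below are illustrative, not drawn from CLIM81):

```python
# Direct transcription of the CLIM81 element computations.

def monthly_normal(year_month_values):
    """Average of the 30 year-month values for one calendar month."""
    return sum(year_month_values) / len(year_month_values)

def average_temp_normal(tmax_normal, tmin_normal):
    """Monthly average temperature normal: mean of max and min normals."""
    return (tmax_normal + tmin_normal) / 2.0

def annual_temp_normal(monthly_normals):
    """Annual temperature normal: average of the 12 monthly normals."""
    return sum(monthly_normals) / 12.0

def annual_precip_normal(monthly_normals):
    """Annual precipitation normal: sum of the 12 monthly normals."""
    return sum(monthly_normals)

jan_avg = average_temp_normal(40.3, 22.1)   # illustrative values (deg F)
```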
Degree day normals were computed in two ways. The following white paper (United States Climate Normals, 1971-2000: Degree Day Computation Methodology) [PDF] is available regarding the two-tiered
approach to computing degree day normals. For stations that are not first-order NWS locations, the rational conversion formulae developed by Thom (1954, 1966) were modified by using a daily
spline-fit assessment of mean and standard deviations of average temperature. The Thom methodology allows the adjusted mean temperature normals and their standard deviations to be converted to
degree day normals with uniform consistency. The modification eliminates an artificial month-by-month 'step' in the data output. In some cases this procedure will yield a small number of degree
days for months when degree days may not otherwise be expected. This results from statistical considerations of the formulae. The annual degree day normals were calculated by adding the
corresponding monthly degree day normals.
Based on the input of the climate research community and energy groups, NCDC is computing monthly degree day totals DIRECTLY from daily average temperature values for first-order sites for the
1971-2000 period. Stations with serially complete records were included in this approach; they are listed (see Degree Day Table) and marked with an asterisk '*' in the HDD/CDD section of the CLIM81 PDF.
NCDC advocates use of the newly computed monthly normals over the sum of the daily normals for degree days in climate applications. The daily normals are a useful tool in monitoring day-to-day
climate and are internally consistent, but the monthly normals better represent the observational record.
Digital Data Archive
CLIM81 data are archived by the National Climatic Data Center under data set DOC 9641-C (CLIM81 1971-2000 Normals). This archive includes a variety of statistics associated with the monthly
station data for minimum, maximum, mean temperature, total precipitation, and heating/cooling degree days, including those shown in Table 1.
Table 1
CLIM81 Statistics for the 1971-2000 Normals in DOC 9641-C
│CODE│Data Description │CODE│Data Description │
│01 │No Data │11 │1990-2000 Standard Deviation │
│02 │No Data │12 │1990-2000 Median │
│03 │Number of Estimated Values in Normals Period │13 │Maximum Monthly Value in Normal Period │
│04 │1971-2000 Normal │14 │Year of Occurrence of Maximum Value │
│05 │1971-2000 Standard Deviation │15 │Minimum Monthly Values in Normal Period│
│06 │1971-2000 Median │16 │Year of Occurrence of Minimum Value │
│07 │1980-2000 Mean │17 │Precipitation 10th Percentile │
│08 │1980-2000 Standard Deviation │18 │Precipitation 90th Percentile │
│09 │1980-2000 Median │19 │Time of Observation Adjustment Factor │
│10 │1990-2000 Mean │ │ │
CLIMATOGRAPHY OF THE U. S. NO. 84
Daily Station Normals
This product includes daily 1971-2000 normal maximum, minimum, and mean temperature (degrees F), heating and cooling degree days (base 65 degrees F), and precipitation (inches) for selected
cooperative and First-Order stations. Monthly, seasonal, and annual normals of these elements are also presented. Monthly and annual precipitation probabilities and quintiles are also
included. The data are published by station.
The daily normals are derived by statistically fitting smooth curves through monthly values; daily data were not used to compute daily normals. As a result, the published values reflect
smooth transitions between seasons. The typical daily random patterns usually associated with precipitation are not exhibited; however, the precipitation normals may be used to compute
average amounts accumulated over time intervals.
Computational Procedures
A. Spline-Fit Daily Normals
Daily normals of maximum, minimum, and mean temperatures, heating and cooling degree days, and precipitation were prepared for selected stations by interpolating between the monthly normal
values. The interpolation scheme was a cubic spline fit through the monthly values. Each element was interpolated independently from the other elements. The procedure is described by Greville.
The series of daily values of an element resulting from the cubic spline yields a smooth curve throughout the year without requiring the use of daily data. Another property of this technique
is that the average of the daily temperatures in a month equals the monthly normal and that the total of the daily precipitation or degree days in a month equals the monthly normal. In order
to eliminate discontinuities between December 31 and January 1, the spline interpolation was performed on a series of 24 monthly values. This extended series was created by appending
July-December normals before January and January-June normals after December. This process is applied independently to all six climatological elements. February 29 is assigned the same value
as February 28.
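The padding and interpolation steps above can be sketched as follows. This is a simplified illustration using a natural cubic spline through 24 month-index knots; Greville's formulation, as used by NCDC, differs in detail and additionally guarantees that the daily values average (or total) back to the monthly normals, a property this sketch does not enforce. The monthly values are made up.

```python
# Simplified sketch of the CLIM84 interpolation: pad the 12 monthly normals
# with July-December before January and January-June after December, fit a
# natural cubic spline through the 24 knots, and read daily values off the
# curve.  This removes the December 31 / January 1 discontinuity.

def natural_cubic_spline(xs, ys):
    """Return an evaluator for the natural cubic spline through (xs, ys)."""
    n = len(xs)
    h = [xs[i + 1] - xs[i] for i in range(n - 1)]
    # Tridiagonal system for the second derivatives M[i] (Thomas algorithm);
    # natural boundary conditions pin M[0] = M[n-1] = 0.
    sub, diag, sup, rhs = [0.0] * n, [1.0] * n, [0.0] * n, [0.0] * n
    for i in range(1, n - 1):
        sub[i], diag[i], sup[i] = h[i - 1], 2.0 * (h[i - 1] + h[i]), h[i]
        rhs[i] = 6.0 * ((ys[i + 1] - ys[i]) / h[i]
                        - (ys[i] - ys[i - 1]) / h[i - 1])
    for i in range(1, n):                       # forward elimination
        m = sub[i] / diag[i - 1]
        diag[i] -= m * sup[i - 1]
        rhs[i] -= m * rhs[i - 1]
    M = [0.0] * n                               # back substitution
    M[n - 1] = rhs[n - 1] / diag[n - 1]
    for i in range(n - 2, -1, -1):
        M[i] = (rhs[i] - sup[i] * M[i + 1]) / diag[i]

    def evaluate(x):
        i = 0
        while i < n - 2 and x > xs[i + 1]:      # locate the knot interval
            i += 1
        hi, t, u = h[i], xs[i + 1] - x, x - xs[i]
        return (M[i] * t ** 3 / (6 * hi) + M[i + 1] * u ** 3 / (6 * hi)
                + (ys[i] / hi - M[i] * hi / 6) * t
                + (ys[i + 1] / hi - M[i + 1] * hi / 6) * u)

    return evaluate

# Illustrative (made-up) monthly mean temperature normals, Jan-Dec:
monthly = [30.1, 33.4, 42.7, 53.2, 63.0, 71.8,
           76.3, 74.9, 67.5, 55.8, 44.6, 34.0]
padded = monthly[6:] + monthly + monthly[:6]    # Jul-Dec | Jan-Dec | Jan-Jun
xs = [float(k) for k in range(-6, 18)]          # month index as knot position
daily_curve = natural_cubic_spline(xs, padded)
jan_value = daily_curve(0.0)                    # reproduces January exactly
```

Sampling `daily_curve` at fractional month positions yields the smooth, seasonally consistent daily series that the publication describes.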
Since each element was interpolated independently, the daily series of temperatures and degree days were adjusted using software to remove spurious inflection points caused by rounding and to
ensure adherence to functional relationships among the elements. The software interrogated the data for climatologically reasonable inflection points, daily consistency between elements,
monthly consistency between daily and monthly values by element, and adherence of temperature and degree day values to the formula T - 65 + H - C = 0, where T = mean temperature, H = heating
degree days, and C = cooling degree days. Collectively, the processing steps for CLIM84 are shown in Figure 3.
Figure 3
CLIM84 Processing Steps
Daily precipitation normals were published as generated by the cubic spline interpolation. The smooth curve through a month does not represent a climatologically reasonable distribution. The
spreading of the monthly precipitation by the spline over all the days in a month is useful for accumulating amounts over specified time intervals. A climatologically reasonable normal
precipitation, based on daily data, for any one date would be much different from the published normals.
For some dates at most locations the published degree days are shown by an asterisk. The symbol represents a value of less than one degree day, but more than zero degree days. It is used to
smooth through aperiodic oscillations of zeroes and ones that are climatologically unreasonable. For example, if a station has 17, 15, and 18 normal heating degree days in June, July, and
August, respectively, it is not possible to distribute the 15 July degree days evenly throughout the month using integer values (zeroes and ones) without creating unrealistic oscillations
through the 3-month period. The use of fractional degree days (asterisks) does allow for a smooth transition from June through July to August.
There are several reasons for using a cubic spline fit of the monthly normals instead of averaging the daily data. First, simply averaging the observed daily values would result in a daily
normal curve that has considerable variability from day to day (Guttman and Plantico, 1987), yielding an annual temperature cycle that would be considerably jagged or ragged. This
climatological raggedness could result in daily normals that trend in the opposite direction from what is expected. For example, an autumn daily normal temperature could be considerably
warmer than one from several days earlier, or a spring daily normal temperature could be considerably cooler than one from several days earlier. Using a cubic spline fit of the monthly
normals eliminates this raggedness from the daily normals curve. Furthermore, a complete and homogeneous (i.e., no change in location, instrumentation, exposure, or observation practices)
data set is necessary for the analysis to be accurate. There are very few stations that have complete and homogeneous daily records. Any change of the types indicated above would introduce a
nonclimatic effect which would make the data inhomogeneous. The techniques for estimating missing daily data and adjusting daily data for inhomogeneities are complex and, for some stations,
are difficult to apply. However, the estimation and adjustment techniques for monthly data are not as complex or troublesome. Hence, the official daily normals are based on monthly normals,
which incorporate CLIM81 inhomogeneity adjustments.
B. Precipitation Probabilities and Quintiles:
A secondary part of the CLIM84 product is the monthly precipitation totals that correspond to the indicated probability levels. The probability levels are based on the 1971-2000 sequential
monthly precipitation and are explained below. The historical precipitation data are the serially complete values (including estimated values) that were also used to compute the monthly
normals (i.e., CLIM81).
When historical climate data are accumulated and examined, they generally follow a certain pattern called a statistical distribution. For example, if 30 years of June temperature data were
assembled and examined, the data would display a pattern that consisted of most of the Junes having temperatures close to the normal or average value, a few Junes having very warm
temperatures, and a few Junes having very cold temperatures. This kind of statistical pattern is called a Gaussian distribution and theoretically takes the form of a bell-shaped curve.
Temperature data are more likely to follow a Gaussian distribution than precipitation data. This is because precipitation is zero bounded.
When historical precipitation data are examined, most of the values will be close to the middle of the distribution, but some values will be considerably higher than the middle range. On the
low end of the scale, however, the smallest values will never be less than zero. In particularly dry (e.g., desert) regions, the pattern can be drastically skewed to the left-hand side of the
scale, with most of the values being near zero and a few very wet values spread far to the right. This kind of pattern can be fit by a Gamma distribution. Once the statistical distribution is
identified, the statistical properties of the distribution can be used to estimate the probabilities that certain values will occur, and which values can be expected at certain probability
levels. For summarization purposes, the probability levels desired can be preselected at certain individual levels or at regular intervals.
The Gamma distribution is used to estimate the precipitation probability and quintile values. The probability table shows the amount of precipitation expected at 15 probability (PROB) levels
(0.005, 0.01, 0.05, 0.10, 0.20, 0.30, 0.40, 0.50, 0.60, 0.70, 0.80, 0.90, 0.95, 0.99, and 0.995) for each month of the year and for the annual total. For example, if 1.77 inches corresponds
to the 0.20 probability level, that means that, on average, 2 out of 10 years will have 1.77 inches or less of precipitation in that month. It also means that, on average, 8 out of 10 years
will have more than 1.77 inches of precipitation in that month.
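The fitting step can be sketched with a method-of-moments estimate of the Gamma parameters (shape k = mean^2/variance, scale = variance/mean). The estimator actually used by NCDC (Crutcher et al., 1977; Crutcher and Joiner, 1978) is more careful, but the idea is the same: a right-skewed, zero-bounded fit to the 30 monthly totals. The precipitation values below are made up.

```python
# Hedged sketch: fit a Gamma distribution to monthly precipitation totals by
# the method of moments.  A key sanity property: the fitted mean (shape *
# scale) equals the sample mean.

def gamma_moments_fit(values):
    """Method-of-moments Gamma fit: returns (shape, scale)."""
    n = len(values)
    mean = sum(values) / n
    var = sum((v - mean) ** 2 for v in values) / (n - 1)
    return mean * mean / var, var / mean

precip = [0.4, 1.1, 2.3, 0.9, 3.5, 1.8, 0.2, 2.9, 1.3, 4.2]  # inches
k, theta = gamma_moments_fit(precip)
# Probability levels such as the 0.20 value in the example above would then
# be read from the fitted distribution's quantile function.
```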
The second table shows the expected precipitation values at the five quintile levels (LVL): 1 (0-20%); 2 (20-40%); 3 (40-60%); 4 (60-80%); 5 (80-100%) for each of the twelve months and for
the year. For example, if 2.91 and 4.07 inches are the bounds for the second quintile, then a monthly total precipitation amount for that month falling in the range 2.91 to 4.07 would be
classified as a second quintile precipitation amount and the month would be considered relatively dry. The first line (LVL 0 <) in this table shows the minimum precipitation value derived
from the historical record. Quintile level 0 would be used if a future precipitation observation is less than the 1971-2000 minimum. The last line (LVL 6 >) shows the maximum precipitation
value. Level 6 would be used if the observed precipitation value is more than the 1971-2000 maximum. The quintile table is used primarily in National Weather Service operations for
composition of information that is transmitted in CLIMAT messages and published in the Monthly Climatic Data for the World publication.
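The quintile lookup described above amounts to a small classification routine: level 0 below the historical minimum, levels 1-5 for the five quintile ranges, and level 6 above the historical maximum. The bounds below are illustrative (only the 2.91-4.07 second-quintile range comes from the example in the text).

```python
# Classify a monthly precipitation total into quintile levels 0-6, following
# the CLIM84 quintile-table convention.

def quintile_level(value, minimum, bounds, maximum):
    """`bounds` holds the four interior quintile boundaries (20/40/60/80%)."""
    if value < minimum:
        return 0            # below the 1971-2000 minimum
    if value > maximum:
        return 6            # above the 1971-2000 maximum
    level = 1
    for b in bounds:
        if value > b:
            level += 1
    return level

# A 3.50-inch month with second-quintile bounds 2.91-4.07 (as in the example
# above) classifies as a relatively dry, second-quintile month:
level = quintile_level(3.50, 0.85, [2.91, 4.07, 5.20, 6.80], 9.40)  # 2
```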
CLIMATOGRAPHY OF THE U. S. NO. 85
Monthly Divisional Normals & Standard Deviations
This product includes normals and standard deviations for the five 30-year periods and the 70-year period between 1931-2000 for each division in a state. A division represents a region within
a state that is, as nearly as possible, climatically homogeneous. Some areas, however, may experience rather extreme variations within a division (e.g., the Rocky Mountain states). The
divisions have been established to satisfy researchers in hydrology, agriculture, energy supply, etc., who require data averaged over an area of a state rather than for a point (station).
The normals and standard deviations include values for each of the 12 calendar months and an annual value. The divisional data are displayed by name and number for a state or island. The
states and islands include the contiguous United States, Alaska, Puerto Rico, and the Virgin Islands, and are arranged alphabetically. Hawaii is not included because the varied topography and
locations of the observing stations do not allow for the establishment of homogeneous divisions. The data elements include mean temperature (degrees F), precipitation (inches), and heating
and cooling degree days (base 65 degrees F).
Computational Procedures
Climatic divisions are regions within each state that have been determined to be reasonably climatically homogeneous. The maximum number of divisions in each state is 10. Monthly divisional
average temperature and total precipitation data are derived using data from all stations reporting both temperature and precipitation within a climatological division. The number of
reporting stations within a division varies from month to month and year to year.
Monthly temperature normals and 70-year averages for a division are computed by adding the yearly values for a given month and then dividing by the number of years in the period. The annual
normal and 70-year average are computed by adding all of the monthly normal or long-term average values and then dividing by 12. Consequently, if an annual normal were computed by averaging
annual values obtained for each year in the period (by adding the corresponding 12 monthly values and then dividing by 12), it may be slightly different from the average of the 12 monthly
normals because of rounding differences. Precipitation normals and 70-year averages are computed in a similar manner, except that the annual values are the totals of the 12 monthly values.
Sequential monthly degree days are derived using procedures developed by Thom (1954, 1966). This technique utilizes the historical monthly average temperature and its corresponding standard
deviation (over some "standardizing period") to compute degree days. The procedure for the computation of the divisional degree day normals involves the following three steps:
1. Calculate the standard deviations of the temperatures for each of the 12 calendar months over the standardizing period;
2. Use the Thom technique to compute the heating and cooling degree days for every month for every year in the period 1931-2000; and
3. Calculate the 30-year normals and 70-year (1931-2000) averages of the degree days using the procedure discussed above.
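Thom's published formulae convert a monthly mean temperature and its standard deviation into degree days. The closely related truncated-normal identity below illustrates the underlying idea; it is not Thom's exact formula (which operates on monthly statistics and differs in detail), but it shows why a month whose mean temperature sits at or near the base can still accrue a small number of degree days.

```python
import math

# If mean temperature T is modeled as Normal(mu, sigma), the expected heating
# degree days per day to base 65 F is
#     E[max(65 - T, 0)] = (65 - mu) * Phi(z) + sigma * phi(z),
# where z = (65 - mu) / sigma -- a standard truncated-normal result.

def std_normal_cdf(z):
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def std_normal_pdf(z):
    return math.exp(-0.5 * z * z) / math.sqrt(2.0 * math.pi)

def expected_hdd_per_day(mu, sigma, base=65.0):
    """Expected heating degree days per day under a Normal(mu, sigma) model."""
    z = (base - mu) / sigma
    return (base - mu) * std_normal_cdf(z) + sigma * std_normal_pdf(z)

# Even with the mean exactly at the base, variability alone produces
# sigma / sqrt(2*pi) degree days per day -- about 3.19 here:
hdd_at_base = expected_hdd_per_day(65.0, 8.0)
```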
CLIMATOGRAPHY OF THE U. S. NO. 20
Monthly Station Climate Summaries
This product provides climate data from selected sites included in CLIM81, as well as statistics that have not been published elsewhere. The climatological data included in the CLIM20 make
this publication the most appropriate summary for agricultural applications.
CLIMATOGRAPHY OF THE U. S. NO. 81
Supplement Number 1, Monthly Precipitation Probabilities
A probability value is the frequency of occurrence of a quantity (say, a certain precipitation amount) over a given time period. For example, if the quantity has an annual probability of 0.1,
then it would be expected to occur on average once out of every ten years, or 3 times out of every 30 years, etc. It can also be thought of in a predictive sense to mean that, in any given
year, there is a ten percent chance (0.1 probability) that the quantity will occur. In this product, probabilities are applied to monthly and annual precipitation amounts. The sequential
year-month values of monthly (and annual) precipitation, which were used to compute the monthly normals in the Climatography of the United States No. 81 product, were used in the preparation
of this Supplement.
The Supplement No. 1 publication presents the monthly and annual precipitation values (in inches) corresponding to three probability levels: 0.10, 0.50, and 0.90. The stations are listed
alphabetically. There is a separate volume of this publication for each state.
Monthly and annual precipitation probabilities are also available on microfiche and in digital format. The values are summarized in two tables. The first table shows the amount of
precipitation expected at 15 probability levels (0.005, 0.01, 0.05, 0.10, 0.20, 0.30, 0.40, 0.50, 0.60, 0.70, 0.80, 0.90, 0.95, 0.99, and 0.995) for each month of the year and for the annual
total. The second table shows the expected precipitation values at the five quintile levels:
First Quintile: 0-20%
Second Quintile: 20-40%
Third Quintile: 40-60%
Fourth Quintile: 60-80%
Fifth Quintile: 80-100%
The probability tables in this product are determined by fitting the 1971-2000 historical monthly precipitation to a Gamma distribution (Crutcher et al., 1977; Crutcher and Joiner, 1978). The
process was performed with the historical data for each of the twelve months and separately with the annual values to produce 13 sets of probability values for each station.
CLIMATOGRAPHY OF THE U. S. NO. 81
Supplement Number 2, Annual Degree Days to Selected Bases
This product presents annual heating degree day normals to the following bases (in degrees F): 65, 60, 57, 55, 50, 45, and 40, and annual cooling degree day normals to the following bases
(also in degrees F): 70, 65, 60, 57, 55, 50, and 45. The values were computed for all Climatography of the United States No. 81 temperature stations and are summarized alphabetically by station
within each state or territory.
Monthly and annual degree day normals are available on microfiche and in digital format. The heating degree day normals are to the following bases: 70, 65, 60, 57, 55, 50, 45, 43, 40, 35, 32,
and 30. The cooling degree day normals are to the following bases: 80, 75, 70, 65, 60, 57, 55, 50, 45, 43, 40, and 32.
CLIMATOGRAPHY OF THE U. S. NO. 20
Supplement Number 1, Freeze/Frost Data
This product contains freeze/frost-related information for several thousand observation sites within the United States for which a serially-complete daily maximum/minimum temperature observation
data set had been edited and validated by NCDC.
The main contents of this publication are freeze/frost probability tables for each station, listed by state. The tables contain the dates of probable first and last occurrence, during the year
beginning August 1 and ending July 31, of freeze-related temperatures; probable durations (in days) where the temperature exceeds certain freeze-related values; and the probability of
experiencing a given temperature, or less, during the year period August 1 through July 31. For the fall and spring dates of occurrence, and freeze-free period, probabilities are given for three
temperatures (36, 32, and 28 degrees F) at three probability levels (10, 50, and 90 percent). A series of maps present calendar data related to the probability of occurrence of freeze at two
temperature thresholds.
Extended tables of freeze/frost data, which contain the dates for probabilities of 0.1 through 0.9 in increments of 0.1 versus temperature thresholds of 36, 32, 28, 24, 20, and 16 degrees F, are
available on magnetic tape or on microfiche (by state) for all of the sites given in the publication.
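The probability-of-occurrence idea behind these tables can be sketched with a simple order-statistic version: collect the day of first occurrence of a threshold temperature for each year (counted from August 1) and read off empirical percentiles. NCDC's published dates come from fitted probability distributions over serially complete records, so this is only the general shape of the computation; the day values are made up.

```python
# Nearest-rank empirical percentile of first-freeze day-of-year values.

def empirical_percentile(sorted_vals, p):
    """Nearest-rank percentile of an ascending list, for 0 < p < 1."""
    idx = int(round(p * (len(sorted_vals) - 1)))
    idx = max(0, min(len(sorted_vals) - 1, idx))
    return sorted_vals[idx]

# Day (counted from August 1) of the first 32 F occurrence in each year:
first_freeze_days = sorted([68, 72, 75, 77, 80, 81, 84, 88, 91, 97, 102])
early = empirical_percentile(first_freeze_days, 0.10)   # 10% this early or earlier
median = empirical_percentile(first_freeze_days, 0.50)
late = empirical_percentile(first_freeze_days, 0.90)    # 90% by this date
```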
HISTORICAL CLIMATOLOGY SERIES 4-1 and 4-2
State, Regional, and National Monthly and Annual Temperature; State, Regional, and National Monthly and Annual Precipitation (Weighted by Area)
Each month, averages of temperature and precipitation are calculated for U.S. Climate Divisions by simple averaging of data from all stations within the division that record both temperature
and precipitation. A division represents a region within a state that is climatically quasi-homogeneous or, in some cases, a semi-homogeneous drainage basin (as described in CLIM85).
The average monthly temperature and precipitation for a state are derived from the divisional values by weighting each division by its percentage of the total state area, including the 48
contiguous states, Alaska, Hawaii, Puerto Rico, and the Virgin Islands. The District of Columbia is treated as part of Maryland.
The nation was divided into nine census divisions as defined and used by the Census Bureau. The divisions and states they comprise are as follows:
NORTHEAST REGION
New England Division: Maine, New Hampshire, Vermont, Massachusetts, Rhode Island, Connecticut
Middle Atlantic Division: New York, New Jersey, Pennsylvania
MIDWEST REGION
East North Central Division: Ohio, Indiana, Illinois, Michigan, Wisconsin
West North Central Division: Minnesota, Iowa, Missouri, North Dakota, South Dakota, Nebraska, Kansas
SOUTH REGION
South Atlantic Division: Delaware, Maryland, District of Columbia, Virginia, West Virginia, North Carolina,
South Carolina, Georgia, Florida
East South Central Division: Kentucky, Tennessee, Alabama, Mississippi
West South Central Division: Arkansas, Louisiana, Oklahoma, Texas
WEST REGION
Mountain Division: Montana, Idaho, Wyoming, Colorado, New Mexico, Arizona, Utah, Nevada
Pacific Division: Washington, Oregon, California, Alaska, Hawaii
The areal weights used to produce monthly and regional temperatures are also shown. These weights were obtained by dividing the area of each state by the total regional area. A particular
regional monthly temperature value was obtained by multiplying the corresponding state temperature within a region by the appropriate weight and adding all of the products. Annual values were
obtained by taking the average of the monthly values. Monthly and annual temperatures for the nine census divisions are presented in tables following the weights.
The national temperatures were derived by areally weighting the temperature values for the nine census divisions. The national value, therefore, covers only the contiguous United States.
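The area weighting described above is a direct weighted mean; the same routine produces a regional value from state values and a national value from the nine divisional values. The areas and temperatures below are illustrative numbers, not actual state data.

```python
# Area-weighted mean, as used for state -> region -> nation aggregation.

def area_weighted_mean(values, areas):
    """Weight each value by its share of the total area."""
    total = sum(areas)
    return sum(v * a for v, a in zip(values, areas)) / total

state_temps = [52.0, 48.5, 50.2]            # monthly mean temps (deg F)
state_areas = [30000.0, 45000.0, 25000.0]   # square miles (illustrative)
region_temp = area_weighted_mean(state_temps, state_areas)
```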
HISTORICAL CLIMATOLOGY SERIES 5-1 and 5-2
State, Regional, and National Monthly and Seasonal Heating Degree Days; State, Regional, and National Monthly and Seasonal Cooling Degree Days, (Weighted by Population)
The population weights for U.S. Climate Divisions are computed from the 2000 Census county and metropolitan populations in that division. Divisional population totals are summed from 2000
county totals for counties residing completely within a given division. For counties residing in more than one division, 2000 county populations are divided proportionally by overlaying the
climate divisions on a one-kilometer squared population database based on the 1990 census and provided by the Socioeconomic Data Application Center (SEDAC). Approximately 25%, or about 800
out of 3200 counties, require division in this manner. Once divisional totals are determined, their proportions in the context of the state, division, region, and nation are determined.
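The proportional splitting of a straddling county can be sketched as follows. This is only the arithmetic shape of the step: the actual overlay apportions the 2000 county population using a 1-km gridded 1990-census population database, not simple area fractions, and all populations below are made up.

```python
# Sketch of proportional county splitting for divisional population totals.

def split_county(population, fractions):
    """Apportion a county's population across divisions by given fractions."""
    return [population * f for f in fractions]

def division_total(whole_county_pops, split_shares):
    """Divisional total: whole counties plus shares of straddling counties."""
    return sum(whole_county_pops) + sum(split_shares)

# A 100,000-person county lying 35% / 65% across two divisions:
shares = split_county(100000, [0.35, 0.65])
div_a_total = division_total([250000, 40000], [shares[0]])
```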
Last Updated Wednesday, 20-Aug-2008 12:32:09 EDT by Tom.Whitehurst@noaa.gov
Dropping a Perpendicular
4. To drop a perpendicular from a point not on a line to the line.
In the construction for dropping a perpendicular from a point to a line, we are
This proof is more involved. While we know that triangle ABC is isosceles, we will need to know that AD either hits the base at its midpoint, is perpendicular to the base, or bisects the vertex angle
before we can use the Isosceles Triangle Theorems. That AD is perpendicular to the base is what we have to prove. The one we will be able to establish is that it bisects the vertex angle with the
congruent triangle proof above. | {"url":"http://www.sonoma.edu/users/w/wilsonst/Courses/Math_150/Theorems/C-S/D-P.html","timestamp":"2014-04-20T01:56:47Z","content_type":null,"content_length":"1565","record_id":"<urn:uuid:07c582c3-0927-45d2-a13b-82ccbf306d93>","cc-path":"CC-MAIN-2014-15/segments/1397609537804.4/warc/CC-MAIN-20140416005217-00126-ip-10-147-4-33.ec2.internal.warc.gz"} |
Occurrence Trees
An occurrence tree is the set of all discrete sequences of activity occurrences. They are isomorphic to substructures of the situation tree from situation calculus, the primary difference being that
rather than a unique initial situation, each occurrence tree has a unique initial activity occurrence. As in the situation calculus, the poss relation is introduced to allow the statement of
constraints on activity occurrences within the occurrence tree. Since the occurrence trees include sequences that modellers of a domain will consider impossible, the poss relation "prunes" away
branches from the occurrence tree that correspond to such impossible activity occurrences.
It should be noted that the occurrence tree is not the structure that represents the occurrences of subactivities of an activity. The occurrence tree is not representing a particular occurrence of an
activity, but rather all possible occurrences of all activities in the domain.
The basic ontological commitments of the Occurrence Tree Theory are based on the following intuitions:
Intuition 1:
An occurrence tree is a partially ordered set of activity occurrences, such that for a given set of activities, all discrete sequences of their occurrences are branches of the tree.
An occurrence tree contains all occurrences of all activities; it is not simply the set of occurrences of a particular (possibly complex) activity. Because the tree is discrete, each activity
occurrence in the tree has a unique successor occurrence of each activity.
Intuition 2:
There are constraints on which activities can possibly occur in some domain.
This intuition is the cornerstone for characterizing the semantics of classes of activities and process descriptions. Although occurrence trees characterize all sequences of activity occurrences, not
all of these sequences will intuitively be physically possible within the domain. We will therefore want to consider the subtree of the occurrence tree that consists only of possible sequences of
activity occurrences; this subtree is referred to as the legal occurrence tree.
The definitional extensions of the PSL Ontology use different constraints on possible activity occurrences as a way of classifying activities.
Intuition 3:
Every sequence of activity occurrences has an initial occurrence (which is the root of an occurrence tree).
This intuition is closely related to the properties of occurrence trees. For example, one could consider occurrences to form a semilinear ordering (which need not have a root element) rather than a
tree (which must have a root element). However, we are using occurrence trees to characterize the semantics of different classes of activities, rather than using the occurrence tree to represent
history (which may not have an explicit initial event). In our case, it is sufficient to consider all possible interactions between the set of activities in the domain, and we lose nothing by
restricting our attention to initial occurrences of the activities. For example, given the query "Can the factory produce 1000 widgets by Friday?", one can take the initial state to be the current
state and the initial activity occurrences to be occurrences of the activities that could be performed at the current time.
Intuition 4:
The ordering of activity occurrences in a branch of an occurrence tree respects the temporal ordering.
Within the theory of occurrence trees, the ordering over activity occurrences and the ordering over timepoints are distinct. The set of activity occurrences is partially ordered (hence the intuition
about occurrence trees), but timepoints are linearly ordered (since this theory is an extension of PSL-Core). However, every branch of an occurrence tree is totally ordered, and the intuition
requires that the beginof timepoint for an activity occurrence along a branch is before the beginof timepoints of all following activity occurrences on that branch.
Informal Semantics for Occurrence Trees
(initial ?occ) is TRUE in an interpretation of the Occurrence Tree Theory if and only if the activity occurrence ?occ is a root of the occurrence tree.
(earlier ?occ1 ?occ2) is TRUE in an interpretation of the Occurrence Tree Theory if and only if the two activity occurrences ?occ1 and ?occ2 are on the same branch of the tree and ?occ1 is closer to
the root of the tree than ?occ2. In other words, the earlier relation specifies the partial ordering over the activity occurrences in this tree.
(= (successor ?a ?occ) ?occ2) is TRUE in an interpretation of the Occurrence Tree Theory if and only if ?occ2 denotes the occurrence of ?a that follows consecutively after the activity occurrence ?occ in the occurrence tree.
(arboreal ?s) is TRUE in an interpretation of the Occurrence Tree Theory if and only if ?s is an element of the occurrence tree.
(generator ?a) is TRUE in an interpretation of the Occurrence Tree Theory if and only if ?a is an activity whose occurrences are elements of the occurrence tree.
(legal ?occ) is TRUE in an interpretation of the Occurrence Tree Theory if and only if the activity occurrence ?occ is an element of the legal occurrence tree.
(poss ?a ?occ) is TRUE in an interpretation of the Occurrence Tree Theory if and only if the activity ?a can possibly occur after the activity occurrence ?occ.
(precedes ?occ1 ?occ2) is TRUE in an interpretation of the Occurrence Tree Theory if and only if the activity occurrence ?occ1 is earlier than the activity occurrence ?occ2 in the occurrence tree and
such that all activity occurrences between them correspond to activities that are possible. This relation specifies the sub-tree of the occurrence tree in which every activity occurrence is the occurrence of an activity that is possible.
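As a non-normative illustration of the relations above, a finite occurrence tree can be modeled directly. Everything here is invented for the sketch: the Python class, the example tree, and the boolean `possible` flag standing in for the poss relation.

```python
class Occ:
    """An activity occurrence: one node of an occurrence tree."""
    def __init__(self, activity, parent=None, possible=True):
        self.activity = activity   # the activity this node is an occurrence of
        self.parent = parent       # None for a root (initial) occurrence
        self.possible = possible   # stands in for (poss activity parent)

def initial(occ):
    # (initial ?occ): occ is a root of the occurrence tree
    return occ.parent is None

def earlier(o1, o2):
    # (earlier ?occ1 ?occ2): same branch, o1 strictly closer to the root
    node = o2.parent
    while node is not None:
        if node is o1:
            return True
        node = node.parent
    return False

def legal(occ):
    # (legal ?occ): every occurrence from the root down to occ is possible
    node = occ
    while node is not None:
        if not node.possible:
            return False
        node = node.parent
    return True

def precedes(o1, o2):
    # (precedes ?occ1 ?occ2): o1 is earlier than o2, and every occurrence
    # from o2 back down to (but excluding) o1 is possible
    if not earlier(o1, o2):
        return False
    node = o2
    while node is not o1:
        if not node.possible:
            return False
        node = node.parent
    return True
```

In this model an occurrence below an impossible occurrence is never legal, so the legal occurrence tree is exactly the prefix-closed sub-tree picked out by `legal`, matching the intuition above.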
Last Updated: Wednesday, 15-December-2003 11:42:40
Sharing large data sets
In this section we present the main features of three solutions proposed for orbitals' data sharing which are schematically illustrated in Fig .
A CASINO computation proceeds by moving one walker at a time, the transition probability at each step depends on the current values of the Jastrow factor and OPO at the current position of all random
walkers. The orbitals can be represented using various basis sets, including plane waves, Gaussian, or B-splines. In this report we are concerned with the representation in B-Splines [5], which are
localised third order polynomials sitting on a three-dimensional grid in real space which spans the whole physical system. They share the same properties of plane-waves as being systematically
improvable and unbiased, but they are localised, and as such a factor more efficient than plane waves: for each point in space there are always only 64 B-Splines that have non-zero values. Therefore
the evaluation of each orbital requires only the computation of 64 B-splines, which is much less than the total number of plane wave functions for a system with a large number of electrons (the number of plane wave functions scales with the system size).
In the program the B-spline coefficients (BC) are stored in a rank-five array whose dimensions are the number of orbitals, the number of grid points in each of the three spatial directions, and the number of spins, respectively.
The amount of BC needed in a computation is determined by two factors: i) for each spin value the number of orbitals must be equal to the number of electrons with that spin; ii) the grid spacing is determined by the precision of the DFT calculation used to obtain the OPO: the higher the precision, the finer the grid must be.
The above requirements conspire to create a large amount of BC. For example, if we consider a system with 1000 electrons, split in half spin up, half spin down, we need at least 500 one-particle
orbitals for a non-magnetic system, since in this case one can use the same set of BC for both spins. The spatial grid can reach or exceed 80 points in each direction; hence, for the numbers just quoted, one needs approximately 2 GB of memory if the values of BC are stored in double precision, which is close to the maximum available memory per core for the processors used on HECToR.
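The 2 GB figure can be checked with the numbers quoted above; this is a back-of-the-envelope sketch, not output of the actual code.

```python
n_orbitals = 500      # half of 1000 electrons; same BC reused for both spins
n_grid = 80           # grid points in each of the three spatial directions
bytes_per_coeff = 8   # double precision

total_bytes = n_orbitals * n_grid**3 * bytes_per_coeff
total_gib = total_bytes / 1024**3   # roughly 1.9 GiB, i.e. "approximately 2 GB"
```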
In the initial algorithm of CASINO each task has a copy of the BC needed to compute the orbitals values. Since the BC sets are identical on each task and their values do not change during computation
the obvious solution to the memory problem is to share the data among groups of tasks, especially when the hardware provides shared memory. | {"url":"http://www.hector.ac.uk/cse/distributedcse/reports/casino/casino/node7.html","timestamp":"2014-04-21T15:48:24Z","content_type":null,"content_length":"7758","record_id":"<urn:uuid:4e83e14b-6498-408c-9b75-e614cb9c6cf0>","cc-path":"CC-MAIN-2014-15/segments/1398223202548.14/warc/CC-MAIN-20140423032002-00429-ip-10-147-4-33.ec2.internal.warc.gz"} |
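The one-copy-per-node idea can be illustrated with Python's standard `multiprocessing.shared_memory` module. This is only an analogy: the actual CASINO implementation targets Fortran/MPI on HECToR, and the coefficient values here are invented.

```python
from multiprocessing import shared_memory
import struct

coeffs = [0.5, 1.25, -3.0, 2.0]   # stand-in for the read-only BC array

# One task per node creates a single shared copy of the coefficients ...
owner = shared_memory.SharedMemory(create=True, size=8 * len(coeffs))
struct.pack_into("%dd" % len(coeffs), owner.buf, 0, *coeffs)

# ... and every other task on that node attaches to it by name instead of
# allocating its own private multi-gigabyte copy.
view = shared_memory.SharedMemory(name=owner.name)
shared_vals = list(struct.unpack_from("%dd" % len(coeffs), view.buf, 0))

view.close()
owner.close()
owner.unlink()
```

Because the BC are never modified during the run, no synchronization is needed for such read-only sharing.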
[Tutor] Floor and modulus for complex arguments
Lie Ryan lie.1296 at gmail.com
Fri Jul 3 16:25:13 CEST 2009
Angus Rodgers wrote:
> I'm a little confused by: (i) the definition of the modulus and
> floor division functions for complex arguments; (ii) the fact
> that these functions for complex arguments are now "deprecated";
> and (iii) the fact that the math.floor() function is not defined
> at all for a complex argument.
maybe because the math module is specifically designed NOT to support
complex numbers. For that, cmath (complex math) is the appropriate
module. But even cmath doesn't have floor().
> If I were thinking about this from scratch (in the context of
> mathematics, rather than any particular programming language),
> I /think/ I would be naturally inclined to define:
> floor(x + yj) = floor(x) + floor(y)j for all real x, y
> z % w = z - floor(z / w) * w for all complex z, w (!= 0)
I'm not a mathematician, and I understand zilch about complex numbers.
But looking literally at your definition, python 2.5 seems to define
complex mod complex as you have defined:
>>> import math
>>> def floor(comp):
... return math.floor(comp.real) + math.floor(comp.imag) * 1j
>>> def mod(z, w):
... return z - floor(z / w) * w
>>> mod(10.4j+5.1, 5.2j+3.2)
>>> (10.4j+5.1) % (5.2j+3.2)
__main__:1: DeprecationWarning: complex divmod(), // and % are deprecated
> These seem like they would be mathematically useful definitions
> (e.g. in algebraic number theory, where one has to find the
> "nearest" Gaussian integer multiple of one Gaussian integer to
> another - I forget the details, but it has something to do with
> norms and Euclidean domains), and I don't understand why Python
> doesn't do it this way, rather than first defining it a different
> way (whose mathematical usefulness is not immediately apparent
> to me) and then "deprecating" the whole thing! It seems like
> a wasted opportunity - but am I missing something?
> Has there been heated debate about this (e.g. in the context
> of Python 3, where the change to the division operator has
> apparently already provoked heated debate)?
There is this:
and the justification seems to be related to the grand unified numeric
type. Apparently this is the reason given:
Update of /cvsroot/python/python/dist/src/Objects
In directory usw-pr-cvs1:/tmp/cvs-serv7877
Modified Files:
Log Message:
SF bug #543387.
Complex numbers implement divmod() and //, neither of which makes one
lick of sense. Unfortunately this is documented, so I'm adding a
deprecation warning now, so we can delete this silliness, oh, around
2005 or so.
Bugfix candidate (At least for 2.2.2, I think.)
> Also, by the way, is there some obvious reason for Python's use
> of the notation x + yj, rather than the more standard (except
> perhaps among electrical engineers) x + yi?
I'm not a mathematician, and I understand little about complex numbers.
But it seems the reason is that, when the decision was made, nobody on
the dev team understood the reasoning behind complex numbers implementing
divmod, //, and % as they did, which caused the feature to be removed as
making "no sense". Perhaps if you can convince the dev team about the math
behind complex mod complex, this feature can be reintroduced. FWIW it
has been 5+ years and nobody seems to complain before, it seems complex
mod complex must have been very rarely used.
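(As an aside on the algebraic-number-theory remark quoted above: the "nearest Gaussian integer" division can be sketched in a few lines of modern Python. This is an illustration only, not what any Python version actually implemented.)

```python
def gauss_divmod(z, w):
    """Divide Gaussian integers: return (q, r) with z == q*w + r.

    q is the Gaussian integer nearest to the exact quotient z/w; rounding
    each coordinate to the nearest integer guarantees |r| <= |w|/sqrt(2),
    strictly less than |w|, which is what makes Z[i] a Euclidean domain.
    """
    exact = z / w
    q = complex(round(exact.real), round(exact.imag))
    return q, z - q * w

# Example: divide 7+3j by 2-1j.
q, r = gauss_divmod(7 + 3j, 2 - 1j)
```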
Michigan City, IN Math Tutor
Find a Michigan City, IN Math Tutor
My name is Jonathon and I have been teaching math and science at Hobart high school for 21 years. I am licensed in both math and physics at the high school level. I have taught a wide variety of
courses in my career: prealgebra, math problem solving, algebra 1, algebra 2, precalculus, advanced placement calculus, integrated chemistry/physics, and physics.
12 Subjects: including algebra 1, algebra 2, calculus, geometry
...Having also worked as a computer consultant for my university, I have a lot of experience helping clients with computer and technical problems that they encounter on campus. I've dealt with
people ranging from the technically illiterate to the geniuses of the electronic age. Having played the piano for over 9 years now, I am quite familiar with the basic and intermediate skills of
the piano.
16 Subjects: including geometry, SAT math, English, algebra 1
...With WyzAnt, I hope to be able to fulfill my desire to personally help more people who have not been able to catch up with the required skills in Mathematics to improve their prospects in their lives. I have taught Algebra and Algebra II at several high schools, and I'm constantly working with a l...
11 Subjects: including prealgebra, geometry, SAT math, algebra 1
...Word, Excel, Access, PowerPoint, Outlook, and more. And, 3. A Masters of Accounting and Financial Management.
53 Subjects: including calculus, elementary (k-6th), geometry, Microsoft Access
I've taught Algebra 1, Algebra 2, Geometry, and Pre-Calculus at the high school level for 6 years. In addition, I've completed a BS in Electrical Engineering and I am quite knowledgeable of
advance mathematical concepts. (Linear Algebra, Calculus, Differential Equations) I create an individualized...
12 Subjects: including calculus, general computer, precalculus, trigonometry | {"url":"http://www.purplemath.com/michigan_city_in_math_tutors.php","timestamp":"2014-04-18T04:20:32Z","content_type":null,"content_length":"24173","record_id":"<urn:uuid:55619c16-ee16-4dae-8e94-686e60150efc>","cc-path":"CC-MAIN-2014-15/segments/1397609532480.36/warc/CC-MAIN-20140416005212-00093-ip-10-147-4-33.ec2.internal.warc.gz"} |
Centripetal force - My Math Forum
April 23rd, 2011, 11:29 PM, #2
Global Moderator, Re: Centripetal force

The Conical Pendulum

A small body of mass m is suspended from a string of length L. The body revolves in a horizontal circle of radius r with constant speed v. Since the string sweeps out the surface of a cone, the system is known as a conical pendulum. We can find the speed of the body, the period of revolution $T_p$ and the tension T in the string.

Drawing a free-body diagram for the mass m, where the force exerted by the string is T, we may resolve this force into a vertical component:

$T\cos\theta$

and a component acting toward the center of rotation:

$T\sin\theta$

Since the body does not accelerate in the vertical direction, the vertical component of T must balance the weight, thus:
Math Focus: The Since the body does not accelerate in the vertical direction, the vertical component of T must balance the weight, thus:
(1) $T\cos\theta=mg\:\therefore\:T=\frac{mg}{\cos\theta }$
Since the central force in this example is provided by the component $T\sin\theta$, from Newton's second law we get:
(2) $T\sin\theta=ma_r=m\frac{v^2}{r}$
By dividing (2) by (1) we eliminate T and find:

$\tan\theta=\frac{v^2}{rg}\:\therefore\:v=\sqrt{rg\tan\theta}$

But, from the geometry, we note that $r=L\sin\theta$, thus:

$v=\sqrt{rg\tan\theta}=\sqrt{Lg\sin\theta\tan\theta}$
Since the mass travels a distance of $2\pi r$ (the circumference of the circular path) in a time equal to the period of revolution, $T_p$ (not to be confused with the force T), we find:

(3) $T_p=\frac{2\pi r}{v}=\frac{2\pi L\sin\theta}{\sqrt{Lg\sin\theta\tan\theta}}=2\pi\sqrt{\frac{L\cos\theta}{g}}$
1.) m = 0.50 kg, L = 1.0 m, ? = 30°, g = 9.81 m/sē
a) $T_p=2\pi\sqrt{\frac{\(1.0\text{ m}\)\cos\(30^{\circ}\)}{9.81\text{ \frac{m}{s^2}}}}\approx1.87\text{ s}$
b) $T=\frac{\(0.50\text{ kg}\)\(9.81\text{ \frac{m}{s^2}}\)}{\cos\(30^{\circ}\)}\approx5.66\text{ N}$
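The numbers in problem 1 are easy to verify with a short script (the variable names are ours):

```python
import math

m, L, theta, g = 0.50, 1.0, math.radians(30), 9.81

T_p = 2 * math.pi * math.sqrt(L * math.cos(theta) / g)  # period, from (3)
T = m * g / math.cos(theta)                             # string tension, from (1)
# T_p is about 1.87 s and T is about 5.66 N, matching a) and b)
```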
The Simple Pendulum
A simple pendulum consists of a mass m attached to a light string of length L. The mass is released from rest when the string makes an angle $\theta_0$ with the vertical and the
pivot is frictionless. If the mass is released from rest at the angle $\theta_0$, it will never swing above this position during its motion. At the start of the motion, position
a, its energy is entirely potential. This initial potential energy is all transformed into kinetic energy at the lowest elevation, position b. As the mass continues to move along
the arc, the energy again becomes entirely potential at position c, where $\theta=-\theta_0$.
First, we need to find the speed of the mass at an arbitrary position d along the arc of motion, where $-\theta_0\le\theta\le\theta_0$.
The only force that does work on m is the force of gravity, since the force of tension is always perpendicular to each element of the displacement and hence does no work. Since
the force of gravity is a conservative force, the total mechanical energy is constant. Therefore, as the pendulum swings, there is a continuous transfer between potential and
kinetic energy.
If we measure the y coordinates from the center of rotation, then:

$y=-L\cos\theta$

Applying the principle of constancy of mechanical energy gives:
(1) $v_d=\sqrt{2gL\(\cos\theta-\cos\theta_0\)}$
We can see, without resorting to calculus, that $v_d$ is at its maximum value when $\theta=0$.
Now we can find the tension in the string at point d. Since the force of tension does no work, it cannot be determined using the energy method. To find $T_d$, we can apply
Newton's second law to the radial direction. First, recall that the centripetal acceleration is:

$a_r=\frac{v_d^2}{r}$

directed toward the center of rotation. Since r = L, we get:
(2) $\sum F_r=T_d-mg\cos\theta=ma_r=m\frac{v_d^2}{L}$
Substituting (1) into (2) gives for the tension at point d:

$T_d=mg\(3\cos\theta-2\cos\theta_0\)$

Again, it doesn't take calculus to see that the maximum tension occurs when $\theta=0$, or at the bottom of the swing, at position b. Thus, the maximum tension T is:

$T=mg\(3-2\cos\theta_0\)$

Notice that if $\theta_0=0$ then T = mg, as we should expect.
2.) m = 0.25 kg, $\theta_0=5^{\circ}$ thus the maximum tension is given by:
$T=\(0.25\text{ kg}\)\(9.81\text{ \frac{m}{s^2}}\)\(3-2\cos\(5^{\circ}\)\)\approx2.47\text{ N}$ | {"url":"http://mymathforum.com/physics/18940-centripetal-force.html","timestamp":"2014-04-19T14:29:24Z","content_type":null,"content_length":"39849","record_id":"<urn:uuid:ee538754-20f9-40f8-895d-9003ad699323>","cc-path":"CC-MAIN-2014-15/segments/1398223203422.8/warc/CC-MAIN-20140423032003-00399-ip-10-147-4-33.ec2.internal.warc.gz"} |
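Problem 2 can likewise be checked numerically (a quick sketch; the variable names are ours):

```python
import math

m, g, theta0 = 0.25, 9.81, math.radians(5)
T_max = m * g * (3 - 2 * math.cos(theta0))  # tension at the bottom of the swing
# T_max is about 2.47 N, matching the value above
```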
College Algebra
Tutorial 55: Fundamental Counting Principle
After completing this tutorial, you should be able to:
1. Use the Fundamental Counting Principle to determine the number of outcomes in a problem.
In this tutorial we will be going over the Fundamental Counting Principle. It will allow us to count the number of ways a task can occur given a series of events. Basically you multiply the number of
possibilities each event of the task can occur. It is like multiplying the dimensions of it. I think you are ready to count away.
Suppose that a task involves a sequence of k choices. Let n1 be the number of ways the first stage or event can occur and, in general, let ni be the number of ways the ith stage or event can occur after the first i - 1 stages or events have occurred. Then the total number of different ways the task can occur is the product n1 * n2 * ... * nk.
Sandwich: chicken salad, ham, and tuna, and roast beef
Soup: tomato, chicken noodle, vegetable
Dessert: cookie and pie
Drink: tea, coffee, coke, diet coke and sprite
How many lunch specials are there?
Let’s use the basic counting principle:
There are 4 stages or events: choosing a sandwich, choosing a soup, choosing a dessert and choosing a drink.
There are 4 choices for the sandwich, 3 choices for the soup, 2 choices for the dessert and 5 choices for the drink.
Putting that all together we get:
# of lunch specials
Sand. Soup Dessert Drink
4 x 3 x 2 x 5 = 120
So there are 120 lunch specials possible.
Example 2: You are taking a test that has five True/False questions. If you answer each question with True or False and leave none of them blank, in how many ways can you answer the whole test?
Let’s use the basic counting principle:
There are 5 stages or events: question 1, question 2, question 3, question 4, and question 5.
There are 2 choices for each question.
Putting that all together we get:
quest. 1 quest. 2 quest. 3 quest. 4 quest. 5 # of ways to answer test
2 x 2 x 2 x 2 x 2 = 32
So there are 32 different ways to answer the whole test.
Example 3: A company places a 6-symbol code on each unit of product. The code consists of 4 digits, the first of which is the number 5, followed by 2 letters, the first of which is NOT a vowel. How
many different codes are possible?
Let’s use the basic counting principle:
There are 6 stages or events: digit 1, digit 2, digit 3, digit 4, letter 1, and letter 2.
In general there are 10 digits: 0, 1, 2, 3, 4, 5, 6, 7, 8, and 9. The first digit is limited to being the number 5, so there is only one possibility for that one. There are no restriction on digits 2
- 4, so each one of those has 10 possibilities.
In general, there are 26 letters in the alphabet. The first letter, cannot be a vowel (a, e, i, o, u), so that means there are 21 possible letters that could go there. The second letter has no
restriction, so there are 26 possibilities for that one.
Putting that all together we get:
digit 1 digit 2 digit 3 digit 4 letter 1 letter 2 # of codes
1 x 10 x 10 x 10 x 21 x 26 = 546000
So there are 546000 different 6-symbol codes possible.
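All three examples above reduce to one-line products. The tutorial itself does not use code; this is an illustrative sketch using Python's `math.prod`:

```python
from math import prod

lunch_specials = prod([4, 3, 2, 5])    # sandwich, soup, dessert, drink
test_answers = 2 ** 5                  # True/False for each of 5 questions
codes = prod([1, 10, 10, 10, 21, 26])  # fixed digit 5, three free digits,
                                       # non-vowel letter, any letter
# 120 lunch specials, 32 ways to answer the test, 546000 codes
```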
These are practice problems to help bring you to the next level. It will allow you to check and see if you have an understanding of these types of problems. Math works just like anything else, if you
want to get good at it, then you need to practice it. Even the best athletes and musicians had help along the way and lots of practice, practice, practice, to get good at their sport or instrument.
In fact there is no such thing as too much practice.
To get the most out of these, you should work the problem out on your own and then check your answer by clicking on the link for the answer/discussion for that problem. At the link you will find the
answer as well as any steps that went into finding that answer.
Practice Problems 1a - 1c: Solve using the counting principle.
1b. Next semester you are going to take one science class, one math class, one history class and one English class. According to the schedule you have 4 different science classes, 3 different math classes, 2 different history classes, and 3 different English classes to choose from. Assuming no scheduling conflicts, how many different four-course selections can you make?
(answer/discussion to 1b)
1c. Six students in a speech class all have to give their speech on the same day. One of the students insists on being first. If this student's request is granted, how many different ways are there to schedule the speeches?
(answer/discussion to 1c)
Last revised on May 19, 2011 by Kim Seward.
All contents copyright (C) 2002 - 2011, WTAMU and Kim Seward. All rights reserved. | {"url":"http://www.wtamu.edu/academic/anns/mps/math/mathlab/col_algebra/col_alg_tut55_count.htm","timestamp":"2014-04-16T08:47:51Z","content_type":null,"content_length":"36270","record_id":"<urn:uuid:ab38e2f9-196b-4b7a-b7a8-8f255a934c47>","cc-path":"CC-MAIN-2014-15/segments/1398223206770.7/warc/CC-MAIN-20140423032006-00220-ip-10-147-4-33.ec2.internal.warc.gz"} |
Who's That Mathematician? Paul R. Halmos Collection - Page 32
For more information about Paul R. Halmos (1916-2006) and about the Paul R. Halmos Photograph Collection, please see the introduction to this article on page 1. A new page featuring six photographs
will be posted at the start of each week during 2012.
Halmos photographed Libuše Löwig and Henry F. J. Löwig (1904-1995) on May 22, 1972, during a visit to the University of Alberta in Edmonton, Alberta, Canada, where Henry Löwig was a mathematics
professor. Heinrich Löwig earned his Ph.D. in 1928 from the German Technical University in Prague in what is now the Czech Republic with the dissertation “On Periodic Difference Equations.” During
the 1930s he taught at the German University in Prague and at a variety of German-language schools. During World War II, he was interned in German labor camps. In 1948, he moved to Hobart, Australia
to teach at the University of Tasmania and, in 1957, he joined the faculty at the University of Alberta, becoming Professor Emeritus in 1972. According to MathSciNet, he published mathematical papers
from 1931 to 1981, on the topics of difference and differential equations during the first decade, and mainly on algebra and lattices from 1941 (when he became “Henry”) onward. (Sources: Introduction
to Forgotten Mathematician Henry Löwig (1904-1995) (2012), Mathematics Genealogy Project, MathSciNet)
Eugene Lukacs (1906-1987), left, and Lipman (Lipa) Bers (1919-2004) were photographed in April of 1973 at an AMS Council meeting in New York. Two photos of Bers appear on page 5 of this collection,
where you can read more about him.
Born in Hungary, Lukacs grew up in Vienna, Austria, and earned his Ph.D. in geometry in 1930 from the University of Vienna. After teaching secondary school and working as an actuary in Vienna, Lukacs
emigrated to the U.S. in early 1939 after the Anschluss. There, his friend Abraham Wald interested him in statistics and probability research and, by 1942, he had made important advances in
statistics and was publishing papers on probability with Otto Szasz. After teaching at various colleges in Cincinnati, Ohio, and working briefly for the National Bureau of Standards and the Office of
Naval Research, Lukacs joined the faculty at the Catholic University of America in Washington, D.C., in 1955 and made CUA a center of statistical research. In 1972 he returned to Ohio to teach at
Bowling Green State University. During the 1960s and 1970s, he held many visiting positions, mainly in Europe. (Source: MacTutor Archive)
Halmos photographed Gunter Lumer (1929-2005) in June of 1990 at Stanford University in Palo Alto, California. Lumer earned his Ph.D. in 1959 from the University of Chicago under advisor Irving
Kaplansky (pictured on page 26 of this collection). He and Halmos first met at the University of Montevideo, Uruguay, where Halmos was a visiting professor during the 1951-52 academic year and Lumer
one of two students who worked closely with Halmos during that year. Halmos later wrote (1985, p. 187):
Gunter Lumer was born in Germany, received some of his early education in France (where he was Guy Lumer), and went to university in Uruguay (which was the permanent haven his parents found away
from Hitler). He ... [has] a wide grin, and a vivacious manner; he is always in motion. ... His positive personality was visible even in his mathematical attitudes, even when he was a young
student. ... He was energetic and talented enough that it’s not at all clear I really did him any good; I am sure he would have become a mathematician no matter what I did.
According to the second source listed below, after receiving his Ph.D. from the University of Chicago in 1959, Lumer spent one year each at the University of California, Los Angeles, and Stanford
University in Palo Alto, California, before joining the mathematics faculty at the University of Washington in Seattle in 1961, where he remained until 1974. In 1973, he became a member of the
faculty at the Université de Mons-Hainaut, Belgium, and, in 1999, he added a position at the Solvay Institutes for Physics and Chemistry in Brussels, Belgium. He would serve in both of these
positions until his death in 2005. Indeed, in 1985, Halmos wrote that Lumer had traveled to a 1980 conference at Oberwolfach from Belgium (p. 386) and that he currently was “doing hard Hardy spaces
in Belgium” (p. 188). Lumer's primary research interests were functional analysis, partial differential equations, and evolution equations. Sources:
• Paul Halmos, I Want To Be A Mathematician, Springer, 1985, pp. 187-9, 386;
• "Life and Work of Günter Lumer," Functional Analysis and Evolution Equations: The Günter Lumer Volume, Birkhäuser, 2008, pp. ix-x (available for download from Springer).
Halmos photographed, left to right, Maxwell Reade, Roger Lyndon (1917-1988), and Frieda (?) Lyndon (Halmos identified the two on the right only as “Lyndon^2” on the back of the photograph) on June 7,
1967, at the University of Michigan in Ann Arbor (“A^2”). Reade, Roger Lyndon, and Halmos were faculty members at the University of Michigan at the time, with Reade serving from 1946 to 1986, Lyndon
from 1953 to 1988, and Halmos from 1961 to 1968.
Complex analyst Maxwell Reade earned his Ph.D. in 1940 from the Rice Institute (now Rice University) in Houston, Texas. After teaching at Ohio State University and Purdue University, he joined the
mathematics faculty at the University of Michigan in 1946, where he has been both a very popular teacher and a researcher in complex function theory throughout his career and where he became
Professor Emeritus in 1986. Besides mathematical analysis, he and Halmos had photography as a common interest. The University of Michigan’s African American Music Collection includes the Maxwell O.
Reade Collection of Early Jazz and Blues Recordings. For a description of mathematics faculty life at the University of Michigan from 1946 to 1960 from the perspective of a faculty spouse, see
Marjorie Reade’s “What Was It Like Then? (Post War 1946-1960).” (Sources: Mathematics Genealogy Project, University of Michigan Faculty History Project, University of Michigan Library)
Group theorist Roger Lyndon earned his Ph.D. in 1946 from Harvard University under advisor Saunders Mac Lane, with the dissertation, “The Cohomology Theory of Group Extensions.” From 1946 to 1948, he
worked in the Office of Naval Research and, from 1948 to 1953, he was on the mathematics faculty at Princeton University, where he became interested in combinatorial group theory. He also was
interested in logic and model theory, and his three books, Notes in Logic (1966); Combinatorial Group Theory, co-authored by Paul Schupp (1976); and Groups and Geometry (1985) represent his main
mathematical interests well. The latter two were especially influential and authoritative. From 1953 onward, Lyndon was a mathematics professor at the University of Michigan, becoming Professor
Emeritus in 1988. (Sources: MacTutor Archive, University of Michigan Faculty History Project)
Halmos photographed Saunders Mac Lane (1909-2005) in 1958. Mac Lane earned his Ph.D. in 1934 from the University of Göttingen under advisors Hermann Weyl and Paul Bernays. He was on the faculty at
Harvard University from 1938 to 1947, and at the University of Chicago from 1947 to 1982, serving as department chair from 1952 to 1958, when this photo was taken. Halmos was on the faculty at
Chicago from 1946 to 1961.
Mac Lane’s primary research areas were homological algebra and category theory, and he advised at least 41 Ph.D. students during his career, most of them at the University of Chicago. His first
doctoral student was Irving Kaplansky (pictured on page 26 of this collection), who earned his Ph.D. in 1941 at Harvard and joined the Chicago faculty in 1945. Mac Lane may be best known for his
textbook, A Survey of Modern Algebra, co-authored with Garrett Birkhoff in 1941. Mac Lane was MAA president during 1951-52 and AMS president during 1973-74. (Sources: MacTutor Archive, Mathematics
Genealogy Project, MAA Presidents, AMS Presidents)
Halmos photographed Saunders Mac Lane (1909-2005) again in March of 1982 in Bloomington, Indiana.
For an introduction to this article and to the Paul R. Halmos Photograph Collection, please see page 1. Watch for a new page featuring six new photographs each week during 2012.
Regarding sources for this page: Information for which a source is not given either appeared on the reverse side of the photograph or was obtained from various sources during 2011-12 by archivist
Carol Mead of the Archives of American Mathematics, Dolph Briscoe Center for American History, University of Texas, Austin. Permission to reproduce photos must be obtained from the Dolph Briscoe
Center for American History, University of Texas, Austin. | {"url":"http://www.maa.org/publications/periodicals/convergence/whos-that-mathematician-paul-r-halmos-collection-page-32","timestamp":"2014-04-16T22:35:39Z","content_type":null,"content_length":"123944","record_id":"<urn:uuid:1ecf45ea-50fb-43a4-a8ea-51e8bbe2eb08>","cc-path":"CC-MAIN-2014-15/segments/1397609526311.33/warc/CC-MAIN-20140416005206-00451-ip-10-147-4-33.ec2.internal.warc.gz"} |
Modern Differential Geometry Of Curves And Surfaces With Mathematica
Modern Differential Geometry of Curves and Surfaces with Mathematica R ... For those who want to use Curves and Surfaces to learn Mathematica, ... returning the differential geometry of curves and
surfaces to its proper place in
MODERN DIFFERENTIAL GEOMETRY of Curves and Surfaces with ... Studying Curves in the Plane with Mathematica 25 2.1 Computing Curvature of Curves in the Plane 29 ... 18. Asymptotic Curves on Surfaces
417 18.1 Asymptotic Curves 418
In this notebook we develop Mathematica tools for the Euclidean differential geometry of curves. ... Alfred Gray, Simon Salamon, Elsa Abbena. Modern Differential Geometry of Curves and Surfaces with
Mathemat-ica. Third ed. CRC Press. 2006. [G94] Alfred Gray. Differentialgeometrie.
“Modern Differential Geometry of Curves and Surfaces via Mathematica, Second Edition” (CRC Press, 1998). A gallery of surfaces is available: ... “Modern Differential Geometry of Curves and Surfaces
via Mathematica, Second Edition” (CRC Press, 1998).
Geometry > Curves > Plane Curves > Algebraic Curves > ... Gray, A. Modern Differential Geometry of Curves and Surfaces with Mathematica, 2nd ed. Boca Raton, FL: CRC Press, pp. 119-120, 1997.
Lawrence, J. D. ... Mathematica » The #1 tool for ...
Differential Geometry of Surfaces with Mathcad: ... A., Abbena, E., Salamon, S., Modern Differential Geometry of Curves and Surfaces with Mathematica, Third Edition, Chapmann & Hall/CRC, 2006. 4. ...
A Modern Course on Curves and Surfaces; ...
Title: Modern differential geometry of curves and surfaces with Mathematica - Libraries Created Date: 3/29/2014 5:34:08 AM
Differential Geometry ... To study the basic principles of differential geometry which relate curves and surfaces in space. ... A. Gray. Modern Differential Geometry of curves and surface with
Mathematica, 2nd Edition. CRC Press, Boca Raton, FL, 1998.
Geometry > Curves > Plane Curves > Polar Curves > History and Terminology ... Gray, A. Modern Differential Geometry of Curves and Surfaces with Mathematica, 2nd ed. Boca Raton, FL: CRC Press,
... Mathematica » The #1 tool for ...
The interaction of curves and surfaces ... The first edition of Alfred Gray's book "The Modern Differential Geometry of Curves and Surfaces with Mathematica" was already designed to exploit the
unique characteristics of Mathematica's programming and plotting capabilities. Since then, ...
SOME EXAMPLES OF USING MATHEMATICA IN TEACHING GEOMETRY Sonja Gorjanc Faculty of Civil Engineering Zagreb, Croatia ... gebraic ruled surfaces. ... 1998, Modern Differential Geometry of Curves and
Surfaces with Mathematica. CRC Press, Boca Raton [4] Gorjanc S., ...
Geometry > Curves > Plane Curves > Roulettes ... Complete Mathematica Documentation >> Show your math savvy with a ... News. Learn more about the cycloid in Modern Differential Geometry of Curves and
Surfaces with Mathematica.
... Mathematica for Differential Equations: Projects, Insights, ... Modern Differential Geometry Curves and Surfaces with Mathematica, Third Edition 2 686 2 955 Gray A. ... CRC Standard Curves and
Surfaces with Mathematica 2 318 2 550
Differential geometry of curves and surfaces: Tangent vector, normal plane ... Curves and surfaces for CAGD, Gerald Farin, Morgan ... Alfred Gray, Modern Differential Geometry of Curves and Surfaces
with Mathematica, CRC Press Ltd., 1996 5. I.D. Faux. and M. J Pratt, Computational ...
MATHEMATICA ITALIA 5° USER GROUP MEETING Ricerca, Didattica, ... Surfaces of rotation Abstract: ... A. Gray, E. Abbena, S. Salamon: Modern Differential Geometry of Curves and Surfaces with
Mathematica, CRC Press, 2006.
... Differential geometryof curves and surfaces, Prentice-Hall, Inc ... Berlin, Heidelberg, New York, 1993. [Gr] A. Gray: Modern Differential Geometry of Curves and Surfaces with Mathematica, CRC
Press, Boca ... [Op] J. Oprea: Differential Geometry and its Applications ...
Differential geometry is a core component of modern mathematics. ... M.P. doCarmo, Differential Geometry of Curves and Surfaces, Prentice-Hall, 1976. 2. A. Gray, Modern Differential Geometry of
Curves and Surfaceswith Mathematica, CRC, 1997. 3. D.
Geometry History and Terminology Number Theory ... A. Modern Differential Geometry of Curves and Surfaces with Mathematica, 2nd. L'Hospital's Rule ... O. Grundzüge der Differential- und
Integralrechnung, Vol. 1. Leipzig, Germany: Teubner, pp. 72-84, 1893.
Gray, A., Modern Differential Geometry of Curves and Surfaces with Mathematica, 2nd ed. Boca Raton, FL: CRC Press, 1997. ... Analysis of Bending of Surfaces Using Program Package Mathematica, Facta
Univer-sitatis, Series Arhitecture and Civil Engineering,vol 2.
Modern Differential Geometry of Curves and Surfaces with Mathematica by Alfred Gray. ... of Curves and Surfaces with Mathematica. The students start by parametrically plotting forty variations of a
common surface such as the Enneper, ...
geometry and this is proof of its importance [1]. ... Curves from motion, motion from curves, Curve and Surface Design: ... Modern Differential Geometry of Curves and Surfaces with Mathematica, 2nd
ed. CRC Press, 1997. [4] J. L. Lagrange, ...
MATH - 420: Introductory Differential Geometry Three credit hours Prepared by ... of Curves and Surfaces with Mathematica , CRC; 3d edition, 2006. [3] ... Di erential Geometry: Curves - Surfaces -
Manifolds , Amer-
ANALYSIS OF BENDING OF SURFACES USING PROGRAM PACKAGE MATHEMATICA ... building mechanics and also in the theory of deformation of surfaces, a part of differential geometry. ... 3. Gray A., Modern
differential geometry of curves and surfaces, CRC Press,1993. 4. Velimirović Lj. S.,
... Nonlinear programming, differential geometry, calculus of variation , general ... Modern technology can be applied today as well. ... Differential Geometry of Curves and Surfaces, Prentice-Hall,
1976 [7] ...
MINIMAL SURFACES FOR ARCHITECTURAL CONSTRUCTIONS ... Pictures are made using program package Mathematica. 1. ... Gray, A. Modern Differential Geometry of Curves and Surfaces with Mathematica, 2nd ed.
Boca Raton, FL: CRC Press, 1997. 4.
Differential Geometry of Curves and Surfaces with Mathematica, 2. nd. ed. Boca Raton, FL: CRC Press, pp. 111-115, 1997. 3. Durell, C.V. ... Modern Differential Geometry of Curves and Surfaces, Boca
Raton, FL: CRC Press, 1993. www.hpc.msstate.edu . Questions?
... we look at the Mathematica commands necessary for finding C(1,0.5) and C ... Differential geometry of curves and surfaces, Prentice-Hall, Inc., Englewood Cliffs, February 1976. Alfred Gray,
Modern Differential Geometry of Curves and Surfaces with Mathematica, Second Edition, CRC Press, ...
Book review of "Modern Differential Geometry of Curves and Surfaces with Mathematica, ... We continue working on applying Differential Geometry and Mathematica to understand ... and we talked about
the geometry of those surfaces.
The intersection for φ= /2 is derived in Gray, A. Modern Differential Geometry of Curves and Surfaces with Mathematica, 2nd ed. Boca Raton, FL: CRC Press, 1997. To derive the intersection for
arbitrary φ, start with
for the wave function Ψreduce to surfaces of constant action S ... Modern Differential Geometry of Curves and Surfaces with Mathematica, CRC Press (1998). ... Differential Geometry and the calculus
of Variations, Report
Curves And Surfaces for ... 387-24196 -8. [8] Gibson, C. G. (2001), Elementary Geometry of Differentiable Curves: An Undergraduate Introduction, Cambridge ... A. "Reuleaux Polygons." §7.8 in Modern
Differential Geometry of Curves and Surfaces with Mathematica, 2nd ed. Boca Raton, FL ...
... Exploring Analytic Geometry with Mathematica. Academic Press, 1999 Higher Algebra A.C. Hibbard, ... Problem Solving Using Mathematica. Alfred Gray: Modern Differential Geometry of Curves and
Surfaces. Deutsch: ...
Modern differential geometry of curves and surfaces with Mathematica / Gray, Alfred. 3. Ed. Hardback ... Nonlinear partial differential equations : the Abel symposium 2010 / Holden ... Analytic
Methods in Algebraic Geometry (vol. 1 in the Surveys of Modern Mathematics series) / Demailly, Jean ...
Modern Differential Geometry of Curves and Surfaces, CRC Press, Boca Raton, FL, 1993. Second Edition, CRC Press, Boca Raton, FL, 1998. Spanish and ... Curves and Surfaces (Mathematica and Maple
programs for the geometry of Curves and Surfaces).
Fourier Analysis and Partial Differential Equations Peter B. Gilkey ... Modern Differential Geometry of Curves and Surfaces with Mathematica, 2nd Edition Eugenio Hernandez and Guido Weiss, A First
Course on Wavelets Kenneth B ... tion on surfaces. We will give a brief treatment of Stokes ...
is a partial differential equation that describes how the ... Modern Differential Geometry of Curves and Surfaces with Mathematica, CRC Press (1998). [3] ... Differential Geometry and the calculus of
Variations, Report
... A. Gray, modern differential geometry of curves and surfaces with mathematica, CRC Press (1998) [7] Wantzel M. L; Recherches sur les moyens de reconnaître si un problème de Géométrie peut se
résoudre avec la règle et le
Fourier Analysis and Partial Differential Equations Peter B ... Spectral Geometry, Riemannian Submersions, and the Gromov-Lawson Conjecture Alfred Gray, Modern Differential Geometry of Curves and
Surfaces with Mathematica, 2nd Edition Eugenio ... Asymptotic formulae in spectral geometry/ Peter ...
J.N. Cederberg (1989) A Course in Modern Geometry, Springer-Verlag. W. Boehm & H ... A. Gray (1998) Modern differential geometry of curves and surfaces with MATHEMATICA, ... David W. Henderson (1998)
Differential Geometry, Prentice-Hall. Haggar,Ann. (2004) Pattern cutting for lingerie ...
Alfred Gray, Modern Differential Geometry of Curves and Surfaces with Mathematica, 2nd Edition ... A modern treatment ... Maple, Mathematica, and Matlab.
... Several Complex Variables and the Geometry of Real Hypersurfaces ... Modern Differential Geometry of Curves and Surfaces with Mathematica, 2nd Edition ... Kenneth L. Kuttler, Modern Analysis
Michael Pedersen, Functional Analysis in Applied Mathematics and Engineering Clark Robinson ...
Modern Differential Geometry of Curves and Surfaces with Mathematica, Third Edition ... A Course in Modern Geometries Series: Undergraduate Texts in ... CRC Standard Curves and Surfaces with
Mathematica, Second Edition 9781584885993 2006 51 CRC 149
New York: Wiley, pp. 115-119, 1969. | • Gray, A. Modern Differential Geometry of Curves and Surfaces with Mathematica, 2nd ed. Boca Raton, FL: CRC Press, 1997. | • Hilbert, D. and Cohn-Vossen, S.
Geometry and the Imagination. New York: Chelsea, p. 4, 1999 ... in A Book of Curves.
Modern differential geometry of curves and surfaces with Mathematica 351636 GAM ... 31 Differentiable curves and surfaces, M. do carmo, Prentice Hall, New Jersey, ... Modern differential Geometry of
curves and surface with Mathematic, Gray, 2nd Edition, ...
... to Geometry, 2nd ed. New York: Wiley, pp. 115-119, 1969. 2. Gray, A. Modern Differential Geometry of Curves and Surfaces with Mathematica, 2nd ed. Boca Raton, FL: CRC Press, 1997. 3. MacTutor
History of Mathematics Archive "Hyperbola." http://www-groups.dcs.st-and.ac.uk/~history/Curves ...
... Differential Geometry: Manifolds, Curves and Surfaces, Graduate Texts in Mathematics 115 ... 2004. [9] A. Gray, Modern Differential Geometry of Curves and Surfaces with Mathematica, 2nd ed., Boca
Raton ... S. Kivelä, “On the Visualization of Riemann Surfaces,” in Applied Mathematica, ...
Price list: book publications related to the Mathematica program ... Mathematica: With Applications to ... Modern Differential Geometry Curves and Surfaces with Mathematica, Third Edition 2 686 2 955 Gray
A ...
ming in Mathematica. (The code used here is slightly differ-ent from Maeder’s; see ... braic geometry. ... Modern Differential Geometry of Curves and Surfaces. CRC Press. Smith, Cameron, and Nancy
Blachman. 1995.
drawings represent vessels with interiors that are surfaces of revolution (see, e.g., ... Surfaces of Revolution. Ch. 20 in Modern Differential Geometry of Curves and Surfaces with Mathematica (2nd
ed.). Boca Raton, FL.
1 Introduction to Differential Geometry. Required literature: 1. A. Gray, Modern Differential Geometry of Curves and Surfaces with Mathematica, CRC Press, Boca Raton-Boston-London-New York-Washington,
Least Common Multiple Calculator
I have updated this page: Least Common Multiple Calculator
Could you throw some numbers at it and let me know if it behaves itself, thank you.
"The physicists defer only to mathematicians, and the mathematicians defer only to God ..." - Leon M. Lederman
Re: Least Common Multiple Calculator
Hi MIF;
It is a nice little calculator. Very good!
In mathematics, you don't understand things. You just get used to them.
I have the result, but I do not yet know how to get it.
All physicists, and a good many quite respectable mathematicians are contemptuous about proof.
Star Member
Re: Least Common Multiple Calculator
Seems to work like a charm!! I tried it with 2 and 3 numbers filled in: primes, related multiples, etc.
igloo myrtilles fourmis
Re: Least Common Multiple Calculator
Thanks both.
Quality control
Re: Least Common Multiple Calculator
Thank You Beth!
Re: Least Common Multiple Calculator
New version (0.6): Least Common Multiple Calculator
I upgraded it to use my "Full Precision" functions, so it can handle larger numbers, so lots of changes to internal calcs.
Again, could you throw some numbers at it?
Re: Least Common Multiple Calculator
It is working fine. No problems.
Re: Least Common Multiple Calculator
Hi MIF!
Did you mean for it to work with decimals too? The few I tested worked.
Did you mean for it to work with negative numbers? That didn't work.
Works nice for positive integers.
You might consider limiting the inputs to just positive integers.
Zero poses a problem, since any number times zero equals zero. The multiples of zero are 0,0,0,...
The multiples of 2 are 0,2,4,6,... So accordingly lcm(0,2) would be zero.
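For comparison, Python's standard-library `math.gcd`/`math.lcm` (Python 3.9+) adopt exactly these conventions — a quick check of my own, not part of the Flash calculator being discussed:

```python
import math

# Any zero argument makes the lcm zero, matching the "multiples of zero
# are 0,0,0,..." reasoning above; signs are ignored, so results are
# always non-negative.
print(math.lcm(0, 2))    # 0
print(math.lcm(-4, 6))   # 12
print(math.gcd(-8, 12))  # 4
```

So one defensible design is to accept minus signs and zeros rather than reject them, and return these conventional values.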
People are going to try to input all kinds of numbers if there are no restrictions.
It may be better to simply not accept their inputs rather than accept the inputs and then
not get an answer or get an answer they can't understand.
Writing "pretty" math (two dimensional) is easier to read and grasp than LaTex (one dimensional).
LaTex is like painting on many strips of paper and then stacking them to see what picture they make.
Real Member
Re: Least Common Multiple Calculator
lcm is defined for positive integers only, so why should anyone input anything else, besides their curiosity?
The limit operator is just an excuse for doing something you know you can't.
“It's the subject that nobody knows anything about that we can all talk about!” ― Richard Feynman
“Taking a new step, uttering a new word, is what people fear most.” ― Fyodor Dostoyevsky, Crime and Punishment
Re: Least Common Multiple Calculator
I could reject the "-" sign.
Re: Least Common Multiple Calculator
Hi MIF!
You could allow the minus sign, since according to Wikipedia the lcm of two integers is the smallest
POSITIVE integer that is divisible by both. In essence the minus signs are just ignored. Wiki also
says that if either a or b is zero then the lcm is zero.
I've never seen lcm applied to decimals, although your program seems to work: it accepts them as input, computes as if the decimal points were absent, and then
places a decimal point in the answer so that both inputs divide into it an INTEGRAL number of times.
If I input 2.43 and 8.1 into your program it outputs 24.3 which both 2.43 and 8.1 divide into an
INTEGRAL number of times. So maybe you've come up with a way to define lcms of terminating
decimals! Inputting .166 and .333 gives 55.278 which both .166 and .333 divide into an INTEGRAL
number of times. Perhaps this could be extended to fractions if we write them in a base that makes BOTH of them TERMINATING decimals???
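This extension to terminating decimals can be made exact with rational arithmetic. A minimal sketch (my own Python, not the calculator's code; `lcm_decimals` is a made-up name) that reproduces both results above:

```python
from fractions import Fraction
from math import gcd, lcm

def lcm_decimals(*decimals):
    """Smallest positive value that each input divides into an INTEGRAL
    number of times: lcm of the numerators over hcf of the denominators,
    taken on the reduced fractions."""
    fracs = [Fraction(d) for d in decimals]   # exact: Fraction("2.43") == 243/100
    return Fraction(lcm(*(f.numerator for f in fracs)),
                    gcd(*(f.denominator for f in fracs)))

print(lcm_decimals("2.43", "8.1"))      # 243/10    (= 24.3)
print(lcm_decimals("0.166", "0.333"))   # 27639/500 (= 55.278)
```

Strings are used so the decimals stay exact (a float literal like 2.43 would bring in binary rounding).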
Re: Least Common Multiple Calculator
Thanks noelevans, well argued and well said.
Re: Least Common Multiple Calculator
Hi again!
Take the equation x/2.43 + 3x/8.1 = 10 and multiply both sides by the lcm 24.3 that your program gives.
Then we get 10x + 9x = 243 so 19x=243 so x=12.789. So maybe there are some decent
applications for the concept.
Of course we could have multiplied the equation through by both 8.1 and 2.43 and solved, but
then we would have had decimal coefficients for the variable.
8.1x + 3(2.43)x = 10(8.1*2.43)
8.1x + 7.29x = 196.83
15.39x = 196.83
x = 196.83/15.39
x = 12.789
Does make the arithmetic a bit easier!
But perhaps we can get lcms for fractions a/b and c/d in the sense that we are looking for the
smallest fraction that both a/b and c/d divide into giving integers.
Example: 15/77 and 25/49 and see if 75/7 does the trick.
(75/7)/(15/77)=(75*77)/(7*15)=5*11=55 and (75/7)/(25/49)=(75*49)/(25*7)=3*7=21
which yields integral values for each division.
So given x/(15/77) + x/(25/49) = 7 and multiplying both sides by 75/7 we obtain
55x + 21x = 7*(75/7) = 75
x = 75/76
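That computation can be replayed exactly with Python's `fractions` module (a sketch; the lcm-of-fractions rule is the one proposed in this thread):

```python
from fractions import Fraction
from math import gcd, lcm

a, b = Fraction(15, 77), Fraction(25, 49)
# lcm of two reduced fractions: lcm of numerators over hcf of denominators
m = Fraction(lcm(a.numerator, b.numerator), gcd(a.denominator, b.denominator))
print(m)             # 75/7
print(m / a, m / b)  # 55 21  -- both quotients integral, as required
# x/(15/77) + x/(25/49) = 7, multiplied through by m, becomes 55x + 21x = 75:
x = 7 * m / (m / a + m / b)
print(x)             # 75/76
```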
It looks like lcm(a/b,c/d) = lcm(a,c)/hcf(b,d)
Example: lcm(1/4,1/6) = lcm(1,1)/hcf(4,6) = 1/2. (1/2)/(1/4)=2 and (1/2)/(1/6) = 3.
Example: lcm(4,6) = lcm(4/1,6/1) = lcm(4,6)/hcf(1,1) = 12/1 = 12. (Works for integers)
Example: lcm(2/3,6/15) = lcm(2,6)/hcf(3,15) = 6/3 = 2
2/(2/3) = 3 and 2/(6/15) = 5
Hmmmmmm. This might at times be an easier approach to solving equations involving fractions.
But of course for just two fractions we could replace lcm(a,c) with ac/hcf(a,c) so the formula
would become lcm(a/b,c/d) = ac/(hcf(a,c)*hcf(b,d))
And would lcm(a/b,c/d,e/f) = lcm(a,c,e)/hcf(b,d,f) etc. for more than two fractions?
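The three-fraction question can at least be spot-checked. The sketch below (function name mine) applies lcm(numerators)/hcf(denominators) to reduced fractions; the divisibility check passes on sampled triples, which is consistent with — though not a proof of — the conjectured lcm(a/b,c/d,e/f) = lcm(a,c,e)/hcf(b,d,f):

```python
from fractions import Fraction
from math import gcd, lcm

def frac_lcm(*fracs):
    fracs = [Fraction(f) for f in fracs]   # Fraction reduces automatically
    return Fraction(lcm(*(f.numerator for f in fracs)),
                    gcd(*(f.denominator for f in fracs)))

assert frac_lcm("1/4", "1/6") == Fraction(1, 2)   # example above
assert frac_lcm("2/3", "6/15") == 2               # example above
# three fractions: the result is an integral multiple of every input
trio = [Fraction(3, 8), Fraction(5, 12), Fraction(7, 18)]
m = frac_lcm(*trio)
assert all((m / f).denominator == 1 for f in trio)
print(m)   # 105/2
```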
Your program is generating some interesting questions!
Writing "pretty" math (two dimensional) is easier to read and grasp than LaTex (one dimensional).
LaTex is like painting on many strips of paper and then stacking them to see what picture they make.
Re: Least Common Multiple Calculator
And again HI!
Do you have a highest common factor (greatest common divisor) calculator too?
It seems to me that "gcd" is preferred more in higher levels of mathematics.
hcf(a/b,c/d) = hcf(a,c)/hcf(b,d). (Extends to more than two fractions)
These are used in a method of adding and subtracting fractions which is easier to
use in most cases where the denominators have a common factor.
Example: 10/33 + 15/22 = (5/11)(2/3 + 3/2) = (5/11)(13/6) = 65/66
The 5/11 is the hcf of 10/33 and 15/22.
The 66 in the 65/66 is the least common denominator of the fractions, but it
never has to be found explicitly: once the hcf 5/11 is factored out, only the
small cofactors 2/3 and 3/2 need to be added. Thus the least common denominator is
not necessarily seen in the process.
Any non-zero linear combination of two (or more) natural numbers M and N, say aM+bN
where a and b are non-zero integers has as one of its factors hcf(M,N).
If c is a common factor of M and N then there are numbers x and y such that
M=cx and N=cy. Hence aM+bN = acx + bcy = c(ax + by). So ANY common factor
of M and N must be a factor of the linear combination also.
This should also apply to two (or more) fractions a/b and c/d.
I've heard of even and odd fractions, prime and composite fractions, and now lcm
and hcf of fractions. What's next? And what interesting applications might arise from
these concepts?
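The factoring trick can be checked with exact arithmetic. A sketch using the hcf(a,c)/hcf(b,d) rule from this thread (note this is the thread's definition; the textbook gcd of rationals, which requires whole-number quotients, puts lcm(b,d) in the denominator instead):

```python
from fractions import Fraction
from math import gcd

def frac_hcf(p, q):
    # hcf(a/b, c/d) = hcf(a, c) / hcf(b, d)  -- this thread's definition
    return Fraction(gcd(p.numerator, q.numerator),
                    gcd(p.denominator, q.denominator))

p, q = Fraction(10, 33), Fraction(15, 22)
h = frac_hcf(p, q)
print(h)             # 5/11
print(p / h, q / h)  # 2/3 3/2  -- the small cofactors
# adding the cofactors and multiplying back gives the sum directly:
assert p + q == h * (p / h + q / h) == Fraction(65, 66)
```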
Have a super day (or night as the case may be)!
I gotta get back to sleep.
Re: Least Common Multiple Calculator
Re: Least Common Multiple Calculator
The gcd calculator seems to work fine for positive integer inputs. I input negative integers, decimals and fractions, and it just returned 1 in every case, although it allows these forms
to be entered (fractions in the form a/b). I have written many programs in BASIC and have had to try to "idiot-proof" them. It's difficult to anticipate what other people might do.
Somewhere down the line when you have a good chunk of time (if ever) it would be nice to have a
calculator that allows the input of integers, decimals and fractions as well as positive integers and
then have the program calculate both the lcm and gcd for the input set of numbers. You could
have the first site on the internet that does these calculations for all these kinds of inputs. It might
generate a good bit of curiosity and cause a bit more membership and traffic on the site.
Re: Least Common Multiple Calculator
The LCM calculator is a Flash App that uses my "full precision" library
But the GCF calculator is a fairly simple javascript program ... I could re-make it in Flash.
Re: Least Common Multiple Calculator
I adapted the LCM calculator as a GCF calculator!
Here: Greatest Common Factor Calculator
Have a play, tell me what works/doesn't work.
Full Member
Re: Least Common Multiple Calculator
Hi MIF;
This is a very good job. Congratulations!
Winter is coming.
Re: Least Common Multiple Calculator
Hello again MIF,
The gcf, gcd, hcf, hcd calculator seems to work just fine. Works with negative numbers and with
decimals too. And if one tries to input fractions with the "/" it just ignores the "/". Good work!
I'm still working on the lcm and gcd of fractions trying to get equivalent formulations and
examples of problems that it can apply to.
Have a great day!
Re: Least Common Multiple Calculator
Hi again!
An interesting note:
Assume a,b,c,d are integers in the following and that the fractions are in reduced form.
(It still seems to work OK even if the fractions are not reduced.)
Using hcf(a/b,c/d)=hcf(a,c)/hcf(b,d) and lcm(a/b,c/d)=lcm(a,c)/hcf(b,d) works for
whole numbers like 10 and 15 written as 10/1 and 15/1.
hcf(10,15) = hcf(10/1, 15/1) = hcf(10,15)/hcf(1,1) = 5/1 = 5.
lcm(10,15) =lcm(10/1, 15/1) = lcm(10,15)/hcf(1,1) = 30/1 = 30 and hcf*lcm=5*30=150=10*15
So whole numbers (and so also integers) also work under the definition of hcf for fractions
with the usual M*N=lcm(M,N)*hcf(M,N) formula intact.
BUT for other kinds of fractions this product of the original two numbers equals the product
of the hcf and lcm does NOT necessarily work.
Example2: hcf(1/10, 1/15) = hcf(1,1)/hcf(10,15) = 1/5.
lcm(1/10,1/15) = lcm(1,1)/hcf(10,15) = 1/5
so hcf*lcm = (1/5)(1/5)=1/25 whereas (1/10)(1/15) = 1/150. The hcf*lcm is missing the other
factor of each of the 10 and 15. So we only get the 5 and 5 but not the other factors 2 and 3.
Example3: hcf(15/8, 25/6) = hcf(15,25)/hcf(8,6) = 5/2.
lcm(15/8, 25/6) = lcm(15,25)/hcf(8,6) = 75/2.
so hcf*lcm = 375/4 whereas (15/8)(25/6)=375/48. So again we are missing the other factors
in the denominator. The numerators are always the same since they are a product of the lcm
and hcf of INTEGERS.
So it looks like in the case of integers, we get hcf(M,N)*lcm(M,N)=M*N as a SPECIAL CASE of the
more general definition of lcm and hcf because the denominators are both 1's.
Example4: hcf(10/7, 15/7)=hcf(10,15)/hcf(7,7) = 5/7.
lcm(10/7, 15/7)=lcm(10,15)/hcf(7,7) = 30/7.
So hcf*lcm = (5/7)(30/7) = 150/49 and (10/7)(15/7) = 150/49.
So if BOTH denominators are the SAME then the product of the original numbers = lcm*hcf holds.
Example5: hcf(10/3, 15/7)=hcf(10,15)/hcf(3,7) = 5/1 = 5.
lcm(10/3, 15/7)=lcm(10,15)/hcf(3,7) = 30/1 = 30.
so hcf*lcm = 5*30=150 whereas (10/3)(15/7)=150/21 Again these are not equal.
CONCLUSION1: THE lcm*hcf BEING EQUAL TO THE PRODUCT OF THE original two numbers
ONLY WORKS WHEN THE TWO NUMBERS HAVE THE same DENOMINATOR.
Of course, integers are written over 1 to make them fractions for the formula.
CONCLUSION2: The old trick of calculating the lcm by dividing the product of the original numbers
by the hcf cannot be used when dealing with fractions unless their denominators
are the same.
CONCLUSION3: Given fractions a/b and c/d with a,b,c,d integral the equality
lcm(a/b, c/d) = a*c/(hcf(a,c)*hcf(b,d)) is I believe true because we can
substitute lcm(a,c) = a*c/hcf(a,c) since a and c are integers.
CONCLUSION4: Given a/b and c/d if gcd(b,d)=1 then the lcm and hcf of the two fractions are
integers. See example 5. Furthermore they are the lcm and hcf of just the numerators a and c.
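These conclusions are easy to confirm mechanically under the thread's definitions (a sketch; `frac_lcm`/`frac_hcf` just spell out the formulas from these posts):

```python
from fractions import Fraction
from math import gcd, lcm

def frac_lcm(p, q):
    return Fraction(lcm(p.numerator, q.numerator), gcd(p.denominator, q.denominator))

def frac_hcf(p, q):
    return Fraction(gcd(p.numerator, q.numerator), gcd(p.denominator, q.denominator))

def product_rule_holds(p, q):     # does lcm * hcf == p * q ?
    return frac_lcm(p, q) * frac_hcf(p, q) == p * q

assert product_rule_holds(Fraction(10), Fraction(15))            # integers
assert product_rule_holds(Fraction(10, 7), Fraction(15, 7))      # Example 4
assert not product_rule_holds(Fraction(1, 10), Fraction(1, 15))  # Example 2
assert not product_rule_holds(Fraction(15, 8), Fraction(25, 6))  # Example 3
# Conclusion 4: coprime denominators give integer lcm and hcf (Example 5)
assert frac_lcm(Fraction(10, 3), Fraction(15, 7)) == 30
assert frac_hcf(Fraction(10, 3), Fraction(15, 7)) == 5
```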
So MIF, can I blame my lack of sleep on you? You really got my mind a buzzin' with your lcm calculator!
P.S. The decimals seem to still work for the lcm and gcd calculators and seem to give the same
answer when changed into fractions.
An Engaging Algebraic Identity
02 Jul
A question has been asked on a linkedin group to prove the following engaging identity
$(\frac{b-c}{a} + \frac{c-a}{b}+ \frac{a-b}{c})(\frac{a}{b-c}+\frac{b}{c-a}+\frac{c}{a-b})=9,$
provided $a+b+c=0$.
One of the posts pointed to a solution at Stevens Society of Mathematicians. What follows is a slight simplification of that proof.
Denote the left factor $L(a,b,c)$ and the right factor $R(a,b,c)$. Observe that whenever two of the arguments in $L$ are equal, the whole expression vanishes. For example,
\begin{align}L(a,a,c)&=\frac{a-c}{a}+\frac{c-a}{a}+\frac{a-a}{c} \\ &=\frac{a-c}{a}-\frac{a-c}{a}=0.\end{align}
Adding the fractions in $L(a,b,c)$, $L(a,b,c)=\frac{L'(a,b,c)}{abc}$. What we just showed implies that the numerator $L'$ is divisible by $(a-b)(b-c)(c-a)$. Multiplying through confirms that $L'(a,b,c)=-(a-b)(b-c)(c-a)$, so that $L(a,b,c)=-\frac{(a-b)(b-c)(c-a)}{abc}.$
Now, let's turn to the right factor. Up to now we have not used the condition $a+b+c=0$. It's time we do. Introduce
$\begin{cases}x = b - c \\ y = c - a \\ z = a - b.\end{cases}$
Seen as a system of linear equations with $a,b,c$ as unknowns, it's degenerate because $x+y+z=0$. The situation improves if we replace any one of the equations with $a+b+c=0$. Then, for example,
\begin{align}y-z &= (c-a)-(a-b) \\ &= (b+c)-2a \\ &=-3a. \end{align}
Similarly, $-3b=z-x$ and $-3c=x-y$. This allows us to express the right factor $R(a,b,c)$ in terms of $x,y,z$:
$R(a,b,c)=-\frac{1}{3} (\frac {y-z} {x} + \frac {z-x} {y} + \frac {x-y} {z} ).$
This is exactly the same form as $L(a,b,c)$, implying that
$R(a,b,c)=\frac{(x-y)(y-z)(z-x)}{3xyz}=\frac{(-3c)(-3a)(-3b)}{3(b-c)(c-a)(a-b)}=-\frac{9abc}{(a-b)(b-c)(c-a)}.$
Finally, $L(a,b,c) \cdot R(a,b,c) = 9.$
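For skeptical readers, the identity is also easy to sanity-check with exact rational arithmetic (a numerical spot check of mine, not a substitute for the proof above):

```python
from fractions import Fraction

def identity_product(a, b, c):
    L = (b - c)/a + (c - a)/b + (a - b)/c
    R = a/(b - c) + b/(c - a) + c/(a - b)
    return L * R

# sample triples with a + b + c = 0, all entries nonzero and distinct
for a, b in [(1, 2), (3, 5), (-7, 2)]:
    a, b = Fraction(a), Fraction(b)
    c = -a - b
    assert identity_product(a, b, c) == 9   # exact, no rounding
print("product is exactly 9 on every sampled triple")
```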
Stockton, CA Math Tutor
Find a Stockton, CA Math Tutor
...I am very optimistic and excited to have the opportunity to share my knowledge with other students.I just obtained my BA with a major in French with a concentration in French Studies. I
studied abroad for a semester in Nantes, France in the spring of 2010. I am also looking forward to helping students better their French because I myself am looking to practice my French.
10 Subjects: including algebra 1, English, prealgebra, algebra 2
...While teaching, I tutored on the side. I have been teaching for 4 years. I enjoy geometry most of all, but believe that the basis to any good math student is algebra.
3 Subjects: including algebra 1, geometry, algebra 2
...I have recently decided to take time away from a full time teaching position to be a stay at home mother. In addition to Physics, I have trained experience teaching AVID, a class which
includes an emphasis on improving study and organizational skills, improved note-taking, and integrating readin...
16 Subjects: including geometry, prealgebra, trigonometry, algebra 1
...That is where my superb mathematics ability comes into play. I can add, subtract, multiply, and divide fractions like no one else. I also have been cooking since I was ten years old.
17 Subjects: including trigonometry, algebra 1, algebra 2, elementary math
...I took 2 semesters of Sign Language and passed both with an A in class. On top of that, two of my brothers are deaf and mute so I have been doing sign language for the past 15 years. I have
tutored in CBEST preparation before.
27 Subjects: including algebra 1, algebra 2, ACT Math, calculus
FOM: Feferman's ten theses
Arnon Avron aa at math.tau.ac.il
Mon Jan 12 19:32:48 EST 1998
It seems to me that according to his last explanation, the term
"imagination" as used by Sol refers more to the process of getting
acquainted with mathematical concepts (or understanding them) than
to the "place" where they exist. Perhaps a case can be made that
there is really no difference between the two things as far as abstract
concepts are concerned. Taken literally, however, there are at least
two different ways to understand Sol's first thesis:
1) Mathematical objects do EXIST. The answer to the question "where
do they exist?" is "our imagination/thoughts/mind (etc)". (such question,
and maybe also the answer, rest on assumption that whatever exists
should exist SOMEWHERE. Personally, I dont see why this should be the case
with nonphysical objects).
2) Mathematical objects are just imaginary. They do not really exist.
Now both readings might be correct when applied to different mathematical
objects. For me, at least, the natural numbers are objects of the first
type, while "arbitrary" sets of reals (to say nothing about measurable
cardinals) are of the second. I have no clear idea what is the status
of the reals (because of their geometrical interpretation). What I think
should be clear is that the second part of Sol's 6 ("there are objective
questions of truth and falsity") applies only to objects of the first type.
I believe that the really key word in the first thesis is "our"
(... objects which exist only in OUR imagination). According to my
understanding, it means that we cannot attribute existence to
what WE are unable to fully concieve (at least potentially). This
is why I am so suspicious about arbitrary sets of reals. But does "we"
mean each of us alone, or all of us as a total? Sol's 5 seems to
imply that the answer is something in the middle. Of all the theses,
this is the one for which I like to get a more elaborate explanation,
since it is not clear to me how we can even communicate the contents
of our imagination to each other without having already some concepts
which are apriorily built into us.
Some short comments on Steel's arguments:
>We do
>use facts about real numbers to build bridges and send men to the moon
All actual calculations are done with the rationals. The reals are
just used as an instrument for deriving results concerning these
calculations. So were infinitesimals (still are, in fact). So what?
> The interesting things that CAN be said
> about sets are said in set theory, and the sciences which apply it. There
> are lots of really useful things to be said in this domain--that's why
> society supports mathematicians. Virtually everything said in this domain
> logically implies that there are sets. None of it is about how
> these sets are related to our imaginations or social conventions.
Try to substitute here "God" for "sets", "theology" for "set theory",
and "theologians" for "mathematicians", and you will realize why I find
such arguments hardly convincing (by the way, I am not comparing
the existence of sets to that of God: I admit to having SOME intuitions
concerning sets).
Arnon Avron
Position: Professor of Mathematics and Computer Science
Institute: Department of Computer Science, School of
Mathematical Sciences, Tel-Aviv University,
Research interest: Foundations of Logic, Foundations of Mathematics,
non-classical logics, automated deduction, applications of logic in CS.
More information about the FOM mailing list
A physics proof
February 22nd 2007, 06:45 PM #1
Feb 2007
A physics proof
I have a differential equations problem that I thought someone here may be able to help with. So here goes:

In the motion of an object through a certain medium (air at certain pressures is an example), the medium furnishes a resisting force proportional to the square of the velocity of the moving object. Suppose a body falls, due to the action of gravity, through the medium. Let t represent time, and v represent velocity, positive downward. Let g be the usual constant acceleration of gravity, and let w be the weight of the body. Use Newton's law, force equals mass times acceleration, to conclude that the differential equation of motion is

(w/g) dv/dt = w - kv^2

where kv^2 is the magnitude of the resisting force furnished by the medium.
I would not call physics a "proof", but anyway.

By the second law: F = ma.

When an object is falling down there are two forces: the force of gravity and the resistance force. By the conditions of the problem the resistance force is proportional to the square of the speed, in simple terms kv^2, and it acts upward, opposite to the motion. The downward force is w, the weight. So the net downward force is

w - kv^2 (the resistance enters with a minus sign since it acts in the opposite direction).

On the other hand, F = ma = m*(dv/dt), where the mass is the weight divided by g, that is, m = w/g, and dv/dt is the acceleration, as you know. Equating the two expressions for the force:

(w/g) dv/dt = w - kv^2.
I understand the first part, but the second and third parts are getting me:

b) Solve the differential equation of part A, with the initial condition that v = v0 when t = 0. Introduce the constant a^2 = w/k to simplify the formulas.

c) There are media that resist motion through them with a force proportional to the first power of the velocity. For such a medium, state and solve problems analogous to parts A) and B), except that for convenience a constant b = w/k may be introduced to replace the a^2. Show that b has the dimensions of a velocity.

I realize how good this forum is now, and I'll hang around here to learn some math techniques. Thanks for the help if you can provide it.
The ODE (w/g)dv/dt = w - kv^2 is of variables separable type, so:

int (w/g)/(w - kv^2) dv = int dt

Using a^2 = w/k, we have w - kv^2 = k(a^2 - v^2) and w/g = ka^2/g, so the left hand side may be rewritten:

(a^2/g) int 1/(a^2 - v^2) dv = t + C

and then partial fractions may be used to integrate the left hand side:

(a/(2g)) int [1/(a+v) + 1/(a-v)] dv = t + C

(a/(2g)) ln[(a+v)/(a-v)] = t + C

(a+v)/(a-v) = exp[2g(t+C)/a]

v = a {exp[2g(t+C)/a] - 1}/{exp[2g(t+C)/a] + 1}

The right hand side probably simplifies some more when you put in the initial
condition etc.
In case you want it, I'll give you a bit more information about these equations and their solutions. But I'll need to stop home and pick up the appropriate text first. I'll try to post later.
I was able to go to the teacher today and she showed me a simpler way to do it.

(w/g) dv/dt = w - kv, with b = w/k

(w/(gk)) dv/dt = (w - kv)/k

(w/(gk)) dv/dt = w/k - v

(b/g) dv/dt = b - v

b dv/dt = (b - v)g

(b/(b - v)) dv = g dt

-b ln(b - v) = gt + C

v = Ce^(-gt/b) + b

v = (v0 - b)e^(-gt/b) + b

Thanks for all your help on this problem guys
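The linear-drag answer can be checked the same way — a quick sketch (the parameter values g, b, v0 are invented for illustration) comparing v(t) = (v0 - b)e^(-gt/b) + b against direct integration of b dv/dt = (b - v)g:

```python
import math

# Check of the linear-drag answer v(t) = (v0 - b) e^(-g t / b) + b.
# Parameter values (g, b, v0) are invented for illustration.
g, b, v0 = 9.8, 20.0, 5.0

def v_exact(t):
    # Solution of b dv/dt = (b - v) g with v(0) = v0
    return (v0 - b) * math.exp(-g * t / b) + b

# Forward-Euler integration of dv/dt = (g / b) * (b - v)
v, t, dt = v0, 0.0, 1e-4
while t < 3.0:
    v += dt * (g / b) * (b - v)
    t += dt

assert abs(v - v_exact(3.0)) < 1e-2   # numeric and closed form agree
```

Note that v -> b as t -> infinity, so b plays the role of a terminal velocity here — consistent with the requirement in part (c) that b = w/k have the dimensions of a velocity.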
Stockton, CA Math Tutor
Find a Stockton, CA Math Tutor
...I am very optimistic and excited to have the opportunity to share my knowledge with other students. I just obtained my BA with a major in French with a concentration in French Studies. I studied abroad for a semester in Nantes, France, in the spring of 2010. I am also looking forward to helping students better their French because I myself am looking to practice my French.
10 Subjects: including algebra 1, English, prealgebra, algebra 2
...While teaching, I tutored on the side. I have been teaching for 4 years. I enjoy geometry most of all, but believe that the basis to any good math student is algebra.
3 Subjects: including algebra 1, geometry, algebra 2
...I have recently decided to take time away from a full-time teaching position to be a stay-at-home mother. In addition to Physics, I have training and experience teaching AVID, a class which includes an emphasis on improving study and organizational skills, improved note-taking, and integrating readin...
16 Subjects: including geometry, prealgebra, trigonometry, algebra 1
...That is where my superb mathematics ability comes into play. I can add, subtract, multiply, and divide fractions like no one else. I also have been cooking since I was ten years old.
17 Subjects: including trigonometry, algebra 1, algebra 2, elementary math
...I took 2 semesters of Sign Language and passed both with an A in class. On top of that, two of my brothers are deaf and mute so I have been doing sign language for the past 15 years. I have
tutored in CBEST preparation before.
27 Subjects: including algebra 1, algebra 2, ACT Math, calculus
Degrees of Freedom
In the Chi-square test, my textbook says that the degrees of freedom are the number of independent variables minus one, so df = n - 1.

Does this mean that n is equal to the number of observed values in the equation, i.e. the number of terms I've added together?
sum [(O-E)^2]/E
Is there an instance where it isn't equal to the number of observed values I have?
(There's an example in my book (but no answer) with an experiment with observed values from 2 trials of genetic crosses, where the observed value in
trial 1 was 0.5
trial 2 was 0.3
but both of these values were measuring the same variable, which was heterozygosity. The expected value is 0.8. Does this mean that df = 1? Or is it 0, since there is only 1 independent variable?)
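Not an answer to the book example, but a minimal sketch of the statistic your formula describes may help (the observed/expected counts below are made up for illustration). In the usual goodness-of-fit setting, the n in df = n - 1 counts the categories summed over — one (O - E)^2/E term per category — rather than the number of raw data points collected:

```python
# Chi-square goodness-of-fit statistic: chi2 = sum((O - E)^2 / E).
# The counts below are made-up illustrative values, not from the book example.
observed = [48, 35, 17]
expected = [50, 30, 20]

chi2 = sum((o - e) ** 2 / e for o, e in zip(observed, expected))
df = len(observed) - 1  # n = number of categories, i.e. terms in the sum

assert df == 2
assert abs(chi2 - (4/50 + 25/30 + 9/20)) < 1e-12
```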