Need Urgent Help!!!!!!!!!
July 1st 2008, 06:52 AM
Can anyone help me with the following problem asap:
There are 9 tennis balls in a box, of which only 5 have not previously been used. Four of the balls are randomly chosen from the box. Let 'z' be the number of new balls among the selected ones:
(a) Find the probability Distribution of z
(b) Evaluate the expectation E(z)
(c) Evaluate the variance Var(z)
Cheers Guys!!!
July 1st 2008, 02:48 PM
mr fantastic
Can anyone help me with the following problem asap:
There are 9 tennis balls in a box, of which only 5 have not previously been used. Four of the balls are randomly chosen from the box. Let 'z' be the number of new balls among the selected ones:
(a) Find the probability Distribution of z
(b) Evaluate the expectation E(z)
(c) Evaluate the variance Var(z)
Cheers Guys!!!
Read up on the hypergeometric distribution: Hypergeometric distribution - Wikipedia, the free encyclopedia
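For reference, z here is hypergeometric: P(z = k) = C(5,k)·C(4,4−k)/C(9,4). A short Python check, using only the standard library:

```python
from math import comb

N, K, n = 9, 5, 4   # total balls, new balls, balls drawn

# (a) distribution of z, the number of new balls among the four drawn
pmf = {k: comb(K, k) * comb(N - K, n - k) / comb(N, n) for k in range(n + 1)}

# (b) expectation, which matches the closed form E(z) = n*K/N = 20/9
mean = sum(k * p for k, p in pmf.items())

# (c) variance, Var(z) = E(z^2) - E(z)^2 = 50/81
var = sum(k * k * p for k, p in pmf.items()) - mean**2

print(pmf)
print(mean, var)  # 2.222... and 0.617...
```

The probabilities are 1/126, 20/126, 60/126, 40/126, 5/126 for k = 0..4.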
Testing for a change in mean in a time series
Suppose I have m observations from a time series before some specified date, and n additional observations after that date. The mean of the time series is expected to change on that date by an
unknown amount, without further changes to the time series. Given the m + n observations and the date of the expected change, how does one test for a change in mean?
As an example, think about new traffic rules that are expected to decrease the number of accidents in some country. We have the daily number of accidents in the country for, say, the 60 days before
the new rules change, and for 40 days after it, and want to know if the change did any good, i.e., lowered the number of accidents. Importantly, the observations may not be assumed to
be independent, so a simple two-sample t test or Wilcoxon test aren't appropriate; any reasonable correlation structure - say, AR(1) - may be assumed. In my specific application, both m and n are
around 10.
Any help will be greatly appreciated.
Question - why do you not want to assume the observations are independent? – Bjørn Kjos-Hanssen Mar 5 '11 at 1:20
Because in a time series the observations are rarely assumed independent. In the driving example, a high number of accidents on a certain day may mean bad weather, which, in turn, means a high
probability of bad weather - and hence, more accidents again - on the following day. (This example is not the best, but I hope my point is clear.) – Buchuck Mar 5 '11 at 6:58
One calls this a "structural break" in time series analysis. See stat.columbia.edu/~rdavis/lectures/Cyprus2_04.pdf for some pointers. A lot of work exists on this issue. – Michael Greinecker Apr 4
'12 at 22:09
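One concrete version of such a structural-break test (my own illustration with synthetic data, not drawn from the references above) is to regress y_t on a constant, its own lag y_{t-1}, and a dummy that switches on at the break date, then t-test the dummy coefficient. This accommodates AR(1) dependence while testing for a level shift:

```python
import numpy as np

def level_shift_tstat(y, break_idx):
    """OLS of y_t on [1, y_{t-1}, d_t], where d_t = 1 for t >= break_idx;
    returns the t-statistic of the dummy (level-shift) coefficient."""
    y = np.asarray(y, dtype=float)
    yt, ylag = y[1:], y[:-1]
    d = (np.arange(1, len(y)) >= break_idx).astype(float)
    X = np.column_stack([np.ones_like(yt), ylag, d])
    beta, *_ = np.linalg.lstsq(X, yt, rcond=None)
    resid = yt - X @ beta
    sigma2 = resid @ resid / (len(yt) - X.shape[1])
    cov = sigma2 * np.linalg.inv(X.T @ X)
    return beta[2] / np.sqrt(cov[2, 2])

# Synthetic example: AR(1) noise whose mean drops after t = 60
rng = np.random.default_rng(0)
eps = rng.normal(size=100)
y = np.zeros(100)
for t in range(1, 100):
    shift = -4.0 if t >= 60 else 0.0
    y[t] = 0.5 * y[t - 1] + shift + eps[t]
print(level_shift_tstat(y, 60))  # large negative: a significant drop in mean
```

With m and n around 10, as in the question, such a test will have little power; treat this as illustrative only.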
2 Answers
With m and n so small (about 10), either the change is large enough that it's going to jump out at you when you look at the data, or it's small enough that you won't be able to say anything very
conclusive with a statistical test.
If you insist on a formal approach nonetheless, MDL provides a framework.
Write the shortest program $P$ that outputs an infinite time series that starts like the $m+n$ values, and let $x = |P|$ be the length of $P$ in bits. Then write two short programs $P_1$ and
$P_2$ that output infinite time series that start respectively like the first $m$ values and the subsequent $n$ values, such that $y = |P_1| + |P_2| - |P_1 \cap P_2|$ is minimal, where $|P_1
\cap P_2|$ is the length of the longest prefix of code shared by $P_1$ and $P_2$.
If $x < y$ you can't really justify treating the two time series as different. Otherwise, you can look at the mean implied by $P_1$ and $P_2$ and see how they differ.
The situation you describe, a change in seat belt laws in the UK, is discussed in Brockwell & Davis, Introduction to Time Series..., example 6.63.
Derivative question
October 24th 2007, 01:43 PM #1
I'm particularly confused on this question:
Find the derivative:
tan(xy) - x^3 y = 3 sec(y) + 7x^2
My biggest confusion is the tan xy portion. I really don't know where to start on that. The rest I can probably figure out - product rule for the second term with dy/dx, and so on.
Thanks in advance.
d/dx tan(xy) = sec^2(xy) * [x*(dy/dx) + y]
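That derivative can be double-checked symbolically (a quick SymPy sketch, treating y as a function of x):

```python
import sympy as sp

x = sp.symbols('x')
y = sp.Function('y')(x)

lhs = sp.diff(sp.tan(x * y), x)                          # chain + product rule
rhs = sp.sec(x * y)**2 * (x * sp.Derivative(y, x) + y)   # sec^2(xy)*[x*y' + y]

print(sp.simplify(lhs - rhs))  # 0, so the two forms agree
```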
October 24th 2007, 02:33 PM #2
Counting the Disabled: Using Survey Self-Reports to Estimate Medical Eligibility for Social Security’s Disability Programs
by Debra Dwyer, Jianting Hu, Denton R. Vaughan, and Bernard Wixon
ORES Working Paper No. 90 (released January 2001)
Text description for Chart 1.
SSA disability determination process
This flow chart details the five-step process used in establishing the medical eligibility of disability applicants. Steps 1–3 are screens: step 1 is an earnings screen, and steps 2 and 3 are medical screens.
An applicant is denied at step 1 if he or she earns more than the maximum SGA amount. In step 2, impairment(s) is assessed to determine severity; if impairment(s) is not severe, the applicant is
denied at this stage. A duration test, typically at step 2, is used to determine whether the impairment(s) has lasted or is expected to last 12 months, or whether the impairment(s) is expected to
result in death.
At step 3, an applicant is allowed if the impairment(s) satisfies the Listing of Impairments criteria. A severely impaired applicant who is not allowed at step 3 is evaluated at the final two steps
(steps 4 and 5) of the determination process, involving an assessment of his or her residual capacity to work.
At step 4, an applicant who is found able to perform his or her past work is denied. After step 4, a remaining applicant is allowed in step 5 if he or she is found unable to do any work; that
applicant is otherwise denied at this stage.
Text description for Chart 2.
The sequential disability determination model
This flow chart is a model of steps 2–5 of the disability determination process. The vertical line shows the four decision nodes in the process—k, l, m, and n—which result in five outcomes.
The first outcome, d[2] (right arrow extending from the first node, k) denotes a denial at step 2, based on nonseverity of medical impairment(s).
The second outcome, a[3] (left arrow extending from the second node, l) denotes allowance at step 3, based on the Listing of Impairments.
The third outcome, d[4] (right arrow extending from the third node, m) denotes denial at step 4, based on residual capacity for past work.
The fourth outcome, a[5] (left arrow extending from the fourth node, n) denotes allowance at step 5, based on residual incapacity for any work.
The fifth outcome, d[5] (right arrow extending from the fourth node, n) denotes denial at step 5, based on residual capacity for any work.
Text description for Chart 3.
Disability Allowance Probabilities, By Work Limitation Status, With Sample Selection, Full Sample
Y-axis = Frequency (0–0.8); X-axis = Allowance probability (0–1).
This line chart plots allowance probabilities for the full sample by health status for the models with sample selection. The chart shows that both distributions, "limited" and "not limited," center
on an allowance probability of 0.2, but the distribution is wider for respondents with work limitations.
Text description for Chart 4.
Disability Allowance Probabilities, By Work Limitation Status, Without Sample Selection, Full Sample
Y-axis = Frequency (0–0.8); X-axis = Allowance probability (0–1).
This line chart plots allowance probabilities for the full study sample by health status for the models without sample selection. Compared with Chart 3, which uses sample selection, Chart 4 shows
that without sample selection, the probabilities of allowance are centered near 0.4 for both the "limited" and "not limited" groups.
The exaggerated probabilities shown here are expected because this chart gives allowance probabilities without sample-selection controls.
Text description for Chart 5.
Disability Allowance Probabilities, By Work Limitation Status, With Sample Selection, Restricted Sample
Y-axis = Frequency (0–0.8); X-axis = Allowance probability (0–1).
This line chart illustrates how sample selection alters the distribution of allowance probabilities for the restricted sample. It shows that the model with sample selection does a more accurate job
of identifying applicants with and without severe health limitations. The chart also shows that an alternative probability cutoff of 0.4 would distinguish between people with work limitations and
high allowance probabilities (those most severely impaired) from those without work limitations.
The contrast in the distributions is not as pronounced for those in the restricted sample (Charts 5 and 6) as it is for those in the full sample (Charts 3 and 4) because there is less variation in
health status among members of the restricted sample.
Text description for Chart 6.
Disability Allowance Probabilities, By Work Limitation Status, Without Sample Selection, Restricted Sample
Y-axis = Frequency (0–0.8); X-axis = Allowance probability (0–1).
This line chart illustrates the degree to which sample selection alters the distribution of disability allowance probabilities for the restricted sample. The distribution between those with work
limitations and high allowance probabilities and those with no work limitations cannot be drawn as efficiently without sample selection. The contrast in the two distributions is not as pronounced for
members of the restricted sample because there is less variation in health status among those members.
In addition, the presence of a work limitation appears to be more correlated with allowance probabilities for the model with sample selection (Chart 5) than for the model without sample-selection
controls (Chart 6).
Text description for Chart 7.
Disability Allowance Probabilities, by Application Status, With Sample Selection, Full Sample
Y-axis = Frequency (0–0.8); X-axis = Allowance probability (0–1).
This line chart plots allowance probabilities for nonapplicants and applicants. The chart shows that for nonapplicants, the distribution of disability allowance probabilities is centered at 0.2, with
little variation. The distribution for the applicant pool, on the other hand, is much more uniform, suggesting that an alternative probability cutoff at 0.5 might be too restrictive and would miss
most allowed applicants.
Welcome to April! This time, I have some Math Mammoth news... but also plenty of other stuff! Quadrilaterals, triangles, simplifying fractions before multiplying... and how to tell if your math
curriculum is working.
1. Math Mammoth news 2. Simplify before you multiply! 3. Free worksheets for classifying triangles and quadrilaterals 4. How to tell your homeschool math program is working 5. Tidbits
1. Math Mammoth news
Math Mammoth Grade 5 has been updated & revised! You can get all the details about the changes here.
If you are an old customer and would like the updated files, please use the contact form and include your purchase information (email/name used, if at Kagi or Co-op). Currclick customers can log in
to their account at Currclick and download it from there.
Questions about the Common Core Standards and Math Mammoth? I created a long FAQ with lots of details that hopefully will answer everyone's questions!
Math Mammoth South African version is now available for grades 1-3! You can also purchase all three grades as a discounted bundle.
See information and samples:
Grade 1 (South African version)
Grade 2 (South African version)
Grade 3 (South African version)
2. Simplify before you multiply!
I'm having difficulty in solving this question, which involves calculating fractions - this question relates to finding an arc length.
140 divided by 360, multiplied by 2, multiplied by 22 divided by 7, multiplied by 12:
You can either put everything in the calculator, multiplying the top numbers, then dividing by 360 and 7.
Or, you can simplify before you multiply.
Read more here
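The same cancellation can be checked with exact rational arithmetic (Python's fractions module; 22/7 stands in for pi, as in the question):

```python
from fractions import Fraction

# (140/360) * 2 * (22/7) * 12, the arc-length calculation from above
arc = Fraction(140, 360) * 2 * Fraction(22, 7) * 12
print(arc)         # 88/3
print(float(arc))  # about 29.33
```

The 7 in 140/360 = 7/18 cancels against the 7 in 22/7, which is exactly the "simplify before you multiply" step.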
3. Free worksheets for classifying triangles and quadrilaterals
I've completed making two new worksheet generators for HomeschoolMath.net:
Classify triangles
- make worksheets for classifying triangles by their sides, angles, or both.
Classify quadrilaterals
- make worksheets for classifying (recognizing, identifying, naming) quadrilaterals. There are seven special types of quadrilaterals: square, rectangle, rhombus, parallelogram, trapezoid, kite, and
scalene, and these worksheets ask students to name the quadrilaterals among these seven types.
Use the links above to set your options (image size, number of problems, etc.)
They look sort of like this:
I had fun making the scripts, though there were also some challenges. But overall I enjoy such work - programming is similar to problem solving in math.
In this case the problems often were math, such as how to make a script that gives me a kite or a scalene quadrilateral with varying dimensions. I was using the PHP GD library to first create a bunch of
images, and then made the worksheet script that simply chooses randomly among the pre-made images.
4. How to tell your homeschool math program is working
Denise has posted an article, How to Recognize a Successful Homeschool Math Program, on her blog. I enjoyed that a lot and recommend you read it too, no matter what math curriculum you are using!
She summarizes it this way:
If you are wondering how well your homeschool math program is working, pay attention to your children. Do they understand that common sense applies to math? Can they give logical reasons for
their answers? Even when they get confused, do they know that math is nothing to fear?
If so, then be assured: your children are already miles ahead of most of their peers. Their foundations are solid, and the details will eventually fall into place as you continue to play with
mathematical ideas together.
She also notes her 'yardstick' for measuring math anxiety: if your child does not fear word problems, he/she is not suffering from math anxiety.
There was a time when my second daughter actually relished word problems and thought they were the BEST part of her math work (it was about 2nd- 3rd grade). Now she said she still enjoys them, but
likes mental math problems best (she just started 5th).
When it comes to mental math, I sometimes give myself a little challenge (such as when making an answer key to my books): can I do this problem mentally instead of with a calculator? It's not anything I
fear - it's enjoyable in a sense.
I feel this is similar to when people do crossword puzzles, solve Sudoku, or even play Freecell: you actually enjoy the mental challenge, right? The same can happen with mental math, or with math in
general - it doesn't have to be something fearful, disgusting, or repulsive -- far from that! : )
5. Tidbits
Feel free to forward this issue to a friend/colleague!
Subscribe here
Till next time,
Maria Miller
[FOM] Re: A new characterization of recursivity
Henk at degas.ceu.hu
Thu Jun 3 20:25:30 EDT 2004
On Wed, Jun 02, 2004 at 12:18:43PM -0400, Ali Enayat wrote:
> It is easy to see that if A is end extended by B, then Sigma_1 predicates
> are upward absolute (and conversely, Pi_1 predicates are downward absolute).
> Here the classification Sigma_n / Pi_n is as in arithmetic and set theory,
> where bounded quantification does not increase the complexity.
Hm, I can't follow this reasoning in all details.
Tersely: I don't see how can you put together the preservation-style
characterizations of Sigma_1 and Pi_1 and gain a preservation-style
characterization of Delta_1 from it.
In a bit more verbose manner:
What I see (V is standard model of finite set theory):
If a subset X of V is Sigma_1, then by upward absoluteness of Sigma_1
predicates we have:
there is a Sigma_1 formula f(x) such that
i) f(x) defines X in V;
ii) for any q in X, there is a finite subset F of V such that
* F includes q;
* for any faithful embedding (aka. P-extension, end extension) i: F -> B
B |= f(i(q)).
If a subset X of V is Pi_1, then by downward absoluteness of Pi_1 predicates
we have:
there is a Pi_1 formula g(x) such that
i) g(x) defines X in V;
ii) for any q not in X, there is a finite subset F of V such that
* F includes q;
* for any faithful embedding i: F -> B
B !|= g(i(q)) ["!" means not here].
From this, by "recursive <=> Sigma_1 and Pi_1" we get:
If a subset X of V is recursive, then there is a Sigma_1 formula f(x) and a
Pi_1 formula g(x) such that
i) both of f(x), g(x) define X in V;
ii) for any q in X, there is a finite subset F of V such that
* F includes q;
* for any faithful embedding i: F -> B
B |= f(i(q));
iii) for any q not in X, there is a finite subset F of V such that
* F includes q;
* for any faithful embedding i: F -> B
B !|= g(i(q))
-- this characterization is weaker than the one I gave, and I see no easy
way to deduce mine from this. If you see one, I'd appreciate it if you told me.
> The converse
> also happens to be true and is due to Feferman, and immediately yields
> Henk's characterization (modulo well-known arguments). I am curious,
> however, whether Henk's proof involves different ideas or not.
> Feferman's result appears in:
> Feferman, Solomon
> Persistent and invariant formulas for outer extensions.
> Compositio Math. 20 1968 29--52 (1968).
> It is worth pointing out that Feferman used a proof theoretic argument to
> establish his result. Later Marker found a model theoretic argument (using
> recursively saturated models):
> Marker, David
> A model theoretic proof of Feferman's preservation theorem.
> Notre Dame J. Formal Logic 25 (1984), no. 3, 213--216.
I think this is the easier direction. Thank you for the references, they are
very apt to the problem, and I didn't know them. However, one can easily
prove this direction directly, e.g., as you did in your other reply.
Csaba Henk
"There's more to life, than books, you know but not much more..."
[The Smiths]
Corrections to the McKitrick (2002) Global Average Temperature Series
Last week I wrote about Paul Georgia’s review of Essex and McKitrick’s Taken by Storm. Based on their book, Georgia made multiple incorrect statements about the physics of temperature. Of course, it
might have just been that Georgia misunderstood their book. Fortunately Essex and McKitrick have a briefing on their book, and while Georgia mangles the physics even worse than them, they do indeed
claim that there is no physical basis to average temperature. They present two graphs of temperature trends that purport to show that you can get either a cooling trend or a warming trend depending
on how you compute the average. McKitrick was recently in the news for publishing a controversial paper presenting an audit that claimed the commonly accepted reconstruction of temperatures over the
past 1000 years was incorrect, so it only seems fair to audit Essex and McKitrick's graphs. As we will see, both of their graphs are wrong, and their results go away when the errors are corrected.
In their briefing, Essex and McKitrick claim that physics provides no basis for defining average temperature and:
“In the absence of physical guidance, any rule for averaging temperature is as good as any other. The folks who do the averaging happen to use the arithmetic mean over the field with specific
sets of weights, rather than, say, the geometric mean or any other. But this is mere convention.”
Physics does, in fact, provide a basis for defining average temperature. Just connect the two systems that you want to average by a conductor. Heat will flow from the hotter system to the colder one
until the temperatures are equalized. The final temperature is the average. That average will be a weighted arithmetic mean of the original temperatures. Which is why the folks doing the averaging
use weighted arithmetic means rather than the geometric mean.
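A minimal sketch of that physical averaging rule (illustrative numbers; with equal heat capacities it reduces to the plain arithmetic mean):

```python
def equilibrium_temp(temps_k, heat_capacities):
    """Connect the systems by a conductor; heat flows until temperatures
    equalize. Conservation of energy gives T* = sum(C_i * T_i) / sum(C_i),
    i.e. a heat-capacity-weighted arithmetic mean."""
    total_heat = sum(c * t for c, t in zip(heat_capacities, temps_k))
    return total_heat / sum(heat_capacities)

# Equal heat capacities: 0 C and 20 C (in kelvins) equilibrate at about 283.15 K, i.e. 10 C
print(equilibrium_temp([273.15, 293.15], [1.0, 1.0]))
```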
They next present a graph where they
“treat each month as a vector of 10 observed temperatures, and define the aggregate as the norm of the vector (with temperatures in Kelvins). This is a perfectly standard way in algebra to take
the magnitude of a multidimensional array. Converted to an average it implies a root mean square rule.”
Note that nobody, but nobody, averages temperatures this way. Anyway, when they calculated the trend they found an overall cooling trend of -0.17 degrees Celsius per decade.
They triumphantly conclude:
“The same data can’t imply global warming and cooling can they? No they can’t. The data don’t imply global anything. That interpretation is forced on the data by a choice of statistical cookery.
The data themselves only refer to an underlying temperature field that is not reducible to a single measure in a way that has physical meaning. You can invent a statistic to summarize the field
in some way, but your statistic is not a physical rule and has no claim to primacy over any other rule.”
I looked at their graphs and something seemed wrong to me. Their root mean square average gives almost the same answer as the arithmetic mean. For example, it gives the mean of 0 and 20 degrees
Celsius as 10.2 instead of 10 degrees. It didn't make sense to me that it could make as big a difference to the trend as they found.
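That near-identity is easy to confirm directly (temperatures converted to kelvins, as in their construction):

```python
from math import sqrt

temps_c = [0.0, 20.0]
temps_k = [t + 273.15 for t in temps_c]

# arithmetic mean and root-mean-square mean, converted back to Celsius
arith = sum(temps_k) / len(temps_k) - 273.15
rms = sqrt(sum(t * t for t in temps_k) / len(temps_k)) - 273.15

print(round(arith, 2), round(rms, 2))  # 10.0 10.18
```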
McKitrick kindly sent me a spreadsheet containing the data they used and I almost immediately saw where they had gone wrong. You see, some stations had missing values, months where no temperature had
been recorded. When calculating the root mean square they treated the missing values as if they were measurements of 0 degrees. This is incorrect, since the temperature was not actually zero degrees.
Because the overall average temperature was positive this meant that the root mean square was biased downwards when there were missing observations. And since there were more missing values in the
second half of the time series, this produced a spurious cooling trend.
When calculating the arithmetic mean they treated missing values differently. If only eight stations had observations in a given month, they just used the average of those stations. This isn't as
obviously wrong as the other method they used, but the stations in colder climates were more likely to have missing observations, so this biased the average upwards and produced a spurious warming trend.
I filled in the missing values by using the observation for that station from the same month in the previous year and recalculated the trends. Now both mean and root mean square averaging produced
the same trend of -0.03, which is basically flat. When analysed correctly, their data shows neither warming nor cooling, regardless of which average is used. The different trends they found were not
because of the different averaging methods, but because of inconsistent treatment of missing data.
I also calculated the trend with their root mean square average and ignoring missing values, and with the arithmetic mean and replacing missing values with zero (spreadsheet is here). As the table
below shows, the averaging method made almost no difference, but treating missing values incorrectly does.
| Missing values | Trend (mean) | Trend (root mean square) |
|---|---|---|
| Ignored | 0.16 | 0.15 |
| Treated as 0 degrees | -0.15 | -0.17 |
| Previous year used | -0.03 | -0.03 |
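The effect is easy to reproduce with synthetic data (hypothetical station values, not their actual spreadsheet): two "stations" with constant temperatures, where the colder one stops reporting halfway through the record. Treating the gaps as 0 K creates a spurious cooling trend in the RMS average, while simply dropping them creates a spurious warming trend in the arithmetic mean:

```python
from math import sqrt

K0 = 273.15
# Two stations with constant true temperatures (deg C): one cold, one warm
truth = [(-10.0, 20.0)] * 120           # 120 months of (cold, warm)
# The cold station stops reporting in the second half of the record
obs = [(c if m < 60 else None, w) for m, (c, w) in enumerate(truth)]

def mean_ignore(vals):
    """Arithmetic mean over only the stations that reported."""
    present = [v for v in vals if v is not None]
    return sum(present) / len(present)

def rms_zero(vals):
    """RMS average that (wrongly) enters a missing value as 0 kelvins."""
    ks = [v + K0 if v is not None else 0.0 for v in vals]
    return sqrt(sum(k * k for k in ks) / len(ks)) - K0

mean_1st = sum(mean_ignore(o) for o in obs[:60]) / 60
mean_2nd = sum(mean_ignore(o) for o in obs[60:]) / 60
print(mean_1st, mean_2nd)    # 5.0 then 20.0: spurious warming

rms_1st = sum(rms_zero(o) for o in obs[:60]) / 60
rms_2nd = sum(rms_zero(o) for o in obs[60:]) / 60
print(rms_1st > rms_2nd)     # True: the RMS average drops, spurious cooling
```

Since the true temperatures never change, any trend that appears here is purely an artifact of the missing-data handling.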
I emailed McKitrick to point out that arithmetic mean and root mean square did not give different results. He replied:
Thanks for pointing this out. It implies there are now 4 averages to choose from, depending on the formula used and how missing data are treated, and there are no laws of nature to guide the
choice. The underlying point is that there are an infinite number of averages to choose from, quite apart from the practical problem of missing data.
Incredible isn’t it? He still doesn’t understand basic thermodynamics. And he seems to think that are no laws of nature to guide us in estimating the missing values so that it is just as valid to
treat them as zero as any other method, even for places where the temperature never gets that low.
1. #1 Dano May 21, 2004
Nice work Tim.
And nice to see they can find some time in between bagpipe lessons to botch more data analysis.
I wonder if Heartland or Heritage will pony up funds to send these two amateurs to a couple of science courses at the local Community College.
2. #2 Tom May 21, 2004
“You see, some stations had missing values, months where no temperature had been recorded. When calculating the root mean square they treated the missing values as if they were measurements
of 0 degrees.”
Oh dear God. Say it isn’t so.
3. #3 ben May 21, 2004
How is the actual average temperature calculated? It seems like there ought to be some sort of numerical integration done over area and altitude, with density of measurements higher in areas
where the temperature gradient is known or expected to be greater (e.g. locations where coastal areas abruptly meet mountainous areas).
In any case, a simple average doesn’t seem like it passes muster. Maybe it makes more sense to calculate and focus on an estimate for total thermal energy in the atmosphere. Then just look to see
trends in this measure, up, down, stable, rate of change or whatever.
4. #4 Dano May 21, 2004
Ben, the total thermal energy in the atmosphere varies with height and airmass. Vertical profiles of the atm with rawinsonde balloons (depicted as a ‘Skew-T’ – Skewed Temperature) attempt to get
a shot of this profile in order to better analyze and forecast the energy differences.
It is very common to have more thermal energy in one layer than the other, and also for airmasses to be totally different aloft than at the sfc – occluded fronts are fronts not reflected
necessarily at the surface, and the stacking of low pressure systems differs as well (a negatively tilted trof axis, for instance, runs SE-NW (n hemisphere) whereas a ‘normal trof’ axis runs
SW-NE, but of course trof axes can run straight N-S and then tilt, etc.
5. #5 ben May 21, 2004
I know that the thermal energy varies with height. I also understand that the atmosphere has somewhat distinctive layers. So why not compute the total energy in each layer (as they are defined)
and the thermal energy for the atmosphere as a whole, and look at that, wrather than look at temperature? Seems to me a trend up or down or stable in total thermal energy (and in each layer)
would be more interesting scientifically than the average temperature, however it is computed. After all, if the “average” global temperature is rising, would this not be reflected in the total
thermal energy of the atmospheric system (or its layers)?
6. #6 ben May 21, 2004
er, replace “thermal” with “thermodynamic”. i.e. total internal energy of the atmospheric system.
7. #7 Dano May 21, 2004
Ben, you’re talking about having decent measurements of moisture, wind (OK with balloons) and friction (not OK with balloons).
What you are talking about is currently done in a certain sense using the term ‘vorticity’. But the computation does not extend to the entire atmosphere.
To get to what you want, first you already have folks competing for computing time to run their models. What you envision – beyond vorticity – is likely years away, but you are correct – there is
far more useful information in your approach. It just has to wait.
In the meantime, the current atmospheric models use vorticity.
8. #8 ben May 21, 2004
Yeah, but you should be able to get those effects of viscosity on some larger scale using some sort of mean, the way it is done in CFD codes to take care of turbulence in the boundary layer.
Calculating the turbulent flow in the boundary layer is intractable for a big problem, but using some averaging techniques you can deal with some of its effects. Can't this be done to get an
"estimate" of the internal energy in the atmosphere?
9. #9 Dano May 21, 2004
This is good, ben. Fun!
OK. We’re back to scalar issues and computing power again. The other issue is the equations you use to solve for the analysis.
This is the current 00Z map of a particular model's (ETA) solution at the northern hemisphere 500 hPa (mb) level.
This is the current 00Z map of another particular model's (AVN) solution at the northern hemisphere 500 hPa (mb) level.
Open them in 2 windows and reduce the windows to place them side by side. Look at the 24 hr forecast (upper right).
The absolute vorticity is the best way to proxy the inertial energy of the atm at a particular point (which happens to be a solution for a grid – the grids nest at a particular granularity
depending upon the forecast model). The scale at the left is, simplified, the upward velocity (seen, for our purposes, as ‘positive’ inertia).
So. Look at, say, SE Minnesota.
The blue blob is a forecasted upward vorticity area that likely is indicative of possible thunderstorm activity, but you need the amount of moisture in the lower atmosphere to know whether there
is enough moisture to condense and then have strong lift.
Anyway, see the difference between the first URL I gave you (the AVN model) and the second (the ETA model) solution over SE MN? The ETA has a spot of green. I’ve never forecast weather in that
area, so I don’t know whether that much lift can cause hail. But if there is ample low level moisture, I might look hard to see whether there are other indications for hail. The AVN doesn’t have
green, so that solution says there is less lift in that area.
Which model is right? Wait a day.
Maybe the AVN is a better solution today, because that set of equations dealt with conditions better. Maybe tomorrow the ETA will do a better job.
The point?
There is not always one best solution.
Viscosity is very tricky at these scales due to the heterogeneity of the earth’s surface, which contributes unevenly to the effects on the lower and successively higher layers, which have
different densities and are moving at different speeds.
I appreciate your averaging/mean comments, which is done at the climate scale – aggregated averaging, if you will, for a point on the earth’s surface.
My point is that we are still not at the level of cheap computing power to solve for your intuitive sense of how to visualize the system. We are getting there. But because of the vast scale
involved, we have a ways to go.
Climate is what you expect, weather is what you get.
OK, that was fun and a nice diversion from my work.
10. #10 Louis May 21, 2004
“I looked at their graphs and something seemed wrong to me. Their root mean square average gives almost the same answer as the arithmetic mean.”
You know very little about statistics – you are confusing a linear regression with an arithmetic mean.
Both are summaries using the root-mean square – one over another variable – TIME, the other not.
11. #11 Tim Lambert May 21, 2004
Louis, you just left thirteen comments. You seem to have left a comment every time you had a thought. I deleted all of them except for one. To encourage you to collect your thoughts before
commenting, I am limiting you to one comment at a time.
I have not confused linear regression with arithmetic mean. McKitrick constructed the first graph by first taking the arithmetic mean of the ten temperatures for each month, and then doing a
linear regression on those mean temperatures.
12. #12 ben May 21, 2004
yarg, I wasn’t really talking about simulation anyway. All I was saying is to use existing data and fit it to a model for internal energy. Can’t this be done? I’m not talking about forecasting,
I’m talking about backcasting.
13. #13 Tim Lambert May 22, 2004
Ben, you don’t need to do anything that complicated. You don’t want the average over the whole atmosphere, just the average temperature at the surface. So you just do an integration over area,
using the stations that are near to each point on the surface to interpolate temperatures. It’s all explained here.
14. #14 Matt McIrvin May 22, 2004
Essex and McKittrick’s argument is far stupider than I imagined. I figured they just had some weighting they thought was better physically motivated than the standard one. But to claim that the
root-mean-square temperature, with missing data treated as zero, is just as good as an arithmetic integration? How could anyone even take that seriously?
I suppose what they’re really trying to do is just create lots of confusion in order to argue that the whole notion of average temperature is meaningless. But this is in itself incredibly
This seems to keep happening to me– I try to be fair and end up giving these people way more credit than they turn out to deserve.
15. #15 Paige May 22, 2004
So Essex and McKitrick say “treat each month as a vector of 10 observed temperatures, and define the aggregate as the norm of the vector (with temperatures in Kelvins). This is a perfectly
standard way in algebra to take the magnitude of a multidimensional array. Converted to an average it implies a root mean square rule.”
This is utter nonsense because it implies that magnitude of a vector is somehow related to something that might be considered the midpoint — which, under some circumstances, the mean is a very
good estimate of a midpoint of a distribution. The word “magnitude” does not imply midpoint. Obviously, E&M don’t understand that you choose a statistic in a certain situation because it has
certain properties, not because it is used elsewhere.
16. #16 BoulderDuck May 22, 2004
Thanks for debunking yet another piece of disinformation!
17. #17 Eli Rabett May 23, 2004
To continue Paige’s comment, doing what E&M did makes no sense even if there are no missing data points because temperatures separated by three days or so are not independent of each other.
18. #18 Eli Rabett May 23, 2004
Let me do a Louis here. I just looked at the graphs again. The little dishonest buggers did not remove the annual cycle. Nuff said
19. #19 ben May 23, 2004
uh-oh Tim, Lott has another article on guns on foxnews.com. Better hop-to on that one
20. #20 ben May 23, 2004
I’m gonna do a Louis too… Most of the article is fine, but then he whips out some of his dubious statistics at the end. The article would be just fine without them, too bad.
21. #21 Webster Hubble Telescope May 23, 2004
This is the kind of stuff that would pass the Bush administration’s Data Quality Act with flying colors. These guys are to science, as calligraphy is to journalism. I have the strange feeling
that some sort of context is missing here, but then when they say that temperature has no physical meaning ???
22. #22 Dano May 24, 2004
Sorry, ben.
Just trying to illustrate graphically the difficulty in characterizing work across such scales.
Vorticity is work up or down (sometimes both in a parcel of air, esp. a thunderstorm). Advection is ‘work’ horizontally. But you want temperature, I think, to reflect useful work. Water vapor is
potential work – another complication.
Anyway, back to my computing power argument & I’ll shut up now. thanks for the indulgence.
23. #23 ben May 24, 2004
ah, I see, I didn’t know the atmospheric science lingo. I was thinking of vorticity as I understand it in terms of basic fluid mechanics.
24. #24 James Lindgren May 26, 2004
Excellent work!
25. #25 Paige May 27, 2004
I want to add one more comment about Essex and McKitrick. This has nothing to do with thermodynamics, but simply a comment on the statistical arguments made by E&M. They can’t seem to understand
the usage of the arithmetic average, they argue strenuously against it and try to have you believe that there are many, many ways to compute an average that are just as good as the arithmetic
average — in effect they want to throw out the use of any average — and then they make use of a regression? That makes no sense at all.
But there’s a larger, more worrisome context. Just as many on the far right have used “sound science” to mean no science at all, I believe that E&M are beginning to lay out the groundwork to
throw out all statistical arguments. They can say they are using “sound statistics” or whatever, while throwing out all legitimate statistics for bogus reasons. In fact, one can imagine the
right-wingers eventually arriving at a set of arguments to disprove any statistical analysis. Imagine if E&M’s attempts to make using the mean meaningless catch on with the right-wingers. Since
basically all of environmental science, medical science, and many other fields are essentially statistical in nature, any time a scientist comes up with a result that the right-wingers don’t
like, they can say “Well, there are many ways of calculating a statistic, so that doesn’t prove anything.”
26. #26 Dano May 27, 2004
You nailed it Paige. They attack science all the time. Any way they can.
Just take a look at Tech Central Station any day of the week – one of their apparent agendas is to cast doubt on any finding that prevents their sponsors from enjoying unfettered profitability.
That blog that louis writes for is another example. Follow their little links around and note how many times you can find an example of exactly what you said above [and how many rubes fall for it].
27. #27 Louis May 27, 2004
editing my comments makes my case.
Your bat.
28. #28 Should be writing my thesis August 13, 2005
I read some of Tim’s defense of temperature averaging. I am not sure however how correct it is or what assumptions he is making. Please feel free to point out any errors I make and feel free to
correct anything ( I took thermodynamics about 4 years ago and to be honest I don’t have a very solid grasp of the subject) but I believe that in order to temperature average you need to do
something a little more complicated than just do an arithmetic average. For instance consider a system of air and vacuum held in a partitioned box such that the air is on one side of the
partition and the vacuum is on the other (assume that the box and partition are perfect thermal insulators). The volume of box on either side of the partition is the same. Now the arithmetic mean
of the temperatures is what exactly for this system? Can you even define an arithmetic mean for this system or are arithmetic means only for systems which are in some sense homogeneous. Also if I
take away the partition and assume that no heat is lost or gained by the system then you have an adiabatic process in which for an ideal gas (T*V)^(gamma-1) = constant where gamma = Cp/Cv = 1.4
for air. Cp and Cv are the specific heats at constant pressure and constant volume respectively. So the formula for the new temperature of the whole system is something like
T2 = T1*V2^0.4/V1^0.4 = T1*(2V1)^0.4/V1^0.4
= T1*2^0.4
= 1.31 T1
I have obtained the formula for adiabatic processes from http://stp.clarku.edu/notes/chap2.pdf
I have no idea of what possible weighted arithmetic formula could lead to anything like the temperature I have just given for the whole system when the partition is removed. You may object that I
have averaged with a vacuum but I have only used a vacuum because its simple. In fact vacuum is just the special case of no air, if you put a very small amount of air in the second partition it
approximates a vacuum. You may also object that I am not using samples at the same pressure. But isn’t air at different places at different pressures. Doesn’t the pressure at sea level vary over
time and space. Otherwise why do barometers exist? And pressure varies with humidity and variations in pressure are basically what lead to climate so how can climate exist if pressure is the same
everywhere. Is the justification for arithmetic averaging that the pressure variations are small?
Temperature as far as I know only makes sense when a system is in equilibrium meaning that all time derivatives for state variables are basically zero (there are no changes in state, pressure,
volume etc). Let me quote Richard Tolman:
” It is of interest first of all to point out once more that the temperature of a system is in any case a quantity to which we can assign precise meaning only for systems which are in a condition
of equilibrium” pg. 563 The Principles of Statistical Mechanics Richard C. Tolman
Secondly you can perform an even more interesting experiment although I am not sure what the outcome is. Take the same partitioned system but this time make the volumes different. The volume of
the box on one side is 1/100 the volume on the other. Air is in both boxes and temperatures are the same which implies the pressure in the smaller box is 100 times greater than the bigger box.
Now remove the partition. What happens? I don’t know. Someone with a better thermodynamics background who is reading this can answer. But my feeling is that the temperature will change which of
course doesn’t make sense from the point of view of arithmetic averaging.
29. #29 Should be writing my thesis Says: August 13, 2005
Sorry I made some mistakes in my previous comment. But my argument is untouched. Namely the formula and calculation should be
T*(V^(gamma-1)) = constant instead of
(T*V)^(gamma-1) = constant
and
T2 = T1*(V1/V2)^0.4 = T1*(1/2)^0.4 = 0.757 T1 instead of
T2 = T1*2^0.4 = 1.3 T1
But anyways I don't see how this comes about from arithmetic averaging
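A quick numerical check of the corrected relation (a sketch in Python; it only verifies the arithmetic of the quoted formula, not the physics of the setup):

```python
# Reversible adiabatic process for an ideal gas: T * V**(gamma - 1) = constant,
# so when the volume doubles, T2 = T1 * (V1/V2)**(gamma - 1).
gamma = 1.4                        # Cp/Cv for air
cooling = (1 / 2) ** (gamma - 1)   # volume doubles: T2/T1
heating = 2 ** (gamma - 1)         # the earlier (incorrect) direction
print(round(cooling, 3), round(heating, 3))  # 0.758 1.32
```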
30. #30 Ray Lopez October 10, 2005
This response by Lambert is discussed here:
31. #31 z December 22, 2005
“For instance consider a system of air and vacuum held in a partitioned box such that the air is on one side of the partition and the vacuum is on the other (assume that the box and partition are
perfect thermal insulators). The volume of box on either side of the partition is the same. Now the arithmetic mean of the temperatures is what exactly for this system? Can you even define an
arithmetic mean for this system or are arithmetic means only for systems which are in some sense homogeneous.”
Well, these publications discussing the “frigid cold” of outer space have always been a pet bugaboo of mine. Vacuum has no temperature, since temp is average kinetic energy and the average kinetic
energy of zero mass is undefined. Sometimes, you see discussions of the “scorching heat” of the naked sun in outer space, which is a little more reasonable; in fact, when you do encounter a
particle out there it’s zipping along at such a good clip that the average temp is actually enormous; total heat is minimal, though.
So no, you can’t define an average temp for that system you describe, but it’s not because it’s not “in some sense homogeneous”, it’s because one of the heterogeneous temperatures is undefined, by
definition. Of course, “in some sense homogeneous” may be defined as “homogeneous in avoiding the complete absence of mass”, in which case things would be fine.
Posts by
Total # Posts: 9,948
The domain of any polynomial is all real numbers, i.e. (-∞,∞).
The coefficients of sin(x) and -cos(x) are 1 and -1 respectively. We divide each term by the factor sqrt(1²+(-1)²)=sqrt(2): y=sinx-cosx => y=sqrt(2)[sin(x)*(1/sqrt(2))+cos(x)*(-1/sqrt(2))].
Since cos(-π/4)=1/sqrt(2), and sin(-π/4)=-1/sqrt(2), we set a=3π/2, t...
Factorize using cos(x) as a variable: (2cos(x)+1)(3cos(x)-1)=0 Solve for cos(x).
(a) 1. Find vectors AB and AC. 2. Calculate the cross product of ABxAC, which gives the normal vector to the required plane. 3. Form the equation of the plane. The components of the normal vector
correspond to the coefficients of x, y and z of the equation of the plane. 4. Fin...
Multiply them out using the FOIL rule. When you get i², replace by -1. Using FOIL: F=1 O=-5i I=3i L=-15i²=15 Sum like terms: (1+3i)(1-5i) = 16-2i
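Python's built-in complex type (with `j` standing in for i) can double-check the expansion:

```python
# FOIL on (1 + 3i)(1 - 5i): F=1, O=-5i, I=3i, L=-15i^2=+15
product = (1 + 3j) * (1 - 5j)
print(product)  # (16-2j)
```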
Absolutely, thanks Reiny!
Probability of choosing 2 girls =(10/15) * (9/14) =3/7
Vertical asymptote is where the denominator of a rational function becomes zero. Assuming you made a typo and that you really mean: y = 4/(x-3) + 2 then the denominator becomes zero at x=3. The
horizontal asymptote is what y becomes as x-> -∞ or x->+∞. Can yo...
If there is no typo, the solution is complex. (x-1)^2=-4 => (x-1)=±2i so x=1±2i
Assume upper case only, gives 26 letters. Add 10 digits, 1 space and 4 symbols. Total 26+10+1+4=41 possible symbols. Subtract the plate that has all spaces. This is a 7 part experiment each with 41
outcomes, so the total number of possible plates is N=41^7-1 =194754273880 I wi...
Try 7 times for the first, 7 times for the second 7 times for the third, the possibilities (3 independent trials each of 7 outcomes) is 7³=343.
3 red 5 yellow and 4 green. 1 red picked, leaves: 2 red 5 yellow and 4 green total 11 apples. What would be the probability of picking a yellow out of the remaining 11, of which 5 are yellow?
Math (need formula)
This is the same problem as a ball rolls down a board making an angle θ with the horizontal in t=2 seconds over a distance S of 1.3 m. We ignore the size of the ball, which can slow down the ball.
Let g=acceleration due to gravity = 9.8 m/s² then a=acceleration down...
Set up the equations: 6H+7C=2500 1H+13C=2310 and solve for H (horse) and C (cow).
It could range from 730kg for a smart car to 2300 kg for a Cadillac de Ville. http://en.wikipedia.org/wiki/Smart_Fortwo http://en.wikipedia.org/wiki/Cadillac_Sedan_de_Ville
As posted, 14!/19*18*17*16*15=6402373705728000/19 but I think you mean 14!/(19*18*17*16*15) which equals 20180160/323 In any case, please check your problem for typo and parentheses. If you are
posting a fraction, parentheses must be inserted BY YOU around the numerator and de...
Four people have different number of quarters and dimes. In order to make $1.85 without nickels, each must have an ODD number of quarters, namely 1,3,5 and 7, and the remainder made up of dimes. The
total number of quarters is therefore 1+3+5+7=16
Normalize the variable for z=(X-μ)/σ=(67-65)/1.5=1.3333
algebra 2/help
10C2 = 10!/((10-2)!2!) = 45
x! = factorial of x = x(x-1)(x-2)....(2)(1)
math 5th grade
$36 is the cost of pants, which accounts for (1-2/5)=3/5 of the total cost. So the total cost is 36÷(3/5)=60 If there are no sales taxes, he should get 100-60=$40 in change.
36x^2 + 60x +25 =(6x)^2+2*5*(6x)+5^2 Does that look familiar?
f'(x) should read: 16x^3-12x^2-70x+36 and f'(-2)=0 From there, you can find the other factors.
Try to start a new post for a new question. Sometimes these piggy-backed questions get forgotten. I will answer it this time. R is a function of x. So substitute 135 for x to get R=314.28*135 -
Like terms are terms with the same variables raised to the same powers. 8x^5 - 7x^9-7x^5+ 8x^9 First groupe like terms and arrange in descending order = 8x^9-7x^9 + 8x^5-7x^5 Now the simplification =
x^9 + x^5
Side of base = √64 = 8 cm Distance from centre to corner of base =8(√2)/2=4√2 Let h=height, h²=15²-(4√2)² =225-32=193 h=√193 = 13.89 cm approx. Volume =(64)h/3 =64*(√193)/3 =296.4 cm³ approx.
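The same computation as a sketch (assuming, as the answer does, a square base of area 64 cm² and slant edges of 15 cm):

```python
from math import sqrt

base_side = sqrt(64)                        # 8 cm
centre_to_corner = base_side * sqrt(2) / 2  # 4*sqrt(2) cm
height = sqrt(15**2 - centre_to_corner**2)  # sqrt(193) cm
volume = 64 * height / 3
print(round(height, 2), round(volume, 1))  # 13.89 296.4
```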
Can you please provide a definition of events A and B? Also, in (a) do you mean p(A|B) instead of P(A/B)? That is to say, probability of event A given B has happened?
I think the parentheses are in the wrong place. It should probably read: (1-cos2x)/tan x = sin2x Again, split everything into sin and cos, and don't forget the identities: cos(2x)=cos²(x)-sin²(x) sin
(2x)=2sin(x)cos(x) (1-cos(2x))/tan(x) =(1-(cos²(x)-sin²...
One way to deal with identities is to convert every term to sin and cos: (sec x/csc x) + (sin x/cos x) =(1/cos (x))÷ (1/sin(x)) + sin(x)/cos(x) =sin(x)/cos(x)+sin(x)/cos(x) =2tan(x) =2/cot(x)
algebra 2
The reason you are not getting answers is probably because the question is not very clear. I assume the following: 1. The curved back wall is at the back of the auditorium, facing the stage. 2. There
is no side wall, so the curved back wall joins the stage near the front at 40...
This is a hypergeometric distribution problem. Assuming that the tagged whales are subsequently randomly distributed in the whale population, we have population N, where N = (115/5)*100 = 2300
approximately. For more accurate estimations, use the hypergeomatric distribution an...
Treat 11.443 as a fraction over 1: 11.443 -------- 1 Multiply both top and bottom by 1000: 11443 -------- 1000 which is then the required fraction.
The .15% is the tail end of the probability distribution. Probability distributions are such that the total area is 1.0 from Z=-∞ to Z=+∞. For the normal distribution, Z=0 is at 0.5, which means that
the probability of a variable falling above 0 is exactly that of ...
See your previous post with double answer: http://www.jiskha.com/display.cgi?id=1336949898
Yep, you got it, congrats! I hope you understand how it works. If not, feel free to post.
I guess you have not read this on Sunday: Standard equation of a circle with a radius of r and centre (x0,y0) is: (x-x0)²+(y-y0)²=r² Centre of circle: ((5-1)/2, (4+(-4))/2)=(2,0) Out of the four
choices, there is only one that has a centre positioned at (2,0). (...
1. Centre is at (2,0) but (x+2)^2+y^2=5 has a centre at (-2,0) 2. radius is 5, so the right hand side should be 5^2=25.
Your answer is not correct. Please explain how you got (x+5)^2+(y+4)^2=100 as your answer. Hint: Centre of circle is at: ((5-1)/2, (4+(-4))/2)=(2,0)
I do not have the same answer as you. If AB is the diameter, the centre is located at the mid-point between A and B. The mid-point between A and B is ((Xa+Xb)/2, (Ya+Yb)/2)=(2,0) I do not know how
you got -2 and 3 for the centre of the circle. Can you explain?
Standard equation of a circle with a radius of r and centre (x0,y0) is: (x-x0)²+(y-y0)²=r² Centre of circle: ((5-1)/2, (4+(-4))/2)=(2,0) Out of the four choices, there is only one that has a centre
positioned at (2,0). (hint: refer to the standard equation of th...
p=probability of voting q=(1-p)=probability of not voting. Out of 5 adults randomly selected, the probability that exactly 2 voted is calculated according to the binomial expansion, C(5,2)p^2q^3 =(5!
/(2!3!))*0.57^2*0.43^3 =0.258 (approx.)
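A sketch of the same binomial calculation in Python:

```python
from math import comb

p = 0.57        # probability an adult voted (given)
n, k = 5, 2     # 5 adults selected, exactly 2 voted
prob = comb(n, k) * p**k * (1 - p)**(n - k)
print(round(prob, 3))  # 0.258
```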
Total number of marbles = 5+8+3 = 16 Marbles which are not green = 5+3 = 8 P(not green) = 8/16 = 1/2
55 mph= 55mph*5280 ft/mi * 12 in/ft /(60 min/h) = 58080 in/min Each revolution of the trailer tire travels a distance equal to the circumference of the tire =(13)*π inches Number of revolutions per
minute =58080/13π =1422 rpm approx. Note: the answer is the same if the c...
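The unit conversion chains together as follows (a sketch; the 13 inches is the tire diameter from the problem):

```python
from math import pi

inches_per_minute = 55 * 5280 * 12 / 60  # mph -> in/min
circumference = 13 * pi                  # one revolution, in inches
rpm = inches_per_minute / circumference
print(round(inches_per_minute), round(rpm))  # 58080 1422
```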
math :)
You're welcome!
Volumes of pyramids above the cutting planes are proportional to the cube of the height. So the volume above the cutting plane at h/4 is V1=(3/4)^3*100=2700/64 in³ Volume above the 2h/3 cut is V2=(1/
3)^3*100=100/27 in³ The volume between two cuts are therefore = V1-V...
The volume of a sphere is (4π/3)r³. The shell thicknesses are: X=1", Y=2", Z=1". The proportions of volumes X, Y and Z are: X:Y:Z =(1³-0³):(3³-1³):(4³-3³) =1:26:37. Therefore weight of ball =
(1*16+26*14+37*12)/(1+26+37) =824/64 =12.875
You're welcome!
Out of the possible numbers (in the sample space), if you cannot represent both events (odd and even) by a single outcome, then the events are mutually exclusive. In this case, we cannot have a
number which is both odd and even, so E and F are mutually exclusive. Events A and ...
Use P(1+r)^n Since interest is compounded quarterly, the interest rate has to be divided by 4 to get the quarterly rate. The number of periods (years) has to be multiplied by 4 to get quarters. P=500
R=(1+0.0425/4) n=12*4=48 So 500*(1+0.0425/4)^48 =830.41
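The same compounding, sketched in Python:

```python
principal = 500
quarterly_rate = 0.0425 / 4  # annual rate divided by 4
quarters = 12 * 4            # 12 years of quarterly periods
amount = principal * (1 + quarterly_rate) ** quarters
print(round(amount, 2))  # 830.41
```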
Number of ways to choose 3 from each would be: n=11!/(8!3!) * 5!/(2!3!) out of total number of ways N=16!/(6!10!) Probability: n/N = 75/364
Among the 16, how many are from Atlantis and how many are from Zedonia?
factoring math
Use difference of two squares: -16sin^2a+81 =81-16sin^2a =9²-(4sin(a))² =(9+4sin(a))(9-4sin(a))
Assuming 6-sided number cubes. Let the events A=throwing less than 6 B=one of them is a three Then P(A∩B) =|{(1,3),(2,3),(3,1),(3,2)}| / 36 = 1/9 P(B)
=11/36 (at least one die shows a three). Throwing less than six given one of them is a 3 is therefore the conditional probability of A given B, or P(A|B)...
Math 4
3 apples + 1 peach = 4 fruits. There are 10 fruits in the basket. What is the probability of picking (at random) one of 3 apples and 1 peach?
f(x)=4x sqrt(1-x²) use substitution u=1-x^2 du=-2xdx ∫f(x)dx =∫(4x sqrt(u))du/(-2x) =-∫2 sqrt(u)du =-(4/3)u^(3/2)+C =-(4/3)(1-x^2)^(3/2)+C
Algebra 2
9C9 =9!/(9!(9-9)!) =1
f(x)=(1/3)x+5 g(x)=(1/3)x-2 can be interpreted as y=(1/3)x+5 y=(1/3)x-2 both of which are in slope-intercept form.
The same question was asked a few days ago and it appears in the "related questions" below. Here's a link to the question: http://www.jiskha.com/display.cgi?id=1336634629
We don't see the figure, so we don't know where to put the answers!
The area of a polygon of n sides with an apothem of a and side c is A=nac/2 For a hexagon, n=6, apothem=c*sqrt(3)/2 so Area=6*c*(sqrt(3)/2)*c/2 =(3/2)c²sqrt(3) In the given case, 96sqrt(3) = (3/2)c²sqrt(3)
Solve for c (length of side).
Questions like this are generally not answered because we don't see any effort on the poster's part. If you are working on the problem and have a specific question, we'd be pleased to help.
What have you done so far?
Assuming order of the toppings are not relevant, then you can order 7C4=7!/((7-4)!(4!)) Sundaes =35
Use conditional probability P(A|B)=P(A∩B)/P(B) P(B)=blue line = 0.6 P(N)=shot on net=0.3 P(N∩S)=on net & score = 0.01 P(B∩S)=blue line & score = 0.005 P(S|B)
=P(B∩S)/P(B) =0.005/0.6=0.00833 P(S|N) =P(N∩S)/P(N) =0.01/0.3=0.0333
Programming with Eclipse
What have you done so far and what is your question?
Analytic Geometry
Given two planes: Π1 : Ax+By+Cz+D=0 and Π2 : ax+by+cz+d=0 The following uses the notation that <a,b,c> represents a vector with three components. The normal vectors are: N1 : <A,B,C> N2 : <a,b,c> If
Π1 is perpendicular to Π2, then N1.N2=0 (dot-pro...
Each minute, Lisa will shovel 1/45 of the driveway, while Bill will shovel 1/65 of the driveway. Together, each minute they will shovel (1/45+1/65) of the driveway. They will take 1/(1/45+1/65)
minutes to shovel the whole driveway. Note: the answer is between 26 and 27 minutes.
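The work-rate arithmetic, as a short sketch:

```python
# Rates add: Lisa does 1/45 of the driveway per minute, Bill 1/65.
minutes = 1 / (1 / 45 + 1 / 65)
print(round(minutes, 2))  # 26.59
```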
"represents t, the time in seconds, as a function of l the number of laps" means that l is the independent variable, and t(time) is the dependent variable. Therefore you only have a choice between A
and D. Choose the one with the correct relation.
If you have not done the geometric distribution at school, you need to read up about it before. I agree that it is not obvious if you have not done the distribution before, or if you have not done
summation of geometric series before. In my calculations above, P(1) is the prob...
The geometric distribution gives the probability of getting the first success on the xth trial: P(X=x)=Px(x)=(1-p)^(x-1)p The probability of getting the first success within n trials is therefore the sum of the
above, or P(n)=Σ(1-p)^(n-1)p =pΣ(1-p)^(n-1) (geom seq.) =p(1-(1-p)...
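The partial-sum identity is easy to spot-check numerically (the value of p here is arbitrary, chosen just for the check):

```python
p, n = 0.3, 8
# Sum of geometric-distribution probabilities for trials 1..n
partial = sum((1 - p) ** (k - 1) * p for k in range(1, n + 1))
closed = 1 - (1 - p) ** n  # the closed form from the geometric series
print(abs(partial - closed) < 1e-12)  # True
```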
Correct. I get P(C\T) (probability inside circle minus triangle) =(Ac-At)/Ac =81.92/113.1=72.4%
All right-triangles inscribe in a circle with a diameter equal to the hypotenuse. Therefore for the 30-60-90 triangle, the radius of the circle is 6 inches, and the short side is also 6 inches. The
height is 6sqrt(3), so the area of the triangle is At=36sqrt(3)/2 = 18 sqrt(3) ...
If he takes out 8 marbles, they could be 4 blue and 4 green. So how many does he need to make sure there is one of each?
The question is best answered with a Venn diagram. Start with those that use all three. 14 use A & B, so 14-3=11 use A & B but not C. 8 use A & C, so 8-3=5 use A & C but not B. So those who use A
only: A only = 36-(11+3+5)=17
A & B but not C = A&B - all three = 19-9 =10
Many Access SQL tutorials are available online. Here's one of them: http://www.lynda.com/Access-training-tutorials/140-0.html?utm_source=google&utm_medium=cpc&utm_campaign=Search-Biz-Access&
algebra 2
y=x^2+2 -4x-y=10 y=-3x^2+x-2 y=-5x+3 Both systems have no real roots. If you graph the two curves, they do not intersect. If you are interested in complex roots, eliminate y by comparison and solve
for x. Substitute x into the equations to find y.
The two draws are independent. So the probability of getting a yellow in both bags is the product of the probability of getting a yellow in each bag. P1(Y)=3/12 P2(Y)=5/15 So P(Y,Y)=P1(Y)*P2(Y)
Use the intersecting chord theorem: When two chords cut each other into p1,p2 and p3,p4, then p1*p2=p3*p4. In the given case (42/2)(42/2)=4.8(2R-4.8) Solve for R.
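Solving the chord equation for R (a sketch; the 42 and 4.8 figures come from the original problem, which isn't shown here):

```python
half_chord = 42 / 2
d = 4.8
# Intersecting chords: (42/2) * (42/2) = d * (2R - d)
# =>  R = (half_chord**2 / d + d) / 2
R = (half_chord ** 2 / d + d) / 2
print(R)  # ~48.3375
```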
Try using the standard form: a^2-b^2=(a+b)(a-b) and noting that 9x^2-16 = (3x)^2 - 4^2
First digit is even: 4 choices (assuming zero is excluded) Last digit is odd: 5 choices Eight digits remain for the second digit, and 7 for the third (no repetition). Total=product of all four
Between 1 to 8: C: {4,6,8} composite. So P(C)=3/8 O: {1,3,5,7} odd. P(O)=4/8 P(C,O)=P(C)*P(O)=3/16
Put it in the form: y=ax^2+bx+c if a>0, then it's a happy parabola (opens up) if a<0, then it opens down (sad).
If you notice that the Taylor's series expansion of e^(-x/4) = 1-x/4+x^2/32-x^3/384+x^4/6144-... is exactly the given series less the first term with x=1, so the given series is e^(-1/4)-1 = -0.2212
to four places. If you sum term by term, you just need to sum until the ne...
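A quick check of the quoted expansion of e^(-x/4) at x = 1 (a sketch; the original series from the question isn't shown, so this only verifies the answer's Taylor terms):

```python
from math import exp

terms = [1, -1 / 4, 1 / 32, -1 / 384, 1 / 6144]  # e^(-x/4) at x = 1
series = sum(terms) - 1  # the posted series omits the leading 1
print(round(series, 4), round(exp(-0.25) - 1, 4))  # -0.2212 -0.2212
```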
Do you just want the answer or you think you can work (or start to work) along the lines of your previous problem? http://www.jiskha.com/display.cgi?id=1336497331
Difference =($2.50 per basket)(125 baskets)(4 shifts per week)(4 weeks) =$2.50*125*4*4 =$5000
From 4x + 5y - 3 = 0 Divide all coefficients by sqrt(4^2+5^2) to get 4x/sqrt(41) +5y/sqrt(41) - 3/sqrt(41) = 0 where 3/sqrt(41) = length of normal cos(θ)=4/sqrt(41), and sin(θ)=5/sqrt(41) Note: the
normal form is given by: x cos(θ) + y sin(θ) - p =...
Answering by logic and mental calculations: Since the order of digits is not important (they will be reversed anyway), the only two-digit combinations that add up to 15 are (8,7) and (6,9). (8,7)
does not work because 87-78=9. Now try 96-69=27, which is obviously the answer.
As I indicated, this is a trick question. I have assumed that not all five books need to be distributed because the question says the teacher "prepares" 5 books, and any student can take "one or no
book". It did not say he "distributes" 5 books. T...
This is a tricky question. The teacher prepares 5 books for 7 students, but did not say that all five books HAVE to be distributed. So my interpretation is that 0 to 5 books could have been
distributed. For the case of 0 book distributed, there is only one way: no one gets any...
Normalize 515 and 585, namely z(515)=(515-550)/35=-1 z(585)=(585-550)/35=1 The percent is the difference of probabilities of one-tail z-values between -1 and +1. It should be a little less than 70%.
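The exact one-sigma figure can be had from the error function (a sketch):

```python
from math import erf, sqrt

# P(-1 < Z < 1) for a standard normal variable
prob = erf(1 / sqrt(2))
print(round(100 * prob, 2))  # 68.27
```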
c programming
Here, we do not write complete programmes for you. We can give help in debugging, design, or answer specific questions. Here, you can create a 2-d array and work accordingly. Be sure to check for
off-board indices to avoid memory errors. If you need help with your pseudocode o...
A hexagon has six sides, so n=6. Use the formula given in: http://www.jiskha.com/display.cgi?id=1336457737
Area of a polygon with n sides is given by Area = nas/2 where n=number of sides of polygon, a=apothem, s=length of a side. For a triangle, n=3.
math probability
I assume you mean that it is a fair spinner (i.e. it stops at any position with equal probability) but the sectors of the spinner are of different sizes. In this case, the probability of it stopping
on "a" is the sector angle of a (in degrees) divided by 360. If it i...
F.PA=<5,10,10>.<6,5,1> =5*6+10*5+10*1 =30+50+10 =90
Most books write vectors as <x,y,z> to differentiate from a point (x,y,z). You would separate the work done by P along the two lines (vectors) OP and PA. The force is 15N along <1,2,2>. Since <1,2,2>
is not a unit vector, we need to normalize it as <1,2,2&...
Work done by a force F along a vector P is F.P (dot product). So total work done is F.P + F.A where A=<1,-3,4>, P=<7-1, 2-(-3), 5-4> and F=15<1,2,2>/sqrt(1^2+2^2+2^2)=<5,10,10> The division by sqrt
(...) is required to normalize the direction vector to a...
Work in tiers. If sales exceed the first tier: commission 30000*3%=$900 Subtract the first tier and calculate on the next: 130000-30000=100000 All of it goes on 5%, so 100000*5%=$5000 So he gets
$5000+$900=$5900 in commission.
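The tiered calculation as a small sketch (the tier boundary of $30,000 and the 3%/5% rates are taken from the answer above; the function name is mine):

```python
def commission(sales, tier1_cap=30000, tier1_rate=0.03, tier2_rate=0.05):
    in_tier1 = min(sales, tier1_cap)          # portion inside the first tier
    in_tier2 = max(0, sales - tier1_cap)      # remainder at the higher rate
    return in_tier1 * tier1_rate + in_tier2 * tier2_rate

print(commission(130000))  # 5900.0
```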
You're welcome!
Let x=east, y=north, x-component of velocity = 35 y-component of velocity = 8 Magnitude=sqrt(8^2+35^2)=sqrt(1289)=35.9 mph. approx. direction: θ=tan-1(35/8) direction=NθE
Breakeven Analysis
Breakeven analysis answers the question, "what do I need in sales in order to break even?" By breaking even, we mean not losing any money, but also not making any money. The breakeven sales amount is
commonly referred to as your monthly "nut". If sales are below this amount, you feel bad, if they are higher, you feel good. Unless you use Enron type accounting, or are swimming in debt, you should
be able to work out the cash flow to sustain your business if you are consistently profitable.
Of course, you want to make a profit, not just break even month after month. So you can use this type of analysis to set a profit goal, and figure out what your sales should be to reach that goal.
The Breakeven Coverage Ratio is another way of stating a profit goal.
Recently, I had a client ask me to figure out what his Breakeven Sales were for him. I thought to myself - "why can't you do that yourself? It's not exactly rocket science."
But maybe it is not so obvious. There are some nuances for a small business that can make the calculation difficult. The key is to separate your costs into fixed and variable portions. The variable
costs are those incurred only when a sales is made. Then you do a little algebra.
If you are a retailer or wholesaler, your variable costs would be the cost of goods that you re-sell, plus perhaps some credit card charges, and maybe commissions.
For a service company, you may have no variable costs, or perhaps just commissions, or maybe sub-contract labor. When you are small, salaries are not variable over small increments in sales. You make
do with the work force you have.
Fixed costs are things like rent, utilities, telephone, salaries, and benefits. For a basic breakeven analysis, we consider costs as fixed if they don't vary with small increments of sales.
Obviously, if your sales quadruple, you would need to add staff and incur other costs that are fixed in the short term. But for the purposes of a breakeven analysis, we consider these costs to be fixed.

The breakeven formula is:

"Breakeven Sales" = "Fixed Costs" / (1 - "Variable Cost % of Sales")
For a profit goal expressed as Return on Sales (ROS), we use this formula:
"Sales" = "Fixed Costs" / (1 - "Variable Cost % of Sales" - "ROS %")
(see the derivations of these formulas at the end of this article)
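Both formulas drop straight into code; a minimal sketch (the function names are mine, not the article's):

```python
def breakeven_sales(fixed_costs, variable_pct):
    """Sales needed to cover fixed costs at zero profit."""
    return fixed_costs / (1 - variable_pct)

def sales_for_ros(fixed_costs, variable_pct, ros_pct):
    """Sales needed to hit a target return-on-sales percentage."""
    return fixed_costs / (1 - variable_pct - ros_pct)

# $20,000 fixed costs with 50% variable costs -> $40,000 breakeven
print(round(breakeven_sales(20_000, 0.50)))      # 40000
print(round(sales_for_ros(20_000, 0.50, 0.10)))  # 50000
```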
Example 1 - a Services Company
Fixed costs are now $30,000 per month. During the last 6 months, variable costs amounted to $32,000 on sales of $200,000. You calculate your "Variable Cost % of Sales" to be 0.16 ($32,000/ $200,000
or 16%) - which is for commissions and credit card fees. Your breakeven is $30,000 / (1 - 0.16) = $30,000 / 0.84 = $35,714.
What will your breakeven be if you add a salaried sales rep at $5,000 per month (including FICA and benefits)? It is $35,000 / 0.84 = $41,667, an increase of $5,953.
What sales do you need to produce a 10% return on sales after adding this salaried rep? It is $35,000 / (1 - 0.16 - 0.1) = $35,000 / 0.74 = $47,297.
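Example 1's arithmetic checks out with a few lines of Python (contribution margin is 1 - 0.16 = 0.84):

```python
fixed, var_pct = 30_000, 0.16

print(round(fixed / (1 - var_pct)))                   # 35714: breakeven
print(round((fixed + 5_000) / (1 - var_pct)))         # 41667: with the $5,000/month rep
print(round((fixed + 5_000) / (1 - var_pct - 0.10)))  # 47297: 10% ROS target
```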
Example 2 - a Retailer
Fixed costs are now $30,000 per month. You thought that your "Variable Cost % of Sales" was 50% because your standard markup is 2 times cost. But, based on the last 6 months of actual financial
results, you calculate your "Variable Cost % of Sales" to be 0.58 (i.e. 58%) - because of markdowns and credit card processing fees. Your breakeven is $30,000 / (1 - 0.58) = $30,000 / 0.42 = $71,429.
You figure it will cost you $3,000 a month to extend your store hours by 10 hours a week. How much additional sales will you have to generate to cover the additional costs? It is $3,000 / 0.42 =
$7,143. Your total breakeven would then be $33,000 / 0.42 = $78,571 per month.
What do you need now to produce a 10% return on sales? You need $33,000 / (1 - 0.58 - 0.10) = $33,000 / 0.32 = $103,125.
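Example 2's figures can be verified the same way (contribution margin is 1 - 0.58 = 0.42):

```python
fixed, var_pct = 30_000, 0.58
margin = 1 - var_pct  # 0.42 contribution margin

print(round(fixed / margin))                     # 71429: breakeven
print(round(3_000 / margin))                     # 7143: extra sales for +$3,000 costs
print(round((fixed + 3_000) / margin))           # 78571: new total breakeven
print(round((fixed + 3_000) / (margin - 0.10)))  # 103125: 10% ROS target
```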
Calculating Breakeven in the Real World
There are a few things you run into when you try to apply this technique in the real world. Keep in mind that the numbers used to calculate breakeven are coming from your accounting system. Here’s
what you run into:
• Fixed costs seem to vary from month to month
• Gross profit margins and hence variable costs may also vary from month to month
• Owner compensation distorts the breakeven calculation
• The existence of debt service makes a cash breakeven a better measure
The solution is to smooth out variations using moving averages, and calculate more than one breakeven number to find one that works best for your situation.
Fixed costs seem to vary from month to month
This seems like a ridiculous statement. After all, what is a fixed cost, but one that is "fixed." Here are some reasons these numbers can bounce around:
• the Bookkeeper may sometimes put an expense in the wrong month
• there may be other bookkeeping errors, especially if there is more than one person making entries in the book. What is booked as a cost of sales one month may be a fixed expense another month.
• some expenses are quarterly or annual (such as business licenses)
• you have unusual legal or accounting fees that are not related to the level of sales
Gross profit margins may also vary from month to month
The mix of sales may change from one month to the next, affecting your overall gross margin. Sales promotions may reduce average prices. Tiered commission plans may cause average commission rates to vary.
Owner compensation distorts the breakeven calculation
Owner compensation consists of owner salary and benefits, and possibly a few other expenses such as the company delivery yacht or the European training seminars. Separating out these expenses and
calculating a breakeven on what is left helps you figure out what the number is you need to make to keep the business running. The theory is that in a pinch, you can give up the perks and live on a
mere mortal’s salary.
The existence of debt service makes a cash breakeven a better measure
You may or may not have a lot of debt service. If you do have auto loans, equipment loans, or mortgages – it is a good idea to include the principal payments as part of your breakeven calculation.
Some improvements to the breakeven calculation
The examples below are from an Excel spreadsheet built to take care of these problems. You can download the Excel spreadsheet by clicking on this link:
Simple Breakeven
There are four basic inputs from your Income Statement needed to calculate breakeven:
• Sales
• Cost of Sales
• Other Variable Costs (to catch truly variable costs such as commissions that may not be included in "Cost of Sales")
• Fixed Costs
This example shows six months of data for a hypothetical company with fixed costs of $20,000 per month, the same for all six months. But because the mix of sales varies from month to month, the gross
margin is anywhere from 60% in month 1 down to 48.15% in month 6. Total variable costs as a percentage of sales range from 50% to 61.48%. As a result, "Breakeven Sales" ranges from $40,000 to
The Breakeven Coverage Ratio is simply the "Actual Sales" for the month divided by the "Breakeven Sales." This is a measure of how well you've got your breakeven covered. Depending on your type of business, a coverage ratio of 1.25 or better is considered good.
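The ratio itself is a one-liner; a sketch with illustrative numbers:

```python
def coverage_ratio(actual_sales, breakeven_sales):
    """How well monthly sales cover the breakeven; 1.25+ is considered good."""
    return actual_sales / breakeven_sales

print(round(coverage_ratio(50_000, 40_000), 2))  # 1.25
```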
3 Month Moving Average
To answer the question, "what is my breakeven?" – a moving average is helpful to smooth out the variations in costs and margins from month to month.
The owner of this company would feel confident saying his breakeven is about $45,000 per month. The breakeven coverage ratio of 1.21 is a little below the target of 1.25 or higher.
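A sketch of the smoothing step, using a trailing 3-month window over monthly fixed costs and variable-cost percentages; the monthly figures below are illustrative, not the spreadsheet's:

```python
def trailing_avg(values, window=3):
    tail = values[-window:]
    return sum(tail) / len(tail)

# Illustrative monthly data, loosely shaped like the example company
fixed_by_month = [20_000] * 6
var_pct_by_month = [0.50, 0.52, 0.55, 0.57, 0.55, 0.6148]

smoothed_breakeven = trailing_avg(fixed_by_month) / (1 - trailing_avg(var_pct_by_month))
print(round(smoothed_breakeven))  # a single stable figure instead of six noisy ones
```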
Breakeven before Owner’s Compensation
If the owner is drawing $5,000 per month in compensation, you can back that out of fixed costs to calculate Breakeven before owner’s compensation.
Notice how this lowers the breakeven quite a bit.
Cash Breakeven
If you are paying back a loan, the "Cash Breakeven" may be more important to you than the "Sales Breakeven." Just add back the principal portion of the loan to the figure for total fixed costs (the
interest will already be included in fixed costs) before calculating the breakeven. You can see what this does to the breakeven coverage ratio below – now the company is barely breaking even.
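Cash breakeven just folds the principal payment into fixed costs before dividing by the contribution margin; a sketch with illustrative numbers:

```python
def cash_breakeven(fixed_costs, monthly_principal, variable_pct):
    # Interest is already inside fixed costs; only the principal is added back
    return (fixed_costs + monthly_principal) / (1 - variable_pct)

print(round(cash_breakeven(20_000, 2_500, 0.55)))  # 50000
```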
What if?
A key reason to look at breakeven is to understand how much you have to sell to in order to sustain the business. You also want to know what that figure will be as you hire people, acquire more
space, or buy new equipment. To calculate the breakeven for a future month, you need to make just two assumptions:
• Fixed costs
• Variable Costs as a % of Sales
You can look at the year to date average, or a 3 month moving average to decide on what to use for these numbers. Then bump up the fixed costs by the cost of the new person, or the increase in rent,
and you have a new breakeven.
In the example below, we pick the 3 month moving average of 55.85% for "variable costs as a % of sales", not too far off from the year to date average. We increase the fixed costs by $5,000 to
reflect the hiring of a new person, and the resulting breakeven is $56,625.
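The what-if figure checks out directly:

```python
fixed = 20_000 + 5_000  # current fixed costs plus the new hire
var_pct = 0.5585        # 3-month moving average of variable cost % of sales

print(round(fixed / (1 - var_pct)))  # 56625
```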
How the formulas were derived
First, let's state the fundamental calculation of profit:
"Profit" = "Sales" - "Fixed Costs" - "Variable Costs".
Furthermore: "Variable Costs" = "Variable Cost % of Sales" * "Sales"
So: "Profit" = "Sales" - "Fixed Costs" - "Variable Cost % of Sales" * "Sales"
Now it's time to consult with your 12-to-14-year-old to solve this equation for Sales algebraically:
For breakeven, we want Profit to be zero. So now we have:
(1) 0 = "Sales" - "Fixed Costs" - "Variable Cost % of Sales" * "Sales"
To get all the Sales terms on the same side of the equal sign:
(2) "Fixed Costs" = "Sales" - "Variable Cost % of Sales" * "Sales"
Simplifying the Sales terms:
(3) "Fixed Costs" = "Sales" * (1 - "Variable Cost % of Sales")
Divide both sides of the equation by (1 - "Variable Cost % of Sales"):
(4) "Fixed Costs" / (1 - "Variable Cost % of Sales") = "Sales"
Which is the same as:
(5) "Sales" = "Fixed Costs" / (1 - "Variable Cost % of Sales")
Profit Goal
For a return on sales (ROS) of 10% (0.1), we want Profit to be 10% of sales. So now we have:
"Profit" = 0.1 * "Sales"
But also:
"Profit" = "Sales" - "Fixed Costs" - "Variable Cost % of Sales" * "Sales"
(1) 0.1 * "Sales" = "Sales" - "Fixed Costs" - "Variable Cost % of Sales" * "Sales"
To get all the Sales terms on the same side of the equal sign:
(2) "Fixed Costs" = "Sales" - "Variable Cost % of Sales" * "Sales" - 0.1 * "Sales"
Simplifying the Sales terms:
(3) "Fixed Costs" = "Sales" * (1 - "Variable Cost % of Sales" - 0.1)
Divide both sides of the equation by (1 - "Variable Cost % of Sales" - 0.1):
(4) "Fixed Costs" / (1 - "Variable Cost % of Sales" - 0.1) = "Sales"
Which is the same as:
(5) "Sales" = "Fixed Costs" / (1 - "Variable Cost % of Sales" - 0.1)
(6) "Sales" = "Fixed Costs" / (1 - "Variable Cost % of Sales" - "ROS %") | {"url":"http://www.survivalware.com/articles/breakeven.php","timestamp":"2014-04-21T02:48:58Z","content_type":null,"content_length":"48056","record_id":"<urn:uuid:921518c2-9939-4db6-a265-836cd79556e2>","cc-path":"CC-MAIN-2014-15/segments/1398223203235.2/warc/CC-MAIN-20140423032003-00130-ip-10-147-4-33.ec2.internal.warc.gz"} |
Implementation of Determinism and Buffer Boundedness Checking via Linear Programming
This is an implementation of the algorithm described by Terauchi and Aiken, as well as a variant for buffer boundedness described in a forthcoming paper by Terauchi and Megacz.
To obtain the code, use darcs and type
darcs get http://research.cs.berkeley.edu/project/cccd-impl/
You will need ghc 6.6 and lp_solve in order to use this software. Once you have installed the prerequisites, just type
to run the examples in Examples.lhs.
The source code for the implementation can be found in literate Haskell format. | {"url":"http://research.cs.berkeley.edu/project/cccd-impl/","timestamp":"2014-04-17T12:49:40Z","content_type":null,"content_length":"5873","record_id":"<urn:uuid:fd2a6278-3000-4b97-abb1-bc4c62a1b483>","cc-path":"CC-MAIN-2014-15/segments/1397609530131.27/warc/CC-MAIN-20140416005210-00061-ip-10-147-4-33.ec2.internal.warc.gz"} |
Dynamic Analysis of Cracked Cantilever, Electrostatic Microactuators Using Radial Basis Functions
Mathematical Problems in Engineering
Volume 2012 (2012), Article ID 865461, 11 pages
Research Article
Dynamic Analysis of Cracked Cantilever, Electrostatic Microactuators Using Radial Basis Functions
Department of Electrical Engineering, National Penghu University of Science and Technology, Magong, Penghu, Taiwan
Received 10 July 2012; Revised 7 November 2012; Accepted 17 November 2012
Academic Editor: Slim Choura
Copyright © 2012 Ming-Hung Hsu. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any
medium, provided the original work is properly cited.
The dynamic problems of a microactuator with a single edge crack are numerically formulated using radial basis functions. The microactuator model incorporates the taper ratio, electrode shapes, and
crack length, all of which govern the dynamic behavior of microactuators. To optimize the design of a microactuator, many characteristics of variously shaped cantilevers and curved electrodes are also investigated.
1. Introduction
Microelectromechanical systems exploit microscale effects to extend the range applications of actuators, accelerometers, angular rate sensors, and other devices. Elucidation of the dynamic mechanism
of electrostatic microactuators contributes markedly to their design. Legtenberg et al. [1] investigated the dynamic behavior of active joints for different electrostatic actuator designs and
proposed the idea of using a curved electrode to improve pull-in performance. Electrostatic actuators are widely applied in microelectromechanical systems. Electrostatic microactuator devices have a
high operating frequency and low power consumption. Hong et al. [2] studied the influence of the dimensions and stress of such a device on fatigue endurance when an external force was applied to a
normal microcantilever beam and a notched cantilever beam. Their simulations indicated that the stress was maximal at the fixed end, and their results showed that a deep notch in a specimen concentrates stress and thus promotes specimen failure. Mehdaoui et al. [3] presented the vertical cointegration of AlSi MEMS tunable capacitors and Cu inductors in tunable LC blocks.
Etxeberria and Gracia [4] proposed tunable MEMS volume capacitors for high-voltage applications. Liu et al. [5] presented actuation by electrostatic repulsion established by nonvolatile charge
injection. Gallant and Wood [6] investigated how fabrication techniques affect the performance of widely tunable micromachined capacitors. Borwick III et al. [7] analyzed a high-Q microelectromechanical capacitor with a large tuning range for RF filter systems. Harsh et al. [8] studied the design and realization of a flip-chip integrated microelectromechanical tunable capacitor.
Osterberg et al. [9, 10] proposed a one-dimensional model and a three-dimensional model for analyzing electrostatically deformed diaphragms. Their results revealed that the electrostatic deformation
calculated using the one-dimensional model is close to that obtained using a three-dimensional model. Gilbert et al. [11] analyzed the three-dimensional coupled electromechanics of
microelectromechanical systems using a CoSolve-EM simulation algorithm. Elwenspoek et al. [12] studied the dynamic behavior of active joints for various electrostatic actuator designs. Shi et al. [13
] presented the combination of an exterior boundary element method for analyzing electrostatics and a finite-element method for analyzing elasticity to evaluate the effect of coupling between the
electrostatic force and the elastic deformation. Gretillat et al. [14] employed the three-dimensional MEMCAD and finite-element method programs to simulate the dynamics of a nonlinear actuator,
taking into account squeeze-film damping. Hung and Senturia [15] developed leveraged bending and strain-stiffening methods to increase the maximum travel distance before the pull-in of electrostatic
actuators. Chan et al. [16] measured the pull-in voltage and capacitance-voltage characteristic and performed two-dimensional simulations that included the electrical effects of fringing fields and
finite-beam thickness to determine the material properties of electrostatic microactuators. Li and Aluru [17] developed a mixed-regime approach for combining linear and nonlinear theories to analyze
large microelectromechanical structure deformations at large applied voltages. Their results demonstrated that electrostatic actuators can undergo large deformation at certain driving voltages.
Chyuan et al. [18–20] established the validity and accuracy of the dual boundary element method and employed it to elucidate the effect of a variation in gap size on the levitation of a
microelectromechanical comb drive. Qiao et al. [21] presented a suspension beam called a "two-beam" to realize a parallel-plate actuator with an extended working range, but without the disadvantages of a complex control circuit and high actuation voltage. In this investigation, radial basis functions are adopted to analyze how cantilever shape, damping, cracks, and electrode shape affect dynamic
behavior in electrostatic actuator systems. The radial basis function scheme is applied to formulate the electrostatic field problems in matrix form. The integrity and computational accuracy of
radial basis functions are demonstrated with reference to various case studies. To the author's knowledge, very few published investigations have presented a vibration analysis of a cantilever
electrostatic microactuator with an edge crack using radial basis functions.
2. Radial Basis Function
A radial basis function is a real-valued function whose value depends only on the distance from an origin. Kansa [22, 23] initiated a scheme, based on the concept of radial basis functions, in which a given function or its partial derivatives with respect to a coordinate direction is expressed as a linear weighted sum of the functional values at all mesh points along that direction. In
their algorithm, the node distribution was completely unstructured. Wang and Liu [24] proposed a point interpolation meshless method that was based on radial basis functions and incorporated the
Galerkin weak form for solving partial differential equations. Elfelsoufi and Azrar [25] investigated the buckling, flutter, and vibration of beams using radial basis functions. Hon et al. [26] used
radial basis functions for function fitting and solving partial differential equations using global nodes and collocation procedures. Liu et al. [27] constructed shape functions with the delta
function property based on radial and polynomial basis functions. In this study, shape functions are constructed using radial basis functions. A radial basis function can be expressed as follows [28,
29]: where is a constant. The radial basis function is typically used to develop the functional approximations of the following form [28, 29]: where is the coefficient to be determined. The
microcantilever deflection denotes a sum of radial basis functions, each associated with a different center . The domain contains collocation points. Although this nonlinear equation of the
electrostatic microactuator does not have an analytical solution, numerical approaches can be adopted to solve it. These nonlinear partial differential equations are obtained numerically using the
radial basis function approach, which does not require a mesh.
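As a concrete illustration of the approximation in (2.2): the paper's own equation was lost in extraction, so the multiquadric form from the cited Kansa papers [22, 23] is assumed here, with an arbitrary shape constant c = 0.5. A smooth function is fit by solving a small collocation system for the expansion coefficients:

```python
import numpy as np

def multiquadric(r, c=0.5):
    # Multiquadric basis phi(r) = sqrt(r^2 + c^2); c is the shape parameter
    return np.sqrt(r**2 + c**2)

# Collocation points and target values on [0, 1]
x = np.linspace(0.0, 1.0, 15)
f = np.sin(2 * np.pi * x)

# Interpolation matrix A[i, j] = phi(|x_i - x_j|); solve A a = f for the weights
A = multiquadric(np.abs(x[:, None] - x[None, :]))
a = np.linalg.solve(A, f)

def u(xq):
    # Evaluate the RBF expansion sum_j a_j * phi(|xq - x_j|) at a new point
    return multiquadric(np.abs(xq - x)) @ a

err = abs(u(0.37) - np.sin(2 * np.pi * 0.37))
print(err)  # small interpolation error at a point between the nodes
```

The matrix solve plays the role of determining the undetermined coefficients mentioned in the text; a mesh-free PDE solve replaces f with the boundary and residual conditions.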
3. Dynamic Behavior of Clamped-Free Microactuators
Figure 1 displays the geometry of an electrostatic actuator with an edge crack close to its fixed end. Variable is the thickness of a microactuator at , denotes the tip thickness of a microactuator at , and is the length of the microbeam. An electrostatic force, introduced by the difference between the driving voltage of the curved electrode and that of the cantilever, pulls the
cantilever toward the curved electrode. The electrostatic force is approximately proportional to the inverse of the square of the distance between the curved electrode and the shaped cantilever. The
equation of motion of an electrostatic microactuator with an edge crack near the fixed end can be derived as [1, 30] where represents the shape of the curved electrode, and is given by a polynomial
such that , is the gap between the tip of a curved electrode at and the tip of a micro cantilever at, is defined as the ratio, and is the polynomial order of the shape of the electrode. The
electrode shape varies with the value of. is the driving voltage; is Young’s modulus of the actuator material; is the dielectric constant of air, ; is the width of the microactuator; is the initial
gap, as displayed in Figure 1. The dielectric layer prevents short circuits. The cross-sectional area of a microactuator is, and is the moment of inertia of the cross-sectional area of a
microactuator and is given by and . The Kelvin-Voigt damping force is assumed to model resistance to the actuator strain velocity. is the Kelvin-Voigt damping coefficient. To ensure generality,
Kelvin-Voigt damping effects are considered in the formulation of the equations of motion [31, 32]. Equation (3.1) depicts the fringing effects of the electrical field. The dynamic characteristics of
edge-cracked beams are of considerable importance in many designs. The flexibility caused by a crack of depth can be determined using Broek’s approximation [33] to be where is the stress intensity
factor under mode I loading; is Poisson's ratio; denotes the bending moment at the crack; is the flexibility of the micro cantilever. The magnitude of the stress intensity factor can be determined
using Tada’s formula [34], as where Substituting the stress intensity factor into (3.2), yields Since the bending stiffness of the cracked section of a micro cantilever can be expressed as A crack
can be represented as a spring of zero length and zero mass. The boundary conditions associated with (3.1) for a microactuator with an edge crack near its fixed end are given by This nonlinear
equation does not have an analytical solution; however, numerical approaches can be utilized to solve it. The radial basis function approach is adopted to solve numerically these nonlinear partial
differential equations. In the radial basis function approach, (2.2) is substituted into (3.1). The equation of motion of a fixed-free microbeam can be rearranged into a formula based on the radial
basis function approach: Based on the radial basis function approach, the boundary conditions of a clamped-free microactuator with an edge crack can be rearranged into matrix forms as
4. Numerical Results
The following figures summarize the results thus obtained. Figure 2 shows the frequencies of a clamped-free curved electrode microactuator. The material and geometric parameters of the actuator
considered herein are, μm, μm, μm, , , and μm [1, 35]. The figure plots the analytical solutions and the numerical results obtained using radial basis function approach. Numerical results indicate
that the estimated frequencies remain stable even when only fifteen collocation points are considered. They also suggest that the frequencies calculated using the radial basis function approach are
extremely close to the exact solutions. Figure 3 shows the frequencies of a clamped-free curved electrode microactuator for various crack depths. The frequency falls as the crack depth increases; the crack depth significantly affects the frequencies of the micro cantilever. Computational results obtained using the radial basis function approach are compared with numerical results obtained using the
differential quadrature method. Figure 4 compares the tip deflections of an actuating electrode for various driving voltages and electrode shapes. Changing the electrode shape in an electrostatic
microactuator is an effective technique for varying the electrostatic force distribution therein. Numerical and measured results suggest that the tip deflections calculated using the radial basis
function approach are in good agreement with published experimental results [1]. Numerical results demonstrate that the pull-in voltage declines gradually, as the value of increases. Figure 5 plots
the tip responses of the microactuator with different values. The tip deflections of the micro cantilever drop, as increases. As increases, the applied voltage required to cause a particular
deflection of the tip of the micro cantilever increases. The cantilever shapes substantially influence the pull-in behavior of microactuators. Figure 6 plots the tip responses of the microactuator
for various values of. Notably, the tip deflection increases with crack depth. The depth of the crack significantly affects the tip response. Figure 7 shows the variation of the tip responses of
the actuator with. Because of recent advances in stably responding and high performance actuator structures, the enhancement of damping has become a very significant issue. The numerical results in
this example show that internal damping can significantly affect the dynamic behavior of the actuator system. Strong residual vibration occurs in a system with a zero internal damping coefficient.
5. Conclusions
This work examines radial basis functions for the dynamic problems of an electrostatic actuator with a crack. The effects of internal damping, electrode shape, edge cracking, and cantilever shape on the pull-in behavior of electrostatic microstructures are investigated. The frequency of the microcantilever declines as the crack depth increases, and the suitability of radial basis functions for describing the pull-in behavior of a microactuator with an edge crack is demonstrated.
1. R. Legtenberg, J. Gilbert, S. D. Senturia, and M. Elwenspoek, "Electrostatic curved electrode actuators," Journal of Microelectromechanical Systems, vol. 6, no. 3, pp. 257–265, 1997.
2. H. Hong, J. N. Hung, and Y. H. Guu, "Various fatigue testing of polycrystalline silicon microcantilever beam in bending," Japanese Journal of Applied Physics, vol. 47, no. 6, pp. 5256–5261, 2008.
3. A. Mehdaoui, M. B. Pisani, R. Fritschi, P. Ancey, and A. M. Ionescu, "Vertical co-integration of AlSi MEMS tunable capacitors and Cu inductors for tunable LC blocks," Microelectronic Engineering, vol. 84, no. 5–8, pp. 1369–1373, 2007.
4. J. A. Etxeberria and F. J. Gracia, "Tunable MEMS volume capacitors for high voltage applications," Microelectronic Engineering, vol. 84, no. 5–8, pp. 1393–1397, 2007.
5. Z. Liu, M. Kim, N. Y. M. Shen, and E. C. Kan, "Actuation by electrostatic repulsion by nonvolatile charge injection," Sensors and Actuators A, vol. 119, no. 1, pp. 236–244, 2005.
6. A. J. Gallant and D. Wood, "The role of fabrication techniques on the performance of widely tunable micromachined capacitors," Sensors and Actuators A, vol. 110, no. 1–3, pp. 423–431, 2004.
7. R. L. Borwick III, P. A. Stupar, J. DeNatale et al., "A high Q, large tuning range MEMS capacitor for RF filter systems," Sensors and Actuators A, vol. 103, no. 1-2, pp. 33–41, 2003.
8. K. F. Harsh, B. Su, W. Zhang, V. M. Bright, and Y. C. Lee, "Realization and design considerations of a flip-chip integrated MEMS tunable capacitor," Sensors and Actuators A, vol. 80, no. 2, pp. 108–118, 2000.
9. P. Osterberg, H. Yie, X. Cai, J. White, and S. Senturia, "Self-consistent simulation and modeling of electrostatically deformed diaphragms," Proceedings of the IEEE Micro Electro Mechanical Systems, pp. 28–32, 1994.
10. P. M. Osterberg and S. D. Senturia, "M-test: a test chip for MEMS material property measurement using electrostatically actuated test structures," Journal of Microelectromechanical Systems, vol. 6, no. 2, pp. 107–118, 1997.
11. J. R. Gilbert, R. Legtenberg, and S. D. Senturia, "3D coupled electro-mechanics for MEMS: applications of CoSolve-EM," in Proceedings of the IEEE Micro Electro Mechanical Systems Conference, pp. 122–127, February 1995.
12. M. Elwenspoek, M. Weustink, and R. Legtenberg, "Static and dynamic properties of active joints," in Proceedings of the 8th International Conference on Solid-State Sensors and Actuators and Eurosensors, pp. 412–415, June 1995.
13. F. Shi, P. Ramesh, and S. Mukherjee, "Simulation methods for micro-electro-mechanical structures (MEMS) with application to a microtweezer," Computers and Structures, vol. 56, no. 5, pp. 769–783, 1995.
14. M. A. Gretillat, Y. J. Yang, E. S. Hung et al., "Nonlinear electromechanical behavior of an electrostatic microrelay," in Proceedings of the International Conference on Solid-State Sensors and Actuators, pp. 1141–1144, June 1997.
15. E. S. Hung and S. D. Senturia, "Extending the travel range of analog-tuned electrostatic actuators," Journal of Microelectromechanical Systems, vol. 8, no. 4, pp. 497–505, 1999.
16. E. K. Chan, K. Garikipati, and R. W. Dutton, "Characterization of contact electromechanics through capacitance-voltage measurements and simulations," Journal of Microelectromechanical Systems, vol. 8, no. 2, pp. 208–217, 1999.
17. G. Li and N. R. Aluru, "Linear, nonlinear and mixed-regime analysis of electrostatic MEMS," Sensors and Actuators A, vol. 90, no. 3, pp. 278–291, 2001.
18. S. W. Chyuan, Y. S. Liao, and J. T. Chen, "An efficient method for solving electrostatic problems," Computing in Science and Engineering, vol. 5, no. 3, pp. 52–58, 2003.
19. S. W. Chyuan, Y. S. Liao, and J. T. Chen, "Computational study of variations in gap size for the electrostatic levitating force of MEMS device using dual BEM," Microelectronics Journal, vol. 35, no. 9, pp. 739–748, 2004.
20. Y. S. Liao, S. W. Chyuan, and J. T. Chen, "Efficaciously modeling the exterior electrostatic problems with singularity for electron devices," Circuits and Devices Magazine, vol. 20, no. 5, pp. 25–34, 2004.
21. D. Y. Qiao, W. Z. Yuan, and X. Y. Li, "A two-beam method for extending the working range of electrostatic parallel-plate micro-actuators," Journal of Electrostatics, vol. 65, no. 4, pp. 256–262, 2007.
22. E. J. Kansa, "Multiquadrics—a scattered data approximation scheme with applications to computational fluid-dynamics-I surface approximations and partial derivative estimates," Computers and Mathematics with Applications, vol. 19, no. 8-9, pp. 127–145, 1990.
23. E. J. Kansa, "Multiquadrics—a scattered data approximation scheme with applications to computational fluid-dynamics-II solutions to parabolic, hyperbolic and elliptic partial differential equations," Computers and Mathematics with Applications, vol. 19, no. 8-9, pp. 147–161, 1990.
24. J. G. Wang and G. R. Liu, "A point interpolation meshless method based on radial basis functions," International Journal for Numerical Methods in Engineering, vol. 54, no. 11, pp. 1623–1648, 2002.
25. Z. Elfelsoufi and L. Azrar, "Buckling, flutter and vibration analyses of beams by integral equation formulations," Computers and Structures, vol. 83, no. 31-32, pp. 2632–2649, 2005.
26. Y. C. Hon, M. W. Lu, W. M. Xue, and Y. M. Zhu, "Multiquadric method for the numerical solution of a biphasic mixture model," Applied Mathematics and Computation, vol. 88, no. 2-3, pp. 153–175, 1997.
27. G. R. Liu, X. Zhao, K. Y. Dai, Z. H. Zhong, G. Y. Li, and X. Han, "Static and free vibration analysis of laminated composite plates using the conforming radial point interpolation method," Composites Science and Technology, vol. 68, no. 2, pp. 354–366, 2008.
28. G. B. Wright, Radial basis function interpolation: numerical and analytical developments [Ph.D. thesis], University of Colorado at Boulder, 2003.
29. M. D. Buhmann, Radial Basis Functions: Theory and Implementations, Cambridge University Press, New York, NY, USA, 2003.
30. K. S. Chen and K. S. Ou, "Development and verification of 2D dynamic electromechanical coupling solver for micro-electrostatic-actuator applications," Sensors and Actuators A, vol. 136, no. 1, pp. 403–411, 2007.
31. R. W. Clough and J. Penzien, Dynamics of Structures, McGraw-Hill, New York, NY, USA, 1975.
32. S. S. Rao, Mechanical Vibrations, Addison-Wesley, New York, NY, USA, 1990.
33. D. Broek, Elementary Engineering Fracture Mechanics, Martinus Nijhoff, Leiden, The Netherlands, 1986.
34. H. Tada, P. C. Paris, and G. R. Irwin, The Stress Analysis of Cracks Handbook, Professional Engineering, 2000.
35. Y. Liu, K. M. Liew, Y. C. Hon, and X. Zhang, "Numerical simulation and analysis of an electroactuated beam using a radial basis function," Smart Materials and Structures, vol. 14, no. 6, pp. 1163–1171, 2005.
Grid algorithm not working properly
September 21st, 2013, 10:57 AM
Grid algorithm not working properly
I'm trying to translate pseudocode from a textbook into Java, but it's not working, so I'm clearly doing something wrong. Before I show you my code, I think it's best I give you the text, so
you know exactly what I'm trying to do.
Let's write down a more precise version of our algorithm:

ALGORITHM GRID(n)
The input will be a positive whole number.
The output will be an n*n grid G. G[r,c] means the number in row r, column c of the grid.
The steps to be performed are:
For each row r, numbered 1 up to n inclusive, and for each column c, numbered 1 up to n inclusive,
do this:
Use algorithm GET-GRID-ENTRY to get the number m that we should fill into row r and column c of the grid G.
Fill in m to G[r,c]

This feels like cheating a little; we'll need to specify what the GET-GRID-ENTRY algorithm is. Try that, in the more precise style we've just used for the GRID algorithm.

ALGORITHM GET-GRID-ENTRY(G,n,r,c)
The input will be a partially filled in grid G; n, the size of G; and two positive whole numbers r and c.
The output will be a single number m, the smallest number not appearing above G[r,c] in the same column, or to the left of G[r,c] in the same row.
The steps to be performed are:
Let U be a piece of scratch space with n entries, with U[0,...,n] all zero.
For each grid entry in row r, column i, with i ranging from 1 up to c-1 inclusive,
do this:
Let x be G[r,i]
Set U[x] to 1
For each grid entry in column c, row j, with j ranging from 1 up to r-1 inclusive, do this:
Let y be G[j,c]
Set U[y] to 1
For each entry in U, numbered by k from 1 up to n inclusive,
do this:
if U[k] equals zero
then stop this algorithm, and give our answer as k.
We now have a reasonably precise description of the algorithm to solve the grid problem.
These two algorithms together are supposed to create a grid of size (n*n) where each cell in that grid is the smallest possible non-negative number not already to the left or directly above that
cell. I can show some examples, if people think that will make it any clearer.
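For reference, the two algorithms above translate almost line-for-line into Java. The sketch below is a 0-indexed editorial transcription (the class and variable names are mine, not the textbook's or the poster's), shown with the grid it prints for n = 4:

```java
import java.util.Arrays;

public class GridSketch {
    // GET-GRID-ENTRY, 0-indexed: the smallest value not appearing
    // to the left of (r, c) in row r, or above it in column c.
    static int getGridEntry(int[][] g, int n, int r, int c) {
        boolean[] used = new boolean[2 * n];               // scratch space U
        for (int i = 0; i < c; i++) used[g[r][i]] = true;  // row r, left of c
        for (int j = 0; j < r; j++) used[g[j][c]] = true;  // column c, above r
        int k = 0;
        while (used[k]) k++;                               // first unmarked value
        return k;
    }

    // GRID: fill every cell via GET-GRID-ENTRY, then print the result.
    public static void main(String[] args) {
        int n = 4;
        int[][] g = new int[n][n];
        for (int r = 0; r < n; r++)
            for (int c = 0; c < n; c++)
                g[r][c] = getGridEntry(g, n, r, c);
        for (int[] row : g) System.out.println(Arrays.toString(row));
        // prints:
        // [0, 1, 2, 3]
        // [1, 0, 3, 2]
        // [2, 3, 0, 1]
        // [3, 2, 1, 0]
    }
}
```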
Anyway, here is my attempt to code the algorithm(s)
Code :
public class GRID {

    public static void main(String[] args) {
        createGrid(8);
    }

    /**
     * Method for creating grid
     * @param n Size of grid
     */
    public static void createGrid(int n) {
        //Initialize a grid of size n*n
        int array[][] = new int[n][n];
        for (int r = 0; r < array.length; r++) {
            for (int c = 0; c < array[r].length; c++) {
                //Use getGridEntry to get the required number
                int m = getGridEntry(array, n, r, c);
                //Fill in m to G[r][c]
                array[r][c] = m;
            }
        }
        //Finally, print out the grid
        for (int i = 0; i < array.length; i++) {
            for (int j = 0; j < array[i].length; j++) {
                System.out.print(array[i][j] + "\t");
            }
            System.out.println();
        }
    }

    /**
     * Method for finding grid entry
     * @param G Partially filled in grid G
     * @param n The size of G
     * @param r Row designation
     * @param c Column designation
     * @return the smallest number not already used above or to the left
     */
    public static int getGridEntry(int G[][], int n, int r, int c) {
        //Initialize the int to be returned
        int m = 0;
        //Let U be a piece of scratch space with n entries, with U[0,...,n] all zero
        int U[] = new int[n];
        //For each grid entry in row r, column i
        for (int gridEntry : G[r]) {
            //i ranging from 1 to c-1 inclusive
            for (int i = 1; i < c; i++) {
                //Let x be G[r][i]
                int x = G[r][i];
                //Set U[x] to 1
                U[x] = 1;
            }
        }
        //For each grid entry in column c row j,
        //with j ranging from 1 up to r-1 inclusive
        for (int j = 0; j < r; j++) {
            //Let y be G[j][c]
            int y = G[j][c];
            //Set U[y] to 1
            U[y] = 1;
        }
        //For each entry in U, numbered by k from 1 up to n inclusive
        for (int k : U) {
            if (k == 0) {
                return m;
            }
            m++;
        }
        return m;
    }
}
This code only ever gives me grids of zeros and ones, so I must have strayed from the instructions at some point. I know that the version in the text should definitely work. I've
clearly gone wrong somewhere, but can you spot where?
--- Update ---
I've just realized I shouldn't have put this here. :-s
September 21st, 2013, 11:09 AM
Re: Grid algorithm not working properly
I think you've done this step wrong:
For each entry in U, numbered by k from 1 up to n inclusive
do this:
ifU[k]equals zero
thenStop this algorithm, and give our answer as k.
I love your commenting and how you've used that as a basis for your coding (because that's how I work), but I think you strayed from the comments when you wrote the code for this step.
September 21st, 2013, 11:53 AM
Re: Grid algorithm not working properly
Hmm. I see what you mean. I'm not even using the number n for anything there, despite the fact it's explicitly stated. Nevertheless, I can't seem to interpret that part correctly. Of all the
changes to that part I've tried so far, this strikes me as the closest:
Code :
for(int k=0;k<n;k++){
    if(U[k]==0){
        return k;
    }
}
The above gives this as the output
What I need to get is:
Maybe I should just give my poor little brain a rest and come back to it. Maybe I'll find it was glaringly obvious all along. That's what usually happens. Thanks for the kind words about my
commenting by the way :).
September 21st, 2013, 12:07 PM
Re: Grid algorithm not working properly
I think the part of the instructions, oft repeated, "row r, numbered 1 up to n inclusive, and for each column c, numbered 1 up to n inclusive," is the key to getting the results you need. "1 to n
inclusive" is not how we typically deal with array elements, but that's what's required here, at least for the indices we use to get to those elements.
I'll play with that a bit and let you know if I strike gold.
Edit: Nope, I changed my mind. This line of the instructions:
Let U be a piece of scratch space with n entries, with U[0,...,n] all zero.
causes me to question the whole set of instructions. U as defined by that line would contain n + 1 entries, so now I'm not sure of the "1 to x, inclusive" language used throughout the instructions.
I don't trust them.
September 30th, 2013, 05:31 PM
Re: Grid algorithm not working properly
So, this problem was finally solved by replacing this:
Code :
//For each grid entry in row r, column i
for(int gridEntry: G[r]){
    //i ranging from 1 to c-1 inclusive
    for(int i=1;i<c;i++){
        //Let x be G[r][i]
        int x =G[r][i];
        //Set U[x] to 1
        U[x]=1;
    }
}
with this
Code :
//i ranging from 0 to c-1 inclusive
for(int i=0;i<c;i++){
    //Let x be G[r][i]
    int x =G[r][i];
    //Set U[x] to 1
    U[x]=1;
}
I then found out that the pattern being replicated is simply an XOR grid and the code below is all you need to get the required pattern:
Code :
for(int row = 0; row < n; ++row) {
    for(int col = 0; col < n; ++col) {
        System.out.print("\t" + (row ^ col));
    }
    System.out.println();
}
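To convince yourself that the greedy "smallest unused value" rule really does reproduce the XOR table, the two constructions can be built side by side and compared. This sketch uses my own names and is not code from the thread:

```java
public class XorGridCheck {
    // Build the n*n grid by the greedy rule: each cell gets the smallest
    // value not already used above it in its column or left of it in its row.
    static int[][] greedyGrid(int n) {
        int[][] g = new int[n][n];
        for (int r = 0; r < n; r++) {
            for (int c = 0; c < n; c++) {
                boolean[] used = new boolean[2 * n]; // values never exceed 2n-2
                for (int i = 0; i < c; i++) used[g[r][i]] = true;
                for (int j = 0; j < r; j++) used[g[j][c]] = true;
                int m = 0;
                while (used[m]) m++;                 // smallest unmarked value
                g[r][c] = m;
            }
        }
        return g;
    }

    public static void main(String[] args) {
        int n = 8;
        int[][] g = greedyGrid(n);
        boolean match = true;
        for (int r = 0; r < n; r++)
            for (int c = 0; c < n; c++)
                match &= g[r][c] == (r ^ c);
        System.out.println(match ? "greedy grid == XOR grid" : "grids differ");
        // prints: greedy grid == XOR grid
    }
}
```

The agreement holds for every n, not just powers of two, since each cell only depends on the entries above and to its left.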
Oriented equational logic programming is complete
- In Proceedings of ICALP 2001, 2001
Cited by 5 (1 self)
A crucial way for reducing the search space in automated deduction are the so-called selection strategies: in each clause, the subset of selected literals are the only ones involved in inferences.
For first-order Horn clauses without equality, resolution is complete with an arbitrary selection of one single literal in each clause [dN96]. For Horn clauses with built-in equality, i.e.,
paramodulation-based inference systems, the situation is far more complex. Here we show that if a paramodulation-based inference system is complete with eager selection of negative equations and,
moreover, it is compatible with equality constraint inheritance, then it is complete with arbitrary selection strategies. A first important application of this result is the one for paramodulation
wrt. non-monotonic orderings, which was left open in [BGNR99]. 1
, 2001
Cited by 1 (0 self)
Up to now, all existing completeness results for ordered paramodulation and Knuth-Bendix completion require the term ordering to be well-founded, monotonic and total(izable) on ground terms. For
several applications, these requirements are too strong, and hence weakening them has been a well-known research challenge. Here we
Cited by 1 (1 self)
Abstract. Common Logic (CL) is a recent ISO standard for exchanging logic-based information between disparate computer systems. Sharing and reasoning upon knowledge represented in CL require equation
solving over terms of this language. We study computationally well-behaved fragments of such solving problems and show how they can influence reasoning in CL and transformations of CL expressions. 1
, 1996
Introducing equality into standard Horn clauses leads to a programming paradigm known as Equational Logic Programming. We propose here a scheme for the evaluation of such equational logic programs
combining two powerful operational techniques: directed narrowing for the equational part and linear completion for the logical part. Thus we provide a goal-oriented solving procedure, keeping the
well-known advantages of Linear Completion (a reduced search space with a loop avoiding effect and the possibility to finitely synthesize an infinite set of answers) and of Directed Narrowing (search
space pruning).
, 2000
. We introduce a notion of modular redundancy for theorem proving. It can be used to exploit redundancy elimination techniques (like tautology elimination, subsumption, demodulation or other more
refined methods) in combination with arbitrary existing theorem provers, in a refutation complete way, even if these provers are not (or not known to be) complete in combination with the redundancy
techniques when applied in the usual sense. 1 Introduction The concept of saturation in theorem proving is nowadays a well-known, widely recognized useful concept. The main idea of saturation is that
a theorem proving procedure does not need to compute the closure of a set of formulae w.r.t. a given inference system, but only the closure up to redundancy. Examples of early notions of redundancy
(in the context of resolution) are the elimination of tautologies and subsumption. Bachmair and Ganzinger gave more general abstract notions of redundancy for inferences and formulae (see, e.g., | {"url":"http://citeseerx.ist.psu.edu/showciting?cid=908072","timestamp":"2014-04-17T01:24:12Z","content_type":null,"content_length":"24102","record_id":"<urn:uuid:7591e834-2127-4101-8e97-8f56f2a9dc28>","cc-path":"CC-MAIN-2014-15/segments/1397609526102.3/warc/CC-MAIN-20140416005206-00273-ip-10-147-4-33.ec2.internal.warc.gz"} |
Why do the angles of a quadrilateral add up to 360 degrees?
Proof: Draw any quadrilateral, then draw a line from one corner to the opposite corner. Cutting the quadrilateral along this diagonal forms two triangles, and the angles of each triangle add up to 180 degrees, so the angles of the quadrilateral add up to 180 + 180 = 360 degrees.
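For a convex quadrilateral ABCD split along the diagonal AC, the argument is a one-line computation (the angle names here are illustrative):

```latex
% Diagonal AC splits ABCD into triangles ABC and ACD,
% each with angle sum 180 degrees:
\angle A + \angle B + \angle C + \angle D
  = \underbrace{(\angle BAC + \angle ABC + \angle BCA)}_{180^\circ}
  + \underbrace{(\angle CAD + \angle CDA + \angle DCA)}_{180^\circ}
  = 360^\circ
```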
LSD Randomized Lower Bounds
Apparently a few people took issue with my (Data) Structures paper because it claimed a randomized lower bound for LSD without a proof, and this propagated to the FOCS PC. I recently received an
email from the PC asking for a sketch of the proof, pronto.
As I explained before to anon commenters on this blog, I don't know what the big fuss is about. The paper is certainly not trying to claim credit for this proof (it is not even mentioned in the
introduction). It's just trying to say "yeah, we know how to do it, and it's not interesting; don't waste your time on it." The results of the paper are interesting independently of any LSD proofs.
If you don't like my claim, you only get deterministic lower bounds (see here for deterministic LSD lower bounds, which were known since STOC 95). No big loss.
But since I had already claimed that the randomized LSD lower bounds are straightforward (once you understand the literature), I had the moral obligation to support my claim. So here is my PDF
response, in which I give a (sketch of a) proof for the randomized bounds. Don't be too harsh on this; due to the whole context I only had a few hours to write the thing, so some calculations of
constants are left out. The writing is also bad, but hopefully if you're actually interested in the topic you already know enough to follow.
I think we need to grow up as a discipline, and understand that the ideas of a paper are not measured by formula depth. Just because something is painfully complicated (which this proof is) doesn't mean it's actually interesting or cool or worthy.
14 comments:
Hi Mihai. Having looked at your paper, I'll go on record as really enjoying the insight presented. I'm a believer that the "computer science" view of the world and the "information theory" view
of the world should be brought closer together, and this paper does that nicely and shows some of the benefits of these connections.
I do think that it's reasonable, however, for a PC to be concerned when it sees unsubstantiated claims made in a paper. You suggest that if they don't like the fact that there's a claim about LSD
randomized lower bounds that doesn't seem to appear elsewhere, the PC should just ignore it. On the other hand, you could have avoided the situation by just sticking entirely to the deterministic
results, pointing out that any randomized lower bounds would carry over, and then suggest such lower bounds could/would be forthcoming. This would avoid confusion for some readers, and the PC (or
any reviewer) has a legitimate right to question whether your current writeup could confuse or possibly mislead a reader who isn't, say, a regular reader of your blog.
Also, while I understand your point of view is that this LSD randomized lower bound is "boring", given that it's useful in this setting you've established, I'd suggest it does make sense to at
some point write it down carefully.
All this shouldn't prevent the results from being published eventually. But given that you've announced them (so there's no issue of credit) and described them, I think it's well within a
reviewer's purview to argue that the paper can and should be written better before publication. (And, of course, it's within your rights to argue otherwise.)
Best of luck.
Michael, thank you for the positive comments on the paper. I find it very cool also, but I'm understandably biased :)
Regardless, let me disagree with the rest :) Conferences are not created equal. If I stay up for 3 nights in a row to submit 4 papers to a conference it is because I want them there. Rejecting a
good paper because you don't like the light in which some results are presented (which essentially boils down to PC whim) is unacceptable. It is tantamount to disregarding the author's work,
and it destroys all the motivation quickly.
Also, conferences are not created equal because this is the last one when I'm a student. :)
Mihai, I understand your argument from the point of view of the author. However, from the point of view of a PC member, would you allow a paper with unsubstantiated claims to be published, even
if these claims are not the main result of the paper? I would not, because this will lead to confusion later, and make life hard for people who wish to do follow-up work in this area. There are
already too many papers like this, in FOCS and STOC -- where results are claimed, but full proofs cannot be found, even when one contacts the authors. In addition, I know of several papers where
full proofs of some results were not presented, and later a bug was found. To prevent such contingencies, I would do what the FOCS committee did. Of course, another alternative could be a
conditional acceptance of the paper, provided the author either removes the claims about the randomized lower bounds, or provides full proofs of such bounds.
Forget the PC. Why not write the randomized proof and add it to your paper, even as an appendix if you want. The point of a paper is not to prove how smart you are to a PC, but to communicate
ideas and insights to other researchers. Also, the proof is not that complicated.
I do not know how to do the randomized proof. If, indeed, randomized lower bounds for asymmetric LSD yield all these other lower bounds, isn't the proof for LSD the one thing that I *really* want
to know if I'm thinking about randomized data structure lower bounds for some other problem?
Even in the deterministic case, if the proof is just a "one paragraph counting argument," why not include it for the sake of completeness so that I can read the damn thing without having to go
through all the overhead in the paper you reference?
1. To recast my earlier arguments in your terms, an acceptance with the comment "I really hate the claim in those two paragraphs, please remove" is one thing, rejecting a paper is quite another.
I disagree with Michael that you have the right to reject papers for trivialities.
2. I did just write the randomized proof, and I explained the deterministic one carefully in my original blog post. But no, the randomized proof will not fit in any reasonable size appendix
(within 10 pages).
3. Hastad's PCP or Raz's parallel repetition are key tools in hardness of approximation, but few people have any clear sense of how they're proved. Here, I would expect that most people in data
structures will have little interest in the randomized LSD proof, except to know that it exists.
On second thought, I'm really disturbed by y'all saying that I should be conservative and not tell people a straight story about a result (I know how to do it, it's not interesting). Would you
rather have the facts misrepresented? Perhaps then I should write another paper on LSD? (since it's now anointed as an important problem...)
If we want to do serious research we can't allow ourselves to slip slowly towards becoming a bunch of gutless conservatives, who put formalism above anything else.
I disagree with Michael that you have the right to reject papers for trivialities.
I doubt they will reject the paper. I suspect it would be accepted even if you had replied "Fine, just pay attention to the deterministic lower bounds." But if, for some reason, the reviewers
expressed concerns about incompleteness, I don't think it's out of line for the PC to ask you. (In fact, it's nice of them to give you the chance to counter a possibly questionable reviewer.)
I did just write the randomized proof, and I explained the deterministic one carefully in my original blog post. But no, the randomized proof will not fit in any reasonable size appendix (within
10 pages).
Your paper does not contain a link to your blog post. We cannot all track the entire arc of your research career and electronic persona in order to have context for every result. As the reader, I
appreciate a (central!) one paragraph argument being included for the sake of completeness.
Even a sketch is helpful, and for reasons that you don't envision at the time. Again, the fact that so many problems are LSD-hard means that there must be something important going on in that
proof, even if no new ideas are required to write it down. I would at least appreciate your paper having a guide to figuring out what that something is.
Also, on principle, no one can claim a randomized lower bound via a reduction from LSD unless the hardness of LSD is established somewhere (and that place cannot be "Mihai's head").
Hastad's PCP or Raz's parallel repetition are key tools in hardness of approximation, but few people have any clear sense of how they're proved.
So? Neither author suggested that we should "trust them" about the proof, because knowing the details is not important to use the result. Moreover, it inevitably arises (and happened in both
these cases) that at some point one has to dig into the details to make progress, and they have to be there for that 1% of cases where someone needs them.
On a second thought, I'm really disturbed by y'all saying that I should be conservative and not tell people a straight story about a result (I know how to do it, it's not interesting).
Dar? I didn't see anyone assert that you should claim the randomized LSD lower bound is important or hard or new. Just that you should prove it if you want to write "Theorem: ..." in your paper.
I often write "The following argument is essentially known, and follows by combining [31] with [12], using the framework of ..." followed by 5 pages of technical arguments. If you don't want to
interrupt the flow of the exposition, you can always put it in an appendix.
James, I agree with you on essentially all counts. (I think) they can't reject it regardless of all this randomized nonsense. The randomized lower bound should be written (and will be in the
fall). And PCs should talk to the authors freely (and I have on the SWAT PC, as detailed in a memorable post on this blog).
The best temporary solution that I found was to claim the result in the paper so people can cite it. That's not giving me extra credit, but I'm accepting the burden of writing it without claiming
credit -- and in the mean time people can cite the conference version. So again, what's the big fuss about? There's zero reason why anybody would want to rebut my claim.
This all feels like an automatic reaction from people that have been preconditioned by the formalist creed: "Aha, no proof! No proof! All hell will break loose! Let's use the power of the PC to
avert the disaster!"
I agree that my reaction is too strong for the present case, and I apologize for that. It's just that I've been getting annoyed by hearing these kind of things over the years.
I agree with Mihai (and James).
The important thing is to have an actual proof for the main contribution of a paper.
The "peripheral" results -- even if they are technical -- are of minor importance.
In particular, this paper should (and would) get into FOCS, even if the randomized LSD lower bound claim was not there in the first place.
I disagree with Michael that you have the right to reject papers for trivialities.
Mihai, can we avoid putting words in my mouth? (Now, and in the future.) I suppose that's your interpretation of what I said, but it's not what I said.
If there's a fundamental point of disagreement, it seems to be that, in my opinion, a reviewer has the right to recommend a paper be rejected if the writing is substandard beyond some threshold.
(If you re-read my last paragraph, that seems pretty clearly to be my point.) I'm not sure if you disagree with that, since you seem intent on arguing with something I never said. But perhaps you
do. If you do, then we indeed have a point of disagreement we could discuss.
I am not arguing that your paper is below that threshold. But I can understand why an unsubstantiated claim that could be significant to the paper would be of concern to the PC. You are arguing
that the claim is substantiated (in this blog and your reply, if not in the paper itself), and that's it's actually not significant. I'd imagine if the PC agrees with that assessment, these
concerns will disappear. But their opinion may differ in the end. In short, they may find it less trivial than you. Please note, I am not offering an opinion one way or another, I'm just
explaining the framework you're working in as I understand it, and if you read my original post, I'm offering suggestions as to how such issues could be avoided in the future.
Hi Michael! I'm not really trying to put words into your mouth. Sorry if it feels like that.
It's just that in my world view, we have "big ideas" and "trivialities." (yes, I get complaints about my world view all the time)
If you don't get the idea because of writing quality, fine. If you get it and you don't think it's "big enough," fine. But if you get the idea, and it's big, the rest are trivialities.
You can strongly advise the author to explain this connection better, cite this paper, clarify this proof etc etc. But that's not valid reason for even thinking about rejection. PCs should not be
adversarial. It would be a great contribution to this community if the image of the PC were improved to a "friendly counselor" from that of an institution giving mean feedback and looking for any
reasons to reject.
I guess this is our point of disagreement. Maybe I can convince you to switch to this view before STOC'09 :)
Even though you are completely right about the main point of the paper being independent of the randomized LB (and so did the PC think, it seems), your treatment of it was very unprofessional:
(1) You mention it as just another theorem. To the reader it certainly seems like you are claiming credit for it.
(2) Due to the importance of this problem (even before this paper, and doubly so now), an improved LB is of significant interest -- it has to be written clearly and carefully.
(3) The fact that you claim a LB BEFORE a full proof is WRITTEN is very problematic: lower bounds are known to often "fall apart" once written down. In fact, finding the exact notions, definitions, and lemmata for a FORMAL proof is a very large part of the LB work (this is somewhat different than for, say, algorithms).
In short, this was "your bad". Suggestion: just get over it and publish a full proof of the lower bound in ECCC asap.
Alright, alright. But I must disagree with (1). Claiming credit in CS means writing an introduction meant to bring the readers to tears, which explains how your result is a fulfilment of an
important Biblical prophecy.
My theorem is not even mentioned in the introduction, and I never once say that the problem is important or interesting.
I suggest renaming the
"(Data) STRUCTURES" paper to
"On the Power of LSD". =) | {"url":"http://infoweekly.blogspot.com/2008/06/lsd-randomized-lower-bounds.html?showComment=1213912020000","timestamp":"2014-04-21T15:48:16Z","content_type":null,"content_length":"75798","record_id":"<urn:uuid:0356bd3a-8990-4bf0-8fd4-f78759587dfa>","cc-path":"CC-MAIN-2014-15/segments/1397609540626.47/warc/CC-MAIN-20140416005220-00175-ip-10-147-4-33.ec2.internal.warc.gz"} |
Prides Crossing Math Tutor
Find a Prides Crossing Math Tutor
...Tutoring also provides me with a vehicle for keeping my mind active and challenged over the summer. Patience and relentless optimism are my two greatest assets in teaching and tutoring. I have
been teaching Algebra 1 to 7th and 8th grade students for three years. My state teaching certification is for grades 5-8 math and science, inclusive of Algebra 1.
13 Subjects: including algebra 2, precalculus, trigonometry, probability
...Then you solve those equations. You will get what you are looking for. I can help you understand concepts well and improve problem solving skills.
5 Subjects: including algebra 1, algebra 2, precalculus, physics
...I am a consulting scientist/business owner with more than 20 years experience using the Microsoft Office suite, including Outlook. I have extensive experience using all Outlook features,
including setting up Exchange, POP, and IMAP accounts; calendaring and task scheduling. I am a consulting sc...
41 Subjects: including algebra 1, chemistry, English, reading
...Because I teach technology I also tutor in Microsoft Office, Google Apps, etc. I am available days, evenings and weekends for tutoring in the summer and on weekends during the school year. I am
a Massachusetts licensed teacher in the elementary grades of 1 through 6.
14 Subjects: including prealgebra, reading, ESL/ESOL, grammar
...I am qualified to tutor Precalculus because I have studied all the courses that precede calculus: algebra I, algebra II, geometry, trigonometry, and analytic geometry. In college I took
Calculus I, II, III,and IV. I have taken at least 30 math courses beyond calculus on the undergraduate level and the graduate level.
10 Subjects: including algebra 1, algebra 2, geometry, prealgebra | {"url":"http://www.purplemath.com/prides_crossing_math_tutors.php","timestamp":"2014-04-18T21:40:02Z","content_type":null,"content_length":"23923","record_id":"<urn:uuid:27d3b1a2-4685-458a-b6e4-69d4359387ca>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.9/warc/CC-MAIN-20140416005215-00445-ip-10-147-4-33.ec2.internal.warc.gz"} |
Re: Extending my question. Was: The relational model and relational algebra - why did SQL become the industry standard?
From: Mikito Harakiri <mikharakiri_at_ywho.com> Date: Mon, 17 Feb 2003 15:42:26 -0800 Message-ID: <0xe4a.16$7H6.165@news.oracle.com>
"Steve Kass" <skass_at_drew.edu> wrote in message news:b2rqnp$1tj$1_at_slb2.atl.mindspring.net...
> >The biggest problem with (x,y)={{x},{x,y}} definition is that it's not
> >associative:
> >
> ><a,<b,c>> != <<a,b>,c>
> >
> What isn't associative? What does <> represent?
> What does it mean for a definition to be associative?
> Associativity is a property of binary operations. What
> binary operation are you talking about here?
I'm sorry: copy and paste typo. I meant
(a,(b,c)) != ((a,b),c)
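The failure of associativity is easy to check mechanically; here is a quick sketch (ours, not from the thread) using Python frozensets to stand in for sets:

```python
def kpair(x, y):
    # Kuratowski ordered pair: (x, y) = {{x}, {x, y}}
    return frozenset([frozenset([x]), frozenset([x, y])])

left = kpair(kpair(1, 2), 3)    # ((a, b), c)
right = kpair(1, kpair(2, 3))   # (a, (b, c))
print(left == right)            # False -- the pairing is not associative
```

The two nested pairs produce structurally different sets, so they compare unequal even though the "flattened" triples look the same.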
> Huh? Many definitions are possible, of course. We could represent
> (x,y,z) in any of these ways, I think (I'm sure I messed up the
> brackets, though):
> {{1,{x}},{2,{y}}, {3,{z}}
You have to define 1, 2, 3 first. Assuming the von Neumann numerals 1 = {{}} and 2 = {{},{{}}}, and substituting into your definition, you'll get yet another definition for an ordered pair:
(a,b) = {{{{}},{a}},{{{},{{}}},{b}}}
> {{x},{y,{y}},{z,{z},{{z}}}}
> {{x},{x,{y,{y,z}}}} -- This is the one I'd call the most natural
> {{x,{x,y}},{{x,{x,y}},z}} -- This is the alternative that bothers you..
This is just a series of random definitions, well spiced with curly bracketed symbols. Or is there any wonderful theory about it? Where in math did you see "We have 10 unrelated alternative definitions, pick any one you like"? Received on Mon Feb 17 2003 - 17:42:26 CST
Handbook of mathematics
Calculator Usage, Special Keys
The Decimal Numbering System
Adding Whole Numbers
Subtracting Whole Numbers
Multiplying Whole Numbers
Dividing Whole Numbers
Hierarchy of Mathematical Operations
Average Value
Proper and Improper Fractions
Equivalent Fractions
Addition and Subtraction of Fractions
Least Common Denominator Using Primes
Addition and Subtraction
Algebraic Laws
Solutions to Algebraic Equations
Algebraic Equations
Types of Algebraic Equations
Linear Equations
Solving Fractional Equations
Ratio and Proportion
Types of Quadratic Equations
Solving Quadratic Equations
Taking Square Root
Factoring Quadratic Equations
The Quadratic Formula
Solving Simultaneous Equations
Basic Approach to Solving Algebraic Word Problems
Steps for Solving Algebraic Word Problems
Word Problems Involving Money
Problems Involving Motion
Solving Word Problems Involving Quadratic Equations
Calculator Usage, Special Keys
Log Rules
Common and Natural Logarithms
Natural and Common Log Operations
The Cartesian Coordinate System
Cartesian Coordinate Graphs
Logarithmic Graphs
Graphing Equations
Interpolation and Extrapolation
Important Facts
Area and Perimeter of Triangles
Rectangular Solids
Right Circular Cone
Right Circular Cylinder
Pythagorean Theorem
Inverse Trigonometric Functions
Radian Measure
Frequency Distribution
The Mean
Normal Distribution
Imaginary Numbers
Complex Numbers
The Matrix
Addition of Matrices
Multiplication of a Scaler and a Matrix
Multiplication of a Matrix by a Matrix
The Determinant
Using Matrices to Solve System of Linear Equation
Dynamic Systems
Differentials and Derivatives
Graphical Understanding of Derivatives
Application of Derivatives to Physical Systems
Integral and Summations in Physical Systems
Graphical Understanding of Integral
Download : pdf1 pdf2
Modeling of a Photovoltaic Array
According to the theory of semiconductors, an ideal PV cell can be considered as a current source in parallel with one or two diodes, depending on the manufacturing technology and semiconducting materials [11], [13]. However, to represent the practical behavior of a PV cell, observations show that the model should include two additional elements: a parallel resistance and a series resistance [13]. Based on this model, the fundamental equation describing the current-voltage characteristics of a cell is given as:
I = Iph – Is x [exp(q x (V + I x Rs) / (A x k x T)) – 1] – (V + I x Rs) / Rp    (1)
- Iph = photo-generated current
- Is = saturation current of the diode
- Rs = cell series resistance
- Rp = cell shunt resistance
- A = diode quality factor
- T = cell operating temperature, Kelvin
- q = electronic charge (1.6 x 10^-19 C)
- k = Boltzmann's constant (1.38 x 10^-23 J/K)
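Because I appears on both sides of (1) (through the V + I x Rs term), the equation is implicit and must be solved numerically at each voltage. A minimal sketch, using bisection on the monotone residual; all parameter values below are illustrative, not the paper's measured module data:

```python
import math

# Illustrative single-diode values -- NOT the paper's measured module data
q, k = 1.602e-19, 1.381e-23          # electronic charge, Boltzmann constant
Iph, Is, Rs, Rp, A, T = 4.2, 1e-7, 0.01, 50.0, 1.0, 298.15
Vt = A * k * T / q                   # diode thermal voltage A*k*T/q

def cell_current(V):
    """Solve (1) for I at terminal voltage V by bisection.
    The residual g(I) is strictly decreasing in I, so the root is unique."""
    def g(I):
        return Iph - Is * (math.exp((V + I * Rs) / Vt) - 1) - (V + I * Rs) / Rp - I
    lo, hi = -Iph, 2 * Iph
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if g(mid) > 0 else (lo, mid)
    return 0.5 * (lo + hi)

print(cell_current(0.0))             # short-circuit current, slightly below Iph
```

Sweeping V from 0 toward the open-circuit voltage with this solver traces out the I-V curve that the model parameters define.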
According to (1), the five parameters Iph, Is, Rs, Rp and A which are regarded hereafter as model parameters, should be known in order to construct the PV model. Model parameters are not usually
provided by manufacturers. They can be either determined from the data provided in the datasheet or obtained through experiments. Nevertheless, model parameters do not have constant values as they
all vary depending on temperature and solar irradiation levels. In other words, each set of temperature and solar irradiance corresponds to a unique set of model parameters. Therefore, in order to
model PV characteristics under various environmental conditions, the effects of solar irradiance and temperature on PV parameters should be taken into account. One approach that has been widely
proposed in the literature is to mathematically formulate the relationship of model parameters with temperature and solar irradiance [9], [12], [14]. Thus far, several equations have been developed.
The most popular and accepted one is the equation derived for the photo-generated current, Iph, which relates it to both temperature and solar irradiance:
Iph = (E / En) x [Iph(n) + k x (T – T0)]    (2)
where Iph(n) is the photo-generated current at the standard condition (25 °C and 1000 W/m²). T and T0 are the cell actual and standard temperatures (K), respectively. Similarly, E and En are the actual and standard irradiance levels (W/m²) on the PV surface.
Unlike the photo-generated current, the diode saturation current, Is, has been reported to have a non-linear dependence on temperature. Despite its strong dependence on temperature, it is not
considerably affected by irradiance changes:
Is = Is(0) x (T / T0)^3 x exp[(q x Eg / (A x k)) x (1 / T0 – 1 / T)]    (3)
where Is(0) is the saturation current at the standard temperature and Eg is the band-gap energy of the semiconductor. Another electrical parameter that can be formulated is Voc:
Voc = Voc(n) + k(T – T0) (4)
In comparison to above equations, there has been less success in presenting consistent equations relating Rs and Rp to the environmental parameters. In [3] the following equations are presented:
where k1–k4 are constants and Rp(n) is the nominal parallel resistance at the standard condition. In many modeling techniques […], Rs and Rp are assumed to be constant under all environmental conditions, i.e. equal to their nominal values. Such an assumption simplifies the modeling technique at the cost of accuracy. On the other hand, an accurate model is needed to analyze the PV characteristics under partially shaded conditions, in which the PV array is modeled under both non-shaded and shaded conditions. Therefore, a comprehensive modeling of the PV parameters is required to represent PV characteristics under various weather conditions.
In order to be worthwhile, the comprehensive modeling of PV should include another crucial stage: experimental validation. Since the proposed technique in this work was initially developed based on experimental study, it was able to generate satisfactory results in agreement with experimental results.
Modeling of PV cell based on experimental study
Formulation of model parameters based on environmental parameters
In order to model PV parameters based on environmental parameters, temperature and irradiance level, the above-mentioned equations need to be verified through experimental study. The experiments have
been carried out on a single PV module subject to various temperatures and uniform solar irradiation levels. Using an algorithm described in Section III.B, the model parameters
corresponding to experimental data have been determined. The model parameters together with temperature and irradiance values have been applied to the equations and ultimately the accuracy of each
equation has been investigated. Table I provides eight sets of experimental data including the model parameters determined for each set. Based on the available experimental data, equations (2) and
(4) have been validated and the parameters associated to each equation are determined. However, equations (3), (5) and (6) failed to satisfy the experimental conditions. Instead, the equations
relating Voc, Impp and Vmpp to temperature and solar irradiance were proved to be effective. Impp and Vmpp are formulated in the same form as Isc and Voc, respectively. In other words, (2) can be
applied to Iph, Isc and Impp while (4) can be applied to Voc and Vmpp.
Table 1: Experimental data for various temperature and solar irradiance conditions
| T (°C) | Isc (A) | Impp (A) | Vmpp (V) | Voc (V) | Iph (A) | Is (μA) | Rs (mΩ) | Rp (Ω) |
Equation (2) can be rewritten in the short form of:
Iph = k0 x E x (1 + k1 x T)    (7)
where k0 and k1 are constants determined from the experimental data.
In order to find the coefficient values k0 and k1, two different sets of experimental data Iph, E and T need to be substituted into (7). From the available experimental data presented in Table I, all possible combinations of two sets have been attempted to ensure the effectiveness of equation (7) under various conditions. The resulting k1 has a mean value of -0.00247. Using this value for k1 in (7), k0 is determined by solving the equation for each set of data. Table II shows the results for k0. According to Table II, k0 varies in a small range and can hence be approximated by its mean value, 21.648. The complete form of the equation can be written as:
Iph = 21.648 x E(1 – 0.00247 x T) (8)
A similar procedure can be followed for the Isc and Impp equations. The results for k0 corresponding to each equation are also shown in Table II. Incorporating the determined coefficients into (7) leads to:
Isc = 4.2348 x E(1 – 0.000015 x T) (9)
Impp = 7.104851 x E(1 – 0.0015 x T) (10)
The equation used for Voc and Vmpp is the simplified form of (4), expressed as:
Voc = Voc(n) + k(T – T0) = k0 + k1T (11)
As with the aforementioned equations for currents, the coefficients k0 and k1 can be determined by substituting two sets of experimental data Voc and T into (11). This method has been attempted for all possible combinations of two out of the eight sets. The resulting mean value of k1 equals -0.09019, which, upon substitution in (11), leads to a single value of k0 for each set of data. The results are shown in Table II. According to Table II, the mean value of k0 equals 93.1442, and hence (11) can be written as:
Voc = 93.1442 – 0.09019 x T (12)
The same procedure can be followed for Vmpp. Consequently, equation (13) can be written for Vmpp:
Vmpp = 118.9274 – 0.2153 x T (13)
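The fitted expressions (8)–(13) are straightforward to evaluate; a small sketch (the function name is ours, and E and T are assumed to be expressed in the same units used when the coefficients were fitted):

```python
def electrical_params(E, T):
    # Empirical fits from the text; units are assumed to match the
    # experimental data the coefficients were fitted on.
    Iph  = 21.648   * E * (1 - 0.00247  * T)   # (8)
    Isc  = 4.2348   * E * (1 - 0.000015 * T)   # (9)
    Impp = 7.104851 * E * (1 - 0.0015   * T)   # (10)
    Voc  = 93.1442  - 0.09019 * T              # (12)
    Vmpp = 118.9274 - 0.2153  * T              # (13)
    return Iph, Isc, Impp, Vmpp, Voc

Iph, Isc, Impp, Vmpp, Voc = electrical_params(1.0, 298.0)
print(Impp < Isc, Vmpp < Voc)   # True True: the MPP lies inside the I-V curve
```

A quick sanity check like the one printed above (Impp below Isc, Vmpp below Voc) is useful whenever the fits are re-derived for a different module.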
With formulas (9)–(13) developed, the PV electrical parameters can now be calculated for any temperature and solar irradiance level. The model parameters now need to be determined from these electrical parameters. This can be easily fulfilled by developing an algorithm capable of determining the model parameters from the electrical ones: voltages and currents. This algorithm was used earlier to determine the model parameters associated with each set of experimental data.
Table 2: Coefficient k0 based on experimental study
| k0 - ISC | k0 - IMPP | k0 - VMPP | k0 - VOC |
Determination of PV model parameters
In this section, the PV model parameters (Iph, Is, Rs, Rp and A) are determined from the electrical parameters (Isc, Impp, Vmpp and Voc). The objective is to find the model parameters such that the resulting I-V and P-V curves accurately match the experimental curves, specifically at three key points: short circuit, maximum power and open circuit. In order to satisfy this criterion, the model parameters are obtained by solving the fundamental equation (1) at the key points. The electrical parameters for a particular weather condition are substituted into these equations. The values of Iph, Is, Rs and Rp are then determined through an iterative procedure. Since A is an empirical value that expresses the quality of the diode used in the PV cell, an initial value can be chosen in order to obtain the other model parameters. Depending on the PV device structure and material, A usually varies in the range of 1 to 1.5. In this paper, A is assumed to be equal to 1. This value can be modified later if fine-tuning of the model to the experimental data is needed.
In order to determine the other four parameters, four equations are required. In addition to the equations derived for the short-circuit, maximum-power and open-circuit conditions, the fourth equation requires that the model's maximum power occur at the same voltage as the measured Vmpp.
The equations for the three key points can be written as:
Isc = Iph – Is x [exp(q x Isc x Rs / (A x k x T)) – 1] – Isc x Rs / Rp    (14)
Impp = Iph – Is x [exp(q x (Vmpp + Impp x Rs) / (A x k x T)) – 1] – (Vmpp + Impp x Rs) / Rp    (15)
0 = Iph – Is x [exp(q x Voc / (A x k x T)) – 1] – Voc / Rp    (16)
The fourth equation can be derived using the fact that the derivative of power with respect to voltage should be zero at the maximum power point, Vmpp and Impp:
dP/dV = 0 at (Vmpp, Impp)    (17)
The derivative of power can be written as:
dP/dV = d(V x I)/dV = I + V x dI/dV    (18)
According to (18), the derivative of current with respect to the voltage should be found first:
dI/dV = –(1 + Rs x dI/dV) x [Is x (q / (A x k x T)) x exp(q x (V + I x Rs) / (A x k x T)) + 1 / Rp]    (19)
Solving the above equation for dI/dV and substituting the value in (18) results in:
dP/dV = I – V x [Is x (q / (A x k x T)) x exp(q x (V + I x Rs) / (A x k x T)) + 1 / Rp] / [1 + Rs x (Is x (q / (A x k x T)) x exp(q x (V + I x Rs) / (A x k x T)) + 1 / Rp)]    (20)
Substituting Vmpp and Impp in (20) leads to the fourth equation:
Impp – Vmpp x [Is x (q / (A x k x T)) x exp(q x (Vmpp + Impp x Rs) / (A x k x T)) + 1 / Rp] / [1 + Rs x (Is x (q / (A x k x T)) x exp(q x (Vmpp + Impp x Rs) / (A x k x T)) + 1 / Rp)] = 0    (21)
The iterative procedure is illustrated in Fig. 1. As shown, Impp(cal) is calculated using (15) and compared to its experimental value. Depending on the difference, Rs is slowly incremented or decremented. Based on the new value of Rs and the above equations, Rp, Iph and Is can be calculated. These values are substituted into (15) in the next iteration, and the same procedure repeats. The iteration continues until the difference between Impp(cal) and Impp reaches zero and all the values become stable. Unlike the algorithm proposed in [11], in which Rs is incremented starting from zero, in this paper an initial value for Rs has been assumed based on the equation proposed in [2]:
Assuming an initial value for Rs reduces the number of iterations needed to reach the final answer. It also has a great impact on the convergence of the algorithm. The other three parameters, Rp, Iph and Is, are set to initial values [2]:
Depending on the software in which the algorithm is implemented, the number of iterations may need to be specified as well. In MATLAB, the algorithm reaches a stable final answer in fewer than 100 iterations, whereas in PSCAD there is no need to define a number and it automatically reaches stable answers within less than 0.1 second.
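The increment/decrement rule described above is just a sign-driven sweep. A toy sketch of that structure (ours — the error function here is a linear stand-in for the real Impp(cal) – Impp mismatch, which would require solving (15) at each step):

```python
def sweep(error, x0=0.0, step=1e-4, n=100000):
    """Sign-driven sweep: nudge x up when error(x) > 0, down otherwise,
    until the error vanishes -- a toy stand-in for the Rs adjustment loop."""
    x = x0
    for _ in range(n):
        e = error(x)
        if abs(e) < 1e-9:
            break
        x += step if e > 0 else -step
    return x

# Pretend the Impp(cal) - Impp mismatch behaves linearly in Rs near the root:
print(sweep(lambda Rs: 0.5 - 10.0 * Rs))   # settles near Rs = 0.05
```

The real loop replaces the lambda with "compute Rp, Iph, Is from the key-point equations for this Rs, then evaluate Impp(cal) from (15)"; the update logic itself is unchanged.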
Incomplete Beta Function Ratios
TOMS708 is a FORTRAN77 library which computes the incomplete Beta function ratio, by Armido Didonato and Alfred Morris, Jr.
11 January 2006: Thanks to Ian Smith for pointing out that my earlier difficulties with this program were caused simply by using the wrong set of machine constants!
The original, true, correct version of ACM TOMS Algorithm 708 is available through ACM: http://www.acm.org/pubs/calgo or NETLIB: http://www.netlib.org/toms/index.html.
TOMS708 is available in a JAVA version and a FORTRAN77 version and a FORTRAN90 version.
Related Data and Programs:
ASA063, a FORTRAN77 library which evaluates the incomplete Beta function.
ASA109, a FORTRAN77 library which inverts the incomplete Beta function.
ASA226, a FORTRAN77 library which evaluates the CDF of the noncentral Beta distribution.
ASA310, a FORTRAN77 library which computes the CDF of the noncentral Beta distribution.
BETA_NC, a FORTRAN90 library which evaluates the CDF of the noncentral Beta distribution.
TEST_VALUES, a FORTRAN77 library which stores a few values of various mathematical functions.
TOMS179, a FORTRAN77 library which is an earlier ACM TOMS algorithm which also approximates the incomplete Beta function.
Armido Didonato, Alfred Morris, Jr.
1. Barry Brown, Lawrence Levy,
Certification of Algorithm 708: Significant Digit Computation of the Incomplete Beta Function Ratios,
ACM Transactions on Mathematical Software,
Volume 20, Number 3, 1994, pages 393-397.
2. Armido Didonato, Alfred Morris,
Algorithm 708: Significant Digit Computation of the Incomplete Beta Function Ratios,
ACM Transactions on Mathematical Software,
Volume 18, Number 3, 1992, pages 360-373.
Source Code:
Examples and Tests:
TOMS708_PRB1 is the test distributed with the code.
TOMS708_PRB2 is a more extensive test.
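The library computes the incomplete Beta function ratio Ix(a,b) and its complement. For positive integer parameters the ratio reduces to a binomial tail sum, which gives a quick standard-library cross-check in Python (the helper name is ours, and this is not how TOMS708 computes it — the library uses series, continued-fraction and asymptotic expansions to hold accuracy over the full parameter range):

```python
from math import comb

def betainc_int(a, b, x):
    """Regularized incomplete beta ratio I_x(a, b) for positive integers a, b,
    via the binomial tail-sum identity (helper name ours)."""
    n = a + b - 1
    return sum(comb(n, j) * x**j * (1 - x)**(n - j) for j in range(a, n + 1))

w = betainc_int(2, 3, 0.4)         # the ratio I_x(a, b)
w1 = 1.0 - w                       # the complementary ratio
print(round(w, 6), round(w1, 6))   # 0.5248 0.4752
```

Such a closed-form check is handy when validating a build of the library at a few integer parameter points.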
List of Routines:
• ALGDIV computes LN(GAMMA(B)/GAMMA(A+B)) when 8 <= B.
• ALNREL evaluates the function LN(1 + A).
• APSER evaluates I(1-X)(B,A) for A very small.
• BASYM performs an asymptotic expansion for IX(A,B) for large A and B.
• BCORR evaluates a correction term for LN(GAMMA(A)).
• BETA_CDF_VALUES returns some values of the Beta CDF.
• BETA_LOG_VALUES returns some values of the Beta function for testing.
• BETALN evaluates the logarithm of the Beta function.
• BFRAC: continued fraction expansion for IX(A,B) when A and B are greater than 1.
• BGRAT uses asymptotic expansion for IX(A,B) when B < A.
• BPSER evaluates IX(A,B) when B <= 1 or B*X <= 0.7.
• BRATIO evaluates the incomplete Beta function IX(A,B).
• BRCMP1 evaluates EXP(MU) * (X**A*Y**B/BETA(A,B)).
• BRCOMP evaluates X**A*Y**B/BETA(A,B).
• BUP evaluates IX(A,B) - IX(A+N,B), where N is a positive integer.
• ERF evaluates the error function.
• ERF_VALUES returns some values of the ERF or "error" function for testing.
• ERFC1 evaluates the complementary error function.
• ESUM evaluates EXP(MU+X).
• EXPARG returns the largest value for which EXP can be computed.
• FPSER uses a series for IX(A,B) with B < min(eps,eps*A) and X <= 0.5.
• GAM1 evaluates 1/GAMMA(A+1) - 1 for -0.5 <= A <= 1.5
• GAMLN evaluates LN(GAMMA(A)) for positive A.
• GAMLN1 evaluates LN(GAMMA(1 + A)) for -0.2 <= A <= 1.25
• GAMMA_INC_VALUES returns some values of the incomplete Gamma function.
• GAMMA_LOG_VALUES returns some values of the Log Gamma function for testing.
• GRAT1 evaluates P(A,X) and Q(A,X) when A <= 1.
• GSUMLN evaluates LN(GAMMA(A + B)) for 1 <= A <= 2 and 1 <= B <= 2.
• IPMPAR sets integer machine constants.
• PSI evaluates the Digamma function.
• PSI_VALUES returns some values of the Psi or Digamma function for testing.
• R4_EPSILON returns the round off unit for floating arithmetic.
• REXP evaluates the function EXP(X) - 1.
• RLOG1 evaluates the function X - LN(1 + X).
• SPMPAR returns single precision real machine constants.
• TIMESTAMP prints out the current YMDHMS date as a timestamp.
You can go up one level to the FORTRAN77 source codes.
Last revised on 08 January 2008. | {"url":"http://people.sc.fsu.edu/~jburkardt/f77_src/toms708/toms708.html","timestamp":"2014-04-19T10:07:08Z","content_type":null,"content_length":"9151","record_id":"<urn:uuid:59a98168-987b-43a4-b068-70a7f1ec205a>","cc-path":"CC-MAIN-2014-15/segments/1397609537097.26/warc/CC-MAIN-20140416005217-00375-ip-10-147-4-33.ec2.internal.warc.gz"} |
Truth of the Poisson summation formula
The Poisson summation formula says, roughly, that summing a smooth $L^1$-function of a real variable at integral points is the same as summing its Fourier transform at integral points (after suitable normalization). Here is the wikipedia link.
For many years I have wondered why this formula is true. I have seen more than one proof, I saw the overall outline, and I am sure I could understand each step if I go through them carefully. But
still it wouldn't tell me anything about why on earth such a thing should be true.
But this formula is exceedingly important in analytic number theory. For instance, in the book of Iwaniec and Kowalski, it is praised to the high heavens. So I wonder: what is the rationale for why such a result should be true?
Poisson summation is the Fourier inversion formula for the circle in disguise. See mathoverflow.net/questions/89504/… and Darsh Ranjan's answer below. – Phil Isett Feb 26 '12 at 6:48
2 Answers
It is a special case of the trace formula. Both sides are the trace of the same operator.
A new MO record? A question asked, answered within four minutes or so, and accepted within another five (or less, I can't see the acceptance timestamp). – Harald Hanche-Olsen Feb 7
'10 at 23:33
Yes, I too would imagine it is within less than five minutes. Maybe three, or four. – Feb7 Feb 7 '10 at 23:44
As seen on Wikipedia: "When Γ is the cocompact subgroup Z of the real numbers G=R, the Selberg trace formula is essentially the Poisson summation formula." -- en.wikipedia.org/wiki/Selberg_trace_formula – Konrad Voelkel Feb 8 '10 at 0:36
On the other hand, one mustn't think that Poisson summation comes "for free" via the trace formula. The classical proof of Poisson summation uses at one point the standard (but needs a fair bit of justification if one is trying to do everything purely from first principles) proof that (+) a continuous periodic function on R can be written as sum_{m in Z} a_m e^{2 pi i m x}. Poisson summation now follows rather easily. On the other hand, the trace formula for R/Z gives (sthg)=(sthg), but to deduce Poisson summation you have to compute the (something)s and I think you end up having to invoke (+) anyway. – Kevin Buzzard Feb 8 '10 at 10:04
Right. So somehow my point is simply that if one needs to know the spectral decomposition of L^2(R/Z) to get Poisson summation from the trace formula, then in truth the trace formula is not actually saving you any work, because there is a completely elementary (as in "a few lines of undergraduate manipulation, and interchanging a sum and an integral") derivation of it if you assume that sum_n f(x+n) in L^2(R/Z) has a Fourier decomposition. That's all I'm pointing out. – Kevin Buzzard Feb 9 '10 at 12:00
In what follows, I'll use the convention $$ \hat{f}(\xi) = \int_{-\infty}^{\infty} f(x)e^{-2\pi i x \xi}dx,$$ so that $$ f(x) = \int_{-\infty}^{\infty} \hat{f}(\xi)e^{2\pi i x \xi}d\xi.$$
I like the following interpretation of Poisson summation, which also gives a generalization: Consider the Dirac comb distribution $C(x) = \sum_{n\in \mathbb{Z}} \delta(x-n)$. This is a
tempered distribution, so it has a Fourier transform. In fact, it is its own Fourier transform. To justify this, I'm going to give a very nonrigorous argument. But if intuition is the main
goal, then I think it will help. First, note that $C(x)$ is periodic with period 1. Thus, its "Fourier transform" is actually a Fourier series: its support is in $\mathbb{Z}$. This follows
by noting that
$$C(x) = \int_{-\infty}^{\infty} \hat{C}(\xi)e^{2\pi i x \xi}d\xi, \qquad C(x) = \sum_{n\in \mathbb{Z}}a_n e^{2\pi i n x},$$
where the first equation is the Fourier inversion formula and the second is the Fourier series for $C$. It follows by uniqueness that $\hat{C}(\xi) = \sum_{n\in \mathbb{Z}}a_n \delta(\xi - n)$. On the other hand, the (inverse) Fourier transform of $\hat{C}$ is also supported on $\mathbb{Z}$, so $\hat{C}$ is also periodic with period 1. Thus, all the $a_n$ are the same: $$\hat{C}(\xi) = a\sum_{n\in \mathbb{Z}}\delta(\xi - n),$$ where $a$ is some scalar. It's not hard to see that the scalar has to be 1.
To derive Poisson summation from this, use the convolution theorem: let $f$ be any function. On the one hand, $$(f*C)(x) = \sum_{n\in \mathbb{Z}} f(x+n).$$ On the other hand, we can use the
convolution theorem: $$\widehat{(f*C)}(\xi) = \hat{f}(\xi)\hat{C}(\xi) = \hat{f}(\xi)\sum_{n\in \mathbb{Z}} \delta(\xi-n) = \sum_{n\in \mathbb{Z}} \hat{f}(n)\delta(\xi-n).$$ The last sum
gives the Fourier series of the periodic function $f*C$: $$(f*C)(x) = \sum_{n\in \mathbb{Z}} \hat{f}(n)e^{2\pi i n x}.$$ Plugging in $x=0$ gives the Poisson summation formula, QED. But the
result for general $x$ is interesting as well: given a function $f$, you can obtain a periodic function $g(x)$ by (a) adding up $f(x+n)$ over all integers $n$, or (b) taking the Fourier
transform of $f$ at integer frequencies and making that the Fourier series of $g$. The result is that (a) and (b) give the same function.
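The summation formula is easy to check numerically for a rapidly decaying function. Below is a minimal sketch (the Gaussian test function, the constant `A`, and all names are our own choices, not part of the answer above), using the same transform convention $\hat{f}(\xi) = \int f(x)e^{-2\pi i x \xi}dx$:

```python
import math

# Check sum_n f(n) == sum_n f_hat(n) for f(x) = exp(-pi * A * x^2).
# Under the convention hat{f}(xi) = int f(x) exp(-2 pi i x xi) dx,
# the transform of exp(-pi * A * x^2) is A^(-1/2) * exp(-pi * xi^2 / A).
A = 2.0  # any A > 0 works; A != 1 keeps f and f_hat visibly different

def f(x):
    return math.exp(-math.pi * A * x * x)

def f_hat(xi):
    return math.exp(-math.pi * xi * xi / A) / math.sqrt(A)

N = 50  # truncation point; both tails are far below machine precision
lhs = sum(f(n) for n in range(-N, N + 1))
rhs = sum(f_hat(n) for n in range(-N, N + 1))
```

The two sums agree to roughly machine precision, as Poisson summation (here equivalent to the theta transformation formula) predicts.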
+1. I always found that distribution proofs are the most beautiful proofs of classical statements and this one is no exception. It lets you do everything as you want to do and you get the answers to technical questions like "Does this function have a Fourier transform? May I interchange these limits? Does this series converge?" all for free. – Johannes Hahn Feb 8 '10 at
One should be careful to note that being supported on ${\mathbb Z}$ is not enough to conclude $C(x) = \sum_n a_n \delta(x - n)$; for example, take $E(x) = \sum_n \frac{d}{dx} \delta(x - n)$.
On the other hand, the property that $e^{2 \pi i \xi} \hat{C}(\xi) = \hat{C}(\xi)$ does imply the desired representation. Similarly, the integer periodicity of $\hat{C}$ follows from how
$e^{2 \pi i x} C(x) = C(x)$. Also, instead of convolving, using the fact that $\langle \hat{f}, C \rangle = \langle f, \hat{C} \rangle$ is slightly more direct ($C$ and $\hat{C}$ are real and even). – Phil Isett
Feb 26 '12 at 6:44
Balls in a bag
March 3rd 2011, 11:09 AM
Balls in a bag
There are 10 red balls, 10 green balls and 6 white balls.
Two balls picked at random - what is the probability that they are of different colors?
So the probability of the first ball is 1/26 (if I'm right?); I don't get how to find the probability of the second one.
March 3rd 2011, 11:20 AM
The probability of each ball being picked is $\frac{1}{26}$, which is a start.
The way I would solve this would be to find the probability of two red balls, the probability of two green balls, and the probability of two white balls. If you add these, you will find the
probability of receiving two balls that are the same colour.
I'm going to assume (as you haven't specified) that the balls are not replaced, and that this is therefore conditional probability.
P(2 red) = $\frac{10}{26}\times\frac{9}{25} = \frac{9}{65}$. This is because there are $10$ red balls out of the $26$ total during the first selection, and then there are $9$ red balls left out
of the $25$ remaining balls during the second selection.
P(2 green) is the same as P(2 red) because there are the same number of green balls as red balls, so P(2 green) $=\frac{9}{65}$
P(2 white) = $\frac{6}{26}\times\frac{5}{25}=\frac{3}{65}$
Do you understand so far?
So what is the total probability of receiving two balls that are the same colour?
And, therefore, how can you work out from that the probability of receiving two balls which are not the same colour? Show your working if you get stuck.
March 3rd 2011, 11:36 AM
mr fantastic
Draw a tree diagram.
March 3rd 2011, 11:55 AM
So I did 1 − P(same colour) = 1 − (9/65 + 9/65 + 3/65) and got 44/65. That seems right.
Thank you.
March 3rd 2011, 12:02 PM
Probability both are white is $P(W_1\cap W_2)=\frac{6}{26}\cdot\frac{5}{25}$.
So find $P(R_1\cap R_2)+P(G_1\cap G_2)+P(W_1\cap W_2)$
March 3rd 2011, 12:29 PM
Archie Meade
You could have tried...
What is the probability of getting
(1) a red with a green
(2) a red with a white
(3) a green with a white
There are $\binom{26}{2}$ ways to choose 2 balls. This is the denominator for your probability fraction.
There are 10(10)=100 ways to get a green with a red
There are 10(6)=60 ways to get a red with a white
There are 10(6)=60 ways to get a green with a white
The sum of the above three products is the numerator of your probability fraction.
March 3rd 2011, 01:13 PM
I went through, both of your methods. Since I already did it using the first one, i'll stick to it.
But thanks you very much :) it really helped me understand better.
March 3rd 2011, 06:42 PM
[1]Prob white : 6/26 = 3/13 ; [2]prob not white : 20/25 = 4/5 ; 3/13 * 4/5 = 12/65
[1]Prob not white: 20/26 = 10/13 ; [2]prob not same: 16/25 ; 10/13 * 16/25 = 32/65
12/65 + 32/65 = 44/65
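All of the arithmetic in this thread can be confirmed by brute-force enumeration. A quick sketch in exact rational arithmetic (variable names are ours):

```python
from fractions import Fraction
from itertools import combinations

# 10 red, 10 green, 6 white balls, as in the problem.
balls = ['R'] * 10 + ['G'] * 10 + ['W'] * 6

# Enumerate all C(26, 2) = 325 unordered draws of two distinct balls.
pairs = list(combinations(balls, 2))

p_same = Fraction(sum(1 for a, b in pairs if a == b), len(pairs))
p_diff = 1 - p_same
```

This reproduces both routes taken above: `p_same` comes out to 21/65 (= 9/65 + 9/65 + 3/65) and `p_diff` to 44/65.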
A network with tunable clustering, degree correlation and degree distribution, and an epidemic thereon
March 2013
Volume 66
Issue 4-5
pp 979-1019
A random network model which allows for tunable, quite general forms of clustering, degree correlation and degree distribution is defined. The model is an extension of the configuration model, in
which stubs (half-edges) are paired to form a network. Clustering is obtained by forming small completely connected subgroups, and positive (negative) degree correlation is obtained by connecting a
fraction of the stubs with stubs of similar (dissimilar) degree. An SIR (Susceptible $\rightarrow $ Infective $\rightarrow $ Recovered) epidemic model is defined on this network. Asymptotic
properties of both the network and the epidemic, as the population size tends to infinity, are derived: the degree distribution, degree correlation and clustering coefficient, as well as a
reproduction number $R_*$ , the probability of a major outbreak and the relative size of such an outbreak. The theory is illustrated by Monte Carlo simulations and numerical examples. The main
findings are that (1) clustering tends to decrease the spread of disease, (2) the effect of degree correlation is appreciably greater when the disease is close to threshold than when it is well above
threshold and (3) disease spread broadly increases with degree correlation $\rho $ when $R_*$ is just above its threshold value of one and decreases with $\rho $ when $R_*$ is well above one.
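For readers unfamiliar with stub pairing, here is a minimal sketch of the basic configuration model that the paper extends. The clustering, degree-correlation, and rewiring extensions described in the abstract are not reproduced; all names below are ours:

```python
import random

def configuration_model(degrees, rng=random):
    """Pair the stubs (half-edges) uniformly at random.

    Each vertex v contributes degrees[v] stubs; a uniformly random
    perfect matching of the stubs gives the edge list. Self-loops and
    multiple edges can occur and are simply kept, as in the usual
    multigraph version of the model.
    """
    if sum(degrees) % 2:
        raise ValueError("the total degree must be even")
    stubs = [v for v, d in enumerate(degrees) for _ in range(d)]
    rng.shuffle(stubs)
    return [(stubs[i], stubs[i + 1]) for i in range(0, len(stubs), 2)]

edges = configuration_model([3, 2, 2, 2, 1])
```

By construction every realization has exactly the prescribed degree sequence, which is what makes the degree distribution "tunable" in this family of models.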
Within this Article
1. Introduction
2. The network model and the epidemic
3. Properties of the network model
4. Epidemics on network without rewiring
5. Epidemics on rewired networks
6. Numerical examples
7. Discussion
8. References
□ Branching process
□ Configuration model
□ Epidemic size
□ Random graph
□ SIR epidemic
□ Threshold behaviour
□ 92D30
□ 05C80
□ 60J80
Author Affiliations
□ 1. School of Mathematical Sciences, University of Nottingham, University Park, Nottingham, NG7 2RD, UK
□ 2. Department of Mathematics, Stockholm University, Stockholm, 106 91, Sweden
□ 3. Mathematics Education Centre, Loughborough University, Loughborough, Leicestershire, LE11 3TU, UK
What is the equation of the line that passes through (12, 4) and is perpendicular to the graph of y = –1/3x – 2?
Look at your slope of that line. The slope is -1/3. In order for your other line to be perpendicular, it needs to be the "negative inverse." That means you have to flip the -1/3 and then multiply
it by -1. -1*(-3/1) = 3 So you have your point (12,4) and your slope m=3 Plug these values into y=mx+b to get your b-value (y-intercept) After that, you'll have your m-value and your b-value.
Plug those (only those) into y=mx+b and you'll have the eqn of your perpendicular line.
-4, 6
Okay, I'll start from here: So you have your point (12,4) and your slope m=3. Plug these values into y=mx+b to get your b-value (y-intercept): 4 = 3*(12) + b, so b = 4 - (3)*12 = -32. Thus m=3 and b=-32. Plug them into y=mx+b and you have the equation of your line: y = 3x - 32. You're not looking for a point. You're finding an equation.
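The steps worked through above can be checked with exact arithmetic; a tiny sketch (variable names are ours):

```python
from fractions import Fraction

m_given = Fraction(-1, 3)   # slope of y = -1/3 x - 2
m_perp = -1 / m_given       # negative reciprocal: 3
x0, y0 = 12, 4              # the given point

b = y0 - m_perp * x0        # from y = m x + b:  4 = 3*12 + b
# Perpendicular line: y = 3x - 32
```

The product of the two slopes is -1, the defining property of perpendicular lines.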
Do you know how to plug numbers into formulas?
Read through this whole thing then. You won't be able to do these problems if you don't know how to graph or plug numbers in. http://tutorial.math.lamar.edu/Classes/Alg/Lines.aspx
Can a butterfly start a hurricane?
Jul 25, 2007 To the editor: "The butterfly effect" is the proposition that the flapping of a butterfly's wings could affect weather half a world away. It was popularized to explain the generation of
tropical waves in the Atlantic: a butterfly flapping its wings in equatorial Africa can turn an ordinary tropical wave into the beginnings of a hurricane. Weather forecasting these days is largely
about computer modeling, and people like Herb Hilgenberg, the free weather forecaster who goes by the name of his boat, Southbound II, or Bob McDavitt in the South Pacific, who help cruising sailors
choose when to make a passage, rely on a favorite model, along with their own observations and those of the sailors they speak to, to make predictions.
One scientist who had a major impact on meteorology was Edward Lorenz. In 1959 while on the faculty at MIT's Department of Meteorology, he developed a computer model of an idealized atmosphere to run
on the primitive computer in his office. Lorenz plugged a dozen variables into so-called filtered equations, which recalculated them and moved the numbers along in time. It took a minute to simulate
one day in the atmosphere, and by plotting the variables Lorenz could draw a line between the points, creating a graph that showed a simple atmosphere changing in a way that repeated itself, but not
exactly, much as our weather does. Since a periodic system eventually repeats itself exactly, Lorenz's model was cyclical but non-periodic.
Lorenz wanted to rerun a segment of the model to examine it in greater detail, so he stopped the computer, typed in numbers the computer had calculated several sections back, and started it going. He
walked off to get a cup of coffee. By the time he returned the model had generated two months of numbers but the numbers didn't resemble the first run. Lorenz suspected a physical problem with his
computer and went back over the machine's trail to locate the mistake. Instead of an obvious break, the first few days were indeed an exact repeat, but after that the model began to generate errors
that multiplied until by the end of the second month the graph looked nothing like the original he'd intended to duplicate. At first the numbers differed from the original in tiny amounts, but these
kept increasing and doubled in size about every four days.
When Lorenz took the numbers to rerun the model, he'd rounded them off before entering them into the computer. Something in the simple model was amplifying the tiny round-off errors. If the real
atmosphere behaved like this, weather prediction would be impossible unless one knew the exact temperature, pressure, humidity, etc. at every point in the atmosphere.
Lorenz knew he was on to something. He called what he discovered in his model "sensitive dependence on initial conditions." Ultimately, Lorenz came up with the theory of chaos. In simple terms, chaos is the theory of systems that only appear to be random but are in fact deterministic: predictable in principle if one knows the exact circumstances that generate them. The trouble is that with chaotic systems, even the tiniest difference leads to enormous discrepancies down the line. The atmosphere is chaotic.
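Sensitive dependence is easy to demonstrate on a computer. The sketch below uses the three-variable system Lorenz published in 1963, which is far simpler than the twelve-variable model described above but shows the same effect; the step size, perturbation, and run length are arbitrary choices of ours:

```python
def lorenz_step(state, dt=0.01, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    # One crude Euler step of the Lorenz-63 equations.
    x, y, z = state
    return (x + dt * sigma * (y - x),
            y + dt * (x * (rho - z) - y),
            z + dt * (x * y - beta * z))

a = (1.0, 1.0, 1.0)
b = (1.0, 1.0, 1.0 + 1e-9)   # a round-off-sized perturbation
gap_early = None
for step in range(3000):      # 30 model time units
    a, b = lorenz_step(a), lorenz_step(b)
    if step == 99:            # after 1 time unit
        gap_early = max(abs(p - q) for p, q in zip(a, b))
gap_late = max(abs(p - q) for p, q in zip(a, b))
```

The gap stays tiny at first, then grows roughly exponentially (Lorenz saw his model's discrepancies double about every four simulated days) until the two runs are completely uncorrelated.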
Even as meteorologists gain greater understanding of large cycles that drive our weather - El Nino, for example - and the consequences in terms of what we know as weather, the smaller features can
bump a system out of its predicted course.
Fortunately for day sailors and racers, short-term forecasts in populated areas are very good. Unfortunately for cruisers, a weather window may only be good for three days. Every day added to the
prediction offers more chance for errors to compound.
Ann Hoffner and her husband Tom Bailey voyage aboard their Peterson 44 Oddly Enough.
Mathematical Paintings of Crockett Johnson
Selected Works of David Crockett Johnson
Barnaby, New York, NY: Henry Holt and Company, 1943.
Barnaby and Mr. O’Malley, New York: Henry Holt and Company, 1944.
Harold and the Purple Crayon, New York: Harper 7 Row, 1955.
“A Geometrical Look at vp,” Mathematical Gazette, 54 (Feb 1970): 59-60.
“On the Mathematics of Geometry in My Abstract Paintings,” Leonardo, 5 (1972): 97-101.
“A construction for a regular heptagon,” Mathematical Gazette, 59 (March 1975): 17-21.
Papers of Crockett Johnson, Mathematics Collections, National Museum of American History, Smithsonian Institution.
Correspondence in the Harley Flanders Papers, Mathematics Collections, National Museum of American History.
Correspondence in the Ad Reinhardt Papers, Archives of American Art, Smithsonian Institution.
Selected Works about Crockett Johnson
Stephanie Cawthorne and Judy Green, “Cubes, Conic Sections, and Crockett Johnson,” Convergence, vol. 11, 2014. http://www.maa.org/publications/periodicals/convergence/
Stephanie Crawthorne and Judy Green, “Harold and the Purple Heptagon,” Math Horizons (September 2009): 5-9.
Philip Nel, “Crockett Johnson and the Purple Crayon: A Life in Art,” Comic Art, 5 (2004): 2-18.
Philip Nel. Crockett Johnson and Ruth Krauss: A Biography, Jackson: University Press of Mississippi, in preparation.
James B. Stroud, “Crockett Johnson's Geometric Paintings,” Journal of Mathematics and the Arts, 2 #2 (June 2008): 77-99.
For a more detailed bibliography and further information, see the Crockett Johnson Web site created and maintained by Philip Nel.
For a description of American mathematics and science education at the time of Crockett Johnson’s paintings, see the Museum's Web site: “Mobilizing Minds: Teaching Math and Science in the Age of
This introduction and the accounts of Crockett Johnson paintings given below have benefited from insights of Uta C. Merzbach, Judy Green, J. B. Stroud, Philip Nel, Mark Kidwell, Emmy Scandling, and
Joan Krammer.
According to the classical Greek tradition, the quadrature or squaring of a figure is the construction, with the aid of only straight edge and compass, of a square equal in area to that of
the figure. Finding the area bounded by curved surfaces was not an easy task. The parabola and other conic sections had been known for almost a century before Archimedes wrote a short
treatise called Quadrature of the Parabola in about 240 BC. This was the first demonstration of the area bounded by a conic section.
In his proof, Archimedes first constructed a triangle whose sides consisted of two tangents of a parabola and the chord connecting the points of tangency. He then showed that the area under
the parabola (shown in white and light green in the painting) is two thirds of the area of the triangle that circumscribes it. Once the area bounded by the tangent could be expressed in terms
of the area of a triangle, it was easy to construct the corresponding square. Crockett Johnson’s painting is based on diagrams illustrating a discussion of Archimedes’s proof given by H.
Dorrie (Figure 54) or J. R. Newman (Figure 9).
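Archimedes's ratio is easy to verify numerically in a concrete case. In the sketch below, the parabola y = x², the tangency points (−1, 1) and (1, 1), and the grid size are our own choices, not taken from the painting:

```python
# Tangent to y = x^2 at (a, a^2) is y = 2ax - a^2, so the tangents at
# (-1, 1) and (1, 1) meet at (0, -1).  The tangent triangle has vertices
# (-1, 1), (1, 1), (0, -1): base 2 (the chord y = 1) and height 2.
triangle_area = 0.5 * 2 * 2

# Area of the parabolic segment between the chord and the curve,
# by a midpoint Riemann sum of (1 - x^2) over [-1, 1].
n = 100_000
h = 2.0 / n
segment_area = sum((1.0 - (-1.0 + (i + 0.5) * h) ** 2) * h for i in range(n))

ratio = segment_area / triangle_area   # Archimedes: exactly 2/3
```

The segment area comes out to 4/3, which is two thirds of the tangent triangle's area of 2, exactly as the proof described above asserts.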
This oil painting is #43 in the series, and is signed: CJ69. It has a gray background and a gray frame. It shows a triangle that circumscribes a portion of a parabola. The large triangle is
divided into a triangle in shades of light green, which touches a triangle in shades of dark green. The region between the triangles is divided into black and white areas. A second painting
in the series, #78 (1979.1093.52) illustrates the same theorem.
References: Heinrich Dorrie, trans. David Antin, 100 Great Problems of Elementary Mathematics: Their History and Solution (1965), p. 239. This volume was in Crockett Johnson’s library and his
copy is annotated.
James R. Newman, The World of Mathematics (1956), p. 105. This volume was in Crockett Johnson's library. The figure on this page is annotated.
Currently not on view
date made
Johnson, Crockett
ID Number
catalog number
accession number
Data Source
National Museum of American History, Kenneth E. Behring Center
According to the classical Greek tradition, the quadrature or squaring of a figure is the construction, with the aid of only straight edge and compass, of a square equal in area to that of
the figure. But finding the area bounded by curved surfaces was not an easy task. The parabola and other conic sections had been known for almost a century before Archimedes wrote a short
treatise called Quadrature of the Parabola in about 240 BC. This was the first demonstration of the area bounded by a conic section. In his proof, Archimedes first constructed a triangle
whose sides consisted of two tangents of a parabola and the chord connecting the points of tangency. He then showed that the area under the parabola (shown in gray and black in the painting)
is two thirds of the area of the triangle which circumscribes it. Once the area bounded by the tangent could be expressed in terms of the area of a triangle, it was easy to construct the
corresponding square. Crockett Johnson’s painting follows two diagrams illustrating a discussion of Archimedes’s proof given by Heinrich Dorrie (Figure 54).
This oil or acrylic painting on masonite is #78 in the series and is signed “CJ67” in the bottom left corner. It has a gray wooden frame. For a related painting, see #43 (1979.1093.31).
References: Heinrich Dorrie, trans. David Antin, 100 Great Problems of Elementary Mathematics: Their History and Solution (1965), p. 239. This volume was in Crockett Johnson's library and the
diagram in his copy is annotated.
James R. Newman, The World of Mathematics (1956), p. 105. This volume was in Crockett Johnson's library. The figure on this page (Figure 9) is annotated.
The construction of regular polygons using straightedge and compass alone is a problem that has intrigued mathematicians from ancient times. Crockett Johnson was particularly interested in
the construction of regular seven-sided figures or heptagons, which require not only a compass but a marked straight edge. The mathematician Archimedes reportedly proposed such a
construction, which was included in a treatise now lost. Relying heavily on Thomas Heath's Manual of Greek Mathematics, Crockett Johnson prepared this painting.
Archimedes had reduced the problem of constructing a regular heptagon to that of finding two points that divided a line segment into two mean proportionals. He then used a construction somewhat
like that of the painting to find a line segment divided as desired. Crockett Johnson's papers include not only photocopies of the relevant portion of Heath, but his own diagrams.
The painting is #104 in the series. It is in acrylic or oil on masonite, and has purple, yellow, green, and blue sections. There is a black wooden frame. The painting is unsigned and undated.
Relevant correspondence in the Crockett Johnson papers dates from 1974.
References: Heath, Thomas L., A Manual of Greek Mathematics (1963 edition), pp. 340–2.
Crockett Johnson, "A construction for a regular heptagon," Mathematical Gazette, 59 (March 1975): pp. 17–18.
date made
ca 1974
The Princeton Mathematics Community in the 1930s
Transcript Number 2 (PMC2)
© The Trustees of Princeton University, 1985
(with ALBERT TUCKER)
This is an interview of Valentine Bargmann at Princeton University on 12 April 1984. The interviewers are William Aspray and Albert Tucker.
Aspray: Could you tell me something about your earlier career and about how you came to Princeton?
Bargmann: I was born in Berlin of Russian Jewish parents. I went to school in Berlin and started studying at Berlin University. In 1933 Hitler came to power, and it was to clear to our family that we
would have to emigrate as fast as possible. My parents went to Lithuania, where my father had his business, and I went to Switzerland and continued studying at the University of Zurich. In 1936 I got
my Ph.D. degree. I could not stay in Switzerland; they didn't accept any immigrants. Instead I went to Lithuania to visit my parents. Luckily a close friend of my parents was a secretary at the
American Consulate. By his intervention I got an American visa almost immediately. I hardly knew of any case where it went so easily.
I did not have any connection with American mathematicians or physicists, but before coming to America I asked my professors in Switzerland for letters of recommendation, in particular Wolfgang
Pauli, who had been in Princeton in 1935. He also advised me to consult Rabi in New York, which I did. Rabi said, "I would suggest that you go to Ann Arbor. There is now a summer symposium, where you
will meet many American physicists." I did this, and indeed I met many American physicists, among them [Gregory] Breit, who was then at Wisconsin. I had a letter to Breit, but he told me that he could
not help me, he didn't have any funds. He suggested I try the Institute for Advanced Study. Well, after the symposium was over I traveled to Princeton. It was just the time of the summer vacation.
Nevertheless it happened that John von Neumann was in town, and Miss Blake, his secretary, suggested I talk with him. This I did, and he accepted me, so to speak, on the spot. I had seen him in
Berlin, but I was not acquainted with him.
Aspray: What did he arrange for you?
Bargmann: I became a member of the Institute. He didn't offer me a stipend, but I didn't want everything. It was marvelous. After I was accepted I began to get acquainted with the members of the Institute.
Tucker: Was this in the period when the Institute's School of Mathematics was still in the old Fine Hall?
Bargmann: Yes, it was the summer of '37.
Aspray: Weren't you at one time one of Einstein's assistants?
Bargmann: Yes. It was like this. When I came, his assistant was Peter Bergmann. During that academic year I got acquainted with Einstein, and Einstein suggested that I come to see them. "Them" is
Peter Bergmann and the Polish physicist Leopold Infeld. I got acquainted with what they were doing, and I could participate in their discussions. Then Einstein asked me if I would like to do it on a
regular basis. The Institute had already offered me a stipend for the coming year. Things were looking up. It is difficult to say whether there was any difference between the position of Peter
Bergmann and my position. We were simply collaborators. After a year (I think Peter Bergmann had already been assistant for two years or so) we switched so that I was officially the assistant. It
didn't mean anything substantial.
Tucker: What was Einstein and company doing at that time?
Bargmann: There were two projects. One was the problem of motion, the other was unified field theory. The first had been started with Leopold Infeld and Banesh Hoffmann, and it was a question in
general relativity. The second was the construction of a unified field theory.
Tucker: That was Einstein's chief interest at that time.
Bargmann: It was a major interest, which would occupy Einstein to the end of his life. But the problem of motion had also occupied him for many years and had, in Einstein's view, not been adequately solved.
Aspray: Were there other people, besides Einstein, Bergmann, Infeld, and you, in the Princeton community working on these problems?
Bargmann: No.
Aspray: Was there much interaction between these people and the other people at Fine Hall who might be considered physicists?
Tucker: Bob Robertson, Ed Condon.
Bargmann: Condon wasn't around then. Robertson, of course, knew about it, but I think he was more interested in the impact of Einstein's work on astrophysics. Occasionally we talked to Weyl to get
advice about the solutions of partial differential equations.
Tucker: Von Neumann?
Bargmann: I don't think so.
Aspray: Wigner?
Bargmann: It was a period when Fine Hall and the Institute were still relatively small, and we talked to each other constantly. I told people what I was doing, they told me what they were doing, but
they didn't try to produce new ideas for the solution of my problems.
Aspray: We ask these questions because neither of us has much knowledge of what the mathematical physicists were doing at that time, so we're interested in the whole range of mathematical physics at
that time, besides in your own and Einstein's activities.
Bargmann: The wider picture is this. Wigner was still in Wisconsin. He produced a very important piece of work on the Lorentz group, and we discussed this in great detail. Later on, at the suggestion
of Pauli, who in the meantime had arrived in Princeton, I started to work on the Lorentz group too. So we had some very strong common interests.
Tucker: John Wheeler?
Bargmann: Wheeler at that time was doing nuclear physics.
Tucker: He got into that, as I recall, by working with Niels Bohr.
Bargmann: Yes, he had been a visitor to Niels Bohr's Institute in Copenhagen, and was later called to Princeton. When I came to Princeton in '37, Wheeler was here as a visiting professor. He
gave an excellent course on nuclear physics.
From then on he stayed in Princeton. In 1939 Bohr visited the Institute. He and Wheeler wrote a famous paper on uranium (specifically, uranium fission).
Aspray: At the time that Fine Hall was opened, how close were the mathematical and physics communities?
Bargmann: The way I remember it, the focal point was the common room in Fine Hall.
Tucker: It was the focal point of most things.
Bargmann: Whenever I had a moment free, not teaching or not going to lectures or whatever it was, I went to the common room, sat down, and asked what was going on. Therefore one knew very well what
people were doing. I found it extremely attractive.
Aspray: Did the physicists in Palmer Lab come over to the common room regularly?
Bargmann: Maybe not so regularly, but they did. There was formally tea for the mathematicians and coffee for the physicists in Fine Hall every afternoon. I knew quite a number of the physicists.
Aspray: I understand from talking to Al that some people object to the term 'mathematical physicist', that they would rather be considered a theoretical physicist.
Tucker: I mentioned this about Wigner; he would much prefer to be called a theoretical physicist than a mathematical physicist.
Bargmann: I see. It is true there are some people who will choose a problem, not because it is of particular interest within physics, but for its mathematical interest. This Wigner doesn't like, so
he stresses the distinction.
Tucker: I can remember that he declined to take some students for theses because he felt they didn't have sufficient grounding in physics.
Bargmann: Yes, maybe.
Aspray: What's behind my question is the fact that the Jones Professorship was in mathematical physics, and a stipulation made when Fine Hall was built was that the mathematical physicist be over
with the mathematicians.
Bargmann: Yes. Now I want to emphasize that in the past, maybe in the dim past, people talked about mathematical physics and even had professorships in mathematical physics. My teacher in Zurich,
Gregor Wentzel, before he came to Zurich had a professorship in mathematical physics, I think in Leipzig. He knew mathematics very well, but his interest was physics, there was no doubt about it. In
the same way, in England physics was called natural philosophy for quite some time.
Aspray: Did the way things were arranged at Princeton have a bearing on this? Did the physicists somehow resent this involvement with the math department?
Bargmann: I don't know how sensitive I am in that respect.
Aspray: To ask a positive question, did this arrangement bring about more interaction with mathematicians than might otherwise have been the case?
Bargmann: I don't think a title means much. What means very much is the living tradition.
Tucker: Before Fine Hall existed, the mathematics department, as far as it had a home, was in Palmer. When I arrived here in 1929, there was no Fine Hall; there were a couple of rooms in Palmer. The
ones that you encounter on your right as you enter. The mathematics seminary, as it was called, was there. That was the library and a place for the mathematicians to hang out. The mathematics
secretary shared an office with the physics secretary. Down at the end of the hall was Palmer 222, where all the mathematics graduate courses were taught. That's where Veblen had his seminar. Veblen
had an office up on the next floor, and Alexander had an office there. I don't know whether any of the other mathematics professors had offices.
I don't know when all this started. I think it started when Palmer was built. I think that until that time the mathematics seminary had been in the University library, the old library. It was then
moved because the physicists wanted to have their books down in Palmer. The mathematicians went along, so that it was a marriage that had existed for a long time. It was certainly one of the
stipulations of the mathematicians, Veblen and Eisenhart, that when Fine Hall was built it should be immediately adjacent to Palmer. There were, of course, other reasons for mathematicians and
physicists being good friends, but at Princeton this goes way back.
Bargmann: What you just said concerns what I wanted to say. There was a tradition, a living tradition, that mathematicians and physicists talk to each other, and it can't be done artificially. I
think at Chicago Eckhart Hall and at San Francisco, math and physics are in the same building, without any considerable effect.
In '39 the European war started, and more and more mathematicians were removed for World War II. I was approached in 1941, I think by H.P. Robertson, to give a course in methods of mathematical
physics. It appealed to me very much, and the next year I was asked if I would give a course on electrodynamics. So I was a kind of unofficial lecturer for the physics department.
Tucker: Methods of mathematical physics was actually a joint course.
Bargmann: Yes, but the next one wasn't; it was a course in the physics department. Now this continued. But during this time I was no longer an alien. In '43 I became an American citizen and could do
war work.
Tucker: You also did some Army specialized training programs in teaching in '43-'44.
Bargmann: In '43, yes. But I started to do war work with von Neumann. When the war was over, he was already deeply interested in electronic computers, and he asked me whether I would like to
continue. But I wanted to get back to physics, so I asked the physics department whether they could give me an appointment.
Aspray: What was the war work you did with von Neumann?
Bargmann: Gas dynamics. I was not sufficiently high up to know exactly what it was used for.
Tucker: Well, it was certainly used in connection with building the atom bomb.
Bargmann: Yes, but with the particular problem we worked on we weren't yet very far, so to what extent it was used, I can't say. But it's quite possible it was used.
I stayed in Princeton for a while. Then Elliot Montroll, who was a close friend of mine and who had gone to the University of Pittsburgh, recommended me for a position at the mathematics department
of the University of Pittsburgh. So I went to Pittsburgh for one term. It was a tenured position, but after one term was over Princeton invited me back to a tenured position, and since then I've been
at Princeton. This was 1948.
Tucker: This was the year that Robertson left?
Bargmann: Yes, essentially I got his position.
Aspray: I understand from Professor Tucker that you were responsible for most of the instruction in mathematical physics, at least in terms of course work, from then on.
Bargmann: Well, at the beginning. Then, of course, the younger generation came up, in particular Arthur Wightman.
Tucker: But you did this even after you retired.
Bargmann: That is correct.
Tucker: Do you remember particularly any of the graduate students working in mathematical physics? Feynman, for example.
Bargmann: Feynman I knew not so much as a student, but as a colleague. I mean he was then already doing his own work.
Tucker: We were talking yesterday with John Tukey, and he remembers Feynman well from the time Feynman was living at the Graduate College. At the Graduate College he mixed with the mathematicians,
rather than with the physicists.
Bargmann: That's interesting, because he also wanted to make sure ...
Tucker: To be called a theoretical physicist, rather than a mathematical physicist.
Bargmann: I remember the graduate students, in particular the graduate students in the first year.
Aspray: Who were they?
Bargmann: In the first course I had Philip Stehle, who is now at Pittsburgh. He is almost retired; he has been chairman of the department. Then there was a mathematician whom I lost sight of, [Paul] Olum.
Tucker: He went to Cornell. I think that he is now at the University of Oregon; he is indeed president of the University of Oregon.
Aspray: Who else was there at the time?
Tucker: Did William Sharp work with you?
Bargmann: Not with me. He took courses of mine.
Tucker: He was with Wigner.
Aspray: Was John Bardeen here at that time?
Bargmann: No.
Tucker: That was earlier. Bardeen took his degree I think in '36. I have fun with people occasionally by telling them that someone who was a graduate student in mathematics at Princeton has won two
Nobel Prizes. Of course I get the answer, "That's impossible; the Nobel Prize isn't given in mathematics." And I say, "But he got it in physics." When he was a graduate student at Princeton, he was
in mathematics. He held a JSK Fellowship in his first year, and he took the general examination. I've talked to him about this. He is proud of the fact that he was a Princeton mathematician at that time.
Aspray: I was wondering if you could, considering all the people at the Institute and at the University in mathematical physics, compare that community in the late 1930s with the other centers of
mathematical physics, both in the U.S. and in Europe.
Bargmann: This won't be easy, because I haven't traveled. Unless one lives in a community, one misses out on quite a bit.
Aspray: What were the other major centers of research?
Bargmann: I would say that after the war there were, of course, many more than just Princeton.
Aspray: But in 1937-1940 what were the other centers?
Bargmann: 1937-1940 is a different time, because the war started and the war-related work started.
Aspray: So it might be said that Princeton was one of the few really active groups, just because of all the disruption because of the war.
Bargmann: But Princeton wasn't active. Wigner wasn't there. Robertson wasn't there. Von Neumann wasn't there.
Tucker: You're not talking about the period after 1940, but the period from '37 to '40.
Bargmann: But Robertson was already traveling.
Aspray: Was von Neumann also traveling by that time?
Bargmann: I would think so.
Aspray: That I wasn't aware of. Can you tell me a bit about the differences in the personal styles of work between some of the major figures that were here during that period?
Bargmann: That is a hard question. You see, I wasn't working with von Neumann on his mathematical problems. What I was doing was work that had to be done. "Here is the problem; see what you can do
with it." It was not work as it arises in physics or mathematics.
Aspray: How much of a physicist was von Neumann as far as you could tell?
Bargmann: If one applies an appropriately broad view of physics one must say that von Neumann had a quite outstanding insight into the problems of physics. Because he has done first-rate work, and he
was the man who succeeded in giving a correct mathematical formulation of quantum mechanics, and this was the major theory in physics in the first half of the century.
Aspray: Did he think the same way as some of these other people?
Bargmann: No, probably not, I would say tentatively. But I think you cannot point to a specific physical problem which John von Neumann solved. But I may be wrong. I don't remember.
Aspray: It's my impression that that's right.
Bargmann: Because what he did in quantum mechanics was the general framework.
Aspray: That's seems characteristic of his work in other areas as well. In set theory to a certain degree, in theory of computers, in operator theory.
Bargmann: I would not agree with operator theory. I think there are many things which he solved. This is typical, because this is part physics and part mathematics. For a time there was the problem
whether the p- and q-operators in quantum mechanics are uniquely defined up to equivalence by the commutation rules. Von Neumann gave the first proof of their uniqueness. This is a specific
mathematical problem. In those days many of the great theoretical physicists knew every atom like a friend, and knew exactly what happens in sodium and knew exactly what happens in potassium and so
on. This, I think, Johnny did not.
Tucker: That would be the difference between him and Wigner.
Bargmann: About Einstein. This is a bit more difficult. I would consider Einstein to be in the class of Bohr and Wigner, but Einstein was also in a way only interested in problems which are general.
Tucker: Was there any interaction between von Neumann and Einstein?
Bargmann: I wasn't aware of any, but it's difficult to say. I mean I didn't know everything that went on in Fine Hall. I know that when Pauli was here, he talked to Einstein frequently.
Aspray: Did Pauli speak regularly with von Neumann?
Bargmann: I don't know. But it's possible, because I remember a few times when Pauli said, "Yesterday I talked to von Neumann, and he told me.. ." something about a mathematical problem.
Tucker: Was there ever any regular seminar in which Einstein participated?
Bargmann: No. That is, if you're asking about one in which Einstein regularly participated, the answer is no.
Tucker: But occasionally?
Bargmann: Yes. Occasionally he came to a physics colloquium.
Tucker: I don't ever remember seeing him in Fine Hall, except going to and from his office.
Bargmann: I remember Milton White. He came from California; he was the man who was to build the cyclotron. This was before I was here, but I know it from Milton. He gave a talk in a physics seminar
or colloquium, I don't remember exactly. He had prepared it well. He came to the lecture room, and in the front row was Einstein. This he hadn't expected. So I know it happened occasionally, and
Einstein was also strongly impressed when Rabi talked about his work on the moment of the neutron.
Tucker: But mainly Einstein worked with a few associates. That is my recollection.
Bargmann: Yes, correct.
Tucker: I remember that at that time I was helping with the Annals of Mathematics, particularly with papers that were being refereed up until the time that some decision was made with them. I
remember taking one or two papers to Einstein at his office, and indeed I still have somewhere a referee's report that he wrote. He wrote it in German and addressed me as "my neighbor colleague". I
don't remember ever hearing him speak in a lecture or seminar. I think that any time he did that it must have been kept under wraps so that the place wouldn't be mobbed.
Bargmann: By people who wouldn't understand anything.
Tucker: Out of sheer curiosity. I mentioned this to explain why I was never aware of occasions when he talked in a seminar.
Bargmann: It wasn't widely known, but I think I always knew, and not from him.
Tucker: No, but it would have been an inner circle.
Bargmann: Was it on the bulletin board?
Tucker: I don't think so. I don't ever remember such a thing being listed on the weekly-seminar bulletin.
Aspray: I believe you told me, Al, that at one time Einstein's office had to be moved from the first floor to the second floor. Is that correct?
Tucker: I think that when Einstein arrived the office he was given in Fine Hall was on the first floor. Then there were people who came and peeked in the windows, so that his office was moved to the
office that Wigner subsequently had. I myself had an office at the time that Einstein first came. I was an instructor sharing an office with E.J. McShane. This was 108, on the first floor. It is my
recollection that at that time Einstein had the adjoining office, 109.
Aspray: When you first came to Princeton was there still a problem about preserving Einstein's privacy?
Bargmann: Well yes, but I would say it wasn't as bad as one might have expected.
Tucker: Of course it was terrible in 1933. There were reporters and photographers in town for weeks trying to catch him in an unguarded moment.
Aspray: This effort to protect his privacy, did it mean that things had to be more formal for the inner circle to get to see him? Did you have to make appointments or do anything like that?
Bargmann: No.
Aspray: He was accessible to you.
Bargmann: I would say Einstein was utterly accessible. For example, in Fine Hall and at the Institute, to anybody who was genuinely interested in science, I mean, if you came as a colleague.
I arrived here today asking myself a question, 'What in Fine Hall or in Princeton impressed me most?' The answer is that the library was open 24 hours a day. This I hadn't seen anywhere. These were
good times.
Aspray: And I assume that going in it at just about any hour you'd find somebody there?
Bargmann: Of course.
Tucker: The common room and the library were occupied constantly.
Bargmann: Yes, of course. I remember particularly well, because I came from Switzerland, where they made an awful fuss about the library. Zurich had two institutions, the University of Zurich and the
Federal Institute of Technology (abbreviated 'ETH' from the German 'Eidgenoessische Technische Hochschule'). On the door of the ETH library was the following notice: "If you knock at this door it will
not be opened."
Tucker: Well, I had the experience in reverse, because I had experienced Fine Hall in the last year of my graduate study, and the following year I had a National Research Council Fellowship which I
used in Cambridge, England, because I wanted to go to the international congress at Zurich in the fall of '32. By taking the first part of my fellowship at Cambridge, I was able to go to Zurich as
part of my fellowship. But what I wanted to say is that at Cambridge I found it utterly impossible to use library facilities. I could go to the library, but I had to go to the catalog and order a
book, and then read it in the reading room. I couldn't go to the stacks and browse, and that's the only way for a scholar to use a library.
I realized that the same thing is generally true throughout Europe: a library's main objective was to preserve books rather than to have books used, because if they're used, this leads to their
not being preserved. After Cambridge I was at Harvard, and I found this almost as bad at Harvard. I could get into the stacks there, but it wasn't particularly agreeable. There wasn't any place there
to work, which was the great advantage of the Fine Hall library. I had a chance to stay on at Harvard as an instructor, or to return to Princeton as an instructor. I had no doubt about my choice,
because the conditions for working were immeasurably better here than they were at Harvard at that time.
Bargmann: Yes, Fine Hall was quite new.
Tucker: What you said about the common room and the tea and coffee, this was in some sense the heart of the life there. And this, you know, was due to Oswald Veblen. He was the one who introduced the
afternoon tea to Princeton. He had it in his office at Palmer, using a Bunsen burner to prepare tea, but it was, really only for the small group of people who were working with him. And, of course,
he was the one who more or less designed Fine Hall. I mean, that it should have a common room, that it should have a professors' room, that there should be a library, and all that. He worked with the
architects on these things. It was his idea from the very start that the common room should have tea.
In the first year of Fine Hall I happened to be put in charge of the afternoon tea. On a voluntary basis, we served the tea and cleaned up and washed the dishes and so forth. But then the janitor,
Mr. Hahr, whom you probably remember, objected to Dean Eisenhart and asked that he be paid two hours overtime to stay on and serve the tea and clean up. That's the way it was, of course, when you got
here. It was rather chaotic the first year, but even to putting me in charge of tea, it was the work of Oswald Veblen.
Bargmann: Yes, I understand.
Tucker: So he should be given a lot of credit for the atmosphere that this created. As Dean [J.D.] Brown loved to say, "Men create institutions and institutions create men."
ACS pointing patterns
This is a two-dimensional 4-point dither pattern, with the following allowable parameter ranges:
Pattern_Number: 1
Pattern_Type: ACS-WFC-DITHER-BOX
Pattern_Purpose: DITHER
Number_Of_Points: 4
Point_Spacing: 0.01 - 10.0 (arcseconds)
Line_Spacing: 0.01 - 10.0 (arcseconds)
Coordinate_Frame: POS-TARG
Pattern_Orient: 0.0 - 360.0 (degrees)
Angle_Between_Sides: 0.0 - 360.0 (degrees)
Center_Pattern: ? (YES or NO)
The following versions of this Pattern_Type are projected onto the detector pixel space in the graphic above. The default pattern has relative pixel coordinates
(0, 0), (5.0, 1.5), (2.5, 4.5), (-2.5, 3.0).
It is a parallelogram pattern designed for optimal half-pixel sampling in both x and y, with overall dimensions large enough to help reject the larger detector artifacts.
Pattern_Number: 16
Pattern_Type: ACS-WFC-DITHER-BOX
Pattern_Purpose: DITHER
Number_Of_Points: 4
Point_Spacing: 0.264
Line_Spacing: 0.185
Coordinate_Frame: POS-TARG
Pattern_Orient: 20.86
Angle_Between_Sides: 69.07
Center_Pattern: NO
POS TARG equivalent: 0.000, 0.000
0.247, 0.094
0.124, 0.232
-0.124, 0.138
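The POS TARG offsets listed above are consistent with a simple parallelogram construction: the first side has length Point_Spacing along Pattern_Orient, and the second side has length Line_Spacing along Pattern_Orient + 180° − Angle_Between_Sides (so that Angle_Between_Sides is the interior angle of the parallelogram). Note that this construction is inferred here by fitting the listed offset tables, not taken from the official pattern definition, so treat the sketch below as illustrative:

```python
import math

def box_pattern_offsets(point_spacing, line_spacing, orient, angle_between_sides):
    """POS TARG (x, y) offsets in arcsec for a 4-point BOX dither.

    Geometry inferred from the listed patterns (an assumption, not the
    official STScI definition): side 1 has length point_spacing along
    orient; side 2 has length line_spacing along
    orient + 180 - angle_between_sides.  All angles in degrees.
    """
    a1 = math.radians(orient)
    a2 = math.radians(orient + 180.0 - angle_between_sides)
    s1 = (point_spacing * math.cos(a1), point_spacing * math.sin(a1))
    s2 = (line_spacing * math.cos(a2), line_spacing * math.sin(a2))
    return [(0.0, 0.0), s1, (s1[0] + s2[0], s1[1] + s2[1]), s2]

# Default WFC BOX pattern (Pattern_Number 16); reproduces the POS TARG
# table above to within about 0.001 arcsec.
for x, y in box_pattern_offsets(0.264, 0.185, 20.86, 69.07):
    print(f"{x:+.3f}, {y:+.3f}")
```

Calling the same function with the UDF parameters (0.171, 0.171, 30.16, 145.82) or the compact parameters (0.104, 0.053, 18.31, 80.79) reproduces those POS TARG tables to the same precision.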
This diagram shows how much this pattern's shape varies across the entire WFC field-of-view, as a result of scale variation. The following pattern was designed for the UDF observations. It is
slightly more compact than the default pattern above, and therefore reduces the effect of the scale variation on the integrity of the pattern across the entire WFC field-of-view:
Pattern_Type: ACS-WFC-DITHER-BOX
Pattern_Purpose: DITHER
Number_Of_Points: 4
Point_Spacing: 0.171
Line_Spacing: 0.171
Coordinate_Frame: POS-TARG
Pattern_Orient: 30.16
Angle_Between_Sides: 145.82
Center_Pattern: NO
POS TARG equivalent: 0.000, 0.000
0.148, 0.086
0.222, 0.240
0.074, 0.154
The compact pattern is about as compact as possible, therefore it minimizes the effect of the scale variation on the integrity of the pattern across the entire WFC field-of-view, at the expense of
not rejecting some of the larger detector artifacts.
Pattern_Type: ACS-WFC-DITHER-BOX
Pattern_Purpose: DITHER
Number_Of_Points: 4
Point_Spacing: 0.104
Line_Spacing: 0.053
Coordinate_Frame: POS-TARG
Pattern_Orient: 18.31
Angle_Between_Sides: 80.79
Center_Pattern: NO
POS TARG equivalent: 0.000, 0.000
0.099, 0.033
0.074, 0.080
-0.025, 0.047
This expanded pattern is designed to move an object completely off of its own PSF between exposures. It is similar to the other BOX patterns here, but has overall dimensions about 4 times larger.
This is optimized for smallish objects near the defined aperture, i.e. when the shape of the pattern near the edges of the field-of-view is less important.
Pattern_Type: ACS-WFC-DITHER-BOX
Pattern_Purpose: DITHER
Number_Of_Points: 4
Point_Spacing: 0.841
Line_Spacing: 0.795
Coordinate_Frame: POS-TARG
Pattern_Orient: 19.91
Angle_Between_Sides: 97.36
Center_Pattern: NO
POS TARG equivalent: 0.000, 0.000
0.790, 0.286
0.618, 1.063
-0.173, 0.776
This pattern is not plotted above, but its relative dimensions can be viewed in this comparison plot.
deep_alpha : (string * string) list -> term -> term
Modify bound variable according to renaming scheme.
When applied to a list of string-string pairs
deep_alpha ["x1'","x1"; ...; "xn'","xn"]
a conversion results that will attempt to traverse a term and systematically replace any bound variable called xi with one called xi'. It will quietly do nothing in cases where that is impossible
because of variable capture.
# deep_alpha ["x'","x"; "y'","y"] `?x. x <=> !y. y = y`;;
Warning: inventing type variables
val it : term = `?x'. x' <=> (!y'. y' = y')`
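The "quietly do nothing under capture" behaviour can be illustrated with a tiny stand-alone model of binding. This Python sketch is not HOL Light code: the tuple term representation and the deliberately conservative capture test (skip the renaming whenever the new name occurs anywhere in the body) are assumptions made purely for illustration.

```python
# Toy lambda terms: a variable is a string; ("lam", x, body) binds x;
# ("app", f, a) is application.
def all_vars(t):
    if isinstance(t, str):
        return {t}
    if t[0] == "lam":
        return {t[1]} | all_vars(t[2])
    return all_vars(t[1]) | all_vars(t[2])

def subst_var(t, old, new):
    # Replace free occurrences of variable `old` with `new`.
    if isinstance(t, str):
        return new if t == old else t
    if t[0] == "lam":
        return t if t[1] == old else ("lam", t[1], subst_var(t[2], old, new))
    return ("app", subst_var(t[1], old, new), subst_var(t[2], old, new))

def deep_alpha(mapping, t):
    # Rename bound variables per `mapping` ({old: new}); quietly skip a
    # renaming whenever it could capture (conservatively: whenever the
    # new name occurs anywhere in the body).
    if isinstance(t, str):
        return t
    if t[0] == "app":
        return ("app", deep_alpha(mapping, t[1]), deep_alpha(mapping, t[2]))
    x, body = t[1], deep_alpha(mapping, t[2])
    new = mapping.get(x)
    if new is not None and new not in all_vars(body):
        return ("lam", new, subst_var(body, x, new))
    return ("lam", x, body)

print(deep_alpha({"x": "x'"}, ("lam", "x", ("app", "x", "y"))))
print(deep_alpha({"x": "y"}, ("lam", "x", ("app", "x", "y"))))
# The second call leaves the term unchanged: renaming x to y would
# capture the free occurrence of y.
```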
This is used inside PART_MATCH to try to achieve a reasonable correspondence in bound variable names, e.g. so that the bound variable is still called `n' rather than `x' here:
# REWR_CONV NOT_FORALL_THM `~(!n. n < m)`;;
val it : thm = |- ~(!n. n < m) <=> (?n. ~(n < m))
See also
alpha, PART_MATCH.
Summary: A resolvable subfilter-scale model specific to large-eddy simulation
of under-resolved turbulence
Yong Zhou, James G. Brasseur, and Anurag Juneja
Department of Mechanical and Nuclear Engineering, The Pennsylvania State University, University Park,
Pennsylvania 16802
Received 20 February 2001; accepted 6 June 2001
Large-eddy simulation (LES) of boundary-layer flows has serious deficiencies near the surface when
a viscous sublayer either does not exist (rough walls) or is not practical to resolve (high Reynolds
numbers). In previous work, we have shown that the near-surface errors arise from the poor
performance of algebraic subfilter-scale (SFS) models at the first several grid levels, where integral
scales are necessarily under-resolved and the turbulence is highly anisotropic. In under-resolved
turbulence, eddy viscosity and similarity SFS models create a spurious feedback loop between
predicted resolved-scale (RS) velocity and modeled SFS acceleration, and are unable to
simultaneously capture SFS acceleration and RS-SFS energy flux. To break the spurious coupling
in a dynamically meaningful manner, we introduce a new modeling strategy in which the
grid-resolved subfilter velocity is estimated from a separate dynamical equation containing the
essential inertial interactions between SFS and RS velocity. This resolved SFS (RSFS) velocity is
then used as a surrogate for the complete SFS velocity in the SFS stress tensor. We test the RSFS
model by comparing LES of highly under-resolved anisotropic buoyancy-generated homogeneous
An Undecidability Result on Limits of Sparse Graphs
Given a set $\mathcal{B}$ of finite rooted graphs and a radius $r$ as an input, we prove that it is undecidable to determine whether there exists a sequence $(G_i)$ of finite bounded degree graphs
such that the rooted $r$-radius neighbourhood of a random node of $G_i$ is isomorphic to a rooted graph in $\mathcal{B}$ with probability tending to 1. Our proof implies a similar result for the case
where the sequence $(G_i)$ is replaced by a unimodular random graph.
Angle between 2 vectors
December 28th 2009, 11:16 PM #1
Dec 2008
Angle between 2 vectors
Question : If $2 \theta$ is the angle between the 2 unit vectors $\bar a$ and $\bar b$, then show that $|\bar a - \bar b| = 2 sin \theta$
I know $\bar a \cdot \bar b = |\bar a||\bar b| \cos 2\theta$
I don't know what to do afterwards???
Question : If $2 \theta$ is the angle between the 2 unit vectors $\bar a$ and $\bar b$, then show that $|\bar a - \bar b| = 2 sin \theta$
I know $\bar a \cdot \bar b = |\bar a ||\bar b| \cos 2\theta$
I just don't know what to do afterwards???
Draw a diagram, then it is simple geometry.
CB
i don't know how to draw a diagram with so little ......
the angle is $2\theta$; how should i draw that.........
Notice that
$\left|\bar a-\bar b\right|^2=\langle\bar a-\bar b,\ \bar a-\bar b\rangle$
Dear zorro,
This is a method which does not require any geometrical drawing. But I think sometimes you will find this method an interesting one. Sorry, I didn't have time to type the answer. If you have any
questions, please don't hesitate to ask me.
Last edited by Sudharaka; December 29th 2009 at 03:42 AM.
Hello zorro
Have a look at the attached drawing.
Since $\textbf{a}$ and $\textbf{b}$ are unit vectors, $|PQ| = |PS| = 1$, and $|\textbf{a}-\textbf{b}| = |QS|$.
Can you complete it now?
Thanks Sudharaka and Grandad and also everyone for your answers.....
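For reference, the inner-product hint above can be carried through to the stated result (a sketch, not part of the original thread):

$\left|\bar a-\bar b\right|^2 = \langle\bar a-\bar b,\ \bar a-\bar b\rangle = |\bar a|^2 - 2\,\bar a\cdot\bar b + |\bar b|^2 = 1 - 2\cos 2\theta + 1 = 2(1-\cos 2\theta) = 4\sin^2\theta$

Taking the nonnegative square root gives $|\bar a - \bar b| = 2\sin\theta$, since $\sin\theta \ge 0$ when $0 \le 2\theta \le \pi$.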
java program to find the root of quadratic equation
Author Message
medaxonan Posted: Wednesday 27th of Dec 21:49
Hi there! I have almost taken the decision to look for an algebra tutor, because I've been having a lot of stress due to algebra homework this year. Each time I come home from school
I spend all afternoon on my algebra homework, and in the end I still seem to be getting the incorrect answers. However, I'm also not certain whether an algebra private teacher is
worth it, since it's not cheap, and who knows, maybe it's not even that good. Does anyone know anything about a Java program to find the root of a quadratic equation that can help me? Or
maybe some explanations regarding triangle similarity, converting decimals, or conversion of units? Any ideas will be much appreciated.
nxu Posted: Thursday 28th of Dec 19:32
Believe me, it’s sometimes quite hard to learn a topic on your own because of its difficulty, just like a Java program to find the root of a quadratic equation. It’s sometimes better to
request someone to explain the details rather than understanding the topic on your own. In that way, you can understand it very well because the topic can be explained systematically .
Luckily, I discovered this new program that could help in understanding problems in algebra. It’s a cheap quick hassle-free way of learning math lessons . Try making use of Algebrator
and I assure you that you’ll have no trouble solving math problems anymore. It shows all the pertinent solutions for a problem. You’ll have a good time learning algebra because it’s
user-friendly. Give it a try.
nedslictis Posted: Saturday 30th of Dec 14:57
I agree. Algebrator not only gets your assignment done faster, it actually improves your understanding of the subject by providing very useful information on how to solve similar
problems . It is a very popular product among students so you should try it out.
SjberAliem Posted: Monday 01st of Jan 14:54
Algebrator is a very user friendly product and is certainly worth a try. You will find many interesting stuff there. I use it as reference software for my math problems and can say that
it has made learning math more fun .
Nort Roedim Posted: Wednesday 03rd of Jan 07:00
This sounds really great. Do you know where I can purchase the program?
Noddzj99 Posted: Friday 05th of Jan 09:04
Hi Dudes, I had a chance to try Algebrator offered at http://www.mathscitutor.com/factoring-polynomials-completely.html yesterday. I am really very thankful to you all for pointing me
to Algebrator. The big formula list and the detailed explanations on the fundamentals given there were really understandable. I have finished and turned in my assignment on system of
equations and this was all possible only with the help of the Algebrator that I purchased based on your recommendations here. Thanks a lot.
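Since the thread never actually answers the question in its title, here is one standard approach. It is shown in Python for brevity (the thread asked for Java, and the same quadratic-formula logic ports line-for-line); this sketch is not from the thread:

```python
import cmath

def quadratic_roots(a, b, c):
    """Both roots of a*x**2 + b*x + c = 0 via the quadratic formula.

    cmath.sqrt is used so a negative discriminant yields the
    complex-conjugate pair instead of raising an error.
    """
    if a == 0:
        raise ValueError("a must be nonzero for a quadratic")
    d = cmath.sqrt(b * b - 4 * a * c)
    return (-b + d) / (2 * a), (-b - d) / (2 * a)

print(quadratic_roots(1, -3, 2))  # ((2+0j), (1+0j))
```

For x² − 3x + 2 the discriminant is 9 − 8 = 1, giving the roots 2 and 1.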
Rockrimmon, CO ACT Tutor
Find a Rockrimmon, CO ACT Tutor
...I was a teacher and tutor for a leading Test Preparation company for three years. I studied all aspects of the ACT math section and successfully taught over 100 students in effective ACT math
strategies. I was a teacher and tutor with a leading test preparation company for three years.
20 Subjects: including ACT Math, reading, writing, English
...I am certified to teach math and science in grades 7 through 12 and have several years of teaching experience, including physical science and physics. I have helped prepare students for
science fairs and have coached for Science Olympiad. I have taught and tutored math, algebra and geometry for over 20 years.
9 Subjects: including ACT Math, geometry, algebra 1, GED
...Prealgebra is meant to prepare for the study of algebra. It provides the student with the tools they will need in algebra. Variable manipulation is one of the most important of those tools.
16 Subjects: including ACT Math, calculus, physics, statistics
...I have tutored in traditional after-school settings, alternative high school settings, and programs for homeschoolers. I hold undergraduate degrees in Optics and Mathematics and a Master's
degree in Physics. I am available to tutor math, physics, and test prep.
16 Subjects: including ACT Math, calculus, physics, GRE
...This past spring I coached girls high school tennis. While at Stanford, one summer home I worked at Early Connections Learning Center (in downtown Colorado Springs) as a Teaching Assistant. My
senior year of high school I was a Classroom Helper at a local elementary school.
34 Subjects: including ACT Math, reading, English, Spanish
Posts Tagged with 'Psychometrics'—Wolfram|Alpha Blog
Prior to releasing Wolfram|Alpha into the world this past May, we launched the Wolfram|Alpha Blog. Since our welcome message on April 28, we’ve made 133 additional posts covering Wolfram|Alpha news,
team member introductions, and “how-to’s” in a wide variety of areas, including finance, nutrition, chemistry, astronomy, math, travel, and even solving crossword puzzles.
As 2009 draws to a close we thought we’d reach into the archives to share with you some of this year’s most popular blog posts.
Rack ’n’ Roll
Take a peek at our system administration team hard at work on one of the
many pre-launch projects. Continue reading…
The Secret Behind the Computational Engine in Wolfram|Alpha
Although it’s tempting to think of Wolfram|Alpha as a place to look up facts, that’s only part of the story. The thing that truly sets Wolfram|Alpha apart is that it is able to do sophisticated
computations for you, both pure computations involving numbers or formulas you enter, and computations applied automatically to data called up from its repositories.
Why does computation matter? Because computation is what turns generic information into specific answers. Continue reading…
Live, from Champaign!
Wolfram|Alpha just went live for the very first time, running all clusters.
This first run at testing Wolfram|Alpha in the real world is off to an auspicious start, although not surprisingly, we’re still working on some kinks, especially around logging.
While we’re still in the early stages of this long-term project, it is really gratifying to finally have the opportunity to invite you to participate in this project with us. Continue reading…
Wolfram|Alpha Q&A Webcast
Stephen Wolfram shared the latest news and updates about Wolfram|Alpha and answered several users’ questions in a live webcast yesterday.
If you missed it, you can watch the recording here. Continue reading… More »
We’re really catching the holiday spirit here at Wolfram|Alpha.
We recently announced our special holiday sale for the Wolfram|Alpha app. Now we are launching our first-ever Wolfram|Alpha “Holiday Tweet-a-Day” contest.
Here’s how it works.
From tomorrow, Tuesday, December 22, through Saturday, January 2, we’ll use Twitter to give away a gift a day. Be the first to retweet our “Holiday Tweet-a-Day” tweet and you get the prize! You can
double your chances to win by following and playing along with Wolfram Research.
Start following us today so you don’t miss your chance to win with our Wolfram|Alpha “Holiday Tweet-a-Day” contest.
When we launched Wolfram|Alpha in May 2009, it already contained trillions of pieces of information—the result of nearly five years of sustained data-gathering, on top of more than two decades of
formula and algorithm development in Mathematica. Since then, we’ve successfully released a new build of Wolfram|Alpha’s codebase each week, incorporating not only hundreds of minor behind-the-scenes
enhancements and bug fixes, but also a steady stream of major new features and datasets.
We’ve highlighted some of these new additions in this blog, but many more have entered the system with little fanfare. As we near the end of 2009, we wanted to look back at seven months of new
Wolfram|Alpha features and functionality.
Psychrometry deals with the thermodynamic properties of gas-vapor mixtures. Air-water vapor mixtures are the most common systems studied because of their importance in heating, ventilating, air
conditioning, and weather reporting.
Students of engineering are introduced to the subtleties of psychrometry in their thermodynamics courses. But we are all exposed to psychrometry any time we watch weather reports on television. Your
favorite meteorologist probably speaks about the relative humidity, dry bulb temperature, and dew point temperature.
Let’s start our exploration of psychrometry by querying “psychrometric properties”. Wolfram|Alpha returns a presentation of data for moist air, including a graph known as a psychrometric chart.
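To give a taste of the computation behind those quantities, the dew point can be estimated from dry-bulb temperature and relative humidity. A small Python sketch using the common Magnus approximation (the coefficients below are the standard Magnus constants, not values taken from the blog post):

```python
import math

def dew_point_c(temp_c, rel_humidity_pct):
    """Approximate dew point (deg C) from dry-bulb temperature (deg C)
    and relative humidity (%), via the Magnus formula."""
    a, b = 17.27, 237.7  # Magnus coefficients for water, roughly 0-60 C
    gamma = math.log(rel_humidity_pct / 100.0) + a * temp_c / (b + temp_c)
    return b * gamma / (a - gamma)

# At 25 C dry-bulb and 60% relative humidity, the dew point is about 16.7 C.
print(round(dew_point_c(25.0, 60.0), 1))
```

A useful sanity check on the formula: at 100% relative humidity the dew point equals the dry-bulb temperature exactly.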
Bitcoins have been heavily debated of late, but the currency's popularity makes it worth attention. Wolfram|Alpha gives values, conversions, and more.
Some of the more bizarre answers you can find in Wolfram|Alpha: movie runtimes for a trip to the bottom of the ocean, weight of national debt in pennies…
Usually I just answer questions. But maybe you'd like to get to know me a bit, too. So I thought I'd talk about myself, and start to tweet. Here goes!
Wolfram|Alpha's Pokémon data generates neat data of its own. Which countries view it most? Which are the most-viewed Pokémon?
Search large database of reactions, classes of chemical reactions – such as combustion or oxidation. See how to balance chemical reactions step-by-step. | {"url":"http://blog.wolframalpha.com/tag/psychometrics/","timestamp":"2014-04-17T15:26:42Z","content_type":null,"content_length":"46190","record_id":"<urn:uuid:41f1ad26-20fc-48ef-b893-93867f67a26d>","cc-path":"CC-MAIN-2014-15/segments/1397609530136.5/warc/CC-MAIN-20140416005210-00588-ip-10-147-4-33.ec2.internal.warc.gz"} |
Determining efficiency of multiplication algorithm
This explanation is from my text book for analyzing time efficiency of (non-recursive) algorithms.
1) Decide on a parameter indicating the input size
2) Identify the algorithm's basic operation
3) Setup a sum expressing the number of times the basic operation is executed
4) Using sum manipulations and standard formulas find a closed form formula for the count and/or establish it's order of growth.
So, first you want to be able to measure the size of your input. For example, if you have an array of integers to represent your bignum, then your input size is the size of the array.
Second, locate the basic operation, generally this is inside the inner loop of the algorithm.
Third, express the number of times this basic operation is executed as a sum, i.e.
[tex]\sum_{i=1}^{n} a_{i}[/tex]
where n is the number of iterations and a_i is the cost of the basic operation on the i-th iteration.
Four, manipulate the sum to obtain a nice formula.
Here's an example of a matrix multiplication algorithm:
MatrixMultiply: A*B = C
for (i = 0; i < n; i++)
    for (j = 0; j < n; j++)
        for (k = 0; k < n; k++)
            C[i][j] += A[i][k] * B[k][j]
It should be clear that we have 3 loops that iterate n times each (from 0 to n-1) and the basic operation is a multiplication and an addition. Let's say the cost is 1 to keep it simple. A sum to
represent the number of basic operations for each matrix element i,j is:
[tex]\sum_{k=0}^{n-1} 1[/tex]
and the total number of basic operations can be represented by:
[tex]\sum_{i=0}^{n-1} \sum_{j=0}^{n-1} \sum_{k=0}^{n-1} 1[/tex]
It should be clear that the innermost sum:
[tex]\sum_{k=0}^{n-1} 1[/tex]
is equal to n, because we sum 1 from k = 0 to k = n-1. We can reduce the triple sum to:
[tex]\sum_{i=0}^{n-1} \sum_{j=0}^{n-1} n[/tex]
and with the same reasoning, reduce this double sum to:
[tex]\sum_{i=0}^{n-1} n^{2}[/tex]
and further reduce it to:
[tex]n^{3}[/tex]
So the cost of this algorithm, as a function of input size, is C(n) = c*n^3,
where c is the cost of the basic operation, and you would say that the order of growth is O(n^3) - the constant c is irrelevant for Big Oh notation because the size of n^3
will eclipse any constant c for large n.
This is a bit simplified from the text, but that should give you an example of how to go about calculating the complexity of your algorithm. | {"url":"http://www.physicsforums.com/showthread.php?p=4235719","timestamp":"2014-04-20T05:53:19Z","content_type":null,"content_length":"53381","record_id":"<urn:uuid:2a389942-4cf8-4d0a-a499-7101d11d1a63>","cc-path":"CC-MAIN-2014-15/segments/1397609538022.19/warc/CC-MAIN-20140416005218-00263-ip-10-147-4-33.ec2.internal.warc.gz"} |
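The counting argument can also be checked empirically. A small Python sketch (illustrative, not from the post) that runs the same triple loop and tallies the basic operation:

```python
def matrix_multiply_op_count(n):
    """Multiply two n x n all-ones matrices, counting executions of
    the basic operation (one multiply-add) in the innermost loop."""
    A = [[1] * n for _ in range(n)]
    B = [[1] * n for _ in range(n)]
    C = [[0] * n for _ in range(n)]
    count = 0
    for i in range(n):
        for j in range(n):
            for k in range(n):
                C[i][j] += A[i][k] * B[k][j]
                count += 1  # one basic operation
    return C, count

C, count = matrix_multiply_op_count(4)
print(count)  # 64, i.e. 4**3, matching C(n) = c * n**3 with c = 1
```

Doubling n multiplies the count by eight, which is exactly the cubic growth the sum predicts.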
writing quadratic equation for a graph
writing quadratic equation for a graph
Write a quadratic function that fits each set of points: (-1,-11), (1,0) (2,9). I plotted these points on a graph but I cannot find a common equation that would fit all these points. Would appreciate
advice that would help solve this problem. Thank you.
Motherof8 wrote:Write a quadratic function that fits each set of points: (-1,-11), (1,0) (2,9). I plotted these points on a graph but I cannot find a common equation that would fit all these
points. Would appreciate advice that would help solve this problem. Thank you.
There's an example of this exact same kind of thing
Your system of equations will be:
a - b + c = -11
a + b + c = 0
4a + 2b + c = 9 | {"url":"http://www.purplemath.com/learning/viewtopic.php?p=6964","timestamp":"2014-04-17T04:39:51Z","content_type":null,"content_length":"18908","record_id":"<urn:uuid:b42a3986-5f9a-44b2-940d-26cb82863ce2>","cc-path":"CC-MAIN-2014-15/segments/1398223207046.13/warc/CC-MAIN-20140423032007-00122-ip-10-147-4-33.ec2.internal.warc.gz"} |
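Finishing the elimination on the system above can be done exactly with rational arithmetic. A quick Python check (not part of the original thread):

```python
from fractions import Fraction

# y = a*x**2 + b*x + c through (-1, -11), (1, 0), (2, 9) gives:
#   (eq1)   a -  b + c = -11
#   (eq2)   a +  b + c =   0
#   (eq3)  4a + 2b + c =   9
b = Fraction(0 - (-11), 2)  # eq2 - eq1:  2b = 11
a = Fraction(9 - 0 - b, 3)  # eq3 - eq2:  3a + b = 9
c = -a - b                  # eq2:  a + b + c = 0

print(a, b, c)  # 7/6 11/2 -20/3

# Verify the parabola passes through all three points.
for x, y in [(-1, -11), (1, 0), (2, 9)]:
    assert a * x * x + b * x + c == y
```

The coefficients come out non-integer, y = (7/6)x² + (11/2)x − 20/3, which is why no equation is easy to spot by inspection from the plotted points.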
Teaching Textbooks
You are here: Home → Curriculum reviews → Teaching Textbooks
Teaching Textbooks
Grades: 5-12 Teaching Textbooks
Teaching Textbooks have been specifically designed for homeschoolers. They contain far more explanations than typical math books, and feature step-by-step solutions to ALL problems in a multimedia format.
Both of the above features can be of enormous help to those homeschooling parents who do not know math well themselves.
The books include Pre-Algebra, Algebra 1, Algebra 2, and Geometry, and grade-level packages from 5th grade through 7th.
Pricing: $119.90-$184.90 for each product, which includes textbooks, answer key, tests, and CDs.
Add a review
Reviews of Teaching Textbooks
Level: Teaching Textbooks 7 & Pre-Algebra
Time: 2 years
Your situation: In my 25 years of home education, I have looked at or used just about every math curriculum out there. I have yet to find one that is absolutely the "perfect" curriculum.
We used another mastery approach curriculum that I thought was really good, but I found it progressed too quickly for my son, and he was becoming frustrated and not getting enough practice with
concepts before moving on.
Why you liked/didn't like the curriculum: TT has been a real positive for our situation, and for my son who is more of a visual/auditory learner. I feel the curriculum could improve by giving some
sort of supplemental practice problems for those who might need it. But overall, I am very happy with this curriculum, and plan to use it through Pre-Calc.
I also liked the automatic grading, so my son sees his mistakes right away, gets another shot at getting it right, and then can see an explanation of the problem if it is missed. This is a valuable
tool! It also keeps a digital grade book that you can print out, which is great! Overall, I am very pleased with Teaching Textbooks!
I also like that I can re-sell TT. You can also purchase replacement discs if needed for a minimal fee. I wish their workbooks were made of a little better quality material, and I wish there were
supplemental problems for more practice of a certain concept.
Any other helpful hints: Good for the average student and for those that need more instruction than just reading from a book. If you have an above average student, go with something a little more
meaty like Foerster's, Systematic Mathematics, or ABeka.
I still don't think the perfect curriculum exists, because there are a wide variety of learners out there, but TT has been great for us, and for our situation!
Review left February 22, 2014
Time: 1 year
Your situation: I work outside the home and homeschool our kids with my husband's help. We selected an out of the box complete curriculum that we love. However, we noticed that both of my children
needed some remedial help in their math in certain areas. We picked up a used copy of Teaching Textbooks Math 4 to see how the kids could integrate it into their overall math program. I've got to
say, it fits in nicely!
Why you liked/didn't like the curriculum: Love this: We would do the online school for our kids, then they would spend another 30 min on Teaching Textbooks and wouldn't even complain. They just
considered it extra practice (just note, my kids are coming from private/public school used to much longer school days so this isn't uncomfortable for them.) Also, having immediate access to the
solutions was great. After a time, doing problems in their heads became easier.
Cons: I started evaluating Algebra 1 and 2 programs for my oldest and found when I compared Teaching Textbooks to other curriculums it was seriously lacking in the Algebra 2 areas. Therefore, I would
greatly recommend that parents who have college-bound students seek out other math programs and ONLY use TT for Algebra 1/2 as a supplement. However, the Geometry program seemed very sound. I even
reviewed the Pre-Calculus program, which we are considering. We'd use Rapid Learning Pre-Calculus with it as a supplement, though.
Any other helpful hints: You can use this up to Pre-Algebra as your main curriculum but if you need extra practice, and theory, then you'll have to supplement. Find another main math curriculum for
Algebra 1 and 2, using TT only for extra practice. Pre-Calculus also appeared as though supplements for understanding would benefit most students.
Review left August 2, 2013
Time: 1 year
Your situation: We have 3 home school students: an 8th grader doing algebra, a 6th grader doing 6th grade math, and a 5th grader doing 6th grade math. We have been home schooling for 4 years. I
believe that math is the most important subject. We have tried SOS, ALEKS, Math Mammoth, LIFEPAC, Saxon, and my own mix.
Why you liked/didn't like the curriculum: WE LOVE TEACHING TEXTBOOKS!!! It does use a spiral approach, and the lessons are taught on video. I wish they had a science program as well. The only downfalls are the cost and
the inability to make a backup disc, but if you were to ask me if it were worth it, I would say YES.
Any other helpful hints: Make sure you take the placement test as it runs a little behind grade level till you get to algebra.
Review left October 24, 2012
Time: 5 years
Your situation: 3 children took Algebra 1,2 and Geometry; 2 took Pre-Calculus
Why you liked/didn't like the book: I can't address some of the objections as to math levels and mathematical thinking. I can only share my family's experience. Two of my children went through
Singapore 6th grade and from that straight into TT Algebra 1 without a problem (that's a definite plug for Singapore!) They then went on through TT Pre-Calculus. They both scored 700 or higher on
their SAT math. So something was right about the curriculum. The third child who took it, had struggled more with math and hadn't gotten as far in Singapore. She integrated the TT program easily and
stopped feeling that math was hard. She did nicely on her ACT (don't remember the exact score. Not brilliant, but fine.) When she went to College she was the best student in her College Algebra
class. So, again something is working with TT.
Any other helpful hints: Don't judge the outcome until the child has gone through the whole program!
LeeAnne Huling
Review left March 27, 2012
Time: 1 year
Your situation: I typed in the right answer to a problem like 1/4 + 1/4 = 1/2 and it said it was not right, but if you enter it a second time you get it right! And they make you do stuff the hard way, when I
looked the same thing up and it took half the time!
Why you liked/didn't like the book: They don't know what they are teaching!
Any other helpful hints: Make sure you have someone that knows math to see if your answer is actually right!
Review left December 16, 2011
Teaching Textbooks 4th grade
Time: 3 months
Your situation: Struggling child, falling behind in school.
Why you liked/didn't like the book: I love this program. I purchased it for my child, who is rapidly falling behind in math. It starts REALLY basic, probably a little too basic, but this has been okay
for her to build speed and core knowledge. We can do 1-3 lessons per day and she's been able to do it all by herself. I've for sure noticed improvement in school as well. She was very excited when
one subject was covered (and mastered) online, then a few days later they started doing it in school--she aced it, which has been wonderful for her self-esteem. We've paid tutors and spent months
with them--only 20 minutes a night and a one-time payment with Teaching Textbooks have been way more helpful. I plan to purchase the next-year edition and keep my child going on this as a
supplement and reinforcement to what they are doing in school.
Review left December 9, 2011
Teaching Textbooks
Time: six months
I have a 10-year-old daughter who dislikes math, and we would have a power struggle every day. She races through math just to have it done and over with. She would make many mistakes and be careless.
Teaching textbooks is a good fit for her; she likes to do independent work and I can see on the gradebook if she is checking the solution when she gets it wrong... I require that she go back and
check every solution for missed problems because this forces her to slow down, go back and be thorough. Thoroughness is not her strong suit. So, it is teaching her good habits but I can monitor it
very easily because of the gradebook/checked solutions format that helps me keep her accountable for good study habits. I purchased levels 4, 5, 6, and 7 because she is behind but also needs to be
able to go back to public school next year... other reviewers are right... TT misses concepts in grade levels so the grades do not correlate with state standards. But I think the format of the
teaching is superior for those kids that have math anxiety and shut down... far superior to other products. So, basically, I use our public school's curriculum guide to plan her week and her
lessons. I do not go in the order that they have; I follow the public school topics in the order they teach as much as possible. I also have K12 student worksheets for grade 5 to supplement. The
extra effort is worth it because it is BUILDING my daughter's confidence, and we have a much better teacher-student relationship. It is also giving her much review to build her confidence, and many
people think this is a waste of time and they need to move on to more complex problems, but the review is so important for confidence building. It takes me about an hour a week to plan for the week,
picking lessons and appropriate supplements, but it is SO WORTH IT---because I don't have the power struggle anymore... she does her work and enjoys it. I also figure that these books will be a great
resource to have on hand for my other kids who are in the public schools... just pull it off the shelf and you have a personal tutor. I am planning on purchasing all of the curriculum for these
reasons... geometry, pre algebra and algebra.
Why you liked/didn't like the book:
It is slower and not on target grade-wise. I just purchased multiple grade levels to compensate, knowing that I can keep the product as a resource. Money well spent, I think.
Any other helpful hints:
I would supplement by checking state standards. I have my child do three 20 minute sessions of math, so a full hour for 5th grade... so she can potentially get three lessons done per day depending on
the content. If she struggles in one lesson, I can instantly see it because of the gradebook. There is so much review in this material that I don't worry that she will be overwhelmed. She will get it
Jennifer McDonnell
Review left August 31, 2011
Teaching Textbook 7
Time: 2 years
Your situation: I have a dyslexic child and a son who is very auditory and has some language weaknesses. I use Math U See up to 6th grade. Neither of my children was ready for Pre-Algebra in 7th
grade. I used TT7 for both in the 7th grade - it is basically a review of all basic math. Since it is worded differently and in a different order than MUS, it really showed me what the weak spots
were in my children and we spent extra time on those skills.
My dyslexic son ended up doing very well in Pre-Algebra (his tutor uses Glencoe, which is what our public schools use, and it is more advanced than MUS or TT). I think the year of TT7 reinforcement is
what he needed for his math fluency and comfort.
My younger son just started and likes the format. He has to keep a log of what each lesson was about and write down what he missed and why. This reinforcement will be helpful to him through the year.
Why you liked/didn't like the book:
-The way it fit exactly as a review of all of MUS, that is basic math....
-The computer program was motivating to my kids, they liked the format
-The gradebook made it simple to keep up with their progress and it is easy to see what they did wrong on missed questions
Any other helpful hints: See what they are using in the quality public schools and private schools in your area. TT may or may not be challenging enough for high schoolers on a college track. Don't
just listen to homeschool parent reviews - most homeschool parents do not have strong math backgrounds.
It may be perfect for those students who need remedial help, summer review, LD's etc.
Review left August 19, 2011
Your situation:
Homeschooling for 17 years
Two children graduated (ages 22 and 21)
One child left who will graduate June 2012
We used A Beka for the first 8 years for math and were pretty happy. Our son is a math genius and sailed through until 9th grade. We tried A Beka's 9th grade Algebra I and he struggled greatly. We
then switched to a secular program, which was better, but not much. He had been using Apologia Science and kept saying that he wished that Apologia would write a math curriculum. We went to a
convention and saw Teaching Textbooks for the first time. They even had a banner which said "The Apologia-like Math Curriculum"! Needless to say, we decided to try it. Our son loved it and once again
sailed through math. He later took the SAT for college and completely aced the math section with a perfect score!!!! Our two daughter have also gone through all three years and despite not being math
people, they have understood the concepts and have done well on the tests. I highly recommend Teaching Textbooks for high school math. It's awesome!!!!
Any other helpful hints:
DVDs are useful, but not absolutely necessary. We did every year but one without the DVDs and were able to successfully complete the curriculum - and I am NOT a math person. The one year we did with
the DVDs (on loan from a friend) was nice simply because I no longer had to sit down and try to figure out the reason an answer was wrong and how to correct it. Saved time and frustration, but if you
want to save money instead - go ahead and just order the student text and answer key.
Ellen Gerwitz
Review left May 7, 2011
I have 6 children: 2nd grade, 4th grade, 7th grade, preschoolers, and a toddler. Before homeschooling, I taught 7th grade math and Algebra 1 in a private school. We have tried Saxon, Developmental Math,
and a few other workbooks. My oldest struggles with math; Saxon moves too fast for her, and I don't enjoy teaching it. Although I do think Saxon is a very good math program.
I like Teaching Textbooks because my children have the option of using the computer or book or both. My kids are happy just using the textbook. I like the pace of the lessons and that it is still a
spiral method, just not as fast paced as Saxon. I also like that it's one big book, and a small answer book and CDs, as opposed to a giant teacher's manual and 2 packs of worksheets like Saxon. It
seems to not have as many tips when introducing math facts, other than saying they need to be memorized. I also wish it had a K, 1st, and 2nd grade level. I will probably use Saxon to fill in with my
youngest 3 kids until they reach grade 3 or TT comes out with more grade levels.
Any other helpful hints:
If you use it, you will need to build in time for reviewing math facts. And if your kids are a math whiz, simply bump them up a grade for TT.
Review left May 2, 2011
Level: Teaching Textbooks 6
Time: 5 years
This is excellent for a student who needs to hear things more than once to get it.
The audio-visual is also good for cementing the concepts. We sat through some boring explanations, but I learned that my students could not verbalize back to me some basic math concepts or the
understanding behind them, and this helped to weed those out.
Extensive testing was done the first two weeks of public school when my son got to Calculus and he was shown to be lacking these things:
1-He did not have enough Trig background
2-He did not have the instant skills with the TI-89 calculator that the other kids who had taken Pre-Calc had. (The TI-89 is difficult to use without specific instructions. You don't just pick it up and start using it)
3-He dropped down to Pre-Calc, but this is no big deal to us. Pre-Calc is reported to be harder than the Cal itself and he is doing excellent there with a great teacher.
What my son did have:
The teacher told me that he had better algebra/thinking skills than any of the other kids who remained in the AP Cal class. The teacher was amazed that he could figure out some of the problems on his
own (it took him a long time) which the other students were all performing on their calculators. I think this goes with the TT author's goal of teaching for understanding as opposed to just getting
through the course. I do not feel bad at all that he is taking a course labeled Pre-Cal twice because they are, in fact, two very different courses and that is what the instructor told me and my son
Why you liked/didn't like the book:
I did like this series very much because the solutions to everything are a click away and the explanations that go with them are extremely thorough. I like that they get practice listening to a
lecture, which is a skill they need for college. It is great for children who have learning disabilities such as ADD or dyslexia because it is multi-sensory and the lecture does not go too fast. It
is much more effective if done with the parent (as all curriculum is)
An index would be nice.
The courses could probably be labeled differently by bumping them up a level. In other words Algebra 1/2 could be used for 6th graders, Algebra I is a very thorough Pre-Algebra course (7th grade),
Algebra II is a great Algebra I course etc. I can't say about elementary courses because I never used them.
We did Algebra I, then II, and when my son did geometry, he also reviewed algebra at the same time by taking the tests again. This kept his algebra skills current while he did geometry, which is
helpful when the standardized test is taken.
I really like this program. If it's too slow, bump it up.
I am also making flashcards to go with Pre-Algebra and orally quiz the concepts with the two boys I'm teaching. I would suggest this for any math program.
Review left October 6, 2010
Level: Teaching Textbooks 6 Time used: 1 month
Our grade 6 daughter struggles with math concepts. We love Saxon math, as do our other two boys, who understand it fine, but it is very difficult for her. She would be in tears trying to understand the concepts. I spent hours trying to show her different ways of doing problems to try to help her. It was becoming too difficult for us to manage, so we started TT6.
Why you liked/didn't like the book:
We like it as almost a remedial program for a struggling student. She is gaining confidence and loving the program. The graphics and teaching are fun. The parent controls in the gradebook are very
handy. She loves being able to check her gradebook as well. Everyday she is getting 90% or higher.
I would not recommend this for academic mathematical students, however for struggling students it is excellent and enables them to learn some math that they might not otherwise have learned.
Any other helpful hints:
The grade 6 book we are using for our grade 6 daughter is excellent review after Saxon math. She is picking up on skills she missed before after not doing well. If your student is doing well, please
put them in a grade ahead.
Review left September 22, 2010
Level: Teaching Textbooks Algebra 1 Time used: 1 year
We loved this curriculum until we looked at the Prentice Hall Algebra I book that the local high school was using. Then we realized that Teaching Textbooks Algebra I is way behind grade level! My son
completed TT Algebra I and now is going through the Prentice Hall Algebra I book, to fill in the gaps, which are huge. This is taking him another 4-5 months! I had trusted Cathy Duffy's reviews of
TT, and found I was wrong not to check it out more.
However, the method of Teaching Textbook is great -- the kids enjoyed doing it on their own, with access to the CD's and textbook.
Any other helpful hints:
If your student is college bound and plans to take the SAT and/or enter public high school, I would not recommend this curriculum.
Cindy Stuckey
Review left August 20, 2010
Level: Teaching Textbooks Pre-Algebra and Geometry Time used: 2 years
My oldest daughter did average to well in Abeka Algebra 1 but it was very labor intensive and the DVD for Algebra 2 was expensive and had to be returned (I could not use it for my other children).
So, we used Saxon Algebra 2 and seriously thought about throwing the book and CD out the back door on several occasions, but completed it anyway. Then she took the ACT's and was barely above the
national average. The next year she did TT Geometry and was thrilled with it. Her ACT score rose 3 points in Algebra and Geometry after completing TT Geometry. She has gone on to get a B in College
Algebra -- a subject she had previously refused to EVER take. This daughter excelled in other challenging dual enrollment college course work. She was accepted into 6 major universities in our state
and is now doing well in college as a sophomore. Even though algebra is still not her favorite subject, TT helped her get there. I have now used TT Pre-Algebra with my 12-year old and my 14-year old
after they had used Abeka math for 7 years. They are thrilled with TT and that makes our homeschool a happier place for all of us. It is easier than Abeka but it is giving them the skills they need
for college without the tears and frustration my older daughter went through in Saxon Algebra 2.
Why you liked/didn't like the book:
My children found the lectures easy to understand and liked the man's sense of humor (if you homeschool you know this is important amidst the everyday toils high school homeschool involves.) There
were no problems. We only used the the test solutions explanations once or twice the whole year but were glad to have them. With other curriculums we had to consult outside sources on occasion for
higher math. TT saves us that frustration.
Any other helpful hints:
Learn all you can about your kids and the different programs out there. Then find out what works for your family and do it! Never underestimate the power of prayer in your choice. Teach them to enjoy
their work by enjoying yours. They grow up very fast.
Review left August 5, 2010
Level: Teaching Textbooks Geometry Time used: 3 years
My son hated Math-U-See in pre-algebra :0. We started TT for Algebra 1 and continued through Algebra 2 and Geometry. In his last year as a senior, he took College Algebra at our local college and aced it.
(Not a tech school either) Cody excelled in Geometry on the ACT as well, was average in Algebra... (please realize this was his only B in all of high school). In summary, my son did well on testing,
excelled in testing for Geometry, and made an A average in his first year of college algebra using Teaching Textbooks. I can't complain :)
Why you liked/didn't like the book:
No complaints, it does what is needed if used with all of high school math.
Kim Horne
Review left July 8, 2010
Your situation:
started homeschooling middle of last year, went through all the popular math currics, none work for my 10 yr old who is diagnosed with dyscalculia, this year we also decided to homeschool our 8 yr old
Why you liked/didn't like the book:
I will give the good first, this computer program has worked well for my 5th grader. We used the 4th grade cdroms. My son would cry at school during a math lesson and once we started homeschooling
with Saxon and other like that he started banging his head on table, just pulling out the math book sent him into a major melt down. I am confident that this program is a perfect fit for my 10 yr
old. 5 days a week he completes one lesson on his own without a fight. I did have to sit beside him for about three months everyday before he was confident on his own. I highly suggest this for kids
that seem to have problems with math computations or have anxiety when you pull out a math book.
Now the bad, and this is really bad. I am a math tutor for several public school students. This program claims to be the best homeschool math for the 21st century, yet it is a good 2 yrs behind.
When I called to ask about this, the lady just blew me off and laughed, "Our program follows state standards." I let her know that I am a math AND standardized-test tutor for 3 states. I explained that I
loved their product; however, they need to add so much. If you do use this product, contact your local school district and request a copy of their math standardized test prep. Each state gives each
child a s.t.p workbook that has one or two questions that are required for that year. If you only use this product your child will not be considered grade level if you had to put them back in school
or wanted them tested; for most homeschool families we are not concerned with "keeping up" with public school. I think those test prep workbooks are only helpful to children if you take time to go
over each concept. All my students say that their teacher spent one day on several pages, and one page can have up to 5 different concepts. If you can get one of these workbooks, take your time: use the
Teaching Textbooks for "happy time" math every day and take one concept from the state standard workbook, going over it until your child really gets it. Homeschoolers are not slaves to government-produced
standards. We have learned that children are ready when they are ready; enforcing concepts too soon, too fast can change a child forever. My 5th grade students are doing stem and leaf
probability graphs, which is not explored in this product. Also, 5th grade students are required to know mode, median, and average. This product has this in their 7th grade program.
To sum it up, my 5th grade LD student is doing the 4th grade lessons, my 2nd grade student is also using the 4th grade lessons. As with any homeschool product, it's not going to fit the perfect mold
with state standards. If state standards worked for our kids, we would put them back in school.
Any other helpful hints:
Know what is important to you. Do you want your child to pull out their laptop and do their math lesson on their own, then you can go back and check the grade book
do you want to keep up, keep up, until your child is banging their head on the table?
Review left February 12, 2010
Your situation:
Two of my home students have used TT products.
Why you liked/didn't like the book:
Serious warning about the algebra courses: Both TT algebra 1 AND 2 must be completed in order to cover the material expected from any other math programs algebra I. In other words, Teaching Textbooks
does not cover second year high school algebra, in spite of the claims made. I base this observation on what knowledgeable "math people" tell me, as well as on the different algebra program used by
an older child.
The geometry course seemed "adequate" to my husband, who had taught geometry the previous year, using Jacobs' geometry (3rd edition). (He prefers Jacobs'.)
My daughter, who has severe math learning disabilities, is using TT for grade 5 this year. The content is, without question, appropriate for grade 4. Chunks of the material, sadly, are covered by many textbooks for grade 3.
At the risk of sounding unkind, I suspect that the families who rave about Teaching Textbooks are closing their eyes to the "too easy for the grade label" content. Of course a student would enjoy
that !
The lessons truly are, nonetheless, designed for "self-learning". The product excels in this respect. All they need to do is to package the content for the appropriate grade level.
Any other helpful hints:
Examine the materials thoroughly, either at a homeschool conference, or by borrowing materials owned by a friend.
Review left January 27, 2010
I am a single homeschooling mom who works, too, so school curriculum has to be easy to use and not too time consuming to set up. We used and loved Math U See until 5th grade when they went to a full
book on nothing but fractions. I understand the reason why they did that, but for my son it wasn't going to work. Then we tried Saxon with the DIVE CDs and eventually grew to hate it. Now we are
using Teaching Textbooks, which incidentally, your site shows as only high school but they start in 6th grade now. My son is in 6th grade, but tested into 7th grade in Teaching Textbooks.
Teaching Textbooks has been wonderful for my son! While there is NO perfect curriculum of any kind that I have found yet, this is really good. The step by step approach to walking through every
problem was perfect for him! If he gets a problem wrong, he has the option of reworking it or marking it wrong and seeing the solution done for him. After the second wrong attempt at the problem, it
is counted wrong and he can choose to see the problem worked for him or go on to the next one. The only problem is that you can't go in and adjust anything that I have found. Occasionally he has only
gotten part of the answer entered and bumped the enter key and got it counted wrong, and I can't fix that, which I don't like. He can work only half a lesson if that is what I assign for him without
it giving him trouble. The computer grading is wonderful overall, and makes record keeping so much easier. I think that in order to get a true grade, you have to complete the whole lesson, though.
With Saxon, we often worked only every other problem, but with TT, he has needed to do every problem anyway in order to master the concepts, so it hasn't been an issue. I think the amount of review
on each concept is pretty appropriate, but there have been some holes that I had to fill in... like some basics that needed to be drilled more before going forward. It wasn't too time consuming to do
that, however. I am not a math whiz, but I have been able to refresh my skills in watching these lectures when needed to help my son. There are some practice problems and about 20-21 regular problems
in each lesson. Overall we really like Teaching Textbooks and would recommend it.
As with any program, you may occasionally have to watch the lectures to help your child through something. Don't expect it to do everything for you, but it comes close. Some of the ways they teach
are different than I learned and seem harder, but it all gets you to the same answer.
Review left January 5, 2010
I have a daughter with a processing disorder. She is almost 13 years old. Last year she made it almost through Bob Jones Grade 4 in Math. Then she got stuck on long division. I decided to buy
Teaching Textbooks and hope that the visual part of that would fill in the gaps for her.
Why you liked/didn't like the book:
I like TT. It does help students understand concepts in a clearer way. But there is a huge problem with these books. They are a year behind faith based homeschool curriculum. Grade 5 TT is exactly
like Bob Jones Grade 4!
Any other helpful hints:
If you are considering getting this program for the first time get a grade above where your child last placed if she was in a faith based program prior to this. Examples would be; Abeka, Christian
Light Education, Bob Jones, etc. For a chapter to chapter comparison and a recommendation on how to fit this program in to your homeschool read my review at Blueberry House.
Mrs Darling
Review left October 15, 2009
My son didn't like doing the math worksheets/workbooks and hated math. Up to this point, we had tried Saxon, Horizons, and Singapore math. They were all boring and tedious to him. We started using
Teaching Textbooks this fall and we love it. Because the reviews said it tends to be a little easier than other curriculums, we are doing both 4th and 5th grade this year at the same time. It is very
interactive with the CD, so he doesn't mind doing 2 or 3 lessons a day. He is in 4th grade, so next year he will be on schedule to do the 6th grade level. I can't say enough about this program. It
has solved my math problem. He was falling behind last year because he hated it so, we weren't keeping up with the lessons.
Why you liked/didn't like the book:
I love it because the CDs provide him with the interaction he needs and keeps it interesting for him. You can also go at your own pace and cover more lessons than just one on the days he's up for it.
These are also a great value, because if you've noticed the resale value on these is terrific. People tend to get at least 80% of what they originally paid for them when they resell them.
The 4th grade program is brand new this year, but it is even more interactive and entertaining than the other levels. Great for a young child.
Any other helpful hints:
Consider bumping it up a level or do 2 years in one as it tends to be a little easy for the grade level.
Rebecca Morse
Review left August 22, 2009
Grade: Algebra Time: 5 years
I have 4 kids and they have all used Teaching Textbooks.
I never thought I would hear them say "Math was enjoyable" but with TT, that is exactly what they say. Not to mention they are all A students!
My children and I enjoyed every aspect of the Algebra lessons. I have recommended Teaching Textbooks to so many others. What an amazing product.
Review left August 24, 2009
Grade: Algebra 2 Time: 1 year
I'm the mother of 8, 4 are now in college where they are all A students. I've used Saxon, Bob Jones Homesat, Chalkdust, and this past year with my 5th child Teaching Textbooks. The format was very
student friendly. However, when my daughter began to prepare for the ACT/SAT tests we realised I should have been more involved. Though she had completed the textbook, there were several concepts the book had not even covered. We postponed her test, covered those concepts with Chalkdust and Saxon, and her scores were up to the level of her siblings. But the panic was palpable.
Please do not market this book as an Algebra 2 text. It does NOT include all the concepts normally taught in Algebra 2.
Review left June 3, 2009
This is my 3rd year homeschooling and we have struggled w/ our math curriculum.
We started Teaching Textbooks a couple months ago. My dd loves it! She finished it w/o complaints. Anytime she has a question she goes right to the computer and has it answered, which is helpful
when I am working w/ my 2 other children. My only problem is I wish they had 4th grade!
Any other helpful hints:
It is a 30-day trial... try it, you won't send it back!
M Kern
Review left September 16, 2008
Grade: Math 7 Time: 1 full year
I am a former professional electrical engineer with a master's degree, so I know math and feel comfortable teaching it. We had been using Math-U-See, but it was teacher intensive and we had a new
baby this year, so I wanted something my daughter could use more independently. We had done 5 levels of Math-U-See, but when I compared the content against TT Math 7, I saw that more than 75 percent
would be review for her. It was then I realized how advanced Math-U-See had been. Anyway, I thought the review would be good for her and she could do it by herself, so we went with it.
Why you liked/didn't like the book:
Some of the word problems are ambiguous and at least two my daughter brought to my attention because she could not understand the solutions were just plain incorrect. What bothers me most about the
mistakes is that they were not careless errors in calculation, but errors in mathematical thinking or expression. It made me wonder how many other errors my daughter was studying without knowing to
ask me for a second opinion. The units on geometry she asked me about were also troublesome. In classifying triangles, the multiple choice questions ask the student to select one from scalene,
isosceles, equilateral, acute, right, or obtuse that BEST describes the triangle pictured. At least two answers will always be correct, and how does "best" apply in this case? The answer is right or
not. There is no meaning to "best" in this situation. Unfortunately, there were too many little instances like those, whenever I did get a look at the teaching, that made me wonder if the authors
were really qualified to write math textbooks. The program format is fine, but the quality of instruction was a little lacking for my taste.
Any other helpful hints:
If you are math-knowledgeable, you will not like this curriculum. Math-U-See is far superior, IMO, precisely because Steve Demme knows his math AND knows how to convey it.
Review left May 27, 2008
Grade: Pre-algebra Time: 1 year
Your situation:
Two Children in pre-algebra--ages 11 and 13.
I have two children who are almost done with the pre-algebra text this year. We really like this program. It is not overwhelming and they get their math done in about 30-45 minutes time. We did take
the pre-test to find out where they would place, and it put them in pre-algebra. We have found that the text is VERY slow for them, and I wonder if I wouldn't have been better off putting them in
algebra. But I feel that it is probably better to go more slowly and get these concepts ingrained than to rush over it.
Any other helpful hints:
Check through the text and see if you can skip some of the early lessons or at least just breeze over them quickly.
hs mom
Review left May 19, 2008
Grade: Pre-Calculus Time: seven months
I have home schooled our daughter since the beginning. I love math and we have used Modern Curriculum Press and ABeka until this year.
In our opinion, Teaching Textbooks should NOT have released this course without the solutions guide. Even though I love math and have done very well every other year, this book has been difficult
without the solutions guide for assistance when uncertain. Also there is no index, which is frustrating.
Wait until they improve Pre-Calculus before buying. I emailed them with one question this year, and never got a response.
Kim Kowalik
Review left March 17, 2008
Grade: Geometry Time: half a year
I bought this product mainly for my benefit because I didn't want to have to figure out any questions that might come up. I knew I wouldn't have the time this year with an active change-of-life
toddler in the house plus the middle schooler and the high schooler. (I sound like such a bad mother, but that's the truth of the matter. Mothering was a lot easier when I was younger!)
Why you liked/didn't like the book:
This product has really met and exceeded my expectations because, true to the author's claims, I don't have to do anything more than photocopy the tests and then grade them. BUT, I do think that it's
too easy for our high schooler. This course takes a significant chunk of his study time, about an hour each day, but he's getting nearly everything correct and doing very well with it. He says it's
kind of tedious to do because none of the examples make you think outside the box, they really are just repeats of the examples in the lessons. He says he likes the fact that the practice problems
are so detailed. And he says he likes the lectures because they point out other information that isn't in the text lesson. Before I discourage you, I should mention that he's an advanced student
who's been doing two maths this year, this geometry class, and Foerster's Algebra II and Trigonometry, which is a great deal harder than the other products currently available. (But I do have the
solutions manual, so it's working for me too!) He's halfway done with the Geometry text and will definitely finish it before this school term ends, which is great because that will free up more time
for SAT prep next school year. All in all, I like the product and am seriously considering adding the Pre-Algebra and Algebra I books to our home for our middle schooler, who is not as advanced with
his math skills. (Besides, I doubt our toddler is going to get in much less mischief on a daily basis for at least another year or two, so I need the break this kind of product can afford me.)
Any other helpful hints:
Yes, it is expensive, but remember that your time is worth something! If you are like me and you lack the time to do the problems with your child in order to figure out how or where he went wrong,
then this may be just the thing for you.
Grade: Algebra 2 Time: 6 months
Your situation:
Co-op math teacher
Why you liked/didn't like the book:
It presents topics in a completely different order than any other Algebra II book I've seen. This is a problem when I am attempting to help struggling students by assigning additional problems since
I cannot pull problems from other texts easily. Also, some very important Algebra II topics (functions, absolute value, inequalities) are left until the very end. I personally think not teaching
synthetic division until the additional topics of Precalculus is just ridiculous. (My students agree after I showed them how much easier it is than long polynomial division.) The reordering of the
topics makes it difficult to apply some standard solving techniques to help my struggling students. Some of my students are doing well with this product, but I have 3 or 4 students who are really
struggling with this format.
Any other helpful hints:
Because this does not cover all of the topics normally covered in an Algebra II course you will need to plan on staying with Teaching Textbooks through PreCalculus (where some of those topics are
picked up). Do not assume your high schooler is going to be disciplined enough to watch the DVDs and the problem solutions without your guidance. The program does not work if you try to just read the
text. (Word problems are only solved on the DVD).
Your situation:
This was my 1st year using Teaching Textbooks; we've used Saxon before this year.
Hi, this is coming from a homeschool student. We have been using Saxon math up until this year, which I'm doing Algebra 2. This is the 1st year I've actually understood the math. Here are 2 reasons
that it is so great: 1. They actually SHOW you how to do the problems, and 2. When you get a problem wrong, you can go back and the disc can show you how to do it, so you don't waste a lot of time
trying to figure out something you would have never guessed. ;)
My daughter has always loved and done well at math (this year she is 10 and is doing pre-algebra). Last year we decided to go with Saxon. She got bogged down by it, and began to hate math! We worked
around some problems, added and subtracted things to help her through it. But it never "clicked". I couldn't pay that much, do so much work to try to get her to understand it, AND have it not work
anymore! I heard about Teaching Textbooks and that it was written for Homeschoolers, which is what we are, so decided to try it.
Why you liked/didn't like the book:
We have only done a few weeks of it so far, but it's a WORLD of difference for her. It's clearly explained, has step by step answers for every problem, and it clicked for her right away! She loves
doing her math again, and is moving right along!
Any other helpful hints:
They have a test you can do on-line that helps you figure out what level to start on. I also read reviews on-line written by people who had used it before. There was a wealth of info about it, some
written by the students themselves! The student first watches the explanation of it (cd in computer), then does the work on paper. There is also a book, so the student can read along while the cd is
playing if they wish.
Your situation:
My daughter is in 7th grade and will finish Algebra 1 this summer.
Why you liked/didn't like the book:
Last year my daughter was ready for pre-algebra and we used a different curriculum. The highest math I took was algebra 2, so any time she needed help with pre-algebra I would have to sit down and try to figure it out for her. Even with the teacher's book it would take time. Another homeschool mom recommended Teaching Textbooks to us and we tried it this year. We have loved it. She totally gets
it now. The best part is that when she needs help she just pops in the CD and it's like having a tutor there with her. No more need for me to fumble around and try to figure it out. If she needs help
she gets it right away and can move on.
Any other helpful hints:
I would highly recommend Teaching Textbooks to people who feel their kids are reaching their parents math level. It's difficult to teach something that isn't like second nature to you. Let them use
this curriculum and quickly get the help they need!
Grade levels used: Algebra 1 and 2 Time 2 years
Your situation:
One child in Algebra 1 (she's in 8th grade) and one child in Algebra 2 (he's finishing up 11th)
Why you liked/didn't like the book:
These courses were recommended to me by a homeschool mom who was formerly a math teacher. My son struggled through AOP Algebra and several supplements, although he is very bright. These two courses
are extremely good. The cost is a little expensive, the highest investment I've made. But I felt the math is so very important to nail down. They include CDs as well as textbooks and ALL the problems
are given a complete solution. I'm so grateful to my friend for having suggested Teaching Textbooks to me.
Any other helpful hints:
Although it's rather expensive, if you have more than one child, I would strongly consider this curriculum. I really feel it's even worth it for just one.
Patrice Berke
Interesting complexity classes $PR \subsetneq C \subsetneq R$
I'm working on a proof-checker that can verify termination proofs. The fundamental method it provides for constructing such proofs is to translate the program into primitive recursion. Basically, I
provide a combinator $\rho$ typed as:
$\rho: \forall A,B:(A\rightarrow Nat \rightarrow A)\rightarrow (A \rightarrow B)\rightarrow A\rightarrow Nat \rightarrow B$
which, in the notation defined here, constructs $h$ given $f$ and $g$.
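For concreteness, here is a minimal Python transliteration of a combinator with that type (my own sketch, not the actual implementation from the proof-checker; the names `rho`, `f`, `g` are illustrative). Under one plausible reading of the type, it iterates the step function $f$ exactly $n$ times, feeding it the loop index, and then applies the finishing function $g$:

```python
def rho(f, g):
    """Primitive-recursion combinator:
       rho : (A -> Nat -> A) -> (A -> B) -> A -> Nat -> B."""
    def h(a, n):
        acc = a
        for i in range(n):   # exactly n iterations: a bounded for-loop,
            acc = f(acc, i)  # so no unbounded looping is possible
        return g(acc)
    return h

# Example: factorial, built purely from rho.
fact = rho(lambda acc, i: acc * (i + 1), lambda acc: acc)
# fact(1, 5) == 120
```

With only `rho` (and no fixed-point combinator), every well-typed program terminates, since each application contributes a loop bound that is fixed before the loop starts.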
Although the term language contains a fixed-point combinator and is therefore Turing-complete, terms that use it have a "tentative" flag in their type that indicate this. The $\rho$ combinator and
the fixed-point combinator are the only two language primitives that allow for recursion or looping of any sort (i.e., without either of these two combinators, all you've got is a finite-state
machine). Therefore, all terms that are well-typed and non-tentatively typed are primitive recursive.
What I'm wondering is if there are any interesting complexity classes that you can build by starting with primitive-recursive constructions, and adding a finite number of other functions $Nat \rightarrow Nat$, each of which is in R but not in PR, and allowing composition with these functions. It's easy to come up with non-interesting examples of such classes, e.g. "primitive recursion plus
the Ackermann function", but I'm looking for any that have sufficiently interesting properties that it would be worth adding the functions which characterize them as admitted axioms in the proof-checker.
Tags: computational-complexity, soft-question
Maybe I misunderstood something, but you seem to claim that membership in PR is decidable. It is not (see Rice’s theorem). I’m not sure, but it might be possible if your programs are written in
some non-Turing-equivalent programming language. Could you clarify? – Antonio E. Porreca Aug 13 '10 at 10:59
...right, which is what I get for trying to ask coherent math questions at 5:30 AM. The system I'm working in is one based on Luo's Extended Calculus of Constructions. It contains a fixed-point
combinator in addition to the above $\rho$ combinator and therefore is Turing-complete, but expressions that invoke it have their types flagged to indicate that they do. But, every well-typed term
that is not so-flagged is in PR, because $\rho$ is the only other combinator that provides recursion. – dfranke Aug 13 '10 at 12:50
So, to rephrase my question without having to go into the gory details of the type calculus: can you get any interesting complexity classes by starting with a system that permits only
primitive-recursive constructions, and augmenting it with certain $Nat \rightarrow Nat$ total functions that are not in PR. – dfranke Aug 13 '10 at 12:53
Could you please edit the original question according to your last comment? – Antonio E. Porreca Aug 13 '10 at 13:35
No problem. Done. – dfranke Aug 13 '10 at 14:03
1 Answer
First of all, it’s certainly possible to obtain some intermediate class by taking a language that only computes PR functions (say, an imperative programming language using only for
loops) and adding any total computable but non PR function (e.g., Ackermann’s function). The resulting language L is non-universal, because it only computes total functions: you can
still construct a computable but non-L-computable function by diagonalisation. However, L is clearly more powerful than the original language.
As for “interesting”, I guess it really depends on what you mean by that.
If “interesting” means “of practical use”, then one could answer that all computable functions of practical use are PR, since a non-PR function requires an amount of time to compute
that is not, in turn, PR. Considering that time bounds such as 2^n, 2^2^n, 2^2^2^n, …, are all PR, you see that there isn’t much hope to compute non-PR functions for large values of n.
If “interesting” means “logically interesting”, then I think the answer is “yes”. I’m somewhat familiar with Girard’s System F (also called “second order λ-calculus” or “polymorphic
λ-calculus”), described for instance in Girard’s Proofs and Types (freely available here). The functions that can be computed in F are “exactly those which are provably total in [second
order Peano arithmetic]” (page 123), and among these we have Ackermann’s function. There is an explicit λ-term for it on these slides (page 20).
If I recall correctly, the standard calculus of constructions includes System F and only computes total functions, so it also provides an example.
Sorry, I answered before your edit to the original question; now I see you already excluded the “PR + Ackermann” case. :-) – Antonio E. Porreca Aug 13 '10 at 14:08
Nifty. So the answer is that I don't even need $\rho$ as an axiom: the underlying system is already expressive enough to construct it and much more beyond it, and I need to finish
wrapping my brain around the properties of that system. – dfranke Aug 13 '10 at 14:35
Alright, I get it now. What I was missing before was the essentialness of encoding natural numbers as iterator functions. That's where you get the "potential energy", so to speak, in
order to compute complex functions without the need for a recursion operator. Thanks again. – dfranke Aug 15 '10 at 15:33
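To make the last comment concrete: encoding a number as "the function that iterates k times" is enough to define Ackermann's function from bounded loops alone. A Python sketch of the System F trick (illustrative only, not from the thread):

```python
def iterate(f, k, x):
    """Apply f to x exactly k times -- the only looping primitive used."""
    for _ in range(k):
        x = f(x)
    return x

def ack(m, n):
    """Ackermann's function via iteration of higher-order functions:
    ack(0) is the successor, and ack(m+1, n) = ack(m)^(n+1)(1).
    No unbounded recursion appears anywhere."""
    f = lambda x: x + 1
    for _ in range(m):
        f = (lambda g: lambda k: iterate(g, k + 1, 1))(f)
    return f(n)
```

The loop over m builds ever-faster-growing functions; the "potential energy" lives in treating numbers as iterators rather than as inert data.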
add comment
Not the answer you're looking for? Browse other questions tagged computational-complexity soft-question or ask your own question. | {"url":"http://mathoverflow.net/questions/35461/interesting-complexity-classes-pr-subsetneq-c-subsetneq-r","timestamp":"2014-04-16T16:26:46Z","content_type":null,"content_length":"62285","record_id":"<urn:uuid:5d56037a-2008-41e8-83d3-a0f63cb5eee6>","cc-path":"CC-MAIN-2014-15/segments/1397609524259.30/warc/CC-MAIN-20140416005204-00563-ip-10-147-4-33.ec2.internal.warc.gz"} |
no. of comparisons in linear/binary search
06-05-2004 #1
no. of comparisons in linear/binary search
hi, what is the no. of comparison in a worst case unsorted one-dimensional array of size N with linear search?
is it 'N'?
and if worst case sorted one-dimensional array of size N with binary search,
is it (log N / log2) ?
but what is best case in binary search?
I dunno if I'm right, but I'll try.
worst case for linear search: n
worst case binary search: log n
best case binary search: 1 (assuming it found the result on the first attempt)
Well, just to add a small element....
You refer to the order by the Big O notation.
Order for Linear Search is: O(n)
and order for Binary Search is: O(log n).
Worst case is log n / log 2 rounded up to the next whole integer.
So: 8,000,000 elements would require at most 23 searches.
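You can sanity-check that figure with a quick sketch (Python, names mine): count the comparisons an ordinary binary search actually makes and compare with log n / log 2 rounded up.

```python
import math

def worst_case_comparisons(n):
    # The bound discussed above: log n / log 2, rounded up.
    return math.ceil(math.log2(n))

def binary_search_count(arr, target):
    """Binary search over a sorted list, returning how many
    elements were compared against the target."""
    lo, hi, comparisons = 0, len(arr) - 1, 0
    while lo <= hi:
        mid = (lo + hi) // 2
        comparisons += 1
        if arr[mid] == target:
            return comparisons
        elif arr[mid] < target:
            lo = mid + 1
        else:
            hi = mid - 1
    return comparisons  # target absent: a worst-case path
```

worst_case_comparisons(8_000_000) gives 23, matching the figure above (for exact powers of two the tally can come out one higher, depending on how the equality test is counted).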
I thought that was a given.
then what is
Worst case unsorted two-dimensional array of size N with linear search &
Best case unsorted two -dimensional array of size N with linear search?
I think the first one is log(NxN) / log 2 ?
but what is the second one? I haven't any idea.
or can someone told me where can I find more information(URL)?
thk a lot.
Last edited by alice; 06-05-2004 at 10:46 PM.
Wouldn't the array be set up in such a way that you're only searching one of the two columns? Even if you did have to look at each column, I imagine you're looking at O(2n). I'm not sure about
that though. Best case could be O(1) in any search.
Worst case unsorted two-dimensional array of size N with linear search &
Best case unsorted two -dimensional array of size N with linear search?
Worst case for unsorted arrays is always the number of elements, since the only way to search an unsorted array is by linear search.
Summary: OPTIMAL CONTROL PROBLEMS GOVERNED BY SEMILINEAR PARABOLIC EQUATIONS
H. AMANN AND P. QUITTNER
Abstract. We study the existence of optimal controls for problems governed
by semilinear parabolic equations. The nonlinearities in the state equation
need not be monotone and the data need not be regular. In particular, the
control may be any bounded Radon measure. Our examples include problems
with nonlinear boundary conditions and parabolic systems.
1. Introduction
In [8] we developed a general existence and uniqueness theory for semilinear parabolic problems involving measures and low regularity data. The proofs were based on a generalized variation-of-constants formula in suitable extrapolated spaces and the Banach fixed point theorem. Other papers on this topic mostly use approximation of singular data by regular ones and, consequently, require a priori estimates
(usually based on maximum principles) for the approximating solutions in order to
solve the original problem. The approach in [8] is much simpler and more flexible.
In particular, it can be easily used for problems with nonmonotone nonlinearities
and for systems. In [8] we also established stability estimates and compactness
properties which play an important role in control theory. | {"url":"http://www.osti.gov/eprints/topicpages/documents/record/073/3919202.html","timestamp":"2014-04-21T00:28:59Z","content_type":null,"content_length":"8362","record_id":"<urn:uuid:5f46adbb-d11b-4b35-ae06-167809555a07>","cc-path":"CC-MAIN-2014-15/segments/1398223202774.3/warc/CC-MAIN-20140423032002-00389-ip-10-147-4-33.ec2.internal.warc.gz"} |
United States Patent Application 20040153430
Kind Code A1
Sayad, Saed August 5, 2004
Method and apparatus for data analysis
A computer system, method and computer program product for enabling data analysis is provided. An analytical engine, executable on a computer, provides a plurality of knowledge elements from one or
more data sources. The analytical engine is linked to a data management system for accessing and processing the knowledge elements. The knowledge elements include a plurality of records and/or
variables. The analytical engine updates the knowledge elements dynamically. The analytical engine defines one or more knowledge entities, each knowledge entity including at least one knowledge element.
The knowledge entity, as defined by the analytical engine, consists of a data matrix having a row and a column for each variable, and the knowledge entity accumulates sets of combinations of
knowledge elements for each variable in the intersection of the corresponding row and column. The invention provides a method for data analysis involving the analytical engine, including a method of
enabling parallel processing, scenario testing, dimension reduction, dynamic queries and distributed processing. The analytical engine disclosed also enables process control. A related computer
program product is also described.
Inventors: Sayad, Saed; (North York, CA)
Correspondence Address: Eugene J. A. Gierczak
Suite 2500
20 Queen Street West
M5H 3S1
Serial No.: 668354
Series Code: 10
Filed: September 24, 2003
Current U.S. Class: 706/61; 707/E17.005
Class at Publication: 706/061
International Class: H04B 001/74
1) A computer implemented system for enabling data analysis comprising: A computer linked to one or more data sources adapted to provide to the computer a plurality of knowledge elements; and An
analytical engine, executed by the computer, that relies on one or more of the plurality of knowledge elements to enable intelligent modeling, wherein the analytical engine includes a data management
system for accessing and processing the knowledge elements.
2) The computer implemented system claimed in claim 1, wherein the analytical engine defines one or more knowledge entities, each of which is comprised of at least one knowledge element.
3) The computer implemented system as claimed in claim 2, wherein the analytical engine is adapted to update dynamically the knowledge elements with a plurality of records and a plurality of
4) The computer implemented system claimed in claim 2, wherein the knowledge entity consists of a data matrix having a row and a column for each variable, and wherein the knowledge entity accumulates
sets of combinations of knowledge elements for each variable in the intersection of the corresponding row and column.
5) The computer implemented system as claimed in claim 4, wherein the analytical engine enables variables and/or records to be dynamically added to, and subtracted from, the knowledge entity.
6) The computer implemented system claimed in claim 5, wherein the analytical engine enables the deletion of a variable by deletion of the corresponding row and/or column, and wherein the knowledge
entity remains operative after such deletion.
7) The computer implemented system claimed in claim 5, wherein the analytical engine enables the addition of a variable by addition of a corresponding row and/or column to the knowledge entity, and
wherein the knowledge entity remains operative after such addition.
8) The computer implemented system claimed in claim 5, wherein an update of the knowledge entity by the analytical engine does not require substantial re-training or re-calibration of the knowledge
9) The computer implemented system claimed in claim 2, wherein the analytical engine enables application to the knowledge entity of one or more of: incremental learning operations, parallel
processing operations, scenario testing operations, dimension reduction operations, dynamic query operations or distributed processing operations.
10) A computer implemented system for enabling data analysis comprising: a) A computer linked to one or more data sources adapted to provide to the computer a plurality of knowledge elements; and b)
An analytical engine, executed by the computer that relies on one or more of the plurality of knowledge elements to enable intelligent modeling, wherein the analytical engine is linked to a data
management system for accessing and processing the knowledge elements.
11) A method of data analysis comprising: a) Providing an analytical engine, executed by a computer, that relies on one or more of a plurality of knowledge elements to enable intelligent modeling,
wherein the analytical engine includes a data management system for accessing and processing the knowledge elements; and b) Applying the intelligent modeling to the knowledge elements so as to engage
in data analysis.
12) A method of enabling parallel processing, comprising the steps of: a) Providing an analytical engine, executed by a computer, that relies on one or more of a plurality of knowledge elements to
enable intelligent modeling, wherein the analytical engine includes a data management system for accessing and processing the knowledge elements; b) Subdividing one or more databases into a plurality
of parts and calculating a knowledge entity for each part using the same or a number of other computers to accomplish the calculations in parallel; c) Combining all or some of the knowledge entities
to form one or more combined knowledge entities; and d) Applying the intelligent modeling to the knowledge elements of the combined knowledge entities so as to engage in data analysis.
13) A method of enabling scenario testing, wherein a scenario consists of a test of a hypothesis, comprising the steps of: a) Providing an analytical engine, executed by a computer, that relies on
one or more of a plurality of knowledge elements to enable intelligent modeling, wherein the analytical engine includes a data management system for accessing and processing the knowledge elements,
whereby the analytical engine is responsive to introduction of a hypothesis to create dynamically one or more new intelligent models; and b) Applying the one or more new intelligent models to see
future possibilities, obtain new insights into variable dependencies as well as to assess the ability of the intelligent models to explain data and predict outcomes.
14) A method of enabling dimension reduction, comprising the steps of: a) Providing an analytical engine, executed by a computer, that relies on one or more of a plurality of knowledge elements to
enable intelligent modeling, wherein the analytical engine includes a data management system for accessing and processing the knowledge elements; and b) Reducing the number of variables in the
knowledge entity by the analytical engine defining a new variable based on the combination of any two variables, and applying the new variable to the knowledge entity.
15) The method as claimed in claim 14, further comprising the step of successively applying a series of new variables so as to accomplish further dimension reduction.
16) A method of enabling dynamic queries: a) Providing an analytical engine, executed by a computer, that relies on one or more of a plurality of knowledge elements to enable intelligent modeling,
wherein the analytical engine includes a data management system for accessing and processing the knowledge elements; b) Establishing a series of questions that are directed to arriving at one or more
particular outcomes; and c) Applying the analytical engine so as to select one or more sequences of the series of questions based on answers given to the questions, so as to rapidly converge on the
one or more particular outcomes.
17) A method of enabling distributed processing: a) Providing an analytical engine, executed by a computer, that relies on one or more of a plurality of knowledge elements to enable intelligent
modeling, wherein the analytical engine includes a data management system for accessing and processing the knowledge elements, whereby the analytical engine enables the combination of a plurality of
knowledge entities into a single knowledge entity; and b) Applying the intelligent modeling to the single knowledge entity.
18) The computer-implemented system claimed in claim 1, wherein the analytical engine: a) Enables one or more records to be added or removed dynamically to or from the knowledge entity; b) Enables
one or more variables to be added or removed dynamically to or from the knowledge entity; c) Enables use in the knowledge entity of one or more qualitative and/or quantitative variables; and d)
Supports a plurality of different data analysis methods.
19) The computer-implemented system claimed in claim 18, wherein the knowledge entity is portable to one or more remote computers.
20) The computer-implemented system claimed in claim 1, wherein the intelligent modeling applied to relevant knowledge elements enables one or more of: a) credit scoring; b) predicting portfolio
value from market conditions and other relevant data; c) credit card fraud detection based on credit card usage data and other relevant data; d) process control based on data inputs from one or more
process monitoring devices and other relevant data; e) consumer response analysis based on consumer survey data, consumer purchasing behaviour data, demographics, and other relevant data; f) health
care diagnosis based on patient history data, patient diagnosis best practices data, and other relevant data; g) security analysis predicting the identity of a subject from biometric measurement data
and other relevant data; h) inventory control analysis based on customer behaviour data, economic conditions and other relevant data; i) sales prediction analysis based on previous sales, economic
conditions and other relevant data; j) computer game processing whereby the game strategy is dictated by the previous moves of one or more other players and other relevant data; k) robot control
whereby the movements of a robot are controlled based on robot monitoring data and other relevant data; and l) A customized travel analysis whereby the favorite destination of a customer is predicted
based on previous behavior and other relevant data; and
21) A computer program product for use on a computer system for enabling data analysis and process control comprising: a) a computer usable medium; and b) computer readable program code recorded on
the computer useable medium, including: i) program code that defines an analytical engine that relies on one or more of the plurality of knowledge elements to enable intelligent modeling, wherein the
analytical engine includes a data management system for accessing and processing the knowledge elements.
22) The computer program product as claimed in claim 21, where the program code defining the analytical engine instructs the computer system to define one or more knowledge entities, each of which is
comprised of at least one knowledge element.
23) The computer program product as claimed in claim 22, wherein the program code defining the analytical engine instructs the computer system to update dynamically the knowledge elements with a
plurality of records and a plurality of variables.
24) The computer program product as claimed in claim 22, wherein the program code defining the analytical engine instructs the computer system to establish the knowledge entity so as to consist of a
data matrix having a row and a column for each variable, and wherein the knowledge entity accumulates sets of combinations of knowledge elements for each variable in the intersection of the
corresponding row and column.
25) The computer program product as claimed in claim 24, wherein the program code defining the analytical engine instructs the computer system to enable variables and/or records to be dynamically
added to, and subtracted from, the knowledge entity.
26) The computer program product as claimed in claim 25, wherein the program code defining the analytical engine instructs the computer system to enable the deletion of a variable by deletion of the
corresponding row and/or column, and wherein the knowledge entity remains operative after such deletion.
27) The computer program product claimed in claim 25, wherein the program code defining the analytical engine instructs the computer system to enable the addition of a variable by addition of a
corresponding row and/or column to the knowledge entity, and wherein the knowledge entity remains operative after such addition.
28) The computer program product claimed in claim 25, wherein the program code defining the analytical engine instructs the computer system to enable the update of the knowledge entity without
substantial re-training or re-calibration of the knowledge elements.
29) The computer program product claimed in claim 22, wherein the program code defining the analytical engine instructs the computer system to enable application to the knowledge entity of one or
more of: incremental learning operations, parallel processing operations, scenario testing operations, dimension reduction operations, dynamic query operations or distributed processing operations.
30) A computer-implemented system as claimed in claim 1, wherein the analytical engine enables process control.
31) The computer-implemented system as claimed in claim 30, wherein the analytical engine enables fault diagnosis.
32) A method according to claim 11, wherein the method is implemented in a digital signal processor chip or any miniaturized processor medium.
[0001] Data analysis is used in many different areas, such as data mining, statistical analysis, artificial intelligence, machine learning, and process control to provide information that can be
applied to different environments. Usually this analysis is performed on a collection of data organised in a database. With large databases, computations required for the analysis often take a long
time to complete.
[0002] Databases can be used to determine relationships between variables and provide a model that can be used in the data analysis. These relationships allow the value of one variable to be
predicted in terms of the other variables. Minimizing computational time is not the only requirement for successful data analysis. Overcoming rapid obsolescence of models is another major challenge.
[0003] Currently tasks such as prediction of new conditions, process control, fault diagnosis and yield optimization are done using computers or microprocessors directed by mathematical models. These
models generally need to be "retrained" or "recalibrated" frequently in dynamic environments because changing environmental conditions render them obsolete. This situation is especially serious when
very large quantities of data are involved or when large changes to the models are required over short periods of time. Obsolescence can originate from new data values being drastically different
from historical data because of an unforeseen change in the environment of a sensor, one or more sensors becoming inoperable during operation or new sensors being added to a system for example.
[0004] In real-world applications, there are several other requirements that often become vital in addition to computational speed and rapid model obsolescence. For example, in some cases the model
will need to deal with a stream of data rather than a static database. Also, when databases are used they can rapidly outgrow the available computer storage available. Furthermore, existing computer
facilities can become insufficient to accomplish model re-calibration. Often it becomes completely impractical to use a whole database for re-calibration of the model. At some risk, a sample is taken
from the database and used to obtain the re-calibrated model. In developing models, "scenario testing" is often used. That is, a variety of models need to be tried on the data. Even with moderately
sized databases this can be a processing intensive task. For example, although combining variables in a model to form a new model is very attractive from an efficiency viewpoint (termed here
"dimension reduction"), the number of possible combinations combined with the data processing usually required for even one model, especially with a large database, makes the idea impractical with
current methods. Finally, often models are used in situations where they must provide an answer very quickly, sometimes with inadequate data. In credit scoring for example, a large number of risk
factors can affect the credit rating and the interviewer wishes to obtain the answer from a credit assessment model as rapidly as possible with a minimum of data. Also, in medical diagnosis, a doctor
would like to converge on the solution with a minimum of questions. Methods which can request the data needed based on maximizing the probability of arriving at a conclusion as quickly as possible
(termed here "dynamic query") would be very useful in many diagnostic applications.
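One simple way such a "dynamic query" could be realized (an illustration only — a greedy worst-case-elimination rule, not a method claimed in the application) is to always ask the question whose least informative answer still rules out the most remaining hypotheses:

```python
def next_question(hypotheses, questions, answer_of):
    """Pick the question whose worst-case answer leaves the fewest
    candidate hypotheses still standing.
    hypotheses: candidate conclusions (e.g. diagnoses, credit ratings)
    questions:  identifiers of askable questions
    answer_of:  answer_of(hypothesis, question) -> the answer that
                hypothesis would produce"""
    def worst_remaining(q):
        buckets = {}
        for h in hypotheses:
            buckets.setdefault(answer_of(h, q), []).append(h)
        return max(len(b) for b in buckets.values())
    return min(questions, key=worst_remaining)
```

Asked repeatedly, with the hypothesis list filtered after each answer, this converges on a conclusion with few questions, which is the behaviour the credit interviewer and the doctor in the examples above would want.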
[0005] Finally, mobile applications are now becoming very important in technology. A method of condensing the knowledge in a large database so that it can be used with a model in a portable device is
highly desirable.
[0006] This situation is becoming increasingly important in an extremely diverse range of areas ranging from finances to health care and from sports forecasting to retail needs.
[0007] The present invention relates to a method and apparatus for data analysis.
[0008] The primary focus in the previous art has been to focus upon reducing computational time. Recent developments in database technology are beginning to emphasize "automatic summary tables"
("AST's") that contain pre-computed quantities needed by "queries" to the database. These AST's provide a "materialized view" of the data and greatly increase the speed of response to queries.
Efficiently updating the AST's with new data records, as the new data becomes available for the database has been the subject of many publications. Initially only very simple queries were considered.
Most recently, a method of incrementally updating AST's that applies to all "aggregate functions" has been proposed. However, although the AST's speed up the
response to queries, they are still very extensive compilations of data and therefore incremental re-computation is generally a necessity for their maintenance. Palpanas et al. proposed what they
term as "the first" general algorithm to efficiently re-compute only the groups in the AST which need to be updated in order to reply to the query. However, their method is a very involved one. It
includes a considerable amount of work to select the groups that are to be updated. Their experiments indicate that their method runs in 20% to 60% of the time required for a "full refresh" of the
AST. There is increasing interest in using AST's to respond to queries that originate from On-line Analytical Processing ("OLAP"). These can involve standard statistical or data-mining methods.
[0009] Chen et al. examined the problem of applying OLAP to dynamic rather than static situations. In particular, they were interested in multi-dimensional regression analysis of time-series data
streams. They recognized that it should be possible to use only a small number of pre-computed quantities rather than all of the data. However, the algorithms that they propose are very involved
and constrained in their utility.
[0010] U.S. Pat. No. 6,553,366 shows how great economies of data storage requirements and time can be obtained by storing and using various "scalable data mining functions" computed from a relational database. This is the most recent version of the "automatic summary table" idea.
[0011] Thus, although the prior art has recognized that pre-computing quantities needed in subsequent modeling calculations saves time and data storage, the methods developed fail to satisfy some or
all of the other requirements mentioned above. Often they can add records but cannot remove records from their "static" databases. Adding new variables or removing variables "on the fly" (in real time) is not generally known. They are not used to combine databases or for parallel processing. Scenario testing is very limited and does not involve dimension reduction. Dynamic query is not done, with static decision trees being commonplace. Methods are generally embedded in large office information systems with so many quantities computed and so many ties to existing interfaces that portability
is challenging.
[0012] It is therefore an object of the present invention to provide a method of and apparatus for data analysis that obviates or mitigates some of the above disadvantages.
[0013] In one aspect, the present invention provides a "knowledge entity" that may be used to perform incremental learning. The knowledge entity is conveniently represented as a matrix where one
dimension represents independent variables and the other dimension represents dependent variables. For each possible pairing of variables, the knowledge entity stores selected combinations of either
or both of the variables. These selected combinations are termed the "knowledge elements" of the knowledge entity. This knowledge entity may be updated efficiently with new records by matrix
addition. Furthermore, data can be removed from the knowledge entity by matrix subtraction. Variables can be added or removed from the knowledge entity by adding or removing a set of cells, such as a
row or column to one or both dimensions.
[0014] Preferably the number of joint occurrences of the variables is stored with the selected combinations.
[0015] Exemplary combinations of the variables are the sum of values of the first variable for each joint occurrence, the sum of values of the second variable for each joint occurrence, and the sum
of the product of the values of each variable.
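As a minimal sketch of one knowledge element for a pair of quantitative variables (the class and attribute names are illustrative assumptions, not part of the claims), the four accumulators above support both addition of new records and exact removal of old ones:

```python
class KnowledgeElement:
    """Accumulated combinations for one (x, y) pair of variables."""
    def __init__(self):
        self.n = 0          # number of joint occurrences
        self.sum_x = 0.0    # sum of x values
        self.sum_y = 0.0    # sum of y values
        self.sum_xy = 0.0   # sum of the products x * y

    def add(self, x, y):
        """Incremental learning: fold a new record in by addition."""
        self.n += 1
        self.sum_x += x
        self.sum_y += y
        self.sum_xy += x * y

    def remove(self, x, y):
        """Records can be subtracted back out, with no re-training."""
        self.n -= 1
        self.sum_x -= x
        self.sum_y -= y
        self.sum_xy -= x * y
```

Because updates are plain additions and subtractions, the element after removing a record matches what it would have been had the record never arrived (up to floating-point rounding).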
[0016] In one further aspect of the present invention, there is provided a method of performing a data analysis by collecting data in such a knowledge entity and utilising it in a subsequent analysis.
[0017] According to another aspect of the present invention, there is provided a process modelling system utilising such a knowledge entity.
[0018] According to other aspects of the present invention, there is provided either a learner or a predictor using such a knowledge entity.
[0019] The term "analytical engine" is used to describe the knowledge entity together with the methods required to use it to accomplish incremental learning operations, parallel processing
operations, scenario testing operations, dimension reduction operations, dynamic query operations and/or distributed processing operations. These methods include but are not limited to methods for
data collecting, management of the knowledge elements, modelling and use of the modelling (for prediction for example). Some aspects of the management of the knowledge elements may be delegated to a
conventional data management system (simple summations of historical data for example). However, the knowledge entity is a collection of knowledge elements specifically selected so as to enable the
knowledge entity to accomplish the desired operations. When modeling is accomplished using the knowledge entity it is referred to as "intelligent modeling" because the resulting model receives one or
more characteristics of intelligence. These characteristics include: the ability to immediately utilize new data, to purposefully ignore some data, to incorporate new variables, to not use specific
variables and, if necessary, to be able to utilize these characteristics on-line (at the point of use) and in real time.
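To illustrate how these operations could compose (all names are illustrative, and covariance is just one quantity such a model might derive), knowledge elements built on separate data partitions merge by element-wise addition, and a model can then be read off the merged elements without revisiting the raw data:

```python
def element(records):
    """Build a knowledge element (n, sum_x, sum_y, sum_xy) from (x, y) pairs."""
    n = len(records)
    sx = sum(x for x, _ in records)
    sy = sum(y for _, y in records)
    sxy = sum(x * y for x, y in records)
    return (n, sx, sy, sxy)

def merge(e1, e2):
    """Parallel or distributed processing: combine elements computed
    independently on different partitions of the database."""
    return tuple(a + b for a, b in zip(e1, e2))

def covariance(e):
    """A derived quantity available from the accumulated elements alone."""
    n, sx, sy, sxy = e
    return sxy / n - (sx / n) * (sy / n)
```

Merging yields exactly the element the whole database would have produced, which is what lets several computers accomplish the calculations in parallel, as in claim 12.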
[0020] Embodiments of the invention will now be described by way of example only with reference to the accompanying drawings in which:
[0021] FIG. 1 is a schematic diagram of a processing apparatus;
[0022] FIG. 2 is a representation of a controller for the processing apparatus of FIG. 1;
[0023] FIG. 3 is a schematic of the knowledge entity used in the controller of FIG. 2;
[0024] FIG. 4 is a flow chart of a method performed by the controller of FIG. 2;
[0025] FIG. 5 is another flow chart of a method performed by the controller of FIG. 2;
[0026] FIG. 6 is a further flow chart of a method performed by the controller of FIG. 2;
[0027] FIG. 7 is a yet further flow chart of a method performed by the controller of FIG. 2;
[0028] FIG. 8 is a still further flow chart of a method performed by the controller of FIG. 2;
[0029] FIG. 9 is a schematic diagram of a robotic arm;
[0030] FIG. 10 is a schematic diagram of a Markov chain;
[0031] FIG. 11 is a schematic diagram of a Hidden Markov model;
[0032] FIG. 12 is another schematic diagram of a Hidden Markov model.
[0033] To assist in understanding the concepts embodied in the present invention and to demonstrate the industrial applicability thereof with its inherent technical effect, a first embodiment will
describe how the analytical engine enables application to the knowledge entity of incremental learning operations for the purpose of process monitoring and control. It will be appreciated that the
form of the processing apparatus is purely for exemplary purposes to assist in the explanation of the use of the knowledge entity shown in FIG. 3, and is not intended to limit the application to the
particular apparatus or to process control environments. Subsequent embodiments will likewise illustrate the flexibility and general applicability in other environments.
[0034] Referring therefore to FIG. 1, a dryer 10 has a feed tube 12 for receiving wet feed 34. The feed tube 12 empties into a main chamber 30. The main chamber 30 has a lower plate 14 to form a plenum 32. An air inlet 18 forces air into a heater 16 to provide air to the plenum 32. An outlet tube 28 receives dried material from the main chamber 30. An air outlet 20 exhausts air from the main chamber 30.
[0035] The dryer 10 is operated to produce dried material, and it is desirable to control the rate of production. An exemplary operational goal is to produce 100 kg of dried material per hour.
[0036] The dryer receives wet feed 34 through the feed tube 12 at an adjustable and observable rate. The flow rate from outlet tube 28 can also be monitored. The flow rate from outlet tube 28 is
related to operational parameters such as the wet feed flow rate, the temperature provided by heater 16, and the rate of air flow from air inlet 18. The dryer 10 incorporates a sensor for each
operational parameter, with each sensor connected to a controller 40 shown in detail in FIG. 2. The controller 40 has a data collection unit 42, which receives inputs from the sensors associated with
the wet feed tube 12, the heater 16, the air inlet 18, and the output tube 28 to collect data.
[0037] The controller 40 has a learner 44 that processes the collected data into a knowledge entity 46. The knowledge entity 46 organises the data obtained from the operational parameters and the
output flow rate. The knowledge entity 46 is initialised to notionally contain all zeroes before its first use. The controller 40 uses a modeller 48 to form a model of the collected data from the
knowledge entity 46. The controller 40 has a predictor 50 that can set the operational parameters to try to achieve the operational goal. Thus, as the controller operates the dryer 10, it can monitor
the production and incrementally learn a better model.
[0038] The controller 40 operates to adjust the operational parameters to control the rate of production. Initially the dryer 10 is operated with manually set operational parameters. The initial
operation will produce training data from the various sensors, including output rate.
[0039] The data collector 42 receives signals related to each of the operational parameters and the output rate, namely a measure of the wet feed rate from the wet feed tube 12, a measure of the air
temperature from the heater 16, a measure of the air flow from the air inlet 18, and a measure of the output flow rate from the output tube 28.
[0040] The learner 44 transforms the collected data into the knowledge entity of FIG. 3 as each measurement is received. As can be seen in FIG. 3, the knowledge entity 46 is organised as an
orthogonal matrix having a row and a column for each of the sensed operating parameters. The intersection of each row and column defines a cell in which a set of combinations of the variable in the
respective row and column is accumulated.
[0041] In the embodiment of FIG. 3, for each pairing of variables, a set of four combinations is obtained. The first combination, n.sub.i,j, is a count of the number of joint occurrences of the two variables. The second combination, .SIGMA.X.sub.i, represents the total of all measurements of the first variable X.sub.i, which is one of the sensed operational parameters. The third combination, .SIGMA.X.sub.j, records the total of all measurements of the second variable X.sub.j, which is another of the sensed operational parameters. Finally, .SIGMA.X.sub.iX.sub.j records the total of the products of all measurements of both variables. It is noted that the summations are over all observed measurements of the variables.
[0042] These combinations are additive, and accordingly can be computed incrementally. For example, given observed measurements [3, 4, 5, 6] for the variable X.sub.i, then .SIGMA.X.sub.i=3+4+5+6=18. If the measurements are subdivided into two collections of observed measurements [3, 4] and [5, 6], for example from sensors at two different locations, then .SIGMA..sub.[3,4]X.sub.i=7 and .SIGMA..sub.[5,6]X.sub.i=11, so .SIGMA..sub.[3,4,5,6]X.sub.i=.SIGMA..sub.[3,4]X.sub.i+.SIGMA..sub.[5,6]X.sub.i=18.
[0043] The nature of the subdivision is not relevant, so the combination can be computed incrementally for successive measurements, and two collections of measurements can be combined by addition of
their respective combinations.
[0044] In general, the combinations of parameters accumulated should have the property that given a first and second collection of data, the value of the combination of the collections may be
efficiently computed from the values of the collections themselves. In other words, the value obtained for a combination of two collections of data may be obtained from operations on the value of the
collections rather than on the individual elements of the collections.
[0045] It is also recognised that the above combinations have the property that given a collection of data and additional data, which can be combined into an augmented collection of data, the value
of the combination for the augmented collection of data is efficiently computable from the value of the combination for the collection of data and the value of the combination for the additional
data. This property allows combination of two collections of measurements.
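The additive property of paragraphs [0042]-[0045] can be sketched in a few lines (a minimal sketch, not from the patent; names are illustrative):

```python
# The four combinations (n, sum_x, sum_y, sum_xy) computed for two collections
# of paired measurements can simply be added to give the combinations for the
# combined collection: F(A u B) = F(A) + F(B).

def combinations(xs, ys):
    """Compute the four knowledge elements for paired measurements."""
    return {
        "n": len(xs),
        "sum_x": sum(xs),
        "sum_y": sum(ys),
        "sum_xy": sum(x * y for x, y in zip(xs, ys)),
    }

def add(a, b):
    """Combine two sets of knowledge elements by simple addition."""
    return {k: a[k] + b[k] for k in a}

# The subdivision [3, 4] / [5, 6] used in paragraph [0042], pairing the
# variable with itself for brevity:
whole = combinations([3, 4, 5, 6], [3, 4, 5, 6])
part1 = combinations([3, 4], [3, 4])
part2 = combinations([5, 6], [5, 6])
assert add(part1, part2) == whole
assert whole["sum_x"] == 18  # 3 + 4 + 5 + 6
```

The same addition works whether the subsets come from successive measurements or from two different sensors, which is what makes incremental and distributed accumulation possible.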
[0046] An example of data received by the data collector 42 from the dryer of FIG. 1 in four separate measurements is as follows:
TABLE 1
Measurement   Wet Feed Rate   Air Temperature   Air Flow   Dry Output Rate
1             10              30                110        2
2             15              35                115        3
3             5               40                120        1.5
4             15              50                140        6
[0047] With the measurements shown above in Table 1, measurement 1 is transformed into the following record represented as an orthogonal matrix:
TABLE 2 (Measurement 1; each cell lists n.sub.ij, .SIGMA.X.sub.i, .SIGMA.X.sub.j, .SIGMA.X.sub.iX.sub.j)
                  Wet Feed Rate     Air Temperature   Air Flow            Dry Output Rate
Wet Feed Rate     1, 10, 10, 100    1, 10, 30, 300    1, 10, 110, 1100    1, 10, 2, 20
Air Temperature   1, 30, 10, 300    1, 30, 30, 900    1, 30, 110, 3300    1, 30, 2, 60
Air Flow          1, 110, 10, 1100  1, 110, 30, 3300  1, 110, 110, 12100  1, 110, 2, 220
Dry Output Rate   1, 2, 10, 20      1, 2, 30, 60      1, 2, 110, 220      1, 2, 2, 4
[0048] This measurement is added to the knowledge entity 46 by the learner 44. Each subsequent measurement is transformed into a similar table and added to the knowledge entity 46 by the learner 44.
[0049] For example, upon receipt of the second measurement, the cell at the intersection of the wet feed row and air temperature column would be updated to contain:
TABLE 3 (Wet Feed Rate row, Air Temperature column)
n = 1 + 1 = 2
.SIGMA.X.sub.1 = 10 + 15 = 25
.SIGMA.X.sub.2 = 30 + 35 = 65
.SIGMA.X.sub.1X.sub.2 = 300 + 525 = 825
[0050] Successive measurements can be added incrementally to the knowledge entity 46, since the knowledge entity for a new set of data is equal to the sum of the knowledge entity for an old set of data with the knowledge entity of the additional data. Each of the combinations F used in the knowledge entity 46 has the property that F(A∪B)=F(A)+F(B) for disjoint collections of measurements A and B. Further properties of the knowledge entity 46 will be discussed in more detail below.
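A sketch of the learner described above (not the patent's code), assuming the variable ordering (wet feed rate, air temperature, air flow, output rate) of Table 1; after two measurements the wet-feed/air-temperature cell reproduces Table 3:

```python
# The knowledge entity is a matrix of cells [n, sum_i, sum_j, sum_ij],
# initialised to zeroes and updated incrementally for each measurement.

def new_entity(k):
    return [[[0, 0, 0, 0] for _ in range(k)] for _ in range(k)]

def learn(entity, record):
    for i, xi in enumerate(record):
        for j, xj in enumerate(record):
            cell = entity[i][j]
            cell[0] += 1          # n_ij: joint occurrence count
            cell[1] += xi         # sum of X_i
            cell[2] += xj         # sum of X_j
            cell[3] += xi * xj    # sum of X_i * X_j

ke = new_entity(4)
learn(ke, [10, 30, 110, 2])   # measurement 1 from Table 1
learn(ke, [15, 35, 115, 3])   # measurement 2
# Cell at wet-feed row, air-temperature column matches Table 3:
assert ke[0][1] == [2, 25, 65, 825]
```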
[0051] As data are collected, the controller 40 accumulates data in the knowledge entity 46 which may be used for modelling and prediction. The modeller 48 determines the parameters of a
predetermined model based on the knowledge entity 46. The predictor 50 can then use the model parameters to determine desirable settings for the operational parameters.
[0052] After the controller 40 has been trained, it can begin to control the dryer 10 using the predictor 50. Suppose that the operator instructs the controller 40 through the user interface 52 to
set the production rate to 100 kg/h by varying the air temperature at heater 16, and that the appropriate control method uses a linear regression model.
[0053] The modeller 48 computes regression coefficients as shown in FIG. 4 generally by the numeral 100. At step 102, the modeller computes a covariance table. Covariance between two variables X.sub.i and X.sub.j may be computed as

Covar.sub.i,j = (.SIGMA.X.sub.iX.sub.j - (.SIGMA.X.sub.i)(.SIGMA.X.sub.j)/n.sub.ij) / n.sub.ij
[0054] Since each of these terms is one of the combinations stored in the knowledge entity 46 at the intersection of row i and column j, computation of the covariance for each pair of variables is done with two divisions, one multiplication and one subtraction. When i=j, the covariance is equal to the variance, i.e. Covar.sub.i,i=Var.sub.i. The modeller 48 uses this relationship to compute the covariance between each pair of variables.
[0055] Then at step 104, the modeller 48 computes a correlation table. The correlation between two variables X.sub.i and X.sub.j may be computed as

R.sub.i,j = Covar.sub.i,j / sqrt(Var.sub.i Var.sub.j)
[0056] Since each of these terms appears in the covariance table obtained from the knowledge entity 46 at step 102, the correlation coefficient can be computed with one multiplication, one square root, and one division. The modeller 48 uses this relationship to compute the correlation between each pair of variables.
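Steps 102 and 104 can be sketched directly from the stored combinations; the two-variable data below are hypothetical, chosen so the results are easy to verify by hand:

```python
# Covariance and correlation come straight from the four stored sums,
# without revisiting the raw measurements.
import math

def covar(cell):
    n, sx, sy, sxy = cell
    return (sxy - sx * sy / n) / n

# Combinations accumulated for two hypothetical variables over the
# measurements x = [1, 2, 3], y = [2, 4, 7]:
cell_xy = [3, 6, 13, 31]   # n, sum_x, sum_y, sum_xy  (1*2 + 2*4 + 3*7 = 31)
cell_xx = [3, 6, 6, 14]
cell_yy = [3, 13, 13, 69]  # 4 + 16 + 49

cv = covar(cell_xy)                 # population covariance, step 102
r = cv / math.sqrt(covar(cell_xx) * covar(cell_yy))   # correlation, step 104
assert abs(cv - 5 / 3) < 1e-12      # (31 - 6*13/3)/3
assert 0.98 < r < 1.0               # x and y are nearly perfectly correlated
```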
[0057] At step 106, the operator selects a variable Y, for example X.sub.4, to model through the user interface 52. At step 107, the modeller 48 computes .beta.=R.sub.i,j.sup.-1R.sub.y,j using the
entries in the correlation table.
[0058] At step 108, the modeller 48 first computes the standard deviation s.sub.y of the dependent variable Y and the standard deviation s.sub.j of each independent variable X.sub.j. Conveniently, the standard deviations s.sub.y=sqrt(Var.sub.y) and s.sub.j=sqrt(Var.sub.j) are computed using the entries from the covariance table. The modeller 48 then computes the coefficients

b.sub.j = .beta..sub.j(s.sub.y/s.sub.j)
[0059] At step 109, the modeller 48 computes an intercept a={overscore (X.sub.4)}-b.sub.1{overscore (X.sub.1)}-b.sub.2{overscore (X.sub.2)}-b.sub.3{overscore (X.sub.3)}, where the overscores denote mean values. The modeller 48 then provides the coefficients a, b.sub.1, b.sub.2, b.sub.3 to the predictor 50.
[0060] The predictor 50 can then estimate the dependent variable as Y=a+b.sub.1X.sub.1+b.sub.2X.sub.2+b.sub.3X.sub.3 from new measurements of X.sub.1, X.sub.2 and X.sub.3.
[0061] The knowledge entity shown in FIG. 3 provides the analytical engine significant flexibility in handling varying collections of data. Referring to FIG. 5, a method of amalgamating knowledge from another controller is shown generally by the numeral 110. The controller 40 first receives at step 112 a new knowledge entity from another controller. The new knowledge entity is organised to be of the same form as the existing knowledge entity 46. This new knowledge entity may be based upon a similar process in another factory, or another controller in the same factory, or even standard test data or historical data. The controller 40 provides at step 114 the new knowledge entity to the learner 44. The learner 44 adds the new knowledge to the knowledge entity 46 at step 116. The new knowledge is added by performing a matrix addition (i.e. addition of similar terms) between the knowledge entity 46 and the new knowledge entity. Once the knowledge entity 46 has been updated, the model is updated at step 118 by the modeller 48 based on the updated knowledge entity 46.
[0062] In some situations it may be necessary to reverse the effects of amalgamating knowledge shown in FIG. 5. In this case, the method of FIG. 6 may be used to remove knowledge. Referring therefore
to FIG. 6, a method of removing knowledge from the knowledge entity 46 is shown generally by the numeral 120. To begin, at step 122, the controller 40 accesses a stored auxiliary knowledge entity.
This may be a record of previously added knowledge from the method of FIG. 5. Alternatively, this may be a record of the knowledge entity at a specific time. For example, it may be desirable to
eliminate the knowledge added during the first hour of operations, as it may relate to startup conditions in the plant which are considered irrelevant to future modelling. The stored auxiliary
knowledge entity has the same form as the knowledge entity 46 shown in FIG. 3. The controller 40 provides the auxiliary knowledge entity to the learner 44 at step 124. The learner 44 at step 126 then
removes the auxiliary knowledge from the knowledge entity 46 by subtracting the auxiliary knowledge entity from knowledge entity 46. Finally at step 128, the model is updated with the modified
knowledge entity 46.
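The amalgamation and removal operations of FIGS. 5 and 6 reduce to element-wise addition and subtraction over cells; a minimal sketch (with a hypothetical one-cell entity for brevity):

```python
# Because every knowledge element is a simple sum, merging two entities is
# element-wise addition, and removing previously amalgamated knowledge is
# element-wise subtraction, which restores the original entity exactly.

def combine(a, b, sign=1):
    return [[[u + sign * v for u, v in zip(ca, cb)]
             for ca, cb in zip(ra, rb)]
            for ra, rb in zip(a, b)]

ke = [[[2, 5, 7, 18]]]          # one-cell entity: n, sum_i, sum_j, sum_ij
other = [[[1, 3, 4, 12]]]       # knowledge from another controller

merged = combine(ke, other)                  # FIG. 5: amalgamate
assert merged == [[[3, 8, 11, 30]]]
restored = combine(merged, other, sign=-1)   # FIG. 6: remove
assert restored == ke
```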
[0063] To further refine the modelling, an additional sensor may be added to the dryer 10. For example, a sensor to detect humidity in the air inlet may be used to consider the effects of external
humidity on the system. In this case, the model may be updated by performing the method shown generally by the numeral 130 in FIG. 7. First a new sensor is added at step 132. The learner 44 then
expands the knowledge entity by adding a row and a column. The combinations in the new row and the new column have notional values of zero. The controller 40 then proceeds to collect data at step
136. The collected data will include that obtained from the old sensors and that of the new sensor. This information is learned at step 138 in the same manner as before. The knowledge entity 46 in
the analytical engine can then be used with the new sensor to obtain the coefficients of the linear regression using all the sensors, including the new sensor. It will be appreciated that, since the values of `n` in the new row and column are initially zero, there will be a significant difference between the values of `n` in the new row and column and in the old rows and columns. This difference reflects that more data has been collected for the original rows and columns. It will therefore be recognised that provision of the value of `n` contributes to the flexibility of the knowledge entity.
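The expansion step of FIG. 7 can be sketched as follows (the single-variable starting entity is hypothetical):

```python
# A new row and column of zeroed cells is appended for the new sensor, after
# which learning proceeds as before. The `n` counts in the new cells start at
# zero, reflecting that no data has yet been collected for the new variable.

def expand(entity):
    k = len(entity)
    for row in entity:
        row.append([0, 0, 0, 0])
    entity.append([[0, 0, 0, 0] for _ in range(k + 1)])

ke = [[[5, 10, 10, 25]]]    # one existing variable with n = 5
expand(ke)
assert len(ke) == 2 and len(ke[0]) == 2
assert ke[0][1] == [0, 0, 0, 0]      # new column: no joint observations yet
assert ke[0][0][0] == 5              # old data untouched
```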
[0064] It may also be desirable to eliminate a sensor from the model. For example, it may be discovered that air flow does not affect the output rate, or that air flow may be too expensive to measure. The method shown generally as 140 in FIG. 8 allows an operational parameter to be removed from the knowledge entity 46. At step 142, an operational parameter is identified as no longer relevant. The operational parameter corresponds to a variable in the knowledge entity 46. The learner 44 then contracts the knowledge entity at step 144 by deleting the row and column corresponding to the removed variable. The model is then updated at step 146 to obtain the linear regression coefficients for the remaining variables, eliminating use of the deleted variable.
[0065] It will be noted in each of these examples that the update is accomplished without requiring a summing operation over the individual values of each of the previous records. Similarly, subtraction is performed without requiring a new summing operation over the remaining records. No substantial re-training or re-calibration is required.
Distributed and Parallel Data Processing
[0066] A particularly useful attribute of the knowledge entity 46 in the analytical engine is that it allows databases to be divided up into groups of records with each group processed separately,
possibly in separate computers. After processing, the results from each of these computers may be combined to achieve the same result as though the whole data set had been processed all at once in
one computer. The analytical engine is constructed so as to enable application to the knowledge entity of such parallel processing operations. This can achieve great economies of hardware and time
resources. Furthermore, instead of being all from the one database, some of these groups of records can originate from other databases. That is, they may be "distributed" databases. The combination
of diverse databases to form a single knowledge entity and hence models which draw upon all of these databases is then enabled. That is, the analytical engine enables application to the knowledge
entity of distributed processing as well as parallel processing operations.
[0067] As an illustration, if the large database (or distributed databases) can be divided into ten parts then these parts may be processed on computers 1 to 10 inclusive, for example. In this case,
these computers each process the data and construct a separate knowledge entity. The processing time on each of these computers depends on the number of records in each subset but the time required
by an eleventh computer to combine the records by processing the knowledge entity is small (usually a few milliseconds). For example, with a dataset with 1 billion records that normally requires 10
hours to process in a single computer, the processing time can be decreased to 1 hour and a few seconds by subdividing the dataset into ten parts.
[0068] To demonstrate this attribute, the following example considers a very small dataset of six records and an example of interpretation of dryer output rate data from three dryers. If, for example, the output rate from the third dryer is to be predicted from the output rate from the other two
dryers then an equation is required relating it to these other two output rates. The data is shown in the table below where X.sub.1, X.sub.2 and X.sub.3 represent the three output rates. The sample
dataset with six records and three variables is set forth below at Table 4.
TABLE 4
X.sub.1   X.sub.2   X.sub.3
2         3         5
3         4         7
1         1         3
2         3         6
4         4         8
3         5         7
[0069] With such a small amount of data it is practical to use multiple linear regression to obtain the needed relationship:
[0070] Multiple linear regression for the dataset shown in Table 4 provides the relationship:

X.sub.3 = 1.652 + 1.174X.sub.1 + 0.424X.sub.2
[0071] However, if this dataset consisted of a billion records instead of only six then multiple linear regression on the whole dataset at once would not be practical. The conventional approach would
be to take only a random sample of the data and obtain a multiple linear regression model from that, hoping that the resulting model would represent the entire dataset.
[0072] Using the knowledge entity 46, the analytical engine can use the entire dataset for the regression model, regardless of the size of the data set. This can be illustrated using only the six
records shown as follows and dividing the dataset into only three groups.
[0073] Step 1: Divide the dataset into three subsets with two records in each, and compute a knowledge entity for each subset. The data in subset 1 has the form shown below in Table 5.
[0074] Subset 1:
TABLE 5
X.sub.1   X.sub.2   X.sub.3
2         3         5
3         4         7
[0075] From the data in Table 5 above, a knowledge entity I (Table 6) is calculated for subset 1 using a first computer.
TABLE 6 (knowledge entity I; each cell lists n, .SIGMA.X.sub.i, .SIGMA.X.sub.j, .SIGMA.X.sub.iX.sub.j)
          X.sub.1        X.sub.2        X.sub.3
X.sub.1   2, 5, 5, 13    2, 5, 7, 18    2, 5, 12, 31
X.sub.2   2, 7, 5, 18    2, 7, 7, 25    2, 7, 12, 43
X.sub.3   2, 12, 5, 31   2, 12, 7, 43   2, 12, 12, 74
[0077] As described above, the knowledge entity 46 is built from basic units, each of which includes an input variable X.sub.j, an output variable X.sub.i and a set of combinations indicated as W.sub.ij, as shown in Table 7:
TABLE 7
          X.sub.j
X.sub.i   W.sub.ij
[0078] Where W.sub.ij includes one or more of the following four basic elements:
[0079] N.sub.ij is the total number of joint occurrences of the two variables
[0080] .SIGMA.X.sub.i is the sum of variable X.sub.i
[0081] .SIGMA.X.sub.j is the sum of variable X.sub.j
[0082] .SIGMA.X.sub.iX.sub.j is the sum of the products of variables X.sub.i and X.sub.j
[0083] In some applications it may be advantageous to include additional knowledge elements for specific calculation reasons. For example, .SIGMA.X.sup.3, .SIGMA.X.sup.4 and .SIGMA.(X.sub.iX.sub.j).sup.2 can generally be included in the knowledge entity in addition to the four basic elements mentioned above without adversely affecting the intelligent modelling capabilities.
[0084] The data in subset 2 has the form shown below in Table 8.
[0085] Subset 2:
TABLE 8
X.sub.1   X.sub.2   X.sub.3
1         1         3
2         3         6
[0086] A knowledge entity II (Table 9) is calculated for subset 2 (Table 8) using a second computer.
TABLE 9 (knowledge entity II)
          X.sub.1        X.sub.2        X.sub.3
X.sub.1   2, 3, 3, 5     2, 3, 4, 7     2, 3, 9, 15
X.sub.2   2, 4, 3, 7     2, 4, 4, 10    2, 4, 9, 21
X.sub.3   2, 9, 3, 15    2, 9, 4, 21    2, 9, 9, 45
[0087] Similarly, for subset 3 shown in Table 10, a knowledge entity III (Table 11) is computed using a third computer.
[0088] Subset 3:
TABLE 10
X.sub.1   X.sub.2   X.sub.3
4         4         8
3         5         7
TABLE 11 (knowledge entity III)
          X.sub.1        X.sub.2        X.sub.3
X.sub.1   2, 7, 7, 25    2, 7, 9, 31    2, 7, 15, 53
X.sub.2   2, 9, 7, 31    2, 9, 9, 41    2, 9, 15, 67
X.sub.3   2, 15, 7, 53   2, 15, 9, 67   2, 15, 15, 113
[0090] Step 2: Calculate a knowledge entity IV (Table 12) by adding together the three previously calculated knowledge entities using a fourth computer.
TABLE 12 (composite knowledge entity IV)
          X.sub.1         X.sub.2          X.sub.3
X.sub.1   6, 15, 15, 43   6, 15, 20, 56    6, 15, 36, 99
X.sub.2   6, 20, 15, 56   6, 20, 20, 76    6, 20, 36, 131
X.sub.3   6, 36, 15, 99   6, 36, 20, 131   6, 36, 36, 232
[0091] Step 3: Calculate the covariance matrix from knowledge entity IV using the following equation. If i=j the covariance is the variance. Each of the terms used in the covariance matrix is available from the composite knowledge entity shown in Table 12.

TABLE 13
Covar.sub.ij = (.SIGMA.X.sub.iX.sub.j - (.SIGMA.X.sub.i)(.SIGMA.X.sub.j)/N.sub.ij) / N.sub.ij
[0092] The resulting covariance matrix from Table 12 is set out below at Table 14.
TABLE 14
          X.sub.1        X.sub.2        X.sub.3
X.sub.1   0.916666667    1              1.5
X.sub.2   1              1.555555556    1.833333333
X.sub.3   1.5            1.833333333    2.666666667
[0093] Step 4: Calculate the correlation matrix from the covariance matrix using the following equation.
TABLE 15
R.sub.ij = Covar.sub.ij / sqrt(Var.sub.i Var.sub.j), where Var.sub.i = Covar.sub.ii and Var.sub.j = Covar.sub.jj
[0094] Correlation matrix:
TABLE 16
          X.sub.1        X.sub.2        X.sub.3
X.sub.1   1              0.837435789    0.959403224
X.sub.2   0.837435789    1              0.900148797
X.sub.3   0.959403224    0.900148797    1
[0095] Step 5: Select the dependent variable y (X.sub.3) and then slice the correlation matrix into a matrix for the independent variables R.sub.ij and a vector for the dependent variable R.sub.yj. Calculate the population coefficients .beta..sub.j for the independent variables X.sub.j using the relationship .beta.=R.sub.ij.sup.-1R.sub.yj.
[0096] From Table 16, a dependent variable correlation vector R.sub.yj is obtained as shown in Table 17.
TABLE 17
          X.sub.3
X.sub.1   0.959403224
X.sub.2   0.900148797
[0097] Similarly, the independent variables correlation matrix R.sub.ij and its inverse matrix R.sub.ij.sup.-1 for X.sub.1 and X.sub.2 are obtained from Table 16 as set forth below at Tables 18 and 19.
TABLE 18
          X.sub.1        X.sub.2
X.sub.1   1              0.837435789
X.sub.2   0.837435789    1

TABLE 19
          X.sub.1        X.sub.2
X.sub.1   3.347826087    -2.803589382
X.sub.2   -2.803589382   3.347826087
[0099] Calculate the .beta. vector from Tables 17 and 19 to obtain:

TABLE 20
.beta..sub.1 = 0.68826753
.beta..sub.2 = 0.32376893
[0100] Step 6: Calculate the sample coefficients b.sub.j = .beta..sub.j(s.sub.y/s.sub.j).
[0101] s.sub.y is the sample standard deviation of the dependent variable X.sub.3 and s.sub.j is the sample standard deviation of the independent variables (X.sub.1, X.sub.2), each of which can easily be calculated from the knowledge entity 46.
[0102] Step 7: Calculate the intercept a from the following equation (Y is X.sub.3 in our example):

a={overscore (Y)}-b.sub.1{overscore (X)}.sub.1-b.sub.2{overscore (X)}.sub.2- . . . -b.sub.n{overscore (X)}.sub.n

[0103] where any mean value can be calculated from .SIGMA.X.sub.i/N.sub.ii.
[0104] Step 8: Finally, obtain the linear equation which can be used for the prediction:

X.sub.3 = 1.652 + 1.174X.sub.1 + 0.424X.sub.2

[0105] which will be recognised as the same equation calculated from the whole dataset.
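The eight steps can be sketched end to end (a sketch, not the patent's code; it solves the normal equations directly from the covariances, which is algebraically equivalent to the correlation and standard-deviation route of Steps 4-7):

```python
# Three knowledge entities are built on (notionally) separate computers, added
# together, and the regression of X3 on X1, X2 is recovered from the combined
# sums alone, without revisiting the six raw records.

def entity(records, k=3):
    e = [[[0, 0, 0, 0] for _ in range(k)] for _ in range(k)]
    for rec in records:
        for i in range(k):
            for j in range(k):
                c = e[i][j]
                c[0] += 1; c[1] += rec[i]; c[2] += rec[j]; c[3] += rec[i] * rec[j]
    return e

def add(a, b):
    return [[[u + v for u, v in zip(ca, cb)] for ca, cb in zip(ra, rb)]
            for ra, rb in zip(a, b)]

def covar(cell):
    n, sx, sy, sxy = cell
    return (sxy - sx * sy / n) / n

e1 = entity([(2, 3, 5), (3, 4, 7)])      # subset 1 (Table 6)
e2 = entity([(1, 1, 3), (2, 3, 6)])      # subset 2 (Table 9)
e3 = entity([(4, 4, 8), (3, 5, 7)])      # subset 3 (Table 11)
ke = add(add(e1, e2), e3)                # composite entity (Table 12)

c = [[covar(ke[i][j]) for j in range(3)] for i in range(3)]
# Solve the 2x2 normal equations for b1, b2 by Cramer's rule:
det = c[0][0] * c[1][1] - c[0][1] * c[1][0]
b1 = (c[1][1] * c[0][2] - c[0][1] * c[1][2]) / det
b2 = (c[0][0] * c[1][2] - c[1][0] * c[0][2]) / det
means = [ke[i][i][1] / ke[i][i][0] for i in range(3)]
a = means[2] - b1 * means[0] - b2 * means[1]

assert abs(b1 - 27 / 23) < 1e-9          # ~1.174
assert abs(b2 - 39 / 92) < 1e-9          # ~0.424
assert abs(a - 38 / 23) < 1e-9           # ~1.652
```

Splitting the six records differently, or into more subsets, leaves the composite entity and therefore the coefficients unchanged.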
[0106] The above examples have used a linear regression model. Using the knowledge entity 46, the analytical engine can also develop intelligent versions of other models, including, but not limited to, non-linear regression, linear classification, non-linear classification, robust Bayesian classification, naive Bayesian classification, Markov chains, hidden Markov models, principal component analysis, principal component regression, partial least squares, and decision trees.
[0107] An example of each of these will be provided, utilising the data obtained from the process of FIG. 1. Again, it will be recognised that this procedure is not process dependent but may be used
with any set of data.
Linear Classification
[0108] As mentioned above, effective scenario testing depends upon being able to examine a wide variety of mathematical models to see future possibilities and assess relationships amongst variables
while examining how well the existing data is explained and how well new results can be predicted. The analytical engine provides an extremely effective method for accomplishing scenario testing. One important attribute is that it enables many different modelling methods to be examined, including some that involve qualitative (categorical) as well as quantitative (numerical)
quantities. Classification is used when the output (dependent) variable is a categorical variable. Categorical variables can take on distinct values, such as colours (red, green, blue) or sizes
(small, medium, large). In the embodiment of the dryer 10, a filter may be provided in the vent 20, and optionally removed. A categorical variable for the filter has possible values "on" and "off"
reflective of the status of the filter. Suppose the dependent variable X.sub.i has k values. Instead of just one regression model we build k models by using the same steps as set out above with
reference to a model using linear regression.
X.sub.i1=a.sub.1+b.sub.11X.sub.1+b.sub.21X.sub.2+ . . . +b.sub.n1X.sub.n
X.sub.i2=a.sub.2+b.sub.12X.sub.1+b.sub.22X.sub.2+ . . . +b.sub.n2X.sub.n . . .
X.sub.ik=a.sub.k+b.sub.1kX.sub.1+b.sub.2kX.sub.2+ . . . +b.sub.nkX.sub.n
[0109] In the prediction phase, each of the models for X.sub.i1, . . . , X.sub.ik is used to construct an estimate corresponding to each of the k possible values. The k models compete with each other; the model with the highest value is the winner, and determines the predicted one of the k possible values. A further equation may be used to transform the actual values to probabilities.
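The winner-take-all prediction of paragraph [0109] can be sketched as follows. The coefficients are illustrative, not derived from real data, and the patent does not specify the probability transformation here; the sketch uses a softmax, one common choice:

```python
# One regression model per category value is scored; the highest score
# determines the predicted category, and a softmax turns the raw scores
# into probabilities.
import math

models = {                      # category -> (intercept, coefficients)
    "on":  (0.2, [0.9]),
    "off": (0.7, [-0.4]),
}

def predict(x):
    scores = {k: a + sum(b * v for b, v in zip(bs, x))
              for k, (a, bs) in models.items()}
    z = sum(math.exp(s) for s in scores.values())
    probs = {k: math.exp(s) / z for k, s in scores.items()}
    return max(scores, key=scores.get), probs

label, probs = predict([1.0])
assert label == "on"                         # score 1.1 beats 0.3
assert abs(sum(probs.values()) - 1.0) < 1e-12
```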
[0110] Suppose we have a model with two variables (X.sub.1, X.sub.2) and X.sub.2 is a categorical variable with values (A, B). In the example of the dryer, A corresponds to the filter being on, and B corresponds to the filter being off. The knowledge entity 46 for this model will have one row and one column for each categorical value (X.sub.2A, X.sub.2B).
[0111] Table 21 shows a knowledge entity 46 with a categorical variable X.sub.2.
TABLE 21 (each cell lists N, .SIGMA.X.sub.i, .SIGMA.X.sub.j, .SIGMA.X.sub.iX.sub.j)
Row X.sub.1:
  vs X.sub.1:  N.sub.11,  .SIGMA.X.sub.1,  .SIGMA.X.sub.1,  .SIGMA.X.sub.1X.sub.1
  vs X.sub.2A: N.sub.12A, .SIGMA.X.sub.1,  .SIGMA.X.sub.2A, .SIGMA.X.sub.1X.sub.2A
  vs X.sub.2B: N.sub.12B, .SIGMA.X.sub.1,  .SIGMA.X.sub.2B, .SIGMA.X.sub.1X.sub.2B
Row X.sub.2A:
  vs X.sub.1:  N.sub.2A1,  .SIGMA.X.sub.2A, .SIGMA.X.sub.1,  .SIGMA.X.sub.2AX.sub.1
  vs X.sub.2A: N.sub.2A2A, .SIGMA.X.sub.2A, .SIGMA.X.sub.2A, .SIGMA.X.sub.2AX.sub.2A
  vs X.sub.2B: N.sub.2A2B, .SIGMA.X.sub.2A, .SIGMA.X.sub.2B, .SIGMA.X.sub.2AX.sub.2B
Row X.sub.2B:
  vs X.sub.1:  N.sub.2B1,  .SIGMA.X.sub.2B, .SIGMA.X.sub.1,  .SIGMA.X.sub.2BX.sub.1
  vs X.sub.2A: N.sub.2B2A, .SIGMA.X.sub.2B, .SIGMA.X.sub.2A, .SIGMA.X.sub.2BX.sub.2A
  vs X.sub.2B: N.sub.2B2B, .SIGMA.X.sub.2B, .SIGMA.X.sub.2B, .SIGMA.X.sub.2BX.sub.2B
[0112] Table 22 shows a knowledge entity 46 for X.sub.2A
TABLE 22 (each cell lists N, .SIGMA.X.sub.i, .SIGMA.X.sub.j, .SIGMA.X.sub.iX.sub.j)
Row X.sub.1:
  vs X.sub.1:  N.sub.11,  .SIGMA.X.sub.1,  .SIGMA.X.sub.1,  .SIGMA.X.sub.1X.sub.1
  vs X.sub.2A: N.sub.12A, .SIGMA.X.sub.1,  .SIGMA.X.sub.2A, .SIGMA.X.sub.1X.sub.2A
Row X.sub.2A:
  vs X.sub.1:  N.sub.2A1,  .SIGMA.X.sub.2A, .SIGMA.X.sub.1,  .SIGMA.X.sub.2AX.sub.1
  vs X.sub.2A: N.sub.2A2A, .SIGMA.X.sub.2A, .SIGMA.X.sub.2A, .SIGMA.X.sub.2AX.sub.2A
[0113] Table 23 shows a knowledge entity 46 for X.sub.2B
TABLE 23 (each cell lists N, .SIGMA.X.sub.i, .SIGMA.X.sub.j, .SIGMA.X.sub.iX.sub.j)
Row X.sub.1:
  vs X.sub.1:  N.sub.11,  .SIGMA.X.sub.1,  .SIGMA.X.sub.1,  .SIGMA.X.sub.1X.sub.1
  vs X.sub.2B: N.sub.12B, .SIGMA.X.sub.1,  .SIGMA.X.sub.2B, .SIGMA.X.sub.1X.sub.2B
Row X.sub.2B:
  vs X.sub.1:  N.sub.2B1,  .SIGMA.X.sub.2B, .SIGMA.X.sub.1,  .SIGMA.X.sub.2BX.sub.1
  vs X.sub.2B: N.sub.2B2B, .SIGMA.X.sub.2B, .SIGMA.X.sub.2B, .SIGMA.X.sub.2BX.sub.2B
[0114] The knowledge entities 46 shown in Tables 22 and 23 may then be applied to model each value of the categorical variable X.sub.2. Prediction of the categorical variable is then performed by predicting a score for each possible value. The possible value with the highest score is chosen as the value of the categorical variable. The analytical engine thus enables the development of models which involve categorical as well as numerical variables.
Non-Linear Regression and Classification
[0115] The analytical engine is not limited to the generation of linear mathematical models. If the appropriate model is non-linear, then the knowledge entity shown in FIG. 3 is also used. The
combinations used in the table are sufficient to compute the non-linear regression.
[0116] The method of FIG. 7 showed how to expand the knowledge entity 46 to include additional variables. This feature also allows the construction of non-linear regression or classification models. It is noted that the non-linearity here concerns the variables, not the coefficients. Suppose we have a linear model with two variables (X.sub.1, X.sub.2) but we believe Log(X.sub.1) could give a better result. The only thing we need to do is to follow the steps for adding a new variable: Log(X.sub.1) becomes the third variable in the knowledge entity 46, and a regression model can be constructed by the steps explained above. If we do not need X.sub.1 anymore, it can be removed by using the contraction feature described above.
TABLE 24 (each cell lists N, .SIGMA.X.sub.i, .SIGMA.X.sub.j, .SIGMA.X.sub.iX.sub.j; X.sub.3 = Log(X.sub.1))
Row X.sub.1:
  vs X.sub.1: N.sub.11, .SIGMA.X.sub.1, .SIGMA.X.sub.1, .SIGMA.X.sub.1X.sub.1
  vs X.sub.2: N.sub.12, .SIGMA.X.sub.1, .SIGMA.X.sub.2, .SIGMA.X.sub.1X.sub.2
  vs X.sub.3: N.sub.13, .SIGMA.X.sub.1, .SIGMA.X.sub.3, .SIGMA.X.sub.1X.sub.3
Row X.sub.2:
  vs X.sub.1: N.sub.21, .SIGMA.X.sub.2, .SIGMA.X.sub.1, .SIGMA.X.sub.2X.sub.1
  vs X.sub.2: N.sub.22, .SIGMA.X.sub.2, .SIGMA.X.sub.2, .SIGMA.X.sub.2X.sub.2
  vs X.sub.3: N.sub.23, .SIGMA.X.sub.2, .SIGMA.X.sub.3, .SIGMA.X.sub.2X.sub.3
Row X.sub.3:
  vs X.sub.1: N.sub.31, .SIGMA.X.sub.3, .SIGMA.X.sub.1, .SIGMA.X.sub.3X.sub.1
  vs X.sub.2: N.sub.32, .SIGMA.X.sub.3, .SIGMA.X.sub.2, .SIGMA.X.sub.3X.sub.2
  vs X.sub.3: N.sub.33, .SIGMA.X.sub.3, .SIGMA.X.sub.3, .SIGMA.X.sub.3X.sub.3
[0117] Once the knowledge entity 46 has been constructed, the learner 44 can acquire data as shown in FIG. 7. The new variable X.sub.3 notionally represents a new sensor which measures the logarithm
of X.sub.1. However, values of the new variable X.sub.3 may be computed from values of X.sub.1 by a processor rather than by a special sensor. Regardless of how the values are obtained, the learner
44 builds the knowledge entity 46. Then the modeller 48 determines a linear regression of the three variables X.sub.1, X.sub.2, X.sub.3, where X.sub.3 is a non-linear function of X.sub.1. It will
therefore be recognised that operation of the controller 40 is similar for the non-linear regression when the variables are regarded as X.sub.1, X.sub.2, and X.sub.3. The predictor 50 can use a model
such as X.sub.2=a+b.sub.1X.sub.1+b.sub.3 X.sub.3 to predict variables such as X.sub.2.
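As an illustrative sketch (not the patent's implementation), the Python fragment below shows how treating Log(X.sub.1) as a third variable turns the non-linear model X.sub.2 = a + b.sub.1X.sub.1 + b.sub.3Log(X.sub.1) into an ordinary linear regression solved from the same running sums a knowledge entity would store. The data values are invented for the demonstration.

```python
import math

# Sketch (assumed interpretation): the knowledge entity keeps only the
# running sums N, sum(Xi) and sum(Xi*Xj).  Adding X3 = log(X1) as a new
# variable lets the same linear-regression machinery fit the non-linear
# model X2 = a + b1*X1 + b3*log(X1).  Data are illustrative.
x1 = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0]
x2 = [2.0 + 0.5 * v + 3.0 * math.log(v) for v in x1]   # noiseless target
x3 = [math.log(v) for v in x1]                          # the added variable

n = len(x1)
S = lambda xs: sum(xs)                      # sum(Xi)
P = lambda xs, ys: sum(a * b for a, b in zip(xs, ys))   # sum(Xi*Xj)

# Normal equations for X2 = a + b1*X1 + b3*X3, built purely from the
# knowledge elements (no further pass over the raw data is needed).
A = [[n,     S(x1),     S(x3)],
     [S(x1), P(x1, x1), P(x1, x3)],
     [S(x3), P(x1, x3), P(x3, x3)]]
b = [S(x2), P(x1, x2), P(x3, x2)]

# Tiny Gauss-Jordan elimination (fine for a well-conditioned 3x3 system).
for i in range(3):
    piv = A[i][i]
    A[i] = [v / piv for v in A[i]]
    b[i] /= piv
    for r in range(3):
        if r != i:
            f = A[r][i]
            A[r] = [rv - f * iv for rv, iv in zip(A[r], A[i])]
            b[r] -= f * b[i]

a_, b1_, b3_ = b
print(round(a_, 4), round(b1_, 4), round(b3_, 4))
```

Because the demonstration data are noiseless, the solver recovers the generating coefficients (2.0, 0.5, 3.0).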
Dimension Reduction
[0118] As stated earlier, reducing the number of variables in a model is termed "dimension reduction". Dimension reduction can be done by deleting a variable. As shown earlier, using the knowledge
entity the analytical engine easily accommodates this without using the whole database and a tedious re-calibration or re-training step. Such dimension reduction can also be done by the analytical
engine using the sum of two variables or the difference between two variables as a new variable. Again, the knowledge entity permits this step to be done expeditiously and makes extremely
comprehensive testing of different combinations of variables practical, even with very large data sets. Suppose we have a knowledge entity with three variables but we want to decrease the dimension by summing two variables (X.sub.1, X.sub.2). For example, the knowledge elements in the knowledge entity associated with the new variable X.sub.4, which is the sum of the two other variables X.sub.1 and X.sub.2, are calculated as follows:
TABLE 25
(1) X.sub.4 = X.sub.1 + X.sub.2
(2) .SIGMA.X.sub.4 = .SIGMA.(X.sub.1 + X.sub.2) = .SIGMA.X.sub.1 + .SIGMA.X.sub.2
(3) .SIGMA.X.sub.4X.sub.3 = .SIGMA.(X.sub.1 + X.sub.2)X.sub.3 = .SIGMA.X.sub.1X.sub.3 + .SIGMA.X.sub.2X.sub.3
(4) .SIGMA.X.sub.4X.sub.4 = .SIGMA.(X.sub.1 + X.sub.2)(X.sub.1 + X.sub.2) = .SIGMA.X.sub.1X.sub.1 + 2.SIGMA.X.sub.1X.sub.2 + .SIGMA.X.sub.2X.sub.2
[0119] This is a recursive process and can decrease a model with N dimensions to just one dimension if needed. That is, a new variable X.sub.5 can be defined as the sum of X.sub.4 and X.sub.3, and so on.
[0120] Alternatively, if we decide to accomplish the dimension reduction by subtracting the two variables, then the relevant knowledge elements for the new variable X.sub.4 are:
TABLE 26
(1) X.sub.4 = X.sub.1 - X.sub.2
(2) .SIGMA.X.sub.4 = .SIGMA.(X.sub.1 - X.sub.2) = .SIGMA.X.sub.1 - .SIGMA.X.sub.2
(3) .SIGMA.X.sub.4X.sub.3 = .SIGMA.(X.sub.1 - X.sub.2)X.sub.3 = .SIGMA.X.sub.1X.sub.3 - .SIGMA.X.sub.2X.sub.3
(4) .SIGMA.X.sub.4X.sub.4 = .SIGMA.(X.sub.1 - X.sub.2)(X.sub.1 - X.sub.2) = .SIGMA.X.sub.1X.sub.1 - 2.SIGMA.X.sub.1X.sub.2 + .SIGMA.X.sub.2X.sub.2
[0121] The knowledge elements in the above tables can all be obtained from the knowledge elements in the original knowledge entity obtained from the original data set. That is, the knowledge entity
computed for the models without dimension reduction provides the information needed for construction of the knowledge entity of the dimension reduced models.
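The identities above can be checked numerically. The short Python sketch below derives the knowledge elements for X.sub.4 = X.sub.1 + X.sub.2 purely from sums that already exist in the knowledge entity; the numbers are the dryer-example sums quoted in Table 27 below.

```python
# Sketch: knowledge elements for the combined variable X4 = X1 + X2 are
# derived purely from elements already in the knowledge entity.
ke = {"sx1": 15, "sx2": 20, "sx3": 36,
      "sx1x1": 43, "sx1x2": 56, "sx2x2": 76,
      "sx1x3": 99, "sx2x3": 131}

sx4 = ke["sx1"] + ke["sx2"]                           # sum(X4) = sum(X1) + sum(X2)
sx4x3 = ke["sx1x3"] + ke["sx2x3"]                     # sum(X4*X3)
sx4x4 = ke["sx1x1"] + 2 * ke["sx1x2"] + ke["sx2x2"]   # sum(X4*X4)

print(sx4, sx4x3, sx4x4)   # 35 230 231, matching Tables 29 and 30
```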
[0122] Now, returning to the example of Table 4, which showed the output rates for three different dryers, the knowledge entity for the sample dataset is:
TABLE 27 (each cell holds N.sub.ij, .SIGMA.X.sub.i, .SIGMA.X.sub.j, .SIGMA.X.sub.iX.sub.j):
X.sub.1 row: (N.sub.11=6, .SIGMA.X.sub.1=15, .SIGMA.X.sub.1=15, .SIGMA.X.sub.1X.sub.1=43), (N.sub.12=6, .SIGMA.X.sub.1=15, .SIGMA.X.sub.2=20, .SIGMA.X.sub.1X.sub.2=56), (N.sub.13=6, .SIGMA.X.sub.1=15, .SIGMA.X.sub.3=36, .SIGMA.X.sub.1X.sub.3=99)
X.sub.2 row: (N.sub.21=6, .SIGMA.X.sub.2=20, .SIGMA.X.sub.1=15, .SIGMA.X.sub.2X.sub.1=56), (N.sub.22=6, .SIGMA.X.sub.2=20, .SIGMA.X.sub.2=20, .SIGMA.X.sub.2X.sub.2=76), (N.sub.23=6, .SIGMA.X.sub.2=20, .SIGMA.X.sub.3=36, .SIGMA.X.sub.2X.sub.3=131)
X.sub.3 row: (N.sub.31=6, .SIGMA.X.sub.3=36, .SIGMA.X.sub.1=15, .SIGMA.X.sub.3X.sub.1=99), (N.sub.32=6, .SIGMA.X.sub.3=36, .SIGMA.X.sub.2=20, .SIGMA.X.sub.3X.sub.2=131), (N.sub.33=6, .SIGMA.X.sub.3=36, .SIGMA.X.sub.3=36, .SIGMA.X.sub.3X.sub.3=232)
[0123] Table 27 has the same quantities as Table 12. Table 12 was calculated by combining the knowledge entities from data obtained by dividing the original data set into three portions (to illustrate distributed processing and parallel processing). The above knowledge entity was calculated from the original undivided dataset.
[0124] Now, to show dimension reduction can be accomplished by means other than removal of a variable, the data set for variables X.sub.4 and X.sub.3 (where X.sub.4=X.sub.1+X.sub.2) is:
TABLE 28
X.sub.4 = X.sub.1 + X.sub.2   X.sub.3
5   5
7   7
2   3
5   6
8   8
8   7
[0125] The knowledge entity for the X.sub.4, X.sub.3 data set above is:
TABLE 29 (each cell holds N.sub.ij, .SIGMA.X.sub.i, .SIGMA.X.sub.j, .SIGMA.X.sub.iX.sub.j):
X.sub.4 row: (N.sub.44=6, .SIGMA.X.sub.4=35, .SIGMA.X.sub.4=35, .SIGMA.X.sub.4X.sub.4=231), (N.sub.43=6, .SIGMA.X.sub.4=35, .SIGMA.X.sub.3=36, .SIGMA.X.sub.4X.sub.3=230)
X.sub.3 row: (N.sub.34=6, .SIGMA.X.sub.3=36, .SIGMA.X.sub.4=35, .SIGMA.X.sub.3X.sub.4=230), (N.sub.33=6, .SIGMA.X.sub.3=36, .SIGMA.X.sub.3=36, .SIGMA.X.sub.3X.sub.3=232)
[0126] Note that exactly the same knowledge entity can be obtained from the knowledge entity for all three variables and the use of the expressions in Table 25 above.
TABLE 30 (each cell holds N.sub.ij, .SIGMA.X.sub.i, .SIGMA.X.sub.j, .SIGMA.X.sub.iX.sub.j):
X.sub.4 row: (N.sub.44=6, .SIGMA.X.sub.4=15+20=35, .SIGMA.X.sub.4=15+20=35, .SIGMA.X.sub.4X.sub.4=43+(2*56)+76=231), (N.sub.43=6, .SIGMA.X.sub.4=15+20=35, .SIGMA.X.sub.3=36, .SIGMA.X.sub.4X.sub.3=99+131=230)
X.sub.3 row: (N.sub.34=6, .SIGMA.X.sub.3=36, .SIGMA.X.sub.4=15+20=35, .SIGMA.X.sub.3X.sub.4=99+131=230), (N.sub.33=6, .SIGMA.X.sub.3=36, .SIGMA.X.sub.3=36, .SIGMA.X.sub.3X.sub.3=232)
Dynamic Queries
[0127] The analytical engine can also enable "dynamic queries" to select one or more sequences of a series of questions based on answers given to the questions so as to rapidly converge on one or
more outcomes. The Analytical Engine can be used with different models to derive the "next best question" in the dynamic query. Two of the most important are regression models and classification
models. For example, regression models can be used by obtaining the correlation matrix from the knowledge entity.
[0128] The Correlation Matrix:
[0129] Then, the following steps are carried out:
[0130] Step 1: Calculate the covariance matrix. (Note: if i=j the covariance is the variance.)
TABLE 31
            X.sub.1   . . .   X.sub.j   . . .   X.sub.n
X.sub.1   r.sub.11   . . .   r.sub.1j   . . .   r.sub.1n
. . .
X.sub.i   r.sub.i1   . . .   r.sub.ij   . . .   r.sub.in
. . .
X.sub.m   r.sub.m1   . . .   r.sub.mj   . . .   r.sub.mn
[0131] TABLE 32
Covar.sub.ij = (.SIGMA.X.sub.iX.sub.j - (.SIGMA.X.sub.i .times. .SIGMA.X.sub.j)/N.sub.ij)/N.sub.ij
[0132] Step 2: Calculate the correlation matrix from the covariance matrix. (Note: if i=j the elements of the matrix are unity.)
TABLE 33
r.sub.ij = Covar.sub.ij/(Var.sub.i .times. Var.sub.j).sup.1/2, where Var.sub.i = Covar.sub.ii and Var.sub.j = Covar.sub.jj
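As a concrete check of the two steps above, the following Python sketch evaluates the covariance and correlation directly from the knowledge elements, using the dryer-example sums (N=6, .SIGMA.X.sub.1=15, .SIGMA.X.sub.2=20, .SIGMA.X.sub.1X.sub.1=43, .SIGMA.X.sub.1X.sub.2=56, .SIGMA.X.sub.2X.sub.2=76). The dictionary layout is an assumption for illustration, not the patent's storage format.

```python
import math

# Sketch: covariance and correlation computed only from the knowledge
# elements (N, sum X, sum XY); no pass over the raw data is required.
N = 6
s = {"1": 15, "2": 20}                               # sum(Xi)
ss = {("1", "1"): 43, ("1", "2"): 56, ("2", "2"): 76}  # sum(Xi*Xj)

def covar(i, j):
    key = (i, j) if (i, j) in ss else (j, i)          # sums are symmetric
    return (ss[key] - s[i] * s[j] / N) / N

def corr(i, j):
    return covar(i, j) / math.sqrt(covar(i, i) * covar(j, j))

print(round(covar("1", "2"), 6), round(corr("1", "2"), 6))
```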
[0133] Once these steps are completed the Analytical Engine can supply the "next best question" in a dynamic query as follows:
[0134] 1. Select the dependent variable X.sub.d.
[0135] 2. Select an independent variable X.sub.i with the highest correlation to X.sub.d. If X.sub.i has already been selected, select the next best one.
[0136] 3. Continue until there are no independent variables left or some criterion has been met (e.g., no significant change in R.sup.2).
[0137] Classification methods can also be used by the Analytical Engine to supply the next best question. The analytical engine selects the variable to be examined next (the "next best question") in
order to obtain the maximum impact on the target probability (e.g. probability of default in credit assessment). The user can decide at what point to stop asking questions by examining that probability.
[0138] The general structure of this knowledge entity for using classification for dynamic query is:
TABLE 34
            X.sub.1   . . .   X.sub.j   . . .   X.sub.n
X.sub.1   N.sub.11   . . .   N.sub.1j   . . .   N.sub.1n
. . .
X.sub.i   N.sub.i1   . . .   N.sub.ij   . . .   N.sub.in
. . .
X.sub.m   N.sub.m1   . . .   N.sub.mj   . . .   N.sub.mn
[0139] The analytical engine uses this knowledge entity as follows:
[0140] 1. Calculate T.sub.j = .SIGMA..sub.iN.sub.ij (i=1 . . . m; j=1 . . . n)
[0141] 2. Select X.sub.c (column variables, c=1 . . . n) with the highest T. If X.sub.c has already been selected, select the next best one.
[0142] 3. Calculate S.sub.i = S.sub.i .times. (N.sub.ic/N.sub.ii) or S.sub.i = S.sub.i .times. (N.sub.ic/.SIGMA..sub.iN.sub.ic) for all variables (i=1 . . . m)
[0143] 4. Select X.sub.r (row variables, r=1 . . . m) with the highest S. If X.sub.r has already been selected, select the next best one.
[0144] 5. Select a Rule Out (Exclude) or Rule In (Include) strategy:
[0145] a. Rule Out: calculate T.sub.j = N.sub.rj/N.sub.rr for all variables where X.sub.r .noteq. X.sub.j (j=1 . . . n)
[0146] b. Rule In: calculate T.sub.j = N.sub.rj/.SIGMA..sub.iN.sub.ij for all variables where X.sub.r .noteq. X.sub.j (j=1 . . . n)
[0147] 6. Go to step 2 and repeat steps 2 through 5 until the desired target probability is reached or exceeded.
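A much-simplified Python sketch of the question-selection idea in steps 1 and 2 is shown below: at each turn, the variable with the highest column total of co-occurrence counts is asked next. The variable names and counts are hypothetical, and the score-update and rule-in/rule-out refinements of steps 3 through 5 are deliberately omitted.

```python
# Simplified sketch of the "next best question" loop (steps 1-2 above).
# N[(i, j)] are co-occurrence counts from a knowledge entity; the data
# here are illustrative, not from the patent.
vars_ = ["q1", "q2", "q3"]
N = {("q1", "q1"): 10, ("q1", "q2"): 6,  ("q1", "q3"): 2,
     ("q2", "q1"): 6,  ("q2", "q2"): 12, ("q2", "q3"): 5,
     ("q3", "q1"): 2,  ("q3", "q2"): 5,  ("q3", "q3"): 8}

asked = []
while len(asked) < len(vars_):
    # Column totals T_j over the not-yet-asked variables.
    T = {j: sum(N[(i, j)] for i in vars_) for j in vars_ if j not in asked}
    asked.append(max(T, key=T.get))   # ask the highest-total variable next

print(asked)   # ['q2', 'q1', 'q3']
```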
Normalized Knowledge Entity
[0148] Some embodiments preferably employ particular forms of the knowledge entity. For example, if the knowledge elements are normalized, the performance of some modelling methods can be improved. A normalized knowledge entity can be expressed in terms of the well-known statistical quantities termed "Z" values. To do this, .SIGMA.X.sub.i, .SIGMA.X.sub.iX.sub.j, .mu. and .sigma. can be extracted from the un-normalized knowledge entity and used as shown below, returning again to the three dryer data of Table 4:
TABLE 35
(1) Z.sub.i = (X.sub.i - .mu..sub.i)/.sigma..sub.i
(2) .SIGMA.Z.sub.i = .SIGMA.(X.sub.i - .mu..sub.i)/.sigma..sub.i = (.SIGMA.X.sub.i - N.sub.i.mu..sub.i)/.sigma..sub.i = (.SIGMA.X.sub.i - .SIGMA.X.sub.i)/.sigma..sub.i = 0
(3) .SIGMA.Z.sub.iZ.sub.j = .SIGMA.((X.sub.i - .mu..sub.i)/.sigma..sub.i)((X.sub.j - .mu..sub.j)/.sigma..sub.j) = (.SIGMA.X.sub.iX.sub.j - .mu..sub.j.SIGMA.X.sub.i - .mu..sub.i.SIGMA.X.sub.j + N.sub.ij.mu..sub.i.mu..sub.j)/(.sigma..sub.i.sigma..sub.j)
where: .mu..sub.i = .SIGMA.X.sub.i/N.sub.i, .mu..sub.j = .SIGMA.X.sub.j/N.sub.j, .sigma..sub.i = sqrt((.SIGMA.X.sub.iX.sub.i - (.SIGMA.X.sub.i).sup.2/N.sub.i)/N.sub.i), .sigma..sub.j = sqrt((.SIGMA.X.sub.jX.sub.j - (.SIGMA.X.sub.j).sup.2/N.sub.j)/N.sub.j)
[0149] The un-normalized knowledge entity was given in Table 12 and the normalized one is provided below.
Normalized Knowledge Entity for the Sample Dataset:
TABLE 36 (each cell holds N.sub.ij, .SIGMA.Z.sub.i, .SIGMA.Z.sub.j, .SIGMA.Z.sub.iZ.sub.j):
Z.sub.1 row: (N.sub.11=6, .SIGMA.Z.sub.1=0, .SIGMA.Z.sub.1=0, .SIGMA.Z.sub.1Z.sub.1=6), (N.sub.12=6, .SIGMA.Z.sub.1=0, .SIGMA.Z.sub.2=0, .SIGMA.Z.sub.1Z.sub.2=5.024615), (N.sub.13=6, .SIGMA.Z.sub.1=0, .SIGMA.Z.sub.3=0, .SIGMA.Z.sub.1Z.sub.3=5.756419)
Z.sub.2 row: (N.sub.21=6, .SIGMA.Z.sub.2=0, .SIGMA.Z.sub.1=0, .SIGMA.Z.sub.2Z.sub.1=5.024615), (N.sub.22=6, .SIGMA.Z.sub.2=0, .SIGMA.Z.sub.2=0, .SIGMA.Z.sub.2Z.sub.2=6), (N.sub.23=6, .SIGMA.Z.sub.2=0, .SIGMA.Z.sub.3=0, .SIGMA.Z.sub.2Z.sub.3=5.400893)
Z.sub.3 row: (N.sub.31=6, .SIGMA.Z.sub.3=0, .SIGMA.Z.sub.1=0, .SIGMA.Z.sub.3Z.sub.1=5.756419), (N.sub.32=6, .SIGMA.Z.sub.3=0, .SIGMA.Z.sub.2=0, .SIGMA.Z.sub.3Z.sub.2=5.400893), (N.sub.33=6, .SIGMA.Z.sub.3=0, .SIGMA.Z.sub.3=0, .SIGMA.Z.sub.3Z.sub.3=6)
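The normalized element .SIGMA.Z.sub.1Z.sub.2 can be reproduced from the raw dryer sums using the identities of Table 35, as this Python sketch shows (variable names are illustrative):

```python
import math

# Sketch: compute the normalized knowledge element sum(Z1*Z2) from the
# raw dryer-example sums, using mu = sum(X)/N and sigma from Covar_ii.
N = 6
sx1, sx2, sx1x1, sx2x2, sx1x2 = 15, 20, 43, 76, 56

mu1, mu2 = sx1 / N, sx2 / N
sig1 = math.sqrt((sx1x1 - sx1 * sx1 / N) / N)
sig2 = math.sqrt((sx2x2 - sx2 * sx2 / N) / N)

sz1z2 = (sx1x2 - mu2 * sx1 - mu1 * sx2 + N * mu1 * mu2) / (sig1 * sig2)
print(round(sz1z2, 6))   # 5.024615, matching Table 36
```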
Serialized Knowledge Entity
[0151] It is also possible to serialize and disperse the knowledge entity to facilitate some software applications.
[0152] The general structure of the knowledge entity:
TABLE 37
            X.sub.1   . . .   X.sub.j   . . .   X.sub.n
X.sub.1   W.sub.11   . . .   W.sub.1j   . . .   W.sub.1n
. . .
X.sub.i   W.sub.i1   . . .   W.sub.ij   . . .   W.sub.in
. . .
X.sub.m   W.sub.m1   . . .   W.sub.mj   . . .   W.sub.mn
[0153] can be written as the serialized and dispersed structure:
TABLE 38
X.sub.1 X.sub.1 W.sub.11
X.sub.1 X.sub.j W.sub.1j
X.sub.1 X.sub.n W.sub.1n
. . .
X.sub.i X.sub.1 W.sub.i1
X.sub.i X.sub.j W.sub.ij
X.sub.i X.sub.n W.sub.in
. . .
X.sub.m X.sub.1 W.sub.m1
X.sub.m X.sub.j W.sub.mj
X.sub.m X.sub.n W.sub.mn
[0154] then the knowledge entity for the three dryer data (Table 4) used above becomes:
TABLE 39
X.sub.1 X.sub.1: N.sub.11=6, .SIGMA.X.sub.1=15, .SIGMA.X.sub.1=15, .SIGMA.X.sub.1X.sub.1=43
X.sub.1 X.sub.2: N.sub.12=6, .SIGMA.X.sub.1=15, .SIGMA.X.sub.2=20, .SIGMA.X.sub.1X.sub.2=56
X.sub.1 X.sub.3: N.sub.13=6, .SIGMA.X.sub.1=15, .SIGMA.X.sub.3=36, .SIGMA.X.sub.1X.sub.3=99
X.sub.2 X.sub.2: N.sub.22=6, .SIGMA.X.sub.2=20, .SIGMA.X.sub.2=20, .SIGMA.X.sub.2X.sub.2=76
X.sub.2 X.sub.3: N.sub.23=6, .SIGMA.X.sub.2=20, .SIGMA.X.sub.3=36, .SIGMA.X.sub.2X.sub.3=131
X.sub.3 X.sub.3: N.sub.33=6, .SIGMA.X.sub.3=36, .SIGMA.X.sub.3=36, .SIGMA.X.sub.3X.sub.3=232
Robust Bayesian Classification
[0155] In some cases, the appropriate model for classification of a categorical variable may be Robust Bayesian Classification, which is based on Bayes's rule of conditional probability: P(C.sub.k.vertline.x) = P(x.vertline.C.sub.k)P(C.sub.k)/P(x)
[0156] Where:
[0157] P(C.sub.k.vertline.x) is the conditional probability of C.sub.k given x
[0158] P(x.vertline.C.sub.k) is the conditional probability of x given C.sub.k
[0159] P(C.sub.k) is the prior probability of C.sub.k
[0160] P(x) is the prior probability of x
[0161] Bayes's rule can be summarized in this simple form: posterior = (likelihood .times. prior)/(normalization factor)
[0162] A discriminant function may be based on Bayes's rule for each value k of a categorical variable Y:
y.sub.k(x)=ln P(x.vertline.C.sub.k)+ln P(C.sub.k)
[0163] If each of the class-conditional density functions P(x.vertline.C.sub.k) is taken to be an independent normal distribution, then we have:
y.sub.k(x) = -1/2(x-.mu..sub.k).sup.T.SIGMA..sub.k.sup.-1(x-.mu..sub.k) - 1/2 ln.vertline..SIGMA..sub.k.vertline. + ln P(C.sub.k)
[0164] There are three elements which the analytical engine needs to extract from the knowledge entity 46, namely, the mean vector (.mu..sub.k), the covariance matrix (.SIGMA..sub.k), and the prior probability of C.sub.k (P(C.sub.k)).
[0165] There are five steps to create the discriminant equation:
[0166] Step 1: Slice out the knowledge entity 46 for any C.sub.k where C.sub.k is a X.sub.i.
[0167] Step 2: Create the .mu. vector by simply using two elements in the knowledge entity 46, .SIGMA.X and N, where .mu. = .SIGMA.X/N
[0168] Step 3: Create the covariance matrix (.SIGMA..sub.k) by using four basic elements in the knowledge entity 46 as follows: Covar.sub.i,j = (.SIGMA.X.sub.iX.sub.j - (.SIGMA.X.sub.i .times. .SIGMA.X.sub.j)/N.sub.ij)/N.sub.ij
[0169] Step 4: Calculate the P(C.sub.k) by using two elements in the knowledge entity 46, .SIGMA.X and N, where C.sub.k=X.sub.i.
[0170] Step 5: Construct the k discriminant functions.
[0171] In the prediction phase these k models compete with each other and the model with the highest value will be the winner.
Naive Bayesian Classification
[0172] It may be desirable to use a simplification of Bayesian Classification when the variables are independent. This simplification is called Naive Bayesian Classification and also uses Bayes's rule of conditional probability: P(C.sub.k.vertline.x) = P(x.vertline.C.sub.k)P(C.sub.k)/P(x)
[0173] Where:
[0174] P(C.sub.k.vertline.x) is the conditional probability of C.sub.k given x
[0175] P(x.vertline.C.sub.k) is the conditional probability of x given C.sub.k
[0176] P(C.sub.k) is the prior probability of C.sub.k
[0177] P(x) is the prior probability of x
[0178] When the variables are independent, Bayes's rule may be written as follows: P(C.sub.k.vertline.x) = P(x.sub.1.vertline.C.sub.k) .times. P(x.sub.2.vertline.C.sub.k) .times. P(x.sub.3.vertline.C.sub.k) .times. . . . .times. P(x.sub.n.vertline.C.sub.k) .times. P(C.sub.k)/P(x)
[0179] It is noted that P(x) is a normalization factor.
[0180] There are five steps to create the discriminant equation:
[0181] Step 1: Select a row of the knowledge entity 46 for any C.sub.k and suppose C.sub.k=X.sub.i
[0182] Step 2a: If x.sub.j is a value for a categorical variable X.sub.j we have P(x.sub.j.vertline.X.sub.i) = .SIGMA.X.sub.j/.SIGMA.X.sub.i. We get .SIGMA.X.sub.j from W.sub.ij and .SIGMA.X.sub.i from W.sub.ii.
[0183] Step 2b: If x.sub.j is a value for a numerical variable X.sub.j we calculate P(x.sub.j.vertline.X.sub.i) by using a density function like this: f(x) = (1/(.sigma.(2.pi.).sup.1/2)) exp(-(x-.mu.).sup.2/(2.sigma..sup.2))
[0184] Where:
[0185] .mu. = .SIGMA.X.sub.i/N.sub.ii
[0186] .sigma..sub.i = sqrt(Covar.sub.ii)
[0187] Step 3: Calculate the P(C.sub.k) by using two elements in the knowledge entity 46, .SIGMA.X and N, where C.sub.k=X.sub.i.
[0188] Step 4: Calculate P(C.sub.k.vertline.x) using: P(C.sub.k.vertline.x) = P(x.sub.1.vertline.C.sub.k) .times. P(x.sub.2.vertline.C.sub.k) .times. P(x.sub.3.vertline.C.sub.k) .times. . . . .times. P(x.sub.n.vertline.C.sub.k) .times. P(C.sub.k)/P(x)
[0189] In the prediction phase these k models compete with each other and the model with the highest value will be the winner.
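A minimal Python sketch of the competition between class models is given below, for a single numeric feature scored with the Gaussian density of Step 2b. The class names, priors, means and standard deviations are invented placeholders, not values from the patent.

```python
import math

# Minimal Naive Bayes sketch in the spirit of steps 1-4: one numeric
# feature (Gaussian density from mu and sigma, which a knowledge entity
# would supply) and class priors.  All numbers are illustrative.
classes = {
    "low":  {"prior": 0.5, "mu": 2.0, "sigma": 1.0},
    "high": {"prior": 0.5, "mu": 6.0, "sigma": 1.0},
}

def density(x, mu, sigma):
    return math.exp(-(x - mu) ** 2 / (2 * sigma ** 2)) / (sigma * math.sqrt(2 * math.pi))

def predict(x):
    scores = {c: p["prior"] * density(x, p["mu"], p["sigma"])
              for c, p in classes.items()}
    return max(scores, key=scores.get)   # the highest-scoring model wins

print(predict(2.5), predict(5.5))   # low high
```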
Markov Chain
[0190] Another possible model is a Markov Chain, which is particularly expedient for situations where observed values can be regarded as "states." In a conventional Markov Chain, each successive
state depends only on the state immediately before it. The Markov Chain can be used to predict future states.
[0191] Let X be a set of states (X.sub.1, X.sub.2, X.sub.3 . . . X.sub.n) and S be a sequence of random variables (S.sub.0, S.sub.1, S.sub.2 . . . S.sub.l) each with sample space X. If the
probability of transition from state X.sub.i to X.sub.j depends only on state X.sub.i and not on the previous states, then the process is said to be a Markov chain. A time-independent Markov chain is called a stationary Markov chain. A stationary Markov chain can be described by an N by N transition matrix, T, where N is the size of the state space, with entries T.sub.ij = P(S.sub.k=X.sub.j.vertline.S.sub.k-1=X.sub.i).
[0192] In a k.sup.th order Markov chain, the distribution of S.sub.k depends only on the k variables immediately preceding it. In a 1.sup.st order Markov chain, for example, the distribution of S.sub.k depends only on S.sub.k-1. The transition matrix T.sub.ij for a 1.sup.st order Markov chain is the same as N.sub.ij in the knowledge entity 46. Table 40 shows the transition matrix T for
a 1st order Markov chain extracted from the knowledge entity 46.
TABLE 40
            X.sub.1   . . .   X.sub.j   . . .   X.sub.n
X.sub.1   N.sub.11   . . .   N.sub.1j   . . .   N.sub.1n
. . .
X.sub.i   N.sub.i1   . . .   N.sub.ij   . . .   N.sub.in
. . .
X.sub.n   N.sub.n1   . . .   N.sub.nj   . . .   N.sub.nn
[0193] One weakness of a Markov chain is its unidirectionality, which means S.sub.k depends only on S.sub.k-1, not S.sub.k+1. Using the knowledge entity 46 can solve this problem and even give more flexibility to standard Markov chains. A 1.sup.st order Markov chain can be represented as a simple graph with two nodes (variables) and a connection, as shown in FIG. 10.
[0194] Suppose X.sub.1 and X.sub.2 have two states A and B then the knowledge entity 46 will be of the form shown in Table 41.
TABLE 41
                     X.sub.1                    X.sub.2
                     X.sub.1A     X.sub.1B      X.sub.2A     X.sub.2B
X.sub.1  X.sub.1A   W.sub.1A1A   W.sub.1A1B   W.sub.1A2A   W.sub.1A2B
         X.sub.1B   W.sub.1B1A   W.sub.1B1B   W.sub.1B2A   W.sub.1B2B
X.sub.2  X.sub.2A   W.sub.2A1A   W.sub.2A1B   W.sub.2A2A   W.sub.2A2B
         X.sub.2B   W.sub.2B1A   W.sub.2B1B   W.sub.2B2A   W.sub.2B2B
[0196] It is noted that W.sub.#A.multidot.B indicates the set of combinations of variables at the intersection of row #A and column *B. The use of the knowledge entity 46 produces a bidirectional
Markov Chain. It will be recognised that each of the above operations relating to the knowledge entity 46 can be applied to the knowledge entity for the Markov Chain. It is also possible to have a
Markov chain with a combination of different order in one knowledge entity 46 and also a continuous Markov chain. These Markov Chains may then be used to predict future states.
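A short Python sketch of the basic use of such a chain is given below: the N.sub.ij counts double as a transition matrix once each row is normalized, and the chain then scores a sequence of states. The two-state chain and its counts are illustrative, not from the patent.

```python
# Sketch: the N_ij counts of a knowledge entity serve as a 1st-order
# Markov transition matrix once each row is normalized.  Counts are
# illustrative.
states = ["A", "B"]
counts = {("A", "A"): 3, ("A", "B"): 1,
          ("B", "A"): 2, ("B", "B"): 2}

def trans(i, j):
    row_total = sum(counts[(i, k)] for k in states)
    return counts[(i, j)] / row_total

def seq_prob(seq, start_prob=0.5):
    # P(sequence) = P(start) * product of transition probabilities.
    p = start_prob
    for i, j in zip(seq, seq[1:]):
        p *= trans(i, j)
    return p

print(seq_prob(["A", "A", "B"]))   # 0.5 * 3/4 * 1/4 = 0.09375
```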
Hidden Markov Model
[0197] In a more sophisticated variant of the Markov Model, the states are hidden and are observed through output or evidence nodes. The actual states cannot be directly observed, but the probability
of a sequence of states given the output nodes may be obtained.
[0198] A Hidden Markov Model (HMM) is a graphical model in the form of a chain. In a typical HMM there is a sequence of state or hidden nodes S with a set of states (X.sub.1, X.sub.2, X.sub.3 . . .
X.sub.n), the output or evidence nodes E with a set of possible outputs (Y.sub.1, Y.sub.2, Y.sub.3 . . . Y.sub.n), a transition probability matrix A for the hidden nodes and an emission probability matrix B for the output nodes, as shown in FIG. 11.
[0199] Table 42 shows a transition matrix A for a 1.sup.st order Hidden Markov Model extracted from knowledge entity 46.
TABLE 42
            X.sub.1   . . .   X.sub.j   . . .   X.sub.n
X.sub.1   N.sub.11   . . .   N.sub.1j   . . .   N.sub.1n
. . .
X.sub.i   N.sub.i1   . . .   N.sub.ij   . . .   N.sub.in
. . .
X.sub.n   N.sub.n1   . . .   N.sub.nj   . . .   N.sub.nn
[0200] Table 43 shows the emission matrix B for a 1.sup.st order Hidden Markov Model extracted from the knowledge entity 46.
TABLE 43
            X.sub.1   . . .   X.sub.j   . . .   X.sub.n
Y.sub.1   N.sub.11   . . .   N.sub.1j   . . .   N.sub.1n
. . .
Y.sub.i   N.sub.i1   . . .   N.sub.ij   . . .   N.sub.in
. . .
Y.sub.n   N.sub.n1   . . .   N.sub.nj   . . .   N.sub.nn
[0201] Each of the properties of the knowledge entity 46 can be applied to the standard Hidden Markov Model. In fact we can show a 1.sup.st order HMM as a simple graph with three nodes (variables) and two connections, as shown in FIG. 12.
[0202] Suppose X.sub.1 and X.sub.2 have two states (values) A and B and X.sub.3 has another two values C and D then the knowledge entity 46 will be as shown in Table 44, which represents a 1.sup.st
order Hidden Markov Model.
TABLE 44
                     X.sub.1A     X.sub.1B     X.sub.2A     X.sub.2B     X.sub.3C     X.sub.3D
X.sub.1  X.sub.1A   W.sub.1A1A   W.sub.1A1B   W.sub.1A2A   W.sub.1A2B   W.sub.1A3C   W.sub.1A3D
         X.sub.1B   W.sub.1B1A   W.sub.1B1B   W.sub.1B2A   W.sub.1B2B   W.sub.1B3C   W.sub.1B3D
X.sub.2  X.sub.2A   W.sub.2A1A   W.sub.2A1B   W.sub.2A2A   W.sub.2A2B   W.sub.2A3C   W.sub.2A3D
         X.sub.2B   W.sub.2B1A   W.sub.2B1B   W.sub.2B2A   W.sub.2B2B   W.sub.2B3C   W.sub.2B3D
X.sub.3  X.sub.3C   W.sub.3C1A   W.sub.3C1B   W.sub.3C2A   W.sub.3C2B   W.sub.3C3C   W.sub.3C3D
         X.sub.3D   W.sub.3D1A   W.sub.3D1B   W.sub.3D2A   W.sub.3D2B   W.sub.3D3C   W.sub.3D3D
[0203] The Hidden Markov Model can then be used to predict future states and to determine the probability of a sequence of states given the output and/or observed values.
Principal Component Analysis
[0204] Another commonly used model is Principal Component Analysis (PCA), which is used in certain types of analysis. Principal Component Analysis seeks to determine the most important independent variables.
[0205] There are five steps to calculate principal components for a dataset.
[0206] Step 1: Compute the covariance or correlation matrix.
[0207] Step 2: Find its eigenvalues and eigenvectors.
[0208] Step 3: Sort the eigenvalues from large to small.
[0209] Step 4: Name the ordered eigenvalues .lambda..sub.1, .lambda..sub.2, .lambda..sub.3 . . . and the corresponding eigenvectors .nu..sub.1, .nu..sub.2, .nu..sub.3 . . .
[0210] Step 5: Select the k largest eigenvalues.
[0211] The covariance matrix or correlation matrix is the only prerequisite for PCA, and either can easily be derived from the knowledge entity 46.
[0212] The covariance matrix extracted from the knowledge entity 46:
TABLE 45
Covar.sub.ij = (.SIGMA.X.sub.iX.sub.j - (.SIGMA.X.sub.i .times. .SIGMA.X.sub.j)/N.sub.ij)/N.sub.ij
[0213] The Correlation matrix.
TABLE 46
R.sub.ij = Covar.sub.ij/(Var.sub.i .times. Var.sub.j).sup.1/2, where Var.sub.i = Covar.sub.ii and Var.sub.j = Covar.sub.jj
[0214] The principal components may then be used to provide an indication of the relative importance of the independent variables based on the covariance or correlation tables computed from the
knowledge entity 46, without requiring re-computation based on the entire collection of data.
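For a two-variable case the five steps reduce to a closed form, since the eigenvalues of a 2x2 covariance matrix follow from its characteristic polynomial. The Python sketch below uses the dryer-example covariances computed earlier (Covar.sub.11 = 11/12, Covar.sub.12 = 1, Covar.sub.22 = 14/9); the closed-form shortcut is an illustration, not the patent's procedure.

```python
import math

# Sketch: PCA needs only the covariance matrix, which comes straight
# from the knowledge elements.  For 2x2 the eigenvalues are the roots of
# lambda^2 - trace*lambda + det = 0.
c11, c12, c22 = 11 / 12, 1.0, 14 / 9   # dryer-example covariances

tr = c11 + c22                          # trace
det = c11 * c22 - c12 * c12             # determinant
disc = math.sqrt(tr * tr - 4 * det)
lam1, lam2 = (tr + disc) / 2, (tr - disc) / 2   # sorted large to small

print(round(lam1, 4), round(lam2, 4))
```

The larger eigenvalue's share of the total variance, lam1/(lam1+lam2), indicates how much of the data a single principal component would explain.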
[0215] It will therefore be recognised that the controller 40 can switch among any of the above models, and the modeller 48 will be able to use the same knowledge entity 46 for the new model. That
is, the analytical engine can use the same knowledge entity for many modelling methods. There are many models in addition to the ones mentioned above that can be used by the analytical engine. For
example, the OneR Classification Method, Linear Support Vector Machine and Linear Discriminant Analysis are all readily employed by this engine. Pertinent details are provided in the following paragraphs.
[0216] The OneR Method
[0217] The main goal in the OneR Method is to find the best independent variable (X.sub.j) which can explain the dependent variable (X.sub.i). If the dependent variable is categorical there are many ways that the analytical engine can find the best independent variable (e.g. Bayes rule, Entropy, Chi.sup.2, and the Gini index). All of these can employ the knowledge elements of the knowledge entity. If the dependent variable is numerical, the correlation matrix (again, extracted from the knowledge entity) can be used by the analytical engine to find the best independent variable. Alternatively, the engine can transform the numerical variable to a categorical variable by a discretization technique.
[0218] Linear Support Vector Machine
[0219] The Linear Support Vector Machine can be modeled by using the covariance matrix. As shown in [0079] the covariance matrix can easily be computed from the knowledge elements of the knowledge
entity by the analytical engine.
[0220] Linear Discriminant Analysis
[0221] Linear Discriminant Analysis is a classification technique and can be modeled by the analytical engine using the covariance matrix. As shown in [0079] the covariance matrix can easily be
computed from the knowledge elements of the knowledge entity.
[0222] Model Diversity
[0223] As evident above, use of the analytical engine with even a single knowledge entity can provide extremely rapid model development and great diversity in models. Such easily obtained diversity
is highly desirable when seeking the most suitable model for a given purpose. In using the analytical engine, diversity originates both from the intelligent properties awarded to any single model
(e.g. addition and removal of variables, dimension reduction) and the property that switching modelling methods does not require new computations on the entire database for a wide variety of
modelling methods. Once provided with the models, there are many methods for determining which one is best ("model discrimination") or which prediction is best. The analytical engine makes model
generation so comprehensive and easy that for the latter problem, if desired, several models can be tested and the prediction accepted can be the one which the majority of models support.
[0224] It will be recognised that certain uses of the knowledge entity 46 by the analytical engine will typically use certain models. The following examples illustrate several areas where the above
models can be used. It is noted that the knowledge entity 46 facilitates changing between each of the models for each of the following examples.
[0225] The above description of the invention has focused upon control of a process involving numerical values. As will be seen below, the underlying principles are actually much more general in
applicability than that.
[0226] Control of a Robotic Arm
[0227] In this embodiment an amputee has been fitted with a robotic arm 200 as shown in FIG. 9. The arm has an upper portion 202 and a forearm 204 connected by a joint 205. The movement of the robotic arm depends upon two sensors 206, 208, each of which generates a voltage based upon direction from the person's brain. One of these sensors 208 is termed "Biceps" and is for the upper muscle of
extension 210 (the arm extends), pronation 212 (the arm rotates downwards) and supination 212 (the arm rotates upwards). The usual way of relating movement to the sensor signals would be to gather a
large amount of data on what movement corresponds to what sensor signals and to train a classification method with this data. The resulting relationship would then be used without modification to
move the arm in response to the signals. The difficulty with this approach is its inflexibility. For example, with wear of parts in the arm, the relationship determined from training may no longer be valid and a complete new retraining would be necessary. Other problems can include the failure of one of the sensors or the need to add a third sensor. The knowledge entity 46 described above may be
used by the analytical engine to develop a control of the arm divided into three steps: learner, modeller and predictor. The result is that control of the arm can then adapt to new situations as in
the previous example.
[0228] The previous example showed a situation where all the variables were numeric and linear regression was used following the learner. This example shows how the learner can employ categorical
values and how it can work with a classification method.
[0229] Exemplary data collected for use by the robotic arm is as follows:
TABLE 47
Biceps   Triceps   Movement
13   31   Flexion
14   30   Flexion
10   31   Flexion
90   22   Extension
87   19   Extension
65   15   Extension
28   16   Pronation
27   12   Pronation
33   11   Pronation
72   24   Supination
70   36   Supination
58   28   Supination
. . .
[0230] The record corresponding to the first measurement (13, 31, 1, 0, 0, 0), expressed using the set of combinations N.sub.ij, .SIGMA.X.sub.i, .SIGMA.X.sub.j, .SIGMA.X.sub.iX.sub.j, is set out below in Table 48.
TABLE 48 (for each row, the four runs of entries across the six columns Biceps, Triceps, Flexion, Extension, Pronation, Supination are N.sub.ij, .SIGMA.X.sub.i, .SIGMA.X.sub.j, .SIGMA.X.sub.iX.sub.j):
Biceps:     N = 1 1 1 1 1 1; .SIGMA.X.sub.i = 13 13 13 13 13 13; .SIGMA.X.sub.j = 13 31 1 0 0 0; .SIGMA.X.sub.iX.sub.j = 169 403 13 0 0 0
Triceps:    N = 1 1 1 1 1 1; .SIGMA.X.sub.i = 31 31 31 31 31 31; .SIGMA.X.sub.j = 13 31 1 0 0 0; .SIGMA.X.sub.iX.sub.j = 403 961 31 0 0 0
Flexion:    N = 1 1 1 1 1 1; .SIGMA.X.sub.i = 1 1 1 1 1 1; .SIGMA.X.sub.j = 13 31 1 0 0 0; .SIGMA.X.sub.iX.sub.j = 13 31 1 0 0 0
Extension:  N = 1 1 1 1 1 1; .SIGMA.X.sub.i = 0 0 0 0 0 0; .SIGMA.X.sub.j = 13 31 1 0 0 0; .SIGMA.X.sub.iX.sub.j = 0 0 0 0 0 0
Pronation:  N = 1 1 1 1 1 1; .SIGMA.X.sub.i = 0 0 0 0 0 0; .SIGMA.X.sub.j = 13 31 1 0 0 0; .SIGMA.X.sub.iX.sub.j = 0 0 0 0 0 0
Supination: N = 1 1 1 1 1 1; .SIGMA.X.sub.i = 0 0 0 0 0 0; .SIGMA.X.sub.j = 13 31 1 0 0 0; .SIGMA.X.sub.iX.sub.j = 0 0 0 0 0 0
[0231] Once records as shown in Table 48 have been learned by the learner 44 into the knowledge entity 46, the modeller 48 can construct appropriate models of various movements. The predictor can
then compute the values of the four models:
[0232] Flexion=a+b.sub.1*Biceps+b.sub.2*Triceps
[0233] Extension=a+b.sub.1*Biceps+b.sub.2*Triceps
[0234] Pronation=a+b.sub.1*Biceps+b.sub.2*Triceps
[0235] Supination=a+b.sub.1*Biceps+b.sub.2*Triceps
[0236] When signals are received from the Biceps and Triceps sensors the four possible arm movements are calculated. The Movement with the highest value is the one which the arm implements.
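The winner-take-all step can be sketched in a few lines of Python. The coefficients below are invented placeholders standing in for models the modeller 48 would fit; only the argmax mechanism is the point of the example.

```python
# Sketch of the prediction step: each movement has its own fitted linear
# model a + b1*Biceps + b2*Triceps, and the arm performs the movement
# whose model scores highest.  Coefficients are illustrative, not fitted.
models = {
    "Flexion":    (1.0, -0.02,  0.03),
    "Extension":  (0.0,  0.02, -0.01),
    "Pronation":  (0.5, -0.01, -0.01),
    "Supination": (0.2,  0.01,  0.005),
}

def movement(biceps, triceps):
    scores = {name: a + b1 * biceps + b2 * triceps
              for name, (a, b1, b2) in models.items()}
    return max(scores, key=scores.get)   # winner-take-all

print(movement(13, 31), movement(90, 22))
```

With these placeholder coefficients the first sensor readings of Table 47 (13, 31) score highest under the Flexion model and (90, 22) under the Extension model.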
Prediction of the Start Codon in Genomes
[0237] Each DNA (deoxy-ribonucleic acid) molecule is a long chain of nucleotides of four different types, adenine (A), cytosine (C), thymine (T), and guanine (G). The linear ordering of the
nucleotides determines the genetic information. The genome is the totality of DNA stored in chromosomes typical of each species and a gene is a part of DNA sequence which codes for a protein. Genes
are expressed by transcription from DNA to mRNA followed by translation from mRNA to protein. mRNA (messenger ribonucleic acid) is chemically similar to DNA, with the exception that the base thymine
is replaced with the base uracil (U). A typical gene consists of these functional parts: promoter->start codon->exon->stop codon. The region immediately upstream from the gene is the promoter and
there is a separate promoter for each gene. The promoter controls the transcription process in genes and the start codon is a triplet (usually ATG) where the translation starts. The exon is the
coding portion of the gene and the stop codon is a triplet where the translation stops. Prediction of the start codon from a measured length of DNA sequence may be performed by using the Markov
Chain to calculate the probability of the whole sequence. That is, given a sequence s, and given a Markov chain M, the basic question to answer is, "What is the probability that the sequence s is
generated by the Markov chain M? The problems with the conventional Markov chain were described above. Here these problems can cause poor predictability because in fact, in genes the next state, not
just the previous state, does affect the structure of the start codon.
ATTTCTAGGAGTACC . . .
49 TABLE 49
X.sub.1   X.sub.2
A         T
T         T
T         T
T         C
C         T
T         A
A         G
G         G
G         A
A         G
G         T
T         A
A         C
C         C
. . .     . . .
[0239] Classic Markov Chain:
[0240] Record 1: A T
50 TABLE 50
                X.sub.1
X.sub.2       A   C   G   T
        A     0   0   0   0
        C     0   0   0   0
        G     0   0   0   0
        T     1   0   0   0
[0241] A Markov Chain stored in knowledge entity 46 is constructed as follows:
[0242] The first Record 1: 1, 0, 0, 0, 0, 0, 0, 1 is transformed to the table:
51 TABLE 51
Each row contains four groups of eight values (the count, the row-variable value, the column-variable values, and their products), one value per column, with the columns ordered X.sub.1: A, C, G, T, then X.sub.2: A, C, G, T:

X.sub.1 A    1 1 1 1 1 1 1 1 | 1 1 1 1 1 1 1 1 | 1 0 0 0 0 0 0 1 | 1 0 0 0 0 0 0 1
X.sub.1 C    1 1 1 1 1 1 1 1 | 0 0 0 0 0 0 0 0 | 1 0 0 0 0 0 0 1 | 0 0 0 0 0 0 0 0
X.sub.1 G    1 1 1 1 1 1 1 1 | 0 0 0 0 0 0 0 0 | 1 0 0 0 0 0 0 1 | 0 0 0 0 0 0 0 0
X.sub.1 T    1 1 1 1 1 1 1 1 | 0 0 0 0 0 0 0 0 | 1 0 0 0 0 0 0 1 | 0 0 0 0 0 0 0 0
X.sub.2 A    1 1 1 1 1 1 1 1 | 0 0 0 0 0 0 0 0 | 1 0 0 0 0 0 0 1 | 0 0 0 0 0 0 0 0
X.sub.2 C    1 1 1 1 1 1 1 1 | 0 0 0 0 0 0 0 0 | 1 0 0 0 0 0 0 1 | 0 0 0 0 0 0 0 0
X.sub.2 G    1 1 1 1 1 1 1 1 | 0 0 0 0 0 0 0 0 | 1 0 0 0 0 0 0 1 | 0 0 0 0 0 0 0 0
X.sub.2 T    1 1 1 1 1 1 1 1 | 1 1 1 1 1 1 1 1 | 1 0 0 0 0 0 0 1 | 1 0 0 0 0 0 0 1
[0244] The knowledge entity 46 is built up by the analytical engine from records relating to each measurement. Controller 40 can then operate to determine the probability that a start codon is
generated by the Markov Chain represented in the knowledge entity 46.
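As a sketch of the computation the controller performs: learn transition counts from observed sequence data (the role of Tables 50 and 51), then multiply maximum-likelihood transition probabilities to score a query sequence. The uniform initial distribution here is an assumption for illustration.

```python
from collections import defaultdict

def learn_counts(sequences):
    """Accumulate first-order transition counts, as in the knowledge entity."""
    counts = defaultdict(lambda: defaultdict(int))
    for s in sequences:
        for x1, x2 in zip(s, s[1:]):
            counts[x1][x2] += 1
    return counts

def sequence_probability(seq, counts, initial):
    """P(seq) under a first-order Markov chain with ML transition estimates."""
    p = initial[seq[0]]
    for x1, x2 in zip(seq, seq[1:]):
        p *= counts[x1][x2] / sum(counts[x1].values())
    return p

counts = learn_counts(["ATTTCTAGGAGTACC"])
p = sequence_probability("ATT", counts,
                         initial={"A": 0.25, "C": 0.25, "G": 0.25, "T": 0.25})
# P(ATT) = 0.25 * P(T|A) * P(T|T) = 0.25 * (1/4) * (2/5)
```

Comparing such probabilities at each candidate position is one way to score where a start codon is most likely to begin.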
Sales Prediction
[0245] The next embodiment shows that the model to be used with the learner in the analytical engine can be non-linear in the independent variable. In this embodiment sales from a business are to be
related to the number of competitors' stores in the area, average age of the population in the area and the population of the area. The example shows that the presence of a non-linear variable can
easily be accommodated by the method. Here, it was decided that the logarithm of the population should be used instead of simply the population. The knowledge entity is then formed as follows:
52 TABLE 52
No. of Competitors   Average Age   Log (Population)   Sales
2                    40            4.4                850000
2                    37            4.4                1100000
3                    36            4.3                920000
2                    31            4.2                950000
1                    42            4.6                107000
. . .                . . .         . . .              . . .
[0246] From the record: 2, 40, 4.4, 850000, the knowledge entity 46 is generated as set out below in Table 53.
53 TABLE 53
Each cell stacks four values (the count, the row-variable value, the column-variable value, and their product):

                     No. of Competitors   Average Age   Log (Population)   Sales
No. of Competitors   1                    1             1                  1
                     2                    2             2                  2
                     2                    40            4.4                850000
                     4                    80            8.8                1700000
Average Age          1                    1             1                  1
                     40                   40            40                 40
                     2                    40            4.4                850000
                     80                   1600          176                34000000
Log (Population)     1                    1             1                  1
                     4.4                  4.4           4.4                4.4
                     2                    40            4.4                850000
                     8.8                  176           19.36              3740000
Sales                1                    1             1                  1
                     850000               850000        850000             850000
                     2                    40            4.4                850000
                     1700000              34000000      3740000            722500000000
[0247] The sales are modelled using the relationship:
[0248] Sales=a+b.sub.1*No. of Competitors+b.sub.2*Average Age+b.sub.3*Log (Population)
[0249] The coefficients may then be derived from the knowledge entity 46 as described above.
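The claim in [0249], that coefficients can be derived from the knowledge entity's accumulated values rather than from the raw records, can be illustrated with single-predictor least squares, which needs only N, ΣX, ΣY, ΣXY and ΣX²:

```python
def accumulate(records):
    """Fold raw (x, y) records into the sums a knowledge entity would store."""
    n = sx = sy = sxy = sxx = 0
    for x, y in records:
        n += 1; sx += x; sy += y; sxy += x * y; sxx += x * x
    return n, sx, sy, sxy, sxx

def fit_from_sums(n, sx, sy, sxy, sxx):
    """Least-squares slope and intercept for y = a + b*x, from sums alone."""
    b = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    a = (sy - b * sx) / n
    return a, b

# Records generated from y = 3 + 2x, so the sums must give back a=3, b=2.
a, b = fit_from_sums(*accumulate([(0, 3), (1, 5), (2, 7), (3, 9)]))
```

The same idea extends to several predictors: the normal equations need only the pairwise accumulators N, ΣXi, ΣXj, ΣXiXj that the tables above store, with no need to revisit the raw records.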
[0250] The ability to diagnose the cause of problems, whether in machines or human beings, is an important application of the knowledge entity 46.
Disease Diagnosis
[0251] In this part we want to use the analytical engine to predict a hemolytic disease of the newborn by means of three variables (sex, blood hemoglobin, and blood bilirubin).
54 TABLE 54
Newborn    Sex      Hemoglobin   Bilirubin
Survival   Female   18           2.2
Survival   Male     16           4.1
Death      Female   7.5          6.7
Death      Male     3.5          4.2
. . .      . . .    . . .        . . .
[0252] A knowledge entity for constructing a naïve Bayesian classifier would be as follows (just for the first and fourth records):
[0253] Record 1: Survival, Female, 18, 2.2
[0254] Record 4: Death, Male, 3.5, 4.2
[0255] Since there are categorical values, we transform them to numerical ones:
[0256] Record 1 (transformed): 1, 0, 1, 0, 18, 2.2
[0257] Record 4: 0, 1, 0, 1, 3.5, 4.2
55 TABLE 55 Newborn Sex Survival Death Female Male Hemoglobin Bilirubin Survival 2 2 1 1 1 1 1 1 1 0 18 2.2 1 1 1 0 324 4.84 Death 2 2 1 1 1 1 1 1 0 1 3.5 4.2 1 1 0 1 12.25 17.64
[0258] As we can see, this knowledge entity is not orthogonal and uses three combinations of the variables (N, .SIGMA.X and .SIGMA.X.sup.2), which are enough to model a naïve Bayesian
classifier. The knowledge entity 46 may be used to predict survival or death using the Bayesian classification model described above.
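A sketch of how a classifier can be driven by just those per-class accumulators (N, ΣX, ΣX²), assuming Gaussian likelihoods for the numeric variables and uniform class priors. The accumulator values below are the sums for the hemoglobin and bilirubin columns of the four records in Table 54; the test sample is illustrative.

```python
import math

def gaussian_log_lik(x, n, sx, sxx):
    """Log-likelihood of x under the Gaussian implied by (N, sum X, sum X^2)."""
    mean = sx / n
    var = max(sxx / n - mean * mean, 1e-9)   # population variance from sums
    return -0.5 * (math.log(2 * math.pi * var) + (x - mean) ** 2 / var)

def classify(sample, stats):
    """stats[cls][var] = (N, sum X, sum X^2); uniform class priors assumed."""
    return max(stats, key=lambda cls: sum(
        gaussian_log_lik(sample[v], *stats[cls][v]) for v in sample))

stats = {
    "Survival": {"hemoglobin": (2, 34.0, 580.0), "bilirubin": (2, 6.3, 21.65)},
    "Death":    {"hemoglobin": (2, 11.0, 68.5),  "bilirubin": (2, 10.9, 62.53)},
}
label = classify({"hemoglobin": 17.0, "bilirubin": 3.0}, stats)
```

A newborn with high hemoglobin and low bilirubin is assigned to the Survival class, since those values are far more likely under the Survival-class Gaussians.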
[0259] From the above examples, it will be recognised that the knowledge entity of FIG. 3 may be applied in many different areas. A sampling of some areas of applicability follows.
Banking and Credit Scoring
[0260] In banking and credit scoring applications, it is often necessary to determine the risk posed by a client, or other measures relating to the client's finances. In banking and credit scoring,
the following variables are often used.
[0261] checking_status, duration, credit_history, purpose, credit_amount, savings_status, employment, installment_commitment, personal_status, other_parties, residence_since, property_magnitude, age,
other_payment_plans, housing, existing credits, job, num_dependents, own_telephone, foreign_worker, credit_assessment. Dynamic query is particularly important in applications such as credit
assessment where an applicant is waiting impatiently for a decision and the assessor has many questions from which to choose. By having the analytical engine select the "next best question" the
assessor can rapidly converge on a decision.
Bioinformatics and Pharmaceutical Solutions
[0262] The example above showed gene prediction using Markov models. There are many other applications to bioinformatics and pharmaceuticals.
[0263] In a microarray, the goal is to find a match between a known sequence and that of a disease.
[0264] In drug discovery the goal is to determine the performance of drugs as a function of type of drug, characteristics of patients, etc.
Ecommerce and CRM
[0265] Applications to eCommerce and CRM include email analysis, response and marketing.
[0266] Fraud Detection
[0267] In order to detect fraud on credit cards, the knowledge entity 46 would use variables such as number of credit card transactions, value of transactions, location of transaction, etc.
Health Care and Human Resources
[0268] Diagnosis of the cause of abdominal pain uses approximately 1000 different variables.
[0269] In an application to the diagnosis of the presence of heart disease, the variables under consideration are:
[0270] age, sex, chest pain type, resting blood pressure, blood cholesterol, blood glucose, rest ekg, maximum heart rate, exercise induced angina, extent of narrowing of blood vessels in the heart
Privacy and Security
[0271] The areas of privacy and security often require image analysis, finger print analysis, and face analysis. Each of these areas typically involves many variables relating to the image and to
attempt to match images and find patterns.
[0272] Retail
[0273] In the retail industry, the knowledge entity 46 may be used for inventory control, and sales prediction.
Sports and Entertainment
[0274] The knowledge entity 46 may be used by the analytical engine to collect information on sports events and predict the winner of a future sports event.
[0275] The knowledge entity 46 may also be used as a coaching aid.
[0276] In computer games, the knowledge entity 46 can manage the data required by the game's artificial intelligence systems.
Stock and Investment Analysis and Prediction
[0277] By employing the knowledge entity 46, the analytical engine is particularly adept at handling areas like investment decision making, predicting stock price, where there is a large amount of
data which is constantly updated as stock trades are made on the market.
Telecom, Instrumentation and Machinery
[0278] The areas of telecom, instrumentation and machinery have many applications, such as diagnosing problems, and controlling robotics.
[0279] Yet another application of the analytical engine employing the knowledge entity 46 is as a travel agent. The knowledge entity 46 can collect information about travel preferences, costs of
trips, and types of vacations to make predictions related to the particular customer.
[0280] From the preceding examples, it will be recognised that the knowledge entity 46 when used with the appropriate methods to form the analytical engine, has broad applicability in many
environments. In some embodiments, the knowledge entity 46 has much smaller storage requirements than that required for the equivalent amount of observed data. Some embodiments of the knowledge
entity 46 use parallel processing to provide increases in the speed of computations. Some embodiments of the knowledge entity 46 allow models to be changed without re-computation. It will therefore
be recognised that in various embodiments, the analytical engine provides an intelligent learning machine that can rapidly learn, predict, control, diagnose, interact, and co-operate in dynamic
environments, including for example large quantities of data, and further provides a parallel processing and distributed processing capability.
* * * * * | {"url":"http://patents.com/us-20040153430.html","timestamp":"2014-04-20T13:38:28Z","content_type":null,"content_length":"124301","record_id":"<urn:uuid:d515cd1c-a5a4-49bf-a664-6e4e816a524c>","cc-path":"CC-MAIN-2014-15/segments/1397609538787.31/warc/CC-MAIN-20140416005218-00358-ip-10-147-4-33.ec2.internal.warc.gz"} |
Technical Battery Discussion
So I am an engineer and really want to understand the battery. It is my understanding that there are 6831 NCR18650 Panasonic batteries in the "Battery pack": 11 modules in series, each with 9 "bricks"
in series, and each brick with 69 18650's in parallel. So 69 * 9 * 11 = 6,831. Are you with me so far?
Now the hard part. On Panasonic's website I get specs for the NCR18650 showing a nominal voltage and capacity of 3.6 VDC and 2.9 AH, respectively. Let's do the voltage first. 9 bricks X 3.6 volts X 11
modules = 356.4 volts. But the Tesla specs say the battery is 375 volts. Backing into the nominal voltage needed to get 375 volts, each battery has to have a nominal of 3.78 volts. If someone knows
the answer to this puzzle I would greatly appreciate an explanation. Thanks in advance.
Now let's do kWh. 69 cells X 2.9 AH = 200.1 AH/brick X 9 bricks X 11 modules X 3.78 volts /1000 = 74.9 kWh. So how do I match this up to the 56 kWh claimed by Tesla? At 75% the number is 56kWh. Does
that mean that only 75% of the batteries nominal capacity is "usable"? Is this because there is about a 25% loss between input energy (from the HPC) and actual usable stored and and delivered energy
from the battery to the inverter/motor?
Finally, the new 18650 battery is supposed to be 4 AH instead of 2.9 AH in 2012. The 2.9 AH battery weighs 44 g while the new 4 AH battery weighs 54 g. 6831 * 44 g = 300.564 kg or 663 lbf. Since the
entire battery weighs 990 lbf, the battery enclosure, coolant and electronics must weigh 990 - 663 = 327 lbf. Therefore if the new batteries weigh 6831 * 54 g = 368.874 kg or 814 lbf, then a new
battery pack for my Roadster with the new batteries should weigh 814 + 327 = 1,141 lbf. For the extra 151 lbf of weight, one should get an increased nominal range of 245 * 4.0/2.9 = 338 miles minus a
little for hauling around the extra weight. Do I have it right on all counts?
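The thread's arithmetic can be run as a checkable sketch (using the corrected cell figures that come up later in the thread: 2.2 Ah, 3.7 V and 44 g per cell; these are forum numbers, not official Tesla specs):

```python
def pack_figures(cells_parallel, bricks, modules, cell_v, cell_ah, cell_g):
    """Series/parallel pack arithmetic: 69p per brick, 9 bricks x 11 modules in series."""
    n_cells = cells_parallel * bricks * modules
    n_series = bricks * modules
    voltage = n_series * cell_v                        # nominal pack voltage
    energy_kwh = voltage * cells_parallel * cell_ah / 1000.0
    mass_kg = n_cells * cell_g / 1000.0                # cells only, no enclosure
    return n_cells, voltage, energy_kwh, mass_kg

n, v, kwh, kg = pack_figures(69, 9, 11, cell_v=3.7, cell_ah=2.2, cell_g=44)
# 6831 cells, roughly 366 V nominal, roughly 56 kWh, roughly 300 kg of cells
```

With the 2.2 Ah / 3.7 V cell, the 56 kWh pack rating falls out directly, without assuming 25% of capacity is unusable.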
rsdio | June 12, 2011
As for weight, you have to consider that there is more in the Tesla battery than just the Panasonic NCR18650 in a pile. At the very least, there is a system for pumping coolant through the assembly,
and the assembly itself probably weighs a substantial amount. In other words, a new battery pack made with new batteries will probably weigh more, but they could easily change more than just the
Panasonic part when revising the pack.
Timo | June 12, 2011
You have wrong battery. The one used in Roadster is 2.1Ah battery, not 2.9. The one used in Model S is probably 3.1Ah battery.
Also that 4Ah battery weighs only 46g, not 54g. That is unless you have some better and newer information than I do. Those that go to Tesla are also modified somehow, which probably does something to
the weight, not sure what though.
jmollenkopf@com... | June 12, 2011
Thanks Timo.
The 2.1 AH clears up the kWH. I found the 54g on a website somewhere. 48g is a lot better news. Looks like all of them are the same physical dimensions (2.1, 2.9, 3.1 and 4.0). Do they all have the
same discharge ability? Do you know what the xC rate is? Wouldn't the 3.1's give 47% more range (3.1/2.1 = 1.47)? In any event, it is exciting to think that much more range and or lower weight for
the same range is just over the horizon. Can't wait.
Any clarification on the nominal voltage? Is the 2.1 AH battery nominal 3.78 volts? My model airplane LiPo's are all 3.7 v.
If you look at the calculations again, the batteries themselves made up 663# of the 990# weight, so I assumed the balance of the weight was due to the enclosure, cooling system and electronics (about
327# worth). With the 48 gram weight for the 4 AH I get 1050# total pack weight (663*48/44 = 723, + 327 = 1050#).
qwk | June 13, 2011
The roadster batteries are 2200mah,3.7V and 44g.
Model S 300 mile pack batteries are supposedly the 3100mah, 3.6v and weigh 44.5g.
jmollenkopf@com... | June 15, 2011
Thx to all. Calculations all work now (except for voltage). I will be watching the battery technology and feel certain that someday we will all be able to get new batteries that either weigh a lot
less for the same range or have considerably more range for the weight when the time comes for replacement. 6 weeks left for #1364.
Ramon123 | June 18, 2011
we will all be able to get new batteries that either weigh a lot less for the same range or have considerably more range for the wieght when the time comes for replacement.
I'd say it's a safe bet that the batteries will be improved with respect to power/weight ratios, but far more important will be improved costs. Frankly, right now, with 300 miles of range, and a 45
minute recharge, they are good enough to be fully competitive with gas powered jobs.
rsdio | June 19, 2011
The Roadster Innovations / Battery page has a typographical error. It says that a brick contains sixty-nine cells, a sheet contains ninety-nine bricks, and a pack contains 11 sheets. That would be
75,141 cells in all, but we know it's only 6,831. Based upon the opening message of this thread, I'm assuming that the "Ninety-nine bricks" should read "Nine bricks" - but maybe they mean that the 99
bricks in series make 11 sheets. It's basically a little confusing the way it's worded.
psusi | June 29, 2011
45 minute recharge? Even with the 90A charger it takes about 3 hours.
dsm363 | July 2, 2011
The 45 min recharge is referring to some kind of level 3 charging (such as DC fast charging), not the level 2 charging the Roadster can use. At the 70A level 2 charging level of the Roadster, the 300
mile pack would probably take around 5 hours to fully recharge.
curiousguy | October 21, 2012
The way you calculate the energy available gives the correct number, but it's not representing the correct electrochemical event of discharge.
Even though you have 6831 cells, only the capacity (Q) of 69 of them (those arranged in parallel) is used for discharge at the nominal voltage of 375V (given by the arrangement in series of the
remaining 99). E = Q x V, so E = (69 x 2.2) Ah x 375V = 56.9 kWh. The capacity coming from the cells arranged in series is only there for "balancing". If they were "empty" then the capacity of the 69
arranged in parallel would have to be averaged over the entire number of cells (6831).
as you can see an increase in capacity is most wanted as it allows for both an increase in energy (driving range) and power (acceleration, etc) EVEN if the individual cell operates at a much lower
voltage. For example, if you had double the capacity in a single cell you would only need 35 of them in parallel for the same energy (driving range) and 195 in series which would now give a much
higher nominal voltage. V = 195 x 3.5 = 683V for a 3.5V cell or 585 for a 3.0V cell, etc.
so lets hope there is interest in post lithium ion batteries such that the chemistries of such batteries are ironed out in due time :).
rupy | March 19, 2013
Does anyone know how much litium the battery contains?
rupy | March 19, 2013
So I have calculated that Tesla will be able to manufacture 367 million cars with the 11,000,000 tonnes of lithium in the world. Do you concur?
DHrivnak | March 20, 2013
I think we are safe on the lithium supply until we we are making a billion plus cars. See the very insightful article from Nick Butcher about battery constraints. http://seekingalpha.com/article/
rupy | March 21, 2013
Yes, I miscalculated: 1.6 billion!
Physics Forums - View Single Post - Using isomorphisms to transform polynomials to vectors.
Yes, sure. This can indeed be done.
Consider the map [itex]T : P_3 \to \mathbb{R}^4[/itex] given by [itex]T(a + bx + cx^2 + dx^3) = (a, b, c, d)[/itex]. This can be shown to be an isomorphism. So the vector spaces [itex]P_3[/itex] and [itex]\mathbb{R}^4[/itex] are the same for all linear algebra purposes. So a basis with the polynomials can be found
by searching for a basis in [itex]\mathbb{R}^4[/itex] first.
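A concrete version of this correspondence, assuming the usual coordinate map sending a + bx + cx^2 + dx^3 to (a, b, c, d): linear operations on the polynomials become componentwise operations on the vectors.

```python
def to_vector(poly):
    """Coefficient dict {power: coeff} for a cubic -> coordinate vector in R^4."""
    return [poly.get(k, 0) for k in range(4)]

def from_vector(v):
    """Inverse map: coordinate vector -> coefficient dict (zero terms dropped)."""
    return {k: c for k, c in enumerate(v) if c != 0}

def vec_add(u, v):
    return [a + b for a, b in zip(u, v)]

p = {0: 1, 2: 3}        # 1 + 3x^2
q = {1: 2, 3: -1}       # 2x - x^3
s = from_vector(vec_add(to_vector(p), to_vector(q)))   # 1 + 2x + 3x^2 - x^3
```

Adding the vectors and mapping back gives exactly the sum of the polynomials, which is the point of working in R^4 instead.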
Nokesville Math Tutor
Find a Nokesville Math Tutor
...I work with parents so life-long learning habits are established. Children naturally enjoy learning. My goal is to make sure the whole family feels good about the time we spend together.
32 Subjects: including algebra 1, algebra 2, biology, chemistry
...I'll even buy the coffee!Algebra 2 contains both content that reviews or extends concepts and skills learned in previous grades as well as expanding into new, more abstract concepts in
algebra. During this level course, students gain proficiency in solving linear equations, inequalities, and sys...
17 Subjects: including trigonometry, algebra 1, algebra 2, calculus
...These courses involved solving differential equations related to applications in physics and electrical engineering. As an undergraduate student in Electrical Engineering and Physics and as a
graduate student, I took courses in mathematical methods for physics and engineering. These courses inc...
16 Subjects: including algebra 1, algebra 2, calculus, geometry
...I currently tutor at a Kumon center and realize this is what I love and plan to do with rest of my life. I plan on graduating in 2015 with a BS in Mathematics with hopefully a Computer Science
minor. Also I am interested in doing an accelerated master's program so I should have my master's by 2016.
10 Subjects: including calculus, trigonometry, algebra 1, algebra 2
...After that, I got my master's degree from George Mason University's School of Public Policy in 2010. I have been tutoring since high school. I know French, Spanish and Portuguese.
19 Subjects: including prealgebra, logic, probability, Spanish | {"url":"http://www.purplemath.com/nokesville_va_math_tutors.php","timestamp":"2014-04-17T13:35:55Z","content_type":null,"content_length":"23488","record_id":"<urn:uuid:3df741cf-72d7-444e-8c9a-2dc93f15b7ff>","cc-path":"CC-MAIN-2014-15/segments/1398223201753.19/warc/CC-MAIN-20140423032001-00167-ip-10-147-4-33.ec2.internal.warc.gz"} |
William Poundstone
William Poundstone is an American author, columnist, and skeptic. He has written a number of books including the Big Secrets series and a biography of Carl Sagan. He is a cousin of comedian Paula
Poundstone.
reductionism and of most modern investigations into cosmic complexity. Reductionism will not be truly successful until physicists and cosmologists demonstrate that the large-scale phenomena of
the world arise from fundamental physics alone. This lofty goal is still out of reach. There is uncertainty not only in how physics generates the structures of our world but also in what the
truly fundamental rules of physics are.
□ The Recursive Universe (1985), p. 31
Labyrinths of Reason (1988)
• Are there any mythical beasts which aren't simple pastiches of nature? Centaurs, minotaurs, unicorns, griffons, chimeras, sphinxes, manticores, and the like don't speak well for the human
imagination. None is as novel as a kangaroo or starfish.
□ Chapter 1: "Paradox", p. 11
• The best paradoxes raise questions about what kinds of contradictions can occur—what species of impossibilities are possible.
□ Chapter 1: "Paradox", p. 19
• At a bare minimum, understanding entails being able to detect an internal contradiction: a paradox.
□ Chapter 1: "Paradox", p. 21
• Paradox is thus a much deeper and universal concept than the ancients would have dreamed. Rather than an oddity, it is a mainstay of the philosophy of science.
□ Chapter 1: "Paradox", p. 23
• The assumption that anything true is knowable is the grandfather of paradoxes.
□ Chapter 12: "Omniscience", p. 260
Fortune's Formula (2005)
• By the mid-1930s, Moe Annenberg was AT&T's fifth largest customer.
□ Prologue: The Wire Service, p. 6
• There were many at Bell Labs and MIT who compared Shannon's insight to Einstein's. Others found that comparison unfair - unfair to Shannon.
□ Part One, Entropy, Claude Shannon, p. 15
• In American culture the coin toss is the paradigm of the random event. A coin toss decides who kicks off the Super Bowl. Looked at another way, a coin toss is not random at all. It is physics.
□ Part One, Entropy, Toy Room, p. 46
• Expectation is a statistical fiction, like having 2.5 children.
□ Part One, Entropy, Gamblers Ruin, p. 50
• Shannon's most radical insight was that meaning was irrelevant.
□ Part One, Entropy, Randomness, Disorder, Uncertainty, p. 55
• In real conversations, we are always trying to outguess each other.
□ Part One, Entropy, Randomness, Disorder, Uncertainty, p. 56
• The more improbable the message, the less "compressible" it is, and the more bandwidth it requires. This is Shannon's point: the essence is its improbability.
□ Part One, Entropy, Randomness, Disorder, Uncertainty, p. 57
• Use "entropy" and you can never lose a debate, von Neumann told Shannon - because no one really knows what "entropy" is.
□ Part One, Entropy, Randomness, Disorder, Uncertainty, p. 57
• The best strategy is one that offers the highest compound return consistent with no risk of going broke.
□ Part One, Entropy, Private Wire, p. 69
• Kelly was aware that there is one type of favorable bet available to everyone; the stock market.
□ Part One, Entropy, Minus Sign, p. 75
• The story of the Kelly system is a story of secrets - or if you prefer, a story of entropy.
□ Part One, Entropy, Minus Sign, p. 76
• The dealer now theorized that Thorp was memorizing the entire deck. He knew exactly which cards remained in the deck and bet accordingly.
Thorp said it was impossible for any one to do that.
□ Part Two, Blackjack, More Trouble Than an $18 Dollar Whore, p. 96 (See Also: Stu Ungar Section; Blackjack)
• The engine driving the Kelly system is the "law of large numbers." In a 1713 treatise on probability, Swiss mathematician Jakob Bernoulli propounded a law that has been misunderstood by gamblers
(and investors) ever since.
□ Part Two, Blackjack, The Kelly Criterion Under The Hood, p. 102
• Samuelson spotted a mistake in Bacheliers work. Bachelier's model had failed to consider that stock prices cannot fall below zero.
□ Part Three, Arbitrage, The Random Walk Cosa Nostra, p. 122
• Samuelson, however, hedged his personal bets - by putting some of his own money in Berkshire Hathaway.
□ Part Three, Arbitrage, The Random Walk Cosa Nostra, p. 125
• Carl Friedrich Gauss, often rated the greatest mathematician of all time, played the market. On a salary of 1,000 thalers a year, Gauss left an estate of 170,587 thalers in cash and securities.
Nothing is known of Gauss's investment methods.
□ Part Three, Arbitrage, This Is Not the Time To Buy Stocks, p. 132
• "Average" isn't so hot at the race track given those steep track takes. "Average" is pretty decent for stocks, something like 6 percent above the inflation rate. For a buy-and -hold investor,
commissions and taxes are small.
□ Part Three, Arbitrage, This Is Not the Time To Buy Stocks, p. 134
• Bernoulli's real contribution was to coin a word. The word has been translated into English as "utility". It describes this subjective value people place on money.
□ Part Four, St. Petersburg Wager, Daniel Bernoulli, p. 184
• Your second ducat, like your second million, is never quite as sweet.
□ Part Four, St. Petersburg Wager, Daniel Bernoulli, p. 186
• There is a deep connection between Bernoulli's dictum and John Kelly's 1956 publication. It turns out that Kelly's prescription can be restated as this simple rule: When faced with a choice of
wagers or investments, choose the one with the highest geometric means of outcomes.
□ Part Four, St. Petersburg Wager, Natures Admonition To Avoid The Dice, p. 191
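The restated rule in the quote above can be made concrete: score each candidate wager by the geometric mean of its wealth multipliers, weighted by probability. The wagers below are illustrative, not from the book.

```python
def geometric_mean(outcomes):
    """outcomes: list of (probability, wealth multiplier) pairs, probabilities summing to 1."""
    g = 1.0
    for p, multiplier in outcomes:
        g *= multiplier ** p
    return g

wagers = {
    "all-in on a fair coin": [(0.5, 2.0), (0.5, 0.0)],   # eventual ruin: G = 0
    "hold cash":             [(1.0, 1.0)],               # G = 1
    "partial favorable bet": [(0.5, 1.5), (0.5, 0.75)],  # G is about 1.06
}
best = max(wagers, key=lambda name: geometric_mean(wagers[name]))
```

The all-in bet has the higher arithmetic expectation but a geometric mean of zero, which is why Kelly's rule rejects it in favor of the smaller stake.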
• To hedge the bets he made every working day, Meriwether kept a set of rosary beads in his briefcase.
□ Part Six, Blowing Up, Martingale Man, p. 278
• For reasons mathematical, psychological, and sociological, it is a good idea to use a money management system that is relatively forgiving of estimation errors.
□ Part Six, Blowing Up, Survival Motive, p. 296-297
• The ultimate compound return rate is acutely sensitive to fat tails.
□ Part Six, Blowing Up, Survival Motive, p. 297
• The problem with winning at blackjack and sports betting is that sooner or later a big guy in a suit tells you to leave.
□ Part Seven, Signal and Noise, Hong Kong Syndicate, p. 323
Open MPI User's Mailing List Archives
Hi Derek
Typically in the domain decomposition codes we have here
(atmosphere, oceans, climate)
there is an overlap across the boundaries of subdomains.
Unless your computation is so "embarrassingly parallel" that
each process can operate from start to end totally independent from
the others, you should expect such an overlap,
but you didn't tell what computation you want to do.
The width of the overlap depends on the computation being done.
For instance, in a two-point stencil finite difference PDE solver
the overlap may have width one, but for broader FD stencils you
will need broader overlaps.
The redundant calculations of overlap points on neighbor subdomains
in general cannot be avoided.
Exchanging the overlap data across neighbor subdomain processes
cannot be avoided either.
However, **full overlap slices** are exchanged after each computational
step (in our case here a time step).
It is not a point-by-point exchange as you suggested.
Overlap exchange does limit the usefulness/efficiency
of using too many subdomains (e.g. if your overlap-to-useful-data
ratio gets close to 100%).
However, is not as detrimental as you imagined based on your
point-by-point exchange conjecture.
If your domain is 100x100x100 and you split in subdomain slices
across 5 processes, with a 1-point overlap (on each side)
you will have a 2x5/100 = 10% waste due to overlap calculations
(plus the MPI communication cost/time),
but your problem is still being solved in (almost) 1/5 of the time
it would take in serial mode.
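A serial Python sketch of the bookkeeping in the example above (the copies stand in for the MPI sends and receives): the overlap-waste ratio for slicing a 100-point domain across 5 processes, and a one-point ghost-cell exchange between neighbouring slices.

```python
def overlap_waste(domain_points, n_slices, halo=1):
    """Fraction of redundant points: 2 halo points per slice, one on each side."""
    return 2.0 * halo * n_slices / domain_points

def exchange_halos(slices):
    """Each slice is [left_ghost, interior..., right_ghost]; copy across neighbours."""
    for left, right in zip(slices, slices[1:]):
        right[0] = left[-2]    # neighbour's last interior point -> my left ghost
        left[-1] = right[1]    # neighbour's first interior point -> my right ghost

waste = overlap_waste(domain_points=100, n_slices=5)   # 0.10, i.e. 10%

parts = [[0, 1, 2, 0], [0, 3, 4, 0]]   # two slices, ghost cells still zero
exchange_halos(parts)                   # -> [[0, 1, 2, 3], [2, 3, 4, 0]]
```

In the real code, each copy becomes one message per time step carrying a full overlap face, not one message per point.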
Since your array seems to fit nicely in Cartesian coordinates,
you could use the MPI functions that create and explore
the Cartesian domain topology.
For details, see Chapter 6, section 6.5 of "MPI, The complete Reference,
Volume 1, The MPI Core, 2nd. Ed.,
by M. Snir, S. Otto, S. Huss-Lederman, D. Walker, and J. Dongarra,
MIT Press, 1998."
Also, this tutorial from Indiana University solves the 2D diffusion
equation (first serial, then parallel with MPI) and may help.
Unfortunately they don't use the MPI Cartesian functions, though:
I believe there are other examples in the web,
check the LLNL site:
The book
"Parallel Programming with MPI, by Peter Pacheco,
Morgan Kauffman, 1997" has worked out examples also.
An abridged version is available here
I hope this helps,
Gus Correa
Gustavo Correa
Lamont-Doherty Earth Observatory - Columbia University
Palisades, NY, 10964-8000 - USA
Cole, Derek E wrote:
> Hi all. I am relatively new to MPI, and so this may be covered somewhere
> else, but I can t seem to find any links to tutorials mentioning any
> specifics, so perhaps someone here can help.
> In C, I have a 3D array that I have dynamically allocated and access
> like Array[x][y][z]. I was hoping to calculate a subsection for each
> processor to work on, of size nx in the x dimension, ny in the y
> dimension, and the full Z dimension. Starting at Array[sx][sy][0] and
> going to Array[ex][ey][z] where ey-sy=ny.
> What is the best way to do this? I am able to calculate the neighboring
> processors and assign a sub-section of the XY dimensions to each
> processor, however I am having problems with sharing the border
> information of the arrays with the other processors. I don't really want
> to have to do an MPI_Send for each of the 0..Z slices' border
> information. I'd kind of like to process all of the Z, then share the
> full face of the border information with the neighbor processor. For
> example, if process 1 was the right neighbor of process zero, I'd want
> process zero to send Subarray[0..nx][ny][0..Z] (the right-most face) to
> processor 1's left-most face... assuming the X-Y plane was your screen,
> and the Z dimension extended into the screen.
> If anyone has any information that talks about how to use the MPI data
> types, or some other method, or wants to talk about how this might be
> done, I'm all ears.
> I know it is hard to talk about without pictures, so if you all like, I
> can post a picture explaning what I want to do. Thanks!
> Derek
> ------------------------------------------------------------------------
> _______________________________________________
> users mailing list
> users_at_[hidden]
> http://www.open-mpi.org/mailman/listinfo.cgi/users | {"url":"http://www.open-mpi.org/community/lists/users/2010/03/12306.php","timestamp":"2014-04-18T03:26:19Z","content_type":null,"content_length":"32696","record_id":"<urn:uuid:d2bd284f-2cfe-4cff-8789-de6683f302ed>","cc-path":"CC-MAIN-2014-15/segments/1398223203422.8/warc/CC-MAIN-20140423032003-00099-ip-10-147-4-33.ec2.internal.warc.gz"} |
rearranging formula online calculator
Author Message
mh_gml Posted: Friday 03rd of Aug 17:18
I am looking for someone who can help me with my math. I have a very important assignment coming up and need help in rearranging formula online calculator and
perfect square trinomial. I need help mainly with topics covered in Algebra 2 class and seek help to understand everything that I need to know so I can improve
my grades.
Registered: 29.09.2001
From: United Kingdom
AllejHat Posted: Sunday 05th of Aug 08:32
I find these typical queries on almost every forum I visit. Please don't misunderstand me. It's just that as we advance to high school, things change in a flash. Studies become challenging all of a sudden. As a result, students encounter trouble in completing their homework. Rearranging formulas with an online calculator is in itself a quite complex subject. There is a program named Algebrator which can assist you in this situation.
Registered: 16.07.2003
From: Odense, Denmark
DVH Posted: Sunday 05th of Aug 18:25
It would really be great if you could let us know about a utility that can offer both. If you could get us a resource that would offer a step-by-step solution to our problem, it would really be nice. Please let us know the reliable websites from which we can get the tool.
Registered: 20.12.2001
Paubaume Posted: Tuesday 07th of Aug 07:02
I remember having problems with inverse matrices, monomials and angle complements. Algebrator is a really great piece of math software. I have used it through
several math classes - Algebra 1, Intermediate Algebra and College Algebra. I would simply type in the problem and, by clicking on Solve, a step-by-step solution would appear. The program is highly recommended.
Registered: 18.04.2004
From: In the stars... where you left
me, and where I will wait for you...
Coffie-n-Toost Posted: Thursday 09th of Aug 09:12
Sounds exactly like what I want. Where can I get hold of it?
Registered: 25.10.2002
From: Rainy NW ::::
Admilal`Leker Posted: Friday 10th of Aug 13:30
Here you go kid, http://www.emathtutoring.com/iterative-solution-of-linear-equations.html
Registered: 10.07.2002
From: NW AR, USA | {"url":"http://www.emathtutoring.com/algebra-1-tutoring/equivalent-fractions/rearranging-formula-online.html","timestamp":"2014-04-20T16:43:24Z","content_type":null,"content_length":"29149","record_id":"<urn:uuid:2244b954-6111-402f-a33a-8ff697568399>","cc-path":"CC-MAIN-2014-15/segments/1398223206120.9/warc/CC-MAIN-20140423032006-00550-ip-10-147-4-33.ec2.internal.warc.gz"} |
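As a footnote to the thread: rearranging a formula for a chosen variable is exactly what a computer algebra system automates. A small sketch with SymPy, using an example formula rather than one from this thread:

```python
from sympy import symbols, Eq, solve

# Example: rearrange v = u + a*t to make t the subject.
v, u, a, t = symbols('v u a t')
rearranged = solve(Eq(v, u + a*t), t)
print(rearranged)  # [(v - u)/a]
```

The same call pattern works for any equation you can type in, which is essentially what "rearranging formula" calculators do under the hood.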
st: RE: Differencing when dependent variable only defined every 4th year
st: RE: Differencing when dependent variable only defined every 4th year
From Nick Cox <n.j.cox@durham.ac.uk>
To "'statalist@hsphsun2.harvard.edu'" <statalist@hsphsun2.harvard.edu>
Subject st: RE: Differencing when dependent variable only defined every 4th year
Date Wed, 23 Nov 2011 18:20:30 +0000
You may need to calculate all your variables before you fit to election years only.
Sebastian Barfort
I've encountered a problem in a time series regression I want to do. I have a panel with 49 US states running from 1960-2008. All my independent variables have observations for all years. However, the dependent variable is defined only every other 4th year (it's presidential election data). The regression I want to run is the following:
y(t)-y(t-4)=alpha+beta1(x1(t)-x1(t-1))+beta2(x1(t-1)-x1(t-2))+beta3(x2(t)-x2(t-1))+beta4(x2(t-1)-x2(t-2)) etc etc...
But how can I do this in Stata? I've defined the data as time series with t=year the time variable, but can't seem to find the right way to go around the problem.
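Nick's point (compute the differenced variables on the full yearly panel first, and only then keep the election years) can be sketched like this in pandas rather than Stata; within Stata itself the analogous tools are tsset with the D. and L. time-series operators. All data below is made up:

```python
import pandas as pd

# Hypothetical single-state panel: x observed yearly, y only in election years.
df = pd.DataFrame({'year': range(1960, 1981)})
df['x'] = df['year'] * 0.5                       # made-up regressor
df['y'] = df['year'].where(df['year'] % 4 == 0)  # outcome, NaN off-cycle

# Compute differences on the FULL yearly series before subsetting.
df['dy4'] = df['y'] - df['y'].shift(4)           # y(t) - y(t-4)
df['dx1'] = df['x'] - df['x'].shift(1)           # x(t) - x(t-1)
df['dx1_lag'] = df['dx1'].shift(1)               # x(t-1) - x(t-2)

# Only now restrict to election years and regress on these columns.
elections = df[df['year'] % 4 == 0]
print(elections[['year', 'dy4', 'dx1', 'dx1_lag']].tail(2))
```

For a real 49-state panel, the shift calls would sit inside df.groupby('state') so that differences never cross state boundaries.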
* For searches and help try:
* http://www.stata.com/help.cgi?search
* http://www.stata.com/support/statalist/faq
* http://www.ats.ucla.edu/stat/stata/ | {"url":"http://www.stata.com/statalist/archive/2011-11/msg01184.html","timestamp":"2014-04-21T02:52:40Z","content_type":null,"content_length":"8336","record_id":"<urn:uuid:46d9125a-2f99-4778-8552-7a0f0474a58b>","cc-path":"CC-MAIN-2014-15/segments/1397609539705.42/warc/CC-MAIN-20140416005219-00134-ip-10-147-4-33.ec2.internal.warc.gz"} |
A kitten pushes a ball of yarn rolling toward it at 1.00 cm/s with its nose, displacing the ball of yarn 17.5 cm in the opposite direction in 2.00 s. What is the acceleration of the ball of yarn?
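One standard setup (assuming constant acceleration, and taking the ball's initial direction of motion as positive): with v0 = +1.00 cm/s, displacement x = -17.5 cm, and t = 2.00 s, solve x = v0*t + (1/2)*a*t^2 for a:

```python
# Constant-acceleration kinematics: x = v0*t + 0.5*a*t**2, solved for a.
v0 = 1.00   # cm/s, initial velocity toward the kitten (taken as positive)
x = -17.5   # cm, displacement ends up in the opposite direction
t = 2.00    # s

a = 2 * (x - v0 * t) / t**2
print(a)  # -9.75, i.e. 9.75 cm/s^2 directed opposite to the initial motion
```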
Results 1 - 10 of 11
- Bull. Amer. Math. Soc., 1992
Cited by 40 (3 self)
Abstract. In this paper we discuss the basic problems of algorithmic algebraic number theory. The emphasis is on aspects that are of interest from a purely mathematical point of view, and practical
issues are largely disregarded. We describe what has been done and, more importantly, what remains to be done in the area. We hope to show that the study of algorithms not only increases our
understanding of algebraic number fields but also stimulates our curiosity about them. The discussion is concentrated of three topics: the determination of Galois groups, the determination of the
ring of integers of an algebraic number field, and the computation of the group of units and the class group of that ring of integers. 1.
- MSRI Publications, 1999
Cited by 18 (0 self)
A variety of questions in combinatorics lead one to the task of analyzing a simplicial complex, or a more general cell complex. For example, a standard approach to investigating the structure of a
partially ordered set is to instead study the topology of the associated
- Seminaires et Congres
Cited by 10 (3 self)
Key words and phrases. — Moduli spaces of covers, j-line covers, braid group and Hurwitz monodromy group, Frattini and Spin covers, Serre’s lifting invariant. Support from NSF #DMS-99305590, #
DMS-0202259 and #DMS-0455266. This contains many advances on my March 12, 2004, Luminy talk (subsumed by overheads in [Fri05a]). One of those centers on Weigel cusps and whether they exist. This
interchange with Thomas Weigel occurred in Jerusalem and Milan during the long trip including Luminy. Prop. 3.12 is due to Darren Semmen, a constant modular representation consultant. Conversations
with Anna Cadoret, Pierre Debes and Kinya Kimura influenced me to be more complete than otherwise I would have been. 2 M. D. FRIED Abstract. — Publication: In Groupes de Galois arithmetiques et
differentiels (Luminy 2004; eds. D. Bertrand and P. Dèbes), Sem. et Congres, Vol. 13
Cited by 3 (0 self)
Traditional Morse theory deals with real valued functions f: M → R and ordinary homology H∗(M). The critical points of a Morse function f generate the Morse-Smale complex CMS(f) over Z, using the gradient flow to define the differentials. The isomorphism H∗(CMS(f)) ≅ H∗(M) imposes homological restrictions on real valued Morse functions. There is also a universal coefficient version of the Morse-Smale complex, involving the universal cover M̃ and the fundamental group ring Z[π1(M)].
, 2010
Cited by 3 (1 self)
Given a compact PEL-type Shimura variety, a sufficiently regular weight (defined by mild and effective conditions), and a prime number p unramified in the linear data and larger than an effective
bound given by the weight, we show that the étale cohomology with Zp-coefficients of the given weight vanishes away from the middle degree, and hence has no p-torsion. We do not need any other
assumption (such as ones on the images of the associated Galois representations).
Cited by 1 (0 self)
We present a new, intrinsic approach to Morse Theory which has interesting applications in geometry. We show that a Morse function f on a manifold determines a submanifold T of the product X × X, and that (in the sense that Stokes' theorem is valid) T has boundary consisting of the diagonal Δ ⊂ X × X and a sum P = Σ_{p ∈ Cr(f)} U_p × S_p, where S_p and U_p are the stable and unstable manifolds at the critical point p. In the language of currents, ∂T = Δ − P (Stokes Theorem). This current (or kernel) equation on X × X is equivalent to an operator equation d ∘ T + T ∘ d = I − P (Chain Homotopy), where P is a chain map onto the finite complex of currents S_f spanned by (integration over) the stable manifolds of f. The operator P can be defined on an exterior form α by P(α) = lim_{t→∞} …
, 2004
Abstract. The two-category with three-manifolds as objects, h-cobordisms as morphisms, and diffeomorphisms of these as two-morphisms, is extremely rich; from the point of view of classical physics it
defines a nontrivial topological model for general relativity. A rather striking amount of work on pseudoisotopy theory [Hatcher, Waldhausen, Cohen-Carlsson-Goodwillie-Hsiang-Madsen...] can be
formulated as a TQFT in this framework. The resulting theory is far from trivial even in the case of Minkowski space, when the relevant three-manifold is the standard sphere. Topological gravity
extends Graeme Segal’s ideas about conformal field theory to higher dimensions. It seems to be very interesting, even in extremely restricted geometric contexts: §1 basic definitions A cobordism W:
V0 → V1 between d-manifolds is a (d + 1)-dimensional manifold W together with a distinguished diffeomorphism ∂W ∼ = V op 0 V1; a diffeomorphism Φ: W → W ′ of cobordisms will be assumed consistent
with this boundary data. Cob(V0, V1) is the category whose objects are such cobordisms, and whose morphisms are such diffeomorphisms. Gluing along the boundary defines a composition functor # : Cob(V
′ , V) × Cob(V, V ′ ′ ) → Cob(V, V ′ ′ ). The two-category with manifolds as objects and the categories Cob as morphisms is symmetric monoidal under disjoint union. The categories Cob are topological
groupoids (all morphisms are invertible), with classifying spaces
, 2001
Traditional Morse theory deals with real valued functions f: M → R and ordinary homology H∗(M). The critical points of a Morse function f generate the Morse-Smale complex CMS(f) over Z, using the gradient flow to define the differentials. The isomorphism H∗(CMS(f)) ≅ H∗(M) imposes homological restrictions on real valued Morse functions. There is also a universal coefficient version of the Morse-Smale complex, involving the universal cover M̃ and the fundamental group ring Z[π1(M)]. The more recent Morse theory of circle valued functions f: M → S1 is more complicated, but shares many features of the real valued theory. The critical points of a Morse function f generate the Novikov complex CNov(f) over the Novikov ring Z((z)) of formal power series with integer coefficients, using the gradient flow of the real valued Morse function f̄: M̄ = f∗R → R on the infinite cyclic cover M̄ to define the differentials. The Novikov homology HNov∗(M) is the Z((z))-coefficient homology of M̄. The isomorphism H∗(CNov(f)) ≅ HNov∗(M) imposes homological restrictions on circle valued Morse functions. Chapter 1 reviews real valued Morse theory. Chapters 2, 3, 4 introduce circle valued
, 2008
"... Morse index and causal continuity. A criterion for topology change in quantum gravity. ..." | {"url":"http://citeseerx.ist.psu.edu/showciting?cid=1575832","timestamp":"2014-04-20T17:26:29Z","content_type":null,"content_length":"34109","record_id":"<urn:uuid:c1f3271a-229b-4721-b229-d3c5f391e2e5>","cc-path":"CC-MAIN-2014-15/segments/1397609538824.34/warc/CC-MAIN-20140416005218-00374-ip-10-147-4-33.ec2.internal.warc.gz"} |
Multiple Choice Exam & Jelly Beans in a Bag
February 4th 2010, 04:00 AM
Multiple Choice Exam & Jelly Beans in a Bag
Probability drives me crazy (Headbang)
I couldn't figure out how it solves(Worried)(Worried)
could you help me please (Itwasntme)
Multiple Choice Exam
Each question on a 5-question multiple choice examination has 4 choices, only one of which is correct. If a student answers each question in a random fashion, what's the probability that the student answers exactly 2 questions incorrectly?
My answer:
#(S) = 4^5 = 1024
#(E) = 5C3 = 10 (I reasoned that answering exactly 2 questions incorrectly is the same as choosing which 2 questions to ignore)
P(E) = 10/1024
tell me how to solve it plz
BUT the right answer is 45/512.
Jelly Beans in a Bag
A bag contains 4 red and 6 green jelly beans.
a) If 2 jelly beans are randomly selected in succession with replacement, determine the probability that both are red.
b) If the selection is made without replacement, determine the probability that both are red.
My answer
#(S) = 24
a) #(E) = 4C2 = 6, P(E) = 6/24 = 1/4
b) #(E) = 4P2 = 12, P(E) = 12/24 = 1/2
BUT the right answer is
a) 4/25
b) 2/15
Tell me please how to solve them
February 4th 2010, 10:27 AM
The first one is solved by realizing that the student gets 3 correct and two incorrect, that the probability of a correct answer on any one question is 1/4, and the probability of an incorrect answer is 3/4. The right and wrong answers can be arranged in any one of 5C2 ways. So you get:
answer is 3/4. The right and wrong answers can be arranged in any one of 5C2 ways. So you get:
5C2*(1/4)^3 * (3/4)^2 = 10*9/1024 = 45/512.
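The same number drops straight out of the binomial formula P(k right out of n) = nCk · p^k · (1−p)^(n−k) with n = 5, k = 3, p = 1/4, which a few lines of Python confirm exactly:

```python
from math import comb
from fractions import Fraction

# Exactly 2 of 5 wrong = exactly 3 of 5 right, each right with probability 1/4.
p = Fraction(1, 4)
prob = comb(5, 3) * p**3 * (1 - p)**2
print(prob)  # 45/512
```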
For the second problem: there are 4 reds and 10 total, so the probability of pulling a red on the first draw is 4/10. If you put that red one back and draw again, the probability of drawing red a
second time is also 4/10. But if you don't replace it, then for the second draw there are only 3 reds out of the 9 that are left, so the probability of a second red is 3/9. Putting it together:
a) prob of two reds in a row with replacement = 4/10 * 4/10 = 4/25.
b) prob of two reds in a row without replacement = 4/10 * 3/9 = 2/15. | {"url":"http://mathhelpforum.com/statistics/127126-multiple-choice-exam-jelly-beans-bag-print.html","timestamp":"2014-04-20T11:36:10Z","content_type":null,"content_length":"7218","record_id":"<urn:uuid:1638f2d8-cd03-4338-99cf-006d8bdd6dbb>","cc-path":"CC-MAIN-2014-15/segments/1397609538423.10/warc/CC-MAIN-20140416005218-00402-ip-10-147-4-33.ec2.internal.warc.gz"} |
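Both jelly-bean answers can also be verified by brute-force enumeration over ordered draws, which makes the with/without-replacement distinction concrete:

```python
from fractions import Fraction
from itertools import product, permutations

beans = ['R'] * 4 + ['G'] * 6

# (a) With replacement: all 10*10 ordered pairs are equally likely.
pairs = list(product(beans, repeat=2))
p_with = Fraction(sum(a == b == 'R' for a, b in pairs), len(pairs))

# (b) Without replacement: all 10*9 ordered pairs of distinct beans.
pairs2 = list(permutations(beans, 2))
p_without = Fraction(sum(a == b == 'R' for a, b in pairs2), len(pairs2))

print(p_with, p_without)  # 4/25 2/15
```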
Network Structure and the Spread of Disease
Contagious diseases are spread (generally) when one person comes in contact with another. Thus, the number of links in a network (the number of connections one has) goes a long way toward determining how fast diseases spread.
One question which needs to be answered is whether a hub-and-spoke network or a more diffuse network will spread diseases faster. On the one hand, if the hub gets infected, it is very likely that everyone else in the hub-and-spoke network gets infected. On the other hand, if the hub does not get infected, then a more diffuse network likely will spread diseases more quickly.
A paper by Jackson and Rogers (2007) uses the concept of stochastic dominance to demonstrate which types of networks spread diseases the quickest. Today, I will summarize their model.
All nodes (think of a node as a person) in a network can either be infected (have the disease) or susceptible (do not have the disease and are not immune). We will ignore immunity in this model. The
probability a node is infected is: ν(d[i]θ[i] + x) where ν ∈ (0,1) describes the infection rate. The variable d[i] represents the degree (number of connections) that node i has, θ[i] ∈ [0,1] is the
fraction of i‘s neighbors who are infected and x is a non-negative scalar representing the rate at which infection sprouts up independent of social connections.
An individual recovers from a disease with probability δ.
Now we want to characterize how diseases spread through different social networks. Let P(d) be the probability a randomly chosen node has d connections (degree d). If ρ(d) equal the average infection
rate among nodes with degree d, then the average infection rate can be calculated as:
• θ=(Σ[d]ρ(d)P(d)d)/(Σ[d]P(d)d)
The variable θ is the average neighbor infection rate. We can estimate the change in the infection rate over time for nodes of degree d with the following equation:
• ∂ρ(d)/∂t=[1-ρ(d)] ν(θd+x) – ρ(d)δ
The first term shows how quickly susceptible nodes (i.e., the fraction 1 − ρ(d)) become infected and the second term shows how quickly infected nodes (i.e., ρ(d)) are cured. In steady state [i.e., ∂ρ(d)/∂t = 0], the average neighbor infection rate satisfies:
• θ = m^-1 Σ[d] [ν(θd^2 + xd)P(d)/δ] / [1 + ν(θd + x)/δ]
• m = Σ[d]P(d)d
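The steady-state equation defines θ only implicitly, but θ is a fixed point of the right-hand side, so simple iteration recovers it numerically. A sketch with a made-up degree distribution (every parameter value here is hypothetical, not from the paper):

```python
# Fixed-point iteration for the steady-state average neighbor infection rate.
P = {1: 0.5, 2: 0.3, 4: 0.2}   # hypothetical degree distribution P(d)
nu, delta, x = 0.2, 0.5, 0.05  # infection rate, recovery rate, background rate

m = sum(P[d] * d for d in P)   # mean degree

def next_theta(theta):
    s = sum((nu * (theta * d**2 + x * d) * P[d] / delta) /
            (1 + nu * (theta * d + x) / delta) for d in P)
    return s / m

theta = 0.5
for _ in range(200):
    theta = next_theta(theta)
print(round(theta, 4))  # the self-consistent steady-state value
```

Because the right-hand side is increasing and bounded in θ, the iteration settles quickly for parameter values like these.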
Network Comparisons
Let us now define networks according to the concept of stochastic dominance. Network P′ has first order stochastic dominance over P if Σ(d=0 to Y) P′(d) ≤ Σ(d=0 to Y) P(d) for all Y, with strict inequality for some Y. This means that network P′ has a higher fraction of nodes with lots of connections compared to network P. Jackson and Rogers prove the following:
1. If P’ strictly first order stochastically dominates P, then the steady state θ’ > θ and the steady state ρ’ > ρ.
2. If P’ is a strict mean-preserving spread of P, then θ’ > θ.
Result (1) implies that if a network has more connections, it will have a higher steady-state average neighbor infection rate (θ) and a higher overall average infection rate (ρ). This makes perfect sense.
On the other hand, result (2) shows what happens as we move towards a hub-and-spoke system (i.e., a mean-preserving spread of P). A mean-preserving spread means the average number of connections stays the same, but there are more likely to be nodes with very few connections or very many connections. Thus, a hub-and-spoke system will have a higher average neighbor infection rate, but this does not mean that the overall average infection rate will be higher.
The authors expound on result (2) in more detail below:
The change in infection rate due to a change in the degree distribution comes from countervailing sources, as more extreme distributions have relatively more very high degree nodes and very low
degree nodes. Very high degree nodes have high infection rates and serve as conduits for infection, thus putting upward pressure on average infection. Very low degree nodes have fewer neighbors to
become infected by and thus have relatively low infection rates. Which of these two forces is the more important one depends on the ratio λ=ν/δ, i.e., the effective spreading rate. For low λ, the
first effect is the more important one, as nodes recover relatively rapidly, and so there must be nodes with many neighbors in order keep the infection from dying out. In contrast, when λ is high,
then nodes become infected more quickly than they recover. Here the more important effect is the second one, as most nodes tend to have high infection rates, and so how many neighbors a given node
has is more important than how well those neighbors are connected.
For fast-spreading diseases where people recover slowly, a diffuse network increases the average infection rate. For slow-spreading diseases, or diseases where people recover relatively quickly, a hub-and-spoke system increases the average infection rate.
find the median
03-30-2008 #1
Registered User
Join Date
Mar 2008
find the median
Can anyone give me some kind of hint of what I am doing wrong here?
I am trying to find the median integer in the file. I have set it up the way my teacher asked us to, which was to make 2 passes through the file entitled "Section51.dat". The first pass (loop) is
supposed to figure out how many numbers are in the file. This works correctly.
But the second loop is supposed to get the number in the middle of the file. For me it is just dividing the count in half which means I am obviously not doing something correct.
I also can't figure out how to do this in a case when you have an even set of integers in "Section51.dat" because then you will obviously need to average the two numbers in the middle of the file
to get the median.
I have not had very much experience (or luck) with while loops yet, and I can't figure out how I am supposed to get the median integer. Can someone give me some guidance?
#include <iostream>
#include <fstream>
using namespace std;
int main()
ifstream WOW;
int count = 0, aNumber;
while (WOW >> aNumber)
int median;
if (count / 2.0 == 0)
median = (count / 2.0) + ((count / 2.0) + 1) / 2.0;
median = count / 2.0;
while (WOW >> aNumber && count << median)
cout << aNumber;
return 0;
>> while (WOW >> aNumber && count << median)
Fix the typo.
if (count / 2.0 == 0)
median = (count / 2.0) + ((count / 2.0) + 1) / 2.0;
median = count / 2.0;
Let's take examples.
If count==4, then you want the mean between 2 & 3.
If count==5, you want 3.
The only number you can divide by 2 and get zero is zero. You want to look at the remainder of the divide by 2, therefore, use the modulo operator.
If the remainder is zero, for the case of count==4, then take the middle value (2); that's your first of two values.
If the remainder is not zero, then add 1 to count and take the middle value. So, if count==5, then add 1 to count (6) and divide by 2, to yield 3.
Last edited by Dino; 03-30-2008 at 12:54 PM. Reason: typo
Mac and Windows cross platform programmer. Ruby lover.
Quote of the Day
12/20: Mario F.:I never was, am not, and never will be, one to shut up in the face of something I think is fundamentally wrong.
Amen brother!
okay I changed the median code around, but now I have this code,and it is giving me an error on line 16.
I don't understand why I can't use '%' here.
It says " invalid operands of types ‘int’ and ‘double’ to binary ‘operator%’ "
#include <iostream>
#include <fstream>
using namespace std;
int main()
ifstream WOW;
int count = 0, aNumber;
while (WOW >> aNumber)
int median;
if (count % 2.0 == 0.0)
median = (count + 1) / 2;
median = count / 2.0;
while (WOW >> aNumber && count << median)
cout << aNumber;
return 0;
2.0 is a double. % won't work with doubles. Try 2.
If the remainder is zero, for the case of count==4, then take the middle value (2); that's your first of two values.
If the remainder is not zero, then add 1 to count and take the middle value. So, if count==5, then add 1 to count (6) and divide by 2, to yield 3.
Also, there was a disconnect between what I said would work and what you implemented.
count << median
This returns a boolean? << is not a comparison operator is it, nor is median a stream.
It's a bitshift operator, so it does not cause a compile error even though it's wrong. It is also a typo that still needs to be fixed.
That's right! That's why I missed it. Dangerous! That's a creative way to screw up a program. Anyway I don't think the program will work even if it were a <. After all median <= count and usually
< count prior to entering this loop.
You're right. There are other problems as well. Re-using the file stream like that won't work without extra code (there's a recent thread about this). JoeJoe, I would first try to get the code
that determines the median position working, then worry about outputting the number at that position.
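Pulling the thread's advice together (fix the << typo, use integer % rather than dividing by 2.0, and reopen or rewind the stream for the second pass), the intended two-pass logic looks like this, sketched in Python for brevity and assuming, as the assignment seems to, that the file's numbers are already sorted:

```python
def file_median(path):
    """Median of the whitespace-separated integers in a file, via two passes.

    Mirrors the assignment's approach and, like it, assumes the numbers
    in the file are already sorted.
    """
    # Pass 1: count how many integers the file holds.
    with open(path) as f:
        count = len(f.read().split())

    # Pass 2: reopen the file (the C++ analogue is reopening, or
    # clear() followed by seekg(0)) and walk to the middle value(s).
    with open(path) as f:
        middle = []
        for i, tok in enumerate(f.read().split()):
            if i in (count // 2 - 1, count // 2):
                middle.append(int(tok))
            if i >= count // 2:
                break

    if count % 2 == 1:
        return float(middle[-1])          # odd count: the single middle value
    return (middle[0] + middle[1]) / 2.0  # even count: average the two middle values

# Demo with a small sorted file standing in for Section51.dat.
with open('section51_demo.dat', 'w') as f:
    f.write('3 7 9 12 15 21')
print(file_median('section51_demo.dat'))  # (9 + 12) / 2 = 10.5
```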
Correlation: Easy as 1-2-3?
November 1, 2012
By skycondition
I recently had a task to take a look at some assessment (audit) data. I was assuming, rather hoping, for data with a normal distribution, and thought it would be a quick case of Pearson correlation between two columns: "Duration" and "Score". As just a conjecture at this point, since I did not yet understand what the assessment process that generated the scores and durations entails, I proposed that in general an assessment's "Score" might decrease as "Duration" increases (negatively correlated). This could be explained by:
• The person conducting the assessment being more thorough, experienced, or skilled
• There are more negative findings thus more to assess
• The critically of specific issues are so blatantly bad that it prompts the assessor to delve into something they'd otherwise dismiss
• The first 3 questions out of an assessment of 30 questions are wrong, which changes the posture of the auditor for the duration of the audit. Similar to the difference between velocity and acceleration in physics, the more questions you get wrong up front, the higher the probability of a lower overall score.
• Causation: X is wrong, which mandates that the assessor examine Y, as they are integral, and so on
Not necessarily starting with a hypothesis, but with a .csv and working in R, the task would include:
Check Model Assumptions:
1. Check the form of the model.
2. Check for outliers
It is well known that it is statistical malpractice to remove outliers as they might be telling a story or highlight an inherent flaw in the system or process that obtains or generates the data. With
this specific dataset there were conspicuous problems, such as auditors who forgot to end the audit, leading to inordinate duration values; these were artifacts of the initial implementation process, such as a lack of auditor training.
A business can reap benefits from a data analyst who strives to reduce variation, not just model it. It is a better use of time to discover the underlying causes of variation rather than massaging data to find the correct distribution or transformation method to make low-score predictions.
3. Check for independence.
4. Check for constant variance.
5. Check for normality.
The question data scientists expect the normality test to answer is, does the data deviate enough from the Gaussian paradigm to forbid use of tests that assume Gaussian distributions? Scientists
intend for the normality test to indicate when to abandon conventional tests (ANOVA, etc.) and instead analyze transformed data, use a rank-based non-parametric test, resampling, or bootstrap methods.
Generally testing for normality (or any distributional assumption) should consist of two parts:
a. Graphical inspection of data via either a normal probability plot or density estimator plot
b. Formal goodness-of-fit test such as the Shapiro-Wilk, Anderson-Darling, or Cramér-von Mises
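Both steps are one-liners in most environments. For instance in Python (scipy.stats.shapiro implements Shapiro-Wilk; the sample data here is made up):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
durations = rng.normal(loc=45, scale=10, size=200)  # made-up audit durations

# (a) graphical inspection would be a histogram or density plot;
# (b) formal goodness-of-fit: the Shapiro-Wilk test.
stat, p = stats.shapiro(durations)
print(p)  # a small p-value would cast doubt on normality
```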
I like a visual representation that conveys the distribution of the data rather than tests. It seems that most data analysts align themselves with George Box's thoughts that, "To make a preliminary
test on variances is rather like putting to sea in a row boat to find out whether conditions are sufficiently calm for an ocean liner to leave port!"
I prefer a density plot over truehist():
> d <- density(data) # returns the density estimate
> plot(d)
(Assigning the result to a variable named density would shadow R's density() function, so a different name is safer.)
Deciding on the type of transformation to make data "normal"
Once we determine a distribution we decide whether to transform the variables. The rationale might be to make the outcome more normally distributed, equalize outcome variance, or linearize predictor effects. The drawback is that the original, untransformed variables might be more interpretable or credible, such as natural-scale cost versus log cost.
Select a correlation test
I won't go into the full details of how to implement R or the proofs behind each mathematical method.
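Still, as a sketch of that eventual step (Python shown for concreteness; in R the direct analogue is cor.test(x, y, method = ...)): Pearson measures linear association on the raw values, while Spearman ranks the data first and is the usual fallback when normality fails. With made-up Duration and Score columns:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
duration = rng.uniform(20, 90, size=100)                 # made-up audit durations
score = 100 - 0.5 * duration + rng.normal(0, 5, 100)     # scores fall as duration rises

r_pearson, p1 = stats.pearsonr(duration, score)
r_spearman, p2 = stats.spearmanr(duration, score)
print(r_pearson < 0, r_spearman < 0)  # both negative, matching the conjecture
```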
To be continued...
To leave a comment for the author, please follow the link and comment on his blog: Kevin Davenport » R.
What is the square root of 500?
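For the record, the answer is easy to compute: 500 = 100 × 5, so √500 = 10√5 ≈ 22.36. A quick Python check (illustrative):

```python
import math

root = math.sqrt(500)
print(root)               # approximately 22.3607
print(10 * math.sqrt(5))  # the same value, since 500 = 100 * 5
```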
Olney, MD Algebra 2 Tutor
Find an Olney, MD Algebra 2 Tutor
...In addition to teaching science, I worked on reading and math skills with the students. I have a bachelor's degree in Biology and a doctorate degree in cell and molecular biology. I'm a very
patient person and would be happy to share my knowledge and love of math and science with you. My ed...
14 Subjects: including algebra 2, reading, geometry, biology
...I scored a 790/740 Math/Verbal on my SAT's and went through my entire high-school and college schooling without getting a single B, regardless of the subject. I did this through perfecting a
system of self-learning and studying that allowed me to efficiently learn all the required materials whil...
15 Subjects: including algebra 2, calculus, physics, geometry
...I always receive the same kind of appreciation from students there too. Since I enjoy teaching, it is no wonder that I become very involved while teaching. Not only that, I try to correlate the
theoretical knowledge with real-life examples.
9 Subjects: including algebra 2, chemistry, calculus, geometry
...I use trig on a regular basis to decompose forces and loads. I have a BS in Mechanical Engineering from UMCP, and have been working in the aerospace and healthcare industries for 16 years.
Areas of expertise include hand stress calculation, component design, GD&T, tolerance studies.
10 Subjects: including algebra 2, physics, calculus, geometry
...I have previously taught economics at the undergraduate level and can help you with microeconomics, macroeconomics, econometrics and algebra problems. I enjoy teaching and working through
problems with students since that is the best way to learn.Have studied and scored high marks in econometric...
14 Subjects: including algebra 2, calculus, statistics, geometry
Related Olney, MD Tutors
Olney, MD Accounting Tutors
Olney, MD ACT Tutors
Olney, MD Algebra Tutors
Olney, MD Algebra 2 Tutors
Olney, MD Calculus Tutors
Olney, MD Geometry Tutors
Olney, MD Math Tutors
Olney, MD Prealgebra Tutors
Olney, MD Precalculus Tutors
Olney, MD SAT Tutors
Olney, MD SAT Math Tutors
Olney, MD Science Tutors
Olney, MD Statistics Tutors
Olney, MD Trigonometry Tutors | {"url":"http://www.purplemath.com/olney_md_algebra_2_tutors.php","timestamp":"2014-04-19T07:16:41Z","content_type":null,"content_length":"24001","record_id":"<urn:uuid:8720ba8d-226a-4ed6-aa5c-046064b989f7>","cc-path":"CC-MAIN-2014-15/segments/1397609536300.49/warc/CC-MAIN-20140416005216-00201-ip-10-147-4-33.ec2.internal.warc.gz"} |
Posts from April 2012 on Math Jokes 4 Mathy Folks
Archive for April, 2012
This morning, my friend AJ called to ask for help in solving a problem from his ten-year-old daughter’s homework. When he explained his dilemma, the first thing I did, of course, was laugh. “Wow,” I
said. “You really aren’t as smart as a fifth-grader, are you?”
AJ and his daughter are both intelligent, and his daughter loves math. The problem they were trying to solve was this:
What is the units digit of the product of the first 21 prime numbers?
You can use this list of prime numbers if you need some help. As a hint, the 21st prime number is 73 (which, incidentally, is the Chuck Norris of numbers).
Once you solve the problem, of course, you realize that the problem would have the same answer if asked as follows:
What is the units digit of the product of the first n prime numbers, for n > 3?
This made me think that this could be a good problem for the classroom. Have all students randomly generate a positive integer, and then have them solve the problem above using their random number to
replace n. It would be impactful for students to see that everyone gets the same answer; and those who multiplied things out might be compelled to look for a pattern and figure out why everyone got
the same answer.
But then I realized: this problem is gender biased. Well, maybe. The problem asks for the units digit of the product of the first 21 prime numbers. The choice of 21 was very deliberate, I’m sure.
It’s small enough that an industrious student might actually try to calculate the product. In my experience, female students are more industrious than males and therefore more likely to do the
computation. But the number is large enough that male students, who are lazy like I am, will think, “That’s too much work. There’s got to be a trick!”
I mentioned to AJ that if a larger number were chosen — for instance, if it involved the product of the first 1,000 prime numbers — then it might be more obvious that students ought to look for a
pattern. “You haven’t met my daughter,” he said. “She’d still try to compute it.”
You may think my assertion is crazy. There is nothing in the problem that appears inherently biased against females.
A few years ago, the AAUW published a report about gender bias in math questions. One of the selected questions was something like, “What is the value of n if n + 2 = 7?” Despite the neutrality of
the content, girls scored significantly lower than boys on this question, so it was deemed to be biased. (Sorry, I wasn’t able to find a reference to the report. If anyone knows the report to which
I’m referring, please share in the comments.)
Further, FairTest claims that the gender gap all but disappears on all types of questions except multiple choice when other question types were examined on Advanced Placement tests. What is it about
multiple choice questions that makes them implicitly unfair to females? I have no idea.
The 2012 Annual Meeting of the National Council of Teachers of Mathematics (NCTM) is happening next week, April 25‑28, in Philadelphia, PA. As it winds down, the USA Science and Engineering Festival
starts in Washington, DC, and will occur April 28‑29. It will be a busy week for me — I am performing twice at each event! If you happen to be attending either event, please stop by and say hello.
At the NCTM Annual Meeting…
• To 10 and Beyond Using Free Illuminations Resources
Friday, April 27, 8:30-10:00 a.m.
Salon A/B (Philadelphia Marriott Downtown)
• Using Free NCTM Resources to Promote an Understanding of Proportion
Friday, April 27, 1:00-2:30 p.m.
Salon A/B (Philadelphia Marriott Downtown)
At the USA Science and Engineering Festival, Washington, DC…
• Puns and Puzzles
Saturday, April 28, 2:00-2:30 p.m.
Franklin Stage (Washington Convention Center)
• Puns and Puzzles
Sunday, April 29, 3:00-3:30 p.m.
Franklin Stage (Washington Convention Center)
I am expecting an engaged crowd at each event, and I am hopeful that my presentations are received better than this…
A mathematician and an engineer attend a physics lecture. The topic is Kaluza-Klein theories involving physical processes that occur in 9-dimensional space. The mathematician is enjoying the
lecture, but the engineer is confused and frustrated. At the end, the mathematician comments about how wonderful he thought the lecture was. The engineer asks, “How do you understand this stuff?”
The mathematician replies, “I just visualize the process.”
“But how can you possibly visualize something that occurs in 9-dimensional space?”
“Easy,” says the mathematician. “First, I visualize it in n-dimensional space, and then I let n = 9.”
One of the most exciting plays in the history of professional (American) football was the opening play of the second half of Super Bowl XLIV, when the New Orleans Saints recovered an onside kick.
They then scored to take a 13–10 lead, and eventually won the game 31–17.
But onside kicks could be a thing of the past. Yesterday, New York Giants’ co-owner John Mara suggested that kickoffs might someday be eliminated from the NFL. This caused a lot of sports pundits to
react, saying that it would inherently change the game. On the Mike and Mike Show, analyst Mark Schlereth responded with these rhetorical questions:
What’re you gonna do, flip a coin three times in a row? You gotta get heads three times in a row to get an onside kick?
Once again, probability was placed front-and-center in recent football discussions. While I like Schlereth’s new, less violent, and more mathematical approach to onside kicks, I just wish he had
gotten the math right.
If you flip three coins, the probability of getting three heads is 12.5%. That’s not enough. Data shows that onside kicks in the NFL are successful 26% of the time. So the following would be a
reasonable modification to Schlereth’s proposal:
Flip two coins. Two heads results in a successful onside kick.
Then the probability would be 25%, closer to the current reality.
Unfortunately, that’s not exactly right, either — it’s based on a misleading statistic. The success rate of onside kicks is highly dependent on whether the team receiving the kickoff is expecting it
or not. When teams are expecting it, the success rate hovers around 20%; when teams aren’t expecting it, however, the success rate jumps to 60%. Considering that data, the process might be modified
as follows:
1. Kicking team indicates to referee that they will try an onside kick.
□ Of course, this must be done secretly, so as not to arouse the suspicion of the receiving team. I propose that one referee be assigned to each team; the team would encode the message using
RSA encryption, and the assigned referee would be given the corresponding RSA numbers. A message can then be passed without fear of interception by the receiving team. To ensure that this
procedure does not significantly delay the game, messages stating “we WILL try an onside kick” and “we WILL NOT try an onside kick” could be prepared in advance, and unemployed math PhD's
could be hired as NFL referees to decode the messages.
2. The receiving team must similarly indicate whether or not they suspect an onside kick.
□ Again, use RSA encryption.
3. If the kicking team chooses an onside kick, and the receiving team suspects an onside kick, then:
□ Flip 9 coins. If 9, 8, 3, or 1 of them land heads, the onside kick is successful.
□ P(9, 8, 3, or 1 head with 9 coins) = 20.1%
4. If the kicking team chooses an onside kick, but the receiving team does not suspect it, then:
□ Flip 9 coins. If 9, 8, 5, 4, or 2 of them land heads, the onside kick is successful.
□ P(9, 8, 5, 4, or 2 heads with 9 coins) = 298/512 ≈ 58.2%, just shy of the 60% target
5. If the kicking team does not choose an onside kick, then:
□ Flip 9 coins, just so the receiving team is unaware of what the kicking team decided to do, which will allow for the element of surprise with future kicks.
If the NFL decides to accept Mark Schlereth’s suggestion for using coins to determine onside kicks, I am hopeful that they will give my proposal serious consideration. If necessary, I have an Excel
spreadsheet that I would be willing to share with them.
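The probabilities in the proposal are quick to verify without a spreadsheet; here is a short Python sketch (illustrative, not the author's Excel workbook). Note that the second rule comes out at 298/512 ≈ 58.2%, a little under the 60% figure, since 9 coins cannot hit 60% exactly:

```python
from math import comb

def prob_heads_in(counts, coins=9):
    """P(number of heads among `coins` fair flips lands in the set `counts`)."""
    return sum(comb(coins, k) for k in counts) / 2 ** coins

# Receiving team suspects the onside kick: success on 9, 8, 3, or 1 heads.
expected_onside = prob_heads_in({9, 8, 3, 1})     # 103/512, about 20.1%
# Receiving team does not suspect it: success on 9, 8, 5, 4, or 2 heads.
surprise_onside = prob_heads_in({9, 8, 5, 4, 2})  # 298/512, about 58.2%

print(round(expected_onside, 3), round(surprise_onside, 3))
```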
We finished a meal at our favorite Mexican restaurant, and my wife said, “I’m not going to finish my margarita. Would you like the rest?” My response was:
Now there’s a question to which I’ll never say, “No.”
That got me to thinking… there are quite a few questions to which my answer would never be, “No.” The following is a partial list:
• Do you want to tell me a math joke?
• Paper or plastic?
• Do you want to play Scrabble^®?
• Will the Barbershop Harmony Society’s international convention be a harmonic function?
• Would you like to hear a really great math problem?
• Would you like to give a talk to our math club?
• Isn’t 2 to the power of infinity equal to infinity, and therefore isn’t 2^ℵ[0] = ℵ[0]?
• Do you want to go see the Escher exhibit at the art museum?
• Aren’t almost all numbers very, very, very large? (See Frivolous Theorem of Arithmetic.)
• Do you want to learn a new math game?
• Is there a seed number A for which A^3^n will always be prime, for integer values of n?
• Is math cool?
And all this talk of yes/no questions reminded me of a joke:
Professor: Are you good at math?
Student: Well, yes and no.
Professor: What do you mean?
Student: Yes, I’m no good at math!
Wonder when you’ll be happiest? You could look for statistical research to find the answer, but remember that 83.74% of all statistics are made up.
As it turns out, a large number of statistics that aren’t made up don’t really provide much help, either.
The Pew Research Center says that men are happiest over age 65 and that we are least happy in our 20′s. Friends Reunited says that people are happiest at age 33. A Gallup poll from 2009 said that men
are least happy in their 50′s and late 80′s, but a different Gallup poll from 2008 claimed that people are happiest at 85. This last result agrees with a report from the National Academy of Sciences,
which states that people are most depressed at age 44, as shown by the U-bend happiness curve below:
Well, shoot. With all this conflicting information, how will I know when to be happy? Until I get this all sorted out, I’ll just have to keep doing the activities that make people happiest. (Are you
really surprised by the first item on that list?)
Jean Jacques Rousseau once defined happiness as follows:
Happiness: a good bank account, a good cook, and a good digestion.
But I would define it thus:
Happiness: a sharp pencil and some paper, a good problem, and a quiet place with some time to think.
It has been said that happiness adds and multiplies, as we divide it with others. But let’s not forget how subtraction can bring happiness, too…
Some people bring happiness wherever they go. But you? You bring happiness whenever you go.
[Update, 4/13/12: When I checked into a Comfort Inn hotel last night, I was given a bag with fresh cookies. On the outside it said:
Happiness is a warm chocolate chip cookie.
That might be the best definition yet!]
“Alex,” I said, “on our walk to the gym tonight, I have a game for you and Eli to play.”
Alex responded, “Daddy, you have a lot of games.”
Yeah, it’s true.
Earlier in the afternoon, I played a game with them that I had created. On a set of index cards I had written animal names, with one catch: All of the vowels were removed. So instead of DOG, the card
had DG, and instead of ZEBRA, the card had ZBR. You get the idea.
Before we started playing, I told them an elaborate tale about how I had tried to write animal names on index cards, but the Vowel Thief kept stealing the vowels from me. At the end of my story, Eli
asked, “Did he steal all the vowels, or just some of them?” A-ha, the ruse worked! Amazing how easy it is to pull the wool over a four-year-old’s eyes. (As I explained the game, I also mentioned that
“it’ll be like reading from the Torah.” Sadly, my best joke of the day, but it fell on the deaf ears of the wrong audience.)
Here’s my list of vowel-less animals, roughly in order from easiest to hardest. Good luck.
• RHNCRS
• CHMPNZ
• RNGTN
• BFFL
• RMDLL
• HDGHG
• CHTH
• PRPS
• KNGR
• RCCN
• GZLL
• GRLL
• GRFF
• DLPHN
• CGR
• WHL
• LLM
• FRRT
• LPRD
• PND
• MNK
• TDPL
• NTTR
• TTR
• GN
• MS
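If the Vowel Thief is unavailable, the cards are easy to manufacture yourself; a small helper (Python, illustrative) reproduces the examples above:

```python
def steal_vowels(word):
    """Drop A, E, I, O, and U -- the Vowel Thief's handiwork."""
    return "".join(c for c in word if c.upper() not in "AEIOU")

print(steal_vowels("DOG"))    # DG
print(steal_vowels("ZEBRA"))  # ZBR
```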
Ask a silly question, get a silly answer.
Teacher: If you have $4, and you ask your father for another dollar, how much would you have?
Johnny: Four dollars.
Teacher: Young man, you don’t know your addition facts!
Johnny: Ma’am, you don’t know my father!
Johnny’s father and my dad seem to have a lot in common. But my dad would have been proud of me yesterday. While walking home from the local coffee shop, I noticed a corner of a dollar bill on the
ground. Not the whole bill, mind you, just a corner that had been ripped off. I thought not much of it, until two feet later I saw another scrap of the dollar bill… then another… and another…
I know and understand Calculus, and I realized that a lot of little things can add up to a lot, so I spent 15 minutes scouring the area for as many pieces of the dollar bill as I could find. I took
them home and asked my sons, “Wanna do a puzzle?” We spent a half-hour reconstructing the bill and taping it together. The pictures below show the before and after:
The bill was not in good enough shape to be accepted by a vending machine (too much tape, I suspect, and the missing piece on the right side surely didn’t help, either), but it was in good enough
shape for my bank to give me four shiny quarters in exchange for it.
I know that a penny saved is a penny earned. But what is a dollar found?
And the bigger question: What should I do with my new-found wealth?
I decided to buy a lottery ticket. The state gambling commission organized a raffle that boasted an infinite amount of money as the prize. To my great surprise, I won! When I showed up to claim the
prize, they told me it would be disbursed as 1 dollar now, 1/2 dollar next week, 1/3 dollar the third week, 1/4 dollar the week after that, and so on.
But the joke’s on them. My winnings for the third week will include a one-third cent piece, and that’s gotta be worth something, right?
(Note: Almost everything above is true. I really did find the pieces of a dollar bill on the ground yesterday. As best I can tell, the bill had been on the lawn when it was cut by the blades of a
power mower. And my bank really did give me four quarters in exchange for the taped-up, reconstructed version.)
A coworker is currently preparing to take the GRE, and today she complimented my use of semicolons in a serial list. Apparently, the comma’s stronger cousin has been a topic of study for her
recently. I told her that if she really wanted to brush up on her grammar, she ought to review the debate over the Oxford comma. The Oxford comma is the comma used immediately before and, or, and
sometimes nor preceding the final item in a list of three or more items.
I always use the Oxford comma. That’s probably because I’m old, and that’s what I was taught to do back in my one-room schoolhouse days. But it’s not just habit. There are two practical reasons.
For a serial list where each of the items contains many words (including conjunctions), I find that the Oxford comma tells the reader where to pause. For example,
Yesterday, I squared a circle, trisected an angle, found a one-line proof of the Riemann Hypothesis, and conjectured and verified three new theorems.
But the Oxford comma is especially useful for lists of exactly three items; it makes it clear that the second and third items are actually part of a list, not just modifiers of the first item. An
unintended consequence of not using the Oxford comma is shown below:
And the good folks at We Know Awesome (one of my new favorite sites) have a pretty good example involving JFK and Stalin.
It turns out that math can be useful when thinking about grammar. During a recent presentation, I showed how proportional reasoning can be used to identify past participles:
Just cross-multiply, cancel the ew and gr, and algebra reveals the past participle of flew!
April is Math Awareness Month, National Poetry Month, and National Humor Month.
I tried to run a humorous math poem contest last year, and it was a remarkable failure. There were only two entries, and one of them was submitted after the contest ended. The winning entry can be
found at the link above; the other, submitted after deadline by Chris Smith, is worth sharing, too:
Some folks, they dream of wealth and fame,
Or that some girl would know their name —
Pathetic! I reserve my slumber
For imagining my favourite number.
As rapid movement stirs my eyes,
No need for me to fantasize
Of infinitely distant wishes.
Instead I feast on π — delicious!
Not very good at learning lessons, I’m trying it again. But this time, I’m letting the good folks in the Thinkfinity Community run the contest, and maybe there will be more entrants when it’s
announced to their 50,000+ members. That’s where you can learn more about this year’s humorous math poem contest, and you should post your entries in this discussion forum. If you’re a math teacher,
you might also want to check out the discussions in the Learning Math group.
Got math? Got rhyme? Got iambic pentameter? Then let the world share your treasure! Post your entry here. | {"url":"http://mathjokes4mathyfolks.wordpress.com/2012/04/","timestamp":"2014-04-20T09:09:18Z","content_type":null,"content_length":"64848","record_id":"<urn:uuid:9882e764-477b-421a-bf84-2877d77b3212>","cc-path":"CC-MAIN-2014-15/segments/1397609538110.1/warc/CC-MAIN-20140416005218-00473-ip-10-147-4-33.ec2.internal.warc.gz"} |
Introduction to the Theory of Computation by Michael Sipser
Introduction to the Theory of Computation
Introduction to Languages and the Theory of Computation
Hertz John A.: Introduction to the Theory of Neural Computation
This book is a comprehensive introduction to the neural network models currently under intensive study for computational applications. It is a detailed, logically-developed treatment that covers
the theory and uses of collective computational networks, including associative memory, feed forward networks, and unsupervised learning. It also provides coverage of neural network applications
in a variety of problems of both theoretical and practical interest.
eruditor.com
Providing an introduction to the theory of computation, this work emphasizes formal languages, automata and abstract models of computation, and computability. It includes an introduction to
computational complexity and NP-completeness. It also introduces the necessary mathematical tools in the context in which they are used.
e-Study Guide for: An Introduction to the Theory of Numbers by G. H. Hardy, ISBN 9780199219865
Never Highlight a Book Again! Just the FACTS101 study guides give the student the textbook outlines, highlights, practice quizzes and optional access to the full practice tests for their
A Methodical Introduction to the Theory and Practice of Physic. by David MacBride, M.D.
A Methodical Introduction to the Theory and Practice of Physic. by David MacBride, M.D. : Paperback : Gale Ecco, Print Editions : 9781170442456 : 29 May 2010
wordery.com
e-Study Guide for DeathQuest III: An Introduction to the Theory and Practice of Capital Punishment in the United States, textbook by Robert M. Bohm
Never Highlight a Book Again! Just the FACTS101 study guides give the student the textbook outlines, highlights, practice quizzes and optional access to the full practice tests for their
A Familiar Introduction to the Theory and Practice of Perspective. by Joseph Priestley, LL.D. F.R.S.
A Familiar Introduction to the Theory and Practice of Perspective. by Joseph Priestley, LL.D. F.R.S. : Paperback : Gale Ecco, Print Editions : 9781170600559 : 29 May 2010
wordery.com
AN INTRODUCTION TO THE GRAMMAR OF OLD ENGLISH by Michael Cummings ( 9781845533649 )
This book applies the techniques of systemic functional grammar to the description of the Old English historical dialect, 650-1150 CE. Systemic functional grammar is an approach to the
description of language which distinguishes three separate functions in communication: language as representation, language as attitude, and language as the construction of text. Most
applications of systemic functional theory have concentrated on modern English. This book is the first comprehensive description of...
Epistemology: A Contemporary Introduction to the Theory of Knowledge by Robert Audi
Details This textbook introduces the concepts and theories central for understanding the nature of knowledge. It is aimed at students who have already done an introductory course. Epistemology,
or the theory of knowledge, is concerned about how we know what we do, what justifies us in believing what we do, and what standards of evidence we should use in seeking truths about the world of
human experience. The author's approach draws the reader into the subfields and theories of the subject, guided...
worldofbooks.com
Introduction to the Theory of Infinitesimals
Introduction to the Theory of Statistics
An Introduction to the Theory of Stellar Structure and Evolution
Introduction to the Theory of Games
One of the classic early monographs on game theory, this comprehensive overview of the mathematical theory of games illustrates applications to situations involving conflicts of interest,
including economic, social, political, and military contexts. Appropriate for advanced undergraduate and graduate courses; advanced calculus a prerequisite. Includes 51 figures and 8 tables. 1952
Introduction to the Theory of Programming Languages
The design and implementation of programming languages, from Fortran and Cobol to Caml and Java, has been one of the key developments in the management of ever more complex computerized systems.
Introduction to the Theory of Programming Languages gives the reader the means to discover the tools to think, design, and implement these languages. It proposes a unified vision of the different
formalisms that permit definition of a programming language: small steps operational semantics, big steps operational...
Introduction to the Theory of Science and Metaphysics
Introduction to the Theory of Science and Metaphysics : Paperback : Hardpress Publishing : 9781290412018 : 10 Jan 2012
wordery.com
Introduction to the Theory of the Early Universe: Cosmological Per
An Introduction to the Theory of Linear Spaces
This introduction to linear algebra and functional analysis offers a clear expository treatment, viewing algebra, geometry, and analysis as parts of an integrated whole rather than separate
subjects. All abstract ideas receive a high degree of motivation, and numerous examples illustrate many different fields of mathematics. Abundant problems include hints or answers.
Introduction to the Theory of Statistical Inference
An Introduction to the Theory of Knowledge
An Introduction to the Theory of Numbers
The sixth edition of the classic undergraduate text in elementary number theory includes a new chapter on elliptic curves and their role in the proof of Fermat's Last Theorem, a foreword by
Andrew Wiles and extensively revised and updated end-of-chapter notes.
An Introduction to the Theory of Surreal Numbers
The surreal numbers form a system which includes both the ordinary real numbers and the ordinals. Since their introduction by J. H. Conway, the theory of surreal numbers has seen a rapid
development revealing many natural and exciting properties. These notes provide a formal introduction to the theory in a clear and lucid style. The author is able to lead the reader through
to some of the problems in the field. The topics covered include exponentiation and generalized e-numbers.
Introduction to the Theory of Distributions
The theory of distributions is an extension of classical analysis which has acquired a particular importance in the field of linear partial differential equations, as well as having many other
applications, for example in harmonic analysis. Underlying it is the theory of topological vector spaces, but it is possible to give a systematic presentation without presupposing a knowledge, or
using more than a bare minimum, of this. This book adopts this course and is based on graduate lectures given over...
wordery.com
An Introduction to the Theory of Groups
This introductory exposition of group theory by an eminent Russian mathematician is particularly suited to undergraduates, developing material of fundamental importance in a clear and rigorous
fashion. A wealth of simple examples, primarily geometrical, illustrate the primary concepts. Exercises at the end of each chapter provide additional reinforcement. 1959 edition.
An Introduction to the Theory of Probability
The Theory of Probability is a major tool that can be used to explain and understand the various phenomena in different natural, physical and social sciences. This book provides a systematic
exposition of the theory in a setting which contains a balanced mixture of the classical approach and the modern day axiomatic approach.
Introduction to the Theory of Random Signals and Noise
Introduction to the Theory of Random Signals and Noise
A Concise Introduction to the Theory of Integration (Series in Pur
Thorough and self-contained, this penetrating study of the theory of canonical matrices presents a detailed consideration of all the theory’s principal features. Topics include elementary
transformations and bilinear and quadratic forms; canonical reduction of equivalent matrices; subgroups of the group of equivalent transformations; and rational and classical canonical forms. The
final chapters explore several methods of canonical reduction, including those of unitary and orthogonal transformations...
An Introduction to the Theory of Elasticity
This accessible text requires minimal mathematical background and provides a firm foundation for more advanced studies. Topics include deformation and stress, the derivation of the equations of
finite elasticity, and the formulation of infinitesimal elasticity with application to some two- and three-dimensional static problems and elastic waves. Solutions. 1980 edition.
An Introduction to the Theory of Electricity
An Introduction to the Theory of Electricity : Paperback : Nabu Press : 9781145855014 : 25 Feb 2010
wordery.com
An Introduction to the Theory of Functional Equations and Inequalities
Marek Kuczma was born in 1935 in Katowice, Poland, and died there in 1991. After finishing high school in his home town, he studied at the Jagiellonian University in Krakow. He defended his
doctoral dissertation under the supervision of Stanislaw Golab. In the year of his habilitation, in 1963, he obtained a position at the Katowice branch of the Jagiellonian University (now
University of Silesia, Katowice), and worked there till his death. Besides his several administrative positions and his outstanding...
wordery.com
An Introduction to the Theory of Graph Spectra
This introductory text explores the theory of graph spectra: a topic with applications across a wide range of subjects, including computer science, quantum chemistry and electrical engineering.
The spectra examined here are those of the adjacency matrix, the Seidel matrix, the Laplacian, the normalized Laplacian and the signless Laplacian of a finite simple graph. The underlying theme
of the book is the relation between the eigenvalues and structure of a graph. Designed as an introductory text for...
An Introduction to the Theory of Groups of Finite Order
An Introduction to the Theory of Groups of Finite Order : Hardback : Kessinger Publishing : 9781163549100 : 10 Sep 2010
wordery.com
An Introduction to the Theory of Infinite Series
An Introduction to the Theory of Infinite Series : Paperback : Nabu Press : 9781178005462 : 29 Aug 2010
wordery.com
An Introduction to the Theory of Knowledge (Cambridge Introduction
Trick for computing log(1+x)
Charles Karney showed me a clever trick for computing log(1+x). It only takes a couple lines of code, but those lines are not obvious. If you understand this trick, you’ll understand a good bit of
how floating point arithmetic works.
First of all, what’s the big deal? Why not just add 1 to x and take the log? The direct approach will be inaccurate for small x and completely inaccurate for very small x. For more explanation, see
Math library functions that seem unnecessary.
Here’s how Charles Kerney implements the code in C++:
template<typename T> static inline T log1p(T x) throw() {
  volatile T y = 1 + x, z = y - 1;
  return z == 0 ? x : x * std::log(y) / z;
}
The code comes from GeographicLib, and is based on Theorem 4 of this paper. I've read that paper several times, but I either didn't notice the trick above or forgot it.
The code includes comments explaining the trick. I cut out the comments because I’ll explain the code below.
If y = 1+x and z = y-1, doesn’t z just equal x? No, not necessarily! This is exactly the kind of code that a well-meaning but uninformed programmer would go through and “clean up.” If x isn’t too
small, no harm done. But if x is very small, it could ruin the code. It’s also the kind of code an optimizing compiler might rearrange, hence the volatile keyword telling optimizers to not be clever.
Imagine x = 1e-20. Then y = 1+x sets y to exactly 1; a standard floating point number does not have enough precision to distinguish 1 and 1 + 10^-20. Then z = y-1 sets z to 0. While z will not
exactly equal x in general, it will be a good approximation for x.
If z is not zero, log(y)/z is a good approximation to log(1 + x)/x because y is close to 1+x and z is close to x. Multiplying log(y)/z by x gives a good approximation to log(1+x).
If z is zero, x is so small that x is an excellent approximation to log(1 + x) because the error is on the order of x^2.
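The effect is easy to see by transcribing the C++ into a few lines of Python (a sketch; the name `log1p_trick` is mine, and `math.log1p` serves as the reference):

```python
import math

def log1p_trick(x):
    # Transcription of the C++ above: y and z must actually be computed
    # in floating point, not simplified away algebraically.
    y = 1.0 + x
    z = y - 1.0
    return x if z == 0.0 else x * math.log(y) / z

# For x this small, 1 + x rounds to exactly 1.0, so the naive formula
# returns log(1.0) == 0 -- completely wrong.
x = 1e-20
print(math.log(1.0 + x))    # 0.0
print(log1p_trick(x))       # 1e-20 (the z == 0 branch returns x itself)

# For a moderately small x, the trick agrees with the library log1p
# to near machine precision.
x = 1e-10
err = abs(log1p_trick(x) - math.log1p(x)) / math.log1p(x)
print(err < 1e-14)          # True
```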
Slide rule users knew about this problem; heavy-duty slide rules had separate scales for arguments close to 1.
For me the most interesting part of the code is that it uses x * std::log(y) / z instead of just std::log(y) or, what amounts to the same, std::log(1+x). As you say, this is a good approximation to
the logarithm of 1+x, but what is amazing is that it's a better approximation than computing it directly.
I have one comment and one question.
Instead of convoluted computation of z (with volatiles etc) which you test against zero, you could just check “x < std::numeric_limits::epsilon()”. This will tell you whether 1+x is representable as
a floating point of type T.
My question is this: For the case where x > epsilon (z != 0), why multiply log(y) by x and then divide by z? Why not just return log(y)? I assume this is somehow compensating for the loss of
precision when x is small but not too small, but it is not obvious how.
Stupid comment field dropped my < and >
That should read:
“you could just check x < std::numeric_limits<T>::epsilon()”
I’d also love to see the explanation of why x * log(1+x) / z is a better approximation of log(1+x) than just log(1+x) itself. The comments in the code don’t even claim that it is (?!) — they say “The
multiplication x * (log(y)/z) introduces little additional error”, which makes it sound undesirable.
So consider z is a crude estimate for x. x = z + s where s is small (not necessarily positive) (actually z is small. s is really small)
in terms of z and s you want
log(1+z+s) = (z+s)*log(1+z+s)/(z+s) = (z+s) * [log(1+z+s)/(z+s)]
so you have z+s exactly, and you can do multiplication without too much loss of precision. so you want a good estimate for [log(1+z+s)/(z+s)].
but your original problem is you can’t calculate log(1+z+s). You can only calculate log(1+z) = log(y). But note that log(1+z+s)/log(1+z) is close to (z+s)/z. So a good estimate for [log(1+z+s)/(z+s)]
is [log(1+z)/z] = [log(y)/z].
Which brings us to the result.
But why not be more direct? Let F(s)=log(1+z+s), and expand at s=0 to get F(s) ~ log(1+z)+ s/(1+z) ~ log(1+z)+s. So can’t we use log(y)+(x-z) instead? Indeed, can’t we just use this last expression
without the z==0 test?
It might be instructive to work through the algorithm with x = 1.5 * eps
where eps is the machine epsilon (the smallest positive number for which 1
+ eps is computed exactly).
x = 1.5 * eps
y = 1 + x = 1 + 2 * eps (using the round to even rule)
z = y – 1 = 2 * eps
log(y) = 2 * eps
log(y)/z = 1
return x * log(y)/z = 1.5 * eps
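The two rounding steps in this walkthrough can be checked directly in Python, which uses the same IEEE-754 doubles (a sketch; here eps is 2**-52):

```python
import math
import sys

eps = sys.float_info.epsilon           # 2**-52 for IEEE-754 doubles
x = 1.5 * eps                          # exactly representable: 3 * 2**-53

y = 1.0 + x                            # tie between 1+eps and 1+2*eps;
z = y - 1.0                            # round-to-even picks 1 + 2*eps

print(y == 1.0 + 2.0 * eps)            # True
print(z == 2.0 * eps)                  # True

# The final result stays within a few ulps of x, as the walkthrough claims.
result = x * math.log(y) / z
print(abs(result - x) / x < 1e-14)     # True
```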
Goldberg bounds the error in this method (assuming that the error in
log(x) is bounded).
@SteveBrooklineMA That’s a very interesting suggestion. I can’t see anything wrong with it.
@SteveBrooklineMA @Jitse Niesen yeah, that does sound right. I did a quick simulation and you get about half the error by using (z+s) * [log(1+z+s)/(z+s)]. Though my calculus is failing me at the
moment for explaining why.
@SteveBrooklineMA @Jitse Niesen
but log(1+z)+ s/(1+z) is way better.
so in the original code:
std::log(y) + (x-z)/y
If A is the output from the original code, B =log(y)+(x-z) and C=log(y)+(x-z)/y, I haven’t been able to get either (A-B)/A or (A-C)/A to be more than something on the order of 1e-16. The absolute
error between B and C seems even lower. This is just by naively testing with simple code and various x. So if it were up to me, I think I’d go with B.
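That comparison is easy to reproduce. In the Python sketch below (my own; A is the original algorithm, B and C are the two variants discussed above), all three agree with the library log1p to well within 1e-14 relative error across a spread of inputs:

```python
import math

def variants(x):
    y = 1.0 + x
    z = y - 1.0
    a = x if z == 0.0 else x * math.log(y) / z   # A: the original code
    b = math.log(y) + (x - z)                    # B: log(y) + (x - z)
    c = math.log(y) + (x - z) / y                # C: log(y) + (x - z)/y
    return a, b, c

for x in (1e-3, 1e-8, 1e-12, 1e-20):
    ref = math.log1p(x)
    for v in variants(x):
        assert abs(v - ref) <= 1e-14 * ref
print("all variants agree with log1p")
```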
In the original code, is x * (std::log(y) / z) better, since (x * std::log(y)) / z may overflow for large x?
But why the volatile T in the second line?
The only thing I can think of is trying to disable compiler optimization for those variables and trivial calculations (addition, subtraction).
Like most such clever formulas, this one originates with Velvel Kahan.
A proof of why it works is given in the solution to problem 1.5 of
Accuracy and Stability of Numerical Algorithms (2nd ed., 2002).
Section 1.14.1 of the book discusses a similar formula for (exp(x)-1)/x.
Thanks Nick, I see you wrote the book on this. Is there motivation for the formula beyond its similarity with (exp(x)-1)/x? You cite this paper, and say it gives an alternate method, and the method there is log(y) + (x-z)/y. Isn't this latter formula more straightforward?
Tangent to the curve from a parametric equation
September 26th 2012, 06:28 AM
Tangent to the curve from a parametric equation
I've got this question I've been labouring over.
The parametric equations of a curve are : x = 1 + 2sin^2Ɵ, y = 4tanƟ.
From a previous question I found dy/dx to be 1 / (sinƟcos^3Ɵ).
Now it says
"Find the equation of the tangent to the curve at the point where Ɵ = π/4, giving your answer in the form y = mx + c.
So what I did was first try get a value for x and y by plugging in π/4 in the parametric equations.
x = 1 + 2(sin(π/4))^2
x = 1.00
y = 4tan(π/4)
y = 0.05
Then I put in π/4 into the dy/dx equation.
1 / (sinπ/4)((cosπ/4)^3)
= 72.95
Using the point-slope rule:
y - y1 = m(x - x1)
y - 0.05 = 72.95(x - 1)
y = 72.95x - 72.95 + 0.05
y = 72.95x + 73
According to the mark scheme, that is WAY off, as their answer is y = 4x - 4. Where did I go wrong? Sorry if this is in the wrong category.
September 26th 2012, 07:01 AM
Re: Tangent to the curve from a parametric equation
You made several computational mistakes. Were you using a calculator perhaps? This problem should be solved without one.
When $\theta = \pi/4$:
$x(\theta) = x(\pi/4) = 1+2\sin^2(\pi/4) = 1 + 2 \left(\frac{\sqrt{2}}{2}\right)^2 = 1 + 2\left(\frac{2}{4}\right) = 1 + 1 = 2$.
$y(\theta) = y(\pi/4) = 4\tan(\pi/4) = 4(1) = 4$.
$\frac{dy}{dx}(\theta) = \frac{dy}{dx}(\pi/4) = \frac{1}{(\sin(\pi/4)\cos^3(\pi/4)} = \frac{1}{(\sqrt{2}/2)(\sqrt{2}/2)^3}$.
$= \frac{1}{(\sqrt{2}/2)^4} = \frac{1}{(\sqrt{16}/16)} = \frac{1}{(4/16)} = \frac{1}{(1/4)} = 4$.
So when $\theta = \pi/4$, it's at the point $(2, 4)$ and has tangent line of slope $4$.
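A quick numerical check of these values (a Python sketch, not from the thread; it simply evaluates the formulas above at θ = π/4):

```python
import math

t = math.pi / 4                           # theta = pi/4
x = 1 + 2 * math.sin(t) ** 2              # x(theta)
y = 4 * math.tan(t)                       # y(theta)
m = 1 / (math.sin(t) * math.cos(t) ** 3)  # dy/dx from the thread

print(abs(x - 2) < 1e-12)                 # True: point of tangency has x = 2
print(abs(y - 4) < 1e-12)                 # True: and y = 4
print(abs(m - 4) < 1e-12)                 # True: slope is 4
print(abs((4 * x - 4) - y) < 1e-12)       # True: (2, 4) lies on y = 4x - 4
```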
September 26th 2012, 07:32 AM
Re: Tangent to the curve from a parametric equation
Wow, you lost me.
How did you get from $2\sin^2(\pi/4)$ to $2 \left(\frac{\sqrt{2}}{2}\right)^2$ ?
September 26th 2012, 07:33 AM
Re: Tangent to the curve from a parametric equation
Another approach to find the slope of the tangent line:
$\frac{dy}{dx}=\frac{dy}{d\theta}\cdot\frac{d\theta }{dx}=\left(4\sec^2(\theta) \right)\left(\frac{1}{2\sin(2\theta)} \right)=\frac{2\sec^2(\theta)}{\sin(2\theta)}$
$\frac{dy}{dx}\left|_{\theta=\frac{\pi}{4}}=\frac{4 }{1}=4$
We could also eliminate the parameter $\theta$, to find the equivalent Cartesian equation:
$x=1+2\sin^2\left(\tan^{-1}\left(\frac{y}{4} \right) \right)=1+\frac{2y^2}{y^2+16}$
Implicitly differentiate with respect to $x$:

$1=\frac{64y}{\left(y^2+16\right)^2}\cdot\frac{dy}{dx}\quad\Rightarrow\quad\frac{dy}{dx}=\frac{\left(y^2+16\right)^2}{64y}$

When $\theta=\frac{\pi}{4}$ we have $(x,y)=(2,4)$ hence, at this point, we have:

$\frac{dy}{dx}=\frac{\left(16+16\right)^2}{64\cdot4}=\frac{1024}{256}=4$
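The Cartesian form can also be sanity-checked numerically (a Python sketch of my own; the derivative of x with respect to y is approximated by a central difference):

```python
def x_of_y(y):
    # Cartesian equation obtained by eliminating the parameter
    return 1 + 2 * y ** 2 / (y ** 2 + 16)

print(x_of_y(4.0) == 2.0)                  # True: the point (2, 4) is on the curve

h = 1e-6
dxdy = (x_of_y(4 + h) - x_of_y(4 - h)) / (2 * h)
print(abs(1 / dxdy - 4) < 1e-6)            # True: dy/dx = 4 at y = 4
```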
September 26th 2012, 11:36 AM
Re: Tangent to the curve from a parametric equation
It seems you don't know that $\sin(\pi/4) = \frac{\sqrt{2}}{2}$.
If that's so, then you don't understand some pretty basic trigonometry. If you're taking a calculus class, then that's going to cause you serious problems. I'd suggest you take measures -
self-study, a tutor, whatever - to get caught up with trigonometry.
September 26th 2012, 12:23 PM
Re: Tangent to the curve from a parametric equation
The trigonometric rules in my textbook I know and understand. They are:
y = sinx
dy/dx = cosx
y = cosx
dy/dx = -sinx
y = tanx
dy/dx = sec^2x
But it seems that they've thrown me something they haven't taught me. I'm just asking what is that rule you're talking about. Is there a more general law for it?
September 26th 2012, 02:22 PM
Re: Tangent to the curve from a parametric equation
The trigonometric rules in my textbook I know and understand. They are:
y = sinx
dy/dx = cosx
y = cosx
dy/dx = -sinx
y = tanx
dy/dx = sec^2x
But it seems that they've thrown me something they haven't taught me. I'm just asking what is that rule you're talking about. Is there a more general law for it?
it's called the unit circle ... you're expected to have been exposed to it prior to enrolling in a calculus course.
September 27th 2012, 01:15 PM
Re: Tangent to the curve from a parametric equation
Right, I've got my work cut out for myself. Thanks for the help, everyone. :-) | {"url":"http://mathhelpforum.com/calculus/204111-tangent-curve-parametric-equation-print.html","timestamp":"2014-04-18T21:17:57Z","content_type":null,"content_length":"15731","record_id":"<urn:uuid:a8134f64-b403-49c2-abf8-55410cfa5d7a>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.9/warc/CC-MAIN-20140416005215-00336-ip-10-147-4-33.ec2.internal.warc.gz"} |
Previous: Spectra
Next: Cantor
What are numbers, and what is their meaning?: Dedekind
Let us recall that by 1850 the subject of analysis had been given a solid footing in the real numbers -- infinitesimals had given way to small positive real numbers. In particular, Dedekind was not satisfied with his geometrical explanation of why it was that a monotone increasing variable, which is bounded above, approaches a limit. By November of 1858 he had resolved the issue by showing how to obtain the real numbers (along with their ordering and arithmetical operations) from the rational numbers by means of cuts in the rationals -- for then he could prove the above-mentioned least upper bound property from simple facts about the rational numbers. Furthermore, he proved that applying cuts to the reals gave no further extension.
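The idea of a cut is concrete enough to sketch in code. Below is a Python illustration (my own, not from the text) of the lower set of the cut defining the square root of 2: no rational square equals 2, so this cut is not produced by any rational and therefore names a genuinely new number.

```python
from fractions import Fraction

def in_lower_set(q):
    # Lower set of the cut defining sqrt(2):
    # all rationals q with q < 0 or q**2 < 2.
    return q < 0 or q * q < 2

print(in_lower_set(Fraction(7, 5)))       # True:  (7/5)**2 = 49/25 < 2
print(in_lower_set(Fraction(3, 2)))       # False: (3/2)**2 = 9/4  > 2

# A finite spot-check that no small rational squares to exactly 2.
print(any(Fraction(p, q) ** 2 == 2
          for p in range(1, 50) for q in range(1, 50)))   # False
```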
These results were first published in 1872, in Stetigkeit und irrationale Zahlen. In the introduction to this paper he points out that the real number system can be developed from the natural numbers:

I see the whole of arithmetic as a necessary, or at least a natural, consequence of the simplest arithmetical act, of counting, and counting is nothing other than the successive creation of the infinite sequence of positive whole numbers in which each individual is defined in terms of the preceding one.
to the integers. Finally the desire for division leads to the rationals. He seems to think that the passage through these steps is completely straight-forward, and he does not give any further
Given the rationals he comes to the conclusion that what is missing is continuity, where continuity for him refers to the fact that you cannot create new numbers by cuts. By applying cuts to the
rationals he gets the reals, lifts the operations of addition, etc., from the rationals to the reals, and then shows that by applying cuts to the reals no new numbers are created.
In his penetrating 1888 monograph Dedekind returns to numbers. The nature of numbers was a topic of considerable philosophical interest in the latter half of the 1800's -- we have already said much
about Frege on this topic. In 1887 Kronecker published Begriff der Zahl, in which he does rather little of technical interest, but he does quote an interesting remark which Gauss made in a letter to
Bessel in 1830. Gauss says that numbers are distinct from space and time in that the former are a product of our mind. Dedekind picks up on this theme in the introduction to his monograph when he says:

In view of this freeing of the elements from any other content (abstraction) one is justified in calling the numbers a free creation of the human mind.
This seems to contrast with Kronecker's later remark:
God made the natural numbers. Everything else is the work of man.
Regarding the importance of the natural numbers, Dedekind says that it was well known that every theorem of algebra and higher analysis could be rephrased as a theorem about the natural numbers --
and that indeed he had heard the great Dirichlet make this remark repeatedly (Stetigkeit, p. 338). Dedekind now proceeds to give a rigorous treatment of the natural numbers, and this will be far more
exacting than his cursory remarks of 1872 indicated. Actually Dedekind said he had plans to do this around 1872, but due to increasing administrative work he had managed, over the years, to jot down
only a few pages. Finally, in 1888, he did finish the project, and published it under the title Was sind und was sollen die Zahlen?
Dedekind starts by saying that objects (Dinge) are anything one can think of; and collections of objects are called classes (Systeme), which are also objects. He takes as absolutely fundamental to human thought the notion of a mapping. He then defines a chain (Kette) as a class A together with a mapping f from A into A, and proves that the principle of complete induction holds for chains, i.e., if A and f are given, and if B is a set of generators for A, then for any class C we have: if B is a subclass of C, and f maps the elements of A lying in C back into C, then A is a subclass of C. To say that B is a set of generators for A means that the only subclass of A which has B as a subclass and is closed under f is A itself.
Next a class A is defined to be infinite if there is a one-to-one mapping of A onto a proper subclass of itself. Dedekind then argues that infinite classes exist: if s is a thought which he has, then by letting s ' be a thought about the thought s he comes to the conclusion that there are an infinite number of possible thoughts, and thus an infinite class of objects.
A is said to be simply infinite if there is a one-to-one mapping f from A into A together with an element a of A not in the image of f, such that a generates A. He shows that every infinite A has a simply infinite B in it. Combining this with his proof that infinite classes exist we have a proof that simply infinite classes exist. Any two simply infinite classes are shown to be isomorphic, so he says by abstracting from simply infinite classes one obtains the natural numbers N.
Let 1 be the initial natural number (which generates N), and let n ' be the successor of a natural number n (i.e., n ' is just f (n)). The ordering < of the natural numbers is defined by m < n iff the class of elements generated by n is a subclass of the class of elements generated by m '; and the linearity of the ordering is proved. Next he introduces definition by recursion, namely: given any class A, any element a of A, and any function g from A into A, there is a unique function h from N into A such that h(1) = a and h(n ') = g(h(n)). He proves this by first showing (by induction) that for each natural number m there is a unique such function h_m defined on the numbers 1 through m. Then he defines h(m) to be h_m(m).
Now he turns to the definition of the basic operations. For each natural number m he uses recursion to get a function s_m from N into N satisfying s_m(1) = m ' and s_m(n ') = (s_m(n)) '. Then + is defined by m + n = s_m(n). The operation + is then proved to be completely characterized by the following: m + 1 = m ' and m + n ' = (m + n) '. Likewise multiplication and exponentiation are defined and shown to be characterized by m · 1 = m, m · n ' = (m · n) + m, and m^1 = m, m^(n ') = (m^n) · m. Using induction the following laws are established: the associative and commutative laws for + and ·, and the distributive law.
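Dedekind's recursive characterizations of the arithmetic operations translate directly into code. A Python sketch (names are mine; following Dedekind, the naturals start at 1, so each recursion bottoms out at n = 1):

```python
def succ(n):            # the successor map: n'
    return n + 1

def add(m, n):          # m + 1 = m',  m + n' = (m + n)'
    return succ(m) if n == 1 else succ(add(m, n - 1))

def mul(m, n):          # m * 1 = m,   m * n' = (m * n) + m
    return m if n == 1 else add(mul(m, n - 1), m)

def power(m, n):        # m^1 = m,     m^(n') = (m^n) * m
    return m if n == 1 else mul(power(m, n - 1), m)

print(add(2, 3), mul(2, 3), power(2, 3))   # 5 6 8
```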
The verification of these fundamental laws can be found in Appendix B of LMCS.
Now one can use the operations + and · on N, together with the ordering <, to carry out the development of the integers, the rationals, and the reals sketched above.
R. Dedekind, Stetigkeit und irrationale Zahlen. 1872.
R. Dedekind, Was sind und was sollen die Zahlen? Braunschweig, 1888.
© Stanley Burris | {"url":"http://www.math.uwaterloo.ca/~snburris/htdocs/scav/dedek/dedek.html","timestamp":"2014-04-18T03:23:01Z","content_type":null,"content_length":"12026","record_id":"<urn:uuid:65f205d1-03e8-4d23-ae34-8111c6376a68>","cc-path":"CC-MAIN-2014-15/segments/1397609532480.36/warc/CC-MAIN-20140416005212-00016-ip-10-147-4-33.ec2.internal.warc.gz"} |
OP-SF WEB
Extract from OP-SF NET
Topic #2 ------------ OP-SF NET 16.5 ----------- September 15, 2009
From: Marcel de Bruin and Tom Koornwinder
Subject: Henk G. Meijer 1940-2009
Henk G. Meijer, professor emeritus at Delft University of Technology, passed away on 7 September 2009 at the age of 68. He is survived by his partner and a grown-up daughter.
Henk Meijer obtained his PhD at the University of Amsterdam in 1967 with a thesis in number theory (Uniform distribution of g-adic numbers); his advisor was Prof. Jan Popken. In 1968 he became a
lector at Delft University of Technology where he was promoted to full professor (in Analysis) in 1973, keeping this position until his retirement at the end of 2005. During his first decade in Delft
he continued working in number theory, publishing 21 papers with primary MSC classification 10 (Number Theory).
He spent much time on duties for the Department in various councils, several of which he chaired. After a period (1978-1984) without publications, he turned his research efforts to orthogonal
polynomials and special functions. In particular he gave much attention to Sobolev orthogonal polynomials, on which he also published jointly with Francisco Marcellán and other Spanish mathematicians
as well as with his Dutch collaborators. MathSciNet lists 11 of Henk Meijer's papers with primary MSC classification 42C05 (Orthogonal Functions and Polynomials) and 7 papers with primary
classification 33 (Special Functions). His best-known paper is "Determination of all coherent pairs", J. Approx. Theory 89 (1997), 321-343 (MR1451509).
At Delft University of Technology Henk Meijer headed a group on classical analysis which included Herman Bavinck and Marcel de Bruin as senior members. Among his PhD students were Roelof Koekoek and
René Swarttouw.
SIAM AG on Orthogonal Polynomials and Special Functions Page maintained by Bonita Saunders | {"url":"http://math.nist.gov/opsf/personal/meijer.html","timestamp":"2014-04-21T02:03:27Z","content_type":null,"content_length":"2630","record_id":"<urn:uuid:f75bd9a0-e9a2-4a38-812b-8234c3098679>","cc-path":"CC-MAIN-2014-15/segments/1397609539447.23/warc/CC-MAIN-20140416005219-00473-ip-10-147-4-33.ec2.internal.warc.gz"} |
N Richland Hills, TX Algebra 1 Tutor
Find a N Richland Hills, TX Algebra 1 Tutor
...When I was in high school, I would help my classmates in every subject from English to government to calculus. I love tutoring because the one-on-one setting allows me to help each student
individually with the things they are struggling with the most. I graduated from New Tribes Bible Institute with the equivalent of an associate's degree in Biblical studies.
40 Subjects: including algebra 1, chemistry, reading, elementary math
...I took this course in high school and received an A. I have learned many applications for this subject since and have taken math through college going as far as differential equations. I can
tutor anyone taking this class.
9 Subjects: including algebra 1, chemistry, geometry, algebra 2
...Chemistry and placed out of first semester chemistry in college. Although my interests in math eventually won out over chemistry when I had to declare a major, I have never lost my interest in
chemistry. I now study biochemistry and nutrition as one of my hobbies.
82 Subjects: including algebra 1, English, chemistry, calculus
...I majored in microbiology at college and obtained a PhD in cancer biology. Currently, I am an instructor at a world-renowned institute in TX. I have tutoring experience with students from middle school/high school/college for math and biology. I was born in and grew up in South Korea.
6 Subjects: including algebra 1, calculus, biology, algebra 2
...I have been a guest lecturer in the graduate schools of management at St Louis University (Healthcare Administration) and the University of Groningen (the Netherlands) in Healthcare
Administration. I am a former director of Professional Examination Services of New York City, having served 6 year...
27 Subjects: including algebra 1, reading, physics, geometry
[R] Is this correct?
Rlover lmoto001 at fiu.edu
Mon Nov 24 03:45:55 CET 2008
I have to answer the following question for a homework assignment.
A researcher was interested in whether people taking part in sports at
university made more money after graduating, taking into account the
students' GPA. They sampled 200 alumni from a large university. The
variables are: income (income 10 years after graduating), sports (1 if they
did sports, 0 if they did not), and GPA (the grade point average at
university). Discuss the relationship between taking part in sports, GPA,
and income for these data.
The R code I used so far is
Does sports predict GPA?
> lm1<-lm(GPA~sports)
> summary(lm1)
Does sports predict income?
> lm2<-lm(income~sports)
> summary(lm2)
Does GPA predict income?
> lm3<-lm(income~GPA)
> summary(lm3)
Does sports predict income after accounting for GPA?
> lm4<-lm(income~GPA+sports)
> summary(lm4)
Can someone let me know if the above is correct? I am not sure whether to keep
all four regressions or only 1 and 4. Also, I need to plot the data on a
graph or table. Can anyone suggest how to do this? Thank you in advance :)
View this message in context: http://www.nabble.com/Is-this-correct--tp20654004p20654004.html
Sent from the R help mailing list archive at Nabble.com.
More information about the R-help mailing list | {"url":"https://stat.ethz.ch/pipermail/r-help/2008-November/180760.html","timestamp":"2014-04-19T17:52:06Z","content_type":null,"content_length":"3789","record_id":"<urn:uuid:81a87418-e871-41d8-a896-a6d140feb00a>","cc-path":"CC-MAIN-2014-15/segments/1397609537308.32/warc/CC-MAIN-20140416005217-00267-ip-10-147-4-33.ec2.internal.warc.gz"} |
Early Life and Education
Godfrey Harold Hardy, more commonly known as G.H. Hardy, was one of the most prominent mathematicians of all time. Born to mathematically inclined parents in Surrey, England on 7th February 1877, he started showing signs of a similar affinity from a young age. At the age of only two, he could write numbers up to a million and used to factorize the numbers of hymns. Hardy won a scholarship to Winchester College, where he pursued his mathematical work, before joining the well-reputed Trinity College, Cambridge, in 1896. He passed parts 1 and 2 of the Mathematics Tripos exams and earned his Master's degree in 1903. He obtained the post of Lecturer at Trinity College in 1906, left that position in 1919 for the Savilian Chair of Geometry at Oxford, and returned to Cambridge in 1931, where he taught as a professor until 1942.
Hardy brought 'rigour' to British mathematics, the gold standard for mathematical proof. He worked extensively in mathematical analysis and analytic number theory alongside J. E. Littlewood, and he contributed much to the development of number theory. The first and second Hardy-Littlewood conjectures are fine examples of this work. The Hardy-Weinberg principle, a basic result of population genetics, is another of his notable contributions, as is the Hardy-Ramanujan asymptotic formula for integer partitions, which he worked out with his collaborator Srinivasa Ramanujan.
Hardy wanted his work to be regarded as 'pure mathematics' rather than applied mathematics. He was strongly against the use of mathematics in war and military manoeuvres; in his view mathematics was not something to be used for destruction or to serve political purposes. Hardy was interested in pure mathematics and its topics, including Diophantine analysis, the distribution of primes, Fourier series, the Riemann zeta function, and the summation of divergent series.
The seriousness that Hardy brought to mathematics was quite uncommon at the time. From the essays he wrote to the new techniques he introduced into various mathematical methods, Hardy proved himself a highly significant figure in the field.
Personal Life and Death
Hardy never married but did have a few relationships. He was a very shy person who did not like to be the center of attention or to meet new people, and he was known to be cold and peculiar at times. A brilliant student, he won many awards, but he detested receiving any kind of public recognition. Later in life he was a member of various societies, such as the Cambridge Apostles and the Bloomsbury Group. He was also briefly involved in politics, not as an activist, but he did take part in the Union of Democratic Control during the First World War.

His students, however, had insight into his softer side. According to many, Hardy wanted his pupils to succeed and could not stand to see them fail. G.H. Hardy died in December 1947, after devoting all his life to mathematical work.
Is Math Getting Too Hard?
The Edge Foundation has collected over a hundred essays in response to the question “What is your dangerous idea?“.
I haven’t yet read all of them (75,000 words!), but I thought that Steven Strogatz’s idea was worth mentioning. With reference to the four-colour theorem, classification of simple groups, and sphere
packing, he worries that mathematics might be getting too hard, that the use of computer programs in mathematical proofs leaves mathematicians with the ability to show something is true without
understanding why.
Obviously the use of computers as an aid in proofs is relatively new. But is it new that there are results where we don't really understand why they are true? I've always thought that on the frontiers things are usually not well understood; but, as the body of knowledge grows, new tools are developed and new insights achieved, and what was hard becomes easier. Computer proofs may have skewed this progression somewhat, but do they signal a more fundamental change? Is it worth speculating whether or not, without computers, mathematicians might have continued working on the four-colour theorem and we might have a "real proof" by now?
I don’t think I’m quite ready to accept the idea that we are now reaching the limits of the human brain.
Aside: Professor Strogatz mentions a recent article by Brian Davies, Whither Mathematics, which discusses similar issues. It also talks about the formal verification of computer programs when they are included in a mathematical proof. Until now I've not paid much attention to such things, but I guess that if mathematical proofs require computer programs, then we'll need techniques to verify the correctness of those programs, so that the proofs can be checked like more traditional ones.
Another aside: Not Even Wrong and Cosmic Variance have some comments about a few of the physics related Edge essays.
8 thoughts on “Is Math Getting Too Hard?”
1. In case anyone reading this is interested, there’s now a whole branch of computer science which studies formal verification of computer programs and program specifications, including methods for
the automation of verification. Much of this work uses formal logic, which is one reason why modal, temporal, epistemic, deontic, etc, logics now play such a large role in CS.
A fascinating, and easy-to-read, account of the conflicts between mathematicians and computer scientists in the development of formal verification can be found in this book by Donald MacKenzie (a
sociologist of technology at the University of Edinburgh, Scotland): “Mechanizing Proof: Computing, Risk, and Trust” (MIT Press, 2004).
2. the use of computer programs in mathematical proofs leaves mathematicians with the ability to show something is true without understanding why.
I was always under the impression that people still don’t really understand why the Monstrous Moonshine conjectures are true despite Borcherds’s proof. His proof works but it doesn’t seem to
satisfy people's intuitions. So I think there's nothing really new about proof without understanding. In fact, ever since mathematicians first figured out how to do algebraic manipulation
there have been proofs without understanding.
3. I second sigfpe’s opinion, and take it one further – all through grad school I was able to show things were true with absolutely no understanding of why
Kidding aside, this “problem” has been around since the Hilbert basis theorem and I worry about it not one bit. Even if you look at a computer proof as an oracle, it isn’t as if you only get one
question. As Robbie said, knowledge accretes around a fact, and eventually comprehension dawns. I feel that the people who worry about this have never actually done any mathematical research and
felt the process happen in their head.
4. It depends on your level of paranoia. If a very useful theorem was proved only by a computer because it would take years for a human to verify the proof, then you can formulate truckloads of
maths (possibly creating other computer-only proofs).
What if one day you reach some insane conclusions and realize that you have to check every one of these robot-proofs?
The situation is very similar to compiling programs. You have to trust the compiler, or else you can’t do any business. If every native binary executable had to be checked by a human compiler
writer, I wouldn’t be able to post this message.
5. You have to trust the compiler
Actually I think it’s different. I don’t trust the compiler (and this isn’t just abstract theoretical doubt, it’s based on actual experience). Binary executables do need to be checked. This
checking takes the form of testing and using the principle of philosophical induction: once I’ve tested it enough, I’ll decide code is good enough to release. But I’ll never say “this absolutely
definitely works”. In mathematics I need to say “this absolutely definitely works”.
6. I must admit that this handwringing doesn’t make a whole lot of sense to me.
After all, the idea of a true but unprovable statement is not exactly new — even if people are still in denial about it. And that is a much more devastating thing than the “true but only provable
by messy proof” situation of the 4 color theorem and things like it. What is the logical requirement that the truth be neat and human-understandable?
Now, intuition is of course a good thing. One point being missed by Strogatz is that a major purpose of “Wolframian” experiments is exactly to build intuition by exploring the space of what is
possible, instead of sticking to the easy ones. And by exploring the larger space, you get a sense of where known knowledge fits.
7. I need to say “this absolutely definitely works”.
I’d rephrase that as: I’d like to say this absolutely works. If there’s a finite number of proofs we can fit in one page, at some point we’re guaranteed to spill into the second page. I’d like to
hope that I’ll never reach my personal limits in my lifetime, but that ought to become less and less likely with every generation.
8. PeterMcB: I was vaguely aware of formal verification in computer
science. However, until now, I never thought it would be useful for
any programs I’d be interested in and so I never paid much attention
to it. (I always imagined it as something for control systems on machines that can’t afford to fail).
sigfpe: Not trusting the compiler seems reasonable. But, where does
it stop? Do you trust the microcode in the processor? (I remember
about 10-15 years ago the pentium processors had a bug in floating
point arithmetic). Do we need formal verification of everything from
the source code down to the logic gates (i.e. the compiler and
microcode)? Then, what about the logic gates? We know they work from
theoretical and experimental physics and decades of processors doing
what we expect them to. But we can’t verify mathematically that they work.
I’m sure the questions have already been tackled by mathematicians and computer scientists. I’m curious what the answers are.
(Sorry for the late reply; I’ve been traveling).
Dealing with Dependent Uncertainty in Modelling: A Comparative Study Case through the Airy Equation
Abstract and Applied Analysis
Volume 2013 (2013), Article ID 279642, 12 pages
Research Article
Instituto Universitario de Matemática Multidisciplinar, Universitat Politècnica de València, Camino de Vera s/n, 46022 Valencia, Spain
Received 12 July 2013; Accepted 16 September 2013
Academic Editor: Benito Chen-Charpentier
Copyright © 2013 J.-C. Cortés et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any
medium, provided the original work is properly cited.
The consideration of uncertainty in differential equations leads to the emergent area of random differential equations. Under this approach, inputs become random variables and/or stochastic
processes. Often one assumes that inputs are independent, a hypothesis that simplifies the mathematical treatment although it may not be met in applications. In this paper, we analyse, through the
Airy equation, the influence of statistical dependence of inputs on the output, computing its expectation and standard deviation by Fröbenius and Polynomial Chaos methods. The results are compared
with Monte Carlo sampling. The analysis is conducted by the Airy equation since, as in the deterministic scenario its solutions are highly oscillatory, it is expected that differences will be better
highlighted. To illustrate our study, and motivated by the ubiquity of Gaussian random variables in numerous practical problems, we assume that inputs follow a multivariate Gaussian distribution
throughout the paper. The application of the Fröbenius method to solve the Airy equation is based on an extension of the method to the case where inputs are dependent. The numerical results show that the existence of statistical dependence among the inputs, and its magnitude, entails changes in the variability of the output.
1. Introduction and Motivation
Deterministic differential equations (ddes) have demonstrated to be powerful tools for modelling numerous problems appearing in different areas including physics, chemistry, economy, engineering, and
epidemiology. Their practical application requires knowing their inputs (coefficients, forcing terms, initial/boundary conditions, etc.). This task can only be done after accurate measurements that
usually contain uncertainty due to measuring errors or the inherent complexity or ignorance of the phenomena under study. This approach leads us to consider the inputs of such models as random
variables (rvs) or stochastic processes (sps) rather than deterministic constants or functions, respectively. Differential equations containing in their formulation randomness are usually referred to
as random differential equations (rdes) or stochastic differential equations (sdes) depending on the kind of uncertainty therein. When randomness is just considered through the white noise process
(i.e., the generalized derivative of the Wiener process), they are usually called sdes. Then, Itô calculus is required in order to conduct the study. Otherwise, the term rde is used (see [1, page
66], [2]).
Rdes constitute a natural extension of ddes. Generalized Polynomial Chaos (usually denoted by gPC) and Monte Carlo sampling (MCs) are probably the most popular techniques to deal with rdes (see for
instance [3, 4], resp.). In addition to these approaches, the extension of some deterministic techniques to the random scenario, based on the so-called -stochastic calculus, also constitutes useful
tools to solve rdes (see [5, 6] and the references therein). In particular, a random power series Fröbenius method has been recently proposed by some of the authors to study some significant rdes by
assuming that random inputs are independent [7, 8]. Although the independence assumption simplifies the mathematical treatment of the models, it may not hold in many practical situations. Apart
from few contributions such as [9, 10] where the authors study the dependent scenario by taking advantage of gPC approach, most methods developed to study rdes rely on independence of random inputs.
In particular, to the best of our knowledge, applications of the random Fröbenius method considering dependent rvs have not been studied yet. As a consequence, the study of rdes with dependent inputs
is currently an active research area, mainly stimulated by the necessity of providing more realistic approaches and more accurate answers in mathematical modelling.
In this paper, we present a comparative study about the capability of previous approaches to deal with rdes containing dependent random inputs. To conduct the study, we will consider the Airy random
differential equation: where are assumed to be Gaussian dependent rvs on a probability space . We point out that the Airy equation has been selected since, as is well known, in the deterministic
scenario its solutions are highly oscillatory [11]; therefore, it is expected that differences among gPC, Fröbenius method, and MCs will be better highlighted.
Specifically, we will compare, by means of several illustrative examples, the quality of the numerical approximations provided by the three approaches to compute the average and standard deviation of
the solution sp to the initial value problem (ivp) (1). These examples will allow us to elucidate, through the random Airy differential equation, whether the statistical independence between the
random inputs (initial conditions and coefficients), usually assumed in many applications, has a significant influence on the output.
The paper is organized as follows. Section 2 is divided in two parts. The first one is devoted to construct a mean square convergent random power series solution to the ivp (1) using the random
Fröbenius method including approximations of the average and the variance of the solution sp. In the second part, we summarize the main features of the gPC method to study rdes, and we apply gPC to
the particular case where inputs are Gaussian dependent rvs. In Section 3 we will present several illustrative examples. The aim of these examples is twofold. First, to highlight the similarities and
differences between the three approaches in dealing with both dependent/independent (Gaussian) rvs; second, to reveal, through the Airy rde, the importance of setting appropriately the statistical
dependence of random inputs in dealing with mathematical models. Conclusions are drawn in the last section.
2. Development
In this section, we present the main results required to construct using Fröbenius and gPC methods the approximations of the mean and standard deviation of the solution sp to the ivp (1) when inputs
,, are assumed to be dependent rvs. We point out that the description of the Fröbenius method in this scenario is deliberately brief, since it follows in broad outline that of the independent case, which has been developed by some of the authors previously [7]. Foundations and further details about the gPC method can be found in [3], for instance.
2.1. Tackling Random Dependence Using Fröbenius Method
The random Fröbenius method consists of constructing a power series solution to the ivp (1), say, which is mean square convergent on a certain -domain. The rigorous construction of such a mean square
convergent random infinite series requires -calculus with , where convergence is defined in the -norm: [5, 7]. Convergence with /-norm is usually referred to as mean square (ms)/fourth (mf)
convergence. By the Schwarz inequality, it is straightforward to prove that mean fourth convergence implies mean square convergence. Based on the ideas developed in [7], now we assume that the
following joint absolute moments with respect to the origin increase at most exponentially; that is, there is a nonnegative integer and positive constants and , such that In the case of ivp (1), the
random power series solution is given by [7] Under hypothesis (2) mf convergence (and hence ms convergence) of the first series appearing in (3) follows straightforwardly: where . As a consequence,
for we have obtained as a majorant series that is convergent for all as it can be directly checked by D’Alembert test: The mf convergence for the second series in (3) follows analogously.
Remark 1. By assuming that there are positive constants and such that for every and for , m.s. convergence of series (3) can also be established analogously as we have shown previously. In fact, this
follows immediately from the Schwarz inequality where , being <, . However, notice that this condition is stronger than (2).
Taking advantage of m.s. convergence of series appearing in right-hand side of (3) together with the following property ([2, page 88]): we can obtain approximations for the average, , and variance, ,
(or equivalently standard deviation) by truncating the random power series of given by (3). For the approximation of the average one gets In order to obtain approximations of the standard deviation
of , we will take into account the well-known representation of the variance in terms of the two first moments, . Therefore, it is enough to approximate the second moment:
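The truncated approximations above can be illustrated numerically. Since the displayed form of the random Airy equation is not legible in this copy, the sketch below assumes the form X''(t) + A t X(t) = 0 with X(0) = Y0, X'(0) = Y1, whose series coefficients satisfy c_{n+3} = -A c_n / ((n+3)(n+2)) with c_2 = 0; the input means and covariances are hypothetical, and the expectations are estimated by sample averages over the joint Gaussian inputs rather than computed in closed form.

```python
import numpy as np

def airy_series(t, A, Y0, Y1, N=30):
    """Truncated Frobenius series for X'' + A*t*X = 0, X(0)=Y0, X'(0)=Y1.

    Coefficients obey c_{n+3} = -A*c_n/((n+3)*(n+2)), with c_2 = 0; the inputs
    A, Y0, Y1 may be arrays of samples (one series evaluation per sample).
    """
    c = {0: np.asarray(Y0, dtype=float), 1: np.asarray(Y1, dtype=float)}
    c[2] = np.zeros_like(c[0])
    X = c[0] + c[1] * t
    for n in range(N - 2):
        c[n + 3] = -A * c[n] / ((n + 3) * (n + 2))
        X = X + c[n + 3] * t ** (n + 3)
    return X

rng = np.random.default_rng(0)
mu = np.array([1.0, 1.0, 0.0])            # hypothetical means of (A, Y0, Y1)
Sigma = np.array([[0.04, 0.01, 0.00],     # hypothetical covariance: A and Y0,
                  [0.01, 0.04, 0.02],     # and Y0 and Y1, are dependent
                  [0.00, 0.02, 0.04]])
A, Y0, Y1 = rng.multivariate_normal(mu, Sigma, size=100_000).T
X = airy_series(0.5, A, Y0, Y1)
print(f"E[X(0.5)] ~ {X.mean():.4f}, sd[X(0.5)] ~ {X.std():.4f}")
```

Setting the off-diagonal covariance entries to zero lowers the estimated standard deviation, mirroring the contribution of the cross-moment terms in the truncated formulae above.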
Remark 2. In order to legitimate the use of the previous approximations to the average and the standard deviation, condition (2) must be checked in practice. However, there is a lack of explicit
formulae for the absolute moments with respect to the origin of some rvs. This leads us to look for a general approach to deal with a wide range of random inputs taking advantage of the so-called
censuring method (see [12, chapter V]). Let us assume that rvs , , satisfy Then where denotes the joint probability density function (p.d.f.) of rvs , and , , being , . Indeed, in the case that , one
gets Notice that in the last step the double integral is just 1, since is a pdf. The other cases can be analyzed analogously. Substituting the integral by a sum in (12), the previous reasoning remains
true when and/or , are discrete rvs. As a consequence, important rvs such as binomial, hypergeometric, uniform or beta satisfy condition (2), which is related to joint absolute moments of , , . It is
worthwhile to point out that there are significant rvs that do not satisfy condition (2) such as the exponential rv, for instance. In fact, taking , in (2), if , , then . As a consequence, in this
case condition (2) is not fulfilled. Although other unbounded rvs can also verify condition (2), we do not need to check it each time, since if we censure its codomain suitably, we are legitimated to
compute approximations to the mean and standard deviation according to formulae (9)-(10), respectively. The larger the censured interval, the better the approximations. However, in practice,
intervals relatively short provide very good approximations. For instance, as an illustrative example notice that the truncated interval contains the of the probability mass of a Gaussian rv with
mean and standard deviation .
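The closing claim is easy to verify numerically; the exact mass fraction quoted above is not legible in this copy, so the half-widths below (one to three standard deviations) are illustrative. For a Gaussian rv, the censured interval with half-width three standard deviations retains about 99.73% of the probability mass, independently of the mean and the scale:

```python
import math

def gauss_cdf(x, mu=0.0, sigma=1.0):
    """Cumulative distribution function of N(mu, sigma^2) via the error function."""
    return 0.5 * (1.0 + math.erf((x - mu) / (sigma * math.sqrt(2.0))))

# Probability mass retained by the censured interval [mu - k*sigma, mu + k*sigma];
# these fractions do not depend on mu or sigma.
masses = {k: gauss_cdf(k) - gauss_cdf(-k) for k in (1, 2, 3)}
for k, m in masses.items():
    print(f"mu +/- {k} sigma retains {m:.4%} of the mass")
```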
2.2. Tackling Random Dependence by Generalized Polynomial Chaos Method
As it has been underlined in Section 1, gPC constitutes a powerful method to deal with randomness in differential equations, say where denotes a differential operator; is the solution sp to be
determined and is a forcing term. Notice that in the rde (14) uncertainty is represented by and it just enters through its coefficients and forcing term, although in practice it could also be
considered via initial and/or boundary conditions. For the sake of clarity, in the following each scalar random input will be denoted by .
gPC permits to represent spectrally each in the random dimension, and the solution sp, , in . These representations are given by infinite random series defined in terms of certain orthogonal
polynomial expansions which depend on a number of rvs , , This set constitutes a complete orthogonal basis in with the inner product where is the Kronecker delta and denotes the ensemble average
defined as follows: being the joint pdf of and its support.
The choice of the trial basis is crucial in dealing with rdes. In [3], authors provide a comprehensive way to choose the trial basis according to the statistical distribution of the random input in
order to achieve optimal convergence in (15). For instance, if rv follows a binomial, negative binomial, hypergeometric, Poisson, beta, or gamma distribution, then should be taken as Krawtchouk,
Meixner, Hahn, Charlier, Jacobi, Laguerre orthogonal polynomials belonging to the Wiener-Askey scheme, respectively. In the significant case that is a Gaussian rv, Hermite polynomials are required.
This particular case is referred to as Polynomial Chaos rather than gPC. Throughout this paper only PC will be used. The key connection to do an adequate selection of the trial basis lies in the
close relationship between the pdf of some standard rvs and the weight function that defines the inner product (17) with respect to which some classical polynomials are orthogonal.
In order to keep the computations affordable in dealing with rdes, each random model parameter as well as the solution sp is represented by truncated series of the form (15), where the number of
components of random vector also needs to be truncated at a number called the order of chaos, . The truncation order is made so that all expansion polynomials up to a certain maximum degree, , are
included. This entails the following relationship between the number of terms in the series expansions, the maximum degree , and the order of chaos : In this context, solving the rde (14) consists of
computing coefficients appearing in (18) which also allows to compute approximations of the expectation and the standard deviation to the solution sp as follows: To achieve this goal, first the
expansion of given by (18) is substituted into the rde (14). Second, a Galerkin projection is done by multiplying the rde by every polynomial of the expansion basis , and then, the ensemble average
is taken. This leads to that corresponds to a deterministic system of coupled differential equations whose unknowns are the node functions . These unknowns can be computed by standard numerical
techniques such as Runge-Kutta scheme.
Most of the contributions based on gPC assume that rvs , are independent which facilitates the study. The case in which random parameters are assumed to be dependent is currently a topic under study.
In [9, 10], authors present methods based on gPC to tackle dependence in differential equations. Both contributions provide general techniques that can be applied whenever the joint pdf of the random
inputs is known. However, in practice obtaining this joint pdf can be very difficult, even impossible. In the particular case where the inputs are dependent Gaussian rvs, an alternative
method can be applied to conduct the corresponding study taking advantage that uncorrelation and independence are equivalent notions for Gaussian rvs together with Cholesky matrix decomposition. To
exhibit how the method is going to be applied in our case, let us remember the following basic result.
Proposition 3. Let be a Gaussian vector with mean and variance-covariance matrix : (the symbol ⊤ denotes the usual matrix transpose operator). For each deterministic vector and deterministic matrix ,
the random vector defined by the linear transformation: follows the Gaussian distribution: .
On the other hand, let be a Gaussian random vector. As is a variance-covariance matrix, it is Hermitian and positive definite. Hence, there is a matrix, say , such that . For instance, Cholesky
decomposition provides a well-known procedure to compute matrix [13].
Keeping the notation of the previous context, we apply Proposition 3 to the particular case where (so, and , being the identity matrix of size ); and . Then As , then , are Gaussian and uncorrelated
r.v.’s and, therefore, independent. As a consequence, expression (22) provides a direct way to represent a Gaussian random vector with components statistically dependent by means of a linear
transformation of a Gaussian vector whose components are independent: .
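A minimal numerical check of this construction (the mean vector and covariance entries below are illustrative, not taken from the paper's examples): independent standard Gaussian components are drawn, the linear map with the Cholesky factor is applied, and the sample mean and covariance are compared against the targets.

```python
import numpy as np

rng = np.random.default_rng(1)
mu = np.array([1.0, 0.5, 0.0])            # illustrative mean vector
Sigma = np.array([[0.09, 0.03, 0.00],     # illustrative variance-covariance
                  [0.03, 0.09, 0.03],     # matrix (symmetric positive definite)
                  [0.00, 0.03, 0.09]])
L = np.linalg.cholesky(Sigma)             # Sigma = L @ L.T

xi = rng.standard_normal((200_000, 3))    # independent standard Gaussian components
Zs = mu + xi @ L.T                        # each row ~ N(mu, Sigma), dependent

print(np.round(Zs.mean(axis=0), 3))
print(np.round(np.cov(Zs, rowvar=False), 3))
```

Substituting this transformation into the random inputs, as described next, is what recasts the dependent problem as one with independent standard Gaussian inputs.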
Now we detail how the previous development can be applied to transform the ivp (1), where random inputs ,, are assumed to be Gaussian dependent into another one with Gaussian independent random
inputs. Let be the multivariate Gaussian distribution of the random data, and let us denote by the Cholesky decomposition of variance-covariance matrix of . According to (22), we define the linear
transformation: where denotes the mean of vector and , being independent and identically distributed standard Gaussian rvs, that is, , . By (23), ivp (1) can be recast as follows: where random inputs
, , are independent standard Gaussian rvs. This allows us to compute approximations of the expectation and standard deviation functions by PC according to (20).
3. Examples
In this section we will present several illustrative examples based on ivp (1) in order to compare the approximations provided by Polynomial Chaos, Fröbenius, and Monte Carlo simulation. The
comparison is performed by computing the average and standard deviation functions of the solution of ivp (1). As we pointed out in the Introduction section, the study is conducted through Airy
differential equation since, as in the deterministic case its solutions are highly oscillatory, it is expected that differences among the three previous approaches will be better highlighted in the
random framework. Computations have been carried out with Mathematica package [14]. In particular, the coupled systems of differential equations obtained after applying gPC in each example are
numerically solved with this software.
The examples have been designed to explore both the marginal influence of randomness on the output when it is assumed that only some inputs of ivp (1) are rvs (see Examples 1 and 2, where and are
assumed to follow a bivariate Gaussian distribution, resp.), and the cases presented in Examples 3 and 4, where all the random inputs are assumed to follow a multivariate Gaussian distribution. In the two
first examples, we also investigate the influence on the output when the numerical value of the correlation coefficient of the two-dimensional random input changes. Examples 3 and 4 seek to illustrate the different
qualitative behaviour of the solution sp of ivp (1) depending on rv .
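As a reference point for the comparisons below, the MCs baseline can be sketched by integrating each sampled realisation of the (assumed) Airy equation X'' + A t X = 0 with a classical Runge-Kutta scheme; the means and covariances used are hypothetical stand-ins for the values quoted in the examples. Positive correlation between the initial conditions visibly inflates the estimated standard deviation, anticipating the conclusions drawn in Examples 1-4.

```python
import numpy as np

def rk4_airy(A, Y0, Y1, t_end=1.0, h=0.01):
    """Integrate X'' + A*t*X = 0, X(0)=Y0, X'(0)=Y1 with classical RK4,
    vectorised over all Monte Carlo samples at once."""
    def f(t, y):                       # y = (X, X'); returns (X', X'')
        return np.array([y[1], -A * t * y[0]])
    y = np.array([Y0, Y1], dtype=float)
    t = 0.0
    for _ in range(round(t_end / h)):
        k1 = f(t, y)
        k2 = f(t + h / 2, y + h / 2 * k1)
        k3 = f(t + h / 2, y + h / 2 * k2)
        k4 = f(t + h, y + h * k3)
        y = y + (h / 6) * (k1 + 2 * k2 + 2 * k3 + k4)
        t += h
    return y[0]                        # X(t_end) for every sample

rng = np.random.default_rng(2)
mu = np.array([1.0, 1.0, 0.0])         # hypothetical means of (A, Y0, Y1)
stats = {}
for c01 in (0.0, 0.03):                # cov(Y0, Y1): independent vs dependent
    Sigma = np.array([[0.04, 0.00, 0.00],
                      [0.00, 0.04, c01],
                      [0.00, c01, 0.04]])
    A, Y0, Y1 = rng.multivariate_normal(mu, Sigma, size=50_000).T
    X = rk4_airy(A, Y0, Y1)
    stats[c01] = (X.mean(), X.std())
    print(f"cov(Y0,Y1) = {c01}: mean = {X.mean():.4f}, sd = {X.std():.4f}")
```

The sampled standard deviation for the dependent case exceeds the independent one, which is exactly the effect the tables and figures below quantify with all three methods.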
Example 1. This first example has been devised to investigate whether statistical dependence between the initial conditions entails a substantial change in the output with respect to the independence assumption. In addition, it allows us to highlight some significant advantages of the Fröbenius and PC methods in comparison with MCs. Let us consider the ivp (1) where and the initial conditions are
assumed to be dependent Gaussian rvs: , where In this case, we can directly monitor the influence of the statistical dependence between and , measured through their correlation coefficient , in
the computations of the average and the standard deviation when these moments are calculated using any of the three methods. For the Fröbenius method, taking into account that and expression (10), we observe that dependence only contributes through the term: . In the case of the PC method, dependence is monitored directly through the resulting ivp: since some coefficients , , depend on .
Specifically, are the entries of the Cholesky decomposition of variance-covariance matrix . In this case one gets Notice that in (26), and are independent standard Gaussian rvs: . With respect to MC
simulation, it is obvious that dependence is monitored directly through the bivariate pdf, which depends on : The random samples needed to apply the MC method have been generated with the Mathematica
instruction: “RandomVariate[MultinormalDistribution].”
Notice that the Fröbenius and PC approaches turn out to be much more fruitful than MCs in this particular example. In the case of the Fröbenius method, the series representation of the solution sp given by (3)
permits not only to compute reliable approximations of the average and standard deviation but also to determine its full statistical distribution, which in general is a major challenge. Firstly,
notice that under condition (2), we have proven that the infinite series (3) converges in the mean square sense for each ; therefore, it also converges in distribution for every . In accordance with
(3), the truncated approximation of is given by where we adopt the convention for . Note that where denotes the confluent hypergeometric function of order and evaluated at the point .
Since the vector has a bivariate Gaussian distribution, follows a univariate Gaussian distribution whose average and variance are given by respectively. By property (8), these expressions converge as
to the exact average, , and variance , respectively. This determines completely the statistical distribution of for every .
On the other hand, PC method also provides, in this example, a useful series representation of the solution sp that permits to obtain its statistical distribution. It is straightforward to check that
the PC series representation obtained from ivp (26) has the following finite linear form: where , independent, and , and . Hence, following an analogous reasoning as in the previous case, we deduce
the full distribution of . With respect to the MC approach, it only provides a set of numerical values of the solution at some selected time instants, from which only rough approximations of its statistical distribution can be obtained.
In Tables 1 and 2, we compare the approximations of the expectation and standard deviation at different time instants and correlation values: , respectively. Notice that these values correspond to
average negative dependence; independence; average positive dependence; and strong positive dependence, respectively. We assume , , , . (), () and () denote the approximate expectation (and standard
deviation) at time obtained by Fröbenius (see expressions (9) and (10), or equivalently, (31) and (32)) with truncation order , PC (see expression (20)) of order and MCs using simulations, respectively. The values of and shown in Tables 1 and 2 are those obtained when the numerical stabilization at six significant digits is achieved.
We observe that the numerical results provided by the Fröbenius and PC approaches match, while MCs captures about three significant digits. From Tables 1 and 2, we realize that the correlation value between and does not influence the average, but it does decisively influence the standard deviation of the solution. This indicates that it is crucial to know not only the existence of statistical dependence between the initial conditions but also to quantify, as accurately as possible, its value through the correlation coefficient . Notice that these numerical conclusions agree with formulae (31)-(32).
Example 2. In the previous example, uncertainty entered the equation only through the initial conditions and . Although both rvs are dependent, we have shown that this does not influence the expectation of the solution, although it does affect its standard deviation. Does this answer change in the case that randomness enters through the coefficient and an initial condition? In order to answer this question
let us consider the ivp (1) where are assumed to be dependent Gaussian rvs: , where To perform the computations of the average () and standard deviation () shown in Figures 1 and 2, respectively, we
have taken , , and . The initial condition is assumed to be deterministic: . Computations for Fröbenius and PC have been carried out until the numerical stabilization at six significant digits is
achieved. This stabilization is reached for the Fröbenius method with for the average and for the standard deviation. The corresponding values for PC are and . These results for and do not depend on the
correlation coefficient . Results obtained by MCs have been carried out with simulations. As a reference for the computational burden required by each of the three methods, we indicate the CPU seconds needed to compute the standard deviation plotted in Figure 2 on an Intel Core i7 at GHz: Fröbenius (s), PC (s), and MC (s).
Although similar numerical differences between MCs and the other techniques (Fröbenius and PC) could be reported in a table as we did in the foregoing example, for both the average and the standard
deviation, now we present the numerical approximations in Figures 1 and 2 without labelling each method, since they are graphically indistinguishable. We underline that the numerical results for the Fröbenius and PC methods match to six significant digits, while MCs captures three of these digits.
In contrast to what happened in Example 1, we now observe that the average changes, though only slightly, when does (see Figure 1). These changes are greater for the standard deviation (see Figure 2).
Both conclusions agree with formulae (9)-(10). Again, these results indicate that independence between random parameters (in this case, between coefficient and the initial condition ) must be checked
beforehand, since the existence of statistical dependence significantly influences the output.
Example 3. Now, we consider the case where all the data involved in ivp (1) are rvs. We will assume that the random vector follows a multivariate Gaussian distribution. The aim of this example is
twofold: first, to confirm the conclusions drawn in the two previous examples and, second, to show and compare the capabilities of the three methods under analysis to tackle satisfactorily full randomness in the ivp (1). Computations have been performed taking as average: and , and as the variance-covariance matrices given by (35), which correspond to statistical independence and dependence, respectively.
Computations for Fröbenius and PC have been carried out until the numerical stabilization at six significant digits is achieved. This stabilization is reached for the Fröbenius method with for the average
and for the standard deviation. The corresponding values for PC are and . As it happened in the foregoing example, the obtained results for and do not depend on the variance-covariance matrix.
Numerical values obtained by MCs have been carried out with simulations. Again, as in the previous example, the numerical results provided by the Fröbenius and PC methods coincide to six significant digits, while MCs only captures three of these digits.
Figures 3 and 4 show the results for the average and the standard deviation on the time interval , respectively. Comments analogous to those in Example 2 apply: both the expectation and the standard deviation of the solution change depending on the statistical dependence of the inputs, these changes being greater for the standard deviation.
Example 4. The aim of this example is to show that the qualitative behaviour of the solution sp of the random Airy differential equation (1) is different depending on the random input . To illustrate this fact, we will assume that the random vector follows a multivariate Gaussian distribution with average: and the same variance-covariance matrices as the ones considered in Example 3 (see expression (35)). Note that we are assuming that most of the probability mass of the rv lies on the negative real line, which implies a different behaviour of the solution with respect to the previous examples. Figure 5 shows the mean of the solution in both cases, that is, when the random inputs are independent and dependent. Although both plots are quite similar, Figure 6 shows that, in contrast to what happens in the previous examples, the standard deviations as well as their difference now increase over time.
4. Conclusions
The consideration of uncertainty in models based on differential equations leads to random differential equations. Over the last few decades, these types of random continuous models have proven to be powerful tools in mathematical modelling. However, for simplicity, most of these contributions rely on the assumption that random inputs are statistically independent, a hypothesis that may not be met in many applications. The study of differential equations whose inputs are statistically dependent currently constitutes a topic under development.
In this paper we have studied two methods for solving differential equations whose parameters are assumed to be Gaussian dependent, namely, Fröbenius and Polynomial Chaos. The numerical results, for
the average and standard deviation of the solution, provided by both methods have been compared with those computed by Monte Carlo simulations, which can be considered the most common approach to
deal with random differential equations. The study has been performed through the Airy equation, which is expected to be an excellent test model to highlight differences among previous approaches,
due to the highly oscillatory behaviour of its solutions in the deterministic case. The examples reveal that Fröbenius and Polynomial Chaos perform better than Monte Carlo simulations since both are
more accurate. A major conclusion drawn from the study case performed through the examples is the significant influence of statistical dependence among the random inputs on the variability of the
output. As a consequence, the usual hypothesis of statistical independence for the random parameters should be checked carefully in modelling. Furthermore, when dependence is assumed, its numerical
value (measured, for example, by the correlation coefficient) must be determined as accurately as possible, since it has been shown that it also influences both the average and the variability of the solution.
Notice that in order to conduct the study, we have had to extend the Fröbenius method presented by some of the authors in a previous contribution [7] to the case where the random inputs are dependent.
Although we have focused the study on the case in which the random variables are Gaussian, notice that the Fröbenius approach does not depend on the statistical type of the random variables involved, while statistical dependence in the Polynomial Chaos approach has been handled through a direct approach based on nice properties of Gaussian random variables. This approach has allowed us to transform the random initial value problem (1) into another one involving independent Gaussian random variables, which has facilitated the study.
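The Gaussian transformation described above can be sketched numerically. A dependent Gaussian vector with mean μ and covariance Σ can always be written as μ + Lz, where L is the Cholesky factor of Σ and z collects independent standard normals. The mean and covariance below are illustrative placeholders, not the values used in the paper:

```python
import numpy as np

# Illustrative mean and covariance for two dependent Gaussian inputs
# (placeholder values, not the matrices from the paper).
mu = np.array([1.0, 2.0])
sigma = np.array([[0.04, 0.01],
                  [0.01, 0.09]])

# Cholesky factor L satisfies sigma = L @ L.T
L = np.linalg.cholesky(sigma)

def dependent_from_independent(z):
    """Map independent standard normals z to the dependent Gaussian vector."""
    return mu + L @ z

# Sanity check: samples built this way reproduce the target covariance.
rng = np.random.default_rng(0)
z = rng.standard_normal((2, 200_000))
samples = mu[:, None] + L @ z
cov_est = np.cov(samples)
```

Working in the opposite direction, applying the inverse of L to (x − μ) turns a dependent sample back into independent coordinates, which is the direction exploited when rewriting a random initial value problem in terms of independent Gaussian inputs.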
Solving random differential equations mainly consists of computing the average and standard deviation of the solution stochastic process. A major challenge is to determine its statistical
distribution. Example 1 shows, by means of a simple but still illustrative scenario, the potential of both the Fröbenius and gPC methods to deal with this issue when other random differential equations appear in modelling. We think that the combined application of the theory of copulas [15] and the previous methods constitutes a promising approach that will be considered in forthcoming works.
Acknowledgments
This work has been partially supported by the Ministerio de Economía y Competitividad Grants MTM2009-08587 and DPI2010-20891-C02-01 and Universitat Politècnica de València Grant PAID06-11-2070.
References
1. T. C. Gard, Introduction to Stochastic Differential Equations, Marcel Dekker, New York, NY, USA, 1988.
2. T. T. Soong, Random Differential Equations in Science and Engineering, Academic Press, New York, NY, USA, 1973.
3. D. Xiu and G. E. Karniadakis, "The Wiener-Askey polynomial chaos for stochastic differential equations," SIAM Journal on Scientific Computing, vol. 24, no. 2, pp. 619–644, 2002.
4. D. P. Kroese, T. Taimre, and Z. I. Botev, Handbook of Monte Carlo Methods, John Wiley & Sons, Hoboken, NJ, USA, 2011.
5. L. Villafuerte, C. A. Braumann, J.-C. Cortés, and L. Jódar, "Random differential operational calculus: theory and applications," Computers & Mathematics with Applications, vol. 59, no. 1, pp. 115–125, 2010.
6. J. C. Cortés, L. Jódar, and L. Villafuerte, "Numerical solution of random differential initial value problems: multistep methods," Mathematical Methods in the Applied Sciences, vol. 34, no. 1, pp. 63–75, 2011.
7. J. C. Cortés, L. Jódar, F. Camacho, and L. Villafuerte, "Random Airy type differential equations: mean square exact and numerical solutions," Computers & Mathematics with Applications, vol. 60, no. 5, pp. 1237–1244, 2010.
8. G. Calbo, J. C. Cortés, and L. Jódar, "Random Hermite differential equations: mean square power series solutions and statistical properties," Applied Mathematics and Computation, vol. 218, no. 7, pp. 3654–3666, 2011.
9. C. Soize and R. Ghanem, "Physical systems with random uncertainties: chaos representations with arbitrary probability measure," SIAM Journal on Scientific Computing, vol. 26, no. 2, pp. 395–410, 2004.
10. X. Wan and G. E. Karniadakis, "Multi-element generalized polynomial chaos for arbitrary probability measures," SIAM Journal on Scientific Computing, vol. 28, no. 3, pp. 901–928, 2006.
11. A. Iserles, "Think globally, act locally: solving highly-oscillatory ordinary differential equations," Applied Numerical Mathematics, vol. 43, no. 1-2, pp. 145–160, 2002.
12. M. Loève, Probability Theory, vol. 45 of Graduate Texts in Mathematics, Springer, New York, NY, USA, 4th edition, 1977.
13. G. H. Golub and C. F. van Loan, Matrix Computations, The Johns Hopkins University Press, Baltimore, Md, USA, 1985.
14. "Mathematica Version 8.0," Wolfram Research, http://www.wolfram.com/mathematica.
15. R. B. Nelsen, An Introduction to Copulas, Springer Series in Statistics, Springer, New York, NY, USA, 2nd edition, 2006.
Japanese Lesson Study
Japanese Lesson Study Summary
Thursday, July 10, 2003
Today we welcomed Stacey and Roy from Provo School District to our group. They were gracious enough to try out our lesson with us.
We received an e-mail from Lars saying the plans look great and he wished he wasn't missing the lesson (he'll be at a conference).
Suzanne took our picture and asked us if we would be willing to write an article for AP Central, an online website. We agreed that we would like to pursue this and asked Suzanne to get us details.
We continued to work on our lesson. The following is the work to date:
Japanese Lesson Study Lesson
Topic: Scaling, Measurement and Dimensionality - Stair-Step Fractal
I. Overarching Goal:
II. Mathematical Objective - When considering the measurements length, area, and
volume, students will be able to clearly articulate the effect of scaling one
measurement on the two remaining measurements
III. How does this objective fit into a unit? Students will understand the
relationship between dimensionality and scale factor
IV. What is the pre-requisite knowledge?
How to find the area of a rectangle
How to find the volume of a rectangular solid
V. Math problem - hook, problem on which they will work
A. When students enter the class, they will be given printed (and picture)
directions on how to construct the 1st iteration of the stair step fractal.
After students finish construction, the teacher will address the class
and say: "You can make some amazing things with paper. Mathematicians often
make models to describe or understand mathematical concepts. Is this amazing?
Well, why not? What happens if we repeat the process?" -- Walk the
students through the 2nd and 3rd iterations -- "Pop it up again? Amazing?
What interesting things do you see and what do you wonder when you look at
this figure?"
i. Brainstorm individually before taking student answers.
Potential responses:
ii. It looks like stairs – Response: are all the stairs the same?
iii. I see squares / boxes
iv. The stairs are getting bigger
v. The figure is symmetrical – Response: What type of symmetry? or how?
vi. It looks like how you make snowflakes
vii. It's a cool design. - What makes it cool?
viii.It looks like a building - What type of building
ix. It looks like stairs - what might these stairs be used to model?
x. The little steps are half the size of the medium steps, medium
steps are half of larger steps
xi. There are small, medium, and large squares
xii. All the shapes are similar and the same kind of shape
xiii. There are different sized steps.
xiv. It looks like a fractal.
xv. The middle step is centered on the large step in the same way
that the small step is centered on the middle step.
B. Key Question: What relationships exist between the stairs in this model?
Find a partner and compile your conclusions in an organized way (In Pairs)
VI. Student strategies - how might students solve the problem
(In teacher responses, be sure to be clear about which dimensions students are
referring to.)
i. The teacher can then push them: what do you mean by small, medium, and large?
Can you quantify that? Are they all rectangles? Are there any squares?
ii. Students may say 1 large, 2 medium, 4 small. In this case the teacher
should affirm it but not lead students to believe that it is
the key idea
B. Students might measure side length and/or area and/or volume
i. If students do not mention all three, after discussing their
observations, ask what measurement is missing
C. Units -the small one is how many of the medium etc.
VII. Teacher responses - anticipate how you will respond to student questions
See above
VIII. Summing up - use the kids' words, not a pre-written script. (Otherwise they
had no reason to do the activity)
Create a table
IX. Evaluation - what evidence will we have that kids understood?
A. Create a table and ask students to fill in the scaled by 3 row.
Present students with the figure scaled by a factor of 3. Deal with misconceptions.
**Note: introduce the vocabulary of scaling after students see the need – "5th-ing",
"6th-ing", etc.
**You can keep scaling up to the n case and then go to cases where n < 1
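The rows of such a table all follow one pattern: scaling every length by a factor k multiplies lengths by k, areas by k², and volumes by k³. A quick sketch of the scaled rows (the base figure is a unit cube here, purely as a placeholder):

```python
def scaled_measurements(length, area, volume, k):
    """Scale a figure's linear dimensions by k: lengths scale by k,
    areas by k**2, volumes by k**3."""
    return (length * k, area * k ** 2, volume * k ** 3)

# Rows of the table for scale factors 2, 3, and 1/2, starting from a unit cube.
for k in (2, 3, 0.5):
    print(k, scaled_measurements(1, 1, 1, k))
```

The k < 1 cases mentioned above fall out of the same formula, which is why the table can keep going past the "scaled by 3" row in either direction.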
Potential HW Questions:
1) Suppose that each dimension of the figure is scaled with a different scale
factor, without actually doing extensive computation, can you predict the
relationship between the volume of the original figure and the scaled figure?
2) Suppose the figure we see is a set of stairs. How many people can fit on the
stairs if ___ fit on the top stair?
3) If 1/4 inch on your model represents 1 foot, how many cubic yards of concrete would it
take to make this a solid staircase?
4) How many cubic yards of concrete would it take to create the staircase if
each face is created with a wall of concrete 6 inches thick?
5) You are building a deck in your backyard. You order 1 cubic yard of concrete.
It turns out that you only use 1/2 of the concrete you ordered. How long of a
sidewalk can you pour with the leftover concrete if the sidewalk is 3 feet wide and
4 inches deep?
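As a sanity check on the numbers in question 5 (this worked answer is ours, not part of the lesson plan): half a cubic yard is 13.5 cubic feet, and a sidewalk 3 ft wide and 4 in deep has a cross-section of 1 square foot, so the leftover concrete pours 13.5 feet of sidewalk.

```python
# Worked answer for HW question 5 (our sketch, not part of the lesson plan).
cubic_feet_per_cubic_yard = 27
leftover_ft3 = 0.5 * cubic_feet_per_cubic_yard  # half the ordered cubic yard

width_ft = 3
depth_ft = 4 / 12                    # 4 inches expressed in feet
cross_section_ft2 = width_ft * depth_ft

sidewalk_length_ft = leftover_ft3 / cross_section_ft2
print(round(sidewalk_length_ft, 2))  # 13.5
```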
Back to Journal Index
PCMI@MathForum Home || IAS/PCMI Home
© 2001 - 2013 Park City Mathematics Institute
IAS/Park City Mathematics Institute is an outreach program of the School of Mathematics
at the Institute for Advanced Study, Einstein Drive, Princeton, NJ 08540
Send questions or comments to: Suzanne Alejandre and Jim King
With program support provided by Math for America
This material is based upon work supported by the National Science Foundation under Grant No. 0314808.
Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the National Science Foundation. | {"url":"http://mathforum.org/pcmi/hstp/sum2003/wg/lesson/journal/data10.html","timestamp":"2014-04-19T09:51:53Z","content_type":null,"content_length":"7718","record_id":"<urn:uuid:d868080c-249d-43b8-9d36-03ed1583ce00>","cc-path":"CC-MAIN-2014-15/segments/1397609537097.26/warc/CC-MAIN-20140416005217-00188-ip-10-147-4-33.ec2.internal.warc.gz"} |
A mole is the amount of pure substance containing the same number of chemical units as there are atoms in exactly 12 grams of carbon-12 (i.e., 6.022 × 10^23). This involves the acceptance of two dictates -- the scale of atomic masses and the magnitude of the gram. Both have been established by international agreement. Formerly, the connotation of "mole" was "gram molecular weight." Current usage tends to apply the term "mole" to an amount containing Avogadro's number of whatever units are being considered. Thus, it is possible to have a mole of atoms, ions, radicals, electrons, or quanta. This usage makes unnecessary such terms as "gram-atom," "gram-formula weight," etc.
All stoichiometry essentially is based on the evaluation of the number of moles of substance. The most common involves the measurement of mass. Thus 25.000 grams of water will contain 25.000/18.015
moles of water, 25.000 grams of sodium will contain 25.000/22.990 moles of sodium.
The convenient measurements on gases are pressure, volume, and temperature. Use of the ideal gas law constant R allows direct calculation of the number of moles: n = PV/RT. T is the absolute temperature, and R must be chosen in units appropriate for P, V, and T. The acceptance of Avogadro's law is inherent in this calculation; so too are the approximations of the ideal gas.
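For the gas case, n = PV/RT can be checked directly: 22.4 litres at S.T.P. should come out to very nearly one mole. The values below are in SI units (Pa, m³, J/(mol·K), K):

```python
R = 8.314  # ideal gas constant, J/(mol K)

def moles_of_gas(pressure_pa, volume_m3, temp_k):
    """Ideal gas law rearranged for the amount of substance: n = P V / (R T)."""
    return pressure_pa * volume_m3 / (R * temp_k)

# 22.4 litres (0.0224 m^3) at S.T.P. (101325 Pa, 273.15 K)
n = moles_of_gas(101325, 0.0224, 273.15)
print(round(n, 3))  # 0.999
```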
The mole is a unit of measurement for the amount of substance. Moles give us a consistent method to convert between atoms/molecules and grams.
The mathematical expression for calculating moles is expressed as: number of moles = given mass (in g) / molar mass (in g/mol).
We can express moles in terms of molecular mass, in terms of the number of atoms/molecules, and in terms of volume.
• 1 mole of a pure substance has a mass in grams equal to its molecular mass.
• 1 mole contains the same number of particles as there are in 12 g of carbon-12 atoms. This number is called Avogadro's number and is equal to 6.022 × 10^23 particles.
• 1 mole of a gas occupies a volume of 22.4 litres at S.T.P.
For example: 1 mole of sodium atom has mass 23 g. It contains 6.023 × 10^23 atoms and occupies 22.4 litres of volume at S.T.P.
Let us solve a simple numerical to understand this concept in a better way.
Calculate the number of particles in 46 g of Na atoms.
In this numerical, we need to find the number of particles in 46 g of Na.
We know that one mole of Na contains 6.022 × 10^23 particles.
Therefore, we will first find the number of moles in 46 g of Na:
number of moles = given mass / molar mass = 46/23 = 2 moles.
We know that 1 mole = 6.022 × 10^23 particles.
Therefore, 2 moles = 2 × 6.022 × 10^23 particles
= 12.044 × 10^23 particles.
Hence, 46 g of Na contains 12.044 × 10^23 particles.
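The same arithmetic can be written as a tiny helper (23 g/mol for Na, as in the post; Avogadro's number rounded to 6.022 × 10^23):

```python
AVOGADRO = 6.022e23  # particles per mole

def particles_from_mass(mass_g, molar_mass_g_per_mol):
    """Convert a mass in grams to (moles, particle count) via the mole concept."""
    moles = mass_g / molar_mass_g_per_mol
    return moles, moles * AVOGADRO

# 46 g of Na, molar mass 23 g/mol
moles, particles = particles_from_mass(46, 23)
print(moles, particles)  # 2.0 1.2044e+24
```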
To be perfect in the mole concept, try to solve more numerical.
If you face query, do get back to us. We will be happy to help you.
EXAMPLE: the gram atomic mass of carbon is 12. The mass of one atom of carbon has been calculated as 1.9924 × 10^-23 g. The number of carbon atoms in 12 g of carbon can be calculated as:
number of carbon atoms = gram atomic mass of carbon / mass of one carbon atom
= 12 (g) / 1.9924 × 10^-23 (g) = 6.022 × 10^23
I hope you know about Avogadro's number; Avogadro's number of particles of any substance is expressed in the form of the term mole. In other words, "a mole denotes Avogadro's number of particles". Please note that these particles may be anything: atoms, molecules, ions, etc.
Hope it helps!
One mole of a substance (atoms, molecules, ions, or particles) is that quantity in number having a mass equal to its atomic or molecular mass in grams. One mole of any substance contains 6.022 × 10^23 particles (atoms, molecules, or ions). This means that one mole of atoms of any substance contains 6.022 × 10^23 atoms. Similarly, one mole of molecules of any substance contains 6.022 × 10^23 molecules, and one mole of ions of any substance contains 6.022 × 10^23 ions. Hence, the mass of one mole of a particular substance is fixed.
The number 6.022 × 10^23 is an experimentally obtained value and is known as Avogadro's number or the Avogadro constant (represented by N_0). It is named after the Italian scientist Amedeo Avogadro.
Thus, 1 mole of oxygen atoms (O) = 6.022 × 10^23 oxygen atoms.
1 mole of oxygen molecules (O_2) = 6.022 × 10^23 oxygen molecules.
@shubhangi: I hope you would have understood. If your doubt still persists then get back to us.
@others: Very good! Keep up the good work.
mathematical induction
I have to prove this inequality using mathematical induction:
---------------- > 1 + (1/3) + (1/5) + .... + 1/(2n-1).
I'm given that for positive reals a_1, a_2, a_3, ..., a_n where n >= 2,
the product of (1 + a_i) from i = 1 to n is greater than 1 + a_1 + a_2 + ... + a_n.
I know how to do the inductions with equal signs, but I can't seem to replace all the terms before (n + 1) on the left hand side with the terms on the right hand side (after assuming the
statement is true for all n) since they're not equal.
How do you do this one?
Don't worry about the base case.
Thanks for any help. | {"url":"http://mathhelpforum.com/algebra/18845-mathematical-induction.html","timestamp":"2014-04-16T19:18:30Z","content_type":null,"content_length":"53366","record_id":"<urn:uuid:df51e58f-fb98-4623-8e87-70ca4d03f68a>","cc-path":"CC-MAIN-2014-15/segments/1397609535745.0/warc/CC-MAIN-20140416005215-00430-ip-10-147-4-33.ec2.internal.warc.gz"} |
The Midpoint Formula - Economics Online Tutor
Early exercise of options
Early exercise of American calls for dividends
This page sets out some typical rules which are employed in the early exercise of an option. However this is not to be construed as advice in any specific case, and you should seek your own
independent advice before making any decisions.
The following table sets out the rule of thumb for when an American call option is likely to be exercised ahead of expiry, before the stock goes ex-dividend.
Call option | Put option (same strike) | Exercise likely when
In-the-money | zero value | dividend > interest expense of buying shares early
In-the-money | value > 0 | dividend > put price + interest expense of buying shares early
Where a call option is deep-in-the-money, with little chance of the stock falling below the strike price before expiry, the option is a candidate for early exercise.
This generally occurs where the dividend the investor would receive, if they were to exercise the call, is greater than the interest expense incurred in buying the shares which are the subject of the
option ahead of the expiry date. Generally this only occurs on the day before the ex-dividend date.
For in-the-money calls where the corresponding put still has some value, the rule used by most of the market is that if the value of the dividend is more than the value of the corresponding put plus
interest, then the call should generally be exercised for the dividend.
Writers of call options who want to avoid assignment (being exercised against) may need to either buy back or roll that short call position to another strike in another expiry, being mindful again
that the option they roll to is not also a candidate for early exercise.
National Australia Bank
Ex-Div 7th June 2004
Dividend 83 cents
Share Price $30.31 on last cum dividend date
1. June 2600 Call (deep-in-the-money)
• Corresponding put is worthless
• Interest = 6.4 cents (Strike Price $26.00 X Interest Rate 5.25%) / 365 days X 17 days till expiry
• 83 cents > 6.4 cents
Therefore the June 2600 call will generally be exercised
2. June 3000 Call (In-the-money)
• Corresponding put is 68 cents
• Interest = 7.3 cents
• 83 cents > 75.3
Therefore the June 3000 call will generally be exercised
3. June 3050 Call (In-the-money)
• Corresponding put is $1.09
• Interest = 7.5 cents
• 83 cents < $1.165
Therefore the June 3050 call will generally not be exercised because the dividend isn't large enough.
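The rule of thumb used in these examples can be captured in a small helper (prices in dollars; this mirrors the NAB figures above and is a sketch of the rule only, not exchange guidance):

```python
def interest_expense(strike, rate, days, year_days=365):
    """Interest cost of funding the strike price until expiry."""
    return strike * rate * days / year_days

def exercise_call_early(dividend, put_price, strike, rate, days):
    """Rule of thumb: exercise an in-the-money American call before the
    ex-dividend date when the dividend exceeds the corresponding put
    price plus the interest expense."""
    return dividend > put_price + interest_expense(strike, rate, days)

# NAB figures: 83c dividend, 5.25% rate, 17 days to expiry (prices in dollars)
div = 0.83
print(exercise_call_early(div, 0.00, 26.00, 0.0525, 17))  # True  (June 2600 call)
print(exercise_call_early(div, 0.68, 30.00, 0.0525, 17))  # True  (June 3000 call)
print(exercise_call_early(div, 1.09, 30.50, 0.0525, 17))  # False (June 3050 call)
```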
Early exercise of American puts for interest
The following table sets out the rule of thumb for when an in-the-money American put option is likely to be exercised ahead of expiry.
Put option | Exercise likely when
In-the-money | Interest expense of holding the shares until expiry > corresponding call price
When put options are deep-in-the-money they become candidates for early exercise.
Consider an example where an investor owns both stock and a put option over the same stock, and the put is trading at intrinsic value (as is often the case when the option is deep in the money). By
exercising early, the holder of the put sells their shares at the exercise price of the option and earns interest on the proceeds earlier than if they were to wait until expiry to exercise. This
usually occurs after the stock has gone ex-dividend, so that the dividend is retained by the shareholder.
Another way of looking at whether the put should be exercised early is to compare the value of the corresponding call option with the cost of carrying the underlying stock to expiry. The importance
of this relationship is due to the fact that stock ownership plus a long put is an equivalent position to holding a call option with the same strike price and expiry. The two strategies are said to
be synthetically equivalent.
Both positions, stock and long put (S+P), and the long call (+C), profit if the stock goes up and limit losses if it falls. Therefore, if one can in effect exchange the synthetic position (S+P) for
its equivalent (+C), and the cost of doing so (ie. the cost of the call) is less than the interest earned on the funds received from selling the stock, the early exercise of the put is worthwhile.
This simple arbitrage relationship is known as put/call parity, and is the fundamental relationship of option pricing. It is also the reason that mispricing between call and put options with the same
strike and expiry is rarely found. Such mispricing offers the opportunity for arbitrage, where pricing differences are exploited at little or no risk to the trader, in the process forcing prices back
to put/call parity.
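Put/call parity can be written as a one-liner. The sketch below assumes European-style options on a non-dividend-paying stock, so it only approximates the American case discussed here:

```python
import math

def call_from_parity(stock, put, strike, rate, t_years):
    """European put/call parity: C = S + P - K * exp(-r * T).
    (Sketch only; American exercise and dividends complicate this.)"""
    return stock + put - strike * math.exp(-rate * t_years)

# With the stock at the strike and zero rates, call and put values coincide.
print(call_from_parity(30.0, 1.00, 30.0, 0.0, 0.5))  # 1.0
```

A positive interest rate shrinks the discounted strike K·e^{-rT}, which is exactly the carry advantage that the early-exercise rules above are weighing.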
National Australia Bank
Share Price $28.00
Interest Rate 5.25%
7 days till expiry
1. 3400 Put (deep-in-the-money)
• Corresponding call is worthless
• Interest = 3.42 cents (Strike Price $34.00 X Interest Rate 5.25%) / 365 days X 7 days till expiry
• 3.42 cents > 0
Therefore the 3400 put will generally be exercised early.
2. 3000 Put (In-the-money)
• Corresponding call is 5 cents
• Interest = 3.02 cents
• 3.02 cents < 5 cents
Therefore the 3000 put will generally not be exercised early.
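The put-side rule admits the same kind of helper (again a sketch of the rule of thumb with the NAB figures, not advice):

```python
def exercise_put_early(call_price, strike, rate, days, year_days=365):
    """Rule of thumb: exercise an in-the-money American put early when the
    interest earned on the strike proceeds until expiry exceeds the price
    of the corresponding call."""
    interest = strike * rate * days / year_days
    return interest > call_price

# NAB figures: 5.25% rate, 7 days to expiry (prices in dollars)
print(exercise_put_early(0.00, 34.00, 0.0525, 7))  # True  (3400 put)
print(exercise_put_early(0.05, 30.00, 0.0525, 7))  # False (3000 put)
```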
Early exercise calculator | {"url":"http://www.asx.com.au/products/equity-options/early-exercise-options.htm","timestamp":"2014-04-20T08:21:57Z","content_type":null,"content_length":"47881","record_id":"<urn:uuid:f9216752-d11b-4083-898b-36f030e1854d>","cc-path":"CC-MAIN-2014-15/segments/1397609538110.1/warc/CC-MAIN-20140416005218-00402-ip-10-147-4-33.ec2.internal.warc.gz"} |
Acceleration of the Universe - A.R. Liddle
2.5. Characteristic scales and horizons
The big bang Universe has two characteristic scales:
• The Hubble time (or length) H^-1.
• The curvature scale a|k|^-1/2.
The first of these gives the characteristic timescale of evolution of a(t), and the second gives the distance up to which space can be taken as having a flat (Euclidean) geometry. As written above they are both physical scales; to obtain the corresponding comoving scale one should divide by a(t). The ratio of these scales gives a measure of the total density; from the Friedmann equation we have √|Ω − 1| = H^-1 / (a|k|^-1/2), the ratio of the Hubble length to the curvature scale.
A crucial property of the big bang Universe is that it possesses horizons; even light can only have travelled a finite distance since the start of the Universe t_*, given by d_H(t) = a(t) ∫_{t_*}^t dt'/a(t').
For example, matter domination gives d_H(t) = 3t = 2H^-1. In a big bang Universe, d_H(t_0) is a good approximation to the distance to the surface of last scattering (the origin of the observed microwave background, at a time known as `decoupling'), since t_0 >> t_dec.
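The matter-dominated case can be checked directly: with a(t) ∝ t^{2/3} (so H = ȧ/a = 2/3t and H^{-1} = 3t/2), the horizon integral gives

```latex
d_H(t) = a(t)\int_0^t \frac{dt'}{a(t')}
       = t^{2/3}\int_0^t t'^{-2/3}\,dt'
       = t^{2/3}\,\bigl[3\,t'^{1/3}\bigr]_0^t
       = 3t = 2H^{-1}.
```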
"Size" of elemetary particles
I stumbled upon this nice link showing the universe at different scales:
However, if you scroll down to the attometer scale you get to the elementary particles which have been given sizes. Does anyone know what these sizes mean? I thought elementary particles were
dimensionless and so have no strict size.
You're correct. Apparently what he's diagramming here is the reduced Compton wavelength for each particle, ħ/mc. Of the six quarks, the top quark has the greatest rest mass, hence the shortest Compton wavelength. "Electron (classical)" is the classical electron radius, e²/(4πε₀mc²).
Results 1 - 10 of 44
- Handbook of Philosophical Logic , 1984
Cited by 825 (8 self)
ed to be true under the valuation u iff there exists an a ∈ N such that the formula x · x = y is true under the valuation u[x=a], where u[x=a] agrees with u everywhere except x, on which it takes the value a. This definition involves a metalogical operation that produces u[x=a] from u for all possible values a ∈ N. This operation becomes explicit in DL in the form of the program x := ?, called a nondeterministic or wildcard assignment. This is a rather unconventional program, since it is not effective; however, it is quite useful as a descriptive tool. A more conventional way to obtain a square root of y, if it exists, would be the program x := 0 ; while x · x < y do x := x + 1 (1). In DL, such programs are first-class objects on a par with formulas, complete with a collection of operators for forming compound programs inductively from a basis of primitive programs. To discuss the effect of the execution of a program α on the truth of a formula φ, DL uses a modal construct ⟨α⟩φ,
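Program (1) can be transcribed directly; this is our transcription of the while-program (assuming the loop guard compares x·x with y, consistent with the square-root reading of the surrounding text):

```python
def sqrt_search(y):
    """Linear search mirroring program (1):
    x := 0 ; while x*x < y do x := x + 1.
    For a perfect square y this terminates with x*x == y."""
    x = 0
    while x * x < y:
        x = x + 1
    return x

print(sqrt_search(9))   # 3
print(sqrt_search(16))  # 4
```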
, 1988
Cited by 153 (16 self)
Nowadays computer science is surpassing mathematics as the primary field of logic applications, but logic is not tuned properly to the new role. In particular, classical logic is preoccupied mostly
with infinite static structures whereas many objects of interest in computer science are dynamic objects with bounded resources. This chapter consists of two independent parts. The first part is
devoted to finite model theory; it is mostly a survey of logics tailored for computational complexity. The second part is devoted to dynamic structures with bounded resources. In particular, we use
dynamic structures with bounded resources to model Pascal.
, 1981
Cited by 66 (2 self)
A survey of various results concerning Hoare's approach to proving partial and total correctness of programs is presented. Emphasis is placed on the soundness and completeness issues. Various proof
systems for while programs, recursive procedures, local variable declarations, and procedures with parameters, together with the corresponding soundness, completeness, and incompleteness results, are
- Fundamental Approaches to Software Engineering (FASE 2000), number 1783 in LNCS , 2000
"... This paper formalises a semantics for statements and expressions (in sequential imperative languages) which includes non-termination, normal termination and abrupt termination (e.g. because of
an exception, break, return or continue). This extends the traditional semantics underlying e.g. Hoare logi ..."
Cited by 63 (6 self)
Add to MetaCart
This paper formalises a semantics for statements and expressions (in sequential imperative languages) which includes non-termination, normal termination and abrupt termination (e.g. because of an
exception, break, return or continue). This extends the traditional semantics underlying e.g. Hoare logic, which only distinguishes termination and non-termination. An extension of Hoare logic is
elaborated that includes means for reasoning about abrupt termination (and side-effects). It prominently involves rules for reasoning about while loops, which may contain exceptions, breaks,
continues and returns. This extension applies in particular to Java. As an example, a standard pattern search algorithm in Java (involving a while loop with returns) is proven correct using the
proof-tool PVS.
- STEPWISE REFINEMENT OF DISTRIBUTED SYSTEMS: MODELS, FORMALISMS, CORRECTNESS. PROCEEDINGS. 1989, VOLUME 430 OF LECTURE NOTES IN COMPUTER SCIENCE , 1989
Cited by 55 (3 self)
A lattice theoretic framework for the calculus of program refinement is presented. Specifications and program statements are combined into a single (infinitary) language of commands which permits
miraculous, angelic and demonic statements to be used in the description of program behavior. The weakest precondition calculus is extended to cover this larger class of statements and a
game-theoretic interpretation is given for these constructs. The language is complete, in the sense that every monotonic predicate transformer can be expressed in it. The usual program constructs can
be defined as derived notions in this language. The notion of inverse statements is defined and its use in formalizing the notion of data refinement is shown.
- Formal Aspects of Computing , 1998
Cited by 38 (0 self)
Auxiliary variables are essential for specifying programs in Hoare Logic. They are required to relate the value of variables in different states. However, the axioms and rules of Hoare Logic turn a
blind eye to the role of auxiliary variables. We stipulate a new structural rule for adjusting auxiliary variables when strengthening preconditions and weakening postconditions. Courtesy of this new
rule, Hoare Logic is adaptation complete, which benefits software re-use. This property is responsible for a number of improvements. Relative completeness follows uniformly from the Most General
Formula property. Moreover, contrary to common belief, one can show that Hoare Logic subsumes VDM's operation decomposition rules in that every derivation in VDM can be naturally embedded in Hoare
Logic. Furthermore, the new treatment leads to a significant simplification in the presentation for verification calculi dealing with more interesting features such as recursion or concurrency.
, 1989
Cited by 34 (5 self)
We present a denotational continuation semantics for Prolog with cut. First a uniform language B is studied, which captures the control flow aspects of Prolog. The denotational semantics for B is
proven equivalent to a transition system based operational semantics. The congruence proof relies on the representation of the operational semantics as a chain of approximations and on a convenient
induction principle. Finally, we interpret the abstract language B such that we obtain equivalent denotational and operational models for Prolog itself. Section 1 Introduction In the nice textbook of
Lloyd [Ll] the cut, available in all Prolog-systems, is described as a controversial control facility. The cut, added to the Horn clause logic for efficiency reasons, affects the completeness of the
refutation procedure. Therefore the standard declarative semantics using Herbrand models does not adequately capture the computational aspects of the Prolog-language. In the present paper we study
the Prolog...
- ACM Transactions on Computational Logic , 2004
Cited by 30 (19 self)
Data types containing infinite data, such as the real numbers, functions, bit streams and waveforms, are modelled by topological many-sorted algebras. In the theory of computation on topological
algebras there is a considerable gap between so-called abstract and concrete models of computation. We prove theorems that bridge the gap in the case of metric algebras with partial operations. With
an abstract model of computation on an algebra, the computations are invariant under isomorphisms and do not depend on any representation of the algebra. Examples of such models are the ‘while ’
programming language and the BCSS model. With a concrete model of computation, the computations depend on the choice of a representation of the algebra and are not invariant under isomorphisms.
Usually, the representations are made from the set N of natural numbers, and computability is reduced to classical computability on N. Examples of such models are computability via effective metric
spaces, effective domain representations, and type two enumerability. The theory of abstract models is stable: there are many models of computation, and
- Formal Methods for Open Object-Based Distributed Systems (FMOODS) VI. Volume 2884 of LNCS. (2003) 64–78 , 2003
Cited by 25 (8 self)
Abstract. This paper outlines a sound and complete Hoare logic for a sequential object-oriented language with inheritance and subtyping like Java. It describes a weakest precondition calculus for
assignments and object-creation, as well as Hoare rules for reasoning about (mutually recursive) method invocations with dynamic binding. Our approach enables reasoning at an abstraction level that
coincides with the general abstraction level of object-oriented languages. 1
, 1993
Cited by 25 (14 self)
Formal methods are necessary in achieving correct software: that is, software that can be proven to fulfil its requirements. Formal specifications are unambiguous and analysable. Building a formal
model improves understanding. The modelling of nondeterminism, and its subsequent removal in formal steps, allows design and implementation decisions to be made when most suitable. Formal models are
amenable to mathematical manipulation and reasoning, and facilitate rigorous testing procedures. However, formal methods are not widely used in software development. In most cases, this is because
they are not suitably supported with development tools. Further, many software developers do not recognise the need for rigour. Object oriented techniques are successful in the production of large,
complex software systems. The methods are based on simple mathematical models of abstraction and classification. Further, the object oriented approach offers a conceptual consistency across all
stages of soft... | {"url":"http://citeseerx.ist.psu.edu/showciting?cid=529953","timestamp":"2014-04-17T07:32:27Z","content_type":null,"content_length":"38601","record_id":"<urn:uuid:6967a55a-fe8c-4cf4-86f4-dd8df51403ae>","cc-path":"CC-MAIN-2014-15/segments/1397609526311.33/warc/CC-MAIN-20140416005206-00170-ip-10-147-4-33.ec2.internal.warc.gz"} |
Linear Algebra Resources
A plane called p has the cartesian equation 4x + 2y - 2z = 7 Given that the plane separates 3D space into two parts. Find the normal vector of the plane, pointing...
[7,-5, 2, -4, 9, -4, 0, 6, 10]
G =[1/5, 0, 0, 0, 1/7, 0, 0, 0, 1
F =[1,0,-2,3]
matrix A= [3,-2, 0,5] matrix B=[1,7,4,2,3,1]
3x₁ - 2x₂ = 18   4x₁ + 5x₂ = 1   7x₁ + 3x₂ = 19
Jonathan T.
Richardson, TX
In this video I just show what it would be like to have an interactive session online with me or to have me make a video lesson on a topic of your choice.
Hafid M.
Chula Vista, CA
Provides graph to the inequality 8x - 3y > 14 and answers if the point (2,3) is a solution.
The line with m = 2/3 and passing through (1, 1).
A) (4, 0) B) (0, 4) C) all points on the line
Consider the sum: (5)² + (11)² + (17)² + ... + (18n - 1)² How do I write this in sigma notation? The final term is really confusing me.
True or False?
The line including (3, 1)and (-2, 3).
True or False?
There were 305 tickets sold for a basketball game. The activity cardholders' tickets cost $1.25 and the non-cardholders' tickets cost $2.50. The total amount of money collected was $578.75. How
KEVIN from Mesquite, TX
Write your answer in Slope-Intercept Form?
Bruce S.
Chicago, IL
Humans have a tremendous capacity to learn and adapt. However, we consistently build barriers that hinder our natural ability to change and grow. Many people, regardless of age, perceive themselves
as not being talented enough to excel at math and science. They view math and science as the realms in which only scientists, engineers, mathematicians, and geniuses truly soar. Nothing could...
Tracey M.
Stockbridge, GA
SUMMER OPPORTUNITIES Now that students, teachers, parents and tutors have had a chance to catch their breath from final exams, it's time to make use of the weeks we have before school starts back.
Consider all that could be accomplished in the next few weeks: Areas of math that students NEVER REALLY GRASPED could be fully explained. This could be elementary skills like adding fractions,...
Wendy T.
San Luis Obispo, CA
Hi math students :) When preparing for a mathematics tutoring session, try to have the following things at hand... Textbook (online or e-text) Syllabus, assignment, tips/hints/suggestions, answer
sheet/key Class notes Pencils, pens, erasers, paper (graph paper, ruler, protractor) All necessary formulas, laws, tables, constants, etc. Calculator that you will use on tests Do...
I'm trying to understand matching functions to their graphs. I've been given pictures of graphs and four functions. I am suppose to match each function to the correct graph. | {"url":"http://www.wyzant.com/resources/linear_algebra","timestamp":"2014-04-21T13:24:23Z","content_type":null,"content_length":"63506","record_id":"<urn:uuid:6ffdc073-e935-4c09-ae54-a05020ca2f8e>","cc-path":"CC-MAIN-2014-15/segments/1397609539776.45/warc/CC-MAIN-20140416005219-00461-ip-10-147-4-33.ec2.internal.warc.gz"} |
The Instruction Set
This section presents an alphabetical listing of the TrueType instruction set. Each description begins with the basic facts. A brief description of the instruction's functionality follows. This
material is intended for reference. For an introduction to the TrueType language, see Instructing Fonts.
The following sections summarize the basic information needed to understand the instruction summaries that follow.
Each instruction description begins with a tabulation of basic information as shown in FIGURE 1 below. For a given instruction, only the relevant information fields are included. For example, the "From IS" field is omitted for all but the "push" instructions. In general, if the "Uses" field is omitted from a particular instruction description, it is safe to assume that the instruction has no graphics state dependencies. The instruction control state variable is an exception to this rule. It will not appear in the uses field for each instruction even though it can turn off the execution of all instructions.
FIGURE 1 The instruction summary format
MNEMONIC[flags] explanation of mnemonic
Code Range the range of hexadecimal codes identifying this instruction and its variants
Flags an explanation of the meaning of a bracketed binary number
From IS any arguments taken from the instruction stream by push instructions
Pops any arguments popped from the stack
Pushes any arguments pushed onto the stack
Uses any state variables whose value this instruction depends upon
Sets any state variables set by this instruction
Gets the state variable whose value is retrieved by this instruction
Related instructions any closely related instruction including those with a similar or an opposite effect
In the instruction summaries that follow, the arguments an instruction pops from the stack or pushes onto the stack will be listed along with a brief description of their purpose and data type.
In the case of arguments popped from the stack, the first argument listed is the first one popped from the stack, the second is the next one popped and so forth.
Pops arg[3]: first argument popped (uint32)
arg[2]: second argument popped (uint32)
arg[1]: third argument popped (F26Dot6)
In the case of arguments popped from the stack, the first result pushed onto the stack appears first, the second result pushed appears below it and so forth.
Pushes result1: first result pushed (F26Dot6)
result2: second result pushed (F26Dot6)
When it is necessary to summarize the stack interaction of an instruction, the information will be written in a single line. The items popped are on the left to the left of two hyphens, the items
pushed are to the right. The example above would be written:
( arg1 arg2 arg3 -- result1 result2 ).
The right most item in the list is always the item at the top of the stack.
Many of the TrueType instructions interact with the interpreter stack. In simplest terms, they take data from the stack and return results to the stack. The stack elements that they manipulate are
all 32 bit values. The way in which instructions interpret these values varies. Some consider all 32 bits pushed or popped to be significant. Some use only certain bits. Some treat the 32 bit
quantity as a signed integer, some as an unsigned integer and some as a fixed point value.
The data types that can appear on the stack are listed in Table 1 below.
In cases where only a portion of the 32 bits pushed or popped are relevant, the data type has a name that begins with a capital letter E for "extended." Unsigned values are extended to 32 bits with
zeroes to the left of the bits that are significant. Signed values are sign extended to 32 bits.
Fixed point numbers have names that begin with the letter F. The name consists of the number of bits representing the integral part of the number, the letters "Dot", representing the binary point, followed by the number of fractional bits. An extended fixed point number begins with the letters EF.
Generic stack elements have the data type StkElt. Any 32 bit quantity can have this data type.
Table 1: The instruction set data types
│Data type│Description │
│Eint8 │sign extended 8-bit integer │
│Euint16 │zero extended 16-bit unsigned integer │
│EFWord │sign extended 16-bit signed integer that describes a quantity in FUnits, the smallest measurable unit in the em space │
│EF2Dot14 │sign extended 16-bit signed fixed number with the low 14 bits representing fraction │
│uint32 │32-bit unsigned integer │
│int32 │32-bit signed integer │
│F26Dot6 │32-bit signed fixed number with the low 6 bits representing fraction │
│StkElt │any 32 bit quantity │
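As an illustration of the F26Dot6 type used throughout the instruction set, the following sketch models the encoding in Python. The helper names are invented for this example and are not part of the specification.

```python
# F26Dot6: a 32-bit signed value whose low 6 bits hold the fraction,
# so one unit of the integer part corresponds to 64 in raw form.

def to_f26dot6(x: float) -> int:
    """Encode a real number as 26.6 fixed point."""
    return int(round(x * 64))

def from_f26dot6(n: int) -> float:
    """Decode a 26.6 fixed-point value back to a float."""
    return n / 64.0

print(to_f26dot6(1.5))    # 96, i.e. 1.5 * 64
print(from_f26dot6(96))   # 1.5
```

The same scheme applies to EF2Dot14 with a 14-bit fraction (a scale factor of 16384 instead of 64).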
A number of instructions have accompanying illustrations. Most of these illustrations explain the effects an instruction has on the position of points in a glyph outline. FIGURE 1 lists the
conventions used in those illustrations. Remember that,
• Unless otherwise noted, distances are measured along the projection vector
• Unless otherwise noted, instructions move points along the freedom vector
For more information on moving points see Instructing Fonts .
FIGURE 1 Key to illustrations
AA[] Adjust Angle
Code Range 0x7F
Pops p: point number (uint32)
Pushes -
Related instructions SANGW[ ]
Pops one argument from the stack. This instruction is anachronistic and has no other effect.
ABS[] ABSolute value
Code Range 0x64
Pops n: fixed point number (F26Dot6)
Pushes |n|: absolute value of n (F26Dot6)
Replaces the number at the top of the stack with its absolute value.
Pops a 26.6 fixed point number, n, off the stack and pushes the absolute value of n onto the stack.
ADD[] ADD
Code Range 0x60
Pops n2: fixed point number (F26Dot6) n1: fixed point number (F26Dot6)
Pushes sum: n1 + n2(F26Dot6)
Adds the top two numbers on the stack.
Pops two 26.6 fixed point numbers, n2 and n1, off the stack and pushes the sum of those two numbers onto the stack.
ALIGNPTS[] ALIGN Points
Code Range 0x27
Pops p2: point number (uint32) p1: point number (uint32)
Pushes -
Uses zp0 with point p2 and zp1 with point p1, freedom vector, projection vector
Related instructions ALIGNRP[ ]
Aligns the two points whose numbers are the top two items on the stack along an axis orthogonal to the projection vector.
Pops two point numbers, p2 and p1, from the stack and makes the distance between them zero by moving both points along the freedom vector to the average of their projections along the projection
In the illustration below, points p1 and p2 are moved along the freedom vector until the projected distance between them is reduced to zero. The distance from A to B equals d/2 which equals the
distance from B to C. The value d/2 is one-half the original projected distance between p1 and p2.
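The geometry above can be sketched numerically. This is an illustrative simplification (plain floats rather than F26Dot6, unit vectors given as (x, y) tuples); the function and vector names are assumptions, not part of the specification.

```python
# Each point moves along the freedom vector until the projected
# distance between the two points, measured along the projection
# vector, is reduced to zero.

def dot(a, b):
    return a[0] * b[0] + a[1] * b[1]

def alignpts(p1, p2, freedom, projection):
    d = dot((p2[0] - p1[0], p2[1] - p1[1]), projection)  # projected distance
    fp = dot(freedom, projection)      # how far a unit move along the
                                       # freedom vector projects
    t = (d / 2) / fp                   # each point covers half the distance
    p1n = (p1[0] + t * freedom[0], p1[1] + t * freedom[1])
    p2n = (p2[0] - t * freedom[0], p2[1] - t * freedom[1])
    return p1n, p2n

p1n, p2n = alignpts((0, 0), (4, 0), (1, 0), (1, 0))
print(p1n, p2n)   # (2.0, 0.0) (2.0, 0.0): both points meet at the midpoint
```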
ALIGNRP[] ALIGN to Reference Point
Code Range 0x3C
Pops p1, p2, ..., ploopvalue: point numbers (uint32)
Pushes -
Uses zp1 with point p and zp0 with rp0, loop, freedom vector, projection vector
Related instructions ALIGNPTS[ ]
Aligns the points whose numbers are at the top of the stack with the point referenced by rp0.
Pops point numbers, p1, p2, ..., ploopvalue, from the stack and aligns those points with the current position of rp0 by moving each point pi so that the projected distance from pi to rp0 is reduced to zero. The number of points aligned depends upon the current setting of the state variable loop.
In the illustration below, point p is moved along the freedom vector until its projected distance from rp0 is reduced to zero.
AND[] logical AND
Code Range 0x5A
Pops e2: stack element (StkElt) e1: stack element (StkElt)
Pushes (e1 and e2): logical and of e1 and e2 (uint32)
Related instructions OR[ ]
Takes the logical and of the top two stack elements.
Pops the top two elements, e2 and e1, from the stack and pushes the result of a logical and of the two elements onto the stack. Zero is pushed if either or both of the elements are FALSE (have the
value zero). One is pushed if both elements are TRUE (have a non-zero value).
CALL[] CALL function
Code Range 0x2B
Pops f: function identifier number (int32 in the range 0 through (n-1) where n is specified in the 'maxp' table)
Pushes -
Related instructions FDEF[ ], EIF[ ]
Calls the function identified by the number of the top of the stack.
Pops a function identifier number, f, from the stack and calls the function identified by f. The instructions contained in the function body will be executed. When execution of the function is
complete, the instruction pointer will move to the next location in the instruction stream where execution of instructions will resume.
Code Range 0x67
Pops n: fixed point number (F26Dot6)
Pushes ⌈n⌉: ceiling of n (F26Dot6)
Related instructions FLOOR[ ]
Takes the ceiling of the number at the top of the stack.
Pops a number n from the stack and pushes ⌈n⌉, the least integer value greater than or equal to n. Note that the ceiling of n, though an integer value, is expressed as a 26.6 fixed point number.
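In 26.6 fixed-point form the ceiling can be computed with a mask, since integral values are multiples of 64. A minimal sketch (the function name is illustrative):

```python
# CEILING on a 26.6 value: round up to the next multiple of 64,
# which represents the next integral pel boundary.

def ceiling_f26dot6(n: int) -> int:
    return (n + 63) & ~63

print(ceiling_f26dot6(65))   # 128: ceil(1.015625) == 2.0 in 26.6 form
print(ceiling_f26dot6(64))   # 64: already integral, unchanged
```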
CINDEX[] Copy the INDEXed element to the top of the stack
Code Range 0x25
Pops k: stack element number (int32)
Pushes ek: kth stack element (StkElt)
Stack before k: stack element number
e1 ... ek: stack elements
Stack after ek: indexed element
e1 ... ek: stack elements
Related instructions MINDEX[ ]
Copies the indexed stack element to the top of the stack.
Pops a stack element number, k, from the stack and pushes a copy of the kth stack element on the top of the stack. Since it is a copy that is pushed, the kth element remains in its original position.
This feature is the key difference between the CINDEX[ ] and MINDEX[ ] instructions.
A zero or negative value for k is an error.
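On a list-based model of the interpreter stack (top of stack at the end of the list), CINDEX can be sketched as follows. This is an illustrative simplification, not the specified implementation.

```python
# CINDEX: pop k, then push a COPY of the kth element counted from the
# top; the original element stays in place (unlike MINDEX, which moves it).

def cindex(stack):
    k = stack.pop()
    stack.append(stack[-k])

stack = [10, 20, 30, 2]   # k = 2: copy the element two down from the top
cindex(stack)
print(stack)              # [10, 20, 30, 20]
```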
CLEAR[] CLEAR the stack
Code Range 0x22
Pops all the items on the stack (StkElt)
Pushes -
Clears all elements from the stack.
DEBUG[] DEBUG call
Code Range 0x4F
Pops n: integer (uint32)
Pushes -
Pops an integer from the stack. In non-debugging versions of the interpreter, the execution of instructions will continue. In debugging versions, available to font developers, an implementation
dependent debugger will be invoked.
This instruction is only for debugging purposes and should not be a part of a finished font. Some implementations do not support this instruction.
DELTAC1[] DELTA exception C1
Code Range 0x73
Pops n: number of pairs of exception specifications and CVT entry numbers (uint32)
arg[n], c[n], arg[n-1], c[n-1], ..., arg[1], c[1]: pairs of CVT entry number and exception specifications (pairs of uint32s)
Pushes -
Uses delta shift, delta base
Related instructions DELTAC2[ ], DELTAC3[ ], DELTAP1[ ], DELTAP2[ ], DELTAP3[ ]
Creates an exception to one or more CVT values, each at a specified point size and by a specified amount.
Pops an integer, n, followed by n pairs of exception specifications and control value table entry numbers. DELTAC1[] changes the value in each CVT entry specified at the size and by the pixel amount
specified in its paired argument.
The 8 bit arg component of the DELTAC1[] instruction decomposes into two parts. The most significant 4 bits represent the relative number of pixels per em at which the exception is applied. The least
significant 4 bits represent the magnitude of the change to be made.
The relative number of pixels per em is a function of the value specified in the argument and the delta base. The DELTAC1[] instruction works at pixel per em sizes beginning with the delta base through the delta_base + 15. To invoke an exception at a larger pixel per em size, use the DELTAC2[] or DELTAC3[] instruction, which can effect changes at sizes up to delta_base + 47 or, if necessary, increase the value of the delta base.

The magnitude of the move is specified, in a coded form, in the instruction. Table 4 lists the mapping from exception values to the magnitude of the move made. The size of the step depends on the value of the delta shift.
Table 4: Magnitude values mapped to number of steps to move
│Selector │0 │1 │2 │3 │4 │5 │6 │7 │8│9│10│11│12│13│14│15│
│Number of steps │-8│-7│-6│-5│-4│-3│-2│-1│1│2│3 │4 │5 │6 │7 │8 │
For additional information on the DELTA instructions see Instructing Fonts .
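Decoding a DELTA exception argument byte can be sketched as follows. The function name is illustrative; the values 9 and 3 used in the example are the default delta base and delta shift of the graphics state.

```python
# The high nibble of the argument byte selects a ppem size relative to
# delta_base; the low nibble is the Table 4 selector (0..15, skipping a
# zero-step value). Each step is (1/2)^delta_shift pixels.

def decode_delta_arg(arg: int, delta_base: int, delta_shift: int):
    ppem = delta_base + (arg >> 4)
    selector = arg & 0x0F
    steps = selector - 8 if selector < 8 else selector - 7  # -8..-1, 1..8
    move_in_pixels = steps / (1 << delta_shift)
    return ppem, move_in_pixels

# With the default delta_base = 9 and delta_shift = 3:
print(decode_delta_arg(0x2B, 9, 3))   # (11, 0.5): at 11 ppem, +4 steps of 1/8 px
```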
DELTAC2[] DELTA exception C2
Code Range 0x74
Pops n: number of pairs of exception specifications and CVT entry numbers (uint32)
arg[n], c[n], arg[n-1], c[n-1], ..., arg[1], c[1]: pairs of CVT entry number and exception specifications (pairs of uint32s)
Pushes -
Uses delta shift, delta base
Related instructions DELTAC1[ ], DELTAC3[ ], DELTAP1[ ], DELTAP2[ ], DELTAP3[ ]
Creates an exception to one or more CVT values, each at a specified point size and by a specified amount.
Pops an integer, n, followed by n pairs of exception specifications and CVT entry numbers. DELTAC2[] changes the value in each CVT entry specified at the size and by the amount specified in its
paired argument.
The DELTAC2[] instruction is exactly the same as the DELTAC1[] instruction except for operating at pixel per em sizes beginning with the (delta_base + 16) through the (delta_base + 31). To invoke an exception at a smaller pixel per em size, use the DELTAC1[] instruction. To invoke an exception at a larger pixel per em size, use the DELTAC3[] instruction, which can effect changes at sizes up to delta_base + 47 or, if necessary, change the value of the delta base.
For more information see the entry for DELTAC1[] or Instructing Fonts .
DELTAC3[] DELTA exception C3
Code Range 0x75
Pops n: number of pairs of CVT entry numbers and exception specifications (uint32)
arg[n], c[n], arg[n-1], c[n-1], ..., arg[1], c[1]: pairs of CVT entry number and exception specifications (pairs of uint32s)
Pushes -
Uses delta shift, delta base
Related instructions DELTAC1[ ], DELTAC2[ ], DELTAP1[ ], DELTAP2[ ], DELTAP3[ ]
Creates an exception to one or more CVT values, each at a specified point size and by a specified amount.
Pops an integer, n, followed by n pairs of exception specifications and CVT entry numbers. DELTAC3[] changes the value in each CVT entry specified at the size and by the amount specified in its
paired argument.
The DELTAC3[] instruction is exactly the same as the DELTAC1[] instruction except for operating at pixel per em sizes beginning with the (delta_base + 32) through the (delta_base + 47).
For more information see the entry for DELTAC1[] or Instructing Fonts .
DELTAP1[] DELTA exception P1
Code Range 0x5D
Pops n: number of pairs of exception specifications and points (uint32)
arg[n], p[n], arg[n-1], p[n-1], ..., arg[1], p[1]: n pairs of exception specifications and points (pairs of uint32s)
Pushes -
Uses zp0, delta base, delta shift, freedom vector, projection vector
Related instructions DELTAC1[ ], DELTAC2[ ], DELTAC3[ ], DELTAP2[ ], DELTAP3[ ]
Creates an exception at one or more point locations, each at a specified point size and by a specified amount.
Pops an integer, n, followed by n pairs of exception specifications and points. DELTAP1[] works on the points in the zone referenced by zp0. It moves the specified points at the size and by the amount specified in the paired argument. Moving a point makes it possible to turn on or off selected pixels in the bitmap that will be created when the affected outline is scan converted. An arbitrary number of points and arguments can be specified.

The grouping [arg[i], p[i]] can be repeated n times. The value of arg[i] is a byte whose lower four bits represent the magnitude of the exception and whose upper four bits represent the relative pixel per em value.
The actual pixel per em size at which a DELTAP instruction works is a function of the relative pixel per em size and the delta base. The DELTAP1[] instruction works at pixel per em sizes beginning with the delta base through the delta_base + 15. To invoke an exception at a larger pixel per em size, use the DELTAP2[] or DELTAP3[] instruction, which together can effect changes at sizes up to delta_base + 47 or, if necessary, increase the value of the delta base.
The magnitude of the move is specified, in a coded form, in the instruction. Table 5 lists the mapping from exception values used in a DELTA instruction to the magnitude in steps of the move made.
The size of the step depends on the value of the delta shift.
Table 5: Magnitude values mapped to number of steps to move
│Selector │0 │1 │2 │3 │4 │5 │6 │7 │8│9│10│11│12│13│14│15│
│Number of steps │-8│-7│-6│-5│-4│-3│-2│-1│1│2│3 │4 │5 │6 │7 │8 │
DELTAP2[] DELTA exception P2
Code Range 0x71
Pops n: number of pairs of exception specifications and points (uint32)
arg[n], p[n], arg[n-1], p[n-1], ..., arg[1], p[1]: n pairs of exception specifications and points (pairs of uint32s)
Pushes -
Uses zp0, delta shift, delta base, freedom vector, projection vector
Related instructions DELTAC1[ ], DELTAC2[ ], DELTAC3[ ], DELTAP1[ ], DELTAP3[ ]
Creates an exception at one or more point locations, each at a specified point size and by a specified amount.
DELTAP2[] works on the points in the zone referenced by zp0. It moves the specified points at the size and by the amount specified in the paired argument. Moving a point makes it possible to turn on or off selected pixels in the bitmap that will be created when the affected outline is scan converted. An arbitrary number of points and arguments can be specified.

The DELTAP2[] instruction is identical to the DELTAP1[] instruction save for operating at pixel per em sizes beginning with the (delta_base + 16) through the (delta_base + 31). To invoke an exception at a smaller pixel per em size, use the DELTAP1[] instruction. To invoke an exception at a larger pixel per em size, use the DELTAP3[] instruction. If necessary, change the value of the delta_base.
DELTAP3[] DELTA exception P3
Code Range 0x72
Pops n: number of pairs of exception specifications and points (uint32)
arg[n], p[n], arg[n-1], p[n-1], ..., arg[1], p[1]: n pairs of exception specifications and points (pairs of uint32s)
Pushes -
Uses zp0, delta base, delta shift, freedom vector, projection vector
Related instructions DELTAC1[ ], DELTAC2[ ], DELTAC3[ ], DELTAP1[ ], DELTAP2[ ]
Creates an exception at one or more point locations, each at a specified point size and by a specified amount.
Pops an integer, n, followed by n pairs of exception specifications and points. DELTAP3[] works on the points in the zone referenced by zp0. It moves the specified points at the size and by the amount specified in the paired argument. Moving a point makes it possible to turn on or off selected pixels in the bitmap that will be created when the affected outline is scan converted. An arbitrary number of points and arguments can be specified.

The DELTAP3[] instruction is identical to the DELTAP1[] instruction save for operating at pixel per em sizes beginning with the (delta_base + 32) through the (delta_base + 47). To invoke an exception at a smaller pixel per em size, use the DELTAP1[] or the DELTAP2[] instruction. If necessary, change the value of the delta base.
DEPTH[] DEPTH of the stack
Code Range 0x24
Pops -
Pushes n: number of elements (int32)
Pushes n, the number of elements currently in the stack, onto the stack.
DIV[] DIVide
Code Range 0x62
Pops n2: divisor (F26Dot6)
n1: dividend (F26Dot6)
Pushes (n1 * 64)/n2: quotient (F26Dot6)
Divides the number second from the top of the stack by the number at the top of the stack.
Pops two 26.6 fixed point numbers, n2 and n1, off the stack and pushes onto the stack the quotient obtained by dividing n1 by n2. The division takes place in the following fashion: n1 is shifted left by six bits and then divided by n2.
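The pre-shift keeps the quotient in 26.6 form. A minimal sketch (the function name is illustrative, and the rounding of negative quotients here is a simplification):

```python
# DIV on 26.6 operands: shifting the dividend left 6 bits before the
# integer division cancels the divisor's implicit factor of 64.

def div_f26dot6(n1: int, n2: int) -> int:
    return (n1 * 64) // n2

three = 3 * 64          # 3.0 in 26.6 form
half  = 32              # 0.5 in 26.6 form
print(div_f26dot6(three, half))   # 384, i.e. 6.0 in 26.6 form
```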
DUP[] DUPlicate top stack element
Code Range 0x20
Pops e: stack element (StkElt)
Pushes e: stack element (StkElt)
e: stack element (StkElt)
Duplicates the top element on the stack.
Pops an element, e, from the stack, duplicates that element and pushes it twice.
EIF[] End IF
Code Range 0x59
Pops -
Pushes -
Related instructions IF[ ], ELSE[ ]
Marks the end of an IF or IF-ELSE instruction sequence.
ELSE[] ELSE clause
Code Range 0x1B
Pops -
Pushes -
Related instructions IF[ ], EIF[ ]
Marks the start of the sequence of instructions that are to be executed when an IF instruction encounters a FALSE value on the stack. This sequence of instructions is terminated with an EIF
The ELSE portion of an IF-ELSE-EIF sequence is optional.
ENDF[] END Function definition
Code Range 0x2D
Pops -
Pushes -
Related instructions FDEF[ ], IDEF[ ]
Marks the end of a function definition or an instruction definition. Function definitions and instruction definitions cannot be nested.
Return to Contents
EQ[] EQual
Code Range 0x54
Pops e2: stack element e1: stack element
Pushes b: Boolean value (uint32 in the range [0,1])
Related instructions NEQ[ ]
Tests whether the top two numbers on the stack are equal in value.
Pops two 32 bit values, e2 and e1, from the stack and compares them. If they are the same, one, signifying TRUE is pushed onto the stack. If they are not equal, zero, signifying FALSE is placed onto
the stack.
Return to Contents
EVEN[] EVEN
Code Range 0x57
Pops e: stack element (F26Dot6)
Pushes b: Boolean value (uint32 in the range [0,1])
Uses round state
Related instructions ODD[ ]
Tests whether the number at the top of the stack, when rounded according to the round state, is even.
Pops a 26.6 number, e, from the stack and rounds that number according to the current round state. The number is then truncated to an integer. If the truncated number is even, one, signifying TRUE,
is pushed onto the stack; if it is odd, zero, signifying FALSE, is placed onto the stack.
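A Python sketch of this test, simplifying the round state to plain round-to-grid (an assumption of this sketch; any of the round states may actually be in effect):

```python
def round_to_grid(e):
    # simplest round state: round a 26.6 value to the nearest integer pixel
    return (e + 32) & ~63

def ttf_even(e):
    # EVEN: round, truncate to an integer, and test parity (1 = TRUE, 0 = FALSE)
    return 1 if ((round_to_grid(e) >> 6) & 1) == 0 else 0
```

So 2.0 (128 in 26.6) and 1.5 (96, which rounds to 2) both test even, while 1.0 (64) tests odd.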
Return to Contents
FDEF[] Function DEFinition
Code Range 0x2C
Pops f: function identifier number (integer in the range 0 through (n-1) where n is specified in the 'maxp' table
Pushes -
Related instructions ENDF[ ], CALL[ ]
Marks the start of a function definition and pops a number, f, from the stack to uniquely identify this function. That definition will terminate when an ENDF[] is encountered in the instruction
stream. A function definition can appear only in the font program or the CVT program. Functions must be defined before they can be used with a CALL[ ] instruction.
Return to Contents
FLIPOFF[] set the auto FLIP Boolean to OFF
Code Range 0x4E
Pops -
Pushes -
Sets auto flip
Affects MIRP, MIAP
Related instructions FLIPON[ ], MIRP[ ], MIAP[ ]
Sets the auto flip Boolean in the graphics state to FALSE causing the MIRP[] and MIAP[] instructions to use the sign of control value table entries. When auto flip is set to FALSE, the direction in
which distances are measured becomes significant. The default value for the auto flip state variable is TRUE.
Return to Contents
FLIPON[] set the auto FLIP Boolean to ON
Code Range 0x4D
Pops -
Pushes -
Sets auto flip
Affects MIRP, MIAP
Related instructions FLIPOFF[ ], MIRP[ ], MIAP[ ]
Sets the auto flip Boolean in the graphics state to TRUE causing the MIRP[] and MIAP[] instructions to ignore the sign of control value table entries. When the auto flip variable is TRUE, the
direction in which distances are measured becomes insignificant. The default value for the auto flip state variable is TRUE.
Return to Contents
FLIPPT[] FLIP PoinT
Code Range 0x80
Pops p1, p2, ..., ploopvalue: point number (uint32)
Pushes -
Uses zp0, loop
Related instructions FLIPRGON[ ], FLIPRGOFF[ ]
Makes an on-curve point an off-curve point or an off-curve point an on-curve point.
Pops points p1, p2, ..., ploopvalue from the stack. If pi is an on-curve point it is made an off-curve point. If pi is an off-curve point it is made an on-curve point. None of the points pi is marked as touched. As a result, none of the flipped points will be affected by an IUP[ ] instruction. A FLIPPT[ ] instruction redefines the shape of a glyph outline.
Return to Contents
FLIPRGOFF[] FLIP RanGe OFF
Code Range 0x82
Pops h: high point number in range (uint32)
l: low point number in range (uint32)
Pushes -
Uses zp0
Related instructions FLIPPT[ ], FLIPRGON[ ]
Changes all of the points in the range specified to off-curve points.
Pops two numbers defining a range of points, the first a highpoint and the second a lowpoint. On-curve points in this range will become off-curve points. The position of the points is not affected
and accordingly the points are not marked as touched.
Return to Contents
FLIPRGON[] FLIP RanGe ON
Code Range 0x81
Pops h: highest point number in range of points to be flipped (uint32)
l: lowest point number in range of points to be flipped (uint32)
Pushes -
Uses zp0
Related instructions FLIPPT[ ], FLIPRGOFF[ ]
Makes all the points in a specified range into on-curve points.
Pops two numbers defining a range of points, the first a highpoint and the second a lowpoint. Off-curve points in this range will become on-curve points. The position of the points is not affected
and accordingly the points are not marked as touched.
Return to Contents
FLOOR[] FLOOR
Code Range 0x66
Pops n: number whose floor is desired (F26Dot6)
Pushes ⌊n⌋: floor of n (F26Dot6)
Related instructions CEILING[ ]
Takes the floor of the value at the top of the stack.
Pops a 26.6 fixed point number n from the stack and pushes ⌊n⌋, the greatest integer value less than or equal to n. Note that the floor of n, though an integer value, is expressed as a 26.6 fixed point number.
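Because the fractional part occupies the low six bits, the floor can be computed by masking them off. An illustrative Python sketch:

```python
def f26dot6_floor(n):
    # Clearing the six fractional bits floors toward negative infinity,
    # which matches FLOOR's definition for negative values too.
    return n & ~63
```

For example, 1.5 (96 in 26.6) floors to 1.0 (64), and -1.5 (-96) floors to -2.0 (-128).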
Return to Contents
GC[a] Get Coordinate projected onto the projection vector
Code Range 0x46 - 0x47
a 0: use current position of point p
1: use the position of point p in the original outline
Pops p: point number (uint32)
Pushes c: coordinate location (F26Dot6)
Uses zp2, projection vector, dual projection vector
Related instructions SCFS[ ]
Gets the coordinate value of the specified point using the current projection vector.
Pops a point number p and pushes the coordinate value of that point on the current projection vector onto the stack. The value returned by GC[] is dependent upon the current direction of the
projection vector.
In the illustration below, GC[1], with p1 at the top of the stack, returns the original position of point p1, while GC[0], with p2 at the top of the stack, returns the current position of point p2.
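The projection itself is a dot product. An illustrative Python sketch using plain numbers rather than 26.6 and 2.14 fixed point:

```python
def gc(point, proj):
    # Coordinate of a point measured along a unit projection vector:
    # the dot product of the point's position with the vector.
    return point[0] * proj[0] + point[1] * proj[1]
```

With the projection vector set to the x-axis, gc((3, 4), (1, 0)) is simply the point's x-coordinate, 3.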
Return to Contents
GETINFO[] GET INFOrmation
Code Range 0x88
Pops selector: integer (uint32)
Pushes result: integer (uint32)
Used to obtain data about the version of the TrueType engine that is rendering the font as well as the characteristics of the current glyph. The instruction pops a selector used to determine the type
of information desired and pushes a result onto the stack.
Setting bit 0 in the selector requests the engine version. Setting bit 1 asks whether the glyph has been rotated. Setting bit 2 asks whether the glyph has been stretched. To request information on two or more of these values, set the appropriate bits. For example, a selector value of 6 (binary 110) requests information on both rotation and stretching.
The result is pushed onto the stack with the requested information. Bits 0 through 7 of result comprise the font engine version number. The version numbers are listed in TABLE 0-2.
Bit 8 is set to 1 if the current glyph has been rotated. It is 0 otherwise. Bit 9 is set to 1 to indicate that the glyph has been stretched. It is 0 otherwise.
TABLE 0-1 Selector bits and the results produced
│selector bits │meaning │result bits│
│0 │get engine version │0-7 │
│1 │rotated? │8 │
│2 │stretched? │9 │
The possible values for the engine version are given in TABLE 0-2.
TABLE 0-2 Font engine version number
│System │Engine Version│
│Macintosh System 6.0 │1 │
│Macintosh System 7.0 │2 │
│Windows 3.1 │3 │
│KanjiTalk 6.1 │4 │
If the TrueType engine is the System 7.0 version, the selector requested information on the version number, rotation and stretching, and the glyph is rotated but not stretched, the result will be 01 0000 0010 in binary, or 258.
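The result word can be assembled from the selector bits as in this illustrative Python sketch (the helper name is hypothetical):

```python
def getinfo_result(version, rotated, stretched, selector):
    # Assemble the word GETINFO would push for a given selector.
    result = 0
    if selector & 1:                 # selector bit 0: engine version in result bits 0-7
        result |= version & 0xFF
    if selector & 2 and rotated:     # selector bit 1: rotated flag in result bit 8
        result |= 1 << 8
    if selector & 4 and stretched:   # selector bit 2: stretched flag in result bit 9
        result |= 1 << 9
    return result
```

With version 2 (System 7.0), a selector of 7, and a rotated but unstretched glyph, this reproduces the 258 of the worked example above.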
Return to Contents
GFV[] Get Freedom Vector
Code Range 0x0D
Pops -
Pushes p[x]: x component (EF2Dot14)
p[y]: y component (EF2Dot14)
Gets freedom vector
Related instructions GPV[ ]
Decomposes the current freedom vector into its x and y components and puts those components on the stack as two 2.14 numbers. The numbers occupy the least significant two bytes of each long.
The first component pushed, px, is the x-component of the freedom vector. The second pushed, py, is the y-component of the freedom vector. Each is a 2.14 number.
GFV[] treats the freedom vector as a unit vector originating at the grid origin. In the illustration below, the distance from point A to point B is 1 unit.
Return to Contents
GPV[] Get Projection Vector
Code Range 0x0C
Pops -
Pushes p^x: x component (EF2Dot14)
p^y: y component (EF2Dot14)
Gets projection vector
Related instructions GFV[ ]
Decomposes the current projection vector into its x and y components and pushes those components onto the stack as two 2.14 numbers.
The first component pushed, px, is the x-component of the projection vector. The second pushed, py, is the y-component of the projection vector.
GPV[] treats the projection vector as a unit vector originating at the grid origin. In the illustration below, the distance from point A to point B is one unit.
Return to Contents
GT[] Greater Than
Code Range 0x52
Pops e2: stack element e1: stack element
Pushes b: Boolean value (uint32 in the range [0,1])
Related instructions LT[ ], GTEQ[ ]
Compares the size of the top two stack elements.
Pops two integers, e2 and e1, from the stack and compares them. If e1 is greater than e2, one, signifying TRUE, is pushed onto the stack. If e1 is not greater than e2, zero, signifying FALSE, is placed onto the stack.
Return to Contents
GTEQ[] Greater Than or EQual
Code Range 0x53
Pops e2: stack element e1: stack element
Pushes b: Boolean value (uint32 in the range [0,1])
Related instructions LTEQ[ ], GT[ ]
Compares the size of the top two stack elements.
Pops two integers, e2 and e1, from the stack and compares them. If e1 is greater than or equal to e2, one, signifying TRUE, is pushed onto the stack. If e1 is not greater than or equal to e2, zero, signifying FALSE, is placed onto the stack.
Return to Contents
IDEF[] Instruction DEFinition
Code Range 0x89
Pops opcode (Eint8)
Pushes -
Related instructions ENDF[ ]
Begins the definition of an instruction. The instruction is identified by the opcode popped. The intent of the IDEF[ ] instruction is to allow old versions of the scaler to work with fonts that use instructions defined in later releases of the TrueType interpreter. Referencing an undefined opcode will have no effect. The IDEF[ ] is not intended for creating user defined instructions; FDEF[ ] should be used for that purpose.
The instruction definition that began with the IDEF[ ] terminates when an ENDF[ ] is encountered in the instruction stream. Nested IDEFs are not allowed. Subsequent executions of the opcode popped
will be directed to the contents of this instruction definition. IDEFs should be defined in the font program. Defining instructions in the CVT program is not recommended.
Return to Contents
IF[] IF test
Code Range 0x58
Pops e: stack element
Pushes -
Related instructions ELSE[ ], EIF[ ]
Marks the beginning of an if-statement.
Pops an integer, e, from the stack. If e is zero (FALSE), the instruction pointer is moved to the associated ELSE or EIF[] instruction in the instruction stream. If e is nonzero (TRUE), the next
instruction in the instruction stream is executed. Execution continues until the associated ELSE[] instruction is encountered or the associated EIF[] instruction ends the IF[] statement. If an
associated ELSE[] statement is found before the EIF[], the instruction pointer is moved to the EIF[] statement.
Return to Contents
INSTCTRL[ ] INSTRuction execution ConTRoL
Code Range 0x8E
Pops s: selector (int32) v: value for instruction control (int32)
Pushes -
Sets instruction control
Sets the instruction control state variable making it possible to turn on or off the execution of instructions and to regulate use of parameters set in the CVT program.
This instruction clears and sets various control flags. The selector is used to choose the relevant flag. The value determines the new setting of that flag.
In the version 1.0 there are only two flags in use.
Flag 1 is used to inhibit grid-fitting. It is chosen with a selector value of 1. If this flag is set to TRUE (v=1), any instructions associated with glyphs will not be executed. If the flag is FALSE
(v=0), instructions will be executed. For example, to inhibit grid-fitting when a glyph is being rotated or stretched, use the following sequence on the preprogram:
PUSHB[000] 6 /* ask GETINFO to check for stretching or rotation */
GETINFO[ ] /* will push TRUE if glyph is stretched or rotated */
IF[] /* tests value at top of stack */
PUSHB[000] 1 /* value for INSTCTRL */
PUSHB[000] 1 /* selector for INSTCTRL */
INSTCTRL[] /* based on selector and value will turn grid-fitting off */
EIF[] /* ends the IF statement */
Flag 2 is used to establish that any parameters set in the CVT program should be ignored when instructions associated with glyphs are executed. These include, for example, the values for scantype and
the CVT cut-in. When flag2 is set to TRUE the default values of those parameters will be used regardless of any changes that may have been made in those values by the preprogram. When flag2 is set to
FALSE, parameter values changed by the CVT program will be used in glyph instructions.
INSTCTRL[] can only be executed in the CVT program.
Return to Contents
IP[] Interpolate Point
Code Range 0x39
Pops p1, p2, ..., ploopvalue: point number (uint32)
Pushes -
Uses zp0 with rp1, zp1 with rp2, zp2 with point p, loop, freedom vector, projection vector, dual projection vector
Related instructions IUP[ ]
Interpolates the position of the specified points to preserve their original relationship with the reference points rp1 and rp2.
Pops point numbers p1, p2, ..., ploopvalue from the stack. Moves each point pi so that its relationship to rp1 and rp2 is the same as it was in the original uninstructed outline. That is, measuring along the projection vector, the following relationship holds:
d(pi, rp1) : d(pi, rp2) = d'(pi, rp1) : d'(pi, rp2)
where d denotes distance in the current outline and d' denotes distance in the original uninstructed outline.
This instruction is illegal if rp1 and rp2 occupy the same position on the projection vector.
More intuitively, an IP[] instruction preserves the position of a point relative to two reference points.
In the illustrations below, point p is interpolated relative to reference points rp1 and rp2. In the first illustration, which depicts the situation before the IP[] instruction is executed, the distance from point p to the original position of rp1 is d1 and the distance from point p to the original position of rp2 is d2. The ratio of the two distances is d1:d2.
The effect of the IP[] instruction is shown in the illustration below. It moves point p along the freedom vector until the ratio of the distance d3, from the current position of rp1 to point p, to the distance d4, from point p to the current position of rp2, is equal to d1:d2. That is, point p is moved along the freedom vector until d1:d2 = d3:d4, where these distances are measured along the projection vector.
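A one-dimensional Python sketch of the interpolation, assuming the projection vector is the x-axis so projections are plain coordinates (a simplification; the real instruction works along an arbitrary projection vector):

```python
def ip_1d(orig_p, orig_rp1, orig_rp2, new_rp1, new_rp2):
    # Preserve the original ratio d1:d2 between the two reference
    # points: compute where p sat between rp1 and rp2 originally,
    # then place it at the same fraction of the new span.
    t = (orig_p - orig_rp1) / (orig_rp2 - orig_rp1)
    return new_rp1 + t * (new_rp2 - new_rp1)
```

For example, a point originally 30% of the way from rp1 to rp2 stays 30% of the way along after the reference points move.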
Return to Contents
ISECT[] moves point p to the InterSECTion of two lines
Code Range 0x0F
Pops a0: start point of line A (uint32)
a1: end point of line A (uint32)
b0: start point of line B (uint32)
b1: end point of line B (uint32)
p: point to move (uint32)
Pushes -
Uses zp2 with point p, zp0 with line A, zp1 with line B
Moves the specified point to the intersection of the two lines specified.
Pops the end points of line A, a0 and a1, followed by the end points of line B, b0 and b1, followed by point p. Puts point p at the intersection of the lines A and B. The points a0 and a1 define line A. Similarly, b0 and b1 define line B. ISECT ignores the freedom vector in moving point p.
In the degenerate case of parallel lines A and B, the point is put in the middle. That is, point p is placed at the average of the positions of the four points a0, a1, b0 and b1.
In the illustration below, point p is moved from its current position to the intersection of the line defined by a0 and a1 and the line defined by b0 and b1. Note that point p need not move along the freedom vector but is simply relocated at the point of intersection.
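The intersection and the parallel fallback can be sketched in Python with plain (x, y) coordinates; treating "the middle" as the average of the four endpoints is how this sketch interprets the degenerate case:

```python
def isect(a0, a1, b0, b1):
    # Intersection of line A (through a0, a1) and line B (through
    # b0, b1); each argument is an (x, y) pair.
    (ax, ay), (bx, by) = a0, a1
    (cx, cy), (dx, dy) = b0, b1
    denom = (bx - ax) * (dy - cy) - (by - ay) * (dx - cx)
    if denom == 0:
        # Parallel lines: put the point "in the middle" -- here the
        # average of the four endpoints.
        return ((ax + bx + cx + dx) / 4, (ay + by + cy + dy) / 4)
    t = ((cx - ax) * (dy - cy) - (cy - ay) * (dx - cx)) / denom
    return (ax + t * (bx - ax), ay + t * (by - ay))
```

Two crossing diagonals of a square, for instance, intersect at its center.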
Return to Contents
IUP[a] Interpolate Untouched Points through the outline
Code Range 0x30 - 0x31
a 0: interpolate in the y-direction
1: interpolate in the x-direction
Pops -
Pushes -
Uses zp2
Related instructions IP[ ]
Interpolates untouched points in the zone referenced by zp2 to preserve the original relationship of the untouched points to the other points in that zone.
Considers the reference glyph outline contour by contour, moving any untouched points that fall sequentially between a pair of touched points. How such a point is moved, however, depends on whether its projection falls between the projections of the touched points. That is, if the projected x-coordinate or y-coordinate (depending on whether the interpolation is in x or in y) of an untouched point was originally between those of the touched pair, that coordinate is linearly interpolated between the new coordinates of the touched points. Otherwise the untouched point is shifted by the amount the nearest touched point was shifted from its original outline position. The value of the Boolean a determines whether the interpolation will be in the x-direction or the y-direction. The current settings of the freedom and projection vectors are not relevant.
The set of figures below illustrates this distinction. The first illustration shows the contour before the IUP[] instruction is executed. Here p1, p2, p3, p4 and p5 are consecutive points on a contour. Points p2, p3 and p4 all fall sequentially between p1 and p5 on the contour. Assume that point p3 has been touched.
Point p4 has an x-coordinate that is between those of p1 and p5, while points p2 and p3 do not. Assume that p1 and p5 have been moved by previous instructions and that point p3 has been touched but not moved from its original position. As a result of an IUP[1], an interpolation in the x-direction takes place. Point p4 will be linearly interpolated. Point p2 will be shifted by the amount the nearest touched point was shifted. Point p3 will be unaffected. (Points p2 and p4 are assumed to be in their original positions. This is not strictly necessary, as a point that has been moved can be untouched with the UTP[ ] instruction and hence made subject to the actions of an IUP[ ] instruction.)
As the result of the IUP[1] instruction, two points are moved. The first move is the shift illustrated below. Point p1 has moved a distance of ds units parallel to the x-axis from its original position. Point p2 is moved parallel to the x-axis until it is at a distance equal to ds from its original position.
The second move is the linear interpolation shown in the illustration below. Point p4 is moved along the specified axis to a new position that preserves its relative distance from points p1 and p5. After the interpolation, the ratio of the original distance from point p4 to p1 (d1) to the original distance from point p4 to p5 (d2) is the same as the ratio of the new distance from point p4 to p1 (d3) to the new distance from point p4 to p5 (d4). That is: d1:d2 = d3:d4
This instruction operates on points in the glyph zone pointed to by zp2. This zone should always be zone 1. Applying IUP[ ] to zone 0 is illegal.
The IUP[ ] instruction does not touch the points it moves. Thus the untouched points affected by an IUP[ ] instruction will be affected by subsequent IUP[ ] instructions unless they are touched by an intervening instruction.
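A one-axis Python sketch of the two cases (interpolate when the untouched point's coordinate lay between the touched pair, otherwise shift by the nearest touched point's delta) for a single run of points whose first and last entries are touched; a real engine works per contour in 26.6 fixed point, and this sketch assumes the touched endpoints had distinct original coordinates:

```python
def iup_span(orig, cur, touched):
    # orig: original coordinates; cur: current coordinates;
    # touched: flags.  First and last points must be touched.
    lo, hi = 0, len(orig) - 1
    o1, o2 = orig[lo], orig[hi]
    n1, n2 = cur[lo], cur[hi]
    out = list(cur)
    for i in range(lo + 1, hi):
        if touched[i]:
            continue                      # touched points stay put
        if min(o1, o2) <= orig[i] <= max(o1, o2):
            # between the touched pair: linear interpolation
            t = (orig[i] - o1) / (o2 - o1)
            out[i] = n1 + t * (n2 - n1)
        else:
            # outside the pair: shift by the nearer touched point's delta
            near = lo if abs(orig[i] - o1) <= abs(orig[i] - o2) else hi
            out[i] = orig[i] + (cur[near] - orig[near])
    return out
```

An untouched point halfway between the pair stays halfway after the pair moves; a point outside the pair rides along with its nearest touched neighbour.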
Return to Contents
JMPR[] JuMP Relative
Code Range 0x1C
Pops offset: number of bytes to move instruction pointer (int32)
Pushes -
Related instructions JROF[ ], JROT[ ]
Moves the instruction pointer to a new location specified by the offset popped from the stack.
Pops an integer offset from the stack. The signed offset is added to the instruction pointer and execution is resumed at the new location in the instruction steam. The jump is relative to the
position of the instruction itself. That is, an offset of +1 causes the instruction immediately following the JMPR[] instruction to be executed.
Return to Contents
JROF[] Jump Relative On False
Code Range 0x79
Pops e: stack element offset: number of bytes to move instruction pointer (int32)
Pushes -
Related instructions JMPR[ ] JROT[ ]
Moves the instruction pointer to a new location specified by the offset popped from the stack if the element tested has a FALSE (zero) value.
Pops a Boolean value, e and an offset. In the case where the Boolean, e, is FALSE, the signed offset will be added to the instruction pointer and execution will be resumed at the new location;
otherwise, the jump is not taken. The jump is relative to the position of the instruction itself.
Return to Contents
JROT[] Jump Relative On True
Code Range 0x78
e: stack element
Pops offset: number of bytes to move
instruction pointer (int32)
Pushes -
Related instructions JMPR[ ] JROF[ ]
Moves the instruction pointer to a new location specified by the offset value popped from the stack if the element tested has a TRUE value.
Pops a Boolean value, e and an offset. If the Boolean is TRUE (non-zero) the signed offset will be added to the instruction pointer and execution will be resumed at the address obtained. Otherwise,
the jump is not taken. The jump is relative to the position of the instruction itself.
Return to Contents
LOOPCALL[] LOOP and CALL function
Code Range 0x2A
Pops f: function number integer in the range 0 through (n-1) where n is specified in the 'maxp' table
count: number of times to call the function (signed word)
Pushes -
Related instructions SLOOP[ ]
Repeatedly calls a function.
Pops a function number f and a count. Calls the function, f, count number of times.
Return to Contents
LT[] Less Than
Code Range 0x50
Pops e2: stack element (StkElt)
e1: stack element (StkElt)
Pushes b: Boolean value (uint32 in the range [0,1])
Related instructions GT[ ], LTEQ[ ]
Compares the two numbers at the top of the stack. The test succeeds if the second of the two numbers is smaller than the first.
Pops two integers, e2 and e1, from the stack and compares them. If e1 is less than e2, one, signifying TRUE, is pushed onto the stack. If e1 is not less than e2, zero, signifying FALSE, is placed onto the stack.
Return to Contents
LTEQ[] Less Than or EQual
Code Range 0x51
Pops e2: stack element
e1: stack element
Pushes b: Boolean value (uint32 in the range [0,1])
Related instructions GTEQ[ ], LT[ ]
Compares the two numbers at the top of the stack. The test succeeds if the second of the two numbers is smaller than or equal to the first.
Pops two integers, e2 and e1, from the stack and compares them. If e1 is less than or equal to e2, one, signifying TRUE, is pushed onto the stack. If e1 is greater than e2, zero, signifying FALSE, is placed onto the stack.
Return to Contents
MAX[] MAXimum of top two stack elements
Code Range 0x8B
Pops e2: stack element
e1: stack element
Pushes maximum of e1 and e2
Related instructions MIN[ ]
Returns the larger of the top two stack elements.
Pops two elements, e2 and e1, from the stack and pushes the larger of these two quantities onto the stack.
Return to Contents
MD[a] Measure Distance
Code Range 0x49 - 0x4A
a 0: measure distance in grid-fitted outline
1: measure distance in original outline
Pops p2: point number (uint32) p1: point number (uint32)
Pushes d: distance (F26Dot6)
Uses zp0 with point p1, zp1 with point p2, projection vector, dual projection vector
Measures the distance between the two points specified.
Pops two point numbers, p2 and p1, and measures the distance between the two points specified. The distance, d, is pushed onto the stack as a pixel coordinate. The distance is signed. Reversing the order in which the points are listed will change the sign of the result.
Depending upon the setting of the Boolean variable a, distance will be measured in the original outline or the grid-fitted outline. MD[0] measures the distance in the grid-fitted outline, while MD[1] measures the distance in the original outline. As always, distance is measured along the projection vector. Just as reversing the order in which the points are listed will change the sign of the distance, reversing the orientation of the projection vector will have the same effect.
In the example below, MD[1] will yield the original outline distance from point p1 to point p2. MD[0] will yield the grid-fitted distance from point p1 to point p2.
Return to Contents
MDAP[a] Move Direct Absolute Point
Code Range 0x2E - 0x2F
a: 0: do not round the value
1: round the value
Pops p: point number (uint32)
Pushes -
Sets rp0 and rp1 to point p
Uses zp0, freedom vector, projection vector, round state
Related instructions MDRP[ ], MIAP[ ]
Touches and, in some cases, rounds the specified point. A point that is "dapped" will be unaffected by subsequent IUP[ ] instructions and is generally intended to serve as a reference point for future instructions. Dapping a point with rounding set to grid will cause the point to have an integer valued coordinate along the projection vector. If the projection vector is set to the x-axis or y-axis, this will cause the point to be grid-aligned.
Pops a point number, p, and sets reference points rp0 and rp1 to point p. If the Boolean a is set to 1, the coordinate of point p, as measured against the projection vector, will be rounded and the point then moved the rounded distance from its current position. If the Boolean a is set to 0, point p is not moved, but nonetheless is marked as touched in the direction(s) specified by the current freedom vector.
Return to Contents
MDRP[abcde] Move Direct Relative Point
Code Range 0xC0 - 0xDF
a 0: do not reset rp0 to point p
1: set rp0 to point p
b 0: do not keep distance greater than or equal to minimum distance
1: keep distance greater than or equal to minimum distance
c 0: do not round distance
1: round the distance
de distance type for engine characteristic compensation
Pops p: point number (uint32)
Pushes -
Sets after point p is moved, rp1 is set equal to rp0, rp2 is set equal to point number p; if the a flag is set to TRUE, rp0 is set equal to point number p
Uses zp0 with rp0 and zp1 with point p, minimum distance, round state,single width value, single width cut-in, freedom vector, projection vector, dual projection vector
Related instructions MDAP[ ], MIRP[ ]
Preserves the master outline distance between the specified point and the reference point rp0.
Pops a point number, p, and moves point p along the freedom vector so that the distance from its new position to the current position of rp0 is the same as the distance between the two points in the original uninstructed outline, and then adjusts it to be consistent with the Boolean settings. Note that it is only the original positions of rp0 and point p and the current position of rp0 that determine the new position of point p along the freedom vector.
MDRP[] is typically used to control the width or height of a glyph feature using a value which comes from the original outline. Since MDRP[] uses a direct measurement and does not reference the
control value cut-in, it is used to control measurements that are unique to the glyph being instructed. Where there is a need to coordinate the control of a point with the treatment of points in
other glyphs in the font, a MIRP[] instruction is needed.
Though MDRP[] does not refer to the CVT, its effect does depend upon the single-width cut-in value. If the device space distance between the measured value taken from the uninstructed outline and the
single width value is less than the single width cut-in, the single width value will be used in preference to the outline distance. In other words, if the two distances are sufficiently close (differ
by less than the single width cut-in), the single width value will be used.
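The single-width substitution reduces to a closeness test; an illustrative Python sketch (function name hypothetical, distances in device-space units):

```python
def apply_single_width(dist, single_width, cut_in):
    # If the measured distance is within the cut-in of the single
    # width value, substitute the single width value.
    return single_width if abs(dist - single_width) < cut_in else dist
```

With a single width of 64 and a cut-in of 4, a measured distance of 66 snaps to 64, while 80 is left alone.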
The setting of the round state graphics state variable will determine whether and how the distance of point p from rp0 is rounded. If the round bit is not set, the value will be unrounded. If the round bit is set, the effect will depend upon the choice of rounding state.
A MDRP[] instruction can also be set to use the minimum distance value. Minimum distance sets a lower bound on the value the distance between two points can be rounded to.
Distances measured with the MDRP[] instruction, like all TrueType distances, must be either black, white or grey. Indicating this value in Booleans de allows the interpreter to compensate for engine
characteristics as needed.
In the illustration below, point p is moved along the freedom vector from its current position to a new position that is a distance d from the reference point rp0. This distance is the same as the original distance from p to rp0.
Return to Contents
MIAP[a] Move Indirect Absolute Point
Code Range 0x3E - 0x3F
a 0: don't round the distance and don't look at the control value cut-in
1: round the distance and look at the control value cut-in
Pops n: CVT entry number (F26Dot6)
p: point number (uint32)
Pushes -
Sets set rp0 and rp1 to point p
Uses zp0, round state, control value cut-in, freedom vector, projection vector
Related instructions MSIRP[ ], MIRP[ ], MDAP[ ]
Makes it possible to coordinate the location of a point with that of other similar points by moving that point to a location specified in the control value table.
Pops a CVT entry number n and a point number p and then moves point p to the absolute coordinate position specified by the nth control value table entry. The coordinate is measured along the current
projection vector. If boolean a has the value one, the position will be rounded as specified by round state. If boolean a has the value one and the device space difference between the CVT value and
the original position is greater than the control value cut-in, the original position will be rounded (instead of the CVT value.)
The a Boolean above controls both rounding and the use of the control value cut-in. To have this Boolean specify only whether or not the MIAP[] instruction should look at the control value cut-in
value, use the ROFF[] instruction to turn off rounding.
This instruction can be used to "create" twilight zone points. This is accomplished by setting zp0 to zone 0 and moving the specified point, which is initially at the origin to the desired location.
In the illustration below, point p is moved along the freedom vector until it occupies a position that projects to c units along the projection vector.
Return to Contents
MIN[] MINimum of top two stack elements
Code Range 0x8C
Pops e2: stack element e1: stack element
Pushes minimum of e1 and e2
Related instructions MAX[ ]
Returns the minimum of the top two stack elements.
Pops two elements, e2 and e1, from the stack and pushes the smaller of these two quantities onto the stack.
Return to Contents
MINDEX[] Move the INDEXed element to the top of the stack
Code Range 0x26
Pops k: stack element number (uint32)
Pushes ek: indexed element
Stack before: k (top), e1, ..., ek-1, ek, ... (e1 is the element just beneath k; ek is the kth element counting down from e1)
Stack after: ek (top), e1, ..., ek-1, ... (ek has been moved to the top and removed from its original position)
Related instructions CINDEX[ ]
Moves the indexed element to the top of the stack thereby removing it from its original position.
Pops an integer, k, from the stack and moves the element with index k to the top of the stack.
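Modeling the stack as a Python list with the top at the end, MINDEX can be sketched as follows (compare CINDEX[ ], which copies the indexed element rather than moving it):

```python
def mindex(stack):
    # Pop k, then move the k-th element (counting from the top,
    # 1-based) to the top of the stack.
    k = stack.pop()
    stack.append(stack.pop(-k))
```

Given a stack [10, 20, 30] (30 on top) and k = 2, the element 20 is moved to the top, leaving [10, 30, 20].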
Return to Contents
MIRP[abcde] Move Indirect Relative Point
Code Range 0xE0 - 0xFF
a 0: Do not set rp0 to p
1: Set rp0 to p
b 0: Do not keep distance greater than or equal to minimum distance
1: Keep distance greater than or equal to minimum distance
c 0: Do not round the distance and do not look at the control value cut-in
1: Round the distance and look at the control value cut-in value
de: distance type for engine characteristic compensation
Pops n: CVT entry number (F26Dot6) p: point number (uint32)
Pushes -
Uses zp0 with rp0 and zp1 with point p, round state, control value cut-in, single width value, single width cut-in, freedom vector, projection vector, auto flip, dual projection vector
Sets After it has moved the point this instruction sets rp1 equal to rp0, rp2 is set equal to point number p; lastly, if a has the value TRUE, rp0 is set to point number p.
Related instructions MSIRP[ ], MIAP[ ], MDRP[ ]
Makes it possible to coordinate the distance between a point and a reference point with other similar distances by making that distance subject to a control value table entry.
Moves point p along the freedom vector so that the distance from p to the current position of rp0 is equal to the distance stated in the referenced CVT entry, assuming that the cut-in test
succeeds. Note that in making the cut-in test, MIRP[] uses the original outline distance between p and rp0. If the cut-in test fails, point p will be moved so that its distance from the current
position of rp0 is equal to the original outline distance between p and rp0.
A MIRP[] instruction makes it possible to preserve the distance between two points subject to a number of qualifications. Depending upon the setting of Boolean flag b, the distance can be kept
greater than or equal to the value established by the minimum distance state variable. Similarly, the instruction can be set to round the distance according to the round state graphics state
variable. The value of the minimum distance variable is the smallest possible value the distance between two points can be rounded to. Additionally, if the c Boolean is set, the MIRP[] instruction
acts subject to the control value cut-in. If the difference between the actual measurement and the value in the CVT is sufficiently small (less than the cut-in value), the CVT value will be used and
not the actual value. If the device space difference between the CVT value and the single width value is smaller than the single width cut-in, then use the single width value rather than the control
value table distance.
The c Boolean above controls both rounding and the use of control value table entries. If you would like the meaning of this Boolean to specify only whether or not the MIRP[] instruction should look
at the control value cut-in, use the ROFF[] instruction to turn off rounding. In this manner, it is possible to specify that rounding is off but the cut-in still applies.
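The cut-in test described above can be sketched as follows. Distances are in 26.6 fixed-point units and the function name is illustrative, not part of the instruction set:

```python
def cutin_distance(original_dist, cvt_dist, cv_cutin):
    # If the CVT value is close enough to the original outline
    # distance (difference smaller than the control value cut-in),
    # the CVT value is used; otherwise the original outline
    # distance is used instead.
    if abs(cvt_dist - original_dist) < cv_cutin:
        return cvt_dist
    return original_dist
```

For example, with the default cut-in of 17/16 pixel (68 in 26.6 units), a CVT distance of 110 replaces an outline distance of 100, but a CVT distance of 200 does not.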
MIRP[] can be used to create points in the twilight zone.
In the illustration below, point p is moved along the freedom vector until its distance to point rp0 is equal to the distance d found in the reference CVT entry.
Return to Contents
MPPEM[] Measure Pixels Per EM
Code Range 0x4B
Pops -
Pushes ppem: pixels per em (Euint16)
Uses projection vector
Related instructions MPS[ ]
Pushes the current number of pixels per em onto the stack. Pixels per em is a function of the resolution of the rendering device, the current point size, and the current transformation matrix. This
instruction looks at the projection vector and returns the number of pixels per em in that direction. The number is always an integer.
The illustration below shows magnifications of an 18 point Times New Roman s at 72 dpi, 144 dpi, and 300 dpi, respectively. Increasing the number of pixels per em improves the quality of the image
obtained. It does not, however, change the absolute size of the image obtained.
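The relationship between point size, device resolution, and pixels per em can be sketched as follows (a simplified model that ignores the transformation matrix):

```python
def pixels_per_em(point_size, dpi):
    # One point is 1/72 inch, so an em of point_size points spans
    # point_size / 72 inches, or point_size * dpi / 72 device pixels.
    # MPPEM[] always returns an integer.
    return round(point_size * dpi / 72)
```

This reproduces the illustration above: 18 point Times New Roman yields 18 ppem at 72 dpi, 36 ppem at 144 dpi, and 75 ppem at 300 dpi.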
Return to Contents
MPS[] Measure Point Size
Code Range 0x4C
Pops -
Pushes pointSize: the size in points of the current glyph (Euint16)
Related instructions MPPEM[ ]
Pushes the current point size onto the stack.
Measure point size can be used to obtain a value which serves as the basis for choosing whether to branch to an alternative path through the instruction stream. It makes it possible to treat point
sizes below or above a certain threshold differently.
The illustration below shows magnifications of 12 point, 24 point, and 48 point Times New Roman Q at 72 dpi. Note that increasing the point size of a glyph increases its absolute size. On a low
resolution device, like a screen, more detail can be captured at a higher point size.
Return to Contents
MSIRP[a] Move Stack Indirect Relative Point
Code Range 0x3A - 0x3B
a 0: do not change rp0
1: set rp0 to point number p
Pops d: distance (F26Dot6) p: point number (uint32)
Pushes -
Uses zp1 with point p and zp0 with rp0, freedom vector, projection vector
Related instructions MIRP[ ]
Makes it possible to coordinate the distance between a point and a reference point by setting the distance from a value popped from the stack.
Pops a distance, d and a point number, p, and makes the distance between point p and the current position of rp0 equal to d. The distance, d, is in pixel coordinates.
MSIRP[ ] is very similar to the MIRP[ ] instruction except for taking the distance from the stack rather than the CVT. Since MSIRP[ ] does not use the CVT, the control value cut-in is not a factor as
it is in MIRP[ ]. Since MSIRP[ ] does not round, its effect is not dependent upon the round state.
MSIRP[] can be used to create points in the twilight zone.
In the illustration below, point p is moved along the freedom vector until it is at a distance d from rp0.
Return to Contents
MUL[] MULtiply
Code Range 0x63
Pops n[2]: multiplier (F26Dot6)
n[1]: multiplicand (F26Dot6)
Pushes (n[2] * n[1])/64: product (F26Dot6)
Related instructions DIV[ ]
Multiplies the top two numbers on the stack. Pops two 26.6 numbers, n2 and n1, from the stack and pushes onto the stack the product of the two elements. The 52.12 result is shifted right by 6 bits
and the high 26 bits are discarded yielding a 26.6 result.
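The 26.6 fixed-point multiply can be modeled directly with an integer arithmetic shift (an illustrative model, not interpreter source; Python's `>>` is an arithmetic shift, matching the behavior described above):

```python
def tt_mul(n1, n2):
    # Both operands are 26.6 fixed-point numbers (64 units per
    # pixel).  The full product is a 52.12 quantity; shifting right
    # by 6 bits yields a 26.6 result.
    return (n1 * n2) >> 6
```

For example, 1.5 pixels (96) times 2.0 pixels (128) yields 3.0 pixels (192).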
Return to Contents
NEG[] NEGate
Code Range 0x65
Pops n: pixel coordinate (F26Dot6)
Pushes -n: negation of n (F26Dot6)
Negates the number at the top of the stack.
Pops a number, n, from the stack and pushes the negated value of n onto the stack.
Return to Contents
NEQ[] Not EQual
Code Range 0x55
Pops e2: stack element
e1: stack element
Pushes b: Boolean value (uint32 in the range [0,1])
Related instructions EQ[ ]
Determines whether the two elements at the top of the stack are unequal.
Pops two numbers, e2 and e1, from the stack and compares them. If they are different, one, signifying TRUE, is pushed onto the stack. If they are equal, zero, signifying FALSE, is pushed onto the stack.
Return to Contents
NOT[] logical NOT
Code Range 0x5C
Pops e: stack element
Pushes (not e): logical negation of e (uint32)
Takes the logical negation of the number at the top of the stack.
Pops a number e from the stack and returns the result of a logical NOT operation performed on e. If e was zero, one is pushed onto the stack; if e was nonzero, zero is pushed onto the stack.
Return to Contents
NPUSHB[] PUSH N Bytes
Code Range 0x40
From IS n: number of bytes to push (1 byte interpreted as an integer)
b1, b2,...bn: sequence of n bytes
Pushes b1, b2,...bn: sequence of n bytes each extended to 32 bits (uint32)
Related instructions NPUSHW[ ], PUSHB[ ], PUSHW[]
Takes n bytes from the instruction stream and pushes them onto the stack.
Looks at the next byte in the instruction stream, n, and takes n unsigned bytes from the instruction stream, where n is an unsigned integer in the range 0 to 255, and pushes them onto the stack. The
number of bytes to push, n, is not pushed onto the stack.
Each byte value is unsigned extended to 32 bits before being pushed onto the stack.
Return to Contents
NPUSHW[] PUSH N Words
Code Range 0x41
From IS n: number of words to push (one byte interpreted as an integer)
w1, w2,...wn: sequence of n words formed from pairs of bytes, the high byte appearing first
Pushes w1, w2,...wn: sequence of n words each extended to 32 bits (int32)
Related instructions NPUSHB[ ], PUSHW[ ]
Takes n words from the instruction stream and pushes them onto the stack.
Looks at the next instruction stream byte n and takes n 16-bit signed words from the instruction stream, where n is an unsigned integer in the range 0 to 255, and pushes them onto the stack. Each word
is sign extended to 32 bits before being placed on the stack. The value n is not pushed onto the stack.
Return to Contents
NROUND[ab] No ROUNDing of value
Code Range 0x6C - 0x6F
ab distance type for engine characteristic compensation
Pops n[1]: pixel coordinate (F26Dot6)
Pushes n[2]: pixel coordinate (F26Dot6)
Related instructions ROUND[ ]
Changes the value of the number at the top of the stack to compensate for the engine characteristics.
Pops a value, n[1], from the stack and, possibly, increases or decreases its value to compensate for the engine characteristics established with the Boolean setting ab. The result, n[2], is pushed onto
the stack.
NROUND[ab] derives its name from its relationship to ROUND[ab]. It does the same operation as ROUND[ab] except that it does not round the result obtained after compensating for the engine
characteristics.
Return to Contents
ODD[] ODD
Code Range 0x56
Pops e1: stack element (F26Dot6)
Pushes b: Boolean value
Uses round state
Related instructions EVEN[ ]
Tests whether the number at the top of the stack is odd.
Pops a number, e1, from the stack and rounds it according to the current setting of the round state before testing it. The number is then truncated to an integer. If the truncated number is odd, one,
signifying TRUE, is pushed onto the stack; if it is even, zero, signifying FALSE, is placed onto the stack.
Return to Contents
OR[] logical OR
Code Range 0x5B
Pops e2: stack element e1: stack element
Pushes (e1 or e2): logical or of e1 and e2 (uint32)
Related instructions AND[ ]
Takes the logical or of the two numbers at the top of the stack.
Pops two numbers, e2 and e1, off the stack and pushes onto the stack the result of a logical or operation between the two elements. Zero is pushed if both of the elements are FALSE (have the value
zero). One is pushed if either or both of the elements are TRUE (have a nonzero value).
Return to Contents
POP[] POP top stack element
Code Range 0x21
Pops e: stack element
Pushes -
Pops the top element from the stack.
Return to Contents
PUSHB[abc] PUSH Bytes
Code Range 0xB0 - 0xB7
abc number of bytes to be pushed - 1
From IS b0, b1,...bn: sequence of n + 1 bytes, where n = 4a + 2b + c (the binary number abc)
Pushes b0, b1,...bn: sequence of n + 1 bytes each extended to 32 bits (uint32)
Related instructions NPUSHB[ ], PUSHW[ ]
Takes the specified number of bytes from the instruction stream and pushes them onto the interpreter stack.
The variables a, b, and c are binary digits representing numbers from 000 to 111 (0 through 7). The value 1 is automatically added to the abc figure to obtain the actual number of bytes pushed.
When byte values are pushed onto the stack they are zero extended to form 32-bit numbers.
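Since the byte count is encoded in the opcode itself, it can be recovered arithmetically (an illustrative helper, not part of the instruction set):

```python
def pushb_count(opcode):
    # PUSHB occupies opcodes 0xB0 through 0xB7.  The low three bits
    # are the binary digits abc; the number of bytes pushed is
    # abc + 1, so 0xB0 pushes 1 byte and 0xB7 pushes 8.
    return (opcode - 0xB0) + 1
```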
Return to Contents
PUSHW[abc] PUSH Words
Code Range 0xB8 - 0xBF
abc number of words to be pushed - 1.
From IS w0, w1,...wn: sequence of n+1 words formed from pairs of bytes, the high byte appearing first
Pushes w0, w1,...wn: sequence of n+1 words each sign extended to 32 bits (int32)
Related instructions NPUSHW[ ], PUSHB[ ]
Takes the specified number of words from the instruction stream and pushes them onto the interpreter stack.
The variables a, b, and c are binary digits representing numbers from 000 to 111 (0 through 7). The value 1 is automatically added to the abc figure to obtain the actual number of words pushed.
When word values are pushed onto the stack they are sign extended to 32 bits.
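Sign extension of a 16-bit instruction-stream word to a 32-bit stack value can be sketched as follows (an illustrative model):

```python
def sign_extend_16(word):
    # A word from the instruction stream is a 16-bit two's-complement
    # quantity.  If bit 15 is set, the value is negative, so extend
    # the sign into the upper 16 bits of the 32-bit stack element.
    word &= 0xFFFF
    return word - 0x10000 if word & 0x8000 else word
```

For example, the word 0xFFFF pushes -1, while 0x7FFF pushes 32767.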
Return to Contents
RCVT[] Read Control Value Table entry
Code Range 0x45
Pops location: CVT entry number (uint32)
Pushes value: CVT value (F26Dot6)
Related instructions WCVTP[ ], WCVTF[ ]
Reads a control value table entry and places its value onto the stack.
Pops a CVT location from the stack and pushes the value found in the location specified onto the stack.
Return to Contents
RDTG[] Round Down To Grid
Code Range 0x7D
Pops -
Pushes -
Sets round state
Affects MDAP[], MDRP[], MIAP[], MIRP[], ROUND[]
Related instructions RUTG[ ], RTG[], RTHG[], RTDG[], ROFF[]
Sets the round state variable to down to grid. In this state, distances are first subjected to compensation for the engine characteristics and then truncated to an integer. If the result of the
compensation and rounding would be to change the sign of the distance, the distance is set to 0.
Return to Contents
ROFF[] Round OFF
Code Range 0x7A
Pop -
Pushes -
Sets round state
Affects MDAP[], MDRP[], MIAP[], MIRP[], ROUND[]
Related instructions RDTG[], RUTG[ ], RTG[], RTHG[], RTDG[]
Sets the round state variable to round off. In this state engine compensation occurs but no rounding takes place. If engine compensation would change the sign of a distance, the distance is set to 0.
Return to Contents
ROLL[] ROLL the top three stack elements
Code Range 0x8A
Pops a: top stack element
b: second stack element from the top
c: third stack element from the top
Pushes b: second stack element
a: top stack element
c: third stack element
Related instructions MINDEX[ ]
Performs a circular shift of the top three stack elements.
Pops the top three stack elements, a, b, and c and performs a circular shift of these top three objects on the stack with the effect being to move the third element to the top of the stack and to
move the first two elements down one position. ROLL is equivalent to MINDEX[] with the value 3 at the top of the stack.
Return to Contents
ROUND[ab] ROUND value
Code Range 0x68 - 0x6B
Flags ab: distance type for engine characteristic compensation
Pops n1: device space distance (F26Dot6)
Pushes n2: device space distance (F26Dot6)
Related instructions NROUND[ ]
Uses round state, freedom vector
Rounds the value at the top of the stack while compensating for the engine characteristics.
Pops a 26.6 fixed point number, n1, and, depending on the engine characteristics established by Booleans ab, the result is increased or decreased by a set amount. The number obtained is then rounded
according to the current rounding state and pushed back onto the stack as n2.
In TrueType, rounding is symmetric about zero and includes compensation for printer dot size. See "Engine compensation using color" on page 2-65.
Return to Contents
RS[] Read Store
Code Range 0x43
Pops n: storage area location (uint32)
Pushes v: storage area value (uint32)
Related instructions WS[ ]
Reads the value in the specified storage area location and pushes that value onto the stack.
Pops a storage area location, n, from the stack and reads a 32-bit value, v, from that location. The value read is pushed onto the stack. The number of available storage locations is specified in the
'maxp' table in the font file.
Return to Contents
RTDG[] Round To Double Grid
Code Range 0x3D
Pops -
Pushes -
Sets round state
Affects MDAP[], MDRP[], MIAP[], MIRP[], ROUND[]
Related instructions RDTG[], ROFF[], RUTG[ ], RTG[], RTHG[]
Sets the round state variable to double grid. In this state, distances are compensated for engine characteristics and then rounded to an integer or half-integer, whichever is closest.
In TrueType, rounding is symmetric about zero and includes compensation for printer dot size. See "Engine compensation using color" on page 2-65.
Return to Contents
RTG[] Round To Grid
Code Range 0x18
Pops -
Pushes -
Sets round state
Affects MDAP[], MDRP[], MIAP[], MIRP[], ROUND[]
Related instructions RDTG[], ROFF[], RUTG[ ], RTDG[], RTHG[]
Sets the round state variable to grid. In this state, distances are compensated for engine characteristics and rounded to the nearest integer.
In TrueType, rounding is symmetric about zero and includes compensation for printer dot size. See "Engine compensation using color" on page 2-65.
Return to Contents
RTHG[] Round To Half Grid
Code Range 0x19
Pops -
Pushes -
Sets round state
Affects MDAP[], MDRP[], MIAP[], MIRP[], ROUND[]
Related instructions RDTG[], ROFF[], RUTG[ ], RTDG[], RTG[]
Sets the round state variable to half grid. In this state, distances are compensated for engine characteristics and rounded to the nearest half integer. If these operations change the sign of the
distance, the distance is set to +1/2 or -1/2 according to the original sign of the distance.
In TrueType, rounding is symmetric about zero and includes compensation for printer dot size. See "Engine compensation using color" on page 2-65.
Return to Contents
RUTG[] Round Up To Grid
Code Range 0x7C
Pops -
Pushes -
Sets round state
Affects MDAP[], MDRP[], MIAP[], MIRP[], ROUND[]
Related instructions RDTG[], ROFF[], RTDG[], RTG[], RTHG[]
Sets the round state variable to up to grid. In this state, after compensation for the engine characteristics, distances are rounded up to the closest integer. If the compensation and rounding would
change the sign of the distance, the distance will be set to 0.
In TrueType, rounding is symmetric about zero and includes compensation for printer dot size. See "Engine compensation using color" on page 2-65.
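The five fixed round states described above (RTG, RTHG, RTDG, RDTG, RUTG) can be modeled on 26.6 distances as follows. Engine compensation is omitted, and rounding is kept symmetric about zero by operating on the magnitude (an illustrative model, not interpreter source):

```python
def tt_round(d, mode):
    # d is a 26.6 fixed-point distance (64 units per pixel).
    sign, m = (-1 if d < 0 else 1), abs(d)
    if mode == "RTG":        # nearest integer
        m = ((m + 32) // 64) * 64
    elif mode == "RTHG":     # nearest half integer (n + 1/2)
        m = (m // 64) * 64 + 32
    elif mode == "RTDG":     # nearest integer or half integer
        m = ((m + 16) // 32) * 32
    elif mode == "RDTG":     # truncate down to grid
        m = (m // 64) * 64
    elif mode == "RUTG":     # round up to grid
        m = ((m + 63) // 64) * 64
    return sign * m
```

For a distance of 70 (about 1.09 pixels) the modes yield 64, 96, 64, 64, and 128 respectively, illustrating how the choice of round state can move a point by up to a pixel.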
Return to Contents
S45ROUND[] Super ROUND 45 degrees
Code Range 0x77
Pops n: uint32 decomposed to obtain period, phase, threshold (uint32)
Pushes -
Sets round state
Affects MDAP[], MDRP[], MIAP[], MIRP[], ROUND[]
Related instructions SROUND[ ]
S45ROUND[ ] is analogous to SROUND[ ]. The difference is that it uses a gridPeriod of sqrt(2)/2 pixels rather than 1 pixel. S45ROUND[ ] is useful for finely controlling the rounding of distances that
will be measured at a 45-degree angle to the x-axis.
In TrueType, rounding is symmetric about zero and includes compensation for printer dot size. See "Engine compensation using color" on page 2-65.
Pops a number, n, from the stack and decomposes that number to obtain a period, a phase and a threshold used to set the value of the graphics state variable round state. Only the lower 8 bits of the
argument n are used to obtain these values. The byte is encoded as shown in Table 2 below.
Table 2 SROUND[] byte encoding
│period │phase │threshold │
│7 6 │5 4 │3 2 1 0 │
The next three tables give the meaning associated with the possible values for the period, phase and threshold components of n in an S45ROUND[] instruction.
Table 3 : Setting the period
│bit values │setting │
│00 │sqrt(2)/2 pixels │
│01 │sqrt(2) pixels │
│10 │2*sqrt(2) pixels │
│11 │Reserved │
Table 4: Setting the phase
│bits │phase │
│00 │0 │
│01 │period/4 │
│10 │period/2 │
│11 │period*3/4 │
Table 5 : Setting the threshold
│bits│threshold │
│0000│period -1 │
│0001│-3/8 * period │
│0010│-2/8 * period │
│0011│-1/8 * period │
│0100│0/8 * period = 0 │
│0101│1/8 * period │
│0110│2/8 * period │
│0111│3/8 * period │
│1000│4/8 * period │
│1001│5/8 * period │
│1010│6/8 * period │
│1011│7/8 * period │
│1100│8/8 * period = period │
│1101│9/8 * period │
│1110│10/8 * period │
│1111│11/8 * period │
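Decomposing the argument byte according to Tables 2 through 5 can be sketched as follows. The grid_period parameter is 1 pixel for SROUND[] and sqrt(2)/2 for S45ROUND[]; the special 0000 threshold ("period - 1") is modeled here as one 26.6 unit (1/64 pixel) less than the period, which is an assumption about its exact meaning:

```python
def decode_sround(n, grid_period=1.0):
    # Only the low 8 bits of n are used: bits 7-6 select the period,
    # bits 5-4 the phase, bits 3-0 the threshold.
    period = grid_period * {0: 0.5, 1: 1.0, 2: 2.0}[(n >> 6) & 0x3]
    phase = period * ((n >> 4) & 0x3) / 4
    t = n & 0xF
    # t = 0 is the special "period - 1" case; otherwise the
    # threshold is (t - 4)/8 of the period, from -3/8 to 11/8.
    threshold = period - 1 / 64 if t == 0 else (t - 4) / 8 * period
    return period, phase, threshold
```

For example, 0x48 decodes to a period of one grid unit, zero phase, and a threshold of half the period, which reproduces ordinary round-to-grid behavior.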
Return to Contents
SANGW[] Set ANGle Weight
Code Range 0x7E
Pops weight: value for angle weight (uint32)
Pushes -
Sets angle weight
Related instructions AA[ ]
Pops a 32 bit integer, weight, from the stack and sets the value of the angle weight state variable accordingly. This instruction is anachronistic. Except for popping a single stack element, it has
no effect.
Return to Contents
SCANCTRL[] SCAN conversion ConTRoL
Code Range 0x85
Pops n: flags indicating when to turn on dropout control mode
Pushes -
Sets scan control
Related instructions SCANTYPE[ ]
Pops a number, n, which is decomposed to a set of flags specifying the dropout control mode. SCANCTRL is used to set the value of the graphics state variable scan control which in turn determines
whether the scan converter will activate dropout control for this glyph. Use of the dropout control mode is determined by three conditions:
• Is the glyph rotated?
• Is the glyph stretched?
• Is the current setting for ppem less than a specified threshold?
The interpreter pops a word from the stack and looks at the lower 13 bits.
Bits 0-7 represent the threshold value for ppem. In conjunction with bit 8, a value of 0xFF in bits 0-7 means invoke dropout control for all sizes. Similarly, a value of 15 in bits 0-7 means invoke
dropout control below 16 pixels per em. Note that 0xFE or 254 is the largest number of pixels per em for which dropout control can be selectively invoked.
Bits 8-13 are used to specify when to invoke dropout control. Bits 8, 9 and 10 are used to turn on the dropout control mode (assuming other conditions do not block it). Bits 11, 12, and 13 are used to
turn off the dropout control mode unless other conditions force it.
│Bit│Meaning If set │
│8 │Set dropout control to TRUE if other conditions do not block and ppem is less than or equal to the threshold value │
│9 │Set dropout control to TRUE if other conditions do not block and the glyph is rotated │
│10 │Set dropout control to TRUE if other conditions do not block and the glyph is stretched. │
│11 │Set dropout control to FALSE unless ppem is less than or equal to the threshold value. │
│12 │Set dropout control to FALSE unless the glyph is rotated. │
│13 │Set dropout control to FALSE unless the glyph is stretched │
For example, the values given below have the effect stated.
0x0 No dropout control is invoked
0x1FF Always do dropout control
0xA10 Do dropout control if the glyph is rotated and has less than 16 pixels per em
The scan converter can operate in either a "normal" mode or in a "fix dropout" mode depending on the value of a set of enabling and disabling flags.
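A simplified sketch of the decision implied by these flags follows (illustrative only; an actual rasterizer may evaluate the conditions in a different order):

```python
def scanctrl_dropout(n, ppem, rotated, stretched):
    # Bits 0-7 hold the ppem threshold; 0xFF means "all sizes".
    threshold = n & 0xFF
    at_size = threshold == 0xFF or ppem <= threshold
    dropout = False
    if (n & 0x100) and at_size:        # bit 8: enable at small sizes
        dropout = True
    if (n & 0x200) and rotated:        # bit 9: enable when rotated
        dropout = True
    if (n & 0x400) and stretched:      # bit 10: enable when stretched
        dropout = True
    if (n & 0x800) and not at_size:    # bit 11: disable above threshold
        dropout = False
    if (n & 0x1000) and not rotated:   # bit 12: disable unless rotated
        dropout = False
    if (n & 0x2000) and not stretched: # bit 13: disable unless stretched
        dropout = False
    return dropout
```

With the example value 0xA10, a rotated glyph at 12 ppem gets dropout control, but the same glyph at 40 ppem does not.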
Return to Contents
SCANTYPE[] SCANTYPE
Code Range 0x8D
Pops n: stack element
Pushes -
Sets scan_control
Related instructions SCANCTRL[ ]
Used to choose between dropout control with stubs and without stubs.
Pops a stack element consisting of a 16-bit integer extended to 32 bits. The value of this integer is used to determine which rules the scan converter will use. If the value of the argument is 2, the
non-dropout control scan converter will be used. If the value of the integer is 0 or 1, the dropout control mode will be set. More specifically,
if n=0 rules 1 and 2 are invoked (dropout control scan conversion including stubs)
if n=1 rules 1 and 3 are invoked (dropout control scan conversion excluding stubs)
if n=2 rule 1 is invoked (fast scan conversion)
The scan conversion rules are shown here:
Rule 1
If a pixel's center falls within or on the glyph outline, that pixel is turned on and becomes part of that glyph.
Rule 2
If a scan line between two adjacent pixel centers (either vertical or horizontal) is intersected by both an on-Transition contour and an off-Transition contour and neither of the pixels was already
turned on by rule 1, turn on the left-most pixel (horizontal scan line) or the bottom-most pixel (vertical scan line)
Rule 3
Apply Rule 2 only if the two contours continue to intersect other scan lines in both directions. That is, do not turn on pixels for 'stubs'. The scanline segments that form a square with the
intersected scan line segment are examined to verify that they are intersected by two contours. It is possible that these could be different contours than the ones intersecting the dropout scan line
segment. This is very unlikely but may have to be controlled with grid-fitting in some exotic glyphs.
Return to Contents
SCFS[] Sets Coordinate From the Stack using projection vector and freedom vector
Code Range 0x48
Pops c: coordinate value (F26Dot6)
p: point number (uint32)
Pushes -
Uses zp2, freedom vector, projection vector
Related instructions GC[ ]
Moves a point to the position specified by the coordinate value given on the stack.
Pops a coordinate value, c, and a point number, p, and moves point p from its current position along the freedom vector so that its component along the projection vector becomes the value popped off
the stack.
This instruction can be used to "create" points in the twilight zone.
In the illustration below, point p is moved along the freedom vector until its coordinate on the projection vector has the value c.
Return to Contents
SCVTCI[] Set Control Value Table Cut-In
Code Range 0x1D
Pops n: value for cut-in (F26Dot6)
Pushes -
Sets control value cut-in
Affects MIAP[], MIRP[]
Establishes a new value for the control value table cut-in.
Pops a value, n, from the stack and sets the control value cut-in to n. Increasing the value of the cut-in will increase the range of sizes for which CVT values will be used instead of the original
outline value.
Return to Contents
SDB[] Set Delta Base in the graphics state
Code Range 0x5E
Pops n: value for the delta base (uint32)
Pushes -
Sets delta base
Affects DELTAP1[], DELTAP2[], DELTAP3[], DELTAC1[], DELTAC2[], DELTAC3[]
Related instructions SDS[ ]
Establishes a new value for the delta base state variable thereby changing the range of values over which a DELTA[] instruction will have an effect.
Pops a number, n, and sets delta base to the value n. The default for delta base is 9.
Return to Contents
SDPVTL[a] Set Dual Projection Vector To Line
Code Range 0x86 - 0x87
a 0: Vector is parallel to line
1: Vector is perpendicular to line
Pops p2: point number (uint32)
p1: point number (uint32)
Pushes -
Sets dual projection vector, projection vector, zp2 with p2, zp1 with p1
Related instructions SPVTL[ ]
Sets a second projection vector based upon the original position of two points. The new vector will point in a direction that is parallel to the line defined from p2 to p1. The projection vector is
also set in a direction that is parallel to the line from p2 to p1, but it is set using the current position of those points.
Pops two point numbers from the stack and uses them to specify a line that defines a second, dual projection vector. This dual projection vector uses coordinates from the original outline before any
instructions are executed. It is used only with the IP[], GC[], MD[], MDRP[] and MIRP[] instructions. The dual projection vector is used in place of the projection vector in these instructions. This
continues until some instruction sets the projection vector again.
Return to Contents
SDS[] Set Delta Shift in the graphics state
Code Range 0x5F
Pops n: value for the delta shift (uint32)
Pushes -
Sets delta shift
Affects DELTAP1[], DELTAP2[], DELTAP3[], DELTAC1[], DELTAC2[], DELTAC3[]
Related instructions SDB[ ]
Establishes a new value for the delta shift state variable thereby changing the step size of the DELTA[] instructions.
Pops a value n from the stack and sets delta shift to n. The default for delta shift is 3.
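Together, delta base and delta shift determine how a DELTAP/DELTAC argument byte is interpreted. The sketch below assumes the standard argument encoding described under the DELTAP instructions (high nibble: ppem offset from delta base; low nibble: a magnitude from -8 to 8 with zero skipped), which is not shown in this excerpt:

```python
def delta_params(arg, delta_base=9, delta_shift=3):
    # The high nibble selects the ppem size relative to delta_base.
    ppem = delta_base + ((arg >> 4) & 0xF)
    # The low nibble 0..15 maps to magnitudes -8..-1, 1..8.
    mag = (arg & 0xF) - 8
    if mag >= 0:
        mag += 1
    # Each magnitude step is 1/2**delta_shift of a pixel.
    shift_pixels = mag / (1 << delta_shift)
    return ppem, shift_pixels
```

With the defaults (delta base 9, delta shift 3), the argument 0x00 means "at 9 ppem, move the point by -1 pixel".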
Return to Contents
SFVFS[] Set Freedom Vector From Stack
Code 0x0B
Pops y: y component of freedom vector (F2Dot14)
x: x component of freedom vector (F2Dot14)
Pushes -
Sets freedom vector
Related instructions SFVTL[ ], SFVTPV[ ], SFVTCA[ ]
Changes the direction of the freedom vector using values taken from the stack, thereby changing the direction in which points can move.
Sets the direction of the freedom vector using the values x and y taken from the stack. The vector is set so that its projections onto the x- and y-axes are x and y, which are specified as signed
(two's complement) fixed-point (2.14) numbers. The value (x^2 + y^2) must be equal to 1 (0x4000 in 2.14 notation).
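Suitable x and y arguments can be computed from an angle; 1.0 corresponds to 0x4000 in 2.14 notation (an illustrative helper, not part of the instruction set):

```python
import math

def vector_2dot14(angle_degrees):
    # Build a unit vector and scale each component by 2**14 so the
    # result is a pair of 2.14 fixed-point values for SFVFS[] (or
    # SPVFS[]).  Rounding keeps x^2 + y^2 as close to 0x4000 as the
    # representation allows.
    x = round(math.cos(math.radians(angle_degrees)) * 0x4000)
    y = round(math.sin(math.radians(angle_degrees)) * 0x4000)
    return x, y
```

A 0-degree angle yields the x-axis vector (0x4000, 0); 90 degrees yields the y-axis vector (0, 0x4000).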
Return to Contents
SFVTCA[a] Set Freedom Vector to Coordinate Axis
Code range 0x04 - 0x05
a 0: set the freedom vector to the y-axis
1: set the freedom vector to the x-axis
Pops -
Pushes -
Sets freedom vector
Related instructions SFVFS[ ], SFVTL[ ], SFVTPV[ ]
Sets the freedom vector to one of the coordinate axes depending upon the value of the flag a.
Return to Contents
SFVTL[a] Set Freedom Vector To Line
Code Range 0x08 - 0x09
a 0: set freedom vector to be parallel to the line segment defined by points p1 and p2
1: set freedom vector perpendicular to the line segment defined by points p1 and p2; the vector is rotated counter clockwise 90 degrees
Pops p2: point number (uint32)
p1: point number (uint32)
Pushes -
Sets freedom vector
Uses zp1, which points to the zone containing point p1, and zp2, which points to the zone containing point p2
Related instructions SFVTPV[ ], SFVFS[ ], SFVTCA[ ]
Changes the value of the freedom vector using the direction specified by the line whose end points are taken from the stack. The effect is to change the direction in which points can move to be
parallel to that line. The order in which the points are chosen is significant. Reversing the order will reverse the direction of the freedom vector.
Pops two point numbers p2 and p1 from the stack and sets the freedom vector to a unit vector parallel or perpendicular to the line segment defined by points p1 and p2 and pointing from p2 to p1.
If the Boolean a has the value 0, the freedom vector is parallel to the line from p2 to p1.
If the Boolean a has the value one, the freedom vector is perpendicular to the line from p2 to p1. More precisely, the freedom vector is obtained by rotating the vector that is parallel to the line
90 degrees counterclockwise.
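The parallel and perpendicular cases can be sketched as follows; the 90-degree counterclockwise rotation maps (x, y) to (-y, x). Normalization to a unit 2.14 vector is omitted from this illustrative model:

```python
def freedom_vector_from_line(p1, p2, perpendicular=False):
    # The vector points from p2 to p1; points are (x, y) tuples.
    x, y = p1[0] - p2[0], p1[1] - p2[1]
    if perpendicular:
        # Rotate 90 degrees counterclockwise.
        x, y = -y, x
    return x, y
```

Reversing the order of p1 and p2 negates both components, reversing the direction of the resulting vector.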
Return to Contents
SFVTPV[] Set Freedom Vector To Projection Vector
Code 0x0E
Pops -
Pushes -
Sets freedom vector
Related instructions SFVFS[ ], SFVTL[ ], SFVTCA[ ]
Sets the freedom vector to be the same as the projection vector. This means that movement and measurement will be in the same direction.
Return to Contents
SHC[a] SHift Contour using reference point
Code Range 0x34 - 0x35
a 0: uses rp2 in the zone pointed to by zp1
1: uses rp1 in the zone pointed to by zp0
Pops c: contour to be shifted (uint32)
Pushes -
Uses zp0 with rp1 or zp1 with rp2 depending on flag zp2 with contour c freedom vector, projection vector
Related instructions SHP[ ], SHZ[ ]
Shifts a contour by the amount that the reference point was shifted.
Pops a number, c, and shifts every point on contour c by the same amount that the reference point has been shifted. Each point is shifted along the freedom vector so that the distance between the new
position of the point and the old position of that point is the same as the distance between the current position of the reference point and the original position of the reference point. The distance
is measured along the projection vector. If the reference point is one of the points defining the contour, the reference point is not moved by this instruction.
This instruction is similar to SHP[], but every point on the contour is shifted.
In the illustration below, the triangular contour formed by points p1, p2, and p3 is shifted by the amount, d, that reference point rp was moved from its original position. The new contour p1, p2, p3
retains the original shape but has been translated in space, along the freedom vector, by the amount, d.
Return to Contents
SHP[a] SHift Point using reference point
Code Range 0x32 - 0x33
a 0: uses rp2 in the zone pointed to by zp1
1: uses rp1 in the zone pointed to by zp0
Pops p1, p2,...ploopvalue: points to be shifted (uint32)
Pushes -
Uses zp0 with rp1 or zp1 with rp2 depending on flag zp2 with point p loop, freedom vector, projection vector
Shifts points specified by the amount the reference point has already been shifted.
Pops point numbers, p1, p2,...ploopvalue, and shifts those points by the same amount that the reference point has been shifted. Each point pi is moved along the freedom vector so that the distance
between the new position of point pi and the current position of point pi is the same as the distance between the current position of the reference point and the original position of the reference
point.
In the illustration below, the distance between the current position of the reference point and its original position is d. Line LL' is drawn perpendicular to the projection vector at a distance d
from point A'. Point p is moved along the freedom vector to the point where the vector intersects with line LL'. The distance from point A' to B', d, is now the same as the distance from A to B.
Return to Contents
SHPIX[] SHift point by a PIXel amount
Code Range 0x38
Pops d: magnitude of the shift (F26Dot6)
p1, p2,...ploopvalue: points to be shifted (uint32)
Pushes -
Uses zp2, loop, freedom vector
Related instructions SHP[ ]
Shift the specified points by the specified amount.
Pops point numbers p1, p2,...ploopvalue and an amount, d. Shifts each point pi by amount d.
SHPIX[ ] is unique in relying solely on the direction of the freedom vector. It makes no use of the projection vector. Measurement is made in the direction of the freedom vector.
In the example below, point p is moved d pixels along the freedom vector.
Return to Contents
SHZ[a] SHift Zone using reference point
Code Range 0x36 - 0x37
a 0: the reference point rp2 is in the zone pointed to by zp1
1: the reference point rp1 is in the zone pointed to by zp0
Pops e: zone to be shifted (uint32)
Pushes -
Uses zp0 with rp1 or zp1 with rp2 depending on flag freedom vector, projection vector
Related instructions SHP[ ], SHC[ ]
Shifts all of the points in the specified zone by the amount that the reference point has been shifted.
Pops a zone number, e, and shifts the points in the specified zone (Z1 or Z0) by the same amount that the reference point has been shifted. The points in the zone are shifted so that the distance
between the new position of the shifted points and their old position is the same as the distance between the current position of the reference point and the original position of the reference point.
SHZ[a] uses zp0 with rp1 or zp1 with rp2. This instruction is similar to SHC[ ], but all points in the zone are shifted, not just the points on a single contour.
Return to Contents
SLOOP[] Set LOOP variable
Code Range 0x17
Pops n: value for loop graphics state variable (integer)
Pushes -
Sets loop
Affects ALIGNRP[], FLIPPT[], IP[], SHP[], SHPIX[]
Related instructions LOOPCALL[ ]
Changes the value of the loop variable thereby changing the number of times the affected instructions will execute if called.
Pops a value, n, from the stack and sets the loop variable count to that value. The loop variable works with the SHP[a], SHPIX[a], IP[ ], and ALIGNRP[]. The value n indicates the number of times the
instruction is to be repeated. After the instruction executes the required number of times, the loop variable is reset to its default value of 1. Setting the loop variable to zero is an error.
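A minimal sketch of the loop mechanism, using an invented toy stack and graphics state (the class and function names are assumptions for illustration): a looped instruction such as SHPIX[ ] consumes loop point numbers from the stack and then resets loop to its default of 1.

```python
# Illustrative model of the loop graphics state variable. Not taken
# from any rasterizer; shift_point is a stand-in for the real move.

class GraphicsState:
    def __init__(self):
        self.loop = 1                  # default value

def sloop(stack, gs):
    n = stack.pop()
    if n <= 0:
        raise ValueError("setting the loop variable to zero is an error")
    gs.loop = n

def shpix(stack, gs, shift_point):
    d = stack.pop()                    # magnitude of the shift (topmost)
    for _ in range(gs.loop):           # pops p1 ... ploopvalue
        shift_point(stack.pop(), d)
    gs.loop = 1                        # loop resets after the instruction
```

After SLOOP[ ] pushes 3, a single SHPIX[ ] call shifts three points; a second SHPIX[ ] would shift only one, because the loop variable has been reset.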
Return to Contents
SMD[] Set Minimum Distance
Code Range 0x1A
Pops distance: value for minimum_distance (F26Dot6)
Pushes -
Sets minimum distance
Establishes a new value for the minimum distance, the smallest possible value to which distances will be rounded. An appropriate setting for this variable can prevent distances from rounding to zero
and therefore disappearing when grid-fitting takes place.
Pops a 26.6 value from the stack and sets the minimum distance variable to that value.
Return to Contents
SPVFS[] Set Projection Vector From Stack
Code Range 0x0A
Pops y: y component of projection vector (F2Dot14)
x: x component of projection vector (F2Dot14)
Pushes -
Sets projection vector
Related instructions SPVTL[ ], SPVTCA[ ]
Establishes a new value for the projection vector using values taken from the stack.
Pops two numbers, y and x, representing the y and x components of the projection vector. The values x and y are 2.14 numbers extended to 32 bits. Sets the direction of the projection vector, using
values x and y taken from the stack, so that its projections onto the x and y-axes are x and y, which are specified as signed (two's complement) fixed-point (2.14) numbers. The value (x² + y²) must
be equal to 1 (0x4000 in 2.14 notation).
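The 2.14 format and the unit-length requirement can be sketched as follows. This is an illustrative check, not interpreter code; the helper names and the rounding tolerance are assumptions.

```python
# 2.14 fixed point: 1.0 is represented as 0x4000. A valid projection
# vector must satisfy x^2 + y^2 == 1.0 within fixed-point rounding.

import math

F2DOT14_ONE = 0x4000

def f2dot14(value):
    """Encode a float as a signed 2.14 fixed-point integer."""
    return round(value * F2DOT14_ONE)

def is_unit_vector(x, y, tolerance=2):
    """Check x^2 + y^2 == 1.0 in 2.14 arithmetic (small rounding slack)."""
    length_sq = (x * x + y * y + (1 << 13)) >> 14   # back to 2.14 scale, rounded
    return abs(length_sq - F2DOT14_ONE) <= tolerance

x, y = f2dot14(math.cos(math.radians(30))), f2dot14(math.sin(math.radians(30)))
print(hex(x), hex(y), is_unit_vector(x, y))   # 0x376d 0x2000 True
```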
Return to Contents
SPVTCA[a] Set Projection Vector To Coordinate Axis
Code range 0x02 - 0x03
a 0: set the projection vector to the y-axis
1: set the projection vector to the x-axis
Pops -
Pushes -
Sets projection vector
Related instructions SPVTL[ ], SPVFS[ ]
Sets the projection vector to one of the coordinate axes depending on the value of the flag a.
Return to Contents
SPVTL[a] Set Projection Vector To Line
Code Range 0x06 - 0x07
a 0: sets projection vector to be parallel to line segment from p2 to p1
1: sets projection vector to be perpendicular to line segment from p2 to p1; the vector is rotated counter clockwise 90 degrees
Pops p2: point number (uint32)
p1: point number (uint32)
Pushes -
Uses point p1 in the zone pointed at by zp1 point p2 in the zone pointed at by zp2
Sets projection vector
Related instructions SPVFS[ ], SPVTCA[ ]
Changes the direction of the projection vector to that specified by the line defined by the endpoints taken from the stack. The order in which the points are specified is significant. Reversing the
order of the points will reverse the direction of the projection vector.
Pops two point numbers, p2 and p1 and sets the projection vector to a unit vector parallel or perpendicular to the line segment from point p2 to point p1 and pointing from p2 to p1.
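The two cases of the flag can be sketched with floating-point coordinates (the real interpreter works in 2.14 fixed point; this function and its argument shapes are assumptions for illustration):

```python
# Sketch of deriving a unit vector from the line segment p2 -> p1, as
# SPVTL does. With a = 0 the vector is parallel to the segment and
# points from p2 to p1; with a = 1 it is rotated 90 degrees counter
# clockwise.

import math

def spvtl(p1, p2, a):
    dx, dy = p1[0] - p2[0], p1[1] - p2[1]   # pointing from p2 to p1
    if a == 1:
        dx, dy = -dy, dx                    # rotate 90 degrees CCW
    length = math.hypot(dx, dy)
    return dx / length, dy / length

print(spvtl((4, 0), (0, 0), 0))   # (1.0, 0.0): parallel to the x-axis
print(spvtl((4, 0), (0, 0), 1))   # (0.0, 1.0): rotated to the y-axis
```

Swapping p1 and p2 negates both components, which is the direction reversal noted above.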
Return to Contents
SROUND[] Super ROUND
Code Range 0x76
Pops n: number decomposed to obtain period, phase, threshold (Eint8)
Pushes -
Sets round state
Affects MDAP[], MDRP[], MIAP[], MIRP[], ROUND[]
Related instructions S45ROUND[ ]
Provides for fine control over the effects of the round state variable by directly setting the values of the three components of the round state: period, phase, and threshold.
Pops a number, n, from the stack and decomposes that number to obtain a period, a phase and a threshold used to set the value of the graphics state variable round state. Only the lower 8 bits of the
argument n are used to obtain these values. The byte is encoded as shown in Table 8 below.
Table 8: SROUND byte encoding
│ period │ phase │ threshold │
│ 7 │ 6 │ 5 │ 4 │ 3 │ 2 │ 1 │ 0 │
The period specifies the length of the separation or space between rounded values. The phase specifies the offset of the rounded values from multiples of the period. The threshold specifies the part
of the domain, prior to a potential rounded value, that is mapped onto that value. Additional information on rounding can be found in "Rounding" on page 2-66.
For SROUND[] the grid period used to compute the period shown in Table 9 is equal to 1.0 pixels. Table 10 lists the possible values for the phase and Table 11 the possible values for the threshold.
Table 9: Setting the period
│bit value │setting │
│00 │1/2 pixel │
│01 │1 pixel │
│10 │2 pixel │
│11 │Reserved │
Table 10: Setting the phase
│bit value │setting │
│00 │0 │
│01 │period/4 │
│10 │period/2 │
│11 │period*3/4 │
Table 11: Setting the threshold
│bit value │setting │
│0000 │period -1 │
│0001 │-3/8 * period │
│0010 │-2/8 * period │
│0011 │-1/8 * period │
│0100 │0/8 * period = 0 │
│0101 │1/8 * period │
│0110 │2/8 * period │
│0111 │3/8 * period │
│1000 │4/8 * period │
│1001 │5/8 * period │
│1010 │6/8 * period │
│1011 │7/8 * period │
│1100 │8/8 * period = period │
│1101 │9/8 * period │
│1110 │10/8 * period │
│1111 │11/8 * period │
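The byte decomposition described by Tables 8 through 11 can be sketched directly. This is an illustrative decoder, not rasterizer code; the function name and the float return values are assumptions (the interpreter works in fixed point).

```python
# Decode an SROUND argument byte into period, phase, and threshold
# (in pixels), following Tables 8-11 with a grid period of 1.0 pixel.

def sround_decode(n, grid_period=1.0):
    period_bits = (n >> 6) & 0x3       # bits 7-6
    phase_bits = (n >> 4) & 0x3        # bits 5-4
    threshold_bits = n & 0xF           # bits 3-0

    # Table 9 (bit value 11 is reserved and raises KeyError here)
    period = {0: grid_period / 2, 1: grid_period, 2: grid_period * 2}[period_bits]
    # Table 10
    phase = {0: 0.0, 1: period / 4, 2: period / 2, 3: 3 * period / 4}[phase_bits]
    # Table 11: 0000 is the special "period - 1" entry; the rest are
    # (bits - 4)/8 of the period, running from -3/8 up to 11/8.
    if threshold_bits == 0:
        threshold = period - 1
    else:
        threshold = (threshold_bits - 4) / 8 * period
    return period, phase, threshold

# 0x68 = 01 10 1000: period 1 px, phase period/2, threshold 4/8 * period
print(sround_decode(0x68))   # (1.0, 0.5, 0.5)
```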
Return to Contents
SRP0[] Set Reference Point 0
Code Range 0x10
Pops p: point number (uint32)
Pushes -
Sets rp0
Affects ALIGNRP[], MDAP[], MDRP[], MIAP[], MIRP[] MSIRP[]
Related instructions SRP1[ ], SRP2[ ]
Sets a new value for reference point 0.
Pops a point number, p, from the stack and sets rp0 to p.
Return to Contents
SRP1[] Set Reference Point 1
Code Range 0x11
Pops p: point number (uint32)
Pushes -
Sets rp1
Affects IP[], MDAP[], MIAP[], MSIRP[], SHC[], SHP[], SHZ[]
Related instructions SRP0[], SRP2[ ]
Sets a new value for reference point 1.
Pops a point number, p, from the stack and sets rp1 to p.
Return to Contents
SRP2[] Set Reference Point 2
Code Range 0x12
Pops p: point number (uint32)
Pushes -
Sets rp2
Affects IP[], SHC[], SHP[], SHZ[]
Related instructions SRP1[ ], SRP0[]
Sets a new value for reference point 2.
Pops a point number, p, from the stack and sets rp2 to p.
Return to Contents
SSW[] Set Single Width
Code Range 0x1F
Pops n: value for single width value (FUnit)
Pushes -
Sets single width value
Related instructions SSWCI[ ]
Establishes a new value for the single width value state variable. The single width value is used instead of a control value table entry when the difference between the single width value and the
given CVT entry is less than the single width cut-in.
Pops a 32 bit integer value, n, from the stack and sets the single width value in the graphics state to n. The value n is expressed in FUnits.
Return to Contents
SSWCI[] Set Single Width Cut-In
Code Range 0x1E
Pops n: value for single width cut-in (F26Dot6)
Pushes -
Sets single width cut-in
Affects MIAP[], MIRP[]
Related instructions SSW[ ]
Establishes a new value for the single width cut-in, the distance difference at which the interpreter will ignore the values in the control value table in favor of the single width value.
Pops a 32 bit integer value, n, and sets the single width cut-in to n.
Return to Contents
SUB[] SUBtract
Code Range 0x61
Pops n2: subtrahend (F26Dot6)
n1: minuend (F26Dot6)
Pushes (n1 - n2): difference (F26Dot6)
Related instructions ADD[ ]
Subtracts the number at the top of the stack from the number below it.
Pops two 26.6 numbers, n1 and n2, from the stack and pushes the difference between the two elements onto the stack.
Return to Contents
SVTCA[a] Set freedom and projection Vectors To Coordinate Axis
Code range 0x00 - 0x01
a 0: set vectors to the y-axis
1: set vectors to the x-axis
Pops -
Pushes -
Sets projection vector
freedom vector
Related instructions SPVTCA[ ], SFVTCA[ ]
Sets both the projection vector and freedom vector to the same coordinate axis causing movement and measurement to be in the same direction. The setting of the Boolean variable a determines the
choice of axis.
SVTCA[ ] is a shortcut that replaces the SFVTCA[ ] and SPVTCA[ ] instructions. As a result, SVTCA[1] is equivalent to SFVTCA[1] followed by SPVTCA[1].
Return to Contents
SWAP[] SWAP the top two elements on the stack
Code Range 0x23
Pops e2: stack element (StkElt)
e1: stack element (StkElt)
Pushes e2: stack element (StkElt)
e1: stack element (StkElt)
Swaps the top two stack elements.
Pops two elements, e2 and e1, from the stack and reverses their order making the old top element the second from the top and the old second element the top element.
Return to Contents
SZP0[] Set Zone Pointer 0
Code Range 0x13
Pops n: zone number (uint32)
Pushes -
Sets zp0
Affects AA[], ALIGNPTS[], ALIGNRP[], DELTAC1[], DELTAC2[], DELTAC3[], DELTAP1[], DELTAP2[], DELTAP3[], FLIPPT[], FLIPRGOFF[], FLIPRGON[], IP[], ISECT[], MD[], MDAP[], MDRP[], MIAP[], MIRP[],
MSIRP[], SHC[], SHE[], SHP[], SHZ[], UTP[]
Related SZP1[ ], SZP2[ ], SZPS[ ]
Establishes a new value for zp0. It can point to either the glyph zone or the twilight zone.
Pops a zone number, n, from the stack and sets zp0 to the zone with that number. If n has the value zero, zp0 points to zone 0 (the twilight zone). If n has the value one, zp0 points to zone 1 (the
glyph zone). Any other value for n is an error.
Return to Contents
SZP1[] Set Zone Pointer 1
Code Range 0x14
Pops n: zone number (uint32)
Pushes -
Sets zp1
Affects ALIGNPTS[], ALIGNRP[], IP[], ISECT[], MD[], MDRP[], MIRP[], MSIRP[], SDPVTL[], SFVTL[], SHC[], SHP[], SHZ[], SPVTL[]
Related instructions SZP0[ ], SZP2[ ], SZPS[ ]
Establishes a new value for zp1. It can point to either the glyph zone or the twilight zone.
Pops a zone number, n, from the stack and sets zp1 to the zone with that number. If n has the value zero, zp1 points to zone 0 (the twilight zone). If n has the value one, zp1 points to zone 1 (the
glyph zone). Any other value for n is an error.
Return to Contents
SZP2[] Set Zone Pointer 2
Code Range 0x15
Pops n: zone number (uint32)
Pushes -
Sets zp2
Affects IP[], ISECT[], IUP[], GC[], SDPVTL[], SHC[], SHP[], SFVTL[], SHPIX[], SPVTL[], SC[]
Related instructions SZP0[ ], SZP1[ ], SZPS[ ]
Establishes a new value for zp2. It can point to either the glyph zone or the twilight zone.
Pops a zone number, n, from the stack and sets zp2 to the zone with that number. If n has the value zero, zp2 points to zone 0 (the twilight zone). If n has the value one, zp2 points to zone 1 (the
glyph zone). Any other value for n is an error.
Return to Contents
SZPS[] Set Zone PointerS
Code Range 0x16
Pops n: zone number (uint32)
Pushes -
Sets zp0, zp1, zp2
Affects ALIGNPTS[], ALIGNRP[], DELTAC1[], DELTAC2[], DELTAC3[], DELTAP1[], DELTAP2[], DELTAP3[], FLIPPT[], FLIPRGOFF[], FLIPRGON[], GC[], IP[], ISECT[], IUP[], MD[], MDAP[], MDRP[], MIAP[],
MIRP[], MSIRP[], SC[], SDPVTL[], SFVTL[], SHPIX[], SPVTL[], SHC[], SHP[], SHZ[], SPVTL[], UTP[]
Related SZP0[ ], SZP1[ ], SZP2[ ]
Sets all three zone pointers to refer to either the glyph zone or the twilight zone.
Pops an integer n from the stack and sets all of the zone pointers to point to the zone with that number. If n is 0, all three zone pointers will point to zone 0 (the twilight zone). If n is 1, all
three zone pointers will point to zone 1 (the glyph zone). Any other value for n is an error.
Return to Contents
UTP[] UnTouch Point
Code Range 0x29
Pops p: point number (uint32)
Pushes -
Uses zp0 with point p, freedom vector
Affects IUP[ ]
Marks a point as untouched thereby causing the IUP[ ] instruction to affect its location.
Pops a point number, p, and marks point p as untouched. A point may be touched in the x-direction, the y-direction, or in both the x and y-directions. The position of the freedom vector determines
whether the point is untouched in the x-direction, the y-direction, or both. If the vector is set to the x-axis, the point will be untouched in the x-direction. If the vector is set to the y-axis,
the point will be untouched in the y-direction. Otherwise the point will be untouched in both directions.
A point that is marked as untouched will be moved by an IUP[ ] instruction even if the point was previously touched.
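The per-axis behavior can be sketched with a set of touched flags per point. This representation, and the function name, are assumptions made for illustration; the actual interpreter stores touched state differently.

```python
# Sketch: UTP clears the x flag, the y flag, or both, depending on
# the direction of the freedom vector.

def utp(point_flags, p, freedom):
    fx, fy = freedom
    if fy == 0:                      # freedom vector along the x-axis
        point_flags[p].discard("x")
    elif fx == 0:                    # freedom vector along the y-axis
        point_flags[p].discard("y")
    else:                            # any other direction: untouch both
        point_flags[p] -= {"x", "y"}

flags = {3: {"x", "y"}}
utp(flags, 3, (1, 0))                # x-axis freedom vector
print(flags[3])                      # {'y'}
```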
Return to Contents
WCVTF[] Write Control Value Table in Funits
Code Range 0x70
Pops n: number in FUnits (uint32)
l: control value table location (uint32)
Pushes -
Sets control value table entry
Related instructions WCVTP[ ]
Writes a scaled F26Dot6 value to the specified control value table location.
Pops an integer value, n, and a control value table location l from the stack. The FUnit value is scaled to the current point size and resolution and put in the control value table. This instruction
assumes the value is expressed in FUnits and not pixels.
Since the CVT has been scaled to pixel values, the value taken from the stack is scaled to the appropriate pixel value before being written to the table.
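The scaling step can be sketched with the usual TrueType scale formula, pixels = FUnits × pointSize × resolution / (72 × unitsPerEm), with the result stored as an F26Dot6 value. The function and parameter names are assumptions for illustration:

```python
# Sketch of the FUnit-to-pixel scaling WCVTF[] applies before writing
# to the CVT. 1.0 pixel in F26Dot6 is 64.

def funits_to_f26dot6(funits, point_size, dpi, units_per_em):
    pixels = funits * point_size * dpi / (72 * units_per_em)
    return round(pixels * 64)

# A 2048-FUnit em at 12 pt and 72 dpi: one full em is 12 pixels.
print(funits_to_f26dot6(2048, 12, 72, 2048))   # 768  (12.0 px * 64)
```

WCVTP[ ], by contrast, skips this step entirely and writes its pixel-valued operand to the table unchanged.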
Return to Contents
WCVTP[] Write Control Value Table in Pixel units
Code Range 0x44
Pops v: value in pixels (F26Dot6)
l: control value table location (uint32)
Pushes -
Sets control value table entry
Related instructions WCVTF[ ]
Writes the value in pixels into the control value table location specified.
Pops a value v and a control value table location l from the stack and puts that value in the specified location in the control value table. This instruction assumes the value taken from the stack is
in pixels and not in FUnits. The value is written to the CVT table unchanged. The location l must be less than the number of entries in the control value table.
Return to Contents
WS[] Write Store
Code Range 0x42
Pops v: storage area value (uint32)
l: storage area location (uint32)
Pushes -
Sets storage area value
Related instructions RS[ ]
Write the value taken from the stack to the specified storage area location.
Pops a value, v, followed by a storage area location, l. Writes this 32-bit value into the storage area location indexed by l. The location l must be less than the number of storage locations
specified in the 'maxp' table of the font file.
Return to Contents
Return to Index
Math Mojo
Kathleen S.
Do you have a math common core test prep PowerPoint Bundle?
Are the vocabulary words in the Math Flipbook Common Core?
Kam Reagan
ESE Teacher
Floral City Elementary
They are not specifically Common Core, but I do have vocabulary packs with a word wall, flip books and flash cards that are aligned to the Common Core. The links are below. You will need to copy and
paste the links into your browser.
5th Grade Common Core Math Vocabulary
4th Grade Common Core Math Vocabulary
3rd Grade Common Core Math Vocabulary
6th Grade Common Core Math Vocabulary
Alexis (Math Mojo)
Dani W.
For your fractions and decimals number line activity, are all the fractions/mixed numbers 8ths?
Thank you!
Yes, this set is in eighths. I do have a set of fraction number line cards that have a variety of fractions. The link is below (you will need to cut and paste it into your browser).
(TpT Seller) re: Common Core 4th Grade Math Task Cards Mega Bundle - All Domains and Standards
I agree with the previous person, these are amazing and i WISH you had them for 5th grade because my teaching partner and I would be allllllllll over them!
This is on my "to do" list. I just did a 5th Grade Skills Scoot Bundle (the link is below).
I am going to work on the task cards and I will put them out 1 set at a time and bundle them when I have all the standards done. The task cards are time consuming to make because I work hard to
research each standard to ensure I fully cover each standard.
Alexis (Math Mojo)
Kristen S. re: Common Core Math Task Cards Multiplication (Double Digit) CCSS 4.NBT.B.5
Can you make these 4 problems to a page?
Email me at math_mojo@aol.com and I,will make them for you.
Alexis (Math Mojo)
Buyer re: Common Core Math Task Cards Decomposing Figures CCSS 3.G.2
Cards 14, 15, and 16 don't match the answer sheet. For example, card 15 shows an oval and asks them to partition the shape into 4 equal parts but on their answer sheet there's a pentagon drawn. Is
there anyway I can get an attachment with the corrections? I had my kids cross those 3 problems off and not answer them.
I will correct them, and you can download the new version by going to My Purchases. If you prefer, you can email me at math_mojo@aol.com and I will send you a corrected copy (I don't have access to
your email address).
Alexis (Math Mojo)
Antoinette K. re: Common Core 4th Grade Math Task Cards Mega Bundle - All Domains and Standards
This is an awesome product! I have a fifth grade colleague who would like to know if you have same for their grade.
I am going to start working on 5th grade task cards. I will sell them individually, then bundle them when I am done with all the standards. Making task cards for each standard will take me 2 - 4
months to complete. My goal is to be done with all the 5th grade standards by mid to late summer. I am making a 5th Grade Skills Scoot Bundle that will be available in the next week (many of the
cards are similar to task cards).
Alexis (Math Mojo)
Brenda S.
I Love all your products. I really like the Common Core exit slips. However, are planning on making any for 6th Grade Math. I am currently teaching that grade level, and I would love a set. I also
have shared your shop name with others in my building teaching 3rd, 4th, & 5th grade. Thanks Brenda
I have a few projects that I am currently working on, but it is on my "to do list".
Alexis (Math Mojo)
Laura H.
I have $77 of your products in my cart - love your work! I have ordered from you in the past. :-) Before I check out, I wanted to see if you were getting ready to have a sale soon? It is always my
luck to miss a sale by a day or two. Thanks! Laura
My next sale will be the Teacher Appreciation Sale in May.
Thank you!
Alexis (Math Mojo)
Buyer re: 4th Grade Common Core Math Test Prep Game Show (NBT) PowerPoint
Powerpoint found a problem with your content and gives the option to "click" to repair, but it cannot repair due to untrusted resources and pulls up blank slides with red "x"'s on the 28 slides. My
email is mrscortez2013@gmail.com can you please just send me an updated copy or refund the money? Thank you.
Email me at math_mojo@aol.com and I will send you the file. Please include your username and date of purchase.
Thank you,
Alexis (Math Mojo)
Suzanne P. re: Fraction and Decimal Number Line Cooperative Learning Cards
When I went to print the pages are off and some of the writing is on the first number line. Is this something you need to correct or is it something I did?
Email me at math_mojo@aol.com and I will correct it for you.
Alexis (Math Mojo)
Georgianna B.
Is the individual test prep packs different material than what is in the test prep bundle that has all standards for 4th grade?
The individual packs are the same as the bundle, but you save money buying the bundle.
I purchased the Math Exit Slips-4th Grade- Mega Bundle, and it will not download. It says the file is damaged.
Email me at math_mojo@aol.com with your username and date of purchase and I will send you the file. I downloaded it and it was working. You might need the latest version of Adobe and be sure to let
it fully download before opening it. If you think file size is the issue, let me know and I will send you the individual domain files.
Tanya Klanert
(TpT Seller)
what is the status of the 5th grade test prep pack? Was wondering if I can wait on it or need to find something different. Do you have a time-frame for it?
I have completed packs for three of the five math domains. There are links to the packs below. The two remaining packs are being edited and should be up in the next few days. The bundled pack with
all the domains will be $10, but if you want to take advantage of the 20% off sale I am having and buy the packs I have uploaded, I will send you the remaining 2 for free as soon as my editor is
done, just send me your user name and date of purchase. My email is math_mojo@aol.com
The links to the packs are below (you will need to cut and paste them into your browser).
I hope to have the remaining packs complete early this week!
Alexis (Math Mojo)
Buyer re: Common Core 4th Grade Math Task Cards Mega Bundle - All Domains and Standards
I really love the task cards! The problems are comprehensive and challenging. As I was putting them together, I noticed that on page 44, the answer to card #18 of the Multi-Step Problem Solving set
is incorrect.
I will look into that and correct it. Email me at math_mojo@aol.com if you need the correction tonight. I will try to upload it as soon as possible this evening!
Alexis (Math Mojo)
Math is Rad
(TpT Seller)
I really like your 3rd and 4th Grade Common Core Math Assessments - All Standards Bundle - It looks like you are working on 5th grade soon? I really hope so, because the other 2 grades are GREAT!
I am working on a 5th Grade Common Core Math Test Prep pack right now (several domains will be posted today or tomorrow). I am planning on adding a 5th grade math assessment pack in the next few
weeks if possible.
Thank you!
Alexis (Math Mojo)
Buyer re: Common Core Math Task Cards Elapsed Time CCSS 3.MD.1
I recently purchased this product and love it except on cards 16 and 20 the hour hand is incorrect (pointing right to the hour when it should be past the number and moving towards the next hour) and
on card 11 the hour and minute hand are the same length. Could you possibly correct these three cards? Thanks!
I sent you an email with the corrected cards.
Thank you,
Alexis (Math Mojo)
Amanda Gritton
(TpT Seller)
I have a question about my purchase. I looked at the preview and read the description. I was really excited about the exit slips and all that came with this pack. I normally wouldn't spend $10 unless
I knew it would be worth my while. When I purchased and downloaded the pack there were only 58 pages when the description says 175. There are also not ANY exit slips that were shown in the preview. I
can email you the document that I downloaded for you to see. I am not sure what happened here. I hope we can take care of this.
I uploaded a new file and emailed you.
Thank you!
Alexis (Math Mojo)
Buyer re: 5th Grade Common Core Math Vocabulary
Sorry, I just saw your response for the wall display on separate pages. Thank you sooooo much! I really love your work!
I emailed you.
Thank you,
Alexis (Math Mojo)
Jennifer B.
I purchased the 3rd grade math common core task cards ultimate bundle, which I love by the way. However, when I started printing and making them today I noticed one page is in here twice and some
questions are left out. It is page 261 and 262 in the "weight and volume" section. Questions 9-12 are missing and questions 13-16 are in there twice. I was wondering if you could send me questions
9-12. You have the answers to them on the answer key, so I'm assuming you created them and they just accidentally were left out. Let me know! THANKS!
I don't have access to your email, but if you email me at math_mojo@aol.com I will send it to you tonight. Please write task cards in the subject line.
UPDATE - I emailed you and corrected the task card pack. You can download it by doing to My Purchases.
Thank you!
Alexis (Math Mojo)
Connie Havens
(TpT Seller)
Great!! Thank you so much!!!
You are welcome!
Connie Havens
(TpT Seller)
I love your 4th grade common core test prep for 4th grade. Do you have one for 5th grade?
I am in the process of making one. I will post each domain as soon as they are edited, then when I have completed them all I will make a bundled pack.
Thank you!
Alexis (Math Mojo)
Buyer re: 5th Grade Common Core Math Vocabulary
Hi, I purchased this from you the other day. I really like it!! I printed one of the vocab sheets for display but I've found it to be too small to be seen across the room. Can you or advise me how to
put 1 vocab word per sheet instead of 2. Thanks! Peggy
The only way to do that would be for me to remake the file with 1 word per page in landscape. Email me at math_mojo@aol.com and I can send you a sample what it would look like.
Thank you,
Alexis (Math Mojo)
Jolynda C.
Hi there. I was just looking over your 4th grade math scoot bundle and wondering how many cards are included with each scoot activity. Thank you.
There are 24 cards. I also have extra cards for larger classes. Some teachers with larger classes use several extra spots for review questions from other skills.
Alexis (Math Mojo)
Dear Math Mojo,
I think your PPTs are really great, but they don't cater for Australian/British spelling and use of commas in numbers etc. Is it possible to get an editable version after purchase?
Due to the license agreement with the clip art in the product, I am unable to give an editable version. I am so sorry.
Thank you!
Alexis (Math Mojo)
MJ @ Teaching in Heels
(TpT Seller) re: Common Core Math Task Cards - Changing Improper Fractions to Mixed Numbers
Is this scoot included in any other scoot bundles?
The specific fractions/questions in this set of task cards are not part of any of my other bundles. These are self-checking task cards (meaning you can fold the cards so the answer is on the back).
This skill is included in my Fraction Skills Scoot Bundle, but those cards are designed specifically for Scoot so they are not "self checking". It depends on how you want to use them. Here is a link
to my Fraction Skills Scoot Bundle.
I hope that answers your question!
Thank you,
Alexis (Math Mojo)
Beverly K. re: Fraction and Decimal Number Line Cooperative Learning Cards
Is it possible to have more fractions other than 8ths? Mixing it up would prevent automatically writing 8 as the denominator.
Can you email me at math_mojo@aol.com and let me know which grade level you teach and/or the fractions you want. I can make additional cards for this set over the weekend, but it would be helpful to
know your grade level.
Alexis (Math Mojo)
Rose Mary B. re: 3rd Grade Common Core Math Test Prep - All Standards Mega Bundle
hello, does this bundle include OA, NBT, NF, MD and G 3rd grade common core? I'm only asking because I don't see all of them listed.....Thanks...
It includes all the domains and each standard. There is also an general review for each domain. If you email me at math_mojo@aol.com I can send you the table of contents and a sample so you can see
what this includes.
Thank you!
Alexis (Math Mojo)
Lindsay M. re: 3rd Grade Math Skills Scoot Mega Bundle
Hi there, I am thinking about buying your third grade task cards mega bundle. What is the difference between this product and that one? Thanks!
The Task Card Mega Bundle has a set of self-checking task cards for each Common Core Standard. Each standard is specifically addressed and there is more problem solving. The Skills Scoot focuses
on specific 3rd grade skills and most Common Core Standards are covered, but the focus is more on computation and specific 3rd grade math skills. Both products have samples you can download. If you look
at those it might help you determine which will meet your needs.
Thank you!
Alexis (Math Mojo)
Jenny Aleman
(TpT Seller)
I bought the 5th grade math exit clips… I am having difficultly opening the adobe file. Is there an additional file that I can access?
Email me at math_mojo@aol.com and I will send you the individual domain files. The smaller file size will make them easier to open.
Alexis (Math Mojo)
Tanya Klanert
(TpT Seller)
Looking for your test bundle or game bundle for grade 5 common core...is there one??
Not yet. I am almost done with the fourth grade bundle, then I am am starting the 5th grade bundle. I should be done in a week or two.
Alexis (Math Mojo)
Kathleen J.
Hi Alexis, I have purchased a few of your items and love your work! I recently bought your Valentine Geometry Game. Is there anyway you can send it to me in a non-pdf version? I want to make a
Jeopardy Game for my class and it would be so much easier if I could cut and paste. Thank you for you consideration!
I will email you a response.
Krystal F. re: Surfing Into A New School Year Back to School Pack
I am wanting to do this theme in my classroom, but was wondering if the folder covers are editable to include students names? Thanks
They are not editable at this time. I am going to make an editable version in the next few months.
Alexis (Math Mojo)
I have a friend that bought, "Common Core Math Assessments 4th Grade Operations and Algebraic Thinking"....do you have one for 3rd grade?? I've been searching....unless I passed it, I'm not finding it.
I have an assessment pack with all the standards for 3rd grade. It has an assessment for each standard. The link is below (you have to cut and paste the link into your browser).
I did not make a pack for each domain, but if you are interested in buying just the Operations and Algebraic Thinking assessments let me know and I will post the product tonight. The pack would be
just like the 4th grade pack. My email address is math_mojo@aol.com.
Thank you,
Alexis (Math Mojo)
Jennifer Oakley
(TpT Seller)
I love playing games in my math center time to reinforce skills. I was given a 3/4 combo after school started due to budget cuts and have not been able to find 4th grade games correlated to common
core. THANK YOU! I can even use some of these with my 3rd graders. Do you have a plan to make a product like this for third grade?
It is on my "to do" list. I hope to be done in the next few weeks. I am thrilled to hear you like the games!
Thank you!
Alexis (Math Mojo)
Kim G. re: Valentine Math Games and Centers - 3rd Grade (Aligned to the Common Core)
Was also going to ask about the "space movement" on special delivery...another question is in the directions for special delivery game it states that player moves number of spaces of the missing
factor. The example is _x3=6 the unknown factor is 2 so player would move 4 spaces? Shouldn't it be 2 spaces or am I misunderstanding the directions? Thank you! Can't wait to use Friday!
It should be 2, that is a typo. I will correct it and upload a corrected version.
Update - You can download the corrected version from My Purchases or you can email at math_mojo@aol.com for the corrected version.
Thank you!
Alexis (Math Mojo)
Tara Lyn N. re: 4th Grade Common Core Math Test Prep - All Standards Mega Bundle
Just checking- does this set include answer keys? I didn't see it specified on the description of the product.
Yes. It includes an answer key.
Thank you!
Alexis (Math Mojo)
Meredith M.
I purchased your Valentine Math Games and Centers-4th grade and am having trouble getting it to print. I tried in a pdf format and Adobe. Any suggestions.
Meredith McLaughlin
Email me at math_mojo@aol.com and I will send you each game individually. That should be easier to print. I don't have access to your email address so I need for you to email me.
Thank you!
Alexis (Math Mojo)
Miss E's Explorations
(TpT Seller) re: Valentine Math Games and Centers - 3rd Grade (Aligned to the Common Core)
On the special delivery game, the "space movement" mentions something about less than or greater than 1/2. I thought the space movement was to be the unknown factor on the card. Did I misunderstand
or is it a typo? Thanks so much. Love this resource!
It is a typo. That is from the 4th grade game, I will fix it!
Update - You can download the corrected version from my purchases or you can email at math_mojo@aol.com for the corrected version.
Thank you!
Alexis (Math Mojo)
Rebecca B.
Do you have the fun Friday math pack for 3rd?
I am thinking about making one soon! It is on my "to do list".
Alexis (Math Mojo)
Melody Boyd
(TpT Seller)
Can you pretty please add perimeter? :-)
Yes. Please email me at math_mojo@aol.com because I don't have access to your email. I will make it tonight. Please list also any other words you need.
Alexis (Math Mojo)
Pamela S. re: Math Exit Slips - 4th Grade Common Core All Standards Mega Bundle
Do you have a second grade version of this?
I do not have one at this time. A friend who is a second grade teacher was going to do a 2nd grade version, but she has been very busy. I will check and see if she is still interested in making them.
If she is not, I will make them.
Alexis (Math Mojo)
Pamela S. re: Math Exit Slips - 3rd Grade Common Core All Standards Mega Bundle
Thank you, Alexis. I have already purchased the fourth grade and LOVE them. I may need to purchase the third grade "bundle" also toward the end of school. These are FABULOUS!!
Do you have any "spiraled" morning bell work for Math??
Spiraled bell work is on my "to do" list! Thank you for your kind comment! I am thrilled that you like the exit slips.
Alexis (Math Mojo)
Pamela S. re: Math Exit Slips - 3rd Grade Common Core Number and Operations - Fractions
Do you have exit slips available for each domain in math?
I have them for each domain and a set for test prep. I have them in a bundled pack. Here is the link -
You might need to cut and paste the link into your browser.
Thank you,
Alexis (Math Mojo)
Beth W.
beth.williams@comalisd.org, thank you so much!
Beth W.
I do have a MAC and I guess I need to look and see if I have pages! Here are some of the words I have so far. I will jot them down real quick. I usually have my students replace these words in the
question with a synonym. Here are examples of a few:
define - give meaning
provide - give
evidence - proof
feeling words - shows emotion
best - greatest choice
foreshadow - gives clue of something that will happen later in story. clue - shadow is usually in front of you just like foreshadow event will happen later in story
figurative language - a statement that does not really mean what it says. clue- underline figur(e), you have to figure out the meaning of the statement
emphasize - highlight, focus attention
entry - passage
selection - passage
passage - story
text - passage, story, article, poem
Thank you!
I need you to email me at math_mojo@aol.com to give me your email address. I don't have access to your email. I will work on these tonight and tomorrow.
Alexis (Math Mojo)
Beth W. re: 5th Grade Common Core ELA Ultimate Vocabulary Resource
Thank you again for all your time emailing me and sending me the samples via email. I know you said in your email if I sent you the words I needed that were not included in the flashcard set you
would make them. If you would like to send me the template I don't mind doing them or I will send you the list this week, I need to gather them, and I will gladly pay you for them. Thank you for such
a great product and for being so kind.
I can send you a template if you have a Mac and have Pages. I create almost everything in that program. I can try to create a template using another program, but it often doesn't look exactly the
same. If you send me the words I don't mind making them for you.
My email is math_mojo@aol.com.
Alexis (Math Mojo)
Linda K. re: Math Exit Slips PowerPoint 4th Grade Common Core Number & Operations in Base Ten
Are there answer keys for all of your 4th grade exit slips powerpoints?
Yes, they are generally on the second to last slide. Please email me at math_mojo@aol.com if you need help!
Thank you,
Alexis (Math Mojo)
Beth W.
thank you!
I sent you the sample.
Beth W.
HI, I have emailed you at the email address you noted below for an example of the flash cards. I have not received a undeliverable message but I am wondering if you have received my email. My
computer is doing some odd things! Thanks.
It was in my spam file. I will email it to you ASAP!
Alexis (Math Mojo)
I have been teaching for 15 years. I LOVE to create high quality materials. I specialize in math games, cooperative learning items, and test prep. I have taught in 3 states and 2 countries. I have a
M.Ed. in Elementary Education and National Board Certification (Middle Child Generalist).
I have taught in 6 different schools in 3 states and 2 countries. Each school had a unique and varied approach. I have taught at an Arts Impact School, a Coalition of Essential Schools school, a
school that was involved in an in depth study of the Reggio Emilia philosophy, and currently the school where I teach emphasizes high engagement, cooperative learning (Kagan Cooperative Learning).
Needless to say I have learned a great deal. I take the good ideas from all these experiences and have an eclectic mix of best practices that work for my classroom.
National Board Certification
I have a B.S degree and an M.Ed. degree.
I can never seem to find exactly what I want from textbooks and commercially made materials so I end up creating many materials. I love creating high quality classroom materials!
PreK, Kindergarten, 1st, 2nd, 3rd, 4th, 5th, 6th, 7th, 8th, 9th, 10th, Homeschool, Staff
English Language Arts, Creative Writing, Reading, Grammar, Vocabulary, Specialty, Math, Algebra, Arithmetic, Basic Operations, Fractions, Geometry, Graphing, Measurement, Numbers, Order of Operations
, Other (Math), Arts & Music, Other (Specialty), Math Test Prep, Life Skills, For All Subject Areas, Classroom Management, Statistics, Character Education, Word Problems, Writing, Holidays/Seasonal,
Back to School, Thanksgiving, Christmas/ Chanukah/ Kwanzaa, Poetry, Autumn, Mental Math, Halloween, Winter, Valentine's Day, Decimals, St. Patrick's Day, Place Value, Tools for Common Core, For All
Subjects, Summer, Test Preparation, End of Year
rewriting a recursive function
Join Date
Feb 2012
Rep Power
Hello, I have to rewrite a recursive function that takes a number and raises it to a power. It sounds simple, but we have to do it in log n time using X^(2n) = X^n times X^n. I am having a hard
time implementing this. Here is the code for the other power function that I have to modify.
Java Code:
public static double pow(double x, int n) {
    if (x == 0 && n <= 0)
        throw new IllegalArgumentException("Wrong inputs");
    else if (x == 0)
        return 0;
    else if (n == 0)
        return 1;
    else if (n > 0)
        return x * pow(x, n - 1);
    return 1 / pow(x, -n);
}
I am able to follow the recursive calls for this method fine, I just don't know how to implement that weird form I was given to use. Thanks for any help.
Join Date
Feb 2009
New Zealand
Rep Power
Java Code:
else if (n > 0)
return x * pow(x, n-1);
The idea seems to be that you keep subtracting 1 from n until it becomes small (zero or one).
I think what you are asked to do is divide n by 2 instead of subtracting one. There is a small complication to deal with if n is odd. But even that shouldn't be too difficult since, in that
case, n-1 is even.
Join Date
Apr 2012
New York State of Confusion, USA
Blog Entries
Rep Power
Here's the deal. The original algorithm will work just fine for solving x^cn where c is some constant such as -2, 3, 21, etc. It will just take longer than if you take advantage of the
algebraic rule that x^cn = (x^c)^n. Note that x^cn does NOT equal x^c * x^n
The original algorithm requires (c*n)-1 passes to achieve the answer. Taking advantage of factoring the exponents will take (n-1)+(c-1) passes.
I think I have that correct upon re-reading it before posting.
Join Date
Feb 2012
Rep Power
Java Code:
else if (n > 0)
return x * pow(x, n-1);
The idea seems to be that you keep subtracting 1 from n until it becomes small (zero or one).
I think what you are asked to do is divide n by 2 instead of subtracting one. There is a small complication to deal with if n is odd. But even that shouldn't be too difficult since, in that case, n-1 is even.
Would the modification involve adding another base case? Why would it being odd matter?
Join Date
Sep 2008
Voorschoten, the Netherlands
Blog Entries
Rep Power
Write down the recurrence relations for the pow operator:
1) x^0 == 1
2) x^1 == x
3) x^n == x^(n/2)*x^(n/2) if n is even
4) x^n == x*x^(n/2)*x^(n/2) if n is odd
cases 1) and 2) are the sentinel cases while case 3) and 4) describe the recursive step.
kind regards,
cenosillicaphobia: the fear for an empty beer glass
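Turning those four relations directly into Java gives a log-time version. This is a sketch, not the assignment's official answer; the zero checks are carried over from the posted pow() method:

```java
public class FastPow {
    // O(log n) power function built from the four recurrence relations above.
    public static double pow(double x, int n) {
        if (x == 0 && n <= 0)
            throw new IllegalArgumentException("Wrong inputs");
        if (x == 0) return 0;
        if (n == 0) return 1;                  // case 1: x^0 == 1
        if (n < 0) return 1 / pow(x, -n);      // reduce to a positive exponent
        double half = pow(x, n / 2);           // one recursive call, used twice
        return (n % 2 == 0) ? half * half      // case 3: n even
                            : x * half * half; // case 4: n odd
    }
}
```

Note that case 2 (x^1 == x) falls out automatically: for n = 1, half = pow(x, 0) = 1 and the odd branch returns x.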
Join Date
Feb 2012
Rep Power
Thank you very much. I was able to solve it. Now I just have to write it non-recursively.
Join Date
Sep 2008
Voorschoten, the Netherlands
Blog Entries
Rep Power
Think of the exponent n as a binary number; each bit represents a power of the number x: x^1, x^2, x^4, x^8 etc. If a bit in that number n is 1, multiply your result by the corresponding
power of x.
kind regards,
cenosillicaphobia: the fear for an empty beer glass
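Spelled out in code, that bit-scanning idea might look like this (my own sketch, not part of the thread; it omits the zero-base error checks from the original method):

```java
public class IterPow {
    // Non-recursive log-time power: walk the bits of n from least
    // significant to most, squaring the running power of x each step.
    public static double pow(double x, int n) {
        boolean negative = n < 0;
        long e = Math.abs((long) n);       // long avoids overflow on -2^31
        double result = 1.0, power = x;    // power holds x^(2^k) at step k
        while (e > 0) {
            if ((e & 1) == 1) result *= power; // bit k of n is set
            power *= power;
            e >>= 1;
        }
        return negative ? 1.0 / result : result;
    }
}
```

Each loop iteration handles one bit of the exponent, so the loop runs about log2(n) times.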
M.S. Applied Physics
This plan provides a strong core of applied physics as well as essential research skills, and prepares you for technical employment in high-technology industries, research institutes, or college
teaching—as well as for further professional study in various fields of applied physics.
This plan can be interdisciplinary, integrating a broad range of subject areas to enhance your opportunities for research, teaching, or careers in the private sector. There are both thesis and
course-work options in General Physics and specialization options in Planetary Science and Teaching College Physics.
│ Emphasis                    │ Thesis Option │ Course-Work Option │ Other Features                                       │
│ General Physics–Thesis      │ Y             │                    │                                                      │
│ General Physics–Course Work │               │ Y                  │ Oral Comprehensive Exam                              │
│ Planetary Science–Thesis    │ Y             │                    │                                                      │
│ Teaching College Physics    │               │ Y                  │ Oral Comprehensive Exam, Supervised College Teaching │
Degree Requirements
For this 30-unit plan you must complete the following 12 units, regardless of your Emphasis:
· PHY 520 (4 units)
· PHY 525 (3 units)
· PHY 535 (3 units)
· At least 2 units of PHY 698
Emphasis Requirements
Choose one of the following four Emphases (18 units):
General Physics—Thesis Emphasis
• PHY 530 (3 units)
• 6 units of 500-600-level electives in any science, mathematics, engineering, or other field that’s appropriate to your career goals, chosen with your advisor's approval.
• 6 units of PHY 685 or 500-600-level electives in any science, mathematics, engineering, or other field that’s appropriate to your career goals, chosen with your advisor's approval.
• 3 units of PHY 699, for the research, writing, and oral defense of an approved thesis
▓ Please be aware that you may only count 3 units of thesis credit toward your degree. However, you may end up taking more units because you must enroll for PHY 699 each term while you are working on
your thesis.
▓ Also, please be aware that university Graduate Assistants must be enrolled for a minimum of nine hours each semester.
Planetary Science—Thesis Emphasis
• PHY 530 (3 units)
• 6 units of 500-600-level electives in any science, mathematics, engineering, or other field that’s appropriate to your career goals, chosen with your advisor's approval.
• 6 units of PHY 685 or 500-600-level electives in any science, mathematics, engineering, or other field that’s appropriate to your career goals, chosen with your advisor's approval.
• 3 units of PHY 699, for the research, writing, and oral defense of an approved thesis
▓ Please be aware that you may only count 3 units of thesis credit toward your degree. However, you may end up taking more units because you must enroll for PHY 699 each term while you are working on
your thesis.
▓ Also, please be aware that university Graduate Assistants must be enrolled for a minimum of nine hours each semester.
General Physics—Course-Work Emphasis
• PHY 530, 545, and 550 (9 units)
• 9 units of electives in any science, mathematics, engineering, or other field that’s appropriate to your career goals, chosen with your advisor's approval.
□ These nine units must be formal, graded course work.
□ Up to 6 of these hours may be at the 400-level, and the remainder must be at the 500- or 600-level.
• a comprehensive oral exam
▓ Also, please be aware that university Graduate Assistants must be enrolled for a minimum of nine hours each semester.
Teaching College Physics Emphasis
• PHY 500 (3 units)
• 6 units chosen from the following: PHY 530, 545, or 550
• 5 units of electives in any science, mathematics, engineering, or other field that’s appropriate to your career goals, chosen with your advisor's approval.
□ These units must all be formal, graded coursework; and
□ May be at the 400-, 500-, or 600-level.
• 4 units of PHY 608, during which you will teach lecture courses, in introductory college physics, under supervision.
• a comprehensive oral exam
▓ Also, please be aware that university Graduate Assistants must be enrolled for a minimum of nine hours each semester.
Click here for Physics graduate courses, here for Astronomy graduate courses, and here for Physical Sciences graduate courses.
Click here for Physics and Astronomy faculty.
Math Help
September 23rd 2012, 11:27 PM #1
Sep 2012
Sum Subsets
Hi everyone, first time on this forum and I'm thrilled. I was wondering if anyone could perhaps show me where I can find an article, formula, book, etc, that tells me what I want to find below.
You have a number n and a list of numbers. I want to generate all the different combinations from the list that add up to n.
n = 4;
list = [1,2]
Combination 1: 1 + 1 + 1 + 1 = 4
Combination 2: 1 + 1 + 2 = 4
Combination 3: 2 + 2 = 4
There are 3 different combinations that sum up to 4 from the given list.
n = 6
list = [1,3,6]
Combination 1: 1+1+1+1+1+1 = 6
Combination 2: 1+1+1+3 = 6
Combination 3: 3+3 = 6
Combination 4: 6=6
There are 4 different combinations that sum up to 6 from the given list.
Is there an algorithm or something that I can use in a java program that will accomplish this?
Thank you.
Re: Sum Subsets
I think what you're looking for might be called: enumeration of partitions (only you restricted to a base list, not all integers). Maybe that will help you hunt for this. I'd be shocked if there
weren't a LOT of theory and/or code for exactly this. It wouldn't surprise me if exactly what you're requesting doesn't already exist as a routine in one of the major math software packages.
However, I don't know of it - that's just a guess. If you're at a university, you might want to ask a professor, especially a comp sci professor or math professor.
There are various recursive algorithms: Here's one off the top of my head, where you shrink down your choice list in one part and the size of your sum in the other. I've no idea if this is a
good approach - I'm offering it up to give you some ideas.
ResultList (SumsTo=N; Uses=ChoiceList; RunningAppendList)
(Pick a k in the incoming ChoiceList)
ResultList (SumsTo=N-k; Uses=ChoiceList; RunningAppendList.AppendWith(k))
ResultList (SumsTo=N; Uses=ChoiceList.Remove(k); RunningAppendList)
Last edited by johnsomeone; September 28th 2012 at 07:38 AM.
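Since the OP asked for something usable in a Java program, here is one way to code essentially that recursion (a sketch; the class and method names are mine). Each call either uses the current list element at least once more or drops it from consideration, so every combination is counted exactly once regardless of order:

```java
import java.util.List;

public class SumSubsets {
    // Counts the ways to write n as an unordered sum of values drawn,
    // with repetition, from parts[from..]; e.g. n = 4, [1, 2] gives 3.
    public static int count(int n, List<Integer> parts, int from) {
        if (n == 0) return 1;                        // exact sum reached
        if (n < 0 || from == parts.size()) return 0; // overshot, or no choices left
        return count(n - parts.get(from), parts, from) // use parts[from] again
             + count(n, parts, from + 1);              // never use it again
    }
}
```

With the examples from the question, count(4, [1, 2], 0) returns 3 and count(6, [1, 3, 6], 0) returns 4.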
September 28th 2012, 07:36 AM #2
Super Member
Sep 2012
Washington DC USA
Suquamish Math Tutor
Find a Suquamish Math Tutor
...I studied computer science in college, with additional coursework in biology, physics, chemistry, anatomy and physiology, and linguistics. After graduation I worked as a technical writer and
computer programmer for eight years. I enjoy writing, editing, and illustrating, and I have a deep understanding of computers, how they work, and how to make them work.
18 Subjects: including algebra 1, algebra 2, biology, chemistry
...Throughout the years I had a number of Geometry students to tutor privately. The biggest problem my students face in Geometry is writing up proofs. I have my own approach
to proofs that requires developing analytical thinking, which is useful in everyday life.
20 Subjects: including trigonometry, ACT Math, SAT math, algebra 1
...I have used PowerPoint for almost 20 years. The versions change, but most of the functionality remains the same. I have created many PowerPoint presentations with various transitions,
templates, and embedded elements from other Office programs.
39 Subjects: including algebra 1, algebra 2, grammar, linear algebra
Aloha! My name is Misa, a 25-year-old woman who grew up in Hawaii and moved to Washington to pursue a college degree. Crazy right?
14 Subjects: including algebra 2, trigonometry, anthropology, algebra 1
...So please, if you have any questions or concerns, feel free to get in contact with me. My information is provided on my profile page. Ever since completing high school I have worked hard to have
a grounded foundation in mathematics. Algebra I and II are among my favorites.
38 Subjects: including prealgebra, calculus, algebra 2, algebra 1
The rectangular dance floor is twice as long as it is wide. The tent surrounds the dance floor, leaving some space (7 feet in each direction) for guests to mingle and cool off after dancing. This
extra space (represented by the grey area) has an area of 952 square feet. Your task is to find the dimensions of the dance floor.
ok you know area formula right?
the extra space means what you have left over
If you have 2 pieces of space and you have the extra left over, which is 952
can you put that into a formula?
The dance floor is 18 ft wide and 36 ft long.
Let's say that the width of the dance floor = x ft
Length = 2x ft
Area = 2x² sq ft
Tent width = x + 7 + 7 = x + 14 ft
Tent length = 2x + 7 + 7 = 2x + 14 ft
Total tent area = (x + 14)(2x + 14) sq ft = 2x² + 42x + 196 sq ft
The grey area sounds like it's (2x² + 42x + 196) - 2x² sq ft, so:
42x + 196 = 952
42x = 756
x = 18
Thanks everyone (:
Math Forum Discussions
Topic: Can somebody please solve these?
Replies: 1 Last Post: Oct 29, 2009 5:31 PM
Can somebody please solve these?
Posted: Oct 27, 2009 3:06 PM
(1 - x^2)/(6x + 6) ÷ (x^4 - 1)/(6x^2 + 6)
- - -
r s
--- - 1
Please show me or tell me how you came to your answer, because the only way for me to figure out how to do this is to see how it's done, as well as seeing the answer.
LUCIFER: the first block cipher
One could perhaps quarrel with the title of this section. What about Playfair, or the Hill cipher? But LUCIFER, part of an experimental cryptographic system designed by IBM, was the direct ancestor
of DES, also designed by IBM.
Like DES, LUCIFER was an iterative block cipher, using Feistel rounds. That is, LUCIFER scrambled a block of data by performing an encipherment step on that block several times, and the step used
involved taking the key for that step and half of that block to calculate an output which was then applied by exclusive-OR to the other half of the block. Then, the halves of the block were swapped,
so that both halves of the block would be modified an equal number of times.
Incidentally, this page refers to LUCIFER as actually implemented, and described in an article in the journal Cryptologia by Arthur Sorkin. An article in Scientific American discussed plans for
LUCIFER on a more general level, and described what was essentially a different kind of block cipher.
LUCIFER enciphered blocks of 128 bits, and it used a 128-bit key.
The F-function in LUCIFER had a high degree of symmetry, and could be implemented in terms of operations on one byte of the right half of the message at a time. However, I will describe LUCIFER here
in the same general fashion that DES is described.
Subkey generation
Each round uses a 72-bit subkey. The subkey for the first round consists of the first byte of the key repeated twice, followed by the next seven bytes of the key. Rotate the key left by seven bytes,
then generate the subkey for the next round.
The f-function
XOR the right half of the block with the last eight bytes of the subkey for the round.
Based on the bits of the first byte of the subkey for that round, swap nibbles in the eight bytes of that result for those bytes which correspond to a 1 bit.
Use S-box 0 for the most significant nibble of each of these eight bytes, and S-box 1 for the least significant nibble of each byte:
Input: 0 1 2 3 4 5 6 7
S-box 0 output: 12 15 7 10 14 13 11 0
S-box 1 output: 7 2 14 9 3 11 0 4
Input: 8 9 10 11 12 13 14 15
S-box 0 output: 2 6 3 1 9 4 5 8
S-box 1 output: 12 13 1 10 6 15 8 5
Permute the 64 bits of the result, numbered from 0 (for the most significant bit) to 63 (for the least significant bit), by the following permutation:
The General Structure
LUCIFER has sixteen rounds. In each round, the f-function is calculated using that round's subkey and the left half of the block. The result is then XORed to the right half of the block, which is the
only part of the block altered for that round.
After every round except the last one, the right and left halves of the block are swapped.
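The round structure alone, abstracted away from LUCIFER's particular f-function, can be sketched like this (my own illustration; the f-function is a placeholder supplied by the caller, not LUCIFER's real one):

```java
public class FeistelSketch {
    public interface F { byte[] apply(byte[] half, byte[] subkey); }

    // Runs one Feistel pass per subkey (LUCIFER uses 16): the f-function of
    // the left half is XORed into the right half, and the halves are swapped
    // after every round except the last.
    public static byte[][] feistel(byte[] left, byte[] right,
                                   byte[][] subkeys, F f) {
        int rounds = subkeys.length;
        for (int r = 0; r < rounds; r++) {
            byte[] out = f.apply(left, subkeys[r]);
            byte[] newRight = right.clone();
            for (int i = 0; i < newRight.length; i++)
                newRight[i] ^= out[i];              // XOR f's output into right
            if (r < rounds - 1) { right = left; left = newRight; } // swap
            else right = newRight;                  // last round: no swap
        }
        return new byte[][] { left, right };
    }
}
```

Because each round only XORs a value into one half, decryption is the same loop run with the subkeys in reverse order, whatever the f-function is.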
Although LUCIFER has a larger block and key size than DES, it is considerably more vulnerable to attacks from differential cryptanalysis, and is also weak due to the regular nature of its key
However, this does not mean that the LUCIFER algorithm is useless. If a reasonably good stream cipher is used both before and after LUCIFER, its weaknesses essentially become irrelevant, and its
strengths are still present. It might indeed be argued that this kind of precaution ought to be used with DES as well.
The distribution of square-free numbers
"... Abstract. We say that an integer n is k–free (k ≥ 2) if for every prime p the valuation vp(n) < k. If f: N → Z, we consider the enumerating function Sk f (x) defined as the number of positive
integers n ≤ x such that f(n) is k–free. When f is the identity then Sk f (x) counts the k–free positive int ..."
Cited by 1 (0 self)
Abstract. We say that an integer n is k–free (k ≥ 2) if for every prime p the valuation vp(n) < k. If f: N → Z, we consider the enumerating function Sk f (x) defined as the number of positive
integers n ≤ x such that f(n) is k–free. When f is the identity then Sk f (x) counts the k–free positive integers up to x. We review the history of Sk f (x) in the special cases when f is the
identity, the characteristic function of an arithmetic progression, a polynomial, arithmetic. In each section we present the proof of the simplest case of the problem in question using exclusively
elementary or standard techniques. 1. Introduction- The
"... Sieve methods have had a long and fruitful history. The sieve of Eratosthenes (around 3rd century B.C.) was a device to generate prime numbers. Later Legendre used it in his studies of the prime
number counting function π(x). Sieve methods bloomed and became a topic of intense investigation after th ..."
Sieve methods have had a long and fruitful history. The sieve of Eratosthenes (around 3rd century B.C.) was a device to generate prime numbers. Later Legendre used it in his studies of the prime
number counting function π(x). Sieve methods bloomed and became a topic of intense investigation after the pioneering work of Viggo Brun (see [Bru16],[Bru19], [Bru22]). Using his formulation of the
sieve Brun proved that the sum
Can somebody explain to me the sense of using Bayes' Law of probability and the total probability theorem? Thanks.
Suppose you have a tree of possible events like the one below: |dw:1385668006646:dw| (tree diagram drawn in the original post). Then, Bayes' law says that the probability of one of the events in the tree (say \(\alpha\)), given that another event (say \(A\)) has occurred, is given by \[P(\alpha|A)=\frac{P(A|\alpha)P(\alpha)}{P(A)}=\frac{P(A|\alpha)P(\alpha)}{P(A|\alpha)P(\alpha)+P(A|\beta)P(\beta)+P(A|\Gamma)P(\Gamma)}\] Basically, it says that the probability of \(\alpha\), given the occurrence of \(A\), is given by the ratio of (1) [the probability of \(A\) and \(\alpha\) occurring together] to (2) [the total probability of \(A\) occurring]. (1) The probability of two events occurring together is \(P(A\cap\alpha)\). Using the conditional probability definition, we get \(P(A|\alpha)=\dfrac{P(A\cap\alpha)}{P(\alpha)}\), i.e. \(P(A\cap\alpha)=P(A|\alpha)P(\alpha)\). (2) The total probability theorem is another way of saying that the denominator above is exactly \(P(A)\): summing \(P(A|\cdot)P(\cdot)\) over all the branches gives the total probability of \(A\).
stop that bus!
July 11th 2008, 01:33 AM #1
Jul 2008
A bus moves with k travelers, and it can stop in n stations according to every traveler's request. The probability of getting down for every traveler is the same and independent of other
travelers' getting out. The bus stops given that at least one traveler requests to get down. What is the mathematical expectation for the number of stops?
A bus moves with k travelers, and it can stop in n stations according to every traveler's request. The probability of getting down for every traveler is the same and independent of other
travelers' getting out. The bus stops given that at least one traveler requests to get down. What is the mathematical expectation for the number of stops?
as a noob when it comes to probability, i shall take a stab at this one.
we have independent events here, and we have two outcomes which are the same for each passenger involved (the bus stops for you or not). we can deem the bus stopping as a "success" and the bus
not stopping as a "failure" and use the binomial distribution.
recall that the probability of $k$ successes in $n$ independent trials is given by:
$P(k) = {n \choose k} p^kq^{n - k}$
where $p$ is the probability of success, and $q = 1 - p$ is the probability of failure
Now, you want $P(X \le k)$, where $X$ is a discrete random variable for the number of stops the bus will make. so you add all the probabilities from 1 up to k inclusive.
Now recall that for a random variable $X$, with probability mass function given by $p(x)$ (here, that is given by the formula for the binomial distribution), has expected value:
$E(X) = \sum_i x_i p(x_i)$
now replace $p(x)$ with the formula for the binomial distribution, and sum from $i$ goes from $1$ up to $n$, where $1 \le n \le k$.
Sadly, I don't think you can use the binomial distribution. That gives you the number of True's or successes given n trials, given a probability of true for 1 trial being p. What would be a
trial? If it's the person getting off, then it would be possible for them to get off more than once since you repeat the trial n times. I don't see how we can fit this into that distribution. But
maybe I'm missing something obvious.
Let $X_i = 1$ if at least one passenger gets off at stop $i$, $0$ otherwise.
The probability that any given passenger gets off at stop $i \text{ is } 1/n$, so
$Pr(X_i=0) = (1-1/n)^k \text{ for } i = 1,2, \dots ,n$
hence $E(X_i) = Pr(X_i = 1) = 1 - (1-1/n)^k$
$E(\sum_{i=1}^n X_i) = \sum_{i=1}^n E(X_i) = \sum_{i=1}^n [1- (1-1/n)^k] = n [1 - (1-1/n)^k]$
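A quick Monte Carlo check of this closed form (Python sketch; it assumes, as above, that each of the k passengers picks one of the n stops uniformly and independently):

```python
import random

def expected_stops_exact(n, k):
    # closed form from the thread: n * [1 - (1 - 1/n)^k]
    return n * (1 - (1 - 1 / n) ** k)

def expected_stops_simulated(n, k, trials=200_000, seed=0):
    rng = random.Random(seed)
    total = 0
    for _ in range(trials):
        # each passenger independently picks one of the n stops;
        # the bus stops once per distinct stop chosen
        total += len({rng.randrange(n) for _ in range(k)})
    return total / trials

print(expected_stops_exact(10, 5))      # ~4.0951
print(expected_stops_simulated(10, 5))  # close to 4.0951
```

For n = 10 stops and k = 5 passengers the formula gives 10(1 - 0.9^5) = 4.0951, and the simulated average agrees.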
The $X_i$ measures how many people got off at a stop i.
$\sum_{i=1}^n X_i = k$
There are only k people and they all have to get off.
$X_i$ measures how many people got off at a stop i
is an incorrect interpretation of the random variable $X_i$.
Eg. If all passengers get off at stop 1 then $X_1 = 1$ and all other $X_i$ are equal to zero. Hence $\sum_{i=1}^n X_i = 1$ in this case.
My bad. I misread how $X_i$ was defined. However, I just don't see how any of this ensures that k people get off of the bus. If you treat each stop completely seperately without taking into
account how people there were and how many got off the bus, I don't see how you can do the problem.
For example, based on the above $\mathbf{P}(\sum_{i=1}^{n}X_i>k)>0$ which is impossible.
$\mathbf{P}(\sum_{i=1}^{n}X_i > k){\color{red}=}0$ which is exactly as things should be.
It follows from the definition that $1 \leq \sum_{i=1}^{n}X_i \leq k$.
Mr. Fantastic's remarks above are exactly right. However, I can see that I should have provided a little more explanation. Maybe two observations will help clarify matters:
1. The $X_i$ variables are not independent. (But then, no one ever said they were.)
2. The key to the solution method is the simple yet powerful theorem, applied in the last line above, stating that $E(X+Y) = E(X) + E(Y)$. Independence of $X \text{ and } Y$ is not required for
this result (a fact which always seems surprising to me).
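That linearity survives dependence can be checked concretely by brute-force enumeration of a tiny case (an illustrative sketch: k = 2 passengers, n = 2 stops, all 4 equally likely outcomes; here $X_1$ and $X_2$ are clearly dependent, yet expectations still add):

```python
from itertools import product

n, k = 2, 2  # tiny case: 2 stops, 2 passengers, 4 equally likely outcomes
outcomes = list(product(range(n), repeat=k))  # each entry: the stop each passenger picks

def X(i, outcome):
    # indicator that at least one passenger gets off at stop i
    return 1 if i in outcome else 0

def E(f):
    return sum(f(o) for o in outcomes) / len(outcomes)

lhs = E(lambda o: X(0, o) + X(1, o))               # E(X_1 + X_2)
rhs = E(lambda o: X(0, o)) + E(lambda o: X(1, o))  # E(X_1) + E(X_2)
print(lhs, rhs)  # both 1.5 = n[1 - (1 - 1/n)^k]
```

Both sides equal 1.5, which also matches the closed form $n[1-(1-1/n)^k] = 2(1 - 1/4)$.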
With all due respect, no, it follows from the definition that $1 \leq \sum_{i=1}^{n}X_i \leq n$. The emphasis is on the n.
Think about it. How can a binomial distribution represent this experiment? By definition, a binomial distribution describes an experiment in which you have n independent trials. But this is not
the case for the above problem. If k-1 people get off at the first stop, the probability that 1 or more people get off at the next stop is different than it was at the first stop. That violates
the assumptions.
Last edited by meymathis; July 11th 2008 at 06:59 PM. Reason: clearer wording.
Where have I mentioned the binomial distribution? In fact, I thanked your reply (#2) as being useful for saying that the binomial distribution is not applicable to this question!
You still do not understand what the random variable $X_i$ represents.
The maximum value of $\sum_{i=1}^{n}X_i$ IS equal to k and occurs when each of the k passengers gets off at a different bus stop. But even when that happens, some of the $X_i$'s will equal zero
(assuming n > k).
How can the sum possibly be greater than k, the number of passengers? For the sum to be greater than k you need:
1. More than k passengers.
2. Each of those passengers to get off at a different bus stop.
Condition 1 is never satisfied.
If you're still unconvinced, please provide a concrete example, with specific values for each of the $X_i$'s, that supports your argument.
thanks all!
My apologies for some of the confusion (I'm new at the forums and didn't see your "thanks"). I thought people were arguing for the binomial distribution. Looking more carefully (which I should
have done before) at awkward's example, the main thrust of my argument still holds, I think. Probably the most succinct way of saying it is this:
$Pr(X_i=0) = (1-1/n)^k \text{ for } i = 1,2, \dots ,n$
doesn't work because the number of people on the bus is not constantly $k$.
$\mathbf{P}(X_2=0|X_1=1) \neq \mathbf{P}(X_2=0|X_1=0)$, because there are no longer k people on the bus after the first stop; instead
$Pr(X_2=0|X_1=1) = (1-1/n)^{k-l}$
where $l$ is the number of people that got off at the first stop.
Again, sorry for not being clearer before.
I guess the reason I said binomial distribution is that $\sum X_i$ is binomially distributed based on awkward's definition: there is some experiment with probability of success p that is repeated n times.
It is true that $\sum_{i=1}^{n}X_i \leq n$ -- you can't have the bus stop more times than there are stops. It is also true that $\sum_{i=1}^{n}X_i \leq k$ -- you can't have the bus stop more times
than there are passengers. By definition of the $X_i$, $\sum_{i=1}^{n}X_i$ is the total number of times the bus stops.
I agree with you that the Binomial distribution is not appropriate here. But then, I never said it was-- that was a post by someone else.
[I missed some of the posts above while I was composing this.]
Last edited by awkward; July 12th 2008 at 11:30 AM. Reason: Beat to the reply by others
| {"url":"http://mathhelpforum.com/advanced-statistics/43471-stop-bus.html","timestamp":"2014-04-18T23:42:30Z","content_type":null,"content_length":"101289","record_id":"<urn:uuid:95f24ad9-993a-46cd-bf33-318e5e11f1f0>","cc-path":"CC-MAIN-2014-15/segments/1398223207046.13/warc/CC-MAIN-20140423032007-00168-ip-10-147-4-33.ec2.internal.warc.gz"}
s.split() on multiple separators
Antoon Pardon apardon at forel.vub.ac.be
Tue Oct 2 13:03:23 CEST 2007
On 2007-10-02, Hrvoje Niksic <hniksic at xemacs.org> wrote:
> Antoon Pardon <apardon at forel.vub.ac.be> writes:
>> It may be convincing if you only consider natural numbers in
>> ascending order. Suppose you have the sequence a .. b and you want
>> the reverse. If you work with included bounds the reverse is just b
>> .. a. If you use the python convention, things become more
>> complicated.
> It's a tradeoff. The convention used by Python (and Lisp, Java and
> others) is more convenient for other things. Length of the sequence
> x[a:b] is simply b-a. Empty sequence is denoted simply with x[a:a],
> where you would need to use the weird x[a:a-1] with inclusive bounds.
> Subsequences such as x[a:b] and x[b:c] merge smoothly into x[a:c],
> making it natural to iterate over subsequences without visiting an
> element twice.
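For reference, the three properties mentioned above can be checked directly at a Python prompt:

```python
x = list(range(10))
a, b, c = 2, 5, 8

assert len(x[a:b]) == b - a        # length of x[a:b] is simply b - a
assert x[a:a] == []                # the empty sequence is x[a:a]
assert x[a:b] + x[b:c] == x[a:c]   # adjacent slices merge smoothly
print("all three properties hold")
```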
Sure it is a tradeoff and the python choice may in the end still turn
out the best. But that doesn't contradict that a number of
considerations were simply not mentioned in the article referred to.
>> Another problem is if you are working with floats. Suppose you have
>> a set of floats. Now you want the subset of numbers that are between
>> a and b included. If you want to follow the convention that means
>> you have to find the smallest float that is bigger than b, not a
>> trivial task.
> The exact same argument can be used against the other convention: if
> you are working with inclusive bounds, and you need to represent the
> subset [a, b), you need to find the largest float that is smaller than
> b.
Which I think is a good argument against using any convention, and for
having explicit conditions for the boundaries to include or exclude.
So instead of writing xrange(2,6) you have to write something like
xrange(2 <= x < 6), which explicitly states that 2 is included and 6 is
excluded. If someone wants both boundaries included he can write
xrange(2 <= x <= 5).
A slice notation that would somehow indicate which boundaries are included
and which are excluded would be useful IMO.
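Python cannot evaluate a bare `xrange(2 <= x < 6)` lazily, but the idea can be approximated with a small helper. The `interval` function below is hypothetical, just a sketch of the proposal:

```python
def interval(lo, hi, include_lo=True, include_hi=False):
    """Hypothetical helper (not a real builtin): iterate the integers
    between lo and hi with each boundary explicitly included or excluded."""
    start = lo if include_lo else lo + 1
    stop = hi + 1 if include_hi else hi
    return range(start, stop)

print(list(interval(2, 6)))                   # [2, 3, 4, 5] -- 2 <= x < 6
print(list(interval(2, 5, include_hi=True)))  # [2, 3, 4, 5] -- 2 <= x <= 5
```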
Antoon Pardon
More information about the Python-list mailing list | {"url":"https://mail.python.org/pipermail/python-list/2007-October/431837.html","timestamp":"2014-04-20T05:54:53Z","content_type":null,"content_length":"5024","record_id":"<urn:uuid:43e7aa75-8e2b-4785-8c07-6510f0c43aae>","cc-path":"CC-MAIN-2014-15/segments/1397609538022.19/warc/CC-MAIN-20140416005218-00050-ip-10-147-4-33.ec2.internal.warc.gz"} |
Re: st: what does increase in coefficient value over two time period ind
Notice: On March 31, it was announced that Statalist is moving from an email list to a forum. The old list will shut down at the end of May, and its replacement, statalist.org is already up and
[Date Prev][Date Next][Thread Prev][Thread Next][Date Index][Thread Index]
Re: st: what does increase in coefficient value over two time period indicate
From Prakash Singh <prakashbhu@gmail.com>
To statalist <statalist@hsphsun2.harvard.edu>
Subject Re: st: what does increase in coefficient value over two time period indicate
Date Mon, 5 Nov 2012 16:47:44 +0530
Maarten, thanks a lot for the quick reply. I agree with your points, but I
was told that this is the case, so I thought I should clarify.
the estimated model includes
log of sector gdp (dependent variable)
log of credit allocated (independent variable)
and I have used Panel cointegration technique.
Please let me know if more information is required
On Mon, Nov 5, 2012 at 4:37 PM, Maarten Buis <maartenlbuis@gmail.com> wrote:
> On Mon, Nov 5, 2012 at 11:54 AM, Prakash Singh wrote:
>> Suppose I run regression for two different time period for same set of
>> variables and get 0.13 and .34 (both statistically significant) as
>> estimated coefficient of a variable. Now I want to know that: does
>> this high coefficient value in the second period indicate improvement
>> in the effect of the variable on the dependent variable.
> Possible, but not certain.
> 1) We don't know what the unit of either the dependent and independent
> variable is, so we have no way of judging whether this is a
> substantively meaningful or a substantively meaningless change.
> 2) You did not say which regression model you used. If it is a
> non-linear model the comparison between periods becomes much harder,
> some would say impossible.
> 3) We cannot say with the information you have given use whether we
> can reject the hypothesis that these coefficients are equal. My
> default way to perform such a test is to estimate one model and add
> interaction terms, others prefer -suest-.
> -- Maarten
> ---------------------------------
> Maarten L. Buis
> WZB
> Reichpietschufer 50
> 10785 Berlin
> Germany
> http://www.maartenbuis.nl
> ---------------------------------
> *
> * For searches and help try:
> * http://www.stata.com/help.cgi?search
> * http://www.stata.com/support/faqs/resources/statalist-faq/
> * http://www.ats.ucla.edu/stat/stata/
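Maarten's third option, one pooled model with an interaction term, can be sketched numerically outside Stata as well. This is a hedged illustration with simulated data and plain least squares (the variable names and coefficient values are invented, chosen to mimic the 0.13 vs 0.34 example above):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500
x = rng.normal(size=2 * n)
period = np.r_[np.zeros(n), np.ones(n)]   # 0 = first period, 1 = second period
# invented data: slope 0.13 in period 0, slope 0.34 (= 0.13 + 0.21) in period 1
y = 1.0 + 0.13 * x + 0.21 * x * period + rng.normal(scale=0.5, size=2 * n)

# pooled regression y ~ 1 + x + period + x:period; the interaction
# coefficient estimates the change in the slope of x between periods
X = np.column_stack([np.ones_like(x), x, period, x * period])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
print(beta)  # last entry should be near 0.21
```

A t-test on the interaction coefficient (or, in Stata, -suest- across the two subsamples) is then the formal test of whether the coefficients differ between periods.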
| {"url":"http://www.stata.com/statalist/archive/2012-11/msg00121.html","timestamp":"2014-04-17T15:40:59Z","content_type":null,"content_length":"10457","record_id":"<urn:uuid:fafb1c2f-50cd-4de6-8b8e-deda530af501>","cc-path":"CC-MAIN-2014-15/segments/1398223202548.14/warc/CC-MAIN-20140423032002-00185-ip-10-147-4-33.ec2.internal.warc.gz"}
Topic: simple statistics question
Replies: 2 Last Post: Mar 20, 2013 3:55 PM
Re: simple statistics question
Posted: Mar 20, 2013 1:16 PM
On 2013-03-20, Thomas Plehn <thomas.plehn@gmail.com> wrote:
> while(1) %Matlab code
> rr = rand(1,50); %sequence of 50 U(0,1) Values
> des = rand(1,50); %sequence of 50 U(0,1) Values
> diff = rr - des;
> %These are both decision statistics, a and b
> a = mean(rr);
> b = min(diff);
> disp(a-b); %their difference is nearly constant
> %but how is it distributed (mu,sigma)
> %and how does that depend on sequence length (n=50)
> %I think we can choose U(0,1) instead of U(a,b) without loss of generality
> %(linear transformation of coordinates)
One observation in rr or des has mean 1/2 and variance 1/12.
Therefore the mean of 50 of them has mean 1/2 and variance 1/600.
The difference of two such has mean 0 and variance 1/300.
> end
This address is for information only. I do not claim that these views
are those of the Statistics Department or of Purdue University.
Herman Rubin, Department of Statistics, Purdue University
hrubin@stat.purdue.edu Phone: (765)494-6054 FAX: (765)494-0558
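Those values are easy to confirm by simulation. A NumPy sketch of the difference of two independent 50-sample means (the quantity the reply computes):

```python
import numpy as np

rng = np.random.default_rng(0)
reps, n = 50_000, 50

# difference of the means of two independent U(0,1) samples of size 50
d = rng.random((reps, n)).mean(axis=1) - rng.random((reps, n)).mean(axis=1)

print(d.mean())          # near 0
print(d.var(), 1 / 300)  # near 1/300 = 2 * (1/12) / 50
```

Note that the original post actually asked about mean(rr) - min(diff), a different statistic, which the same simulation framework could be adapted to study.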
Date Subject Author
3/20/13 Re: simple statistics question Herman Rubin
3/20/13 Re: simple statistics question RGVickson@shaw.ca | {"url":"http://mathforum.org/kb/message.jspa?messageID=8696982","timestamp":"2014-04-16T06:02:42Z","content_type":null,"content_length":"18306","record_id":"<urn:uuid:448b8db7-a3ee-496b-a6b5-53c4f6d2eab6>","cc-path":"CC-MAIN-2014-15/segments/1397609521512.15/warc/CC-MAIN-20140416005201-00454-ip-10-147-4-33.ec2.internal.warc.gz"} |
West Roxbury SAT Math Tutor
Find a West Roxbury SAT Math Tutor
Hello! I am glad you are seeking out a tutor; I believe tutors are an excellent choice for any student, as all of us can benefit from one-on-one instructional time. Here is a little bit about me:
I am a passionate educator, and in addition to in-school and in-home tutoring I have worked with stude...
16 Subjects: including SAT math, reading, writing, algebra 1
...I enjoy helping others understand the logic and rules that govern our writing, interpretation, and speech. I have almost six months' experience tutoring in English half-time, including grammar.
I have a masters degree in math, but have not lost sight of the difficulties encountered in elementary math.
29 Subjects: including SAT math, English, reading, writing
...I've used MATLAB to analyze data for my research in neuroscience for about 10 years and for three years at least a large fraction of my day was spent writing MATLAB code. I've written more than
500 data analysis and software routines in MATLAB and I know all the commands and how best to organize...
47 Subjects: including SAT math, reading, chemistry, statistics
...I have also played in the Metropolitan Youth Orchestra and Children's Orchestra Society. I was the principal violinist for my middle and high school orchestras. I would love to be able to bring
joy, music, and life to a child by teaching him/her how to play the violin.
11 Subjects: including SAT math, Spanish, accounting, ESL/ESOL
...I currently work as software developer at IBM. When it comes to tutoring, I prefer to help students with homework problems or review sheets that they have been assigned. I prefer to focus on
examples from each section, rather than each specific problem, to make sure they understand all of the concepts.
17 Subjects: including SAT math, statistics, geometry, economics | {"url":"http://www.purplemath.com/West_Roxbury_SAT_math_tutors.php","timestamp":"2014-04-18T05:38:38Z","content_type":null,"content_length":"24143","record_id":"<urn:uuid:7fa318cf-50f4-44df-af21-a5502d4f878b>","cc-path":"CC-MAIN-2014-15/segments/1398223203841.5/warc/CC-MAIN-20140423032003-00555-ip-10-147-4-33.ec2.internal.warc.gz"} |
Algebra 2 Tutors
Clinton Township, MI 48035
...rcent of a number; using the order of operations to evaluate expressions; finding the mean, median and mode of a data set; looking for patterns. My GED lessons will explain the problem solving
strategies such as working backwards, solving a similar problem, making...
Offering 4 subjects including algebra 2 | {"url":"http://www.wyzant.com/Macomb_MI_Algebra_2_tutors.aspx","timestamp":"2014-04-16T13:47:20Z","content_type":null,"content_length":"60138","record_id":"<urn:uuid:b9db9bfc-4daf-45de-8f90-e58464aed586>","cc-path":"CC-MAIN-2014-15/segments/1397609523429.20/warc/CC-MAIN-20140416005203-00273-ip-10-147-4-33.ec2.internal.warc.gz"} |
Homework Help
Posted by Anna-Marie on Monday, March 3, 2008 at 10:30pm.
1. The sides of a triangle have lenghts x, x+4, and 20. Specify those values of x for which the triangle is acute with the longest side 20.
2. use the information to decide if triangle ABC is acute, right, or obtuse.
AC=13, BC= sq. rt. 34, CD=3
>>i know this is obtuse but why?
3. If x and y are positive numbers with x>y, show that a triangle with sides of lenghts 2xy, x^2 - y^2, and x^2 + y^2 is always a right triangle.
I HATE WORD PROBLEMS!! Thanks for helping =]
• Geometry (#1 of 3) - drwls, Tuesday, March 4, 2008 at 12:47am
A right triangle with largest side 20 would have
x^2 + (x+4)^2 = 400
x^2 + 4x + 8 = 200
x^2 + 4x -192 = 0
(x-12)(x+16) = 0
The only root that makes sense (by being positive) is x = 12. That means that you have a right triangle if the sides are 12 and 16. (The x = 12 case). If x>12, the triangle is acute, meaning thet
the largest angle is less than 90 degrees. If x<12, it is obtuse. You should be able to convince yourself of that by drawing the figure or by using the law of cosines.
• Geometry - drwls (#2 of 3), Tuesday, March 4, 2008 at 6:39am
It is not obtuse; it is impossible because one side is longer than the sum of the other two.
You also posted this problem separately. I gave a more complete answer there.
• Geometry (#3 of 3) - drwls, Tuesday, March 4, 2008 at 6:44am
Let the first side be A, the second side b and the third side C.
Note that
(x^2 - y^2)^2 + (2xy)^2 = x^4 + 2x^2y^2 + y^2 = (x^2 + y^2)^2
Therefore the relationship A^2 + B^2 = C^2 is obeyed. This is only true for a right triangle.
Related Questions
math - I need to find out the type of triangle (by the type of angles). Is ...
geometry - the sides of a triangle is 9,12,15cm..Find the sides of a similar ...
geometry - in a triangle the middle length side is 3 more than the shortest side...
geometry - Two sides of a triangle measure 8 and 15. How many integer values can...
geometry - the sides of a triangle have lengths 4x+1, 2x+1 and 6x-1. if the ...
Geometry - The sides of a triangle measure 9 , 15 , and 18 . If the shortest ...
algebra 1 - The perimeter of a triange with sides a,b,and c is 24 cm. Side a is ...
MATH - two sides of a triangle measure 8 and 15. how many integer values can be ...
math - The lengths of the sides of a triangle are 4in.,14 in., and 16 in.If the...
geometry - A triangle has sides 12 ft, 14 ft, and 20 feet. The smallest side of ... | {"url":"http://www.jiskha.com/display.cgi?id=1204601404","timestamp":"2014-04-19T15:32:33Z","content_type":null,"content_length":"10216","record_id":"<urn:uuid:e5e78524-d038-4245-ae1a-23c28b0531b0>","cc-path":"CC-MAIN-2014-15/segments/1398223205375.6/warc/CC-MAIN-20140423032005-00566-ip-10-147-4-33.ec2.internal.warc.gz"} |
- Journal of the American Statistical Association, 1996
"... A critical issue for users of Markov Chain Monte Carlo (MCMC) methods in applications is how to determine when it is safe to stop sampling and use the samples to estimate characteristics of the
distribution of interest. Research into methods of computing theoretical convergence bounds holds promise ..."
Cited by 223 (6 self)
A critical issue for users of Markov Chain Monte Carlo (MCMC) methods in applications is how to determine when it is safe to stop sampling and use the samples to estimate characteristics of the
distribution of interest. Research into methods of computing theoretical convergence bounds holds promise for the future but currently has yielded relatively little that is of practical use in
applied work. Consequently, most MCMC users address the convergence problem by applying diagnostic tools to the output produced by running their samplers. After giving a brief overview of the area,
we provide an expository review of thirteen convergence diagnostics, describing the theoretical basis and practical implementation of each. We then compare their performance in two simple models and
conclude that all the methods can fail to detect the sorts of convergence failure they were designed to identify. We thus recommend a combination of strategies aimed at evaluating and accelerating
MCMC sampler conver...
, 1997
"... This paper is organised as follows. In Section 2, we present an over-simplified version of a convergence diagnostic, and study analytically its performance on certain simple Markov chains. We
restrict ourselves primarily to chains which in fact produce i.i.d. samples from ..."
Cited by 16 (2 self)
This paper is organised as follows. In Section 2, we present an over-simplified version of a convergence diagnostic, and study analytically its performance on certain simple Markov chains. We
restrict ourselves primarily to chains which in fact produce i.i.d. samples from
, 1994
"... In this paper, we propose to monitor a Markov chain sampler using the cusum path plot of a chosen 1-dimensional summary statistic. We argue that the cusum path plot can bring out, more
effectively than the sequential plot, those aspects of a Markov sampler which tell the user how quickly or slowly t ..."
Cited by 12 (3 self)
In this paper, we propose to monitor a Markov chain sampler using the cusum path plot of a chosen 1-dimensional summary statistic. We argue that the cusum path plot can bring out, more effectively
than the sequential plot, those aspects of a Markov sampler which tell the user how quickly or slowly the sampler is moving around in its sample space, in the direction of the summary statistic. The
proposal is then illustrated in four examples which represent situations where the cusum path plot works well and not well. Moreover, a rigorous analysis is given for one of the examples. We conclude
that the cusum path plot is an effective tool for convergence diagnostics of a Markov sampler and for comparing different Markov samplers. KEY WORDS: Convergence diagnostic; Cusum path plot, Markov
sampler; Mixing; Sequential plot; Summary statistic. Research supported in part by ARO Grant DAAL03-91-G-007. y Research supported in part by NSF Grant DMS-9305601. 1 Introduction As Markov chain
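The cusum path statistic described in this abstract is simple to compute. A minimal NumPy sketch for a generic 1-dimensional chain of draws (plotting omitted):

```python
import numpy as np

def cusum_path(x):
    # cumulative sums of deviations from the overall mean: a quickly
    # mixing chain gives a jagged path hugging zero, while a slowly
    # mixing chain gives long smooth excursions
    x = np.asarray(x, dtype=float)
    return np.cumsum(x - x.mean())

rng = np.random.default_rng(0)
path = cusum_path(rng.normal(size=1000))
print(path[-1])  # the final value is 0 (up to rounding) by construction
```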
"... this article we present yet another example, from our current applied research. Figure 0.1 displays an example of slow convergence from a Markov chain simulation for a hierarchical Bayesian
model for a pharmacokinetics problem (see Bois et al., 1994, for details). The simulations were done using a M ..."
this article we present yet another example, from our current applied research. Figure 0.1 displays an example of slow convergence from a Markov chain simulation for a hierarchical Bayesian model for
a pharmacokinetics problem (see Bois et al., 1994, for details). The simulations were done using a Metropolis-approximate Gibbs sampler (as in Section 4.4 of Gelman, 1992); due to the complexity of
the model, each iteration was expensive in computer time, and it was desirable to keep the simulation runs as short as possible. Figures 1a and 1b display time series plots for a single parameter in
the posterior distribution in two independent simulations, each of length 1000. The simulations were run in parallel simultaneously on two workstations in a network. It is clear from the separation
of the two sequences that, after 1000 iterations, the simulations are still far from convergence. However, either sequence alone looks perfectly well behaved. | {"url":"http://citeseerx.ist.psu.edu/showciting?cid=3071743","timestamp":"2014-04-20T19:31:45Z","content_type":null,"content_length":"20960","record_id":"<urn:uuid:461a8f3a-965e-4e09-828a-52128ea9084f>","cc-path":"CC-MAIN-2014-15/segments/1397609539066.13/warc/CC-MAIN-20140416005219-00400-ip-10-147-4-33.ec2.internal.warc.gz"} |
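The between-sequence comparison in this last abstract is the idea behind the Gelman-Rubin diagnostic. A simplified sketch of the usual R-hat statistic for m parallel chains of equal length (details vary across published versions):

```python
import numpy as np

def gelman_rubin(chains):
    # chains: shape (m, n) -- m parallel chains of length n; compares
    # between-chain and within-chain variance, values well above 1
    # suggest the chains have not mixed
    chains = np.asarray(chains, dtype=float)
    m, n = chains.shape
    W = chains.var(axis=1, ddof=1).mean()    # mean within-chain variance
    B = n * chains.mean(axis=1).var(ddof=1)  # between-chain variance
    V = (n - 1) / n * W + B / n
    return float(np.sqrt(V / W))

rng = np.random.default_rng(0)
mixed = rng.normal(size=(2, 1000))        # two well-mixed "chains"
stuck = mixed + np.array([[0.0], [5.0]])  # two widely separated chains
print(gelman_rubin(mixed))  # close to 1
print(gelman_rubin(stuck))  # much larger than 1
```

In the pharmacokinetics example above, the separation of the two sequences would drive the between-chain variance, and hence this statistic, far above 1 even though each sequence looks well behaved on its own.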