| content (string, lengths 86 to 994k) | meta (string, lengths 288 to 619) |
|---|---|
Rydal Math Tutor
Find a Rydal Math Tutor
I was a National Merit Scholar and graduated magna cum laude from Georgia Tech in chemical engineering. I can tutor in precalculus, advanced high school mathematics, trigonometry, geometry,
algebra, prealgebra, chemistry, grammar, phonics, SAT math, reading, and writing. I have been tutoring profe...
20 Subjects: including SAT math, ACT Math, algebra 1, algebra 2
...I know what's expected from each student and I will create a plan of action to help you achieve your personal goals to better understand mathematics and pass the SAT. I am Georgia certified in
Mathematics (grades 6-12) with my Masters in Mathematics Education from Georgia State University. I have classroom experience teaching mathematics and regularly held tutorials for my own students.
7 Subjects: including algebra 1, algebra 2, geometry, prealgebra
I have been studying languages since the age of 6, for a total of 47 years. In addition, I have studied linguistics, which enhances the ability to understand language at its core, including
English. On the Advanced Placement English Composition SAT, I received a score of 760 (out of a possible 800). Portuguese is my second language and I have been speaking it since I was 7.
11 Subjects: including algebra 1, reading, writing, English
...A test does not measure how smart you are, it measures how prepared you are to take it. I would love to help you prepare for a test in any subject area. I have worked with testing from Pharmacy
Techs and Dental Assists to Teacher Certifications.
18 Subjects: including ACT Math, English, reading, writing
I am a Georgia Tech Biomedical Engineering graduate and I have been tutoring high school students in the subjects of math and science for the last three years. I love helping students reach their
full potential! I have found that most of the time all a student needs is someone encouraging them and letting them know that they are SMART and that they CAN do it!
15 Subjects: including algebra 1, algebra 2, biology, chemistry
|
{"url":"http://www.purplemath.com/rydal_math_tutors.php","timestamp":"2014-04-21T15:11:49Z","content_type":null,"content_length":"23719","record_id":"<urn:uuid:30246f4e-902e-43e3-b532-79d82b42033c>","cc-path":"CC-MAIN-2014-15/segments/1397609540626.47/warc/CC-MAIN-20140416005220-00287-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Fly in a Box
April 12th 2005, 09:29 PM #1
Apr 2005
Fly in a Box
A fly is inside an empty, closed, rectangular box that measures two feet by three feet by six feet. If he flies from one corner to the opposite corner, what is the minimum distance he must travel?
How do I solve this?
Use Pythagorean Theorem twice.
Use once to get one length (a) of the blue triangle.
Use another time to get the hypotenuse (c) of the blue triangle.
April 12th 2005, 09:53 PM #2
Apr 2005
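Carrying out that hint (any pairing of the edges works, since a flying path from corner to opposite corner is the space diagonal): first a = sqrt(2^2 + 3^2) = sqrt(13) for a face diagonal, then c = sqrt(13 + 6^2) = sqrt(49) = 7, so the minimum distance is 7 feet.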
|
{"url":"http://mathhelpforum.com/math-topics/64-fly-box.html","timestamp":"2014-04-19T11:50:09Z","content_type":null,"content_length":"27101","record_id":"<urn:uuid:44823df2-cdaa-4de4-be49-546bfcc64252>","cc-path":"CC-MAIN-2014-15/segments/1397609537097.26/warc/CC-MAIN-20140416005217-00580-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Trenton, NJ Statistics Tutor
Find a Trenton, NJ Statistics Tutor
...More than anything else that is the reason I do this. I have had the privilege to be able to work with many students of various educational backgrounds and have been able to help them all. I
take my role as a tutor very seriously and always strive for the best results for my students.
23 Subjects: including statistics, English, writing, geometry
...I love math and the hard sciences for the ability to explain natural occurrences and break them down into clear rules and equations. My favorite part of teaching / tutoring is that "aha"
moment when the model finally clicks and the lesson becomes clear to the student.I graduated from Dartmouth C...
39 Subjects: including statistics, chemistry, reading, physics
...I had 2 semesters of linear algebra as an undergrad when I earned my BA in Mathematics. Then, I used it extensively when I earned my MS in Statistics. I studied probability extensively during
my work towards my BA in Mathematics and my MS in Statistics.
15 Subjects: including statistics, calculus, geometry, algebra 1
...On the side, I've taught middle school math (fundamentals of math, pre-algebra, algebra 1, and geometry) in a private school during summer months, as well as instructed pre-schoolers-12th
graders at Kumon Learning Center. My teaching philosophy, particularly in math, is to teach for deep concept...
22 Subjects: including statistics, English, reading, algebra 1
...Bonaventure University as of May 2014. I wanted to share my knowledge and my love for the sciences this summer before heading off to medical school this fall. I have a plethora of experience
being a tutor at high school and college.
10 Subjects: including statistics, chemistry, biology, algebra 1
Related Trenton, NJ Tutors
Trenton, NJ Accounting Tutors
Trenton, NJ ACT Tutors
Trenton, NJ Algebra Tutors
Trenton, NJ Algebra 2 Tutors
Trenton, NJ Calculus Tutors
Trenton, NJ Geometry Tutors
Trenton, NJ Math Tutors
Trenton, NJ Prealgebra Tutors
Trenton, NJ Precalculus Tutors
Trenton, NJ SAT Tutors
Trenton, NJ SAT Math Tutors
Trenton, NJ Science Tutors
Trenton, NJ Statistics Tutors
Trenton, NJ Trigonometry Tutors
|
{"url":"http://www.purplemath.com/Trenton_NJ_statistics_tutors.php","timestamp":"2014-04-20T02:17:04Z","content_type":null,"content_length":"24127","record_id":"<urn:uuid:3d926f5e-c7c6-4a7d-90f8-7d56ae8daaea>","cc-path":"CC-MAIN-2014-15/segments/1398223203841.5/warc/CC-MAIN-20140423032003-00579-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Velocity Reviews - Re: multiplier
Re: multiplier
kishore.devesh@gmail.com 01-15-2013 06:21 AM
Re: multiplier
i studied about multiplication and trying to write a code for it. The best combination for small multiplication is Booths algorithm as it reduces the partial product and wallace tree increases the
speed of addition of multiplication. i am trying to write a code for modified boothsradix4 algorithm using wallace tree (3:2).. if you can help please do help me..
Devesh Kishore
rickman 01-16-2013 02:24 AM
Re: multiplier
On 1/15/2013 1:21 AM,
> hi
> i studied about multiplication and trying to write a code for it. The best combination for small multiplication is Booths algorithm as it reduces the partial product and wallace tree increases the
speed of addition of multiplication. i am trying to write a code for modified boothsradix4 algorithm using wallace tree (3:2).. if you can help please do help me..
> Thanks
> Devesh Kishore
> (kishore.devesh@gmail.com)
What part of this do you need help with? Also, what is the purpose of
writing this code? Are you designing a chip? Most FPGAs incorporate
pretty durn fast multipliers these days, no need to reinvent the wheel.
Are you just doing this to learn about the techniques?
terese paul 02-14-2013 10:59 AM
can anyone help me to write a VHDL code for wallace tree using 4:2 compressors..pls pls help..
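As background for readers new to the terminology, below is a small Python sketch of plain radix-2 Booth recoding, the idea of turning runs of 1-bits into one add and one subtract; it is only an illustration, not the radix-4 Booth plus Wallace-tree VHDL design the posters are asking for, and the function name and bit width are mine.

# Radix-2 Booth multiplication for signed integers (illustrative sketch).
def booth_multiply(multiplicand, multiplier, bits=8):
    product = 0
    prev_bit = 0                          # implicit bit to the right of bit 0
    m = multiplier & ((1 << bits) - 1)    # two's-complement view of the multiplier
    for i in range(bits):
        bit = (m >> i) & 1
        if (bit, prev_bit) == (0, 1):     # end of a run of 1s: add shifted multiplicand
            product += multiplicand << i
        elif (bit, prev_bit) == (1, 0):   # start of a run of 1s: subtract shifted multiplicand
            product -= multiplicand << i
        prev_bit = bit
    return product

# Quick check against ordinary multiplication (the multiplier must fit in 'bits' signed bits).
assert booth_multiply(7, -3) == 7 * -3
assert booth_multiply(-5, 6) == -5 * 6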
|
{"url":"http://www.velocityreviews.com/forums/printthread.php?t=956528","timestamp":"2014-04-21T14:55:06Z","content_type":null,"content_length":"5611","record_id":"<urn:uuid:06cc252c-d971-462e-8449-2a0508e9bf66>","cc-path":"CC-MAIN-2014-15/segments/1397609540626.47/warc/CC-MAIN-20140416005220-00335-ip-10-147-4-33.ec2.internal.warc.gz"}
|
find the x and y intercepts, turning points, and create a
Create a table for f(x)=x(x+2)^2. Use a graphing utility to approximate the x- and y-intercepts of the graph. Approximate the turning points of the graph. Find the domain and the range of
the function. I am having problems using my calculator correctly for these problems. Please help.
precalcdummy wrote: I am having problems using my calculator correctly for these problems.
The "table" you can do by hand: pick x-values, plug them in, solve for the corresponding y-values. The
you already know, since every polynomial is defined for "all real numbers". From what you know about
the graphs of cubic functions
, you know the range: "all real numbers". The x- and y-
you can easily find by solving. The only part for which you'd "need" your calculator is the turning points.
To learn how to enter the function, graph it, and "trace" the curve (perhaps "zooming in" to see bits of it better) on your calculator, try reading the chapter in the owners manual on "graphing". If
you've lost your manual, you should be able to download a copy from the manufacturer's web site.
Re: find the x and y intercepts, turning points, and create a
if you have a ti-84 push Y=, put the formula in, and then do 2nd TRACE to find where it turns. you can do where it crosses with the CALC part. i dont know about domain and range tho sorry
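If the calculator keeps misbehaving, a computer algebra system can confirm the hand work described above; here is a minimal SymPy sketch (assuming SymPy is installed; the variable names are mine).

# Intercepts and turning points of f(x) = x*(x+2)^2 with SymPy.
import sympy as sp

x = sp.symbols('x')
f = x * (x + 2) ** 2

x_intercepts = sp.solve(sp.Eq(f, 0), x)            # [-2, 0]
y_intercept = f.subs(x, 0)                         # 0
turning_xs = sp.solve(sp.Eq(sp.diff(f, x), 0), x)  # [-2, -2/3]
turning_points = [(t, f.subs(x, t)) for t in turning_xs]
print(x_intercepts, y_intercept, turning_points)   # turning points (-2, 0) and (-2/3, -32/27)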
|
{"url":"http://www.purplemath.com/learning/viewtopic.php?f=8&t=544&p=1715","timestamp":"2014-04-19T02:10:03Z","content_type":null,"content_length":"21584","record_id":"<urn:uuid:00405997-8743-4836-a3d2-fe0def23a3e7>","cc-path":"CC-MAIN-2014-15/segments/1397609535745.0/warc/CC-MAIN-20140416005215-00610-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Automatic Proof for Reimann’s Hypothesis?
April 1, 2012 by algorithmicgametheory
My colleague Boris Shaliach has been working for years on computerized formalization of mathematics. After making a name for himself in the 1980′s in the, then hot, AI topic of automatic theorem
proving, and getting tenure, he has since continued working on extending this automatic theorem proving to the whole of mathematics. His work is certainly no longer main stream AI, and he does not
“play” the usual academic game: he hardly published anything since the late 80′s, and to this date he does not have a web page. I have to admit that most of us in the department did not believe that
anything would come out of this research, politely calling it “high risk high yield”. However as he is an excellent teacher, was able to get a steady stream of European grants, advises a very large
number of graduate students, and in general is friendly and interesting to talk to, everyone likes him.
For years he has been trying to convince everyone that mathematics will soon be done only by computers. His basic argument was that contrary to our intuitive feeling that mathematics requires the
peak of human intelligence, and that the cleverness seen in mathematical proofs can never be replaced by a computer, the truth is exactly opposite. Mathematics is so difficult for humans precisely
because of human limitations: evolution has simply not optimized humans for proving theorems, and this is exactly where computers should beat humans easily. His example was always how chess turned
out to be easy for computers while natural language understanding or face recognition turned out to be much harder because humans seem to be “optimized” for these tasks. (I have to admit that he
held this position even before computers beat the human world champion, pointing out that very early chess programs already beat him.) In this vein, I could not get him too excited about the recent
amazing advances in computerized Go, and he was much more impressed by the success of Google’s language translation and by IBM’s Watson.
His baby has always been a formalized mathematics computer program, called URIM-VETUMIM, that defined a computer-readable language for specifying mathematical theorems and proofs. He was one of the
original group behind the QED manifesto, and is one of not too many researchers still actively working on that goal of formalizing existing mathematics into a computerized readable form. It seems
that there is a small industry of computer systems that allow one to formally describe mathematical axiomatic systems, theorems and proofs, and can then verify syntactically the correctness of a
proof. His system, like many in the genre, uses the Zermelo-Fraenkel set theory as its axiomatic system and he has spent an enormous amount of effort trying to enter larger and larger pieces of
mathematics formally into the system. In fact, for about 20 years he has provided jobs for many undergraduate students in the math department who were paid for taking standard math textbooks and
formalizing them into his URIM-VETUMIM system. This turned out to be much harder than it seems where at the beginning, months would be required for formalizing one little lemma, but over time this
became much more efficient, as his steady stream of CS M.Sc. students improved the human interface, adding various short cuts (glorified macros) and sometimes even a little intelligence in the form
of proof assistants. One of the nice things about his URIM-VETUMIM system is that he never worried about students making errors: if it “compiled” into his system then it was correct.
I got involved in his efforts, helping him with the incentives engineering of replacing our math students with a “crowd-sourcing” alternative. His idea was to utilize Chinese math students who are
very good and yet are willing to work for really low wages. He had some contacts in leading Math departments in China, and I helped him a bit as he set up a nice Mechanical-Turk-like system
optimized for having Chinese math students enter theorems and proofs into the URIM-VETUMIM format.
I used to have long discussions with him about what the point of this formalization was: after all he is taking mathematical knowledge that we already have and translating it into a clearly less
efficient computerized format — so what’s the gain? He had two points: first he claims that most proofs in published mathematical papers are buggy. Usually these are “small” bugs that can be fixed
by any mathematician reading the paper, but often there are significant ones. He always pointed out the absurdity that important mathematical proofs (like for Fermat’s last theorem) required more
than a year of “verification” by expert mathematicians — really of fixing various holes in them — before they are believed. His analogy was that writing a mathematical proof was similar to writing a
computer program, but there was no “compiler” for mathematical theorems that allowed one to verify correctness. His second point was that that one first needs a large database of formalized existing
mathematical knowledge before one may hope for new computer-invented proofs.
Unfortunately, the various groups attempting the formalization of mathematics have not agreed on a common language, so each group is working on its own pet language, and none of them seems to have
reached critical mass so far. There seems to be some kind of informal competition between the groups about who can formalize more math into their system. Luckily most of the large systems have
either made their database publicly available on the web or were willing to provide Boris with their data. Indeed in the last decade or so, many of his graduate students built systems that
“translated” from the databases of his competitors into his own URIM-VETUMIM format.
Now comes the interesting stuff. Over the years many of his graduate students did work on various automatic theorem proving ideas, and kept writing more and more programs, each of them doing
something better than the previous ones (and other things worse…) Over the years he has amassed literally dozens of programs that can automatically prove mathematical theorems, each one with some
non-trivial yet very limited capabilities. Amazingly he was able to keep all the software of his former students functional and kept running every program on the updated mathematical URIM-VETUMIM
database as it grew. His latest project was to combine all these provers in a “meta-algorithm” of the form that won the Netflix challenge, where the idea is to combine all the existing algorithms in
his arsenal by running them all and choosing the “best” at each point. Of course the actual implementation is much more nuanced, as it involves using the large existing database to statistically
determine which algorithms are best attempted in which cases, I suppose somewhat similarly to how SATzilla wins the SAT solving contests. However the combination of partial results from the various
algorithms into a single “proof” seems to be rather straightforward since they all just provide mathematical “lemmas” in the URIM-VETUMIM format. His meta-algorithms now constantly run in the
background of his PC cluster, churning new input to the database as well as fine tuning the parameters of the meta-algorithms.
In the last year he has taken this a step forward and has designed a clear API for adding external theorem proving algorithms that are then automatically incorporated into his meta-algorithms.
Building on his Mechanical-Turk-for Chinese-Math-Students setup, he is now crowd-sourcing sub-algorithms for his meta-theorem-proving-algorithm, this time from Chinese CS students. By now he has
paid tens of thousands of dollars to hundreds of Chinese students who have contributed pieces of code. It seems that at this point it is very difficult to keep track of the contents of the
URIM-VETUMIM system: both the mathematical knowledge data and the software have been built over the course of more than 20 years, with different pieces of it coming from various sources. This seems
to be fine as Boris guards closely the core of the system: the URIM-VETUMIM language specification and the proof verifier.
Now comes the punchline: the system just spat out a “proof” of the Reimann hypothesis. The “proof” is a 200Mb long text file that is human-readable in the sense of understanding each line, but with
no structure that a human can really understand. It is clear that the proven “statement” is that of the Reimann hypothesis, and the “proof” is verified by the software to be a formally correct proof
of the theorem using Zermelo-Fraenkel set theory (with the axiom of choice, I think).
Now what?
When someone claims to prove P eq/neq NP, we first tend to look at their “track record.” My instinct is the same in this case: has the system automatically discovered other interesting proofs in the
past, proofs that have held up (even if the results being proved were known before)?
If the system can discover extremely long proofs of deep results, surely it should be able to spit out a variety of interesting proofs of known results, and some of these should be verifiable by
humans. This is where it might help for your colleague to publish more papers and work to raise awareness of the system (I’m basing this comment only on your characterization of his research style).
Regardless, I wish this project the best and hope the new proof works…
on April 1, 2012 at 3:36 pm | Reply anon
Andy. aren’t you already familiar with the (well known) system “B-Dichat APRIL”?
Well, now we either accept the proof on faith that this system is never wrong, or someone has to go over a 200mb text file.
nicely done. i thought it was still march.
on April 1, 2012 at 4:26 pm | Reply Anonymous
Hehe nice post. I’m curious to know which parts are true
on April 1, 2012 at 6:21 pm | Reply Andy J
Best April Fool’s I’ve seen in a long time. My stomach dropped away in the seconds between the last paragraph and seeing the comments reminding me of the date.
A fantastic achievement! Now that Reimann’s hypothesis is settled, one can hope that the closely related Riemann’s hypothesis will fall next…
on April 1, 2012 at 6:31 pm | Reply oz
Best internet prank I’ve seen today.
Two more serious points which didn’t occur to me (as a complete outsider to the area of automatic theorem proving).
1. Are there any efforts to ‘crowd-source’ the insertion of known mathematical facts into such a proof system? sounds like a quite sensible idea
2. Are there any efforts to deduce automatically theorems/proofs using statistics/machine-learning? that is, crawl automatically over mathematical research papers and try to find patterns (e.g. lemma
x leads to thm. y using terms a,b,c) common to them, thus perhaps finding ‘wholes’ in some of the papers, ‘bugs’ in proofs or some new conjectures?
So rather than try to build everything bottom-up using axioms and logic, analyze automatically what human mathematicians have already discovered?
A (probably far-fetched) analogy is the shift in research in AI. While until the 80′s it was mostly based on logic, the insertion of probabilistic reasoning, machine-learning and data-driven approach
has led to enormousness progress. Perhaps there is a similar opportunity here for automatic theorem proving?
on April 1, 2012 at 8:21 pm | Reply Ariel Procaccia
I was thinking of Boris Shaliach when I wrote paragraph -4 of my last post (http://agtb.wordpress.com/2012/03/31/an-insiders-guide-to-author-responses/). I didn’t want to offend him personally
on April 1, 2012 at 9:23 pm | Reply Anonymous
The most fantastic April Fool ever.
Genuinely shocked and fooled.
on April 2, 2012 at 12:32 am | Reply Chronoz
I propose that we award Dr. Boris Shaliach the “Best Fictional Math-CS Character” prize for this century. Brilliant read, till the very end.
on April 2, 2012 at 8:09 am | Reply Anonymous
who wrote this? noam? lance? or?
on April 2, 2012 at 11:52 am | Reply Mike Wooldridge
Very good – I was totally suckered by this. Wondering why there were no web links to the system, and was about to forward the URL to colleagues working on theorem proving, asking if they knew Boris…
I predict that somebody will read this and turn it into a grant proposal.
on April 2, 2012 at 4:18 pm | Reply Rahul Sami
Brilliant! This had me taken in until I saw the comments.
on April 3, 2012 at 12:19 am | Reply Anonymous
I bit, with the assumption of a bug in the program. Damn you. First Internet-Is-Full-of-Lies-Day post that’s gotten me in a while. 10/10
on April 4, 2012 at 6:59 am | Reply Jane Sixpack
Your college? Your colleague? Good April Fool joke, by someone whose game-theoretic algorithms can’t tell the difference between “college” and “colleague” :-).
• on April 10, 2012 at 2:26 pm | Reply Noam Nisan
fixed this one…
on July 22, 2012 at 5:49 pm | Reply Nikhil Bellarykar
hehe great one!! I was about to share this on fb but then saw the date :D nicely done!!
18 Responses
|
{"url":"http://agtb.wordpress.com/2012/04/01/automatic-proof-for-reimanns-hypothesis/","timestamp":"2014-04-20T18:29:01Z","content_type":null,"content_length":"72405","record_id":"<urn:uuid:b357636b-e656-47c4-94ad-6f13f1fbfd6a>","cc-path":"CC-MAIN-2014-15/segments/1397609539066.13/warc/CC-MAIN-20140416005219-00573-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Evaluate Math Expression
Jep is a Java package for parsing and evaluating mathematical expressions.
Easy-to-use math calculator that immediately computes the result as you type a math expression. It allows multiples math expressions at same time. It also allows defining your own variables that can
be set to a value or other math expressions. It has a
Math expression parser and evaluator written in Scala.
The math expression parser/evaluator. EquTranslator allows to evaluate a run-time defined string as a math expression. One of the major enhancement and attractive value of EquTranslator algorithm is
speed. This essential feature, that already has
BlueMath Calculator is a flexible desktop calculator and math expression evaluator. It can calculate simultaneously multiple expressions, showing the results in the same time as you type.
Use the math expression calculator to quickly evaluate multi variable mathematical expressions. You can type in an expression such as 34+67*sin(23)+log(10) and see the result instantaneously.
HEXelon MAX Solves math expressions like: Sin(Pi)-Root(-8;3)+3 Creates user's functions (e.g. surfaces, volumes, and so on). Publishes in Internet. Receives from Internet. Program shows position of
bugs in math expression made by user. Descriptions
Calculator Prompter is a math expression calculator. Calculator Prompter has a built-in error recognition system that helps you get correct results. With Calculator Prompter you can enter the whole
expression, including brackets, and operators. You can
CalculatorX is an enhanced expression calculator. It can evaluate an expression for one time. You can use Arithmetic, Logic, Bitwise and Relation operations in your expression. And many system
constants and built-in functions can also be used in
Evaluate math expressions with sin, square root, pi, factorial, etc.
The new edition of GridinSoft Notepad has all possibilities that you need for everyday using: Built-in Scripts Engine, Code Folding, Evaluate Math Expressions, Spell Checker and many-many other
convenient and pleasing features.
|
{"url":"http://evaluate-math-expression.sharewarejunction.com/","timestamp":"2014-04-21T02:01:45Z","content_type":null,"content_length":"33880","record_id":"<urn:uuid:48cdca3c-8a5e-47bc-a0a6-1af2fa81b393>","cc-path":"CC-MAIN-2014-15/segments/1398223210034.18/warc/CC-MAIN-20140423032010-00328-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Variance, Number of Heads and Tails
January 31st 2010, 07:10 PM
Variance, Number of Heads and Tails
I was just reviewing some discrete probability problems and came across this one. Not sure how to do it...
Let Xn be the random variable that counts the difference in the number of tails and the number of heads when n coins are flipped.
a) What is the expected value of Xn?
b) What is the variance of Xn?
February 1st 2010, 12:27 AM
mr fantastic
I was just reviewing some discrete probability problems and came across this one. Not sure how to do it...
Let Xn be the random variable that counts the difference in the number of tails and the number of heads when n coins are flipped.
a) What is the expected value of Xn?
b) What is the variance of Xn?
The difference can defined by D = |n - 2X| where n ~ Binomial(n, 1/2). So you could start by constructing a partial probability table and calculating E(D) and Var(D) in the usual way.
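A quick simulation can sanity-check the signed-difference reading of the problem, i.e. D = n - 2X with X ~ Binomial(n, 1/2) counting heads, for which E(D) = 0 and Var(D) = 4 Var(X) = n; the sketch below assumes that reading rather than the absolute difference.

# Simulation check for D = (#tails - #heads) = n - 2X with X ~ Binomial(n, 1/2).
import random

n, trials = 20, 100_000
diffs = []
for _ in range(trials):
    heads = sum(random.randint(0, 1) for _ in range(n))
    diffs.append(n - 2 * heads)          # tails minus heads

mean = sum(diffs) / trials
var = sum((d - mean) ** 2 for d in diffs) / trials
print(mean, var)                          # roughly 0 and n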
|
{"url":"http://mathhelpforum.com/statistics/126530-variance-number-heads-tails-print.html","timestamp":"2014-04-19T07:37:43Z","content_type":null,"content_length":"4581","record_id":"<urn:uuid:a13639cd-e826-443a-b384-028d00b09fdb>","cc-path":"CC-MAIN-2014-15/segments/1397609536300.49/warc/CC-MAIN-20140416005216-00201-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Where Function pointers are stored ?
11-04-2012 #1
Registered User
Join Date
Nov 2012
Where Function pointers are stored ?
Where function are stored in memory?
In the following example , where function pointer fp stored ?
#include <math.h>
#include <stdio.h>
// Function taking a function pointer as an argument
double compute_sum(double (*funcp)(double), double lo, double hi)
{
    double sum = 0.0;

    // Add values returned by the pointed-to function '*funcp'
    for (int i = 0; i <= 100; i++)
    {
        double x, y;

        // Use the function pointer 'funcp' to invoke the function
        x = i / 100.0 * (hi - lo) + lo;
        y = (*funcp)(x);
        sum += y;
    }

    return sum;
}

int main(void)
{
    double (*fp)(double); // Function pointer
    double sum;

    // Use 'sin()' as the pointed-to function
    fp = &sin;
    sum = compute_sum(fp, 0.0, 1.0);
    printf("sum(sin): %f\n", sum);

    // Use 'cos()' as the pointed-to function
    sum = compute_sum(&cos, 0.0, 1.0);
    printf("sum(cos): %f\n", sum);

    return 0;
}
It depends on where you declare it. In the code above, it is stored on the stack, just like sum or any other local variables, like if you had a char * or int * variable. You could dynamically
allocate space for a function pointer, and have it be stored on the heap, or declare it as a global or static local, and have it be stored with the other global and static local variables, but
that is less common.
Remember, a function pointer is just a pointer, so all it contains is the address. It doesn't store the function itself (i.e. no code or variables).
11-04-2012 #2
Registered User
Join Date
Nov 2010
Long Beach, CA
|
{"url":"http://cboard.cprogramming.com/c-programming/151949-where-function-pointers-stored.html","timestamp":"2014-04-17T23:31:02Z","content_type":null,"content_length":"43912","record_id":"<urn:uuid:bf07ab6a-393e-452a-8a3d-d97b3fe9a632>","cc-path":"CC-MAIN-2014-15/segments/1397609532128.44/warc/CC-MAIN-20140416005212-00526-ip-10-147-4-33.ec2.internal.warc.gz"}
|
July 14th 2013, 05:42 AM
I have a quick question to ask. How do you find the angle for the bottom angle ( as in the one that goes only up to 0 to pi(which is max).
I had a look at the solutions for this one apparently it goes from 0 to pi/4
find the volume of the smaller of the two regions bounded by the sphere x^2+y^2+z^2=16 and the cone z=sqrt(x^2+y^2).
thanks(im sorry if this looks familiar ..i def have not asked the above issue before)
July 14th 2013, 06:14 AM
Re: volume
In future, it would be helpful to tell us the situation you are looking at (here "find the volume of the smaller of the two regions bounded by the sphere x^2+y^2+z^2=16 and the cone z=sqrt(x^2+y^
2)") first, then ask the question! Doing it the other way is confusing.
The "bottom angle" is the angle at the vertex of the cone, correct? In the xz-plane (y= 0) z= sqrt(x^2+ y^2) becomes z= sqrt(x^2)= |x| and so is a pair of lines z= x for x non-negative and z= -x
for x negative. Each of those makes an angle of pi/4 with the x-axis and pi/2 with each other.
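For what it is worth, the resulting spherical-coordinate integral can be checked with a computer algebra system; the small SymPy sketch below assumes the smaller region is the one inside the cone, i.e. rho from 0 to 4, phi from 0 to pi/4, theta from 0 to 2*pi.

# Volume of the region inside both the cone and the sphere of radius 4.
import sympy as sp

rho, phi, theta = sp.symbols('rho phi theta', positive=True)
V = sp.integrate(rho**2 * sp.sin(phi),
                 (rho, 0, 4), (phi, 0, sp.pi / 4), (theta, 0, 2 * sp.pi))
print(sp.simplify(V), float(V))   # 128*pi/3 - 64*sqrt(2)*pi/3, about 39.3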
|
{"url":"http://mathhelpforum.com/calculus/220561-volume-print.html","timestamp":"2014-04-16T06:39:37Z","content_type":null,"content_length":"4231","record_id":"<urn:uuid:787f18b5-8561-40b3-8594-ef61dbc26014>","cc-path":"CC-MAIN-2014-15/segments/1397609521512.15/warc/CC-MAIN-20140416005201-00130-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Randomized Algorithms
Stochastic Algorithms
Some of the fastest known algorithms for certain tasks rely on
Stochastic/Randomized Algorithms
Two common variations
– Monte Carlo
– Las Vegas
– We have already encountered some of both in this class
Monte Carlo algorithms have a consistent time complexity, but
there is a low probability of returning an incorrect/poor answer
Las Vegas algorithms return a correct answer, but there is a low
probability that it could take a relatively long time or not return
an answer at all
– Why "Las Vegas"? - Because the house always wins (will get a correct
answer for them) even though it might take a while
CS 312 – Stochastic Algorithms 1
Monte Carlo Algorithm – Fermat Test
If a number is prime then it will pass the Fermat primality test
There is less than a 50% probability that a composite number c passes
the Fermat primality test (i.e. that a^(c-1) ≡ 1 mod c for a random a such
that 1 ≤ a < c)
So try it k times
This is called Amplification of Stochastic Advantage
function primality2(N)
Input: Positive integer N
Output: yes/no
Choose a_1…a_k (k < N) random integers between 1 and N-1
if a_i^(N-1) ≡ 1 mod N for all a_i then
return yes (correct with probability at least 1 - 1/2^k)
else: return no
CS 312 – Stochastic Algorithms 2
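A short Python rendering of the same test with k random bases; the function name is mine, and Python's built-in pow does the modular exponentiation.

# Fermat primality test with k rounds (illustrative; Carmichael numbers can still fool it).
import random

def fermat_is_probably_prime(n, k=20):
    if n < 4:
        return n in (2, 3)
    for _ in range(k):
        a = random.randint(2, n - 2)      # random base
        if pow(a, n - 1, n) != 1:         # a^(n-1) mod n must be 1 if n is prime
            return False                  # definitely composite
    return True                           # probably prime

print(fermat_is_probably_prime(97), fermat_is_probably_prime(91))  # True, almost surely False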
Dealing with Local Optima
Assume an algorithm has a probability p of finding the
optimal answer for a certain problem
– Just run it multiple times with different random start states
– If chance of hitting a true optima is p, then running the problem k
times gives probability 1-(1-p)k of finding an optimal solution
– This is a Monte Carlo approach
Another approach is to add some randomness to the
algorithm and occasionally allow it to move to a neighbor
which increases the objective, allowing it to potentially
escape local optima
– This is a Las Vegas approach
CS 312 – Stochastic Algorithms 3
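As a quick numeric illustration of the amplification formula above (the values of p and k are arbitrary):

# k independent restarts succeed with probability 1 - (1 - p)^k.
p, k = 0.1, 20
print(1 - (1 - p) ** k)   # about 0.88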
Monte Carlo Algorithms – General Approach
Powerful approach for many tasks with no efficient
deterministic algorithm
– Especially useful in complicated tasks with a large number of
coupled variables
– Also useful when there is uncertainty in the inputs
1. Define a domain of possible inputs
2. Generate inputs randomly from the domain using an
appropriate specified probability distribution (sampling)
3. Perform a deterministic computation using the sampled
4. Aggregate the results of the individual computations into
the final result
CS 312 – Stochastic Algorithms 4
Monte Carlo Algorithm to Calculate π
Ratio of the area of the inscribed circle to the area of the square is
πr²/(2r)² = π/4
Choose n points in the square from a uniform random distribution
(e.g. throw n darts with a blindfold, etc.)
π = 4 ✕ number of points in circle/n
(i.e. for every 4 darts thrown about π of them will land in the circle)
Does this follow the general form?
How does the accuracy of the approximation change with n?
Make sure you sample from the representative distribution
CS 312 – Stochastic Algorithms 5
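A minimal Python version of the dart-throwing recipe; it samples the unit square and counts points inside the quarter circle, which gives the same ratio as the inscribed circle on the slide.

# Monte Carlo estimate of pi.
import random

def estimate_pi(n=1_000_000):
    inside = sum(1 for _ in range(n)
                 if random.random() ** 2 + random.random() ** 2 <= 1.0)
    return 4 * inside / n

print(estimate_pi())   # close to 3.1416; the error shrinks roughly like 1/sqrt(n)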
Markov Chain Monte Carlo Techniques
Monte Carlo Integration
– Solving complex integrals
Handling complex probability distributions?
MCMC - Markov Chain Monte Carlo algorithms
– Start from an initial arbitrary state
– Travel a Markov chain representing the distribution
Probability of next state is just a function of the current state
Follow this chain of states, sampling as you go, until space is
"sufficiently" sampled according to the probability distribution
Need a short "burn-in" phase where we discard early samples to avoid
dependency on the arbitrary initial state
– Good approach for solving Bayesian Networks, etc.
CS 312 – Stochastic Algorithms 6
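The slides stay high level, so here is a toy Metropolis random-walk sampler, one common MCMC algorithm (not necessarily the variant the course has in mind), targeting a standard normal density; the function and variable names are illustrative.

# Random-walk Metropolis: propose a nearby point, accept with probability
# min(1, target(new)/target(old)), and discard an initial burn-in.
import math, random

def target(x):                       # unnormalized standard normal density
    return math.exp(-x * x / 2)

def metropolis(steps=50_000, burn_in=1_000):
    x, samples = 0.0, []
    for i in range(steps):
        proposal = x + random.uniform(-1, 1)
        if random.random() < min(1.0, target(proposal) / target(x)):
            x = proposal
        if i >= burn_in:
            samples.append(x)
    return samples

s = metropolis()
print(sum(s) / len(s))               # near 0, the mean of the target distribution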
Las Vegas Algorithm - Quicksort
Quicksort sorts in place, using partitioning
– Example: Pivot about a random element (e.g. first element (3))
– 3 1 4 1 5 9 2 6 5 3 5 8 9 --- before
– 2 1 3 1 3 9 5 6 5 4 5 8 9 --- after
At most n swaps
– Pivot element ends up in its final position
– No element left or right of pivot will flip sides again
Sort each side independently
Recursive Divide and Conquer approach
Average case complexity is O(n log n) but empirically better constants
than other sorting algorithms
Speed depends on how well the random pivot splits the data
Worst case is O(n²) – Thus a Las Vegas algorithm
Selection algorithm for median finding is also a Las Vegas algorithm
CS 312 – Stochastic Algorithms 7
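A compact Python rendering of the Las Vegas idea: the output is always correctly sorted, while the running time depends on the random pivots. (This version is not in place, unlike the partitioning scheme the slide describes.)

# Randomized quicksort sketch.
import random

def quicksort(a):
    if len(a) <= 1:
        return a
    pivot = random.choice(a)                     # random pivot
    less    = [x for x in a if x < pivot]
    equal   = [x for x in a if x == pivot]
    greater = [x for x in a if x > pivot]
    return quicksort(less) + equal + quicksort(greater)

print(quicksort([3, 1, 4, 1, 5, 9, 2, 6, 5, 3, 5, 8, 9]))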
When to use what Algorithm?
Divide and Conquer
Graph Algorithms
Dynamic Programming
Linear Programming
Intelligent Search
Local Search
Stochastic Algorithms
TSP, Sort, Shortest Path, Knapsack, Multiplication, …
CS 312 – Stochastic Algorithms 8
When to use what Algorithm?
Divide and Conquer
– Problem has natural hierarchy with independent branches
– Speed-up happens when we can find short-cuts during partition/merge that can be taken
because of the divide and conquer
Graph Algorithms
– When finding reachability, paths, and properties of graphs, often fall under other
– Often simple and fast approximation approach, occasionally optimal
Dynamic Programming
– Overlapping subproblems (given by a recursive definition) that are only slightly
(constant factor) smaller than the original problem
Linear Programming
– Any optimization with linear objective and constraints
Intelligent Search
– Effective when we have some heuristic knowledge of the search space to allow pruning
Local Search
– Simple optimization technique for most search spaces – local optima potential
Stochastic Algorithms
– Sampling problems, amplification of stochastic advantage, random pivots, etc.
CS 312 – Stochastic Algorithms 9
Can often use a combination of different algorithms
– Divide and Conquer followed by different algorithm on
– Stochastic algorithms can often improve many of the base
algorithms for certain tasks
– Greedy components within algorithms,
– etc.
Be Creative!
– The ways in which you apply these algorithmic techniques will not
always be initially obvious
You now have a powerful toolbox of algorithmic
techniques and philosophies which will prove beneficial as
you solve complex algorithms in the future
CS 312 – Stochastic Algorithms 10
|
{"url":"http://www.docstoc.com/docs/101207834/Randomized-Algorithms","timestamp":"2014-04-21T08:52:56Z","content_type":null,"content_length":"58080","record_id":"<urn:uuid:4d6dd8bb-4ba6-4e7b-836c-25d48f5375f5>","cc-path":"CC-MAIN-2014-15/segments/1397609539665.16/warc/CC-MAIN-20140416005219-00612-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Normandy Isle, FL Math Tutor
Find a Normandy Isle, FL Math Tutor
...I am also able to show simple ways for students to understand the material. I have developed techniques and methods that facilitate learning Algebra. I try to make Algebra interesting by also
showing application of problems in real life.
48 Subjects: including precalculus, chemistry, elementary (k-6th), physics
...Please note that if your location is outside of my driving radius, I would charge an hour extra to cover the driving time and associated expenses. My cancellation notice is just 5 hours in
advance of the scheduled session. This allows me enough time to schedule another student in your place should you need to cancel.
17 Subjects: including algebra 2, algebra 1, English, prealgebra
...I will curve my tutoring to your specific learning styles and get to know you as an individual student rather than an experiment for a lesson plan. Tutoring is not about the extracurriculars
for me, it is about seeing that student finally get it and feel confident in the subject. I look forward to helping you and am excited to hear from you.
11 Subjects: including algebra 1, algebra 2, vocabulary, geometry
...Be it free-body diagrams, projectile motion, relativity, mechanical energy, optics, waves, torque, or magnets that are vexing you, I can help. I got a 5 on my AP Statistics exam in high school
and have tutored about a dozen students in both AP and college statistics classes (including stats gear...
59 Subjects: including statistics, discrete math, QuickBooks, Macintosh
...And I have taken the following graduate courses: Native American Religions, Advanced Fieldwork, and Ethno-Historical Research Methods. I was certified by the National Strength Professionals
Association in Personal Training in 2008 and worked for some time as a personal trainer. I'm also a 200 a...
40 Subjects: including algebra 1, writing, SAT math, biology
Related Normandy Isle, FL Tutors
Normandy Isle, FL Accounting Tutors
Normandy Isle, FL ACT Tutors
Normandy Isle, FL Algebra Tutors
Normandy Isle, FL Algebra 2 Tutors
Normandy Isle, FL Calculus Tutors
Normandy Isle, FL Geometry Tutors
Normandy Isle, FL Math Tutors
Normandy Isle, FL Prealgebra Tutors
Normandy Isle, FL Precalculus Tutors
Normandy Isle, FL SAT Tutors
Normandy Isle, FL SAT Math Tutors
Normandy Isle, FL Science Tutors
Normandy Isle, FL Statistics Tutors
Normandy Isle, FL Trigonometry Tutors
Nearby Cities With Math Tutor
Biscayne Park, FL Math Tutors
Carl Fisher, FL Math Tutors
Chapel Lakes, FL Math Tutors
Indian Creek Village, FL Math Tutors
Indian Creek, FL Math Tutors
Keystone Islands, FL Math Tutors
Matecumbe Key, FL Math Tutors
North Bay Village, FL Math Tutors
Pembroke Lakes, FL Math Tutors
Seybold, FL Math Tutors
Sunny Isles, FL Math Tutors
Surfside, FL Math Tutors
Upper Matecumbe Key, FL Math Tutors
Venetian Islands, FL Math Tutors
Windley Key, FL Math Tutors
|
{"url":"http://www.purplemath.com/Normandy_Isle_FL_Math_tutors.php","timestamp":"2014-04-21T05:06:57Z","content_type":null,"content_length":"24317","record_id":"<urn:uuid:a16de0bc-0ef7-437f-870c-80fdf834c3f7>","cc-path":"CC-MAIN-2014-15/segments/1397609539493.17/warc/CC-MAIN-20140416005219-00560-ip-10-147-4-33.ec2.internal.warc.gz"}
|
biased and unbiased
December 16th 2009, 07:50 AM #1
Junior Member
Jun 2008
biased and unbiased
$S^2 = \frac{x-u}{n - 1}$ and $S^2= \frac{x-u}{n}$
I cant type the summation sign and x bar so i use u
Show that one is biased and unbiased
what is u?
the sample mean is $\bar X$
the sum is $\sum_{i=1}^n X_i$
are the X's from any particular distribution?
You mean is this:
$s^2=\frac{(X-\overline X)^2}{N}$ and $s^2=\frac{(X-\overline X)^2}{N-1}$
$\sigma^2= s^2=\frac{(X-\overline X)^2}{N}$ is the estimate of the population variance and is unbiased, assuming that the population is infinite. It is like drawing balls from an urn with replacement, and
it is also true according to the Central Limit Theorem.
On the other hand, the variance of the sample is not infinite; it is biased, just like drawing balls from an urn without replacement. Therefore, for a better estimate of the variance from a given
sample, a correction factor $\frac{N}{N-1}$ is used.
The corrected estimate of sample variance becomes $(\frac{N}{N-1})\sigma^2= (\frac{N}{N-1})(\frac{(X-\overline X)^2}{N})$, and the result is $s^2=\frac{(X-\overline X)^2}{N-1}$
Is there a summation here? what is this U?.......
And I'm dying at ask 'what's a matta u'?
The Adventures of Rocky and Bullwinkle - Wikipedia, the free encyclopedia
The name of Bullwinkle's alma mater is Whassamatta U.
THis university was actually featured in the original cartoons,
though its exact location was never given; the film places it in Illinois.
Probably at my school?
Last edited by matheagle; December 18th 2009 at 05:32 PM.
Is there a summation here? what is this U?.......
And I'm dying at ask 'what's a matta u'?
The Adventures of Rocky and Bullwinkle - Wikipedia, the free encyclopedia
The name of Bullwinkle's alma mater is Whassamatta U.
THis university was actually featured in the original cartoons,
though its exact location was never given; the film places it in Illinois.
Probably at my school?
Yah, there should be a $\Sigma$.
Yah, hopeless. I should not have bothered.
That guy is biased for infinite populations. It is asymptotically unbiased though.
OP: You've left out at least some information, so I'll assume that $X_1, X_2, ... , X_n$ are iid, and that the terms are defined as Novice said above. Just take expectations, noting that $\sum_{i = 1}^n \left( X_i - \bar{X} \right)^2 = \sum_{i = 1}^n X_i^2 - n\bar{X}^2$. You'll have to flip around the formula $\mbox{Var} X = EX^2 - (EX)^2$, but it isn't more than a few lines.
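Carrying that computation through, assuming the $X_i$ are iid with mean $\mu$ and variance $\sigma^2$: $E\left[\sum_{i=1}^n (X_i-\bar{X})^2\right] = E\left[\sum_{i=1}^n X_i^2\right] - nE[\bar{X}^2] = n(\sigma^2+\mu^2) - n\left(\frac{\sigma^2}{n}+\mu^2\right) = (n-1)\sigma^2$. So dividing by $n-1$ gives an unbiased estimator of $\sigma^2$, while dividing by $n$ gives expected value $\frac{n-1}{n}\sigma^2$, i.e. a bias of $-\sigma^2/n$ that vanishes as $n\to\infty$.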
December 16th 2009, 05:14 PM #2
December 17th 2009, 08:57 AM #3
Sep 2009
December 18th 2009, 04:49 PM #4
December 19th 2009, 09:20 AM #5
Sep 2009
December 20th 2009, 07:57 AM #6
Senior Member
Oct 2009
|
{"url":"http://mathhelpforum.com/advanced-statistics/120783-biased-unbiased.html","timestamp":"2014-04-18T03:42:21Z","content_type":null,"content_length":"49687","record_id":"<urn:uuid:5d99d072-4303-48e9-86b5-de6100c020d1>","cc-path":"CC-MAIN-2014-15/segments/1397609532480.36/warc/CC-MAIN-20140416005212-00487-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Westvern, CA Geometry Tutor
Find a Westvern, CA Geometry Tutor
...These subjects in particular require a hands-on approach, and I like to employ a number of graphics, pictures, and real-world applications to make certain topics easier to digest. As far as
elementary age students are concerned, most of my experience comes from having tutoring an advanced eight-...
60 Subjects: including geometry, chemistry, Spanish, reading
...I began programming in high school, so the first advanced math that I did was discrete math (using Knuth's book called Discrete Mathematics). I have also participated in high school math
competitions (ie AIME) and a college math competition (the Putnam) for several years, and in both cases the ma...
28 Subjects: including geometry, Spanish, French, chemistry
...I taught HS chem for several years and have tutored several students over the years. Chemistry is a really fun subject. I am ready to pass on my chemistry knowledge to either you or your
24 Subjects: including geometry, chemistry, English, calculus
...I am lucky enough to be able to stay home with my 3 year old son while continuing to teach students in an online environment. I would be happy to tutor anyone in need of assistance in any
science, math or elementary subject. I am able to tutor students online or at my home or your home, whatever is easier for you!
21 Subjects: including geometry, reading, biology, algebra 1
...Each of my students learn study skills, time management, and how to get organized in addition to improving their grades. My passion is helping students actualize their potential and unlock
their inner confidence. Finally grasping a frustrating concept is one of the most exhilarating and rewarding feelings we can experience.
56 Subjects: including geometry, English, reading, chemistry
Related Westvern, CA Tutors
Westvern, CA Accounting Tutors
Westvern, CA ACT Tutors
Westvern, CA Algebra Tutors
Westvern, CA Algebra 2 Tutors
Westvern, CA Calculus Tutors
Westvern, CA Geometry Tutors
Westvern, CA Math Tutors
Westvern, CA Prealgebra Tutors
Westvern, CA Precalculus Tutors
Westvern, CA SAT Tutors
Westvern, CA SAT Math Tutors
Westvern, CA Science Tutors
Westvern, CA Statistics Tutors
Westvern, CA Trigonometry Tutors
Nearby Cities With geometry Tutor
Broadway Manchester, CA geometry Tutors
Cimarron, CA geometry Tutors
Dockweiler, CA geometry Tutors
Dowtown Carrier Annex, CA geometry Tutors
Foy, CA geometry Tutors
Green, CA geometry Tutors
La Tijera, CA geometry Tutors
Lafayette Square, LA geometry Tutors
Miracle Mile, CA geometry Tutors
Pico Heights, CA geometry Tutors
Preuss, CA geometry Tutors
Rimpau, CA geometry Tutors
View Park, CA geometry Tutors
Wagner, CA geometry Tutors
Windsor Hills, CA geometry Tutors
|
{"url":"http://www.purplemath.com/Westvern_CA_Geometry_tutors.php","timestamp":"2014-04-17T19:27:00Z","content_type":null,"content_length":"24155","record_id":"<urn:uuid:4e4b11e1-41d7-4006-9ca0-640730b965e3>","cc-path":"CC-MAIN-2014-15/segments/1398223211700.16/warc/CC-MAIN-20140423032011-00344-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Solve for 'x' algebraically...difficult and need help?
January 7th 2009, 01:39 PM #1
Jan 2009
Solve for 'x' algebraically...difficult and need help?
I need to know how to algebraically solve for 'x' in this equation:
Note: ' * ' denotes multiplication.
Please, show all of your steps because no matter what I do, I wind up going back and fourth.
The answer should be:
x = ((2*e*log(2)-Productlog((2^2e)*e*log(2))/(e*log(2)))
x ≈ 0.285983
I don't even know what "Productlog" is. I've heard of it used in a Lambert Function, but that's for programming and whatnot, so I'm not sure if it's the same. Grrr. If there are any other ways of
solving this, please explain.
Any help is appreciated, thanks.
I need to know how to algebraically solve for 'x' in this equation:
Note: ' * ' denotes multiplication.
Please, show all of your steps because no matter what I do, I wind up going back and fourth.
The answer should be:
x = ((2*e*log(2)-Productlog((2^2e)*e*log(2))/(e*log(2)))
x ≈ 0.285983
I don't even know what "Productlog" is. I've heard of it used in a Lambert Function, but that's for programming and whatnot, so I'm not sure if it's the same. Grrr. If there are any other ways of
solving this, please explain.
Any help is appreciated, thanks.
$\Rightarrow -2 e^x = x - 2$
Substitute $t = x - 2$:
$-2 e^{t+2} = t \Rightarrow -2 e^2 = t e^{-t}$
$\Rightarrow 2 e^2 = -t e^{-t}$
$\Rightarrow -t = W(2 e^2)$
$\Rightarrow x = 2 - W(2 e^2)$
where W is the Lambert W-function.
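For the curious, 'ProductLog' is Mathematica's name for the Lambert W function, so the quoted numerical answer can be checked directly; a small sketch assuming SciPy is available and that log means the natural logarithm:

# Numerical check of x = (2*e*log(2) - W(2^(2e)*e*log(2))) / (e*log(2)).
import numpy as np
from scipy.special import lambertw

e, ln2 = np.e, np.log(2)
x = (2 * e * ln2 - lambertw(2 ** (2 * e) * e * ln2).real) / (e * ln2)
print(x)   # approximately 0.285983, matching the value quoted above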
January 7th 2009, 02:52 PM #2
|
{"url":"http://mathhelpforum.com/algebra/67212-solve-x-algebraically-difficult-need-help.html","timestamp":"2014-04-21T07:24:50Z","content_type":null,"content_length":"35214","record_id":"<urn:uuid:11af6332-4f59-408e-8ba0-fe24123f4c3f>","cc-path":"CC-MAIN-2014-15/segments/1397609539665.16/warc/CC-MAIN-20140416005219-00245-ip-10-147-4-33.ec2.internal.warc.gz"}
|
On ω-independence and the Kunen-Shelah property
Jiménez Sevilla, María del Mar and Granero , A. S. and Moreno, José Pedro (2002) On ω-independence and the Kunen-Shelah property. Proceedings of the Edinburgh Mathematical Society, 45 (2). pp.
391-395. ISSN 0013-0915
Restricted to Repository staff only until 31 December 2020.
Official URL: http://journals.cambridge.org/download.php?file=%2F45393_1096F213E7B4D681B4962188EFC5A6ED_journals__PEM_PEM2_45_02_http://journals.cambridge.org/action/displayAbstract?fromPage=online&
We prove that spaces with an uncountable ω-independent family fail the Kunen-Shelah property. Actually, if {x_i : i ∈ I} is an uncountable ω-independent family, there exists an
uncountable subset J ⊂ I such that x_j does not belong to the closed convex hull of {x_i : i ∈ J \ {j}} for every j ∈ J. This improves a previous result due to Sersouri, namely
that every uncountable ω-independent family contains a convex right-separated subfamily.
Item Type: Article
Additional Information: Supported in part by DGICYT grants PB 97-0240 and BMF2000-0609.
Uncontrolled Keywords: ω-independence; non-separable Banach spaces; Kunen–Shelah property
Subjects: Sciences > Mathematics > Topology
ID Code: 16386
References: C. Finet and G. Godefroy, Biorthogonal systems and big quotient spaces, Contemp. Math. 85 (1989), 87–110.
D. H. Fremlin and A. Sersouri, On ω-independence in separable Banach spaces, Q. J. Math. 39 (1988), 323–331.
A. S. Granero, M. Jiménez-Sevilla and J. P. Moreno, Convex sets in Banach spaces and a problem of Rolewicz, Studia Math. 129 (1998), 19–29.
M. Jiménez-Sevilla and J. P. Moreno, Renorming Banach spaces with the Mazur intersection property, J. Funct. Analysis 144 (1997), 486–504.
N. J. Kalton, Independence in separable Banach spaces, Contemp. Math. 85 (1989), 319–323.
S. Negrepontis, Banach spaces and topology, in Handbook of set-theoretic topology, pp. 1045–1142 (North-Holland, Amsterdam, 1984).
R. R. Phelps, Convex functions, monotone operators and differentiability, 2nd edn, Lecture Notes in Mathematics, no. 1364 (Springer, 1993).
A. Sersouri, ω-independence in nonseparable Banach spaces, Contemp. Math. 85 (1989), 509–512.
S. Shelah, A Banach space with few operators, Israel J. Math. 30 (1978), 181–191.
S. Shelah, Uncountable constructions for B.A., e.c. groups and Banach spaces, Israel J. Math. 51 (1985), 273–297.
Deposited On: 17 Sep 2012 08:47
Last Modified: 07 Feb 2014 09:28
|
{"url":"http://eprints.ucm.es/16386/","timestamp":"2014-04-20T03:44:03Z","content_type":null,"content_length":"29792","record_id":"<urn:uuid:040408e5-3e94-4fab-a7d0-db93fbb2a94a>","cc-path":"CC-MAIN-2014-15/segments/1397609537864.21/warc/CC-MAIN-20140416005217-00289-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Parts of the Science Investigatory Project Report
Doing an investigatory project is considered a major achievement for any student in Science. Through scientific investigation, students learn how to apply acquired knowledge, scientific concepts,
theories, principles and laws of nature, and they exercise their higher-order thinking skills in conducting research. Below is a brief description of the parts of the Science
Investigatory Project Report.
The Title should be clear and precise. It has an objective or purpose. It should not be written too long or too short. By just reading the title, you can determine what the investigative study is
all about.
The Abstract should be one or two paragraphs only. It includes your research problems, the method or procedure that you used and the findings or conclusion of the study.
Chapter I
1. Introduction and Its Background
The Introduction is about one page only wherein it includes the background of the study and its rationale. It usually leads into the research problem.
2. Statement of the Problem
The Statement of the Problem has two categories, namely the general problem and the specific problems. Usually there is one general problem and three specific problems derived from it.
The research problems should be specific, reliable, valid, measurable and objectively stated. Each can be phrased as a question or as a declarative statement.
3. Formulation of the Hypothesis
The Formulation of the Hypothesis has two types, namely the null hypothesis and the affirmative hypothesis. A hypothesis is a scientific guess intended to be subjected to thorough investigation. It is
recommended to use a null hypothesis in your research project.
4. Significance of the Study
The Significance of the Study indicates how important your investigatory project is for people, the environment and the community as a whole. It should show the project's relevance in a changing world
or its potential impact on the field of technology.
5. Scope and Delimitation of the Study
The Scope and Delimitation of the Study covers the range of your research. It includes the period of research, the materials and equipment to be used, the subject of the study or the sample of the
study, the procedure and the statistical treatment to be used.
6. Definition of Terms
The Definition of Terms has two types: the Dictionary-derived definitions and the Operational definitions, which are derived from how these terms are used in your research.
Chapter II
Review of Related Literature and Studies
Related Literature
The Related Literature consists of statements taken from science books, journals, magazines, newspapers and other documents by recognized scientists, Science experts or well-known Science agencies. These
statements can support your study through their concepts, theories, principles and laws. Footnoting is important in this part.
Related Studies
The Related Studies are local and foreign studies that can contribute to your research or support your investigation scientifically. Footnoting is also important in
this part.
Chapter III
Methodology has several parts namely: the subject of the study, the procedure and the statistical treatment
1. The Subject of the Study
The Subject of the Study includes your population and the sample. It applies the sampling techniques to obtain a good sample of the study. Your sample should be valid and reliable.
2. The Procedure
The Procedure is the step-by-step, systematic process of doing your research. It includes the materials with the right amounts and measurements, and the appropriate equipment to be used in the
scientific investigation. It consists of several trials with control variables, independent variables and dependent variables. Gathering data is essential in any kind of research. It is
recommended to use control and experimental set-ups to arrive at a valid conclusion.
3. The Statistical Treatment
The Statistical Treatment comes in various forms. It can be the mean, median, mode, percentage, chi-square, standard deviation, t-test, Pearson r, Spearman rank, ANOVA I or ANOVA II. A t-test is
commonly recommended for experimental research that compares a control and an experimental set-up.
Chapter IV
Presentation, Analysis and Interpretation of Data
1. Presentation of Data, Analysis and Interpretation of Data
The data gathered should be presented in order to be analyzed. They may be presented in two forms, namely through a table or a graph, and you may use both if that makes the data clearer. A
table has labels with quantities, descriptions and units of measurement. Graphs come in several types, namely the line graph, bar graph, pie graph and pictograph; choose whichever type you prefer to
use. Analyze the data that have been gathered and presented in the table or graph scientifically, and interpret them according to what has been quantified and measured. The numerical data should be
interpreted clearly in simple, descriptive statements.
2. Results
Results show the findings or outcomes of your investigation. The result must be based according to the interpreted data.
Chapter V
Summary, Conclusion and Recommendation
1. Summary
The Summary briefly summarizes your research from Chapter I to Chapter IV which includes the research problems, methodology and findings. It consists of one or two paragraphs only.
2. Conclusion
The Conclusion is the direct statement based on findings or results. It should answer your hypothesis and research problems.
3. Recommendation
The Recommendation is given based on your conclusion. You may give a few recommendations that you think can help fellow Science students, researchers, consumers or the community as a whole.
26 Responses to “Parts of the Science Investigatory Project Report”
1. Psychology degrees online,Psychology Degree Online September 29, 2012 at 3:41 am #
Normally I do not learn post on blogs, but I would like to say that this write-up very pressured me to take a look at and do it! Your writing taste has been surprised me. Thank you, very great
□ masmeron October 7, 2012 at 5:50 pm #
I do appreciate your comments. Thank you so much.
2. daydreamer January 28, 2013 at 9:37 am #
How to have a good and outstanding Ip in terms of hardcopy or the product we , because we want to have the highest grade.
□ masmeron January 29, 2013 at 1:50 am #
The quality of the research that matters most rather than the quality of materials that you will use.
☆ daydreamer February 11, 2013 at 11:17 am #
so we will focused on the hard copy.
Is masmeron your real name cause we will add you in our acknoledgement.
3. eeeeeeee January 29, 2013 at 8:15 am #
it so interesting
4. mikaela March 9, 2013 at 11:53 pm #
it helped much! :))
□ masmeron April 7, 2013 at 11:17 pm #
thank you.
5. Rafael May 10, 2013 at 7:30 am #
finally i finished my SIP thanks to this info
6. Janiza Granado July 30, 2013 at 3:49 am #
this is very helpful to our project of my classmates !!!
7. ben August 9, 2013 at 3:19 am #
it really did helped me alot tnx tnx tnx :) may yuo continue helping others :)
#GODBLESS YOU
8. ADRIANNE August 20, 2013 at 3:59 am #
YOU’VE HELPED ALLOT!!!!!!!!!!
9. joshua September 10, 2013 at 10:28 am #
what are the parts for elementary level?
□ masmeron October 15, 2013 at 11:51 pm #
the same parts but better to simplify it for elementary level
10. joshua September 10, 2013 at 10:36 am #
What are the three kinds of variables?
□ AAAAAa November 21, 2013 at 11:00 am #
i know there are two not three. independent and dependent
☆ masmeron December 1, 2013 at 2:38 pm #
plus another one
11. NolisaCarer luv ko September 26, 2013 at 6:15 am #
thanks 4 informations
12. NolisaCarer luv ko September 26, 2013 at 6:17 am #
im very thankful because you gave us an opportunity to have a parts of investigatory project thank you very much = DEMAWATA =
13. Roslyn October 17, 2013 at 8:47 pm #
Quality articles or reviews is the important to invite the
viewers to go to see the site, that’s what this web site is providing.
14. AAAAAa November 21, 2013 at 10:59 am #
Related literature has 2 subtopics:local and foreign. same with studies. foreign studies and foreign lit
□ masmeron December 1, 2013 at 2:39 pm #
right if you want to make a complete research, you may include them.
15. Trisha December 2, 2013 at 8:51 pm #
Thank you very much! Such a good work, it helped me a lot. Thank you and God bless :) More powers
16. Kathy February 15, 2014 at 7:51 am #
How to show perfectly the summarize? shall i start to use the name of our product to begin or simple intro then start?
□ masmeron February 20, 2014 at 9:40 am #
Not necessarily to begin with your product’s name. Instead, you may begin with your research problem.
- August 28, 2012
[...] Parts of the Science Investigatory Project Report. [...]
|
{"url":"http://masmeronproductions.wordpress.com/2012/07/18/parts-of-the-science-investigatory-project-report/","timestamp":"2014-04-20T08:15:52Z","content_type":null,"content_length":"100856","record_id":"<urn:uuid:ea432d12-53f9-4bc4-a811-21c928b857f2>","cc-path":"CC-MAIN-2014-15/segments/1397609538110.1/warc/CC-MAIN-20140416005218-00642-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Spectral Plot
1. Exploratory Data Analysis
1.3. EDA Techniques
1.3.3. Graphical Techniques: Alphabetic
Purpose: Examine Cyclic Structure
A spectral plot (Jenkins and Watts 1968 or Bloomfield 1976) is a graphical technique for examining cyclic structure in the frequency domain. It is a smoothed Fourier transform of the autocovariance function.
The frequency is measured in cycles per unit time where unit time is defined to be the distance between 2 points. A frequency of 0 corresponds to an infinite cycle while a
frequency of 0.5 corresponds to a cycle of 2 data points. Equi-spaced time series are inherently limited to detecting frequencies between 0 and 0.5.
Trends should typically be removed from the time series before applying the spectral plot. Trends can be detected from a run sequence plot. Trends are typically removed by
differencing the series or by fitting a straight line (or some other polynomial curve) and applying the spectral analysis to the residuals.
Spectral plots are often used to find a starting value for the frequency, ω, in the sinusoidal model
\[ Y_{i} = C + \alpha\sin{(2\pi\omega t_{i} + \phi)} + E_{i} \]
See the beam deflection case study for an example of this.
Sample Plot: This spectral plot shows one dominant frequency of approximately 0.3 cycles per observation.
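A raw (unsmoothed) periodogram conveys the same basic information and is easy to compute; the sketch below is illustrative only (Python with NumPy, using simulated data containing a 0.3 cycles-per-observation component), whereas the spectral plot described on this page additionally smooths the variance estimates.
import numpy as np
t = np.arange(200)
y = np.sin(2 * np.pi * 0.3 * t) + np.random.normal(scale=0.5, size=t.size)
y = y - y.mean()                               # remove the constant level before the transform
freqs = np.fft.rfftfreq(y.size)                # 0 to 0.5 cycles per observation
power = np.abs(np.fft.rfft(y)) ** 2 / y.size   # raw periodogram ordinates
print(freqs[1 + np.argmax(power[1:])])         # dominant frequency, close to 0.3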
Definition: Variance Versus Frequency
The spectral plot is formed by:
• Vertical axis: Smoothed variance (power)
• Horizontal axis: Frequency (cycles per observation)
The computations for generating the smoothed variances can be involved and are not discussed further here. The details can be found in the Jenkins and Bloomfield references and
in most texts that discuss the frequency analysis of time series.
Questions The spectral plot can be used to answer the following questions:
1. How many cyclic components are there?
2. Is there a dominant cyclic frequency?
3. If there is a dominant cyclic frequency, what is it?
Importance: Check Cyclic Behavior of Time Series
The spectral plot is the primary technique for assessing the cyclic nature of univariate time series in the frequency domain. It is almost always the second plot (after a run sequence plot) generated in a frequency domain analysis of a time series.
Examples 1. Random (= White Noise)
2. Strong autocorrelation and autoregressive model
3. Sinusoidal model
Related Techniques Autocorrelation Plot
Complex Demodulation Amplitude Plot
Complex Demodulation Phase Plot
Case Study The spectral plot is demonstrated in the beam deflection data case study.
Software Spectral plots are a fundamental technique in the frequency analysis of time series. They are available in many general purpose statistical software programs.
|
{"url":"http://www.itl.nist.gov/div898/handbook/eda/section3/eda33r.htm","timestamp":"2014-04-18T14:11:42Z","content_type":null,"content_length":"8009","record_id":"<urn:uuid:cb38640e-51a2-4fab-abec-bf2b170cb113>","cc-path":"CC-MAIN-2014-15/segments/1397609533689.29/warc/CC-MAIN-20140416005213-00595-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Math Forum - Problems Library - All Discrete Math Problems of the Week
Browse all Discrete Math Problems of the Week
Participation in the Discrete Math Problems of the Week allows teachers and students to address the NCTM Problem Solving Standard for Grades 9-12, enabling students to build new mathematical
knowledge through problem solving; solve problems that arise in mathematics and in other contexts; apply and adapt a variety of appropriate strategies to solve problems; and monitor and reflect
on the process of mathematical problem solving.
For background information elsewhere on our site, explore the High School Discrete Math area of the Ask Dr. Math archives. To find relevant sites on the Web, browse and search Discrete
Mathematics in our Internet Mathematics Library.
Access to these problems requires a Membership.
|
{"url":"http://mathforum.org/library/problems/sets/dmpow_all.html","timestamp":"2014-04-17T16:50:13Z","content_type":null,"content_length":"27961","record_id":"<urn:uuid:00907589-be94-429a-b0b0-80157aa98b92>","cc-path":"CC-MAIN-2014-15/segments/1397609530136.5/warc/CC-MAIN-20140416005210-00448-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Shannon capacity of all graphs of order 6
up vote 3 down vote favorite
In Shannon's paper, "The Zero Error Capacity of a Noisy Channel", he says that the Shannon capacity of all graphs of order 6 are determined, except for four exceptions. That is, for all but the four
exceptions, we have $\alpha(G) = \chi(\bar{G})$. He says the four exceptions "can be given in terms of the capacity of [the 5-cycle]." I want to understand this better.
By the way, I think in terms of the Shannon capacity as $\Theta(G) = \sup_k \sqrt[k]{\alpha(G^k)}$, whereas Shannon considered the log of this in his paper. Note, $G^k$ denotes the strong product of
$k$ copies of $G$.
One of the four exceptions is $C_5 + K_1$, a disjoint union. We have $\Theta(C_5 + K_1) \geq \Theta(C_5) + \Theta(K_1)$. But, I read in a paper by Alon that if $\alpha(G) = \Theta(G)$ for one of the
two graphs in the sum, then we have in fact equality. So, here we would have equality, so that $\Theta(C_5 + K_1) = \sqrt{5} + 1$. I get that then. So, my question is with the three remaining graphs.
One graph would be the wheel on 6 vertices, $W_6 = C_5 \vee K_1$, where $\vee$ means the graph join. The other two graphs would be spanning subgraphs of these. Delete one spoke of the wheel to get
one. Delete a neighboring spoke, of the first deleted spoke, to get the next. Now that we know $\Theta(C_5) = \sqrt{5}$, how do we determine $\Theta(W_6)$ and $\Theta$ of the other two exceptions?
Probably your question would be a bit more friendly if you defined the functions $\alpha(G)$, $\chi(\bar G)$ etc. – Anthony Quas Dec 11 '12 at 20:14
@anthony $\alpha(G)$ is the independence number, the largest set of vertices such that none are adjacent to each other. $\chi(G)$ is the chromatic number of $G$, the least number of colors needed
to color the vertices of $G$ such that no two adjacent vertices are colored the same. $\bar{G}$ is the complement of $G$, so that $v_i$ is adjacent to $v_j$ in $G$ if and only if $v_i$ is not
adjacent to $v_j$ in $\bar{G}$. – Graphth Dec 11 '12 at 20:49
There's still something missing here. Say the independence number of $G$ is $2$. $\sup_k\root k\of2$ is infinite; $\root k\of2\to\infty$ as $k\to0^+$. – Gerry Myerson Dec 11 '12 at 22:15
@gerry Yup, sorry about that. I had a typo in my definition of the Shannon capacity. Fixed it, thanks for your tip. – Graphth Dec 11 '12 at 23:21
|
{"url":"http://mathoverflow.net/questions/116110/shannon-capacity-of-all-graphs-of-order-6","timestamp":"2014-04-19T15:31:15Z","content_type":null,"content_length":"51413","record_id":"<urn:uuid:6c099975-8fe0-4706-abb9-4140e8ae9936>","cc-path":"CC-MAIN-2014-15/segments/1397609537271.8/warc/CC-MAIN-20140416005217-00434-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Distributing a Number
We continue with the example presented at the end of the previous page.
6(2 + 4x)
The Distributive Property tells us that we can remove the parentheses if the term that the polynomial is being multiplied by is distributed to, or multiplied with each term inside the parentheses.
This definition is tough to understand without a good example, so observe the example below carefully.
6(2 + 4x)
now by applying the Distributive Propery
6 * 2 + 6 * 4x
The parentheses are removed and each term from inside is multiplied by the six.
Now we can simplify the multiplication of the individual terms:
12 + 24x
Another example is presented on the next page.
|
{"url":"http://www.algebrahelp.com/lessons/simplifying/distribution/pg2.htm","timestamp":"2014-04-21T09:48:09Z","content_type":null,"content_length":"6032","record_id":"<urn:uuid:77d257bd-1b29-46ca-a67c-15819891c0a9>","cc-path":"CC-MAIN-2014-15/segments/1397609539705.42/warc/CC-MAIN-20140416005219-00648-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Re: st: xtabond with constraints
Notice: On March 31, it was announced that Statalist is moving from an email list to a forum. The old list will shut down at the end of May, and its replacement, statalist.org, is already up and running.
Re: st: xtabond with constraints
From Pedro Ferreira <pedro.ferreira.cmu@gmail.com>
To statalist@hsphsun2.harvard.edu
Subject Re: st: xtabond with constraints
Date Mon, 8 Aug 2011 20:07:03 -0400
Thanks Nick,
I have included that approach in my previous message. However, there
is a dynamic panel nature to what I am trying to do. Using your
variable names below, in my case y = x(t+1) and I have x(t) on the
right hand side. I am wondering if you have any insight about what is
best: xtabond with y_2 as a function of x(t) or reg3 with the
constraint that the coefficient on x(t-1) should be -1 (using the
other equations in reg3 to instrument x(t) and x(t-1) with lags)?
Thanks a lot!
On Mon, Aug 8, 2011 at 12:51 PM, Nick Cox <njcoxstata@gmail.com> wrote:
> If the coefficient in x[t-1] must be identically -1 then you can add
> that variable to the response variable before you fit a model in terms
> of other predictors.
> That is
> y = <some stuff> -x[t-1] ...
> is equivalent to
> y + x[t-1] = <some stuff>
> so model y_2 = y + x[t-1]
> Nick
> On Mon, Aug 8, 2011 at 4:37 PM, Pedro Ferreira
> <pedro.ferreira.cmu@gmail.com> wrote:
>> Dear All,
>> I am running a model of x(t+1) on x(t) and x(t-1) plus other controls.
>> I need to restrict the coefficient on x(t-1) to be -1. Any suggestions
>> for how to do this? My understanding is that xtabond2 does not allow
>> for constraints. I can add x(t+1) with x(t-1) and make this my new
>> dependent variable and instrument x(t) gmm style with deep lags of x,
>> namely deeper than 4. Is this the best strategy? Another approach I
>> have considered is to use 3SLS, first equation is just x(t+1) on x(t)
>> and x(t-1) and I use the other equations to run regressions of x(t)
>> and x(t-1) on deep lags. Using reg3 allows me to force the coefficient
>> on x(t-1) to be equal to -1 in the first equation. Unfortunately, reg3
>> does not give me the AR tests for how good the lags are as
>> instruments. Any ideas, thoughts would be highly appreciated. Thanks,
> *
> * For searches and help try:
> * http://www.stata.com/help.cgi?search
> * http://www.stata.com/support/statalist/faq
> * http://www.ats.ucla.edu/stat/stata/
* For searches and help try:
* http://www.stata.com/help.cgi?search
* http://www.stata.com/support/statalist/faq
* http://www.ats.ucla.edu/stat/stata/
|
{"url":"http://www.stata.com/statalist/archive/2011-08/msg00336.html","timestamp":"2014-04-17T13:38:16Z","content_type":null,"content_length":"10196","record_id":"<urn:uuid:4689d731-a199-4b2e-9e78-df6a7e42b281>","cc-path":"CC-MAIN-2014-15/segments/1397609530131.27/warc/CC-MAIN-20140416005210-00092-ip-10-147-4-33.ec2.internal.warc.gz"}
|
localization of a commutative ring
The localization of a commutative ring $R$ at a set $S$ of its elements is a new ring $R[S^{-1}]$ in which the elements of $S$ become invertible (units) and which is universal with this property.
When a ring is interpreted, under Isbell duality, as the ring of functions on some space $X$ (its spectrum), localizing it at $S$ corresponds to restricting to the subspace $Y \hookrightarrow X$ on
which the elements of $S$ do not vanish (i.e., become invertible).
See also commutative localization and localization of a ring (noncommutative).
Let $R$ be a commutative ring. Let $S \hookrightarrow U(R)$ be a multiplicative subset of the underlying set.
The following gives the universal property of the localization.
The localization $L_S \colon R \to R[S^{-1}]$ is a homomorphism to another commutative ring $R[S^{-1}]$ such that
1. for all elements $s \in S \hookrightarrow R$ the image $L_S(s) \in R[S^{-1}]$ is invertible (is a unit);
2. for every other homomorphism $R \to \tilde R$ with this property, there is a unique homomorphism $R[S^{-1}] \to \tilde R$ such that we have a commuting diagram
$\array{ R &\stackrel{L_S}{\to}& R[S^{-1}] \\ & \searrow & \downarrow \\ && \tilde R } \,.$
The following gives an explicit description of the localization
For $R$ a commutative ring and $s \in R$ an element, the localization of $R$ at $s$ is
$R[s^{-1}] = R[X]/(X s - 1) \,.$
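A standard example: for $R = \mathbb{Z}$ and $s = 2$ this yields the ring of dyadic rational numbers,
$\mathbb{Z}[2^{-1}] \simeq \mathbb{Z}[X]/(2 X - 1) \simeq \left\{ a/2^k \;:\; a \in \mathbb{Z},\, k \in \mathbb{N} \right\} \subset \mathbb{Q} \,.$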
Revised on April 6, 2014 02:00:51 by
Urs Schreiber
|
{"url":"http://www.ncatlab.org/nlab/show/localization+of+a+commutative+ring","timestamp":"2014-04-19T07:12:57Z","content_type":null,"content_length":"30440","record_id":"<urn:uuid:b0a9a0b9-e8a2-488d-bfaf-5b0bdfcc24de>","cc-path":"CC-MAIN-2014-15/segments/1398223205137.4/warc/CC-MAIN-20140423032005-00291-ip-10-147-4-33.ec2.internal.warc.gz"}
|
ANOVA Statistical Programming with PHP
Published on ONLamp.com (http://www.onlamp.com/)
by Paul Meagher
The Analysis of Variance (ANOVA) technique is the most popular statistical technique in behavioral research. The ANOVA technique also comes up often in agricultural, pharmaceutical, and quality
control contexts. This article will introduce you to six major steps involved in using this technique by implementing them with a combination of PHP, MySQL, and JpGraph. The result is code and
knowledge that you can use to think about and potentially solve your own data-mining problems.
The scope of the ANOVA technique can narrow to include only the formal mathematics required to partition the total variance of a data matrix into between-group and within-group variance estimates.
Within this narrow construal, one might also include the machinery to test whether the ratio of the between to within-group variance is significant (i.e., whether there is a treatment effect).
In this article, we will construe the ANOVA technique more broadly to consist of multiple data-analysis steps to take when conducting an ANOVA analysis. The ANOVA technique here is a methodical
approach to analyzing data that issues from a particular type of data-generating process. The data-generating process will ideally arise from a blocked and randomized experimental design.
Test Anxiety Study
The prototypical Single Factor ANOVA experiment is a simple comparative experiment. These are experiments that involve applying a treatment to homogeneous experimental units and measuring the
responses that occur when administering different levels of the treatment to these experimental units.
The hypothetical study we will discuss in this article examines the effect of anxiety (i.e., the treatment) on test performance (i.e., the response). The study randomly assigned 30 subjects to a
low-anxiety, moderate-anxiety, or high-anxiety treatment condition. The experimenter recorded a test score measurement for each subject. The empirical issue of concern is whether there is an effect
of Anxiety Level on Test Score.
The idea for this hypothetical study and the data to analyze originally appeared in the popular textbook by Gene V. Glass & Kenneth D. Hopkins (1995) Statistical Methods in Education and Psychology.
The reported results agree with their results. You will also find it useful to consult this textbook for its excellent and comprehensive treatment of the ANOVA technique.
Analysis Source Code
You can use the single factor ANOVA technique to determine whether anxiety significantly influences test scores. The following PHP script implements six major steps in the single-factor ANOVA
technique used to analyze data from our hypothetical test anxiety study. After you have examined the overall flow of the script, proceed to the rest of the article where we examine the tabular and
graphical output that each step in this script generates.
<?php
/**
* @package SFA
* Script performs single factor ANOVA analysis on test anxiety
* data stored in a database.
* @author Paul Meagher, Datavore Productions
* @license PHP v3.0
* @version 0.7
* The config.php file defines paths to the root of the PHPMATH
* and JPGRAPH libraries and sets up a global database connection.
require_once "config.php";
require_once PHPMATH ."/SFA/SingleFactorANOVA.php";
$sfa = new SingleFactorANOVA;
// Step 1: Specify and analyze data
// (Property names below are assumed for illustration; they match the instance
// variables that analyze() reads. The table name is hypothetical.)
$sfa->table     = "test_anxiety";
$sfa->treatment = "anxiety";
$sfa->response  = "score";
$sfa->analyze();

// Step 2: Show raw data
$sfa->showRawData();

// Step 3: Show box plot
$params["figureTitle"] = "Anxiety Study";
$params["xTitle"] = "Anxiety Level";
$params["yTitle"] = "Test Score";
$params["yMin"] = 0;
$params["yMax"] = 100;
$params["yTicks"] = 10;
$sfa->showBoxPlot($params);

// Step 4: Show descriptive statistics
$sfa->showDescriptiveStatistics();

// Step 5: Show single factor ANOVA source table
$sfa->showSourceTable();

// Step 6: Show mean differences
$sfa->showMeanDifferences();
Step 1: Specify and Analyze Data
After we instantiate the SingleFactorAnova class, we start by specifying 1) what data table to use, 2) what table field name to use as the treatment column, and 3) what table field name to use as the
response column:
// Step 1: Specify and analyze data
The culmination of the first step is the invocation of the $this->analyze() method. This method is the centerpiece of this SingleFactorANOVA class and is reproduced below. Note that I am using the
PEAR:DB API to interact with a MySQL database.
/**
* Compute single factor ANOVA statistics.
*/
function analyze() {
  global $db;
  $sql  = " SELECT $this->treatment, sum($this->response), ";
  $sql .= " sum($this->response * $this->response), ";
  $sql .= " count($this->response) ";
  $sql .= " FROM $this->table ";
  $sql .= " GROUP BY $this->treatment ";
  $result = $db->query($sql);
  if (DB::isError($result)) {
    die($result->getMessage());
  } else {
    while ($row = $result->fetchRow()) {
      $level = $row[0];
      $this->levels[] = $row[0];
      $this->sums[$level] = $row[1];
      $this->n[$level] = $row[3];
      $this->means[$level] = $this->sums[$level] / $this->n[$level];
      $this->ss[$level] = $row[2] - $this->n[$level] * pow($this->means[$level], 2);
      $this->variance[$level] = $this->ss[$level] / ($this->n[$level] - 1);
    }
  }
  // Totals and the grand mean are computed after the per-level loop.
  $this->sums["total"] = array_sum($this->sums);
  $this->n["total"] = array_sum($this->n);
  $this->means["grand"] = $this->sums["total"] / $this->n["total"];
  $this->ss["within"] = array_sum($this->ss);
  $this->ss["between"] = 0;
  foreach ($this->levels as $level) {
    $this->effects[$level] = $this->means[$level] - $this->means["grand"];
    $this->ss["between"] += $this->n[$level] * pow($this->effects[$level], 2);
  }
  $this->num_levels = count($this->levels);
  $this->df["between"] = $this->num_levels - 1;
  $this->df["within"] = $this->n["total"] - $this->num_levels;
  $this->ms["between"] = $this->ss["between"] / $this->df["between"];
  $this->ms["within"] = $this->ss["within"] / $this->df["within"];
  $this->f = $this->ms["between"] / $this->ms["within"];
  $F = new FDistribution($this->df["between"], $this->df["within"]);
  $this->p = 1 - $F->CDF($this->f);
  $this->crit = $F->inverseCDF(1 - $this->alpha);
}
We could have passed a data matrix into the analysis method. Instead I assume that the data resides in a database and use SQL to extract and sort the records that feed into the subsequent analysis
code. I made this storage assumption because of my interest in developing a scalable data-analysis solution.
The bulk of the code involves calculating the value of various instance variables to use in subsequent reporting steps. Most of these instance variables are associative arrays with indices such as
total, between, and within. This is because the ANOVA procedure involves computing the total variance (in our test scores) and partitioning it into between-group (i.e., between treatment levels) and
within-group (i.e., within a treatment level) variance estimates.
At the end of the analyze method we evaluate the probability of the observed F score by first instantiating an FDistribution class with our degrees of freedom parameters:
$F = new FDistribution($this->df["between"], $this->df["within"]);
To obtain the probability of the obtained F score, we subtract the value returned by the cumulative distribution function (evaluated at the obtained F score) from 1:
$this->p = 1 - $F->CDF($this->f);
Finally, we invoke the inverse cumulative distribution function using 1 minus our alpha setting (i.e., 1 - 0.05) in order to set a critical F value that defines the decision criterion we will use to
reject the null hypothesis which states that there is no difference between treatment-level means.
$this->crit = $F->inverseCDF(1 - $this->alpha);
If our observed F score is visibly greater than the critical F score, we can conclude that at least one of the means differs significantly from the others. A p value (i.e., $this->p) less than
0.05 (or whatever your null rejection setting is) would also lead you to reject the null hypothesis.
The formula for decomposing the total sum of squares (first term) into a between-groups component (second term) and a within-group component (third term) appears in Figure 1.
Figure 1. Formula for decomposing the sum of squares.
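Written out, the decomposition is:
\[ \sum_{j=1}^{J}\sum_{i=1}^{n_j} (X_{ij} - \bar{X})^2 \;=\; \sum_{j=1}^{J} n_j\,(\bar{X}_j - \bar{X})^2 \;+\; \sum_{j=1}^{J}\sum_{i=1}^{n_j} (X_{ij} - \bar{X}_j)^2 \]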
The symbol $X_{ij}$ denotes the $i$-th score in treatment level $j$, $\bar{X}_j$ the mean of level $j$, $\bar{X}$ the grand mean, $n_j$ the number of scores in level $j$, and $J$ the number of treatment levels.
Step 2: Show Raw Data
It is always good to begin your analysis by making sure that you've properly loaded your data. We can call the showRawData() method to dump our test-anxiety data table to a web browser.
/**
* Output contents of database table.
*/
function showRawData() {
  global $db;
  $data = $db->tableInfo($this->table, DB_TABLEINFO_ORDER);
  $columns = array_keys($data["order"]);
  $num_columns = count($columns);
  print "<table border='1' cellspacing='0' cellpadding='3'>";
  print "<tr bgcolor='ffffcc'>";
  for ($i = 0; $i < $num_columns; $i++) {
    print "<td align='center'><b>" . $columns[$i] . "</b></td>";
  }
  print "</tr>";
  $fields = implode(",", $columns);
  $sql = " SELECT $fields FROM $this->table ";
  $result = $db->query($sql);
  if (DB::isError($result)) {
    die($result->getMessage());
  } else {
    while ($row = $result->fetchRow()) {
      print "<tr>";
      foreach ($row as $key => $value) {
        print "<td>$value</td>";
      }
      print "</tr>";
    }
  }
  print "</table>";
}
This code generates as output the table below:
Table 1. Show Raw Data
│ id │ anxiety │ score │
│ 1 │ low │ 26 │
│ 2 │ low │ 34 │
│ 3 │ low │ 46 │
│ 4 │ low │ 48 │
│ 5 │ low │ 42 │
│ 6 │ low │ 49 │
│ 7 │ low │ 74 │
│ 8 │ low │ 61 │
│ 9 │ low │ 51 │
│ 10 │ low │ 53 │
│ 11 │ moderate │ 51 │
│ 12 │ moderate │ 50 │
│ 13 │ moderate │ 33 │
│ 14 │ moderate │ 28 │
│ 15 │ moderate │ 47 │
│ 16 │ moderate │ 50 │
│ 17 │ moderate │ 48 │
│ 18 │ moderate │ 60 │
│ 19 │ moderate │ 71 │
│ 20 │ moderate │ 42 │
│ 21 │ high │ 52 │
│ 22 │ high │ 64 │
│ 23 │ high │ 39 │
│ 24 │ high │ 54 │
│ 25 │ high │ 58 │
│ 26 │ high │ 53 │
│ 27 │ high │ 77 │
│ 28 │ high │ 56 │
│ 29 │ high │ 63 │
│ 30 │ high │ 59 │
A tip for data miners: Maybe you already have some data in your databases to which you can adapt this code. Look for situations where you have an enum data type to act as your treatment-level field
and a corresponding integer or float column that measures some response associated with that treatment-level.
Step 3: Show Box Plot
Early in your analysis you should graph your data so that you are being guided by proper overall intuitions about your data. A commonly recommended way to visualize ANOVA data is to use side-by-side
box plots for each treatment level. To generate these box plots we need to compute a five-number summary for each treatment level, consisting of the minimum value, first quartile, median, third
quartile, and maximum value. The showBoxPlot($params) method computes these summary values and uses them to generate the treatment-level box plots.
Figure 2 — box plot of test anxiety data
The showBoxPlot($params) method is actually a wrapper around JpGraph library methods. The $params argument allows us to supply parameter values needed to fine tune JpGraph's output as you can see if
you examine the method source code:
/**
* The showBoxPlot method is a wrapper for JPGRAPH methods.
* The JPGRAPH constant pointing to the root of the JpGraph
* library is specified in the config.php file.
*
* I only do very basic $params array handling in this method. There
* is a lot of room for improvement in making parameter handling more
* extensive (i.e., so you have more control over aspects of your plot)
* and more intelligent.
*/
function showBoxPlot($params = array()) {
  include_once JPGRAPH . "/src/JpGraph.php";
  include_once JPGRAPH . "/src/jpgraph_stock.php";
  $summary = $this->getFiveNumberSummary();
  $yData = array();
  foreach ($this->levels as $level) {
    // Data must be in the format: q1, q3, min, max, median.
    $plot_parts = array("q1", "q3", "min", "max", "median");
    foreach ($plot_parts as $part) {
      $yData[] = $summary[$level][$part];
    }
  }
  if (isset($params["figureTitle"])) {
    $figureTitle = $params["figureTitle"];
    $xTitle = $params["xTitle"];
    $yTitle = $params["yTitle"];
  } else {
    $figureTitle = "Figure 1";
    $xTitle = "x-level";
    $yTitle = "y-level";
  }
  $plotWidth = 400;
  $plotHeight = 300;
  $yMin = $params["yMin"];
  $yMax = $params["yMax"];
  $yTicks = $params["yTicks"];
  $yMargin = 35;
  $xMargin = 15;
  $xLabels = $this->levels;
  $graph = new Graph($plotWidth, $plotHeight);
  $graph->SetScale("textlin", $yMin, $yMax);
  $graph->yaxis->SetTitle($yTitle, "center");
  // Create a new box plot.
  $bp = new BoxPlot($yData);
  // Indent bars so they don't start and end at the edge of the plot area.
  // Width of the bars in pixels.
  // Set bar colors.
  // Set median colors.
  if (isset($params["outputFile"])) {
    $outputFile = $params["outputFile"];
    $graph_name = "temp/$outputFile";
  } else {
    $graph_name = "temp/boxplot.png";
  }
  // Add the plot to the graph and render it to the output file.
  $graph->Add($bp);
  $graph->Stroke($graph_name);
  echo "<img src='$graph_name' vspace='15' alt='$figureTitle'>";
}
Step 4: Show Descriptive Statistics
Another report that we will want to see early in our analysis is a descriptive statistics report. The descriptive statistics table comes directly from the showDescriptiveStatistics() method.
Table 2. Show Descriptive Statistics
Anxiety Levels
│ │ low │ moderate │ high │
│ mean │ 48.40 │ 48.00 │ 57.50 │
│ stdev │ 12.64 │ 11.63 │ 9.29 │
│ n │ 10 │ 10 │ 10 │
Step 5: Show the Single-factor ANOVA Source Table
If we have carefully studied our box plots and descriptive statistics then the results of our formal analysis of whether a significant mean difference exists should come as no surprise. Invoking the
showSourceTable() method generated the ANOVA source table below. It reports the amount of variance attributable to the effect of our treatment (see "Between" row) versus the amount of variance to
chalk up to experimental error (see "Within" row).
Table 3. Show Source Table
ANOVA Source Table
│ Source │ SS │ df │ MS │ F │ p │
│ Between │ 577.40 │ 2 │ 288.70 │ 2.04 │ 0.15 │
│ Within │ 3812.90 │ 27 │ 141.22 │ │ │
Critical F(0.05, 2, 27) is 3.35.
The obtained F value comes from dividing the mean square error attributable to the treatment ($ms["between"]) by the mean square error attributable to experimental error ($ms["within"]). If this
ratio is sufficiently large then we can reject the null hypothesis that there is no treatment effect (i.e., H₀: μ₁ = μ₂ = μ₃). In the example above, the probability p of the observed F value
is 0.15 — higher than the conventional 0.05 cutoff for declaring statistical significance. Our critical F value, F_crit = 3.35, is also larger than the obtained F. Both of these facts tell us that we cannot
reject the null hypothesis. This could be because there is in fact no effect of anxiety on test scores. A null finding could occur if we had a poor experimental design that had so much experimental
error that it washed out our treatment effects.
Perhaps we need to use a repeated-measures design instead of an independent-samples design to try to remove some individual differences in responding to anxiety.
Step 6: Show Mean Differences
If our F test tells us that a significant treatment effect exists, then we would begin performing multiple comparisons among the treatment-level means to isolate the specific, significant-mean
differences. Because our obtained F was not significant, there is no need to proceed to the multiple comparison stage. It is nevertheless worthwhile examining the size of our effects by calling the
showMeanDifferences() method. This report arranges treatment-level means from lowest to highest and labels the rows and columns accordingly.
Table 4. Show Mean Differences
Mean Differences
│ │ Moderate [48.00] │ Low [48.40] │ High [57.50] │
│ Moderate │ │ 0.4 │ 9.5 │
│ Low │ │ │ 9.1 │
│ High │ │ │ │
Many people engage in data mining without much theoretical motivation for making particular treatment comparisons. In such cases, I recommend obtaining a significant F before performing post-hoc
comparisons. This approach helps keep the Type-I error rate low. When there is a theoretical motivation for believing that a significant difference exists between particular treatment means, then we
can bypass the F test and immediately engage in an a priori (or planned) approach, such as the Dunn or Planned Orthogonal Contrasts methods. These methods can yield significant results (e.g., comparing the
high-anxiety group to the combined low- and medium-anxiety groups) even when our F ratio is not significant. In general, there is no harm done in always checking whether the F test is significant
before engaging in post-hoc or a priori multiple comparisons.
Concluding Remarks
In this article, we interpreted the single-factor ANOVA procedure broadly to consist of several steps to undertake when conducting a single-factor ANOVA analysis. We illustrated these steps using a
hypothetical test-anxiety study in which we carried out six steps of the procedure in a recommended order — the order commonly prescribed in undergraduate statistics textbooks. We have not exhausted
all the required steps in this article. Because our treatment effect was not significant, we did not proceed to further stages of the analysis procedure.
Had our result come out significant, we would have engaged in the multiple-comparison step where we statistically analyze (using multiple T tests) the significant particular-mean differences and
contrasts. We would also have run several diagnostic tests to determine whether the data met the assumptions of our statistical tests. Finally, we would have begun to construct a theoretical model of
how our treatment levels might exert a causal influence on our response variable. The work reported in this article starts us toward a full-featured single factor ANOVA package but there is more
implementation work to do.
In addition to teaching the basics of the ANOVA procedure, another purpose of this article was to demonstrate that PHP is a viable platform for web-based statistical computing, especially when combined
with MySQL and JpGraph. The code distribution for this article contains a benchmark.php script that you can use to verify that the critical analyze() method is very fast — easily crunching 100,000
records in under half a second (0.448 seconds) on very modest hardware (a 1000 MHz processor with 256 MB of RAM). A recent American Statistician article repeated the standard advice that new statistics
graduates should be proficient in C-based languages, Fortran, and Java. You might add PHP to this list, especially if you intend to work in a Web medium and see a need for online, database-driven
analysis packages.
• Thanks to Don Lawson PhD for reviewing the article and helping me to write the post-hoc vs. a priori comparisons discussion.
• Thanks to Mark Hale, JSci Project admin, for helping me to port the FDistribution class from Java to PHP.
• Thanks also to Jaco van Kooten and Mark Hale for their excellent work on the JSci probability distributions classes.
Paul Meagher is a cognitive scientist whose graduate studies focused on mathematical problem solving.
Return to the PHP DevCenter.
Copyright © 2009 O'Reilly Media, Inc.
|
{"url":"http://www.onlamp.com/lpt/a/4997","timestamp":"2014-04-17T16:46:18Z","content_type":null,"content_length":"31748","record_id":"<urn:uuid:ff295e15-df53-44c9-9b44-7f26db1474ff>","cc-path":"CC-MAIN-2014-15/segments/1397609539665.16/warc/CC-MAIN-20140416005219-00269-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Convert the radian measure to degree measure. Use the value of π found on a calculator and round answers to two decimal places. (9π)/4 Here are the answer choices: 200π° 405° 160° 324°
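To convert, multiply the radian measure by $\frac{180^\circ}{\pi}$:
\[ \frac{9\pi}{4} \cdot \frac{180^\circ}{\pi} = \frac{9 \cdot 180^\circ}{4} = 405^\circ \]
which matches the 405° answer choice.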
|
{"url":"http://openstudy.com/updates/50a22f42e4b0e22d17ef4484","timestamp":"2014-04-18T23:52:52Z","content_type":null,"content_length":"383628","record_id":"<urn:uuid:b922b994-a3df-4ace-9c3e-179902f68ea0>","cc-path":"CC-MAIN-2014-15/segments/1397609535535.6/warc/CC-MAIN-20140416005215-00056-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Uniqueness of the Limit.
December 23rd 2005, 10:29 AM
Uniqueness of the Limit.
I was thinking perhaps the limit of a function at a constant or at infinity is not unique. Thus by delta-epsilon I mean there exists another number L' such that satisfies the condition of the
limit. Thus, prove that if the limit exists it is unique. I was able to show that if L is one limit and L' is another limit then (L+L')/2 is also a limit. Thus, this shows that if a limit is not
unique there exists infinitely many limits for that function. But this is not true I was trying to proof that because my books on advanced calculus did not consider this concept.
December 23rd 2005, 12:06 PM
Originally Posted by ThePerfectHacker
I was thinking perhaps the limit of a function at a constant or at infinity is not unique. Thus by delta-epsilon I mean there exists another number L' such that satisfies the condition of the
limit. Thus, prove that if the limit exists it is unique. I was able to show that if L is one limit and L' is another limit then (L+L')/2 is also a limit. Thus, this shows that if a limit is not
unique there exists infinitely many limits for that function. But this is not true I was trying to proof that because my books on advanced calculus did not consider this concept.
The limit of a real function $f(x)$ as $x \rightarrow a$ if it exists is unique. Suppose otherwise, then there exist $L_1$ and $L_2$ such that given any $\varepsilon >0$ there exist $\delta_1>0$
and $\delta_2>0$ such that when:
$|x-a|< \delta_1\ \Rightarrow \ |f(x)-L_1|<\varepsilon$
$|x-a|< \delta_2\ \Rightarrow \ |f(x)-L_2|<\varepsilon$.
So let $\delta=min(\delta_1,\delta_2)$, then we have:
$|x-a|< \delta\ \Rightarrow \ |f(x)-L_1|<\varepsilon$,
$|x-a|< \delta\ \Rightarrow \ |f(x)-L_2|<\varepsilon$.
But $|L_1-L_2|=|(L_1-f(x))-(L_2-f(x))|$, then by the triangle inequality:
$|L_1-L_2| \leq |L_1-f(x)|+|L_2-f(x)| < 2 \varepsilon$.
Hence $|L_1-L_2|$ is less than any positive real number, so $L_1=L_2$.
December 24th 2005, 02:37 PM
Thank you CaptainBlack, is that your own proof because if it is you used the triangle inequality nicely.
I also have a question how can I use Equation Editor like you did.
December 24th 2005, 04:27 PM
It's called Latex and you type in these tags to use it on this site: [ math ] code here [ /math ] (but without the spaces). There's a tutorial on how to use it somewhere on this site, I'll do
some searching.
December 24th 2005, 11:41 PM
Originally Posted by ThePerfectHacker
Thank you CaptainBlack, is that your own proof because if it is you used the triangle inequality nicely.
I also have a question how can I use Equation Editor like you did.
Is it my own proof: yes and no, the ideas are recycled from things I have
seen, probably including proofs of this result.
Equation formatting uses this sites built in LaTeX facilties, see:
December 25th 2005, 12:33 PM
Thank you, the tutorial is rather complicated. Time for me to master it. Is Latex also used in other sites?
Funny if mathemations would have used latex code for real math (not interent) how messy it would have been.
December 25th 2005, 01:15 PM
Originally Posted by ThePerfectHacker
Thank you, the tutorial is rather complicated. Time for me to master it. Is Latex also used in other sites?
Funny if mathemations would have used latex code for real math (not interent) how messy it would have been.
The LaTeX used here is a version of what is the de facto standard for writing
mathematical text. Most papers are produced in some version of TeX.
|
{"url":"http://mathhelpforum.com/calculus/1501-uniqueness-limit-print.html","timestamp":"2014-04-20T03:13:12Z","content_type":null,"content_length":"11797","record_id":"<urn:uuid:ab98749b-76e5-498e-9515-a85bd1c437cf>","cc-path":"CC-MAIN-2014-15/segments/1397609537864.21/warc/CC-MAIN-20140416005217-00379-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Using Trigonometric Identities
January 20th 2013, 01:44 PM #1
Using Trigonometric Identities
Hello Everyone,
I have the following instruction, "Use trigonometric identities to transform one side of the equation into the other" ?
First I don't exactly understand what the instructions are saying.
The problem is as follows
cos Θ sec Θ = 1
The book gives me the solution which is
$\cos\theta\,\sec\theta = \frac{1}{\sec\theta}\cdot\sec\theta = 1$
( 0 < Θ < π/2) (I guess this means Θ is an acute angle)
Re: Using Trigonometric Identities
I have the following instruction, "Use trigonometric identities to transform one side of the equation into the other" ? First I don't exactly understand what the instructions are saying. The
problem is as follows
cos Θ sec Θ = 1
The book gives me the solution which is
$\cos\theta\,\sec\theta = \frac{1}{\sec\theta}\cdot\sec\theta = 1$
( 0 < Θ < π/2) (I guess this means Θ is an acute angle)
If the question is to solve for $\theta$ in the equation $\cos(\theta)\sec(\theta)=1$ then the answer given makes no sense.
Because the solution is $\forall\theta\left[\theta \ne \frac{(2n+1)\pi}{2}\right]$.
On the other hand, did the statement begin with restrictions on $\theta~?$
Re: Using Trigonometric Identities
The book first gives the instruction, "transform one side of the equation into the other" (Which doesn't make any sense to me)
It also doesn't provide any specific angle it just states the following $( 0 < \theta < \pi/2)$
The actual problem to be solved is...
$cos \theta sec \theta = 1$
The solution is...
"Simplify the expression on the left-hand side of the equation until you obtain the right side"
$\cos\theta\,\sec\theta = \frac{1}{\sec\theta}\cdot \sec\theta = 1$
Re: Using Trigonometric Identities
The book first gives the instruction, "transform one side of the equation into the other" (Which doesn't make any sense to me)
It also doesn't provide any specific angle it just states the following $( 0 < \theta < \pi/2)$
The actual problem to be solved is...
$cos \theta sec \theta = 1$
The solution is...
"Simplify the expression on the left-hand side of the equation until you obtain the right side"
$\cos\theta\,\sec\theta = \frac{1}{\sec\theta}\cdot \sec\theta = 1$
Well your reply confuses me.
It is absolutely true that if $0<\theta<\frac{\pi}{2}$ then $\cos(\theta)\sec(\theta)=1$.
So what is the big deal?
Re: Using Trigonometric Identities
All the instructions given are to "Use trigonometric identities to transform one side of the equation into the other".
I myself am confused on where to start
|
{"url":"http://mathhelpforum.com/trigonometry/211741-using-trigonometric-identities.html","timestamp":"2014-04-20T12:39:16Z","content_type":null,"content_length":"51925","record_id":"<urn:uuid:201b3830-ffb8-4e0e-94ce-647e3ae5121b>","cc-path":"CC-MAIN-2014-15/segments/1397609538423.10/warc/CC-MAIN-20140416005218-00355-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Maximize the intersection of a n-dimensional sphere and an ellipsoid.
up vote 5 down vote favorite
I have the conjecture that the volume of the intersection between an $n$-dim sphere (of radius $r$) and an ellipsoid (with one semi-axis larger than $r$) is maximized when the two are concentric, but
still did not find a way to prove it. Any suggestion?
oc.optimization-control dg.differential-geometry
2 Answers
From a result of Zalgaller, this is true for any two centrally symmetric bodies. (Here is his lecture, which includes this topic.)
Namely, assume that the center of the first body is at $0$. If $\vec r$ is the center of the second body and $v(\vec r)$ is the volume of its intersection with the first one, then according to Zalgaller's theorem $v>0$ on a convex domain and inside this domain the function $v^{1/n}$ is concave. Clearly $\vec r=0$ is an extremal point of $v$. Therefore $0$ is the point of maximum.
By a rotation, you may assume that the axes of the ellipsoid are parallel to the coordinate axes. Choose the first coordinate of the center of the ellipsoid, and translate the ellipsoid so
that this coordinate becomes zero, keeping the other center coordinates fixed (this corresponds to symmetrizing about this axis). The volume of the intersection increases when you do this,
since the intersection of an interval of fixed length with an interval centered around the origin is maximal when the interval is centered about the origin too (this is the 1-dimensional
case of your question). Repeat with the other coordinates, until the ellipsoid is centered at the origin, and the volume of intersection is maximal.
|
{"url":"http://mathoverflow.net/questions/70120/maximize-the-intersection-of-a-n-dimensional-sphere-and-an-ellipsoid","timestamp":"2014-04-18T03:17:39Z","content_type":null,"content_length":"54276","record_id":"<urn:uuid:b10f9ca0-bc90-4c86-a102-24709e797afb>","cc-path":"CC-MAIN-2014-15/segments/1398223205137.4/warc/CC-MAIN-20140423032005-00155-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Summary: The Pursuit of Perfect Packing. By Tomaso Aste and Denis Weaire. Institute of Physics Publishing, Bristol, U.K., 2000, xi + 136 pp., ISBN 0-7503-0648-3, $32.30.
Kepler's Conjecture. By George G. Szpiro. Wiley, Hoboken, NJ, 2003, viii + 296 pp., ISBN 0-471-08601-0, $24.95.
Reviewed by Charles Radin
In 1990, from news flashes on television and in newspapers, the whole
world heard of W.-Y. Hsiang's announcement of a proof of the "sphere
packing conjecture" on the most efficient way to pack equal spheres in
space. This was followed by the equally well-publicized announcement
of another proof by T. Hales in 1998. Definitive versions of their arguments have yet to be published, so I will not make any pronouncements
on the status of their work--tempting as that might be!
Mathematical results do not often receive such publicity, so why
all the attention to this problem? Two reasons come quickly to mind.
The first is its long history: the conjecture can be plausibly traced back
four hundred years to Kepler and is sometimes designated by his name.
The other is that the conjecture can be stated in plain English without
the need of esoteric terminology. Those two features are attractive to
journalists but do not explain the interest of the mathematical research
|
{"url":"http://www.osti.gov/eprints/topicpages/documents/record/382/2257877.html","timestamp":"2014-04-19T15:10:55Z","content_type":null,"content_length":"8424","record_id":"<urn:uuid:afaea93d-b27b-4352-b4db-d0a0fcd05f61>","cc-path":"CC-MAIN-2014-15/segments/1397609537271.8/warc/CC-MAIN-20140416005217-00132-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Why do we deal with perfect numbers?
I don't suppose they have any real use, but the Greeks gave them importance and were believers in numerology.
The matter can be generalized some to Amicable Numbers, such as 284 and 220, where each has divisors less than itself that sum up to the other.
220 = (2^2)x5x11, and the sum of the divisors less than itself is: (1+2+4)(1+5)(1+11)-220 = 7x6x12-220 = 284. While 284 =4x71, and the divisors (1+2+4)(71+1)-284=220.
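A quick way to double-check such a pair (illustrative Python, summing the proper divisors directly):
def aliquot(n):
    # sum of the divisors of n that are less than n
    return sum(d for d in range(1, n) if n % d == 0)
print(aliquot(220), aliquot(284))                    # 284 220
print(aliquot(220) == 284 and aliquot(284) == 220)   # True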
These numbers were given importance even in things like marriage.
Fermat and Descartes both discovered new sets of amicable numbers.
|
{"url":"http://www.physicsforums.com/showthread.php?t=173342","timestamp":"2014-04-20T08:43:10Z","content_type":null,"content_length":"31318","record_id":"<urn:uuid:77ab1033-b3ae-4616-b1ab-76fde4155d88>","cc-path":"CC-MAIN-2014-15/segments/1398223202774.3/warc/CC-MAIN-20140423032002-00168-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Number theory
September 12th 2006, 10:28 PM #1
Junior Member
Feb 2006
Show that if a is an integer, then 3 divides (a to the power 3) - a
I don't know how to use the mathematical software to express the question, I apologize for any inconvenience. Thanks for helping me out.
Note that $a^3 - a = (a-1)\,a\,(a+1)$, that is, it is the product of three consecutive integers. But in every set
of three consecutive integers at least one is divisible by 3.
Hence a^3 - a is divisible by 3.
Furthermore, this is a fabulous, beautiful, elegant, powerful, super and amazing theorem of Fermat (my favorite mathematician):
$a^p - a$ is always divisible by $p$ whenever $p$ is prime.
|
{"url":"http://mathhelpforum.com/number-theory/5467-number-theory.html","timestamp":"2014-04-18T06:39:07Z","content_type":null,"content_length":"36770","record_id":"<urn:uuid:369ab881-604c-4e7c-b972-eeed720695e7>","cc-path":"CC-MAIN-2014-15/segments/1397609532573.41/warc/CC-MAIN-20140416005212-00418-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Statistical Word of the Week
The purpose of the bootstrap is to assess the accuracy of an estimate based on a sample of data from a larger population. Consider the sample mean. The best way to find out how lots of different sample means
turn out is to actually draw them. Lacking the real universe to draw samples from, we need a proxy universe that embodies everything we know about the real universe, and which we can use to draw
samples from.
One resampling technique is to replicate the sample data a huge number of times to create a proxy universe based entirely on our sample. After all, the sample itself usually embodies everything we
know about the population that spawned it, so it´s often the best starting point for creating an artificial proxy universe, from which we can draw resamples, and observe the distribution of the
statistic of interest.
A shortcut is to simply sample with replacement from the original sample. By sampling with replacement, each sample observation has 1/n probability of being selected each time - just as if you were
drawing without replacement from an infinitely large replicated universe. This technique is called the bootstrap.
Drawing resamples with replacement from the observed data, we record the means found in a large number of resamples. Looking over this set of means, we can read the values that bound 90% or 95% of
the entries. (a bootstrap confidence interval)
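For illustration, here is a minimal sketch of that procedure in Python/NumPy (the sample values are made up):
import numpy as np
rng = np.random.default_rng(0)
sample = np.array([14, 9, 22, 17, 11, 19, 16, 13, 15, 12])   # hypothetical data
# Draw 10,000 resamples with replacement and record each resample's mean.
boot_means = np.array([rng.choice(sample, size=sample.size, replace=True).mean()
                       for _ in range(10000)])
# The middle 90% of these means is a percentile bootstrap confidence interval.
lower, upper = np.percentile(boot_means, [5, 95])
print(round(lower, 2), round(upper, 2))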
For comparison: The Classical Statistics World
In classical statistics, we still invoke the concept of the larger universe. However, rather than creating a proxy universe and actually drawing from it, classical statistics works from a
mathematical description of this larger universe, based on information provided by the sample (typically mean and standard deviation), and assumptions of normality.
It is important to note that both the resampling and classical approaches make inferences about the larger population starting from the same point - the observed sample. If the observed sample is way
off base, both approaches are in trouble.
Reasons for using a bootstrap approach include the fact that it makes no assumption concerning the distribution of the data, and the fact that it can assess the variability of virtually any
The bootstrap procedure was suggested by Julian Simon, an economist, in a 1969 research methods text. Bradley Efron coined the term "bootstrap" in 1979, and developed and elaborated the method in the
statistical literature starting in 1979.
The Institute for Statistics Education offers an extensive glossary of statistical terms, available to all for reference and research. We will provide a statistical term every week, delivered
directly to your inbox. To improve your own statistical knowledge, sign up here.
|
{"url":"http://www.statistics.com/news/47/192/Week-17---Bootstrapping/?showtemplate=true","timestamp":"2014-04-17T21:34:05Z","content_type":null,"content_length":"21418","record_id":"<urn:uuid:b5486e85-2760-4173-938b-5aaae6e3943c>","cc-path":"CC-MAIN-2014-15/segments/1397609532128.44/warc/CC-MAIN-20140416005212-00600-ip-10-147-4-33.ec2.internal.warc.gz"}
|
[SOLVED] Help with domain
$f(x)=e^{-x^2}$ Find domain. Find first and second derivative Can some one please take me through this?
The easiest way to understand domain is to ask yourself what values of $x$ can I put into the function $f(x)$? The derivative is just $e^u\,du$ where your $u=-x^2$, so $f'(x)= -2x\,e^{-x^2}$, and I'll leave
you to find the second one, in which case you will have to use the product rule (along with the chain rule again)... $(f(x)g(x))' = f'(x)g(x)+f(x)g'(x)$
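Carrying that one step further, the domain is all real numbers (the exponential is defined for every $x$), and the product rule plus the chain rule give the second derivative:
$f''(x) = \frac{d}{dx}\left(-2x\,e^{-x^2}\right) = -2\,e^{-x^2} + (-2x)(-2x)\,e^{-x^2} = (4x^2 - 2)\,e^{-x^2}$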
|
{"url":"http://mathhelpforum.com/pre-calculus/109780-solved-help-domain.html","timestamp":"2014-04-19T07:43:06Z","content_type":null,"content_length":"43303","record_id":"<urn:uuid:b40616ff-fb39-4871-85e0-f851736e69c1>","cc-path":"CC-MAIN-2014-15/segments/1397609536300.49/warc/CC-MAIN-20140416005216-00023-ip-10-147-4-33.ec2.internal.warc.gz"}
|
MathGroup Archive: December 1998 [00203]
[Date Index] [Thread Index] [Author Index]
RE: Together chokes with Prime Modulus > 46337
• To: mathgroup at smc.vnet.net
• Subject: [mg15182] RE: [mg15155] Together chokes with Prime Modulus > 46337
• From: "Ersek, Ted R" <ErsekTR at navair.navy.mil>
• Date: Fri, 18 Dec 1998 02:10:59 -0500
• Sender: owner-wri-mathgroup at wolfram.com
The limitation of Together (Modulus->p) also effects ProvablePrimeQ.
See below.
Together::"modm": "Modulus \!\(25217504003\) is too large for this
Together::"modm": "Modulus \!\(25217504003\) is too large for this
Together::"modm": "Modulus \!\(25217504003\) is too large for this
General::"stop": "Further output of \!\(Together :: \"modm\"\) will be
suppressed during this calculation."
After several minutes I got tired of waiting and aborted.
Ted Ersek
>One of the great things about Mathematica is that you can do exact
>calculations with very large numbers. For example:
>In the next line Together works in modulo 46337 and I get the answer in
>a flash!
>(note 46337 is a prime number)
>Together[1/x+1/(x+1), Modulus->46337] Out[2] (2*(23169 + x))/(x*(1 + x))
>Next try using Together with any prime modulus larger than 46337 and
>Mathematica will choke.
>I would have guessed the functions that use the Modulus option could
>work in modulo prime where the prime modulus has a hundred digits or
>more with no problem. Instead Mathematica flat-out quits for modulo
>greater than 46337.
>Is it impractical to make a version that will do modular algebra with a
>large modulus?
>I can evaluate NextPrime[10^10000] and I doubt Mathematica will refuse
>to try it. I might have to wait over a year for an answer. I might
>run out of memory before I get an answer, but I expect Mathematica will
>not give up. Using Modulus 46337 Together hasn't even got to the point
>where it takes a little while, but it will refuse to work with a
>modulus any larger. Why?
>Ted Ersek
I think this is a very interesting question and I hope some expert will
send a illuminating reply. I just wanted to add a few observations to
the above message.
The limitation in modular arithemtic to the prime 46337 affects besides
Together, the functions PolynomialGCD, PolynomialLCM, Factor and Solve,
and perhaps other functions which I have not investigated. Functions
like Expand, Apart do not seem to have any limitation. When applied
modulo a prime larger than 46337, Together, PolynomialGCD,
PolynomialLCM, and Factor give up at once and produce the same message:
Modulus 46349 is too large for this implementation. Out[1]=
Factor[x + x , Modulus -> 46349]
Solve behaves slightly diffferently. It works in the most trivial cases:
Solve[{x-1==0, Modulus==46349},Mode->Modular]
{{Modulus -> 46349, x -> 1}}
but not in slightly harder ones:
Solve[{x^2-1==0, Modulus==46349},Mode->Modular] Roots::badmod: Cannot
factor input modulo 46349. Roots::badmod: Cannot factor input modulo
46349. Out[3]=
{ToRules[Modulus == 46349 &&
Roots[x == 1, x, Modulus -> 46349]]}
This suggests to me that the problem is related to factoring of
polynomials mod p. If I recall correctly (and my knowledge of this area
may be obsolete by about 10 years), there is no deterministic
polynomial time algorithm for factoring polynomials mod p. The usual
practice is to use probabilistic algorithms, which have a slight chance
of not stopping at all. There is, however, a fast way of factoring mod
p, but it requires one to assume the truth of the Generalized Riemann
Hypothesis. Unfortunately the Mathematica Book seems to give no
information on the implementation of mod p algorithms. I do hope
someone famililiar with these matters will respond to Ted's message.
This leads me to an interesting question. As is well known, Gregory
Chaitin ("The Limits of Mathematics" and other places) has been
advocating accepting the Riemann Hypothesis as a kind of "experimental
axiom", rather like what physicists do with their theories. While I
don't think this will ever be accepted in pure mathematics (where
people prefer to talk of results being true "modulo" the Riemann
hypothesis), Mathematica is already doing something like this with its
PrimeQ function. (Functions like NextPrime mentioned in Ted's
message do not give a genuine "prime certificate", although the
NumberTheory`PrimeQ` package does contain a very much slower function
which does provide one.) Perhaps algorithms based on GRH should also be
implemented in the same way?
Andrzej Kozlowski
Toyama International University
|
{"url":"http://forums.wolfram.com/mathgroup/archive/1998/Dec/msg00203.html","timestamp":"2014-04-16T22:27:11Z","content_type":null,"content_length":"39275","record_id":"<urn:uuid:41d3dd57-e855-4327-9cdd-e0a4464270bc>","cc-path":"CC-MAIN-2014-15/segments/1397609525991.2/warc/CC-MAIN-20140416005205-00600-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Major new study examines explanations for math 'gender gap'
Tuesday, December 13, 2011
"I'm too pretty to do math": This year, a T-shirt carrying that slogan was marketed to young girls. After outraged objections, the shirt was pulled from stores, but is still available for sale on the
internet -- and its familiar message continues to echo: It's boys, not girls, who excel in math. Was the outrage over the shirt knee-jerk political correctness? Is it perhaps time just to accept the
fact that boys are better at math than girls?
Not unless you ignore the data. A major new study appearing in the January 2012 issue of the Notices of the American Mathematical Society (http://www.ams.org/notices) marshals a plethora of evidence
showing that many of the hypotheses put forth to account for the so-called "gender gap" in mathematics performance fail to hold up. The article, "Debunking Myths about Gender and Mathematics
Performance" by Jonathan Kane and Janet Mertz, takes a scientific, fact-based look at a subject that too often is obscured by prejudice and simplistic explanations.
To start with, Kane and Mertz note that, by several measures, girls actually *do* perform as well as boys in mathematics. In many countries, there is no gender gap in mathematics performance at
either the average or very high level. In other countries, notably the United States, the gap has greatly narrowed in recent decades. For example, some U.S. test score data show that girls have
reached parity with boys in mathematics, even at the high school level, where a significant gap existed forty years ago. Another piece of evidence is found among U.S. students who are highly gifted
in mathematics, namely, those who score 700 or higher on the quantitative section of the SAT prior to age 13. In the 1970s, the ratio of boys to girls in this group was 13:1; today it is 3:1.
Likewise, the percentage of U.S. Ph.D.s in the mathematical sciences awarded to women has risen from 5% to 30% over the past half century. If biology were destiny and boys had a "math gene" that
girls lack, such large differences would not be found over time or between countries.
Nevertheless, other measures continue to show a significant gender gap in mathematics performance. Various hypotheses have been advanced to explain why this gap occurs. Kane and Mertz analyzed
international data on mathematics performance to test these hypotheses. One is the "greater male variability hypothesis", famously reiterated in 2005 by Lawrence Summers when he was president of
Harvard University. This hypothesis proposes that variability in intellectual abilities is intrinsically greater among males---hence, in mathematics, boys predominate among those who excel, as well
as among those who do poorly.
To test this hypothesis, Kane and Mertz calculated "variance ratios" for dozens of countries from throughout the world. These ratios compare variability in boys' math performance to variability in
girls' math performance. For example, using test scores from the 2007 Trends in International Mathematics and Science Study (TIMSS), Kane and Mertz found that the variance ratio for Taiwanese eighth
graders was 1.31, indicating that there was quite a bit more variability in math scores among boys than among girls. However, in Morocco, the ratio was 1.00, indicating the amount of variability
observed in the two groups was identical. In Tunisia, this ratio was 0.91, indicating there was somewhat more variability in math scores among girls than among boys.
If the "greater male variability hypothesis" were true, boys' math scores should show greater variance than girls' math scores in all countries; one should also not see such big, reproducible
differences from country to country. Therefore, Kane and Mertz conclude that this hypothesis does not hold up. Kane and Mertz suggest that there are sociocultural factors that differ among countries;
some of these factors, such as different educational experiences and patterns of school attendance, lead to country-specific differences in boys' variances and girls' variances and, thus, their
variance ratios.
Kane and Mertz took the same kind of data-driven approach to examine some additional hypotheses for explaining the gender gap, such as the "single-gender classroom hypothesis" and the "Muslim culture
hypothesis", both of which have been proposed in recent years by various people, including Steven Levitt of "Freakonomics" fame. Again, Kane and Mertz found that the data do not support these
hypotheses. Rather, they observed no consistent relationship between the gender gap and either co-educational schooling or most of the country's inhabitants being Muslim.
They also examined the "gap due to inequity hypothesis", which proposes that the gender gap in math performance is due to social and cultural inequities between males and females. To examine this
hypothesis, they used an international gender gap index that compares the genders in terms of income, education, health, and political participation. Relating these indices to math scores, they
concluded that math achievement for both boys and girls tends to be higher in countries where gender equity is better. In addition, in wealthier countries, women's participation and salary in the
paid labor force was the main factor linked to higher math scores for students of both genders. "We found that boys as well as girls tend to do better in math when raised in countries where females
have better equality, and that's both new and important," says Kane. "It makes sense that when women are well educated and earn a good income, the math scores of their children of both genders
Mertz adds, "Many folks believe gender equity is a win-lose zero-sum game: If females are given more, males end up with less. Our results indicate that, at least for math achievement, gender equity
is a win-win situation."
American Mathematical Society: http://www.ams.org
|
{"url":"http://www.labspaces.net/115940/Major_new_study_examines_explanations_for_math__gender_gap_","timestamp":"2014-04-20T13:20:15Z","content_type":null,"content_length":"32504","record_id":"<urn:uuid:cde033b4-6036-4800-b1b0-5dc56bc35cc3>","cc-path":"CC-MAIN-2014-15/segments/1397609538787.31/warc/CC-MAIN-20140416005218-00165-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Wills Point Math Tutor
Find a Wills Point Math Tutor
...I am very patient with students that are hard at hearing and/or need more detailed instruction and help with their classes. I have worked with a student in the past who had difficulty hearing
and I emphasized a system that allowed the student to utilize her other senses (such as sight and touch)...
27 Subjects: including statistics, ACT Math, ASVAB, GRE
I have taught 7th & 8th grade math for 8 years and love to see the light bulbs go on!! Hi, my name is Carolyn and graduated in 2006 with a specialization in Math 4 thru 8th grade. I live in
Rockwall and would love to be able to help your child understand math better.I teach MST & Honors students an...
4 Subjects: including prealgebra, grammar, elementary math, vocabulary
...I dream of students becoming who they were created to be -- vibrant, confident learners and leaders, made to prosper and succeed in everything they rightly put their hearts and minds to. Having
fostered various significant skills applicable to the teaching profession, with highest honors, I rece...
17 Subjects: including algebra 2, algebra 1, chemistry, geometry
...I have taught Pre-K 4 years, 4th grade 2 years, 7th and 8th grade ESL 3 years. In addition, I have co-taught in several secondary core content areas, taught in varied content areas during
summer school sessions, and am currently an instructional coach. I am an instructional coach whose primary ...
8 Subjects: including algebra 1, prealgebra, Spanish, reading
...I have also taught numerous soccer classes for youngsters at Audubon Recreation Center as a part of Garland Parks and Recreation. I love soccer, and I love kids, so I believe I would be an
excellent tutor for someone who wants to develop his/her skills in this area. I have worked with a wide range of students who speak another language other than English for their first language.
25 Subjects: including algebra 1, ACT Math, ESL/ESOL, geometry
|
{"url":"http://www.purplemath.com/wills_point_math_tutors.php","timestamp":"2014-04-19T20:04:37Z","content_type":null,"content_length":"23711","record_id":"<urn:uuid:1bf21983-119f-4379-ad8d-212f96fe1133>","cc-path":"CC-MAIN-2014-15/segments/1398223202457.0/warc/CC-MAIN-20140423032002-00497-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Exact solutions of Benjamin-Bona-Mahony-Burgers-type nonlinear pseudo-parabolic equations
In this paper, we consider some nonlinear pseudo-parabolic Benjamin-Bona-Mahony-Burgers (BBMB) equations. These equations belong to a class of nonlinear pseudo-parabolic or Sobolev-type equations, where α
is a fixed positive constant, arising in mathematical physics. The tanh method, with the aid of a symbolic computation system, is employed to investigate exact solutions of BBMB-type equations,
and exact solutions are found. The results obtained can be viewed as a verification and improvement of previously known results.
nonlinear pseudo-parabolic equation; Benjamin-Bona-Mahony-Burgers (BBMB)-type equation; Sobolev-type equation; tanh method
1 Introduction
The partial differential equations of the form
arise in many areas of mathematics and physics, where , , , η and α are non-negative constants, Δ denotes the Laplace operator acting on the space variables x. Equations of type (1) with only one
time derivative appearing in the highest-order term are called pseudo-parabolic and they are a special case of Sobolev equations. They are characterized by derivatives of mixed type (i.e., time and
space derivatives together) appearing in the highest-order terms of the equation and were studied by Sobolev [1]. Sobolev equations have been used to describe many physical phenomena [2-8]. Equation
(1) arises as a mathematical model for the unidirectional propagation of nonlinear, dispersive, long waves. In applications, u is typically the amplitude or velocity, x is proportional to the
distance in the direction of propagation, and t is proportional to elapsed time [9].
An important special case of (1) is the Benjamin-Bona-Mahony-Burgers (BBMB) equation
It has been proposed in [10] as a model to study the unidirectional long waves of small amplitudes in water, which is an alternative to the Korteweg-de Vries equation of the form
The BBMB equation has been tackled and investigated by many authors. For more details, we refer the reader to [11-15] and the references therein.
In [16], a generalized Benjamin-Bona-Mahony-Burgers equation
has been considered and a set of new solitons, kinks, antikinks, compactons, and Wadati solitons have been derived using by the classical Lie method, where α is a positive constant, , and is a
-smooth nonlinear function. Equation (4) with the dissipative term arises in the phenomena for both the bore propagation and the water waves.
Peregrine [17] and Benjamin, Bona, and Mahony [10] have proposed equation (4) with the parameters , , and . Furthermore, Benjamin, Bona, and Mahony proposed equation (4) as an alternative
regularized long-wave equation with the same parameters.
Khaled, Momani, and Alawneh obtained explicit and numerical solutions of the BBMB equation (4) by using the Adomian decomposition method [18].
Tari and Ganji implemented variational iteration and homotopy perturbation methods obtaining approximate explicit solutions for (4) with [19] and El-Wakil, Abdou, and Hendi used another method (the
exp-function) to obtain the generalized solitary solutions and periodic solutions of this equation [20].
In addition, we consider and obtain analytic solutions in a closed form.
The aim of this work is twofold. First, it is to obtain the exact solutions of the Benjamin-Bona-Mahony-Burgers (BBMB) equation and the generalized Benjamin-Bona-Mahony-Burgers equation with , , ;
and second, it is to show that the tanh method can be applied to obtain the solutions of pseudo-parabolic equations.
2 Outline of the tanh method
Wazwaz has summarized the tanh method [21] in the following manner:
(i) First, consider a general form of the nonlinear equation
(ii) To find the traveling wave solution of equation (5), the wave variable is introduced so that
Based on this, one may use the following changes:
and so on for other derivatives. Using (7) changes PDE (5) to an ODE
(iii) If all terms of the resulting ODE contain derivatives in ξ, then by integrating this equation and by considering the constant of integration to be zero, one obtains a simplified ODE.
(iv) A new independent variable
is introduced that leads to the change of derivatives:
where other derivatives can be derived in a similar manner.
(v) The ansatz of the form
is introduced, where M is a positive integer, in most cases, that will be determined. If M is not an integer, then a transformation formula is used to overcome this difficulty. Substituting (10) and
(11) into ODE (8) yields an equation in powers of Y.
(vi) To determine the parameter M, the linear terms of highest order in the resulting equation with the highest-order nonlinear terms are balanced. With M determined, one collects all the
coefficients of powers of Y in the resulting equation where these coefficients have to vanish. This will give a system of algebraic equations involving the and ( ), V, and μ. Having determined
these parameters, knowing that M is a positive integer in most cases, and using (11), one obtains an analytic solution in a closed form.
Throughout the work, Mathematica or Maple is used to deal with the tedious algebraic operations.
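For reference, the substitutions described in steps (ii), (iv) and (v) above are usually written as follows; this is the standard textbook normalization and may differ in minor details from the paper's own numbered equations:

$$\xi = x - Vt, \qquad Y = \tanh(\mu\xi),$$

$$\frac{d}{d\xi} = \mu\,(1-Y^{2})\,\frac{d}{dY}, \qquad \frac{d^{2}}{d\xi^{2}} = \mu^{2}(1-Y^{2})\left[-2Y\,\frac{d}{dY}+(1-Y^{2})\,\frac{d^{2}}{dY^{2}}\right],$$

$$u(\xi) = S(Y) = \sum_{i=0}^{M} a_{i}\,Y^{i},$$

where $V$ is the wave speed, $\mu$ is the wave number, and the finite expansion is truncated at the positive integer $M$ fixed by the balancing procedure of step (vi).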
3 The Benjamin-Bona-Mahony-Burgers (BBMB) equation
The Benjamin-Bona-Mahony-Burgers (BBMB) equation is given by
where α is a positive constant. Using the wave variable carries (12) into the ODE
Balancing with in (13) gives . The tanh method admits the use of the finite expansion
where . Substituting (14) into (13), collecting the coefficients of Y, and setting it equal to zero, we find the system of equations
Using Maple gives nine sets of solutions
These sets give the following solutions respectively:
If we accept , then we obtain solutions
4 The generalized Benjamin-Bona-Mahony-Burgers equation
We consider the generalized Benjamin-Bona-Mahony-Burgers equation
where α is a positive constant and .
Using the wave variable carries (19) into the ODE
Balancing with in (20) gives . Using the finite expansion
we find the system of equations
Maple gives three sets of solutions
where k is left as a free parameter. These give the following solutions:
Using the wave variable , then by integrating this equation and considering the constant of integration to be zero, we obtain
Balancing the second term with the last term in (25) gives . Using the finite expansion
we find the system of equations
Using Maple, we obtain nine sets of solutions
These sets give the solutions
Using the wave variable , then by integrating this equation once and considering the constant of integration to be zero, we obtain
Balancing with in (30) gives . Using the finite expansion
we find the system of equations
Solving the resulting system, we find the following sets of solutions with :
These in turn give the solutions
5 Conclusion
In summary, we implemented the tanh method to solve some nonlinear pseudo-parabolic Benjamin-Bona-Mahony-Burgers equations and obtained new solutions which could not be attained in the past. Besides,
we have seen that the tanh method is easy to apply and reliable to solve the pseudo-parabolic and the Sobolev-type equations.
|
{"url":"http://www.boundaryvalueproblems.com/content/2012/1/144","timestamp":"2014-04-17T09:58:22Z","content_type":null,"content_length":"114039","record_id":"<urn:uuid:22305e4b-613f-421c-9bc4-f296b1f23fe0>","cc-path":"CC-MAIN-2014-15/segments/1398223202548.14/warc/CC-MAIN-20140423032002-00106-ip-10-147-4-33.ec2.internal.warc.gz"}
|
FBI: More People Killed Each Year With Hammers And Clubs Than Rifles…
Al Sharpton proposes hammer ban in 3… 2… 1.
Via Fox Nation:
According to the FBI annual crime statistics, the number of murders committed annually with hammers and clubs far outnumbers the number of murders committed with a rifle.
This is an interesting fact, particularly amid the Democrats’ feverish push to ban many different rifles, ostensibly to keep us safe of course.
However, it appears the zeal of Sens. like Dianne Feinstein (D-CA) and Joe Manchin (D-WV) is misdirected. For in looking at the FBI numbers from 2005 to 2011, the number of murders by hammers and
clubs consistently exceeds the number of murders committed with a rifle.
Think about it: In 2005, the number of murders committed with a rifle was 445, while the number of murders committed with hammers and clubs was 605. In 2006, the number of murders committed with
a rifle was 438, while the number of murders committed with hammers and clubs was 618.
And so the list goes, with the actual numbers changing somewhat from year to year, yet the fact that more people are killed with blunt objects each year remains constant.
For example, in 2011, there were 323 murders committed with a rifle but 496 murders committed with hammers and clubs.
|
{"url":"http://weaselzippers.us/132223-fbi-more-people-killed-each-year-with-hammers-and-clubs-than-rifles/","timestamp":"2014-04-20T23:47:17Z","content_type":null,"content_length":"14879","record_id":"<urn:uuid:a7004380-6648-4ea6-9409-80f7787f56d9>","cc-path":"CC-MAIN-2014-15/segments/1398223206647.11/warc/CC-MAIN-20140423032006-00199-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Need help on factor theorem.
February 9th 2010, 03:58 PM
Need help on factor theorem.
I am supposed to use the Factor Theorem to either factor the polynomial completely or to prove that it has no linear factors with integer coefficients.
I think I haven't completely understood the whole concept yet, which is why I couldn't get my brain working on the following problems.
a. P(x)= x^4+4x^3-7x^2-34x-24
b. P(x)= x^6-6x^5+15x^4-20x^3+15x^2-6x+1
I think these are harder than the regular ones like P(x)=x^3+2x^2-9x+3, etc.
February 10th 2010, 01:35 AM
Hello zxazsw
Two things you need to use here:
□ The Factor Theorem itself: $(x-a)$ is a factor of $P(x)$ if and only if $P(a) = 0$.
□ If the constant term in $P(x)$ is $k$, then in any factor of the form $(ax+b)$, $b$ must be a factor of $k$.
So in part (a), you'll only need to try values of $x$ which are factors of $24$ (not forgetting to try negative numbers as well), to see whether $P(x) = 0$. If my arithmetic is correct, $x = -1$ and
$x = -2$ both work, so keep dividing out each factor you find until the polynomial is factored completely.
By the same reasoning, in part (b), $(x \pm1)$ are the only possibilities. So try $x = 1$ first. And, even when you have found the first factor, keep on trying with the quotient until you're sure
there are no factors left. (Hint: do you know anything about Pascal's triangle or the Binomial Theorem?)
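For instance, carrying this advice through as a worked check: in part (b), $P(1) = 1 - 6 + 15 - 20 + 15 - 6 + 1 = 0$, so $(x-1)$ is a factor; in fact the coefficients are exactly the binomial coefficients of $(x-1)^6$, so
$x^6-6x^5+15x^4-20x^3+15x^2-6x+1 = (x-1)^6.$
In part (a), $P(-1) = 0$ and $P(-2) = 0$, and dividing out the factors this produces leads to
$x^4+4x^3-7x^2-34x-24 = (x+1)(x+2)(x+4)(x-3).$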
|
{"url":"http://mathhelpforum.com/pre-calculus/128055-need-help-factor-theorem-print.html","timestamp":"2014-04-18T11:02:33Z","content_type":null,"content_length":"7853","record_id":"<urn:uuid:3b151d89-af4b-43e8-a5fa-c7f05bc93f76>","cc-path":"CC-MAIN-2014-15/segments/1397609533308.11/warc/CC-MAIN-20140416005213-00312-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Choosing & using 'Coral Bleaching in the Caribbean'
• National Science Education Standards:
□ 5-8:
☆ D - Earth and space science:
○ Structure of the earth system
□ 9-12:
☆ D - Earth and space science:
○ Energy in the earth system
• National Geography Standards:
□ Places and regions:
☆ The physical and human characteristics of places
This resource supports Virginia Earth science standards. ES.1c: The student will plan and conduct investigations in which scales, diagrams, maps, charts, graphs, tables, and profiles are constructed
and interpreted. ES.2a: The student will demonstrate scientific reasoning and logic by analyzing how science explains and predicts the interactions and dynamics of complex Earth systems. ES.11c: The
student will investigate and understand that oceans are complex, interactive physical, chemical, and biological systems and are subject to long- and short-term variations. Key concepts include
systems interactions (density differences, energy transfer, weather, and climate).
This resource supports math standards for grades 5-12 in the topics of data analysis and probability. These assignments are based on the National Council of Teachers of Mathematics (NCTM) standards.
|
{"url":"http://www.dlese.org/library/view_annotation.do?type=standards&id=MYND-000-000-000-036&other=true","timestamp":"2014-04-18T23:18:55Z","content_type":null,"content_length":"14488","record_id":"<urn:uuid:68bba564-3156-49aa-a1c3-ffbf8ce5cd61>","cc-path":"CC-MAIN-2014-15/segments/1397609535535.6/warc/CC-MAIN-20140416005215-00320-ip-10-147-4-33.ec2.internal.warc.gz"}
|
C# Modulo
Modulo computes a remainder. The modulo operator provides a way to execute code once every several iterations of a loop. It uses the percentage sign character in the lexical syntax. It has some
unique properties.
Estimated costs of instructions
Add: 1 ns
Subtract: 1 ns
Multiply: 2.7 ns
Divide: 35.9 ns
Modulo division is expressed with the percentage sign character. It is implemented with the rem instruction in the intermediate language. The rem instruction takes the top two values on the
evaluation stack.
Intermediate Language
Then: Rem performs the computation that returns the remainder of the division. It pushes that value onto the evaluation stack.
Next, this example demonstrates the mathematics behind modulo. The modulo expressions here are actually turned into constants during the C# compilation step. No rem instructions are generated.
Program that uses modulo operator: C#
using System;

class Program
{
    static void Main()
    {
        // When 1000 is divided by 90, the remainder is 10.
        Console.WriteLine(1000 % 90);
        // When 100 is divided by 90, the remainder is also 10.
        Console.WriteLine(100 % 90);
        // When 81 is divided by 80, the remainder is 1.
        Console.WriteLine(81 % 80);
        // When 1 is divided by 1, the remainder is zero.
        Console.WriteLine(1 % 1);
    }
}
The example program shows the remainders of the divisions of the two integers at each step. The runtime never performs modulo divisions here as the C# compiler actually does the divisions.
We see that 1000 and 100 divide into parts of 90 with a remainder of 10. If the first argument to the predefined modulo operator is 81 and the second operand is 80, the expression evaluates to a
value of 1.
Finally: If you apply modulo division on the same two operands, you receive 0 because there is no remainder.
Tip: If you perform modulo division by zero, you will get either a compile error or a runtime exception depending on the code.
DivideByZeroException, Compile-Time Error
Example 2
You can apply modulo in a loop to achieve an interval or step effect. If you use a modulo operation on the loop index variable, which is called an induction variable, you can execute code at an
interval based on the induction variable.
Note: This example shows how to write to the screen every ten iterations in the for-loop.
Program that uses modulo division in loop: C#
using System;
class Program
{
    static void Main()
    {
        // Prints every tenth number from 0 to 200.
        // Includes the first iteration.
        for (int i = 0; i < 200; i++)
        {
            if ((i % 10) == 0)
                Console.WriteLine(i);
        }
    }
}
Often, modulo divisions are performed in if-statements and used in control flow. The three numbers in the condition in the if-statement can be any value with the exclusion of a division by zero,
which the compiler will reject.
The modulo division operation has several common uses in programs. You can use modulo division in loops to only execute code every several iterations, such as shown above. This can reduce complexity
and improve performance in real code.
Note: We do not often need to compute numeric remainders for user consumption. The regular division operator may be more useful to you.
Odd: You can use modulo to test for odd numbers and even numbers. You can define odd numbers as not-even numbers.
Odd, Even
The modulo division operator in the C# language is considerably slower than other arithmetic operators such as increment and decrement or even multiply. This is basically a hardware limitation on
But: The total time required for individual modulo operations is tiny compared to other tasks such as disk reads or network accesses.
So: If you can reduce the number of modulo operations, you can improve overall performance.
The time required for modulo division depends on your hardware and other factors. The article "Writing Faster Managed Code: Knowing What Things Cost" is helpful. It provides a table listing times
required for arithmetic operations.
Writing Faster Managed Code: MSDN
Also, you may rarely have a modulo division in a hot path in your program and this can sometimes cause a measurable loss of performance. This will almost always occur in a loop body or in a recursive
Tip: You can apply a technique called "strength reduction" manually to convert the modulo operation into a subtraction or addition.
And: To do this, add another field or local variable. Then, in each iteration of the loop, decrement it and test it against zero.
Then: When zero is reached, set it to its maximum value again.
This resets the pattern.
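In outline, the counter trick looks like the following minimal sketch. It is written in C rather than C# purely for illustration (the idea is language-neutral and ports directly), and it reproduces the effect of the Example 2 loop without computing a remainder in the body.
Strength reduction in a loop: C
#include <stdio.h>

int main(void)
{
    /* Same effect as "if (i % 10 == 0)" inside the loop, but using a
       down-counter instead of a modulo division on every iteration. */
    int countdown = 1;                 /* fire on the very first iteration */
    for (int i = 0; i < 200; i++)
    {
        if (--countdown == 0)
        {
            printf("%d\n", i);         /* runs for i = 0, 10, 20, ... 190 */
            countdown = 10;            /* reset the pattern */
        }
    }
    return 0;
}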
We explored the modulo operator. This is implemented in the CLI as a rem instruction. We saw how the C# compiler calculates modulo divisions of constants at compile-time. Modulo division returns the
remainder of the two operands.
|
{"url":"http://www.dotnetperls.com/modulo","timestamp":"2014-04-21T02:00:00Z","content_type":null,"content_length":"9637","record_id":"<urn:uuid:f046268e-c184-4c38-a20a-1be8caa5efcd>","cc-path":"CC-MAIN-2014-15/segments/1397609539447.23/warc/CC-MAIN-20140416005219-00202-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Rhombus in Rectangle
Take any rectangle $ABCD$ such that $AB > BC$. The point $P$ is on $AB$ and $Q$ is on $CD$. Show that there is exactly one position of $P$ and $Q$ such that $APCQ$ is a rhombus.
Show that if the rectangle has the proportions of A4 paper ($AB = BC\sqrt 2$) then the ratio of the areas of the rhombus and the rectangle is $3:4$. Show also that, by choosing a suitable rectangle,
the ratio of the area of the rhombus to the area of the rectangle can take any value strictly between $\frac{1}{2}$ and $1$.
|
{"url":"http://nrich.maths.org/766/index?nomenu=1","timestamp":"2014-04-16T13:52:01Z","content_type":null,"content_length":"3632","record_id":"<urn:uuid:f0ed58fd-9d63-4d3a-85e9-2398b95e80a3>","cc-path":"CC-MAIN-2014-15/segments/1398223206672.15/warc/CC-MAIN-20140423032006-00620-ip-10-147-4-33.ec2.internal.warc.gz"}
|
GNU Scientific Library – Reference Manual: Matrix operations
8.4.10 Matrix operations
The following operations are defined for real and complex matrices.
Function: int gsl_matrix_add (gsl_matrix * a, const gsl_matrix * b)
This function adds the elements of matrix b to the elements of matrix a. The result a(i,j) \leftarrow a(i,j) + b(i,j) is stored in a and b remains unchanged. The two matrices must have the same dimensions.
Function: int gsl_matrix_sub (gsl_matrix * a, const gsl_matrix * b)
This function subtracts the elements of matrix b from the elements of matrix a. The result a(i,j) \leftarrow a(i,j) - b(i,j) is stored in a and b remains unchanged. The two matrices must have the
same dimensions.
Function: int gsl_matrix_mul_elements (gsl_matrix * a, const gsl_matrix * b)
This function multiplies the elements of matrix a by the elements of matrix b. The result a(i,j) \leftarrow a(i,j) * b(i,j) is stored in a and b remains unchanged. The two matrices must have the
same dimensions.
Function: int gsl_matrix_div_elements (gsl_matrix * a, const gsl_matrix * b)
This function divides the elements of matrix a by the elements of matrix b. The result a(i,j) \leftarrow a(i,j) / b(i,j) is stored in a and b remains unchanged. The two matrices must have the
same dimensions.
Function: int gsl_matrix_scale (gsl_matrix * a, const double x)
This function multiplies the elements of matrix a by the constant factor x. The result a(i,j) \leftarrow x a(i,j) is stored in a.
Function: int gsl_matrix_add_constant (gsl_matrix * a, const double x)
This function adds the constant value x to the elements of the matrix a. The result a(i,j) \leftarrow a(i,j) + x is stored in a.
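As a quick illustration of how these calls combine, here is a minimal usage sketch (illustrative only; link with -lgsl -lgslcblas):

#include <stdio.h>
#include <gsl/gsl_matrix.h>

int main (void)
{
  gsl_matrix * a = gsl_matrix_alloc (2, 2);
  gsl_matrix * b = gsl_matrix_alloc (2, 2);

  gsl_matrix_set_all (a, 1.0);        /* a(i,j) = 1 */
  gsl_matrix_set_identity (b);        /* b = identity */

  gsl_matrix_add (a, b);              /* a(i,j) <- a(i,j) + b(i,j) */
  gsl_matrix_scale (a, 2.0);          /* a(i,j) <- 2 a(i,j) */

  for (int i = 0; i < 2; i++)
    {
      for (int j = 0; j < 2; j++)
        printf ("%g ", gsl_matrix_get (a, i, j));
      printf ("\n");
    }

  gsl_matrix_free (a);
  gsl_matrix_free (b);
  return 0;
}

This prints a matrix with 4 on the diagonal and 2 elsewhere, showing that the operations act element-wise on the first argument.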
|
{"url":"http://www.gnu.org/software/gsl/manual/html_node/Matrix-operations.html","timestamp":"2014-04-16T07:23:27Z","content_type":null,"content_length":"7109","record_id":"<urn:uuid:8cac39d6-edb4-4d87-8733-3354c2522369>","cc-path":"CC-MAIN-2014-15/segments/1397609533308.11/warc/CC-MAIN-20140416005213-00239-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Linear algebra proof
March 30th 2010, 03:53 AM
Linear algebra proof
Let A be an m * p matrix whose columns all add to the same total s, and B be a p * n matrix whose columns all add to the same total t. Using summation notation, prove that the n columns of AB each
total st.
March 30th 2010, 04:21 AM
By definition of matrix multiplication, the entries of $AB$ are given by $\sum_{i=1}^p a_{j,i} b_{i,k}$.
We seek the sum of an arbitrary column of $AB$, given by $\sum_{j=1}^m\left(\sum_{i=1}^p a_{j,i} b_{i,k}\right)$
$=\sum_{i=1}^p \left( b_{i,k} \sum_{j=1}^m a_{j,i}\right)$
$=\sum_{i=1}^p \left( b_{i,k} s\right)$
$=s\sum_{i=1}^p b_{i,k}$
$=st$, as required.
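A quick numerical check (values chosen just for illustration): take $A=\begin{pmatrix}1&2\\2&1\end{pmatrix}$, whose columns each sum to $s=3$, and $B=\begin{pmatrix}1&1\\1&1\end{pmatrix}$, whose columns each sum to $t=2$. Then $AB=\begin{pmatrix}3&3\\3&3\end{pmatrix}$, and each column of $AB$ sums to $6=st$, as the identity predicts.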
March 30th 2010, 03:24 PM
Thanks. What I was doing was expressing one column of matrix A as the sum of all columns divided by p, which was making it difficult to then simplify the expression for AB.
|
{"url":"http://mathhelpforum.com/advanced-algebra/136481-linear-algebra-proof-print.html","timestamp":"2014-04-18T00:31:20Z","content_type":null,"content_length":"6376","record_id":"<urn:uuid:5dde96ea-bd0e-49dc-8a71-7a613b5d298c>","cc-path":"CC-MAIN-2014-15/segments/1397609532374.24/warc/CC-MAIN-20140416005212-00043-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Cryptology ePrint Archive: Report 2009/379
Protecting Circuits from Computationally Bounded and Noisy Leakage
Sebastian Faust and Tal Rabin and Leonid Reyzin and Eran Tromer and Vinod Vaikuntanathan
Abstract: Physical computational devices
leak side-channel information that may, and often does, reveal secret internal states. We present a general transformation that compiles any circuit into a circuit with the same functionality but
resilience against well-defined classes of leakage. Our construction requires a small, stateless and computation-independent leak-proof component that draws random elements from a fixed distribution.
In essence, we reduce the problem of shielding arbitrarily complex circuits to the problem of shielding a single, simple component.
Our approach is based on modeling the adversary as a powerful observer that inspects the device via a limited measurement apparatus. We allow the apparatus to access all the bits of the computation
(except those inside the leak-proof component), and the amount of leaked information to grow unbounded over time. However, we assume that the apparatus is limited in the amount of output bits per
iteration and the ability to decode certain linear encodings. While our results apply in general to such leakage classes, in particular, we obtain security against:
- Constant-depth circuits leakage, where the leakage function is computed by an AC^0 circuit (composed of NOT gates and unbounded fan-in AND and OR gates).
- Noisy leakage, where the leakage function reveals all the bits of the internal state of the circuit, perturbed by independent binomial noise. Namely, for some number p \in (0,1/2], each bit of the
computation is flipped with probability p, and remains unchanged with probability 1-p.
Category / Keywords: foundations / side channel, leakage resilience, models
Publication Info: Preliminary version appears in Eurocrypt 2010
Date: received 31 Jul 2009, last revised 16 Nov 2012
Contact author: reyzin at cs bu edu
Available format(s): PDF | BibTeX Citation
Note: The previous version was from before the computationally bounded and noisy cases were merged. This version has substantial revisions since Eurocrypt 2010.
Version: 20121117:031350 (All versions of this report)
|
{"url":"http://eprint.iacr.org/2009/379/20121117:031350","timestamp":"2014-04-18T13:08:35Z","content_type":null,"content_length":"3650","record_id":"<urn:uuid:ca5dd85d-bf81-4315-9909-40b5e47fcabe>","cc-path":"CC-MAIN-2014-15/segments/1398223203422.8/warc/CC-MAIN-20140423032003-00312-ip-10-147-4-33.ec2.internal.warc.gz"}
|
The configuration of two polar molecules. μ 1 and μ 2 are permanent dipole moments for molecules 1 and 2; r 12 is the distance vector from molecules 1 to 2. ε is the external electric field and α is
the angle between r 12 and the external field. θ1 (θ2) is the angle between μ 1 ( μ 2 ) and ε .
Ratio of frequency shift, Δω, to the dipole-dipole interaction parameter, , as a function of reduced field strength, x = μ ε /B and x′ = μ ε ′/B at sites of the two dipoles.
Converged laser pulses for NOT gate. The upper panel is the laser pulses for realizing the NOT gate for dipole 1, on the left and dipole 2 on the right. The initial and target states are listed in
Table I . The lower panel shows the evolution of all populations driven by NOT pulses. The left panel exhibits the population evolution under the NOT pulse for dipole 1. The initial state is and the
final populations are 0.733 for |00⟩ and 0.265 for |11⟩. The right panel shows the population evolution via the NOT pulse for dipole 2. It has the same initial condition as the left panel and the
converged population is 0.216 for |00⟩ and 0.781 for |11⟩. Both pulses perform the corresponding NOT gates nicely.
Optimized laser pulses for Hadamard gates. The left panel pertains to performing the Hadamard gate on dipole 1 and the right one to performing that on dipole 2. The lower panels show the population
evolution under the Hadamard laser pulse. In order to exhibit the curves clearly, we chose |01⟩ as the initial state; the final populations of |01⟩ and |11⟩ are 0.429 and 0.566, respectively. For the
case of the Hadamard gate on dipole 2, the |00⟩ qubit was chosen as the initial state. The populations at the end of the evolution are 0.528 for |00⟩ and 0.470 for |01⟩. The functions of these gates
are realized very well by these two pulses.
Optimized laser pulse for realizing the CNOT gate. The initial and target states are listed in Table III . In the lower panel, the population evolution is driven by the CNOT pulse. The initial state
is . After population oscillations due to the effect of the laser pulse, the final qubit populations are 0.00363 (|00⟩), 0.25807 (|01⟩), 0.00273 (|10⟩), and 0.73546 (|11⟩). The populations of state |
10⟩ and |11⟩ are switched, which confirms the correctness of the converged laser pulse.
Optimized laser pulse for realizing the CNOT gate when the distance between two dipoles is 75 nm. The initial and target states are listed in Table III . The optimized laser pulse, which is shown in
the upper panel, lasts 110 ns. The lower panel shows the population evolution driven by the pulse. The initial state is the same as Fig. 5 . The final converged populations for state |01⟩ and |10⟩
are 0.22 and 0.73, respectively.
The initial and target states of the NOT gate. The final fidelities for both NOT gates are 0.967 and 0.985, respectively.
The initial and target states of the Hadamard gate. The yield fidelities are 0.944 and 0.902.
The initial and target states of the CNOT gate. The converged fidelity is 0.975.
We present a systematic approach to implementation of basic quantum logic gates operating on polar molecules in pendular states as qubits for a quantum computer. A static electric field prevents
quenching of the dipole moments by rotation, thereby creating the pendular states; also, the field gradient enables distinguishing among qubit sites. Multi-target optimal control theory is used as a
means of optimizing the initial-to-target transition probability via a laser field. We give detailed calculations for the SrO molecule, a favorite candidate for proposed quantum computers. Our
simulation results indicate that NOT, Hadamard and CNOT gates can be realized with high fidelity, as high as 0.985, for such pendular qubit states.
Scitation: Implementation of quantum logic gates using polar molecules in pendular states
|
{"url":"http://scitation.aip.org/content/aip/journal/jcp/138/2/10.1063/1.4774058","timestamp":"2014-04-16T22:53:10Z","content_type":null,"content_length":"143572","record_id":"<urn:uuid:92cd1dbb-6841-4ea8-b486-25083f46a9b2>","cc-path":"CC-MAIN-2014-15/segments/1398223207985.17/warc/CC-MAIN-20140423032007-00537-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Westwood Area 2, OH Math Tutor
Find a Westwood Area 2, OH Math Tutor
...With elementary school, middle school, and sometimes high school students, I like to use games and puzzles every now and then. It helps me determine what learning style the student is
comfortable with, and it can also help students solve problems from multiple perspectives, and it can just be a ...
10 Subjects: including algebra 2, writing, ESL/ESOL, algebra 1
...I begin by taking students where they are and slowly reinforce their abilities. At the same time, I begin to increase the skill levels so that mastery comes smoothly and automatically.
Learning can be challenging and difficult at times, but so can learning to ride a bike or how to swim.
13 Subjects: including prealgebra, reading, biology, English
...Most students do not intuitively know how to do these things, and most teachers are too pressed for time to really teach them well. I have an active California teaching credential in Physical
Education and over 25 years experience teaching Physical Education, including basketball, in LAUSD. I also have extensive coaching experience at my schools and with BHBL.
15 Subjects: including prealgebra, reading, English, writing
...If you're looking for someone who will help you simply memorize or do the minimum work to get an answer, I'm probably not going to be a good match. My sessions take into account the needs and
wants of the student, though there are a few things you can generally expect: inclusion of alternate lea...
56 Subjects: including algebra 2, American history, French, geometry
...I also use to be a life guard. I have 5+ years of training in school and with various actor's studios such as: New York Film Academy, John Robert Powers, and Saturday Actor's Studio. I have
been in 7 plays, and have been an extra in a television show and commercial.
14 Subjects: including prealgebra, physics, algebra 1, algebra 2
Related Westwood Area 2, OH Tutors
Westwood Area 2, OH Accounting Tutors
Westwood Area 2, OH ACT Tutors
Westwood Area 2, OH Algebra Tutors
Westwood Area 2, OH Algebra 2 Tutors
Westwood Area 2, OH Calculus Tutors
Westwood Area 2, OH Geometry Tutors
Westwood Area 2, OH Math Tutors
Westwood Area 2, OH Prealgebra Tutors
Westwood Area 2, OH Precalculus Tutors
Westwood Area 2, OH SAT Tutors
Westwood Area 2, OH SAT Math Tutors
Westwood Area 2, OH Science Tutors
Westwood Area 2, OH Statistics Tutors
Westwood Area 2, OH Trigonometry Tutors
Nearby Cities With Math Tutor
Baldwin Hills, CA Math Tutors
Briggs, CA Math Tutors
Century City, CA Math Tutors
Farmer Market, CA Math Tutors
Green, CA Math Tutors
Holly Park, CA Math Tutors
La Costa, CA Math Tutors
Miracle Mile, CA Math Tutors
Paseo De La Fuente, PR Math Tutors
Playa, CA Math Tutors
Preuss, CA Math Tutors
Rancho La Costa, CA Math Tutors
Rancho Park, CA Math Tutors
Westwood, LA Math Tutors
Wilcox, CA Math Tutors
|
{"url":"http://www.purplemath.com/Westwood_Area_2_OH_Math_tutors.php","timestamp":"2014-04-19T02:41:47Z","content_type":null,"content_length":"24303","record_id":"<urn:uuid:74d2ccaa-11b4-4510-a2af-1c9b60312b35>","cc-path":"CC-MAIN-2014-15/segments/1398223205137.4/warc/CC-MAIN-20140423032005-00324-ip-10-147-4-33.ec2.internal.warc.gz"}
|
A mathematical formula to decipher the geometry of surfaces like that of cauliflower
December 18th, 2012 in Physics / General Physics
Scientists at the Universidad Carlos III of Madrid (UC3M) have taken part in a research project that describes, for the first time, the laws that govern the development of certain complex natural
patterns, such as those found on the surface of cauliflower.
The scientists have found a formula that describes how the patterns found in a multitude of natural structures are formed. "We have found a model that describes, in detail, the evolution in time and
in space of cauliflower-type fractal morphologies for nanoscopic systems", explains Professor Rodolfo Cuerno, of UC3M's Mathematics Department, author of the research, together with scientists from
Universidad Pontificia Comillas (UPCO), the Instituto de Ciencia de los Materiales de Madrid (the Materials Science Institute of Madrid) of the Consejo Superior de Investigaciones Científicas (CSIC)
(Spanish National Research Council), la Escuela Politécnica de París (Polytechnic School of Paris, France) and the Universidad Católica de Lovaina (Catholic University of Louvain, Belgium).
This work, which was recently published in the New Journal of Physics, falls within the field of fractal geometry, which is based on the mathematical description of many natural forms, such as sea
coasts, the borders between countries, clouds, snowflakes and even the networks of blood vessels. A fractal is characterized because its parts are similar to the whole. "In the case of cauliflowers,
this property (self-similarity) becomes evident if you look closely at a photo of them," says another of the researchers, Mario Castro, a professor at UPCO. "In fact," he adds, "without more
information, it is impossible to know the size of the object." This way, using relatively simple algorithms, complex structures almost indistinguishable from certain landscapes, leaves or trees, for
example, can now be generated. "However, the general mechanisms that govern the appearance or evolution over time of those natural structures have rarely been identified beyond a merely visual or
geometric reproduction," clarifies the researcher.
From the supermarket to the laboratory
The cauliflower-type morphologies were known in this realm in an empirical way, but no one had provided a model like the one that these scientists have developed. "In our case," they comment, "the
connection came about naturally when a certain ingredient (noise) was added to a related model that we had worked on previously. When we did that, in the numeric simulations, surfaces appeared, and
we quickly identified them as the ones that our experiment colleagues had been able to obtain, under the right conditions, in their laboratories." Based on the characteristics of this theoretical
model, they have inferred general mechanisms that can be common and can help in making models of other very different systems, such as a combustion front or a cauliflower like the ones that can be
found in any supermarket.
Fractals of this type are interesting because they are ubiquitous, that is, they appear in systems that vary widely in their nature and dimensions. In general, fractals can be found in any branch of
the natural sciences: mathematics (specific types of functions), geology (river basins or the outline of a coast), biology (forms of aggregate cells, of plants, of the network of blood vessels...),
physics (the growth of amorphous solid crystals or the distribution of galaxies), chemistry (the distribution in space of the reagents of chemical reactions). Moreover, they have also been studied
due to their relationship with structures created by man, such as communication and transportation networks, city layouts, etc.
This finding may help to discover concrete applications for improving the technologies used in thin film coatings, and to understand the conditions under which they are smooth or have wrinkles or
roughness. "This is also useful in generating textures in computer simulations," the researchers point out. "And, conceptually," they add, "this can give us clues about the general mechanisms
involved in forming structures in areas that are very different from the ones in which the model was formulated, such as those in which there is competition for growth resources among the various
parts of the system."
More information: Universality of cauliflower-like fronts: from nanoscale thin films to macroscopic plants, Authors: Mario Castro, Rodolfo Cuerno, Matteo Nicoli, Luis Vázquez and Josephus G.
Buijnsters, Journal: New J. Phys. 14 (2012) 103039 doi:10.1088/1367-2630/14/10/103039
Provided by Carlos III University of Madrid
"A mathematical formula to decipher the geometry of surfaces like that of cauliflower." December 18th, 2012. http://phys.org/news/2012-12-mathematical-formula-decipher-geometry-surfaces.html
|
{"url":"http://phys.org/print275050664.html","timestamp":"2014-04-21T04:46:48Z","content_type":null,"content_length":"9476","record_id":"<urn:uuid:a8849ecf-668d-4d77-afd2-fec677dd6494>","cc-path":"CC-MAIN-2014-15/segments/1397609539493.17/warc/CC-MAIN-20140416005219-00535-ip-10-147-4-33.ec2.internal.warc.gz"}
|
PIC16F877 Timer Modules tutorials
PIC16F877 Timer Modules tutorials - Timer0
Many times, we plan and build systems that perform various processes that depend on time.
Simple example of this process is the digital wristwatch. The role of this electronic system is to display time in a very precise manner and change the display every second (for seconds), every
minute (for minutes) and so on.
To perform the steps we've listed, the system must use a timer, which needs to be very accurate in order to take the necessary actions. The clock is actually the core of any electronic system.
In this PIC timer module tutorial we will study the existing PIC timer modules. The microcontroller PIC16F877 has 3 different timers:
We can use these timers for various important purposes. So far we have used a "delay procedure" to implement delays in the program: a loop that counts up to a specific value before the program can
continue. The "delay procedure" had two disadvantages:
• we could not say exactly how long the Delay procedure was in progress
• we could not perform any further steps while the program executes the "delay procedure"
Now, using Timers, we can build very precise time delays which are based on the system clock and allow us to achieve a desired time delay that is known in advance.
In order for us to know how to work with these timers, we need to learn some things about each one of them. We will study each one separately.
PIC Timer0 tutorial
The Timer0 module timer/counter has the following features:
• 8-bit timer/counter
• Readable and writable
• 8-bit software programmable prescaler
• Internal (4 Mhz) or external clock select
• Interrupt on overflow from FFh to 00h
• Edge select (rising or falling) for external clock
Let’s explain the features of PIC Timer0 we have listed above:
Timer0 has a register called TMR0 Register, which is 8 bits of size.
We can write the desired value into the register, which will be incremented as the program progresses. The frequency varies depending on the Prescaler. The maximum value that can be assigned to this register is 255.
TMR0IF - TMR0 Overflow Interrupt Flag bit.
The TMR0 interrupt is generated when the TMR0 register overflows from FFh to 00h. This overflow sets bit TMR0IF (INTCON<2>). You can initialize the value of this register to whatever you want (not
necessarily "0").
We can read the value of the register TMR0 and write into it. We can reset its value at any given moment (write) or we can check whether it has reached a certain numeric value that we need (read).
Prescaler - Frequency divider.
We can use Prescaler for further division of the system clock. The options are:
• 1:2
• 1:4
• 1:8
• 1:16
• 1:32
• 1:64
• 1:128
• 1:256
The structure of the OPTION_REG register
We perform all the necessary settings with the OPTION_REG register. The size of the register is 8 bits; the relevant bits are set in the initialization example below.
Initializing the OPTION_REG register
The following is an example how we can initialize the OPTION_REG:
1. PSA=0; // Prescaler is assigned to the Timer0 module
2. PS0=1; // Prescaler rate bits
3. PS1=1; // are set to “111”
4. PS2=1; // which means divide by 256
5. TOSE=0; // rising edge
6. TOCS=0; // Internal instruction cycle clock
Block diagram of the PIC Timer0 / WDT prescaler
PIC TIMER0 block diagram
Calculating Count, Fout, and TMR0 values
If using an INTERNAL crystal as the clock, the division is performed as follows:
PIC TIMER0 formula for internal clock
Fout– The output frequency after the division.
Tout – The Cycle Time after the division.
4 - The division of the original clock (4 MHz) by 4, when using internal crystal as clock (and not external oscillator).
Count - A numeric value to be placed to obtain the desired output frequency - Fout.
(256 - TMR0) - The number of times the timer will count, based on the value loaded into the TMR0 register.
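Putting these pieces together, and assuming the prescaler ratio simply enters the denominator (the usual way this relation is written for Timer0), the internal-clock case can be summarized as:
Fout = (Fclk / 4) / (Prescaler × (256 - TMR0) × Count), and Tout = 1 / Fout.
For example, with Fclk = 4 MHz, a 1:256 prescaler and TMR0 = 0, one Timer0 overflow takes 256 × 256 × 1 µs = 65.536 ms, so a 0.5 second delay needs Count = 0.5 / 0.065536 ≈ 7.6, i.e. about 8.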
An example of INTERNAL crystal as clock
Suppose we want to create a delay of 0.5 seconds in our program using Timer0. What is the value of Count?
First, let’s assume that the frequency division by the Prescaler will be 1:256. Second, let’s set TMR0=0. Thus:
Formula to calculate Cout using Timer0
If using an EXTERNAL clock source (oscillator), the division is performed as follows:
PIC TIMER0 formula for external clock
In this case there is no division by 4 of the original clock. We use the external frequency as it is.
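Under the same assumption about the prescaler, the external-clock relation becomes:
Fout = Fext / (Prescaler × (256 - TMR0) × Count).
For the example below (Fext = 100 kHz, prescaler 1:256, TMR0 = 0, Count = 8) this gives Fout = 100000 / (256 × 256 × 8) ≈ 0.19 Hz, i.e. roughly one event every 5.2 seconds.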
An example of EXTERNAL clock source (oscillator):
What is the output frequency - Fout, when the external oscillator is 100kHz and Count=8?
First, let’s assume that the frequency division by the Prescaler will be 1:256. Second, let’s set TMR0=0. Thus:
Formula to calculate Fout for Timer0
Delay of 1 sec using Timer0
The following simple program creates a delay of 1 sec using Timer0:
1 sec delay using Timer0
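For reference, here is a minimal C sketch along these lines. It assumes a 4 MHz clock and uses the register and bit names found in common PIC16F877 C headers (TMR0, the OPTION_REG bits, T0IF); adjust them to your compiler if they differ. The timing follows the formulas above.

#include <xc.h>

void timer0_init(void)
{
    T0CS = 0;                  // internal instruction clock (Fosc/4 = 1 MHz)
    T0SE = 0;                  // edge select (unused with internal clock)
    PSA  = 0;                  // prescaler assigned to Timer0
    PS2 = 1; PS1 = 0; PS0 = 1; // 101 -> prescale 1:64
    TMR0 = 0;                  // start counting from 0
    T0IF = 0;                  // clear the overflow flag
}

void delay_1s(void)
{
    // One overflow = 256 counts x 64 (prescaler) x 1 us = 16.384 ms.
    // 61 overflows = 999.4 ms, i.e. roughly one second.
    unsigned char i;
    for (i = 0; i < 61; i++)
    {
        while (!T0IF)          // wait for TMR0 to roll over from FFh to 00h
            ;
        T0IF = 0;              // clear the flag for the next overflow
    }
}

void main(void)
{
    timer0_init();
    delay_1s();
    // ... rest of the program ...
}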
|
{"url":"http://www.microcontrollerboard.com/pic-timer0-tutorial.html","timestamp":"2014-04-17T12:28:51Z","content_type":null,"content_length":"22186","record_id":"<urn:uuid:9bf7dfd5-9939-4f53-a15d-9326e1b3e77c>","cc-path":"CC-MAIN-2014-15/segments/1398223202774.3/warc/CC-MAIN-20140423032002-00610-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Need Help with equations
Why don't you give these two a try on your own and post what you have done. They are both very similar to the two examples I already posted an answer for. The last one LOOKS tricky, but it is very
straightforward. a, b, c, and d are just numbers. If it helps, pick some random numbers for them and solve the problem that way first. It will show you the steps you have to take to solve it. -Dan
|
{"url":"http://mathhelpforum.com/algebra/8088-need-help-equations.html","timestamp":"2014-04-20T08:48:15Z","content_type":null,"content_length":"44877","record_id":"<urn:uuid:4c5aaace-8d6c-4364-bb74-1793ce318fb2>","cc-path":"CC-MAIN-2014-15/segments/1398223202548.14/warc/CC-MAIN-20140423032002-00464-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Algebra 2 summer online school. Give it a shot or not?
January 13th 2011, 11:14 AM #1
Algebra 2 summer online school. Give it a shot or not?
Hello guys, some of you know me because of my terrible math solving skills (probably the worst math solver on Earth).
I want to speed up in my math since I might want to be a game designer, but this requires good math skills. I went to my counselor and talked to her; she told me that summer school is now online,
and that it is really a bad format for math, because you are basically your own teacher.
But I talked to her about this forum and some other online resources like Purplemath. She told me to think it through, and here I am asking whether I should go for it.
You are your greatest enemy not the course. Look at how you are going into this "worst earth math solver." You can't succeed without confidence.
I'm not putting myself down; it's just an expression, as in I am very bad at it.
But the truth is I have confidence, and a little motivation.
The hard part will be being my own teacher when I can't solve some problems and when this forum can't help me.
I used to never work out the example problems in the book or read the chapters until I started taking independent studies, but when I started doing that, I could figure out most of the missing
steps in the examples. If you want it (success in this class), you will do what it takes (reading the chapters and doing the examples before the exercises). Also, there is probably a student
solutions manual which you may want to invest in.
Sounds good to me. I will give it my best and I will take the course; I still have plenty of time to decide. But I'm doing well in Geometry!
|
{"url":"http://mathhelpforum.com/algebra/168252-algebra-2-summer-online-school-give-shot-not.html","timestamp":"2014-04-21T10:03:22Z","content_type":null,"content_length":"43532","record_id":"<urn:uuid:0a3aacab-0bc3-405e-9cf8-3b63cf127121>","cc-path":"CC-MAIN-2014-15/segments/1397609539705.42/warc/CC-MAIN-20140416005219-00117-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Palmyra, NJ Math Tutor
Find a Palmyra, NJ Math Tutor
...I have worked with kids for several years being a Sunday school teacher, a summer camp counselor, and boy scout leader. I have excellent skills in explaining math terms and have also helped
friends and children before in different math classes. I am experienced and capable of tutoring in classes ranging from college algebra, geometry, high school algebra, 1 and 2 calculus and
10 Subjects: including algebra 1, algebra 2, calculus, geometry
...I am currently studying medicine at UMDNJ and love to help students in biology and physics. At TCNJ, I was a biology major where I also tutored several math and sciences courses for other
students who were either struggling with their courses or just wanted someone to help with confusing topics....
5 Subjects: including prealgebra, algebra 1, physics, biology
...For me, each math problem is like a puzzle in which the answer can be found through more than one method. As for my credentials, I've received the EducationWorks Service Award for one year of
service in the Supplemental Educational Services (SES) Tutoring Program. I specialize in tutoring elementary math, geometry, and algebra for success in school.
5 Subjects: including algebra 1, geometry, prealgebra, reading
Hello! Thank you for your interest. My name is Lawrence and I am currently a senior at Temple University majoring in Sociology/Pre-Medical Studies.
11 Subjects: including SAT math, geometry, prealgebra, precalculus
...I have been an aide for 5 years and have worked in every school in the district from grades K to 12. I am currently assigned to Harriton High School. I am required to take 20 training hours
per year through the Pennsylvania Training and Technical Assistance Network (PaTTAN). Training related t...
35 Subjects: including geometry, reading, ACT Math, SAT math
|
{"url":"http://www.purplemath.com/palmyra_nj_math_tutors.php","timestamp":"2014-04-19T05:02:38Z","content_type":null,"content_length":"23961","record_id":"<urn:uuid:98a0e45d-d7fb-427d-ab00-4f314d4f3859>","cc-path":"CC-MAIN-2014-15/segments/1397609535775.35/warc/CC-MAIN-20140416005215-00012-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Weatherford, TX Algebra 2 Tutor
Find a Weatherford, TX Algebra 2 Tutor
I've always been strong at math. I'm currently enrolled at UTA and am taking calculus 2. I made a B in calculus I last semester.
3 Subjects: including algebra 2, calculus, algebra 1
...As a U.S. Public Health Fellow I tutored high school, undergraduate, and medical students for four years in physics, chemistry, biochemistry, biology, microbiology, and history. I have found
that students at all levels often fear examinations.
40 Subjects: including algebra 2, chemistry, physics, geometry
...In addition, I have privately tutored several people in Math and assisted in the learning lab as a substitute teacher in the Springtown ISD. My last 10 years (prior to retirement in 2008) were spent
developing global applications in Microsoft Access and related technologies on Wall Street. I have trained man...
14 Subjects: including algebra 2, reading, algebra 1, ASVAB
...Sometimes we work way too hard consciously, when our minds can do some churning in the background. Receiving much needed help in a more relaxed manner, our conscious minds are made aware of
possible solutions! I have taught and tutored precalculus for years - along with so much other math.
20 Subjects: including algebra 2, calculus, ASVAB, geometry
...That year every one of my students passed the math portion of the TAKS test. Learning to solve all types of equations is very important. Graphing is very important in Algebra II to get you
ready for Precalculus. I am a certified math teacher, and I have been teaching AP calculus at a private school for six years.
7 Subjects: including algebra 2, calculus, geometry, algebra 1
|
{"url":"http://www.purplemath.com/weatherford_tx_algebra_2_tutors.php","timestamp":"2014-04-20T06:49:40Z","content_type":null,"content_length":"24066","record_id":"<urn:uuid:53400829-805e-490c-9023-23cc35f6ff02>","cc-path":"CC-MAIN-2014-15/segments/1397609538022.19/warc/CC-MAIN-20140416005218-00294-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Re: charpoly using too much stack space !
Karim BELABAS on Mon, 19 May 2003 19:05:26 +0200 (MEST)
Re: charpoly using too much stack space !
On Mon, 19 May 2003, Gonzalo Tornaria wrote:
> On Sun, May 18, 2003 at 06:44:21PM +0200, Karim BELABAS wrote:
>> This patch is quite correct. I have committed to CVS a different one, which
>> fixed a number of minor annoyances [ stack leaks ], and handles stack usage
>> in a less hackish way than previously.
> It's consistently 15% faster for me (in a P4 2.00GHz),
> and it seems to use even less stack than Bill's patch.
Taking heap space into account, I'm using about as much space as Bill.
By killing a clone a bit earlier, I can reduce memory use by a further 30%,
but that wouldn't work in the presence of t_INTMOD / t_PADIC / t_POLMOD
because in general such objects obtained from out-of-stack data (such as
clones) contain pointers to that data, not copies. So I can't delete the
initial clone before creating the new one [ which transforms all such
pointers into hard copies ].
> Notably, if the matrix has only 400 1's, and the rest 0's, it only
> takes 8,5 secs; on the other hand, I can do it in 1,2 secs with a GP
> program that "knows" how to deal with such a sparse matrix.
This is an interesting observation. I have tweaked general matrix
multiplication so as to take advantage of many zero entries [ there's still
no specialized treatment and representation for sparse matrices ! ]. It has a
negligible impact on dense multiplication and a huge one on very sparse
matrices. This rather improves the charpoly() routine, though not to the
extent you mention. Committed to CVS.
A further optimization for +1 entries is complicated/unsafe ( there are many
things which can be "equal to 1" and still not be neutral wrt multiplication ),
and only yields marginal improvements ( say 15% at most ), so I've not
committed this one.
Can you update from CVS and check the speed improvement your GP
implementation provides ? [ and could you post the latter in the case it
still outperforms the native routine ? ]
Karim Belabas Tel: (+33) (0)1 69 15 57 48
Dép. de Mathématiques, Bât. 425 Fax: (+33) (0)1 69 15 60 19
Université Paris-Sud http://www.math.u-psud.fr/~belabas/
F-91405 Orsay (France) http://www.parigp-home.de/ [PARI/GP]
|
{"url":"http://pari.math.u-bordeaux.fr/archives/pari-dev-0305/msg00036.html","timestamp":"2014-04-17T12:39:45Z","content_type":null,"content_length":"6523","record_id":"<urn:uuid:eb7b1a4e-86f2-470a-87f9-e5d913d4a0a3>","cc-path":"CC-MAIN-2014-15/segments/1397609530131.27/warc/CC-MAIN-20140416005210-00595-ip-10-147-4-33.ec2.internal.warc.gz"}
|
C++: Modern C++: Functors
From GPWiki
Modern C++ : Going Beyond "C with Classes"
We've already seen how C++ uses operator overloading and templating to use Iterators as a nice generalisation of pointers. If you've done a fair bit of programming in C, you've probably seen function
pointers used as callbacks. If you're a Java programmer, maybe you used objects that implement a certain interface. Both methods work in C++, but have their drawbacks. Function pointers don't mesh
well with objects, which would bother the OO crowd. Member function pointers or objects with a certain virtual base are annoying, since you have to derive from the certain base class, and are slow,
since they involve virtual function calls.
Luckily, C++ has operator overloading, so we can just template on the type and use objects with overloaded operator()s.
Functions vs Functors
Functions, as you know, are a fundamental part of C++. They're probably second nature to you. Unfortunately, they're not all that convenient to pass around. Like arrays, they're fundamental types but
aren't really "first-class" entities. You can use templates to help generate functions, but that only happens at compile-time. Wouldn't it be cool to be able to make and change functions at runtime?
Well, functors let us approach that goal.
Functors are, like Iterators, pure abstractions: anything that acts like a functor is a functor. The requirements, however, are much simpler. All a functor need to be able to do is to "act like a
function". This means that function pointers are Functors, as is anything with an overloaded operator().
If you look for Functors on Roguewave, you won't find anything -- they use the alternate name Function Objects.
Standard Functors
The std::lib has a number of Functors, all of which can be found in the <functional> header.
Predicates are the subset of Functors that return bool ( or something that can be used as a boolean value ).
The relational predicates are std::equal_to, std::not_equal_to, std::greater, std::less, std::greater_equal, and std::less_equal representing ==, !=, >, <, >=, and <= respectively.
Logical Predicates
The standard provides std::logical_and, std::logical_or, and std::logical_not. You can use std::equal_to<bool> for logical xnor and std::not_equal_to<bool> for logical xor, if you need them.
Arithmetic Functors
All of the basic arithmetic operators supported by C++ are also available as Functors: std::plus, std::minus (2-parameter), std::multiplies, std::divides, std::modulus, and std::negate (1-parameter).
Using Functors With Algorithms
Here's a nice introduction to Functors:
#include <iostream>
#include <functional>
#include <algorithm>
#include <vector>
#include <iterator>
#include <cstdlib>
#include <ctime>
int main() {
std::srand( std::time(0) );
std::vector<int> v;
std::generate_n( std::back_inserter(v), 10,
&std::rand );
std::vector<int> v2( 10, 20 );
std::copy( v.begin(), v.end(),
std::ostream_iterator<int>(std::cout," ") );
std::cout << std::endl;
std::transform( v.begin(), v.end(), v2.begin(),
v.begin(), std::modulus<int>() );
std::copy( v.begin(), v.end(),
std::ostream_iterator<int>(std::cout," ") );
std::cout << std::endl;
std::sort( v.begin(), v.end(), std::greater<int>() );
std::copy( v.begin(), v.end(),
std::ostream_iterator<int>(std::cout," ") );
std::cout << std::endl;
}
First, std::generate_n is used to fill the vector with data. The vector is empty, but the magic of an Back Insert Iterator makes it so that when std::generate_n writes to the output iterator it adds
it to the end of the vector. You'll notice that in this case a function pointer to std::rand makes a perfectly acceptable Functor.
The std::transform line does the equivalent of v[i] = v[i] % v2[i]; for each v[i] in v. The first 3 parameters are the 2 input ranges -- the second input range is assumed to be at least as large as
the first. The fourth parameter is the output iterator. In this case it's safe to use a normal iterator into the container since we know exactly how many elements will need to be written. The fifth
parameter is the binary operation performed.
There is also a 4-parameter version of std::transform which only needs one input range and takes a unary operation. Try using it and std::negate to do the equivalent of a loop of v[i] = -v[i];
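One possible answer to that exercise (this snippet is my own addition, not part of the original article, and it assumes the same vector v as in the program above):
std::transform( v.begin(), v.end(),   // one input range
                v.begin(),            // write the results back over v
                std::negate<int>() ); // unary functor: v[i] = -v[i]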
Writing Your Own Functors
You probably noticed in the code above that the v2 vector serves no purpose other than to be the second range for the std::transform call. The most obvious solution to that would be to write a mod_10
function similar to the following:
int mod_10(int value) { return value % 10; }
But of course that's extremely limited. You could try templating it:
template <typename T, int V>
T mod_by(T value) { return value % V; }
Which would let you say
std::transform( v.begin(), v.end(), v.begin(),
&mod_by<int,10> );
But that's still too limited, since you can't specify the 10 at runtime.
What would really be best here is a structure, since a structure can have the divisor as a member variable:
template <typename T>
struct mod_by {
T const divisor;
mod_by(T const &d) : divisor(d) {}
T operator()(T const &value) const { return value % divisor; }
};
Which would let you say
std::transform( v.begin(), v.end(), v.begin(),
mod_by<int>(10) );
Notice that the divisor member is const. It's fairly common for people to be tempted to create functors with non-constant members and give operator() some sort of side effect, perhaps counting the
number of times the function was called and returning a different answer after a certain point. To do so is generally not safe. Algorithms are free to copy the functor as many times as they wish,
which means that the functor passed in might be copied a number of times, with different objects' operator()s called on different elements in the range.
In Closing
Using functors with your loops lets you do just about anything. Algorithms with Functor parameters can be made to be incredibly general, removing possibilities for errors in writing the loops and
making it clear that a certain line applies an operation to each element ( or corresponding pair of elements ), instead of needing to hunt through the details of a specific loop.
What's Next
Anyone that's used a real functional language is probably scoffing at having to write a separate functor each time. It would be much better to be able to combine Functors and values in order to build
complex functors on the fly, don't you think?
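For a small preview (this example is my own addition, not part of the original article), the C++98-era binders already in <functional> can do a little of that combining:
std::transform( v.begin(), v.end(), v.begin(),
                std::bind2nd( std::modulus<int>(), 10 ) ); // behaves like mod_by<int>(10)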
|
{"url":"http://content.gpwiki.org/index.php/C_plus_plus:Modern_C_plus_plus:Functors","timestamp":"2014-04-17T18:34:26Z","content_type":null,"content_length":"39522","record_id":"<urn:uuid:809e356c-f88f-4e63-a1a5-20d9f805f6ae>","cc-path":"CC-MAIN-2014-15/segments/1397609530895.48/warc/CC-MAIN-20140416005210-00035-ip-10-147-4-33.ec2.internal.warc.gz"}
|
A Theorem Producing the Fine Structure Constant Inverse and the Quark and Lepton Mixing Angles
Authors: J. S. Markovitch
The value 137.036, a close approximation of the fine structure constant inverse, is shown to occur naturally in connection with a theorem employing a pair of related functions. It is also shown that
the formula producing this approximation contains terms expressible using the sines squared of the experimental quark and lepton mixing angles, implying an underlying relationship between these
constants. This formula places the imprecisely measured neutrino mixing angle θ_13 at close to 8.09°, so that sin²2θ_13 ≈ 0.0777.
Comments: 4 Pages.
Download: PDF
Submission history
[v1] 2012-03-30 18:50:28
[v2] 2012-11-13 07:26:22
Unique-IP document downloads: 125 times
|
{"url":"http://vixra.org/abs/1203.0106","timestamp":"2014-04-19T11:59:09Z","content_type":null,"content_length":"7611","record_id":"<urn:uuid:c11bd404-2dfa-4fc8-bccd-1899d47f1f3e>","cc-path":"CC-MAIN-2014-15/segments/1397609537186.46/warc/CC-MAIN-20140416005217-00278-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Marblehead Calculus Tutor
Find a Marblehead Calculus Tutor
...Does it seem like an obstacle course designed to keep you from your real goals? I can help. I cannot promise you you will learn to love these subjects as much as I do, but I promise you there
is a better approach to learning them, and I can help you find it.
12 Subjects: including calculus, chemistry, physics, geometry
...I scored perfect on the GRE quantitative reasoning test. I have a BA and MA in mathematics. This included coursework in logic, discrete math, algebra, and computer science.
29 Subjects: including calculus, reading, English, geometry
...I don't just know the material; I know the student as well.I performed well in my physics courses as an MIT student. I have tutored students in classic mechanics, electricity, and magnetism. I
can offer help in both non-Calculus and Calculus based courses.
24 Subjects: including calculus, chemistry, physics, statistics
...I like to check my students' notebooks and give them guidelines for note-taking and studying. Here too I stress active learning, and my goal is usually to get my students to study by
challenging themselves with problems rather than simply reading over their text, notes, and previous work. If you are in need of assistance for a student struggling in Physics or Math, I am the
man for you.
9 Subjects: including calculus, physics, geometry, algebra 1
...I also published my thesis at that time. During high school, I volunteered and later became employed at Kumon Math and Reading Center. While I was there, I helped students of all ages in
overcoming the challenges that they had in both subjects.
10 Subjects: including calculus, geometry, algebra 2, algebra 1
|
{"url":"http://www.purplemath.com/marblehead_ma_calculus_tutors.php","timestamp":"2014-04-16T13:38:14Z","content_type":null,"content_length":"23779","record_id":"<urn:uuid:c926b4aa-a7cf-4aa0-b4bb-89a61ae8f5ba>","cc-path":"CC-MAIN-2014-15/segments/1397609523429.20/warc/CC-MAIN-20140416005203-00131-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Note on generated choice and axioms of revealed preference
Magyarkuti, Gyula (2000): Note on generated choice and axioms of revealed preference. Published in: Central Europen Journal of Operations Research , Vol. 8, No. 1 (2000): pp. 57-62.
Download (109Kb) | Preview
In this article, we study the axiomatic foundations of revealed preference theory. Let P denote the strict and R the weak revealed preference, respectively. The purpose of the paper is to show that
weak, strong, and Hansson's axioms of revealed preference can be given as identities using the generated choices with respect to P and R in terms of maximality and in terms of greatestness.
Item Type: MPRA Paper
Original Title: Note on generated choice and axioms of revealed preference
Language: English
Keywords: Revealed preference, Weak axiom of revealed preference, Strong axiom of revealed preference, Hansson's axiom of revealed preference
Subjects: C - Mathematical and Quantitative Methods > C0 - General > C02 - Mathematical Methods
D - Microeconomics > D7 - Analysis of Collective Decision-Making > D70 - General
D - Microeconomics > D0 - General > D00 - General
C - Mathematical and Quantitative Methods > C6 - Mathematical Methods; Programming Models; Mathematical and Simulation Modeling > C60 - General
Item ID: 20358
Depositing Gyula Magyarkuti
Date Deposited: 02. Feb 2010 14:20
Last Modified: 13. Feb 2013 20:32
References:
Antonelli, G. (1886), Sulla Teoria Matematica della Economica Politica, Nella tipografia del Folchetto, Pisa
Arrow, K. (1959), Rational choice function and orderings, Economica, N.S., 26 (1959), pp. 121-127.
Clark, S.A. (1985), A complementary approach to the strong and weak axioms of revealed preference, Econometrica, 53 (1985), pp. 1459-1463.
Duggan, J. (1999), General extension theorem for binary relations, Journal of Economic Theory, 86 (1999), pp. 1-16.
Hansson, B. (1968), Choice structures and preference relations, Synthese, 18 (1968), pp. 443-458.
Houthakker, H. (1950), Revealed preference and utility function, Economica, N.S., 17 (1950), pp. 159-174.
Richter, M. (1966), Revealed preference theory, Econometrica, 34 (1966), pp. 635-645.
Richter, M. (1971), Rational choice, in Preferences Utility and Demand by J. Chipman, L. Hurwitz, M. Richter, and H. Sonnenschein (eds.), New-York: Horcourt Brace Jowanowich, (1971),
pp. 29-58.
Samuelson, P. (1947), Foundations of Economic Analysis, Harvard University Press, Cambridge, Massachusetts, 1947.
Sen, A. K. (1971), Choice functions and revealed preference, Review of Economic Studies, (1971), pp. 307-317.
Suzumura, K. (1976), Rational choice and revealed preference, Review of Economic Studies, 43 (1976), pp. 149-158.
Suzumura, K. (1977), Houthakker's axiom in the theory of rational choice}, Journal of Economic Theory, 14 (1977), pp. 284-290.
Uzawa, H. (1956), A note on preference and axioms of choice, Annals of the Institute of Statistical Mathematics, 8 (1956), pp. 35-40.
URI: http://mpra.ub.uni-muenchen.de/id/eprint/20358
|
{"url":"http://mpra.ub.uni-muenchen.de/20358/","timestamp":"2014-04-19T08:07:48Z","content_type":null,"content_length":"20417","record_id":"<urn:uuid:739fbd14-b982-412b-b82a-9e8fd0ace2aa>","cc-path":"CC-MAIN-2014-15/segments/1398223203422.8/warc/CC-MAIN-20140423032003-00193-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Bowling Balls
From ChemPRIME
It may come as a surprise that some bowling balls float!^[1]
Earlier we showed how unity factors can be used to express quantities in different units of the same parameter. For example, a density can be expressed in g/cm^3 or lb/ft^3. Now we will also see
how conversion factors representing mathematical functions, like D = m/V, can be used to transform quantities into different parameters. For example, if bowling balls have a regulation volume, what
must their densities be to achieve the desired ball mass? Unity factors and conversion factors are conceptually different, and we'll see that the "dimensional analysis" we develop for unit conversion
problems must be used with care in the case of functions.
Suppose we have a spherical ball which measures 27 inches in circumference. The Official Specifications Manual of the American Bowling Congress (ABC) reports circumferences in the range of 26.704–27.002
in. and diameters of 8.500–8.595 in.^[2], but the World Tenpin Bowling Association (WTBA) specifies 27 in. as the maximum circumference for a bowling ball used in competitions, and the maximum weight is
16 lb. Using C = 2πr, we can easily calculate that its volume (neglecting the drilled finger holes) is
$V = \frac {4}{3} \pi r^3$
= $\frac {4}{3} \pi (\frac {C}{2 \pi})^3$ = 5447 cm^3
The construction of bowling balls is very technical. In the early 1900s they were made of wood, then rubber. But in the 1960s complex polymer chemistry was employed to create inner "cores" whose
shape controls the rotational properties of the balls. The core is surrounded by a "filler" that determines the overall density of the ball, and the whole thing is coated with a "coverstock" that
grips the alley, and allows the rotating ball to "hook". The density controls the weight (mass) of the ball. For example, we can calculate the mass of a ball whose density is 1.33 g/cm^3 from the
mathematical function which defines density:
$\text{Density} = \frac{\text{mass}}{\text{volume}}\quad\text{or}\quad \rho = \frac{m}{V} \qquad \text{(1.1)}$
If we multiply both sides by V, we obtain
$\text{V}\times \rho =\frac{\text{m}}{\text{V}}\times \text{V}=\text{m}$
m = V × ρ or mass = volume × density (1.2)
So for a volume of 5447 cm^3 and ρ = 1.33 g/cm^3, we calculate a mass of
m = 5447 cm^3 x 1.33 $\frac {g}{cm^3}$ = 7245 g or in pounds 7245 g x $\frac {1 lb}{453.59 gram}$ = 16.0 lb
The formula which defines density can also be used to convert the mass of a sample to the corresponding volume. If both sides of Eq. (1.2) are multiplied by 1/ρ, we have
\begin{align} & \frac{\text{1}}{\rho }\times \text{m}=\text{V}\rho \times \frac{\text{1}}{\rho }=\text{V} \\ & \text{ V}=\text{m}\times \frac{\text{1}}{\rho } \\ \end{align} (1.3)
The lowest density in modern bowling ball composition is 0.666 g/cm^3 for 8 lb balls. What is the circumference of the ball?
V = m/D = 8 lb x $\frac {453.59 g}{1 lb} / 0.666 \frac {g}{cm^3}$ = 5449 cm^3
and the circumference is calculated from the formula above to be the regulation 27 in.
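To make the two-way conversion concrete, here is a small C++ sketch of the same arithmetic (my own illustration, not part of the original article; the 2.54 cm/in and 453.59 g/lb factors are the usual conversion constants):
#include <cmath>
#include <cstdio>

int main()
{
    const double pi = 3.14159265358979;
    const double C_in = 27.0;                       // regulation circumference, inches
    const double r = (C_in * 2.54) / (2.0 * pi);    // radius in cm
    const double V = (4.0 / 3.0) * pi * r * r * r;  // volume, about 5447 cm^3
    const double rho = 1.33;                        // density in g/cm^3
    const double mass_lb = V * rho / 453.59;        // m = V * rho, then grams to pounds
    std::printf("V = %.0f cm^3, mass = %.1f lb\n", V, mass_lb);
    return 0;
}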
Notice that we used the mathematical function D = m/V to convert parameters from mass to volume or vice versa in these examples. How does this differ from the use of unity factors to change units of
one parameter?
An Important Caveat
A mistake sometimes made by beginning students is to confuse density with concentration, which also may have units of g/cm^3. By dimensional analysis, this looks perfectly fine. To see the error, we
must understand the meaning of the function
C = $\frac{m}{V}$.
In this case, V refers to the volume of a solution, which contains both a solute and solvent.
Given a concentration of an alloy is 10g gold in 100 cm^3 of alloy, we see that it is wrong (although dimensionally correct as far as conversion factors go) to incorrectly calculate the volume of
gold in 20 g of the alloy as follows:
20 g x $\frac{100 cm^3}{10 g}$ = 200 cm^3
It is only possible to calculate the volume of gold if the density of the alloy is known, so that the volume of alloy represented by the 20 g could be calculated. This volume multiplied by the
concentration gives the mass of gold, which then can be converted to a volume with the density function.
The bottom line is that using a simple unit cancellation method does not always lead to the expected results, unless the mathematical function on which the conversion factor is based is fully
A solution of ethanol with a concentration of 0.1754 g / cm^3 has a density of 0.96923 g / cm^3 and a freezing point of -9 ° F ^[3]. What is the volume of ethanol (D = 0.78522 g / cm^3 at 25 °C) in
100 g of the solution?
The volume of 100 g of solution is
V = m / D = 100 g /0.96923 g cm^-3 = 103.17 cm^3.
The mass of ethanol in this volume is
m = V x C = 103.17 cm^3 x 0.1754 g / cm^3 = 18.097 g.
The volume of ethanol = m / D = 18.097 g / 0.78522 g / cm^3 = 23.05 cm^3.
Note that we cannot calculate the volume of ethanol by
$\frac {\frac{0.96923 g}{cm^3} x 100 cm^3}{\frac {0.78522 g}{cm^3}}$ = 123.4 cm^3
even though this equation is dimensionally correct.
Note that this result required knowing when to use the function C = m/V and when to use the function D = m/V as conversion factors. Pure dimensional analysis could not reliably give the answer, since both
functions have the same dimensions.
The two calculations just done show that density is a conversion factor which changes volume to mass, and the reciprocal of density is a conversion factor changing mass into volume. This can be done
because of the mathematical formula, D = m/v, which relates density, mass, and volume. Algebraic manipulation of this formula gave us expressions for mass and for volume, and we used them to solve
our problems. If we understand the function D = m/V and heed the caveat above, we can devise appropriate converstion factors by unit cancellation, as the following example shows:
EXAMPLE 1.11 A student weighs out 98.0 g of mercury. If the density of mercury is 13.6 g/cm^3, what volume does the sample occupy?
We know that volume is related to mass through density.
V = m × conversion factor
Since the mass is in grams, we need to get rid of these units and replace them with volume units. This can be done if the reciprocal of the density is used as a conversion factor. This puts grams in
the denominator so that these units cancel:
$V=m\times \frac{\text{1}}{\rho }=\text{98}\text{.0 g}\times \frac{\text{1 cm}^{3}}{\text{13}\text{.6 g}}=\text{7}\text{.21 cm}^{3}$
If we had multiplied by the density instead of its reciprocal, the units of the result would immediately show our error:
$V=\text{98}\text{.0 g}\times \frac{\text{13}\text{.6 g}}{\text{1 cm}^{3}}=\text{1333 }{\text{g}^{2}}/{\text{cm}^{3}}\;$ (no cancellation!)
It is clear that square grams per cubic centimeter are not the units we want.
Using a conversion factor is very similar to using a unity factor—we know the factor is correct when units cancel appropriately. A conversion factor is not unity, however. Rather it is a physical
quantity (or the reciprocal of a physical quantity) which is related to the two other quantities we are interconverting. The conversion factor works because of that relationship [Eqs. (1.1), (1.2),
and (1.3) in the case of density, mass, and volume], not because it is equal to one. Once we have established that a relationship exists, it is no longer necessary to memorize a mathematical formula.
The units tell us whether to use the conversion factor or its reciprocal. Without such a relationship, however, mere cancellation of units does not guarantee that we are doing the right thing.
A simple way to remember relationships among quantities and conversion factors is a “road map“of the type shown below:
$\text{Mass }\overset{density}{\longleftrightarrow}\text{ volume or }m\overset{\rho }{\longleftrightarrow}V\text{ }$
This indicates that the mass of a particular sample of matter is related to its volume (and the volume to its mass) through the conversion factor, density. The double arrow indicates that a
conversion may be made in either direction, provided the units of the conversion factor cancel those of the quantity which was known initially. In general the road map can be written
$\text{First quantity }\overset{\text{conversion factor}}{\longleftrightarrow}\text{ second quantity}$
As we come to more complicated problems, where several steps are required to obtain a final result, such road maps will become more useful in charting a path to the solution.
EXAMPLE 1.12 Black ironwood has a density of 67.24 lb/ft^3. If you had a sample whose volume was 47.3 ml, how many grams would it weigh? (1 lb = 454 g; 1 ft = 30.5 cm).
The road map
$V\xrightarrow{\rho }m\text{ }$
tells us that the mass of the sample may be obtained from its volume using the conversion factor, density. Since milliliters and cubic centimeters are the same, we use the SI units for our
Mass = m = 47.3 cm^3 × $\frac{\text{67}\text{.24 lb}}{\text{1 ft}^{3}}$
Since the volume units are different, we need a unity factor to get them to cancel:
$m\text{ = 47}\text{.3 cm}^{\text{3}}\text{ }\times \text{ }\left( \frac{\text{1 ft}}{\text{30}\text{.5 cm}} \right)^{\text{3}}\text{ }\times \text{ }\frac{\text{67}\text{.24 lb}}{\text{1 ft}^{\text
{3}}}\text{ = 47}\text{.3 cm}^{\text{3}}\text{ }\times \text{ }\frac{\text{1 ft}^{\text{3}}}{\text{30}\text{.5}^{\text{3}}\text{ cm}^{\text{3}}}\text{ }\times \text{ }\frac{\text{67}\text{.24 lb}}{\
text{1 ft}^{\text{3}}}$
We now have the mass in pounds, but we want it in grams, so another unity factor is needed:
$m\text{ = 47}\text{.3 cm}^{\text{3}}\text{ }\times \text{ }\frac{\text{1 ft}^{\text{3}}}{\text{30}\text{.5}^{\text{3}}\text{ cm}^{\text{3}}}\text{ }\times \text{ }\frac{\text{67}\text{.24 lb}}{\text
{1 ft}^{\text{3}}}\text{ }\times \text{ }\frac{\text{454 g}}{\text{ 1 lb}}\text{ = 50}\text{.9 g}$
In subsequent chapters we will establish a number of relationships among physical quantities. Formulas will be given which define these relationships, but we do not advocate slavish memorization and
manipulation of those formulas. Instead we recommend that you remember that a relationship exists, perhaps in terms of a road map, and then adjust the quantities involved so that the units cancel
appropriately. Such an approach has the advantage that you can solve a wide variety of problems by using the same technique.
|
{"url":"http://wiki.chemprime.chemeddl.org/index.php/Bowling_Balls","timestamp":"2014-04-17T09:35:23Z","content_type":null,"content_length":"83696","record_id":"<urn:uuid:e7d4655d-9bb8-462b-8883-faa91b144c02>","cc-path":"CC-MAIN-2014-15/segments/1398223202457.0/warc/CC-MAIN-20140423032002-00262-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Three times the sum of a number and seven is two less than five times the number. a. 3x+7=5x-2 b. 3(x+7)=5x-2 c. 3(x+7)=2-5x d. 3x+7=2-5x
Can someone please help me with this problem?
Ok, so it's 3*(the sum of a number and 7); "is" means equals; and "two less than" means take 2 away from 5*the number. So let the number = x. Then 3*(x+7) = 2 less than 5x.
Now "2 less than 5x" is the tricky part; can you figure it out?
It's B.
He is right
do you need it worked out ?
He needs to understand how to translate words into an equation into mathematical terms
ok....3 times the sum of a number and 7......the sum of a number and 7 = (x + 7), and 3 times this is 3(x + 7)...."is" means equals. Two less than 5 times the number.....5x - 2. Which comes down
to 3(x + 7) = 5x - 2. Do you understand?
ohhhh yes!!! Thank you guys soooo much!!!
YW :)
|
{"url":"http://openstudy.com/updates/51d46c51e4b08d0a48e4f67a","timestamp":"2014-04-17T07:04:27Z","content_type":null,"content_length":"56387","record_id":"<urn:uuid:ace69392-bea4-4f8c-9fb5-5bd52bc035dc>","cc-path":"CC-MAIN-2014-15/segments/1398223206147.1/warc/CC-MAIN-20140423032006-00159-ip-10-147-4-33.ec2.internal.warc.gz"}
|
A simple and TRUE solution to Colebrook-White
Simple and TRUE
... It has been proved to be right. It has been compared to Newton's version. My Simple and True version has now been shown to be not an approximation but a real solution. Newton's approximation is usually
right, but sometimes it is wrong by a few decimals. Each of those approximations is usually complex. This Simple and TRUE method is simple and true. LOL
Info: When f is on both sides of the equation, but one occurrence is inside a Log, then f can be solved. To solve it easily, let 1/Sqrt(f) be X. Rr is the relative roughness (which is the actual roughness
divided by the inside diameter) and Re is the Reynolds Number. Make A=Rr/3.7 and B=2.51/Re; then X = -2*Log(A+B*X). Use X=3 (or any number you want to test; a different starting number will still always
solve it). For the first loop, use the selected X to solve the right side for a new X. After about 7 loops the X will stop changing to the number of decimals in your calculator. Then f will be 1
divided by X squared. To check, compute both sides with the f, and you will see both sides of the main equation will equal the X. So the f is right to as many decimals as your calculator can figure.
This method has been shared with the whole world for 4 years, and everyone has found it to be correct. To test this method before learning it, you can go to the very bottom and completely test my method
using the site from Japan! / Harrell
The Colebrook-White equation written in Excel is....
1/sqrt(f) = -2* Log(Rr/3.7+ 2.51/Re*1/sqrt(f))
Where Rr is the relative roughness,
(which is the actual roughness divided by the inside diameter)
and Re is the Reynolds Number. I can solve this. I will show you how in a simple way. Let's call the "1/sqrt(f)" term X
at first, then solve for X. ("TRUE" means 14 or 15 digits because Excel will only go that far. If your program can go 100 or 1000 digits, this solution will prove that many digits.)
X = -2* Log(Rr/3.7+ 2.51/Re*X)
When we find the true X, we can solve for f, which is the Darcy Friction Factor.
Example: (in an Excel worksheet)
Enter Rr in cell
Enter Re in cell
Enter an initial value of X in cell D5 as 3 (or you may use a large number to see the initial X still works. Example 1000, but it might take an extra loop)
Enter this equation in the cell D6 for the next X....
For Cell D6, you should have 4.78, but reformat to "Number, 15 decimals".
It should then be...
if you type the equations and variables right, if you started with 3.
(if you started with 1000, it would be 3.559571722914280, which will take one extra loop.)
In Cell E6 type...
. It will say "False", meaning "Not Solved" which means this X is not right yet. Copy cells D6 and E6 and paste into cells from
. Cell
will now say "TRUE" for 15 decimals.
You should know this about copying and pasting Excel equations, the variable that says D5 (without the $) increases to the next cell. So that D5 converts to D6, D7, D8, etc. as it is pasted down. So
the value of the equation is used as the next X in the next equation.
Into cell
write the equation
and that will be..
, which is the Darcy F friction factor to 15 decimals!!
You can change that initial value of 3 to 1000 and you will see the DarcyF does not change, it may only take one step more to compute X. Also you change the Rr and Re for different solutions. If you
are experienced in Excel you should be able move the computations around. Also you can compare this TRUE DarcyF to other approximations and see how close they are.
If you want to test this method for thousands of variables, change
=RANDBETWEEN(2500,10000000) D2
then press or press and hold F9
If you know the actual roughness and internal diameter instead of the relative roughness, divide the actual roughness by the inside diameter (in the same units) to get the "relative" roughness.
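If you would rather run the same fixed-point loop outside of Excel, here is a minimal C++ sketch of the mode 2.51 iteration (this is my own illustration, not the author's code; the function name, the loop cap of 50, and the sample Re and Rr are only an example):
#include <cmath>
#include <cstdio>

// Fixed-point iteration for the "mode 2.51" Colebrook equation:
//   X = -2*log10( Rr/3.7 + 2.51/Re * X ),  where X = 1/sqrt(f)
double darcy_f(double Re, double Rr)
{
    double A = Rr / 3.7;
    double B = 2.51 / Re;
    double X = 3.0;                            // starting guess, as in the article
    for (int loop = 0; loop < 50; ++loop) {
        double newX = -2.0 * std::log10(A + B * X);
        if (newX == X) break;                  // stopped changing at double precision
        X = newX;
    }
    return 1.0 / (X * X);                      // f = 1/X^2
}

int main()
{
    std::printf("f = %.15f\n", darcy_f(2525.0, 0.01824));
    return 0;
}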
The Goudar–Sonnad method is good; it averages 14.9 of 15 correct decimal places. Serghides's approximation averages 14.8 correct decimal places. But Haaland's method only averages 2 correct
decimals. Other approximation methods average from 3 to 7 correct decimal places.
This is the example worksheet (row numbers are shown on the far left; the notes were in blue). FALSE means the right side of the equation is not yet equal to the left side. The initial X (3) is put in cell D5, and row 6 shows the next value, 4.776191571199940.
On an average of random values of Re and Rr, the funnel will take you to an accuracy of 15 decimal places in seven loops for any of the different Colebrook equations. But for some values of Re and Rr
it may take 20 loops.
If your computing method could take you to 1000 decimals, the funnel would take you to 1000 decimals for X. and computing Darcyf from f=1/X/X (or 1 / (X)^2) it will take you to another decimal place
A similar way this works
A very simple prove this works. Try this equation,
Guess a value for X and you should understand this works very cool. For and example, lets guess the X as 20 and solve,.. Log(20)+9= ? about 10.3, then use 10.3 for X and you will get about 10.01,
then Log(10.01)+9= about 10.0004, Wow, lets try 10...
Log(10)+9 = 10... Exactly, right! The Log of 10 is 1, and 1+9 =10
So no matter what you first guess, this solution will quickly get the right answer.
If the first guess was 1000. Then Log(1000)+9 = 12. The way the solution works is to get you close very quickly. Using all the decimals in Excel it will take 11 loops.
Guess 20 then next is... 10.3010299956640, next is...
10.0128806517992, next is... 10.0005590396375, next is...
10.0000242781044, next is... 10.0000010543834, next is...
10.0000000457913, next is... 10.0000000019887, next is...
10.0000000000864, next is... 10.0000000000038, next is...
10.0000000000002, next is... 10.0000000000000
Now, lets return to the (above) Colebrook Equation.
This is a little similar to the Colebrook Equation. The Log function works like what I call a funnel to project each step toward the correct answer by about two more decimal places in each step. An
average of seven steps makes the true, absolute value found. Excel can only do 14 or 15 decimal places so Excel holds off more accuracy. To solve this test, write into Excel an initial guess. Enter a
5 or 1000 for a for first step in Excel's cell D5. It will still give the right answer.
How can this work? Test it: solve this equation, X = 10 + Log(X).
In a cell D6, enter the equation =10+Log(D5). It should answer 10.6989... Enter into cell E6 this =D6=D5 Copy the cells D6 and E6 down into to about 20 rows. Cell E16 should say TRUE. This the
equation has been solved because the left side equals the right side.
The answer to X should be
when displayed as 15 decimals.
To understand this, this method works easily when a variable like X is on both sides of an equation, but on one side it is in a Log function. Since the Log makes the section much smaller than the
section not with Log. You can test other equations like ....
The method of starting with a "guess" of X and computing the right side of the equation will give a new X that is from one to three more correct digits, then
recompute the right in Excel will give 15 digits, but sometimes Excel might round the last digit(s) wrong because it did not go to more than 15 digits. Here are the
answers for those 3 equations. The 3rd is almost like the Colebrook equation.
Another thing to understand is the "FUNNEL". To plot a funnel to see how this works is to make a large and a small initial X and plot steps. Each additional computation takes the value to toward the
True computation.
See the funnel plotted here...
The "Funnel" works in Colebrook-White, because of the Log function.
Of course it won't work on Log() alone, but it works on other math with a Log. Like something as simple as this... X=10+Log(X). The Log(X) will be a lot smaller than X, and that's what make the
Say for example... What's the Log(1000), the Log(100) and the Log(10)? Well, if you don't know, the answers are 3, 2, and 1. The answers are 1 apart, not 10 times apart as 1000. 100, and 10. This is
why the graph makes a funnel shape.
So lets look at the test for X=10+Log(X) again.
If you guess the X in the equation as 3, the answer would be 10.47.
If you guess the first X as 1000, you would get 13.
If you guess the first X as 20, you would get 11.30
Graph this... in step 1, the difference between 3 and 1000 was 997! The second different is 13 - 10.47 = 2.53. So the funnel top was 997 going down to a narrow step 2 as 2.53. If you take one more
loop, both will give you 11.04. In an average of many different equations you will find each loop takes you about 2 more decimals of accuracy. If you graph this, you will see the "Funnel" taking to a
solution on each loop.
This is a graph starting at 3 and 20, (the 1000 was to big in the graph).
This shows 3 steps. At 12 steps the accuracy is for 13 decimals, just because Excel will only get 15 digits.
See a funnel?...
taking your estimate to a quick, accurate solution.
Actually.it is...11.0430906366728 to 15 digits.
If you zoom into each pair of loops, you will see each loop
has another funnel shape reducing from very wide to very narrow.
How simple? If you can take a PC calculator, try it this way.
Guess a number for X to use in the equation X=10+Log(X)
Enter the guessed value of X,
1. Press [LOG]
2. Press [Plus] 10 then Equals
3. That gives you the first loop, go back to step 1.
After several times looping you will have 11.0430... and each loop will improve the decimal accuracy by and average of two decimal places. That is how the Log funnel works,
When Mathematicians understand my funnel, they will agree that this is a Math solution to the Colebrook equations. Because of the Log each loop averages about 2 or 3 digits closer. Since Excel will
only show 15 digits, the average is about 7 loops.
Sample -------------------------------------------------------------------------------------
Started with 10, see 32 digits done with a desktop calculator. Red shows the unchanged digits.
10 (started with this as the "guess", which might save one loop)
11 (one digit did not change)
11.041392685158225040750199971243 (two digits did not change, 2 more were added)
11.043023855760277278908417779052 (4 digits did not change, 2 more were added)
11.043088010354671000395276406585 (6 digits did not change, 2 more were added)
11.043090533386927914699790684320 (6 digits did not change, 0 more was added)
11.043090632610882359484124482596 (8 digits did not change, 2 more were added)
11.043090636513088500429056209373 (10 digits did not change, 2 more were added)
11.043090636666551570718217687700 (11 digits did not change, 1 more was added)
11.043090636672586852578685610125 (12 digits did not change, 1 more was added)
11.043090636672824203669514154799 (14 digits did not change, 2 more were added)
11.043090636672833538037257025437 (15 digits did not change, 1 more was added)
11.043090636672833905132351701379 (17 digits did not change, 2 more were added)
11.043090636672833919569195443481 (18 digits did not change, 1 more was added)
11.043090636672833920136956931599 (18 digits did not change, 0 more was added)
11.043090636672833920159285434501 (21 digits did not change, 3 more were added)
11.043090636672833920160163553310 (21 digits did not change, 0 more was added)
11.043090636672833920160198087315 (24 digits did not change, 3 more were added)
11.043090636672833920160199445443 (25 digits did not change, 1 more was added)
11.043090636672833920160199498855 (27 digits did not change, 2 more were added)
11.043090636672833920160199500955 (26 digits did not change, 1 reduced; 9498 changed to 9500)
11.043090636672833920160199501038 (28 digits did not change, 2 more were added)
11.043090636672833920160199501041 (30 digits did not change, 2 more were added)
11.043090636672833920160199501041 (32 digits did not change, 2 more were added)
11.043090636672833920160199501041 (32 digits did not change, none was added because it has reached as many digits as the program can compute)
This averages about 1.5 extra correct digits per loop.
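To reproduce a table like the one above without a desktop calculator, a short loop is enough. This is a minimal C++ sketch (my own illustration, not part of the original post; double precision only shows about 15 of the digits listed above):
#include <cmath>
#include <cstdio>

int main()
{
    double x = 10.0;                          // same starting guess as the sample above
    for (int loop = 1; loop <= 12; ++loop) {
        x = 10.0 + std::log10(x);             // one pass of X = 10 + Log(X)
        std::printf("loop %2d: %.15f\n", loop, x);
    }
    return 0;
}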
Information: if you must have 10 correct digits, you should loop about two more than 10, just because the last digit might change because of rounding. If your program can compute hundreds or thousands of
digits, you will see the results are the same. So if you need 50 correct digits, you should go to 52 just to make sure the 50th is right.
This computation is just like solving each of the Colebrook-White equations.
This is a graph of the Simple and True Solution to the Colebrook equation.
This graph shows some first guess for X...0 to 8 at step 1. Each step goes about 2 decimals closer.. 8 was high and 0 was to low, and you can see where they (Red vs Black) crossed at step 2. To get
15 decimals correct, it takes about seven steps.
I just call it a funnel, because of the shape. Each step zooms in much closer, just like the graph from step 1 to 2, To graph step 2 to 3 they vertical side should go from 4.1 to 4.3.
A few people asked how I figured out how to do this TRUE solution.
When I first began to use approximations, I tested them and found most of them were good enough for actual design. But I wanted to know the TRUE solution to test each approximation. This was before I
learned the "Funnel" would take me quickly to the accurate point.
I knew that using Excel that can run fast steps, I could find the TRUE value. Here's a sample of how I could solve for a TRUE solution, but usually it took over a hundred steps, but the time uses was
very quick. I fixed the main equation to find X first where X is substituted for "1/sqrt(f)". X is normally somewhere between 2 and 5, so I would start with X=1 and take one step further until the
solution of the equation gave me ax X (answer) that was larger that the X used in the equation. Then I would reverse one step and reduce the step size. After 100 steps the TRUE solution would be
found where the right side X would compute to the same left side X.
Several Colebrook-White equations
Why should the public learn this more accurate solution? Ordinary approximations try to solve the main Colebrook-White equation, as shown in these methods. But as time has passed, many specific types
of material and fluids make slight changes in the Colebrook-White equation to more accuracy. My methods can be easily changed by updating the specific equation. Here are six different Colebrook
versions. The public approximations only go toward first version listed. Many of the designs we do say the version 4 is the specific equation we should use. But none of the public approximations are
very good for it. But I can put that equation into my method and it will get the TRUE solution. All the public approximations fail to be good for those different Colebrook equations.
1/sqrt(f)=-2*Log(Rr/3.7+2.51/Re*1/sqrt(f)) (this is the main one, shown above)
To use these in my method, first use the RED part (the 1/sqrt(f) term) as X; then, when X is solved,
compute f as f = 1/X^2, though I like the "simple" form f = 1/X/X (easier to type).
VBA programming ===============
mode 2.51: 1/sqrt(f) = -2*Log(Rr/3.7 + 2.51/Re*1/sqrt(f))
mode 1.74: 1/sqrt(f) = 1.74 - 2*Log(2*Rr + 18.7/Re*1/sqrt(f))
mode 1.14: 1/sqrt(f) = 1.14 + 2*Log(1/Rr) - 2*Log(1 + (9.3/(Re*Rr))*1/sqrt(f))
mode 9.35: 1/sqrt(f) = 1.14 - 2*Log(Rr + 9.35/Re*1/sqrt(f))
mode 3.71: 1/sqrt(f) = -2*Log(Rr/3.71 + 2.51/Re*1/sqrt(f))
mode 3.72: 1/sqrt(f) = -2*Log(Rr/3.72 + 2.51/Re*1/sqrt(f))
The symbol x is the left side of the Colebrook Equation, a number that is made by the right side of the equation. the symbol d will be a guess or a re-evaluated value of x. The symbol "A" should be
part of the convergence and it is a number that will be multiplied by d on the right side. The symbol "B" should be part of the convergence and it is a number that will be added to "A *d" on the
right side.
The symbol "C" should be part of the convergence and it is a number that will be multiplied to the "LOG( B+A*d)" on the right side. The C might be =1/log(10) to convert LN to LOG10, but it might have
another part that will be multiplied to the "LOG( B+A*d)" on the right side, like 2 so it would be C=2/Log(10)
Another new part, test for x=d to end the convergence, but if the convergence changes the last decimals back and forth then, use L to count the loops and quit if x<>d.
Note: the variable "a" will be value of a number divided by the Reynolds Number, (for example
) "b" will be Relative Roughness divided by a number (for example
). "c" will be the value to convert Ln() to Log10, and include the multiplier, usually 2, (for example
). "d" will be the number moving toward the true value of "x". And the Darcy Factor will be 1/x/x.
The equation like "
x = Abs(c * Log(b + a * d))
" will be the simplified equation of a Colebrook-White mode. For each different modes, these variables will be changed to match the mode. To solve for x, estimate a value for d, then solve for x,
then use the x for the next value of d. Solve for x again and again until x does not change (usually 7 loops for 14 or 15 decimals of accuracy.). Then the Darcy factor will then be 1 / x / x.
New VBA Function updated Oct, 2013 =================================================================
Option Explicit
Dim L As Integer
Dim mode As String

Function Easy(re As Double, rr As Double, Optional mode, Optional Find As String) As Double
    'mode 2.51  1/sqrt(f) = -2*Log(Rr/3.7 + 2.51/Re*1/sqrt(f))
    'mode 1.74  1/sqrt(f) = 1.74 - 2*Log(2*Rr + 18.7/Re*1/sqrt(f))
    'mode 1.14  1/sqrt(f) = 1.14 + 2*Log(1/Rr) - 2*Log(1 + (9.3/(Re*Rr)*1/sqrt(f)))
    'mode 9.35  1/sqrt(f) = 1.14 - 2*Log(Rr + 9.35/Re*1/sqrt(f))
    'mode 3.71  1/sqrt(f) = -2*Log(Rr/3.71 + 2.51/Re*1/sqrt(f))
    'mode 3.72  1/sqrt(f) = -2*Log(Rr/3.72 + 2.51/Re*1/sqrt(f))
    Dim A As Double, B As Double, D As Double, X As Double, F As Double, Left As Double, Right As Double
    If IsMissing(mode) Then mode = 2.51
    If IsMissing(Find) Then Find = ""
    If rr > re Then Exit Function
    If re < 2000 Then
        Easy = 64 / re
        Exit Function
    End If
    L = 0
    D = 3
    Select Case mode
    Case 2.51
        A = 2.51 / re
        B = rr / 3.7
        While X <> D
            X = -2 * Log10(B + A * D)
            D = -2 * Log10(B + A * X)
            L = L + 1: If L > 20 Then X = D    'Excel can't go to more digits; each loop adds about two more correct digits
        Wend
        F = 1 / X / X
        Easy = F
        If Find = "Left" Then Easy = 1 / Sqr(F)
        If Find = "Right" Then Easy = -2 * Log10(rr / 3.7 + 2.51 / re * 1 / Sqr(F))
    Case 1.74
        A = 18.7 / re
        B = 2 * rr
        While X <> D
            X = 1.74 - 2 * Log10(B + A * D)
            D = 1.74 - 2 * Log10(B + A * X)
            L = L + 1: If L > 20 Then X = D
        Wend
        F = 1 / X / X
        Easy = F
        If Find = "Left" Then Easy = 1 / Sqr(F)
        If Find = "Right" Then Easy = 1.74 - 2 * Log10(2 * rr + 18.7 / re * 1 / Sqr(F))
    Case 1.14
        A = 9.3 / (re * rr)
        B = 1.14 + 2 * Log10(1 / rr)
        While X <> D
            X = B - 2 * Log10(1 + (A * D))
            D = B - 2 * Log10(1 + (A * X))
            L = L + 1: If L > 20 Then X = D
        Wend
        F = 1 / X / X
        Easy = F
        If Find = "Left" Then Easy = 1 / Sqr(F)
        If Find = "Right" Then Easy = 1.14 + 2 * Log10(1 / rr) - 2 * Log10(1 + (9.3 / (re * rr) * 1 / Sqr(F)))
    Case 9.35
        A = 9.35 / re
        B = rr
        While X <> D
            X = Abs(1.14 - 2 * Log10(B + A * D))
            D = Abs(1.14 - 2 * Log10(B + A * X))
            L = L + 1: If L > 20 Then X = D
        Wend
        F = 1 / X / X
        Easy = F
        If Find = "Left" Then Easy = 1 / Sqr(F)
        If Find = "Right" Then Easy = 1.14 - 2 * Log10(rr + 9.35 / re * 1 / Sqr(F))
    Case 3.71
        A = 2.51 / re
        B = rr / 3.71
        While X <> D
            X = -2 * Log10(B + A * D)
            D = -2 * Log10(B + A * X)
            L = L + 1: If L > 20 Then X = D
        Wend
        F = 1 / X / X
        Easy = F
        If Find = "Left" Then Easy = 1 / Sqr(F)
        If Find = "Right" Then Easy = -2 * Log10(rr / 3.71 + 2.51 / re * 1 / Sqr(F))
    Case 3.72
        A = 2.51 / re
        B = rr / 3.72
        While X <> D
            X = -2 * Log10(B + A * D)
            D = -2 * Log10(B + A * X)
            L = L + 1: If L > 20 Then X = D
        Wend
        F = 1 / X / X
        Easy = F
        If Find = "Left" Then Easy = 1 / Sqr(F)
        If Find = "Right" Then Easy = -2 * Log10(rr / 3.72 + 2.51 / re * 1 / Sqr(F))
    Case Else
        Easy = 0
    End Select
End Function

Static Function Log10(X)
    Log10 = Log(X) / Log(10#)
End Function
'============================
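Once the module above is pasted into a standard VBA module, Easy can be called from a worksheet cell like any other user-defined function. For example (this usage note is my own addition; the sample numbers are only an illustration):
=Easy(1000000, 0.0001)        'Darcy f for Re = 1,000,000 and Rr = 0.0001, using the default 2.51 mode
=Easy(1000000, 0.0001, 1.74)  'the same inputs, using the 1.74 mode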
'Public Approximate methods Function fSerg(Re As Double, Rr As Double, mode) As Double If mode <> 2.51 Then Exit Function 'Friction Factor calculated by T.K. Serghide's implementation of Steffenson
'RefeRence Chemical Engineering March 5, 1984 Dim a As Single Dim b As Single Dim c As Single Dim x As Double 'Note that in Visual Basic "Log" stands for Natural Log, ie. Ln() x = -0.86858896 x =
-0.868588963806504 a = x * Log(Rr / 3.7 + 12 / Re) b = x * Log(Rr / 3.7 + (2.51 * a / Re)) c = x * Log(Rr / 3.7 + (2.51 * b / Re)) fSerg = (a - ((b - a) ^ 2) / (c - (2 * b) + a)) ^ -2 End Function
Function Serghide(Re As Double, Rr As Double, mode) As Double If mode <> 2.51 Then Exit Function Dim a As Double, b As Double, c As Double a = -2 * Log10(Rr / 3.7 + 12 / Re) b = -2 * Log10(Rr / 3.7 +
2.51 * a / Re) c = -2 * Log10(Rr / 3.7 + 2.51 * b / Re) Serghide = (a - ((b - a) ^ 2) / (c - 2 * b + a)) ^ -2 End Function
Function Swamee(Re As Double, Rr As Double, mode) As Double 'Swamee and Jain approximation to the Colebrook-White equation for Re>4,000(Bhave, 1991) If mode <> 2.51 Then Exit Function Dim f, dh, k As
Double On Error GoTo SwameeEr dh = 1: k = Rr 'Hagen – Poiseuille formula for Re < 2,000 (Bhave, 1991): If Re < 2000 Then f = 64 / Re Swamee = f Exit Function End If Swamee = 1.325 / (Log(Rr / 3.7 +
(5.74 / (Re ^ 0.9)))) ^ 2 Exit Function SwameeEr: Swamee = 9999 End Function
Function Haaland(Re As Double, Rr As Double, mode) As Double If mode <> 2.51 Then Exit Function Haaland = 0.308642 / (Log10((Rr / 3.7) ^ 1.11 + 6.9 / Re)) ^ 2 End Function
Function Goudar_Sonnad(Re As Double, Rr As Double, mode) As Double If mode <> 2.51 Then Exit Function Dim a As Double, b As Double, d As Double, G As Double, s As Double, Q As Double, Z As Double,
Dla As Double, Dcfa As Double, f As Double, x As Double On Error GoTo quit a = 2 / Log(10) b = Rr / 3.7 d = Log(10) * Re / 5.02 s = b * d + Log(d) Q = s ^ (s / (s + 1)) G = b * d + Log(d / Q) Z = Log
(Q / G) Dla = Z * (G / (G + 1)) Dcfa = Dla * (1 + (Z / 2) / ((G + 1) ^ 2 + (Z / 3) * (2 * G = 1))) x = a * (Log(d / Q) + Dcfa) Goudar_Sonnad = (1 / x) ^ 2 Exit Function quit: Goudar_Sonnad = 9999 End
Function Zigrang(Re As Double, Rr As Double, mode) As Double 'Zigrang_and_Sylvester Solution If mode <> 2.51 Then Exit Function Zigrang = 1 / (-2 * Log10(Rr / 3.7 - 5.02 / Re * Log10(Rr / 3.77 - 5.02
/ Re * Log10(Rr / 3.77 + 13 / Re)))) ^ 2 End Function Function Altshul(Re As Double, Rr As Double, mode) As Double 'Altshul-Tsal6 If mode <> 2.51 Then Exit Function Dim fp, f As Double fp = 0.11 *
(Rr + 68 / Re) ^ 0.25 If fp < 0.018 Then f = 0.85 * fp + 0.0028 Else f = fp End If Altshul = f End Function
Function Brkic(Re As Double, Rr As Double, mode) As Double If mode <> 2.51 Then Exit Function Dim s As Double, x As Double s = Log(Re / (1.1816 * Log(1.1 * Re / (Log(1 + 1.1 * Re))))) x = -2 * Log10
((Rr / 3.71) + 2.18 * s / Re) Brkic = 1 / x ^ 2 End Function
Function DecCorr(x As Double, y As Double) As Integer 'decimals correct, compares upto 15 decimals places Dim L As Integer x = Round(x, 15 ): y = Round(y, 15) For L = 1 To 15 If Round(x, L) = Round
(y, L) Then DecCorr = L Else Exit For Next L End Function '==================================================
Static Function Log10(x) 'Vba's Log() is actually Ln(), and this is just to 'convert VBA's "Log" to "Log10" 'Converts VBA Ln( to Log10() Log10 = Log(x) / Log(10#) End Function '======================
' More info: a true solution? Well, engineers should understand more about the Colebrook equations. Each of its "parts" is estimated to be close enough for a design. Even the 3.7 and the 2.51 have
been "rounded" to be close enough for most designs. What about values of Rr and Re? If you can compute those to within 1% then it will be good enough for almost all friction factors. With this "true"
solution for Excel, this computation will be accepted each part to be true and will give you the Darcy Friction to 15 decimal places which will be more accurate that each of the separate variables
used in the equation.
Would you prefer to TEST my Simple, True, Easy Solution?
If you want to check out my method, here's a simple way to compute the check to 50 decimal places.
A newer Japanese web site: http://keisan.casio.com/exec/system/1381988562
(older) Go to this web site ....
Enter the Reynolds Number as Rn, because "Re" means something else at that site. Copy the computation lines and enter the Rn and Rr that you want to use, ending each line with a " ; ".
The Left and Right are the two sides of the main Colebrook-White equation: Left is 1/sqrt(f) and Right is -2*log(Rr/3.7+2.51/Rn/sqrt(f)). If the Left and Right come out the same, then the solution is TRUE!
This uses D=3 as the initial guess for 1/sqrt(f) and computes X and D until they are equal.
It then solves f as 1/X/X, which is 1/(X^2). When the left and right sides are equal, the Colebrook-White equation has been solved perfectly with a True and Easy solution. The last three lines give the answers.
Go to keisan.casio.com Note each line ends with a ";" to help the PC computation. With this you can choose 50 digits and you will notice that the left side of the equation is exactly the same as the
right side.
This is for the Mode 2.51 form, 1/sqrt(f) = -2*log(Rr/3.7 + 2.51/Rn*(1/sqrt(f))), with A = Rr/3.7 and B = 2.51/Rn:
A = Rr/3.7; B = 2.51/Rn;
D=3; do {X=-2*log(A+B*D);
D=-2*log(A+B*X);} while (X<>D);
Today, Oct. 3, 2013, I found and tested a special Colebrook solution, called
Newton's computation
(that you can buy for about $40). I tested and compared many values of the Reynolds Number and relative roughness with my simple, True, and easy solution vs. the Newton computation. I learned by testing thousands of random Re and Rr that the Newton method is just a good approximation. Its last digit is usually wrong, though about 50% of its approximations are right. It's crazy: why should users pay for an approximation when the True solution is easy now that you can study this web site!
I did them all with 50 digits. Many times the Newton computations only had 49 digits right, but a few times Newton had all 50 digits right. To tell if a computation is right, you should figure both
Left_Side = 1/sqrt(f)
Right_Side = -2*log(Rr/3.7+2.51/Rn/sqrt(f))
to check to see if they are exactly the same.
If you are using Excel, all computations can only have 15 digits correct, and if those 15 digits are used for another computation, then because of rounding of the 15th digit, maybe only 14 digits or fewer will be right.
The Newton computation is very complex, but mine is simple.
Rn=2525; (Reynolds Number)
Rr=0.01824; (Relative Roughness, which is roughness divided by the inside diameter)
D=3; (a starting guess for X, but you can begin with any other positive number)
'start the loop: both D and X will change until X is right; if X <> D, change D to X and continue; each pass adds 2 or 3 more correct digits
do {X=-2*log(Rr/3.7+2.51/Rn*D);
D=-2*log(Rr/3.7+2.51/Rn*X);} while (X<>D);
f = 1/X/X; (X is the same as 1/sqrt(f))
Then you can test the f in the Colebrook equation:
Left = 1/sqrt(f)
Right = -2*log(Rr/3.7+2.51/Rn/sqrt(f))
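The same loop is easy to reproduce outside of keisan or Excel. Here is a minimal Python sketch of the iteration described above (an illustration only; standard double precision gives about 15 significant digits, not 50):

import math

def colebrook_f(Re, Rr, x0=3.0, max_iter=50):
    # Solve 1/sqrt(f) = -2*log10(Rr/3.7 + 2.51/(Re*sqrt(f))) by fixed-point iteration.
    x = x0                                  # x plays the role of 1/sqrt(f)
    for _ in range(max_iter):
        x_new = -2 * math.log10(Rr / 3.7 + 2.51 * x / Re)
        if abs(x_new - x) < 1e-14:          # converged to double precision
            break
        x = x_new
    return 1 / (x_new * x_new)

Re, Rr = 2525.0, 0.01824
f = colebrook_f(Re, Rr)
left = 1 / math.sqrt(f)
right = -2 * math.log10(Rr / 3.7 + 2.51 / (Re * math.sqrt(f)))
print(f, left, right)                       # left and right agree to about 15 digits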
Excel's rounding of f to 15 digits may cause the 14th and/or 15th digits to be different.
|
{"url":"http://colebrook-white.blogspot.com/","timestamp":"2014-04-17T03:48:41Z","content_type":null,"content_length":"118533","record_id":"<urn:uuid:6cf3f989-c77e-4793-98d0-36fff43fed65>","cc-path":"CC-MAIN-2014-15/segments/1397609526252.40/warc/CC-MAIN-20140416005206-00507-ip-10-147-4-33.ec2.internal.warc.gz"}
|
How do I interpret odds ratios in logistic regression?
Stata FAQ
How do I interpret odds ratios in logistic regression?
You may also want to check out,
FAQ: How do I use odds ratio to interpret logistic regression?
, on our General FAQ page.
Let's begin with probability. Probabilities range between 0 and 1. Let's say that the probability of success is .8, thus
p = .8
Then the probability of failure is
q = 1 - p = .2
Odds are determined from probabilities and range between 0 and infinity. Odds are defined as the ratio of the probability of success to the probability of failure. The odds of success are
odds(success) = p/(1-p) or p/q = .8/.2 = 4,
that is, the odds of success are 4 to 1. The odds of failure would be
odds(failure) = q/p = .2/.8 = .25.
This looks a little strange but it is really saying that the odds of failure are 1 to 4. The odds of success and the odds of failure are just reciprocals of one another, i.e., 1/4 = .25 and 1/.25 =
4. Next, we will add another variable to the equation so that we can compute an odds ratio.
Another example
This example is adapted from Pedhazur (1997). Suppose that seven out of 10 males are admitted to an engineering school while three of 10 females are admitted. The probabilities for admitting a male are:
p = 7/10 = .7 q = 1 - .7 = .3
If you are male, the probability of being admitted is 0.7 and the probability of not being admitted is 0.3.
Here are the same probabilities for females,
p = 3/10 = .3 q = 1 - .3 = .7
If you are female it is just the opposite, the probability of being admitted is 0.3 and the probability of not being admitted is 0.7.
Now we can use the probabilities to compute the odds of admission for both males and females,
odds(male) = .7/.3 = 2.33333
odds(female) = .3/.7 = .42857
Next, we compute the odds ratio for admission,
OR = 2.3333/.42857 = 5.44
Thus, for a male, the odds of being admitted are 5.44 times larger than the odds for a female being admitted.
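The arithmetic above is easy to verify with a few lines of any scripting language; for example, a quick Python sketch:

p_male, p_female = 0.7, 0.3
odds_male = p_male / (1 - p_male)          # 2.333...
odds_female = p_female / (1 - p_female)    # 0.4286...
print(odds_male / odds_female)             # 5.44..., the odds ratio for admission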
Logistic regression in Stata
Here are the Stata logistic regression commands and output for the example above. In this example, admit is coded 1 for yes and 0 for no, and gender is coded 1 for male and 0 for female. In Stata, the logistic command produces results in terms of odds ratios, while logit produces results in terms of coefficients scaled in log odds.
input admit gender freq
This data represents a 2x2 table that looks like this:
                 admit
                  1    0
  gender   1      7    3
           0      3    7
logit admit gender [fweight=freq], nolog or
(frequency weights assumed)
Logistic regression Number of obs = 20
LR chi2(1) = 3.29
Prob > chi2 = 0.0696
Log likelihood = -12.217286 Pseudo R2 = 0.1187
admit | Odds Ratio Std. Err. z P>|z| [95% Conf. Interval]
gender | 5.444444 5.313234 1.74 0.082 .8040183 36.86729
/* Note: the above command is equivalent to --
logistic admit gender [weight=freq], nolog */
logit admit gender [weight=freq], nolog
(frequency weights assumed)
Logistic regression Number of obs = 20
LR chi2(1) = 3.29
Prob > chi2 = 0.0696
Log likelihood = -12.217286 Pseudo R2 = 0.1187
admit | Coef. Std. Err. z P>|z| [95% Conf. Interval]
gender | 1.694596 .9759001 1.74 0.082 -.2181333 3.607325
_cons | -.8472979 .6900656 -1.23 0.220 -2.199801 .5052058
Note that z = 1.74 both for the coefficient for gender and for the odds ratio for gender.
About logits
There is a direct relationship between the coefficients produced by logit and the odds ratios produced by logistic. First, let's define what is meant by a logit: a logit is defined as the log base e (natural log) of the odds:
[1] logit(p) = log(odds) = log(p/q)
The range of the logit is negative infinity to positive infinity. In regression it is easiest to model unbounded outcomes. Logistic regression is in reality an ordinary regression using the logit as the response variable. The logit transformation allows for a linear relationship between the response variable and the coefficients:
[2] logit(p) = a + bX
[3] log(p/q) = a + bX
This means that the coefficients in logistic regression are in terms of the log odds, that is, the coefficient 1.694596 implies that a one unit change in gender results in a 1.694596 unit change in
the log of the odds. Equation [3] can be expressed in odds by getting rid of the log. This is done by raising e to the power of both sides of the equation.
[4] e^(log(p/q)) = e^(a + bX)
[5] p/q = e^(a + bX)
The end result of all the mathematical manipulations is that the odds ratio can be computed by raising e to the power of the logistic coefficient,
[6] OR = e^b = e^1.694596 = 5.44
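Equation [6] can be checked numerically against the Stata output shown earlier (a quick check in Python):

import math
b = 1.694596                 # logit coefficient for gender from the output above
print(math.exp(b))           # 5.4444..., matching the odds ratio reported by logistic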
The content of this web site should not be construed as an endorsement of any particular web site, book, or software product by the University of California.
|
{"url":"http://www.ats.ucla.edu/stat/stata/faq/oratio.htm","timestamp":"2014-04-16T19:15:32Z","content_type":null,"content_length":"23510","record_id":"<urn:uuid:fa0359a4-661f-45be-a84f-caebf8f53113>","cc-path":"CC-MAIN-2014-15/segments/1398223202774.3/warc/CC-MAIN-20140423032002-00503-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Dimension-based equations provide flexibility in financial modeling. Since you do not need to specify the modeling variable until you solve a model, you can run the same model with the actual
variable, the budget variable, or any other variable that is dimensioned by line.
Models can be quite complex. You can:
4.4.1 Creating Models
To create an OLAP DML model, take the following steps:
For an example of creating a model, see "Creating a Model".
4.4.1.1 Nesting Models
You can include one model within another model by using an INCLUDE statement. The model that contains the INCLUDE statement is referred to as the parent model. The included model is referred to as
the base model. You can nest models by placing an INCLUDE statement in a base model. For example, model myModel1 can include model myModel2, and model myModel2 can include model myModel3. The nested
models form a hierarchy. In this example, myModel1 is at the top of the hierarchy, and myModel3 is at the root.
When a model contains an INCLUDE statement, then it cannot contain any DIMENSION (in models) statements. A parent model inherits its dimensions, if any, from the DIMENSION statements in the root
model of the included hierarchy. In the example just given, models myModel1 and myModel2 both inherit their dimensions from the DIMENSION statements in model myModel3.
The INCLUDE statement enables you to create modular models. When certain equations are common to several models, then you can place these equations in a separate model and include that model in other
models as needed.
The INCLUDE statement also facilitates what-if analyses. An experimental model can draw equations from a base model and selectively replace them with new equations. To support what-if analysis, you
can use equations in a model to mask previous equations. The previous equations can come from the same model or from included models. A masked equation is not executed or shown in the MODEL.COMPRPT
report for a model.
4.4.1.2 Dimension Status and Model Equations
When a model contains an assignment statement that assigns data to a dimension value, Oracle OLAP temporarily limits the dimension to that value, performs the calculation, and then restores the initial status of the dimension.
For example, a model might have the following statements.
DIMENSION line
gross.margin = revenue - cogs
If you specify actual as the solution variable when you run the model, then the following code is constructed and executed.
PUSH line
LIMIT line TO gross.margin
actual = actual(line revenue) - actual(line cogs)
POP line
This behind-the-scenes construction lets you perform complex calculations with simple model equations. For example, line item data might be stored in the actual variable, which is dimensioned by
line. However, detail line item data might be stored in a variable named detail.data, with a dimension named detail.line.
When your analytic workspace contains a relation between line and detail.line, which specifies the line item to which each detail item pertains, then you might write model equations such as the
following ones.
revenue = total(detail.data line)
expenses = total(detail.data line)
The relation between detail.line and line is used automatically to aggregate the detail data into the appropriate line items. The code that is constructed when the model is run ensures that the
appropriate total is assigned to each value of the line dimension. For example, while the equation for the revenue item is calculated, line is temporarily limited to revenue, and the TOTAL function
returns the total of detail items for the revenue value of line.
4.4.1.3 Using Data from Past and Future Time Periods
Several OLAP DML functions make it easy for you to use data from past or future time periods. For example, the LAG function returns data from a specified previous time period, and the LEAD function
returns data from a specified future period.
When you run a model that uses past or future data in its calculations, you must make sure that your solution variable contains the necessary past or future data. For example, a model might contain
an assignment statement that bases an estimate of the revenue line item for the current month on the revenue line item for the previous month.
DIMENSION line month
revenue = LAG(revenue, 1, month) * 1.05
When the month dimension is limited to Apr2004 through Jun2004 when you run the model, then you must be sure that the solution variable contains revenue data for Mar2004.
When your model contains a LEAD function, then your solution variable must contain the necessary future data. For example, when you want to calculate data for the months of April through June of
2004, and when the model retrieves data from one month in the future, then the solution variable must contain data for July 2004 when you run the model.
4.4.1.4 Handling NA Values
Oracle OLAP observes the NASKIP2 option when it evaluates equations in a model. NASKIP2 controls how NA values are handled when + (plus) and - (minus) operations are performed. The setting of NASKIP2
is important when the solution variable contains NA values.
The results of a calculation may be NA not only when the solution variable contains an NA value that is used as input, but also when the target of a simultaneous equation is NA. Values in the
solution variable are used as the initial values of the targets in the first iteration over a simultaneous block. Therefore, when the solution variable contains NA as the initial value of a target,
an NA result may be produced in the first iteration, and the NA result may be perpetuated through subsequent iterations.
To avoid obtaining NA for the results, you can make sure that the solution variable does not contain NA values or you can set NASKIP2 to YES before running the model.
4.4.1.5 Solving Simultaneous Equations
An iterative method is used to solve the equations in a simultaneous block. In each iteration, Oracle OLAP calculates a new value for each equation and compares it to the value from the previous iteration. When the comparison falls within a specified tolerance, the equation is considered to have converged to a solution. When the comparison exceeds a specified limit, the equation is considered to have diverged.
When all the equations in the block converge, then the block is considered solved. When any equation diverges or fails to converge within a specified number of iterations, then the solution of the
block (and the model) fails and an error occurs.
To exercise control over the solution of simultaneous equations, use the OLAP DML options described in Table 17-1, "Model Options". For example, using these options, you can specify the solution method to use, the factors to use in testing for convergence and divergence, the maximum number of iterations to perform, and the action to take when an assignment statement diverges or fails to converge.
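The convergence and divergence tests described here can be pictured with a small sketch. This is only an illustration of the tolerance and limit logic, not Oracle OLAP's implementation, and the names tolerance, divergence_limit, and max_iterations are made up for the example:

def solve_block(equations, values, tolerance=1e-9, divergence_limit=1e12, max_iterations=100):
    # Recompute each equation until the per-iteration change falls within the
    # tolerance (converged) or exceeds the divergence limit (diverged).
    for _ in range(max_iterations):
        converged = True
        for name, eq in equations.items():
            new = eq(values)
            change = abs(new - values[name])
            if change > divergence_limit:
                raise RuntimeError(name + " diverged")
            if change > tolerance:
                converged = False
            values[name] = new
        if converged:
            return values
    raise RuntimeError("no convergence within the iteration limit")

# two mutually dependent equations, analogous to a simultaneous block
eqs = {"x": lambda v: 0.5 * v["y"] + 1,
       "y": lambda v: 0.25 * v["x"] + 2}
print(solve_block(eqs, {"x": 0.0, "y": 0.0}))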
4.4.1.6 Modeling for Multiple Scenarios
Instead of calculating a single set of figures for a month and division, you might want to calculate several sets of figures, each based on different assumptions.
You can define a scenario model that calculates and stores forecast or budget figures based on different sets of input figures. For example, you might want to calculate profit based on optimistic,
pessimistic, and best-guess figures.
To build a scenario model, follow these steps.
1. Define a scenario dimension.
2. Define a solution variable dimensioned by the scenario dimension.
3. Enter input data into the solution variable.
4. Write a model to calculate results based on the input data.
For an example of building a scenario model see, Example 17-12, "Building a Scenario Model".
4.4.2 Compiling a Model
When you finish writing the statements in a model, you can use COMPILE to compile the model. During compilation, COMPILE checks for format errors, so you can use COMPILE to help debug your code
before running a model. When you do not use COMPILE before you run the model, then the model is compiled automatically before it is solved.
When you compile a model, either by using a COMPILE statement or by running the model, the model compiler examines each equation to determine whether the assignment target and each data source is a
variable or a dimension value.
4.4.2.1 Understanding Dependencies
After resolving each name reference, the model compiler analyzes dependencies between the equations in the model. A dependence exists when the expression on the right-hand side of the equal sign in
one equation refers to the assignment target of another equation. When an assignment statement indirectly depends on itself as the result of the dependencies among equations, then a cyclic dependence
exists between the equations.
The model compiler structures the equations into blocks and orders the equations within each block, and the blocks themselves, to reflect dependencies. The compiler can produce three types of
solution blocks: simple blocks, step blocks, and simultaneous blocks as described in "Dependencies Between Equations".
4.4.2.2 Checking for Additional Problems
The compiler does not analyze the contents of any programs or formulas that are used in model equations. Therefore, you must check the programs and formulas yourself to make sure they do not do any
of the following:
● Refer to the value of any variable used in the model.
● Refer to the solution variable.
● Limit any of the dimensions used in the model.
● Invoke other models.
When a model or program violates any of these restrictions, the results of the model may be incorrect.
4.4.5 Solution Variables Dimensioned by a Composite
When a solution variable contains a composite in its dimension list, Oracle OLAP observes the sparsity of the composite whenever possible. As it solves the model, Oracle OLAP confines its loop over
the composite to the values that exist in the composite. It observes the current status of the composite's base dimensions as it loops.
However, for proper solution of the model, Oracle OLAP must treat the following base dimensions of the composite as regular dimensions:
● A base dimension that is listed in a DIMENSION (in models) command.
● A base dimension that is implicated in a model equation created using SET (for example, an equation that assigns data to a variable dimensioned by the base dimension).
● A base dimension that is also a base dimension of a different composite that is specified in the ACROSS phrase of an equation. (See SET for more information on assignment statements and the use
of ACROSS phrase.)
When a base dimension of a solution variable's composite falls in any of the preceding three categories, Oracle OLAP treats that dimension as a regular dimension and loops over all the values that
are in the current status.
When the solution variable's composite has other base dimensions that do not fall in the special three categories, Oracle OLAP creates a temporary composite of these extra base dimensions. The values
of the temporary composite are the combinations that existed in the original composite. Oracle OLAP loops over the temporary composite as it solves the model.
|
{"url":"http://docs.oracle.com/cd/B12037_01/olap.101/b10339/aggmodel004.htm","timestamp":"2014-04-17T19:34:16Z","content_type":null,"content_length":"32176","record_id":"<urn:uuid:2ef2e5a1-cdb1-4b2b-8db4-df697e552f0a>","cc-path":"CC-MAIN-2014-15/segments/1397609538787.31/warc/CC-MAIN-20140416005218-00245-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Fort Myer, VA Algebra 2 Tutor
Find a Fort Myer, VA Algebra 2 Tutor
...My name is Jenna and I'm an enthusiastic and well-rounded professional who loves to teach and help others. I have a BA in International Studies and spent the last year and a half in a village
in Tanzania (East Africa) teaching in- and out-of-school youth a variety of topics including English, he...
35 Subjects: including algebra 2, English, Spanish, ESL/ESOL
...I believe students success in geometry comes from full understanding of why each theorem or property works not from memorizing countless formulas. Students often struggle in geometry when they
don't have a solid foundation in the basics of lines and angles or have not yet become comfortable with...
12 Subjects: including algebra 2, geometry, GRE, ASVAB
...I have taught upper-level high school math and science in Ecuador, taught Spanish to middle school students in the Upward Bound Program, and worked at the elementary school level as a
volunteer. I have conducted training programs at the corporate level in a variety of technical areas. I strive to establish an intellectual connection with the student, regardless of age or
10 Subjects: including algebra 2, Spanish, calculus, geometry
...My math skills are sharp and I know a lot of the tricks to works problems faster and cleaner. A few years ago, I tutored my roommate and she believed that I explained things better to her than
any of her teachers. I've also had the opportunity to tutor a high school student virtually, using Skype and a whiteboard app for the iPad.
14 Subjects: including algebra 2, geometry, algebra 1, SAT math
...I am able to tutor students in the Earth/Environmental Sciences. I have taken the AP Environmental Science exam and received a score of 5 on the exam. I can help students who were struggling
in their classes in the previous school year or if you just want your child to maintain the knowledge they acquire the previous school year.
10 Subjects: including algebra 2, calculus, algebra 1, elementary math
|
{"url":"http://www.purplemath.com/Fort_Myer_VA_Algebra_2_tutors.php","timestamp":"2014-04-21T04:53:16Z","content_type":null,"content_length":"24543","record_id":"<urn:uuid:251f393b-6b8b-4676-9fc7-23228cf7b449>","cc-path":"CC-MAIN-2014-15/segments/1398223202774.3/warc/CC-MAIN-20140423032002-00589-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Here's the question you clicked on:
Rewrite the following equation in slope-intercept form: x + 3y = 21. Answer choices: y = x + 21; y = 21 – x; y = x – 7; y = -1/3 x + 7
• one year ago
• one year ago
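For reference, the rearrangement takes one step: subtract x from both sides to get 3y = 21 - x, then divide by 3 to get y = -(1/3)x + 7, which matches the last answer choice.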
{"url":"http://openstudy.com/updates/50a70733e4b0129a3c8ff970","timestamp":"2014-04-17T06:54:17Z","content_type":null,"content_length":"37155","record_id":"<urn:uuid:c7f57f23-3ff8-4970-a86d-2ed92ec82b30>","cc-path":"CC-MAIN-2014-15/segments/1397609526311.33/warc/CC-MAIN-20140416005206-00075-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Matrix-Tensor Mathematics in Orthotropic Elasticity
STP405: Matrix-Tensor Mathematics in Orthotropic Elasticity
Jayne, B. A.
Professor, University of Washington, Seattle, Wash.
Suddarth, S. K.
Professor, Purdue University, Lafayette, Ind.
Pages: 20 Published: Jan 1966
A description of the Hookean elastic behavior of orthotropic materials is presented through the formalism of matrix algebra and tensor calculus. Stress, strain, and Hooke's law are formulated in
cartesian tensor notation. These same quantities also are written in matrix form, and the simple rules of matrix algebra are used for their manipulation. Finally, the tensor forms of stress, strain,
and Hooke's law are transformed from one to another cartesian reference frame. The transformation of the elastic tensor serves to demonstrate the special forms of anisotropy which can exist when
geometric and orthotropic axes are not coincident.
matrix algebra, tensor analysis, anisotropy, orthotropism, fibrous materials, elasticity, Hooke's law, cartesian tensors
Paper ID: STP45150S
Committee/Subcommittee: D30.07
DOI: 10.1520/STP45150S
|
{"url":"http://www.astm.org/DIGITAL_LIBRARY/STP/PAGES/STP45150S.htm","timestamp":"2014-04-18T03:00:04Z","content_type":null,"content_length":"11515","record_id":"<urn:uuid:4fa9fd6d-dd25-40d4-a8a1-41496a0936c8>","cc-path":"CC-MAIN-2014-15/segments/1397609532480.36/warc/CC-MAIN-20140416005212-00366-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Speed Subnetting for Success in the CCENT and CCNA Exams
Speed Subnetting for Success in the CCENT and CCNA Exams
Subnetting is an absolutely critical component for CCNA success. In this article, we are first going to review how subnetting works, ensure you master the “longhand” method and understand the
mathematics behind this networking principle. Once this is done, we are going to look at some shortcuts that can be used in the exam to ensure that we can solve the many subnetting challenges quickly
and accurately.
Subnetting is an absolutely critical component for CCNA success. The skill must be demonstrated in the CCENT (ICND1) and CCNA (ICND2) exams. Should you take the composite CCNA exam option, you need
to be even better and faster at subnetting in order to achieve success.
In this article, we are first going to review how subnetting works and ensure you master the “longhand” method and understand the mathematics behind this networking principle. Then we will look at
some shortcuts that can be used in the exam to ensure that we can solve the many subnetting challenges quickly and accurately. I believe it is critical that students understand the how’s and why’s
behind this topic. It is unacceptable, in my opinion, to master subnetting if you only know the shortcuts. We need to fully understand why we are doing something in the network, and how it truly
works before we can start seeking exam shortcuts.
Why Subnetting?
Why do we even need this concept of subnetting? Well, we need to break up our networks into smaller networks all of the time. Breaking the network up allows the network to be more efficient and more
secure and more easily managed.
When you look at a Class B private address like 172.16.0.0/16 (255.255.0.0), you can calculate the number of hosts that can live in this network. You take the number of bits used for host addressing
(16), and you raise 2 to this power and then subtract 2. So the formula is 2^h-2 where h is the number of host bits. We need a calculator for this one because the number is so big! It turns out that
65,534 can live in this network. That’s great, that’s amazing, and...it also turns out that this is impossible! In a typical TCP/IP network of today, you are going to have big problems if you start
placing 500 or more systems in the same network (subnet), never mind 65,000 or more!
Number of Subnets and Number of Hosts
So if we want to use the 172.16.0.0 private address space, we need to “subnetwork” this address space in order create more network addresses that can each accommodate an appropriate number of host
systems. When we subnetwork, we play a balancing act. As we “borrow” bits for subnetting from the host bits, we can create more subnets, but we do so at the cost of being able to support fewer and
fewer host systems per network. The formula for how many hosts you can have always remains the same it is 2^h-2. The formula for how many networks we can create is very similar. It is 2^n where n is
the number of bits we borrow from the host bits.
Let us study an example. If we have 172.16.0.0/16 and we decide to borrow 8 bits for subnetting, we can create 2^8 or 256 different subnetworks. How many hosts can each of these subnetworks
accommodate? Well, there are now 8 bits left for host addressing. 2^8-2 gives us 254 hosts per subnetwork.
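As a quick sanity check of those two formulas, here is a short Python sketch (illustrative only):

def subnet_counts(borrowed_bits, host_bits):
    # 2^n subnets from the borrowed bits, 2^h - 2 usable hosts from the remaining bits
    return 2 ** borrowed_bits, 2 ** host_bits - 2

print(subnet_counts(8, 8))   # (256, 254) for 172.16.0.0/16 carved into /24 subnets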
A Sample Subnetting Scenario
Let us stick with the example where we begin with the private IP address space of a Class B address of 172.16.0.0. We examine our networking needs and realize that we need to accommodate 100
different subnetworks in our organization. From the previous information in this article, we know that in order to create 100 subnets, we need to borrow 7 bits (2^7 = 128). This is perfect; we have
the number of subnetworks that we need, plus a few extra that we can call upon when the network inevitably grows.
What will the subnet mask be in this scenario? This mask will be the one that is used by all of the hosts in all of the different subnetworks. It is critical that we calculate this number correctly,
of course.
Notice that our Class B address originally had 16 bits that made up the network ID portion. In this sample scenario we are going to borrow 7 bits. Now we have a network ID that is made up of 23
(16+7) bits. We can write out the 23 bits of the subnet mask now: 11111111.11111111.11111110.00000000
So converting to our convenient dotted decimal notation, we can see our mask: 255.255.254.0
A cooler way to write the mask is just /23 after the IP address. This is called prefix notation.
So what would the first subnetwork network ID be? Well, we know it will start 172.16, but what will the value be in that third octet where we have some bits (7) representing the subnet and one bit
representing the host portion.
To answer this in the longhand method, we write out the mask and the address from that octet in binary and do some analysis.
Mask: 11111110
IP: 00000000
Notice the first subnetwork will be 172.16.0.0. We can use all zeros in the first 7 bit positions of the third octet, and we have a zero in the last bit position which is used to identify hosts.
What would the first host address be in this network? Let's write those last two octets out longhand: 00000000 00000001
The first host address on the 172.16.0.0 network would be: 172.16.0.1
What would the broadcast address be for that network? To get this you fill all the host bits with a 1: 00000001 11111111, which is 172.16.1.255
How about the last usable host address on this subnetwork? Easy. We will turn all the host bits to 1, except for the last one: 00000001 11111110, which is 172.16.1.254
What is the next subnetwork in this scheme? Well, let us turn one of those subnetwork bits on. We will start with the least significant (rightmost):
Mask: 11111110
IP: 00000010
Ahh, so the next network is 172.16.2.0/23.
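The longhand results above are easy to cross-check with Python's ipaddress module (purely as a verification; on the exam you work it out by hand):

import ipaddress

net = ipaddress.ip_network("172.16.0.0/23")
hosts = list(net.hosts())
print(net.netmask)                # 255.255.254.0
print(hosts[0], hosts[-1])        # 172.16.0.1  172.16.1.254
print(net.broadcast_address)      # 172.16.1.255
print(list(ipaddress.ip_network("172.16.0.0/16").subnets(new_prefix=23))[1])   # 172.16.2.0/23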
Exam Shortcuts
It is wonderful to see how all of this works longhand, but in the lab exam environment, we are VERY pressed for time. We need powerful shortcuts. Here we will walk through my preferred shortcuts
against the backdrop of sample exam questions.
Sample Question 1:
What is the last usable address in the subnet of a host with the address 192.168.1.136 and a subnet mask of 255.255.255.240?
Step 1 Upon arriving at my first subnetting question in the exam environment, I build a Powers of Two reference chart on the scratch paper Cisco provides.
2^7=128 | 2^6=64 | 2^5=32 | 2^4=16 | 2^3=8 | 2^2=4 | 2^1=2 | 2^0=1
Step 2 How many bits of subnetting are used in the fourth octet here? My Powers of Two chart tells me. 1 bit = 128; 2 bits = 192; 3 bits = 224; 4 bits = 240. Yes, the fourth octet of the subnet mask looks like this in binary: 11110000.
Step 3 Now the magic of the shortcut: we go four bits deep (from left to right) in the Powers of Two chart. This tells us the value that the subnets increment on. In our example it is 16:
So our subnets are:
□ 192.168.1.0
□ 192.168.1.16
□ 192.168.1.32
□ 192.168.1.48
□ 192.168.1.64
□ 192.168.1.80
□ 192.168.1.96
□ 192.168.1.112
□ 192.168.1.128
□ 192.168.1.144
Step 4 This host, with the address of 192.168.1.136 must live on the 192.168.1.128 subnet. The broadcast address for this subnet is one less than the next subnet of 144, so that is 143. The last
usable address is 142. We have arrived at our answer 192.168.1.142.
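The same ipaddress cross-check confirms the shortcut:

import ipaddress

net = ipaddress.ip_interface("192.168.1.136/255.255.255.240").network
print(net)                        # 192.168.1.128/28
print(list(net.hosts())[-1])      # 192.168.1.142, the last usable address
print(net.broadcast_address)      # 192.168.1.143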
Sample Question 2
Your IT Junior Administrator has provided you with the address and mask of 192.168.20.102 and 255.255.255.224. You Junior Admin has asked you to tell him how many hosts can be created on your subnet?
Step 1 Here I begin by referencing the Powers of Two chart I created on my scratch paper. Adding 128 + 64 + 32, I get the 224 value used in the fourth octet of the subnet mask. Therefore, I can
see that there are 3 bits used for subnetting in that octet. This leaves 5 bits for host addressing.
2^7=128 | 2^6=64 | 2^5=32 | 2^4=16 | 2^3=8 | 2^2=4 | 2^1=2 | 2^0=1
Step 2 As we discussed earlier, the equation for the number of hosts per subnet is 2^h – 2 where h is the number of host bits. From the chart I see that 2^5 = 32. 32-2 = 30 hosts per subnet.
Initially, the subnetting related questions strike fear in the hearts of CCNA candidates. Sure enough, with study and practice, and the many shortcut methods that exist, these questions become the
favorites in the Certification Exam environment. They can be solved easily and quickly, and candidates know they solved them correctly. Thanks to the power of mathematics, there are certainly no
“grey” areas in questions like these.
|
{"url":"http://www.informit.com/articles/article.aspx?p=1739170","timestamp":"2014-04-20T23:36:36Z","content_type":null,"content_length":"27381","record_id":"<urn:uuid:0e7b32be-d5bf-4af3-99be-763d81acba5a>","cc-path":"CC-MAIN-2014-15/segments/1397609539337.22/warc/CC-MAIN-20140416005219-00102-ip-10-147-4-33.ec2.internal.warc.gz"}
|
[SciPy-user] scipy.optimize.leastsq and covariance matrix meaning
[SciPy-user] scipy.optimize.leastsq and covariance matrix meaning
massimo sandal massimo.sandal@unibo...
Mon Nov 10 04:29:05 CST 2008
Bruce Southey wrote:
> It is possible to be correct if the values of y are large and
> sufficiently variable.
y values should be in the 10**-10 range...
> But, based on the comment on the fit and the
> correlation in the matrix above is -0.98, my expectation is that there
> is almost no error/residual variation left. The residual variance should
> be very small (sum of squared residuals divided by defree of freedom).
Is the sum of squared residuals / degree of freedom a residual
variance... of what parameters? Sorry again, but I'm not that good in
non-linear fitting theory.
> You don't provide enough details but your two x variables would appear
> to virtually correlated because of the very highly correlation. There
> are other reasons, but with data etc. I can not guess.
I'll try to sketch up a script reproducing the core of the problem with
actual data.
Massimo Sandal , Ph.D.
University of Bologna
Department of Biochemistry "G.Moruzzi"
snail mail:
Via Irnerio 48, 40126 Bologna, Italy
tel: +39-051-2094388
fax: +39-051-2094387
|
{"url":"http://mail.scipy.org/pipermail/scipy-user/2008-November/018660.html","timestamp":"2014-04-16T16:25:30Z","content_type":null,"content_length":"4448","record_id":"<urn:uuid:3c018c10-3e25-4bb4-bc1d-463e0c279a54>","cc-path":"CC-MAIN-2014-15/segments/1397609532128.44/warc/CC-MAIN-20140416005212-00226-ip-10-147-4-33.ec2.internal.warc.gz"}
|
urgent... gravitational potential between two spheres.
September 30th 2009, 12:01 PM #1
Junior Member
Sep 2007
urgent... gravitational potential between two spheres.
two spheres their masses are :m1=m2=1.2x10^-8 kg are placed 10 cm away from each other. (r=10 cm) How is the gravitation potential ( V=?)in the point A which is situated 6 cm away from the first
sphere and of course 4 cm away from the second one.
The result is given and is -2.33 x 10^17J/kg but I can't find how ?
It has to be with this formula: V=kq/r^2
Any help is appreciated!
Last edited by mr fantastic; October 5th 2009 at 05:54 PM. Reason: Re-titled
|
{"url":"http://mathhelpforum.com/math-topics/105250-urgent-gravitational-potential-between-two-spheres.html","timestamp":"2014-04-18T14:10:23Z","content_type":null,"content_length":"29869","record_id":"<urn:uuid:58d9f76a-5e17-4b60-ac81-5c680f09cb00>","cc-path":"CC-MAIN-2014-15/segments/1397609533689.29/warc/CC-MAIN-20140416005213-00633-ip-10-147-4-33.ec2.internal.warc.gz"}
|
CMU Theory Lunch: Jamie Morgenstern
How Bad is Selfish Voting?
April 3, 2013
It is well known that strategic behavior in elections is essentially unavoidable; we therefore ask: how bad can the rational outcome be in the context of voting? We answer this question via the
notion of the price of anarchy, using the scores of alternatives as a proxy for their quality and bounding the ratio between the score of the optimal alternative and the score of the winning
alternative in Nash equilibrium. Specifically, we are interested in Nash equilibria that are obtained via sequences of rational strategic moves. Focusing on three common voting rules — plurality,
veto, and Borda — we provide very positive results for plurality and very negative results for Borda, and place veto in the middle of this spectrum.
Joint work with Simina Branzei, Ioannis Caragiannis and Ariel D. Procaccia.
|
{"url":"http://www.cs.cmu.edu/~theorylunch/20130403.html","timestamp":"2014-04-16T19:36:49Z","content_type":null,"content_length":"2012","record_id":"<urn:uuid:0e1d3a23-322c-4b81-9856-ba48814dda52>","cc-path":"CC-MAIN-2014-15/segments/1398223211700.16/warc/CC-MAIN-20140423032011-00100-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Is this the discriminate?
September 12th 2012, 12:50 PM #1
Is this the discriminate?
My equation is $x^2+2kx-5=0$
I have to find the discriminate for this assuming it has 2 real roots
so I do $b^2-4ac$ is greater than 0
$(2k)^2-4(-5)$ is greater than 0
$k^2$ is greater than -5
k= plus or minus square route of -5
I seem to have done the discriminate well but how can a complex number be the solution for the discriminate since I am supposed to have 2 real roots?
I also encountered the same problem trying to do $x^2+2kx-7=0$ (but with this time the square route of -7). Can anyone help out please? Thanks.
Re: Is this the discriminate?
My equation is $x^2+2kx-5=0$
I have to find the discriminate for this assuming it has 2 real roots
so I do $b^2-4ac$ is greater than 0
$(2k)^2-4(-5)$ is greater than 0
$k^2$ is greater than -5
k= plus or minus square route of -5
I seem to have done the discriminate well but how can a complex number be the solution for the discriminate since I am supposed to have 2 real roots?
I also encountered the same problem trying to do $x^2+2kx-7=0$ (but with this time the square route of -7). Can anyone help out please? Thanks.
The discriminate is $4k^2+20$ which is positive if $k\ne 0~.$
So it has two real roots for any $k~.$
Re: Is this the discriminate?
Can you please explain how to get those real roots? The method that I learned from my book clearly isn't working.
My method:
I got the discriminate with no problem
but then I equalled it to 0 (to substitute the greater than 0 sign)
at which point I get k^2=-5.
Re: Is this the discriminate?
The discriminant is positive for all k. Indeed, k^2 >= -5 holds for all k because k^2 is nonnegative. The fact that the equation k^2 = -5 has only complex roots says that it has no real roots.
Therefore, (since the function k^2 + 5 is continuous,) its value is either always positive or always negative. In this case, it is obviously always positive.
Re: Is this the discriminate?
Consider the equation $ax^2+bx+c=0,~a\ne 0~.$
The discriminate is $\Delta=b^2-4ac~.$ It is called that because it discriminates between real and complex roots.
The roots are $\frac{-b\pm\sqrt{\Delta}}{2a}$. So if $\Delta>0$ there are two real roots; if $\Delta=0$ there is one real root; and if $\Delta<0$ there are two complex roots.
Re: Is this the discriminate?
The discriminant is positive for all k. Indeed, k^2 >= -5 holds for all k because k^2 is nonnegative. The fact that the equation k^2 = -5 has only complex roots says that it has no real roots.
Therefore, (since the function k^2 + 5 is continuous,) its value is either always positive or always negative. In this case, it is obviously always positive.
Please elaborate, I don't quite understand what you're saying. Are you saying that my answer through solving the discriminant was right? Or are you saying simply my calculation was right but the
answer to the discriminant is something completely different?
If it is the latter, please can you explain how to find the right value for k (that is not a complex number)?
Re: Is this the discriminate?
Please elaborate, I don't quite understand what you're saying. Are you saying that my answer through solving the discriminant was right? Or are you saying simply my calculation was right but the
answer to the discriminant is something completely different?
If it is the latter, please can you explain how to find the right value for k (that is not a complex number)?
Reread all the replies.
Yes we are saying that you method is completely off.
Re: Is this the discriminate?
Consider the equation $ax^2+bx+c=0,~a\ne 0~.$
The discriminate is $\Delta=b^2-4ac~.$ It is called that because it discriminates between real and complex roots.
The roots are $\frac{-b\pm\sqrt{\Delta}}{2a}$. So if $\Delta>0$ there are two real roots; if $\Delta=0$ there is one real root; and if $\Delta<0$ there are two complex roots.
Thank you for the explanation however the question specifically said to 'prove that, for all real values of k, the roots of the equation are real and different'.
So surely I have to assume the discriminate is >0? But If I do this, as I have shown above I find that k is greater than a complex number. How can that 'prove' that there are real roots, if I
only get a complex number as an outcome?
Re: Is this the discriminate?
Re: Is this the discriminate?
Thank you for the explanation however the question specifically said to 'prove that, for all real values of k, the roots of the equation are real and different'.
So surely I have to assume the discriminate is >0? But If I do this, as I have shown above I find that k is greater than a complex number. How can that 'prove' that there are real roots, if I
only get a complex number as an outcome?
For every $k$, $4k^2+20>0$. So that are two real roots for every $k$.
If you still do not get it, you need a sit-down with a live tutor.
Re: Is this the discriminate?
No, you have NOT shown that "that k is greater than a complex number"- for two reasons. First, by hypothesis, k is a real number so it makes no sense to say that k is "greater than an imaginary
number" (which is what you really meant). Second, there is no way to define ">" to make the complex numbers an ordered field.
What you have shown is that $k^2> -5$ which, because a real number, squared, is never negative, is true for any value of k.
Re: Is this the discriminate?
Thank you, I understand now and I'm sorry for taking so long to understand it, it seems very simple now.
I just had the misconception that I had to do what I did with other equations such as (4k-20>0) which is to simplify to find k (i.e. getting 4k>20.... k>5). I never thought it could be so simple
because it is literally just a two line answer.
Once again, sorry for my incompetence. I guess I am making it overcomplicated.
Re: Is this the discriminate?
Re: Is this the discriminate?
Just as a last note of clarification.
If I wanted to prove an equation had two real roots 'by finding the discriminant' all I would have to write down is:
(It's discriminant)>0
b^2-4ac>0 with the values of the equation substituted in.
Is this correct?
Re: Is this the discriminate?
|
{"url":"http://mathhelpforum.com/algebra/203351-discriminate.html","timestamp":"2014-04-20T19:22:20Z","content_type":null,"content_length":"91711","record_id":"<urn:uuid:e6538122-386d-4efc-8b71-1e4b2d48449c>","cc-path":"CC-MAIN-2014-15/segments/1397609539066.13/warc/CC-MAIN-20140416005219-00182-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Here's the question you clicked on:
Screenshots! Already Answered! Circle questions, just need someone to check the both of them!
• one year ago
• one year ago
|
{"url":"http://openstudy.com/updates/50be0864e4b09e7e3b8588c8","timestamp":"2014-04-17T07:15:46Z","content_type":null,"content_length":"41856","record_id":"<urn:uuid:9621bde1-2a93-4f98-bc03-3b1acc735f09>","cc-path":"CC-MAIN-2014-15/segments/1397609526311.33/warc/CC-MAIN-20140416005206-00343-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Killer Cage Combination Example, sudokuwiki.org
This example shows the kind of logic involved in using cage combinations in Killer Sudoku. There are many leaps of logic to be found in this type of puzzle, and this should also explain in more
detail some of the results from the Killer Sudoku Solver. In this instance the solver returns
KILLER STEP (Hard) on D3: shape of length 3 with clue of 15+ can only be 1/2/3/4/5/6/8/9, removing 7
The question is - why does it eliminate the 7 in D3 and not D1?
Hovering over the 15 on the small board it lists the combinations. There are quite a few in this cage:
Killer Cage Combo Example
|
{"url":"http://www.sudokuwiki.org/Print_Killer_Cage_Combination_Example","timestamp":"2014-04-17T03:49:06Z","content_type":null,"content_length":"4264","record_id":"<urn:uuid:7e662fd8-bef3-4262-8527-6a9ca274a36b>","cc-path":"CC-MAIN-2014-15/segments/1398223205137.4/warc/CC-MAIN-20140423032005-00423-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Retirement Calculator
Here is a retirement calculator to estimate how much your retirement fund will be worth in the future. Unlike other retirement funds, this one also considers how much the fund is stealing from you as
The parameters in the calculation are:
• Years of pension payment - The number of years you plan to work and accrue a pension.
• Market Interest - The yearly earnings you think (or hope...) your fund will make. If you want the value of the fund with respect to the current day currency value, enter the nominal percentage
yield minus the average inflation you expect (for different countries, these numbers are different). If you want to know the value in units of the currency value at retirement (which is
meaningless, since the cost of living will be higher too) do not subtract the inflation.
• Salary per month - Usually, this is the gross salary (before taxes) since the fraction which goes to pension funds is usually calculated from the gross not the net. Use any currency you wish
(denoted ¤), the result will be in the same currency. If you live in the USA, enter the salary in US dollars. If you live in Sweden, enter the salary in Swedish krona, and so forth. The
calculated value will be in the same currency you used.
• Salary increase rate - Enter the yearly increase rate you expect for your salary. As with the market interest rate, if you wish a real value for the pension in current day currency, use the
nominal increase minus the inflation rate.
• Maximum Salary per month - If you are not self employed, there is probably a maximum salary in your organization which you could ever hope to get at some point...
• Salary going to pension - The fraction [out of 1] of your "Salary per month" which is goes to the pension fund. Note that this value can be larger than your actual payments. This is because your
employer (if you are not self employed) could be transferring payments to the fund as well.
• Fraction of the payment going to insurance - In many pension funds, some of the money is not accrued and instead it is used to insure you in case something bad happens. This number is a fraction
out of the payments.
• Payment commission - This is the fraction of the payment (in percentage) which is immediately gone, taken by your fund. In different countries, this value can be anywhere from 0 to a few percent.
• Yearly commission - This is the fraction of the accrued funds which every year is lost, taken by your fund
• Fixed monthly fund payment - Is there a fixed monthly payment, a "handling" surcharge?
At the end of the calculation, you'll find an estimate for the value of your fund at retirement, and also the extra money you could have had if your fund would have worked for free.
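A rough sketch of the kind of month-by-month accumulation such a calculator performs is shown below. The parameter names and the exact order in which the fees are applied are assumptions for illustration, not the page's actual formula:

def fund_value(years, annual_return, salary, salary_growth, max_salary,
               contrib_frac, insurance_frac, pay_commission, yearly_commission,
               fixed_monthly_fee):
    # Accumulate monthly deposits net of insurance and commissions, with yearly
    # salary growth capped at the maximum salary.
    balance = 0.0
    monthly_return = (1 + annual_return) ** (1 / 12) - 1
    for month in range(years * 12):
        current_salary = min(salary * (1 + salary_growth) ** (month // 12), max_salary)
        payment = current_salary * contrib_frac * (1 - insurance_frac)
        payment *= (1 - pay_commission)                    # commission taken off each deposit
        payment -= fixed_monthly_fee                       # fixed monthly handling charge
        balance = balance * (1 + monthly_return) + payment
        balance *= (1 - yearly_commission) ** (1 / 12)     # yearly commission, spread monthly
    return balance

with_fees = fund_value(35, 0.03, 10000, 0.02, 25000, 0.15, 0.10, 0.02, 0.005, 10)
no_fees = fund_value(35, 0.03, 10000, 0.02, 25000, 0.15, 0.10, 0.0, 0.0, 0)
print(round(with_fees), round(no_fees), round(no_fees - with_fees))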
|
{"url":"http://www.sciencebits.com/retirement_fund","timestamp":"2014-04-16T22:21:51Z","content_type":null,"content_length":"27196","record_id":"<urn:uuid:156d01a8-5029-4757-be6f-40e2df7a03c9>","cc-path":"CC-MAIN-2014-15/segments/1397609525991.2/warc/CC-MAIN-20140416005205-00489-ip-10-147-4-33.ec2.internal.warc.gz"}
|
prove that 2 A-modules are not isomorphic
December 4th 2012, 09:15 AM
prove that 2 A-modules are not isomorphic
Show that the A-modules $A^{2010}$ and $A^{2012}$ cannot be isomorphic for any commutative ring with 1.
Could someone please show me step by step how to solve it?
December 4th 2012, 12:11 PM
Re: prove that 2 A-modules are not isomorphic
consider the map f:A^2012--->A^2010 given by:
f((a[1],a[2],....,a[2010],a[2011],a[2012])) = (a[1],a[2],....,a[2010]).
show f is an A-linear map. note that (0,....,0,1) is a non-zero element of the kernel.
December 8th 2012, 04:01 AM
Re: prove that 2 A-modules are not isomorphic
but this is an endomorphism right? (i. e homomorphism between two modules), so I need to show that this endomorphism is injective and surjective and thus is an isomorphism, but how to show that?
It would have been surjective if the kernel was trivial but as you wrote the kernel is is not trivial...
And to show that f is A-linear I should show smth like that:
$f(am+bn)=af(m)+bf(n)$, $\forall a,b\in A, \forall m,n\in A^{2012}$?
thanks in advance
December 8th 2012, 10:11 AM
Re: prove that 2 A-modules are not isomorphic
it IS surjective, it is NOT injective. that's the whole point. the two A-modules are NOT isomorphic.
December 8th 2012, 11:37 PM
Re: prove that 2 A-modules are not isomorphic
ah of course....I mixed that up:)
I do have one question more, how do you know that $kerf=(0,.......,0,1)$?
Kernel by def is a subset of the domain (so we want to list all elements from $A^{2012}$ that will give us 0 in $A^{2010}$) right?
December 9th 2012, 08:56 AM
Re: prove that 2 A-modules are not isomorphic
i didn't say that was the entire kernel. it's just one element in the kernel besides the 0 of the domain (to disprove injectivity, we only need one).
December 9th 2012, 10:44 AM
Re: prove that 2 A-modules are not isomorphic
I don't understand your answer. You seem to be implying the "truth" of the following: If M1 and M2 are two A modules with K non-zero, and M1/K isomorphic to M2, then M1 is not isomorphic to M2.
To see this is false, let M1 be the cross product of A indexed by the naturals, M2 the cross product of A indexed by the integers greater than or equal to 1. Let K=(A, 0, 0, ...). Then clearly M1
/K is isomorphic to M2. But also M1 is isomorphic to M2 via the function g where g(f)(i) = f(i-1) for all i >= 1.
December 9th 2012, 12:56 PM
Re: prove that 2 A-modules are not isomorphic
The standard way to prove this is as follows. By Krull's theorem there exists a maximal ideal $\mathfrak{m}$ for $A$. Note then that $A/\mathfrak{m}$ is a field, and the isomorphism $A^{2010}\
xrightarrow{\simeq} A^{2012}$ would induce a $A/\mathfrak{m}$ isomorphism $A/\mathfrak{m}\otimes_A A^{2010}\xrightarrow{\simeq} A/\mathfrak{m}\otimes_A A^{2012}$. But, it's easy to show that this
says that, as $A/\mathfrak{m}$ modules one has that $(A/\mathfrak{m})^{2010}\cong(A/\mathfrak{m})^{2012}$--but since these are fields, it's standard vector space theory that this can't happen.
December 9th 2012, 01:39 PM
Re: prove that 2 A-modules are not isomorphic
I don't understand your answer. You seem to be implying the "truth" of the following: If M1 and M2 are two A modules with K non-zero, and M1/K isomorphic to M2, then M1 is not isomorphic to M2.
To see this is false, let M1 be the cross product of A indexed by the naturals, M2 the cross product of A indexed by the integers greater than or equal to 1. Let K=(A, 0, 0, ...). Then clearly M1
/K is isomorphic to M2. But also M1 is isomorphic to M2 via the function g where g(f)(i) = f(i-1) for all i >= 1.
not quite. i'm saying commutative rings have an invariant basis number. A^2010 and A^2012 are free A-modules of rank 2010 and 2012, respectively.
the PROOF that commutative rings are IBN rings is essentially what drexel28 just wrote: it can be shown that if R-->S is a surjective homomorphism and S is an IBN ring, so is R. but if I is a
maximal ideal of R, then R-->R/I is certainly a surjective homomorphism, and R/I is a field, and fields are certainly IBN rings (a la linear algebra).
note that krull's theorem (that drexel28 quoted) is just a fancy way of saying: the axiom of choice implies every ring has a maximal ideal. so we can always "mod our way" to the field case.
so....what have i left out? well i havent REALLY showed A^n is a free A-module.
but {(1,0,...,0),(0,1,0,...,0),...,(0,...,0,1)} is a basis (it's clearly a spanning set, and if an A-linear combination of these is 0, we have each coefficient = 0, by the definition of equality
on the underlying cartesian product set).
your counter-example is well taken, but....we're dealing with free A-modules of finite rank. you can construct a similar example using vector spaces of infinite dimension, but that doesn't
invalidate the rank-nullity theorem in the least.
December 10th 2012, 12:15 AM
Re: prove that 2 A-modules are not isomorphic
The standard way to prove this is as follows. By Krull's theorem there exists a maximal ideal $\mathfrak{m}$ for $A$. Note then that $A/\mathfrak{m}$ is a field, and the isomorphism $A^{2010}\
xrightarrow{\simeq} A^{2012}$ would induce a $A/\mathfrak{m}$ isomorphism $A/\mathfrak{m}\otimes_A A^{2010}\xrightarrow{\simeq} A/\mathfrak{m}\otimes_A A^{2012}$. But, it's easy to show that this
says that, as $A/\mathfrak{m}$ modules one has that $(A/\mathfrak{m})^{2010}\cong(A/\mathfrak{m})^{2012}$--but since these are fields, it's standard vector space theory that this can't happen.
I feel embarrassed now:) We had quite similar problem to this one on problem session where we had smth like this. If $\phi A^{n}\to A^{m}$ is an isomorphism of A-modules then $m=n$
Anyway thank you very much for the replies.
P.S. Is Krull's theorem the same as Zorn's lemma? We proved in class, using Zorn's lemma, that every commutative ring with "one" has a maximal ideal.
December 10th 2012, 08:02 AM
Re: prove that 2 A-modules are not isomorphic
krull's theorem is a form of zorn's lemma, which is a form of the axiom of choice.
Turn or go straight? Quick!
This is a classic problem. You are in a car heading straight towards a wall. Should you try to stop or should you try to turn to avoid the wall? Bonus question: what if the wall is not really wide so
you don’t have to turn 90 degrees?
Assumption: Let me assume that I can use the normal model of friction – that the maximum static friction force is proportional to the normal force. Also, I will assume that the frictional coefficient
for stopping is the same as for turning.
I am going to start with the case of trying to stop. Suppose the car is moving towards the wall at a speed v[0] and an initial distance s away from the wall. Diagram time:
This is a 1-d problem. So, let me consider the forces in the direction of motion. There is only one force – friction. Now – you might be tempted to use one of the kinematic equations. Well, I guess
that is just fine. The following equation is appropriate here.
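The kinematic equation in question is presumably the constant-acceleration relation
$$v^2 = v_0^2 + 2a\,\Delta x.$$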
Really though, I would think – hey distance. That means use the work-energy equation. It gives you the same thing though – essentially. Since I already started with this kinematic equation, let me
proceed. In the direction of motion, I get:
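Writing $\mu_s$ for the coefficient of friction assumed above, the only horizontal force gives something like
$$-\mu_s m g = m a \quad\Rightarrow\quad a = -\mu_s g.$$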
Putting this into the above kinematic equation (with the change in x-distance as just s). Oh, note that I am using the maximum static frictional force. I am assuming that this will be the shortest
distance you could stop. Also, I am assuming that the car stops without skidding.
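Setting the final speed to zero over the stopping distance s, this works out to roughly
$$0 = v_0^2 - 2\mu_s g\, s \quad\Rightarrow\quad s = \frac{v_0^2}{2\mu_s g}.$$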
There you have it. That is how far the car would need to stop. Quick check – does it have the right units? Yes.
Now, how far away could the car be and turn to miss the wall? Really, the question should be: if moving at a speed v[0], what is the smallest radius turn the car could make?
For an object moving in a circle, the following is true:
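The relation being used here is presumably the usual centripetal acceleration,
$$a_c = \frac{v^2}{r},$$
directed toward the center of the circle.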
Here is my review of acceleration of an object moving in a circle. Key point: I said I could have used work-energy for the stopping part. I could NOT have used work energy for this turning part
(well, I could use it but it wouldn’t give me anything useful). There are two reasons why the work-energy principle won’t do you any good. First, the speed of the car doesn’t change during this
motion. This means that there is no change in kinetic energy. Second, the frictional force is perpendicular to the direction of motion so that it does no work (we can discuss work done by static
friction later).
Back to the turning calculation. I know an expression for the frictional force and I want the radius of the circle to be s. This gives:
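With friction supplying the centripetal force and the radius set to s, that is roughly
$$\mu_s m g = \frac{m v_0^2}{s} \quad\Rightarrow\quad s = \frac{v_0^2}{\mu_s g}.$$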
And there you have it. If a car is traveling at a certain speed, it can stop in half the distance that it would take to turn.
I kind of like this result. Long ago, I took a driving class. You know, to learn how to drive. One thing stuck in my mind. While driving, something came out in the road in front of me (I can’t
remember what it was). I reacted by swerving just a little into the next lane. The driving instructor used that annoying brake on the passenger side (that he would sometimes use just to show he was
in control – I was going to stop, but he didn’t give me a chance). Anyway, he said “always stay in your lane”. He probably said that because he was so wise in physics even though he did smell funny.
Oh, it is probably a good idea to stay in your lane not only for physics reasons but also because you don’t want to hit the car next to you (unless you are playing Grand Theft Auto – then that is a different story).
Another question
I wonder if you could stop in even shorter distance? Is stopping the best way? Is there some combination of stopping and turning that could work?
Let me try the following. What if the car brakes for the first half and then turns for the second half. Would it hit the wall? First, how fast would it be going after braking for s/2 distance? The
acceleration would be the same as before:
Using the same expression for the stopping distance from above, I get:
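Working that out with the same friction model (and s the full braking distance from before), presumably
$$v_{new}^2 = v_0^2 - 2\mu_s g\,\frac{s}{2} = v_0^2 - \frac{v_0^2}{2} = \frac{v_0^2}{2}.$$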
And this makes sense. If the car is stopping just half the distance, then it should have half the kinetic energy (which is proportional to v^2). Ok, so if that is the new speed, what radius of a
circle would it be able to move in? Again, using the expression from above:
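That radius would be something like
$$r = \frac{v_{new}^2}{\mu_s g} = \frac{v_0^2/2}{\mu_s g} = \frac{v_0^2}{2\mu_s g} = s.$$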
Using this with half the distance – the total distance it would take to stop would be:
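Roughly,
$$d_{total} = \frac{s}{2} + r = \frac{s}{2} + s = \frac{3}{2}\,s.$$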
This is still greater than the stopping distance for just braking (which is s). But, did I prove that just stopping is the shortest distance? No. Maybe I just convinced myself to stop for now.
Here is a short bonus. Let me show that the work-energy principle is the same as that kinematic equation I was using. So, a car is stopping with just friction. The work done on the car by friction
(and I can do this if I consider the car to be a point particle):
The work-energy principle says this will be the same as the change in kinetic energy of the car. If the car starts at a speed of v[0] and stops at rest then:
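In symbols, something like
$$W_{friction} = -\mu_s m g\,\Delta x = 0 - \frac{1}{2}m v_0^2 \quad\Rightarrow\quad \Delta x = \frac{v_0^2}{2\mu_s g}.$$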
See. Same thing.
How wide would the wall have to be so that it wouldn’t matter if you brake or turn? Either way you would miss?
1. #1 V. infernalis August 5, 2010
One of the things I remember being drilled into me during driver training was brake and avoid.
There’s an interesting dynamics aspect to this, in that braking results in a transfer of the car’s weight forwards, which gives you more traction to make the turn. In the absence of ABS, slamming
on the brakes will result in your wheels locking up and thus a complete inability to steer (not to mention, an increased stopping distance).
2. #2 Samantha Vimes August 6, 2010
What about hard left, then brake? After all, in America, that would get the driver’s side away from the wall, so that if your stopping distance is too much, you at least avoid a head on
collision, and provided you have no passenger, there shouldn’t be serious injury. Because if s> the distance between the car and the wall, you really need to think in terms of minimizing damage.
3. #3 Tim Eisele August 6, 2010
For that matter, if you hit the wall at an angle instead of head-on, you’ll slide along it instead of stopping instantly. This should reduce the deceleration forces acting on you, decreasing your
odds of injury. So it seems to me that combined steering and braking is the way to go.
4. #4 jijw August 6, 2010
Don’t forget the crumple zone you have if you hit front on
5. #5 Janne August 6, 2010
In Sweden you learn to brake and swerve. Reason being, your normal obstacle (a moose, a human) is not all that wide, and the road is pretty likely to be empty apart from yourself. If there’s lots
of traffic, it’s unlikely that there’d be a static obstacle on it after all.
There’s a secondary consideration too. If you just swerve and fail, then you’re likely to hit the object at close to your original speed. And as speed is by far the most important determinant of
injury and death, that’s a Bad Thing.
On the other hand, obstacles are very rarely really wide. Anything remotely likely to show up on a street is pretty narrow – a moose, a car or something like it. So once you’re slowed down a bit
chances are still pretty good that you’ll be able to avoid it. And if you don’t, you’ll still hit at much lower speed than if you just swerved and failed.
6. #6 Rhett Allain August 6, 2010
Turn and then brake? Why didn’t I think about that? I did brake then turn. Hmmm.. I will have to do a follow up post.
7. #7 rob August 6, 2010
if you have a choice of a head on collision with, say an oncoming car, or swerving a bit so it isn’t head on, choose the swerve option.
your momentum can be broken up into components parallel and perpendicular to the other objects motion. by avoiding headon, you can decrease the parallel component by the cosine of the angle, and
thus decrease the magnitude of your change in momentum, the acceleration and thus the damage.
8. #8 Anonymous Coward August 6, 2010
If your wall is infinitely wide (as stated in the original problem), you can show that braking is always the best option if you want to avoid a collision with the greatest car-wall separation
(assuming the car is a point).
Just decompose your velocity vector into the component normal to the wall and a component along the wall.
If you turn (and avoid the wall) or do some combination of braking and turning:
1) the component normal must decelerate to zero at the point of closest approach
2) the component along the wall will increase to something nonzero.
If you just brake, you still have to do #1, but you avoid any forces required to achieve #2. Thus braking requires less total force.
So, for the infinite wall (and a point-mass car) braking along a straight line is superior to any braking/turning combination.
For the finite wall (or moose, as discussed above) it would probably require some algebra.
9. #9 Jerel August 6, 2010
I agree with brake and turn. With that, how fast would your speed have to be to flip your vehicle during your turn (depending on weight, center of gravity, width of tires, etc.)?
10. #10 CCPhysicist August 6, 2010
A lane change can be done in significantly less distance than either of the two extant options, but that will not work in the classic homework version of this problem where the road is completely
blocked but there is a side street open if you turn. This works because you don’t turn 90 degrees. Not even close. However, see my final observation below.
Minor detail in your solution: The standard problem assumes the same coefficient of friction for braking and turning, but this is only an approximation. Most tires have more grip under braking
than when turning, which favors the braking solution even more.
What Anonymous Coward says is a result of what is known as the “friction circle” in racing. You trade sideways acceleration for fore-aft acceleration, which is why you brake and then turn …
leaving out all sorts of fine details. (Tires are complicated. Kinetic friction is a bit more than static friction as long as the slip is a small fraction of the velocity, then falls off rapidly
once the tire is not rolling at all. This applies to both forward motion and the “slip angle” when cornering and is significantly different, depending on tire design, when cornering. The
“friction circle” is usually an ellipse with the coefficient bigger for braking as noted above.)
The coefficient of friction when rolling, or (even better) rolling and slipping slightly, is bigger than when the tire is sliding sideways. This means that tossing the car so it slides sideways
is a really bad solution and will result is significantly larger values for the variable “s” in your solution.
All of this ignores vehicle dynamics and traffic. If you are in the right car and are fully aware of the space around you, a lane change is the best way to avoid a problem in the road ahead. If
you are not fully aware of the space around your vehicle, changing lanes can make things worse, and even skilled drivers are not always paying full attention to the road. Hence the driving
instructor advice to stay in your lane. Worse, you might be in a van or SUV where sudden turning movements, particularly lane changes, with under-inflated tires can cause the vehicle to roll.
Even passenger cars can get in trouble this way if one tire is off the pavement, which is why you should have been taught to slow down in a straight line before pulling the car back on the
pavement from an unpaved shoulder.
11. #11 Sweetredtele August 7, 2010
@comment 1- You can still steer when braking, independent of whether or not the car has ABS. It’s called the Emergency Brake, and in most vehicles it is attached only to the rear wheels.
I see I’ve been beaten to the Brake and Turn. I have done just that in a car- E-brake and a turn so that I do a complete 180. That would seem the best choice- and the distance needed to stop is
far less than braking alone.
12. #12 V. infernalis August 8, 2010
@11 – Unless you’re a trained stunt driver, I wouldn’t advise yanking the hand brake and then turning, because you’re going to end up locking up the rear wheels and then going into an
uncontrolled skid/slide.
But sure, it will look really cool.
13. #13 CCPhysicist August 8, 2010
@11 – The only advantage to doing a 180 is that you hit the wall going backwards. Your stopping distance will be greater because the coefficient of friction is less than with threshold braking or
modern ABS while going straight.
The disadvantage might be that you kill the kids in the backseat because cars are not designed to crash that way.
Sure would make a great Mythbusters episode, however.
Parallel data mining for association rules on shared-memory
- IEEE Concurrency , 1999
"... This article presents a survey of the state-of-the-art in parallel and distributed association rule mining (ARM) algorithms. This is direly needed given the importance of association rules to
data mining, and given the tremendous amount of research it has attracted in recent years. This article p ..."
Cited by 111 (3 self)
Add to MetaCart
This article presents a survey of the state-of-the-art in parallel and distributed association rule mining (ARM) algorithms. This is direly needed given the importance of association rules to data
mining, and given the tremendous amount of research it has attracted in recent years. This article provides a taxonomy of the extant association mining methods, characterizing them according to the
database format used, search and enumeration techniques utilized, and depending on whether they enumerate all or only maximal patterns, and their complexity in terms of the number of database scans.
The survey clearly lists the design space of the parallel and distributed ARM algorithms based on the platform used (distributed or shared-memory), kind of parallelism exploited (task or data), and
the load balancing strategy used (static or dynamic). A large number of parallel and distributed ARM methods are reviewed and grouped into related techniques. It is shown that there are a few
- DATA MINING AND KNOWLEDGE DISCOVERY , 1997
"... Discovery of association rules is an important data mining task. Several parallel and sequential algorithms have been proposed in the literature to solve this problem. Almost all of these
algorithms make repeated passes over the database to determine the set of frequent itemsets (a subset of databas ..."
Cited by 53 (6 self)
Add to MetaCart
Discovery of association rules is an important data mining task. Several parallel and sequential algorithms have been proposed in the literature to solve this problem. Almost all of these algorithms
make repeated passes over the database to determine the set of frequent itemsets (a subset of database items), thus incurring high I/O overhead. In the parallel case, most algorithms perform a
sum-reduction at the end of each pass to construct the global counts, also incurring high synchronization cost. In this paper we describe new parallel association mining algorithms. The algorithms
use novel itemset clustering techniques to approximate the set of potentially maximal frequent itemsets. Once this set has been identified, the algorithms make use of efficient traversal techniques
to generate the frequent itemsets contained in each cluster. We propose two clustering schemes based on equivalence classes and maximal hypergraph cliques, and study two lattice traversal techniques
based on bottom-up and hybrid search. We use a vertical database layout to cluster related transactions together. The database is also selectively replicated so that the portion of the database
needed for the computation of associations is local to each processor. After the initial set-up phase, the algorithms do not need any further communication or synchronization. The algorithms minimize
I/O overheads by scanning the local database portion only twice. Once in the set-up phase, and once when processing the itemset clusters. Unlike previous parallel approaches, the algorithms use
simple intersection operations to compute frequent itemsets and
, 2002
"... This paper presents a brief overview of the DDM algorithms, systems, applications, and the emerging research directions. The structure of the paper is organized as follows. We first present the
related research of DDM and illustrate data distribution scenarios. Then DDM algorithms are reviewed. Subs ..."
Cited by 49 (4 self)
Add to MetaCart
This paper presents a brief overview of the DDM algorithms, systems, applications, and the emerging research directions. The structure of the paper is organized as follows. We first present the
related research of DDM and illustrate data distribution scenarios. Then DDM algorithms are reviewed. Subsequently, the architectural issues in DDM systems and future directions are discussed
- In Proceedings of the second SIAM conference on Data Mining , 2002
"... With recent technological advances, shared memory parallel machines have become more scalable, and oer large main memories and high bus bandwidths. They are emerging as good platforms for data
warehousing and data mining. In this paper, we focus on shared memory parallelization of data mining alg ..."
Cited by 27 (10 self)
Add to MetaCart
With recent technological advances, shared memory parallel machines have become more scalable, and offer large main memories and high bus bandwidths. They are emerging as good platforms for data
warehousing and data mining. In this paper, we focus on shared memory parallelization of data mining algorithms.
- In ICDM , 2001
"... Searching for frequent patterns in transactional databases is considered one of the most important data mining problems. Most current association mining algorithms, whether sequential or
parallel, adopt an apriori-like algorithm that requires full multiple I/O scans of the data set and expensive ..."
Cited by 27 (3 self)
Add to MetaCart
Searching for frequent patterns in transactional databases is considered one of the most important data mining problems. Most current association mining algorithms, whether sequential or parallel,
adopt an apriori-like algorithm that requires full multiple I/O scans of the data set and expensive computation to generate the potential frequent items. The recent explosive growth in data
collection made the current association rule mining algorithms restricted and inadequate to analyze excessively large transaction sets due to the above mentioned limitations. In this paper we
introduce a new parallel algorithm MLFPT (Multiple Local Frequent Pattern Tree) for parallel mining of frequent patterns, based on FP-growth mining, that uses only two full I/O scans of the database,
eliminating the need for generating the candidate items, and distributing the work fairly among processors to achieve near optimum load balance.
- In Proceedings of the International Conference on Very Large Data Bases (VLDB , 2005
"... In this paper, we examine the performance of frequent pattern mining algorithms on a modern processor. A detailed performance study reveals that even the best frequent pattern mining
implementations, with highly efficient memory managers, still grossly under-utilize a modern processor. The primary p ..."
Cited by 24 (6 self)
Add to MetaCart
In this paper, we examine the performance of frequent pattern mining algorithms on a modern processor. A detailed performance study reveals that even the best frequent pattern mining implementations,
with highly efficient memory managers, still grossly under-utilize a modern processor. The primary performance bottlenecks are poor data locality and low instruction level parallelism (ILP). We
propose a cache-conscious prefix tree to address this problem. The resulting tree improves spatial locality and also enhances the benefits from hardware cache line prefetching. Furthermore, the
design of this data structure allows the use of a novel tiling strategy to improve temporal locality. The result is an overall speedup of up to 3.2 when compared with state-of-the-art
implementations. We then show how these algorithms can be improved further by realizing a non-naive thread-based decomposition that targets simultaneously multi-threaded processors. A key aspect of
this decomposition is to ensure cache re-use between threads that are co-scheduled at a fine granularity. This optimization affords an additional speedup of 50%, resulting in an overall speedup of up
to 4.8. To
- In Proceedings of the 9th Annual ACM Symposium on Parallel Algorithms and Architectures , 1997
"... Discovery of association rules is an important database mining problem. Mining for association rules involves extracting patterns from large databases and inferring useful rules from them.
Several parallel and sequential algorithms have been proposed in the literature to solve this problem. Almost a ..."
Cited by 23 (4 self)
Add to MetaCart
Discovery of association rules is an important database mining problem. Mining for association rules involves extracting patterns from large databases and inferring useful rules from them. Several
parallel and sequential algorithms have been proposed in the literature to solve this problem. Almost all of these algorithms make repeated passes over the database to determine the commonly
occurring patterns or itemsets (set of items), thus incurring high I/O overhead. In the parallel case, these algorithms do a reduction at the end of each pass to construct the global patterns, thus
incurring high synchronization cost. In this paper we describe a new parallel association mining algorithm. Our algorithm is a result of detailed study of the available parallelism and the properties
of associations. The algorithm uses a scheme to cluster related frequent itemsets together, and to partition them among the processors. At the same time it also uses a different database layout which
clusters related transactions together, and selectively replicates the database so that the portion of the database needed for the computation of associations is local to each processor. After the
initial set-up phase, the algorithm eliminates the need for further communication or synchronization. The algorithm further scans the local database partition only three times, thus minimizing I/O
overheads. Unlike previous approaches, the algorithms uses simple intersection operations to compute frequent itemsets and doesn’t have to maintain or search complex hash structures. Our experimental
testbed is a 32-processor DEC Alpha cluster inter-connected by the Memory Channel network. We present results on the performance of our algorithm on various databases, and compare it against a well
known parallel algorithm. Our algorithm outperforms it by an more than an order of magnitude. 1
- In Pacific-Asia Conference on Knowledge Discovery and Data Mining , 1998
"... An efficient parallel algorithm FPM (Fast Parallel Mining) for mining association rules on a shared-nothing parallel system has been proposed. It adopts the count distribution approach and has
incorporated two powerful candidate pruning techniques, i.e., distributed pruning and global pruning. It ha ..."
Cited by 22 (1 self)
Add to MetaCart
An efficient parallel algorithm FPM (Fast Parallel Mining) for mining association rules on a shared-nothing parallel system has been proposed. It adopts the count distribution approach and has
incorporated two powerful candidate pruning techniques, i.e., distributed pruning and global pruning. It has a simple communication scheme which performs only one round of message exchange in each
iteration. We found that the two pruning techniques are very sensitive to data skewness, which describes the degree of non-uniformity of the itemset distribution among the database partitions.
Distributed pruning is very effective when data skewness is high. Global pruning is more effective than distributed pruning even for the mild data skewness case. We have implemented the algorithm on
an IBM SP2 parallel machine. The performance studies confirm our observation on the relationship between the effectiveness of the two pruning techniques and data skewness. It has also shown that FPM
- In Proceedings of the first SIAM conference on Data Mining , 2001
"... Data mining is an interdisciplinary field, having applications in diverse areas like bioinformatics, medical informatics, scientific data analysis, financial analysis, consumer profiling, etc.
In each of these application domains, the amount of data available for analysis has exploded in recent year ..."
Cited by 21 (13 self)
Add to MetaCart
Data mining is an interdisciplinary field, having applications in diverse areas like bioinformatics, medical informatics, scientific data analysis, financial analysis, consumer profiling, etc. In
each of these application domains, the amount of data available for analysis has exploded in recent years, making the scalability of data
, 2003
"... We present a new distributed association rule mining (D-ARM) algorithm that demonstrates superlinear speedup with the number of computing nodes. The algorithm is the first D-ARM algorithm to
perform a single scan over the database. As such, its performance is unmatched by any previous algorithm. Sca ..."
Cited by 19 (0 self)
Add to MetaCart
We present a new distributed association rule mining (D-ARM) algorithm that demonstrates superlinear speedup with the number of computing nodes. The algorithm is the first D-ARM algorithm to perform
a single scan over the database. As such, its performance is unmatched by any previous algorithm. Scale-up experiments over standard synthetic benchmarks demonstrate stable run time regardless of the
number of computers. Theoretical analysis reveals a tighter bound on error probability than the one shown in the corresponding sequential algorithm.
Randomization
As deterministic algorithms are driven to their limits when one tries to solve hard problems with them, a useful technique to speed up the computation is randomization. In randomized algorithms, the
algorithm has access to a random source, which can be imagined as tossing coins during the computation. Depending on the outcome of the toss, the algorithm may split up its computation path.
There are two main types of randomized algorithms: Las Vegas algorithms and Monte-Carlo algorithms. In Las Vegas algorithms, the algorithm may use the randomness to speed up the computation, but the
algorithm must always return the correct answer to the input. Monte-Carlo algorithms do not have this restriction; that is, they are allowed to give wrong return values. However, returning a
wrong return value must have a small probability, otherwise that Monte-Carlo algorithm would not be of any use.
Many approximation algorithms use randomization.
Ordered Statistics
Before covering randomized techniques, we'll start with a deterministic problem that leads to a problem that utilizes randomization. Suppose you have an unsorted array of values and you want to find
• the maximum value,
• the minimum value, and
• the median value.
In the immortal words of one of our former computer science professors, "How can you do?"
First, it's relatively straightforward to find the largest element:
// find-max -- returns the maximum element
function find-max(array vals[1..n]): element
let result := vals[1]
for i from 2 to n:
result := max(result, vals[i])
return result
An initial assignment of $-\infty$ to result would work as well, but this is a useless call to the max function since the first element compared gets set to result. By initializing result as such the
function only requires n-1 comparisons. (Moreover, in languages capable of metaprogramming, the data type may not be strictly numerical and there might be no good way of assigning $-\infty$; using
vals[1] is type-safe.)
A similar routine to find the minimum element can be done by calling the min function instead of the max function.
But now suppose you want to find the min and the max at the same time; here's one solution:
// find-min-max -- returns the minimum and maximum element of the given array
function find-min-max(array vals): pair
return pair {find-min(vals), find-max(vals)}
Because find-max and find-min both make n-1 calls to the max or min functions (when vals has n elements), the total number of comparisons made in find-min-max is $2n-2$.
However, some redundant comparisons are being made. These redundancies can be removed by "weaving" together the min and max functions:
// find-min-max -- returns the minimum and maximum element of the given array
function find-min-max(array vals[1..n]): pair
let min := $\infty$
let max := $-\infty$
if n is odd:
min := max := vals[1]
vals := vals[2,..,n] // we can now assume n is even
n := n - 1
    for i := 1 to n/2:               // consider pairs of values in vals
        if vals[i] < vals[i + n/2]:
            let a := vals[i]
            let b := vals[i + n/2]
        else:
            let a := vals[i + n/2]
            let b := vals[i]         // invariant: a <= b
        fi
if a < min: min := a fi
if b > max: max := b fi
return pair {min, max}
Here, we only loop $n/2$ times instead of n times, but for each iteration we make three comparisons. Thus, the number of comparisons made is $(3/2)n = 1.5n$, reducing the comparison count to about $3/4$ of the $2n-2$ comparisons used by the original algorithm.
Only three comparisons need to be made instead of four because, by construction, it's always the case that $a\le b$. (In the first part of the "if", we actually know more specifically that $a < b$,
but under the else part, we can only conclude that $a\le b$.) This property is utilized by noting that a doesn't need to be compared with the current maximum, because b is already greater than or
equal to a, and similarly, b doesn't need to be compared with the current minimum, because a is already less than or equal to b.
In software engineering, there is a struggle between using libraries versus writing customized algorithms. In this case, the min and max functions weren't used in order to get a faster find-min-max
routine. Such an operation would probably not be the bottleneck in a real-life program: however, if testing reveals the routine should be faster, such an approach should be taken. Typically, the
solution that reuses libraries is better overall than writing customized solutions. Techniques such as open implementation and aspect-oriented programming may help manage this contention to get the
best of both worlds, but regardless it's a useful distinction to recognize.
Finally, we need to consider how to find the median value. One approach is to sort the array then extract the median from the position vals[n/2]:
// find-median -- returns the median element of vals
function find-median(array vals[1..n]): element
assert (n > 0)
return vals[n / 2]
If our values are not numbers close enough in value (or otherwise cannot be sorted by a radix sort) the sort above is going to require $O(n\log n)$ steps.
However, it is possible to extract the nth-ordered statistic in $O(n)$ time. The key is eliminating the sort: we don't actually require the entire array to be sorted in order to find the median, so
there is some waste in sorting the entire array first. One technique we'll use to accomplish this is randomness.
Before presenting a non-sorting find-median function, we introduce a divide and conquer-style operation known as partitioning. What we want is a routine that finds a random element in the array and
then partitions the array into three parts:
1. elements that are less than or equal to the random element;
2. elements that are equal to the random element; and
3. elements that are greater than or equal to the random element.
These three sections are denoted by two integers: j and i. The partitioning is performed "in place" in the array:
// partition -- break the array three partitions based on a randomly picked element
function partition(array vals): pair{j, i}
Note that when the random element picked is actually represented three or more times in the array it's possible for entries in all three partitions to have the same value as the random element. While
this operation may not sound very useful, it has a powerful property that can be exploited: When the partition operation completes, the randomly picked element will be in the same position in the
array as it would be if the array were fully sorted!
This property might not sound so powerful, but recall the optimization for the find-min-max function: we noticed that by picking elements from the array in pairs and comparing them to each other
first we could reduce the total number of comparisons needed (because the current min and max values need to be compared with only one value each, and not two). A similar concept is used here.
While the code for partition is not magical, it has some tricky boundary cases:
// partition -- break the array into three ordered partitions from a random element
function partition(array vals): pair{j, i}
let m := 0
let n := vals.length - 2 // for an array vals, vals[vals.length-1] is the last element, which holds the partition,
// so the last sort element is vals[vals.length-2]
let irand := random(m, n) // returns any value from m to n
let x := vals[irand]
swap( irand,n+ 1 ) // n+1 = vals.length-1 , which is the right most element, and acts as store for partition element and sentinel for m
// values in vals[n..] are greater than x
// values in vals[0..m] are less than x
while (m <= n ) // see explanation in quick sort why should be m <= n instead of m < n
// in the 2 element case, vals.length -2 = 0 = n = m, but if the 2-element case is out-of-order vs. in-order, there must be a different action.
// by implication, the different action occurs within this loop, so must process the m = n case before exiting.
        while vals[m] <= x             // in the 2-element case, second element is partition, first element at m. If in-order, m will increment
            m++
        while x < vals[n] && n > 0     // stops if vals[n] belongs in left partition or hits start of array
            n--
        if ( m >= n) break;
        swap(m,n)                      // exchange vals[n] and vals[m]
        m++                            // don't rescan swapped elements
// partition: [0..m-1] [] [n+1..] note that m=n+1
// if you need non empty sub-arrays:
swap(m,vals.length - 1) // put the partition element in the between left and right partitions
// in 2-element out-of-order case, m=0 (not incremented in loop), and the first and last(second) element will swap.
// partition: [0..n-1] [n..n] [n+1..]
We can use partition as a subroutine for a general find operation:
// find -- moves elements in vals such that location k holds the value it would when sorted
function find(array vals, integer k)
    assert (0 <= k < vals.length)  // k must be a valid index
    if vals.length <= 1:
        return
    fi
    let pair (j, i) := partition(vals)
    if k <= i:
        find(vals[0,..,i], k)                 // the k-th element lies in the left partition
    else-if j <= k:
        find(vals[j,..,vals.length], k - j)   // the k-th element lies in the right partition
    fi                                        // otherwise vals[k] equals the pivot and is already in place
Which leads us to the punch-line:
// find-median -- returns the median element of vals
function find-median(array vals): element
assert (vals.length > 0)
let median_index := vals.length / 2;
find(vals, median_index)
return vals[median_index]
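For readers who want something runnable, here is a sketch of the same idea in Python (an illustration, not the wikibook's reference code): a randomized quickselect that three-way partitions around a random pivot and recurses into only one side.

import random

def quickselect(vals, k):
    """Return the k-th smallest element (0-based) of vals in expected O(n) time."""
    assert 0 <= k < len(vals)
    vals = list(vals)                     # work on a copy
    lo, hi = 0, len(vals) - 1
    while True:
        if lo == hi:
            return vals[lo]
        pivot = vals[random.randint(lo, hi)]
        # three-way partition of the current window around the random pivot
        lt = [x for x in vals[lo:hi + 1] if x < pivot]
        eq = [x for x in vals[lo:hi + 1] if x == pivot]
        gt = [x for x in vals[lo:hi + 1] if x > pivot]
        vals[lo:hi + 1] = lt + eq + gt
        if k < lo + len(lt):
            hi = lo + len(lt) - 1         # keep searching in the left part
        elif k < lo + len(lt) + len(eq):
            return pivot                  # k falls in the "equal" block: already in place
        else:
            lo = lo + len(lt) + len(eq)   # keep searching in the right part

def find_median(vals):
    return quickselect(vals, len(vals) // 2)

print(find_median([7, 1, 5, 3, 9, 2, 8]))   # prints 5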
One consideration that might cross your mind is "is the random call really necessary?" For example, instead of picking a random pivot, we could always pick the middle element instead. Given that our
algorithm works with all possible arrays, we could conclude that the running time on average for all of the possible inputs is the same as our analysis that used the random function. The reasoning
here is that under the set of all possible arrays, the middle element is going to be just as "random" as picking anything else. But there's a pitfall in this reasoning: Typically, the input to an
algorithm in a program isn't random at all. For example, the input has a higher probability of being sorted than just by chance alone. Likewise, because it is real data from real programs, the data
might have other patterns in it that could lead to suboptimal results.
To put this another way: for the randomized median finding algorithm, there is a very small probability it will run suboptimally, independent of what the input is; while for a deterministic algorithm
that just picks the middle element, there is a greater chance it will run poorly on some of the most frequent input types it will receive. This leads us to the following guideline:
Randomization Guideline:
If your algorithm depends upon randomness, be sure you introduce the randomness yourself instead of depending upon the data to be random.
Note that there are "derandomization" techniques that can take an average-case fast algorithm and turn it into a fully deterministic algorithm. Sometimes the overhead of derandomization is so much
that it requires very large datasets to get any gains. Nevertheless, derandomization in itself has theoretical value.
The randomized find algorithm was invented by C. A. R. "Tony" Hoare. While Hoare is an important figure in computer science, he may be best known in general circles for his quicksort algorithm, which
we discuss in the next section.
The median-finding partitioning algorithm in the previous section is actually very close to the implementation of a full blown sorting algorithm. Building a Quicksort Algorithm is left as an exercise
for the reader, and is recommended first, before reading the next section ( Quick sort is diabolical compared to Merge sort, which is a sort not improved by a randomization step ) .
A key part of quick sort is choosing the right pivot (ideally the median). But to get it up and running quickly, start with the assumption that the array is unsorted, so the rightmost element of each subarray is as likely to be the median as any other element, and be optimistic that the rightmost element doesn't happen to be the largest key; otherwise we would be removing only one element (the partition element) at each step, leaving no right subarray to sort and an (n-1)-element left subarray to sort.
This is where randomization is important for quick sort: choosing a better partition key is pretty important for quick sort to work efficiently.
Compare the number of comparisons that are required for quick sort vs. insertion sort.
With insertion sort, the average number of comparisons for finding where the first element belongs in an ascending sort of a randomized array is n/2.
The second element's average number of comparisons is (n-1)/2;
the third element's is (n - 2)/2.
The total number of comparisons is [ n + (n - 1) + (n - 2) + ... + 1 ] divided by 2, which is n(n+1)/4, or about O(n squared).
In Quicksort, the size of each subproblem will halve at each partition step if the true median is chosen, since the left half partition doesn't need to be compared with the right half partition, but at each step the total number of elements across all partitions created by the previous level of partitioning will still be n.
The number of levels of comparing n elements is the number of steps of dividing n by two , until n = 1. Or in reverse, 2 ^ m ~ n, so m = log[2] n.
So the total number of comparisons is n (elements) x m (levels of scanning) or n x log[2]n ,
So the number of comparisons is O(n x log[2](n)), which is smaller than insertion sort's O(n^2) or O(n x n).
(Comparing O(n x log[2](n)) with O(n x n), the common factor n can be eliminated, and the comparison is log[2](n) vs n, which is exponentially different as n becomes larger: for n = 2^16 it is 16 vs 65536, and for n = 2^32 it is 32 vs about 4 billion.)
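Putting the two counts side by side, with the insertion-sort figure taken as the average case sketched above:
$$C_{insertion}(n) \approx \frac{n + (n-1) + \cdots + 1}{2} = \frac{n(n+1)}{4} = O(n^2), \qquad C_{quicksort}(n) \approx n\,\log_2 n = O(n \log n).$$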
To implement the partitioning in-place on a part of the array determined by a previous recursive call, what is needed is a scan from each end of the part, swapping whenever the value of the left scan's
current location is greater than the partition value, and the value of the right scan's current location is less than the partition value. So the initial step is :-
Assign the partition value to the right most element, swapping if necessary.
So the partitioning step is :-
increment the left scan pointer while the current value is less than the partition value.
decrement the right scan pointer while the current value is more than the partition value ,
or the location is equal to or more than the left most location.
exit if the pointers have crossed ( l >= r),
perform a swap where the left and right pointers have stopped ,
on values where the left pointer's value is greater than the partition,
and the right pointer's value is less than the partition.
Finally, after exiting the loop because the left and right pointers have crossed,
swap the rightmost partition value,
with the last location of the left forward scan pointer ,
and hence ends up between the left and right partitions.
Make sure at this point , that after the final swap, the cases of a 2 element in-order array, and a 2 element out-of-order array , are handled correctly, which should mean all cases are handled
correctly. This is a good debugging step for getting quick-sort to work.
For the in-order two-element case, the left pointer stops on the partition or second element , as the partition value is found. The right pointer , scanning backwards, starts on the first element
before the partition, and stops because it is in the leftmost position.
The pointers cross, and the loop exits before doing a loop swap. Outside the loop, the contents of the left pointer at the rightmost position and the partition , also at the right most position , are
swapped, achieving no change to the in-order two-element case.
For the out-of-order two-element case, the left pointer scans and stops at the first element, because it is greater than the partition (the left scan stops on values greater than the partition).
The right pointer starts and stops at the first element because it has reached the leftmost element.
The loop exits because left pointer and right pointer are equal at the first position, and the contents of the left pointer at the first position and the partition at the rightmost (other) position ,
are swapped , putting previously out-of-order elements , into order.
Another implementation issue, is to how to move the pointers during scanning. Moving them at the end of the outer loop seems logical.
partition(a, l, r) {
    v = a[r];
    i = l;
    j = r - 1;
    while ( i <= j ) {   // need to also scan when i = j as well as i < j,
                         // in the 2 in-order case,
                         // so that i is incremented to the partition
                         // and nothing happens in the final swap with the partition at r.
        while ( a[i] < v ) ++i;
        while ( v <= a[j] && j > l ) --j;
        if ( i >= j ) break;
        swap(a, i, j);   // exchange the out-of-place pair
        ++i; --j;        // don't rescan swapped elements
    }
    swap(a, i, r);
    return i;
}
With the pre-increment/decrement unary operators, scanning can be done just before testing within the test condition of the while loops, but this means the pointers should be offset -1 and +1
respectively at the start : so the algorithm then looks like:-
partition (a, l, r) {
    v = a[r];     // v is partition value, at a[r]
    i = l - 1;    // left scan starts one before l, because of the pre-increment
    j = r;        // right scan starts at r; the pre-decrement skips the partition element
    while (true) {
        while ( a[++i] < v );
        while ( v <= a[--j] && j > l );
        if (i >= j) break;
        swap(a, i, j);
    }
    swap(a, i, r);
    return i;
}
And the qsort algorithm is
qsort(a, l, r) {
    if (l >= r) return;
    p = partition(a, l, r);
    qsort(a, l, p - 1);
    qsort(a, p + 1, r);
}
Finally, randomization of the partition element.
random_partition (a, l, r) {
    p = random_int(r - l) + l;
    // use the median of a[l], a[p], a[r] as the partition value
    if (a[p] < a[l]) swap(a, p, l);
    if (a[r] < a[l]) swap(a, r, l);
    if (a[r] < a[p]) swap(a, r, p);   // now a[l] <= a[p] <= a[r]
    swap(a, p, r);                    // move the median into the partition position
}
this can be called just before calling partition in qsort().
Shuffling an Array
// This keeps data in during shuffle
temporaryArray = { }
// This records if an item has been shuffled
usedItemArray = { }
// Number of items in array
itemNum = 0
while ( itemNum != lengthOf( inputArray ) ){
    usedItemArray[ itemNum ] = false      // none of the items have been shuffled yet
    itemNum = itemNum + 1
}
itemNum = 0                               // we'll use this again
while ( itemNum != lengthOf( inputArray ) ){
    itemPosition = randomNumber( 0 to lengthOf( inputArray ) - 1 )
    while ( usedItemArray[ itemPosition ] != false ){
        itemPosition = randomNumber( 0 to lengthOf( inputArray ) - 1 )
    }
    temporaryArray[ itemPosition ] = inputArray[ itemNum ]
    usedItemArray[ itemPosition ] = true  // mark this slot as taken so it is not reused
    itemNum = itemNum + 1
}
inputArray = temporaryArray
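The rejection-based shuffle above can spend many random draws near the end, when most slots are already used. A common alternative is the Fisher-Yates shuffle, which needs only one pass; a sketch in Python:

import random

def fisher_yates_shuffle(items):
    """Shuffle items in place; every permutation is equally likely."""
    for i in range(len(items) - 1, 0, -1):
        j = random.randint(0, i)          # pick from the not-yet-fixed prefix, including i itself
        items[i], items[j] = items[j], items[i]
    return items

print(fisher_yates_shuffle(list(range(10))))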
Equal Multivariate Polynomials
[TODO: as of now, there is no known deterministic polynomial time solution, but there is a randomized polytime solution. The canonical example used to be IsPrime, but a deterministic, polytime
solution has been found.]
Hash tables
Hashing relies on a hashcode function to distribute keys evenly over the available slots. In Java, a straightforward scheme for an integer key is to mix it with a small constant multiplier (31 is the traditional choice) and then take the result modulo the size of the hash table. For string keys, the hash is built up as a polynomial in 31: a running total is repeatedly multiplied by 31 and each character's ordinal value is added in.
The wikibook Data Structures/Hash Tables chapter covers the topic well.
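A small sketch of that polynomial-style string hash in Python (the constant 31 and the table size are illustrative choices, not any particular library's exact code):

def string_hash(key, table_size):
    """Polynomial rolling hash: h = (...((c0*31 + c1)*31 + c2)...) mod table_size."""
    h = 0
    for ch in key:
        h = (h * 31 + ord(ch)) % table_size
    return h

print(string_hash("randomization", 101))  # slot index in a 101-slot table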
Skip Lists
[TODO: Talk about skips lists. The point is to show how randomization can sometimes make a structure easier to understand, compared to the complexity of balanced trees.]
Dictionary or Map , is a general concept where a value is inserted under some key, and retrieved by the key. For instance, in some languages , the dictionary concept is built-in (Python), in others ,
it is in core libraries ( C++ S.T.L. , and Java standard collections library ). The library providing languages usually lets the programmer choose between a hash algorithm, or a balanced binary tree
implementation (red-black trees). Recently, skip lists have been offered, because they offer advantages of being implemented to be highly concurrent for multiple threaded applications.
Hashing is a technique that depends on the randomness of keys when passed through a hash function, to find a hash value that corresponds to an index into a linear table. Hashing works as fast as the
hash function, but works well only if the inserted keys spread out evenly in the array, as any keys that hash to the same index have to be dealt with as a hash collision problem, e.g. by keeping a
linked list for collisions for each slot in the table, and iterating through the list to compare the full key of each key-value pair vs the search key.
The disadvantage of hashing is that in-order traversal is not possible with this data structure.
Binary trees can be used to represent dictionaries, and in-order traversal of binary trees is possible by visiting of nodes ( visit left child, visit current node, visit right child, recursively ).
Binary trees can suffer from poor search when they are "unbalanced" e.g. the keys of key-value pairs that are inserted were inserted in ascending or descending order, so they effectively look like
linked lists with no left child, and all right children. self-balancing binary trees can be done probabilistically (using randomness) or deterministically ( using child link coloring as red or black
) , through local 3-node tree rotation operations. A rotation is simply swapping a parent with a child node, but preserving order e.g. for a left child rotation, the left child's right child becomes
the parent's left child, and the parent becomes the left child's right child.
Splay trees are a random application of rotations during a search , so that a lopsided tree structure is randomized into a balanced one.
Red-black trees can be understood more easily if corresponding 2-3-4 trees are examined. A 2-3-4 tree is a tree where nodes can have 2 children, 3 children, or 4 children, with 3 children nodes
having 2 keys between the 3 children, and 4 children-nodes having 3 keys between the 4 children. 4-nodes are actively split into 3 single key 2 -nodes, and the middle 2-node passed up to be merged
with the parent node , which , if a one-key 2-node, becomes a two key 3-node; or if a two key 3-node, becomes a 4-node, which will be later split (on the way up). The act of splitting a three key
4-node is actually a re-balancing operation, that prevents a string of 3 nodes of grandparent, parent , child occurring , without a balancing rotation happening. 2-3-4 trees are a limited example of
B-trees, which usually have enough nodes as to fit a physical disk block, to facilitate caching of very large indexes that can't fit in physical RAM ( which is much less common nowadays).
A red-black tree is a binary tree representation of a 2-3-4 tree, where 3-nodes are modeled by a parent with one red child, and 4 -nodes modeled by a parent with two red children. Splitting of a
4-node is represented by the parent with 2 red children, flipping the red children to black, and itself into red. There is never a case where the parent is already red, because there also occurs
balancing operations where if there is a grandparent with a red parent with a red child , the grandparent is rotated to be a child of the parent, and parent is made black and the grandparent is made
red; this unifies with the previous flipping scenario, of a 4-node represented by 2 red children. Actually, it may be this standardization of 4-nodes with mandatory rotation of skewed or zigzag
4-nodes that results in re-balancing of the binary tree.
A newer optimization is to left rotate any single right red child to a single left red child, so that only right rotation of left-skewed inline 4-nodes (3 red nodes inline ) would ever occur,
simplifying the re-balancing code.
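As an illustration of the rotation operation these balancing schemes rely on, here is a sketch of a left rotation on a plain binary-search-tree node in Python (the node fields are assumed names, not any particular library's API):

class Node:
    def __init__(self, key, left=None, right=None):
        self.key, self.left, self.right = key, left, right

def rotate_left(h):
    """Promote h's right child x; keys stay in search-tree order."""
    x = h.right
    h.right = x.left   # x's left subtree becomes h's right subtree
    x.left = h         # h becomes x's left child
    return x           # x is the new root of this subtree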
Skip lists are modeled after single linked lists, except nodes are multilevel. Tall nodes are rarer, but the insert operation ensures nodes are connected at each level.
Implementation of skip lists requires creating randomly high multilevel nodes, and then inserting them.
Nodes are created by iterating a random function: a node only gains another level if the iteration survives yet another random threshold (e.g. 0.5, if the random value is between 0 and 1), so higher-level nodes occur later in the iteration and are rarer.
Insertion requires a temporary previous node array with the height of the generated inserting node. It is used to store the last pointer for a given level , which has a key less than the insertion
The scanning begins at the head of the skip list, at highest level of the head node, and proceeds across until a node is found with a key higher than the insertion key, and the previous pointer
stored in the temporary previous node array. Then the next lower level is scanned from that node , and so on, walking zig-zag down, until the lowest level is reached.
Then a list insertion is done at each level of the temporary previous node array, so that the previous node's next node at each level is made the next node for that level for the inserting node, and
the inserting node is made the previous node's next node.
Search involves iterating from the highest level of the head node to the lowest level, and scanning along the next pointer for each level until a node greater than the search key is found, moving
down to the next level , and proceeding with the scan, until the higher keyed node at the lowest level has been found, or the search key found.
The creation of less frequent-when-taller , randomized height nodes, and the process of linking in all nodes at every level, is what gives skip lists their advantageous overall structure.
What follows is an implementation of skip lists in Python.
#a python implementation of SkipLists, using references as pointers
# copyright 2013 , as much gnu as compatible with wikibooks
# as taken from reading pugh's paper , and sedgewick
import random
min = 8
thresh = 6
SK_MAXV = 16
class SkNode:
def __init__(self, x, v):
self.ht = SK_MAXV
        for i in xrange(1, SK_MAXV):
            if random.randint(0, 10) < 5:   # coin flip: stop growing with probability about 1/2
                self.ht = i
                break
self.next = [None] * self.ht
self.v = v
self.x = x
def increase_ht(self, h):
self.next.extend( [None] * (h - self.ht))
self.ht = h
class SkipList:
def __init__(self ):
self.head = None
self.level = 0
def insert(self, x, v):
n = SkNode(x, v)
if self.head is None:
self.head = n
        if n.ht > self.head.ht:
            self.head.increase_ht(n.ht)       # grow the head so it can be linked at every level
        if x < self.head.x:
            # the key is less than the head's key, replace the head
            if n.ht < self.head.ht:
                n.increase_ht(self.head.ht)   # the new head must be at least as tall as the old one
            for j in xrange(0, n.ht):
                n.next[j] = self.head
            self.head = n
            return
prev = self.head
#last holds the previous node for each level
last = [None]* self.head.ht
# starts at ht-1, scans to 0, stepping down ; this is "skipping" at higher j
# when j = 0, the lowest level, there is no skipping , and every next node is traversed.
# tall nodes are less frequently inserted, and links between taller nodes at higher j skip over more
# more frequent shorter nodes.
for j in xrange( self.head.ht-1, -1, -1):
# while there is a next node with smaller x than inserting x, go to next node
while not (prev.next[j] is None) and prev.next[j].x < x:
prev = prev.next[j]
#print "prev", prev
last[j] = prev #record the previous node for this level which is points to node with higher x
#weave in the node
#only change pointers for the levels of the inserted node
for j in xrange ( 0, n.ht):
tmp = last[j].next[j]
last[j].next[j] = n
n.next[j] =tmp
def find(self, x):
c = self.find_node(x)
if c is None or c.x <> x:
return None
return c.x
def find_node_and_prev(self, x):
if self.head is None:
return None
c = self.head
        prev = [self.head] * self.head.ht     # one "previous node" slot per level
for i in xrange(self.head.ht - 1, -1, -1):
while c.x < x and not c.next[i] is None and c.next[i].x <= x: # must be <= otherwise won't make c.x = x
prev[i] = c
c = c.next[i]
#print c.x, x
if c.x >= x:
return (c, prev)
return (None, None)
def find_node(self, x):
return self.find_node_and_prev(x)[0]
def delete(self, x):
c, prev = self.find_node_and_prev(x)
if c is None:
return False
        for i in xrange(0, len(c.next)):
            if prev[i].next[i] is c:
                prev[i].next[i] = c.next[i]   # unlink c at this level
return True
# efficient subranges
def find_range_nodes( self, x1, x2):
c1 = self.find_node(x1)
c2 = self.find_node(x2)
l = []
        while c1 is not None and c1 is not c2:
            l.append(c1)
            c1 = c1.next[0]
        if c2 is not None:
            l.append(c2)
        return l
def find_range_keys( self, x1, x2):
return [ n.x for n in self.find_range_nodes(x1,x2) ]
def find_range_values( self, x1, x2):
return [ n.v for n in self.find_range_nodes(x1,x2) ]
if __name__ == "__main__":
sk = SkipList()
for i in xrange(0,100000):
#x = random.randint(0,1000000)
sk.insert(i, i * 10 )
for i in xrange(0,100000):
print i, sk.find(i)
print sk.find_range_keys(0,100001)
print sk.find_range_values(75500, 75528)
Role of Randomness
The idea of making higher nodes geometrically randomly less common, means there are less keys to compare with the higher the level of comparison, and since these are randomly selected, this should
get rid of problems of degenerate input that makes it necessary to do tree balancing in tree algorithms. Since the higher level list have more widely separated elements, but the search algorithm
moves down a level after each search terminates at a level, the higher levels help "skip" over the need to search earlier elements on lower lists. Because there are multiple levels of skipping, it
becomes less likely that a meagre skip at a higher level won't be compensated by better skips at lower levels, and Pugh claims O(logN) performance overall.
Conceptually , is it easier to understand than balancing trees and hence easier to implement ? The development of ideas from binary trees, balanced binary trees, 2-3 trees, red-black trees, and
B-trees make a stronger conceptual network but is progressive in development, so arguably, once red-black trees are understood, they have more conceptual context to aid memory , or refresh of memory.
concurrent access application
Apart from using randomization to enhance a basic memory structure of linked lists, skip lists can also be extended as a global data structure used in a multiprocessor application. See supplementary
topic at the end of the chapter.
Idea for an exercise
Replace the Linux completely fair scheduler red-black tree implementation with a skip list , and see how your brand of Linux runs after recompiling.
A treap is a two-keyed binary tree that uses a second, randomly generated key and the previously discussed tree operation of parent-child rotation to randomly rotate the tree so that, overall, a
balanced tree is produced. Recall that binary search trees work by having all nodes in the left subtree smaller than a given node, and all nodes in a right subtree greater. Also recall that node rotation does
not break this order (some people call it an invariant), but changes the relationship of parent and child, so that if the parent was smaller than its right child, the parent becomes the left
child of the formerly right child. The idea of a tree-heap, or treap, is that a binary heap relationship is maintained between parents and children, that is, a parent node has higher priority than its
children, which is not the same as the left-right order of keys in a binary tree. Hence a recently inserted leaf node which happens to have a high random priority can be
rotated up until it has no parent with a lower priority. See the preamble to skip lists about red-black trees for the details of left rotation.
A treap is an alternative to both red-black trees and skip lists as a self-balancing sorted storage structure.
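As a minimal illustrative sketch (not taken from the text above), treap insertion can be written as an ordinary binary-search-tree insertion followed by rotations that restore the heap property on the random priorities:

import random

class TreapNode(object):
    def __init__(self, key):
        self.key = key
        self.priority = random.random()   # the second, randomly generated key
        self.left = None
        self.right = None

def rotate_right(node):
    l = node.left
    node.left, l.right = l.right, node
    return l

def rotate_left(node):
    r = node.right
    node.right, r.left = r.left, node
    return r

def insert(node, key):
    if node is None:
        return TreapNode(key)
    if key < node.key:
        node.left = insert(node.left, key)
        if node.left.priority > node.priority:    # heap property violated
            node = rotate_right(node)             # rotate the child up
    else:
        node.right = insert(node.right, key)
        if node.right.priority > node.priority:
            node = rotate_left(node)
    return node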
[TODO: Deterministic algorithms for Quicksort exist that perform as well as quicksort in the average case and are guaranteed to perform at least that well in all cases. Best of all, no randomization
is needed. Also in the discussion should be some perspective on using randomization: some randomized algorithms give you better confidence probabilities than the actual hardware itself! (e.g.
sunspots can randomly flip bits in hardware, causing failure, which is a risk we take quite often)]
[Main idea: Look at all blocks of 5 elements, and pick the median (O(1) to pick), put all medians into an array (O(n)), recursively pick the medians of that array, repeat until you have < 5 elements
in the array. This recursive median constructing of every five elements takes time T(n)=T(n/5) + O(n), which by the master theorem is O(n). Thus, in O(n) we can find the right pivot. Need to show
that this pivot is sufficiently good so that we're still O(n log n) no matter what the input is. This version of quicksort doesn't need rand, and it never performs poorly. Still need to show that
element picked out is sufficiently good for a pivot.]
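A rough sketch of the pivot-selection idea described in the note above (the median of the medians of groups of five); this is illustrative only, not an optimised implementation:

def median_of_medians(a):
    # base case: few enough elements to take the median directly
    if len(a) <= 5:
        return sorted(a)[len(a) // 2]
    # medians of each block of five elements
    medians = [sorted(a[i:i + 5])[len(a[i:i + 5]) // 2] for i in range(0, len(a), 5)]
    # recursively take the median of the medians; T(n) = T(n/5) + O(n)
    return median_of_medians(medians)

print(median_of_medians(list(range(100))))   # the result is guaranteed to avoid the extreme ends of the data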
1. Write a find-min function and run it on several different inputs to demonstrate its correctness.
Supplementary Topic: skip lists and multiprocessor algorithms
Multiprocessor hardware provides CAS (compare-and-set) or CMPXCHG (compare-and-exchange) (Intel manual 253666.pdf, p. 3-188) atomic operations: an expected value is loaded into the accumulator
register and compared to a target memory location's contents; if they are the same, the source operand's value is stored into the target location and the zero flag is set. Otherwise,
if they differ, the target memory's contents are returned in the accumulator and the zero flag is unset, signifying, for instance, lock contention. In the Intel architecture, a LOCK prefix is
issued before CMPXCHG, which, for the next instruction, either locks the cache line from concurrent access if the memory location is being cached, or locks the shared memory location if it is not in the cache.
CMPXCHG can be used to implement locking, where spinlocks, e.g. retrying until the zero flag is set, are the simplest design.
Lockless design increases efficiency by avoiding spinning while waiting for a lock.
The Java standard library has an implementation of non-blocking concurrent skip lists, based on a paper titled "A Pragmatic Implementation of Non-Blocking Linked-Lists".
The skip list implementation is an extension of the lock-free singly-linked list, a description of which follows:
The insert operation is: X -> Y; to insert N, set N -> Y, then X -> N; the expected result is X -> N -> Y.
A race condition arises if M is being inserted between X and Y and M completes first, then N completes, leaving the situation X -> N -> Y <- M:
M is not in the list. The CAS operation avoids this, because a copy of -> Y is checked against the current value of X -> before updating X ->.
If N gets to update X -> first, then when M tries to update X ->, its copy of X -> Y (which it took before doing M -> Y) does not match X -> N, so the CAS returns with the zero flag unset. The process that
tried to insert M can then retry the insertion after X, but now the CAS checks that -> N is X's next pointer, so after the retry X -> M -> N -> Y, and neither insertion is lost.
If M updates X -> first, N's copy of X -> Y does not match X -> M, so the CAS will fail here too, and the above retry by the process inserting N gives the serialized result X -> N -> M -> Y.
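As a rough sketch (not taken from the cited paper), the retry loop for such a lock-free insert might look like the following, assuming a hypothetical search(head, key) helper that returns the adjacent unmarked nodes around the key and a hypothetical compare_and_swap(node, expected, new) primitive that atomically replaces node.next only if it still equals expected:

def lockfree_insert(head, new_node, key):
    while True:
        prev, succ = search(head, key)        # assumed helper: adjacent unmarked nodes around key
        new_node.next = succ
        # succeeds only if prev.next is still succ, i.e. no concurrent insert or delete won the race
        if compare_and_swap(prev, succ, new_node):
            return
        # otherwise another thread changed prev.next first; retry from the start of the list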
The delete operation depends on a separate 'logical' deletion step, before 'physical' deletion.
'Logical' deletion involves a CAS change of the next pointer into a 'marked' pointer. The Java implementation instead atomically inserts a proxy marker node between the deleted node and its next node.
This prevents future insertions from inserting after a node whose next pointer is 'marked', making the latter node 'logically' deleted.
The insert operation relies on another function, search, returning two node pointers that are unmarked at the time of the invocation: the first points to a node whose next pointer is equal to the second.
The first node is the node before the insertion point.
The insert CAS operation checks that the current next pointer of the first node, corresponds to the unmarked reference of the second, so will fail 'logically' if the first node's next pointer has
become marked after the call to the search function above, because the first node has been concurrently logically deleted.
This meets the aim of preventing an insertion from occurring concurrently after a node has been deleted.
If the insert operation fails the CAS of the previous node's next pointer, the search for the insertion point starts from the start of the entire list again, since a new unmarked previous node needs
to be found, and there are no previous node pointers as the list nodes are singly-linked.
The delete operation outlined above also relies on the search operation returning two unmarked nodes, and on two CAS operations in delete: one for logical deletion, marking the second node's next
pointer, and the other for physical deletion, making the first node's next pointer point to the second node's unmarked next pointer.
The first CAS of delete happens only after a check that the copy of the original second node's next pointer is unmarked, and it ensures that only one concurrent delete succeeds: the one which reads the second
node's current next pointer as being unmarked as well.
The second CAS checks that the previous node hasn't been logically deleted, which would make its next pointer differ from the unmarked pointer to the current second node returned by the search function;
so only an active previous node's next pointer is 'physically' updated to a copy of the original unmarked next pointer of the node being deleted (whose next pointer is already marked by the first CAS).
If the second CAS fails, then the previous node has been logically deleted and its next pointer is marked, and so is the current node's next pointer. A call to the search function again tidies things up:
in endeavouring to find the key of the current node and return adjacent unmarked previous and current pointers, it truncates strings of logically deleted nodes.
Lock-free programming issues
Starvation is possible, as failed inserts have to restart from the front of the list. Wait-freedom is the stronger property that the algorithm keeps all threads safe from starvation.
The ABA problem also exists: the memory behind a pointer A is recycled and re-used at a point where another thread that read A earlier is doing a CAS to check that A has not changed;
the address is the same and is unmarked, but the contents of A have changed.
|
{"url":"https://en.m.wikibooks.org/wiki/Algorithms/Randomization","timestamp":"2014-04-18T11:02:03Z","content_type":null,"content_length":"76151","record_id":"<urn:uuid:80617af0-07b1-445a-ac34-17592432976c>","cc-path":"CC-MAIN-2014-15/segments/1397609533308.11/warc/CC-MAIN-20140416005213-00280-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Sequence of smooth functions converging to sgn(x)
I'm looking for a sequence of smooth functions $f_i(x)$ converging to Sign$(x)$, each of which additionally has the following property:
$$f_i(x_1+x_2) = g_i(x_1, f_i(x_2))$$
for some $g_i$
Also, is there a name for a function that satisfies just the decomposability constraint above?
fa.functional-analysis sequences-and-series tag-removed
What type of convergence? – T-' Feb 17 '12 at 19:27
1 Answer
Let $f_i$ be any sequence of strictly increasing smooth functions that converge to the Sign function, such as $f_i(x) = \tanh(ix)$, and let $g_i$ be defined by $g_i(x,y) = f_i(x+f_i^
{-1}(y))$ for $y$s in the range of $f_i$ (e.g., $-1 < y < 1$) and however you like elsewhere (since the problem posits no smoothness or even any continuity conditions on $g$). This
takes care of the specific problem.
In general, any continuous function satisfying the "decomposability constraint" is either constant or strictly monotonic. Indeed, suppose $f$ is not strictly monotonic, so that
(courtesy of the Intermediate Value Theorem) $f(a)=f(b)$ for some $a \ne b$. Then
$$f(x+a)=g(x,f(a))=g(x,f(b))=f(x+b),$$
from which it follows (by substituting $x-a$ for $x$ in the displayed equation) that $f$ is periodic with period $b-a$. Since $f$ is continuous, there are values $c$ and $d$ between
$a$ and $b$ with $f(c)=f(d)$ and $|c-d|$ arbitrarily small (think points to either side of where $f$ takes its maximum value). Hence $f$ is periodic with arbitrarily small period --
which is to say, constant.
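As a quick numerical sanity check of the accepted construction (an added sketch, not part of the original exchange), with $f_i(x)=\tanh(ix)$ and $g_i(x,y)=f_i(x+f_i^{-1}(y))$:

import math

def f(i, x):
    return math.tanh(i * x)

def g(i, x, y):
    # f_i^{-1}(y) = atanh(y) / i, valid for -1 < y < 1
    return f(i, x + math.atanh(y) / i)

i, x1, x2 = 3, 0.7, -0.2
print(f(i, x1 + x2), g(i, x1, f(i, x2)))   # the two printed values agree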
Hmmm you're right. The constraint as written doesn't capture what I was trying to capture, which is, roughly, that computing this function over all possible left-to-right paths
through a matrix decomposes in a way permitting a dynamic programming solution. I'll have to think about this and post a new question when I can formulate it clearly. Thanks for your
thoughts. – Alex Flint Feb 20 '12 at 14:21
|
{"url":"http://mathoverflow.net/questions/88752/sequence-of-smooth-functions-converging-to-sgnx?sort=oldest","timestamp":"2014-04-18T08:05:41Z","content_type":null,"content_length":"54375","record_id":"<urn:uuid:38922b06-f256-4f69-b2ec-80b8fe2c7e7a>","cc-path":"CC-MAIN-2014-15/segments/1398223203422.8/warc/CC-MAIN-20140423032003-00163-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Posts from January 26, 2011 on The Unapologetic Mathematician
The next thing we need to take note of it the idea of an “inner corner” and an “outer corner” of a Ferrers diagram, and thus of a partition.
An inner corner of a Ferrers diagram is a cell that, if it’s removed, the rest of the diagram is still the Ferrers diagram of a partition. It must be the rightmost cell in its row, and the bottommost
cell in its column. Similarly, an outer corner is one that, if it’s added to the diagram, the result is still the Ferrers diagram of a partition. This is a little more subtle: it must be just to the
right of the end of a row, and just below the bottom of a column.
As an example, consider the partition $(5,4,4,2)$, with Ferrers diagram
We highlight the inner corners by shrinking them, and mark the outer corners with circles:
That is, there are three ways we could remove a cell and still have the Ferrers diagram of a partition:
And there are four ways that we could add a cell and still have the Ferrers diagram of a partition:
If the first partition is $\lambda$, we write a generic partition that comes from removing a single inner corner by $\lambda^-$. Similarly, we write a generic partition that comes from adding a
single outer corner by $\lambda^+$. In our case, if $\lambda=(5,4,4,2)$, then the three possible $\lambda^-$ partitions are $(4,4,4,2)$, $(5,4,3,2)$, and $(5,4,4,1)$, while the four possible $\lambda
^+$ partitions are $(6,4,4,2)$, $(5,5,4,2)$, $(5,4,4,3)$, and $(5,4,4,2,1)$.
Now, as a quick use of this concept, think about how to fill a Ferrers diagram to make a standard Young tableau. It should be clear that since $n$ is the largest entry in the tableau, it must be in
the rightmost cell of its row and the bottommost cell of its column in order for the tableau to be standard. Thus $n$ must occur in an inner corner. This means that we can describe any standard
tableau by picking which inner corner contains $n$, removing that corner, and filling the rest with a standard tableau with $n-1$ entries. Thus, the number of standard $\lambda$-tableaux is the sum
of all the standard $\lambda^-$-tableaux:
$\displaystyle f^\lambda=\sum\limits_{\lambda^-}f^{\lambda^-}$
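The recursion can be checked mechanically; here is a small Python sketch (mine, not part of the original post) that counts standard tableaux by stripping inner corners, with a little memoization so repeated subshapes are not recounted:

def inner_corners(shape):
    # rows i from which a cell can be removed while leaving a partition
    return [i for i in range(len(shape))
            if i == len(shape) - 1 or shape[i] > shape[i + 1]]

_memo = {}
def num_standard_tableaux(shape):
    shape = tuple(x for x in shape if x > 0)
    if sum(shape) <= 1:
        return 1
    if shape in _memo:
        return _memo[shape]
    total = 0
    for i in inner_corners(shape):
        smaller = list(shape)
        smaller[i] -= 1
        total += num_standard_tableaux(tuple(smaller))
    _memo[shape] = total
    return total

print(num_standard_tableaux((2, 1)))        # 2, matching the S^(2,1) example below
print(num_standard_tableaux((5, 4, 4, 2)))  # 81081, the count for the example partition above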
Now that we have a canonical basis for our Specht modules composed of standard polytabloids it gives us a matrix representation of $S_n$ for each $\lambda\vdash n$. We really only need to come up
with matrices for the swaps $(k\,k+1)$, for $1\leq k\leq n-1$, since these generate the whole symmetric group.
When we calculate the action of the swap on a polytabloid $e_t$ associated with a standard Young tableau $t$, there are three possibilities. Either $k$ and $k+1$ are in the same column of $t$,
they’re in the same row of $t$, or they’re not in the same row or column of $t$.
The first case is easy. If $k$ and $k+1$ are in the same column of $t$, then $(k\,k+1)\in C_t$, and thus $(k\,k+1)e_t=-e_t$.
The third case isn’t much harder, although it’s subtler. I say that if $k$ and $k+1$ share neither a row nor a column, then $(k\,k+1)t$ is again standard. Indeed, swapping the two can’t introduce
either a row or a column descent. The entries to the left of and above $k+1$ are all less than $k+1$, and none of them are $k$, so they’re all less than $k$ as well. Similarly, all the entries to the
right of and below $k$ are greater than $k$, and none of them are $k+1$, so they’re all greater than $k+1$ as well.
Where things get complicated is when $k$ and $k+1$ share a row. But then they have to be next to each other, and the swap introduces a row descent between them. We can then use our Garnir elements to
write this polytabloid in terms of standard ones.
Let’s work this out explicitly for the Specht module $S^{(2,1)}$, which should give us our well-known two-dimensional representation of $S_3$. The basis consists of the polytabloids associated to
these two tableaux:
We need to come up with matrices for the two swaps $(1\,2)$ and $(2\,3)$. And the second one is easy: it just swaps these two tableaux! Thus we get the matrix
$\displaystyle X^{(2,1)}\left((2\,3)\right)=\begin{pmatrix}0&1\\1&0\end{pmatrix}$
The action of $(1\,2)$ on the second standard tableau is similarly easy. Since $1$ and $2$ are in the same column, the swap acts by multiplying by $-1$. Thus we can write down a column of the matrix
$\displaystyle X^{(2,1)}\left((1\,2)\right)=\begin{pmatrix}?&0\\?&-1\end{pmatrix}$
As for the action on the first tableau, the swap induces a row descent. We use a Garnir element to straighten it out. With the same abuse of notation as last time, we write
the result as the first standard polytabloid minus the second, and so we can fill in the other column:
$\displaystyle X^{(2,1)}\left((1\,2)\right)=\begin{pmatrix}1&0\\-1&-1\end{pmatrix}$
From here we can write all the other matrices in the representation as products of these two.
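As a quick sanity check (again mine, not from the post), the two matrices really do define a representation of $S_3$: each generator squares to the identity and their product has order three.

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)] for i in range(2)]

X12 = [[1, 0], [-1, -1]]   # X^{(2,1)}((1 2))
X23 = [[0, 1], [1, 0]]     # X^{(2,1)}((2 3))
I = [[1, 0], [0, 1]]

P = matmul(X12, X23)
print(matmul(X12, X12) == I)           # True
print(matmul(X23, X23) == I)           # True
print(matmul(P, matmul(P, P)) == I)    # True: (1 2)(2 3) is a 3-cycle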
|
{"url":"http://unapologetic.wordpress.com/2011/01/26/","timestamp":"2014-04-19T06:58:09Z","content_type":null,"content_length":"61870","record_id":"<urn:uuid:57de4a44-3483-48fd-8bd8-4fce02f5703b>","cc-path":"CC-MAIN-2014-15/segments/1398223202774.3/warc/CC-MAIN-20140423032002-00436-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Equiangular Spiral
A equiangular spiral and its secants.
Mathematica Notebook for This Page.
The investigation of spirals began at least with the ancient Greeks. The famous Equiangular Spiral was discovered by Rene Descartes, its properties of self-reproduction by Jacob Bernoulli (1654 〜
1705) (aka James or Jacques) who requested that the curve be engraved upon his tomb with the phrase “Eadem mutata resurgo” (“I shall arise the same, though changed.”) [Source: Robert C Yates (1952)]
The equiangular spiral was first considered in 1638 by Descartes, who started from the property s = a.r. Evangelista Torricelli, who died in 1647, worked on it independently and used for a definition
the fact that the radii are in geometric progression if the angles increase uniformly. From this he discovered the relation s = a.r; that is to say, he found the rectification of the curve. Jacob
Bernoulli, some fifty years later, found all the “reproductive” properties of the curve; and these almost mystic properties of the “wonderful” spiral made him wish to have the curve incised on his
tomb: Eadem mutata resurgo — “Though changed I rise unchanged”. [source: E H Lockwood (1961)]
Equiangular spiral describes a family of spirals of one parameter. It is defined as a curve that cuts all radial lines at a constant angle.
It is also called the logarithmic spiral, Bernoulli spiral, and logistique.
1. Let there be a spiral (that is, any curve r==f[θ] where f is a monotonically increasing function)
2. From any point P on the spiral, draw a line toward the center of the spiral. (this line is called the radial line)
3. If the angle formed by the radial line and the tangent for any point P is constant, the curve is a equiangular spiral.
A example of equiangular spiral with angle 80°.
A special case of equiangular spiral is the circle, where the constant angle is 90°.
Equiangular spirals with 40°, 50°, 60°, 70°, 80° and 85°. (left to right) Equiangular Spiral
Let α be the constant angle.
Polar: r == E^(θ * Cot[α])
Parametric: E^(t * Cot[α]) {Cos[t], Sin[t]}
Cartesian: x^2 + y^2 == E^(2 ArcTan[y/x] Cot[α])
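A small numerical check (my own sketch, not from this page) that the parametric form above meets every radius at the constant angle α:

import math

alpha = math.radians(80)
k = 1.0 / math.tan(alpha)               # Cot[α]

for t in (0.5, 2.0, 7.0):
    r = math.exp(k * t)
    x, y = r * math.cos(t), r * math.sin(t)
    # velocity vector of the parametric curve
    dx = math.exp(k * t) * (k * math.cos(t) - math.sin(t))
    dy = math.exp(k * t) * (k * math.sin(t) + math.cos(t))
    cos_angle = (x * dx + y * dy) / (math.hypot(x, y) * math.hypot(dx, dy))
    print(math.degrees(math.acos(cos_angle)))   # prints 80.0 (up to rounding) for every t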
Point Construction and Geometric Sequence
The lengths of the segments of any radial ray cut by the curve form a geometric sequence, with a multiplier of E^(2 π Cot[α]).
The lengths of the segments of the curve cut by equally spaced radial rays also form a geometric sequence.
The curve cut by radial rays. The lengths of any green ray's segments form a geometric sequence. The lengths of the red segments also form a geometric sequence. In the figure, the dots are points on an 85°
equiangular spiral.
The catacaustic of an equiangular spiral with light source at its center is an equal spiral.
Proof: Let O be the center of the curve. Let α be the curve's constant angle. Let Q be the reflection of O through the tangent normal of a point P on the curve. Consider Triangle[O,P,Q]. For any
point P, Length[Segment[O,P]]==Length[Segment[P,Q]] and Angle[O,P,Q] is constant. (Angle[O,P,Q] is constant because of the curve's constant-angle definition.) Therefore, by a similar-triangles argument,
for any point P, Length[Segment[O,Q]]==Length[Segment[O,P]]*s for some constant s. Since scaling and rotation around its center do not change the curve, the locus of Q is an equiangular
spiral with constant angle α, and Angle[O,Q,P] == α. Line[P,Q] is the tangent at Q.
Equiangular Spiral Caustic
The evolute of an equiangular spiral is the same spiral rotated.
The involute of an equiangular spiral is the same spiral rotated.
Left: Tangent circles of an 80° equiangular spiral. The white dots are the centers of the tangent circles, the lines are the radii. Right: Lines are the tangent normals, forming the evolute curve by
envelope.
The radial of an equiangular spiral is the same spiral scaled. The figure on the left shows a 70° equiangular spiral and its radial. The figure on the right shows its involute, which is another equiangular spiral.
The inversion of an equiangular spiral with respect to its center is an equal spiral.
The pedal of an equiangular spiral with respect to its center is an equal spiral.
Pedal of an equiangular spiral. The lines from the center to the red dots are perpendicular to the tangents (blue lines). The blue curve is a 60° equiangular spiral. The red dots form its pedal.
Pursuit Curve
Pursuit curves are the trace of an object chasing another. Suppose there are n bugs, each at a corner of an n-sided regular polygon. Each bug crawls towards its next neighbor with uniform speed. The
traces of these bugs are equiangular spirals of (n-2)/n * π/2 radians (half the angle of the polygon's corner).
Left: the trace of four bugs, resulting in four equiangular spirals of 45°. Above right: six objects forming a chasing chain. Each line is the direction of movement and is tangent to the
equiangular spirals so formed.
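A tiny simulation of the four-bug pursuit (my own sketch, not from this page): each bug repeatedly steps a small distance straight towards the next bug around the square.

import math

n, steps, dt = 4, 4000, 0.001
bugs = [(math.cos(2 * math.pi * i / n), math.sin(2 * math.pi * i / n)) for i in range(n)]

path = [bugs[0]]                          # record one bug's trace
for _ in range(steps):
    new = []
    for i in range(n):
        x, y = bugs[i]
        tx, ty = bugs[(i + 1) % n]        # the bug being chased
        d = math.hypot(tx - x, ty - y)
        if d > 0:
            x, y = x + dt * (tx - x) / d, y + dt * (ty - y) / d
        new.append((x, y))
    bugs = new
    path.append(bugs[0])
print(len(path), path[-1])                # path traces an approximate 45-degree equiangular spiral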
Spiral in nature
Spirals are the basis of many natural growth patterns.
Seashells have the geometry of equiangular spiral. See: Mathematics of Seashell Shapes.
A cauliflower (Romanesco broccoli) exhibiting equiangular spiral and fractal geometry. (Photo by Dror Bar-Natan. Source)
See Also
Related Web Sites
See: Websites on Plane Curves, Printed References On Plane Curves.
Robert Yates: Curves and Their Properties.
Khristo Boyadzhiev, Spirals and Conchospirals in the Flight of Insects. The College Mathematics Journal, Jan 1999. Khristo_Boyadzhiev_CMJ-99.pdf
The MacTutor History of Mathematics archive.
|
{"url":"http://xahlee.info/SpecialPlaneCurves_dir/EquiangularSpiral_dir/equiangularSpiral.html","timestamp":"2014-04-19T12:25:39Z","content_type":null,"content_length":"16822","record_id":"<urn:uuid:876322af-73a2-4e2f-95d4-64522b4b97c6>","cc-path":"CC-MAIN-2014-15/segments/1397609537186.46/warc/CC-MAIN-20140416005217-00285-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Length Three FIR
The simplest nondegenerate example of the loop filters of §6.8 is the three-tap FIR case, a symmetric filter whose frequency response is real and cosine-shaped.
If the dc gain is normalized to unity, the single remaining coefficient acts as a damping control. As damping is increased, the duration of free vibration is reduced at all nonzero frequencies, and the decay of higher frequencies is accelerated relative to lower frequencies, provided the coefficients stay in the range for which the response peaks at dc.
In this coefficient range, the string-loop amplitude response can be described as a ``raised cosine'' having a unit-magnitude peak at dc and minimum gain at half the sampling rate.
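As a generic illustration only (with assumed coefficient names b0 and b1, not necessarily the notation used in the source), a symmetric three-tap FIR b1, b0, b1 normalized to unit dc gain (b0 + 2*b1 = 1) has the raised-cosine amplitude response |b0 + 2*b1*cos(w)|, largest at dc and smallest at half the sampling rate:

import math

b1 = 0.1                        # assumed damping coefficient, 0 <= b1 <= 0.25
b0 = 1.0 - 2.0 * b1             # enforce unit gain at dc
for k in range(9):
    w = math.pi * k / 8.0       # normalised radian frequency; pi is half the sampling rate
    print(round(w, 3), abs(b0 + 2.0 * b1 * math.cos(w)))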
|
{"url":"https://ccrma.stanford.edu/~jos/pasp/Length_Three_FIR_Loop.html","timestamp":"2014-04-23T20:52:24Z","content_type":null,"content_length":"9905","record_id":"<urn:uuid:be32006d-af10-4aaf-9ccd-a91dbccbe615>","cc-path":"CC-MAIN-2014-15/segments/1398223203422.8/warc/CC-MAIN-20140423032003-00139-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Directory tex-archive/fonts/fourier-GUT
Fourier-GUTenberg distribution, Michel Bovani michel.bovani@wanadoo.fr
1 VERSION
2 LICENCE
3 INSTALLATION
4 CONFIGURATION
5 USAGE
1 VERSION
This version number is 1.4.1 (2005-1-30)
NEW in version 1.4.1:
* no more gap in long arrows symbols
* two new symbols: \decosix and \starredbullet
* Bernard Gaulle's Makefile is back.
NEW in version 1.4 (2005-1-1):
* The widespace option works again (was broken in 1.3)
* Bold versions of FML encoded fonts (essentially greek and latin letters).
Thanks to Timur Mukhamedjanov who designed a first version that was
already pretty good.
The \boldmath command and the bm package seem to be correctly supported.
* Variants of varpi and partialdiff symbols (closer to the cm design).
The commands are \varvarpi and \varpartialdiff.
* New amssymb-like symbols (\leftleftarrows, \rightrightarrows, \square,
\blacksquare, \curvearrowleft, \curvearrowright, \subsetneqq).
* Ornaments are now provided by the fourier-orns package, which is required
by fourier, but may be called separately.
* New ornaments symbols. The commands are \decothreeleft
\decothreeright \decofourleft \decofourright, \floweroneleft, \floweroneright,
\lefthand, \righthand, \bomb.
* No more gap between the square root and its overbar. Thanks to Hans Hagen
and Sebastian Sturm who found this (it was an italic correction problem).
* \rmdefault redefinition is now made before calling fontenc (in order to
allow fourier usage even if the Cork-encoded CM fonts aren't installed).
Thanks to Robin Fairbanks who pointed that out.
* Capital greek letters are now mathord (instead of mathalpha). Thanks
to Yvon Henel who reported that.
* Metrics improved in math... (I hope!). Thanks to all who reported
metric issues. I am doing my best to correct them, but it is a long way...
* The \widehat and \widetilde symbols are no longer intercepted by amssymb.
(Thanks to Peter Harmand for his help).
NEW in version 1.3 (2004-03-12).
* There is now an UNIX makefile. Many thanks to Bernard Gaulle
who made it.
* Bug corrected in the ornaments font (the PostScript name didn't match).
* There is a new symbol in the ornament font.
NEW in version 1.2 (LaTeX Companion 2) 2004-03-02
* Bug corrected in fourier.map. Thanks to Walter Schmidt for pointing this out.
* The titling fonts are now named putrd8a.pfb, futrd8r.tfm, futrd8t.tfm,
futrd8t.vf, according to the Karl Berry scheme. Thanks to Dr Peter Schenzel
for pointing this out.
* The commands \dblbrackleft and \dblbrackright are deprecated and replaced
by \llbracket and \rrbracket.
* It is now possible to call amsmath after fourier. Collaboration between the
two packages has been improved (many thanks to Walter Schmidt).
* There is a new option (widespace) which enlarges the default interword space
of Utopia
* There is a new ornament font with 15 new symbols.
* The \eurologo is now in the ornament font (was in a wrong place in the
TS1 encoding, shame on me). Thanks to Frank Mittelbach and Thierry Bouche
for their help.
* All character codes are now decimal numbers (no more clash with
(n)german.sty). Thanks to Walter Schmidt for pointing this out.
2 LICENCE
The licence of fourier-GUTenberg is the LPPL (LaTeX Project Public
Licence). All the files of this distribution remain in directories
whose name is "fourier".
fourier uses Utopia as its base font. Utopia has been gracefully donated by
Adobe, but its licence is more restrictive than the LPPL (it is no-charge and
freely distributable software, but it's *not* free software).
Four files are concerned by this licence:
And here is the licence of utopia
% The following notice accompanied the Utopia font:
% Permission to use, reproduce, display and distribute the listed
% typefaces is hereby granted, provided that the Adobe Copyright notice
% appears in all whole and partial copies of the software and that the
% following trademark symbol and attribution appear in all unmodified
% copies of the software:
% Copyright (c) 1989 Adobe Systems Incorporated
% Utopia (R)
% Utopia is a registered trademark of Adobe Systems Incorporated
% The Adobe typefaces (Type 1 font program, bitmaps and Adobe Font
% Metric files) donated are:
% Utopia Regular
% Utopia Italic
% Utopia Bold
% Utopia Bold Italic
Note that Adobe sells two packages of fonts (utopia = 6 fonts, and utopia
expert = 16 fonts) which are fully usable with fourier (see expert and
oldstyle options below).
3 - INSTALLATION
The texmf tree provides a standard TDS.
You have to install all the "fourier" directories of the fourier texmf tree
in one of your texmf trees.
If you don't already have the four Utopia fonts (texmf/fonts/type1/adobe/utopia/),
you have to install them too.
If you have a licence for the commercial utopia packages, you have to
rename the *.pfb files to suit the declarations in fourier-utopia-expert.map
(or to modify this file). Mac fonts should be converted to pfb format (with
t1unmac, for instance).
4 - CONFIGURATION
With updmap
% updmap --enable Map fourier.map
if you need to install the commercial packages
% updmap --enable Map fourier-utopia-expert.map
If you don't have updmap or don't want to use it
For dvips, add these lines in config.ps
p +fourier.map
p +fourier-utopia-expert.map
For pdf(la)tex add these lines in pdftex.cfg
map +fourier.map
map +fourier-utopia-expert.map
If you have problems when installing fourier, please tell me.
5 - USAGE
If you need more symbols ask me (I don't promise anything if you need a
complete alphabet!)
Usage : \usepackage[<options>]{fourier}
Don't call \usepackage[T1]{fontenc} or \usepackage{textcomp} as they are
already loaded by fourier.
The options are
* maths: sloped (default) and upright. With the upright option, greek lowercase
letters are upright and so are roman uppercase letters (a la french!).
\otheralpha, \otherbeta, etc. can be used to switch to the greek symbol
with the other slope.
* text :
-- poorman (default) the 4 standard fonts
-- expert: expert full set (if you have the fonts!) with lining digits
-- oldstyle: expert set with oldstyle text digits
-- fulloldstyle: expert set with oldstyle digits in text and in math.
With the last three options, the following commands are provided:
\sbseries \textsb semi-bold
\blackseries \textblack extra-bold
\titleshape \texttitle titling (caps only)
\lining and \oldstyle allow you to change the style of digits (use them in
a group).
amsmath compatibility: full (I hope).
amssymb :
No need to call it if you only need:
\mathbb (A..Z, 1, k)
If you need others amssymb symbols please, call amssymb *before* fourier
Other commands:
A slanted parallel symbol (french people like it!)
More integrals
A special QED symbol for a false proof (of course you need it, don't you ?)
XSwords symbols
$\xswordsup$ $\xswordsdown$
Name Size Date Notes
Makefile 12497 2005-01-29 05:31:00
README 7561 2005-01-30 12:10:00
fourier – Using Utopia fonts in LaTeX documents
Fourier-GUTenberg is a LaTeX typesetting system which uses Adobe Utopia as its standard base font. Fourier-GUTenberg provides all complementary typefaces needed to allow Utopia based TeX
typesetting, including an extensive mathematics set and several other symbols. The system is absolutely stand-alone: apart from Utopia and Fourier, no other typefaces are required.
The fourier fonts will also work with Adobe Utopia Expert fonts, which are only available for purchase. Utopia is a registered trademark of Adobe Systems Incorporated
Documentation The Fourier ornaments
Version 1.3
License The LaTeX Project Public License
Maintainer Michel Bovani
Contained in TeXLive as fourier
MiKTeX as fourier
Topics fonts for use in mathematics
See also utopia
|
{"url":"http://www.ctan.org/tex-archive/fonts/fourier-GUT","timestamp":"2014-04-18T23:47:57Z","content_type":null,"content_length":"15485","record_id":"<urn:uuid:16a9113d-4417-4839-bdba-c4651a8767f2>","cc-path":"CC-MAIN-2014-15/segments/1397609535535.6/warc/CC-MAIN-20140416005215-00072-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Why the Long Face? The Mechanics of Mandibular Symphysis Proportions in Crocodiles
Crocodilians exhibit a spectrum of rostral shape from long snouted (longirostrine), through to short snouted (brevirostrine) morphologies. The proportional length of the mandibular symphysis
correlates consistently with rostral shape, forming as much as 50% of the mandible’s length in longirostrine forms, but 10% in brevirostrine crocodilians. Here we analyse the structural consequences
of an elongate mandibular symphysis in relation to feeding behaviours.
Methods/Principal Findings
Simple beam and high resolution Finite Element (FE) models of seven species of crocodile were analysed under loads simulating biting, shaking and twisting. Using beam theory, we statistically
compared multiple hypotheses of which morphological variables should control the biomechanical response. Brevi- and mesorostrine morphologies were found to consistently outperform longirostrine types
when subject to equivalent biting, shaking and twisting loads. The best predictors of performance for biting and twisting loads in FE models were overall length and symphyseal length respectively;
for shaking loads symphyseal length and a multivariate measurement of shape (PC1– which is strongly but not exclusively correlated with symphyseal length) were equally good predictors. Linear
measurements were better predictors than multivariate measurements of shape in biting and twisting loads. For both biting and shaking loads but not for twisting, simple beam models agree with best
performance predictors in FE models.
Combining beam and FE modelling allows a priori hypotheses about the importance of morphological traits on biomechanics to be statistically tested. Short mandibular symphyses perform well under loads
used for feeding upon large prey, but elongate symphyses incur high strains under equivalent loads, underlining the structural constraints to prey size in the longirostrine morphotype. The
biomechanics of the crocodilian mandible are largely consistent with beam theory and can be predicted from simple morphological measurements, suggesting that crocodilians are a useful model for
investigating the palaeobiomechanics of other aquatic tetrapods.
Citation: Walmsley CW, Smits PD, Quayle MR, McCurry MR, Richards HS, et al. (2013) Why the Long Face? The Mechanics of Mandibular Symphysis Proportions in Crocodiles. PLoS ONE 8(1): e53873.
Editor: Christof Markus Aegerter, University of Zurich, Switzerland
Received: October 5, 2012; Accepted: December 4, 2012; Published: January 16, 2013
Copyright: © 2013 Walmsley et al. This is an open-access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and
reproduction in any medium, provided the original author and source are credited.
Funding: Work was funded by Australian Research Council Discovery Project grants DP0986471 (to CRM) and DP0987985 (to SW), Monash University internal funding (to CRM), and a Newcastle University
summer vacation scholarship (to CWW). The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.
Competing interests: The authors have declared that no competing interests exist.
Large aquatic predators operate in a physical environment that has driven remarkable morphological convergence, notably the independent evolution of a tunniform body form in ichthyosaurs (reptiles),
lamnids (sharks), thunnids (bony fish) and odontocetes (mammals) [1], [2], [3], [4], [5]. In addition to swimming, feeding behaviour operates under strong constraints based on the fundamental fluid
dynamics of water that apply to ram, filter, and suction feeders [6]. For ram feeding, a spectrum of skull morphology runs from elongate, narrow ‘pincer’ jaws (‘longirostrine’) to shorter, more
robust jaws (‘brevirostrine’). This spectrum of jaw morphologies exists in a wide range of secondarily aquatic amniotes, including crocodilians, ichthyosaurs, plesiosaurs, and odontocetes (Figure 1).
Figure 1. Spectrum of rostral proportions in marine tetrapods.
Dorsal view of various skulls, showing the spectrum of rostral proportions in (from top) crocodilians, odontocetes, plesiosaurs, ichthyosaurs and thalattosuchians. Skulls are resized to equivalent
width at the back of the skull and for each group longirostrine taxa are on the right, brevirostrine on the left. Taxa shown are Caiman latirostris (A), Gavialis gangeticus (B), Feresa attenuata (C),
Platanista gangetica (D), Leptocleidus capensis (E), Dolichorhynchops osborni (F), Temnodontosaurus eurycephalus (G), Ophthalmosaurus icenicus (H), Suchodus brachyrhynchus (I), Steneosaurus
gracilirostris (J). Scale bars = 10 cm. Based on photos by CRM of specimen BMNH 86.10.4.2 (A), BMNH 1935.6.4.1 (B), BMNH 1897.6.30.1 (C) and USNM 504917 (D), after Cruichshank [60] (E), after O’Keefe
[61] (F) based on fossil specimen BMNH R1157 illustrated by Owen [62] (G), after Motani [63] (H), after Andrews [64](I), after Mueller-Töwe [65](J).
Among the 24 extant species of crocodilians, head shape ranges from the hyper-long snouted animals such as the gharial (Gavialis gangeticus) and false gharial (Tomistoma schlegelii), through to
broad-snouted brevirostrine taxa such as the spectacled caiman (Caiman crocodilus) and dwarf crocodile (Osteolaemus tetraspis) (Figure 2). Rostral shape correlates consistently with feeding
behaviour; long slender-snouted crocodilians tend to concentrate on small, agile, aquatic prey (fish), whilst shorter and more robust-snouted animals often take much larger prey [5], [7], [8]. The
Gharial (Gavialis gangeticus) is the longest snouted form and is described as a specialist fish eater [7], [9], whilst the saltwater (Crocodylus porosus) and Nile (C. niloticus) crocodiles have
shorter, more robust snouts and are capable of taking terrestrial prey much larger than themselves [10]. This relationship between head shape and diet has been considered reliable enough to serve as
a basis to infer diet in fossil species of marine reptiles and mammals [2], [5], [11].
Figure 2. Range of skull shape in crocodilians.
Specimens are scaled to approximately the same width and arranged from most longirostrine to most brevirostrine. Left: cranium and mandible in lateral view, Centre left: dorsal view of mandible,
Centre right: Cranium in ventral view, Right: species name and specimen number.
Longirostrine aquatic predators consistently have an elongated mandibular symphysis, which in longirostrine crocodilians such as Gavialis and Tomistoma makes up half the length of the lower jaw. In
general, longirostrine taxa have proportionally longer mandibular symphyses than do mesorostrine or brevirostrine relatives (Figures 2 and 3). As the longirostrine condition correlates with a
preference for small agile prey (e.g. fish), an elongate symphysis can therefore act as a proxy for feeding ecology in some extinct groups [11]. The presence of elongated mandibular symphyses in
longirostrine species in many unrelated groups suggests possible physical constraints on prey capture. The spectrum of jaw morphology in crocodilians has been interpreted as the functional trade-off
between hydrodynamic agility and strength, with longirostrine skulls reflecting a low drag-high speed morphotype suited for capturing small agile prey, and meso- to brevirostrine skulls being low
speed-high strength jaws better suited for killing and processing slower but larger or harder foods [5], [7], [8], [12]. In longirostrine forms, the elongated jaws provide extra reach and higher tip
velocity, factors which likely contribute to success rates of capturing small agile prey. However, the rapid sideways sweeping of the jaws during feeding incurs high drag, a cost that increases
quadratically with snout length for a given profile [8], and the reduced height and width of the jaws in longirostrine taxa may serve to minimise pressure and skin drag respectively, especially in
the anterior portion of the jaw. Additionally, the reduction of rostral width and height in longirostrine crocodilians may reduce angular momentum and mass moment of inertia () of the snout,
decreasing the energy required to accelerate the jaws towards prey (which also increases the acceleration possible for a given muscular effort); it may also be a means of minimising drag incurred by
the jaw during rapid adduction. Reduced distal mass is especially important for rapid adduction or sideways movements of longirostrine snout, because increases with the square of the distance of a
unit of mass from the centre of rotation. In the upper jaw, the anterior snout has an almost tubular section and this is mirrored by the symphyseal part of the lower jaw in longirostrine
crocodilians; the formation of an elongate symphysis seems to be a configuration allowing a minimal diameter of the mandible, and can be explained by hydrodynamic and/or energetic criteria.
Figure 3. Mandibular symphysis length vs mandible length in extant crocodilians.
X axis plots the ratio of mandibular length to width, giving a size-controlled proxy for the spectrum of brevisrostral to longirostral morphology. Y axis is the proportion of symphyseal length to
mandibular length. Values shown are natural logarithms. (A), data for 82 specimens of crocodilian, data measured from photographs of museum skulls; regression line is based upon mean values for each
species. (B), data points as for (A), with data points ordered by width in each species and connected by lines. In effect, this plot shows the allometric trajectory of ML/W for each species, with the
smallest animals on the right and largest on the left of each species plot; i.e. as animals increase in size, head width increases as a proportion of head length. Within each species, the symphyseal
length (as a proportion of mandible length) remains consistent. (C), Regression lines for alligatorids, non-tomistomine crocodylids, Gavialis, and Tomistoma.
If an elongate mandibular symphysis increases streamlining/energy efficiency, why is it not a consistent feature of all crocodilian mandibles? Why do forms with shorter rostra lack a long symphysis?
While the longirostrine form is streamlined and is efficient for capturing small, agile aquatic prey, it is not strong or well suited to the loads that result from feeding on large prey [8]. In
crocodilians that feed on large prey, the snout is shorter, broader, and usually taller in section than longirostrine forms; this shape is better for resisting high loads during feeding and is the
defining characteristic of meso- and brevirostrine taxa [8]. Although the structural consequences of this morphology have been explored for the upper jaw, those for the lower jaw have received less
attention [7], [8]. If an elongate symphysis is the most effective morphology for reducing the drag incurred and/or increasing the rate of acceleration of the anterior part of the mandible during a
rapid lateral sweep, then the absence of an elongate symphysis in meso- and brevirostrine taxa may be enforced by structural mechanics; i.e. an elongate symphysis decreases the strength of the
Theoretical Framework
Biomechanics of processing large prey for aquatic predators.
The mechanics of feeding upon large prey in water have been detailed by Taylor [5] and are summarised here. For predators that feed on prey that are too large to be swallowed whole, rendering prey
into bite-sized chunks is an important component of feeding behaviour. Terrestrial predators can use the weight of the prey to restrain it whilst the predator rips off chunks; the predator’s
forelimbs can help secure the carcass, whilst shearing dentition produces the forces required to reduce prey. Aquatic predators, however, are unable to use the prey’s weight as an anchor because the
predator cannot brace against the ground (both predator and prey are effectively weightless in water), and as forelimbs are often modified for aquatic locomotion these cannot be used to restrain
prey. As a result, the aquatic predators often use vigorous shaking of the prey, provided the prey is small enough to be held clear of the water. When the prey is too large to shake, its inertia is
used to anchor it whilst the predator spins rapidly around its own long axis, generating shear forces that twist chunks off the carcass [13]. Shake and twist feeding are also used to subdue prey
after capture, with the use of twist feeding in crocodilians underlying the infamous ‘death roll’.
‘Armchair predictions’: argument from principles of beam theory.
In crocodilians that feed on large prey (too large to be swallowed whole), the skull must be capable of withstanding diverse loads: (1) straightforward adduction of the jaws (‘biting’), (2) vigorous
lateral shaking of the head with the prey held in the jaws (‘shaking’), and (3) rapid roll of the predator’s whole body about the longitudinal axis, with the prey held in the jaws (‘twisting’). How
these loads interact with symphyseal length can be explored based on beam theory. The mandible can be viewed as a ‘Y’ shaped beam configuration with uniform sections and X, Y, and Z axes representing
the transverse, dorso-ventral, and longitudinal directions respectively (Figure 4). Beam theory predicts that during biting the mandible will behave as a cantilevered beam loaded in the dorso-ventral
(Y) direction. For a given section, the mechanical response will depend only on the length of the whole mandible; the proportion of the mandible that is formed by the symphysis will not affect the
area moment of inertia in the dorso-ventral direction (Ixx, about the horizontal x axis), and so symphyseal length is irrelevant. In shaking, the mandible acts as a cantilevered beam that is loaded
laterally (X axis) at its anterior end and fixed posteriorly; its mechanics will be influenced by both the length of the beam and by the moment of inertia in the lateral direction (Iyy, about the
vertical axis). Symphyseal length (SL) does affect Iyy; a longer SL means a reduced Iyy, with a change in Iyy at the junction of the symphysis with the rami. Under twisting loads, the crocodile skull
is expected to act as a tapered cylinder (i.e. a cone, an efficient shape for torsional loads), and the mandible will be a partial cone; the mechanics should depend primarily on the polar moment of
area (J), and as increased SL reduces J then SL is expected to affect the mechanical performance.
Figure 4. Second moments of area for beam models.
Second moments of area correspond to the geometry of long and short symphysis crocodilians. (A) shows the beam approximation of mandibles with long and short symphyseal lengths. (B) shows the change
in second moment of area (length^4) for long and short symphyseal beam models; these were calculated at discrete locations from the tip (anterior) of each mandible, as a conceptual illustration of
the differences in second moments of area between the two morphologies. Corresponding locations are shown with dotted lines and the Y axis is a uniform arbitrary scale throughout. (C) shows (from
top) the loading regimes associated with shaking, biting and twisting; where red arrows represent forces and black crosses represent restraints.
Methodological aspects.
Skulls are far more complex than beams, which presents significant challenges for analyses of cranial mechanics. While some studies have successfully applied beam theory to generate insights into the
functional aspects of cranial shape variation [7], [14], [15], recent focus has been on the use of Finite Element Analysis (FEA) of high resolution meshes to describe the mechanical response of
complex skull geometries to the loads incurred during feeding behaviour [16], [17]. Whilst FEA offers many advantages for biomechanical analysis, the gap between the high accuracy of the FE models
and the simple geometry explained by beam theory has meant that the results of high resolution biological FEA are rarely discussed with reference to underlying mechanical principles such as beam
theory. This lack of a theoretical context means that the analyses do not attempt to test hypotheses of structure/function relationships constructed a priori, but are instead used to describe
post-hoc patterns of variation from which underlying generalities might be elucidated. Whilst post-hoc approaches are valid and often necessitated by the complexity of biological datasets, and are an
important means of generating hypotheses, a priori approaches have the capacity to test hypotheses.
An approach that uses beam modelling and high resolution FEA combines the strengths of both methods [18]. Beam modelling requires an explicit hypothesis of the aspects of morphology that are
considered to be of the highest biomechanical importance. High resolution finite element (hi-res FE) modelling describes the complex mechanical behaviour of actual morphology, and allows the
explanatory power of the beam models to be evaluated quantitatively. If the beam models are found to describe, even qualitatively, the pattern of variation in mechanical performance between
morphologies, then they are useful approximations of reality, and aspects of morphology they encapsulate may be most important to the performance of the biological structure. Small discrepancies
between FEA and analytical results from beam theory (as with CT cross sections) are informative about the influence of factors such as mesh and geometry resolution, and material properties, on both
methods. Conversely, a large discrepancy between beam and hi-res FE models indicates that the complexity of the biological structure overwhelms the capacity for analysis using beam theory, and/or the
aspects of shape that determine mechanical behaviour have not been captured in the beam model.
Here we explore the correlation between head length and symphyseal length in crocodilians using beam theory and FEA. Building from the assumption (based upon theory but yet to be demonstrated
empirically) that the dynamics of a rapid lateral sweep of the jaws during prey capture selects for a narrow rostrum and an elongate mandibular symphysis, we hypothesise that shorter symphyses of
meso- and brevirostrine crocodilians are selected for by the mechanics of the shaking and twisting behaviours used in feeding on large prey, but not by the mechanics of biting (jaw adduction).
Implicit in the above hypothesis is the assumption that the biomechanics of the crocodilian mandible can be elucidated using beam theory; a secondary aim here is to quantify the extent to which that
assumption is valid. For this, we used the following criteria; if the pattern of variation in the mechanical performance of the actual mandibles (as modelled in hi-res FEA) correlates best with the
linear morphological variable predicted by beam theory, then the biomechanics of the mandibles conform with the principles of beam theory. In contrast, if the pattern of variation in the hi-res FE
dataset correlates better with another variable (whether a linear measurement or a metric of shape) then the beam models do not explain the biomechanics of the actual rostrum, and the mechanics of
complex biological structures resists explanation using fundamental principles.
Our approach is to:
1. Explore the mechanics of beam models of the mandible under biting, shaking, and twisting loads in relation to a number of simple variables.
2. Compile a comparative dataset, based upon CT scans of several crocodilian species that between them show a spectrum of symphyseal length relative to mandibular length.
3. Using Finite Element software, construct a set of ‘simple’ (beam) and ‘complex’ (hi-res FE) models of each specimen, which are then analysed under simulated biting, shaking, and twisting loads.
4. The results from this modelling will be analysed to evaluate the specific hypotheses:
1. Strain in beam models will correlate with mandibular length under biting, but with symphyseal length under shaking and twisting.
2. Similarly, strain in complex FE models of crocodilian mandibles will correlate with mandibular length under biting, but with symphyseal length under shaking and twisting.
3. The crocodilian mandible behaves as a beam, i.e. the simple variables that best explain variation in strain between beam models will also best explain variation in strain between complex FE
Specimens, Scans, and Image Processing
Analysis was based upon seven species of crocodilian species spanning a large range of mandible morphology and symphyseal length (Table 1 and Figure 5). Models were constructed from CT scan data;
five specimens were scanned at the University of Texas Digital Morphology Laboratories, one at the Newcastle Mater Hospital, and one at the US National Museum. Although scan settings are not
identical for the different specimens, we did not have the opportunity to scan specimens upon multiple scanners and for the purposes of the present study we assume that the source of the scan does
not affect the subsequent modelling results.
Figure 5. Specimen used in this study.
From top left: Crocodylus intermedius, Tomistoma schlegelii, Mecistops cataphractus, Crocodylus moreletii, Crocodylus novaeguineae, Crocodylus johnstoni, Osteolaemus tetraspis.
Table 1. Specimen scan information.
Processing of the CT data was performed in MIMICS v11 (MATERIALISE, Belgium). For each specimen, the skull and mandible were segmented separately and converted to 3D isosurface models. Image
segmentation was largely straightforward, with the exception of the Crocodylus intermedius scan; this specimen had wire embedded in several positions within the mandible, resulting in refraction
artefacts in the CT data; the affected slices were manually processed in a bitmap editor (Paintshop Pro v8, JASC) to improve image quality and reduce the influence of the artefacts (Figure 6).
Figure 6. Manual correction of diffraction artefacts in Crocodylus intermedius scan.
Left: scan data before correction. Right: scan data after correction. See text for explanation.
Isosurface 3D models of segmented data can be made at low, medium, or high ‘quality’ - these settings exchange accuracy with computational requirements (Figure 7). The accuracy of the isosurface
model was measured by averaging the difference between isosurface and segmentation mask diameters as measured at 10 locations on the mandible and cranium. For different specimens, a given quality
setting gave a wide range of isosurface accuracy values (‘Average Contour Error’ in Table 2); presumably because of the different scan resolutions between specimens. For the final isosurface that
formed the basis for the FE model, we standardised the level of accuracy by using the quality setting that gave a contour error between 0.05 and 0.1% of mandible length.
Figure 7. Quality of isosurface models and error quantification.
The mask (shown in blue) represents the segmented/selected voxels that will be used to create isosurfaces. The three different contour qualities represent the 3D approximation of the mask and will
form the isosurface. Contour error is the measured distance between the isosurface contour and the mask it was generated from (lower left of image).
Table 2. Calculation and standardisation of error in the 3D models.
Isosurfaces were exported as STL (Stereolithography) files – a surface mesh comprising triangles. Surface meshes were used for morphometric analysis (see below) and formed the foundation upon which
suitable FEA solid meshes were generated using Harpoon (SHARC). Surface meshes were optimised to remove unwanted internal geometry (Figure 8) and to control the resolution of the final ‘tetrahedral’
solid mesh. For each specimen, solid mesh resolution was set such that the number of tetrahedral elements in the cranium was approximately 1.5 million. The mandible was then meshed such that the
average size of tetrahedral elements was approximately the same as the cranium, yielding 2.5 million tetrahedra (+/−10%) (Table 3) for the cranium and mandible combined.
Figure 8. Mesh optimisation and solid mesh generation.
Mesh optimisation and solid mesh generation was performed using Harpoon (SHARC). The left images show the complex internal geometry captured from isosurface generation. The middle column shows
removal of complex internal geometry whilst still retaining important geometrical features. Images at right show the final solid mesh.
Table 3. Mesh resolution for ‘complex’ FE models.
We used linear measurements and landmark coordinates from each mandible in order to quantify shape. Linear measurements comprised overall length (L), symphyseal length (SL), width (W), and inter-rami
angle (A), and were taken from the STL files within Rhino (McNeel - [19]) (Figure 9A). Linear measurements were corrected for size using skull (cranium+mandible) volume. For multivariate
quantification of shape, the surface mesh was imported into Landmark [20] as.PLY files and 22 landmarks were defined. (Table 4 and Figure 9B). These landmark locations were then exported to
Morphologika v2.5 [21], where procrustes superimposition and principal component analysis were undertaken.
Figure 9. Linear measurements and landmarks for mandible.
(A), linear measurements of mandible; (B), landmark locations. See text for explanation.
Table 4. Landmark characterisation.
Structural Modelling
We used the Finite Element Analysis package Strand7 [22] for analysis of beam and complex models. Beam models were constructed from 3 elements, whilst the complex (hi-res FE) models of the mandibles
ranged between 0.75 and 1.15 million elements.
Three sets of models were produced. The first set (beam models #1) explored the effects of four linear variables - overall length ('Length', L), symphyseal length (SL), width (W), and inter-rami
angle (‘Angle’, A) - upon stresses in the beam model representing the mandible. Within a mandible, these measurements co-vary and so their effects cannot be explored independently of each other. We
therefore created four sets of models, within which two of the measurements were kept constant while two co-varied (Figure 10);
Figure 10. Variations for beam models #1.
Model variations used to explore relationship between strain and linear variables in the first set of beam models. Abbreviations are defined as follows: (CL, CSL; VA, VW) – Constant length and
symphyseal length, variable angle and width. (CL, CW; VSL, VA) – Constant length and width, variable symphyseal length and angle. (CA, CW; VSL, VL) – Constant angle and width, variable symphyseal
length and length. (CSL, CW; VL, VA) – Constant symphyseal length and width, variable length and angle.
1. constant length and symphyseal length, variable angle and width (CL,CSL;VA, VW)
2. constant length and width, variable symphyseal length and angle (CL, CW; VSL, VA)
3. constant angle and width, variable symphyseal length and length (CA, CW; VSL, VL)
4. constant symphyseal length and width, variable length and angle (CSL, CW; VL, VA)
Beam dimensions are given in Table 5; the models were fully restrained at the two nodes at the rear (i.e. no rotations or translations in any axis), and a load was applied to the node at the front of
the model (Figure 11). For the bite and shake loads a 1 N force was applied in the X and Y axes respectively; for the twist a moment of 1 Nmm was applied in the XY plane.
Figure 11. Beam models showing axes, restraints and loads.
From top; shows loads and restraints for biting, shaking and twisting respectively. In all three cases models are fully restrained (rotation and translation) at the most posterior points of the beam
model. Loads are all placed at the most anterior point of the beam model.
Table 5. Dimensions for beam models #1.
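The behaviour expected of these beams follows from classical beam formulas, which is the first-principles basis for predicting how length, symphyseal length and inter-rami angle should affect strain. The sketch below evaluates maximum bending stress under a tip load and maximum shear stress under a tip torque for a circular section; the dimensions used are placeholders, not the values in Table 5.

import math

# Sketch: first-principles estimates for a circular-section cantilever, the
# relationships the three-element beam models are expected to reproduce.
# Dimensions are placeholders, not the Table 5 values.

def bending_stress(force_N, length_mm, diameter_mm):
    """Maximum bending stress (MPa) for a tip load: sigma = M*c/I with M = F*L."""
    c = diameter_mm / 2.0
    I = math.pi * diameter_mm ** 4 / 64.0      # second moment of area, mm^4
    return force_N * length_mm * c / I

def torsion_shear_stress(torque_Nmm, diameter_mm):
    """Maximum shear stress (MPa) for a torque on a circular shaft: tau = T*r/J."""
    r = diameter_mm / 2.0
    J = math.pi * diameter_mm ** 4 / 32.0      # polar moment of area, mm^4
    return torque_Nmm * r / J

print(bending_stress(force_N=1.0, length_mm=10.0, diameter_mm=0.07))   # MPa
print(torsion_shear_stress(torque_Nmm=1.0, diameter_mm=0.07))          # MPa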
The second set of beam models (beam models #2) used a similar construction, but dimensions were adjusted to capture the corresponding geometry of the hi-res models (Table 6). This allows direct
comparison between the results of the hi-res FE mandible models and beam modelling.
Table 6. Dimensions for beam models #2.
The material properties of all beam models were arbitrarily set to those of structural steel (Young’s modulus of 200,000 MPa, Poisson’s ratio of 0.25 and density of 7.87 g/cm^3). While the material properties of bone are considerably different to those of steel, the results indicate the relative performance of each beam model; additionally, under assumed linear behaviour, stresses or strains in other materials can easily be calculated from a given result. The beams representing the rami and symphysis were circular in cross section, with diameters of 0.05 mm and 0.07 mm
respectively. The diameter of the symphysis was chosen so as to maintain mass, width and overall length between a model with a symphysis of zero length (i.e. where the rami meet at the anterior end
of mandible) and one where symphyseal length accounts for 37.5% of overall length.
The four measurement variables explored with the beam models were all aligned in the XZ (coronal) plane - the beam models are in effect 2D models of the mandible. The third dimension is undoubtedly
important in crocodilian skull biomechanics [8], [23] and is here incorporated in the hi-res FE models (see below). In the beam models, we kept dimensions in the Y (vertical) axis constant to permit
the effects of variation in geometry in the XZ plane to be explored without confounding effects from variation in beam section.
Complex Models
The third group of models were the high resolution Finite Element (hi-res FE) models generated from the CT scan data of each specimen listed in Table 1. The solid meshes of the cranium and mandible
from each specimen were imported into Strand7 and form the basis for assembly of the FE models. Even though the present study focuses on mandibular biomechanics, crania were included within the model
to provide accurate boundary conditions (i.e. simulations of jaw joint, muscle attachments and force vectors, bite points, etc.).
Construction of the FEMs was based upon previously published protocols [24], [25], [26], [27], [28] and is summarised here.
Orientation and axes.
All models were orientated so the basal skull axis (which lies in the sagittal plane, and is defined rostrally by the tip of the premaxillae and caudally by the apex of the occipital condyle) was
aligned with the global Z axis, and the X and Y axes aligned with the transverse and vertical axes respectively.
Quadrate-articular joint and gape.
The mandible mesh was positioned to closely approximate the life position of the mandible relative to the cranium. The axis of rotation was defined with respect to the morphology of the quadrate
condyles. On each side of the skull, the jaw joint was simulated using a beam aligned with the joint axis, connected to the articular surfaces of the quadrate and articular by rigid links (beams with
infinite stiffness), and set to allow rotation around the beam’s long axis. In each model, gape was set to approximately 10 degrees (9.8 degrees +/−0.2 degrees).
Pterygoid buttress.
In crocodilians the lateral surface of the pterygoid flanges is lined with hyaline cartilage and tightly apposes the medial surface of the mandible; in effect it acts as an ‘open joint’ and
presumably buttresses against medial bending of the mandibular rami in response to the strong horizontally aligned vector components of the crocodilian jaw adductor muscles [29]. This action was
simulated by a link element between the relevant surfaces on the pterygoid and mandible, which allowed all movements except medial translation of the mandible.
Jaw muscles.
Jaw adductor musculature was simulated using truss elements that carry only tensional loads between muscle origin and insertion points [26], [27], [28]. Multiple trusses were used per muscle, with
the number of elements proportional to the size of the muscle. The anatomy of muscle attachments followed descriptions in the literature [30], [31]. Muscle forces for biting load cases were
calculated using a version of Thomason’s ‘dry skull’ method modified for crocodilian jaw muscle anatomy [15] with the ‘temporalis’ and ‘masseter’ groups [32] adjusted to ‘temporalis’ (adductor
externus, adductor posterior, pseudotemporalis) and ‘pterygoid’ (pterygoidus) groups respectively [29] (Table 7). For each group, cross sectional area (CSA) was determined using osteological
boundaries of the adductor chamber normal to its line of action (Figure 12), and muscle specific tension (force/area) assumed as 300 KPa [15]. The large m. pterygoidius posterior wraps around the
lower jaw to insert on the retroarticular process, where its lateral extent cannot be delimited. We partially account for force from its sling-like effect on the angular by extending the ‘pterygoid’
group’s subtemporal area to the outer margin of the lower jaw (Figure 12C). Future analyses will more fully incorporate the outer part of this large muscle, which varies substantially in size between
species and individuals. For now a discrete morphological proxy (lower jaw width) was judged the most precise approximation for comparing different taxa.
Figure 12. Reptile version of ‘dry skull method’ in a crocodile skull.
Skull of Mecistops cataphractus, showing: (A), temporal (red) and pterygoid (yellow) muscle vectors; temporal vector is oriented vertically with the skull aligned horizontally, pterygoid vector runs
between a point that is half of the cranial height at the postorbital bar, to the ventral surface of the mandible directly below the jaw joint. (B), calculation of the cross sectional area (CSA) for
the temporal muscles; the outline maps the extent of the adductor chamber defined from osteological boundaries, viewed normal to the relevant vector. (C), calculation of CSA for pterygoid muscles;
the outline is drawn normal to the vector. Outlines in B and C also show centroids, used for calculation of inlevers (see Thomason [15], McHenry [29]).
Table 7. Jaw muscle groups in crocs.
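As a concrete illustration of the modified 'dry skull' calculation, the sketch below converts a cross-sectional area into a muscle force (using the 300 KPa specific tension quoted above) and then into an approximate bite force via lever mechanics. The CSA values, inlevers and outlever are invented example numbers, not measurements from any of the specimens.

# Sketch: 'dry skull' style estimate of jaw muscle force and bite force.
# The 300 KPa (0.3 N/mm^2) specific tension is from the text; cross-sectional
# areas, inlevers and the outlever below are invented example values.
SPECIFIC_TENSION_N_PER_MM2 = 0.3

def muscle_force(csa_mm2):
    return SPECIFIC_TENSION_N_PER_MM2 * csa_mm2

def bite_force(groups, outlever_mm):
    """Sum of (muscle force x inlever) over muscle groups, divided by the outlever."""
    total_moment = sum(muscle_force(csa) * inlever for csa, inlever in groups)
    return total_moment / outlever_mm

example_groups = [(1500.0, 40.0),   # 'temporalis' group: (CSA mm^2, inlever mm)
                  (2200.0, 55.0)]   # 'pterygoid' group
print(f"estimated bite force: {bite_force(example_groups, outlever_mm=300.0):.1f} N")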
The number of trusses used to represent each muscle group was proportional to the CSA, and within each group, the number of trusses representing each muscle were divided according to attachment area
[26], [28], [29]. Muscle forces were applied as pretensions on each truss (Table 8). The diameter of each truss was calculated with respect to the measured cross sectional area of the respective
muscle groups in a specimen of C. porosus [29]; for each specimen used here truss diameters in all models were scaled to the cube root of their volume compared to that of the C. porosus model. For
shaking and twisting forces, we simulated an isometric force in the muscles (rather than isotonic fibre shortening during jaw adduction in biting) by assigning an increased elastic modulus to each
truss element [29]; this had the effect of bracing the jaws as they hold a prey item, as occurs during actual shaking and twisting behaviours.
Table 8. Beam pretensions used for functional muscle groups.
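Distributing a group force across its trusses and rescaling truss diameter between specimens is simple bookkeeping; the sketch below shows one way to do it. For brevity the group force is split evenly across trusses (the models apportioned trusses by attachment area), and the reference diameter and volumes are assumed example values; only the cube-root volume scaling rule comes from the text.

# Sketch: per-truss pretension and geometric scaling of truss diameter.
# The even split across trusses is a simplification (the models apportioned
# trusses by attachment area); only the cube-root volume scaling rule and the
# idea of applying forces as pretensions come from the text. Numbers are invented.

def pretension_per_truss(group_force_N, n_trusses):
    """Split a muscle group's total force across its truss elements."""
    return group_force_N / n_trusses

def scaled_truss_diameter(ref_diameter_mm, specimen_volume, ref_volume):
    """Scale truss diameter by the cube root of the specimen/reference volume ratio."""
    return ref_diameter_mm * (specimen_volume / ref_volume) ** (1.0 / 3.0)

print(pretension_per_truss(group_force_N=450.0, n_trusses=15))   # N per truss
print(scaled_truss_diameter(ref_diameter_mm=2.0, specimen_volume=1.8e6, ref_volume=2.4e6))  # mm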
Restraints.
Free body rotation was prevented by restraining nodes on the skull - restraints prevent translation and/or rotations about a given axis. For biting and shaking restraints, a node on the apex of the
occipital condyle was ‘fully fixed’ (translation and rotation) in all axes; for twisting, this node is fixed in translation only. For biting, each of the teeth involved in the bite (see below) were
restrained against rotation about the jaw hinge axis (dθ); additionally the two left teeth are restrained for translation along the jaw hinge axis (dZ – i.e. laterally). For twisting, these teeth are
all fully fixed. The surface of the occipital condyles and teeth involved in restraints were tessellated with beams to prevent point load artefacts.
Bite points.
For biting, shaking and twisting loads, the simulated bite point was at the front of the jaw, at the largest tooth in the premaxillary row. All four teeth (the fourth premaxillary pair from the
cranium, and the fifth dentary pair from the mandible) were designated as ‘holding’ prey. For biting loads, ‘mid’ and ‘rear’ bites (Figure 13) were also simulated (for predictions of bite force - see
below) but structural mechanical data from these is not presented here. Loads/restraints were applied to the apical node of each tooth involved in the bite point, with tessellated beams on the teeth
used to reduce point load artefacts.
Figure 13. Bite points for bite, shake and twist.
Teeth used in simulating front, mid and back bite points are shown in orange. Crocodylus intermedius (A), Osteolaemus tetraspis (B), Crocodylus novaeguineae (C), Crocodylus moreletii (D), Crocodylus
johnstoni (E), Mecistops cataphractus (F), Tomistoma schlegelii (G).
Loaded/restrained surfaces.
In Finite Element modelling, single nodes to which a load or restraint are applied can be subject to very high stresses which are an artefact of the modelling technique. To reduce the effect of these
‘point load’ artefacts, the nodes on the neighbouring surface were connected by a network of beams that are slightly stiffer than the underlying bone [26], [27], [28], [29]. These networks were used
at the jaw joint (to line the articular surfaces), the occipital condyle (again, lining the articular surface), on the pterygoid buttress and apposing part of the mandible, on the teeth involved with
the bite point, and at the muscle attachment surfaces.
Material properties.
The skulls were modelled with homogeneous material property sets, with the brick elements representing bone assigned an elastic (Young’s) modulus of 13,471 MPa. This value was based upon the modulus
of the mean bone density in the M. cataphractus skull, using the conversions of Hounsfield Unit to density to modulus given by McHenry and colleagues [26]. For beam elements, the elastic modulus of
the trusses representing muscle fibres was set to 0.1 MPa for biting load cases and 15 MPa for shaking and twisting load cases [29]. The beams used to reinforce the loaded/restrained surfaces were assigned a modulus of 100,000 MPa and a diameter of 1.92 mm in the M. cataphractus model, scaled accordingly in the other models. The elastic modulus of the beam representing
the jaw hinge was set to a high value in order to prevent unwanted movements of the joint (Table 9).
Table 9. Material properties for elements used in each model.
Each model was assembled and solved at its natural size for each load case. Since the hypotheses being tested concern shape, it was necessary to control for size: this was done by rescaling each
model so that the volume of cranium and mandible were the same as for the Mecistops cataphractus model [29], [33], which was intermediate between the smallest and largest specimens used in the
analysis. In the scaled models, the diameter of all beam elements was standardised. We quantified the sensitivity of results to different scaling criteria (surface area [34] and length [35]), which
are not presented here but were found to have similar strain discrepancies between specimens regardless of scaling method.
Load cases.
Biting load cases were simulated by restraining teeth at the bite point and applying pretensions to the ‘muscle beams’, as described above. ‘Front’, ‘mid’, and ‘rear’ bites were simulated for
unscaled (‘natural’) and scaled models; for the latter, we simulated bites where muscle forces were scaled to the 2/3 power of the change in volume (‘volume scaled’), and one where muscle forces were
adjusted so that the resultant bite force was equivalent to the bite force measured from the M. cataphractus model (‘tooth equals tooth’, or ‘TeT’). The TeT load case thus eliminated the effects of
size and load, and provides the simplest examination of the effects of shape upon skull mechanics.
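The two rescaled biting load cases amount to two simple force adjustments: 'volume scaled' multiplies muscle forces by the 2/3 power of the volume change, while 'TeT' rescales them so the resultant bite force matches the reference model (which assumes, as in a linear model, that bite force scales proportionally with muscle force). A minimal sketch with invented numbers is given below.

# Sketch: muscle-force adjustments for the rescaled biting load cases.
# Only the scaling rules come from the text; example numbers are invented.

def volume_scaled_force(original_force_N, original_volume, target_volume):
    """'Volume scaled': multiply muscle force by the 2/3 power of the volume change."""
    return original_force_N * (target_volume / original_volume) ** (2.0 / 3.0)

def tet_adjusted_force(original_force_N, model_bite_force_N, reference_bite_force_N):
    """'TeT': rescale muscle force so the model's bite force equals the reference's,
    assuming bite force scales proportionally with muscle force in a linear model."""
    return original_force_N * reference_bite_force_N / model_bite_force_N

print(volume_scaled_force(600.0, original_volume=1.5e6, target_volume=2.4e6))
print(tet_adjusted_force(600.0, model_bite_force_N=450.0, reference_bite_force_N=520.0))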
For shaking load cases, a lateral force was applied to each of the teeth at the bite point; the magnitude of the force was initially calculated for each model on the basis of prey approximately three
times the mass of the skull being held at the front of the jaws, and shaken from side to side at a frequency of 4 full cycles per second ([29]; Figure 14). The force magnitudes calculated for the M.
cataphractus model were then applied to the volume rescaled models.
Figure 14. Calculation of shake forces.
The problem definition used to determine the equations of motion that describe the feeding behaviour associated with shaking a prey item. This motion is considered to be harmonic, since the skull oscillates about a neutral axis with a set period (T); in our case this period is 0.25 seconds, i.e. a frequency (f) of 4 full cycles per second. Left: the equations of motion associated with shaking, where θ is angular displacement, ω is angular velocity and α is angular acceleration. Maximum angular acceleration (α_max, in radians/sec^2) occurs each time the skull changes direction, where a positive value indicates counter-clockwise acceleration and a negative value indicates clockwise acceleration. Right: the range of motion for a crocodile shaking a prey item. Bottom right: the equations used to calculate the maximum force (F) exerted on the skull as a result of shaking a prey item of mass (m) – approximately 2.55 kg in the M. cataphractus example shown here. Here a denotes the linear acceleration (in the direction of force F) and r denotes the distance to the centre of rotation; in our calculations r is the perpendicular distance from the jaw hinge axis to the centre of mass of the prey item (outlever length) – approx. 297 mm in M. cataphractus.
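The shake-force calculation summarised in Figure 14 can be reproduced from the harmonic-motion relations. In the sketch below the oscillation amplitude is an assumed parameter (its value is not given in the text shown here); the prey mass, outlever and frequency are the M. cataphractus values quoted in the caption, and the script is an illustration rather than the authors' exact calculation.

import math

# Sketch: peak force on the jaws from harmonically shaking a prey item.
# theta(t) = theta_max * sin(2*pi*f*t)  =>  alpha_max = theta_max * (2*pi*f)**2
# F_max = m * a_max = m * alpha_max * r
# The amplitude theta_max is an assumed value; f, m and r are the caption's figures.

def max_shake_force(prey_mass_kg, outlever_m, frequency_hz, amplitude_rad):
    alpha_max = amplitude_rad * (2.0 * math.pi * frequency_hz) ** 2  # rad/s^2
    linear_acceleration = alpha_max * outlever_m                     # m/s^2
    return prey_mass_kg * linear_acceleration                        # N

print(max_shake_force(prey_mass_kg=2.55, outlever_m=0.297,
                      frequency_hz=4.0, amplitude_rad=math.radians(30)))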
Similarly, the twisting load case was calculated on the basis of a large prey item being held in the jaws, with the crocodilian imposing a torsion load on the bite point by rotating its postcranium
about its own long axis at a rate of 2 full rotations per second (Figure 15). This torque was then simulated by fully fixing the teeth at the bite point and applying the calculated moment to the
occipital condyle. The moment calculated for the M. cataphractus model was applied to the volume rescaled models.
Figure 15. Calculation of twist forces.
The problem definition used to determine the equations of motion that describe the feeding behaviour associated with twisting a prey item. Bottom left: the range of motion for a crocodile twisting a prey item. Bottom right: the equations used to calculate the torque (τ) generated by a crocodile of mass (m) as a result of twisting about its own axis with a prey item held in its jaws. Torque is the product of the moment of inertia (I) about the animal's long axis and the angular acceleration (α), which is assumed to be constant. Moment of inertia is calculated using mass (m) and radius (r); in our calculations mass is approximated as fifty times the mass of the skull (approx. 40 kg in the M. cataphractus example shown here), while radius is approximated as skull width (approx. 152 mm in M. cataphractus). Initial angular velocity (ω0) is zero since in this case the twist is being made from a standing start. θ denotes the angular displacement of the twist in radians (2π, or 360 degrees, in this case), while t denotes the time taken to complete the rotation – 0.5 seconds.
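The twisting torque follows directly from the kinematics in the Figure 15 caption: one full rotation from rest in 0.5 seconds under constant angular acceleration, with the moment of inertia approximated from fifty times the skull mass and the skull width. The sketch below uses the M. cataphractus values; note that I = m·r^2 is assumed here as the simplest reading of the caption, and the paper may have used a different shape factor.

import math

# Sketch: torque for the twisting load case, using the Figure 15 caption values.
# Constant angular acceleration from rest: theta = 0.5 * alpha * t**2
#   => alpha = 2 * theta / t**2
# Moment of inertia approximated here as I = m * r**2 (simplest reading of the
# caption; the paper may have used a different shape factor). Torque = I * alpha.

def twist_torque(body_mass_kg, radius_m, angle_rad, duration_s):
    alpha = 2.0 * angle_rad / duration_s ** 2   # rad/s^2
    inertia = body_mass_kg * radius_m ** 2      # kg m^2
    return inertia * alpha                      # N m

print(twist_torque(body_mass_kg=40.0, radius_m=0.152,
                   angle_rad=2.0 * math.pi, duration_s=0.5))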
Assessing Biomechanical Performance
For each load case in each complex FE model, strain values for the tetrahedral brick elements making up the skull and cranium were exported as text files and analysed in the R statistical programming
environment [36]. Since we wished to determine the strength of the mandibles under load, the maximal strain values are the most useful for statistical analyses. In complex FE models, however, the
maximal strain values are often associated with artefacts of the model (e.g. restraints, load points, and elements with high aspect ratio geometry). We therefore used the 95% values of strain in each
model [37], which provide a similar pattern of results to the mean, median, 75%, and 90% values but differ from the 100% (i.e. maximal) values (Figure 16); in the absence of validated data on actual
strain values our assumption that 95% values provide a suitable basis for the analysis of results is untested but is logically sound. Contour plots of von Mises strain were also used to provide a
visual comparison of results.
Figure 16. Values of strain from complex FE models.
Shows mean, 50%, 75%, 90%, 95%, 99% and 100% strain values for each taxon used in this study. 95% strain represents the largest elemental value of strain in the model if the highest 5% of all values are
ignored. 100% strain is the maximum elemental strain in the model and likely represents constraint artefacts caused by boundary conditions. Taxon abbreviations: Ot, Osteolaemus tetraspis; Cm,
Crocodylus moreletii; Cng, Crocodylus novaeguineae; Ci, Crocodylus intermedius; Cj, Crocodylus johnstoni; Mc, Mecistops cataphractus; Ts, Tomistoma schlegelii.
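Extracting the 95% value from an exported element-strain file is a one-line percentile calculation once the data are loaded; the sketch below illustrates the idea, with the file name and single-column layout assumed rather than taken from the actual export format.

import numpy as np

# Sketch: 95th-percentile strain from an exported element-strain file.
# The file name and single-column layout are assumptions about the export format.
strains = np.loadtxt("mandible_strain.txt")   # one strain value per brick element

summary = {p: np.percentile(strains, p) for p in (50, 75, 90, 95, 99, 100)}
print("95% strain:", summary[95])
print("maximum (100%) strain, likely inflated by restraint artefacts:", summary[100])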
The beam models each comprised three elements and are not subject to the artefacts seen in the complex FE models. Results were collected as maximal fibre stress and converted to strain using the elastic modulus and the equation ε = σ/E, where E represents elastic (Young’s) modulus, σ represents stress, and ε represents strain.
Bite force was measured as the sum of the absolute values for nodal reaction forces for the four bite points involved in each bite, measured as reaction force in the rotational direction of the jaw
hinge axis (‘Dθ’ in Strand7).
Statistical Evaluation of Models
Analysis focused upon quantifying correlations between morphometric data and strain values, using natural logarithms of linear measurements, strain data, and principal component (PC) values. Scatter
plots of strain vs morphometric variables were produced using Excel (v2010, Microsoft).
Size corrected (by centroid size) landmark data were analysed using principal components analysis (PCA). Visualisation of shape change along PC axes was performed using Morphologika v2.5 [21]. The eigenscores from PCA represent relative shape variation and are used here as descriptors of shape as defined by Kendall [38], i.e. that which remains after rotation, translation and scale are removed; see [38], [39], [40]. Only the first two principal components were used in this analysis because the first two PC values accounted for 92% of shape variation (66% PC1, 26% PC2) and low
sample size limits the number of explanatory (morphometric) variables that can be evaluated.
For each type of mandible load (biting, shaking, or twisting), we evaluated the explanatory power of linear measurements compared with shape. Each linear measurement was tested as an explanatory
model (EM) and compared using the second-order Akaike’s Information Criterion, AICc, as recommended in the case of small sample sizes [41], [42], [43]. AICc score is a measure of the relative amount
of information lost when using an explanatory model to approximate reality, taking into account both the number of parameters in the EM and the sample size. A lower AICc score indicates a better EM; however, interpretation is not entirely clear cut: there can be some uncertainty as to how much “better” one EM is than another, and several EMs may be considered jointly AICc-best. We have reported the estimated parameters of each EM, the log-likelihood of each EM, AICc, ΔAICc, and the Akaike weights. ΔAICc values are the difference in AICc between an explanatory model and the AICc-best explanatory model. EMs within 2 AICc units of each other were considered nearly identical in information, EMs with ΔAICc values between 4 and 8 were considered fair, and EMs with a ΔAICc greater than 10 were considered poor
[44]. Akaike weights can be interpreted as approximations of the EM selection probabilities or posterior probabilities of the EM [44]. Effectively, Akaike weights are a measure of the relative
informativeness of each EM. Analysis was performed within the R statistical programming environment version 2.15.0 [36] using the ‘shapes’ [45] and ‘MuMIn’ [46] packages.
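The AICc comparison can be reproduced with ordinary least-squares fits of log strain on each candidate predictor set. The sketch below computes AICc, ΔAICc and Akaike weights using the standard Gaussian log-likelihood; the data arrays are random placeholders, and the parameter count includes the residual variance, following the usual convention.

import numpy as np

# Sketch: AICc-based comparison of explanatory models (EMs) for log strain.
# Each EM is an ordinary least-squares fit; data arrays are random placeholders.

def aicc_for_ols(y, X):
    """AICc for a Gaussian OLS fit; X must include a column of ones for the intercept."""
    n, k = X.shape
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    sigma2 = np.sum(resid ** 2) / n
    loglik = -0.5 * n * (np.log(2.0 * np.pi * sigma2) + 1.0)
    p = k + 1                                # coefficients plus the error variance
    aic = 2.0 * p - 2.0 * loglik
    return aic + 2.0 * p * (p + 1.0) / (n - p - 1.0)

def akaike_weights(aicc_values):
    delta = np.asarray(aicc_values) - np.min(aicc_values)
    w = np.exp(-0.5 * delta)
    return delta, w / w.sum()

rng = np.random.default_rng(1)
log_strain = rng.normal(size=7)              # 7 specimens
length, symph = rng.normal(size=7), rng.normal(size=7)
ones = np.ones(7)
candidates = {"length": np.column_stack([ones, length]),
              "symphyseal length": np.column_stack([ones, symph])}
scores = {name: aicc_for_ols(log_strain, X) for name, X in candidates.items()}
delta, weights = akaike_weights(list(scores.values()))
for name, d, w in zip(scores, delta, weights):
    print(f"{name}: dAICc = {d:.2f}, Akaike weight = {w:.2f}")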
Linear morphometric variables were selected a priori on the basis of beam theory principles. For biting, we evaluated mandibular length, the eigenscores of the first principal component, and the
eigenscores of the first two principal components. For shaking and twisting, we evaluated mandibular length, symphyseal length, mandibular angle, and eigenscores of the first two principal
Shape Analysis
Measurements of morphological variables are shown in Table 10. Figure 17 shows the plot of PC1 vs PC2 scores for the seven specimens. Most of the specimens lie within a defined linear band of PC
values, with the exception of Tomistoma schlegelii which appears to be an outlier (Figure 17). T. schlegelii is clearly separated from the other specimens along the PC1 axis, but not on the PC2.
Figure 17. Principal component plot.
Principal component 1 (PC1) versus principal component 2 (PC2) from geometric morphometric analysis. Taxon abbreviations: Ot, Osteolaemus tetraspis; Cm, Crocodylus moreletii; Cng, Crocodylus
novaeguineae; Ci, Crocodylus intermedius; Cj, Crocodylus johnstoni; Mc, Mecistops cataphractus; Ts, Tomistoma schlegelii.
Table 10. Length, Symphyseal Length, Angle and Width for each of the mandibles used in this study.
The morphological components of the principal components are shown in Figures 18 and 19. Symphyseal length (SL) shows the greatest percentage change along the PC1 axis, with some change in width (W)
but only minor changes in angle (A) and length (L) (Figure 18). Along PC2, angle shows the highest percentage change, but SL and W are nearly as great (Figure 19). Width is inversely correlated with
the other variables along PC1, whereas along PC2 changes in SL, W, and A are correlated. Length does not change along PC2. Correlation with phylogeny is poor along both PC axes (Figures 18 and 19),
suggesting that symphyseal length is not strongly constrained by phylogeny, although we did not test this statistically (due to small sample size).
Figure 18. Quantification of Principal Component 1 (PC1).
Wireframe (left) of mandible from dorsal and lateral perspectives illustrates the change in shape along PC1 axis. Note the longer symphyses at higher PC1 values. The chart in the centre shows the
value of each morphological variable (e.g. symphyseal length) at a given PC value, as a percentage of the maximal value for that morphological variable. Specimens are plotted according to their
respective PC1 values (centre right). Phylogram (right) shows poor correlation of specimen PC1 scores with phylogeny. Phylogeny based upon the results of Erickson and colleagues [47].
Figure 19. Quantification of Principal Component 2 (PC2).
Wireframe (left) of mandible from dorsal and lateral perspectives illustrates decreasing mandible robustness with increasing PC2 values. The chart in the centre shows the value of each morphological
variable (e.g. symphyseal length) at a given PC value, as a percentage of the maximal value for that morphological variable. Specimens are plotted according to their respective PC2 values (centre
right). Phylogram (right) shows poor correlation of specimen PC2 scores with phylogeny. Phylogeny based upon the results of Erickson and colleagues [47].
Bite Force
Bite force predictions from the hi-res FEMs are given in Table 11 and plotted in Figure 20. The maximum estimated bite force, 2145 N for a rear bite by the C. intermedius ‘natural’ sized model, is considerably less than that reported for that taxon (6276 N; Erickson and colleagues [47]). Scaled to skull volume, the relationship between bite force and outlever length appears to be consistent
between taxa, with the results for most specimens falling close to the regression line for the logarithm transformed data, suggesting that bite force is related to head size and bite point. The
slight non-linearity (slope of the regression line of logarithm transformed data is −0.93) in the data is not expected from the basic lever mechanics that are sometimes used to model bite force [15],
[48] and may stem from the measurement of bite force in the rotational axis of the jaw hinge; any component of the joint reaction force not aligned with that axis will be ignored by this measurement.
Figure 20. Bite force estimates for high resolutions FEMs.
Estimates of bite force generated by the high resolution FEMs, plotted against outlever length (distance from jaw hinge axis to bite point). Charts to right show natural logarithm transformed data.
(A) and (B) show results from models at ‘natural’ sizes, (C) and (D) show results from models rescaled to the volume of the M. cataphractus model. Note the strong correlation between volume-scaled
bite force and outlever (D). Front, mid, and rear bites for each FEM are shown. Taxon abbreviations: O.t, Osteolaemus tetraspis; C.ng, Crocodylus novaeguineae; C.i, Crocodylus intermedius; C.j,
Crocodylus johnstoni; M.c, Mecistops cataphractus; T.s, Tomistoma schlegelii; C.m, Crocodylus moreletii.
Table 11. Bite force estimates for natural sized and volume rescaled models.
Simple Beam Models #1
Results for the first set of beam models are shown as charts of strain values plotted against the value of each morphological variable (L, SL, A, W) in turn, for biting, shaking, and twisting (Figure 21).
Figure 21. Strain for simple beam models #1.
Strain in the first set of simple beam models, plotted against morphological variables (from top) length, symphyseal length, angle, and width, for biting (left), shaking (middle) and twisting (right)
loads. Note the strong correlation between bite and overall length, shake and symphyseal length, and twist and angle. Data is plotted as natural logarithms of linear measurements (mm) and angles
(degrees). Model abbreviations are as follows: (CL-CSL-VA-VW) – Constant length and symphyseal length, variable angle and width. (CL-CW-VSL-VA) – Constant length and width, variable symphyseal length
and angle. (CA-CW-VSL-VL) – Constant angle and width, variable symphyseal length and length. (CSL-CW-VL-VA) – Constant symphyseal length and width, variable length and angle.
Under simulated bite loads, strain in the beam models correlated positively and linearly with length when symphyseal length also varied (CA-CW-VL-VSL), and with length when symphyseal length did not
vary (CSL-CW-VA-VL). There was a strong non-linear negative correlation of strain with angle when length also varied (CSL-CW-VA-VL), and a weak non-linear negative correlation with angle and
symphyseal length when these co-varied (CL-CW-VA-VSL). There was no correlation with width. The factors determining strain in the beam models under biting are thus mainly length, with the covariance
of angle and symphyseal length showing a weak effect when length is held constant.
Under shake loads, strain correlated positively (although non-linearly) with length when symphyseal length also varied (CA-CW-VL-VSL), but did not correlate with length when symphyseal length was
held constant (CSL-CW-VA-VL). Correlation with symphyseal length was positive and linear for models where symphyseal length varied (CA-CW-VL-VSL), but strain did not vary between models when
symphyseal length was constant (CL-CSL-VA-VW, CSL-CW-VA-VL). Correlation with angle was positive and non-linear only when symphyseal length covaried (CL-CW-VA-VSL). There was no correlation with
width. The factor determining strain in the beam models under shaking is thus principally symphyseal length.
Under twist loads, strain correlated negatively and non-linearly with length when angle and length varied (CSL-CW-VA-VL), positively and linearly with symphyseal length when symphyseal length and
angle covaried (CL-CW-VA-VSL), positively and non-linearly with angle when angle varied (CL-CSL-VA-VW, CSL-CW-VA-VL, CL-CW-VA-VSL), and with width when angle covaried (CL-CSL-VA-VW). Strain did not
correlate with length or symphyseal length when angle did not vary. The factor determining strain values in the beam models under twisting appears therefore to be angle.
Beam Models #2
Contour plots illustrating regions of high tensile (positive) and compressive (negative) fibre stress for the M. cataphractus beam model under biting, shaking, and twisting loads are shown in Figure
22. Deformations are exaggerated to emphasize the structural response to each load simulated and the general pattern of stress is characteristic of all beam models. Under biting, the mandible
experiences highest stress posteriorly on the rami, which decreases anteriorly along the mandible (Figure 22A). For shaking, the highest stress is located at the symphyseal-rami junction, decreasing
in both the anterior and posterior directions (Figure 22B). For twisting, stress in the symphysis is uniform along its length, with highest stress in the anterior portion of the rami (peaking at the
symphyseal-rami junction), where it decreases before increasing, posteriorly along the rami (Figure 22C).
Figure 22. Stress contour plots for beam models.
Stress contour plots for beam model based on M. cataphractus for biting (A), shaking (B), and twisting (C) loading regimes. The models are shown from lateral (left), anterior (middle) and dorsal
(right) views. The regions of high tensile (reds) and compressive (blues) stresses are shown. Deformations are exaggerated to better illustrate the structural response to loads. The general pattern
of strain is similar for all beam models.
Maximum strain for the second set of beam models is shown in Figure 23, plotted against the morphological variables, for biting, shaking and twisting, as log transformed data. The plots show a clear
correlation between; length and biting, symphyseal length and shaking, and angle and twisting. AICc values confirm that, for shaking and twisting respectively, symphyseal length and angle are by far
the best explanatory variables, with very low AICc values and Akaike weights close to 1.0 (Tables 12 and 13). Weaker correlations are apparent between symphyseal length and biting, as well as between
length and shaking, although the latter is a relatively poor explanatory model based on AICc. In the plots of strain against length and symphyseal length in twisting, T. schlegelii appears to be an
outlier, while the data points for the other specimens suggest a negative relationship between length and symphyseal length for strain in twisting, but again these lack explanatory power under AICc-based model selection.
Figure 23. Strain for simple beam models #2.
Strain in the second set of simple beam models, plotted against morphological variables (from top) length, symphyseal length, angle, and width, for biting (left), shaking (middle) and twisting
(right) loads. Note the strong correlation between bite and overall length, shake and symphyseal length, and twist and angle. Data is plotted as natural logarithms of linear measurements (mm) and
angles (degrees). Dimensions of the beam models are based upon the volume rescaled versions of the high resolution FEMs for the corresponding species. Taxon abbreviations: O.t, Osteolaemus tetraspis;
C.ng, Crocodylus novaeguineae; C.i, Crocodylus intermedius; C.j, Crocodylus johnstoni; M.c, Mecistops cataphractus; T.s, Tomistoma schlegelii; C.m, Crocodylus moreletii.
Table 12. Comparison of morphological variables for predicting shake strain in a simplified beam representation of a crocodile mandible.
Table 13. Comparison of morphological variables for predicting twist strain in a simplified beam representation of a crocodile mandible.
Complex FE Models
Figure 24 shows strain plots for the complex FEMs under biting, shaking, and twisting loads. In biting (TeT) loads, strain is higher in longirostrine mandibles and is highest in Tomistoma. For all
mandibles except Tomistoma, strains are highest in the rami, with peak strain in the anterior regions of the rami (near the symphysis), and anterior to the joint surface of the articular. Strain in
the symphysis of these models is low, but strain in the rami immediately posterior to the symphysis is high, and the symphyseal-rami junction appears to be a weak point. In Tomistoma, strain is high
in the rami, similar to the other models, but strain is also high in the symphysis; the strain pattern in Tomistoma is qualitatively different to the pattern in the other taxa. All of the mandibles
seem to be behaving as beams, with high strains on the upper and lower edges of the mandibles and a simple neutral surface of low strain running along the length of the mandible between these edges.
Figure 24. Strain plots for volume scaled FEMs.
Strain plots for volume scaled FEMs under biting, shaking, and twisting loads to show details of strain patterns. Top: biting load case plotted with a maximum strain limit of 0.001 (left) and 0.003
(right); the latter limit shows the position of the peak strains, and the former gives best comparison between the different load cases. Bottom left: shaking load case plotted with a maximum strain
limit of 0.001. Bottom right: twisting load case plotted with a maximum strain limit of 0.001. Taxa: A, Tomistoma schlegelii; B, Mecistops cataphractus; C, Crocodylus johnstoni; D, Crocodylus
intermedius; E, Crocodylus novaeguineae; F, Crocodylus moreletii; G, Osteolaemus tetraspis.
For shaking loads, strain is high in the anterior part of the mandible but the peak strain is more concentrated at the symphyseal-rami junction than in biting, and unlike the situation in biting the
posterior part of the symphysis is included in the region of highest strain. In the Tomistoma mandible strain is also high along each side of the symphysis, unlike the other models.
For twisting (TeT) loads, strain is highest at the symphyseal-rami junction, again with the exception of the Tomistoma model where the highest strains are at the anterior end of the symphysis. In all
models, strain is low along the rami, and is concentrated within the symphysis. In twisting, strain magnitude for Tomistoma is much higher than for the other specimens, and the pattern of strain contours is
qualitatively different.
When shake force is adjusted to match bite force (Figure 25), mandibular strain is higher under biting than under shaking, for all species. The difference is marked for most of the models, with
strain in biting and shaking being most similar for the C. moreletii mandible.
Figure 25. Strain plot response to equal biting and shaking loads.
Direct comparison of mandible response to equal biting and shaking loads at the most anterior bite point (front). Strain magnitude is higher under the biting loads; the difference is noticeable for
longirostrine (A–C) and mesorostrine (D–F) taxa. Taxon labels: A, Tomistoma schlegelii; B, Mecistops cataphractus; C, Crocodylus johnstoni; D, Crocodylus intermedius; E, Crocodylus novaeguineae; F,
Crocodylus moreletii; G, Osteolaemus tetraspis.
Peak strain (95%) values are plotted against morphometric variables in Figure 26. Under biting, charts suggest that strain correlates strongly with Length, and also with PC1 and symphyseal length. In
shaking, strain correlates with symphyseal length and PC1, whilst in twisting strain correlates with symphyseal length, length, and PC1.
Figure 26. Peak mandibular strain (95% values).
Peak mandibular strain (95% values) plotted against morphometric variables (from top) length, symphyseal length, angle, width, and PC1 score for biting (left), shaking (middle) and twisting (right)
loads. Note that strain in biting correlates strongly with overall length and very poorly with both angle and width, whilst in shaking strain has reasonable correlations with both symphyseal length
and PC1. Contrary to beam predictions, strain in twisting correlates strongly with symphyseal length and very poorly with angle. Data is plotted as natural logarithms of linear measurements (mm) and angles (degrees). Taxon abbreviations: O.t, Osteolaemus tetraspis; C.ng, Crocodylus novaeguineae; C.i, Crocodylus intermedius; C.j, Crocodylus johnstoni; M.c, Mecistops cataphractus; T.s, Tomistoma schlegelii;
C.m, Crocodylus moreletii.
AICc scores are shown in Tables 14, 15, 16. The AICc-best explanatory model (EM) of strain in biting is that with length as the sole predictor (Table 14). The other two predictors, using the
eigenscores from geometric morphometric analysis, both have large ΔAICc values (greater than 10) and thus cannot be interpreted as effective predictors of bite strain. Note that symphyseal length was
not assessed as a predictor for biting loads, although it appears to correlate with strain to some degree in Figure 26.
Table 14. Comparison of morphological variables predicting bite strain for hi-res FEMs.
Table 15. Comparison of morphological variables predicting shake strain for hi-res FEMs.
Table 16. Comparison of morphological variables predicting twist strain for hi-res FEMs.
The AICc-best EM of shake strain was the first principal component from the geometric morphometric analysis (PC1) (Table 15). This axis separates T. schlegelii with its very narrow mandible from the
more robust mandibles of C. moreletii and O. tetraspis. The next best EM is virtually identical to the AICc-best (ΔAICc 0.17) and has symphyseal length as the sole predictor. The explanatory model with the eigenscores from both PC1 and PC2 was the worst of all explanatory models of shake strain (ΔAICc 13.74).
For twist strain, the AICc-best explanatory model had symphyseal length as the sole predictor (Table 16). The next best EM was similarly informative (ΔAICc 1.69), with Length as the sole predictor.
The third and fourth EMs with PC1 alone and combined PC1 and PC2 as predictors respectively were also potentially informative (ΔAICc 3.19 and 4.11 respectively), though these are not as good as the
first two EMs. Regardless, angle was a very poor explanatory model of twist strain (ΔAICc 16.92).
Qualitative comparison of Beam and FE models shows that beam models accurately predict ranked performance under biting, partially predict rank under shaking, and completely fail to predict rank under
twisting (Table 17). Under twisting, the relationship between Symphyseal Length measurements and strain predicted by beam models was inverted in the complex FE models (Figure 27).
Figure 27. Peak strain under twist loads for beam and FE models.
Peak strain under twist loads plotted against symphyseal length for beam (left) and FE (right) models. Note the relationship between symphyseal length and strain predicted by beam models is inverted
in the complex FE models; additionally, beam models fail to predict ranked order under twisting. Data is plotted as natural logarithms of linear measurements (mm). Taxon abbreviations are as follows:
O.t, Osteolaemus tetraspis; C.ng, Crocodylus novaeguineae; C.i, Crocodylus intermedius; C.j, Crocodylus johnstoni; M.c, Mecistops cataphractus; T.s, Tomistoma schlegelii; C.m, Crocodylus moreletii.
Table 17. Ranked performance of beam and FE models.
Interpretation and Discussion
Symphyseal Length in Mandibular Mechanics
The results show that symphyseal length is an important aspect of shape in determining the mechanical response of the crocodilian mandible to shaking and twisting loads (Hypothesis B). This
correlation is consistent with ‘armchair’ predictions (argument from first principles) based upon beam theory, and is partly consistent with beam modelling. AICc explanatory model selection indicates
that symphyseal length is the best simple measurement at predicting mandibular strain under these loads, and is even better than a multivariate measure of shape (PC1 score) for twisting loads. PC1
eigenscore is a slightly better predictor of strain than symphyseal length in shaking loads, although it should be noted that symphyseal length is a large component of the shape variation associated
with PC1. Length was also an effective predictor of strain under twisting loads, and also covaries with symphyseal length. As twisting and shaking behaviours are used by crocodilians to feed on large
prey, these results provide direct correlations between simple morphological variables and feeding ecology.
Also consistent with armchair predictions and beam modelling, length was the most important determinant of mechanical strain under biting loads (Hypothesis B). Length is a much better predictor of
strain than any other linear variable, and is also a much better predictor than multivariate measurements of shape (PC1) (Hypothesis C). In biting, mandibular strain is higher in longirostrine
crocodiles, both when bite force is standardised (TeT) and when bite simulates maximum muscle force (‘volume scaled’; Figure 28). In the latter case bite force in longirostrine forms decreases as
outlever length increases, so the higher strain may indicate a more gracile mandible in these forms in addition to the effects of increased bending moments acting on the jaws.
Figure 28. Strain in biting loads for TeT and NoLLC.
Left: Strain response of mandibles when subject to equal bite force (TeT), plotted against length for (from top) front, mid and back bites. Right: Strain response of mandibles at maximal bite force
(NoLLC), plotted against length for (from top) front, mid and back bites. In the TeT load cases, muscle forces are adjusted so that all models experience the same bite force as the M. cataphractus
model for each bite point; with the exception of the Osteolaemus model, this has little effect on the qualitative pattern of results, with longirostrine taxa exhibiting higher strain in TeT and NoLLC
load cases. Data is plotted as natural logarithms of linear measurements (mm).
Relative bite forces accord with in vivo studies of crocodilians, although absolute simulated forces are lower. Predicted bite force was consistent between volume scaled FEMs, correlating with
outlever length. Given the marked variation in cranial morphology between the models, this result is consistent with the recent finding by Erickson and colleagues [47] that, for a particular bite
point, bite force in crocodilians is controlled by body size rather than skull morphology (Figure 29). The absolute bite force predicted by the FEMs is consistently and significantly less than
empirical data reported by [47]. The discrepancy is most likely because the jaw muscles in the FEMs are modelled as parallel fibred beams that run as straight lines between attachment points, whilst
crocodilian muscles are actually pennated and run around bony structures (for example, M. pterygoidius posterior, which wraps around the ventral surface of the angular), aspects that are expected to
increase total muscle force and effective inlever length. Specific tension of jaw muscles is not often measured in reptiles, but in Sphenodon punctatus it is 89 N/cm^2 (890 KPa) [49], a figure that is much greater than the isometric value used in our models (30 N/cm^2, i.e. 300 KPa [15]); this may be a source of error that also contributes to differences in bite force between our results and experimental findings [47].
Whilst rostral proportions vary markedly between crocodilian taxa (Figure 2), the morphology of the postorbital region and adductor chamber is conservative [50], [51] and since this region houses the
adductor musculature it is likely that, size for size, crocodilians of different species produce similar maximal jaw muscle force [47]. As we calculated jaw muscle forces from the osteology of the
adductor chamber, the qualitative patterns of bite force predicted by the FEMs appear to be consistent with the empirical data, even if absolute force magnitude is less.
Figure 29. Comparison of FEM predictions and in vivo measurements of bite force.
Natural logarithms of FEM predicted bite force (red squares) and in vivo bite force (blue diamonds), plotted against body mass. Bite force is for rear bites; in vivo bite force data are from Erickson [47]. For the FEMs, body mass was calculated from skull volume using the equation log10(body mass) = log10(skull volume) x 0.9336 + 1.9763, using data from McHenry [29]. Slopes of regression lines are
similar, but the difference in intercept means that the in vivo bite force is a factor of approximately 1.6 times the FEM predicted bite force for crocodilians of a given mass.
If maximal jaw adductor muscle force in longirostrine crocodiles is similar to that of mesorostrine forms, but strain under a given load is higher, then longirostrine crocodiles should be expected to
avoid dangerously high strain by either having high safety factors (so that even maximal bite force will not damage bone), or by voluntarily limiting bite force. Note that the suggestion that safety
factors in crocodilian skulls are high is inconsistent with in vivo strain data from rostra in Alligator mississippiensis [52], and stresses in crocodilian teeth [47].
Beam Modelling vs ‘armchair’ Predictions
The results from beam modelling are consistent with the argument from beam theory for biting and shaking; for the former, strain will be determined by length, but for the latter strain will be
determined by symphyseal length (Hypothesis A). Under twisting, however, the beam models found inter-rami angle, not symphyseal length, to be the best predictor of strain.
Beam Modelling vs Complex FEMs
Both sets of beam models indicate that strain in biting, shaking, and twisting can be predicted from measurements of length, symphyseal length, and angle respectively. Whilst the results from the
complex FEMs were consistent with these predictions for biting and shaking, inter-rami angle did not correlate with strain in the FEMs and was an extremely poor explanatory model according to AICc-based
selection. This constitutes an important discrepancy between the beam and complex FE models (Hypothesis C).
Another important comparison is the qualitative predictions of beam models vs complex FEMs. In biting, the beam models ranked relative performance of the mandibles the same as the FE models; this
result means that in order to correctly rank the biomechanical performance of the seven mandibles tested here under biting loads, the only information required is mandible length. For shaking, beam
and complex FE models agreed on the relative performance of five models but differed in their rankings of the M. cataphractus and C. novaeguineae models. For twisting, ranked results were similar for
only four models.
The largest discrepancy between the beam modelling and FE modelling is for twist loads; the beam models found angle to be the best predictor of strain, whilst the complex FEMs found symphyseal length
as the best sole predictor. This result may indicate the limitations of modelling a complex shape such as a crocodilian mandible as a beam. However, the beam models used here were very simple so it
is possible that a very small increase in their complexity (such as allowing beam section to vary along the length of the beam, especially in the vertical axis) may capture an important aspect of the
actual 3-dimensional structure and improve the predictive power of the beam models compared with the complex FE models.
Functional Interpretation - Crocodilians
Ultimately, we are interested in the biomechanics that influence mandibular morphology in crocodilians. Whilst torsional loads (which are moments) cannot be directly compared to forces, the response
of the mandible to biting and shaking loads can be compared. In all taxa except Osteolaemus the mandible is stronger under shaking loads than under equivalent biting loads (Figure 25). If, in an
evolutionary sense, symphyseal length is controlled by shaking and twisting behaviours, we might expect that these behaviours should result in strain values that are at least of the same order as the
strain resulting from biting. When shake forces were equalised to bite forces, the mandible was weaker in biting than in shaking for all species except Osteolaemus. For the loads used in this study,
strain was higher in biting for all species modelled than in shaking, and strains resulting from twisting were much lower. If these loads accurately represent the magnitudes of loads used by
crocodiles, then our results suggest that selection should result in increased resistance to bending loads from biting, rather than shaking or twisting, as a mandible that is strong enough to cope
with a crocodile’s own bite force is already strong enough to cope with likely shaking or twisting loads. If, however, the loads used in shaking and/or twisting are actually much higher than those
used here, then shaking and/or twisting could possibly have the strongest influence on mandibular morphology, resulting in a morphology that is stronger under these loads than in biting. Whilst we
currently lack the quantitative data required to explore this aspect further, these data are in theory accessible from studies of crocodilian behaviour and as such will provide insight into the
behaviours that have determined the evolution of skull form in crocodilians.
Although structural modelling can identify the biomechanical advantages of a short mandibular symphysis, the question of why longirostrine crocodilians have an elongate symphysis remains open.
Clearly, it is not for increased strength, though an elongate symphysis might offer hydrodynamic advantages for rapid jaw closure during capture, or allow greater acceleration of the jaws towards
agile prey. To address this question, a combination of in vivo data, fluid dynamics and solid mechanics would be required to best model crocodilian jaws during prey capture.
Functional Interpretation - other Taxa and Palaeobiology
For palaeobiologists, one of the interesting aspects of crocodilians is their potential to act as an extant model for cranial palaeobiomechanics of various fossil groups which have superficially
similar morphologies such as plesiosaurians, ichthyosaurians, phytosaurs, and of course extinct crocodylomorphs [8], [23], [53]. Although overall head shape may be similar between these groups, the
details of skull anatomy are specific to each group. If the details of cranial anatomy are critical to modelling its biomechanics, then the principles elucidated from one group should not be
generalised to another. However, if a small number of simple measurements can provide insight into the biomechanics of that group, then those insights may be generalised to the other groups. The
results here are somewhat encouraging for palaeobiomechanists; since simple measures of mandibular shape (length and symphyseal length) provide some insight into the mechanics of the mandible, the
same measurements may be applicable to all of the above fossil reptile groups, and to marine mammals such as odontocetes, archaeocetes, and basal mysticetes [54], providing an answer to the
functional morphologist’s question – where to put the callipers? The next step in better understanding the functional morphology of the mandible is to quantify the relationship between shape and
diet. Odontocetes may provide a suitable study group for this, given the diversity, morphological variation, and available ecological data for this important group of pelagic predators.
Methodological Approaches
Since the early 2000s complex Finite Element models have been increasingly used to investigate skull mechanics in various fossil and living species; whilst different studies have made use of
deductive and experimental approaches [17], many have used a comparative biomechanical approach to reconstruct the palaeobiology of extinct species [26], [28], [33], [55], [56], [57], [58]. Whilst
comparative approaches are of high value to palaeobiology, they tend to use post hoc analysis and are sometimes difficult to conduct in a way that explicitly tests hypotheses of form and function.
Studies that predict the mechanical consequence of specific morphologies are rarer, because of the difficulty in applying a fundamental theorem (such as beam theory) to complex shapes. By combining
predictions based in beam theory with data from complex FE modelling, we are able to test a priori hypotheses of the mechanical consequences of changes in morphology. Some previous studies have
combined beam theory with FE modelling [8], [59], but used very low resolution FE models. We here assume that the high resolution models used in the present study do capture the actual mechanics of
the biological structures under study, but the models have yet to be validated and this remains an important step for future work and limitation of the present study.
Longirostrine crocodilian mandibles experience higher strain than those of meso-/brevirostrine forms when subject to equivalent biting, shaking and torsional loads. In the mandible, strain in biting and twisting was best predicted by overall length and symphyseal length respectively, while shaking was best predicted by both symphyseal length and a multivariate measure of shape (PC1). For biting and twisting, simple linear measurements of the mandible provide better predictors of mechanics than a multivariate measure of shape (PC1), with overall length and symphyseal length outperforming PC1 for
biting and twisting respectively.
For biting and shaking, the pattern of variation between species is consistent with predictions from beam theory, where overall length and symphyseal length are the best predictive measures of biting
and shaking respectively. The response to twisting is best predicted by symphyseal length, while beam models predicted inter-rami angle. This divergence could be due to the exclusion of sectional
variance in beam models; since beam models had uniform section and real mandibles vary their section with length, this difference could be expected to change the mechanics.
Of the hypotheses we sought to test, we found support for Hypothesis A, that strain in beam models should correlate best with length in biting but symphyseal length in shaking and twisting, and
Hypothesis B of the same correlations in complex FE models. We found partial support for Hypothesis C, that the morphological variables that best explain strain in beam models will also best explain
strain in complex FE models; this was the case under biting and shaking loads, but was not the case for twisting loads.
Beam theory remains a useful tool for exploring biomechanics and our approach illustrates the possibility of moving away from post hoc examinations of complex models towards a priori predictions of
the fundamental mechanics. Our approach allows researchers to focus on using information from first principles to identify the components of shape that are of interest and then quantify and compare
the relative statistical performance of various hypotheses using model selection criteria, something that is rarely done in current studies of biomechanics.
The authors thank Matthew Colbert and Jesse Maisano (Digital Morphology, University of Texas) and Chris Brochu (University of Iowa) for access to CT data. We thank Eleanor Cunningham (Newcastle Mater
hospital), Mason Meers (University of Tampa) and Bruno Frohlich (United States National Museum) for scanning of specimens. For access to specimens we thank Ross Sadlier & Cecilie Beatson (Australian
Museum). We thank Holger Preuschoft, Ulrich Witzel, Chris Glen and Bill Daniel for discussion on various aspects of biomechanics and Finite Element Analysis that ultimately led to the research
presented here.
Author Contributions
Image design and creation: CWW MRQ MRM CRM. Conceived and designed the experiments: CWW PDC CRM. Performed the experiments: CWW CCO PDC. Analyzed the data: CWW PDS CRM. Contributed reagents/materials
/analysis tools: CWW MRM HSR SW PDC CRM. Wrote the paper: CWW PDS CRM.
|
{"url":"http://www.plosone.org/article/info:doi/10.1371/journal.pone.0053873","timestamp":"2014-04-20T06:13:38Z","content_type":null,"content_length":"370723","record_id":"<urn:uuid:f0f80505-06c0-4063-a834-32a2fbc1eea1>","cc-path":"CC-MAIN-2014-15/segments/1397609538022.19/warc/CC-MAIN-20140416005218-00066-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Class Field Theory for Imaginary Quadratic Fields
Let $K$ be a quadratic imaginary field, and E an elliptic curve whose endomorphism ring is isomorphic to the full ring of integers of K. Let j be its j-invariant, and c an integral ideal of K.
Consider the following tower:
K(j,E[c]) / K(j,h(E[c])) / K(j) / K,
where h here is any Weber function on E. (Note that K(j) is the Hilbert class field of K).
We know that all these extensions are Galois, and any field has ABELIAN galois group over any smaller field, EXCEPT POSSIBLY THE BIGGEST ONE (namely, K(j,E[c]) / K).
1. Does the biggest one have to be abelian? Give a proof or counterexample.
My suspicion: No, it doesn't. I've been trying an example with K = Q($\sqrt{-15}$), E = C/O_K, and c = 3; it just requires me to factorise a quartic polynomial over Q-bar, which SAGE apparently can't do.
2. What about if I replace E[c] in the above by E_tors, the full torsion group?
class-field-theory complex-multiplication elliptic-curves nt.number-theory
4 Answers
Here is a case where it is non-Abelian. I use $K$ of class number 3. If I use the Gross curve, it is Abelian. If I twist in $Q(\sqrt{-15})$, it is Abelian for every one I tried, maybe
because it is one class per genus. My comments are not from an expert.
> K<s>:=QuadraticField(-23);
> jinv:=jInvariant((1+Sqrt(RealField(200)!-23))/2);
> jrel:=PowerRelation(jinv,3 : Al:="LLL");
> Kj<j>:=ext<K|jrel>;
> E:=EllipticCurve([-3*j/(j-1728),-2*j/(j-1728)]);
> HasComplexMultiplication(E);
true -23
> c4, c6 := Explode(cInvariants(E)); // random twist with this j
> f:=Polynomial([-c6/864,-c4/48,0,1]);
> poly:=DivisionPolynomial(E,3); // Linear x Linear x Quadratic
> R:=Roots(poly);
> Kj2:=ext<Kj|Polynomial([-Evaluate(f,R[1][1]),0,1])>;
> KK:=ext<Kj2|Polynomial([-Evaluate(f,R[2][1]),0,1])>;
> assert #DivisionPoints(ChangeRing(E,KK)!0,3) eq 3^2; // all E[3] here
> f:=Factorization(ChangeRing(DefiningPolynomial(AbsoluteField(KK)),K))[1][1];
> GaloisGroup(f); /* not immediate to compute */
Permutation group acting on a set of cardinality 12
Order = 48 = 2^4 * 3
> IsAbelian($1);
This group has $A_4$ and $Z_2^4$ as normal subgroups, but I don't know its name, if any.
PS. 5-torsion is too long to compute most often.
Thank you Junkie; this example shows that there is an error in my original question. When I assumed that K(j,E[c]) is always galois over K, I was wrong! In this example, [K(j,E[3]) :
K] = 12 (in your notation this is [KK : K] = 12). However, you show that the Galois closure of KK has degree 48 over K. I may start thinking about precisely when this extension is
Galois and when it isn't. Thanks again! – Barinder Banwait May 7 '10 at 11:07
In general $K(j,E[c])$ will not be abelian over $K$ (the reason being that $K(j,h(E[c]))$ is the ray class field of $K$ of conductor $c$, therefore maximal for abelian extensions of
conductor $c$). However, $K(j,E[c])$ is always abelian over $K(j)$. In particular, if the class number of $K$ is $1$, the answer is yes to both your questions, because $K(j)=K$.
For more on this, see Silverman's "Advanced topics in the AEC". In particular, see pages 135-138, and Example 5.8 discusses briefly this question.
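A sketch of the standard argument behind the claim that $K(j,E[c])/K(j)$ is abelian (an editorial addition following the general theory in Silverman's book, not a quotation from it): since $E$ has CM by $O_K$ and both $E$ and its endomorphisms can be defined over $H = K(j)$, every element of $\mathrm{Gal}(\overline{K}/H)$ acts on $E[c] \cong O_K/c$ by an $O_K$-module automorphism, giving an injection

$\mathrm{Gal}(K(j,E[c])/K(j)) \hookrightarrow \mathrm{Aut}_{O_K}(E[c]) \cong (O_K/c)^\times,$

and the right-hand side is abelian. Over $K$ itself the action of $\mathrm{Gal}(H/K) \cong \mathrm{Cl}(K)$ intervenes, and as a comment above notes, $K(j,E[c])$ need not even be Galois over $K$ when the class number exceeds 1.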
Is it possible for some example that $K(j,E[c])$ is always nontrivial over $K(j)$ when $c\geq 2$? – i707107 Jul 5 '13 at 22:52
The extension K(j,E_{tors}) is abelian over the Hilbert class field of K, hence over K if K has class number 1. Silverman (Advanced topics, p. 138) says that, in general, the extension is
not abelian. For getting a counterexample, looking at ${\mathbb Q}(\sqrt{-15})$ is the right idea. Instead of factoring the quartic you might simply want to compute its Galois group, which
you probably can read off the discriminant and the cubic resolvent.
The field generated by $E_{tors}$ is the union of the fields generated by the $E[c]$, so for $c$ large enough you should see the nonabelian group already there.
I don't think the Galois group is enough. The quartic I referred to was the 3-division polynomial, defined over K(j), NOT K. Its splitting field would give me K(j,h(E[3])) (since h just
picks out the x-coordinates). But since I'm trying to get K(j,E[3]), I need the y-coordinates as well. Note that, using the universal elliptic curve, I can write down a Weierstrass
equation for E := C/O_K (well, this requires an explicit computation of j, which is possible, and is explicitly stated on page 143 of Silverman's advanced topics). – Barinder Banwait Apr
30 '10 at 14:35
Magma is not facile here but works, but maybe SAGE can do the same. You get $K(j,E[3])/K$ to be of degree 12 with cyclic Galois group, for the $E$ I think you want.
> jrel:=PowerRelation(jInvariant((1+Sqrt(-15))/2),2 : Al:="LLL");
> K:=QuadraticField(-15);
> Kj<j>:=ext<K|jrel>;
> A:=AbsoluteField(Kj);
> C:=EllipticCurve([-3*j/(j-1728),-2*j/(j-1728)]);
> b, d := HasComplexMultiplication(C); assert b and d eq -15;
> E:=QuadraticTwist(C, 7*11); // conductor at 3, 5
> E:=ChangeRing(WeierstrassModel(ChangeRing(E,A)),Kj);
> c4, c6 := Explode(cInvariants(E));
> f:=Polynomial([-c6/864,-c4/48,0,1]);
> poly:=DivisionPolynomial(E,3); // Linear x Cubic
> r:=Roots(poly)[1][1];
> Kj2:=ext<Kj|Polynomial([-Evaluate(f,r),0,1])>; // quadratic ext for linear
> KK:=ext<Kj2|Factorization(poly)[2][1]>; // cubic x-coordinate
> assert #DivisionPoints(ChangeRing(E,KK)!0,3) eq 3^2; // all E[3] here
> f:=Factorization(ChangeRing(DefiningPolynomial(AbsoluteField(KK)),K))[1][1];
> // assert IsIsomorphic(ext<K|f>,KK); // taking too long ?
> // SetVerbose("GaloisGroup",2);
> GaloisGroup(f);
Permutation group acting on a set of cardinality 12
Order = 12 = 2^2 * 3
> IsAbelian($1);
Magma has an online calculator for this. http://magma.maths.usyd.edu.au/calc
Darn, that means my example was wrong. Presumably if I replace 3 by something bigger, and try the above calculation, the last line should return "false" and I'm happy. I will find a
friend (who has MAGMA) to do this. I'll keep you posted. – Barinder Banwait May 3 '10 at 20:42
I do not know, but my impression is that for 3-torsion with the Gross curve the tendency is to be Abelian. I tested the same for $d=-23$ (the Gross curve) and got a cyclic group of
order 12. But my comments are not from an expert. Computing with 5-torsion is time-consuming. If you look at twists (preserving $j$-invariant), they have larger Galois group. See below.
– Junkie May 3 '10 at 23:51
|
{"url":"http://mathoverflow.net/questions/23092/class-field-theory-for-imaginary-quadratic-fields?sort=votes","timestamp":"2014-04-18T13:51:07Z","content_type":null,"content_length":"72239","record_id":"<urn:uuid:e9da42ef-19e9-4c4b-80de-0943191dd8e5>","cc-path":"CC-MAIN-2014-15/segments/1397609533689.29/warc/CC-MAIN-20140416005213-00361-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Re: external representations
On Jun 23, 2005, at 11:25 PM, William D Clinger wrote:
Bradley Lucier wrote:
Re: Your idea of representing common Scheme values as NaNs.
I believe it is possible under IEEE 754 that the "hardware"
could return a different NaN for each execution of (/ 0. 0.)
in the code (for example). (Some proposals have suggested
putting the address of the code and/or a rough time stamp
in the mantissa.) I'm a bit concerned that a floating-point
operation could return a value that would be interpreted by
your scheme as #\C (for example).
Interesting. Can you tell me of any hardware that actually
does this?
Apple's libraries do not put an address or a timestamp in the mantissa of a NaN, but they do put a code indicating where the problem arose. E.g., in
http://developer.apple.com/documentation/mac/PPCNumerics/PPCNumerics-17.html#HEADING17-45
we find
A NaN may have an associated code that indicates its origin. These codes are listed in Table 2-3. The NaN code is the 8th through 15th most significant bits of the fraction field.
Table 2-3 NaN codes
Decimal Hexadecimal Meaning
1 0x01 Invalid square root, such as SQRT-1
2 0x02 Invalid addition, such as (+∞)+(−∞)
4 0x04 Invalid division, such as 0/0
8 0x08 Invalid multiplication, such as 0×∞
9 0x09 Invalid remainder or modulo, such as x rem 0
17 0x11 Attempt to convert invalid ASCII string
21 0x15 Attempt to create a NaN with a zero code
33 0x21 Invalid argument to trigonometric function (such as cos, sin, tan)
34 0x22 Invalid argument to inverse trigonometric function (such as acos, asin, atan)
36 0x24 Invalid argument to logarithmic function (such as log, log10)
37 0x25 Invalid argument to exponential function (such as exp, expm1)
38 0x26 Invalid argument to financial function (compound or annuity)
40 0x28 Invalid argument to inverse hyperbolic function (such as acosh, asinh)
42 0x2A Invalid argument to gamma function (gamma or lgamma)
The PowerPC processor always returns 0 for the NaN code.
and the following code gives
[descartes:~] lucier% gcc -Wall -o testfp testfp.c -save-temps
[descartes:~] lucier% ./testfp
7ff8000000000000 7ff8048000000000 7ff8044000000000 7ff8000000000000
[descartes:~] lucier% cat testfp.c
#include <stdio.h>
#include <math.h>
int main() {
union {
long long int i;
double d;
} x, y, z, a;
x.d = sqrt(-1.0);
y.d = log (-1.0);
z.d = acos(2.0);
a.d = (1.0 / 0.0) - (1.0 / 0.0);
printf("%llx %llx %llx %llx\n", x.i, y.i, z.i, a.i);
return 0;
}
I'd like to have access to those codes if they're available. In other words, I'd like all bit strings that can be interpreted as NaNs by the floating-point system to have external representations.
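Not part of the original message: a small Python sketch of how the 8-bit NaN code described in the Apple table above could be read out of a double's bit pattern (the field occupies the 8th through 15th most significant bits of the 52-bit fraction). The function name nan_code is illustrative.

import struct

def nan_code(x):
    """Return the 8-bit NaN code stored in the fraction field of an IEEE double.

    Accepts either a Python float or a raw 64-bit integer bit pattern.
    """
    bits = x if isinstance(x, int) else struct.unpack('>Q', struct.pack('>d', x))[0]
    fraction = bits & ((1 << 52) - 1)
    # The 8th through 15th most significant bits of the 52-bit fraction
    # are bits 44 down to 37, so shift them to the bottom and mask.
    return (fraction >> 37) & 0xFF

# The log(-1.0) pattern quoted above, 0x7ff8048000000000, decodes to 0x24,
# matching "invalid argument to logarithmic function" in the table;
# the acos(2.0) pattern decodes to 0x22 (inverse trigonometric function).
assert nan_code(0x7ff8048000000000) == 0x24
assert nan_code(0x7ff8044000000000) == 0x22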
|
{"url":"http://srfi.schemers.org/srfi-70/mail-archive/msg00089.html","timestamp":"2014-04-17T09:35:47Z","content_type":null,"content_length":"7214","record_id":"<urn:uuid:5acd1b89-dc1f-4cf3-858c-23af5b94031b>","cc-path":"CC-MAIN-2014-15/segments/1398223206147.1/warc/CC-MAIN-20140423032006-00609-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Determinant of a pullback diagram
Suppose that X and Y are finite sets and that f : X → Y is an arbitrary map. Let PB denote the pullback of f with itself (in the category of sets) as displayed by the commutative diagram
PB → X
↓ ↓
X → Y
Terence Tao observes in one of the comments on his weblog that the product of |PB| and |Y| is always greater than or equal to |X|^2. (This is an application of the Cauchy-Schwarz inequality.) This fact
may be rephrased as follows: If we ignore in the above diagram all arrows and replace the sets by their cardinalities we obtain a 2x2 matrix with a non-negative determinant.
The question is whether this is a general phenomenon. Suppose that n is a positive integer and that X[1], X[2], ... ,X[n] are finite sets; furthermore we are given maps f[1] : X[1] → X[2], f[2] : X
[2] → X[3], ... , f[n-1] : X[n-1] → X[n]. We construct a pullback diagram of size nxn. The diagram for n=4 is shown below.
PB → PB → PB → X[1]
↓ ↓ ↓ ↓
PB → PB → PB → X[2]
↓ ↓ ↓ ↓
PB → PB → PB → X[3]
↓ ↓ ↓ ↓
X[1] → X[2] → X[3] → X[4]
Here, the maps between the X[i] in the last row and column are the corresponding f[i] and the PBs denote the induced pullbacks. (Of course, although they are denoted by the same symbol, different PBs
are different objects.) The PBs can be constructed recursively. First, take the pullback of X[3] → X[4] ← X[3]; it comes with maps X[3] ← PB → X[3]. Having constructed this, take the pullback of X[2]
→ X[3] ← PB and so forth.
Ignore all arrows and replace sets by their cardinalities. Is the determinant of the resulting nxn matrix always non-negative?
ct.category-theory matrices categorification
I can't help but feel like you should be using the Gessel-Viennot lemma. – Qiaochu Yuan Oct 22 '09 at 14:04
Interesting approach. By the way, I can prove that it is true for n=3, but this was already some effort. I hope one can read the diagram. – Philipp Lampe Oct 22 '09 at 14:52
1 Answer
Write X[n] = {x[1], ..., x[k]}. For each 1 ≤ i ≤ k let w[i] be the vector whose jth component is the cardinality of the inverse image of x[i] in X[j]. Then your matrix is the sum w[1]w[1]^T + ... + w[k]w[k]^T, a sum of positive semidefinite matrices, so it is positive semidefinite and in particular has nonnegative determinant.
Great. What a simple argument. – Philipp Lampe Oct 22 '09 at 17:07
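Not part of the original thread: a quick numerical sanity check of the argument above, written as a Python sketch (the set sizes and random maps below are arbitrary illustrative choices).

import numpy as np

rng = np.random.default_rng(0)

# A random chain of maps X[1] -> X[2] -> ... -> X[n]; maps[a][x] is the image
# of element x of X[a+1] in X[a+2] (0-based arrays for the 1-based notation).
sizes = [5, 4, 3, 2]                                   # |X[1]|, ..., |X[n]|
maps = [rng.integers(0, sizes[a + 1], size=sizes[a]) for a in range(len(sizes) - 1)]

def to_end(a, x):
    """Image of element x of X[a+1] under the composite map down to X[n]."""
    for f in maps[a:]:
        x = f[x]
    return x

n, k = len(sizes), sizes[-1]
w = np.zeros((k, n))                                   # w[m, a] = |preimage of x[m] in X[a+1]|
for a in range(n):
    for x in range(sizes[a]):
        w[to_end(a, x), a] += 1

M = sum(np.outer(wm, wm) for wm in w)                  # matrix of pullback cardinalities
assert np.linalg.det(M) >= -1e-9                       # nonnegative determinant, as claimed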
|
{"url":"http://mathoverflow.net/questions/1858/determinant-of-a-pullback-diagram","timestamp":"2014-04-21T10:28:04Z","content_type":null,"content_length":"54737","record_id":"<urn:uuid:9372833c-e9d0-41fe-98c7-6950c96d64cb>","cc-path":"CC-MAIN-2014-15/segments/1397609539705.42/warc/CC-MAIN-20140416005219-00390-ip-10-147-4-33.ec2.internal.warc.gz"}
|
A.: Categories, Allegories
Results 1 - 10 of 29
- Information and Computation , 1996
Cited by 99 (5 self)
New tools are presented for reasoning about properties of recursively defined domains. We work within a general, category-theoretic framework for various notions of `relation' on domains and for
actions of domain constructors on relations. Freyd's analysis of recursive types in terms of a property of mixed initiality/finality is transferred to a corresponding property of invariant relations.
The existence of invariant relations is proved under completeness assumptions about the notion of relation. We show how this leads to simpler proofs of the computational adequacy of denotational
semantics for functional programming languages with user-declared datatypes. We show how the initiality/finality property of invariant relations can be specialized to yield an induction principle for
admissible subsets of recursively defined domains, generalizing the principle of structural induction for inductively defined sets. We also show how the initiality /finality property gives rise to
the co-induct...
, 1992
Cited by 24 (1 self)
This thesis considers the problem of program correctness within a rich theory of dependent types, the Extended Calculus of Constructions (ECC). This system contains a powerful programming language of
higher-order primitive recursion and higher-order intuitionistic logic. It is supported by Pollack's versatile LEGO implementation, which I use extensively to develop the mathematical constructions
studied here. I systematically investigate Burstall's notion of deliverable, that is, a program paired with a proof of correctness. This approach separates the concerns of programming and logic,
since I want a simple program extraction mechanism. The \Sigma-types of the calculus enable us to achieve this. There are many similarities with the subset interpretation of Martin-Lof type theory. I
show that deliverables have a rich categorical structure, so that correctness proofs may be decomposed in a principled way. The categorical combinators which I define in the system package up much
logical bo...
- TURKU CENTRE FOR COMPUTER SCIENCE , 1996
Cited by 23 (0 self)
It is generally accepted that in principle it’s possible to formalize completely almost all of present-day mathematics. The practicability of actually doing so is widely doubted, as is the value of
the result. But in the computer age we believe that such formalization is possible and desirable. In contrast to the QED Manifesto however, we do not offer polemics in support of such a project. We
merely try to place the formalization of mathematics in its historical perspective, as well as looking at existing praxis and identifying what we regard as the most interesting issues, theoretical
and practical.
- Math. Structures Comput. Sci , 2001
"... Connections between the sequentiality/concurrency distinction and the semantics of proofs are investigated, with particular reference to games and Linear Logic. ..."
- Mathematical Structures in Computer Science , 1993
"... This paper gives the first complete axiomatization for higher types in the refinement calculus of predicate transformers. ..."
, 2000
Cited by 12 (0 self)
A program derivation is said to be polytypic if some of its parameters are data types. Often these data types are container types, whose elements store data. Polytypic program derivations necessitate
a general, non-inductive definition of `container (data) type'. Here we propose such a definition: a container type is a relator that has membership. It is shown how this definition implies various
other properties that are shared by all container types. In particular, all container types have a unique strength, and all natural transformations between container types are strong. Capsule Review
Progress in a scientific dicipline is readily equated with an increase in the volume of knowledge, but the true milestones are formed by the introduction of solid, precise and usable definitions.
Here you will find the first generic (`polytypic') definition of the notion of `container type', a definition that is remarkably simple and suitable for formal generic proofs (as is amply illustrated
in t...
, 1997
Cited by 11 (3 self)
This paper is concerned with models of SDT encompassing traditional categories of domains used in denotational semantics [7,18], showing that the synthetic approach generalises the standard theory of
domains and suggests new problems to it. Consider a (locally small) category of domains D with a (small) dense generator G equipped with a Grothendieck topology. Assume further that every cover in G
is effective epimorphic in D. Then, by Yoneda, D embeds fully and faithfully in the topos of sheaves on G for the canonical topology, which thus provides a set-theoretic universe for our original
category of domains. In this paper we explore such a situation for two traditional categories of domains and, in particular, show that the Grothendieck toposes so arising yield models of SDT. In a
subsequent paper we will investigate intrinsic characterizations, within our models, of these categories of domains. First, we present a model of SDT embedding the category !-Cpo of posets with least
upper bounds of countable chains (hence called !-complete) and
- Encyclopedia of Database Technologies and Applications , 2005
Cited by 9 (7 self)
This article (further referred to as Math-I), and the next one (further referred to as Math-II, see p. 359), form a mathematical companion to the article in this encyclopedia on Generic Model
Management (further referred to as GenMMt, see p.258). Articles Math-I and II present the basics of the arrow diagram machinery that provides model management with truly generic specifications.
Particularly, it allows us to build a generic pattern for heterogeneous data and schema transformation, which is presented in Math-II for the first time in the literature.
, 1994
Cited by 7 (4 self)
An approach is described for the generation of certain mathematical objects (like sets, correspondences, mappings) in terms of relations using relation-algebraic descriptions of higher-order objects.
From non-constructive characterizations executable relational specifications are obtained. We also show how to develop more efficient algorithms from the frequently inefficient specifications within the
calculus of binary relations.
|
{"url":"http://citeseerx.ist.psu.edu/showciting?cid=956119","timestamp":"2014-04-20T14:38:57Z","content_type":null,"content_length":"34649","record_id":"<urn:uuid:c198b807-5c70-45f9-86ce-137753edde60>","cc-path":"CC-MAIN-2014-15/segments/1397609538787.31/warc/CC-MAIN-20140416005218-00054-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Digital Waveguide Networks
The principal components of the Kelly-Lochbaum speech synthesis model, paired delay lines which transport wave signals in opposite directions, and the scattering junctions to which they are
connected, are the basic building blocks of digital waveguide networks (DWNs) [166]. Keeping within the acoustic tube framework, it should be clear that any interconnected network of uniform acoustic
tubes can be immediately transferred to discrete time by modeling each tube as a pair of digital delay lines (or digital waveguide) with an admittance depending on its cross-sectional area. At a
junction where several tubes meet, these waves are scattered. See Figure 1.6 for a representation of a portion of a network of acoustic tubes, and its DWN equivalent.
Figure 1.6: (a) A portion of a general network of one-dimensional acoustic tubes and (b) its discrete-time realization using paired bidirectional delay lines and scattering junctions.
The scattering operation performed on wave variables must be generalized to the case of a junction of several waveguides, as shown in Figure 1.7. Though we will cover this operation in more detail in Chapter 4, and in the wave digital context in Chapter 2, we note that, as for the case of the junction between two tubes, the scattering equations result from continuity requirements on the pressures and volume velocities at the junction. That is, the pressures $p_i$ in the $N$ tubes meeting at the junction are required to be identical and equal to some junction pressure $p_J$, and the volume velocities flowing into the junction must sum to zero; the tubes thus form a parallel connection at the junction.
The pressures and velocities can be split into incident and reflected waves, as in (1.2a) and (1.2b), and solving the continuity conditions for the reflected waves in terms of the incident waves gives the scattering relation (1.12), which can be represented graphically as per Figure 1.7(b).
It is worth examining this key operation in a little more detail. First note that the scattering operation can be broken into two steps, as follows. First, calculate the junction pressure

$p_J = \frac{2}{\sum_{j=1}^{N} Y_j} \sum_{i=1}^{N} Y_i p_i^{+} \qquad (1.13)$

Then, calculate the outgoing waves from the incoming waves by

$p_i^{-} = p_J - p_i^{+}, \qquad i = 1, \ldots, N \qquad (1.14)$
Although (1.13) is merely an intermediate step in computing (1.14), it produces the physical junction pressure $p_J$ as a natural by-product of the scattering operation; because in a numerical integration setting, this physical variable is always what we are ultimately after, we may immediately suspect some link with standard differencing methods, which operate exclusively using such physical ``grid variables''. In Chapter 4, we examine the relationship between finite difference methods and DWNs in some detail.
It is simple to show that the scattering operation also ensures that

$\sum_{i=1}^{N} Y_i \left(p_i^{+}\right)^2 = \sum_{i=1}^{N} Y_i \left(p_i^{-}\right)^2$

which is, again, merely a restatement of the conservation of power at a scattering junction. Notice that if all the admittances are positive, then this weighted sum of squares defines a norm on the wave variables which is preserved by the scattering operation.
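Not part of the original page: a minimal Python sketch of the two-step scattering operation described above, with incident pressure waves and admittances passed as plain lists (the function name scatter and the example values are illustrative only).

def scatter(p_plus, Y):
    """One scattering step at an N-port parallel junction.

    p_plus -- incident pressure waves p_i^+, one per connected waveguide
    Y      -- the corresponding admittances Y_i (assumed positive)
    Returns (p_J, p_minus): the junction pressure and outgoing waves p_i^-.
    """
    Y_total = sum(Y)
    # Step 1: junction pressure, an admittance-weighted combination of incident waves.
    p_J = 2.0 * sum(y * p for y, p in zip(Y, p_plus)) / Y_total
    # Step 2: outgoing wave on each branch is the junction pressure minus the incident wave.
    p_minus = [p_J - p for p in p_plus]
    return p_J, p_minus

# Example: three tubes with admittances 1, 2, 3 and a unit incident wave on tube 1.
pJ, out = scatter([1.0, 0.0, 0.0], [1.0, 2.0, 3.0])
# Power check: sum of Y_i*(p_i^+)^2 equals sum of Y_i*(p_i^-)^2, up to rounding.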
Stefan Bilbao 2002-01-22
|
{"url":"https://ccrma.stanford.edu/~bilbao/master/node11.html","timestamp":"2014-04-18T04:37:14Z","content_type":null,"content_length":"15474","record_id":"<urn:uuid:f6dc6699-ec5e-4f5d-9d67-f06ac138d010>","cc-path":"CC-MAIN-2014-15/segments/1397609532480.36/warc/CC-MAIN-20140416005212-00558-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Senoia Geometry Tutor
Find a Senoia Geometry Tutor
...I teach systematic methods to make sure that every student does the best they can on the SAT math. We begin by identifying areas that students struggle with and identify methods that make
sense and that the students can use on a regular basis. Once students begin to build up confidence we progress to more difficult questions.
17 Subjects: including geometry, chemistry, writing, physics
...I believe that all students can learn if given the proper tools to be successful. I am a high school science teacher with five years of teaching and tutoring experience. As a classroom
teacher, I strive to make the topics that I teach fun and relatable for my students.
19 Subjects: including geometry, chemistry, statistics, biology
...My philosophy on tutoring is to discover why the student does not understand the material, focus on helping the student understand the material and allow the student to continue to practice
with the material until they become comfortable doing it. Also, if the student is unable to draw what the ...
13 Subjects: including geometry, calculus, trigonometry, SAT math
...I have passed the Elementary Math qualifying test. I am a huge fan of the game and I relate basketball to mathematics in my teaching. I have done research into the history and the fundamentals
of basketball.
11 Subjects: including geometry, algebra 1, algebra 2, SAT math
Hi. My name is Lisa. I have been tutoring for several years and I currently tutor several new college students, adults returning to college, and regular students in need of help.
31 Subjects: including geometry, reading, English, chemistry
|
{"url":"http://www.purplemath.com/senoia_ga_geometry_tutors.php","timestamp":"2014-04-18T18:47:38Z","content_type":null,"content_length":"23577","record_id":"<urn:uuid:d4b93f14-d100-4a19-8797-191dca9ff1a5>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.7/warc/CC-MAIN-20140416005215-00483-ip-10-147-4-33.ec2.internal.warc.gz"}
|
The POLYWARP procedure performs polynomial spatial warping.
Using least squares estimation, POLYWARP determines the coefficients Kx(i,j) and Ky(i,j) of the polynomial functions:
Xi = sum over i and j of: Kx(i,j) * Xo^j * Yo^i
Yi = sum over i and j of: Ky(i,j) * Xo^j * Yo^i
Kx and Ky can be used as inputs P and Q to the built-in function POLY_2D. This coordinate transformation may be then used to map from Xo, Yo coordinates into Xi, Yi coordinates.
This routine is written in the IDL language. Its source code can be found in the file polywarp.pro in the lib subdirectory of the IDL distribution.
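Not part of the original documentation: a small Python/NumPy sketch of the same kind of least-squares fit, for readers outside IDL. The function name polywarp_fit and its interface are illustrative, not an IDL API.

import numpy as np

def polywarp_fit(xi, yi, xo, yo, degree):
    """Least-squares estimate of warp coefficients, in the spirit of POLYWARP.

    Returns Kx, Ky, each (degree+1) x (degree+1), such that approximately
        xi = sum over i,j of Kx[i,j] * xo**j * yo**i
        yi = sum over i,j of Ky[i,j] * xo**j * yo**i
    """
    xo = np.asarray(xo, dtype=float)
    yo = np.asarray(yo, dtype=float)
    n = degree + 1
    # Design matrix: one column per monomial xo^j * yo^i, ordered to match the reshape below.
    A = np.stack([(xo ** j) * (yo ** i) for i in range(n) for j in range(n)], axis=1)
    kx, *_ = np.linalg.lstsq(A, np.asarray(xi, dtype=float), rcond=None)
    ky, *_ = np.linalg.lstsq(A, np.asarray(yi, dtype=float), rcond=None)
    return kx.reshape(n, n), ky.reshape(n, n)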
Xo, Yo
Vectors of X and Y independent coordinates. These vectors must have the same number of elements as Xi and Yi .
Degree
The degree of the fit. The number of coordinate pairs must be greater than or equal to ( Degree +1)^2.
Kx
A named variable that will contain the array of coefficients for Xi as a function of (Xo, Yo). This parameter is returned as a ( Degree +1) by ( Degree +1) element array.
The following example shows how to display an image and warp it using the POLYWARP and POLY_2D routines.
Create and display the original image by entering:
Now set up the Xi's and Yi's. Enter:
Run POLYWARP to obtain a Kx and Ky:
POLYWARP, XI, YI, XO, YO, 1, KX, KY
Create a warped image based on Kx and Ky with POLY_2D:
|
{"url":"http://www.astro.virginia.edu/class/oconnell/astr511/IDLresources/idl_5.1_html/idl146.htm","timestamp":"2014-04-19T22:15:54Z","content_type":null,"content_length":"6902","record_id":"<urn:uuid:6730da72-124c-46e8-af3f-57688eb4ef24>","cc-path":"CC-MAIN-2014-15/segments/1397609537754.12/warc/CC-MAIN-20140416005217-00510-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Proof Theory on the eve of Year 2000
Date: Fri, 3 Sep 1999 18:14:06 +0200
From: Anton Setzer
To: sf@csli.Stanford.EDU
Subject: Re: Proof Theory on the eve of Year 2000
Dear Sol,
here are my personal points of view with respect to your questions.
I have decided that either I write a quick answer to them or I
will never write anything, so I have decided to write a fast answer
(and therefore I might not have thought well enough about it).
The following might be distributed.
In the following I am referring to proof theory in the original
Hilbert sense, i.e. that part of it concerned with the question
of the consistency of mathematical axiom systems and direct
descendants of it, mainly ordinal analysis (including for instance
This is of course only a very small part of the subject, but its
the part of it, which really concerns me, and I have
not enough knowledge to say a lot about other parts.
The following is of course a very personal point of view, coming from
a very specific angle, so I expect disagreement.
1. What are, or should be the aims of proof theory?
The original goal of proof theory due to Hilbert was, to
prove the consistency of mathematical axiom systems by finitary
methods. Since finitary methods do not suffice, we have to replace
them by principles for which we have good reasons for believing
in them, or even have philosophical arguments for their correctness.
Such a reduction should be in my opinion the main aim of proof theory.
Another aim is, to compare theories with each other, especially
by determining the proof theoretic strength, in order to get an
as broad as possible picture of the proof theoretic landscape.
2. How well has it met those aims?
I think very well, proof theory has reached quite a big strength,
constructive theories and semi-constructive theories of all kinds
have been found (ranging from ID_n, ID_alpha and extensions of
Kripke-Platek set theory, which I regard as "semi-constructive" theories
which already provide some reduction, up to real constructive
theories, most prominently constructive set theory, variations
of T_0 and variants of Martin-L"of type theory).
And we have a very detailed picture of the landscape of proof theory,
were we are able to compare completely different paradigms.
Further weak theories have been very well explored, so we know that
a large part of mathematics can be carried out in weak systems
of strength less than or equal ID_<\omega, an area
which is so well explored that I think we can absolutely trust
in the consistency of the corresponding theories.
And we have as well quite a good knowledge of weaker theories,
like bounded arithmetic.
3. What developments or specific achievements stand out?
First of all, we have to analyze stronger theories, Pi^1_n-CA and finally
ZF. And the area, recently reached by Rathjen and Arai, is
not very well understood yet, especially some other researchers
have to do some research on it and constructive counterparts have to
be found.
4. What, now, are the main open problems?
See 3.
5. Are there any neglected directions in proof theory?
- There is still some imprecision with what is meant by proof theoretic
strength (which can probably only be made precise for certain
subclasses of theories).
- Every proof theoretist essentially starts from scratch (even
M. Rathjen defines in his notes on the notation system for Pi^1_2-CA
the Veblen function). I think some more uniform methods
are needed, or some more general concepts. However, I have little
hope at the moment that somebody deals with such questions.
6. In what ways can or should proof theory relate to other parts of
logic/foundations/mathematics/computer science/linguistics/etc.?
- Other parts of logic: It seems that we can learn a lot from
recursion theory and from set theory. On the other hand,
the area beyond Mahlo seems not to have been sufficiently studied
in recursion theory, so the proof theoretic development in this
region might still inspire some fruitful
development there (although because of the growing importance of
computer science it might not be tackled in the near future).
- Philosophy: I think proof theory is mainly dealing with
foundations, and after some mathematical reduction steps we will always
end up with some principles, which can only be validated by
philosophical arguments. Here interaction with philosophy is required.
- Mathematics: Maybe we can learn something from the development
of Algebra, how to find more abstract concepts. And there are
probably many more methods from all areas of mathematics, which
could be used in proof theory.
- Computer science: I think proof theory can contribute a lot to
computer science: we constructivize certain theories, which corresponds
to data structures and programming principles. Most prominently I think
the analysis of inductive definitions has led to an understanding
of them, and because of this understanding they have become so
important in computer science. Stronger principles should over
time find their way as well into computer science, and
proof theoretic insight should be applied to problems in computer
On the other hand, questions coming from computer science might
(and already do) motivate proof theoretic research (for instance
questions like the extraction of programs from proofs, questions
about how to model constructive theories intended to be used in
computing etc.)
I see the relationship to computer science similarly to the relationship
between mathematics and physics: physics has stimulated a lot
of new ideas in mathematics, mathematics was always been concerned with
giving precise proofs for what physicists proved in an imprecise way,
and with formalizing "hacks" of physicists, and of course mathematics
has contributed very much to physics.
- Linguistics: I know too little about it to say something about it.
7. Where do you think the subject is heading?
Where do you think it should be heading?
One direction is to move more and more into computer science.
In some sense it is good, because we get motivations and new ideas.
I see some problem however in that attention towards foundational
aspects and "strong proof theory" reduces too much, and that
too much weight is laid on applications rather than substantial
progress in the field. I think we should
see ourselves as proof theoretists who know where they are coming
from, and who from having a solid basis in proof theory look
at interesting problems in computer science, to which proof theory
can contribute something.
Another, rather healthy direction seems to be, to move to stronger
and stronger theories, and in my opinion our understanding of
the area up to Pi_3-reflection has grown very fast in the last years,
and at the moment we are about to get an understanding of
8. Do you think more students/researchers should be involved in the
subject? If so, how can they be encouraged to do so?
In general I think it is nice that it is not a too big area. And
maybe with more researchers we wouldn't make much more progress.
I see a problem only in that, in the area of very strong proof theory
(concerned with big ordinals), not many researchers are involved and there
is a lack of a new generation following, so here something has to be done.
If I knew how to encourage more people, I would do so, but I don't know how.
9. Do we need more/different kinds of general expository texts? What
about specialized texts?
I am planning a proof theory course for next year, and I have decided
that there is no book which is really suitable for the part dealing
with impredicative proof theory, mainly because
the H-controlled derivations have changed the methods very much.
So there is need for
- some book suitable for courses, which includes some impredicative
proof theory
- and as well a big monograph of "state-of-the-art" proof theory.
10. Overall, how would you rate the state of health of our subject? What
is its promise for the future?
I think there is some danger that "strong proof theory" will die out
sooner or later, because it becomes too complicated and there are
too few people left. But I hope this will not happen.
A big problem for me are the many wars of all kinds within proof theory.
Maybe this lies in the nature of in subjects which have to deal
with foundational questions that people develop very divergent
points of view of what is important and that therefore conflicts
arise. On the other hand, in my opinion, if there is a real "enemy",
then it is the world outside proof theory, and we should concentrate
more on this enemy, or better to say, have to convince the real
world about the importance of our subject, rather than concentrating
on internal wars.
Anton Setzer
Anton Setzer Telephone:
Department of Mathematics (national) 018 4713284
Uppsala University (international) +46 18 4713284
P.O. Box 480 Fax:
S-751 06 Uppsala (national) 018 4713201
SWEDEN (international) +46 18 4713201
Visiting address:
Polacksbacken, Laegerhyddsvaegen 2,
House 2, room 138
Date: Sun, 5 Sep 1999 14:56:52 -0400 (EDT)
From: urquhart@urquhart.theory.toronto.edu
To: sf@csli.Stanford.EDU
Cc: urquhart@urquhart.theory.toronto.edu
Subject: Re: Proof Theory on the eve of Year 2000
Dear Sol:
Thanks for sending your questions. Here are a few comments and answers:
1. What are, or should be the aims of proof theory?
I don't have a lot to add to Georg Kreisel's extended reflections
on this subject. I would agree with him that people no
longer look to proof theory for epistemological security (like Hilbert).
That being so, it seems to me that proof theory has two inter-related
(a) A philosophical/foundational project of clarifying the nature of
mathematical proof, in particular making clear the logical strength and complexity
of various assumptions and theorems;
(b) A purely technical project of extracting algorithms from proofs
and estimating their complexity.
2. How well has it met those aims?
I think rather succesfully, in the sense that the work on proof theory
since Gentzen has given us an incomparably clearer picture of the nature
of mathematical proof than the early pioneers of logic had. This clarification
is so great that many things which used to be considered arcane (like estimating
the complexity of axiom systems) is now a matter of routine.
A remark under 1(b) is that proof theory may shortly become technologically
quite significant, in view of the fact that recent computer standards
for internet protocols demand a provably correct algorithm. The intellectual
background for such proofs of correctness is of course that of classical proof theory.
3. What developments or specific achievements stand out?
Well, of course, Gentzen's classic work, Herbrand's analysis,
Kreisel's work on the no-counter-example interpretations,
the work of Feferman and his collaborators on ordinal analysis.
Hmm ... I don't consider myself a professional proof theorist, so I had better
stop here or risk showing my ignorance.
4. What, now, are the main open problems?
No question in my mind: NP =? co-NP and related questions of propositional
proof complexity.
5. Are there any neglected directions in proof theory?
I am disappointed more proof theorists don't work on propositional proof
complexity. So far, it's mostly been the domain of computer scientists.
6. In what ways can or should proof theory relate to other parts of
logic/foundations/mathematics/computer science/linguistics/etc.?
I've answered that, I think, implicitly, in some of my answers above.
7. Where do you think the subject is heading?
Where do you think it should be heading?
See answer to #5 above.
8. Do you think more students/researchers should be involved in the
subject? If so, how can they be encouraged to do so?
That's a little tough, but one thing I would like to see is more
emphasis in the literature on open problems. In my opinion, logicians
are rather poor at this, compared to other mathematicians.
9. Do we need more/different kinds of general expository texts? What
about specialized texts?
At the moment, I think the subject is fairly well served, particularly
since Sam Buss's excellent "Handbook of Proof Theory" appeared.
There are good expositions by Schuette, Girard, Schwichtenberg
and others of the classic material, and also more recent material
by Krajicek, Pudlak, and others.
10. Overall, how would you rate the state of health of our subject? What
is its promise for the future?
Proof theorists always seem to be worrying about the health of their subject.
I suppose this is a delayed reaction to the collapse of the romantic dreams
of the Hilbert programme. Or perhaps it's a reflection of the fact that
proof theory is a rather distant descendant of Kant's critical philosophy.
Hard to say. Anyway, speaking as a kind of outsider, I think the subject is
pretty healthy, and it should definitely take pride in the fact that
it contains one of the most fundamental open problems in logic (I mean
NP =? co-NP, of course).
Alasdair Urquhart
Date: Sun, 5 Sep 1999 15:25:38 -0400 (EDT)
From: urquhart@urquhart.theory.toronto.edu
To: sf@csli.Stanford.EDU
Cc: urquhart@urquhart.theory.toronto.edu
Subject: Re: Proof Theory on the eve of Year 2000
Dear Sol:
Well, yes, I think that NP =? co-NP is definitely a problem in
proof theory. It is most naturally stated in terms of the size
of propositional proofs of tautologies. So, here we have the most
basic form of classical reasoning, truth-functional reasoning, and
we are asking whether there is a short proof for any tautology
("short" being defined as "polynomially bounded"). I can't think
of a more natural problem in proof theory. Add to this that it
is closely connected with the proof theory of feasible arithmetic,
and it seems clear to me that it is a classic problem of proof theory,
though one that was quite unconsidered by the early pioneers.
There's a tendency in the proof theory community, I think, to
restrict attention to the problems that are amenable to the
classical techniques. Propositional proof complexity falls largely
outside these, since cut elimination is of no help
in most cases. I am all in favour of defining proof theory broadly,
though, as Sam Buss did in his recent Handbook. This broadening
of scope can only be good for the health of the subject.
It's a striking fact to me that some of the most interesting recent extensions
of the concept of proof, such as that of a probabilistically checkable
proof, have come from outside the proof theory community. These are
not proofs in the classical Hilbert sense, of course, but from a foundational
point of view, if you define "proof" as something that convinces you, then
they really do seem to be proofs in that sense.
Best regards,
Date: Mon, 6 Sep 1999 21:26:45 -0700
From: Michael Beeson
To: Solomon Feferman
Subject: Re: Proof Theory on the eve of Year 2000
1. What are, or should be the aims of proof theory?
Originally, the aim was to give consistency proofs for mathematical
theories. That was, as I understand the matter, the aim of both Hilbert
(pre-Goedel) and Gentzen (post-Goedel), and was still the aim in the fifties
when bar-recursive functionals were introduced to give a consistency proof
of analysis. Later this aim was refined to the present apparent aim of
classifying and stratifying mathematical theories according to
"proof-theoretic strength".
2. How well has it met those aims?
I think it has met them very thoroughly indeed, in view of the
fundamental limitations of which we are now aware.
3. What developments or specific achievements stand out?
This you would know far better than I!
4. What, now, are the main open problems?
Within that research program, there are only technical questions
5. Are there any neglected directions in proof theory?
I think that the aim of giving an account of mathematics as it actually
is done has never been taken seriously.
6. In what ways can or should proof theory relate to other parts of
logic/foundations/mathematics/computer science/linguistics/etc.?
I have used some elementary ideas from proof theory in the design and
construction of software for mathematics education (Mathpert Calculus
Assistant). Good examples of the relations between proof theory and
computer science would be the discovery of 2nd-order lambda-calculus and
the way that the computer language ML developed, although I don't really know
the history--perhaps proof theory was not involved in the history of ML.
It seems to me that there are various social pressures to show "applicability"
of research and that these pressures sometimes lead researchers to
exaggerate the depth of [potential] connections of their work to other
I was very impressed with Montague semantics when studying natural
language processing. It's a good example of a fruitful application of,
if not proof theory itself, then tools that were invented by proof theorists
for proof theory. I still think it could be developed further, but as far
as I know, those who are doing natural-language processing aren't using
7. Where do you think the subject is heading?
More theories will be constructed, and more variants of existing theories,
and they will be interpreted one in the other, and after heavy labors,
ordinals will be assigned.
Where do you think it should be heading?
Well, in my ignorance, I would say that the subject is just about
"finished". Of course, that has been said of other subjects in the past
[for example, inorganic chemistry] by other foolish mortals, and of course
it can turn out that the subject fires up again in a rush of new and amazing
discoveries. I can't imagine what they will be, but see the "promise for
the future" question at the end.
8. Do you think more students/researchers should be involved in the
subject? If so, how can they be encouraged to do so?
Try me as a test case. I have a sabbatical next year (calendar 2000), and
I haven't decided what to do with it. I could work on
(a) branch points of minimal surfaces
(b) developing more mathematical software for the Web
(c) learning computational biology
(d) some new and exciting project in proof theory.
Tempt me! (b) and (c) have the "fire of the times" in them, while (a) has
"timeless beauty".
9. Do we need more/different kinds of general expository texts?
Aren't you happy with Schwichtenberg and Troelstra? I thought it filled a
serious gap.
What about specialized texts?
Well, the texts of Takeuti and Schutte are pretty hard to read. I'm sure
you could write a MUCH better one. I doubt if anyone else could.
10. Overall, how would you rate the state of health of our subject?
It's a very mature subject, but in no danger of dying. Its main results are
gems that will be contemplated by logicians for millennia to come.
What is its promise for the future?
Over the long-term, I think the QED project is fascinating and inevitable,
and I think proof theory should begin laying the foundations, by developing
tools that take into account the structure of mathematical theories. The
traditional logical theories abstract away much of mathematical practice in
order to arrive at the "essence" of the matter, to facilitate
metamathematical studies. Now that we have succeeded in analyzing the
essence, perhaps it's time to put back what was abstracted away. This was
part of the motivation for my various papers on proofs and computation. I
would like to see theories constructed and studied that directly account for
published mathematics. This means paying attention not only to the role of
computation, but also to the organization of mathematical theories,
corresponding to what in computer science is called "inheritance".
The Mizar project has begun to formalize mathematics, but without the aid of
proof theorists. There is no "theory of theories" and mathematics in Mizar
has no organization at all, it's just a collection of different computer
files, each containing some mathematics. The hardest part of working in
Mizar is finding what has already been proved "somewhere". Is this just a
problem to be handled with an ordinary database, or is there some proof
theory to be done? Of course if there is proof theory, it isn't the
traditional kind. But there is a lot of work to be done before QED can be
properly implemented (funding questions aside), and some of it could
probably be considered "proof theory".
Date: Wed, 8 Sep 1999 13:56:03 -0700 (PDT)
From: Sam Buss
To: sf@csli.Stanford.EDU
Subject: Re: Proof Theory on the eve of Year 2000
I attach below some of my answers to at least some of
your questions. I'd be interested in hearing the results
of your poll, and the conclusions you reach for your talk.
In particular, I am participating in a panel at the Annual
ASL meeting next summer, on the "Prospects for Mathematical Logic
in the 21st Century", so the results of your questionaire
may be helpful to me in forming my own thoughts for the
So caveats: my own viewpoint is at least as much from
the viewpoint of theoretical computer science as from logic.
And please treat my comments as initial thoughts rather
than fully worked-out opinions.
--- Sam
> 1. What are, or should be the aims of proof theory?
Understand the nature of *formal* reasoning, its relationship
to informal reasoning, and its use in mathematics and other
FOR A LONGER ANSWER, see my opening discussion in the Handbook
of Proof Theory. In the first sentence of the first chapter in
Handbook, I wrote:
"Proof Theory is the area of mathematics which studies the
concepts of mathematical proof and mathematical provability."
This was followed by two paragraphs about the difference between
formal and social proofs, and then by
"Proof Theory is concerned almost exclusively with the study of
formal proofs: this is justified, in part, by the close connection
between social and formal proofs, and it is necessitated by the fact
that only formal proofs are subject to mathematical analysis."
I then listed four main tasks of proof theory: covering a) proof-theoretic
strengths of particular formal systems, b) the study of proofs as objects,
c) extraction of additional information (e.g., computational or
constructive) from proofs, d) computer-based search for/construction of
> 2. How well has it met those aims?
> 3. What developments or specific achievements stand out?
> 4. What, now, are the main open problems?
The aims have been spectacularly recognised in some respects:
the existence of sound and complete proof systems for first
order logic, proof-theoretic strength of formal systems,
structural proof theory (cut elimination, etc.), relationships
between provability and sub-recursive hierarchies, constructivity,
predicativity and impredicativity, independence results for
arithmetic and set theory.
Much progress remains to be done in other "big" problems:
* Better understanding of "social proofs", i.e., the use of
intuition (geometric or otherwise) in creating and
understanding proofs. Understanding the relationship
between social proofs and formal proofs
* The P/NP problem and related problems in complexity theory,
including problems about proof complexity.
* Computerized proof search and computer aided proof construction
is widely used, but almost no mathematical theory is
known about the effectiveness or optimality of present-day
algorithms (This may involve settling the
P vs NP problem, but also includes investigations
into strong proof systems such as type systems,
explicit mathematics, strong constructive proof systems, ...)
> 5. Are there any neglected directions in proof theory?
> 6. In what ways can or should proof theory relate to other parts of
> logic/foundations/mathematics/computer science/linguistics/etc.?
Proof theory is the area of logic best equiped
to deal with problems outside of mathematics. It is, or should be,
widely used in computer science, linguistics, artificial intelligence,
knowledge engineering, etc. Of course, it has serious contenders
for relevance to these areas, including contenders like neural nets, etc.
So it is frankly unclear whether logic will, in the end, be useful
for these areas. But I think it *should* be used in these areas.
> 7. Where do you think the subject is heading?
> Where do you think it should be heading?
> 8. Do you think more students/researchers should be involved in the
> subject? If so, how can they be encouraged to do so?
> 9. Do we need more/different kinds of general expository texts? What
> about specialized texts?
> 10. Overall, how would you rate the state of health of our subject? What
> is its promise for the future?
In general, I am dismayed by the relatively low level of knowledge
and low level of interest held by mathematicians for the subject of
logic in general (this applies in spades for proof theory). Logic
courses have more-or-less vanished from the undergraduate mathematics
curriculum. (My comments here apply only to the US).
To a lesser extent, theoretical computer science also seems to
have a dearth of PhD students, presumably because of the poor academic
employment opportunities in theory of computer science.
Proof theory needs, on the one hand, to become more relevant to
working mathematicians and to the foundations of mathematics;
and, on the other hand, to deepen its applications to areas
such as computer science, linguistics, knowledge theory, etc.
-- Sam Buss
Date: Tue, 14 Sep 1999 11:43:43 -0600
From: detlefsen.1@nd.edu
To: Solomon Feferman
Subject: Re: Proof Theory on the eve of Year 2000 (fwd)
Thanks, Sol. I have a few responses to make. Hope it proves to be of some
use to you.
>1. What are, or should be the aims of proof theory?
I have to admit that when I read work in proof theory I am sometimes at a
loss as to what its aims are. As regards what its aims should be, I like
Hilbert's answer.
"Our formula game that Brouwer so deprecates has, besides its mathematical
value, an important general philosophical significance. For this formula
game is carried out according to certain definite rules, in which the
technique of our thinking is expressed. These rules form a closed system
that can be discovered and definitively stated. The fundamental idea of my
proof theory is none other than to describe the activity of our
understanding, to make a protocol of the rules according to which out
thinking actually proceeds. Thinking, it so happens, parallels speaking and
writing: we form statements and place them on behind another. If any
totality of observations and phenomena deserves to be made the object of a
serious and thorough investigation, it is this one ..." Foundations of
Mathematics, p. 475 (van Heijenoort)
He went on to say that use of the classical principles of logical reasoning
was particularly central and important to our thinking, that thinking has
been conducted in accordance with them ever since thinking began (vH, 379)
and that reasoning which proceeds in accordance with them is 'especially
attractive' (l.c., 476) because of its 'surprising brevity and elegance'
(ibid.). Taking classical logical reasoning away from the mathematician, he
then famously said, would be like taking away the telescope from the
astronomer and depriving the boxer of the use of his fists. In a somewhat less
passionate statement he remarked that 'we cannot relinquish the use ... of
any ... law of Aristotelian logic ... since the construction of analysis is
impossible without them' (l.c., 471).
>2. How well has it met those aims?
I don't think that the aims announced by Hilbert have been well met by
proof theory. In particular, I think:
(i) That there has been little if any significant progress in saying
exactly in what the efficiency of classical logical reasoning (the
so-called 'brevity and elegance' that so impressed Hilbert) consists. What
is needed, it seems to me, is development of a notion of complexity (and a
metric for measuring it) that is concerned with the relative difficulty of
discovering proofs, not the difficulty of verifying (or falsifying) a GIVEN
sequence of formulae as a proof in some given system. The former, and only
the former, it seems, bears any interesting relation to the 'brevity' that
Hilbert was interested in. He valued classical logical reasoning because he
thought it made proofs easier to find. He was thus interested more in what
might be called the 'inventional' complexity of proofs (i.e. the complexity
of finding proofs) than in the 'verificational' complexity of proofs (i.e.
the complexity of verifying of a sequence that it is a proof). So, at any
rate, it seems to me. As mentioned, I don't believe that proof theory has
made much progress in analyzing this crucial notion of complexity.
>3. What developments or specific achievements stand out?
The analysis of the notion of a computable function. Goedel's and Church's
theorems. The work by Hilbert-Bernays, Loeb, Kreisel, Feferman, Jeroslow
and others that bears on our understanding of what a formal system is and
what serves to distinguish co-extensive formal systems from one another.
(This bears on the fundamental question of theory-identity.) Gentzen's
attempts to describe 'natural' logical reasoning.
>4. What, now, are the main open problems?
>5. Are there any neglected directions in proof theory?
Yes, see answer to (2).
>6. In what ways can or should proof theory relate to other parts of
>logic/foundations/mathematics/computer science/linguistics/etc.?
>7. Where do you think the subject is heading?
> Where do you think it should be heading?
>8. Do you think more students/researchers should be involved in the
>subject? If so, how can they be encouraged to do so?
>9. Do we need more/different kinds of general expository texts? What
>about specialized texts?
I think more and better expository works are needed in every field of
logic. Proof theory is no exception. I would like to see a different type
of expository text, though ... namely, one that made more effort to relate
technical developments to the historical and philosophical origins of the
subject (such as those enumerated by Hilbert in the remarks quoted in the
answer to (1)).
>10. Overall, how would you rate the state of health of our subject? What
>is its promise for the future?
Date: Wed, 15 Sep 1999 18:00:05 -0400 (Eastern Daylight Time)
From: Jeremy Avigad
To: Solomon Feferman
Subject: Re: PT on the eve of Y2K
Dear Sol,
I would have liked to have more time to think about these questions, and I
would have liked to polish these answers. But given my busy semester, this
will have to do. I hope you find these comments useful.
1. What are, or should be the aims of proof theory?
I'd rather not be dogmatic about this: lots of things go on in proof
theory that are only tangential to my own personal interests, but are
nonetheless interesting and important. So I will discuss the aspects of
proof theory that *I* am attracted to, without claiming that these are the
only legitimate motivations.
I think that today one can distinguish between "traditional" and "modern"
aspects of the subject. By the former, I mean proof theory as the formal
study of mathematical practice, with an eye to illuminating aspects of
that practice; by the latter, I mean the more general mathematical study
of deductive systems and their properties. Of course, there is no sharp
line dividing the two, since foundational questions lead to mathematical
ones, which in turn influence our foundational views. But I do think one
can discern a difference in emphasis. In any event, I am personally
interested in the "traditional," foundational side.
On an informal level, we human beings harbor various and conflicting views
as to what mathematics is: e.g. mathematics is about abstract mathematical
objects, vs. mathematics is about calculation and construction;
mathematics is to be valued because it is useful, vs. mathematics is to be
valued because it is an art, or esthetically pleasing; mathematics is an
objective science, vs. mathematics is a human creation; etc., etc. I don't
mean to imply that each of these dichotomies offers characterizations that
are mutually exclusive, but rather, different viewpoints. Mathematics is a
central human endeavor, and our informal understanding affects the ways we
do mathematics, the way we teach mathematics, and the kinds of mathematics
we deem worthy of support.
In different ways, the philosophy of mathematics and mathematical logic
try to make our informal understanding explicit and precise. What I am
calling traditional proof theory studies idealized models of mathematical
activity, in the form of deductive systems, with that goal in mind. This
formal modeling helps us understand the basic concepts of mathematics, the
language, the basic assumptions, and rules of inference, the styles of
argument, the structure of various theories, and the relationships between
them. (Though, I would hasten to add, proof theory does not tell the whole
story!) The mathematical analysis of these deductive systems then enables
us to explore the relationship between abstract and concrete aspects of
the subject: mathematics is abstract, in that it allows us to discuss
infinite sets and structures, but concrete, in that the rules governing
the discourse can be described explicitly. One then finds that these
explicit rules can often be mined for additional information.
2. How well has it met those aims?
I'd say quite well. We now have a wealth of formal systems that model
different aspects of mathematical practice, both classical and
constructive, in the languages of first-order or higher-order arithmetic,
set theory, type theory, explicit mathematics, etc., etc. We know a lot
about the relationships between the various systems, and their models. We
have a number of ways of understanding the "constructive" content of a
theory, in terms of its computational and combinatorial strength. Taking
these results all together provides a rich and satisfying account (though,
to repeat, not a complete account) of mathematical activity.
3. What developments or specific achievements stand out?
It would take me a long time to make up a good list, and then I'd worry
about what I'd left off. But I would start with the completeness and
incompleteness theorems, and the formal modeling of mathematics from Weyl
and Hilbert-Bernays to developments in Reverse Mathematics; and I would
include ordinal analysis, reductions of classical theories to constructive
ones, functional interpretations and the propositions-as-types
isomorphisms, combinatorial and other independences, and so on. All of
these provide illumination in the sense I have described above.
4. What, now, are the main open problems?
Perhaps the biggest difficulty that (traditional) proof theorists face
is that the field is not driven by open problems. The aim is to find
characterizations and insights that are "illuminating," and it is hard to
know what further kinds of illumination one can hope for, until one finds them.
(Even the problem of "determining" the proof-theoretic ordinal of analysis
is not an open problem in the usual mathematical sense, since what is
required is a *natural characterization* of the ordinal, and there is no
precise specification of naturality. Notable exceptions to the statement
above occur in the context of proof complexity: e.g. questions as to the
provability of the pigeonhole principle in weak theories, or separating
Frege systems from extended Frege systems, seem to me to be important
mathematical open questions that are also of foundational interest.)
Here, however, are some general directions that I think should be pursued:
-- We should work towards better understanding of the way that strong
theories of analysis and set theory can have a bearing on the "concrete,"
say, regarding objects routinely encountered by algebraists and analysts,
or regarding Pi_2 (or even Pi_1!) statements of arithmetic.
-- We should work towards a constructive understanding of classical
mathematics that may, one day, be genuinely useful to computer scientists
-- that is, a general understanding of the way that abstract mathematical
knowledge can be represented and manipulated symbolically.
-- We should work towards a better understanding of what can or cannot be
done with weak theories of mathematics -- elementary, feasible, etc., e.g.
regarding the extent to which real analysis can be carried out in feasible
theories, or regarding the kinds of combinatorial principles that can be
carried out in extremely weak theories.
-- We should work towards deeper and more realistic models of mathematical
proof, that go beyond the low-level "axioms + logical inference"
description. E.g. it would be nice to see attempts to explain what makes a
theorem natural, or a proof explanatory, etc. (This could be useful for
automated deduction, as well as for philosophical reasons.)
5. Are there any neglected directions in proof theory?
None that I can think of offhand; all the directions I just described are
actively being pursued, though the community is small, and there is plenty
of work to do.
6. In what ways can or should proof theory relate to other parts of
logic/foundations/mathematics/computer science/linguistics/etc.?
If the goal is to understand what mathematics is, the more perspectives
one has, the better the picture. I would personally like to see lots of
interactions with all these areas, which provides a good way of grounding
the research.
7. Where do you think the subject is heading?
Where do you think it should be heading?
See 4.
8. Do you think more students/researchers should be involved in the
subject? If so, how can they be encouraged to do so?
Absolutely. How to attract students? The usual ways: good research, good
teaching, good exposition, good textbooks, etc., etc.
9. Do we need more/different kinds of general expository texts? What
about specialized texts?
As always, the more, the better. I am hoping to start work on an
introductory text this summer, using notes that I developed for a course
in Proof Theory that I taught last Spring.
10. Overall, how would you rate the state of health of our subject? What
is its promise for the future?
At times, the field seems beleaguered -- for example, there are very few
American proof theorists of my generation. But at the core, I am
optimistic: the issues are profound, compelling, timely, and relevant, and
I expect that the field will always find its adherents and supporters.
Date: Mon, 4 Oct 1999 16:20:03 +0200 (MET DST)
From: Ulrich Kohlenbach
To: sf@csli.stanford.edu
Cc: Ulrich Kohlenbach
Subject: Proof Theory on the eve of Year 2000
Dear Sol,
as a comparatively young researcher and reading the names of so many
leading senior proof theorists in the heading of your mail I feel not quite
comfortable to comment on your rather profound questions about the
state of the art and the future of the whole subject of proof theory.
However, as you repeatedly urged me to do so, here are some (sketchy)
remarks to some of your questions.
With best regards,
1. What are, or should be the aims of proof theory?
I believe that any attempt to give an exhaustive definition of what proof
theory is restricts the scope of this area and its applicability to
other fields in ways which are not intended.
So I will only sketch ONE particular type of questions studied
in proof theory which I have been interested in for several years and
which can be summarized by the very general question:
What is the status of the use of a certain principle P in a proof
of a conclusion C relative to a deductive framework F?
For example this covers conservation results ("P can be eliminated
from proofs of C in F+P"), reverse mathematics ("P is equivalent to C
relative to F"), but also many other interesting questions like:
When can a non-constructive P be replaced by some constructive
approximation ("constructive content of proofs of C in F+P")?
When can an analytical principle P be replaced by an arithmetical one
("arithmetical content of proofs")?
What is the impact of the use of P on the computational complexity of
a realization of C (e.g. if C is Pi-0-2, what is the contribution of
P to the growth of provably recursive functions in F+P)?
Is there a speed-up of F+P over F for proofs of sentences C?
The context F here can be e.g. a (weak) base system, a certain
logical framework (e.g. intuitionistic logic), but also structurally
restricted classes of proofs.
Obviously, the questions just sketched are related to Kreisel's question
"What more do we know if we have proved a theorem by restricted means
than if we merely know that it is true?"
Of course, such questions can partly be also addressed using tools
from model theory instead. However, proof theory focuses on effective
solutions, e.g. an explicit elimination procedure in case of a
conservation result etc.
A more method-oriented answer to question 1) would be:
Proof theory investigates proofs
a) according to structural properties (normalization, cut-elimination, etc.)
b) by proof interpretations (functional and realizability
interpretations etc.)
with the aim to obtain new information about proofs or classes of
proofs (theories).
5. Are there any neglected directions in proof theory?
In my view, proof theory notoriously lacks examples from mathematics,
i.e. (classes of) actual mathematical proofs which illustrate
proof theoretic phenomena and give rise to applications of proof
theory. Let me illustrate this by an example: conservation results
for weak Koenig's lemma WKL have received quite some attention, e.g.
in connection with reverse mathematics. However, very little focus has
been given to the question whether there are any actual mathematical proofs
which essentially use (in the normal mathematical sense) WKL but whose
conclusions belong to a class of formulas for which WKL can be
eliminated (so that proof theoretically the use of WKL is not
essential). To my knowledge, only Artin's solution of Hilbert's
17th problem as well as a whole class of uniqueness proofs in
analysis (Chebycheff approximation theory), which I studied some years
ago, are known to be non-trivial instances of such WKL-conservation
results and in both cases, proof-theoretic analysis turned out to be fruitful.
I strongly believe that further such case studies would result in
new proof theoretic techniques and insights and even new
foundationally relevant results (the case studies in approximation
theory mentioned above incidentally yielded general meta-theorems
of the form "non-constructive uniqueness yields effective existence"
for a LOGICALLY rather special type of formulas which, however,
covers a large class of important uniqueness results in analysis.
Counterexamples for slightly more general formulas show that it is
important to focus on this logically special (but mathematically
broad) class of formulas extracted from the case studies).
In my view, the success of model theory in giving interesting
applications to mathematics is due to the restriction to logically
very special but mathematically significant structures, whereas proof
theory focuses on general classes like ALL proofs in a certain system.
Of course, applications to other areas are by no means the only
(and probably not even the main) motivation for proof theoretic
research but add to the more traditional foundational aims (see
also the next question).
6. In what ways can or should proof theory relate to other parts of
logic/foundations/mathematics/computer science/linguistics/etc.?
I believe that it is very important to stress that in addition to the
well established uses of proof theory for foundational investigations,
proof theory can also be applied as a tool: in computer science this
has already proved quite successful. Applications to
mathematics are still rare but there is potential here (see
question 5). I think one should avoid playing off foundational
aims against applications in mathematics/computer science and vice
versa. Both are important for securing proof theory a place in the
academic system.
8. Do you think more students/researchers should be involved in the
As I am excited about proof theory I am of course more than happy
about every student who likes to get involved in the subject.
However, the virtually non-existent job opportunities for logicians
and in particular proof theorists make it currently quite problematic to
actively encourage students to do so.
9. Do we need more/different kinds of general expository texts? What
about special texts?
Yes, I think there could be more texts which are appealing to ordinary
mathematicians and computer scientists (or at least to other
logicians) which introduce basic proof theoretic concepts using actual
proofs in mathematics as examples. For instance, a general treatment
of "interpretative" proof theory (Herbrand's theorem,
no-counterexample interpretation, realizability, functional
interpretations, ...) based on applications of these techniques to
specific proofs thereby discussing their respective merits and
possible uses would be very useful.
10. Overall, how would you rate the state of health of our subject?
What is its promise for the future?
Overall, I think that proof theory has been extremely successful in
its applications to foundational questions and also has become an
important applied tool in computer science. The links to ordinary
`core' mathematics are, however, relatively weak, which constitutes a great
danger for the survival of proof theory in mathematics departments.
The scientific standard and the level of critical reflexion about
motivations, aims and intellectual relevance are usually very
high (sometimes coming close to being self-destructive).
The dissemination of the achievements of proof theory to a broader
community of logicians, mathematicians and computer scientists has
not yet been equally successful.
Date: Thu, 7 Oct 1999 18:43:23 +0200 (MET DST)
From: Lev Beklemishev
To: sf@csli.stanford.edu
Subject: Questionnaire
Dear Professor Feferman,
Here are the answers to your questions.
> 1. What are, or should be the aims of proof theory?
I would not specify particular "aims" in the sense of concrete
goals to be achieved. Rather, I would define the subject as broadly
as possible, e.g., as the study of the notion of proof and proof systems
in all possible variants and aspects. (These systems need not
always relate to conventional ones. I would also predict that
in the future the variety of *kinds* of proofs we deal with
will noticeably increase. )
I think that it is important not to draw any fixed borders
of our subject, and to be ready to expand the current areas of
interests of proof-theorists. (Compare, for example,
with the evolution of the meaning of the subject "geometry"
from Euclides through Klein to the contemporary mathematics.)
> 2. How well has it met those aims?
> 3. What developments or specific achievements stand out?
I think proof theory is a wonderful and deep subject.
Memorable developments that come to my mind (just the first
few, without much thinking):
0) Formalization of logic and mathematical reasoning;
1) Goedel's incompleteness theorems;
2) The discovery of structural proof-formats: sequent calculus,
resolution, ...
3) The discovery of epsilon_0 and the role of proof-theoretic ordinals;
4) The analysis of predicativity;
5) Isolation of meaningful parts of analysis and set theory
and their reductions;
6) Extraction of effective bounds from proofs;
> 4. What, now, are the main open problems?
a) I think that the most important problem is to find new
meaningful applications and connections of proof theory with the other
logic-related areas. It is not a technical
mathematical question, but it is a problem for proof-theorists
in the sense that they should actively look for such
applications. Under the applications of proof theory I do not
necessarily mean the "practical" applications in areas such as AI or CS.
It is also important to establish new connections with the
other branches of logic and mathematics.
[One recent application of the type I have in mind
was a connection discovered (by Razborov, Pudlak, Krajicek,
et al.) between some natural problems in bounded
arithmetic, effective Craig interpolation theorems in propositional
logic, and cryptographic conjectures. This was motivated by a
proof-theoretical question of independence of P=NP from bounded
arithmetic theories.]
b) A good idea would be to collect and publish and/or put on a
web a classified list of technical open problems in proof theory.
(Such lists do exist, for example, in group theory.)
This could, in particular, stimulate fresh researchers
and students and help us to set the next goals.
> 6. In what ways can or should proof theory relate to other parts of
> logic/foundations/mathematics/computer science/linguistics/etc.?
At present the situation appears to be
not fully satisfactory
(and not only for proof theory, but for logic as a whole).
I think that the kinds of applications one can expect from proof
theory are not so much the particular theorems, but rather
the background "culture": proof theory can provide appropriate
language, basic tools (like cut-elimination), proper ways of
formulating problems.
> 8. Do you think more students/researchers should be involved in the
> subject?
Yes, we need more students, and I think there is a need
for the popularization of proof theory.
> If so, how can they be encouraged to do so?
At the moment this is difficult for objective reasons (not only
because of the lack of textbooks): the area
appears perhaps too self-oriented, and too technical.
In other words, the students generally do not see the benefits of
learning proof theory. I have even heard qualified logicians
express the opinion that they do not understand what
proof-theorists are doing.
The only way of curing this disease is to
put some effort in clarifying the basics and specific problems of
proof theory on the one hand and to come up with
new promising problems and directions of research on the other.
> 9. Do we need more/different kinds of general expository texts? What
> about specialized texts?
I think the best texts on proof theory, so far, are
in the handbooks. In particular, I find the recent handbook
on proof theory an excellent source.
The situation with the textbooks is less satisfactory.
We certainly need well-balanced
textbooks accessible for a beginner and/or an outsider.
> 10. Overall, how would you rate the state of health of our subject?
> What is its promise for the future?
As that of a generally healthy man, who somewhat neglected
to monitor his health lately. But there is no reason to panic:-)
With best regards,
Date: Tue, 16 Nov 1999 11:33:42 -0800 (PST)
From: Solomon Feferman
To: Solomon Feferman
Subject: PT on the eve of Y2K
A natural reaction to the original ten questions I posed is to
ask, why send out such a survey at all? Is reading it going to affect
what anyone is doing? Doesn't the field just self-propel in all its
different directions without having to think about where we're heading? I
understand such doubts and don't have a convincing answer. But I think
that all along proof theory has been more introspective than most other
fields of research, and that has to do with its origins in Hilbert's
program and the necessary steady retreat that has been made from that. We
can point to a remarkable record of accomplishment, but in many cases we
have difficulty in saying to the outside world what the nature of that
accomplishment is. I think of this as a search for our authentic
identity, and that some exercises like this in self-assessment can help us
find that.
To turn now to the questions themselves, like several other
respondents I have found it easier to deal with the questions more or less
as a whole rather than one by one, though of course taking them as a point
of departure. In doing so, I mainly want to concentrate on open
problems/neglected areas, which are relevant to all the questions.
Conceived of most broadly, proof theory should aim to study proofs
in the widest sense possible to make it a subject for systematic
metamathematical treatment. Mathematical proofs are the richest and most
closely examined source of material for such a study, and it is natural to
give greatest prominence to them. In a catholic view of proof theory one
should not be so confined, and I do so only because that's what I know the
best. Even with this restriction, it is not clear that our theory of
proofs does justice to the subject matter. I take it for granted that
there seems to be no way that various properties of proofs of interest to
mathematicians--such as being intelligible, beautiful, clever, novel,
surprising, etc.--can be part of a systematic metamathematical study.
Relatedly, I don't expect our theory could explain what makes one proof
*more* intelligible, beautiful, etc. than another. On the other hand,
there is a basic relation between proofs, of interest to mathematicians,
which we would expect to be accounted for in our theory and which is not,
namely their identity:
(Q1) When are two proofs (essentially) the same?
It's a bit of a scandal that we have no convincing theoretical criterion
for that.
Let's step back for a moment. Granted that we have to take an
idealized model of proofs for a metamathematical study, in what sense and
to what extent can we support the thesis that every proof can be
formalized? Of course we know that the issue of the problematic role of
diagrammatic and other heuristic arguments in geometric proofs goes back
to ancient times, but they seemed to be essential for discovery and
communication. Officially, they can be dispensed with, but in practice
they can't. Coming up to the present day, one leading mathematician has
told me that there are proofs in topology which can't be formalized,
because they involve elements of visualization in some essential way. And
some logicians in recent years, such as Barwise and Etchemendy, have been
investigating what they call heterogeneous reasoning, e.g. proofs that are
manipulated on computer screens that involve diagrammatic and iconic
elements in some significant way. It's not clear to me if they are making
the case that these are non-formalizable in anything like the usual sense.
So this leads to the following question:
(Q2) Can all mathematical proofs be formalized?
An answer to this question would require something like Church's Thesis:
(Q3) What is the most general concept of formal proof?
The common view is that the notion of formal proof is relative to the
concept of formal axiomatic system. But mathematical proofs don't wear
their underlying assumptions (*axioms*) on their sleeves, and the usual
process of representing proofs formally does so only with reference to
more or less standard background systems. Is there a sense in which
proofs can be considered in isolation, or must they always be referenced
to a background system? Supposing one had an answer to (Q3). Then one
could hope to engage in case studies of challenges to a positive answer to
(Q2). But I think it would be profitable to pursue such case studies even
without an answer to (Q3).
Here's a related matter that I find puzzling. It's common
experience that the standard sort of formal representation of informal
statements captures their structure and content quite well, and that there
is little uncertainty about how to pass back and forth by hand between the
informal and formal statements. Students take to this readily, and we can
quickly and convincingly identify errors in mis-representation for them.
Note that none of this seems to require prior determination of a
background formal language in which these are to be represented.
Moreover, once such a language has been specified, it is easy to check
mechanically whether a proposed formal representation of a statement is
indeed well-formed according to the language.
[An aside: it's my impression that there is no generally accepted standard
representation of natural language statements at large. What is it about
mathematical statements that makes them special in this respect?]
But now, when we come to proofs, there is no generally agreed upon
standard representation, and for the various kinds of representations
(e.g. Frege-Hilbert style, Gentzen-Prawitz natural deduction style,
Gentzen sequent style, etc.) which have been developed for proof theory,
there is no immediate passage from the informal proof to its formal
representation (or even vice-versa) and no generally accepted means of
checking such proofs for well-formedness (*correctness*) mechanically, let
alone by hand. The proof theory that evolved from Hilbert's program
didn't concern itself with such questions. Rather, once one was convinced
that a given body of proofs can be represented in principle in a certain
way, the only concern was whether a contradiction might be derivable from
such a body of proofs.
(Q4) Why isn't there an easier transition from informal but in-principle
formalizable proofs to actually formalized, mechanically checkable proofs?
Even if one has doubts, as I do, about the value of having proofs checked
mechanically (because, it seems to me, if you can be clear enough about a
proof to set it up for checking, you can already convince yourself of its
correctness), there should be value in analyzing what's needed to deal
with (Q4), since the conceptual problems that will need to be handled
along the way are of independent interest. Perhaps we've been trying to
do too much in one go--from informal proof to proof on the machine--and
instead one should break it up in a series of steps as has been done for
the transition from informal algorithms to their implementations.
All of the preceding belongs, I guess, to structural proof theory,
a subject which has certainly flourished in recent years through the study
of substructural logics, especially linear logic and its kin. Being
completely out of that area, I'm in no position to assess its current
state of health and prospects, though the fact that there continues to be
high activity is evidence that both are good. Another area of
considerable recent development is the application of notions of
complexity to propositional proof systems, which likewise seems to be
being pursued energetically, which is all to the good. Since some of the
basic general problems of complexity theory such as P = ? NP are
equivalent to statements about such systems, there is evidently hope that
the answer will come through an application of proof-theoretical methods.
Time will tell; I'm optimistic that the problem itself will be settled
within the next decade, and then we can see what method(s) were the
successful ones.
Of course, it would be a feather in the cap of proof-theory if the answer
came from our community rather than, say, the computer science community
or the finite-model theory field.
I turn now to the part of proof theory with which I am most
familiar, and which concerns formal systems for various parts of
mathematics, ranging from fragments of arithmetic through number theory
and analysis up to systems of set theory, with way stations in algebra
along the way. The main directions in this are the *reductive* line of
development stemming from Hilbert's program, and what I've called the
*extractive* line emphasized by Kreisel, but of course stemming from a
variety of sources (characterization of the provably recursive functions,
Kleene realizability, Godel functional interpretation, etc.) . The
reductive line has really split into two related lines, that for lack of
better descriptions I will call here the *strength* line, dominated by the
project of ordinal analysis as well as other measures of strength, and the
*interrelation* line concerned with relations of interpretation, (partial)
conservation, etc., between formal systems. Looking at the developments
in this part of our subject over the last 50 years, I think the progress
we have made is remarkable, and we can truly speak of having mapped out a
substantial proof-theoretical landscape, under steady expansion.
Moreover, the proof-theorists following these lines of work have a great
variety of tools at their disposal and interesting questions of
comparative methodology have emerged. Regrettably, the achievements that
have been made have been insufficiently appreciated by logicians working
in other areas, let alone by the wider mathematical public. I think one
of the reasons for this is the welter of formal systems that have been
studied, and it is here that some introductory texts for general
consumption could serve a useful purpose. They might concentrate on some
few principal, *typical* systems for examples and then explain how and why
one is led to consider refinements (e.g. via restrictions of
comprehension, choice and/or induction principles). By comparison, the
promoters of reverse mathematics have been able to reach a wider audience
by fixing on their five or six systems that every reasonably literate
person can understand, even if the reasons for fixing on those particular
systems are not prima-facie compelling in themselves. Of course, every
interesting subject has its welter of examples and data (viz., degree
theory or large cardinal theory) but the entry to such other subjects is
not prohibitive. If students are to be encouraged to pursue these lines
of proof theory, and I think they should, more expository work will be
needed to bring out the main features of the proof-theoretical landscape.
Now for some questions, beginning with the extractive line. The
very familiar but still basic question asked by Kreisel is:
(Q5) What more than its truth do we know of a statement when it has been
proved by restricted means?
It is hard to know how this might be answered in such generality. In
special cases, for example Pi-0-2 statements or Pi-1-1 statements, we can
say something, such as that we have somehow characterized the associated
provably recursive function or the associated ordinal (of unsecured
sequences), resp. And for Sigma-0-1 statements we would like to say that
a witness can be extracted from a proof. But none of these kinds of
answers is responsive to:
(Q6) What more than its truth do we know of a Pi-0-1 statement when it has
been proved by restricted means?
This could be considered a question for the reductive line, where
consistency statements are to be established by restricted means, e.g. by
quantifier-free transfinite induction on a *natural* well-ordering; but
what does the least ordinal of such an ordering tell us about the
statement? Perhaps (as suggested to me by Peter Aczel) something simpler
can be said, namely as to where these statements sit in their ordering by
implication (over some weak system). Note that one can cook up
incomparable consistency statements, so the order is not linear. Perhaps
there are even natural examples of this sort, given by consistency
statements canonically associated with finitely presented (not necessarily
finitely axiomatized) formal systems, and that leads us to:
(Q7) What is the ordering relation between canonical consistency statements?
Large-cardinal theorists point to the fact of the *well-ordering* of the
consistency statements associated with large cardinals, the motivations
for which came from all sorts of different directions, as evidence of some
inner harmony among them (and perhaps for their justification). But
that's just an empirical fact about one class of consistency statements.
It would be remarkable if that were true of canonical consistency
statements in general.
To get back on the extractive track, one slogan that has motivated
work in this area is that if one can prove a Pi-0-2 statement
constructively (and, indirectly, even classically) then one can extract a
program from the proof. And that accomplishes a double purpose of
providing one with a proof of correctness. So far as I know, no new
programs of interest have been obtained in this way. In general, if some
procedure is proved to exist by usual mathematical reasoning, the
associated function is not feasibly computable. For example, it is
trivial to prove constructively that any list can be sorted (relative to a
given ordering of its items). But completely separate algorithmic
analyses have been necessary to find optimal sorting procedures. That
shifts the burden to the computer scientist. One could try to get around
this by restricting to fragments of arithmetic (or associated fragments of
analysis) whose provably recursive functions are feasible in principle.
But I think even in these cases, one would not come out with optimal
solutions. To be sure, there is a value in developing systematic
procedures for extracting such programs even if they don't give optimal
solutions, and even if they are for already known problems. What I would
hope to see, though, is an answer to:
(Q8) Can we extract previously unknown constructive information such as
in-principle algorithms or bounds from constructive or classical proofs?
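To illustrate the sorting example above concretely: the program one reads off from the obvious constructive existence proof ("a nonempty list has a least element; remove it and recurse") is in effect selection sort, which is quadratic, whereas the optimal procedure comes from a separate algorithmic analysis. The Python sketch below is an illustration under that assumption; the function names are placeholders and are not extracted from any particular formal proof.

def extracted_sort(xs):
    # Selection sort, the algorithm implicit in the naive existence proof:
    # at each step the proof supplies a least element as witness.
    xs = list(xs)
    out = []
    while xs:
        m = min(xs)      # the witness provided at each step -- O(n) work
        xs.remove(m)
        out.append(m)
    return out           # O(n^2) comparisons overall

def optimized_sort(xs):
    # The separately engineered O(n log n) solution.
    return sorted(xs)

assert extracted_sort([3, 1, 2]) == optimized_sort([3, 1, 2]) == [1, 2, 3]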
Once one has such information, one could then subject the situation to ad
hoc non-proof-theoretical analysis to see to what extent the information
can be optimized. Relatedly, one would hope to obtain more applications
of proof theory to mathematics. I researched the literature on this for
my article in the *Kreiseliana* volume, and found that the existing
applications are few and far between, despite the high hopes (and a
certain amount of hype) that originally pushed the *unwinding* program.
What is needed is a sustained effort and the development of general
methods like those which led to many mathematical applications of model
theory, especially to algebra.
(Q9) Can proof-theoretical methods be applied to obtain results of clear
mathematical interest comparable to those obtained by model-theory?
Let's come finally to the reductive direction of work. Here a
basic problem is that the line of development from the Hilbert program has
become so attenuated that one can no longer say in any clear-cut way what
is accomplished. The leading work in that direction is that of Arai and
Rathjen on the subsystem of analysis based on Pi-1-2 comprehension. The
Gentzen-Schuette-Takeuti aim was to prove consistency by constructive
means. However, if the means employed are so complicated--as they are in
the cited work--one cannot assert that one's confidence in the consistency
of this system is at all increased thereby. Setting aside the putative
value of demonstrating consistency, one could try to see if some reduction
to a prima facie constructive system results from that work, in the form
of the interrelation line that I suggested in my article on Hilbert's
program relativized. As far as I know, no such system is on offer. The
other aim that has been pursued was to give up foundational goals in favor
of technical results, namely to give an ordinal analysis of the system in
question. But we have no criterion for what constitutes an ordinal
analysis in general, in terms of which we can see what the cited work
achieves. I am in the process of preparing a paper, *Does reductive proof
theory have a viable rationale?*, which goes further into these matters.
I don't currently have an answer to the question raised in that title,
which includes the work of Arai and Rathjen, but I hope strongly that an
answer will emerge that preserves the philosophical significance of
reductive proof theory. At any rate, to be specific:
(Q10) What has the proof-theoretical work on Pi-1-2 comprehension
accomplished? At the very least, how can the work be made more intelligible?
My belief is that this will require some new concepts, whether what is
accomplished is foundational or is simply an ordinal analysis. In the
latter case, it comes back to the old question:
(Q11) What is a natural system of ordinal representation?
Finally, the ordinal analysis program leads us to questions of
methodology. As it has been carried out in its most extensive
development, this has required the employment of concepts from higher
recursion theory and, through that, of higher set theory, involving large
cardinal notions, their admissible analogues and their formal use in
ordinal representation systems. The phenomenon is even more widespread,
and there must be a good, unifying reason as to why it works as well as it does:
(Q12) What explains the success of large cardinal analogues in admissible
set theory, explicit mathematics, constructive set theory, constructive
type theory, and ordinal representation systems?
In very recent years another method has emerged at the hands of Jaeger and
his co-workers to deal with impredicative systems such as ID_1 and beyond,
that uses only techniques from predicative proof theory (namely, partial
cut-elimination and asymmetric interpretation).
(Q13) What explains the success of metapredicative methods? What is their
limit, if any? Does metapredicativity have foundational significance?
It may be that metapredicativity stands to locally predicative methods
(used for prevailing ordinal analysis) as the use of ordinal
representation systems built via transfinite types over the countable
ordinals stands to those built over higher number classes.
Still another interesting approach to ordinal analysis is the use
of model-theoretic methods recently published by Avigad and Sommer. This
has worked successfully on a variety of predicative systems but it is not
clear yet if it can break into impredicative territory. If it should be
able to compete successfully with ordinal analysis at higher levels, that
in my view would constitute the final detachment from the
Gentzen-Schuette-Takeuti line, and then ordinal analysis would have to
stand on its own as a contribution to the strength line.
To conclude, I have to admit that my feeling about the
foundationally reductive line of proof theory had been rather one of
discouragement, and that we may be at a dead end, or at least at a loss
where to go from here. But, instead, thinking further about the ten
questions I had posed and reading the many useful responses I received has
made me feel *very* encouraged about the current state and future of our
subject as a whole. The further questions I have raised here are my ideas
of general directions in which I think it would be profitable for proof
theory to move in the coming years. Other workers in the field have in
some cases quite different ideas, in other cases ideas that overlap or
complement the above. If one thought that all the interesting questions
had already been answered or could be answered without any essentially new
ideas, we would be in a bad state. I'm convinced just the opposite is
true. The fact that there are so many good questions and such good people
in the field to deal with them ensures its continuing vitality into the
next century.
To be sure, seeing the way through for a foundationally and technically
informative proof theory of strong subsystems of analysis will be a great
challenge, but if the past is any predictor, we can be confident that
people will rise to that challenge. I thus look forward with great
optimism to the developments in the coming years.
Date: Tue, 25 Jan 2000
From: Jan von Plato
1. What are, or should be the aims of proof theory?
Proof theory studies the general structure of mathematical proofs.
The aim is to find concepts and results by which that structure
can be mastered in various branches of mathematics.
A precise representation of mathematical proofs as formal(izable)
derivations is sought. Then questions about derivability (consistency,
independence of axioms, conservativity problems, etc) can receive
precise answers.
One aim of proof theory is to find representations of mathematical
proving that actually help in finding proofs. For example, a Hilbert-
style axiomatic representation of classical propositional logic is
hopeless in this respect, whereas a sequent system with invertible
rules makes the task automatic.
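To make the contrast concrete, here is a minimal backward proof-search sketch for classical propositional logic in a contraction-free sequent calculus whose rules are all invertible (in the spirit of the G3-style systems mentioned below). The Python encoding of formulas and the function name are illustrative assumptions only, not taken from any of the systems discussed.

def provable(left, right):
    # Decide the sequent  left |- right  for classical propositional logic.
    # Formulas are atoms (strings) or tuples ("and", A, B), ("or", A, B),
    # ("imp", A, B), ("not", A).
    left, right = list(left), list(right)
    # Axiom: some atom occurs on both sides.
    if {f for f in left if isinstance(f, str)} & {f for f in right if isinstance(f, str)}:
        return True
    # Decompose the first compound formula found; since every rule is
    # invertible, no backtracking over the choice of rule is ever needed.
    for i, f in enumerate(left):
        if isinstance(f, tuple):
            rest = left[:i] + left[i + 1:]
            if f[0] == "and":
                return provable(rest + [f[1], f[2]], right)
            if f[0] == "or":
                return provable(rest + [f[1]], right) and provable(rest + [f[2]], right)
            if f[0] == "imp":
                return provable(rest, right + [f[1]]) and provable(rest + [f[2]], right)
            if f[0] == "not":
                return provable(rest, right + [f[1]])
    for i, f in enumerate(right):
        if isinstance(f, tuple):
            rest = right[:i] + right[i + 1:]
            if f[0] == "and":
                return provable(left, rest + [f[1]]) and provable(left, rest + [f[2]])
            if f[0] == "or":
                return provable(left, rest + [f[1], f[2]])
            if f[0] == "imp":
                return provable(left + [f[1]], rest + [f[2]])
            if f[0] == "not":
                return provable(left + [f[1]], rest)
    return False  # only atoms remain and no axiom applies

# Peirce's law ((p -> q) -> p) -> p is found automatically:
print(provable([], [("imp", ("imp", ("imp", "p", "q"), "p"), "p")]))  # True

Because every rule is invertible, the search never has to undo a rule choice, which is what makes the task mechanical; a Hilbert-style axiomatization offers no comparable discipline.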
I consider Martin-Lof's constructive type theory to belong to proof
theory. So one aim of proof theory is the same as that of type-
theoretical proof editors: formal checking of correctness, in
practical terms, program verification.
2. How well has it met these aims?
Proof theory, largely speaking, has failed in achieving its aims.
Gentzen's work was perfect in pure logic, the predicate calculus.
Intuitionistic natural deduction and classical sequent calculus are
by now mastered, but the extension of this mastery beyond pure logic
has been modest. There is some development of proof theory of
arithmetic but on the whole, proof theory is not a useful tool for
the practising mathematician.
Proof theory would need a second Gentzen who comes with some new
ideas about the structure of proofs in arithmetic, then says:"If you
want to prove this and this kind of theorem (say Goldbach), then do
this and this kind of thing." And people do it and find the proofs!
It seems paradoxical to have ridiculously simple statements in
arithmetic with no structured idea on how to prove them.
3. What developments or specific achievements stand out?
The more recent success story, for me, begins when Martin-Lof extended
the Curry-Howard isomorphism from implication to full intuitionistic
natural deduction. (Witness Howard himself, see the remark added in -79
to his -69 paper.) "The first really new idea since Frege invented the
topic," as someone said.
The sequent calculi that do not have any structural rules, the
G3-calculi, are a gem of logic I would compare to creations such as
the quantum-mechanical formalism. From Gentzen to Ketonen (who did
the classical propositional part) to Kleene, Dragalin and Troelstra,
each added some pieces until the building was finished. The last ten
years have brought some applications of this powerful tool of
structural proof analysis and more is in the coming. A pity one
does not even learn of the existence of these "contraction-free"
calculi from the recent Handbook of Proof Theory.
4. What, now are the main open problems?
To repair the situation signalled in 2, by extending the achievements
in the structural proof analysis of logic to mathematics.
The most important specific open problem of proof theory is to find
a normalizing system of rules of natural deduction for the full
language of classical predicate calculus, or to show that no such
system of rules is possible. At present, we do not have the rules
of classical natural deduction, just a fragment without disjunction
and existence. Natural deduction for full classical propositional
logic, with normalizing derivations throughout, exists but its idea,
a rule of excluded middle as a generalization of indirect proof,
does not extend to the quantifiers.
If the problem about classical natural deduction is solved, a
comprehensive answer can be given to: What is the computational
content of a classical first-order proof?
5. Are there any neglected directions in proof theory?
Proof theorists, having failed in analysing proofs in mathematics,
went on to apply their skills (somewhat opportunistically in my mind)
in logical systems different from the two canonical ones,
intuitionistic and classical. So I am saying that there are *overly*
cultivated areas when other things could have been done, though I also
find that this has led to systems with dozens of rules and with less
understanding of what it all means. In these studies, a very formalistic
stand has been taken on, say, Gentzen's structural rules. People treat
them as if they were something in themselves, then try to find formal
semantical interpretations to their systems. So I find there is neglect
in method. Gentzen never meant weakening, contraction, exchange to have
a "meaning" as such, he even called them later "structural modifications"
in contrast to logical rules and said they are needed only because of some
special features of his sequent calculus.
6. In what ways can or should proof theory relate to other parts of
logic/foundations/mathematics/computer science/linguistics/etc?
Logic & foundations: Proof theory is what logic and foundations are
for me, by and large. (Maybe I could add some algebraic semantics
but that is more mathematics than logic.)
Mathematics: Proof theory should win its place as a tool for
the mathematician.
Computer science: Formalized proofs=verified programs in computer science.
Linguistics: Aarne Ranta's book Type-Theoretical Grammar (Oxford 1994)
is the foremost application to linguistics.
7. Where do you think the subject is heading?
Where do you think it should be heading?
Structural proof theory is making steady though not expansive progress,
with results accessible to anyone who wants to learn them. Ordinal
proof theory seems a bit like set theory, a never-ending search and study
of ever stronger things.
I find that proof theory should remain non-esoteric, which has
not been the case in so many parts of logic. The basic building blocks
should be simple. The difficulties should come from the enormity of the
combinatorial possibilities of putting them together. One has a great feeling
of real discovery when the maze of all possible proof trees suddenly
starts obeying your command. First you try and try, but there is always
some little gap that destroys your theorem. Then you locate the essential
property and induction on the complexity of derivations goes through
without too many problems. *Afterwards* others will tell you: that was
easy (and you try to be patient about them).
8. Do you think more students/researchers should be involved in
the subject? If so, how can they be encouraged to do so?
Logic is dominated by other things than proof theory, a look at the
logic journals shows this. I predict a change, through the relevance
of computational proof theory for computer science. There are also
institutional changes: Mathematics departments are becoming smaller
through pressure created on universities by the needs of computer
science. Such changes will contribute to the change of emphasis in
logic, too, and maybe there will be a place for more researchers.
As to engaging students, every student of mathematics should in my
opinion have some idea of the general principles of mathematical
proof. Now they know perhaps some truth tables and that mathematical
truths are empty "tautologies", fashionable slogans from the 1920's. If
instead they had heard that the truth of a theorem is established by proof
and that knowing a proof is the possession of a mathematical construction,
some such students would by natural inclination be drawn into proof
Proof theory will be rewarding for the new things that can be done,
where many other parts of logic are becoming intractable monsters.
It will be rewarding for its applications and for the job a student
can get in computer science related industry. The latter has certainly
happened with my (few) students.
9. Do we need more/different kinds of general expository texts?
What about specialized texts?
Having just written one text of the former kind, "Structural Proof
Theory", co-authored by Sara Negri and to be published towards the end of
2000 by Cambridge, the answer is a definite "Yes"! The book has an
interactive sequent calculus theorem prover implemented in Haskell and
downloadable from the web.
As to specialized texts, an exposition of Martin-Lof's type theory
covering all things would be first on my list. An accessible
presentation of the proof theory of arithmetic, but not suffocated
by an abundance of "subsystems," just one or two systems, unbounded, is
on my list of books I would like to see appear.
10. Overall, how would you rate the state of health of our subject?
What is its promise for the future?
Health is measured by estimating the average ratio of:
degree of novelty divided by degree of complexity. The higher this
measure is in new work in a field, the better. More seriously, as
long as a direction of research yields new simple things it is alive.
The contrary starts happening when new things become complicated or
the simple things repeat what was known already.
Structural proof theory is in a healthy state. Congresses in recent
years (the LC nm's) have been encouraging for me.
For the future, I foresee a time when proof theory is not one of
the four canonical parts (or five with the addition of "philosophical
logic") of logic conferences, but one out of two.
Development and Applications of Artificial Neural Network for Prediction of Ultimate Bearing Capacity of Soil and Compressive Strength of Concrete
Seyed Hakim, Seyed Jamalaldin (2006) Development and Applications of Artificial Neural Network for Prediction of Ultimate Bearing Capacity of Soil and Compressive Strength of Concrete. Masters
thesis, Universiti Putra Malaysia.
Artificial Neural Networks (ANNs) have recently been widely used to model some of the human activities in many areas of science and engineering. One of the distinct characteristics of the ANNs is its
ability to learn from experience and examples and then to adapt with changing situations. ANNs does not need a specific equation form that differs from traditional prediction models. Instead of that,
it needs enough input-output data. Also, it can continuously re-train the new data, so that it can conveniently adapt to new data. This research work focuses on development and application of
artificial neural networks in some specific civil engineering problems such as prediction of ultimate bearing capacity of soil and compressive strength of concrete after 28 days. One of the main
objectives of this study was the development and application of an ANN for predicting of the ultimate bearing capacity of soil. Hence, a large training set of actual ultimate bearing capacity of soil
cases was used to train the network. A neural network model was developed using 1660 data set of nine inputs including the width of foundation, friction angle in three layer, cohesion of three layers
and depth of first and second layer are selected as inputs for predicting of ultimate bearing capacity in soil. The model contained a training data set of 1180 cases, a verification data set of 240
cases and a testing data set of 240 cases. The training was terminated when the average training error reached 0.002. Many combinations of layers, number of neurons, activation functions, different
values for learning rate and momentum were considered and the results were validated using an independent validation data set. Finally 9-15-1 is chosen as the architecture of neural network in this
study. That means 9 inputs with a set of 15 neurons in hidden layer has the most reasonable agreement architecture. This architecture gave high accuracy and reasonable Mean Square Error (MSE). The
network computes the mean squared error between the actual and predicted output values over all patterns. Calculation of the mean percentage relative error on the training set shows that the artificial neural network predicted the ultimate bearing capacity with an error of 14.83%. The results show that the artificial neural network can serve adequately as an expert system for predicting ultimate bearing capacity. It was observed that all of the input parameters played a role in affecting the ultimate bearing capacity, but the friction angle played the most important role; notably, the influence of the cohesion was much smaller than that of the other parameters. This thesis also aims to demonstrate the possibility of applying artificial neural networks (ANNs) to predict the compressive strength of concrete. For this purpose, six input parameters were identified: cement, water, silica fume, superplasticizer, fine aggregate and coarse aggregate. A total of 639 different concrete data sets were collected from the technical literature. The training set comprises 400 data entries, and the remaining data entries (239) are divided
between the validation and testing sets. The training was stopped when the average training error reached 0.007. A detailed study was carried out, considering two hidden layers for the architecture
of the neural network. The 6-12-6-1 architecture gave the best performance. The MSE was 5.33% for the 400 training data points, 6.13% for the 100 verification data points and 6.02% for the 139 testing data points. The model can recognize concrete strength with a confidence level of about 95%, which is considered satisfactory from an engineering point of view. Sensitivity analyses performed on the neural network model showed that cement has the greatest impact on the compressive strength of concrete. Finally, the
results of the present investigation were very encouraging and indicate that ANNs have strong potential as a feasible tool for predicting the ultimate bearing capacity of soil and compressive
strength of concrete.
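The abstract describes the 9-15-1 network only at a high level. Purely as an illustrative sketch (not code from the thesis), a network of that shape could be set up in Python with Keras as follows; the tanh activation, the SGD learning rate and momentum values, and the placeholder data arrays are assumptions, not details taken from the work.

```python
# Hypothetical 9-15-1 feedforward network for bearing-capacity prediction.
# All hyperparameter values below are illustrative assumptions.
import numpy as np
from tensorflow import keras

# Placeholder data: 1660 cases, 9 inputs (foundation width, friction angles,
# cohesions, layer depths), 1 output (ultimate bearing capacity).
X = np.random.rand(1660, 9)
y = np.random.rand(1660, 1)

model = keras.Sequential([
    keras.layers.Dense(15, activation="tanh", input_shape=(9,)),  # hidden layer
    keras.layers.Dense(1),                                        # single output
])
model.compile(optimizer=keras.optimizers.SGD(learning_rate=0.01, momentum=0.9),
              loss="mse")

# 1180 training cases, 240 verification cases, 240 held out for testing.
model.fit(X[:1180], y[:1180], epochs=200,
          validation_data=(X[1180:1420], y[1180:1420]), verbose=0)
test_mse = model.evaluate(X[1420:], y[1420:], verbose=0)
print("test MSE:", test_mse)
```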
Item Type: Thesis (Masters)
Subject: Soil morphology
Subject: Concrete
Chairman Supervisor: Associate Professor Jamaloddin Noorzaei, PhD
Call Number: FK 2006 100
Faculty or Institute: Faculty of Engineering
ID Code: 631
Deposited By: Yusfauhannum Mohd Yunus
Deposited On: 16 Oct 2008 15:59
Last Modified: 27 May 2013 06:49
Repository Staff Only: item control page
|
{"url":"http://psasir.upm.edu.my/631/","timestamp":"2014-04-19T14:34:52Z","content_type":null,"content_length":"41024","record_id":"<urn:uuid:f18b715f-7f6f-4e23-9a1c-8ec1840b8332>","cc-path":"CC-MAIN-2014-15/segments/1397609537271.8/warc/CC-MAIN-20140416005217-00235-ip-10-147-4-33.ec2.internal.warc.gz"}
|
South Pasadena Algebra 2 Tutor
Find a South Pasadena Algebra 2 Tutor
...I'm comfortable tutoring both programs. I've been programming in C/C++, Python and Ruby for 3-4 years for academic and commercial applications. Previously I've developed software to test John
Deere's GPS technology using python and have developed web applications for the Jet Propulsion Laboratories electronic Flight Ground Dictionary Management System using Ruby / Ruby on Rails.
14 Subjects: including algebra 2, calculus, physics, geometry
...From my experience, I have found many creative ways of explaining common problems. I love getting to the point when the student finally understands the concept and tells me that they want to
finish the problem on their own. I look forward to helping you with your academic needs.
14 Subjects: including algebra 2, physics, calculus, geometry
...Let's do it. Physics has always come naturally to me and I have taken several years of physics in high school and college. For physics I have tutored students from 9th-12th grade, primarily in
mechanics but also in electricity and magnetism, across all levels (intro, college prep, honors, ap), and for classes with or without calculus.
18 Subjects: including algebra 2, chemistry, physics, calculus
Hi folks! My name is Ryan. I'm a physicist/astronomer/educator with an excellent background in math, science, economics, business, and history.
52 Subjects: including algebra 2, English, chemistry, finance
...My areas of expertise include Arithmetic, Pre-Algebra, Algebra 1, and Algebra 2. As a tutor, I constantly focus on improving my instructional methods. I modify instructional methods to fit
individual student using various effective teaching and learning theories and strategies, I will meet you at your level of understanding.
5 Subjects: including algebra 2, algebra 1, ACT Math, prealgebra
|
{"url":"http://www.purplemath.com/South_Pasadena_Algebra_2_tutors.php","timestamp":"2014-04-19T17:39:07Z","content_type":null,"content_length":"24291","record_id":"<urn:uuid:e9f3803b-f4ea-4c71-8680-2cc5c6f74fd9>","cc-path":"CC-MAIN-2014-15/segments/1397609537308.32/warc/CC-MAIN-20140416005217-00229-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Figure 1: Chess openings can be described as decision trees, showing each move and associated branching ratios. This diagram shows the three most popular first (d=1) half-moves in the
1.5-million-game ScidBase chess database [12] and their branching ratios. For example, in 45% of the games, white starts with e4 (King’s pawn to fourth row, in algebraic chess notation), 35% start
with d4 (Queen’s pawn to fourth row), etc. Each of these moves then have branching ratios to the second half-move by black (d=2). Blasius and Tönjes find that for all games up to d=40, the opening
sequence popularity follows a Zipf power law with universal exponent nearly equal to -2, but for small values of d, the exponent is nonuniversal and depends linearly on d. (Adapted from Ref. [1].)
|
{"url":"http://physics.aps.org/articles/large_image/f1/10.1103/Physics.2.97","timestamp":"2014-04-16T22:46:14Z","content_type":null,"content_length":"1705","record_id":"<urn:uuid:191b0a87-9112-4e39-a669-6b7ff516398b>","cc-path":"CC-MAIN-2014-15/segments/1397609525991.2/warc/CC-MAIN-20140416005205-00341-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Electron. J. Diff. Eqns., Vol. 1999(1999), No. 37, pp. 1-20.
Dini-Campanato spaces and applications to nonlinear elliptic equations
Jay Kovats Abstract:
We generalize a result due to Campanato [C] and use this to obtain regularity results for classical solutions of fully nonlinear elliptic equations. We demonstrate this technique in two settings.
First, in the simplest setting of Poisson's equation Δu = f in B, where f is Dini continuous in B, we obtain known estimates on the modulus of continuity of the second derivatives D^2u in a way that does not
depend on either differentiating the equation or appealing to integral representations of solutions. Second, we use this result in the concave, fully nonlinear setting F(D^2u,x)=f(x) to obtain
estimates on the modulus of continuity of D^2u when the L^n averages of f satisfy the Dini condition.
Submitted January 6, 1999. Revised July 19, 1999. Published September 25, 1999.
Math Subject Classifications: 35B65, 41A10.
Key Words: Fully nonlinear elliptic equations, polynomial approximation, Dini condition.
Show me the PDF file (226K), TEX file, and other files for this article.
Jay Kovats
Department of Mathematical Sciences
Florida Institute of Technology
Melbourne, FL 32901, USA
e-mail address: jkovats@zach.fit.edu Return to the EJDE web page
|
{"url":"http://ejde.math.txstate.edu/Volumes/1999/37/abstr.html","timestamp":"2014-04-17T21:38:47Z","content_type":null,"content_length":"2084","record_id":"<urn:uuid:de7b3cda-4ba5-46ea-8557-5fa00ae5b527>","cc-path":"CC-MAIN-2014-15/segments/1397609532128.44/warc/CC-MAIN-20140416005212-00412-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Thermal Mass for Heat Storage
From ChemPRIME
back to Heat Capacities
Trombe Walls and Thermal Mass
Many very energy-efficient or "passive houses" use "passive solar" energy storage of various kinds. The simplest is probably the "Trombe Wall". The Trombe wall absorbs and releases large amounts of
heat without changing temperature very much, so it must have a high thermal mass or heat capacity.
One Wikipedia article states that if a tank of water were used for the Trombe wall instead of concrete, it could store five times as much heat. Given that rock would be so much heavier, is that
possible? Like every solar house designer, we can answer that question with some simple calculations.
Heat Capacities
When the sun supplies heat energy to the Trombe wall, a rise in temperature occurs. In this case, no chemical changes or phase changes take place, so the rise in temperature is proportional to the
quantity of heat energy supplied. If q is the quantity of heat supplied and the temperature rises from T[1] to T[2] then
q = C × (T[2] – T[1]) (1)
q = C × (ΔT) (1b)
where the constant of proportionality C is called the heat capacity of the wall. The sign of q in this case is + because the sample has absorbed heat (the change was endothermic), and (ΔT) is defined
in the conventional way.
If we are interested in comparing Trombe walls of variable mass, the quantity of heat needed to raise the temperature is proportional to the mass as well as to the rise in temperature. That is,
q = C × m × (T[2] – T[1]) (2)
q = C × m × (Δ T) (2b)
The new proportionality constant C is the heat capacity per unit mass. It is called the specific heat capacity (or sometimes the specific heat), where the word specific means “per unit mass.”
Specific heat capacities provide a convenient way of determining the heat added to, or removed from, a material by measuring its mass and temperature change. As mentioned previously, James Joule
established the connection between heat energy and the intensive property temperature, by measuring the temperature change in water caused by the energy released by a falling mass. In an ideal
experiment, a 1.00 kg mass falling 10.0 m would release 98.0 J of energy. If the mass drove a propeller immersed in 0.100 liter (100 g) of water in an insulated container, its temperature would rise
by 0.234^oC. This allows us to calculate the specific heat capacity of water:
98 J = C × 100 g × 0.234 ^oC
C = 4.184 J/g^oC
At 15°C, the precise value for the specific heat of water is 4.184 J K^–1 g^–1, and at other temperatures it varies from 4.178 to 4.218 J K^–1 g^–1. Note that the specific heat is expressed per gram (not per the base unit, the kilogram), and that since the Celsius and kelvin scales have identical graduations, either ^oC or K may be used.
Example 1: If the sun raises the temperature of a 3 m x 6 m x 0.5 m Trombe wall filled with water (D = 1.0) from 25.0 ^oC to 35.0 ^oC, how much heat energy is stored, given that the specific heat
capacity of water is 4.184 J K^–1 g^–1?
Solution: V = 3 m × 6 m × 0.5 m = 9 m^3; 9 m^3 × (100 cm/m)^3 = 9 × 10^6 cm^3, so m = 1.0 g/cm^3 × 9 × 10^6 cm^3 = 9 × 10^6 g
q = 4.18 J/g^oC × 9 × 10^6 g × (35.0 − 25.0) ^oC
q = 3.76 x 10^8 J or 3.76 x 10^5 kJ.
Example 2: If the sun raises the temperature of a 3 m x 6 m x 0.5 m Trombe wall made of concrete (typical D = 2.3 g/cm^3) from 25.0 ^oC to 35.0 ^oC, how much heat energy is stored, given that the
specific heat capacity of concrete (see below) is 0.880 J K^–1 g^–1?
Solution: V = 3 m × 6 m × 0.5 m = 9 m^3; 9 m^3 × (100 cm/m)^3 = 9 × 10^6 cm^3; m = D × V = 2.3 g/cm^3 × 9 × 10^6 cm^3 = 2.07 × 10^7 g
q = 0.880 J/g^oC × 2.07 × 10^7 g × (35.0 − 25.0) ^oC
q = 1.82 x 10^8 J or 1.8 x 10^5 kJ.
Note that the water can absorb about 2 times as much heat for the same volume and same temperature change. For the same mass, however, water can absorb 4.18/0.880 = 4.75 times as much heat. The
mass-based calculation must be the basis for the Wikipedia claim.
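A short sketch of the same comparison in Python (illustrative only; the wall dimensions, densities and specific heats are those used in Examples 1 and 2, and the function itself is not part of the original article):

```python
# Heat stored in a wall: q = c * m * dT, with m = density * volume.
def stored_heat_joules(c_J_per_gK, density_g_per_cm3, volume_m3, dT_K):
    volume_cm3 = volume_m3 * 100**3            # 1 m^3 = 1e6 cm^3
    mass_g = density_g_per_cm3 * volume_cm3
    return c_J_per_gK * mass_g * dT_K

V = 3 * 6 * 0.5    # m^3, wall from Examples 1 and 2
dT = 10.0          # K (25 degC -> 35 degC)

q_water = stored_heat_joules(4.184, 1.0, V, dT)     # about 3.8e8 J
q_concrete = stored_heat_joules(0.880, 2.3, V, dT)  # about 1.8e8 J

print(f"same volume, water/concrete: {q_water / q_concrete:.2f}")  # about 2
print(f"same mass,   water/concrete: {4.184 / 0.880:.2f}")         # about 4.75
```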
Specific heat capacity of building materials
(Usually of interest to builders and solar designers)
Specific heat capacity of building materials ^[4]
Substance        Phase    c[p] (J g^–1 K^–1)
Asphalt          solid    0.920
Brick            solid    0.840
Concrete         solid    0.880
Glass, silica    solid    0.840
Glass, crown     solid    0.670
Glass, flint     solid    0.503
Glass, pyrex     solid    0.753
Granite          solid    0.790
Gypsum           solid    1.090
Marble, mica     solid    0.880
Sand             solid    0.835
Soil             solid    0.800
Wood             solid    0.420
Specific heat capacities (25 °C unless otherwise noted)
│ Substance │phase │C[p] │
│ │ │ J/(g·K) │
│air, (Sea level, dry, 0 °C) │gas │1.0035 │
│argon │gas │0.5203 │
│carbon dioxide │gas │0.839 │
│helium │gas │5.19 │
│hydrogen │gas │14.30 │
│methane │gas │2.191 │
│neon │gas │1.0301 │
│oxygen │gas │0.918 │
│water at 100 °C (steam) │gas │2.080 │
│water at 100 °C │liquid│4.184 │
│ethanol │liquid│2.44 │
│water at -10 °C (ice) │solid │2.05 │
│copper │solid │0.385 │
│gold │solid │0.129 │
│iron │solid │0.450 │
│lead │solid │0.127 │
Other Heat Storage Strategies
Molten salt can be used to store energy at a higher temperature, so that stored solar energy can be used to boil water to run steam turbines. The sodium nitrate/potassium nitrate salt mixture melts
at 221 °C (430 °F). It is kept liquid at 288 °C (550 °F) in an insulated "cold" storage tank. The liquid salt is pumped through panels in a solar collector where the focused sun heats it to 566 °C
(1,051 °F). It is then sent to a hot storage tank. This is so well insulated that the thermal energy can be usefully stored for up to a week.
When electricity is needed, the hot salt is pumped to a conventional steam-generator to produce superheated steam for a turbine/generator as used in any conventional coal, oil or nuclear power plant.
A 100-megawatt turbine would need tanks of about 30 feet (9.1 m) tall and 80 feet (24 m) in diameter to drive it for four hours by this design.
To understand energy conversion from thermal to electrical, we need to know something about electrical units.
Electrical Energy Conversion
The most convenient way to supply a known quantity of heat energy to a sample is to use an electrical coil. The heat supplied is the product of the applied potential V, the current I flowing through
the coil, and the time t during which the current flows:
q = V × I × t (3)
If the SI units volt for applied potential, ampere for current, and second time are used, the energy is obtained in joules. This is because the volt is defined as one joule per ampere per second:
1 volt × 1 ampere × 1 second = 1 $\frac{\text{J}}{\text{A s}}$ × 1 A × 1 s = 1 J
EXAMPLE 3: An electrical heating coil, 230 cm^3 of water, and a thermometer are all placed in a polystyrene coffee cup. A potential difference of 6.23 V is applied to the coil, producing a current of
0.482 A which is allowed to pass for 483 s. If the temperature rises by 1.53 K, find the heat capacity of the contents of the coffee cup. Assume that the polystyrene cup is such a good insulator that
no heat energy is lost from it.
Solution The heat energy supplied by the heating coil is given by
q = V × I × t = 6.23 V × 0.482 A × 483 s = 1450 V A s = 1450 J
q = C × (T[2] – T[1])
Since the temperature rises, T[2] > T[1] and the temperature change ΔT is positive:
1450 J = C × 1.53 K
so that
$C=\frac{1450\text{ J}}{1.53\text{ K}}=948\text{ J K}^{-1}$
Note: The heat capacity found applies to the complete contents of the cup (water, coil, and thermometer taken together), not just the water.
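As an illustrative sketch only (not part of the original article), the calculation in Example 3 can be scripted directly from q = V × I × t and C = q/ΔT:

```python
# Heat capacity from electrical heating data: q = V * I * t, then C = q / dT.
def heat_capacity_from_heating(volts, amps, seconds, dT_K):
    q_joules = volts * amps * seconds      # energy dissipated by the coil
    return q_joules, q_joules / dT_K

q, C = heat_capacity_from_heating(6.23, 0.482, 483, 1.53)
print(f"q = {q:.0f} J, C = {C:.0f} J/K")   # about 1450 J and 948 J/K
```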
As discussed in other sections, an older, non-SI energy unit, the calorie, was defined as the heat energy required to raise the temperature of 1 g H[2]O from 14.5 to 15.5°C. Thus at 15°C the specific
heat capacity of water is 1.00 cal K^–1 g^–1. This value is accurate to three significant figures between about 4 and 90°C.
If the sample of matter we are heating is a pure substance, then the quantity of heat needed to raise its temperature is proportional to the amount of substance. The heat capacity per unit amount of
substance is called the molar heat capacity, symbol C[m]. Thus the quantity of heat needed to raise the temperature of an amount of substance n from T[1] to T[2] is given by
q = C[m] × n × (T[2] – T[1]) (4)
The molar heat capacity is usually given a subscript to indicate whether the substance has been heated at constant pressure (C[p])or in a closed container at constant volume (C[V]).
EXAMPLE 4: A sample of neon gas (0.854 mol) is heated in a closed container by means of an electrical heating coil. A potential of 5.26 V was applied to the coil causing a current of 0.336 A to pass
for 30.0 s. The temperature of the gas was found to rise by 4.98 K. Find the molar heat capacity of the neon gas, assuming no heat losses.
Solution The heat supplied by the heating coil is given by
q = V × I × t
= 5.26 V × 0.336 A × 30.0 s
= 53.0 V A s
= 53.0 J
Rearranging Eq. (4), we then have
$C_{m}=\frac{q}{n(T_{2}-T_{1})}=\frac{53.0\text{ J}}{0.854\text{ mol}\times 4.98\text{ K}}=12.47\text{ J K}^{-1}\text{ mol}^{-1}$
However, since the process occurs at constant volume, we should write
C[V] = 12.47 J K^–1 mol^–1
|
{"url":"http://wiki.chemprime.chemeddl.org/index.php/Thermal_Mass_for_Heat_Storage","timestamp":"2014-04-16T19:11:39Z","content_type":null,"content_length":"90387","record_id":"<urn:uuid:b1491814-d821-47a1-9ab7-afd0dccb87b8>","cc-path":"CC-MAIN-2014-15/segments/1397609524644.38/warc/CC-MAIN-20140416005204-00198-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Simulation of Forces : Solution of the Inverse Dynamics in Classical Mechanics
Robert Kragler
FH Ravensburg-Weingarten /University of Applied Sciences
A.N. Prokopenya and N.I. Chopchits
Brest Polytechnic Institute, Belarus (box@aprokop.belpak.brest.by)
1999 Mathematica Developer Conference
Applications of Mathematica
October 23, 1999
In general, when studying the equations of motion of any physical system it is necessary to solve differential equations. However, only rarely can the corresponding solutions be found in analytical form. Thus, usually only systems which are simple enough are analysed in a university physics course. The situation changed considerably, however, with the availability of computer algebra systems. Using Mathematica, in many cases we can easily find either analytical or numerical solutions of the equations of motion and visualize them even in the case of an intricate physical system. The analysis of such systems therefore promotes a better understanding of the underlying physical principles and may help to develop the students' physical intuition.
Generation of Figures
The Problem of Simulation of Forces
In classical mechanics the motion of a particle is determined by Newton's second law,
m d^2r/dt^2 = F,
where the position r of the particle is given in Cartesian coordinates.
Here it is not assumed that this expression is a definition of the force defining the momentum of the particle, because forces can be determined independently of the left-hand side of Newton's second law. Traditionally, all forces considered in classical mechanics depend on the coordinates and velocity of the particle; if the particle interacts with another body whose law of motion is given, then the force may even depend explicitly on time. Thus, in the general case the force is a function of three variables: F = F(r, dr/dt, t).
On the other hand, if the law of motion of the particle r = r(t) is given, then the resulting force can be found as F = m d^2r/dt^2 and may be presented as a function of time, or as a function of velocity, of coordinates, or of some combination of these variables.
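The talk develops these examples in Mathematica notebooks that are not reproduced on this page. Purely as an illustrative sketch of the inverse-dynamics idea (recovering F from a prescribed law of motion), the same computation can be done numerically; the trajectory x(t) = A sin(ωt) and all parameter values below are invented for the example.

```python
# Inverse dynamics sketch: given x(t), recover F = m * x''(t) numerically.
import numpy as np

m, A, omega = 2.0, 0.5, 3.0          # invented parameters
t = np.linspace(0.0, 10.0, 2001)
x = A * np.sin(omega * t)            # prescribed law of motion

a = np.gradient(np.gradient(x, t), t)   # numerical second derivative
F = m * a

# For this trajectory the exact force is a linear restoring force, F = -m*omega**2*x,
# so the recovered force can be checked against it away from the endpoints.
F_exact = -m * omega**2 * x
print("max interior error:", np.max(np.abs(F - F_exact)[10:-10]))
```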
The very first example will demonstrate that simulation of forces which occur in the equations of motion is not a trivial problem.
Example 1 : Constant Force for Particle Moving along a Straight Line
Example 2 : Particle on a Track
Example 3 : Particle on a Rotating Contour
|
{"url":"http://library.wolfram.com/conferences/devconf99/kragler/","timestamp":"2014-04-20T18:42:57Z","content_type":null,"content_length":"26332","record_id":"<urn:uuid:03fc8cff-6c0e-409d-8136-895b862ed626>","cc-path":"CC-MAIN-2014-15/segments/1397609539066.13/warc/CC-MAIN-20140416005219-00013-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Journal Papers (Published, To Appear)
1. Perfect Output Feedback in the Two-User Decentralized Interference Channel
S Perlaza, R. Tandon, H. V. Poor and Z. Han
IEEE Transactions on Information Theory, submitted, June 2013.
2. Capacity Region of the Symmetric Linear Deterministic Interference Channel with Partial Feedback
Le-Sy Quoc, R. Tandon, M. Motani and H. V. Poor
IEEE Transactions on Information Theory, submitted, December 2012.
Conference Papers
|
{"url":"http://www.ece.vt.edu/tandonr/publications.html","timestamp":"2014-04-17T09:34:26Z","content_type":null,"content_length":"12231","record_id":"<urn:uuid:3c2ba7e4-4d67-4f41-a904-d62588b25a0a>","cc-path":"CC-MAIN-2014-15/segments/1398223201753.19/warc/CC-MAIN-20140423032001-00633-ip-10-147-4-33.ec2.internal.warc.gz"}
|
fashionablemathematician - mathematics
Hello everyone,
After doing a lot of thinking and reading over vacation, I've decided to shut this blog down. By no means am I done with blogging, but this site really was my first attempt at the beast, and I need
to make some fundamental changes. Thank you all for reading (and occasionally commenting). You can follow me to:
Hey all,
I'm going on vacation in Ocean City, MD for the next 9 days. No computer means no posting. I'll be back on August 4th!
I continued working through the exercises in Chapter 1 of Donald Saari's Basic Geometry of Voting, and found another interesting problem in Exercise 1.1.3.3 (just one exercise after my last post's
excursion, considering when a plurality winner can be a Condorcet loser).
This time, we're given some information about voters' preferences in an election. All we're told is how many voters ranked each alternative first. The example in the text has four alternatives; A, B,
C, and D with 9, 8, 7, and 6 votes, respectively. The election uses the plurality method, so that the social preference is A beats B beats C beats D.
We're then told that if a candidate drops out of the election, all of those people who voted for them instead vote for their second choice preference. It turns out that such an occurrence can
actually reverse the resultant social preference. For example, removing candidate D can lead to the plurality result:
C beats B beats A!
I found this quite strange, and decided to dig deeper. I determine an algorithm which "optimizes" this strange result, and find conditions to see what elections permit such a result. You can read
about it all right here:
Reversal of Social Preference in Plurality Voting
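That write-up isn't reproduced here, but a quick illustrative check of the effect is easy to script. In the sketch below (my own construction, not necessarily the split used in the post), two of D's six voters are assumed to rank B second and the other four to rank C second:

```python
# First-place totals A:9, B:8, C:7, D:6; assumed second choices for D's voters: 2 -> B, 4 -> C.
from collections import Counter

first_choices = ["A"] * 9 + ["B"] * 8 + ["C"] * 7 + ["D"] * 6
d_second_choices = ["B"] * 2 + ["C"] * 4      # where D's voters go if D drops out

with_D = Counter(first_choices)
without_D = Counter(v for v in first_choices if v != "D") + Counter(d_second_choices)

print("with D:   ", with_D.most_common())     # A beats B beats C beats D
print("without D:", without_D.most_common())  # C beats B beats A -- the order reverses
```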
Oh, and I managed to finish the rest of the exercises in the section without feeling the need to write another blog post. Good thing, otherwise I might never get past chapter 2. The introductory
material does, however, give one a lot of interesting things to think about (obviously)!
I recently received my copy of Donald Saari's Basic Geometry of Voting. I'm using the book both for research purposes and to prepare to TA a class on the mathematics involved in Democracy, voting
systems, and the like. I'll be posting some extensions of what comes out of the book and my own research here in the future.
Today's work comes courtesy of Exercise 1.1.3.2 in the book, which asks us to investigate when it is possible for a plurality winner to be a Condorcet loser. The exercise only looks at elections with
three alternatives, but I was able to generalize to an arbitrary number of voters and alternatives.
For those unfamiliar, here are some preliminaries necessary to understand the work:
Each voter ranks the alternatives from first choice to last. A strict plurality winner is an alternative which receives more first-place votes than any other alternative. (Non-strict definitions typically allow for ties.)
A Condorcet winner is one which is ranked higher than every other alternative in a majority of decisions. This is often confusing, as the same majority is not required for each pair of alternatives.
For example, if
4 people prefer A to B to C
3 people prefer B to C to A
2 people prefer C to B to A
Then A is the plurality winner, because it has the most first-place votes. B is the Condorcet winner, as 5 people prefer B to A (a majority), and 7 people prefer B to C (also a majority, though not
the same voters).
In the example above, A is what we call a Condorcet loser; it is ranked lower than every other alternative in a majority of decisions (5 to 4 in both cases).
This example is what we seek, when the plurality winner is simultaneously the Condorcet loser, an admittedly strange (though not necessarily uncommon) result. The complete analysis, in which we
explicitly work through the case of three alternatives, then generalize to an arbitrary number of alternatives is right here:
When Can a Plurality Winner Be A Condorcet Loser?
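Again purely as a sanity check (not taken from the linked analysis), a few lines of Python confirm that A is both the plurality winner and the Condorcet loser in the 4–3–2 profile above:

```python
# Profile from the example: 4 voters A>B>C, 3 voters B>C>A, 2 voters C>B>A.
profile = [(4, ["A", "B", "C"]), (3, ["B", "C", "A"]), (2, ["C", "B", "A"])]
alternatives = ["A", "B", "C"]
total = sum(n for n, _ in profile)

def prefers(ranking, x, y):
    return ranking.index(x) < ranking.index(y)

plurality = {a: sum(n for n, r in profile if r[0] == a) for a in alternatives}
print("first-place totals:", plurality)       # A wins the plurality count

for x in alternatives:
    pairwise_losses = sum(
        1 for y in alternatives if y != x
        and 2 * sum(n for n, r in profile if prefers(r, x, y)) < total
    )
    if pairwise_losses == len(alternatives) - 1:
        print(x, "is the Condorcet loser")     # prints A
```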
Here's my next step in analyzing the NCAA tournament with some mathematics. Today we look at the basics of multiround theory, setting up some computations and determining some relevant quantities.
The file can be found right here:
Abstract Bracketology 2
Enjoy, and let me know if you come up with any other things that should be studied here!
Sorry for the delay in posting readers! I've been doing some construction work, which has been taking up a great deal of my time. I'm back with a possibly poorly timed application of mathematics to
the world of sports.
Year after year my NCAA March Madness Bracket falls apart, and I've had enough. It's time to mathematize (this, rather than increasing my knowledge of college basketball, is of course the easiest way
to improve my results). I'll be continuing my work on this indefinitely, as I've already come up with a number of interesting questions to look at. We start small, looking at strategies for a single
game tournament.
Abstract Bracketology 1
If anyone has ideas of where to go with these, or questions to ask/answer, please comment!
...and GO DUKE!
Rounding out the subseries on fractions, decimals, and percentages, this document covers how to convert between the three forms and offers advice on how to approach problems with such conversions.
Exercises, solutions, and explanations at the end, as always. You can access the document below:
GRE Math Quick Review: Conversions
For the rest of the GRE Math Quick Review Series, access the portal page.
|
{"url":"http://fashionablemathematicianmath.blogspot.com/","timestamp":"2014-04-16T04:14:49Z","content_type":null,"content_length":"69171","record_id":"<urn:uuid:35af588d-28c5-43f1-bf61-259639c81e7d>","cc-path":"CC-MAIN-2014-15/segments/1398223205375.6/warc/CC-MAIN-20140423032005-00538-ip-10-147-4-33.ec2.internal.warc.gz"}
|