On 19 March 1791 the metre, the base unit of the new metric system, was theoretically defined as being equal to the ten-millionth part of one quarter of the terrestrial meridian. In practice, however, the length of the meridian still had to be measured.
In its "report on the choice of units of measure", the Academy of Sciences defined the various steps that this work would involve: the length of the meridian would be determined by triangulation, from an arc of nine and a half degrees between Dunkirk and Barcelona.
Triangulation, a method known since the early 17th century
As early as 1718, Jacques Cassini had used this method to measure the meridian between Dunkirk and Collioure. Triangulation consists of marking out a route with a network of highly visible landmarks (towers, peaks, church spires, etc.), these points forming the vertices of a series of connected triangles. The method relies on trigonometric calculations: knowing all the angles of two adjacent triangles and at least one side length in one of these triangles, we can determine the lengths of all sides in both triangles.
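To make the trigonometry concrete, here is a small illustrative calculation (the numbers are invented; they are not the historical survey data): given one measured baseline and the two angles observed at its ends, the law of sines yields the remaining sides, which then serve as baselines for the neighbouring triangles.

# Illustrative only: invented figures, not the 1792-1798 survey data.
import math

base_km = 10.0                                  # measured baseline between stations A and B
A, B = math.radians(62.0), math.radians(71.0)   # angles observed at stations A and B
C = math.pi - A - B                             # angle at the distant third station
# Law of sines: a/sin(A) = b/sin(B) = c/sin(C), where side c is the baseline.
a = base_km * math.sin(A) / math.sin(C)         # side opposite A
b = base_km * math.sin(B) / math.sin(C)         # side opposite B
print(round(a, 2), round(b, 2))                 # the two new sides, in km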
On 13 April 1791, the Academy appointed the members of the commissions who would perform the measurements
Juvisy Pyramid (picture)
The triangulation and determination of the latitudes were to be carried out by Cassini (the grandson of Jacques Cassini), Legendre and Méchain.
Monge and Meusnier were to measure the bases. In June 1791, Cassini merely visited, with Méchain, the old baseline running from Villejuif to Juvisy near Paris (the obelisk marking it is currently known as the Pyramide de Juvisy).
Although Cassini expected to be able to use this old base, which had already been used by his father in 1739, his grandfather in 1701, and Abbé Picard in 1670 when they carried out their triangulation calculations, he was unable to do so. Cassini then stayed in Paris to help Borda. Monge and Legendre in fact did very little. Meusnier left to join the Rhine army and was killed in 1793. Delambre, who had just joined the Academy of Sciences, was then appointed to replace them.
A precision instrument: The Borda circle
Measurement of the meridian arc involved the use of precision instruments and was partly justified by improvements to these instruments. Of much greater precision, these measurements would replace
the previous ones taken fifty years before.
To determine the angles, our two geodesists were going to use the new Borda repeating circle. Using this innovation, angles could be measured to the nearest second, whereas with the quadrants used so
far it had only been possible to obtain accuracy to the nearest 15 seconds. The ground measurements, in Toise du Pérou units, were to be made with copper-platinum bimetal rulers. Obviously, any other
unit would have been suitable, since once the length of the quarter of the meridian is determined, dividing it by 10 000 000 would give the length of a metre. In this case, the length of the first
metre was therefore expressed in Toise du Pérou units; in 1747, La Condamine had brought back this measurement unit from his expedition to the equator, but it only became a national standard on 16
May 1766 after a royal declaration.
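As a quick illustration of the arithmetic (the quadrant value below is the modern estimate, not a figure from this article): dividing the measured quarter meridian by 10 000 000 fixes the unit, and the small errors in the 18th-century survey are why the 1799 metre turned out about 0.2 mm shorter than one ten-millionth of the true quadrant.

# Illustration only; 10 001 966 m is the modern estimate of the quadrant,
# not a figure quoted in the article.
true_quadrant_m = 10_001_966
metre_from_quadrant = true_quadrant_m / 10_000_000
print(f"{metre_from_quadrant:.7f} m")   # ~1.0001966 m, i.e. the 1799 metre is ~0.2 mm short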
Cercle répétiteur
(St Mandé, IGN)
Two teams for measuring the meridian arc
Delambre's team included the Frenchmen Lalande and Bellet; Tranchot and Esteveny accompanied Méchain. The Academy of Science distributed the work involved in measurement of the meridian arc as
follows: the two upper thirds, from Dunkirk to Rodez, were assigned to Delambre; the last third, from Rodez to Barcelona, was assigned to Méchain. This difference could be explained by the fact that
Delambre's route would theoretically follow close to the points of the former triangulation, whereas Méchain would explore territory where no geodesic measurements had yet been taken.
In practice, the earlier triangulation landmarks turned out to be unusable: during the turmoil of the revolution, some spires had disappeared or were about to collapse. Peak after peak, Delambre
discovered that it was impossible to use Cassini's previous landmarks: the old spires had been rebuilt differently after being burnt down.
Marking out the meridian arc: a very difficult enterprise
More than one hundred triangles were required to mark out the meridian arc; our two geodesists were to experience numerous mishaps during their expedition: arrests, temporary revocations, damaged or
destroyed geodesic equipment. The marker signals they used for their observations aroused the distrust of the population; the material attached at the end of their signals was white, the colour of
royalty, and therefore a counter-revolutionary colour. In spite of their passes, passports and other authorisations, our two scientists were still not safe from arrest, since the authorities which
had issued these documents disappeared, making them outlaws. For instance, following the abolition of the Academies (in 1793), Delambre found that he had been excluded from the temporary commission
of weights and measures (in 1794) and therefore prohibited from continuing his work, until June 1795. Méchain also experienced numerous setbacks.
Landmarks to establish, mountains to cross, not forgetting the historical events:
War broke out on 7 March 1793 between France and Spain, where some of his measurements had to be taken. From 1793 to 1795 therefore, the Terror regime was to delay his triangulation calculations. At
the same time, the metre was temporarily fixed by the law of 1 August 1793 according to the results of measuring the French meridian, published by Lacaille in the 1758 Mémoires de l'Académie.
Moreover, the decimal subdivisions of the metre were to be the decimetre, the centimetre and the millimetre. This temporary standard metre did not correspond to the work carried out by Méchain and
Delambre, but to the results of Cassini's earlier triangulation.
1799, a new platinum metre standard
In 1795, with the improvement of the political situation, the triangulation work was able to resume. It continued for a further three years, before the length of the quarter of the meridian could be
accurately determined and a new platinum metre standard dedicated to "all times and all men" was deposited in 1799 in the archives of the Republic.
pictures extracted from the book "L'épopée du mètre" (published by the French Ministry in charge of Industry and Regional Planning)
|
{"url":"http://www.french-metrology.com/en/history/metre-adventure.asp","timestamp":"2014-04-21T04:33:01Z","content_type":null,"content_length":"21442","record_id":"<urn:uuid:0628f069-5309-404e-b136-c066c372a42c>","cc-path":"CC-MAIN-2014-15/segments/1398223203422.8/warc/CC-MAIN-20140423032003-00122-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Equation of a tangent to a curve - Mathematics
Equation of a tangent to a curve
You should already have a general understanding of graph gradients, tangents, normals and derivatives, so this is more of a reminder than a lesson. Finding the equation of a tangent to a curve involves finding the derivative (gradient function) of the function and then finding the gradient m of the tangent at the given point by substituting the x-coordinate of the point into the derivative. So the steps are:
• Find the derivative f ’(x).
• Find the gradient m of the tangent by substituting the x-coordinate of the point into the derivative.
• Then use the point-gradient formula y - y1 = m(x - x1) to get the equation of the tangent. You will need m, which is the gradient, x1, which is the given x-coordinate, and y1, the y-coordinate, which you find by substituting x1 into the original function.
You might also be asked to rearrange the equation of the tangent into the form y = mx + c.
And that’s how you find the equation of a tangent. It’s that easy…
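A short worked example may help; the sketch below uses SymPy (an assumed tool, since the post itself shows no code) to apply the three steps to the curve f(x) = x^2 at the point where x = 1.

# Tangent to f(x) = x**2 at x = 1, following the three steps above (SymPy assumed).
import sympy as sp

x = sp.symbols('x')
f = x**2                       # the curve
fprime = sp.diff(f, x)         # step 1: derivative f'(x) = 2*x
m = fprime.subs(x, 1)          # step 2: gradient at x = 1  -> 2
y1 = f.subs(x, 1)              # y-coordinate of the point  -> 1
tangent = m * (x - 1) + y1     # step 3: y - y1 = m(x - x1)
print(sp.expand(tangent))      # 2*x - 1, i.e. y = 2x - 1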
Read more about tangents here
|
{"url":"http://mathematicsi.com/equation-of-a-tangent-to-a-curve/","timestamp":"2014-04-18T20:42:48Z","content_type":null,"content_length":"49662","record_id":"<urn:uuid:f4eef207-f93e-4065-870b-7860e5b9d6be>","cc-path":"CC-MAIN-2014-15/segments/1398223204388.12/warc/CC-MAIN-20140423032004-00237-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Self Inductance
We do not necessarily need two circuits in order to have inductive effects. Consider a single conducting circuit around which a current $I$ flows. This current generates a magnetic flux $\Phi$ linking the circuit, which we expect to be directly proportional to the current, so we can write
$\Phi = L I$,
where the constant of proportionality $L$ is called the self inductance of the circuit. Like mutual inductance, the self inductance of a circuit is measured in units of henries, and is a purely geometric quantity, depending only on the shape of the circuit and number of turns in the circuit.
If the current flowing around the circuit changes by an amount $dI$ in a time interval $dt$, then the flux linking the circuit changes by $d\Phi = L \, dI$, and, according to Faraday's law, an emf
$\varepsilon = -d\Phi/dt$
is generated around the circuit. Since $d\Phi = L \, dI$, this emf can also be written
$\varepsilon = -L \, dI/dt$.
Thus, the emf generated around the circuit due to its own current is directly proportional to the rate at which the current changes. Lenz's law, and common sense, demand that if the current is increasing then the emf should always act to reduce the current, and vice versa. This is easily appreciated, since if the emf acted to increase the current when the current was increasing then we would clearly get an unphysical positive feedback effect in which the current continued to increase without limit. It follows, from Eq. (243), that the self inductance $L$ is a positive number. This is not the case for mutual inductances, which can be either positive or negative.
Consider a solenoid of length $l$ and cross-sectional area $A$, with $N$ turns carrying a current $I$. A uniform axial magnetic field of strength
$B = \mu_0 N I / l$
is generated in the core of the solenoid. The field-strength outside the core is negligible. The magnetic flux linking a single turn of the solenoid is $\Phi = B A$, so the flux linking all $N$ turns is $N B A = \mu_0 N^2 A I / l$.
According to Eq. (241), the self inductance of the solenoid is given by
$L = \mu_0 N^2 A / l$.
Note that $L$ is positive, and proportional to the square of the number of turns.
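As a rough numerical illustration of the solenoid formula (the values below are chosen arbitrarily and are not from the text):

# Self inductance of an air-cored solenoid, L = mu0 * N**2 * A / l
# (numbers chosen arbitrarily for illustration).
import math

mu0 = 4e-7 * math.pi            # permeability of free space, H/m
N, l, r = 500, 0.20, 0.01       # turns, length (m), core radius (m)
A = math.pi * r**2              # cross-sectional area, m^2
L = mu0 * N**2 * A / l          # self inductance, henries
print(f"L = {L * 1e3:.3f} mH")  # about 0.49 mH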
Engineers like to reduce all pieces of electrical apparatus, no matter how complicated, to an equivalent circuit consisting of a network of just four different types of component. These four basic
components are emfs, resistors, capacitors, and inductors. An inductor is simply a pure self inductance, and is usually represented as a little solenoid in circuit diagrams. In practice, inductors
generally consist of short air-cored solenoids wound from enameled copper wire.
Richard Fitzpatrick 2007-07-14
|
{"url":"http://farside.ph.utexas.edu/teaching/302l/lectures/node102.html","timestamp":"2014-04-19T11:57:25Z","content_type":null,"content_length":"11304","record_id":"<urn:uuid:d8af36d3-64c7-48d0-bc36-860072217c62>","cc-path":"CC-MAIN-2014-15/segments/1397609537186.46/warc/CC-MAIN-20140416005217-00206-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Number of possible outcomes.
October 11th 2010, 05:33 AM
Number of possible outcomes.
There are 8 different toppings that can be put on a pizza. How many ways can the pizza be made? One can have anywhere from no toppings to all 8 toppings.
Help is appreciated
October 11th 2010, 11:47 AM
one way is to draw Pascal's triangle.
couldn't draw the fig...will attach one
October 11th 2010, 01:14 PM
October 11th 2010, 01:50 PM
Here's the attachment..
I did the first five rows of the triangle..your job is to complete all 10 rows..
the elements of row 10 will add up to the number of possible outcomes.
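As a cross-check (script not from the thread): every topping is independently either on or off the pizza, so the total number of pizzas is 2^8 = 256, which is also the sum of the binomial coefficients C(8, k) appearing in the corresponding row of Pascal's triangle.

# Each of the 8 toppings is on or off -> 2**8 pizzas, equal to the sum of C(8, k).
from math import comb

total = sum(comb(8, k) for k in range(9))
print(total, 2 ** 8)   # 256 256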
October 11th 2010, 03:38 PM
|
{"url":"http://mathhelpforum.com/advanced-statistics/159157-number-possible-outcomes-print.html","timestamp":"2014-04-21T15:34:10Z","content_type":null,"content_length":"5786","record_id":"<urn:uuid:787387f1-f22b-40e4-8b5e-935c36ec64bd>","cc-path":"CC-MAIN-2014-15/segments/1397609540626.47/warc/CC-MAIN-20140416005220-00007-ip-10-147-4-33.ec2.internal.warc.gz"}
|
The standard form equation of a general quadratic (polynomial functions of degree 2) function is
f(x) = ax^2 + bx + c where a ≠ 0.
If b = 0, the quadratic function has the form f(x) = ax^2 + c.
Since f(-x) = a(-x)^2 + c = ax^2 + c = f(x), such quadratic functions are even functions, which means that the y-axis is a line of symmetry of the graph of f.
The graph of a quadratic function is a parabola, a line-symmetric curve whose shape is like the graph of y = x^2 shown in the figure. The point of intersection of the parabola and its line of symmetry is the vertex of the parabola and is the lowest or highest point of the graph. The graph of a parabola either opens upward like y = x^2 or opens downward like the graph of y = -x^2.
In the figure, the vertex of the graph of y=x^2 is (0,0) and the line of symmetry is x = 0.
Definition: Parabola
1. Algebraic: A parabola is the graph of a quadratic relation of either of the following forms, where a ≠ 0:
y = ax^2 + bx + c or x = ay^2 + by + c
2. Geometric:
A parabola is the set of all points in a plane that are equidistant from a given point and a given line.
From the geometric point of view, the given point is the focus of the parabola and the given line is its directrix. It can be shown that the line of symmetry of the parabola is the line
perpendicular to the directrix through the focus. The vertex of the parabola is the point of the parabola that is closest to both the focus and the directrix.
Connection between Algebra and Geometry of Parabola
Show that an equation for the parabola with focus (0, p) and directrix y = -p is y = (1/(4p)) x^2.
We must show that a point (x, y) that is equidistant from (0, p) and the line y = -p also satisfies the equation y = (1/(4p)) x^2.
Conversely, we must also show that a point satisfying the equation y = (1/(4p)) x^2 is equidistant from (0, p) and the line y = -p.
We assume that p>0. The argument is similar for p<0.
First, if (x, y) is equidistant from (0, p) and the line y = -p, then
• the distance from (x, y) to the line y = -p is |y + p|
• the distance from (x, y) to (0, p) is [x^2 + (y - p)^2]^(1/2)
Consequently, we can derive an equation for the parabola as follows:
|y + p| = [x^2 + (y - p)^2]^(1/2)
Squaring both sides,
(y + p)^2 = x^2 + (y - p)^2
y^2 + 2py + p^2 = x^2 + y^2 - 2py + p^2
4py = x^2
y = (1/(4p)) x^2
By reversing the above steps, we see that a solution (x, y) of y = (1/(4p)) x^2 is equidistant from (0, p) and the line y = -p.
This completes the proof.
Characteristics of a Parabola
The standard forms of a parabola with vertex (0, 0) are as follows:
Algebra        Geometry
y = ax^2       Focus: (0, 1/(4a));  Directrix: y = -1/(4a)
x = ay^2       Focus: (1/(4a), 0);  Directrix: x = -1/(4a)
The line of symmetry for y =ax^2 is the y-axis. Similarly, the line of symmetry for x=ay^2 is the x-axis.
Find the focus and directrix for the parabola y = -(1/2)x^2.
Compare with the form y = ax^2:
=> a = -1/2
Therefore, the focus is (0, 1/4a) = (0, 1/[4(-1/2)])
= (0, -1/2) and the directrix is the line y = -1/4a = -1/[4(-1/2)] = 1/2
Find an equation in standard form for the parabola whose directrix is the line x = 2 and focus is the point (-2, 0).
The directrix, x = 2, is to the right of the focus (-2, 0). Therefore, the parabola has a horizontal line of symmetry and opens to the left, away from the directrix.
Because x = 2 = -1/4a
=> 8a = -1
=> a = -1/8
The standard form equation for the parabola is
x = ay^2 = -1/8 y^2
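The two worked examples can be checked against the geometric definition; the short SymPy script below (not part of the original notes) verifies that a point on each parabola is equidistant from its focus and its directrix.

# Check: a point on each parabola is equidistant from focus and directrix (SymPy).
import sympy as sp

# Example: y = -(1/2) x^2  ->  focus (0, -1/2), directrix y = 1/2
x0 = sp.Rational(3)
y0 = -sp.Rational(1, 2) * x0**2
d_focus = sp.sqrt(x0**2 + (y0 + sp.Rational(1, 2))**2)
d_directrix = sp.Abs(y0 - sp.Rational(1, 2))
print(sp.simplify(d_focus - d_directrix))   # 0

# Example: x = -(1/8) y^2  ->  focus (-2, 0), directrix x = 2
y0 = sp.Rational(4)
x0 = -sp.Rational(1, 8) * y0**2
d_focus = sp.sqrt((x0 + 2)**2 + y0**2)
d_directrix = sp.Abs(x0 - 2)
print(sp.simplify(d_focus - d_directrix))   # 0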
|
{"url":"http://www.personal.kent.edu/~rmuhamma/Algorithms/MyAlgorithms/parabola.htm","timestamp":"2014-04-16T07:35:06Z","content_type":null,"content_length":"6400","record_id":"<urn:uuid:23a17dfe-842b-45de-986d-4a2bb25c9348>","cc-path":"CC-MAIN-2014-15/segments/1397609521558.37/warc/CC-MAIN-20140416005201-00373-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Copyright © University of Cambridge. All rights reserved.
'Fun Time' printed from http://nrich.maths.org/
On Saturday, Asha and Kishan's grandad took them to a Theme Park.
^[]of the day queuing for rides.
They worked out that $\frac{3}{4}$ of the rest of their time there was spent enjoying the rides.
^[] $\frac{1}{2}$ was spent having lunch.
The 3-d cinema show took up $\frac{2}{3}$ of the rest of the day.
Finally there were $10$ minutes left to go to the gift shop before they went home.
How long were Asha and Kishan at the Theme Park?
|
{"url":"http://nrich.maths.org/1100/index?nomenu=1","timestamp":"2014-04-21T04:41:02Z","content_type":null,"content_length":"3677","record_id":"<urn:uuid:eb94f608-011d-4002-aae3-bfdce2cc2f61>","cc-path":"CC-MAIN-2014-15/segments/1398223202548.14/warc/CC-MAIN-20140423032002-00349-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Several options for the \Vertex macro
UPDATE: This example no longer works with the latest versions of tkz-graph and tkz-berge. For an updated version see this
post at my new blog.
{\tikzstyle{every node} = [node distance=1.5cm]
{\tikzstyle{every node} = [node distance=1.5cm]
style={line width=2pt,
inner sep=0pt,
minimum size=12pt}]{Q,R,S}}
\foreach \x/\y in {E/F,G/H,I/J,K/L,M/N,O/P}
\Edge[style={bend left}](G)(N)
\Edge[style={bend right}](A)(T)
5 comments:
Thx it helped me a lot
I am having trouble running your code. It gives me the following error
Argument of \Vertices@NoStar has an extra }.
l.9 ...=4,dir=\SO,LabelOut=true,Ldist=5pt]{B,C,D}}
on line
I have cut-and-pasted the code and enclosed it within
@M. Tamer Özsu:
I can reproduce the problem, probably it is due to changes in tkz-berge. I'll look into it and post an update. Thanks for your report!
Thanks. While you are at it, I wonder if I can ask another question. Is there a way to have a node with label both inside it and outside? I want to have a node identifier inside the node and the
actual label above it, but have not been able to figure out if this can be done.
I have updated another post in my new blog, see: http://graphtheoryinlatex.wordpress.com/2011/01/21/doing-more-with-vertices/, showing one way to have multiple labels on one vertex.
|
{"url":"http://graphtheoryinlatex.blogspot.com/2009/08/several-options-for-vertex-macro.html?showComment=1318953095435","timestamp":"2014-04-16T10:46:54Z","content_type":null,"content_length":"56449","record_id":"<urn:uuid:538086ab-e903-48a4-8037-3cc3b49fc197>","cc-path":"CC-MAIN-2014-15/segments/1398223211700.16/warc/CC-MAIN-20140423032011-00268-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Engineering Judgment and Natural Circulation Calculations
Science and Technology of Nuclear Installations
Volume 2011 (2011), Article ID 694583, 11 pages
Review Article
Engineering Judgment and Natural Circulation Calculations
^1Autoridad Regulatoria Nuclear, Avenida del Libertador 8250, 1429 Buenos Aires, Argentina
^2CONICET and National Academy of Sciences of Buenos Aires, Argentina
Received 9 September 2010; Accepted 29 November 2010
Academic Editor: Alejandro Clausse
Copyright © 2011 J. C. Ferreri. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any
medium, provided the original work is properly cited.
The analysis performed to establish the validity of computer code results in the particular field of natural circulation flow stability calculations is presented in the light of usual engineering
practice. The effects of discretization and closure correlations are discussed and some hints to avoid undesired mistakes in the evaluations performed are given. Additionally, the results are
presented for an experiment relevant to the way in which a (small) number of skilled, nuclear safety analysts and researchers react when facing the solution of a natural circulation problem. These
results may be also framed in the concept of Engineering Judgment and are potentially useful for Knowledge Management activities.
“No man outside his specialty is not credulous…”
Jorge Luis Borges, “The secret miracle”, Fictions
1. Introduction
The concept of Engineering Judgment (EJ) is sometimes invoked to support the validity of technical assertions based on the subjective judgment of experts. This is particularly true when uncertainty prevails regarding the data at hand, as opposed to statistically valid data sets. Many relevant technical decisions are based on this type of EJ. In particular, the assignment of subjective probabilities to rarely occurring events is a usual example of this particular use of EJ. The phrase “educated guessing” has long served as an alternative name for this somewhat arbitrary, nonscientific way of assigning values to parameters. The nuclear community is sensitive to these aspects and one of the general conclusions of a nuclear safety specialists meeting, see Aksan [1], was to “Minimize need for expert judgment as far as practicable”. Needless to say, this is also the most common cause of complaints from the public and nongovernmental organizations regarding risk and cost-benefit analyses of installations. Public and NGO opposition to chemical, nuclear and many other types of industrial sites is, quite frequently, the consequence of a negative perception of said risk-benefit studies.
EJ is really at the base of the usual way of engineering data analysis. It is the case of deciding whether or not a calculated set of results can be considered a valid one. In this paper,
applications of EJ deal with the computer prediction of the stability of natural circulation (NC) flows (jargon for natural thermal convective flows) in hydraulic loops of interest in the nuclear field.
The simplest approximation will be considered, namely, one-dimensional (1D), almost incompressible flow in single phase. It may be argued that it is a rather simplistic problem, because real life
installations show much more complicated situations. However, most of the calculations performed under these restrictive hypotheses pose some challenges that must be solved on the basis of EJ if this
is understood, as mentioned above, as the process performed to determine the validity of a given set of computer results.
The emphasis of this paper is not on the two basic steps of computer code development, namely, verification and validation. These steps are assumed as already done. Here, the verified and validated
codes are used to analyze the behavior of quite simple loops, either theoretical or experimental ones, with the main interest focused on assessing the results. As a consequence, some insights are
derived to account for the effects of discretization and closure correlations. One aspect that will deserve particular consideration is when to stop searching for perfection in the achieved results, in the light of the lack of truly valid experimental data allowing for even partial validation, or of exact solutions for the problem under analysis (the vast majority of real-life engineering problems) obtained with different codes.
Perhaps, before starting the analysis, it may be useful to excerpt some considerations by Scannapieco and Harlow [2] on the role of computational predictions: “In as much as we can simulate reality,
we can use the computer to make predictions about what will occur in a certain set of circumstances. Finite-difference techniques can create an artificial laboratory for examining situations which
would be impossible to observe otherwise, but we must always remain critical of our results. Finite-differencing can be an extremely powerful tool, but only when it is firmly set in a basis of
physical meaning. In order for a finite-difference code to be successful, we must start from the beginning, dealing with simple cases and examining our logic each step of the way”. Harlow was one of
the most talented experts in Computational Fluid Dynamics, and led the famous Group T3 at Los Alamos Scientific Laboratory in the 1970s and 1980s.
From the regulatory point of view, the need for independent safety analysis cannot be sufficiently emphasized. It must be understood that the same engineering data most probably will generate
different results, even using the same code and the same (agreed with the licensee) criteria for discretization. Differences would arise from choosing different code options or from the code user's interpretation of the agreed criteria. In passing, the importance of EJ may be once again exemplified by the following excerpt from the work of Shotkin [3]: “It should be stressed that the staff does
not rely solely in computer analyses, but rather use the analyses as a tool to help guide understanding of plant behavior in conjunction with Engineering judgment, hand calculations, data analysis,
and experience with plant operation”. Also: “It must be continually emphasized that code results must always be used with cautionary Engineering judgment. This is true even for those uses where the
code has been explicitly assessed against data because user choices and input deck errors may influence the calculation results”.
In what follows, some examples coming from previous work by the author and his colleague at the University of Pisa, Professor Walter Ambrosini, are reviewed and presented. These results will be the
support for a part of the present contribution.
Also in relation to the aforementioned work, a theoretical experiment was performed, aimed at testing how a group of skilled, active, young nuclear safety analysts and researchers would react when faced with interpreting some puzzling results from a systems thermal-hydraulic code and an in-house developed thermal-hydraulic code. The information given to these people was somewhat biased, so as to provoke an unneeded sophistication of the analysis. The results showed that this bias was (regrettably) successful. Some other aspects of scientific information as presented in technical journals are
discussed and the lessons learned are made explicit. These aspects would also be potentially useful for Knowledge Management (KM) activities.
It must be mentioned that the subjects herein discussed are some of the more important aspects of safety evaluations and this brief, quite restricted presentation may, hopefully, contribute some
emphasis on them.
2. The Search for Convergence of Results
This is, perhaps, the easiest step in computational analysis of engineering problems but only conceptually. In fact, it means that grid size, as measured by some suitable norm, is compatible with the
accuracy of resolution of some type of boundary layer. This may be a momentum boundary layer as in the vicinity of a wall, the depth of heat penetration in a solid, or the time history of some
suitable dependent variable as a function of its time scale, among many other possible examples. What must be considered is that a given boundary layer behavior must be solved accurately enough.
Searching for grid convergence is not too costly an activity in simple integration domains, like the 1D cases considered herein. This is not the case in multidimensional domains. In the latter, the use of multiple-scale calculations tends to keep detail and accuracy at an appropriate level in the entire integration domain. The shape and size variation of computational cells affect the global accuracy.
In the case of NC in unstable flow conditions analyzed using time-domain computer codes, the problem consists in using a spatial discretization fine enough to minimize the amount of numerical
diffusion. This numerical diffusion is sometimes added in the process of solution as a consequence of the inherent properties of the discrete scheme. This diffusion is usually associated with
first-order spatial discretization. It may be argued that using spatial O numerical schemes should not be recommended in general. However, most engineering thermal-hydraulic systems codes use this
approximation to circumvent a worse limitation: the ill-posedness of governing equations.
The interaction of flow stabilization and discretization may be exemplified resorting to results cited in Ferreri and Ambrosini [4], as shown in Figure 1, where the flow rate in a simple loop of
Figure 2 was obtained using a first-order finite-difference scheme known as forward time (Euler) upstream space (FTUS in short), 1000 spatial nodes, and a prescribed cell Courant number C. The results are compared
with the ones obtained using a modal expansion, which is free of numerical diffusion, with 500 modes and adding the numerical diffusion nearly corresponding to the previous approximation. It may be
observed that the results are nearly the same. Then, it may be concluded that the usual interaction between the numerics and the physics persists in this nonlinear case.
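To make the role of first-order upwinding concrete, the following short sketch (written for illustration here; it is not one of the codes used in the paper) advects a scalar pulse around a periodic 1-D loop with a forward-time, upstream-space update. The pulse is transported but visibly smeared, because the scheme's leading truncation error acts like an added diffusivity of roughly u*dx*(1 - C)/2.

# FTUS (forward-time, upstream-space) advection of a pulse around a 1-D loop;
# illustrative only, not one of the codes referenced in the paper.
import numpy as np

nx, u, dx = 200, 1.0, 0.01
C = 0.5                                  # cell Courant number, u*dt/dx
T = np.zeros(nx)
T[45:55] = 1.0                           # initial square pulse

for _ in range(400):
    T = T - C * (T - np.roll(T, 1))      # first-order upwind update, periodic loop

# The truncation error behaves like a numerical diffusivity of about
# u*dx*(1 - C)/2, which is what flattens the pulse below.
print(round(float(T.max()), 3))          # noticeably less than 1.0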
The results by Ferreri and Ambrosini [4] showed how using schemes of different order could be useful to obtain improved convergence of the results towards some limiting accuracy. Perhaps the most interesting results were those showing how usual approximations of piping systems in the nuclear industry could be non-conservative from the point of view of safety. In fact, revisiting a pioneering work by Welander [5], a stability map was determined. It corresponds to a two-pipe loop 10 m high and 0.1 m in diameter, with a concentrated heat source at the bottom and an opposite heat sink at the top, as the one shown in Figure 2.
The analytical stability map is the one in Figure 3, where a working point corresponding to an unstable flow condition was set. Then the map was constructed by calculation with the FTUS approximation
and the effect of the number of nodes was determined. In the maps following, α and ε are two nondimensional parameters that measure, respectively, the buoyancy driving force and the resisting
friction force in the loop.
Figure 4 shows that, as the number of nodes increases, the unstable region in the map progressively converges to the theoretical stability boundary (SB). Thus, for the point under analysis, the predicted flow changes from a stable condition to an unstable one: the evaluation of the system goes from a non-conservative (stable) prediction towards the conservative, truly unstable one. Predicting the system to be stable is, obviously, an incorrect and dangerous outcome in this case.
The interesting consideration here is that discretizing a pipe 10 m long and 0.1 m in internal diameter into volumes 0.3 m long seems natural to a systems code user, at least as a compromise between
computational cost and expected system behavior. Then, assuming that the system is expected to perform in a stable way, EJ must be used to decide on various aspects, namely, (a) the system satisfies
the design goals; (b) the numerical model is appropriate; (c) the computer code is applicable; (d) the discretization is adequate and does not mask some unexpected behavior; (e) results are
converged. These questions are of great importance for the safety evaluation of nuclear installations. Furthermore, as they seem natural, they have also been considered in the so-called Code Scaling,
Applicability and Uncertainty (CSAU) evaluation methodology; see reference [6], a United States Nuclear Regulatory Commission’s major documented way to assess the traceability of nuclear safety
analyses. Also, the need for the qualification of codes and their users arises in a natural way and this, incidentally, has also been the subject of much analysis; see for example, the discussions in
[7], among others.
Another problem arises when the results of two independent codes are compared. A general, advanced thermal-hydraulic systems code like RELAP5, see Carlson et al. [8], and another of restricted validity can both be applied to a particular physical situation for which the second is known to be applicable. In single-phase NC flows, the mass flow rate scales with the one-third power of the heat input to the system. A difference of 10% in heat input therefore leads to a difference of only 3.2% in flow rate. This last difference is small and acceptable in most situations, given the uncertainties in codes and their closure correlations, but it hides a significant one in power. Deciding when it is possible to accept such a difference poses some challenge for large, complex systems and again requires applying EJ.
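The one-third power scaling quoted above can be verified with a two-line check (illustrative, not from the paper):

# Single-phase NC: flow rate ~ (heat input)**(1/3); a 10% power change -> ~3.2% in flow.
ratio = 1.10 ** (1.0 / 3.0)
print(f"{(ratio - 1.0) * 100:.1f} %")   # ~3.2 %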
Regarding convergence of results, some care must be also taken when lumped parameter simulations are used. In Ferreri et al. [9, 10], a lumping criterion for concentrated heat source/sink was
developed, which eliminates the lack of convergence due to heated length in an FTUS finite-differences scheme applied to the above-mentioned problem. These results arose from applying EJ to this
lack of convergence.
3. The Effect of Closure Correlations
Related to the previous search for convergence of results, there is another aspect to be taken into account. It is whether an accepted, commonly applied closure correlation is appropriate to describe
the physics of the problem under analysis. Closure correlations serve to close a system of conservation equations. Most commonly, they include interface and interphase relations like friction laws, heat transfer correlations, phase slip velocity specifications, and many others. In this section, the effects of using different versions of the macroscopic friction law will be discussed. It is important to say, from the very beginning, that if the results of a computer prediction are not known (the usual case in engineering calculations), then using accepted closure correlations is a basic tenet. There is nothing to be argued against this practice. On the contrary, it is supported by common sense and EJ. On the other hand, it must be noted that unstable, time-reversing flows always traverse a laminar-turbulent transitional region. The time scale associated with these reversals may affect the influence of the transitional regime.
It may be interesting to consider first the effect of the friction law on the stability map of a toroidal loop. This geometry is amenable to analytical and numerical analysis and has been the subject of research for decades. An example of this may be found in Ferreri and Doval [11]; Figure 5 shows, without embellishment, how the system behaved as the nodalization was changed, exhibiting the usual damping of the FTUS finite-differences scheme.
Far more recently, in Ferreri and Ambrosini [4], the effects of the friction laws on the stability maps of a similar system were analyzed. Figure 6(a) shows the most usual correlations for the friction factor in a tube as a function of the Reynolds number. The one labelled as the Churchill law is an adequate fit to Moody's law for smooth tubes as used in engineering calculations. Figure 6(b) shows how the neutral stability boundary is affected by the particular choice of the friction-factor variation at the transition of the flow from laminar to turbulent. The variation is also predicted using the FTUS methodology and a modal expansion solution of the governing equations. Now, a more realistic situation will be analyzed.
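Before moving on: the "Churchill law" referred to above is taken here to be Churchill's (1977) single-equation correlation for the Darcy friction factor, which smoothly bridges the laminar, transitional and turbulent regimes. The sketch below is a rendering of that standard correlation, given only as an illustration; it is not a formula reproduced from the paper.

# Churchill (1977) single-equation Darcy friction factor (illustrative sketch).
import math

def churchill_f(Re, rel_roughness=0.0):
    A = (2.457 * math.log(1.0 / ((7.0 / Re) ** 0.9 + 0.27 * rel_roughness))) ** 16
    B = (37530.0 / Re) ** 16
    return 8.0 * ((8.0 / Re) ** 12 + 1.0 / (A + B) ** 1.5) ** (1.0 / 12.0)

for Re in (5.0e2, 2.3e3, 1.0e4, 1.0e5):   # laminar, transitional, turbulent
    print(f"Re = {Re:9.0f}   f = {churchill_f(Re):.4f}")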
Let us now consider the experimental results of Vijayan et al. [12], dealing with NC flow in a simple square loop. The loop consists of a 23.2 mm I.D. glass pipe with 2.1 m vertical legs, equipped with 0.8 m long electrically heated and fluid-cooled horizontal sections. The latter consists of a pipe-in-pipe heat exchanger, fed with relatively cold water at prescribed flow rates. This loop showed unstable NC flow conditions for a heat power input of 420 W. These results have been simulated with the set of two codes described in Ambrosini and Ferreri [13]. Figures 7 and 8 show the results of the predictions using Churchill’s approximation. As may be observed, the map shows a band of stable flow conditions. Figure 9 shows the map for the same conditions using the friction law suggested in Vijayan et al. [12]. The flow is always unstable, as the experiments also indicate.
The calculation using the codes of Ambrosini and Ferreri [13] with the correlation by Vijayan et al. [12] made it possible to recover a condition similar to the one in Figure 9, that is, a completely unstable map. The following may now be concluded: the transition laws adopted in thermal-hydraulic codes to link the correlations for laminar and turbulent flow are questionable in unstable flows. It was shown that a non-monotonic transition branch in the correlating curve may lead to predicting stability, whereas experimental observations show unstable behaviour. Again, the condition is not conservative.
It is somewhat difficult to establish an EJ criterion to deal with this situation. Perhaps, the conclusion in Ambrosini et al. [14] can be repeated here: the validity of the traditional claim for the
inapplicability of the forced convection friction correlations in natural circulation conditions appears to be rather dependent on the geometry of the loop. In fact, though in some literature works
including comprehensive reviews, recommendations are given to use friction laws providing larger friction factors than in forced flow, the work of Vijayan et al. [12] seems to suggest that classical
laminar and turbulent friction correlations perform reasonably well in rectangular loops. It is so when appropriate localised pressure drop coefficients are included in the models to account for the
effect of bends and other discontinuities. Nevertheless, what is clear is that transitional flows must be evaluated quite carefully, testing the effects even of the most classical closure correlations.
4. Testing the Possibility of Continued Knowledge Development in NC
It may be accepted, loosely paraphrasing Kuhn, that in the evolution of science there are sudden jumps in knowledge, followed by stability periods of consolidation and accumulation of related
information. The last century shows several examples when, after the foundations of a new theory are well established in a particular field, an explosive increase in the number of related scientific
publications occurs which, paradoxically, is the true symptom of stability. This situation persists until new evidence cannot be explained in terms of the prevailing paradigm. Typically this leads to
the formulation of a new paradigm and the cycle restarts.
In addition, well-known concepts may experience a revival after some years of lethargy. This applies in the case of learned journals too. There are several factors contributing to the last mentioned
situation but it is the author’s opinion that the contribution from reviewers is not the least. It is obvious that as time elapses, the list of peers change and the newer ones may not have enough
time (or predisposition) to read previous, “old” literature. In this way they may be unwittingly prone to recycling information. Researchers who have been publishing their findings for thirty years may be conscious witnesses of this phenomenon.
The reading of an essay on automata by Garassa [15] suggested what will be proposed in the following, with the aim of showing the possibility of pushing the order in a period of stability to its
limits through almost automatic knowledge advancement.
The general proposal was remarkably simple:
The Almost Automatic Exploration of Knowledge Niches to Get Additional, Supporting, Continuing Contributions
In order to test the feasibility of this approach, a theoretical experiment was devised. The experiment was carried out with the contribution of several young, experienced professionals belonging to several groups with theoretical and experimental skills in nuclear engineering. They are professionally active in the field and were willing to participate in a “theoretical experiment in KM”. The participants had a previous working background or recent training in the addressed subject. Only e-mail contact was used. The interest of this approach will, hopefully, be evident in what follows.
To accomplish this goal, the relevant issue was performing a theoretical experiment to test the possibility of continued, “normal” development of knowledge by juniors in a selected niche of
knowledge, without interacting with seniors.
4.1. Subject of Application
Arguably, the knowledge niche selected was the computation of NC flows in thermal-hydraulic loops.
This has been the subject of intensive research for more than thirty years. Again, a list of publications up to 2002 may be found in Ferreri and Ambrosini [4]. Even earlier, this author had also tried to put in rational terms the usual thinking involved in setting up computational fluid dynamics (CFD) models, in a rather elementary prototype of an expert system, as in Ferreri and Grandi [17]. This background led to the present choice. It should be pointed out that the long-range goal of this work was to incorporate the way of analysis described in what follows into some inference machine embedded in an expert system. Automatic inference is not new, see for example King et al. [18] and Schmidt and Lipson [19], and would allow obtaining minor advances like the one reported here, leaving time for more relevant research tasks. In addition, detailed procedures for documentation and data reproducibility, see Schwab et al. [20], may also be used to advantage for this long-term goal.
4.2. Selected Bibliographical Material
The information provided consisted of full-text versions of the references by Ferreri and Ambrosini [4], Ambrosini and Ferreri [13], and Pilkhwal et al. [16]. In fact, to rely on information publicly available online at the time of the experiment (2007), only the title, keywords, and abstract were to be used. The corresponding material of the papers cited in these references in which this author participated could also be used, but this was not suggested to the participants. The underlying idea was applying the scheme described below to infer the lines of research that led to some new,
unpublished data. There were two, almost evident, possible lines to be inferred, (a) the continuation of detailed studies, based on CFD codes and (b) a second one, explaining how to overcome the
limitations of one-dimensional (1D) codes in the case of interest. The second, less evident, was the key leading to the set of unpublished, new results. Merging of the two techniques in a multiscale,
multidomain system code was another possible solution.
4.3. Procedure of Experimentation
The procedure followed consisted in sending a letter of invitation to the potential participants, after asking their advisors for authorization.
The group of people included some usually working with CFD codes and some others working with so-called thermal-hydraulic system codes. The latter are basically the ones usually used to perform
safety analysis of nuclear power installations as well as to get experience on their behavior through the simulation of controlled integral test facilities experiments.
The invitation letter expressed that the participant may be “aware that the management of knowledge implies taking care of heritage. Many institutions are presently suffering the effects of a long
lasting lethargy. This is particularly true, although not exclusively, in the nuclear field, where seniors are beginning to retire and there is a lack of skilled, intermediate aged professionals,
able to continue the activities.”
It continued stating, “There is a set of results, still unpublished, which is a “natural” continuation of the line of research indicated as background material. What is expected from your
participation is to infer what the aforementioned unpublished results are and the way they have been obtained, on the basis of the reading of the background material at two levels of detail as
specified below. This expected outcome, of only half a page in length would imply that, what seems ‘natural’ to me might be easily unveiled from reading the papers.” (In reality, what happened was
that the author obtained the results in this way and this fact gave the opportunity to test the procedure now reported, simply by rejecting the possibility of publication of the new development. The
Appendix illustrates the reasoning behind this approach.)
Then the selected material was cited, as specified before. Regarding the procedure to follow, two levels could be employed. Both started with a common premise:
“Do not consult or discuss your conclusions with your advisor (I asked him/her for permission) or colleagues of your work group.”
Then, the first of two possible approaches could be followed:
(a) “Read the papers in sequence using only the title, the keywords and the abstract.
(b) Draw conclusions, advancing your guess of the outcome.
(c) Write your conclusions and send them to me by e-mail.”
Or, in case it was felt necessary to have more detailed information, the procedure to follow could be:
(a) “Add the reading of the full text.
(b) As before.
(c) As before.”
The selection of the references was purposely biased, the first two leading the participants to realize the limitations of one-dimensional codes. The third one stressed even more on these
limitations. Also, the latter paper explicitly stated in its abstract that the difficulties could not be overcome by using 1D codes and that CFD codes were the natural option to follow, something
that was also suggested in the second reference. The conclusion on the ultimate limitation is true as stated but, as is usually accepted in the Engineering practice when the flow pattern may be
inferred from experience, a suitable nodalization can be set up, to take into account the (somewhat) complicated flow-pattern.
Then the following was suggested to the experiment participants.
As stated by Ferreri and Ambrosini [4] “Sometimes, scaling leads to the adoption of the 1D approximation; this may, in turn, hide important aspects of the system physics. A simple example of this
situation consists in keeping the height of the system unchanged to get the same buoyancy; then, if the system is scaled accordingly to the power/volume ratio, the cross section area of the volume
will be reduced; this leads to a much smaller pipe diameter that makes the 1D representation reasonable, at the cost of eliminating the possibility of fluid internal recirculation. A workaround for
this situation is providing paths for recirculation, in the form of additional, interconnected components; however, this solution may impose the flow pattern in the system and the balance between
these aspects is a challenge to any practitioner in natural circulation modeling”.
Also, in Pilkhwal et al. [16], it was explicitly stated “Strategies for improving the predictions of the RELAP5 code are under study by the present Authors, trying to provide the simulation of the
heater in the HHHC (horizontal heater/horizontal cooler) configuration with some allowance for predicting thermal stratification phenomena”.
The above-mentioned “suitable” nodalization usually comes from the application of EJ based on the simulation of experiments in similar situations. This option needs some more intuition but leads to
results that may reflect the experimental trends. It also has, at least, two advantages: (a) computer time is quite small, in the order of minutes using a standard PC, as opposed to many hours using
a CFD code and, (b) experience is gained, suitable for its application in reasonable extrapolations (This is the type of knowledge that may be incorporated into the system of rules in some expert
system.). See also the discussion in [6].
4.4. Results
In total, more than twenty invitations were sent, distributed among five institutions in different countries. Only ten answers were obtained, of varying degrees of detail. Six answers were based on
the first indication of reading and the other four on varying degrees of reading of the papers. The low number of answers may be, perhaps, attributed to the simple fact that many people think that
paying attention to this type of experiment is simply not worth doing.
All the answers were conceptually correct, did not go too deep into justification, and suggested that the additional results were CFD analyses or different extensions of the third paper. What is
interesting is that most of the participants are familiar and presently working with such techniques. Perhaps, these young researchers were somewhat dogmatic in considering what was written in the
supplied literature and not prone to consider alternatives to what is shown in it or, perhaps worse, alternatives to their usual thinking. Another possibility is that no one was too interested in
reading in detail long introductions, discussions, or conclusions. However, it must be emphasized that the invited people usually perform code validation to continue research and nuclear safety
evaluations of advanced reactor designs. This may serve as a warning to people in academia with regard to promoting the appropriate use of computer resources and emphasizing EJ, because code
users may be prone to consider the least information that may lead to confirm their presumption on expected results. As a consequence, full exploitation of present computer models and codes must be
emphasized at research and development groups. This may lead to saving time and resources.
Just one answer was what the author expected, suggesting among other things, the way to obtain results in the way described as (b) above, explaining how to overcome the limitations of 1D codes in the
case of interest. This answer explicitly stated “A tentative to reproduce such behavior (stratification in horizontal pipe) by the 1D system code could be done by suitable nodalization technique
(e.g., dividing the horizontal tube into two parallel parts). However special care should be given to avoid the introduction of phenomena not part of the experiment or not physical”. One tenth is
satisfactory as a result. Obviously, it cannot be asserted that increasing the number of participants would imply keeping a similar result.
From this experiment, it seems that the first approach to the literature analysis is not useful for continuing research or, at least, for exploring useful alternatives to the summarized results. It is also a warning to any author (the present one is no exception) on how to write an abstract. It also seems that reasonable suggestions for further research may be obtained following the procedure quoted as
the full text approach to literature analysis. Then, a more exhaustive experiment may be designed and tested based on this.
From the limited number of answers, it was concluded that:
(a) the procedure, as presented, seemed reasonable. It should be tested in another field, preferably by someone dealing with a different niche of knowledge, to further test its feasibility;
(b) the information available for browsing in presently commercially copyrighted literature is not enough to advance knowledge, because it depends on the information that authors consider relevant to abstract.
It is suggested to continue with this type of experiment to analyze the idea proposed in this work with a wider universe of participants. The research area may deal with a different topic.
5. Conclusions
This paper dealt with some particular applications of Engineering Judgment to evaluate the results of computer codes application to unstable, one-dimensional, NC flows in single phase. Despite the
simplicity of the systems analyzed, some problems have been exemplified that pose a challenge to the common reasoning. Perhaps, the only way to circumvent the questions of convergence of results and
the effects of closure correlations is to resort to sensitivity to parameters analysis. If a concluding assertion is needed, it may be that EJ and nondogmatism go together and that accepting clichés
as working rules must be avoided. In the author’s opinion, the few examples considered fully support the previous assertion.
On the other hand, in order to show that it is possible to advance almost automatically in the full exploration of a knowledge niche, a controlled experiment with a limited number of participants was performed. In so doing, the conceptual approach on the possibility of continued development by young researchers, without interaction with seniors, was tested. The experiment made it possible to verify that the test procedure is reasonably well founded and that the literature published so far was consistent in pointing to findings and new ways amenable to exploration. The experiment may be useful in KM activities.
Appendix
As said before, a list of publications up to 2002 may be found in Ferreri and Ambrosini [4]. The results have always been related to quantifying the effects of closure correlations and numerical
approximations, as implemented in nuclear safety analysis codes, on the results. The particular aspect under analysis was the unstable behavior of natural circulation flows.
The results that substantiated the experiment will be clarified in what follows, for the sake of completeness. These results partly come from Pilkhwal et al. [16]. Figure 10 shows the experimental
rig that originated the results. Said rig was represented using RELAP5 and an in-house developed code named TRANLOOP. RELAP5, as developed by US-NRC, see Carlson et al. [8], is one of the most widely
used thermal-hydraulic systems codes for performing nuclear safety evaluations. Figure 11 shows the nodalization adopted to consider the HHHC configuration mentioned above and a nominal heating power of
100W. The flow rate time variation in the loop is shown in Figure 12. It is a composition showing the results as obtained from (a) the experiment, (b) from RELAP5 and TRANLOOP, and (c) from a CFD
code, namely, FLUENT 6.2, see Fluent Inc. [21]. As may be observed, the CFD approximation represents well the growth and persistence of the physical flow rate instabilities. This leads to the obvious conclusion that representing fluid stratification, as a CFD code does and as is not achievable with 1D codes like RELAP5 and TRANLOOP, may allow an adequate flow pattern
description. In the case of the results obtained using RELAP5, the flow remains stagnant until the fluid starts to boil. Minute differences in temperature destabilize the flow and a cycle (like the
ones visible in Figure 14 later) starts. This does not happen when using TRANLOOP because the Boussinesq approximation fails to reflect the physics.
As previously mentioned, some alternative nodalization using 1D codes may be considered. The one shown in Figure 13 may be one of the possible solutions in this particular case. In fact, including
two interconnected parallel channels with equivalent friction and heat transfer should constitute an approximation capable of representing the expected behavior of the physical installation. Then, this
nodalization was implemented using RELAP5 and the results, exemplified in Figure 14, showed that the flow instabilities may be recovered, even for lower heating rates. This nodalization of the
horizontal heater did not affect the stability in the other configurations discussed in Pilkhwal et al. [16]. The different behavior is due to the transverse flows between the horizontal channels. As
may be observed in Figure 14, the expected thermal stratification is roughly represented. Arguably, more parallel channels would approximate better the physical situation. The availability of a
component allowing thermal stratification would be a desirable feature for any systems thermal hydraulic code.
The simple reasoning described above and the results so obtained provided the background that leads to the reported theoretical experiment in KM.
Acknowledgment
Part of this paper is based on a conference delivered to the National Academy of Sciences at Buenos Aires on August 23rd, 2003, on “Computational Models and Engineering Judgement (in Thermal Hydraulics)”.
References
1. N. Aksan, “(Compiler), Best estimate methods in thermal hydraulic safety analysis,” in Proceedings of the Summary and Conclusions of OECD/CSNI Seminar, Ankara, Turkey, June 1988, NEA/CSNI/R(99)
2. E. Scannapieco and F. H. Harlow, “Introduction to Finite-Difference Methods for Numerical Fluid Dynamics,” LA-12984 (UC-700), 1995.
3. L. M. Shotkin, “Development and assessment of U.S. nuclear regulatory commission thermal-hydraulic system computer codes,” Nuclear Technology, vol. 116, no. 2, pp. 231–244, 1996.
4. J. C. Ferreri and W. Ambrosini, “On the analysis of thermal-fluid-dynamic instabilities via numerical discretization of conservation equations,” Nuclear Engineering and Design, vol. 215, no. 1-2,
pp. 153–170, 2002.
5. P. Welander, “On the oscillatory instability of a differentially heated fluid loop,” The Journal of Fluid Mechanics, vol. 29, part 1, pp. 17–30, 1967.
6. “Quantifying Safety Margins: Application of Code Scaling, Applicability, and Uncertainty Evaluation Methodology to a Large Break Loss-of-Coolant Accident,” NUREG/CR-5249, EGG-2659—also in Nuclear
Engineering and Design, 119, 1990.
7. F. D'Auria, “Proposal for training of thermal-hydraulic system code users,” in Proceedings of the IAEA Specialist Meeting on User Qualification for and User Effect on Accident Analysis for
Nuclear Power Plants, Vienna, Austria, August 1998.
8. K. E. Carlson, et al., “RELAP5/MOD3 code manual, volume I: code structure, system models and solution methods,” Tech. Rep. NUREG/CR-5535, 1990.
9. G. M. Grandi and J. C. Ferreri, “Limitations of the Use of a ‘Heat Exchanger Approximation’ for a Point Heat Source,” Internal Memo, CNEA, GSRN, Argentina, 1991.
10. J. C. Ferreri and W. Ambrosini, “Verification of RELAP5/MOD3 with theoretical and numerical stability results on single-phase, natural circulation in a simple loop,” Tech. Rep. NUREG IA/151, US
Nuclear Regulatory Commission, 1999.
11. J. C. Ferreri and A. S. Doval, “On the effeect of discretization in the computation of natural circulation in loops,” Seminarios del CAMAT, vol. 24, pp. 181–212, 1984 (Spanish).
12. P. K. Vijayan, H. Austregesilo, and V. Teschendorff, “Simulation of the unstable oscillatory behavior of single-phase natural circulation with repetitive flow reversals in a rectangular loop
using the computer code athlet,” Nuclear Engineering and Design, vol. 155, no. 3, pp. 623–641, 1995.
13. W. Ambrosini and J. C. Ferreri, “Prediction of stability of one-dimensional natural circulation with a low diffusion numerical scheme,” Annals of Nuclear Energy, vol. 30, no. 15, pp. 1505–1537,
2003. View at Publisher · View at Google Scholar
14. W. Ambrosini, N. Forgione, J. C. Ferreri, and M. Bucci, “The effect of wall friction in single-phase natural circulation stability at the transition between laminar and turbulent flow,” Annals of
Nuclear Energy, vol. 31, no. 16, pp. 1833–1865, 2004. View at Publisher · View at Google Scholar
15. D. L. Garassa, Los Automatas y Otros Ensayos, Editorial Corregidor, Buenos Aires, Argentina, 1992.
16. D. S. Pilkhwal, W. Ambrosini, N. Forgione, P. K. Vijayan, D. Saha, and J. C. Ferreri, “Analysis of the unstable behaviour of a single-phase natural circulation loop with one-dimensional and
computational fluid-dynamic models,” Annals of Nuclear Energy, vol. 34, no. 5, pp. 339–355, 2007. View at Publisher · View at Google Scholar
17. J. C. Ferreri and G. M. Grandi, “On expert system assisted finite-difference schemes selection in computational fluid dynamics,” in Proceedings of the 6th International Conference on Numerical
Methods in Laminar & Turbulent Flows, C. Taylor, P. M. Gresho, J. Thompson, R. L. Sani, and J. Hauser, Eds., vol. 2, Pineridge Press, Swansea, UK, July 1989.
18. R. D. King, J. Rowland, S. G. Oliver et al., “The automation of science,” Science, vol. 324, no. 5923, pp. 85–89, 2009. View at Publisher · View at Google Scholar · View at PubMed
19. M. Schmidt and H. Lipson, “Distilling free-form natural laws from experimental data,” Science, vol. 324, no. 5923, pp. 81–85, 2009. View at Publisher · View at Google Scholar · View at PubMed
20. M. Schwab, M. Kerrembach, and J. Claerbout, “Making scientific computations reproducible,” Computing in Science and Engineering, vol. 2, no. 6, pp. 61–67, 2000.
21. FLUENT Inc., FLUENT 6.2 User’s Guide, Centerra Resource Park, Lebanon, NH, USA, 2003.
Gene-Based Tests of Association
Genome-wide association studies (GWAS) are now used routinely to identify SNPs associated with complex human phenotypes. In several cases, multiple variants within a gene contribute independently to
disease risk. Here we introduce a novel Gene-Wide Significance (GWiS) test that uses greedy Bayesian model selection to identify the independent effects within a gene, which are combined to generate
a stronger statistical signal. Permutation tests provide p-values that correct for the number of independent tests genome-wide and within each genetic locus. When applied to a dataset comprising 2.5
million SNPs in up to 8,000 individuals measured for various electrocardiography (ECG) parameters, this method identifies more validated associations than conventional GWAS approaches. The method
also provides, for the first time, systematic assessments of the number of independent effects within a gene and the fraction of disease-associated genes housing multiple independent effects,
observed at 35%–50% of loci in our study. This method can be generalized to other study designs, retains power for low-frequency alleles, and provides gene-based p-values that are directly compatible
for pathway-based meta-analysis.
Author Summary
Genome-wide association studies (GWAS) have successfully identified genetic variants associated with complex human phenotypes. Despite a proliferation of analysis methods, most studies rely on
simple, robust SNP–by–SNP univariate tests with ever-larger population sizes. Here we introduce a new test motivated by the biological hypothesis that a single gene may contain multiple variants that
contribute independently to a trait. Applied to simulated phenotypes with real genotypes, our new method, Gene-Wide Significance (GWiS), has better power to identify true associations than
traditional univariate methods, previous Bayesian methods, popular L1 regularized (LASSO) multivariate regression, and other approaches. GWiS retains power for low-frequency alleles that are
increasingly important for personal genetics, and it is the only method tested that accurately estimates the number of independent effects within a gene. When applied to human data for multiple ECG
traits, GWiS identifies more genome-wide significant loci (verified by meta-analyses of much larger populations) than any other method. We estimate that 35%–50% of ECG trait loci are likely to have
multiple independent effects, suggesting that our method will reveal previously unidentified associations when applied to existing data and will improve power for future association studies.
Citation: Huang H, Chanda P, Alonso A, Bader JS, Arking DE (2011) Gene-Based Tests of Association. PLoS Genet 7(7): e1002177. doi:10.1371/journal.pgen.1002177
Editor: Mark I. McCarthy, University of Oxford, United Kingdom
Received: May 26, 2010; Accepted: May 25, 2011; Published: July 28, 2011
Copyright: © 2011 Bader et al. This is an open-access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction
in any medium, provided the original author and source are credited.
Funding: JSB acknowledges funding from the Robert J. Kleberg Jr. and Helen C. Kleberg Foundation and from the NIH. DEA, JSB, and HH acknowledge funding from the Simons Foundation (SFARI 137603 to
DEA). The Atherosclerosis Risk in Communities Study is carried out as a collaborative study supported by National Heart, Lung, and Blood Institute contracts (HHSN268201100005C, HHSN268201100006C,
HHSN268201100007C, HHSN268201100008C, HHSN268201100009C, HHSN268201100010C, HHSN268201100011C, and HHSN268201100012C); R01HL087641, R01HL59367, and R01HL086694; National Human Genome Research
Institute contract U01HG004402; and National Institutes of Health contract HHSN268200625226C. The authors thank the staff and participants of the ARIC study for their important contributions.
Infrastructure was partly supported by Grant Number UL1RR025005, a component of the National Institutes of Health and NIH Roadmap for Medical Research. The funders had no role in study design, data
collection and analysis, decision to publish, or preparation of the manuscript.
Competing interests: The authors have declared that no competing interests exist.
Traditional single-SNP GWAS methods have been remarkably successful in identifying genetic associations, including those for various ECG parameters in recent studies of PR interval (the beginning of
the P wave to the beginning of the QRS interval) [1], QRS interval (depolarization of both ventricles) [2] and QT interval (the start of the Q wave to the end of the T wave) [3]–[5]. Much of this
success has relied upon increasing sample size through meta-analyses across multiple cohorts, rather than through the use of novel analytical methods to increase power.
One analytical approach, gene-based tests proposed during the initial development of GWAS [6], has natural appeal. First, variations in protein-coding and adjacent regulatory regions are more likely
to have functional relevance. Second, gene-based tests allow for direct comparison between different populations, despite the potential for different linkage disequilibrium (LD) patterns and/or
functional alleles. Third, these analyses can account for multiple independent functional variants within a gene, with the potential to greatly increase the power to identify disease/trait-associated loci.
Despite these appealing properties, gene-based and related multi-marker association tests have generally under-performed single-locus tests when assessed with real data [7], [8]. A general drawback
of methods that attempt to exploit the structure of LD to reduce the number of tests, for example through principal component analysis, is the loss of power to detect low-frequency alleles. Methods
that consider multiple independent effects often require that the number of effects be pre-specified [9], which loses power when the tested and true model are different.
Multi-locus tests often have the additional practical drawback of being highly CPU and memory intensive. Several methods use Bayesian statistics to drive a brute-force sum or Monte Carlo sample over
models [10], [11], but again often restrict the search to one or two-marker associations. In general, the computational costs have made these approaches infeasible for genome-wide applications.
The Gene-Wide Significance (GWiS) test addresses these problems by performing model selection simultaneously with parameter estimation and significance testing in a computational framework that is
feasible for genome-wide SNP data (see Methods). Model selection, defined as identifying the best tagging SNP for each independent effect within a gene, uses the Bayesian model likelihood as the test
statistic [12]–[14]. Our innovation is to use gene regions to impose a structured search through locally optimal models, which is computationally efficient and matches the biological intuition that
the presence of one causal variant within a gene increases the likelihood of additional causal effects. Models are penalized based on the effective number of independent SNPs within a gene and the
number of SNPs in the model, akin to a multiple-testing correction. The Schwarzian Bayesian Information Criterion corrects for the difference between the full model likelihood and the easily computed
maximum likelihood estimate [15]. This method has greater power than current methods for genome-wide association studies and provides a principled alternative to ad hoc follow-up analyses to identify
additional independent association signals in loci with genome-wide significant primary associations.
Reference genotype and phenotype data
The ECG parameters PR interval, QRS interval and QT interval are ideal test cases because recent large-scale GWAS studies have established known positive associations. These traits are all clinically
relevant, with increased PR interval associated with increased risk of atrial fibrillation and stroke [16], and both increased QRS and QT intervals associated with mortality and sudden cardiac death
[17]–[20]. We assessed the ability of standard methods and GWiS to rediscover these known positives using data from only the Atherosclerosis Risk in Communities (ARIC) cohort, which contributes 15%
of the total sample size for QRS, 25% for PR, and 50% for QT (Table 1).
Table 1. Populations, genes, and SNPs used in this study.
The SNPs were assigned to genes based on the NCBI Homo sapiens genome build 35.1 reference assembly [21]. Gene boundaries were defined by the outermost transcriptional start site and transcriptional end
position for any transcript annotated to a gene, yielding 25,251 non-redundant transcribed gene regions. Incorporating additional flanking sequence increases coverage of more distant regulatory
elements, which increases power, but also increases the number of SNPs tested, which decreases power. Expression quantitative trait loci (eQTL) mapping in humans has shown that most cis-regulatory
SNPs are within 100 kb of the transcribed region [22], [23], with quantitative estimates indicating that large-effect eQTNs (functional nucleotides that create eQTLs) tend to lie within 20 kb of the transcribed region [24]. We report results for 20 kb flanking regions; the performance ranking is robust to flanking by up to 100 kb (Table S1). SNPs within these regions are then assigned to one or more genes.
Of the approximately 2.5 million genotyped and imputed SNPs, about 1.4 million are assigned to at least one gene. The median number of SNPs per gene is 43 and the mean is 72 (Table 1), reflecting a
skewed distribution with many small genes having few SNPs.
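A minimal sketch of this SNP-to-gene assignment step (transcript boundaries plus a 20 kb flank) is shown below. The record layouts are illustrative assumptions, and a production implementation would use interval trees or sorted arrays rather than a double loop.

from collections import defaultdict

FLANK = 20_000  # 20 kb flanking region on each side

def assign_snps_to_genes(genes, snps, flank=FLANK):
    """genes: list of (gene_id, chrom, tx_start, tx_end); snps: list of (snp_id, chrom, pos).
    Returns gene_id -> list of snp_ids; a SNP may be assigned to more than one gene."""
    by_chrom = defaultdict(list)
    for gene_id, chrom, start, end in genes:
        by_chrom[chrom].append((start - flank, end + flank, gene_id))
    gene_snps = defaultdict(list)
    for snp_id, chrom, pos in snps:
        for lo, hi, gene_id in by_chrom.get(chrom, ()):
            if lo <= pos <= hi:
                gene_snps[gene_id].append(snp_id)
    return gene_snps

# rs1 falls inside both flanked genes, rs2 only inside GENE_A's flank, rs3 is on another chromosome.
genes = [("GENE_A", "1", 100_000, 150_000), ("GENE_B", "1", 160_000, 200_000)]
snps = [("rs1", "1", 155_000), ("rs2", "1", 90_000), ("rs3", "2", 120_000)]
print(dict(assign_snps_to_genes(genes, snps)))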
The “gold standard” known positives rely on previously published meta-analyses of PR interval [1], QRS interval [2] and QT interval [4], [5]. We first identify gold-standard SNPs reaching genome-wide significance. Any gene within 200 kb of a gold-standard SNP is classified as a known positive, and known positives within a 200 kb window are merged into a single locus, yielding 38 known positive gene-based loci. This procedure was followed to ensure that each association signal results in a single locus as opposed to being split between adjacent loci, which could result in over-counting.
Other methods
The minSNP test uses the p-value for the best single SNP within a gene. The minSNP-P test converts this SNP-based p-value to a gene-based p-value by performing permutation tests within each gene.
BIMBAM averages the Bayes Factors for subsets of SNPs within a gene, with restriction to single-SNP models recommended for genome-wide applications [10]. Because the Bayes Factor sum is dominated by
the single best term, results for BIMBAM are very similar to minSNP-P. The Versatile Gene-Based Test for Genome-wide Association (VEGAS) [25] is a recent multivariate method that sums the association
signal from all the SNPs within a gene and corrects the sum for LD to generate a test statistic. The terms summed by VEGAS are asymptotically equivalent to the negative logarithms of the Bayes
Factors summed by BIMBAM. LASSO regression, or L1 regularized regression, is a multivariate method that combines sparse model selection and parameter optimization [26]–[28], with promising recent
applications to GWAS [29]. See Methods for more details.
Simulated data and power
Power calculations used genotypes from the ARIC population to ensure realistic LD. Phenotypes were then simulated for genetic models with one or more causal variants within a gene. GWiS was the
best-performing method, with an advantage growing as more independent effects are present (Figure 1a). Theoretically, GWiS should have lower power than single-SNP tests when the true model is a
single effect; according to the “no free lunch theorem”, this loss of power cannot be avoided [30]. The performance of GWiS therefore depends on the genetic architecture of a disease or trait: higher
power if genes house multiple independent causal variants, and lower power if each gene has only a single causal variant. In practice, the loss of power was so slight as to be virtually undetectable.
Figure 1. Estimated power at genome-wide significance for simulated data.
Power estimates for GWiS (black), minSNP-P (blue), BIMBAM (dashed blue), VEGAS (green), and LASSO (red) are shown for 0.007 population variance explained by a gene. Genes were selected at random from
Chr 1; genotypes were taken from ARIC; and phenotypes were simulated according to known models with up to 8 causal variants with independent effects. (a) Power decreases as total variance is diluted
over an increasing number of causal variants. (b) Power estimates with 95% confidence intervals are shown as a function of minor allele frequency (MAF) for the simulations from panel (a) with a
single independent effect. GWiS, minSNP, minSNP-P, and BIMBAM are robust to low minor allele frequency, whereas VEGAS and LASSO lose power.
Of the other methods, minSNP-P and BIMBAM had similar performance that degraded as the true model included more SNPs. The VEGAS test did not perform well, presumably because the sum over all SNPs
creates a bias to find causal variants in LD blocks represented by many SNPs and to miss variants in LD blocks with few SNPs. In the absence of LD, with genotypes and phenotype simulated using PLINK
[31], VEGAS performs better (Figure S1). The LASSO method performed worst.
The advantage of GWiS arises in part from better power to detect associations with low-frequency alleles (Figure 1b). GWiS, minSNP-P, and BIMBAM have roughly constant power for a given variance
explained, regardless of minor allele frequency. In contrast, both VEGAS and LASSO suffer from a two-fold loss of power when minor allele frequencies drop from 50% to 5%. VEGAS may lose power because
these low-frequency SNPs lack correlation with other SNPs, reducing the contribution to the VEGAS sum statistic. The LASSO penalty shrinks the regression coefficient, which may adversely affect SNPs
with large regression coefficients that balance low minor allele frequencies.
Simulated data and model size
The model size selected by GWiS and LASSO was evaluated by simulation (Figure 2). These simulations also used the ARIC population to supply realistic LD, with genes selected at random with
replacement from chromosome 1. In chromosome 1, the number of SNPs in a gene ranges from 1 to over 1000, and the number of independent effects ranges from 1 to over 100, similar to the distributions
in the genome as a whole (Figure S2). A subset of SNPs within a gene had causal effects assigned ("True K"), phenotypes were simulated to mimic weak and strong gene-based signals, and then models were selected by GWiS and LASSO. Model selection to retain a subset of SNPs ("Estimated K") was performed both for the full genotype data and for the genotype data with the causal SNPs all removed.
Figure 2. Model size estimation.
The ability to recover the known model size was evaluated for GWiS (a and b) and LASSO (c and d). The power to detect a single SNP was set to be 10% (a and c) and 80% (b and d). In separate tests,
the causal SNPs were either retained in (black) or removed from (red) the genotype data.
GWiS provides a better estimate of the true model size than LASSO, assessed from the agreement between estimated and true K. With causal SNPs kept, the agreement for GWiS is substantially higher, 0.65 versus 0.47 at low power (Figure 2a, 2c) and 0.81 versus 0.60 at high power (Figure 2b, 2d). GWiS also performs better when causal SNPs are removed, 0.55 versus 0.33 at low power and 0.60 versus 0.39 at high power. GWiS also provides a conservative estimate of the model size, with the ratio of estimated to true size ranging from a worst case of 44% (low power, causal SNPs removed) to a best case of 81% (high power, causal SNPs kept) over the four scenarios examined. In contrast, LASSO is prone to over-predict the size of the model, with a worst case of models that are on average 33% too large (high power, causal SNPs kept, Figure 2d).
Removing a causal SNP results in GWiS predicting a smaller model, with the ratio of estimated to true K dropping from 0.55 to 0.44 for low power and from 0.85 to 0.81 for high power. These reductions in model size are highly significant (Wilcoxon paired test for both) and counter a concern that the absence of a causal variant from a marker set will inflate the model size by introducing multiple markers that are partially correlated with the untyped causal variant.
These results demonstrate that the model size returned by GWiS is conservative for causal variants with small effects, and approaches the true model size for causal variants with large effects.
Application to ECG data
We then obtained p-values from GWiS, minSNP, minSNP-P, BIMBAM, VEGAS, and LASSO for the ARIC data. Permutations of phenotype data holding genotypes fixed [32] provided thresholds for genome-wide
significance for each method (Table S2). Due to LD across genes, a strong signal in one gene can lead to a neighboring gene reaching genome-wide significance. This effect is well known, and scoring
these as false positives would unduly penalize traditional univariate tests. Instead, neighboring genes reaching genome-wide significance were merged, and overlap (even partial) with a known positive
was scored as a true positive.
GWiS out-performed all other methods in the comparison (Figure 3 and Table 2). GWiS identifies 6 of 38 known genes or loci as genome-wide significant. In contrast, BIMBAM identifies 5 known
positives; minSNP, minSNP-P and VEGAS identify 4; and LASSO identifies 2. Loci identified by the other methods are all subsets of the 6 found by GWiS. None of the methods produced any false positives
at genome-wide significance.
Figure 3. Recovery of known positive associations at genome-wide significance.
Of 38 known positives, GWiS identified 6 at genome-wide significance with no false positives. Univariate methods (minSNP and minSNP-P) and VEGAS identified a subset of 4 entirely contained by GWiS,
and LASSO identified a smaller subset of 2.
Table 2. Recovery of known associations.
Due to the limited size of the ARIC cohort relative to the studies that generated the known positives, no method was expected to find all 38 known loci to be genome-wide significant. Nevertheless,
known positives should still rank high among the top predictions of each method, assessed by the ranks of the known positives at 40% recall (Figure S3). We found that GWiS, minSNP, minSNP-P, BIMBAM, and VEGAS were equally effective in ranking known positives (no pairwise Mann-Whitney rank sum comparison was significant). LASSO performed below the other methods (significant Mann-Whitney p-value for the pairwise comparison of LASSO to each other method). Top associations (up to 100 false positives) from each method are provided for PR interval, QRS interval, and QT interval (Tables S3, S4, S5).
While our conclusions are based on cardiovascular phenotypes, the results suggest that GWiS will have an advantage when causal genes have multiple effects. When an association is sufficiently strong
to be found by a univariate test, GWiS is generally able to identify it. Beyond these associations, GWiS is also able to detect genes that are genome-wide significant, but where no single effect is large enough to be significant by univariate tests. The association of QRS interval with SCN5A-SCN10A is a striking example: 4 independent effects are found by GWiS at a genome-wide significant p-value, but the association is not genome-wide significant by the univariate minSNP-P test (Figure 4). A common follow-up strategy for single-SNP methods is to search for secondary associations in the same locus
as a strong primary association. These results for ARIC together with results above for simulated data (Figure 2) demonstrate that GWiS performs this task well. While this feature is present in
previous follow-up methods for candidate loci [11], [33], [34], it is absent from methods generally used for primary analysis of GWAS data.
Figure 4. Multiple weak effects identified as genome-wide significant.
GWiS correctly identifies the SCN5A-SCN10A locus as genome-wide significant with four independent effects, even though the strongest single effect has a p-value roughly 100-fold larger than the genome-wide
significance threshold indicated as a dashed line. No other method was able to identify this locus as genome-wide significant. The SNPs selected by GWiS are represented as large, colored diamonds,
and SNPs in LD with these four are colored in lighter shades. The light blue trace indicates recombination hotspots.
Of the 38 known positives, 20 have GWiS models with at least one SNP (regardless of genome-wide significance), and 7 of these are predicted to have multiple independent effects (Figure 5). These
results suggest that the genetic architecture of ECG traits supports the hypothesis underlying GWiS. Moreover, for QT interval where the power is greatest to identify known positives (the ARIC sample
size is 50% of the GWAS discovery cohorts), 5 of the 10 loci identified by GWiS are predicted to have multiple independent effects.
Figure 5. Distribution of the number of independent effects in ECG loci.
Of 38 known positive loci, GWiS identified 20 loci, and 7 of these contain multiple independent effects.
In summary, we describe a new method for gene-based tests of association. By gathering multiple independent effects into a single test, GWiS has greater power than conventional tests to identify
genes with multiple causal variants. GWiS also retains power for low-frequency minor alleles that are increasingly important for personal genetics, a feature not shared by other multi-SNP tests.
Furthermore, GWiS provides an accurate, conservative estimate for the number of independent effects within a gene or region. Currently there are no standard criteria for establishing the genome-wide
significance of a weak second association in a gene whose strongest effect is genome-wide significant. While the number of effects can be provided by existing Bayesian methods [34], their
computational expense has limited their applicability to candidate regions, and they are not routinely used. By providing a computationally efficient alternative to existing methods, GWiS provides a
new capability to estimate the number of effects as part of primary GWAS data analysis. Demonstrated effectiveness on real data may lead to more widespread use of this type of analysis. Applied to
cardiovascular phenotypes relevant to sudden cardiac death and atrial fibrillation, GWiS indicates that 35 to 50% of all known loci contain multiple independent genetic effects.
The test we describe includes a prior on models designed to be unaffected by SNP density, in particular by the number of SNPs that are well-correlated with a causal variant. The priors on regression
parameters are essentially uniform, with the benefit of eliminating any user-adjustable parameters. A theoretical drawback is that the priors are improper [35], [36]. Theoretical concerns are
mitigated, however, because improper priors pose no challenge for model selection, and our permutation procedure ensures uniform p-values under the null.
Bayesian methods can be computationally expensive. GWiS minimizes computation by evaluating only the locally optimal models of increasing size in a greedy forward search. This appears to be an
approximation compared to previous Bayesian methods that sum over all models. Previous Bayesian methods entail their own approximations, however, because the search space must either be truncated at
1 or 2 SNPs, heavily pruned, or lightly sampled using Monte Carlo. Our results demonstrate that the approximations used by GWiS provide greater computational efficiency than approximations used in
previous Bayesian frameworks, with no loss of statistical power. GWiS currently calculates p-values, rather than Bayesian evidence provided by other Bayesian methods. If Bayesian evidence is desired,
an intriguing alternative to Bayesian post-processing of candidate loci might be to use the Bayes Factor from the most likely alternative model identified by GWiS as a proxy for the sum over all
alternatives to the null model. This may be an accurate approximation because, in practice, the Bayes Factor for the most likely model from GWiS dominates all other Bayes Factors in the sum.
The GWiS framework, using gene annotations to structure Bayesian model selection, may be applied to case-control data by encoding phenotypes as 1 (case) versus 0 (control), a reasonable approach when
effects are small. More fundamental extensions to logistic regression, Transmission Disequilibrium Tests (TDTs), and other tests and designs should be possible and may yield further improvements.
Moreover, similar gene-based structured searches can be applied to genetic models to include explicit interaction terms [14]. The Bayesian format also permits incorporation of prior information about
the possible functional effects of SNPs [37], [38], and disease linkage [39], [40]. Finally, the gene-based p-values provide a natural entry to gene annotations and pathway-based gene set enrichment
analysis [41]–[43].
Materials and Methods
Ethics statement
This research involves only the study of existing data with information recorded in such a manner that the subjects cannot be identified directly or through identifiers linked to the subjects.
Known positives
Known positive associations are taken from published genome-wide significant SNP associations [1], [2], [4], [5]. Genes within 200 kb of any genome-wide significant SNP are scored as known positives. Finally, known positive genes within 200 kb of each other are merged into a single known positive locus to avoid over-counting.
Study cohort
The ARIC study includes 15,792 men and women from four communities in the US (Jackson, Mississippi; Forsyth County, North Carolina; Washington County, Maryland; suburbs of Minneapolis, Minnesota)
enrolled in 1987-89 and prospectively followed [44]. ECGs were recorded using MAC PC ECG machines (Marquette Electronics, Milwaukee, Wisconsin) and initially processed by the Dalhousie ECG program in
a central laboratory at the EPICORE Center (University of Alberta, Edmonton, Alberta, Canada) but during later phases of the study using the GE Marquette 12-SL program (2001 version) (GE Marquette,
Milwaukee, Wisconsin) at the EPICARE Center (Wake Forest University, Winston-Salem, North Carolina). All ECGs were visually inspected for technical errors and inadequate quality. Genotype data sets
were cleaned initially by discarding SNPs with Hardy-Weinberg equilibrium violations, low minor allele frequencies, or low call rates. Imputation with the HapMap CEU reference panel version 22 was then performed, and all imputed SNPs were retained for analysis, including imputed SNPs with minor-allele frequencies as low as 0.001. These cleaned data sets contributed to the meta-analysis to yield the
known positives, and full descriptions of phenotype and sample data cleaning are available elsewhere [1], [2], [4]. Regional association plots were generated using a modified version of
“make.fancy.locus.plot” [45].
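A hedged sketch of this kind of SNP-level quality control is given below. The thresholds shown are placeholders (the original cutoff values did not survive extraction), and the HWE check uses a simple one-degree-of-freedom chi-square test rather than the exact test often used in practice.

import numpy as np
from scipy.stats import chi2

def hwe_pvalue(n_aa, n_ab, n_bb):
    # Chi-square test for Hardy-Weinberg equilibrium from genotype counts.
    n = n_aa + n_ab + n_bb
    p = (2 * n_aa + n_ab) / (2 * n)                     # frequency of allele A
    exp = np.array([n * p * p, 2 * n * p * (1 - p), n * (1 - p) ** 2])
    obs = np.array([n_aa, n_ab, n_bb], dtype=float)
    stat = np.sum((obs - exp) ** 2 / np.maximum(exp, 1e-12))
    return chi2.sf(stat, df=1)

def keep_snp(n_aa, n_ab, n_bb, n_missing,
             hwe_cut=1e-6, maf_cut=0.01, call_cut=0.95):  # placeholder thresholds
    n_called = n_aa + n_ab + n_bb
    call_rate = n_called / (n_called + n_missing)
    p = (2 * n_aa + n_ab) / (2 * n_called)
    maf = min(p, 1 - p)
    return (hwe_pvalue(n_aa, n_ab, n_bb) >= hwe_cut
            and maf >= maf_cut
            and call_rate >= call_cut)

print(keep_snp(n_aa=4500, n_ab=3000, n_bb=500, n_missing=50))   # True: passes all three filters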
Conventional multiple regression
The phenotype vector Y for N individuals is an N x 1 vector of trait values. The genotype matrix X has N rows and P columns, one for each of P genotyped markers assumed to be biallelic SNPs. For simplicity, the vector Y and each column of X are standardized to have zero mean. A standard regression model estimates the phenotype vector as Y = Xb + e, where b is a vector of regression coefficients and e is a vector of residuals assumed to be independent and normally distributed with mean 0 and variance \sigma^2. The log probability of the phenotypes given these parameters is

\ln P(Y | b, \sigma^2) = -(N/2) \ln(2 \pi \sigma^2) - (1 / (2 \sigma^2)) (Y - Xb)^T (Y - Xb).   (1)

The maximum likelihood estimators (MLEs) are \hat{b} = (X^T X)^{-1} X^T Y and \hat{\sigma}^2 = SSE / N, where X^T denotes the transpose of X. The total sum-of-squares (SST) is Y^T Y, and the sum-of-squares of the model (SSM) is \hat{b}^T X^T X \hat{b}. The sum-of-squares of the errors or residuals (SSE) is

SSE = SST - SSM.   (2)

A conventional multiple regression approach uses the F-statistic to decide whether adding a new SNP improves the model significantly,

F = (SSE_{K-1} - SSE_K) / (SSE_K / (N - K - 1)),   (3)

for a model with K SNPs, distributed as F_{1, N-K-1} under the null. This approach fails, however, when the best SNPs are selected from the much larger number of M total SNPs, because the statistic does not account for the selection process.
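The sketch below computes the sums of squares and the F statistic for adding one SNP to a K-SNP model, following the standard forms written above; it is a plain-numpy illustration, not the GWiS implementation.

import numpy as np
from scipy.stats import f as f_dist

def sse(y, X):
    # Residual sum of squares from an ordinary least-squares fit (data are centered, no intercept).
    if X.shape[1] == 0:
        return float(y @ y)
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    r = y - X @ beta
    return float(r @ r)

def f_test_add_snp(y, X_old, x_new):
    # F statistic and p-value for adding one SNP to a model that already has K-1 SNPs.
    n = len(y)
    X_new = np.column_stack([X_old, x_new]) if X_old.size else x_new.reshape(-1, 1)
    k = X_new.shape[1]
    sse_old, sse_new = sse(y, X_old), sse(y, X_new)
    F = (sse_old - sse_new) / (sse_new / (n - k - 1))
    return F, f_dist.sf(F, 1, n - k - 1)

rng = np.random.default_rng(0)
n = 1000
x1, x2 = rng.standard_normal(n), rng.standard_normal(n)
y = 0.2 * x1 + 0.1 * x2 + rng.standard_normal(n)
y, x1, x2 = y - y.mean(), x1 - x1.mean(), x2 - x2.mean()
print(f_test_add_snp(y, x1.reshape(-1, 1), x2))   # F and p-value for adding the second SNP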
Bayesian model selection
A model M is defined as the subset of SNPs in a gene with P total SNPs that are permitted to have non-zero regression coefficients. For each gene, GWiS attempts to find the subset that maximizes the model probability P(M | Y, X), where each of the P columns of X corresponds to a SNP assigned to the gene. In the absence of association, the null model with K = 0 SNPs usually maximizes the probability, indicating no association. When a model with K >= 1 maximizes the probability, an association is possible, and permutation tests provide a p-value. According to Bayes rule,

P(M | Y, X) = P(Y | M, X) P(M) / P(Y | X).   (4)

The factor P(Y | X) is model-independent and can be ignored.
The prior probability of the model, P(M), assumes that each of the P SNPs within the gene has an identical probability of being associated with the trait. This probability, denoted f, is unknown, and is integrated out with a uniform prior. The prior is also designed to make the model probability insensitive to SNP density: it should be unaffected if an existing SNP is replicated to create a new SNP marker with identical genotypes. We do this by replacing the number of SNPs within a gene with an effective number of tests, T, calculated from the local LD within a gene. Correlations between SNPs make the effective number of tests smaller than the number of SNPs. The model prior based on the effective number of tests is

P(M) = \int_0^1 f^K (1 - f)^{T - K} df = B(K + 1, T - K + 1),   (5)

or 1 / [(T + 1) \binom{T}{K}] for integer values. As the effective number of tests, T, whose calculation is described below, is generally non-integer, we use the standard Beta function rather than factorials.
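For concreteness, the model prior in Eq. (5) can be evaluated in log space with the Beta function, which handles non-integer effective test counts; this is a small illustrative helper, not code from GWiS.

from scipy.special import betaln

def log_model_prior(k, t_eff):
    # ln P(M) = ln B(k + 1, t_eff - k + 1); requires 0 <= k <= t_eff.
    if not 0 <= k <= t_eff:
        raise ValueError("model size must lie between 0 and the effective number of tests")
    return betaln(k + 1, t_eff - k + 1)

# A gene with 9.4 effective tests: larger models pay a growing prior penalty.
for k in range(4):
    print(k, round(log_model_prior(k, 9.4), 3))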
The remaining factor in Eq. 4 is(6)
The integration limits and prefactor ensure normalization. We assume that these limits are sufficiently large to permit a steepest descents approximation as in Schwarzian BIC model selection [15].
First, assuming that the genotypes are centered, the genotype covariance matrix is , where indicates matrix transpose as before, and diagonal elements for SNP with allele frequency . Provided that is
much greater than each component of , the integral over is approximately(7)
where the sum-squared-error SSE is . Provided that the limit is much greater than the maximum likelihood value , the integral over can be approximated as(8)
where is the standard Gamma function. To avoid the cost of Gamma function evaluations, we instead use the steepest descents approximation,(9)
The log-likelihood is then
As in the BIC approximation, we retain only terms that depend on the model and are of order or greater. Thus we replace by , and . For historical reasons, we also included a factor of in the prior
for model size, to yield the asymptotic approximation(11)
The strategy of GWiS is therefore to find the model that maximizes the objective function(12)
The terms involving the model size K provide a Bayesian penalty on model complexity, but also make this an NP-hard optimization problem. We have adopted two efficient deterministic heuristics for approximate optimization. First is a greedy forward search, essentially Bayesian regularized forward regression, in which the SNP giving the maximal increase to the model likelihood is added to the model sequentially until all remaining SNPs decrease the likelihood. The second is a similar heuristic, except that the initial model searches through all subsets of 2 SNPs or 3 SNPs. We adopted this subset search to permit the possibility that all K = 1 models are worse than the K = 0 null, whereas a more complex model with K = 2 or 3 has a higher score. In practice, all associations identified by subset selection were also identified by greedy forward search. We therefore used the greedy forward search for computational efficiency.
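A simplified sketch of the greedy forward search is shown below. Because Eq. (12) could not be reproduced from the extracted text, the score used here is a stand-in: a Gaussian log-likelihood with a BIC-style size penalty plus the Beta-function model prior; treat the exact penalty as an assumption rather than the published objective.

import numpy as np
from scipy.special import betaln

def sse(y, cols, G):
    if not cols:
        return float(y @ y)
    X = G[:, cols]
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    r = y - X @ beta
    return float(r @ r)

def score(y, cols, G, t_eff):
    # Stand-in objective: Gaussian log-likelihood at the MLE, BIC-style penalty, Beta-function prior.
    n, k = len(y), len(cols)
    loglik = -0.5 * n * np.log(sse(y, cols, G) / n)
    return loglik - 0.5 * k * np.log(n) + betaln(k + 1, t_eff - k + 1)

def greedy_forward(y, G, t_eff):
    selected = []
    current = score(y, selected, G, t_eff)
    while True:
        best, best_j = current, None
        for j in range(G.shape[1]):
            if j in selected or len(selected) + 1 > t_eff:
                continue
            s = score(y, selected + [j], G, t_eff)
            if s > best:
                best, best_j = s, j
        if best_j is None:
            return selected
        selected.append(best_j)
        current = best

rng = np.random.default_rng(1)
G = rng.binomial(2, 0.3, size=(2000, 30)).astype(float)
G -= G.mean(axis=0)
y = 0.25 * G[:, 3] + 0.25 * G[:, 17] + rng.standard_normal(2000)
y -= y.mean()
print(greedy_forward(y, G, t_eff=30.0))   # typically recovers columns 3 and 17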
GWiS is designed to select a single model for each gene. An alternative related approach would be to test the posterior probability of the null model against the summed posterior probability of all alternative models, using our model selection procedure either to choose the locally best model of each size or to include multiple models (which could suffer from a systematic bias favoring SNPs in large LD blocks). This is in fact the strategy of BIMBAM, which attempts to systematically evaluate all terms up to a given model size. Unfortunately, the number of terms increases exponentially fast with model size, and the brute-force approach does not scale to genome-wide applications. Monte Carlo searches over models have also been difficult to apply genome-wide. Our work suggests that approximations that limit the search for fixed model size can be accurate, and further that the probabilities of models that are too large are expected to decrease exponentially fast, permitting the sum to be pruned and truncated. We have observed in practice that the model with the most likely value of K dominates the sum, and similarly for BIMBAM that the single SNP with the best Bayes Factor dominates the sum-of-Bayes-Factors test statistic. These results suggest that the results of a more computationally expensive sum over all models would be largely consistent with the results of the GWiS method.
Furthermore, the Bayes Factor for the most likely model could provide a proxy for the Bayesian evidence.
Effective number of tests
The effective number of tests is an established concept in GWAS to provide a multiple-testing correction for correlated markers. While the exact correction can be established by permutation tests,
faster approximate methods can perform well [46]–[49]. While we use a fast procedure, a final permutation test ensures that p-values are uniform under the null.
The method we adopt is based on multiple linear regression of SNPs on SNPs. The genotype vector for each SNP i is standardized to have zero mean. Correlations between all pairs of SNPs i and j are initialized as the squared correlations r_{ij}^2 of their genotype vectors. Each SNP's weight is initialized to 1, and the number of effective tests T is initialized to 0. The SNP i with maximum weight is identified, and the weight updates are executed. This process continues until all weights are equal to zero. When SNPs with maximum weight are tied (as occurs for the first SNP processed), the SNP with lowest genomic coordinate is selected to ensure reproducibility; we have ensured that this method is robust to other methods for breaking ties, including random selection. For simplicity, the correlations themselves are not updated, which may lead to an overestimate for T. Model selection may therefore have a conservative bias. The p-values are not affected, however, because they are calculated by permutation tests as described below.
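The sketch below implements the weight-accumulation scheme described above. Since the explicit update equations were lost from this copy of the text, the update rule shown (subtract r^2 times the selected weight, clipped at zero) is a plausible reading of the description and should be treated as an assumption.

import numpy as np

def effective_tests(G):
    """G: N x P matrix of standardized genotypes (columns are SNPs, zero mean, nonconstant)."""
    r2 = np.corrcoef(G, rowvar=False) ** 2              # pairwise squared correlations
    if r2.ndim == 0:                                      # single-SNP gene
        return 1.0
    w = np.ones(G.shape[1])
    t = 0.0
    while w.max() > 0:
        i = int(np.argmax(w))                             # ties resolve to the lowest index
        t += w[i]
        # Assumed update: down-weight every SNP by its squared correlation with SNP i.
        w = np.maximum(0.0, w - r2[i] * w[i])
    return t

rng = np.random.default_rng(0)
indep = rng.standard_normal((5000, 6))
dup = np.column_stack([indep, indep[:, 0]])               # add a duplicated SNP
print(round(effective_tests(indep), 2), round(effective_tests(dup), 2))   # ~6 in both cases

As intended, duplicating an existing SNP barely changes the effective number of tests, so the model prior is insensitive to SNP density.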
The effective number of tests implies a trivial renormalization of the model prior (Eq. 5) that does not affect the test statistic. Letting P be the total number of markers, T be the effective number of tests, and K be the size of the model, our prior gives each model of size K the weight B(K + 1, T - K + 1) = 1 / [(T + 1) \binom{T}{K}]. If P and T are identical, there are \binom{T}{K} models of this size, and the total weight of all models of size K is 1 / (T + 1). Since K can range from 0 to T, the sum is normalized. But when P is larger than T, the sum of all models of size K is \binom{P}{K} / [(T + 1) \binom{T}{K}], which is at least 1 / (T + 1). The sum from K = 0 to T is therefore greater than 1. A normalization of 1 can be recovered by including an overall normalization factor, Z. The explicit prior for models of size K is then B(K + 1, T - K + 1) / Z, which is normalized to 1. Since Z is model-independent, it does not contribute to the test statistic.
P-values and genome-wide significance
We use two stages of permutation tests: the first stage converts the GWiS test statistic into a p-value that is uniform under the null; the second stage establishes the p-value threshold for
genome-wide significance.
The first stage is conducted gene-by-gene. We permute the trait array using the Fisher-Yates shuffle algorithm [50], [51] and use the permuted trait to calculate the test statistics using the same procedure as for the original trait. Specifically, the model size is optimized independently for each permutation, with most permutations correctly choosing K = 0. For S successes (log-likelihoods greater than or equal to that of the unpermuted phenotype data) out of Q permutations, the empirical p-value is S/Q. To save computation, permutations are ended once a sufficient number of successes has accumulated. Furthermore, once a finding is genome-wide significant, there is no practical need for additional permutations. For gene-based tests (GWiS, minSNP-P, BIMBAM, and VEGAS), the p-value for genome-wide significance depends on the number of genes tested (rather than the number of SNPs), roughly 25,000 for humans. We therefore also terminate permutations after Q = 1 million trials, regardless of S. In these cases, for purposes of ranking, a parametric p-value is estimated for GWiS as

p \approx p_F \times \binom{T}{K},   (14)

where the first factor is the parametric p-value for the F statistic from the MLE fit, and the second factor is the combinatorial factor for the number of possible models of the same size.
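The following sketch illustrates the adaptive stopping scheme in the first permutation stage. The success target and the permutation cap are placeholders, and the test-statistic function is left abstract.

import numpy as np

def adaptive_perm_pvalue(stat_fn, y, X, seed=None, max_perm=1_000_000, target_successes=10):
    """Empirical p-value S/Q, stopping early once `target_successes` permuted statistics
    reach or exceed the observed one. stat_fn(y, X) must return a scalar test statistic."""
    rng = np.random.default_rng(seed)
    observed = stat_fn(y, X)
    successes = 0
    for q in range(1, max_perm + 1):
        if stat_fn(rng.permutation(y), X) >= observed:
            successes += 1
            if successes >= target_successes:
                return successes / q
    # No early stop: report the (possibly zero) estimate after the full permutation budget.
    return successes / max_perm

# Example with a trivial statistic: squared correlation between y and the first column of X.
def stat(y, X):
    return float(np.corrcoef(y, X[:, 0])[0, 1] ** 2)

rng = np.random.default_rng(2)
X = rng.standard_normal((500, 3))
y = 0.1 * X[:, 0] + rng.standard_normal(500)
print(adaptive_perm_pvalue(stat, y, X, seed=3, max_perm=2000))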
While these p-values are uniform under the null, the threshold for genome-wide significance requires a second set of permutations. To establish genome-wide significance thresholds, in the second
stage we permuted the ARIC phenotype for each trait 100 times, ran GWiS for the permuted phenotypes on the entire genome, and recorded the best genome-wide p-value from each of the 100 permutations.
We then combined the results from each trait to obtain an empirical distribution of the best genome-wide p-value under the null. We then estimated the p = 0.05 genome-wide significance threshold as
the 15th best p-value of the 300. This procedure was performed for GWiS, minSNP, minSNP-P, LASSO, and VEGAS to obtain genome-wide significance thresholds for each. Since minSNP-P and BIMBAM are both uniform under the null, we used the genome-wide significance threshold calculated for minSNP-P for BIMBAM as well, to avoid additional computational cost (Table S2). The threshold for GWiS is more stringent, presumably because of the locus merging procedure described below. Changes in the genome-wide significance thresholds of up to 50% would not affect any of the reported results.
Hierarchical analysis of genetic loci
In a region with a strong association and LD, GWiS can generate significant p-values for multiple genes in a region. A hierarchical version of GWiS is used to distinguish between two possibilities.
First, through LD, a strong association in one gene may cause a weaker association signal in a second gene. In this case, only the strong association should be reported. Second, the causal variant
may not be localized in a single gene; for example, the best SNP tags are assigned to multiple genes. In this case, the individual genes should be merged into a single associated locus. The
hierarchical procedure is as follows.
1. Identify all genes with significant GWiS p-values, and use transitive clustering to merge into a locus all genes whose transcript boundaries are within 200 kb.
2. Run GWiS on the merged locus (including a recalculation of the number of effective tests within the locus) and identify the SNPs selected by the GWiS model. If genes at either end of the locus
have no GWiS SNPs, trim these genes from the locus. Repeat this step until no more trimming is possible. If only a single gene remains, accept it with its original p-value as the only association
in the region. Otherwise, proceed to step 3.
3. Use a permutation test to calculate the p-value for the merged locus from step 2. Assign it a p-value equal to the minimum of the p-values from the individual genes, and the p-value from its own
permutation. Regardless of the p-value used, retain the entire trimmed region as an associated locus.
The trimming in step 2 handles the first possibility, a strong association in one gene that causes a weaker association in a neighbor. The rationale for accepting the smallest p-value in step 3 is
the case of a single SNP assigned to multiple genes. The merged region will have a less significant p-value than any single gene, and it does not seem reasonable to incur such a drastic penalty for
gene overlap.
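Step 1 of the hierarchical procedure (transitive clustering of significant genes into loci) can be sketched as a simple sort-and-chain pass; the tuple layout and the example coordinates are illustrative assumptions.

def merge_into_loci(sig_genes, max_gap=200_000):
    """sig_genes: iterable of (chrom, start, end, name) for genes passing the GWiS cut.
    Genes whose boundaries lie within `max_gap` of the growing locus are chained together."""
    loci = []
    for chrom, start, end, name in sorted(sig_genes):
        if loci and loci[-1]["chrom"] == chrom and start - loci[-1]["end"] <= max_gap:
            loci[-1]["end"] = max(loci[-1]["end"], end)
            loci[-1]["genes"].append(name)
        else:
            loci.append({"chrom": chrom, "start": start, "end": end, "genes": [name]})
    return loci

genes = [("3", 38_500_000, 38_600_000, "SCN10A"),
         ("3", 38_550_000, 38_700_000, "SCN5A"),
         ("1", 1_000_000, 1_050_000, "GENE_X")]
for locus in merge_into_loci(genes):
    print(locus["chrom"], locus["start"], locus["end"], ",".join(locus["genes"]))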
Univariate tests: minSNP and minSNP-P
For these tests, SNPs are assigned to gene regions as before. The p-value for each SNP is then calculated using the F-statistic as the test statistic, with empirical p-values from permutation to
ensure correct p-values for SNPs with low minor allele frequencies. The minSNP method assigns a gene the p-value of its best SNP. Selection of the best p-value out of many leads to non-uniform
p-values under the null. It is standard to reduce this bias by scaling p-values by a Bonferroni correction based on the number of SNPs or number of estimated tests. Instead, we perform gene-by-gene
permutation tests using the best F statistic for SNPs within the gene as the test statistic. As with GWiS, if 1 million permutations do not lead to one success, the association is clearly genome-wide
significant and we use the Bonferroni-corrected p-value for ranking purposes.
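A compact sketch of the minSNP-P idea: take the best single-SNP F statistic in the gene and convert it to a gene-level p-value by permuting the phenotype. Monomorphic SNPs are assumed to have been removed, and the add-one p-value estimator used here is a common conservative variant of the S/Q form described above.

import numpy as np

def best_single_snp_f(y, G):
    # Univariate F statistics for centered y regressed on each centered SNP column; return the maximum.
    n = len(y)
    yc = y - y.mean()
    Gc = G - G.mean(axis=0)
    sst = float(yc @ yc)
    ssm = (Gc.T @ yc) ** 2 / (Gc ** 2).sum(axis=0)
    f = ssm / ((sst - ssm) / (n - 2))
    return float(f.max())

def minsnp_p(y, G, n_perm=10_000, seed=0):
    rng = np.random.default_rng(seed)
    observed = best_single_snp_f(y, G)
    hits = sum(best_single_snp_f(rng.permutation(y), G) >= observed for _ in range(n_perm))
    return (hits + 1) / (n_perm + 1)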
The Bayesian Imputation-based Association Mapping (BIMBAM) is a Bayesian gene-based method [10]. BIMBAM calculates the Bayes Factor for a model and then averages the Bayes Factors for all models
within a gene to obtain a test statistic. Because 1-SNP models were found to have as much power as 2-SNP models, and because 2-SNP models are not computationally feasible for genome-wide analysis,
BIMBAM by default restricts its sum to all 1-SNP models within a gene [10]. The Bayes Factor for a single SNP is(15)
The design matrix has a first column of 1s and a second column equal to the dosages of the SNP in the individuals; the intercept captures the phenotypic mean; the prior covariance matrix for the coefficients is diagonal; and the coefficient vector contains the regression coefficients. We used the recommended prior effect-size scale relative to the phenotypic standard deviation. The test statistic for a gene is the sum of its single-SNP Bayes Factors. As with other methods, we used gene-by-gene permutations to convert this statistic into a p-value that is uniform under the null. Up to 1 million permutations were used, stopping after 10 successes.
The sufficient statistics used by BIMBAM are identical to minSNP and minSNP-P, yet we found that the runtime of the public implementation was much slower, taking 270 sec for 1000 permutations of a
gene with 135 SNPs across 8000 individuals. By improving memory management and optimizing computations, we improved the timing to 14 sec per 1000 permutations, a 19-fold speed-up. This implementation
is included in our Supplementary Materials.
The Versatile Gene-Based Test for Genome-wide Association (VEGAS) [25] is a recently proposed method that considers the SNPs within a gene as candidates for association study. VEGAS assigns SNPs to
each of the autosomal genes using the UCSC genome browser hg18 assembly. The gene boundaries are defined by the 5' and 3' UTRs. Single SNP p-values are used to compute a gene-based test statistic for
each gene and significance of each gene is evaluated using simulations from a multivariate normal distribution with mean 0 and covariance matrix being the pairwise LD values between the SNPs from
HapMap Phase 2. As a result the method avoids permutations in calculating per gene p-values, although permutations are required to obtain the genome-wide significance threshold.
LASSO regression
LASSO regression is a recent method for combined model selection and parameter estimation that maps L1 regularized regression onto a computationally tractable quadratic optimization problem [26]–[28]
. Applications to GWAS are attractive because it is possible to perform model selection on an entire chromosome. We therefore implemented a recent LASSO procedure developed specifically for GWAS [29]. To reduce computational cost, univariate p-values are estimated from parametric tests, and only gene-based SNPs passing a lenient univariate p-value cut are retained (we have confirmed that this computational constraint does not lose any known positive associations). Incremental model selection was performed by Least Angle Regression [27] using the R lars package [52]. The LASSO parameter was determined using 5-fold cross validation.
All genes with at least one SNP selected were identified, and selected genes overlapping other selected genes (including flanking regions) were merged into single loci.
As suggested previously, we used the Selection Index to rank genes and as the test statistic for a permutation p-value [29]. To obtain the Selection Index, the MLE log-likelihood is calculated for the full model and for a reduced model with a subset of SNPs removed. Twice the log-likelihood difference is interpreted as a chi-square statistic, and the Selection Index is defined as the corresponding p-value for a chi-square distribution with the number of removed SNPs as the degrees of freedom. Due to the LASSO model selection procedure, the Selection Index is not distributed as a chi-square under the null, and permutation tests are used to establish genome-wide significance levels.
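The sketch below mirrors this workflow with scikit-learn's LassoCV standing in for the R lars package and a chi-square-based Selection Index computed from Gaussian log-likelihoods; the library choice and the pre-filtering details are assumptions.

import numpy as np
from sklearn.linear_model import LassoCV
from scipy.stats import chi2

def gaussian_loglik(y, X):
    # MLE Gaussian log-likelihood of y given an OLS fit on the columns of X (or the null model).
    n = len(y)
    if X.shape[1] == 0:
        resid = y - y.mean()
    else:
        beta, *_ = np.linalg.lstsq(X, y, rcond=None)
        resid = y - X @ beta
    sigma2 = float(resid @ resid) / n
    return -0.5 * n * (np.log(2 * np.pi * sigma2) + 1)

def selection_index(y, X, gene_cols):
    """Fit LASSO with 5-fold CV, then score the gene: chi-square p-value for dropping the
    gene's selected SNPs from the set of all selected SNPs."""
    model = LassoCV(cv=5).fit(X, y)
    selected = np.flatnonzero(model.coef_)
    in_gene = [j for j in selected if j in set(gene_cols)]
    if not in_gene:
        return 1.0
    other = [j for j in selected if j not in set(gene_cols)]
    ll_full = gaussian_loglik(y, X[:, selected])
    ll_reduced = gaussian_loglik(y, X[:, other] if other else X[:, []])
    return float(chi2.sf(2 * (ll_full - ll_reduced), df=len(in_gene)))

As noted above, this Selection Index is not chi-square distributed under the null because of the selection step, so permutation testing is still needed to calibrate genome-wide significance.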
Simulations: power
For each true model size of 1 to 8, we performed a series of simulations by picking 1000 genes from chromosome 1 randomly with replacement, using genotype data from the ARIC population of approximately 8000 individuals. For each gene, we selected "causal" SNPs that are only weakly correlated (by regression) with the other "causal" SNPs within the gene. A gene had to have enough SNPs to be picked for a model of a given size, to ensure enough remaining SNPs after the removal of the causal SNPs to permit a model of the true size.
We attempted to distribute the total population variance explained, V, equally across the causal SNPs. The covariance matrix for the causal SNPs calculated from the population is denoted C. The coefficient for each causal SNP was chosen to give that SNP an equal share of V (Eq. 16), which ensures that the total variance explained by the gene equals V. The phenotype for an individual with genotype row-vector g was then calculated as y = (g - g_bar) b + e, with g_bar again the population average of g and e drawn from a standard normal distribution.
The power was calculated as (number of genes that are genome-wide significant)/1000, and the error of the estimate was calculated using 95% exact binomial confidence intervals. The p-value thresholds
were taken directly from genome-wide permutations (Table 2).
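The phenotype simulation and power estimate can be sketched as follows. The per-SNP scaling here splits the target variance equally and ignores LD between the chosen causal SNPs, which is a simplification relative to the rescaling described above.

import numpy as np

def simulate_phenotype(G, causal, total_var=0.007, seed=None):
    """G: N x P genotype dosages; causal: indices of causal SNPs.
    Effects are scaled so each causal SNP contributes total_var/len(causal) of the variance."""
    rng = np.random.default_rng(seed)
    Gc = G - G.mean(axis=0)
    beta = np.zeros(G.shape[1])
    share = total_var / len(causal)
    for j in causal:
        beta[j] = np.sqrt(share / Gc[:, j].var())
    return Gc @ beta + rng.standard_normal(G.shape[0]) * np.sqrt(1.0 - total_var)

def estimate_power(is_significant, n_sim=1000):
    """is_significant: callable run once per simulated gene, returning True/False."""
    hits = sum(bool(is_significant()) for _ in range(n_sim))
    return hits / n_sim   # an exact binomial confidence interval can be attached to this estimate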
Simulations: model size
Phenotypes that were used to estimate the model size were generated by assigning each "causal" SNP the same power of either 0.1 or 0.8. The population variance explained for each SNP was calculated as v = (z_alpha + z_power)^2 / N, in which z_alpha is the quantile of the standard normal for an upper-tail cumulative probability of alpha and z_power is the quantile for a lower-tail probability equal to the desired power. We chose alpha to be 5 x 10^-8, the commonly used genome-wide significance threshold for univariate tests. The effect of SNP i is then b_i = sqrt(v / C_{ii}), in which C is the genotype covariance matrix. The simulated phenotypes are then y = (g - g_bar) b + e, with e drawn from a standard normal distribution. In this test we control for the variance explained by the SNP, not by the gene, and therefore do not rescale the regression coefficients to account for LD. For each true model size ranging from 0 to 10, we repeated these steps using ARIC genotype data for 100 genes chosen at random from chromosome 1.
Only GWiS and LASSO give model size estimates. GWiS directly reports the model size as the number of independent effects within a gene and LASSO reports the model size as the number of selected SNPs within a gene. We ran both methods using the simulated data with LD. We also tested both scenarios, with the causal SNPs either kept in or removed from the gene.
Performance evaluation
Gene associations were scored as true positives if the gene (or merged locus) overlapped with a known association, and as false positives if no overlap exists. Only the first hit to a known
association spanning several genes was counted.
The primary evaluation criterion is the ability to identify known positive associations at genome-wide significance. The genome-wide significance threshold was determined separately for each method
(see above), and no method gave any false positives at its appropriate threshold.
A secondary criterion was the ability to enrich highly ranked loci for known associations, regardless of genome-wide significance. This criterion was assessed through precision-recall curves, with
precision = TP/(TP+FP), recall = TP/(TP+FN), and true positives (TP), false positives (FP), and false negatives (FN) defined as a function of the number of predictions considered.
Small differences in precision and recall may not be statistically significant. To estimate statistical significance, we performed a Mann-Whitney rank sum test for the ranks of the known associations
at 40% recall for GWiS, minSNP, minSNP-P, and LASSO.
GWiS runs efficiently in memory and CPU time, roughly equivalent to other genome-wide tests that require permutations (Table 3). Computational times are greater for real data because real
associations with small p-values require more permutations. LASSO required far less computational resources, but also pre-filtered the SNPs and had the worst performance. Genome-wide studies can be
finished within around 100 hours. Low memory requirements allow GWiS to run in parallel on multiple CPUs. The GWiS source code implementing GWiS, minSNP, minSNP-P, and BIMBAM is available under an
open source GNU General Public License as Supplementary Material, also from the authors' website (www.baderzone.org), and is being incorporated into PLINK [31].
Table 3. Memory and CPU requirements.
Supporting Information
Estimated power at genome-wide significance for genotypes simulated without LD. Simulation tests were performed for true models in which a single gene housed one to eight independent causal variants.
Genotypes were simulated with 20 SNPs per gene, no LD between SNPs, and minor allele frequencies selected uniformly between 0.05 and 0.5. Power estimates are provided for VEGAS (green), GWiS (black),
minSNP-P (blue), BimBam (blue dashed), and LASSO (red). While VEGAS performs well in the absence of LD, its performance degrades under realistic LD (see main text, Figure 1). We simulated genetic
models for quantitative traits with no linkage disequilibrium between SNPs using the simulate-qt option of PLINK. Genes were simulated with 20 SNPs and minor allele frequencies selected uniformly
between 0.05 and 0.5. Genotypes were coded as allele dosages from 0 to 2. The power of a standard regression test for additive effects depends on the population variance explained, 2p(1 - p) b^2, for a single variant with allele frequency p and regression coefficient (or effect size) b. We performed simulations holding the total variance explained, V, constant and sampling different allele frequencies, adjusting the effect size to obtain the desired variance explained. For each choice of the true model size K from 1 to 8, we averaged over 1000 simulations each with 8000 individuals. In each simulation, we randomly selected K SNPs to be "causal" SNPs and distributed the variance equally across the causal SNPs, with each SNP contributing variance V/K. The resulting model for the phenotype of an individual with genotype row-vector g for the causal SNPs is y = (g - g_bar) b + e, where g_bar is the true population average of g, b is the column-vector of SNP effects, and e is drawn from a standard normal distribution. The resulting value for the component of b for a causal SNP with minor allele frequency p is sqrt((V/K) / (2p(1 - p))). The power was calculated as (number of genes that are genome-wide significant)/1000, and the error of the estimate was calculated using 95% exact binomial
confidence intervals. The p-value thresholds for genome-wide significance came from genome-wide permutations of actual data for GWiS, BimBam, minSNP-P and VEGAS. For LASSO, however, the selection
index threshold from the genome-wide permutations may not be appropriate for simulations without LD. We therefore used a slightly different approach for LASSO. We calculated a null distribution of
the selection index through permutations, and then used this null distribution to convert the selection index to a gene-based p-value. The p-value was then compared to the most lenient gene-based
threshold of the other methods, from minSNP-P.
Number of SNPs and effective number of tests per gene. The number of SNPs and effective tests per gene are displayed as a density plot for (a) chromosome 1 and (b) the autosomal genome. While on
average genes have 70 SNPs and 9 tests, large genes can have over 1000 SNPs and 100 tests.
Precision-recall curves for recovery of known associations. Precision and recall for recovery of 38 known associations are shown for GWiS (black), minSNP (thin blue), minSNP-P (thick blue), BIMBAM
(dashed blue), LASSO (red), and VEGAS (green). Ranking is by p-value for GWiS, minSNP, minSNP-P, and VEGAS, and by Selection Index for LASSO. The tails of the curves for GWiS and LASSO are truncated
when remaining loci have no SNPs entered into models, which occurs close to 50% recall. Triangles indicate the last genome-wide significant finding from each method.
Number of identified genome-wide significant loci. Results are reported for 20 kb and 100 kb flanking transcription boundaries. G: GWiS, S: minSNP, SP: minSNP-P, B: BIMBAM, V:VEGAS, L: LASSO. *BIMBAM
was only tested for 20 kb. **VEGAS is hard-coded to use .
Genome-wide significance thresholds calculated by permutation. Results are reported for 20 kb and 100 kb flanking transcription boundaries. Thresholds for GWiS, minSNP, minSNP-P and VEGAS are for
p-values. Threshold for LASSO are for the selection index. The thresholds for minSNP and LASSO decrease because the larger threshold implies more tests. GWiS and minSNP-P already include a correction
for the number of tests within a gene, and thresholds are somewhat less stringent for longer gene boundaries. *BIMBAM uses the threshold from minSNP-P because both tests provide gene-based p-values
with identical uniform distributions under the null. **VEGAS is hard-coded to use .
Top associations for PR interval. The top 100 associations are reported for GWiS, minSNP, minSNP-P, BIMBAM, VEGAS, and LASSO. The locus name concatenates the named genes within the start and end
positions indicated. Additional columns provide the number of SNPs, the effective number of tests, the number of independent associations within the region (K), the p-value, the rank from 1 through
100, and an indicator for known positives (isKnownPositive).
Top associations for QRS interval. The column information is the same as for Table S3.
Top associations for QT interval. The column information is the same as for Table S3.
The Atherosclerosis Risk in Communities Study is carried out as a collaborative study. The authors thank the staff and participants of the ARIC study for their important contributions.
Author Contributions
Conceived and designed the experiments: DEA JSB. Performed the experiments: HH PC. Analyzed the data: DEA JSB HH PC. Contributed reagents/materials/analysis tools: AA DEA HH PC. Wrote the paper: HH
DEA JSB PC.
|
{"url":"http://www.plosgenetics.org/article/info:doi/10.1371/journal.pgen.1002177?imageURI=info:doi/10.1371/journal.pgen.1002177.t002","timestamp":"2014-04-23T13:18:42Z","content_type":null,"content_length":"285763","record_id":"<urn:uuid:0d051d49-4e8b-4015-9fee-e049fe03115b>","cc-path":"CC-MAIN-2014-15/segments/1398223202548.14/warc/CC-MAIN-20140423032002-00305-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Converting boolean expression into disjunctive normal form
Re: Converting boolean expression into disjunctive normal form
I think you mean a Karnaugh map.
In mathematics, you don't understand things. You just get used to them.
I have the result, but I do not yet know how to get it.
All physicists, and a good many quite respectable mathematicians are contemptuous about proof.
|
{"url":"http://www.mathisfunforum.com/viewtopic.php?pid=281286","timestamp":"2014-04-18T08:21:00Z","content_type":null,"content_length":"48591","record_id":"<urn:uuid:f61dacba-b422-4deb-bc24-40cd77f8a2fb>","cc-path":"CC-MAIN-2014-15/segments/1398223205137.4/warc/CC-MAIN-20140423032005-00162-ip-10-147-4-33.ec2.internal.warc.gz"}
|
The Truth About Money Saving Electric Heaters
Several years ago I began to see ads for electric heaters that supposedly used some new technique to provide electric heat in a way that could reduce your home heating
bills by up to 50%. I am a fan of using electricity for both heating and cooling, since this eliminates the need to have fossil fuels burning in the home, so I took a special interest in finding out if
these claims could be true.
The answer to the practicality of money saving electric heaters as compared to less expensive electric heaters turned out to be rather simple, yet, to share this answer requires that you have some
background information so that you understand why the answer is what it is, in lieu of simply taking my word for it. The calculations used in the coming paragraphs are not complicated, but it may be
helpful to read the article a second time to fully see how and why the answer is what it is.
To start, when looking to evaluate anything, you must have something concrete on which to base your assessment. When it comes to using electricity, that is quite simple. We refer to Ohm's Law and the
related power formula P=ExI, which translates as WATTS = VOLTAGE x CURRENT; more simply, the electric power you get out of something is based upon the voltage applied and the amperage used.
For example, standard baseboard electric heaters that run on 240 volts generally provide 250 watts per foot of heat. A typical 6 foot electric heater would then provide 1500 watts (6 x 250 = 1500
watts) of heat. To provide this amount of heat, the unit must use current or amperage accordingly. In our case, the current can be found with the formula I=P/E or CURRENT = WATTS divided by VOLTAGE,
which equates to 6.25 amps = 1500 watts divided by 240 volts.
Likewise, a standard electric space heater that runs on 120 volts typically provides an output of 1500 watts of heat. To provide this amount of wattage, the unit must consume double the current or
amperage since the voltage is half. Again, the current can be found with the formula I=P/E or CURRENT = WATTS divided by VOLTAGE, which equates to 12.5 amps = 1500 watts divided by 120 volts.
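For readers who like to check the arithmetic, here is a rough Python sketch of the same calculation; the function and numbers simply mirror the examples above.

def current_amps(watts, volts):
    # I = P / E: current drawn for a given heat output and supply voltage.
    return float(watts) / volts

print(current_amps(1500, 240))  # 6.25 amps for the 240 volt baseboard heater
print(current_amps(1500, 120))  # 12.5 amps for the 120 volt space heater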
As you can see, by using either 120 volt or 240 volt heaters, you obtain the same output watts of heat due to different amounts of current being used. Since your power utility company charges you for
watts, it doesn't matter whether you use 120v or 240v to provide your 1500 watts of heat. It's all the same to them. If your utility company charges 14 cents per kilowatt hour, this means you pay 14
cents for every hour where you consume 1000 watts or 1KW. If you used the 1500 watt (1.5 KW) heater for one hour, that is equal to 1.5 KW x .14 = 21 cents per hour.
Next, let's work out the size of room this 1500 watts can heat. The world's accepted heating unit is called the BTU or British Thermal Unit. When determining the BTU's provided by any electrical heating
source, simply multiply the watts by 3.413 and the resulting number is the BTU's. In our case 1500 watts x 3.413 = 5119 BTU's. Most manufacturers of 1500 watt heaters generally say they provide 5120
BTUH's or British Thermal Units per Hour.
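The watts-to-BTU conversion is just as easy to script; a one-line sketch of my own, using the 3.413 factor quoted above:

def watts_to_btuh(watts):
    # 1 watt of electric heat is roughly 3.413 BTU per hour.
    return watts * 3.413

print(watts_to_btuh(1500))  # about 5119 BTUH, usually rounded to 5120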
Now, the colder your average climate, the greater your heating needs will be. Because of this, there is no set level of how many BTU's are needed to heat your room. For example, if you live in
Montana, winter temperatures can easily stay in the teens or single digits for days on end. You would need sufficient heat to add 50 to 60 degrees of heat to the air to maintain a 65 to 70 degree
room temperature. Alternatively, if you live in Georgia, winter temperatures can easily stay in the thirties or forties most days. You would need sufficient heat to add only 30 to 40 degrees of heat
to the air to maintain a 65 to 70 degree room temperature.
In our example, you would require twice as much heat in Montana than in Georgia, which means a 1500 watt heater in Montana will only take care of a room half the size it does in Georgia. To resolve
issues with temperature differences, we'll introduce a temperature zone multiplier of 40 for cold climates, 25 for moderate climates and 10 for warmer climates. Add another 10 if the house is not
insulated well and add another 10 if the windows and doors do not seal well.
To begin calculating the BTU's required for a room, you first need to obtain its square footage by multiplying the rooms length by its width. A ceiling height of 8 feet is assumed. To compensate for
taller ceilings, increase the square footage by 12% for each additional foot of ceiling height. For example, a 12 x 12 room is equal to 144 square feet. If the ceiling height were 9 feet we would
multiply 144 x 1.12 for a total of 161 square feet.
Now, depending upon your temperature zone, multiply your square footage by your temperature zone multiplier. Using our example, if we were living in Montana, we would multiply 144 square feet x 40
for a total requirement of 5760 BTUH's for that room. The 5120 BTU provided by a 1500 watt electric heater falls just short of this requirement, but it should suffice in most instances. If the house were not
insulated well and had issues with air infiltration, a larger or second heater would be required.
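Putting the sizing rules of thumb above into a small function makes them easy to reuse; this sketch is mine, but the multipliers and adjustments are exactly the ones described in this article.

def required_btuh(length_ft, width_ft, zone_multiplier,
                  ceiling_ft=8, poor_insulation=False, leaky_windows=False):
    # Square footage, increased 12% for each foot of ceiling height above 8 ft.
    area = length_ft * width_ft * (1 + 0.12 * max(0, ceiling_ft - 8))
    # Zone multiplier: roughly 40 for cold, 25 for moderate, 10 for warm climates,
    # plus 10 for poor insulation and another 10 for leaky windows and doors.
    multiplier = zone_multiplier + (10 if poor_insulation else 0) + (10 if leaky_windows else 0)
    return area * multiplier

print(required_btuh(12, 12, 40))  # about 5760 BTUH for a 12 x 12 room in a cold climate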
Meanwhile, any claim by an electric heater manufacturer that says their heater will heat 300 square feet or 1000 square feet means nothing if there is no reference to the outside temperature or
condition of the room. The newest version of the money saving electric heater claims to be able to heat 1000 square feet, but it uses only 1483 watts. Our previous example shows how this could be
marginal even for a 144 square foot room, let alone 1000 square feet. Using our same example, a 1000 square foot room at 40 BTUH per square foot would require approx 40,000 BTU of heat. To determine
the watts, we divide that by 3.413 to get 11719 watts, which is almost eight times greater in size than the wattage provided by the money saving heater.
We do have to be fair and determine the conditions under which the 1483 watt heater could heat a 1000 square foot room. Bear in mind, this is equivalent to a 14 ft x 72 ft space, or an entire modular
home. Let's work backwards to find the temperature zone where this might work. 1483 watts x 3.413 = 5061 BTU. We divide this by 1000 square feet to obtain a temperature zone multiplier of 5.
Remember, a temperature zone multiplier of 40 is for a cold climate, 25 is for a moderate climate and 10 is for a warm climate. Basically, a temperature zone multiplier of 5 means you could heat 1000
square feet of space in Florida during the winter.
OK, so now we know how much space a 1500 watt electric heater can actually heat, and why.
If your $400 to $600 heater says it provides 1500 watts, it will do absolutely nothing different than what a $60 heater can do by providing the same 1500 watts. If you set either heater in the room
that also includes the thermostat for your whole house heating system, you will reduce your house's overall fuel consumption because the thermostat will not sense the need to activate the heating
system. This can save money, but the remaining rooms in your home could get very cold and possibly result in freezing damage.
Just for the fun of it, let's say we ran a $60 1500 watt electric heater 16 hours a day, for 4 months straight, what would this cost? 16 hours x 7 days x 4.25 weeks per month x 4 months = 1904 hours
x 1.5 KW per hour = 2856 KWH. At 14 cents per KWH, the total cost for power would be $400 plus another $60 for the heater for a total of $460 in all.
Now, let's say we ran a $500 1500 watt electric heater 16 hours a day, for 4 months straight, what would this cost? 16 hours x 7 days x 4.25 weeks per month x 4 months = 1904 hours x 1.5 KW per hour
= 2856 KWH. At 14 cents per KWH, your total cost for power would be $400 plus another $500 for the heater for a total of $900 in all.
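The same arithmetic, written as a short sketch of mine using the article's example figures, makes the comparison easy to rerun with your own electricity rate:

def seasonal_cost(heater_price, kw=1.5, hours_per_day=16, months=4, rate_per_kwh=0.14):
    # Hours per season: hours/day x 7 days x 4.25 weeks/month x months.
    hours = hours_per_day * 7 * 4.25 * months
    return heater_price + hours * kw * rate_per_kwh

print(seasonal_cost(60))   # roughly $460 in total with a $60 heater
print(seasonal_cost(500))  # roughly $900 in total with a $500 "money saving" heater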
Imagine a drumroll. Here's the simple answer...
If you take the $900 spent to heat a room with a money saving heater and subtract the $460 you really only needed to spend by using a standard heater, you were overcharged by $440, which you have
unknowingly donated to Amish farmers or to Bob and his sponsors. Don't get me wrong, making donations is an American tradition and is depended upon by those in need, but I believe that if you are
going to make this donation, you should at least know it's a donation. With this information in hand, you can better determine for yourself whether a company's need for profit is greater than your need
to keep your family fed and warm this winter.
What Else Could You Do With $440?
If you choose to purchase a standard $60 ceramic electric heater ( very safe, reliable and comfortable ) in lieu of a money saving electric heater, what could you do with all the money you saved?
• Buy 170 gallons of fuel oil at $2.60 per gallon and heat your house completely for 30 days
• Buy 21890 cu ft of natural gas at $2.01 per ccf and heat your house completely for 30 days
• Buy 3 tons of coal at $146 per ton and heat your entire house for 3 months
• Buy a second $60 1500 watt heater and heat an entire other room for 4 months
• Hire a company to insulate and weatherize your home, saving 15% to 25% on all heating and cooling costs for years to come
• Buy groceries for a family of 4 for 2.5 weeks
• Contact Feed the Children and make a real donation that provides food and essentials for 9 starving children for 5 months
Electric resistance heating is already 100% efficient at the point of use, so there is little room for improvement there. By combining electricity with heat pump technology, you could use
electricity to move heat from the ground or the air into your home, but the electricity itself cannot morph into more than what it is. There is no magic way to make pure electrical power provide more watts than the
laws of physics allow.
For now, the best thing you can do to save energy is to spend your money on insulating and sealing your home so that the energy you do consume is used more effectively. A couple hundred dollars spent
on home weatherization can save thousands of dollars as the years pass, regardless of what method you use to heat and cool your home. Be smart, be green and conserve what you are already using and
you'll save more energy and money than any money saving heater ever could.
David's career highlights include authoring 'The Rewards of Making Energy-Efficient Choices', working in the electrical engineering division of three nuclear power plants and serving as an
administrator, engineer and installer in the heating and air conditioning field.
He lives in Northeast Pennsylvania with his wonderful and supportive wife, Karlene and spends his time writing and performing home energy audits.
|
{"url":"http://www.energyefficientchoices.com/news-events/truth-money-saving-electric-heaters.php","timestamp":"2014-04-20T20:55:24Z","content_type":null,"content_length":"32265","record_id":"<urn:uuid:d2274320-9944-48db-937e-2c49b44c4688>","cc-path":"CC-MAIN-2014-15/segments/1397609539230.18/warc/CC-MAIN-20140416005219-00653-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Intermediate Algebra for College Students
Why Rent from Knetbooks?
Because Knetbooks knows college students. Our rental program is designed to save you time and money. Whether you need a textbook for a semester, quarter or even a summer session, we have an option
for you. Simply select a rental period, enter your information and your book will be on its way!
Top 5 reasons to order all your textbooks from Knetbooks:
• We have the lowest prices on thousands of popular textbooks
• Free shipping both ways on ALL orders
• Most orders ship within 48 hours
• Need your book longer than expected? Extending your rental is simple
• Our customer support team is always here to help
|
{"url":"http://www.knetbooks.com/intermediate-algebra-college-students-6th/bk/9780321758934","timestamp":"2014-04-20T18:24:29Z","content_type":null,"content_length":"39009","record_id":"<urn:uuid:5df25928-9813-4e58-b51a-18249453adce>","cc-path":"CC-MAIN-2014-15/segments/1397609539066.13/warc/CC-MAIN-20140416005219-00093-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Nest representations of TAF algebras.
Hopenwasser, Alan and Peters, Justin R. and Power, Stephen C. (2000) Nest representations of TAF algebras. Canadian Journal of Mathematics, 52 (6). pp. 1221-1234. ISSN 0008-414X
Full text not available from this repository.
A nest representation of a strongly maximal TAF algebra $A$ with diagonal $D$ is a representation $\pi$ for which $\lat \pi(A)$ is totally ordered. We prove that $\ker \pi$ is a meet irreducible
ideal if the spectrum of $A$ is totally ordered or if (after an appropriate similarity) the von Neumann algebra $\pi(D)''$ contains an atom.
Item Type: Article
Journal or Publication Title: Canadian Journal of Mathematics
Uncontrolled Keywords: nest representation ; meet irreducible ideal ; strongly maximal TAF algebra
Subjects: Q Science > QA Mathematics
Departments: Faculty of Science and Technology > Mathematics and Statistics
ID Code: 19352
Deposited By: ep_ss_importer
Deposited On: 17 Nov 2008 14:44
Refereed?: Yes
Published?: Published
Last Modified: 09 Oct 2013 13:12
Identification Number:
URI: http://eprints.lancs.ac.uk/id/eprint/19352
|
{"url":"http://eprints.lancs.ac.uk/19352/","timestamp":"2014-04-17T06:44:38Z","content_type":null,"content_length":"14145","record_id":"<urn:uuid:408da491-2d7a-4cd7-ad4a-d309343f8b06>","cc-path":"CC-MAIN-2014-15/segments/1397609526311.33/warc/CC-MAIN-20140416005206-00541-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Check out my new math proof
September 19th 2012, 08:52 AM #1
Sep 2012
Iowa City
Check out my new math proof
Greetings. This is my first post and I would like to share a proof that I recently put on youtube. It hasn't gotten many views yet so I figured I'd share it here .
It is kind of like when people say something that goes away if you know what I mean.
Re: Check out my new math proof
If you write your proof in text form, then not only it would take about 10^4 times less memory, but it would be more clearly seen and people would probably be more willing to check it out. You
could use LaTeX ([TEX]2^{n+1}<(n+1)![/TEX] gives $2^{n+1}<(n+1)!$), the superscript tags (2[sup]n+1[/sup] gives 2^n+1) or just plain text writing 2^(n+1) for 2^n+1.
Re: Check out my new math proof
Sorry, I will try and capture it more clearly but here it is line by line:
Prove that 2^n < n! for all n greater than or equal to 4, for n in N
Base Case: Let n = 4 . 2^4 < 4! is true since 16 < 24
Let n = k, for k in N and k > 4
2^k = 2^(k-1) x 2 and
k! = (k-1)! * k
and 2^4 = 16
4! = 24
2^k < (k)! for all k greater than or equal to 4
since 2 < k
Re: Check out my new math proof
Here is how I would write this.
\begin{align*}2^k &= 2^{k-1}\times 2\\&<(k-1)!\times 2&&\text{by inductive hypothesis}\\&<(k-1)!\times k&&\text{since }k>2\\&=k!\end{align*}
The fact that 2^4 = 16 and 4! = 24 is not relevant to the inductive step.
Re: Check out my new math proof
I wrote that to show 2^n < n! in the case where n = 4 . Like how the stuff above the proof in the video is for demonstration also, but thanks for your input.
Re: Check out my new math proof
Thanks for the input, I updated the video so it can be read. I am half-Polish living in the USA and I do feel my method is different. If you watch my other videos you will observe a difference
but you see it works https://www.youtube.com/user/danbabb . Thanks.
Re: Check out my new math proof
Guys, on a tangent, $2^n$ never divides $n!$, nor does any prime number to the power of $n$. This is because $n!$ does not contain enough factors of $p$, for any prime number $p$.
Maths online
Re: Check out my new math proof
Salahuddin559, When n = 2, 2^n = 4 and n! = 2, so that would work for what you just said.
Re: Check out my new math proof
More exactly, p^n does not divide ((p-1)*n)! for any prime number p.
Maths online
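For readers who want to see why, here is a short sketch (my addition, not part of the original thread) using Legendre's formula: the exponent of a prime $p$ in $N!$ is $\sum_{k\ge 1}\lfloor N/p^k\rfloor < \sum_{k\ge 1} N/p^k = N/(p-1)$. Taking $N = (p-1)n$, the exponent of $p$ in $((p-1)n)!$ is strictly less than $n$, so $p^n$ cannot divide $((p-1)n)!$; with $p = 2$ this recovers the statement that $2^n$ never divides $n!$.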
|
{"url":"http://mathhelpforum.com/number-theory/203713-check-out-my-new-math-proof.html","timestamp":"2014-04-17T05:37:26Z","content_type":null,"content_length":"52722","record_id":"<urn:uuid:12db02de-4796-4a38-8ba7-1d9c3f9e1d79>","cc-path":"CC-MAIN-2014-15/segments/1397609526252.40/warc/CC-MAIN-20140416005206-00502-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Math Forum Discussions - Re: In "square root of -1", should we say "minus 1" or "negative 1"?
Date: Dec 3, 2012 7:43 PM
Author: Paul A. Tanner III
Subject: Re: In "square root of -1", should we say "minus 1" or "negative 1"?
On Mon, Dec 3, 2012 at 6:22 PM, Joe Niederberger <niederberger@comcast.net> wrote:
> "Minus times minus makes a plus" makes perfect sense.
> In fact it emphasizes the fact the rule is based on the signs alone. The fact that negative numbers also have a magnitude is of no account whatsoever in regards to this sign rule. Think of the implied subjects and objects here being the signs, not the numbers. Or, don't, I don't care.
One of the standard proofs in abstract algebra textbooks of the theorem (-a)(-b) = ab for all ring elements a,b (including for rings that are not ordered) contains the equality (-a)(-b) = -(-(ab)).
Is this what you meant?
Regardless, there are ways to prove this theorem that do not use the prior theorem of additive groups x = -(-x).
And so to say that the theorem itself (and not merely such a particular proof of it) is based on such is not right.
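One such argument, sketched here for concreteness (my wording, not the poster's): in any ring, $ab + (-a)b = (a + (-a))b = 0 \cdot b = 0$ and $(-a)(-b) + (-a)b = (-a)((-b) + b) = (-a) \cdot 0 = 0$, so $ab$ and $(-a)(-b)$ are both additive inverses of $(-a)b$; since additive inverses are unique, $(-a)(-b) = ab$, with no appeal to $x = -(-x)$.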
|
{"url":"http://mathforum.org/kb/plaintext.jspa?messageID=7931816","timestamp":"2014-04-16T10:35:53Z","content_type":null,"content_length":"2070","record_id":"<urn:uuid:70e282e9-15ba-462f-8226-d8793901f9a2>","cc-path":"CC-MAIN-2014-15/segments/1397609523265.25/warc/CC-MAIN-20140416005203-00139-ip-10-147-4-33.ec2.internal.warc.gz"}
|
[Research Statistics] Chi-Squared Test?
September 26th 2009, 11:40 PM #1
Sep 2009
[Research Statistics] Chi-Squared Test?
I've got some data that I need to do some analysis on, and I THINK I've selected the right test, but I don't want to risk being wrong.
I want to do the tests on a measure of Gestational Diabetes in conjunction with Iron and/or Vitamin C supplementation, so:
Dependent Variable:
Gestational Diabetes (GDM) (0 = no; 1 = yes)
Independent Variables:
Iron Supplementation (0 = no; 1 = yes)
Vitamin C Supplementation (0 = no; 1 = yes)
And in measuring them I would need to have 4 variables (I think?), those being:
Iron Only
Vitamin C Only
Both
Neither
So from what I understand, I should be able to do a chi-squared test, since they're all categorical variables? Is that correct, or did I miss something big? Furthermore, do I need to do a
chi-squared test on each category (e.g. GDM Yes - Iron Only; GDM Yes - Vitamin C Only; GDM No - Iron Only; etc)
Also, I need to validate those against a few confounding variables, namely
Age (continuous)
Body Mass Index (continuous)
Parity (continuous)
To compare against those, would I need to use a t test? Would I compare each confounder to each of the 4 categories above (Iron, Vit C, Both, Neither)? Or would I need to somehow compare all of
them at once to the variables, or..?
Last edited by b1177; September 27th 2009 at 05:18 PM.
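For a concrete starting point, counts like these can be tested with a standard chi-squared test of independence; a minimal sketch in Python using SciPy, where the 2x2 table below is made up purely for illustration and is not study data:

from scipy.stats import chi2_contingency

# Rows: iron supplementation (no, yes); columns: GDM (no, yes).
# Hypothetical placeholder counts, not real data.
table = [[400, 50],
         [350, 80]]
chi2, p_value, dof, expected = chi2_contingency(table)
print(chi2, p_value, dof)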
|
{"url":"http://mathhelpforum.com/statistics/104540-research-statistics-chi-squared-test.html","timestamp":"2014-04-17T09:14:04Z","content_type":null,"content_length":"30338","record_id":"<urn:uuid:d02edc18-57da-49fe-81d3-bfa4db570710>","cc-path":"CC-MAIN-2014-15/segments/1397609537376.43/warc/CC-MAIN-20140416005217-00158-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Suppose An Observer Measures The Length Of A Stationary ... | Chegg.com
Suppose an observer measures the length of a stationary spaceship to be 6 m (note: m for meters). If the spaceship is moving at a speed of 0.8c (note: c for the speed of light) with respect to the
Earth, what is the length of the same spaceship, as measured by an observer on the Earth?
2.4 m
3.6 m
4.8 m
6 m
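A worked check (not part of the original problem statement): with the standard length-contraction relation $L = L_0\sqrt{1 - v^2/c^2}$, an observer on Earth measures $L = 6\,\mathrm{m} \times \sqrt{1 - 0.8^2} = 6\,\mathrm{m} \times 0.6 = 3.6\,\mathrm{m}$, matching the 3.6 m choice.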
|
{"url":"http://www.chegg.com/homework-help/questions-and-answers/suppose-observer-measures-length-stationary-spaceship-6-m-note-m-meters--spaceship-moving--q2781456","timestamp":"2014-04-18T03:52:15Z","content_type":null,"content_length":"20967","record_id":"<urn:uuid:71bebf79-241e-452f-b129-b920d255955b>","cc-path":"CC-MAIN-2014-15/segments/1397609532480.36/warc/CC-MAIN-20140416005212-00372-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Helicopter Aerodynamics, calculating thrust loading, disk loading, power loading
return to the flight school home page
Below, we will demonstrate a method to calculate the theoretical thrust that a propeller or rotor can generate. Of course in a helicopter, the rotor disk is oriented such that we call its force
"lift" rather than thrust, "thrust" would be used in the case of an airplane. On a helicopter, the force of a tail rotor would be best described as thrust.
The first step is to measure the diameter of the rotor or propeller and calculate the area in square feet. Area is defined as:
A [ft^2] = Pi * r^2
A [ft^2] = (Pi/4) * D^2
where r is the radius of the propeller or rotor disk and D is the diameter in feet, of course Pi is the number 3.141592653589793238462643
If we know the area of the disk in square feet, we then need to know the amount of power that is delivered to the rotor. This needs to be in the units of horsepower. To convert the metric system of
units Watts to horsepower, use the conversion: 1 horsepower = 745.699872 Watts. The goal is to calculate a parameter called "power loading" in units of horsepower per square foot. Power loading is
calculated by:
PL [hp/ft^2] = power / A
where "power" is the power delivered to the rotor or propeller and A is the area, calculated above. This is very important---approximately 10 to 15 % of the engine's power will be delivered to the
tail rotor to counteract torque. This number obviously varies but 10 to 15 percent is a good starting point. If you have a 100 horsepower engine in your helicopter, expect only 85 to 90 horsepower to
actually get to the main rotor. Additionally, you would need to reduce the power by an additional small percentage to account for frictional losses in the drive system. In the case of a tandem rotor
helicopter such as the Chinook or meshed rotor like the Kmax, all of the power will be delivered to the main rotor and this in fact is the reason those helicopters are so well suited for heavy load
lifting operations.
Using the parameter PL [hp/ft^2], we use an empirically defined formula to calculate the thrust loading (after McCormick). Thrust loading is in units of pounds per horsepower and is a function of
power and rotor disk area. Thrust loading (TL) is calculated:
TL [lb/hp]= 8.6859 * PL^(-0.3107)
Now that thrust loading is calculated, we can find the total thrust of the propeller (or lift of the rotor).
Lift = TL * power >>>[lb] = [lb/hp] * [hp]
*note: Calculation results are in pounds-thrust, NOT pounds-mass
Below are some common examples of the thrust developed by common aircraft engines and typical helicopters:
Theoretical thrust developed by common airplane engine - propeller combinations:
300 hp, 78" propeller develops 1,300 pounds of thrust
80 hp, 50" propeller develops 400 pounds of thrust
1.5 hp, 12" propeller develops 10.5 pounds of thrust
Theoretical lift developed by common helicopters:
300 hp, 30' rotor develops 3,400 pounds of lift (neglecting loss to tail rotor of 10-15%)
25 hp, 12' rotor develops 347 pounds of lift (neglecting loss to tail rotor of 10-15%)
2 hp, 6' rotor develops 39 pounds of lift (neglecting loss to tail rotor of 10-15%)
0.25 hp, 10.5" tail-rotor develops 2.9 pounds of thrust
Full series of example calculations:
[EQ1]: PL [hp/ft^2] = power / A
[EQ2]: TL [lb/hp]= 8.6859 * PL^(-0.3107)
[EQ3]: Lift = TL * power >>>[lb] = [lb/hp] * [hp]
Using equation one, we calculate power loading (PL) of a 6 foot diameter (72") disk with 300hp absorbed.
PL = 300 / (pi*3'^2)
PL = 300 / 28.27
PL = 10.61 hp/ft^2
Using equation two, we calculate the thrust loading. A typical error here comes from misreading the negative exponent of the equation. Remember, X^(-Y) is the same as 1/(X^Y).
TL = 8.6859 * (10.61^-.3107)
TL = 8.6859 / (10.61^.3107)
TL = 8.6859 / 2.083
TL = 4.2 [lb/hp]
Using equation three, we calculate the lift/thrust.
Lift = TL * power
Lift = 4.2 * 300
Lift = 1,251 pounds
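The same chain of calculations translates directly into a short script; this sketch is mine, but it uses only the constants and formulas quoted above.

import math

def rotor_lift_lbs(power_hp, diameter_ft):
    # Disk area in square feet: A = (Pi/4) * D^2.
    area = (math.pi / 4.0) * diameter_ft ** 2
    # Power loading in horsepower per square foot: PL = power / A.
    pl = power_hp / area
    # Empirical thrust loading in pounds per horsepower: TL = 8.6859 * PL^(-0.3107).
    tl = 8.6859 * pl ** -0.3107
    # Lift (or thrust) in pounds: Lift = TL * power.
    return tl * power_hp

print(rotor_lift_lbs(300, 6))   # about 1,251 pounds, matching the worked example
print(rotor_lift_lbs(300, 30))  # about 3,400 pounds for the 30 ft rotor, before tail-rotor losses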
Please let us know if this was helpful information or if you have suggestions, comments or other questions you would like answered.
|
{"url":"http://www.heli-chair.com/aerodynamics_101.html","timestamp":"2014-04-17T04:03:59Z","content_type":null,"content_length":"14198","record_id":"<urn:uuid:52941361-6dc2-411b-9ecc-8fc19c9f0578>","cc-path":"CC-MAIN-2014-15/segments/1397609526252.40/warc/CC-MAIN-20140416005206-00631-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Buford, GA Algebra 2 Tutor
Find a Buford, GA Algebra 2 Tutor
...For example, to find an answer for a division problem, look for the multiple of the numbers. This type of technique gives students the confidence they need to tackle any math problems by
breaking it up into smaller components that they recognize. Then math can truly be elementary.
26 Subjects: including algebra 2, English, reading, geometry
...Oh yes, and I am very good with computers and Microsoft Excel.Algebra has always been my favorite math subject. When I was in high school I went to statewide math contests for two years and
placed in the top ten both times. This algebra deals mostly with linear functions.
22 Subjects: including algebra 2, calculus, geometry, ASVAB
...All of which has given me a uniquely personal and spiritual perspective on the broad topic of religion. I hope to share my knowledge and experience with you or your student. As a child I went
to Glen Haven christian private school where I memorized more of the Holy Bible than not.
32 Subjects: including algebra 2, Spanish, English, geometry
...I have taken courses in world religions, Christian church history, Christian theology, Christian education, preaching, ethics and pastoral care. I have also studied the bible and translated
from the ancient Greek and Hebrew into English. I have a Master of Divinity from Columbia Theological Seminary.
26 Subjects: including algebra 2, English, reading, writing
...My goal is always to make sure that the student, not only understands the material, but also feels confident in what they are doing. I feel that every student is different in what builds their
confidence in the material, so try to figure out what that is as we work together. I also ask for some...
9 Subjects: including algebra 2, chemistry, calculus, geometry
Related Buford, GA Tutors
Buford, GA Accounting Tutors
Buford, GA ACT Tutors
Buford, GA Algebra Tutors
Buford, GA Algebra 2 Tutors
Buford, GA Calculus Tutors
Buford, GA Geometry Tutors
Buford, GA Math Tutors
Buford, GA Prealgebra Tutors
Buford, GA Precalculus Tutors
Buford, GA SAT Tutors
Buford, GA SAT Math Tutors
Buford, GA Science Tutors
Buford, GA Statistics Tutors
Buford, GA Trigonometry Tutors
Nearby Cities With algebra 2 Tutor
Cumming, GA algebra 2 Tutors
Doraville, GA algebra 2 Tutors
Duluth, GA algebra 2 Tutors
Dunwoody, GA algebra 2 Tutors
Flowery Branch algebra 2 Tutors
Gainesville, GA algebra 2 Tutors
Johns Creek, GA algebra 2 Tutors
Lawrenceville, GA algebra 2 Tutors
Mableton algebra 2 Tutors
Milton, GA algebra 2 Tutors
Norcross, GA algebra 2 Tutors
Rest Haven, GA algebra 2 Tutors
Snellville algebra 2 Tutors
Sugar Hill, GA algebra 2 Tutors
Suwanee algebra 2 Tutors
|
{"url":"http://www.purplemath.com/buford_ga_algebra_2_tutors.php","timestamp":"2014-04-21T12:50:40Z","content_type":null,"content_length":"23965","record_id":"<urn:uuid:88cd8554-d9ea-462e-afa3-6047837de098>","cc-path":"CC-MAIN-2014-15/segments/1398223211700.16/warc/CC-MAIN-20140423032011-00199-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Lightweight Airplane Design
The geometry of this lightweight aircraft is from reference 1. The original design objective for this geometry was a four-seat general aviation aircraft that was safe, simple to fly, and easily
maintainable with specific mission and performance constraints. For more details on these constraints, see reference 1.
Potential performance requirements for this aircraft include:
● Level cruise speed
● Acceptable rate of climb
● Acceptable stall speed.
For the aircraft flight control, rate of climb is the design requirement and assumed to be greater than 2 meters per second (m/s) at 2,000 meters.
Figure 1: Lightweight four-seater monoplane [1].
Determining Vehicle Aerodynamic Characteristics
The aircraft's geometrical configuration determines its aerodynamic characteristics, and therefore its performance and handling qualities. Once you choose the geometric configuration, you can obtain
the aerodynamic characteristics by means of:
While wind tunnel tests and flight tests provide high-fidelity results, they are expensive and time-consuming, because they must be performed on the actual hardware. It is best to use these methods
when the aircraft's geometry is finalized. Note: Analytical prediction is a quicker and less expensive way to estimate aerodynamic characteristics in the early stages of design.
In this example, we will use Digital Datcom, a popular software program, for analytical prediction. The U.S. Air Force developed it as a digital version of its Data Compendium (DATCOM). This software
is publicly available.
To start, create a Digital Datcom input file that defines the geometric configuration of our aircraft and the flight conditions that we will need to obtain the aerodynamic coefficients.
$FLTCON NMACH=4.0,MACH(1)=0.1,0.2,0.3,0.35$
$FLTCON NALT=8.0,ALT(1)=1000.0,3000.0,5000.0,7000.0,9000.0,
$FLTCON NALPHA=10.,ALSCHD(1)=-16.0,-12.0,-8.0,-4.0,-2.0,0.0,2.0,
$OPTINS SREF=225.8,CBARR=5.75,BLREF=41.15$
$SYNTHS XCG=7.9,ZCG=-1.4,XW=6.1,ZW=0.0,ALIW=1.1,XH=20.2,
$BODY NX=10.0,
$WGPLNF CHRDTP=4.0,SSPNE=18.7,SSPN=20.6,CHRDR=7.2,SAVSI=0.0,CHSTAT=0.25,
$HTPLNF CHRDTP=2.3,SSPNE=5.7,SSPN=6.625,CHRDR=0.25,SAVSI=11.0,
$VTPLNF CHRDTP=2.7,SSPNE=5.0,SSPN=5.2,CHRDR=5.3,SAVSI=31.3,
$SYMFLP NDELTA=5.0,DELTA(1)=-20.,-10.,0.,10.,20.,PHETE=.0522,
Digital Datcom provides the vehicle's aerodynamic stability and control derivatives and coefficients at specified flight conditions. Flight control engineers can gain insight into the vehicle's
performance and handling characteristics by examining stability and control derivatives. We must import this data into the MATLAB® technical computing environment for analysis. Normally, this is a
manual process.
With the Aerospace Toolbox software, we can bring multiple Digital Datcom output files into the MATLAB technical computing environment with just one command. There is no need for manual input. Each
Digital Datcom output is imported into the MATLAB technical computing environment as a cell array of structures, with each structure corresponding to a different Digital Datcom output file. After
importing the Digital Datcom output, we can run multiple configurations through Digital Datcom and compare the results in the MATLAB technical computing environment.
In our model, we need to check whether the vehicle is inherently stable. To do this, we can use Figure 2 to check whether the pitching moment described by the corresponding coefficient, Cm, provides
a restoring moment for the aircraft. A restoring moment returns the aircraft angle of attack to zero.
In configuration 1 (Figure 2), Cm is negative for some angles of attack less than zero. This means that this configuration will not provide a restoring moment for those negative angles of attack and
will not provide the flight characteristics that are desirable. Configuration 2 fixes this problem by moving the center of gravity rearward. Shifting the center of gravity produces a Cm that provides
a restoring moment for all negative angles of attack.
Figure 2: Visual analysis of Digital Datcom pitching moment coefficients.
Creating Flight Vehicle Simulation
Once we determine aerodynamic stability and control derivatives, we can build an open-loop plant model to evaluate the aircraft longitudinal dynamics. Once the model is complete, we can show it to
colleagues, including those who do not have Simulink® software, by using Simulink® Report Generator™ software to export the model to a Web view. A Web view is an interactive HTML replica of the model
that lets you navigate model hierarchy and check the properties of subsystems, blocks, and signals.
A typical plant model includes the following components:
● Equations of motion: calculate vehicle position and attitude from forces and moments
● Forces and moments: calculate aerodynamic, gravity, and thrust forces and moments
● Actuator positions: calculate displacements based on actuator commands
● Environment: include environmental effects of wind disturbances, gravity, and atmosphere
● Sensors: model the behavior of the measurement devices
We can implement most of this functionality using Aerospace Blockset™ blocks. This model highlights subsystems containing Aerospace Blockset blocks in orange. It highlights Aerospace Blockset blocks
in red.
Figure 3: Top Level of Lightweight Aircraft Model
We begin by building a plant model using a 3DOF block from the Equations of Motion library in the Aerospace Blockset library (Figure 4). This model will help us determine whether the flight vehicle
is longitudinally stable and controllable. We design our subsystem to have the same interface as a six degrees-of-freedom (DOF) version. When we are satisfied with three DOF performance, stability,
and controllability, we can implement the six DOF version, iterating on the other control surface geometries until we achieve the desired behavior from the aircraft.
Figure 4: Equations of Motion implemented using 3DoF Euler block from the Aerospace Blockset library.
To calculate the aerodynamic forces and moments acting on our vehicle, we use a Digital Datcom Forces and the Moments block from the Aerospace Blockset library (Figure 5). This block uses a structure
that Aerospace Toolbox creates when it imports aerodynamic coefficients from Digital Datcom.
For some Digital Datcom cases, dynamic derivatives have values for only the first angle of attack. The missing data points can be filled with the values for the first angle of attack, since these
derivatives are independent of angle of attack. To see example code showing how to fill in missing Digital Datcom data points, you can examine the asbPrepDatcom function.
Figure 5: Aerodynamic Forces and Moments implemented in part with the Aerospace Blockset Digital Datcom Forces and Moment block.
We also use Aerospace Blockset blocks to create actuator, sensor, and environment models (Figures 6, 7, and 8, respectively). Note: In addition to creating the following parts of the model, we use
standard Aerospace Blockset blocks to ensure that we convert from body axes to wind axes and back correctly.
Figure 6: Implementation of actuator models using Aerospace Blockset blocks.
Figure 7: Implementation of flight sensor model using Aerospace Blockset blocks.
Figure 8: Environmental effect of wind, atmosphere, and gravity using Aerospace Blockset blocks.
Once we have created the Simulink plant model, we design a longitudinal controller that commands elevator position to control altitude. The traditional two-loop feedback control structure chosen for
this design (Figure 9) has an outer loop for controlling altitude (compensator C1 in yellow) and an inner loop for controlling pitch angle (compensator C2 in blue). Figure 10 shows the corresponding
controller configuration in our Simulink model.
Figure 9: Structure of the longitudinal controller.
Figure 10: Longitudinal controller in Simulink model.
With Simulink® Control Design™ software, we can tune the controllers directly in Simulink using a range of tools and techniques.
Using the Simulink Control Design interface, we set up the control problem by specifying:
● Two controller blocks
● Closed-loop input or altitude command
● Closed-loop output signals or sensed altitude
● Steady-state or trim condition.
Using this information, Simulink Control Design software automatically computes linear approximations of the model and identifies feedback loops to be used in the design. To design the controllers
for the inner and outer loops, we use root locus and bode plots for the open loops and a step response plot for the closed-loop response (Figure 11).
Figure 11: Design plots before controller tuning.
We then interactively tune the compensators for the inner and outer loops using these plots. Because the plots update in real time as we tune the compensators, we can see the coupling effects that
these changes have on other loops and on the closed-loop response.
To make the multi-loop design more systematic, we use a sequential loop closure technique. This technique lets us incrementally take into account the dynamics of the other loops during the design
process. With Simulink Control Design, we configure the inner loop to have an additional loop opening at the output of the outer loop controller (C1 in Figure 12). This approach decouples the inner
loop from the outer loop and simplifies the inner-loop controller design. After designing the inner loop, we design the outer loop controller. Figure 13 shows the resulting tuned compensator design
at the final trimmed operating point.
Figure 12: Block diagram of inner loop, isolated by configuring an additional loop opening.
Figure 13: Design plots at trim condition after controller tuning.
You can tune the controller in Simulink Control Design software in several ways. For example:
● You can use a graphical approach, and interactively move controller gain, poles, and zeros until you get a satisfactory response (Figure 13).
● You can use Simulink® Design Optimization™ software within Simulink Control Design software to tune the controller automatically.
After you specify frequency domain requirements, such as gain margin and phase margin and time domain requirements, Simulink Design Optimization software automatically tunes controller parameters to
satisfy those requirements. Once we have developed an acceptable controller design, the control blocks in the Simulink model are automatically updated. See the examples "Getting Started with the SISO
Design Tool" in Control Systems Toolbox examples and "Tuning Simulink Blocks in the Compensator Editor" in Simulink Control Design examples for more information on tuning controllers.
We can now run our nonlinear simulation with flight control logic and check that the controller performance is acceptable. Figure 15 shows the results from a closed-loop simulation of our nonlinear
Simulink model for a requested altitude increase from 2,000 meters to 2,050 meters starting from a trimmed operating point. Although a pilot requests a step change in altitude, the actual controller
altitude request rate is limited to provide a comfortable and safe ride for the passengers.
Figure 14: The final check is to run nonlinear simulation with our controller design and check that altitude (purple) tracks altitude request (yellow) in the stable and acceptable fashion.
We can now use these simulation results to determine whether our aircraft design meets its performance requirements. The requirement called for the climb rate to be above 2 m/s. As we can see, the
aircraft climbed from 2,000 to 2,050 meters in less than 20 seconds, providing a climb rate higher than 2.5 m/s. Therefore, this particular geometric configuration and controller design meets our
performance requirements.
In addition to traditional time plots, we can visualize simulation results using the Aerospace Blockset interface to FlightGear (Figure 15).
Figure 15: Visualizing simulation results using the Aerospace Blockset interface to FlightGear.
We can also use the Aerospace Toolbox interface to FlightGear to play back MATLAB data using either simulation results or actual flight test data.
The next steps involve
● Building a hardware-in-the-loop system to test real-time performance
● Building the actual vehicle hardware and software
● Conducting the flight test
● Analyzing and visualizing the flight test data.
Because these steps are not the focus of this example, we will not describe them here. Instead, we will simply mention that they can all be streamlined and simplified using the appropriate tools,
such as Embedded Coder™, Simulink® Real-Time™, Simulink® Verification and Validation™, and Aerospace Toolbox software.
In this example we showed how to:
● Use Digital Datcom and Aerospace Toolbox software to rapidly develop the initial design of your flight vehicle and evaluate different geometric configurations.
● Use Simulink and Aerospace Blockset software to rapidly create a flight simulation of your vehicle.
● Use Simulink Control Design software to design flight control laws.
This approach enables you to determine the optimal geometrical configuration of your vehicle and estimate its performance and handling qualities well before any hardware is built, reducing design
costs and eliminating errors. In addition, using a single tool chain helps facilitate communication among different groups and accelerates design time.
[1] Cannon, M, Gabbard, M, Meyer, T, Morrison, S, Skocik, M, Woods, D. "Swineworks D-200 Sky Hogg Design Proposal." AIAA®/General Dynamics Corporation Team Aircraft Design Competition, 1991-1992.
[2] Turvesky, A., Gage, S., and Buhr, C., "Accelerating Flight Vehicle Design", MATLAB® Digest, January 2007.
[3] Turvesky, A., Gage, S., and Buhr, C., "Model-based Design of a New Lightweight Aircraft", AIAA paper 2007-6371, AIAA Modeling and Simulation Technologies Conference and Exhibit, Hilton Head,
South Carolina, Aug. 20-23, 2007.
|
{"url":"http://www.mathworks.in/help/aeroblks/examples/lightweight-airplane-design.html?prodcode=AE&language=en&s_tid=gn_loc_drop&nocookie=true","timestamp":"2014-04-23T14:42:40Z","content_type":null,"content_length":"43199","record_id":"<urn:uuid:ab2901e1-688f-4743-9693-f37fdb52ff50>","cc-path":"CC-MAIN-2014-15/segments/1398223202774.3/warc/CC-MAIN-20140423032002-00389-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Brightwood, Washington, DC
College Park, MD 20740
Walter, Science and Math Tutor
...I enjoy teaching; one of the most rewarding experiences I've had is seeing a student finally grasp a difficult concept. Although my degree is in physics, I am knowledgeable about other fields as
well, especially math. I took
Algebra 1
in middle school. Since then,...
Offering 10+ subjects including algebra 1
|
{"url":"http://www.wyzant.com/geo_Brightwood_Washington_DC_algebra_1_tutors.aspx?d=20&pagesize=5&pagenum=3","timestamp":"2014-04-18T16:58:07Z","content_type":null,"content_length":"61725","record_id":"<urn:uuid:8d304643-7ad4-4c65-bc79-4da3461b806a>","cc-path":"CC-MAIN-2014-15/segments/1397609533957.14/warc/CC-MAIN-20140416005213-00391-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Stop the clock, squash the bug
Which clock is the best?
We can easily rule the one which has stopped …
Or can we? In “The Rectory Umbrella” Lewis Carroll argues otherwise.
Which is better, a clock that is right only once a year, or a clock that is right twice every day?
“The latter,” you reply, “unquestionably.”
Very good, now attend. I have two clocks: one doesn’t go at all, and the other loses a minute a day: which would you prefer? “The losing one,” you answer, “without a doubt.” Now observe: the one
which loses a minute a day has to lose twelve hours, or seven hundred and twenty minutes before it is right again, consequently it is only right once in two years, whereas the other is evidently
right as often as the time it points to comes round, which happens twice a day.
It’s an amusing diversion, but not really that puzzling: of course the clock which loses time is of more practical use, even if, somewhat paradoxically, the less time it loses the less often it tells
the right time. A clock which loses just a second a day only tells the right time every 118 years or so.
I mention these defective clocks because I’m thinking about bugs in software and how we go about finding and fixing them.
Code which is obviously wrong is easier to spot than code which is almost right, and spotting bugs is the precursor to fixing them. This implies — building on Carroll’s terminology — that we’re
unlikely to ship many stopped clocks but if we’re not careful we may end up delivering a few which lose time. And, in general, code which is obviously wrong is easier to fix than code which is almost
right. A badly-broken function clearly needs a rethink; whereas one which almost works may simply get tweaked until it appears to work, often resulting in a more subtle bug.
C and C++ provide a good example of what I’m talking about. Consider a program which misuses memory. An attempt to allocate workspace of 4294967295 bytes fails instantly^[1]; a slow memory leak, like
a slow running clock, may cause no perceptible damage for an extended period.
Decent tools detect memory leaks. Race conditions in multi-threaded code are harder to track and may prove elusive during system testing. More than once I’ve left a program running under a debugger,
being fed random inputs, in the hope some rare and apparently random condition will trigger a break in execution. Give me truly broken code any day!
Here are two implementations of a C function to find an integer midway between a pair of ordered, positive integer values, truncating downwards. Before reading on, ask yourself which is better.
int midpoint1(int low, int high)
{
    return low/2 + high/2;
}

int midpoint2(int low, int high)
{
    return (low + high)/2;
}
Midpoint1 is a “stopped clock”, returning 3 instead of 4 as the mid-point of 3 and 5, for example. It gets the wrong answer 25% of the time — fatally wrong were it to be used at the heart of, say, a
binary search. I think we’d quickly detect the problem.
An obvious fix would be the one shown in midpoint2 which does indeed return 4 as the mid-point of 3 and 5.
Midpoint2 turns out to be a losing clock, though. If the sum low + high overflows then the result is undefined. On my implementation I get a negative value — a dangerous thing to use as an array
index. This is a notorious and very real defect, nicely documented in a note by Joshua Bloch subtitled “Nearly all Binary Searches and Mergesorts are broken”.
Bloch offers more than one fix so I’ll just note here that:
• this defect simply doesn’t exist in a high-level language like Python or Haskell, where integers are bounded only by machine resources
• I think Bloch is unfair to suggest Jon Bentley’s analysis in chapter 4 of Programming Pearls is wrong. The pseudo-code in this chapter is written in a C-like language somewhere between C and
Python, and in fact one of Bentley’s exercises is to examine what effect word size has on this analysis.
• in a sense, midpoint2 is more broken than midpoint1: over the range of possible low and high inputs, the sum overflows and triggers the defect 50% of the time.
Computers are supposed to be predictable and we typically aim for correct programs. There’s no reason why we shouldn’t consider aiming for programs which are good enough, though, and indeed many
programs which are good enough to be useful are also flawed. Google adverts, for example, analyse the contents of web pages and serve up related links. The algorithm used is secret, clever and quick,
but often results in semantic blunders and, on occasion, offensive mistakes. Few could deny how useful to Google this program has been, though.
Here’s a more interesting example of an algorithm which, like a losing clock, is nearly right.
def is_fprime(n):
    """Use Fermat's little theorem to guess if n is prime.
    """
    from random import randrange
    tries = 3
    xs = (randrange(1, n) for _ in range(tries))
    return all((x ** n) % n == x for x in xs)
We won’t go into the mathematics here. A quick play with this function looks promising.
>>> all(is_fprime(n) for n in [2, 3, 5, 7, 11, 13, 17, 19])
True
>>> any(is_fprime(n) for n in [4, 6, 8, 9, 10, 12, 14, 15])
False
In fact, if we give it a real work-out on some large numbers, it does well. I used it to guess which of the numbers between 100000 and 102000 were prime, comparing the answer with the correct result
(the code is at the end of this article). It had a better than 99% success rate (in clock terms, it lost around 8 minutes a day) and increasing tries will boost its performance.
The better is_fprime performs, the less likely we are to spot that it’s wrong. What’s worse, though, is that it cannot be fixed by simple tweaking. However high we set tries we won’t have a correct
function. We could even take the random probing out of the function and shove every single value of x in the range 1 to n into the predicate:
def exhaustive_is_fprime(n):
    return all((x ** n) % n == x for x in range(1, n))
Exhaustive_is_fprime is expensive to run and will (very) occasionally return True for a composite number^[2]. If you want to know more, search for Carmichael numbers.
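A quick way to see this in action (my own snippet, in the same spirit as the code above): 561 = 3 x 11 x 17 is composite, yet it satisfies the Fermat test for every base.

print(exhaustive_is_fprime(561))          # True, even though 561 is composite
print(all(561 % d for d in (3, 11, 17)))  # False: 3, 11 and 17 all divide 561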
The point I’m making is that code which is almost right can be dangerous. We are tempted to fix it by adjusting the existing implementation, even if, as in this case, a complete overhaul is required.
By contrast, we all know what needs doing with code which is plainly wrong.
We’ve all seen nervous functions which go beyond their stated interface in an attempt to protect themselves from careless users.
/*
 * Return the maximum value found in the input array.
 * Pre-condition: the input array must not be empty.
 */
int nervy_maximum_value(int const * items, size_t count)
{
    int M = -INT_MAX;
    if (items == NULL || count == 0)
        return M;
    for ( ; count-- != 0; ++items)
        if (*items > M)
            M = *items;
    return M;
}
What’s really wanted is both simpler and easier for clients to code against.
int maximum_value(int const * items, size_t count)
{
    int const * const end = items + count;
    int M = *items++;
    for ( ; items != end; ++items)
        if (*items > M)
            M = *items;
    return M;
}
Did you spot the subtle bug in nervy_maximum_value? It uses -INT_MAX instead of INT_MIN which will cause trouble if clients code against this undocumented behaviour; if nervy_maximum_value is
subsequently fixed, this client code back-fires.
Note that I’m not against the use of assertions to check pre-conditions, and a simple assert(items != NULL && count != 0) works well in maximum_value; it’s writing code which swallows these failed
pre-conditions I consider wrong.
The occurrence of defects in complex software systems can be modelled in the same way as radioactive decay. I haven’t studied this theory and my physics is rusty^[3], but the basic idea is that the
population of bugs in some software is rather like a population of radioactive particles. Any given bug fires (any given particle decays) at random, so we can’t predict when this event will happen,
but it is equally likely to fire at any particular time. This gives each defect an average lifetime: a small lifetime for howling defects, such as dereferencing NULL pointers, and a longer one for
more subtle problems, such as accumulated rounding errors. Assuming we fix a bug once it occurs, the population of defects decays exponentially, and we get the classic tailing-off curve.
Anyone who has ever tried to release a software product knows how it feels to slide down the slope of this curve. We system test, find bugs, fix them, repeat. At the start it can be exhilarating as
bugs with short half-lives fall out and get squashed, but the end game is demoralising as defects get reported which then cannot be reproduced, and we find ourselves clawing out progress. When we
eventually draw the line and ship the product we do so suspecting the worst problems are yet to be found. To put it more succinctly^[4].
Ship happens!
A combination of techniques can help us escape this depressing picture. The most obvious one would be to avoid it: rather than aim for “big-bang” releases every few years, we can move towards
continual and incremental delivery. A modular, decoupled architecture helps. So does insistence on unit testing. Rather than shake the system and sweep up the bugs which fall off we should develop a
suite of automated tests which actively seek the various paths through the code, and exercise edge cases. Within the code-base, as already mentioned, defensive programming can cause defects to become
entrenched. Instead, we should adopt a more confident style, where code fails hard and fast.
Have you ever fixed a defect and wondered how the code ever even appeared to work before your fix? It’s an important question and one which requires investigation. Perhaps the bug you’ve fixed is
compensated for by defensive programming elsewhere. Or perhaps there are vast routes through the code which have yet to be exercised.
None of these clocks is much good. The first has stopped, the second loses a second every minute, the third gains a second every minute. At least it’s easy to see the problem with the first: we won’t
be tempted to patch it.
We should never expect our code to work first time and we should be suspicious if it appears to do so. Defensive programming seems to mean different things to different people. If I’ve misused the
term here, I’m sorry. Our best defence is to assume code is broken until we’ve tested it, to assume it will break in future if our tests are not automated, and to fail hard and fast when we detect
import math
from itertools import islice, count
from random import randrange

def primes(lo, hi):
    '''Return the list of primes in the range [lo, hi).

    >>> primes(0, 19)
    [2, 3, 5, 7, 11, 13, 17]
    >>> primes(5, 10)
    [5, 7]
    '''
    sqrt_hi = int(math.sqrt(hi))
    sieve = range(hi)
    zeros = [0] * hi
    sieve[1] = 0
    for i in islice(count(2), sqrt_hi):
        if sieve[i] != 0:
            remove = slice(i * i, hi, i)
            sieve[remove] = zeros[remove]
    return [p for p in sieve[lo:] if p != 0]

def is_fprime(n, tries=3):
    '''Use Fermat's little theorem to guess if n is prime.
    '''
    xs = (randrange(1, n) for _ in range(tries))
    return all((x ** n) % n == x for x in xs)

def fprimes(lo, hi, tries=10):
    '''Alternative implementation of primes.
    '''
    return filter(is_fprime, range(lo, hi))

if __name__ == '__main__':
    import doctest
    doctest.testmod()
    lo, hi = 100000, 102000
    primes_set = set(primes(lo, hi))
    fprimes_set = set(fprimes(lo, hi))
    print "Range [%r, %r)" % (lo, hi)
    print "Actual number of primes", len(primes_set)
    print "Number of fprimes", len(fprimes_set)
    print "Primes missed", primes_set - fprimes_set
    print "False fprimes", fprimes_set - primes_set
Running this program produced output:
Range [100000, 102000)
Actual number of primes 174
Number of fprimes 175
Primes missed set([])
False fprimes set([101101])
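The lone false positive, 101101, is a Carmichael number (see note [2] below). A quick, independent check uses Korselt's criterion: a composite, squarefree n is a Carmichael number exactly when p - 1 divides n - 1 for every prime p dividing n. The sketch below is mine and is not part of the program above:

def prime_factors(n):
    '''Return the prime factors of n, or None if n is not squarefree.'''
    factors = []
    d = 2
    while d * d <= n:
        while n % d == 0:
            if factors and factors[-1] == d:
                return None          # repeated factor: not squarefree
            factors.append(d)
            n //= d
        d += 1
    if n > 1:
        factors.append(n)
    return factors

def is_carmichael(n):
    ps = prime_factors(n)
    if ps is None or len(ps) < 2:    # must be composite and squarefree
        return False
    return all((n - 1) % (p - 1) == 0 for p in ps)

print(is_carmichael(101101))   # True: 101101 = 7 * 11 * 13 * 101
print(is_carmichael(561))      # True: the smallest Carmichael number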
[1] In the first version of this article I wrote that an attempt to allocate 4294967295 bytes would cause the program to crash, which isn’t quite right. Malloc returns NULL in the event of failure;
standard C++ operator new behaviour is to throw a bad_alloc exception. My thanks to R Samuel Klatchko for the correction.
[2] “Structure and Interpretation of Computer Programs” discusses Carmichael numbers in a footnote
Numbers that fool the Fermat test are called Carmichael numbers, and little is known about them other than that they are extremely rare. There are 255 Carmichael numbers below 100,000,000. The
smallest few are 561, 1105, 1729, 2465, 2821, and 6601. In testing primality of very large numbers chosen at random, the chance of stumbling upon a value that fools the Fermat test is less than
the chance that cosmic radiation will cause the computer to make an error in carrying out a “correct” algorithm. Considering an algorithm to be inadequate for the first reason but not for the
second illustrates the difference between mathematics and engineering.
[3] Being lazy and online I thought I'd search for a nice radioactive decay graphic rather than draw my own. I found a real gem on the University of Colorado site, where Kyla and Bob discuss
radioactive decay.
Hmmm…so a lot of decays happen really fast when there are lots of atoms, and then things slow down when there aren't so many. The halflife is always the same, but the half gets smaller and smaller.
That’s exactly right. Here’s another applet that illustrates radioactive decay in action.
Visit the site to play with the applet Bob mentions. You’ll find more Kyla and Bob pictures there too.
[4] I’m unable to provide a definitive attribution for the “Ship happens!” quotation. I first heard it from Andrei Alexandrescu at an ACCU conference, who in turn thinks he got it from Erich Gamma. I
haven’t managed to contact Erich Gamma. Matthew B. Doar reports using the term back in 2002, and it appears as a section heading in his book “Practical Development Environments”.
|
{"url":"http://wordaligned.org/articles/stop-the-clock-squash-the-bug","timestamp":"2014-04-20T23:26:35Z","content_type":null,"content_length":"25569","record_id":"<urn:uuid:c786f976-eb7a-4d8c-bbef-2ab3b99647ba>","cc-path":"CC-MAIN-2014-15/segments/1397609539337.22/warc/CC-MAIN-20140416005219-00038-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Smithfield, RI Math Tutor
Find a Smithfield, RI Math Tutor
...Though I did not tutor much during college, I started up with tutoring again with a company as soon as I had my degree (and more free time). I enjoy working with children of all ages and of
all levels, whether it is to help them catch up when they have fallen behind or help them ace the class. M...
17 Subjects: including calculus, trigonometry, actuarial science, linear algebra
...I found a child might be able to read most of a word but have trouble with one syllable. We would look at the word for the sounds of the vowels within that syllable, see if it followed any
rules, patterns, or was irregular, and then sound it out. Many times, the child might read the word or...
31 Subjects: including algebra 2, English, precalculus, algebra 1
...I think to be a good tutor you have to understand how a person learns and everybody learns differently. You have to be able to find out the best way a person learns and modify how you are
going to tutor them based on that. I also think constant practice and having somebody there to push you and motivate you is also sometimes necessary for students.
17 Subjects: including calculus, precalculus, trigonometry, statistics
...I originally was an Accountant for several years before becoming a math teacher. I have a love of math and a great interest in helping students that struggle with math. There was a time when I
was in 7th grade when I started to struggle a great deal with math.
7 Subjects: including algebra 2, precalculus, SAT math, linear algebra
...I am pursuing a master's degree in school counseling and would like to grow my tutoring/ one on one student teaching experience. Feel free to message with questions! Thank you!
15 Subjects: including algebra 1, geometry, reading, writing
|
{"url":"http://www.purplemath.com/smithfield_ri_math_tutors.php","timestamp":"2014-04-18T22:04:31Z","content_type":null,"content_length":"24003","record_id":"<urn:uuid:9c89c0da-a077-4135-b41a-f6daacf33657>","cc-path":"CC-MAIN-2014-15/segments/1398223203841.5/warc/CC-MAIN-20140423032003-00350-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Coq and The Monad Laws: The Third
Submitted by mrd on Thu, 08/16/2007 - 2:18pm.
The Third Monad Law
The previous two articles introduced Coq and the first two Monad Laws. I am discussing the third one separately because it will take longer to prove.
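Stated informally for the list monad, the third law says that two binds in sequence can be replaced by a single bind with a composed function: bind (bind m f) g = bind m (fun x => bind (f x) g). As an unofficial sanity check, here is a sketch of mine in Python (not part of the Coq development), where bind plays the role of the flat_map used below:

def bind(m, f):
    # flat_map: apply f to each element and concatenate the resulting lists
    return [y for x in m for y in f(x)]

m = [1, 2, 3]
f = lambda x: [x, x + 10]
g = lambda y: [y * 2]

lhs = bind(bind(m, f), g)
rhs = bind(m, lambda x: bind(f(x), g))
assert lhs == rhs
print(lhs)   # [2, 22, 4, 24, 6, 26]

Of course this only exercises one example; the Coq proof establishes the law for all lists and all functions.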
The proof for the third law will proceed at first like the others, induction and some simplification.
Theorem third_law : forall (A B C : Set) (m : list A) (f : A -> list B) (g : B -> list C),
  bind B C (bind A B m f) g = bind A C m (fun x => bind B C (f x) g).
Proof.
  induction m.
  (* base case *)
  simpl. trivial.
  (* inductive case *)
  intros f g. simpl.
  unfold bind. unfold bind in IHm.
Which brings us to this state in the interactive theorem prover.
1 subgoal

  A : Set
  B : Set
  C : Set
  a : A
  m : list A
  IHm : forall (f : A -> list B) (g : B -> list C),
        flat_map g (flat_map f m) = flat_map (fun x : A => flat_map g (f x)) m
  f : A -> list B
  g : B -> list C
  ============================
   flat_map g (f a ++ flat_map f m) =
   flat_map g (f a) ++ flat_map (fun x : A => flat_map g (f x)) m
At this point, if we could rewrite flat_map g (f a ++ flat_map f m) into flat_map g (f a) ++ flat_map g (flat_map f m), then we would be able to apply the Inductive Hypothesis and be home free.
The "cut" tactic allows you to make an assumption, and then later come back and prove your assumption correct. Using "cut",
cut (flat_map g (f a ++ flat_map f m) =
     flat_map g (f a) ++ flat_map g (flat_map f m)).
intro Distrib.
rewrite Distrib.
rewrite IHm.
reflexivity.
the original goal is easily solved. But Coq has generated an additional subgoal: we must now prove that this cut is correct.
1 subgoal

  A : Set
  B : Set
  C : Set
  a : A
  m : list A
  IHm : forall (f : A -> list B) (g : B -> list C),
        flat_map g (flat_map f m) = flat_map (fun x : A => flat_map g (f x)) m
  f : A -> list B
  g : B -> list C
  ============================
   flat_map g (f a ++ flat_map f m) =
   flat_map g (f a) ++ flat_map g (flat_map f m)
We'll proceed by induction on f a which has inductive type list B.
induction (f a).
(* base case *)
simpl. reflexivity.
(* inductive case *)
simpl. rewrite IHl. rewrite app_ass. reflexivity.
Qed.
End ListMonad.
All done. We only needed the associativity property of list append, which I found by querying SearchAbout app.
Here is a much shorter proof which takes advantage of some of Coq's automated tactics.
Theorem third_law' : forall (A B C : Set) (m : list A) (f : A -> list B) (g : B -> list C),
  bind B C (bind A B m f) g = bind A C m (fun x => bind B C (f x) g).
Proof.
  induction m; simpl; intuition.
  replace (bind B C (f a ++ bind A B m f) g)
     with (bind B C (f a) g ++ bind B C (bind A B m f) g);
  [ rewrite IHm
  | induction (f a); simpl; auto; rewrite app_ass; rewrite IHl ]; auto.
Qed.
On a final note, Coq has the ability to extract code into several different languages.
Extraction Language Haskell. Recursive Extraction bind ret.
results in
module Main where

import qualified Prelude

data List a = Nil
            | Cons a (List a)

app l m =
  case l of
    Nil -> m
    Cons a l1 -> Cons a (app l1 m)

flat_map f l =
  case l of
    Nil -> Nil
    Cons x t -> app (f x) (flat_map f t)

ret :: a1 -> List a1
ret a = Cons a Nil

bind :: (List a1) -> (a1 -> List a2) -> List a2
bind m f = flat_map f m
|
{"url":"http://sequence.complete.org/node/360","timestamp":"2014-04-21T02:04:12Z","content_type":null,"content_length":"17592","record_id":"<urn:uuid:2904e116-1291-4641-a8c7-9dcb81c0d1e6>","cc-path":"CC-MAIN-2014-15/segments/1398223206147.1/warc/CC-MAIN-20140423032006-00356-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Hi, I'm a new user, and I need help with this Statistics problem
September 19th 2013, 05:50 PM #1
Sep 2013
North Carolina, USA
Hi, I'm a new user, and I need help with this Statistics problem
Hi! This seems like a great place and everyone is so helpful sand I need some help so I came here. I'm in AP Statistics and this problem has me confused.
So, here is the problem-
Suppose 75% of all families spend more than $75 weekly for food, while 15% spend more than $150. Assuming the distribution of family expenditures on groceries is normal, what is the mean weekly
expenditure and standard deviation?
I have no idea how to start it or anything, thanks for the help guys!
Re: Hi, I'm a new user, and I need help with this Statistics problem
From a table of cumulative values for the standardised normal distribution, you can find out what 'score' encompassed 75% of the area from the right and what score encompasses 15% of the area
from the right.
Then you can use the formula for standardising the normal distribution $Z = \frac{X - \mu}{\sigma}$. Simultaneous equations.
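For instance, a numerical version of that recipe (a sketch which assumes SciPy is available; the two z-values can just as well be read from a printed table):

from scipy.stats import norm

# P(X > 75) = 0.75 and P(X > 150) = 0.15 for X ~ Normal(mu, sigma), so
# 75 is the 25th percentile and 150 is the 85th percentile:
#     mu + z1 * sigma = 75,   mu + z2 * sigma = 150.
z1 = norm.ppf(0.25)              # about -0.6745
z2 = norm.ppf(0.85)              # about  1.0364
sigma = (150 - 75) / (z2 - z1)
mu = 75 - z1 * sigma
print(mu, sigma)                 # roughly 104.6 and 43.8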
Last edited by FelixFelicis28; September 20th 2013 at 02:27 PM.
|
{"url":"http://mathhelpforum.com/new-users/222103-hi-i-m-new-user-i-need-help-statistics-problem.html","timestamp":"2014-04-18T18:57:57Z","content_type":null,"content_length":"34253","record_id":"<urn:uuid:3adaea08-80fb-4c44-8023-67fde7861933>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.7/warc/CC-MAIN-20140416005215-00516-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Natural Orbitals for "Particles in a Box"
If by "natural" you mean the *eigenstates* of the interacting electron system, then no. But you could take Slater determinants of the sine wave solutions as a useful *basis* to work in for the
interacting electron problem. If you really want to find a good ground state though, probably the best approach would be to use a test wave-function with a few free parameters and apply the
variational principle.
Natural orbitals are defined technically as the orbitals which diagonalize the 1-density operator.
|
{"url":"http://www.physicsforums.com/showpost.php?p=3821381&postcount=4","timestamp":"2014-04-19T07:29:44Z","content_type":null,"content_length":"8263","record_id":"<urn:uuid:f1f155be-2bb0-4a65-a5a3-d19b7f1e6c10>","cc-path":"CC-MAIN-2014-15/segments/1397609536300.49/warc/CC-MAIN-20140416005216-00566-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Wolfram Research, Inc.
A New Kind of Science Explorer: Mathematica Kit
The NKS Explorer: Mathematica Kit gives access to the functionality of NKS Explorer from within Mathematica. This enables you to run more in-depth experiments by incorporating the results into your
own Mathematica programs. You can quickly generate hundreds of different graphics for visual inspection or perform various kinds of analysis on large data sets.
This loads the package.
Finding contents
Finding available contents.
Each available graphic is labeled by its page number in A New Kind of Science. If there are several different types of graphics on a page (page 80, for example), they are numbered 80.1, 80.2, and so on.
This lists the available sections in Chapter 8.
Out[2]={{8,2,"The Growth of Crystals"},{8,3,"The Breaking of Materials"},{8,5,"Fundamental Issues in Biology"},{8,6,"Growth of Plants and Animals"},{8,7,"Biological Pigmentation Patterns"},
{8,8,"Financial Systems"}}
This lists the available page numbers in Section 8.6.
Note that there are three different graphics available for page 402 and two different ones for page 411.
This gives the description of a particular graphic.
The information given includes the title of the graphic and the general format of the NKSGraphics command for the page. There is also a description of each of the required input values, as well as
the input values for the default graphic appearing in NKS Explorer.
Getting graphics and data
Producing graphics and data.
Possible options for NKSDisplay are Heading -> False (no annotation) and ImageSize -> Automatic (using Mathematica's default image size). The overall image size used by NKSDisplay can be rescaled by
changing the value of the variable $NKSImageScale.
Getting the input values available as defaults and examples in NKS Explorer.
The basic command for accessing a graphic is NKSGraphics. If no inputs are given, the default graphic is returned.
The corresponding raw data is accessed using NKSData.
This produces an annotated version of the graphic, using the default NKS Explorer image size.
You can enter your own input values as the second argument to NKSGraphics, NKSData, and NKSDisplay.
Each graphic in NKS Explorer comes with a set of default input values, and for many graphics there are also further sets of example input values, usually labeled (a), (b), and so on. This accesses
This shows all examples for page 400.
You can also use the result from doing “copy input to clipboard” in NKS Explorer as an argument to NKSGraphics, NKSData, and NKSDisplay.
In[12]:=NKSGraphics[NKSXInput["NKS400",{"Chapter-08","Section-06","Page-400"},{"branch angle" -> 10,"branch growth rate" -> 0.5,"stem growth rate" -> 0.3,"steps" -> 6}]]//Show;
For convenient browsing of the contents of a whole chapter, use the NKSNotebook function.
This will pop up a new notebook with a listing of all available contents in Chapter 8. For each graphic, there is an unevaluated Input cell which would produce the default version of the graphic. To
show all default graphics in a given section or a whole chapter, simply select all the cells and evaluate them. The complete set of such notebooks is also available directly in the Help Browser.
|
{"url":"http://reference.wolfram.com/legacy/applications/nksx/","timestamp":"2014-04-20T03:58:44Z","content_type":null,"content_length":"33871","record_id":"<urn:uuid:c1a3f729-070b-49f1-9d51-c004e2e0ba19>","cc-path":"CC-MAIN-2014-15/segments/1397609537864.21/warc/CC-MAIN-20140416005217-00163-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Intrepid::Basis_HDIV_TRI_In_FEM< Scalar, ArrayScalar >
Implementation of the default H(div)-compatible Raviart-Thomas basis of arbitrary degree on Triangle cell. More...
Public Member Functions
Basis_HDIV_TRI_In_FEM (const int n, const EPointType pointType)
void getValues (ArrayScalar &outputValues, const ArrayScalar &inputPoints, const EOperator operatorType) const
Evaluation of a FEM basis on a reference Triangle cell.
void getValues (ArrayScalar &outputValues, const ArrayScalar &inputPoints, const ArrayScalar &cellVertices, const EOperator operatorType=OPERATOR_VALUE) const
FVD basis evaluation: invocation of this method throws an exception.
Private Member Functions
virtual void initializeTags ()
Initializes tagToOrdinal_ and ordinalToTag_ lookup arrays.
Private Attributes
< Scalar, FieldContainer< Scalar > > Phis
Orthogonal basis out of which the nodal basis is constructed.
FieldContainer< Scalar > coeffs
Expansion coefficients of the nodal basis in terms of the orthogonal one.
template<class Scalar, class ArrayScalar>
class Intrepid::Basis_HDIV_TRI_In_FEM< Scalar, ArrayScalar >
Implementation of the default H(div)-compatible Raviart-Thomas basis of arbitrary degree on Triangle cell.
Implements nodal basis of degree n (n>=1) on the reference Triangle cell. The basis has cardinality n(n+2) and spans an INCOMPLETE polynomial space of degree n. Basis functions are dual to a
unisolvent set of degrees-of-freedom (DoF) defined and enumerated as
• The normal component on a lattice of order n+1 and offset 1 on each edge (see PointTools). This gives one point per edge in the lowest-order case. These are the first 3 * n degrees of freedom
• If n > 1, the x and y components at a lattice of order n+1 and offset on the triangle. These are the rest of the degrees of freedom.
If the pointType argument to the constructor specifies equispaced points, then the edge points will be equispaced on each edge and the interior points equispaced also. If the pointType argument
specifies warp-blend points, then Gauss-Lobatto points of order n are chosen on each edge and the interior of warp-blend lattice of order n+1 is chosen for the interior points.
Definition at line 93 of file Intrepid_HDIV_TRI_In_FEM.hpp.
template<class Scalar , class ArrayScalar >
void Intrepid::Basis_HDIV_TRI_In_FEM< Scalar, ArrayScalar >::getValues ( ArrayScalar & outputValues,
const ArrayScalar & inputPoints,
const EOperator operatorType
) const [virtual]
Evaluation of a FEM basis on a reference Triangle cell.
Returns values of operatorType acting on FEM basis functions for a set of points in the reference Triangle cell. For rank and dimensions of I/O array arguments see Section MD array template arguments
for basis methods .
outputValues [out] - variable rank array with the basis values
inputPoints [in] - rank-2 array (P,D) with the evaluation points
operatorType [in] - the operator acting on the basis functions
Implements Intrepid::Basis< Scalar, ArrayScalar >.
Definition at line 288 of file Intrepid_HDIV_TRI_In_FEMDef.hpp.
Referenced by main().
|
{"url":"http://trilinos.sandia.gov/packages/docs/r11.2/packages/intrepid/doc/html/classIntrepid_1_1Basis__HDIV__TRI__In__FEM.html","timestamp":"2014-04-16T22:33:31Z","content_type":null,"content_length":"13303","record_id":"<urn:uuid:9bd915ee-7ae3-40c5-8e29-bdda90764d23>","cc-path":"CC-MAIN-2014-15/segments/1398223211700.16/warc/CC-MAIN-20140423032011-00155-ip-10-147-4-33.ec2.internal.warc.gz"}
|
No. 2467: Graph Theory
Today, the bridges of Königsberg. The University of Houston's College of Engineering presents this series about the machines that make our civilization run, and the people whose ingenuity created them.
I first encountered the problem in elementary school. I was on a field trip to the Seattle Science Center. One of the instructors there showed us a picture. On it were four islands. Some were
connected by bridges — seven in all. And she gave us a challenge.
“Pick any island,” she said, “and see if you can find a walk that goes over every bridge exactly once and brings you back to the island where you started.” I tried one walk, then another. No
luck. I always had to retrace at least one bridge. I drew the picture on a piece of paper and took it home to show my parents. The instructor had succeeded. She’d made me think.
I ran into the problem many years later in a college course on graph theory. To mathematicians, a graph is a collection of islands connected by bridges or, more precisely, points connected by
lines. Get a sheet of paper. Draw some points. Connect some of them with lines. You’ve got what mathematicians call a graph. Pretty simple. But graphs turn out to be remarkably interesting.
Many mathematicians make their living trying to solve difficult, abstract problems about graphs. Claws. Odd holes. Odd anti-holes. Graph theorists speak a language of their own. But graph theory
has plenty of practical problems, too.
For example, street maps define graphs. We can think of each intersection as a point and each street segment between two intersections as a line. So the problem of finding a shortest path from
your house to work is a problem in graph theory. So is the problem of picking good bus routes, or how to make scheduled deliveries from a warehouse. Can garbage trucks be routed so they don’t go
down a street more than once? Graph theory again. In fact, it’s just the island-and-bridge problem stated more generally.
The specific island-and-bridge problem I’d learned as a child is called the Königsberg Bridge Problem. As the story goes, it was posed by the citizens of Königsberg, Prussia, as they puzzled over
the problem on their evening strolls. The problem was solved by the great mathematician Leonhard Euler in 1736. That moment’s considered the beginning of graph theory. And the problem’s solution
is so simple — so satisfying — its story has been passed down through generations of mathematicians.
Euler showed that every walk in Königsberg must retrace at least one bridge. But he also realized there’s a simple way to tell if any graph does or doesn’t have a walk free of repeated bridges.
The answer? There’s a walk free of repeated bridges if, and only if, every island is connected to an even number of bridges. Try to convince yourself — and your friends; or visit the Engines web
site for a little help.
I’m Andy Boyd at the University of Houston, where we’re interested in the way inventive minds work.
(Theme music)
For related episodes, see 1897, Optimization, and 2153, Cliques and Ties.
Graph Theory and the Bridges of Königsberg: http://www.jcu.edu/math/vignettes/bridges.htm. Accessed February 25, 2009.
The land masses in the Königsberg Bridge Problem aren’t all islands, as the picture suggests. They are simply land masses separated by a river, but were referred to as islands for audio clarity.
Euler went on to prove many facts about graphs. Here’s how he reasoned the “even number of bridges” property.
Statement: If there exists a walk that starts and ends at the same point without retracing any lines, then each point must be connected to an even number of lines.
To see this is true, pick any point with an odd number of lines attached to it. Any walk that traverses every line must, in particular, traverse all the lines attached to this point. The walk
visits this point on one bridge, then leaves on another, revisits the point on a different bridge, then leaves on another bridge, and so on (the walk can certainly go elsewhere in between the
visits, but we don’t need to know where to make our argument). At some point, because the number of bridges connected to the point is odd, the walk enters the point but can’t leave — unless a
bridge is retraced. (If by chance the point we picked was the starting point of the walk, the walk would eventually leave the point but couldn’t return without retracing a line.) So we can
conclude that each point must be connected to an even number of lines.
Statement: If each point is connected to an even number of lines, then there exists a walk that starts and ends at the same point without retracing any lines.
Draw a walk at random, starting at any point. Because every point has an even number of bridges connected to it, you’ll never get stuck at a point until you return to where you started. If you’ve
traced all the lines, great; you’re done. If not, call the walk you just created Walk 1. If you erase all the lines in Walk 1, the points and lines that remain have the property that each point
still has an even number of lines attached to it. Call this graph with some of the lines erased Graph 2. So pick a new point P — one that’s in Walk 1, and again create a random walk on Graph 2.
Call this Walk 2. Notice that you can combine Walk 1 and Walk 2 by taking Walk 1 to point P, “inserting” Walk 2, which starts and ends at P, and completing Walk 1. Call this Walk 3. Now repeat
the procedure by erasing all the lines in Walk 3 (which are all the lines in Walks 1 and 2). You may have to repeat the process a number of times, but since there are only a finite number of
points and lines, the process must eventually stop.
The process sounds harder than it is. Give it a try!
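One way to give it a try is to let a few lines of code apply the even-degree test to the Königsberg graph itself (a sketch; the bridge list follows the usual textbook drawing of the city):

from collections import Counter

# Each pair is one bridge between two land masses A, B, C, D (a multigraph).
bridges = [('A', 'B'), ('A', 'B'), ('A', 'C'), ('A', 'C'),
           ('A', 'D'), ('B', 'D'), ('C', 'D')]

degree = Counter()
for u, v in bridges:
    degree[u] += 1
    degree[v] += 1

print(dict(degree))                               # every land mass has odd degree
print(all(d % 2 == 0 for d in degree.values()))   # False: no such walk exists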
The Engines of Our Ingenuity is Copyright © 1988-2009 by John H. Lienhard.
|
{"url":"http://www.uh.edu/engines/epi2467.htm","timestamp":"2014-04-19T17:30:38Z","content_type":null,"content_length":"8755","record_id":"<urn:uuid:d5971438-7132-422f-a4a3-c91f67898f95>","cc-path":"CC-MAIN-2014-15/segments/1398223202548.14/warc/CC-MAIN-20140423032002-00010-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Coastal Engineering
A theory for waves of finite height, presented in this paper, is an exact theory, to any order for which it is extended. The theory is represented by a summation of harmonic series, each term of which
is in an unexpanded form. The terms of the series when expanded result in an approximation of the exact theory, and this approximation is identical to Stokes' wave theory extended to the same order.
The theory represents irrotational - divergenceless flow. The procedure is to select the form of equations for the coordinates of the particles in anticipation of later operations to be performed in
the evaluation of the coefficients of the series. The horizontal and vertical components of these coordinates are given respectively by the following; (equations given).
From the above equations it is possible to deduce the expressions for velocity potential and stream function. The horizontal and vertical components of particle velocity are obtained by
differentiating these coordinate expressions with respect to time. Along the free surface all expressions reduce to simple forms, which in turn saves considerable work in the evaluation of the
coefficients. The coefficients are evaluated by use of Bernoulli's equation. The final form of the solution is given by two sets of equations. One set of equations (same as above) is used to compute
the particle position and the second set (the first time derivatives of the above) is used to compute the components of particle velocity at the particle position. That is, the particles and
velocities are referenced to the lines of the stream function and the velocity potential. Expanding the two sets of equations, by approximation methods, results in one set of equation for computing
particle velocity and no equations are required for the particle position.The unexpanded form requiring two sets of equations, being an exact solution, is more accurate theoretically, than the Stokes
or the expanded form to the same order. Coefficients have been formulated for all terms of the order one to five for both the unexpanded and the expanded form of the theory, and are presented in
tabular form as functions of d/L, as consecutive equations.
finite height; harmonic series; Stokes' theory;
|
{"url":"http://journals.tdl.org/icce/index.php/icce/article/view/2168/0","timestamp":"2014-04-17T05:54:22Z","content_type":null,"content_length":"17590","record_id":"<urn:uuid:3f80835d-7dc4-4ef2-af8d-f58861c4dbf8>","cc-path":"CC-MAIN-2014-15/segments/1398223202774.3/warc/CC-MAIN-20140423032002-00448-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Ray Tracing News
"Light Makes Right"
August 9, 2001
Volume 14, Number 1
Compiled by Eric Haines, 1050 Craft Road, Ithaca, NY 14850 erich@acm.org. Opinions expressed are mine, not Autodesk's.
All contents are copyright (c) 2000,2001, all rights reserved by the individual authors.
HTML Table of Contents at http://www.raytracingnews.org
Text version at http://www.acm.org/tog/resources/RTNews/text/
You may also want to check out the Ray Tracing News issue guide and the ray tracing FAQ.
SIGGRAPH is the great motivator for me right now, as far as getting out an issue of the Ray Tracing News goes. First, it's a place to announce the Ray Tracing Roundtable (which this year will be in
the Garden East Room at the Wilshire HQ hotel at 6:30-8 pm on Thursday), and second, by getting an issue out I don't have to answer any questions like, "are you still compiling the Ray Tracing News?"
The RT Roundtable: where else can one possibly ask the question, "mailboxes, are they good or bad?" Which is in fact a question I'd like to bring up: Slusallek says good, Shirley says bad, and only a
good mudfight can decide the issue. Well, actually, there is one other place to ask this question: the course "Interactive Ray-Tracing", which is on Sunday from 1:30 to 5 pm at SIGGRAPH,
http://helios.siggraph.org/s2001/conference/courses/crs13.html. Tim Purcell mentioned that he'll be addressing the topic. He's finding, for grid acceleration structures, that the most efficient grids are
those with 5 to 10 times as many grid cells as objects, and that even with this high ratio the gain due to mailboxing can often disappear.
At this point ray tracing itself is a tool. It's a rendering method, but the geometric operation of intersecting rays is useful for many other operations, such as light transport in general, picking,
and collision detection and response. I even noticed ray tracing in DirectX 8 recently: the sample picking code uses a ray/triangle intersector to do its work. So is the RTNews worth continuing?
Sure. It doesn't cost anything, of course (except time). For me the Ray Tracing News is a great place to archive those little ideas, web sites, book information, and other resources that otherwise
get lost in my mailbox and bookmark list.
Other news: there are two books that should be of interest to readers that will be out by SIGGRAPH. Henrik Wann Jensen's "Realistic Image Synthesis Using Photon Mapping", from AK Peters, presents
both the theory and practice of using the photon mapping algorithm for global illumination. This technique works by sending out rays from the light sources and having the photons collect on surfaces,
then using classical ray tracing to retrieve these collections. A simple idea, and ideas such as Arvo's "Backwards Ray Tracing" predate it, but Jensen works through many of the flaws of earlier
schemes and presents details and fixes in a single volume. This is a nice follow-up work to Peter Shirley's book from last year, "Realistic Ray Tracing," which deals with classical and Monte Carlo
ray tracing. These two books combined could make a good one or two semester course: implement a ray tracer, add Monte Carlo, add photon mapping (though Jensen makes it too easy, giving a full C++
implementation for the photon map in an appendix). I've read the first six chapters so far and hope to finish the rest on the plane to SIGGRAPH. The first third of the book is a review of global
illumination research to date. It's well-written, though sometimes jumps to equations with not as much explanation as I would prefer. The rest of the book is on photon mapping itself and how to
implement it effectively. A relatively short book (less than 200 pages), but pure gold if you have any interest in using this algorithm. There is more on photon mapping at Henrik's homepage: http://
AK Peters has another book out this SIGGRAPH: "Non-Photorealistic Rendering", by Gooch and Gooch. I have not seen this book yet, but I have been looking forward to it. See their website for more
information on NPR (though not the book): http://www.cs.utah.edu/npr/ - details on the SIGGRAPH NPR BOF are also there. Both Jensen and the Gooches will have book signings at AK Peters' booth (#1918)
on Tuesday afternoon, 2-3 pm for Jensen, 3:30-4:30 for the Gooches. Jensen is also signing at the SIGGRAPH bookstore at 3:30-4:30 (but it's probably cheaper to buy at the AK Peters booth).
Other than these two books, I'm also looking forward to "Game Programming Gems 2", edited by Mark DeLoura, Charles River Media, booth #1910. See http://www.charlesriver.com/titles/gamegems2.html for
a table of contents. The first volume primed the pump and got people realizing they could write articles about what they've learned doing computer graphics for games (most of the volume is about
modeling, rendering, and animation techniques), so I'm expecting good things from this second collection of articles.
So, last year's puzzle, from Paul Strauss, was that you're given a square sheet of paper, one side is blue and the other red. You can make folds in the paper, but the rule is that only red can touch
red. Red touching blue or blue touching blue is not allowed. Folds are full 180 degree foldovers, so some piece of paper will always touch another part of the paper. So, what is the maximum number of
folds you can make? The answer: 5. You can first fold the paper in half, at an angle so that the four corners do not align and each pokes out a bit from the overlapping area. Now each corner is free
to fold back on itself; you just need to fold the tips in the tiniest bit and the 5 folds are done.
Rick LaMont was one person with the correct answer, and sent this fine graphic of a solution:
|/ \|
| |
|---__ |
| ---__ |
| ---__ |
| ---|
| |
|\ /|
The Ray Tracing News itself now has two more memorable URLs:
Ingo Wald, Philipp Slusallek, and others have been doing some interesting research in interactive raytracing, see:
If you have not looked there lately, they also have a "State of the Art" report on interactive raytracing on their Publications page.
WinOSi stands for "Optical Simulation Rendering for Windows". It is a freeware open-source system for doing what sounds like photon mapping. It's really just begun, but even now the images in the
gallery are worth a look:
Phil Dutre reports: "The algorithm used appears to be very similar to the light tracing algorithms I published in various papers in the 90's, and is also described in my Ph.D., see
http://www.graphics.cornell.edu/~phil/PUBLICATIONS/publications.html. The main difference is that instead of sending a contribution ray to the screen from each hitpoint along the path of a particle,
hitpoints in WinOSi are stored in a buffer. During a 2nd pass, a viewing ray is shot through each pixel. Each hitpoint stored in the buffer then sends a contribution ray to the screen if it is
located in the neighborhood of the visible point along the viewing ray. So, the viewing rays are used as a selection mechanism to decide which of the hitpoints along the paths of all particles
actually send a contribution ray to the screen."
Errata for Peter Shirley's book "Realistic Ray Tracing" is at:
He pays one beer per new error found.
For some (many!) practical tips on using multithreading for ray tracing, check out the Realtime Raytracing mailing list archives:
There are some questions and answers at Paul Bourke's web site about the free Tachyon ray tracer (http://jedi.ks.uiuc.edu/~johns/raytracer/), as well as a comparison on one scene of Tachyon and POV-Ray.
Tachyon is about 3 times as fast at rendering as POV-Ray for this test scene, and parsed the data about 6.5 times faster - important for large databases. There are also some multiprocessor timing
results, showing a nearly linear speed-up as processors are added.
John ran an interesting set of experiments this spring, showing how ray tracing was not incredibly far from a hardware z-buffer rendering of a sphereflake model. The unoptimized sphereflake ran at
0.57 FPS on the ray tracer (with shadows and reflections) on an Athlon 1200 MHz, and 1.2 FPS on a GeForce2 on the same machine. The reason was that the GeForce2 was transform-bound, as each sphere
was sent down and rendered as 400 polygons in tristrips. Being smarter about the tessellation of the sphere, i.e. using level-of-detail to make the small spheres have many less polygons, got the rate
on the GeForce2 up to 16 FPS, and moving in closer to the sphereflake made Tachyon drop to 0.33 FPS. So at least in this case, after optimization the graphics accelerator was 50 times as fast as the
ray tracer. If you'd like to see the images and more stats, see our Powerpoint presentation "Real-Time Shadows" at http://www.erichaines.com/; it's two slides near the very end.
There are some useful optimization tricks for ray tracers in Java at:
Though some advice applies only to Java (e.g. "avoid array dereferences"), much of the advice applies to C and other languages, too. A key motto to remember, though, with all optimization tricks is
"test, test, test". Some of the tricks here (the loop countdown) may gain nothing when things like Hotspot are used.
Piero Foscari has made a basic FAQ for real-time raytracing (i.e. the demo scene):
A contest for writing a ray tracer using a functional language was held by the International Conference on Functional Programming last year. The rules and results are at:
Phil Dutre's Global Illumination Compendium has expanded considerably in the past year:
You need this resource.
Vlastimil Havran's Ph.D. Thesis, "Heuristic Ray Shooting Algorithms", defended on 20 April 2001 at the Czech Technical University, can be downloaded at:
Email comments to him at VHavran@seznam.cz.
Global illumination research aiming at the photo-realistic image synthesis pushes forward research in computer graphics as a whole. The computation of visually plausible images is time-consuming and
far from being realtime at present. A significant part of computation in global illumination algorithms involves repetitive computing of visibility queries.
In the thesis, we describe our results in ray shooting, which is a well-known problem in the field of visibility. The problem is difficult in spite of its simple definition: For a given oriented
half-line and a set of objects, find out the first object intersected by the half-line if such an object exists. A naive algorithm has the time complexity O(N), where N is the number of objects. The
naive algorithm is practically inapplicable in global illumination applications for a scene with a high number of objects, due to its huge time requirements. In this thesis we deal with heuristic ray
shooting algorithms that use additional spatial data structures. We put stress on average-case complexity and we particularly investigate the ray shooting algorithms based on spatial hierarchies. In
the thesis we deal with two major topics.
In the first part of the thesis, we introduce a ray shooting computation model and performance model. Based on these two models we develop a methodology for comparing various ray shooting algorithms
for a set of experiments performed on a set of scenes. Consecutively, we compare common heuristic ray shooting algorithms based on BSP trees, kd-trees, octrees, bounding volume hierarchies, uniform
grids, and three types of hierarchical grids using a set of 30 scenes from Standard Procedural Database. We show that for this set of scenes the ray shooting algorithm based on the kd-tree is the
winning candidate among all tested ray shooting algorithms.
The second and major part of the thesis presents several techniques for decreasing the time and space complexity for ray shooting algorithms based on kd-tree. We deal with both kd-tree construction
and ray traversal algorithms. In the context of kd-tree construction, we present new methods for adaptive construction of the kd-tree using empty spatial regions in the scene, termination criteria,
general cost model for the kd-tree, and modified surface area heuristics for a restricted set of rays. Further, we describe a new version of the recursive ray traversal algorithm. In context of the
recursive ray traversal algorithm based on the kd-tree, we develop the concept of the largest common traversal sequence. This reduces the number of hierarchical traversal steps in the kd-tree for
certain ray sets. We also describe one technique closely related to computer architecture, namely mapping kd-tree nodes to memory to increase the cache hit ratio for processors with a large cache
line. Most of the techniques proposed in the thesis can be used in combination. In practice, the average time complexity of the ray shooting algorithms based on the kd-tree, as presented in this
thesis, is about O(log N), where the hidden multiplicative factor depends on the input data. However, at present it is not known to have been proved theoretically for scenes with general distribution
of objects. For these reasons our findings are supported by a set of experiments for the above-mentioned set of 30 scenes.
Globillum/Perception Thesis
by Hector Yee, Cornell Program of Computer Graphics
We present a method to accelerate global illumination computation in dynamic environments by taking advantage of limitations of the human visual system. A model of visual attention is used to locate
regions of interest in a scene and to modulate spatiotemporal sensitivity. The method is applied in the form of a spatiotemporal error tolerance map. Perceptual acceleration combined with good
sampling protocols provide a global illumination solution feasible for use in animation. Results indicate an order of magnitude improvement in computational speed. The method is adaptable and can
also be used in image-based rendering, geometry level of detail selection, realistic image synthesis, video telephony and video compression.
[There is also an ACM TOG article derived from the thesis, see http://www1.acm.org/pubs/tog/yee01/index.htm for some images and movies. -EAH]
Eigenvector Radiosity
Ian Ashdown has completed an MS Computer Science thesis at the University of British Columbia. Download it at:
Radiative flux transfer between Lambertian surfaces can be described in terms of linear resistive networks with voltage sources. This thesis examines how these "radiative transfer networks" provide a
physical interpretation for the eigenvalues and eigenvectors of form factor matrices. This leads to a novel approach to photorealistic image synthesis and radiative flux transfer analysis called
eigenvector radiosity.
The "Best Efficiency Scheme" project has moved to:
and now includes 100 VRML models for testing, at http://www.cgg.cvut.cz/BES/scenes/www/.
Interactive ray tracer in Java:
There seems to be something funky with their specular highlights (I think it's probably just the Phong model at too low a specular power), but it's fun to just be able to go to the web site and have
this thing work.
is a simple little Java applet showing how ray tracing works. It uses a 2D view to trace rays through a scene and shows what parts of the algorithm are done at each point.
A handy page:
which has radiosity abstracts and other literature links.
Measured BRDF data for rendering is currently available from two sites:
Viewers for various BRDF theoretical models are available at:
http://pecan.srv.cs.cmu.edu/~ph/src/illum/ (unfinished, though)
Some links to other BRDF information is at:
Paul Heckbert has made a useful BSP-tree summary page at:
Zoltan Karpati's "3D Object Converter" is a shareware program that converts 246+ (!) file formats. The unregistered version does not save out to DXF, 3DS, or LWO formats. The URL has moved to:
(by the way, www.f2s.com has free web hosting without advertising.)
An implementation of the Lafortune BRDF model using RenderMan, and suitable for use in BMRT (http://www.exluna.com/products/bmrt/), is available from Steve Westin at:
Demo Scene
The Platipus demo in "prods" at www.incognita-hq.com has a blend of ray tracing and 3D acceleration in a single scene. angel sastre <icg_reboot@bigfoot.com> comments:
We did some volumetric rendering with raytracing and then "mixed" that with a normal 3d scene (the scene was drawn using 3d acceleration).
Fair enough, it looks a bit blurry but it does give the impression of flying through 3d clouds.
There is a quite impressive RT intro, Fresnel 2 by Kolor. You can get it at
It has two versions, one for the PIII and the other for less powerful machines. You need an OpenGL capable 3D card.
The web site for Dr. David Rogers' book "An Introduction to NURBS" is at:
It includes information about the book, errata, source code, etc.
The Graphics Gems site has gone through a major redesign (as well as getting a more memorable URL), with all code now directly linked to the gems, which have been organized by book, category, and
Tidbits: an early draft of "Point in Polygon Strategies", and efficient code for doing shaft culling, are now available at the site http://www.erichaines.com/.
If you just can't get enough on Pluecker coordinates, i.e. the articles in RTNv10n3 and RTNv11n1 weren't enough for you, try:
It is derived from the RTN articles, but has some other information and may be more approachable.
A derivation of the formula for ray/general quadric intersection, from Paul Kahler:
(it's a Word Doc file)
The site:
has a glossary of the various terms used in the area of lighting and related fields. Reasonably done, though nothing deep. There are certainly a few I've never heard of before (e.g. "etendue").
Where can you find minimal indirect illumination, no atmospheric effects, perfectly reflective surfaces, and hard-edged shadows (depending on the location and size of the sun)? Yes, space, the real
home of classical ray tracing. Check out movies and stills at:
[excerpted from the Realtime Raytracing mailing list at http://groups.yahoo.com/group/realtime_raytracing/. - EAH]
I've done some checking, and ran across RTNv12n2 where there is an article covering many (all?) of the various papers on ray-octree traversal. From reading the descriptions, it seems that the method
I worked so hard to come up with is #18 by Gargantini and Atkinson from 1993. Yet another algorithm I've independently reinvented :-) There was no link to the actual paper or code, so I haven't
compared my implementation to theirs yet. I must say that this algorithm (and others like it) is VERY sensitive to the details of the implementation. Let me explain...
I started with a simple octree using 8 pointers to children at each node, with NULL pointers where there are no children. I use axially aligned boxes with subdivision always happening in the center
(I have good reasons to back up this choice, but that's another story). My first implementation recursed to all 8 children as long as there was no NULL pointer. Each node was responsible for
determining if the ray hit it. The hit test treated the node as the intersection of 3 slabs and checked for the existence of a non-empty span of the ray passing through all 3. I used a table of 8
orders to check the children in front to back order based on the direction the ray came from. The biggest problem with this is that the ray can hit at most 4 of the children, this means potentially
many recursive calls to intersection tests with nodes the ray does not hit. Note that the computation lies completely with the child nodes, the parent only checks pointers for NULL.
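The slab test Paul describes can be sketched as follows (a minimal version of my own, not his code): treat the node as the intersection of three axis-aligned slabs, and check that the ray's parametric spans through the slabs share a common interval.

def ray_hits_box(o, d, lo, hi, t_max=float('inf')):
    # o: ray origin, d: ray direction, lo/hi: the box's corners, all 3-tuples.
    t_near, t_far = 0.0, t_max
    for axis in range(3):
        if d[axis] == 0.0:
            # Ray parallel to this slab: it must already lie between the planes.
            if o[axis] < lo[axis] or o[axis] > hi[axis]:
                return False
        else:
            t0 = (lo[axis] - o[axis]) / d[axis]
            t1 = (hi[axis] - o[axis]) / d[axis]
            if t0 > t1:
                t0, t1 = t1, t0
            t_near = max(t_near, t0)
            t_far = min(t_far, t1)
            if t_near > t_far:
                return False   # the three spans have no common interval
    return True

print(ray_hits_box((0, 0, -5), (0, 0, 1), (-1, -1, -1), (1, 1, 1)))  # True
print(ray_hits_box((2, 0, -5), (0, 0, 1), (-1, -1, -1), (1, 1, 1)))  # False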
I realized that a little work in the parent node can reduce the number of children checked to 4 (at most) and that helped a LOT. I ultimately reached the algorithm described in the abstract I found.
I'm still left with some open questions. I can still put more work into the parent nodes, but it will be done even for children with NULL pointers - this will slow things down in some cases and speed
it up in others. I can also do more work related to sorting things, however there are only 5 numbers whose order matters and more complex code may hurt things more than it helps - this one is not
dependant on the existence of children, so I should try it. I still need to find the authors' code to compare - since this algorithm is so sensitive to what optimization is done, it's conceivable
that I can beat them. A modest 2x performance increase would put it in direct competition with those grids everyone keeps talking about. Don't laugh, I've already seen an increase of more than 4x
over my first implementation while keeping the same time complexity. Also, there are some things that would be best handled by combining sign-bits in floating point numbers as opposed to using
conditional code. This could reduce branch prediction errors enormously, but Pentium type processors really aren't designed with that in mind. BTW, I expect Athlon to smoke P4 running this code due
to the branching complexity and the 20-stage P4 pipes that will keep getting flushed - if the branches could be predicted, we'd have an even better algorithm :-)
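One way to get that table of 8 front-to-back orders almost for free is to derive it from the sign bits of the ray direction. This is a sketch of the general idea, not Paul's implementation:

def front_to_back_order(dx, dy, dz):
    # Children are indexed by three bits (x, y, z); a set bit means the "upper"
    # half along that axis. Flipping the natural order 0..7 with a mask built
    # from the direction signs gives a valid front-to-back order: any two
    # children the ray actually passes through are visited in entry order.
    mask = (1 if dx < 0 else 0) | (2 if dy < 0 else 0) | (4 if dz < 0 else 0)
    return [i ^ mask for i in range(8)]

print(front_to_back_order(+1, +1, +1))   # [0, 1, 2, 3, 4, 5, 6, 7]
print(front_to_back_order(-1, +1, -1))   # [5, 4, 7, 6, 1, 0, 3, 2]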
Anyway, there's a demo of it at:
[There is also an interesting article on doing what is essentially adaptive supersampling of a low resolution version of the image (i.e. render every eighth pixel to start) at Paul's site at
http://www.oakland.edu/~phkahler/rtrt/dre.html - EAH]
Select demo 4. It does a big fractal octree thing and lets you subdivide up to 8 levels deep. Most rays from the camera hit the tree root, and with 2 light sources there's a lot more work to do also.
Since the octree is fixed in space, I attached the camera AND light sources as children of the same transformation node in the scene graph, this make it look like the octree is spinning :-) BTW, I
can turn off leaf visibility and place objects in the octree, but that starts making speed dependent on other things than tree traversal which is what I've been trying to optimize.
[I asked Paul for some timing numbers, and he tried some tests. -EAH]
My terrain dataset is 511x511 squares each divided into 2 triangles. The vertices are offset vertically using 2 frequencies of Perlin noise. I have one light source up in the sky, and turning off
shadow rays only added 15% to the frame rate because those rays trace very efficiently.
I ran some more controlled tests. 320x240 images with a ray every 8th pixel for a total of 9600 eye rays per image. The camera was aimed 0.5 radians down from straight horizontal (it thrashes if you
get the horizon in view with too many polys). This means every ray hit the ground and had 1 shadow ray cast. The camera was high enough to see several hundred (perhaps a couple thousand?) triangles
at a time. My DRE was turned off so it was 19200 rays per frame.
So... with tracing a terrain grid of size 2047x2047x2 = 8,380,418 triangles I got 4.0 FPS with some fluctuation up to 4.1 & 4.2 - call it 4.0 for a total of 76800 rays per sec. The root of the octree
was 11 levels deep (including root and leaf). It took several minutes to build the terrain, and then another minute for the thrashing to stop after it started rendering. Last time I did this it
allocated about 1GB of memory.
Next with 511x511x2 = 522,242 triangles in a 10 level octree I got... the same result 4.0 FPS - not fluctuating up as often. It only takes a few seconds to build the scene and the speed tops out in a
few more. Tree depth is really the determining factor, not polygon count.
Then I pointed the camera up higher so about 75% of the image was ground and the rest was sky (default blue color). This gave about the same speed as the previous test. While there are fewer rays, they
have to do more work - passing over the ground at lower levels of the tree. The horizon looked really bad due to the pixel interpolation. I turned on DRE and it cleared up but dropped to 3.2 - 3.4 FPS.
Summary: I'm at about 76K-80K rays/second in a 10 level octree with 50% being shadow rays. If I point the camera straight down it speeds up to the 130K rays per second due to fewer intersection
tests. All done on an Athlon 700 with 128M RAM.
Paul gave an update a few months later:
I am now getting 140K rays per second all the time (perhaps a dip to 130K) and going up to 215K per second with the camera pointed down to avoid those over the horizon rays.
[Paul got this speed boost from code tweaks like turning C++ to C, inlining, and in translating inner loops into assembler. - EAH]
Vlastimil Havran <havran@fel.cvut.cz> comments:
I agree with the sensitivity to the implementation issues, I improved the efficiency of the code for octree about three times by tuning and trying different versions of the code etc. It seems that
actual implementation is still very important.
The determining of which child nodes have to be traversed is the real potential source of the inefficiency. What I would recommend is to get the paper from J. Revelles et al, "An Efficient Parametric
Algorithm for Octree Traversal", and implement it, since the optimization idea in this paper is just what you are searching for and is very promising, I think.
Unfortunately, I have not yet had time to reimplement it in my own source code. The paper is available at: http://giig.ugr.es/~curena/papers/, and also includes some pseudocode of the traversal algorithm.
Ingo Wald <Ingo.Wald@gmx.de> comments:
Two numbers from our interactive ray tracer [see http://graphics.cs.uni-sb.de/rtrt/] may be of particular interest for you:
• about 90,000 rays per second for a 4-million triangle scene in which almost all triangles are visible.
• about 150,000 to 600,000 primary rays per second (depending on view settings)
for a 1.5 million triangle scene with lots of occlusion.
(Both scenes are primary rays only, flat shading)
All that on a 800 MHz PIII.
[Part of the difference in speeds is because Ingo et al are using SIMD (SSE) instructions in a clever way: they intersect four rays at once against a single triangle. Like Paul, they also pay a lot
of attention to and optimize for cache lines and other architectural elements of their processor. -EAH]
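To give a flavor of what intersecting four rays at once against a single triangle means, here is a sketch (mine, using NumPy arrays as a stand-in for SSE registers, and not the Saarbrücken code): a Möller-Trumbore test vectorized over a small packet of rays, so one set of arithmetic operations serves the whole packet.

import numpy as np

def packet_hit_triangle(orig, dirs, v0, v1, v2, eps=1e-9):
    # orig, dirs: (n, 3) arrays of ray origins and directions;
    # v0, v1, v2: the triangle's vertices as length-3 arrays.
    e1, e2 = v1 - v0, v2 - v0
    pvec = np.cross(dirs, e2)
    det = (pvec * e1).sum(axis=1)
    inv_det = 1.0 / np.where(np.abs(det) > eps, det, 1.0)   # avoid divide-by-zero
    tvec = orig - v0
    u = (tvec * pvec).sum(axis=1) * inv_det
    qvec = np.cross(tvec, e1)
    v = (dirs * qvec).sum(axis=1) * inv_det
    t = (qvec * e2).sum(axis=1) * inv_det
    hit = (np.abs(det) > eps) & (u >= 0) & (v >= 0) & (u + v <= 1) & (t > eps)
    return hit, t

orig = np.zeros((4, 3))
dirs = np.array([[0, 0, 1], [0.1, 0.1, 1], [1, 0, 0], [0, 0, -1]], dtype=float)
v0 = np.array([-1.0, -1.0, 5.0])
v1 = np.array([ 3.0, -1.0, 5.0])
v2 = np.array([-1.0,  3.0, 5.0])
print(packet_hit_triangle(orig, dirs, v0, v1, v2)[0])   # [ True  True False False]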
John Stone <johns@ks.uiuc.edu> notes (in a separate email; worth mentioning here):
There's a heck of a lot that can be done to fit a ray tracer to an architecture. Make sure to have tight loops that avoid branches, precompute stuff, have the cache-lines get filled nicely by making
data structures fit them, ways to fit like data together to maximize coherency (I've seen some very funky code for culling for z-buffers by Intel, for example, that has a few copies of a mesh's data
in different places to get better locality).
Nick Chirkov <nickchir@yahoo.com> writes:
My ray tracer works on any polygonal models, builds BSP, and traces 40,000 uniformly distributed rays/sec (Celeron433) on the well-known Attic (~52,000 triangles) scene with ratio of correct hits
99.96%. All code is C++, SSE is not used. I can get a further speed up by using Intel's C++ compiler and by rewriting something like if(*(DWORD*)&float_value<0) to if (float_value<0.0f). I can't use occlusion
techniques because of shadow generation [I'm not sure what this means. -EAH].
Uniformly distributed rays are slower to trace than primary rays because of CPU cache misses. In real image generation it was ~60,000 rays/sec on the Attic scene with 3 light sources with shadows.
I tested a scene with ~480,000 triangles and one light source. It was an old ship and all triangles were within the view frustum. I got ~30,000 rays/sec.
[See Nick's page at http://www.geocities.com/SiliconValley/Bay/5604/raytrace.htm]
In a dynamic animation environment, one problem for ray tracers to solve is updating the spatial ray tracing structure as quickly as possible. This is especially true if multiprocessor machines are
being used. Reinhard, Smits, and Hansen give one solution in their work, "Dynamic Acceleration Structures for Interactive Ray Tracing," http://www.cs.utah.edu/~reinhard/egwr/. In this paper they
insert objects into a grid. As objects move outside the grid, they do a modulo operation for the object's location. So, say an object is in the rightmost grid cell, #9, in a 10x10x10 grid. It moves
outside the grid - instead of recreating the grid, the object is moved to cell #0, but is noted as being in a "copy" of the grid at X location 1. Another way to express it is that each grid
coordinate has a number 0-9, but think of the world as being filled with these grids, in a repeating grid. This meta-grid is accessed with the higher numerals of the object's location number.
Everything starts in grid #0. So 10 would mean meta-grid #1 along the axis, 20 is meta-grid #2, etc. Now when a ray travels through a grid cell containing something outside the normal grid, the
object's real location is also checked. If the meta-grid location does not match the ray's meta-grid location, the object is not actually there, so it's not tested. As time goes on and more objects
move outside the grid, this scheme becomes less efficient as more objects have to be tested but can never be hit. See the paper for how the authors decide to regenerate the grid when it becomes too inefficient.
What's clever about their scheme is that when an object moves, it is quick to remove it from the grid and reinsert it. The grid does not have to be regenerated. This scheme can also work with a
hierarchy of grids (i.e. nested grids). The authors note that normal octrees suffer from non-constant insertion and deletion times, as the tree has to be traversed and an object may get put into two
or more nodes.
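A sketch of the wrapped-grid bookkeeping just described (my paraphrase of the idea, not the authors' code); only one axis is shown, and the same mapping is applied to x, y, and z:

def logical_cell(position, cell_size, resolution):
    i = int(position // cell_size)   # unbounded logical cell index along this axis
    cell = i % resolution            # cell actually used for storage
    meta = i // resolution           # which "copy" of the grid the object really occupies
    return cell, meta

# During traversal, an object stored in (cell, meta) is tested only if the ray
# is currently walking through the matching meta-grid copy.
print(logical_cell( 9.5, 1.0, 10))   # (9, 0): inside the base grid
print(logical_cell(10.5, 1.0, 10))   # (0, 1): wrapped back into cell 0 of copy 1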
Thatcher Ulrich's "loose octree" spatial partitioning scheme has some interesting features. Meant for collision detection, it may also have application to real-time ray tracing. The basic idea is
that you make each octree node actually enclose twice the space, in each direction, as its location in the octree. That is, normally an octree node does not overlap its neighbors - space is precisely
partitioned. In Ulrich's scheme, the octree node box is extended by 1/2 in the six directions of its face. Anything listed in this node is inside this extended box.
This makes for a less-efficient partitioning of space, but has a great advantage for dynamic objects. As an object moves or otherwise changes, it can be removed and inserted in the tree "instantly".
Say you want to insert spheres into this structure. The radius of the sphere determines exactly what level of the octree you need to insert it at. For example, if the extended octree node at some
level is, say, 12x12x12 units, then a sphere with a radius of 3 or less must fit inside this extended node if the center of the sphere is inside the unextended octree node. If the radius is 1.5 or
less, it can be inserted in the next octree node down (6x6x6) or further, as it must fit there. So just knowing the sphere's center and radius fully determines which octree node to insert it into,
without searching or bounds testing (other than walking down the tree to the node itself).
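As a sketch of that insertion rule (my own Python, assuming a cubical root node of width root_size whose unextended nodes at level L have width root_size / 2**L):

import math

def loose_level(radius, root_size, max_depth):
    # deepest level whose unextended node width W still satisfies W/2 >= radius,
    # so the sphere is guaranteed to fit inside that node's 2x-extended box
    if radius <= 0.0:
        return max_depth
    level = int(math.floor(math.log2(root_size / (2.0 * radius))))
    return max(0, min(level, max_depth))

def loose_insert_cell(center, radius, root_min, root_size, max_depth):
    level = loose_level(radius, root_size, max_depth)
    width = root_size / (1 << level)
    return level, tuple(int((c - m) // width) for c, m in zip(center, root_min))

# radius 3 in a 48-unit root lands at level 3 (6x6x6 nodes, 12x12x12 extended), matching the text's example
print(loose_insert_cell((20.0, 5.0, 31.0), 3.0, (0.0, 0.0, 0.0), 48.0, 8))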
Similarly, deletion from the octree is quick: each object exists in one and only one octree node, which can be found immediately and so deleted from. It might even be faster to hash the octree nodes
by their level and address (as Glassner did in his original scheme) to more quickly delete from them.
This gives at least some hope that octrees could be used in a dynamic ray tracing system. Another nice feature of octrees is that if an object moves outside the bounding box of the entire octree,
this octree can become a sub-node of a larger octree made on the fly that also encloses the space the object moved to. I have to admit that the loose octree structure seems pretty inefficient to
trace rays against, but really I am just brainstorming here, presenting a possible use for a clever, new data structure with some useful properties. I can imagine combining loose octrees with other
schemes, e.g. also creating bounding boxes within each populated node lazily, as needed, to further speed intersection testing.
See more about the loose octree idea at http://www.tulrich.com/geekstuff/partitioning.html, and read more about it in the book "Game Programming Gems." There are some optimizations I do not mention
here, like being able to sometimes push some objects one level deeper into the octree.
I have various new info on the compiler fronts.
For x86 machines, I've run a bunch of benchmarks on Intel P4 and AMD Athlon machines, and have had interesting results. Along with this, I've tested gcc, the Portland Group compilers, and the beta
Intel compilers for Linux.
On the hardware front, for the testing I've done with Tachyon (John's ray tracer, source at http://jedi.ks.uiuc.edu/~johns/raytracer/), the Athlon machines have consistently annihilated the P4
machines with which I've compared them. My 1.2 GHz Athlon at home is still posting a SPD balls time of 2.6 seconds, versus about 3.0 for a 1.7 GHz P4 we have at work. I tested on a 1.4 GHz Athlon previously and got 2.2 seconds!
From the testing I did recently, the PG compilers were good, but the most recent versions of gcc are now at about the same level of performance. (I haven't tested recently with all combinations on the same box, but I believe my results indicate that if the PG compilers are still better than gcc, then the margin has narrowed to about 3% or so in performance.)
In all cases, the new beta Intel compiler has proven to produce code that runs slower than what I've been getting out of gcc, though since it is a beta compiler, I can't complain too much yet. This
was true when testing the produced binaries on both an Athlon and on a P4, with Tachyon. We have seen some performance benefits running some other codes, on the older generation Pentiums, but not yet
on the P4. I suspect that Intel just needs a few more months of work before their new linux compiler is as competitive as we'd expect. I think the main advantages it has right now are much more
sophisticated vectorizing features which benefit linear algebra oriented codes, but don't help ray tracers very much. (Tachyon has very few vectorizable loops presently, unfortunately...)
Though the timings I have below span over a couple of months, none of the core code in Tachyon has changed for quite a while.
I noticed a line in Peter Shirley's book "Realistic Ray Tracing"
Typically, a large mesh has each vertex being stored by about six
triangles, although there can be any number for extreme cases.
This is true enough, but it goes stronger and deeper than that, and it is something I only recently realized (well, in 1998), so I'm passing it on here.
It turns out that the Euler Characteristic precludes there ever being more or less than about six triangles per vertex for large meshes consisting entirely of triangles, surprisingly enough. The
formula is:
V + F - E = 2
(V=vertices, F=faces, E=edges) for a closed mesh (i.e. a solid object) without any holes (i.e. topologically equivalent to a sphere; though a donut merely changes the formula to V + F - E = 0; for an
open mesh without holes it's V + F - E = 1). The formula holds in general, but we can constrain it when dealing with connected triangles in a closed mesh.
If every entity in the mesh is a triangle, we know that the number of edges is going to be exactly 3/2 * the number of faces, since each face has three edges and each edge in a closed mesh is shared
by two faces. Substitute:
V + F - 3/2*F = 2
V ~= 1/2 * F
(I'm dropping the "2" constant, since it's relatively unimportant for large meshes). We also know that each triangle has 3 vertices, so for the total number of vertices to be 1/2 the number of faces,
each vertex *must* be shared by an average of 6 triangles. I love this result, as it applies to any mesh. You can draw the mesh so that one vertex is shared by many triangles, but then the rest of
the vertices are shared by proportionally less, balancing out to 6 - there's no way around it.
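A quick way to convince yourself of this (a throwaway sketch, not from the newsletter): compute V, E, F and the average valence for any closed all-triangle mesh and watch the average head toward 6 as the mesh grows.

def mesh_stats(faces):
    # faces: (i, j, k) vertex-index triples of a closed, all-triangle mesh
    verts = {v for f in faces for v in f}
    edges = {frozenset((f[a], f[(a + 1) % 3])) for f in faces for a in range(3)}
    V, E, F = len(verts), len(edges), len(faces)
    return V, E, F, V + F - E, 3.0 * F / V   # Euler characteristic and average triangles per vertex

# octahedron: V=6, F=8, E=12, characteristic 2; the average is only 4 here because the "2" still matters for tiny meshes
octahedron = [(0,2,4),(2,1,4),(1,3,4),(3,0,4),(2,0,5),(1,2,5),(3,1,5),(0,3,5)]
print(mesh_stats(octahedron))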
If the mesh is open, then the ratio will change, with the average number of triangles going down. Topologically, with an open mesh (one not forming a solid object but having some edges touching some
single triangle) you can think of the mesh as solid, but including a polygon with many edges along the outside of the mesh. This many-sided polygon changes the ratio.
[I wrote this note to Pete, and he had just learned the same fact the day before receiving my note! I figure if neither of us knew this until recently, then it was worth presenting here.]
Schmalstieg and Tobler had a clever article two years ago:
Dieter Schmalstieg and Robert F. Tobler, "Fast projected area computation for three-dimensional bounding boxes," journal of graphics tools, 4(2):37-43, 1999. http://jgt.akpeters.com/papers/
The idea is that a bounding box can be thought of as six planes that divide space into 27 parts (nothing new so far, it's used in orthographic view volume clipping). Take the location of the eye and
see which of the 27 parts it's in. For each of the 27 positions there is a set of faces visible from that position (1, 2, or 3; essentially which faces face that direction). Go a step further,
though: in a table for the 27 entries you can do better than storing the faces that are visible. Instead, you store the list of box vertices, 4 or 6, that make up the silhouette outline of the box
from that position. This silhouette list does not change within the volume of space (of the 27) that you're in. You can use this list of vertices to project onto the screen and directly compute the
screen area of the convex polygon formed.
I was wondering if this algorithm might be useful for ray/box intersection. I have not tried it out, but the idea's to use these silhouettes for 2D point in polygon testing, and also find the
distance along the ray by using the two box corners that bound the ray's direction. There is only one closest (and one farthest) box vertex for a given ray direction, and the index of this vertex can
be determined once for any given ray direction. This is an idea from shaft culling (http://www.erichaines.com), where the closest box vertex to a plane is entirely determined by the normal of the
plane (and a ray's values do define a plane, since they both have a normal and origin).
To begin, the closest and farthest vertex can be cast upon the ray's direction to find out how far these vertices are along the ray. This is simple: the vector from the ray's origin to the vertex
dotted with the ray direction gives the distance to the vertex. If the ray's extent (and rays often are actually line segments, having a maximum distance to them, but "line segment tracing" doesn't
sound as good as "ray tracing") overlaps the box's closest and farthest distances, then the ray is likely to hit the box. This is not a perfect test, but is conservative: the test will sometimes say
there's an overlap when there's not, but will never miss a box that should be hit.
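Here is roughly what I mean by that distance test, written out as a sketch (my code, not from the paper; it assumes a normalized ray direction and an axis-aligned box):

def conservative_ray_box(origin, direction, t_max, box_min, box_max):
    # per axis, the sign of the direction picks the nearest and farthest box corner
    near = [box_min[i] if direction[i] >= 0.0 else box_max[i] for i in range(3)]
    far  = [box_max[i] if direction[i] >= 0.0 else box_min[i] for i in range(3)]
    # cast both corners onto the ray: (corner - origin) . direction
    t_near = sum((near[i] - origin[i]) * direction[i] for i in range(3))
    t_far  = sum((far[i]  - origin[i]) * direction[i] for i in range(3))
    # conservative: may report overlap when there is none, but never misses a box that should be hit
    return t_far >= 0.0 and t_near <= t_max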
If the ray survives this test, say the ray starts in a corner region. You now have a polygon outline. Instead of computing the ray's intersection with 3 slabs or 3 faces (the traditional ray/box
intersection methods), form a plane at the closest corner that is perpendicular to the ray's direction. In fact, you have all you need of this plane already, as the ray's normal is the plane's normal
and the plane's distance is actually irrelevant. The idea is to use the ray's direction to cast the ray's origin and the silhouette vertices onto a 2D plane. If the ray's 2D position is inside the
vertex list's 2D polygon, the ray is definitely inside the box. It doesn't really matter what the frame of reference is for the 2D plane itself, anything will do.
That's about it, and I hope it makes sense. To recap, you first look at the problem as how far along the ray is the box. If you survive this test, you see whether, looking along the ray's direction,
the ray's origin is inside the convex polygon formed by the silhouette vertices. I have my doubts as to whether this test is any faster than traditional tests (the operation of actually casting onto
a 2D plane looks to be slow), but I thought it was interesting that there was another way to skin this cat. This algorithm does have the interesting feature that it appears to avoid using division at
all, it is mostly just dot products to cast vertices onto lines and planes.
(Followup article in RTNv15n1.)
Eric Haines / erich@acm.org
|
{"url":"http://tog.acm.org/resources/RTNews/html/rtnv14n1.html","timestamp":"2014-04-17T15:31:56Z","content_type":null,"content_length":"52553","record_id":"<urn:uuid:afb656b6-6864-457e-83b1-64c5bd37515b>","cc-path":"CC-MAIN-2014-15/segments/1397609530136.5/warc/CC-MAIN-20140416005210-00644-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Integration Problems in Calculus: Solutions, Examples & Quiz | Education Portal
Integration Problems in Calculus: Solutions, Examples & Quiz
In this lesson, learn about the different types of integration problems you will encounter. You will see how to solve each type. Also, learn about the rules of integration that will help you.
Integration Problems
First, let me say that integrating various types of functions is not difficult. All you need to know are the rules that apply and how different functions integrate.
You know the problem is an integration problem when you see the following symbol.
Remember too that your integration answer will always have a constant of integration which means that you are going to add '+ C' for all your answers. The various types of functions you will most
commonly see are monomials, reciprocals, exponentials, and trigonometric functions. Certain rules like the constant rule and the power rule will also help you. Let's start with monomials.
Monomials are functions that have only one term. Some monomials are just constants while others also involve variables. None of the variables have powers that are fractions; all the powers are whole
integers. For example, f(x) = 6 is a constant monomial while f(x) = x is a monomial with a variable.
When you see a constant monomial as your function, the answer when you integrate is our constant multiplied by the variable, plus our constant of integration. For example, if our function is f(x) =
6, then our answer will be the following.
Integrating a constant monomial.
We can write this in formula form as the following.
The formula for integrating a constant monomial.
If our function is a monomial with variables like f(x) = x, then we will need the aid of the power rule which tells us the following.
The power rule tells us that if our function is a monomial involving variables, then our answer will be the variable raised to the current power plus one, divided by our current power plus 1, plus
our constant of integration. This is only if our current power is not -1. For example, if our function is f(x) = x where our current power is 1, then our answer will be this.
Integrating the function f(x) = x using the power rule.
Recall that if you don't see a power, it is always 1 because anything raised to the first power is itself. Let's try another example. If our function is f(x) = x^2, then our answer will be the following.
Integrating the function f(x) = x^2 using the power rule.
Whatever our current power is our answer will be the variable raised to the next power divided by the next power. In the above example, our current power is 2, so our next power is 3. In our answer,
we have a 3 for the variable's power and for the denominator following the power rule.
If our monomial is a combination of a constant and a variable, we have the constant rule to help us. The constant rule looks like this.
This rule tells us to move the constant out of the integral and then to integrate the rest of the function. For example, if our function is f(x) = 6x, then our integral and answer will be the following.
Integrating the function f(x) = 6x using both the constant rule and the power rule.
We've moved the 6 outside of the integral according to the constant rule and then we integrated the x by itself using the power rule. For the answer, we simplified the 6x^2/2 to 3x^2 since 6 divides
evenly by 2.
Reciprocals and Exponentials
Another type of function we will deal with is the reciprocal. The integral of the reciprocal follows this formula.
The formula for integrating the reciprocal function.
The formula is telling us that when we integrate the reciprocal, the answer is the natural log of the absolute value of our variable plus our constant of integration.
Exponential functions include the e^x function as well as the ln (x) function and these types of functions follow these formulas for integration.
Formulas for integrating exponential functions.
The first formula tells us that when we have a function e^x, our answer for the integral will be e^x + C. The a in the middle integral formula stands for a constant. The middle formula tells us that
when we have, for example, a function like 3^x, then our answer after integrating will be 3^x/ln(3) + C. The last formula tells us that the integral of the natural log of x function is x times (ln(x)
-1) plus our constant of integration.
Trigonometric Functions
Our trigonometric functions include cosine, sine, and secant functions. They follow these formulas.
Formulas for integrating trigonometric functions.
If you are integrating the cosine function, you will end up with the sine function plus the constant of integration. Integrating the sine function gives you the negative cosine function plus our
constant of integration. If you see the secant function squared, your answer will be the tangent function plus our constant of integration.
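The formula images referred to throughout this lesson are not reproduced here; in standard notation, the rules described in the text are:

\int a\,dx = ax + C
\int x^n\,dx = \frac{x^{n+1}}{n+1} + C \quad (n \neq -1)
\int \frac{1}{x}\,dx = \ln|x| + C
\int e^x\,dx = e^x + C, \qquad \int a^x\,dx = \frac{a^x}{\ln a} + C, \qquad \int \ln x\,dx = x(\ln x - 1) + C
\int \cos x\,dx = \sin x + C, \qquad \int \sin x\,dx = -\cos x + C, \qquad \int \sec^2 x\,dx = \tan x + C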
Integrating different functions involves referring to the formulas for each type of function along with applying the constant or power rule when necessary. Always remember your constant of
integration when integrating.
|
{"url":"http://education-portal.com/academy/lesson/integration-problems-in-calculus-solutions-examples-quiz.html","timestamp":"2014-04-20T13:58:45Z","content_type":null,"content_length":"69356","record_id":"<urn:uuid:8ce1d180-b6fd-4cc9-84e6-252373e303c9>","cc-path":"CC-MAIN-2014-15/segments/1397609538787.31/warc/CC-MAIN-20140416005218-00036-ip-10-147-4-33.ec2.internal.warc.gz"}
|
A Neighborhood of Infinity
Just thinking about the blog entry about
over at
ars mathematica
One of the things that is underappreciated about quantum systems is how big they are. Consider a classical system made of two subsystems A and B. Then the space of configurations of the union of the
systems is AxB. Typically this is a finite dimensional manifold. To describe the state of the combined system we just need to describe the A subsystem and then the B subsystem. In this sense
classical mechanics is additive.
In quantum mechanics we look at much larger spaces. Typically we're looking at wavefunctions, complex valued functions on the classical configuration space. When we combine two systems the state
space is the tensor product of the state spaces of the individual systems. So in a sense combining quantum mechanical systems is multiplicative. When you start combining quantum systems they get very big very quickly.
Anyway, to get an idea of what I'm talking about see my comment. With what seems like a really simple system we're way beyond what is reasonable to simulate on a computer. It's not surprising then
that it's hard to predict the behaviour of water. On the contrary, we should be happy that we can make any kinds of predictions at all about the bulk properties of matter.
Now, many years ago, I used to work in a computational chemistry group. People were simulating molecules all the time. Not just little ones - big ones with hundreds or even thousands of atoms. They
were trying to understand biology at a molecular level in order to design drugs. But given that water is so hard to understand this seems like an insurmountable task. They either used simplified
quantum models (eg. single electron models) or empirical classical models based on masses and springs. Typically what happened was post hoc fitting of data. The empirical models had many parameters
that could be tweaked. The user would run sims with many different parameters, eventually see the real data (eg. from X-ray diffraction techniques or NMR) and then choose the model that best fit the
experimental results claiming it explained the observed behaviour. It had next to zero predictive value. Occasionally my colleagues would have moments of lucidity and realise that they might as well
predict molecular behaviour using yarrow stalks and the I Ching - and then a short while later they'd go back to work tweaking the parameters.
And you thought you were helping to cure cancer when you donated your cycles to folding@home!
Incidentally, the largeness of quantum systems is why quantum computers are potentially so powerful. The states of a 5 qubit computer form a 2^5 = 32-dimensional vector space. The states of a 10 qubit system form a 2^10 = 1024-dimensional vector space. The latter is vastly larger than the former and has much more complex time evolution. The potential increase in power is much more than what you get from upgrading a 5 bit classical
computer to a 10 bit one. (When I say 10 bits, I don't mean a computer with a 10 bit address bus, I mean 10 bits total memory.) But, alas, I suspect, for the same reason, that an (N+1)-qubit computer
is much harder to build than an N-qubit computer, and so the cost of making a quantum computer of a given power will be much the same as the cost of making a classical computer of equivalent power.
(I'll make an exception for specialised types of problem that you can't even begin to tackle classically such as certain types of encryption.)
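A two-line illustration of that growth (my sketch): describing an n-qubit state takes 2^n complex amplitudes, against n bits for the classical register.

for n in (5, 10, 20, 30):
    print(n, "classical bits vs", 2**n, "quantum amplitudes")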
No comments:
|
{"url":"http://blog.sigfpe.com/2005/07/quantum-systems-are-big.html","timestamp":"2014-04-18T20:44:45Z","content_type":null,"content_length":"59504","record_id":"<urn:uuid:128e5180-8bf6-4e2b-bb28-7ea8ed659aee>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.9/warc/CC-MAIN-20140416005215-00072-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Having trouble understanding this.
Do you just not work with its tangential velocity (magnitude) at all? It doesn't show up in [tex]a_c=\frac{v^2}{r}[/tex], which expanded is [tex]a_c=\frac{(\frac{2{\pi}r}{T})^2}{r}[/tex]. And if all
this is right, then isn't the acceleration always constant? Someone please clarify. I just don't understand what acceleration is when it comes to circular motion.
For circular motion the acceleration can be viewed as having two components: tangential and radial (radial = centripetal). If the speed is constant, the tangential acceleration is zero. The radial component is just what you call the centripetal acceleration: it's the (rate of) change in the velocity towards the center. If all that changes is the direction of the velocity, then all you have is centripetal acceleration. (But note that centripetal acceleration does depend on the speed; it is not just a rate of change of angle, it's a rate of change of velocity.)
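A quick numerical illustration of the formulas quoted in the question, with made-up values for r and T:

import math

r, T = 2.0, 4.0                               # hypothetical radius (m) and period (s)
v = 2.0 * math.pi * r / T                     # tangential speed
a1 = v**2 / r                                 # a_c = v^2 / r
a2 = (2.0 * math.pi * r / T)**2 / r           # the same thing with v expanded
print(v, a1, a2)                              # both centripetal values agree, about 4.93 m/s^2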
|
{"url":"http://www.physicsforums.com/showthread.php?t=66450","timestamp":"2014-04-18T03:13:15Z","content_type":null,"content_length":"43261","record_id":"<urn:uuid:d51b8727-af89-405f-825e-7137f4bf4f99>","cc-path":"CC-MAIN-2014-15/segments/1397609532480.36/warc/CC-MAIN-20140416005212-00339-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Fullerton, CA Prealgebra Tutor
Find a Fullerton, CA Prealgebra Tutor
...I am aware of Common Core standards and can assist students in a variety of ways. I have two kids of my own, so timely cancellation is always appreciated. If I can not help, I will not waste
your time.
7 Subjects: including prealgebra, chemistry, calculus, geometry
...I am a Long Beach Poly PACE alumna and recent graduate from UCI (double majored in cognitive science and philosophy) willing to help out students K-12 with their math skills, especially those
struggling in Prealgebra, Algebra, Geometry, and Algebra 2/Trig. Math has always been a strong point of ...
22 Subjects: including prealgebra, reading, English, Spanish
...I work with Revit and Autocad daily and have used these programs for some years now. There are many approaches to your design, so half the battle is learning to maneuver within the program. My instruction level ranges from beginning to advanced. If you're a student that will be going into the design work force, especially architecture, get ahead now by learning Revit.
12 Subjects: including prealgebra, reading, algebra 1, grammar
Hello, my name is Michelle and I am a former teacher who has been tutoring students of all ages for the past 2 years. I have several degrees including a PhD in Plant Pathology, a certificate in
Biotechnology, a multiple subject teaching credential, and a bachelors in Microbiology. I am an excellent teacher in the subjects of math, science, and study skills.
37 Subjects: including prealgebra, reading, English, chemistry
...In addition, I was a teaching assistant for undergraduate and graduate students in the Biomedical Engineering and Kinesiology departments. It is my goal to not only teach my students the
material, but to give them the tools needed to succeed in all their classes. With the right tools and encour...
30 Subjects: including prealgebra, chemistry, calculus, physics
|
{"url":"http://www.purplemath.com/Fullerton_CA_prealgebra_tutors.php","timestamp":"2014-04-20T23:47:26Z","content_type":null,"content_length":"24285","record_id":"<urn:uuid:7e67ecf8-9fc2-4871-bdee-f8b38f185419>","cc-path":"CC-MAIN-2014-15/segments/1397609539337.22/warc/CC-MAIN-20140416005219-00192-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Munster Geometry Tutor
Over the past four years I have worked as an elementary art teacher. I have had the opportunity to instruct a diverse group of elementary students. I have developed my own curriculum for
pre-school through eighth grade.
16 Subjects: including geometry, reading, statistics, algebra 1
...My math experience includes adding, subtraction, multiplication, division and pre-algebra. My social studies experience include local government to the seven continents. My English experience
includes sentence structure and the parts of speech.
10 Subjects: including geometry, reading, GRE, algebra 2
...And finally, I've also worked as a private tutor, which has given me a terrific sense for how wonderfully productive working one-on-one can be. I seem to gravitate toward teaching in one way or
another, simply because I have a genuine love of helping others learn and grow, and I think you can ea...
37 Subjects: including geometry, reading, writing, English
...FOR HIGH SCHOOL STUDENTS: I offer comprehensive instruction in both Quantitative and Verbal Reasoning in preparation of standardized testing and also regularly assist students with day-to-day
academic coursework in all levels of mathematics as well as AP economics. Additionally, I specialize in...
38 Subjects: including geometry, reading, Spanish, statistics
I have over 20 years working with young people. I hold an Indiana Certified teacher's license in language arts and social studies. I have always been a well-rounded student and feel comfortable
helping in almost any primary or junior high subject.
27 Subjects: including geometry, English, reading, grammar
|
{"url":"http://www.purplemath.com/Munster_Geometry_tutors.php","timestamp":"2014-04-18T19:15:59Z","content_type":null,"content_length":"23693","record_id":"<urn:uuid:e833f428-ba5f-4919-ae6d-1cc3da793dfb>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.7/warc/CC-MAIN-20140416005215-00449-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Basics: Work Energy
**Pre Reqs:** [What is a Force](http://scienceblogs.com/dotphysics/2008/09/basics-what-is-a-force.php)
[Previously, I talked about the momentum principle](http://scienceblogs.com/dotphysics/2008/10/basics-forces-and-the-momentum-principle.php). Very useful and very fundamental idea. The other big (and
useful) idea in introductory physics is the work-energy theorem. Really, with work-energy and momentum principle, you will be like a Jedi with a lightsaber and The Force – extremely powerful.
Well, what is work? What is energy? How are they related? In [another post, I talked about energy.](http://scienceblogs.com/dotphysics/2008/10/what-is-energy/) I think it is interesting to look at
how most textbooks define energy:
*Energy is the ability to do work*
This is really a stupid definition. Kind of circular logic, if you ask me. In the post I mentioned earlier, I claim there are two kinds of energy, particle energy and field energy. At low speeds (not
near the speed of light), particle energy can be written as:
Where *m* is the mass of the particle, *c* is the speed of light. So, if you just look at a particle, that is it for the energy. Now, what about the “work” portion? Work is defined as:
Where *F* is the net force on the particle, Δr is the vector displacement of the particle. The "dot" in between F and Δr represents the "dot product" operation between vectors (also known as the
scalar product). In a [previous post](http://scienceblogs.com/dotphysics/2008/09/basics-vectors-and-vector-addition/) I showed that you could multiply a scalar quantity by a vector quantity. Here I
need to do “something” with two vectors. You can’t multiply two vectors in the same sense that you multiply scalars. A general definition of the dot product for two vectors:
That looks a little more messy than I wanted, but it can not be helped. Really, it is not that complicated. The dot product is simply the projection of one vector on the other. Let me explain in
terms of work.
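The equations in the original post were images and do not survive here; reconstructed from the surrounding text, the two definitions in play are:

W = \vec{F} \cdot \Delta\vec{r}
\vec{A} \cdot \vec{B} = A_x B_x + A_y B_y + A_z B_z = |\vec{A}|\,|\vec{B}|\cos\theta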
Suppose I pull a block 2 meters with a force of 10 Newtons as shown in this diagram:
Since both force and displacement are in the same direction, this would give a work of 20 Joules. However, I will do this the long way. Assume the x-axis is horizontal, then:
Note that I am calculating the work done by THAT force. There could be other forces acting on that block, but even if there are, it doesn't change the work done by that force. Now let me look at another example
that is similar. Suppose I again push with a force of 10 Newtons, and I again move the block 2 meters:
In this case, I push perpendicular to the direction the block moves (to do this, there would need to be other forces acting on the block). How much work would be done?
So, forces perpendicular to the displacement do no work, only forces in the same (or opposite) direction do. So, what if there is a force not perpendicular, but also not in the same direction?
I could calculate the work done by the force in the normal fashion (dot product) or I could say I just need the component of force in the direction of motion:
Either way, same thing. This is why sometimes you will hear people explain the dot product as a projection of one vector onto the other. It is only the components of the two vectors that are in the
same direction that matter.
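A short numerical version of the three block examples (my sketch; the 60-degree angle in the last case is the value the first commenter below infers for the missing figure):

import math

def work(force, displacement):
    # W = F . dr, with components in newtons and metres
    return sum(f * d for f, d in zip(force, displacement))

print(work((10.0, 0.0), (2.0, 0.0)))          # force parallel to the motion: 20 J
print(work((0.0, 10.0), (2.0, 0.0)))          # force perpendicular to the motion: 0 J
theta = math.radians(60.0)
print(work((10.0 * math.cos(theta), 10.0 * math.sin(theta)), (2.0, 0.0)))   # 10 J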
So, that is work. Now for the good part. The work energy theorem says:
*The work done on a particle is equal to its change in energy*
Notice that it is the CHANGE in energy, not the energy. At low speeds, the mass-energy (mc^2) doesn’t really change, so I will typically just relate the work to the change in kinetic energy.
But wait!! What about potential energy? Yes, I know. But for a PARTICLE, there is no potential energy. You can have potential energy for a system. Instead of talking about potential energy, I will
give a short example of work-energy. I will save potential for another day.
Here is an example I used [previously](http://scienceblogs.com/dotphysics/2008/10/basics-numerical-calculations/). Suppose I throw a ball straight up with a speed of 10 m/s (and the ball has a mass
of 0.5 kg, but that doesn’t really matter). How high will it go? I did this problem both analytically and numerically using the momentum principle. The momentum principle deals with force and TIME –
but in this case, I don’t want the time. Work-energy deals with force and displacement. This will be perfect for this problem. So, while the ball is going up, here is the free body diagram:
I can now write an expression for the work (the total work) done on this ball as it rises:
In this case, the ball is moving up, so Δy is a positive number. The gravitational force is down, along the same line as the displacement but in the opposite direction, so that there is no cosine term (it is just -1). I can set the initial position of the ball to *y = 0 m*. As for the change in energy, the mass energy does not change so I just have change in kinetic energy.
Since I know the initial velocity, I can get a value for the final y:
If you look back at the analytical solution using the momentum principle, this probably looks easier. It should, because it is. Remember:
• Work-Energy deals with force and change in position. This problem specifically was looking for the position – a perfect match.
• Work-Energy does not give a vector answer. The kinetic energy has v^2 which is technically the square of the magnitude of velocity.
• The stuff I have done here deals with a PARTICLE. If you have something that can not be represented as a particle (like a car with internal combustion engine) then you will need to do something different.
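As a numerical check of the ball example above (the equations were images in the original post, so this just redoes the algebra: -mgy = 0 - (1/2)mv^2, and the mass cancels):

m, v0, g = 0.5, 10.0, 9.8
y_max = v0**2 / (2.0 * g)
print(y_max)   # about 5.1 m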
1. #1 CL417a February 5, 2010
I like the explanation and I like the illustrations.
I find the inclusion of the particle’s atomic energy (mc^2) to be distracting, especially in a working equation, and potentially confusing to some students.
I think some students would appreciate it if you specified that the angle theta in the third illustration is 60 degrees. The only reason I know it is 60 degrees is because I worked backwards from
the answer.
Also, the delta symbol and theta symbol (in other posts) show up as question marks in my browser.
2. #2 Rhett Allain February 5, 2010
Yeah, I know about the problem with the symbols – I will need to fix that sometime. Sorry about that.
Also, I agree that the mass energy may be confusing – but at a certain level it is important to know.
3. #3 Bob February 13, 2010
1. Regarding the ball thrown upward, rather than state that there is no cosine term perhaps it would be clearer to students to say that the angle between the force and the displacement is 180 degrees and the cosine of that angle is -1.
2. Neither a block nor a ball is actually a particle, and from a teaching standpoint it might be better to mention somewhere that particle dynamics rather than rigid body dynamics apply whenever
forces act only along lines through the centroids of bodies.
|
{"url":"http://scienceblogs.com/dotphysics/2008/10/20/basics-work-energy/","timestamp":"2014-04-20T00:55:26Z","content_type":null,"content_length":"64280","record_id":"<urn:uuid:e149eba7-07e8-45a5-92a2-d98718cdc05b>","cc-path":"CC-MAIN-2014-15/segments/1397609537804.4/warc/CC-MAIN-20140416005217-00596-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Re: AUDITORY Digest - 14 Jul 2009 to 15 Jul 2009 (#2009-160)
There are 6 messages totalling 358 lines in this issue.
Topics of the day:
1. frequency to mel formula (6)
Date: Wed, 15 Jul 2009 13:11:27 -0500
From: "James W. Beauchamp" <jwbeauch@xxxxxxxxxxxxxxxxxxxxxx>
Subject: frequency to mel formula
Dear List,
On the Wikipedia page
a formula for computing frequency in terms of mels is given as:
mel = log(1 + fr/700)*1127 .
It is easily inverted to fr = 700*(exp(mel/1127) - 1) .
My question is: Where do these formulas come from? I.e., I need
a journal reference for these formulas.
Thanks much,
Jim Beauchamp
Univ. of Illinois at Urbana-Champaign
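For anyone who just wants to use the formulas, here is a small sketch (not part of the original thread) with the inversion written out explicitly as fr = 700*(exp(mel/1127) - 1):

import math

def hz_to_mel(f):
    return 1127.0 * math.log(1.0 + f / 700.0)

def mel_to_hz(m):
    return 700.0 * (math.exp(m / 1127.0) - 1.0)

print(hz_to_mel(1000.0))             # ~1000 mels: the 1127 constant is chosen to pin 1000 Hz at 1000 mels
print(mel_to_hz(hz_to_mel(440.0)))   # round-trips to ~440.0 Hz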
Date: Wed, 15 Jul 2009 14:01:02 -0500
From: "McCreery, Ryan W" <McCreeryR@xxxxxxxxxxxx>
Subject: Re: frequency to mel formula
Hi Jim,
I don't know if this is the first reference to the mel scale, but it's one I have read before and seen cited:
Stevens, S.S. & Volkmann, J. (1940) The relation of pitch to frequency: A revised scale. The American Journal of Psychology 53(3), 329-
I hope this helps.
Ryan McCreery
-----Original Message-----
From: AUDITORY - Research in Auditory Perception [mailto:AUDITORY@xxxxxxxxxxxxxxx] On Behalf Of James W. Beauchamp
Sent: Wednesday, July 15, 2009 1:11 PM
To: AUDITORY@xxxxxxxxxxxxxxx
Subject: frequency to mel formula
Dear List,
On the Wikipedia page
a formula for computing frequency in terms of mels is given as:
mel = log(1 + fr/700)*1127 .
It is easily inverted to fr = 700*(exp(mel/1127) - 1) .
My question is: Where do these formulas come from? I.e., I need
a journal reference for these formulas.
Thanks much,
Jim Beauchamp
Univ. of Illinois at Urbana-Champaign
Date: Wed, 15 Jul 2009 15:03:45 -0400
From: Dan Ellis <dpwe@xxxxxxxxxxxxxxx>
Subject: Re: frequency to mel formula
We discussed this last year. See
and the surrounding thread.
I think the actual origin is Fant in a paper in Swedish from 1949,
summarized in his 1973 book:
Fant, C G M "Analys av de svenska konsonantljuden" LM Ericsson
protokoll H/P 1064, 1949: 139pp.
referenced on p. 48 of
Fant, G "Speech Sounds and Features", MIT Press, 1973.
but Fant uses log(1+f/1000). The log(1+f/700) was attributed to
O'Shaughnessy, D. (1978) Speech communication: Human and machine.
Addison-Wesley, New York, page 150.
On Wed, Jul 15, 2009 at 2:11 PM, James W.
Beauchamp<jwbeauch@xxxxxxxxxxxxxxxxxxxxxx> wrote:
Dear List,
On the Wikipedia page
a formula for computing frequency in terms of mels is given as:
mel = log(1 + fr/700)*1127 .
It is easily inverted to fr = 700*(exp(mel/1127) - 1) .
My question is: Where do these formulas come from? I.e., I need
a journal reference for these formulas.
Thanks much,
Jim Beauchamp
Univ. of Illinois at Urbana-Champaign
Date: Wed, 15 Jul 2009 15:55:25 -0400
From: Dan Ellis <dpwe@xxxxxxxxxxxxxxx>
Subject: Re: frequency to mel formula
I'm not sure if this is worth discussing on the full list, but...
After the discussion last year I actually got a hold of the Beranek
1949 book from our library's cold storage, and the reference is wrong.
In the book, Beranek gives empirical values for the Mel scale, but no
equation. Clearly, this reference got mangled somewhere along the
way: there may be a different early Beranek reference, but it isn't
this one.
I think Fant is the more appropriate reference (for log(1+f/1000)) and
O'Shaugnessy for log(1+f/700).
On Wed, Jul 15, 2009 at 3:34 PM, James D. Miller<jamdmill@xxxxxxxxxxx> wrote:
As Dan explained last time this was discussed, the correct reference to the formula cited by Beauchamp is
LL Beranek, Acoustic Measurements (Wiley, New York, 1949), p. 329.
as the source for mel(f) = 1127 ln(1 + f/700)
Jim Miller
-----Original Message-----
From: AUDITORY - Research in Auditory Perception [mailto:AUDITORY@xxxxxxxxxxxxxxx] On Behalf Of Dan Ellis
Sent: Wednesday, July 15, 2009 3:04 PM
To: AUDITORY@xxxxxxxxxxxxxxx
Subject: Re: [AUDITORY] frequency to mel formula
We discussed this last year. See
and the surrounding thread.
I think the actual origin is Fant in a paper in Swedish from 1949,
summarized in his 1973 book:
Fant, C G M "Analys av de svenska konsonantljuden" LM Ericsson
protokoll H/P 1064, 1949: 139pp.
referenced on p. 48 of
Fant, G "Speech Sounds and Features", MIT Press, 1973.
but Fant uses log(1+f/1000). The log(1+f/700) was attributed to
O'Shaughnessy, D. (1978) Speech communication: Human and machine.
Addison-Wesley, New York, page 150.
On Wed, Jul 15, 2009 at 2:11 PM, James W.
Beauchamp<jwbeauch@xxxxxxxxxxxxxxxxxxxxxx> wrote:
Dear List,
On the Wikipedia page
a formula for computing frequency in terms of mels is given as:
mel = log(1 + fr/700)*1127 .
It is easily inverted to fr = 700*(exp(mel/1127) - 1) .
My question is: Where do these formulas come from? I.e., I need
a journal reference for these formulas.
Thanks much,
Jim Beauchamp
Univ. of Illinois at Urbana-Champaign
Date: Wed, 15 Jul 2009 17:29:47 -0400
From: Christine Rankovic <rankovic@xxxxxxxxxxxxxxxx>
Subject: Re: frequency to mel formula
I just checked Beranek's book: Acoustic Measurements. Beranek cites
Stevens and Volkman as the source of his plot on page 203 (Beranek provides
no equation) . The full reference provided by Beranek is: S.S. Stevens and
J. Volkman, "The relation of pitch to frequency: a revised scale," Amer.
J. Psychol., vol. 53, 329 (1940).
Christine Rankovic
----- Original Message -----
From: "Dan Ellis" <dpwe@xxxxxxxxxxxxxxx>
To: <AUDITORY@xxxxxxxxxxxxxxx>
Sent: Wednesday, July 15, 2009 3:55 PM
Subject: Re: frequency to mel formula
I'm not sure if this is worth discussing on the full list, but...
After the discussion last year I actually got a hold of the Beranek
1949 book from our library's cold storage, and the reference is wrong.
In the book, Beranek gives empirical values for the Mel scale, but no
equation. Clearly, this reference got mangled somewhere along the
way: there may be a different early Beranek reference, but it isn't
this one.
I think Fant is the more appropriate reference (for log(1+f/1000)) and
O'Shaugnessy for log(1+f/700).
On Wed, Jul 15, 2009 at 3:34 PM, James D. Miller<jamdmill@xxxxxxxxxxx>
As Dan explained last time this was discussed, the correct reference to
the formula cited by Beauchamp is
LL Beranek, Acoustic Measurements (Wiley, New York, 1949), p. 329.
as the source for mel(f) = 1127 ln(1 + f/700)
Jim Miller
-----Original Message-----
From: AUDITORY - Research in Auditory Perception
[mailto:AUDITORY@xxxxxxxxxxxxxxx] On Behalf Of Dan Ellis
Sent: Wednesday, July 15, 2009 3:04 PM
To: AUDITORY@xxxxxxxxxxxxxxx
Subject: Re: [AUDITORY] frequency to mel formula
We discussed this last year. See
and the surrounding thread.
I think the actual origin is Fant in a paper in Swedish from 1949,
summarized in his 1973 book:
Fant, C G M "Analys av de svenska konsonantljuden" LM Ericsson
protokoll H/P 1064, 1949: 139pp.
referenced on p. 48 of
Fant, G "Speech Sounds and Features", MIT Press, 1973.
but Fant uses log(1+f/1000). The log(1+f/700) was attributed to
O'Shaughnessy, D. (1978) Speech communication: Human and machine.
Addison-Wesley, New York, page 150.
On Wed, Jul 15, 2009 at 2:11 PM, James W.
Beauchamp<jwbeauch@xxxxxxxxxxxxxxxxxxxxxx> wrote:
Dear List,
On the Wikipedia page
a formula for computing frequency in terms of mels is given as:
mel = log(1 + fr/700)*1127 .
It is easily inverted to fr = 700*(exp(mel/1127) - 1) .
My question is: Where do these formulas come from? I.e., I need
a journal reference for these formulas.
Thanks much,
Jim Beauchamp
Univ. of Illinois at Urbana-Champaign
Date: Wed, 15 Jul 2009 20:54:45 -0500
From: "James W. Beauchamp" <jwbeauch@xxxxxxxxxxxxxxxxxxxxxx>
Subject: Re: frequency to mel formula
It would be good if someone could double check the O'Shaugnessy
reference, as given by Dan earlier today:
O'Shaughnessy, D. (1978) Speech communication: Human and machine.
Addison-Wesley, New York, page 150.
I think the title is actually Speech Communications: Human and Machine.
In the archived message http://www.auditory.org/mhonarc/2008/msg00189.html
Dan gives the date of the book as 1987, so I'm not sure which is correct.
At any rate, it is possible to buy a second edition of the book, which is
copyrighted 2000. However, when perusing the Contents and the Index it
looks like the page has changed. Pages for 'mel scale' in the Index are
128, 191, and 214. I hope the formula made it.
Original message:
From: Dan Ellis <dpwe@xxxxxxxxxxxxxxx>
Date: Wed, 15 Jul 2009 15:55:25 -0400
To: AUDITORY@xxxxxxxxxxxxxxx
Subject: Re: [AUDITORY] frequency to mel formula
Comments: To: "James D. Miller" <jamdmill@xxxxxxxxxxx>
I'm not sure if this is worth discussing on the full list, but...
After the discussion last year I actually got a hold of the Beranek
1949 book from our library's cold storage, and the reference is wrong.
In the book, Beranek gives empirical values for the Mel scale, but no
equation. Clearly, this reference got mangled somewhere along the
way: there may be a different early Beranek reference, but it isn't
this one.
I think Fant is the more appropriate reference (for log(1+f/1000)) and
O'Shaugnessy for log(1+f/700).
End of AUDITORY Digest - 14 Jul 2009 to 15 Jul 2009 (#2009-160)
|
{"url":"http://www.auditory.org/mhonarc/2009/msg00552.html","timestamp":"2014-04-18T01:49:00Z","content_type":null,"content_length":"19874","record_id":"<urn:uuid:10d663f2-ea5f-4282-bb9d-14a3e93a333a>","cc-path":"CC-MAIN-2014-15/segments/1397609532374.24/warc/CC-MAIN-20140416005212-00389-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Figure 5. Angular dependence in polar coordinates: R(θ) (a,b,c) and the corresponding etching rates V = f(θ) (d,e,f), obtained on a (111) wafer (a and d) etched at a current I = 800 mA for t = 10 min, and on (100) (b and e) and (110) (c and f) wafers etched at I = 50 mA for t = 70 min.
Astrova and Zharova Nanoscale Research Letters 2012 7:421 doi:10.1186/1556-276X-7-421
|
{"url":"http://www.nanoscalereslett.com/content/7/1/421/figure/F5","timestamp":"2014-04-21T05:51:14Z","content_type":null,"content_length":"11786","record_id":"<urn:uuid:a5d9df90-99c1-4e65-84d1-cbe8554c41a8>","cc-path":"CC-MAIN-2014-15/segments/1398223203235.2/warc/CC-MAIN-20140423032003-00025-ip-10-147-4-33.ec2.internal.warc.gz"}
|
While the last chapter focused on equations and systems of equations, this chapter deals with inequalities and systems of inequalities. Inequalities in one variable are introduced in algebra I; in
algebra II, we turn our attention to inequalities in two variables.
The first section explains how to graph an inequality on the xy-plane. Graphing an inequality in two-dimensional space (a graph with two variables) is similar to graphing an inequality on the number line. Both involve treating the inequality as an equation, solving the equation, and testing points. In the two-variable case, however, the solution to the equation is a line, not a single point. It is this line that divides the xy-graph into two regions: one that satisfies the inequality, and one that does not.
The second section deals with systems of inequalities. Unlike systems of equations, systems of inequalities generally do not have a single solution; rather, systems of inequalities describe an entire
region. Thus, it makes sense to find this region by graphing the inequalities. This section explains how to solve systems of inequalities by graphing.
The third section provides an application of inequalities--linear programming. Linear programming is a process by which constraints are turned into inequalities and graphed, and a value is maximized
or minimized. This is especially useful in economics, in which linear programming is used to maximize revenue, minimize cost, and maximize profit.
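For readers who want to see linear programming in action, here is a small sketch with made-up numbers, using SciPy (a tool well outside the scope of this summary): maximize 3x + 2y subject to x + y <= 4, x + 3y <= 6, and x, y >= 0.

from scipy.optimize import linprog

res = linprog(c=[-3, -2],                    # linprog minimizes, so negate the objective to maximize
              A_ub=[[1, 1], [1, 3]],
              b_ub=[4, 6],
              bounds=[(0, None), (0, None)])
print(res.x, -res.fun)                       # optimal corner (4, 0) of the feasible region, value 12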
Inequalities have other applications in addition to linear programming. They are used to describe the relationship between any two quantities when one quantity "limits" the other. These relationships
appear frequently in physics and chemistry, as well as in everyday life. Inequalities are also used to find viable values of variables against several constraints.
|
{"url":"http://www.sparknotes.com/math/algebra2/inequalities/summary.html","timestamp":"2014-04-17T04:37:21Z","content_type":null,"content_length":"50924","record_id":"<urn:uuid:bf2968e7-b405-417b-953c-2c4b04a92e70>","cc-path":"CC-MAIN-2014-15/segments/1397609526252.40/warc/CC-MAIN-20140416005206-00183-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Items tagged with blog
I was reminded of this by another thread.
It is faster to add in-place a large size storage=sparse float[8] Matrix into a new empty storage=rectangular float[8] Matrix than it is to convert it that way using the Matrix() or rtable() constructors.
Here's an example. First I'll do it with in-place Matrix addition, and then after that with a call to Matrix(). I measure the time to execute as well as the increase in bytes-allocated and bytes-used.
> with(LinearAlgebra):
> N := 500:
> A := RandomMatrix(N,'density'=0.1,
> 'outputoptions'=['storage'='sparse',
> 'datatype'=float[8]]):
> st,ba,bu := time(),kernelopts(bytesalloc),kernelopts(bytesused):
> B := Matrix(N,'datatype'=float[8]):
> MatrixAdd(B,A,'inplace'=true):
> time()-st,kernelopts(bytesalloc)-ba,kernelopts(bytesused)-bu;
0.022, 2489912, 357907
|
{"url":"http://www.mapleprimes.com/tags/blog?page=6","timestamp":"2014-04-16T07:25:57Z","content_type":null,"content_length":"113265","record_id":"<urn:uuid:2418e681-956d-4ab8-847a-a2ecccaa9f6d>","cc-path":"CC-MAIN-2014-15/segments/1397609521558.37/warc/CC-MAIN-20140416005201-00050-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Reading an unknown number of data points from a table
09-22-2011 #1
Registered User
Join Date
Sep 2011
Reading an unknown number of data points from a table
Hi there, I'm trying to read an unknown number of data points from a text-based file into a 2D array. The file contains a grid of numbers which has 4 columns and an unknown number of rows. To start with I read just the first 20 rows and it worked. I am now unsure of what the code would be to read the rows until there is no more data.
#include <stdio.h>
#include <stdlib.h>
int main()
{
    FILE *myfile;
    int j, i;
    float c[20][4];
    myfile = fopen("data.txt", "r");
    if (myfile == NULL)
    {
        puts("data.TXT not found!");
        return 1;
    }
    j = 0;
    while (j < 20)
    {
        i = 0;
        while (i < 4)
        {
            fscanf(myfile, "%f", &c[j][i]);  /* read one value per column */
            i = i + 1;
        }
        j = j + 1;
    }
    fclose(myfile);
    return 0;
}
Any help would be much appreciated
If you are reading a set of fixed columns you can use fscanf() as...
fscanf(myfile,"%f %f %f %f", &c[j][0],&c[j][1],&c[j][2],&c[j][3]);
Also note that you should be checking the return value of fscanf() to make sure you're getting the right number of conversions. (Details will be in your library documentation.)
BUT... when you have an unknown number of rows you have a much larger problem. You are going to have to set up a linked list with one row in each structure and read the file row by row (as
above), creating a new list element for each.
typedef struct t_rows
{   float c[4];
    struct t_rows *next;   /* next row in the list */
} rows, *prows;
You can study up on linked lists.... here
If you absolutely need this to end up in an array... you can count the number of rows from your linked list and use malloc() to make the appropriate array and copy the data out as you destroy
your linked list.
Well, you could either declare an array that is way larger than you will ever need, or if memory is a concern take a look at determining the file size at run time and then dynamically allocate
your array. Look at the ftell() function. Note this is an approximation for what you would need in txt files however it will do the trick.
Warning: Some or all of my posted code may be non-standard and as such should not be used and in no case looked at.
|
{"url":"http://cboard.cprogramming.com/c-programming/141281-reading-unknown-number-data-points-table.html","timestamp":"2014-04-20T10:25:10Z","content_type":null,"content_length":"50035","record_id":"<urn:uuid:31ec5a85-187a-446b-86f5-00ab3c477b37>","cc-path":"CC-MAIN-2014-15/segments/1398223206118.10/warc/CC-MAIN-20140423032006-00422-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Finding Vertices of a Parallelepiped
January 23rd 2009, 01:31 PM
Finding Vertices of a Parallelepiped
I will try to describe the picture:
The figure is a rectangular box, with the near lower left point A labeled (3,3,4) and the far top right point B labeled (-1,6,7).
I am to find the coordinates of the remaining 6 vertices. There are no examples in my book, and I'm not sure how to solve. I have found |A|= root 34, |B| = root 86, and |AB| = root 34.
January 23rd 2009, 03:00 PM
I will try to describe the picture:
The figure is a rectangular box, with the near lower left point A labeled (3,3,4) and the far top right point B labeled (-1,6,7).
I am to find the coordinates of the remaining 6 vertices. There are no examples in my book, and I'm not sure how to solve. I have found |A|= root 34, |B| = root 86, and |AB| = root 34.
View my diagram and note that, by pythagorus:
$|BH|^2 + |AH|^2 = |AB|^2$ (which you can see from the blue line!)
You know A and B, so you can most certainly find H from this, and then continue in that fashion to uncover the rest of the coordinates.
January 23rd 2009, 03:31 PM
Thanks for answering and for providing a diagram.
I understand what you mean, but don't know how to solve for |BH| and |AH| using the points I have.
January 23rd 2009, 03:47 PM
January 23rd 2009, 03:56 PM
|AB| = root ( (-1-3)^2 + (6-3)^2 + (7-4)^2 ) = root 34
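For completeness (an editor's note, not part of the thread): if the box is axis-aligned, as the diagram suggests, every vertex simply takes each coordinate from either A or B, so the remaining six vertices fall out of the eight combinations.

from itertools import product

A, B = (3, 3, 4), (-1, 6, 7)
vertices = sorted(set(product(*zip(A, B))))   # one choice of A- or B-coordinate per axis
print(len(vertices))                          # 8: the two given corners plus the six asked for
for v in vertices:
    print(v)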
January 23rd 2009, 04:19 PM
|
{"url":"http://mathhelpforum.com/calculus/69600-finding-vertices-parallelepiped-print.html","timestamp":"2014-04-19T18:46:39Z","content_type":null,"content_length":"6801","record_id":"<urn:uuid:d5e8c7d2-22f0-4377-b4c5-7c8cf8d07302>","cc-path":"CC-MAIN-2014-15/segments/1397609537308.32/warc/CC-MAIN-20140416005217-00019-ip-10-147-4-33.ec2.internal.warc.gz"}
|
MathGroup Archive: December 2000 [00103]
inner angle of a triangle in the 4th dimension
• To: mathgroup at smc.vnet.net
• Subject: [mg26324] inner angle of a triangle in the 4th dimension
• From: "Jacky Vaillancourt" <jacky_1970 at videotron.ca>
• Date: Sun, 10 Dec 2000 21:38:07 -0500 (EST)
• Sender: owner-wri-mathgroup at wolfram.com
Hi, i have a basic problem. I can't see my mistake can somebody help me?
Here's the problem:
I want to calculate each angle of the triangle formed by those three dots.
P:=(0,1,0,1), Q:=(3,2,-2,1), R:=(3,5,-1,3)
u:=PQ -> (3-0,2-1,-2-0,1-1) -> (3,1,-2,0)
v:=QR -> (3-3,5-2,-1-(-2),3-1) -> (0,3,1,2)
w:=PR -> (3-0,5-1,-1-0,3-1) -> (3,4,-1,2)
The formula to get the angle between two vectors is:
The formula to calculate the length is SQRT(a^2+b^2+c^2+d^2)
So, the angle between u and v is:
ARCCOS(ABS(15)/(SQRT(14)*SQRT(30))) = 42.95 deg
the angle between v and w is:
ARCCOS(ABS(-15)/(SQRT(30)*SQRT(14)))= 42.95 deg
the angle between u and w is:
ARCCOS(ABS(-1)/(SQRT(30)*SQRT(14)))= 85.9 deg
Here's the problem: 180 - 85.9 - 42.95 - 42.95 = 8.2 deg
I'm missing 8.2 deg....
I hope you'll understand what i wrote, i'm not used to write in english...
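An editorial note on the question: the missing 8.2 degrees comes from the ABS() in the angle formula. The interior angle at Q is the angle between QP and QR, whose cosine here is -1/14; taking the absolute value replaces the obtuse 94.1-degree angle by its 85.9-degree supplement, and 94.1 minus 85.9 is exactly the missing 8.2 degrees. A small check, in Python rather than Mathematica:

import math

P, Q, R = (0, 1, 0, 1), (3, 2, -2, 1), (3, 5, -1, 3)

def interior_angle(A, B, C):
    # angle at vertex A between the vectors AB and AC (no absolute value on the dot product)
    u = [b - a for a, b in zip(A, B)]
    v = [c - a for a, c in zip(A, C)]
    dot = sum(x * y for x, y in zip(u, v))
    norm = math.sqrt(sum(x * x for x in u)) * math.sqrt(sum(x * x for x in v))
    return math.degrees(math.acos(dot / norm))

angles = interior_angle(P, Q, R), interior_angle(Q, P, R), interior_angle(R, P, Q)
print(angles, sum(angles))   # ~42.95, ~94.10, ~42.95, summing to 180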
|
{"url":"http://forums.wolfram.com/mathgroup/archive/2000/Dec/msg00103.html","timestamp":"2014-04-16T10:18:18Z","content_type":null,"content_length":"35521","record_id":"<urn:uuid:97815862-92da-4035-8040-bd3fecfeb7d0>","cc-path":"CC-MAIN-2014-15/segments/1398223211700.16/warc/CC-MAIN-20140423032011-00192-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Math Trivia, Quizzes, and Brain Teasers
I'm no scientician, but I'm vaguely aware of this one formula, E=mc². Apparently it's a big deal. In the video below, the good people of Minute Physics rapidly explain how to derive the formula,
using a theoretical scenario involving a radioactive cat in space, a spacecraft, and some math. For a two-minute explanation, this is remarkably complete (at least to a non-math-genius) -- though it
goes by a little too fast for me to grasp each point. I had to keep backing it up and re-running it, and even... READ ON
|
{"url":"http://www.mentalfloss.com/section/math/page/3/0","timestamp":"2014-04-17T12:50:55Z","content_type":null,"content_length":"69777","record_id":"<urn:uuid:bf12a1e4-863a-45ae-80aa-b3ba71124fa4>","cc-path":"CC-MAIN-2014-15/segments/1397609530131.27/warc/CC-MAIN-20140416005210-00653-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Theory of Rings
Jacobson: Theory of Rings
An important event in the development of ring theory was the publication of The Theory of Rings by Nathan Jacobson in 1943. The information contained on the title page of the book was as follows:
THE THEORY OF RINGS
by NATHAN JACOBSON
MATHEMATICAL SURVEYS NUMBER II
Published by the AMERICAN MATHEMATICAL SOCIETY
531 WEST 116TH STREET, NEW YORK CITY
We give below a version of the Preface to Jacobson's book The Theory of Rings which explains the background to the book as well as giving an indication of its contents:
The theory that forms the subject of this book had its beginning with Artin's extension in 1927 of Wedderburn's structure theory of algebras to rings satisfying the chain conditions. Since then the
theory has been considerably extended and simplified. The only exposition of the subject in book form that has appeared to date is Deuring's Algebren published in the Ergebnisse series in 1935. Much
progress has been made since then and this perhaps justifies a new exposition of the subject.
The present account is almost completely self-contained. That this has been possible in a book dealing with results of the significance of Wedderburn's theorems, the Albert-Brauer-Noether [A. Adrian
Albert, Richard Brauer, Emmy Noether] theory of simple algebras and the arithmetic ideal theory is another demonstration of one of the most remarkable characteristics of modern algebra, namely, the
simplicity of its logical structure.
Roughly speaking our subject falls into three parts: structure theory, representation theory and arithmetic ideal theory. The first of these is an outgrowth of the structure theory of algebras. It
was motivated originally by the desire to discover and to classify "hypercomplex" extensions of the field of real numbers. The most important names connected with this phase of the development of the
theory are those of Molien, Dedekind, Frobenius and Cartan. The structure theory for algebras over a general field dates from the publication of Wedderburn's thesis in 1907; the extension to rings,
from Artin's paper in 1927. The theory of representations was originally concerned with the problem of representing a group by matrices. This was extended to rings and was formulated as a theory of
modules by Emmy Noether. The study of modules also forms an important part of the arithmetic ideal theory. This part of the theory of rings had its origin in Dedekind's ideal theory of algebraic
number fields and more immediately in Emmy Noether's axiomatic foundation of this theory.
Throughout this book we have placed particular emphasis on the study of rings of endomorphisms. By using the regular representations the theory of abstract rings is obtained as a special case of the
more concrete theory of endomorphisms. Moreover, the theory of modules, and hence representation theory, may be regarded as the study of a set of rings of endomorphisms all of which are homomorphic
images of a fixed ring R. Chapter 1 lays the foundations of the theory of endomorphisms of a group. The concepts and results developed here are fundamental in all the subsequent work. Chapter 2 deals
with vector spaces and contains some material that, at any rate in the commutative case, might have been assumed as known. For the sake of completeness this has been included. Chapter 3 is concerned
with the arithmetic of non-commutative principal ideal domains. Much of this chapter can be regarded as a special case of the general arithmetic ideal theory developed in Chapter 6. The methods of
Chapter 3 are, however, of a much more elementary character and this fact may be of interest to the student of geometry, since the results of this chapter have many applications in that field. A
reader who is primarily interested in structure theory or in representation theory may omit Chapter 3 with the exception of 3. Chapter 4 is devoted to the development of these theories and to some
applications to the problem of representation of groups by projective transformations and to the Galois theory of division rings. In Chapter 5 we take up the study of algebras. In the first part of
this chapter we consider the theory of simple algebras over a general field. The second part is concerned with the theory of characteristic and minimum polynomials of an algebra and the trace
criterion for separability of an algebra.
In recent years there has been a considerable interest in the study of rings that do not satisfy the chain conditions but instead are restricted by topological or metric conditions. We mention von
Neumann and Murray's investigation of rings of transformations in Hilbert space, von Neumann's theory of regular rings and Gelfand's theory of normed rings. There are many important applications of
these theories to analysis. Because of the conditions that we have imposed on the rings considered in this work, our discussion is not directly applicable to these problems in topological algebra. It
may be hoped, however, that the methods and results of the purely algebraic theory will point the way for further development of the topological algebraic theory.
This book was begun during the academic year 1940-1941 when I was a visiting lecturer at Johns Hopkins University. It served as a basis of a course given there and it gained materially from the
careful reading and criticism of Dr Irving Cohen who at that time was one of the auditors of my lectures. My thanks are due to him and also to Professors A A Albert, Schilling and Hurewicz for their
encouragement and for many helpful suggestions.
Chapel Hill, N. C., March 7, 1943.
JOC/EFR April 2007
|
{"url":"http://www-history.mcs.st-andrews.ac.uk/~history/Extras/Jacobson_rings.html","timestamp":"2014-04-20T05:44:06Z","content_type":null,"content_length":"7053","record_id":"<urn:uuid:5d77af69-4a44-4cd3-9ca6-250573d512f8>","cc-path":"CC-MAIN-2014-15/segments/1397609538022.19/warc/CC-MAIN-20140416005218-00624-ip-10-147-4-33.ec2.internal.warc.gz"}
|
NAG Library
NAG Library Routine Document
1 Purpose
D02NNF is a reverse communication routine for integrating stiff systems of implicit ordinary differential equations coupled with algebraic equations.
2 Specification
SUBROUTINE D02NNF ( NEQ, LDYSAV, T, TOUT, Y, YDOT, RWORK, RTOL, ATOL, ITOL, INFORM, YSAV, SDYSAV, WKJAC, NWKJAC, JACPVT, NJCPVT, IMON, INLN, IRES, IREVCM, LDERIV, ITASK, ITRACE, IFAIL)
INTEGER NEQ, LDYSAV, ITOL, INFORM(23), SDYSAV, NWKJAC, JACPVT(NJCPVT), NJCPVT, IMON, INLN, IRES, IREVCM, ITASK, ITRACE, IFAIL
REAL (KIND=nag_wp) T, TOUT, Y(NEQ), YDOT(NEQ), RWORK(50+4*NEQ), RTOL(*), ATOL(*), YSAV(LDYSAV,SDYSAV), WKJAC(NWKJAC)
LOGICAL LDERIV(2)
3 Description
D02NNF is a general purpose routine for integrating the initial value problem for a stiff system of implicit ordinary differential equations coupled with algebraic equations, written in the form $A\left(t,y\right)\,{y}^{\prime } = g\left(t,y\right)$.
An outline of a typical calling program is given below:
!     Declarations
      call linear algebra setup routine
      call integrator setup routine
 1000 CALL D02NNF(NEQ, NEQMAX, T, TOUT, Y, YDOT, RWORK, RTOL,
                  ATOL, ITOL, INFORM, YSAVE, NY2DIM, WKJAC, NWKJAC,
                  JACPVT, NJCPVT, IMON, INLN, IRES, IREVCM, LDERIV,
                  ITASK, ITRACE, IFAIL)
      IF (IREVCM.GT.0) THEN
         IF (IREVCM.GT.7 .AND. IREVCM.LT.11) THEN
            IF (IREVCM.EQ.8) THEN
               supply the Jacobian matrix                          (i)
            ELSE IF (IREVCM.EQ.9) THEN
               perform monitoring tasks requested by the user      (ii)
            ELSE IF (IREVCM.EQ.10) THEN
               indicates an unsuccessful step
            END IF
         ELSE
            evaluate the residual                                  (iii)
         END IF
         GO TO 1000
      END IF
!     post processing (optional linear algebra diagnostic call
!     (sparse case only), optional integrator diagnostic call)
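The outline above is the reverse-communication pattern in miniature: the integrator returns to the caller whenever it needs a residual, a Jacobian or a monitoring action, and the caller re-enters with the requested data. As a rough illustration of that control flow only -- the Integrator object, its step method and the flag names below are hypothetical, not the NAG interface -- a driver loop of this kind might be written in Python as:

    # Hypothetical sketch of a reverse-communication driver loop; the
    # Integrator object and the flag values are invented for illustration.
    RESIDUAL, JACOBIAN, MONITOR, STEP_FAILED, DONE = range(5)

    def drive(integrator, residual, jacobian, monitor):
        flag, data = integrator.step(None)            # initial entry
        while flag != DONE:
            if flag == RESIDUAL:
                reply = residual(data.t, data.y, data.ydot)
            elif flag == JACOBIAN:
                reply = jacobian(data.t, data.y, data.hd)
            elif flag == MONITOR:
                reply = monitor(data)
            else:                                     # STEP_FAILED: nothing to supply
                reply = None
            flag, data = integrator.step(reply)       # re-entry with the requested data
        return data.y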
There are three major operations that may be required of the calling subroutine on an intermediate return (${\mathbf{IREVCM}}\ne 0$) from D02NNF; these are denoted (i), (ii) and (iii) in the outline above. The following sections describe in greater detail exactly what is required of each of these operations.
Supply the Jacobian matrix
You need only provide this facility if the analytic-Jacobian option was selected through the appropriate parameter (or its equivalent, if using sparse matrix linear algebra) in a call to the linear algebra setup routine. If the Jacobian matrix is to be evaluated numerically by the integrator, then the remainder of section (i) can be ignored.
We must define the system of nonlinear equations which is solved internally by the integrator. The time derivative, ${y}^{\prime }$, has the form
${y}^{\prime } = \left(y-z\right)/\left(hd\right),$
where $h$ is the current step size and $d$ is a parameter that depends on the integration method in use. The vector $y$ is the current solution and the vector $z$ depends on information from previous time steps. This means that
$\frac{d}{d{y}^{\prime }}\left(\text{ }\right)=\left(hd\right)\frac{d}{dy}\left(\text{ }\right).$
The system of nonlinear equations that is solved has the form
$A\left(t,y\right)\,{y}^{\prime } - g\left(t,y\right) = 0,$
but is solved in the form
$f\left(t,y\right) = 0,$
where $f$ is the function defined by
$f\left(t,y\right) = hd\left(A\left(t,y\right)\left(y-z\right)/\left(hd\right) - g\left(t,y\right)\right).$
It is the Jacobian matrix
$\frac{\partial r}{\partial y}$
that you must supply as follows:
$\frac{\partial {f}_{i}}{\partial {y}_{j}} = {a}_{ij}\left(t,y\right) + hd\,\frac{\partial }{\partial {y}_{j}}\left(\sum _{k=1}^{{\mathbf{NEQ}}}{a}_{ik}\left(t,y\right){y}_{k}^{\prime } - {g}_{i}\left(t,y\right)\right),$
where the current values of $h$ and $d$ are made available to you by the integrator, and the arrays Y and YDOT contain the current solution and time derivatives respectively. Only the nonzero elements of the Jacobian need be set, since the locations where it is to be stored are preset to zero.
Hereafter in this document this operation will be referred to as JAC.
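If instead the Jacobian is approximated numerically, the underlying idea is a column-at-a-time finite difference of the residual, with $y$ and ${y}^{\prime }$ perturbed consistently because ${y}^{\prime } = (y-z)/(hd)$ with $z$ fixed. The sketch below is a generic illustration of that idea under the stated assumptions (a user-written residual function resid(t, y, ydot)); it is not the internal NAG algorithm, which additionally scales the increments and exploits any banded or sparse structure.

    import numpy as np

    def fd_jacobian(resid, t, y, ydot, hd, eps=1.0e-8):
        # Approximate the combined Jacobian d(resid)/dy, treating y' = (y - z)/(hd)
        # with z fixed, so a perturbation delta in y_j also shifts ydot_j by delta/(hd).
        n = len(y)
        r0 = np.asarray(resid(t, y, ydot), dtype=float)
        jac = np.zeros((n, n))
        for j in range(n):
            delta = eps * max(1.0, abs(y[j]))
            y_pert = np.array(y, dtype=float)
            yd_pert = np.array(ydot, dtype=float)
            y_pert[j] += delta
            yd_pert[j] += delta / hd
            jac[:, j] = (np.asarray(resid(t, y_pert, yd_pert), dtype=float) - r0) / delta
        return jac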
Perform tasks requested by you
This operation is essentially a monitoring function and additionally provides the opportunity of changing the current values of Y, YDOT, HNEXT (the step size that the integrator proposes to take on the next step), HMIN (the minimum step size to be taken on the next step), and HMAX (the maximum step size to be taken on the next step). The scaled local error at the end of a time step may be obtained by calling the real function D02ZAF as follows:
IFAIL = 1
ERRLOC = D02ZAF(NEQ,RWORK(51+NEQMAX),RWORK(51),IFAIL)
! CHECK IFAIL BEFORE PROCEEDING
The following gives details of the location within the array RWORK of variables that may be of interest to you:
Variable Specification Location
TCURR the current value of the independent variable ${\mathbf{RWORK}}\left(19\right)$
HLAST last step size successfully used by the integrator ${\mathbf{RWORK}}\left(15\right)$
HNEXT step size that the integrator proposes to take on the next step ${\mathbf{RWORK}}\left(16\right)$
HMIN minimum step size to be taken on the next step ${\mathbf{RWORK}}\left(17\right)$
HMAX maximum step size to be taken on the next step ${\mathbf{RWORK}}\left(18\right)$
NQU the order of the integrator used on the last step ${\mathbf{RWORK}}\left(10\right)$
You are advised to consult the description of
for details on what optional input can be made.
If either
are changed, then
must be set to
before return to D02NNF. If either of the values HMIN or HMAX are changed, then
must be set
$\text{}\ge 3$
before return to D02NNF. If HNEXT is changed, then
must be set to
before return to D02NNF.
In addition you can force D02NNF to evaluate the residual vector
by setting
and then returning to D02NNF; on return to this monitoring operation the residual vector will be stored in
, for
$\mathit{i}=1,2,\dots ,{\mathbf{NEQ}}$
Hereafter in this document this operation will be referred to as MONITR.
Evaluate the residual
This operation must evaluate the residual
$-r = g\left(t,y\right) - A\left(t,y\right)\,{y}^{\prime }$ (1)
in one case and the reduced residual
$-\hat{r} = -A\left(t,y\right)\,{y}^{\prime }$ (2)
in the other, where the vectors $y$ and ${y}^{\prime }$ are located as described under the parameter IREVCM. The form of the residual that is returned is determined by the value of IRES returned by D02NNF. If ${\mathbf{IRES}}=1$, then the residual defined by equation (1) above must be returned; if ${\mathbf{IRES}}=-1$, then the reduced residual defined by equation (2) above must be returned.
Hereafter in this document this operation will be referred to as RESID.
4 References
5 Parameters
Note: this routine uses reverse communication. Its use involves an initial entry, intermediate exits and re-entries, and a final exit, as indicated by the parameter IREVCM. Between intermediate exits and re-entries, all parameters other than YDOT, RWORK, WKJAC, IMON, INLN and IRES must remain unchanged.
1: NEQ – INTEGERInput
On initial entry: the number of equations to be solved.
Constraint: ${\mathbf{NEQ}}\ge 1$.
2: LDYSAV – INTEGERInput
On initial entry: a bound on the maximum number of equations to be solved during the integration.
Constraint: ${\mathbf{LDYSAV}}\ge {\mathbf{NEQ}}$.
3: T – REAL (KIND=nag_wp)Input/Output
On initial entry: $t$, the value of the independent variable. The input value of T is used only on the first call, as the initial point of the integration.
On final exit: the value of $t$ at which the computed solution $y$ is returned (usually at
4: TOUT – REAL (KIND=nag_wp)Input/Output
On initial entry: the next value of $t$ at which a computed solution is desired. For the initial $t$, the input value of TOUT is used to determine the direction of integration. Integration is permitted in either direction (see also ITASK).
Constraint: ${\mathbf{TOUT}}\ne {\mathbf{T}}$.
On exit
: is unaltered unless
on entry (see also
) in which case
will be set to the result of taking a small step at the start of the integration.
5: Y(NEQ) – REAL (KIND=nag_wp) arrayInput/Output
On initial entry
: the values of the dependent variables (solution). On the first call the first
elements of
must contain the vector of initial values.
On final exit
: the computed solution vector evaluated at
6: YDOT(NEQ) – REAL (KIND=nag_wp) arrayInput/Output
On initial entry
: if
must contain approximations to the time derivatives
${y}^{\prime }$
of the vector
. If
, then
need not be set on entry.
On final exit: contains the time derivatives ${y}^{\prime }$ of the vector $y$ at the last integration point.
7: RWORK($50+4×{\mathbf{NEQ}}$) – REAL (KIND=nag_wp) arrayCommunication Array
On initial entry
: must be the same array as used by one of the method setup routines
, and by one of the storage setup routines
. The contents of
must not be changed between any call to a setup routine and the first call to D02NNF.
On intermediate re-entry
: must contain residual evaluations as described under the parameter
On intermediate exit
: contains information for JAC, RESID and MONITR operations as described under
Section 3
and the parameter
8: RTOL($*$) – REAL (KIND=nag_wp) arrayInput
the dimension of the array
must be at least
, and at least
On initial entry: the relative local error tolerance.
${\mathbf{RTOL}}\left(i\right)\ge 0.0$
for all relevant
9: ATOL($*$) – REAL (KIND=nag_wp) arrayInput
the dimension of the array
must be at least
, and at least
On initial entry: the absolute local error tolerance.
${\mathbf{ATOL}}\left(i\right)\ge 0.0$
for all relevant
10: ITOL – INTEGERInput
On initial entry
: a value to indicate the form of the local error test.
indicates to D02NNF whether to interpret either or both of
as a vector or a scalar. The error test to be satisfied is
, where
is defined as follows:
ITOL RTOL ATOL ${w}_{i}$
1 scalar scalar ${\mathbf{RTOL}}\left(1\right)×\left|{y}_{i}\right|+{\mathbf{ATOL}}\left(1\right)$
2 scalar vector ${\mathbf{RTOL}}\left(1\right)×\left|{y}_{i}\right|+{\mathbf{ATOL}}\left(i\right)$
3 vector scalar ${\mathbf{RTOL}}\left(i\right)×\left|{y}_{i}\right|+{\mathbf{ATOL}}\left(1\right)$
4 vector vector ${\mathbf{RTOL}}\left(i\right)×\left|{y}_{i}\right|+{\mathbf{ATOL}}\left(i\right)$
${e}_{i}$ is an estimate of the local error in ${y}_{i}$, computed internally, and the choice of norm to be used is defined by a previous call to an integrator setup routine.
Constraint: ${\mathbf{ITOL}}=1$, $2$, $3$ or $4$.
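As a concrete reading of the table, a small Python sketch of the weights ${w}_{i}$ and one form of the test is given below. It is only an illustration under the assumption of a max norm, since the norm actually applied depends on the integrator setup call.

    import numpy as np

    def error_weights(y, rtol, atol, itol):
        # w_i = RTOL_i*|y_i| + ATOL_i, with RTOL and ATOL scalar or vector per ITOL = 1..4.
        y = np.asarray(y, dtype=float)
        r = np.asarray(rtol, dtype=float).ravel()
        a = np.asarray(atol, dtype=float).ravel()
        r = r if itol in (3, 4) else np.full_like(y, r[0])   # vector RTOL only for ITOL = 3, 4
        a = a if itol in (2, 4) else np.full_like(y, a[0])   # vector ATOL only for ITOL = 2, 4
        return r * np.abs(y) + a

    def error_test(e, w):
        # One plausible (max-norm) reading of the test: ||e_i / w_i|| <= 1.
        return np.max(np.abs(e) / w) <= 1.0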
11: INFORM($23$) – INTEGER arrayCommunication Array
12: YSAV(LDYSAV,SDYSAV) – REAL (KIND=nag_wp) arrayCommunication Array
13: SDYSAV – INTEGERInput
On initial entry
: the second dimension of the array
as declared in the (sub)program from which D02NNF is called. An appropriate value for
is described in the specifications of the integrator setup routines
. This value must be the same as that supplied to the integrator setup routine.
14: WKJAC(NWKJAC) – REAL (KIND=nag_wp) arrayInput/Output
On intermediate re-entry
: elements of the Jacobian as defined under the description of
. If a numerical Jacobian was requested then
is used for workspace.
On intermediate exit: the Jacobian is overwritten.
15: NWKJAC – INTEGERInput
On initial entry
: the dimension of the array
as declared in the (sub)program from which D02NNF is called. The actual size depends on the linear algebra method used. An appropriate value for
is described in the specifications of the linear algebra setup routines
for full, banded and sparse matrix linear algebra respectively. This value must be the same as that supplied to the linear algebra setup routine.
16: JACPVT(NJCPVT) – INTEGER arrayCommunication Array
17: NJCPVT – INTEGERInput
On initial entry
: the dimension of the array
as declared in the (sub)program from which D02NNF is called. The actual size depends on the linear algebra method used. An appropriate value for
is described in the specifications of the linear algebra setup routines
for banded and sparse matrix linear algebra respectively. This value must be the same as that supplied to the linear algebra setup routine. When full matrix linear algebra is chosen, the array
is not used and hence
should be set to
18: IMON – INTEGERInput/Output
On intermediate exit
: used to pass information between D02NNF and the MONITR operation (see
Section 3
). With
contains a flag indicating under what circumstances the return from D02NNF occurred:
Exit from D02NNF after ${\mathbf{IRES}}=4$ (set in the RESID operation (see Section 3) caused an early termination (this facility could be used to locate discontinuities).
The current step failed repeatedly.
Exit from D02NNF after a call to the internal nonlinear equation solver.
The current step was successful.
On intermediate re-entry
: may be reset to determine subsequent action in D02NNF.
Integration is to be halted. A return will be made from D02NNF to the calling (sub)program with ${\mathbf{IFAIL}}={\mathbf{12}}$.
Allow D02NNF to continue with its own internal strategy. The integrator will try up to three restarts unless ${\mathbf{IMON}}\ne -1$.
Return to the internal nonlinear equation solver, where the action taken is determined by the value of INLN.
Normal exit to D02NNF to continue integration.
Restart the integration at the current time point. The integrator will restart from order $1$ when this option is used. The internal initialization module solves for new values of $y$ and $
{y}^{\prime }$ by using the values supplied in Y and YDOT by the MONITR operation (see Section 3) as initial estimates.
Try to continue with the same step size and order as was to be used before entering the MONITR operation (see Section 3). HMIN and HMAX may be altered if desired.
Continue the integration but using a new value of HNEXT and possibly new values of HMIN and HMAX.
19: INLN – INTEGERInput/Output
On intermediate re-entry
: with
specifies the action to be taken by the internal nonlinear equation solver. By setting
and returning to D02NNF, the residual vector is evaluated and placed in
, for
$\mathit{i}=1,2,\dots ,{\mathbf{NEQ}}$
and then the MONITR operation (see
Section 3
) is invoked again. At present this is the only option available:
must not be set to any other value.
On intermediate exit: contains a flag indicating the action to be taken, if any, by the internal nonlinear equation solver.
20: IRES – INTEGERInput/Output
On intermediate exit
: with
specifies the form of the residual to be returned by the RESID operation (see
Section 3
If ${\mathbf{IRES}}=1$, then $-r=g\left(t,y\right)-A\left(t,y\right){y}^{\prime }$ must be returned.
If ${\mathbf{IRES}}=-1$, then $-\stackrel{^}{r}=-A\left(t,y\right){y}^{\prime }$ must be returned.
On intermediate re-entry
: should be unchanged unless one of the following actions is required of D02NNF in which case
should be set accordingly.
Indicates to D02NNF that control should be passed back immediately to the calling (sub)program with the error indicator set to ${\mathbf{IFAIL}}={\mathbf{11}}$.
Indicates to D02NNF that an error condition has occurred in the solution vector, its time derivative or in the value of $t$. The integrator will use a smaller time step to try to avoid this
condition. If this is not possible D02NNF returns to the calling (sub)program with the error indicator set to ${\mathbf{IFAIL}}={\mathbf{7}}$.
Indicates to D02NNF to stop its current operation and to enter the MONITR operation (see Section 3) immediately.
21: IREVCM – INTEGERInput/Output
On initial entry: must contain $0$.
On intermediate re-entry: should remain unchanged.
On intermediate exit
: indicates what action you must take before re-entering D02NNF. The possible exit values of
which should be interpreted as follows:
${\mathbf{IREVCM}}=1$, $2$, $3$, $4$, $5$, $6$, $7$ or $11$
Indicates that a RESID operation (see Section 3) is required: you must supply the residual of the system. For each of these values of IREVCM, ${y}_{\mathit{i}}$ is located in ${\mathbf{Y}}\
left(\mathit{i}\right)$, for $\mathit{i}=1,2,\dots ,{\mathbf{NEQ}}$.
For ${\mathbf{IREVCM}}=1$, $3$, $6$ or $11$, ${y}_{\mathit{i}}^{\prime }$ is located in ${\mathbf{YDOT}}\left(\mathit{i}\right)$ and ${r}_{\mathit{i}}$ should be stored in ${\mathbf{RWORK}}\
left(50+2×{\mathbf{NEQ}}+\mathit{i}\right)$, for $\mathit{i}=1,2,\dots ,{\mathbf{NEQ}}$.
For ${\mathbf{IREVCM}}=2$, ${y}_{\mathit{i}}^{\prime }$ is located in ${\mathbf{RWORK}}\left(50+{\mathbf{NEQ}}+\mathit{i}\right)$ and ${r}_{\mathit{i}}$ should be stored in ${\mathbf{RWORK}}\
left(50+2×{\mathbf{NEQ}}+\mathit{i}\right)$, for $\mathit{i}=1,2,\dots ,{\mathbf{NEQ}}$.
For ${\mathbf{IREVCM}}=4$ or $7$, ${y}_{\mathit{i}}^{\prime }$ is located in ${\mathbf{YDOT}}\left(\mathit{i}\right)$ and ${r}_{\mathit{i}}$ should be stored in ${\mathbf{RWORK}}\left(50+{\
mathbf{NEQ}}+\mathit{i}\right)$, for $\mathit{i}=1,2,\dots ,{\mathbf{NEQ}}$.
For ${\mathbf{IREVCM}}=5$, ${y}_{\mathit{i}}^{\prime }$ is located in ${\mathbf{RWORK}}\left(50+2×{\mathbf{NEQ}}+\mathit{i}\right)$ and ${r}_{\mathit{i}}$ should be stored in ${\mathbf{YDOT}}
\left(\mathit{i}\right)$, for $\mathit{i}=1,2,\dots ,{\mathbf{NEQ}}$.
Indicates that a JAC operation (see Section 3) is required: you must supply the Jacobian matrix.
If full matrix linear algebra is being used, then the $\left(i,j\right)$th element of the Jacobian must be stored in ${\mathbf{WKJAC}}\left(\left(j-1\right)×{\mathbf{NEQ}}+i\right)$.
If banded matrix linear algebra is being used, then the $\left(i,j\right)$th element of the Jacobian must be stored in ${\mathbf{WKJAC}}\left(\left(i-1\right)×{m}_{B}+k\right)$, where ${m}_
{B}={m}_{L}+{m}_{U}+1$ and $k=\mathrm{min}\phantom{\rule{0.125em}{0ex}}\left({m}_{L}-i+1,0\right)+j$; here ${m}_{L}$ and ${m}_{U}$ are the number of subdiagonals and superdiagonals,
respectively, in the band.
If sparse matrix linear algebra is being used, then
must be called to determine which column of the Jacobian is required and where it should be stored.
CALL D02NRF(J, IPLACE, INFORM)
will return in
the number of the column of the Jacobian that is required and will set
). If
, you must store the nonzero element
of the Jacobian in
; otherwise it must be stored in
Indicates that a MONITR operation (see Section 3) can be performed.
Indicates that the current step was not successful, due to error test failure or convergence test failure. The only information supplied to you on this return is the current value of the
variable $t$, located in ${\mathbf{RWORK}}\left(19\right)$. No values must be changed before re-entering D02NNF; this facility enables you to determine the number of unsuccessful steps.
On final exit: ${\mathbf{IREVCM}}=0$, indicating that the user-specified task has been completed or an error has been encountered (see the descriptions for ITASK and IFAIL).
Constraint: $0\le {\mathbf{IREVCM}}\le 11$.
22: LDERIV($2$) – LOGICAL arrayInput/Output
On initial entry
must be set to .TRUE. if you have supplied both an initial
and an initial
${y}^{\prime }$
must be set to .FALSE. if only the initial
has been supplied.
must be set to .TRUE. if the integrator is to use a modified Newton method to evaluate the initial
${y}^{\prime }$
. Note that
${y}^{\prime }$
, if supplied, are used as initial estimates. This method involves taking a small step at the start of the integration, and if
on entry,
will be set to the result of taking this small step.
must be set to .FALSE. if the integrator is to use functional iteration to evaluate the initial
${y}^{\prime }$
, and if this fails a modified Newton method will then be attempted.
is recommended if there are implicit equations or the initial
${y}^{\prime }$
are zero.
On final exit
is normally unchanged. However if
and internal initialization was successful then
${\mathbf{LDERIV}}\left(2\right)=\mathrm{.TRUE.}$, if implicit equations were detected. Otherwise ${\mathbf{LDERIV}}\left(2\right)=\mathrm{.FALSE.}$.
23: ITASK – INTEGERInput
On initial entry
: the task to be performed by the integrator.
${\mathbf{ITASK}}=1$
Normal computation of output values of $y\left(t\right)$ at $t={\mathbf{TOUT}}$ (by overshooting and interpolating).
${\mathbf{ITASK}}=2$
Take one step only and return.
${\mathbf{ITASK}}=3$
Stop at the first internal integration point at or beyond $t={\mathbf{TOUT}}$ and return.
${\mathbf{ITASK}}=4$
Normal computation of output values of $y\left(t\right)$ at $t={\mathbf{TOUT}}$ but without overshooting $t={\mathbf{TCRIT}}$. TCRIT must be specified as an option in one of the integrator setup routines before the first call to the integrator, or specified in the optional input routine before a continuation call. TCRIT (e.g., see D02NVF) may be equal to or beyond TOUT, but not before it in the direction of integration.
${\mathbf{ITASK}}=5$
Take one step only and return, without passing TCRIT (e.g., see D02NVF). TCRIT must be specified under ${\mathbf{ITASK}}=4$.
${\mathbf{ITASK}}=6$
The integrator will solve for the initial values of $y$ and ${y}^{\prime }$ only and then return to the calling (sub)program without doing the integration. This option can be used to check the initial values of $y$ and ${y}^{\prime }$. Functional iteration or a ‘small’ backward Euler method used in conjunction with a damped Newton iteration is used to calculate these values (see LDERIV). Note that if a backward Euler step is used then the value of $t$ will have been advanced a short distance from the initial point.
if D02NNF is recalled with a different value of
altered) then the initialization procedure is repeated, possibly leading to different initial conditions.
Constraint: $1\le {\mathbf{ITASK}}\le 6$.
24: ITRACE – INTEGERInput
On initial entry
: the level of output that is printed by the integrator.
ITRACE may take the value $-1$, $0$, $1$, $2$ or $3$. If ${\mathbf{ITRACE}}<-1$, then $-1$ is assumed and similarly if ${\mathbf{ITRACE}}>3$, then $3$ is assumed.
No output is generated.
Only warning messages are printed on the current error message unit (see X04AAF).
Warning messages are printed as above, and on the current advisory message unit (see X04ABF) output is generated which details Jacobian entries, the nonlinear iteration and the time
integration. The advisory messages are given in greater detail the larger the value of ITRACE.
25: IFAIL – INTEGERInput/Output
On entry
must be set to
$-1\text{ or }1$
. If you are unfamiliar with this parameter you should refer to
Section 3.3
in the Essential Introduction for details.
For environments where it might be inappropriate to halt program execution when an error is detected, the value
$-1\text{ or }1$
is recommended. If the output of error messages is undesirable, then the value
is recommended. Otherwise, because for this routine the values of the output parameters may be useful even if
${\mathbf{IFAIL}}\ne {\mathbf{0}}$
on exit, the recommended value is
When the value $-\mathbf{1}\text{ or }1$ is used it is essential to test the value of IFAIL on exit.
On exit
unless the routine detects an error or a warning has been flagged (see
Section 6
6 Error Indicators and Warnings
If on entry
, explanatory error messages are output on the current error message unit (as defined by
Errors or warnings detected by the routine:
On entry, the integrator detected an illegal input, or that a linear algebra and/or integrator setup routine has not been called prior to the call to the integrator. If
${\mathbf{ITRACE}}\ge 0$
, the form of the error will be detailed on the current error message unit (see
The maximum number of steps specified has been taken (see the description of optional inputs in the integrator setup routines and the optional input continuation routine,
With the given values of
no further progress can be made across the integration range from the current point
. The components
${\mathbf{Y}}\left(1\right),{\mathbf{Y}}\left(2\right),\dots ,{\mathbf{Y}}\left({\mathbf{NEQ}}\right)$
contain the computed values of the solution at the current point
There were repeated error test failures on an attempted step, before completing the requested task, but the integration was successful as far as
. The problem may have a singularity, or the local error requirements may be inappropriate.
There were repeated convergence test failures on an attempted step, before completing the requested task, but the integration was successful as far as
. This may be caused by an inaccurate Jacobian matrix or one which is incorrectly computed.
Some error weight
became zero during the integration (see the description of
). Pure relative error control (
) was requested on a variable (the
th) which has now vanished. The integration was successful as far as
The RESID operation (see
Section 3
) set the error flag
continually despite repeated attempts by the integrator to avoid this.
on entry but the internal initialization routine was unable to initialize
${y}^{\prime }$
(more detailed information may be directed to the current error message unit, see
A singular Jacobian $\frac{\partial r}{\partial y}$ has been encountered. You should check the problem formulation and Jacobian calculation.
An error occurred during Jacobian formulation or back-substitution (a more detailed error description may be directed to the current error message unit, see
The RESID operation (see
Section 3
) signalled the integrator to halt the integration and return by setting
. Integration was successful as far as
The MONITR operation (see
Section 3
) set
and so forced a return but the integration was successful as far as
The requested task has been completed, but it is estimated that a small change in
is unlikely to produce any change in the computed solution. (Only applies when you are not operating in one step mode, that is when
${\mathbf{ITASK}}\ne 2$
The values of
are so small that D02NNF is unable to start the integration.
7 Accuracy
The accuracy of the numerical solution may be controlled by a careful choice of the parameters RTOL and ATOL, and to a much lesser extent by the choice of norm. You are advised to use scalar error control unless the components of the solution are expected to be poorly scaled. For the type of decaying solution typical of many stiff problems, relative error control with a small absolute error threshold will be most appropriate (that is, you are advised to choose ATOL small but positive).
The cost of computing a solution depends critically on the size of the differential system and to a lesser extent on the degree of stiffness of the problem; also on the type of linear algebra being
used. For further details see Section 8 in
of the documents for
(full matrix),
(banded matrix) or
(sparse matrix).
In general, you are advised to choose the Backward Differentiation Formula option (setup routine
) but if efficiency is of great importance and especially if it is suspected that
$\frac{\partial }{\partial y}\left({A}^{-1}g\right)$
has complex eigenvalues near the imaginary axis for some part of the integration, you should try the BLEND option (setup routine
9 Example
We solve the well-known stiff Robertson problem written as a differential system in implicit form
${r}_{1} = a' + b' + c'$
${r}_{2} = 0.04a - 1.0\mathrm{E}4\,bc - 3.0\mathrm{E}7\,{b}^{2} - b'$
${r}_{3} = 3.0\mathrm{E}7\,{b}^{2} - c'$
over the range
with initial conditions
and with scalar error control (
). We integrate to the first internal integration point past
), using a BDF method (setup routine
) and a modified Newton method. We treat the Jacobian as sparse (setup routine
) and we calculate it analytically. In this program we also illustrate the monitoring of step failures (
) and the forcing of a return when the component falls below
in the evaluation of the residual by setting
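For readers without the NAG Library to hand, the same Robertson system can be cross-checked in explicit form with SciPy's stiff BDF integrator. This is only a rough, hedged stand-in for the example program above, not the NAG code: the output range and tolerances below are illustrative choices, and the standard initial conditions a = 1, b = c = 0 are assumed.

    import numpy as np
    from scipy.integrate import solve_ivp

    def robertson(t, y):
        # Explicit form of the Robertson kinetics: a' + b' + c' = 0.
        a, b, c = y
        return [-0.04*a + 1.0e4*b*c,
                 0.04*a - 1.0e4*b*c - 3.0e7*b*b,
                 3.0e7*b*b]

    sol = solve_ivp(robertson, (0.0, 10.0), [1.0, 0.0, 0.0],
                    method="BDF", rtol=1.0e-6, atol=[1.0e-8, 1.0e-10, 1.0e-8])
    print(sol.t[-1], sol.y[:, -1], sol.y[:, -1].sum())   # the sum should stay very close to 1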
9.1 Program Text
9.2 Program Data
9.3 Program Results
|
{"url":"http://www.nag.com/numeric/fl/nagdoc_fl24/html/D02/d02nnf.html","timestamp":"2014-04-17T11:25:24Z","content_type":null,"content_length":"84418","record_id":"<urn:uuid:a89eebf1-551c-4219-b75a-70563793f2d4>","cc-path":"CC-MAIN-2014-15/segments/1397609527423.39/warc/CC-MAIN-20140416005207-00015-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Reseda Algebra 2 Tutor
Find a Reseda Algebra 2 Tutor
Hello, My name is Brooke. I'm a Stanford grad, and have scored perfectly on the recent SAT (2400!) and also scored in the 99th percentile when I took the test (out of 1600) in high school. I've
been teaching and tutoring for over a decade and have over 10,000 hours experience tutoring children, teens, and adults.
49 Subjects: including algebra 2, reading, writing, English
...I cannot make the butterflies in your stomach go away when delivering a speech, but I promise to help you make them "fly in formation". I enjoy teaching students the basics of playing the piano
and reading music. After teaching myself the basics of playing the piano, I took lessons from a respected jazz teacher for two years.
13 Subjects: including algebra 2, reading, English, writing
...Math is the kind of subject where you need to understand topic A to succeed in topic B. Tutoring spends that extra time catching the student up to pace so the confusion doesn't accumulate. My
most prized possession is my patience.
22 Subjects: including algebra 2, English, reading, writing
...I’ve helped students who deal with autism and Asperger's to adults who finally conquered their fear of math and fractions and went on to nursing careers. However, even though I'm an
excellent tutor who has helped hundreds of students, I'm not a miracle worker. If you come to me with only a...
45 Subjects: including algebra 2, chemistry, English, calculus
...Perfect practice makes perfect. Ultimately, I would like to get out there, meet people and help teach Math. Just reach me at my email, and I will get back to you as fast as possible.
6 Subjects: including algebra 2, calculus, algebra 1, precalculus
|
{"url":"http://www.purplemath.com/reseda_algebra_2_tutors.php","timestamp":"2014-04-18T21:59:33Z","content_type":null,"content_length":"23739","record_id":"<urn:uuid:b79d0a46-49be-488d-8bda-b0473a41f142>","cc-path":"CC-MAIN-2014-15/segments/1398223201753.19/warc/CC-MAIN-20140423032001-00080-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Teaching Math to Young Children
by Rick Garlikov
This is one of a series of webpages to help students understand math, and to help parents teach their children math -- especially to help children have a good foundation. These are not pages to teach
the mere recipes or algorithms for solving problems. My philosophy is that if you understand math as you go along, you will be able to do your homework well yourself and you should be able to do well
on exams. And although understanding takes some work, it is usually actually easier than trying to memorize rules and recall them later, especially in situations that are slightly new and different,
and especially under pressure of tests.
There ARE some things that one needs to know almost automatically and that need to be a part of your memory, but those things can be learned with practice that shouldn't be too tedious. Practice is
important to help you do math more automatically and therefore to recognize possible solutions more quickly, but you should still understand the things you memorize or know automatically.
Memorization and rote learning, by themselves, in math just won't help much by the time you get to algebra; you have to understand numbers and relationships between numbers -- the logic of numbers
and of math. It is not all that difficult if you develop a good foundation, which requires both understanding and sufficient practice to keep your understanding sharp. This essay presents ways for
parents or teachers to help children develop such a foundation.
The following are some other essays in this series.
"The Concept and Teaching of Place-Value" -- a theoretical explanation about (the problem of) teaching Place-Value to children, with a practical method given based on that explanation
"The Socratic Method: Teaching by Asking Instead of by Telling" -- an example and explanation of how to use students' (including young children's) inherent reasoning abiltiy to guide them gently and
easily into an understanding of difficult ideas
"A Supplemental Introduction to Algebra" and a separate webpage "The Way Algebra Works" -- both explain some things algebra books don't tend to tell students, but which I think are important for
students to know so that they can understand what algebra "is about" and see, in a sense, how it works
"Understanding 'Rate' Word Problems" -- a conceptual explanation of doing various kinds of math "rate" problems (e.g., "distance/speed/time" problems, problems involving combining work done by
different agents working at different rates of speed, quantity/proportion problems, etc.).
My training is in philosophy and I use a conceptual and analytic approach to teach math. I believe it is lack of conceptual understanding that causes students to have the most difficulty, along with
lack of practice manipulating quantities and numbers in certain ways to the point of being comfortable and familiar with them.
For example with regard to practice, adults often don't realize how difficult it is for young children to associate number NAMES with the QUANTITIES OF THINGS those names represent; adults have
associated quantities with numbers for so long, it seems second nature to them; but to children, it is more like the following would be to an adult, and it is just as difficult:
Imagine if I said from now on we will change all the numeral names (that is 0-9) to the ten letters of the alphabet below, which you already know in order, so this should make the tasks following
them even easier:
k, l, m, n, o, p, q, r, s, t.
How long do you think it would take you
• to learn to quickly recognize p objects by sight, or
• to count easily to lkk by m's,
[m, o, q, s, lk, lm, lo, lq, ls, mk, mm, ... , tq, ts, lkk], or
• to see easily that m plus n equals p?
You would need lots of practice; and the idea would be to make the practice be enjoyable and interesting to you.
The same is true for little children with regard to learning numbers because number names don't mean anything more to children about quantities than the above letter names for quantities do to you.
I think it is important for children and their teachers to accept and believe in the following three teaching/learning principles:
1) When the student tells the parent or teacher what s/he doesn't understand or cannot do, s/he should ALSO TELL what s/he tried to do and why. S/he should tell what s/he understands about the
problem and how s/he tried to go about solving it, and why s/he tried that. S/he should tell as specifically as s/he can where s/he thinks s/he is getting stuck. Obviously this will be more difficult
for young children to verbalize, but they should have the opportunity and the coaxing to try to communicate in some way how they are thinking and seeing a concept or idea. It is important for parents
and teachers to try to help children explain their own ideas and reasoning as much as possible.
2) It is important for the parent or teacher to (try to) understand where the child is going wrong and how the child got there, so that the teacher can correct misconceptions and so that he or she
can know what sort of answer might serve the student best.
3) When a parent or teacher believes a child has understood an idea or concept, the parent or teacher should present the child with a problem that requires a slight modification of it, in order to
see whether the child recognizes something new needs to be done and can see how to take it into account. If a child can do this, that presents better evidence the child does understand what s/he been
doing and has not just hit on a pattern that works for particular situations or has not just learned by rote how to do what you were working with them on previously.
While teachers should be looking for any indication a student does not understand something, it is also important that the student should say if s/he does not understand a parent or teacher's
explanation or answer or some part of it. There may be more the parent/teacher needs to say, or there may be some other way they need to say it so that the student can see it. "Seeing" someone else's
explanation about something, particularly in math, is not always easy; and it does not mean a student cannot learn or come to understand a principle or relationship, or that a student is not smart,
just because s/he doesn't see it the same way the teacher sees it, or the way the teacher says it the first time they give what they THINK is an adequate explanation. The teacher will just have to
try a different approach to help, or will just have to explain more about his/her initial approach. But the student needs to let the teacher know, because otherwise the teacher may not realize s/he
doesn't understand, and may keep on going in a way that gets the student really lost.
The following are some of the aspects of arithmetic where I believe a good foundation is particularly important. If you are the parents of young children, you can practice some of these things with
your children while you are in the car with them; I found that to be a good time to work with them:
1) Naming whole numbers in order (sometimes called counting, even though one may not actually be counting anything). As young children get older, they can add more numbers to the list. Notice, naming
numbers in order is different from counting, though counting sometimes involves naming numbers in order. But you can name numbers in order without counting and you can count without naming numbers in
order. (You, as an adult, can count without naming numbers because you can sometimes see five objects immediately as five, without having to "count out" each one of them -- as when playing dominoes;
or you might multiply to get a total, or you might count by two's or by 10's, etc.) But before kids can count objects by naming the numbers one at a time in order as they point to objects, they need
to learn the number names in order. So you can start with "one, two" and then add numbers as kids are able to absorb them. You can name numbers while pointing to fingers, or just reciting the
numbers, or by using nursery rhymes such as "One, two, buckle my shoe...." Give little kids plenty of practice naming numbers as high as they can go, helping them and making it fun for them, and
applauding them when they learn them. (See #5 and #6 below for typical particular problems about learning number names in order.)
2) Counting things one by one. As your children learn number names, give them practice counting things, helping them when necessary and praising them as they get it right. Counting things one by one
helps them count and it reinforces the order of number names while they are young. You can have them count candy, such as M&M's, or poker chips, or the hearts (spades, clubs, diamonds) on the face
cards in a deck of cards, or the dots on dice. If you have games like Chutes and Ladders or Monopoly, etc., it will give them lots of practice counting the dice and the squares they move past on the
game board. Eventually they will even start to see groups of squares they won't even have to count one-by-one.
3) Naming number names by groups; e.g., by 2's, by 5's, and by 10's in particular. Once they have learned to name numbers in order, teach them to name numbers by two's, then by fives, and by tens.
Once they understand WHAT it is you are teaching them, you can give them practice by the next step, #4.
4) Counting by groups; e.g., by 2's, by 5's, by 10's, etc. Make sure you get them to see how much faster it is to count out large quantities by groups, rather than one at a time. You have to point
this out to most kids or they will tend to count things one at a time and not even think about counting by groups even though they know how to count by groups; they just don't think to do it, unless
they have been told and shown at least once that it is a faster way to count.
5) My children had trouble learning what I would call the "transition" number names. They had trouble learning what comes after the 9's in the two digit numbers; e.g., after 29, 39, 49, etc., even
though they could say numbers by tens: 10, 20, 30, 40, etc. So it took additional practice working with that in particular. I had to get them to see that what came after, say, 49, when they were
counting by one's was the same thing that came after 40 when they were counting by 10's; that is, they needed lots of practice in seeing that when you "finished" the forties you went into the
fifties, when you finished the seventies you went into the eighties, etc. So we did extra practice naming numbers starting at the 7 in each "decade"; i.e., 37, 38, 39, ? 47, 48, 49, ? 87, 88, 89, ?
6) Kids also have difficulty sometimes saying numbers in order out loud because they will accidentally jump from, something like fifty-six to seventy-seven or to sixty-seven because they get confused
between changing the one's or the ten's place number. It is not a sign of any significant difficulty, but you need to watch for it so they learn not to do it.
7) Kids need to learn to read and to write numbers. This is not too difficult with single digit numbers, but it is somewhat difficult with multi-digit numbers, since the number ten, for example,
written out looks like one, zero. Kids can just learn it is 10. At this stage they don't necessarily need, and might not be able to appreciate, a rationale. You can just say something like, "I know
this looks like one, zero, but it is the way you write 'ten'." Similarly 11, etc. At some point, if they seem like they can follow it, you can show them that ten through nineteen all have a "1" on
the left side, and that all the twenties have 2's on the left side, etc., but I wouldn't get into talking about columns or place-value. If you feel they might think it interesting, you might explain
that the "teen" in each of the -teen number names is like "ten" and that the teens are like three-teen, four-teen, five-teen, and that twelve is like two-teen and so the numbers look like a ten
except for the numeral that replaces the "0" in the ten. Once you get to twenty, this is easier, and you may even want to start with it -- twenty one is written like twenty but with a one at the end;
twenty two is like twenty with a two at the end, etc. (I will get to place-value later.)
8) When your children are very young, you can very naturally, without any fanfare, introduce them to fractions by breaking a cookie into roughly two parts in front of them and saying something like:
"Here, I'll give you half a cookie and I will eat half [or I'll give your brother the other half]." Similarly with one-fourth of something when a reasonable occasion arises. Or you might give them
"half a glass of milk" and identify it as such.
9) When your children start to study fractions in school, you can make it easier for them by explaining every fraction has two parts, which, when written, are a top number that is said first, and a
bottom number which is said second (in the form you will have to explain to them --e.g., "fourTHS" instead of "four"). Let them know the bottom tells how many "pieces" you divide something up into,
and the top part tells you how many of those pieces "you have" or "you are talking about". So if you divide a cookie into halves, and you get one piece, you have one half a cookie. If you have four
people in your family and two of them are women, then two fourths of your family are females. You can ask them what fraction of their family they are, what fraction the children are, what fraction of
the legs of a dog are front legs, or left legs, or left front legs. Etc. I find kids get a real kick out of telling you all kinds of bizarre fractions like these once they catch on to seeing how to
name fractional parts of things. At some point you can also show them that fractions can be more than one whole thing, say, by breaking two cookies into halves and giving them three of the halves and
asking them how many halves they have. And helping them see that three halves then is the same as one-and-a-half cookies, just as you probably already have shown them that two halves are the same as
a whole cookie (except for some of the crumbs that fall when you break the cookie in half).
10) As they learn to add numbers, give them plenty of practice by letting them play games where they add numbers together. They can play with two or more dice, for example. Or they can play "double
war" in cards, a game where each player turns over two cards, and the player with the highest SUM wins all the cards turned over. (When a player runs out of cards to turn over, he or she picks up the
cards s/he has won and uses them. Each player keeps doing this until one player has all the cards.) Or when they are old enough to start to understand the game, they can play blackjack just for fun
without betting anything. They will like just trying to win each hand. As your children get better at adding and subtracting, you can show them neat "magic" tricks with numbers, such as how to add up
the numbers that are on the BOTTOMS of the dice they have rolled, without having to pick up the dice to see those numbers. (The opposite sides of dice add up to 7, so if the three is rolled, a four
is on the bottom; if a six is rolled, the one is on the bottom. So if you roll two dice and get a five and a three, you know that there is a two and a four on the bottom, and can sum them up to six.
Also, the opposite sides of TWO dice will add up to 14, so you could add the five and the three you see and subtract that from 14 to still get 6.) Once a kid learns how to do this trick, s/he can
amaze his/her friends, and get lots of practice. Especially if using three or four dice.
11) I believe it is important that children play games that give them practice adding single digit numbers up to sums of at least 18, since 18 is the largest number you ever get when you regroup or
"borrow" numbers by the "standard" subtraction "method" or recipe (algorithm); e.g, if you are subtracting 9 from 38, in the standard American algorithm, you change the "thirty" to "twenty with 10
ones", and that gives you 18 ones. (If you were to get 19, you would not have had to regroup in the first place, because you could have subtracted any digit from the 9 that you began with, without
having to "borrow" to do it; e.g., if you were subtracting something from 39, you would never have to "borrow" from the "thirty", since with a 9 in the one's column, you could subtract ANY number
from it in the one's column.) If you are not opposed to letting them play cards, "blackjack" or 21 is an easy and excellent game for practice in developing this particular skill.
12) Children run into great difficulty learning "place-value"-- what the different columns of numbers represent, AND WHY, etc. And many learn it only by rote (they never learn the "why"), which
causes problems later in a number of places. I think there is an easy and great way to teach place-value, and to teach about regrouping, borrowing, etc. using poker chips with different colors.
(Stacks of poker chips can also teach about fractional relationships; e.g., if you start with a stack of 32 poker chips, half of that stack is 16, half of that is 8, half of that is 4, half of that
is 2, and half of that is 1; and you can show them the relationships among the stacks: e.g., 4 is half of the stack with 8 in it, and 1/4 of the stack with 16. Etc., etc.) Plus, when they are first
learning to count, and also learning to count by two's, etc., they can count poker chips and stack them into two's, five's, ten's. So I recommend that you buy a pack or two of poker chips (be sure
they have stacks of at least three colors -- commonly red ones, white ones, and blue ones), which you can get for a few dollars a pack at some of the discount stores or at some drugstores. And I also
recommend your buying two decks of cards, since you can give kids practice in counting and adding and subtracting with them. They can count cards or count the objects on the faces, or add and
subtract the face values, in a number of different games they might play, or in a number of different tasks you might ask them to do, that they often will find fun.
13) If children have learned fractions and place-value, decimals will not be all that difficult, with some help and explanation. And once they have learned these things, percentages will be easy as well.
14) Finally, for now, you can lay some groundwork as early as kindergarten or first grade for word problems in general and for algebra later, by asking questions like "If I have a bag, and you have a
bag, and we each have the same number of things in our bags, and together we have four things, how many things do we each have?" Let the child figure it out however s/he wants to; don't make there be
some particular way to do it. As the child gets older or more sophisticated in arithmetic, you can make the question more sophisticated: "I have a bag and you have a bag, and I have twice as much as
you, and together we have nine things in our bags...." or "If we double what you have and then add three, you will have 13...." or even harder: "I have five more than you do in my bag, but if you
double what is in your bag, you will have five more than I do...." Surprisingly perhaps, kids can figure these out. Sometimes they do so by trial and error or by lucky guesses; but all of them give
them more and more practice with numbers and with relationships between numbers. And they often seem to love doing these things, at least in small doses. And they also like doing "progressions", such
as "if numbers start out going 1, 2, 4, 8, what number will be next, and HOW DO YOU KNOW?" You can quickly begin to make the progressions harder and they will still catch on. Or you can make two
different progressions in the same problem: what should come after 3, 4, 6, 8, 12? (16) And why? (There are two progressions here: 3, 6, 12, as the first, third, and fifth numbers in the series, and
4, 8, _ as the second, fourth, etc. numbers.)
15) You may have your own areas of math that you find interesting: geometry, trig, topology, etc. Try to devise games or puzzles using insights from those areas that your children might find fun to
play with and think about. There are various inexpensive math puzzle, riddle, logic, or "magic" books, and free Internet sites, available that teach many different aspects of math in different fun
ways. Simple objects can be used to teach math elements also. Nobel physicist Richard Feynman told, for example, about how when he was still in his high chair, his father would bring home color tiles
and would line them up in various ways for (and with) him so that there would be patterns, such as blue-white-blue-white-blue-white, or color patterns alternating by thirds or some other way. It was
something of just a fun game for the baby, but a game that had a deeper meaning and point to his father. As long as it stays interesting or fun for the child, I do not see any harm in it, and it
might have much educational developmental value for later.
|
{"url":"http://www.garlikov.com/math/TeachingMath.html","timestamp":"2014-04-21T15:06:43Z","content_type":null,"content_length":"24010","record_id":"<urn:uuid:c4db2bc5-b186-46df-a38f-8b743a83de50>","cc-path":"CC-MAIN-2014-15/segments/1398223205137.4/warc/CC-MAIN-20140423032005-00620-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Mathematical Dictionary and Cyclopedia of Mathematical Science
Mathematical Dictionary and Cyclopedia of Mathematical Science: Comprising Definitions of All the Terms Employed in Mathematics - an Analysis of Each Branch, and of the Whole, as Forming a Single
Science (Google eBook)
Popular passages
A Circle is a plane figure bounded by a curved line every point of which is equally distant from a point within called the center.
If two triangles have the three sides of the one equal to the three sides of the other, each to each, the triangles are congruent.
The circumference of every circle is supposed to be divided into 360 equal parts, called degrees ; each degree into 60 equal parts, called minutes ; and each minute into 60 equal parts, called
Multiply the divisor, thus augmented, by the last figure of the root, and subtract the product from the dividend, and to the remainder bring down the next period for a new dividend.
A sphere is a solid bounded by a curved surface, every point of which is equally distant from a point within called the center.
Three lines are in harmonical proportion, when the first is to the third, as the difference between the first and second, is to the difference between the second and third ; and the second is called
a harmonic mean between the first and third. The expression 'harmonical proportion...
Then multiply the second and third terms together, and divide the product by the first term: the quotient will be the fourth term, or answer.
If one angle of a parallelogram is a right angle, all the other angles are right angles, and the figure is a rectangle.
If two triangles have two sides, and the included angle of the one equal to two sides and the included angle of the other, they are equal in all their parts.
References from web pages
Mathematical dictionary and cyclopedia of mathematical science ...
SUBJECT OF, 1 Review from 1855. THINGS CONNECTED TO “Mathematical dictionary and cyclopedia of mathematical science (Book)”. HUMAN BEINGS ...
harpers.org/ subjects/ MathematicalDictionaryAndCyclopediaOfMathematicalScienceBook
Collection of Early American Mathematics Books
Main Title TM: Mathematical dictionary and cyclopedia of mathematical science, comprising definitions of all the terms employed in mathematics-- an analysis ...
www.math.gatech.edu/ ~hill/ publications/ books/ booklist.html
Amy Ackerberg-Hastings
Chapter Five. The Two Circles Will Touch Each Other Internally:. Charles Davies at the Art and Business of Teaching Geometry. ̉[W]ith scientific attainments ...
www.math.usma.edu/ people/ Rickey/ dms/ DeptHeads/ Davies%20by%20Amy%20Ackerberg-Hastings.rtf
Bibliographic information
|
{"url":"http://books.google.com/books?id=DoUMAAAAYAAJ","timestamp":"2014-04-18T13:48:03Z","content_type":null,"content_length":"145382","record_id":"<urn:uuid:f254e292-2afd-450e-8c34-a058d9152fba>","cc-path":"CC-MAIN-2014-15/segments/1397609533689.29/warc/CC-MAIN-20140416005213-00499-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Post a reply
To show that there is a root between 1.1 and 1.2, just find the value of the function at each of those points and show that there is a change of sign.
f(1.1) ≈ 0.23
f(1.2) ≈ -0.17
There is a change of sign, so (provided the function is continuous on that interval) a root must exist between the two, by the Intermediate Value Theorem.
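The same sign-change check is what drives bisection. The thread does not show the particular f(x), so the function below is a made-up stand-in with a sign change on [1.1, 1.2]; only the pattern matters.

    def f(x):
        # Hypothetical stand-in; the thread's actual f(x) is not shown.
        return 1.5 - x**3

    def bisect(func, lo, hi, tol=1.0e-6):
        assert func(lo) * func(hi) < 0, "need a sign change on [lo, hi]"
        while hi - lo > tol:
            mid = 0.5 * (lo + hi)
            if func(lo) * func(mid) <= 0:
                hi = mid
            else:
                lo = mid
        return 0.5 * (lo + hi)

    print(f(1.1), f(1.2))        # opposite signs, so a root lies in between
    print(bisect(f, 1.1, 1.2))   # about 1.1447, the cube root of 1.5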
|
{"url":"http://www.mathisfunforum.com/post.php?tid=2147&qid=20071","timestamp":"2014-04-21T07:33:50Z","content_type":null,"content_length":"19009","record_id":"<urn:uuid:08dc59c2-5ad8-4927-a930-64b264e70983>","cc-path":"CC-MAIN-2014-15/segments/1397609539665.16/warc/CC-MAIN-20140416005219-00090-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Derivation of Logistic Growth versus Groping in the dark
Since many peak oil analysts like to use Logistic growth to model peak, for Hubbert Linearization, etc., I thought to give the standard derivation of the equation a run through. Of course, the
logistic formulation comes about from studies of population dynamics, where the rate of birth and death follows strictly from the size of the population itself. This makes sense from the point of
view of a multiplying population, but not necessarily from inanimate pools of oil. In any case, the derivation starts with two assumptions, the birth and death rates:
B = B0 - B1*P
D = D0 + D1*P
We base the entire premise on the negative sign on the second term in the birth rate -- in the event of limited resources such as food, the birth rate can only decrease with the size of the population (and
the death rate correspondingly increases).
The next step involves writing the equation for population dynamics as a function of time.
dP/dt = (B-D)*P
This provides the underpinnings for exponential growth, however critically modulated by the individual birth and death functions. So if we expand the population growth rate, we get:
dP/dt = (B0-B1*P-D0-D1*P)*P = (B0-D0)*P - (B1+D1)*P^2
which matches the classic Logistic equation formulation:
dP/dt = rP*(1 - P/P[infinity])
where P[infinity] becomes the carrying capacity of the environment. So the leap of faith needed to apply this to oil depletion comes about from analogizing population to a carefully chosen resource variable. The one that history has decided to select, cumulatively extracted oil, leads to the classical bell-shaped curve for instantaneous extraction rate, i.e. the derivative dP/dt. (Note that we can throw out the death term because it doesn't really mean anything.)
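A small numerical sketch of this equation (r and the carrying capacity K below are arbitrary illustrative values, not fitted to oil data) shows the S-shaped saturation under forward-Euler integration:

    # Forward-Euler integration of the logistic equation dP/dt = r*P*(1 - P/K).
    r, K = 0.5, 1000.0            # growth rate and carrying capacity (P[infinity])
    P, dt = 10.0, 0.01            # initial 'population' and time step

    for step in range(4000):      # integrate 40 time units
        P += dt * r * P * (1.0 - P / K)

    print(round(P, 2))            # ~1000: the curve saturates at the carrying capacity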
I have always had issues with both the upward part of the logistic curve derivative and the decline part. Trying to rationalize why instantaneous production would initially rise proportionally to the cumulative production only makes sense if oil itself undergoes the exponential growth. But we know that oil does not mate with itself as biological entities would, so the growth really has to do with human population increase (or oil corporation growth) causing the exponential rise. That remains a big presumption of the model. The decline has a significant interpretation hurdle as well. Why exactly the rate of growth after we start approaching and bypassing peak has that peculiar non-linear modifier doesn't make a lot of sense; the human population hasn't stabilized as of yet (even though oil company growth certainly has, technically declining significantly through mergers and acquisitions). We really have to face that a lot of apples-and-oranges assumptions flow into this interpretation.
In the end, using the Logistic curve only makes sense as a cheap heuristic, something that we can get a convenient analytical solution from. It fits into the basic class of solutions similar to the
"drunk looking for his car-keys under the lamp-post" problem. Somebody asks the drunk why he chose to look under the lamp-post.
"Of course, that's where the light is"
. I have fundamental problems with this philosophy and have made it a challenge to myself to seek something better; if that means groping around in the dark, what the heck.
2 Comments:
The correlation of oil growth with population growth makes sense. On my darker days, the correlation makes sense on the down side too.
That said, your modeling is the best I've seen on the geology of oil depletion.
Professor General Public Warehousing said...
Good. The more you use insipid terms like 'organize, effectively, optimize, synergy', the more you sound like............
"Like strange bulldogs sniffing each other's butts, you could sense wariness from both sides"
|
{"url":"http://mobjectivist.blogspot.com/2007/03/derivation-of-logistic-growth-versus.html","timestamp":"2014-04-18T08:26:10Z","content_type":null,"content_length":"36961","record_id":"<urn:uuid:8527a4d2-8a75-417f-8edc-97c9bff6ed13>","cc-path":"CC-MAIN-2014-15/segments/1397609533121.28/warc/CC-MAIN-20140416005213-00036-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Answer following questions:
1. For a population with m = 50 and s = 10, the z-score corresponding to X = 45 is z = +0.50
True or false
2. For any population, a z-score of +1.00 corresponds to a location above the mean by one standard deviation.
True or false
3. If two individuals in a population have identical X scores, they also will have identical z-scores.
True or False
4. Under what circumstances would a score that is 15 points above the mean be considered an extreme score, far out in the tail of the distribution?
5. For an exam with a mean of M = 74 and a standard deviation of s = 8, Mary has a score of X = 80, Bob’s score corresponds to z = +1.50, and Sue’s score is located above the mean by 10 points. If
the students are placed in order from smallest score to largest score, what is the correct order?
6. What proportion of a normal distribution is located between z = 1.00 and z = 1.50?
7. In any normal distribution, what are the z-score boundaries for the middle 50% of the distribution?
8. For a normal distribution with m = 100 and s = 5, what is the probability of selecting a score between X = 90 and X = 110? In symbols, what is p(90 < X < 110)?
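For reference, a short sketch of how several of these quantities would be computed (assuming scipy is available; these are standard normal-curve calculations, not an answer key):

    # z-scores and standard-normal proportions for questions of the kind above.
    from scipy.stats import norm

    mu, sigma = 50, 10
    print((45 - mu) / sigma)                              # z = (45 - 50)/10 = -0.5
    print(norm.cdf(1.5) - norm.cdf(1.0))                  # ~0.0919 between z = 1.00 and z = 1.50
    print(norm.ppf(0.25), norm.ppf(0.75))                 # ~ -0.674 and +0.674 bound the middle 50%
    print(norm.cdf(110, 100, 5) - norm.cdf(90, 100, 5))   # ~0.9545 for P(90 < X < 110)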
|
{"url":"http://www.coursehero.com/tutors-problems/Statistics-and-Probability/6841360-Answer-following-questions-1-For-a-population-with-m-50-and-s/","timestamp":"2014-04-17T09:37:14Z","content_type":null,"content_length":"36359","record_id":"<urn:uuid:88d31ff7-a940-4d69-afc0-7a3edf605ab9>","cc-path":"CC-MAIN-2014-15/segments/1397609527423.39/warc/CC-MAIN-20140416005207-00061-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Meeting Details
For more information about this meeting, contact Stephen Simpson.
Title: Introduction to mass problems
Seminar: Logic Seminar
Speaker: Stephen G. Simpson, Pennsylvania State University
In an important and influential 1932 paper, Kolmogorov proposed a "calculus of problems" and noted its similarity to the intuitionistic propositional calculus of Brouwer and Heyting. According to
Kolmogorov's informal idea, one problem is said to be "reducible" to another if any solution of the second problem can easily be transformed into a solution of the first problem. In 1955 Kolmogorov's
doctoral student Medvedev developed a rigorous elaboration of Kolmogorov's informal idea, in terms of Turing's theory of computability and unsolvability. A mass problem was defined to be a set of
Turing oracles, i.e., an arbitrary subset of the Cantor space. A mass problem was said to be solvable if it contains at least one computable member. One mass problem was said to be reducible to
another if there exists a partial computable functional carrying all members of the second problem to members of the first problem. This reducibility notion is now known as strong reducibility, in
contrast to the weak reducibility of Muchnik 1963, who required only that for each member of the second problem there exists a member of the first problem which is Turing reducible to it. On this
basis Medvedev and Muchnik respectively noted that the strong and weak degrees of unsolvability form Brouwerian lattices. Subsequently these lattices turned out to be an extremely useful tool in the
classification of unsolvable problems arising in several areas of mathematics, including most recently dynamical systems. The purpose of this talk is to introduce these ideas, which will be
elaborated further in subsequent talks.
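In symbols (a brief restatement of the two reducibilities just described, with P and Q subsets of the Cantor space 2^omega regarded as sets of oracles): P <=_s Q (Medvedev, strong) iff there is a partial computable functional \Phi with \Phi(g) \in P for every g \in Q, while P <=_w Q (Muchnik, weak) iff for every g \in Q there is some f \in P with f <=_T g.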
Room Reservation Information
Room Number: MB106
Date: 10 / 16 / 2007
Time: 02:30pm - 03:45pm
|
{"url":"http://www.math.psu.edu/calendars/meeting.php?id=489","timestamp":"2014-04-19T00:39:50Z","content_type":null,"content_length":"4625","record_id":"<urn:uuid:3653b4ca-0fc2-4b87-89b8-16291db703b4>","cc-path":"CC-MAIN-2014-15/segments/1397609535535.6/warc/CC-MAIN-20140416005215-00450-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Storing Formulae
March 24th 2007, 08:31 PM #1
Storing Formulae
Do you think it would be a good idea to store complicated and/or important formulas such as the root-finding ones?
What I essentially mean is that you type the formula in then store it, then type something like insert x=1 into the formula.
(I have a TI-89)
How would I go about doing this?
Do you think it would be a good idea to store complicated and/or important formulas such as the root-finding ones?
What I essentially mean is that you type the formula in then store it, then type something like insert x=1 into the formula.
(I have a TI-89)
How would I go about doing this?
It depends on three things:
1) What kind of work you tend to do and
2) How complicated the formula is and
3) How you are storing the formula.
1) If you are ceaslessly doing the same calculation over and over again, of course it makes sense. Everybody I know does this. However, if you are in school and you simply want to store a way to
solve quadratic equations for you then I would recommend that you don't. The purpose of schoolwork is practice. Writing a program to solve the problems for you is a good exercise, but defeats the
purpose of the homework.
2) If you have a simple formula for what you need then you can simply store it as a formula. For example if you are finding many points on the function sqrt{x^3 + 3x^2} + exp{sin(5x + 4)/x!}
then, yeah, I'd write out the function on the screen and automate it. You should be able to copy and paste the line above and put a "|x = 3" (for example) after each line to evaluate it at a
specific x value.
3) For a genuinely complicated formula you can go into the editor mode and define either a function or write a program to do it for you. For example if you are trying to find the exact solutions
of a quartic polynomial, a simple process like I described in 2) is going to be impossible. But you can write a program to do it if you have the formula handy.
Unfortunately I have a TI-83 and a TI-92, not a TI-89. The 89 is similar to the 92 but I don't know how close the details are, so I can't help you with the specifics.
Thanks, I didn't know you could simply use |x=n to substitute in...
I also just discovered a few other things with it. I think it's a great, convenient thing to do!
one additional remark:
If you store the (for instance topsquark's) term sqrt(x^3 + 3x^2) + exp(sin(5x + 4)/x!) into y1(x) then you can get the value of this term for x = 3 by typing y1(3) [Enter]. That means your TI89
is calculating the y-value of a function.
AFAIK, the 89 is the 92 minus the qwerty keyboard and a bit of speed.
And an invaluable tip to all 89 users: there are copy and paste buttons (the yellow diamond and two other top-row, left-side keys) that you can use to copy functions off the upper part of the
home screen and into the y= editor. You can also highlight text similar to computers by pressing and holding either the yellow diamond or alpha key (don't remember which) and using the arrows if
you want to copy stuff from the entry area.
|
{"url":"http://mathhelpforum.com/calculators/12925-storing-formulae.html","timestamp":"2014-04-17T20:15:09Z","content_type":null,"content_length":"45761","record_id":"<urn:uuid:63a13b09-15a1-49f7-9ea6-57fc9a57b449>","cc-path":"CC-MAIN-2014-15/segments/1398223206647.11/warc/CC-MAIN-20140423032006-00278-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Rendering Magnetic Fields - Math and Physics
So I'm making a game involving electric and magnetic fields and I'm running into an issue. What is the best (and physically accurate) way to calculate and render the magnetic field lines resulting
from a dipole? I need to be able to represent these mathematically because I plan on having arrows traveling along the lines so that direction and intensity can be inferred by the user. Something
that is parametric would be great but I haven't been able to come across anything up to this point.
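One common approach (a sketch under simple assumptions, not a recipe from this thread) is to evaluate the ideal-dipole field, B proportional to (3(m.rhat)rhat - m)/r^3, and trace each field line by repeatedly stepping a short distance along the local field direction; the same vectors give the arrow directions and, via |B|, the intensity:

    # Trace a 2-D magnetic dipole field line by stepping along the normalized field.
    # The dipole moment and units are arbitrary; only the line shape matters here.
    import numpy as np

    m = np.array([0.0, 1.0])                     # dipole moment along +y

    def B(p):
        r = np.linalg.norm(p)
        rhat = p / r
        return (3.0 * np.dot(m, rhat) * rhat - m) / r**3

    def field_line(start, step=0.01, max_steps=5000):
        pts = [np.array(start, dtype=float)]
        for _ in range(max_steps):
            b = B(pts[-1])
            pts.append(pts[-1] + step * b / np.linalg.norm(b))   # unit step along B
            if np.linalg.norm(pts[-1]) < 0.05:                   # stop near the dipole
                break
        return np.array(pts)

    line = field_line((0.3, 0.0))
    print(line.shape)            # an (N, 2) polyline ready to render with arrows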
|
{"url":"http://www.gamedev.net/topic/632545-rendering-magnetic-fields/","timestamp":"2014-04-18T10:39:53Z","content_type":null,"content_length":"96467","record_id":"<urn:uuid:b5b86eb0-2622-4d36-8e7c-6b362d8a87ba>","cc-path":"CC-MAIN-2014-15/segments/1397609533308.11/warc/CC-MAIN-20140416005213-00109-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Solving by Interpolation
Date: 05/30/2001 at 05:04:47
From: Rosly
Subject: Exponential equation
Dear Dr. Math,
Given y = 10 and b = 1.419, find X from this equation:
y = ( b^(-0.25X)) + 0.84X
I have X, but when I replaced in the equation above, I couldn't get
y = 10.
I really appreciate your consultation.
Date: 05/30/2001 at 16:07:00
From: Doctor Rob
Subject: Re: Exponential equation
Thanks for writing to Ask Dr. Math, Rosly.
y = 1.419^(-0.25*X) + 0.84*X
= (1.419^[-0.25])^X + 0.84*X
y = 0.91623^X + 0.84*X
This equation, and many other similar ones, cannot be solved
algebraically. You will have to find the answer numerically.
One way to do this is by the method of interpolation. Find values of X
that make y < 10 and y > 10. Call them X1 and X2, respectively. The
actual value of X lies between them. Call the corresponding values of
y y1 and y2, respectively. Then compute a new value
X3 = X1 + (X2-X1)*(10-y1)/(y2-y1)
This is the y-intercept of the straight line through (X1,y1) and
(X2,y2). If the curve is straight, this will be the answer, and even
if it is not, it is closer to the answer than one of X1 or X2.
Compute the corresponding y3. Replace (X1,y1) by (X2,y2) and (X2,y2)
by (X3,y3). Repeat this until you get as much accuracy as you need.
For example,
X y
10 8.817
12 10.430
11.46687 9.99887
11.468267964 9.999998975
11.46826923329 10.00000000000
This is the answer to 11 decimal places. The actual value is about
11.46826923328992, according to Mathematica(TM).
You can see that this process converges quite rapidly to the
correct answer.
- Doctor Rob, The Math Forum
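A short numerical sketch of the iteration described above (the function and constants are taken from the text; the guard simply stops once machine precision is reached):

    # Repeated linear interpolation for 1.419^(-0.25*X) + 0.84*X = 10, following
    # X3 = X1 + (X2 - X1)*(10 - y1)/(y2 - y1) and the replacement rule above.
    def y(X):
        return 1.419 ** (-0.25 * X) + 0.84 * X

    X1, X2 = 10.0, 12.0                     # y(X1) < 10 < y(X2)
    for _ in range(8):
        y1, y2 = y(X1), y(X2)
        if y2 == y1:                        # no further progress possible
            break
        X3 = X1 + (X2 - X1) * (10 - y1) / (y2 - y1)
        X1, X2 = X2, X3                     # replace (X1,y1) by (X2,y2) and (X2,y2) by (X3,y3)

    print(X2)                               # ~11.4682692333, matching the table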
|
{"url":"http://mathforum.org/library/drmath/view/54598.html","timestamp":"2014-04-20T16:01:35Z","content_type":null,"content_length":"6577","record_id":"<urn:uuid:b0c2f930-51bf-4b8b-9282-1f86002c7d5d>","cc-path":"CC-MAIN-2014-15/segments/1397609538824.34/warc/CC-MAIN-20140416005218-00474-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Normed combinatorial homology
and noncommutative tori
Marco Grandis
Cubical sets have a directed homology, studied in a previous paper and consisting of preordered abelian groups, with a positive cone generated by the structural cubes. By this additional information,
cubical sets can provide a sort of `noncommutative topology', agreeing with some results of noncommutative geometry but lacking the metric aspects of C*-algebras. Here, we make such similarity
stricter by introducing normed cubical sets and their normed directed homology, formed of normed preordered abelian groups. The normed cubical sets NC_\theta associated with `irrational' rotations
have thus the same classification up to isomorphism as the well-known irrational rotation C*-algebras A_\theta.
Keywords: Cubical sets, noncommutative C*-algebras, combinatorial homology, normed abelian groups
2000 MSC: 55U10, 81R60, 55Nxx
Theory and Applications of Categories, Vol. 13, 2004, No. 7, pp 114-128.
TAC Home
|
{"url":"http://www.emis.de/journals/TAC/volumes/13/7/13-07abs.html","timestamp":"2014-04-18T13:54:25Z","content_type":null,"content_length":"2397","record_id":"<urn:uuid:ccd1c914-2a2e-4390-8b55-4a5f3bfb122a>","cc-path":"CC-MAIN-2014-15/segments/1397609533689.29/warc/CC-MAIN-20140416005213-00372-ip-10-147-4-33.ec2.internal.warc.gz"}
|
PICList Thread
'[PICLIST] [PIC] Timer Tribulations'
2001\02\22@033933 by Vasile Surducan
James, I have wondered many times why it is very difficult to obtain a long-term accurate clock even using a 32768 or multiple Xtal.
The answer is in the code, but not only there. External temperature dramatically affects the xtal stability. It's imperative to use external trimpot circuitry and to measure and adjust the Xtal frequency to the value you take into account.
For code I'm not an expert, but I've made a 16x84 web clock collection from which you may take inspiration at http://www.geocities.com/vsurducan/pic.htm
Also Jinx ( Joe Colquitt ) is an expert in clocks...
One of my precious clock have Xtal thermostated at about 50...60 C and
works in harsh environement: external temperatures between -20C and +45C
On Tue, 20 Feb 2001, James wrote:
{Quote hidden}
http://www.piclist.com hint: The list server can filter out subtopics
(like ads or off topics) for you. See http://www.piclist.com/#topics
2001\02\22@034404 by Vasile Surducan
On Wed, 21 Feb 2001, Tony Nixon wrote:
> You will find that the clock will drift one way or the other depending
> on temperature, so using the mains as a frequency source is much more
> accurate.
You are certainly joking ! Or in your country mains frequency drift is
under 0.5% ...
http://www.piclist.com hint: The list server can filter out subtopics
(like ads or off topics) for you. See http://www.piclist.com/#topics
2001\02\22@061314 by Alan B. Pearce
> (4) Must "preload" TMR0 with literal 6 so it will have a cyclic period of
>250 (256-6). But wait, must compensate by loading TMR0 with 6+2 = 8 because
>the timer increment is latent for
> 2 instruction cycles after a write to TMR0 (see data sheet)
I have a feeling this should be 6 - 2 = 4 because it is a count up timer that
interrupts on overflow, and you need to take account of the number of
instructions in your interrupt routine before you reload it.
http://www.piclist.com hint: The list server can filter out subtopics
(like ads or off topics) for you. See http://www.piclist.com/#topics
2001\02\22@070500 by Bob Ammerman
In USA long term mains drift is _very_ close to zero.
The power grid will deliberately adjust the mains frequency to compensate
for previous errors.
Mains driven clocks keep good time nearly indefinitely (barring power
failure, which is very rare except for local problems due to storms, etc.)
[And California's problems due to stupidity :-) ].
Bob Ammerman
RAm Systems
(contract development of high performance, high function, low-level
{Original Message removed}
2001\02\22@071658 by Vasile Surducan
Just for comparison, you have 120V/60Hz, isn't it?
Do you have official info ( or a long-time measurement ) on how small the mains frequency drift is? To me it sounds incredible...
On Thu, 22 Feb 2001, Bob Ammerman wrote:
{Quote hidden}
> {Original Message removed}
2001\02\22@072325 by Bob Ammerman
As I said before:
Do _NOT_ preload TMR0.
Instead ADD to TMR0.
For a period of 250 cycles the correct value is:
256-250+2 == 8
The +2 is needed because the timer doesn't update for two cycles when it is
written to (by the ADD).
I have done this. It WORKS!
Bob Ammerman
RAm Systems
(contract development of high performance, high function, low-level
http://www.piclist.com hint: The list server can filter out subtopics
(like ads or off topics) for you. See http://www.piclist.com/#topics
2001\02\22@081637 by Michael Rigby-Jones
{Quote hidden}
Bob is perfectly correct. By adding a constant to the timer, you
effectively remove the only variable in the problem: the interrupt latency.
http://www.piclist.com hint: The list server can filter out subtopics
(like ads or off topics) for you. See http://www.piclist.com/#topics
2001\02\22@093015 by Thomas McGahee
Mains frequency drift is not CUMULATIVE over any given
24 hour period. It is monitored by the power company
and compensated for. Otherwise AC electric clocks
would be absolutely worthless.
Fr. Tom McGahee
{Original Message removed}
2001\02\22@160150 by Drew Ames
>project is to make . . . yet another clock, but this "time" using a 4.000
MHZ xtal, which is not
>the most natural choice, but offers some
>(1) XTAL is 4.000MHZ
>(2) Instruction cycle is therefore 4.000/4 = 1.000 MHZ
Here's another thought (it's not my original idea, but I can't remember
where I heard it first)...
At 1MHz, 256cycles and a prescaler of 64, there will be approximately
61.03515625 overflows per second.
Let's call that 61 and clock a second every 61 overflows.
So, every minute, we are out by 2.109375 overflows. At the time we clock
over 1 minute, subtract 2 overflows or count up to 63 overflows, your choice
So, every hour, we are out by 6.5625 overflows. At the time we clock over 1
hour, subtract 6 overflows as with the minutes.
So, every day, we are out by 13.5 overflows. At the time we clock over a
day, subtract 13 overflows as with the minutes and hours.
So, every week, we are out by 3.5 overflows. At the time we clock over a week, subtract 3 overflows as before.
So, every year, we are out by exactly 26 overflows, which means the clock is perfectly accurate if the crystal is perfectly on 4MHz!
This technique can be worked backwards if you have a measured time interval
and know how far the clock is off, you can substitute the 'real' clock
value and tune the offsets.
Drew & Karen Ames
Home E-Mail: .....drewKILLspam.....rebel.net.au or EraseMEkarenspam_OUTTakeThisOuTrebel.net.au
Business E-Mail: damesspam_OUTsyserv.com.au
http://www.piclist.com hint: The list server can filter out subtopics
(like ads or off topics) for you. See http://www.piclist.com/#topics
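A quick check of the arithmetic in the scheme above (a sketch; a 'year' is taken as exactly 52 weeks, as the final figure implies):

    # Cascade of corrections for 61.03515625 timer overflows per true second.
    rate = 1_000_000 / 16384                    # overflows per second at 1 MHz / 64 / 256

    per_minute = 60 * rate - 60 * 61            # 2.109375 -> correct by 2 each minute
    per_hour   = 60 * per_minute - 60 * 2       # 6.5625   -> correct by 6 each hour
    per_day    = 24 * per_hour - 24 * 6         # 13.5     -> correct by 13 each day
    per_week   = 7 * per_day - 7 * 13           # 3.5      -> correct by 3 each week
    per_year   = 52 * per_week - 52 * 3         # 26.0     -> correct by 26 each year

    print(per_minute, per_hour, per_day, per_week, per_year)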
'[OT]: Re: [PIC] Timer Tribulations'
2001\02\22@165102 by Tony Nixon
Vasile Surducan wrote:
> On Wed, 21 Feb 2001, Tony Nixon wrote:
> > You will find that the clock will drift one way or the other depending
> > on temperature, so using the mains as a frequency source is much more
> > accurate.
> You are certainly joking ! Or in your country mains frequency drift is
> under 0.5% ...
> Vasile
Perhaps a bit of AC math about power consumption in a system as large as
a state grid may reveal why power companies try to keep the frequency as
stable as possible.
I'm definitely no expert here, but I'll bet it is definitely in the
power company's best interest.
Upteen million clocks around the world wouldn't be using the principle
if it was unreliable.
Best regards
http://www.piclist.com hint: The list server can filter out subtopics
(like ads or off topics) for you. See http://www.piclist.com/#topics
The US is hooked up into a single power grid so that power can be shared
from one region to another over long distances. In order to do this easily,
it is necessary for all the generators not only to be producing the same
frequency, but the same phase as well.
AC motors and generators (which are pretty nearly the same thing) are a lot
like stepping motors we are familiar with. For a given phase of the power,
their rotors want to be in a corresponding physical position.
If you tie two generators together that are not in phase, the one that's
ahead will supply current to the one that's behind, which will act as a
motor trying to catch up (which will also load down the one that's ahead,
slowing it down). To do this, they will exchange current -- a lot of
current, because they will want to do this as nearly instantaneously as they can.
So the load is getting heavy, and someone goes to switch on another
generator to handle the load, but it's out of phase. Humongous relays clang
closed, and huge currents rush about as rotating machines as big as houses
jump and buck and make the ground shake, and the lights dim.
Actually what happens is circuit breakers take both generators off line
before something breaks, and now the grid really is overloaded...
Power companies pay a whole lot of attention to frequency and phase. They
coordinate this, and keep the whole grid locked to very precise frequency
standards. Sometimes when the load is heavy, they do get behind a few
cycles, but later on when the load lightens, they speed up. If you observe
an electric wall clock carefully and compare it to something like WWV, you
may see it get a few seconds behind on a hot summer afternoon, but it will
be caught up by the next morning.
So the grid, at least in the US has great long-term stability, but only
pretty good short-term.
> {Original Message removed}
'[PICLIST] [PIC] Timer Tribulations'
2001\02\22@191446 by Bob Ammerman
My favorite way to handle this 'non-commensurate' intervals problem is to
steal a concept from the Bresenham line drawing algorithm.
Using the current case: 4.00 MHz crystal
1Mhz instruction rate
256 cycles and prescaler of 64
each overflow of the timer represents
256*64 == 16384 microseconds
You start with a counter set to 1,000,000 (1 second in microseconds)
On each timer interrupt you subtract 16384 from the counter.
If the counter goes negative you update the time by one second and then add
1,000,000 back in to the counter.
This technique will work for any interval. It can be made perfect for
intervals that have a rational relationship to the instruction cycle time,
and can be arbitrarily close to perfect even for irrational ratios (anybody got
any SQRT(2) Mhz crystals handy?)
Bob Ammerman
RAm Systems
(contract development of high performance, high function, low-level
http://www.piclist.com hint: The list server can filter out subtopics
(like ads or off topics) for you. See http://www.piclist.com/#topics
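A quick simulation of the accumulator described above (a sketch in Python rather than PIC code; the constants follow the post):

    # Bob Ammerman's scheme: subtract 16384 us per timer overflow and add
    # 1_000_000 us back whenever the counter goes negative, counting one second.
    TICK_US, SECOND_US = 16384, 1_000_000

    counter, seconds, overflows = SECOND_US, 0, 0
    while seconds < 10:
        overflows += 1
        counter -= TICK_US
        if counter < 0:
            counter += SECOND_US
            seconds += 1

    # Each counted second lands within one overflow period (16.384 ms) of true
    # time, and the error is bounded rather than cumulative.
    print(overflows, overflows * TICK_US)       # 611 overflows, 10_010_624 us elapsed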
'[OT]: Re: [PIC] Timer Tribulations'
2001\02\22@233041 by Nikolai Golovchenko
In Ukraine these clocks just don't work correctly. Currently we have
the mains frequency at about 49.3 Hz, and if you assume the frequency
is 50 Hz... :)
---- Original Message ----
From: Tony Nixon <KILLspamTony.NixonKILLspamENG.MONASH.EDU.AU>
Sent: Thursday, February 22, 2001 23:49:33
To: RemoveMEPICLISTTakeThisOuTMITVMA.MIT.EDU
Subj: [OT]: Re: [PIC] Timer Tribulations
{Quote hidden}
http://www.piclist.com hint: The list server can filter out subtopics
(like ads or off topics) for you. See http://www.piclist.com/#topics
2001\02\22@233538 by Tony Nixon
Nikolai Golovchenko wrote:
> In Ukraine these clocks just don't work correctly. Currently we have
> the mains frequency at about 49.3 Hz, and if you assume the frequency
> is 50 Hz... :)
> Nikolai
It'll give you more time to relax ;-)
Best regards
http://www.piclist.com hint: The list server can filter out subtopics
(like ads or off topics) for you. See http://www.piclist.com/#topics
2001\02\22@233809 by Herbert Graf
The thing to remember is it's not the actual frequency accuracy that counts
so much, it's the synching of phase that does. If they run the grid at
49.3Hz that's fine, as long as every generator is running at that frequency
with the same relative phase. TTYL
> {Original Message removed}
'[PICLIST] [PIC] Timer Tribulations'
2001\02\23@010326 by Nikolai Golovchenko
I think an easier way to cope with the error is simply to count time
in time units instead of overflows. For example,
timer is not reloaded (256 cycle period)
Each overflow takes
T = 1/(fosc/4/prescaler/256) = (256 * prescaler) / (fosc/4)  [s]
In our case,
T=0.016384 [s]
So, on each timer overflow, we add this time to the time accumulator.
When the accumulator becomes equal or higher than 1, we increment the
seconds counter and subtract one from the accumulator.
The accuracy depends on how well the T constant is approximated. In
binary, the T value 0.016384 looks like infinite series:
Say we want to consider only 24 bits after the decimal point. The
error then is 0.00003%, which is probably good enough :)
constant=round(T*2^24)=0431BE (hex)
Now the code would look like:
movlw 0xBE
addwf time0, f              ; add low byte of the constant
movlw 0x31
skpnc                       ; carry out of time0?
movlw 0x32                  ; yes: propagate it into the middle byte
addwf time1, f
movlw 0x04
skpnc                       ; carry out of time1?
movlw 0x05                  ; yes: propagate it into the high byte
addwf time2, f
skpnc                       ; carry out of time2: accumulator passed one second
goto increment_seconds
There is no need to initialize the time0:2 accumulator, because we
don't really care at which moment the first second ticks.
As a bonus, this approach solves the problem of which time of the day
to do correction :)
---- Original Message ----
From: Drew Ames <RemoveMEdrewTakeThisOuTREBEL.NET.AU>
Sent: Thursday, February 22, 2001 13:30:42
To: PICLISTEraseME.....MITVMA.MIT.EDU
Subj: [PIC] Timer Tribulations
{Quote hidden}
http://www.piclist.com hint: The PICList is archived three different
ways. See http://www.piclist.com/#archives for details.
'[OT]: Re: [PIC] Timer Tribulations'
2001\02\23@033439 by Vasile Surducan
On Fri, 23 Feb 2001, Tony Nixon wrote:
> Nikolai Golovchenko wrote:
> >
> > In Ukraine these clocks just don't work correctly. Currently we have
> > the mains frequency at about 49.3 Hz, and if you assume the frequency
> > is 50 Hz... :)
> >
> > Nikolai
> It'll give you more time to relax ;-)
I don't know why, but you ( americans ) look more relax than we are...
Even when you have painted the moon with coca-cola sign, you found it
already painted in red... at 49,3 Hz
Zdrasvuite Nicolai,
Cheers Tony,
http://www.piclist.com hint: The PICList is archived three different
ways. See http://www.piclist.com/#archives for details.
2001\02\23@052917 by Roman Black
Hey! All this talk about clocks and 50Hz/60Hz mains
timing and getting crystal clocks accurate just
gave me a clever idea!
Using mains sync is probably the best for accuracy,
but has problems due to needing to provide a mains
supply and then insulate high voltages or use
a transformer etc. Just not practical for a
battery powered PIC device on the wall.
So how about this: Every time I use a high gain
input on some circuit and put the cro on it there
is always some 50Hz mains freq clearly visible
on the signal. Yes even on battery driven circuits.
Seems the average house has quite enough mains driven
devices and mains wires running through all the
walls that there is a very definite 50Hz/60Hz
RF signal everywhere in suburbia.
So... How about a simple two transistor high-gain
amp (or op amp) tuned for about 50Hz. This could
be fed into the PIC pin. Sure there would be some
ocassional false triggers but the PIC could be
smart enough to log input triggers in software
and "decide" which are the real ones based on the
general timer period which is always known.
If done correctly you could have a battery powered
PIC device anywhere in your home, mains sync'ed
and running with perfect time accuracy. And never
have to actually connect it TO the mains.
So this is the point where some smartie normally
says they already thought of it and have been
making them for years... ;o)
Tony Nixon wrote:
> > > You will find that the clock will drift one way or the other depending
> > > on temperature, so using the mains as a frequency source is much more
> > > accurate.
> >
> > You are certainly joking ! Or in your country mains frequency drift is
> > under 0.5% ...
http://www.piclist.com hint: The PICList is archived three different
ways. See http://www.piclist.com/#archives for details.
2001\02\23@060754 by Roman Black
Nikolai Golovchenko wrote:
> In Ukraine these clocks just don't work correctly. Currently we have
> the mains frequency at about 49.3 Hz, and if you assume the frequency
> is 50 Hz... :)
> Nikolai
Ha ha! So they are ripping you off 0.7Hz!!
You are getting less than what you pay for,
I wonder if that applies, seriously??? :o)
http://www.piclist.com hint: The PICList is archived three different
ways. See http://www.piclist.com/#archives for details.
'[PICLIST] [PIC] Timer Tribulations'
2001\02\23@061421 by Roman Black
Nikolai Golovchenko wrote:
{Quote hidden}
Doesn't Bob's Bresenheim method have zero error
and only needs one 24bit add every second?? :o)
http://www.piclist.com hint: The PICList is archived three different
ways. See http://www.piclist.com/#archives for details.
'[OT]: Re: [PIC] Timer Tribulations'
2001\02\23@062040 by Roman Black
Vasile Surducan wrote:
{Quote hidden}
Ha ha! Friendly competition!! :o)
Hey, I just realised, the Coca Cola sign already
IS red!! Is that telling us something??? ;o)
http://www.piclist.com hint: The PICList is archived three different
ways. See http://www.piclist.com/#archives for details.
'[PICLIST] [PIC] Timer Tribulations'
2001\02\23@074907 by Nikolai Golovchenko
Maybe, I don't know about that method. In fact, you could do the
calculation at about the 1 Hz rate if you wish. The idea is to count
the actual time between overflows. Well, it might be any rate. At
lower than 1 Hz the seconds are updated too rarely. Above 1 Hz is the
best. But I doubt about absolute zero error for every case. The
overflow period should be represented as a finite binary fixed-point
number to achieve that (like 0.75 s, or 0.125 s). But as long as error
in software is reasonably low (lower than crystal) you are okay with
any crystal.
---- Original Message ----
From: Roman Black <RemoveMEfastvidTakeThisOuTspamEZY.NET.AU>
Sent: Friday, February 23, 2001 13:15:22
To: EraseMEPICLISTspamspamBeGoneMITVMA.MIT.EDU
Subj: [PIC] Timer Tribulations
> Doesn't Bob's Bresenheim method have zero error
> and only needs one 24bit add every second?? :o)
> -Roman
http://www.piclist.com hint: The PICList is archived three different
ways. See http://www.piclist.com/#archives for details.
'[OT]: Re: [PIC] Timer Tribulations'
2001\02\23@074924 by Nikolai Golovchenko
Well, isn't Tony from Australia? :)
I guess people in California are supposed to be most relaxed ... for a
number of reasons... :)
Let the light be!
---- Original Message ----
From: Vasile Surducan <RemoveMEvasileKILLspamL30.ITIM-CJ.RO>
Sent: Friday, February 23, 2001 10:17:34
To: PICLISTSTOPspamspam_OUTMITVMA.MIT.EDU
Subj: [OT]: Re: [PIC] Timer Tribulations
{Quote hidden}
http://www.piclist.com hint: The PICList is archived three different
ways. See http://www.piclist.com/#archives for details.
2001\02\23@074929 by Nikolai Golovchenko
They rip us anyway!
Usually houses don't have gas, heat or water counters. So everyone
pays an "average" rate relative to apartment size. I heard the tenants
of an apartment house installed the counters, and the actual payment
should have been something like 20% less. But no one lets them pay less.
Seriously, the problem with low frequency, as far as I know, deals
with overloaded power system. There are shortages of fuel, etc... And
also someone decided to shut down the Chernobyl nuclear station. :)
---- Original Message ----
From: Roman Black <spamBeGonefastvidSTOPspamEraseMEEZY.NET.AU>
Sent: Friday, February 23, 2001 13:09:22
To: KILLspamPICLISTspamBeGoneMITVMA.MIT.EDU
Subj: [OT]: Re: [PIC] Timer Tribulations
{Quote hidden}
http://www.piclist.com hint: The PICList is archived three different
ways. See http://www.piclist.com/#archives for details.
2001\02\23@080811 by Bob Ammerman
Excellent idea Roman, just don't take the widget camping with you ;-)
Bob Ammerman
RAm Systems
(contract development of high performance, high function, low-level
{Original Message removed}
'[PICLIST] [PIC] Timer Tribulations'
2001\02\23@081222 by Bob Ammerman
----- Original Message -----
From: Roman Black <EraseMEfastvidEraseMEEZY.NET.AU>
To: <@spam@PICLIST@spam@spam_OUTMITVMA.MIT.EDU>
Sent: Friday, February 23, 2001 6:15 AM
Subject: Re: [PIC] Timer Tribulations
{Quote hidden}
Actually, without looking closely at the above, it is very close to what I
recommended, except that the accumulator is a fixed point number in seconds,
and we add ticks to it until is gets to one.
My technique requires a 24bit add or subtract on every timer interrupt, plus
an extra one once a second.
Bob Ammerman
RAm Systems
(contract development of high performance, high function, low-level
http://www.piclist.com hint: The PICList is archived three different
ways. See http://www.piclist.com/#archives for details.
part 1 658 bytes content-type:text/plain; (decoded 7bit)
> All this talk about clocks and 50Hz/60Hz mains
> timing and getting crystal clocks accurate just
> Using mains sync is probably the best for accuracy,
I use this circuit as an accurate and very low power long-
term back-up if mains goes down or is removed during
transport. You can pick up a kitchen clock for $2, take the
small PCB out of it and unsolder the coil connection. (The
coil, btw, can be driven by reciprocating PIC o/ps through
a 390R resistor so the clock movement isn't a write-off).
The PCB puts out an alternating 0.5Hz on each of the two
pads where the coil was. Pick either as the 0V
part 2 2108 bytes content-type:image/gif; (decode)
part 3 136 bytes
http://www.piclist.com#nomail Going offline? Don't AutoReply us!
email spamBeGonelistservKILLspammitvma.mit.edu with SET PICList DIGEST in the body
2001\02\24@082020 by Byron A Jeff
> > All this talk about clocks and 50Hz/60Hz mains
> > timing and getting crystal clocks accurate just
> >
> > Using mains sync is probably the best for accuracy,
> I use this circuit as an accurate and very low power long-
> term back-up if mains goes down or is removed during
> transport. You can pick up a kitchen clock for $2, take the
> small PCB out of it and unsolder the coil connection. (The
> coil, btw, can be driven by reciprocating PIC o/ps through
> a 390R resistor so the clock movement isn't a write-off).
> The PCB puts out an alternating 0.5Hz on each of the two
> pads where the coil was. Pick either as the 0V
All this discussion has me rethinking my sunrise/sunset clock/controller.
So far I'd been happy with the 32khz crystal but I haven't run tests more than
24 hours yet and the target location isn't going to be as well temp controlled
as my test bench.
Did we finally agree that a couple of big series resistors and a 5.1 zener
was the safest line voltage reducer? I think I have some 1 Mohms in one of
my junk boxes.
So what do you think of the idea of counting line frequency for the primary
timekeeper with the 32khz as the backup? I already have the 32khz crystal and
adding 2 junkbox resistors and a zener is no big deal.
http://www.piclist.com#nomail Going offline? Don't AutoReply us!
email .....listservspam_OUTmitvma.mit.edu with SET PICList DIGEST in the body
2001\02\24@092814 by Roman Black
Byron A Jeff wrote:
{Quote hidden}
If you have the option and already have a mains transformer,
that is the safest system. The large inductance of the
transformer is the best spike killer. :o)
http://www.piclist.com#nomail Going offline? Don't AutoReply us!
email TakeThisOuTlistserv.....TakeThisOuTmitvma.mit.edu with SET PICList DIGEST in the body
2001\02\25@164531 by Peter L. Peres
>If you have the option and already have a mains transformer,
>that is the safest system. The large inductance of the
>transformer is the best spike killer. :o)
Huh ? Do you want to do the small rationing with the coupling capacitance
between primary and secondary again ? The one with the input resistor to a
PIC pin connected to HV ? I know how this sounds, but, if your project is
well-earthed, then the spike will probably glitch it hard. If it is not
earthed but properly built then it may not notice the glitch at all ...
but there is no warranty. Anyway this time small value capacitors across
the diodes in the bridge rectifiers will help. Like 0.1uF across two of
the diodes in the bridge. In some countries this is mandatory, but for
other reasons (to avoid generating harmonics from hard switching diodes).
http://www.piclist.com hint: To leave the PICList
'[OT]: Re: [PIC] Timer Tribulations'
2001\02\25@165621 by Tony Nixon
Vasile Surducan wrote:
> I don't know why, but you ( americans ) look more relax than we are...
> Even when you have painted the moon with coca-cola sign, you found it
> already painted in red... at 49,3 Hz
> Zdrasvuite Nicolai,
> Cheers Tony,
> Vasile
Ozzie mate :-)
Best regards
http://www.piclist.com hint: To leave the PICList
2001\02\25@165827 by Tony Nixon
Roman Black wrote:
> Hey! All this talk about clocks and 50Hz/60Hz mains
> timing and getting crystal clocks accurate just
> gave me a clever idea!
> Using mains sync is probably the best for accuracy,
I thought about this ages ago, but I wonder how many 'ticks' get
corrupted when electric motors, welders, etc. etc. are started and stopped.
Best regards
http://www.piclist.com hint: To leave the PICList
2001\02\25@193024 by Robert Rolf
Tony Nixon wrote:
> Roman Black wrote:
> >
> > Hey! All this talk about clocks and 50Hz/60Hz mains
> > timing and getting crystal clocks accurate just
> > gave me a clever idea!
> >
> > Using mains sync is probably the best for accuracy,
> I thought about this ages ago, but I wonder how many 'ticks' get
> corrupted when electric motors, welders, etc. etc. are started and
> stopped.
Thats probably why those old clocks had little flywheels in them <G>.
They only tracked the average frequency, not the instantaneous one.
As if anyone would notice the cumulative 1000/60 error when their
alarm clock goes off.
http://www.piclist.com hint: To leave the PICList
2001\02\25@221045 by Tony Nixon
Robert Rolf wrote:
> Tony Nixon wrote:
> > Roman Black wrote:
> > >
> > > Hey! All this talk about clocks and 50Hz/60Hz mains
> > > timing and getting crystal clocks accurate just
> > > gave me a clever idea!
> > >
> > > Using mains sync is probably the best for accuracy,
> >
> > I thought about this ages ago, but I wonder how many 'ticks' get
> > corrupted when electric motors, welders, etc. etc. are started and
> > stopped.
> Thats probably why those old clocks had little flywheels in them <G>.
> They only tracked the average frequency, not the instantaneous one.
> As if anyone would notice the cumulative 1000/60 error when their
> alarm clock goes off.
I guess you could create a similar flywheel in software.
Keep track of the 50Hz signal and if a valid signal change occurs
outside a hysteresis point, ignore it.
Can also be used to provide a timebase for brief power outages.
Best regards
http://www.piclist.com hint: To leave the PICList
2001\02\25@222731 by Scott Dattalo
On Mon, 26 Feb 2001, Tony Nixon wrote:
> Robert Rolf wrote:
> >
> > Thats probably why those old clocks had little flywheels in them <G>.
> > They only tracked the average frequency, not the instantaneous one.
> > As if anyone would notice the cumulative 1000/60 error when their
> > alarm clock goes off.
> I guess you could create a similar flywheel in software.
Yeah, but don't hit any breakpoints while you're simulating or your code will
http://www.piclist.com hint: To leave the PICList
2001\02\25@234115 by Tony Nixon
'[PICLIST] [PIC] Timer Tribulations'
2001\02\26@024551 by Roman Black
Peter L. Peres wrote:
> >If you have the option and already have a mains transformer,
> >that is the safest system. The large inductance of the
> >transformer is the best spike killer. :o)
> >-Roman
> Huh ? Do you want to do the small rationing with the coupling capcitance
> between primary and secondary again ? The one with the input resistor to a
> PIC pin connected to HV ? I know how this sounds, but, if your project is
> well-earthed, then the spike will probably glitch it hard. If it is not
> earthed but properly built then it may not notice the glitch at all ...
> but there is no warranty. Anyway this time small value capacitors across
> the diodes in the bridge rectifiers will help. Like 0.1uF across two of
> the diodes in the bridge. In some countries this is mandatory, but for
> other reasons (to avoid generating harmonics from hard switching diodes).
> Peter
Double huh! I have no idea what you just said Peter!
http://www.piclist.com hint: PICList Posts must start with ONE topic:
[PIC]:,[SX]:,[AVR]: ->uP ONLY! [EE]:,[OT]: ->Other [BUY]:,[AD]: ->Ads
'[OT]: Re: [PIC] Timer Tribulations'
2001\02\26@025626 by Roman Black
Tony Nixon wrote:
> Roman Black wrote:
> >
> > Hey! All this talk about clocks and 50Hz/60Hz mains
> > timing and getting crystal clocks accurate just
> > gave me a clever idea!
> >
> > Using mains sync is probably the best for accuracy,
> I thought about this ages ago, but I wonder how many 'ticks' get
> corrupted when electric motors, welders, etc. etc. are started and
> stopped.
That's why you use software correction to make
up for missed ticks, and only allow a small window
for tick timing. It should self-correct given that
it will receive 100 ticks/second.
http://www.piclist.com hint: PICList Posts must start with ONE topic:
[PIC]:,[SX]:,[AVR]: ->uP ONLY! [EE]:,[OT]: ->Other [BUY]:,[AD]: ->Ads
|
{"url":"http://www.piclist.com/techref/postbot.asp?by=thread&id=%5BPICLIST%5D+%5BPIC%5D+Timer+Tribulations&w=body&tgt=post&at=20010222072325a","timestamp":"2014-04-19T20:06:24Z","content_type":null,"content_length":"110925","record_id":"<urn:uuid:fb72b978-8d0d-430b-978c-0a58a50f57a5>","cc-path":"CC-MAIN-2014-15/segments/1397609537376.43/warc/CC-MAIN-20140416005217-00073-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Re: Distribution
I agree with bobbym's answer, if my confirmation is of any value.
The limit operator is just an excuse for doing something you know you can't.
“It's the subject that nobody knows anything about that we can all talk about!” ― Richard Feynman
“Taking a new step, uttering a new word, is what people fear most.” ― Fyodor Dostoyevsky, Crime and Punishment
|
{"url":"http://www.mathisfunforum.com/viewtopic.php?pid=282085","timestamp":"2014-04-19T02:12:28Z","content_type":null,"content_length":"15281","record_id":"<urn:uuid:f704472c-21b5-4fc9-bca2-701cdbb9926a>","cc-path":"CC-MAIN-2014-15/segments/1397609535745.0/warc/CC-MAIN-20140416005215-00142-ip-10-147-4-33.ec2.internal.warc.gz"}
|
In Search of Interior Riches
Geometrical Landscapes: The Voyages of Discovery and the Transformation of Mathematical Practice. Amir R. Alexander. xviii + 293 pp. Stanford University Press, 2002. $65.
Geometrical Landscapes—in part a history of English exploration of the Americas, in part a history of precursors of the infinitesimal calculus—is an original and challenging contribution to the
history of ideas. Author Amir R. Alexander acknowledges several well-known historians of mathematics and several centers for cultural studies, humanities, European studies and international studies.
But his claims for his methodology leave me wondering how much he consulted with actual mathematicians.
The book makes and documents the claim that in the 17th century, geographical and mathematical explorers shared a "standard narrative of exploration and discovery." This narrative
posited a wondrous land of riches in the interior, surrounded by hazardous terrain of forests, mountains, and, occasionally, icebergs. The enterprising explorer who arrived on its shore would
find hidden passages and break through all obstacles in his way to the fabled land of the interior. For this he was rewarded with fabulous riches and the possession of a wondrous land.
Alexander gives many examples of this narrative, presented literally in reports of explorations of the Atlantic coast of North America and figuratively in accounts of the mathematics of rhumb lines
and equiangular spirals.
The exploration stories are fascinating: Martin Frobisher repeatedly convinced himself that he had found the Northwest Passage to the Orient, and Walter Raleigh vainly attempted to colonize Virginia
and Guyana. But Alexander focuses mainly on advertisements for and reports about these explorations. Regardless of geographical reality, they conform to the standard narrative.
Mathematicians were closely involved in these explorations. Thomas Hariot, a leading Elizabethan mathematician, was actually a member of Raleigh's first Virginia colony. His contemporary Henry
Briggs, the first Savilian Professor of Mathematics at Oxford, constructed "elaborate mathematical tables, which were to be used by mariners in determining their location according to the magnetic
dip." Briggs was an "advisor and promoter" of two competing voyages seeking the Northwest Passage in 1631.
In an appendix on "The Mathematical Narrative," Alexander notes that Hariot and his colleagues played important roles in expeditions, designing nautical instruments, preparing astronomical tables,
composing promotional pamphlets, drawing maps of discoveries and sometimes even going along for the journey.
Navigators needed mathematical help in following the "rhumb line"—the curve on the surface of the Earth that leads a ship along a constant bearing to its desired destination. A natural planar
simplification of the rhumb line is the equiangular spiral, which Alexander describes as "a curve that revolves endlessly around a central point, approaching it ever more closely but never actually
reaching it. . . . [I]f straight lines were drawn emanating from the central point, the spiral would cross each and every one of them at the same angle." To calculate either the rhumb line or the
equiangular spiral, the natural method was approximation by straight line segments. Hariot theorized that a curved line is made of connected infinitesimal straight line segments. Alexander says
Hariot was not just describing the continuum, he was exploring its inner essence.
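In modern notation (not Hariot's own), the equiangular spiral is r(\theta) = a e^{\theta cot(\alpha)}; since dr/d\theta = r cot(\alpha), the angle \psi between any radius and the tangent satisfies tan(\psi) = r/(dr/d\theta) = tan(\alpha), so every radius meets the curve at the same angle \alpha, and as \theta tends to minus infinity the curve winds endlessly toward the pole without ever reaching it.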
In a chapter on "Navigating Mathematical Oceans," Alexander describes the better-known work done with infinitesimal methods in the 16th and 17th centuries by Continental mathematicians Bonaventura Cavalieri, Evangelista Torricelli and Simon Stevin. These mathematical explorations by means of infinitesimals were different in spirit from what one finds in Euclid's Elements. Paradoxes were relished. They
involved searching the geometrical unknown, rather than rigorously deriving known facts. And the descriptive rhetoric that accompanied this mathematical work often used the same images and metaphors
as did the proposals and reports of geographic exploration, all following the standard narrative of discovery.
On the basis of this historical material, Alexander makes a grandiose claim for a new historical methodology: "Certain mathematical techniques, developed by Elizabethan mathematical practitioners,
were shaped by a ubiquitous cultural narrative." He maintains that "In mathematics, as also in cartography, Hariot's work was guided by the standard narrative of exploration and discovery."
Mathematicians will see the shaping and guiding in a different light. Practical needs focused attention on the rhumb line and its model, the equiangular spiral. The only tool available to study such
curves was approximation by straight line segments. Once a mathematician had made some progress in analysis and calculation, it would be natural to talk about these successes with the rhetoric and
metaphors available from geographical exploration. To a mathematician it would seem a blatant non sequitur to claim that those metaphors and rhetorics "shaped" or "guided" the mathematical work.
Alexander offers this book as a new paradigm for the history of mathematics. He finds that standard history of mathematics, which "emphasizes the progressive unveiling of universal truths, rather
than the contingencies of human existence, has little to do with 'history' as it is commonly understood."
"I find a narrative approach to be a most promising avenue for historicizing mathematics," writes Alexander.
Mathematical work does, I argue, contain a narrative. Once this narrative is identified, it can be related to other, nonmathematical cultural tales that are prevalent within the mathematicians'
social circles. A clear connection between the "mathematical" and the "external" stories would place a mathematical work firmly within its historical setting.
No problem so far. But then Alexander goes on to claim that "If a strong relationship can be established between an historically specific nonmathematical tale and the narrative of a mathematical work
that originated within its social sphere, then mathematics can indeed be said to be fundamentally shaped by its social and cultural setting." By using a narrative approach to the history of
mathematics, he maintains, "complex mathematical techniques are shown to be dependent on cultural narratives prevalent in their wider social setting."
No such thing is or could be shown.
Mathematics progresses by the recognition or invention of problems and the struggle to solve them. The choice of problems for Hariot and Stevin came from navigation and engineering. Social reality
shapes mathematics, not by its narratives, but by its practical needs. The means to solve the problem are then "shaped" or "guided" by the problem itself, by the mathematical knowledge and technique
available at that time and place, and by the ingenuity of the mathematicians who work on it.
If there existed a prevalent social or cultural story that was analogous or parallel to the mathematical story, it by no means follows that such a story "shaped" or "guided" the mathematics. Such a
social or cultural story may have simply served as a model for how one talked about or advertised the mathematics.
"Penguins are 10 times older than humans and have been here for a very, very long time," said Daniel Ksepka, Ph.D., a North Carolina State University research assistant professor. Dr. Ksepka
researches the evolution of penguins and how they came to inhabit the African continent.
Because penguins have been around for over 60 million years, their fossil record is extensive. Fossils that Dr. Ksepka and his colleagues have discovered provide clues about migration patterns and
the diversity of penguin species.
Click the Title to view all of our Pizza Lunch Podcasts!
• A free daily summary of the latest news in scientific research. Each story is summarized concisely and linked directly to the original source for further reading.
An early peek at each new issue, with descriptions of feature articles, columns, Science Observers and more. Every other issue contains links to everything in the latest issue's table of
News of book reviews published in American Scientist and around the web, as well as other noteworthy happenings in the world of science books.
To sign up for automatic emails of the American Scientist Update and Scientists' Nightstand issues, create an online profile, then sign up in the My AmSci area.
|
{"url":"http://www.americanscientist.org/bookshelf/pub/in-search-of-interior-riches","timestamp":"2014-04-20T23:58:10Z","content_type":null,"content_length":"117980","record_id":"<urn:uuid:faee5cd9-e215-4535-83d8-8ccb271b0550>","cc-path":"CC-MAIN-2014-15/segments/1397609539337.22/warc/CC-MAIN-20140416005219-00463-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Quantum computer built inside a diamond
Diamonds are forever or, at least, the effects of this diamond on quantum computing may be. A team that includes scientists from USC has built a quantum computer in a diamond, the first of its kind
to include protection against "decoherence" noise that prevents the computer from functioning properly.
The demonstration shows the viability of solid-state quantum computers, which unlike earlier gas- and liquid-state systems may represent the future of quantum computing because they can be easily
scaled up in size. Current quantum computers are typically very small and, though impressive, cannot yet compete with the speed of larger, traditional computers.
The multinational team included USC Professor Daniel Lidar and USC postdoctoral researcher Zhihui Wang, as well as researchers from the Delft University of Technology in the Netherlands, Iowa State
University and the University of California, Santa Barbara. Their findings will be published on April 5 in Nature.
The team's diamond quantum computer system featured two quantum bits (called "qubits"), made of subatomic particles.
As opposed to traditional computer bits, which can encode distinctly either a one or a zero, qubits can encode a one and a zero at the same time. This property, called superposition, along with the
ability of quantum states to "tunnel" through energy barriers, will some day allow quantum computers to perform optimization calculations much faster than traditional computers.
Like all diamonds, the diamond used by the researchers has impurities: things other than carbon. The more impurities in a diamond, the less attractive it is as a piece of jewelry, because it makes
the crystal appear cloudy.
The team, however, utilized the impurities themselves.
A rogue nitrogen nucleus became the first qubit. In a second flaw sat an electron, which became the second qubit. (Though put more accurately, the "spin" of each of these subatomic particles was used
as the qubit.)
Electrons are smaller than nuclei and perform computations much more quickly, but also fall victim more quickly to "decoherence." A qubit based on a nucleus, which is large, is much more stable but slower.
"A nucleus has a long decoherence time in the milliseconds. You can think of it as very sluggish," said Lidar, who holds a joint appointment with the USC Viterbi School of Engineering and the USC
Dornsife College of Letters, Arts and Sciences.
Though solid-state computing systems have existed before, this was the first to incorporate decoherence protection, using microwave pulses to continually switch the direction of the electron spin rotation.
"It's a little like time travel," Lidar said, because switching the direction of rotation time-reverses the inconsistencies in motion as the qubits move back to their original position.
The team was able to demonstrate that their diamond-encased system does indeed operate in a quantum fashion by seeing how closely it matched "Grover's algorithm."
The algorithm is not new (Lov Grover of Bell Labs invented it in 1996), but it shows the promise of quantum computing.
The test is a search of an unsorted database, akin to being told to search for a name in a phone book when you've only been given the phone number.
Sometimes you'd miraculously find it on the first try, other times you might have to search through the entire book to find it. If you did the search countless times, on average, you'd find the name
you were looking for after searching through half of the phone book.
Mathematically, this can be expressed by saying you'd find the correct choice in X/2 tries if X is the number of total choices you have to search through. So, with four choices total, you'll find
the correct one after two tries on average.
A quantum computer, using the properties of superposition, can find the correct choice much more quickly. The mathematics behind it are complicated, but in practical terms, a quantum computer
searching through an unsorted list of four choices will find the correct choice on the first try, every time.
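As an illustration of the four-item search described above (a generic sketch, not code from the USC experiment), here is one Grover iteration on a simulated two-qubit register; the marked index is an arbitrary choice.

```python
import numpy as np

N = 4                                  # four database entries = two qubits
marked = 2                             # hypothetical index of the entry we are searching for
state = np.full(N, 1 / np.sqrt(N))     # uniform superposition over all entries

# One Grover iteration: oracle phase-flip on the marked entry, then inversion about the mean.
state[marked] *= -1
state = 2 * state.mean() - state

print(np.round(state**2, 3))           # measurement probabilities: [0. 0. 1. 0.]
```

For N = 4 a single iteration drives the marked entry's probability to 1, which is the "first try, every time" behaviour the article describes.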
Though not perfect, the new computer picked the correct choice on the first try about 95 percent of the time, enough to demonstrate that it operates in a quantum fashion.
3 / 5 (2) Apr 04, 2012
Diamonds are exceptional in the strength of their chemical bonds and the stability of the excited states of nitrogen vacancies. This manifests in the reddish fluorescence of natural diamond specimens, which lasts for
seconds. The high energetic barrier of the chemical bonds prevents their spontaneous decoherence through thermal vibrations. These vacancies can be observed as glowing spots under the microscope, which is
remarkable once we realize they are single-atom domains. Only cooled atoms within a Bose-Einstein condensate behave in a similar way, which means the diamond makes it possible to replicate
some expensive experiments done with fragile boson condensates comfortably at room temperature.
1 / 5 (6) Apr 04, 2012
Neat trick but when can we play World of Warcraft on them?
3.7 / 5 (3) Apr 05, 2012
Neat trick but when can we play World of Warcraft on them?
Crysis dude....Crysis....
not rated yet Apr 05, 2012
The first country to get a working quantum computer might be able to crack all codes in existence. It's like getting the nuclear weapon first, race to build a quantum computer == the Manhattan
project? Will it be like this?
not rated yet Apr 05, 2012
@Feldagast and thematrix606: Quantum computers will not help your triangles. ;) But they could lead to much more accurate physics at a similar load to current methods.
@Blakut: You could break a hash where you know the equation, but if you don't know the equation used to encrypt, the data is still garbage. A threat to standardized encryption methods, but not to
anything custom.
2 / 5 (2) Apr 05, 2012
@Feldagast and thematrix606: Quantum computers will not help your triangles. ;) But they could lead to much more accurate physics at a similar load to current methods.
Actually, quantum algorithms could help in some cases for video games in certain scripting or random number situations.
There are situations in games where you need a random number to make a random event happen, or to roll dice for a truly random number.
Conventional computers handle this by iteration of a ridiculously complicated random number formula, and, for example, a "dice" system.
If your character has 5D6 in a skill or attribute, etc, then it ends up calling the function 5 times and tallying the results, etc.
In some situations, a quantum computer may be able to handle randomization in far fewer steps, which is a huge, huge thing in many video games.
not rated yet Apr 11, 2012
"You could break a hash where you know the equation... A threat to standardized encryption mothods, but not to anything custom"
There's a reason why security in modern encryption relies on the key being kept secret, not the algorithm. It's because algorithms are notoriously weak, and your best bet is to spread it far and wide
and let the best cryptographic minds figure out if there's holes or not.
If data is encrypted using anything other than a one time pad, it's vulnerable. The vast (VAST) majority of that data would be vulnerable to attacks using improved methods of factoring primes, and
that includes any "custom" encryption.
Quantum computers have the potential to crack any data encrypted with an algorithm that relies on the difficulty of factoring primes in a single pass. Most data out there relies on this difficulty
for the simple reason that any other custom algorithms are pretty weak by comparison, and *still* might be child's play to a quantum computer.
1 / 5 (1) Apr 11, 2012
@Feldagast and thematrix606: Quantum computers will not help your triangles. ;) But they could lead to much more accurate physics at a similar load to current methods.
@Blakut: You could break a hash where you know the equation, but if you don't know the equation used to encrypt, the data is still garbage. A threat to standardized encryption methods, but not to
anything custom.
As Lurker so thoughtfully pointed out, just random number generation alone would be a huge step forward. Also, many rendering processes use filtering techniques and pixel-comparisons that would
benefit greatly from quantum GPUs (e.g. single-step pixel supersampling).
As for the encryption problem, you see, the thing is that mathematical and program equations and formulas are also information in and of themselves. You can already "search" for a formula using
specialized software. It increases complexity exponentially and adds more parameters, but true quantum processing makes those points moot.
Weekly Problem 47 - 2006
Copyright © University of Cambridge. All rights reserved.
In Niatirb they use Cibara numerals. These are the same shape as normal Arabic numerals, but with the meanings in the opposite order. So "0" means "nine", "1" means "eight" and so on. But they write
their numbers from left to right and use arithmetic symbols just as we do. So, for example, they use 62 for the number we write as 37.
How do the inhabitants of Niatirb write the answer to the sum that they write as 837 + 742?
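One way to check an answer is to translate each Cibara digit d to 9 - d, add in ordinary arithmetic, and translate back; the sketch below assumes that digit-by-digit rule is the whole story.

```python
def cibara_to_arabic(s):
    # each Cibara digit d stands for the value 9 - d
    return int("".join(str(9 - int(c)) for c in s))

def arabic_to_cibara(n):
    return "".join(str(9 - int(c)) for c in str(n))

a, b = cibara_to_arabic("837"), cibara_to_arabic("742")   # 162 and 257 in our numerals
print(arabic_to_cibara(a + b))                            # 162 + 257 = 419, translated back digit by digit
```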
If you liked this problem,
here is an NRICH task
which challenges you to use similar mathematical ideas.
This problem is taken from the UKMT Mathematical Challenges.
Connecting Geometry©
Chapter 8
A polygon is a closed geometric shape made of straight line segments. Polygons are named by the number of sides they have, and they have as many sides as angles. Therefore we call a 3-sided polygon a
triangle (tri is a Greek word for 3). A four-sided polygon is called a quadrilateral (quad means 4 and lateral refers to side). A five-sided polygon is called a pentagon. Beehives are in the shape of
hexagons (hex means 6), as shown below:
Polygons are often used in many types of architecture, artwork, and graphic design. Floor tiles are usually grids of squares, but sometimes are hexagons or other polygons. Tessellations are a special
type of tiling pattern, but usually using more complex polygons. A tessellation is a graphic design composed of congruent images that interlock to fill the page, with each shape fitting perfectly
into a sort of "jigsaw puzzle" pattern. A Dutch artist by the name of Escher has done some beautiful designs using tessellated polygons.
"Maurits Cornelis Escher was born in Leeuwarden, June 17, 1898. He studied drawing at the secondary school in Arnhem, by F.W. van der Haagen, who helped him to develop his graphic aptitude by
teaching in the technique of the linoleum cut. From 1919 to 1922 he studied at the School of Architecture and Ornamental Design in Haarlem, where he was instructed in the graphic techniques by S.
Jessurun de Mesquita, whose strong personality greatly influenced Escher's further development, as a graphic artist."
Escher created many beautiful linoleum and wood cuts, using a variety of techniques and subject matter. His work is of great interest to graphic artists and to mathematicians, especially his
tessellations. The information above came from an intriguing website on Escher, where more of his fascinating work can be seen.
Look closely at the tessellation below, and you will see that it is composed of interlocking shapes, all congruent but in 2 different colors. Each shape fits in the spaces between the other shapes,
with no space left over. This may, at first, seem quite simple to do - but if you think about it a bit, you will realize that not all shapes will "tessellate"; the shape has to be carefully designed
so this will work.
Only certain polygons will tessellate. If we look at regular polygons, then the sum of the angles at any one vertex would have to be 360°, as shown below with regular hexagons, on the left. The
example on the right shows regular pentagons, and we see that they will not "tessellate"; they do not fit together.
Based on your knowledge of the interior angles of regular polygons, can you figure out which regular polygons will tessellate?
If you discovered that only equilateral triangles, squares, and regular hexagons will tessellate, then you are absolutely correct! Their interior angles (triangle: 60°, square: 90°, and regular
hexagon: 120°) are the only ones that will divide evenly into 360°.
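A quick way to verify that claim is to compute the interior angle (n - 2)·180/n of each regular n-gon and test whether it divides 360 exactly; the sketch below uses exact fractions to avoid floating-point surprises.

```python
from fractions import Fraction

for n in range(3, 13):
    interior = Fraction((n - 2) * 180, n)      # interior angle of a regular n-gon, in degrees
    if Fraction(360) % interior == 0:
        print(n, interior)                     # prints only n = 3 (60°), 4 (90°), 6 (120°)
```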
If we look at non-regular polygons, then there are an unlimited number of shapes that will tessellate, even very irregular shapes such as the ones you saw in the tessellation above. Some more common
shapes will tessellate, such as parallelograms, and that is why you can use a parallelogram grid as a base for a creative tessellation project, as explained in the Geometer's Sketchpad Tessellation
Activity for this chapter.
Create a tessellation of your own, using the method described in The Geometer's Sketchpad activity called "Create Tessellation", in chapter 7 of the GSP Activities. Color it in Sketchpad including
Polygon Interiors. (Remember, to construct the interior of a polygon, select the vertices in consecutive order, then choose Polygon Interior in the Construct menu.)
Go to Chapter 9 Similar Triangles
Back to Connecting Geometry Home Page
On the Lorentz-Lorenz formula and the Lorentz model of dielectric dispersion: addendum
Optics Express, Vol. 11, Issue 21, pp. 2791-2792 (2003)
The approximate equivalence relation equating the frequency dispersion of the Lorentz model alone with that modified by the Lorentz-Lorenz formula is shown to also equate the branch points appearing
in each of these two descriptions.
© 2003 Optical Society of America
The primary effect [1] of the Lorentz-Lorenz formula, rewritten here to express the relative dielectric permittivity $\epsilon(\omega)$ in terms of the mean molecular polarizability $\alpha(\omega)$ as

$$\frac{\epsilon(\omega)-1}{\epsilon(\omega)+2} = \frac{4\pi}{3}\,N\alpha(\omega), \qquad (1)$$

with the single resonance Lorentz model of the molecular polarizability

$$\alpha(\omega) = -\frac{e^2/m}{\omega^2-\omega_0^2+2i\delta\omega}, \qquad (2)$$

is to downshift the effective resonance frequency and increase the strength of the low frequency behavior from that described by the Lorentz model approximation

$$\epsilon(\omega) \approx 1 - \frac{b^2}{\omega^2-\omega_0^2+2i\delta\omega}. \qquad (3)$$

Here $\omega_0$ is the undamped resonance frequency of the harmonically bound electron of charge magnitude $e$ and mass $m$ with number density $N$, phenomenological damping constant $\delta$ and plasma frequency $b=\sqrt{4\pi Ne^2/m}$. The approximate expression given in Eq. (3) is obtained from Eq. (1) with Eq. (2) when the correction term $b^2/3$ is negligible in comparison with $|\omega^2-\omega_0^2+2i\delta\omega|$, so that $b^2/\big(3|\omega^2-\omega_0^2+2i\delta\omega|\big)\ll 1$ is satisfied. The Lorentz-Lorenz relation (1) with the Lorentz model (2) of the molecular polarizability gives the relative dielectric permittivity as

$$\epsilon(\omega) = 1 - \frac{b^2}{\omega^2-\omega_*^2+b^2/3+2i\delta\omega}, \qquad (4)$$

where the undamped resonance frequency is denoted in this expression by $\omega_*$. The value of this resonance frequency $\omega_*$ that will yield the same value for $\epsilon(0)$ as given by Eq. (3) is given by [1]

$$\omega_* = \sqrt{\omega_0^2 + b^2/3}. \qquad (5)$$

This approximate equivalence relation provides a "best fit" in the rms sense between the frequency dependence of the Lorentz-Lorenz modified Lorentz model dielectric and the Lorentz model alone [1] for both the dielectric permittivity and the complex index of refraction $n(\omega)=\sqrt{\epsilon(\omega)}$ along the positive real frequency axis.

The branch points of the complex index of refraction for the single resonance Lorentz model dielectric with dielectric permittivity described by Eq. (3) are given by

$$\omega_{p\pm} = -i\delta \pm \sqrt{\omega_0^2-\delta^2}, \qquad \omega_{z\pm} = -i\delta \pm \sqrt{\omega_0^2+b^2-\delta^2}, \qquad (6)$$

and the branch points of the complex index of refraction for the Lorentz-Lorenz modified Lorentz model dielectric with dielectric permittivity described by Eq. (4) are given by

$$\omega'_{p\pm} = -i\delta \pm \sqrt{\omega_*^2-b^2/3-\delta^2}, \qquad \omega'_{z\pm} = -i\delta \pm \sqrt{\omega_*^2+2b^2/3-\delta^2}. \qquad (7)$$

If $\omega_*=\omega_0$, then the branch points of $n(\omega)$ for the Lorentz-Lorenz modified Lorentz model are shifted inward toward the imaginary axis from the branch point locations for the Lorentz model alone, provided that the inequality $\omega_*^2-b^2/3-\delta^2\geq 0$ is satisfied. If the opposite inequality $\omega_*^2-b^2/3-\delta^2<0$ is satisfied, then the branch points $\omega'_{p\pm}$ are located along the imaginary axis.

If $\omega_*$ is given by the equivalence relation (5), then the locations of the branch points of the complex index of refraction $n(\omega)$ for the Lorentz-Lorenz modified Lorentz model and the Lorentz model alone are exactly the same. The branch cuts for these two models are then also the same (or can be chosen so). It then follows that the analyticity properties for these two causal models of the dielectric permittivity are the same. This important result is central to the asymptotic description of both ultrawideband signal and ultrashort pulse propagation in Lorentz model dielectrics, particularly when the number density of molecules is large.
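The branch-point equivalence claimed above can be checked numerically. The sketch below uses the reconstructed expressions (6)-(7) together with the equivalence relation (5), and purely illustrative single-resonance parameter values, not values taken from the paper.

```python
import numpy as np

w0, delta = 4.0e16, 0.28e16            # illustrative resonance frequency and damping (1/s)
b = np.sqrt(20.0e32)                   # illustrative plasma frequency (1/s)
ws = np.sqrt(w0**2 + b**2 / 3)         # the equivalence relation (5)

def branch_points(res2):
    """Branch points of n(w) for eps(w) = 1 - b^2 / (w^2 - res2 + 2j*delta*w)."""
    return sorted(
        [-1j * delta + s * np.sqrt(res2 - delta**2 + 0j) for s in (+1, -1)] +
        [-1j * delta + s * np.sqrt(res2 + b**2 - delta**2 + 0j) for s in (+1, -1)],
        key=lambda z: z.real)

lorentz  = branch_points(w0**2)              # Lorentz model alone, Eq. (3)
modified = branch_points(ws**2 - b**2 / 3)   # Lorentz-Lorenz modified model, Eq. (4)
print(np.allclose(lorentz, modified))        # True: the branch points coincide
```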
References and links
1. K. E. Oughstun and N. A. Cartwright, "On the Lorentz-Lorenz formula and the Lorentz model of dielectric dispersion," Opt. Express 11, 1541–1546 (2003), http://www.opticsexpress.org/abstract.cfm?URI=OPEX-11-13-1541.
OCIS Codes
(260.2030) Physical optics : Dispersion
(320.5550) Ultrafast optics : Pulses
ToC Category:
Research Papers
Original Manuscript: July 17, 2003
Revised Manuscript: October 13, 2003
Published: October 20, 2003
Kurt Oughstun and Natalie Cartwright, "On the Lorentz-Lorenz formula and the Lorentz model of dielectric dispersion: addendum," Opt. Express 11, 2791-2792 (2003)
Mplus Discussion >> Interpretation of coefficient
Damon posted on Sunday, May 16, 2004 - 10:45 am
I'm a mplus novice and have a simple question. Under the categorical outcome analyses examples on your website, you have a path analysis example. In the analysis command, you don't specify
"logistic". How then do I interpret the regression coefficients in this model. For example, for y8 on y5, the estimate is .246. I assume that this in not in logodds. Could I use the ouput to
determine what the coefficient would be in logodds? Thank you.
Linda K. Muthen posted on Sunday, May 16, 2004 - 10:52 am
With weighted least squares estimation and categorical outcomes, the regression coefficient is a probit regression coefficient. With maximum likelihood estimation and categorical outcomes, the
regression coefficient is a logistic regression coefficient or a log odds. Version 3 allows both estimators with categorical outcomes.
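For readers new to the distinction, here is a small illustration (not Mplus output) of how the two kinds of coefficient map onto probabilities; the intercept of 0 is an arbitrary assumption, and the slope of 0.246 from the original question is used purely for the example.

```python
import numpy as np
from scipy.stats import norm

b0, b1 = 0.0, 0.246          # hypothetical intercept and slope
x = np.array([0.0, 1.0])

# probit: P(y=1|x) = Phi(b0 + b1*x); the coefficient shifts the z-score of the latent response
p_probit = norm.cdf(b0 + b1 * x)

# logit: P(y=1|x) = 1/(1 + exp(-(b0 + b1*x))); the coefficient is a change in log odds
p_logit = 1 / (1 + np.exp(-(b0 + b1 * x)))

print(p_probit, p_logit)     # predicted probabilities at x = 0 and x = 1 under each link
```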
Peggy Tonkin posted on Friday, January 21, 2005 - 11:22 am
I am running mediational models using path analysis with continuous and binary predictors and continuous and binary outcomes. The estimates are WLSMV using the THETA matrix. I am also using the TYPE=
GENERAL COMPLEX because I need to use the CLUSTER function (I am looking at students within schools). Which estimates are appropriate to report?--the StdYX? I am assuming the two binary outcomes are
probit estimates?
Thank You,
Peggy Tonkin
BMuthen posted on Saturday, January 22, 2005 - 3:34 pm
The regression coefficients for binary dependent variables with WLSMV are probit regression coefficients. In line with regular regression, I would report raw coefficients as well as StdYX
coefficients except when binary covariates are involved.
Peter Martin posted on Wednesday, September 28, 2005 - 4:59 am
Hello there,
Within a path model, how do I interpret the coefficient of a path where X is ordinal (but not binary) to a Y that may be either binary, ordinal, or continuous?
(I'm using WLSMV.)
Linda K. Muthen posted on Wednesday, September 28, 2005 - 10:22 am
The scale of y determines the type of regression that is estimated. The scale of the exogenous x variable is not an issue in estimation. x variables can be either binary or continuous. With a binary
or ordinal y variable, WLSMV estimates probit regression coefficients. With a continuous y variable, WLSMV estimates a simple linear regression coefficient.
Peter Martin posted on Thursday, September 29, 2005 - 1:55 am
Thanks, Linda. So does this mean that an ordinal X is treated as if it was on an interval scale? E.g. in a linear regression, the coefficient would state the increase in Y given an increase of 1 rank in X?
Or does the procedure use the latent variable that underlies X (this latent variable would be estimated, because the X has also paths leading to it)?
Or am I missing the point?
Linda K. Muthen posted on Thursday, September 29, 2005 - 8:12 am
Yes.
Peter Martin posted on Thursday, September 29, 2005 - 8:33 am
Sorry to be tenacious - yes to what?
Linda K. Muthen posted on Thursday, September 29, 2005 - 8:35 am
The question in your first paragraph. Sorry.
melissa posted on Thursday, July 12, 2007 - 8:30 am
I am running a SEM in which:
One endogenous latent variable is indicated by three dichotomous variables.
Another endogenous latent variable is indicated by two continuous and one dichotomous variable.
I have specified the categorical variables in the input and am using the wlsmv estimator.
Here are my questions:
1. Are the estimates related to the first mentioned latent variable (with all three dichotomous indicators) interpreted as probit estimates?
2. How are coefficients related to the second latent variable interpreted?
3. I am currently reporting both B's and StdYX's in my tables. I have the standardized coefficients labeled as Beta's. Is this inappropriate given the above mentioned latent variables? (I have other
latent variables that do indeed include only continuous indicators).
Many advance thanks-
Linda K. Muthen posted on Thursday, July 12, 2007 - 9:11 am
The scale of the dependent variable determines the type of regression coefficient. For categorical factor indicators and WLSMV, probit regression coefficients are estimated. For continuous indicators
and WLSMV, linear regression coefficients are estimated.
The labels don't depend on the variables being categorical or continuous. In both cases, the parameter estimates are regression coefficients.
Preeti posted on Saturday, December 13, 2008 - 2:29 pm
Hello. I am running an SEM model with a categorical outcome using the probit function. I have both latent and observed predictors. The reviewers would like effect sizes on my parameters. Could you
please advise me on what would be the appropriate effect size to use and how to calculate it?
Bengt O. Muthen posted on Saturday, December 13, 2008 - 5:07 pm
You have to decide what effect size is relevant here. Effect size is typically a difference in means under 2 different covariate conditions such as treatment/control, divided by the SD.
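As a concrete version of that definition, a standardized mean difference (Cohen's d with a pooled SD) can be computed as in the sketch below; whether this is the effect size the reviewers actually want is a separate judgement.

```python
import numpy as np

def cohens_d(treatment, control):
    # difference in means divided by the pooled standard deviation
    n1, n2 = len(treatment), len(control)
    pooled_var = ((n1 - 1) * np.var(treatment, ddof=1) +
                  (n2 - 1) * np.var(control, ddof=1)) / (n1 + n2 - 2)
    return (np.mean(treatment) - np.mean(control)) / np.sqrt(pooled_var)
```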
Sarah Ryan posted on Tuesday, September 20, 2011 - 3:08 pm
My model involves:
4 binary and 1 continuous control covariate (x1-x5)
2 observed exogenous predictors (z1-z2)
3 latent exogenous predictors (L1-L3)
1 latent mediator (LM1)
1 observed ordinal outcome (y)
I have a few questions related to coeff. interpretation.
1) I read elsewhere that the interp. of stdzd. probit is not as straightforward as with linear coeff. Is this simply due to awareness of when to use STDYX and STDY, as well as how there is a diff b/w a unit change in continuous x versus a change in category (binary x)?
2) One of my controls shares a large and signif assoc w/ my mediator such that in the full model the expected large and signif assoc b/w the control and the outcome is negative and signif.
Can I interpret that to mean that once one controls for the rel. b/w the control and LM1, the remaining variance in the control no longer shares the formerly assumed relationship with y?
3) With an ordinal outcome, does a positive beta indicate that an increase in x is associated with an increase in the probability of moving from one category to the next?
Bengt O. Muthen posted on Wednesday, September 21, 2011 - 6:40 am
1) Standardizing probit/logit with respect to the covariate variance is no different from linear regression with continuous outcomes. For instance, you don't want to do it for a binary covariate. As
for the DV, you don't really need to standardize wrt the binary outcome (or rather its latent response variable counterpart).
2) I'm unclear about this question. Did the control->y relationship go from positive to negative once the mediator LM1 was introduced?
3) Think of it as the latent response variable increasing and when it does the probability of a lower category goes down and a higher category goes up. However, a middle category probability first
goes up but with further covariate increase then goes down, favoring a higher category. It is easiest to see the effect in a graph.
Sarah Ryan posted on Wednesday, September 21, 2011 - 8:29 am
Regarding 2) above:
Yes, you understand correctly. LM1->control is strong and positive, y->LM1 is strong and positive, y->control is small to moderate and negative.
Further, if I model the paths between this control and the two indicators of the latent mediator with which the control shares a direct relationship (indicated by M.I.'s), the sign of the control/indicator relationship is also negative and the standardized y->LM1 is now just over 1; y->control remains neg, but grows in magnitude.
I've been doing some reading on suppression effects, but I am not sure that is what is going on. I've also done all the things I can think of to do to assess multicollinearity effects. Collinearity
diagnostics with all of the measured varbs. in the model were okay. All of the latent variable correlations are below .6 except for that b/w the mediator and outcome. None of the bivariate
correlations are above .5, and most are quite a bit less. Because I'm using three waves of data, it is not logistically possible for the predictors, mediator, and outcome to be measuring the same
thing (nor is it theoretically possible). My standard errors range from .01 to .07 (however, the sample size is about 5000; when I run the baseline with the second group (N=1000), the SE's are larger).
Bengt O. Muthen posted on Thursday, September 22, 2011 - 10:09 am
Sounds like a task for SEMNET.
Back to top
Institute for Mathematics and its Applications (IMA)
Luca Benzoni (Finance Department, University of Minnesota) lbenzoni@umn.edu http://legacy.csom.umn.edu/WWWPages/FACULTY/lbenzoni/
Stochastic Volatility, Mean Drift, and Jumps in the Short-Term Interest Rate (poster session)
Joint work with Torben G. Andersen (Northwestern University) and Jesper Lund (Nykredit Bank).
We find that an intuitively appealing and fairly manageable continuous-time model provides an excellent characterization of the U.S. short-term interest rate over the post Second World War period.
Our three-factor jump-diffusion model consists of elements embodied in existing specifications, but our approach appears to be the first to successfully accommodate all such features jointly.
Moreover, we conduct simultaneous and efficient inference regarding all model components, which include a shock to the interest rate process itself, a time-varying mean reversion factor, a
stochastic volatility factor and a jump process. Most intriguingly, we find that the restrictions implied by an affine representation of the jump-diffusion system are not rejected by the U.S. short
rate data. This allows for a tractable setting for associated asset pricing applications.
Kent D. Daniel (Kellogg School of Management, Northwestern University) kentd@kellogg.northwestern.edu http://kent.kellogg.nwu.edu/
Testing Factor-Model Explanations of Market Anomalies
A number of recent papers have attempted to explain the size and book-to-market anomalies with either (1) factor models based on economically motivated factors, or (2) with conditional CAPM or CCAPM
models with economically motivated conditioning variables. These papers use similar methodologies and similar test assets, and generally fail to reject the proposed models. We argue that these tests
may fail to reject because of low statistical power of the tests against reasonable alternative hypotheses, rather than because the models are consistent with the data. We propose an alternative test
methodology with higher power against the proposed alternatives, and show that the new test methodology results in the rejection of several of the proposed factor models at high levels of significance.
Michael A.H. Dempster (Centre for Financial Research, Judge Institute of Management, University of Cambridge & Cambridge Systems Associates Limited) m.dempster@jims.cam.ac.uk
Modelling the Global FX Market
Slides: html pdf ps ppt
This talk reports on work undertaken with the support of HSBC to understand the $1.4 B per day global currency market. After a general introduction, the detailed structure of the global FX market
will be described with a focus on the roles of the major market makers and the EBS and Reuters 3000 electronic interdealer markets. Next modelling individual agents, traders and market makers with
computational learning techniques based on extensive quote, trade, agent order flow and order book data seen by a market maker will be reported. Finally, work in progress to construct realistic agent
simulation models of the essence of the global market will be discussed which attempts to capture the current mechanisms of price discovery - at least over intervals shorter than those at which
macroeconomic fundamentals are thought to dominate market movements.
Gregory R. Duffee (Haas School of Business, University of California-Berkeley) duffee@haas.berkeley.edu http://faculty.haas.berkeley.edu/duffee/
A No-Arbitrage Term Structure Model Without Latent Factors
Slides: pdf
Paper: pdf
I present a framework for modeling part of the dynamics of the term structure. The framework can be used to link the term structure to observed variables such as inflation and output. Its partial
nature allows us to dispense with yield-based factors (e.g., latent factors) while retaining restrictions associated with no-arbitrage. I apply the model to the joint dynamics of inflation and the
term structure. As other research has noted, both short-term and long-term bond yields adjust gradually to a change in inflation. I find that the dynamics of the price of interest rate risk needed to
fit this pattern from 1983 through 2003 are implausible. An alternative interpretation is that investors were systematically surprised by the slow adjustment of short-term yields to inflation.
Philip H. Dybvig (Olin School of Business, Washington University in Saint Louis) pdybvig@dybfin.wustl.edu
Exploration of Interest Data
Slides: pdf
Absent unreasonably strong assumptions, financial theory places almost no restriction on interest rates and bond prices. If the short rate process exists (not even an implication of most preferences
we study), then bond and interest derivative prices are given by expected discounted values using the rolled-over spot rate for discounting and risk-neutral ("martingale'') probabilities for
computing expectations. Absent theoretical guidance, the choice of interest rate process should ideally be dictated by the data. This presentation explores the interest-rate process starting with the
sample version of the quadratic variation of the three-year Treasury Bill discount process, using about 50 years worth of daily data from the Fed's H15 tape. This analysis updates an analysis done in
1990 with an eye toward looking at the impact of what seems to be a unique regulatory and economic environment today, but the major conclusions are unchanged. A final comment relates the analysis to
a result on parameter uncertainty from a FAJ paper with Bill Marshall.
J. Doyne Farmer (Santa Fe Institute) jdf@santafe.edu http://www.santafe.edu/~jdf
Modeling Liquidity, Risk and Transaction Costs in the London Stock Exchange Using Low Intelligence Agents
Slides: html pdf ps ppt
I will present a variety of empirical results based on a study of the London Stock Exchange. The data set contains about 350M events, including every action by every trader on every stock, making it
possible to reconstruct the limit order book at any instant in time. This study has generated a variety of new empirical results, including a characterization of the approximate power law behavior
and long-memory effects associated with price returns, order placement, and the spread. My collaborators and I have shown that price changes are largely driven by fluctuations in liquidity. A model
for order flow is developed, that when simulated along with its impact on prices, explains many of statistical properties of the data very well. Finally, time permitting, I will present some
preliminary results developing an agent ecology of arbitrageurs who exploit liquidity demanders, and discuss their effect on prices. These results illustrate first, that there are many strong
regularities in market behavior at the microstructure level, and second, that many aspects of these regularities can be understood based on what might be characterized as low intelligence models of
agent behavior.
Dean P. Foster (University of Pennsylvania, The Wharton School) foster@gosset.wharton.upenn.edu http://gosset.wharton.upenn.edu/~foster/
Ponzironi: The Search for Statistically Significant Excess Returns
Slides: pdf ps
Joint work with Robert A. Stine.
Almost everyone you talk to claims to have a scheme that "beats the market." How should we test such claims? We created a test (based on Bennett's inequality) that only assumes that CAPM excess
returns should be a martingale. But the claimants scoff at our test and say that it doesn't have sufficient power to show the beauty of their scheme.
With tongue firmly in cheek, we will provide a few schemes that will pass any weakening of our test. This has been a wonderful teaching aid, since the schemes are understandable to MBAs. Finally we
will revisit Fama and French's book to market ratio as a way of generating excess returns and ask does it have enough jump to pass our statistical test.
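The abstract does not spell the test out, but a generic Bennett tail bound for bounded, zero-mean excess returns looks like the sketch below; the function and the example numbers are placeholders, not the authors' test.

```python
import math

def bennett_bound(n, sigma2, a, t):
    """Upper bound on P(sum of n independent zero-mean terms >= t),
    given per-term variance sigma2 and per-term bound |X_i| <= a."""
    u = a * t / (n * sigma2)
    h = (1 + u) * math.log(1 + u) - u
    return math.exp(-(n * sigma2 / a**2) * h)

# e.g. 250 daily excess returns, daily variance (1.5%)^2, daily moves bounded by 10%:
print(bennett_bound(n=250, sigma2=0.015**2, a=0.10, t=0.5))
```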
Xavier Gabaix (Department of Economics, Massachusetts Institute of Technology) xgabaix@mit.edu http://econ-www.mit.edu/faculty/xgabaix/papers.htm
A Theory of Power Law Distributions in Financial Market Fluctuations
Papers: NatureMay2003Published.pdf cubicfeb16-20041.pdf
Joint work with Parameswaran Gopikrishnan, Vasiliki Plerou, and H. Eugene Stanley (Center for Polymer Studies and Department of Physics, Boston University).
Insights into the dynamics of a complex system are often gained by focusing on large fluctuations. For the financial system huge databases now exist which facilitates the analysis of large
fluctuations and the characterization of their statistical behavior [1,2]. Power laws appear to describe histograms of relevant financial fluctuations, such as fluctuations in stock price, trading
volume, and the number of trades [3-10]. Remarkably, the exponents that characterize these power laws are similar for different types and sizes of markets, for different market trends, and even for
different countries - suggesting that a generic theoretical basis may underlie these phenomena. Based on a plausible set of assumptions, we propose a model that provides an explanation for these
empirical power laws. In addition, our model explains certain striking empirical regularities that describe the relationship between large fluctuations in prices, trading volume, and the number of
trades. In our model, large movements in stock market activity arise from the trades of the large participants. Starting from an empirical characterization of the size distribution of large market
participants (mutual funds), we show that their trading behavior when performed in an optimal way, generates power-laws observed in financial data.
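A standard way to estimate the tail exponent these power-law results refer to is the Hill estimator over the largest k observations; the sketch below is generic and uses simulated heavy-tailed data, not the authors' data or code.

```python
import numpy as np

def hill_exponent(x, k):
    """Hill estimator of the power-law tail exponent from the k largest values of |x|."""
    tail = np.sort(np.abs(x))[-(k + 1):]          # k largest values plus the threshold value
    return k / np.sum(np.log(tail[1:] / tail[0]))

returns = np.random.standard_t(df=3, size=100_000)   # heavy-tailed toy data with true exponent 3
print(hill_exponent(returns, k=500))                  # roughly 3, the "cubic law" range
```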
Rohitha Goonatilake (Department of Mathematical and Physical Sciences, Texas A&M International University) harag@tamiu.edu
Development, Evaluation and Analysis of a 20-Year Deferred Annuity Product (poster session)
Report: pdf
This project analyzes an annuity product that suits the needs of today's American family under moderate assumptions. It helps in the study of the pricing accuracy in a mutual life insurance company
and to better understand the extent of the analysis and computations involved in developing a 20-year deferred annuity product designed for a group of 1000 people; ages ranging from 30 - 40 years and
having a 5 year old child.
Lars Peter Hansen (Department of Economics University of Chicago) l-hansen@uchicago.edu http://home.uchicago.edu/~lhansen/
Recursive Robust Control and Prediction
Slides: pdf
When confronting a stochastic environment, a decision-maker may not have full confidence in his probabilistic assignments and may not observe the full array state variables that characterize the
probabilistic model. Instead he or she may wish to explore how decision rules perform when the stochastic specification is altered or perturbed. In this paper we consider decision problems in which a
class of such perturbations are permitted. By introducing these perturbations, decision rules for prediction and control are made to be more robust. We develop and explore recursive formulations of
the robust control/prediction problem and deduce corresponding risk-sensitive recursions that feature a distinct risk-adjustment for predicting the hidden Markov states.
Joint with Marco Cagetti, Thomas J. Sargent and Noah Williams.
Narasimhan Jegadeesh (Emory University) Narasimhan_Jegadeesh@bus.emory.edu
Value of Analyst Recommendations: International Evidence*
Slides: html pdf ps ppt
Paper: pdf
Joint work with Woojin Kim.
This paper examines analyst recommendations in the G7 countries and evaluates the value of these recommendations over the 1993 to 2002 period. We find that the frequencies of sell and strong sell
recommendations in all countries are far less than that of buy and strong buy recommendations. The frequency of sell recommendations is the lowest in the U.S. We also find that stock prices react
significantly to recommendation revisions on the revision day and on the following day in all of these countries except Italy. We find the largest price reactions in the U.S., followed by Japan. We
also evaluate trading strategies that buy upgraded stocks and sell downgraded stocks. Here again, we find the highest profits in the U.S., followed by Japan.
* Narasimhan Jegadeesh is the Dean's Distinguished Professor at the Goizueta Business School, Emory University, and Woojin Kim is a doctoral student at the University of Illinois at Urbana-Champaign.
We would like to thank Cliff Green and Michael Weisbach, and the seminar participants at Duke University, the University of Alabama at Tuscaloosa, the University of Illinois at Urbana-Champaign, and
Vanderbilt University for helpful comments. We are responsible for any errors.
Contact information: Narasimhan Jegadeesh, Goizueta Business School, 1300 Clifton Road, Atlanta, GA 30322, email: Narasimhan.Jegadeesh@bus.emory.edu; Woojin Kim, 340, Wohlers Hall, University of
Illinois at Urbana-Champaign, Champaign, IL 61820, email: wkim5@uiuc.edu.
Steven Kou (Department of Industrial Engineering and Operations Research (IEOR), Columbia University Columbia University) sk75@columbia.edu http://www.columbia.edu/~sk75/
A Tale of Two Growths: Modeling Stochastic Endogenous Growth and Growth Stocks
This paper extends the deterministic endogenous R&D growth model to a stochastic endogenous growth model, which is used to study growth stocks. The model provides an understanding of the links
between economic growth, monopolistic competition in R&D, and the valuation of growth stocks. With the presence of stochastic shocks, the model leads to a decomposition of the value of growth stocks.
The decomposition implies that the value of growth stocks should be very volatile, while the long-run average return is roughly equal to the growth rate of R&D labor. The model also explains an
empirical size distribution puzzle observed for the cross-sectional study of growth stocks.
Vladimir Kurenok (Department of Natural and Applied Sciences, University of Wisconsin-Green Bay)
On a Model for the Term Structure of Interest Rate Processes of Stable Type (poster session)
Nick Laskin (IsoTrace Lab, Department of Physics, University of Toronto) nick.laskin@utoronto.ca
Jump Dynamics and Stochastic Volatility for Stock Returns (poster session)
We develop an approach to modeling components of the return distribution, which are assumed to be driven by a news arrival random process. It is assumed that a compound generalized Poisson process governs
information arrivals. The compound generalized Poisson process captures a long-memory effect, which results in a non-exponential distribution of interarrival times. The conditional variance of returns is
decomposed into two components, a smoothly evolving component for standard diffusion of past news impacts and a component related to the information arrival process that generates a jump stream with
fractional statistics. The developed model predicts the impact of large changes in stock returns on volatility. Empirical evidence of the impact of jump versus normal return innovations and of
time-series jump clustering is presented.
Kiseop Lee (Department of Mathematics, University of Louisville) kiseop.lee@louisville.edu
Estimation of Liquidity Risk by Multiple Change-Point Models (poster session)
Liquidity risk is often defined as the additional risk in the market due to the timing and size of a trade. Based on the pioneering work of Cetin et al., we develop an estimation method which is
of practical use. Our new method estimates liquidity cost by applying a sequential multiple change-point detection algorithm to a broken-line regression model.
Ding Li (Department of Economics and Finance, Northern State University) Ding.Li@northern.edu
Empirical Study of Investment Behavior in Equity Markets Using Wavelet Methods (poster session)
This empirical study addresses stock returns behavior using wavelet methods in the time-scale domain. Financial market data reveal more complex dynamic patterns than a random walk; the objective of this
study is to apply scale analysis to explore the scale-dependent property of stock returns behavior to support the reference-dependence theory in behavioral finance. In this research, we study eleven
years of daily returns for three hundred stocks sampled from the S&P 1500 index. The sample data is further categorized into groups according to their market capitalizations, divided into three time
periods, and wavelet decomposed at level six. Our findings support the reference-dependence argument. We find patterns that stock returns statistical properties are scale-dependent. Our results show
that stock returns are non-normally distributed and nonstationary at small scales but normal and stationary at relatively larger scales. We find significant market effects on individual assets and
mixed results on different stock caps. Also stock returns cannot always be modeled as long memory processes. Our results support that people associate different investment horizons with different
mental accounts.
Juyoung Lim (Department of Mathematics, The University of Texas at Austin) limju@math.utexas.edu
An Application of Large Deviation Principle to Pricing Multi Asset Derivative Securities (poster session)
Poster: pdf
Paper: pdf
Joint work with M. Avellaneda.
We present a statistical method to estimate the conditional expectation of a multivariate diffusion process over a short time horizon. The result includes an asymptotic convergence theorem for the estimator and its
standard error that is based on the Large Deviation Principle. Quantities from multivariate diffusion processes are often analytically intractable, and this method gives an effective way to estimate them
without simulation and offers a way to understand their risk profile intuitively.
An application is demonstrated with relative value pricing of multi asset derivatives such as index option and swaption.
Jun Liu (Finance Group Anderson Graduate School of Management, University of California-Los Angeles) jliu@anderson.ucla.edu http://www.personal.anderson.ucla.edu/jun.liu/
Information, Diversificiation, and Cost of Capital
We study the pricing implications of information in a noisy rational expectations model with a factor structure for multi-asset payoffs. There are two classes of price taking investors in our model;
informed investors who receive private signals on systematic and idiosyncratic components of asset payoffs, and uninformed investors who draw imperfect inferences about those signals from prices. We
solve the equilibrium explicitly. We show that only information about systematic factors matters in determining asset risk premiums, when the number of the risky assets is large. Idiosyncratic risk
as well as the information associated with them is fully diversifiable.
Jun Liu (Finance Group Anderson Graduate School of Management, University of California-Los Angeles) jliu@anderson.ucla.edu http://www.personal.anderson.ucla.edu/jun.liu/
Risk, Return and Dividends (poster session)
Paper: pdf
Joint work with Andrew Ang (Columbia University and NBER).
We characterize the joint dynamics of expected returns, stochastic volatility, and prices. In particular, with a given dividend process, one of the processes of the expected return, the stock
volatility, or the price-dividend ratio fully determines the other two. For example, the stock volatility determines the expected return and the price-dividend ratio. By parameterizing one, or more,
of expected returns, volatility, or prices, common empirical specifications place strong implicit, and sometimes inconsistent, restrictions on the dynamics of the other variables. Our results are
useful for understanding the risk-return trade-off, as well as the predictability of stock returns.
Jun Pan (MIT Sloan School of Management, ) junpan@mit.edu http://www.mit.edu/~junpan
The Information in Option Volume for Future Stock Prices
Paper: pdf
Joint work with Allen M. Poteshman (University of Illinois at Urbana-Champaign).
We find strong evidence that option trading volume contains information about future stock price movements. Taking advantage of a unique dataset from the Chicago Board Options Exchange, we construct
put to call ratios for underlying stocks, using volume initiated by buyers to open new option positions. Performing daily cross-sectional analyses from 1990 to 2001, we find that buying stocks with
low put/call ratios and selling stocks with high put/call ratios generates an expected return of 40 basis points per day and 1 percent per week. This result is present during each year of our sample
period, and is not affected by the exclusion of earnings announcement windows. Moreover, the result is stronger for smaller stocks, indicating more informed trading in options on stocks with less
efficient information flow. Our analysis also sheds light on the type of investors behind the informed option trading. Specifically, we find that option trading from customers of full service brokers
provides the strongest predictability, while that from firm proprietary traders is not informative. Finally, in contrast to the equity option market, we do not find any evidence of informed trading
in the index option market.
Monika Piazzesi (Graduate School of Business, University of Chicago) mpiazzes@gsb.uchicago.edu http://gsbwww.uchicago.edu/fac/monika.piazzesi/research/
Futures Prices as Risk-Adjusted Forecasts of Monetary Policy
Slides: pdf
Many researchers have used federal funds futures rates as measures of financial markets' expectations of future monetary policy. However, to the extent that federal funds futures reflect risk premia,
these measures require some adjustment for risk premia. In this paper, we document that excess returns on federal funds futures have been positive on average. We also document that expected excess
returns are strongly countercyclical. In particular, excess returns are surprisingly predictable by employment growth and other business-cycle indicators such as Treasury yields and corporate bond
spreads. Excess returns on eurodollar futures display similar patterns. We document that simply ignoring these risk premia has important consequences for the future expected path of monetary policy.
We also investigate whether risk premia matter for conventional measures of monetary policy surprises.
Michael Tehranchi (Department of Mathematics, University of Texas at Austin) tehranch@math.utexas.edu
Optimal Portfolio Choice in Bond Markets (poster session)
We consider the Merton problem of optimal portfolio choice when the traded instruments are the set of zero-coupon bonds. Working within an infinite-factor Markovian Heath-Jarrow-Morton model of the
interest rate term structure, we find conditions for the existence and uniqueness of optimal trading strategies. When there is uniqueness, we provide a characterization of the optimal portfolio.
Ruey S. Tsay (Graduate School of Business, University of Chicago) ruey.tsay@gsb.uchicago.edu
Efficient Estimation of Stochastic Diffusion Models with Leverage Effects and Jumps
Slides: pdf
Paper: pdf
This talk is concerned with estimating stochastic diffusion models with leverage effects and with or without jumps. Several methods have been proposed in the literature to estimate such models
including efficient method of moments (EMM) and Markov chain Monte Carlo (MCMC) method. For MCMC methods, most of the existing methods cannot deal with leverage effects or require intensive
computation. We discuss the difficulties of the estimation problem and propose a modified method that can estimate the model efficiently. Simulation and real examples are used to compare estimation
results of various methods.
Diane Louise Wilcox (Department of Mathematics and Applied Mathematics, University of Cape Town) diane@maths.uct.ac.za
Periodicity and Scaling of Eigenmodes in an Emerging Market (poster session)
Joint work with Tim Gebbie.
We investigate periodic, aperiodic and scaling behaviour of eigenmodes, i.e. daily price fluctuation time-series derived from eigenvectors, of correlation matrices of shares listed on the
Johannesburg Stock Exchange (JSE) from January 1993 to December 2002. Periodic, or calendar, components are investigated by spectral analysis. We demonstrate that calendar effects are limited to
eigenmodes which correspond to eigenvalues outside the Wishart range. Aperiodic and scaling behaviour of the eigenmodes are investigated by using rescaled-range methods and detrended fluctuation
analysis (DFA). We find that the eigenmodes which correspond to eigenvalues within the Wishart range are dominated by noise effects. In particular, we find that interpolating missing data or illiquid
trading days with a zero-order hold introduces high frequency noise and leads to the overestimation of uncorrected (for serial correlation) Hurst exponents. DFA exponents of the eigenmodes suggest an
absence of long-term memory.
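The "Wishart range" referred to here is, in the usual random-matrix treatment, the Marchenko-Pastur interval for eigenvalues of a pure-noise correlation matrix; a minimal sketch of that comparison, with made-up dimensions rather than the JSE panel, follows.

```python
import numpy as np

T, N = 2500, 100                     # made-up number of trading days and of shares
q = N / T
lam_min, lam_max = (1 - np.sqrt(q))**2, (1 + np.sqrt(q))**2   # Marchenko-Pastur bounds

returns = np.random.randn(T, N)      # pure-noise stand-in for the return panel
eigvals = np.linalg.eigvalsh(np.corrcoef(returns, rowvar=False))
print(lam_min, lam_max)
print(eigvals.min(), eigvals.max())  # should fall (approximately) inside the bounds
```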
Shu Wu (Department of Economics, The University of Kansas) shuwu@ku.edu
Interest Rate Risk and the Forward Premium Anomaly in Foreign Exchange Markets (poster session)
Even after accounting for the risk premiums implied by the yield curves across countries, uncovered interest rate parity (UIP) is still strongly rejected by the data. Moreover, factors that predict the excess bond returns are found
not significant at all in predicting the foreign exchange returns. These results reject the joint restrictions on the exchange rate and interest rates imposed by dynamic term structure models,
suggesting that foreign exchange markets and bond markets may not be fully integrated and we have to look beyond interest rate risk in order to understand the exchange rate anomaly.
Yong Zeng (Department of Mathematics and Statistics, University of Missouri at Kansas City) zeng@mendota.umkc.edu
Filtering with a Marked Point Process Observation: Applications to the Econometrics of Ultra-High-Frequency Data
(poster session)
pdf ps
Ultra-high-frequency (UHF) data are naturally modeled as a marked point process (MPP), because of the random arrival times as well as the associated marks such as price, volume and ask and bid quotes
at an arrival time. Even though econometricians model UHF data as a MPP, they view UHF data as an irregularly-spaced time series. Here, we take the angle of probabilists and view UHF data as an
observed sample path of a marked point process (MPP). Then, we propose a general filtering model for UHF data where the signals are latent processes with time-varying parameters and the observations
are in a generic mark space with other observable factors. The latent process and parameters are jointly modeled by a martingale problem and the observable factors are allowed in the stochastic
intensity kernel of the MPP. In this way, we obtain a unified framework for many existing models for UHF data.
The powerful tools of stochastic filtering are introduced for developing the statistical foundations of the proposed model. The likelihoods, posterior, likelihood ratios and Bayes factors, are
studied. They all are of continuous time, of infinite dimension and are characterized by stochastic differential equations such as filtering equations. To calculate, for example, likelihoods or
posterior of a proposed model, consistent algorithms are required. Mathematical foundations for consistent, efficient algorithms are established. There are two general approaches for constructing
recursive algorithms. One approach is Kushner's Markov chain approximation method, and the other is Sequential Monte Carlo method or particle filtering method. The latter approach is more attractive
in that it can mitigate and even avoid the "curse of dimensionality" in complex models. Especially, Bayesian inference (estimation and model selection) via filtering is developed for the proposed model.
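To make the filtering idea concrete, here is a minimal bootstrap particle filter for a toy latent AR(1) state observed in Gaussian noise; it is a generic sketch with made-up parameters, not the authors' algorithm and not a marked-point-process model.

```python
import numpy as np

rng = np.random.default_rng(0)
T, n_particles = 200, 1000
phi, q, r = 0.95, 0.1, 0.5            # toy state persistence, state noise sd, observation noise sd

# simulate a latent state and noisy observations
x = np.zeros(T)
for t in range(1, T):
    x[t] = phi * x[t - 1] + q * rng.normal()
y = x + r * rng.normal(size=T)

particles = rng.normal(size=n_particles)
estimates = []
for t in range(T):
    particles = phi * particles + q * rng.normal(size=n_particles)     # propagate
    weights = np.exp(-0.5 * ((y[t] - particles) / r) ** 2)             # likelihood weights
    weights /= weights.sum()
    estimates.append(np.sum(weights * particles))                      # filtered mean
    idx = rng.choice(n_particles, size=n_particles, p=weights)         # resample
    particles = particles[idx]

print(np.corrcoef(estimates, x)[0, 1])   # filtered means track the latent state closely
```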
Velocity Reviews - Floating Point Representation (Question)
Stefan Ram 12-26-2012 04:22 PM
Floating Point Representation (Question)
AFAIK 0.1 in hex is 0x1.(9)ap-4, where »(9)« means an
infinite sequence of »9«. However, Java only stores a
finite number of 9s:
printf( "%a%n", 0.1 )
. So, since something /positive/ is missing, Javas
representation of 0.1 should be /smaller/ than 0.1, but
println( new java.math.BigDecimal( 0.1 ))
0.1000000000000000055511151231257827021181583404541015625
shows me a value that is /greater/ than 0.1?
Removing more 9s makes the value even larger!
println( new java.math.BigDecimal( 0x1.999999999999ap-4 ));
0.1000000000000000055511151231257827021181583404541015625
println( new java.math.BigDecimal( 0x1.9ap-4 ))
0.10009765625
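The same check can be run in Python, which uses the same IEEE-754 doubles; this is just a cross-check of the values quoted above, not part of the original thread.

```python
from decimal import Decimal

print((0.1).hex())    # 0x1.999999999999ap-4 -- the trailing 'a' is the last hex digit, not a unit
print(Decimal(0.1))   # 0.1000000000000000055511151231257827021181583404541015625

# The infinite tail ...999... is rounded up to ...99a when 0.1 is stored, which is why the
# stored double is slightly greater than 0.1 rather than smaller.
```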
Stefan Ram 12-26-2012 06:05 PM
Re: Floating Point Representation (Question)
Patricia Shanahan <pats@acm.org> writes:
>The "a" in "0x1.999999999999ap-4" is the last digit of the hex fraction.
For some reason, I did not understand this, but thought »ap«
was a unit marking the start of the exponent (already
wondering »Why /two/ letters?«). Now that you have told me
this, I understand it all - thank you!
Eric Sosman 12-26-2012 06:36 PM
Re: Floating Point Representation (Question)
On 12/26/2012 11:22 AM, Stefan Ram wrote:
> AFAIK 0.1 in hex is 0x1.(9)ap-4, where »(9)« means an
> infinite sequence of »9«.
What's the "a" for? ;-)
> However, Java only stores a
> finite number of 9s:
> printf( "%a%n", 0.1 )
> 0x1.999999999999ap-4
"One, point, twelve nines, A, exponent." See the "A?"
Eric Sosman
This is for my trig class. Thanks for the help!
I tried to multiply the 3-i on the top and bottom but then I get lost on what belongs on the bottom and on the top. I thought I would have something like 6-2i/9 but that doesn't...
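The question does not show the full expression, but if it was something like 2/(3 + i), the sketch below (with that assumed numerator) shows why the denominator becomes 10 rather than 9 after multiplying by the conjugate.

```python
print((3 + 1j) * (3 - 1j))   # (10+0j): (3+i)(3-i) = 9 - i^2 = 9 + 1 = 10, not 9
print(2 / (3 + 1j))          # (0.6-0.2j), i.e. (6 - 2i)/10 = (3 - i)/5
```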
This is for my trig class, thanks for the help!
f(x) = 3x2 - 2x -3 f(x-1) = I would really appreciate the steps and final outcome of this problem. Thank you!
7 ln x - 1/4 ln y I would be very grateful for some help! Thanks for reading!
The problem says: Simplify (3/2)^-2 I can't get the correct answer.
The directions say to simplify and write in standard form a+bi.
Well, I can't figure out this answer, and I'm basically simplifying these fractions.
Please simplify the following expression: -(9x-3). Thank you very much.
Simplify by factoring. Assume that all variables under radicals represent nonnegative, (Show all work) √81x^6
Use the Laws of Logarithms to combine the expression. 4 log x − (1/3)log(x2 + 1) + 5 log(x − 1)
do I use -5 outside the parenthesis to multiply each value inside the parenthesis?
I just can't figure this out; can anybody help me out there?
Quadratic Eqn Word Problem: projectile shot up @ 928 ft/s [Archive] - Free Math Help Forum
10-30-2008, 08:52 PM
You fire a rifle straight up. Your bullet leaves your gun at a velocity of 928 feet per second. Ignore air resistance. Consider the muzzle of your gun to be at height zero. The bullet will reach its
maximum altitude of _______ feet after ______ seconds.
I know in order to find time I have so far with the equation....
h(t) = -16t^2+928t but thats as far as I got. I'm not sure how to do the rest. I tried the quadratic equation using just that and it didn't work. I got 58 or 57 point something and the answer is
wrong. The teacher just said we had to solve this using a quadratic equation.
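(Added worked sketch, not part of the original thread, assuming the standard h(t) = -16t^2 + 928t model the poster wrote down.) The maximum of a downward parabola is at the vertex, t = -b/(2a) = -928/(2 x -16) = 29 seconds, and the maximum altitude is h(29) = -16(29)^2 + 928(29) = -13456 + 26912 = 13456 feet. The 57-to-58 figure the poster obtained is the other natural number in the problem: setting h(t) = 0 gives t = 928/16 = 58 seconds, which is roughly when the bullet falls back to the muzzle height.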
|
{"url":"http://www.freemathhelp.com/forum/archive/index.php/t-58392.html","timestamp":"2014-04-19T12:14:20Z","content_type":null,"content_length":"5843","record_id":"<urn:uuid:edd47562-4c8f-463b-bcb2-aa29af1b2da8>","cc-path":"CC-MAIN-2014-15/segments/1398223207046.13/warc/CC-MAIN-20140423032007-00302-ip-10-147-4-33.ec2.internal.warc.gz"}
|
The Doppler Effect
Michael Fowler 10/14/09
The Doppler effect is the perceived change in frequency of sound emitted by a source moving relative to the observer: as a plane flies overhead, the note of the engine becomes noticeably lower, as
does the siren noise from a fast-moving emergency vehicle as it passes. The effect was first noted by Christian Doppler in 1842. The effect is widely used to measure velocities, usually by
reflection of a transmitted wave from the moving object, ultrasound for blood in arteries, radar for speeding cars and thunderstorms. The velocities of distant galaxies are measured using the
Doppler effect (the red shift).
Sound Waves from a Source at Rest
To set up notation, a source at rest emitting a steady note generates circular wavecrests:
The circles are separated by one wavelength and they travel outwards at the speed of sound v. If the source has frequency f_0, the time interval between wave crests leaving the source is T = 1/f_0.
As a fresh wave crest is emitted, the previous crest has traveled a distance of one wavelength λ_0, so, since it's moving at speed v, λ_0 = vT,
and therefore λ_0 = v/f_0.
Sound Waves from a Moving Source
The Doppler effect arises because once a moving source emits a circular wave (and provided the source is moving at less than the speed of the wave) the circular wave crest emitted continues its
outward expansion centered on where the source was when it was emitted, independent of any subsequent motion of the source.
Therefore, if the source is moving at a steady speed, the centers of the emitted circles of waves will be equally spaced along its path, indicating its recent history. In particular, if the source
is moving steadily to the left, the wave crests will form a pattern:
Or, to be more realistic (from Wikipedia Commons):
It is evident that, as a result of the motion of the source, waves traveling to the left have a shorter wavelength than they had when the source was at rest. And it’s easy to understand why.
Denoting the steady source velocity by u_s, in the time 1/f_0 between crests being emitted the source will have moved to the left a distance u_s/f_0. At the same time, the previously emitted crest will itself have moved to the left a distance v/f_0. Therefore, the actual distance between crests emitted to the left will be λ = (v - u_s)/f_0.
These waves, having left the source, are of course moving at the speed of sound v relative to the air: the motion of the source does not affect the speed of sound in air. Therefore, as these waves of wavelength λ = (v - u_s)/f_0 arrive at an observer placed to the left, so that the source is moving directly towards him, he will hear a frequency f = v/λ = f_0 v/(v - u_s).
Frequency Detected by Stationary Observer of Moving Source
From the above argument, the observed frequency for a source moving towards the observer at speed u_s is:
f = f_0 v/(v - u_s)
(Note that for the common case u_s << v, we can approximate: f ≈ f_0 (1 + u_s/v).)
By an exactly parallel argument, for a source moving away from an observer at speed u_s, the frequency is lower by the corresponding factor:
f = f_0 v/(v + u_s)
Stationary Source, Moving Observer
Consider now an observer moving at speed u_obs directly towards a stationary source of frequency f_0. So, she's moving to meet the oncoming wave crests. Remember, the wave crests are λ_0 = v/f_0 apart in the air, and moving at v. Suppose her time between meeting successive crests is τ. During this time, she moves u_obs τ, the wave crest moves vτ coming to meet her, and between them they cover the distance λ_0 between crests.
It is evident from the diagram that the time interval she will measure between meeting successive crests is τ = λ_0/(v + u_obs),
and therefore the sound frequency she measures is f = 1/τ = (v + u_obs)/λ_0 = f_0 (v + u_obs)/v.
Source and Observer Both Moving Towards Each Other
For this case, the arguments above can be combined to give:
f = f_0 (v + u_obs)/(v - u_s)
Both motions increase the observed frequency. If either observer or source is moving in the opposite direction, the observed frequency is found by switching the sign of the corresponding u.
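(Added numerical sketch, not from the original notes; the function name and example numbers are illustrative only.)

# Doppler-shifted frequency for sound: positive u_source / u_observer mean
# motion toward the other party; flip a sign for motion away, as described above.
def doppler_sound(f0, v_sound, u_source=0.0, u_observer=0.0):
    return f0 * (v_sound + u_observer) / (v_sound - u_source)

# A 440 Hz source approaching a stationary listener at 30 m/s in air
# (v about 340 m/s) is heard at roughly 483 Hz.
print(doppler_sound(440.0, 340.0, u_source=30.0))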
Doppler Effect for Light
The argument above for the Doppler frequency shift is accurate for sound waves and water waves, but fails for light and other electromagnetic waves, since their speed is not relative to an underlying
medium, but to the observer. To derive the Doppler shift in this case requires special relativity. A derivation can be found in my Modern Physics notes.
The Doppler shift for light depends on the relative velocity u of source and observer:
f = f_0 √((c + u)/(c - u))
for motion towards each other.
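(Added sketch matching the relativistic formula just above; beta = u/c, with a negative beta for recession.)

def doppler_light(f0, beta):
    # relativistic Doppler factor sqrt((1 + beta)/(1 - beta))
    return f0 * ((1 + beta) / (1 - beta)) ** 0.5

# Light from a source receding at 1% of c is redshifted by about 1%.
print(doppler_light(5e14, -0.01))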
Other Possible Motions of Source and Observer
We’ve assumed above that the motions of source and observer are all along the same straight line. But as we hear the change in frequency of a jet engine passing overhead, the note drops smoothly,
because we’re off the straight line path of the plane. The actual note heard as a function of time can be found from fairly simple geometric considerations to be , where is the angle between the
straight line path and a line from the source to the observer. (Note: if you watch and listen to a jet plane passing overhead, it’s very obvious that the sound you are hearing at any instant is
coming from a point the jet left some time agothe jet has traveled a significant distance since it emitted that sound! So we’re talking here about a line from the observer to the point where the
sound was emitted, where a blind man would place the plane. This so-called retardation effect is important here because the jet plane is traveling at a significant fraction of the speed of sound.
It’s irrelevant for police radar units, which do use the formula.)
Notice that if θ = 90°, then f = f_0: there is no shift at the moment the source is moving at right angles to the line from source to observer. This seems very reasonable, but is not the case for light, if the frequency shift is measured accurately enough to find effects of order (v/c)^2. To this order, relativistic time dilation of the source gives a frequency shift. This was found unequivocally in a beautiful series of experiments in the 1930s (by Ives and Stilwell), who were attempting to establish the opposite: they were trying to disprove special relativity.
|
{"url":"http://galileo.phys.virginia.edu/classes/152.mf1i.spring02/DopplerEffect.htm","timestamp":"2014-04-17T12:28:24Z","content_type":null,"content_length":"84878","record_id":"<urn:uuid:5fa02fef-32b3-4186-8ffe-a527b6604aa4>","cc-path":"CC-MAIN-2014-15/segments/1397609530131.27/warc/CC-MAIN-20140416005210-00376-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Focus points and convergent process operators (A proof strategy for protocol verification)
abstract We present a strategy for finding algebraic correctness proofs for communication systems. It is described in the setting of μCRL [11], which is, roughly, ACP [2,3] extended with a formal
treatment of the interaction between data and processes. The strategy has already been applied successfully in [4] and [10], but was not explicitly identified as such. Moreover, the protocols that
were verified in these papers were rather complex, so that the general picture was obscured by the amount of details. In this paper, the proof strategy is materialised in the form of definitions and
theorems. These results reduce a large part of protocol verification to a number of trivial facts concerning data parameters occurring in implementation and specification. This greatly simplifies
protocol verifications and makes our approach amenable to mechanical assistance; experiments in this direction seem promising. The strategy is illustrated by several small examples and one larger
example, the Concurrent Alternating Bit Protocol (CABP). Although simple, this protocol contains a large amount of internal parallelism, so that all relevant issues make their appearance.
|
{"url":"http://dspace.library.uu.nl/handle/1874/26675","timestamp":"2014-04-24T05:12:11Z","content_type":null,"content_length":"15250","record_id":"<urn:uuid:85a5e5c6-5d24-411c-85ab-d98341fc0e97>","cc-path":"CC-MAIN-2014-15/segments/1398223205137.4/warc/CC-MAIN-20140423032005-00435-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Post a reply
hi Au101
OK. Here we go.
There are three forms for the equation of a plane (that I can think of, there may be more).
They are:
form 1: r = (1, 1, 1) + lambda(1, 0, -1) + mu(1, 2, 1)
form 2, the dot product form: r · (1, -1, 1) = 1
form 3: x - y + z = 1
The first form is derived from the diagram in post 20.
(1, 1, 1) is the position vector for a point in the plane;
(1, 0, -1) and (1, 2, 1) are two vectors lying in the plane.
In the second form, n = (1, -1, 1)
is a vector that is perpendicular to all vectors lying in the plane. r and n are being 'dotted' together and the result is a constant.
If you then replace 'r' by (x, y, z)
and do the dot product
you get the cartesian version that is the third form.
From this you can see that form 2 and form 3 are the same plane.
Less obviously, so is form 1.
In form 1 put lambda = 1 and mu = 1 and you get x = 3, y = 3, z = 1
put lambda = 1 and mu = -1 and you get x = 1, y = -1, z = -1
put lambda = 0 and mu = 1 and you get x = 2, y = 3, z = 2
If you try each of these sets of values in x - y + z you get 1 every time.
So form 1 and form 3 have three non-collinear points in common. That's enough to prove they represent the same plane because you only need three points to define a plane (provided they are not in the
same straight line).
Now to the question.
You know the equation of the plane before the transformation. The book method transforms points directly from this equation to get the equation after the transform.
I think the algebra for this is a bit horrid and
you still have to convert it into form 2
So my method was to pick three points in the plane and transform them. Let's call them A, B and C. That's just number work. ***
Now get two vectors in the plane by doing AB = OB - OA, etc.
Any two will do because, provided they are not parallel, any two vectors will 'span' the plane ie. will enable you to reach all points in the plane.
Now to get the vector that's at right angles to both these vectors.
Imagine by some trick of gravity you can stand on the plane with 'up' meaning 'at right angles to' the plane. If I asked you to point a stick at right angles to the plane it would go straight up. If
you draw any line on the 'ground' = 'the plane' , it would be at right angles to the stick. And it wouldn't matter how long the stick was. A stick twice as long, would still be at right angles to
every line in the plane.
Call the vector at right angles n = (a, b, c).
So do a dot product between one vector in the plane and 'n' and set it equal to zero. Do again for the second vector.
You've got 2 equations with a, b and c as unknowns. Choose any 'a' to make the calculations easy. That will enable you to work out 'b' and 'c'. That gives you a possible 'n' the vector perpendicular
to the plane. You might think it is cheating to choose 'a'. But if you had chosen an 'a' that was twice as big, you'd have got 'b' and 'c' twice as big as before so all that would happen is you'd get
'2n' for the perpendicular vector. You can use
any vector
that is at right angles so the first would do!
Now to get 'p'.
The equation r · n = p
is the equation for all points, 'r', in the plane.
Back at point *** we had three possible points so 'sub' in any one set of x, y and z values and you'll get 'p'
Check by 'subbing' in the other values from *** to see if you get the same p.
Problem done!
Does that all make sense?
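(Added sketch, not part of the original post: in 3-D the two dot-product equations for the normal can also be solved in one go with a cross product. The numbers below are just the three sample points from earlier in the post; substitute the transformed points A, B, C from step ***.)

# Plane in the form r . n = p from three non-collinear points.
import numpy as np

A = np.array([3.0, 3.0, 1.0])
B = np.array([1.0, -1.0, -1.0])
C = np.array([2.0, 3.0, 2.0])

AB = B - A                         # two vectors lying in the plane
AC = C - A
n = np.cross(AB, AC)               # perpendicular to both, so normal to the plane
p = np.dot(n, A)                   # 'sub' any point into r . n to get p

print(n, p)                        # any nonzero multiple describes the same plane
print(np.dot(n, B), np.dot(n, C))  # check: both equal p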
|
{"url":"http://www.mathisfunforum.com/post.php?tid=14856&qid=159896","timestamp":"2014-04-16T21:53:04Z","content_type":null,"content_length":"43031","record_id":"<urn:uuid:25fc7e0a-e604-4eae-ad31-b3b20b8662c8>","cc-path":"CC-MAIN-2014-15/segments/1397609525991.2/warc/CC-MAIN-20140416005205-00260-ip-10-147-4-33.ec2.internal.warc.gz"}
|
::Forum - Population equation - Star Wars Combine::
Year 14
Day 68 Population equation
Ellias Just trying to fix up my spreadsheet, and noticed something weird with the population equation:
BP = (Ln (Flats) x (Flats)^2) + (Flats)
Now, to me this means this order of operations:
first, square the flats,
then, multiply that answer by flats (in effect flats^3),
then, do the natural logarithm on that number,
then, add flats.
If we use the ~1400 flats for my planet this gives:
1,400^2 = 1,960,000
1,960,000 X 1,400 = 2,744,000,000
ln (2,744,000,000) = 21.7
21.7+1,400 = 1421.7
Now this is a ridiculous number - the population should be around 19mil. Now am I misunderstanding something there (I think not), or has it been incorrectly put into
the rules page? From fiddling around it seems that this fits the numbers better:
BP = (Ln (Flats)) x (Flats)^2 + (Flats)
this gives 14.2mil, much closer. Answers please.
Year 14
Day 68 Population equation
Ben Wrong order of operations....
Camden 1. (flats^2)
2. then, do the natural logarithm on the number of flats
3. then, multiple answer 1 and 2
4. then, add flats.
Ln(Flats) is a single operator.
Adjutant Ben Camden
Regional Government
Year 14
Day 68 Population equation
Jevon Ben's right.
Flats: 100
The second one, what you described Ellias, is far too low of a number.
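(Added illustration, not from the thread: the two readings of the formula side by side; the function names are mine.)

import math

def bp_as_written(flats):
    # Ln(Flats) is a single operator; its result multiplies flats^2
    return math.log(flats) * flats**2 + flats

def bp_misread(flats):
    # the misreading discussed above: ln(flats * flats^2) + flats
    return math.log(flats * flats**2) + flats

for flats in (100, 1400):
    print(flats, round(bp_as_written(flats)), round(bp_misread(flats), 1))
# For 1400 flats this prints roughly 14.2 million versus 1421.7,
# matching the numbers quoted in the thread.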
|
{"url":"http://www.swcombine.com/forum/thread.php?thread=59450&post=693095","timestamp":"2014-04-19T07:02:09Z","content_type":null,"content_length":"24958","record_id":"<urn:uuid:b95ecfc5-82b8-4689-9a4a-18e22ea59aaa>","cc-path":"CC-MAIN-2014-15/segments/1398223206647.11/warc/CC-MAIN-20140423032006-00063-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Algorithmic incentives
In 1993, MIT cryptography researchers Shafi Goldwasser and Silvio Micali shared in the first Gödel Prize for theoretical computer science for their work on interactive proofs — a type of mathematical
game in which a player attempts to extract reliable information from an unreliable interlocutor.
In their groundbreaking 1985 paper on the topic, Goldwasser, Micali and the University of Toronto’s Charles Rackoff ’72, SM ’72, PhD ’74 proposed a particular kind of interactive proof, called a
zero-knowledge proof, in which a player can establish that he or she knows some secret information without actually revealing it. Today, zero-knowledge proofs are used to secure transactions between
financial institutions, and several startups have been founded to commercialize them.
At the Association for Computing Machinery’s Symposium on Theory of Computing in May, Micali, the Ford Professor of Engineering at MIT, and graduate student Pablo Azar will present a new type of
mathematical game that they’re calling a rational proof; it varies interactive proofs by giving them an economic component. Like interactive proofs, rational proofs may have implications for
cryptography, but they could also suggest new ways to structure incentives in contracts.
“What this work is about is asymmetry of information,” Micali says. “In computer science, we think that valuable information is the output of a long computation, a computation I cannot do myself.”
But economists, Micali says, model knowledge as a probability distribution that accurately describes a state of nature. “It was very clear to me that both things had to converge,” he says.
A classical interactive proof involves two players, sometimes designated Arthur and Merlin. Arthur has a complex problem he needs to solve, but his computational resources are limited; Merlin, on the
other hand, has unlimited computational resources but is not trustworthy. An interactive proof is a procedure whereby Arthur asks Merlin a series of questions. At the end, even though Arthur can’t
solve his problem himself, he can tell whether the solution Merlin has given him is valid.
In a rational proof, Merlin is still untrustworthy, but he’s a rational actor in the economic sense: When faced with a decision, he will always choose the option that maximizes his economic reward.
“In the classical interactive proof, if you cheat, you get caught,” Azar explains. “In this model, if you cheat, you get less money.”
Complexity connection
Research on both interactive proofs and rational proofs falls under the rubric of computational-complexity theory, which classifies computational problems according to how hard they are to solve. The
two best-known complexity classes are P and NP. Roughly speaking, P is a set of relatively easy problems, while NP contains some problems that, as far as anyone can tell, are very, very hard.
Problems in NP include the factoring of large numbers, the selection of an optimal route for a traveling salesman, and so-called satisfiability problems, in which one must find conditions that
satisfy sets of logical restrictions. For instance, is it possible to contrive an attendance list for a party that satisfies the logical expression (Alice OR Bob AND Carol) AND (David AND Ernie AND
NOT Alice)? (Yes: Bob, Carol, David and Ernie go to the party, but Alice doesn’t.) In fact, the vast majority of the hard problems in NP can be recast as satisfiability problems.
To get a sense of how rational proofs work, consider the question of how many solutions a satisfiability problem has — an even harder problem than finding a single solution. Suppose that the
satisfiability problem is a more complicated version of the party-list problem, one involving 20 invitees. With 20 invitees, there are 1,048,576 possibilities for the final composition of the party.
How many of those satisfy the logical expression? Arthur doesn’t have nearly enough time to test them all.
But what if Arthur instead auctions off a ticket in a lottery? He’ll write down one perfectly random list of party attendees — Alice yes, Bob no, Carol yes and so on — and if it satisfies the
expression, he’ll give the ticketholder $1,048,576. How much will Merlin bid for the ticket?
Suppose that Merlin knows that there are exactly 300 solutions to the satisfiability problem. The chances that Arthur’s party list is one of them are thus 300 in 1,048,576. According to standard
econometric analysis, a 300-in-1,048,576 shot at $1,048,576 is worth exactly $300. So if Merlin is a rational actor, he’ll bid $300 for the ticket. From that information, Arthur can deduce the number
of solutions.
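(Added sketch of the arithmetic behind this example, not from the article; a 5-invitee version of the party problem is used so the brute force stays tiny.)

# A rational Merlin bids (solutions / 2^n) * (2^n dollars) = solutions dollars,
# so Arthur reads the number of satisfying assignments directly off the bid.
from itertools import product

def satisfies(alice, bob, carol, david, ernie):
    # (Alice OR (Bob AND Carol)) AND (David AND Ernie AND NOT Alice)
    return (alice or (bob and carol)) and (david and ernie and not alice)

n = 5
solutions = sum(satisfies(*bits) for bits in product([False, True], repeat=n))
print(solutions)   # 1: Bob, Carol, David and Ernie attend, without Alice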
First-round knockout
The details are more complicated than that, and of course, with very few exceptions, no one in the real world wants to be on the hook for a million dollars in order to learn the answer to a math
problem. But the upshot of the researchers’ paper is that with rational proofs, they can establish in one round of questioning — “What do you bid?” — what might require millions of rounds using
classical interactive proofs. “Interaction, in practice, is costly,” Azar says. “It’s costly to send messages over a network. Reducing the interaction from a million rounds to one provides a
significant savings in time.”
“I think it’s yet another case where we think we understand what’s a proof, and there is a twist, and we get some unexpected results,” says Moni Naor, the Judith Kleeman Professorial Chair in the
Department of Computer Science and Applied Mathematics at Israel’s Weizmann Institute of Science. “We’ve seen it in the past with interactive proofs, which turned out to be pretty powerful, much more
powerful than you normally think of proofs that you write down and verify as being.” With rational proofs, Naor says, “we have yet another twist, where, if you assign some game-theoretical
rationality to the prover, then the proof is yet another thing that we didn’t think of in the past.”
Naor cautions that the work is “just at the beginning,” and that it’s hard to say when it will yield practical results, and what they might be. But “clearly, it’s worth looking into,” he says. “In
general, the merging of the research in complexity, cryptography and game theory is a promising one.”
Micali agrees. “I think of this as a good basis for further explorations,” he says. “Right now, we’ve developed it for problems that are very, very hard. But how about problems that are very, very
simple?” Rational-proof systems that describe simple interactions could have an application in crowdsourcing, a technique whereby computational tasks that are easy for humans but hard for computers
are farmed out over the Internet to armies of volunteers who receive small financial rewards for each task they complete. Micali imagines that they might even be used to characterize biological
systems, in which individual organisms — or even cells — can be thought of as producers and consumers.
Professor Goldwasser also teaches a 5-day course on campus that offers a unique approach to cryptography that you won't find in standard textbooks. It is based on the theory of provable security, and
will ultimately enable you to assess cryptographic technologies with confidence, design cryptography that is error-proof, and understand many new technologies and their potential applications.
Cryptography and Computer Security - August 6-10, 2012 Learn more at: http://shortprograms.mit.edu/6.87
|
{"url":"http://newsoffice.mit.edu/2012/algorithmic-incentives-0425","timestamp":"2014-04-17T19:05:12Z","content_type":null,"content_length":"91773","record_id":"<urn:uuid:cffc3467-67de-4a40-a88c-93b9a6e61f57>","cc-path":"CC-MAIN-2014-15/segments/1398223203235.2/warc/CC-MAIN-20140423032003-00189-ip-10-147-4-33.ec2.internal.warc.gz"}
|
How does EV Calculation work?
I have a question i am fully aware that the max EV is 510 and 252 is max for 1 stat
And im kinda confused with the EV calculation
So, lets say i kill a Staravia which gives 2 Speed Stat EVs
is that 2/510?
Also, once i've used my 510 EVs, now what? what happens if I exceed past 510 because once i hit 510 obviously i still need to level and I cant control the fact that it will exceed the limit
yeah, but once u hit 510 EVs your pokemon wont be lvl 100 yet, no?
also, so once it hits 510 EVs and u still level up, it wont exceed?
Sorry, im probably making it harder than it is.
So basically u can only have 255 base stats and it can maximize 2 stats, am i correct?
I still dont get it cuz ok u need 4 EVs to make 1 solid base stat yet that would make it
4/510, right? So wouldn't that not even be enough to get one of my stats to 255?
|
{"url":"http://pokemondb.net/pokebase/84828/how-does-ev-calculation-work?show=84865","timestamp":"2014-04-17T06:23:15Z","content_type":null,"content_length":"25581","record_id":"<urn:uuid:04160236-e813-489f-9ca7-6f2da553851c>","cc-path":"CC-MAIN-2014-15/segments/1397609526252.40/warc/CC-MAIN-20140416005206-00062-ip-10-147-4-33.ec2.internal.warc.gz"}
|
If B has a radius of 4 and m AC = 36, what is the area of the sector ABC?
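(Added worked note, not from the page, assuming m AC = 36 means the arc measures 36°.) The sector is 36/360 = 1/10 of the circle, so its area is (1/10) x π x 4^2 = 1.6π ≈ 5.03 square units.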
|
{"url":"http://openstudy.com/updates/514ca175e4b0d02faf5a96d0","timestamp":"2014-04-17T18:31:27Z","content_type":null,"content_length":"45145","record_id":"<urn:uuid:e633c603-eeb0-4b70-82a0-36cc25791ab8>","cc-path":"CC-MAIN-2014-15/segments/1397609530895.48/warc/CC-MAIN-20140416005210-00643-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Cumming, GA SAT Math Tutor
Find a Cumming, GA SAT Math Tutor
...In classes, I am often a student that others look to for advice. My success as a student speaks for itself- I graduated as valedictorian of my high school class, and scored a perfect 36 on my
ACT. I am heavily invested in being successful in my endeavors, and tutoring is no different.
28 Subjects: including SAT math, chemistry, physics, calculus
...Students who use this method consistently for 6 months get better grades and spend less time studying. I teach students (and to some extent parents) how to get the most out of resources they
already have. Teachers are amazing and they usually care very much about the student’s academic progress.
12 Subjects: including SAT math, calculus, geometry, algebra 2
...For help with any courses listed on my page feel free to contact me. I look forward to working with you.I completed Differential Equations for Honors Students at Embry-Riddle Aeronautical
University. I passed the course with an A as a sophomore and tutored students in the course for the remainder of my undergraduate career.
24 Subjects: including SAT math, calculus, geometry, statistics
...High school football is more demanding as the QB needs the be aware of everything on the field. They need to prepare by watching film, practicing the plays, and learning to become a leader on
and off the field. I have played basketball since I was five years old.
29 Subjects: including SAT math, chemistry, calculus, physics
...I have used Microsoft Windows daily since the release of version 3.0 in 1990. Since then I have worked with Windows 95, Windows 98, Windows for Workgroups 3.11 and NT 3.1 and more recently
Windows 7 and 8. I have installed and worked extensively nearly every major application available on Windo...
126 Subjects: including SAT math, chemistry, English, calculus
|
{"url":"http://www.purplemath.com/Cumming_GA_SAT_Math_tutors.php","timestamp":"2014-04-19T04:47:23Z","content_type":null,"content_length":"24035","record_id":"<urn:uuid:8a362a63-a407-4ee4-80d0-85d4fb1c4474>","cc-path":"CC-MAIN-2014-15/segments/1397609535775.35/warc/CC-MAIN-20140416005215-00271-ip-10-147-4-33.ec2.internal.warc.gz"}
|
The number of points on a singular curve over a finite field
- Int. Math. Res. Not. IMRN 2007, art. ID rnm004
"... Abstract. We show that if f: X − → Y is a finite, separable morphism of smooth curves defined over a finite field Fq, where q is larger than an explicit constant depending only on the degree of
f and the genus of X, then f maps X(Fq) surjectively onto Y (Fq) if and only if f maps X(Fq) injectively i ..."
Cited by 7 (5 self)
Add to MetaCart
Abstract. We show that if f: X → Y is a finite, separable morphism of smooth curves defined over a finite field Fq, where q is larger than an explicit constant depending only on the degree of f and
the genus of X, then f maps X(Fq) surjectively onto Y (Fq) if and only if f maps X(Fq) injectively into Y (Fq). Surprisingly, the bounds on q for these two implications have different orders of
magnitude. The main tools used in our proof are the Chebotarev density theorem for covers of curves over finite fields, the Castelnuovo genus inequality, and ideas from Galois theory. 1.
- In: Győry K (ed) Proc Number Theory in Progress, pp 805–812. Berlin: W de Gruyter, 1999
"... Dedicated to Andrzej Schinzel on his sixtieth birthday By a totient we mean a value taken by Euler’s function φ(n). Dence and Pomerance [DP] have established Theorem A. If a residue class
contains at least one multiple of 4, then it contains ..."
Cited by 6 (1 self)
Add to MetaCart
Dedicated to Andrzej Schinzel on his sixtieth birthday By a totient we mean a value taken by Euler’s function φ(n). Dence and Pomerance [DP] have established Theorem A. If a residue class contains at
least one multiple of 4, then it contains
"... Lectures given by Prof. J.W.P. Hirschfeld. Abstract Curves over finite fields not only are interesting structures in themselves, but they are also remarkable for their application to coding
theory and to the study of the geometry of arcs in a finite pl ..."
Add to MetaCart
Lectures given by Prof. J.W.P. Hirschfeld. Abstract Curves over finite fields not only are interesting structures in themselves, but they are also remarkable for their application to coding theory
and to the study of the geometry of arcs in a finite plane. In this note, the basic properties of curves and the number of their points are recounted.
- JOURNAL OF ALGEBRAIC COMBINATORICS , 2008
"... Abstract A lower bound on the minimum degree of the plane algebraic curves containing every point in a large point-set K of the Desarguesian plane PG(2,q) is obtained. The case where K is a
maximal (k, n)-arc is considered in greater depth. ..."
Add to MetaCart
Abstract A lower bound on the minimum degree of the plane algebraic curves containing every point in a large point-set K of the Desarguesian plane PG(2,q) is obtained. The case where K is a maximal
(k, n)-arc is considered in greater depth.
"... ABSTRACT. Planar functions over finite fields give rise to finite projective planes and other combinatorial objects. They exist only in odd characteristic, but recently Zhou introduced an even
characteristic analogue which has similar applications. In this paper we determine all planar functions on ..."
Add to MetaCart
ABSTRACT. Planar functions over finite fields give rise to finite projective planes and other combinatorial objects. They exist only in odd characteristic, but recently Zhou introduced an even characteristic analogue which has similar applications. In this paper we determine all planar functions on Fq of the form c ↦ ac^t, where q is a power of 2, t is an integer with 0 < t ≤ q^(1/4), and a ∈ Fq*. This settles and sharpens a conjecture of Schmidt and Zhou. 1.
"... Abstract. We investigate the surjectivity of the word map defined by the n-th Engel word on the groups PSL(2, q) and SL(2, q). For SL(2, q), we show that this map is surjective onto the subset
SL(2, q)\{−id} ⊂ SL(2, q) provided that q ≥ q0(n) is sufficiently large. Moreover, we give an estimate for ..."
Add to MetaCart
Abstract. We investigate the surjectivity of the word map defined by the n-th Engel word on the groups PSL(2, q) and SL(2, q). For SL(2, q), we show that this map is surjective onto the subset SL(2,
q)\{−id} ⊂ SL(2, q) provided that q ≥ q0(n) is sufficiently large. Moreover, we give an estimate for q0(n). We also present examples demonstrating that this does not hold for all q. We conclude that
the n-th Engel word map is surjective for the groups PSL(2, q) when q ≥ q0(n). By using the computer, we sharpen this result and show that for any n ≤ 4, the corresponding map is surjective for all
the groups PSL(2, q). This provides evidence for a conjecture of Shalev regarding Engel words in finite simple groups. In addition, we show that the n-th Engel word map is almost measure preserving
for the family of groups PSL(2, q), with q odd, answering another question of Shalev. Our techniques are based on the method developed by Bandman, Grunewald and Kunyavskii for verbal dynamical
systems in the group SL(2, q). 1.
|
{"url":"http://citeseerx.ist.psu.edu/showciting?cid=274706","timestamp":"2014-04-17T01:15:43Z","content_type":null,"content_length":"23695","record_id":"<urn:uuid:2011df36-0b41-4785-b1fd-cf62137009ce>","cc-path":"CC-MAIN-2014-15/segments/1397609526102.3/warc/CC-MAIN-20140416005206-00301-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Equation of a HyperPlane
February 22nd 2013, 11:36 AM #1
Feb 2013
United States of America
Equation of a HyperPlane
Am trying to construct a pyramid tree.
In a r dimensional space, A pyramid tree is made of 2*r pyramids.
For example, in a 2-D space we will have four pyramids (four triangles). The triangles would all share their vertex at the origin and have the four bounding lines of the space as their bases, as shown in the image below.
The same construction in 3-D space would be similar, but instead of a line the base would be a plane, and the structure itself would be a pyramid. Generally, in an r-dimensional space, the sides of the pyramid would be (r-1)-dimensional hyperplanes.
My question is: in the 2-D case, the line equations of the sides of the pyramid are straightforward (x = y and x = -y). But how do we find the equations of the sides of the pyramid in an arbitrary-dimensional space?
Any help posted or any pointers to material I should start studying will greatly be appreciated.
Thanks in advance,
Re: Equation of a HyperPlane
I did a bit of reading and I figured out that the equation of a hyperplane can be written using a vector normal to the hyperplane and a point lying on the hyperplane (I am just extending the 3-D planar theory). Given that, in our case, the plane passes through the origin, so if I can find a vector normal to the plane, say n = a1*i1 + a2*i2 + ... + an*in (the i's being the unit basis vectors), then the equation of the plane would be a1*x1 + a2*x2 + ... + an*xn = 0.
How correct or wrong am I?
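(Added sketch, not part of the thread: a hyperplane through the origin is fully described by a normal vector n, as { x : n · x = 0 }, and the sign of n · x tells you which side a point lies on. Assuming the r-dimensional pyramid tree generalizes the 2-D sides x = y and x = -y to hyperplanes of the form x_i = ±x_j, their normals are simply e_i ∓ e_j; that generalization is my assumption, not something stated in the thread.)

import numpy as np

def side(normal, point):
    # sign of the dot product: which side of the hyperplane n . x = 0 the point is on
    return np.sign(np.dot(normal, point))

dim = 4
i, j = 0, 2
n1 = np.zeros(dim); n1[i], n1[j] = 1.0, -1.0   # hyperplane x_i = x_j  (2-D analogue: x = y)
n2 = np.zeros(dim); n2[i], n2[j] = 1.0,  1.0   # hyperplane x_i = -x_j (2-D analogue: x = -y)

p = np.array([0.9, 0.1, 0.2, -0.3])
print(side(n1, p), side(n2, p))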
|
{"url":"http://mathhelpforum.com/geometry/213604-equation-hyperplane.html","timestamp":"2014-04-20T00:01:03Z","content_type":null,"content_length":"32466","record_id":"<urn:uuid:07f8327a-d907-4640-8ff7-b5e5004fa665>","cc-path":"CC-MAIN-2014-15/segments/1397609537754.12/warc/CC-MAIN-20140416005217-00199-ip-10-147-4-33.ec2.internal.warc.gz"}
|
FOM: Does Mathematics Need New Axioms?
Harvey Friedman friedman at math.ohio-state.edu
Wed Feb 16 23:53:16 EST 2000
Reply to Steel Wed, 16 Feb 2000 11:29.
I am very pleased that Steel has confirmed that I had stated his views
fairly accurately, for the most part, and that he has taken the time to
clarify and amplify them.
I am in sharp disagreement with him on certain points, so much so that we
are in imminent danger of talking past each other in a panel discussion
format. So I am convinced that that panel discussion will be greatly served
by airing out the differences on the Internet.
In fact, the disagreements over a few points - not the vast majority of
points - are so sharp as to undoubtedly be mostly due to some failures of
communication. Perhaps the use of words, perhaps a misunderstanding of
information that is not publicly available, and the like - which can be
corrected before the public discussion takes place.
>> Steel would emphasize the large cardinal axioms as what he views
>>is the canoncial extension of the usual ZFC axioms, and their productive
>>use in settling problems about the projective sets.
> Yes, for me, and I think many others who have worked in this area,
>these are the main points.
So we agree that these are the main achievements of the set theorists in
large cardinals (including consistency strengths as Steel indicates below).
However, although it appears that the large cardinal axioms are the
canonical extension of the usual ZFC axioms, the reasons given leave much
to be desired, even at the level of small large cardinals. This problem of
showing that even the small large cardinals form a canonical extension of
the usual ZFC axioms seems to me to be of comparable importance to other
issues with regard to large cardinals that are being actively pursued - yet
this issue is not, as far as I know, being addressed by the specialists in
set theory. (Naturally, I wouldn't say this if I didn't have definite ideas
about it).
>I would add that the role of large cardinal
>axioms in calibrating consistency strengths shows how central they are.
I would say that this indicates their importance for abstract set theory.
E.g., it does not show that they are central for mathematical logic as a
whole, and certainly not for mathematics as a whole.
>>In particular, Steel is aware of the disinterest among mathematicians in
>>the continuum hypothesis and the projective hierarchy. But, as opposed to
>>Maddy, for Steel this is not an intellectual matter worthy of research
>>investigation that is to be worked into one's philosophical/mathematical
>>views or one's research.
> There's no mystery in the fact that number-theorists have no interest,
>qua number-theorist, in large cardinals. For all we know, large cardinals
>might be needed to prove the Riemann Hypothesis, but it would make very
>little sense for anyone working on the RH to look that far afield.
I think you miss my point. Set theorists are far more interested in number
theory than number theorists are in large cardinals and the projective
hierarchy. For instance, topics like "there are infinitely many primes in
every appropriate arithmetic progression", or FLT, or Goldbach's
conjecture, or the prime number theorem, or e+pi is irrational, or
2^sqrt(2) is transcendental, or RH - these are of nearly universal interest
among mathematicians, not just number theorists, even if they are not
familiar with the relevant papers and proofs.
Set theorists do not expect to be able to use such number theory in their
work any more than number theorists expect to be able to use modern set
theory in their work. But there is a big difference in the attitudes each
other have towards the other subject.
There is no reason to dwell on number theory or number theorists. Replace
number theory/number theorists with many other branches of mathematics.
E.g., topologists, geometers, mathematical physicists, etcetera.
> It's important for a theory to have applications, but you can know that
>a theory represents a basic conceptual advance without a long list of applications.
The issue with regard to large cardinals is not trying to turn a
substantial list into a long list. The issue is that from the point of view
of mathematics, the list is both tiny and totally unconvincing and
idiosyncratic *to them*. There are systemic problems with this tiny list
that practically scream out to the mathematician that for him that list is
not real - for him that list is the empty list. For him, this is an
irrelevant abstract game that bears no resemblance to any mathematics that
he can touch or feel - certainly none that he could even in principle
encounter in the settings that he is familiar with. That there is an
intrinsic disconnect at the most fundamental level.
Let's compare this with the situation with differential equations. The
origins of differential equations come from physics, so the origin is
practically one big application. So when differential equations develops -
as all subjects do - a theoretical side, independently of applications,
people are not surprised that some of this work finds its way in
applications, or at least is useful for people doing applications. There is
a universal recognition of at least the real possibility of applications
even if the theory gets removed from the applications. They do not dismiss
differential equations - even the abstract parts - as some crazy abstract
game that cannot, in principle, have anything to do with anything that they
can touch or feel.
Whereas here with large cardinals - or even substantial fragments of ZFC -
for most mathematicians, there isn't even the recognition of the
possibility of applications to anything that they would regard as "normal"
mathematics. They could not even imagine this possibility, and so they, at
least at the subconscious level, doubt that it can ever have any
applications to any "normal" mathematics.
>Newton must have known his theory of gravitation was such an
>advance as soon as he had derived Kepler's Laws, if not before.
That was a great application to something real!
>I think we
>know enough about large cardinal axioms and their consequences to see that
>they represent a basic conceptual advance.
Who is "we"? If we is set theorists, then I agree, although I would say
this: set theorists may know that large cardinals represent a basic
conceptual advance, but set theorists don't quite know what that basic
conceptual advance is.
As I have said in other postings, I am convinced that set theory (with
large cardinals) is part of, or falls out of, a wider theory involving more
primitives. When this wider theory is understood, the large cardinal axioms
will drop out as naturally and clearly and compellingly as, say, induction
on the integers. But we are not there yet.
> The more large cardinals are applied, the more important the advance. I
>greatly admire Harvey's work directed toward finding concrete, natural,
>combinatorial consequences of large cardinal hypotheses.
Thank you very much! I find this half of the paragraph more congenial than
the next half.
>I might point out
>that if this work has long-term significance, then so do large cardinals,
>whereas the converse is not true, and therefore the long-term significance
>of this work is subject to at least as much doubt as is the long-term
>significance of large cardinals.
This is by far the most confusing thing I have ever read by Steel. I
cannot imagine what Steel has in mind here.
For the purposes of discussion, let us use as a representative of "my work"
the most recent theorem/conjecture/partial conjecture I posted, where I am
just about to claim a somewhat different partial conjecture, and am
hopefully well on track on much stronger partial conjectures.
Let us simplify the conjectures as follows, for the sake of clarity of the discussion:
CONJ 1: It is necessary and sufficient to use certain small large cardinals
in order to completely analyze the following finite set of problems. The
universal solvability of any given Boolean relation between three infinite
sets of natural numbers and their images under two multivariate functions
on the natural numbers.
CONJ 2: In any instance of such a problem, if one can find arbitrarily
large finite solutions, then one can find infinite solutions.
CONJ 3: In any instance of such a problem, if it is true for functions of 2
variables then it is true for all multivariate functions.
I know (proof needs to be checked carefully) that it is necessary to use
these small large cardinals to prove any of these three conjectures. I
don't know that they are sufficient for any of them. Significant partial
results seem to be coming along fine.
I believe that this universal solvability subject is of immediate,
compelling appeal to a huge range of mathematicians. Theoretically, to the
entire mathematical community - but I have learned over the years that that
is usually too large and diverse a set to quantify over.
In particular, I have some quick feedback that this universal solvability
subject is "completely fundamental and compelling" from, say, a specialist
in several complex variables who knows virtually nothing about mathematical logic.
Contrary to what Steel says, the long term significance of this work -
assuming the conjectures are proved, or at least sufficiently strong
partial forms are proved - is not dependent on the long term significance
of large cardinals. In fact, the long term significance of this work
(assuming ...) is immediately obvious. Here are the reasons:
1. There are very natural restrictions of these conjectures that are
expected to be equivalent to the 1-consistency of ZFC over ACA. Thus the
long term significance would only therefore depend on the long term
significance of ZFC, which is not dependent on the long term significance
of large cardinals.
2. There are even very natural restrictions that are expected to be
equivalent to the 1-consistency of the theory of types over ACA.
3. Or, for that matter, of most really natural levels <= ZFC.
In fact, from the viewpoint of the history of mathematics as we know it, it
is almost impossible to imagine that any long term significance will be
attributed to large cardinals unless one has a genuine application of them
to what mathematicians feel is "real", and this probably has to be
accompanied by the feeling that it is not an isolated application, and a
proof that the use of the large cardinals is essential.
>>More specifically, Steel views the mathematicians' interest/disinterest
>>or attitudes towards problems and topics in set theory as sociology,
>>which is of significance only in the role that it plays in funding and
>>job opportunities. For Steel, this is something that is subject to
>>unpredictable change and fashion and has no basis in real philosophical
>>or mathematical issues.
> Actually, these are not my opinions. The number-theorist's disinterest
>in learning large cardinal theory is rational. His unwillingness to hire
>set theorists would be understandable, but not rational.
I never suggested that a number theorist would, rationally, feel compelled
to learn large cardinal theory, or that a set theorist would, rationally,
feel compelled to learn number theory. What I was saying is a lot closer to
your second sentence.
But why is that not rational, if the number theorist's perception is that
large cardinals are a remote game that is in principle impossibly far
removed from what "normal" mathematics is about, when there are many many
job candidates who work directly in "normal" mathematics?
> I think of what
>goes on in hiring committees and funding agencies as applied philosophy of
>mathematics. Of course, fashion plays a role there, but in my experience
>there is a substantial rational core.
You don't agree that the *perceived* fundamental disconnect between
abstract set theory and "normal" mathematics has a major serious negative
impact in the reward system in the U.S. mathematics community? Sure,
nothing is absolutely black and white, and there are certainly some people
who can get around such problems because they have something else to offer.
E.g., local politics of various sorts, or that somebody is the best set
theorist in 1000 years, and the like.
I am confused. If you agree with the previous paragraph, then are you
saying that this is rational or irrational?
> I would qualify this two ways. First, one needs much more than the
>abstract possibility that large cardinals might someday be useful to
>justify work in the field.
I don't know how this sentence fits exactly into the argument. But let me
respond to it.
Large cardinals already have a lot more going for them than just the abstract
possibility that they might someday say something about a "normal"
mathematical situation - even without my efforts.
However, the field has a great deal left to be desired both in terms
of philosophical coherence and in terms of saying something about a
"normal" mathematical situation.
>Second, I don't have any objection to informed
>criticism in the proper forum.
I hope that you think this is a proper forum. It is very public, of course.
And I have a rule for myself: I avoid criticizing unless I am proposing a
better alternative. Just criticism in a vacuum, without positive
suggestions - I don't like to see it, and I don't like to do it.
> >It is also my impression that Steel feels that current mainline research
>>in set theory is based on views that are philosophically attractive, but
>>perhaps not fully coherent
> Yes. Are anyone's views in this arena fully coherent?
The issue for me is to what extent one recognizes the incoherences, and
tries to remedy them. In particular, to what extent does one take into
account the shortcomings in the formulation of research programs? I don't
see very much of this sort of thing in the mathematical logic community.
People get committed and hugely invested in a line of research, and they
don't like to make 90 degree turns very much.
On the other hand, it is not easy for most people to incorporate genuine
philosophical considerations in their research plans. But I feel it is
important to try.
>>, and certainly not explainable in elementary
>>terms that are readily accessible to outsiders, even within the
>>mathematical logic community. But in his view, coherence and
>>explainability should in no way influence the direction and emphasis of
>>research in mainline set theory, nor deter or slow down its intensity. In
>>his view, it is certainly not appropriate to consider coherence and
>>explainability in the evaluation of research in mainline set theory.
> Coherence is important, explainability much less so.
But I find that explainability is intimately tied up with coherence. And in
particular, the standards for a new mathematical method of reasoning to
become accepted is very very very high - in terms of coherence and
explainability. It's just not going to happen - unless it is for "normal"
mathematical purposes, is not regarded as a fluke, and is generally usable
and explainable and coherent.
> One more point: the question that we are to debate, "Does
>mathematics need new axioms?", is deficient. It leads into pointless
>wrangling as to what we mean by "mathematics" and "need".
I tend to agree with you to some extent. But we may be able to avoid a lot
of what you call "pointless wrangling" if we can agree in advance that if
the big conjectures that I am making turn out to be true, then, indeed,
mathematics needs new axioms - at least in the sense intended by the
organizers of the upcoming panel discussion. Exactly what those axioms
should say can be subject to debate.
>I would put the
>question a different way: Is the search for, and study of, new axioms
>worthwhile? It seems to me that this gets to the real, "applied
>philosophy of math" question: should people be working on this stuff?
I would say yes, definitely, but with an expanded approach that addresses a
range of fundamental issues that are not being properly addressed right
now. Also, there should be modifications in the associated educational programs.
|
{"url":"http://www.cs.nyu.edu/pipermail/fom/2000-February/003751.html","timestamp":"2014-04-18T10:38:25Z","content_type":null,"content_length":"19459","record_id":"<urn:uuid:678f1490-ce7a-4bb0-9892-4c6f7a4bbde0>","cc-path":"CC-MAIN-2014-15/segments/1398223203422.8/warc/CC-MAIN-20140423032003-00337-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Texture Synthesis
Efros & Leung's Algorithm
Matlab source: synth.m
Efros & Leung's algorithm is remarkably simple. First, initialize a synthesized texture with a 3x3 pixel "seed" from the source texture. For every unfilled pixel which borders some filled pixels (I
think of this as a pixel on the 'frontier'), find a set of patches in the source image that most resemble the unfilled pixel's filled neighbors. Choose one of those patches at random and assign to
the unfilled pixel a color value from the center of the chosen patch. Do this a few thousand times, and you're done. The only crucial parameter is how large of a neighborhood you consider when
searching for similar patches. As you can see in the examples below, as the "window size" increases, the resulting textures capture larger structures of the source image. If you want to implement
this algorithm, there are some finer details you should read about in the paper linked below.
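(Added sketch, not the author's Matlab code: the per-pixel matching step, greatly simplified. It assumes a grayscale numpy image, ignores the Gaussian weighting of the neighborhood and the frontier-pixel ordering used in the real algorithm, and the 0.1 error tolerance is a common choice rather than a value taken from this page.)

import numpy as np

def fill_pixel(source, synth, mask, y, x, win=11, tol=0.1):
    # Fill synth[y, x] by matching its partially-known neighborhood
    # against every full window in the source texture.
    h = win // 2
    patch = synth[y - h:y + h + 1, x - h:x + h + 1]
    known = mask[y - h:y + h + 1, x - h:x + h + 1]   # which neighbors are already filled

    H, W = source.shape
    errs, centers = [], []
    for sy in range(h, H - h):
        for sx in range(h, W - h):
            cand = source[sy - h:sy + h + 1, sx - h:sx + h + 1]
            errs.append(np.sum(((cand - patch) ** 2) * known) / known.sum())
            centers.append(cand[h, h])

    errs = np.array(errs)
    ok = np.flatnonzero(errs <= errs.min() * (1 + tol))  # all near-best matches
    synth[y, x] = centers[np.random.choice(ok)]          # pick one at random
    mask[y, x] = True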
I implemented this as a homework for Rob Fergus' Computational Photography class. If you're interested in this sort of thing, check out the course website, which has many more interesting papers.
Reference: Texture Synthesis by Non-parametric Sampling
[Result grid: source texture alongside syntheses using 3x3, 5x5, 7x7, 11x11, 15x15, and 19x19 windows]
Parametric Synthesis: Not So Great
Matlab source: synth_gmm.m, EM_GM.m
I started from Efros & Leung's algorithm and rewrote it to use an explicit patch density model. Instead of measuring the distance from the current synthesized patch to every patch in the original
image, I sample a few patches from a Gaussian mixture model and use the sample with lowest distance. This trades a terrible start-up cost (waiting for EM to converge) for a much quicker per-pixel
running time. If the visual performance were comparable, this scheme would be well-suited for synthesizing large textures. Unfortunately, the results from the mixture model don't look so great. When
I have time, I'm hoping to try this with a Restricted Boltzmann Machine.
[Result grid: source texture alongside syntheses using 3x3, 5x5, 7x7, and 11x11 windows]
|
{"url":"http://rubinsteyn.com/comp_photo/texture/","timestamp":"2014-04-20T03:23:11Z","content_type":null,"content_length":"9791","record_id":"<urn:uuid:d592e0df-3b66-4da4-be9e-50521f8e3c51>","cc-path":"CC-MAIN-2014-15/segments/1397609537864.21/warc/CC-MAIN-20140416005217-00563-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Impact Geometry
Impact Geometry
Gallery Education Discovery Zone Your Community Press
See National Math Standards for this Challenge.
Q: How far away will the flyby spacecraft be when the impactor hits Tempel 1?
This is actually one in a whole series of questions that can only be accurately answered by using some complicated geometry.
It is, however, possible to estimate the answers using simple two-dimensional geometry, so that's what we'll recommend you try.
Looking at the diagram above, you'll see that the impactor and flyby spacecraft separate from each other 24 hours before the impact will occur. The impactor then proceeds into the path of the
comet. It should be pointed out that it won't really fly in a straight line as depicted, but will probably follow a slightly more parabolic curve. That's one of the simplifications we'll make
in this problem. The impactor will be traveling at a relative speed of 10.2 kilometers per second (km/sec).
The flyby spacecraft heads off at an angle to the path of the impactor. That angle will be approximately 0.033°. The flyby spacecraft will be traveling at a relative speed of 7.6 km/sec at
that new heading.
After the impact, the flyby spacecraft continues in its path, taking pictures and spectrometer images until it gets too close to the comet, and has to go into "shield mode" to protect its
sensitive instruments from the dust particles in the comet's coma. The minimum safe distance from the comet is 750 km, and the flyby will actually get closer than that.
Okay, you've got all the information you need (believe it or not) to answer the following questions with simple conversions and geometry - they're written in the order that we'd recommend you
answer them, but there may be other ways to do this. Try these yourself, and the answers will be presented here later so you can check your work!
1. How far will the impactor spacecraft travel between separation and the time of impact?
2. What's the closest distance (TCA) the flyby spacecraft will come to the comet?
3. How far will the flyby spacecraft travel between separation and the time of impact?
4. How far will the flyby spacecraft travel between separation and the point of its closest approach (TCA)?
5. How far will the flyby spacecraft travel between the time of impact and the point of its closest approach (TCA)?
6. How far will the flyby spacecraft travel between the point where it has to enter "shield mode" and the point of its closest approach (TCA)?
7. How far will the flyby spacecraft travel between the time of impact and the point where it has to enter "shield mode"?
8. How much time will the flyby spacecraft have to take pictures and images between the time of impact and the point where it has to enter "shield mode"?
9. How far away from the comet will the flyby spacecraft be at the time of impact?
A: Let's look at the solutions for these problems, in the order they were asked.
1. How far will the impactor spacecraft travel between separation and the time of impact?
This problem is a simple conversion problem. We have the time between these two events (24 hours) and the speed that the impactor spacecraft will be traveling (10.2 km/sec). Using these two
pieces of information, we can calculate the distance it will travel. It is necessary to first convert the time into seconds, then multiply by the speed to get distance:
24 hours x 3600 sec/hour = 86400 sec, and 86400 sec x 10.2 km/sec = 881280 km
2. What's the closest distance (TCA) the flyby spacecraft will come to the comet?
Here's the first use of geometry. We now know the total distance the impactor spacecraft will travel (881280 km), and we know the angle between the impactor spacecraft's trajectory and the
flyby spacecraft's trajectory (0.033°). The distance the impactor spacecraft travels can be seen as the hypotenuse of a right triangle, with the closest distance of the flyby spacecraft being
the "opposite side" to the 0.033° angle.
The sine of 0.033° will equal the length of the opposite side over the length of the hypotenuse, so sin 0.033° = TCA / impactor distance. Rearranging this equation gives us:
TCA = impactor distance x sin 0.033° = (881280 km) x (0.000575969) = 508 km
3. How far will the flyby spacecraft travel between separation and the time of impact?
This is another conversion problem like in number 1: 86400 sec x 7.6 km/sec = 656640 km.
4. How far will the flyby spacecraft travel between separation and the point of its closest approach (TCA)?
Here we need to use the Pythagorean theorem (c^2 = a^2 + b^2). The distance that the impactor spacecraft covers between separation and impact, the closest distance between the flyby and the comet (TCA), and the distance between separation and TCA make up a right triangle. Rearranging the Pythagorean theorem yields: separation-to-TCA distance = sqrt((881280 km)^2 - (508 km)^2) ≈ 881280 km (essentially unchanged, because the angle is so small).
5. How far will the flyby spacecraft travel between the time of impact and the point of its closest approach (TCA)?
This is a simple subtraction. The flyby spacecraft travels 881280 km between separation and TCA (number 4) and it travels 656640 km between separation and impact (number 3). This means the distance between impact and TCA must be 881280 km - 656640 km = 224640 km.
6. How far will the flyby spacecraft travel between the point where it has to enter "shield mode" and the point of its closest approach (TCA)?
Here we need the Pythagorean theorem again. This time the triangle's hypotenuse is the minimum safe distance between the comet and the flyby spacecraft (750 km), and one of the legs is the closest distance between the flyby spacecraft and the comet (508 km). The distance between the start of "shield mode" and TCA is the other leg: sqrt((750 km)^2 - (508 km)^2) ≈ 552 km.
7. How far will the flyby spacecraft travel between the time of impact and the point where it has to enter "shield mode"?
This is another simple subtraction. The flyby spacecraft travels 224640 km between impact and TCA (number 5) and it travels 552 km between the start of "shield mode" and TCA. This means the distance between impact and the start of "shield mode" must be 224640 km - 552 km = 224088 km.
8. How much time will the flyby spacecraft have to take pictures and images between the time of impact and the point where it has to enter "shield mode"?
This is another conversion problem like number 1. We know the distance the flyby spacecraft will travel between impact and the start of "shield mode" (224088 km), and we know how fast it will be traveling, so: 224088 km ÷ 7.6 km/sec ≈ 29485 sec, or roughly 8.2 hours.
9. How far away from the comet will the flyby spacecraft be at the time of impact?
This final problem again requires the Pythagorean theorem. This time the answer to the problem is the hypotenuse, the distance between impact and TCA (224640 km) is one leg of the triangle, and the closest distance the flyby spacecraft comes to the comet (508 km) is the other leg: sqrt((224640 km)^2 + (508 km)^2) ≈ 224641 km.
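For readers who want to check the arithmetic, the following short Python sketch (not part of the original challenge page; variable names are illustrative) reproduces all nine answers from the figures given above, using the same simplified two-dimensional geometry as the solutions.

# Reproduces the nine answers above from the given figures, using the same
# simplified two-dimensional geometry (straight-line paths, constant speeds).
import math

t_sep = 24 * 3600                 # seconds between separation and impact
v_imp, v_fly = 10.2, 7.6          # relative speeds, km/sec
angle = math.radians(0.033)       # angle between the two trajectories
shield = 750.0                    # shield-mode distance, km

d_imp = v_imp * t_sep                          # 1. impactor, separation to impact: 881280 km
tca = d_imp * math.sin(angle)                  # 2. closest approach: about 508 km
d_fly_imp = v_fly * t_sep                      # 3. flyby, separation to impact: 656640 km
d_fly_tca = math.sqrt(d_imp**2 - tca**2)       # 4. flyby, separation to TCA: about 881280 km
d_imp_tca = d_fly_tca - d_fly_imp              # 5. flyby, impact to TCA: about 224640 km
d_shield_tca = math.sqrt(shield**2 - tca**2)   # 6. shield mode to TCA: about 552 km
d_imp_shield = d_imp_tca - d_shield_tca        # 7. impact to shield mode: about 224088 km
t_images = d_imp_shield / v_fly                # 8. imaging time in seconds (about 8.2 hours)
d_at_impact = math.hypot(d_imp_tca, tca)       # 9. flyby-to-comet distance at impact

print(d_imp, tca, d_fly_imp, d_fly_tca, d_imp_tca,
      d_shield_tca, d_imp_shield, t_images / 3600, d_at_impact)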
Now, you may ask, how big will the comet look to the flyby spacecraft's telescopes at this distance - that is, how many pixels across will it be? Excellent question.
That's why it is also a mission challenge!
|
{"url":"http://solarsystem.nasa.gov/deepimpact/disczone/challenge_impactgeometry_A.cfm","timestamp":"2014-04-16T04:53:19Z","content_type":null,"content_length":"55402","record_id":"<urn:uuid:936a717e-6382-4413-9bb6-c759c82c401f>","cc-path":"CC-MAIN-2014-15/segments/1397609521512.15/warc/CC-MAIN-20140416005201-00538-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Treebeard's Stumper Answer
30 Jan 98
Lots of Pennies
In The New Yorker special Cartoon Issue before Christmas, Warren Miller shows a butler addressing his wealthy employer: "The fifty-five gallon drum is completely filled with pennies, sir. Should
it be taken to the bank?"
Warren Miller in The New Yorker, December 15, 1997.
Just how many pennies is that? We won't guess the exact answer of course (or care). But we can make reasonable estimates in many ways by starting with what we know and taking it a step further.
Submit your estimates by email, and I'll report the results next week.
How many pennies would fill a 55 gallon drum? Most kids at school counted pennies in a cup or half-cup and then multiplied x 16 cups/gallon x 55 gallons. The average of our answers was about
300,000 pennies, $3000 worth. I counted a half-cup of pennies several times and got different answers each time, so packing and settling are important factors that explain the spread of our
answers. I'm impressed by how close our estimates were. Pennies weigh 2.6 to 2.7 grams apiece, so that full drum would weigh about 1800 pounds, nearly a ton. Better take it to the bank with a
When I counted a half-cup of pennies, I got different answers every time, varying from 150 to 190 pennies per half-cup. If a half-cup holds 170 +/- 20 pennies, then 55 gallons holds 299,200 +/-
18,000 pennies. Even two significant digits is pushing it.
I also figured the volume of a penny as:
Volume = 1/4 pi d^2 h = 1/4 pi (3/4) (3/4) (.061) = .027 in^3.
55 gallons is 12,705 in^3, and dividing gives 470,000 pennies total. That's assuming 100% filling, like melting the pennies and pouring them into the drum. Then I took 100 ml of pennies and found
that it took about 40 ml of water to cover them, for a packing density of about 60%. This gives a final estimate of 282,000 pennies, in the same neighborhood at least.
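The whole estimate also fits in a few lines of Python (an illustrative sketch, using the same assumed penny dimensions and the roughly 60% packing density measured above):

# Back-of-the-envelope version of the estimate above: penny diameter 0.75 in,
# thickness 0.061 in, 231 cubic inches per gallon, ~60% packing density.
import math

penny_vol = 0.25 * math.pi * 0.75**2 * 0.061   # about 0.027 cubic inches
drum_vol = 55 * 231                            # 12705 cubic inches
packing = 0.60                                 # fraction of the drum actually filled by metal

print(round(packing * drum_vol / penny_vol))   # roughly 283,000 pennies, in line with the figure above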
Graybear contributes this analysis:
I recently tried a practical test. I filled a 15 ounce peanut can with water and found its volume to be 20 fluid ounces, so 352 would fit in the barrel. By adding pennies one handful at a
time, then shaking to compact them before the next handful, it would hold 916 pennies. By stacking them, it would hold 1071+. (I didn't actually stack them, but 21 pennies would lay flat on
the bottom, and 51 pennies could be stacked vertically. Other pennies may have been able to fit in the interstitial spaces.) Therefore, the results of the test (322,432 - 376,992+) confirm
your findings.
JUST IN CASE THIS IS A TRICK QUESTION!
If the question were, "How many pennies can you fit in an empty barrel?", the answer would be 'one, of course, then the barrel would not be empty'.
Copyright © 1998 by Marc Kummel / mkummel@rain.org
|
{"url":"http://www.rain.org/~mkummel/stumpers/30jan98a.html","timestamp":"2014-04-18T20:48:13Z","content_type":null,"content_length":"4268","record_id":"<urn:uuid:747442f7-08c7-4c58-828a-02b2f661ef21>","cc-path":"CC-MAIN-2014-15/segments/1398223211700.16/warc/CC-MAIN-20140423032011-00395-ip-10-147-4-33.ec2.internal.warc.gz"}
|
ALEX Lesson Plans
Subject: Mathematics (4 - 5)
Title: Fraction Action
Description: The students will participate in a hands-on lesson where they will understand and be able to write equivalent forms of fractions. The students will learn and apply the concept of using
fractions in everyday life.
Subject: Mathematics (3 - 4)
Title: Fractions on a Number Line
Description: As students master the concept of the number line, they may be curious about where fractions would fit on a standard number line. Students will work cooperatively to explore the
possibilities where fractions lie on number lines. Students will also work independently to extend and enrich their new knowledge of fractions on number lines. This lesson plan was created by
exemplary Alabama Math Teachers through the AMSTI project.
Subject: Mathematics (3 - 5)
Title: Tangram Fractions
Description: The teacher will engage the class by reading Grandfather Tang. Students will solve a tangram puzzle. Students will find the fractional value for each piece of the tangram. Students will
create a picture using the pieces of the tangram. Students will find the fractional value of their partner's tangram picture. This lesson plan was created as a result of the Girls Engaged in Math and
Science, GEMS Project funded by the Malone Family Foundation.
Thinkfinity Lesson Plans
Subject: Mathematics
Title: Inch by Inch
Description: In this lesson, one of a multi-part unit from Illuminations, students use an actual ruler to represent various fractions as lengths. This lesson builds on work done in a previous lesson
with nonstandard measurement as students use a standard instrument to measure a variety of items. Several pieces of literature appropriate for use with this lesson are suggested.
Thinkfinity Partner: Illuminations
Grade Span: 3,4,5
Subject: Mathematics
Title: Equivalent Fractions
Description: This student interactive, from Illuminations, allows students to create equivalent fractions by dividing and shading squares or circles. The fractions are simultaneously displayed on a
number line so students can see the relationship between the fractions.
Thinkfinity Partner: Illuminations
Grade Span: 3,4,5
Subject: Mathematics
Title: Pattern Block Fractions
Description: In this lesson, one of a multi-part unit from Illuminations, students focus on the identification of fractional parts of a region and record them in standard form. Students develop
communication skills by working together to express their understanding of fraction relationships and to record fractions in written form.
Thinkfinity Partner: Illuminations
Grade Span: 3,4,5
Subject: Mathematics
Title: Expanding Our Pattern Block Fraction Repertoire
Description: In this lesson, one of a multi-part unit from Illuminations, students expand the number of fractions they can represent with pattern blocks by increasing the whole. Instead of
representing the whole with one yellow hexagon, the students explore fractional relationships when two, three, and four yellow hexagons constitute the whole.
Thinkfinity Partner: Illuminations
Grade Span: 3,4,5
Subject: Mathematics
Title: Eggsactly Equivalent
Description: In this lesson, one of a multi-part unit from Illuminations, students use twelve eggs to identify equivalent fractions. Construction paper cutouts are used as a physical model to
represent various fractions of the set of eggs. Students investigate relationships among fractions that are equivalent.
Thinkfinity Partner: Illuminations
Grade Span: 3,4,5
Subject: Mathematics,Professional Development
Title: Communicating about Mathematics Using Games
Description: The mathematical game in this four-lesson unit from Illuminations fosters mathematical communication as students explain and justify their moves to one another. In addition, the game
motivates students and engages them in thinking about and applying concepts and skills.
Thinkfinity Partner: Illuminations
Grade Span: 3,4,5
Subject: Mathematics
Title: Calculation Nation
Description: Become a citizen of Calculation Nation! Play online math strategy games to learn about fractions, factors, multiples, symmetry and more, as well as practice important skills like basic
multiplication and calculating area! Calculation Nation uses the power of the Web to let students challenge themselves and opponents from anywhere in the world. The element of competition adds an
extra layer of excitement.
Thinkfinity Partner: Illuminations
Grade Span: 3,4,5,6,7,8,9
Subject: Mathematics
Title: Investigating Equivalent Fractions with Relationship Rods
Description: In this lesson, one of a multi-part unit from Illuminations, students investigate the length model by working with relationship rods to find equivalent fractions. Students develop skills
in reasoning and problem solving as they explain how two fractions are equivalent (the same length). Relationship rods are wooden or plastic rods in ten different colors, ranging in length from one
to ten centimeters.
Thinkfinity Partner: Illuminations
Grade Span: 3,4,5
Subject: Mathematics
Title: Fun with Fractions
Description: In this seven-lesson unit from Illuminations, students explore relationships among fractions through work with the set model. This early work with fraction relationships helps students
make sense of basic fraction concepts and facilitates work with comparing and ordering fractions and working with equivalency.
Thinkfinity Partner: Illuminations
Grade Span: 3,4,5
Subject: Mathematics
Title: More Fun with Fraction Strips
Description: In this lesson, one of a multi-part unit from Illuminations, students work with fraction strips to compare and order fractions. Building on work done with fraction relationships in a
previous lesson, students develop skills in problem solving and reasoning as they make connections between various fractions.
Thinkfinity Partner: Illuminations
Grade Span: 3,4,5
Subject: Mathematics
Title: Fraction Models
Description: This student interactive, from Illuminations, explores several representations for fractions using adjustable numerators and denominators. Students can see decimal and percent
equivalents, as well as a model that represents each fraction.
Thinkfinity Partner: Illuminations
Grade Span: 3,4,5,6,7,8
Subject: Mathematics
Title: Fraction Game
Description: This student interactive, from Illuminations, simulates a flash card fraction game. Students flip cards one at a time and try to match the values to the given number lines.
Thinkfinity Partner: Illuminations
Grade Span: 3,4,5,6,7,8
Subject: Mathematics
Title: Number and Operations Web Links
Description: This collection of Web links, reviewed and presented by Illuminations, offers teachers and students information about and practice in concepts related to arithmetic. Users can read the
Illuminations Editorial Board's review of each Web site, or choose to link directly to the sites.
Thinkfinity Partner: Illuminations
Grade Span: K,1,2,3,4,5,6,7,8,9,10,11,12
Subject: Mathematics
Title: Communicating about Mathematics Using Games: Playing Fraction Tracks
Description: Mathematical games can foster mathematical communication as students explain and justify their moves to one another. In addition, games can motivate students and engage them in thinking
about and applying concepts and skills. This e-example from Illuminations contains an interactive version of a game that can be used in the grades 3-5 classroom to support students' learning about
fractions. e-Math Investigations are selected e-examples from the electronic version of the Principles and Standards of School Mathematics (PSSM). The e-examples are part of the electronic version of
the PSSM document. Given their interactive nature and focused discussion tied to the PSSM document, the e-examples are natural companions to the i-Math investigations.
Thinkfinity Partner: Illuminations
Grade Span: 3,4,5
Subject: Mathematics
Title: A Brownie Bake
Description: This lesson, one of a multi-part unit from Illuminations, focuses on student organization, preparation, and presentation of some simple foods as a way to apply various mathematical
concepts, with problem-solving techniques being a central theme. Students prepare, after determining minimum amounts of ingredients required, a commercial brownie mix and serve equal portions to all
class members.
Thinkfinity Partner: Illuminations
Grade Span: 3,4,5
Subject: Mathematics
Title: Eggsactly with Eighteen Eggs
Description: In this lesson, one of a multi-part unit from Illuminations, students continue to examine fractions as part of a set. This lesson helps students develop skill in problem-solving and
reasoning as they examine relationships among the fractions used to describe part of a set of eighteen.
Thinkfinity Partner: Illuminations
Grade Span: 3,4,5
|
{"url":"http://alex.state.al.us/plans2.php?std_id=53726","timestamp":"2014-04-16T04:26:22Z","content_type":null,"content_length":"41479","record_id":"<urn:uuid:9349f052-18b1-42c9-ad3f-19320d2106d0>","cc-path":"CC-MAIN-2014-15/segments/1397609521512.15/warc/CC-MAIN-20140416005201-00138-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Re: st: Quintiles
From Maarten Buis <maartenlbuis@gmail.com>
To statalist@hsphsun2.harvard.edu
Subject Re: st: Quintiles
Date Wed, 8 Aug 2012 18:40:49 +0200
--- Leonardo Jaime Gonzalez Allende asked:
>>> I'm trying to divide a sample of households (expanded)
>>> into quintiles using the xtile command. I want to create
>>> 5 groups with the exactly same quantity of population,
>>> but using the xtitle command, the quantity of households
>>> in each quintil is very slightly different to 20% when the
>>> number of observations isn't exactly dividable by 5.
>>> Do you know any command to divide the population
>>> (sample expanded) into 5 groups of exactly same weight?
--- I answered:
>> That is logically impossible.
-- Leonardo Jaime Gonzalez Allende wrote me privately:
> sorry for write you directly, but I like to know, why is
> logically impossible separate the population (by
> incomes) in 5 groups of the same weith?
Don't send such follow-up questions privately. If you find my answer puzzling, then chances are that someone else who is following this discussion finds it puzzling too. This is explained in the Statalist FAQ.
Think of it this way: How can you divide 6 persons into 5 equally sized groups? You could assign one person to each group, and then you are left with one person. If you could split that remaining person up into 5 one-fifth persons, then we could create 5 equally sized groups. However, that is impossible (or rather bloody, if we take it too literally). So given the inherently discrete nature of the number of observations, you cannot divide your data up into 5 groups of exactly the same size if the number of observations is not divisible by 5.
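The same point in a tiny sketch (Python rather than Stata, purely to show the arithmetic; this is not part of the original reply): splitting a count that is not a multiple of 5 forces at least one group to differ in size.

import numpy as np

groups = np.array_split(np.arange(6), 5)   # 6 observations split into 5 groups
print([len(g) for g in groups])            # [2, 1, 1, 1, 1] -- the sizes cannot all be equal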
-- Maarten
Maarten L. Buis
Reichpietschufer 50
10785 Berlin
* For searches and help try:
* http://www.stata.com/help.cgi?search
* http://www.stata.com/support/statalist/faq
* http://www.ats.ucla.edu/stat/stata/
|
{"url":"http://www.stata.com/statalist/archive/2012-08/msg00388.html","timestamp":"2014-04-19T10:18:13Z","content_type":null,"content_length":"10296","record_id":"<urn:uuid:c10fd9ea-0a11-4ea4-9f3c-7e0f46450438>","cc-path":"CC-MAIN-2014-15/segments/1398223206147.1/warc/CC-MAIN-20140423032006-00622-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Adjoining a new isolated point without changing the space
Suppose $X$ is a $T_1$ space with an infinite set of isolated points. Show that if $X^\sharp = X \cup \lbrace \infty \rbrace$ is obtained by adding a single new isolated point, then $X$ and $X^\sharp$ are homeomorphic.
I am almost embarrased to raise this, which seems obvious. The proof must be simple, but it eludes me for now. Maybe it is an exercise in some textbook. You can clearly establish a 1-1 equivalence
between the isolated points of $X$ and those of $X^\sharp$. But it is not clear how this equivalence would extend to the closure of the isolated points.
The theorem is easy when $X$ is compact $T_2$ and $cl(D) = \beta(D)$, where $D$ is the set of isolated points.
2 Answers
In my answer to http://mathoverflow.net/questions/26414 I described a somewhat simpler-looking example than Nik's, but proving that it works may be harder. Take two copies of $\beta\mathbb N$ and glue each non-isolated point of one copy to the corresponding point of the other copy. Any way of "absorbing" a new isolated point into the two copies of $\mathbb N$ forces a relative shift of those two copies, which forces corresponding shifts of the non-isolated points, which in turn conflicts with the gluing. The perhaps surprising thing about this example is that, if you add two isolated points, the result is (easily) homeomorphic to the original.
Yeah, that's cleaner. – Nik Weaver Aug 7 '12 at 20:50
Two good answers. Thanks. – Fred Dashiell Aug 7 '12 at 23:11
Well, I think this is false. Start with a family of $2^{2^{\aleph_0}}$ mutually non-homeomorphic connected spaces, and attach them to the non-isolated points of $\beta {\bf N}$. (I.e., start with the disjoint union of $\beta {\bf N}$ and the other spaces, and factor out an equivalence relation which identifies each point of $\beta {\bf N} - {\bf N}$ with a point of one of the other spaces.) Any homeomorphism between $X$ and $X^\sharp$ has to take isolated points to isolated points; taking closures, it takes $\beta{\bf N}$ onto itself; and by connectedness it takes each of the extra spaces onto itself. So it has to fix each point of $\beta {\bf N} - {\bf N}$. Now the question is whether a bijection between ${\bf N}$ and ${\bf N}$ minus a point can fix $\beta {\bf N} - {\bf N}$ pointwise. The answer is no because iterating the map, starting on the missing point, yields a sequence within ${\bf N}$ that gets shifted by the map, and it is easy to see that this shift does not fix the ultrafilters supported on that sequence.
|
{"url":"http://mathoverflow.net/questions/104212/adjoining-a-new-isolated-point-without-changing-the-space?sort=oldest","timestamp":"2014-04-18T18:37:46Z","content_type":null,"content_length":"56316","record_id":"<urn:uuid:9360eeee-9f36-4b8c-a890-18e97d20fb52>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.7/warc/CC-MAIN-20140416005215-00100-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Comtet [1974]. Advanced Combinatorics
, 1989
"... . Lambda-Upsilon-Omega, \Upsilon\Omega , is a system designed to perform automatic analysis of well-defined classes of algorithms operating over "decomposable" data structures. It consists of an
`Algebraic Analyzer' System that compiles algorithms specifications into generating functions of averag ..."
Cited by 14 (2 self)
Add to MetaCart
Lambda-Upsilon-Omega, $\Lambda\Upsilon\Omega$, is a system designed to perform automatic analysis of well-defined classes of algorithms operating over "decomposable" data structures. It consists of an
`Algebraic Analyzer' System that compiles algorithm specifications into generating functions of average costs, and an `Analytic Analyzer' System that extracts asymptotic information on coefficients
of generating functions. The algebraic part relies on recent methodologies in combinatorial analysis based on systematic correspondences between structural type definitions and counting generating
functions. The analytic part makes use of partly classical and partly new correspondences between singularities of analytic functions and the growth of their Taylor coefficients. The current version
$\Lambda\Upsilon\Omega_0$ of $\Lambda\Upsilon\Omega$ implements, as basic data types, term trees as encountered in symbolic algebra systems. The analytic analyzer can treat large classes of functions with explicit
|
{"url":"http://citeseerx.ist.psu.edu/showciting?cid=3397539","timestamp":"2014-04-18T20:02:50Z","content_type":null,"content_length":"12291","record_id":"<urn:uuid:19277e19-113d-4b53-8291-4b13527ec456>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.7/warc/CC-MAIN-20140416005215-00086-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Listing Closed Sets of Strongly Accessible Set Systems with Applications to Data Mining
Tamas Horvath, Axel Poigne and Stefan Wrobel
Theoretical Computer Science Volume 411, Number 3, , 2010.
We study the problem of listing all closed sets of a closure operator $\sigma$ that is a partial function on the power set of some finite ground set E, i.e., $\sigma \subseteq \mathcal{F}$ with $\
mathcal{F} \subseteq \mathcal{P}(E)$. A very simple divide-and-conquer algorithm is analyzed that correctly solves this problem if and only if the domain of the closure operator is a strongly
accessible set system. Strong accessibility is a strict relaxation of greedoids as well as of independence systems. This algorithm turns out to have delay $O(|E| (T_\mathcal{F}+T_\sigma+|E|))$ and
space $O(|E|+S_\mathcal{F}+S_\sigma)$, where $T_\mathcal{F}$, $T_\sigma$ , $S_\mathcal{F}$ , and $S_\sigma$ are the time and space complexities of checking membership in $\mathcal{F}$ and computing $
\sigma$, respectively. In contrast, we show that the problem becomes intractable for accessible set systems. We relate our results to the data mining problem of listing all support-closed patterns of
a dataset and show that there is a corresponding closure operator for all datasets if and only if the set system satisfies a certain confluence property.
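As a purely illustrative sketch of the problem being solved (not the authors' delay-bounded divide-and-conquer algorithm), the closed sets of a small, total closure operator can be listed by brute force over the power set; the point of the paper is precisely to avoid this exponential enumeration. The example operator and names below are assumptions for illustration only.

# Brute-force listing of closed sets for an example closure operator sigma
# (closing a set of integers under taking divisors); illustrative only.
from itertools import chain, combinations

E = list(range(1, 7))

def sigma(s):
    # example closure operator: every divisor of every member is added
    return frozenset(d for x in s for d in range(1, x + 1) if x % d == 0)

def powerset(xs):
    return chain.from_iterable(combinations(xs, r) for r in range(len(xs) + 1))

# For a genuine closure operator (extensive, monotone, idempotent) the closed
# sets are exactly the images sigma(S), so collecting them over all S lists
# every closed set, at the cost of 2^|E| work.
closed_sets = {sigma(frozenset(s)) for s in powerset(E)}
print(sorted(map(sorted, closed_sets)))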
EPrint Type: Article
Project Keyword: Project Keyword UNSPECIFIED
Subjects: Theory & Algorithms
ID Code: 6143
Deposited By: Mario Boley
Deposited On: 08 March 2010
|
{"url":"http://eprints.pascal-network.org/archive/00006143/","timestamp":"2014-04-21T07:09:22Z","content_type":null,"content_length":"7245","record_id":"<urn:uuid:1252a765-5f04-47ce-8da5-354edb68ab09>","cc-path":"CC-MAIN-2014-15/segments/1398223202774.3/warc/CC-MAIN-20140423032002-00400-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Orthogonal contrasts
Orthogonal contrasts for analysis of variance are independent linear comparisons between the groups of a factor with at least three fixed levels. The sum of squares for a factor A with a levels is
partitioned into a set of a - 1 orthogonal contrasts each with two levels (so each has p = 1 test degree of freedom), to be tested against the same error MS as for the factor. Each contrast is
assigned a coefficient at each level of A such that its a coefficients sum to zero, with coefficients of equal value indicating pooled levels of the factor, coefficients of opposite sign indicating
factor levels to be contrasted, and a zero indicating an excluded factor level. With this numbering system, two contrasts are orthogonal to each other if the products of their coefficients sum to zero.
For example, a three-level factor A has the following coefficients for its two orthogonal contrasts B and C:
Factor   Contrast^*
A         B   C(B)
1         2    0
2        -1    1
3        -1   -1
Contrast B compares group A[1] to the average of groups A[2] and A[3]; contrast C (which is nested in B) compares group A[2] to group A[3]. If A[1] is a control and A[2] and A[3] are treatments, then
the contrasts test respectively for a difference between the control and the pooled treatments, and for a difference between the treatments. The contrasts are orthogonal because they have a zero sum
of the products of their coefficients (2x0 + -1x1 + -1x-1 = 0). If the control belongs to a different level of A, then the rows of the contrast coefficients can be rearranged accordingly without
losing orthogonality. These two contrasts can be analysed in GLM with sequential SS by requesting the terms: B + C(B) as fixed factors. This will give SS[B] + SS[C(B)] = SS[A], and df[B] + df[C(B)] = df[A].
A four-level factor A can have the following alternative sets of three orthogonal contrasts B to D (in any permutation of coefficient rows for each set, and analysed in GLM by requesting the fixed
contrast terms with sequential SS):
Factor   Contrast set 1^*          Factor   Contrast set 2^           Factor   Contrast set 3
A         B   C(B)  D(C B)         A         B   C(B)  D(B)           A         B    C    D
1         3    0     0             1         1    1     0             1         1    1   -1
2        -1    2     0             2         1   -1     0             2         1   -1    1
3        -1   -1     1             3        -1    0     1             3        -1    1    1
4        -1   -1    -1             4        -1    0    -1             4        -1   -1   -1
Note that the analysis of contrast set 3, by running GLM with sequential SS on terms B + C + D, is equivalent to running a balanced ANOVA either on terms B + C + C*B or on terms B + D + D*B or on
terms C + D + D*C.
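As a quick numerical check (not part of the original page; Python/NumPy is used here purely for illustration, rather than the R scripts linked below), the orthogonality of the tabled sets is easy to verify: within each set every contrast's coefficients sum to zero, and the cross-products of every pair of contrasts also sum to zero.

import numpy as np

# Columns are the contrasts of each alternative set for the four-level factor A.
set1 = np.array([[3, 0, 0], [-1, 2, 0], [-1, -1, 1], [-1, -1, -1]])   # B, C(B), D(C B)
set2 = np.array([[1, 1, 0], [1, -1, 0], [-1, 0, 1], [-1, 0, -1]])     # B, C(B), D(B)
set3 = np.array([[1, 1, -1], [1, -1, 1], [-1, 1, 1], [-1, -1, -1]])   # B, C, D

for coeffs in (set1, set2, set3):
    print(coeffs.sum(axis=0))   # each contrast's coefficients sum to zero
    print(coeffs.T @ coeffs)    # off-diagonal entries are zero, so the contrasts are orthogonal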
A five-level factor A can have the following alternative sets of four orthogonal contrasts B to E (in any permutation of coefficient rows for each set, and analysed in GLM by requesting the fixed
contrast terms with sequential SS):
Factor   Contrast set 1^                    Factor   Contrast set 2^*
A         B   C(B)  D(B)  E(D B)            A         B   C(B)  D(C B)  E(D C B)
1         3    1     0     0                1         4    0     0       0
2         3   -1     0     0                2        -1    3     0       0
3        -2    0     2     0                3        -1   -1     2       0
4        -2    0    -1     1                4        -1   -1    -1       1
5        -2    0    -1    -1                5        -1   -1    -1      -1
Factor   Contrast set 3^                    Factor   Contrast set 4^
A         B   C(B)  D(C B)  E(C B)          A         B   C(B)  D(B)  E(B)
1         4    0     0       0              1         4    0     0     0
2        -1    1     1       0              2        -1    1     1    -1
3        -1    1    -1       0              3        -1    1    -1     1
4        -1   -1     0       1              4        -1   -1     1     1
5        -1   -1     0      -1              5        -1   -1    -1    -1
Analysis of contrasts on a factor A does not require a significant A effect. If it is significant, however, at least one of the orthogonal sets will contain at least one significant contrast. For a
priori planned orthogonal contrasts, the conceptual unit for error rate is conventionally taken to be the individual contrast (rather than the family of contrasts in the full set), just as it is
taken to be the individual term in multi-factorial ANOVA partitioned into treatment effects and interactions (rather than the full experiment). The family-wise Type-I error must apply, however, if
contrasts are used for post hoc comparisons to locate the biggest differences amongst levels of a treatment. The family-wise error rate for m independent tests, each with an individual error rate α,
is 1 - (1 - α)^m; the family-wise error rate for m orthogonal contrasts is some small amount less than this because their significance tests are not independent (since all use the same error mean
square, even though the contrasts are independent since orthogonal). The size of α can be reduced to control the family-wise error rate, though at a cost of substantially diminishing power to detect
individual differences.
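For example, with α = 0.05 for each contrast and m = 4 independent tests, the family-wise error rate is 1 - (1 - 0.05)^4 = 1 - 0.8145 = 0.1855, i.e. roughly an 18.5% chance of at least one false positive somewhere in the family.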
In the usual application of orthogonal contrasts, for a priori planned comparisons, the choice of contrast set for a factor A with 4 or more levels will be informed by the study design. For example,
a 4-level factor A may be suited to set 1 when the levels include a control and three treatments, whereas it may be suited to set 3 when the levels include cross-factored treatment combinations
(e.g., +/+, +/-, -/+, -/-).
Significance tests should be reported for all orthogonal contrasts in the set, because the set partitions the variation due to factor A. For example, consider the two contrasts B and C(B) comprising
the set for a 3-level factor A applied to a control and two treatments. Although the contrasts test independent hypotheses, since they are orthogonal, interpretation of the difference between the two
treatments in contrast C(B) depends on their combined difference from a control in contrast B, since both contrasts share the same error mean square.
A set of orthogonal contrasts is balanced only if each level of A has the same number of replicates, and if all pairs of crossed contrasts in the set have a consistent number of levels of A
representing each pair of contrast levels. For example, in contrast set 3 of the 4-level factor A above, all three of its crossed contrast pairs have one level of factor A representing each pair of
contrast levels (1, 1 and 1, -1, and -1, 1, and -1, -1). The same is true of contrast set 4 of the 5-level factor A. For a factor A with eight or more levels, it is possible – though not desirable –
to construct unbalanced orthogonal contrast sets with pairs of crossed contrasts having inconsistent numbers of levels of A representing each pair of contrast levels.
These web pages include examples of balanced orthogonal contrasts for a priori planned comparisons amongst three- and five-level single factors, examples for three- and four-level factors in
cross-factored designs, including contrast-by-contrast interactions, an example of contrasts for a one-factor randomized block and an example for a two-factor randomized block, and an example of
contrasts for a three-factor split plot. Click here for the suite of commands in R (freeware statistical package, R Development Core Team 2010) that will analyze each of the example datasets.
Above five levels for a factor, the number of alternative sets of orthogonal contrasts starts to increase rapidly with each additional level (sequence A165438 in OEIS). The program Contrasts.exe will
provide coefficients for all possible sets of balanced orthogonal contrasts on a factor with any number of levels up to a maximum of 12. For a chosen set or range of sets, it will store contrast
coefficients in a text file for any specified number of replicates, and will identify the (unique) GLM model for analysing the set (with sequential SS, after each data line has been tagged with the
response value for the replicate).
^* This orthogonal set is also known as the set of Helmert contrasts for a factor with this number of levels.
Doncaster, C. P. & Davey, A. J. H. (2007) Analysis of Variance and Covariance: How to Choose and Construct Models for the Life Sciences. Cambridge: Cambridge University Press.
R Development Core Team (2010). R: A language and environment for statistical computing. R Foundation for Statistical Computing, Vienna, Austria. ISBN 3-900051-07-0, URL http://www.R-project.org.
|
{"url":"http://www.southampton.ac.uk/~cpd/anovas/datasets/Orthogonal%20contrasts.htm","timestamp":"2014-04-18T13:45:10Z","content_type":null,"content_length":"112313","record_id":"<urn:uuid:39a284ed-6a8a-4d60-8c7a-3e08fee84fb0>","cc-path":"CC-MAIN-2014-15/segments/1397609533689.29/warc/CC-MAIN-20140416005213-00364-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Definition from Wiktionary, the free dictionary
valuation (plural valuations)
1. An estimation of something's worth.
2. (finance) The process of estimating the market value of a financial asset or liability.
□ 1993, Historic American Building Survey, Town of Clayburg: Refractories Company Town, National Park Service, page 4:
The tax assessor put them in fourteen valuation groups ranging from one two-story brick house and two one-and-a-half-story houses to the largest groups of eighteen two-story houses and
twenty-four one-story bungalows.
3. (logic, propositional logic, model theory) An assignment of truth values to propositional variables, with a corresponding assignment of truth values to all propositional formulas with those
variables (obtained through the recursive application of truth-valued functions corresponding to the logical connectives making up those formulas).
4. (logic, first-order logic, model theory) A structure, and the corresponding assignment of a truth value to each sentence in the language for that structure.
5. (algebra) A measure of size or multiplicity.
6. (measure theory, domain theory) A map from the class of open sets of a topological space to the set of positive real numbers including infinity.
|
{"url":"http://en.wiktionary.org/wiki/valuation","timestamp":"2014-04-20T09:03:19Z","content_type":null,"content_length":"32292","record_id":"<urn:uuid:421e13bc-9295-4866-8780-56f76e966ff1>","cc-path":"CC-MAIN-2014-15/segments/1398223206118.10/warc/CC-MAIN-20140423032006-00204-ip-10-147-4-33.ec2.internal.warc.gz"}
|
[Numpy-discussion] one-offset arrays
Chris Barker chrishbarker at home.net
Wed Aug 29 12:41:57 CDT 2001
It's mostly been said already, but I can't help but add my $.02
Eric Nodwell wrote:
> >> Without one-offset indexing, it seems to me that Python is minimally
> >> useful for numerical computations. Many, perhaps the majority, of
> >> numerical algorithms are one-indexed. Matlab for example is one-based
> >> for this reason. In fact it seems strange to me that a "high-level"
> >> language like Python should use zero-offset lists.
I was a heavy user of MATLAB for a long time before I discovered NumPy,
and I have to say that I like the 0-indexing scheme MUCH better!
> In my view, the most important reason to prefer 1-based indexing
> versus 0-based indexing is compatibility. For numerical work, some of
> the languages which I use or have used are Matlab, Mathematica, Maple
> and Fortran. These are all 1-indexed.
Actually Fortran is indexed however you decide you want it:
DIMENSION array(0:9)
DIMENSION array(1:10) or DIMENSION array(10)
DIMENSION array(1900:1999)
Are all legal. This is a VERY handy feature, and I would say that I used
the 0-indexed version most often. The reason is related to C's pointer
arithmetic logic: Often the array would represent discrete points on a
continuous scale, so I could find the value of X, for instance, by
Xaxis(i) = MinX + i * DeltaX
with 1-indexing, you have to subtract 1 all the time.
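(A minimal NumPy illustration of the point, added for concreteness; this is a sketch, not code from the original thread.)

import numpy as np

n, min_x, delta_x = 10, 0.0, 0.5
x_axis = min_x + delta_x * np.arange(n)   # element i is min_x + i*delta_x; no "-1" needed anywhere
print(x_axis)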
I suspect that the higher level nature of NumPy would make it a lot
harder to have arbitrary indexing of this fashion: if all you have to do
is access elements, it is easy, but if you have a whole collection of
array oriented operations, as NumPy does, you would probably have to
stick with one standard, and I think the 0-indexing standard is the
> for experts in a
> particular field who are accustomed to certain ingrained notations, it
> is the code which breaks the conventional notation which is most
> error-prone.
This is why being able to set your own indexing notation is the best
option, but a difficult one to implement.
> Python is otherwise such an elegant and
> natural language. Why the ugly exception of making the user conform to
> the underlying mechanism of an array being an address plus an offset?
I gave an example above, and others have too: Python's indexing scheme
is elegant and natural for MANY usages. As with many things Python
(indentation, anyone!), I found it awkward to make the transition at
first, but then found that it, in fact, made things easier in general.
For me, this is the very essence of truly usable design: it is designed
to make people most productive in the long run, not most comfortable
when they first start using it.
> All this is really neither here nor there, since this debate, at least
> as far as Python is concerned, was probably settled 10 years ago and
Well, yes, and having NumPy different from the rest of Python would NOT
be a good idea either.
> I'm sure nobody wants to hear anything more about it at this point.
> As you point out, I can define my own array type with inheritance. I
> will also need my own range command and several other functions which
> haven't occured to me yet. I was hoping that there would be a standard
> module to implement this.
If it were truly generally useful, there probably would be such a
package. I imagine most people have found it easier to make the
transition than to write a whole package that would allow you not to
make the transition. If you really have a lot of code that is 1-indexed
that you want to translate, it may be worth the effort for you, and I'm
sure there are other folks that would find it useful, but remember that
it will always be incompatible with the rest of Python, which may make
it harder to use than you imagine.
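(For what it's worth, here is a bare-bones sketch of the kind of wrapper being discussed; it is not code from this thread, and it shows both how little is needed for simple integer indexing and how much more slices, negative indices, and array operations would demand.)

class OneBased:
    """Minimal 1-offset wrapper around a sequence; integer indices only."""
    def __init__(self, data):
        self._a = list(data)
    def __getitem__(self, i):
        return self._a[i - 1]
    def __setitem__(self, i, value):
        self._a[i - 1] = value

v = OneBased([10, 20, 30])
print(v[1], v[3])   # prints: 10 30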
Christopher Barker,
ChrisHBarker at home.net
http://members.home.net/barkerlohmann
Oil Spill Modeling
Water Resources Engineering
Coastal and Fluvial Hydrodynamics
|
{"url":"http://mail.scipy.org/pipermail/numpy-discussion/2001-August/013188.html","timestamp":"2014-04-20T16:56:05Z","content_type":null,"content_length":"7111","record_id":"<urn:uuid:f307b57e-b35a-456e-97e2-b04dd0171302>","cc-path":"CC-MAIN-2014-15/segments/1397609539066.13/warc/CC-MAIN-20140416005219-00292-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Schnelle Multiplikation grosser
Results 1 - 10 of 11
- SIAM J. on Computing , 1997
"... A digital computer is generally believed to be an efficient universal computing device; that is, it is believed able to simulate any physical computing device with an increase in computation
time by at most a polynomial factor. This may not be true when quantum mechanics is taken into consideration. ..."
Cited by 882 (2 self)
Add to MetaCart
A digital computer is generally believed to be an efficient universal computing device; that is, it is believed able to simulate any physical computing device with an increase in computation time by
at most a polynomial factor. This may not be true when quantum mechanics is taken into consideration. This paper considers factoring integers and finding discrete logarithms, two problems which are
generally thought to be hard on a classical computer and which have been used as the basis of several proposed cryptosystems. Efficient randomized algorithms are given for these two problems on a
hypothetical quantum computer. These algorithms take a number of steps polynomial in the input size, e.g., the number of digits of the integer to be factored.
- Journal of Cryptology , 1997
"... Feedback shift registers with carry operation (FCSR’s) are described, implemented, and analyzed with respect to memory requirements, initial loading, period, and distributional properties of
their output sequences. Many parallels with the theory of linear feedback shift registers (LFSR’s) are presen ..."
Cited by 50 (7 self)
Add to MetaCart
Feedback shift registers with carry operation (FCSR’s) are described, implemented, and analyzed with respect to memory requirements, initial loading, period, and distributional properties of their
output sequences. Many parallels with the theory of linear feedback shift registers (LFSR’s) are presented, including a synthesis algorithm (analogous to the Berlekamp-Massey algorithm for LFSR’s)
which, for any pseudorandom sequence, constructs the smallest FCSR which will generate the sequence. These techniques are used to attack the summation cipher. This analysis gives a unified approach
to the study of pseudorandom sequences, arithmetic codes, combiners with memory, and the Marsaglia-Zaman random number generator. Possible variations on the FCSR architecture are indicated at the
end. Index Terms – Binary sequence, shift register, stream cipher, combiner with memory, cryptanalysis, 2-adic numbers, arithmetic code, 1/q sequence, linear span. 1
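(As an illustrative aside, not the authors' own code: the 2-adic connection named in the index terms can be played with directly, since the eventually periodic bit streams such registers produce coincide with coefficient sequences of 2-adic expansions of rationals p/q with odd q. A minimal Python sketch of such an expansion:)

def two_adic_bits(p, q, k):
    # First k coefficients of the 2-adic expansion of p/q (q odd): the kind of
    # eventually periodic bit stream discussed above.
    bits = []
    for _ in range(k):
        b = p % 2                 # q is odd, so the next coefficient is just p mod 2
        bits.append(b)
        p = (p - b * q) // 2
    return bits

print(two_adic_bits(1, 3, 10))    # [1, 1, 0, 1, 0, 1, 0, 1, 0, 1]: the expansion of 1/3 in Z_2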
- Math. Comp , 2006
"... Abstract. We present new algorithms for computing the values of the Schur sλ(x1,x2,...,xn)andJackJ α λ (x1,x2,...,xn) functions in floating point arithmetic. These algorithms deliver guaranteed
high relative accuracy for positive data (xi,α>0) and run in time that is only linear in n. 1. ..."
Cited by 7 (4 self)
Add to MetaCart
Abstract. We present new algorithms for computing the values of the Schur $s_\lambda(x_1, x_2, \ldots, x_n)$ and Jack $J^\alpha_\lambda(x_1, x_2, \ldots, x_n)$ functions in floating point arithmetic. These algorithms deliver guaranteed high relative accuracy for positive data ($x_i, \alpha > 0$) and run in time that is only linear in n. 1.
"... Abstract. We describe an algorithm for computing Bernoulli numbers. Using a parallel implementation, we have computed Bk for k = 108, a new record. Our method is to compute Bk modulo p for many
small primes p, and then reconstruct Bk via the Chinese Remainder Theorem. The asymptotic time complexity ..."
Cited by 3 (1 self)
Add to MetaCart
Abstract. We describe an algorithm for computing Bernoulli numbers. Using a parallel implementation, we have computed $B_k$ for $k = 10^8$, a new record. Our method is to compute $B_k$ modulo $p$ for many small primes $p$, and then reconstruct $B_k$ via the Chinese Remainder Theorem. The asymptotic time complexity is $O(k^2 \log^{2+\varepsilon} k)$, matching that of existing algorithms that exploit the relationship between $B_k$ and the Riemann zeta function. Our implementation is significantly faster than several existing implementations of the zeta-function method.
"... Abstract — In the past, mathematicians actively used the ability of some people to perform calculations unusually fast. With the advent of computers, there is no longer need for human
calculators – even fast ones. However, recently, it was discovered that there exist, e.g., multiplication algorithms ..."
Add to MetaCart
Abstract — In the past, mathematicians actively used the ability of some people to perform calculations unusually fast. With the advent of computers, there is no longer need for human calculators –
even fast ones. However, recently, it was discovered that there exist, e.g., multiplication algorithms which are much faster than standard multiplication. Because of this discovery, it is possible
than even faster algorithm will be discovered. It is therefore natural to ask: did fast human calculators of the past use faster algorithms – in which case we can learn from their experience – or
they simply performed all operations within a standard algorithm much faster? This question is difficult to answer directly, because the fast human calculators ’ selfdescription of their algorithm is
very fuzzy. In this paper, we use an indirect analysis to argue that fast human calculators most probably used the standard algorithm.
, 810
"... Abstract. We describe a cache-friendly version of van der Hoeven’s truncated FFT and inverse truncated FFT, focusing on the case of ‘large ’ coefficients, such as those arising in the
Schönhage–Strassen algorithm for multiplication in Z[x]. We describe two implementations and examine their performan ..."
Add to MetaCart
Abstract. We describe a cache-friendly version of van der Hoeven’s truncated FFT and inverse truncated FFT, focusing on the case of ‘large ’ coefficients, such as those arising in the
Schönhage–Strassen algorithm for multiplication in Z[x]. We describe two implementations and examine their performance. 1.
, 2010
"... integer multiplication ..."
"... Abstract. We compute all irregular primes less than 163 577 856. For all of these primes we verify that the Kummer–Vandiver conjecture holds and that the λ-invariant is equal to the index of
irregularity. 1. ..."
Add to MetaCart
Abstract. We compute all irregular primes less than 163 577 856. For all of these primes we verify that the Kummer–Vandiver conjecture holds and that the λ-invariant is equal to the index of
irregularity. 1.
"... Abstract. Generalized Cullen Numbers are positive integers of the form Cb(n):=nbn + 1. In this work we generalize some known divisibility properties of Cullen Numbers and present two primality
tests for this family of integers. The first test is based in the following property of primes from this fa ..."
Add to MetaCart
Abstract. Generalized Cullen Numbers are positive integers of the form $C_b(n) := n b^n + 1$. In this work we generalize some known divisibility properties of Cullen Numbers and present two primality tests for this family of integers. The first test is based on the following property of primes from this family: $n^{b^n} \equiv (-1)^b \pmod{n b^n + 1}$. It is stronger and has less computational cost than Fermat's test (to bases b and n) and than Miller-Rabin's test (if b is odd, to base n). Pseudoprimes for this new test seem to be very scarce: only 4 pseudoprimes have been found among the many millions of Generalized Cullen Numbers tested. We also present a second, more demanding, test for which no pseudoprimes have been found. These tests lead to an algorithm, running in $\tilde{O}(\log^2 N)$ time, which might be very useful in the search for Generalized Cullen Primes. 1.
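(A quick numeric sanity check of the congruence as reconstructed above, for one small member of the family; this is an illustration only, not the authors' algorithm.)

b, n = 3, 2
C = n * b**n + 1                       # 2 * 3^2 + 1 = 19, which is prime
print(pow(n, b**n, C), (-1)**b % C)    # 18 18: n^(b^n) and (-1)^b agree modulo C, as claimed for primes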
, 2009
"... d'un point de torsion dans une ..."
|
{"url":"http://citeseerx.ist.psu.edu/showciting?cid=1199996","timestamp":"2014-04-18T07:24:28Z","content_type":null,"content_length":"32260","record_id":"<urn:uuid:c02ab99f-bfe6-4e60-9856-3e6a5b60c6d3>","cc-path":"CC-MAIN-2014-15/segments/1397609532573.41/warc/CC-MAIN-20140416005212-00455-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Statistical Errors in Mainstream Journals
While we frequently on SBM target the worst abuses of science in medicine, it’s important to recognize that doing rigorous science is complex and mainstream scientists often fall short of the ideal.
In fact, one of the advantages of exploring pseudoscience in medicine is developing a sensitive detector for errors in logic, method, and analysis. Many of the errors we point out in so-called
“alternative” medicine also crop up elsewhere in medicine – although usually to a much less degree.
It is not uncommon, for example, for a paper to fail to adjust for multiple analyses – if you compare many variables, you have to take that into consideration when doing the statistical analysis, otherwise the probability of a chance correlation will be inflated.
I discussed just yesterday on NeuroLogica the misapplication of meta-analysis – in this case to the question of whether or not CCSVI correlates with multiple sclerosis. I find this very common in the
literature, essentially a failure to appreciate the limits of this particular analysis tool.
Another example comes recently from the journal Nature Neuroscience (an article I learned about from Ben Goldacre over at the Bad Science blog). Erroneous analyses of interactions in neuroscience: a
problem of significance investigates the frequency of a subtle but important statistical error in high profile neuroscience journals.
The authors, Sander Nieuwenhuis, Birte U Forstmann, and Eric-Jan Wagenmakers, report:
We reviewed 513 behavioral, systems and cognitive neuroscience articles in five top-ranking journals (Science, Nature, Nature Neuroscience, Neuron and The Journal of Neuroscience) and found that
78 used the correct procedure and 79 used the incorrect procedure. An additional analysis suggests that incorrect analyses of interactions are even more common in cellular and molecular
The incorrect procedure is this – looking at the effects of an intervention to see if they are statistically significant when compared to a no-intervention group (whether it is rats, cells, or
people). Then comparing a placebo intervention to the no-intervention group to see if it has a statistically significant effect. Then comparing the results. This seems superficially legitimate, but
it isn’t.
For example, if the intervention produces a barely statistically significant effect, and the placebo produces a barely not statistically significant effect, the authors might still conclude that the
intervention is statistically significantly superior to placebo. However, the proper comparison is to directly compare the differences to see if the difference of difference is itself statistically
significant (which it likely won’t be in this example).
This is standard procedure, for example, in placebo-controlled medical trials – the treatment group is compared to the placebo group. But what more than half of the researchers were doing in the
articles reviewed is to compare both groups to a no-intervention group but not comparing them to each other. This has the effect of creating the illusion of a statistically significant difference
where none exists, or to create a false positive type of error (erroneously rejecting the null hypothesis).
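To see the error in numbers, here is a small illustration (not from the Nieuwenhuis paper; it assumes SciPy is available and uses made-up summary statistics): the treatment squeaks under p = 0.05 against no intervention, the placebo does not, yet the direct treatment-versus-placebo comparison, the one that actually matters, is nowhere near significant.

# Hypothetical summary data: means 0.0 (control), 0.45 (placebo), 0.55 (treatment),
# common SD 1.0, n = 30 per group. Two-sided t tests on the summary statistics.
from scipy import stats

n, sd = 30, 1.0
treat_vs_ctrl = stats.ttest_ind_from_stats(0.55, sd, n, 0.00, sd, n)
plac_vs_ctrl = stats.ttest_ind_from_stats(0.45, sd, n, 0.00, sd, n)
treat_vs_plac = stats.ttest_ind_from_stats(0.55, sd, n, 0.45, sd, n)

print(round(treat_vs_ctrl.pvalue, 3))   # ~0.037  "significant" versus no intervention
print(round(plac_vs_ctrl.pvalue, 3))    # ~0.087  "not significant" versus no intervention
print(round(treat_vs_plac.pvalue, 3))   # ~0.70   the direct comparison that should be reported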
The frequency of this error is huge, and there is no reason to believe that it is unique to neuroscience research or more common in neuroscience than in other areas of research.
I find this article to be very important, and I thought it deserved more play than it seems to be getting. Keeping to the highest standards of scientific rigor is critical in biomedical research. The
authors do an important service in pointing out this error, and researchers, editors, and peer reviewers should take note. This should, in fact, be part of a check list that journal editors employ to
ensure that submitted research uses legitimate methods. (And yes, this is a deliberate reference to The Checklist Manifesto – a powerful method for minimizing error.)
I would also point out that one of the authors on this article, Eric-Jan Wagenmakers, was the lead author on an interesting paper analyzing the psi research of Daryl Bem. (You can also listen to a
very interesting interview I did with Wagenmakers on my podcast here.) To me this is an example of how it pays for mainstream scientists to pay attention to fringe science – not because the subject
of the research itself is plausible or interesting, but because they often provide excellent examples of pathological science. Examining pathological science is a great way to learn what makes
legitimate science legitimate, and also gives one a greater ability to detect logical and statistical errors in mainstream science.
What the Nieuwenhuis et.al. paper shows is that more scientists should be availing themselves of the learning opportunity afforded by analyzing pseudoscience.
34 thoughts on “Statistical Errors in Mainstream Journals”
1. “513 behavioral … articles in five top-ranking journals … and found that 78 used the correct procedure and 79 used the incorrect procedure.”
78 correct procedures plus 79 incorrect procedures is 157. What was the status of the remaining 356 articles and the statistical techniques they used?
2. The article in Nature Neuroscience reported correct and incorrect analyses in five leading scientific journals in their Table 1. They did not report whether there were differences between error
rates in studies of animals and studies of humans. It was so jarring to read their article because the errors they report rarely occur in randomized clinical trials involving human participants.
They were trying to avoid identifying specific published studies (giving fictive examples) for diplomatic reasons, and this deprives the reader of looking at which kinds of studies fall into the
traps they describe. From the examples they provide, it sounds as if the errors were found in the cellular and molecular neuroscience studies. I cannot recall the last time I read a study of a
clinical intervention in patients which drew conclusions based on p values less than .05 in one group and greater than 0.05 in the other. The listed journals do seem to be weighted toward
laboratory science. I suspect that statistical errors involving research at the bedside occur less frequently than for research at the bench.
It would be interesting (and worthwhile) to have this comparison. It may take some time to do, but would be publishable if it were done.
3. sCAM/acupuncture researchers tend to take this technique one step further. They compare “real” acupuncture and sham technique to a no-intervention group, and when “real” and sham produce statistically equivalent results that significantly differ from no intervention, they conclude that both the sham technique and “real” acupuncture are effective.
It seems to me that the (typically unblinded) no-intervention group in such studies is usually only useful as an indicator of the presence/strength of a placebo response/effect/factor and really shouldn’t be used in statistical analysis if it is included at all. The placebo is supposed to be the control, not the no-intervention group.
This post meshes well with Prometheus’ two part post “Anatomy of a Study: A Dissection Guide”
I think “How to critically/skeptically evaluate a study” would make an excellent workshop or presentation at the next TAM.
4. Great article. I wonder if part of the problem is that many in the cell and molecular programs do not emphasize statistical training at the undergraduate and graduate levels. Our pre-med
undergrads (Tier I research university) must take *either* calculus or stats, and nearly all opt for Calculus. (It is required for our grad students.) I was never required to take stats and to
this day deeply regret it. I use stats daily and calculus almost never, and I think I’m typical of my colleagues (yes, arguing from experience, sorry). I essentially had to teach myself.
Another limitation may be the access to a statistician. My college (heavy in cell and molecular PIs and not the med school) underwrites statistical help via a few stats grad students, but access
is limited and their knowledge thin – after all, they are students. In contrast my med school colleagues have access to full time statisticians and they have the good sense to build 5% salary
into their grants. This is seldom done in the many basic research budgets I’ve reviewed over the years.
Then again, there is that old joke. Ask three statisticians for an answer, and you will get five opinions.
5. I think this is less common with human drug trials because the placebo-controlled paradigm is deeply entrenched. Partly because it is mandated by the FDA – so it has become the standard. (This is
a good thing, BTW)
Regarding statistics – I find that this should be taught more in medical school and science programs. But perhaps it needs to be taught in a more accessible and practical way. Instead of getting
bogged down in the complex math (i.e. teaching statistics to scientists so that they can crunch the numbers themselves) it might be better to split it up to a basic course that everyone takes and
is designed to improve understanding of how to use and read statistics, and an advanced course that gets into the math to the point that you can actually do statistics.
If you go just for the latter, you lose a lot of students who end up not understanding the basics.
6. Dr. Novella,
This is my first post here, but as someone who teaches applied statistics courses in the social sciences, I want to point out a few problems I have encountered when splitting the material the way
you suggest. They mostly stem from the behavior such a split induces, and I don’t know which of them would apply to medical students.
The first is that the stronger students are able to opt into the more advanced class first, since they know most of what is being taught the first semester (usually from undergrad) and therefore
the first class slows down. (It is hard to resist the urge to pitch the class at the median student actually there instead of the median student in the cohort.) Second, the math serves as a
signal to students that they need to take the class seriously. In my opinion, the hardest parts of the material to get students to really take seriously are causal inference and identification.
There are no tests or problems with exact answers to do, but if you get them wrong you can be technically perfect and still have complete nonsense. Again, I don’t know whether med students would
fall prey to these problems, but they can be significant practical impediments to teaching a “how to read and use stats” course before teaching the numbers.
That said, I am constantly seeing technically perfect nonsense published where the authors do exactly what they claim but don’t seem to realize that what they are doing cannot be what they want,
so I am very sympathetic to the need to change how we present statistics.
7. AS – Thanks. I was just throwing that out there as a suggestion, but your points seem valid.
Another way to go is to include statistical analysis in a course on how to interpret studies and the literature. One way to analyze a paper is to ask – are they using the correct statistical
methods? This would include issues I raised in this blog – are they doing multiple analyses? Are they properly applying a meta-analysis? Are they data mining? What does statistical significance
mean? What is a Bayesian analysis?
You can teach this to doctors and scientists without making them learn the advanced math.
8. Dr. Novella – I sort of agree, but I don’t think you need to split the class, but rather emphasize the uses, strengths and weaknesses of different statistical tests. The stats class that I took
that was based in my field was much easier to follow than the one taught by the math department (sorry mathematicians!). The department I am in now is trying to develop a biostatistics course,
focusing on field-specific procedures and software to distinguish it from the math course.
Your second suggestion actually fits with some seminar courses I have taken, both as a grad and an undergrad. That’s something you can easily do with upperclassmen. (I will confess my skepticism
overprepared me for those courses since I already knew how to spot shaky science.)
9. This error is tricky to understand without visuals. It badly needs a diagram, so I took a stab at it.
diagram of the difference error on SaveYourself.ca [~75K]
Feedback, corrections, suggestions are most welcome. Have I got it?
10. “Another way to go is to include statistical analysis in a course on how to interpret studies and the literature.”
Combine this post with Prometheus’s two parter on evaluating studies and you’ve got the basis of a potential killer CME workshop for TAM X.
11. Many times I have wished that authors put the data they used in the analysis on the journal web site, so that the reader could have a chance to analyze it. No identifying data, obviously, but if
the data in the tables and figures were in an ASCII file, then, no matter what software the reader has, there would be an option to check for main effects and interactions on one’s own.
One paper on my desk now reports that an important outcome variable changed by 1.9 hours in the text, but in the table it is reported as 1.9%. There is obviously a typo in one or the other place,
but I cannot tell which. Also, they reported doing multiple regression using 5 covariates, but there were not enough participants in the study to support that kind of analysis; I would like to
check the analyses myself.
This is the age of the internet; let us exploit its capabilities. Posting the text of the article online is revolutionary in that you do not have to go out at night and trudge to the library and
wait in line to use the Xerox machine in the periodicals section, but this amounts to nothing more than a change in delivery of the same content as before. A real revolution would be to enhance
the actual content which is delivered.
12. In the research-review work that I do, the differences between groups, and their statistical significance, are not as telling as the effect sizes (eg, Cohen’s d or similar), which also take into
account the dispersion (standard deviation) of the data. All too often, authors report significant findings based on group differences when the effect sizes (which they do not report) turn out to be of
minimal clinical importance, at best. In some cases, the data needed to calculate effect sizes are not provided, which is either deceptive or careless (one can never tell for sure).
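To make the effect-size point concrete, here is a minimal Python sketch of Cohen’s d computed from summary statistics; the pain-score numbers and sample sizes are invented purely for illustration.

```python
import math

def cohens_d(mean1, sd1, n1, mean2, sd2, n2):
    """Cohen's d using the pooled standard deviation."""
    pooled_sd = math.sqrt(((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2) / (n1 + n2 - 2))
    return (mean1 - mean2) / pooled_sd

# Invented pain scores: treatment mean 4.6 (SD 2.0), control mean 5.1 (SD 2.0), n = 200 per arm.
d = cohens_d(4.6, 2.0, 200, 5.1, 2.0, 200)
print(f"Cohen's d = {d:.2f}")  # about -0.25: significant at this sample size (p ~ 0.01), yet a small effect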
As for Paul Ingraham’s chart – which is very nicely done – it seems to overlook the possibility of having a no-treatment condition (eg, “wait-list group”) as a valid cohort for comparison with an
active-treatment group. This allows evaluation of actual treatment effects compared with the normal course of disease in similarly selected subjects who are enrolled in the research study but
awaiting any treatment. Anyway… I’ve seen that done and it seems to be a valid approach in the pain management field.
Of course, having a third group – receiving placebo – would shed further light on whether treatment effects reflect (a) influences of receiving any therapy, even one expectedly inert, within the
context of the study environment (ie placebo response) and/or (b) the natural course of the disease/condition or one which might spontaneously improve merely by being a part of a research trial
(ie, Hawthorne effect).
13. Some people have proposed eliminating calculus as a medical school prerequisite and substituting statistics. Apart from a few very superficial references to derivatives in physiology class, like
dV/dt, I encountered precious few allusions to the concepts of the calculus.
How about making linear algebra a prerequisite instead of calculus, together with a statistics course that covers the concepts of multiple regression, taken either as a second undergraduate math course or
as a first- or second-year medical school course? The med school course would focus on the assumptions of statistical tests and the pitfalls into which one may fall if unwary. If there are
pitfalls in diagnosis which clinicians are taught to be wary of, there are traps in inference which future readers of new research should be taught to recognize and avoid. This does not mean that
every medical student would be expected to run a lot of analyses, but just as we hope that they can make sense of radiologists’ reports and can look at a few images themselves, we can hope that
the methods section of published literature will not look like a bunch of crazy shapes and shadows with no rhyme or reason.
14. “This should, in fact, be part of a check list that journal editors employ ”
Damn it, what happened to the reviewers? It is they that are incompetent. It is they that failed, either through ignorance or laziness.
Next the proposal to teach stats without calculus disgusts me. Let’s just have Gaussian distributions be magical? That is the “best fitting line” cause my calculating machine says so?
It is however indisputable that scientists need more courses on the design and analysis of experiments – I see folks designing experiments who have absolutely no business doing so – they go
ahead with them without a serious idea of how the data will be analyzed. When I tell my scientists that what they really want is to test an interaction, some don’t know what I’m talking about
(this is the issue of the article, using different words). I explain it 3 different ways, and they sometimes still refuse to get it. I write down a model, and their brains shut down. Wanna do
science? Get a clue. Stop letting folks do biology while (and because) they hate the math. We are making docs and scientists who can’t even properly read the literature. We are making a
literature where statistical cheating is common, cause the reader, the reviewer, and the editor all fail to spot it.
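For readers who have not seen it written out, here is a minimal sketch of what “test the interaction” means in practice, using a single model rather than two separate within-group t-tests. Everything here is simulated and the variable names (genotype, treatment) are just placeholders.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 12  # animals per cell of a hypothetical 2x2 design

# Invented layout: wild-type vs knockout, each either untreated or given the drug.
df = pd.DataFrame({
    "genotype":  np.repeat(["WT", "KO"], 2 * n),
    "treatment": np.tile(np.repeat(["ctrl", "drug"], n), 2),
})
# Simulate a response where the drug effect is larger in KO than in WT animals.
true_mean = {("WT", "ctrl"): 10, ("WT", "drug"): 11, ("KO", "ctrl"): 10, ("KO", "drug"): 14}
df["y"] = [true_mean[(g, t)] for g, t in zip(df["genotype"], df["treatment"])]
df["y"] = df["y"] + rng.normal(0, 2, len(df))

# "Is the drug effect different in KO vs WT?" is the interaction term of one model,
# not two separate within-genotype t-tests compared by eyeballing their p values.
model = smf.ols("y ~ C(genotype) * C(treatment)", data=df).fit()
print(model.summary().tables[1])  # the genotype:treatment row answers the question
```

The coefficient on the interaction row is the direct answer to “is the effect different between the two groups?”, which is exactly what the pair of separate significance tests does not provide.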
15. Some case studies. We have a culture problem.
I am having to explain why I take logarithms to post-docs, or having to explain what I mean when I ask them why they took the anti-logs of the RT-PCR data. They compare the cycle thresholds,
which is good, since it is already in log space. They even compute the standard deviations and do T-tests in log space – good, though they may not know why it’s good. They then plot in non-log
space, and do a bastard thing that they have no clue about to obtain error bars for that plot. They argue that “everybody does it that way” – popularity, or “my gigantic non-peer reviewed book
says to do it this way” – authority.
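For what it’s worth, here is a minimal sketch of the “stay in log space” point with invented delta-Ct values: the Ct scale is already log2, so the test is done there, and only the reported summary is back-transformed to a fold change.

```python
import numpy as np
from scipy import stats

# Invented delta-Ct values (target Ct minus housekeeping Ct) for two groups.
# Ct values are already on a log2 scale, so the statistics are done here ...
dct_control   = np.array([6.1, 6.4, 5.9, 6.3])
dct_treatment = np.array([4.2, 4.6, 4.0, 4.5])

t, p = stats.ttest_ind(dct_control, dct_treatment)
print(f"t-test on delta-Ct (log2 space): p = {p:.4f}")

# ... and only the reported summary is back-transformed to a fold change (2^-ddCt).
ddct = dct_treatment.mean() - dct_control.mean()
print(f"mean fold change: {2 ** -ddct:.1f}x")
```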
They have 3 vs 3 observations of something – and plot the means and the standard deviations rather than showing the data. “everybody does it that way” again. They call the usual plot of the
actual data “the dot plot” – maybe they’ve never seen someone actually show the damn data, and need new words for that peculiar situation.
They run blots and show one, and write that it was “representative”. Why not design a good experiment with replication and analyze the data using (gasp) a model, and estimate the chance that
rejecting the null hypothesis is an accident – so we can actually call it science? And god forbid saying what the nature of the replicates was.
Folks do t-tests on 3 vs. 3 designs where the replicates are merely technical and not biological replicates. They fail to say what the replicates are, of course. Their lab mates have taught them to
do the experiment this way, cause it works – you get small p-values. If it doesn’t, repeat it, being sure to know which things you want to have bigger values – it will work eventually. Then say
it is representative. Doing actual biological replicates is hazardous to your desired conclusion.
They would rather break up a linear Y-axis into 3 ranges, where we still can’t see how different the low-valued samples were, than make Y a log scale. The reader might not get how big the
difference you want to tell them about is – cause they aren’t used to log-scale plots. “everybody does it that way.”
In cell counting, we see that at day 5 we have 20 million type A cells but only 10 million type B, the average of triplicates that give p=.01. Be sure to plot on a linear scale so people can’t
tell 1) that we started with about 20,000 type A and about 10,000 type B – all we can see is that they were very small numbers – and 2) whether we get straight lines on the log scale. When
I plot log-scale and they see perfectly straight lines that are perfectly parallel, they don’t like my methods. If I suggest a dumbed-down version, taking log(final/starting), they may get it,
but may not want me to take logs. Thankfully this last example is rarer, and it is true that there are smart and good people out there who do understand the math pretty well, for all my complaining.
The real reason they do these things is never because it is good. Whether it is good is usually never asked. It is in fact bad – misleading or suppressing information. And that’s why it is
popular. It gets passed on like certain rhymes on the kindergarten playground – the adults are powerless to stop it.
16. An often whimsical biostatistics book by Geoffrey Norman and David Streiner tells of a colleague whose master’s thesis involved looking at the constipative effects of medications used by elderly
patients. Because the dependent variable (whether or not the patient had a bowel movement that day) was binomially distributed, the arc sine transformation should be used to analyze the data. A
supervisor asked, “If a clinician were to ask you what the number means, are you going to tell him, ‘It is two times the angle whose sine is the square root of the number of patients (plus 0.5)
who shat that day’?”
The moral was that even when it is mathematically rigorous to transform the data, it may make it harder for non-mathematicians to make sense of the results.
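For readers curious what that transform actually is, here is a small sketch with invented proportions; it also shows the back-transformation one would need before telling a clinician anything sensible.

```python
import numpy as np

# Invented daily proportions of patients with a bowel movement, one value per ward.
p = np.array([0.55, 0.62, 0.48, 0.70, 0.58])

# The arcsine square-root (angular) transform used for binomial proportions.
y = 2 * np.arcsin(np.sqrt(p))
mean_transformed = y.mean()

# Back-transform the mean so it can be reported as an ordinary proportion again.
back = np.sin(mean_transformed / 2) ** 2
print(f"mean on the transformed scale: {mean_transformed:.3f}")
print(f"back-transformed proportion:   {back:.3f}")
```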
You can require only so much undergraduate work from applicants to medical school. Lots of math is great. Making sense of the methods sections of studies is a valuable skill, and the math which
is needed to do that should have priority.
17. @Rork
You are absolutely right on this. I have the same experiences with the students and postdocs; we are wrestling right now with the “right” way to normalize western blots. Here’s an example: we
recently published an echocardiography study. The control mean & SD on ventricle size were tight. The experimentals had a similar mean but a huge SD. The student insisted there was no difference
until I forced her to make the dot plot. Shabam! She suddenly discovered that half the experimental hearts segregated into dilatative failure. The means were only the same thanks to the law of averages.
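Here is a minimal sketch of that “make the dot plot” advice, with invented numbers chosen so the two groups share a mean while one of them is bimodal:

```python
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(1)

# Invented ventricle sizes: the controls are tight, while the experimental group
# splits into two clusters (half normal, half dilated) with a similar overall mean.
control = rng.normal(50, 2, 12)
experimental = np.concatenate([rng.normal(44, 2, 6), rng.normal(56, 2, 6)])
print(f"means: control {control.mean():.1f}, experimental {experimental.mean():.1f}")

fig, ax = plt.subplots()
for i, vals in enumerate([control, experimental]):
    jitter = rng.uniform(-0.05, 0.05, len(vals))   # small horizontal jitter so points don't overlap
    ax.plot(np.full(len(vals), float(i)) + jitter, vals, "o")
ax.set_xticks([0, 1])
ax.set_xticklabels(["control", "experimental"])
ax.set_ylabel("ventricle size (arbitrary units)")
plt.show()   # the split is obvious as individual dots, invisible as mean and SD
```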
But I think there’s another culprit here – the journal editors and reviewers. There is no longer space for ambiguity or less-than-perfect results. Editors seek excuses to reject, and the
competition is fierce. Remember when we used to say, “Those data look too perfect”? Not any more. The students and mentors are responding not only out of ignorance or laziness, but also to
artificial standards of “data perfection.”
18. Angora Rabbit:
Getting the student to draw the data by hand is a nice device. Look at the data before you calculate anything.
When the mean in the samples is the same but the variance is different, it can confuse students who have the idea that the t-test is done to see if the means are the same in the two samples. You
need to remind them that the test is really done to see if the samples are drawn from the same population, not to see if the means are equal.
The world population of wild sewer rats has a mean weight. You could, through many generations of careful inbreeding, develop a strain of rats with that same mean weight, but with a much smaller
variance. If a sample of wild and laboratory rats are weighed, and the mean weights do not differ, most students will agree that the means are the same, but few will agree that the samples are
drawn from the same population. This thought experiment is easy to carry out. Stating the null hypothesis correctly can cause at least some clouds to disperse.
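The rat thought experiment is easy to run in code. In this hypothetical sketch, a t-test on the means is (as expected) unimpressive, while a test on the spreads, Levene’s test here, flags the difference at once; all numbers are made up.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

# Invented weights (grams): wild rats vary a lot, the inbred lab strain hardly at all,
# but both populations were set up to have the same mean weight.
wild = rng.normal(350, 60, 40)
lab  = rng.normal(350, 10, 40)

t_p   = stats.ttest_ind(wild, lab, equal_var=False).pvalue
lev_p = stats.levene(wild, lab).pvalue

print(f"Welch t-test on means:    p = {t_p:.3f}")   # usually large: the means do not differ
print(f"Levene test on variances: p = {lev_p:.2e}") # usually tiny: the spreads clearly differ
```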
It may sound disgusting to skip calculus and present the normal distribution as “magical,” but the methods needed to do integration of the gaussian distribution are well beyond the scope of
undergraduate calculus courses, so even if you have taken three semesters of calculus, that distribution has some elements of magic (or faith).
19. In Thomas Gilovich’s 1991 classic book, “How We Know What Isn’t So”, he spends some time at the end talking about learning proper use of statistical methods and inferences.
He discusses a study of graduate students of various sorts, and how they developed in their ability to properly interpret mixed data. The outcome was that the most growth in the ability to
interpret complex data was among students in the social sciences, who get used to dealing with complex data without simple cause and effect relationships, where multiple causal chains have to be
teased apart experimentally. Medical students did a little worse, and students in the physical sciences and in the humanities (law) did the worst.
I certainly didn’t learn my stats in med school or much before. Grad school, and then working in the world of clinical trials, was what it took for me.
20. I am not sure that I am following this correctly, having only a very limited grasp of statistics, but is the original complaint really a statistical problem, or a more fundamental conceptual one:
have the researchers forgotten, or never quite understood, the purpose of the various elements in study design?
I mean, in a study testing for intrinsic efficacy of a treatment method with subjective end-points, obviously only the comparison to a completely inert but indistinguishable placebo will do.
So why should an untreated group be included in studies of some intervention, and especially in lab studies where there are usually far fewer variables? What is its function? It implies a
more complex scenario than exists in the usual drug study.
21. Thanks for the suggestion, Vera Montanumom. I definitely agree that a no-treatment group can be a “valid cohort for comparison.” It’s not wrong to compare no-treatment to treatment … it’s just
wrong to only make that comparison, and pass off the statistically significant difference as a statistically significant treatment effect size. So you’re right, but that subtlety may be beyond
the power of one diagram to convey. This is why a scientist buddy of mine called this more an error of inference than of statistics. As she put it:
    Stats aside, what they need to be doing is treating this as a one-factor ANOVA where you have 3 levels of treatment: drug, placebo, no pill. ANOVA by definition will compare between all
    possible combinations of these groups. (Problem solved, simple stats.)
And the stakes for getting it right tend to go way up when studying anything where treatment requires a lot of interaction (e.g. psychiatry, massage therapy), where placebo can loom large, powered by
a metric buttload of nonspecific effects, and accounting for nearly all of what was presumed to be a treatment effect. If you don’t include a placebo comparison there, and a no-treatment group,
and do an ANOVA … well, shoot, experimental design fail. You’re doing it wrong.
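For anyone who wants to see the quoted suggestion in code form, here is a hedged sketch with entirely invented improvement scores: an omnibus one-way ANOVA across the three arms, followed by pairwise contrasts (which, in a real analysis, would also get a multiple-comparison correction).

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)

# Invented symptom-improvement scores for the three arms.
drug    = rng.normal(12, 5, 30)
placebo = rng.normal(10, 5, 30)
no_pill = rng.normal(4, 5, 30)

# Omnibus one-way ANOVA across all three arms.
f, p = stats.f_oneway(drug, placebo, no_pill)
print(f"one-way ANOVA: F = {f:.2f}, p = {p:.4f}")

# Pairwise follow-ups; the drug-vs-placebo contrast is the one that speaks to specific efficacy.
pairs = [("drug vs placebo", drug, placebo),
         ("drug vs no pill", drug, no_pill),
         ("placebo vs no pill", placebo, no_pill)]
for name, a, b in pairs:
    print(f"{name}: p = {stats.ttest_ind(a, b).pvalue:.4f}")
```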
22. I’ve revised that diagram more, based on feedback received so far, here and elsewhere. More criticism still most welcome. I am determined to nail this.
diagram of the difference error on SaveYourself.ca [~125K]
23. Paul: “it’s just wrong to only make that comparison, and pass off the statistically significant difference as a statistically significant treatment effect size.”
I like the diagram, but you need to specify what question has been asked of the data. That surely determines which are “right” and “wrong” approaches, also what is “relevant” and “significant”.
Those judgments often coincide, as in questions of intrinsic drug efficacy, which is what everyone here usually has in mind.
But other questions do arise.
For example, patients and CAM practitioners may ask why the enormous difference between both the placebo/treatment arms and “no intervention” is regarded as “significant but not relevant”. Not
relevant to what? Why could that not be seen as evidence of important “mind-body” responses from the overall program of care? The “no intervention” arm will control for most other influences.
There is, at minimum, often the false presumption from such data that nothing of value has occurred, as implied by somewhat loose “it doesn’t work” statements (which should at least be qualified by
“via the mechanisms typically claimed”).
This should NOT be seen as a defence of pseudoscience. It is a matter of scientific precision, which should operate all ways, especially in our dealings with CAM.
24. pmoran, quick partial response: it sounds like you haven’t seen the more recent version of the diagram, which addresses some aspects of your suggestions (i.e. I ditched “right” and “wrong”).
Visit again and refresh your browser to make sure you’re not seeing a cached old version.
25. “Next the proposal to teach stats without calculus disgusts me. Let’s just have Gaussian distributions be magical? That is the “best fitting line” cause my calculating machine says so?”
I am disturbed that people can use computers without knowing anything about how they work. Most can’t even program a simple search routine! They regard their workings as little better than magic.
Yet it would be insane to insist everyone who is going to use a computer needs at least second year IT papers!
You cannot teach everyone everything. And calculus is well down the list of necessary skills for a doctor. If they learn that, they are not learning something else.
I would suggest that the issue is that people who go into medical research need skills that most doctors do not. They shouldn’t even be doing the same degree.
(For the record I like calculus and am good at it, passing level 2 university with ease, so I am not biased against the subject.)
26. For what it’s worth:
I took a year of calculus, was good at it and enjoyed it, but the only time I ever used it was to solve problems in a physics class, and there were ways to do that without calculus. Looking back,
I wish I had had a year’s education in statistics instead. There is an excellent course at The Teaching Company that explains what calculus is all about without getting into equations. I think
understanding those basic concepts would be all most doctors would need. The same really goes for statistics: doctors need to understand the concepts and how to interpret the statistics they read
in published studies, but they don’t necessarily need to know how to do the statistical math themselves.
27. I can completely second what Dr. Hall said. I breezed through calculus and did quite well at it, but never found too many applications for it. And when it came to physics, I almost
always just converted everything to energy and used conservation to solve my problems. It was just easier.
But, I did also have a reasonably solid foundation in statistics – though not as much as I would have liked in retrospect.
28. @ pmoran et al:
Intuitive Biostatistics by Harvey Motulsky (ISBN 978-0-19-973006-3) is now in its second edition, and might be a good investment. The place of statistics within the framework of science is
well-discussed and many common problems with study interpretation are nicely presented. Check it out and see what you think.
29. Regarding the flaw mentioned in the OP: testing two interventions vs. placebo, and not against each other:
Robinson and colleagues made this mistake in a test of meds vs therapy for post-stroke depression prevention:
JAMA, May 28, 2008—Vol 299, No. 20.
They were taken to task for it. This is the JAMA article that drew heavy fire for failure to disclose drug funding, and so this study got raked over the coals a bit more than most do. [This is
where the journals amped up their COI standards, and DeAngelis had a couple of editorials on the issue.]
30. Do journals even now look at statistical significance?
I thought in the top journals you have to use confidence intervals for RCTs.
31. Preface from Michael Woodroofe’s “Probability with Applications”, which is just a beginning to learning statistics:
“The prerequisite for an intelligent reading of this book is 2 years of calculus.”
Folks claiming they didn’t really need it for physics weren’t doing difficult enough stuff. Do you take elliptical orbits on faith? That a sphere’s mass acts like a point mass? Is it OK to do
physics but not understand Newton? Einstein or Schrodinger stuff is clearly impossible.
Paul: Your diagram doesn’t reflect the classic situation, where there are four groups, and one tests an interaction to see if
(A-B)-(C-D) = 0. People instead are showing A>B is significant and that C>D is not. They will occasionally instead show that A>C (which for you would be treated vs. placebo, possibly hailed as
the solution to the problem) but that is still not enough. For example imagine C and D are placebo and no treatment but performed by group 2, and C is lower than A, but D is much lower than B, a
no treatment arm performed by group 1. It’s OK, so long as you know that.
32. @rork,
“Folks claiming not really needing it for physics”
I think you are referring to me, but all I said was that I did not need it for the college physics course I took as a pre-med requirement. Physics majors definitely need calculus; but I don’t
think the average medical student or clinician does.
33. @rork:
I chimed in as well. And yeah, I was not a physics major, so the year of non-major physics I needed to take was not exactly taxing on my calculus skills. I did use it a few times. But I don’t
need to solve every basic problem using the calculus to trust that my answer is representative of the real world. Fortunately for the physicists of the world there aren’t too many “complementary
and alternative physics” people out there.
So of course we weren’t doing difficult enough stuff – human beings and their biology do not involve the calculus of elliptical orbits. And yes, I did take it on “faith” (more like trust in my
college physics professor) that a sphere’s mass acts similarly to a point mass. In fact, IIRC, he specifically said that such calculations were tough and required calculus to solve, and that if we
were curious we could go ahead and do them; otherwise, just accept that others have and move on (or something to that effect).
34. Dr. Novella, I think your introduction of the third group, “no intervention,” is confusing the issue. A no-intervention group is irrelevant in most medical studies, and the only relevant
comparison is the pre-treatment–post-treatment difference between the active treatment and placebo (or another active treatment). As Goldacre and the authors of the original article point out,
showing that the pre-post difference is significant in the active treatment group, but not in the placebo group, does not imply that the active treatment was more effective than placebo. Such a
claim requires that the difference between these pre-post differences is itself significant.
The error is prevalent in journals in many fields.
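To close with a concrete illustration of that last point, here is a small simulated sketch (invented pre/post scores) of the test the commenter says is required: compare the change scores between arms directly, rather than noting that one arm’s within-group test crossed p = 0.05 and the other’s did not.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
n = 25

# Invented pre/post scores: both arms improve, the treatment arm a bit more.
pre_tx,  post_tx  = rng.normal(20, 4, n), rng.normal(16, 4, n)
pre_pbo, post_pbo = rng.normal(20, 4, n), rng.normal(18, 4, n)

change_tx  = post_tx - pre_tx     # within-group change, treatment arm
change_pbo = post_pbo - pre_pbo   # within-group change, placebo arm

# The two within-group tests can easily land on opposite sides of p = 0.05 ...
print("treatment, pre vs post: p =", round(stats.ttest_rel(pre_tx, post_tx).pvalue, 3))
print("placebo,   pre vs post: p =", round(stats.ttest_rel(pre_pbo, post_pbo).pvalue, 3))

# ... but the claim "treatment beats placebo" rests on comparing the changes directly.
print("change vs change:       p =", round(stats.ttest_ind(change_tx, change_pbo).pvalue, 3))
```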
Collaborative Group for Research in Mathematics Education
ALL PUBLICATIONS
Below is a chronological list of publications, in reverse order from 2000 (see further below for older publications)
For details of recent publications sorted by author, please click here
Wherever possible, publications are available in full (usually in pdf format) from our eprints server.
Fujita, T. and Jones, K. (2008), The process of re-designing the geometry curriculum: the case of the Mathematical Association in England in the early 20th Century. Paper presented to Topic Study
Group 38 (TSG38) at the 11th International Congress on Mathematical Education (ICME-11), Monterrey, Mexico, 6-13 July 2008. 19pp
Click here for full article in pdf format.
Jones, K. (2008), Windows on mathematics education research in mainland China: a thematic review, Research in Mathematics Education, 10(1), 107-113. ISSN: 1479-4802
Click here for full article in pdf format.
Jones, K. and Rowland, T. (2008), Brian Griffiths (1927–2008): a tribute to a pioneer in mathematics education, Proceedings of the British Society for Research into Learning Mathematics, 28(2), 1-6.
ISSN: 1463-6840
Click here for full article in pdf format.
Little, C. and Jones, K. (2008), Contexts for ‘pure’ mathematics: a framework for analysing A-level mathematics papers, Research in Mathematics Education, 10(1), 97-98. ISSN: 1479-4802
Click here for full article in pdf format.
Little, C. and Jones, K. (2008), Assessment of University-entrance level mathematics in England: an analysis of key influences on the evolution of the qualification during the period 1951-2001. Paper
presented to Topic Study Group 36 (TSG36) at the 11th International Congress on Mathematical Education (ICME-11), Monterrey, Mexico, 6-13 July 2008. 10pp.
Click here for full article in pdf format.
Christou, C., Jones, K., Pitta-Pantazi, D., Pittalis, M., Mousoulides, N., Matos, J. F, Sendova, E., Zachariades, T. and Boytchev, P. (2007), Developing student spatial ability with 3D software
applications. Paper presented at the 5th Congress of the European Society for Research in Mathematics Education (CERME), Larnaca, Cyprus, 22-26 Feb 2007. 10pp.
Click here for full article in pdf format.
Christou, C., Pittalis, M., Mousoulides, N., Pitta, D., Jones, K., Sendova, E. and Boytchev, P. (2007) Developing an active learning environment for the learning of stereometry. Paper presented at
the 8th International Conference on Technology and Mathematics Teaching (ICTMT8), Hradec Králové, Czech Republic, 1-4 Jul 2007. 5pp.
Click here for full article in pdf format.
Ding, L. and Jones, K. (2007), Using the van Hiele theory to analyse the teaching of geometrical proof at Grade 8 in Shanghai, China. In D. Pitta-Pantazi & G. Philippou (Eds), European Research in
Mathematics Education V (pp 612-621). Nicosia, Cyprus: University of Cyprus. ISBN: 9789963671250
Click here for full article in pdf format.
Edwards, J. (2007), The language of friendship: developing socio-mathematical norms in the secondary school classroom. In D. Pitta-Pantazi & G. Philippou (Eds), European Research in Mathematics
Education V (pp 1190-1199). Nicosia, Cyprus: University of Cyprus. ISBN: 9789963671250.
Click here for full article in pdf format.
Fletcher, M. (2007), Poker Face, Teaching Mathematics and its Applications, 26(4), 222-224.
Fujita, T. and Jones, K. (2007), Learners’ understanding of the definitions and hierarchical classification of quadrilaterals: towards a theoretical framing, Research in Mathematics Education, 9,
3-20. ISSN: 1479-4802 [journal volume also available as a book ISBN: 0953849880]
Click here for full article in pdf format.
Hohenwarter, M. and Jones, K. (2007), Ways of linking geometry and algebra: the case of GeoGebra, Proceedings of the British Society for Research into Learning Mathematics, 27(3), 126-131. ISSN:
Click here for full article in pdf format.
Little, C. and Jones, K. (2007), Contexts for pure mathematics: an analysis of A-level mathematics papers, Proceedings of the British Society for Research into Learning Mathematics, 27(1), 48-53.
ISSN: 1463-6840
Click here for full article in pdf format.
Mooney, C., Briggs, M., Fletcher, M., Hansen, A. and McCullouch, J. (2007), Primary Mathematics: Teaching Theory and Practice. Exeter: Learning Matters. 3rd edition. ISBN: 9781844450992
Mooney, C., Ferrie, L., Fox, S., Hansen, A. and Wrathmell, R. (2007), Primary Mathematics: Knowledge and Understanding. Exeter: Learning Matters. 3rd edition. ISBN: 9781844450534
Mooney, C. & Fletcher, M. (2007), Primary Mathematics: Audit and Test. Exeter: Learning Matters. 3rd edition. ISBN: 9781844451111
Voutsina, C. and Ismail, Q. (2007), Issues in identifying children with specific arithmetic difficulties through standardised testing: a critical discussion of different cases, Proceedings of the
British Society for Research into Learning Mathematics, 27(1), 84-89.
Click here for full paper in pdf format.
Zachariades, T., Pamfilos, P., Christou, C., Maleev, R. and Jones, K. (2007), Teaching Introductory Calculus: approaching key ideas with dynamic software. Paper presented at the CETL–MSOR Conference
2007 on Excellence in the Teaching & Learning of Maths, Stats & OR, University of Birmingham, 10-11 September 2007.
Click here for full article in pdf format.
Ding, L. & Jones, K. (2006), Students’ geometrical thinking development at Grade 8 in Shanghai. In: Novotná, J., Moraová, H., Krátká, M. & Stehlíková, N. (Eds.), Proceedings 30th Conference of the
International Group for the Psychology of Mathematics Education (PME30), vol 1, p382. [extended abstract]
Click here for full article in pdf format.
Ding, L. & Jones, K. (2006), Teaching geometry in lower secondary school in Shanghai, China, Proceedings of the British Society for Research into Learning Mathematics, 26(1), 41-46.
Click here for full paper in pdf format.
Edwards, J. (2006), Exploratory talk in friendship groups. In: Novotná, J., Moraová, H., Krátká, M. & Stehlíková, N. (Eds.), Proceedings 30th Conference of the International Group for the Psychology
of Mathematics Education (PME30), vol 1, p248. [extended abstract]
Edwards, Julie-Ann and Jones, Keith (2006) Linking geometry and algebra with GeoGebra, Mathematics Teaching, 194, 28-30.
Click here for full article in pdf format.
Fujita, T. and Jones, K. (2006) Primary trainee teachers’ understanding of basic geometrical figures in Scotland. In, Novotná, J., Moraová, H., Krátká, M. and Stehlíková, N. (eds.), Proceedings 30th
Conference of the International Group for the Psychology of Mathematics Education (PME30). Prague, Czech Republic, vol 3, pp129-136.
Click here for full article in pdf format.
Fujita, T. and Jones, K. (2006), Primary trainee teachers’ knowledge of parallelograms, Proceedings of the British Society for Research into Learning Mathematics, 26(2), 25-30.
Click here for full article in pdf format.
Fujita, T. and Jones, K. (2006), Primary trainee teachers’ understanding of basic geometrical figures in Scotland. In, Novotná, J., Moraová, H., Krátká, M. and Stehlíková, N. (eds.) Proceedings 30th
Conference of the International Group for the Psychology of Mathematics Education (PME30). Prague, Czech Republic, Psychology of Mathematics Education, vol 3, 129-136.
Click here for full article in pdf format.
Jones, K. (2006), Book review. Educational Research Primer, written by Anthony G. Picciano, International Journal of Research & Method in Education, 29(1), 123-125.
Click here for the review in pdf format.
Jones, K., Fujita, T. & Ding, L. (2006), Informing the pedagogy for geometry: learning from teaching approaches in China and Japan, Proceedings of the British Society for Research into Learning
Mathematics, 26(2), 109-114.
Click here for full article in pdf format.
Brown, G. Cadman, K., Cain, D. Clark-Jeavons, A. Fentem, R., Foster, A., Jones, K., Oldknow, A., Taylor, R. and Wright, D. (2005), ICT and Mathematics: a guide to learning and teaching mathematics
11-19 (revised edition). London: Mathematical Association/Teacher Training Agency. 73pp. ISBN: 090658860X
Click here for full report in pdf format.
Christou, C., Pittalis, M., Mousoulides, N. and Jones, K. (2005) Developing 3D dynamic geometry software: theoretical perspectives on design. In, Olivero, F. and Sutherland, R. (eds.) Visions of
Mathematics Education: embedding technology in learning. Bristol, UK, University of Bristol. Vol 1, pp69-77. ISBN: 0862925592
Click here for full article in pdf format.
Edwards, J. (2005), Exploratory talk in peer groups: exploring the zone of proximal development. Proceedings of the 4th Congress of European Research in Mathematics Education (CERME 4)
Click here for full article in pdf format.
Edwards, J. and Wright, D. (eds.) (2005), Integrating ICT into the Mathematics Classroom. Derby, UK: Association of Teachers of Mathematics. 128pp. ISBN: 1898611408
Click here for more details about this book.
Fletcher, M. (2005), The price is right, Teaching Statistics, 27(3), 69-71.
Click here for more detail on this paper.
Hirst, K. E. (2005), Student expectations of studying mathematics at University, Hiroshima Journal of Mathematics Education, 11, 49-68.
Hyde, R. (2005), Using graphics calculators with low attaining pupils. In: Anne Watson, Jenny Houssart and Caroline Roaf (Eds), Supporting Mathematical Thinking. London: David Fulton. ISBN:
Jones, K. (2005), Planning for mathematics learning. In, Johnston-Wilder, S., Johnston-Wilder, P., Pimm, D. and Westwell, J. (eds.), Learning to teach mathematics in the secondary school. 2nd
edition. London, UK, RoutledgeFalmer, 93-113. ISBN: 0415342821
Click here for full article in pdf format.
Jones, K. (2005), Research on the use of dynamic geometry software: implications for the classroom. In, Edwards, J. and Wright, D. (eds.), Integrating ICT into the Mathematics Classroom. Derby, UK,
Association of Teachers of Mathematics, 27-29. ISBN: 1898611408 [reprint of MicroMath, 18(3), 18-20].
Click here for full article in pdf format.
Jones, K. (2005), Research Bibliography: Dynamic Geometry Software. In: D. Wright (Ed), Moving on with Dynamic Geometry. Derby: Association of Teachers of Mathematics. pp 159-160. ISBN: 1898611394
[reprint of MicroMath, 18(3), 44-45].
Click here for the article in pdf format.
Jones, K. (2005), The shaping of student knowledge with dynamic geometry software. In, Virtual Learning? the Computer Assisted Learning Conference 2005 (CAL05), Bristol, UK, 4-6 April 2005. 10pp.
Click here for full article in pdf format
Jones, K. (2005), Using Spreadsheets in the Teaching and Learning of Mathematics: a research bibliography, MicroMath, 21(1), 30-31.
Click here for full article in pdf format.
Jones, K. (2005), Graphing calculators in the teaching and learning of mathematics: a research bibliography, MicroMath, 21(2), 31-33.
Click here for full article in pdf format.
Jones, K. (2005), Using Logo in the teaching and learning of mathematics: a research bibliography, MicroMath, 21(3), 34-36.
Click here for full article in pdf format.
Jones, K., Fujita, T. and Ding, L. (2005), Developing geometrical reasoning in the classroom: learning from expert teachers from China and Japan. In, 4th Biennial Conference of the European Society
for Research in Mathematics Education (CERME4), Sant Feliu de Guíxols, Spain, 17-21 February 2005. 10pp.
Click here for full article in pdf format.
Jones, K., Fujita, T. and Ding, L. (2005), Teaching geometrical reasoning: learning from expert teachers from China and Japan. Proceedings of the British Society for Research into Learning
Mathematics, 25(1), 89-96. [publisher version of the paper presented at the 6th British Congress on Mathematical Education (BCME6), Warwick, March 2005]
Click here for full article in pdf format.
Merrett, S. and Edwards, J. (2005), Enhancing mathematical thinking with an interactive whiteboard, MicroMath, 21(3), 9-12.
Click here for full article in pdf format.
Voutsina, C. and Jones, K. (2005), The process of knowledge re-description as underlying mechanism for the development of children’s problem-solving strategies: an example from arithmetic. Paper
presented to the European Association for Research on Learning and Instruction conference 2005 (EARLI2005), Nicosia, Cyprus, August 23-27, 2005.
Click here for full paper in pdf format.
Voutsina, C. and Jones, K. (2005), Children building on success: mathematical problem solving in the early years. Paper presented to the British Educational Research Association conference 2005 (BERA
2005), University of Glamorgan, Wales, 14-17 September 2005.
Click here for full paper in pdf format.
Pope, S. and Jones, K. (2005) Who trains the new teachers? - supporting tutors new to initial teacher education in mathematics. Paper presented at ICMI Study 15 on the Professional Education and
Development of Teachers of Mathematics, Águas de Lindóia, Brazil, 15-21 May 2005. 5pp.
Clickhere for full article in pdf format.
Brown, M., Jones, K. and Taylor, R. and Hirst, A. (2004) Developing geometrical reasoning. In, Putt, Ian, Faragher, Rhonda and McLean, Mal (eds.) Proceedings of the 27th Annual Conference of the
Mathematics Education Research Group of Australasia (MERGA27). Townsville, Queensland, Australia, James Cook University, 127-134.
Click here for full article in pdf format.
Ding, L. and Jones, K. (2004), The structure of mathematics lessons: researching the development of geometrical reasoning in lower secondary schools in China. Paper presented at the European Society
for Research in Mathematics Education Summer School, Podebrady, Czech Republic, August 2004. 6pp.
Edwards, J. (2004), Friendship groups and socially constructed mathematical knowledge, Proceedings of the British Society for the Learning of Mathematics, 24(3), 7-13.
Click here for full article in pdf format.
Fletcher, M. (2004), Odds that don't multiply up, Teaching Statistics, 26(1), 30-32.
Click here for full paper.
Fujita, T., Jones, K. and Yamamoto, S. (2004) The role of intuition in geometry education: learning from the teaching practice in the early 20th century. In, 10th International Congress on
Mathematical Education (ICME-10), Copenhagen, Denmark, 4-11 July 2004. 15pp.
Click here for full article in pdf format.
Fujita, T., Jones, K. and Yamamoto, S. (2004) Geometrical intuition and the learning and teaching of geometry. In, 10th International Congress on Mathematical Education (ICME10), Topic Study Group 10
(TSG10) on Research and Development in the Teaching and Learning of Geometry, Copenhagen, Denmark, 4-11 July 2004. 7pp.
Click here for full article in pdf format.
Hirst, K. E. (2004), Student expectations of studying mathematics at University. In: Ian Putt, Rhonda Faragher & Mal McLean (Eds), Proceedings of the 27th Annual Conference of the Mathematics
Education Research Group of Australasia (MERGA27), 27-30 June 2004, Townsville, Queensland, Australia., vol 1, pp295-302. ISBN: 1920846042
Hirst, K. E. (2004), Sawtooth functions, International Journal of Mathematics Education in Science and Technology, 35, 122-126.
Click here for more detail on this paper.
Hyde, R. (2004) What do mathematics teachers say about the impact of ICT on pupils learning mathematics? Micromath, 20(2), 11-12.
Jones, K. (2004), Celebrating 20 Years of Computers in Mathematics Education: a research bibliography, MicroMath, 20(1), 29-30.
Click here for full article in pdf format.
Jones, K. (2004), Using Interactive Whiteboards in the Teaching and Learning of Mathematics: a research bibliography, MicroMath, 20(2), 5-6.
Click here for full article in pdf format.
Jones, K. (2004), Book review: The Changing Shape of Geometry, edited by Chris Pritchard, Mathematics in School, 33(4), 35-36.
Click here for more information on this review.
Jones, K. (2004), Book Review: Social Research: Issues, Methods and Process, 3rd edition, written by Tim May. Bristol, UK, ESCalate, the Higher Education Academy Subject Centre for Education.
Click here for the review in html format.
Jones, K., Fujita, T. and Ding, L. (2004), Structuring Mathematics Lessons to Develop Geometrical Reasoning: comparing lower secondary school practices in China, Japan and the UK. Paper presented at
the Symposium on Comparative Studies in Mathematics Education at the British Educational Research Association Annual Conference, University of Manchester, 15-18 September 2004.
Click here for full article in pdf format.
Jones, K. and Pope, S. (2004), Starting as a researcher in mathematics education. Proceedings of the British Society for Research into Learning Mathematics, 24(3), 63-68.
Click here for full article in pdf format.
Voutsina, C. and Jones, K. (2004), Studying change processes in primary school arithmetic problem solving: issues in combining methodologies, Proceedings of the British Society for Research into
Learning Mathematics, 24(3), 57-62.
Click here for full article in pdf format.
Ball, B., Brown, G., Cadman, K., Clark-Jeavons, A., Edwards, J., Hyde, R., Jones, K., Oldknow, A., Piggott, J., Taylor, R. and Wright, D. (2003), Mathematics with ICT at Key Stage 3. Reading: Centre
for School Standards. [report not made public but material incorporated into relevant National Strategy publications available from DfES]
Brown, M., Jones, K. and Taylor, R. (2003), Developing geometrical reasoning in the secondary school: outcomes of trialling teaching activities in classrooms, a report to the QCA. Southampton, UK,
University of Southampton, School of Education. 90pp. ISBN: 0854328092
Click here for full article in pdf format.
Edwards, J. and Jones, K. (2003), Co-learning in the collaborative mathematics classroom. In: A. Peter-Koop, A. Begg, C. Breen & V. Santos-Wagner (Eds.), Collaboration in Teacher Education: Examples
from the context of mathematics education. Dordrecht, NL: Kluwer. pp. 135-151. ISBN: 14020-1392-2
Click here for full article in pdf format.
Fletcher, M. & Mooney, C. (2003), The Weakest Link, Teaching Statistics, 25, 54-55.
Fujita, T. and Jones, K. (2003), Critical Review of Geometry in Current Textbooks in Lower Secondary Schools in Japan and the UK, Proceedings of the 27th Conference of the International Group for the
Psychology of Mathematics Education, Vol 1, 220 [extended abstract].
Click here for the extended abstract in pdf format.
Fujita, T. and Jones, K. (2003), Interpretations of National Curricula: the case of geometry in Japan and the UK. Paper presented at the British Educational Research Association Annual Conference,
Heriot-Watt University, Edinburgh, 10-13 September, 2003.
Click here for full article in pdf format.
Fujita, T. and Jones, K. (2003) The place of experimental tasks in geometry teaching: learning from the textbook designs of the early 20th century. Research in Mathematics Education, 5, 47-62. ISSN:
1479-4802 [journal volume also available as a book, ISBN: 0953849848]
Click here for full article in pdf format.
Hirst, K. E. (2003), Welcome to MathBank, London Mathematical Society Newsletter (April 2003), 27-28.
Click here for full paper in html format.
Hirst, K. E. (2003), Welcome to MathBank, Mathematics Today, 39(3), 77-78.
Click here for more detail on this paper.
Howson, G. (2003), Geometry 1950-1970. In: D. Coray, F. Furinghetti, H. Gispert, B.R. Hodgson, G. Schubring (Eds), One Hundred Years of L'Enseignement Mathématique: Moments of Mathematics Education
in the Twentieth Century. ISBN 2-940264-06-6
Jones, K. (2003), Research Bibliography: Four-function Calculators, MicroMath, 19(1), 33-34.
Click here for full article in pdf format.
Jones, K. (2003), Using the Internet in the Teaching and Learning of Mathematics: a research bibliography. MicroMath, 19(2), 43-44.
Click here for full article in pdf format.
Jones, K. (2003), Classroom Implications of Research on Dynamic Geometry Software. In: M. A. Mariotti (Ed), European Research in Mathematics Education III. Pisa: University of Pisa. [Section 9, p59].
Click here for full article in pdf format.
Jones, K. and Lagrange, J-B. (2003), Tools and Technologies in Mathematical Didactics: research findings and future directions. In: M. A. Mariotti (Ed), European Research in Mathematics Education III
. Pisa: University of Pisa. [Section 9, pp1-6].
Click here for full article in pdf format.
Jones, K. and Mooney, C. (2003) Making space for geometry in primary mathematics. In, Thompson, I. (ed.) Enhancing primary mathematics teaching. Maidenhead, UK, Open University Press, pp3-15. ISBN:
Click here for full article in pdf format
Mooney, C. (2003), A-Z of Key Concepts in Primary Mathematics. Exeter: Learning Matters. ISBN: 1903300487
Mooney, C. & Fletcher, M. (2003), Primary Mathematics: Audit and Test. Exeter: Learning Matters. 2nd edition. ISBN: 1903300878
Mooney, C., Fletcher, M. and Jones, K. (2003), Minding your Ps and Cs: subjecting knowledge to the practicalities of teaching geometry and probability, Proceedings of the British Society for Research
into Learning Mathematics, 23(3), 79-84.
Click here for full article in pdf format.
Paul, M. and Westaway, L. (2003), Unlocking doors: Moving beyond problem-solving to facilitate an understanding of mathematics. Proceedings of the Third International Conference on Science,
Mathematics and Technology Education. Curtin University of Technology (Perth, Australia) and Rhodes University (East London, South Africa), January, 2003.
Pope, S., Haggarty, L. and Jones, K. (2003), Induction for secondary mathematics ITE tutors, Proceedings of the British Society for Research into Learning Mathematics, 23(3), 115-120.
Click here for full article in pdf format.
Simons, H., Kushner, S., Jones, K. and James, D. (2003), From evidence-based practice to practice-based evidence: the idea of situated generalisation, Research Papers in Education, 18(4), 347-364.
Click here for full article in pdf format.
Voutsina, C. and Jones, K.(2003), Moving Beyond Success: changes in young children’s successful problem solving behaviour. In: A. Gagatis and S. Papastravridis (Eds), Proceedings of the 3rd
Mediterranean Conference on Mathematical Education. Athens: University of Athens. pp717-724. ISBN: 9607341252
Click here for the extended abstract in pdf format.
Ainley, J., Barton, B., Jones, K., Pfannkuch, M. and Thomas, M. (2002), Is what you see what you get? representations, metaphors and tools in mathematics didactics. In, Novotna, J. (ed.) European
Research in Mathematics Education II. Prague, Czech Republic, Charles University Press. pp128-138. ISBN: 80-7290-075-7
Click here for full article in pdf format.
Al-Ghafri, M., Jones, K. and Hirst, K. (2002), Secondary Trainee-Teachers' Knowledge of Students' Errors and Difficulties in Algebra. In: A. D. Cockburn and E. Nardi (Eds), Proceedings of the 26th
Conference of the International Group for the Psychology of Mathematics Education, Vol 1. p259. [extended abstract]
Click here for the extended abstract in pdf format.
Brown, G. Cadman, K., Cain, D. Clark-Jeavons, A. Fentem, R., Foster, A., Jones, K., Oldknow, A., Taylor, R. and Wright, D. (2002), ICT and Mathematics: a guide to learning and teaching mathematics
11-19. London: Mathematical Association/Teacher Training Agency. 73pp. ISBN: 0906588502
Click here for full report in pdf format.
Davis, G. E. & Tall, D. O. (2002), What is a scheme? In: David Tall and Mike Thomas (Eds.), Intelligence, Learning and Understanding in Mathematics: a tribute to Richard Skemp, Flaxton: Post Pressed.
ISBN: 1876682329
Edwards, J. (2002), Learning Mathematics Collaboratively: Learning the skills. In: A. D. Cockburn and E. Nardi (Eds), Proceedings of the 26th Conference of the International Group for the Psychology
of Mathematics Education, Vol 2, 213-220, UEA, UK.
Click here for full article in pdf format.
Edwards, J. (2002), An Environment for Strategy Development, Micromath, 18(2), 42-44.
Click here for full article in pdf format.
Edwards, J., Hartnell, M. and Martin, R. (2002), Interactive Whiteboards: some lessons from the classroom, Micromath, 18(2), 30-33.
Click here for full article in pdf format.
Fujita, T. and Jones, K. (2002), Opportunities for the development of geometrical reasoning in current textbooks in the UK and Japan, Proceedings of the British Society for Research into Learning
Mathematics, 22(3), 79-84.
Click here for full article in pdf format.
Fujita, T. and Jones, K. (2002), The Bridge between Practical and Deductive Geometry: developing the "geometrical eye". In: A. D. Cockburn and E. Nardi (Eds), Proceedings of the 26th Conference of
the International Group for the Psychology of Mathematics Education, Vol 2, 384-391, UEA, UK.
Click here for full article in pdf format.
Gomes, A., Ralha, E. & Hirst, K. (2002), Undergraduate Mathematics for Primary School Teachers: the situation in Portugal. Paper presented at the 2nd International Conference on the Teaching of
Mathematics (at the undergraduate level), University of Crete, 1-6 July 2002.
Hirst, K. (2002), Classifying students’ mistakes in Calculus. Paper presented at the 2nd International Conference on the Teaching of Mathematics (at the undergraduate level), University of Crete, 1-6
July 2002.
Click here for more detail on this paper.
Hirst, K. E. (2002), Welcome to MathBank, MSOR Connections, 2(4), 23-24.
Click here for full paper in pdf format.
Howson, G. (2002), Some Questions on Probability, Teaching Statistics, 24(1), 17-21.
Jones, K. (2002), Issues in the teaching and learning of geometry. In, Haggarty, L. (ed.) Aspects of teaching secondary mathematics: perspectives on practice. London, UK, Routledge Falmer, 121-139.
ISBN: 0-415-26641-6
Click here for full article in pdf format.
Jones, K. (2002), Book review: Teaching Mathematics with ICT, written by Adrian Oldknow and Ron Taylor, Micromath, 18(1), 40-41.
Click here for the review in html format.
Jones, K. (2002), Research on the use of dynamic geometry software: implications for the classroom. MicroMath, 18(3), 18-20.
Click here for full article in pdf format.
Jones, K. (2002), Research Bibliography: Dynamic Geometry Software, MicroMath, 18(3), 44-45.
Click here for full article in pdf format.
Jones, K. and Fujita, T. (2002), The design of geometry teaching: learning from the geometry textbooks of Godfrey and Siddons, Proceedings of the British Society for Research into Learning
Mathematics, 22 (2), 13-18.
Click here for full paper in pdf format.
Jones, K., Lagrange, J-B. and Lemut, E. (2002), Tools and technologies in mathematical didactics. In: J. Novotna (Ed), European Research in Mathematics Education II. Prague: Charles University.
pp125-127. ISBN 80-7290-075-7
Click here for full article in pdf format.
Jones, K., Mooney, C. and Harries, T. (2002) Trainee primary teachers' knowledge of geometry for teaching. Proceedings of the British Society for Research into Learning Mathematics, 22(2), 95-100.
Click here for full article in pdf format.
Mooney, C., Briggs, M., Fletcher, M. and McCullouch, J. (2002), Primary Mathematics: Teaching Theory and Practice. Exeter: Learning Matters. 2nd edition. ISBN 1903300568
Mooney, C., Ferrie, L., Fox, S., Hansen, A. and Wrathmell, R. (2002), Primary Mathematics: Knowledge and Understanding. Exeter: Learning Matters. 2nd edition. ISBN: 190330055X
Mooney, C. and Jones, K. (2002), Lost in space: primary trainee teachers' spatial subject knowledge and their classroom performance. In: A. D. Cockburn and E. Nardi (Eds), Proceedings of the 26th
Conference of the International Group for the Psychology of Mathematics Education, Vol 1. p363. [extended abstract]
Click here for the extended abstract in pdf format.
Clark-Jeavons, A. and Hyde, R. (2001), Developing a technologically-rich scheme of work for 11 - 12 year olds in mathematics for electronic delivery. Proceedings of the 5th International Conference
on Technology in Mathematics Teaching, August, 2001, University of Klagenfurt, Austria.
Edwards, J. and Jones, K. (2001), Exploratory talk within collaborative small groups in mathematics, Proceedings of the British Society for Research into Learning Mathematics, 21(3), 19-24.
Click here for full article in pdf format.
Fletcher, M. & Mooney, C. (2001), It pays to bluff, Teaching Mathematics and its Applications, 20(2), 75-77.
Gomes, A., Ralha, E. & Hirst, K. (2001), Sobre a formação matemática dos professores do 1.º ciclo: conhecer e compreender as possíveis dificuldades. Actas do XII Seminário de Investigação em Educação
Matemática. pp 175-196. [in Portuguese]
Click here for full paper in pdf format.
Hirst, A. E. and Singerman, D. (2001), Basic Algebra and Geometry. London: Prentice Hall. ISBN 0 130 86622 9
Hirst, K. E. (2001), Pot Black. Mathematics in School, 30(1), 36-38.
Hyde, R. (2001), Ideas for using graphics calculators with middle school pupils, MicroMath, 17(2), 14-16.
Hyde, R. (2001), Creating a professional development network. Proceedings of the 5th International Conference on Technology in Mathematics Teaching, August, 2001, University of Klagenfurt, Austria.
Jones, K. (2001), Learning geometrical concepts using dynamic geometry software. In: Kay Irwin (Ed), Mathematics Education Research: A catalyst for change. Auckland: University of Auckland. No ISBN.
Click here for full article in pdf format.
Jones, K. (2001), Spatial thinking and visualisation. In, Teaching and Learning Geometry 11-19. London, UK: Royal Society, 55-56. ISBN: 085403563X
Click here for full article in pdf format.
Jones, K. (2001), Boxed into a Corner: Teaching definitions is a tricky business, Times Education Supplement, mathematics curriculum supplement, January 19, p14.
Click here for full article in pdf format.
Jones, K. and Morgan, C. (2001), Research in Mathematics Education: some issues and some emerging influences, Research in Mathematics Education, 3, 1-20. ISSN: 1479 4802 [journal volume also
available as a book, ISBN: 0953849813]
Click here for full article in pdf format.
Jones, K. and Fujita, T. (2001), Developing a new pedagogy for geometry. Proceedings of the British Society for Research into Learning Mathematics, 21(3), 90-95.
Click here for full article in pdf format.
Jones, K. and Rodd, M. (2001), Geometry and proof, Proceedings of the British Society for Research into Learning Mathematics, 21(1), 95-100.
Click here for full article in pdf format.
Kushner, S., Simons, H., James, D., Jones, K. and Yee, W. C. (2001), TTA School Based Research Consortium Initiative, the Evaluation, Final Report. University of the West of England & University of
Southampton. 116pp
Click here for full report in pdf format.
Mooney, C. & Fletcher, M. (2001), Primary Mathematics: Audit and Test. Exeter: Learning Matters. ISBN: 1903300215
Sinkinson, A. and Jones, K. (2001), The validity and reliability of Ofsted judgements of the quality of secondary mathematics initial teacher education courses, Cambridge Journal of Education, 31(2),
221-237. [published version of BERA2000 paper]
Click here for full article in pdf format.
Voutsina, C. and Jones, K. (2001), The Micro-development of Young Children's Problem Solving Strategies when Tackling Addition Tasks, In: Marja van den Heuvel-Panhuizen (Ed), Proceedings of the 25th
Conference of the International Group for the Psychology of Mathematics Education. Utrecht, volume 4, 391-398.
Click here for full article in pdf format.
Morgan, C. and Jones, K. (Eds) (2001), Research in Mathematics Education, volume 3 [editorship of special journal issue; also available as a book, ISBN: 0-9538498-1-3]
Click here for details of the contents of this publication. Click here for the first article in pdf format.
Clausen-May, T., Jones, K., McLean, A. and Rollands, S. (2000), Perspectives on the design of the geometry curriculum, Proceedings of the British Society for Research into Learning Mathematics, 20(1&2), 34-41.
Click here for full article in pdf format.
Davis, G., Hill, D., & Smith, N. (2000), A memory-based model for aspects of mathematics teaching. In T. Nakahara & M. Koyama (Eds.), Proceedings of the 24th Conference of the International Group for
the Psychology of Mathematics Education, vol 2, pp. 225-232. Hiroshima: Hiroshima University.
Edwards, J. and Jones, K. (2000), Co-learning about the role of pupil-pupil talk in developing mathematical reasoning in the classroom, Occasional Papers in Science, Technology, Environmental and
Mathematics Education. Southampton, University of Southampton, pp11-12.
Jones, K. (2000), Teacher Knowledge and Professional Development in Geometry, Proceedings of the British Society for Research into Learning Mathematics, 20(3), 109-114.
Click here for full paper in pdf format.
Jones, K. (2000), The Mediation of Mathematical Learning through the use of Pedagogical Tools: a sociocultural analysis. Invited paper presented at the conference on Social Constructivism,
Socioculturalism, and Social Practice Theory: relevance and rationalisations in mathematics education, Norway, March 2000.
Click here for full article in pdf format.
Jones, K. (2000), The Student Experience of Mathematical Proof at University Level. International Journal of Mathematical Education in Science and Technology, 31(1), 53-60. ISSN: 0020-739X
Click here for full article in pdf format.
Jones, K. (2000), Providing a foundation for deductive reasoning: students' interpretations when using dynamic geometry software and their evolving mathematical explanations. Educational Studies in
Mathematics, 44(1-2), 55-85.
Click here for full article in pdf format.
Jones, K. (2000), Critical Issues in the Design of the Geometry Curriculum. Invited presentation for the Topic Group on Geometry at the 9th International Congress on Mathematical Education (ICME9),
Tokyo, Japan, August 2000.
Click here [or click here for an extended version of the paper (in pdf format)]
Jones, K. (2000), Critical issues in the design of the school geometry curriculum. In, Barton, Bill (ed.) Readings in Mathematics Education. Auckland, New Zealand, University of Auckland, 75-91.
[extended version of ICME9 paper]
Click here for full article in pdf format.
Jones, K. (2000), A Regrettable Oversight or a Significant Omission? Ethical considerations in quantitative research in education. In H. Simons and R. Usher (Eds), Situated Ethics in Educational
Research. London: Routledge. pp147-61. ISBN: 0415206669
Click here for full article in pdf format.
Jones, K. and Simons, H. (2000), The Student Experience of Online Mathematics Enrichment. In: T. Nakahara and M. Koyama (Eds), Proceedings of the 24th Conference of the International Group for the
Psychology of Mathematics Education, Hiroshima, Japan, Volume 3, pp103-110.
Click here for full article in pdf format.
Jones, K. and Sinkinson, A. (2000), A Critical Analysis of Ofsted Judgements of the Quality of Secondary Mathematics Initial Teacher Education Courses, Evaluation and Research in Education, 40(2),
79-93. [published version of the BERA1999 paper]
Click here for full article in pdf format.
Sinkinson, A. and Jones, K. (2000), The Validity and Reliability of Ofsted Judgements of the Quality of Secondary Mathematics Initial Teacher Education Courses. Paper presented at the Symposium on
'Critical Issues in Mathematics Initial Teacher Education' at the British Educational Research Association Annual Conference (BERA2000), The University of Wales, Cardiff, September 7th - 9th, 2000.
Click here for full paper in pdf format.
Voutsina, C. and Jones, K. (2000), Changes in young children's strategies when solving addition tasks, Proceedings of the British Society for Research into Learning Mathematics, 20(3), 97-102.
Click here for full article in pdf format.
See below for more publications from members of the Collaborative Group for Research in Mathematics Education.
Selected Older Publications
Gary Davis
Banks, J., Brooks, J., Cairns, G., Davis, G., and Stacey, P. (1992), On Devaney's definition of chaos, American Mathematical Monthly, 99, 4, 332-334.
Cairns, G., Davis, G., Elton, D., Kolganova, A. and Perversi, P. (1995), Chaotic group actions, L'Enseignement Mathématique, 41, 123-133.
Davis, G. E. (1992), Cutting through chaos: a case study in mathematical problem solving. In W. Geeslin and K.Graham (eds.), Proceedings of the Sixteenth Conference of the International Group for the
Psychology of Mathematics Education, University of New Hampshire, U.S.A, vol. 1, pp. 177-184.
Davis, G. (1996), What is the difference between remembering someone posting a letter and remembering the square root of 2? In L. Puig & A. Gutiérrez (Eds.), Proceedings of the 20th conference of
the International Group for the Psychology of Mathematics Education, Vol 2, pp. 265-272. Valencia: Universidad de Valencia.
Davis, G. and Jones, K. (1996), The Psychology of Experimental Mathematics. In: L. Puig and A. Gutiérrez (Eds), Proceedings of the 20th Conference of the International Group for the Psychology of
Mathematics Education. University of Valencia, Volume 1, 149 [extended abstract].
Click here for full article in pdf format.
Davis, G., Pearn, C. and Jones, A. (1997), Communication in mathematics classes: How can we tell if it's happening? Pre-print.
Davis, G., Pearn, C., Price, G. & Smith. K. (1997), Counting and reading in the early years of schooling. In Hejny, M. & Novotna, J. (Eds.) International Symposium: Elementary Maths Teaching SEMT97,
pp. 97-100. Prague: Charles University.
Davis, G., Merrifield, M., Pearn, C., Price, G. & Smith. K. (1997), Connections Between Counting and Reading. Pre-print. University of Southampton.
Davis, G. E., Smith, N.C. & Hill, D. J. W. (working paper), Reflections on Mathematical Memory. Pre-print. University of Southampton.
Davis, G., Tall, D. and Thomas, M. (1997), What is the object of the encapsulation of a process? Mathematics Education Research Group of Australasia, Auckland, New Zealand.
Davis, G. E. and Royle, P. L (1996), A comparison of Australian university output using journal impact factors. Scientometrics, 35(1), 45-58.
Davis, G. E. & Hunting, R.P. (1990), Spontaneous partitioning: pre-schoolers and discrete items, Educational Studies in Mathematics, 21, 367-374.
Davis, G. E., Hunting, R. P. and Pearn, C. (1993), What might a fraction mean to a child and how would a teacher know? The Journal of Mathematical Behavior, 12(1), 63-76.
Davis, G., Hunting, R. P. and Pearn, C. (1993), Iterates and relations: Elliot and Shannon's fraction schemes. In I. Hirabayashi, N. Nohda, K. Shigematsu and F.-L. Lin (eds.) Psychology of
Mathematics Education, PME XVII, pp. 154 - 161. Tsukuba: University of Tsukuba.
Davis, G. E. and Pepper, K. L.(1992), Mathematical problem solving by pre-school children. Educational Studies in Mathematics, 23, 397-415.
Davis, G. and Pitkethly, A.(1990), Cognitive aspects of sharing. Journal for Research in Mathematics Education, 21(2), 145-153.
Davis, G. E. and Pobjoy, M. (1995), Spreadsheets as constructivist tools for the learning and teaching of mathematics. In O. P. Ahuja (ed.) Quality Mathematics Education in Developing Countries, pp.
25-56. New Delhi: UBSPD.
Davis, G. E. and Waywood, A. R. (1992), Assessment of challenging problems and project work in senior secondary mathematics. In M.Stephens and J. Izard (eds.) Reshaping Assessment Practices:
Assessment in the Mathematical Sciences Under Challenge, pp. 185-200. Melbourne: Australian Council for Educational Research.
Davis, G. E. and Waywood, A. R. (1993), A model for estimating the zone of proximal development through students' writing about mathematical activity. In I. Hirabayashi, N. Nohda, K. Shigematsu and
F.-L. Lin (eds.) Psychology of Mathematics Education, PME XVII, vol 2, pp. 183-190. Tsukuba: University of Tsukuba.
Hunting, R. P., Davis, G. E. and Pearn, C. A. (1997), Whole number constraints on rational number learning in an operator setting. Paper presented at the Symposium From Whole Number Sequences to the
Rational Numbers of Arithmetic, Research Pre-session of the 75th Annual Meeting of the National Council of Teachers of Mathematics, Minneapolis, MN, April 15-16, 1997.
Hunting, R. P., Davis, G. E. and Pearn, C.A. (1997), The role of whole number knowledge in rational number learning. Mathematics Education Research Group of Australasia annual conference. Auckland,
New Zealand.
Hunting, R. P. and Davis, G. E. (1996), Engaging Whole Number Knowledge for Rational Number Learning Using a Computer-Based Tool. Journal for Research in Mathematics Education, 27(3), 354-379
Royle, P. L. and Davis, G. E. (1995), Quality and Distribution of Australian Science Journal Publishing. In M.E.D. Koenig and A. Bookstein (eds.) Fifth International Conference of the International
Society for Scientometrics & Informetrics. Proceedings - 1995. pp. 475-484. Medford N.J.: Learned Information.
Saads, S. and Davis, G. (1998), Verbal hesitancy in student talk. Pre-print. University of Southampton.
Saads, S. and Davis, G. (1997), Visual perception and image formation in three dimensional geometry. Preprint.
Saads, S and Davis, G. (1997) Spatial abilities, van Hiele levels, and language use in three dimensional geometry. In: Pehkonen E (Ed), Proceedings of the 21st Conference of the International Group
for the Psychology of Mathematics Education. University of Helsinki, Finland. Vol 4, pp.104-111.
Silveira, C. & Davis, G. (1998), Implicit processes in early number learning. Pre-print. University of Southampton.
Smith, N. C., Davis, G. E. & Hill, D. J. W. A classroom-experiment in mathematical memory. (pre-print). University of Southampton.
Tall, D., Thomas, M., Davis, G., Gray, E. & Simpson, A. (1998), What is the object of the encapsulation of a process? Journal of Mathematical Behavior, 18(2), 223-241.
Julie-Ann Edwards
Edwards, J. and Jones, K. (1996), Book review: Dynamic Geometry, edited by Ronnie Goldstein, Hilary Povey and Peter Winbourne, Micromath, 12(3), 40-41.
Click here for the review in pdf format.
Edwards, J and Jones, K. (1998), The Contribution of Exploratory Talk to Mathematical Learning. In: Olivier A (Ed), Proceedings of the 22nd Conference of the International Group for the Psychology of
Mathematics Education. University of Stellenbosch, South Africa, Volume 4, p330.
Click here for full article in pdf format.
Edwards, J. and Jones, K. (1999), Students' Views of Learning Mathematics in Collaborative Small Groups. In: O. Zaslavsky (Ed), Proceedings of the 23rd Conference of the International Group for the
Psychology of Mathematics Education, Haifa, Israel, Volume 2, pp281-288.
Click here for full article in pdf format.
Brian Griffiths
Griffiths, H. B. (1998), The British Experience of Teaching Geometry since 1900. In: C. Mammana and V. Villani (Eds), Perspectives on the Teaching of Geometry for the 21st Century. Dordrecht: Kluwer.
pp194-203. ISBN: 0792349903
Griffiths, H. B. (1999), Fudge and Fiddlesticks: a century after. In C. Hoyles, C. Morgan and G. Woodhouse (eds), Rethinking the Mathematics Curriculum. London: Falmer.
Stephen Hegedus
Hegedus, S. (1996), Analysing the Metacognitive Behaviour of Undergraduates. Paper presented at the 8th International Conference of Mathematics Education (ICME8), Seville, Spain, July 1996.
Hegedus, S. (1996), Analysing the Metacognitive Behaviour of Undergraduates in the Domain of Calculus. Paper presented at the Joint Conference of the British Society for Research into Learning
Mathematics and the Association of Mathematics Education Tutors, Loughborough: UK, 1996
Hegedus, S. (1996), Analysing Verbal Data. Paper presented at the Joint Conference of the British Society for Research into Learning Mathematics and the Association of Mathematics Education Tutors,
Working Group: Interviewing, Loughborough: UK, 1996.
Hegedus, S. (1997), Advanced Mathematical Thinking, Metacognition and The Calculus. Paper presented at the Conference of the British Society for Research into Learning Mathematics, Bristol: UK, 1997
Hegedus, S. (1998), The Construction of the ROME model for analysing the Metacognitive Behaviour of Mathematics Undergraduates. Proceedings of the International Conference for the Teaching of
Mathematics, Samos98, Samos, Greece, John Wiley & Sons, July, 1998
Hegedus, S. (1999), Advanced Mathematical Thinking. In L. Bills (Ed.), Proceedings of the Conference of the British Society for Research into Learning Mathematics, King's College London, UK,
February, 1999. pp 89-94
Ann Hirst
Griffiths, H. B. and Hirst, A. E. (1994), Cubic equations, or where did the examination question come from? American Mathematical Monthly, 101, 151-161.
Hirst, A. E. (1999), From FE to HE with A-level Mathematics. Southampton: University of Southampton.
Keith Hirst
Hirst, K. E. (1992) Changes in school mathematics - consequences for the university curriculum. London Mathematical Society Newsletter, 192, 4-6.
Hirst, K. E. (1992) Square triangular numbers and continued fractions. Mathematics in Schools, 21, 36-37.
Hirst, K. E. (1992) Transition to A-level. Journal of the Royal Statistical Society, A, 155, 208.
Hirst, K. E. (1993) The National Curriculum and A-level reform. Bulletin IMA, 29, 9-15.
Hirst, K. E. (1995), Consequences of GCSE in mathematics for degree studies. Math Gazette, 79, 61-63.
Hirst, K. E. (1995), Continued fractions for d^(1/2) with constant partial quotients. Int. J. Math. Educ. Sci. Technol., 26, 205-211
Hirst, K. E. (1995), Numbers, Sequences and Series. Edward Arnold.
Hirst, K. E. (1996), Changes in A-level Mathematics from 1996. LMS Newsletter, 239 (June), 8-9.
Hirst, K. E. (1996), Newton's Method - with mistakes. Math Gazette, 80, 385-389.
Hirst, K. E. (1996), Changes in A-level Mathematics from 1996. University of Southampton.
Hirst, K. E. (1997), Exploring Complex Cosines using a Computer Algebra System. International Journal of Computer Algebra in Mathematics Education, 4, 329-337.
Hirst, K. E. (1997), La Medida de Distancia en Barcelona. SUMA (Revista del Federacion Espanola de Sociedades de Profesores de Matematicas), 24, 63-66
Hirst, K. E. (1998), Limit Points of Sequences, In: Bob Burn, John Appleby and Philip Maher (eds), Teaching Undergraduate Mathematics. London: Imperial College Press (pp22-23).
Hirst, K. E. (1999), Mature Students Studying Mathematics. International Journal of Mathematics Education in Science and Technology, 30, 207-213.
Click here for more details about this article.
Hirst, K. E. (1999), Divisors of n! Mathematical Gazette, 83, 440-445.
Hirst, K. E. (1999) Procedural extrapolation as a source of errors in calculus, Hiroshima Journal of Mathematics Education, 7, 63-66.
Click here for more details about this article.
Hirst, K. E. and Atkinson, K. (1995), Starting Derive. Teaching Mathematics and its Applications, 14, 34-36.
Hirst, K. E. and Shiu, C.M. (1995), Investigations in Pure Mathematics: A Constructivist Perspective. Hiroshima Journal of Mathematics Education, 3, 1-14
Hirst, K. E. and Shiu, C.M. (1996), Investigations in Pure Mathematics. In Chris Haines and Sylvia Dunthorne (eds.) Mathematics Learning and Assessment, pp 2.9-2.16. London: Arnold
Geoff Howson
Balacheff, N., Howson, A.G., Sfard, A., Steinbring, H., Kilpatrick, J. and Sierpinska, A. (1993), What is Research in Mathematics Education and What are its Results? Zentralblatt für Didaktik der
Mathematik, 3, 114-116.
Howson, G. (1991), National Curricula in Mathematics. Leicester: The Mathematical Association.
Howson, A.G. (1993), The Relationship between Assessment, Curriculum and Society. In M. Niss (ed.) Investigations into Assessment in Mathematics Education, pp. 47-56. Dordrecht: Kluwer.
Howson, A.G. (1993), Some difference in mathematics education between Japan and England. In Research Report No. 27, pp. 63-71. Tokyo: National Institute for Educational Research.
Howson, A.G. (1993), A Mathematics Education towards the year 2000. Journal of the Japanese Society of Mathematics Education, LXXV(5), 86-102.
Howson, A.G. (1993), Teachers of mathematics. The Teaching of Mathematics and Informatics, 1, 4-16.
Howson, A.G. (1993), Japanese jottings. Mathematics Teaching, 145, 10-13
Howson, A.G. (1993), Teachers of mathematics. Mathematics Teaching, 142, 28-31.
Howson, G. (1995), Mathematics Textbooks: a Comparative Study of Grade 8 Texts. TIMSS Monograph No 3. Vancouver BC: Pacific Educational Press.
Howson, A. G. (1994), Mathematics in the New Zealand curriculum. Wellington, New Zealand: Business Roundtable. 44pp.
Howson, A. G. (1994), Teachers of Mathematics. In C. Gaulin, B.R. Hodgson, D.H. Wheeler and J.C. Egsgard (eds.) Proceedings of the Seventh International Congress of Mathematics Education, pp. 9-26.
Quebec: Laval University Press.
Howson, A. G. (1998), The Value of Comparative Studies. In: Kaiser, G., Luna, E. and Huntley, I., (eds), International Comparisons in Mathematics Education. London: Falmer Press, 165-188.
Howson, A. G. (1998), Mathematics and Common Sense. In: Alsina, C. et al. (eds), Selected Lectures of 8th International Conference on Mathematics Education, Seville, SAEM 'Thales', 257-69.
Howson, A.G. (1998), MJL [Sir James Lighthill] and mathematics education, Mathematics Today, 34, 164-5.
Howson, A.G. (1998), Some Thoughts on Constructing a Curriculum, Mathematics Teaching, 165, 18-21.
Howson, G., Harries T. and Sutherland, R. (1999), Primary School Mathematics Textbooks: an international study summary, 46pp. London: Qualifications and Curriculum Authority.
Keith Jones
Chronaki, A. and Jones, K. (1999), Language Use and Geometry Texts. Proceedings of the British Society for Research into Learning Mathematics, 19(1), 95-100.
Click here for the complete report in pdf format.
Davis, G. and Jones, K. (1996), The Psychology of Experimental Mathematics. In: L. Puig and A. Gutiérrez (Eds), Proceedings of the 20th Conference of the International Group for the Psychology of
Mathematics Education. University of Valencia, Volume 1, 149 [extended abstract].
Click here for full article in pdf format.
Gorgorio, N and Jones, K. (1996), Elements of the Visualisation Process within a Dynamic Geometry Environment. Invited paper presented to Topic group on The Future of Geometry at the 8th
International Congress on Mathematical Education, Seville, Spain.6pp
Click here for full article in pdf format.
Gorgorió, N. and Jones, K. (1997), Cabri i Visualització, Biaix, 10, 21-23. [in Catalan]
Click here for the version of the article (in English) presented at ICME8 conference (article in pdf format).
Hoyles, C. and Jones, K. (1998), Proof in Dynamic Geometry Contexts. In: C. Mammana and V. Villani (Eds), Perspectives on the Teaching of Geometry for the 21st Century. Dordrecht: Kluwer. pp121-128.
ISBN: 0792349903
Click here for full article in pdf format.
Jones, K. (1993), Researching Geometrical Intuition. Proceedings of the British Society for Research into Learning Mathematics, 13(3), 15-19.
Click here for the article in pdf format.
Jones, K. (1994), Where is the Mathematics in the Continuing Professional Development of Mathematics Teachers? Annual conference of the Association of Mathematics Education Teachers 1994 (AMET1994),
Cheltenham, UK, 5-8 September 1994.
Click here for the article in pdf format.
Jones, K. (1994), On the Nature and Role of Mathematical Intuition. Proceedings of the British Society for Research into Learning Mathematics, 14(2), 59-64.
Click here for the article in pdf format.
Jones, K. (1994), Mathematics Teaching From A Different Point of View, Mathematics Education Review, 5, 10-17.
Click here for full article in pdf format.
Jones, K. (1995), Acquiring abstract geometrical concepts: the interaction between the formal and the intuitive. In: Proceedings of the Third British Congress on Mathematics Education, Manchester
Metropolitan University, pp239-46.
Click here for full article in pdf format.
Jones, K. (1995), Dynamic geometry contexts for proof as explanation. In: L. Healy and C. Hoyles (Eds), Justifying and Proving in School Mathematics. London: Institute of Education, pp142-154. No ISBN.
Click here for full article in pdf format.
Jones, K. (1995), Researching the Learning of Geometrical Concepts in the Secondary Classroom: problems and possibilities, Proceedings of the British Society for Research into Learning Mathematics,
15(2), 31-34.
Click here for the complete report in pdf format.
Jones, K. (1995), Contexts for teaching geometry, Proceedings of the British Society for Research into Learning Mathematics, Birmingham, 15(3), 41-42.
Click here for the complete report in pdf format.
Jones, K. (1995), Geometrical reasoning, Proceedings of the British Society for Research into Learning Mathematics, 15(3), 43-47.
Click here for the complete report in pdf format.
Jones, K. (1995), The Changing Nature of Probability at Key Stages 1 and 2, Mathematics in School, 24(2), 40-41. ISSN: 0305-7259
Click here for full article in pdf format.
Jones, K. (1996), Coming to know about 'dependency' within a dynamic geometry environment. In: L. Puig and A. Gutiérrez (Eds), Proceedings of the 20th Conference of the International Group for the
Psychology of Mathematics Education. University of Valencia, Volume 3, 145-152.
Click here for full article in pdf format.
Jones, K. (1997), Children Learning to Specify Geometrical Relationships Using a Dynamic Geometry Package. In: Pehkonen E (Ed), Proceedings of the 21st Conference of the International Group for the
Psychology of Mathematics Education. University of Helsinki, Finland, Volume 3, 121-128.
Click here for full article in pdf format.
Jones, K. (1997), Student Teachers' Conceptions of Mathematical Proof. Mathematics Education Review, 9, 21-32.
Click here for full article in pdf format.
Jones, K. (1997), A Comparison of the Teaching of Geometrical Ideas in Japan and the US. Proceedings of the British Society for Research into Learning Mathematics, 17(3), 65-68.
Click here for the complete report in pdf format.
Jones, K. (1997), Some Lessons in Mathematics: a comparison of mathematics teaching in Japan and America, Mathematics Teaching, 159, 6-9. ISSN: 0025-5785
Click here for full article in pdf format. Or click for pre-print in html format.
Jones, K.(1998), Deductive and Intuitive Approaches to Solving Geometrical Problems. In: C. Mammana and V. Villani (eds), Perspectives on the Teaching of Geometry for the 21st Century. Dordrecht:
Kluwer. pp78-83. ISBN: 0792349903
Click here for full article in pdf format.
Jones, K. (1998), Theoretical Frameworks for the Learning of Geometrical Reasoning. Proceedings of the British Society for Research into Learning Mathematics, 18(1-2), 29-34.
Click here for the complete report in pdf format.
Jones, K. (1998), Mathematics Graduates' Conceptions of Mathematical Proof. In: D Holton (Ed), On the Teaching and Learning of Mathematics at University Level: pre-proceedings of ICMI study 11.
Singapore: National Institute of Education, 161-164.
Click here for full article in pdf format.
Jones K. (1998), The Mediation of Learning within a Dynamic Geometry Environment. In: Olivier A (Ed), Proceedings of the 22nd Conference of the International Group for the Psychology of Mathematics
Education. University of Stellenbosch, South Africa, Volume 3, pp96-103.
Click here for full article in pdf format.
Jones, K. (1999), Student Interpretations of a Dynamic Geometry Environment. In: Inge Schwank (Ed), European Research in Mathematics Education. Osnabrueck, Germany: Forschungsinstitut fur
Mathematikdidaktik. pp 245-58. ISBN: 392538653X
Click here for full article in pdf format.
Jones, K. (1999), Planning for Mathematics Learning. In: S. Johnston-Wilder, P. Johnston-Wilder, D. Pimm and J. Westwell, (Eds), Learning to Teach Mathematics in the Secondary School. London:
Routledge. pp84-102. ISBN: 0415162807
Click here for full version of 2nd edition of chapter (in pdf format).
Jones, K. and Bills, C. (1998), Visualisation, Imagery and the Development of Geometrical Reasoning. Proceedings of the British Society for Research into Learning Mathematics, 18(1-2), 123-128.
Click here for the complete report in pdf format.
Jones, K. and Brown, L. (1994), The Vanishing National Curriculum, Mathematics Teaching, 148, pp19 and 25. ISSN: 0025-5785
Click here for full article in pdf format.
Jones, K. and Simons, H. (1999) Online mathematics enrichment: an evaluation of the NRICH project. Southampton, UK, University of Southampton, 94pp. ISBN: 0854327010
Click here for full article in pdf format.
Jones, K. and Sinkinson, A. (1999), A Critical Analysis of Ofsted Judgements of the Quality of Secondary Mathematics Initial Teacher Education Courses. Paper presented at the Symposium on 'Critical
Issues in Mathematics Initial Teacher Education' at the British Educational Research Association Annual Conference (BERA1999), The University of Sussex, Brighton, 2-5 September 1999.
Click here for the complete report in pdf format.
Jones, K. and Smith, K. (1997), Student Teachers Learning to Plan Mathematics Lessons. Paper presented at the 1999 Annual Conference of the Association of Mathematics Education Teachers (AMET1999).
Leicester. May 1997. 7pp.
Click here for the complete report in pdf format.
McLeay, H., O'Driscoll-Tole, K. and Jones K. (1998), Using imagery to solve spatial problems. Proceedings of the British Society for Research into Learning Mathematics, 18(3), 83-88.
Click here for the complete report in pdf format.
Mogetta, C., Olivero, F. and Jones K. (1999), Providing the Motivation to Prove in a Dynamic Geometry Environment. Proceedings of the British Society for Research into Learning Mathematics, 19(2),
Click here for the complete report in pdf format.
Mogetta, C., Olivero, F. and Jones K. (1999), Designing Dynamic Geometry Tasks that Support the Proving Process. Proceedings of the British Society for Research into Learning Mathematics, 19(3),
Click here for the complete report in pdf format.
Paul, M. (1995) Pizza and spaghetti: Solving maths problems in the primary classroom. The Computing Teacher, April.
Paul, M. (1997) Using spreadsheets as investigative tools with primary mathematics pupils. Proceedings, Internet and Educational Computing Conference, Cape Town.
Last modified 08 October 2008
Geometry Dilation
Videos, worksheets, games and activities to help Geometry students learn about transformations on the coordinate plane. In this lesson, we will look at dilation.
A dilation is a non-rigid transformation, which means that the original and the image are not congruent. They are, however, similar figures. To perform a dilation, a scale factor and a center of
dilation are needed. If the scale factor is larger than 1, the image is larger than the original; if the scale factor is between 0 and 1, the image is smaller than the original.
Math Dilations
Students learn that when the dimensions of a figure are increased or decreased to create a new figure that is similar to the original figure, the transformation is called a dilation.
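For readers who want to check the arithmetic, here is a minimal sketch in Python (not part of the lesson videos) of how a dilation acts on a single point, using the standard rule that the image of P under a dilation with center C and scale factor k is C + k(P - C).

# Minimal illustration of a dilation on one point (standard formula, not lesson code).
def dilate(point, center, k):
    (x, y), (cx, cy) = point, center
    return (cx + k * (x - cx), cy + k * (y - cy))

# Scale factor 2 about the origin doubles each coordinate.
print(dilate((3, 1), (0, 0), 2))    # (6, 2)
# A scale factor between 0 and 1 shrinks the figure toward the center.
print(dilate((3, 1), (0, 0), 0.5))  # (1.5, 0.5)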
Equation play: who triumphs? Nspire-CAS or HP 50G ?
Equation play: who triumphs? Nspire-CAS or HP 50G ?
For the facile articulation and manipulation of mathematical expressions,
who triumphs? Nspire-CAS or HP 50G ?
Publicly Anonomous Use wrote in
> For the facile articulation and manipulation of mathematical expressions,
> who triumphs? Nspire-CAS or HP 50G ?
I don't think I like this in the bid description:
"Quickly and easily select the proper syntax, symbols and variables from a
-Sounds like a Casio convolution.
Tag:"Who can afford experience?"
Re: Equation play: who triumphs? Nspire-CAS or HP 50G ?
On Apr 11, 1:55 pm, Publicly Anonomous Use
> For the facile articulation and manipulation of mathematical expressions,
> who triumphs? Nspire-CAS or HP 50G ?
Well now, that is invoking a bit of a holy war, but the honest answer
is that they both have strengths and weaknesses whose importance
greatly varies with the user, as well as with the user's
experience with each.
(I'm not very familiar with Nspire, but I'm reasonably confident that
many of the overall differences in philosophy between the Ti-89 and
the HP50g also apply to the Nspire).
For example, the TI's auto-simplification is a nice feature, and often
produces the desired result or very close to it. But if it fails to do
so, there is relatively little you can do to convince it to transform
the expression to the form you wanted. The HP50g simplifies very
little by default, but has a wide array of commands to manipulate the
result into your desired form. With experience it is easy to get the
form you want, but in the beginning it is rather difficult. As for
general entry of symbolic equations (or even numeric ones large enough
that entering them step by step is error prone) the best testament to
the HP50g's equation editor's design is the fact that it was ported to
the TI-89 twice, once as a free app Hail, and once as EQW (originally
available as a crippled free version, and a for-pay flash app version).
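As a rough desktop-CAS analogy of that auto-simplify versus explicit-rewrite contrast (using SymPy in Python, not either calculator's actual command set), the division of labour looks something like this:

# Desktop-CAS analogy only (SymPy); the calculators' commands differ.
import sympy as sp

x = sp.symbols('x')
expr = (x**2 - 1) / (x - 1)

print(sp.simplify(expr))            # automatic route: x + 1
print(sp.factor(x**2 - 1))          # explicit manipulation: (x - 1)*(x + 1)
print(sp.expand((x + 1)*(x - 1)))   # another explicit rewrite: x**2 - 1

The point is only the trade-off between automatic simplification and user-driven transformation, not the specific commands.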
I'm not sure if Nspire has a version of the equation editor built-in,
but it would not surprise me. Assuming the Nspire support for units is
the same as the TI-89's then it is by default slightly nicer than
HP50g's unit support. Etc.
The real bottom line is that the Ti-offerings are somewhat more user-
friendly, and have a much shallower learning curve, but the HP50g in
general is far more powerful and customizable than TI's offerings. Of
course there are almost certainly a few small exceptions, but that's life.
Some other notes. Virtually all the functions in the NSpire will
work more-or-less as expected with symbolic arguments. For a variety
of reasons, a fair number of HP50g commands do not have support for
symbolic arguments, but in many cases there is a version of the
function present that does have support.
Re: Equation play: who triumphs? Nspire-CAS or HP 50G ?
On Apr 11, 2:28 pm, username localhost
> I'm not sure if Nspire has a version of the equation editor built-in,
> but it would not surprise me. Assuming the Nspire support for units is
> the same as the TI-89's then it is by default slightly nicer than
> HP50g's unit support. Etc.
I have both the TI-89 Titanium and the HP 50g and think the 50g has
much better unit support than the 89-Ti does *if you know how to use
the HP*. The ideal method of unit manipulation is not discussed in the
user manual -- instead, a slower, more cumbersome method is demonstrated.
On the 50g with soft menus (-117 SF), units are very easy to use. [r->]
[UNITS] (the 6 key) brings up a soft menu of types of units (e.g.
length, area, volume, time, speed...). Each of these contains the
units of that category. Pressing the corresponding soft key multiplies
whatever is in stack level 1 by that unit. Right-shifting that soft
key will attempt to convert whatever is in stack level 1 to that unit,
if the units agree (i.e. you can't convert 5 feet to hours). Finally,
left-shifting the soft key will divide by that unit. This is much
easier than the CONVERT command.
On the TI-89, you must first type in the value, then the unit, then
the "arrow" convert operator, and then finally the unit to be
converted to. The HP simplifies this greatly by making unit conversion
a two-keystroke process and is very efficient for chain-calculations.
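To make that workflow concrete, here is a tiny Python sketch of the same idea -- attach a unit to a number, then convert it -- using standard conversion factors; it is only an illustration, nothing like the calculator firmware.

# Factors to metres (standard values, not taken from either calculator).
TO_METRES = {"m": 1.0, "ft": 0.3048, "in": 0.0254, "km": 1000.0}

def attach(value, unit):
    # like pressing a unit softkey: tag the number with that unit
    return (value, unit)

def convert(quantity, target):
    # like shift-pressing the softkey: re-express the tagged value in the target unit
    value, unit = quantity
    return (value * TO_METRES[unit] / TO_METRES[target], target)

print(convert(attach(5, "ft"), "m"))   # approximately (1.524, 'm')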
Re: Equation play: who triumphs? Nspire-CAS or HP 50G ?
On Apr 11, 1:55 pm, Publicly Anonomous Use
> For the facile articulation and manipulation of mathematical expressions,
> who triumphs? Nspire-CAS or HP 50G ?
Depends. The TI line is excellent for the average high school student
who just wants his calculator to spit out the answer in a form similar
to the back of the book. TI was aiming for the educational market, so
this makes sense.
The HP is much better for "outside-the-box" thinking where the user
must come up with her own equations to use as opposed to simply
copying them out of a textbook. This is valuable for real-world
problem solving as well as mathematics competitions. The HP's RPN mode
gives it a huge advantage when it comes to performing a series of
operations on a number. There is no issue with intermediate rounding
and it also saves keystrokes (and therefore time).
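For anyone who has not used RPN, a minimal stack evaluator in Python (an illustration only, not HP firmware) shows why chained calculations need no re-entry of intermediate results:

def rpn(tokens):
    # each operator pops its operands off the stack and pushes the result,
    # so intermediate values are never retyped (and never re-rounded by hand)
    stack = []
    ops = {"+": lambda a, b: a + b, "-": lambda a, b: a - b,
           "*": lambda a, b: a * b, "/": lambda a, b: a / b}
    for t in tokens:
        if t in ops:
            b, a = stack.pop(), stack.pop()
            stack.append(ops[t](a, b))
        else:
            stack.append(float(t))
    return stack[-1]

# (3 + 4) * 2 keyed as one chain: 3 ENTER 4 + 2 *
print(rpn(["3", "4", "+", "2", "*"]))  # 14.0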
So basically, if you're a student (high school, college,...), the TI
will probably suit you better, unless you're the type of student who
is willing to invest extra time to tinker with things. If you're out
of school and need a calculator for math, the HP might be better as it
is more flexible. If you're out of school and need to do serious math,
you shouldn't be using a calculator anyway and probably already have
some math program installed on your computer.
Re: Equation play: who triumphs? Nspire-CAS or HP 50G ?
May I correct you?
"Left-shifting" that soft key will attempt to convert whatever is in
stack level 1 to that unit,
if the units agree...
"Right-shifting" the soft key will divide by that unit.
> On the 50g with soft menus (-117 SF), units are very easy to use. [r->]
> [UNITS] (the 6 key) brings up a soft menu of types of units (e.g.
> length, area, volume, time, speed...). Each of these contains the
> units of that category. Pressing the corresponding soft key multiplies
> whatever is in stack level 1 by that unit. Right-shifting that soft
> key will attempt to convert whatever is in stack level 1 to that unit,
> if the units agree (i.e. you can't convert 5 feet to hours). Finally,
> left-shifting the soft key will divide by that unit. This is much
> easier than the CONVERT command.
Re: Equation play: who triumphs? Nspire-CAS or HP 50G ?
On Apr 11, 8:29 pm, jdol...@gmail.com wrote:
> May I correct you?
> "Left-shifting" that soft key will attempt to convert whatever is in
> stack level 1 to that unit,
> if the units agree...
> "Right-shifting" the soft key will divide by that unit.
My mistake. Thanks for the catch.
Re: Equation play: who triumphs? Nspire-CAS or HP 50G ?
On Apr 11, 8:01 pm, sc_use...@hotmail.com wrote:
On Apr 11, 2:28 pm, username localhost
> wrote:
> > I'm not sure if Nspire has a version of the equation editor built-in,
> > but it would not surprise me. Assuming the Nspire support for units is
> > the same as the TI-89's then it is by default slightly nicer than
> > HP50g's unit support. Etc.
> I have both the TI-89 Titanium and the HP 50g and think the 50g has
> much better unit support than the 89-Ti does *if you know how to use
> the HP*. The ideal method of unit manipulation is not discussed in the
> user manual -- instead, a slower, more cumbersome method is
> demonstrated.
First notice that I said by default. Further, it is obvious that one
is intended to use the softkeys
rather than the choose menus. The features you mention are indeed
mentioned in the manual too.
However, the TI-89 comes with the ability (in fact the default
behavior) of simplifying units. To do that on the HP50g requires
external software. The HP50g appears to lack support for units in
Then there are two small things I slightly prefer on the TI-89.
One is that "_m" is valid, and "1_m" is not required. Another is that
custom defined units
are stored with a leading underscore in their name, making it
extremely clear when browsing what they are.
Finally, I do like the fact that constants in the ti-89 use the same
system as units. This makes sense. After all,
is 'c' really a constant, or is it also a unit? When one says 0.95c
they are effectively using it as a unit, not as a constant.
How about 5g's? Same thing. I find that terribly convenient. While this
could obviously be replicated with custom units, that again
is not a default feature.
Re: Equation play: who triumphs? Nspire-CAS or HP 50G ?
On Apr 12, 1:36 pm, username localhost
> Finally, I do like the fact that constants in the ti-89 use the same
> system as units. This makes sense. After all,
Not sure what you mean here, but the 50g's CONLIB (constants library)
expresses all the constants in terms of built-in units (e.g. g =
9.80665 m/s^2; h = 6.626E-34 J-s; etc).
> is 'c' really a constant, or is it also a unit? When one says 0.95c
> they are effectively using it as a unit, not as a constant.
> How about 5g's? Same thing. I find that terribly convenient. While this
> could obviously be replicated with custom units, that again
> is not a default feature.
Sure, 1c = 299792458 m/s, so 0.95c = (0.95)(299792458 m/s) = 284802835
m/s, so c can be thought of as a constant that contains the m/s unit.
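A quick sketch of that idea in Python (an assumed representation, not how either calculator actually stores constants): keeping a constant's unit alongside its value means scaling it behaves just like using a unit.

# Speed of light stored as value plus unit (illustrative representation only).
C = (299792458.0, "m/s")

def scale(factor, quantity):
    value, unit = quantity
    return (factor * value, unit)

print(scale(0.95, C))  # roughly (284802835.1, 'm/s'), matching the arithmetic above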
Re: Equation play: who triumphs? Nspire-CAS or HP 50G ?
username localhost wrote in
> On Apr 11, 1:55 pm, Publicly Anonomous Use
> wrote:
>> For the facile articulation and manipulation of mathematical
>> expressions,
>> who triumphs? Nspire-CAS or HP 50G ?
> Well now, that is invoking a bit of a holy war, but the honest answer
> is that they both have strengths and weaknesses whose importance
> greatly varies with the user, as well as with the user's
> experience with each.
> (I'm not very familiar with Nspire, but I'm reasonably confident that
> many of the overall differences in philosophy between the Ti-89 and
> the HP50g also apply to the Nspire).
> For example, the TI's auto-simplification is a nice feature, and often
> produces the desired result or very close to it. But if it fails to do
> so, there is relatively little you can do to convince it to transform
> the expression to the form you wanted. The HP50g simplifies very
> little by default, but has a wide array of commands to manipulate the
> result into your desired form. With expereince it is easy to get the
> form you want, but in the begining it is rather difficult. As for
> general entry of symbolic equations (or even numeric ones large enough
> that entering them step by step is error prone) the best testament to
> the HP50g's equation editor's design is the fact that it was ported to
> the TI-89 twice, once as a free app Hail, and once as EQW (originally
> available as a crippled free version, and a for-pay flash app
> version).
> I'm not sure if Nspire has a version of the equation editor built-in,
> but it would not surprise me. Assuming the Nspire support for units is
> the same as the TI-89's then it is by default slightly nicer than
> HP50g's unit support. Etc.
> The real bottom line is that the Ti-offerings are somewhat more user-
> friendly, and have a much shallower learning curve, but the HP50g in
> general is far more powerful and customizable than TI's offerings. Of
> course there are almost certainly a few small exceptions, but that's
> life.
> Some other notes. Virtually all the functions in the NSpire will
> work more-or-less as expected with symbolic arguments. For a variety
> of reasons, a fair number of HP50g commands do not have support for
> symbolic arguments, but in many cases there is a version of the
> function present that does have support.
I've got to choose one or the other very soon (for a variety of reasons).
I'm holding a 38G, a 48S and the 35S, but I'm afraid I'll need something
newer, faster and more complex if I continue with classes.
I suppose that while HP's rigour and precision are attractive, I need to
balance the HP advantages of confidence and speed with a low user-demand
during operation, as well as tactile surety and simple durability.
I just can't decide yet. Someone mentioned, and I agree that graphic
calculators are not too common in the office, but I needn't worry about
that for a while.
One best tool, in an environment of academic theoretical abstractions,
hmmm.. I suppose I'll have to put both to my hands for a few hours.
Well, back to the mines...
Thanks; as usual, the group's full of friendly, articulate people with a
real sense of the working value of this type of equipment.