Here's the question you clicked on: write the given trig function in terms of sine and cosine, then simplify. Given function: tan(theta)/(sec(theta) - cos(theta))
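For completeness, the requested simplification works out to csc θ:

```latex
\frac{\tan\theta}{\sec\theta-\cos\theta}
  = \frac{\dfrac{\sin\theta}{\cos\theta}}{\dfrac{1}{\cos\theta}-\cos\theta}
  = \frac{\dfrac{\sin\theta}{\cos\theta}}{\dfrac{1-\cos^{2}\theta}{\cos\theta}}
  = \frac{\sin\theta}{\sin^{2}\theta}
  = \frac{1}{\sin\theta}
  = \csc\theta
```

The key step uses the Pythagorean identity 1 − cos²θ = sin²θ after putting the denominator over a common denominator of cos θ.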
{"url":"http://openstudy.com/updates/50bec2b0e4b09e7e3b85f125","timestamp":"2014-04-20T16:21:51Z","content_type":null,"content_length":"298915","record_id":"<urn:uuid:8c122a4a-bea7-4475-a055-ea037e13c061>","cc-path":"CC-MAIN-2014-15/segments/1397609538824.34/warc/CC-MAIN-20140416005218-00023-ip-10-147-4-33.ec2.internal.warc.gz"}
The encyclopedic entry for Boltzmann's constant

statistical thermodynamics

Boltzmann's equation is a probability equation relating the entropy S of an ideal gas to the quantity W, the number of microstates corresponding to a given macrostate:

$S = k \log W$ (1)

where k is Boltzmann's constant, equal to 1.38062 × 10^-23 joule/kelvin, and W is the number of microstates consistent with the given macrostate. In short, the Boltzmann formula shows the relationship between entropy and the number of ways the atoms or molecules of a thermodynamic system can be arranged. In 1934, Swiss physical chemist Werner Kuhn successfully derived a thermal equation of state for rubber molecules using Boltzmann's formula, which has since come to be known as the entropy model of rubber. The equation was originally formulated by Ludwig Boltzmann between 1872 and 1875, but was later put into its current form by Max Planck in about 1900. To quote Planck, "the logarithmic connection between entropy and probability was first stated by L. Boltzmann in his kinetic theory of gases." The value of $W$, specifically, is the Wahrscheinlichkeit, or number of possible microstates corresponding to the macroscopic state of a system: the number of (unobservable) "ways" the (observable) thermodynamic state of a system can be realized by assigning different positions and momenta to the various molecules. Boltzmann's paradigm was an ideal gas of $N$ identical particles, of which $N_i$ are in the $i$-th microscopic condition (range) of position and momentum. $W$ can be counted using the formula for permutations

$W = N! \,/\, \prod_i N_i!$ (2)

where $i$ ranges over all possible molecular conditions and $!$ denotes the factorial. The "correction" in the denominator is due to the fact that identical particles in the same condition are indistinguishable. $W$ is sometimes called the "thermodynamic probability," since it is an integer greater than one, while mathematical probabilities are always numbers between zero and one.
Boltzmann's formula applies to microstates of the universe as a whole, each possible microstate of which is presumed to be equally probable. But in thermodynamics it is important to be able to make the approximation of dividing the universe into a system of interest plus its surroundings, and then to be able to identify the entropy of the system with the system entropy of classical thermodynamics. The microstates of such a thermodynamic system are not equally probable: for example, high energy microstates are less probable than low energy microstates for a thermodynamic system kept at a fixed temperature by allowing contact with a heat bath. For thermodynamic systems where the microstates may not have equal probabilities, the appropriate generalization, called the Gibbs entropy, is:

$S = -k \sum_i p_i \log p_i$ (3)

This reduces to equation (1) if the probabilities $p_i$ are all equal to $1/W$. Boltzmann used a $\rho \log \rho$ formula as early as 1866. He interpreted $\rho$ as a density in phase space, without mentioning probability, but since this satisfies the axiomatic definition of a probability measure we can retrospectively interpret it as a probability anyway. Gibbs gave an explicitly probabilistic interpretation in 1878. Boltzmann himself used an expression equivalent to (3) in his later work and recognized it as more general than equation (1). That is, equation (1) is a corollary of equation (3): in every situation where equation (1) is valid, equation (3) is valid also, but not vice versa.

Boltzmann entropy excludes statistical dependencies

The term Boltzmann entropy is also sometimes used to indicate entropies calculated based on the approximation that the overall probability can be factored into an identical separate term for each particle, i.e., assuming each particle has an identical independent probability distribution, and ignoring interactions and correlations between the particles.
This is exact for an ideal gas of identical particles, and may or may not be a good approximation for other systems.
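As a numerical sanity check, a few lines of Python can verify that the multiplicity formula (2) and the reduction of the Gibbs entropy (3) to the Boltzmann form (1) behave as described; the value of k is the one quoted in the entry.

```python
import math

k = 1.38062e-23  # Boltzmann's constant in joule/kelvin, as quoted above

def boltzmann_entropy(W):
    """Equation (1): S = k log W, for W equally probable microstates."""
    return k * math.log(W)

def gibbs_entropy(probs):
    """Equation (3): S = -k sum_i p_i log p_i, for general probabilities."""
    return -k * sum(p * math.log(p) for p in probs if p > 0)

def multiplicity(N, occupations):
    """Equation (2): W = N! / prod_i N_i!, counted with exact integers."""
    W = math.factorial(N)
    for Ni in occupations:
        W //= math.factorial(Ni)
    return W

# Four particles with occupation numbers (2, 1, 1): W = 4!/(2! 1! 1!) = 12.
W = multiplicity(4, [2, 1, 1])

# With the uniform distribution p_i = 1/W, the Gibbs entropy equals k log W.
uniform = [1.0 / W] * W
print(W, boltzmann_entropy(W), gibbs_entropy(uniform))
```

For any non-uniform distribution over the same W microstates, `gibbs_entropy` returns a strictly smaller value, which is the sense in which (3) generalizes (1).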
{"url":"http://www.reference.com/browse/boltzmann's+constant","timestamp":"2014-04-19T02:12:20Z","content_type":null,"content_length":"79956","record_id":"<urn:uuid:951a09d7-a425-49bf-acf4-4a06a5526bf4>","cc-path":"CC-MAIN-2014-15/segments/1397609535745.0/warc/CC-MAIN-20140416005215-00614-ip-10-147-4-33.ec2.internal.warc.gz"}
The Many Worlds Interpretation of Quantum Mechanics and the Emperor’s New Clothes

Encountering the Many Worlds Interpretation

Several years ago, I looked into the Many Worlds Interpretation (MWI) of quantum mechanics and concluded that it was not on the right track. It seemed to be creating more conceptual and technical problems than it solved. However, I frequently come across mention of it in the physics literature and in documentaries. Several leading scientists refer to it as a ‘viable’ alternative to the canonical Copenhagen Interpretation (CI); some even call it the ‘preferred’ interpretation. So, I recently decided to take another look at the MWI. Perhaps there was something I missed, or something important that I did not understand on the first go-around. My initial instincts have been validated. Reading about the MWI, including papers by its proponents as well as by its detractors, reminded me of the Hans Christian Andersen story called The Emperor’s New Clothes. The Emperor and his ministers believe the hype about a fabric that is allegedly invisible to anyone who is unfit for their position. They pretend that they can see the fabric so as not to feel left out. While the Emperor is parading naked through the town, believing that he is wearing the best suit of clothes, a naïve young boy blurts out that the Emperor is naked! Perhaps I can be that naïve young boy when it comes to untestable ideas like the MWI. I may not be young, but bear with me.

So what is the Many Worlds Interpretation?

As advertised, the main advantage of the MWI is that it solves the measurement problem. I discussed the measurement problem in two previous posts: Quantum Weirdness: The unbridled ability of quantum physics to shock us and Contrary to Popular Belief, Einstein Was Not Mistaken About Quantum Mechanics.
The measurement problem results from the apparent need for two distinct processes for the evolution of the state vector: (1) continuous and deterministic evolution according to the Schrödinger equation when no one is looking, followed by (2) spontaneous non-unitary evolution, or collapse, of the state vector upon measurement of an observable. What constitutes a measurement and the dynamics of wave function collapse are not defined in the CI. Additionally, special status is assigned to an intelligent observer who is treated as being outside the quantum system. As an added bonus, proponents of MWI claim that it enables independent derivation of quantum probability distributions without assuming the Born rule. The Born rule for computing the probability of potential outcomes of a quantum event is an additional postulate of canonical quantum mechanics. According to this rule (which has enjoyed phenomenal experimental verification time and time again throughout the past roughly ninety years), the probability for each potential outcome to become the realized outcome is given by the amplitude squared from the applicable terms in the state vector. Hugh Everett developed the relative state formulation in his dissertation and his subsequent publication of “Relative State” Formulation of Quantum Mechanics. It was later given new life by Bryce DeWitt in 1970, with his work applying rational decision theory and game theory to quantum mechanics; see Quantum Mechanics and reality. Since then, dozens of papers have been written attempting to patch holes in the theory, or to take it apart. The MWI hypothesis avoids the measurement problem by assuming that wave function collapse never happens. A single result never emerges from an interaction or quantum measurement. Instead, all possibilities are realized. Each possibility is manifested in a new branching universe.
With each observation, measurement, or interaction, the observer state branches into a number of different states, each on a separate branch of a multiverse. All branches exist simultaneously and each branch is ‘equally real’. All potential outcomes are realized, regardless of how small their probability.

What is wrong with the Many Worlds Interpretation?

If you have read my earlier post Three Roads to What Lies Beyond Quantum Mechanics, you have already glimpsed my discontent with MWI. You will find statements in the literature that claim MWI solves the paradoxes of the CI, and that it derives quantum probabilities without the use of an ad hoc assumption (as in the case of the Born rule in the CI). Hugh Everett’s main goals when he gave birth to the ‘relative state formulation’, which subsequently became known as the MWI, were to get rid of non-unitary wave function collapse and to relegate the observer to just another part of the quantum system. Unfortunately, MWI and its many variants do not live up to the product’s claims. The MWI hypothesis requires an unimaginably large, perhaps infinite, number of universes, each spawned essentially instantaneously in a fully evolved state from its parent. Your present universe is constantly branching, sprouting multiple universes at a fantastic rate. Each new universe is identical to its parent IN EVERY WAY, except for the record of a single quantum event. I don’t just mean that in one you are the Queen or King of your senior prom, and in another you decide not to run for prom royalty. Every quantum interaction, every quantum measurement, a countless number of which happen every day in what we conventionally call the universe, leads to multiple new universes.
According to Bryce DeWitt in Quantum Mechanics and reality, “…every quantum transition taking place on every star, in every galaxy, in every remote corner of the universe is splitting our local world on earth into myriads of copies of itself.” Cloning and quantum teleportation Star Trek-style should be a breeze if quantum mechanics allows cloning the entire universe a countless number of times each second! This may make for interesting and fun science fiction, but without testable predictions it is not physics. This multiverse evolves in a continuous and deterministic way. The apparent randomness that an observer in a particular universe (branch) perceives is in his/her mind; a consequence of the particular branch he/she finds him/herself in. The emergence of macroscopic uniqueness, a consequence of state vector collapse in the CI, is just an illusion in the MWI. That sounds like progress, right? But wait. There’s More The different branches are incoherent; they do not interfere with each other and observers in one branch cannot detect the existence of any of the other branches (this is the “no-communication” hypothesis). The wave function collapse hypothesis has been replaced by the no-communication hypothesis. Quantum decoherence has been used to justify and explain the no-communication hypothesis, with varying success. But, it has also been used to justify and explain the wave function collapse hypothesis. So there is nothing gained here by postulating a countless number of universes branching out from all of the interactions occurring throughout our universe. As John Bell stated (while writing about the MWI, see p. 
133 of Speakable and Unspeakable in Quantum Mechanics): “Now it seems to me that this multiplication of universes is extravagant, and serves no real purpose in the theory, and can simply be dropped without repercussions.” Probabilities in the Many Worlds Interpretation Everett sets out to show that the Born probability rule can be derived from within his model, as opposed to having to assume it. He does this by assuming that the square of the amplitudes (from the state vector, same values that the Born rule uses) represent the ‘measure’ that should be assigned to each of the branches. When an observer repeats the same experiment a large number of times, multiple branches appear corresponding to each of the possible outcomes for each performance of the experiment. A particular observer will traverse a particular series of branches out of all the possible combinations of outcomes from all the trials. By applying his weighting scheme, Everett shows that, in most cases, the observer is part of a branch where the relative frequency of the observed results agrees with the Born rule. What exactly does it mean for different branches to have different weights, if each and every branch is ‘equally real’? Are we to assume that the number of realizations of branches associated with a particular outcome of a particular measurement or interaction is proportional to the branch weight? You may naively think that the probabilities of various outcomes should be related to the number of branches with that outcome (a simple counting measure). What would then happen if the probability was an irrational number? Combinatorial methods fail. Even if you could use simple combinatorial methods, many observers would see outcome distributions that conflict with the Born rule. The Born probability rule has been validated in countless experiments over the past 87 years. 
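The contrast between naive branch counting and Everett's amplitude-squared weighting can be made concrete with a toy calculation. The sketch below is purely illustrative (the two-outcome measurement, the values of n, p, and the tolerance eps are all assumptions, not anything from Everett's papers): for a measurement with Born probability p repeated n times, plain branch counting concentrates observed frequencies near 1/2, while weighting each branch by its product of squared amplitudes concentrates them near p.

```python
from math import comb

# Hypothetical two-outcome measurement repeated n times: outcome "up" has
# Born probability p (amplitude sqrt(p)), outcome "down" has 1 - p.
n, p, eps = 100, 0.9, 0.05

total_branches = 2 ** n
count_near_p = 0       # plain branch counting
weight_near_p = 0.0    # Everett-style |amplitude|^2 weighting
for k in range(n + 1):
    # All C(n, k) branches with k "up" outcomes share the same weight.
    if abs(k / n - p) <= eps:
        count_near_p += comb(n, k)
        weight_near_p += comb(n, k) * p**k * (1 - p)**(n - k)

print(count_near_p / total_branches)  # tiny: most branches cluster near frequency 1/2
print(weight_near_p)                  # close to 1: weighted measure matches the Born rule
```

This is exactly the tension the paragraph above describes: under a simple counting measure, the overwhelming majority of branches show frequencies that conflict with the Born rule, and only the unexplained amplitude-squared weighting recovers it.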
Why have we never witnessed a deviation from it in any of the uncountable combinations of branches we have traversed to get where we are today? In Everett’s formulation, the observer is considered as a purely physical system. This is a central part of his relative state formulation. The observer is just one subsystem in the overall system under consideration. Once one state is chosen for one part of the overall system, then the rest of the system is in a relative state; state X given that the one subsystem is in state Y. This was, initially, an advantage of the MWI compared to the CI. However, attempts to patch some of the holes in the theory have relied heavily on rational decision theory and game theory, thrusting a conscious observer back into the spotlight.

Throwing in Rational Decision Theory and Game Theory

Unfortunately, Everett’s approach to deriving the Born rule has been taken apart due to its use of circular reasoning. David Deutsch used decision theory and game theory to derive the Born rule; see Quantum Theory of Probability and Decisions. He demonstrated that if the amplitude squared measure is applied to each branch, then this value is also the probability measure for those branches. He did this by arguing that it represents the preferences of a rational agent. He considered the behavior of a rational decision maker who is making decisions about future quantum measurements. By rational, he meant that the decision maker’s preferences must be transitive: if he/she prefers A to B, and B to C, then he/she must also prefer A to C. (On a side note, many psychology studies have shown that personal preferences of so-called rational agents in the macro world are often not transitive.) According to Deutsch, if a rational decision maker believes all of quantum theory with the exception of assuming a probability postulate, he/she necessarily will make decisions (behave) as if the canonical probability rule is true.
I am not an expert on decision theory, but it seems to me that the strategy chosen by Deutsch’s rational observer is not unique; it just happens to be the one that correlates with the desired end point – the Born probability rule when the amplitude squared values are used as branch weights. Additionally, if you accept Deutsch’s reasoning, methodology, and assumptions, I should think his results could equally well be used to demonstrate why the Born probability rule works in the CI, as well as in the MWI. Attempts to Make it Consistent Many attempts to formulate a consistent and defensible version of Everett’s initial ideas have been discussed in the literature since Deutsch’s work. Adrian Kent addresses many of them in One world versus many: the inadequacy of Everettian accounts of evolution, probability, and scientific confirmation. Kent points out some of the inconsistencies and contradictions that these attempts fall victim to, either when compared to each other or within themselves. Given that every potential outcome is actually realized in a branch, regardless of likelihood, a rather tortured path has to be taken to explain the meaning of probability and uncertainty when applying decision theory. Additionally, Kent is concerned by the lack of uniqueness in the assumptions and conclusions that can be made about the so-called rational decision-maker. To apply decision theory or game theory reasoning to quantum mechanical events seems rather surreal to me. But regardless of whether you take the approach seriously, there is little gained from it, unless you want to get extremely metaphysical about the role of consciousness. Which I do not. So Where Does This Leave Us With Respect to the Many Worlds Interpretation? The MWI does not deliver on its promises. 
In particular, it does not solve the measurement problem unless you ignore the extra baggage that comes with the theory, such as the no-communication hypothesis, the song and dance concerning rational decision theory, and the surreal role of the observer. Nonetheless, the idea of countless multiple universes has mesmerized popular culture and theoretical physics. The image of an infinite number of copies of ourselves, with slight variations in each universe, is quite tempting. Some people claim that multiverses must be real because we are getting hints of one from multiple theories, including superstring theory, inflationary cosmology, and anthropic reasoning. But each of these predictions is perched upon a mountain of assumptions. And each posits a different cause for the multiverse. It is not at all clear to me that satisfying the multiverse hypothesis of one model would necessarily satisfy that of the others. The idea that the MWI is the only viable alternative to the CI is a myth. Other viable alternatives already exist, and it is premature to assume no one will ever discover another. These alternatives, such as de Broglie-Bohm mechanics and the Transactional Interpretation, need more work. But at the very least, they serve as proof of concept that we should not be so eager to believe any wild idea offered to us without evidence. So, if you come across someone endorsing the Many Worlds Interpretation of quantum mechanics, remember the story of the Emperor’s New Clothes. Let them know that you are aware the emperor is naked. The MWI does not provide a unique and independent derivation of probability, it does not remove the special treatment of the observer, and it replaces the collapse hypothesis with runaway multiverse branching and the no-communication hypothesis. My upcoming posts will include: • Discussion of hydrodynamic quantum analogues.
These experiments demonstrate how phenomena and probability distributions normally associated only with the quantum world can be produced by macroscopic systems and classical dynamics. • So-called weak measurements that are allowing physicists to directly measure the quantum wave function itself, and monitor its evolution. • Introduction to de Broglie-Bohm mechanics. Incidentally, wave function collapse does not occur in de Broglie-Bohm mechanics, and it does not require an infinite number of universes (just empty

Comments

Hi David, I have not read the papers or the book you refer to. But the Price paper (http://arxiv.org/pdf/1307.7744.pdf) is on my stack of things to get to. Lately, I have been pondering the significance of pre-selection and post-selection in weak measurements; and also how the Transactional Interpretation and the Two-State Vector Formalism specify forward and backward evolving state vectors. It is as if nature always specifies the initial and final states before proceeding with a quantum “event” – we get a whiff of what is going on in weak measurements, and it is what TSVF and TI stumbled into. For indeterminism – this has always bothered me as well. But it is what protects non-locality from leading to a violation of causality (i.e., why non-locality does not allow faster-than-light transfer of information). So, it seems to be indispensable, at least for now. For Gleason – what I recall reading is a general lack of enthusiasm for it, because it does not explain why nature would use that particular probability measure (why the state vector, and how does nature convert that to the probability for outcomes?)

From: David S
Thanks for your reply. Firstly: I don't mind the time symmetric philosophy, but I personally find it extremely hard to believe that there is fundamental indeterminism in the fabric of nature, as is required in the transactional interpretation. For some reason it just seems a bit "cheap" to me.
I know Huw Price has written a recent paper on time symmetry in QM foundations, but I haven't yet read it. Perhaps you have? Thanks for the links; they refreshed my memory a bit. I read or skimmed them a couple of years ago, but I am still very much undecided on the issue. I have been in contact with Zurek in order to get more info, but he is extremely vague. In some comments he seems to be a MWI'er; in others he seems to adopt some sort of "information is fundamental" stance. David Wallace argues that the problem of the preferred basis is solved through what he calls "emergent functionalism". I wonder if you have read his Multiverse book (2012)? Additionally, there was a paper last year that resurrected the problem: http://arxiv.org/abs/1210.8447, but nothing seems to have come of it, so it seems most people consider the preferred basis issue to be solved by Zurek, Zeh and Wallace. As for the Born rule issue, yes, it's a huge controversy, but quite a few people seem to have accepted it despite all the papers arguing against it. And besides, isn't Gleason's Theorem enough anyway? At least that seems to be the argument I hear most. All the best,

Hi David, Thanks for reading the article and your feedback. My list of articles that I want to write includes expanding upon where I left off in the Decoherence article and in the Transactional Interpretation (TI) article. I don't know when I will get around to these, but when I do, it may address some of your concerns. People (Zurek in particular) have tried to use the decoherence program to show how a preferred basis could naturally arise. I am not convinced that their approach is correct, but I am also not certain that it is wrong. I tend to think that a time-symmetric interpretation of quantum mechanics is necessary (such as TI or the Two-State Vector Formalism, or something along those lines). In a time-symmetric interpretation, the initial and final conditions (i.e. wave functions) are specified.
Perhaps a preferred basis is a natural consequence of how the initial and final states are (physically) selected, and nothing more will need to be said about it. You might enjoy chewing on these papers about decoherence: Concerning the MWI: it's true that many smart people claim to like it. However, the reasons I often hear them cite are either (1) that it is the only alternative to the Copenhagen Interpretation, or (2) that it provides a derivation or proof of the Born rule. Both of those claims are false. There are other viable interpretations, such as de Broglie-Bohm pilot wave theory or the Transactional Interpretation. And, to attempt to prove the Born rule with MWI, people have to make circular assumptions, or poorly defined assumptions, or assumptions that are quite a stretch (like applying rational decision theory to quantum physics). So, I think some of these people heard or casually read some erroneous claim about MWI and believed it without question. In the Transactional Interpretation, the Born rule arises naturally from the theoretical formalism: the link between amplitude squared and likelihood.

-----Original Message-----
Name: David
Comment: Hello, Just came across this site and read the Many Worlds article. I was wondering if you could do another one where you pick apart the Born Rule issue and also the preferred basis issue? Because the feeling I was left with after reading the article was basically that you reject MWI because it's mind-boggling and that you are unsure of the Born Rule issue. I am sure that's not the case; at least I hope that's not the case. As a fellow realist who also has a gut feeling that there is a deeper theory, I still haven't been able to reject MWI completely, because a lot of smart people adhere to it. I know the Born Rule issue is a bit controversial, but is that really all there is against it? Has the preferred basis issue been solved thoroughly in your opinion?
Tagged: interpretations of quantum mechanics, many worlds interpretation, quantum mechanics.
{"url":"http://www.thefunisreal.com/2013/10/many-worlds-interpretation-quantum-mechanics-emperors-new-clothes/","timestamp":"2014-04-19T19:34:02Z","content_type":null,"content_length":"82982","record_id":"<urn:uuid:ebc7e172-5984-4427-b408-912a745e8cb3>","cc-path":"CC-MAIN-2014-15/segments/1397609537376.43/warc/CC-MAIN-20140416005217-00462-ip-10-147-4-33.ec2.internal.warc.gz"}
the old ordering function for ACL2 ordinals

Major Section: PROGRAMMING

See o< for the current ordering function for ACL2 ordinals. The functions e0-ordinalp and e0-ord-< were replaced in ACL2 Version_2.8 by o-p and o<, respectively. However, books created before that version used the earlier functions for termination proofs; the old functions might be of use in those cases. To use the old functions in termination proofs, include the book books/ordinals/e0-ordinal and execute the event (set-well-founded-relation e0-ord-<) (see set-well-founded-relation). For a more thorough discussion of these functions, see the documentation at the end of books/ordinals/e0-ordinal.lisp.
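Concretely, the two steps described above look like this at the ACL2 prompt (a sketch; `:dir :system` is the usual way to refer to the distributed books directory, assuming the book is installed there):

```lisp
; Load the legacy ordinal definitions, then make e0-ord-< the
; well-founded relation used for subsequent termination proofs.
(include-book "ordinals/e0-ordinal" :dir :system)
(set-well-founded-relation e0-ord-<)
```

After these events, admitting recursive definitions from pre-Version_2.8 books proceeds with the old e0-ordinalp/e0-ord-< termination obligations rather than o-p/o<.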
{"url":"http://planet.racket-lang.org/package-source/cce/dracula.plt/2/7/docs/E0-ORD-_lt_.html","timestamp":"2014-04-16T16:00:39Z","content_type":null,"content_length":"1627","record_id":"<urn:uuid:4a75c4a7-585b-4617-b7eb-a1439d08fbb5>","cc-path":"CC-MAIN-2014-15/segments/1397609524259.30/warc/CC-MAIN-20140416005204-00505-ip-10-147-4-33.ec2.internal.warc.gz"}
Sensors 2012, 12(10), 13212–13224; doi:10.3390/s121013212 (ISSN 1424-8220; MDPI)

Article

An Adaptive Altitude Information Fusion Method for Autonomous Landing Processes of Small Unmanned Aerial Rotorcraft

Xusheng Lei * and Jingjing Li

Science and Technology on Inertial Laboratory, Beijing 100191, China; E-Mail: 001rose001@sina.com

* Author to whom correspondence should be addressed; E-Mail: yushangtianxia@163.com; Tel.: +86-010-8233-8820.

Received: 14 August 2012; in revised form: 21 September 2012 / Accepted: 21 September 2012 / Published: 27 September 2012

© 2012 by the authors; licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution license (http://creativecommons.org/licenses/by/3.0/).

Abstract: This paper presents an adaptive information fusion method to improve the accuracy and reliability of the altitude measurement information for small unmanned aerial rotorcraft during the landing process. Focusing on the low measurement performance of sensors mounted on small unmanned aerial rotorcraft, a wavelet filter is applied as a pre-filter to attenuate the high frequency noises in the sensor output. Furthermore, to improve altitude information, an adaptive extended Kalman filter based on a maximum a posteriori criterion is proposed to estimate the measurement noise covariance matrix in real time. Finally, the effectiveness of the proposed method is proved by static tests, hovering flight and autonomous landing flight tests.
Keywords: small unmanned aerial rotorcraft; wavelet filter; altitude information fusion; adaptive extended Kalman filter

With the ability to land vertically, small unmanned aerial rotorcraft (SUAR) have an irreplaceable role in civil applications [1]. Thus, they have been widely used in many areas, including road traffic monitoring, city building surveillance and power line inspection [2,3]. A SUAR is a complex multi-input and multi-output (MIMO) system. Compared with the hovering and straight flight processes, the landing process is subject to ground disturbance [4]. High performance altitude information is the basic requirement for a SUAR to realize stable landing control [5]. Due to the constraints of weight and size, small, low-performance sensors are often used on SUARs, including micro-electro-mechanical system (MEMS) accelerometers and gyroscopes, barometers, the global positioning system (GPS) and ultrasonic sensors. By integrating the Euler equations or quaternions, a SUAR can obtain the corresponding aircraft attitude angles and position information; however, inertial sensors, especially gyroscopes, have fixed bias, drift bias, asymmetric scale factor errors and temperature-varying biases, causing the integration results to drift from the true attitude [6]. GPS can provide absolute position and velocity information [7], but GPS information is easily affected by sources of interference [8]. Furthermore, GPS provides position and velocity information at too low a data rate for a SUAR system [9]. Based on the relationship between air pressure and altitude, barometers can provide altitude information [10], but they are easily affected by wind disturbances, air fluctuations, and temperature [11]. Ultrasonic sensors are also often used in SUAR systems. Although they can provide high performance altitude measurements, they have a limited measurement region.
When the altitude of the SUAR surpasses the upper bound of the ultrasonic range, the measurement results contain errors; therefore, all of the current sensors have limitations that prevent a SUAR from realizing stable landing control on their own. Using filtering methods, a system can obtain high performance information by fusing different sensors. The most widely used filtering method is the extended Kalman filter (EKF) [12]. With its predict-and-update structure, Beard has used the EKF to realize attitude acquisition for an unmanned aerial vehicle [13]. Nevertheless, poor performance or even divergence arising from the linearization implicit in the EKF has led to the development of other filters [14]. The unscented Kalman filter (UKF) is also used in UAV systems [15]. Based on second or higher-order approximations of nonlinear functions, the UKF can estimate the mean and covariance of state vectors [16]. With the UKF, Seung realized target relative position and velocity determination for follower UAV systems [17]. However, the UKF is sensitive to the statistical distribution of the stochastic processes [18]. Based on the concept of sequential sampling and Bayesian theory, particle filtering (PF) is also used to deal with nonlinear and non-Gaussian noise in SUAR systems. Kamrani used PF for efficient path planning of a UAV [19], but its computational demands are high. Wavelet analysis has also been widely used for its convenience in both the time and frequency domains, and it can effectively eliminate high frequency noise. Tsiotras used a wavelet transform to construct an approximation of the environment at different levels for small UAVs [20]. Inspired by the discussion above, an adaptive extended Kalman filter (AEKF) method based on a wavelet filter is proposed to obtain high performance altitude information for a SUAR during the autonomous landing process. The wavelet decomposition and reconstruction method is used to restrain the high frequency noise in the barometer, ultrasonic and GPS sensor information.
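The idea behind wavelet decomposition-and-reconstruction denoising can be sketched in a few lines of pure Python. The paper does not specify the wavelet basis; a one-level Haar transform with hard thresholding is used below purely for illustration, on a synthetic altitude record rather than real sensor data:

```python
import math
import random

def haar_decompose(x):
    """One-level Haar transform: approximation and detail coefficients."""
    s = math.sqrt(2.0)
    approx = [(x[2 * i] + x[2 * i + 1]) / s for i in range(len(x) // 2)]
    detail = [(x[2 * i] - x[2 * i + 1]) / s for i in range(len(x) // 2)]
    return approx, detail

def haar_reconstruct(approx, detail):
    """Inverse of haar_decompose (perfect reconstruction)."""
    s = math.sqrt(2.0)
    out = []
    for a, d in zip(approx, detail):
        out.append((a + d) / s)
        out.append((a - d) / s)
    return out

def wavelet_denoise(x, threshold):
    """Zero out small detail coefficients (hard thresholding), then reconstruct."""
    approx, detail = haar_decompose(x)
    detail = [d if abs(d) > threshold else 0.0 for d in detail]
    return haar_reconstruct(approx, detail)

# Synthetic altitude record: slow climb plus high frequency sensor noise.
random.seed(0)
clean = [10.0 + 0.01 * t for t in range(64)]
noisy = [h + random.gauss(0.0, 0.05) for h in clean]
smoothed = wavelet_denoise(noisy, threshold=0.2)

err_noisy = sum((a - b) ** 2 for a, b in zip(noisy, clean))
err_smooth = sum((a - b) ** 2 for a, b in zip(smoothed, clean))
print(err_noisy, err_smooth)
```

Since the high frequency noise lives almost entirely in the detail coefficients while the slow altitude trend lives in the approximation coefficients, suppressing small details reduces the squared error against the clean signal.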
Since the measurement noise is greatly changed after wavelet filtering, an AEKF based on a maximum a posteriori criterion is proposed to estimate the measurement noise matrix in real time and obtain high-performance altitude information. The paper is organized as follows: the dynamic model of the SUAR system is described in Section 2. The wavelet decomposition and reconstruction method is presented in Section 3. In Section 4, an AEKF based on a maximum a posteriori criterion is proposed to improve the altitude information. Simulation and test results confirm the effectiveness of the proposed method in Section 5. Finally, conclusions are drawn in Section 6.

For a SUAR, altitude is mainly controlled by the main rotor speed and the longitudinal cyclic input. Therefore, a simple altitude dynamic model for the SUAR can be defined as:

$$
\begin{cases}
\dot{x}_1 = f_1 = x_2 \\
\dot{x}_2 = f_2 = a_0 + a_1 x_2 + a_2 x_2^2 + (a_3 + a_4 x_4 - a_5 + a_6 x_4)\, x_3^2 \\
\dot{x}_3 = f_3 + u_1 = a_7 + a_8 x_3 + (a_9 \sin x_4 + a_{10})\, x_3^2 + u_1 \\
\dot{x}_4 = f_4 = x_5 \\
\dot{x}_5 = f_5 + u_2 = a_{11} + a_{12} x_4 + a_{13} x_3^2 \sin x_4 + a_{14} x_5 + u_2
\end{cases}
$$

where the state vector is $x = (x_1, \dots, x_5)^T = (h, \dot{h}, w, \theta, \dot{\theta})^T$. Here $h$ is the altitude measured by the barometer, GPS and ultrasonic sensors, $w$ is the speed of the main rotor, and $\theta$ is the longitudinal angle of the rotor blade. The input vector is $u = (u_1, u_2)^T$, where $u_1$ and $u_2$ are the throttle and collective inputs respectively, which play an important role in the longitudinal cyclic input, lateral cyclic input and blade speed. The $a_j$ $(j = 0, 1, \dots, 14)$ are unknown parameters identified by an adaptive genetic method [21]. Therefore, the altitude state model of the SUAR can be written compactly as:

$$\dot{h} = f(h, w, \theta, \dot{\theta})$$

The output accuracy of a barometer is mainly affected by high-frequency noise and a constant error related to air pressure and temperature. The high-frequency noise can be largely restrained by a wavelet filter.
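To make the model concrete, the altitude dynamics above can be integrated numerically. The sketch below uses placeholder values for the parameters (in the paper the real values come from adaptive genetic identification, and are not published here), so it illustrates only the structure of the model, not the actual Raptor 90 dynamics.

```python
import numpy as np

# Placeholder parameters: the real a_0..a_14 are identified from flight data
# by an adaptive genetic algorithm [21]; these values are illustrative only.
a = np.full(15, 0.01)

def f(x, u):
    """Altitude dynamics: x = (h, hdot, w, theta, thetadot), u = (u1, u2)."""
    h, hdot, w, th, thd = x
    u1, u2 = u
    return np.array([
        hdot,
        a[0] + a[1]*hdot + a[2]*hdot**2 + (a[3] + a[4]*th - a[5] + a[6]*th)*w**2,
        a[7] + a[8]*w + (a[9]*np.sin(th) + a[10])*w**2 + u1,
        thd,
        a[11] + a[12]*th + a[13]*w**2*np.sin(th) + a[14]*thd + u2,
    ])

def simulate(x0, u, dt=0.01, steps=100):
    """Forward-Euler integration of x' = f(x, u) with a fixed input."""
    x = np.asarray(x0, dtype=float)
    traj = [x.copy()]
    for _ in range(steps):
        x = x + dt * f(x, u)
        traj.append(x.copy())
    return np.array(traj)
```

A simple Euler step suffices for illustration; a higher-order integrator such as RK4 would be preferable for larger step sizes or faster rotor dynamics.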
Thus, the barometer output $h_b$ mainly includes a constant error $\varepsilon_b$ and measurement noise $v_1$:

$$h_b = h + \varepsilon_b + v_1$$

where $h$ is the altitude of the SUAR system. Using the ranging-interchange location method, DGPS can provide position information for SUAR systems with sub-meter accuracy. The output of DGPS can be defined as:

$$h_g = h + v_2$$

where $h_g$ is the DGPS output and $v_2$ is its measurement noise. The ultrasonic sensor provides high-performance altitude information from 0.15 m to 6.05 m, with an error of less than 1 millimeter. When the altitude exceeds this upper limit, the ultrasonic output fluctuates greatly. Therefore, the ultrasonic output can be defined as:

$$h_u = \begin{cases} w_{lu}, & h \ge 6.05\ \text{m} \\ h + w_u, & 0 \le h < 6.05\ \text{m} \end{cases}$$

where $h_u$ is the ultrasonic output, $w_u$ is a bounded random error and $w_{lu}$ is an unbounded random error. When a SUAR performs a task at low altitude, ground disturbance increases the barometer error, while the ultrasonic sensor provides high-precision altitude information at low altitude; therefore the measurement vector is constructed from different sensors depending on altitude. If the integrated navigation altitude is larger than 6 m, the SUAR is beyond the range of the ultrasonic sensor, so the barometer and DGPS outputs are fused; the constant barometer error can then be corrected by the DGPS. The measurement equation is:

$$\begin{bmatrix} h_b \\ h_g \end{bmatrix} = \begin{bmatrix} 1 & 1 \\ 1 & 0 \end{bmatrix} \begin{bmatrix} h \\ \varepsilon_b \end{bmatrix} + \begin{bmatrix} v_1 \\ v_2 \end{bmatrix}$$

If the integrated navigation altitude is less than 6 m, the barometer is easily affected by ground disturbance, so the outputs of the DGPS and ultrasonic sensor are used to construct the measurement vector.
The measurement equation can be defined as follows:

$$\begin{bmatrix} h_g \\ h_u \end{bmatrix} = \begin{bmatrix} 1 \\ 1 \end{bmatrix} h + \begin{bmatrix} v_2 \\ w_u \end{bmatrix}$$

Therefore, the measurement equation can be expressed in general form as:

$$Z_k = H_k X_k + V_k$$

where $X_k$ is the state vector, $Z_k$ is the measurement vector, $H_k$ is the measurement matrix and $V_k$ is the measurement noise vector. With the wavelet filter, the high-frequency noise can be largely eliminated. Then, using the AEKF based on the maximum a posteriori criterion, the altitude $h$ and the constant error $\varepsilon_b$ can be estimated without bias. The whole procedure is shown in Figure 1.

To obtain high-precision altitude information, it is necessary to filter the high-frequency noise in the outputs of the barometer, DGPS and ultrasonic sensor. Wavelet analysis is a time-frequency method with good representation of local signal characteristics; therefore, a wavelet filter is used here to reduce the high-frequency noise in the sensor information. The lifting-based wavelet transform implementation has shown high potential in reducing the number of computations, so it is used to reduce the computational burden in real tasks. It includes three steps:

Split: split the original signal $s^j = \{s_{j,k} \mid 0 \le k < 2^j\}$ into even and odd subsequences:

$$\mathrm{Split}(s^j) = (E^{j-1}, O^{j-1}), \quad E^{j-1} = \{E_{j-1,k} = s_{j,2k}\}, \quad O^{j-1} = \{O_{j-1,k} = s_{j,2k+1}\}$$

Predict: form the detail coefficients by subtracting a prediction of the odd samples from the even ones:

$$O^{j-1} \mathrel{-}= P(E^{j-1})$$

Update: adjust the coarse representation so that it preserves the average of the original signal:

$$E^{j-1} \mathrel{+}= U(O^{j-1})$$

The basic principle of the lifting scheme is to factorize the polyphase matrix of a wavelet filter into a sequence of alternating upper and lower triangular matrices and a diagonal matrix with constants. The factorization is obtained by an extension of the Euclidean algorithm, and the resulting formulation can be implemented by means of banded matrix multiplications.
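The three lifting steps can be made concrete with the simplest possible choice of predictor and updater, which yields the Haar transform. This is only an illustrative sketch (the paper uses the "db4" wavelet, whose lifting factorization has more steps), but the split/predict/update structure is identical.

```python
import numpy as np

# One lifting level, Haar flavor:
#   split  : separate even and odd samples
#   predict: detail d = odd - P(even), with the trivial predictor P(even) = even
#   update : smooth s = even + U(d),   with U(d) = d / 2  (preserves the mean)
def haar_lift(x):
    even, odd = x[0::2], x[1::2]          # Split
    d = odd - even                        # Predict: O -= P(E)
    s = even + d / 2                      # Update:  E += U(O)
    return s, d

def haar_unlift(s, d):
    even = s - d / 2                      # invert Update
    odd = d + even                        # invert Predict
    x = np.empty(2 * len(s))              # Merge (inverse of Split)
    x[0::2], x[1::2] = even, odd
    return x
```

Because each lifting step is trivially invertible (just flip the sign of the update), perfect reconstruction is automatic, and the smooth channel `s` is the pairwise average of the input.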
Suppose that the z-transform of a wavelet filter $h = \{h_k, k \in \mathbb{Z}\}$ is denoted $h(z)$. Let $\hat{h}(z)$ and $\hat{g}(z)$ be the low-pass and high-pass analysis filters, and $h(z)$, $g(z)$ the low-pass and high-pass synthesis filters, where $\hat{h}(z)$, $\hat{g}(z)$, $h(z)$ and $g(z)$ are biorthogonal. Each filter can be divided into even and odd parts:

$$\begin{cases}
\hat{h}(z) = \hat{h}_e(z^2) + z^{-1} \hat{h}_o(z^2) \\
\hat{g}(z) = \hat{g}_e(z^2) + z^{-1} \hat{g}_o(z^2) \\
h(z) = h_e(z^2) + z^{-1} h_o(z^2) \\
g(z) = g_e(z^2) + z^{-1} g_o(z^2)
\end{cases}$$

The polyphase matrices are then defined as:

$$\hat{P}(z) = \begin{bmatrix} \hat{h}_e(z) & \hat{h}_o(z) \\ \hat{g}_e(z) & \hat{g}_o(z) \end{bmatrix}, \qquad
P(z) = \begin{bmatrix} h_e(z) & h_o(z) \\ g_e(z) & g_o(z) \end{bmatrix}$$

If $(\hat{h}, \hat{g})$ is a complementary filter pair, then $P(z)$ can be factored as:

$$P(z) = \prod_{i=1}^{m} \begin{bmatrix} 1 & s_i(z) \\ 0 & 1 \end{bmatrix} \begin{bmatrix} 1 & 0 \\ t_i(z) & 1 \end{bmatrix} \begin{bmatrix} K & 0 \\ 0 & K^{-1} \end{bmatrix}$$

where $K$ is a constant. Therefore, the low-pass samples are multiplied by the time-domain equivalent of $s_i(z)$ and added to the high-pass samples; then the updated high-pass samples are multiplied by the time-domain equivalent of $t_i(z)$ and added to the low-pass samples. If a diagonal matrix is present in the factorization, the low-pass coefficients are multiplied by $K$ and the high-pass coefficients by $K^{-1}$. The polyphase-based wavelet transform in the lifting scheme is shown in Figure 2. In this paper, the wavelet "db4" is used to construct the wavelet filter; its coefficients are shown in Table 1. Comparisons of the original and wavelet-filtered barometer and DGPS data are shown in Figure 3. Clearly, the wavelet method filters out the high-frequency noise effectively.
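As a check on the coefficients of Table 1: the four low-pass taps and the four high-pass taps form an orthonormal perfect-reconstruction pair (the high-pass taps are an alternating-sign reversal of the low-pass taps). One analysis/synthesis level can be verified directly. This is a brute-force sketch with periodic extension for illustration only, not the paper's banded lifting implementation.

```python
import numpy as np

# Filter taps from Table 1 ("db4" in the paper's naming).
lo = np.array([0.48296291314453, 0.83651630373780, 0.22414386804201, -0.12940952255126])
hi = np.array([0.12940952255126, 0.22414386804201, -0.83651630373780, 0.48296291314453])

def analyze(x):
    """One decomposition level: approx[n] = sum_k lo[k] x[2n+k], periodic."""
    N = len(x)
    approx = np.zeros(N // 2)
    detail = np.zeros(N // 2)
    for n in range(N // 2):
        for k in range(4):
            approx[n] += lo[k] * x[(2 * n + k) % N]
            detail[n] += hi[k] * x[(2 * n + k) % N]
    return approx, detail

def synthesize(approx, detail):
    """Adjoint (= inverse, since the transform is orthonormal)."""
    N = 2 * len(approx)
    x = np.zeros(N)
    for n in range(len(approx)):
        for k in range(4):
            x[(2 * n + k) % N] += approx[n] * lo[k] + detail[n] * hi[k]
    return x
```

Zeroing or thresholding `detail` before `synthesize` gives the simplest form of the denoising used in the paper.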
Since the measurement noise structure changes greatly after wavelet filtering, an empirical value or the statistics of a partial noise sample cannot provide a good description of the measurement noise covariance; therefore, an AEKF is proposed to estimate the measurement noise covariance in real time and improve the altitude information. Since the nonlinear dynamic equation of the SUAR is continuous while the measurements form a discrete series, a continuous-discrete EKF is used to fuse the altitude sensor information. In the EKF, the state equation and measurement equation can be expressed as:

$$\begin{cases} \dot{X}(t) = f(X(t), t) + B(t) u(t) + W(t) \\ Z_k = H_k X_k + V_k \end{cases}$$

where $k$ is the time-step index, $X(t)$ and $Z_k$ are the state and measurement vectors respectively, $f(X(t), t)$ is a nonlinear ordinary differential equation, $H_k$ is the measurement matrix, $B(t)$ is the input matrix, $u(t)$ is the input vector, and $W(t)$, $V_k$ are the system and measurement noise vectors respectively. The system noise and measurement noise are assumed uncorrelated, and the system noise is treated as Gaussian white noise.

In the EKF, the measurement noise covariance matrix $R$ plays an important role in obtaining a converged filter result. If $R$ is too small, unreliable results are obtained, while large diagonal elements of $R$ can lead to filter divergence. In the traditional EKF, the measurement noise is treated as Gaussian white noise; however, the measurement noise structure changes greatly after wavelet filtering. Using traditional empirical values or partial statistics of the sampled data as the measurement noise matrix, the filter converges slowly and its performance deteriorates; therefore, a sub-optimal, unbiased maximum a posteriori method is used to estimate $R$ in real time. As shown in the recursion below, the current $R$ is updated from the innovation $\varepsilon_k$ and the previous value of $R$.
The filter consists of the following stages. The prediction stage:

$$\hat{X}_{k/k-1} = \hat{X}_{k-1} + \left\{ f[\hat{X}_{k-1}, t_{k-1}] + B(t_{k-1})\, u(t_{k-1}) \right\} T$$
$$P_{k/k-1} = \Phi_{k,k-1} P_{k-1} \Phi_{k,k-1}^T + Q_{k-1}$$
$$\hat{Z}_{k/k-1} = H_k \hat{X}_{k/k-1}$$

The update stage:

$$\varepsilon_k = Z_k - \hat{Z}_{k/k-1}$$
$$\hat{X}_k = \hat{X}_{k/k-1} + K_k \varepsilon_k$$
$$R_k = \left(1 - \frac{1}{k}\right) R_{k-1} + \frac{1}{k} \left( \varepsilon_k \varepsilon_k^T - H_k P_{k/k-1} H_k^T \right)$$
$$K_k = P_{k/k-1} H_k^T \left[ H_k P_{k/k-1} H_k^T + R_k \right]^{-1}$$
$$P_k = (I - K_k H_k) P_{k/k-1} (I - K_k H_k)^T + K_k R_k K_k^T$$

where $\hat{X}_{k/k-1}$ and $P_{k/k-1}$ are the predicted state vector and predicted state covariance matrix, $\hat{Z}_{k/k-1}$ is the predicted measurement vector, and $\Phi_{k,k-1}$ is the transition matrix after discretization. The innovation $\varepsilon_k$ is the difference between the real observation and its predicted value, $T$ is the sampling time, $Q_k$ is the system noise covariance matrix, $K_k$ is the gain matrix, $P_k$ is the estimated state covariance matrix, and $R_k$ is the measurement noise covariance matrix estimated by the maximum a posteriori adaptive method. Using $\varepsilon_k$, $H_k P_{k/k-1} H_k^T$ and the initial empirical value $R_0$, the measurement noise covariance matrix can be estimated in real time to improve the filtering performance.

Experiments were conducted on a radio-controlled Raptor 90 helicopter, shown in Figure 4. The SUAR is 1.3 m in length with a 1.46 m rotor span. Its total weight is 5 kg, including two liters of fuel, a lightweight DGPS receiver and a radio telemetry system. The SUAR is powered by a piston engine running on a mixture of methanol and oil. Five servos control the tail, the longitudinal cyclic input, the lateral cyclic input, the collective and the throttle. The vertical direction is stabilized using the collective and pitch cyclic; the lateral direction is controlled using the roll cyclic and collective; and the heading is controlled by the tail.
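The AEKF recursion above (prediction, innovation, adaptive estimate of R, update) can be sketched on a toy one-dimensional linear problem. The eigenvalue floor on R is a practical safeguard, not part of the paper's equations: the adaptive update can transiently produce an indefinite matrix. All parameter values here are invented for illustration.

```python
import numpy as np

def aekf_step(x, P, z, k, R, F, Q, H):
    """One adaptive Kalman step (linear toy version of the paper's recursion)."""
    # Prediction
    x_pred = F @ x
    P_pred = F @ P @ F.T + Q
    # Innovation
    eps = z - H @ x_pred
    # Maximum a posteriori adaptive estimate of the measurement noise R_k
    R = (1 - 1/k) * R + (1/k) * (np.outer(eps, eps) - H @ P_pred @ H.T)
    # Safeguard: symmetrize and floor eigenvalues (not in the paper's equations)
    R = 0.5 * (R + R.T)
    w, V = np.linalg.eigh(R)
    R = V @ np.diag(np.maximum(w, 1e-6)) @ V.T
    # Update (Joseph-form covariance, as in the paper)
    K = P_pred @ H.T @ np.linalg.inv(H @ P_pred @ H.T + R)
    x = x_pred + K @ eps
    I = np.eye(len(x))
    P = (I - K @ H) @ P_pred @ (I - K @ H).T + K @ R @ K.T
    return x, P, R
```

With a scalar altitude state (F = H = [1]) and measurements z = h + noise, the estimate of R drifts toward the true measurement variance while the state estimate tracks h, even if the initial R is badly wrong.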
For a SUAR there are weight and size constraints on the onboard control components, so a lightweight micro guidance, navigation and control (MGNC) system was developed in-house to realize stable control. The MGNC weighs only 207 g, with a size of 120 mm × 61 mm × 48 mm. It consists of a horizontal main board housing three angular rate sensors, two 2-axis accelerometers and a barometer. The barometer, DGPS and ultrasonic sensor provide altitude information for the SUAR system. The MPXA6115 barometer, produced by Freescale Semiconductor, has a range of 15 kPa to 115 kPa; the DGPS module employs the NovAtel RTK, whose position accuracy is about 0.02 m; and the range of the Mini-S electrostatic ultrasonic transducer is 0.15 m to 6.05 m.

To test the effectiveness of the proposed information fusion method, a static distance test was performed on the stairs of a six-floor building, chosen because it allows high-precision reference measurements. Three marking points were selected on the stairs; the distances between the points and the ground were measured with flexible rulers as 5.12 m (first point), 9.32 m (second point) and 13.52 m (third point). The SUAR was placed at each point to measure the distance to the ground, with a sampling time of 60 s per point. Comparisons of the proposed method, the barometer, the DGPS and the ultrasonic sensor against the real altitude are shown in Figure 5, and the measurement results for each sensor are listed in Table 2. The proposed AEKF method clearly has the best performance. Since the building obstructs the GPS signal, the maximum DGPS error reaches 1.49 m. Without airflow disturbance, the barometer provides good measurements: after initial alignment its standard deviation is 26 percent of that of the DGPS.
The ultrasonic sensor provides high-performance altitude information below 6 m: the mean error at the first point is below 0.11 m. When the altitude surpasses 6 m, the ultrasonic performance degrades greatly.

To test the dynamic performance of the proposed method, a hovering flight test was performed on the SUAR system. Under a 3.4 m/s wind disturbance, the SUAR hovered at an altitude of 10 m, with an LQR controller adjusting altitude and position in real time [22]. Since the planned altitude exceeds the upper limit of the ultrasonic sensor, the DGPS and barometer provide the altitude information. The altitudes generated by the AEKF method, the barometer and the DGPS are shown in Figure 6. The mean error of the adaptive EKF is only 0.214 m, and the standard deviation is 0.169 m; with the proposed AEKF, the SUAR realizes stable hovering control. Furthermore, the altitude measured by the barometer fluctuates greatly: under the wind disturbance, its fluctuation exceeds 1 m. Without obstruction, the DGPS provides high-performance measurements over short periods, but as the number of visible satellites fluctuates, its output becomes less reliable.

To test the effectiveness of the proposed method, a series of autonomous landing tests was performed on the SUAR system with the adaptive radial basis function neural network and pilot model. When the SUAR received an autonomous landing command, it changed its working state and flew to the planned hovering point (0, 0, 10). After satisfying the criteria for position error, speed error and heading error, the SUAR hovered at the planned point; by repeatedly lowering the planned hovering altitude, it descended through a sequence of hovering states and finally landed on the ground. Ten landing tests were conducted from different altitudes with wind velocities below 3 m/s. The landing results are shown in Figure 7.
With the proposed adaptive altitude information fusion method, the SUAR realizes stable autonomous landings, and the average Euclidean distance from the landing target is about 0.67 m. Compared with camera-based navigation systems [23,24], the SUAR achieves similar landing performance. Figure 8 compares the landing performance of the AEKF with a KF [13] fusing SINS and DGPS. Using the AEKF, the SUAR realized a stable autonomous landing with 0.75 m and 0.45 m errors in the East and North directions from the planned landing point; compared with the KF, the AEKF performs much better in the autonomous landing process. The altitude, attitude and velocity during the autonomous landing using the AEKF are shown in Figure 8(b-d) respectively.

In this paper, an adaptive information fusion method based on wavelet decomposition and reconstruction is proposed to improve the accuracy and reliability of altitude measurements during the landing process of a SUAR. With the proposed method, the high-frequency noise in the sensors is largely eliminated, and high-performance altitude information is then fused to support the SUAR in the autonomous landing process. The effectiveness of the proposed method has been demonstrated by static tests, hovering tests and a series of autonomous landing tests.

This research is supported by the National Natural Science Foundation of China (Grants No. 60905056, 60904093, 61121003 and 61273033).
[1] Alexander, J.; Sjir, U.; Nick, E. Safety in high-risk helicopter operations: The role of additional crew in accident prevention. 2009, 47, 717–721. doi:10.1016/j.ssci.2008.09.009
[2] George, V.; Liang, T.; Graham, D.; Luis, G. From mission planning to flight control of unmanned aerial vehicles: Strategies and implementation tools. 2005, 29, 101–115. doi:10.1016/j.arcontrol.2004.11.002
[3] Najib, M.; Tarek, H. A UAR for bridge inspection: Visual servoing control law with orientation limits. 2007, 17, 3–10. doi:10.1016/j.autcon.2006.12.010
[4] Cai, G.W.; Chen, B.M.; Lee, T.H.; Dong, M.B. Design and implementation of a hardware-in-the-loop simulation system for small-scale UAV helicopters. 2009, 19, 1057–1066. doi:10.1016/j.mechatronics.2009.06.001
[5] Fabiani, P.; Fuertes, V.; Piquereau, A.; Mampey, R.; Teichteil, F. Autonomous flight and navigation of VTOL UAVs: From autonomy demonstrations to out-of-sight flights. 2007, 11, 183–193. doi:10.1016/j.ast.2006.05.005
[6] Tobias, P.; Thomas, R.; Jan, T.G. Modeling of UAV formation flight using 3D potential field. 2008, 16, 1453–1462. doi:10.1016/j.simpat.2008.08.005
[7] Lorenzo, M.; Roberto, N. Aggressive control of helicopters in presence of parametric and dynamical uncertainties. 2008, 18, 381–389. doi:10.1016/j.mechatronics.2007.10.004
[8] Peng, K.M.; Cai, G.W.; Chen, B.M.; Dong, M.B.; Lum, K.; Lee, T.H. Design and implementation of an autonomous flight control law for a UAV helicopter. 2009, 45, 2333–2338. doi:10.1016/j.automatica.2009.06.016
[9] Agus, B.; Singgih, S. Optimal tracking controller design for a small scale helicopter. 2007, 4, 271–280. doi:10.1016/S1672-6529(07)60041-9
[10] Bayraktar, S.; Feron, E. Experiments with small unmanned helicopter nose-up landings. 2009, 32, 332–337. doi:10.2514/1.36470
[11] Widyawardana, A.; Adang, S.A.; Jaka, S. Automated flight test and system identification for rotary wing aerial platform using frequency responses analysis. 2007, 4, 237–244. doi:10.1016/S1672-6529(07)60037-7
[12] Cui, P.L.; Zhang, H.J. QMRPF-UKF master-slave filtering for the attitude determination of micro-nano satellites using gyro and magnetometer. 2010, 10, 9935–9947. doi:10.3390/s101109935. PMID: 22163448
[13] Randal, B.; Derek, K.; Morgan, Q.; Deryl, S. Autonomous vehicle technologies for small fixed-wing UAVs. 2005, 2, 92–108. doi:10.2514/1.8371
[14] Hong, L.; Wicker, D.A. A spatial-domain multiresolution particle filter with thresholded wavelets. 2007, 87, 1384–1401. doi:10.1016/j.sigpro.2006.12.005
[15] Crassidis, J.L.; Markley, F.L. Unscented filtering for spacecraft attitude estimation. 2003, 26, 536–542. doi:10.2514/2.5102
[16] Carmi, A.; Oshman, Y. Fast particle filtering for attitude and angular-rate estimation from vector observations. 2009, 32, 70–78. doi:10.2514/1.36979
[17] Seung, M.O.; Eric, N.J. Relative motion estimation for vision-based formation flight using unscented Kalman filter. In Proceedings of the AIAA Guidance, Navigation, and Control Conference and Exhibit, Hilton Head, SC, USA, 20–23 August 2007.
[18] Zhang, J.Q.; Yan, Y. A wavelet-based approach to abrupt fault detection and diagnosis of sensors. 2001, 50, 1389–1396. doi:10.1109/19.963215
[19] Kamrani, F.; Lozano, M.G.; Ayani, R. Path planning for UAVs using symbiotic simulation. In Proceedings of the European Simulation and Modeling Conference, Toulouse, France, 23–25 October 2006.
[20] Tsiotras, P.; Jung, D.; Bakolas, E. Multiresolution hierarchical path-planning for small UAVs using wavelet decompositions. 2012, 66, 505–522. doi:10.1007/s10846-011-9631-z
[21] Lei, X.S.; Du, Y.H. A linear domain system identification for small unmanned aerial rotorcraft based on adaptive genetic algorithm. 2010, 7, 142–149. doi:10.1016/S1672-6529(09)60200-6
[22] Lei, X.S.; Bai, L.; Du, Y.H.; Miao, C.X.; Chen, Y.; Wang, T.M. A small unmanned polar research aerial vehicle based on the composite control method. 2011, 21, 821–830. doi:10.1016/j.mechatronics.2010.12.002
[23] Farid, K.; Kenzo, N.; Isabelle, F. An adaptive vision-based autopilot for mini flying machines guidance, navigation and control. 2009, 27, 165–188. doi:10.1007/s10514-009-9135-x
[24] Hermansson, J.; Gising, A.; Skoglund, M.; Schön, T.B. Technical report, Automatic Control, Linköpings Universitet, Sweden, 2010.

Figure 1. The scheme of altitude fusion.
Figure 2. The polyphase-based wavelet transform in the lifting scheme: (a) the wavelet analysis process; (b) the wavelet reconstruction process.
Figure 3. (a) Comparison between the original barometer data and the wavelet-filtered data. (b) Comparison between the original DGPS data and the wavelet-filtered data.
Figure 4. The Raptor-90 helicopter.
Figure 5. (a) Comparison between the output of the AEKF and the real altitude. (b) Comparison between the output of the barometer and the real altitude. (c) Comparison between the output of the ultrasonic sensor and the real altitude. (d) Comparison between the output of the DGPS and the real altitude.
Figure 6. The altitude generated by the AEKF method, barometer and DGPS in a hovering process.
Figure 7. The result of autonomous landing tests from different altitudes.
Figure 8. (a) Comparison of the 3D trajectories of the SUAR with the KF and AEKF methods in the landing process. (b) The altitude trajectory of the SUAR in the autonomous landing process. (c) The pitch and roll angles in the autonomous landing process. (d) The velocities in the two directions.

Table 1. The coefficients of the "db4" filter.

  h0 =  0.48296291314453    g0 =  0.12940952255126
  h1 =  0.83651630373780    g1 =  0.22414386804201
  h2 =  0.22414386804201    g2 = -0.83651630373780
  h3 = -0.12940952255126    g3 =  0.48296291314453

Table 2. Altitude accuracies for the AEKF and each sensor (in meters).

                        AEKF    Barometer   Ultrasonic   DGPS
  Max. absolute error   0.31    0.32        4.94         1.49
  Mean error            0.08    0.14        0.44         0.49
  Standard deviation    0.098   0.16        0.94         0.41
Wave Motion

Introduction: The Plucked String

When a class is asked to describe the motion of a string released from rest in a "Pluck" (isosceles triangle) configuration, most students form their fingers into a tent and move them up and down. The instructor usually has to work very hard to convince everyone that there is no net force on a straight section of string, so only the segment at the kink can have an acceleration. The following applet offers two initial configurations: one where the initial displacement extends across the entire string (Pluck), and one where it is confined to the central quarter (Pulse). (The usual assumptions are made: the string is perfectly flexible; the tension is high enough for gravity to be ignored; the transverse displacements, although plotted on an expanded scale, are much smaller than the length of the string.) The motion of the string is plotted frame by frame on the left, and the displacement of a point one-quarter of the way from one end is plotted versus time on the right. The fact that there is acceleration only at a kink shows up clearly for both initial displacements. The Pulse option shows the initial displacement breaking up into travelling pulses that invert on reflection from the fixed ends of the string. The algorithm that generates the motion makes use of the fact that the classical wave equation (Newton's law) has travelling-wave solutions that can be superimposed to describe the motion of a string with fixed endpoints. This generates amusing pictures, but says nothing about the physics of the problem. In what follows, two alternate approaches are taken to generating the motion of the string. In the first, a numerical solution of the wave equation is generated directly. In the second, the "normal modes" and their frequencies of oscillation are found first, then are superimposed to represent the initial displacement of the string.
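The first, direct approach can be sketched with a leapfrog finite-difference scheme for the wave equation u_tt = c^2 u_xx with clamped ends. This is not the applet's actual code, just a minimal illustration; the grid sizes and triangular "Pluck" shape are arbitrary choices.

```python
import numpy as np

# Leapfrog finite differences for u_tt = c^2 u_xx on [0, 1], fixed ends.
N = 100
c = 1.0
dx = 1.0 / N
dt = 0.5 * dx / c                           # CFL ratio 0.5 < 1: stable
x = np.linspace(0.0, 1.0, N + 1)
u = np.where(x < 0.5, 2 * x, 2 * (1 - x))   # "Pluck": triangle, amplitude 1
u_prev = u.copy()                           # released from rest (simple first-order start)
r2 = (c * dt / dx) ** 2
for step in range(200):
    u_next = np.zeros_like(u)               # endpoints stay clamped at 0
    u_next[1:-1] = (2 * u[1:-1] - u_prev[1:-1]
                    + r2 * (u[2:] - 2 * u[1:-1] + u[:-2]))
    u_prev, u = u, u_next
```

At each step the interior update is driven by the discrete curvature u[2:] - 2u[1:-1] + u[:-2], which vanishes on straight segments, so only points near the kink accelerate, exactly the behavior described above.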
For a uniform string clamped at both ends, the normal modes make up what is known as the Fourier Sine Series (FSS), and the superposition can be made as accurate as one wishes by including sufficient terms. For a string composed of segments of differing linear density, the normal modes do not form a set of orthogonal functions, so their superposition can only approximate an initial displacement. The quantum version of a string with fixed endpoints is called the infinite square well. The quantum wave equation is the Schrodinger Equation, and the infinite square well has the same set of normal modes (stationary states) as the classical clamped string. Initial "Pluck" and "Pulse" wavefunctions generate oscillations quite different from those on a classical string. In the direct solution this is due to the difference in the wave equations, and in the series solution it is due to the difference in the dispersion relations (the dependence of frequency on mode number) and the complex exponential time dependence in the quantum case. Unlike classical normal modes, the quantum stationary states always form orthogonal sets. The examples in the following sections enable one to compare and contrast the behavior of waves on classical and quantum strings.
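For the uniform clamped string, the Fourier sine-series construction can be sketched directly: compute the mode coefficients of the initial "Pluck" shape by numerical integration, then superpose the modes with their cos(n pi c t / L) time factors. The grid resolution and mode count below are arbitrary illustrative choices.

```python
import numpy as np

L_len, c, A = 1.0, 1.0, 1.0                  # string length, wave speed, pluck amplitude
x = np.linspace(0.0, L_len, 1001)
dx = x[1] - x[0]
f0 = np.where(x < L_len / 2, 2 * A * x / L_len, 2 * A * (L_len - x) / L_len)

# Coefficients b_n = (2/L) * integral of f0(x) sin(n pi x / L) dx.
# A plain Riemann sum suffices here since the integrand vanishes at both ends.
N_modes = 50
b = np.array([2 / L_len * np.sum(f0 * np.sin(n * np.pi * x / L_len)) * dx
              for n in range(1, N_modes + 1)])

def y(t):
    """Superposition of the first N_modes normal modes at time t."""
    modes = [b[n - 1] * np.sin(n * np.pi * x / L_len) * np.cos(n * np.pi * c * t / L_len)
             for n in range(1, N_modes + 1)]
    return np.sum(modes, axis=0)
```

At t = 0 the sum reproduces the pluck to within the truncation error of 50 modes; at t = L/c every surviving (odd-n) mode has flipped sign, so the string appears as the inverted pluck, which matches the travelling-pulse picture of the applet.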
Is 0.9 recurring equal to 1?

I feel like you just made my point... The infinite term simply does not occur; therefore, it will always be 0.999999.....9, which is not equal to one. The limit equaling one is exactly that, a limit. 0 is not a member of the set of values in the sequence 9/10^n. This sequence is always changing; the derivative is also nonzero. Therefore its sum will never settle at any number, let alone 1. The sum will approach 1, get closer and closer, but never get there, unless you are at infinity, which doesn't exist.

I don't understand the breakdown in this logic. You are correct that the sequence of partial sums will never settle: it will always come closer and closer to 1. The point is that the very definition of 0.9999.... is a limit, namely the limit of the series

0.9999.... = lim as N goes to infinity of the sum from n = 1 to N of 9/10^n

This is the definition of 0.9999.... The partial sums of the above series will approach 1, and the limit of the above series will equal one. Thus 0.9999..., being defined as a limit, will equal 1.
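The two claims are compatible, and easy to check numerically: each partial sum s_N of the series has the closed form 1 - 10^(-N), so every s_N is strictly below 1 while the gap shrinks to 0.

```python
def s(N):
    """Partial sum 0.9 + 0.09 + ... + 9/10^N of the series defining 0.999..."""
    return sum(9 / 10**n for n in range(1, N + 1))

# Every partial sum is < 1, but the gap 1 - s(N) = 10**(-N) shrinks to 0.
# The symbol "0.999..." denotes the limit of these partial sums, which is
# exactly 1, not any individual partial sum.
```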
Cambridge Studies in Advanced Mathematics, 29. Cambridge etc.: Cambridge University Press. xii, 204 p. £25.00; $39.50 (1990).

This excellently written book is an advanced textbook on the theory of Coxeter groups. It pursues two objectives. First, it is an introduction to the book by N. Bourbaki on Lie groups and algebras (Chapters 4-6, 1968; Zbl 0186.330). Second, it brings the coverage up to date. Correspondingly, the book is divided into two parts. The first part consists of four chapters: finite reflection groups; classification of finite reflection groups; polynomial invariants of finite reflection groups; affine reflection groups. The second part is inspired especially by the seminal work of D. Kazhdan and G. Lusztig [Invent. Math. 53, 165-184 (1979; Zbl 0499.20035)] on representations of Hecke algebras associated with Coxeter groups. This part also consists of four chapters: Coxeter groups (including the Bruhat ordering); special cases; Hecke algebras and Kazhdan-Lusztig polynomials; complements (this chapter sketches a number of interesting complementary topics as well as connections with Lie theory). The book has an extensive bibliography on Coxeter groups and their applications.
On Mixing and Stability of Limit Theorems

Results 1 - 10 of 35

, 1995
Cited by 221 (37 self)
"The two-parameter Poisson-Dirichlet distribution, denoted PD(α, θ), is a distribution on the set of decreasing positive sequences with sum 1. The usual Poisson-Dirichlet distribution with a single parameter θ, introduced by Kingman, is PD(0, θ). Known properties of PD(0, θ), including the Markov chain description due to Vershik-Shmidt-Ignatov, are generalized to the two-parameter case. The size-biased random permutation of PD(α, θ) is a simple residual allocation model proposed by Engen in the context of species diversity, and rediscovered by Perman and the authors in the study of excursions of Brownian motion and Bessel processes. For 0 < α < 1, PD(α, 0) is the asymptotic distribution of ranked lengths of excursions of a Markov chain away from a state whose recurrence time distribution is in the domain of attraction of a stable law of index α. Formulae in this case trace back to work of Darling, Lamperti and Wendel in the 1950's and 60's. The distribution of ranked lengths of e..."

, 2003
Cited by 109 (16 self)
"It is a common financial practice to estimate volatility from the sum of frequently-sampled squared returns. However, market microstructure poses challenges to this estimation approach, as evidenced by recent empirical studies in finance. This work attempts to lay out theoretical grounds that reconcile continuous-time modeling and discrete-time samples. We propose an estimation approach that takes advantage of the rich sources in tick-by-tick data while preserving the continuous-time assumption on the underlying returns. Under our framework, it becomes clear why and where the "usual" volatility estimator fails when the returns are sampled at the highest frequency."

- THE ANNALS OF PROBABILITY, 1998
Cited by 103 (8 self)
"We are interested in the rate of convergence of the Euler scheme approximation of the solution to a stochastic differential equation driven by a general (possibly discontinuous) semimartingale, and in the asymptotic behavior of the associated normalized error. It is well known that for Itô's equations the rate is 1/√n; we provide a necessary and sufficient condition for this rate to be 1/√n when the driving semimartingale is a continuous martingale, or a continuous semimartingale under a mild additional assumption; we also prove that in these cases the normalized error processes converge in law. The rate can also differ from 1/√n: this is the case for instance if the driving process is deterministic, or if it is a Lévy process without a Brownian component. It is again 1/√n when the driving process is Lévy with a nonvanishing Brownian component, but then the normalized error processes converge in law in the finite-dimensional sense only, while the discretized normalized error processes converge in law in the Skorohod ..."

, 2004
Cited by 85 (10 self)
"With the availability of high frequency financial data, nonparametric estimation of volatility of an asset return process becomes feasible. A major problem is how to estimate the volatility consistently and efficiently, when the observed asset returns contain error or noise, for example, in the form of microstructure noise. The former (consistency) has been addressed heavily in the recent literature; however, the resulting estimator is not quite efficient. In Zhang, Mykland, and Aït-Sahalia (2003), the best estimator converges to the true volatility only at the rate of n^(-1/6). In this paper, we propose an efficient estimator which converges to the true volatility at the rate of n^(-1/4), which is the best attainable. The estimator remains valid when the observation noise is dependent. Key words and phrases: consistency, dependent noise, discrete observation, efficiency, Ito process, microstructure noise, observation error, rate of convergence, realized volatility."

- Journal of Econometrics, forthcoming, 2009
Cited by 26 (3 self)
"This paper is about how to estimate the integrated covariance 〈X, Y〉_T of two assets over a fixed time horizon [0, T], when the observations of X and Y are "contaminated" and when such noisy observations are at discrete, but not synchronized, times. We show that the usual previous-tick covariance estimator is biased, and the size of the bias is more pronounced for less liquid assets. This is an analytic characterization of the Epps effect. We also provide the optimal sampling frequency which balances the tradeoff between the bias and various sources of stochastic error terms, including nonsynchronous trading, microstructure noise, and time discretization. Finally, a two-scales covariance estimator is provided which simultaneously cancels (to first order) the Epps effect and the effect of microstructure noise. The gain is demonstrated in data."

Cited by 25 (11 self)
"Ito processes are the most common form of continuous semimartingales, and include diffusion processes. The paper is concerned with the nonparametric regression relationship between two such Ito processes. We are interested in the quadratic variation (integrated volatility) of the residual in this regression, over a unit of time (such as a day). A main conceptual finding is that this quadratic variation can be estimated almost as if the residual process were observed, the difference being that there is also a bias which is of the same asymptotic order as the mixed normal error term."
The proposed methodology, “ANOVA for diffusions and Ito processes”, can be used to measure the statistical quality of a parametric model, and, nonparametrically, the appropriateness of a one-regressor model in general. On the other hand, it also helps quantify and characterize the trading (hedging) error in the case of financial applications. , 2008 "... The econometric literature of high frequency data often relies on moment estimators which are derived from assuming local constancy of volatility and related quantities. We here study this local-constancy approximation as a general approach to estimation in such data. We show that the technique yiel ..." Cited by 22 (8 self) Add to MetaCart The econometric literature of high frequency data often relies on moment estimators which are derived from assuming local constancy of volatility and related quantities. We here study this local-constancy approximation as a general approach to estimation in such data. We show that the technique yields asymptotic properties (consistency, normality) that are correct subject to an ex post adjustment involving asymptotic likelihood ratios. These adjustments are given. Several examples of estimation are provided: powers of volatility, leverage effect, integrated betas, bipower, and covariance under asynchronous observation. The first order approximations in this study can be over the period of one observation, or over blocks of successive observations. The advantage of blocking is a gain in transparency in defining and analyzing estimators. The theory relies heavily on the interplay between stable convergence and measure change, and on asymptotic expansions for martingales. , 2005 "... In the last decade, sequential Monte-Carlo methods (SMC) emerged as a key tool in computational statistics (see for instance Doucet et al. (2001), Liu (2001), Künsch (2001)). 
These algorithms approximate a sequence of distributions by a sequence of weighted empirical measures associated to a weighte ..." Cited by 16 (7 self) Add to MetaCart In the last decade, sequential Monte-Carlo methods (SMC) emerged as a key tool in computational statistics (see for instance Doucet et al. (2001), Liu (2001), Künsch (2001)). These algorithms approximate a sequence of distributions by a sequence of weighted empirical measures associated to a weighted population of particles. These particles and weights are generated recursively according to elementary transformations: mutation and selection. Examples of applications include the sequential Monte-Carlo techniques to solve optimal non-linear filtering problems in state-space models, molecular simulation, genetic optimization, etc. Despite many theoretical advances (see for instance Gilks and Berzuini (2001), Künsch (2003), Del Moral (2004), Chopin (2004)), the asymptotic property of these approximations remains of course a question of central interest. In this paper, we analyze sequential Monte Carlo methods from an asymptotic perspective, that is, we establish law of large numbers and invariance principle as the number of particles gets large. We introduce the concepts of weighted sample consistency and asymptotic normality, and derive conditions under which the mutation and the selection procedure used in the sequential Monte-Carlo build-up preserve these properties. To illustrate our findings, we analyze SMC algorithms to approximate the filtering distribution in state-space models. We show how our techniques allow to relax restrictive technical conditions used in previously reported works and provide grounds to analyze more sophisticated sequential sampling strategies. Short title: Limit theorems for SMC. 1 , 2006 "... In the econometric literature of high frequency data, it is often assumed that one can carry out inference conditionally on the underlying volatility processes. 
In other words, conditionally Gaussian systems are considered. This is often referred to as the assumption of “no leverage effect”. This is ..." Cited by 12 (3 self) Add to MetaCart In the econometric literature of high frequency data, it is often assumed that one can carry out inference conditionally on the underlying volatility processes. In other words, conditionally Gaussian systems are considered. This is often referred to as the assumption of “no leverage effect”. This is often a reasonable thing to do, as general estimators and results can often be conjectured from considering the conditionally Gaussian case. The purpose of this paper is to try to give some more structure to the things one can do with the Gaussian assumption. We shall argue in the following that there is a whole treasure chest of tools that can be brought to bear on high frequency data problems in this case. We shall in particular consider approximations involving locally constant volatility processes, and develop a general theory for this approximation. As applications of the theory, we propose an improved estimator of quarticity, an ANOVA for processes with multiple regressors, and an estimator for error bars on the Hayashi-Yoshida estimator of quadratic covariation Some key words and phrases: consistency, cumulants, contiguity, continuity, discrete observation, efficiency, Itô process, likelihood inference, realized volatility, stable convergence - Stochastic Processes and Their Applications 118 "... In this paper we present the central limit theorem for general functions of the increments of Brownian semimartingales. This provides a natural extension of the results derived in Barndorff-Nielsen, Graversen, Jacod, Podolskij & Shephard (2006), who showed the central limit theorem for even function ..." Cited by 8 (5 self) Add to MetaCart In this paper we present the central limit theorem for general functions of the increments of Brownian semimartingales. 
This provides a natural extension of the results derived in Barndorff-Nielsen, Graversen, Jacod, Podolskij & Shephard (2006), who showed the central limit theorem for even functions. We prove an infeasible central limit theorem for general functions and state some assumptions under which a feasible version of our results can be obtained. Finally, we present some examples from the literature to which our theory can be applied.
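Several of the abstracts above revolve around the bias of realized volatility under microstructure noise and its removal by two-scales subsampling. The effect is easy to reproduce in a minimal simulation sketch; the Brownian model, the noise level, and all parameters below are our illustrative choices, not taken from any of the cited papers:

```python
import math
import random

random.seed(7)
n = 10000
dt = 1.0 / n
sigma, noise_sd = 1.0, 0.01      # true integrated variance over [0, 1] is 1

# efficient log-price X plus i.i.d. microstructure noise -> observed Y
x = 0.0
Y = []
for _ in range(n + 1):
    Y.append(x + random.gauss(0.0, noise_sd))
    x += sigma * math.sqrt(dt) * random.gauss(0.0, 1.0)

def rv(obs, k=1):
    # realized variance computed from every k-th observation
    sub = obs[::k]
    return sum((b - a) ** 2 for a, b in zip(sub, sub[1:]))

rv_all = rv(Y)                   # biased: roughly IV + 2*n*noise_sd**2, here about 3
K = 100
rv_sparse = sum(rv(Y[i:], K) for i in range(K)) / K   # sparse RV, averaged over offsets
tsrv = rv_sparse - ((n - K + 1) / (K * n)) * rv_all   # two-scales bias correction

print(round(rv_all, 2), round(tsrv, 2))
```

The full-frequency estimator `rv_all` lands near 3 (noise dominates), while the two-scales combination `tsrv` lands near the true value 1, illustrating why and where the "usual" estimator fails at the highest frequency.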
Here's the question you clicked on:

On average, when a basketball is dropped, it bounces 7/10 as high as it did on the previous bounce. If a ball is dropped from a height of 10 m, determine the total distance up and down that the ball travels by the top of the 11th bounce. Round to the nearest hundredth.

It is the sum of a geometric sequence: Sum = 10[(1 − 0.7^11)/(1 − 0.7)] = 32.67

Are you familiar with the sum of a geometric series? Let \[S = 1 + x + x^2 + x^3 + \dots + x^n\] then \[xS = x + x^2 + x^3 + \dots + x^{n+1}\] and so \[S - xS = S(1-x) = 1 - x^{n+1}\] So \[S = \frac{1-x^{n+1}}{1-x}\] That's the sum of a geometric series. Now examining your problem, we have to add the following: \[10 + \frac{7}{10} \cdot 10 + \frac{7}{10} \cdot 10 + \left(\frac{7}{10}\right)^2 \cdot 10 + \left(\frac{7}{10}\right)^2 \cdot 10 + \dots\] and so on and so forth. Noting that the distance traveled (one way, either up or down) after the nth bounce is \[\left(\frac{7}{10}\right)^n \cdot 10\] and that everything except the initial drop and the final rise is traversed twice, we can say that our total distance traveled is \[\left[2\sum_{n=0}^{11} \left(\frac{7}{10}\right)^n \cdot 10\right] - 10 - \left(\frac{7}{10}\right)^{11} \cdot 10\] but this looks just like a geometric series with x = 7/10, so that means \[\sum_{n=0}^{11} \left(\frac{7}{10}\right)^n = \frac{1-\left(\frac{7}{10}\right)^{12}}{1-\frac{7}{10}}\] And the rest is just calculation... :)
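The closed-form expression in the last reply can be checked against a direct simulation of the bounces (a quick sketch in plain Python, using only the values given in the problem):

```python
# Total up-and-down distance of a ball dropped from 10 m that rebounds
# to 7/10 of its previous height, counted up to the top of the 11th bounce.
total = 10.0          # the initial drop
height = 10.0
for bounce in range(1, 12):
    height *= 0.7
    total += height               # the ball goes up...
    if bounce < 11:
        total += height           # ...and back down (not after the 11th peak)

# Closed form from the thread: 2*10*sum_{n=0}^{11} (7/10)^n - 10 - 10*(7/10)^11
closed = 2 * 10 * (1 - 0.7**12) / (1 - 0.7) - 10 - 10 * 0.7**11

print(round(total, 2))   # 55.55
```

Both routes agree, so rounded to the nearest hundredth the total distance is 55.55 m.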
Cosine function expression

February 12th 2013, 07:44 PM #1 Feb 2013

Cosine function expression

Problem: 6cos x + 2 = −1
Directions: solve for x.
a) Write an expression that identifies all possible real answers (n = any integer)
b) Give an answer that is within the principal values.

We went over this today in class, the answers were:
a) 120 + 360n and 240 + 360n
b) x = 120

I'm confused about how to arrive at this answer. Also, is it possible to solve this without a graphing calculator?

February 12th 2013, 07:46 PM #2 MHF Contributor Sep 2012

Re: Cosine function expression

Hey Xelaza. Hint: Re-arrange to get cos(x) = blah and tell us what principal angle corresponds to this particular x.

February 12th 2013, 07:56 PM #3 Feb 2013

Re: Cosine function expression

Subtracting 2 from both sides and dividing by 6: cos x = −3/6

February 12th 2013, 08:01 PM #4 MHF Contributor Sep 2012

Re: Cosine function expression

So what angle does this correspond to in the principal branch (Hint: Consider 30 and 60 degrees).

February 12th 2013, 08:13 PM #5 Feb 2013

Re: Cosine function expression

I'm not sure..

February 13th 2013, 07:08 AM #6 MHF Contributor Apr 2005

Re: Cosine function expression

Then you need to learn more trig! There are a few values of trig functions that can be derived easily. Note Chiro's "Hint: Consider 30 and 60 degrees". Draw an equilateral triangle with sides of length 2. What are its angles? Draw a perpendicular from one vertex to the opposite side. That also bisects both angle and side length. That gives you two right triangles. What are the angles in those right triangles? What is the length of the hypotenuse? What are the lengths of the legs? (One is easy, the other requires the Pythagorean theorem.) Now that you know the side lengths, what are the sine and cosine of the angles? Another "easy" angle is 45 degrees. Draw a right triangle with legs of length 1. What is the length of the hypotenuse? What are the angles?
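No graphing calculator is needed; a few lines of plain Python confirm that the thread's answer families actually satisfy the equation (a quick check we added, not part of the original thread):

```python
import math

# 6*cos(x) + 2 = -1  =>  cos(x) = -1/2
principal = math.degrees(math.acos(-0.5))   # the principal value, 120 degrees

# the two families of solutions from the thread, in degrees
for n in range(-2, 3):
    for x in (120 + 360 * n, 240 + 360 * n):
        assert abs(6 * math.cos(math.radians(x)) + 2 - (-1)) < 1e-9

print(round(principal))  # 120
```

The 120° value is exactly the one derivable by hand from the 30-60-90 triangle hinted at in post #6, since cos 60° = 1/2 and 180° − 60° = 120°.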
Bisimulations for o-minimal hybrid systems
Seminar Room 1, Newton Institute
A hybrid system roughly consists of finitely many continuous dynamical systems coupled with a finite automaton which governs the discrete transitions between the continuous dynamical systems. Behaviors of hybrid systems model air traffic control, flight behavior, and so on. The main decision problem for hybrid systems is the reachability problem: given a set I of initial states, a set F of final states and a set A of states to avoid, does the system behave securely on inputs from I, i.e. does the system always reach a final state while avoiding the set of prohibited states? A first global approach to this question is to build finite bisimulations of the system. We will show how to prove existence of finite bisimulations when the hybrid system is definable in an o-minimal theory. This is joint work with Thomas Brihaye, Cedric Riviere and Christophe Troestler. Our results generalize previous results of Lafferriere, Pappas and Sastry. Recent advances on the effectiveness of our construction have been obtained by Margarita Korovina and Nicolai Vorobjov in the pfaffian case.
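Once a finite bisimulation exists, the quotient of a finite labelled transition system can be computed by naive partition refinement. A minimal sketch (the states, labels and transitions below are illustrative toys chosen by us, not taken from the talk):

```python
from collections import defaultdict

def bisimulation_quotient(states, label, succ):
    """Coarsest bisimulation partition of a finite labelled transition
    system: states end up in the same block exactly when they carry the
    same label and their successors reach the same blocks."""
    # start from the partition induced by the labels
    by_label = defaultdict(set)
    for s in states:
        by_label[label[s]].add(s)
    partition = [frozenset(b) for b in by_label.values()]

    while True:
        block_of = {s: i for i, blk in enumerate(partition) for s in blk}
        refined = []
        for blk in partition:
            # split a block by the set of blocks its successors land in
            sig = defaultdict(set)
            for s in blk:
                sig[frozenset(block_of[t] for t in succ[s])].add(s)
            refined.extend(frozenset(g) for g in sig.values())
        if len(refined) == len(partition):   # no block split: fixpoint
            return refined
        partition = refined

# two 'p'-states that both step into a 'q'-state are bisimilar
states = [0, 1, 2, 3]
label = {0: 'p', 1: 'p', 2: 'q', 3: 'q'}
succ = {0: {2}, 1: {3}, 2: set(), 3: set()}
print(len(bisimulation_quotient(states, label, succ)))  # 2
```

The talk's contribution is exactly what this sketch presupposes: proving that for o-minimal hybrid systems a finite such quotient exists, so that reachability on the infinite-state system reduces to graph search on the quotient.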
Quantization and Cohomology (Week 10)
Posted by John Baez

I'm back in town and eager to continue lecturing about Quantization and Cohomology! In last Fall's lectures, we discussed the Lagrangian and Hamiltonian approaches to the classical mechanics of point particles, and sketched how these could be generalized to strings and higher-dimensional membranes by a process that we'll ultimately see as categorification. This quarter we'll start by bringing the quantum aspects of the theory into the game. We begin with a review and a quick discussion of some basic but still incompletely understood questions:

• Week 10 (Jan. 16) - A quick review of classical versus quantum mechanics, in both the Lagrangian and Hamiltonian approaches. What are path integrals, really? How do we quantize a classical Hamiltonian to obtain a quantum one?

The notes from last class — the final class of the Winter quarter — are here. Next week's notes are here. Posted at January 17, 2007 12:42 AM UTC

Re: Quantization and Cohomology (Week 10)

The notes say that Schrödinger guess[ed] the quantization rule (1) $p \mapsto \frac{\hbar}{i} \vec{\nabla}$ But didn't de Broglie say that first? Posted by: Toby Bartels on January 17, 2007 2:22 AM | Permalink | Reply to this

Re: Quantization and Cohomology (Week 10)

It's quite possible de Broglie said it first; someone mentioned this possibility in class, and all I could say for sure is that he must have realized something like this. Maybe someone here knows the precise history? Posted by: John Baez on January 17, 2007 5:55 AM | Permalink | Reply to this

Re: Quantization and Cohomology (Week 10)

Could I persuade you to return to our discussion which began here? Reading some of the references I subsequently mentioned, I caught a glimpse of what Bob Coecke said: When working with quantum mechanics a lot, one comes to the point where it feels perfectly natural and all desire to "interpret" it as anything else than what it is disappears.
It then is classical physics which begins to look odd and in need of "explanation". How do classical particles know which way to go? Posted by: David Corfield on January 17, 2007 1:31 PM | Permalink | Reply to this

Re: Quantization and Cohomology (Week 10)

Perhaps to tempt you with a starting point, if the wavefunction for the quantum particle is in $L^2(Q)$, wouldn't you expect a 'classical wavefunction' to be an $\mathbb{R}^{min}$-valued function on $Q$, whose integral (i.e., minimum) is bounded? Posted by: David Corfield on January 17, 2007 1:58 PM | Permalink | Reply to this

Re: Quantization and Cohomology (Week 10)

if the wavefunction for the quantum particle is in $L^2(Q)$, wouldn't you expect a 'classical wavefunction' to be an $\mathbb{R}^{min}$-valued function on $Q$, whose integral (i.e., minimum) is bounded?

This question ties in with the general question I tried to formulate in our recent discussion. Namely, while I can answer your above question, I can do so only by using hindsight and additional knowledge about the setup that I have. I wouldn't know how to answer this in a way systematic enough that I could, say, code a computer program for doing it. Here is the unsystematic answer: As Fuchs recalls, given a solution $\psi \in L^2(Q)$ of the Schrödinger equation, regarded as a function of $\hbar$, we may perform a certain $\hbar \to 0$ limit on its logarithm. $\psi$ being a solution of the Schrödinger equation will imply that the resulting limiting function is a solution of the associated Hamilton-Jacobi equation. Posted by: urs on January 17, 2007 6:17 PM | Permalink | Reply to this

Re: Quantization and Cohomology (Week 10)

David wrote: Could I persuade you to return to our discussion which began here?
Right now I’m extremely busy: having run off to that workshop in Toronto the first week of class, I’m now having to figure out exactly what to say in my courses on quantization & cohomology and computation & categorification, as well as the qualifier course I’m teaching on algebraic topology. I also have weekly meetings with five grad students, two of whom (Jeff and Derek) are finishing their theses this spring, and applying for jobs now. Of course I want to talk to James Dolan, too — we’re starting to write stuff about categorifying quantum groups. And, to top it all off, I’m in charge of picking new grad students for the UCR math So, I’ll probably be less talkative on this blog than I’d like to be. But, yesterday I decided to discuss this ‘deformation of rigs’ approach to quantization in the course on quantization & cohomology. I realized this may be the deepest way to get a grip on how cohomology shows up when you quantize particles, strings, and higher-dimensional branes. That ‘homotopy types as categorified truth values’ business we talked about should show up somehow… I’m not quite sure yet, but I feel there must be a lot to say about this. So, I’ll try to tackle a bunch of your questions, or at least similar questions, in this course! Posted by: John Baez on January 17, 2007 10:13 PM | Permalink | Reply to this Re: Quantization and Cohomology (Week 10) Here’s another tack. You can just about convince yourself that the Lagrangian story works as a rig change, but what about the Hamiltonian? How do we get from the cotangent space to $L^2(Q)$? 
Is there anything to the observation that: (1) $\epsilon^m + \epsilon^n \approx \epsilon^{\min(m, n)}$? Posted by: David Corfield on January 17, 2007 8:48 PM | Permalink | Reply to this

Re: Quantization and Cohomology (Week 10)

After attending this week's lecture, where I compared four approaches to physics:

Lagrangian classical
Lagrangian quantum
Hamiltonian classical
Hamiltonian quantum

someone inquired: BTW, what would be wrong with labelling the two columns in the table today "particle mechanics" and "wave mechanics" (where the latter is not restricted to the deBroglie usage) as in the old distinction between ray optics and wave optics? The connotation of that distinction is slightly different since it positions the quantization as a consequence of the wave-ization of mechanics.

I replied: It would be sort of confusing, since the distinction between particles and waves is often treated as orthogonal to the distinction between classical and quantum. It's fun to consider this square:

    classical particles              quantum particles
    (e.g. Newtonian mechanics)       (e.g. Schrödinger's equation)

    classical waves                  quantum waves
    (e.g. the part of Maxwell's      (e.g. the part of QED
    equations describing light)      describing light)

The part of quantum electrodynamics describing light can be obtained by quantizing Maxwell's equations in perfect parallel with how Schrödinger's equation is obtained by quantizing F = ma. One can also take the viewpoint that quantization is the same as the passage from particles to waves. I find this a bit less fruitful… but there's something to it, as evidenced by the fact that people often use 'second quantization' to mean the quantization of a theory of waves! There's a lot more to be said about all this… it took me decades to feel I understand it, and I'm just scratching the surface here!
If we think of the particle/wave distinction as orthogonal to the classical/quantum and Lagrangian/Hamiltonian distinction, we get $2^3 = 8$ kinds of physical theory — and a good physicist needs to learn all eight! Posted by: John Baez on January 20, 2007 3:44 AM | Permalink | Reply to this

Re: Quantization and Cohomology (Week 10)

How should we fit your $2^2 = 4$ table:

    classical statics      classical dynamics
    thermal statics        quantum dynamics

to this $2^3 = 8$ picture? Posted by: David Corfield on January 20, 2007 3:03 PM | Permalink | Reply to this

Re: Quantization and Cohomology (Week 10)

Good question, David! I leave it as an exercise for the ambitious reader. Some hints: the process of reinterpreting 'dynamics for $p$-branes' as 'statics for $(p+1)$-branes' is a process of categorification, together with Wick rotation. For clarity, we should probably separate this categorification process from the Wick rotation. The Wick rotation amounts to changing $\hbar$ in the expression $exp(iS/\hbar)$ from real to imaginary. It's part of the overall 'change of rig' business that relates thermal physics, quantum physics and classical physics. Posted by: John Baez on January 21, 2007 2:18 AM | Permalink | Reply to this

Re: Quantization and Cohomology (Week 10)

What confuses me a little about this is that 3 of the entries in your $2^2$ table are classical, and I don't see where quantum thermal theories fit in? Posted by: David Corfield on January 22, 2007 9:25 AM | Permalink | Reply to this

Re: Quantization and Cohomology (Week 10)

Posted by: Tim Silverman on January 23, 2007 10:25 PM | Permalink | Reply to this

Re: Quantization and Cohomology (Week 10)

David asked how the wavefunction of a quantum particle reduces to the 'wavefunction of a classical particle' — whatever that is — in the $\hbar \to 0$ limit. I think I finally figured it out. The answer is pretty simple: a 'classical wavefunction' is a real-valued function $\psi$ on spacetime.
We can think of the number $\psi(t,x)$ as the least possible action for a particle to reach the point $x$ at time $t$. We then evolve this in time following the principle of least action. It’s sort of obvious how — but before I describe that, let me bring in a handy analogy. We globe-trotting academics are all familiar with the problem of minimizing the cost of travelling from here to there. So, the principle of least action becomes more intuitive if we call it the principle of ‘least cost’. In these terms, you should think of $\psi(t_0,x_0)$ as the price you have to pay to start a trip at the point $x_0$ at time $t_0$. The ‘startup cost’, as it were. Given this, we define $\psi(t_1,x_1)$ to be the least cost of getting to the point $x_1$ by the time $t_1$. It’s easy to compute this function if we know the price of any path that takes us from the point $x_0$ at time $t_0$ to the point $x_1$ at time $t_1$. To compute $\psi(t_1,x_1)$, we first take the startup cost $\psi(t_0,x_0)$ for each point $x_0$. Then we add the cheapest price of a path from $x_0$ at time $t_0$ to $x_1$ at time $t_1$. Then we minimize this over all such paths and all points $x_0$. Note this is exactly like the path integral at the top right of this week’s notes, but with the ‘cost’ or action $S$ replacing the exponentiated action $exp(iS/\hbar)$, addition replacing multiplication, and minimization replacing integration. This is just what we’d expect: classical mechanics is just like quantum mechanics, but with the rig $\mathbb{R}^{min}$ replacing the ring of complex numbers! Starting from what I’ve said, we can write down a differential equation describing the time derivative of the wavefunction $\psi(t,x)$. In the quantum case this is called Schrödinger’s equation; in the classical case it’s called the Hamilton–Jacobi equation. 
If you try to work out how the quantum wavefunction reduces to the classical one in the $\hbar \to 0$ limit, you need to see the ring $\mathbb{C}$, or at least $\mathbb{R}$, as a one-parameter deformation of the rig $\mathbb{R}^{min}$. The formulas for doing this involve exponentials, which is why Urs mentioned logarithms in his answer to your question. It’s all very pretty — and it’s a bit shocking that more people don’t discuss this stuff. So, maybe it’s my duty to figure it out and explain it thoroughly in this course. Posted by: John Baez on January 21, 2007 1:56 AM | Permalink | Reply to this Re: Quantization and Cohomology (Week 10) So can you stitch together a classical trajectory from its pieces, or does it kinda need to know that it’s a limit as $\hbar \to 0$ of a quantum sum? Obviously a particle can’t find a global minimum cost if the initial part of the trajectory would be expensive. Posted by: David Corfield on January 22, 2007 9:14 AM | Permalink | Reply to this Re: Quantization and Cohomology (Week 10) a ‘classical wavefunction’ is a real-valued function $\psi$ on spacetime. We can think of the number $\psi (t, x)$ as the least possible action for a particle to reach the point $x$ at time $t$. Shouldn’t that be an $\mathbb{R}^{min}$-valued function to allow for it being impossible to get to $x$ at time $t$, i.e., infinite action? Posted by: David Corfield on January 25, 2007 9:46 AM | Permalink | Reply to this Re: Quantization and Cohomology (Week 10) David just corrected my discussion of the wavefunction in classical mechanics. I sloppily said it was a real-valued function on configuration space, and he said: Shouldn’t it be an $\mathbb{R}^{min}$-valued function to allow for it being impossible to get to $x$ at time $t$, i.e., infinite action? He’s right! Yesterday my student John Huerta revealed that he’s already learned some stuff about this from James Dolan. 
He pointed out that the classical analogue of a 'position eigenstate' is a wavefunction $\psi: Q \to \mathbb{R}^{min}$ which equals $\infty$ everywhere except at one point. This means it's impossible for the particle to be anywhere except that point. This sort of wavefunction has an amusing upside-down similarity to the 'delta function' used to describe a position eigenstate in quantum mechanics. This comes from the upside-down nature of the Boltzmann homomorphism $E \mapsto exp(-E/k T)$ sending energies to probabilities (though in this discussion we're actually talking about actions and amplitudes — confusing, eh?). Posted by: John Baez on January 26, 2007 12:26 AM | Permalink | Reply to this
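The least-cost evolution and the $\hbar \to 0$ deformation discussed in the comments above can be played with numerically. A minimal sketch (the step costs and grid are toy values we made up, not from the post): min-plus evolution of a 'classical wavefunction', plus the softened minimum $-\hbar \log \sum_i e^{-S_i/\hbar}$ that deforms an ordinary sum back into a minimum:

```python
import math

def softmin(costs, hbar):
    """'Quantum' combination -hbar * log(sum exp(-S/hbar)): an ordinary
    sum deformed by hbar, tending to min(costs) as hbar -> 0."""
    m = min(costs)
    # subtract the minimum first so the exponentials never underflow
    return m - hbar * math.log(sum(math.exp(-(s - m) / hbar) for s in costs))

def evolve(psi, step_cost):
    """One time step of the min-plus 'path integral':
    psi[x1] <- min over x0 of psi[x0] + step_cost[x0][x1].
    An infinite startup cost means 'impossible to start there'."""
    points = range(len(psi))
    return [min(psi[x0] + step_cost[x0][x1] for x0 in points)
            for x1 in points]

# a classical 'position eigenstate': infinite cost everywhere except point 0
psi = [0.0, math.inf, math.inf]
step_cost = [[0.0, 1.0, 4.0],
             [1.0, 0.0, 1.0],
             [4.0, 1.0, 0.0]]
psi = evolve(evolve(psi, step_cost), step_cost)
print(psi)   # [0.0, 1.0, 2.0]: two cheap hops beat one direct cost-4 step
```

Note how the rig change does all the work: replacing (min, +) by (sum, ×) in `evolve` would give exactly the discrete quantum path integral, and `softmin` interpolates between the two regimes.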
Dynamic Single-Pile Nim Using Multiple Bases

In the game $G_{0}$ two players alternate removing positive numbers of counters from a single pile, and the winner is the player who removes the last counter. On the first move of the game, the player moving first can remove a maximum of $k$ counters, $k$ being specified in advance. On each subsequent move, a player can remove a maximum of $f(n,t)$ counters, where $t$ was the number of counters removed by his opponent on the preceding move and $n$ is the preceding pile size, and where $f:N\times N\rightarrow N$ is an arbitrary function satisfying condition (1): $\exists t\in N$ such that for all $n,x\in N$, $f(n,x)=f(n+t,x)$. This note extends our earlier paper [E-JC, Vol 10, 2003, N7]. We first solve the game for functions $f:N\times N\rightarrow N$ that also satisfy condition (2): $\forall n,x\in N$, $f(n,x+1)-f(n,x)\geq -1$. Then we state the solution when $f:N\times N\rightarrow N$ is restricted only by condition (1), and point out that the more general proof is almost the same as the simpler proof. The solutions when $t\geq 2$ use multiple bases.
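The win/loss structure of such move-bounded single-pile games can be explored directly by memoized search over states (pile size, current move bound). A minimal sketch; the choice $f(n,t)=2t$ below gives classical Fibonacci nim — it is not the paper's general $f$, just a familiar special case that trivially satisfies condition (1):

```python
from functools import lru_cache

def dynamic_nim(f):
    """Return a predicate winning(n, limit): can the player to move,
    facing pile size n and allowed to remove 1..limit counters,
    force taking the last counter?"""
    @lru_cache(maxsize=None)
    def winning(n, limit):
        if limit >= n:
            return True  # remove the whole pile and win
        # a move t is good if it leaves the opponent in a losing state;
        # the opponent's new bound is f(previous pile size, t)
        return any(not winning(n - t, f(n, t)) for t in range(1, limit + 1))
    return winning

# f(n, t) = 2t: "remove at most twice what your opponent just removed"
winning = dynamic_nim(lambda n, t: 2 * t)

# With first-move bound k = n - 1, the losing pile sizes are exactly
# the Fibonacci numbers (the classical Fibonacci-nim fact).
losers = [n for n in range(2, 25) if not winning(n, n - 1)]
print(losers)  # [2, 3, 5, 8, 13, 21]
```

Brute-force search like this only classifies individual positions; the note's point is the closed-form description of all losing positions via representations in multiple bases.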
Reversible Watermarking Techniques: An Overview and a Classification
An overview of the reversible watermarking techniques that appeared in the literature during approximately the last five years is presented in this paper. In addition, a general classification of algorithms on the basis of their characteristics and of the embedding domain is given, in order to provide a structured presentation that is easily accessible to an interested reader. Algorithms are assigned to a category and discussed, trying to supply the main information regarding embedding and decoding procedures. Basic considerations on the achieved results are made as well.
1. Introduction
Digital watermarking techniques have been indicated so far as a possible solution when, in a specific application scenario (authentication, copyright protection, fingerprinting, etc.), there is the need to embed an informative message in a digital document in an imperceptible way. Such a goal is basically achieved by performing a slight modification to the original data while trying, at the same time, to satisfy other constraints such as capacity and robustness. What is important to highlight, beyond the way all these issues are achieved, is that this "slight modification" is irreversible: the watermarked content is different from the original one. This means that any subsequent assertion, usage, and evaluation must happen on a, though weakly, corrupted version, if the original data have not been stored and are not readily available. It is now clear that, depending on the application scenario, this cannot always be acceptable. Usually, when dealing with sensitive imagery such as deep space exploration, military investigation and recognition, and medical diagnosis, the end user cannot tolerate the risk of getting distorted information from what he is looking at. One example above all: a radiologist who is checking a radiographic image to establish whether a certain pathology is present or not.
A wrong diagnosis cannot be accepted, firstly to safeguard the patient's health and secondly to protect the work of the radiologist himself. In such cases, irreversible watermarking algorithms clearly appear not to be feasible; due to this strict requirement, another category of watermarking techniques has been introduced in the literature, catalogued as reversible, where this term means that the original content, as well as the watermark signal, is recovered from the watermarked document, so that any evaluation can be performed on the unmodified data. Thus, the watermarking process has zero impact but still allows an informative message to be conveyed. Reversible watermarking techniques are also named invertible or lossless, and were conceived to be applied mainly in scenarios where the authenticity of a digital image has to be granted and the original content is peremptorily needed at the decoding side. It is important to point out that, initially, a high perceptual quality of the watermarked image was not a requirement, since the original one was recoverable, and simple problems of overflow and underflow caused by the watermarking process were not taken into account either. Subsequently, this aspect has also come to be considered basic, to allow the end user to operate on the watermarked image and possibly to resort to the uncorrupted version at a later time if needed. Reversible algorithms can be subdivided into two main categories, as evidenced in Figure 1: fragile and semifragile. Most of the developed techniques belong to the fragile family, which means that the inserted watermark disappears when a modification has occurred to the watermarked image, thus revealing that data integrity has been compromised.
A smaller number, in percentage, are grouped in the second category of semi-fragile, where this term means that the watermark is able to survive a possible unintentional process the image may undergo, for instance, a slight JPEG compression.
Figure 1. Categorization of reversible watermarking techniques.
Such a feature could be interesting in applications where a certain degree of lossy compression has to be tolerated; that is, the image has to be declared authentic even if slightly compressed. Within this last category can also be included a restricted set of techniques, which can be defined as robust, that are able to cope with intentional attacks such as filtering, partial cropping, JPEG compression with relatively low quality factors, and so on. The rationale behind this paper is to provide an overview, as complete as possible, and a classification of reversible watermarking techniques, trying to focus on their main features so as to give readers the basic information needed to understand whether a certain algorithm matches what they are looking for. In particular, our attention has been dedicated to papers that appeared approximately from the years 2004-2005 to 2008-2009; in fact, due to the huge amount of work in this field, we have decided to restrict our view to the latest important techniques. Anyway, we could not forget some "old" techniques that are considered as references throughout the paper, such as [1–3], though they are not discussed in detail. The paper tries to categorize these techniques according to the classification pictured in Figure 1, adding an interesting distinction regarding the embedding domain they work on: spatial domain (pixel) or transformed domain (DFT, DWT, etc.).
The paper is structured as follows: in Section 2, fragile algorithms are introduced and subdivided into two subclasses on the basis of the adopted domain; in Section 3, techniques which provide features of semi-fragility and/or robustness are presented and classified, again according to the watermarking domain. Section 4 concludes the paper. 2. Fragile Algorithms Fragile algorithms cover the majority of the published works in the field of reversible watermarking. With the term fragile we denote a watermarking technique which embeds a code in an image that is not readable anymore if the content is altered. Consequently, the original data are not recoverable either. 2.1. Spatial Domain This subsection is dedicated to presenting some of the main works implementing fragile reversible watermarking by operating in the spatial domain. One of the most important works in this field has been presented by Tian [4, 5]. It presents a high-capacity, high visual quality, reversible data embedding method for grayscale digital images. This method calculates the differences of neighboring pixel values and then selects some of such differences to perform a difference expansion (DE). In such difference values a payload is embedded, composed of: (i) a JBIG compressed location map, (ii) the original LSB values, and (iii) the net authentication payload, which contains an image hash. To embed the payload, the procedure starts by defining two quantities, the average and the difference of a pair of pixel values. The method defines different kinds of pixel couples according to the characteristics of the corresponding changeable and expandable differences; see their definitions below, respectively. Definition 1. For a grayscale-valued pair Definition 2.
For a grayscale-valued pair This is imposed to prevent overflow/underflow problems for the watermarked pixels To embed a bit and (6) for changeable ones by replacing To extract the embedded data and recover the original values, the decoder uses the same pattern adopted during embedding and applies (1) to each pair. Then two sets of differences are created: Tian applies the algorithm to "Lena" (Table 1), where the embedded payload size, the corresponding bitrate, and the PSNRs of the watermarked image are listed. Table 1. Payload size versus PSNR of Lena image. As DE increases, the watermark has an effect similar to mild sharpening in the mid-tone regions. Applying the DE method to "Lena," the experimental results show that the capacity-versus-distortion behavior is better in comparison with the G-LSB method proposed in [2] and the RS method proposed in [1]. The previous method has been taken up and extended by Alattar in [6]. Instead of using difference expansion applied to pairs of pixels to embed one bit, in this case difference expansion is computed on spatial and cross-spectral triplets of pixels in order to increase the hiding capacity; the algorithm embeds two bits in each triplet. With the term triplet, a group of three pixels is intended: (i) Spatial Triplet: three pixel values of the image chosen from the same color component within the image according to a predetermined order. (ii) Cross-spectral Triplet: three pixel values of the image chosen from different color components (RGB). The forward transform for the triplet The value for all the expandable triplets, where expandable means that According to the above definition, the algorithm classifies the triplets into the following groups. In the embedding process, the triplets are transformed using (7) and then divided into the corresponding sets; tests are run on Lena, Baboon, and Fruits. The algorithm is applied recursively to the columns and rows of each color component. The watermark is generated by a random binary sequence. In Table 2, the PSNRs of the watermarked images are shown.
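The basic integer pair transform that underlies these DE methods (Tian [4] and its extensions) can be sketched in a few lines. This is my own minimal rendition of the standard difference-expansion pair transform; the location map, changeable-difference handling, and overflow/underflow checks of the full method are omitted:

```python
def de_embed(x, y, b):
    """Embed bit b into the pixel pair (x, y) by difference expansion."""
    l = (x + y) // 2          # integer average
    h = x - y                 # difference
    h2 = 2 * h + b            # expanded difference carries the bit
    x2 = l + (h2 + 1) // 2    # inverse transform with the new difference
    y2 = l - h2 // 2
    return x2, y2

def de_extract(x2, y2):
    """Recover the embedded bit and the original pair."""
    l = (x2 + y2) // 2        # the average is preserved by the embedding
    h2 = x2 - y2
    b = h2 & 1                # the embedded bit is the parity of the difference
    h = h2 // 2               # floor division restores the original difference
    x = l + (h + 1) // 2
    y = l - h // 2
    return x, y, b
```

The round trip `de_extract(*de_embed(x, y, b)) == (x, y, b)` holds for any integer pair, which is exactly the reversibility the method exploits.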
In general, the quality level is about 27dB with a bitrate of 3.5bits/colored pixel. Table 3 also reports the performance comparison, in terms of capacity, between Tian's algorithm and this one, using the grayscale images Lena and Barbara. Table 2. Embedded payload size versus PSNR for colored images. Table 3. Comparison results between Tian's and Alattar's algorithm. From the results of Table 3, the proposed algorithm outperforms Tian's technique at lower PSNRs. At higher PSNRs, instead, Tian's method outperforms the proposed one. Alattar proposed in [7] an extension of such a technique, to hide triplets of bits in the difference expansion of quads of adjacent pixels. With the term quad, a 2 × 2 group of adjacent pixels is intended. The difference expansion transform, The inverse difference expansion transform, Similarly to the approach previously adopted, quads are categorized as expandable or changeable and treated differently during watermarking; then they are grouped as follows. In the embedding process, the quads are transformed by using (10) and then divided into the sets In the presented experimental results, the algorithm is applied to each color component of three images, Baboon, Lena, and Fruits. Smooth images such as Lena and Fruits produce more expandable triplets with lower distortion than high-frequency images such as Baboon. In particular, with Fruits the algorithm is able to embed 867kbits with a PSNR Baboon the algorithm is able to embed 802kbits or 148kbits achieving a PSNR of The proposed method is compared with Tian's algorithm, using the grayscale images Lena and Barbara. At PSNRs higher than 35dB, the quad-based technique outperforms Tian, while at lower PSNRs Tian marginally outperforms the proposed techniques. The quad-based algorithm is also compared with the method of [2] using grayscale images like Lena and Barbara. Also in this case, the proposed method outperforms Celik [2] at almost all PSNRs. The proposed algorithm is also compared with the previous work of Alattar described in [6].
The results reveal that the achievable payload size for the quad-based algorithm is about 300,000bits higher than for the spatial triplets-based algorithm at the same PSNR; furthermore, the PSNR is about Finally, in [8], Alattar has proposed a further generalization of his algorithm, using difference expansion of vectors composed of adjacent pixels. This new method increases the hiding capacity and the computational efficiency and allows several bits to be embedded in every vector in a single pass. A vector is defined as In this case, the forward difference expansion transform, The inverse difference expansion transform, Similarly to what was done before, the vector expandable if, for all To prevent overflow and underflow, the following conditions have to be respected. On the contrary, the vector changeable if, (14) holds when the expression In the embedding process, the vectors are forward transformed and then divided into the groups The maximum capacity of this algorithm is 1bit/pixel, but it can be applied recursively to increase the hiding capacity. The algorithm is tested with spatial triplets, spatial quads, cross-color triplets, and cross-color quads. The images used are Lena, Baboon, and Fruits; the payload size versus PSNR for spatial triplets is plotted in Figure 4(a). The performance of the algorithm is lower with Baboon than with Lena or Fruits. With Fruits, the algorithm is able to embed 858kb (3.27bits/pixel) with an image quality (PSNR) of 28.52dB, or only 288kb (1.10bits/pixel) with a reasonably high image quality of 37.94dB. On the contrary, with Baboon, the algorithm is able to embed 656kb (2.5bits/pixel) at 21.2dB and 115kb (0.44bits/pixel) at 30.14dB. In the case of spatial quads, the payload size against PSNR is plotted in Figure 4(b). In this case, the algorithm performs slightly better with Fruits, with which it is able to embed 508kb (1.94bits/pixel) with an image quality of 33.59dB, or alternatively 193kb (0.74bits/pixel) with a high image quality of 43.58dB.
Again with Baboon, a payload of 482kb (1.84bits/pixel) at 24.73dB and of only 87kb (0.33bits/pixel) at 36.6dB are achieved. In general, the quality of the watermarked images using spatial quads is better than the quality obtained with the spatial triplets algorithm (the sharpening effect is less noticeable). The payload size versus PSNR for cross-color triplets and cross-color quads is shown in Figures 4(c) and 4(d), respectively. For a given PSNR, the spatial vector technique is better than the cross-color vector method. The comparison between these results demonstrates that the cross-color algorithms (triplets and quads) have almost the same performance on all images (except Lena at PSNRs greater than 30dB). From the results above and from the comparison with Celik and Tian, the spatial quad-based technique, which provides high capacity and low distortion, would be the best solution for most applications. Figure 4. (a) Spatial Triplets, (b) Spatial Quads, (c) Cross-col Triplets and (d) Cross-col Quads. Weng et al. [9] proposed a high-capacity reversible data hiding scheme, to solve the problem, noticed in various watermarking techniques, of consuming almost all the available capacity in the embedding process. Each pixel 5). The so-called companding error is Embedding is performed according to (19) ( Pixel belonging to where 4] and Thodi's one [3] from the capacity-vs-distortion point of view: for instance it achieves In Coltuc [10], a high-capacity low-cost reversible watermarking scheme is presented. The increment in capacity is due to the fact that no particular location map is used to identify the transformed pairs of pixels (as usually happens). The proposed scheme adopts a generalized integer transform for pairs of pixels. The watermark and the correction data, needed to recover the original image, are embedded into the transformed pixels by simple additions. This algorithm can provide, in a single pass of watermarking, bitrates greater than 1bpp.
Let us see how the integer transform is structured. Given a gray-level ( which is basically based on the fact that the relations in (23) (called congruence) hold If a further modification is applied (i.e., watermarking) through an additive insertion of a value In addition, it is important to point out that a non-transformed pair does not necessarily fulfill (23), but it can be demonstrated that there always exists an During watermarking, all pairs which do not cause under/overflow are transformed; on the contrary, non-transformed ones are modified according to (24) to satisfy (23), and the corresponding correction data are collected and appended to the watermark payload. During detection, the same pairs of pixels are identified and then, by checking (23), if the result is In the proposed scheme, the bitrate depends on the number of transformed pixel pairs and on the parameter Baboon performances are sensibly lower. In [11], Coltuc improves the algorithm previously presented in [10]. A different transform is presented: instead of embedding a single watermark codeword into a pair of transformed pixels, the algorithm now embeds a codeword into a single transformed pixel. Equation (27) defines the direct transform, while the inverse transform is given by the following. This time the congruence relation is given by the following. Then the technique proceeds similarly to the previous method by distinguishing between transformed and non-transformed pixels. The hiding capacity is now The proposed algorithm is compared with the previous work [10]. This new technique provides a significant gain in data hiding capacity while, on the contrary, achieving lower perceptual quality in terms of PSNR. Considering the test image Lena, a single pass of the proposed algorithm for 10], but at a lower PSNR (22.85dB compared with 25.24dB). For In Chang et al. [12], two spatial quad-based schemes starting from the difference expansion algorithm of Tian [4] are presented.
In particular, the proposed methods exploit the property that the differences between neighboring pixels in local regions of an image are small. The difference expansion technique is applied to the image row-wise and column-wise simultaneously. Let ( and a message bit and then In the proposed scheme, the host image 6). To establish if a block where true, the test images are F16, Baboon, Lena, and Barbara. The results, in terms of capacity versus PSNR, are compared with three other algorithms, proposed by Thodi, Alattar, and Tian. All methods are applied to the images only once. From the comparison, the proposed algorithm can conceal more information than Tian's and Thodi's methods, while the performance of Alattar's scheme is similar. In general, the proposed scheme is better than Alattar's at low and high PSNRs. For middle PSNRs, Alattar's algorithm performs better. Weng et al. presented in [13] a reversible data hiding scheme based on an integer transform and on the correlation among the four pixels in a quad. Data embedding is performed by expanding the differences between one pixel and each of its three neighboring pixels. The companding technique is adopted too. Given a grayscale image The forward integer transform while the inverse integer transform The watermarking process starts with the transformation (see [9] for details) whose output values are classified into three categories The algorithm is tested and compared with Tian's and Alattar's methods on several images including Lena and Barbara. Embedding rates close to 0.75bpp are obtained with the proposed and Alattar's algorithms without multiple embedding, while multiple embedding is applied to Tian's algorithm to achieve rates above 0.5bpp. From the results, the proposed method presents a PSNR 1–3dB higher than the others with a payload of the same size. For example, considering Lena, in the proposed method an embedding capacity of 0.3bpp is achieved with a PSNR of 44dB, while in Tian the PSNR is 41dB and in Alattar 40dB.
The embedding capacity of 1bpp is achieved with a PSNR of 32dB for the proposed method, while in this case for both Tian and Alattar the PSNR is 30dB. For Baboon, the results show that for a payload of 0.1bpp a PSNR of 44dB, 35dB, and 32dB is achieved for the proposed method, Tian, and Alattar, respectively. In general, the proposed technique outperforms Alattar and Tian at almost all PSNR values. In [14], Ni et al. proposed a reversible data hiding algorithm which can embed about 5–80kb of data for a grayscale image. In Figure 7(a), the histogram of Lena is represented. Figure 7. (a) Histogram of Lena image, (b) Histogram of watermarked Lena image. Given the histogram of the original image, the algorithm first finds a zero point (no value of that gray level in the original image), or a minimum point in case a zero point does not exist, and then the peak point (the gray level of maximum frequency in the original image). In Figure 7(b), the histogram of the marked Lena is displayed. In the embedding process, the value of each pixel (between the minimum and maximum point) is incremented or decremented by 1. In the worst case, In Table 4, results, in terms of PSNR and payload, of an experiment with some different images are shown. Table 4. Experimental results for some different images. 2.2. Transformed Domain In this subsection, works dealing with fragile reversible watermarking operating in the transformed domain are presented. An interesting and simple technique which uses quantized DCT coefficients of the to-be-marked image has been proposed by Chen and Kao [15]. Such an approach resorts to three parameter-adjustment rules: ZRE (Zero-Replacement Embedding), ZRX (Zero-Replacement Extraction), and CA (Confusion Avoidance); the first two are adopted to embed and extract one bit, respectively, while the third one prevents confusion during embedding and extraction. Hereafter, these three rules are listed.
ZRE: embeds one bit into ZRX: extracts one bit from (1) Extract bit 1 from (2) Extract bit 0 from CA: proposed to avoid embedding or extraction errors. (1) In embedding, each (2) In extracting, each To perform embedding, the image is partitioned; tests are run on Lena and Cameraman (payload of Another work based on integer DCT coefficient modification has been proposed by Yang et al. [16]. The reversibility is guaranteed by the integer DCT, a lossless 14]. The integer DCT transform has the property of energy concentration, which can be used to improve the capacity of the histogram modification scheme. The watermarking process starts with dividing the image into Lena In Weng et al. [17], a data hiding scheme based on the companding technique and an improved difference expansion (DE) method is presented. The payload is embedded into the high-frequency bands (LH, HL, and HH) in the integer wavelet transform (IWT) domain, using the companding technique. To solve the overflow/underflow problem after IWT, a method based on histogram modification is proposed. Such an algorithm is based on Xuan's technique [18], which suffered from the problem of overflow/underflow. Weng avoids that problem by interchanging the order of histogram modification and IWT. The advantages are basically an increment in hiding capacity with the PSNR value slightly increased, and an overall PSNR improvement. Watermark embedding is divided into two steps: firstly, the watermark w is embedded into the LSB of a one-bit left-shifted version of a selected IWT coefficient; after that, inverse IWT is applied and the image The extraction process is composed of two stages: in the first one, classification is performed again and DE embedding is inverted till retrieving the original. Tests use Lena and Baboon. Lee et al. [19] proposed a reversible watermarking scheme with high embedding capacity. The input image is divided into non-overlapping blocks, and the watermark is embedded into the high-frequency wavelet coefficients of each block.
To guarantee the reversibility, invertible integer-to-integer wavelet transforms are used, by applying the Lazy wavelet and the lifting construction (finite-length filters), to avoid loss of information through the forward and inverse transforms. The watermark is embedded into the wavelet coefficients using two techniques, LSB-substitution or bit-shifting (specifically, p-bit-shifting). In the first case, the watermark is embedded by replacing the LSB of the selected wavelet coefficient with the watermark bit. where Figure 8 is to be considered: it displays the scheme of the forward and inverse wavelet transform and watermark embedding. Figure 8. Forward and inverse wavelet transform and watermark embedding. First, an 8, it derives that, The underflow or overflow depends on the error During the embedding process, the watermarked image block obtained is Introducing such error, the watermarked image block An image block The proposed algorithm also uses a location map L (a binary matrix) that indicates which blocks are watermarked. This matrix is part of the side information used in the decoding phase, and is embedded during the watermarking process. The decoding algorithm starts by dividing the watermarked image into non-overlapping blocks. Figure 9 shows the quality of the watermarked images at various embedding capacities with block size of Figure 9. Comparison of embedding capacity versus PSNR for some grayscale images. 3. Semi-Fragile and Robust Algorithms In this section, the second category of algorithms, belonging to the class of semi-fragile and robust, is introduced. Such techniques present the characteristic of granting a certain degree of robustness when a specific process is applied to the watermarked image: this means that the image is still asserted as authentic. 3.1. Semifragile Algorithms 3.1.1. Spatial Domain De Vleeschouwer et al. proposed in [20] a semi-fragile algorithm based on the identification of a robust feature of the luminance histogram of an image tile.
As in the patchwork approach, the cover media is tiled into non-overlapping blocks of pixels, each associated to a bit of the embedded message. For a single block, the pixels are equally divided into two pseudorandom sets (i.e., zones A and B), and for each zone the luminance histogram is computed and mapped around a circular support. A weight, proportional to the occurrence of each luminance value, is placed on the corresponding position of the circle, and then a center of mass is calculated and localized with respect to the center of the circle. Since zones A and B are pseudo-randomly determined, it is highly probable that the corresponding centers of mass are located very close to each other. This peculiarity can be exploited to embed a bit by simply rotating the centers of mass of the A and B zones in opposite ways. A clockwise rotation of the By using this approach, it is very easy to determine, during watermark detection, whether a "1" or a "0" bit is embedded in a certain block and, eventually, to remove the mark by counter-rotating the histogram along the circular support. In a real image, some pathological cases can arise when the two centers of mass are not properly positioned and, in general, do not respect the mutual nearness. These cases are statistically negligible and do not significantly affect the available watermark payload. If the histogram is mapped linearly onto the circular support, salt-and-pepper noise can appear because of the abrupt transition of the occurrences from the 255-level to the 0-level and vice versa, even for a small support rotation. To cope with this problem, the histogram can be mapped to the support in an alternative fashion by mapping clockwise the Because of the rearrangement of the histogram on the support, the centers of mass for the A and B zones appear very close to the center of the circle, making watermark detection less reliable.
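The circular-histogram feature at the heart of this method can be sketched numerically. This is my own simplification: 256 luminance bins mapped linearly onto the circle, with the watermark rotation realized as a modular shift of the luminance values.

```python
import numpy as np

def center_of_mass_angle(hist):
    """Angle of the center of mass of a luminance histogram mapped onto a
    circle (bin k -> angle 2*pi*k/256), as in the circular interpretation
    used by De Vleeschouwer et al. [20]."""
    theta = 2 * np.pi * np.arange(hist.size) / hist.size
    cx = np.sum(hist * np.cos(theta))   # weighted resultant, x component
    cy = np.sum(hist * np.sin(theta))   # weighted resultant, y component
    return np.arctan2(cy, cx)

def rotate_histogram(pixels, delta):
    """Rotate the mapped histogram by `delta` bins: add delta to every
    luminance value modulo 256, which rotates the center of mass."""
    return (pixels.astype(np.int32) + delta) % 256
```

Rotating the two zones' histograms in opposite directions then separates their center-of-mass angles, which is what the detector measures.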
In this case, the center of mass computation is substituted by the computation of the minimal inertia axis, which can be detected more easily. This alternative technique makes the salt-and-pepper noise disappear. Both these approaches can cope with acceptable lossy attacks such as cropping (by embedding a synchronization grid) and JPEG compression. The proposed methods show a good robustness, even if the second one, while more appealing from a perceptual point of view, is more fragile to JPEG compression. In Ni et al. [21], an algorithm based on De Vleeschouwer's idea is proposed in order not to be fragile to JPEG compression. This method is based upon an analysis of the differences between couples of pixels belonging to an image tile. An image tile is divided into pixel couples and a sum of the differences of their luminance values (taken in an ad hoc manner) is computed. A statistical analysis shows that this computed value (named
Category 1. The pixel grayscale values of a block under consideration are far enough away from the two bounds of the histogram (0 and 255 for an 8-bit grayscale image). In this category, two other cases are further considered according to the value of (1) The value (2) The absolute value of
Category 2. Some pixel grayscale values of the block under consideration are very close to the lower bound of the histogram (0 for an 8-bit grayscale image). In this category, two other cases are further considered according to the value of (1) The value (2) The value of
Category 3. Some pixel grayscale values of the block under consideration are very close to the upper bound of the histogram (255 for an 8-bit grayscale image). In this category, two other cases are further considered according to the value of (1) The value (2) The value of
Category 4. Some pixel grayscale values of the block under consideration are close to the upper bound, while some pixel grayscale values are close to the lower bound of the histogram.
In this category, two other cases are further considered according to the value of (1) The value (2) The absolute value of Depending on the categories and on the cases the couples of pixels belong to, the difference salt-and-pepper noise; in these cases, no modifications are applied and then an error is inserted. To cope with these insertion errors, the payload is embedded with an Error Correction Code providing sufficient data redundancy. The authors state that BCH(63,7,15) can correct most of the random errors that can be generated during the embedding process. In some cases, errors concentrate in particular regions of the image (bursts of errors), giving the ECC no chance to recover the data. In order to deal with these situations, the authors used a message-bit permutation scheme to redistribute errors over the entire image. Experimental results confirm that a significant enhancement of the data embedding capacity and of the PSNR of the marked image can be achieved with respect to the method proposed in [20]. The images used in the experiments are Lena, Baboon, and Boat: for Lena, with a PSNR of 40.2dB, the capacity is 792bits, but for the other two images the capacity is lower; in fact, for Baboon, with a PSNR of 38.7dB, the capacity is 585bits, while for Boat, with a PSNR of 40.5dB, the payload is 560bits. In particular, robustness is slightly increased, with respect to [20], in the case of a lossy modification like JPEG/JPEG2000 compression at higher compression rates. For severe compression rates, instead, the results of the proposed algorithm are comparable to those presented by De Vleeschouwer. A unified authentication framework based on the proposed methods has been included in the Security part of JPEG2000 (known as JPSEC) IS (International Standard), JPSEC ISO/IEC 15444-8:2007, April 2007. 3.1.2. Transformed Domain Zou et al. [22] proposed a semi-fragile lossless watermarking scheme based on the 5/3 (Le Gall 5/3 filter) integer wavelet transform (Figure 10). Figure 10.
Histogram of the IWT coefficients in the HL sub-band of JPEG2000. From this feature it is possible to deduce that, dividing the considered sub-band into non-overlapping blocks of size n×n and calculating the mean of the coefficient values in each block, the resulting mean values also have a zero-mean Laplacian distribution. The scheme starts by scanning all the blocks to find the maximum absolute mean value of the coefficients, Since S is fixed for all blocks, the original coefficients can be recovered to reconstruct the original image. The reconstructed value is obtained by subtracting 11). Each category is represented by a histogram of the corresponding pixel values of the blocks in the spatial domain. Assuming that the maximum absolute pixel grayscale value (0–255) change is 11(d); in this kind of block it is not possible to embed data (non-embeddable block). If during the embedding phase a bit 1 is embedded, in the detection process the system can extract this value without problems. Problems occur when a bit 0 is detected during the detection process. In this case, the decoder is not able to decide whether a bit 0 has been embedded or the considered block is not embeddable. To solve the problem and correct the errors, an 20]. The PSNRs of the proposed method are all over 38dB. Zou applies the algorithm to Lena (Table 5). Zou's algorithm is also robust to JPEG2000 lossy compression. Table 5. Block size versus capacity (Lena Figure 11. Blocks classification. (a) Type A. (b) Type B. (c) Type C. (d) Type D. Wu [23] proposed a reversible semi-fragile watermarking scheme for image authentication. This algorithm embeds a watermark into 22], for most of the images, the integer wavelet coefficient histograms of the high-frequency sub-bands follow a near zero-mean Laplacian-like distribution. Lena, Baboon, Barbara, Peppers, and so forth, are used. All images have the same size. In Table 6, the PSNRs of six marked images are shown.
The experimental results show that the embedding distortion is small and a good visual quality of the watermarked image is guaranteed. The proposed technique can also resist JPEG lossy compression at a low quality factor. Table 6. PSNR values for some test images. 3.2. Robust Algorithms 3.2.1. Spatial Domain The algorithm presented in [24] is based on histogram modification. Embedding is performed by selecting couples of histogram bins. If the required relation between the bins of a couple does not already exist, the bins are swapped (pixels belonging to the bins are changed accordingly); if an equality occurs between selected bins, they are skipped. Bin couples are individuated according to a public key, which is composed of a real number whose integer and decimal parts are used to determine the initial bin (start) and the distance between the two bins within each couple (step), respectively. Couples are selected sequentially over the histogram, in order to allocate all the message bits. Furthermore, reference side information, which records whether bins are swapped or not, is constructed and passed to the extractor, together with the watermark length and the public key, to allow reversibility. The capacity of this method is quite low. In Coltuc and Chassery [25], a technique based on Reversible Contrast Mapping (RCM), which is a simple integer transform applied to couples of pixels, is presented. RCM is invertible even if the LSBs of the transformed pixels are lost: it can be proved that the inverse transform exactly inverts the forward one even when the LSBs of the transformed pixels are lost, thanks to the use of ceil functions. Due to this property, the LSBs are used for carrying the watermark. For the sake of correctness, it can be said that the ceiling operation is robust to the loss induced by watermarking only if the two pixels of a couple are not both odd. The watermark is composed of the payload and the bits saved in step 3.
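As an illustration (not code from the paper), the RCM forward transform and an LSB-robust inverse can be sketched as follows; the names and the embedding step shown here are hypothetical, and the exclusion of odd-odd pairs is what makes the ceiling-based inverse exact:

```python
import math

def rcm_forward(x, y):
    # forward reversible contrast mapping on a pixel pair
    return 2 * x - y, 2 * y - x

def rcm_inverse(xp, yp):
    # recovers (x, y) even after the LSBs of (x', y') were
    # overwritten by watermark bits, provided x and y were
    # not both odd: first drop the LSBs, then use ceilings
    xp &= ~1
    yp &= ~1
    x = math.ceil((2 * xp + yp) / 3)
    y = math.ceil((xp + 2 * yp) / 3)
    return x, y

# embed: transform, then overwrite both LSBs with watermark bits
x, y = 100, 61                             # not both odd
xp, yp = rcm_forward(x, y)                 # (139, 22)
xp_w, yp_w = (xp & ~1) | 1, (yp & ~1) | 0  # carry bits 1 and 0
print(rcm_inverse(xp_w, yp_w))             # (100, 61): pair restored
```

The actual scheme additionally uses one LSB as a transformed/non-transformed flag and handles saturated pairs separately; this sketch only shows the invertibility property the paragraph above relies on.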
During detection, the image is partitioned again into pairs, and three cases are distinguished according to the LSB values of each pair. It is important to highlight that the embedding of the true LSB of a nontransformed pair happens in a spatially close couple, thus granting a slight robustness in case of cropping, though experimental results on that are not reported within the paper. Further iterations can be applied to augment capacity, at the cost of increasing perceptual distortion. The proposed scheme was tested on several graylevel and color images, among them Lena, Baboon, and Boat. Applying the proposed scheme to Lena without distortion control, a bit-rate of 0.49 bpp is obtained; this is very close to the theoretical upper bound of 0.5 bpp. Further iterations of the scheme increase the hiding bit-rate to 0.98, 1.40, 1.73, and 1.86 bpp. For low and medium bit-rates, a slight increase of contrast can be seen; increasing the hiding capacity, the noise increases as well. For Boat the figure is slightly lower: the maximum hiding capacity is 1.53 bpp. Baboon provides only 0.84 bpp of embedding rate. With a bit-rate of 0.2 bpp, a PSNR of 45 dB is achieved for Lena; PSNRs of 40 dB and 32 dB are achieved with Boat and Baboon, respectively, at a bit-rate of 1 bpp. The technique outperforms other compression-based methods, but it is slightly worse than Tian's difference expansion approach, though it appears less complex. In Coatrieux et al. [26], robustness is achieved by mixing two different approaches: one based on a reversible technique and one based on a robust watermarking method; such an approach is summarized, with regard to the embedding phase, in Figure 12. This technique is basically devoted to dealing with MR (Magnetic Resonance) images, in which it is quite simple to separate the ROI (Region Of Interest), like the head or any anatomical object, from the RONI (Region Of Non Interest), which is the black background area behind the object.
The ability to make such a distinction is fundamental for the system to work, and it is very important to guarantee that the watermarking process does not affect this segmentation in the detection phase. According to what is pictured in Figure 12, there are two protection levels. The first one provides robustness to the watermark extraction, for instance against JPEG compression, by watermarking the RONI with a lossy robust method; the inserted code is composed of an authenticity code and a digital signature derived from the ROI. The second protection level adopts a reversible technique to cast, this time in the ROI, another code depending upon the whole image (marked RONI plus ROI). The global robustness is limited by the fact that a possible attack determines a wrong reconstruction of the ROI, which consequently influences watermark extraction at the first protection level; in the paper, it is asserted that the scheme tolerates a JPEG compression not lower than a certain quality factor. 3.2.2. Transformed Domain In the work presented in [27], a quantization-based approach, named Weighted Quantization Method (WQM), is introduced. Experiments are reported for Lena and Baboon. In Gao and Gu [28], a procedure based on Alattar's difference expansion computed in the wavelet domain is presented. 1-level IWT (Integer Wavelet Transform) is applied to Lena, Boat, and Baboon (the achieved PSNR is 40.2 dB for Boat and 42 dB for Baboon). Image reversibility is granted when no attacks have happened, and watermark robustness is partially provided against cropping, salt-and-pepper noise, and other image damage localized in restricted zones. 4. Conclusions Reversible digital watermarking techniques have so far been adopted in application scenarios where data authentication and original content recovery are required at the same time.
Such techniques have been introduced and a general classification has been provided; some of the main algorithms known in the literature have been presented and discussed, trying to give interested readers an accessible overview of the matter. The work described in this paper has been supported under a grant provided by ASF (Azienda Sanitaria Fiorentina), which is the public entity for health in the Florence area. Sign up to receive new article alerts from EURASIP Journal on Information Security
{"url":"http://jis.eurasipjournals.com/content/2010/1/134546","timestamp":"2014-04-17T12:48:53Z","content_type":null,"content_length":"209453","record_id":"<urn:uuid:0c5b342a-c8fd-48ba-b408-ea9e8c2ee513>","cc-path":"CC-MAIN-2014-15/segments/1397609530131.27/warc/CC-MAIN-20140416005210-00207-ip-10-147-4-33.ec2.internal.warc.gz"}
Homework Help Posted by kelly on Sunday, April 8, 2012 at 7:10pm. An automobile traveling 95 km/h overtakes a 1.10-km-long train traveling in the same direction on a track parallel to the road. Q1: If the train's speed is 75 km/h, how long does it take the car to pass it? Q2: How far will the car have traveled in this time? Q3: What is the time if the car and train are traveling in opposite directions? Q4: How far will the car have traveled if the car and train are traveling in opposite directions? • physics - bobpursley, Sunday, April 8, 2012 at 7:21pm relative speed: 20 km/h distance: 1.1 km time = distance/speed = 1.1/20 hrs • physics - kelly, Sunday, April 8, 2012 at 7:31pm i got answers to some of the questions Q1: 3.3 mins Q2: ? km <- still need help with Q4: ? km still need help with • physics - Elena, Monday, April 9, 2012 at 6:01am It is easier to solve this problem if we introduce the relative velocity of the car (relative to the train). The velocity of the car relative to the train is (95-75) km/h = 20 km/h. In the relative description the train is not moving and the car is moving with constant speed 20 km/h. a. The time the car needs to pass the train is t = length of the train/relative speed = 1.1 km / 20 (km/h) = 0.055 h = 198 s. b. To find the actual traveled distance of the car we just need to multiply the travel time (0.055 h) by the actual speed of the car: L = 95•0.055 = 5.225 km. c. The relative speed is v1 = 95+75 = 170 km/h. The time the car needs to pass the train in this case is t1 = length of the train/relative speed v1 = 1.1 km / 170 (km/h) = 0.00647 h = 23.3 s. d. To find the actual traveled distance of the car we just need to multiply the travel time (0.00647 h) by the actual speed of the car: L = 95•0.00647 = 0.615 km.
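Elena's relative-velocity arithmetic can be double-checked with a few lines of Python (units: km, km/h, hours):

```python
car, train, length = 95.0, 75.0, 1.10   # km/h, km/h, km

# same direction: relative speed is the difference of the speeds
t_same = length / (car - train)         # 0.055 h
d_same = car * t_same                   # 5.225 km

# opposite directions: relative speed is the sum of the speeds
t_opp = length / (car + train)          # ~0.00647 h
d_opp = car * t_opp                     # ~0.615 km

print(f"same direction:     {t_same*3600:.0f} s, {d_same:.3f} km")
print(f"opposite direction: {t_opp*3600:.1f} s, {d_opp:.3f} km")
```

This reproduces the thread's answers: 198 s and 5.225 km for the same direction, about 23.3 s and 0.615 km for opposite directions.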
{"url":"http://www.jiskha.com/display.cgi?id=1333926655","timestamp":"2014-04-18T23:02:40Z","content_type":null,"content_length":"10021","record_id":"<urn:uuid:841cf358-c7da-41f0-9c1b-dbd83c787d3d>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.9/warc/CC-MAIN-20140416005215-00419-ip-10-147-4-33.ec2.internal.warc.gz"}
This very popular converter has low input and output ripple. By controlling the duty cycle, it can be operated as a boost or buck regulator. [Animation: initial state; step 1 – close the switch; step 2 – open the switch; step 3 – close the switch; step 4 – open the switch.] Ćuk converter From Wikipedia, the free encyclopedia (Redirected from Cuk converter) The Ćuk converter (sometimes incorrectly spelled Cuk, Čuk or Cúk) is a type of DC-DC converter that has an output voltage magnitude that is either greater than or less than the input voltage magnitude, with an opposite polarity. It uses a capacitor as its main energy-storage component, unlike most other types of converters, which use an inductor. It is named after Slobodan Ćuk of the California Institute of Technology, who first presented the design in the paper referred to below.^[1] Ćuk is pronounced as "Chook". The spellings Cuk, Čuk and Cúk are common but incorrect: C, Č and Ć are three different letters in Serbian, whereas Ú is simply not used at all. Operating Principle A Ćuk converter comprises two inductors, two capacitors, a switch (usually a transistor), and a diode. Its schematic can be seen in figure 1. It is an inverting converter, so the output voltage is negative with respect to the input voltage. The capacitor C is used to transfer energy and is connected alternately to the input and to the output of the converter via the commutation of the transistor and the diode (see figures 2 and 3). The two inductors L[1] and L[2] are used to convert respectively the input voltage source (V[i]) and the output voltage source (C[o]) into current sources. Indeed, at a short time scale an inductor can be considered as a current source, as it maintains a constant current. This conversion is necessary because if the capacitor were connected directly to the voltage source, the current would be limited only by the (parasitic) resistance, resulting in high energy loss.
Charging a capacitor with a current source (the inductor) prevents resistive current limiting and its associated energy loss. As with other converters (Buck converter, Boost converter, Buck-boost converter) the Ćuk converter can either operate in continuous or discontinuous current mode. However, unlike these converters, it can also operate in discontinuous voltage mode (i.e. the voltage across the capacitor drops to zero during the commutation cycle). Continuous mode In steady state, the energy stored in the inductors has to remain the same at the beginning and at the end of a commutation cycle. The energy in an inductor is given by: $E=\frac{1}{2}L\cdot I^2$ This implies that the current through the inductors has to be the same at the beginning and the end of the commutation cycle. As the evolution of the current through an inductor is related to the voltage across it: $V_L = L\cdot\frac{dI}{dt}$ it can be seen that the average value of the inductor voltages over a commutation period has to be zero to satisfy the steady-state requirements. If we consider that the capacitors C and C[o] are large enough for the voltage ripple across them to be negligible, the inductor voltages become: • in the off-state, inductor L[1] is connected in series with V[i] and C (see figure 2). Therefore V[L1] = V[i] − V[C]. As the diode D is forward biased (we consider zero voltage drop), L[2] is directly connected to the output capacitor. Therefore V[L2] = V[o] • in the on-state, inductor L[1] is directly connected to the input source. Therefore V[L1] = V[i]. Inductor L[2] is connected in series with C and the output capacitor, so V[L2] = V[o] + V[C] The converter operates in on-state from t=0 to t=D.T (D is the duty cycle), and in off state from D.T to T (that is, during a period equal to (1-D).T).
The average values of V[L1] and V[L2] are $\bar V_{L1}=D \cdot V_i +\left(1-D\right)\cdot\left(V_i-V_C\right) =\left(V_i-(1-D)\cdot V_C\right)$ $\bar V_{L2}=D\left(V_o+V_C\right) + \left(1-D\right)\cdot V_o=\left(V_o + D\cdot V_C\right)$ As both average voltages have to be zero to satisfy the steady-state conditions, we can write, using the last equation: $V_C=-\frac{V_o}{D}$ So the average voltage across L[1] becomes: $\bar V_{L1}=\left(V_i+(1-D)\cdot \frac{V_o}{D}\right)=0$ Which can be written as: $\frac{V_o}{V_i}=-\frac{D}{1-D}$ It can be seen that this relation is the same as that obtained for the Buck-boost converter. Discontinuous mode Related structures Inductor coupling Single-Ended Primary Inductance Converter (SEPIC) is able to step-up or step-down the voltage. 1. ^ Ćuk, Slobodan; Middlebrook, R. D. (June 8, 1976). "A General Unified Approach to Modelling Switching-Converter Power Stages" (PDF). Proceedings of the IEEE Power Electronics Specialists Conference (Cleveland, OH): pp. 73-86. http://www.ee.bgu.ac.il/~kushnero/temp/guamicuk.pdf. Retrieved on 2008-12-31. This article is licensed under the GNU Free Documentation License. It uses material from the article "Cuk_converter".
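The continuous-mode transfer ratio derived above is easy to explore numerically; a small sketch (not part of the article):

```python
def cuk_gain(d):
    # ideal continuous-mode Cuk converter: Vo/Vi = -D / (1 - D)
    return -d / (1.0 - d)

# D < 0.5 steps the magnitude down, D > 0.5 steps it up,
# and the output polarity is always inverted
for d in (0.25, 0.50, 0.75):
    vi = 12.0  # example input voltage, volts
    print(f"D = {d:.2f}: Vo = {cuk_gain(d) * vi:+.1f} V")
```

With a 12 V input this prints -4 V, -12 V and -36 V, illustrating the buck, unity and boost regimes of the same topology.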
{"url":"http://www.morpheustechnology.com/eXe_Projects/Power_Supplies/cuk_converter.html","timestamp":"2014-04-20T01:27:18Z","content_type":null,"content_length":"24692","record_id":"<urn:uuid:dded40aa-96f4-46e9-a6e2-19aafee5e837>","cc-path":"CC-MAIN-2014-15/segments/1398223204388.12/warc/CC-MAIN-20140423032004-00124-ip-10-147-4-33.ec2.internal.warc.gz"}
[FOM] R: Is there a compendium of examples of the "power" of deductive logic? [FOM] R: Is there a compendium of examples of the "power" of deductive logic? Antonino Drago drago at unina.it Wed Dec 14 18:11:25 EST 2005 John McCarthy: > It is possible to introduce mass, force, and acceleration separately, > so that F = kma, where k is a constant, becomes experimentally > testable. Later units can be chosen so that k = 1. > Namely: Introduce mass as weights, the key fact being that when two or > more bodies are combined, the mass of the combination is the sum of > the masses of the constituents. It is well known that mass, force and acceleration cannot be introduced separately. Indeed, what John McCarthy introduces is the notion of statical force, but not dynamical force; and the notion of force is just the notion needed for passing from statics to dynamics. Thus, a vicious circle is inherent in F = ma. Newton "solved" the question by appealing to the notion of density, disregarding the fact that beforehand one has to define density independently of mass. Mach tried to solve the question by making use of the third principle for defining mass; the cost is the loss of a principle. One more vicious circle is in the notion of inertial reference frame, which is the frame where the inertial principle holds true. But the inertial principle holds true only in an inertial reference frame. The problem did not belong to Newton's system, which appealed to an absolute space. Thus, he had no need of defining the notion of time independently of the inertia principle; but in the modern formulation of dynamics this definition constitutes one more vicious circle with the inertial frame. About the inertia principle I suggest reading N.R. Hanson: "Newton's First Law: A Philosopher's Door into Natural Philosophy", in R.G. Colodny (ed.): Beyond the Edge of Certainty, Prentice-Hall, 1965, 6-28. A further analysis is my "A Characterization of the Newtonian Paradigm", in P.B. Scheurer, G.
Debrock (eds.): Newton's Scientific and Philosophical Legacy, Kluwer Acad. P., 1988, 239-252. Every time a physical theory starts as a deductive theory, it posits a principle abstract in nature, and then a second principle corrects this abstractness; this configuration is the same one that Berkeley denounced in infinitesimal analysis, i.e. a double fault. The most manifest instance of this double fault in a deductive theory in physics is the theory of thermodynamics, where the second principle corrects the full validity of the first one (equality between heat and work in a cycle) by bounding the conversion of heat to work to a maximum efficiency (hence heat is no longer said to be equal to work, but equivalent to work). Antonino Drago via Benvenuti 5 Castelmaggiore Calci Pisa 56010 tel. 050 937493 fax 06 233242218 -----Original Message----- From: "" <jmc at steam.Stanford.EDU> To: <fom at cs.nyu.edu> Sent: Wednesday, December 14, 2005 7:37 AM Subject: Re: [FOM] Is there a compendium of examples of the "power" of deductive logic? > Introduce force using the static equilibrium of strings pulled by > springs or weights. The key fact is that forces add vectorially. > Acceleration is based on measurements of position and time. Time > intervals are also additive. > _______________________________________________ > FOM mailing list > FOM at cs.nyu.edu > http://www.cs.nyu.edu/mailman/listinfo/fom More information about the FOM mailing list
{"url":"http://www.cs.nyu.edu/pipermail/fom/2005-December/009470.html","timestamp":"2014-04-19T14:52:50Z","content_type":null,"content_length":"6419","record_id":"<urn:uuid:93d3bcda-97cb-4dfa-9071-ddddbdec6f22>","cc-path":"CC-MAIN-2014-15/segments/1397609537271.8/warc/CC-MAIN-20140416005217-00107-ip-10-147-4-33.ec2.internal.warc.gz"}
homotopical category A homotopical category is a construction used in homotopy theory, related to but more flexible than a model category. A homotopical category is a category with weak equivalences in which, on top of the 2-out-of-3 property, the morphisms satisfy the 2-out-of-6 property: • If morphisms $h \circ g$ and $g \circ f$ are weak equivalences, then so are $f$, $g$, $h$ and $h \circ g \circ f$. • The 2-out-of-6 property implies the 2-out-of-3 property, hence every homotopical category is a category with weak equivalences. • Every model category is a homotopical category. • A functor $F : C \to D$ between homotopical categories which preserves weak equivalences is a homotopical functor. Simplicial localization Every homotopical category $C$ "presents" or "models" an $(\infty,1)$-category $L C$, a simplicially enriched category called the simplicial localization of $C$, which is in some sense the universal solution to inverting the weak equivalences up to higher categorical morphisms. This definition is in paragraph 33 of Revised on January 3, 2012 05:26:48 by Mike Shulman
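A quick remark (not on the original page) on why 2-out-of-6, together with the standing assumption that identities are weak equivalences, yields 2-out-of-3:

```latex
% 2-out-of-6 (with identities weakly invertible) implies 2-out-of-3.
\begin{itemize}
  \item \emph{Composition:} for weak equivalences $u\colon A\to B$, $v\colon B\to C$,
        apply 2-out-of-6 to $(f,g,h)=(u,\mathrm{id}_B,v)$: both $h\circ g=v$ and
        $g\circ f=u$ are weak equivalences, hence so is $h\circ g\circ f=v\circ u$.
  \item \emph{Left cancellation:} if $v\circ u$ and $v$ are weak equivalences,
        take $(f,g,h)=(u,v,\mathrm{id}_C)$: $h\circ g=v$ and $g\circ f=v\circ u$
        are weak equivalences, hence so is $f=u$.
  \item \emph{Right cancellation:} if $v\circ u$ and $u$ are weak equivalences,
        take $(f,g,h)=(\mathrm{id}_A,u,v)$: $h\circ g=v\circ u$ and $g\circ f=u$
        are weak equivalences, hence so is $h=v$.
\end{itemize}
```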
{"url":"http://www.ncatlab.org/nlab/show/homotopical+category","timestamp":"2014-04-20T23:30:04Z","content_type":null,"content_length":"65350","record_id":"<urn:uuid:ae799643-24a7-41e5-b2c8-103f1f2a6913>","cc-path":"CC-MAIN-2014-15/segments/1398223202774.3/warc/CC-MAIN-20140423032002-00398-ip-10-147-4-33.ec2.internal.warc.gz"}
builder notation for set September 18th 2009, 06:31 PM builder notation for set for this: {0,3,6,9,12} S={x|x $\in$ N , x is multiply of 3, x $\leq$ 12} S={x|x is multiply of 3 and 0 $\leq$ x $\leq$ 12} Which one is correct and more valid? and how to write for this one: {m,n,o,p} ? September 18th 2009, 10:23 PM Set notation Hello zpwnchen First, let's be clear that the word you want here is multiple, not multiply. Secondly, there are two interpretations of $\mathbb{N}$, the natural numbers: it's either the positive integers, $\{1,2,3,...\}$ or the non-negative integers, $\{0,1,2,3,...\}$. So unless you state which of these two definitions is being used, then it's best to avoid using $\mathbb{N}$ here. The clearest notation to use, then, is $S=\{x|x$ is a multiple of 3 and $0\leq x \leq 12\}$. You could define {m, n, o, p} in a variety of ways; for example, { $\lambda | \lambda$ is a lower-case letter and $\lambda$ follows l and precedes q in alphabetical order}
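As an aside (not part of the thread), set-builder notation has a direct programming analogue, the set comprehension; a Python sketch of the two sets in question:

```python
# S = {x | x is a multiple of 3 and 0 <= x <= 12}
S = {x for x in range(0, 13) if x % 3 == 0}
print(sorted(S))  # [0, 3, 6, 9, 12]

# {m, n, o, p}: lower-case letters after 'l' and before 'q'
T = {chr(c) for c in range(ord('m'), ord('q'))}
print(sorted(T))  # ['m', 'n', 'o', 'p']
```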
{"url":"http://mathhelpforum.com/discrete-math/103038-builder-notation-set-print.html","timestamp":"2014-04-20T14:19:31Z","content_type":null,"content_length":"7679","record_id":"<urn:uuid:c7d5e9df-0193-4026-9dd3-4e64ca524cc2>","cc-path":"CC-MAIN-2014-15/segments/1397609538787.31/warc/CC-MAIN-20140416005218-00468-ip-10-147-4-33.ec2.internal.warc.gz"}
Integer sequence , an integer sequence is a (i.e., an ordered list of terms) of An integer sequence may be specified explicitly by giving a formula for its n-th term, or implicitly by giving a relationship between its terms. For example, the sequence 0, 1, 1, 2, 3, 5, 8, 13, ... (the Fibonacci sequence) is formed by starting with 0 and 1 and then adding any two consecutive terms to obtain the next one: an implicit description. The sequence 0, 3, 8, 15, ... is formed according to the formula n^2 − 1 for the n-th term: an explicit definition. Integer sequences which have received their own name include: See also External links
{"url":"http://fixedreference.org/en/20040424/wikipedia/Integer_sequence","timestamp":"2014-04-18T15:39:25Z","content_type":null,"content_length":"4887","record_id":"<urn:uuid:fe95f2e1-e78c-4549-ad0f-decc4ac755e0>","cc-path":"CC-MAIN-2014-15/segments/1398223201753.19/warc/CC-MAIN-20140423032001-00228-ip-10-147-4-33.ec2.internal.warc.gz"}
Natural born mathematicians Issue 19 Mar 2002 One day old, and already a mathematician When was your very first mathematical thought? At age four? Three? Two? It may surprise you, but it was certainly earlier still - in fact, you were born a mathematician... In the last few years, researchers have become accomplished at finding out what goes on in the minds of tiny children, even new-born babies. This is done either by watching their gaze (looking away indicates familiarity or boredom, staring intently indicates surprise or interest) or by giving them a dummy (the more they suck, the more interested they are). This means that we can tell what expectations babies have in different situations, and when those expectations are violated. What we have learnt is that, amazingly, we all come into this world ready-supplied with basic mathematical "We are born with a core sense of cardinal number", says neuropsychologist Brian Butterworth, author of The mathematical brain, reviewed in this issue of Plus. "We understand that sets have a cardinality, that is, that collections have a number associated with them and it doesn't really matter what the members of that set are. Infants, even in the first week of life, notice when the number of things that they're looking at changes. "If you show babies two things and then another two things - you can change what the things are and vary lots of the visual features of these two things, so it's not that you're showing them the same thing over and over again - they gradually lose interest and start to look away for longer and longer periods. Then you show them a set with threeness, and they become interested again, and then you show them more sets with threeness and they lose interest, and then you show them a set with twoness and they gain interest again." Impressive as this ability is, newborn babies are even more mathematically accomplished. They have arithmetical expectations, says Butterworth. 
"If you show a baby that you're hiding one thing behind a screen, and then you show the baby that you're hiding another thing behind the screen, the baby will expect there to be two things behind the screen, and will be surprised if this expectation is violated." So even before babies can focus their eyes, they are surprised to see a sum with the wrong answer! These core abilities, which Butterworth calls the "number module", may be the foundation of everything we learn about mathematics later in our lives. He speculates on this in The mathematical brain - "but I have to stress that it is speculation, because what we need to know is whether babies use the same bit of brain as adults. Adults use the left parietal lobe for this ability to recognise small cardinalities. If babies use the same bit of brain, then the course of learning more advanced mathematics builds on this core. If it's a different bit of brain, it's back to the drawing board." The notion that children have no mathematical abilities whatsoever until they are old enough to have elements of logical reasoning (four or five years old) is very influential, and was held by the famous educationalist Piaget. Clearly this view isn't correct, but according to Butterworth, some of the mathematical abilities Piaget studied may have deeper aspects that children don't achieve until they're four or five. However, he thinks that "these abilities, such as one-to-one correspondence, are built on a basis which is innately specified. Manipulating sets really does need the achievement of some kind of logical abilities that babies don't have. So maybe Piaget was right in a way, but if he was working today he would see that the child has more going for it when it gets to four or five than simply transitive reasoning, class inclusion, these very general logical ideas, it's also got an primitive idea of cardinality." As a neuropsychologist, Butterworth has seen many patients with bizarre deficits caused by brain damage. 
In fact, some of the earliest clues to the existence of the number module came from such patients. "I came across patients who seemed to be perfectly alright in every other respect except their mathematical ability", he says. "Something happened to their brain, as the result of a stroke usually, and afterwards they seemed to be unable to do mathematics. This is a condition known as acalculia. A lot of people thought that mathematics was just language and we thought that if this was so then how could it be that Mrs G. speaks perfectly well, reasons okay, but can't count above 4? So we started to investigate in a bit more detail, and kept our eyes open for patients with other similar kinds of problems and that's really how we got started. "Recently I've been seeing patients who have terribly disordered language but whose maths is still perfectly good, for example, one guy who has an incredibly striking dissociation. He is unable to understand the simplest words. If you ask him 'what is this?' he can't say 'watch', and if you ask him to point to a watch, he can't do that either. But he is still able to do long multiplication and long division, and to understand the principles behind these operations." Adults thinking about mathematics tend to think about it as something logical, which of course it is, it has its own structure, but it doesn't develop according to that structure in our minds. You might think that you would have to have the concept of zero before developing thinking about sets and cardinalities, but what neuropsychology shows is that this isn't so. The number module isn't something we develop according to some logically consistent scheme, instead it's inbuilt - instinctive, in fact. "The child's acquisition of mathematical ideas actually seems to recapitulate the history of mathematics", says Butterworth. "But it doesn't recapitulate the logic of mathematics. For example, in the history of mathematics, the concept of zero is rather late. 
In the Frege-Russell construction of numbers it's rather early! So I would say that we can reinterpret the history of mathematics in the light of the child's development. We could say that some ideas are very easy, rather straightforward extensions of what the individual was born with, and some ideas are rather more complicated, because they're not so natural. Ideas like probability for example, are not very natural. We're very bad at probability, which of course is why insurance companies and banks are rich! You don't really get a mathematical theory of probability until the seventeenth century. That just reflects that ideas of probability are very difficult." Using my hands teaches me maths Interestingly, and suggestively, there is evidence that early mathematical development is related to certain physical skills. We all start to count on our fingers, and only later do most (but by no means all!) of us abandon our fingers in favour of mental calculation. Butterworth and his colleagues have just started a project looking at people with dyspraxia. "This means they have difficulty in controlling their bodily movements", he explains. "There are degrees of it, mostly dyspraxics are just a bit clumsy. They tend to have particularly poor finger dexterity, and we want to know, what's their maths like? We have anecdotal evidence that these people are worse at maths than the average, both as children and as adults. But we don't know why that is. It might have to do with their manual dexterity or lack of it, or it might have to do with something else. There might be a common cause for a whole range of different difficulties. We want to know if the kinds of difficulties they have are the sorts you would expect them to have if they had problems counting on their fingers when they were little." One particularly interesting case, Butterworth says, concerns a woman with a very rare genetic disorder, who was born with neither hands nor feet. 
She reportedly says that, when doing mental arithmetic, she puts her "imaginary hands" on an imaginary table in front of her and uses them to do the calculation. So it seems that the connection between our hands and our number ability is deeper than we might think at first glance. It's interesting to speculate that hands might be a crucial part of what raises human mathematical ability so far above that of other animals, many of whom are also able to distinguish small cardinalities, but who never develop anything further based on that ability. So far we've only talked about the most basic mathematics - arithmetic and an inbuilt notion of cardinal number. What about more advanced, or adult, mathematical ability? The evidence seems to explain how things can go very wrong - via brain damage or physical problems with dexterity - but what about when things go very right? How come some people are so good at mathematics, and so In Western culture, the most prevalent theory about talent is that it is innate. When someone is outstandingly good at something, we describe them as "gifted", and say they are "naturals". This idea is not so common in other societies, where hard work is seen as the primary reason why some people excel. According to Butterworth, all the evidence supports the hard work theory. He goes so far as to say that the only "statistically significant" indicator of mathematical excellence is the number of hours put in. This seems to suggest that anyone could be a superb mathematician if they are willing to put in the hours - but the truth is slightly more nuanced. The crucial word here is "willing". Butterworth says that "anybody who is a good mathematician is slightly obsessed with maths - or more slightly obsessed - and they put a lot of hours into thinking about it. So they are unusual in that respect. 
But they may be no more unusual than anybody who is very good at what they do, because they have to have a certain obsessiveness or otherwise they're not going to be able to put in the hours to get to this level of expertise. This is true of musicians, it's probably true of waiters. Now, if you start putting in the hours when you are very young, how are we going to tell whether your adult state has got to do with what your brain was like before you started to put in the hours, or what it was like because you put in the hours?"

Butterworth is slightly impatient with this chicken and egg question - which comes first, zeal or hard work? He says that "if, for whatever reason, you start working hard at mathematics when all your classmates don't, then the teacher is going to favour you, so you're going to get external rewards, and you're going to get the internal rewards of being able to do something rather well that your mates aren't so good at, and so you'll start off a virtuous circle of external rewards, internal rewards, you work a bit harder, you get even farther ahead of your classmates, who aren't actually putting in the time. So it wouldn't be surprising that if random people who for some reason select to pursue maths on the whole get rewarded because they are going to be better than their peers."

There are particular cases which give great weight to what we might call the "zeal theory of excellence". Butterworth describes the recent case of Rüdiger Gamm, a German who started to teach himself to become a prodigious calculator in his twenties, because he wanted to win a prize on a TV game show. He won the prize, and became very famous in Germany as a calculator. "He can do wonderful things, because he spent four hours a day since he was twenty working on it, learning new tricks, learning the table of cubes and cube roots, and to the power of four and fourth roots and so on. He learned all the tricks he could find, and worked out tricks for himself."
All that maths has tired me out

So the picture of mathematical ability and its provenance is a nuanced one. Newborn babies, commonly thought to be incapable of anything but eating, sleeping and crying, are actually budding mathematicians. We arrive in this world hardwired with basic number abilities, and very probably everything we learn later in life about mathematics builds on this fundamental core. For some of us, maths will always be difficult, possibly because innate clumsiness made it hard for us to do sums on our hands when we were small. But for the rest of us, how good we end up at maths is mostly to do with how hard we try at it - and that depends on how much we enjoy it.

About this article

Helen Joyce is editor of Plus. For this article Helen interviewed Brian Butterworth, Professor of Cognitive Neuropsychology at University College, London and founding editor of the academic journal "Mathematical Cognition". He has taught at Cambridge and held visiting appointments at the universities of Melbourne, Padua and Trieste, MIT and the Max Planck Institute at Nijmegen. He is currently working with colleagues on the neuropsychology and the genetics of mathematical abilities. You can find out more at his website www.mathematicalbrain.com.
Through simple system studies, researchers are unearthing a new quantum state of matter

Researchers at the University of Pittsburgh have made advances in better understanding correlated quantum matter that could change technology as we know it, according to a study published in the Nov. 20 edition of Nature. W. Vincent Liu, associate professor of physics in Pitt's Department of Physics and Astronomy, in collaboration with researchers from the University of Maryland and the University of Hamburg in England, has been studying topological states in order to advance quantum computing, a method that harnesses the power of atoms and molecules for computational tasks.

Through his research, Liu and his team have been studying orbital degrees of freedom and nano-Kelvin cold atoms in optical lattices (a set of standing wave lasers) to better understand new quantum states of matter. From that research, a surprising topological semimetal has emerged. "We never expected a result like this based on previous studies," said Liu. "We were surprised to find that such a simple system could reveal itself as a new type of topological state: an insulator that shares the same properties as a quantum Hall state in solid materials."

Since the discovery of the quantum Hall effect by Klaus von Klitzing in 1980, researchers like Liu have been particularly interested in studying topological states of matter, that is, properties of space unchanged under continuous deformations or distortions such as bending and stretching. The quantum Hall effect proved that when a magnetic field is applied perpendicular to the direction a current is flowing through a metal, a voltage is developed in the third perpendicular direction. Liu's work has yielded similar yet remarkably different results. "This new quantum state is very reminiscent of quantum Hall edge states," said Liu.
"It shares the same surface appearance, but the mechanism is entirely different: This Hall-like state is driven by interaction, not by an applied magnetic field."

Liu and his collaborators have come up with a specific experimental design of optical lattices and tested the topological semimetal state by loading very cold atoms onto this "checkerboard" lattice. Generally, these tests result in two or more domains with opposite orbital currents; therefore the angular momentum remains at zero. However, in Liu's study, the atoms formed global rotations, which broke time-reversal symmetry: The momentum was higher, and the currents were not opposite. "By studying these orbital degrees of freedom, we were able to discover liquid matter that had no origins within solid-state electronic materials," said Liu.

Liu says this liquid matter could potentially lead toward topological quantum computers and new quantum devices for topological quantum telecommunication. Next, he and his team plan to measure quantities for a cold-atom system to check these predicted quantum-like properties.

not rated yet Nov 21, 2011
Wow, this article was almost impossible to understand, and I've done experiments with the Hall effect before. Are there any editors who look at this stuff before it goes out?

not rated yet Nov 21, 2011
University of Hamburg in England?

not rated yet Nov 21, 2011
The reason this is not understandable is that it is very poorly written. For instance they say: "The quantum Hall effect proved that when a magnetic field is applied perpendicular to the direction a current is flowing through a metal, a voltage is developed in the third perpendicular direction." That is just the Hall effect. The quantum portion of it is that the current produced is quantized (specific values) under low temperature quantum conditions. They continue to chop parts of sentences out without adding all of the content. No wonder it is hard to follow.
not rated yet Nov 21, 2011
University of Hamburg is GERMANY moron writers!

1 / 5 (1) Nov 21, 2011
I understand ya! This article quantum show entirely shifted unexplained. So for the first time, it can be shown (redacted)Hall effect was produced in a nano quantum entangled state without the need for any rare earth magnets or funding evaporating said Professor Liu change physics! more Tsingtao?

not rated yet Nov 21, 2011
I agree with the above. This sounds potentially important, but isn't there diagrams or analogies or SOMETHING that could illustrate the points made here better than this?

5 / 5 (2) Nov 22, 2011
Original press release @ http://www.news.p...mPhysics
NBF article @ http://nextbigfut...-of.html

not rated yet Nov 22, 2011
"I understand ya! This article quantum show entirely shifted unexplained. So for the first time, it can be shown (redacted)Hall effect was produced in a nano quantum entangled state without the need for any rare earth magnets or funding evaporating said Professor Liu change physics! more Tsingtao?"

"More Tsingtao?" haha

And I've got a feeling that the "original" press release bugmenot23 posted has just been updated at Pitt's site after numerous complaints about how bad it was. Hard to believe they got Hamburg wrong. Given that the researchers are working together. Oh, well, maybe it was written by an English major, lol.
If a regular square pyramid has a height of 1, a slant height of 13, and a base edge of 10, what is the lateral edge?
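One way to answer the question above, sketched in JavaScript (all names here are invented for illustration). It assumes the usual definitions: the slant height runs from the apex to the midpoint of a base edge, and the lateral edge runs from the apex to a base corner. The corner sits half a base edge from that midpoint, at a right angle to the slant height, so the Pythagorean theorem applies. Note the given height is not needed; in fact a slant height of 13 with a base edge of 10 forces a height of 12 (a 5-12-13 triangle), so the stated height of 1 looks like a typo in the question.

```javascript
// Lateral edge of a regular square pyramid from its slant height and base edge.
// Right triangle: legs = slant height and half the base edge, hypotenuse = lateral edge.
var slantHeight = 13;
var baseEdge = 10;
var halfEdge = baseEdge / 2;
var lateralEdge = Math.sqrt(slantHeight * slantHeight + halfEdge * halfEdge);
// lateralEdge = sqrt(169 + 25) = sqrt(194), about 13.93
```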
Euclid's Postulates

HPS 0410 Einstein for Everyone

Euclid's Postulates and Some Non-Euclidean Alternatives
John D. Norton
Department of History and Philosophy of Science
University of Pittsburgh

The five postulates on which Euclid based his geometry are:

1. To draw a straight line from any point to any point.
2. To produce a finite straight line continuously in a straight line.
3. To describe a circle with any center and distance.
4. That all right angles are equal to one another.
5. That, if a straight line falling on two straight lines makes the interior angles on the same side less than two right angles, the two straight lines, if produced indefinitely, meet on that side on which are the angles less than the two right angles.

Playfair's postulate, equivalent to Euclid's fifth, was:

5^ONE. Through any given point can be drawn exactly one straight line parallel to a given line.

In trying to demonstrate that the fifth postulate had to hold, geometers considered the other possible postulates that might replace 5^ONE. The two alternatives are:

5^MORE. Through any given point MORE than one straight line can be drawn parallel to a given line.

5^NONE. Through any given point NO straight lines can be drawn parallel to a given line.

Once you see that this is the geometry of great circles on spheres, you also see that postulate 5^NONE cannot live happily with the first four postulates after all. They need some minor adjustment:

1'. Two distinct points determine at least one straight line.
2'. A straight line is boundless (i.e. has no end).
Each of the three alternative forms of the fifth postulate is associated with a distinct geometry:

                               Spherical Geometry           Euclidean Geometry     Hyperbolic Geometry
Curvature                      Positive curvature           Flat                   Negative curvature
Fifth postulate                Postulate 5^NONE             Euclid's Postulate 5   Postulate 5^MORE
Straight lines                 Finite length; connect       Infinite length        Infinite length
                               back onto themselves
Sum of angles of a triangle    More than 2 right angles     2 right angles         Less than 2 right angles
Circumference of a circle      Less than 2π times radius    2π times radius        More than 2π times radius
Area of a circle               Less than π(radius)^2        π(radius)^2            More than π(radius)^2
Surface area of a sphere       Less than 4π(radius)^2       4π(radius)^2           More than 4π(radius)^2
Volume of a sphere             Less than 4π/3(radius)^3     4π/3(radius)^3         More than 4π/3(radius)^3

In very small regions of space, the three geometries are indistinguishable. For small triangles, the sum of the angles is very close to 2 right angles in both spherical and hyperbolic geometries.

For convenience of reference, here is the summary of geodesic deviation, developed in the chapter "Spaces of Variable Curvature". The effect of geodesic deviation enables us to determine the curvature of space by experiments done locally within the space and without need to think about a higher dimensioned space into which our space may (or may not) curve.

geodesics converge                   positive curvature
geodesics retain constant spacing    zero curvature, flat (Euclidean)
geodesics diverge                    negative curvature

Copyright John D. Norton. December 28, 2006.
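The comparison's claim that a spherical triangle's angles sum to more than 2 right angles can be checked with a little vector arithmetic. A sketch in JavaScript (not part of the notes; the helper names are invented): it builds the "octant" triangle on the unit sphere, whose three vertex angles are each a right angle, so their sum is 3 right angles, exceeding the Euclidean 2.

```javascript
// Vector helpers on R^3.
function dot(u, v) { return u[0]*v[0] + u[1]*v[1] + u[2]*v[2]; }
function scale(u, s) { return [u[0]*s, u[1]*s, u[2]*s]; }
function sub(u, v) { return [u[0]-v[0], u[1]-v[1], u[2]-v[2]]; }
function normalize(u) { return scale(u, 1 / Math.sqrt(dot(u, u))); }

// Interior angle at vertex a between the great-circle arcs a->b and a->c:
// project b and c onto the tangent plane at a, then measure the angle between them.
function vertexAngle(a, b, c) {
  var tb = normalize(sub(b, scale(a, dot(a, b))));
  var tc = normalize(sub(c, scale(a, dot(a, c))));
  return Math.acos(dot(tb, tc));
}

// Octant triangle: three mutually perpendicular points on the unit sphere.
var A = [1, 0, 0], B = [0, 1, 0], C = [0, 0, 1];
var sum = vertexAngle(A, B, C) + vertexAngle(B, A, C) + vertexAngle(C, A, B);
// sum is 3*pi/2 (three right angles), which is more than pi (two right angles).
```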
Is square of Delta function defined somewhere?

Hello, everyone. I am wondering whether anyone knows whether the square of the Dirac delta function is defined somewhere? At first this question might look strange, but by restricting the space of the test functions, I think it is still possible. For example, in order to make sense of $\delta_0^2$, we can think of it as the limit of $\frac{e^{-x^2/t}}{2\pi t}$ as $t\rightarrow 0_+$. Now choose the test function $f(x)=x^2$. It is clear that $$ \int_{-\infty}^{\infty} x^2 \frac{e^{-x^2/t}}{2\pi t}\, dx = \frac{1}{2\sqrt{\pi t}} \int_{-\infty}^{\infty} x^2 \frac{e^{-x^2/t}}{\sqrt{\pi t}}\, dx = \frac{1}{2\sqrt{\pi t}} \cdot \frac{t}{2} = \frac{\sqrt{t}}{4\sqrt{\pi}}\;. $$ Letting $t$ tend to $0$, we get $\langle\delta_0^2,f\rangle=0$ in this case. So we can restrict, for example, to test functions that tend to 0 at least as fast as $x^2$. I don't want to reinvent the whole theory if it already exists; otherwise, I might work out all the details myself. Thank you in advance for any hints.

EDIT: Here are some references that I found to be useful.

1: Mikusiński, J. On the square of the Dirac delta-distribution. (Russian summary) Bull. Acad. Polon. Sci. Sér. Sci. Math. Astronom. Phys. 14 (1966), 511–513.

2: Ta Ngoc Tri, The Colombeau theory of generalized functions, Master's thesis, 2005.

fa.functional-analysis real-analysis differential-equations fourier-analysis

There is the Colombeau algebra (en.wikipedia.org/wiki/…). Though I have no idea what applications it has. – Rbega Dec 2 '10 at 17:29

I am no expert, but my understanding is that the Dirac delta function is not a function but a distribution. Furthermore, multiplication of distributions is not defined, and this is the cause of much frustration in quantum field theory. – Bruce Westbury Dec 2 '10 at 17:35

@Bruce the multiplication of distributions is sometimes defined.
If you are interested you should look up wavefront sets; one good reference is Chapter 5 of math.mit.edu/~rbm/18.157-F09/18.157-F09.html, see e.g. Prop 5.12. For example, this would make rigorous the notion that the product of a delta function at $0$ and a delta function at $x\neq 0$ should be $0$. However, you are right that the product of two delta functions is not well defined as a distribution. However, I think that @Anand is asking about restricting the domain of distributions to allow it to exist. – Otis Chodosh Dec 2 '10 at 17:47

@Otis. Thank you for your reference. I will have a look. Yes, as you said, I intend to restrict the space of the test functions in order to give a rigorous definition of $\delta_0^2$. – Anand Dec 2 '10 at 17:54

@Anand: I think you have to clarify what kind of object you want your square of $\delta$ to be. E.g. restricting the test functions does not give you something like $\delta^2$ as long as you are looking for a linear functional (which you probably don't). – Dirk Dec 2 '10 at 18:58

9 Answers

When L. Schwartz "invented" distributions (actually, he only invented the mathematical theory as a part of functional analysis, because distributions were already used by physicists), he proved incidentally that it is impossible to define a product in such a way that distributions form an algebra with acceptable topological properties. What is possible is to define the product of distributions when their wave front sets do not meet. This is why $fT$ makes sense if $T$ is a distribution and $f$ is $C^\infty$, for instance, because the front set of $f$ is void. But you can also multiply that way genuine distributions; for instance in $\mathbb R^2$, $\delta_{x=0}=\delta_{x_1=0}\delta_{x_2=0}$.

J.-F. Colombeau invented in the 70's an algebra of generalized functions, which has something to do with distributions.
But each distribution has infinitely many representatives in the accepted algebra, and you have to play with the equality and a "weak equality" (or "association"). I don't know of an example where this tool solved an open problem. In Colombeau's algebra, the square of $\delta_0$ makes sense, but is highly non-unique.

@Prof. Denis Serre, thank you very much. Your explanation is very clear, especially the case $\delta_{x=0}=\delta_{x_1=0}\delta_{x_2=0}$ in $\mathbb R^2$. I took these results for granted before; now I know the reason. By the way, do you have a good reference on the topic of wave front sets? I encountered this topic in Hörmander's books, which seem too difficult. Thank you very much! :-) – Anand Dec 3 '10 at 14:08

$\delta_0$ vanishes identically on the space of test functions you've defined. So it's not surprising that its square is well-defined: $0\cdot 0 = 0$. I suspect you'll have a much harder time defining $\delta_0^2$ on test functions which don't vanish at $0$.

My intention is just to make $\delta_0^2$ rigorous by restricting the test functions to vanish at 0. :-) – Anand Dec 2 '10 at 17:58

Yes, but you've thrown the baby out with the bathwater. – userN Dec 2 '10 at 17:59

I think that you can restrict to functions which approach their value at $0$ like $x^2$, instead of functions which are zero at $0$, e.g. $x^2 + 3$. I don't think this leads to anything interesting, but @A.J.'s objection might not be fatal. – Otis Chodosh Dec 2 '10 at 18:08

@Otis, thanks for your comments. Since I meet this problem in my research, I need something rigorous, even if it is not very interesting. :-) – Anand Dec 2 '10 at 18:11

@Anand I think that you should edit your question to describe the properties of $\delta_0^2$ you would like.
A.J.'s argument basically shows that if you want to define $\delta^2 (f)$ to be $\lim_{t\to 0} \int_{-\infty}^\infty f e^{-x^2/t}/\sqrt{2\pi t}$ for some class of $f$, then basically the only thing you can do is to define it to be zero for functions which vanish faster than $x^2$. Perhaps you should explain why you would like to define $\delta_0^2$ and what properties you would like from such a definition. Otherwise, I doubt that anyone can say anything more useful than A.J.'s answer. – Otis Chodosh Dec 2 '10 at 18:19

There are whole theories in microlocal analysis that deal with the issues here, I believe. Some heuristics are that the "singular support" of a distribution controls what it can be multiplied by in a naive sense (distributions with a disjoint singular support). So squaring the delta function is the first bad case - whatever the singular support means, it must be the set containing 0 for the delta function. Need more heuristics. One insight is that one dimension may be too few to show the real picture. "Microlocal" tends to mean localising in (co)tangential directions, and one dimension offers only two. Hyperfunctions in the case of one dimension make something of this by considering the real line as the boundary of the upper half complex plane. I.e. up is not the same as down. Boundary values of functions holomorphic in the upper half plane have a candidate for the delta function analogue: take 1/z. No problem squaring that. More of a problem saying what this analogy means that is worth anything. Mikio Sato did that. Now I shall be quiet, because this is probably already wrong enough.

@Charles, thank you very much for your comments. It is very interesting, but it is too hard for me to understand. I wish I could understand something about microlocal analysis one day.
:-) – Anand Dec 2 '10 at 21:38

The process of squaring holomorphic functions on half-planes does not yield hyperfunctions that behave like squares, even when restricting to the regular case. – S. Carnahan♦ Dec 3 '10 at

The extent to which multiplication of distributions is defined was examined by Richards & Youn, and some of the results are in their short and fairly elementary joint book on distributions. One can multiply something fairly exotic like the third derivative of the delta function by a very well-behaved function; that much everybody knows. But I think they had a result that as one factor becomes progressively less well-behaved the other must become more well-behaved in order to make multiplication possible. I don't recall the details, but I'm pretty sure theirs is not the last word on the subject.
Including the fact that the "tempered" distributions---those that don't grow too fast at $\pm\infty$---are closed under the Fourier transform. – Michael Hardy Dec 2 '10 at 20:51 Thanks BR and Michael Hardy. :-) – Anand Dec 2 '10 at 21:31 add comment I'd like to point out that several of the concepts mentioned here are explained on the nLab: multiplication of distributions, up vote 3 down vote while several are missing and the parts on microlocal analysis and hyperfunctions could use some help . @Tim van Beek. Thank you very much for the reference. It looks like what I am looking for...:-) – Anand Dec 3 '10 at 14:03 add comment The theory of distributions and operations on them are generally only useful in so far as they extend the operations on smooth functions. If you look in Hörmander, there is a criterion in terms of wavefront sets which is very useful (mentioned by others), and you'll also notice that the wavefront sets of $\delta$ and $\delta$ collide. The reason you can't square the delta-function is that when you approximate it by smooth functions, there is no unique limit. If you wanted to restrict to a smaller space of test functions, you would clearly have to up vote consider test functions which vanish at the origin in some way. But do you have a particular purpose in mind for this question? 1 down vote EDIT: Sorry -- this was supposed to be a comment, not an answer. @Phil Isett, I am doing stochastic heat equation. If the initial data is a Delta distribution, then the second moments of the solution is well defined for all $t>0$, but for time $t=0$, the second moment will be formally certain Delta square. This is why I encounter this problem. Thanks you for your comment. :-) – Anand Jul 21 '11 at 9:10 add comment Denis Serre's answer is just perfect. 
Let me add a couple of examples of distributions that can be squared: (1) With $H$ the Heaviside function, define $Log(x+i0)=\ln(\vert x\vert)+i\pi H(-x) $ and $$T_1=\frac{1}{x+i0}=\frac{d}{dx}(Log(x+i0))=pv\frac 1{x}-i\pi \delta_0(x).$$ It is easy to see that $ WF T_1=[0]\times (0,+\infty), $ so that $WF T_1+WF T_1$ does not meet 0. Then there is no difficulty to define $T^2$ say as $$ \langle T^2,\phi\rangle=\lim_{\epsilon\rightarrow 0_+}\ int\frac{\phi(x) dx}{(x+i\epsilon)^2}. $$ up vote 1 down vote (2) Let us consider a smooth hypersurface $\Sigma$ of $\mathbf R^d$ defined by the equation $f(x)=0$ with a smooth $f$ such that $df\not=0$ at $f=0$ and let $\delta_\Sigma$ be the Euclidean measure on $\Sigma$. Then $$ T_2=pv\frac{1}{f}-i\delta_\Sigma $$ can be squared. The reason is the same than for the previous example, since $ WF T_2$ is the positive conormal of $\Sigma$. A point $(x,\xi)\in WF T_2$ iff $$ x\in \Sigma\quad \xi =\lambda df(x) \text{ with $\lambda >0$}. $$ Then of course, if $(x,\xi_j)$, $j=1,2$ are both in $WF T_2$ then $$ \xi_1+\xi_2\not=0. add comment I've seen the idea of it used in image processing for denoising; the total variation energy $E_{TV}(f) = \displaystyle\int\left(|\nabla f| +(f-u)^2\right)$ is generally used instead of Tikhonov regularization up vote 0 down $E_{Tikhonov}(f) = \displaystyle\int\left(|\nabla f|^2 +(f-u)^2\right)$ as the latter never has a discontinuous solution (since the integral would be infinite). I don't remember how rigorously this idea was developed - "Mathematical Problems in Image Processing" by Aubert and Kornprobst was the textbook I used at the time, but there are probably some more recent references in the field. add comment Check out the papers by Accardi and Boukas [added by S. 
Carnahan: The relevant part of their first ArXiv paper is that they regularize powers of $\delta$, not by setting $\delta^n(x) = c_n \delta(x)$ for some real $c_n$ (which they up vote 0 find to work poorly for their purposes), but by using two variables and setting $\delta(t-s)^n = \delta(s)\delta(t-s)$ for all $n \geq 2$. This definition allows them to define certain down vote representations of Lie algebras by Fock space methods. As far as I can tell, this does not yield a workable definition of $\delta^2$ for analysis on the real line.] 2 I was more or less ready to delete this non-answer, but I might as well summarize what I found by Googling. – S. Carnahan♦ Aug 18 '13 at 16:46 add comment Not the answer you're looking for? Browse other questions tagged fa.functional-analysis real-analysis differential-equations fourier-analysis or ask your own question.
FAQ 3.5. - How do I convert a Number into a String with exactly 2 decimal places?

When formatting money, for example: to format 6.57634 to 6.58, 6.5 to 6.50, and 6 to 6.00?

Rounding of x.xx5 is uncertain, as such numbers are not represented exactly. N = Math.round(N*100)/100 only converts N to a Number of value close to a multiple of 0.01, but document.write(N) does not give trailing zeroes.

ECMAScript Ed. 3.0 (JScript 5.5 [but buggy] and JavaScript 1.5) introduced N.toFixed; the main problem with this is the bugs in JScript's implementation. Most implementations fail with certain numbers, for example 0.07.

The following works successfully for M>0, N>0:

  function Stretch(Q, L, c) {
    var S = Q
    if (c.length>0) while (S.length<L) { S = c+S }
    return S
  }

  function StrU(X, M, N) { // X>=0.0
    var T, S = new String(Math.round(X*Number("1e"+N)))
    if (S.search && S.search(/\D/)!=-1) { return ''+X }
    with (new String(Stretch(S, M+N, '0')))
      return substring(0, T=(length-N)) + '.' + substring(T)
  }

  function Sign(X) { return X<0 ? '-' : ''; }

  function StrS(X, M, N) { return Sign(X)+StrU(Math.abs(X), M, N) }

  Number.prototype.toFixed = new Function('n', 'return StrS(this,1,n)')

http://www.merlyn.demon.co.uk/js-round.htm
http://msdn.microsoft.com/library/en...mthToFixed.asp

Postings such as this are automatically sent once a day. Their goal is to answer repeated questions, and to offer the content to the community for continuous evaluation/improvement. The complete comp.lang.javascript FAQ is at The FAQ workers are a group of volunteers.
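A minimal sketch of the same idea for current engines (not from the FAQ; the helper name toFixed2 is invented for illustration). It sidesteps binary floating-point display surprises by rounding in whole "cents" and padding the fraction by hand; on modern engines the built-in n.toFixed(2) is also fine.

```javascript
// Format a Number with exactly 2 decimal places, e.g. for money.
function toFixed2(n) {
  var sign = n < 0 ? "-" : "";
  var cents = Math.round(Math.abs(n) * 100);  // work in integer cents
  var whole = Math.floor(cents / 100);
  var frac = String(cents % 100);
  if (frac.length < 2) frac = "0" + frac;     // pad "7" to "07"
  return sign + whole + "." + frac;
}
// toFixed2(6.57634) -> "6.58"
// toFixed2(6.5)     -> "6.50"
// toFixed2(6)       -> "6.00"
// Halfway inputs like 1.005 still follow the stored binary value,
// as the FAQ's first paragraph warns.
```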
Duncanville, TX Algebra 2 Tutor Find a Duncanville, TX Algebra 2 Tutor I have been helping students overcome their fears and doubts about mathematics for over ten years. I have a Chemical Engineering degree from The University of Texas at Austin and I am Texas certified in Mathematics grades 4-8 and 8-12. I have experience teaching grades 5, 6, 7, 8, Algebra I, Algebra II, Geometry, Pre-Calculus and Calculus. 9 Subjects: including algebra 2, chemistry, calculus, geometry ...I look forward to helping you conquer this test and move on to reaching your goals. I have been teaching English for over 5 years in Mexico and the US. I am a full time tutor in the subject, so you, my student, are my full time job. 29 Subjects: including algebra 2, reading, Spanish, GRE ...I do not have experience with Microsoft ACCESS. I can help with query strategies and with syntax of most query statements. I am a certified tutor in all math topics covered by the ASVAB. 15 Subjects: including algebra 2, chemistry, physics, calculus ...I prefer regular sessions so that I can observe a student's learning style and adapt accordingly. I believe it is important to build a relationship with the student first because they say "no one cares how much you know until they know how much you care." I have always worked very hard in my cou... 7 Subjects: including algebra 2, algebra 1, SAT math, ACT Math ...This is where I taught students with learning disabilities how to pass the STAAR Test. I have taught Algebra I and Geometry, but am certified to teach any math in public schools. I am a Texas certified teacher in 4th grade through 12th grade mathematics. 
10 Subjects: including algebra 2, geometry, statistics, algebra 1
{"url":"http://www.purplemath.com/Duncanville_TX_algebra_2_tutors.php","timestamp":"2014-04-20T06:34:59Z","content_type":null,"content_length":"24148","record_id":"<urn:uuid:081b6991-5b41-43cb-aee5-4add014bc82b>","cc-path":"CC-MAIN-2014-15/segments/1397609538022.19/warc/CC-MAIN-20140416005218-00566-ip-10-147-4-33.ec2.internal.warc.gz"}
Sets of equations

Let $I,J,K$ be three non-void sets, and let $\gamma : I\times J\times K\rightarrow\mathbb{N}$. Is there some nonempty set $X$, together with some functions $\{ f_{i}:X\rightarrow X; i\in I\}$, some subsets $\{\Omega_{j}\subset X; j\in J\}$, and some points $\{p_{k}\in X; k\in K\}$ s.t. $\mid f_{i}^{-1}(p_{k})\cap\Omega_{j}\mid=\gamma(i,j,k)$ for all $i\in I, j\in J, k\in K$, and $\mid f_{i}^{-1}(p)\mid\leq\mid\mathbb{R}\mid$ for all $i\in I, p\in X$? In other words, is $\gamma$ ''representable'' as the number of solutions of some ''reasonable'' equations? [An elementary problem, indeed.]

Try first for I of size 1. Put the (generally countable) preimage of p_k as a row, so you have a K x \omega array. Now pick \gamma(j,k) elements from the kth row and put them in \Omega_j. This gets you f_1 as defined on the first array. For larger I and the ith f, use a disjoint K x \omega array and repeat. Define f_h(x) for x in the ith array (for h not equal i) to be the 1st element that is not any p_k and is in the same row as x. All preimages are countable, although K and \Omega_j may not be. Gerhard "Ask Me About System Design" Paseman, 2011.04.17 – Gerhard Paseman Apr 17 '11

3 Answers

Here is a second attempt (see edit history for previous version). For each $t\in\mathbb{N}$, let $$P_{i,j,k,t}=\{1_{i,j,k,t},\ldots,n_{i,j,k,t},\ldots,\gamma(i,j,k)_{i,j,k,t}\}$$ (so that for each choice of $i\in I$, $j\in J$, $k\in K$, and $t\in\mathbb{N}$, we have a disjoint set of size $\gamma(i,j,k)$). For each $t\in\mathbb{N}$, let $$Q_t=\{a_{k,t}\mid k\in K\}$$ (so for each $t\in\mathbb{N}$, this is just a copy of $K$, up to relabeling).
Let $$X=\coprod_{t\in\mathbb{N}}\left(Q_t\coprod_{\substack{i\in I,j\in J\\ k\in K}}P_{i,j,k,t}\right).$$ Define $$\Omega_j=\coprod_{i\in I,k\in K}P_{i,j,k,1}\subset X,$$ and $f_i:X\rightarrow X$ by $$f_{i_0}(n_{i,j,k,t})=\begin{cases}a_{k,1}\text{ if }i=i_0,t=1\\ n_{i,j,k,t+1}\text{ otherwise}\end{cases}$$ $$f_i(a_{k,t})=a_{k,t+1}$$ Thus $$f_{i}^{-1}(n_{i,j,k,t})=\begin{cases}\emptyset\text{ if }t=1,2\\ \{n_{i,j,k,t-1}\}\text{ if }t>2\end{cases}$$ $$f_i^{-1}(a_{k,t})=\begin{cases}\coprod_{j\in J}P_{i,j,k,1}\text{ if }t=1\\ \{a_{k,t-1}\}\text{ if }t>1\end{cases}$$ We choose $p_k=a_{k,1}$. Thus $f_i^{-1}(p_k)\cap \Omega_j=P_{i,j,k,1}$, so $|f_i^{-1}(p_k)\cap\Omega_j|=\gamma(i,j,k)$. Unfortunately this still doesn't address your size concerns, i.e. the preimage of any element of $X$ being countable, because if $J$ is uncountable then $f_i^{-1}(a_{k,1})$ is uncountable (I added the whole mess with the $t$'s to make the preimages of all the other elements countable). I'll leave this as a community wiki, and if anyone sees a way of fixing it they are welcome to edit this.
He is asking for each preimage under f_i to be not larger [in size] than the power of the continuum. – Ady Apr 17 '11 at 8:45 Good. I hope he (or perhaps other than he, forgive my presumption) likes the ideas in the comment I made to the question. I think that keeps the preimage size down. Gerhard "Ask Me About System Design" Paseman, 2011.04.17 – Gerhard Paseman Apr 17 '11 at 16:42 add comment This is basically a detailed description of a solution, based on Gerhard's answer. Let $X=\mathbb{N}_0 \times I_0 \times K $, where $\mathbb{N}_0$ includes $0$ and similarly $I_0=I\cup 0$ with $0\not \in I$. Let $p_k =(0,0,k)$, and let $\displaystyle \Omega_j=\bigcup_i \bigcup_k \bigcup_{n=1}^{\gamma_{ijk}} (n,i,k)$. Define $f_i(n,i,k)=(0,0,k)=p_k$ and $f_i(n,i',k)=(n+1,i',k)$ for $i'\neq i$. We add $1$ to $n$ so that $f_i(p_k)\neq p_k$. up vote Note that for $x\neq p_k$, $|f_i^{-1}(x)|\leq 1$. On the other hand, $f_i^{-1}(p_k)=\mathbb{N} \times \{i\}\times \{k\} $, and then $f_i^{-1}(p_k)\cap \Omega_j =\{(n,i,k)\mid 1\leq n\leq \ 2 down gamma_{ijk} \}$, which has the desired cardinality $\gamma_{ijk} $. Remark: In a comment to Gerhard's answer, Ady says "I think you're underestimating the size of J." The size of $J$ actually plays no role in this problem as stated, as the sets $\Omega_j$ may overlap (and will overlap a lot in my construction). If you want to require the $\Omega_j$ to be disjoint, note that the other conditions force $|J|\leq |\mathbb{R}|$, as $\left|\ bigcup_j \left( f_i^{-1}(p_k)\cap \Omega_j \right)\right| \leq |f_i^{-1}(p_k)|\leq |\mathbb R|$. We can modify the construction above by replacing $\mathbb N$ with $\mathbb R$, and can assure that the $\Omega_j$ are disjoint if we are more careful in choosing which $\gamma_{ijk}$ elements of $\mathbb{R}\times\{i\}\times\{k\}$ to include in $\Omega_j$ (above I chose the points $(n,i,k)$ for $n$ from $1$ to $\gamma_{ijk}$). Thanks for the clarification. 
Not only does adding 1 to n keep f_i(p_k) from being p_k, it keeps it from being p_l for any l in K. I hope this clears things up for Ady. Gerhard "Ask Me About System Design" Paseman, 2011.04.18 – Gerhard Paseman Apr 18 '11 at 9:01 +1 I hereby give you my second upvote. (I would give you+ 7, but then others might think me strange in a bad, unfriendly way.) Gerhard "Ask Me About System Design" Paseman, 2011.04.19 – Gerhard Paseman Apr 20 '11 at 6:20 add comment Consider the following construction. Let $Y$ be a subset of $X$ such that $Y$ is (equipollent to) $I \times K \times \omega$. I think of it as $I$-many copies of an array with $K$-many rows and each row has countably many elements. The $k$th row in the $i$th array is the preimage of $p_k$ under $f_i$. (For $h$ not equal to $i$, let $f_i$ send the $k$th row in the $h$th array to, say, the first element in that row, or perhaps instead to some subset of elements in that row, under the condition that those images are disjoint from the set of $p_k$.) For the sets $\ Omega_j$, pick precisely $\gamma(i,j,k)$ elements from the $k$th row in the $i$th array and put them into $\Omega_j$. So far, we have achieved that the preimage of every point in the range of $f_i$ is at most countably infinite, for every $i$. We also have the desired condition on the intersection of the preimage of $p_k$ under $f_i$ with the set $\Omega_j$. up vote Now everything is done except for deciding where to put the $p_k$. As long as you avoid sending $f_i(x)$ to a $p_k$ for $x$ outside the ith array, you can label some of the array elements 1 down with $p_k$; this should be doable because you have control of how $f_i$ acts outside the $i$th array. Alternatively, let the $p_k$ be disjoint from $Y$ and the $\Omega_j$, and let $f_i$ send the $p_k$ to themselves, or to some other set disjoint from the $\Omega_j$. Gerhard "Ask Me About System Design" Paseman, 2011.04.17 @Gerhard Sounds promising, yet too vague. E.g., may I ask who is X? 
Thx anyway. P.S. I think you're underestimating the size of J. – Ady Apr 18 '11 at 5:01 add comment Not the answer you're looking for? Browse other questions tagged co.combinatorics or ask your own question.
{"url":"http://mathoverflow.net/questions/62000/sets-of-equations?sort=votes","timestamp":"2014-04-19T02:41:17Z","content_type":null,"content_length":"69951","record_id":"<urn:uuid:38c4824f-542e-4a17-9006-ea107e1990a4>","cc-path":"CC-MAIN-2014-15/segments/1397609535745.0/warc/CC-MAIN-20140416005215-00342-ip-10-147-4-33.ec2.internal.warc.gz"}
Summary: JOURNAL OF OPTIMIZATION THEORY AND APPLICATIONS: Vol. 107, No. 3, pp. 547-557, DECEMBER 2000. System of Vector Equilibrium Problems and Its Applications. Q. H. Ansari, S. Schaible, and J. C. Yao. Abstract. In this paper, we introduce a system of vector equilibrium problems and prove the existence of a solution. As an application, we derive some existence results for the system of vector variational inequalities. We also establish some existence results for the system of vector optimization problems, which includes the Nash equilibrium problem as a special case. Key Words. System of vector equilibrium problems, system of vector variational inequalities, system of vector optimization problems, Nash equilibrium problem, fixed points. 1. Introduction and Preliminaries. In 1980, Giannessi (Ref. 1) extended classical variational inequalities to the case of vector-valued functions. Meanwhile, vector variational inequalities have been researched quite extensively; for example, see Ref. 2
On modes of convergence of the Dirichlet series for reciprocal powers of zeta

Let $\mu_k(n)$ denote the $n$th coefficient in the Dirichlet series for $\zeta^{-k}(s)$. It can be shown that if there exists a $u=u(x,s)$ such that $$\Big|\sum_{n\leq x}\frac{\mu_k(n)}{n^s}\Big|^{1/k}\leq u,$$ then there is a $c=c(\sigma)$ such that $u$ is $\Omega(\log^c(x))$ when $\sigma$ is strictly less than unity. Since the limit of the left-hand side is $1/\zeta(s)$ if it converges, I am wondering if much is known about the modes of convergence of the series under the root on the left, under various hypotheses such as the Riemann hypothesis?
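For concreteness, the coefficients $\mu_k(n)$ are the $k$-fold Dirichlet convolution of the Möbius function with itself, so the partial sums in the display can be computed numerically. A sketch for $k=2$ (the sieve is a standard linear Möbius sieve; the cutoff $N$ and the sample value of $s$ are arbitrary):

```python
def mobius_sieve(n_max):
    """Linear sieve for the Moebius function mu(n), n = 1..n_max."""
    mu = [0, 1] + [1] * (n_max - 1)
    is_comp = [False] * (n_max + 1)
    primes = []
    for n in range(2, n_max + 1):
        if not is_comp[n]:
            primes.append(n)
            mu[n] = -1
        for p in primes:
            if n * p > n_max:
                break
            is_comp[n * p] = True
            if n % p == 0:
                mu[n * p] = 0      # p^2 divides n*p
                break
            mu[n * p] = -mu[n]
    return mu

def dirichlet_convolve(f, g, n_max):
    """h(n) = sum over d | n of f(d) * g(n // d), for n = 1..n_max."""
    h = [0] * (n_max + 1)
    for a in range(1, n_max + 1):
        for b in range(1, n_max // a + 1):
            h[a * b] += f[a] * g[b]
    return h

N = 50
mu = mobius_sieve(N)                    # coefficients of 1/zeta(s)
mu_k = dirichlet_convolve(mu, mu, N)    # k = 2: coefficients of zeta(s)^-2
s = 0.8                                 # a sample point with sigma < 1
partial_sum = sum(mu_k[n] / n ** s for n in range(1, N + 1))
```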
{"url":"http://mathoverflow.net/questions/146441/on-modes-of-convergence-of-the-dirichlet-series-for-reciprocal-powers-of-zeta","timestamp":"2014-04-19T04:31:12Z","content_type":null,"content_length":"47128","record_id":"<urn:uuid:e8a4ec0a-5548-43b5-b8da-9bb5720a6765>","cc-path":"CC-MAIN-2014-15/segments/1398223201753.19/warc/CC-MAIN-20140423032001-00316-ip-10-147-4-33.ec2.internal.warc.gz"}
Two coils having equal resistance but different inductances are connected in series; the time constant of the series combination will be ______
a. the sum of the time constants of the individual coils
b. the average of the time constants of the individual coils
c. the geometric mean of the time constants of the individual coils
d. the product of the time constants of the individual coils
Please help me with the answer.

Best Response:
The time constant of an RL circuit is L/R, so the time constants of the individual coils are L1/R and L2/R. When the coils are connected in series, the equivalent inductance is L1 + L2 and the resistance is R + R = 2R, hence the time constant of the combination is (L1 + L2)/(2R), which is the average of the two individual time constants (answer b).
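A quick numerical check of the derivation (the component values below are arbitrary):

```python
# Check that two series coils with equal resistance R give a combined
# time constant equal to the average of the individual time constants.
R = 5.0                # ohms, same for both coils
L1, L2 = 0.2, 0.8      # henries, different inductances

tau1, tau2 = L1 / R, L2 / R          # individual time constants L/R
tau_series = (L1 + L2) / (2.0 * R)   # equivalent L over equivalent R

assert abs(tau_series - (tau1 + tau2) / 2.0) < 1e-12
print(tau_series)  # 0.1
```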
Bilateral step length estimation using a single inertial measurement unit attached to the pelvis
The estimation of spatio-temporal gait parameters is of primary importance in both physical activity monitoring and clinical contexts. A method for estimating step length bilaterally, during level walking, using a single inertial measurement unit (IMU) attached to the pelvis is proposed. In contrast to previous studies, based either on a simplified representation of the human gait mechanics or on a general linear regressive model, the proposed method estimates the step length directly from the integration of the acceleration along the direction of progression. The IMU was placed at pelvis level, fixed to the subject's belt on the right side. The method was validated against measurements from a stereo-photogrammetric (SP) system, used as a gold standard, on nine subjects walking ten laps along a closed-loop track of about 25 m while varying their speed. For each loop, only the IMU data recorded in a 4 m long portion of the track, included in the calibrated volume of the SP system, were used for the analysis. The method takes advantage of the cyclic nature of gait and requires an accurate determination of the foot contact instances. A combination of a Kalman filter and an optimally filtered direct and reverse integration applied to the IMU signals formed a single novel method (Kalman and Optimally filtered Step length Estimation - KOSE method). A correction of the IMU displacement due to the pelvic rotation occurring in gait was implemented to estimate the step length and the traversed distance. The step length was estimated for all subjects with less than 3% error, and the traversed distance with less than 2% error. The proposed method provided estimates of step length and traversed distance more accurate than any other single-IMU method found in the literature.
In healthy subjects, it is reasonable to expect that errors in traversed distance estimation during daily monitoring activity would be of the same order of magnitude as those presented.
Keywords: Inertial measurement; Gait analysis; Gait parameters; Accelerometer; Step length; Gait monitoring; Stride length; Inertial sensor; Wearable.
The measurement of temporal and spatial features of gait is essential for the assessment of gait abnormalities and the quantitative evaluation of treatment outcomes [1]. In particular, amplitude, variability and asymmetry of step length (SL) have been shown to be effective outcomes of walking ability. In fact, they are strongly related to propulsion generation and can be representative of the compensatory mechanisms adopted in pathological walking [2,3]. Having access to instruments capable of gathering information about the patient's walking ability outside the laboratory [4], during daily life, with no space limitations and for prolonged periods of time is of paramount importance in numerous clinical applications [5]. Inertial measurement units (IMUs) are strong candidates for these applications. Such units, featuring 3-axis accelerometers and gyroscopes [6], can be employed to estimate the SL during walking. An estimate of the SL can be obtained by double integrating the IMU acceleration component in the direction of progression (DoP) between the instants of two consecutive heel strikes (HS). However, the implementation of the procedure described above requires the solution of a number of critical issues: a) the identification of the foot contact time instances (gait events); b) the determination of the IMU orientation with respect to the global frame (GF) [7-9]; c) the compensation for the drift affecting the accelerometer and gyroscope signals [10,11]; d) the estimation of the initial velocity values for the integration of the acceleration along the DoP [12].
The most straightforward solution to determine both right and left SL (rSL and lSL) would be to place an IMU on each foot, so that the velocity and the orientation of each IMU can be set to zero at the beginning of the integration interval [7,13,14]. However, when the focus is on the description of the individual motor capacity ("can do in daily environment") and performance ("does do in daily environment") [15], the requirement of a light, small and minimally obstructive setup is of primary importance. Therefore, it would be desirable to obtain the same information using a single, discomfort-free IMU. To the authors' knowledge, all the studies in the literature based upon the use of a single IMU determined the SL through indirect estimation methodologies [16-18]. Zijlstra and Hof (2003) [16] proposed a method for estimating spatio-temporal parameters of gait from trunk accelerations. The IMU was placed on the back at the level of the S2 vertebra. The method used the zero crossing of the forward acceleration for detecting foot contact instances and an inverted pendulum model to estimate the SL. However, in their method, left and right foot contact identification failed for 6 out of 15 subjects, 12% of the time, and the SLs were underestimated in all subjects and at all speeds. Gonzales and colleagues (2007) [17] proposed a modified version of the pendulum model that takes into account the single-stance and double-stance phases separately. The IMU was placed on the back at the L3 vertebral level. Traversed distance estimation ranged from 94.5% to 106.7% of the actual traversed distance. No information was provided about the errors associated with the SL estimates. Shin and Park [18] determined the SL using a linear combination of the walking frequency and the variance of the accelerometric signals recorded by an IMU attached to the subject's waist belt. Experiments were carried out on a single subject.
The accuracy of the SL estimation is not reported, but the accuracy of the traversed distance estimate was about 96%. The abovementioned methods for the SL estimate are based either on a simplified representation of the human gait mechanics (inverted pendulum model) or on a general linear regressive model. Hence, the SL estimates provided are expected to be affected by the errors intrinsic to the specific model formulation. Conversely, this study presents a method for the estimation of rSL and lSL during level walking using a single IMU attached to the pelvis, without using gait models. In line with previous approaches, the proposed method requires the identification of the gait events as a preliminary step. To minimize the detrimental drift effects, the IMU acceleration along the DoP is double integrated by means of a combination of direct and reverse integrations [19] of an optimally filtered acceleration. We hypothesized that the proposed method for the estimation of the SL would be more accurate than methods found in the literature based on the use of inverted pendulum or regressive models. The performance of the proposed approach was evaluated on data collected on healthy subjects walking at various speeds, using stereo-photogrammetric (SP) measurements as a gold standard. An IMU (FreeSense, Sensorize®) featuring a tri-axial accelerometer and two bi-axial gyroscopes (acceleration resolution 0.0096 m/s^2, angular rate resolution 0.2441 deg/s, unit weight 93 g, unit size 85 mm × 49 mm × 21 mm), sampling data at 100 frames/s, was used. A 10-camera BTS® SMART-D stereo-photogrammetric system acquiring at 100 frames/s (positional accuracy 0.3 mm) was used for validation purposes.
Step length estimation
The method described below can be applied to the data obtained from any IMU featuring a tri-axial accelerometer and a tri-axial gyroscope.
The physical quantities, specific forces and angular velocities, provided by each sensor of the IMU are measured with respect to the axes of a local frame (LF) aligned to the edges of the unit housing. In this application, the IMU was mounted on the right side of the body at the pelvis level, with the X_L, Y_L and Z_L axes pointing downward, forward, and to the right, respectively (Figure 1).
Figure 1. The IMU location. The IMU attached to the belt and positioned to the right side of the subject's pelvis, and the relevant LF.
The method proposed in this paper is based on the assumption that the pelvic displacement along the DoP, between the instants of contra-lateral HS and the first following HS, is equal to the ipsi-lateral step length. The success of the method relies on solving the following issues: a) the identification of gait events; b) the determination of the pelvis displacement along the DoP.
Identification of gait events
A gait cycle begins when a foot hits the ground (heel strike - HS) and ends when the next HS of the same foot occurs. When gait cycles can be identified for both sides, then steps can also be identified. A step begins when a contra-lateral HS occurs and ends at the first following HS. For each recorded gait cycle, the relevant gait events were identified from the IMU signals. Following a heuristic approach [16], a preliminary visual investigation was performed on a few samples of data to correlate subject-invariant distinctive features in the IMU signals with the relevant gait events extracted from the SP system data (Figure 2a).
Figure 2. IMU signals and relevant gait events. (a) Raw accelerometric signals: X_L pointing downward (dashed line), Y_L pointing forward (solid line) and Z_L pointing laterally (dot-dashed line). SP-based gait event timings are superimposed (vertical lines). (b) Raw signal on X_L and corresponding reconstructed signal (thick dashed line) used for the definition of the interval of interest. (c) Raw signal on Y_L and corresponding reconstructed signal (thick line) used for the definition of the interval of interest. (d) Circles show the reference points used to estimate gait events from the raw accelerometric signals.
(c) Raw signal on Y[L ]and corresponding reconstructed signal (thick line) used for the definition of the interval of interest. (d) Circles show the reference points used to estimate gait events from the accelerometric raw signals. A wavelet-based approach was developed to identify intervals of interest where gait event candidates were searched in the accelerometer signals. The X[L ]and Y[L ]accelerometer signals were decomposed using a "Stationary wavelet decomposition" [20]. A Daubechies level 5 ("db5") mother wavelet was chosen given its similarity to the IMU signals in the proximity of HS. The original signals were then decomposed in an approximation curve plus ten levels of detail. Thresholds were applied to the first three detail levels and the other detail levels were discarded. Thresholds for these levels were 1/5, 1/4 and 1/3 of its magnitude for the first, second and third level, respectively. The signals were reconstructed using only the first three levels of detail after thresholds were applied. An interval of interest for the accelerometer signals was defined as the interval of time during which the reconstructed signals differed from zero (Figure 2b and Figure 2c). The intervals of interest allow to identify the regions of the signals with localized high frequency components where rHS and lTO are expected to occur. Based on the preliminary visual investigation, the right HS (rHS) was detected as the instant of time in the middle between the maximum of the Y[L ]accelerometer signal and the first minimum of the accelerometer signal along the X[L ]accelerometer signal in the corresponding intervals of interest. The identification of the left HS (lHS) required the identification of both right and left toe off instances (rTO, lTO). 
The lTO was detected as the instant of time midway between the first maximum of the Y[L] accelerometer signal after the rHS and the second minimum of the X[L] accelerometer signal in the corresponding intervals of interest. The rTO was found as the time of the minimum negative peak value between two consecutive lTO and rHS. The lHS was found as the first local maximum of the Z[L] accelerometer signal before the rTO (Figure 2d) [21].

Estimation of the pelvis displacement along the DoP

The method developed for estimating the pelvis displacement along the DoP requires the knowledge of the gait events and is divided into the following phases: a) estimation of the IMU acceleration along the DoP; b) integration of the IMU acceleration along the DoP; and, when the IMU is located laterally, c) removal of the pelvic rotation contribution to the IMU displacement.

Estimation of the IMU acceleration along the DoP

To estimate the IMU acceleration along the DoP, the orientation of the LF with respect to the global frame (GF) was estimated using a specifically designed Kalman filter based on both the three-dimensional acceleration and angular velocity vectors. For a detailed explanation of how the Kalman filter was implemented, please refer to Mazzà and colleagues [22]. The GF was defined as follows: the X[G] axis coinciding with the direction of gravity, the Y[G] axis coinciding with the DoP during level straight walking, and the Z[G] axis resulting from the cross product between X[G] and Y[G]. To define the Y[G] axis direction, at the beginning of each acquisition, the IMU Y[L] axis was aligned to the DoP of the level straight walking. The orientation of the LF with respect to the GF at the i^th instant of time was expressed using the orientation quaternion ^Gq[L](i).
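The orientation quaternion is used to rotate each accelerometer sample from the LF into the GF. A minimal sketch of that standard operation follows (this is not the paper's code; the Hamilton convention with a unit quaternion ordered (w, x, y, z) is assumed, and the subsequent gravity subtraction is omitted):

```python
def rotate_to_global(q, f_local):
    """Rotate an accelerometer sample from the local frame into the
    global frame using the orientation quaternion q = (w, x, y, z):
    a_G = q (x) f_L (x) q*, with (x) the quaternion product and
    q* the conjugate (Hamilton convention, unit quaternion).
    """
    w, x, y, z = q
    vx, vy, vz = f_local
    # p = q * (0, v): quaternion product with the pure-vector quaternion.
    pw = -x * vx - y * vy - z * vz
    px = w * vx + y * vz - z * vy
    py = w * vy + z * vx - x * vz
    pz = w * vz + x * vy - y * vx
    # p * q_conjugate, with q* = (w, -x, -y, -z); the scalar part of the
    # result vanishes for a unit q and a pure-vector input.
    return (-pw * x + px * w - py * z + pz * y,
            -pw * y + py * w - pz * x + px * z,
            -pw * z + pz * w - px * y + py * x)
```

Rotating the local x-axis by 90 degrees about z yields the y-axis, which is a convenient sanity check on the convention.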
Let ^Lf(i) be the vector obtained from the accelerometer signals (expressed in the LF) at the i^th instant of time; then the acceleration vector ^Ga(i) expressed in the GF can be computed as:

^Ga(i) = ^Gq[L](i) ⊗ ^Lf(i) ⊗ ^Gq[L](i)*     (1)

where ⊗ denotes the quaternion product and * the quaternion conjugate.

Integration of the IMU acceleration along the DoP

To obtain the velocity and displacement time series along the DoP, an integration technique, the Optimally Filtered Direct and Reverse Integration (OFDRI), and its simplified version, the Optimally Filtered Integration (OFI) [19], were adapted to manage acceleration signals during gait. Both the OFI and the OFDRI were originally designed for step negotiation motor tasks [19] and require the knowledge of the final value of the integral to set a cut-off frequency for the high-pass filter employed to reduce the effect of the drift in the accelerometer signals. In this application, the cut-off frequency was determined from a gait cycle for which the initial and final velocities of the IMU were assumed to be equal. The resulting cut-off frequency was then applied for filtering the acceleration signal along the DoP (a[x](i)), one gait cycle at a time. Since the trials started with the subject standing, the initial velocity along the DoP was set to zero. The velocity values found for the final HS were used as initial velocity values for the integration of the following gait cycle. For gait cycles in which the velocity at the final HS was within a tolerance (± ε, ε = 0.3 m/s) of the initial value, it was forced to be exactly equal to it and the OFDRI was applied. The same updated velocity value was used also as the initial velocity of the following gait cycle. The value ε = 0.3 m/s was chosen based on a trial and error approach. The integration of the velocity along the DoP (v[x](i)) to obtain displacement was performed using the OFI. Since, for each cycle, the final value of the integral (the displacement at the final HS) is the unknown quantity, the OFDRI cannot be applied.
Therefore, the resulting estimates of rSL and lSL of the j^th gait cycle were:

rSL[j] = s[x](rHS[j+1]) - s[x](lHS[j]),   lSL[j] = s[x](lHS[j]) - s[x](rHS[j])     (2)

where s[x] is the displacement along the DoP. The SL estimation method, including the estimation and integration of the IMU acceleration along the DoP, was named KOSE (Kalman and Optimal based filtering Step length Estimation).

Removal of the pelvic rotation contribution to the IMU displacement

When the IMU is located on the right side of the pelvis, the IMU displacement along the DoP at the end of the right step duration (rT, the time interval between a lHS and the following rHS) is larger than the actual rSL due to the pelvic angular displacement θ, about the vertical axis of the GF, occurring during the same interval of time. Conversely, the IMU displacement at the end of the left step duration (lT, the time interval between the rHS and the lHS) is smaller than the lSL. Therefore, applying equations (2) without taking the pelvic rotation into account would result in an overestimate of the rSL and an underestimate of the lSL (vice versa if the IMU is attached to the left side of the pelvis). The amount of the over [under] estimate would be equal to d sin(rθ[j]) [d sin(lθ[j])], where d is the inter-ASIS (anterior superior iliac spine) distance, and rθ[j] [lθ[j]] represents the pelvic angular displacement about the vertical axis between the beginning and the end of the j^th right [left] step. Both rθ[j] and lθ[j] are obtained from the IMU orientation provided by the Kalman filter. Therefore, the resulting estimates of rSL and lSL of the j^th gait cycle shown in (2) can be corrected as:

rSL[j] = s[x](rHS[j+1]) - s[x](lHS[j]) - d sin(rθ[j]),   lSL[j] = s[x](lHS[j]) - s[x](rHS[j]) + d sin(lθ[j])     (3)

when the IMU is on the right side of the pelvis, and as:

rSL[j] = s[x](rHS[j+1]) - s[x](lHS[j]) + d sin(rθ[j]),   lSL[j] = s[x](lHS[j]) - s[x](rHS[j]) - d sin(lθ[j])     (4)

when the IMU is on the left side of the pelvis.

Experimental session

Gait data of nine healthy subjects (31 ± 6 yrs) were acquired. Two markers were placed on the right and left heels and toes of each subject. Subjects were asked to walk along a closed loop track of 25 m, varying their speed as follows.
They started walking from a standing position with their heels aligned to the start line. An adhesive tape, attached to the floor, was used to define the DoP (Figure 3a). For the first two laps they walked at slow speed. At the beginning of the third lap, they increased their speed (comfortable speed) and maintained it for the third and fourth laps. At the beginning of the fifth lap, they further increased their speed (fast speed) and maintained it for the fifth and sixth laps. At the beginning of the seventh lap, they decreased their speed (comfortable speed) and maintained it for the seventh and eighth laps. At the beginning of the ninth lap, they further decreased their speed (slow speed) and maintained it for the ninth and tenth laps. They stopped walking at the end of the tenth lap (Figure 3b).

Figure 3. Experimental data acquisition. (a) A schematic view of the closed loop track and the calibrated volume (seen from above). (b) Diagram of the indicative gait speed sequence vs. laps of the track. Grey areas represent the portion of the path included in the calibrated volume. (c) Diagram of indicative gait speeds including only the portions of the track in the calibrated volume as a function of the traversed distance.

A 4 m long portion of the walking track was included in the calibrated volume of the SP system (Figure 3a), with its reference frame made to coincide with the GF. For each lap, SP data of three consecutive gait cycles were recorded. Only data recorded by the IMU and SP system within the calibrated volume were used for the method validation. Data recorded during each lap were arranged together to obtain a continuous data set formed by 3 × 10 gait cycles (Figure 3c).
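The cycle-by-cycle integration described in the Methods can be illustrated with a deliberately simplified stand-in. The sketch below is not the OFDRI/OFI of [19], which uses an optimally tuned high-pass filter; it uses trapezoidal integration followed by linear drift removal so that the velocity at the final heel strike matches the known boundary value, and only shows the role the boundary condition plays.

```python
def integrate_with_boundary(acc, dt, v0, v_end):
    """Integrate acceleration to velocity over one gait cycle, then
    remove drift so the final value matches the known boundary velocity.

    Crude stand-in for the optimally filtered direct and reverse
    integration: the residual at the final heel strike is redistributed
    linearly across the cycle (classic linear-detrend drift correction).
    """
    v = [v0]
    for i in range(1, len(acc)):
        v.append(v[-1] + 0.5 * (acc[i - 1] + acc[i]) * dt)  # trapezoidal rule
    n = len(v) - 1
    drift = v[-1] - v_end
    return [vi - drift * i / n for i, vi in enumerate(v)]
```

With a constant sensor bias as input and equal boundary velocities, the corrected velocity returns to the boundary value at the final sample, mimicking the role of the initial/final HS velocity constraint in the paper.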
Data analysis

The estimates of the timing of the gait events (rHS, lHS), of the right and left step durations (rT, lT), and of rSL and lSL obtained from the heel marker coordinates reconstructed with the SP system were used as gold standard measurements when evaluating the accuracy of the estimates obtained using the IMU [23]. Thus, for each right step j and for any given quantity, the differences between the IMU estimates and the gold standard measurements may be interpreted as estimation errors:

e[rHS,j] = rHS[IMU,j] - rHS[SP,j],   e[rT,j] = rT[IMU,j] - rT[SP,j],   e[rSL,j] = rSL[IMU,j] - rSL[SP,j]

The same processing was applied to the left side. Descriptive statistics of the above errors were computed. In addition, to compare our results with previous works, the difference between the IMU estimated distance and the actual traversed distance was computed for each subject.

Heel strike detection

All rHS and lHS instances were successfully detected in all subjects. The mean and standard deviation of the errors e[rHSj] and e[lHSj] are reported in Table 1. On average, the IMU-based estimates of both rHS and lHS were delayed with respect to the corresponding SP estimates by 0.017 s and 0.027 s, respectively.

Right and left step duration

The mean and standard deviation of the errors e[rTj] and e[lTj], together with the mean percent errors, are reported in Table 2. On average, across subjects, the IMU-based estimates of rT were 0.015 s (-2.3%) shorter than the SP measurements. On the contrary, the IMU-based estimates of lT were 0.017 s (+2.4%) longer than the SP measurements.

Right and left SL estimation

Cut-off frequencies employed in both the OFI and the OFDRI to high-pass filter the original data varied, across subjects, between 0.067 Hz and 0.096 Hz. The pelvic rotation about the GF vertical axis was, on average across subjects, about 5 degrees. The mean and standard deviation of the errors e[rSLj] and e[lSLj], together with the mean percent errors, are reported in Table 3. Percent errors varied, across subjects, between -2.6% and +2.9% for the rSL and between 0.4% and 2.6% for the lSL.
On average, across subjects, the IMU-based estimates of rSL exceeded the SP measurements by 0.009 m (+1.2%). In contrast, the lSL estimates fell short by 0.008 m (-1.1%).

Total traversed distance

The actual and IMU-based estimates of the traversed distance are reported in Table 4. Errors over a traversed distance of about 30 m varied, across subjects, between -0.54 m (-1.6%) and 0.54 m (+1.7%). The average error of the traversed distance estimate was 0.1%.

Discussion and Conclusion

A method for determining both right and left SL during level walking using a single IMU, to be used either indoors or outdoors, without space limitations and for prolonged periods of time, was presented and validated. SL estimates were obtained directly from the IMU displacement between two consecutive HS, by employing an original method (the KOSE method) which double integrates the IMU acceleration along the DoP obtained after applying a Kalman filter. The KOSE method includes an adaptation to gait of an optimal integration technique originally proposed by Zok and colleagues [19] to reduce the errors caused by the drift affecting the dynamometric signals. The method was tested on healthy subjects while they walked, increasing and decreasing their speed five times along 30 m. The algorithm for the identification of left and right HS never failed in any of the subjects analyzed. Both right and left HS were detected with an average time delay corresponding to 1.7 samples and 2.7 samples, respectively (sample frequency 100 frames/s). These errors implied that the right step duration was on average two samples shorter than the left one. Despite the shorter estimated step duration, the rSL was, over the different subjects, overestimated by 1.2%. The maximum mean error in estimating the SL was an overestimate of 2.9% for the rSL and an underestimate of 2.6% for the lSL.
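The pelvic-rotation compensation introduced in the Methods amounts to subtracting d·sin(rθ) from the right step and adding d·sin(lθ) to the left step for a right-mounted IMU, with the signs swapped for a left mount. A hedged sketch (function name and argument order are illustrative, not from the paper; the sign convention is taken from the text):

```python
import math

def correct_step_lengths(r_sl, l_sl, d, r_theta, l_theta, imu_side="right"):
    """Remove the pelvic-rotation contribution from raw step lengths.

    A right-mounted IMU overestimates the right step by d*sin(r_theta)
    and underestimates the left step by d*sin(l_theta); signs swap for
    a left-mounted IMU. d is the inter-ASIS distance and the angles are
    pelvic yaw displacements over each step (radians, lengths in metres).
    """
    sign = 1.0 if imu_side == "right" else -1.0
    return (r_sl - sign * d * math.sin(r_theta),
            l_sl + sign * d * math.sin(l_theta))
```

With d = 0.25 m and the roughly 5-degree pelvic rotation reported above, the correction moves each step length by about 2 cm, which is the order of the residual side-to-side bias discussed next.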
The main explanation for the side to side differences is probably that the effect of the pelvic rotation, associated with the asymmetrical IMU positioning, was not completely compensated for by the correction term d sin(θ[j]). In fact, in this regard, several assumptions were not completely satisfied: being attached to the belt of the subject, the IMU did not move rigidly with the pelvis, and d, which was taken as the inter-ASIS distance, may differ from the distance from the LF origin to the pelvic vertical axis of rotation. As expected, the maximum percent error on the total traversed distance was 1.7% and therefore smaller than the maximum SL percent error. It is worth noting that the estimate of the traversed distance was never consistently either an overestimate or an underestimate (average over subjects equal to 0.1%). Several methods to estimate rSL and lSL using a single IMU have been proposed in the literature [16-18,24]. A common feature of these methods is that the SL estimates were derived using indirect methodologies, either based on inverted pendulum models for the representation of level human walking [25,26] or on general regression equations in which SL is expressed as a function of the step frequency and accelerometric signal variations. Unfortunately, no data on the accuracy of the rSL and lSL estimates were provided in the abovementioned studies. Therefore, a straightforward comparison among methods is not possible. However, Zijlstra and Hof (2003) [16] reported a consistent SL underestimation. As a consequence, the mean speed (mean SL divided by mean step time) was also underestimated; to correct it, they introduced a heuristically determined coefficient of 1.25. It could then be inferred that the SL estimations were affected by errors of about 20-30%. In Gonzales and colleagues (2007) [17], the performance of their method was evaluated by assessing the errors of the distance traversed in each acquisition.
Across subjects, errors ranged from -6.5% to +6.0%. Shin and Park [18] tested their method on a single subject walking at a constant speed along a 70 m straight trajectory. Three data sets were recorded (slow, normal and fast). Across trials, errors for the traversed distance varied between 1% and 3%, with an accuracy error of 3.7% in the worst case. In our study the errors in estimating the traversed distance ranged from -1.6% to +1.7% across subjects. The smaller estimation errors compared to previously published methods confirm our initial hypothesis that the KOSE method improves the accuracy with respect to indirect methods based on the use of general models, which may not always take into account subject specificity in gait. In this study, the IMU was attached to the subject's belt on the right side. This location was chosen to reduce subject discomfort and because it is a practical solution for monitoring daily life motor activities. However, it can be hypothesized that, by attaching the IMU centrally on the back (S2 or L3 vertebral level), similarly to Zijlstra and Hof (2003) [16], Gonzales and colleagues (2007) [17] and Shin and Park [18], it would not be necessary to correct for the pelvic rotation effects, and the residual errors related to the pelvic rotation correction would be removed. In this study we presented results relative to gait event timings and SL estimations. Additional gait parameters can be easily calculated from the estimated parameters. Therefore, using a single IMU, we were able to provide excellent estimates of both temporal and spatial gait parameters bilaterally. It is important to stress that the KOSE method was tested in experimental conditions fairly similar to those which can be encountered in real life: different gait speeds (three gait speed levels) and numerous velocity transients (five velocity transients over 30 m).
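For contrast with the direct KOSE approach, the inverted-pendulum estimate used by the indirect methods cited above is commonly written, in the literature, as SL ≈ 2√(2lh − h²), where l is leg length and h the vertical pelvis excursion during the step; Zijlstra and Hof's empirically determined coefficient of 1.25 then scales the result. A sketch (formula taken from the cited literature, not from this paper):

```python
import math

def inverted_pendulum_step_length(leg_length, pelvis_lift, k=1.25):
    """Indirect step-length estimate of the inverted-pendulum kind
    (cf. Zijlstra and Hof): from leg length l and the vertical pelvis
    excursion h during the step, SL ~= k * 2 * sqrt(2*l*h - h*h),
    with k an empirical correction coefficient. Lengths in metres.
    """
    return k * 2.0 * math.sqrt(2.0 * leg_length * pelvis_lift - pelvis_lift ** 2)
```

Because the model ties SL to a generic pendulum geometry rather than to the measured displacement, subject-specific gait features are absorbed into the single coefficient k, which is one source of the larger errors reported for the indirect methods.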
It is reasonable to expect that, in healthy subjects, errors in traversed distance estimation during daily monitoring activity would be of the same order of magnitude as those presented. The data set derived from the experiments carried out in the present study referred to straight level walking. For this specific method validation, the GF was therefore defined at the beginning of each acquisition, with the Y[G] axis coinciding with the DoP. The IMU acceleration component along the DoP was then calculated by projecting the accelerometric signal, after having removed the gravitational contribution, on the GF. In general, if, during the acquisition, the subject varies the DoP more than once, it would be necessary to define, for every different DoP, an auxiliary GF with the Y[G] axis aligned to the relevant DoP. However, the solution of the latter issue is outside the scope of the present study. There are some limitations to the current proof of concept which need to be addressed before using the presented methodology to estimate gait spatial and temporal parameters in clinical contexts. One issue is related to the algorithm used to identify the gait events, which is based on IMU signal features observed in normal gait. Such features may change or even disappear in pathologic subjects. Another issue is related to the assumption that the pelvis displacement along the DoP between HS equals the SL. This is certainly true in symmetric gait, but in some gait disorders the pelvis displacement may not be in phase with the feet displacement as much as in healthy subjects. Future studies need to be performed to test the method performance on pathological gait [27]. In conclusion, the method proposed allows for an accurate estimate of the right and left step lengths, using a minimally invasive experimental setup, through an optimal integration procedure of the accelerometric signals.
List of abbreviations

DoP: direction of progression; GF: global reference frame; HS: heel strike; IMU: inertial measurement unit; LF: local reference frame; lHS: left heel strike instant; lSL: left step length; lTO: left toe off instant; OFDRI: optimally filtered direct and reverse integration; OFI: optimally filtered integration; rHS: right heel strike instant; rSL: right step length; rTO: right toe off instant; SL: step length; SP: stereo-photogrammetric.

Authors' contributions

AK, AC, and UD participated in the conceptualization of the proposed methods and contributed to the definition of the experimental design and the data analysis. AK was involved in the development of the methods and in the acquisition and analysis of the data. All authors revised and approved the current version of the manuscript.

The authors thank Dr. Marco Donati from the Laboratory of Locomotor Apparatus Bioengineering of the University of Rome "Foro Italico" for providing the code of the Kalman filter.
A Critical Summary of New and Previously Reported Research

ROY A. DOTY AND WILLIAM C. REIN
Durham, North Carolina

COPYRIGHT, 1941, BY THE
THE SEEMAN PRINTER, INC., DURHAM, N. C.

ACKNOWLEDGMENTS

I am very glad indeed to acknowledge my indebtedness to the following agencies and persons for their assistance:
1. To Duke University, for research funds to prosecute this study and for publishing the report.
2. To Ginn and Company of Boston, Massachusetts, for supplying free of charge a special printing of the tests used in the investigation reported in Chapter III and for permission to reproduce copies of tests in this monograph.
3. To the administrators and primary teachers of the schools listed below, in which the data for two original studies were obtained:
   California: San Andreas
   Florida: Sanford
   Massachusetts: Lexington (Hancock), Waltham (Plympton, Whitemore)
   North Carolina: Durham (North Durham, Watts), Durham County (Bragtown, Glenn, Hope Valley, Oak Grove), Raleigh (Boylan Heights, Hayes-Barton), Winston-Salem (Granville)
   Ohio: Marysville, Wayne
   Pennsylvania: Canton Township, Charleston, Davidson Township, Dushore, Erie (Lawrence Park), Estella, Forest City, Jenkins Township, New Brighton, Smethport, Summerhill Township, Susquehanna (Thompson), Williamsport
   Virginia: Carson, Petersburg (Brown, Jackson, Lee), Prince George, Rives, Woodlawn
   Wisconsin: Green Bay
(For one reason or another, the data from some of these schools could not be used, but this fact in no way lessens my feeling of obligation for the attempted co-operation.)
4. To Miss Hulda Kilmer of the Dushore, Pennsylvania, schools, who was largely instrumental in securing data from rural schools in her state.
5.
To the graduate students in my courses, Experimental Education and Investigations in Arithmetic, who helped materially in group testing and in interviewing.
6. To the junior authors, Mr. Roy A. Doty and Mr. W. C. Rein, for assisting in the tabulations reported in Chapters II and III and in the bibliographical research in Chapters II and IV, for preparing several sections of Chapter IV, and for reading and criticizing the manuscript as a whole.
W. A. B.

CONTENTS

I. THE PRESENT STATUS OF NUMBER IN THE PRIMARY GRADES
   Reasons for Present Confusion
   The Commoner Programs of Primary Number Instruction
   The Crucial Questions
   Purpose of This Monograph
   Part I. Original Investigation, with Related Research (Topics 1-7)
      1. Rote Counting
      2. Enumeration
      3. Identification
      4. Reproduction
      5. Crude Quantitative Comparison
         Concrete objects or pictures
         Abstract numbers
      6. Exact Quantitative Comparison, or Matching
      7. Number Combinations or Facts, and Their Use in Problems
         The number combinations when presented concretely
         The number combinations when presented in verbal problems
         The number combinations when presented in abstract form
         Conclusions with respect to the number combinations
   Part II. Topics 8-17
      8. Fractions
      9. Ordinals
      10. Reading and Writing Numbers
         Reading numbers
         Writing numbers
      11. Recognition of Geometric Forms
      12. Time, U. S. Money, and Measures
      13. Sex Differences
      14. City versus Rural Children
      15. Differences in Levels of Intelligence
      16. Effect of Kindergarten Instruction
      17. Miscellaneous Studies
         Preschool studies
         Readiness testing
   Part III. Appraisal of Research Findings
      Brief Summary of Findings
      Limitations of Research
      Cautions in Applications
      Significance of Research Findings

ARITHMETIC INSTRUCTION IN GRADES I AND II
   The Plan of Instruction
      Outcomes
      Methods of teaching
      Materials of instruction
      Experimental subjects
      Tests
   Results from the Group Tests
      Gross results
      Results in Grade IB
      Results in Grade IA
      Results in Grade IIB
      Results in Grade IIA
   Results of the Individual Tests
      Procedure
      Summary of interview data, by grades
      Thought processes or procedures on "known" combinations
      The process of learning the combinations
      The subtraction compared with the addition combinations
   Concluding Statement

IN GRADES I AND II
   Evidence on the Values of Systematic Instruction Beginning in Grade I
   Evidence on the Values of "Social" Arithmetic in Grade I (or in Grades I and II) and of Deferring Systematic Instruction
   Evidence on the Values of Abandoning Systematic Instruction Entirely, or at Least in the First Grades
   Teaching the Simple Combinations or Facts
   Comparative Difficulty of the Number Combinations or Facts
   Reading Numbers
   Psychological Development of Number Concepts and Skills
      Counting (Enumeration)
      Number concepts
   Transfer of Training in Learning Arithmetic
      Transfer with number combinations
      Learning simple addition and subtraction operations
      Conclusions with regard to transfer
   Grade Placement of Topics
   Children's Uses of, and Needs for, Number in the Primary Grades
   Specific Techniques; Minor Problems
      Drill
      Use of number patterns and other perceptual aids
      Adding upward versus adding downward
      Subtractive versus additive subtraction
      Teaching the related combinations together or separately
      Games and devices
      Permanence of learning
   Conclusion

GRADES I AND II
   The Three Crucial Questions
   Meaning and Significance in Arithmetic
   Evaluating Programs of Primary Arithmetic
   Arguments against and for Systematic Instruction
      Effects on personality
      The policy of postponement

CODE OF TOPICS
BIBLIOGRAPHY

LIST OF TABLES

1. Grade I Pupils Tested
2. Present Study: Results of the Tests of Rote Counting and Enumeration
3. Summary of Findings with Respect to Rote Counting by 1's
4. Summary of Findings with Respect to Enumeration
5. Present Study: Results on Group Test of Identification: Four, Seven, and Ten Objects
6. Summary of Findings with Respect to Identification
7. Present Study: Results on Group Test of Number Reproduction: 5, 6, and 9
8. Summary of Findings with Respect to Number Reproduction
9. Present Study: Results on Group Test of Crude Comparison: Concrete Numbers
10. Summary of Data with Respect to Crude Comparison, Concrete Numbers
11. Present Study: Results on Individual Tests of Crude Comparison: Abstract Numbers
12. Present Study: Results on Group Test of Exact Comparison: Concrete Numbers 4, 5, 7
13. Summary of Findings with Respect to Number Combinations When Concretely Presented
14. Present Study: Results on Number Combinations in Verbal Problems
15. Summary of Findings with Respect to Number Combinations When Presented in Verbal Problems
16. Present Study: Results on the Number Combinations in Abstract Form
17. Per Cents of 1,897 Grade IA Pupils Without Instruction on Number, Who Had Various Concepts of Time and U. S. Money (After Woody)
18. Scores on Term Group Tests, Grade IB Through Grade IIA
19. Classification of Grade IB Test Items by Outcomes, and Scores by Test Parts
20. Comparison of Total Grade IB Group and the Selected Sample of 100 Pupils
21. Number of Errors and Per Cents of Success on Items of Grade IB Test; Samples of 100 Cases
22. Classification of Grade IA Test Items by Outcomes, and Scores by Test Parts
23. Number of Errors and Per Cents of Success on Items of the Grade IA Test; Sample of 100 Cases
24. Classification of Grade IIB Test Items by Outcomes, and Scores by Test Parts
25. Number of Errors and Per Cents of Success on Items of the Grade IIB Test; Sample of 100 Cases
26. Classification of Grade IIA Test Items by Outcomes, and Scores by Test Parts
27. Number of Errors and Per Cents of Success on Items of the Grade IIA Test; Sample of 100 Cases
28. Results of Interviews in Grade I, First and Second Terms; Forty Children Selected from Six Schools
29. Results of Interviews in Grade II, First and Second Terms; Sixty Children Selected from Seven Schools
30. How Children Think About the Combinations They Are Supposed to "Know"
31. Nature of Learning as Shown by Changes in Procedures Used with Number Combinations from Term to Term
32. Comparison of Procedures Used with A (Addition) and S (Subtraction) Combinations in Grades I and II
33. Extent to Which Various Types of Solution Were Used on A- and S-Combinations in Grades I and II
34. A Comparison of the Rank Difficulty of Twelve Addition Combinations
35. Percentage Scores on First and Last Tests, Together with Per Cents of Possible Gain Actually Made (Adapted from Overman)
36. Relative Frequency with Which Situations Involving Number Occurred (From Smith)
37. Results of the Analysis of Ten Series of Primary Readers (After Gunderson)
Chart I. Outcomes by Half-Grades

I. THE PRESENT STATUS OF NUMBER IN THE PRIMARY GRADES

Few problems relating to the elementary school curriculum are as troublesome as are those associated with the kind and amount of arithmetic to be taught in the primary grades. These problems are essentially new problems; they did not exist, or at least they were not generally recognized, a quarter century ago when practices with regard to primary number were relatively more uniform. The last two decades have witnessed a decided break from tradition, but as yet no satisfactory solution has been found. Instead, there is such variety of practice as to amount almost to confusion.

The extent of this confusion is readily noted if one but compares different course of study offerings in primary arithmetic. For example, one school expects children to learn the addition combinations with sums to 10 in Grade I; another defers systematic instruction on these facts to Grade II; still another postpones such instruction to Grade III. Needless to say, were other arithmetical topics included in these comparisons, the variations suggested in the case of a single topic would be greatly enhanced.

Reasons for confusion with respect to primary grade number are not hard to find. In the first place, evidence from school surveys, from local testing programs, from investigations of the learning of individual children, from research on adult usage: evidence from all of these and other sources made it incontestably clear some years ago that the anticipated results of arithmetical competence were not being attained. Children proved to be inaccurate in computation and unintelligent in problem solving; adults were found to employ neither frequently nor effectively the arithmetic they had been taught, even when it was obviously useful.

While most of these surveys and most of the experimentation dealt with arithmetic in the intermediate and higher grades, search for an explanation of the unsatisfactory conditions inevitably led to an examination of the state of affairs in the primary grades. Still, it should be observed that while attention was directed to arithmetic
Needless to say, were other arithmetical topics included in these comparisons, the variations suggested in the case of a single topic would be greatly enhanced. Reasons for confusion with respect to primary grade number are not hard to find. In the first place, evidence from school surveys, from local testing programs, from investigations of the learning of individual children, from research on adult usage-evidence from all of these and other sources made it incontestably clear some years ago that the anticipated results of arithmetical competence were not being attained. Children proved to be inaccurate in computation and unintelligent in problem solving; adults were found to employ neither frequently nor effectively the arithmetic they had been taught, even when it was obviously useful. While most of these surveys and most of the experimentation dealt with arithmetic in the intermediate and higher grades, search for an explanation of the unsatisfactory conditions inevitably led to an examination of the state of affairs in the primary grades. Still, it should be observed that while attention was directed to arithmetic 4 Arithmetic in. Grades I and II in the first grades (a tendency greatly accelerated by other influences discussed below), the resulting studies did not in themselves provide an answer for the questions they raised. TruLe. instruction, as meas- ured by its results, was ineffective, but what was to be done? It was possible from the data collected to argue in favor of widely different programs. A second factor making for change was the spread of the educa- tional philosophy epitomized in the phrase "the child-centered school." Clearly, the kind of arithmetic content and teaching found commonly in the primary grades fifteen or twenty years ago was inconsistent with this conception of education. 
Little effort was expended to utilize children's interests; classroom situations in the arithmetic class were typically "unnatural"; few indeed were the opportunities for individual creative and exploratory activity; children were told what to learn and how to "learn" it and then required to "learn" precisely as told. In a word, many practices in the primary arithmetic class violated the tenets of the new conception of education; but, unfortunately, this new conception did not in itself contain a single unambiguous plan for the needed improvement. To illustrate, it was perfectly consistent with the new views (or with some of them) to abolish all planned experiences with number, on the one hand, and, on the other, to adopt a new and different kind of planned arithmetic and teaching to start in Grade I. Like the research mentioned in earlier paragraphs, the new conception of education pointed the need for change but found equally congenial actual changes which were quite unlike each other.

A third factor leading to change, if not to agreement in that change, was psychological in character. Reference here is to learning theory. Both educational and psychological experimentation on learning during the last fifteen years or so has raised objections to the oversimplification of the learning process which had theretofore been generally accepted. This research called attention to aspects of the learning situation which had been neglected when the experimenter artificially isolated the particular "S's" and "R's" in which he was interested.
Nowadays psychologists tend to stress the importance of the subjective conditions of learning: the previous relevant experiences of the learner, his present goals and motivation, and the like; they point out that when learning involves relationships and rich understandings, learning does not take place all at once, but is rather long continued; they insist that however consistent the learning process in the acquisition of meanings and understandings may be basically and ultimately with the facts of simple conditioning, learning in such cases is not very helpfully viewed in these terms.

The Present Status of Number

Briefly then, learning theory now emphasizes increasingly: (1) the allowance of sufficient time for the completion of learning, (2) the necessity that the learner have the essential intellectual capacity to learn, and (3) the presence of a felt purpose or a goal for learning.

Unfortunately, these new psychological emphases, important and welcome as they are, furnish guidance no less equivocal for the determination of the content and instructional procedures in primary number than do the factors already considered. Advocates of early systematic instruction insist that by providing the requisite foundational experiences from the start and by deferring mastery perhaps to Grade III they are making proper allowance for time for learning. At the same time those who would restrict all number experiences in the primary grades to those which appear incidentally and by chance hold that they alone are acting in accordance with modern psychology, for they are waiting for the child to develop adequate intellectual power for learning.

Confusion with regard to primary arithmetic is worse confounded by the operation of a fourth factor, namely, different conceptions of the purpose of arithmetic in the elementary curriculum.
These different conceptions of course quite largely reflect the influence of the factors mentioned above, and their differences arise from the ways in which these factors and the principles deduced therefrom have been combined. Be that as it may, these various conceptions provide teacher and administrator with concrete patterns of thinking which in turn influence the kind and amount of arithmetic assigned to the lower grades.

According to one view, for example, arithmetic is a tool subject; it is in the elementary curriculum to equip children to deal effectively with the quantitative problems they can hardly avoid in later life. Consequently, attention is given chiefly to efficiency (speed and accuracy), and there is little concern that children shall understand what they are taught. The arithmetic of abstract numbers is introduced from the start of Grade I; instruction takes the form of telling children what and how to think; and children "learn" by mastering the prescribed formulas and skills. According to a second conception, arithmetic is first of all a mathematical system; the crucial element in learning is the understanding of the number system and its operations. Obviously, both in content and emphasis the courses in primary number which arise from these two conceptions must be quite dissimilar.¹

THE COMMONER PROGRAMS OF PRIMARY NUMBER INSTRUCTION

The situation with respect to primary number has been described as one of confusion, in the sense that no single program of instruction is predominantly favored at present. Nevertheless, it would be misleading to say that the situation is chaotic, for in the confusion one can detect at least four programs of instruction which are fairly distinct.

(1) The first program is in one sense not a program at all, but rather the negation of program: it abolishes all systematic instruction in number in Grade I, or in Grades I and II,
or in Grades I, II, and III. This program may be designated as the incidental approach and may be interpreted as a most vigorous reaction against the formalism of traditional arithmetic teaching. It is not supposed that children will have no number experiences in the primary grades; on the contrary, it is assumed that children after entering school will continue to find number a part of their natural activities, as they did before entering school, and it is further assumed that these informal and unplanned contacts are sufficient to provide a basis for later systematic instruction, if indeed such instruction is ever offered. Arguments in favor of this incidental approach stress the mental immaturity of children in the primary grades, the waste inherent in efforts to teach arithmetic to children before they are ready for it, the dangers (largely emotional) which must result from unseemly haste and forcing, and the much greater value of other types of experience than those involving number, in the development of the primary pupil.²

¹ Nor does the confusion stop here with adherence to different conceptions of elementary school arithmetic, only two of which have been mentioned. Each conception may be differently interpreted and differently implemented. Thus, one recent writer, while stoutly protesting his conviction that all arithmetic must be "meaningful," objects vigorously to teaching children in Grade III the rationale of the number system. How else it is possible to get children to understand computational operations with numbers, this writer does not say.

² For example: Howard A. Lane, "Child Development and the Three R's," Childhood Education, XVI (Nov., 1939).

(2) The second program is planned from the outset, but it is planned chiefly to acquaint children with number as a normal part of their environments. That is to say, the center of interest in the planning is the social setting or application of number, and children are accordingly encouraged to establish and operate grocery stores, school
-[Ho,, else it i pii.-ssiblre 1t get chil- dren to understand computational c.l.,crations it.li numhers, this u riner J.:.ci not say. 'F,-.r example: Howard A Lane. "Child Deelol.:rment and ith Three R'.." ClhiJI.o-d Education, XVI (N.-, l' -). 1.11. lii. The PF'es'nl Stl,'is ,',f .V\'nii.c," bank, p,.ist .,iTfic.,, and then like lecaucse ,.if lie large place given such experiences in thlis jr,_'.ramn. Pr.giamin ( 2J1 ma,. for the purposes of tlis isp1,rt. I,: de signat:d a, tihe s ..'ial app',., *. It should be noted that. a, part ot ti plan. teachers make certain that the socially cen- tered activitie- '.-ill provi.le plenty .:f .ntact with number, and %,.n-tact .f a kr,,nii kind At luis pi-'it Program (2) is markedly dit,:rent fn..im Pr'igrani (1 i. It is h.;.:eve:r like Program (1) in that it seeks to i: ak,: arithmietic' funciti,'nal, and many of the argu- iienit. in faltr f Pr:cram i I i and in *oppoitio'ii to traditional prac- ncee~ may lie clied in cupp:,rt of Pro:rain 2 1. Si Mllention hals ituit I.,t:n inade .f "-tralditional practice," and tl-is maiy he r-.jarded as the thiil proiranm :of primary number in- ructi.i'n..\:cc:'rdir. to: this l'r.,,rani. niumlber is systematically pre- sentcd from the tart (or tr ir thi: ldieginnirir, i :t"f Grade II). Little attintiin is given 1 t the. i social applicati rns of number and to the psychological fai'tors s pr:,iiiient in Programn, (1) and (2). In- deed. it is assumed lthat c'hihlri:i 'Alien they enter Grade I (or shortly therevcafterr are ready tir aintact nimilber. After giving some prac- tic in counting. pcrhals,. t,:ac,:hrs I I:-Ti Ain) the simple addition and isulItractic-rn fact_. Tihes thy pres.rlt rally and later in written form, but aa'.\av h ijthe: nurinlr tmbl,._ls and %ith a minimum of con- c:r.t,. exper'encc Tlie dI-minaicin, purprr.e :,t" the traditional pro- n, ii. s t1 ,:. get children in the priinary grades to "know" these number fact. in till: rit':t |;'ib;lle time as tli>. 
surest foundation for economical use in the computation of Grade III.

(4) Unlike the first three programs (the incidental, the social, and the traditional) the fourth program cannot be neatly described by a simple name. One reason is that the fourth program gives place to various features of the first three and is therefore somewhat eclectic in nature. Perhaps a better reason is that this program has no single striking characteristic, except that it recognizes the element of mathematical meaning which is, or tends to be, neglected in the other programs. Nevertheless, to call this program the "meaning" program is to slight other features that are equally essential parts. Advocates of this fourth program are no less concerned than are the exponents of the first two with making arithmetic socially functional, and no less concerned than are the exponents of the traditional program with developing eventually a high degree of efficiency in the use of arithmetic; but they hold that these aims are not enough: that children must understand what they learn, and so must have a grasp upon the mathematics of arithmetic.

³ For example: G. M. Wilson, "New Standards in Arithmetic: A Controlled Experiment in Supervision," Journal of Educational Research, XXII (Dec., 1930), 351-360. This is reference 55 in the research bibliography at the end of this monograph. Henceforth citations to this bibliography will be by number only.

Briefly, then, according to the fourth program full advantage is taken of all incidental occurrences of number in "natural" situations; moreover, other social settings involving number are deliberately arranged to increase real contacts with number. But these informal and planned social experiences are supplemented with learning activities deliberately
designed (a) to make number and number operations sensible and (b) to encourage children as rapidly as they safely may to adopt the procedures which make for arithmetical proficiency.⁴

THE CRUCIAL QUESTIONS

Four programs of primary arithmetic have been briefly described. Actually, of course, still other patterns of content and instruction result from various combinations of the four outlined. This is not the place for a critique of the programs. The purpose in mentioning them here is mainly to point out that the different programs represent different solutions for the troublesome problems which relate to arithmetic in the primary curriculum. These troublesome problems may be reduced to three crucial questions, all of which must be answered before a satisfactory solution can ultimately be found. These questions are:

(1) Is the primary grade pupil intellectually capable of profiting from systematic instruction in arithmetic? That is, has he the mental powers requisite to this learning?

(2) If the primary grade pupil can learn much about arithmetic, should he be asked to do so at this time? Is instruction in number wasteful if given in the primary grades, or does it produce gains which justify its being given?

(3) If the primary grade child can and should learn arithmetic from the start of his school career, what should be the content and form of this instruction?

The four programs of instruction which have been analyzed previously are based upon different answers to these crucial questions. Thus, Program (1) answers question (1) in the negative, while the

⁴ This program is essentially that of the National Council Committee on Arithmetic, sponsored by the National Council of Teachers of Mathematics. The point of view has been set forth by the chairman of the Committee, Professor R. L. Morton, in the Mathematics Teacher for Oct., 1938, and in the Curriculum Journal for Nov., 1938.
The forthcoming (1941) yearbook of this Committee, the sixteenth in the series of the National Council of Teachers of Mathematics, should be especially helpful in implementing this program.

other three programs answer it in the affirmative. Likewise Programs (2), (3), and (4) answer the second question in the affirmative, but answer question (3) in quite different ways, while Program (1) consistently says No to question (2) and dismisses the problems of content and method by relying upon incidental number experiences.

But at all events here are four quite different programs of instruction which arise from different answers to the crucial questions and from different ways of implementing those answers. The fact that there can be different answers and programs arises in part from lack of research data and in part from disagreement with respect to educational philosophy. Question (2) is fundamentally a question of values and so of philosophy, but an answer to the question, even if philosophical, is a better answer if it is based upon some kind of research data. Thus, an adequate answer to question (1) should make easier the answering of question (2). Likewise, data on the outcomes of various programs of primary number instruction should be helpful in answering question (3), and indirectly in answering question (2).

It is the purpose of this monograph to supply data with which to help answer the crucial questions just posed. The question as to the "readiness" of the primary pupil for number instruction is partly answerable from evidence as to the number knowledge and skill with which he enters school. Several investigations have already been made in this area. These will be summarized, and to their data will be added the results of an original investigation not previously reported.
The question as to the best system of instruction (it being assumed that instruction is to be given) is answerable in the light of evidence of the effects of various programs. A few research reports contain information on different approaches to instruction, and this information will be summarized and supplemented by new data to show the results of a particular program of instruction. If it can be demonstrated that children actually do profit from number instruction from the start, then further evidence is provided for an affirmative answer for question (1) as well.

Chapter II will canvass previous investigations and report a new study of the number knowledge and ability of children when they enter Grade I. Chapter III will present data on a new study, designed to show the results of one particular program of instruction in Grades I and II. Chapter IV will summarize other research on the effects both of postponing and of initiating instruction in the primary grades. Chapter V contains a final interpretation of research findings and sets forth the writer's conception of arithmetic for the primary grades.

Not the least useful part of the monograph should be the bibliography of sixty research studies in the field of primary arithmetic, with which the monograph ends. Throughout the monograph citations to this bibliography will be by number only. Facing the first page of the bibliography is a classification of research studies according to the topics with which they deal. This classification should be of service to the careful student of arithmetic who wishes to check the writer's reviews and interpretations by consulting the original sources.

CHAPTER II

ARITHMETIC IN THE POSSESSION OF SCHOOL BEGINNERS

This chapter is devoted to a critical review of both new and previously reported research on the arithmetical skills and knowledge of children just entering Grade I.¹ The treatment will be topical. That is to say, all research which relates to the skill, item of knowledge, or other category in question will be brought together and discussed together. The list of topics to be considered in this chapter follows:

1. Rote Counting (p. 14)
2. Enumeration (p. 18)
3. Identification (p. 20)
4. Reproduction (p. 24)
5. Crude Quantitative Comparison (p. 28)
6. Exact Quantitative Comparison, or Matching (p. 33)
7. Number Combinations or Facts, and Their Use in Problems (p. 34)
8. Fractions (p. 44)
9. Ordinals (p. 47)
10. Reading and Writing Numbers (p. 48)
11. Recognition of Geometric Forms (p. 49)
12. Time, U. S. Money, and Measures (p. 50)
13. Sex Differences (p. 51)
14. Differences between City and Rural Children (p. 52)
15. Differences in Levels of Intelligence (p. 53)
16. Effect of Kindergarten Instruction (p. 54)
17. Miscellaneous Studies (p. 55)

The chapter is divided into three parts. Topics 1-7 are treated as a group in Part I, because new data relating to these topics are to be reported here for the first time. Other research dealing with these same topics will be summarized in Part I also, but the separate consideration of this group of seven topics assures some unity to the original investigation. The rest of the topics listed above are considered in Part II. Part III consists in an appraisal of the research in this general area, the purpose being to point out various limitations in the research techniques which have been used, to stress the consequent necessity for caution in interpreting the results of the research, and to direct notice to aspects of number knowledge upon which research is needed.

¹ The published research available before 1932 has been elsewhere summarized by T. G. Foran in "Early Development of Number Ideas," Catholic Educational Review, XXX (Dec., 1932), 598-609.

Not all the available research data are canvassed in Parts I and II. (1) Unpublished material is not included, partly because such reports are not accessible to the average reader, more because an exhaustive inventory of these reports was impracticable. (2) All foreign studies, except British, are omitted. Most of these too are useless to the average reader, since they cannot be obtained, or read if obtained. Moreover, and what is more important, the significance of such studies for American education is ambiguous: dissimilar environmental circumstances almost certainly introduce factors which make for large differences in number knowledge and skill. For these reasons even such well-known studies as those of Decroly and Degand² and of Descoeudres³ are not here considered. (3) Studies made with few subjects, as in the case of Baldwin and Stecher,⁴ or made under conditions insufficiently described to justify evaluation, as in the case of Hall,⁵ are likewise omitted. (4) It is possible, too, that a few significant investigations printed in periodicals not covered by the Education Index or by bibliographies like those in the Review of Educational Research may have escaped notice and so are not here reported.

PART I. ORIGINAL INVESTIGATION, WITH RELATED RESEARCH
(Topics 1-7; see p. 11)

New data on the "readiness" of first-grade children for arithmetic were collected in the fall of 1938 and the fall of 1939 from twenty-four schools (thirty-two classes) in four states. The instrument used for this testing was the series of pretests which are published in connection with the "Jolly Numbers" primary materials of the Daily-Life Arithmetics.⁶ These pretests consist in (a) a group of sub-tests based upon pictures and administered to the class as a whole, (b) a second series of sub-tests intended to be given to smaller, selected groups of children who previously have had sub-test (a), and (c) a third series of sub-tests to be given individually to a still smaller sample of pupils. In this particular instance, however, sub-tests (b) and (c) were administered in their entirety to all pupils of a class or to as many pupils of a class as circumstances permitted. In the latter case directions were to select the first, fourth, seventh pupil, etc., on the teacher's alphabetical roll, or the first, third, fifth pupil, etc.; in other words, to give the tests so as to insure a representative sample of the total enrollment. In all, 365 rural and 327 city pupils (see Table 1) took sub-test (a) and all or a usable part of the other sub-tests. The testing was regularly done within the first two months of the term, in Grade I only, and with pupils who had just entered school for the first time. (Repeaters were thus eliminated.) In no class had any number instruction been offered.

² Dr. O. Decroly and Mlle. Julia Degand, "Observations relatives à l'évolution des notions de quantités continues et discontinues chez l'enfant," Archives de Psychologie, XII (1912), 1-121.

³ Mlle. A. Descoeudres, Le développement de l'enfant de deux à sept ans (Paris: Delachaux et Niestlé, S. A., 1921).

⁴ B. T. Baldwin and Lorle Stecher, The Psychology of the Preschool Child (New York: D. Appleton and Co., 1924).

⁵ G. Stanley Hall, "The Contents of Children's Minds on Entering School," Pedagogical Seminary, I (1891), 139-173.

⁶ Guy T. Buswell, William A. Brownell, and Lenore John, Daily-Life Arithmetics (Boston: Ginn and Company).
TABLE 1
Number of Pupils by Schools

Florida
    City (Sanford): School 16, 12
North Carolina
    Rural (Durham County): School 1, 30; School 2, 40; School 3, 25; School 4, 35
    City (Durham): School 17, 72*; School 18, 62*; (Raleigh): School 19, 28; School 20, 47
Pennsylvania
    Rural (North Central area): School 5, 52*; School 6, 25; School 7, 22*; School 8, 9; School 9, 23*; School 10, 10; School 11, 33
    City (Williamsport): School 21, 25
Virginia
    Rural (Vicinity of Petersburg): School 12, 7; School 13, 13; School 14, 12; School 15, 29
    City (Petersburg): School 22, 14; School 23, 36; School 24, 31

Total: Rural, 365; City, 327†

* Schools marked thus supplied pupils to the testing both in 1938 and in 1939.
† These figures include all pupils whose records, even when incomplete, were used in the studies reported in this chapter. The tables reporting data from the individual tests show totals smaller than the figures above, for the reason that not all pupils in every class were given all the tests.

The testers were graduate students in the writer's course, Investigations in Arithmetic, in the case of Schools 1-4, 12-15, 17 and 18 (for the most part), 19 and 20, and 22-24. In the other schools (5-11, 16, 17 and 18 [in part], and 21) the tester was Miss Hulda Kilmer,⁷ a teacher in the Dushore, Pennsylvania, schools, or the classroom teacher whose assistance Miss Kilmer had secured. In the latter cases great care was taken to have these teachers understand and follow the test directions.

The results obtained from this testing are given below according to the order of the topics as outlined on page 11. This order of topics differs from the order of the sub-tests themselves, but is here adopted as making for greater clarity and convenience to the reader. For each topic there will be given a description of the testing instrument, the directions for administering the sub-test, and the quantitative data secured. These data will then be compared with the corresponding findings of others who have investigated the same abilities. In this way it will be possible to synthesize the new and the older information with respect to each of the seven arithmetical ideas and skills considered in this part of the chapter.

1. Rote Counting
Thi ordtr .t topics differs fr.min the ordkr of tle 1ul.-te-_ts theni-ilie:, Lut is hI:r<, adolt,.d as making for greater c:larit 'ind com:mn\:;ience to the reader. F.:or each topic there \. il be in a daescrilti.'ni of t tue retin- instrumiient,. the directrm'n f':or adlmirniternl- the t]l.m-tm:7t. and theil quantitati\' data secured. 'lie-i data \'ill then be ominpar:d \tinh tie cltr-'re-,pnd- ing findin.'i -of others \iho: havin: mvestieated tie way it ill be [Lpossible to. synth'si.'ze the new and tih i.ilder inf:, rmm - tion willi les[ect to: each .:.f thlie eken arithnietical ideas and skils- consider,d in thii part ..f the chapt,.r. /. RoW L,*';tin', The temt of ability in r ite crouniltinit; .as of: course given indi- vidual: Thel child \as tohi. "I \ int tio st, h'o." \.ell \.:u can count. Will yoi c.:.unt for me ?" If tle child as ti id .:.r v a- unc:rtaran A: the nature .-,f the ta-k. he .*a- tincoruramgeil il"n] the rask was illun- trated itn some -ilrhi falmi]il a "1 knoV, \otu .ani ..tLtiti. N'. li'tcn to me: rnu:. t:, i tIhr Cai Ui It amount fr.iin m other: ?'" All children were stopped u l.n tih,., reachdied 20. Chiklren i ho did no:t tr\ toi count that far o.r ho imae ina mi ktak :. l.eir.re that point ii, the numbi.r series uere credited itrk hatin.i cor.unted to: thle laot correctly stated number. For practically all the children one trial nas eiouigh. \\'Vhen. however, the examiner had raacin to, h:h.ie that a child had erred accide:ntall. hie a',s i\e a second trial. Thime rit half .-4 Table 2 suiimmari..ets the iii.hnit-. It \\ill be i.:.ted that 47 .s per :,It of thi, rural .:hilrven and 57 4 i.ir cent of tiht cit,' children counted t'.: 20 and that 52 3 ipr i.ent of the 6.31 chi the tctal grotii attirnedl thi- inme ioinit. nl,' alout l teinthl stopped short r.f 10 M i-- Kinlm.r hal i. rc m 'u- .- 1 i h. at..','. .:-mi nti.,-ni ] m..ur it. h tl, writer, and c..ll.:ct,:-l ;r .la'a ...r I r ,\ M. 
thesis which, though unpublished, is on file in the Duke University Library.

TABLE 2
PERCENTAGE RESULTS OF THE TESTS OF ROTE COUNTING AND ENUMERATION

                                    Rural      City      Total
                                   (N = …)    (N = …)   (N = 631)
I. Rote Counting (by 1's) to:
     20 ........................    47.8       57.4       52.3
     10-14 .....................     …          …          …
II. Enumeration of:
     10 objects ................    83.3       88.5       85.7
     9 objects .................    87.5       91.2       89.2
     8 objects .................    90.4       93.2       91.7
     7 objects .................    91.3       93.9       92.5
     6 objects .................    93.3       96.0       94.6
     5 objects, or fewer .......   100.0      100.0      100.0

The figures obtained in the present study are to be compared with those of other investigators, as summarized in Table 3. The first three of these studies have been reported by Buckingham and MacLatchy (reference 12 in the research bibliography at the end of the monograph). For two of these studies Buckingham and MacLatchy are themselves directly responsible. One of their studies, which will hereinafter be referred to as their "major" or "main" study, involved a total of 1,356 school entrants in one Texas and sixteen Ohio communities. Their second study, herein designated the "Cincinnati" study, exactly paralleled their main study but was restricted to Cincinnati schools, where 1,067 children, about two thirds of whom had previously attended kindergarten, were interviewed. (In spite of the difference in earlier educational experience among the Cincinnati children, they will all be referred to here as school entrants.) The third study reported by Buckingham and MacLatchy was conducted independently in Cleveland by O'Connor, whose subjects were 1,242 kindergarten children, only 313 of whom were however tested on rote counting. O'Connor's study will be named the "Cleveland" study for the purposes of this report. According to Table 3, the figures for the Cleveland subjects run high, probably because of the influence of some selective factor, or because of some special instruction not enjoyed by other subjects, or because of both of the factors mentioned.

Data for the Woody study (56, 58; see the research bibliography at the end of the monograph) were gathered from 2,895 children in
Table 3, the figures for the Cleveland subjects run high, pir,:.IahIl because of the influence of some selective factor, or because ..f -,ni:mc -.,ieal instruction not enjoyed by other subjects, or because rfi I..th -4f th: factors mentioned. Data for the Woody study (56, 58; see the research bibliography at thld end ol the monograph) were gathered from 2,895 children in .4qrii th'ciC in G'(JdS i / Cid Hi TABLE 3 S1U.MMI P\ ...r Fi -t.: .:,. ,.iiH RifE,_r T.I R u.TE COIN'T[',G B" I'. Inv'.;t uat. Sit.,cs -'er cit.'s il,' t/ C. I Throitugh li 2,.' .i.# 11.10 Buckinelian- ad Mac Latch% . M in Stud .......... 1.2"?nl ch.:.-ol .'i0 .I0 2_ 1) cr ratiar C nriurnaii Study..... I .I.iI; ;cl.:,,:l 8, 10) :nt rant Cleveland 5tud,*. 3. I3 I ,r,, r- *') 7r, IS W ood. t ........... 2..' :. pupil; in 2.. 9S. U.. ,(., '.'4 erouri;. I- nlI - cartinu II. Present tud, .. . .631 chi(,.:.l 'i 52 *O' C-...,,. r i .cure- r -ur L J. f. r c..r.-l, ;n- r, ri.J L M ,. L., ,.12 t T .e I.,ial r.jri..-r cbi jr.r. iri.I ,eI-. .r.J t6 k.r .irc, r r rn I "-4. I'. Gra.-J IB. 607; in Gr.J]- 1.; I in i.rad, lIB. I. ard -.r,i.J, !A. : Tlh r- 1,. c r *..i.- reported itb... e I tht i.tI-I,:..lu ri c.:.r.r .p ; i I... ibc rc..rdJ! i..i.Je I.. Ihe-- .-r.: i.. (Wood;-, 5.-. five half-4-rade kgrup< iln th rt- -uine ystcrns., tthe..: Ihial f-'rade '.r,',tlp.b; being. kindergarten, I, I ht-ginninig bali .:( Grade I i. IA. IiB, and IIA. In 3acI:h instance cliildr,-ii '.i. rev te-t in the half-grade j.it_ prior to that in \which '.le matic inctruicti,:,n in aithinietic v.-as bc.Lun. In so far -a \V\'.,:d\ la publi.-ied data for tli- Ile groul-, l they v. ill hze included in thie su.i inmary talle., of lhli m,:n,,rraph Four c.ti Ji ontai',ai data for cruLtm1i. b I's. The two made by Buckinigham and MacL-at:ch i acrere in slh:n.'i that abuitit onei pupil in ten has d~vl:--':ped tlii- ability at leas.t tc, IlI a; a hmin;t. \, lih.:.ut specific. schol:i instruction. 
bi tlic time hie entertcr Grade I \\.e,-dy's figure i; much hii,;ler. 3S per cent, or I.etter than :,ne ptupil in three. The reason for this large di'.crepanc. I; pro:blimatic. but it is not improbhalle that mtany of \V\-,,d .'s Grade IB _.ubjecit had re:cit:ed instruction in tii-h ability in conercti,-*n v. ith kindergarten schlolitig. For countiitg 1\ I's tc. 10, 20. and 50 t[l-e data from;i the ctuidirs which are comparable .with respect to, subijeci and te.,tini procedures are in fairly uubaiitantial a4r,mL.iinlnt. It ,eI;l, aL.a i t1 conclude- that nearly all children upontit entering school -are in coinim and of il h nmiini- ber series. to 10 and that bet e.i.it half and t,\o thirds know the scrics to 20. For the: purpoem, of instl action, thlie initicanc>- of tlhi ability "An A ariv tu..!v, that b '. Yocum 1 591. found rhai 2i0 r.tr cant I.f fiity bo., and filt\ %irls just ente.rine Grad.l. I could coini 1t., 21i adala irom Buclingham and Mac:Latch, I). .-rithmintic in ilit Pcsscssion of School Beginners 17 in rote counting may be either exaggerated or too greatly minimized. The chili. who can count to: 20. for example, (a) knows the number names anI ib) knows them in their correct order. Having this knowi-ledge. he ic has the basis for understanding the relative size ,of numberr. Thus. in kn-.':in- that 8 comes after 7 and before 9 he has the basis for Iknowing that 8 is more than 7 and less than 9. Furthermore, when this ability to repeat the number sequence is coupled with the ability to apply the number names to groups of objects, he (d) has a means of satisfying by the very immature pro- cedure of enumeration most of the practical number needs which he will encounter in and out of school. At the same time, it is to be noted that rote counting in itself does not make (c) and (d) possible. On the contrary, in both in- stances rote counting must be supplemented by other kinds of learn- ing. 
As a matter of fact, rote counting guarantees only (a) and (b) and may actually be quite devoid of quantitative meaning. The truth of this statement is revealed in the child's inability to put the number terms he knows to any useful service. He may "count" a group of six objects and find "eight," or "three," or "sixteen"; whatever number he assigns the group, he is entirely complacent about its accuracy. For such a child "five" is merely the word to say after saying "four" and before saying "six"; for him "five" possesses no more mathematical meaning than does "e" in the alphabetical series "a, b, c, d, e, f."

Thus far ability in rote counting by 1's only has been discussed. Some attempts have been made to ascertain how well school entrants can count by other units, as by 10's. Buckingham and MacLatchy (12) report that (a) about 50 per cent of their pupils could not count by 10's at all, even though they were helped by the examiner's starting them with "Ten, twenty, thirty"; (b) about a fourth (28 per cent) could count by 10's to 40, and (c) another fourth (24 per cent) could count in this way to 100. The figures for the separate Cincinnati study are extraordinarily alike, being 52 per cent for (a), 29 per cent for (b), and 22 per cent for (c).

In Woody's investigation (58) about the same proportion of children could count to 100 by 10's as by 1's. For his five half-grade groups (kindergarten through IIA) the per cents who could reach 100 were 32, 46, 69, 76, and 93. Woody's second group, composed of children in Grade IB, were most like those in the main Buckingham and MacLatchy study and in their Cincinnati study; yet Woody reports a percentage figure of 46 for his subjects, and the latter two investigations, a percentage figure of about 28. As mentioned before, the most reasonable explanation for the difference is
the probability of specialized training in many of the children in Woody's groups, though there is no proof that this hypothesis is valid.

As for other forms of rote counting, Woody (56) reports 34 per cent of his 1,897 Grade IA pupils as able to count by 2's to 30, 9 per cent as able to count by 3's to 30, and 12 per cent as able to count backward from 20 by 2's.

2. Enumeration

The second half of Table 2 (p. 15) summarizes the results of the test in enumeration. The differences as between rural and city children are slight and probably of no educational significance. According to the last column, nearly seven eighths of the children could enumerate ten objects and about 92 per cent could enumerate at least eight. Their ability in enumeration therefore is but slightly less than their ability in rote counting by 1's, a finding which confirms that reported by Buckingham and MacLatchy.

In this study the testing procedure was as follows: the examiner scatters ten small objects (pegs, paper clips, pennies) on the table top in front of each child and says, "Here are some pegs. I want you to tell me how many you have." The child is required to lay his finger or hand on each object as he tells it off, thus to show that he actually has established a one-to-one correspondence between the series of language terms and the objects in the group. This procedure has been rather generally used in the investigations of others, though the objects used and the numbers of objects have differed. Thus, Buckingham and MacLatchy used twenty instead of ten objects; Woody presented a row of twenty-one circles in a printed booklet; and in the Cleveland study made by O'Connor and reported by Buckingham and MacLatchy the children were given two trials in pushing twenty tacks into a board.
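The testing procedure just described amounts to building a one-to-one correspondence between the rote number names and the objects counted. A minimal model, added here as an illustration and not taken from the study:

```python
# Enumeration modeled as one-to-one correspondence: each object is told
# off with the next number name, and the answer is the last name used.

NUMBER_NAMES = ["one", "two", "three", "four", "five",
                "six", "seven", "eight", "nine", "ten"]

def enumerate_objects(objects):
    """Pair objects with number names in order; return the last name used."""
    last_name = None
    for _obj, name in zip(objects, NUMBER_NAMES):
        last_name = name
    return last_name

pegs = ["peg"] * 6
print(enumerate_objects(pegs))  # -> six
```

A rote counter who skips or double-counts objects breaks this correspondence, which is exactly why the examiner requires the child to lay a finger or hand on each object as it is told off.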
It is difficult to assess the influence of variations such as those just described, but internal inconsistencies in bodies of data from the same study and discrepancies as between different studies probably mean that these and other variations in testing procedure introduce factors which, however negligible they appear to adults, cause marked differences in the responses of children.

Comparative data on enumeration from other investigations are summarized in Table 4. The Cleveland and the Woody data are out of line with the data from the other three studies.

TABLE 4
SUMMARY OF FINDINGS IN OTHER STUDIES OF ENUMERATION

Investigation                  Subjects                          Per cents Able to Enumerate
                                                                 Through 5, 10, 15, 20
Buckingham and MacLatchy:
  Main Study                   1,222 school entrants             [figures illegible in the source]
  Cleveland Study*             1,242 kindergarteners             [?] (both trials); 22 (more, on one trial only)
Woody†                         2,895 pupils, kindergarten to IIA 71, 79, 93, 98, 97
Present Study                  631 school entrants               99, 93, 89

* O'Connor's figures reported by Buckingham and MacLatchy (12).
† Figures are for subjects as follows: kindergarten (94), IB (607), IA (1,877), IIB (80), and IIA (207). (Woody, 58)
[The column alignment of several rows could not be fully recovered from the source.]

In the case of the Cleveland study the discrepancy is explained partly by the fact that kindergarteners served as subjects, but probably much more by differences in the procedure for testing enumeration. These children were required not alone to select the correct number of tacks (20) but also to stick them into a board. Clearly, lack of success may be as reasonably attributed to distraction or to difficulties in fixing the tacks in the board as to difficulties of a purely quantitative character. The high per cents in the Woody study may be explained by the hypothesis suggested twice before (p.
18), namely, that his subjects actually benefited by planned kindergarten experiences before the supposed start of systematic instruction in Grade I or II.

If conclusions are drawn only from the data for school entrants, it appears that nine out of ten children may be expected at the start of their schooling to be able to enumerate ten objects, that seven of the ten will be able to enumerate successfully fifteen objects, and six of the ten, twenty objects.[9] This conclusion is of considerable practical significance. Unlike the ability to count by rote to 10, the ability to enumerate ten objects correctly is possible only to children whose number concepts have really begun to take on content. That is to say, such children can use the number names in functional quantitative situations. Ability to enumerate to 10 or some other such point does not warrant the inference that children have well-developed number concepts, but it does warrant the inference that their number concepts are in process of quantitative development.

[9] The 1916 Stanford-Binet placed the enumeration of four pennies at age four. Enumeration to 4 appears in the new (1938) test at age five, where children must successfully enumerate objects in two of three trials, the objects varying from trial to trial. In the validation of this item it was found that 40 per cent were successful at age four, 68 per cent at age five, and 91 per cent at age six. The 1916 item calling for enumeration of thirteen pennies and placed then at age six has been dropped in the revision. Yocum (59) in 1901 reported that 30 per cent of his fifty boys and 38 per cent of his fifty girls on entering school could count more than twenty objects (data from Buckingham and MacLatchy).

3. Identification

Closely related to ability in enumeration is the ability to identify or name the number of objects in groups of various sizes. In the present study this ability was tested with classes as wholes by means of pictured groups (see Part A, p.
21). Instructions were to "put a mark on the man with four balloons; on Mary's birthday cake with seven candles; on the pot with ten flowers." A similar procedure was employed by Grant (15), who made use of Test 5 of the Metropolitan Readiness Tests for this purpose. His subjects had to select from among four choices the box that contained three dots, six dots, and eight dots. In both of these studies, therefore, something more than mere enumeration was involved; the children tested were presented with a variety of number representations and had to select the appropriate one.

Quite a different procedure was employed by Buckingham and MacLatchy (12). In their two investigations (in Cincinnati and elsewhere) the examiner threw five, six, seven, eight, and ten objects in random patterns on the table top before the child and asked, "How many pegs (or what not) are here?" In their main study three trials were given and data are reported for successes on one trial only, on two trials, and on three trials. Also, they translated successes into a "mean index of difficulty" for the different numbers in such a way as to make the resulting index figures represent the per cents of success as compared with the total number of opportunities. In their Cincinnati study evidently but one trial was given. It might be argued that both of these studies are better classified under enumeration than under identification, but they are included under the latter, in part because they are so classified by the authors and in part because of discrepancies between the percentage figures for these tasks and those for enumeration (Table 4).

Name ..................  School ..................  Date ..........
Part A                    Part B

Woody (56) presented his subjects with three sets of grouped objects, the sets being made up of domino figures, of groups of dots, and of groups of lines. Each set consisted of five groups. The subject was first instructed to point to the particular domino among the five dominoes given which contained five dots; then the one which contained four, etc., etc., until he had used four of the five dominoes in that set. The subject then proceeded similarly with the set of dot figures, and then with the set of line figures, in each case identifying the number in four of the five groups. Three other items in the Woody test required the subject to divide a line of stars into groups of three, another line of stars into groups of five, and a third, into groups of four. The per cents of accuracy for 1,897 IA pupils were 90, 79, and 76, respectively. Obviously, this last task differs considerably from that in the other studies in this section, and for this reason the data are not included in the summary table or statements.

Table 5 contains the detailed summary of findings from the present study. Again the results of rural and city children are about the same, the rural children being slightly superior (about 5 per cent) on the numbers 4 and 7 and slightly inferior (4.3 per cent) on the number 10. About 6 per cent more of the city children were successful on all three numbers than is true in the case of the rural children. As to the relative difficulty of the three numbers 4, 7, and 10 for the purposes of identification, the data are anomalous: the per cents of children who identified 4 and 10 were very much alike, but the per cents who identified 7 were, both for city and for rural children, less than the per cents for 10. On the assumption that the figures for 4 and 10 are correct, and that more than three fourths of school beginners can identify 10 as well as 4, then the figures for 7 are difficult to account for.
On the assumption that the per cents for 4 and 7 are correct and that numbers increase in difficulty of identification as they increase in size, then the per cent for 10 is too high. A tentative explanation is as follows: In the third picture the children were instructed to mark the pot which contains ten flowers; they knew that 10 is a large number, and the pot showing ten flowers is rather obviously better supplied with flowers than the other two pictured, which contain but six and eight flowers respectively.

TABLE 5
FOUR, SEVEN, AND TEN OBJECTS

                  Per cents Identifying 0, 1,      Per cents Identifying Spe-
Type of Pupil     2, or 3 Numbers Correctly        cific Numbers Correctly
                    0     1     2     3              4     7     10
Rural (365)        6.3  12.1  31.2  50.3            86.7  72.6  76.4
City (327)         5.5  14.9  22.9  56.5            80.7  69.1  80.7
Total (692)        5.9  13.4  27.3  53.3            83.8  70.9  78.5

That is to say, the decision in favor of the 10-pot may well have been made in terms of gross comparison rather than in terms of actual and exact enumeration and comparison. That children do so discriminate between groups of objects has been amply demonstrated by Russell (42).

Comparative data on identification (in the different meanings of this term as already described) have been previously reported by Buckingham and MacLatchy (12) (their main and their Cincinnati studies), by Woody (56), and by Grant (15). The relevant facts are presented in Table 6. If one takes the median figures reported for each number (and uses only the Grant per cents for children of average intelligence), the numbers are found to have the following approximate per cents of success:

Number      3     4     5     6     7     8     9    10
Per cent  (90)  (90)  (80)  (66)  (71)  (68)  (75)  (70)

The anomaly found in the present study in the case of 7 and 10 is to be noted in the Grant study in the case of 6 and 8.

TABLE 6

Investigation          Subjects             Per cents Able to Identify
                                              3    4    5    6    7    8    9   10
Buckingham and MacLatchy:
  Main Study*          1,356 school         ..   ..   72   63   60   58   ..   56
                       entrants             ..   ..   82   75   74   72   ..   70
                                            ..   ..   63   52   46   45   ..   42
  Cincinnati Study     1,123 school         ..   ..   80   69   63   63   ..   60
                       entrants
Woody†                 1,897 IA pupils      80   ..   83   81   ..   79
                                            99   96   92   95   91   ..   86
                                            ..   75   80
Grant‡                 563 school           54   ..   ..   19   ..   54   ..   ..
                       entrants             82   ..   ..   43   ..   73   ..   ..
                                            96   ..   ..   61   ..   86   ..   ..
Present Study          692 school           ..   84   ..   ..   71   ..   ..   79
                       entrants

* First row of figures is in terms of "mean index of difficulty"; second row shows per cents successful in at least one trial; third row shows per cents successful in all three trials.
† Data are presented for Grade IA only. First row is for responses to domino pattern; second row, to groups of dots; third row, to group of lines.
‡ First row for 145 pupils with IQs below 90; second row for 252 pupils with IQs between 90 and 109; third row for 166 pupils with IQs of 110 and higher.
[The column alignment of the Woody rows could not be fully recovered from the source.]

If Grant's per cents for 3 and 6 are correct, that for 8 seems to be too high; if the per cents for 3 and 8 are correct, that for 6 seems to be much too low. An examination of the pictured groups supplies no explanation to favor either hypothesis. All groups of dots, from three to nine, are distributed over squares of the same size (half-inch). The number 8 must be identified from pictures containing four, nine, six, and eight dots; the number 6, from pictures containing three, six, four, and five dots.

Whatever be the true explanation for the two striking inconsistencies noted, the data on identification as a whole seem to warrant the conclusion that six out of ten children on entering Grade I can identify or name the numbers to 10 when these numbers are represented concretely and without regularity of pattern. This figure is to be compared with that for enumeration, in which nine out of ten children were able to enumerate ten objects correctly. The difference in relative success in enumeration and identification may be ascribed to the extra task in identification as here tested, namely, to the necessity of selecting the correct number from among several others present at the time.

This conclusion and this explanation are by no means certain. All measures for enumeration were secured from individual children. The data on identification, however, were obtained, in the present study and in the Grant study, by means of group tests. Unfamiliarity with the test situation on the part of school beginners almost certainly would have made for an excessive number of mistakes. On the other hand, another factor in this testing would have had the opposite effect. In both of the studies named the subject made his selection from among three or four pictures, and he could therefore have scored some unearned successes through the operation of chance. How much one of these two factors (test unsophistication and chance) compensated for the other it is impossible to say.

4. Reproduction

Identification is the activity by which one answers the question, "How many apples have I?" Reproduction, on the other hand, is the activity in which one engages to comply with the request, "Give me five apples." In the case of identification a group of objects is given and their number must be found; in the case of reproduction the number is given or announced and the corresponding group of objects must be found. The mental processes required in the two number feats are markedly different (though both obviously employ enumeration), and so both need to be tested.

In the present study reproduction was tested for classes as wholes by means of three pictures (Part B, p. 21). The directions were:

1. This boy wants to play marbles. Draw five marbles for him.
2. Look at the umbrellas. They have no handles. Put handles on six of them.
3. Do you see the rabbits? They have no tails.
Put tails on nine of the rabbits.

A single trial was given with each picture, and the numbers for which ability to reproduce was tested were 5, 6, and 9. The results of the testing are summarized in Table 7.

TABLE 7
REPRODUCTION: 5, 6, AND 9

                 Per cents Reproducing 0, 1,      Per cents Reproducing
Type of Pupil    2, or 3 Numbers Correctly        Specific Numbers Correctly
                   0     1     2     3              5     6     9
Rural (365)      17.8  20.8  27.1  34.2            73.7  55.1  49.0
City (327)        5.2  22.9  25.9  45.9            86.9  66.7  59.0
Total (692)      11.8  21.8  26.6  39.7            79.9  60.5  53.8

Nearly half (45.9 per cent) of the city children, compared with about a third (34.2 per cent) of the rural children, were successful with all three numbers; and only 5.2 per cent of the city children, compared with 17.8 per cent of the rural children, failed on all numbers. So far as the separate numbers 5, 6, and 9 are concerned, difficulty in reproduction is seen to have increased with the size of the groups to be reproduced.

There are at least four other related investigations. Buckingham and MacLatchy (12) report data for the numbers 5, 6, 7, 8, and 10. In their study the examiner began with 5 and worked upward if the child was successful, and downward if he was unsuccessful. Three trials were given, and their data are here reproduced for those subjects who were successful in one trial only, in two trials, and in all three trials. In addition, their figures for "mean index of difficulty" (see p. 20) are quoted.

TABLE 8

Investigation          Subjects             Per cents Successful in Reproducing
                                              4    5    6    7    8    9   10   13
Buckingham and MacLatchy:
  Main Study*          1,355 school         ..   74   67   66   63   ..   63   ..
                       entrants
  Cincinnati           1,123 school         ..   83   73   72   68   ..   67   ..
                       entrants
Grant†                 513 school           57   ..   ..   39   ..   ..   ..   [?]
                       entrants             64   ..   ..   58   ..   ..   ..   40
                                            87   ..   ..   83   ..   ..   ..   [?]
Present Study          692 school           ..   80   61   ..   ..   [?]  ..   ..
                       entrants

[Cells marked [?] are illegible in the source.]
* Figures are for "mean index of difficulty." Per cents of subjects successful on one
trial only were, for the five numbers respectively, 14, 15, 16, 17, and 16; per cents successful on two trials only were: 7, 10, 11, 12, 10; per cents successful on all three trials were: 64, 56, 54, 49, 50.
† First row of figures is for 145 children with IQs below 90; second row, for 252 children with IQs between 90 and 109; third row, for 116 children with IQs above 110.

Table 8 summarizes the data from the Buckingham and MacLatchy studies. No facts for O'Connor's study with Cleveland kindergarten children are given here or in the Buckingham and MacLatchy report, for the reason that the methods of testing and of treating the results are not comparable. Buckingham and MacLatchy's statement may be quoted:

Of 1,242 children at Cleveland, 32 per cent were completely successful in both trials of a test in which they were to reproduce 5, 7, 9, and 11 by putting the required number of marks in designated spaces on a sheet of paper. According to the marking plan for this test, the highest obtainable score was 10, and the median of all the scores actually made ... This is a very creditable performance.

Grant's procedure (15) with Test 5 of the Metropolitan Readiness Tests was much like that in the present study. Pictures of objects were given, with directions to put marks on four objects (houses) in one picture, on seven in the second, and on thirteen in the third. His test results were then tabulated according to the IQs of the subjects, and the per cents so obtained are included in this summary table (Table 8).

The per cents obtained in the present study and in the Grant study are, with one exception (the number 5, in the present study), lower than the corresponding per cents obtained in the other two
If it is assumed that the true abilities of the four different groups of subjects were actually equivalent, then it follows that the children tested by group tests were thereby put at a disadvantage and were unable to show their real abilities. This inference is not im- probable. In the first place, school entrants would suffer from their unfamiliarity with the techniques of group testing: they lose their place, misunderstand directions, etc., thus making errors which should not arise in the case of individual testing. In the second place, their scores with the larger numbers would be especially in- fluenced for the worse by the rather crowded appearance of the objects pictured. Certainly this is true in the case of the test picture for 10 (rabbits) in the present study (see p. 21). Likewise it seems to have been true in the case of the pictures for 9 and 13 in the Grant testing procedure. For these reasons the two sets of figures from the Buckingham and MacLatchy studies should be given special weight in assessing the ability of school entrants to reproduce numbers. But in this case which of Buckingham and MacLatchy's four sets of measures should be adopted? These authors point out that no less than three successes in as many trials can be accepted as guaranteeing real ability, but they also stress the danger of disregarding or minimizing the signifi- cance of smaller 'degrees of success. After all, the child who can produce five objects in two of three trials is well on his way in the development of a good reproduction-concept of 5, and even the child who can correctly produce five objects only once in three trials is not without some degree of ability. In the listing below the numbers are arranged in the order from 4 to 13, and with each is given the per cent of successful reproduc- tion in the same way as had already been done for identification. 
Each per cent figure represents the median of the measures available for each number, only the Buckingham and MacLatchy mean indices of difficulty being used from their study:

Number      4     5     6     7     8    10    13
Per cent  (64)  (80)  (67)  (66)  (66)  (63)  (40)

If these figures could be accepted at face value, it would be possible to conclude that about two thirds of school entrants are able to reproduce all the numbers to 10 (the Grant figure for 4 is almost certainly much too low).[10] Obviously, however, various limitations with respect to testing procedures and with respect to the determination of accurate single indices from the data obtained render this conclusion debatable; it must be regarded as purely tentative, though it is the best guess on the basis of the evidence available.

[10] The 1938 Stanford-Binet places a reproduction test at age six. On demand children must be able to supply three, nine, five, and seven objects (three successes required for credit). Percentages for the various numbers are not

5. Crude Quantitative Comparison

Concrete objects or pictures. Under this caption are included terms like largest, shortest, most, and so on, a total of eleven such terms by means of which one describes crude differences in size or amount without attempting to fix exactly the degree of the difference. In the present study single trials were given in the use of six of these words or phrases. Three of these same terms appear among the eleven terms of the Metropolitan Readiness Tests, Test 5, which was used by Grant (15).

In both of these studies the testing employed pictures. The picture test of the present study is Part C of page 29. The specific directions are:

1. ...... Put a mark on the largest one [cat].
2. ...... Put a mark on the smallest one [doll].
3. ...... Put a mark on the kite with the longest tail.
4. ...... Put a mark on the pan with the shortest handle.
5. ...... Put a mark on the nest that has the most eggs in it.
6. ......
Put a mark on the tree that has the most apples on it.
7. ...... Put a mark on the table that has the smallest number of cups, or the fewest cups, on it.

As shown in the first half of Table 9, less than half the children tested knew all the terms well enough to score consistent success.

TABLE 9

Type of Pupil     Per cents Answering 0-7 Items Correctly
                    0     1     2     3     4     5     6     7
Rural (365)       0.8   0.8   0.5   2.7   5.8  16.2  30.7  42.5
City (327)        0.0   0.0   0.6   2.1   4.6  14.1  32.4  46.2
Total (692)       0.4   0.4   0.6   2.5   5.2  15.2  31.5  44.2

                  Per cents Answering Specific Items Correctly
                  Largest  Smallest  Longest  Shortest  Most   Most   Fewest
Rural (365)         80.0     78.6      92.3     86.8    96.2   96.2    65.2
City (327)          87.8     79.8      95.7     92.4    97.6   98.8    62.1
Total (692)         83.7     79.2      93.9     89.5    96.8   97.4    63.7

Name ..................  School ..................  Date ..........
Part C                    Part D

Three fourths of them, however, identified either six or seven terms, and nine out of ten identified at least five. Differences between rural and city children were conspicuous only in the case of largest and shortest, where the city children were slightly superior (about 6 percentage points). The best known terms in both groups of children were most and longest; the least well known was fewest (or smallest number).

The comparisons in the present study, as will be noted from the picture, regularly involved three objects; the comparisons in the Grant study with equal regularity involved four objects, and his results are therefore less subject to chance success and error. The specific directions for the Grant test were:

Mark the other board that is just as long as the first one.

Two tests involve half, and it is impossible to tell from the report which of the two yielded the data reported for the term.

Mark the longest pencil.
Mark the middle hat.
Mark the shortest flower.
Mark the smallest tree.
Mark the tallest boy.
Mark the widest board.

The three terms studied in both the present and the Grant investigations are longest, shortest, and smallest. Both groups of children knew the first two of these terms exceedingly well, the per cents who were successful running at least to 90. The term smallest was about as well known to Grant's pupils as were the other two, but in the present study the per cent amounted to 79, perhaps because of unfavorable factors relating to the test picture (see above).

TABLE 10
PER CENTS OF SCHOOL ENTRANTS SUCCESSFUL WITH VARIOUS TERMS

Term of                      Grant Study                     Present Study
Comparison          IQ 90-     IQ 90-110    IQ 110+
                    (N=145)    (N=252)      (N=166)          (N=692)
as long as            24          50           68              ..
fewest (smallest
  number)             ..          ..           ..              64
half                  55          76           83              ..
largest               ..          ..           ..              84
longest               92          99          100              94
middle                84          96           98              ..
most                  ..          ..           ..            97, 98*
shortest              74          94           98              90
smallest              66          91           93              79
tallest               71          89           98              ..
widest                72          85           93              ..

* Two tests: 5 compared with 3 and 2; 6 compared with 2 and 4.

The figures of Table 10 seem to support the conclusion that most children on entering school are in control of the process of crude comparison, at least as comparison was involved in the test situations used in these studies. Exceptions should be noted in the case of half (though, as stated, Grant's data are ambiguous) and fewest, or smallest number. It is a fair guess that the latter term was troublesome less because of an as yet undeveloped kind of comparison than because of difficulty associated with the language for expressing the result of the comparison.
The most extensive data collected with respect to crude comparison have been summarized by Woody (56) in a way which greatly reduces their value for the present purpose. Woody's inventory test, given to children from the kindergarten through Grade IIA, contained twenty-two items relevant to crude comparison. The first eight were concrete in type, e.g., "Which is more, 3 apples or 5 apples?" "Which is most, 11 cookies, 7 cookies, or 9 cookies?" and "Which is less, 33 marbles or 29 marbles?" Ten items dealt with abstract numbers and involved the questions, "Which is more (e.g., 3 or ...)?" "Which is less?" "Which is 'biggest'?" "Which is 'smallest'?" some of the numbers being as large as 229. In another exercise children had to cross out all numbers "bigger" than 9 in a set of eight numbers, all numbers "bigger" than 11 in another set, all numbers "smaller" than 17 in a third set, and all numbers "bigger" than 25 in a fourth. On these twenty-two tasks combined, the per cent of success was 77 for Grade IA pupils.

The most careful study yet made of children's procedures in comparing numbers represented by groups of objects has been reported by Russell (42). Unfortunately for the purposes of this monograph his data are not tabulated to show quantitatively the ability of school entrants to make the kinds of comparisons he studied. In his first experiment he used thirteen kindergarten, twelve first-grade, and four second-grade children, who were asked to identify which of two piles of blocks (such as 4 and 7, 5 and 5, 6 and 8) was "more" or "most." Having found this to be a "leading" or a "misleading" question, in his second experiment with ten kindergarten, ten first-grade, and five second-grade children, he asked for the identification of groups that were "alike," or "the same," or "equal in size."

The value of this study lies in Russell's psychological analysis of children's thought processes when required to compare quantities. Russell reports that children of age five have good concepts of most, bigger, and biggest; that not even seven-year-old children know
thought processes when required to compare quantities. Ruiiell rel-orts that children of age five have good concepts of ,c.t /.,,'ih. and biggest; that not even seven-year-old children know Arithmetic in Grades I and II same and equal at all generally; that in making crude comparisons children of the ages of his subjects rarely use counting to detect dif- ferences in amount, but rather estimate directly when they can or break up large groups into identifiable smaller groups which they then use by some such process as matching. The introduction of blocks of different sizes or different colors resulted in marked in- crease in error, a fact which clearly reveals the instability of chil- dren's thought processes in quantitative comparisons of the kind Russell investigated. Abstract numbers.-The crude comparisons thus far considered all dealt with groups of actual or pictured concrete objects. It is of course possible to compare abstract numbers, such as 4 and 6, 3 and 7, etc. The mental 'demands of such comparisons are more arduous than when actual or pictured objects are present. In the latter case, for example, the compare has direct evidence of difference in size or amount in the external groupings. In comparing abstract num- bers, on the other hand, the compare must himself supply whatever content he uses. So far as could be ascertained by diligent bibliographical search, this ability has been studied only twice-and then not very satis- factorily-in the investigation here reported for the first time, and in Woody's study. The procedure in the present case was to ask which of two given numbers is more and which of two other num- bers is less. The exact questions asked of the subjects, individually in each case, were: 1. Which is more, 2 or 4? 2. Which is more, 7 or 3 ? 3. Which is more, 5 or 8? 4. Which is less, 3 or 5? 5. Which is less, 6 or 4 ? 6. Which is less, 10 or 8 ? 
To be given credit for a "pass" the child had to answer correctly twice in the three trials for more and, similarly, twice in the three trials for less. This method of scoring is open to criticism, first of all, because it by no means eliminates the factor of chance. It can be criticized, in the second place, because really neither success nor failure tells very much. Even when chance is ruled out, probably many children were given credit for knowing more or less or both merely because they employed correctly their knowledge of the serial order of the numbers. Moreover, it is not improbable that children scored successes with particular pairs of numbers whose relations as to size they knew, whereas testing with other and less familiar pairs of numbers might have revealed ignorance or inability. For these reasons not much weight can be attached to the results, which are summarized in Table 11.

TABLE 11

                        Per cents Comparing Correctly
Type of Pupil               More          Less
Rural (337) ...........     81.6          48.7
City (296) ............     84.8          40.9
Total (633) ...........     83.1          45.0

The difference in per cents of success for more and less, and in favor of more, probably is sufficiently large, in spite of the limitations mentioned above, to be regarded as a fact. Whether this means greater ignorance of the word less than of the word more, or less ability to compare abstract numbers in descending than in ascending order, or both, it is impossible to say from the data at hand.11

6. Exact Quantitative Comparison, or Matching

The requirement in exact quantitative comparison, as this ability was measured in the present study, is essentially that of matching a given number of objects. For this purpose Part D of the group test (see p. 29) was used. The directions are as follows:

1. Put as many candles on this tree [pointing to the tree with no candles] as there are on this one [pointing to the first tree].
[The number to be matched is 4.]
2. Here are two boxes. One has eggs in it; the other one is empty. Put as many eggs in this box [pointing to the empty box] as there are in this one [pointing to the full one]. [The number to be matched is 5.]
3. Draw marbles in this ring, so that this boy [pointing to the second one] will have as many as this one [pointing to the first boy]. [The number to be matched is 7.]

To score a correct response the child must first enumerate the objects presented in the picture and then reproduce this number by drawing. In each instance the drawing requirement is simple, since almost any kind of mark satisfies the demand, so that errors are probably quantitative in character, the result of incorrect enumeration, of failure to hold in mind the numerical total, of inability to reproduce that total, or of some combination of elements. Correct matching, in other words, is possible only after rather complicated mental feats.

The results obtained in the present study (the only one yet reported in this area)12 are to be found in Table 12. Three fourths, slightly more or less, were able to match 4 and 5, and about half, to match 7. The matching of 5 was easier than the matching of either of the other numbers, even 4. Whether this fact is to be explained as evidence of a richer concept of 5 than of 4, or as the result of differences in the test situations which unintentionally made the group of five eggs easier than the group of four candles to apprehend or reproduce, or both, it is impossible to say.

TABLE 12
CONCRETE NUMBERS 4, 5, 7

                    Per cents Answering          Per cents Matching
                    0-3 Items Correctly          Specific Numbers Correctly
Type of Pupil        0     1     2     3          4     5     7
Rural (365) ....    13.9  13.9  32.3  39.7       69.0  79.5  49.3
City (327) .....    12.8  11.6  30.3  45.3       72.5  80.1  55.4
Total (692) ....    13.4  12.9  31.4  42.3       70.7  79.8  52.2

The city children were better able to match the three numbers in the test than were the rural children; 45.3 per cent of the former scored successes on all three numbers, as compared with 39.7 per cent of the rural children. Moreover, this advantage is consistently in the favor of the city children in the case of each of the numbers, though the difference in the case of the number 5 is negligible.

11 Woody (56) asked his subjects, "Which is more, 3 or 5 (5 or 8) (13 or 17)?" and "Which is less, 15 or 22?" but the response data are not reported in a way which permits comparison with the figures from the present study.

7. Number Combinations or Facts, and Their Use in Problems

The ability of school entrants to deal with number combinations or facts (the terms will be used interchangeably) has been studied in various ways, and the investigations unfortunately have for the most part dealt with different combinations. Comparisons of results are therefore of limited value. For the purposes of the present summary the studies are grouped and the findings reported under three heads: (a) the number combinations when represented as a whole or in part by means of concrete objects; (b) the number combinations when presented in verbal problems; and (c) the number combinations when presented abstractly.

12 Russell (42) had his subjects in his second experiment tell whether groups of objects were the "same" in size or "equal." Possibly success meant the ability to match; but in any case his data as reported cannot be compared with those obtained in the present study.

a. The number combinations when presented concretely.-Both in the main Buckingham and MacLatchy study (12) and in their Cincinnati study ten addition combinations were presented individually to children, first, in an "invisible" test and, then, for children who failed in the "invisible" test, in a "visible" test.
The procedure was as follows: In the case of the combination 2 + 2, for example, a child was first shown two small objects which were then covered; next, he was shown two more small objects which were also immediately covered; then he was asked, "How many oranges are two oranges and two more oranges?" If the child scored a success in his answer, he was given the next combination in the same way. If, however, a child could not give the answer, he was given a "visible" test on the same combination. This time groups of small objects representing both terms of the combination were exposed and left exposed until the child arrived at an answer. Thus in the "visible" test both numbers involved were present together for the child's use.

The Cleveland study (12) included only five combinations which are identical with those in the preceding studies, and data are reported for these combinations alone. The first trial in this study was similar to the "invisible" test already described.

Grant (15) made use of pictures instead of real objects. The subjects were instructed to look at a row of ten apples, after which the examiner said, "I had one apple and mother gave me two more; think how many I would have, and mark the number of apples I would have then." The requirements of the subjects here were more like those of the Buckingham and MacLatchy "visible" test than of their "invisible" test in that all the objects were continuously present before them. On the other hand, the Grant test was probably harder in that (a) the groups of objects for the two terms of the combination were not separated and identifiable as such and (b) the total had to be isolated from a larger total which was also continuously present. On the other hand, the Grant subjects did not have to record their answers in any numerical form, either written or oral, and this fact probably compensated in part for the greater difficulty caused by (a) and (b) just mentioned.
Table 13 contains the per cents of correct responses in the four studies described above. For the Buckingham and MacLatchy studies the first column of figures refers to the "invisible" test and the second column to the total of successes on both the "invisible" and the "visible" tests. The assumption here is that those who passed the "invisible" test would certainly have passed the "visible" test also, had they taken the latter test as well.

TABLE 13

                        Per cents of Subjects Successful

              Buckingham and MacLatchy
              Main Study            Cincinnati Study      Cleveland     Grant†
              (1,356 school         (1,123 school         Study*        (563 school
              entrants)             entrants)             (313 kinder-  entrants)
                                                          gartners)
              Invisible  Visible    Invisible  Visible    Invisible
2 + 2 .....      66        89          70        90          66           ..
8 + 1 .....      45        76          46        76          ..           ..
6 + 1 .....      51        77          55        81          ..           ..
1 + 7 .....      53        80          54        82          53           ..
3 + 1 .....      64        89          71        90          ..           ..
2 + 4 .....      40        72          39        82          40           ..
2 + 8 .....      37        73          36        74          37           ..
2 + 6 .....      50        78          37        76          50           ..
3 + 7 .....      33        73          27        72          ..           ..
4 + 6 .....      32        72          27        73          ..           ..
1 + 2 .....      ..        ..          ..        ..          ..         32, 56, 67
3 + 4 .....      ..        ..          ..        ..          ..         12, 36, 51
6 + 6 .....      ..        ..          ..        ..          ..         11, 30, 50
5 - 2 .....      ..        ..          ..        ..          ..         17, 39, 54
3 - 1 .....      ..        ..          ..        ..          ..         19, 44, 69

* O'Connor's figures reported by Buckingham and MacLatchy (12).
† The three figures for each combination stand for the records made by the three intelligence groups: 145 pupils with IQs less than 90, 252 with IQs between 90 and 109, and 166 with IQs of 110 and higher.

The first three studies in the table are roughly comparable as to procedure, and the results obtained in them are in extraordinarily close agreement. A fair summary is that a third or more of the children (except in the Cincinnati study, for 3 + 7 and 4 + 6, which were "hard" combinations) were able to get answers for the combinations in the "invisible" test, and that this proportion of children increased to about three fourths when the combinations were "visibly" presented.
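A later passage in this section compares median per cents of success for groups of these combinations: those involving 1, those involving 2, and the rest. As an illustrative sketch (the grouping rule is my reading of that passage, not code from the study), the group medians for the Buckingham and MacLatchy main-study figures of Table 13 can be computed directly:

```python
from statistics import median

# "Invisible" and "visible" per cents for the ten combinations of the
# Buckingham and MacLatchy main study, as given in Table 13.
MAIN_STUDY = {
    "2+2": (66, 89), "8+1": (45, 76), "6+1": (51, 77), "1+7": (53, 80),
    "3+1": (64, 89), "2+4": (40, 72), "2+8": (37, 73), "2+6": (50, 78),
    "3+7": (33, 73), "4+6": (32, 72),
}

def group(combo):
    """Classify a combination by the grouping rule described in the text."""
    terms = {int(n) for n in combo.split("+")}
    if 1 in terms:
        return "involving 1"
    if 2 in terms:
        return "involving 2"
    return "remaining"

def group_medians(idx):
    """Median per cent per group; idx 0 = invisible test, 1 = visible test."""
    groups = {}
    for combo, scores in MAIN_STUDY.items():
        groups.setdefault(group(combo), []).append(scores[idx])
    return {name: median(values) for name, values in groups.items()}

print("invisible:", group_medians(0))
print("visible:  ", group_medians(1))
```

Computed this way, the "invisible" medians fall near 52, 45, and 32, and the "visible" medians near 78, 75, and 72, which are the general levels the later summary reports.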
The per cents for the addition combinations in the Grant study are lower if they are compared with the figures for the "visible" tests in the other three studies, but are about the same if they are compared with the results of the "invisible" test. The same may be said with respect to the two subtraction combinations in the Grant study.

None of the investigators in commenting upon his results interprets these facts to mean that such-and-such per cents of children "knew" these facts; they stay well within the conditions under which their data were obtained when they say that such-and-such per cents were able to "get the answers" or were "able to handle" the combinations as they were presented. Others in citing these figures have implied that this research "shows that children on entering Grade I already know many of their addition (or addition and subtraction) combinations." Statements of this kind are wholly unwarranted. It is one thing for a child to "get a correct answer" for a combination, or for a problem containing a combination, when he is allowed to see or to manipulate representative objects, and quite another thing for him to be able automatically and meaningfully to supply a correct abstract answer for a combination abstractly stated.

b. The number combinations when presented in verbal problems.-In six studies number combinations, predominantly in addition and for the most part differing considerably from study to study, have been presented to school entrants as parts of verbal problems. Five investigations on the use of number combinations in verbal problems will be considered together later. The sixth study, by Woody (56),
is treated first and separately, chiefly because the findings cannot be broken down for comparison with those in the other five.

The Woody inventory, given to children from the kindergarten through Grade II (but always to the class immediately preceding that in which systematic instruction was offered), contains eighteen verbal problems. These range from simple one-step problems, five in addition and four in subtraction (like "How many pennies are 2 pennies and 1 penny?" and "If you have 2 pennies and give 1 penny away, how many pennies do you have left?"), through two one-step problems in column addition and four one-step problems in higher-decade addition, to two two-step problems in addition. The per cent of success for the 1,666 Grade IA pupils is 39, but obviously this measure tells little about success or failure on particular combinations as presented in verbal problems.

In the present study two addition combinations (2 + 2, 2 + 3) and two subtraction combinations (3 - 2, 4 - 3) were presented individually to 633 school entrants. The actual problems used were:

1. On Mary's birthday her daddy gave her 2 dollars and her aunt gave her 2 dollars more. How many dollars did she have then?
2. Mary had 3 dolls. She put 2 of them to bed. How many stayed up?
3. Two boys came to play with Tom one day. Then 3 more boys came. How many boys in all came to play with Tom?
b,-,th in thi,- function as a whole and in each of th,: -.eparatc problcizm. TI o of the prubleicm. were correctly solved by about half the children. one .:-f thete being in addition (2 + 2) and thi .:Iher in suilitracti:,ln i3 21 The other two problems (2 + 3 and 4 3- 1 ere b.,.ei by a few m.-,re than a third of the children. TA BLE 14 PRESENT STUDY: RESULTS c' N"L', BLE CI '.[fiE n .,',.~ I. FP. PA ,i-,t, Per cents .': ll.) i '-1 Fer .''i,; riin.; Spc 'ifpL Type of Verbal Problt,,., Cl ',r..r il' '.., '"i." .,i.;.i: .'.."L r'cti 0 1 2 .3 4 2 3 2 4 +_ --2 +3 --3 Rural (337) .... 22.3 24.6 2410 I'. '1 2 51 6 47.; 351) ,35.50 City (296) ...... 15.9 20.6 2'" 4 21 1" 122 2 4 s5 4 3J', 4) 2 Total (633).. .19.3 22.7 -, 5 21:'" 10 F. 2 .'* 5 37.3 37.4 As has already been explained. the teist ued] in the l're:ent tu'ld was originally planned for til special .urpq.ose f ,:las.f.i, Mg children very crudely at the beginning ,of a particular c::ur,.e -:f instruction. The sampling of number co:'imibiatins is thetef',re much t.-., liritc'd to serve as a means of imn nter'ryIn. the ability cit chilllren in general with respect to the number ccomi]natir.ni The Duckin-ghani and MacLatchy and the Cincinnati -tudiee, on ithe other hand, %\ere in- tended primarily for this wider purpose. Buckingham and MacLatchy (12) made use of ten number combinations, all in addition, in as many verbal problems." lie problems deal with familiar situations and with easily imagined ob- jects (pencils, apples, marbles, paper dolls, books, oranges, prnnie-. jacks, and the like). The ten problems are read or stated to children one at a time, and responses are scored, as in the present stud',. a: successes or failures. The data from these two studies are repro- duced in Table 15, along with the corresponding data for t:ie -f,.ur identical number combinations used in the Cleveland study. MacLatchy (32) has summarized these and other data for a hyp:oth-ii,:al group of thirty-five first graders. 
TABLE 15
SUMMARY OF DATA WITH RESPECT TO NUMBER COMBINATIONS WHEN PRESENTED IN VERBAL PROBLEMS

             Buckingham and MacLatchy                Grant† (563 school entrants)    Present
             Main Study   Cincinnati   Cleveland     IQ below   IQ        IQ         Study
             (1,356       Study        Study*        90         90-109    110+       (633 school
             school       (1,123       (313 kinder-  (N=145)    (N=252)   (N=166)    entrants)
             entrants)    school       gartners)
                          entrants)
5 + 1 ....      72           72           48            ..         ..        ..         ..
7 + 1 ....      64           63           30            ..         ..        ..         ..
1 + 9 ....      [?]          [?]          ..            ..         ..        ..         ..
4 + 4 ....      31           37           37            ..         ..        ..         ..
1 + 6 ....      49           51           ..            ..         ..        ..         ..
5 + 2 ....      44           43           ..            ..         ..        ..         ..
8 + 2 ....      44           44           36            ..         ..        ..         ..
4 + 8 ....      22           22           ..            ..         ..        ..         ..
5 + [?] ..      32           34           ..            ..         ..        ..         ..
3 + 5 ....      27           [?]          ..            ..         ..        ..         ..
10 - 2 ...      ..           ..           ..            17         26        35         ..
3 × 2 ....      ..           ..           ..            [?]        18        31         ..
2 + 2 ....      ..           ..           ..            ..         ..        ..         54
2 + 3 ....      ..           ..           ..            ..         ..        ..         37
3 - 2 ....      ..           ..           ..            ..         ..        ..         52
4 - 3 ....      ..           ..           ..            ..         ..        ..         37

* O'Connor's figures reported by Buckingham and MacLatchy (12).
† The combination originally employed was 10 - 2, though Grant has reported data for 10 - 1. It is assumed that this is a typographical error.

Data for two of Grant's number combinations (15), 10 - 2 and 3 × 2, are included in Table 15. Yet these data are not exactly comparable with those from the other studies. Grant's subjects, once they had decided upon an answer, were required to identify that answer from among five numbers printed in the booklet. Thus the answer for the multiplication problem (6, for 3 × 2) had to be selected from among 9, 7, 10, 6, and 15; and the answer 8, for 10 - 2, had to be selected from among 8, 9, 2, 6, and 4. Errors might therefore have resulted, not from inaccurate solutions, but from inability to recognize the written symbols for the answers.

As has been noted before, the per cent figures for the two Buckingham and MacLatchy studies are in remarkable agreement. The figures for the Cleveland subjects are the same for 4 + 4, slightly lower for 8 + 2, and considerably lower for 5 + 1 and 7 + 1.
One explanation for the lower successes is suggested by Buckingham and MacLatchy: apparently their Cleveland problems dealt with less familiar situations and with less easily imagined objects. Thus, the problem for 7 + 1 was:

Billie is seven years old. Tom is one year older. How old is Tom?

None of the four combinations employed in the present study was also employed in any other investigation, but the figures for the two addition combinations (54 per cent and 37 per cent) are not seriously out of line with those in the first three columns of the table. The facts in the table as a whole may be summarized as follows: for the addition combinations involving 1 (e.g., 5 + 1, 1 + 9) the median per cent of success is slightly more than 50; for the addition combinations involving 2, the median per cent is better than 40; for the remaining addition combinations, the median per cent is about 30. These medians for addition combinations in verbal problems are almost identical with the medians for similar groups of addition combinations when they were presented in "invisible" tests (Table 13), but they are of course much lower than the median per cents for the same combinations when presented in "visible" tests. (In the latter case, the median per cents are 78, 75, and 72, respectively, for the three groups of combinations.) The data in the summary above for the number combinations in subtraction and multiplication are too few to furnish conclusions of any importance.

Success in dealing with number combinations in verbal problems would seem to be much more difficult than success when the number combinations are presented concretely by means of groups. In solving verbal problems of the kind covered by Table 15 the child is without external cues as to the content of the numbers involved and must therefore supply their content himself. This he may do (a) by using abstract concepts of the numbers as units (thus, "two," for example, as a group in itself), (b) by developing clear mental images of groups of objects (including two or more subgroups for each number) which he then combines directly, or (c) by substituting in thought aggregates of single objects to stand for the two numbers and then by counting together these substitute mental objects. In the "visible" tests no such abstract concepts, images of groups, or images of separate objects were required; actual objects were present to sense, and, accordingly, the children were in general much more successful in this kind of test. The fact that some children were equally competent in the "invisible" tests and in solving verbal problems (it being assumed that the number combinations were of about equal difficulty) may mean that they actually employed about the same mental processes in the two situations. In the "invisible" tests the children saw the first group, remembered it (had a clear image of it), saw the second group, remembered it (had another clear image), and then combined the images directly or by counting. In
Table 15 ;ile child I., without external Cul-- a1' ,'- ti l,: Cont -i t ,:-i the niimhler; imn-r-i'ce'l anJ] must therefore r up[il. their c:inti:nt himsnielf Tl. Iii- e ma'1,. I a I bv' using abtrict ic:ncqpr- tiel nuiniler-s a units I thiu, "i'.,," ft',,r example, a- a 'rotip i., i I-y de'.eloeping cl-ar imcntal imnaes of groups of .-,i..ject I inllutliIlin t,.,:. or mi-,re -uhgroup- fo:r .acih nuim- ber) whi.:-i he then lconLbme; iIrectly. o:r i -, I l.y ubistitiiivin' ill thought aogre-..at':-. :f _~i .le oiibject- to' talii.n, ti t\- .: nuii i l.'er' ani,.] then by c:'unini], ti-r.-ether tle-e -ulbttirute mental -,ltiject- In the- "i i:ible" tenti n:o such a'stract concript;. imhac o gr,:uip-. or image o:f i separate olject-. ,rvie re, -Uired actual ,b.ijctc. \,ere present ,:' senie. an']. a.cc':r.ligl,'. the cliidrern ..*ere in cenerai iiuch more succet-ful in thi- I in,.l f tert. The fact that 'nme cliilren \were equally cr'iio pct-'ntll in th!e "ini ible" 1t,.t. rd in sl' in; .e-rhal pro:l- lems (it l eiin; a.-urni.d:l thatli a i nminber comblinilatic-rs ,'.,re ,of aibut equal difficui' i nI a I\ meani that thIl:\ actually\ eilli,-l'. Iei ai:,iiut tlhe same mential ipr,::e.es ii tile tL1' -.itlationl In the in' is-ble" te t- the children 'sa. thie if '.-t gr'.oup. remembler,..l that i 'aii' a cleir ima',: of it), sa%, ihI- -cc l co i ':iup, reinmem i ele that i had an-'tier clear image), in-il thin coml in.'d tle- iiija-e-- .1recti' ,:,r Iby c1 'ntin, '. In .-Iritl,,iH ic in the Possession of School Beginners 41 the ciase of [Ile ,rrtial problems they could not of course start with external oiup-,. lit thev could have developed clear images of the r,'rouli:,p an.] dealt with these images by direct combination or by countinct.i. It 'A ill be noted that practically all the problems used in thLe .:Lrio- ,tudies ,':ure carefully prepared to enable children to conjure ui precls'ly- thi- kind of subjective substitute for actual UniJortiiiatel. 
in none of the investigations summarized in Table 15 were data collected on children's thought processes in dealing . llh the nuinmer combinations. Woody (56, 58), however, reports -L fi:. fact- in thick connection and stresses the importance of evaluat- min'' performance, not i,.irely in terms of correctness of response, but al.o in term ..it tih: niamrity of the process used by the child. Data of this kind are badly. needed in order to combat misinterpretation of the result, of the inue-ti-ations which have been summarized above. This point nas made in the section immediately preceding, but it can I.,ear re-tatement. It i- a serious error to infer that because a child ca-in furni-h the correct answer for the combination 3 -+ 4 in pioiblen-i. lie kno- that combination. If the child has substituted -ei.arate ,imiital o:bject- and counted them ("one, two, three, .. siien." or "three. four, five, six, seven"), he has not, in strict liter- qdinecs, reacted tl the combination as such at all. What he has done i; mc-reli to: re'eil liat he has some way of dealing with the quanti- tairie -ir uitin in -in effective, if immature, manner. Only further olervati on and (ue.iti.Ioning can reveal the precise nature of that te-aclhrs a1r. 0to iunJei-land how children develop number concepts and ho,;, they' c:,ome t.: an understanding and meaningful mastery of [lie IIntl 'ih-el -om binti ticon- (5). \'hlit ha: Pi.t beun ,aid has reference to the child who approaches number in vwhat man\ b,: called a "natural" way, through the exten- :-ion and refinemrent ,:f procedures which are at first crude, uneco- nomical. "ind inmmattire. and who later surrenders these procedures frr othl'er,- .l'1chi are imre refined, economical, and mature. But not all children ta-L thi' route to number knowledge. They are some- time' mis'directed 'b i ell-meaning efforts on the part of older chil- dren or adults. In tuch cases rather peculiar consequences follow. 
In the present study several children were found who were better able to supply answers for abstract combinations than for combinations in verbal problems. When the combinations were presented in abstract form, they gave the answers glibly and correctly; but when
Only two com- binations, 1 2 and 2 + 2, were correctly dealt with by as many as half of the children. Moreover, it would be stretching probability to infer from these results that even those relatively few children who answered three or more, actually "knew" the combinations in any real sense of the word. Attention has already been called to the fact that many of them evidently recalled memorized (if meaning- less) answers which they had picked up somewhere. Others of them undoubtedly counted some kind of images to secure the answers. Still others were lucky in guessing. Woody (56) collected data both on the abstract addition and on the ab- stract subtraction facts. In both cases, however, the facts were included in a test which involved more advanced forms of the two processes. It is impos- sible from his figures to secure measures of "knowledge" of the subtraction combinations, but his later report (58) carries facts on the addition combina- tions. These are summarized in the section above. .- ithi nlii iii the P,'ssession of School Beginners 43 TABLE 16 F'PEsNr SrIUN *- kE-':LT4 OrN THE NUMBER COMBINATIONS IN ABSTRACT FORM .', Pe'r u,': .-ic'. 'r.., No. Combi- Per cents Answering C -,*.r i... T. t P,-ir. nations Correctly, According f n ..,"; in Test to Type of Pupil fly' ''' ,J ..r .r (C, T i A Rural City Total ,. '.rct'ly = 3 ,: N = :, ,N = ,03) (N = 337) (N = 296) (N = 633) . .. 145 125 13 1 + 2 ..... 49.9 53.1 51.3 I 1. 1..4 14* Ir,7 2 + 2..... 52.5 55.5 53.9 2 ..... 21 7 "' 2v 1) 1 + 3..... 34.1 38.9 36.3 3 ... 1. 7 I'.'1 1'0.3 3 + 2 ..... 26.4 33.4 29.7 4 s' ..s 7.? 2 1..... 37.4 42.2 39.7 5 5 r. 2 ... 3 2 ... 29.1 35.8 32.2 S ... 4 2 5. 4 7 4 3 ..... 19.6 25.7 22.4 3. 5.1 4.3 5 2..... 19.0 27.9 22.7 4. .. ..j ............. .. . ... ... N,, dala V.-rre- rcrIIried ,Lth respect to the incorrect answers iienI. Liut certain inter-sting facts were observed. There were wide dilfferennc'- alm,.ng thI clildr.:n in the extent to which their answers erred. 
S'c..:e :I them gltii .'.el blindly: 3 + 2 were "sixteen," or "tele." :.r "It\.nti\" 4 3 were "ten," or "fifteen," or "nine." th0er ,t er able l.:. I,: nI i,..r 1li erincs i- .1 + 2 were "four," or "six"; 4 3 were "i.o.'" r "lithr." Thei lar,-Ir children seemed to have mapped out ith numi l.-,t % st A i.\ The i.,rm-iii.r <.htidr: II i.n1 the ,r-.her hand, had little idea of the com- .arati'.e izi >.," inulil.t.r oI:ir i.f their relative places in the number -Ise-r. .At thi. po:iit ,3alual.Ik research might well be undertaken, to deternniie tlie ciiionite-, of the practice of "locating answers in their al.,p|'i. 3iale area" andj t!ie relation between different degrees .f Iti; ahitlit .and stubi-iirIent success in learning the number com- ,ir3atit.ns. It i-s ii nt unliL-el that experiences in "approximating" an- .tr niiiihJit t.ill I-e .carterl very early, as a preliminary to the cl -er .ud :,of thi: initl.r ,:.c-mbiinations as such. Data .:o l i. :trcn _,.litir .. ..>mbinations are reported by Woody 158 i for i lie fivi halft'-.raih I_ i oups to whom his inventory test was tliiiinitslerl S.iriie intrerq i, largest in his Grade IB subjects who w:r,: it C nt:it in,' I -I li, tlie figures for this group are briefly sum- iiari?''I I I:'l.l. in i t .i-i .f .,:r cents of correct responses, and the rt i:,r.l- I,..r th,i:-t r.i.14 chlilruci are compared with those for Woody's I 897 Glali I.\ cil:dil.iin -if ile sixteen combinations in the inven- ir..r. the l":t ii o'. \a-, 2 + 1 (52 per cent in Grade IB, 78 per ,ent in Gral I .\ i: nxt i.-rler of increasing difficulty was 2 2 i 42 l.er c.enti in Ciale 1[. 72 per cent in Grade IA) ; next, 1 + 7 44 Arithmetic in UJ./s.' I .-Id if (36 per cent and 65 per cent i th-in. i1 + 2 and 7 + I 1 35 per Crilt in Grade IB, 56 per cent and e:5 per Ccent, resp'ctieli;. in itird l IA . 6 + 2 (27 per cent and 57 per cent ; :' + I aindl 5 5 (25 Tpir cent in Grade IB, and 85 per cciit and 54 pei ccnt. ri-cpucticl\. 
inl Grade IA) ; 6 -+- 3 (24 per ciint anid 53 p._r cent I. The co:mnbirati:,nii 5 +- 4, 6 + 4, and 3 + 9 'v.,r: r,:rrectl\ resp':'nded t .: b bhrt-.eei 10 and 18 per cent in Gra6c: iB; thI co'miinlbnati,:'ii 5 + 7. 5 + 8. 7 + 6, and 9 + 8, by between 5 per ccint and ')9 ic c-rit. TI,, combination contained both in tIh[ pire-nt tudl' andI in tli. \\,:mud). investigation was 2 2, wlhichi 'Aa- caI_',rr>Cctl; aii.,'crcd by 5-4 p:r cent of school entrants in the fi.'nirmr and by -42 per cent in thc latter Conclusions with respect i:,' it- t | ,,I.,',- ...iil. ,i",'i.<.- It is p:s- sible to read both too much and t...... littlit inilic'I itii'' tie re- search findings on the ability, -f c ii....'l citratl-d t.-, deal ,'-ith ti h number combinations. One p'r_.r.,'ii cm[rlasii.:es tiec pr:p':reti children who secured answer- lu tic v3ri'ill' i'.om.inaii'iti>in-, pa3, r: attention to the methods used L,\ children I,..' '. tl"-ir an. v,<:t.., atd recommends immediate instrucri,:ii *: 1'hr ab.sti:ct 1ci.Z'mbinatiun- as such. The other points to t'K f'rat ti it the cI.mlination., included in the testing are all among the simnph:r ,_.nicS, .trc.sses the pr.r:ortionI of children who can do nothing. .with combi atlii:n. nd asr.ues for postponement of all instructi.-in ,-n, th :, combiniiiati:ns as suchli Neither extreme view is calculated t1:' prmite _i-i:.unr d iintrutc- tional procedures. It is trui. that rather isurpiir.in;'lQi la.ic numbers of these school entrants managed 'mhid,'.'- t,'. -,et arin.\ers .. L ih combinations presented; it is true that thlie, co.mbinati.ii nsrc :riin l:ii the easiest; and it is true that pr:.L. abl. lfe.. if dan, :, I tn. suiibjects found answers by mature inethid- i f lthinkirni' Tirhese stittmint- lead, so it seems to the writer, i,_- thi- .:-.incsii-i:'n that t: b>t pr[ritabi,m the experiences (if any) which First-grade children hIi.uild ha': \with the combinations must be \.cil clhj:',en ail \V.icm. (lir,..t'd. Th.. 
adjectives in this sentence ari- :rttl notuiii PART li. -i FIic .- .-17 .'. F actii'/ i Three studies are summiari]2:-d in thi- ,cCti,_i. ianam l\. thll:-ce mad'- by Woody, by Grant, and by P.:,lkinghiii, nc. Woody's study (56, 58) iiH ol\-ci fiar mi:,'e ub.jitt t1li i did either of the other two, a tont! ,f 3.002 children, dividedd] amiio"i.i,' If- .-Irith,'ic mn the Possession of School Beginners 45 -:iup,, as sho.in below,, hut none of them as yet exposed to system- .tlic rinthineric instruction The first series of three items called for the identlicati,-n i-,f pictures showing apples cut into halves, into f.,urtli-, :rnd in. thirds His data for these three items (56) are i2'en bil.lw i per cr1it of successful responses, the results for L.,:' and zirlk]- I.:ii combined: hI,,,..r.L,] ,', Grade IB Grade IA Grade IIB (N=238) F,.rm..o '., = -- =oI (N =594) (N= 1896) (N = 80) Grade IIA S. ,4 67 77 82 88 S. . 45 52 67 69 74 13. ... . -I4 45 65 61 76 .rated brinetr. tiw,' thirds -f4 school entrants (Grade IB) could deal -ati-fact..xril. with 1'., about one half with Y4, and nearly one half i 1 1.. The sec.-,nd s.ri:.s :f three items took the form, "Into how many hale t'ithirds. l':.urth can you cut an apple?" The results for the:. items. reported as are those for the first three items above, are: hale. .. .... -I 44 48 44 45 thirds ...... 26 29 35 43 46 furtl. 2. 22 34 29 47 ..'ain. the co:nicept :,f Ihales was better understood than that of either .-.f the other fractions. and the concept of fourths better than that of thir-.s Thi p'.r cent. foir thirds and fourths are considerably below th:os fL:r alves except in Grade IIA. The- third serie- of \:oody's items called for the comparison of the fractli:,n- 1_. an. Jd I3, as to relative size. Eighty per cent or more :4 hii- Iibject_. 
regardless of half-grade, answered correctly that a whole is larger than a half; between 40 and 50 per cent up to Grade IIB knew that a half is larger than a fourth; the other comparisons (such as ½ and ⅓, and ¼ and ⅓) were successfully dealt with by only about one third prior to Grade IIB. Woody voices caution against inferring from these objective data alone that children in the primary grades actually possess rich concepts of these unit fractions even in the limited ways in which they were tested, and he reports remarks made by his subjects which indicate that success in not a few instances was more or less accidental or the result of but very superficial knowledge.

Arithmetic in Grades I and II

Item 27 in Test 5 of the Metropolitan Readiness Test used by Grant (15) reads as follows:

    Look at the row of four circles. Mark half of all the circles—just half of the circles.

Item 28 reads:

    Look at the row of circles with lines in them. Find the circle that is marked to show that it is cut in half and mark it.

Grant has reported data for "half," but whether his data refer to the results on item 27, on item 28, or on the two combined, it is impossible to say. Whatever the basis, his figures show that "half" was successfully negotiated by 55 per cent of his 145 "dull" pupils (IQ below 90), by 77 per cent of his 252 "average" pupils (IQ 90-109), and by 63 per cent of his "bright" pupils (IQ 110 and above). Item 27 applies the fraction ½ to a group of objects; item 28, to a single object. Otherwise these few data, particularly since their reference is ambiguous, tell little about the ability of school entrants to deal with fractions.

The most thoroughgoing investigation in this area is that made by Polkinghorne (40). As good as is this study, it is subject to limitations which affect the general validity of the results. In the first place,
only part of her subjects were school entrants; part were kindergarten pupils, and part were enrolled in Grades II and III. Nevertheless, the data for the group as a whole are here treated together, since whatever the children could do with the tests, they had learned to do without systematic instruction. In the second place, the test has not been printed. Only a sample of the fifty-two items is available, and lack of information at this point makes interpretation difficult. In the third place, the subjects were undoubtedly of superior ability, having been drawn from the University Laboratory School of the University of Chicago. In the fourth place, in the brief article where the findings are reported, the data are treated in a way which makes impossible the kind of analysis desirable for the present purpose.

The fifty-two items of the test, which was given individually to each child, are classified under the heads: unit fractions (such as ½ and ¼), other proper fractions, improper fractions, identification of fractions, and equivalent fractions. The response for each item was either numerical or a performance which could be scored objectively. Examples are: "Here are two pencils. Will you give me one half of them?"; "I am going to draw a line [draws a line about six inches long]. Draw another line one half as long."; "Here is a drawing of four marbles. Color three-fourths of them."

Fifty-three per cent of the unit fraction items were correctly answered by the group as a whole. The next largest per cent was for other proper fractions—18 per cent. The per cents for improper fractions and identification of fractions were both 8, and that for equivalent fractions, 0. The kindergarten children averaged 3.7 correct items,
all dealing with unit fractions; the first-grade children averaged 5.9 correct items, all of them again on unit fractions. The average score on unit fractions for second-graders was 9.4, and for third-graders, over 11. With other proper fractions, the averages were, respectively, 1.7 and 3.8. The Grade II children had negligible success with the other three classes of items. In the case of Grade III children, the average scores on the three last classes were 1.2, 1.6, and 0.0. When the subjects were grouped according to mental age (5-7, 7-9, 9-11, and 11-13), scores comparable to those for grade groups were made.

Polkinghorne concludes that in the first grade unit fractions may be safely presented, first in connection with single objects, next in connection with groups, and finally for comparing, first, two objects and, then, two groups of objects. In Grade II fraction concepts may be extended, and a beginning may be made in the understanding of improper fractions. Whether or not these recommendations, made on the basis of testing rather than of teaching and on the basis of testing superior children, are valid for schools in general is a question for future research. Data will be presented in Chapter III to show that at least some of these concepts are well within the powers of first- and second-grade children.

Grant (15) has reported the only sizeable study of school entrants' knowledge of ordinals. For this purpose he used items 20 and 21 of Test 5 of the Metropolitan Readiness Test:

    20. See the farmer feeding the chickens. Mark the third chicken from the farmer.
    21. See the pigs running to the fence. Mark the sixth pig from the fence.

The results of the testing may be summarized as follows:

              "Dull" pupils        "Average" pupils    "Bright" pupils
              (IQ less than 90)    (IQ 90-109)         (IQ 110+)
              (N = 145)            (N = 252)           (N = 166)
third               …                   39                  71
sixth               21                  44                  67

10.
Reading and Writing Numbers

Reading numbers.—The study by Grant (15) samples the number series to 100, testing only on the numbers 4, 9, 16, and 75. The study by Wheeler and Wheeler (54) gives data on all the numbers to 100. The item for 4 in the Grant investigation is typical: "Look at the row of numbers where the hand is. Mark the 4." To be successful the child must select the 4 from among the numerals 6, 3, 4, 8, and 9. Grant's findings, in per cents of successful responses, are as follows:

              "Dull" pupils    "Average" pupils    "Bright" pupils
              (N = 145)        (N = 252)           (N = 166)
4 .......           …                …                   …
9 .......           …                …                   …
16 ......           …                …                   …
75 ......           …                …                   …

The data from Wheeler and Wheeler are too numerous to present in full. They were obtained individually from 157 first-grade children within a month of the start of the first term. The numbers from 1 to 9 were known by an average of 64 per cent of the children (4, by 68 per cent; 9, by 33 per cent); the numbers 10 to 19, by 15 per cent (16, by 8 per cent); those for 20 to 29, by 13 per cent; and so on. The per cents for the numbers for the decades above the 20's range from 6 (for the 50's) and 8 (for the 30's) to 13 (for the 70's and 80's). The median score obtained was 9.21 (one point being allowed for each correct response); the range was 0-100, with Q1 at 0.99 and Q3 at 11.8.

Woody's inventory test (56) contained thirty items on the reading of numbers. Six exercises were based upon the numbers in a calendar and involved the numbers 3, 8, 6, 11, 23, and 28. In another exercise the subject was asked to identify six of seven numbers printed in a row; these numbers, in the testing presented at random, were: 37, 19, 72, 41, 131, and 113. The next exercise consisted in twelve numbers, ranging from 13 to 10,103, which had to be read exactly, with all such words as "thousands" and "hundreds" correctly placed. Finally the subject had to locate correct pages in a book, these pages being numbered 13, 21, 70, 57, 89, and 123.
The thirty items were presented to 1,897 pupils in Grade IA, but before they had been subjected to systematic instruction. The results are not analyzed for the separate numbers, only the per cents of success by city (there were eleven cities) being given, together with that for the total, which was 63 per cent.

So far, only the ability to recognize or identify the numbers as such has been considered. A child had only to say "thirty-six" upon being shown "36" to score a success. Knowledge of the meaning of the number played no part. The Grant study contains data on the ability of school entrants to "interpret" the numerals 4 and 7. The item for 7 is much like that for 4, which is: "Now look at the next number [it is 4]. Make as many dots after the number as it tells you to make." Under these conditions Grant's pupils were less successful with 4 than they had been in "reading" 4: the per cent of "dull" pupils fell from 26 to 10; of "average" pupils, from 60 to 40; and of "bright" pupils, from 83 to 70. Unfortunately, there are no such comparable "reading" data for 7; as "interpreted" by the three groups of children, the per cents were 9, 33, and 61. Nevertheless, the effect of calling for "interpretation" in the case of the numeral 4 may be enough of a caution against the acceptance of ability to "read" numerals as equivalent to a grasp upon their meaning.

Writing numbers.—Grant (15) had his subjects try to make the numerals for 4, 7, 2, 5, 6, and 9. The per cents of success for his "average" pupils were 30, 17, 27, 22, 21, and 9 (mean, 21). Arranged in order, the numerals were, from easiest to hardest: 4, 2, 5, 6, 7, and 9. The mean per cent of the "dull" pupils for the six numerals was 5, and for the "bright," 49.8. For the last named group, the order of difficulty was: 5, 4, 2, 6, 7, and 9.
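Figures of this kind can be checked for internal consistency. The short sketch below is illustrative only: the six per cents are one plausible reading of partly illegible print, and the check simply confirms that they reproduce both the reported mean of 21 and the reported order of difficulty.

```python
# Consistency check on Grant's writing-numbers data for the "average"
# group. Per cents of success are keyed by numeral; the exact values
# are a plausible reading of partly illegible figures, not a certainty.
per_cents = {4: 30, 7: 17, 2: 27, 5: 22, 6: 21, 9: 9}

mean = sum(per_cents.values()) / len(per_cents)
print(mean)  # 21.0 -- agrees with the reported mean of 21

# Numerals ranked from easiest (highest per cent) to hardest.
order = sorted(per_cents, key=per_cents.get, reverse=True)
print(order)  # [4, 2, 5, 6, 7, 9] -- agrees with the reported order
```

That the six values average exactly to the reported mean lends some support to this reading of the print.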
Grant's figures reveal a rather surprising degree of ability to write the numbers prior to school instruction, particularly in the case of the "bright."

11. Recognition of Geometric Forms

Grant's study (15) reports data on the ability of his 563 subjects to recognize figures in the shape of a square, a circle, and a triangle. In each case the correct form had to be selected from a group of four diagrams, the correct one and three others. The per cents of success follow:

              "Dull" pupils    "Average" pupils    "Bright" pupils
              (IQ below 90)    (IQ 90-109)         (IQ 110+)
              (N = 145)        (N = 252)           (N = 166)
square              …               79                  92
circle              …               76                  85
triangle            …               19                  34

Three fourths or more of Grant's school entrants already knew the terms square and circle well enough to use them intelligently. The case with respect to triangle is quite different. Clearly, these children did not know the term, even to the extent required by the test. How much of a task it would be to teach its meaning to first-grade children is not known. Their ignorance, according to Grant's data, may be the result (a) of lack of experience with the term and the corresponding form, (b) of real difficulty in acquiring the concept, or (c) of both (a) and (b).

12. Time, U. S. Money, and Measures

Included among the 204 items in Woody's inventory (56) are eight dealing with time, thirteen with U. S. coins, and eight with linear and liquid measures. Table 17 summarizes the results of testing 1,897 IA pupils who had had no systematic instruction in arithmetic, the items in the table having to do with the questions on time and on U. S. coins. About half of those children could tell time on the hour, and about four out of ten could locate the position of the long hand on the clock at the even hour. Time at the half- or quarter-hour was pretty well beyond their powers.
Practically nine out of ten could identify each of the coins to the half dollar, and about the same ratio knew the gross comparative worth of pairs of these coins; but the precise relationships between the coins, as between nickels and quarters, were not so well known. As would be expected, the numbers of pennies in the nickel and the dime were best known.

TABLE 17
PER CENTS OF 1,897 GRADE IA PUPILS WHO HAD VARIOUS CONCEPTS OF TIME AND U. S. MONEY

Time Items                                        Per Cent
Tells time on clock 1 (9 o'clock) ..........         …
Tells time on clock 2 (1 o'clock) ..........         53
Tells time on clock 3 (12 o'clock) .........         …
Tells time on clock 4 (3:30 o'clock) .......         15
Places long hand to show 7 o'clock .........         44
Places long hand to show 5 o'clock .........         42
Places long hand to show 10 o'clock ........         43
Places long hand to show 11:45 o'clock .....         12

Money Items
Points to penny ............................         …
Points to dime .............................         …
Points to quarter ..........................         …
Points to nickel ...........................         …
Points to half-dollar ......................         …
Which is more, a penny or a nickel? ........         …
Which is more, a penny or a dime? ..........         …
Which is more, a dime or a nickel? .........         …
Which is more, a dime or a quarter? ........         …
Which is more, a half dollar or a quarter? .         …
How many pennies in a nickel? ..............         …
How many pennies in a dime? ................         …
How many nickels in a quarter? .............         …
How many quarters in a half-dollar? ........         …
How many dimes in a half-dollar? ...........         …

Unfortunately, Woody does not report the corresponding data for children in the other half-grades tested, except in the case of telling time (58). In general, pupils in his IIB and IIA groups showed ability beyond that of his IA pupils, for whom the facts are stated above. Only the facts for his 594 IB pupils are given here.
About 30 per cent of these children correctly identified the clock showing nine o'clock (data for the sexes are combined), about 49 per cent identified that showing one o'clock, 40 per cent that showing twelve o'clock, 7 per cent that showing half-past three, 24 per cent that showing eight o'clock, 20 per cent that showing five o'clock, 22 per cent that showing ten o'clock, and 8 per cent that showing a quarter to twelve o'clock.

Woody's findings for linear measure and for liquid measure are consolidated into per cents for his Grade IA pupils as a whole and by cities. Forty per cent of the responses to linear measures were correct (the range, by cities, was 20 to 60 per cent), and 33 per cent of the responses to liquid measures (range, 33 to 67 per cent). The items on linear measure called for the drawing of lines of the following lengths: a foot, six inches, three inches, two feet, and a foot and a half. The items on liquid measure were: "Which holds more, a pint bottle or a quart bottle?" "How many pints will a quart bottle hold?" "How many quarts in a gallon?" Obviously, success on these items, small as it was, indicates little of the meaning and significance which these measures held for the children tested.

13. Sex Differences

Buckingham and MacLatchy (12) found a small but rather consistent superiority of girls over boys in the various functions which they studied. In rote counting by 1's the median limit attained by boys was 24.5, and by girls, 29, while 27.3 per cent of the girls achieved the limit of 50 as compared with 19.1 per cent of the boys. Similarly, in rote counting by 10's the boys fell behind the girls, but by a very narrow margin. In enumerating twenty objects 58.2 per cent of the boys and 71.0 per cent of the girls were successful; and 85.3 per cent of the boys and 94.1 per cent of the girls enumerated at least ten objects.

More girls than boys were able to reproduce each of the numbers 5, 6, 7, 8, and 10, the margin of difference being about 5 percentage points (thus, 81.0 and 85.8 per cent for 5). This superiority of the girls was maintained in the identification of the same five numbers. The median percentage scores made by boys and girls were the same in solving verbal problems with simple addition combinations and in finding answers for ten other addition facts when "visibly" presented. When the "visible" and the "invisible" presentations were combined, the girls excelled the boys by from 1 to nearly 7 percentage points on all ten of the latter set of addition combinations. Buckingham and MacLatchy conclude that, however consistent the advantage held by the girls, it is small and can safely be disregarded in the planning of classroom instruction.

Woody's findings (56, 58) with respect to sex differences are unlike those just presented, for in his investigation the advantage lay with the boys. When the scores for the whole test were tabulated, the boys surpassed the girls in all half-grade groups except Grade IIA. None of the differences, however, were statistically reliable. About the same relationships were found when the tabulations were restricted to scores by items, regardless of grade. The superiority of the boys was most marked in the parts of the test devoted to time, money, etc. Higher per cents of boys than of girls passed each of the eight items on time; in the identification of coins the boys equaled or surpassed the girls in each case; and the same was true on the ten items relating to the comparative values. In general,
the problem of sex differences is of less interest now than it was some two decades and more ago. The differences found between the sexes are much less in amount than the differences between individuals within a given sex. That is to say, usually nearly 50 per cent of one sex will be found to equal the median for the other sex. Where differences are found, they are slight and can reasonably be attributed to differences in cultural stimulation rather than to innate differences of a sexual sort.

14. City vs. Rural Children

In the original research reported in Part I of this chapter the city children surpassed the rural children rather consistently in the arithmetical functions tested. The advantage lay clearly with the city children in rote counting by 1's, enumeration of ten objects, reproduction of five, six, and nine objects, crude comparisons with concrete or pictured objects, exact comparison of pictured objects, and the number combinations whether presented in verbal problems or as the abstract facts. The advantage for the city children was not so clear in the identification of four, seven, and ten objects and in crude comparisons involving abstract numbers.

MacLatchy (28) has reported an analysis of the data on ten addition combinations obtained from 1,226 school entrants in ten Ohio cities, 1,123 school entrants in Cincinnati, and 130 school entrants in a number of one-room schools. In every combination except one, the city children excelled by margins of from 6 to 17 percentage points. Thus, the per cents for the three groups just named on the combination 7 + 1 were 64, 65, and 58; on the combination 1 + 6, the per cents were 49, 41, and 32. The one exception to this general condition was in the case of the combination 5 + 1, which was "known" by 91 per cent of the rural children as compared with 72 per cent for the two city groups. No reason either for this exception or for the high degree of success in the case of the rural children is offered by the investigator.

The comparative arithmetical abilities of city and rural children have been treated in other references than the two cited, but for the most part such references, and all of the research, relate to arithmetic at higher levels than that here under consideration. Future studies may or may not confirm the superiority of city children as mentioned in the foregoing paragraphs, but it is fairly certain (a) that the differences found will not be large, certainly not so large as to demand markedly different programs of instruction for the two types of children, and (b) that the reasons for the differences will lie in unlike experiences having their origin in unlike environments.

15. Differences in Levels of Intelligence

As would be expected, such research findings as are available reveal a positive relation between intelligence on the one hand and various arithmetical abilities on the other hand. Evidence with regard to this relation is furnished in two studies in particular.

MacLatchy (30) classified 291 six-year-old school entrants on the basis of their ratings on the Pintner-Cunningham Primary Mental Test, dividing them into the lowest fourth, the middle half, and the highest fourth. The brightest group on the average could count to 27 and "knew" the sums of 6.8 of ten fairly easy addition combinations. The corresponding figures for the other two intelligence groups
and tlie acraV'., child. unly once in three trials. AbilitN to repr-lduCe number- was limited to groups of five for the l:ia '.-t ;r(up,. and then ionl. once in th],;, Grant's data (15) lhax been reported in full in \arro )r prec:.dnmg sections. His subjects tcre re-arded as ":lll" if their l('s ,-r,- 89 or less, as "average" ii lect.v:.-ii '.0 and I10. and a- "brilit" if 110 or higher. In every one of hIn comiparison the per cents .f accuracy: increase from the "dull" tiroaiue the averagee" t.. tlhe "brihit" group. The comparison c:,ier: identilicatin -.Af gri ,ups of three. six, and eight objects: rep-ioducition of g: roups of f:,ur. se,-rn, and thirteen; crude compare on: i -uch C:o-nc:ept as "ac long as.," "largest. etc.); number combination-, pre'-rented ci-ncretel,' i ( 1 + 2. 3 + 4. 6 + 6, 5 2, and 3 1I and in xvrbal problems 10 2 an,] 3 x 2) ; the fraction a. alppilii.,a tu a rouip and t-. a single i.biec:t . ordinals (third and s$.ri,' : reading, "mterrpretati,.n," and] % writing of numbers, and knowledge of tiihree -e:mnietric formii-. Wheeler and Wheel':r 154 I repa,_rt a :..efficient -.f crrelation of .51 between M. A. and ability of unintructed hr -t-grader. ti, read the numbers to 100. 16. Effect of Kird,f rriarIlta Instruction It will be recalled that Ruckinehani anld NlacL-at:lih 1 12, 291 include in their report data from'i abut -one th, oucand schm:a-I entrants in Cincinnati. Almost exactly tv. tlird> o:f thicse entered Grade I after some time in the kindergarten. .-\ compari-un of the te-t re-tilrt of these kindergarten children x itli tle re-.ultr a;f nofnkiindergartenir. should provide some c idence un tilhe ffcct of the prech,:l.ool train- ing. Admittedly, notbin. i L.v,,.f.n of" th,. nature oft this trarrin, in general, to say nothin.i of the extent to,- which artlthmetioal skills were involved. In rote counting by l's. 1.1 4 per ,ent ,:,f the K-zrtroup I kinder- garten children) and 4..' 
per cent of ithe NK-gri.nup iarnkindJcrrar- teners) reached 100. "lie wiedlians for thlie v.. .;io inrp v.ere 2'': 3 and 19.1. In rote counting, bIv l's t11 K-grro:up field C,.'.mpial.rle adlala- tages. Of the K-group, 70.3 per cent enumnerated ta.,:nt\ Object; correctly, compared v. itl- 47 7 fei cent ,f tl,,- NK-.r,:itip. Ten ob- .-ntlhn'lic in the Possession of School Beginners 55 Ject, \;ere correctly enumerated by 93.7 per cent of the K-children and bi 82.3 per cent of the NK-children. The K-children both re- prdun.c-. and identified or named each of the numbers 5, 6, 7, 8, and 10 n;-ore successfully than did the NK-children. The K-children gave correct answers to 5.4 combinations and the NIK-chtldren to 3.5 when ten addition combinations were presented in 'erhal problems. Their margin of superiority ranged from 5 percentage points for 3 + 5 (29.3 and 24.3) to 20.7 for 7 + 1 722 2 nd 51.5). When ten other addition combinations were pre- s1intrcd "visibly" and "invisibly" the K-group excelled on every com- bination and by about equal margins. These figures bespeak far greater ability on the part of the lK-children. However, no simple conclusion can be drawn from tli~ee heures with respect to the effects of kindergarten instruction. Buiickin gham and MacLatchy report that the average mental age of the K-children was six years six months, compared with a mental ac.e of five years ten months for the NK-children. Stated in terms If IQs, the difference in average brightness was the difference be- t.,-.een IQ 104 and IQ 93. Whatever may be the contribution of each factor, the K-children had advantages not merely in their kin- dergarten training but also in mental maturity (MA) and in bright- ness (IQ). 17. Miscellaneous Studies Preschool studies.-A number of research investigations have dealt with the development of arithmetical abilities of children prior ti: entering school. 
These can be but briefly referred to here, since thlir value is greater for the psychologist than for the practical teacher ind administrator. In the developmental biographies of individual children more or less attention is commonly given to the appearance and use of various irithmlcical concepts and skills. Two such studies which deal pri- nmarilv with aspects of quantitative development are those by Court15 I,nl by Drummond.10 ID)'uglass (14) administered three tests to children aged four and a Ial to six. The first test consisted in recognizing the number of S.:r.hie Ravitch Altshiller Court, "Numbers. Time, and Space in the First Fi,' \Y,:irs of a Child's Life," Pedagogical Seminary, XXVII (March, 1920), 71 -S'. Also by the same author: "Self-Taught Arithmetic from the Age of Fin-. t... the Age of Eight," Pedagogical Seminary, XXX (March, 1923), 51-68. M igaret Drummond, Five Years or Thereabouts (London: Edward Ar- rn..il1 ,I.] Co., 1921), pp. 119-135. 5'. .-pitiii'ic" in Grades I and II do]tc in pattern-i of ditfielet e ize- : the -cond, of sclelcting certain patterns a- represcnritin the number- announced by the experimenter the third, of estimating' the number of marbles expo.etd 1-r,.rmientarily by the experimenter. The conclusion drawn was that the experi- mental children had completelyy accurate concepts of I and 2. very -erviceahble and accurate concepts of 3. and a very vcriccable con- cept of 4. and of 5. 6. S. 9. and 10 rather vague concept:. though c.rviccable to a slight dc-gr.e." McL.'iiiulin 1 33 i ia\c indi'.idual test- to 125 children enrolled in nur-ery 4chool- and I- indcrgartcns. Their aces ranged fri-im thirt'- six tV -cventv-tv.o nm-ontI. One seriess of tcts measured ability in rtCe counting: an'-ther. ability to recognize mall aorgate, of oLb- jects: a third. albilit,. to comil.,ine nuim bers tinder varying_ conditions. Thie data. reported onlv in part in the reference cited, arm. tabulated lr three age gr.:iup. 
Children a'red thirt\-i:ix to fo.rt.-eight riionthc could count by rote to 4.5 on thie average, c. iuld enumerate an aver- age of 4.4 object-. and could counItt back,.ards niot at all. The cet.rrc- s.pondin:.-, limit- for children aged fort,-eiht to s]ity month- were 17, ), 14 5. and 1.6. re-pectively: for the oIlder children. 33.4. 2S.2. and 5.5. re-pectivel Practically all :f the hv;e-vear-ol.k could reEu- larly rcogri:e ,gr: up[,c of t\'.o and three obiect.. and mietimes groups of four. The combiniiii of inti, her '...a- :..' i hard for three- e\-ar-old,.. oas soim.. ihat easier for fo-mr-'.ear-old-., .hi-' ui-ed counit- ing pred.iminantil', anr.] v a i much casi er for i'.e-yar-old \.h. utied countinm, to supplement groupin.-!. The ref cr'r-nc co'.ntains a fairly\ extensive jdisc sioni of the r,.latio,:n b t.t'.e,.n -uccess ,:on the nimib'. r tacs;s inpom-ced An'i the mi iital prce-.e o:f the children in arri,.i, at avic r erc. .\bilitv to compare cr.-jiipm- of ob.icts numb:rin ten or fcocr v. :i studied int>,nsi4.:ly bLv Ruisell 142 ). Hi- ucibjects in pon> e:peri- ment ".ire thirteen I.:jicrygartener., [.' .1. First-_.raid,.rs. and four seciond-graders. In this m.xperiment the children v ere in-tructed to tell I.hich of t',-, expo:,sed groiip o ,f ojects a-, "more." \'ariatimon- ,,.ere introduced bv. u-iin bl cks of different size-s and of dit fere.nt colors i Th-e chan.iizc se-ned t,:, c. 'niuse- the children, a fact v.hich i- perhaps best interpr-.ted (ithoiigh it i- nI -i o interpreted lb tihe ,-xperiiijmenter ai meaning_ that to the: lubject- thle -ituation had really cea-ed to he a 1iuaniititatic -one in the mOrdinary -seie. 'Birnet7 Allre. Bii- e "[.. Pe'rccpr.i;ion .k,- L .,,;gu, r-4 tt !zt. N...m-L.re ." F .:'i : I'i' l. .,,:/,,, ,.-" X X X Il ul.,. 'i ':.- 1. Arithmetlic 1n the Possession of School Beginners 57 nearly fifty years Lb'fort hadi noted the same phenomenon.) 
Russell's second experinient in ,old t.'.iicty-five subjects (ten kindergarten, ten I.rade I, and five Grade 11 i. This time the children were re- quired t "equal"' in szc. the word i"more" in the first experiment having proved to furnish fals- le:i. Russell concluded (a) that "many- ness" is tih fir't iquantitatke concept, (b) that cardinal and ordinal number con.cp:ts de.elo,'p toetLhr, (c) that counting is not a re- liable measure .:4 this dLcleopmenI'nt, (d) that children under five \ear- of age understand "'most." "both," and "biggest," but not "eqt:l." i e that 'eien-\.car-old-; know "more" but not "same" and "equal" at all fully. ind I if tlat counting as a means of differen- tiating grro:,up. d\el...pz late Tlih last conclusion is understandable, -ince cou:,ting is hardly as direct and serviceable a procedure even %%ith adults a. the matching of small groups which can be really apprehended at a glance. The author's views with regard to the late development of counting seem therefore to be in error, since his experimental situation-'-s \ere such as to place counting at a serious R,'aeicss tcstian' -The Metropolitan Readiness Tests were ad- ministered h% Hildreth I 181 to two groups totaling about one hun- dred children in the lirst month of school. A year and a half later, hlicn one -.f the-.e groups w\as int the second grade, they were given a special arithmetic test designed for the program of instruction to hllich they had been zubJected. From Hildreth's description this ,rogram appears to hai. been o:,ne based upon "activities" in which considerable: u.se wa made of number situations which occurred natural' in the classroom' Comparison of the readiness and the aclidveinent te't corer. for twenty-six children remaining in this gr.:>up yielded a correlation ,:olicient of .50. The other of the two group, then inii.Cirinc, thirtm-three, was given its arithmetic test two and a half \ears afte-r takl-iii. the readiness test. 
The coefficient of correlation in this case was .59. Neither coefficient suggests much usefulness for this particular readiness test as a means of predicting achievement, however valuable it may be for inventorying purposes. Hildreth points out reasons why the predictions of the readiness test were no higher, but does not mention the fact that to be most effective the readiness test must be closely keyed to the program of instruction which is to follow, as closely as must the achievement test by which the effects of that program are later assessed.

PART III. APPRAISAL OF THE FINDINGS

Summary of Findings

The research on the arithmetical knowledge and skill of children just entering school is impressive both in its extent and in the facts which it has revealed. Parts I and II of this chapter summarize data from twelve separate investigations relating to twelve different arithmetical topics. As will be pointed out in a later section, some of the studies are open to question on one score or another; nevertheless, the fact remains that children when they come to school know a great deal about number. It is worth while to classify the research findings in three categories, always remembering that certain limitations attach to this research.

1. The following skills and concepts seem to be quite well developed by the time most children start school:

Rote counting (by 1's): through 20 at least
Enumeration: through 20 at least
Identification: through 10 at least (the limit tested in research) and probably through 20
Crude comparisons: (a) with objects: the concepts "biggest," "middle," "longest," "shortest," "smaller," "taller"; (b) with abstract numbers: "more," with the numbers through 10
Exact comparisons (or matching): at least through 5 or 7 (the limit of research), with objects in groups of 10
Number combinations: in verbal problems with easily imagined objects and situations, adding 1 and 2, and probably most of the facts with sums to 6 or 7
Fractions: unit fractions through halves and fourths as applied to single objects, and perhaps halves as used with small groups in even division
Ordinals: through "third"
Geometric figures: "circle" and "square"
Telling time: at the hour
U. S. coins: recognition of all coins to the half dollar; rough understanding of the relative value of pennies and other smaller coins

2. The following skills and concepts are not so fully known to school entrants, but are fairly well started among a reasonably large per cent of children of this age:

Rote counting: (a) by 1's to 100; (b) by 10's to 100; (c) by 5's to 20 or 30
th thl e ntlnl- ber: thrr..u'h b10 Exact compari-,:i .:.-r miar'lCiinz Icact tliroin.-h 5 :,r 7 i l-e limit of Number combiri *ii i. dt ,:bect- t-o c-in- of Il1 tions: ill erL ajl [prolh:iin v. n1th e-,zily i Iiin."iied ob.iecit .ln, etuati,:.l-. ;,,JJil', I nd _". l, In,| pr. .l, dl,l y ,o,-t f V1'. t: 1- ith nl ii- t.:, ..r "7 Fractions: unit Fractioni thir.on li lIial~e. aiid tIiurrbi .i- a, pli'Il t) single objct-, an] perhli. h lh' e- a-: ue.] ih *small r.:,u.. ,n even division. Ordinals: thrcw..b o i Geometric fig-ire: 6rcle" jii -iuari" Telling time: at ile .:,ouir U. S. coins: rec,:.: ,ori of -ill coi m tl-he half Jollar. r1..I ,, F. understandir-' -.f reliat e ilue .f [.c ii.:' ar.J] other nii'ller coin- 2. The follow' skill- aiilnd :oncepr.- ire i-..t -o fully I .. n to -cl:l, :dl entrants, but are fLi'rly v. l -tartedi amo:.-' a rei-.:il.ly l.ir'.e per cent ri children of this ace: Rote counting I '- t1 1111 liv i I' t, 1111.1 L.. -.'s. t, 2d' or 30 .-ilinhctic in the Possession of School Beginners 59 Crude co:ri.ir. :,n: a i with objects: "as long as," "fewest" (or "the smallest number") b i with abstract numbers: "less," with the ab- stract numbers to 10 Number c.iiibina3- :, i in verbal problems: probably all the facts with ii. .n : sums to 9 or 10 b i with abstract numbers: few research data available, but apparently less than 50 per cent able to deal successfully even with the easiest facts (e.g., those involving the addition or subtraction of 1) ReaJdint number : only a few know the numerals to 10 3 The folo'.' in- slill and concepts are possessed by less than a third *-iI i.',el -rienr:.iiint., ani then in limited degrees of richness or proficiency: Rk-tc c*.-tM inz. : l:,' 3'_, to 30 Crudle comiipari on objects: "same" or "equal" Frariti' a I proper fractions other than unit fractions, ap- plied to single objects and small groups 1. 
i improper fractions c p relative size of fractions Rei..nrr an.l writing g numbers: virtually no ability to read beyond 10; \rtu:.dll i nonIi I.'. rite, even to 10 Ge.,,metric iguri-. "triangle" Telling time- at the half- and quarter-hour U S mii,,ne' : relIati: value of coins other than pennies Liqiid and linea-r measures: relative size of units Rc:-':arch has contriluted little or nothing with respect to several traditional t.oic: in ithe primary course of study. For example, crude c:':mpar.I ins are s.,nmetimes made by the use of such terms as ",:'iiiiet." "ol..let." "heaviest," "lightest," "darkest," and the like, *:On nii'ne 1if .lhich are there research data. Likewise, research is vir- rcallv -illent in the matter of the subtraction combinations. Too few :if tllh:c have 1l.,cen included in inventory tests to reveal first-graders' familiaritN uitl thint. At other points, for example in the case of I ractirim. rei-earchl secms to indicate possible success in teaching ide-cs n,.' *.ener-illk '[ withheld to the later grades. In such cases more re-eard;- i. niided. to determine whether the inferences drawn with reg-ard to, rhi pr-.:.l,able effects of instruction are sound or unsound. Limitations of Research Iin the -clitnce immediately preceding the lists of concepts and skill :|aT,<, -. ca.tiu,:,n was stated to the effect that these research ending!. mu-t ni:t be accepted too quickly. The investigations which .-ritliintic H 'i Gra&ds I and II have been summarized are riitmetime li hmted in ,alue becau-e of errors in technique or of n' utficient cc'.eragt. In the f.rst [pace, fe\ arithmetical .skill- and conce-pt' ha'.e been at all exhau.tielh v tudied. Reference here i toi what may be called the horizontal dimincnion. Mlo-t skill; and co'ricepts have bt-cian merely sampled, which is ti, -_. 2 that they have been studi'-d only in part, in but some of the -ituations in which they function. 
An example is the number combinations, only a few of which have been included in research inventories. The few concepts and skills that have been studied more completely, for example, counting and enumeration, are among the simplest and least complicated in the field of arithmetic.

In the second place, a rather large part of the data has been collected by means of group tests. The technique of group testing is somewhat uncertain, particularly with pupils as immature as children just beginning Grade I. Every first-grade teacher knows the difficulties of controlling the attention of all pupils throughout the testing period; he knows also the extraneous errors introduced by inability to understand and to follow directions and to enter answers in the right place. Group testing (save for multiple-choice types) has, however, one distinct advantage: the results obtained almost certainly underrate the actual extent of knowledge. In part, but only in part, this advantage compensates for the incomplete sampling of concepts and skills as these are represented in the tests.

In the third place, in very few investigations have data been obtained on children's procedures in dealing with the number tasks tested. That is, experimenters report the answers children give, but not how the answers have been arrived at. Of course, in the case of rote counting and enumeration children necessarily reveal their procedures, but not in such quantitative tasks as comparing numbers, solving combinations, estimating lengths, and the like. Yet the procedures used by children are essential to a true appraisal of their developmental status. Correct answers can be obtained by immature procedures; incorrect answers may merely mean incomplete command of mature procedures. On this account future research may well include the techniques of careful observation and of the interview or conference as means of supplementing the information procurable through group testing.19

18 The complexity of what is involved in meaningful mastery of the number combinations has been analyzed by Brownell (Wm. A. Brownell, "Teaching Meanings," in a bulletin of the Association for Childhood Education). The whole bulletin will be found of value to teachers interested in developing quantitative thinking.

Cautions in Applications

The research findings summarized in this chapter have been obtained from pupils supposedly typical of all children who enter Grade I year after year. This assumption may not be valid in every case: atypical classes may unintentionally have been selected for testing. But if so, the error is probably not as serious as that arising from a second assumption, namely, that the first-grade pupils of a single class or of a community are necessarily comparable with the research subjects. On the contrary, a local entering class or the classes of a whole system may be quite unlike the experimental subjects. It is therefore unwise, apart from supporting evidence, to regard research findings as directly applicable to any and all school situations.

Research findings tell the teacher or administrator little about the class as a whole, but they tell very much less about the number abilities of particular children. If educational psychology has established any fact, it has established the fact of individual differences. Instruction in Grade I (and for that matter in any other grade) should be based upon accurate knowledge of the abilities of each pupil in the class. On this account careful inventories should be made of the arithmetical concepts and skills of every pupil before teaching is undertaken.
The inventory need not-indeed, should not-cover all concepts and skills and be completed all at once. Rather, the inventory would be carried on progressively, by stages. Thus, information should first be obtained concerning ability in the first arithmetical skills or concepts to be taught; the children deficient therein can then be formed into a group for instruction. Meanwhile (or afterwards) the inventory can be extended to cover the next topics. In this way children can eventually, by small-group instruction, be brought to the point where the class can be taught as a whole.

19 What is probably the most thorough and penetrating study yet made of the readiness of primary-grade children for instruction in arithmetic is unfortunately not available for this summary. It is the doctoral research of Miss Doris Carper and may shortly be obtained from the Duke University Library under the title "A Study of Some Aspects of Children's Number Knowledge Prior to Instruction." Miss Carper herself interviewed 270 school entrants on a considerable range of number tasks, and her report contains not only the factual data she collected but also a searching inquiry into the procedures employed by children in arriving at their answers. Evidence at this point was obtained from the children's overt actions and verbalizing which accompanied their answers.

Significance of Research Findings

It is theoretically possible to take three different positions with respect to the capacity of first-grade children to profit from instruction in arithmetic. According to the first position, first-grade children are too immature for arithmetic; they simply are unable to learn arithmetic. Considerable doubt is thrown on this position by the facts which have been assembled in this chapter. Admittedly, these facts do not completely undermine the position; only direct proof that children when taught arithmetic in Grade I actually learn arithmetic could do this. Nevertheless, the facts concerning school entrants' knowledge of arithmetic warrant the rather confident inference that systematic instruction in this field should yield good returns. In the absence of more direct evidence of an experimental character, it is not unreasonable to believe that children who already know a considerable amount in a given area have thereby demonstrated their ability to learn more.20

The second position is to recognize the fact that first-graders already possess a considerable fund of arithmetical knowledge, and hence are probably able to extend their learning, but to attribute to "maturation" and to incidental experiences the acquisition of further knowledge. The argument is that since children have already done so well "on their own," they should be allowed to continue their learning on the same basis. Systematic instruction is not needed and may be very harmful.

Nothing in the findings of research is in conflict with this view. As a matter of fact, these research findings are hardly relevant to the issue; they tell what children know and can do when they come to school, but they tell nothing about the way children have come by their knowledge and ability. It is possible to infer, as the supporters of this view seem to do, that children, left to themselves, somehow grow into number knowledge. If so, there is every reason to believe that the passing of another year or two will give opportunity for more of this same growth.

But number knowledge is hardly the result of any such growth or inner maturing: it is the result of directed experience, frequently taking the form of direct teaching. Woody (56, 58) has reported certain facts which have not received wide enough attention. Questionnaires were sent to the parents of the first-grade children in the elementary schools of Ann Arbor, Michigan. Replies came from a total of 164 homes. Eighty-three per cent of the parents replying stated that in the home instruction was given in rote counting (54 per cent of the parents taught counting to 100, for example); 78 per cent of the parents taught their children to count objects; 87 per cent, to recognize coins; 68 per cent, to know the value of money; about two thirds, to read and write numbers and to solve simple verbal problems; about one half, to understand the size of numbers to 10, perform rapid additions, and to tell time; about one third, to deal with simple situations involving subtraction.

20 Dickey in a recent article has rightly challenged the tendency to accept this inference uncritically. John W. Dickey, "Readiness in Arithmetic," Elementary School Journal, XL (April, 1940), 592-598.

It would be possible to continue to absolve teachers of the responsibility for number teaching and to leave to parents the task of encouraging further development of number ability. But if Woody's facts mean anything, they mean that some one must assume this duty. Mere attainment of more and more birthdays cannot be expected to bring the intended increase in number knowledge, apart from directed experience. It would not of course be inconsistent with the second position to agree that this responsibility is the school's, but to deny that it should be met in the primary grades and to insist that it may more easily and profitably be met in Grade III or even later.

Two positions with regard to the ability of first-grade children to learn arithmetic have been considered. The third position is that
Long before this 1ioi:it the n.adcr ma] havt_ dt >.cted evidence that this position is the ,-iie entertained b, thl: a riter. Briefly, the position is this: Research liis s',: :n that ch::OIl t rrantr already know much about number; the inference i. that ihe can learn more; society requires that chil- dren niu-t kr'..v arithlmictic: nothing is gained, and much may be li.tt. if thi. ch-oI .' ila t. later grades the discharging of its ,1i) i 43ti I/f .\t later p,.iint. inl the m:onn,-,_.raph, particularly in Chapter V, this p:ti..t.ii ill I. d'ci cl.i-ed and supported at some length. At this p,.it it 1niiiUt uft:ce to .av that one does not need to advocate a return to the unintclli-ilit-. al,'tract drill now happily on the way out : there are other li1ndi of: learning than memorization, and other Lin,- ,:,f Ikarvin activ.it .- than repetitive practice. Provided that x-periernce are adju tiii.' t: their interests and capacities, first-grade Jildren *:an an J ill :.\i-end ilheir number knowledge happily, intel- li'gcn.tl. aud utitfuill,. Fx:lcniice in support of this belief will be iprcientel in itle i/ext chapter. IN GRADES I AND II The inference dranii fro.in the research findings re netted in Chapter II is that c-hildren in Grades I and II are ready fur sy\tem- atic instruction in arithmetic. The data re'ieu ed di. nut in themn- selves prove that :hi. irnstructjion if given v.ill produce mera'urable evidence of sound and xaltabc le warning. f,.r the data relate only to the arithmetical equipment v.ith liilh children -tart school. Pro.:.[ that this inference i valid imust co:me from a different kind ..f re- search. It must be shown that in-structicon actually d,:.e yNield the kind of effects which are desired. It is the purpose of this .chapter to report the result obtained from a particular pro'._ram for primary grade arithmetic. a program which, after two years :,'f experimentation. 
was finally employed under circumstances which permitted evaluation by means of tests.1 It is not the purpose of this chapter to try to prove that this program is the only possible program, nor that it is the best program available at present. As a matter of fact, there are probably many different programs now in use, and some of them may be better than that here tested. The intention in this chapter is, rather, to show what can be accomplished when reasonable outcomes are set up, half-grade by half-grade, and when appropriate pupil activities are discovered and arranged with a view to the realization of these outcomes.

In the writer's opinion no apology is needed for reporting completely the results of any program of primary arithmetic. As the reader well knows, the literature contains many statements to the effect that children in Grade I (or Grade II) can (or cannot) learn this or that, these statements generally being based upon nothing more substantial than opinion or limited experience. The experiments thus far made available supply essential information neither with regard to outcomes assumed nor with regard to instructional method employed. Lacking this knowledge, the reader is hardly competent to pass critical judgment on the significance of an investigation, and certainly he must be cautious in accepting the evidence either as proving or as refuting the case for systematic arithmetic in Grades I and II.

1 This program is built around Teachers' Manuals and pupils' workbooks and readers published under the general title of "Jolly Numbers," as part of the Daily-Life Arithmetic Series for Grades I through VII inclusive. This series, the work of Guy T. Buswell, Lenore John, and the writer, is published by Ginn and Co.

Results of a Particular Program of Systematic Instruction 65

In the long run, programs should be evaluated,
not as wholes, but rather for their success or failure in furthering certain ends. Programs which at most points are very bad indeed may be excellent at other points, so good in fact that these features should be adopted and widely used. There is every reason to believe that from experience with different programs will eventually emerge a program far superior to any we now have. This hope for the ultimate emergence of a superior program, and not the desire to establish special merits in the particular program here under investigation, is responsible for this chapter.

THE PLAN OF INSTRUCTION

The plan of instruction followed in the present investigation is fully described in the Teachers' Manuals, which can be consulted by interested readers. In this place, therefore, only the most important aspects of the program are outlined.

Outcomes.-The chart which begins on the next page contains a concise summary of the outcomes by half-grades. The reader who is acquainted with the traditional course of study for the first two grades will find in this chart many familiar outcomes-or at least he will tend to identify those given as familiar. The outcomes with respect to counting and enumeration, for example, are not unusual for the primary grades. Moreover, it has been customary to assign some part of the number combinations to the first two grades. When this has been done, however, the course of study usually has required nothing short of "mastery." In this program, however, the term used is "intelligent control over," and the difference in terminology is a matter of considerable moment. "Intelligent control over" implies that children must be able to deal with the number combinations in some effective, sensible manner; it does not imply that children will from the start, or even very soon thereafter, have the ability automatically to recall correct answers as does the adult.
Such mastery is viewed as the product of a long period of growth, the result of development through a series of stages in thinking.

The present list of outcomes differs perhaps more noticeably in the presence of such terms as "understanding," "appreciation," "social values," "disposition to use," and the like, all of which imply that the arithmetic learned must possess meaning and apparent usefulness for the learner; the child must see sense in what he learns and he must have experience in using what he learns.

Methods of teaching.-In general, teachers who follow this system of instruction rely little upon telling their pupils and much upon showing them, or, better, upon having their pupils under guidance make discoveries and then verify those discoveries for themselves. The authority made use of is not their authority as teachers, but rather the authority of truth revealed by their pupils' own practical concrete experiences in number situations. Understandings and generalizations evolve from pupil activity; they are not given out by teachers as neat formulations arrived at ahead of time.

Perhaps the clearest and briefest way to summarize the general principles of method which are recommended for this program of instruction is to quote directly from the Teachers' Manual:

1. We must insure orderly development in quantitative thinking.
2. The child must see sense in what he learns.
3. The child's activities and the purposes of arithmetic must harmonize.
4. Meanings must precede symbols; understandings must precede drill.
5. The way children think of numbers is as important as is the result of their thinking.
6. We must teach at the rate at which the child learns.
7. We must present arithmetic as an object of "natural" interest.
8. Instructional materials should be organized "spirally."
9. Children must know both what they are to learn and how well they are learning it.

                                          Grade IB    Grade IA    Grade IIB    Grade IIA
1.
Counting and enumeration:
    by 1's ......................... to 20     to 100    X*       X*
    by 10's ........................ ....      to 100    X        X
    by 2's ......................... ....      ....      to 20    X
    by 5's ......................... ....      ....      to 100   X
    by 3's ......................... ....      ....      ....     to 30
2. Understanding of place value of numbers in the series to ......... 10    100    X    X
3. Understanding and use of the ordinals to ......... sixth or eighth    tenth    thirty-first    X
4. Reading and writing numerals to ......... 10    100    X    X
5. Understanding of significance of 10 as the basic unit in the larger numbers to ......... ....    20    100    X
6. Recognition of regular groups of objects to ......... 6 or 7    9 or 10    X    X
7. Intelligent control over the number combinations through ......... 6    9    12    18
8. Intelligent control over the use of … ......... ....    ....    X    X
9. Reading and writing combinations vertically and horizontally through ......... 6    9    12    18
10. Understanding and use of the processes of addition and subtraction ......... X    X    X    X
11. Understanding of relationships between addition and subtraction and use of the addition combinations and the related subtraction combinations ......... ....    X    X    X
12. Higher-decade addition and subtraction, carrying or borrowing, sums and minuends to ......... ....    ....    19    99
13. Column addition of more than two numbers, with and without carrying ......... ....    3 digits, sums to 9    3 digits, sums to 12    4 digits, sums to 18
14. Use of fractions as applied to single objects and to groups of objects in even division ......... ....    ....    ....    1/4
15. Use of number in connection with:
    U. S. money ......... X
    telling time ......... X
    linear and liquid measures .........
16. Reading and writing Roman numerals to ......... ....    ....    XII    X*
17. Functional reading vocabulary of number words and symbols ......... X    X    X    X
18. Appreciation of social values of arithmetic, disposition to use number, and desire to learn more arithmetic ......... X    X    X    X

* X means that the skill or concept is extended, enriched, and/or practiced.

Materials of instruction.-If natural and planned number occurrences in the classroom may for convenience be classed under the heading "materials of instruction," teachers in this investigation had the following sources upon which to draw: (1) Teachers' Manuals, (2) pupils' workbooks, (3) pupils' number readers, (4) unplanned number situations which appeared spontaneously, and (5) planned or prearranged number situations.
Chil.lren are expete'.l under the teacher's guidance, 1,:' arrive ,at ;eineralizat.i -,o :,r relationship oun a concrete basis and to exp'-,c-es their i.hoi-',eries in tlhe lian:ruag,': of abstract number. Space are left in tilthe: '-rkbl_-_".k p.agc- for entering the results of the vani.u,, nuntmber .\xpericnc i. The intention is int to hurry the immediate mnmiirization i.'f thll fact, dis -'; ered, 1it. rather, to show number a- a dish:rtthnd methlicd .:f translating and recording both the quantctative pr..cCs:- uti:'d and the r,:-_ult: -.f thil process. As will be Lx.'plained bel-i'..' under 4 and i 5 i. \'.rkb"--k lessons regularly foll' v experiences :4 a less artfici-il character. (3) Some, but by n,'' means all. f thie pupils. in thi mn' c igan-ti'in had number readers,. ihich in their c'.ntent parnillel tlie dte\-,:li-,pmlnr. of number ideas an'] relaticiinships ,jutlined. in tice manuals. Ther-_ storybooks are planned t,-. slhu:, chii.ldrn that number is a natural ,r normal element in readindm matter. he I t,-,ri,- c'intain numll..ers and quantitative situations in iun-' ttru'Liv ',a\ -s th cli j' o n,:-t interfere with the story proper. .\fttr readini2 a story. clIldren comine t. nuni- ber questions, they turn back t, thlie apr,',priate c-'ntext. ,cl-ct tl-,i relevant quantitative material. and v.]:rk :uLit the required rcelti'n.l;ip . (4) and (5) Each nw% phase '-.f devel- *p.nental in tructi,.,n -tart. with an actual numLer cxr.ercincr a.ipart from ,:,iiryb',.'k ...r .:.rkb:l:. SA workbook for thi' half-grade has sii-ce 1.e'r .,uhhlhC.l un.kr ith tilt Jolly Numbers Primer. Jolly Number Tals. t...,. (_ nc awr.. ./.:/Iv \,Vi !.,-, Ti, ,.' Ta"o i.:.r Grade IA and the whole ol Glral. II, r.:. ,i.'A:l.. ResIclts of a Particular Program of Systematic Instruction 69 I Indido. i, Grade IB in this study these extra-book number experi- ences provided the sole basis for instruction.) 
Instruction is begun in this way in order to impress' children with the need for the new idea or skill. Sometimes a fortuitous or chance occurrence serves the purpose; at other times a situation must be prearranged. Chance happenings and prearranged situations involving number are utilized not alone to initiate developmental instruction, but also to provide opportunities for children to use the arithmetic they have learned. As a consequence, arithmetic is not confined to the arithme- tic period, but is part of the whole school day. It is expected that by encountering number at all sorts of times and in all kinds of ways Children will become more sensitive to the values of arithmetic and will develop habits of use that will function in many practical ways. (Outcome 18, p. 67). Experimental subjects.-Usable returns were received for 223 pupils in Grades IB and IA and for 280 pupils in Grades IIB and IIA. By "usable" is meant (1) that test records were available for each child for the two terms in his grade and (2) that the test blanks showed awareness of the purpose of the test and intention to follow directions. In spite of the second requirement some exceedingly poor test papers were retained, as will subsequently appear. "Re- peaters" were not included in the study. The 223 pupils in the first grade came from ten classrooms lo- cated in four states-Massachusetts, North Carolina, Ohio, and Pennsylvania. The 280 second-grade pupils came from eleven class- rooms in the same four states. Tests.-The tests were of two kinds: (1) group tests and (2) individual tests. The first kind was administered to all pupils in each half-grade; the latter, to a sample of the pupils, selected so as to be representative of the whole class. (1) Group tests. 
The group tests were specially printed blanks which reproduced the tests provided in the manuals-pages 74 and 75 (Grade IB) and pages 142 and 143 (Grade IA) of the Teachers' Manual for the Beginners' Course; pages 100 and 101 (Grade IIB) and pages 153 and 154 (Grade IIA) of the Teachers' Manual for Jolly Numbers, Book Two. Samples of these tests appear as needed on the following pages, altered as required to show the problems used and the directions for administering the tests. In each test the verbal problems were read or stated by the teachers and did not appear on the test blanks. In the test for Grade IB all directions were given orally to avoid reading difficulties, but Part II of the tests for the other half-grades regularly involved considerable reading.

(2) Individual tests. The contents of the individual tests and the procedure in administering them are described at a later point (beginning with p. 90).

Gross results.-Table 18 contains a tabulation of the scores made by the experimental subjects on the group tests for the four half-grades. In the group test for Grade IB one point was allowed for each correct response, so that the highest possible number of points was 55; 65 pupils made scores of 53, 54, or 55 (actually 26 had perfect scores); 48 had scores of 50, 51, or 52, and so on. The median score was 50, which represents 92.7 per cent of the possible score. One fourth made scores below 42 (a percentage maximum of 76), and one fourth made scores of 53 or better (a percentage minimum of 96).

TABLE 18

[Distribution of scores on the group tests: one column each for the first and second terms of Grade I (55 and 73 possible points) and of Grade II (83 and 81 possible points), giving the number of pupils scoring in each three-point interval, followed by the median, Q1, and Q3 for each half-grade. For Grade I the medians were 50 and 66, the Q1's 42 and 59, and the Q3's 53 and 71.]

* The 15 scores distribute as follows: 29-31 (4); 26-28 (5); 23-25 (0); 20-22 (1); 20 and below (5).
† Medians obtained from the raw scores without grouping. The percentage equivalents of the medians are: Grade IB, 92.7 (50 out of a possible 55 points); Grade IA, 90.4; Grade IIB, 89.2; Grade IIA, 93.8.
‡ Quartiles obtained from raw scores without grouping.

The figures for the other half-grades were about equally good, with median per cents of 90.4, 89.2, and 93.8, respectively, for Grades IA, IIB, and IIA. There were nineteen perfect papers among those for the Grade IA pupils, seven perfect papers for the Grade IIB pupils, and thirty-four perfect papers for the Grade IIA pupils. There were, however, a number of very poor papers. Eleven (5 per cent) of the Grade IB pupils made very low percentage scores, and there were eleven such pupils (5 per cent) in Grade IA, fifteen such pupils (5 per cent) in Grade IIB, and five such pupils (2 per cent) in Grade IIA. Still, if it be granted that the tests sample achievement satisfactorily and that the method of scoring was adequate, there seems to be sound evidence that the experimental subjects in all the half-grades learned much of what they had been taught. Three fourths of the IB pupils made scores equivalent to 77 per cent or more of the possible score. In Grade IA three fourths made a percentage score of 81 or better; in Grade IIB the corresponding percentage score was 80, and in Grade IIA, 86.

GRADE IB TERM TEST, PART I

A. 6-1=   5-4=   4+1=   2+3=   5-2=   1+5=   4-3=   6-4=   4+2=

B. (Combinations presented vertically:)   +2   -5   -1   -3   -3   -2   +2   +3   +1

C-F. (Verbal problems, read by the teacher:)
    (I had 5 pennies. I spent 1 penny for a piece of candy. How many pennies did I have left?)
    (Ann had a party. If 4 other girls came, how many girls in all were at the party?)
    (I put 2 yellow books and 4 green books on the table. How many books were on the table then?)
    (I saw 4 birds on the wire. Then 2 flew away. How many birds were left on the wire?)

G. (Card for 3, 2, and 5 shown; children to write one story or fact.)
H. (Same as G, but card for 1, 3, and 4.)
I. (Same as G, but card for 1, 2, and 4.)
J. (Same as G, but card for 2, 2, and 3.)

GRADE IB TERM TEST, PART II

A. (1. Draw 7 balls in a row. 2. Put I on the second ball. 3. Put C on the fifth ball.)
B. (1. Make 10 crosses in a row. 2. Draw a ring around the fourth cross. 3. Put a box around the sixth cross.)
C-F. (Write the number to show how many flags (etc.) there are.)
G. (Write the numerals from 1 to 10 in order.)
H. (Write the figures for the number words.)   one ___   three ___   two ___   four ___   five ___   six ___

Results in Grade IB.-The extent to which the Grade IB test covered the outcomes for that half-grade is shown in the upper half of Table 19. The ability to enumerate to 10 (Outcome 1) was tested in items A, B, and C-F of Part II; knowledge of the place values of the numbers to 10 (Outcome 2) was tested in item G of Part II; ability to read and write the numerals to 10 (Outcome 3) was tested in items A-J of Part I and in items C-H of Part II, and so on. All the outcomes are represented by test items, though obviously in each case by a small sample of the many possible items. No outcome was tested, or could be tested by the usual group techniques, in its entirety. Outcome 6, "Intelligent control of the combinations," is, for
in this part of the test tell nothing about the degree to which the combinations were habituated (they were not supposed to be habituated until the second half-grade or even later). Outcome 10 cannot be tested adequately by any paper-and-pencil procedure. Sensitiveness to the quantitative in life is to be observed only as one avoids or meets satisfactorily the number situations in which one finds oneself. Accordingly, measurement, as here made, through success in the usual kind of verbal problems is very indirect and probably none too valid.

TABLE 19
                                                   Test Items, by Parts
Outcomes                                           Part I        Part II
1. Enumeration to 10 .........................     ....          A, B, C-F
2. Place value to 10 .........................     ....          G
3. Numerals to 10 ............................     A-J           C-H
4. Ordinals to sixth or seventh ..............     ....          A, B
5. Recognition of regular groups .............     G-J           ....
6. Intelligent control of combinations
   through 6 .................................     A, B, G-J     ....
7. Reading and writing combinations
   horizontally and vertically ...............     A, B          H
8. Understanding of addition and
   subtraction ...............................     C-F           ....
9. Reading vocabulary ........................     A, B          H
10. Appreciation of social values, etc. ......     C-F           ....
Possible .....................................     29            26
Median .......................................     25            26
Q1 ...........................................     17            24
Q3 ...........................................     28            26
Percentage equivalents of:
Median .......................................     86.3          100.0
Q1 ...........................................     58.6          92.3
Q3 ...........................................     96.6          100.0

The lower half of Table 19 reveals that the children tested were much more successful with Part II than with Part I. On Part II three fourths or more of the pupils made percentage scores of 92 or better. On Part I, on the other hand, the corresponding percentage score was but 58.6.

The results on the various items of the test are analyzed in Table 21 for a sample of one hundred Grade IB pupils, so selected that they are truly representative of all the pupils tested.
The degree to which this representativeness was attained is shown in Table 20 below. It will be seen that 10 per cent both of the total group and of the sample group secured scores of 55 or better, 20 per cent secured scores of at least 54, and so on. The medians and quartile points are identical (or nearly so) for both groups.4

TABLE 20
PERCENTILE SCORES OF THE TOTAL GROUP AND OF THE SAMPLE OF 100 PUPILS
Percentile Point          Total Group     Sample Group
90 .....................  55              55
80 .....................  54              54
70 .....................  52              52+
60 .....................  51              51
50 (Median) ............  50              50
40 .....................  47              47
30 .....................  44              44
20 .....................  40              40
10 .....................  35              36
Q3 .....................  53              53
Q1 .....................  42              42+

To return to Table 21, the per cents of success on Part II of the Grade IB test speak for themselves. Clearly the children had acquired high degrees of skill and knowledge in dealing with the arithmetical tasks set for them in this part of the test. The story for Part I is different. All the items in Part I relate to the number combinations with sums and minuends to 6. In A are nine abstract combinations in horizontal form; in B, twelve abstract combinations in vertical form; in C-F, four combinations presented orally in verbal problems; in G-J, still other combinations to be recognized from pictured groups. The situation in Grade IB may be fairly well summarized by saying that the children demonstrated substantial growth toward all outcomes, except possibly those connected with the number combinations as such (and possibly those associated with the development of habits of use, an outcome none too well tested).

4 In the treatment of the results for Grades IA, IIB, and IIA the same practice is followed, of analyzing the data for a selected sample of one hundred cases instead of for the half-grade group as a whole.
In each instance the sample was as closely matched with the total group as above in the case of Grade IB.

TABLE 21
NUMBER OF ERRORS AND PER CENTS OF SUCCESS ON ITEMS OF THE GRADE IB TEST; SAMPLE OF 100 CASES
Test Item        Outcomes Tested    Number of Errors and Omissions    Per cents of Success
Part I
A ...........    6, 7, 9            214                               76.2
B ...........    6, 7, 9            331                               72.4
C-F .........    8, 10              59                                85.3
G-J .........    5, 6               114                               71.5
Part II
A-B .........    1, 4               33*                               91.8
C-F .........    1, 3               5                                 98.8
G ...........    2, 3               11                                98.9
H ...........    3, 7, 9            43†                               92.9

* Out of 100 possible errors on each of the four ordinals tested, 5 errors were made on second, 7 on fourth, 11 on fifth, and 10 on sixth.
† Out of 100 possible errors on each of the six number words tested, 6 were made on one, 5 on two, 4 on three, 9 on four, 12 on five, and 7 on six.

The significance attached to the figures for the number combinations varies with what one expects from children in this half-grade. If one calls for automatic mastery of all the combinations taught, then the low per cents signify a wholly unsatisfactory state of affairs: the children had not "learned" these simple number facts. On the other hand, if one is willing to wait for mastery and if one views the experiences these children had with the number combinations as being exploratory in character, then one is not at all disturbed by the apparent lack of mastery in Table 21. As a matter of fact, on the assumption that the subjects in these experimental classes were typical of those studied in investigations of the number knowledge of children on entering school (Chapter II), these children show unmistakable evidence of growth. A median of about 35 per cent of school entrants "knew" the abstract combinations presented to them (Table 16); the corresponding figure from the Grade IB test is 72 per cent.
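Each per cent of success in Table 21 can be recovered from the error count and the total number of responses (the number of sub-items times the 100 pupils in the sample); item A, for example, comprises nine combinations, so 214 errors among 900 responses give 76.2 per cent. A small illustrative check in Python (not part of the original study):

```python
# (item, number of sub-items, errors) for three Part I rows of Table 21
rows = [("A", 9, 214), ("B", 12, 331), ("G-J", 4, 114)]

for name, n_items, errors in rows:
    responses = n_items * 100  # 100 pupils each attempt every sub-item
    pct = round(100 * (responses - errors) / responses, 1)
    print(name, pct)  # A 76.2 / B 72.4 / G-J 71.5
```

These reproduce the per cents printed in the table for those rows.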
A median of about 42 per cent "knew" combinations in verbal problems when they entered school (Table 15), though some of these combinations were harder than any here tested; the corresponding per cent for these Grade IB children is 87 (only four combinations, however). The amount of growth revealed by these comparisons is enough to satisfy the student of arithmetic who judges growth in terms other than those of mastery. Such a person is not disturbed by the rather superficial mastery shown by these children since he is confident that the understandings they acquire through meaningful experience will eventually make for a more usable kind of knowledge.

Results in Grade IA.-The Grade IA test is analyzed in terms of outcomes in Table 22 as was done for the Grade IB test in Table 19. It will be noted that all outcomes except two (namely, 1 and 2) are represented in the Grade IA test, though with varying degrees of completeness. Moreover, as in the case of the Grade IB test, the measurement of Outcome 12 is very imperfectly and only indirectly provided for. The omission of items for Outcomes 1 and 2, most of the skills in which can be tested only by individual interviews, is hardly serious. In the first place, practically all children may be assumed to be able to enumerate and to count to 100 by 1's at the end of Grade IA if they receive any instruction at all on these skills. It will be recalled that about one tenth of school entrants already possess this ability (Table 3). In the second place, the ability to count to 100 by 10's is also rather easily acquired; about one fourth have the ability when they enter school (Table 3 again). In the third place, the high degree of success of these same children at the end of Grade IB in dealing with the lower ordinals (about 90 per cent knew fifth and sixth) warrants the belief that, a half-grade later, they would be equally successful with the new ordinals seventh and eighth.

GRADE IA TERM TEST, PART I
A. Write the answers: [Thirty-five abstract addition and subtraction combinations, sums and minuends to 9, in vertical form; figures garbled in transcription]
B. Add: [Five examples in column addition, three addends, sums to 9]
C. Write just the answer: a. ----  b. ----  c. ----  d. ----  e. ----  f. ----
Problems for Ex. C:
a. Pat has 3 marbles in his right hand and 6 in his left hand. How many marbles has he in both hands?
b. Ann counted the eggs she found in the barn. There were 5. She put them with the 4 eggs in the icebox. Then there were how many eggs in the icebox?
c. If you take 8 sandwiches to the picnic and only 6 are eaten, how many sandwiches will be left?
d. There were just 7 leaves on one branch of a tree. Along came the wind and blew off all but 2. How many leaves were blown off the branch?
e. Jane has 5 school dresses and 2 party dresses. How many dresses has she?
f. Eight of Mabel's drawings were hanging on the wall of her bedroom. She took down the ones that got dirty. Now there are 7 drawings on the wall. How many of the drawings did Mabel take down?

GRADE IA TERM TEST, PART II
A. Draw a ring around the right answers:
a. How do you find how many in all? subtract  add
b. Which picture shows a half? [pictures]
c. How many ones in 16? [choices]
d. Which number comes just before 18? [choices]
e. Which means to add? =  +  -
f. Which number is less than 75? 79  80  63  92
g. Which means one half? [choices garbled in transcription]
h. How do you add? up  down
i. How many tens has 15? [choices]
j. Which are parts of 8? 2 and 3  5 and 1  6 and 2
k. How many parts has the whole story about 3, 6, and 9? [choices]
l. Which means 7? [number words]
m. How do you find how many are gone? subtract  add
n. Which shows 1/2? [pictures]
o. Which is the answer for 7 - 2? [choices]
p. Which number is more than 50? 57  29  36  48
q. How many parts has the whole story about 4, 4, and 8? [choices]
B. Write the numbers that are left out:
a. 36  37  ----  39  ----
b. 58  ----  ----  61  ----
c. 10  20  30  ----  ----  60
d. 40  ----  ----  ----  80  ----
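The "percentage equivalents" reported for this test (Table 22, below) are simply the raw medians and quartiles expressed as per cents of the possible score: 46 points on Part I and 27 on Part II. A quick illustrative check of that conversion in Python (not part of the original study):

```python
def pct_equiv(raw_score, possible):
    """Express a raw score as a per cent of the possible score, to one decimal."""
    return round(100 * raw_score / possible, 1)

# Part I (possible 46): median 43, Q1 39, Q3 45
print([pct_equiv(s, 46) for s in (43, 39, 45)])  # [93.5, 84.8, 97.8]
# Part II (possible 27): median 24, Q1 20, Q3 26
print([pct_equiv(s, 27) for s in (24, 20, 26)])  # [88.9, 74.1, 96.3]
```

These reproduce the percentage rows printed in the table.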
TABLE 22
Outcomes, by Test Items:                               Part I            Part II
1. Enumeration to 100 by 1's; ordinals to eighth ....  ....              ....
2. Counting to 100 by 1's; by 10's ..................  ....              ....
3. Reading and writing numbers to 100 ...............  ....              A: d, f, p; B: a-d
4. Place of numbers to 100 ..........................  ....              A: d, f, p; B: a-d
5. Significance of 10 in teens numbers ..............  ....              A: c, i
6. Intelligent control of combinations with sums
   and minuends to 9 ................................  A (35 facts)      A: j, k, q
7. Understanding of addition and subtraction as
   processes ........................................  C                 A: a, m, o
8. Understanding of the relation between addition
   and subtraction ..................................  ....              A: j, k, q
9. Column addition, sums to 9, three addends ........  B (5 examples)    A: h
10. Understanding of 1/2 as applied to objects and
    even groups .....................................  ....              A: b, n
11. Reading vocabulary (all of Part II, but
    especially) .....................................  ....              A: e, g, l
12. Appreciation of values of arithmetic, etc. ......  C (6 verbal
                                                       problems)         ....
Possible ............................................  46                27
Median ..............................................  43                24
Q1 ..................................................  39                20
Q3 ..................................................  45                26
Percentage equivalents of:
Median ..............................................  93.5              88.9
Q1 ..................................................  84.8              74.1
Q3 ..................................................  97.8              96.3

The median score on Part I of this test (the lower section of Table 22) amounted to 93.5 per cent of the possibility, and three fourths secured scores of about 85 per cent or better. Part II was harder for these children, the median dropping to 88.9 per cent and Q1 to 74.1 per cent. The reason for the lower scores on Part II cannot be certainly ascribed to any one cause: (1) the form of the test may have been relatively unfamiliar to the children; (2) the reading requirement may have baffled some; (3) the ideas contained in the test items may not actually have been possessed by the children. Only the
third of these three sources of error is the one actually proposed for testing; the purpose, in other words, was to ascertain the extent to which children had acquired the concepts and understandings represented in the items of Part II (uncomplicated by deficiencies in reading skill and in the technique of taking tests). The influence of the first two sources of error is to depress unduly the scores obtained. There is at least one counteracting factor, namely, the choice selection of answers: by pure guessing correct answers would be marked for one out of each three or four items. It is felt, however, that the favorable effect of this last influence was more than offset by the unfavorable effects of the first two factors, so that the scores here reported are almost certainly too low.

TABLE 23
NUMBER OF ERRORS AND PER CENTS OF SUCCESS ON ITEMS OF THE GRADE IA TEST; SAMPLE OF 100 CASES
Test Item                  Outcomes Tested    Number of Errors and Omissions    Per cents of Success
Part I
A ......................   6                  297                               91.5
B ......................   9                  91                                81.8
C ......................   6, 7, 12           96                                84.0
Part II
A: d, f, p; B: a-d .....   3, 4               197                               84.8
A: c, i ................   5                  57                                71.5
A: j, k, q .............   6, 8               66                                78.0
A: a, m, o .............   7                  49                                83.7
A: h ...................   9                  19                                81.0
A: b, n ................   10                 21                                89.5
A: e, g, l .............   11                 21                                93.0

According to Table 23 for a sample of one hundred Grade IA pupils, 91.5 per cent of the thirty-five abstract combinations were correctly answered, as well as 81.8 per cent of five examples in column addition, and 84.0 per cent of the six verbal problems. (One hundred pupils each attempted 35 abstract combinations; 297 answers were wrong and 3,203 were correct, to give a figure of 91.5 as the per cent of success.)

The various groups of items in Part II varied, as to per cents of success, from 71.5 to 93.0. The lowest items, separately considered,
are A: c, f, h, i, k, q and B: d. A: c and A: i (81 and 62 per cent respectively) relate to the meaning of ones and tens in the numbers 11 to 19. Sixteen of the 19 errors made on A: c came from the selection of 4 instead of 6 as the number of ones in 16; 34 out of 38 errors made on A: i represented the selection of 5 instead of 1 as the number of tens in 15. Item A: f required the selection of the number (79, 80, 63, or 92) which is "less than 75." The errors were about evenly distributed among the wrong alternatives, a fact which seems to indicate ignorance of the meaning of "less." Item A: h ("How do you add? up  down") probably should not have been included in the scoring. The "correct" answer according to the manual directions for teaching is "down," but whole classes tended to mark "up," thus suggesting that in those particular groups the manual instructions had not been adopted. In these classes "up" was of course the correct answer. Items A: k and A: q deal with the number of parts in the "whole story" about 3, 6, and 9 and about 4, 4, and 8, respectively. Seventy-nine per cent answered A: k correctly, but only 57 per cent A: q, which contains a "catch." Apparently this idea of the "whole story" was none too well understood by these children.

The general conclusions to be drawn with respect to the Grade IA test results must be somewhat guarded, in view of the uncertainty surrounding the results on Part II of the test. So far as the abstract combinations are concerned, these children indicated very satisfactory progress in learning. In column addition, which was introduced in this half-grade, the results were none too good, but the newness of the process may account for the comparatively low degree of success. It is difficult to account for the failure of these children to do better with the verbal problems, in view of their supposed familiarity with this kind of arithmetic.
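The per cents of success in Table 23 follow from the error counts on the assumption that each of the one hundred pupils attempted every item, as in the parenthetical computation quoted earlier (297 wrong out of 3,500 attempts). A short illustrative sketch of that computation (not part of the original study):

```python
def pct_success(n_items, n_pupils, errors):
    """Per cent of correct responses when every pupil attempts every item."""
    attempts = n_items * n_pupils
    return round(100 * (attempts - errors) / attempts, 1)

# Part I of the Grade IA test (item and error counts from Table 23):
print(pct_success(35, 100, 297))  # 91.5 - thirty-five abstract combinations
print(pct_success(5, 100, 91))    # 81.8 - five column-addition examples
print(pct_success(6, 100, 96))    # 84.0 - six verbal problems
```

The three results match the Part I per cents printed in the table.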
The system of instruction outlined for the experimental schools calls for a great many experiences with described quantitative situations, and if these were actually supplied to the children, they should have been able to solve correctly more of the test problems.

The results on Part II of the test have already been considered in some detail. If it is a fair interpretation to regard the per cents of success reported as really too low, one could infer that these children were acquiring verbal statements of mathematical principles, generalizations, and relationships of large value in their understanding of arithmetic.

Results in Grade IIB.-According to Table 24 all Grade IIB outcomes except one, namely, Outcome 11 (Roman numerals), are represented by at least two items in the group test. The coverage of Outcome 13 is, as in the case of the other tests already considered, very inadequate indeed, and the types of behavior associated with the other outcomes can of course be considered only as sampled rather than as exhaustively tested. This limited sampling is inevitable, so far as the present study is concerned, for the test was devised for the measurement of achievement over a half-grade of instruction and not for the purpose of diagnostic analysis.

GRADE IIB TERM TEST, PART I
*A. Your teacher will tell you what to do. a. ----  b. ----  c. ----  d. ----  e. ----
B. Look at the sign. Then add or subtract. [Twenty-eight abstract combinations with sums and minuends from 10 to 12, and four 0-combinations, in vertical form; figures garbled in transcription]
C. Look at the sign. Then add or subtract. [Fourteen examples in higher-decade addition and subtraction, sums and minuends to 19, among them 13 - 3 and 4 + 15; others garbled in transcription]
D. Add: [Six examples in horizontal and column addition, three digits, sums to 12, among them 3 + 0 + 7 =; others garbled in transcription]
Problems for Ex. A:
a. [Problem about toy soldiers, largely garbled in transcription: He had 11 toy soldiers ... and 3 toy soldiers with ... How many more of his toy soldiers had ...?]
b. Bill dropped a box of bottles. When [garbled], only 5 of the 12 bottles were [garbled]. How many bottles had been broken?
c. [Name garbled] has [garbled] pieces of peppermint candy and 4 pieces of chocolate candy in her bag. How many pieces of candy has she in all?
d. A farmer was mending his fence. He had to put in 12 new posts. After he had put in 9 posts, how many more did he have to put in?
e. Mrs. Spider spun a fine web. The first day she caught 2 flies and the next day 8 flies. How many flies did she catch in both days?
i.-l i i 'ji only 5 of the 12 bottles were -...d H.... .. L..ttles had been broken? ci [... l.. t i...'ces of peppermint candy and 4 pieces of chocolate candy in her bag. How many pieces of candy has she in all ? d. A farmer was mending his fence. He had to put in 12 new posts. After lie had put in 9 posts, how many more did he have to put in? e. Mrs. Spider spun a fine web. The first day she caught 2 flies and the next day 8 flies. How many flies did she catch in bIoth days? 2. 4 + 15 D. Add: Arithmetic in Grades I and II GRADE IIB TERM TEST, PART II A. Draw a ring around the answer: 1. Which means to add? t + + 2. Which means one fourth ? 1+3 4 1 4-0 3. To find "how many more" you subtract. add. 4. Which shows 4 ? 5. Which is less than 57? 6. Which means cents? + 1 7. Which number is I I I I I I? 8. How many tens has SO? 3; 0 8 7 9. Which number is largest? 70 62 S9 45 10. Which number is twelve? 11. To find "how many gone" you add. subtract. B. Write the missing number or numbers: 1. After 15, 20, 25, come - - and ..- - -.. 2. The hole story about 7, 5, and 12 I,.. ...... parts. 3. 5 is one part of 11 ; the other part is 4. At ten o'clock the short hand is on - - -.. The long hand is on ---. 5. If you take away all of a number, -.. is left. 6. After S, 10, 12 come ---- and --. .. 7. The ring is around one --- of the d.. *. .0. 8. 17 has -.----- ten and ------ones. 9. The whole story about 5, 5, and 10 has S- - -parts. 10. The -picture for 30 is . 11. A dime is the same as ------ cents. The scores in the lower section of the table indicate the degree to which achievement, broadly considered, was satisfactory among the experimental subjects. One fourth of the children secured scores equivalent to more than 98 per cent of the possibility on Part I, one half, scores equivalent to 93 per cent or better, and three fourths, scores equivalent to 82.5 per cent or better. 
The corresponding per- centage figures for Part II are lower-92.3, 84.6, and 69.2, respec- Results of a Particular Program of Systematic Instruction 83 TABLE 24 Outcomes, by Test Items: 1. Understanding of numbers to 100, ordinals to tenth............... 2. Counting by 2's to 20 and by 5's to 100......................... 3. Intelligent control of combinations through 12.................. 4. Understanding of the relationship between addition and subtraction and the corresponding number combinations ................ 5. Intelligent control of the 0-combi- nations ....................... 6. Understanding of the processes of addition and subtraction........ 7. Column and horizontal addition, three digits, sums to 12........ 8. 'Higher-decade addition, to 19.... 9. 3 as applied to single objects and even groups.................... 10. U. S. money and telling time..... 11. Reading and writing Roman nu- merals to XII ................. 12. Reading vocabulary (all of Part II, but especially) ............. 13. Appreciation of values of arithme- tic, etc......................... Part I Part II A: 5,7-9; B-8 B: 1,6 B: 2,3,9 A: 3, 11; B: 3, 5 A: 2, 4; B: 7 A: 6; B: 4,11 A: 1,2,6, 10; B: 10 Possible ........................... 57 26 M edian ............................ 53 22 Q i ................................ 47 18 Q s ................................ 56 24 Percentage equivalents of: M edian ............................ 93.0 84.6 Q 1 ................................ 82.5 69.2 Q 3 ...... ........................ 98.3 92.3 tihvlv Part I is devoted to abstract arithmetic, chiefly computation; PF'art II. 1,. mathematical relationships, generalizations, and the like. A sample of one hundred Grade IIB pupils (Table 25) solved s2.2 per cet of the verbal problems (5 problems per child, a total :.f 50XI pr._,blc,.ms, of which 89 were missed). On the abstract facts S'.tlh 'ums and minuends from 10 to 12 (twenty-eight facts in the test i. 
tlit- [i,:er cent of success was 87.7, and on the 0-facts (only four in the te.;S i t was 96.3. In higher-decade addition and subtraction .',ilh sums and minuends through 19 (fourteen examples), the per c,ntm of accuracy was 82.8. In horizontal and column addition, three digit. uim~n to 12 (six examples), the per cent of accuracy was 92.5. 84 Arithmetic in Grades I and II Again, as in the case of the Grade IA test, it is difficult to account for the comparatively low per cent in problem solving, except on the ground that these children did not have all the expected experience in dealing with verbally described quantitative situations. The per cent of success on the new addition and subtraction combinations is entirely satisfactory in an instructional program which does not hurry mastery. The figure for the 0-combinations (though based unfor- tunately on only four combinations) is especially interesting in view of the still common belief that such combinations are especially hard. According to the scheme of instruction here under consideration all 0-facts are presented by means of four generalizations and are taught as groups (0 + n, n + 0, n 0, n n). Under these conditions the combinations are readily understood, and the learning is accord- ingly made easier. As for higher-decade addition and subtraction, the process was new to these children, and 82.8 per cent seems to represent entirely satisfactory learning at this stage. The last item in Part I consists of examples in horizontal and column addition, and the children were very successful with the six examples given TABLE 25 OF THE GRADE IIB TEST; SAMPLE OF 100 CASES Outcomes Number of Errors Per cents Test Items Tested and Omissions of Success Part I A ..................... 6,12 89 82.2 B ...................28 (facts to 12) 345 87.7 4 (0-facts) 15 96.3 C ..................... 8 241 82.8 D ..................... 7 45 92.5 Part II A :5, 7-9;B:8 ......... 1 129 78.5 B : 1, 6 ................. 
2 76 81.0 B :2,3,9 ............... 3,4 82 72.0 A :3, 11;B:3,5 ........ 6 84 79.0 A :2,4;B:7 ........... 9 74 72.0 A :6; B:4, 11 ..........10 37 90.8 A : 1,2,6,10; B:10 .....12 51 89.8 On Part II, success varied from 72.0 per cent for the third group of items (the "whole story" idea) to 90.8 per cent on the sixth group (U. S. money and telling time). As already mentioned, Part II seemed to be harder for these children than was Part I, but the reason for this greater difficulty (if indeed there was greater difficulty) is uncertain. Like Part II of the Grade IA test, Part II of this test Results of a Partiiular Program of Systematic Instruction 85 called i 1) for reading and (2) for knowledge of the technique of m:irking answers. Either or both of these factors, extraneous to the real purpose of the test, may have introduced an undue number of The following items were missed by 10 per cent or fewer of the children: A: 1, 2, 4, 6, 10; B: 5. The following items were missed by 2) rper cent or more : A: 3; B: 2, 3, 6, 7, 8, 9, 10. Forty per cent ctaited that to find "how many more" one should add (item A: 3), an error that may well mean the early but undesirable adoption of "mor." as a specific cue for addition. B: 2, 3, and 9, missed by 26 per cu-Lnt, 22 per cent, and 34 per cent, respectively, all involve the "v.'-,.le story" idea, a fact which suggests the need for special atten- ticn t..' the relationship between combinations. B: 6 (25 per cent mrle errors) involves counting by 2's, but the framing of the test itern may not have suggested this fact. B: 7 calls for the identifica- tion *:'f group of three dots as one fourth of twelve dots. Only one third *:.f the children (36 per cent) succeeded on this item as com- paicd ".-ith 93 per cent in the case of A: 4, where one fourth of a -sigle Object was identified. 
The difference in per cents confirms the behalf that the application of fractions to groups of objects is much n:ore difficult than to single objects, though in this test the low per cent may reflect ambiguity in the test item itself. B: 8, missed by 3r. per cent, requires two entries, and one of these was frequently ,,mitted B: 10 tests ability to use a special symbol for 10 in the co'n-truction of a "picture" for 30. Forty-two per cent could not draw the correct "picture"; this probably should not be surprising in view O:it c'.imparatively slight experience in this kind of activity. The form of analysis adopted in the foregoing paragraphs and in the corresponding sentences for the other tests has the disadvantage .'f str.essing deficiencies rather than progress in learning. The reader may" therefore be predisposed to object to any statement which im- prlies complacency about the situation as revealed by the Grade IIB te-t results. It is true that the showing at some few places (espe- call, those mentioned in the preceding paragraph) is none too good, but there are many evidences that these children were advancing t.rc, 'ird the outcomes for their half-grade. After all, outcomes must be ie(xed as directions for development to take, and not goals which either are or are not attained, the quicker they are attained, the bet- ter In- the instruction to follow in Grade IIA and in later grades thIre are many opportunities to enrich and deepen the learning which Vi b,1gui in Grade IIB and to carry that learning to the desired limit. Arithmetic in. Grades I aoi'd li The crucial point is to make sure that earl learning; i- .-f the right kinrd-meaningful, intelligent, based upon under-..inlirg-and the experimental subjects in this investigation '.cr -pproacah.ii.. arith- metic in this way. Results in Grade IIA.-The use of ordinals (Outcome 1) is not represented among the items of the Grade IIA test (Table 26) ; and Outcomes 3, 4, 9, 10, and 12 are but slightly represented. 
However, GRADE IIA TERM TEST, PART I *A. Write just the answers: 1. ------------- 4. ------------- B. Do what the sign tells you to do : 1. 15 5 7 - I +9 +f f 5. ------------- 3. ----. ...-- [I 7 IS 14 15 -7 +S -9 -I; -7 14 4 17 9 15 14 ; - s +9 -8 +7 -9 -5 +9 23 1to S 15 9 14 1:1 -7 -9 +9 -S + -9 -S C. Do what the sign tells you to do : 1. 94 5 47 -4 +34 -3 D. Arldd 7S 5 -3 +6 H 1.3+5+0+6= ---- 2.4+2+3+5= ---. E. Count by 3's: 1.9 12 ---- ---- 21 F. Write the missing numbers: 1.37 3S ---- ---- 41 -0 + : -I +5 4 3S 32 99 S+4 -6 +7 -2 2.IS 1 ---- ---- 27 .... -..--- 71 ----.... 73 Proems for -r. A 4. E[lla n rote 14 words on tihe board. Tom w rot. 1. If you have 15 peanuts in a bag and eat 9 of 6 words. How many fewer words did Tom rite? them, how ninny will be left? 5. Dorothy has 8 hair ribbons. Her sister JuIhi 2. A book costs So and a tablet GO. HIow much han 9. HIow many hair ribbons have the two girls ido the book and tablet cost together? in all? 3. There were 12 berries on a straw berry plant. 6. Mother put a pan of 1(; cookies in the oven. Along carnme some birds, and soon only 5 berries The fire was too hot and 7 conkies got burned. IHow were left. How many berries had the birds eaten? many cookies did not get burned ?
School of Mathematics, Science & Engineering

MATH 45 ELEMENTARY ALGEBRA  4 Units
[Prerequisite: MATH 35, MATH-35PL or the equivalent skill level as determined by the Southwestern College Mathematics Assessment or equivalent. Recommended Preparation: RDG 56 or the equivalent skill level as determined by the Southwestern College Reading Assessment or equivalent.]
Emphasizes elementary concepts of algebra, including real numbers, linear equations and inequalities in one variable, graphs of lines and inequalities in two variables, Pythagorean theorem, 2x2 systems, exponents, polynomials, factoring techniques, rational expressions, and applications. Includes mandatory lab. (Not open to students with credit in any higher-numbered mathematics course.) [ND]

MATH 45 Sections Offered
Cls No.  Crs/Sect No.  Time             Days   Room  Instructor        #Weeks  #Seats
81949    MATH 45 60    04:30PM-06:50PM  MTWTH  565   Camarena, Misael  7       0*
         MAIN CAMPUS - EVENING   Begins: 6/10/2013   Ends: 7/25/2013
82612    MATH 45 62    04:30PM-06:50PM  MTWTH  511   Francis, Youssef  7       5*
         MAIN CAMPUS - EVENING   Begins: 6/10/2013   Ends: 7/25/2013
*Open seats as of Monday, June 24, 2013 7:15 AM

MATH 60 INTERMEDIATE ALGEBRA I  4 Units
[Prerequisite: MATH 45 or MATH 45PL or the equivalent skill level as determined by the Southwestern College Mathematics Assessment or equivalent. Recommended Preparation: RDG 56 or the equivalent skill level as determined by the Southwestern College Reading Assessment or equivalent.]
Emphasizes intermediate concepts of algebra such as rational numbers, systems of equations in two and three variables, absolute value equations and inequalities, radical expressions, rational exponents, complex numbers, quadratic equations, graphing linear and quadratic functions, and graphing parabolas and circles.
Scientific calculator is required. (Not open to students with credit in MATH 60PL or any higher-numbered mathematics course.) [D] MATH 60Sections Offered Cls No. Crs/Sect No. Time Days Room Instructor #Weeks #Seats 81947 MATH 60 60 04:30PM-06:50PM MTWTH 343 Medin,Sherooq 7 6* MAIN CAMPUS - EVENING Begins: 6/10/2013 Ends: 7/25/2013 *Open seats as of Monday, June 24, 2013 7:15 AM 79120 MATH 60 86 05:30PM-07:50PM MTWTH 5106 Nguyen,Kelly 7 6* SAN YSIDRO ED. CTR, SAN YSIDRO - EVENING Begins: 6/10/2013 Ends: 7/25/2013 *Open seats as of Monday, June 24, 2013 7:15 AM MATH 70 INTERMEDIATE ALGEBRA II 4 Units [Prerequisite: MATH 60 or MATH 60PL or the equivalent skill level as determined by the Southwestern College Mathematics Assessment or equivalent. Recommended Preparation: RDG 56 or the equivalent skill level as determined by the Southwestern College Reading Assessment or equivalent.] Emphasizes advanced concepts, including algebra of functions, function composition, inverse functions, exponential and logarithmic functions, radical functions, and rational functions. Covers conics, quadratic, cubic equations, systems of equations, inequalities, matrix methods, sequences, and series. The graphing calculator will be used to graph and analyze functions. Requires graphing calculator. (Not open to students with credit in MATH 70PL.) [D] [EFFECTIVE FALL 2013 - DESCRIPTION CHANGE] [Prerequisite: MATH 60 or MATH 60PL or the equivalent skill level as determined by the Southwestern College Mathematics Assessment or equivalent. Recommended Preparation: RDG 56 or the equivalent skill level as determined by the Southwestern College Reading Assessment or equivalent.] Emphasizes application problems, graphing calculator (calculations, matrix methods, graphing), logarithms, and conics. 
Covers functions (inverse, exponential, logarithmic, radical, rational, quadratic), nonlinear inequalities, polynomial division, equations (quadratic in form, exponential, logarithmic), systems of equations or inequalities, sequences and series. Requires graphing calculator. (Not open to students with credit in MATH 70PL.) [D] MATH 70Sections Offered Cls No. Crs/Sect No. Time Days Room Instructor #Weeks #Seats 81670 MATH 70 60 04:30PM-06:50PM MTWTH 342 Diwa,Philipp 7 13* MAIN CAMPUS - EVENING Begins: 6/10/2013 Ends: 7/25/2013 *Open seats as of Monday, June 24, 2013 7:15 AM MATH 119 ELEMENTARY STATISTICS 4 Units [Prerequisite: MATH 70 or the equivalent skill level as determined by the Southwestern College Mathematics Assessment or equivalent. Recommended Preparation: RDG 158 or the equivalent skill level as determined by the Southwestern College Reading Assessment or equivalent.] Emphasizes elementary concepts of statistics including measures of central tendency and variability, probability, sampling techniques, binomial, hypergeometric, and normal distributions, statistical estimation and hypothesis testing, regression and correlation. Includes descriptive statistics, probability and probability distributions, and inferences concerning single population means and proportions. Requires graphing calculator and other technologies will be used. [D; CSU; MATH 119Sections Offered Cls No. Crs/Sect No. Time Days Room Instructor #Weeks #Seats 81671 MATH 119 60 04:30PM-06:50PM MTWTH 398 Medin,Andrew 7 0* MAIN CAMPUS - EVENING Begins: 6/10/2013 Ends: 7/25/2013 *Open seats as of Monday, June 24, 2013 7:15 AM MATH 121 APPLIED CALCULUS I 3 Units [Prerequisite: MATH 70 or the equivalent skill level as determined by the Southwestern College Mathematics Assessment or equivalent. Recommended Preparation: RDG 158 or the equivalent skill level as determined by the Southwestern College Reading Assessment or equivalent.] 
Emphasizes concepts and applications of algebra, analytic geometry and the polynomial calculus to solving problems in the physical, biological and social sciences. Requires graphing calculator. (Not open to students with credit in MATH 250 or equivalent.) [D; CSU; UC] MATH 121Sections Offered Cls No. Crs/Sect No. Time Days Room Instructor #Weeks #Seats 81610 MATH 121 60 06:00PM-07:50PM MTWTH 391 Cebecioglu,Burak 7 16* MAIN CAMPUS - EVENING Begins: 6/10/2013 Ends: 7/25/2013 *Open seats as of Monday, June 24, 2013 7:15 AM Total Courses: 5 Total Sections: 7
Summary: 124A PARTIAL DIFFERENTIAL EQUATIONS

This is the outline for the course. We emphasize the key ideas and the structure, and often omit the details. The details will be covered in the lectures. They can also be found in the textbook (W. A. Strauss, PDEs: An Introduction, 2nd ed.) and in the extended UCSB 124 course notes by Viktor Grigoryan. The two sources contain much more than will be covered during the quarter.

1. Basics

(1) A partial differential equation (in two independent variables) is a relation

    F(x, y, u, u_x, u_y, u_xx, u_xy, ...) = 0

for the unknown function u(x, y). All laws in the sciences are formulated in the form of (systems of) PDEs. A function u is a solution of the PDE in a region Ω if

    F(x, y, u(x, y), u_x(x, y), u_y(x, y), u_xx(x, y), u_xy(x, y), ...) ≡ 0  for (x, y) ∈ Ω.

Thus it is very easy to check whether a given function is a solution of the given PDE.

(2) The general solution to the given PDE is the formula for all solutions. For example, we know from calculus that the general solution u(x) of u' − u = 0 is u = Ce^x.
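The "very easy to check" claim in (1) can be made concrete even numerically; here is a minimal pure-Python sketch for the one-variable analogue u' − u = 0 with candidate u = Ce^x. The step size h, the constant C, and the test points are arbitrary choices of mine, not from the course notes:

```python
import math

def residual(u, x, h=1e-6):
    """Finite-difference residual of u' - u at x (central difference)."""
    du = (u(x + h) - u(x - h)) / (2 * h)  # approximates u'(x)
    return du - u(x)

C = 3.7                                   # any constant C works
u = lambda x: C * math.exp(x)
print(max(abs(residual(u, x)) for x in [0.0, 0.5, 1.0]))  # tiny, i.e. u solves the ODE
```

The same plug-and-check idea carries over to PDEs, with finite differences in each of the two independent variables.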
yy = smooth(y)
yy = smooth(y,span)
yy = smooth(y,method)
yy = smooth(y,span,method)
yy = smooth(y,'sgolay',degree)
yy = smooth(y,span,'sgolay',degree)
yy = smooth(x,y,...)

yy = smooth(y) smooths the data in the column vector y using a moving average filter. Results are returned in the column vector yy. The default span for the moving average is 5. The first few elements of yy are given by

yy(1) = y(1)
yy(2) = (y(1) + y(2) + y(3))/3
yy(3) = (y(1) + y(2) + y(3) + y(4) + y(5))/5
yy(4) = (y(2) + y(3) + y(4) + y(5) + y(6))/5

Because of the way endpoints are handled, the result differs from the result returned by the filter function.

yy = smooth(y,span) sets the span of the moving average to span. span must be odd.

yy = smooth(y,method) smooths the data in y using the method method and the default span. Supported values for method are listed in the table below.

┃ method    │ Description ┃
┃ 'moving'  │ Moving average (default). A lowpass filter with filter coefficients equal to the reciprocal of the span. ┃
┃ 'lowess'  │ Local regression using weighted linear least squares and a 1st degree polynomial model ┃
┃ 'loess'   │ Local regression using weighted linear least squares and a 2nd degree polynomial model ┃
┃ 'sgolay'  │ Savitzky-Golay filter. A generalized moving average with filter coefficients determined by an unweighted linear least-squares regression and a polynomial model of specified degree (default is 2). The method can accept nonuniform predictor data. ┃
┃ 'rlowess' │ A robust version of 'lowess' that assigns lower weight to outliers in the regression. The method assigns zero weight to data outside six mean absolute deviations. ┃
┃ 'rloess'  │ A robust version of 'loess' that assigns lower weight to outliers in the regression. The method assigns zero weight to data outside six mean absolute deviations. ┃

yy = smooth(y,span,method) sets the span of method to span.
For the loess and lowess methods, span is a percentage of the total number of data points, less than or equal to 1. For the moving average and Savitzky-Golay methods, span must be odd (an even span is automatically reduced by 1).

yy = smooth(y,'sgolay',degree) uses the Savitzky-Golay method with polynomial degree specified by degree.

yy = smooth(y,span,'sgolay',degree) uses the number of data points specified by span in the Savitzky-Golay calculation. span must be odd and degree must be less than span.

yy = smooth(x,y,...) additionally specifies x data. If x is not provided, methods that require x data assume x = 1:length(y). You should specify x data when it is not uniformly spaced or sorted. If x is not uniform and you do not specify method, lowess is used. If the smoothing method requires x to be sorted, the sorting occurs automatically.

Load the data in count.dat:

load count.dat

The 24-by-3 array count contains traffic counts at three intersections for each hour of the day.

First, use a moving average filter with a 5-hour span to smooth all of the data at once (by linear index):

c = smooth(count(:));
C1 = reshape(c,24,3);

Plot the original data and the smoothed data:

hold on
title('Smooth C1 (All Data)')

Second, use the same filter to smooth each column of the data separately:

C2 = zeros(24,3);
for I = 1:3,
    C2(:,I) = smooth(count(:,I));
end

Again, plot the original data and the smoothed data:

hold on
title('Smooth C2 (Each Column)')

Plot the difference between the two smoothed data sets:

plot(C2 - C1,'o-')
title('Difference C2 - C1')

Note the additional end effects from the 3-column smooth.

Create noisy data with outliers:

x = 15*rand(150,1);
y = sin(x) + 0.5*(rand(size(x))-0.5);
y(ceil(length(x)*rand(2,1))) = 3;

Smooth the data using the loess and rloess methods with a span of 10%:

yy1 = smooth(x,y,0.1,'loess');
yy2 = smooth(x,y,0.1,'rloess');

Plot original data and the smoothed data.
[xx,ind] = sort(x);
set(gca,'YLim',[-1.5 3.5])
legend('Original Data','Smoothed Data Using ''loess''',...

set(gca,'YLim',[-1.5 3.5])
legend('Original Data','Smoothed Data Using ''rloess''',...

Note that the outliers have less influence on the robust method.

More About

Another way to generate smoothed data is to fit it with a smoothing spline. Refer to the fit function for more information.

See Also

fit | sort
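The endpoint rule quoted above (yy(1) = y(1), yy(2) = mean of the first three points, and so on) pins down the default moving-average behavior well enough to reproduce outside MATLAB. A rough pure-Python sketch, written from the description here rather than from the MathWorks source:

```python
def smooth_moving(y, span=5):
    """Moving-average smoother with MATLAB-smooth-style endpoints:
    at index i, use the largest odd window centred at i that fits
    inside the data (so the window shrinks toward the ends)."""
    n = len(y)
    out = []
    for i in range(n):
        k = min(i, n - 1 - i, span // 2)   # usable half-width at i
        window = y[i - k : i + k + 1]
        out.append(sum(window) / len(window))
    return out

# A linear ramp passes through unchanged, as expected of an average:
print(smooth_moving([1, 2, 3, 4, 5, 6]))   # [1.0, 2.0, 3.0, 4.0, 5.0, 6.0]
```

Checking against the documented values: at i = 0 the window is just y(1); at i = 1 it is the first three points; from i = 2 onward (away from the right edge) it is the full 5-point window.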
The sad tally 3: first analysis Last week, I posed the question of how one can ascertain if a data set is "random" using Golden Gate Bridge suicide data compiled by the SF Chronicle. For review, see here (and less importantly, here ). I plan to document my own exploration slowly over a few posts as the material may be a bit heavy for those not trained in statistics. Don't run away yet as I'll try to explain things in simple terms as best I can. In the original plots, I compared eight randomly generated datasets with the real suicide data (bottom right graph here). The random data imagine the scenario in which death-seekers randomly picked the locations from which to jump off the bridge. While in theory there should be about 21.6 suicides at each of the 35 locations, the random data exhibited a wide spread, with some locations having the number of deaths over 2 to 3 standard deviations away from the average (norm). Standard deviation is a way to measure the spread of data around the average value. Staring at a graph like the one shown above, how does one decide whether the pattern is random or not? I will, for now, stick to visualization methods. (1) Frequency of extreme values Intuitively, assuming random, then the number of suicides should be close to the mean at most locations, with few extreme values. If the number of locations having extreme values in the real data is significantly higher (or lower) than the same count in the random data, one might reasonably conclude that the real data is not random. In my chart, there are 9 lines, each a dataset. I identified two unusual lines by red color. We have evidence that the real data (Dataset 9) is not random since the line deviated from most other lines. This evidence is inconclusive because Dataset 4 also strayed even though it was randomly generated. This illustrates why the question of randomness is tricky: we rely on strange (rare) behavior to detect non-randomness but strange things can happen randomly too. 
Therefore, it is best to collect many pieces of evidence.

(2) Cumulative distribution

Like Robert, I plotted the cumulative distribution. This graph shows the % of locations with less than 10, 20, 30, etc. suicides. Again, each "staircase" line represents one of the 9 datasets. They all converge at the point (20, 0.6): that is, in each dataset, about 21 locations (60% x 35 locations = 21) had 20 suicides or fewer. The red line clearly stood out in this chart. It is Dataset 9. So we have stronger evidence still that it is different from random data. Observe that for random data (the black lines), a steep rise occurred between 14 and 35 suicides, indicating that if randomly selected, most locations will see approximately 14-35 suicides. By contrast, the red line rose fast at about 10 suicides or so, then saw less action in the 25+ range, and had an extreme outlier at 60. In other words, the variance (spread) in the real data was higher than can be expected in random data.

These visualizations, of extreme values and cumulative distribution, facilitated our objective of comparing the real and the random data. The difference showed up more potently here than in the scatter plots. In a future post, I will look at other ways, both simpler and more complex ideas, to identify the important differentiator of variance.

Comment: I really like the CDF chart -- it leads right into a Kolmogorov-Smirnov test, which I had not thought of. Thanks for the hint!
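The comparison the post makes, the real allocation against several uniformly random allocations of the same total across 35 locations, is easy to rerun. A small pure-Python sketch; the total of 756 deaths is inferred from the post's "about 21.6 suicides at each of the 35 locations" figure, and the uniform-choice model is the illustrative null hypothesis, not a claim about the Chronicle data:

```python
import random
from statistics import pstdev

random.seed(1)
LOCATIONS, TOTAL = 35, 756        # 756 / 35 = 21.6 expected per location

def random_allocation():
    """One simulated dataset: each jumper picks a location uniformly."""
    counts = [0] * LOCATIONS
    for _ in range(TOTAL):
        counts[random.randrange(LOCATIONS)] += 1
    return counts

# Spread (standard deviation of per-location counts) across 8 random datasets,
# mirroring the 8 simulated lines in the charts above.
spreads = [pstdev(random_allocation()) for _ in range(8)]
print(min(spreads), max(spreads))
```

A real dataset whose per-location standard deviation falls well outside this simulated range is evidence against the uniform-random model, which is exactly the "variance" differentiator the post identifies.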
Minimum covering in cubic graphs

Does a cubic graph with $2n$ vertices admit a minimal cover with $n-1$ vertices?

Comments:
- Is it an edge or vertex cover? (hbm, May 26 '12)
- And this seems like a statement, not a question... (Igor Rivin, May 26 '12)
- The way it is stated, it's not a real question. Voting to close. Once you formulate it, don't forget to convince us it's not your homework from a course in combinatorics... (Vladimir Dotsenko, May 26 '12)
- Consider a nodal curve $C$ with 2n components, where each component intersects the curve complementary to it in three points. We can interpret each component of the curve $C$ as a vertex and each node as an edge, thus obtaining a graph called the dual graph of the curve $C$. My question is whether I can choose $n-1$ components (vertices) so that each of the other $n+1$ components intersects at least one of them. Is this not a problem about minimal coverings of graphs? (Flávio, May 26 '12)
- At least you should edit your motivation from the comment above into the main body of the question. (Gjergji Zaimi, May 27 '12)

Answer (Gjergji Zaimi):
A subset of vertices $S$ in a graph $G$ is called a dominating set if every vertex in $G$ is in $S$ or is connected to a vertex in $S$. The size of the smallest dominating set in a graph is called the domination number of $G$.

Even though your question asks about a minimum (vertex/edge) covering, what you seem to be interested in is the domination number of a cubic graph. Not only does the domination number satisfy the bound in your question, but it can be improved further. Bruce Reed showed in "Paths, stars, and the number three", Combin. Probab. Comput. 5 (1996) 277-295, that every cubic graph has its domination number bounded by $3|V|/8$, where $|V|$ is the number of vertices in your graph. The bound is achieved for some graphs on 8 vertices. This bound has been more recently improved on by Kostochka and Stodolsky. I believe the conjectured best bound is $5|V|/14$, but it is not known if infinitely many graphs achieve it.

Answer (Gerry Myerson):
Pick any vertex. It is adjacent to three other vertices. Now repeatedly pick an edge joining two vertices, call them $x$ and $y$, such that $y$ has not yet been accounted for, and choose $x$. Each such choice adds one more to the vertices accounted for. When you've exhausted the graph, you've picked $n-1$ vertices such that each of the other $n+1$ vertices is adjacent to at least one of the $n-1$.

Comments:
- @Gerry, this argument won't work. Think of $K_{3,3}$. (Gjergji Zaimi, May 27 '12)
- @Gjergji, let the vertices of $K_{3,3}$ be $a,b,c,1,2,3$, with letters adjacent to numbers. Pick $a$. It is adjacent to $1,2,3$. Now there's an edge joining $b$ and $1$, and $b$ has not yet been accounted for, so choose $1$. Now we've accounted for everything, so we have picked 2 vertices ($a$ and $1$) such that each of the other 4 vertices is adjacent to at least one of the 2 we picked. It's possible that my argument doesn't work, but I think it works for $K_{3,3}$. (Gerry Myerson, May 27 '12)
- As I understand Gerry's argument, it does work for K_3,3. I don't know about other cubic graphs. Gerhard "Looks Like It Works Though" Paseman, 2012.05.26
- Of course, his argument establishes an upper bound, not an exact number. To get exactly n-1 you have to be carefully clumsy, or perhaps I mean clumsily careful. Gerhard "Still Thinks It Could Work" Paseman, 2012.05.26
- Also, there is the issue of minimality. Until we have a clear idea of what is wanted, I would say the answer is no, n-1 is not guaranteed for 2n vertices. Gerhard "Waiting For The Fine Print" Paseman, 2012.05.26

Answer (Flávio):
Dear Gjergji Zaimi, thank you for your reply, but I think there is something wrong with this value $3|V|/8$, because the following cubic graph, http://www.dharwadker.org/independent_set/fig2a.gif, has 6 vertices and a dominating set with three vertices. Is that not so?

Comment:
- It also has a dominating set of size two. (Zack Wolske, May 27 '12)

Answer (Nathann Cohen):
Looks wrong. The Heawood graph is 3-regular on 14 vertices and its vertex cover is 7.

sage: g = graphs.HeawoodGraph()
sage: g.degree()
[3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3]
sage: g.order()
sage: g.vertex_cover()
[0, 2, 4, 6, 8, 10, 12]

Comments:
- Notice that for any regular graph, a vertex cover contains at least half of the vertices. (Gjergji Zaimi, May 29 '12)
- ahahahah right! And necessarily more if it is not bipartite :-) (Nathann Cohen, May 30 '12)
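Gerry Myerson's greedy procedure can be run mechanically. A small pure-Python toy of my own, applying it to $K_{3,3}$ (the adjacency dict is written out by hand; the sketch assumes a connected graph, since otherwise the search for an edge into new territory could stall):

```python
def greedy_dominating_set(adj):
    """Pick a start vertex; then repeatedly pick a vertex x that has
    a neighbour y not yet accounted for, as in the argument above.
    Assumes the graph described by adj is connected."""
    vertices = list(adj)
    chosen = [vertices[0]]
    dominated = {vertices[0], *adj[vertices[0]]}
    while len(dominated) < len(vertices):
        # find an edge (x, y) with y not yet accounted for, choose x
        for x in vertices:
            for y in adj[x]:
                if y not in dominated:
                    chosen.append(x)
                    dominated |= {x, *adj[x]}
                    break
            else:
                continue
            break
    return chosen

# K_{3,3}: letters adjacent to numbers, as in the comment thread
k33 = {'a': ['1', '2', '3'], 'b': ['1', '2', '3'], 'c': ['1', '2', '3'],
       '1': ['a', 'b', 'c'], '2': ['a', 'b', 'c'], '3': ['a', 'b', 'c']}
print(greedy_dominating_set(k33))   # ['a', '1'], matching Gerry's hand trace
```

As the later comments note, this only certifies an upper bound on the size of a dominating set for this one run, not minimality.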
Permutations and Combinations

May 12th 2008, 08:21 AM, #1

1. I am going to choose 6 students out of the 15 who currently have A's to be mentors for the students who have C's or lower. How many ways can a group of 6 be selected?

2. I've decided that the mentor group should be made up of 3 girls and 3 boys. Of the students who have A's, 9 of them are girls and 6 are boys. How many ways can I form a group of 3 boys and 3 girls?

3. State and multi-state lotteries are common in the U.S. To win a typical lottery, you must match 6 numbers between 1 and 40. How many different combinations are possible? What does that say about the chances of winning the lottery?

4. A pizza shop offers the following toppings: 8 different vegetables, 5 different meats, and 4 different cheeses. How many ways can 4 toppings be selected where:
a. All the toppings are vegetables
b. All the toppings are meat
c. There is only cheese on the pizza
d. There are 2 vegetables, 1 meat and 1 cheese
e. There are 3 meats and 1 cheese. What's your favorite combo?

May 12th 2008, 10:07 AM, #2 (Super Member since May 2006, Lexington, MA (USA))

Hello, gtzfynestbabiigurl!

1) I am going to choose 6 students out of 15 to be mentors. How many ways can a group of 6 be selected?

There are: $_{15}C_6 = {15\choose6} = 5,005$ ways.

2) I've decided that the mentor group should be made up of 3 girls and 3 boys. Of the 15 students, 9 are girls and 6 are boys. How many ways can I form the group?

There are: $\left({}_9C_3\right)\left({}_6C_3\right) = {9\choose3}{6\choose3} = 1,680$ ways.

3) State and multi-state lotteries are common in the U.S. To win a typical lottery, you must match 6 numbers between 1 and 40. How many different combinations are possible?

There are: $_{40}C_6 = {40\choose6} = 3,838,380$ possible combinations.

What does that say about the chances of winning the lottery?
A lottery is a penalty for being weak in mathematics.

4) A pizza shop offers the following toppings: 8 different vegetables, 5 different meats, and 4 different cheeses. How many ways can 4 toppings be selected where:

a) All the toppings are vegetables.
There are: ${8\choose4} = 70$ ways.

b) All the toppings are meat.
There are: ${5\choose4} = 5$ ways.

c) All the toppings are cheese.
There is: ${4\choose4} = 1$ way.

d) There are 2 vegetables, 1 meat and 1 cheese.
There are: ${8\choose2}{5\choose1}{4\choose1} = 560$ ways.

e) There are 3 meats and 1 cheese.
There are: ${5\choose3}{4\choose1} = 40$ ways.

What's your favorite combo?
White truffles, Beluga caviar, dusted with saffron . . . Just kidding!
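Every count in the reply above can be checked in a few lines with Python's `math.comb` (available since Python 3.8):

```python
from math import comb

assert comb(15, 6) == 5005                          # 1) mentors from the 15 A students
assert comb(9, 3) * comb(6, 3) == 1680              # 2) 3 girls and 3 boys
assert comb(40, 6) == 3838380                       # 3) lottery combinations
assert comb(8, 4) == 70                             # 4a) 4 vegetable toppings
assert comb(5, 4) == 5                              # 4b) 4 meat toppings
assert comb(4, 4) == 1                              # 4c) all cheese
assert comb(8, 2) * comb(5, 1) * comb(4, 1) == 560  # 4d) 2 veg, 1 meat, 1 cheese
assert comb(5, 3) * comb(4, 1) == 40                # 4e) 3 meats, 1 cheese
print("all counts confirmed")
```

About the lottery: 1 chance in 3,838,380 per ticket is the precise content of the quip about being weak in mathematics.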
MathGroup Archive: October 2012

Re: D under Sum

• To: mathgroup at smc.vnet.net
• Subject: [mg128400] Re: D under Sum
• From: Roland Franzius <roland.franzius at uos.de>
• Date: Sat, 13 Oct 2012 01:02:59 -0400 (EDT)
• References: <k55olv$eb3$1@smc.vnet.net>

On 11.10.2012 08:23, Nicolas Venuti wrote:
> Hello
> I want the derivation D to be applied under the Sum sign.
> I have a function HoldSum that does not evaluate but is similar to Sum in form.
> In[3]:= D[HoldSum[p*i, {i, 1, n}], p]
> I get the output:
> Out[3]= i*HoldSum^(1, {0, 0, 0})[i p, {i, 1, n}]
> but I would want D to be applied under the Sum sign, as done by hand here:
> In[5]:= HoldSum[D[p i, i], {i, 1, n}]
> Out[5]= HoldSum[p, {i, 1, n}]
> Obviously I want D to work for any expression containing HoldSum (even on nested HoldSum expressions).
> Does anybody know how to do that?

As you have observed already, manipulating expressions in sums and integrals with respect to the summation variable seems to be mathematical nonsense, because the name of the domain construct is local and the sums are constants with respect to it.

A path to implement your goal is, as in functional and Fourier analysis as well as quantum mechanics, to switch to a general context of a domain space with a fixed variable name and apply transformation rules to the abstract sum term together with the domain list.
The container "List" suffices to implement the two main operations that make up the Heisenberg algebra on such a space: applying a function f of two variables, f: {i, term} -> f[i, term],

apply[f_][{summand_, domain_List}] := {f[domain[[1]], summand], domain}

and taking the derivative with respect to the summation variable.

Evaluation of sums and integrals is considered a linear operation sending the List to the domain of values of the terms:

eval[{term_, domain_List}] := Sum[term, domain]

Logically, after symbolic evaluation the result is considered a number. Numbers cannot be changed by manipulation of the summation term, even if the evaluation result does not yield a number, with the exception of identical and simplifying transformations.

The last case is of course a theme in the Mathematica simplification process, because resummation techniques require complex replacements of the summation term and the domain construct in unevaluated sums and integrals with fixed summation domains.

Roland Franzius
st: AW: small problem with 'stack' command!

From: "Martin Weiss" <martin.weiss1@gmx.de>
To: <statalist@hsphsun2.harvard.edu>
Subject: st: AW: small problem with 'stack' command!
Date: Thu, 24 Sep 2009 12:32:01 +0200

You should give an excerpt of your data to make answers to this rather opaque question possible. Also, have you had a look at -reshape-?

-----Original Message-----
From: owner-statalist@hsphsun2.harvard.edu [mailto:owner-statalist@hsphsun2.harvard.edu] On Behalf Of Marwan Elkhoury
Sent: Thursday, September 24, 2009 12:30
To: statalist@hsphsun2.harvard.edu
Subject: st: small problem with 'stack' command!

Dear Statalist,

I have a minor problem with the 'stack' command in Stata (Stata 10 for Windows XP). I have 5 groups of countries of various lengths which I want to stack into a single group of two variables, say, ctryg and totrnk. Right now I have cntry_i, rnk_i, for i = 1 to 5, and if I write the following:

stack cntry_1 rnk_1 cntry_2 rnk_2 cntry_3 rnk_3 cntry_4 rnk_4 cntry_5 rnk_5, into(cntry_1 rnk_1) wide clear
ren cntry_1 cntryg
ren rnk_1 totrnk

it, in fact, creates two variables whose length is that of the smallest of the groups, rather than stacking the entire number of observations.

What would be the easiest and smartest way of stacking this 'unbalanced' group of variables, if any, to create one group of two variables from the 5 different groups?

Best regards,
Marwan Elkhoury

* For searches and help try:
* http://www.stata.com/help.cgi?search
* http://www.stata.com/support/statalist/faq
* http://www.ats.ucla.edu/stat/stata/
Lab 7: Harmonic Modeling

A Harmonic Model assumes that not all the magnitude peaks of a spectrum should be modeled by sinusoids, because many peaks can be the sidelobes of the analysis window, the result of noisy components, or other transitional effects. In order to decide if a spectral peak is a harmonic or not, we have to perform a detection of the fundamental frequency, f0, of the spectrum and check if the peak is close to an ideal harmonic of the fundamental frequency. To do this lab you can start from sinemodel.m (which requires peakinterp.m, genbh92lobe.m and genspecsines.m), converting it into harmonicmodel.m.

7.1: Detection of F0

We will be using the Two-Way Mismatch algorithm that is implemented in f0detection.m and that can be called by

[f0, f0error, isHarm] = f0detection(mX, fs, iploc, ipmag, ethreshold)

1. Incorporate the f0detection function into the new harmonicmodel.m code.
2. Generate a square wave (ex: t = 0:1/44100:1; y = square(2*pi*200*t);), perform the harmonic analysis, and plot the f0 and f0error values for the whole duration of the sound. You can improve the detection by narrowing down the f0 range, adding minf0 and maxf0 parameters. Do it also with a real sound that is harmonic. What is the effect of ethreshold on the f0 detection? What other input parameters of the harmonicmodel function affect the f0 detection? How?

7.2: Detection of harmonics

Once the f0 in a given frame is detected, we have to find the harmonics present in the frame. We can do it by finding the closest peaks to the harmonic series of f0, and defining a maximum number of harmonics to find, using the parameter nSines, ex:

if (isHarm)
  for i=1:nSines
    trajfreq = f0(l)*i;
    if (trajfreq < fs/2)
      [closestpeakmag,closestpeakindex] = min(abs((iploc-1)/N*fs-trajfreq));
      ihloc(i) = iploc(closestpeakindex);
      ihmag(i) = ipmag(closestpeakindex);
      ihphase(i) = ipphase(closestpeakindex);
    end
  end
end

1.
Incorporate this code into your function and plot the harmonic values found on top of each spectral frame of the sound.
2. Complete the function harmonicmodel.m by generating only the harmonics obtained. The function could be called by:

y = harmonicmodel(x, fs, w, N, H, t, nSines, minf0, maxf0, f0threshold)

7.3: Testing the harmonic model

Test the algorithm with different harmonic and inharmonic sounds. You will have to change the parameters to get a good result.
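The MATLAB fragment in 7.2 translates almost line for line. A rough pure-Python sketch of the closest-peak harmonic matching for a single frame; the variable names mirror the lab's, and the peak arrays are assumed here to be plain lists of (1-based) bin locations and magnitudes rather than MATLAB vectors:

```python
def match_harmonics(f0, iploc, ipmag, n_sines, fs, N):
    """For each ideal harmonic i*f0 below Nyquist, keep the spectral
    peak whose frequency is closest to it (cf. the MATLAB loop in 7.2)."""
    ihloc, ihmag = [], []
    peak_freqs = [(loc - 1) / N * fs for loc in iploc]  # 1-based bin -> Hz
    for i in range(1, n_sines + 1):
        trajfreq = f0 * i
        if trajfreq >= fs / 2:            # stop at the Nyquist frequency
            break
        k = min(range(len(peak_freqs)),
                key=lambda j: abs(peak_freqs[j] - trajfreq))
        ihloc.append(iploc[k])
        ihmag.append(ipmag[k])
    return ihloc, ihmag
```

For example, with fs = 1000 Hz, N = 100 and peaks near 100, 200, 300 and 410 Hz, an f0 of 100 Hz picks all four peaks, the 410 Hz one standing in for the fourth harmonic. A refinement worth trying in the lab is to reject a matched peak whose distance from i*f0 exceeds some tolerance, so spurious peaks are not forced onto missing harmonics.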
A simple model comprised of only one data structure and three operators Source code files accompanying article are located on MacTech CD-ROM or source code disks. We re rapidly approaching perhaps the most interesting time in all of the existence of personal computing. The winds of change are upon us, and this much is clear: we programmers are going to have to adopt new programming paradigms if we expect to survive into the future. In terms of the hardware that we use, old classifications and distinctions are being either blurred or obliterated on a daily basis. The line between high-end PC and low-end workstation is becoming artificial, a matter of marketspeak only, as time marches on. The machine that this magazine is devoted to has always had the capability that is probably 2nd most-often used to designate a workstation as opposed to a PC-networking capability. The most-often used designator, performance, is something that can be achieved principally through changing processors. Nearly everyone in the industry has heard about the architectures vying for attention-Motorola s 88000 family, MIPS, Sun s SPARC, and IBM s RS/6000, to name only the ones that come immediately to mind. In addition, Apple has been investigating RISC for some time. It only stands to reason that another revolution will ensue as the average power of even the most affordable machines goes from the order of the single-digit MIPS (with the high end Macintosh IIfx being around 8.5, if memory serves me correctly) to more on the order of the low-three-digit MIPS, say 120 on the low end of the spectrum. If the new revolution doesn t begin-if all we do when we go from 8.5 MIPS to 120 MIPS is recalculate our spreadsheets faster-we ll have only ourselves to blame. Regardless, something that should be fairly apparent in all of this is that the chips are approaching their limits. I m not referring to limits of the state of the art; those change daily. 
Rather, I'm referring to the pace at which the capabilities of our silicon foundries are approaching the limits of the laws of physics. We can only go so far with smaller/faster/cheaper until Einstein, Bohr, and others step in to muddy the waters. Mechanical engineering gives way to quantum mechanics, as it were. This may sound pretty dismal, but there is an obvious solution to the problem: start building machines with more than one processor. If one blazing processor is good, wouldn't more be better? The answer, of course, is yes, depending upon the nature of the problem being solved, and that opens a can of worms all its own. If you have a parallel architecture, the way that you must go about creating software for it changes accordingly, and most of us are not accustomed to programming for parallelism. God knows I'm not. And it's not as though there were a consistent mental model to apply to the notion of parallel programming, either. For one thing, most of the effort that I'm familiar with regarding parallel programming has been done in order to adapt an already largely outmoded programming methodology, structured programming, the holy grail of the 70s, to parallelism. Research to integrate what I'll call the holy grail of the 80s, object-oriented programming, which seems better suited to programming in the large than structured programming alone, with parallel programming still seems to be largely open, with few real-world implementations available. Parallelism has its own language, of course, with terms like SIMD, MIMD, Shared Memory, coarse- and fine-grained, massively parallel, data flow, and the like, and has systems with names like Transputer, Sequent, Butterfly, Hypercube, Cray, and Connection Machine. In terms of generally-applicable software to allow the implementation of parallel systems, the ones that I've heard the most about are Linda (as a model and environment) and Occam (as a language).
Unfortunately for me, I could never wrap my brain around either of them, although I'd like to give Linda another shot some day. In the meantime, there is another model that I think deserves a lot of attention, because it provides you explicit control over the costs of both computation (work done by the processors) and communication (how much wire and how many intermediate stops must a signal take to get from one processor to another). It's architecture independent: it doesn't care whether it's on a MIMD, SIMD, Shared Memory, etc. machine. Most importantly to me, it's simple. The whole model is comprised of one, count 'em, one data structure and three, count 'em, three operators. The model is also base-language independent: there are at least two implementations in Lisp, and an experimental implementation in C. The model is called the Paralation Model, and was developed by Gary Sabot, a senior research scientist at Thinking Machines, Inc., creators of the Connection Machine, a massively-parallel computer capable of supporting up to 65,536 processors. It's described in the book entitled The Paralation Model, published by The MIT Press, ISBN 0-262-19277-2.

The principal data structure of the Paralation Model is the field. A field is simply a collection of data that can be referred to by an ordinal index. Related fields comprise a paralation, which is short for "parallel relation." A paralation always includes an index field. A paralation consists of sites, which are similar to the elements of an array; the nth site of a paralation contains the nth field elements of the fields of the paralation. The index field at that site has the value n. When you create a paralation, you create its index field. Additional fields can then be added to the paralation, and each of them contains a pointer to the index field.
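The field/paralation structure just described can be mimicked serially. The following is a hypothetical Python sketch, not Sabot's implementation and not Paralation Lisp; it only illustrates "a paralation is an index field plus any number of equal-length fields attached to it":

```python
# Hypothetical serial sketch (not from the article): a paralation is an
# index field plus any number of equal-length fields that belong to it.
class Paralation:
    def __init__(self, n):
        self.index = list(range(n))   # the index field, created with the paralation
        self.fields = [self.index]    # every added field points back here

    def add_field(self, values):
        assert len(values) == len(self.index)   # all fields share one length
        self.fields.append(values)
        return values

p = Paralation(3)
f = p.add_field(["a", "b", "c"])
# Site 1 of the paralation holds index 1 and "b": intra-site data is "near".
print(p.index[1], f[1])
```

Here the "nearness" guarantee is only a comment; on real parallel hardware, elements sharing a site would live on (or near) the same processor.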
Elements of fields in a given paralation that have a particular index are guaranteed to be near each other; that is, the 173rd elements of two fields in the same paralation are near, the 2nd elements are near, etc. This means that intra-site communication is guaranteed to be cheap. Inter-site communication, on the other hand, can be arbitrarily cheap or costly. By default, paralations are shapeless; there's no information regarding the nearness of sites in a paralation to the sites in any other paralation. You can, however, specify the shape of paralations in order to ensure the least costly inter-site communication possible, which may have a dramatic effect on system performance. When you create a paralation, you automagically create its index field, and in fact, that is what is returned.

A word about the examples is in order here: first, I'm a Lisp hacker, so the examples will all be in Paralation Lisp. A secondary reason for this is that, as of the writing of the edition of The Paralation Model that I own a copy of, the Paralation C compiler had not been completed, and all of the examples in the book are in Paralation Lisp. Perhaps the most important motivation of all, however, is that the book includes a complete Tiny Paralation Lisp simulator that will work in any [Steele84] Common Lisp implementation. In case you're wondering how the simulator can be complete and tiny at the same time, the answer is that it completely implements the model (remember, it's only one data structure and three operators), but it doesn't go miles out of its way to do type checking, error handling, or generate efficient code on a serial machine. Neither does it support shaped paralations, which isn't really a loss on a serial machine. There is a much better simulator for Common Lisp that you can order from The MIT Press. An order form is included with the book.
Common Lisp aficionados will realize that the #F notation used to represent fields in Paralation Lisp resembles the #A syntax that Common Lisp uses to denote an array. Also like an array, a paralation is made to be a certain length, and all fields in the paralation are of the same length. As the Common Lisp programmer would expect upon seeing the #F representation, fields can be given to the reader as literals, but each field read causes a new paralation to be constructed, so two identical-looking field literals are not equal, because the two objects are not the same object; they are two paralations that happen to consist of fields of the same length containing the same values. This is the only gripe that I have about Paralation Lisp: you cannot tell simply from looking at its printed representation what paralation a field belongs to.

That pretty much covers paralations and their fields, so let's move on to the Paralation Model operators. They are elementwise evaluation, move, and match. Elementwise evaluation is exactly what it sounds like: given a field, some evaluation is performed on each of the elements of the field. The result is a new field in the same paralation. The evaluations are done in parallel; correct paralation programs cannot rely on any order of initiation or completion of the evaluation among any of the elements of the field. The only synchronization guarantee that elementwise evaluation makes is that the operator will not terminate until the evaluations of all of the elements have terminated. This example obviously isn't terribly useful; its sole function is to create a new paralation with ten sites, assign the resulting index field to the global variable *foo*, and then to add a new field to the paralation whose elements are the integers one through ten rather than zero through nine, by adding one to each index field element in parallel (if you were to run this on an implementation with at least ten processors, this would happen very quickly indeed, ignoring communication costs for now).
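The semantics of elementwise evaluation (apply a body to corresponding elements, yielding a new field, with no ordering guarantees) can be approximated serially. This is an illustrative Python sketch, not the model's implementation; on parallel hardware the per-element evaluations would run concurrently:

```python
# Illustrative serial stand-in for elwise; fields are plain lists here.
def elwise(fn, *fields):
    # Correct paralation programs must not depend on evaluation order,
    # so a left-to-right comprehension is one legitimate serialization.
    return [fn(*elems) for elems in zip(*fields)]

foo = list(range(10))                 # index field of a ten-site paralation
one_to_ten = elwise(lambda i: i + 1, foo)
print(one_to_ten)                     # [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
```

Passing several fields to `elwise` mirrors the article's point that elwise can operate on more than one field at a time.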
There are a few subtleties to the semantics of elwise. One is that elwise can operate on more than one field at a time, which is why the first parameter to elwise is a list. Another is that references to the field name within the body refer to an individual element rather than to the entire field. Still another subtlety is that elwise can support temporary binding in similar fashion to Common Lisp's let. It's fairly common to temporarily create a paralation and then want to apply elwise to the resulting index field without having to store it in some variable. An example of rewriting the above code would look something like this. There aren't any a priori restrictions on what can be done within the body of an elwise, although there are limitations as to what can be done correctly in the context of elwise's parallel operation. For example, side effects in the body are perfectly fine: an elwise whose body destructively increments each index field element results in a paralation with an index field suffering from an off-by-one bug (moral: there's nothing to prevent you from hoisting yourself by your own petard). Note that the fields returned by the last two examples are not the same: the first is a new field added to the paralation; the second is the (incorrectly-formed) index field of the paralation.

One of the remaining Paralation Model operators is the move operator. You can think of it as a parallel assignment. The idea is to move values from one field to another in parallel. Of course, this implies that some way to specify which elements in the source field go into which elements in the destination field exists. This mechanism is called a mapping, and mappings are created by the third and final Paralation Model operator, match. Match is fairly straightforward: it takes two fields and returns a mapping from one to the other. Any elements not present in both fields are represented as NIL in the mapping.
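One plausible serial rendering of match and move (not the book's representation) is sketched below in Python. The `default` and `combine` parameters correspond to the two situations the model must handle: destination sites that nothing maps to, and destination sites that several source values map to:

```python
# Hypothetical serial sketch of match/move.  match records, per destination
# site, the source indices whose values equal that destination's value;
# move then transfers data, using `default` for sites nothing maps to and
# `combine` when several source values collide on one site.
def match(to_field, from_field):
    return [[j for j, src in enumerate(from_field) if src == dest]
            for dest in to_field]

def move(values, mapping, default=None, combine=None):
    result = []
    for sources in mapping:
        if not sources:
            result.append(default)              # unfilled destination site
        else:
            acc = values[sources[0]]
            for j in sources[1:]:
                acc = combine(acc, values[j])   # collision: combine values
            result.append(acc)
    return result

m = match(["a", "b", "c"], ["b", "b", "x"])
print(move([10, 20, 30], m, default=0, combine=lambda x, y: x + y))  # [0, 30, 0]
```

In the usage line, both occurrences of "b" in the source map to the single "b" destination and are summed by the combiner, while "a" and "c" receive the default.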
The other elements are handled as follows: the elements in one field are labeled with the index of their first occurrence in the field, and the elements in the other field that match are given that same label (at least this is one possible representation of a mapping, and in fact it's the one used by the Tiny Paralation Lisp system). If you're looking closely at this, remembering that the principal purpose of match is to provide a mechanism for the move operator to use to get the elements of one field to another, you'll perhaps notice that there are opportunities for a couple of things. One is that one or more elements in the to field might not get filled at all. In the figure above, this is represented by a field element with no arrowhead pointing to it, although you should keep in mind that duplicate elements in the to field also have no arrows pointing to them; they simply take on the first occurrence of the value in the field. Another situation that can occur is that there might be more than one value to be placed in the same element in the to field. This condition is represented by a field element having more than one arrowhead pointing to it. The example above suffers from both of these problems. The move operator allows you to resolve both of these problems by specifying default values for the cases where no value to be moved in exists, and a combiner function to handle the cases where there's more than one value to be moved in.

Oftentimes you might wish to take a field and combine all of its elements; e.g., you may want a sum of a field of numbers. It's trivial to write a library function, vref, in terms of the Paralation Model primitives that allows this conglomeration of elements in a field. The function may not be too easy to understand at first. It checks to see if the length of the field is positive.
If so, it uses the move operator (<- in Paralation Lisp) with a mapping of all of the elements of the field to the same place, passing in the combiner function. The result is a field of one element, so Common Lisp's elt is used with a zero parameter to extract the result. The secret is in the match form. It creates a from field consisting of zeros using elwise. These zeros will all map to the same place if the to field is simply the field #F(0), and the easiest way to get that is by making a paralation of length one.

A good example of the applicability of these data structures and operators to programming tasks that are normally done sequentially is the Sieve of Eratosthenes, the de facto standard algorithm for finding prime numbers. You can find the serial version of the algorithm in nearly any entry-level computer science text. Most of us are so accustomed to thinking of the Sieve in serial terms that we probably don't even realize that the algorithm can be expressed in a parallel fashion, and remarkably simply in the Paralation Model:

(1)
(defun find-primes (n)
  (let* ((sieve (make-paralation n))
         (candidate-p (elwise (sieve) (if (> sieve 1) t nil)))
         (prime-p (elwise (sieve) nil)))
    (do ((next-prime 2 (position t candidate-p)))
        ((null next-prime) (<- sieve :by (choose prime-p)))
      (setf (elt prime-p next-prime) t)
      (elwise (sieve candidate-p)
        (when (and candidate-p (zerop (mod sieve next-prime)))
          (setf candidate-p nil))))))

The function find-primes takes one parameter, which is the maximum number of primes to find. First it creates the Sieve by making a paralation of size n. Next it creates a field that consists of booleans indicating whether the integer at that index is potentially prime or not (at this point, "potentially prime" is defined as being an integer greater than one). Finally a field of all NILs is created, meaning that initially no values are known to be prime. A do loop is then entered.
This loop binds next-prime to the appropriate value (beginning with 2 and continuing through all elements of candidate-p that are T). The loop is defined to terminate when next-prime is null, and the result is the evaluation of (<- sieve :by (choose prime-p)). Choose is another library function that is easily defined in terms of the primitives:

(2)
(defun choose (field)
  (let ((count 0)
        (from-field (elwise (field) nil)))
    (dotimes (i (length field))
      (when (elt field i)
        (setf (elt from-field i) count)
        (incf count)))
    (match (make-paralation count) from-field)))

Choose is pretty straightforward; it binds count to zero and from-field to a field of NILs, then loops for the length of the incoming field, checking the ith element of the incoming field and, if it is non-NIL, setting the ith element of from-field to the count of non-NIL elements. It then returns a mapping of from-field to a field of a length sufficient to hold only the non-NIL elements. The value of the do loop in find-primes, then, will be the result of moving sieve (the index field) by the mapping returned by (choose prime-p). The body of the loop is concerned with seeing that prime-p is a field of NILs and Ts, with the indices of the Ts being prime numbers. It is always the case that next-prime is in fact the index of a prime number, so find-primes sets the appropriate element of prime-p to T. It then elwises across sieve and candidate-p, checking first to see if the current element is even a candidate and, if so, whether it's evenly divisible by next-prime. If it is, then the element is removed as a candidate. Then the loop goes again. The interactions among the fields, especially prime-p and candidate-p, are fairly subtle; don't feel discouraged if it's not clear even after several readings why this function works. Hint: you can think of there being two loops (do and elwise), with prime-p being modified by the do loop and candidate-p being modified by the inner elwise.
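The do/elwise interplay can also be mirrored in a serial sketch. In the illustrative Python below (not parallel code, and not from the article), the list comprehension plays the role of the elwise, and the final filter stands in for the (<- sieve :by (choose prime-p)) step:

```python
# Serial mirror (illustrative only) of find-primes: candidate_p and prime_p
# are boolean fields over the index field 0..n-1.
def find_primes(n):
    sieve = list(range(n))
    candidate_p = [i > 1 for i in sieve]       # "potentially prime" field
    prime_p = [False] * n                      # nothing known prime yet
    while True:
        try:
            next_prime = candidate_p.index(True)   # (position t candidate-p)
        except ValueError:
            break                              # next-prime is null: terminate
        prime_p[next_prime] = True
        # The elwise: knock out every candidate divisible by next_prime.
        candidate_p = [c and i % next_prime != 0
                       for i, c in zip(sieve, candidate_p)]
    # The choose/move step: keep the indices whose prime_p element is true.
    return [i for i, is_p in zip(sieve, prime_p) if is_p]

print(find_primes(5))   # [2, 3]
```

Note that, as in the Lisp, next_prime itself fails its own divisibility test and is dropped from the candidates, which is fine because it has already been recorded in prime_p.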
The point of the algorithm is that finding prime numbers basically consists of checking several numbers against one number to see if they divide evenly or not, and elwise fits the bill very nicely. The loop then climbs, so that the next parallel modulo comparison can occur, and so on. This is all subtle enough that some more pictures are called for, so let's meander through an invocation of (find-primes 5). To see the paralation initially created by the let* form, please see Figure 3 above. Keeping in mind that elwise operates in parallel, here's what happens as we evaluate our example. The do binds next-prime to 2 and evaluates (setf (elt prime-p next-prime) t), giving the state shown in Figure 4. Next comes the

(3)
(elwise (sieve candidate-p)
  (when (and candidate-p (zerop (mod sieve next-prime)))
    (setf candidate-p nil)))

form. Keeping in mind that references to a field name within an elwise actually refer to the element, we see that this parallel evaluation will do nothing for element zero, since candidate-p is NIL, so the state doesn't change; see Figure 4 above. Likewise for element one; candidate-p is also NIL. Things change, however, when we get to element two; candidate-p is T, so (zerop (mod sieve next-prime)) is evaluated. Two modulo two is, of course, zero, so the result of the zerop is T, which causes the (setf candidate-p nil) to be evaluated (see Figure 5). This may seem puzzling at first; after all, two is a trivially prime number, and should therefore seem to remain a candidate. But if you refer to Figure 4 above again, you'll see that this has already been accounted for in the prime-p field. Element three is next, and candidate-p is T, so we evaluate (zerop (mod sieve next-prime)) again. This time, three modulo two is one, not zero, so the zerop returns NIL, and the T value of candidate-p is left alone. Finally we reach element four, and candidate-p is T for it as well, so we evaluate (zerop (mod sieve next-prime)) yet again.
Unfortunately for candidate-p, four modulo two is zero, so the zerop returns T, and candidate-p goes NIL. For the N = 5 case, that's the end of the elwise. On parallel hardware, each element would presumably have been dealt with at the same time, meaning that the elements of candidate-p would get set very quickly. So finally we go through the do loop again. The binding of next-prime becomes (position t candidate-p), and if you refer to Figure 6 above, you'll see that at this point, the position of the next T in candidate-p is element three. The (setf (elt prime-p next-prime) t) gets evaluated again, which gives us the next state. The elwise form is evaluated all over again, this time with next-prime bound to three. Everything gets left alone until element three is examined, at which point candidate-p is T and (zerop (mod sieve next-prime)) is also T, so candidate-p goes NIL again. Element four is examined, but candidate-p was set NIL by the elwise that was done when next-prime was bound to two, so no change is made, and the elwise ends. The next time through the do loop, however, (position t candidate-p) is NIL, which is the terminating condition for the loop. The result form, (<- sieve :by (choose prime-p)), should be fairly self-explanatory, but in case it's not, here we go. I defined choose a few pages back. First it binds count to zero. Next, it adds a new field, from-field, to whatever paralation the parameter is in. Next is a dotimes loop for the length of the parameter. It searches for non-NIL elements and, when it finds them, it stores the value of count into that element of from-field and increments count. In our case, the result is a from-field of #F(NIL NIL 0 1 NIL). Finally, choose returns (match (make-paralation count) from-field). At this point, count is bound to 2, so the result of that is a new paralation with an index field of #F(0 1).
The result of the match, then, is the mapping pictured above. As you might expect, the <- operator applied to the sieve field with this mapping results in a field of #F(2 3), which is in fact a field containing all of the prime numbers from zero to four.

I believe that that pretty much covers what I wanted to say about the Paralation Model. It's small, it's clean, and it's powerful. I hope that you've enjoyed your glance at it and that it's given you food for thought as to the future of computing using architectures other than the traditional Von Neumann approach. The book goes into much more detail, of course, and comes highly recommended for those readers who might be curious about an elegant model for parallel computation.

A special word of thanks is due here to Gary Sabot, creator of the Paralation Model, who kindly reviewed this article and convinced me that the figures were a necessary evil.
November 5th 2009, 10:55 PM #1
Nov 2009

I have no experience with typing math onto the computer, so excuse me in advance.

Let us denote N = {1, 2, 3, ..., n}. We have a family {A_i}, 1 <= i <= m, of pairwise distinct subsets of N with the property that for each i and j such that i != j, the sum of the cubes of the values in the intersection of A_i and A_j is equal to L. Note that this L is the same for every pair i != j. Show that m <= n.

Anyone have a direction?
Surface Area Box Problem

October 19th 2009, 09:40 AM #1
Junior Member
Oct 2009

Surface Area Box Problem

(a) Express the surface area A (total of all six sides) of a cubical box in terms of the volume V of the box.
(b) Express the volume V of this box in terms of the total surface area A.

I'm confused! Help!

A = 6e^2 and V = e^3, where e is the edge. So A = 6e^2 = 6V/e, and V = e^3 = Ae/6.

(a) Express the surface area A (total of all six sides) of a cubical box in terms of the volume V of the box.

$V = a^3 \quad (a = \text{edge of cube})$

$\Rightarrow a = V^{1/3}$

$A(V) = 6a^2 = 6(V^{1/3})^2 = 6V^{2/3} \quad \Rightarrow {\color{blue}\boxed{A(V) = 6V^{2/3}}}$

(b) Express the volume V of this box in terms of the total surface area A.

$A = 6a^2 \quad \Rightarrow \ a = \left(\frac{A}{6}\right)^{1/2}$

$V(A) = a^3 = \left\{\left(\frac{A}{6}\right)^{1/2}\right\}^3 = \left(\frac{A}{6}\right)^{3/2} \quad \Rightarrow {\color{blue}\boxed{V(A) = \left(\frac{A}{6}\right)^{3/2}}}$
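The two boxed formulas are easy to sanity-check numerically. A quick Python check using a cube of edge 2 (so V = 8 and A = 24):

```python
# Numeric check of A(V) = 6 V^(2/3) and V(A) = (A/6)^(3/2).
def area_from_volume(V):
    return 6 * V ** (2 / 3)

def volume_from_area(A):
    return (A / 6) ** (3 / 2)

# Edge e = 2 gives V = 2**3 = 8 and A = 6 * 2**2 = 24.
print(area_from_volume(8))    # approximately 24
print(volume_from_area(24))   # approximately 8
```

The results are only approximate in floating point, but they round-trip to the expected edge-2 values.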
Definition of superset A set $A$ is a superset of another set $B$ if all elements of the set $B$ are elements of the set $A$. The superset relationship is denoted as $A \supset B$. For example, if $A$ is the set $\{ \diamondsuit, \heartsuit, \clubsuit, \spadesuit \}$ and $B$ is the set $\{ \diamondsuit, \clubsuit, \spadesuit \}$, then $A \supset B$ but $B \not\supset A$. Since $A$ contains elements not in $B$, we can say that $A$ is a proper superset of $B$. Or if $I_1$ is the interval $[0,2]$ and $I_2$ is the interval $[0,1]$, then $I_1 \supset I_2$.
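The relation maps directly onto built-in set operations in most languages. For instance, the card-suit example above can be checked in Python (suit names spelled out in place of the symbols):

```python
# The card-suit example from the definition, using Python's built-in sets.
A = {"diamond", "heart", "club", "spade"}
B = {"diamond", "club", "spade"}

print(A.issuperset(B))   # True:  every element of B is in A
print(B.issuperset(A))   # False: "heart" is in A but not in B
print(A > B)             # True:  A is a *proper* superset of B
```

Note that `>=` tests the (possibly improper) superset relation, while `>` requires A to contain at least one element not in B.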
Algebra II: Algebraic statement sometimes, always or never true

This is a free lesson from our course in Algebra II.

In this lesson you'll be introduced to the concepts related to logical arguments and making conclusions. Further, you'll explore some worked examples with solutions, and the instructor's explanation, with the help of audio and video presentation in his own handwriting, of how to verify a statement using both the "principle of contradiction" and the "principle of identity." Reasoning is the set of processes that enables you to go beyond the information given; conclusions are based on analysis of the given information. It also helps you analyze and determine whether an algebraic statement involving rational/radical expressions is sometimes, always, or never true.

E.g., you cannot add any of the given numbers from 3, 6, 9, 12, 15, 18, 21 to get 52, as 52 is not divisible by 3.

Note: When we arrive at a conclusion using facts, definitions, rules, or properties, it is called deductive reasoning, and a conclusion reached based on deductive reasoning is always true. For example, "(x × y) × z is the same as (z × y) × x" is always true. Remember the following steps using deductive reasoning here:

Step 1: (x × y) × z = z × (x × y), using the commutative property of multiplication.
Step 2: = z × (y × x), using the commutative property of multiplication again.
Step 3: = (z × y) × x, using the associative property of multiplication.
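The divisibility argument in the example can be confirmed by brute force. This short Python sketch (not part of the lesson) enumerates every sum of a nonempty subset of the given numbers:

```python
# Every listed number is a multiple of 3, so every subset sum is too;
# 52 is not a multiple of 3 and therefore can never be reached.
from itertools import combinations

nums = [3, 6, 9, 12, 15, 18, 21]
subset_sums = {sum(c) for r in range(1, len(nums) + 1)
                      for c in combinations(nums, r)}
print(52 in subset_sums)                       # False
print(all(s % 3 == 0 for s in subset_sums))    # True
```

By contrast, a multiple of 3 such as 51 (e.g. 3 + 6 + 9 + 12 + 21) is reachable, which is exactly what the divisibility rule predicts.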
Department of Public Health Sciences

BMTRY 714: Linear Models in Biology and Medicine

The matrix representation of the general linear statistical model is studied through the implication, distribution, and partitioning of quadratic forms and their probability distributions. Estimation of parameters in the linear model by methods of maximum likelihood and least squares will be presented, along with the accuracy and precision of these estimators. Estimability in both the full-rank and less-than-full-rank models is introduced. The test statistic for the general linear hypothesis is derived, and its distribution is determined under an assumption of normally distributed errors for both the null and a general alternative hypothesis. Sufficient examples are given to show its application to tests on means as well as in ANOVA and ANOCOVA. Students prepared in basic statistical methods and theory, and matrix algebra, are eligible to take this course.

• Elementary Matrix Concepts
• Kronecker Products
• Random Vectors
• Multivariate Normal Distribution Function
• Conditional Distributions of Multivariate Normal Random Vectors
• Distributions of Certain Quadratic Forms
• Quadratic Forms of Normal Random Vectors
• Independence
• The t and F Distributions
• Bhat's Lemma
• Problems and Review
• Models That Admit Restrictions (Finite Models)
• Models That Do Not Admit Restrictions (Infinite Models)
• Sums of Squares and Covariance Matrix Algorithms
• Expected Mean Squares
• Algorithm Applications
• Ordinary Least Squares Estimation
• Odds and Even-odds
• Best Linear Unbiased Estimators
• ANOVA Table for the Ordinary Least-Squares Regression Function
• Weighted Least-Squares Regression
• Lack of Fit Tests
• Partitioning the Sum of Squares Regression
• The Model Y = Xβ + E in Complete Balanced Factorials
• Maximum Likelihood Estimators
• Invariance Property, Sufficiency, and Completeness
• ANOVA Methods for Finding Maximum Likelihood Estimators
• The Likelihood Ratio Test for Hβ = h
• Confidence Bands on Linear Combinations of β
• Replication Matrices
• Pattern Matrices and Missing Data
• Using Replication and Pattern Matrices Together
• General Balanced Incomplete Block Design
• Analysis of the General Case
• Matrix Derivations of Kempthorne's Interblock and Intrablock Treatment Difference Estimators
• Model Assumptions and Examples
• The Mean Model Solution
• The Mean Model Analysis When Cov(E) = σ²I
• Mean Model Analysis When Cov(E) = σ²V

Prerequisites: BMTRY 700, 701, 706, 707. Typically offered every other fall semester.
January 13th 2006, 08:41 AM #1
Nov 2005

Hey, I have a whole sheet of trigonometry questions to do and I've drawn just 4 of them in Paint. Please take me through them as I haven't a clue. Many thanks!

In a right triangle, the sine of one of the (non-right) angles is equal to the opposite side divided by the hypotenuse, while the cosine of that angle is the adjacent side divided by the hypotenuse. For problem 1, this gives that cos(27°) = x/5.89, which is an equation you can solve for x. The other problems are similar!

So my answer for 1 would be x = 5.2480 (4dp). Am I right? Can you give me the equation used, as you did for cos(27°) = x/5.89, for the others please?

So my answer for 1 would be x = 5.2480 (4dp). Am I right?

That's correct.

Can you give me the equation used, as you did for cos(27°) = x/5.89, for the others please?

Well I could, but then you wouldn't be learning anything from it. Try to set up such an equation yourself. I'll list the possible formulas you can use:

$\begin{gathered}<br /> \sin \alpha = \frac{{{\text{opposite}}}}<br /> {{{\text{hypotenuse}}}} \hfill \\<br /> \cos \alpha = \frac{{{\text{adjacent}}}}<br /> {{{\text{hypotenuse}}}} \hfill \\<br /> \tan \alpha = \frac{{{\text{opposite}}}}<br /> {{{\text{adjacent}}}} \hfill \\ <br /> \end{gathered}$

Now: first identify which of the three is given (the opposite, adjacent or hypotenuse) and which is asked. Then choose the formula which has both these sides and try to set up the equation.

OK then, can you explain why I used COS for question one? I really don't get how you determine which equation you use.

Of course, as I said: you have to determine which sides are involved in the problem. Do you understand the terms adjacent, opposite and hypotenuse? Here's a clear image, be careful because it's important which angle is marked.
Now, for the marked angle at the right, the bottom side is 'adjacent' (it touches the angle but isn't the longest side), the left side is 'opposite' (it doesn't touch the angle) and the longest side is called the hypotenuse. In the first problem, the marked angle is at the top left. The side we're looking for is the top side, which is the adjacent side for this angle. The hypotenuse is given, so we're looking for the formula which involves the hypotenuse and the adjacent side, that's the one with the cosine. Do you understand? OK, I kinda understand. For 2 (and 3), I need to find the hypotenuse. So I'm looking for one of those 3 equations to contain the adjacent and hypotenuse. Therefore I'd use COS? For 2, I'd type in COS 55 in the calculator then times the answer by 45? For 3, I'd type COS 71 in the calculator then times by 163. Are these correct? Take them one at a time and look carefully. For two, the marked angle is at the top left. The hypotenuse is indeed the side we're looking for, but which one is given (from the point of view of the angle): the adjacent or opposite side? Indeed! The side which isn't linked to the angle, that is the opposite side. So what formula will you be using? You need one that involves the hypotenuse and the opposite side. So how do I approach this in a calculator? Indeed sin. Now write out the formula I gave for the sine, replacing all expressions you know with their values and call the unknown side x. Then solve for x, the expression you now have can be entered in a calculator. SIN 55 =0.8191 0.8191 x 45 = 36.8618 You should first try to set up the equation correctly, starting from the formula. Write down the formula of the sine and fill in everything you know, i.e. replace "opposite", "hypotenuse" and "alpha" with their values (or x, if it's the unknown).
$\sin \alpha = \frac{\text{opposite}}{\text{hypotenuse}} \Rightarrow \sin 55^\circ = \frac{45}{x}$ You then solve for x $\sin 55^\circ = \frac{45}{x} \Leftrightarrow x\sin 55^\circ = 45 \Leftrightarrow x = \frac{45}{\sin 55^\circ} \approx 54.93$ Last edited by TD!; January 13th 2006 at 10:16 AM.
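For anyone who wants to double-check the two answers worked out in this thread with a few lines of code, here is an illustrative sketch (not part of the original thread):

```python
import math

# Problem 1: cos 27 = x / 5.89, so x = 5.89 * cos 27 (adjacent side)
x1 = 5.89 * math.cos(math.radians(27))   # approximately 5.2480

# Problem 2: sin 55 = 45 / x, so x = 45 / sin 55 (hypotenuse)
x2 = 45 / math.sin(math.radians(55))     # approximately 54.93

print(x1, x2)
```

Remember to work in degrees (or convert with `math.radians`), since most calculators in these threads are set to degree mode.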
Copyright © University of Cambridge. All rights reserved. 'Trebling' printed from http://nrich.maths.org/ Why do this problem? This problem requires learners to think about place value and the way that standard column multiplication works. Although the problem can be done by trial and improvement, it is solved more efficiently if worked through systematically. Possible approach You could start by showing the first sum to the whole group and discussing what is required to do it. It would be good for learners to work in pairs, perhaps using this sheet. (The first page contains the first sum with several lighter versions for children to write on. The second page is the second sum done in the same way.) Alternatively, pairs could use digit cards to move around on a sheet of paper, or mini-whiteboard. Give the children time to make a start and then after a suitable length of time, bring the group back together to talk about how they are getting on so far. This is a good opportunity to share some initial insights. For example, some pairs may have worked out which digit 'e' must be by looking at the units digit of the answer. Some may have started in a different way, for example by trying digits at random to see what happens when the multiplication takes place. Draw attention to those pairs that have adopted a system in their working and highlight that this means the solution will be found more efficiently. You may need to clarify that all the letters appear in the answer too, in other words that 'e' stands for the same number wherever it appears (the beginnings of algebraic notation). You could then leave learners to continue with the problem. A discussion of all the steps in the reasoning could take place in the plenary. Key questions How does the $1$ (or $2$) in the units column of the answer help? What are the units digits of the multiples of $3$?
Possible extension All the Digits makes a good extension activity as the mathematics is similar, but slightly more sophisticated reasoning is required. Possible support It might help some children to write out their three times table to begin with, or to be able to refer to a multiplication square as they tackle this problem.
FOM: Definition of Lawvere theory: additional condition needed Martin Davis martind at cs.berkeley.edu Sun Mar 22 14:18:46 EST 1998 At 09:39 AM 3/21/98 -0800, Vaughan Pratt wrote: >The difference between graph theory and category theory is that category >theory incorporates the congruence on paths as part of its structure, >permitting every instance of "congruent to" to be streamlined to >"equal to". The graph theory formulation of category theory is like a >bicycle with training wheels: great for getting started but eventually >you want to take off the training wheels so you can go faster. Dear Vaughan, Surely this is not the only difference between such utterly different subjects. It has been noted that the \in relation in a model of set theory can be thought of as defining a graph. But a single sentence of the form: "The difference between graph theory and set theory is ..." would just be silly. More information about the FOM mailing list
* Copyright (c) 1999 Apple Computer, Inc. All rights reserved. * @APPLE_LICENSE_HEADER_START@ * Copyright (c) 1999-2003 Apple Computer, Inc. All Rights Reserved. * This file contains Original Code and/or Modifications of Original Code * as defined in and that are subject to the Apple Public Source License * Version 2.0 (the 'License'). You may not use this file except in * compliance with the License. Please obtain a copy of the License at * http://www.opensource.apple.com/apsl/ and read it before using this * file. * The Original Code and all software distributed under the License are * distributed on an 'AS IS' basis, WITHOUT WARRANTY OF ANY KIND, EITHER * Please see the License for the specific language governing rights and * limitations under the License. * @APPLE_LICENSE_HEADER_END@ * Copyright (c) 1983, 1993 * The Regents of the University of California. All rights reserved. * Redistribution and use in source and binary forms, with or without * modification, are permitted provided that the following conditions * are met: * 1. Redistributions of source code must retain the above copyright * notice, this list of conditions and the following disclaimer. * 2. Redistributions in binary form must reproduce the above copyright * notice, this list of conditions and the following disclaimer in the * documentation and/or other materials provided with the distribution. * 3. All advertising materials mentioning features or use of this software * must display the following acknowledgement: * This product includes software developed by the University of * California, Berkeley and its contributors. * 4. Neither the name of the University nor the names of its contributors * may be used to endorse or promote products derived from this software * without specific prior written permission. * THIS SOFTWARE IS PROVIDED BY THE REGENTS AND CONTRIBUTORS ``AS IS'' AND * ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE * ARE DISCLAIMED. 
IN NO EVENT SHALL THE REGENTS OR CONTRIBUTORS BE LIABLE * FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL * DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS * OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY * OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF * SUCH DAMAGE. #include <stdio.h> #include <stdlib.h> * random.c: * An improved random number generation package. In addition to the standard * rand()/srand() like interface, this package also has a special state info * interface. The initstate() routine is called with a seed, an array of * bytes, and a count of how many bytes are being passed in; this array is * then initialized to contain information for random number generation with * that much state information. Good sizes for the amount of state * information are 32, 64, 128, and 256 bytes. The state can be switched by * calling the setstate() routine with the same array as was initiallized * with initstate(). By default, the package runs with 128 bytes of state * information and generates far better random numbers than a linear * congruential generator. If the amount of state information is less than * 32 bytes, a simple linear congruential R.N.G. is used. * Internally, the state information is treated as an array of longs; the * zeroeth element of the array is the type of R.N.G. being used (small * integer); the remainder of the array is the state information for the * R.N.G. Thus, 32 bytes of state information will give 7 longs worth of * state information, which will allow a degree seven polynomial. (Note: * the zeroeth word of state information also has some other information * stored in it -- see setstate() for details). 
* The random number generation technique is a linear feedback shift register * approach, employing trinomials (since there are fewer terms to sum up that * way). In this approach, the least significant bit of all the numbers in * the state table will act as a linear feedback shift register, and will * have period 2^deg - 1 (where deg is the degree of the polynomial being * used, assuming that the polynomial is irreducible and primitive). The * higher order bits will have longer periods, since their values are also * influenced by pseudo-random carries out of the lower bits. The total * period of the generator is approximately deg*(2**deg - 1); thus doubling * the amount of state information has a vast influence on the period of the * generator. Note: the deg*(2**deg - 1) is an approximation only good for * large deg, when the period of the shift register is the dominant factor. * With deg equal to seven, the period is actually much longer than the * 7*(2**7 - 1) predicted by this formula. * For each of the currently supported random number generators, we have a * break value on the amount of state information (you need at least this * many bytes of state info to support this random number generator), a degree * for the polynomial (actually a trinomial) that the R.N.G. is based on, and * the separation between the two lower order coefficients of the trinomial. #define TYPE_0 0 /* linear congruential */ #define BREAK_0 8 #define DEG_0 0 #define SEP_0 0 #define TYPE_1 1 /* x**7 + x**3 + 1 */ #define BREAK_1 32 #define DEG_1 7 #define SEP_1 3 #define TYPE_2 2 /* x**15 + x + 1 */ #define BREAK_2 64 #define DEG_2 15 #define SEP_2 1 #define TYPE_3 3 /* x**31 + x**3 + 1 */ #define BREAK_3 128 #define DEG_3 31 #define SEP_3 3 #define TYPE_4 4 /* x**63 + x + 1 */ #define BREAK_4 256 #define DEG_4 63 #define SEP_4 1 * Array versions of the above information to make code run faster -- * relies on fact that TYPE_i == i. 
#define MAX_TYPES 5 /* max number of types above */ static long degrees[MAX_TYPES] = { DEG_0, DEG_1, DEG_2, DEG_3, DEG_4 }; static long seps [MAX_TYPES] = { SEP_0, SEP_1, SEP_2, SEP_3, SEP_4 }; * Initially, everything is set up as if from: * initstate(1, &randtbl, 128); * Note that this initialization takes advantage of the fact that srandom() * advances the front and rear pointers 10*rand_deg times, and hence the * rear pointer which starts at 0 will also end up at zero; thus the zeroeth * element of the state information, which contains info about the current * position of the rear pointer is just * MAX_TYPES * (rptr - state) + TYPE_3 == TYPE_3. static long randtbl[DEG_3 + 1] = { 0x9a319039, 0x32d9c024, 0x9b663182, 0x5da1f342, 0xde3b81e0, 0xdf0a6fb5, 0xf103bc02, 0x48f340fb, 0x7449e56b, 0xbeb1dbb0, 0xab5c5918, 0x946554fd, 0x8c2e680f, 0xeb3d799f, 0xb11ee0b7, 0x2d436b86, 0xda672e2a, 0x1588ca88, 0xe369735d, 0x904f35f7, 0xd7158fd6, 0x6fa6f051, 0x616e6b96, 0xac94efdc, 0x36413f93, 0xc622c298, 0xf5a42ab8, 0x8a88d77b, 0xf5ad9d0e, 0x8999220b, * fptr and rptr are two pointers into the state info, a front and a rear * pointer. These two pointers are always rand_sep places aparts, as they * cycle cyclically through the state information. (Yes, this does mean we * could get away with just one pointer, but the code for random() is more * efficient this way). The pointers are left positioned as they would be * from the call * initstate(1, randtbl, 128); * (The position of the rear pointer, rptr, is really 0 (as explained above * in the initialization of randtbl) because the state table pointer is set * to point to randtbl[1] (as explained below). static long *fptr = &randtbl[SEP_3 + 1]; static long *rptr = &randtbl[1]; * The following things are the pointer to the state information table, the * type of the current generator, the degree of the current polynomial being * used, and the separation between the two pointers. 
 * Note that for efficiency of random(), we remember the first location of
 * the state information, not the zeroeth.  Hence it is valid to access
 * state[-1], which is used to store the type of the R.N.G.  Also, we
 * remember the last location, since this is more efficient than indexing
 * every time to find the address of the last element to see if the front
 * and rear pointers have wrapped.
 */
static long *state = &randtbl[1];

static long rand_type = TYPE_3;
static long rand_deg = DEG_3;
static long rand_sep = SEP_3;

static long *end_ptr = &randtbl[DEG_3 + 1];

/*
 * srandom:
 *
 * Initialize the random number generator based on the given seed.  If the
 * type is the trivial no-state-information type, just remember the seed.
 * Otherwise, initializes state[] based on the given "seed" via a linear
 * congruential generator.  Then, the pointers are set to known locations
 * that are exactly rand_sep places apart.  Lastly, it cycles the state
 * information a given number of times to get rid of any initial dependencies
 * introduced by the L.C.R.N.G.  Note that the initialization of randtbl[]
 * for default usage relies on values produced by this routine.
 */
void
srandom(x)
	unsigned long x;
{
	register long i;

	if (rand_type == TYPE_0)
		state[0] = x;
	else {
		state[0] = x;
		for (i = 1; i < rand_deg; i++)
			state[i] = 1103515245 * state[i - 1] + 12345;
		fptr = &state[rand_sep];
		rptr = &state[0];
		for (i = 0; i < 10 * rand_deg; i++)
			(void)random();
	}
}

/*
 * initstate:
 *
 * Initialize the state information in the given array of n bytes for future
 * random number generation.  Based on the number of bytes we are given, and
 * the break values for the different R.N.G.'s, we choose the best (largest)
 * one we can and set things up for it.  srandom() is then called to
 * initialize the state information.
 *
 * Note that on return from srandom(), we set state[-1] to be the type
 * multiplexed with the current value of the rear pointer; this is so
 * successive calls to initstate() won't lose this information and will be
 * able to restart with setstate().
 * Note: the first thing we do is save the current state, if any, just like
 * setstate() so that it doesn't matter when initstate is called.
 *
 * Returns a pointer to the old state.
 *
 * Note: The Sparc platform requires that arg_state begin on a long
 * word boundary; otherwise a bus error will occur. Even so, lint will
 * complain about mis-alignment, but you should disregard these messages.
 */
char *
initstate(seed, arg_state, n)
	unsigned long seed;		/* seed for R.N.G. */
	char *arg_state;		/* pointer to state array */
	long n;				/* # bytes of state info */
{
	register char *ostate = (char *)(&state[-1]);
	register long *long_arg_state = (long *) arg_state;

	if (rand_type == TYPE_0)
		state[-1] = rand_type;
	else
		state[-1] = MAX_TYPES * (rptr - state) + rand_type;
	if (n < BREAK_0) {
		(void)fprintf(stderr,
		    "random: not enough state (%ld bytes); ignored.\n", n);
		return(0);
	}
	if (n < BREAK_1) {
		rand_type = TYPE_0;
		rand_deg = DEG_0;
		rand_sep = SEP_0;
	} else if (n < BREAK_2) {
		rand_type = TYPE_1;
		rand_deg = DEG_1;
		rand_sep = SEP_1;
	} else if (n < BREAK_3) {
		rand_type = TYPE_2;
		rand_deg = DEG_2;
		rand_sep = SEP_2;
	} else if (n < BREAK_4) {
		rand_type = TYPE_3;
		rand_deg = DEG_3;
		rand_sep = SEP_3;
	} else {
		rand_type = TYPE_4;
		rand_deg = DEG_4;
		rand_sep = SEP_4;
	}
	state = (long *) (long_arg_state + 1);	/* first location */
	end_ptr = &state[rand_deg];	/* must set end_ptr before srandom */
	srandom(seed);
	if (rand_type == TYPE_0)
		long_arg_state[0] = rand_type;
	else
		long_arg_state[0] = MAX_TYPES * (rptr - state) + rand_type;
	return(ostate);
}

/*
 * setstate:
 *
 * Restore the state from the given state array.
 *
 * Note: it is important that we also remember the locations of the pointers
 * in the current state information, and restore the locations of the pointers
 * from the old state information.  This is done by multiplexing the pointer
 * location into the zeroeth word of the state information.
 *
 * Note that due to the order in which things are done, it is OK to call
 * setstate() with the same state as the current state.
 *
 * Returns a pointer to the old state information.
 * Note: The Sparc platform requires that arg_state begin on a long
 * word boundary; otherwise a bus error will occur. Even so, lint will
 * complain about mis-alignment, but you should disregard these messages.
 */
char *
setstate(arg_state)
	char *arg_state;		/* pointer to state array */
{
	register long *new_state = (long *) arg_state;
	register long type = new_state[0] % MAX_TYPES;
	register long rear = new_state[0] / MAX_TYPES;
	char *ostate = (char *)(&state[-1]);

	if (rand_type == TYPE_0)
		state[-1] = rand_type;
	else
		state[-1] = MAX_TYPES * (rptr - state) + rand_type;
	switch(type) {
	case TYPE_0:
	case TYPE_1:
	case TYPE_2:
	case TYPE_3:
	case TYPE_4:
		rand_type = type;
		rand_deg = degrees[type];
		rand_sep = seps[type];
		break;
	default:
		(void)fprintf(stderr,
		    "random: state info corrupted; not changed.\n");
	}
	state = (long *) (new_state + 1);
	if (rand_type != TYPE_0) {
		rptr = &state[rear];
		fptr = &state[(rear + rand_sep) % rand_deg];
	}
	end_ptr = &state[rand_deg];	/* set end_ptr too */
	return(ostate);
}

/*
 * random:
 *
 * If we are using the trivial TYPE_0 R.N.G., just do the old linear
 * congruential bit.  Otherwise, we do our fancy trinomial stuff, which is
 * the same in all the other cases due to all the global variables that have
 * been set up.  The basic operation is to add the number at the rear pointer
 * into the one at the front pointer.  Then both pointers are advanced to
 * the next location cyclically in the table.  The value returned is the sum
 * generated, reduced to 31 bits by throwing away the "least random" low bit.
 *
 * Note: the code takes advantage of the fact that both the front and
 * rear pointers can't wrap on the same call by not testing the rear
 * pointer if the front one has wrapped.
 *
 * Returns a 31-bit random number.
 */
long
random()
{
	register long i;
	register long *f, *r;

	if (rand_type == TYPE_0) {
		i = state[0];
		state[0] = i = (i * 1103515245 + 12345) & 0x7fffffff;
	} else {
		/*
		 * Use local variables rather than static variables for speed.
		 */
		f = fptr; r = rptr;
		*f += *r;
		i = (*f >> 1) & 0x7fffffff;	/* chucking least random bit */
		if (++f >= end_ptr) {
			f = state;
			++r;
		}
		else if (++r >= end_ptr)
			r = state;

		fptr = f; rptr = r;
	}
	return(i);
}
Complex Analysis - Cauchy's Inequality
March 29th 2010, 11:47 AM
Complex Analysis - Cauchy's Inequality
Hello everyone, I was wondering if anyone could give me a hand with this question. Let f be an entire function satisfying $|f(z)| \le K|z^2|$ for $|z| \ge 1$, where K is a positive constant. Show, using Cauchy's inequality, that $f^{(n)}(0) = 0$ for $n > 2$ and deduce that f is a quadratic polynomial. Ok f is entire so is defined and analytic on all of the complex plane. So it is analytic on D(0,R), some big disc. Then by Taylor's theorem, it has a power series expansion $f(z) = a_0 + a_1 z + a_2 z^2 + \ldots$ for all z in C, and we can use Cauchy's integral formula for derivatives to get $|f^{(n)}(0)| = \left| \frac{n!}{2 \pi i} \int_{|z|=R} \frac{f(z)}{z^{n+1}} dz \right| \le \frac{2 \pi R n!}{2 \pi} \text{sup}_{|z| = R}\left| \frac{f(z)}{z^{n+1}} \right|$ by the estimation lemma. I understand it all up until this point. However, now the solution I have says that $\frac{2 \pi R n!}{2 \pi} \text{sup}_{|z| = R} \left|\frac{f(z)}{z^{n+1}} \right| \le Rn!\frac{KR^2}{R^{n+1}}$ which I think tells me that (*) $\text{sup}_{|z| = R} \left|\frac{f(z)}{z^{n+1}} \right| \le \frac{KR^2}{R^{n+1}}$ but I have no idea where this comes from! Does anyone have an idea how to help? For reference Cauchy's inequality in my notes says if f analytic on an open disc $D(z_0,R)$ and f is bounded on the circle $|z-z_0 | = r < R$ with $|f(z)| \le M$ for some M on $|z-z_0| = r$, then for $n \in \mathbb{N}$, $|f^{(n)}(z_0)| \le \frac{Mn!}{r^n}.$ If anyone could help me see how to use this to get (*) it would be much appreciated
March 29th 2010, 01:02 PM
Let f be an entire function satisfying $|f(z)| \le K|z^2|$ for $|z| \ge 1$, where K is a positive constant. Show using Cauchy's inequality, that $f^{(n)}(0) = 0$ for $n > 2$ and deduce that f is a quadratic polynomial.
$|f^{(n)}(0)| = \left| \frac{n!}{2 \pi i} \int_{|z|=R} \frac{f(z)}{z^{n+1}} dz \right| \le \frac{2 \pi R n!}{2 \pi} \text{sup}_{|z| = R}\left| \frac{f(z)}{z^{n+1}} \right|$ by the estimation lemma. I understand it all up until this point. However, now the solution I have says that $\frac{2 \pi R n!}{2 \pi} \text{sup}_{|z| = R} \left|\frac{f(z)}{z^{n+1}} \right| \le Rn!\frac{KR^2}{R^{n+1}}$ which I think tells me that (*) $\text{sup}_{|z| = R} \left|\frac{f(z)}{z^{n+1}} \right| \le \frac{KR^2}{R^{n+1}}$ but I have no idea where this comes from! Does anyone have an idea how to help? So you know that $|f^{(n)}(0)| \leqslant Rn!{\textstyle\sup_{|z| = R}}\Bigl| \frac{f(z)}{z^{n+1}} \Bigr|$. You also know that, on the circle |z|=R, $|f(z)|\leqslant K|z|^2 = KR^2$ and (obviously) $|z^{n+1}| = R^{n+1}$. Therefore $|f^{(n)}(0)| \leqslant \frac{Rn!KR^2}{R^{n+1}} = \frac{n!K}{R^{n-2}}$. If n > 2 then the right side tends to 0 as $R\to\infty$. Therefore $f^{(n)}(0) = 0$ for all $n > 2$.
{"url":"http://mathhelpforum.com/differential-geometry/136330-complex-analysis-cauchys-inequality-print.html","timestamp":"2014-04-16T04:13:58Z","content_type":null,"content_length":"13046","record_id":"<urn:uuid:64a29df4-b974-4b3e-80af-169928134357>","cc-path":"CC-MAIN-2014-15/segments/1397609521512.15/warc/CC-MAIN-20140416005201-00160-ip-10-147-4-33.ec2.internal.warc.gz"}
Trig Formulas Trig factor formulas Trig factor formula is used to convert the product of 2 trigonometric functions into a sum or difference of 2 trigonometric functions or vice-versa. Sum or Difference of trig functions (involves cosine and sine) Remember these trig formulas split a complex trigonometry function into a difference/addition of two trigonometry functions. The trig formulas are derived from the additional formulas. Combining Trigonometric functions (involves cosine and sine) We can derive the following trig factor formulas that combine basic trigonometry functions into one. It is a good idea if you are able to memorize the trig formulas. By placing P=A+B and Q=A-B in 1) to 4) above equations, we obtain Let us apply the trig formulas to the following problems. a)Show that We combined sin ? with sin 3 ? and sin 5 ? with sin7 with the trig formulas. Left Hand Side = Take out cos ?. Notice that there are two basic trigonometry functions left in the brackets .We combine them together as well. Look at the RHS (Right Hand Side) of the equation. It is in terms of ? and not 4 ?. So we use double angle formula to reduce sin 4 ? into 2sin 2 ? cos ?. Notice that there are cos cos ?. So we use the double angle formula again to reduce sin 2 Show that Upon reading the question, we notice that there is only one trigonometry function in the numerator and denominator on the RHS( Right Hand Side) of the equation. LHS = We have to think on how we are going to get a tangent function on the RHS. “Could it be b) Show that LHS = We place it in the form 2sin A sin B .Then we expand it using the trigonometry factor formulas. We note that the power on the right hand side of the equation is 1. The power of the current equation is at 2. We reduce it using the double angle formula (cos 2 Return to Trigonometry Help or Basic Trigonometry .
{"url":"http://www.trigonometry-help.net/trig-formulas.php","timestamp":"2014-04-21T02:10:44Z","content_type":null,"content_length":"11037","record_id":"<urn:uuid:7a6ef543-ecaa-4f16-b905-821d583ffcf1>","cc-path":"CC-MAIN-2014-15/segments/1398223202774.3/warc/CC-MAIN-20140423032002-00023-ip-10-147-4-33.ec2.internal.warc.gz"}
Small gaps between primes What are the shortest intervals between consecutive prime number s? The twin primes conjecture, which asserts that P =2 (where P , as usual, stands for the N-th ) infinitely often is one of the oldest problems; it is difficult to trace its origins. In the 1960's and 1970's sieve methods developed to the point where the great Chinese mathematician Chen was able to prove that for infinitely many primes P the number P+2 is either prime or a product of two primes. However the "parity problem" in sieve theory prevents further progress. What can actually be proven about small gaps between consecutive primes? A restatement of the prime number theorem is that the average size of P[n+1]-P[n] is log(P[n]). A consequence is that Δ := lim[n → ∞] P[n+1]-P[n] ≤ 1 In 1926, Hardy and Littlewood, using their ``circle method'' proved that the Generalized Riemann Hypothesis (that neither the Riemann zeta function nor any Dirichlet L-function (?) has a zero with real part larger than 1/2) implies that Δ ≤ 2/3. Rankin improved this (still assuming the Riemann Hypothesis) to Δ ≤ 3/5. In 1940 Erdös, using sieve methods, gave the first unconditional proof that Δ ≤ 1. In 1966 Bombieri and Davenport, using the newly developed theory of the large sieve (in the form of the Bombieri - Vinogradov theorem) in conjunction with the Hardy - Littlewood approach, proved unconditionally, Δ ≤ 1/2 and then using the Erdös method they obtained Δ ≤ (2 + "√ 3)/8 (or about 0.46650). In 1977, Huxley combined the Erdös method and the Hardy - Littlewood, Bombieri - Davenport method to obtain Δ ≤ 0.44254. Then, in 1986, Maier used his discovery that certain intervals contain a factor of more primes than average intervals. Working in these intervals and combining all of the above methods, he proved that Δ ≤ 0.2486, which was the best result until now. 
However, Dan Goldston and Cem Yildirim have recently written a manuscript (which was presented in a lecture at the American Institute of Mathematics) which advances the theory of small gaps between primes by a huge amount. First of all, they show that Δ = 0, exactly. Moreover, they can prove that for infinitely many n the inequality P[n+1]-P[n] < {log(p[n])}^8/9holds. Goldston's and Yildirim's approach begins with the methods of Hardy-Littlewood and Bombieri - Davenport. They have discovered an extraordinary way to approximate, on average, sums over prime K-tuples. We believe, after work of Gallagher using the Hardy-Littlewood conjectures for the distribution of prime K-tuples, that the prime numbers in a short interval ( N, N+ Γlog(N) ) are distributed like a Poisson random variable with parameter Γ. Goldston and Yildirim exploit this model in choosing approximations. They ultimately use the theory of orthogonal polynomials to express the optimal approximation in terms of Laguerre polynomials. Hardy and Littlewood could have proven this theorem under the assumption of the Generalized Riemann Hypothesis; the Bombieri - Vinogradov theorem allows for the unconditional treatment. This new approach opens the door for much further work. It is clear from the manuscript that the savings of an exponent of 1/9 in the power of log(P[n]) is not the best that the method will allow. There are (at least) two possible refinements. One is in the examination of lower order terms that arise in his method. Can they be used to enhance the argument? The other is in the error term Gallagher found in summing the ``singular series'' arising from the Hardy-Littlewood K-tuple conjecture. There is reason to believe that this error term can be improved, possibly using ideas in recent work of Montgomery and Soundararajan (``Beyond Pair - Correlation''.) This work is astonishing in another regard. 
They have actually proven that for any fixed number r the inequality P[n+r]-P[n] < (Log(P[n])^8r/(8r+1)) holds for infinitely many n. In 1977 Helmut Maier burst onto the analytic number scene with his tour de force proof that the largest gaps known to hold for two consecutive primes could be proven for each gap of r consecutive gaps for any fixed r. Goldston and Yildirim have achieved a similar sort of result for consecutive small gaps at the same time, in that they have demolished all previous records for one small gap. It is clear is that at least part of a monumental barrier which has impeded progress for at least the last 80 years has been broken down, and the paper will allow a significant amount of research to go forward.
{"url":"http://everything2.com/title/Small+gaps+between+primes?showwidget=showCs1453390","timestamp":"2014-04-19T00:08:22Z","content_type":null,"content_length":"25066","record_id":"<urn:uuid:b4b21f5b-8faa-4b6b-9851-51d984bf4d2e>","cc-path":"CC-MAIN-2014-15/segments/1398223203422.8/warc/CC-MAIN-20140423032003-00499-ip-10-147-4-33.ec2.internal.warc.gz"}
01-19-2013 10:33 PM steak Quote: Originally Posted by Left C Your answer is on the back of the last page in the directions. Well here goes: The yellow color is your end color for the KH test. You multiply the number of drops used to get the yellow color by 10. It was 10 drops in your case. This gives you 100 mg/L or 100 ppm. If you want to convert ppm to degrees, you multiply your ppm value by 0.056 to get degrees. So 100 x 0.056 = 5.6 degrees. The GH test has a little different formula. As an example, let's say that you used 10 drops to reach the end color of light blue. For the GH test, you multiply the number of drops used to reach the end color by 20 instead of 10 like the KH test. In this example, this gives you 200 mg/L or 200 ppm. To convert to degrees, you again multiply by 0.056. So, 200 x 0.056 = 11.2 You can get degrees from ppm another way. Divide your ppm by 17.86 to get degrees. Note that both KH and GH are measured in CaCO3 equivalents. I hope that this helps. This allowed me to use my test. I lost my booklet and no longer had the color chart, or the directions. I got on here looking for this information specifically. Thank you 10-23-2006 01:35 AM Left C Quote: Originally Posted by thanks a lot ... you helped more than you know You're welcome. I know that when you got the yellow/lime color, it was confusing. That means that you were almost to the end point. You had to add two more drops to reach the end point. Deadmonkey asked some related questions. If you get a chance, please read my reply. 10-23-2006 01:29 AM Left C Quote: Originally Posted by I don't have a KH tester, but mh GH tester says to keep adding drops until it changes from yellow to green (shaking each time). Once it changes to green, count the drops and look at the GH chart for the different fish you have to make sure it's in the right range. On my kit you multiply the drops by 17.9 and get your Parts Per Millon (ppm) or percent GH in the tank. 
I would guess by the lst post that KH has a similar method. For example, it took mine 13 drops times 17.9 is roughly 230 ppm. This is actually rather high. Frequent water changes should fix this though eventually. as far as accuracy, you are only going to be accurate to the nearest drop. It sounds like you are using AP's KH/GH test kit. It measures in degrees. The Hagen KH/GH test kit uses a different indicator solution. That's why the colors are different and one measures in ppm and the other measures in degrees. If you round 17.86 degrees up to one decimal place, you'll have 17.9 degrees. You can have two ways to calculate ppm and degrees. If you take 1 and divide it by 17.86 you get 0.056. Also, if you take 1 and divide it by 0.056 you get 17.86. Then, 17.86 ppm per degree and 0.056 degrees per ppm are your constants. So, let's says you have 5 degrees of hardness. You can multiply 5 degrees by 17.86 and get 89.3 ppm. Here's the other way to do it. Let's use 5 degrees of hardness again. You can divide 5 degrees by 0.056 and also get 89.3 ppm. Let's deal in ppm this time. Lets say that you have 100 ppm. You can divide 100 ppm by 17.86 and you get 5.6 degrees. The other way is to multiply 100 ppm by 0.056 and you get 5.6 Do you understand what I did in these calculations? Here's a tip. Both of these test kits use 5 ml of water for the test. You can be a little more accurate if you can find a test tube that will hold 10 ml of water. When you get the end color, you divide the number of drops by two. This may help if you have trouble deciding where the end point actually is. I believe that Tetra and Red Sea use 10 ml test tubes in some of their kits. (You can also use 15 ml, 20 ml, etc. also. Just be sure to divide by the number of drops correctly.) LaMotte's alkalinity test kit is more precise and it's about $20. It's a good one though. It's color change end point is rather abrupt. That helps it to be more accurate. 
LaMotte KH/ Alkalinity Test Kit 4491- DR / Direct Reading Titrator Method - Marine Depot - Marine and Reef Aquarium Super Store 10-22-2006 10:48 PM deadmonkey I don't have a KH tester, but mh GH tester says to keep adding drops until it changes from yellow to green (shaking each time). Once it changes to green, count the drops and look at the GH chart for the different fish you have to make sure it's in the right range. On my kit you multiply the drops by 17.9 and get your Parts Per Millon (ppm) or percent GH in the tank. I would guess by the lst post that KH has a similar method. For example, it took mine 13 drops times 17.9 is roughly 230 ppm. This is actually rather high. Frequent water changes should fix this though eventually. as far as accuracy, you are only going to be accurate to the nearest drop. 10-22-2006 09:55 PM LoJack thanks a lot ... you helped more than you know 10-18-2006 05:00 AM Left C Your answer is on the back of the last page in the directions. Well here goes: The yellow color is your end color for the KH test. You multiply the number of drops used to get the yellow color by 10. It was 10 drops in your case. This gives you 100 mg/L or 100 ppm. If you want to convert ppm to degrees, you multiply your ppm value by 0.056 to get degrees. So 100 x 0.056 = 5.6 degrees. The GH test has a little different formula. As an example, let's say that you used 10 drops to reach the end color of light blue. For the GH test, you multiply the number of drops used to reach the end color by 20 instead of 10 like the KH test. In this example, this gives you 200 mg/L or 200 ppm. To convert to degrees, you again multiply by 0.056. So, 200 x 0.056 = 11.2 You can get degrees from ppm another way. Divide your ppm by 17.86 to get degrees. Note that both KH and GH are measured in CaCO3 equivalents. I hope that this helps. 10-18-2006 04:07 AM LoJack Need help with Hagen Kh/Gh test I hate these stupid color tests. 
I am doing the Kh test, and your supposed to add a single drop of reagent, and then add a drop at a time until your solution turns from blue to "yellow/lime" color. On the back, where you are supposed to match the color, its got a blue square for start, and a yellow square for finish. At 8 drops my water turns from blue to a greenish color, and at 10 drops its the yellow on the back, so what is my Kh? Anyone who uses this tests please give me some insight as to what is accurate.
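For anyone who wants to sanity-check the arithmetic in the answers above, the drop-to-ppm-to-degrees conversions can be written as a quick script (the function names are my own, not from any kit's documentation):

```python
# Hagen KH/GH test kit conversions, per the formulas quoted in this thread.
# KH: ppm = drops * 10; GH: ppm = drops * 20 (for the standard 5 ml sample).
# degrees = ppm * 0.056, or equivalently ppm / 17.86.

def kh_ppm(drops, sample_ml=5):
    """KH in ppm CaCO3; a larger test tube scales the per-drop value down."""
    return drops * 10 * (5 / sample_ml)

def gh_ppm(drops, sample_ml=5):
    """GH in ppm CaCO3."""
    return drops * 20 * (5 / sample_ml)

def ppm_to_degrees(ppm):
    """Convert ppm CaCO3 to degrees of hardness (17.86 ppm per degree)."""
    return ppm / 17.86

# Left C's examples: 10 KH drops -> 100 ppm -> 5.6 degrees,
# 10 GH drops -> 200 ppm -> 11.2 degrees.
```

This also shows the 10 ml tip: doubling the sample halves the per-drop value, so 20 drops in 10 ml reads the same as 10 drops in 5 ml.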
Alexandria Books
Bookseller Rating:
322 - 3841 Lakeshore Blvd. West
Toronto, ON, Canada M8W 1R2
Tel: 416 904 2890

About Alexandria Books

Alexandria Books is a family business specializing in out-of-print, rare and new books. We have over 11,000 books listed and constantly upgrade our inventory. We provide books on a wide variety of topics, including first editions, signed copies, and 17th-, 18th- and 19th-century books, as well as modern editions; our subjects include architecture, painting and collectible books. Many of our books are titles with very few copies available for sale on book-selling engines. Mainly we deal with books on advanced science, including politics, sociology, history, mathematics, biology, physics, chemistry, books on particular topics in engineering, and reference books. In particular, we have a large collection of advanced-level books in mathematics, physics, chemistry (applied and theoretical), chemical technology, biology, engineering, power engineering, electricity and electrical machines. They are thoroughly described in terms of keywords, and we encourage our customers to search our collection by means of general keywords to find general information on a topic, later narrowing the keywords to very specific terminology on the topic of interest. This way you will find titles even with specific chapters on particular topics inside a book; this is a very good option for anyone with a very general interest in a subject and, at the same time, for advanced scientists and engineers with a very specific interest within one particular topic in a field. Even if you are interested in only one chapter of a book, you will find it. All our books are listed with pictures; many have multiple pictures to represent the edition and condition as best we can.
Catalogs of Alexandria Books

Advance Reading Copies / Uncorrected Page Proofs; American Mathematical Society; Americana; Architecture; Art; Biography, Autobiography; Books Collecting; Reading Books; Design Manuscripts; Business; Canadiana; Castles; Ceramics; Civil Engineering; Construction; Dancing and Dancers; Decoration Ornament; Dictionaries References; Drawing; Drawing Technical; Everyman's Library Series; Fantasy; Fiction; First Editions; Food Wines Recipes; Foreign Languages Japanese; Information Theory; Landscaping; Literature; Juvenile Literature; Management, Leadership; Military, World War; Music; Oscillators; Painting; Photography; Poetry; References; Science Biology Entomology; Science Biology General; Science Biology Theoretical; Science Chemistry Colloidal Surface Chemistry; Science Chemistry General; Science Chemistry Radiation Chemistry; Science Chemistry Theoretical; Science Computer Science; Science Economics; Science Electronics Computation; Science Engineering; Science Geology; Science History Britain England; Science History Europe; Science History Local Canada; Science History Local Europe; Science History Middle East; Science History Rome; Science Languages Linguistics; Science Literature Criticism; Science Logic; Science Mathematical Physics; Science Mathematics Algebras; Science Mathematics Analysis; Science Mathematics Applied; Science Mathematics Differential Equations; Science Mathematics Differential Geometry; Science Mathematics Discrete Mathematics; Science Mathematics Education Teaching; Science Mathematics Functional Analysis; Science Mathematics Functions; Science Mathematics Game Theory; Science Mathematics General Popular; Science Mathematics Geometry Algebraic Geometry; Science Mathematics Graphs Theory; Science Mathematics Group Theory; Science Mathematics Hilbert Space; Science Mathematics Lie Groups Algebras; Science Mathematics Logic; Science Mathematics Matrices; Science Mathematics Number Theory; Science Mathematics Numerical Methods; Science Mathematics Operations Research; Science Mathematics Operator's Theory; Science Mathematics Rings Theory; Science Mathematics Schaum's Outline Series; Science Mathematics Set Theory; Science Mathematics Statistics Probability; Science Mathematics Tensor Analysis; Science Mathematics Time Series Fourier Series; Science Medicine; Science Philosophy; Science Physics Chemistry Thermodynamics; Science Physics Classical Mechanics; Science Physics Dynamic Systems; Science Physics Field Theory; Science Physics General Popular; Science Physics Lasers; Science Physics Magnetism Electromagnetic Waves; Science Physics Nuclear; Science Physics Nuclear Reactors; Science Physics Optics Optoelectronics; Science Physics Plasma; Science Physics Quantum Mechanics; Science Physics Relativity Theory; Science Physics Semiconductors; Science Physics Statistical Mechanics; Science Physics Statistical Physics; Science Physics Superconductivity; Science Physics Theoretical; Science Political Science; Science Popular Literature; Science Radioactivity Radiation; Science Technical; Science Turbulence; Sculpture; Series; Signed Books; Soviet Union and Russia History.

Bookstore Terms

Shipping Terms: Orders usually ship within 2 business days. Shipping costs are based on books weighing 2.2 LB, or 1 KG. If your book order is heavy or oversized, we may contact you to let you know extra shipping is required.
Elementary Statistics Plus MyStatLab with Pearson eText -- Access Card Package Why Rent from Knetbooks? Because Knetbooks knows college students. Our rental program is designed to save you time and money. Whether you need a textbook for a semester, quarter or even a summer session, we have an option for you. Simply select a rental period, enter your information and your book will be on its way! Top 5 reasons to order all your textbooks from Knetbooks: • We have the lowest prices on thousands of popular textbooks • Free shipping both ways on ALL orders • Most orders ship within 48 hours • Need your book longer than expected? Extending your rental is simple • Our customer support team is always here to help
Personal data:
Name: Tamás Waldhauser
Date of birth: 1975.02.08.
Place of birth: Komló, Hungary
Employment: Bolyai Institute, University of Szeged, assistant professor

Studies:
1993-1998 Mathematics Masters Program, University of Szeged
1998-2001 Mathematics Ph.D. Program, University of Szeged
2000 Technische Universität Dresden (one semester with Erasmus Grant)
2000-2003 Mathematics Ph.D. Program, University of New Hampshire

Degrees:
1998 M.Sc. in Mathematics (degree with honours), University of Szeged
2004 Ph.D. in Mathematics, University of New Hampshire
2008 Ph.D. in Mathematics (summa cum laude), University of Szeged

Honours and awards:
1997 National Scientific Student Conference, 2nd Prize
1997 Distinguished Student of the University of Szeged
2002-2003 Dissertation Fellowship, University of New Hampshire
2005 Géza Grünwald Commemorative Prize (Bolyai János Mathematical Society)
West Chicago Science Tutor Find a West Chicago Science Tutor ...I double majored in Biology and Psychology and I minored in Chemistry. During my final semester at Loyola, I began working for the tutoring center on campus and tutored biology students in any 100 or 200 level biology course. I have also been an MCAT course instructor for the past year. 8 Subjects: including physics, biochemistry, genetics, biology ...I work with junior high, high school, college, and adult students. Standardized test prep is my specialty. About me: I'm a lifelong math-lover who's been tutoring students of all ages for nearly 20 years. 18 Subjects: including ACT Science, chemistry, writing, Microsoft Excel ...I can offer a structured, systematic and, hopefully, rewarding learning experience to the student looking for educational assistance. My prior work background and experience for 30+ years consists of providing structural engineering services for large and small firms in the private sector (comm... 10 Subjects: including civil engineering, geometry, algebra 1, algebra 2 ...While I was abroad in Spain in fall 2012, I took all five of my college courses in Spanish, and this spring I was invited to join the Spanish honor society on campus, Sigma Delta Pi. More generally, I do have some coursework in general biology, chemistry, and basic music theory. I've also been ... 29 Subjects: including ACT Science, reading, English, Spanish ...During my two and a half years of teaching high school, I have taught various levels of Algebra 1 and Algebra 2. I have a teaching certificate in high school mathematics issued by the South Carolina State Department of Education. During my two and a half years of teaching high school mathematics, I have had the opportunity to teach various levels of Prealgebra, Algebra 1 and Algebra 2. 12 Subjects: including ACT Science, calculus, geometry, algebra 1
Operator: Mod
Evaluates the remainder when n1 is divided by n2.
The result is the remainder after number1 is divided by number2. For example, the expression 14 Mod 4 evaluates to 2. It works just like long division: divide 14 by 4; 4 goes in 3 times, which gives 12; and 14 - 12 = 2, which is the remainder.
You can make a division table like we used to do in school, so you'll have better results for large numbers (multiply up to the last whole multiple before the decimal point, and the difference is the remainder).
Syntax:
n1 Mod n2
6 Mod 2 = 0
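The long-division steps above can be checked with a quick script. Python is shown here rather than VB; note that for negative operands VB's Mod keeps the sign of the dividend, while Python's built-in % keeps the sign of the divisor, so this sketch emulates the VB behavior explicitly:

```python
import math

def vb_mod(n1, n2):
    """Remainder of n1 / n2 with the sign of the dividend, matching VB's Mod."""
    return n1 - n2 * math.trunc(n1 / n2)

# 14 Mod 4: 4 goes into 14 three times (12), and 14 - 12 = 2.
```

For positive operands `vb_mod` and Python's `%` agree; they only diverge when the signs of the operands differ.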
3D velocity modeling, with calibration and trend fitting using geostatistical techniques, particularly advantageous for curved-ray prestack time migration and for such migration followed by prestack depth migration

Patent: 7,493,241
Inventor: Lee; Wook B. (Sugarland, TX)
Date Issued: February 17, 2009
Application: 10/564,895
Filed: July 22, 2004
Primary Examiner: Phan; Thai
Attorney or Agent: Shaper Ile, LLP; Shaper; Sue Z.
U.S. Class: 703/2; 367/73; 703/10; 703/6; 706/928
Field of Search: 703/2; 703/10; 703/6; 702/14; 367/73; 345/420; 706/928
International Class: G06F 17/50; G06G 7/48

Other References:
Wook B. Lee and Wenlong Xu, "3-D Geostatistical Velocity Modeling: Salt Imaging in a Geopressured Environment," reprinted from the Jan. 2000 issue of The Leading Edge.
Brooke J. Carney, "Building Velocity Models for Steep-Dip Prestack Depth Migration through First Arrival Traveltime Tomography," Nov. 2000, Blacksburg, VA.
Michael E. Kenney and Wook B. Lee, "Velocity Calibration for 3D Prestack Time Migration: A Prerequisite for Velocity Depth Modeling," SEG Expanded Abstract, Oct. 2003.
Wook B. Lee, Seislink Corporation, "Velocity Calibration and Trend-Fitting Procedures for Seismic Depth Imaging and Interpretation."
Abstract: A method of constructing a 3D geologically plausible velocity model for efficient and accurate prestack imaging, wherein embodiments of the invention provide: (1) a method of calibrating velocity functions, appropriately and effectively taking into account well (hard) and seismic (soft) data as well as geological features, and trend fitting ("iDEPTHing") RMS velocities before curved-ray prestack time migration; (2) a method of calibrating and trend fitting ("iDEPTHing") interval velocities before prestack depth migration, appropriately and effectively taking into account well (hard) and seismic (soft) data as well as geological features; and (3) a method of constructing a geologically plausible velocity model using the previous steps of velocity calibration and trend fitting RMS and interval velocities, for efficient sequential use in prestack time migration followed by prestack depth migration. Advantages of the embodiments include providing a quick turnaround of prestack time and depth migration to interpreters and cutting back resource-intensive interpretation efforts for 3D seismic data. The invention has significant implications for improving aspects of oil and gas exploration and production technologies, including pore pressure prediction, prospect evaluation and seismic attribute analysis.

Claims: What is claimed is:

1. A method for prestack time migration, including velocity calibration and trend fitting ("iDEPTHing") before curved-ray prestack time migration, comprising: a. editing seismic velocities, including at least one of honoring geologic trends existing in a survey area, using envelopes of vertical trends based on rock properties and using computed envelopes; b. computing lateral trends of RMS velocities derived from seismic data by geostatistical variogram modeling; c.
preparing scale factors from well (hard) data and seismic (soft) data wherein the well (hard) data include at least one of checkshot data and up-scaled and bulk-shifted sonic logs; d. calibrating RMS velocities, including applying interpolated scale factors to RMS velocities wherein the interpolation is a function of the lateral trends; and e. curved-ray prestack time migration using interpolated calibrated RMS velocities, wherein the interpolation is a function of the lateral trends.

2. The method of claim 1 wherein editing of seismic velocity data includes weighting, and further includes converting seismic velocities to interval velocities using the Dix equation and to average velocities wherein interval velocities are defined at the center of the layers; and at least one of: a. computing envelope interval functions by specifying upper and lower limits based on geologic constraints and deleting and/or down-weighting erratic functions of picks lying outside of envelope functions; and b. re-sampling and applying median and damped-least-square filters in the RMS velocity domain.

3. The method of claim 1 that includes calibrating seismic velocities using at least one key controlling stratigraphic horizon such that said stratigraphic horizon(s) provides conforming surfaces for velocity calibration.

4. The method of claim 3 that includes using at least one horizon controlling sedimentation style due to compaction.

5. The method of claim 1 that includes deriving correction scale factors between well (hard) data and seismic (soft) data; and interpolating scale factors by geostatistical Kriging, where scale factors are computed by dividing checkshot RMS velocities by seismic RMS velocities at well locations; and computing calibrated velocities by multiplying by the scale factors.

6. The method of claim 1 that includes computing lateral trends of RMS velocities, consistent with geologic trends, by a. computing a variogram model of seismic RMS velocities; and b.
adjusting the model to take into account at least one of geologic trends and velocity trends derived from well (hard) data; and that further includes interpolating calibrated seismic RMS velocities by Kriging on a stratigraphic grid for prestack time migration using the adjusted variogram model.

7. The method of claim 1 wherein the well (hard) data includes a plurality of wells of checkshot data.

8. The method of claim 7 wherein sonic velocities are calibrated with checkshot data.

9. The method of claim 1 that includes viewing at least one of geological maps, faults, geo-pressure zones, geologic markers and salt intrusion on an interactive workstation.

10. The method of claim 1 that includes editing using envelopes and computing regional envelopes for a large survey area with hundreds of lease blocks.

11. The method of claim 1 wherein preparing scale factors includes at least one of deviated checkshot data and checkshot data from adjacent blocks in the well (hard) data.

12. The method of claim 1 that includes calibrating average velocities derived from seismic data, including applying interpolated scale factors to the average velocities, and interpolating the calibrated seismic average velocities for time-to-depth conversion and depth interpretation.

13. The method of claim 1 that includes interpolating scale factors by geostatistical Kriging, wherein scale factors are computed by dividing at least one of checkshot and sonic interval (or average) velocities by seismic interval (or average) velocities at well locations; and computing calibrated velocities by multiplying by the scale factors.

14. The method of claim 1 wherein checkshot data includes at least one of deviated checkshot data and checkshot data from adjacent blocks.

15. The method of claim 1 that includes using at least one of salt entry points and proximity surveys for calibration.

16. The method of claim 1 including adjusting lateral trends from seismic (soft) data with lateral trends from well (hard) data.

17.
The method of claim 1 wherein the well (hard) data includes upscaled and bulk-shifted sonic logs, and that includes upscaling sonic logs by integrating transit times at every 100 feet or by fitting a polynomial function, and bulk shifting to fit the vertical trends, wherein the shift is caused by missing data between the surface and the logging depth.

18. A method for velocity calibration and trend fitting ("iDEPTHing") before prestack depth migration comprising: a. constructing a geologically plausible velocity model from seismic data for subsequent prestack depth migration; b. computing variogram modeling of interval velocities; c. calibrating interval velocities from the velocity model with hard well data and trend fitting the calibrated velocities using the variogram modeling ("iDEPTHing"); d. interpreting prestack time migration results; e. verifying well marker ties; f. updating key horizons based on the verifying; g. repeating steps a.-f. at least once; and h. prestack depth migrating seismic data using the calibrated, trend fitted velocity model.

19. The method of claim 18 that includes calibrating seismic interval velocities using stratigraphic horizons by a. selecting at least one stratigraphic horizon, which will provide at least one conforming surface for interpolating interval velocities; b. interpreting and updating stratigraphic horizons after curved-ray prestack time migration for use with prestack depth migration; and c. calibrating at least one of interval or average velocities using key controlling stratigraphic horizons.

20. The method of claim 18 wherein computing variogram modeling of interval velocities includes: a. adjusting the modeling with at least one of geologic trends and velocity trends from well (hard) data; b. building a conforming stratigraphic unit using at least two updated stratigraphic surfaces; and c. interpolating seismic interval velocities on a new stratigraphic grid for pre-stack depth migration.

21.
The method of claim 18 wherein calibrating interval velocities includes applying to the interval velocities an interpolated scale factor, the scale factor being a function of well (hard) data and seismic (soft) data, and that includes adding a scale factor between true depth and migrated depth into the scale factor computed from hard and soft data.

22. A velocity model for use in time migration of 3D prestack seismic data, comprising: geostatistically interpolated, calibrated velocity function values associated with 3D prestack seismic data trace locations, the geostatistically interpolated, calibrated velocity function values being a function of the application of geostatistical interpolation to calibrated velocity functions, the calibrated velocity functions being the product of a combination of the geostatistical interpolation of at least one scale factor with seismic (soft data) velocity functions derived from 3D prestack seismic (soft) data, and at least one scale factor being a function of a well (hard) data source; and wherein the geostatistical interpolations are a function of lateral velocity trends developed from the 3D prestack seismic data.

23. The velocity model of claim 22, wherein the soft data (seismic) velocity functions have been edited.

24. The velocity model of claim 22 wherein the geostatistical interpolations use a stratigraphic grid based on at least one established stratigraphic horizon.

25. The model of claim 22 wherein the calibrated velocity function is the product of the combination of the geostatistical interpolation of at least two scale factors, each scale factor being a function of a separate well (hard) data source.

26.
A method for developing a velocity model for use in migrating seismic data, comprising: from 3D prestack seismic data, developing soft data (seismic) velocity functions for select locations and developing a variogram model of lateral velocity trends; from a plurality of hard data sources, developing a set of scale factors for each source, each scale factor relating a hard data velocity function derived from the source and a seismic (soft data) velocity function; geostatistically interpolating the scale factors and applying the interpolated scale factors to seismic (soft data) velocity functions to create calibrated velocity functions; geostatistically interpolating the calibrated velocity functions to trace locations; and wherein the geostatistical interpolations are a function of the variogram model.

27. The method of claim 26 that includes editing the seismic (soft data) velocity functions.

28. The method of claim 26 wherein the geostatistical interpolations utilize a stratigraphic grid based on at least one established stratigraphic horizon.

29.
A method for enhanced time migration of 3D prestack seismic data, comprising: generating soft data (seismic) velocity functions and lateral velocity trends from at least a portion of 3D prestack seismic (soft) data; computing well (hard data) velocity functions from at least two sources of well (hard) data; creating at least two sets of scale factors, each set associated with a source of well (hard) data, each scale factor a function of a well (hard) data velocity function and a seismic (soft data) velocity function; applying interpolated scale factors to seismic (soft data) velocity functions to create calibrated velocity functions, and interpolating calibrated velocity functions to trace locations, the interpolation of scale factors and calibrated velocity functions being a function of the lateral velocity trends; and applying curved-ray time migration to 3D prestack seismic data using the interpolated calibrated velocity functions.

30. The method of claim 29 wherein the seismic (soft data) velocity functions are edited.

31. The method of claim 29 wherein the interpolations utilize a stratigraphic grid incorporating at least one established stratigraphic horizon.

32. The method of claim 29 wherein the lateral velocity trends include variogram modeling and the interpolation includes Kriging.

33. The method of claim 29 that includes editing the hard data velocity functions.

34. An improved method for migrating 3D prestack seismic data comprising: developing a calibrated, trend fitted RMS velocity model from 3D prestack seismic data and a plurality of hard data sources; applying curved-ray time migration to the 3D prestack seismic data using the velocity model; developing a calibrated, trend fitted interval velocity model incorporating at least one horizon derived from the previously migrated data; and depth migrating the 3D prestack seismic data using the calibrated, trend fitted interval velocity model.

35.
The method of claim 34 that includes iterating over the steps of developing the interval velocity model and depth migration until a satisfactory convergence is established.

Description

FIELD OF THE INVENTION

This invention relates to improved methods for building geologically plausible 3D velocity models for use in 3D prestack seismic imaging, in particular for use in curved-ray prestack time migration and in such migration followed by prestack depth migration, and including utilizing geologic data and geostatistical techniques for integrating velocity information derived from seismic data, so-called "soft data", with well data, or so-called "hard data", such as checkshot surveys and sonic logs.

BACKGROUND OF THE INVENTION

In seismic exploration, seismic reflection data from subsurface layers are collected by multiple receivers, typically at the earth's surface. The more recent 3D seismic acquisitions cover large areas, collecting extensive 3D seismic data for use in subsurface imaging. Seismic "migration" corrects and improves initial assumptions of near-horizontal layering in an attempt to model geophysical realities such as dips, discontinuities, and curvature of formations. Seismic migration is typically the culmination of the image processing for the seismic data, with the goal of producing detailed pictures for use in the interpretation of subsurface geologic structures. Such detailed pictures are important for prospect generation and reservoir characterization, such as lithology, fluid prediction, and pore pressure prediction, as well as for reservoir volume estimation. 3D seismic interpretation of migrated data aspires to produce an accurate mapping of the subsurface structures, important for oil and gas exploration. Such seismic interpretation profits from an improved focusing and positioning of subsurface reflectors in the migrated data.
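Claim 2 above refers to converting seismic velocities to interval velocities using the Dix equation. As an illustrative sketch of that standard conversion (not code from the patent): for RMS velocities v1 and v2 at two-way times t1 < t2, the interval velocity of the layer between them is sqrt((v2^2 * t2 - v1^2 * t1) / (t2 - t1)).

```python
import math

def dix_interval_velocity(v_rms1, t1, v_rms2, t2):
    """Dix equation: interval velocity between two-way times t1 and t2 (t2 > t1),
    given the RMS velocities from the surface down to each time."""
    return math.sqrt((v_rms2**2 * t2 - v_rms1**2 * t1) / (t2 - t1))

# In a constant-velocity medium the interval velocity equals the RMS velocity;
# when velocity increases with depth, the interval velocity exceeds the RMS value.
```

This is the sense in which noisy RMS picks are dangerous: the differencing in the numerator amplifies errors, which is why the patent's editing and envelope constraints precede the conversion.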
It is one disclosure of the instant invention that such improved focusing and positioning can be made possible by migrating the data with a more realistic velocity model. Seismic migration of 3D prestack seismic data is practiced in two forms, time migration and depth migration. Each requires a 3D velocity model, a description of acoustic velocity structures. Time migration is simpler, less resource intensive and considered to be less accurate and less sophisticated. Historically, the industry has not bothered to build a sophisticated, geologically plausible velocity model prior to prestack time migration, relying instead on simple assumptions deemed to be fitting for a technique regarded as less accurate. For accuracy, the industry currently looks to time- and resource-intensive "depth migration", a technique utilizing massive iterations to converge on solutions. One disclosure of the instant invention is that, surprisingly, an effective and efficient focusing and positioning of subsurface reflectors is achievable by seismic time migration, but such results are highly dependent on the accuracy of the 3D velocity model employed therein. In particular, a geologically realistic, accurate 3D velocity model produces surprisingly good results when used with curved-ray prestack time migration.

The relatively new 3D seismic depth imaging services, techniques that have opened a rapidly growing market in recent years, are at present highly interpretive, quite expensive and very complex processing jobs. They carry the promise of illuminating complicated oil traps under geologic complications, such as sub-salt prospects in deep water; to economize on the high cost of drilling in deep water, such better seismic imaging technology is in demand. However, reducing the cost and time of such complex processing is therefore also important. One disclosure of the instant invention is a methodology that so reduces the cost and time.
Interpreters are historically given the task of tying well markers to 3D seismic data. Calibrating velocity models derived from seismic data and used for migration by tying into well information (hard data) has historically been deemed the interpreter's task, performed post migration. Inconsistencies between processing velocities and the depth-tying velocities, however, create formidable challenges at the interpretation stage. It is a significant improvement of the instant invention to have previously dealt with this calibration issue, prior to and/or during migration. The instant invention provides a better solution to both imaging and tying well information. More focused seismic data, produced as a result of early velocity calibration with hard data and geologic data and the use of geostatistically sensitive trend fitting (together referred to as "iDEPTHing") prior to migration, as taught herein, restores a better signal-to-noise ratio for deeper exploration. By utilizing the methods of the instant invention, curved-ray prestack time migration provides excellent migrated seismic data quickly to interpreters, and provides superior preliminary data for constructing a 3D velocity model for prestack depth migration. Summary of State of the Industry The current industry practice of prestack depth migration is best described by John W. C. Sherwood (Sherwood, 1989): "How can we routinely, efficiently, and economically extract interval velocities with the accuracy needed for depth inversion? . . . First, an educated estimate of the interval velocities is made. This is used to create, from the regular stacked (may be migrated) section, an approximate depth-interval velocity model. This first guess is going to be wrong. But the data can be processed on the basis of this model using a full pre-stack depth migration. Then the data are examined and a determination is made of how well it is focused.
If they don't appear to be focused properly, the model can be adjusted, both the depths and the interval velocities, to bring the data into better focus. The adjustment is repeated until the results are satisfactory. The procedure is today being used very intensively by some companies." An abundance of articles on the subject of iterative prestack depth migration has been published over the last 20 years (Wang et al., 1991; Lee and Zhang, 1992; MacKay and Abma, 1992; Liu, 1997). U.S. Pat. No. 4,992,996, entitled "Interval velocity analysis and depth migration using common reflection point gathers", relates to a method of performing velocity analysis while eliminating the effects on weak signals caused by stronger signals, with an assumption that the initial velocity model is reasonably correct and geologically plausible. The current invention provides a method for providing such a needed geologically plausible initial velocity model. The iterative technique of depth migration often fails to converge, and it often fails to provide prospects with economics within reasonable ranges of error for drilling. The iterative technique is based on the assumption that the iterations will converge to a unique solution most of the time. However, these iterations will converge to a geologically acceptable solution only when the initial model is geologically plausible. And, as Sherwood stated, the initial model is going to be wrong most of the time: the target depth is often wrong based on the result of iteration. Again, the current invention provides a method for providing such a needed geologically plausible initial velocity model. Another favorite assumption is that imaging velocities are different from velocities for depth conversion due to anisotropy. It has been assumed that seismic velocities are in general 2-13% faster than the well average velocities (Guzman et al., 1997).
However, considering errors in seismic velocity measurements of 5-10%, anisotropy cannot be so easily assumed. Based on hyperbolic moveout, vertical velocity gradients and velocity trend reversals will be a significant source of errors. Small errors in stacking velocities will be amplified in converting to interval velocities. As a reference for geophysical solutions being non-unique based on geophysical data collected on the earth's surface, Al-Chalabi analyzed explicitly the non-uniqueness in the velocity-depth curve in his article (Al-Chalabi, 1997). He pointed out that the errors in measuring velocities from semblance clouds make the problem even worse. However, the current industry practice assumes the existence of a unique or true solution. The current invention deals with velocity calibration in an improved manner in order to reduce inaccuracies in seismic velocities and non-uniqueness. U.S. Pat. No. 5,089,994, entitled "Tomographic estimation of seismic transmission velocities from constant offset depth migrations", relates to a method for improving velocity models so that constant-offset migrations estimate consistent positions for reflectors, including tomographic estimations of seismic transmission velocities from constant-offset depth migrations. This costly method has a fundamental problem of non-uniqueness between layer velocities and layer boundary positions. The current invention is a prerequisite to reasonable tomographic estimation. U.S. Pat. No. 5,513,150, entitled "Method of determining 3-D acoustic velocities for seismic surveys", relates to a method of producing a velocity volume for a seismic survey volume, based on two-way time seismic data and process velocity data. This method provides a hands-on interactive tool for velocity editing but does not recognize the importance of velocity calibration. Constructing improved velocity models requires all different sources of velocity data.
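The amplification of stacking-velocity errors in the Dix conversion noted above can be illustrated numerically. The sketch below (the picks, times, and velocities are purely illustrative assumptions, not data from the patent) applies the standard Dix equation to two RMS picks and shows that a roughly 1% error in the deeper RMS pick produces an interval-velocity error several times larger:

```python
import math

def dix_interval(v_rms1, t1, v_rms2, t2):
    """Interval velocity between two RMS picks via the Dix equation:
    v_int^2 = (v_rms2^2 * t2 - v_rms1^2 * t1) / (t2 - t1)."""
    return math.sqrt((v_rms2**2 * t2 - v_rms1**2 * t1) / (t2 - t1))

# Hypothetical picks: a thin layer between 2.0 s and 2.2 s two-way time.
v_int = dix_interval(2000.0, 2.0, 2050.0, 2.2)
# Perturb the deeper RMS pick by +1% and recompute.
v_int_perturbed = dix_interval(2000.0, 2.0, 2050.0 * 1.01, 2.2)
amplification = v_int_perturbed / v_int - 1.0  # relative interval-velocity error
```

With these numbers the 1% RMS perturbation grows to about a 7% interval-velocity error, because the Dix formula differences two large, nearly equal quantities over a small time interval.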
Inaccurate seismic velocity (soft) data are abundant and reliable well (hard) data are sparse. Most of the time, inconsistencies between seismic and well data are observed. Mostly the inconsistencies between different sources of velocity data are attributed to some unexplained physical mechanism. However, it is difficult to assess any physical mechanism due to inaccuracies in seismic velocity measurements. The following patent stated that the inconsistencies might come from anisotropy. U.S. Pat. No. 6,253,157, entitled "Method for efficient manual inversion of seismic velocity information", relates to a method of calculating seismic velocity for migration purposes as a function of subsurface spatial position that gives the seismic processing analyst direct control of the resulting migration velocity model. This method recognizes the different sources of velocity information and that the strengths of one source often offset the weaknesses of another. It states that different velocity information sources usually give inconsistent estimates of the instantaneous velocity and interprets that this inconsistency may be due to anisotropy that is being ignored. Anisotropy is difficult to measure because it can only be measured in laboratories. Laboratory measurements indicated that significant intrinsic anisotropy was observed in West Africa, and anisotropic migration is required for both improved imaging and accurate positioning in such cases. U.S. Pat. Nos. 5,696,735 and 6,002,642, entitled "Seismic migration using offset checkshot data", relate to a method of migrating seismic data using offset checkshot survey measurements. This method uses direct travel time measurements from offset checkshot surveys instead of a 3D velocity model. Although such offset checkshot surveys are not commonly available, they may be advantageously used in special situations for shear wave imaging of dipping reflectors.
Development of Instant Invention It was postulated that velocity models constructed according to current industry standards could be too smooth for depth imaging and sometimes wipe out key prospects. Excessive smoothing in velocity modeling could be a consequence of embedded velocity errors. Often layer boundaries were interpreted based on over-migrated or under-migrated results. It was detrimental to include erroneous boundaries in a velocity model. A variation of the collocated Co-Kriging method was early on tested by the instant inventor on post-stack seismic data for modeling geologic features such as geo-pressure zones. The collocated Co-Kriging method showed that it could incorporate big local anomalies with smooth transition boundaries (Lee and Xu, 2000), integrating seismic (soft) data and well (hard) data by the zone of influence according to variances. Calibration was used in the test to tie wells for post-stack depth migration. Residual velocity or focusing errors could not be determined with post-stack data. The primary purpose of the post-stack depth migration was to correct positioning errors. In a recent subsequent test, in accordance with the instant invention, velocity calibration to hard and geologic data and geologically sensitive trend fitting (together referred to as "iDEPTHing") were incorporated in constructing a geologically plausible velocity model for prestack time/depth migration. The test showed that curved-ray prestack time migration based on such a velocity model gave astonishingly ("eureka") good results in terms of imaging steeply dipping seismic events and tying well markers (Kenney and Lee, 2003; Lee, 2003). Further, the output of such time migration provides a superior, cost-effective and efficient input for any subsequent depth migration.
Current Invention The current invention discloses improved methods for constructing a premigration velocity model by the preferred steps of: 1) editing to avoid embedding velocity errors and to improve lateral velocity trends of seismic velocities; 2) better fitting of vertical velocity trends by calibrating seismic (soft) data to well (hard) data (checkshot and/or sonic logs); 3) more accurately interpolating velocities and calibration scale factors using geostatistical techniques; 4) incorporating geologic features, such as known stratigraphic horizons, into a stratigraphic grid, when their positioning accuracy is verified, and otherwise utilizing known geologic features for editing; and 5) avoiding excessive smoothing by using a variation of geostatistical collocated co-Kriging for localized anomalies. The invention teaches that curved-ray prestack time migration will correctly accommodate a vertical velocity gradient. Further, the prestack curved-ray time migration results using the instant invention provide superior focusing of seismic events, including steeply dipping events, and excellent well ties. Due to well ties, more accurate stratigraphic horizons can be interpreted. A second velocity calibration (to hard data and geologic data) and (geologically sensitive) trend fitting (together referred to as "iDEPTHing") before subsequent prestack depth migration can significantly accommodate lateral velocity variations above a prospect. For sub-salt prospects, the inventive methods can provide valuable calibrated velocity data above top salt for constructing a geologically plausible initial model for prestack depth migration. Geostatistical methods have been used in other disciplines, such as reservoir characterization. Interpreters have used velocity calibration. Some predecessors have attempted to calibrate well (hard) and seismic (soft) data prior to migration.
The current invention, however, is the first to specifically disclose a unique process (referred to as "iDEPTHing") for calibration prior to prestack time and/or depth migration, as well as its surprising value when combined with curved-ray prestack time migration, and when then subsequently used with depth migration. SUMMARY OF THE INVENTION The instant invention discloses an improved method for constructing a 3D geologically plausible velocity model that has particular value for efficient and accurate prestack imaging. Embodiments of the invention provide: (1) a method for calibrating seismic velocity data, appropriately and effectively taking into account hard data and geological features and utilizing geostatistically sensitive trend fitting (together referred to as "iDEPTHing") for interpolating, which is particularly valuable for producing RMS velocities for use with curved-ray prestack time migration; (2) a similar method for calibrating and trend fitting ("iDEPTHing") interval velocities before prestack depth migration, again appropriately and effectively taking into account well (hard) and seismic (soft) data as well as geological features and geostatistical interpolation; and (3) a method for efficient sequential use of iDEPTHing in curved-ray prestack time migration followed by further iDEPTHing and prestack depth migration. Advantages of the embodiments include providing a quick turnaround of prestack time and depth migration images to interpreters and reducing resource-intensive interpretation efforts for 3D seismic data. The invention has significant implications for improving oil and gas exploration and production technologies, including pore pressure prediction, prospect evaluation and seismic attribute analysis. Proper calibration and trend fitting of seismic RMS and interval velocities before prestack migration is a key aspect of the instant inventive method, leading to efficient and successful prestack imaging and interpretation of 3D seismic data.
Experiment shows that proper velocity calibration and trend fitting prior to curved-ray prestack time migration can provide seismic data with excellent imaging and accurate well ties. Further velocity calibration and trend fitting of interval velocities can provide a further improved geologically plausible velocity model to enhance prestack depth migration, both in terms of the speed and the quality of the results. Editing seismic velocities using geologic constraints is valuable for effective calibration. Use of stratigraphic surfaces can further play a helpful role in more accurately calibrating seismic velocities. Geostatistical methods (variogram modeling and Kriging) are key factors in interpolating seismic velocities and/or calibration scale factors. A scale of geological heterogeneities is better used for smoothing instead of an arbitrary scale. The inventive procedure provides a method of velocity calibration and trend fitting (together referred to herein as "iDEPTHing") for velocity modeling prior to curved-ray prestack time migration. Such time migration, in turn, offers an improved stepping-stone for efficiently and effectively constructing an accurate 3D velocity model for prestack depth migration. The invention can be implemented in specific preferred procedures for editing, calibrating and trend fitting, thereby effectively integrating well (hard) data, geological data and seismic (soft) data. The "iDEPTHing" procedure can be supplemented by interactive software packages. BRIEF DESCRIPTION OF THE DRAWINGS A better understanding of the present invention can be obtained when the following detailed description of the preferred embodiments is considered in conjunction with the following drawings, in which: FIG. 1: illustrates two basic steps of one embodiment of the current inventive method for velocity calibration and trend fitting for prestack time migration and geologically plausible velocity modeling for prestack depth migration. FIG.
2: illustrates preferred steps for velocity calibration and trend fitting ("iDEPTHing") for curved-ray prestack time migration. FIG. 3: illustrates preferred steps for velocity calibration and trend fitting ("iDEPTHing"), and for constructing a geologically plausible velocity model for prestack depth migration. FIG. 4: illustrates seismic velocity types. (a) Stacking (or NMO) velocities (red and blue lines) are interpreted from semblance clouds (ellipses) which are generated from seismic data according to velocity analysis (note: personal bias can interpret semblance clouds differently); (b) RMS velocities are inferred from stacking velocities assuming a flat earth; (c) average velocities by time averaging of interval velocities; (d) interval velocities from the Dix equation as applied to prior RMS velocities. Three neighboring velocities are shown in FIGS. (b), (c) and (d). FIG. 5: illustrates velocity editing. (a) A base map and velocity locations (red crosses) comprising selected locations with pre-existing salt contours (yellow and blue) and faults (green and black); (b) interval velocities (red lines) with a geological constraint envelope (blue) which is computed and checked with geology; (c) time-interval velocity data from two selected locations (blue and black lines) before editing; (d) the same time-interval velocity data from the two selected locations after editing, including resampling, median and damped-least-squares filtering. Note: a couple of isolated points were moved into the red envelope by applying filtering. FIG. 6: illustrates velocity editing. (a) A base map showing lateral velocity trends in colors (cool colors are slow) in three (northern, mid and southern) blocks at four seconds; (b) RMS velocities with a geological constraint envelope (blue) for the entire area showing velocity trends; (c) southern blocks with slow velocity trends with global envelope (green) and local envelope (blue); (d) and (e) mid and northern blocks show faster velocity trends. FIG.
7: illustrates geologic markers. Velocity trends between certain geologic markers were discerned from rock types and ages. The estimation of vertical velocity trends can be improved based on known rock types, and again editing can be applied by choosing filter parameters and sampling rates in different geologic time windows according to information about the rocks. A sonic (green) curve and four neighboring seismic (red) velocities are shown. Horizontal lines with names (MIOC, OLIG, . . . ) indicate the ages of rocks. FIG. 8: illustrates checkshot data. (a) Time vs. depth curve from measured data; (b) time vs. average velocity derived from checkshot data; (c) time vs. interval velocity derived from checkshot data. FIG. 9: (a) A checkshot average velocity (bright blue) compared with neighboring seismic velocities (blue lines); (b) a checkshot RMS velocity (bright blue) compared with seismic average velocities (blue lines); and (c) a checkshot interval velocity (bright blue) compared with seismic interval velocities (blue lines). Checkshot RMS and interval velocities were derived from checkshot data for comparison. FIG. 10: illustrates stratigraphic modeling. (a) Three stratigraphic surfaces; (b) a cross-section with layer boundaries; (c) an upper stratigraphic unit with a well; and (d) a lower stratigraphic unit with a well are shown. Two stratigraphic units have stratigraphic grids, and the shapes of the grids are irregular but conform with the two stratigraphic horizons. FIG. 11: illustrates the calibration process. (a) Pre-calibration seismic RMS velocity contours; (b) post-calibration seismic RMS velocity contours; and (c) a calibration table, showing tabulated velocities before and after calibration and scale factors. Well derricks indicate well locations, numbered counterclockwise starting from far right, and thin lines indicate locations of seismic velocity analysis. FIG.
12: (a) A velocity trend map at three seconds with known salt contours (blue and yellow lines) and faults (purple and black lines); and (b) interval velocity data show vertical trends in colors. A horizontal line indicates time equals three seconds. Warm colors indicate fast velocities and cool colors indicate slower velocities using a rainbow scale. Purple indicates a no-data zone. FIG. 13: A variogram map on a stratigraphic horizon shows the NE direction as the direction of maximum spatial continuity. Colors indicate normalized variogram values. Two red dots and an ellipse are variogram modeling tools for finding the direction of maximum continuity and computing variogram values along the major and minor axes of the ellipse. FIG. 14: (a) A base map shows the location of Line A to B (red line) and known salt contours and well locations; (b) Line A to B of the curved-ray prestack time migration result shows excellent imaging of steep salt face reflection (inside yellow ellipse) and faults. It also shows excellent ties to well markers: Dn (green), Bh (red), Ym (yellow) and salt (pink) inside yellow circles. Vertical scales are time and horizontal scales are line and trace locations. FIG. 15: illustrates a comparison of two prestack time migrations. (a) Previous 1999 prestack time migration result; (b) current 2003 prestack time migration result shows significantly improved imaging of salt flank and faults, wherein the accuracy of the location of salt is critical for defining oil compartments; (c) a depth slice shows an oil compartment defined by the location of salt and faults; (d) well planning by dotted red line indicates a successful drilling path close to the salt face compared to a failed drilling path (black line). Detailed sand columns are indicated by varying colors. FIG. 16: illustrates curved-ray prestack migration results.
(a) prestack time migration without velocity calibration and trend fitting; (b) prestack time migration with velocity calibration and trend fitting shows improved images. The above two sections show significantly different well marker (green bar: DN, pink bar: salt) ties with seismic data (yellow circles). Interpreters agreed that section (b) shows excellent ties to well markers as a result of using RMS velocities with calibration and fitting. Horizontal scales indicate line and trace locations and the vertical scale indicates time in seconds. FIG. 17: illustrates common image point gathers, showing consistent prediction of times between different offset traces, which indicates small velocity errors. Small residual errors indicate that the velocity model was accurate for imaging. DESCRIPTION OF THE PREFERRED EMBODIMENTS Discussion of Terms Time and Depth Migration Seismic migration methods are classified as either depth migration or time migration. Depth migration honors lateral as well as vertical variations, utilizing different assumptions about the physics of seismic wave propagation. Time migration solutions ignore wave-field distortions created by lateral variations in seismic velocities. However simplistic, time migration is widely used because it is less computationally expensive. Curved-ray time migration is a recent advance in prestack time migration, which is sensitive to vertical velocity variations. However, it is only with the relatively recent advent of clustered PC processing that curved-ray prestack time migration has become generally available. Checkshot Data A checkshot is a method of determining average velocity as a function of depth by lowering a geophone into a hole and recording energy from shots fired from surface shot holes. A checkshot is often run in addition to a sonic log to supply a reference time at the base of the casing to check an integrated time. Typical intervals between receivers are 250 to 500 feet.
Some checkshot wells are vertical and some are deviated. Sonic Logs A sonic log measures interval travel time (ITT), which is the reciprocal of interval (P-wave) velocity, in microseconds per foot. It is frequently used for porosity determination by the time-average equation. Geostatistics (Isaaks and Srivastava, 1989) Geological and geophysical data come in discrete and incomplete forms. A common task for a geophysicist is to fill up the space between data points with reasonable estimates. The quantitative description of geological trends is analyzed by the variogram process, and the interpolation method incorporating such variogram trend information is called Kriging. Geophysical data also come with different measurement scales. Sonic logs are measured every half foot, and seismic velocities are measured to 100 milliseconds, which corresponds to about 300 feet. Data of such different accuracies can be integrated in interpolation by a process called Co-Kriging. Variogram (Isaaks and Srivastava, 1989) In geostatistics, a variogram is a measurement of spatial variability. In common words, it indicates how data points in a defined space become uncorrelated as the distance between points increases. Geostatistical Interpolation Geostatistical interpolation refers to Kriging and Co-Kriging incorporating variogram trend information. Velocity Functions The term "velocity functions" is used to indicate common velocity parameters used in the industry, such as RMS velocity, interval velocity, and average velocity. The term is intended to include such or equivalent (new or old) or newly developed or alternate velocity functions. Selected Seismic Data "Selected seismic data" indicates use of less than a whole set of recorded traces from a 3D seismic data set. Hard Data Sources "Hard data sources" indicate sources of well data such as checkshot data, sonic data and log data.
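The variogram defined above can be estimated directly from scattered measurements. The following minimal isotropic sketch (the function and variable names are illustrative, not from the patent, and a production implementation would also handle directional bins for anisotropy) computes the empirical semivariogram by averaging squared differences over pairs of points binned by separation distance:

```python
import numpy as np

def empirical_variogram(coords, values, lag, n_lags):
    """Isotropic empirical semivariogram:
    gamma(h) = 0.5 * mean[(z_i - z_j)^2] over pairs whose separation
    falls in each lag bin. coords: (n, 2) positions; values: (n,) data."""
    d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    sq = 0.5 * (values[:, None] - values[None, :]) ** 2
    gammas = []
    for k in range(n_lags):
        mask = (d > k * lag) & (d <= (k + 1) * lag)
        gammas.append(sq[mask].mean() if mask.any() else np.nan)
    return np.array(gammas)
```

For data with a spatial trend, the computed gamma values grow with lag distance, which is exactly the "points become uncorrelated with distance" behavior the definition describes; fitting a model curve to these values is the variogram modeling step used later for Kriging.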
Use of the word "established" in reference to stratigraphic horizons is intended to indicate stratigraphic horizons that are accepted as relatively well known and/or are believed likely. There should be significant or strong substantiation for the existence of such horizons. Alternative Collocated Co-Kriging A variation of the collocated co-Kriging method, called Alternative Collocated Co-Kriging, is designed to incorporate geologic features causing large velocity anomalies. The method produces a smooth boundary with a large anomaly using estimation variances. In general, see the Background Information Table. Discussion of Invention The current invention is a method for: 1) efficiently generating imaging and well ties for 3D seismic data using prestack time migration; 2) improved velocity calibration to hard data and geological data, including geostatistically sensitive trend fitting ("iDEPTHing"), of RMS velocities for use in curved-ray prestack time migration; and 3) constructing an improved 3D geologically plausible velocity model for subsequent prestack depth migration. The current invention is based on the following concepts. Seismic velocities indicate lateral velocity variation faithfully but tend to be inaccurate and unreliable for vertical velocities, due to the hyperbolic moveout assumption. Discrepancies between seismic velocities and well velocities are not necessarily due to anisotropy, as assumed by many people. They may be due to, and explained by, inaccuracies in seismic velocity analysis. Early calibration and trend fitting ("iDEPTHing") can surprisingly resolve many such inconsistencies in velocities derived from well (hard) data and seismic (soft) data (FIG. 9). The prominent discrepancies between the seismic (soft) and checkshot (hard) data in FIG. 9 are due to geopressure. The procedure can improve early estimated velocities in RMS, average and interval velocities.
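The checkshot-derived average, interval, and RMS velocities compared in FIG. 9 follow directly from the recorded time-depth pairs. A minimal sketch, under the simplifying assumptions of a vertical well, one-way times, and a datum at the surface (names and the relationships for a deviated well are not from the patent):

```python
import numpy as np

def checkshot_velocities(t, z):
    """Derive velocity functions from checkshot time-depth pairs.
    t: one-way times in seconds with t[0] = 0; z: depths with z[0] = 0.
    Returns (v_avg, v_int, v_rms), each defined at picks t[1:]."""
    dt = np.diff(t)
    v_int = np.diff(z) / dt                 # interval velocity per layer
    v_avg = z[1:] / t[1:]                   # average velocity down to each pick
    # RMS: time-weighted root-mean-square of the interval velocities
    v_rms = np.sqrt(np.cumsum(v_int**2 * dt) / np.cumsum(dt))
    return v_avg, v_int, v_rms
```

Note that the RMS velocity always equals or exceeds the average velocity at the same pick (Cauchy-Schwarz), which is why FIG. 9 can meaningfully cross-compare the two families of curves.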
An early accurate velocity model requires more than just seismic velocities from seismic data. Various other usable sources of velocity data are frequently available, and should be appropriately integrated into the model, such as checkshot, sonic, log, well markers, proximity surveys, VSP data and other geologic markers or descriptions. (The latter are typically referred to as hard data.) The value of checkshot and log data in adjacent blocks, in particular, is highlighted by the current invention. Surprisingly, curved-ray prestack time migration can generate excellent seismic imaging and well ties if one first calibrates and trend fits (preferably edited) RMS velocities with (preferably edited) available hard data and geophysical data. A further improved 3D geologically plausible velocity model can subsequently be constructed by interpreting the prestack time migration results. Seismic velocities are preferably edited with the use of geologic constraints, to the extent available, and in particular, seismic interval velocities should be adjusted to be stable and in agreement in terms of predictable rock properties. RMS velocities and average velocities are best derived from edited interval velocities. Automated editing tools are particularly helpful for efficient volume editing. Sophisticated calibration tools are particularly useful so that sparse well data can be smartly interpolated. Geostatistical interpolation (Kriging and Co-Kriging) is best utilized for smart interpolation, honoring different correlation lengths in lateral directions. Kriging is valuable for honoring the anisotropic correlation lengths. Geostatistical methods are best used to integrate the well (hard) data and seismic (soft) data. An alternative method could use a geological heterogeneity scale. These methods best replace excessive smoothing, now popular in the industry.
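The Kriging interpolation relied on above can be sketched in its simplest form, ordinary Kriging with an exponential variogram model. The model choice, sill, and range here are illustrative assumptions (the patent does not prescribe them), and honoring the anisotropic correlation lengths it mentions would additionally require direction-dependent distances:

```python
import numpy as np

def exp_variogram(h, sill=1.0, rng=500.0):
    """Exponential variogram model: gamma(h) = sill * (1 - exp(-3h/range))."""
    return sill * (1.0 - np.exp(-3.0 * h / rng))

def ordinary_kriging(coords, values, target, sill=1.0, rng=500.0):
    """Ordinary Kriging estimate at `target` from scattered data.
    Solves the standard OK system with a Lagrange multiplier so the
    weights sum to one (unbiasedness); exact at the data points."""
    n = len(values)
    d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    A = np.ones((n + 1, n + 1))
    A[:n, :n] = exp_variogram(d, sill, rng)
    A[n, n] = 0.0
    b = np.ones(n + 1)
    b[:n] = exp_variogram(np.linalg.norm(coords - target, axis=-1), sill, rng)
    w = np.linalg.solve(A, b)[:n]
    return float(w @ values)
```

Unlike plain smoothing, the weights adapt to the fitted variogram, so closely correlated data dominate nearby estimates while distant, uncorrelated data are down-weighted; this is the sense in which Kriging "honors" correlation lengths.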
A set of velocity tools best enables the current invention to proceed in steps, including editing, building stratigraphic units, geostatistical interpolation methods and calibration including trend fitting. For quality control, the calibration and trend fitting results can be checked along well tracks and on curved well planes between wells. Discussion of the Figures FIG. 1 is a schematic flow chart illustrating a preferred embodiment of the current invention. Steps 10-12 indicate velocity calibration of seismic RMS velocities with hard data and geostatistically sensitive trend fitting (together sometimes referred to as "iDEPTHing") before curved-ray prestack time migration, one key aspect of the current invention. Curved-ray prestack time migration accommodates vertical velocity gradients. The result surprisingly images all seismic events well and ties well markers. Such a result alone can save interpreters significant time by not having to task for well ties. Subsequently, migrated time data can be converted to depth using an average velocity cube, converted from the RMS velocity cube. The depth of any stratigraphic horizons produced should be accurate for velocity depth modeling in the next steps. Steps 20-22 indicate further velocity calibration, again with hard data and geostatistically sensitive trend fitting ("iDEPTHing"), for subsequent prestack depth migration. This comprises an efficient method for constructing a geologically plausible initial interval velocity model. This model, calibrated and trend fitted, takes advantage of the further information derived from the migrated time data. New stratigraphic horizons may have been identified. According to the current industry practice, iterative prestack depth migration relies on cycling through Steps 24-28 without performing Steps 10-14 and 20-22. According to current industry practice, a macro model is constructed using interpreted horizons and interpolating velocity fields.
This type of model has the following characteristics: 1) velocity errors are embedded due to lack of proper editing; 2) velocities are smoothed to hide the embedded errors; 3) the positioning of layer boundaries is not verified; and 4) vertical velocity trends are deviated whenever velocity reversals or large velocity gradients exist. Steps 10-14 and 20-22 can improve the initial velocity model by avoiding embedded velocity errors, avoiding excessive smoothing and by trend fitting using geostatistical Kriging. For preferred embodiments of the present invention, more detailed steps are described in subsequent flow charts, indicating in more detail velocity editing, calibration and trend fitting in FIG. 2 and constructing a geologically plausible velocity model in FIG. 3. FIG. 2 illustrates one preferred method for a preferred embodiment, indicating velocity editing, calibration and trend fitting for use in curved-ray prestack time migration. (Step 40) Velocities are compiled from selected 3D prestack seismic data, and interval and RMS velocities are edited honoring geologic trends that exist in the survey area. Seismic velocities can be edited using envelopes of vertical trends, based on rock properties and/or using computed envelopes. (See FIGS. 4, 5, and 6.) Regional envelopes may be needed for an extensive area (FIG. 6). Envelopes form constraints. Editing can include resampling, to remove anomalies, and/or applying median and damped-least-squares filters to RMS velocities. The RMS velocities and interval velocities may be edited with the help of interactive windows. Lateral trends can be computed by geostatistical variogram modeling. (See FIGS. 12, 13.) Geological maps, faults, geo-pressure zones, geological markers and salt intrusions can be viewed on an interactive workstation, including on dual windows, with one window for interval velocities and one window for RMS velocities. More particularly, the following steps can be taken for editing. Referencing step 41 of FIG.
2, one can convert computed seismic RMS velocities to interval velocities using the Dix equation, and to average velocities. Interval velocities are defined at the center of the layers (See FIG. 4). One can compute envelope interval functions by specifying upper and lower limits based on geologic constraints, to the extent available. One can then delete or correct erratic functions of picks, lying outside of envelope functions. (See FIG. 5). Referencing step 42 of FIG. 2, an alternative possibility is to resample the seismic data and/or also to apply median and damped-least-square filters on the RMS velocity domain. If rock properties are known between certain geologic markers (See FIG. 7), one can apply filters for user-defined time windows. Time-varying re-sampling is an option to consider for velocity trends between certain geologic markers. Any deleted velocity functions could alternately be assigned lower weights. Anomalous interval velocities can be corrected to their envelope. (Step 44) RMS velocities can now be calibrated, preferably using selected stratigraphic horizons which control sedimentation. (See FIG. 10). The following step can be taken for stratigraphic horizons. Referencing step 43 of FIG. 2, known horizons controlling sedimentation styles due to compaction, if any, such as the water bottom horizon in the Gulf of Mexico, are selected. Such stratigraphic horizons provide conforming surfaces for velocity calibration. A conforming stratigraphic unit is built using, for example, two stratigraphic surfaces (See FIG. 10). Referencing step 45 of FIG. 2, in order to calibrate RMS velocities, preferably using key controlling stratigraphic horizons, variogram models for velocity trends from seismic velocities are computed. (See FIGS. 11, 12 and 13). Checkshot and/or other hard well data (step 46) is edited to the extent feasible. (See FIG. 7). RMS and/or interval velocities are computed at well locations from hard data (step 47). (See FIG. 8).
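The Dix conversion invoked in step 41 is a standard relation and can be sketched directly. This is a generic illustration with made-up velocities, not the patented workflow:

```python
import math

def dix_interval_velocities(t, v_rms):
    """Convert RMS velocities to interval velocities with the Dix equation.

    t     : two-way times (s) at the base of each layer, strictly increasing
    v_rms : RMS velocities (e.g. m/s) down to each of those times
    Returns one interval velocity per layer.
    """
    v_int = [v_rms[0]]  # first layer: interval velocity equals RMS velocity
    for n in range(1, len(t)):
        num = v_rms[n] ** 2 * t[n] - v_rms[n - 1] ** 2 * t[n - 1]
        den = t[n] - t[n - 1]
        v_int.append(math.sqrt(num / den))
    return v_int

# Hypothetical picks: a slow water layer over progressively faster sediments
print(dix_interval_velocities([1.0, 2.0, 3.0], [1500.0, 1800.0, 2100.0]))
```

Note that the square root is why editing matters: a velocity reversal or a pick error can make the numerator negative, which is exactly the kind of embedded error the envelope editing of steps 41-42 is meant to catch.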
The next step is to interpolate seismic velocities to the locations of hard well data (step 48). One is now in a position to compute scale factors, preferably by dividing checkshot RMS velocities at well locations by seismic RMS velocities interpolated to the well locations (step 49). Geostatistical Kriging, using the variogram model of velocity trends, is used to interpolate the scale factors to the selected locations of seismic velocities (step 50). Calibrated seismic velocities are computed by applying (such as by multiplying) the interpolated scale factors to the seismic velocities (step 51). See FIG. 11. (Note: Deviated checkshot and checkshot from adjacent blocks, if available, can be included for calibration.) Again, checkshot and/or other hard data is also preferably first edited, such as by reviewing the time vs. depth, time vs. average-velocity and time vs. interval-velocity relationships (See FIG. 8). Some checkshot data may be determined not to be used because it measures a local anomaly. (Step 52) Kriging or Co-Kriging is applied to the calibrated seismic velocities at the select locations to geostatistically interpolate the calibrated velocities to all trace locations. The process includes fitting calibrated velocity data to computed velocity trends, consistent with geologic trends, for geostatistically sensitive trend fitting of the calibrated data. Referencing step 53 of FIG. 2, one can further compute a time-slice of seismic velocities and check or match lateral velocity trends with geologic maps, if available (See FIG. 12). Referencing step 54 of FIG. 2, one can further compute a variogram model of seismic velocities and compare or match it with "geologic trends", if available (See FIG. 13). Referencing step 55 of FIG. 2, calibrated seismic RMS velocities are interpolated by the Kriging process on a stratigraphic grid to trace locations for use in prestack time migration.
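Steps 49-51 (scale factors at wells, interpolation, application) can be illustrated with a toy example. A simple inverse-distance weighting stands in here for the geostatistical Kriging the patent specifies; all function names, positions and velocities are hypothetical:

```python
def scale_factors(checkshot_vrms, seismic_vrms_at_wells):
    """Step 49: ratio of hard (checkshot) to soft (seismic) RMS velocity at each well."""
    return [c / s for c, s in zip(checkshot_vrms, seismic_vrms_at_wells)]

def interpolate(well_x, factors, x):
    """Stand-in for Kriging (step 50): inverse-distance-weighted interpolation."""
    w = [1.0 / (abs(x - wx) + 1e-6) for wx in well_x]
    return sum(f * wi for f, wi in zip(factors, w)) / sum(w)

def calibrate(seismic_vrms, trace_x, well_x, checkshot_vrms, seismic_at_wells):
    """Step 51: apply the interpolated scale factors to the seismic velocities."""
    f = scale_factors(checkshot_vrms, seismic_at_wells)
    return [v * interpolate(well_x, f, x) for v, x in zip(seismic_vrms, trace_x)]

# Two wells where checkshots run about 4% and 2% faster than the seismic picks:
print(calibrate([2000.0, 2100.0], [0.0, 10.0],
                [0.0, 10.0], [2080.0, 2142.0], [2000.0, 2100.0]))
```

Real Kriging would honor the variogram model of the velocity trend rather than raw distance, which is the point of steps 45 and 50.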
Calibrated seismic average velocities can also be geostatistically interpolated for time-to-depth conversion and depth interpretation. Step 60 of FIG. 3 illustrates constructing a geologically plausible velocity model for subsequent prestack depth migration by the following steps. In Step 71, seismic interval velocities are calibrated analogously to the above. The scale factor between true depth and migrated depth can be added into the scale factors computed from checkshot data. Salt entry points and proximity surveys can also be used for calibration in a similar manner to other hard data. In Step 72, sonic (interval) velocities are resampled and calibrated. Sonic velocities can be calibrated with checkshot data to be consistent with calibrated seismic velocities. For trend fitting as well as for use in calibration (steps 81 and 82), prestack time migration results can be interpreted and verified against well marker ties. Key stratigraphic horizons can be updated with time-to-depth conversion of time horizons (FIG. 14). Variogram models can be computed for each stratigraphic unit. In Step 83, calibrated seismic interval velocities are interpolated to all trace locations on new, updated stratigraphic horizons for prestack depth migration. In Step 80, similar geostatistical methods can be used to integrate sonic logs to seismic velocities, preserving the local effect of the data of the sonic log. In Step 81, calibrated sonic (hard) data and seismic interval velocities (soft data) can be integrated by a geostatistical method. Alternative Collocated Co-Kriging is a preferred collocated co-Kriging method for such integration. Geologic heterogeneity scales are preferably used instead of an arbitrary smoothing scale (See FIGS. 14 and 15). Such scales are indicated by known anomalous geologic features.

Further Discussion

Velocity calibration has traditionally been applied after migration, during interpretation.
The instant invention teaches that geologically sensitive velocity calibration is more profitably applied before prestack migration. Further, velocity editing is important and helpful before velocity calibration. Prestack migration with "iDEPTHing" improves both focusing and well ties. Velocity trends can be carefully examined, particularly in an extensive (e.g., 200 block) area. Several envelope functions can be computed for editing and preserving velocity trends toward the directions of rapid variation. Stratigraphic horizons should be chosen if the boundary controls sedimentary layering. A good example is the water bottom in the Gulf of Mexico. Not every velocity layer boundary can be a stratigraphic horizon. Checkshot data should be carefully verified and checked for any errors common in checkshot interval velocity. Checkshot calibration fits not only vertical trends but also lateral trends using a variogram model. Some checkshot data should not be used for calibration, however, if they represent local geology such as a local carbonate seam. Sonic data can be summed for every 100 feet and block shifted to be used for calibration. Geological features such as geo-pressure zones can be handled by the localized integration of sonic (hard) data. The integration is local within the geological heterogeneity scale using estimation variances. The current invention teaches that a geologically plausible model can be constructed based on a starting isotropic earth assumption. Curved-ray prestack time migration results will tie well markers and provide a helpful guide for drilling prospects. Residual processing will enhance focusing and signal-to-noise ratio (See FIG. 17). If the residual moveout becomes significant, anisotropy may be considered. Anisotropy should be measured in a laboratory, however, and anisotropic prestack migration is needed for further improvement of seismic imaging and well ties.
An anisotropic factor model can be constructed from residual velocity errors with calibration against anisotropy data from laboratory measurements. The current invention can be developed to utilize shear-wave seismic data or multi-component seismic data, with minor modifications.

Historical Perspective

Curved-ray prestack time migration has been known in the industry for many years but was not practically available to most in the industry until PC clusters became a viable hardware solution. In the oil industry, as major vendors started moving toward PC clusters, new algorithms, such as curved-ray prestack time migration, offering further accuracy became generally available. Prestack time migration has typically been used for AVO studies and for seismic attribute studies. Traditionally, the industry has not paid attention to editing and improving RMS velocities for such time migration because time migration has been considered primarily as a process for increasing signal-to-noise ratios by collapsing diffractions. Because RMS velocities typically are smooth and monotonically increasing, and time migration results do not show any significant seismic image distortions therein due to velocity errors, it has historically been considered acceptable to use RMS velocities in time migration without checking the corresponding interval velocities converted from RMS velocities. The present inventor is the first to disclose and teach that velocity calibration and trend fitting for prestack time migration greatly enhances the value of the results, for imaging and tying well markers, and can expedite more complicated depth migration. In December 2002, the instant inventor persuaded Mr. Michael Kenney, a managing partner of Summit Energy Co., to try an experiment. The result was a remarkable success, correctly imaging all highly dipping salt faces and tying well markers on all sides. The results showed everything that Mr.
Michael Kenney dreamed of seeing, including many interpretable geologic features. The experiment substantiated that velocity calibration and trend fitting prior to curved-ray prestack time migration could yield amazing results in terms of imaging and well tying. Further, such prestack time migration could advantageously be used as an initial step for prestack depth migration and secure significant economies. Further calibration and trend fitting can constrain a velocity-depth model after prestack time migration.

The 2002/2003 Experiment ("Eureka")

The experiment tested the value of constructing a 3D geologically plausible velocity model prior to curved-ray prestack time migration. The reprocessing of a proprietary 3D data set attempted to achieve the goal of extracting a high quality salt flank image as well as tying into wells around the salt. Seismic velocities were edited in the interval velocity domain using geologic constraints. RMS velocities were calibrated with geostatistically interpolated scale factors based on checkshot data and further trend fitted to trace locations. The water bottom was used as a stratigraphic horizon. The whole 3D seismic volume was migrated by curved-ray prestack time migration. The migration results provided excellent well ties on all sides of the salt structure in the survey area (FIGS. 14 and 16). The common image point gathers show consistencies in predicting times due to velocity accuracy (FIG. 17). Several reliable stratigraphic horizons can be interpreted from the curved-ray prestack time migration results. The calibration and trend fitting ("iDEPTHing") of interval velocities included well marker miss-ties and known salt entry points. The foregoing description of preferred embodiments of the invention is presented for purposes of illustration and description, and is not intended to be exhaustive or to limit the invention to the precise form or embodiment disclosed.
The description was selected to best explain the principles of the invention and their practical application to enable others skilled in the art to best utilize the invention in various embodiments. Various modifications as are best suited to the particular use are contemplated. It is intended that the scope of the invention is not to be limited by the specification, but to be defined by the claims set forth below. * * * * *
Find a Guadalupe, AZ Algebra 2 Tutor

...My enthusiastic, interactive teaching style engages students and encourages them to take an active role in learning. I have worked with students of all ages and abilities, from students diagnosed with dyslexia or ADD/ADHD to over-achieving students on their way to high-ranking colleges. I also ...
33 Subjects: including algebra 2, English, reading, Spanish

...C was my first language in high school. I have also had 1 semester of formal training in scientific programming in C. I have a Bachelor of Science in Mathematics from Arizona State University, one semester of linear algebra and one semester of graduate algebra specifically dealing with advanced topics in linear algebra.
15 Subjects: including algebra 2, calculus, geometry, algebra 1

...I have been teaching at the college level for 15 years. I am currently a professor of mathematics at Scottsdale Community College. I have taught and tutored everything from basic mathematics up through Calculus, Differential Equations and Mathematical Structures.
9 Subjects: including algebra 2, calculus, geometry, algebra 1

...Geography is a subject that I am very familiar with and can help any student master. I am a high school Special Education teacher who specializes in English. I teach Literacy Strategies, English Fundamentals, and sophomore English classes.
40 Subjects: including algebra 2, English, writing, reading

Being a teacher, I feel obligated to help children get a better handle on their education because I feel that education is the key to success in life. My passion for imparting knowledge, especially math which is troublesome to a lot of children, makes me feel that I can help any K-8 student overcom...
2 Subjects: including algebra 2, prealgebra
1. Plane waves in free space, topics to include phase velocity, characteristic impedance, polarization, Poynting flux, wavelength, frequency relations.
2. Green's functions (scalar). Ishimaru sections 5-1, 5-2, 5-4, 5-5, 5-6. Be able to describe the application of Green's functions, as well as the steps used to create them. In particular, be able to demonstrate their literal construction for 1D problems.
3. Method of Moments. Ishimaru 18-1, 18-2, and many other references. Be able to describe the formulation of a Moment Method problem, describing the steps by which one approximates the solution of an integral equation by converting it to an equivalent matrix equation.
□ Waves in layered structures. Ishimaru 3-7
□ Waves in inhomogeneous media. Ishimaru 3-13, 3-14
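For item 3, the reduction of an integral equation to a matrix equation can be demonstrated on a 1D Fredholm equation of the second kind, f(x) + integral over [0,1] of K(x, x') f(x') dx' = g(x), using pulse basis functions and point matching. The kernel and right-hand side below are invented so that the exact solution, f(x) = x, is known:

```python
def solve_mom(kernel, g, n):
    """Point matching with pulse bases: A f = g, with A = I + h*K at midpoints."""
    h = 1.0 / n
    xm = [(i + 0.5) * h for i in range(n)]          # match points (midpoints)
    a = [[(1.0 if i == j else 0.0) + kernel(xm[i], xm[j]) * h
          for j in range(n)] for i in range(n)]
    return gauss_solve(a, [g(x) for x in xm]), xm

def gauss_solve(a, b):
    """Plain Gaussian elimination with partial pivoting."""
    n = len(b)
    a = [row[:] for row in a]
    b = b[:]
    for k in range(n):
        p = max(range(k, n), key=lambda i: abs(a[i][k]))
        a[k], a[p] = a[p], a[k]
        b[k], b[p] = b[p], b[k]
        for i in range(k + 1, n):
            m = a[i][k] / a[k][k]
            for j in range(k, n):
                a[i][j] -= m * a[k][j]
            b[i] -= m * b[k]
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        x[i] = (b[i] - sum(a[i][j] * x[j] for j in range(i + 1, n))) / a[i][i]
    return x

# K(x, x') = x*x' gives g(x) = x + x/3 = 4x/3 for the exact solution f(x) = x
f, xm = solve_mom(lambda x, xp: x * xp, lambda x: 4.0 * x / 3.0, 8)
print([round(v, 3) for v in f])   # entries approximate f(x) = x at the midpoints
```

The essential steps this exhibits are the ones the syllabus asks for: expand the unknown in basis functions, enforce the equation at match points, and solve the resulting linear system.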
Square root and simplest form

Is that in simplest form? There is probably something else I can do but I can't spot anything.

Re: Square root and simplest form
Originally Posted by:
Is that in simplest form? There is probably something else I can do but I can't spot anything.
It really depends upon what you mean by simplest form. You can say

Re: Square root and simplest form
Originally Posted by:
It really depends upon what you mean by simplest form. You can say
But would that be classed as "simpler" or just equally similar?

Write in simplest radical form:

Re: Square root and simplest form
Originally Posted by:
But would that be classed as "simpler" or just equally similar?
Well, I did say it depends on who is asking.
Originally Posted by:
Write in simplest radical form:
and I have no idea how to go about this.

Re: Square root and simplest form
Originally Posted by:
Well, I did say it depends on who is asking.
I was actually asking you, in your opinion, which would you prefer to see? Also, did you mean
Also I can see how you worked it out, thanks.

Re: Square root and simplest form
Originally Posted by:
Well, I did say it depends on who is asking.
I don't want to pick at you but...

Re: Square root and simplest form
Originally Posted by:
I was actually asking you, in your opinion, which would you prefer to see? Also, did you mean
... probably you mean:

Re: Square root and simplest form
Originally Posted by:
... probably you mean:
wha?!!! now I'm mega confused...

Re: Square root and simplest form
Originally Posted by:
I don't want to pick at you but...
I don't know what I was thinking. Of course it is 56.
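For readers following along, the mechanical step behind "simplest radical form" is pulling the largest perfect-square factor out of the radicand. A small helper (our own sketch, not from the thread) does exactly that, and it also agrees with the closing post, since sqrt(3136) = 56:

```python
def simplest_radical(n):
    """Write sqrt(n) as a*sqrt(b) with b square-free (largest square pulled out)."""
    a, b = 1, n
    f = 2
    while f * f <= b:
        while b % (f * f) == 0:   # pull out the factor f^2 as often as possible
            a *= f
            b //= f * f
        f += 1
    return a, b

print(simplest_radical(48))    # (4, 3): sqrt(48) = 4*sqrt(3)
print(simplest_radical(3136))  # (56, 1): sqrt(3136) = 56
```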
The Power Factor Explanation

Now that you know some fundamentals, understanding the Power Factor Explanation will be easy. The components of motor current are load current and magnetizing current (adding those instantaneous values yields the total motor current). Also, because load current is in phase with voltage and magnetizing current lags voltage by 90 degrees, their sum will be a sine wave that peaks somewhere between 0 and 90 degrees lagging, which is the motor current's offset from voltage. There are negative effects associated with increased offset, and that's part of the power factor explanation. Anyhow, Power Factor represents the offset in time, or the delay, between voltage and the current being delivered and is defined as the cosine of that offset. So when you look at this example, Total Current lags the voltage by 45 degrees. That is the offset, or the Power Angle, or the delay. The cosine of 45 degrees is 0.707, and we call it "lagging" because the current lags behind the voltage. If the delay were 0 degrees (voltage and current in phase), the power factor would be 1.0 (because cosine 0 = 1) and if the delay were a full 90 degrees (this would be all magnetizing current), the power factor would be 0.0 (cosine 90 = 0). So big deal....why is this offset so important? Fair question, very fair. In a motor, the component of load current is the current associated with doing work (i.e. pumping fluid, compressing gas, etc.) and the magnetizing current is not doing work. I know, I know... without the magnetic field the motor wouldn't work. This is true; however, as stated earlier, the magnetic field takes some energy to get built up in one half cycle and then returns that energy to the system in the next half cycle, so its net effect is that it uses no energy. So if we consider the cable that delivers power to the motor: when we add the magnetizing current to the load current, the cable's total current flow becomes larger.
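The 45-degree example is easy to check numerically: adding an in-phase load current to a magnetizing current that lags by 90 degrees gives a total current whose lag, and hence power factor, falls in between. The amplitudes below are invented (equal, so the lag comes out at exactly 45 degrees):

```python
import cmath, math

# Phasor sketch of the text above: load current in phase with voltage,
# magnetizing current lagging by a quarter cycle.
i_load = 10.0                                  # in phase -> angle 0
i_mag = 10.0 * cmath.exp(-1j * math.pi / 2)    # lags by 90 degrees
i_total = i_load + i_mag

power_angle = -math.degrees(cmath.phase(i_total))   # lag of the sum, degrees
power_factor = math.cos(cmath.phase(i_total))

print(round(power_angle, 1), round(power_factor, 3))   # 45.0 0.707
```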
We, however, really just want to drive the load. If we could find a way to supply the magnetizing current without sending it down the cable, then the cable would only need to deliver the load current. Somebody has already figured this out! Hint....it's the capacitor, and I'll explain in the next section how it does this. It stores and releases energy for use by the magnetic field locally (by locally I mean at the motor-end of the cable delivering the power).

Buildings and Industrial Plants

So far I have only discussed motors, but you can think of your building, your campus or your city as a BIG MOTOR. The current delivered to a building or an industrial plant (or just about anything else) is similar to a motor's current....it can be broken down into two components, load current (which is the sum of all the individual load currents in the building) and magnetizing current (which is the sum of all the individual magnetizing currents in the building).

Disadvantages of Low Power Factor

Lower power factor on a line means that higher current is flowing through it. The following are a few disadvantages to that:

Higher current results in a greater voltage drop on the line, especially if the line is marginally sized or very long. It's like trying to push too much water flow through a pipe; the pressure drops as you move down the pipe. In the cable, this results in an energy loss due to heat dissipation.

The higher current can push equipment closer to its rated capacity. Cables are rated based on their current carrying capacity. Transformers and generators are rated by VA (Volt-Amps), which is current carrying capacity at a certain voltage.

Another disadvantage to higher current flow, and it can be a big one, is the Reactive Demand Charge. The Utility that delivers the power to you will likely charge you if your Power Factor is too low. There are many different ways to determine this charge, but the rationale is this.
If the transmission line that delivers your power (remember all lines have a current carrying capacity) is carrying your magnetizing current (current that could be supplied locally by you), then you are eating into the capacity of that line and taking away the ability for them to sell power to other customers with that line. The result...Surprise! You Pay!!
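To make the cost of low power factor concrete, the sketch below sizes the reactive power (kvar) a capacitor bank must supply locally to raise a load's power factor, and the resulting drop in line current. The 100 kW load and the 0.70-to-0.95 correction are invented figures for illustration:

```python
import math

def correction_kvar(real_kw, pf_old, pf_new):
    """Reactive power a capacitor must supply to move pf_old up to pf_new."""
    q_old = real_kw * math.tan(math.acos(pf_old))   # kvar drawn from the line now
    q_new = real_kw * math.tan(math.acos(pf_new))   # kvar drawn after correction
    return q_old - q_new

# A 100 kW load at 0.70 lagging, corrected to 0.95:
kvar = correction_kvar(100.0, 0.70, 0.95)
current_drop_pct = (1 - 0.70 / 0.95) * 100   # line current scales with kVA = kW/pf
print(round(kvar, 1), round(current_drop_pct, 1))   # 69.2 26.3
```

That roughly 26% drop in line current is exactly the relief the utility's Reactive Demand Charge is designed to encourage.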
Equation Solver

Equation Solver solves a system of equations with respect to a given set of variables. It supports polynomial equations as well as some equations with exponents, logarithms and trigonometric functions. Equation Solver can find both numerical and parametric solutions of equations. The final result of solving the equation is simplified, so it could be in a different form than what you expect. Both equations with complex solutions and complex equations are supported. Functions of a complex variable are supported.

Please let us know if you have any suggestions on how to make Equation Solver better.

Copyright 2014 numberempire.com
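As a flavor of what such a solver does for the simplest case — a linear system of two equations in two variables — here is a minimal sketch using Cramer's rule (our own illustration, not Number Empire's engine):

```python
def solve_2x2(a1, b1, c1, a2, b2, c2):
    """Solve a1*x + b1*y = c1 and a2*x + b2*y = c2 by Cramer's rule."""
    det = a1 * b2 - a2 * b1
    if det == 0:
        raise ValueError("no unique solution")
    return ((c1 * b2 - c2 * b1) / det, (a1 * c2 - a2 * c1) / det)

# x + y = 3, x - y = 1  ->  x = 2, y = 1
print(solve_2x2(1, 1, 3, 1, -1, 1))   # (2.0, 1.0)
```

General-purpose solvers extend this idea with symbolic manipulation, which is what allows parametric (not just numerical) solutions.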
MathGroup Archive: September 2000 [00178]

Random spherical troubles

• To: mathgroup at smc.vnet.net
• Subject: [mg25170] Random spherical troubles
• From: Barbara DaVinci <barbara_79_f at yahoo.it>
• Date: Tue, 12 Sep 2000 02:58:56 -0400 (EDT)
• Sender: owner-wri-mathgroup at wolfram.com

Hi MathGrouppisti

This time, my problem is to generate a set of directions randomly distributed over the whole solid angle. This simple approach is incorrect (spherical coordinates are assumed):

Table[{Pi Random[], 2 Pi Random[]}, {100}]

because this way we obtain a set of points uniform over the [0, Pi] x [0, 2 Pi] rectangle, NOT over a spherical surface :-(

If you try doing so and plot the points {1, random_theta, random_phi} you will see them gathering around the poles, because that simple transformation from rectangle to sphere isn't "area-preserving".

Such a set is involved in a simulation in statistical mechanics ... and I can't get out of this trouble. Maybe mapping [0, Pi] x [0, 2 Pi] into itself, using a "non-identity" transformation, can spread points in a way that balances the pole-clustering effect. While I was brooding over that, an intuition flashed through my mind: since the spherical to Cartesian transformation is

x = rho Sin[theta] Cos[phi]
y = rho Sin[theta] Sin[phi]
z = rho Cos[theta]

perhaps the right quantities to randomly spread around are Cos[theta] and Cos[phi] rather than theta and phi themselves. Give a glance at this:

Table[{ArcCos[Random[]], ArcCos[Random[] Sign[0.5 - Random[]]]}, {100}]

Do you think it is close to being right? Do you see a better way? Have you done the job in the past? Should I reinvent the wheel?

I thank you all for prior replies and in advance this time.

Distinti Saluti (read: "Faithfully yours")

Barbara Da Vinci
barbara_79_f at yahoo.it
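Barbara's hunch is half right: for a uniform distribution over the sphere it is Cos[theta] — not theta — that must be uniform (on [-1, 1]), while phi itself stays uniform on [0, 2 Pi); spreading Cos[phi] as well would distort the azimuth, which is already correct. A sketch of the standard recipe (in Python rather than Mathematica):

```python
import math, random

def random_direction():
    """Uniform random unit vector: cos(theta) uniform on [-1,1], phi on [0,2*pi)."""
    cos_t = random.uniform(-1.0, 1.0)
    sin_t = math.sqrt(1.0 - cos_t * cos_t)
    phi = random.uniform(0.0, 2.0 * math.pi)
    return (sin_t * math.cos(phi), sin_t * math.sin(phi), cos_t)

# Sanity check: every sample is a unit vector and the cloud has no preferred pole.
random.seed(0)
pts = [random_direction() for _ in range(20000)]
mean = [sum(p[i] for p in pts) / len(pts) for i in range(3)]
print([round(m, 3) for m in mean])   # each component should be near zero
```

Equivalently, in Mathematica: Table[{ArcCos[2 Random[] - 1], 2 Pi Random[]}, {100}]. The proposed ArcCos[Random[]] only covers the upper hemisphere, since Random[] is in [0, 1].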
Find a Parker, CO SAT Math Tutor

I am currently teaching high school physics (AP) and physical science. I also have had previous experience teaching high school math and earth science. My master's degree is in Science Curriculum and Instruction, and I am licensed to teach both math and science in the state of Colorado.
15 Subjects: including SAT math, chemistry, algebra 1, algebra 2

...Additionally, my undergraduate studies included numerous Bible classes including Old and New Testament History and Geography; Evidences; Old Testament Poetry (Psalms, Proverbs, Ecclesiastes); as well as classes on The Book of Job; John; Romans, I and II Corinthians, Galatians, Ephesians, Philippi...
41 Subjects: including SAT math, English, reading, writing

...I can tutor with two methods. First, I can work with the student through a practice test (I can provide one, or students can provide their own). Second, I have a general curriculum (that targets the 20-30 range and covers algebra - trigonometry) and will require several sessions to complete the entire curriculum. I have been tutoring ACT science for the last 4 years.
7 Subjects: including SAT math, chemistry, algebra 2, ACT Math

...My goal is to empower you to test yourself and monitor your own progress. I am confident that I can provide an approach which will improve performance for test takers in many different subjects including Algebra, Psychology, Statistics, Research Methods, Biology, American/World History, and Engl...
31 Subjects: including SAT math, English, reading, writing

...The materials we will use are excellent: official recent tests published by the testmakers and lesson booklets that I have streamlined over the course of nine years and more than 600 students. Ultimately, the proof is in the results: My students score higher. The first hour is free so you can meet me and see the quality of my tutoring.
14 Subjects: including SAT math, geometry, GRE, algebra 1
Theorems of Morera and Liouville

6.6 The Theorems of Morera and Liouville and Extensions

In this section we investigate some of the qualitative properties of analytic and harmonic functions. Our first result shows that the existence of an antiderivative for a continuous function is equivalent to the statement that the integral of f(z) is independent of the path of integration. This result is stated in a form that will serve as a converse to the Cauchy-Goursat theorem.

Theorem 6.13 (Morera's Theorem). Let f(z) be a continuous function in a simply connected domain D. If the integral of f(z) around every closed contour in D is zero, then f(z) is analytic in D.

Proof of Theorem 6.13 is in the book Complex Analysis for Mathematics and Engineering.

Cauchy's integral formula shows how the value of f(z) at a point z0 can be recovered from the values of f(z) at points z on a circle C with center z0.

Theorem 6.14 (Gauss's Mean Value Theorem). If f(z) is analytic in a simply connected domain D that contains the circle of radius R centered at z0, then f(z0) is the average of the values of f(z) on that circle:

    f(z0) = (1/(2 pi)) Integral[0 to 2 pi] f(z0 + R e^(i theta)) d theta.

Proof of Theorem 6.14 is in the book Complex Analysis for Mathematics and Engineering.

We now prove an important result concerning the modulus of an analytic function.

Theorem 6.15 (Maximum Modulus Principle). Let f(z) be analytic and nonconstant in the bounded domain D. Then |f(z)| does not attain a maximum value at any point of D.

Proof of Theorem 6.15 is in the book Complex Analysis for Mathematics and Engineering.

We sometimes state the maximum modulus principle in the following form.

Theorem 6.16 (Maximum Modulus Principle). Let f(z) be analytic and nonconstant in the bounded domain D. If f(z) is continuous on the closed region R that consists of D and all of its boundary points B, then |f(z)| assumes its maximum value on B.

Proof of Theorem 6.16 is in the book Complex Analysis for Mathematics and Engineering.

Example 6.26. Let D to be f(z) is continuous on the closed region and this value is assumed by f(z) at a point D. Solution. From the triangle inequality and the fact that D, it follows that If we choose so the vectors (6-58) to be an equality (see Exercise 19 in Section 1.3).

Extra Example 1. Let D to be f(z) is continuous on the closed region D. Explore Solution for Extra Example 1.

Extra Example 2. Let D to be f(z) is continuous on the closed region D. Explore Solution for Extra Example 2.

Theorem 6.17 (Cauchy's Inequalities). Let f(z) be analytic in the simply connected domain D that contains the circle of radius R centered at z0, and suppose |f(z)| <= M for all z on that circle. Then |f^(n)(z0)| <= n! M / R^n for n = 1, 2, ....

Proof of Theorem 6.17 is in the book Complex Analysis for Mathematics and Engineering.

Theorem 6.18 shows that a nonconstant entire function cannot be a bounded function.

Theorem 6.18 (Liouville's Theorem). If f(z) is an entire function and is bounded for all values of z in the complex plane, then f(z) is constant.

Proof of Theorem 6.18 is in the book Complex Analysis for Mathematics and Engineering.

Example 6.27. Show that the function sin(z) is not a bounded function. Solution. We established this characteristic with a somewhat tedious argument in Section 5.4. All we need do now is observe that f(z) = sin(z) is entire and not constant; hence, by Liouville's theorem, it cannot be bounded.

We can use Liouville's theorem to establish an important theorem of algebra.

Theorem 6.19 (Fundamental Theorem of Algebra). If P(z) is a polynomial of degree n >= 1, then P(z) has at least one zero.

Proof of Theorem 6.19 is in the book Complex Analysis for Mathematics and Engineering.

Corollary 6.4. Let P(z) be a polynomial of degree n >= 1. Then P(z) can be expressed as the product of linear factors. That is,

    P(z) = A (z - z1)(z - z2) ... (z - zn),

where z1, z2, ..., zn are the zeros of P(z) counted according to multiplicity and A is a constant.

Extra Example 3. Find the n zeros of the equation Explore Solution for Extra Example 3.

Extra Example 4. Find the n zeros of the equation Explore Solution for Extra Example 4.

Extra Example 5. Find the roots of the Explore Solution for Extra Example 5.

Exercises for Section 6.6.
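Gauss's mean value theorem (Theorem 6.14) is easy to verify numerically: average f(z) over points of a circle and compare with the value at the centre. The function and circle below are our own illustration, not from the text:

```python
import cmath, math

def circle_mean(f, z0, r, n=4096):
    """Average of f over n equally spaced points of the circle |z - z0| = r."""
    total = 0.0 + 0.0j
    for k in range(n):
        t = 2.0 * math.pi * k / n
        total += f(z0 + r * cmath.exp(1j * t))
    return total / n

z0 = 0.3 + 0.4j
mean = circle_mean(cmath.exp, z0, 1.0)
print(abs(mean - cmath.exp(z0)) < 1e-9)   # True: the circle average equals f(z0)
```

Because the integrand is periodic and analytic, the equally spaced sum converges to the mean value extremely fast, so the agreement is essentially to machine precision.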
The Theorems of Morera and Liouville and Extensions Library Research Experience for Undergraduates Fundamental Theorem of Algebra The Next Module is The Fundamental Theorem of Algebra Return to the Complex Analysis Modules Return to the Complex Analysis Project This material is coordinated with our book Complex Analysis for Mathematics and Engineering. (c) 2012 John H. Mathews, Russell W. Howell
{"url":"http://mathfaculty.fullerton.edu/mathews/c2003/LiouvilleMoreraGaussMod.html","timestamp":"2014-04-20T18:23:29Z","content_type":null,"content_length":"28921","record_id":"<urn:uuid:f7dc5e17-3bc0-40c2-bab7-94f14a9084d1>","cc-path":"CC-MAIN-2014-15/segments/1397609539066.13/warc/CC-MAIN-20140416005219-00339-ip-10-147-4-33.ec2.internal.warc.gz"}
Finding reverse saturation current. 1. The problem statement, all variables and given/known data Hi, I have a problem to find the reverse saturation current for a piece of diode. I have a question - given conductivity, a I-V curve with forward bias information (a table of I vs V values), how can i proceed? I have other information abt the diode such as its mobility, and then it is made of silicon - other material info assume given. Anyone can advise the approach? thanks. 2. Relevant equations 3. The attempt at a solution Where is the relevant equation? If your data are precise, substitute the values of one point of the curve (V-I) into the diode equation and obtain Is. If your data are real, they should have measurement errors and you should use the diode equation in conjunction with a filter to obtain Is.
{"url":"http://www.physicsforums.com/showthread.php?t=353628","timestamp":"2014-04-16T07:40:56Z","content_type":null,"content_length":"26222","record_id":"<urn:uuid:0176688c-d08a-4889-bc43-7474c74e54c5>","cc-path":"CC-MAIN-2014-15/segments/1398223205375.6/warc/CC-MAIN-20140423032005-00135-ip-10-147-4-33.ec2.internal.warc.gz"}
- - Please install Math Player to see the Math Symbols properly Click on a 'View Solution' below for other questions: DD Find the slope of the line shown in the given graph. View Solution DD Which of the following is true for the slope of the line shown in the graph? DD View Solution DD Find the slope of the line that passes through the points (3, 5) and (-1, - 8). DD View Solution DD Find the slope of the line passing through the points (23, - 3) and (- 35, - 2). DD View Solution DD Find the slope of the equation 2x + y = 5. DD View Solution DD What is the slope of the diagonal PQ of the rectangle shown in the graph? DD View Solution DD Find the slope of the line shown in the graph. DD View Solution DD Find the slope of the line AB as shown in the graph. DD View Solution DD Kate started reading a story book. Initially, she read 20 pages. By the end of 4 days, she completed 84 pages. Find the average rate of reading the book and use it to determine the View number of pages can be completed after 10 days. Solution DD The profit of a company in the year 1998 is $40,000. The profit raised to $48,000 in the year 2000. At this rate what would be the profit of the company in 2004? View Solution DD Find the x - intercept of the line y = 5x + 30. View Solution DD Find the x - intercept of the equation of a line y = 3x - 13. View Solution DD Find the y - intercept of the line 3x - 2y = 17. View Solution DD Find the y - intercept of the line y = 7x - 19. View Solution DD Use intercepts to identify the graph of the equation 3x + 2y = 18. DD View Solution DD Use intercepts to choose the graph of the equation 3x - 4y - 12 = 0. View Solution DD Write the intercepts of the line shown in the graph. DD View Solution DD Find the x and y - intercepts of the equation - 3x + (12) y = - 4. DD View Solution DD Find the x - and y - intercepts of the equation (- 56)x + 30y = 15. DD View Solution DD Use intercepts to choose the graph of the equation 5 = - 3x - 4y. View Solution
{"url":"http://www.icoachmath.com/solvedexample/sampleworksheet.aspx?process=/__cstlqvxbefxaxbgedhxkjgdj&.html","timestamp":"2014-04-20T23:28:17Z","content_type":null,"content_length":"62016","record_id":"<urn:uuid:f8059333-b75e-40ee-beb3-a1fbf18e7e0c>","cc-path":"CC-MAIN-2014-15/segments/1397609539447.23/warc/CC-MAIN-20140416005219-00169-ip-10-147-4-33.ec2.internal.warc.gz"}
Using Plausible Values to Estimate Group Scale Score Statistics and Their Standard Errors Table of Contents | Search Technical Documentation | References Using Plausible Values to Estimate Group Scale Score Statistics and Their Standard Errors Suppose there is a matrix M plausible values for each respondent. The following steps can be taken to estimate the statistic of interest, 1. For each plausible value, 2. The estimate of the statistic of interest is computed as 3. The standard error of a. A sampling component, which is computed for each plausible value as where m^th plausible value and the r^th replicate weights. Subsequently, the sampling component is the average U[m] over plausible values. In practice, U[1] is used to approximate this average, substantially reducing the amount of computation required. b. A measurement component, which is computed as 4. The final estimate of the standard error is then From these components, the proportion variance due to the fact that is not directly observed, and the proportion variance due to sampling, θ. Also, a more efficient sample, with fewer students per school and many schools reduces the proportion due to sampling. Note that the proportions are based on the variance of a statistic, which indicates how much confidence can be put into the estimate of this statistic. This variance (squared standard error) should not be confused with the variance of a sample, which is an indication of the distribution of observations, rather than the confidence of a single statistic. Last updated 27 October 2009 (JL)
{"url":"http://nces.ed.gov/nationsreportcard/tdw/analysis/2004_2005/summary_proced_group.asp","timestamp":"2014-04-20T18:26:26Z","content_type":null,"content_length":"36394","record_id":"<urn:uuid:fee2cf7d-7dd4-4d19-9704-8e990dce2f54>","cc-path":"CC-MAIN-2014-15/segments/1397609539066.13/warc/CC-MAIN-20140416005219-00259-ip-10-147-4-33.ec2.internal.warc.gz"}
convert 350mm to inches You asked: convert 350mm to inches Say hello to Evi Evi is our best selling mobile app that can answer questions about local knowledge, weather, books, music, films, people and places, recipe ideas, shopping and much more. Over the next few months we will be adding all of Evi's power to this site. Until then, to experience all of the power of Evi you can download Evi for free on iOS, Android and Kindle Fire.
{"url":"http://www.evi.com/q/convert_350mm_to_inches","timestamp":"2014-04-24T21:28:06Z","content_type":null,"content_length":"56467","record_id":"<urn:uuid:1b6047c7-f530-4290-8ea9-317a0bccfd08>","cc-path":"CC-MAIN-2014-15/segments/1398223206770.7/warc/CC-MAIN-20140423032006-00453-ip-10-147-4-33.ec2.internal.warc.gz"}
Hempstead, NY Geometry Tutor Find a Hempstead, NY Geometry Tutor ...I have taught Elementary Math, Algebra, Finite Mathematics, PreCalculus, Introductory and Intermediate Statistics both online and offline at other CUNY and non-CUNY colleges, including The City College of CUNY, La Guardia Community College of CUNY, St. Francis College and Berkeley College, overa... 21 Subjects: including geometry, calculus, physics, statistics ...My students include those from Hunter College High School, Stuyvesant, Bronx Science, Brooklyn Tech, and other private schools, etc., many of them were referred by students and parents. I helped many students got into their dream schools or honor classes. I have two master degrees (physics and math) and have very deep understanding of physics and math concepts. 12 Subjects: including geometry, calculus, algebra 2, algebra 1 ...My qualifications include:- A 2260 on the SAT (760 Verbal, 770 Writing, 730 Math), a score ranking in the 99.3rd percentile and achieved by 1,000 students out of over 1,000,000 test takers.- Graduate of Hunter College High School, the nation's #1 public high school (according to the Wall St. Jou... 41 Subjects: including geometry, English, reading, writing ...I believe that everybody learns at his or her own pace and with his or her own style and that one's instructor needs to be cognizant of this. Although I have taken several upper level engineering courses, I am focusing on tutoring math through calculus II, including SAT Math, and high school lev... 16 Subjects: including geometry, reading, English, calculus ...Public speaking is my thing, and I have worked with over 200 students over the past five years and assisted them in honing their public speaking skills. I am a National Forensic League Member of Premier Distinction and was proud to receive their prestigious All-American Award about a year ago. I have a great deal of experience with college admissions, especially ivy league admissions. 
43 Subjects: including geometry, English, reading, algebra 1
{"url":"http://www.purplemath.com/Hempstead_NY_Geometry_tutors.php","timestamp":"2014-04-21T02:29:10Z","content_type":null,"content_length":"24290","record_id":"<urn:uuid:8ee28ba9-3bf9-4d0b-ba14-a03db5758d32>","cc-path":"CC-MAIN-2014-15/segments/1398223202548.14/warc/CC-MAIN-20140423032002-00368-ip-10-147-4-33.ec2.internal.warc.gz"}
Seek help on Generating multiple 8 base-pair long dependent motif Member Login Come Join Us! Are you a Computer / IT professional? Join Tek-Tips now! • Talk With Other Members • Be Notified Of Responses To Your Posts • Keyword Search • One-Click Access To Your Favorite Forums • Automated Signatures On Your Posts • Best Of All, It's Free! *Tek-Tips's functionality depends on members receiving e-mail. By joining you are opting in to receive e-mail. Donate Today! Posting Guidelines Promoting, selling, recruiting, coursework and thesis posting is forbidden. Link To This Forum! Forum Search FAQs Links Jobs Whitepapers MVPs Everwood (TechnicalUser) 21 Jul 05 15:16 Hi all! I am going to generating multiple 8 base-pair long dependent motif binding sites using Perl. The thread is: (1)First,randomly select 4 pairs of nucleotides as the group A. For example, we randomly select 4 pairs of nucleotides, “CA”, “TG”, “CG”, and “TC” from 16 combinations of two nucleoties : Then randomly select another 4 pairs of nucleotides, for instance, like “AA”,”CA”,”CT” and “TT” as the group B. Any pair in group A should be different from the one in group B. (2)To create the motif binding site. Choose “CA”, the first pair from the group A with 85% probability or “AA” from the group B with 15% probability to occupy the 1st position in the binding site, which means that each pair in the group A will be the dominant pair compared to the one in the group B. The sum of probabilities of corresponding pairs in group A and B will be 1.00 (0.85+0.15= Repeat this for the next 3 positions and put 4 pairs of nucleotides together, we will end up with a random binding motif like: CA --- 1st position CC --- 2nd position CG --- 3rd position TC --- 4th position CACCCGTC----the 8 bp long binding motif site We also can form another group A like GG,GT,AT,CG and group B like GA,AA,AC,CC. Then we can get a random motif like: GGAAATCG (3)Repeat this to make another 99 binding motif sites. 
I had experience of generating 8 base-pair long indepent motif binding sites with the probability of dominant nucleotide as 0.85. But I don't know how to generate two groups of pairs of nucleotides which are not identical. Thank you very much for suggestions! The previous code is: ## $len = length of the sequence - 1 $len = 2; ## $ar[i] letter in position i with probability $ap[i] ## other letters have the same probability (in our case 0.05) ## you have to build the arrays $ar[0] = "T"; $ap[0] = 0.85; $ar[1] = "C"; $ap[1] = 0.85; $ar[2] = "A"; $ap[2] = 0.85; for ($i=0;$i<=$len;$i++) { $ar_other[$i][0] = "A"; $ar_other[$i][1] = "C"; $ar_other[$i][2] = "G"; if ($ar[$i] eq "A") { $ar_other[$i][0] = "T"; } elsif ($ar[$i] eq "C") { $ar_other[$i][1] = "T"; } elsif ($ar[$i] eq "G") { $ar_other[$i][2] = "T"; $p = rand(1); #print "$p"; if ($p<(1-$ap[$i])) { $frac = (1-$ap[$i])/3; if ($p<(1-$ap[$i]-(2*$frac))) { $ran_seq[$i] = $ar_other[$i][0]; } elsif ($p<(1-$ap[$i]-$frac)) { $ran_seq[$i] = $ar_other[$i][1]; } else { $ran_seq[$i] = $ar_other[$i][2]; } else { $ran_seq[$i] = $ar[$i]; ## you have your sequence in the array @ran_seq for ($i=0;$i<=$len;$i++) { print "$ran_seq[$i] "; print "\n"; A overly longer code is: # randommotifs.pl # This program generates randomized instances of pseudomotifs, i.e. # sequences that are obtained by adding "noise", according to a # model to a consensus sequence. To keep this reasonably general, we # proceed through the following steps: # Define consensus motif and alphabet # For the required number of repetitions # For each position in motif # Define an appropriate probability distribution over the # Produce a character according to the distribution # ... # ... # The probability distributions are generated to produce a consensus # with a particular "consensus probability" and all other characters # with equal probabilites (the noise). This is defined in a single # so it is straightforward to change this for a different model of noise # (e.g. 
increase the probabilities for "similar" characters). It's easy # to become creative here and incorporate (biological) knowledge into # noise-model. # Probablities are then converted into cumulative intervals to chose a # particular character: # Eg: Probabilities A: 0.4, C: 0.3, G: 0.2, T: 0.1 # Intervals A: 0.4 C: 0.7, G: 0.9, T: 1.0 # This example can be pictured as # 0.0 0.1 0.2 0.3 0.4 0.5 0.6 0.7 0.8 0.9 1.0 # +----+----+----+----+----+---- # | A | C | G | T | # +----+----+----+----+----+----+----+----+----+----+ # .. now all we need to do is generate a random number between 0 and 1 # and see in which interval it falls. This then points to the character to be # written to output. # B. Steipe, July 2005. Public domain code. # ===================================================================== use strict; use warnings; my $N_Sequences = 100; # Number of sequences to be my @Motif = split(//,'TTGACATATAAT'); # This example is simply a # of the -35 and -10 consensus # of prokaryotic promoters; replace with # whatever ... my @Alphabet = split(//,'ACGT'); # Nucleotides; replace with whatever ... my $P_Consensus = 0.85; # Replace with whatever ... 
# ====== Globals ========================== my @Probabilities; # Stores the probability of each # ====== Program ========================== for (my $i=0; $i < $N_Sequences; $i++) { for (my $j=0; $j < scalar(@Motif); $j++) { loadConsensusCharacter($Motif[$j]); # Load the character for this position addNoiseToDistribution(); # Modify probabilities according to noise model print(getRandomCharacter(rand(1.0))); # Output character print "\n"; # ====== Subroutines ======================= sub loadConsensusCharacter { my ($char) = @_; my $Found = 'FALSE'; for (my $i=0; $i < scalar(@Alphabet); $i++) { if ( $char eq $Alphabet[$i]) { # found character in i_th position in Alphabet $Probabilities[$i] = 1.0; $Found = 'TRUE'; } else { $Probabilities[$i] = 0.0; if ($Found eq 'FALSE') { die("Panic: Motif-Character\"$char\" was not found in Alphabet. # ========================================== sub addNoiseToDistribution { # This implements a very simple noise model: # We work on an array in which one element is 1.0 and # all others are 0.0. The element with 1.0 has the same # index as the consensus character in the Alphabet. # We change that value to the consensus probability and # we distribute the remaining probability equally among # all other elements. # It should be straightforward to implement more elaborate # noise models, or use a position specific scoring matrix # or something else here. my $P_NonConsensus = ( 1.0-$P_Consensus) / (scalar(@Alphabet) - 1); for (my $i=0; $i < scalar(@Probabilities); $i++) { if ( $Probabilities[$i] == 1.0 ) { # ... this is the consensus $Probabilities[$i] = $P_Consensus; } else { $Probabilities[$i] = $P_NonConsensus; # ========================================== sub convertToIntervals { my $Sum = 0; # Convert the values of the probabilities array to the top # of intervals having widths proportional to their relative # Numbers range from 0 to 1.0 ... 
for (my $i=1; $i < scalar(@Probabilities); $i++) { $Probabilities[$i] += $Probabilities[$i-1]; # ========================================== sub getRandomCharacter { my ($RandomNumber) = @_; my $i=0; # Test which interval contains the RandomNumber ... # The index of the interval points to the Event we should choose. for ($i=0; $i < scalar(@Probabilities); $i++) { if ($Probabilities[$i] > $RandomNumber) { last; } TrojanWarBlade (Programmer) 21 Jul 05 17:16 I don't know if this helps or answers any of your questions but I had a go anyway: #!/usr/bin/perl -w use strict; my @fullset = qw(AA AC AG AT CA CC CG CT GA GC GG GT TA TC TG TT); my $probab = 0.85; my $i; my @major = (); # Cluster of 4 nucleotides (A) my @minor = (); # Cluster of 4 nucleotides (B) # Pick 4 random pairs my @temp = @fullset; for($i = 0; $i<4; ++$i) { push @major, $temp[int(rand(scalar(@temp)))]; print "@major\n"; # Pick 4 random pairs (excluding matches to @major) @temp = @fullset; for($i = 0; $i<4; ++$i) { my $index = int(rand(scalar(@temp))); redo unless($major[$i] ne $temp[$index]); push @minor, $temp[$index]; print "@minor\n"; # Generate resultant set based on previously defined probability my @result = (); for($i = 0; $i<4; ++$i) { push @result, rand(1.0) > $probab ? $major[$i] : $minor[$i]; print "@result\n"; You can obviously loop for 99 times if you want to. Everwood (TechnicalUser) 21 Jul 05 17:27 Hi Trojan, Thank you very much first of all. What I mentioned the probabilty of 0.85 of pair in the group A (major group in your case)means that it can be randomly selected 85 times out of 100 times vs. the pair in group B 15 times out of 100 times. I found you assigned 0.85 to the $probab in the code. Can you clarifying this ? Thank you for help! 
TrojanWarBlade (Programmer) 21 Jul 05 17:28 Slightly modified version here that eliminates the chance that two pairs can be identical in one nucleotide: #!/usr/bin/perl -w use strict; my @fullset = qw(AA AC AG AT CA CC CG CT GA GC GG GT TA TC TG TT); my $probab = 0.85; my $i; my @major = (); # Cluster of 4 nucleotides (A) my @minor = (); # Cluster of 4 nucleotides (B) # Pick 4 random pairs my @temp = @fullset; for($i = 0; $i<4; ++$i) { my $index = int(rand(scalar(@temp))); push @major, $temp[$index]; splice(@temp, $index, 1); print "@major\n"; # Pick 4 random pairs (excluding matches to @major) @temp = @fullset; for($i = 0; $i<4; ++$i) { my $index = int(rand(scalar(@temp))); redo unless($major[$i] ne $temp[$index]); push @minor, $temp[$index]; splice(@temp, $index, 1); print "@minor\n"; # Generate resultant set based on previously defined probability my @result = (); for($i = 0; $i<4; ++$i) { push @result, rand(1.0) > $probab ? $major[$i] : $minor[$i]; print "@result\n"; TrojanWarBlade (Programmer) 21 Jul 05 17:30 Yes, no problem. The rand(1.0) will generate a random number between 0.0000 and 0.99999 (ish). Therefore when my code says: push @result, rand(1.0) > $probab ? $major[$i] : $minor[$i]; If the rand(1.0) prduces something above 0.85 (say) 0.92364 then yuo get the minor pair, otherwise you get the major pair. This means that 85% of the time (generally) you get the major pair. Hope that helps explain. Everwood (TechnicalUser) 21 Jul 05 17:31 did not get you here. "two pairs can be identical in one nucleotide"? "one nucleotide" means "a position" in a binding site? Everwood (TechnicalUser) 21 Jul 05 17:35 Can u show a generated motif here? TrojanWarBlade (Programmer) 21 Jul 05 17:37 Sorry, genetics is not my thing so I'm trying. The "splice" that I added was to remove the chance of getting something like this: AA CG TA CG It removes the "CG" from the "@temp" list when it's been used so that it cannot be selected again. 
I did not know if duplicate pairs were allowed so I gave you both options. TrojanWarBlade (Programmer) 21 Jul 05 17:38 It's at the end "print @result" (I hope I'm understanding this enough and not making a fool of myself here!!!) Everwood (TechnicalUser) 21 Jul 05 17:41 Hi Trojan, I would say "duplicate pairs" are not allowed to appear. We will have two groups A and B (major and minor here). Each of them is different from another pair. So the ideal result should never be like AA CG TA CG. TrojanWarBlade (Programmer) 21 Jul 05 17:43 That's what I thought and that's why I gave you the second version. It's removes that possibility. The motif is in @result and therefore you may only need to print that. Everwood (TechnicalUser) 21 Jul 05 17:44 TrojanWarBlade (Programmer) 21 Jul 05 17:46 Here's a sample run of 100: And here's the final code that created it: #!/usr/bin/perl -w use strict; my @fullset = qw(AA AC AG AT CA CC CG CT GA GC GG GT TA TC TG TT); my $probab = 0.85; for(my $j=0; $j<99; $j++) { my $i; my @major = (); # Cluster of 4 nucleotides (A) my @minor = (); # Cluster of 4 nucleotides (B) # Pick 4 random pairs my @temp = @fullset; for($i = 0; $i<4; ++$i) { my $index = int(rand(scalar(@temp))); push @major, $temp[$index]; splice(@temp, $index, 1); #print "@major\n"; # Pick 4 random pairs (excluding matches to @major) @temp = @fullset; for($i = 0; $i<4; ++$i) { my $index = int(rand(scalar(@temp))); redo unless($major[$i] ne $temp[$index]); push @minor, $temp[$index]; splice(@temp, $index, 1); #print "@minor\n"; # Generate resultant set based on previously defined probability my @result = (); for($i = 0; $i<4; ++$i) { push @result, rand(1.0) > $probab ? $major[$i] : $minor[$i]; print join("", @result), "\n"; Is this what you wanted? TrojanWarBlade (Programmer) 21 Jul 05 17:47 Whoops, sorry, change the 99 to 100! Everwood (TechnicalUser) 21 Jul 05 17:50 yeah,that's it! Wonderful! Appreciated! 
TrojanWarBlade (Programmer) 21 Jul 05 17:52 Cool, glad to help, sorry I didn't understand the terminology properly. Maybe I'll learn in time. Good luck. KevinADC (TechnicalUser) 22 Jul 05 4:08 this was only an excersize for fun: use strict; use Benchmark qw(:all); my @fullset = qw(AA AC AG AT CA CC CG CT GA GC GG GT TA TC TG TT); my $probab = 0.85; my $reps = 1000; my $r = timethese($reps, { 'Kevin' => \&Kevin, 'Trojan' => \&Trojan, sub Kevin { my %results = (); for (0..99) { my %base_pairs = (); my $motif = ''; until (scalar keys %base_pairs == 8) { my $index = int rand @fullset; $base_pairs{$fullset[$index]} = $fullset[$index]; my @base_pairs = keys %base_pairs; my @major = @base_pairs[0..3]; my @minor = @base_pairs[4..7]; $motif .= rand(1) > $probab ? $minor[$_] : $major[$_] for (0..3); redo if exists $results{$motif}; $results{$motif} = $motif; print "$_\n" for keys %results; sub Trojan { for(my $j=0; $j<99; $j++) { my $i; my @major = (); # Cluster of 4 nucleotides (A) my @minor = (); # Cluster of 4 nucleotides (B) # Pick 4 random pairs my @temp = @fullset; for($i = 0; $i<4; ++$i) { my $index = int(rand(scalar(@temp))); push @major, $temp[$index]; splice(@temp, $index, 1); #print "@major\n"; # Pick 4 random pairs (excluding matches to @major) @temp = @fullset; for($i = 0; $i<4; ++$i) { my $index = int(rand(scalar(@temp))); redo unless($major[$i] ne $temp[$index]); push @minor, $temp[$index]; splice(@temp, $index, 1); #print "@minor\n"; # Generate resultant set based on previously defined probability my @result = (); for($i = 0; $i<4; ++$i) { push @result, rand(1.0) > $probab ? $major[$i] : $minor[$i]; print join("", @result), "\n"; with the results printed, Trojans code was typically a bit slower than my code (25% approx), but with the results not printed Trojans code was faster than mine (by about the same percentage). KevinADC (TechnicalUser) 22 Jul 05 4:24 I think you have this reversed Trojan: push @result, rand(1.0) > $probab ? 
$major[$i] : $minor[$i]; should be: push @result, rand(1.0) <= $probab ? $major[$i] : $minor[$i]; if the rand() function returns a value greater than the probability factor the minor sequence should be used, not the major sequence, which is what is happening in the first line above. TrojanWarBlade (Programmer) 22 Jul 05 4:41 Well spotted Kevin, I think you're right there. Not quite so sure about your algorithm for selecting the pairs though. Would it be fair to say that with your algorithm, @minor could not possibly contain any pairs that appear anywhere in @major and vice versa? If that's the case, you'll see from the original supplied example that it should be possible to have the same values, as long as they do not share the same index position. KevinADC (TechnicalUser) 22 Jul 05 15:11 I admit I was (still am) confused if identical base-pairs could be in major and minor as long they were not in same index. But since there could never be duplicated base-pairs in the motif (I think) it seemed easier just to make sure both arrays contained no duplicate base-pairs to begin with. That might make a difference in the final outcome of the generated list, but honestly I am not sure. Also, the implication of my benchmarking test is that the join() function is extremely slow and looks like it can drag down the run time of an entire script. You might want to look into that if you use join() often in code. I might play around with my code a little to see if I can speed it up.
{"url":"http://www.tek-tips.com/viewthread.cfm?qid=1095895","timestamp":"2014-04-17T16:46:50Z","content_type":null,"content_length":"62357","record_id":"<urn:uuid:2294671d-1114-4f40-aa5d-73e94a96d9e9>","cc-path":"CC-MAIN-2014-15/segments/1398223211700.16/warc/CC-MAIN-20140423032011-00205-ip-10-147-4-33.ec2.internal.warc.gz"}
Visualising content and context using issue maps – an example based on a discussion of Cox's risk matrix theorem

Some time ago I wrote a post on a paper by Tony Cox which describes some flaws in risk matrices (as they are commonly used) and proposes an axiomatic approach to address some of the problems. In a recent comment on that post, Tony Waisanen suggested that someone take up the challenge to map the content of the post and the ensuing discussion using issue mapping. Hence my motivation to write the present post. My main aims in this post are to:

1. Create an issue map visualising the content of my post on Cox's paper.
2. Incorporate points raised in the comments into the map, and show how they relate to Cox's arguments.

A quick word about the notation and software before proceeding. I'll use the IBIS (Issue-based information system) notation to map the argument. Those unfamiliar with IBIS will find a quick introduction here. The mapping is done using Compendium, an open source issue mapping tool (that can do other things too). I'll provide a commentary as I build the map, because the detail behind the map cannot be seen in the screenshots.

First map: the flaws in risk matrices and how to fix them

Cox asks the question: "What's wrong with risk matrices?" – this is, in fact, the title of the paper in which he describes his theorem. The question is therefore an excellent starting point for our map.

As an answer to the question, Cox lists the following points as problems/flaws in risk matrices:

1. Poor resolution: risk matrices use qualitative categories (typically denoted by colour – red, green, yellow). Risks within a category cannot be distinguished.
2. Incorrect ranking of risks: In some cases, risks can end up in the wrong qualitative category – i.e. a quantitatively higher risk can be mistakenly categorised as a low risk and vice versa.
In the worst case, this can lead to suboptimal resource allocation – i.e. a lower risk being given a higher priority.

3. Subjective inputs: Often, the criteria used to rank risks are based on subjective inputs. Such subjective inputs are prone to cognitive bias. This leads to inaccurate and unreliable risk estimates.

See my posts limitations of scoring methods in risk analysis and cognitive biases as project meta-risks for more on the above points.

The map with the root question, problems (ideas or responses, in IBIS terminology) and their consequences is shown in Figure 1. Note that I've put numbers (1), (2) etc. against the points so that I can refer to them by number in other nodes.

The next question suggests itself: we've asked "What's wrong with risk matrices?" so an obvious follow-up question is, "What can be done to fix risk matrices?" There are a few approaches available to address the problems. These are discussed in my post and the discussion following it. The approaches can be summarised as follows:

1. Statistical approach: This involves obtaining the correct statistical distributions for the probability of the risk occurring and the impact of the risk. This is generally hard to do because of the lack of data. However, once this is done, it obviates the need for risk matrices. Furthermore, it warns us about situations in which risk matrices may mislead. In Cox's words, "One (approach) is to consider applications in which there are sufficient data to draw some inferences about the statistical distribution of (Probability, Consequence) pairs. If data are sufficiently plentiful, then statistical and artificial intelligence tools … can potentially be applied to help design risk matrices that give efficient or optimal (according to various criteria) discrete approximations to the quantitative distribution of risks. In such data-rich settings, it might be possible to use risk matrices when they are useful (e.g., if probability and consequence are strongly positively correlated) and to avoid them when they are not (e.g., if probability and consequence are strongly negatively correlated)." This is, in principle, the best approach.

2. Qualitative approach: This approach was discussed by Glen Alleman in this comment. It essentially involves characterising impact using qualitative information – i.e. narrative descriptions of impact. To quote from Glen's comment, "...the numeric value of impacts are replaced by narrative descriptions of the actual operational impacts from the occurrence of the risk. These narratives are developed through analysis of the system…the quantitative risk as a product is abandoned in place of a classification of response to a predefined consequence." This approach sidesteps a couple of the issues with risk matrices. Further, many risk-aware organisations have used this method with great success (Glen mentions that NASA and the Department of Defense use such an approach to analyse risks on spaceflight/aviation projects).

3. Axiomatic approach: This is the approach that Tony Cox discusses in his paper. It has the advantage of being simple – it assumes that the risk function (defined as probability x impact, for example) is continuous whilst also ensuring consistency to the extent possible (i.e. ensuring a correct quantitative ranking of risks). The downside, as Glen emphasises in his comments, is that risk functions are actually discrete, as discussed in (1) above. Cox's arguments hinge on the continuity of the risk function, so they do not apply to the discrete case.

The map with these approaches added in is depicted in Figure 2.
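Returning briefly to flaw (2): the incorrect-ranking problem is easy to demonstrate numerically. The sketch below is my own illustration (the thresholds and colouring scheme are assumptions for the example, not taken from Cox's paper or the comments). It bins probability and impact on [0, 1] into low/medium/high thirds, colours cells by the sum of the two bin indices, and exhibits a "green" risk whose quantitative value (probability × impact) exceeds that of a "yellow" one.

```python
# Sketch only (illustrative numbers): a 3x3 risk matrix that mis-ranks risks.
# Probability and impact lie in [0, 1], binned at 1/3 and 2/3 into
# low/medium/high; cells are coloured by the sum of the two bin indices
# (a common scheme): 0-1 -> green, 2 -> yellow, 3-4 -> red.

def bin3(x):
    """Category index for a value in [0, 1]: 0 = low, 1 = medium, 2 = high."""
    return 0 if x < 1/3 else 1 if x < 2/3 else 2

def colour(probability, impact):
    s = bin3(probability) + bin3(impact)
    return "green" if s <= 1 else "yellow" if s == 2 else "red"

def quantitative_risk(probability, impact):
    return probability * impact  # the usual (simplistic) continuous risk function

risk_a = (0.35, 0.35)  # colour: yellow, quantitative risk ~0.1225
risk_b = (0.30, 0.60)  # colour: green,  quantitative risk ~0.18

for name, (p, i) in [("A", risk_a), ("B", risk_b)]:
    print(name, colour(p, i), quantitative_risk(p, i))

# The matrix ranks A above B, yet B's quantitative risk is roughly 50% higher:
assert quantitative_risk(*risk_b) > quantitative_risk(*risk_a)
```

Any pair of risks straddling a bin boundary in this way can produce the same inversion, which is exactly why a resource allocation driven by cell colour alone can be suboptimal.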
Note that I've added Cox's theorem in as a map node, indicating that a detailed discussion of the theorem is presented in a separate map. Note also that I have added an idea node representing how the issue regarding subjective inputs can be addressed. I will not pursue this point further in the present post as it did not come up in the discussion. That said, I have discussed this point in some detail in an article on cognitive bias in project risk management.

Second map: Cox's risk matrix theorem

Since the entire discussion is based on Cox's arguments, it is worth looking into his paper in some detail – in particular, at the axioms and the theorem itself. It is convenient to hive this material off into a separate map, but one connected to the original map (see the map node representing the theorem in Figure 2 above). The root question of the new map would be, "What is the basis of Cox's theorem?" Answer: the theorem is based on the axioms and other (tacit) assumptions.

Now, my earlier post on Cox's theorem contains a very detailed treatment of the axioms, so I'll offer only a one-line explanation for each here. The axioms are:

1. Weak consistency – which states that all risks in the highest category (red) must represent quantitatively higher risks than those in the lowest category (green).
2. Consistent colouring – as far as possible, risks with the same quantitative value must have the same colour.
3. Between-ness – small changes in probability or impact (i.e. the risk function) should not cause a risk to move from the highest (red) to lowest (green) or vice versa.

The axioms are intuitively appealing – they express a basic consistency that one would expect risk matrices to satisfy. The secondary map, with the three axioms shown, is depicted in Figure 3.
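Weak consistency, in particular, lends itself to a direct computational check. The sketch below is my own (the cell indexing and sampling density are assumptions for illustration): taking risk = probability × impact on the unit square, it samples interior points of the red and green cells of a candidate 3×3 colouring and tests whether every red point outranks every green point.

```python
import itertools

# Sketch only: a brute-force check of weak consistency for a 3x3 matrix on
# the unit square, taking risk = probability x impact. A colouring maps
# (probability bin, impact bin) indices (0 = low .. 2 = high) to colours.

def sample_cell(row, col, n=20):
    """Yield (p, i) points in the interior of cell (row = prob bin, col = impact bin)."""
    lo_p, lo_i = row / 3, col / 3
    for a, b in itertools.product(range(1, n), repeat=2):
        yield lo_p + a / (3 * n), lo_i + b / (3 * n)

def weakly_consistent(colouring):
    """True if every sampled red point has higher quantitative risk than every green point."""
    reds = [p * i for cell, c in colouring.items() if c == "red"
            for p, i in sample_cell(*cell)]
    greens = [p * i for cell, c in colouring.items() if c == "green"
              for p, i in sample_cell(*cell)]
    return min(reds) > max(greens)

# A colouring consistent with the theorem's constraints: bottom row and
# left-most column green, only the top-right (high, high) cell red, the rest yellow.
compliant = {}
for r, c in itertools.product(range(3), repeat=2):
    compliant[(r, c)] = ("green" if r == 0 or c == 0
                         else "red" if (r, c) == (2, 2) else "yellow")
print(weakly_consistent(compliant))   # True

# Colour one more cell red -- (high probability, medium impact) -- and weak
# consistency breaks: a point near that cell's low corner has risk ~2/9,
# below a green point near (1/3, 1) with risk ~1/3.
broken = {**compliant, (2, 1): "red"}
print(weakly_consistent(broken))      # False
```

The failing case is precisely a colouring the theorem forbids (a red cell in the second-from-left column), so the brute-force check and the axiomatic result agree on this example.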
Cox's theorem, which essentially follows from these axioms, can be stated as follows: in a risk matrix that satisfies the three axioms, all cells in the bottom row and left-most column must be green, and all cells in the second-from-bottom row and second-from-left column must be non-red.

The theorem has two corollaries:

1. 2×2 matrices cannot satisfy the theorem.

2. 3×3 and 4×4 matrices which satisfy the theorem have a unique colouring scheme.

These are rather surprising conclusions, arrived at from some very intuitive axioms. The secondary map, with the theorem and corollaries added in, is shown in Figure 4. That completes the map of the theorem.

However, in this comment Glen Alleman pointed out that the assumption of a continuous function to describe risk (such as risk = probability x impact, where both quantities on the right hand side are continuous functions) is questionable. He also makes the point that probability is specified by a distribution, and numerical values that come out of distributions cannot be combined via arithmetic operations. The reason that folks make these simplifying assumptions (of continuity, and of ignoring the probabilistic nature of the variables) is that they are intuitive and easy to work with. As I mentioned in one of my responses to the comments, one can choose to define risk this way although it isn't logically sound. Cox's theorem essentially specifies consistency conditions that need to be satisfied when such ad-hoc approaches are used. The map with this discussion included is shown in Figure 5.

That completes the mapping exercise: Figures 2 and 5 represent a fairly complete map of the post and the discussion around it.

Caveats and conclusions

At the risk of belaboring the obvious, the maps represent my interpretation of Cox's work and my interpretation of others' comments on my post on Cox's work.
Further, the discussion on which the maps are based is far from comprehensive because it did not cover other limitations of risk matrices. Please see my post on limitations of scoring methods in risk analysis for a detailed discussion of these.

Before closing, it is worth looking at Figures 2 and 5 from a broader perspective: the figures make clear the context of the discussion in a way that is simply not possible through words. As an example, Figure 2 lays bare the context of Cox's theorem - it emphasises, for example, that Cox's approach isn't the only method to fix what's wrong with risk matrices. Further, Figure 5 distinguishes between explicitly declared and tacit assumptions. Examples of the former are the three axioms; an example of the latter is the assumption of continuity.

In this post I've summarised the content and context of Cox's risk matrix theorem via issue mapping. The maps provide an "at a glance" summary of the theorem along with its supporting assumptions and axioms. Further, the maps also incorporate key elements of readers' reactions to the post. I hope this example clarifies the content and context of my earlier post on Cox's risk matrix theorem, whilst also serving as a demonstration of the utility of the IBIS notation in mapping complex arguments.

Thanks go out to Tony Waisanen for suggesting that the post and comments be issue mapped, and to Glen Alleman, Robert Higgins and Prakash Vaidhyanathan for their contributions to the discussion.
Source: http://eight2late.wordpress.com/2009/12/18/visualising-content-and-context-using-issue-maps-an-example-based-on-a-discussion-of-coxs-risk-matrix-theorem/
Find a Riverside, RI Calculus Tutor

...I have a candidate master (expert) rating, and for three years taught chess in a summer program at Towson University. I have had success preparing students for the ASVAB, and I especially enjoy helping them tackle the word problems on the Arithmetic Reasoning section. Time management is important, and test-taking strategies are key.
45 Subjects: including calculus, Spanish, chemistry, English

...In high school, I tutored other high school students in math and physics. I currently teach social studies to third graders in an after-school program. Over Winter Break, I also had an education-related internship with a company called Kinvolved, which works to improve attendance and graduation rates in schools in underserved communities.
20 Subjects: including calculus, French, physics, geometry

...Right this month I'm writing my own SAT Math manual, because I think that I can deliver significantly better SAT gains with my own new system than with my old company's workbook. I taught Fortran for four semesters at URI/Providence while I was a graduate student in Computer Science at URI. As an undergraduate I wrote a compilation of student programming errors.
14 Subjects: including calculus, geometry, algebra 2, algebra 1

...My method of tutoring is based on creating a trustworthy relationship between myself and the student. I aim to understand which methods of learning the student adapts to best and incorporate these into the lessons. In addition to offering my knowledge in the subject matter, I also make it a pri...
17 Subjects: including calculus, physics, geometry, algebra 2

Born in Cuba, I arrived in the United States in 1995. Since then I remember playing teacher with my school friends at the time. Of course, I was the teacher and they were the students!
63 Subjects: including calculus, reading, English, Spanish
Source: http://www.purplemath.com/Riverside_RI_calculus_tutors.php
Here's the question you clicked on:

1. A plane travels from Orlando to Denver and back again. On the five-hour trip from Orlando to Denver, the plane has a tailwind of 40 miles per hour. On the return trip from Denver to Orlando, the plane faces a headwind of 40 miles per hour. This trip takes six hours. What is the speed of the airplane in still air?

2. Two bicycles depart from Miami Beach going in opposite directions. The first bicycle is traveling at 10 miles per hour. The second bicycle travels at 5 miles per hour. How long does it take until the bikes are 45 miles apart?

3. Jesse rents a moving van for $75 and must pay $2 per mile. The following week, Alex rents the same van, is charged $80 for the rental and $1.50 per mile. If they each paid the same amount and drove the same number of miles, how far did they each travel?

4. During a 4th of July weekend, 32 vehicles became trapped on the Sunshine Skyway Bridge while it was being repaved. A recent city ordinance decreed that only cars with 4 wheels and trucks with six wheels could be on the bridge at any given time. If there were 148 tires that needed to be replaced due to damage, how many cars and trucks were involved in the incident?

How many questions are there?

Just these and one more where I have to do something with another person.

You should post these one by one. It makes it less confusing.

But they're numbered...
Hint with question one: since the 40 mph tailwind acts for the same duration, in opposite directions, it cancels itself out. For this you're really just working out how much speed a plane has when it travels 5 miles in six hours \[distance = \frac{velocity}{time}\]

Stilllllllll clueless.

distance = speed / time. We want to know speed. So if we solve the equation for speed, then put in the values of distance, 5 miles, and time, 6 hours, we get the speed.

By we I mean you :P

It's 5 hours, not 5 miles o.o

Okay?

ლ(ಠ益ಠლ) Y U NO ANSWER?

Okay, well swap them then! What is speed measured in? Meters per second. What is distance? Meters. Time? Seconds. So we have \[meters\ per\ second = \frac{meters}{second} = \frac{distance}{time}\]

miles per hour*

lol im hopeless at explaining

I got 10 for #3.

And I got 22 cars and 10 trucks for #4.
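For the record, all four problems reduce to one or two linear equations. A quick sketch (plain Python, exact arithmetic via fractions; the algebra is worked from the problem statements above, not from the replies):

```python
from fractions import Fraction as F

# 1. Plane: speed s+40 with tailwind for 5 h, s-40 against headwind for 6 h,
#    and the same distance both ways: 5(s+40) = 6(s-40)  =>  s = 440 mph.
s = F(5 * 40 + 6 * 40, 6 - 5)              # 440
assert 5 * (s + 40) == 6 * (s - 40)        # both trips cover 2400 miles

# 2. Bikes separate at 10 + 5 = 15 mph; 45 miles apart after t hours.
t = F(45, 10 + 5)                          # 3 hours

# 3. Van rental: 75 + 2m = 80 + 1.5m  =>  0.5m = 5  =>  m = 10 miles.
m = F(80 - 75, 1) / (F(2, 1) - F(3, 2))    # 10

# 4. Cars/trucks: c + t = 32 vehicles and 4c + 6t = 148 tires.
trucks = (148 - 4 * 32) // 2               # 10
cars = 32 - trucks                         # 22
```

This agrees with the answers posted in the thread (10 miles for #3, and 22 cars with 10 trucks for #4), and fills in the one the thread never reached: the plane's still-air speed is 440 mph.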
Source: http://openstudy.com/updates/5075961ee4b009782ca5901a
Physics Forums - View Single Post - Axler Linear Algebra:: What does this notation mean? In chapter 1 of Axler's LA Done Right, he defines a polynomial as such: "Our next example of a vector space involves polynomials. A function [itex]p:\mathbf{F}\rightarrow\mathbf{F}[/itex] is called a polynomial with coefficients in F ..." Can someone translate this "[itex]p:\mathbf{F}\rightarrow\mathbf{F}[/itex]" into words? I have never seen that notation before.
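For what it's worth, the arrow notation just says that $p$ is a function whose domain and codomain are both $\mathbf{F}$ (where $\mathbf{F}$ denotes $\mathbb{R}$ or $\mathbb{C}$ in Axler). A small illustration in code, with hypothetical names, treating a polynomial as exactly such a function:

```python
from typing import Callable, Sequence

# "p : F -> F" means: p takes one scalar from the field F and returns one.
# Here F is modelled as complex, which also covers the real case.
def polynomial(coeffs: Sequence[complex]) -> Callable[[complex], complex]:
    """Build p(z) = a0 + a1*z + ... + am*z^m from its coefficient list."""
    def p(z: complex) -> complex:
        return sum(a * z ** k for k, a in enumerate(coeffs))
    return p

p = polynomial([1, 0, 2])   # p(z) = 1 + 2*z^2, a map F -> F
```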
Source: http://www.physicsforums.com/showpost.php?p=2260146&postcount=1
zbMATH: the first resource for mathematics

Computation of infinite integrals involving Bessel functions of arbitrary order by the $\bar{D}$-transformation. (English) Zbl 0871.65012

Two new variants of the $\overline{D}$-transformation due to the author [J. Inst. Math. Appl. 26, 1-20 (1980; Zbl 0464.65002)] are designed for computing oscillatory integrals $\int_a^\infty g(t)\,\mathcal{C}_u(t)\,dt$, where $g(t)$ is a non-oscillatory function and $\mathcal{C}_u(x)$ stands for an arbitrary linear combination of the Bessel functions of the first and second kinds, $J_u(x)$ and $Y_u(x)$, of arbitrary real order $u$. The author also points out that the application to such integrals of an additional approach involving the so-called $mW$-transformation, likewise introduced by himself [see Math. Comput. 51, No. 183, 249-266 (1988; Zbl 0694.40004); see also D. Levin and A. Sidi, Appl. Math. Comput. 9, 175-215 (1981; Zbl 0487.65003)], produces similarly good results. Finally, convergence and stability results for both the $\overline{D}$-transformation and the $mW$-transformation, in all of their forms, are discussed and a numerical example is added.

MSC:
65D20 Computation of special functions, construction of tables
33C10 Bessel and Airy functions, cylinder functions, ${}_0F_1$
33E20 Functions defined by series and integrals
65D32 Quadrature and cubature formulas (numerical methods)
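The flavour of these methods (integrate between consecutive zeros of the Bessel function, then accelerate the resulting alternating sequence of partial sums) can be illustrated with a toy partition-extrapolation scheme. This is a sketch in the spirit of the $mW$-transformation, not Sidi's actual algorithm: it uses simple repeated averaging rather than his extrapolation, McMahon's approximation for the zeros of $J_0$, and targets the known value $\int_0^\infty J_0(t)\,dt = 1$.

```python
import math

def j0(x, n=400):
    """Bessel J0 via its integral representation (1/pi) * int_0^pi cos(x sin t) dt,
    computed with the trapezoid rule, which converges very fast here."""
    h = math.pi / n
    s = 0.5 * (math.cos(x * math.sin(0.0)) + math.cos(x * math.sin(math.pi)))
    s += sum(math.cos(x * math.sin(k * h)) for k in range(1, n))
    return s * h / math.pi

def simpson(f, a, b, n=64):
    """Composite Simpson rule on [a, b] with n (even) panels."""
    h = (b - a) / n
    s = f(a) + f(b) + sum((4 if k % 2 else 2) * f(a + k * h) for k in range(1, n))
    return s * h / 3

def j0_zero(k):
    """McMahon's approximation to the k-th positive zero of J0. The partition
    points need not be exact zeros, only close to them, so this is adequate."""
    b = (k - 0.25) * math.pi
    return b + 1 / (8 * b)

# Partial sums over [z_{k-1}, z_k]; they oscillate around the limit 1.
zeros = [0.0] + [j0_zero(k) for k in range(1, 14)]
partial, total = [], 0.0
for a, b in zip(zeros, zeros[1:]):
    total += simpson(j0, a, b)
    partial.append(total)

# Accelerate the alternating sequence by repeated averaging of neighbours.
seq = partial
while len(seq) > 1:
    seq = [(u + v) / 2 for u, v in zip(seq, seq[1:])]
value = seq[0]   # close to 1 = int_0^infty J0(t) dt
```

With 13 intervals the raw partial sums are still off by about 0.1, while the averaged value is accurate to a few parts in ten thousand, which is the basic point of partition-extrapolation methods for such integrals.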
Source: http://zbmath.org/?q=an:0871.65012
Reasoning about The Past with Two-Way Automata

Results 1 - 10 of 103

(2008). Cited by 60 (33 self).
"We present an algorithm to solve XPath decision problems under regular tree type constraints and show its use to statically type-check XPath queries. To this end, we prove the decidability of a logic with converse for finite ordered trees whose time complexity is a simple exponential of the size of a formula. The logic corresponds to the alternation free modal µ-calculus without greatest fixpoint, restricted to finite trees, and where formulas are cycle-free. Our proof method is based on two auxiliary results. First, XML regular tree types and XPath expressions have a linear translation to cycle-free formulas. Second, the least and greatest fixpoints are equivalent for finite trees, hence the logic is closed under negation. Building on these results, we describe a practical, effective system for solving the satisfiability of a formula. The system has been experimented with some decision problems such as XPath emptiness, containment, overlap, and coverage, with or without type constraints. The benefit of the approach is that our system can be effectively used in static analyzers for programming languages."

- In Proc. of the 16th Int. Joint Conf. on Artificial Intelligence (IJCAI'99), 1999. Cited by 55 (12 self).
"In the last years, the investigation on Description Logics (DLs) has been driven by the goal of applying them in several areas, such as software engineering, information systems, databases, information integration, and intelligent access to the web. The modeling requirements arising in the above areas have stimulated the need for very rich languages, including fixpoint constructs to represent recursive structures. We study a DL comprising the most general form of fixpoint constructs on concepts, all classical concept forming constructs, plus inverse roles, n-ary relations, qualified number restrictions, and inclusion assertions. We establish the EXPTIME decidability of such logic by presenting a decision procedure based on a reduction to nonemptiness of alternating automata on infinite trees. We observe that this is the first decidability result for a logic combining inverse roles, number restrictions, and general fixpoints."

- In 21st Symposium on Logic in Computer Science (LICS'06), 2006. Cited by 45 (3 self).
"Determinization and complementation are fundamental notions in computer science. When considering finite automata on finite words, determinization gives also a solution to complementation: given a nondeterministic finite automaton, there exists an exponential construction that gives a deterministic automaton for the same language, and dualizing the set of accepting states gives an automaton for the complement language. In the theory of automata on infinite words, determinization and complementation are much more involved. Safra provides determinization constructions for Büchi and Streett automata that result in deterministic Rabin automata. For a Büchi automaton with n states, Safra constructs a deterministic Rabin automaton with n^O(n) states and n pairs. For a Streett automaton with n states and k pairs, Safra constructs a deterministic Rabin automaton with (nk)^O(nk) states and n(k+1) pairs. Here, we reconsider Safra's determinization constructions. We show how to construct automata with fewer states and, most importantly, a parity acceptance condition. Specifically, starting from a nondeterministic Büchi automaton with n states, our construction yields a deterministic parity automaton with n^(2n+2) states and index 2n (instead of a Rabin automaton with 12^n n^(2n) states and n pairs). Starting from a nondeterministic Streett automaton with n states and k pairs, our construction yields a deterministic parity automaton with n^(n(k+2)+2) (k+1)^(2n(k+1)) states and index 2n(k+1) (instead of a Rabin automaton with 12^(n(k+1)) n^(n(k+2)) (k+1)^(2n(k+1)) states and n(k+1) pairs). The parity condition is much simpler than the Rabin condition. In applications such as solving games and emptiness of tree automata, handling the Rabin condition involves an additional multiplier of n^2 n! (or (n(k+1))^2 (n(k+1))! in the case of Streett), which is saved using our construction."

- In Proc. of the 22nd Nat. Conf. on Artificial Intelligence (AAAI 2007), 2007. Cited by 35 (17 self).
"Expressive Description Logics (DLs) have been advocated as formalisms for modeling the domain of interest in various application areas. An important requirement is the ability to answer complex queries beyond instance retrieval, taking into account constraints expressed in a knowledge base. We consider this task for positive existential path queries (which generalize conjunctive queries and unions thereof), whose atoms are regular expressions over the roles (and concepts) of a knowledge base in the expressive DL ALCQIb_reg. Using techniques based on two-way tree automata, we first provide an elegant characterization of TBox and ABox reasoning, which gives us also a tight EXPTIME bound. We then prove decidability (more precisely, a 2EXPTIME upper bound) of query answering, thus significantly pushing the decidability frontier, both with respect to the query language and the considered DL. We also show that query answering is EXPSPACE-hard already in rather restricted settings."

- In Proceedings of the International Conference on Computer-Aided Verification, 1998. Cited by 33 (5 self).
"Abstract. Symbolic model checking, which enables the automatic verification of large systems, proceeds by calculating with expressions that represent state sets. Traditionally, symbolic model-checking tools are based on backward state traversal; their basic operation is the function pre, which, given a set of states, returns the set of all predecessor states. This is because specifiers usually employ formalisms with future-time modalities, which are naturally evaluated by iterating applications of pre. It has been recently shown experimentally that symbolic model checking can perform significantly better if it is based, instead, on forward state traversal; in this case, the basic operation is the function post, which, given a set of states, returns the set of all successor states. This is because forward state traversal can ensure that only those parts of the state space are explored which are reachable from an initial state and relevant for satisfaction or violation of the specification; that is, errors can be detected as soon as possible. In this paper, we investigate which specifications can be checked by symbolic forward state traversal. We formulate the problems of symbolic backward and forward model checking by means of two µ-calculi. The pre-µ calculus is based on the pre operation; the post-µ calculus, on the post operation. These two µ-calculi induce query logics, which augment fixpoint expressions with a boolean emptiness query. Using query logics, we are able to relate and compare the symbolic backward and forward approaches. In particular, we prove that all ω-regular (linear-time) specifications can be expressed as post-µ queries, and therefore checked using symbolic forward state traversal. On the other hand, we show that there are simple branching-time specifications that cannot be checked in this way."

- LNCS, 2000. Cited by 33 (4 self).
"Abstract. We develop an automata-theoretic framework for reasoning about infinite-state sequential systems. Our framework is based on the observation that states of such systems, which carry a finite but unbounded amount of information, can be viewed as nodes in an infinite tree, and transitions between states can be simulated by finite-state automata. Checking that the system satisfies a temporal property can then be done by an alternating two-way tree automaton that navigates through the tree. As has been the case with finite-state systems, the automata-theoretic framework is quite versatile. We demonstrate it by solving several versions of the model-checking problem for µ-calculus specifications and prefix-recognizable systems, and by solving the realizability and synthesis problems for µ-calculus specifications with respect to prefix-recognizable environments."

- Logical Methods in Computer Science 2, Issue 3, Paper 2, 2006.
"Vol. 2 (3:2) 2006, pp. 1–31, www.lmcs-online.org"

- Handbook of Modal Logic, 2007.

- In Reasoning Web, volume 5689 of LNCS, 2009. Cited by 26 (17 self).
"Abstract. Ontologies provide a conceptualization of a domain of interest. Nowadays, they are typically represented in terms of Description Logics (DLs), and are seen as the key technology used to describe the semantics of information at various sites. The idea of using ontologies as a conceptual view over data repositories is becoming more and more popular, but for it to become widespread in standard applications, it is fundamental that the conceptual layer through which the underlying data layer is accessed does not introduce a significant overhead in dealing with the data. Based on these observations, in recent years a family of DLs, called DL-Lite, has been proposed, which is specifically tailored to capture basic ontology and conceptual data modeling languages, while keeping low complexity of reasoning and of answering complex queries, in particular when the complexity is measured w.r.t. the size of the data. In this article, we present a detailed account of the major results that have been achieved for the DL-Lite family. Specifically, we concentrate on DL-Lite_{A,id}, an expressive member of this family, present algorithms for reasoning and query answering over DL-Lite_{A,id} ontologies,"
Source: http://citeseerx.ist.psu.edu/showciting?cid=442016
How to Deal with Maximizing/Minimizing Strategies on the GMAT

We haven't taken up maximizing/minimizing strategies in the QWQW series yet (except in sets). The reason for this is that the strategy to be used varies from question to question. What works in one question may not work in another. You might have to think up what to do in a question from scratch, and you have only 2 minutes to do it in. The saving grace is that once you know what you have to do, the actual work involved in arriving at the answer is very little.

Let's look at some maximizing/minimizing strategies over the next few weeks. We start with an OG question today that has a convoluted question stem.

Question: List T consists of 30 positive decimals, none of which is an integer, and the sum of the 30 decimals is S. The estimated sum of the 30 decimals, E, is defined as follows. Each decimal in T whose tenths digit is even is rounded up to the nearest integer, and each decimal in T whose tenths digit is odd is rounded down to the nearest integer. If 1/3 of the decimals in T have a tenths digit that is even, which of the following is a possible value of E – S?

I. -16
II. 6
III. 10

A. I only
B. I and II only
C. I and III only
D. II and III only
E. I, II, and III

There is a lot of information in the question stem and a lot of variables are defined. Let's review the given data in our own words first.

T has 30 decimals, and the sum of all the decimals is S.
10 decimals have an even tenths digit. They will be rounded up.
20 decimals have an odd tenths digit. They will be rounded down.
The sum of the rounded numbers is E.

E – S can take many values, so how do we figure out which ones it cannot take? We need to find the minimum value E – S can take and the maximum value it can take. That will help us figure out the values that E – S cannot take. Note that E could be greater than S or less than S, so E – S could be positive or negative.

Step 1: Getting the Minimum Value of E – S

Let's try to make E as small as possible. For that, we need to do two things:

1.
When we round up the decimals (even tenths digit), the difference between actual and estimate should be very small. The estimate should add a very small number to round it up, so that E is not much greater than S. Say the numbers are something similar to 3.8999999 (the tenths digit is the largest even digit) and they will be rounded up to 4, i.e. the estimate gains about 0.1 per number. Since there are 10 even tenths digit numbers, the estimate will be approximately 0.1*10 = 1 more than actual.

2. When we round down the decimals (odd tenths digit), the difference between actual and estimate should be as large as possible. Say the numbers are something similar to 3.999999 (the tenths digit is the largest odd digit) and they will be rounded down to 3, i.e. the estimate loses approximately 1 per number. Since there are 20 such numbers, the estimate is 1*20 = 20 less than actual.

Overall, the estimate will be approximately 20 – 1 = 19 less than actual. Minimum value of E – S = -19.

Step 2: Getting the Maximum Value of E – S

Now let's try to make E as large as possible. For that, we need to do two things:

1. When we round up the decimals (even tenths digit), the difference between actual and estimate should be very high. Say the numbers are something similar to 3.000001 (the tenths digit is the smallest even digit) and they will be rounded up to 4, i.e. the estimate gains about 1 per number. Since there are 10 even tenths digit numbers, the estimate will be approximately 1*10 = 10 more than actual.

2. When we round down the decimals (odd tenths digit), the difference between actual and estimate should be very little. Say the numbers are something similar to 3.1 (the tenths digit is the smallest odd digit). They will be rounded down to 3, i.e. the estimate loses approximately 0.1 per number. Since there are 20 such numbers, the estimate is approximately 0.1*20 = 2 less than actual.

Overall, the estimate will be approximately 10 – 2 = 8 more than actual. Maximum value of E – S = 8.
The minimum value of E – S is -19 and the maximum value of E – S is 8. So E – S can take the values -16 and 6 but cannot take the value 10.

Answer (B)

Karishma, a Computer Engineer with a keen interest in alternative Mathematical approaches, has mentored students in the continents of Asia, Europe and North America. She teaches the GMAT for Veritas Prep and regularly participates in content development projects such as this blog series!

One Response

1. Karishma, Wonderful question. I spent a lot of time deep-diving into the analysis. Great.
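The two extreme constructions described above are easy to check numerically. The sketch below builds the two boundary lists (values chosen per the reasoning above) and applies the rounding rule from the question:

```python
import math

def estimate_gap(decimals):
    """E - S: round up when the tenths digit is even, down when it is odd."""
    gap = 0.0
    for x in decimals:
        tenths = int(x * 10) % 10
        rounded = math.ceil(x) if tenths % 2 == 0 else math.floor(x)
        gap += rounded - x
    return gap

# Pushing E - S down: even-tenths barely gain on rounding up,
# odd-tenths lose almost 1 on rounding down.
low = [3.89] * 10 + [3.99] * 20     # gap = 10(0.11) - 20(0.99) = -18.7
# Pushing E - S up: even-tenths gain almost 1, odd-tenths barely lose.
high = [3.01] * 10 + [3.1] * 20     # gap = 10(0.99) - 20(0.1) = 7.9
```

So E – S ranges over roughly (-19, 8): values like -16 and 6 are attainable, while 10 is not, which is consistent with answer (B).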
Source: http://www.veritasprep.com/blog/2014/01/how-to-deal-with-maximizingminimizing-strategies-on-the-gmat/
Proving limit

Post #1 (April 3rd 2011, 09:57 AM; Junior Member, joined Oct 2010):
Let $x_1 \geq x_2 \geq ... \geq x_n \geq ...$ be positive numbers and $\sum _{n=1}^{\infty}x_n < \infty.$ Prove that $\lim_{n \to \infty}nx_n =0.$ Thank you very much in advance!

Post #2 (April 3rd 2011, 10:30 AM):
We have $0\leq nx_{2n}\leq x_{n+1}+\ldots +x_{2n}=s_{2n}-s_n$ where $s_n:=\sum_{k=1}^n x_k$. Hence the limit of the subsequence $\left\{2nx_{2n}\right\}$ is $0$. Now show that the limit is $0$ for the subsequence $\left\{(2n+1)x_{2n+1}\right\}$.

Post #3 (April 3rd 2011, 12:10 PM):
The series with positive terms $\displaystyle \sum_{n=1}^{\infty} x_{n}$ converges and that means that there is an $\varepsilon >0$ and an $\alpha>0$ such that for $n$ 'large enough' we have...
$\displaystyle x_{n} < \frac{\alpha}{n^{1+\varepsilon}}$ (1)
From (1) it follows that...
$\displaystyle \lim_{n \rightarrow \infty} n\ x_{n} =0$ (2)
Kind regards

Post #4 (April 3rd 2011, 01:03 PM):
Chisigma: are you sure that your property is true? What about $x_n :=\dfrac 1{n(\ln n)^2}$?

Post #5 (April 6th 2011, 12:38 AM; Junior Member, joined Oct 2010):
Thank you very much for all your answers!
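A numerical sanity check of the thread's conclusion, and of the counterexample to the $\alpha/n^{1+\varepsilon}$ bound proposed in one of the replies: for $x_n = 1/(n(\ln n)^2)$ the series converges and $n x_n = 1/(\ln n)^2 \to 0$, yet $x_n$ eventually exceeds $\alpha/n^{1+\varepsilon}$ for any fixed $\varepsilon > 0$. A quick sketch:

```python
import math

# Decreasing, positive, and summable (by the integral test).
x = lambda n: 1.0 / (n * math.log(n) ** 2)

# n * x_n = 1/(ln n)^2 tends to 0, but very slowly.
vals = [n * x(n) for n in (10, 10**3, 10**6)]

# The claimed bound x_n < alpha / n^(1+eps) eventually fails: with
# eps = 0.5 and alpha = 1, x_n is already the *larger* quantity at n = 10^6.
eps, alpha = 0.5, 1.0
violated = x(10**6) > alpha / (10**6) ** (1 + eps)
```

At n = 10^6 the term x_n is about 5.2e-9 while alpha/n^1.5 is 1e-9, so the comparison fails even though n*x_n still goes to zero.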
Source: http://mathhelpforum.com/differential-geometry/176678-proving-limit.html
A property of some sequences and series. A sequence u[i] is said to be convergent if there exists a value u with the property that, by choosing a large enough value of i, we can make u[i] as close as we wish to u. In other words, convergence is the tendency of a sequence toward a limit. In the case of a series, it is the tendency of the sequence of consecutive partial sums of the series toward a limit. A sequence or series which does not converge is said to diverge.

For example, for the series 1 + (½) + (½)^2 + (½)^3 + ..., the sum of the first two terms is 1.5, the first three 1.75, and the first four 1.875; as more and more terms are evaluated, the sum approaches the limiting value of (i.e., converges on) 2.
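The partial sums quoted above are easy to reproduce; a quick sketch for the geometric series 1 + 1/2 + 1/4 + ...:

```python
# Accumulate partial sums of the geometric series with ratio 1/2.
partials = []
total, term = 0.0, 1.0
for _ in range(20):
    total += term
    partials.append(total)
    term /= 2   # next term of 1 + 1/2 + 1/4 + ...

# First few partial sums: 1.0, 1.5, 1.75, 1.875, ... converging on 2.
```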
{"url":"http://www.daviddarling.info/encyclopedia/C/convergence.html","timestamp":"2014-04-19T20:13:57Z","content_type":null,"content_length":"6201","record_id":"<urn:uuid:46af2d8d-f8db-44c1-947e-f8dedbb3a1f4>","cc-path":"CC-MAIN-2014-15/segments/1397609537376.43/warc/CC-MAIN-20140416005217-00481-ip-10-147-4-33.ec2.internal.warc.gz"}
Law of inertia From Encyclopedia of Mathematics The theorem, for quadratic forms, stating that for any way of reducing a quadratic form (cf. also Quadratic forms, reduction of) with real coefficients to a sum of squares by a linear change of variables, the number of squares with positive coefficients and the number of squares with negative coefficients do not depend on the way the reduction is carried out. In its modern form, the law of inertia is the following statement concerning properties of symmetric bilinear forms over ordered fields. Let $f$ be a symmetric bilinear form on a finite-dimensional vector space over an ordered field $k$. For every orthogonal basis, the number $p$ of basis vectors on which $f$ is positive and the number $q$ of basis vectors on which $f$ is negative do not depend on the choice of the basis; the pair $(p,q)$ is called the signature of $f$, and $p$ and $q$ are its positive and negative indices of inertia. Over an arbitrary ordered field, forms with equal signatures need not be equivalent; over a Euclidean field, equality of signatures is a sufficient condition for the equivalence of bilinear forms. The positive index of inertia $p$ is the largest dimension of a subspace such that the restriction of $f$ to it is positive definite (so that the dimensions of the maximal positive-definite and maximal negative-definite subspaces are invariants of the form). Sometimes the difference $p-q$ is called the signature of $f$. Over a Euclidean field, the class of a form in the Witt ring of $k$ (cf. Witt ring) is determined by the difference $p-q$. The law of inertia can be generalized to the case of a Hermitian bilinear form over a maximal ordered field ([1], [4]). [1] N. Bourbaki, "Elements of mathematics. Algebra: Modules. Rings. Forms", 2, Addison-Wesley (1975) pp. Chapt. 4;5;6 (Translated from French) [2] S. Lang, "Algebra", Addison-Wesley (1974) [3] E. Artin, "Geometric algebra", Interscience (1957) [4] J. Milnor, D. Husemoller, "Symmetric bilinear forms", Springer (1973) The name "index" is also used for the difference $p-q$. How to Cite This Entry: Law of inertia. Encyclopedia of Mathematics. URL: http://www.encyclopediaofmath.org/index.php?title=Law_of_inertia&oldid=24097 This article was adapted from an original article by V.L. Popov (originator), which appeared in Encyclopedia of Mathematics - ISBN 1402006098. See original article
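The invariance the entry states is easy to check numerically; the sketch below (an illustration, not part of the original entry) verifies that a change of variables by any invertible matrix S, which replaces a real symmetric matrix A by SᵀAS, leaves the counts of positive and negative eigenvalues unchanged:

```python
import numpy as np

rng = np.random.default_rng(0)

def signature(A, tol=1e-9):
    """(p, q): numbers of positive and negative eigenvalues of symmetric A."""
    w = np.linalg.eigvalsh(A)
    return int((w > tol).sum()), int((w < -tol).sum())

# A real symmetric matrix with signature (2, 1).
A = np.diag([1.0, 3.0, -2.0])

# By the law of inertia, S^T A S has the same signature for every
# invertible S (a random Gaussian matrix is almost surely invertible).
for _ in range(5):
    S = rng.normal(size=(3, 3))
    assert signature(S.T @ A @ S) == signature(A) == (2, 1)
print("signature preserved:", signature(A))
```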
{"url":"http://www.encyclopediaofmath.org/index.php/Law_of_inertia","timestamp":"2014-04-16T07:29:40Z","content_type":null,"content_length":"27113","record_id":"<urn:uuid:625845c2-6598-48f4-832a-82732c20eb61>","cc-path":"CC-MAIN-2014-15/segments/1398223206120.9/warc/CC-MAIN-20140423032006-00260-ip-10-147-4-33.ec2.internal.warc.gz"}
Re: NFA->DFA - Why use epsilon transitions?

From: torbenm@pc-003.diku.dk (Torben Ægidius Mogensen)
Newsgroups: comp.compilers
Date: Tue, 14 Apr 2009 17:53:24 +0200
Organization: Department of Computer Science, University of Copenhagen
References: 09-04-011
Keywords: lex, DFA
Posted-Date: 16 Apr 2009 12:29:50 EDT

Alexander Morou <alexander.morou@gmail.com> writes:
> I have a question about epsilon transitions in standard NFA
> implementations: What exactly are they for?
> I've never really understood their purpose, other than to confuse
> people trying to understand FSA and regular languages (with respect
> to language scanner generation).

As Stephen Horne mentioned, the most compelling reason is to allow NFAs for regular expressions to be built compositionally from regular expressions in an easy way. There are methods for building epsilon-free NFAs compositionally from epsilon-free regular expressions, but these are much more complicated.

Also, with epsilon transitions, the size of an NFA for a regexp of size N is O(N) states and O(N) transitions, but without epsilon transitions, the maximal size increases to O(N) states and O(N^2) transitions. Granted, that is not important if you intend to convert to a DFA afterwards, as this has O(2^N) states and transitions.
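For readers wanting the mechanics: during the NFA-to-DFA subset construction, epsilon transitions are discharged by taking the epsilon-closure of each state set. A minimal sketch (the state numbering is made up for illustration):

```python
def eps_closure(states, eps):
    """All states reachable from `states` via epsilon transitions alone.

    `eps` maps a state to the set of states reachable by one epsilon move.
    """
    closure, stack = set(states), list(states)
    while stack:
        s = stack.pop()
        for t in eps.get(s, ()):
            if t not in closure:
                closure.add(t)
                stack.append(t)
    return closure

# Thompson-style fragment for (a|b): start state 0 has epsilon moves into
# the two alternatives' start states 1 and 2; their accepting states 3 and 4
# have epsilon moves into the common final state 5.
eps = {0: {1, 2}, 3: {5}, 4: {5}}
print(sorted(eps_closure({0}, eps)))   # [0, 1, 2]
```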
{"url":"http://compilers.iecc.com/comparch/article/09-04-025","timestamp":"2014-04-16T18:58:01Z","content_type":null,"content_length":"5495","record_id":"<urn:uuid:5269d022-238f-4dec-a736-725d74b464ed>","cc-path":"CC-MAIN-2014-15/segments/1397609524644.38/warc/CC-MAIN-20140416005204-00648-ip-10-147-4-33.ec2.internal.warc.gz"}
Chris Beaumont's IDL Library

This class defines an interface for running Markov Chain Monte Carlo simulations. The code is based on the opening chapters of "Markov Chain Monte Carlo in Practice" by Gilks et al. The code in this file sets up the logic for running an MCMC simulation using the Metropolis Hastings algorithm. To actually use these functions, a subclass must be defined, and the selectTrial and logTargetDistribution methods must be overridden. Each of these methods is problem-specific, and cannot be coded in advance.

Author information
September 2009: Written by Chris Beaumont
October 2 2009: Modified run procedure to avoid unnecessary calls to logTargetDistribution
October 3 2009: Added THIN keyword and state variable

Method summary
- run: This procedure runs the Markov chain.
- selectTrial: This function generates a new trial link in the Markov chain, given the current link.
- logTargetDistribution: This function calculates the (unnormalized) logarithm of the target distribution that the Markov chain aims to sample from.
- getChain: This function returns the Markov chain.
- getNSuccess: This function returns the number of successes and failures (that is, acceptances and rejections of the proposal link) in the Markov chain.
- getData: This method returns the MCMC object's data, if any.
- getSeed: This function returns the seed link.
- init: This function initializes the MCMC object.
- Free memory when we are finished.
- define the mcmc data structure.

Routine details

mcmc::run, verbose=verbose
This procedure runs the Markov chain.
mcmc_common: Holds the seed variable for calls to randomu
Author information: September 2009: Written by Chris Beaumont. Oct 3 2009: Optimized so that logTargetDistribution is only ever evaluated once per link.

result = mcmc::selectTrial(current [, transitionRatio=transitionRatio])
This function generates a new trial link in the Markov chain, given the current link. It is not implemented by default, and must be overridden.
This function must also return the ratio of the transition probabilities between the old and new links (i.e., q(old | new) / q(new | old)). This ratio is needed by the Metropolis Hastings algorithm implemented in getTransitionProbability.
Return value: The next link in the chain to try.
current (in, required): The current link in the chain. This can be any kind of scalar (number, structure, object, etc.), as long as the implementing class knows how to handle it.
transitionRatio (in, optional): This must be set to a named variable that will hold, on output, the transition ratio q(current | next) / q(next | current).

result = mcmc::logTargetDistribution(link)
This function calculates the (unnormalized) logarithm of the target distribution that the Markov chain aims to sample from. It is usually a likelihood or posterior distribution, but the function is not implemented in this interface. We use the logarithm of the function, since its values may often be very small.
Return value: The log of the target distribution, evaluated at link.
link (in, required): The point at which to evaluate the target distribution.

result = mcmc::getChain(logf=logf)
This function returns the Markov chain.
Return value

result = mcmc::getNSuccess( [nfail=nfail])
This function returns the number of successes and failures (that is, acceptances and rejections of the proposal link) in the Markov chain.
Return value
nfail (in, optional): Set to a named variable to hold the number of failures.

result = mcmc::getData()
This method returns the MCMC object's data, if any.

result = mcmc::getSeed()
This function returns the seed link.

result = mcmc::init(seed, nstep, data [, thin=thin])
This function initializes the MCMC object.
Return value
seed (in, required): The initial point to start the chain at. Can be any scalar, as long as it is correctly handled by the other MCMC methods.
nstep (in, required): The number of links in the desired Markov chain.
data (in, required): Any data relevant to the process.
thin (in, optional): Set to a number to avoid storing every step of the chain. This is useful for long chains which might otherwise take up lots of memory. If set, then only every THIN'th link will be saved to the chain. Should be less than the correlation length of the chain.

Free memory when we are finished
define the mcmc data structure

Author information
September 2009: Written by Chris Beaumont
Oct 2010: Added logf field

File attributes
Modification date: Thu Nov 3 17:24:52 2011
Lines: 301
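The IDL source itself is not reproduced in this excerpt, so the following Python sketch is only an assumed reconstruction of the loop mcmc::run is documented to perform: a standard Metropolis-Hastings step in which selectTrial supplies a trial link plus the transition ratio q(current | next) / q(next | current), logTargetDistribution supplies the (unnormalized) log target, and the target is evaluated only once per link:

```python
import math
import random

def run_chain(seed_link, nstep, log_target, select_trial, rng):
    """Metropolis-Hastings loop mirroring the documented mcmc interface."""
    chain = [seed_link]
    current, logf = seed_link, log_target(seed_link)
    nsuccess = nfail = 0
    for _ in range(nstep):
        trial, ratio = select_trial(current, rng)
        logf_trial = log_target(trial)          # evaluated once per link
        # Accept with probability min(1, ratio * f(trial)/f(current)).
        log_alpha = logf_trial - logf + math.log(ratio)
        if log_alpha >= 0 or rng.random() < math.exp(log_alpha):
            current, logf = trial, logf_trial
            nsuccess += 1
        else:
            nfail += 1
        chain.append(current)
    return chain, nsuccess, nfail

# Example: sample a standard normal with a symmetric Gaussian proposal,
# for which the transition ratio is 1.
log_normal = lambda x: -0.5 * x * x
propose = lambda x, rng: (x + rng.gauss(0.0, 1.0), 1.0)

rng = random.Random(42)
chain, ok, bad = run_chain(0.0, 20000, log_normal, propose, rng)
mean = sum(chain) / len(chain)
print(round(mean, 3), ok, bad)
```

The sample mean should come out close to zero, and ok + bad equals the requested number of steps, matching what getNSuccess reports.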
{"url":"http://www.ifa.hawaii.edu/~beaumont/code/mcmc__define.html","timestamp":"2014-04-20T21:23:45Z","content_type":null,"content_length":"22103","record_id":"<urn:uuid:a09a6d98-b9af-44ce-8542-da0f4e0e64d0>","cc-path":"CC-MAIN-2014-15/segments/1397609539230.18/warc/CC-MAIN-20140416005219-00548-ip-10-147-4-33.ec2.internal.warc.gz"}
Capitulo 1 Chapter I The 1980s were the decade in which computers began to influence all levels of education. The number of computers available for students' use changed from 800,000 to 1.7 million during the 1980s (U.S. Congress, Office of Technology Assessment, 1988). Today there are two million computers in American schools, which represent about one computer for every twenty five students. Many higher education institutions utilized newly available technology to enrich the curriculum and assist the teaching-learning process. Infusions of technology resulted in recommendations from accrediting agencies to develop formal evaluation procedures to assess the outcomes of technological innovations in the teaching-learning process and to determine their effectiveness in relation to the human and fiscal resources expended. Reports such as Everybody Counts (National Research Council, 1989) and Moving Beyond the Myths (National Research Council, 1991) supported the position that the society has much to gain from the increasing role of technology in mathematics education. The role of technology in mathematics education presented in Everybody Counts was: • School mathematics can become more like the mathematics people actually use, both on the job and in scientific application. • Weakness in algebraic skills need no longer prevent students from understanding ideas in more advanced mathematics. • Mathematics learning can become more active and dynamic, hence more effective. • Students can explore mathematics on their own, to ask and answer countless "what if" questions. Although calculators and computers will not necessarily cause students to think for themselves, they can provide an environment in which student-generated mathematical ideas can thrive. • Time invested in mathematics study can build long lasting intuition and insight, not just short lived strategies for calculation. (pp. 
62-63) The National Council of Teachers of Mathematics publication Curriculum and Evaluation Standards for School Mathematics (NCTM,1989) established priorities for curricula as the basis for improving mathematics teaching. The document explicitly recommended the integration of technology within the mathematics program at all grade levels. It also urged changes in the methods and the strategies that educators use in the teaching of mathematics. NCTM Standards proposed that: • appropriate calculators should be available to all students at all times; • a computer should be available in every classroom for demonstration purposes; • every student should have access to a computer for individual and group work; • students should learn to use the computer as a tool for processing information and performing calculations to investigate and solve problems. (p. 8) Problem solving and technology related studies have been the focus of a considerable amount of research over the past 25 years. Lester (1994) asserted that there is a need to provide teachers with well-documented information about problem solving. He suggested three issues related to problem solving instruction that need more investigation: (1) The Role of the Teacher in a Problem-Centered Classroom; (2) What Actually Takes Place in Problem-Centered Classrooms?; (3) Research Should Focus on Groups and Whole Classes Rather than Individuals. Schroeder and Lester (1989) indicated that there is an urgent need to provide teachers with information about “teaching via problem solving.” Kaput and Thompson (1994) urged that educators should assume responsibilities for shaping the roles of new technologies in school mathematics and researchers should turn issues raised by new technologies into research-able questions. Clearly, there seems to be a need in mathematics education to find some effective uses for technology. 
It appears, also, that more research is necessary about the effect of a technology-based curriculum on mathematical reasoning skills and students' attitudes toward their own problem solving abilities. Purpose of the Study The purpose of this study is to prepare computer-based instructional materials for use in a mathematical reasoning and problem solving skills course at the college level, and to test the effects of these materials in a college setting. Among the questions to be addressed in this study are: 1. Within what topics of a mathematics reasoning skill course can a technology-based methodology be used? 2. What is the effect of a technology-based curriculum on students' achievement in a mathematical reasoning skill course given the fact that additional time was devoted to learning to use a 3. What attitudes toward computers do students at the college level have? 4. Does the use of technology improve students' attitude toward the usefulness of mathematics? 5. Does the use of technology increase students' confidence in learning mathematics? 6. Does the use of technology affect students' attitude toward their own problem solving abilities? Procedure of the Study The study took place in a private four-year university in Ponce, Puerto Rico, during the Spring of 1995 (See Appendix A for the consent forms). Three groups participated in the study. The students were divided into three sections by the registrar. Two of the groups were designated Computer I (C1, Instructor A), and Computer 2 (C2, Instructor B). The third group was designated Non-computer (NC, Instructor A). The investigator taught Computer 1 and Non-Computer groups. A mathematics department colleague taught group Computer 2. The students in the computer groups received three sessions of instruction in the use of Lotus 1-2-3. Additional instruction was given as needed for the purpose of the project. 
The instructional content was textbook oriented and was identical for both treatment C1, C2 and comparison NC groups. The following procedures were followed to achieve the purpose of the study. First, a review of relevant literature to obtain information about investigations related to the study was undertaken. Based upon this review instruments for evaluating achievement of problem solving skills were developed. Instruments to assess students' attitudes toward the usefulness of mathematics and confidence of learning mathematics; their own problem solving abilities; and computers were chosen. The investigator and a mathematics colleague constructed two measures, the pretest and the posttest, (See Appendix B). The problems on the two examinations were matched in that the strategies that can be used to solve a pretest problem can also help to solve a posttest problem. The Fennema-Sherman Mathematics Attitudes Scales, (See Appendix C) instruments were used to measure attitudes toward the learning of mathematics by males and females (Fennema & Sherman, 1976) and to asses the students' attitudes toward the usefulness of mathematics and their confidence of learning mathematics. The Student Attitude Questionnaire (See Appendix D), developed for the Mathematical Problem-Solving Project at Indiana University (Moses, 1976) was selected to assess problem solving abilities. The students' attitudes toward computers (See Appendix E) were assessed using instruments developed by the Office of Planning, Development and Evaluation in the university where the study was conducted. Permission for the use of the instruments was requested and received (See Appendix A). The study required other activities. There was a review of the software available in the Mathematics and Computer Assisted Instruction Laboratories. The investigator developed and selected from the literature reviewed computer-based activities for the mathematical reasoning skill course. 
She also organized the computer-based curriculum project, including the curricular materials and the schedule for computer laboratory visits. Other activities were the administration of the pretest, the posttest, and the self-assessment instruments. The investigator and the colleague working in the study interviewed a selected sample from the treatment groups to get student's thoughts and feelings about the treatment. Plan of the Report The plan of the report follows the sequence established in the procedure of the study. The review of literature is presented in Chapter II. The review includes studies involving problem solving, the use of computers in mathematics, and the use of computers to provide concrete experience as an aid to problem solving. Information about the subjects of the study, treatment groups and comparison group, the course, the environment, the instructional materials and the instruments appears in Chapter III. The analysis of the data is presented in Chapter IV, and the Summary, Conclusions and Recommendations in Chapter V.
{"url":"http://ponce.inter.edu/cai/tesis/bfeliciano/Cap=1.htm","timestamp":"2014-04-20T18:55:22Z","content_type":null,"content_length":"11097","record_id":"<urn:uuid:f3314c7b-b27c-40c5-bc85-1c7be1f005ff>","cc-path":"CC-MAIN-2014-15/segments/1397609539066.13/warc/CC-MAIN-20140416005219-00504-ip-10-147-4-33.ec2.internal.warc.gz"}
the encyclopedic entry of Deceleration Parameter

The deceleration parameter $q$ in cosmology is a measure of the cosmic acceleration of the expansion of the universe. It is defined by:

$q \stackrel{\mathrm{def}}{=} -\frac{\ddot{a}\,a}{\dot{a}^2}$

where $a$ is the scale factor of the universe and the dots indicate derivatives by proper time. Recent measurements of dark energy imply that the expansion of the universe is actually accelerating; the minus sign and name "deceleration parameter" are historical; at the time of definition $q$ was thought to be positive, now it is believed to be negative.

The Friedmann acceleration equation can be written as

$3\frac{\ddot{a}}{a} = -4\pi G(\rho + 3p) = -4\pi G(1+3w)\rho,$

where $\rho$ is the energy density of the universe, $p$ is its pressure, and $w = p/\rho$ is the equation of state of the universe. This can be rewritten as

$q = \frac{1}{2}(1+3w)\left(1 + \frac{K}{\dot{a}^2}\right)$

by using the first Friedmann equation, where $H = \dot{a}/a$ is the Hubble parameter and $K = +1, 0, -1$ depending on whether the universe is closed, flat or open.

The derivative of the Hubble parameter can be written in terms of the deceleration parameter:

$\frac{\dot{H}}{H^2} = -(1+q).$

Except in the speculative case of phantom energy (which violates all the energy conditions), all postulated forms of matter yield a deceleration parameter $q \ge -1$. Thus, any universe should have a decreasing Hubble parameter and the local expansion of space is always slowing (or, in the case of a cosmological constant, proceeds at a constant rate, as in de Sitter space).

Observations of the cosmic microwave background demonstrate that the universe is very nearly flat, so:

$q = \frac{1}{2}(1+3w).$

This implies that the universe is decelerating for any cosmic fluid with equation of state greater than $-\frac{1}{3}$ (any fluid satisfying the strong energy condition does so, as does any form of matter present in the Standard Model, but excluding inflation). However, observations of distant type Ia supernovae indicate that $q$ is negative; the expansion of the universe is accelerating. This is an indication that the gravitational attraction of matter, on the cosmological scale, is more than counteracted by the negative pressure of dark energy, in the form of either quintessence or a positive cosmological constant.

Before the first indications of an accelerating universe, in 1998, it was thought that the universe was dominated by dust with negligible equation of state, $w \approx 0$. This had suggested that the deceleration parameter was equal to one half; the experimental effort to confirm this prediction led to the discovery of acceleration.
{"url":"http://www.reference.com/browse/Deceleration+Parameter","timestamp":"2014-04-21T08:39:44Z","content_type":null,"content_length":"80668","record_id":"<urn:uuid:bf3689a2-2f12-43ab-b3d5-96d1f7c10d04>","cc-path":"CC-MAIN-2014-15/segments/1397609539665.16/warc/CC-MAIN-20140416005219-00641-ip-10-147-4-33.ec2.internal.warc.gz"}
Mirrored on CSAIL Blog Friday, May 23rd, 2008 I’ve been increasingly tempted to make this blog into a forum solely for responding to the posts at Overcoming Bias. (Possible new name: “Wallowing in Bias.”) Two days ago, Robin Hanson pointed to a fascinating paper by Bousso, Harnik, Kribs, and Perez, on predicting the cosmological constant from an “entropic” version of the anthropic principle. Say what you like about whether anthropicology is science or not, for me there’s something delightfully non-intimidating about any physics paper with “anthropic” in the abstract. Sure, you know it’s going to have metric tensors, etc. (after all, it’s a physics paper) — but you also know that in the end, it’s going to turn on some core set of assumptions about the number of sentient observers, the prior probability of the universe being one way rather than another, etc., which will be comprehensible (if not necessarily plausible) to anyone familiar with Bayes’ Theorem and how to think formally. So in this post, I’m going to try to extract an “anthropic core” of Bousso et al.’s argument — one that doesn’t depend on detailed calculations of entropy production (or anything else) — trusting my expert readers to correct me where I’m mistaken. In defense of this task, I can hardly do better than to quote the authors themselves. In explaining why they make what will seem to many like a decidedly dubious assumption — namely, that the “number of observations” in a given universe should be proportional to the increase in non-gravitational entropy, which is dominated (or so the authors calculate) by starlight hitting dust — they write: We could have … continued to estimate the number of observers by more explicit anthropic criteria. This would not have changed our final result significantly. But why make a strong assumption if a more conservative one suffices? [p. 
14] In this post I'll freely make strong assumptions, since my goal is to understand and explain the argument rather than to defend it.

The basic question the authors want to answer is this: why does our causally-connected patch of the universe have the size it does? Or more accurately: taking everything else we know about physics and cosmology as given, why shouldn't we be surprised that it has the size it does? From the standpoint of post-1998 cosmology, this is more-or-less equivalent to asking why the cosmological constant Λ ~ 10^-122 should have the value it has. For the radius of our causal patch scales like 1/√Λ ~ 10^61 Planck lengths ~ 10^10 light-years, while (if you believe the holographic principle) its maximum information content scales like 1/Λ ~ 10^122 qubits. To put it differently, there might be stars and galaxies and computers that are more than ~10^10 light-years away from us, and they might require more than ~10^122 qubits to describe. But if so, they're receding from us so quickly that we'll never be able to observe or interact with them.
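These order-of-magnitude scalings are easy to sanity-check (the Planck-length and light-year values below are standard constants, not taken from the post):

```python
# For Lambda ~ 1e-122 in Planck units: causal-patch radius ~ 1/sqrt(Lambda)
# Planck lengths, holographic information content ~ 1/Lambda qubits.
Lam = 1e-122
radius_planck = Lam ** -0.5    # ~1e61 Planck lengths
qubits = 1 / Lam               # ~1e122

# Convert the radius to light-years using standard constants:
planck_length_m = 1.616e-35
light_year_m = 9.461e15
radius_ly = radius_planck * planck_length_m / light_year_m
print(f"{radius_planck:.1e} l_P, {qubits:.1e} qubits, {radius_ly:.1e} ly")
```

The conversion lands near 1.7 × 10^10 light-years, consistent with the quoted ~10^10 light-years.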
write that, based on current estimates, Λ could be about 2000 times bigger than it is without preventing galaxy formation. As for why Λ isn’t smaller, there’s a “naturalness” argument due originally (I think) to Weinberg, before the astronomers even discovered that Λ>0. One can think of Λ as the energy of empty space; as such, it’s a sum of positive and negative contributions from all possible “scalar fields” (or whatever else) that contribute to that energy. That all of these admittedly-unknown contributions would happen to cancel out exactly, yielding Λ=0, seems fantastically “unnatural” if you choose to think of the contributions as more-or-less random. (Attempts to calculate the likely values of Λ, with no “anthropic correction,” notoriously give values that are off by 120 orders of magnitude!) From this perspective, the smaller you want Λ to be, the higher the price you have to pay in the unlikelihood of your hypothesis. Based on the above reasoning, Weinberg predicted that Λ would have close to the largest possible value it could have, consistent with the formation of galaxies. As mentioned before, this gives a prediction that’s too big by a factor of 2000 — a vast improvement over the other approaches, which gave predictions that were off by factors of 10^120 or infinity! Still, can’t we do better? One obvious approach to pushing Λ down would be to extend the relatively-uncontroversial argument explaining why Λ can’t be enormous. After all, the tinier we make Λ, the bigger the universe (or at least our causal patch of it) will be. And hence, one might argue, the more observers there will be, hence the more likely we’ll be to exist in the first place! This form of anthropicizing — that we’re twice as likely to exist in a universe with twice as many observers — is what philosopher Nick Bostrom calls the Self-Indication Assumption. However, two problems with this idea are evident. 
First, why should it be our causal patch of the universe that matters, rather than the universe as a whole? For anthropic purposes, who cares if the various civilizations that arise in some universe are in causal contact with each other or not, provided they exist? Bousso et al.’s response is basically just to stress that, from what we know about quantum gravity (in particular, black-hole complementarity), it probably doesn’t even make sense to assign a Hilbert space to the entire universe, as opposed to some causal patch of it. Their “Causal-Patch Self-Indication Assumption” still strikes me as profoundly questionable — but let’s be good sports, assume it, and see what the consequences are. If we do this, we immediately encounter a second problem with the anthropic argument for a low value of Λ: namely, it seems to work too well! On its face, the Self-Indication Assumption wants the number of observers in our causal patch to be infinite, hence the patch itself to be infinite in size, hence Λ=0, in direct conflict with observation. But wait: what exactly is our prior over the possible values of Λ? Well, it appears Landscapeologists typically just assume a uniform prior over Λ within some range. (Can someone enlighten me on the reasons for this, if there are any? E.g., is it just that the middle part of a Gaussian is roughly uniform?) In that case, the probability that Λ is between ε and 2ε will be of order ε — and such an event, we might guess, would lead to a universe of “size” 1/ε, with order 1/ε observers. In other words, it seems like the tiny prior probability of a small cosmological constant should precisely cancel out the huge number of observers that such a constant leads to — Λ(1/Λ)=1 — leaving us with no prediction whatsoever about the value of Λ. (When I tried to think about this issue years ago, that’s about as far as I got.) So to summarize: Bousso et al. 
need to explain to us on the one hand why Λ isn’t 2000 times bigger than it is, and on the other hand why it’s not arbitrarily smaller or 0. Alright, so are you ready for the argument? The key, which maybe isn’t so surprising in retrospect, turns out to be other stuff that’s known about physics and astronomy (independent of Λ), together with the assumption that that other stuff stays the same (i.e., that all we’re varying is Λ). Sure, say Bousso et al.: in principle a universe with positive cosmological constant Λ could contain up to ~1/Λ bits of information, which corresponds — or so a computer scientist might estimate! — to ~1/Λ observers, like maybe ~1/√Λ observers in each of ~1/√Λ time periods. (The 1/√Λ comes from the Schwarzschild bound on the amount of matter and energy within a given radius, which is linear in the radius and therefore scales like 1/√Λ.) But in reality, that 1/Λ upper bound on the number of observers won’t be anywhere close to saturated. In reality, what will happen is that after a billion or so years stars will begin to form, radiating light and quickly increasing the universe’s entropy, and then after a couple tens of billions more years, those stars will fizzle out and the universe will return to darkness. And this means that, even though you pay a Λ price in prior probability for a universe with 1/Λ information content, as Λ goes to zero what you get for your money is not ~1/√Λ observers in each of ~1/√Λ time periods (hence ~1/Λ observers in total), but rather just ~1/√Λ observers over a length of time independent of Λ (hence ~1/√Λ observers in total). In other words, you get diminishing returns for postulating a bigger and bigger causal patch, once your causal patch exceeds a few tens of billions of light-years in radius. So that’s one direction. In the other direction, why shouldn’t we expect Λ to be 2000 times bigger than it is (i.e. the radius of our causal patch to be ~45 times smaller)? 
Well, Λ could be that big, say the authors, but in that case the galaxies would fly apart from each other before starlight really started to heat things up. So once again you lose out: during the very period when the stars are shining the brightest, entropy production is at its peak, civilizations are presumably arising and killing each other off, etc., the number of galaxies per causal patch is minuscule, and that more than cancels out the larger prior probability that comes with a larger value of Λ. Putting it all together, then, what you get is a posterior distribution for Λ that’s peaked right around 10^-122 or so, corresponding to a causal patch a couple tens of light-years across. This, of course, is exactly what’s observed. You also get the prediction that we should be living in the era when Λ is “just taking over” from gravity, which again is borne out by observation. According to another paper, which I haven’t yet read, several other predictions of cosmological parameters come out right as well. On the other hand, it seems to me that there are still few enough data points that physicists’ ability to cook up some anthropic explanation to fit them all isn’t sufficiently surprising to compel belief. (In learning theory terms, the measurable cosmological parameters still seem shattered by the concept class of possible anthropic stories.) For those of us who, unlike Eliezer Yudkowsky, still hew to the plodding, non-Bayesian, laughably human norms of traditional science, it seems like what’s needed is a successful prediction of a not-yet-observed cosmological parameter. Until then, I’m happy to adopt a bullet-dodging attitude toward this and all other proposed anthropic explanations. I assent to none, but wish to understand them all — the more so if they have a novel conceptual twist that I personally failed to think of.
ELLIPSE Command's Parameter Option [DXF - DXF Reference]

The Parameter option of the ELLIPSE command uses the following equation to define an elliptical arc:

    p(u) = c + a cos(u) + b sin(u)

The variables a, b, c are determined when you select the endpoints for the first axis and the distance for the second axis. a is the negative of 1/2 of the major axis length, b is the negative of 1/2 the minor axis length, and c is the center point (2-D) of the ellipse. Because this is actually a vector equation and the variable c is actually a point with X and Y values, it really should be written as:

    p(u) = (Cx + a cos(u)) i + (Cy + b sin(u)) j

where

Cx is the X value of the point c
Cy is the Y value of the point c
a is -(1/2 of the major axis length)
b is -(1/2 of the minor axis length)
i and j represent unit vectors in the X and Y directions

In AutoCAD, once the axis endpoints are selected, all you have left to specify is the start and end of the elliptical arc. When you select the parameter option, you are asked for a start parameter and an end parameter. These values are plugged into the equation to determine the actual start and end points on the ellipse. The rest of the ellipse is filled in between these two points in a counterclockwise direction from the first parameter to the second. The value entered for the parameter u is taken to be degrees for the purposes of obtaining the cos(u) and sin(u). For example:

Axis endpoint 1 = 0,1
Axis endpoint 2 = 4,1
Other axis distance = 2,0
Start parameter = 270
End parameter = 0

generates the start point at 2,2 and the end point at 0,1 and fills in the ellipse from 2,2 to 0,1 in a counterclockwise direction.
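The worked example can be checked numerically. The sketch below is plain Python, not part of the DXF reference; it evaluates the parametric equation componentwise, p(u) = (Cx + a cos(u), Cy + b sin(u)). The axis endpoints (0,1) and (4,1) give c = (2,1) and a = -2, and a minor half-length of 1 gives b = -1:

```python
import math

def ellipse_point(cx, cy, a, b, u_degrees):
    """Evaluate p(u) = (Cx + a*cos(u), Cy + b*sin(u)) with u in degrees."""
    u = math.radians(u_degrees)
    return (cx + a * math.cos(u), cy + b * math.sin(u))

# Values from the worked example: center (2,1), a = -2, b = -1.
cx, cy, a, b = 2.0, 1.0, -2.0, -1.0

start = ellipse_point(cx, cy, a, b, 270)  # start parameter 270
end = ellipse_point(cx, cy, a, b, 0)      # end parameter 0

print(start)  # approximately (2.0, 2.0)
print(end)    # (0.0, 1.0)
```

Both points agree with the documented result: the start parameter 270 lands on (2,2) and the end parameter 0 on (0,1).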
Discrete Mathematics & Theoretical Computer Science
Volume 4 n° 1 (2000), pp. 45-60
author: Ross M. McConnell and Jeremy P. Spinrad
title: Ordered Vertex Partitioning
keywords: Modular Decomposition, Substitution Decomposition, Transitive Orientation
abstract: A transitive orientation of a graph is an orientation of the edges that produces a transitive digraph. The modular decomposition of a graph is a canonical representation of all of its modules. Finding a transitive orientation and finding the modular decomposition are in some sense dual problems. In this paper, we describe a simple O(n + m log n) algorithm that uses this duality to find both a transitive orientation and the modular decomposition. Though the running time is not optimal, this algorithm is much simpler than any previous algorithms that are not Ω(n^2). The best known time bounds for the problems are O(n + m), but they involve sophisticated techniques.
reference: Ross M. McConnell and Jeremy P. Spinrad (2000), Ordered Vertex Partitioning, Discrete Mathematics and Theoretical Computer Science 4, pp. 45-60
bibtex: For a corresponding BibTeX entry, please consider our BibTeX-file.
ps.gz-source: dm040104.ps.gz (47 K)
ps-source: dm040104.ps (155 K)
pdf-source: dm040104.pdf (200 K)
Automatically produced on Wed Apr 16 10:25:34 CEST 2003 by gustedt
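To make the abstract's central notion concrete: a transitive orientation assigns a direction to every edge so that whenever u→v and v→w are present, u→w is too. The toy check below is my own illustration, not the paper's O(n + m log n) algorithm; it verifies transitivity for two orientations of the 4-cycle a-b, b-c, c-d, d-a, which is a comparability graph:

```python
def is_transitive(edges):
    """True if for every u->v and v->w in the digraph, u->w is also present."""
    edge_set = set(edges)
    return all((u, w) in edge_set
               for (u, v) in edge_set
               for (x, w) in edge_set if x == v)

# Orienting the 4-cycle with a and c as sources IS a transitive orientation:
orientation = [("a", "b"), ("c", "b"), ("c", "d"), ("a", "d")]
print(is_transitive(orientation))  # True

# Orienting the cycle "around" (a->b->c->d->a) is NOT transitive:
around = [("a", "b"), ("b", "c"), ("c", "d"), ("d", "a")]
print(is_transitive(around))  # False
```

The check is O(m^2) per pair of edges and only meant to pin down the definition; the paper's contribution is doing orientation and modular decomposition together in near-linear time.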
Rgdfg by Checkitout11
1. (a) State experimental evidence for
i) the strong repulsive force
ii) the strong attractive force
between atoms in a solid.
(b) A simple model for the oxygen molecule consists of two equal masses connected by a light spring. The atoms oscillate along their line of centres while the midpoint of the spring remains fixed.
i) Write down an expression for the frequency of oscillation of one oxygen atom in terms of its mass m and the force constant K of one half of the spring. How is K related to the force constant k of the whole spring? [The force constant is the force per unit extension.]
ii) From spectroscopic measurements of oxygen molecules, it may be deduced that the frequency of oscillation of the molecule is about 4.7 × 10^13 Hz. In what region of the electromagnetic spectrum does radiation of this frequency lie? What effective force constant k does this correspond to? [Relative atomic mass of oxygen = 16]
2. (a) Sketch graphs, on the same horizontal axis, to show how the following quantities vary with separation r:
i) the potential energy U
ii) the force F
between atoms. Mark the positions corresponding to the equilibrium separation between atoms on both graphs.
iii) State the relationship between the potential energy and the force between two atoms.
iv) Use your graph of potential energy versus separation to explain the thermal expansion of a solid.
v) Explain why the atoms in a crystalline solid are arranged in a regular and orderly manner.
(b) The potential energy of a two-atom system is represented by U = a/r^12 − b/r^6, where r is the separation between the atoms and a and b are constants. Find r_0, the value of r at which U is a minimum, in terms of a and b. Find U_0, the potential energy of the system when r = r_0.
ANSWER (a)...
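For part 1(b), each atom effectively oscillates on half the spring, and halving a spring doubles its stiffness, so K = 2k and f = (1/2π)√(K/m), giving k = m(2πf)²/2. The numbers below are my own worked check, with standard physical constants that are not given on the problem sheet; they put k on the order of 10³ N m⁻¹ and the radiation in the infrared:

```python
import math

U = 1.6605e-27   # atomic mass unit, kg (standard value, not from the sheet)
C = 2.9979e8     # speed of light, m/s (standard value, not from the sheet)

m = 16 * U       # mass of one oxygen atom
f = 4.7e13       # oscillation frequency, Hz

K = m * (2 * math.pi * f) ** 2   # force constant of half the spring, N/m
k = K / 2                        # force constant of the whole spring (K = 2k)

wavelength = C / f               # ~6.4e-6 m, i.e. in the infrared

print(f"k ~ {k:.0f} N/m")
print(f"wavelength ~ {wavelength:.2e} m")
```

This gives k of roughly 1.2 × 10³ N m⁻¹ and a wavelength of about 6.4 μm, well inside the infrared region.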
Englewood, NJ Precalculus Tutor
Find an Englewood, NJ Precalculus Tutor
...In this way, I know that they fully understand and will hopefully learn other math concepts more easily. I find teaching math most fulfilling when students who do not like it recognize their ability to learn it well. That they may gain a better appreciation for their abili...
16 Subjects: including precalculus, chemistry, calculus, geometry
...I also received credit for VEE Economics and VEE Corporate Finance (part of the SOA Actuary track). I am highly proficient in probability and statistics, which are tested in great detail in actuarial science. I also have tutored in material covered on exam P/1 and exam FM/2.
Samuel
I am highly proficient in linear algebra.
21 Subjects: including precalculus, calculus, statistics, geometry
I have been tutoring students in math for the past seven years. In addition, I have worked as a K-12 substitute teacher in both the public and private school systems for six years. I see myself as more of an academic mentor; I love enabling students of all ages to excel in math and the rest of their academic endeavors.
27 Subjects: including precalculus, reading, calculus, statistics
I specialize in tutoring for Physics and Math. My education is rooted deeply in Physics, as I most recently received a Master's in Physics from the University of Connecticut. I taught introductory physics courses at UConn and enjoyed seeing my students grow both in academics and critical thinking, validated through both testing and laboratory reports.
9 Subjects: including precalculus, calculus, physics, algebra 1
...I have also recently passed the Praxis II Exam in Math Content Knowledge while earning Recognition of Excellence for scoring in the top 15 percent over the last 5 years. My most valuable quality, however, is my ability to relate to students of all ages, and make even the most difficult subjects ...
22 Subjects: including precalculus, reading, English, chemistry
Upper central series From Groupprops This article defines a quotient-iterated series with respect to the following subgroup-defining function: center The upper central series of a group │Case for ordinal│ Verbal definition of │Definition using the│ │ │trivial subgroup │ │ │ │its image modulo the previous member is the center of the quotient group by the previous member. │center of │ │ │union of all the previous members │ │ The zeroth member is the trivial group, the first member is the center, and the second member is the second center. In other words: Proof methods The following articles give a sense of the important methods used to prove facts about the upper central series: Subgroup properties Related subgroup-defining functions │Subgroup-defining function│ Role in upper central series │ │center │first member, i.e., │ │second center │second member, i.e., │ │hypercenter │the final stable member, i.e., the │ Related group properties │ property │ meaning in terms of upper central series │ │Centerless group │upper central series stays stuck at trivial subgroup; never gets off the ground │ │Nilpotent group │upper central series reaches whole group in finitely many steps │ │Hypercentral group│transfinite upper central series reaches whole group │ Relation with lower central series For a nilpotent group, the lower central series and upper central series are closely related. They both have the same length, and there is a containment relation between them, which follows from the combination of the facts that upper central series is fastest ascending central series and lower central series is fastest descending central series. However, they need not coincide. Nilpotent groups where they do coincide are termed UL-equivalent groups, and nilpotent not implies UL-equivalent. Here is a table with some distinctions/contrasts between the two central series:
{"url":"http://groupprops.subwiki.org/wiki/Upper_central_series","timestamp":"2014-04-18T20:43:18Z","content_type":null,"content_length":"38315","record_id":"<urn:uuid:2e8efc0e-2746-4170-85d7-08fd16e25aad>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.9/warc/CC-MAIN-20140416005215-00544-ip-10-147-4-33.ec2.internal.warc.gz"}