METHODOLOGY IN THE DJSM
From various research works it may be found that the systematic methodology developed in the Jaina school of science can be considered as follows:
1. The models of language generation and functioning were developed for the synchronic and diachronic aspects of the study of the karma phenomena through the Syādvāda and anekānta dialectical system of prediction through primary and secondary views. Their scientific evaluation was published by Mahalanobis and Haldane. Further contributions may be seen in the unpublished monographs.
2. The development of closed and open number systems, and of finite and transfinite means for the fluent measure (dravya pramāṇa) through constructive and existential sets (rāśis). The remaining set-theoretic manipulation is based on the simile (upamā) constructed quarter (kṣetra) and time (kāla) measures. These are the phase (bhāva) measures.
3. Evolution of the system-theoretic knowledge of maxima and minima (utkṛṣṭa and jaghanya), quantitatively and qualitatively, in so far as the intermediates (madhyamas) were to be placed and located through their comparability (alpabahutva) results, so far as the closed and open dynamical karmic systems were concerned. Their transfinite topology has been discussed. This is used to analyze the development of the structural organismic system of typical characteristics in the forms of control stations (guṇasthānas) and wayward stations (mārgaṇā sthānas). It may be noted that most of the results in all sciences today have been obtained for the micro and macro cosmos through the functional application of the maxima and minima theory of the variational calculus, applied to statistical and dynamical systems.
4. The development of open-system ideas of input values and functions (dravya āsrava, bandha and bhāva āsrava, bandha), output values and functions (dravya nirjarā and bhāva nirjarā), independence values and functions (dravya and bhāva āsatva) and so on, and their transitions and transformations as operands and operators (dravya and bhāva karaṇa) at every instant. The quantitative results are depicted through the constructive and existential concepts of geometric regressions (guṇahāni). The concept of feedback is the same as that of homeostasis in a biological system. It may be found that the concepts of controllability, reachability, and the corresponding observability, constructibility and determinability appear in the ancient concepts of karma theory. It has the form of
5. The development of unified systems in the wholeness study of the astronomical, cosmological and bio-theoretical systems. The studies of spiroelliptic orbits in the kinematic motions of astral bodies (here the diurnal and annual motion is unified for the sun and the moon) are referred to the grid system of gaganakhaṇḍas and to altitude in yojanas (through shadow reckoning). The commentaries, extended over twenty centuries, appear to be sufficient for the understanding of the profound systems approach of the Jaina school, developed for a non-violence culture basic to freedom.
DEVELOPMENT OF SYMBOLISM
Firstly, it is essential to study the basic concepts and principles of the Jaina karmic theory in the ancient symbolism, and the convenient corresponding working symbolism and terminology related to the canonical texts containing karma as a functional system. In the commentaries of the canonical texts of the Jaina school of science, the symbols are used to denote more than one term, distinguishable each time through context. No doubt this made it possible to work with a minimum number of symbols in ancient times, but if the context is ignored it becomes rather difficult to understand what a given symbol stands for. Hence working symbols are evolved that are mutually distinguishable while keeping the number of symbols as small as possible. Thus the following table gives a ready reference to the terms, the ancient symbolism, and the working symbolism. There are many symbols, but only those sufficient for the present work are selected. Different tables refer to different aspects, as indicated in Tables I, II and III.
THE NUMBER ANALYSIS
There are different methods of writing numbers in the Jaina canonical texts, viz. by the names of objects, by the names of letters (alphabets), with or without denomination, by abbreviation, etc.
i. For metric-scale (place-value) notation, ten was taken as the base, and the manner of writing was mainly of two types. The first way was to move through the numbers from left to right for ascertaining their numerical meaning, as in Vedic literature (LWM), e.g., eight, fifty, seven hundred, seven thousand (608 karmakanda), which has the value 7758. The second way was to move from right to left, e.g., eight crores one lac eight thousand one hundred seventy-five (351 Jivakanda), which is equal to 80108175. (A short numerical sketch of both readings follows this list.)
ii. For non-metric-scale notation, the numbers were written in a denominational way, e.g., the word one kodakodi is frequently observed in the karmakanda, which stands for ten million × ten million. Another example: one less thirty hundred one ninety = 2991 (869 karmakanda).
iii. The subtractive principle was used for vinculum numbers, e.g., two less hundred = 98 (104 karmakanda)
iv. The number might be written by the process of repeating mid digits, e.g., one seven nine nine two nabha (zero) nabha eight four five four five sixteen with sixteen times six thirty and four by eleven
= 17992008451636363636363636363636363636363636 and 4/11 (25 trilokasara)
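To make the two reading conventions in item (i) concrete, the following minimal sketch (an added illustration, not part of the original text) simply adds up the named components, taking one crore = 10^7 and one lac (lakh) = 10^5:

```python
# First way (left to right): eight, fifty, seven hundred, seven thousand
print(8 + 50 + 700 + 7000)                       # -> 7758

# Second way (right to left): eight crores, one lac, eight thousand, one hundred seventy-five
print(8 * 10**7 + 1 * 10**5 + 8 * 10**3 + 175)   # -> 80108175
```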
7.1.2 The Coincidence Site Lattice
If we look at the infinity of possible orientations of two grains relative to each other, we will find some special orientations. In two dimensions this is easy to see if we rotate two lattices on
top of each other.
You can watch what will happen for a hexagonal lattice by activating the link.
A so-called Moiré pattern develops, and for certain angles some lattice points of lattice 1 coincide exactly with some lattice points of lattice 2. A kind of superstructure, a coincidence site lattice (CSL), develops. A question comes to mind: Do these special coincidence orientations and the related CSL have any significance for grain boundaries?
Let's look at our paragon of grain boundaries, the twin boundary:
Shown are the two grains of the preceding twin boundary, but superimposed. Coinciding atoms (in the projection) are marked red. However, this might be coincidental (excuse the pun), because the
atoms in this drawing are not all in the drawing plane. Note that it is not relevant if the boundary itself is coherent or not - only the orientation of the grains counts.
And once more, note that the lattice is not the crystal! We are looking for coinciding lattice points - not for coinciding atom positions (but this may be almost the same thing with simple crystals).
So in this picture the same situation is shown for the fcc lattice belonging to the grain boundary. Again, coinciding lattice points are marked red and a (two-dimensional) elementary cell of the
CSL is also shown in red. The two (three-dimensional) elementary cells of the fcc lattices are also indicated.
It is evident from this picture that the twin boundary belongs to the class of boundaries with a coincidence relation between the two lattices involved.
From the animation in the link above it was clear that many coincidence relations exist for two identical two-dimensional lattices. In order to be able to extend the CSL consideration to three
dimensions and to generalize it, we have to classify the various possibilities. We do that by the following definition:
Definition: The relation between the number of lattice points in the unit cell of a CSL and the number of lattice points in a unit cell of the generating lattice is called S (Sigma); it is the unit cell volume of the CSL in units of the unit cell volume of the elementary cells of the crystals.
A given S specifies the relation between the two grains unambiguously - although this is not easy to see for, let's say, two orthorhombic or even triclinic lattices.
If we look at the twin boundary situation above, we see that S[twin] = 3 (you must relate the two-dimensional lattices here; one is pointed out above in black!). For the three-dimensional case we
still obtain S = 3 for the twin boundary, so we will call twin boundaries from now on: S3 boundaries.
A S1 boundary thus would denote a perfect (or nearly perfect) crystal; i.e. no boundary at all. However, boundaries relatively close to the S1 orientation are all boundaries with only small misorientations, called "small-angle grain boundaries" - and they will be subsumed under the term S1 boundaries for reasons explained shortly.
Since the numerical value of S is always odd, the twin boundary is the grain boundary with the most special coincidence orientation there is, i.e. with the largest number of coinciding lattice points.
Next in line would be the S5 relation defining the S5 boundary. It is (for the two-dimensional case) most easily seen by rotating two square lattices on top of each other.
This also looks like a pretty "fitting" kind of boundary, i.e. a low energy configuration.
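To make the S5 construction concrete, here is a small numerical check (an added sketch, not part of the original hyperscript): rotate a square lattice by 2·arctan(1/3) ≈ 36.9° about <001> and count how many rotated lattice points land exactly on points of the unrotated lattice — about one in five, which is just the statement S = 5.

```python
import math

theta = 2 * math.atan(1.0 / 3.0)                 # ~36.87 deg, the S5 rotation about <001>
c, s = math.cos(theta), math.sin(theta)          # c = 0.8, s = 0.6 exactly

coincident, total = 0, 0
N = 20                                           # window of lattice points to test
for i in range(-N, N + 1):
    for j in range(-N, N + 1):
        x, y = c * i - s * j, s * i + c * j      # point (i, j) of lattice 2 after rotation
        total += 1
        if abs(x - round(x)) < 1e-9 and abs(y - round(y)) < 1e-9:
            coincident += 1                      # it falls on a point of lattice 1

print(total / coincident)    # ~ 5: about one lattice point in five coincides, i.e. S = 5
```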
A suspicion arises: Could it be that grain boundaries between grains in a CSL orientation, especially if the S values are low, have particularly small grain boundary energies?
The answer is: Yes, but... . And the "but" includes several problems:
Most important: How do we get an answer? Calculating grain boundary energies is still very hard to do from first principles (remember that we can't calculate melting points either, even though it's all in the bonds). First principles means that you get the exact positions of the atoms (i.e. the atomic structure of the boundary) and the energy. Even if you guess at the positions (which looks pretty easy for a coherent twin boundary, but your guess would still be wrong in many cases because of so-called "rigid body translations"), it is hard to calculate reliable energies.
So we are left with experiments. This involves other problems:
How do you measure grain boundary energies?
How do you get the orientation relationship?
How do you account for the part of the energy that comes from the habit plane of the boundary - after all, a coherent twin (habit plane = {111}) has a much smaller energy than an incoherent one?
Getting experimental results appears to be rather difficult or at least rather time consuming - and so it is!
Nevertheless, results have been obtained and, yes, low S boundaries tend to have lower energies than average.
However, the energy does not correlate in an easy way with S; it does not, e.g., increase monotonically with increasing S. There might be some S values with especially low energy values, whereas
others are not very special if compared to a random orientation.
The results of (simple) calculations for special cubic geometries are shown in the picture:
Shown is the calculated (0 K) energy for symmetric tilt boundaries in Al produced by rotating around a <100> axis (left) or a <110> axis (right). We see that the energies are lower, indeed, in low
S orientations, but that it is hard to assign precise numbers or trends. Identical S values with different energies correspond to identical grain orientation relationships, but different habit
planes of the grain boundary.
The next figure shows grain boundary energies for twist boundaries in Cu that were actually measured by Miura et al. in 1990 (with an elegant and ingenious, but still quite tedious method).
Clearly, some S boundaries have low energies, but not necessarily all.
Nevertheless, in practice, you tend to find low S boundaries, because (probably) all low energy grain boundaries are boundaries with a defined S value. And these boundaries may have special
properties in different contexts, too.
The link shows the critical current density (at which the superconducting state will be destroyed) in the high-temperature superconductor YBa[2]Cu[3]O[7] with intentionally introduced grain boundaries of various orientations, and an HRTEM image of one (facetted) boundary. It is clearly seen that the critical current density has a pronounced maximum which corresponds to a low S orientation in this (Perovskite-type) lattice.
However, despite this or other direct evidence for the special role of low S boundaries, the most clear indication that low S boundaries are preferred comes from innumerable observations of a
different nature altogether - the observation that grain boundaries very often contain secondary defects with a specific role: They correct for small deviations from a special low S orientation.
In other words: Low S orientations must be preferred, because otherwise the crystal would not "spend" some energy to create defects to compensate for deviations.
If we accept that rule, we also have an immediate rule for preferred habit planes of the boundary:
Obviously, the best match can be made if as many CSL points as possible are contained in the plane of the boundary. This simply means:
Preferred grain boundary planes are the closest packed planes of the corresponding CSL lattice.
We will look at those grain boundary defects in the next sub-chapter.
© H. Föll (Defects - Script) | {"url":"http://www.tf.uni-kiel.de/matwis/amat/def_en/kap_7/backbone/r7_1_2.html","timestamp":"2014-04-19T00:07:55Z","content_type":null,"content_length":"22801","record_id":"<urn:uuid:e111966c-f9ed-4a27-81a7-4eddeb8ffd45>","cc-path":"CC-MAIN-2014-15/segments/1397609535535.6/warc/CC-MAIN-20140416005215-00164-ip-10-147-4-33.ec2.internal.warc.gz"} |
Pelham, NY Science Tutor
Find a Pelham, NY Science Tutor
...I have a BA in Geology and have been teaching Regents Earth Science since 2003. I also taught college level geology labs from 2008-2011. I have a BA in Geology from CUNY: Queens College.
6 Subjects: including geology, biology, physical science, astronomy
...I am intimately familiar with the Macintosh product line and am aware of the pros and cons of their different systems, for example: Macbook Air vs Macbook Pro. Lastly, I am an avid fan of the various hotkeys and tricks available to users running OS X, and would love to help students on wyzant...
32 Subjects: including mechanical engineering, algebra 1, algebra 2, biology
...I find tutoring very rewarding because I can sense students' difficulties and enjoy explaining concepts, especially those that I may have had trouble with myself. I also enjoy languages,
having studied German and Russian for three years, and Spanish for five years. I can appreciate the complexities of the syntax of each language, and teach grammar.
29 Subjects: including physics, civil engineering, calculus, mechanical engineering
...I have recently graduated from the King Graduate Monroe College with an Masters of Science in Criminal Justice, where I graduated Summa Cum Laude. I learned a great deal from this program and
gained deep knowledge and insight from the law enforcement, judicial and political viewpoints of crimino...
27 Subjects: including biology, English, writing, reading
...I have been tutoring as a primary job since the 9th grade. I have always tutored independently and this is my first time working with a tutoring agency. In the past I have tutored algebra,
geometry, precalculus and calculus.
23 Subjects: including physics, reading, statistics, Java
Summary: Volume 45, number 2 CHEMICAL PHYSICS LETTERS 15 January 1977
Noam AGMON
Department of Physical Chemistry, The Hebrew University of Jerusalem,
Jerusalem, Israel
Received 24 September 1976
The Pauling relation of bond order and bond length, together with the BEBO postulate, is utilized to generate reaction coordinates on potential energy surfaces of simple exchange reactions. A generalization of the Pauling relation, in which the constant depends on the equilibrium separation, is proposed.
1. Introduction
In 1947 Pauling [1] proposed the following correlation of bond length (BL) R with the bond order (BO) p, namely
R - R_s = -a ln p,    p ∈ [0, 1],    (1)
where R_s is the length of a "standard" bond with unit BO, and
a = 0.26 Å (= 0.49 a_0).    (2)
(See also p. 81 of ref. [2].)
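As an added numerical illustration of Eq. (1) — not part of the paper — with the constant a = 0.26 Å quoted above and an assumed standard (single-bond) length R_s, the bond length grows logarithmically as the bond order decreases:

```python
import math

def bond_length(p, R_s, a=0.26):
    """Pauling relation, Eq. (1): R = R_s - a*ln(p); lengths in angstroms."""
    return R_s - a * math.log(p)

R_s = 1.54  # assumed single-bond (p = 1) length, roughly a C-C bond, chosen only for illustration
for p in (1.0, 0.75, 0.5, 0.25):
    print(f"p = {p:4.2f}  ->  R = {bond_length(p, R_s):.3f} A")
# p = 1 reproduces R_s; halving the bond order lengthens the bond by a*ln(2) ~ 0.18 A.
```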
Considering a simple exchange reaction of the type
A + BC → AB + C    (3)
Figure 3: Classification responses of recursive partition analysis of the combined noninvasive parameters of near-infrared spectroscopy (NIRS), prostate volume, maximum flow rate (Qmax), and International Prostate Symptom Score (IPSS). (P value); prob: probability of having the diagnosis either obstructed or unobstructed. The total number of patients is 36. The number of true positives is 25 of 28, the number of false positives is 1 of 8, the number of true negatives is 7 of 8, and the number of false negatives is 3 of 28. The sensitivity and specificity for obstruction are 89.3% and 88%, respectively [AUC: 0.96]. It is to be mentioned that these statistical values need to be tested in a new set of patients before they can be considered for clinical application.
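The sensitivity and specificity quoted above follow directly from the stated counts; the short calculation below is an added illustration, not part of the original figure caption.

```python
TP, FN = 25, 3   # obstructed patients: 25 of 28 correctly classified
TN, FP = 7, 1    # unobstructed patients: 7 of 8 correctly classified

sensitivity = TP / (TP + FN)   # 25/28 = 0.893
specificity = TN / (TN + FP)   # 7/8  = 0.875, reported as ~88% in the caption

print(f"sensitivity = {sensitivity:.1%}, specificity = {specificity:.1%}")
```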
Radius of equally sized small circle of a big Circle?!
August 18th 2012, 09:29 AM #1
Junior Member
Radius of equally sized small circle of a big Circle?!
Re: Radius of equally sized small circle of a big Circle?!
1. The midpoints of the small circles form a regular polygon with side length 2r. You have n isosceles triangles.
2. The central angle of one of the isosceles triangles is $2\alpha$, which means $2\alpha$ must be a divisor of 360°.
3. Each isosceles triangle can be split into 2 right triangles with
$\sin(\alpha)=\frac{r}{R-r}~\implies~r=\frac{R \cdot \sin(\alpha)}{\sin(\alpha) + 1}$
4. Keep in mind that $n = \frac{360^\circ}{2 \alpha}$
Re: Radius of equally sized small circle of a big Circle?!
1. The midpoints of the small circles form a regular polygon with side length 2r. You have n isosceles triangles.
2. The central angle of one of the isosceles triangles is $2\alpha$, which means $2\alpha$ must be a divisor of 360°.
3. Each isosceles triangle can be split into 2 right triangles with
$\sin(\alpha)=\frac{r}{R-r}~\implies~r=\frac{R \cdot \sin(\alpha)}{\sin(\alpha) + 1}$
4. Keep in mind that $n = \frac{360^\circ}{2 \alpha}$
Hei, thanks! I'm still trying to understand your solution though; not sure what $\sin(\alpha)$ means,
and how to integrate n there. It's actually a programming problem that I'm solving, but this is math first!
Re: Radius of equally sized small circle of a big Circle?!
Hei, thanks! I'm still trying to understand your solution though; not sure what
$\sin(\alpha)$
Have a look here: Sine - Wikipedia, the free encyclopedia
and how to integrate n there. It's actually a programming problem that I'm solving, but this is math first!
An example: You know (from my previous post) that $n = \frac{360^\circ}{2 \alpha} = \frac{180^\circ}{\alpha}$
Now choose a value for $\alpha$ which must be a divisor of 180: $\tfrac14^\circ, \tfrac13^\circ, \tfrac12^\circ, 1^\circ, 2^\circ, 3^\circ, 4^\circ, 5^\circ, 6^\circ, 9^\circ ..... 90$
To each value of $\alpha$ belongs the number of small circles: $720, 540, 360, 180, 90, 60, 45, 36, 30, 20, .... 2$
Of course you can choose the number of circles first and determine the value of $\alpha$ afterwards:
$\alpha = \frac{180^\circ}n~, n \in \mathbb{N}~\wedge~n\ge2$
For instance: You want to draw 7 small circles then $\alpha = \tfrac{180}7 ^\circ$
But in both cases you have to use the Sine function.
Last edited by earboth; August 19th 2012 at 08:21 AM.
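Since the original poster mentioned this is really a programming problem, here is a small Python sketch of the formula above (an added illustration, not from the original thread). For n equal small circles inside a circle of radius R, each tangent to the big circle and to its two neighbours, take α = 180°/n and r = R·sin(α)/(1 + sin(α)).

```python
import math

def small_radius(R, n):
    """Radius of each of n equal circles inside a circle of radius R,
    each tangent to the big circle and to its two neighbours (n >= 2)."""
    alpha = math.pi / n                       # alpha = 180 deg / n, in radians
    return R * math.sin(alpha) / (1.0 + math.sin(alpha))

print(small_radius(1.0, 2))   # -> 0.5 exactly: two circles of radius R/2
print(small_radius(1.0, 7))   # -> about 0.3026 for seven small circles
```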
Mathematical Treks: From Surreal Numbers to Magic Circles
This is a delightful collection of 33 items, much in the tradition of Martin Gardner, to whom it is dedicated.
How do you bridge the abyss between practising mathematicians and the general public? Of course, there is no abyss, not even a dividing line, but there is certainly a problem. Martin Gardner has been
the best solution we have had for many years, and his act seems impossible to follow, but there are a few who are getting close, notable among them being Ivars Peterson.
In order to capture the reader, the trick seems to be to keep the mathematics invisible. Ivars is fairly good at this, but the technicalities are a shade more visible than they are in Gardner's writing.
Some of the perennials are there (but usually with a new twist): the Möbius strip, prime numbers, π, perfect numbers, magic squares, river crossings. Semi-popular items are Reuleaux curves, soap
films, the Conway-Paterson game of Sprouts, packing circles, and Erdös. History is represented by the 1478 Treviso Arithmetic, the 1503 Margarita Philosophica and Euclid's 14th book (to be taken cum
grano salis). Sporting items are the baseball diamond, the expansion draft, and ball control. Other chapters address Deep Blue, poker, dreidel, DNA, GIMPS, loci with foci, Waring's problem, spreading
rumors, and matchstick problems (with a review of one of my favorite books).
All are a good read: I select four favorites, not listed above.
"Next in Line" describes the Moessner algorithm which turns additive sequences into multiplicative ones. There's a picture of the jacket of another of my favorite books.
"Mating Games and Lizards" describes how three varieties of lizard, and three strains of e. coli, play the game of Scissors-Paper-Stone.
"The Cow in the Classroom" is about humor in mathematics, citing Louis Sachar, Jon Scieszkar & Lane Smith, Stephen Leacock and many others, including Mark Twain, who calculated that a million years
ago the Mississippi was 1.3 million miles long and said, "There is something fascinating about science. One gets such wholesale returns of conjecture out of such a trifling investment of fact."
"Computing with the EDSAC" concerns "the first fully operational and productive stored-program computer", which hasn't received the publicity given to ENIAC and other early computers. The reviewer
was a user, more than fifty years ago, vicariously via the programming of C. B. Haselgrove. EDSAC calculated nim-values of impartial games. Nim-addition is just XOR, so that it's one of the fastest
possible computer operations. But in those days much of the program had to be devoted to removal of the built-in "carry" that was needed for normal arithmetic! Memory size restricted calculations to
400 values, while 600 or more could fairly easily be found by hand. But the computer took a shorter time to make less mistakes, and was a valuable check. EDSAC II was still working in 1960 when my
son first visited the Cambridge Lab, where he's been ever since.
There's a useful, albeit not very complete, index: a feature not always found in Martin Gardner's books. There are cleverly apt and very humorous illustrations by John Johnson. Thanks to Beverly
Ruedi there are very few misprints: Gardner for Gardiner on p.67; 10n should be 10^n on p.122; and on p.124, de la Vallée-Poussin's given names should be Charles-Jean, rather than Charles-Joseph.
Richard K. Guy (rkg@cpsc.ucalgary.ca) is Faculty and Emeritus Professor of Mathematics at The University of Calgary. | {"url":"http://www.maa.org/publications/maa-reviews/mathematical-treks-from-surreal-numbers-to-magic-circles","timestamp":"2014-04-18T22:17:36Z","content_type":null,"content_length":"98935","record_id":"<urn:uuid:1f801d1b-49e6-4122-af2d-6cc5abe4d432>","cc-path":"CC-MAIN-2014-15/segments/1397609536300.49/warc/CC-MAIN-20140416005216-00115-ip-10-147-4-33.ec2.internal.warc.gz"} |
Problem Solving Series #3: What Teaching Resources Should I Buy?
So far, this series has included…
1. Classroom procedures that help students explore and construct problem-solving strategies.
2. Ways to make sure your low readers and second language students are not at a disadvantage.
Is there a perfect Problem Solving teaching resource?
Bad news: Instructional materials are only as good as the instructor. More bad news: Materials from the major textbook companies will probably not be adequate – even if their representatives tell you
In 2003 and 2007, two representatives of a major textbook company tried to convince me that the problem solving activities attached to the summative assessments were adequate in helping students
develop problem-solving skills. My first issue: One problem solver per unit means that students get 12 opportunities during the year to build these skills. Students need more than that. My second
issue: Students have had no scaffolding throughout the unit that would make them successful.
Students turned in their assessments with looks of shame and defeat.
One representative claimed there was research defending the position that problem solving strategies need not be explicitly taught. Given enough time, students will develop strategies on their own.
I have scoured the research. Can’t find it. The only research I can find post-millenium states that students with disabilities benefit from explicit, repeated instruction.
Look for materials that explicitly teach strategies.
If students are going to transfer problem solving skills to real-world problems in a different context, Grant Wiggins suggests students must make four cognitive moves:
1. independently realize what the question is asking and think about which answers/approaches make sense;
2. infer the most relevant prior learning from plausible alternatives;
3. try out an approach, making adjustments as needed given the context or wording; and
4. adapt their answer, perhaps, in the face of a somewhat novel or odd setting
Students must have a mental portfolio of plausible, alternative approaches. Without a mental portfolio of possible strategies, elementary students will tend to do one of the following:
• randomly add, subtract, multiply, or divide numbers – hoping they pick the right operation.
• tell you they are using the “guess and check” method. Their paper will full of random computational guesses. One of the guesses will be circled.
If you spend at least a few lessons each year explicitly working with and repeating strategies, students have a mental portfolio of approaches from which they can draw.
Give them a chance to explore one type of problem using the procedures I outlined in the first part of this series. Give them a similar problem the next day. Then give them a homework assignment
using that strategy. The fourth time, almost all students can independently use the strategy.
…but don’t name the strategies.
In the next entry, I’ll get on my proverbial soapbox about the “pick a strategy” step in the frequently-published ‘problem solving process.’ Rather than say, ‘This kind of problem is solved by [name
the strategy]‘, you still want students to construct a strategy or two that works.
When students find something that works and defend their strategy, then you want them to solidify the conceptual understanding in different (but similar) problems.
Who cares what they call it? If students are going to name things, ask them to connect vocabulary from other math strands to the patterns they are finding.
Collect the strategies in a journal, notebook, or portfolio.
Math journals help students keep a record of previous lessons. Some of the best journals I’ve seen are found at Runde’s Room.
During a lesson where a new strategy is introduced, expect the journal pages will be rather messy. The students will start well, including sketches and bullet points showing their understanding of
the scenario and the question. Then, to construct meaning, students need to try different things, collaborate with classmates, change approaches, then collaborate some more. They will cross things
out. They will erase until the page tears (although I discourage erasing). They will repair pages with scotch tape. They will use white-out strips. They will circle things and use arrows when
explaining work to others. This is a good thing.
The second day, students will get to an answer more directly because they can refer to the procedures that worked the day before and apply them in the new and different situation.
The homework that follows might be given as a worksheet. Rather than collecting the homework for grading, have students take the work out at the beginning of the lesson, compare their answers in
small groups, and come to consensus.
If changes need to be made, students can make the changes in a different color pencil. Then, students write a short “things to remember” note and paste the homework in their journal.
Wean students off of the repetition.
At some point, the types of problems should spiral more than repeat. Questions for students: Does this problem resemble any you’ve seen before? How does it relate to prior knowledge in other areas of
Some will benefit from looking back at the journal for ideas. Others will not need that step.
Remember that not all the students will need repetitions of strategies.
Once a student demonstrates the ability to independently use a strategy, there is no reason to give him or her more of the same. You will have students who catch on the first time and immediately apply the strategy to new situations. These students will be held back if you require them to do the same problems as the rest of the class.
Try giving these students a problem that is at least a grade level higher. Assuming the student has no trouble with that, move him/her on to a project.
What materials help teach problem solving strategies?
I’ve had the best luck with The Problem Solver - with reservations that I will explain in the fourth part of this series.
The real power of the Problem Solver comes when teachers can match Problem Solving strategies with the conceptual ideas of a mainstream curriculum math unit. Some examples:
1. Students need to find all the factors of numbers. They learn the Rules of Divisibility and continually ask themselves Is 1 a factor? Is 2 a factor? Is 3 a factor? This unit provides a great
opportunity for the strategy ‘Make an organized list’. In both situations, students need to think about and organize numbers in a more systematic way.
2. You are teaching a unit on fractions. Fractions combine well with the ‘Working backward’ strategy.
3. If students are learning to graph coordinates on a plane, they might also practice ‘Make a picture or diagram’.
Mathematical strands that tend to match Problem Solving strategies.
Think of problem solving as an umbrella that covers all the mathematical strands.
No hard and fast rules exist to match problem solving with other mathematical strands – only experience will help you make the matches. Also, problem-solving strategies overlap. Many students begin a
problem with a picture or diagram. Why stop there? An organized list might lead to a pattern that can be graphed. Celebrate when students find the overlaps!
If you’re new to a mathematics series or to problem solving instruction, here are some general guidelines:
Find a set of materials that explicitly teach problem solving strategies. Teach the strategies for a decent chunk of the year. Don’t rely on teaching strategies the whole year, but give students
enough background knowledge and confidence that they can approach a scenario with a few tried and true options.
Do not force repetition on all your students because some will not need it. An initial problem or a pre-assessment might be the same but the follow-up problems need not be.
Connect the strategies with other mathematical strands.
Have you found any research on the pros and cons of teaching strategies? What materials have you found that help teach problem solving? Please share!
If you like what you read, please subscribe to Expat Educator. You’ll have instant access to subsequent posts in this series. The next post: What the Problem Solving Process is Missing.
photo credit: Marquette La via photopin cc
5 thoughts on “Problem Solving Series #3: What Teaching Resources Should I Buy?”
1. For me, in technology, problem solving is life or death. I only see them 1x/wk so if they can’t solve their own tech problems, they don’t get solved. I’ve come up with quite a few approaches for
that, starting in Kindergarten, asking students to try to solve a problem before asking for help.
The kids are easy. My parents helpers run around doing for the munchkins, destroying my plan. I have to tell them to move away from the mouse!
2. 01 Jul 22, 2012 2:45 am Wow, I got an email from you to check out your videos. I thought it was some sort of scam at first, but, surprisingly, the video in the email link described something about time management, which I severely lack right now. After watching this video, I feel like I should cherish my time instead of dreading the boredom. I should add more gold to my hours instead of just passing through time in an empty train. Thanks!
Please add your thoughts, opinions, and questions. | {"url":"http://expateducator.com/2012/11/12/problem-solving-series-3-what-teaching-resources-should-i-buy/","timestamp":"2014-04-17T07:42:35Z","content_type":null,"content_length":"72179","record_id":"<urn:uuid:454310a0-978e-48d5-8318-dd76bf339028>","cc-path":"CC-MAIN-2014-15/segments/1397609526311.33/warc/CC-MAIN-20140416005206-00164-ip-10-147-4-33.ec2.internal.warc.gz"} |
The Harvard Crimson
Unlike smoothly-synchronized History 1 section men, the various teachers in Harvard's elementary mathematics courses can follow their least whim in determining the work to be done. Not only do some
sections accomplish much more than others, but they may even use different texts. Naturally, when the ill-assorted students from Math A reach Math 2, a good deal of duplication ensues. Sometimes
you've done the work before and other times the necessary preparatory material was omitted in your section. But not even Math 2 has achieved any sort of intra-course uniformity. Each of the three
sections is an independent variable--each, as in Math A, uses its own texts in its own way. All math concentrators meet finally in Math 5, which has but one section. However, due to the varied work
offered as a pre-requisite, this third stage in numerical evolution must retrace not only Math 2 but also Math A!
A few simple steps taken by the Department would clear up this whole mess. First, some one professor should be made responsible for re-organizing the elementary courses. Math A and Math 2 should
adopt more modern texts better organized for what Harvard intends to teach and for the way Harvard intends to teach it. Different sections could be required to do a minimum amount of identical work
in that text; and giving the same exam to the whole course would be pressure enough on the instructors to enforce the rule. An advanced section in Math 2 could take care of concentrators who required
or preferred a little thicker broth than those heading for physics or chemistry.
In concentrating on its advanced offerings, which do rank with the best in the country, the Harvard mathematicians have neglected their introductory courses. No one should know better than these
numerical wizards the importance of systematic development and logical structure. Yet the chaos of Math A; 2, and 5 approaches infinity. It's high time the local number-jumblers solved the practical
problem of organizing their Department. | {"url":"http://www.thecrimson.com/article/1941/1/17/department-of-utter-confusion-punlike-smoothly-synchronized/","timestamp":"2014-04-17T07:04:00Z","content_type":null,"content_length":"18337","record_id":"<urn:uuid:e8382e57-d715-473e-94ee-e6859449a936>","cc-path":"CC-MAIN-2014-15/segments/1397609526311.33/warc/CC-MAIN-20140416005206-00539-ip-10-147-4-33.ec2.internal.warc.gz"} |
f'(5x^4) of f(x)=x^5
October 8th 2011, 05:26 AM #1
Junior Member
Sep 2010
f'(5x^4) of f(x)=x^5
Hi, I was wondering how to do these types of derivatives... I have several to do, but I would like to see only one of them solved so that I can do the others. The course is about several-variable calculus.
The problem says:
"If f(x)=x^5, find
f'(5x^4), f(x^2), d/dx(f(x^2)), f'(x^2)"
Re: f'(5x^4) of f(x)=x^5
Re: f'(5x^4) of f(x)=x^5
This is just composition of functions. If I asked you to find f'(x), you'd probably have no problem doing that. Here's a different example, taking f(x) = x^2 + 2:
Find f'(x+3), f(x^5), d/dx(f(x^3)), f'(x^3)
First, let's calculate f'(x): here f'(x) = 2x.
Now, remember how to do composition of functions? For the first one we just plug x+3 into f', so f'(x+3) = 2(x+3) = 2x + 6.
The second function is even easier, we just compose f with x^5:
f(x^5)=(x^5)^2+2 = x^10+2
Now we're back to derivatives, and some tricky notation. The book is trying to trip you up into thinking that, since d/dx(f(x))=f'(x), then d/dx(f(x^3))=f'(x^3). This is not the case. Remember
the chain rule:
d/dx(f(g(x))) = f'(g(x))·g'(x)
So, with g(x) = x^3, g'(x) = 3x^2, and therefore d/dx(f(x^3)) = f'(x^3)·3x^2 = 2x^3·3x^2 = 6x^5.
Finally, the last one is just there to possibly alert those people who tripped up on the third one that they should reconsider their answer. It's just simple composition, like before: f'(x^3) = 2(x^3) = 2x^3.
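For completeness, here is a quick sanity check of the original question (f(x) = x^5) using a computer algebra system — an added illustration, assuming you have Python with SymPy installed:

```python
import sympy as sp

x = sp.symbols('x')
f = x**5
fprime = sp.diff(f, x)               # f'(x) = 5*x**4

print(fprime.subs(x, 5*x**4))        # f'(5x^4)    -> 3125*x**16
print(f.subs(x, x**2))               # f(x^2)      -> x**10
print(sp.diff(f.subs(x, x**2), x))   # d/dx f(x^2) -> 10*x**9  (chain rule)
print(fprime.subs(x, x**2))          # f'(x^2)     -> 5*x**8   (not the same as the line above!)
```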
Common Methodology Mistakes in Educational Research, Revisited,
Along with a Primer on both Effect Sizes and the Bootstrap
Bruce Thompson
Texas A&M University 77843-4225
Baylor College of Medicine
Correct APA citation style:
Thompson, B. (1999, April). Common methodology mistakes in educational research, revisited, along with a primer on both effect sizes and the bootstrap. Invited address presented at the annual
meeting of the American Educational Research Association, Montreal. (ERIC Document Reproduction Service No. ED forthcoming)
Invited address presented at the annual meeting of the American Educational Research Association (session #44.25), Montreal, April 22, 1999. Justin Levitov first introduced me to the bootstrap, for
which I remain most grateful. I also appreciate the thoughtful comments of Cliff Lunneborg and Russell Thompson on a previous draft of this paper. The author and related reprints may be accessed
through Internet URL: "index.htm".
The present AERA invited address was solicited to address the theme for the 1999 annual meeting, "On the Threshold of the Millennium: Challenges and Opportunities." The paper represents an extension
of my 1998 invited address, and cites two additional common methodology faux pas to complement those enumerated in the previous address. The remainder of these remarks are forward-looking. The paper
then considers (a) the proper role of statistical significance tests in contemporary behavioral research, (b) the utility of the descriptive bootstrap, especially as regards the use of "modern"
statistics, and (c) the various types of effect sizes from which researchers should be expected to select in characterizing quantitative results. The paper concludes with an exploration of the
conditions necessary and sufficient for the realization of improved practices in educational research.
In 1993, Carl Kaestle, prior to his term as President of the National Academy of Education, published in the Educational Researcher an article titled, "The Awful Reputation of Education Research." It
is noteworthy that the article took as a given the conclusion that educational research suffers an awful reputation, and rather than justifying this conclusion, Kaestle focused instead on exploring
the etiology of this reality. For example, Kaestle (1993) noted that the education R&D community is seemingly in perpetual disarray, and that there is a
...lack of consensus--lack of consensus on goals, lack of consensus on research results, and lack of a united front on funding priorities and procedures.... [T]he lack of consensus on
goals is more than political; it is the result of a weak field that cannot make tough decisions to do some things and not others, so it does a little of everything... (p. 29)
Although Kaestle (1993) did not find it necessary to provide a warrant for his conclusion that educational research has an awful reputation, others have directly addressed this concern.
The National Academy of Science evaluated educational research generically, and found "methodologically weak research, trivial studies, an infatuation with jargon, and a tendency toward fads with a
consequent fragmentation of effort" (Atkinson & Jackson, 1992, p. 20). Others also have argued that "too much of what we see in print is seriously flawed" as regards research methods, and that "much
of the work in print ought not to be there" (Tuckman, 1990, p. 22). Gall, Borg and Gall (1996) concurred, noting that "the quality of published studies in education and related disciplines is,
unfortunately, not high" (p. 151).
Indeed, empirical studies of published research involving methodology experts as judges corroborate these impressions. For example, Hall, Ward and Comer (1988) and Ward, Hall and Schramm (1975) found
that over 40% and over 60%, respectively, of published research was seriously or completely flawed. Wandt (1967) and Vockell and Asher (1974) reported similar results from their empirical studies of
the quality of published research. Dissertations, too, have been examined, and have been found methodologically wanting (cf. Thompson, 1988a, 1994a).
Researchers have also questioned the ecological validity of both quantitative and qualitative educational studies. For example, Elliot Eisner studied two volumes of the flagship journal of the
American Educational Research Association, the American Educational Research Journal (AERJ). He reported that,
The median experimental treatment time for seven of the 15 experimental studies that reported experimental treatment time in Volume 18 of the AERJ is 1 hour and 15 minutes. I suppose that
we should take some comfort in the fact that this represents a 66 percent increase over a 3-year period. In 1978 the median experimental treatment time per subject was 45 minutes.
(Eisner, 1983, p. 14)
Similarly, Fetterman (1982) studied major qualitative projects, and reported that, "In one study, labeled 'An ethnographic study of..., observers were on site at only one point in time for five days.
In a[nother] national study purporting to be ethnographic, once-a-week, on-site observations were made for 4 months" (p. 17)
None of this is to deny that educational research, whatever its methodological and other limits, has influenced and informed educational practice (cf. Gage, 1985; Travers, 1983). Even a
methodologically flawed study may still contribute something to our understanding of educational phenomena. As Glass (1979) noted, "Our research literature in education is not of the highest quality,
but I suspect that it is good enough on most topics" (p. 12).
However, as I pointed out in a 1998 AERA invited address, the problem with methodologically flawed educational studies is that these flaws are entirely gratuitous. I argued that
incorrect analyses arise from doctoral methodology instruction that teaches research methods as series of rotely-followed routines, as against thoughtful elements of a reflective
enterprise; from doctoral curricula that seemingly have less and less room for quantitative statistics and measurement content, even while our knowledge base in these areas is burgeoning
(Aiken, West, Sechrest, Reno, with Roediger, Scarr, Kazdin & Sherman, 1990; Pedhazur & Schmelkin, 1991, pp. 2-3); and, in some cases, from an unfortunate atavistic impulse to somehow
escape responsibility for analytic decisions by justifying choices, sans rationale, solely on the basis that the choices are common or traditional. (Thompson, 1998a, p. 4)
Such concerns have certainly been voiced by others. For example, following the 1998 annual AERA meeting, one conference attendee wrote AERA President Alan Schoenfeld to complain that
At [the 1998 annual meeting] we had a hard time finding rigorous research that reported actual conclusions. Perhaps we should rename the association the American Educational Discussion
Association.... This is a serious problem. By encouraging anything that passes for inquiry to be a valid way of discovering answers to complex questions, we support a culture of intuition
and artistry rather than building reliable research bases and robust theories. Incidentally, theory was even harder to find than good research. (Anonymous, 1998, p. 41)
Subsequently, Schoenfeld appointed a new AERA committee, the Research Advisory Committee, which currently is chaired by Edmund Gordon. The current members of the Committee are: Ann Brown, Gary
Fenstermacher, Eugene Garcia, Robert Glaser, James Greeno, Margaret LeCompte, Richard Shavelson, Vanessa Siddle Walker, and Alan Schoenfeld, ex officio, Lorrie Shepard, ex officio, and William
Russell, ex officio. The Committee is charged to strengthen the research-related capacity of AERA and its members, coordinate its activities with appropriate AERA programs, and be entrepreneurial in
nature. [In some respects, the AERA Research Advisory Committee has a mission similar to that of the APA Task Force on Statistical Inference, which was appointed in 1996 (Azar, 1997; Shea, 1996).]
AERA President Alan Schoenfeld also appointed Geoffrey Saxe the 1999 annual meeting program chair. Together, they then described the theme for the AERA annual meeting in Montreal:
As we thought about possible themes for the upcoming annual meeting, we were pressed by a sense of timeliness and urgency. With regard to timeliness, ...the calendar year for the next
annual meeting is 1999, the year that heralds the new millennium.... It's a propitious time to think about what we know, what we need to know, and where we should be heading. Thus, our
overarching theme [for the 1999 annual meeting] is "On the Threshold of the Millennium: Challenges and Opportunities."
There is also a sense of urgency. Like many others, we see the field of education at a point of critical choices--in some arenas, one might say crises. (Saxe & Schoenfeld, 1998, p. 41)
The present paper was among those invited by various divisions to address this theme, and is an extension of my 1998 AERA address (Thompson, 1998a).
Purpose of the Present Paper
In my 1998 AERA invited address I advocated the improvement of educational research via the eradication of five identified faux pas:
(1) the use of stepwise methods;
(2) the failure to consider in result interpretation the context specificity of analytic weights (e.g., regression beta weights, factor pattern coefficients, discriminant function
coefficients, canonical function coefficients) that are part of all parametric quantitative analyses;
(3) the failure to interpret both weights and structure coefficients as part of result interpretation;
(4) the failure to recognize that reliability is a characteristic of scores, and not of tests; and
(5) the incorrect interpretation of statistical significance and the related failure to report and interpret the effect sizes present in all quantitative analyses.
Two Additional Methodology Faux Pas
The present didactic essay elaborates two additional common methodology errors to delineate a constellation of seven cardinal sins of analytic research practice:
(6) the use of univariate analyses in the presence of multiple outcomes variables, and the converse use of univariate analyses in post hoc explorations of detected multivariate effects; and
(7) the conversion of intervally-scaled predictor variables into nominally-scaled data in service of OVA (i.e., ANOVA, ANCOVA, MANOVA, MANCOVA) analyses.
However, the present paper is more than a further elaboration of bad behaviors. Here the discussion of these two errors focuses on driving home two important realizations that should undergird best
methodological practice:
1. All statistical analyses of scores on measured/observed variables actually focus on correlational analyses of scores on synthetic/latent variables derived by applying weights to the
observed variables; and
2. The researcher's fundamental task in deriving defensible results is to employ an analytic model that matches the researcher's (too often implicit) model of reality.
These two realization will provide a conceptual foundation for the treatment in the remainder of the paper.
Focus on the Future: Improving Educational Research
Although the focus on common methodological faux pas has some merit, in keeping with the theme of this 1999 annual meeting of AERA, the present invited address then turns toward the constructive
portrayal of a brighter research future. Three issues are addressed. First, the proper role of statistical significance testing in future practice is explored. Second, the use of so-called "internal
replicability" analyses in the form of the bootstrap is described. As part of this discussion some "modern" statistics are briefly discussed. Third, the computation and interpretation of effects
sizes are described.
Other methods faux pas and other methods improvements might both have been elaborated. However, the proposed changes would result in considerable improvement in future educational research. In my
view, (a) informed use of statistical tests, (b) the more frequent use of external and internal replicability analyses, and especially (c) required reporting and interpretation of effect sizes in all
quantitative research are both necessary and sufficient conditions for realizing improvements.
Essentials for Realizing Improvements
The essay ends by considering how fields move and what must be done to realize these potential improvements. In my view, AERA must exercise visible and coherent academic leadership if change is to
occur. To date, such leadership has not often been within the organization's traditions.
Faux Pas #6: Univariate as Against Multivariate Analyses
Too often, educational researchers invoke a series of univariate analyses (e.g., ANOVA, regression) to analyze multiple dependent variable scores from a single sample of participants. Conversely, too
often researchers who correctly select a multivariate analysis invoke univariate analyses post hoc in their investigation of the origins of multivariate effects. Here it will be demonstrated once
again, using heuristic data to make the discussion completely concrete, that in both cases these choices may lead to serious interpretation errors.
The fundamental conceptual emphasis of this discussion, as previously noted, is on making the point that:
1. All statistical analyses of scores on measured/observed variables actually focus on correlational analyses of scores on synthetic/latent variables derived by applying weights to the
observed variables.
Two small heuristic data sets are employed to illustrate the relevant dynamics, respectively, for the univariate (i.e., single dependent/outcome variable) and multivariate (i.e., multiple outcome
variables) cases.
Univariate Case
Table 1 presents a heuristic data set involving scores on three measured/observed variables: Y, X1, and X2. These variables are called "measured" (or "observed") because they are directly measured,
without any application of additive or multiplicative weights, via rulers, scales, or psychometric tools.
INSERT TABLE 1 ABOUT HERE.
However, ALL parametric analyses apply weights to the measured/observed variables to estimate scores for each person on synthetic or latent variables. This is true notwithstanding the fact that for
some statistical analyses (e.g., ANOVA) the weights are not printed by some statistical packages. As I have noted elsewhere, the weights in different analyses
...are all analogous, but are given different names in different analyses (e.g., beta weights in regression, pattern coefficients in factor analysis, discriminant function coefficients in
discriminant analysis, and canonical function coefficients in canonical correlation analysis), mainly to obfuscate the commonalities of [all] parametric methods, and to confuse graduate
students. (Thompson, 1992a, pp. 906-907)
The synthetic variables derived by applying weights to the measured variables then become the focus of the statistical analyses.
The fact that all analyses are part of one single General Linear Model (GLM) family is a fundamental foundational understanding essential (in my view) to the informed selection of analytic methods.
The seminal readings have been provided by Cohen (1968) viz. the univariate case, by Knapp (1978) viz. the multivariate case, and by Bagozzi, Fornell and Larcker (1981) regarding the most general
case of the GLM: structural equation modeling. Related heuristic demonstrations of General Linear Model dynamics have been offered by Fan (1996, 1997) and Thompson (1984, 1991, 1998a, in press-a).
In the multiple regression case, a given i[th] person's score on the measured/observed variable Y[i] is estimated as the synthetic/latent variable Y^[i]. The predicted outcome score for a given
person equals Y^[i] = a + b[1](X1[i]) + b[2](X2[i]), which for these data, as reported in Figure 1, equals -581.735382 + [1.301899 x X1[i]] + [0.862072 x X2[i]]. For example, for person 1, Y^[1] = -581.735382 + [1.301899 x 392] + [0.862072 x 573] = 422.58.
INSERT FIGURE 1 ABOUT HERE.
Some Noteworthy Revelations. The "ordinary least squares" (OLS) estimation used in classical regression analysis optimizes the fit in the sample of each Y^[i] to each Y[i] score. Consequently, as
noted by Thompson (1992b), even if all the predictors are useless, the means of Y^ and Y will always be equal (here 500.25), and the mean of the e scores (e[i] = Y[i] - Y^[i]) will always be zero.
These expectations are confirmed in the Table 1 results.
It is also worth noting that the sum of squares (i.e., the sum of the squared deviations of each person's score from the mean) of the Y^ scores (i.e., 167,218.50) computed in Table 1 matches the
"regression" sum of squares (variously synonymously called "explained," "model," "between," so as to confuse the graduate students) reported in the Figure 1 SPSS output. Furthermore, the sum of
squares of the e scores reported in Table 1 (i.e., 32,821.26) exactly matches the "residual" sum of squares (variously called "error," "unexplained," and "residual") value reported in the Figure 1
SPSS output.
It is especially noteworthy that the sum of squares explained (i.e., 167,218.50) divided the sum of squares of the Y scores (i.e., the sum of squares "total" = 167,218.50 + 32,821.26 = 200,039.75)
tells us the proportion of the variance in the Y scores that we can predict given knowledge of the X1 and the X2 scores. For these data the proportion is 167,218.50 / 200,039.75 = .83593. This
formula is one of several formulas with which to compute the uncorrected regression effect size, the multiple R^2.
Indeed, for the univariate case, because ALL analyses are correlational, an r^2 analog of this effect size can always be computed, using this formula across analyses. However, in ANOVA, for example,
when we compute this effect size using this generic formula, we call the result eta^2 (η^2; or synonymously the correlation ratio [not the correlation coefficient!]), primarily to confuse the
graduate students.
Even More Important Revelations. Figure 2 presents the correlation coefficients involving all possible pairs of the five (three measured, two synthetic) variables. Several additional revelations
become obvious.
INSERT FIGURE 2 ABOUT HERE.
First, note that the Y^ scores and the e scores are perfectly uncorrelated. This will ALWAYS be the case, by definition, since the Y^ scores are the aspects of the Y scores that the predictors can
explain or predict, and the e scores are the aspects of the Y scores that the predictors cannot explain or predict (i.e., because e[i] is defined as Y[i] - Y^[i], therefore r[YHAT x e] = 0).
Similarly, the measured predictor variables (here X1 and X2) always have correlations of zero with the e scores, again because the e scores by definition are the parts of the Y scores that the
predictors cannot explain.
Second, note that the r[Y x YHAT] reported in Figure 2 (i.e., .9143) matches the multiple R reported in Figure 1 (i.e., .91429), except for the arbitrary decision by different computer programs to
present these statistics to different numbers of decimal places. The equality makes sense conceptually, if we think of the Y^ scores as being the part of the predictors useful in predicting/
explaining the Y scores, discarding all the parts of the measured predictors that are not useful (about which we are completely uninterested, because the focus of the analysis is solely on the
outcome variable).
This last revelation is extremely important to a conceptual understanding of statistical analyses. The fact that R[Y with X1, X2] = r[Y x YHAT] means that the synthetic variable, Y^, is actually the
focus of the analysis. Indeed, synthetic variables are ALWAYS the real focus of statistical analyses!
This makes sense, when we realize that our measures are only indicators of our psychological constructs, and that what we really care about in educational research are not the observed scores on our
measurement tools per se, but instead is the underlying construct. For example, if I wish to improve the self-concepts of third-grade elementary students, what I really care about is improving their
unobservable self-concepts, and not the scores on an imperfect measure of this construct, which I only use as a vehicle to estimate the latent construct of interest, because the construct cannot be
directly observed.
Third, the correlations of the measured predictor variables with the synthetic variable (i.e., .7512 and -.0741) are called "structure" coefficients. These can also be derived by computation (cf.
Thompson & Borrello, 1985) as r[S] = r[Y with X] / R (e.g., .6868 / .91429 = .7512). [Due to a strategic error on the part of methodology professors, who convene annually in a secret coven to
generate more statistical terminology with which to confuse the graduate students, for some reason the mathematically analogous structure coefficients across all analyses are uniformly called by the
same name--an oversight that will doubtless soon be corrected.]
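For readers who prefer to see the algebra at work, the sketch below (again with arbitrary simulated scores, not the Figure 2 values) verifies that the structure coefficient for each predictor, i.e., the correlation of that predictor with Y^, equals r[Y with X] / R:

import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(50, 2))
Y = X[:, 0] - 0.5 * X[:, 1] + rng.normal(size=50)

A = np.column_stack([np.ones(50), X])
Y_hat = A @ np.linalg.lstsq(A, Y, rcond=None)[0]

R = np.corrcoef(Y, Y_hat)[0, 1]                      # multiple R = r between Y and Y^
for j in range(X.shape[1]):
    r_yx = np.corrcoef(Y, X[:, j])[0, 1]             # bivariate r of Y with this predictor
    r_s = np.corrcoef(Y_hat, X[:, j])[0, 1]          # structure coefficient: r of predictor with Y^
    print(np.isclose(r_s, r_yx / R))                 # True: r[S] = r[Y with X] / R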
The reason structure coefficients are called "structure" coefficients is that these coefficients provide insight regarding what is the nature or the structure of the underlying synthetic variables of
the actual research focus. Although space precludes further detail here, I regard the interpretation of structure coefficients as being essential in most research applications (Thompson, 1997b,
1998a; Thompson & Borrello, 1985). Some educational researchers erroneously believe that these coefficients are unimportant insofar as they are not reported for all analyses by some computer
packages; these researchers incorrectly believe that SPSS and other computer packages were written in a sole authorship venture by a benevolent God who has elected judiciously to report on printouts
(a) all results of interest and (b) only the results of genuine interest.
The Critical, Essential Revelation. Figure 2 also provides the basis for delineating a paradox which, once resolved, leads to a fundamentally important insight regarding statistical analyses. Notice
for these data the r^2 between Y and X1 is .6868^2 = 47.17% and the r^2 between Y and X2 is -.0677^2 = 0.46%. The sum of these two values is .4763.
Yet, as reported in Figures 1 and 2, the R^2 value for these data is .91429^2 = 83.593%, a value approaching the mathematical upper limit for R^2. How can the multiple R^2 value (83.593%) be not only
larger, but nearly twice as large as the sum of the r^2 values of the two predictor variables with Y?
These data illustrate a "suppressor" effect. These effects were first noted in World War II when psychologists used paper-and-pencil measures of spatial and mechanical ability to predict ability to
pilot planes. Counterintuitively, it was discovered that verbal ability, which is essentially unrelated with pilot ability, nevertheless substantially improved the R^2 when used as a predictor in
conjunction with spatial and mechanical ability scores. As Horst (1966, p. 355) explained, "To include the verbal score with a negative weight served to suppress or subtract irrelevant [measurement
artifact] ability [in the spatial and mechanical ability scores], and to discount the scores of those who did well on the test simply because of their verbal ability rather than because of abilities
required for success in pilot training."
Thus, suppressor effects are desirable, notwithstanding what some may deem a pejorative name, because suppressor effects actually increase effect sizes. Henard (1998) and Lancaster (in press) provide
readable elaborations. All this discussion leads to the extremely important point that
The latent or synthetic variables analyzed in all parametric methods are always more than the sum of their constituent parts. If we only look at observed variables, such as by only
examining a series of bivariate r's, we can easily under or overestimate the actual effects that are embedded within our data. We must use analytic methods that honor the complexities of
the reality that we purportedly wish to study--a reality in which variables can interact in all sorts of complex and counterintuitive ways. (Thompson, 1992b, pp. 13-14, emphasis in original)
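A suppressor effect can also be simulated directly. In the sketch below (hypothetical variables constructed for illustration, not the World War II data), X1 measures the trait the criterion requires plus an irrelevant component, X2 measures essentially only the irrelevant component, and Y is the criterion; X2 alone is nearly useless, yet including it substantially raises R^2:

import numpy as np

rng = np.random.default_rng(2)
n = 1000
trait = rng.normal(size=n)                       # ability actually required by the criterion
nuisance = rng.normal(size=n)                    # irrelevant ("verbal") component shared with X1

Y = trait + 0.3 * rng.normal(size=n)             # criterion
X1 = trait + nuisance                            # predictor contaminated by the nuisance
X2 = nuisance + 0.3 * rng.normal(size=n)         # near-pure measure of the nuisance

def r2(predictors, y):
    A = np.column_stack([np.ones(len(y))] + list(predictors))
    y_hat = A @ np.linalg.lstsq(A, y, rcond=None)[0]
    return np.sum((y_hat - y.mean()) ** 2) / np.sum((y - y.mean()) ** 2)

print(np.corrcoef(Y, X2)[0, 1] ** 2)             # tiny r^2 for the suppressor alone
print(r2([X1], Y))                               # moderate r^2 for X1 alone
print(r2([X1, X2], Y))                           # much larger R^2 once the suppressor is added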
Multivariate Case
Table 2 presents heuristic data for 10 people in each of two groups on two measured/observed outcome/response variables, X and Y. These data are somewhat similar to those reported by Fish (1988), who
argued that multivariate analyses are usually vital. The Table 2 data are used here to illustrate that (a) when you have more than one outcome variable, multivariate analyses may be essential, and
(b) when you do a multivariate analysis, you must not use a univariate method post hoc to explore the detected multivariate effects.
INSERT TABLE 2 ABOUT HERE.
For these heuristic data, the outcome scores of X and Y have exactly the same variance in both groups 1 and 2, as reported in the bottom of Table 2. This exactly equal SD (and variance and sum of
squares) means that the ANOVA "homogeneity of variance" assumption (called this because this characterization sounds fancier than simply saying "the outcome variable scores were equally 'spread out'
in all groups") was perfectly met, and therefore the calculated ANOVA F test results are exactly accurate for these data. Furthermore, the analogous multivariate "homogeneity of dispersion matrices"
assumption (meaning simply that the variance/covariance matrices in the two groups were equal) was also perfectly met, and therefore the MANOVA F tests are exactly accurate as well. In short, the
demonstrations here are not contaminated by the failure to meet statistical assumptions!
Figure 3 presents ANOVA results for separate analyses of the X and Y scores presented in Table 2. For both X and Y, the two means do not differ to a statistically significant degree. In fact, for
both variables the p[CALCULATED] values were .774. Furthermore, the eta^2 effect sizes were both computed to be 0.469% (e.g., 5.0 / [5.0 + 1061.0] = 5.0 / 1066.0 = .00469). Thus, the two sets of
ANOVA results are not statistically significant and they both involve extremely small effect sizes.
INSERT FIGURE 3 ABOUT HERE.
However, as also reported in the Figure 3 results, a MANOVA/Descriptive Discriminant Analysis (DDA; for a one-way MANOVA, MANOVA and DDA yield the same results, but the DDA provides more detailed
analysis--see Huberty, 1994; Huberty & Barton, 1989; Thompson, 1995b) of the same data yields a p[CALCULATED] value of .000239, and an eta^2 of 62.5%. Clearly, the resulting interpretation of the
same data would be night-and-day different for these two sets of analyses. Again, the synthetic variables in some senses can become more than the sum of their parts, as was also the case in the
previous heuristic demonstration.
Table 2 reports these latent variable scores for the 20 participants, derived by applying the weights (-1.225 and 1.225) reported in Figure 3 to the two measured outcome variables. For heuristic
purposes only, the scores on the synthetic variable labelled "DSCORE" were then subjected to the ANOVA reported in Figure 4. As reported in Figure 4, this analysis of the multivariate synthetic
variable, a weighted aggregation of the outcome variables X and Y, yields the same eta^2 effect size (i.e., 62.5%) reported in Figure 3 for the DDA/MANOVA results. Again, all statistical analyses
actually focus on the synthetic/latent variables actually derived in the analyses, quod erat demonstrandum.
INSERT FIGURE 4 ABOUT HERE.
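The same point can be made with simulated data. The sketch below (hypothetical scores, not the Table 2 data; it assumes scikit-learn is available) computes eta^2 for each outcome separately, and then computes eta^2 for the discriminant scores, the weighted composite that a one-way DDA/MANOVA actually analyzes. With highly correlated outcomes and small opposing mean differences, the composite effect can dwarf the univariate effects:

import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(3)
n = 10
cov = [[10, 9], [9, 10]]                                   # the two outcomes are highly correlated
g1 = rng.multivariate_normal([50, 50], cov, size=n)
g2 = rng.multivariate_normal([49, 51], cov, size=n)        # small, opposing mean differences
X = np.vstack([g1, g2])
group = np.repeat([0, 1], n)

def eta2(scores, group):
    ss_tot = np.sum((scores - scores.mean()) ** 2)
    ss_btw = sum(len(scores[group == g]) * (scores[group == g].mean() - scores.mean()) ** 2
                 for g in np.unique(group))
    return ss_btw / ss_tot

print(eta2(X[:, 0], group), eta2(X[:, 1], group))          # small univariate eta^2 values
d = LinearDiscriminantAnalysis().fit(X, group).transform(X).ravel()
print(eta2(d, group))                                      # much larger eta^2 for the synthetic variable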
The present heuristic example can be framed in either of two ways, both of which highlight common errors in contemporary analytic practice. The first error involves conducting multiple univariate
analyses to evaluate multivariate data; the second error involves using univariate analyses (e.g., ANOVAs) in post hoc analyses of detected multivariate effects.
Using Several Univariate Analyses to Analyze Multivariate Data. The present example might be framed as an illustration of a researcher conducting only two ANOVAs to analyze the two sets of dependent
variable scores. The researcher here would find neither a statistically significant effect (both p[CALCULATED] values = .774) nor (probably, depending upon the context of the study and researcher personal values)
a noteworthy effect (both eta^2 values = 0.469%). This researcher would remain oblivious to the statistically significant effect (p[CALCULATED] = .000239) and huge (as regards typicality; see
Cohen, 1988) effect size (multivariate eta^2 = 62.5%).
One potentially noteworthy argument in favor of employing multivariate methods with data involving more than one outcome variable involves the inflation of "experimentwise" Type I error rates (α[EW]; i.e., the probability of making one or more Type I errors in a set of hypothesis tests--see Thompson, 1994d). At the extreme, when the outcome variables or the hypotheses (as in a balanced
ANOVA design) are perfectly uncorrelated, α[EW] is a function of the "testwise" alpha level (α[TW]) and the number of outcome variables or hypotheses tested (k), and equals
α[EW] = 1 - (1 - α[TW])^k.
Because this function is exponential, experimentwise error rates can inflate quite rapidly! [Imagine my consternation when I detected a local dissertation invoking more than 1,000 univariate
statistical significance tests (Thompson, 1994a).]
One way to control the inflation of experimentwise error is to use a "Bonferroni correction" which adjusts the α[TW] downward so as to limit the final α[EW]. Of course, one consequence of this
strategy is lessened statistical power against Type II error. However, the primary argument against using a series of univariate analyses to evaluate data involving multiple outcome variables does
not invoke statistical significance testing concepts.
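To make the two preceding points concrete, here is a minimal numerical sketch of the error-rate inflation and of the Bonferroni adjustment (the .05 values are merely conventional choices):

alpha_tw = 0.05                                   # testwise alpha for each separate test
for k in (1, 5, 10, 20, 100):                     # number of independent tests
    alpha_ew = 1 - (1 - alpha_tw) ** k            # experimentwise Type I error rate
    bonferroni_tw = 0.05 / k                      # testwise alpha holding alpha[EW] at or below .05
    print(k, round(alpha_ew, 3), bonferroni_tw)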
Multivariate methods are often vital in behavioral research simply because multivariate methods best honor the reality to which the researcher is purportedly trying to generalize. Implicit within
every analysis is an analytic model. Each researcher also has a presumptive model of what reality is believed to be like. It is critical that our analytic models and our models of reality match,
otherwise our conclusions will be invalid. It is generally best to consciously reflect on the fit of these two models whenever we do research. Of course, researchers with different models of reality
may make different analytic choices, but this is not disturbing because analytic choices are philosophically driven anyway (Cliff, 1987, p. 349).
My personal model of reality is one "in which the researcher cares about multiple outcomes, in which most outcomes have multiple causes, and in which most causes have multiple effects" (Thompson,
1986b, p. 9). Given such a model of reality, it is critical that the full network of all possible relationships be considered simultaneously within the analysis. Otherwise, the Figure 3 multivariate
effects, presumptively real given my model of reality, would go undetected. Thus, Tatsuoka's (1973b) previous remarks remain telling:
The often-heard argument, "I'm more interested in seeing how each variable, in its own right, affects the outcome" overlooks the fact that any variable taken in isolation may affect the
criterion differently from the way it will act in the company of other variables. It also overlooks the fact that multivariate analysis--precisely by considering all the variables
simultaneously--can throw light on how each one contributes to the relation. (p. 273)
For these various reasons empirical studies (Emmons, Stallings & Layne, 1990) show that, "In the last 20 years, the use of multivariate statistics has become commonplace" (Grimm & Yarnold, 1995, p.
Using Univariate Analyses post hoc to Investigate Detected Multivariate Effects. In ANOVA and ANCOVA, post hoc (also called "a posteriori," "unplanned," and "unfocused") contrasts (also called
"comparisons") are necessary to explore the origins of detected omnibus effects iff ("if and only if") (a) an omnibus effect is statistically significant (but see Barnette & McLean, 1998) and (b) the
way (also called an OVA "factor", but this alternative name tends to become confused with a factor analysis "factor") has more than two levels.
However, in MANOVA and MANCOVA post hoc tests are necessary to evaluate (a) which groups differ (b) as regards which one or more outcome variables. Even in a two-level way (or "factor"), if the
effect is statistically significant, further analyses are necessary to determine on which one or more outcome/response variables the two groups differ. An alarming number of researchers employ ANOVA
as a post hoc analysis to explore detected MANOVA effects (Thompson, 1999b).
Unfortunately, as the previous example made clear, because the two post hoc ANOVAs would fail to explain where the incredibly large and statistically significant MANOVA effect originated, ANOVA is
not a suitable MANOVA post hoc analysis. As Borgen and Seling (1978) argued, "When data truly are multivariate, as implied by the application of MANOVA, a multivariate follow-up technique seems
necessary to 'discover' the complexity of the data" (p. 696). It is simply illogical to first declare interest in a multivariate omnibus system of variables, and to then explore detected effects in
this multivariate world by conducting non-multivariate tests!
Faux Pas #7: Discarding Variance in Intervally-Scaled Variables
Historically, OVA methods (i.e., ANOVA, ANCOVA, MANOVA, MANCOVA) dominated the social scientist's analytic landscape (Edgington, 1964, 1974). However, more recently the proportion of uses of OVA
methods has declined (cf. Elmore & Woehlke, 1988; Goodwin & Goodwin, 1985; Willson, 1980). Planned contrasts (Thompson, 1985, 1986a, 1994c) have been increasingly favored over omnibus tests. And
regression and related techniques within the GLM family have been increasingly employed.
Improved analytic choices have partially been a function of growing researcher awareness that:
1. All parametric analyses are part of one single General Linear Model (GLM) family, and
2. The researcher's fundamental task in deriving defensible results is to employ an analytic model that matches the researcher's (too often implicit) model of reality.
This growing awareness can largely be traced to a seminal article written by Jacob Cohen (1968, p. 426).
Cohen (1968) noted that ANOVA and ANCOVA are special cases of multiple regression analysis, and argued that in this realization "lie possibilities for more relevant and therefore more powerful
exploitation of research data." Since that time researchers have increasingly recognized that conventional multiple regression analysis of data as they were initially collected (no conversion of
intervally scaled independent variables into dichotomies or trichotomies) does not discard information or distort reality, and that the "general linear model"
...can be used equally well in experimental or non-experimental research. It can handle continuous and categorical variables. It can handle two, three, four, or more independent
variables... Finally, as we will abundantly show, multiple regression analysis can do anything the analysis of variance does--sums of squares, mean squares, F ratios--and more. (Kerlinger
& Pedhazur, 1973, p. 3)
Discarding variance is generally not good research practice. As Kerlinger (1986) explained,
...partitioning a continuous variable into a dichotomy or trichotomy throws information away... To reduce a set of values with a relatively wide range to a dichotomy is to reduce its
variance and thus its possible correlation with other variables. A good rule of research data analysis, therefore, is: Do not reduce continuous variables to partitioned variables
(dichotomies, trichotomies, etc.) unless compelled to do so by circumstances or the nature of the data (seriously skewed, bimodal, etc.). (p. 558, emphasis in original)
Kerlinger (1986, p. 558) noted that variance is the "stuff" on which all analysis is based. Discarding variance by categorizing intervally-scaled variables amounts to the "squandering of information"
(Cohen, 1968, p. 441). As Pedhazur (1982, pp. 452-453) emphasized,
Categorization of attribute variables is all too frequently resorted to in the social sciences.... It is possible that some of the conflicting evidence in the research literature of a
given area may be attributed to the practice of categorization of continuous variables.... Categorization leads to a loss of information, and consequently to a less sensitive analysis.
Some researchers may be prone to categorizing continuous variables and overuse of ANOVA because they unconsciously and erroneously associate ANOVA with the power of experimental designs. As I have
noted previously,
Even most experimental studies invoke intervally scaled "aptitude" variables (e.g., IQ scores in a study with academic achievement as a dependent variable), to conduct the
aptitude-treatment interaction (ATI) analyses recommended so persuasively by Cronbach (1957, 1975) in his 1957 APA Presidential address. (Thompson, 1993a, pp. 7-8)
Thus, many researchers employ interval predictor variables, even in experimental designs, but these same researchers too often convert their interval predictor variables to nominal scale merely to
conduct OVA analyses.
It is true that experimental designs allow causal inferences and that ANOVA is appropriate for many experimental designs. However, it is not therefore true that doing an ANOVA makes the design
experimental and thus allows causal inferences.
Humphreys (1978, p. 873, emphasis added) noted that:
The basic fact is that a measure of individual differences is not an independent variable [in an experimental design], and it does not become one by categorizing the scores and treating
the categories as if they defined a variable under experimental control in a factorially designed analysis of variance.
Similarly, Humphreys and Fleishman (1974, p. 468) noted that categorizing variables in a nonexperimental design using an ANOVA analysis "not infrequently produces in both the investigator and his
audience the illusion that he has experimental control over the independent variable. Nothing could be more wrong." Because within the general linear model all analyses are correlational, and it is
the design and not the analysis that yields the capacity to make causal inferences, the practice of converting intervally-scaled predictor variables to nominal scale so that ANOVA and other OVAs
(i.e., ANCOVA, MANOVA, MANCOVA) can be conducted is inexcusable, at least in most cases.
As Cliff (1987, p. 130, emphasis added) noted, the practice of discarding variance on intervally-scaled predictor variables to perform OVA analyses creates problems in almost all cases:
Such divisions are not infallible; think of the persons near the borders. Some who should be highs are actually classified as lows, and vice versa. In addition, the "barely highs" are
classified the same as the "very highs," even though they are different. Therefore, reducing a reliable variable to a dichotomy [or a trichotomy] makes the variable more unreliable, not less.
In such cases, it is the reliability of the dichotomy, and not the reliability of the highly reliable, intervally-scaled data that we originally collected, that impacts the analysis we are actually conducting.
Heuristic Examples for Three Possible Cases
When we convert an intervally-scaled independent variable into a nominally-scaled way in service of performing an OVA analysis, we are implicitly invoking a model of reality with two strict assumptions:
1. all the participants assigned to a given level of the way (or "factor") are the same, and
2. all the participants assigned to different levels of the way are different.
For example, if we have a normal distribution of IQ scores, and we use scores of 90 and 110 to trichotomize our interval data, we are saying that:
1. the 2 people in the High IQ group with IQs of 111 and 145 are the same, and
2. the 2 people in the Low and Middle IQ groups with IQs of 89 and 91, respectively, are different.
Whether our decision to convert our intervally-scaled data to nominal scale is appropriate depends entirely on the research situation. There are three possible situations.
Table 3 presents heuristic data illustrating the three possibilities. The measured/observed outcome variable in all three cases is Y.
INSERT TABLE 3 ABOUT HERE.
Case #1: No harm, no foul. In case #1 the intervally-scaled variable X1 is re-expressed as a trichotomy in the form of variable X1'. Assuming that the standard error of the measurement is something
like 3 or 6, the conversion in this instance does not seem problematic, because it appears reasonable to assume that:
1. all the participants assigned to a given level of the way are the same, and
2. all the participants assigned to different levels of the way are different.
Case #2: Creating variance where there is none. Case #2 again assumes that the standard error of the measurement is something like 3 to 6 for the hypothetical scores. Here none of the 21 participants
appear to be different as regards their scores on Table 3 variable X2, so assigning the participants to three groups via variable X2' seems to create differences where there are none. This will
generate analytic results in which the analytic model does not honor our model of reality, which in turn compromises the integrity of our results.
Some may protest that no real researcher would ever, ever assign people to groups where there are, in fact, no meaningful differences among the participants as regards their scores on an independent
variable. But consider a recent local dissertation that involved administration of a depression measure to children; based on scores on this measure the children were assigned to one of three
depression groups. Regrettably, these children were all apparently happy and well-adjusted.
It is especially interesting that the highest score on this [depression] variable... was apparently 3.43 (p. 57). As... [the student] acknowledged, the PNID authors themselves recommend a
cutoff score of 4 for classifying subjects as being severely depressed. Thus, the highest score in... [the] entire sample appeared to be less than the minimum cutoff score suggested by
the test's own authors! (Thompson, 1994a, p. 24)
Case #3: Discarding variance, distorting distribution shape. Alternatively, presume that the intervally-scaled independent variable (e.g., an aptitude way in an ATI design) is somewhat normally
distributed. Variable X3 in Table 3 can be used to illustrate the potential consequences of re-expressing this information in the form of a nominally-scaled variable such as X3'.
Figure 5 presents the SPSS output from analyzing the data in both unmutilated (i.e., X3) and mutilated (i.e., X3') form. In unmutilated form, the results are statistically significant (p[CALCULATED]
= .00004) and the R^2 effect size is 59.7%. For the mutilated data, the results are not statistically significant at a conventional alpha level (p[CALCULATED] = .1145) and the eta^2 effect size is
21.4%, roughly a third of the effect for the regression analysis.
INSERT FIGURE 5 ABOUT HERE.
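The general pattern can be illustrated with simulated scores (hypothetical data, not the Table 3 values). The sketch below regresses Y on an interval-scaled predictor, then trichotomizes the predictor at 90 and 110 and computes the ANOVA eta^2; with a roughly linear relationship, the mutilated analysis typically yields a noticeably smaller effect:

import numpy as np

rng = np.random.default_rng(4)
n = 21
x = rng.normal(100, 15, size=n)                   # interval-scaled (IQ-like) predictor
y = 0.8 * x + rng.normal(0, 10, size=n)

r2 = np.corrcoef(x, y)[0, 1] ** 2                 # effect size for the unmutilated regression

groups = np.digitize(x, [90, 110])                # 0 = low, 1 = middle, 2 = high
ss_tot = np.sum((y - y.mean()) ** 2)
ss_btw = sum(y[groups == g].size * (y[groups == g].mean() - y.mean()) ** 2
             for g in np.unique(groups))
print(r2, ss_btw / ss_tot)                        # eta^2 for the trichotomy is usually smaller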
Criticisms of Statistical Significance Tests
Tenor of Past Criticism
The last several decades have delineated an exponential growth curve in the decade-by-decade criticisms across disciplines of statistical testing practices (Anderson, Burnham & Thompson, 1999). In
their historical summary dating back to the origins of these tests, Huberty and Pike (in press) provide a thoughtful review of how we got to where we're at. Among the recent commentaries on
statistical testing practices, I prefer Cohen (1994), Kirk (1996), Rosnow and Rosenthal (1989), Schmidt (1996), and Thompson (1996). Among the classical criticisms, my favorites are Carver (1978),
Meehl (1978), and Rozeboom (1960).
Among the more thoughtful works advocating statistical testing, I would cite Cortina and Dunlap (1997), Frick (1996), and especially Abelson (1997). The most balanced and comprehensive treatment is
provided by Harlow, Mulaik and Steiger (1997) (for reviews of this book, see Levin, 1998 and Thompson, 1998c).
My purpose here is not to further articulate the various criticisms of statistical significance tests. My own recent thinking is elaborated in the several reports enumerated in Table 4. The focus
here is on what should be the future. Therefore, criticisms of statistical tests are only briefly summarized in the present treatment.
INSERT TABLE 4 ABOUT HERE.
But two quotations may convey the tenor of some of these commentaries. Rozeboom (1997) recently argued that
Null-hypothesis significance testing is surely the most bone-headedly misguided procedure ever institutionalized in the rote training of science students... [I]t is a sociology-of-science
wonderment that this statistical practice has remained so unresponsive to criticism... (p. 335)
And Tryon (1998) recently lamented,
[T]he fact that statistical experts and investigators publishing in the best journals cannot consistently interpret the results of these analyses is extremely disturbing. Seventy-two
years of education have resulted in minuscule, if any, progress toward correcting this situation. It is difficult to estimate the handicap that widespread, incorrect, and intractable use
of a primary data analytic method has on a scientific discipline, but the deleterious effects are doubtless substantial... (p. 796)
Indeed, empirical studies confirm that many researchers do not fully understand the logic of their statistical tests (cf. Mittag, 1999; Nelson, Rosenthal & Rosnow, 1986; Oakes, 1986; Rosenthal &
Gaito, 1963; Zuckerman, Hodgins, Zuckerman & Rosenthal, 1993). Misconceptions are taught even in widely-used statistics textbooks (Carver, 1978).
Brief Summary of Four Criticisms of Common Practice
Statistical significance tests evaluate the probability of obtaining sample statistics (e.g., means, medians, correlation coefficients) that diverge as far from the null hypothesis as the sample
statistics, or further, assuming that the null hypothesis is true in the population, and given the sample size (Cohen, 1994; Thompson, 1996). The utility of these estimates has been questioned on
various grounds, four of which are briefly summarized here.
Conventionally, Statistical Tests Assume "Nil" Null Hypotheses. Cohen (1994) defined a "nil" null hypothesis as a null specifying no differences (e.g., H[0]: SD[1] - SD[2] = 0) or zero correlations
(e.g., R^2=0). Researchers must specify some null hypothesis, or otherwise the probability of the sample statistics is completely indeterminate (Thompson, 1996)--infinitely many p values become
equally plausible. But "nil" nulls are not required. Nevertheless, "as almost universally used, the null in H[0] is taken to mean nil, zero" (Cohen, 1994, p. 1000).
Some researchers employ nil nulls because statistical theory does not easily accommodate the testing of some non-nil nulls. But probably most researchers employ nil nulls because these nulls have
been unconsciously accepted as traditional, because these nulls can be mindlessly formulated without consulting previous literature, or because most computer software defaults to tests of nil nulls
(Thompson, 1998c, 1999a). As Boring (1919) argued 80 years ago, in his critique of the mindless use of statistical tests titled, "Mathematical vs. scientific significance,"
The case is one of many where statistical ability, divorced from a scientific intimacy with the fundamental observations, leads nowhere. (p. 338)
I believe that when researchers presume a nil null is true in the population, an untruth is posited. As Meehl (1978, p. 822) noted, "As I believe is generally recognized by statisticians today and by
thoughtful social scientists, the [nil] null hypothesis, taken literally, is always false." Similarly, Hays (1981, p. 293) pointed out that "[t]here is surely nothing on earth that is completely
independent of anything else [in the population]. The strength of association may approach zero, but it should seldom or never be exactly zero." Roger Kirk (1996) concurred, noting that:
It is ironic that a ritualistic adherence to null hypothesis significance testing has led researchers to focus on controlling the Type I error that cannot occur because all null
hypotheses are false. (p. 747, emphasis added)
A p[CALCULATED] value computed on the foundation of a false premise is inherently of somewhat limited utility. As I have noted previously, "in many contexts the use of a 'nil' hypothesis as the
hypothesis we assume can render me largely disinterested in whether a result is 'nonchance'" (Thompson, 1997a, p. 30).
Particularly egregious is the use of "nil" nulls to test measurement hypotheses, where wildly non-nil results are both anticipated and demanded. As Abelson (1997) explained,
And when a reliability coefficient is declared to be nonzero, that is the ultimate in stupefyingly vacuous information. What we really want to know is whether an estimated reliability is
.50'ish or .80'ish. (p. 121)
Statistical Tests Can be a Tautological Evaluation of Sample Size. When "nil" nulls are used, the null will always be rejected at some sample size. There are infinitely many possible sample effects.
Given this, the probability of realizing an exactly zero sample effect is infinitely small. Therefore, given a "nil" null, and a non-zero sample effect, the null hypothesis will always be rejected at
some sample size!
Consequently, as Hays (1981) emphasized, "virtually any study can be made to show significant results if one uses enough subjects" (p. 293). This means that
Statistical significance testing can involve a tautological logic in which tired researchers, having collected data from hundreds of subjects, then conduct a statistical test to evaluate
whether there were a lot of subjects, which the researchers already know, because they collected the data and know they're tired. (Thompson, 1992c, p. 436)
Certainly this dynamic is well known, if it is just as widely ignored. More than 60 years ago, Berkson (1938) wrote an article titled, "Some difficulties of interpretation encountered in the
application of the chi-square test." He noted that when working with data from roughly 200,000 people,
an observant statistician who has had any considerable experience with applying the chi-square test repeatedly will agree with my statement that, as a matter of observation, when the
numbers in the data are quite large, the P's tend to come out small... [W]e know in advance the P that will result from an application of a chi-square test to a large sample... But since
the result of the former is known, it is no test at all! (pp. 526-527)
Some 30 years ago, Bakan (1966) reported that, "The author had occasion to run a number of tests of significance on a battery of tests collected on about 60,000 subjects from all over the United
States. Every test came out significant" (p. 425). Shortly thereafter, Kaiser (1976) reported not being surprised when many substantively trivial factors were found to be statistically significant
when data were available from 40,000 participants.
Because Statistical Tests Assume Rather than Test the Population, Statistical Tests Do Not Evaluate Result Replicability. Too many researchers incorrectly assume, consciously or unconsciously, that
the p values calculated in statistical significance tests evaluate the probability that results will replicate (Carver, 1978, 1993). But statistical tests do not evaluate the probability that the
sample statistics occur in the population as parameters (Cohen, 1994).
Obviously, knowing the probability of the sample is less interesting than knowing the probability of the population. Knowing the probability of population parameters would bear upon result
replicability, because we would then know something about the population from which future researchers would also draw their samples. But as Shaver (1993) argued so emphatically:
[A] test of statistical significance is not an indication of the probability that a result would be obtained upon replication of the study.... Carver's (1978) treatment should have dealt
a death blow to this fallacy.... (p. 304)
And so Cohen (1994) concluded that the statistical significance test "does not tell us what we want to know, and we so much want to know what we want to know that, out of desperation, we nevertheless
believe that it does!" (p. 997).
Statistical Significance Tests Do Not Solely Evaluate Effect Magnitude. Because various study features (including score reliability) impact calculated p values, p[CALCULATED] cannot be used as a
satisfactory index of study effect size. As I have noted elsewhere,
The calculated p values in a given study are a function of several study features, but are particularly influenced by the confounded, joint influence of study sample size and study effect
sizes. Because p values are confounded indices, in theory 100 studies with varying sample sizes and 100 different effect sizes could each have the same single p[CALCULATED], and 100
studies with the same single effect size could each have 100 different values for p[CALCULATED]. (Thompson, 1999a, pp. 169-170)
The recent fourth edition of the American Psychological Association style manual (APA, 1994) explicitly acknowledged that p values are not acceptable indices of effect:
Neither of the two types of probability values [statistical significance tests] reflects the importance or magnitude of an effect because both depend on sample size... You are [therefore]
encouraged to provide effect-size information. (APA, 1994, p. 18, emphasis added)
In short, effect sizes should be reported in every quantitative study.
The "Bootstrap"
Explanation of the "bootstrap" will provide a concrete basis for facilitating genuine understanding of what statistical tests do (and do not) do. The "bootstrap" has been so named because this
statistical procedure represents an attempt to "pull oneself up" on one's own, using one's sample data, without external assistance from a theoretically-derived sampling distribution.
Related books have been offered by Davison and Hinkley (1997), Efron and Tibshirani (1993), Manly (1994), and Sprent (1998). Accessible shorter conceptual treatments have been presented by Diaconis
and Efron (1983) and Thompson (1993b). I especially and particularly recommend the remarkable book by Lunneborg (1999).
Software to invoke the bootstrap is available in most structural equation modeling software (e.g., EQS, AMOS). Specialized bootstrap software for microcomputers (e.g., S Plus, SC, and Resampling
Stats) is also readily available.
The Sampling Distribution
Key to understanding statistical significance tests is understanding the sampling distribution, and distinguishing the (a) sampling distribution from (b) the population distribution and (c) the sample score
distribution. Among the better book treatments is one offered by Hinkle, Wiersma and Jurs (1998, pp. 176-178). Shorter treatments include those by Breunig (1995), Mittag (1992), and Rennie (1997).
The population distribution consists of the scores of the N entities (e.g., people, laboratory mice) of interest to the researcher, regarding whom the researcher wishes to generalize. In the social
sciences, many researchers deem the population to be infinite. For example, an educational researcher may hope to generalize about the effects of a teaching method on all human beings across time.
Researchers typically describe the population by computing or estimating characterizations of the population scores (e.g., means, interquartile ranges), so that the population can be more readily
comprehended. These characterizations of the population are called "parameters," and are conventionally symbolized using Greek letters (e.g., μ for the population score mean, σ for the population
score standard deviation).
The sample distribution also consists of scores, but only a subsample of n scores from the population. The characterizations of the sample scores are called "statistics," and are conventionally
represented by Roman letters (e.g., M, SD, r). Strictly speaking, statistical significance tests evaluate the probability of a given set of statistics occurring, assuming that the sample came from a
population exactly described by the null hypothesis, given the sample size.
Because each sample is only a subset of the population scores, the sample does not exactly reproduce the population distribution. Thus, each set of sample scores contains some idiosyncratic variance,
called "sampling error" variance, much like each person has idiosyncratic personality features. [Of course, sampling error variance should not be confused with either "measurement error" variance or
"model specification" error variance (sometimes modeled as the "within" or "residual" sum of squares in univariate analyses) (Thompson, 1998a).] Of course, like people, sampling distributions may
differ in how much idiosyncratic "flukiness" they each contain.
Statistical tests evaluate the probability that the deviation of the sample statistics from the assumed population parameters is due to sampling error. That is, statistical tests evaluate whether
random sampling from the population may explain the deviations of the sample statistics from the hypothesized population parameters.
However, very few researchers employ random samples from the population. Rokeach (1973) was an exception; being a different person living in a different era, he was able to hire the Gallup polling
organization to provide a representative national sample for his inquiry. But in the social sciences fewer than 5% of studies are based on random samples (Ludbrook & Dudley, 1998).
On the basis that most researchers do not have random samples from the population, some (cf. Shaver, 1993) have argued that statistical significance tests should almost never be used. However, most
researchers presume that statistical tests may be reasonable if there are grounds to believe that the score sample of convenience is expected to be reasonably representative of a population.
In order to evaluate the probability that the sample scores came from a population of scores described exactly by the null hypothesis, given the sample size, researchers typically invoke the sampling
distribution. The sampling distribution does not consist of scores (except when the sample size is one). Rather, the sampling distribution consists of estimated parameters, each computed for samples
of exactly size n, so as to model the influences of random sampling error on the statistics estimating the population parameters, given the sample size.
This sampling distribution is then used to estimate the probability of the observed sample statistic(s) occurring due to sampling error. For example, we might take the population to be infinitely
many IQ scores normally distributed with a mean, median and mode of 100 and a standard deviation of 15. Perhaps we have drawn a sample of 10 people, and compute the sample median (not all hypotheses
have to be about means!) to be 110. We wish to know whether our statistic or one higher is unlikely, assuming the sample came from the posited population.
We can make this determination by drawing all possible samples of size 10 from the population, computing the median of each sample, and then creating the distribution of these statistics (i.e., the
sampling distribution). We then examine the sampling distribution, and locate the value of 110. Perhaps only 2% of the sample statistics in the sampling distribution are 110 or higher. This suggests
to us that our observed sample median of 110 is relatively unlikely to have come from the hypothesized population.
The number of samples drawn for the sampling distribution from a given population is a function of the population size, and the sample size. The number of such different sets of population cases for
a population of size N and a sample of size n equals:
M = N! / (n! (N - n)!).
Clearly, if the population size is infinite (or even only large), deriving all possible estimates becomes unmanageable. In such cases the sampling distribution may be theoretically (i.e.,
mathematically) estimated, rather than actually observed. Sometimes, rather than estimating the sampling distribution, estimating an analog of the sampling distribution, called a "test distribution"
(e.g., F, t, χ^2) may be more manageable.
Heuristic Example for a Finite Population Case
Table 5 presents a finite population of scores for N=20 people. Presume that we wish to evaluate a sample mean for n=3 people. If we know (or presume) the population, we can derive the sampling
distribution (or the test distribution) for this problem, so that we can then evaluate the probability that the sample statistic of interest came from the assumed population.
INSERT TABLE 5 ABOUT HERE.
Note that we are ultimately inferring the probability of the sample statistic, and not of the population parameter(s). Remember also that some specific population must be presumed, or infinitely many
sampling distributions (and consequently infinitely many p[CALCULATED] values) are plausible, and the solution becomes indeterminate.
Here the problem is manageable, given the relatively small population and samples sizes. The number of statistics creating this sampling distribution is
M = N! / (n! (N - n)!)
= 20! / (3! (20 - 3)!)
= 20! / (3! (17)!)
= 2.433E+18 / (3 x 2 x (17x16x15x14x13x12x11x10x9x8x7x6x5x4x3x2))
= 2.433E+18 / (6 x 3.557E+14)
= 1,140.
Table 6 presents the first 85 and the last 10 potential samples. [The full sampling distribution takes 25 pages to present, and so is not presented here in its entirety.]
INSERT TABLE 6 ABOUT HERE.
Figure 6 presents the full sampling distribution of 1,140 estimates of the mean based on samples of size n=3 from the Table 5 population of N=20 scores. Figure 7 presents the analog of a test
statistic distribution (i.e., the sampling distribution in standardized form).
INSERT FIGURES 6 AND 7 ABOUT HERE.
If we had a sample of size n=3, and had some reason to believe and wished to evaluate the probability that the sample with a mean of M = 524.0 came from the Table 5 population of N=20 scores, we
could use the Figure 6 sampling distribution to do so. Statistic means (i.e., sample means) this large or larger occur about 25% of the time due to sampling error.
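Because both N and n are small here, the exact sampling distribution can be enumerated directly. The sketch below does so in Python (the 20 population scores shown are placeholders, since the Table 5 values are not reproduced here; substituting the actual Table 5 scores would reproduce Figure 6):

from itertools import combinations

population = [410, 425, 440, 455, 470, 480, 490, 500, 505, 510,     # placeholder scores standing
              515, 520, 530, 540, 550, 560, 575, 590, 605, 620]     # in for the Table 5 population

sampling_distribution = [sum(s) / 3 for s in combinations(population, 3)]
print(len(sampling_distribution))                                   # 1140 possible samples of n=3

p = sum(m >= 524.0 for m in sampling_distribution) / len(sampling_distribution)
print(p)                                                            # proportion of sample means >= 524.0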
In practice researchers most frequently use sampling distributions of test statistics (e.g., F, t, χ^2), rather than the sampling distributions of sample statistics, to evaluate sample results. This
is typical because the sampling distributions for many sample statistics change for every study variation (e.g., changes for different statistics, changes for each different sample size even for
a given statistic). Sampling distributions of test statistics (e.g., distributions of sample means each divided by the population SD) are more general or invariant over these changes, and thus, once
they are estimated, can be used with greater regularity than the related sampling distributions for statistics.
The problem is that the applicability and generalizability of test distributions tend to be based on fairly strict assumptions (e.g., equal variances of outcome variable scores across all groups in
ANOVA). Furthermore, test distributions have only been developed for a limited range of classical statistics. For example, test distributions have not been developed for some "modern" statistics.
"Modern" Statistics
All "classical" statistics are centered about the arithmetic mean, M. For example, the standard deviation (SD), the coefficient of skewness (S), and the coefficient of kurtosis (K) are all moments
about the mean, respectively:
SD[X] = ((Σ (X[i] - M[X])^2) / (n-1))^.5 = ((Σ x[i]^2) / (n-1))^.5;
Coefficient of Skewness[X] (S[X]) = (Σ [(X[i]-M[X])/SD[X]]^3) / n; and
Coefficient of Kurtosis[X] (K[X]) = ((Σ [(X[i]-M[X])/SD[X]]^4) / n) - 3.
Similarly, the Pearson product-moment correlation invokes deviations from the means of the two variables being correlated:
r[XY] = ((Σ (X[i] - M[X])(Y[i] - M[Y])) / (n-1)) / (SD[X] * SD[Y]).
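These definitional formulas translate directly into code. A minimal sketch follows (any list of numbers will do; the variable names are arbitrary):

import numpy as np

def classical_moments(x):
    x = np.asarray(x, dtype=float)
    n, m = len(x), x.mean()
    sd = np.sqrt(np.sum((x - m) ** 2) / (n - 1))        # standard deviation (n - 1 denominator)
    z = (x - m) / sd
    skew = np.sum(z ** 3) / n                           # coefficient of skewness
    kurt = np.sum(z ** 4) / n - 3                       # coefficient of kurtosis
    return m, sd, skew, kurt

def pearson_r(x, y):
    x, y = np.asarray(x, float), np.asarray(y, float)
    cov = np.sum((x - x.mean()) * (y - y.mean())) / (len(x) - 1)
    return cov / (classical_moments(x)[1] * classical_moments(y)[1])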
The problem with "classical" statistics invoking the mean is that these estimates are notoriously influenced by atypical scores (outliers), partly because the mean itself is differentially influenced
by outliers. Table 7 presents a heuristic data set that can be used to illustrate both these dynamics and two alternative "modern" statistics that can be employed to mitigate these problems.
INSERT TABLE 7 ABOUT HERE.
Wilcox (1997) presents an elaboration of some "modern" statistics choices. A shorter accessible treatment is provided by Wilcox (1998). Also see Keselman, Kowalchuk, and Lix (1998) and Keselman, Lix
and Kowalchuk (1998).
The variable X in Table 7 is somewhat positively skewed (S[X] = 2.40), as reflected by the fact that the mean (M[X] = 500.00) is to the right of the median (Md[X] = 461.00). One "modern" method
"winsorizes" (à la statistician Charles Winsor) the score distribution by substituting less extreme values in the distribution for more extreme values. In this example, the 4th score (i.e., 433) is
substituted for scores 1 through 3, and in the other tail the 17th score (i.e., 560) is substituted for scores 18 through 20. Note that the mean of this distribution, M[X'] = 480.10, is less extreme
than the original value (i.e., M[X] = 500.00).
Another "modern" alternative "trims" the more extreme scores, and then computes a "trimmed" mean. In this example, .15 of the distribution is trimmed from each tail. The resulting mean, M[X-] =
473.07, is closer to the median of the distribution, which has remained 461.00.
Some "classical" statistics can also be framed as "modern." For example, the interquartile range (75th %ile - 25th %ile) might be thought of as a "trimmed" range.
In theory, "modern" statistics may generate more replicable characterizations of data, because at least in some respects the influence of more extreme scores, which are less likely to be drawn in
future samples from the tails of a non-uniform (non-rectangular or non-flat) population distribution, has been minimized. However, "modern" statistics have not been widely employed in contemporary
research, primarily because generally-applicable test distributions are often not available for such statistics.
Traditionally, the tail of statistical significance testing has wagged the dog of characterizing our data in the most replicable manner. However, the "bootstrap" may provide a vehicle for
statistically testing, or otherwise exploring, "modern" statistics.
Univariate Bootstrap Heuristic Example
The bootstrap logic has been elaborated by various methodologists, but much of this development has been due to Efron and his colleagues (cf. Efron, 1979). As explained elsewhere,
Conceptually, these methods involve copying the data set on top of itself again and again infinitely many times to thus create an infinitely large "mega" data set (what's actually done is
resampling from the original data set with replacement). Then hundreds or thousands of different samples [each of size n] are drawn from the "mega" file, and results [i.e., the statistics
of interest] are computed separately for each sample and then averaged [and characterized in various ways]. (Thompson, 1993b, p. 369)
Table 8 presents a heuristic data set to make concrete selected aspects of bootstrap analysis. The example involves the numbers of churches and murders in 45 cities. These two variables are highly
correlated. [The illustration makes clear the folly of inferring causal relationships, even from a "causal modeling" SEM analysis, if the model is not exactly correctly "specified" (cf. Thompson,
1998a).] The statistic examined here is the bivariate product-moment correlation coefficient. This statistic is "univariate" in the sense that only a single dependent/outcome variable is involved.
INSERT TABLE 8 ABOUT HERE.
Figure 8 presents a scattergram portraying the linear relationship between the two measured/observed variables. For the heuristic data, r equals .779.
INSERT FIGURE 8 ABOUT HERE.
In this example 1,000 resamples of the rows of the Table 8 data were drawn, each of size n=45, so as to model the sampling error influences in the actual data set. In each "resample," because
sampling from the Table 8 data was done "with replacement," a given row of the data may have been sampled multiple times, while another row of scores may not have been drawn at all. For this analysis
the bootstrap software developed by Lunneborg (1987) was used. Table 9 presents some of the 1,000 bootstrapped estimates of r.
INSERT TABLE 9 ABOUT HERE.
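The resampling logic itself requires only a few lines of code. The sketch below (hypothetical city data constructed to be spuriously correlated through city size, since the Table 8 values are not reproduced here) draws 1,000 resamples of the rows with replacement and characterizes the resulting r values:

import numpy as np

rng = np.random.default_rng(5)
n = 45
size = np.arange(n)                                        # a "city size" lurking variable
churches = rng.poisson(60, size=n) + size                  # both counts grow with city size,
murders = rng.poisson(20, size=n) + size                   # producing a spurious correlation
print(np.corrcoef(churches, murders)[0, 1])                # sample r

boot_r = np.empty(1000)
for b in range(1000):
    idx = rng.integers(0, n, size=n)                       # rows drawn with replacement; some repeat
    boot_r[b] = np.corrcoef(churches[idx], murders[idx])[0, 1]

print(boot_r.mean(), boot_r.std(ddof=1))                   # bootstrap mean and empirical SE of r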
Figure 9 presents a graphic representation of the bootstrap-estimated sampling distribution for this case. Because r, although a characterization of linear relation, is not itself linear (i.e., r=
1.00 is not twice r=.50), Fisher's r-to-Z transformations of the 1,000 resampled r values were also computed as:
r-to-Z = .5 (ln [(1 + r)/(1 - r)]) (Hays, 1981, p. 465).
In SPSS this could be computed as:
compute r_to_z=.5 * ln ((1 + r)/(1 - r)).
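An equivalent computation in Python (noting that this transformation is simply the inverse hyperbolic tangent) might look like:

import numpy as np

r = 0.779                                  # the sample r for the heuristic data
z = 0.5 * np.log((1 + r) / (1 - r))        # Fisher r-to-Z
print(np.isclose(z, np.arctanh(r)))        # True: arctanh(r) is the same transformation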
Figure 10 presents the bootstrap-estimated sampling distribution for these values.
INSERT FIGURES 9 AND 10 ABOUT HERE.
Descriptive vs. Inferential Uses of the Bootstrap
The bootstrap can be used to test statistical significance. For example, the bootstrap can be used to estimate, through Monte Carlo simulation, sampling distributions when theoretical distributions
(e.g., test distributions) are not known for some problems (e.g., "modern" statistics).
The standard deviation of the bootstrap-estimated sampling distribution characterizes the variability of the statistics estimating given population parameters. The standard deviation of the sampling
distribution is called the "standard error of the estimate" (e.g., the standard error of the mean, SE[M]). [The decision to call this standard deviation the "standard error," so as to confuse the
graduate students into not realizing that SE is an SD, was taken decades ago at an annual methodologists' coven--in the coven priority is typically afforded to most confusing the students regarding
the most important concepts.] The SE of a statistic characterizes the precision or variability of the estimate.
The ratio of the statistic estimating a parameter to the SE of that estimate is a very important idea in statistics, and thus is called by various names, such as "t," "Wald statistic," and "critical
ratio" (so as to confuse the students regarding an important concept). If the statistic is large, but the SE is even larger, a researcher may elect not to vest much confidence in the estimate.
Conversely, even if a statistic is small (i.e., near zero), if the SE of the statistic is very, very small, the researcher may deem the estimate reasonably precise.
In classical statistics researchers typically estimate the SE as part of statistical testing by invoking numerous assumptions about the population and the sampling distribution (e.g., normality of
the sampling distribution). Such SE estimates are theoretical.
The SD of the bootstrapped sampling distribution, on the other hand, is an empirical estimate of the sampling distribution's variability. This estimate does not require as many assumptions.
Table 10 presents selected percentiles for two bootstrapped r-to-z sampling distributions for the Table 8 data, one involving 100 resamples, and one involving 1,000 resamples. Notice that percentiles
near the means or the medians of the two distributions tend to be closer than the values in the tails, and here especially in the left tail (small z values) where there are fewer values, because the
distribution is skewed left. This purely heuristic comparison makes an extremely important conceptual point that clearly distinguishes inferential versus descriptive applications of the bootstrap.
INSERT TABLE 10 ABOUT HERE.
When we employ the bootstrap for inferential purposes (i.e., to estimate the probability of the sample statistics), focus shifts to the extreme tails of the distributions, where the less likely (and
less frequent) statistics are located, because we typically invoke small values of p in statistical tests. These are exactly the locations where the estimated distribution densities are most
unstable, because there are relatively few scores here (presuming the sampling distribution does not have an extraordinarily small SE). Thus, when we invoke the bootstrap to conduct statistical
significance tests, extremely large numbers of resamples are required (e.g., 2,000, 5,000).
However, when our application is descriptive, we are primarily interested in the mean (or median) statistic and the SD/SE from the sampling distribution. These values are less dependent on large
numbers of resamples. This is said not to discourage large numbers of resamples (which are essentially free to use, given modern microcomputers), but is noted instead to emphasize these two very
distinct uses of the bootstrap.
The descriptive focus is appropriate. We hope to avoid obtaining results that no one else can replicate (partly because we are good scientists searching for generalizable results, and partly simply
because we do not wish to be embarrassed by discovering the social sciences equivalent of cold fusion). The challenge is obtaining results that reproduce over the wide range of idiosyncrasies of
human personality.
The descriptive use of the bootstrap provides some evidence, short of a real (and preferred) "external" replication (cf. Thompson, 1996) of our study, that results may generalize. As noted elsewhere,
If the mean estimate [in the estimated sampling distribution] is like our sample estimate, and the standard deviation of estimates from the resampling is small, then we have some
indication that the result is stable over many different configurations of subjects. (Thompson, 1993b, p. 373)
Multivariate Bootstrap Heuristic Example
The bootstrap can also be generalized to multivariate cases (e.g., Thompson, 1988b, 1992a, 1995a). The barrier to this application is that a given multivariate "factor" (also called "equation,"
"function," or "rule," for reasons that are, by now, obvious) may be manifested in different locations.
For example, perhaps a measurement of androgyny purports to measure two factors: masculine and feminine. In one resample masculine may emerge as the first factor, while in another resample masculine might emerge as the second factor. In most applications we have no particular theoretical expectation that "factors" ("functions," etc.) will always replicate in a given order. However, if we average and otherwise characterize statistics across resamples without first locating given constructs in the same locations, we will be pooling apples, oranges, and tangerines, and merely be creating an uninterpretable fruit salad.
This barrier to the multivariate use of the bootstrap can be resolved by using Procrustean methods to rotate all "factors" into a single, common factor space prior to characterizing the results
across the resamples. A brief example may be useful in communicating the procedure.
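Before turning to the figures, the alignment step itself can be sketched in a few lines. The orthogonal Procrustes solution via singular value decomposition shown below is one standard way to rotate a resample's coefficients to a designated target; it is offered as an illustrative assumption about how such an alignment can be done, not as a description of the internals of the DISCSTRA program, and the coefficient matrices are hypothetical.

import numpy as np

def procrustes_rotation(coeffs, target):
    # Orthogonal Procrustes: find an orthogonal R minimizing ||coeffs @ R - target|| (Frobenius norm)
    u, _, vt = np.linalg.svd(coeffs.T @ target)
    return u @ vt

def align_to_target(coeffs, target):
    # Rotate one resample's function (or factor) coefficients into the target's space
    return coeffs @ procrustes_rotation(coeffs, target)

# Hypothetical 4-variable, 2-function coefficient matrices
target = np.array([[.9, .1], [.8, .2], [.1, .9], [.2, .8]])
resample = np.array([[.15, .88], [.22, .79], [.91, .12], [.83, .21]])  # functions emerged in reversed order

aligned = align_to_target(resample, target)
# After alignment, functions occupy common locations, so averaging across resamples is meaningful
print(np.round(aligned, 2))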
Figure 11 presents DDA/MANOVA results from an analysis of Sir Ronald Fisher's (1936) classic data for iris flowers. Here the bootstrap was conducted using my DISCSTRA program (Thompson, 1992a) to
conduct 2,000 resamples.
INSERT FIGURE 11 ABOUT HERE.
Figure 12 presents a partial listing of the resampling of n=150 rows of data (i.e., the resample size exactly matches the original sample size). Notice in Figure 12 that case #27 was selected at
least twice as part of the first resample.
INSERT FIGURE 12 ABOUT HERE.
Figure 13 presents selected results for both the first and the last resamples. Notice that the function coefficients are first rotated to best-fit position with a common designated target matrix, and then the structure coefficients are computed using these rotated results. [Here the rotations made little difference, because the functions by happenstance already fairly closely matched the target matrix--here the function coefficients from the original sample.]
INSERT FIGURE 13 ABOUT HERE.
Figure 14 presents an abridged map of participant selection across the 2,000 resamples. We can see that the 150 flowers were each selected approximately 2,000 times, as expected if the random
selection with replacement is truly random.
INSERT FIGURE 14 ABOUT HERE.
Figure 15 presents a summary of the bootstrap DDA results. For example, the mean statistic across the 2,000 resamples is computed along with the empirically-estimated standard error of each statistic. As
generally occurs, SE's tend to be smaller for statistics that deviate most from zero; these coefficients tend to reflect real (non-sampling error variance) dynamics within the data, and therefore
tend to re-occur across samples.
INSERT FIGURE 15 ABOUT HERE.
However, notice in Figure 15 that the SE's for the standardized function coefficients on Function I for variables X2 and X4 were both essentially .40, even though the mean estimates of the two
coefficients appear to be markedly different (i.e., |1.6| and |2.9|). In a theoretically-grounded estimate, for a given n and a given population estimate, the SE will be identical. But bootstrap
methods do not require the sometimes unrealistic assumption that related coefficients even in a given analysis with a common fixed n have the same sampling distributions.
Clarification and an Important Caveat
The bootstrap methods modeled here presume that the sample size is somewhat large (i.e., more than 20 to 40). In these cases the bootstrap invokes resampling with replacement. For small samples other
methods are employed.
It is also important to emphasize that "bootstrap methods do not magically take us beyond the limits of our data" (Thompson, 1993b, p. 373). For example, the bootstrap cannot make an unrepresentative
sample representative. And the bootstrap cannot make a quasi-experiment with intact groups mimic results for a true experiment in which random assignment is invoked. The bootstrap cannot make data
from a correlational (i.e., non-experimental) design yield unequivocal causal conclusions.
Thus, Lunneborg (1999) makes very clear and careful distinctions between bootstrap applications that may support either (a) population inference (i.e., the study design invoked random sampling), or
(b) evaluation of how "local" a causal inference may be (i.e., the study design invoked random assignment to experimental groups, but not random selection), or (c) evaluation of how "local"
non-causal descriptions may be (i.e., the design invoked neither random sampling nor random assignment). Lunneborg (1999) quite rightly emphasizes how critical it is to match study design/purposes
and the bootstrap modeling procedures.
The bootstrap and related "internal" replicability analyses are not magical. Nevertheless, these methods can be useful because
the methods combine the subjects in hand in [numerous] different ways to determine whether results are stable across sample variations, i.e., across the idiosyncracies of individuals
which make generalization in social science so challenging. (Thompson, 1996, p. 29)
Effect Sizes
As noted previously, p[CALCULATED] values are not suitable indices of effect, "because both [types of p values] depend on sample size" (APA, 1994, p. 18, emphasis added). Furthermore, unlikely events
are not intrinsically noteworthy (see Shaver's (1985) classic example). Consequently, the APA publication manual now "encourages" (p. 18) authors to report effect sizes.
Unfortunately, a growing corpus of empirical studies of published articles portrays a consensual view that merely "encouraging" effect size reporting (APA, 1994) has not appreciably affected actual
reporting practices (e.g., Keselman et al., 1998; Kirk, 1996; Lance & Vacha-Haase, 1998; Nilsson & Vacha-Haase, 1998; Reetz & Vacha-Haase, 1998; Snyder & Thompson, 1998; Thompson, 1999b; Thompson &
Snyder, 1997, 1998; Vacha-Haase & Ness, 1999; Vacha-Haase & Nilsson, 1998). Table 11 summarizes 11 empirical studies of recent effect size reporting practices in 23 journals.
INSERT TABLE 11 ABOUT HERE.
Although some of the Table 11 results appear to be more favorable than others, it is important to note that in some of the 11 studies, effect sizes were counted as being reported even if the relevant
results were not interpreted (e.g., an r^2 was reported but not interpreted as being big or small, or noteworthy or not). This dynamic is dramatically illustrated in the Keselman et al. (1998)
results, because the reported results involved an exclusive focus on between-subjects OVA designs, and thus there were no spurious counts of incidental variance-accounted-for statistic reports. Here
Keselman et al. (1998) concluded that, "as anticipated, effect sizes were almost never reported along with p-values" (p. 358).
If the baseline expectation is that effect sizes should be reported in 100% of quantitative studies (mine is), the Table 11 results are disheartening. Elsewhere I have presented various reasons why I
anticipate that the current APA (1994, p. 18) "encouragement" will remain largely ineffective. I have noted that an "encouragement" is so vague as to be unenforceable (Thompson, in press-b). I have
also observed that only "encouraging" effect size reporting:
presents a self-canceling mixed-message. To present an "encouragement" in the context of strict absolute standards regarding the esoterics of author note placement, pagination, and
margins is to send the message, "these myriad requirements count, this encouragement doesn't." (Thompson, in press-b)
Two Heuristic Hypothetical Literatures
Two heuristic hypothetical literatures can be presented to illustrate the deleterious impacts of contemporary traditions. Here, results are reported for both statistical tests and effect sizes.
Twenty "TinkieWinkie" Studies. First, presume that a televangalist suddenly denounces a hypothetical childrens' television character, "TinkieWinkie," based on a claim that the character intrinsically
by appearance and behavior incites moral depravity in 4 year olds.
This claim immediately incites inquiries by 20 research teams, each working independently without knowledge of each others' results. These researchers conduct experiments comparing the differential
effects of "The TinkieWinkie Show" against those of "Sesame Street," or "Mr. Rogers," or both.
This work results in the nascent new literature presented in Table 12. The eta^2 effect sizes from the 20 (10 two-level one-way and 10 three-level one-way) ANOVAs range from 1.2% to 9.9% (M[sq eta]=
3.00%; SD[sq eta]=2.0%) as regards moral depravity being induced by "The TinkieWinkie Show." However, as reported in Table 12, only 1 of the 20 studies results in a statistically significant effect.
INSERT TABLE 12 ABOUT HERE.
The 19 research teams finding no statistically significant differences in the treatment effects on the moral depravity of 4 year olds obtained effect sizes ranging from eta^2=1.2% to eta^2=4.8%.
Unfortunately, these 19 research teams are acutely aware of how non-statistically significant findings are valued within the profession.
They are acutely aware, for example, that revised versions of published articles were rated more highly by counseling practitioners if the revisions reported statistically significant findings than
if they reported statistically nonsignificant findings (Cohen, 1979). The research teams are also acutely aware of Atkinson, Furlong and Wampold's (1982) study in which
101 consulting editors of the Journal of Counseling Psychology and the Journal of Consulting and Clinical Practice were asked to evaluate three versions, differing only with regard to
level of statistical significance, of a research manuscript. The statistically nonsignificant and approach significance versions were more than three times as likely to be recommended for
rejection than was the statistically significant version. (p. 189)
Indeed, Greenwald (1975) conducted a study of 48 authors and 47 reviewers for the Journal of Personality and Social Psychology and reported a
0.49 (± .06) probability of submitting a rejection of the null hypothesis for publication (Question 4a) compared to the low probability of 0.06 (± .03) for submitting a nonrejection of
the null hypothesis for publication (Question 5a). A secondary bias is apparent [as well] in the probability of continuing with a problem [in future inquiry]. (p. 5, emphasis added)
This is the well known "file drawer problem" (Rosenthal, 1979). In the present instance, some of the 19 research teams failing to reject the null hypothesis decide not to even submit their work,
while the remaining teams have their reports rejected for publication. Perhaps these researchers were socialized by a previous version of the APA publication manual, which noted that:
Even when the theoretical basis for the prediction is clear and defensible, the burden of methodological precision falls heavily on the investigator who reports negative results. (APA,
1974, p. 21)
Here only the one statistically significant result is published; everyone remains happily oblivious to the overarching substance of the literature in its entirety.
The problem is that setting a low alpha only means that the probability of a Type I error will be small on the average. In the literature as a whole, some unlikely Type I errors are still inevitable.
These will be afforded priority for publication. Yet publishing replication disconfirmations of these Type I errors will be discouraged normatively. Greenwald (1975, pp. 13-15) cites actual examples of such expected epidemics. In short, contemporary practice as regards statistical tests actively discourages some forms of replication, or at least discourages disconfirming replications
being published.
Twenty Cancer Treatment Studies. Here researchers learn of a new theory that a newly synthesized protein regulates the growth of blood supply to cancer tumors. It is theorized that the protein might
be used to prevent new blood supplies from flowing to new tumors, or even that the protein might be used to reduce existing blood flow to tumors and thus lead to cancer destruction. The protein is promptly put to experimental test by 20 independent research teams.
Unfortunately, given the newness of the theory and the absence of previous related empirical studies upon which to ground power analyses for their new studies, the 20 research teams institute
inquiries that are slightly under-powered. The results from these 20 experiments are presented in Table 13.
INSERT TABLE 13 ABOUT HERE.
Here all 20 studies yield p[CALCULATED] values of roughly .06 (range = .0598 to .0605). As reported in Table 13, the effect sizes range from 15.1% to 62.8%. In the present scenario, only a few of the
reports are submitted for publication, and none are published.
Yet, these inquiries yielded effect sizes ranging from eta^2=15.1%, which Cohen (1988, pp. 26-27) characterized as "large," at least as regards result typicality, up to eta^2=62.8%. And a life-saving
outcome variable is being measured! At the individual study level, perhaps each research team has decided that p values evaluate result replicability, and remains oblivious to the uniformity of
efficacy findings across the literature.
Some researchers remain devoted to statistical tests, because
of their professed dedication to reporting only replicable results, and because they erroneously believe that statistical significance evaluates result replicability (Cohen, 1994). In summary, it
would be the abject height of irony if, out of devotion to replication, we continued to worship at the tabernacle of statistical significance testing, and at the same time we declined to (a)
formulate our hypotheses by explicit consultation of the effect sizes reported in previous studies and (b) explicitly interpret our obtained effect sizes in relation to those reported in related
previous inquiries.
An Effect Size Primer
Given the central role that effect sizes should play in quantitative studies, at least a brief review of the available choices is warranted here. Very good treatments are also available from Kirk
(1996), Rosenthal (1994), and Snyder and Lawson (1993).
There are dozens of effect size estimates, and no single one-size-fits-all choice. The effect sizes can be divided into two major classes: (a) standardized differences and (b) variance-accounted-for
measures of strength of association. [Kirk (1996) identifies a third, "miscellaneous" category, and also summarizes some of these choices.]
Standardized differences. In experimental studies, and especially studies with only two groups where the mean is of primary interest, the difference in means can be "standardized" by dividing the difference by some estimate of the population standard deviation of the scores (σ). For example, in his seminal work on meta-analysis, Glass (cf. 1976) proposed that the difference in the two means could be divided by the control group standard deviation to estimate Δ (delta).
Glass presumed that the control group standard deviation is the best estimate of σ. This is reasonable particularly if the control group received no treatment, or a placebo treatment. For example, for the Table 2 variable, X, if the second of the two groups was taken as the control group,
Δ[X] = (12.50 - 11.50) / 7.68 = .130.
In this estimation the variance (see Table 2 note) is computed by dividing the sum of squares by n-1.
However, others have taken the view that the most accurate standardization can be realized by use of a "pooled" (across groups) estimate of the population standard deviation. Hedges (1981) advocated computation of g using a standard deviation computed as the square root of a pooled variance based on division of the pooled sum of squares by the pooled degrees of freedom (n1 + n2 - 2). For the Table 2 variable, X,
g[X] = (12.50 - 11.50) / 7.49 = .134.
Cohen (1969) argued for the use of d, which divides the mean difference by a "pooled" standard deviation computed as the square root of a pooled variance based on division of the pooled sum of squares by the total n (n1 + n2).
For the Table 2 variable, X,
d[X] = (12.50 - 11.50) / 7.30 = .137.
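A minimal sketch of these three standardized differences, assuming raw scores for two groups, follows. The score arrays are hypothetical (the Table 2 raw data are not reproduced here), so the printed values will not equal .130, .134, and .137; the functions simply implement the three denominators just described.

import numpy as np

def glass_delta(treatment, control):
    # Glass: standardize the mean difference by the control-group SD (n - 1 divisor)
    return (treatment.mean() - control.mean()) / control.std(ddof=1)

def hedges_g(group1, group2):
    # Hedges (1981): pooled SD = sqrt(pooled sum of squares / (n1 + n2 - 2))
    ss = ((group1 - group1.mean()) ** 2).sum() + ((group2 - group2.mean()) ** 2).sum()
    sd_pooled = np.sqrt(ss / (len(group1) + len(group2) - 2))
    return (group1.mean() - group2.mean()) / sd_pooled

def cohens_d(group1, group2):
    # Cohen (1969): pooled SD = sqrt(pooled sum of squares / (n1 + n2))
    ss = ((group1 - group1.mean()) ** 2).sum() + ((group2 - group2.mean()) ** 2).sum()
    sd_pooled = np.sqrt(ss / (len(group1) + len(group2)))
    return (group1.mean() - group2.mean()) / sd_pooled

# Hypothetical scores for an experimental and a control group
g1 = np.array([14.0, 9.0, 20.0, 5.0, 15.0])
g2 = np.array([12.0, 8.0, 18.0, 4.0, 16.0])
print(glass_delta(g1, g2), hedges_g(g1, g2), cohens_d(g1, g2))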
As regards these choices, there is (as usual) no one always right one-size-fits-all choice. The comment by Huberty and Morris (1988, p. 573) is worth remembering generically: "As in all of
statistical inference, subjective judgment cannot be avoided. Neither can reasonableness!"
In some studies the control group standard deviation provides the most reasonable standardization, while in others a "pooling" mechanism may be preferred. For example, an intervention may itself
change score variability, and in these cases Glass's Δ may be preferred. But otherwise the "pooled" value may provide the more statistically precise estimate.
As regards correction for statistical bias by division by n-1 versus n, of course the competitive differences here are a function of the value of n. As n gets larger, it makes less difference which
choice is made. This division is equivalent to multiplication by 1 / the divisor. Consider the differential impacts on estimates derived using the following selected choices of divisors.
    n      1/n        n-1     1/(n-1)        Difference
   10     .1000         9     .111111        .011111
  100     .0100        99     .010101        .000101
 1000     .0010       999     .001001        .000001
10000     .0001      9999     .00010001      .00000001
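The comparison above can be regenerated with a few lines of arithmetic, as in the following sketch; the gap between the two divisors shrinks rapidly as n grows.

for n in (10, 100, 1000, 10000):
    # 1/n versus 1/(n-1), and the difference between them
    print(f"n={n:>6}  1/n={1/n:.6f}  1/(n-1)={1/(n-1):.6f}  difference={1/(n-1) - 1/n:.8f}")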
Variance-accounted-for. Given the omnipresence of the General Linear Model, all analyses are correlational (cf. Thompson, 1998a), and (as noted previously) an r^2 effect size (e.g., eta^2, R^2, omega^2 [Hays, 1981], adjusted R^2) can be computed in all studies. Generically, in univariate analyses "uncorrected" variance-accounted-for effect sizes (e.g., eta^2, R^2) can be computed by dividing the sum of squares "explained" ("between," "model," "regression") by the sum of squares of the outcome variable (i.e., the sum of squares "total"). For example, in the Figure 3 results, the univariate eta^2 effect sizes were both computed to be 0.469% (e.g., 5.0 / [5.0 + 1061.0] = 5.0 / 1066.0 = .00469).
In multivariate analysis, one estimate of eta^2 can be computed as 1 minus the Wilks' lambda statistic. For example, for the Figure 3 results, the multivariate eta^2 effect size was computed as (1 - .37500) = .625.
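These two computations are simple enough to express directly; the following sketch uses the sums of squares and the Wilks' lambda value reported for the Figure 3 results in the text.

def eta_squared(ss_between, ss_total):
    # Univariate "uncorrected" variance-accounted-for effect size
    return ss_between / ss_total

def multivariate_eta_squared(wilks_lambda):
    # One multivariate analogue: 1 minus Wilks' lambda
    return 1.0 - wilks_lambda

print(eta_squared(5.0, 5.0 + 1061.0))     # ~ .00469, i.e., 0.469%
print(multivariate_eta_squared(0.37500))  # .625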
Correcting for score measurement unreliability. It is well known that score unreliability tends to attenuate r values (cf. Walsh, 1996). Thus, some (e.g., Hunter & Schmidt, 1990) have recommended
that effect sizes be estimated incorporating statistical corrections for measurement error. However, such corrections must be used with caution, because any error in estimating the reliability will
considerably distort the effect sizes (cf. Rosenthal, 1991).
Because scores (not tests) are reliable, reliability coefficients fluctuate from administration to administration (Reinhardt, 1996). In a given empirical study, the reliability for the data in hand
may be used for such corrections. In other cases, more confidence may be vested in these corrections if the reliability estimates employed are based on the important meta-analytic "reliability
generalization" method proposed by Vacha-Haase (1998).
"Corrected" vs. "uncorrected" variance-accounted-for estimates. "Classical" statistical methods (e.g., ANOVA, regression, DDA) use the statistical theory called "ordinary least squares." This theory
optimizes the fit of the synthetic/latent variables (e.g., Y^) to the observed/measured outcome/response variables (e.g., Y) in the sample data, and capitalizes on all the variance present in the
observed sample scores, including the "sampling error variance" that is idiosyncratic to the particular sample. Because sampling error variance is unique to a given sample (i.e., each sample has
its own sampling error variance), "uncorrected" variance-accounted-for effect sizes somewhat overestimate the effects that would be replicated by applying the same weights (e.g., regression beta
weights) in either (a) the population or (b) a different sample.
However, statistical theory (or the descriptive bootstrap) can be invoked to estimate the extent of overestimation (i.e., positive bias) in the variance-accounted-for effect size estimate. [Note that
"corrected" estimates are always less than or equal to "uncorrected" values.] The difference between the "uncorrected" and "corrected" variance-accounted-for effect sizes is called "shrinkage."
For example, for regression the "corrected" effect size "adjusted R^2" is routinely provided by most statistical packages. This correction is due to Ezekiel (1930), although the formula is often
incorrectly attributed to Wherry (Kromrey & Hines, 1996):
1 - ((n - 1) / (n - v - 1)) x (1 - R^2),
where n is the sample size and v is the number of predictor variables. The formula can be equivalently expressed as:
R^2 - ((1 - R^2) x (v / (n - v - 1))).
In the ANOVA case, the analogous corrected estimate, omega^2, can be computed using the formula due to Hays (1981, p. 349):
(SS[BETWEEN] - (k - 1) x MS[WITHIN]) / (SS[TOTAL] + MS[WITHIN]),
where k is the number of groups.
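Both corrections are easily computed in a few lines. The following sketch implements the two formulas just given; the numeric inputs in the example calls are hypothetical and merely illustrate the arithmetic.

def ezekiel_adjusted_r2(r2, n, v):
    # Ezekiel (1930): 1 - ((n - 1) / (n - v - 1)) * (1 - R^2),
    # algebraically identical to R^2 - (1 - R^2) * v / (n - v - 1)
    return 1 - ((n - 1) / (n - v - 1)) * (1 - r2)

def hays_omega2(ss_between, ss_total, ms_within, k):
    # Hays (1981, p. 349): (SS_between - (k - 1) * MS_within) / (SS_total + MS_within)
    return (ss_between - (k - 1) * ms_within) / (ss_total + ms_within)

print(ezekiel_adjusted_r2(0.50, 30, 3))   # shrinks an observed R^2 of .50 toward the population value
print(hays_omega2(40.0, 400.0, 9.0, 3))   # "corrected" ANOVA effect size for hypothetical sums of squares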
In the multivariate case, a multivariate omega^2 due to Tatsuoka (1973a) can be used as a "corrected" effect estimate. Of course, using univariate effect sizes to characterize multivariate results
would be just as wrong-headed as using ANOVA methods post hoc to MANOVA. As Snyder and Lawson (1993) perceptively noted, "researchers asking multivariate questions will need to use
magnitude-of-effect indices that are consistent with their multivariate view of the research problem" (p. 341).
Although "uncorrected" effects for a sample are larger than the "corrected" effects estimated for the population, the "corrected" estimates for the population effect (e.g., omega^2) tend in turn to
be larger than the "corrected" estimates for a future sample (e.g., Herzberg, 1969; Lord, 1950). As Snyder and Lawson (1993) explained, "the reason why estimates for future samples result in the most
shrinkage is that these statistical corrections must adjust for the sampling error present in both the given present study and some future study" (p. 340, emphasis in original).
It should also be noted that variance-accounted-for effect sizes can be negative, notwithstanding the fact that a squared-metric statistic is being estimated. This was seen in some of the omega^2
values reported in Table 12. Dramatic amounts of shrinkage, especially to negative variance-accounted-for values, suggest a somewhat dire research experience. Thus, I was somewhat distressed to see a
local dissertation in which R^2=44.6% shrank to 0.45%, and yet it was still claimed that "it may be possible to generalize prediction in a referred population" (Thompson, 1994a, p. 12).
Factors that inflate sampling error variance. Understanding what design features generate sampling error variance can facilitate more thoughtful design formulation, and thus has some value in its own
right. Sampling error variance is greater when:
(a) sample size is smaller;
(b) the number of measured variables is greater; and
(c) the population effect size (i.e., parameter) is smaller.
The deleterious effects of small sample size are obvious. When we sample, there is more likelihood of "flukie" characterizations of the population with smaller samples, and the relative influence of
anomalous scores (i.e., outliers) is greater in smaller samples, at least if we use "classical" as against "modern" statistics.
Table 14 illustrates these variations as a function of different sample sizes for regression analyses each involving 3 predictor variables and presumed population parameter R^2 equal to 50%. These
results illustrate that the sampling error due to sample size is not a linear (i.e., constant-rate) function of sample size changes. For example, when sample size changes from n=10 to n=20, the shrinkage changes from 25.00% (R^2=50% - R^2*=25.00%) to 9.37% (R^2=50% - R^2*=40.63%). But even more than doubling sample size from n=20 to n=45 changes shrinkage only from 9.37% (R^2=50% - R^2*=40.63%) to 3.66% (R^2=50% - R^2*=46.34%).
INSERT TABLE 14 ABOUT HERE.
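The Table 14 sample-size pattern can be verified directly from the Ezekiel formula, as in the sketch below, which assumes 3 predictors and a parameter R^2 of 50%, as in the text.

def ezekiel_adjusted_r2(r2, n, v):
    # Ezekiel (1930) correction, as given above
    return 1 - ((n - 1) / (n - v - 1)) * (1 - r2)

for n in (10, 20, 45):
    r2_star = ezekiel_adjusted_r2(0.50, n, 3)
    print(n, r2_star, 0.50 - r2_star)
# n=10 gives R^2* = .25 (25 points of shrinkage); n=20 gives about .406; n=45 gives about .463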
The influence of the number of measured variables is also fairly straightforward. The more variables we sample, the greater is the likelihood that an anomalous score will be incorporated into the sample.
The common language describing a person as an "outlier" should not be erroneously interpreted to mean either (a) that a given person is an outlier on all variables or (b) that a given score is an
outlier as regards all statistics (e.g., on the mean versus the correlation). For example, for the following data Amanda's score may be outlying as regards M[Y], but not as regards r[XY] (which here
equals +1; see Walsh, 1996).
Person X[i] Y[i]
Kevin 1 2
Jason 2 4
Sherry 3 6
Amanda 48 96
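A two-line check confirms the point for these four listed scores: the correlation is exactly +1.0 even though Amanda's scores lie far from the other three cases.

import numpy as np

# The four scores listed above: Y is exactly 2X, so r is exactly +1.0
x = np.array([1, 2, 3, 48])
y = np.array([2, 4, 6, 96])
print(np.corrcoef(x, y)[0, 1])  # 1.0 -- Amanda is not an outlier with respect to r[XY]
print(y.mean())                 # 27.0 -- but her Y score pulls M[Y] far above the other scores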
Again, as reported in Table 14, the influence of the number of measured variables on shrinkage is not monotonic.
Less obvious is why the estimated population parameter effect size (i.e., the estimate based on the sample statistic) impacts shrinkage. The easiest way to understand this is to conceptualize the
population for a Pearson product-moment study. Let's say the population squared correlation is +1. In this instance, even ridiculously small samples of any 2 or 3 or 4 pairs of scores will invariably
yield a sample r^2 of 100% (as long as both X and Y as sampled are variables, and therefore r is "defined," in that illegal division is not required by the formula r = COV[XY] / [SD[X] x SD[Y]]).
Again as suggested by the Table 14 examples, the influence of increased sample size on decreased shrinkage is not monotonic. [Thus, the use of a sample r=.779 in the Table 8 heuristic data for the
bootstrap example theoretically should have resulted in relatively little variation in sample estimates across resamples.]
Indeed, these three influences on sampling error must be considered as they simultaneously interact with each other. For example, as suggested by the previous discussion, the influence of sample size
is an influence conditional on the estimated parameter effect size. Table 15 illustrates these interactions for examples all of which involve shrinkage of a 5% decrement downward from the original R^2 value.
INSERT TABLE 15 ABOUT HERE.
Pros and cons of the effect size classes. It is not clear that researchers should uniformly prefer one effect index over another, or even one class of indices over the other. The standardized
difference indices do have one considerable advantage: they tend to be readily comparable across studies because they are expressed "metric-free" (i.e., the division by SD removes the metric from the reported difference in means).
However, variance-accounted-for effect sizes can be directly computed in all studies. Furthermore, the use of variance-accounted-for effect sizes has the considerable heuristic value of forcing
researchers to recognize that all parametric methods are part of a single general linear model family (cf. Cohen, 1968; Knapp, 1978).
In any case, the two effect sizes can be re-expressed in terms of each other. Cohen (1988, p. 22) provided a general table for this purpose. A d can also be converted to an r using Cohen's (1988, p.
23) formula #2.2.6:
r = d / [(d^2 + 4)^.5]
= 0.8 / [(0.8^2 + 4)^.5]
= 0.8 / [(0.64 + 4)^.5]
= 0.8 / [( 4.64 )^.5]
= 0.8 / 2.154
= 0.371 .
An r can be converted to a d using Friedman's (1968, p. 246) formula #6:
d = [2 (r)] / [(1 - r^2)^.5]
= [2 ( 0.371 )] / [(1 - 0.371^2)^.5]
= [2 (0.371)] / [(1 - 0.1376)^.5]
= [2 (0.371)] / (0.8624)^.5
= [2 (0.371)] / 0.9286
= 0.742 / 0.9286
= 0.799 .
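The two conversion formulas can be expressed compactly, as in the following sketch, which reproduces the worked values above (allowing for the rounding carried at each step).

import math

def d_to_r(d):
    # Cohen (1988) formula 2.2.6
    return d / math.sqrt(d ** 2 + 4)

def r_to_d(r):
    # Friedman (1968) formula 6
    return 2 * r / math.sqrt(1 - r ** 2)

print(d_to_r(0.8))    # ~ .371, as in the worked example above
print(r_to_d(0.371))  # ~ .799; carrying full precision instead recovers .8 exactly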
Effect Size Interpretation. Schmidt and Hunter (1997) recently argued that "logic-based arguments [against statistical testing] seem to have had only a limited impact... [perhaps due to] the virtual
brainwashing in significance testing that all of us have undergone" (pp. 38-39). They also spoke of a "psychology of addiction to significance testing" (Schmidt & Hunter, 1997, p. 49).
For too long researchers have used statistical significance tests in an illusory atavistic escape from the responsibility for defending the value of their results. Our p values were implicitly
invoked as the universal coinage with which to argue result noteworthiness (and replicability). But as I have previously noted,
Statistics can be employed to evaluate the probability of an event. But importance is a question of human values, and math cannot be employed as an atavistic escape (à la Fromm's Escape
from Freedom) from the existential human responsibility for making value judgments. If the computer package did not ask you your values prior to its analysis, it could not have considered
your value system in calculating p's, and so p's cannot be blithely used to infer the value of research results. (Thompson, 1993b, p. 365)
The problem is that the normative traditions of contemporary social science have not yet evolved to accommodate personal values explication as part of our work. As I have suggested elsewhere (Thompson,
Normative practices for evaluating such [values] assertions will have to evolve. Research results should not be published merely because the individual researcher thinks the results are
noteworthy. By the same token, editors should not quash research reports merely because they find explicated values unappealing. These resolutions will have to be formulated in a spirit
of reasoned comity. (p. 175)
In his seminal book on power analysis, Cohen (1969, 1988, pp. 24-27) suggested values for what he judged to be "small," "medium," and "large" effect sizes:
Characterization      d       r^2
"small"              .2      1.0%
"medium"             .5      5.9%
"large"              .8     13.8%
Cohen (1988) was characterizing what he regarded as the typicality of effect sizes across the broad published literature of the social sciences. Indeed, some empirical studies suggest that Cohen's
characterization of typicality is reasonably accurate (Glass, 1979; Olejnik, 1984).
However, as Cohen (1988) himself emphasized:
The terms "small," "medium," and "large" are relative, not only to each other, but to the content area of behavioral science or even more particularly to the specific content and research
method being employed in any given investigation... In the face of this relativity, there is a certain risk inherent in offering conventional operational definitions... in as diverse a
field of inquiry as behavioral science... [This] common conventional frame of reference... is recommended for use only when no better basis for estimating the ES index is available. (p.
25, emphasis added)
If in evaluating effect size we apply Cohen's conventions (against his wishes) with the same rigidity with which we have traditionally applied the α=.05 statistical significance testing convention, we will merely be being stupid in a new metric.
In defending our subjective judgments that an effect size is noteworthy in our personal value system, we must recognize that inherently any two researchers with individual values differences may
reach different conclusions regarding the noteworthiness of the exact same effect even in the same study. And, of course, the same effect size in two different inquiries may differ radically in
noteworthiness. Even small effects will be deemed noteworthy, if they are replicable, when inquiry is conducted as regards highly valued outcomes. Thus, Gage (1978) pointed out that even though the
relationship between cigarette smoking and lung cancer is relatively "small" (i.e., r^2 = 1% to 2%):
Sometimes even very weak relationships can be important... [O]n the basis of such correlations, important public health policy has been made and millions of people have changed strong
habits. (p. 21)
Confidence Intervals for Effects. It often is useful to present confidence intervals for effect sizes. For example, a series of confidence intervals across variables or studies can be conveyed in a
concise and powerful graphic. Such intervals might incorporate information regarding the theoretical or the empirical (i.e., bootstrap) estimates of effect variability across samples. However, as I
have noted elsewhere,
If we mindlessly interpret a confidence interval with reference to whether the interval subsumes zero, we are doing little more than nil hypothesis statistical testing. But if we
interpret the confidence intervals in our study in the context of the intervals in all related previous studies, the true population parameters will eventually be estimated across
studies, even if our prior expectations regarding the parameters are wildly wrong (Schmidt, 1996). (Thompson, 1998b, p. 799)
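As one concrete (and purely illustrative) possibility, a percentile interval can be read directly off a bootstrapped distribution of an effect size; in the sketch below the bootstrapped r^2 values are simulated rather than taken from a real resampling run.

import numpy as np

def percentile_interval(boot_estimates, level=0.95):
    # Simple percentile confidence interval from a bootstrapped distribution of an effect size
    tail = (1 - level) / 2 * 100
    return np.percentile(boot_estimates, [tail, 100 - tail])

# Hypothetical bootstrapped r^2 estimates standing in for a real resampling run
rng = np.random.default_rng(7)
boot_r2 = np.clip(rng.normal(0.20, 0.05, size=2000), 0, 1)
print(percentile_interval(boot_r2))  # interpret against intervals from related prior studies,
                                     # not merely against whether the interval subsumes zero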
Conditions Necessary (and Sufficient) for Change
Criticisms of conventional statistical significance testing are not new (cf. Berkson, 1938; Boring, 1919), though the publication of such criticisms does appear to be escalating at an exponentially
increasing rate (Anderson et al., 1999). Nearly 40 years ago Rozeboom (1960) observed that "the perceptual defenses of psychologists [and other researchers, too] are particularly efficient when
dealing with matters of methodology, and so the statistical folkways of a more primitive past continue to dominate the local scene" (p. 417).
Table 16 summarizes some of the features of contemporary practice, the problems associated with these practices, and potential improvements in practice. The implementation of these "modern" inquiry
methods would result in the more thoughtful specification of research hypotheses. The design of studies with more statistical power and precision would be more likely, because power analyses would be
based on more informed and realistic effect size estimates as an effect literature matured (Rossi, 1997).
INSERT TABLE 16 ABOUT HERE.
Emphasizing effect size reporting would eventually facilitate the development of theories that support more specific expectations. Universal effect size reporting would facilitate improved
meta-analyses of literature in which cumulated effects would not be based on as many strong assumptions that are probably somewhat infrequently met. Social science would finally become the business
of identifying valuable effects that replicate under stated conditions; replication would no longer receive the hollow affection of the statistical significance test, and instead the replication of
specific effects would be explicitly and directly addressed.
What are the conditions necessary and sufficient to persuade researchers to pay less attention to the likelihood of sample statistics, based on assumptions that "nil" null hypotheses are true in the
population, and more attention to (a) effect sizes and (b) evidence of effect replicability? Certainly current doctoral curricula seem to have less and less space for quantitative training (Aiken et
al., 1990). And too much instruction teaches analysis as the rote application of methods sans rationale (Thompson, 1998a). And many textbooks, too, are flawed (Carver, 1978; Cohen, 1994).
But improved textbooks will not alone provide the magic bullet leading to improved practice. The computation and interpretation of effect sizes are already emphasized in some texts (cf. Hays, 1981).
For example, Loftus and Loftus (1982) in their book argued that "it is our judgment that accounting for variance is really much more meaningful than testing for [statistical] significance" (p. 499).
Editorial Policies
I believe that changes in journal editorial policies are the necessary (and sufficient) conditions to move the field. As Sedlmeier and Gigerenzer (1989) argued, "there is only one force that can
effect a change, and that is the same force that helped institutionalize null hypothesis testing as the sine qua non for publication, namely, the editors of the major journals" (p. 315). Glantz
(1980) agreed, noting that "The journals are the major force for quality control in scientific work" (p. 3). And as Kirk (1996) argued, changing requirements in journal editorial policies as regards
effect size reporting "would cause a chain reaction: Statistics teachers would change their courses, textbook authors would revise their statistics books, and journal authors would modify their
inference strategies" (p. 757).
Fortunately, some journal editors have elaborated policies "requiring" rather than merely "encouraging" (APA, 1994, p. 18) effect size reporting (cf. Heldref Foundation, 1997, pp. 95-96; Thompson,
1994b, p. 845). It is particularly noteworthy that editorial policies even at one APA journal now indicate that:
If an author decides not to present an effect size estimate along with the outcome of a significance test, I will ask the author to provide specific justification for why effect sizes are
not reported. So far, I have not heard a good argument against presenting effect sizes. Therefore, unless there is a real impediment to doing so, you should routinely include effect size
information in the papers you submit. (Murphy, 1997, p. 4)
Leadership from AERA
Professional disciplines, like glaciers, move slowly, but inexorably. The hallmark of a profession is standards of conduct. And, as Biesanz and Biesanz (1969) observed, "all members of the profession
are considered colleagues, equals, who are expected to uphold the dignity and mystique of the profession in return for the protection of their colleagues" (p. 155). Especially in academic
professions, there is some hesitance to change existing standards, or to impose more standards than seem necessary to realize common purposes.
As might be expected, given these considerations, in its long history AERA has been reticent to articulate standards for the conduct of educational inquiry. Most such expectations have been
articulated only in conjunction with other organizations (e.g., AERA/APA/NCME, 1985). For example, AERA participated with 15 other organizations in the Joint Committee on Standards for Educational
Evaluation's (1994) articulation of the program evaluation standards. These were the first-ever American National Standards Institute (ANSI)-approved standards for professional conduct. As
ANSI-approved standards, these represent de facto THE American standards for program evaluation (cf. Sanders, 1994).
As Kaestle (1993) noted some years ago,
...[I]f education researchers could reverse their reputation for irrelevance, politicization, and disarray, however, they could rely on better support because most people, in the
government and the public at large, believe that education is critically important. (pp. 30-31)
Some of the desirable movements of the field may be facilitated by the on-going work of the APA Task Force on Statistical Inference (Azar, 1997; Shea, 1996).
But AERA, too, could offer academic leadership. The children who are served by education need not wait for AERA to wait for APA to lead via continuing revisions of the APA publication manual. AERA,
through the new Research Advisory Committee, and other AERA organs, might encourage the formulation of editorial policies that place less emphasis on statistical tests based on "nil" null hypotheses,
and more emphasis on evaluating whether educational interventions and theories yield valued effect sizes that replicate under stated conditions.
It would be a gratifying experience to see our organization lead movement of the social sciences. Offering credible academic leadership might be one way that educators could confront the "awful
reputation" (Kaestle, 1993) ascribed to our research. As I argued 3 years ago, if education "studies inform best practice in classrooms and other educational settings, the stakeholders in these
locations certainly deserve better treatment from the [educational] research community via our analytic choices" (p. 29).
References
Note: References designated with asterisks are empirical studies of research practices.
Abelson, R.P. (1997). A retrospective on the significance test ban of 1999 (If there were no significance tests, they would be invented). In L.L. Harlow, S.A. Mulaik & J.H. Steiger (Eds.), What if there were no significance tests? (pp. 117-141). Mahwah, NJ: Erlbaum.
Aiken, L.S., West, S.G., Sechrest, L., Reno, R.R., with Roediger, H.L., Scarr, S., Kazdin, A.E., & Sherman, S.J. (1990). The training in statistics, methodology, and measurement in psychology.
American Psychologist, 45, 721-734.
American Educational Research Association, American Psychological Association, National Council on Measurement in Education. (1985). Standards for educational and psychological testing.
Washington, DC: Author.
American Psychological Association. (1974). Publication manual of the American Psychological Association (2nd ed.). Washington, DC: Author.
American Psychological Association. (1994). Publication manual of the American Psychological Association (4th ed.). Washington, DC: Author.
Anderson, D.R., Burnham, K.P., & Thompson, W.L. (1999). Null hypothesis testing in ecological studies: Problems, prevalence, and an alternative. Manuscript submitted for publication.
Anonymous. (1998). [Untitled letter]. In G. Saxe & A. Schoenfeld, Annual meeting 1999. Educational Researcher, 27(5), 41.
*Atkinson, D.R., Furlong, M.J., & Wampold, B.E. (1982). Statistical significance, reviewer evaluations, and the scientific process: Is there a (statistically) significant relationship? Journal of
Counseling Psychology, 29, 189-194.
Atkinson, R.C., & Jackson, G.B. (Eds.). (1992). Research and education reform: Roles for the Office of Educational Research and Improvement. Washington, DC: National Academy of Sciences. (ERIC
Document Reproduction Service No. ED 343 961)
Azar, B. (1997). APA task force urges a harder look at data. The APA Monitor, 28(3), 26.
Bagozzi, R.P., Fornell, C., & Larcker, D.F. (1981). Canonical correlation analysis as a special case of a structural relations model. Multivariate Behavioral Research, 16, 437-454.
Bakan, D. (1966). The test of significance in psychological research. Psychological Bulletin, 66, 423-437.
Barnette, J.J., & McLean, J.E. (1998, November). Protected versus unprotected multiple comparison procedures. Paper presented at the annual meeting of the Mid-South Educational Research
Association, New Orleans.
Berkson, J. (1938). Some difficulties of interpretation encountered in the application of the chi-square test. Journal of the American Statistical Association, 33, 526-536.
Biesanz, J., & Biesanz, M. (1969). Introduction to sociology. Englewood Cliffs, NJ: Prentice-Hall.
Borgen, F.H., & Seling, M.J. (1978). Uses of discriminant analysis following MANOVA: Multivariate statistics for multivariate purposes. Journal of Applied Psychology, 63, 689-697.
Boring, E.G. (1919). Mathematical vs. scientific importance. Psychological Bulletin, 16, 335-338.
Breunig, N.A. (1995, November). Understanding the sampling distribution and its use in testing statistical significance. Paper presented at the annual meeting of the Mid-South Educational
Research Association, Biloxi, MS. (ERIC Document Reproduction Service No. ED 393 939)
Carver, R. (1978). The case against statistical significance testing. Harvard Educational Review, 48, 378-399.
Carver, R. (1993). The case against statistical significance testing, revisited. Journal of Experimental Education, 61, 287-292.
Cliff, N. (1987). Analyzing multivariate data. San Diego: Harcourt Brace Jovanovich.
Cohen, J. (1968). Multiple regression as a general data-analytic system. Psychological Bulletin, 70, 426-443.
Cohen, J. (1969). Statistical power analysis for the behavioral sciences. New York: Academic Press.
*Cohen, J. (1979). Clinical psychologists' judgments of the scientific merit and clinical relevance of psychotherapy outcome research. Journal of Consulting and Clinical Psychology, 47, 421-423.
Cohen, J. (1988). Statistical power analysis for the behavioral sciences (2nd ed.). Hillsdale, NJ: Erlbaum.
Cohen, J. (1994). The earth is round (p < .05). American Psychologist, 49, 997-1003.
Cortina, J.M., & Dunlap, W.P. (1997). Logic and purpose of significance testing. Psychological Methods, 2, 161-172.
Cronbach, L.J. (1957). The two disciplines of scientific psychology. American Psychologist, 12, 671-684.
Cronbach, L.J. (1975). Beyond the two disciplines of psychology. American Psychologist, 30, 116-127.
Davison, A.C., & Hinkley, D.V. (1997). Bootstrap methods and their applications. Cambridge: Cambridge University Press.
Diaconis, P., & Efron, B. (1983). Computer-intensive methods in statistics. Scientific American, 248(5), 116-130.
*Edgington, E.S. (1964). A tabulation of inferential statistics used in psychology journals. American Psychologist, 19, 202-203.
*Edgington, E.S. (1974). A new tabulation of statistical procedures used in APA journals. American Psychologist, 29, 25-26.
Efron, B. (1979). Bootstrap methods: Another look at the jackknife. The Annals of Statistics, 7, 1-26.
Efron, B., & Tibshirani, R.J. (1993). An introduction to the bootstrap. New York: Chapman and Hall.
Eisner, E.W. (1983). Anastasia might still be alive, but the monarchy is dead. Educational Researcher, 12(5), 13-14, 23-34.
*Elmore, P.B., & Woehlke, P.L. (1988). Statistical methods employed in American Educational Research Journal, Educational Researcher, and Review of Educational Research from 1978 to 1987.
Educational Researcher, 17(9), 19-20.
*Emmons, N.J., Stallings, W.M., & Layne, B.H. (1990, April). Statistical methods used in American Educational Research Journal, Journal of Educational Psychology, and Sociology of Education from
1972 through 1987. Paper presented at the annual meeting of the American Educational Research Association, Boston, MA. (ERIC Document Reproduction Service No. ED 319 797)
Ezekiel, M. (1930). Methods of correlational analysis. New York: Wiley.
Fan, X. (1996). Canonical correlation analysis as a general analytic model. In B. Thompson (Ed.), Advances in social science methodology (Vol. 4, pp. 71-94). Greenwich, CT: JAI Press.
Fan, X. (1997). Canonical correlation analysis and structural equation modeling: What do they have in common? Structural Equation Modeling, 4, 65-79.
Fetterman, D.M. (1982). Ethnography in educational research: The dynamics of diffusion. Educational Researcher, 11(3), 17-22, 29.
Fish, L.J. (1988). Why multivariate methods are usually vital. Measurement and Evaluation in Counseling and Development, 21, 130-137.
Fisher, R.A. (1936). The use of multiple measurements in taxonomic problems. Annals of Eugenics, 7, 179-188.
Frick, R.W. (1996). The appropriate use of null hypothesis testing. Psychological Methods, 1, 379-390.
Friedman, H. (1968). Magnitude of experimental effect and a table for its rapid estimation. Psychological Bulletin, 70, 245-251.
Gage, N.L. (1978). The scientific basis of the art of teaching. New York: Teachers College Press.
Gage, N.L. (1985). Hard gains in the soft sciences: The case of pedagogy. Bloomington, IN: Phi Delta Kappa Center on Evaluation, Development, and Research.
Gall, M.D., Borg, W.R., & Gall, J.P. (1996). Educational research: An introduction (6th ed.). White Plains, NY: Longman.
Glantz, S.A. (1980). Biostatistics: How to detect, correct and prevent errors in the medical literature. Circulation, 61, 1-7.
Glass, G.V (1976). Primary, secondary, and meta-analysis of research. Educational Researcher, 5(10), 3-8.
*Glass, G.V (1979). Policy for the unpredictable (uncertainty research and policy). Educational Researcher, 8(9), 12-14.
*Goodwin, L.D., & Goodwin, W.L. (1985). Statistical techniques in AERJ articles, 1979-1983: The preparation of graduate students to read the educational research literature. Educational
Researcher, 14(2), 5-11.
*Greenwald, A. (1975). Consequences of prejudice against the null hypothesis. Psychological Bulletin, 82, 1-20.
Grimm, L.G., & Yarnold, P.R. (Eds.). (1995). Reading and understanding multivariate statistics. Washington, DC: American Psychological Association.
*Hall, B.W., Ward, A.W., & Comer, C.B. (1988). Published educational research: An empirical study of its quality. Journal of Educational Research, 81, 182-189.
Harlow, L.L., Mulaik, S.A., & Steiger, J.H. (Eds.). (1997). What if there were no significance tests? Mahwah, NJ: Erlbaum.
Hays, W. L. (1981). Statistics (3rd ed.). New York: Holt, Rinehart and Winston.
Hedges, L.V. (1981). Distribution theory for Glass's estimator of effect sizes and related estimators. Journal of Educational Statistics, 6, 107-128.
Heldref Foundation. (1997). Guidelines for contributors. Journal of Experimental Education, 65, 95-96.
Henard, D.H. (1998, January). Suppressor variable effects: Toward understanding an elusive data dynamic. Paper presented at the annual meeting of the Southwest Educational Research Association,
Houston. (ERIC Document Reproduction Service No. ED 416 215)
Herzberg, P.A. (1969). The parameters of cross-validation. Psychometrika Monograph Supplement, 16, 1-67.
Hinkle, D.E., Wiersma, W., & Jurs, S.G. (1998). Applied statistics for the behavioral sciences (4th ed.). Boston: Houghton Mifflin.
Horst, P. (1966). Psychological measurement and prediction. Belmont, CA: Wadsworth.
Huberty, C.J (1994). Applied discriminant analysis. New York: Wiley and Sons.
Huberty, C.J, & Barton, R. (1989). An introduction to discriminant analysis. Measurement and Evaluation in Counseling and Development, 22, 158-168.
Huberty, C.J, & Morris, J.D. (1988). A single contrast test procedure. Educational and Psychological Measurement, 48, 567-578.
Huberty, C.J, & Pike, C.J. (in press). On some history regarding statistical testing. In B. Thompson (Ed.), Advances in social science methodology (Vol. 5). Stamford, CT: JAI Press.
Humphreys, L.G. (1978). Doing research the hard way: Substituting analysis of variance for a problem in correlational analysis. Journal of Educational Psychology, 70, 873-876.
Humphreys, L.G., & Fleishman, A. (1974). Pseudo-orthogonal and other analysis of variance designs involving individual-differences variables. Journal of Educational Psychology, 66, 464-472.
Hunter, J.E., & Schmidt, F.L. (1990). Methods of meta-analysis: Correcting error and bias in research findings. Newbury Park, CA: Sage.
Joint Committee on Standards for Educational Evaluation. (1994). The program evaluation standards: How to assess evaluations of educational programs (2nd ed.). Newbury Park, CA: Sage.
Kaestle, C.F. (1993). The awful reputation of education research. Educational Researcher, 22(1), 23, 26-31.
Kaiser, H.F. (1976). Review of Factor analysis as a statistical method. Educational and Psychological Measurement, 36, 586-589.
Kerlinger, F. N. (1986). Foundations of behavioral research (3rd ed.). New York: Holt, Rinehart and Winston.
Kerlinger, F. N., & Pedhazur, E. J. (1973). Multiple regression in behavioral research. New York: Holt, Rinehart and Winston.
*Keselman, H.J., Huberty, C.J, Lix, L.M., Olejnik, S., Cribbie, R., Donahue, B., Kowalchuk, R.K., Lowman, L.L., Petoskey, M.D., Keselman, J.C., & Levin, J.R. (1998). Statistical practices of
educational researchers: An analysis of their ANOVA, MANOVA and ANCOVA analyses. Review of Educational Research, 68, 350-386.
Keselman, H.J., Kowalchuk, R.K., & Lix, L.M. (1998). Robust nonorthogonal analyses revisited: An update based on trimmed means. Psychometrika, 63, 145-163.
Keselman, H.J., Lix, L.M., & Kowalchuk, R.K. (1998). Multiple comparison procedures for trimmed means. Psychological Methods, 3, 123-141.
*Kirk, R. (1996). Practical significance: A concept whose time has come. Educational and Psychological Measurement, 56, 746-759.
Knapp, T. R. (1978). Canonical correlation analysis: A general parametric significance testing system. Psychological Bulletin, 85, 410-416.
Kromrey, J.D., & Hines, C.V. (1996). Estimating the coefficient of cross-validity in multiple regression: A comparison of analytical and empirical methods. Journal of Experimental Education, 64,
Lancaster, B.P. (in press). Defining and interpreting suppressor effects: Advantages and limitations. In B. Thompson (Ed.), Advances in social science methodology (Vol. 5). Stamford, CT: JAI Press.
*Lance, T., & Vacha-Haase, T. (1998, August). The Counseling Psychologist: Trends and usages of statistical significance testing. Paper presented at the annual meeting of the American
Psychological Association, San Francisco.
Levin, J.R. (1998). To test or not to test H[0]? Educational and Psychological Measurement, 58, 311-331.
Loftus, G.R., & Loftus, E.F. (1982). Essence of statistics. Monterey, CA: Brooks/Cole.
Lord, F.M. (1950). Efficiency of prediction when a regression equation from one sample is used in a new sample (Research Bulletin 50-110). Princeton, NJ: Educational Testing Service.
Ludbrook, J., & Dudley, H. (1998). Why permutation tests are superior to t and F tests in medical research. The American Statistician, 52, 127-132.
Lunneborg, C.E. (1987). Bootstrap applications for the behavioral sciences. Seattle: University of Washington.
Lunneborg, C.E. (1999). Data analysis by resampling: Concepts and applications. Pacific Grove, CA: Duxbury.
Manly, B.F.J. (1994). Randomization and Monte Carlo methods in biology (2nd ed.). London: Chapman and Hall.
Meehl, P.E. (1978). Theoretical risks and tabular asterisks: Sir Karl, Sir Ronald, and the slow progress of soft psychology. Journal of Consulting and Clinical Psychology, 46, 806-834.
Mittag, K. (1992, January). Correcting for systematic bias in sample estimates of population variances: Why do we divide by n-1?. Paper presented at the annual meeting of the Southwest
Educational Research Association, Houston, TX. (ERIC Document Reproduction Service No. ED 341 728)
*Mittag, K.G. (1999, April). A national survey of AERA members' perceptions of the nature and meaning of statistical significance tests. Paper presented at the annual meeting of the American
Educational Research Association, Montreal.
Murphy, K.R. (1997). Editorial. Journal of Applied Psychology, 82, 3-5.
*Nelson, N., Rosenthal, R., & Rosnow, R.L. (1986). Interpretation of significance levels and effect sizes by psychological researchers. American Psychologist, 41, 1299-1301.
*Nilsson, J., & Vacha-Haase, T. (1998, August). A review of statistical significance reporting in the Journal of Counseling Psychology. Paper presented at the annual meeting of the American
Psychological Association, San Francisco.
*Oakes, M. (1986). Statistical inference: A commentary for the social and behavioral sciences. New York: Wiley.
Olejnik, S.F. (1984). Planning educational research: Determining the necessary sample size. Journal of Experimental Education, 53, 40-48.
Pedhazur, E. J. (1982). Multiple regression in behavioral research: Explanation and prediction (2nd ed.). New York: Holt, Rinehart and Winston.
Pedhazur, E. J., & Schmelkin, L. P. (1991). Measurement, design, and analysis: An integrated approach. Hillsdale, NJ: Erlbaum.
*Reetz, D., & Vacha-Haase, T. (1998, August). Trends and usages of statistical significance testing in adult development and aging research: A review of Psychology and Aging. Paper presented at
the annual meeting of the American Psychological Association, San Francisco.
Reinhardt, B. (1996). Factors affecting coefficient alpha: A mini Monte Carlo study. In B. Thompson (Ed.), Advances in social science methodology (Vol. 4, pp. 3-20). Greenwich, CT: JAI Press.
Rennie, K.M. (1997, January). Understanding the sampling distribution: Why we divide by n-1 to estimate the population variance. Paper presented at the annual meeting of the Southwest Educational
Research Association, Austin. (ERIC Document Reproduction Service No. ED 406 442)
Rokeach, M. (1973). The nature of human values. New York: Free Press.
Rosenthal, R. (1979). The "file drawer problem" and tolerance for null results. Psychological Bulletin, 86, 638-641.
Rosenthal, R. (1991). Meta-analytic procedures for social research (rev. ed.). Newbury Park, CA: Sage.
Rosenthal, R. (1994). Parametric measures of effect size. In H. Cooper & L.V. Hedges (Eds.), The handbook of research synthesis (pp. 231-244). New York: Russell Sage Foundation.
*Rosenthal, R., & Gaito, J. (1963). The interpretation of level of significance by psychological researchers. Journal of Psychology, 55, 33-38.
Rosnow, R.L., & Rosenthal, R. (1989). Statistical procedures and the justification of knowledge in psychological science. American Psychologist, 44, 1276-1284.
Rossi, J.S. (1997). A case study in the failure of psychology as a cumulative science: The spontaneous recovery of verbal learning. In L.L. Harlow, S.A. Mulaik & J.H. Steiger (Eds.), What if
there were no significance tests? (pp. 176-197). Mahwah, NJ: Erlbaum.
Rozeboom, W.W. (1960). The fallacy of the null hypothesis significance test. Psychological Bulletin, 57, 416-428.
Rozeboom, W.W. (1997). Good science is abductive, not hypothetico-deductive. In L.L. Harlow, S.A. Mulaik & J.H. Steiger (Eds.), What if there were no significance tests? (pp. 335-392). Mahwah,
NJ: Erlbaum.
Sanders, J.R. (1994). The process of developing national standards that meet ANSI guidelines. Journal of Experimental Education, 63, 5-12.
Saxe, G., & Schoenfeld, A. (1998). Annual meeting 1999. Educational Researcher, 27(5), 41.
Schmidt, F.L. (1996). Statistical significance testing and cumulative knowledge in psychology: Implications for the training of researchers. Psychological Methods, 1, 115-129.
Schmidt, F.L., & Hunter, J.E. (1997). Eight common but false objections to the discontinuation of significance testing in the analysis of research data. In L.L. Harlow, S.A. Mulaik & J.H. Steiger
(Eds.), What if there were no significance tests? (pp. 37-64). Mahwah, NJ: Erlbaum.
Sedlmeier, P., & Gigerenzer, G. (1989). Do studies of statistical power have an effect on the power of studies? Psychological Bulletin, 105, 309-316.
Shaver, J. (1985). Chance and nonsense. Phi Delta Kappan, 67(1), 57-60.
Shaver, J. (1993). What statistical significance testing is, and what it is not. Journal of Experimental Education, 61, 293-316.
Shea, C. (1996). Psychologists debate accuracy of "significance test." Chronicle of Higher Education, 42(49), A12, A16.
Snyder, P., & Lawson, S. (1993). Evaluating results using corrected and uncorrected effect size estimates. Journal of Experimental Education, 61, 334-349.
*Snyder, P.A., & Thompson, B. (1998). Use of tests of statistical significance and other analytic choices in a school psychology journal: Review of practices and suggested alternatives. School
Psychology Quarterly, 13, 335-348.
Sprent, P. (1998). Data driven statistical methods. London: Chapman and Hall.
Tatsuoka, M.M. (1973a). An examination of the statistical properties of a multivariate measure of strength of relationship. Urbana: University of Illinois. (ERIC Document Reproduction Service No.
ED 099 406)
Tatsuoka, M.M. (1973b). Multivariate analysis in educational research. In F. N. Kerlinger (Ed.), Review of research in education (pp. 273-319). Itasca, IL: Peacock.
Thompson, B. (1984). Canonical correlation analysis: Uses and interpretation. Newbury Park, CA: Sage.
Thompson, B. (1985). Alternate methods for analyzing data from experiments. Journal of Experimental Education, 54, 50-55.
Thompson, B. (1986a). ANOVA versus regression analysis of ATI designs: An empirical investigation. Educational and Psychological Measurement, 46, 917-928.
Thompson, B. (1986b, November). Two reasons why multivariate methods are usually vital. Paper presented at the annual meeting of the Mid-South Educational Research Association, Memphis.
Thompson, B. (1988a, November). Common methodology mistakes in dissertations: Improving dissertation quality. Paper presented at the annual meeting of the Mid-South Educational Research
Association, Louisville, KY. (ERIC Document Reproduction Service No. ED 301 595)
Thompson, B. (1988b). Program FACSTRAP: A program that computes bootstrap estimates of factor structure. Educational and Psychological Measurement, 48, 681-686.
Thompson, B. (1991). A primer on the logic and use of canonical correlation analysis. Measurement and Evaluation in Counseling and Development, 24, 80-95.
Thompson, B. (1992a). DISCSTRA: A computer program that computes bootstrap resampling estimates of descriptive discriminant analysis function and structure coefficients and group centroids.
Educational and Psychological Measurement, 52, 905-911.
Thompson, B. (1992b, April). Interpreting regression results: beta weights and structure coefficients are both important. Paper presented at the annual meeting of the American Educational
Research Association, San Francisco. (ERIC Document Reproduction Service No. ED 344 897)
Thompson, B. (1992c). Two and one-half decades of leadership in measurement and evaluation. Journal of Counseling and Development, 70, 434-438.
Thompson, B. (1993a, April). The General Linear Model (as opposed to the classical ordinary sums of squares) approach to analysis of variance should be taught in introductory statistical methods
classes. Paper presented at the annual meeting of the American Educational Research Association, Atlanta. (ERIC Document Reproduction Service No. ED 358 134)
Thompson, B. (1993b). The use of statistical significance tests in research: Bootstrap and other alternatives. Journal of Experimental Education, 61, 361-377.
Thompson, B. (1994a, April). Common methodology mistakes in dissertations, revisited. Paper presented at the annual meeting of the American Educational Research Association, New Orleans. (ERIC
Document Reproduction Service No. ED 368 771)
Thompson, B. (1994b). Guidelines for authors. Educational and Psychological Measurement, 54(4), 837-847.
Thompson, B. (1994c). Planned versus unplanned and orthogonal versus nonorthogonal contrasts: The neo-classical perspective. In B. Thompson (Ed.), Advances in social science methodology (Vol. 3,
pp. 3-27). Greenwich, CT: JAI Press.
Thompson, B. (1994d, February). Why multivariate methods are usually vital in research: Some basic concepts. Paper presented as a Featured Speaker at the biennial meeting of the Southwestern
Society for Research in Human Development (SWSRHD), Austin, TX. (ERIC Document Reproduction Service No. ED 367 687)
Thompson, B. (1995a). Exploring the replicability of a study's results: Bootstrap statistics for the multivariate case. Educational and Psychological Measurement, 55, 84-94.
Thompson, B. (1995b). Review of Applied discriminant analysis by C.J Huberty. Educational and Psychological Measurement, 55, 340-350.
Thompson, B. (1996). AERA editorial policies regarding statistical significance testing: Three suggested reforms. Educational Researcher, 25(2), 26-30.
Thompson, B. (1997a). Editorial policies regarding statistical significance tests: Further comments. Educational Researcher, 26(5), 29-32.
Thompson, B. (1997b). The importance of structure coefficients in structural equation modeling confirmatory factor analysis. Educational and Psychological Measurement, 57, 5-19.
Thompson, B. (1998a, April). Five methodology errors in educational research: The pantheon of statistical significance and other faux pas. Invited address presented at the annual meeting of the
American Educational Research Association, San Diego. (ERIC Document Reproduction Service No. ED 419 023) [also available on the Internet through URL: "index.htm"]
Thompson, B. (1998b). In praise of brilliance: Where that praise really belongs. American Psychologist, 53, 799-800.
Thompson, B. (1998c). Review of What if there were no significance tests? by L. Harlow, S. Mulaik & J. Steiger (Eds.). Educational and Psychological Measurement, 58, 332-344. | {"url":"http://people.cehd.tamu.edu/~bthompson/aeraad99.htm","timestamp":"2014-04-17T01:08:51Z","content_type":null,"content_length":"168102","record_id":"<urn:uuid:3f2b729c-131e-400b-8dc4-8ac14b46e6d9>","cc-path":"CC-MAIN-2014-15/segments/1397609526102.3/warc/CC-MAIN-20140416005206-00000-ip-10-147-4-33.ec2.internal.warc.gz"} |
The magnitude of the vectors F is 28 N, the force on the right is applied at an angle of 60°, and the mass of the block is 79 kg. If the surface is frictionless, what is the magnitude of the
resulting acceleration? Answer in units of m/s^2.
F = ma, so 28 N · cos(60°) = 79 kg · a. Then solve for a. Hope that helps.
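A quick way to see the arithmetic behind that answer (a minimal Python sketch; it simply mirrors the responder's single-force setup above, since the original figure is not reproduced here):

```python
import math

# Values from the problem statement; only the horizontal component of the
# 28 N force applied at 60 degrees is used, following the posted answer.
F = 28.0                  # N
angle = math.radians(60)  # applied angle
m = 79.0                  # kg

a = F * math.cos(angle) / m
print(f"a = {a:.3f} m/s^2")  # roughly 0.177 m/s^2
```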
You haven't written a testimonial for Owlfred. | {"url":"http://openstudy.com/updates/50c127c4e4b016b55a9e1365","timestamp":"2014-04-18T03:53:03Z","content_type":null,"content_length":"31141","record_id":"<urn:uuid:f9b9448e-51dc-47b2-93fc-069220faf480>","cc-path":"CC-MAIN-2014-15/segments/1397609532480.36/warc/CC-MAIN-20140416005212-00480-ip-10-147-4-33.ec2.internal.warc.gz"} |
Solving a*cos(x) + b*sin(x) = 1
April 18th 2010, 02:08 PM
Solving a*cos(x) + b*sin(x) = 1
Hi! The solution to this question may be completely trivial, but I still managed to get stuck on this equation. So I wish to solve:
$a \cos(x) + b \sin(x) = 1$
for $x$. Hence, I want to express $x$ as a function of $a$ and $b$. And for which values of $a$ and $b$ does there exist a solution?
April 18th 2010, 02:37 PM
I found that on the unit circle, i.e. $a^2 + b^2 = 1$, one finds the solution:
$a = \cos(x)$
$b = \sin(x)$
$\Rightarrow~x = \tan^{-1}\left(\frac{b}{a}\right)$
Is this the only solution to the problem?
April 18th 2010, 10:41 PM
Put $\tan(\theta)=a/b$ then:
$a \cos(x)+b \sin(x)=\sqrt{a^2+b^2}\,[\sin(\theta)\cos(x)+\cos(\theta)\sin(x)]=\sqrt{a^2+b^2}\,\sin(x+\theta)=1$
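A quick numerical check of this rearrangement (a minimal Python sketch; the sample values a = 2, b = 1 are illustrative only):

```python
import math

a, b = 2.0, 1.0                 # sample coefficients (illustrative)
r = math.hypot(a, b)            # sqrt(a^2 + b^2); must be >= 1 for a real solution
theta = math.atan2(a, b)        # so that tan(theta) = a/b
x = math.asin(1.0 / r) - theta  # from r*sin(x + theta) = 1

print(a * math.cos(x) + b * math.sin(x))  # prints 1.0 (up to rounding)
```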
For this to have any real solutions requires that $a^2+b^2 \ge 1$. | {"url":"http://mathhelpforum.com/trigonometry/139905-solving-cos-x-b-sin-x-1-a-print.html","timestamp":"2014-04-16T04:23:11Z","content_type":null,"content_length":"8721","record_id":"<urn:uuid:8b057ceb-8180-4e14-ad36-069220faf480>","cc-path":"CC-MAIN-2014-15/segments/1397609521512.15/warc/CC-MAIN-20140416005201-00152-ip-10-147-4-33.ec2.internal.warc.gz"}
American Mathematical Society
AMS Sectional Meeting Program by Day
Current as of Tuesday, April 12, 2005 15:10:44
2002 Fall Central Section Meeting
Madison, WI, October 12-13, 2002
Meeting #980
Associate secretaries:
Susan J Friedlander, AMS
Saturday October 12, 2002
• Saturday October 12, 2002, 7:30 a.m.-4:30 p.m.
Meeting Registration
Lobby, Van Vleck Hall
• Saturday October 12, 2002, 7:30 a.m.-4:30 p.m.
Exhibit and Book Sale
Room B227, Van Vleck Hall
• Saturday October 12, 2002, 9:00 a.m.-11:25 a.m.
Special Session on Arithmetic Algebraic Geometry, I
Room B113, Van Vleck Hall
Ken Ono, University of Wisconsin-Madison ono@math.wisc.edu
Tonghai Yang, University of Wisconsin-Madison thyang@math.wisc.edu
• Saturday October 12, 2002, 9:00 a.m.-10:40 a.m.
Special Session on Several Complex Variables. I
Room B317, Van Vleck Hall
Pat Ahern, University of Wisconsin-Madison ahern@math.wisc.edu
Xianghong Gong, University of Wisconsin-Madison gong@math.wisc.edu
Alex Nagel, University of Wisconsin-Madison nagel@math.wisc.edu
Jean-Pierre Rosay, University of Wisconsin-Madison jrosay@math.wisc.edu
□ 9:00 a.m.
□ 10:00 a.m.
Plurisubharmonic functions on ${\Bbb C}^n$ with logarithmic poles in a finite set.
Dan Coman*, Syracuse University
• Saturday October 12, 2002, 9:00 a.m.-11:25 a.m.
Special Session on Harmonic Analysis, I
Room B341, Van Vleck Hall
Alex Ionescu, University of Wisconsin-Madison ionescu@math.wisc.edu
Andreas Seeger, University of Wisconsin-Madison seeger@math.wisc.edu
• Saturday October 12, 2002, 9:00 a.m.-11:20 a.m.
Special Session on Partial Differential Equations and Geometry, I
Room B309, Van Vleck Hall
Sigurd Angenent, University of Wisconsin-Madison angenent@math.wisc.edu
Mikhail Feldman, University of Wisconsin-Madison feldman@math.wisc.edu
□ 9:00 a.m.
Number of minimal graphs with disjoint support.
Peter Li, UC, Irvine
Jiaping Wang*, University of Minnesota
□ 10:00 a.m.
Motion of small bubbles by normalized mean curvature in Riemannian manifolds.
N. D. Alikakos, University of Athens (Greece)
A. S. Freire*, University of Tennessee,Knoxville
□ 10:30 a.m.
□ 11:00 a.m.
• Saturday October 12, 2002, 9:00 a.m.-11:20 a.m.
Special Session on Arrangements of Hyperplanes, I
Room B219, Van Vleck Hall
Daniel C. Cohen, Louisiana State University cohen@math.lsu.edu
Peter Orlik, University of Wisconsin-Madison orlik@math.wisc.edu
Anne Shepler, University of California Santa Cruz ashepler@math.ucsc.edu
• Saturday October 12, 2002, 9:00 a.m.-11:20 a.m.
Special Session on Group Cohomology and Homotopy Theory, I
Room B235, Van Vleck Hall
Alejandro Adem, University of Wisconsin-Madison adem@math.wisc.edu
Jesper Grodal, Institute for Advanced Study jg@ias.edu
• Saturday October 12, 2002, 9:00 a.m.-11:25 a.m.
Special Session on Geometric Methods in Differential Equations, I
Room B305, Van Vleck Hall
Gloria Mari Beffa, University of Wisconsin-Madison maribeff@math.wisc.edu
Peter Olver, University of Minnesota olver@ima.umn.edu
□ 9:00 a.m.
Canonical regularization of the Lie algebra of pseudodifferential operators.
Thierry P Robart*, Howard University
Enrique G Reyes, University of Oklahoma
□ 9:30 a.m.
Charts for analytic Lie pseudogroups of infinite type.
Niky Kamran*, McGill University
□ 10:00 a.m.
□ 10:30 a.m.
Sub-Finsler geometry in dimension 3.
Jeanne N. Clelland*, University of Colorado, Boulder
Christopher G. Moseley, U.S. Military Academy
□ 11:00 a.m.
Geometry of Parametric Backlund Transformations.
Thomas A. Ivey*, College of Charleston
Jeanne Nielsen Clelland, University of Colorado, Boulder
• Saturday October 12, 2002, 9:00 a.m.-11:20 a.m.
Special Session on Effectiveness Questions in Model Theory, I
Room B203, Van Vleck Hall
Charles McCoy, University of Wisconsin-Madison mccoy@math.wisc.edu
Reed Solomon, University of Wisconsin-Madison rsolomon@math.wisc.edu
Patrick Speissegger, University of Wisconsin-Madison speisseg@math.wisc.edu
□ 9:00 a.m.
Effectiveness questions in arithmetic.
Andrew P Arana*, Stanford University
□ 9:30 a.m.
$\Delta^0_2$-categoricity in Abelian p-groups.
Douglas Cenzer*, University of Florida
□ 10:00 a.m.
□ 10:30 a.m.
Bounding prime, homogeneous, and saturated models.
Barbara F. Csima*, University of Chicago
□ 11:00 a.m.
Bounding prime, homogeneous, and saturated models {II}.
Denis R. Hirschfeldt*, University of Chicago
• Saturday October 12, 2002, 9:00 a.m.-11:20 a.m.
Special Session on Ring Theory and Related Topics, I
Room B105, Van Vleck Hall
Don Passman, University of Wisconsin-Madison passman@math.wisc.edu
• Saturday October 12, 2002, 9:00 a.m.-10:45 a.m.
Special Session on Lie Algebras and Related Topics, I
Room B129, Van Vleck Hall
Georgia Benkart, University of Wisconsin-Madison benkart@math.wisc.edu
Arun Ram, University of Wisconsin-Madison ram@math.wisc.edu
□ 9:00 a.m.
Multiplicity-free products of Weyl characters.
John R Stembridge*, University of Michigan
□ 10:00 a.m.
Kazhdan-Lusztig polynomials and character formulae for the supergroups {$GL(m|n)$} and {$Q(n)$}.
Jonathan Brundan*, University of Oregon
• Saturday October 12, 2002, 9:00 a.m.-11:25 a.m.
Special Session on Dynamical Systems, I
Room B215, Van Vleck Hall
Sergey Bolotin, University of Wisconsin-Madison bolotin@math.wisc.edu
Paul Rabinowitz, University of Wisconsin-Madison rabinowi@math.wisc.edu
• Saturday October 12, 2002, 9:00 a.m.-10:45 a.m.
Special Session on Probability, I
Room B313, Van Vleck Hall
David Griffeath, University of Wisconsin-Madison griffeat@math.wisc.edu
Timo Seppalainen, University of Wisconsin-Madison seppalai@math.wisc.edu
□ 9:00 a.m.
Some rigorous results for the NK model.
Vlada Limic*, University of British Columbia
□ 10:00 a.m.
Scaling Limit for a Fleming-Viot Type System.
Ilie Grigorescu*, University of Miami
Min Kang, North Carolina State University
• Saturday October 12, 2002, 9:00 a.m.-11:20 a.m.
Special Session on Biological Computation and Learning in Intelligent Systems, I
Room B329, Van Vleck Hall
Shun-ichi Amari, RIKEN amari@brain.riken.go.jp
Amir Assadi, University of Wisconsin-Madison assadi@math.wisc.edu
Tomaso Poggio, Massachusetts Institute of Technology tp@ai.mit.edu
• Saturday October 12, 2002, 9:00 a.m.-11:20 a.m.
Special Session on Combinatorics and Special Functions, I
Room B321, Van Vleck Hall
Richard Askey, University of Wisconsin-Madison askey@math.wisc.edu
Paul Terwilliger, University of Wisconsin-Madison terwilli@math.wisc.edu
• Saturday October 12, 2002, 9:00 a.m.-11:20 a.m.
Special Session on Characters and Representations of Finite Groups, I
Room B119, Van Vleck Hall
Martin Isaacs, University of Wisconsin, Madison isaacs@math.wisc.edu
Mark Lewis, Kent State University lewis@mcs.kent.edu
• Saturday October 12, 2002, 9:00 a.m.-11:20 a.m.
Special Session on Lie Groups and Their Representations, I
Room B131, Van Vleck Hall
R. Michael Howe, University of Wisconsin, Eau Claire hower@uwec.edu
Gail D. Ratcliff, East Carolina University ratcliffg@mail.ecu.edu
• Saturday October 12, 2002, 9:30 a.m.-11:20 a.m.
Special Session on Hyperbolic Differential Equations and Kinetic Theory, I
Room B325, Van Vleck Hall
Shi Jin, University of Wisconsin-Madison jin@math.wisc.edu
Marshall Slemrod, University of Wisconsin-Madison slemrod@math.wisc.edu
Athanassios Tzavaras, University of Wisconsin-Madison tzavaras@math.wisc.edu
□ 9:00 a.m.
□ 9:30 a.m.
Kinetic Formulation and Well-Posedness for Non-Isotropic Degenerate Parabolic-Hyperbolic Equations.
Gui-Qiang G Chen*, Northwestern University
Benoit Perthame, Ecole Normale Superieure (Paris)
□ 10:00 a.m.
Transition to Instability in Hyperbolic Conservation Laws.
Robin Young*, University of Massachusetts
Walter Szeliga, University of Massachusetts
□ 10:30 a.m.
Global existence for the full multi-dimensional compressible Navier-Stokes equations with large, symmetric data.
David Hoff, Indiana University
Helge Kristian Jenssen*, Indiana University
□ 11:00 a.m.
• Saturday October 12, 2002, 10:00 a.m.-11:20 a.m.
Special Session on Optimal Geometry of Curves and Surfaces, I
Room B337, Van Vleck Hall
Jason H. Cantarella, University of Georgia cantarel@math.uga.edu
John M. Sullivan, University of Illinois, Urbana jms@math.uiuc.edu
□ 10:00 a.m.
Analytic aspects of the ropelength problem.
Joseph H.G. Fu*, University of Georgia
□ 10:30 a.m.
Examples and candidates for the ropelength problem.
Nancy C. Wrinkle*, University of Georgia
□ 11:00 a.m.
Optimal shapes of curves with finite thickness.
Oscar Gonzalez*, Mathematics/UT-Austin
• Saturday October 12, 2002, 10:00 a.m.-11:25 a.m.
Session for Contributed Papers, I
Room B333, Van Vleck Hall
□ 10:15 a.m.
On some subclass of the Malcev algebras.
Ramiro Carrillo, Instituto de Matematicas, UNAM
Liudmila Sabinina*, Facultad de Ciencias, UAEM
□ 10:30 a.m.
Coherent families of weight modules of Lie superalgebras.
Dimitar V Grantcharov*, University of California, Riverside
□ 11:00 a.m.
Array computable degrees and blanketing functions.
Stephen M Walk*, St. Cloud State University
• Saturday October 12, 2002, 11:40 a.m.-12:30 p.m.
Invited Address
Relations Between Gromov-Witten Invariants.
Room B102, Van Vleck Hall
Eleny Ionel*, University of Wisconsin
• Saturday October 12, 2002, 2:00 p.m.-4:55 p.m.
Special Session on Arithmetic Algebraic Geometry, II
Room B113, Van Vleck Hall
Ken Ono, University of Wisconsin-Madison ono@math.wisc.edu
Tonghai Yang, University of Wisconsin-Madison thyang@math.wisc.edu
□ 2:00 p.m.
Supercongruences Between Truncated $_2F_1$ Hypergeometric Functions and Their Gaussian Analogs.
Eric T Mortenson*, University of Wisconsin Madison
□ 2:30 p.m.
Modular forms for noncongruence subgroups.
Wen-Ching Winnie Li*, Penn State University
Ling Long, Institute for Advanced Study
Zifong Yang, Penn State University
□ 3:00 p.m.
Exceptional congruences for the coefficients of certain eta-product newforms.
Matthew Boylan*, University of Illinois
□ 3:30 p.m.
□ 4:00 p.m.
Extensions of elliptic curves and congruences between modular forms.
Matthew Papanikolas*, Brown University
Niranjan Ramachandran, University of Maryland
□ 4:30 p.m.
1-Motives and Iwasawa Theory.
Cristian D. Popescu*, Johns Hopkins University
• Saturday October 12, 2002, 2:00 p.m.-4:40 p.m.
Special Session on Several Complex Variables, II
Room B317, Van Vleck Hall
Pat Ahern, University of Wisconsin-Madison ahern@math.wisc.edu
Xianghong Gong, University of Wisconsin-Madison gong@math.wisc.edu
Alex Nagel, University of Wisconsin-Madison nagel@math.wisc.edu
Jean-Pierre Rosay, University of Wisconsin-Madison jrosay@math.wisc.edu
□ 2:00 p.m.
On CR manifolds embedded in a hyperquadric.
Peter F. Ebenfelt*, U. Califonia, San Diego
Xiaojun Huang, Rutgers U.
Dmitri Zaitsev, U. Tubingen
□ 3:00 p.m.
On manifolds that are peak-interpolation sets for $A(\Omega)$ for convex domains in $\mathbb{C}^n$.
Gautam Bharali*, University of Michigan
□ 4:00 p.m.
$L^2$ cohomology of some complete Kähler manifolds.
Jeffery D. McNeal*, Ohio State University
• Saturday October 12, 2002, 2:00 p.m.-4:55 p.m.
Special Session on Harmonic Analysis, II
Room B341, Van Vleck Hall
Alex Ionescu, University of Wisconsin-Madison ionescu@math.wisc.edu
Andreas Seeger, University of Wisconsin-Madison seeger@math.wisc.edu
• Saturday October 12, 2002, 2:00 p.m.-4:50 p.m.
Special Session on Partial Differential Equations and Geometry, II
Room B309, Van Vleck Hall
Sigurd Angenent, University of Wisconsin-Madison angenent@math.wisc.edu
Mikhail Feldman, University of Wisconsin-Madison feldman@math.wisc.edu
□ 2:00 p.m.
A gradient flow approach to quasi-periodic solutions for P.D.E.'s and $\Psi$-D.E.'s.
Rafael de la Llave*, University of Texas at Austin
□ 3:00 p.m.
Interfaces between free boundaries and fixed boundaries.
Ivan Blank*, Rutgers University
Henrik Shahgholian, Royal Institute of Technology
□ 4:00 p.m.
Regularity and blow-up analysis for J-holomorphic maps.
Changyou Wang*, University of Kentucky
□ 4:40 p.m.
• Saturday October 12, 2002, 2:00 p.m.-4:50 p.m.
Special Session on Arrangements of Hyperplanes, II
Room B219, Van Vleck Hall
Daniel C. Cohen, Louisiana State University cohen@math.lsu.edu
Peter Orlik, University of Wisconsin-Madison orlik@math.wisc.edu
Anne Shepler, University of California Santa Cruz ashepler@math.ucsc.edu
• Saturday October 12, 2002, 2:00 p.m.-4:50 p.m.
Special Session on Group Cohomology and Homotopy Theory, II
Room B235, Van Vleck Hall
Alejandro Adem, University of Wisconsin-Madison adem@math.wisc.edu
Jesper Grodal, Institute for Advanced Study jg@ias.edu
• Saturday October 12, 2002, 2:00 p.m.-4:55 p.m.
Special Session on Geometric Methods in Differential Equations, II
Room B305, Van Vleck Hall
Gloria Mari Beffa, University of Wisconsin-Madison maribeff@math.wisc.edu
Peter Olver, University of Minnesota olver@ima.umn.edu
□ 2:00 p.m.
Second-order type-changing evolution PDE with first-order intermediate equations.
Marek Kossowski, University of South Carolina
George R Wilkens*, University of Hawaii at Manoa
□ 2:30 p.m.
Generalized Symmetries and Local Conservation Laws of Massless Free Fields on Minkowski Space.
Juha Pohjanpelto*, Oregon State University
□ 3:00 p.m.
□ 3:30 p.m.
Generalised Goursat Normal Form.
Peter J. Vassiliou*, University of Canberra
□ 4:00 p.m.
The Classification of Darboux Integrable Equations.
Matthew Biesecker*, Utah State University
□ 4:30 p.m.
Group actions on jet spaces and Darboux integrable PDE.
Mark Eric Fels*, Utah State University
• Saturday October 12, 2002, 2:00 p.m.-4:50 p.m.
Special Session on Effectiveness Questions in Model Theory, II
Room B203, Van Vleck Hall
Charles McCoy, University of Wisconsin-Madison mccoy@math.wisc.edu
Reed Solomon, University of Wisconsin-Madison rsolomon@math.wisc.edu
Patrick Speissegger, University of Wisconsin-Madison speisseg@math.wisc.edu
□ 2:00 p.m.
Limit sets and the complexity of quantifier elimination for Pfaffian expressions.
Andrei Gabrielov*, Purdue University
□ 2:30 p.m.
A Normalization Algorithm.
Daniel J. Miller*, University of Wisconsin - Madison
□ 3:00 p.m.
Effective Completeness Theorems for Modal Logics.
Suman Ganguli*, University of Michigan
Anil Nerode, Cornell University
□ 3:30 p.m.
Principal Filters of the Lattice of Computably Enumerable Vector Spaces.
Valentina S. Harizanov*, The George Washington University
□ 4:00 p.m.
The Computational Complexity of Computable Saturation.
Walker M. White*, University of Dallas
□ 4:30 p.m.
• Saturday October 12, 2002, 2:00 p.m.-4:50 p.m.
Special Session on Ring Theory and Related Topics, II
Room B105, Van Vleck Hall
Don Passman, University of Wisconsin-Madison passman@math.wisc.edu
• Saturday October 12, 2002, 2:00 p.m.-4:45 p.m.
Special Session on Lie Algebras and Related Topics, II
Room B129, Van Vleck Hall
Georgia Benkart, University of Wisconsin-Madison benkart@math.wisc.edu
Arun Ram, University of Wisconsin-Madison ram@math.wisc.edu
□ 2:00 p.m.
Ideals in the nilradical of a Borel subalgebra.
Eric N Sommers*, UMass--Amherst
□ 3:00 p.m.
Cayley Maps for Algebraic Groups.
Nicole Lemire*, University of Western Ontario
Vladimir L. Popov, Steklov Mathematical Institute
Zinovy Reichstein, University of British Columbia
□ 4:00 p.m.
The restricted nullcone.
Jon F. Carlson, University of Georgia
Zongzhu Lin, Kansas St. University
Daniel K. Nakano*, University of Georgia
Brian J. Parshall, University of Virginia
• Saturday October 12, 2002, 2:00 p.m.-4:50 p.m.
Special Session on Dynamical Systems, II
Room B215, Van Vleck Hall
Sergey Bolotin, University of Wisconsin-Madison bolotin@math.wisc.edu
Paul Rabinowitz, University of Wisconsin-Madison rabinowi@math.wisc.edu
• Saturday October 12, 2002, 2:00 p.m.-4:45 p.m.
Special Session on Probability, II
Room B313, Van Vleck Hall
David Griffeath, University of Wisconsin-Madison griffeat@math.wisc.edu
Timo Seppalainen, University of Wisconsin-Madison seppalai@math.wisc.edu
□ 2:00 p.m.
Approximating the effect of advantageous mutations by coalescents with multiple collisions.
Jason R. Schweinsberg*, Cornell University
Rick Durrett, Cornell University
□ 3:00 p.m.
Max-plus linearity for growth models.
Min Kang*, North Carolina State University
Ilie Grigorescu, University of Miami
□ 4:00 p.m.
Large Deviations of the Asymmetric Exclusion Process.
Leif H. Jensen*, Columbia University
• Saturday October 12, 2002, 2:00 p.m.-4:50 p.m.
Special Session on Biological Computation and Learning in Intelligent Systems, II
Room B329, Van Vleck Hall
Shun-ichi Amari, RIKEN amari@brain.riken.go.jp
Amir Assadi, University of Wisconsin-Madison assadi@math.wisc.edu
Tomaso Poggio, Massachusetts Institute of Technology tp@ai.mit.edu
• Saturday October 12, 2002, 2:00 p.m.-4:50 p.m.
Special Session on Combinatorics and Special Functions, II
Room B321, Van Vleck Hall
Richard Askey, University of Wisconsin-Madison askey@math.wisc.edu
Paul Terwilliger, University of Wisconsin-Madison terwilli@math.wisc.edu
• Saturday October 12, 2002, 2:00 p.m.-4:50 p.m.
Special Session on Characters and Representations of Finite Groups, II
Room B119, Van Vleck Hall
Martin Isaacs, University of Wisconsin, Madison isaacs@math.wisc.edu
Mark Lewis, Kent State University lewis@mcs.kent.edu
• Saturday October 12, 2002, 2:00 p.m.-4:50 p.m.
Special Session on Optimal Geometry of Curves and Surfaces, II
Room B337, Van Vleck Hall
Jason H. Cantarella, University of Georgia cantarel@math.uga.edu
John M. Sullivan, University of Illinois, Urbana jms@math.uiuc.edu
• Saturday October 12, 2002, 2:00 p.m.-4:20 p.m.
Special Session on Lie Groups and Their Representations, II
Room B131, Van Vleck Hall
R. Michael Howe, University of Wisconsin, Eau Claire hower@uwec.edu
Gail D. Ratcliff, East Carolina University ratcliffg@mail.ecu.edu
□ 2:00 p.m.
A cocycle formula for the quaternionic discrete series.
Robert W Donley*, University of North Texas
□ 2:30 p.m.
Representations with scalar $K$-types and the theta correspondence.
Annegret Paul*, Western Michigan University
□ 3:30 p.m.
Induced representations and derived functor modules.
Paul D Friedman*, Union College
• Saturday October 12, 2002, 2:00 p.m.-4:25 p.m.
Session for Contributed Papers, II
Room B333, Van Vleck Hall
• Saturday October 12, 2002, 3:00 p.m.-4:50 p.m.
Special Session on Hyperbolic Differential Equations and Kinetic Theory, II
Room B325, Van Vleck Hall
Shi Jin, University of Wisconsin-Madison jin@math.wisc.edu
Marshall Slemrod, University of Wisconsin-Madison slemrod@math.wisc.edu
Athanassios Tzavaras, University of Wisconsin-Madison tzavaras@math.wisc.edu
□ 2:00 p.m.
□ 2:30 p.m.
□ 3:00 p.m.
Qualitative Behavior of Solutions to Systems of Conservation Laws.
Konstantina Trivisa*, University of Maryland
□ 3:30 p.m.
Concentration and Cavitation of Density in Inviscid Compressible Fluid Flow.
Chen Guiqiang, Northwestern University
Liu Hailiang*, Iowa State University
□ 4:00 p.m.
Decay of solutions to the three-dimensional Euler equations with damping.
Dehua Wang*, University of Pittsburgh
• Saturday October 12, 2002, 5:10 p.m.-6:00 p.m.
Invited Address
Singularities of Pairs and Birational Geometry.
Room B102, Van Vleck Hall
Lawrence Ein*, University of Illinois at Chicago
• Saturday October 12, 2002, 6:00 p.m.-7:00 p.m.
Department of Mathematics Reception
Lobby, Memorial Union
Inquiries: meet@ams.org | {"url":"http://ams.org/meetings/sectional/2077_program_saturday.html","timestamp":"2014-04-18T04:10:34Z","content_type":null,"content_length":"97606","record_id":"<urn:uuid:69d6d2e7-33a1-4c9a-bada-3f2c08edc1e2>","cc-path":"CC-MAIN-2014-15/segments/1397609532480.36/warc/CC-MAIN-20140416005212-00647-ip-10-147-4-33.ec2.internal.warc.gz"} |
Improved high order integrators based on the Magnus expansion
Results 1 - 10 of 15
- ACTA NUMERICA , 2000
"... Many differential equations of practical interest evolve on Lie groups or on manifolds acted upon by Lie groups. The retention of Lie-group structure under discretization is often vital in the
recovery of qualitatively correct geometry and dynamics and in the minimization of numerical error. Having ..."
Cited by 93 (18 self)
Add to MetaCart
Many differential equations of practical interest evolve on Lie groups or on manifolds acted upon by Lie groups. The retention of Lie-group structure under discretization is often vital in the
recovery of qualitatively correct geometry and dynamics and in the minimization of numerical error. Having introduced requisite elements of differential geometry, this paper surveys the novel theory
of numerical integrators that respect Lie-group structure, highlighting theory, algorithmic issues and a number of applications.
- SIAM J. Numer. Anal , 2002
"... Numerical methods based on the Magnus expansion are an ecient class of integrators for Schr odinger equations with time-dependent Hamiltonian. Though their derivation assumes an unreasonably
small time step size as would be required for a standard explicit integrator, the methods perform well even f ..."
Cited by 23 (2 self)
Add to MetaCart
Numerical methods based on the Magnus expansion are an efficient class of integrators for Schrödinger equations with time-dependent Hamiltonian. Though their derivation assumes an unreasonably small
time step size as would be required for a standard explicit integrator, the methods perform well even for much larger step sizes. This favorable behavior is explained, and optimal-order error bounds
are derived which require no or only mild restrictions of the step size. In contrast to standard integrators, the error does not depend on higher time derivatives of the solution, which is in general
highly oscillatory.
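As a rough illustration of the kind of scheme being discussed (a sketch only, not code taken from any of the cited papers), the simplest Magnus-type integrator — the second-order exponential midpoint rule — for Y' = A(t)Y can be written as follows; the toy coefficient matrix at the end is an arbitrary illustration:

```python
import numpy as np
from scipy.linalg import expm

def magnus_midpoint(A, y0, t0, t1, n_steps):
    """Second-order Magnus (exponential midpoint) integrator for y' = A(t) y.
    A is a callable returning the coefficient matrix at time t."""
    h = (t1 - t0) / n_steps
    y = np.array(y0, dtype=float)
    t = t0
    for _ in range(n_steps):
        # one step: y_{k+1} = exp(h * A(t_k + h/2)) y_k
        y = expm(h * A(t + h / 2)) @ y
        t += h
    return y

# Toy example (illustrative): oscillator with a slowly varying frequency.
A = lambda t: np.array([[0.0, 1.0], [-(1.0 + 0.1 * t), 0.0]])
print(magnus_midpoint(A, [1.0, 0.0], 0.0, 10.0, 1000))
```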
, 2000
"... Commencing from a global-error formula, originally due to Henrici, we investigate the accumulation of global error in the numerical solution of linear highly-oscillating systems of the form y 00
+ g(t)y = 0, where g(t) t!1 \Gamma! 1. Using WKB analysis we derive an explicit form of the global-error ..."
Cited by 20 (5 self)
Add to MetaCart
Commencing from a global-error formula, originally due to Henrici, we investigate the accumulation of global error in the numerical solution of linear highly-oscillating systems of the form y'' + g(t)y = 0, where g(t) → ∞ as t → ∞. Using WKB analysis we derive an explicit form of the global-error envelope for Runge-Kutta and Magnus methods. Our results are closely matched by numerical
experiments. Motivated by the superior performance of Lie-group methods, we present a modification of the Magnus expansion which displays even better long-term behaviour in the presence of
, 2000
"... In this paper new integration algorithms for linear differential equations up to eighth order are obtained. Starting from Magnus expansion, methods based on Cayley transformation and Fer
expansion are also built. The structure of the exact solution is retained while the computational cost is reduced ..."
Cited by 7 (1 self)
Add to MetaCart
In this paper new integration algorithms for linear differential equations up to eighth order are obtained. Starting from Magnus expansion, methods based on Cayley transformation and Fer expansion
are also built. The structure of the exact solution is retained while the computational cost is reduced compared to similar methods. Their relative performance is tested on some illustrative
, 1999
"... Commencing with a brief survey of Lie-group theory and differential equations evolving on Lie groups, we describe a number of numerical algorithms designed to respect Lie-group structure:
Runge--Kutta--Munthe-Kaas schemes, Fer and Magnus expansions. This is followed by complexity analysis of Fer and ..."
Cited by 5 (0 self)
Add to MetaCart
Commencing with a brief survey of Lie-group theory and differential equations evolving on Lie groups, we describe a number of numerical algorithms designed to respect Lie-group structure:
Runge--Kutta--Munthe-Kaas schemes, Fer and Magnus expansions. This is followed by complexity analysis of Fer and Magnus expansions, whose conclusion is that for order four, six and eight an
appropriately discretized Magnus method is always cheaper than a Fer method of the same order. Each Lie-group method of the kind surveyed in this paper requires the computation of a matrix
exponential. Classical methods, e.g. Krylov-subspace and rational approximants, may fail to map elements in a Lie algebra to a Lie group. Therefore we survey a number of approximants based on the
splitting approach and demonstrate that their cost is compatible (and often superior) to classical methods.
, 2009
"... We use a posteriori error estimation theory to derive a relation between local and global error in the propagation for the time-dependent Schrödinger equation. Based on this result, we design a
class of h, p-adaptive Magnus–Lanczos propagators capable of controlling the global error of the time-step ..."
Cited by 4 (2 self)
Add to MetaCart
We use a posteriori error estimation theory to derive a relation between local and global error in the propagation for the time-dependent Schrödinger equation. Based on this result, we design a class
of h, p-adaptive Magnus–Lanczos propagators capable of controlling the global error of the time-stepping scheme by only solving the equation once. We provide results for models of several different
small molecules including bounded and dissociative states, illustrating the efficiency and wide applicability of the new methods. Key words global error control·h, p-adaptivity · Magnus–Lanczos
propagator · time-dependent Schrödinger equation
"... The Local Linearization (LL) method for the integration of ordinary differential equations is an explicit one-step method that has a number of suitable dynamical properties. However, a major
drawback of the LL integrator is that its order of convergence is only two. The present paper overcomes this ..."
Cited by 3 (0 self)
Add to MetaCart
The Local Linearization (LL) method for the integration of ordinary differential equations is an explicit one-step method that has a number of suitable dynamical properties. However, a major drawback
of the LL integrator is that its order of convergence is only two. The present paper overcomes this limitation by introducing a new class of numerical integrators, called the LLT method, that is
based on the addition of a correction term to the LL approximation. In this way an arbitrary order of convergence can be achieved while retaining the dynamic properties of the LL method. In
particular, it is proved that the LLT method reproduces correctly the phase portrait of a dynamical system near hyperbolic stationary points to the order of convergence. The performance of the
introduced method is further illustrated through computer simulations.
, 2007
"... Two different sufficient conditions are given for the convergence of the Magnus expansion arising in the study of the linear differential equation Y′= A(t)Y. The first one provides a bound on
the convergence domain based on the norm of the operator A(t). The second condition links the convergence of ..."
Cited by 2 (0 self)
Add to MetaCart
Two different sufficient conditions are given for the convergence of the Magnus expansion arising in the study of the linear differential equation Y′= A(t)Y. The first one provides a bound on the
convergence domain based on the norm of the operator A(t). The second condition links the convergence of the expansion with the structure of the spectrum of Y (t), thus yielding a more precise
characterization. Several examples are proposed to illustrate the main issues involved and the information on the convergence domain provided by both conditions. | {"url":"http://citeseerx.ist.psu.edu/showciting?cid=193549","timestamp":"2014-04-16T22:30:14Z","content_type":null,"content_length":"34621","record_id":"<urn:uuid:7e5c20c2-0577-4338-8f36-38fbdd396e93>","cc-path":"CC-MAIN-2014-15/segments/1398223203841.5/warc/CC-MAIN-20140423032003-00254-ip-10-147-4-33.ec2.internal.warc.gz"} |
Volume 55, Issue 4, April 2014
• ARTICLES
□ Partial Differential Equations
The time-dependent Ginzburg-Landau formalism for (d + s)-wave superconductors and their representation using auxiliary fields is investigated. By using the link variable method, we then
develop suitable discretization of these equations. Numerical simulations are carried out for a mesoscopic superconductor in a homogeneous perpendicular magnetic field which revealed
peculiar vortex states.
□ Representation Theory and Algebraic Methods
The aim of this work is to generalize a very important type of Lie algebras and superalgebras, i.e., filiform Lie (super)algebras, into the theory of Lie algebras of order F. Thus, the
concept of filiform Lie algebras of order F is obtained. In particular, for F = 3 it has been proved that by using infinitesimal deformations of the associated model elementary Lie
algebra it can be obtained families of filiform elementary lie algebras of order 3, analogously as that occurs into the theory of Lie algebras [M. Vergne, “Cohomologie des algèbres de Lie
nilpotentes. Application à l’étude de la variété des algèbres de Lie nilpotentes,” Bull. Soc. Math. France98, 81–116 (1970)]. Also we give the dimension, using an adaptation of the -
module Method, and a basis of such infinitesimal deformations in some generic cases.
The chiral conformal field theory of free super-bosons is generated by weight one currents whose mode algebra is the affinisation of an abelian Lie super-algebra with non-degenerate
super-symmetric pairing. The mode algebras of a single free boson and of a single pair of symplectic fermions arise for even|odd dimension 1|0 and 0|2 of , respectively. In this paper,
the representations of the untwisted mode algebra of free super-bosons are equipped with a tensor product, a braiding, and an associator. In the symplectic fermion case, i.e., if is
purely odd, the braided monoidal structure is extended to representations of the -twisted mode algebra. The tensor product is obtained by computing spaces of vertex operators. The
braiding and associator are determined by explicit calculations from three- and four-point conformal blocks.
We establish a generalization of Kitaev models based on unitary quantum groupoids. In particular, when inputting a Kitaev-Kong quantum groupoid , we show that the ground state manifold of
the generalized model is canonically isomorphic to that of the Levin-Wen model based on a unitary fusion category . Therefore, the generalized Kitaev models provide realizations of the
target space of the Turaev-Viro topological quantum field theory based on .
In this paper, we study some properties of w_∞ 3-Lie algebra and SDiff(T^2) 3-Lie algebra and prove that they do not have non-trivial central extensions.
When considered as submanifolds of Euclidean space, the Riemannian geometry of the round sphere and the Clifford torus may be formulated in terms of Poisson algebraic expressions
involving the embedding coordinates, and a central object is the projection operator, projecting tangent vectors in the ambient space onto the tangent space of the submanifold. In this
note, we point out that there exist noncommutative analogues of these projection operators, which implies a very natural definition of noncommutative tangent spaces as particular
projective modules. These modules carry an induced connection from Euclidean space, and we compute its scalar curvature.
□ Quantum Mechanics
Stokes' theorem is investigated in the context of the time-dependent Aharonov-Bohm effect—the two-slit quantum interference experiment with a time varying solenoid between the slits. The
time varying solenoid produces an electric field which leads to an additional phase shift which is found to exactly cancel the time-dependent part of the usual magnetic Aharonov-Bohm
phase shift. This electric field arises from a combination of a non-single valued scalar potential and/or a 3-vector potential. The gauge transformation which leads to the scalar and
3-vector potentials for the electric field is non-single valued. This feature is connected with the non-simply connected topology of the Aharonov-Bohm set-up. The non-single valued nature
of the gauge transformation function has interesting consequences for the 4-dimensional Stokes' theorem for the time-dependent Aharonov-Bohm effect. An experimental test of these
conclusions is proposed.
We study an unorthodox variant of the Berezin-Toeplitz type of quantization scheme, on a reproducing kernel Hilbert space generated by the real Hermite polynomials and work out the
associated quasi-classical asymptotics.
Problems with quantum systems models, violating Galilei invariance are examined. The method for arbitrary non-relativistic quantum system Galilei invariant wave function construction,
applying a modified basis where center-of-mass excitations have been removed before Hamiltonian matrix diagonalization, is developed. For identical fermion system, the Galilei invariant
wave function can be obtained while applying conventional antisymmetrization methods of wave functions, dependent on single particle spatial variables.
We discuss a modification of Smilansky model in which a singular potential “channel” is replaced by a regular, below unbounded potential which shrinks as it becomes deeper. We demonstrate
that, similarly to the original model, such a system exhibits a spectral transition with respect to the coupling constant, and determine the critical value above which a new spectral
branch opens. The result is generalized to situations with multiple potential “channels.”
The structure of the spectrum of random operators is studied. It is shown that if the density of states measure of some subsets of the spectrum is zero, then these subsets are empty. In
particular, it follows that absolute continuity of the integrated density of states implies that the singular spectrum of ergodic operators is either empty or of positive measure. Our results apply to
Anderson and alloy type models, perturbed Landau Hamiltonians, almost periodic potentials, and models which are not ergodic.
We show that under very general assumptions the adiabatic approximation of the phase of the zeta-regularized determinant of the imaginary-time Schrödinger operator with periodic
Hamiltonian is equal to the Berry phase.
□ General Relativity and Gravitation
We consider the Yang-Mills flow on hyperbolic 3-space. The gauge connection is constructed from the frame-field and (not necessarily compatible) spin connection components. The fixed
points of this flow include zero Yang-Mills curvature configurations, for which the spin connection has zero torsion and the associated Riemannian geometry is one of constant curvature.
We analytically solve the linearized flow equations for a large class of perturbations to the fixed point corresponding to hyperbolic 3-space. These can be expressed as a linear
superposition of distinct modes, some of which are exponentially growing along the flow. The growing modes imply the divergence of the (gauge invariant) perturbative torsion for a wide
class of initial data, indicating an instability of the background geometry that we confirm with numeric simulations in the partially compactified case. There are stable modes with zero
torsion, but all the unstable modes are torsion-full. This leads us to speculate that the instability is induced by the torsion degrees of freedom present in the Yang-Mills flow.
□ Dynamical Systems
Applying the concept of anti-integrable limit to coupled map lattices originated from space-time discretized nonlinear wave equations, we show that there exist topological horseshoes in
the phase space formed by the initial states of travelling wave solutions. In particular, the coupled map lattices display spatio-temporal chaos on the horseshoes.
□ Classical Mechanics and Classical Fields
In a continuum setting, the energy–momentum tensor embodies the relations between conservation of energy, conservation of linear momentum, and conservation of angular momentum. The
well-defined total energy and the well-defined total momentum in a thermodynamically closed system with complete equations of motion are used to construct the total energy–momentum tensor
for a stationary simple linear material with both magnetic and dielectric properties illuminated by a quasimonochromatic pulse of light through a gradient-index antireflection coating.
The perplexing issues surrounding the Abraham and Minkowski momentums are bypassed by working entirely with conservation principles, the total energy, and the total momentum. We derive
electromagnetic continuity equations and equations of motion for the macroscopic fields based on the material four-divergence of the traceless, symmetric total energy–momentum tensor. We
identify contradictions between the macroscopic Maxwell equations and the continuum form of the conservation principles. We resolve the contradictions, which are the actual fundamental
issues underlying the Abraham–Minkowski controversy, by constructing a unified version of continuum electrodynamics that is based on establishing consistency between the three-dimensional
Maxwell equations for macroscopic fields, the electromagnetic continuity equations, the four-divergence of the total energy–momentum tensor, and a four-dimensional tensor formulation of
electrodynamics for macroscopic fields in a simple linear medium.
We analyse the constraint structure of the topologically massive Yang-Mills theory in instant-form and null-plane dynamics via the Hamilton-Jacobi formalism. The complete set of
hamiltonians that generates the dynamics of the system is obtained from the Frobenius’ integrability conditions, as well as its characteristic equations. As generators of canonical
transformations, the hamiltonians are naturally linked to the generator of Lagrangian gauge transformations.
The new classes of periodic solutions of nonlinear self-dual network equations describing the breather and soliton lattices, expressed in terms of the Jacobi elliptic functions have been
obtained. The dependences of the frequencies on energy have been found. Numerical simulations of soliton lattice demonstrate their stability in the ideal lattice and the breather lattice
instability in the dissipative lattice. However, the lifetime of such structures in the dissipative lattice can be extended through the application of ac driving terms.
□ Methods of Mathematical Physics
Useful expressions of the derivatives, to any order, of Pochhammer and reciprocal Pochhammer symbols with respect to their arguments are presented. They are building blocks of a
procedure, recently suggested, for obtaining the ɛ-expansion of functions of the hypergeometric class related to Feynman integrals. The procedure is applied to some examples of such kind
of functions taken from the literature.
The most general displaced number states, based on the bosonic and an irreducible representation of the Lie algebra symmetry of su (1, 1) and associated with the Calogero-Sutherland model
are introduced. Here, we utilize the Barut-Girardello displacement operator instead of the Klauder-Perelomov counterpart, to construct new kind of the displaced number states which can be
classified in nonlinear coherent states regime, too, with special nonlinearity functions. They depend on two parameters, and can be converted into the well-known Barut-Girardello coherent
and number states, respectively, depending on which of the parameters equal to zero. A discussion of the statistical properties of these states is included. Significant are their
squeezing properties and anti-bunching effects which can be raised by increasing the energy quantum number. Depending on the particular choice of the parameters of the above scenario, we
are able to determine the status of compliance with flexible statistics. Major parts of the issue is spent on something that these states, in fact, should be considered as new kind of
photon-added coherent states, too. Which can be reproduced through an iterated action of a creation operator on new nonlinear Barut-Girardello coherent states . Where the latter carry,
also, outstanding statistical features. | {"url":"http://scitation.aip.org/content/aip/journal/jmp/browse","timestamp":"2014-04-20T08:20:12Z","content_type":null,"content_length":"164541","record_id":"<urn:uuid:72b2219b-b65a-4b9b-82e0-52a16a6e48a1>","cc-path":"CC-MAIN-2014-15/segments/1398223206120.9/warc/CC-MAIN-20140423032006-00383-ip-10-147-4-33.ec2.internal.warc.gz"} |
Combining differentiation rules.. (chain, product, quotient)
March 6th 2010, 11:59 AM #1
Nov 2009
Combining differentiation rules.. (chain, product, quotient)
I know all 3 of the rules (product, chain, quotient) and I know how to apply them to derive a function when that function ONLY uses 1 of those rules. But questions like this confuse me:
y = [ (2x-1)^2 ] / [ (x-2)^3 ]
Sorry if my brackets look a bit weird i'm trying to make the question clear.
So when I look at that it looks like a quotient rule question AND a chain rule question since the function is to an exponent. How do I apply the rules when I have to use more than one of the rules?
$y = \frac{(2x-1)^2}{(x-2)^3}$
quotient rule, using the chain rule when necessary ...
$\frac{dy}{dx} = \frac{(x-2)^3 \cdot \textcolor{red}{2(2x-1) \cdot 2} - (2x-1)^2 \cdot 3(x-2)^2}{(x-2)^6}$
factor ...
$\frac{dy}{dx} = \frac{(x-2)^2(2x-1)[(x-2) \cdot 4 - (2x-1) \cdot 3]}{(x-2)^6}$
distribute ...
$\frac{dy}{dx} = \frac{(2x-1)[4x-8-6x+3]}{(x-2)^4}$
combine like terms and simplify ...
$\frac{dy}{dx} = \frac{(1-2x)(2x+5)}{(x-2)^4}$
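A quick symbolic check of that final expression (a small SymPy sketch, assuming SymPy is available):

```python
import sympy as sp

x = sp.symbols('x')
y = (2*x - 1)**2 / (x - 2)**3

dy = sp.diff(y, x)
target = (1 - 2*x) * (2*x + 5) / (x - 2)**4

# prints 0 if the hand-worked derivative above is correct
print(sp.simplify(dy - target))
```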
Since the outside function is a quotient, use the quotient rule while keeping the chain rule in mind:
Derivative using the quotient rule:
$\frac{(x-2)^3\frac{d}{dx}(2x-1)^2 - (2x-1)^2\frac{d}{dx}(x-2)^3}{((x-2)^3)^2}$
Now use the chain rule for the numerator:
$\frac{(x-2)^3 \cdot 2(2x-1)\cdot2 - (2x-1)^2 \cdot 3(x-2)^2\cdot1}{((x-2)^3)^2}$
And use algebra to tidy up the answer:
$\frac{4(x-2)^3(2x-1) - 3(2x-1)^2 (x-2)^2}{(x-2)^6}$
And there you go!
Last edited by mathemagister; March 6th 2010 at 12:24 PM. Reason: last step (distributing the negative back in)
$\frac{dy}{dx} = \frac{(x-2)^3 \cdot \textcolor{red}{2(2x-1) \cdot 2} - (2x-1)^2 \cdot 3(x-2)^2}{(x-2)^6}$
What is confusing me is how do you know what part to use the chain rule on? Other parts in that step are to an exponent, but you are only using the chain rule on the red part why?
because it's the only part that really needed it ... the derivative of $(2x-1)^2$ is $2(2x-1) \cdot 2$
the derivative of $(x-2)^3$ is $3(x-2)^2 \cdot 1$
How do I know what needs the chain rule and what doesn't? Sorry.
Providing you know/have memorised the formulas for each of the differentiation rules, and are able to recognise when to apply them, then it's not so bad.
Let's say, for example, the product rule is u(dv/dx) + v(du/dx), where you have two separate functions, u and v. If v, for example, was something like $(x-4)^{5}$ then what do you need to do to
differentiate it? You need the chain rule. Using the chain rule gives $5(x-4)^4$ and if you don't know how this works, you could wisely spend some time going back over the chain rule.
You would need this to put back into the product rule formula for the part of (dv/dx) - so here, you can see that one part of the product rule example here includes an application of the chain
rule. As examples get more complicated, the more you might have to recognise this.
So in summary, it will help you to write out what u and v actually are, consider them both, and if you need the differential of one of these to put into another formula and need to consider one
of the other rules of differentiation in addition to the one you are primarily using, then it helps to consider each function on its own first before you proceed any further towards solving problems. I hope I helped.
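To make that concrete with the $(x-4)^{5}$ example above (a small SymPy sketch; the other factor u = x^2 is an arbitrary choice, used only for illustration):

```python
import sympy as sp

x = sp.symbols('x')
u = x**2            # arbitrary first factor, for illustration only
v = (x - 4)**5      # the factor that needs the chain rule

# chain rule piece: d/dx (x-4)^5 = 5(x-4)^4
print(sp.diff(v, x))

# product rule u*v' + v*u', checked against SymPy's own derivative of u*v
by_hand = u * sp.diff(v, x) + v * sp.diff(u, x)
print(sp.simplify(by_hand - sp.diff(u * v, x)))   # prints 0
```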
Oct 2008 | {"url":"http://mathhelpforum.com/calculus/132335-combining-differientation-rules-chain-product-quotient.html","timestamp":"2014-04-16T18:08:52Z","content_type":null,"content_length":"61750","record_id":"<urn:uuid:63251c1b-770c-4b5b-9891-6dfdcb2d01b7>","cc-path":"CC-MAIN-2014-15/segments/1397609539230.18/warc/CC-MAIN-20140416005219-00337-ip-10-147-4-33.ec2.internal.warc.gz"} |
Canoga Park Math Tutor
Find a Canoga Park Math Tutor
...For three years, I organized a co-op for several families and led science demonstrations for the elementary students. Creating individual curriculum and student assessments has been an integral
part of the process. I also use fun games which can be played with siblings to reinforce phonics letter-sound pairing.
24 Subjects: including algebra 1, GED, statistics, ESL/ESOL
...In addition, I have experience teaching as a student teacher from my current graduate program. I make sure that I have several years of experience in a subject before I offer tutoring in it, so
you have the highest quality of help from me! My goal is to help you learn the tips and tricks, as well as the skills to get your target grade in your classes.
15 Subjects: including algebra 1, algebra 2, prealgebra, elementary math
Hello future students- My name is Michael, and I received my Master’s degree in Education from California State University at Northridge. I have been teaching Mathematics for the past ten years. I
have been teaching Math at Palisades Charter High School, a distinguished California School.
14 Subjects: including geometry, ACT Math, basketball, ACT English
...I generally tutored younger students in Math, English, Foreign language, and Science. Tutoring the younger students not only allowed me the opportunity to spread the knowledge that I'd gained
over the years, but, it also challenged me to reach higher educational goals. During college is where the bulk of my tutoring experience is derived.
31 Subjects: including algebra 2, algebra 1, geometry, prealgebra
...I have served as VP Engineering and VP Customer Applications in the field of networking. I have a master's degree in computer science and have spent my entire career developing software programs
as well as managing large software projects. At work I use C++, C#, C, Java as well as other languages....
24 Subjects: including trigonometry, ACT Math, geometry, C++ | {"url":"http://www.purplemath.com/Canoga_Park_Math_tutors.php","timestamp":"2014-04-20T06:58:32Z","content_type":null,"content_length":"23896","record_id":"<urn:uuid:d9b4c1cf-4357-4165-bef0-33003885f5f7>","cc-path":"CC-MAIN-2014-15/segments/1397609538022.19/warc/CC-MAIN-20140416005218-00583-ip-10-147-4-33.ec2.internal.warc.gz"} |
Fractions software free download and reviews: Convert-O-Gadget, Numbers Up! Volcanic Panic Windows, Numbers Up! Volcanic Panic OSX, Numbers Up! Volcanic Panic Mac, MathProf, Mathematics Worksheet Factory, ... - FilesPack.com
by R.K. West Consulting
What does it do? It's a handy tool to help you make conversions from one unit of measurement to another. Did you measure your living room in square feet, only to find that carpet is sold
by ...
details download
File Size: 750 K | Shareware
Related downloads: conversion, calculator, measurement, roman numerals, fractions, metric, temperature, distance
by EdAlive
This fun math game for ages 4-15 covers all basic skills.Its board game format has 7500 number fact and problem questions in numeration, addition, subtraction, multiplication, division,
details download
File Size: 13.48 MB | Demo
Related downloads: fun math game, number, place value, counting, addition, subtraction, multiplication, division
by EdAlive
This fun math game for ages 4-15 covers all basic skills.Its board game format has 7500 number fact and problem questions in numeration, addition, subtraction, multiplication, division,
details download
File Size: 11.1 MB | Demo
Related downloads: fun math game, number, place value, counting, addition, subtraction, multiplication, division
by EdAlive
This fun math game for ages 4-15 covers all basic skills.Its board game format has 7500 number fact and problem questions in numeration, addition, subtraction, multiplication, division,
details download
File Size: 11.12 MB | Demo
Related downloads: fun math game, number, place value, counting, addition, subtraction, multiplication, division
by ReduSoft Ltd.
MathProf is an easy to use mathematics program within approximately 180 subroutines. MathProf can display mathematical correlations in a very clear and simple way. The program covers the
details download
File Size: 9.82 MB | Shareware
Related downloads: Mathematics, Maths, Formula, Educationsoftware, Teachers, Students, High school, Lessons
by Schoolhouse Technologies Inc.
Create professional-quality mathematics worksheets to provide students in grades K to 10 with the skills development and practice they need as part of a complete numeracy program. Over
60 ...
details download
File Size: 7.36 MB | Demo
Related downloads: mathematics, math, teacher, tool, worksheet, addition, subtraction, division
Mathematics Quiz
by Bettergrades
This program is designed for students aged 7 - 9 in Primary 1 - 3. There are over 1500 challenging Maths quizzes and problem sums to practise on. Topics include Addition, Subtraction,
details download
File Size: 2.57 MB | Shareware
Related downloads: Primary, School, learn, education, test, papers, questions, exam
Internet Math
by Veenetronics Corporation
480 math word problems covering grades 1-8 are solved over the internet. Problems are generated with random numbers and solved using pictures. Each problem's picture solution changes to
details download
File Size: 1.46 MB | Shareware
Related downloads: Math, Internet, Word Problems, Elementary, Junior High, Middle School, Fractions, Decimals
by Luzius Schneider
CalculPro will help your elementary school students practice mental arithmetic or to do fractions. Choose addition, subtraction, multiplication, division or fractions; then set the range
of ...
details download
File Size: 5.14 MB | Shareware
Related downloads: mental arithmetic, arithmetic's, fractions, caculate, mathematics, pupil, student, teacher
by Luzius Schneider
Calcul will help your elementary school students practice mental arithmetic or to do fractions. Choose addition, subtraction, multiplication, division or fractions; then set the range of
details download
File Size: 1 MB | Freeware
Related downloads: mental arithmetic, arithmetic's, fractions, caculate, mathematics, pupil, student, teacher
by Teacher Interactive Software
Build fraction reducing and simplifying skills. Convert fractions to mixed numbers and mixed numbers to fractions.. All problems are randomly generated on several levels of difficulty
details download
File Size: 2.17 MB | Demo
Related downloads: math, mathematics, fractions, fraction, conversion, converting, simplification, simplifying
by Teacher Interactive Software
Build fraction addition and subtraction skills for simple to complex problems. Also Includes exercises for building reducing and simplifying fraction skills. All problems are randomly
details download
File Size: 2.82 MB | Demo
Related downloads: math, mathematics, fractions, fraction, addition, subtraction, add, subtract
by Equinox Software
Weights And Measures Plus converts units of measurement such as yards or fluid ounces to gallons. It features conversions for 237 units in 18 categories. WAM+ is unique in that input and
details download
File Size: 5.65 MB | Shareware
Related downloads: unit conversion, trigonometry, educational, math, metric, convert, fractions, reference
by Solutionwright Software
CalcuNote is an arithmetic calculator, adding machine, and time calculator that records and documents your calculations on multiple tapes which you can edit. Use it to (1) make simple
details download
File Size: 2.15 MB | Shareware
Related downloads: calculators, software, arithmetic, adding machine, tapes, edits, prints, fractions
by WISCO Computing
LangPad - Math & Currency Characters provides an easy way to insert Math & Currency characters and symbols into your WordPad and Notepad text. Click the mouse on a character or symb...
details download
File Size: 501 K | Shareware
Related downloads: Math &, Currency characters, money symbols, euro, british pound, fractions, sterling, langpad | {"url":"http://www.filespack.com/tags/fractions.html","timestamp":"2014-04-16T10:16:08Z","content_type":null,"content_length":"31100","record_id":"<urn:uuid:2d6c744a-023a-4bf1-8713-acfd769e6389>","cc-path":"CC-MAIN-2014-15/segments/1398223205375.6/warc/CC-MAIN-20140423032005-00626-ip-10-147-4-33.ec2.internal.warc.gz"} |
Car Talk Puzzler
National Public Radio in the USA carries Car Talk, a humorous phone-in program in which Tom and Ray Magliozzi (Click and Clack, the Tappet Brothers) diagnose and offer solutions for mysterious
auto-related maladies. It's an amusing hour on Saturday mornings.
One of the program's segments is a weekly Puzzler, a logic (or other) mental puzzle begging for a solution. On May 21, 2011, my wife and I were driving westward across Michigan, headed for a family
visit with our son in Iowa. Our radio picked up the morning's broadcast of Car Talk, during which the following Puzzler caught our attention. The link will take you to the exact wording. Here's a paraphrase:
A six-digit odometer shows a palindromic number. The car it's in is driven no more than an hour, and again the odometer shows a palindromic number. How far was the car driven?
When I described this puzzle to a colleague, he immediately suggested the successive palindromes 11 and 22, etc. However, a six-digit odometer would display 000011, and that probably shouldn't be
considered a palindrome. So, to a mathematician the problem is "What are all the six-digit palindromes (no leading zeros), and what are the successive differences?"
As my wife was driving, I was free to think how I might use Maple to get a list of all such palindromes, wondering if, perhaps, the numtheory package had a "palindrome" command. My wife, a
psychiatric nurse, who clearly did not deal with logical intellects in her professional career, shortly announced "It has to be 11. If the odometer started at 199991, eleven additional miles would
bring it to 200002." Believe me, I didn't even want to know how she did this. I just sat back and marveled at the mate I had picked more than 40 years ago.
But as soon as I could get to a Maple session, I did, indeed, list all the six-digit palindromes, and discovered that adjacent palindromes differed by 11, 110, or 1100. Since it is unlikely that the
car in question traveled 110 miles in an hour, the answer to the Puzzler had to be 11. And it was. (Click the link for the solution as presented by Tom and Ray.)
Here's how I did the calculations. The first thing I did was to satisfy myself that Maple had no built-in facility for generating palindromes, so I needed a representation of a six-digit palindrome.
This I took as the digit pattern abccba, that is, the number 100001a + 10010b + 1100c with a = 1..9 and b, c = 0..9.
Hence, all the six-digit palindromic numbers, sorted from smallest to largest, form a list of 9 x 10 x 10 = 900 members, beginning 100001, 101101, 102201, and so on.
The differences between successive palindromes then form a list of length 899, as expected. However, the only distinct values appearing in that list of differences are
11, 110, and 1100,
and in fact the first time the difference between successive palindromes is 11 is at the pair 199991, 200002.
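(Editorial aside: the same computation sketched in Python for readers without Maple; the variable names here are my own.)

# build every six-digit palindrome abccba = 100001a + 10010b + 1100c
pal = [100001*a + 10010*b + 1100*c
       for a in range(1, 10) for b in range(10) for c in range(10)]
pal.sort()

# gaps between consecutive palindromes
diffs = [q - p for p, q in zip(pal, pal[1:])]

print(len(pal))            # 900
print(sorted(set(diffs)))  # [11, 110, 1100]

# first adjacent pair differing by 11
print(next((p, q) for p, q in zip(pal, pal[1:]) if q - p == 11))  # (199991, 200002)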
Is a gap of 11 unique? Just how prevalent is such a difference between two six-digit palindromes? In Table 1, the first column lists the index telling where in the sorted list two adjacent
palindromes differ by 11, and the second and third columns give the adjacent palindromes.
I can't help but note the first pair in the list! (I have no idea how she did it. She doesn't knit, but avidly solves Sudoku puzzles of all types, and is better at it than I am.)
[100, 199991, 200002]
[200, 299992, 300003]
[300, 399993, 400004]
[400, 499994, 500005]
[500, 599995, 600006]
[600, 699996, 700007]
[700, 799997, 800008]
[800, 899998, 900009]
Table 1: Successive palindromes that differ by 11
From Table 1 it should also be clear that because of the gap between the pairs, there aren't three successive palindromes each pair of which differ by 11. The Puzzler can be solved by searching just
adjacent palindromes. | {"url":"http://www.mapleprimes.com/maplesoftblog/128796-Car-Talk-Puzzler","timestamp":"2014-04-19T09:24:39Z","content_type":null,"content_length":"80507","record_id":"<urn:uuid:7ffba414-083f-49d2-bf3e-3d5485e69534>","cc-path":"CC-MAIN-2014-15/segments/1397609537097.26/warc/CC-MAIN-20140416005217-00083-ip-10-147-4-33.ec2.internal.warc.gz"} |
Quick question about integral of (1/x)
But the natural log of anything less than 1 is negative, which seems to say that the area under the curve from 0 to anything less than x=1 on the graph of f(x)=1/x should be a negative area. The
graph of f(x)=1/x from x=0 to 1 is above the x-axis and as you say in the next quote is infinite.
Example of what I'm struggling with: [tex] \int_{0.1}^1dx \frac{1}{x} [/tex]
Should be a definable positive integer. Approximately equal to 9(0.1) increments with heights 1/0.9 + 1/0.8 + ... 1/0.1 (left hand method) or approximately equal to 2.829. However, if you do ln(1)-ln
(0.1) you get approximately 2.3 (I expected a negative number and need to think about this a little more, there's clearly a flaw in my logic...haha) Sorry. Posting anyways so that you might see where
I was coming from.
OK, so let's try the two bounding methods, dividing it into 9 blocks
Where each is greater than 1/x
= (1/0.1+1/0.2+1/0.3+1/0.4+1/0.5+1/0.6+1/0.7+1/0.8+1/0.9)*0.1~2.829
Where each is less than 1/x
= (1/0.2+1/0.3+1/0.4+1/0.5+1/0.6+1/0.7+1/0.8+1/0.9+1/1)*0.1~1.929
which both bound your exact answer of ln(1)-ln(0.1)~2.3
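(Editorial aside: the two bounding sums and the exact value, checked numerically in a short Python sketch.)

import math

a, b, n = 0.1, 1.0, 9
dx = (b - a) / n  # 0.1

upper = sum(1.0 / (a + i * dx) for i in range(n)) * dx        # left endpoints, ~2.829
lower = sum(1.0 / (a + (i + 1) * dx) for i in range(n)) * dx  # right endpoints, ~1.929
exact = math.log(b) - math.log(a)                             # ln(1) - ln(0.1), ~2.303

print(lower, exact, upper)  # lower < exact < upper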
Note you can also use the useful property of logarithms to see that it is always positive
ln(b)-ln(a) = ln(b/a)
as b>a>0, then b/a>1 and it is always a positive number | {"url":"http://www.physicsforums.com/showthread.php?p=3686392","timestamp":"2014-04-19T19:52:03Z","content_type":null,"content_length":"39942","record_id":"<urn:uuid:a2da361d-abd6-4742-9a26-942e0949c8cb>","cc-path":"CC-MAIN-2014-15/segments/1397609537376.43/warc/CC-MAIN-20140416005217-00011-ip-10-147-4-33.ec2.internal.warc.gz"} |
word problem question
The speed of light is 1.863 x 10^5 miles per second.
Assume 1 year is 365 days. Write an expression to convert the speed of light from miles per second to miles per year.
Make a table that shows the distance light travels in 1, 10, 100, 1000, 10000, and 100000 years. Our galaxy has a diameter of about 5.875 x 10^17 miles. Based on the table, about how long would it
take for light to travel across our galaxy?
I have no idea what to do. Help!
You are given the speed of light as $1.863 x 10^5$ mi/s. There are 60 s/min, 60 min/hr, 24 hr/day, and 365.25 day/yr. Notice how if you multiply those "fractions" together everything cancels
except mi/yr?
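(Editorial aside: the arithmetic written out in Python, using the problem's 365-day year; the reply above used 365.25.)

c_mi_per_s = 1.863e5                     # miles per second
sec_per_year = 60 * 60 * 24 * 365        # seconds in a 365-day year
c_mi_per_yr = c_mi_per_s * sec_per_year  # about 5.875e12 miles per year

for years in (1, 10, 100, 1000, 10000, 100000):
    print(years, c_mi_per_yr * years)

galaxy_mi = 5.875e17
print(galaxy_mi / c_mi_per_yr)           # about 1e5, i.e. roughly 100,000 years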
{"url":"http://mathhelpforum.com/algebra/78857-word-problem-question.html","timestamp":"2014-04-19T14:59:31Z","content_type":null,"content_length":"34245","record_id":"<urn:uuid:2348d069-1e29-4d3d-aebb-4403910f2f85>","cc-path":"CC-MAIN-2014-15/segments/1397609537271.8/warc/CC-MAIN-20140416005217-00586-ip-10-147-4-33.ec2.internal.warc.gz"}
So you gave the formative assessment, now what? (Part 2)
This is part two of a three part series on formative assessment. This post deals with some things you can do between individual lessons based on formative assessment and during a lesson. You can read
part one here.
The objective of this post is to describe two possible procedures teachers can use for ongoing, day-to-day formative assessment. The first of these procedures is easier to implement, but gives
teachers less information on what students understand. Remember that a primary objective of formative assessment is to create a feedback loop for both teachers and students into the teaching and
learning process.
Example 1
At the end of your last class you gave an exit slip. One strategy, which is not too time-consuming, is to take the exit slips and first sort them into No/Yes piles, and then sort these piles into 3-4
solution pathway piles, essentially organizing all of the student work by whether or not it is correct and what strategy students used. It may be useful to have an "other" group as well, for students
whose strategy you are unable to decode.
These groups of students can be used to decide on student groups (recommendation: group by different strategy) for the following day, decide if you need to try a different strategy for tomorrow, and/
or find examples of student work to present to students. It can also be used to decide on re-engagement strategies^1 for the lesson from the previous day, or just decide that you can move onto the
next topic in your unit sequence.
Example 2
An exit slip is not the only kind of formative assessment you can do^2.
The most important feature of formative assessment is coming to understand what students are thinking. You can do this by conferring^3 with individual students during your lesson and asking them
questions to elicit their thinking. Of course, this assumes you have given students an assignment which requires them to think!
Imagine students are working on a rich math task
and that you start by initially observing students and see if they are able to get started on the task without your intervention. As the students begin to work, you begin
walking around the classroom, and observing them working, and listening to their discussions about the task. Your objective at this time is to gather evidence of what students are thinking about
while they do the task.
The three main problems you may have to solve during this time are; students who are unable to get started on their own, students who are going in the completely wrong direction on the task, and
students who have completed the task. One of the early tasks during your observation of students working is to figure out which students are in which group. Note that there is a fourth group;
students who are not done the task, who may be struggling a little bit, but are making progress. Do not intervene with this group of students!
When you are confused about what students are talking about, or what they are writing, you spend some time clarifying your understanding of what they are thinking, so that you feel completely clear.
Now, you choose an intervention^5 for the student, such that the student is left to do the mathematical thinking of the task, and you do not lower the cognitive demand of the task. During the entire
time students are working on the task, you collect information^6 on what the students do during the task.
In the next post in this series, I will discuss more of the overall objectives of formative assessment, and discuss how the feedback loops created by the process of formative assessment can improve
the effectiveness of teaching and learning in classrooms.
Re-engagement is an alternative to reviewing material with students. It can be done at any time during the unit when you want to consolidate student understanding.
. For other examples of formative assessment,
see this presentation that I curated
. It has 54 different possible formative assessment strategies in it, some of which are more appropriate for a class focused on literacy skills, and some of which are useful for a mathematics
. A rich math task allows for students to demonstrate mathematical reasoning, is often open-ended, and allows for multiple solution paths. These kinds of tasks generally take students some time to complete.
. The intervention you choose should not lower the demands of the task you have set the student. You could ask them a question to prompt their thinking, or suggest a way they can interact with one of
their peers (do not assume your students know how to collaborate, they may need a prompt to help them orient to each other's work and thinking).
. It is useful to have anticipated student responses before the task, and solved the task yourself a couple of different ways. Finally, having a template to collect information during the lesson
would be critical. Here are two such templates designed by my colleague Sara Toguchi:
Descriptive information
Specific criteria information
Anticipating misconceptions with quick sorts
Submitted by
Mary Dooms
on Sat, 02/01/2014 - 07:45.
Hi David,
I teach 7th grade and my most diverse learners are in my standard level classes where students' MAP scores range between the 15th-90th percentile. When using exit slips as a quick sort teachers must
use that data to regroup students and structure their lesson and classtime to address any misconceptions. To be most effective I need to have tomorrow's lesson prepared with differentiation in mind.
Yesterday, a formative assessment helped me identify 14 students who can add and subtract one step equations, but 7 can't do it when it involves combining like terms. Three of the seven can't add
negative decimals, and the rest are still combining terms that aren't alike. For Monday I need to have appropriate resources ready for all of my students--from those who don't get it to those that
do. I'm the first to admit that some days I'm better prepared for the differentiated classroom than others.
Having taught this topic for several years I know the misconceptions. But as you point out it's what you do with that information that makes the difference. I'm investigating the math workshop model
and I think that may help me. To successfully execute the workshop model an entire unit needs to be prepared for all possible "What ifs".
I appreciate your insights on formative assessment and look forward to part 3.
Math workshop model
Submitted by
Mary Dooms
on Sun, 02/02/2014 - 14:19.
It runs similar to a reading workshop in that the independent time is quite differentiated. The lesson begins with an open ended task or review problem, followed by a mini-lesson, independent work
time where the teacher confers with students and a shareout at the end. After the mini-lesson, for example adding and subtracting one step equations, I would have a quick formative assessment to
determine levels of understanding. Students are then immediately regrouped and they practice or are given extensions based on their level of understanding. The key is to have the resources at the
ready: combining like terms practice, one step whole number practice, one step decimal/fraction practice, and an extension perhaps on solving equations with variables on both sides. A mini-lesson may
not even be the current topic, it could be a number talk. The idea is to create a culture of learning where students become self directed. Here's a link to the book Minds on Mathematics. http://
Another teacher and I are doing a book study, led by our reading specialist, who also has a keen interest in math. It requires a lot of advance prep and it's not just doing stations. | {"url":"http://davidwees.com/content/so-you-gave-formative-assessment-now-what-part-2","timestamp":"2014-04-18T15:59:23Z","content_type":null,"content_length":"65100","record_id":"<urn:uuid:e1da1b90-5faa-4be9-ae2a-cd2a4490b3df>","cc-path":"CC-MAIN-2014-15/segments/1398223202774.3/warc/CC-MAIN-20140423032002-00395-ip-10-147-4-33.ec2.internal.warc.gz"} |
Math Forum Discussions
Topic: learn math
Replies: 5 Last Post: May 6, 2013 8:33 AM
learn math
Posted: Dec 14, 2011 9:05 AM
Hi, I am a student. I study in Russia and I want to learn math from the internet.
An Introduction To Modern Astrophysics 2nd Edition Textbook Solutions | Chegg.com
a) Consider a neutron with collision cross section
The neutron’s collision cross section is found in terms of its radius as
The distance travelled by the neutron with the speed
The volume of the cylinder is given as,
The cylindrical volume swept by the neutron by travelling a distance | {"url":"http://www.chegg.com/homework-help/an-introduction-to-modern-astrophysics-2nd-edition-solutions-9780805304022","timestamp":"2014-04-21T05:37:57Z","content_type":null,"content_length":"51984","record_id":"<urn:uuid:9e392ab7-f424-4aa4-bd44-eb287fb15214>","cc-path":"CC-MAIN-2014-15/segments/1398223205137.4/warc/CC-MAIN-20140423032005-00280-ip-10-147-4-33.ec2.internal.warc.gz"} |
{"url":"http://openstudy.com/users/deadshot/answered/1","timestamp":"2014-04-20T21:11:24Z","content_type":null,"content_length":"123310","record_id":"<urn:uuid:658af1d1-5a13-4dba-b56e-7ee8915c63a2>","cc-path":"CC-MAIN-2014-15/segments/1397609539230.18/warc/CC-MAIN-20140416005219-00555-ip-10-147-4-33.ec2.internal.warc.gz"}
Introduction to deformation theory (of algebras)?
So I know that the idea of deformation theory underlies the concept of quantum groups; I haven't found any single introduction to quantum groups that makes me fully satisfied that I have some kind of
idea of what it's all about, but piecing together what I've read, I understand that the idea is to "deform" a group (Hopf) algebra to one that's not quite as nice but is still very workable.
To a certain extent, I get what's implied by "deformation"; the idea is to take some relations defining our Hopf algebra and introduce a new parameter, which specializes to the classical case at a
certain point. What I don't understand is:
1. How and when we can do this and have it still make sense;
2. Why this should "obviously" be a construction worth looking at, and why it should be useful and meaningful.
The problem is when I look for stuff (in the library catalogue, on the Internet) on deformation theory, everything that turns up is really technical and assumes some familiarity with the basic
definitions and intuitions about the subject. Does anyone know of a more basic introduction that can be understood by the "general mathematical audience" and answers (1) and (2)?
ra.rings-and-algebras deformation-theory quantum-groups reference-request
If your question is about quantum groups then you have to read Drinfeld's ICM paper "Quantum groups". He explains what a quantum group is, deformations of what they are, etc... ams.u-strasbg.fr/
mathscinet/search/… – DamienC Jul 27 '10 at 16:59
11 Answers
Quoting from the first line of this paper by Barry Mazur (PDF file):
One can learn a lot about a mathematical object by studying how it behaves under small perturbations.
[Gerstenhaber, Murray; Schack, Samuel D. Algebraic cohomology and deformation theory. Deformation theory of algebras and structures and applications (Il Ciocco, 1986), 11--264, NATO Adv.
Sci. Inst. Ser. C Math. Phys. Sci., 247, Kluwer Acad. Publ., Dordrecht, 1988. MR0981619 (90c:16016)] is a rather good introduction to the subject. Gerstenhaber's papers (the series called On
the deformation of rings and algebras) are extremely readable.
As to why one should expect deformation to yield something interesting... I once asked this of Jacques Alev, and he observed that the interest of really interesting things should survive
small deformations.
In answer to your last paragraph, a good starting point for deformation theory, not specifically of quantum groups, is the first order deformation theory of associative algebras. Good
references for this have been mentioned by Mariano (Gerstenhaber's papers) and Kevin Lin (Kontsevich's notes), but I wanted to add that before even opening them, there are some pleasingly
simple exercises you can do to get a feel of the subject. Try extending an associative $k$-linear product on $A$ to a $t$-linear one on $A\otimes (k[t]/t^2)$ and see that what you need is a
Hochschild 2-cocycle (definition in homological algebra books, e.g. Weibel's); and that the extensions that become trivial after a change of variable are the coboundaries. If you insist on
unital products, you'll get cocycles for the reduced Hochschild complex; if you impose commutativity you'll see Harrison cocycles.
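(An editorial addition spelling out the first-order computation; this is not part of the original answer, and the signs follow one common Hochschild convention.) Write the deformed product as $a \ast b = ab + t\,\mu(a,b)$ on $A\otimes (k[t]/t^2)$, so $t^2 = 0$. Requiring $\ast$ to be associative to first order in $t$ gives exactly $$a\,\mu(b,c) - \mu(ab,c) + \mu(a,bc) - \mu(a,b)\,c = 0,$$ the Hochschild 2-cocycle condition, while the deformations that become trivial under a change of variable $a \mapsto a + t\,\lambda(a)$ are those with $$\mu(a,b) = a\,\lambda(b) - \lambda(ab) + \lambda(a)\,b,$$ i.e. the 2-coboundaries.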
One more reference: a paper of Goldman-Millson (link requires MR subscription) which readably explains the DGLA philosophy of char zero deformation theory.
See also my question here: mathoverflow.net/questions/385/… for more about the dgLa philosophy. – Kevin H. Lin Jan 16 '10 at 23:45
That link requires an account at UT - here's a link to the mathscinet article (at least, I think it's the article you linked to) ams.org/mathscinet/search/… – Peter Samuelson Oct 7 '10 at
@Peter. Thanks for fixing the link! – Tim Perutz Oct 8 '10 at 1:44
1 The Goldman-Millson paper is also freely available on NUMDAM (if we're thinking about the same paper): archive.numdam.org/ARCHIVE/PMIHES/PMIHES_1988__67_/… – Peter Dalakov Feb 14 at 17:47
You might try The unbearable lightness of deformation theory by Balázs Szendrői.
I know almost nothing about quantum groups, but nevertheless I think the first thing to realize is that "deformation" can really be taken as just a synonym for "family". If you are interested
in moduli problems, then you are interested in families, and thus deformations. As for why one would be interested in moduli problems... there are many reasons, so maybe you can ask about that
in a different question :-)
1. There is a draft of a book on deformation theory by Kontsevich-Soibelman. It's available here. There are also these notes from an old course on deformation theory that Kontsevich taught
back when he was at Berkeley in 1994. This is all very much in a similar vein to the references that Mariano has cited. This material is a bit more "modern" and is not quite the same as
deformation theory in algebraic geometry, though they do share many characteristics. For deformation theory in algebraic geometry, try taking a look at "Moduli of Curves" by
Harris-Morrison, "Deformations of Algebraic Schemes" by Sernesi, or these notes of Hartshorne.
2. One motivation to look at deformations comes from physics, see for example Kontsevich's famous paper on deformation quantization of Poisson manifolds. Another motivation, as I already
mentioned, is moduli theory. Even if you are just interested in blahs and not deformations/families/moduli of blahs, it can still be useful to study deformations/families/moduli of blahs.
For example, suppose we are interested in some object X. If X has a moduli space of deformations, then we can study how X changes, or how some property of X changes, as we move around in
the moduli space. This can then be taken as an invariant of X itself. If X is a smooth projective variety and the property we are looking at is the Hodge structure of X, then this leads to
a beautiful theory of variation of Hodge structure, which was developed by Griffiths and others. Finally, deformations themselves can be interesting in their own right: they sometimes have
very rich and complex mathematical structures (leading to, for example, applications to knot theory in the case of quantum groups) that we would not see if we just looked at non-deformed
objects. This is probably not "obvious" at all, but that's probably largely why it's so awesome.
Just from the name, I would also guess that there must be some physics-y reasons for why the construction of quantum groups is useful and meaningful, but I dunno... – Kevin H. Lin Jan 16 '10
at 21:03
I like "Why Deformations are Cohomological" by M Anel
If you haven't already, you might find it worthwhile to read the paper by Drinfeld:
Drinfeld, V.G.: Quantum Groups, Proceedings of ICM (Berkeley 1986) Providence RI American Math. Soc. 1987, 798–820.
I think it is appropriate to a general audience, although not all statements are explained completely.
I first saw an introduction to the deformations of associative algebras in the very nice paper of Braverman and Gaitsgory. I think it's a very good place to start.
Our page at nlab is still very unfinished but at the end of the page we created a long list of mostly carefully chosen references (majority not that elementary though).
My answer is more concerned with your second question ``Why this should "obviously" be a construction worth looking at, and why it should be useful and meaningful."
I would quote Arnold who, by saying that Mathematics and physics are the two opposite sides of a same medal, conveys the platonistic idea that the coherence and unity of mathematics
comes in fact from the coherence and unity of nature, since it is the natural language to describe it.
If one believes in this, one realizes that most of our mathematics is classical, in the sense that most of mathematical objects come from the study of concepts originated in classical
mechanics (geometry, Lie groups, ...). But it is known since the early 20th century that classical mechanics is the shadow of quantum mechanics.
Therefore, there should exist a whole brave new world of quantum mathematics, of which classical mathematics should be the ``semiclassical limit".
The paper of Bayen, Flato, Lichnerowicz and Sternheimer gives a paradigm to explore it: they show that quantum mechanics can be interpreted in terms of deformations of associative
algebras of classical commutative algebras of observables on Poisson manifolds.
Therefore, an approach to quantize a mathematical concept is to encode its structure in terms of properties of an algebra of functions, and deform this algebra to a non commutative
algebra with similar properties. If you apply it to the algebra of functions on a Lie group, you arrive to the concept of quantum group.
In addition to the excellent references listed in the other answers, see these 2007 lecture notes on deformation theory by Doubek, Markl, and Zima.
1 It's generally useful to provide a description or information for a link (e.g. author & title), "these lecture notes" doesn't help much. – François G. Dorais♦ Jan 16 '10 at 21:57
1 Well, the title is just "Deformation Theory (Lecture Notes)" :), and I've edited the answer to include the names of the authors. – mathphysicist Jan 16 '10 at 23:01
I took a course on deformation theory with 2 of the current experts of the subject,John Terilla and Tom Tradler-and this and Gerstenhaber's papers where the main references for the
course.What I've never been able to understand is what the connectiom is between the classical deformation theory in those notes and the deformations studies in algebraic geometry by
Harsthorne's book and others. – Andrew L Apr 12 '10 at 3:45
{"url":"http://mathoverflow.net/questions/12005/introduction-to-deformation-theory-of-algebras","timestamp":"2014-04-21T00:33:57Z","content_type":null,"content_length":"99686","record_id":"<urn:uuid:f222a00a-5e18-4c2b-a7b8-4c9fcce7b7d1>","cc-path":"CC-MAIN-2014-15/segments/1398223205137.4/warc/CC-MAIN-20140423032005-00615-ip-10-147-4-33.ec2.internal.warc.gz"}
Experiment of the Month
A Geometric Derivation for the Rate of Rotation in Foucault's Pendulum
The plane of oscillation of the Foucault pendulum rotates clockwise in the northern hemisphere. At the north pole the plane of oscillation would make one complete rotation during one day. At other
latitudes, the rate of rotation is slower. The slower rate is not difficult to derive if the initial motion of the pendulum is north-south. One such derivation is here.
For this month's article, we take a different approach, which is applicable for any initial direction of oscillation. The focus of attention will be a vector v. v can represent the direction of a
gyroscope axis, or is can represent the velocity of the pendulum bob. It will probably be easier to think of the gyroscope, because we will allow v to point in any direction, for our convenience.
We will use three different coordinate systems to calculate the components of v:
1) The ordinary north-south, east-west system, with origin at some point on the earth's surface, lying in a plane tangent to the earth at that point.
2) A system with one axis parallel to the earth's axis, and another perpendicular to the earth's axis, going through the point of interest on the earth's surface.
3) An extension of (1) to include the vertical, an axis along the line from the center of the earth to our point of interest. Horizontal vectors are perpendicular to this vertical axis. This line and
the earth's axis define a plane. The angle between this vertical axis and a line perpendicular to the earth's axis is the latitude, l, of the point of interest.
We begin by considering two special cases for the direction of v. First, the direction of v is parallel to the earth's axis. This direction is not horizontal, unless we are at the equator. To
visualize this direction in the northern hemisphere, lay a ruler on the floor, along the north-south direction. If you are at latitude 50 degrees north, pick up the northern end of the ruler, and
raise it until the ruler makes an angle of 50 degrees with the floor. That ruler is now parallel to the earth's axis.
v continues to point along the earth's axis as we rotate with the earth, and we carry the v along with us. (It helps to think of v as indicating the spin axis of a gyroscope.) We detect no change in
the direction of this v as the earth rotates.
Second, v is pointed in a direction perpendicular to the earth's axis. If we ignore the tilt of the earth's axis, and the progress of the earth in it's orbit, then we can imagine that v points toward
the sun. Now, as the earth rotates, we can tell. At noon, v points more-or-less up, making an angle of l with the local vertical. At midnight, v points more-or-less down, making an angle of l with
the local vertical.
The picture shows a "top" view, looking down on the north pole. The dot represents the tip of the v vector in the first case, pointed along the axis of rotation of the earth. It does not change as it
is carried along with the rotating earth.
The arrow towards the distant sun represents v in the second case, pointing always towards the sun. Its direction relative to the earth changes as the earth rotates. Sometimes this arrow points
towards the earth's axis (night time in this example). Sometimes it points away from the earth's axis (daytime in this example). It is this arrow that tells us the earth is rotating on its axis. As
the earth makes one complete revolution, this arrow makes one complete revolution (relative to the laboratory).
Neither of these arrows is horizontal: Neither lies in a plane tangent to the earth's surface, as they ride a particular location on the earth. To use this picture to understand the Foucault
pendulum, we must understand how it connects to horizontal motion.
Case 1: v "pointing north," and lying "horizontally" in a plane tangent to the earth. The sketch shows the idea. To follow the effect of the earth turning we consider two components of the vector, v:
1) the component parallel to the earth's axis. This component does not change as the earth rotates.
2) the component perpendicular to the earth's axis. As viewed by someone riding the earth while it turns through a small angle dθ counterclockwise, this component turns the same amount, dθ.
We calculate this perpendicular component using the latitude, l. Since v is perpendicular to the vertical, the angle between v and the line perpendicular to the earth's axis is (π/2 - l). This means
that the angle between v and the earth's axis is l, so that the perpendicular component of v is
v sin l
The sketch at the right shows this component in detail. Look first near the bottom of the sketch. The dotted arrow is the observed direction of (v sin l), after the earth has rotated through dθ. For
small angles (in radians),
dθ = (dv)/(v sin l)
where dv is the change in v sin l and also in v, since there is no change in the other component of v (the component parallel to the earth's axis).
That same dv is shown added to the original v vector, near the top of the sketch. The vector v rotates through an angle
dφ = (dv)/(v) = dθ (v sin l)/(v) = dθ sin l
This leads immediately to the result that the rotation rate of the pendulum velocity vector is smaller than the rotation rate for the earth by the factor sin l.
Case 2: v "pointing east," and lying "horizontally" in a plane tangent to the earth. The sketches below show the idea.
v towards east, side view v towards east, view from above north pole
v lies perpendicular to the plane defined by the earth's axis and the vertical. The line perpendicular to the earth's axis is also in that plane, and is also perpendicular to v. When the earth
rotates (counterclockwise) through a small angle dq, an observer riding on the earth sees this vector rotate through exactly the same angle (clockwise). The reason is that in this case, v has no
component perpendicular to the earth's axis.
The change in velocity, according to the observer is
dv = v dq
for small angle dq measured in radians.
The direction of this dv vector is along that dotted line which is perpendicular to the earth's axis. To apply the idea to the Foucault pendulum we must account for the fact that the pendulum motion
is not allowed to change in the vertical direction. (The tension in the string can change, but to first order, the period is independent of the earth's rotation rate.) The pendulum acts to eliminate
the vertical component of dv.
To eliminate dv(vertical), we must take the horizontal component of dv. This is most easily done from a point of view standing just to the west of the pendulum, as in the figure at right. Note
that, by definition, horizontal is perpendicular to vertical.
dv(horizontal) = dv sin l = v dθ sin l
When the horizontal component is added to the original v, the new vector makes an angle
dφ = (dv)/(v) = (v dθ sin l)/(v) = dθ sin l
This is exactly the same relation between earth rotation and observed rotation of the vector as for the north-pointing case. For a general horizontal vector, both the north and the east components
rotate at the same rate, so that all horizontal vectors rotate at the same rate, for a given latitude, l. The rotation rate is given by
dφ/dt = (dθ/dt) (sin l)
The time for one revolution of the Foucault pendulum at latitude l is given by
(T_pendulum)(sin l) = T_earth
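(Editorial aside: a quick numerical check of this relation in Python; the sidereal-day value below is an assumption on my part.)

import math

T_earth = 23.93  # hours in one sidereal day (assumed)
for lat_deg in (90.0, 66.5, 49.0, 30.0):
    T_pendulum = T_earth / math.sin(math.radians(lat_deg))
    print(lat_deg, round(T_pendulum, 1))  # e.g. about 31.7 hours at 49 degrees (Paris)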
Thanks to Hugh Rance for stimulating this version. | {"url":"http://www.millersville.edu/academics/scma/physics/experiments/084/index.php","timestamp":"2014-04-19T04:28:07Z","content_type":null,"content_length":"29366","record_id":"<urn:uuid:2a0c0666-7d98-4a58-bd09-ba9bde5cfafc>","cc-path":"CC-MAIN-2014-15/segments/1397609535775.35/warc/CC-MAIN-20140416005215-00607-ip-10-147-4-33.ec2.internal.warc.gz"} |
geometry: relief angle calculation?
Results 1 to 8 of 8
Thread: geometry: relief angle calculation?
1. 07-15-2004, 07:12 AM #1
atetsade Guest
is there some formula to determine the relief angle necessary for a given feed rate to successfully turn a given diameter of round stock?
2. 07-15-2004, 09:56 AM #2
Join Date
Dec 2000
Bremerton WA USA
Probably but 99.999% of home ground tool users eyeball it. 7 degrees is about right. In the case of screw threads having large helix angles, the 7 degrees applies with
respect to the helix angle.
A multiple lead screw thread or worm having a 30 degree helix angle has a 23 degree relief on one side and a negative 23 degree on the other. Since large helix angles are
usually cut with the tool profile normal to the pitchline helix angle, shops that specialize in this kind of work have tool holders that tilt the tool by one of several means to
Single point tools used in tangential heads on gear hobbers when nicely ground are works of art.
3. 07-15-2004, 10:04 AM #3
Join Date
May 2003
Marshalltown, Iowa, USA
Just what Forrest said!
But if you must know
Cot helix angle= (PI* dia)/lead (feed rate)
That gives a number that is relative to the axis of feed. IOW, if you have a 10" dia part that you feed at .005"/rev, the helix angle is 89.99 deg.
If I remember, the best explanation that I ever saw was the set-up directions for a Landis thread grinder.
4. 07-15-2004, 07:49 PM #4
Join Date
Apr 2003
Berkeley Springs, WV, USA
I am the one who usually advocates heavy feed at lower rotative speeds.
Even under conditions of very aggressive feed rates for turning in the lathe, you will not run into clearance problems at from 5 to 7 degrees.
You want greater clearance for softer materials, not because of feed rate but because of the spring-back of the softer materials that makes them want to rub behind the
cutting edge.
5. 07-15-2004, 09:17 PM #5
Join Date
Sep 2004
south SF Bay area, California
JRIowa and Company --
There's a nit in the arithmetic used to illustrate JRIowa's algorithm, which I'll rephrase slightly:
Cotangent (Helix Angle) = (Pi x Diameter) / (Axial Feed per Revolution)
Cotangent (Helix Angle) = (Pi x 10 inch) / (0.005 inch)
Cotangent (Helix Angle) = 31.4159 inch / 0.005 inch
Cotangent (Helix Angle) = 6283.18
Since the Cotangent is the reciprocal of the Tangent, the above can be restated
Tangent (Helix Angle) = (1 / 6283.18)
(Helix Angle) = Arctangent (1 / 6283.18)
(Helix Angle) = Arctangent (0.000159155+)
(Helix Angle) = 0.009+ degree.
[Sound of climbing onto soap box]
I firmly believe that the less-used trigonometric functions -- Secant, Cosecant, Cotangent, Versine, Haversine, and so forth -- should today be considered only historical
curiosities, and that it is the duty of those teaching applied math today to define and dismiss those functions.
A hundred years ago the use of these functions served a practical purpose, avoiding the relatively mistake-prone arithmetic operation of dividing . . . instead of "dividing
by the Sine / Cosine / Tangent of the Angle" we said "multiply by the Cosecant / Secant / Cotangent of the Angle".
Today scientific calculators and spreadsheet programs are almost ubiquitous, and these tools make division easier and less subject to error than multiplication by the
Until the calculator makers and the spreadsheet programmers add Secant, Cosecant, and Cotangent keys or commands to their products, using those functions impairs the
practicality of the algorithm.
The arithmetic error inspiring my rant probably would not have happened had the algorithm been expressed in this more-easily-understood way:
Tangent (Helix Angle) = [(Axial Feed per Revolution) / (Pi x Workpiece Diameter)]
[Sound of jumping down from soap box]
Some textbooks say that the Side Clearance Angle on a toolbit should be greater than the Helix Angle on the workpiece by 3 to 5 degrees.
Let's say that we take the midpoint of this range, 4 degrees, as an idealized greater-than-Helix-Angle clearance and take the midpoint of JimK's toolbit Side Clearance Angle
range as an idealized toolbit angle. This would mean that we would want the workpiece Helix Angle to be (6 degrees - 4 degrees) = 2 degrees.
How much Axial Feed would be necessary to generate a 2 degree Helix Angle? The algorithm is pretty straight-forward:
Axial Feed per Revolution = Tangent (Helix Angle) x Workpiece Diameter x Pi.
Doing the arithmetic for a 1 inch workpiece, we find that Axial Feed per Revolution = Tangent (2 degrees) x 1 inch x 3.1416 = 0.109+ inch.
For a 6 inch diameter workpiece, Tangent (2 degrees) x 6 inches x 3.1416 = 0.658 inch.
Bottom line, you'd almost have to be cutting an oddball long-lead screwthread to drag the flank on a toolbit with 5 to 7 degrees of Side Clearance.
6. 07-15-2004, 10:54 PM #6
Join Date
May 2003
Marshalltown, Iowa, USA
7. 07-15-2004, 11:15 PM #7
Senior Member
Join Date
Mar 2003
Lytle, TX USA
Geez John, you sound just like my brother--the aerospace engineer (John too). Everytime I ask him for help in calculating stresses, weldments or co-linear supports, he gives
me 5 pages of the formula and expects me to understand it.
I sure am glad he gives me the answer. I'm not sure I could find it on the paper.
8. 07-16-2004, 11:44 AM #8
Join Date
Dec 2002
Pacific NW
FWIW, CCMT inserts have a 7 degree relief. Kennametal's CPMT have 11 (or is it 12?) degree relief.
In my experience, both work fine. Kennametal's relief seems to be a slight bit freer cutting. But is possibly offset by the extra cost of the Kennametal inserts.
{"url":"http://www.practicalmachinist.com/vb/general-archive/geometry-relief-angle-calculation-80598/","timestamp":"2014-04-16T04:12:38Z","content_type":null,"content_length":"66518","record_id":"<urn:uuid:d4d33044-d230-4d62-a03e-cc73148e97e6>","cc-path":"CC-MAIN-2014-15/segments/1397609521512.15/warc/CC-MAIN-20140416005201-00298-ip-10-147-4-33.ec2.internal.warc.gz"}
Eighteenth British Mathematical Colloquium
This was held at Imperial College, London: 29 March - 2 April 1966
The enrolment was 345. The chairman was J G Clunie and the secretary was T Kövari.
Minutes of meetings, etc. are available by clicking on a link below
General Meeting Minutes for 1966
Committee Meeting Minutes for 1966
The plenary speakers were:
Kadison, R V A survey of the theory of algebras of Hilbert space operators
Kaplansky, I Recent advances in commutative algebra
Segre, B Galois geometries and combinatorial analysis
The morning speakers were:
Bott, R H A generalisation of the Lefschetz fixed point theorem
Burgess, D A Character sums
Corner, A L S Endomorphism rings of abelian groups
Dirac, G A Structure and colouring of graphs
Hudson, J F P Adding knots
Kingman, J F C Some analytic problems which are really algebraic
Northcott, D G Polynomial identities
Pommerenke, C Faber polynomials and univalent functions
Roseblade, J E Subnormal subgroups | {"url":"http://www-gap.dcs.st-and.ac.uk/~history/BMC/1966.html","timestamp":"2014-04-20T16:26:54Z","content_type":null,"content_length":"2291","record_id":"<urn:uuid:d102ccdc-65c9-42ac-a35d-430fac4fae65>","cc-path":"CC-MAIN-2014-15/segments/1397609538824.34/warc/CC-MAIN-20140416005218-00056-ip-10-147-4-33.ec2.internal.warc.gz"} |
@mukushla Prove \(ab + bc + ac \ge \sqrt{3abc \left(a+b+c\right)}\).
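(An editorial sketch of one standard argument, assuming \(a, b, c \ge 0\); this note is not part of the original page.) Squaring both sides, the claim is equivalent to \((ab+bc+ca)^2 \ge 3abc(a+b+c)\). Expanding the left side gives \[(ab+bc+ca)^2 = a^2b^2+b^2c^2+c^2a^2 + 2abc(a+b+c),\] so it is enough to show \(a^2b^2+b^2c^2+c^2a^2 \ge abc(a+b+c)\). With \(x=ab\), \(y=bc\), \(z=ca\) this is just \(x^2+y^2+z^2 \ge xy+yz+zx\), which always holds.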
{"url":"http://openstudy.com/updates/4ffff704e4b0848ddd64e527","timestamp":"2014-04-17T16:07:04Z","content_type":null,"content_length":"37358","record_id":"<urn:uuid:1427f845-be5b-406d-8e7b-fc6b830dfb91>","cc-path":"CC-MAIN-2014-15/segments/1398223210034.18/warc/CC-MAIN-20140423032010-00050-ip-10-147-4-33.ec2.internal.warc.gz"}
Anamorphic distortion of a tracked user. [Archive] - OpenGL Discussion and Help Forums
01-07-2011, 07:12 AM
Hey everyone!
I'm doing a project where I'm tracking a user's position (head tracking) through 360 degrees using a webcam array. I am currently obtaining the user's real-world XYZ co-ordinates and wish to apply these
into an OpenGL program that will present the user with a view of a 3D object based on their current perspective.
The outcome I am aiming for is much like this video: http://vimeo.com/10776715
I have managed to create the tracking side of this up to the point where I have co-ordinates to be used. At this point I am a little stuck as to how best to approach the rendering and display of
the objects. I have just started trying to learn OpenGL (GLUT + GLTools) for C++ but there is so much to it.
could anyone with experience help me figure out the steps involved in the way this is displayed?
I know it has something to do with perspective distortion (like those street chalk paintings) but have no contextual idea of how this is done within OpenGL.
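The usual way to get that effect is an asymmetric (off-axis) view frustum recomputed every frame from the tracked head position. Below is a minimal sketch of the geometry, written in Python for clarity; the screen-centred coordinate system, units, and names are assumptions, and in C++ the six returned values would be passed to glFrustum, with the scene additionally translated by the negated eye position:

def off_axis_frustum(eye, screen_w, screen_h, near, far):
    # Asymmetric (off-axis) frustum for a viewer at eye = (ex, ey, ez),
    # measured from the centre of the physical screen, with the screen
    # lying in the z = 0 plane and the viewer at ez > 0.  The six values
    # correspond to glFrustum(left, right, bottom, top, near, far).
    ex, ey, ez = eye
    scale = near / ez                        # project the screen edges onto the near plane
    left   = (-screen_w / 2.0 - ex) * scale
    right  = ( screen_w / 2.0 - ex) * scale
    bottom = (-screen_h / 2.0 - ey) * scale
    top    = ( screen_h / 2.0 - ey) * scale
    return left, right, bottom, top, near, far

# e.g. a head 0.6 m from a 0.5 m x 0.3 m screen, offset 0.1 m to the right:
print(off_axis_frustum((0.1, 0.0, 0.6), 0.5, 0.3, 0.05, 100.0))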
Hope someone can help me to realise this! | {"url":"http://www.opengl.org/discussion_boards/archive/index.php/t-173162.html","timestamp":"2014-04-16T04:37:10Z","content_type":null,"content_length":"23178","record_id":"<urn:uuid:0dc996b9-4dc6-419e-a32e-66912536b130>","cc-path":"CC-MAIN-2014-15/segments/1397609521512.15/warc/CC-MAIN-20140416005201-00091-ip-10-147-4-33.ec2.internal.warc.gz"} |
I found the inverse of \[f(x)=-2cos(3x)\] \[f^{-1}(x)=\frac 13 cos^{-1}(-\frac x 2)\] how do I find the domain? Is it all real numbers?
Domain of... f(x) or f^(-1) (x)?
of \[f'(x)\]
i Mean \[f^{-1}\]
Try \(f^{-1} (100)\), what would you get?
it's an imaginary number according to Wolfram
I suppose the function should give real output if you input a real number. Okay, when you put x= 100 into f^(-1) (x), it doesn't give you real output. So, it's not in the domain of y. Consider
cosine function, range of cosine function is -1 ≤ cos x ≤ 1. Consider the inverse of cosine function, domain of the inverse is the range of the cosine function, so the domain of cos^(-1)x is [-1,
1]. Now, you have cos^(-1) (x/2) in your inverse function, hmm.. how would you get the domain of the inverse function?
Let's see if I understand this... \[f^{-1}(f(x))=sin^{-1}(sinx)=x \;\;\;\;where \;\;\;\;-\frac{\pi}{2}\le x\le\frac{\pi}{2}\] \[f(f^{-1}(x))=sin(sin^{-1}x)=x \;\;\;\;where\;\;\;\;-1\le x\le1\]
I"m trying to picture this. What's bothering me is that fact the domain and range of one function change to the range and domain of the next function....I'm trying to keep them in order somehow
the cosine gives numbers between -1 and 1 there is no real number x, where cos(x)= 2 (for example) so x = acos(2) has no real solution. You would want to restrict the domain to -1 to +1 for the
case of acos(x/2) -1 ≤ x/2 ≤ 1 or -2 ≤ x ≤ 2
but the restriction of the inverse of cosine is \[0\le x \le \pi\] but since it's x/2 do I multiply or divided that by two? or am i totally on the wrong path...
you mean the restriction on the cosine is 0 to pi the restriction on the domain of the inverse cos(x) is -1 ≤ x ≤ 1
the restriction on the domain of the inverse cos(x) is -1 ≤ x ≤ 1 if you are given acos(x/2) then rename the variable to y= x/2 you know the domain is restricted to -1 ≤ y ≤ 1 but with y= x/2 -1
≤ x/2 ≤ 1 solve for x, (2 separate relations), to get -2 ≤ x ≤ 2 that is the domain in terms of x for acos(x/2)
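A compact way to state the conclusion reached above:
\[-1 \le -\frac{x}{2} \le 1 \quad\Longleftrightarrow\quad -2 \le x \le 2,\]
so the domain of \(f^{-1}(x)=\frac13 \cos^{-1}\!\left(-\frac{x}{2}\right)\) is \([-2, 2]\) (and its range is \(\left[0, \frac{\pi}{3}\right]\)).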
I find gauss's law easier to understand than this for whatever reason. I'm very visual and I can't visualize this....I would much rather draw unit circles and sine graphs, and modify that to
represent the current problem that we're working on.
I'll look at it again later...too tired to comprehend.
| {"url":"http://openstudy.com/updates/511a5651e4b03d9dd0c2f910","timestamp":"2014-04-21T15:48:47Z","content_type":null,"content_length":"74269","record_id":"<urn:uuid:c78c9713-6bac-4dee-8908-3d916c41ea18>","cc-path":"CC-MAIN-2014-15/segments/1398223202774.3/warc/CC-MAIN-20140423032002-00386-ip-10-147-4-33.ec2.internal.warc.gz"}
Nahant Prealgebra Tutor
Find a Nahant Prealgebra Tutor
...I've also co-led several writers' workshops, and I'm a professional writer and editor of math and science textbooks. My educational background is in biochemical engineering. I received a full
academic scholarship to Penn State and an NSF fellowship to MIT for graduate school.
23 Subjects: including prealgebra, chemistry, writing, physics
...I am patient, enthusiastic about learning, and will work very hard with you to achieve your academic goals. JoannaI have three years' experience tutoring high school students in biology. I have
extensive coursework and research experience in Biology and am passionate about the field.
10 Subjects: including prealgebra, chemistry, geometry, biology
...I have traveled to several Spanish-speaking countries such as Panama, Dominican Republic, and even Cuba, so I do have experience with local linguistic immersion. Additionally, I am currently
both a math and a Spanish tutor at Andover, and I have loved helping other students realize their potenti...
3 Subjects: including prealgebra, Spanish, algebra 1
...I have passed a number of your tests in related areas and I have a degree in History from Boston College. I also have passed the history teacher license exam for Massachusetts. I have also
tutored students in the subject in the past.
64 Subjects: including prealgebra, reading, chemistry, English
...I have been tutoring undergraduate and graduate students in research labs on MATLAB programming. In addition, I took Algebra, Calculus, Geometry, Probability and Trigonometry courses in high
school, and this knowledge helped me to achieve my goals in research projects involving 4-dimensional ma...
16 Subjects: including prealgebra, calculus, Chinese, algebra 1 | {"url":"http://www.purplemath.com/nahant_ma_prealgebra_tutors.php","timestamp":"2014-04-18T03:52:47Z","content_type":null,"content_length":"23779","record_id":"<urn:uuid:e2ef4571-6af4-4693-9e87-19c41d3957c7>","cc-path":"CC-MAIN-2014-15/segments/1398223203841.5/warc/CC-MAIN-20140423032003-00555-ip-10-147-4-33.ec2.internal.warc.gz"} |
Re: st: calculating alphas with imputed data
Re: st: calculating alphas with imputed data
From daniel klein <klein.daniel.81@googlemail.com>
To statalist@hsphsun2.harvard.edu
Subject Re: st: calculating alphas with imputed data
Date Thu, 9 Jun 2011 09:27:01 +0200
I think you need to be a little bit more specific about what exactly
you mean by "alpha"? Most times I see "alpha" it is used to denote the
type I error in NHST. Since this quantity cannot be estimated, I
assume you mean something different. However, people use "alpha" to
denote the constant in a regression as in y = alpha + betaX +e. It is
also known in the context of reliability as Cronbach's alpha and there
are probably a lot of other situations where this greek letter is
used. Without knowing what it is you want to estimate it is not
possible to tell you how you would do it with multiply imputed data.
I assume you know the command to estimate "alpha" in non-imputed
datasets. I further assume you did check the supported -mi estimate-
estimation commands (-help mi estimate-) and did not find the command
you are looking for.
You may try to use -mi estimate ,cmdok : command- , where command is
the command you would use to estimate "alpha" in non-imputed datasets.
If this does not work, a good place to start might be here:
Combining point estimates is relatively straightforward. Simply run
the estimation on each dataset and save the results. The mean of those
results is the MI-Estimator.
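For a language-neutral illustration of that pooling rule (written in Python rather than Stata, with illustrative names; the variance part follows Rubin's rules and goes slightly beyond the mean mentioned above):

import statistics

def pool_estimates(estimates, variances):
    # Rubin's rules: pool one scalar estimate across m imputed datasets.
    # `estimates` are the per-dataset point estimates, `variances` their
    # squared standard errors.
    m = len(estimates)
    q_bar = statistics.fmean(estimates)        # pooled point estimate (the mean)
    u_bar = statistics.fmean(variances)        # within-imputation variance
    b = statistics.variance(estimates)         # between-imputation variance
    t = u_bar + (1 + 1 / m) * b                # total (pooled) variance
    return q_bar, t

# e.g. an "alpha" estimated on 5 imputed datasets:
print(pool_estimates([0.81, 0.79, 0.82, 0.80, 0.78],
                     [0.0004, 0.0005, 0.0004, 0.0004, 0.0005]))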
| {"url":"http://www.stata.com/statalist/archive/2011-06/msg00423.html","timestamp":"2014-04-19T22:43:05Z","content_type":null,"content_length":"8659","record_id":"<urn:uuid:5e2b461d-99f3-4d39-b4fb-8594a00e63db>","cc-path":"CC-MAIN-2014-15/segments/1397609537754.12/warc/CC-MAIN-20140416005217-00275-ip-10-147-4-33.ec2.internal.warc.gz"}
[Chart] layout1_plots ordinate types
Tim Docker tim at dockerz.net
Thu May 31 14:37:28 BST 2012
Hi Ben,
Yes. Depending on the value of layout1_yaxes_control, the value
associated with one axis may be duplicated on the other side. So
the types of the left and right axes need to be the same.
But, I think things would be a little cleaner and simpler, if
there were two different layouts:
one which had a single y axis, and an enumeration to decide if
the axis is to be shown on the left/right/both sides.
one which had independent y axes with different types.
ie something like:
data Layout1 x y = Layout1 {
    layout1_plots_ :: [Plot x y],
    ...
    }

data Layout1LR x y1 y2 = Layout1LR {
    layout1lr_plots_ :: [Either (Plot x y1) (Plot x y2)],
    ...
    }
On 30/05/12 23:05, Ben Gamari wrote:
> Is there a reason why layout1_plots :: [Either (Plot x y) (Plot x y)]
> enforces that both left and right axes have the same y type? I recently
> encountered a situation where I wanted to plot both Int and Float
> ordinates on the same plot. Unfortunately, I was unable to do this due
> to this limitation.
| {"url":"http://projects.haskell.org/pipermail/chart/2012-May/000037.html","timestamp":"2014-04-24T14:44:53Z","content_type":null,"content_length":"3682","record_id":"<urn:uuid:5649d425-8bd2-4f96-9cac-8be8b9b6db7c>","cc-path":"CC-MAIN-2014-15/segments/1398223206147.1/warc/CC-MAIN-20140423032006-00026-ip-10-147-4-33.ec2.internal.warc.gz"}
Hagen, Gregory, Mezić, Igor and Bamieh, B (2004), "Distributed control design for parabolic evolution equations: application to compressor stall ...", IEEE Transactions on Automatic Control, 49, 8:
Mathew, George, Mezić, Igor, Serban, R and Petzold, L (2004), "Optimization of mixing in an active micromixing device", Technical Proc. 2004 NSTI Nanotechnology Conf. Trade Show.
Mezić, Igor and Banaszuk, Andrzej (2004), "Comparison of systems with complex behavior", Physica D: Nonlinear Phenomena, 197, 1-2: 101--133.
M"uller, S, Mezić, Igor, Walther, J and Koumoutsakos, P (2004), "Transverse momentum micromixer optimization with evolution strategies", Computers and Fluids.
Banaszuk, Andrzej, Mehta, P and Mezić, Igor (2004), "Spectral balance: a frequency domain framework for analysis of nonlinear dynamical systems", Proceedings of the 43rd IEEE Conference on Decision
and Control.
Mezić, Igor and Runolfsson, T (2004), "Uncertainty analysis of complex dynamical systems", American Control Conference.
Vainchtein, Dimitri and Mezić, Igor (2004), "Optimal control of a co-rotating vortex pair: averaging and impulsive control", Physica D: Nonlinear Phenomena.
Bottausci, Frederic, Mezić, Igor, Meinhart, C and Cardonne, Caroline (2004), "Mixing in the shear superposition micromixer: three-dimensional analysis", Philosophical Transactions of the Royal
Society A: ....
Mezić, Igor (2003), "Nonlinear Dynamics and Ergodic Theory Methods in Control", Storming Media.
Chang, Dong Eui, Loire, Sophie and Mezić, Igor (2003), "Separation of bioparticles using the travelling wave dielectrophoresis with multiple frequencies", Proceedings of the 42nd IEEE Conference on
Decision and Control.
Vaidya, Umesh, D'Alessandro, Dominic and Mezić, Igor (2003), "Control of Heisenberg spin systems; Lie algebraic decompositions and action-angle variables", Proceedings of the 42nd IEEE Conference on
Decision and Control.
Mezić, Igor, Mathew, George, Bottausci, Frederic and Cardonne, Caroline (2003), "An Actively Controlled Micromixer: 3-D Theory", American Physical Society.
Balasuriya, S, Mezić, Igor and Jones, C (2003), "Weak finite-time Melnikov theory and 3D viscous perturbations of Euler flows", Physica D: Nonlinear Phenomena.
Chang, D, Loire, Sophie and Mezić, Igor (2003), "Closed-form solutions in the electrical field analysis for dielectrophoretic and travelling wave ...", Journal of Physics D-Applied Physics, 36, 23:
Mezić, Igor (2003), "Controllability of Hamiltonian systems with drift: Action-angle variables and ergodic partition", Proceedings of the 42nd IEEE Conference on Decision and Control.
Bottausci, Frederic, Cardonne, Caroline, Mezić, Igor and Meinhart, C (2003), "An Actively Controlled Micromixer: 3-D Experiments and Simulations", American Physical Society.
Solomon, T and Mezić, Igor (2003), "Uniform resonant chaotic mixing in fluid flows", Nature, 425, 6956: 376--380.
Vainchtein, Dimitri, Mezić, Igor and Neishtadt, A I (2003), "Resonance crossings and chaotic advection in Stokes flows", American Physical Society.
Vaidya, Umesh and Mezić, Igor (2003), "Controllability for a class of discrete-time Hamiltonian systems", Proceedings of the 42nd IEEE Conference on Decision and Control.
Bottausci, Frederic, Cardonne, Caroline, Mezić, Igor, Meinhart, C and Loire, Sophie (2003), "Shear Superposition Micromixer: 3-D Analysis", Proceedings of the ASME Mechanical Engineering
International Congress and Exposition, MEMS, Washington, DC.
Valente, A, McClamroch, N and Mezić, Igor (2003), "Hybrid dynamics of two coupled oscillators that can impact a fixed stop", International Journal of Non-Linear Mechanics.
Mezić, Igor (2003), "Controllability, integrability and ergodicity", Lecture notes in control and information sciences.
Vainchtein, Dimitri and Mezić, Igor (2002), "Control of vortex merger via elliptical vortex patch model", American Physical Society.
Mezić, Igor and Sotiropoulos, F (2002), "Ergodic theory and experimental visualization of invariant sets in chaotically advected flows", Physics of Fluids.
D'Alessandro, Dominic, Mezić, Igor and Dahleh, M (2002), "Statistical properties of controlled fluid flows with applications to control of mixing", Systems & control letters.
| {"url":"http://www.engr.ucsb.edu/~mgroup/joomla/index.php/research-mainmenu-67.html?start=75","timestamp":"2014-04-19T07:07:23Z","content_type":null,"content_length":"30119","record_id":"<urn:uuid:c8d580c5-dbd4-441d-b294-3388d44cf557>","cc-path":"CC-MAIN-2014-15/segments/1397609536300.49/warc/CC-MAIN-20140416005216-00334-ip-10-147-4-33.ec2.internal.warc.gz"}
st: AW: Problem with simple descriptive statistics
st: AW: Problem with simple descriptive statistics
From "Martin Weiss" <martin.weiss1@gmx.de>
To <statalist@hsphsun2.harvard.edu>
Subject st: AW: Problem with simple descriptive statistics
Date Tue, 7 Jul 2009 15:08:06 +0200
//18 funds
set obs 18
gen fund=_n
//100 periods
expand 100
bys fund: gen period=_n
//10 asset classes
expand 10
bys fund period: gen assetclassno=_n
//value normally distr.
gen value=rnormal(1000000,10000)
//let`s see
l in 1/20, noo
//get total per fund and period
bys fund period: /*
*/ egen totalfundassets=total(value)
//get share per period and fund
gen share=value/totalfundassets
//let`s see
l in 1/100, /*
*/ noo sepby(period)
-----Original Message-----
From: owner-statalist@hsphsun2.harvard.edu
[mailto:owner-statalist@hsphsun2.harvard.edu] On Behalf Of Stata Chris
Sent: Tuesday, 7 July 2009 14:54
To: statalist@hsphsun2.harvard.edu
Subject: st: Problem with simple descriptive statistics
Dear Statalisters,
I have a problem with the bysort syntax (or so I think).
For a set of 18 investment funds, I am trying to figure out for each
period what share of their total assets are held in different asset
classes, using the following code.
The idea was first to compute holdings of each asset class (as
specified by "assetclassno") and period, while summing across all
funds and individual assets. Then I wanted to sum these across all
assetclassno for each period, in order to then compute what share of
the total holdings in that period fell to each assetclassno. And then
I wanted to look at the results for one specific period. Here's the
bysort period assetclassno: egen holdings = sum(value)
bysort period: egen total = sum(holdings)
bysort period assetclassno: gen perc = 100 * holdings/total
keep if period==456
bysort assetclassno: sum perc
egen test = sum(perc)
sum total
sum test
The problem is that the fractions I get for the different assetclassno
(not all assetclasses are being held in that period though) in period
456 are all way too small, and they don't actually add up to what I'm
being told is the total. Strangely though, the test number is still
Does anyone see where my mistake is?
Many thanks and all best,
| {"url":"http://www.stata.com/statalist/archive/2009-07/msg00260.html","timestamp":"2014-04-18T05:35:47Z","content_type":null,"content_length":"8040","record_id":"<urn:uuid:7e4216de-4c36-4e81-b17c-5e027621222b>","cc-path":"CC-MAIN-2014-15/segments/1397609532573.41/warc/CC-MAIN-20140416005212-00412-ip-10-147-4-33.ec2.internal.warc.gz"}
A Modified Mann Iteration by Boundary Point Method for Finding Minimum-Norm Fixed Point of Nonexpansive Mappings
Abstract and Applied Analysis
Volume 2013 (2013), Article ID 768595, 6 pages
Research Article
A Modified Mann Iteration by Boundary Point Method for Finding Minimum-Norm Fixed Point of Nonexpansive Mappings
^1College of Science, Civil Aviation University of China, Tianjin 300300, China
^2Tianjin Key Laboratory for Advanced Signal Processing, Civil Aviation University of China, Tianjin 300300, China
Received 22 November 2012; Accepted 13 February 2013
Academic Editor: Satit Saejung
Copyright © 2013 Songnian He and Wenlong Zhu. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction
in any medium, provided the original work is properly cited.
Let H be a real Hilbert space and C ⊂ H a closed convex subset. Let T : C → C be a nonexpansive mapping with the nonempty set of fixed points F(T). Kim and Xu (2005) introduced a modified Mann iteration x_0 = x ∈ C, y_n = β_n x_n + (1 − β_n)T x_n, x_{n+1} = α_n u + (1 − α_n)y_n, where u ∈ C is an
arbitrary (but fixed) element, and {α_n} and {β_n} are two sequences in (0, 1). In the case where 0 ∈ C, the minimum-norm fixed point of T can be obtained by taking u = 0. But in the case where 0 ∉ C, this iteration process becomes
invalid because the iterates may not belong to C. In order to overcome this weakness, we introduce a new modified Mann iteration by boundary point method (see Section 3 for details) for finding the minimum-norm
fixed point of T and prove its strong convergence under some assumptions. Since our algorithm does not involve the computation of the metric projection P_C, which is often used so that strong
convergence is guaranteed, it is easily implementable. Our results improve and extend the results of Kim, Xu, and some others.
1. Introduction
Let C be a subset of a real Hilbert space H whose inner product and induced norm are denoted by ⟨·, ·⟩ and ‖·‖, respectively. A mapping T : C → C is called nonexpansive if ‖Tx − Ty‖ ≤ ‖x − y‖ for all x, y ∈ C. A point x ∈ C is called a fixed point of
T if Tx = x. Denote by F(T) the set of fixed points of T. Throughout this paper, F(T) is always assumed to be nonempty.
The iteration approximation processes of nonexpansive mappings have been extensively investigated by many authors (see [1–12] and the references therein). A classical iterative scheme was introduced
by Mann [13], which is defined as follows. Take an initial guess x_0 ∈ C arbitrarily and define {x_n} recursively by

x_{n+1} = α_n x_n + (1 − α_n)T x_n,   (1)

where {α_n} is a sequence in the interval [0, 1]. It is well known that under certain conditions the
sequence generated by (1) converges weakly to a fixed point of T, and Mann iteration may fail to converge strongly even in the setting of infinite-dimensional Hilbert spaces [14].
Some attempts to modify the Mann iteration method (1) so that strong convergence is guaranteed have been made. Nakajo and Takahashi [1] proposed a hybrid projection modification (2) of the Mann iteration
method (1), in which P_K denotes the metric projection from H onto a closed convex subset K of H. They proved that if the sequence {α_n} is bounded above from one, then the sequence defined by (2) converges strongly to P_{F(T)}(x_0). But, at
each iteration step, an additional projection needs to be calculated, which is not easy in general. To overcome this weakness, Kim and Xu [15] proposed a simpler modification of Mann's iteration
scheme, which generates the iteration sequence via the following formula:

y_n = β_n x_n + (1 − β_n)T x_n,   x_{n+1} = α_n u + (1 − α_n)y_n,   (3)

where u ∈ C is an arbitrary (but fixed) element, and {α_n} and {β_n} are two sequences in (0, 1). In the setting of Banach spaces, Kim and Xu
proved that the sequence generated by (3) converges strongly to a fixed point of T under certain appropriate assumptions on the sequences {α_n} and {β_n}.
In many practical problems, such as optimization problems, finding the minimum-norm fixed point of nonexpansive mappings is quite important. In the case where 0 ∈ C, taking u = 0 in (3), the sequence generated
by (3) converges strongly to the minimum-norm fixed point of T [15]. But, in the case where 0 ∉ C, the iteration scheme (3) becomes invalid because the iterates may not belong to C.
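As a small numerical illustration of this behaviour, here is a sketch in Python of a Kim-Xu-type iteration (the two-step form of (3) assumed above) with anchor u = 0, applied to a concrete nonexpansive map; the map, the parameter choices, and all names are purely illustrative:

import numpy as np

def T(x, centre=np.array([3.0, 4.0]), radius=1.0):
    # Metric projection onto a closed ball: a simple nonexpansive map
    # whose fixed-point set F(T) is the ball itself.
    d = x - centre
    n = np.linalg.norm(d)
    return x if n <= radius else centre + radius * d / n

x = np.array([10.0, -2.0])
u = np.zeros(2)                                 # anchor u = 0 (here 0 lies in C)
for n in range(1, 2001):
    alpha, beta = 1.0 / (n + 1), 0.5            # alpha_n -> 0, sum of alpha_n diverges
    y = beta * x + (1 - beta) * T(x)
    x = alpha * u + (1 - alpha) * y

print(x)               # approaches the ball's point nearest the origin, (2.4, 3.2),
print(T(np.zeros(2)))  # i.e. the minimum-norm fixed point P_{F(T)}(0)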
To overcome this weakness, a natural way to modify algorithm (3) is to adopt the metric projection P_C so that the iteration sequence belongs to C; that is, one may consider a scheme (4) in which each iterate of (3) is projected back onto C.
However, since the computation of a projection onto a closed convex subset is generally difficult, algorithm (4) may not be a good choice.
The main purpose of this paper is to introduce a new modified Mann iteration for finding the minimum-norm fixed point of T, which not only has strong convergence under some assumptions but also has
nothing to do with any projection operators. At each iteration step, a point in ∂C (the boundary of C) is determined in a particular way, so our modification method is called the boundary point method (see
Section 3 for details). Moreover, since our algorithm does not involve the computation of the metric projection, it is very easily implementable.
The rest of this paper is organized as follows. Some useful lemmas are listed in the next section. In the last section, a function defined on is given firstly, which is important for us to construct
our algorithm, then our algorithm is introduced and the strong convergence theorem is proved.
2. Preliminaries
Throughout this paper, we adopt the notations listed as follows:(1) converges strongly to ;(2) converges weakly to ;(3) denotes the set of cluster points of (i.e., such that );(4) denotes the
boundary of .
We need some lemmas and facts listed as follows.
Lemma 1 (see [16]). Let be a closed convex subset of a real Hilbert space and let be the (metric of nearest point) projection from onto (i.e., for , is the only point in such that ). Given and . Then
if and only if there holds the following relation:
Since is a closed convex subset of a real Hilbert space , so the metric projection is reasonable and thus there exists a unique element, which is denoted by , in such that ; that is, . is called the
minimum norm fixed point of .
Lemma 2 (see [17]). Let be a real Hilbert space. Then there holds the following well-known results:(G1) for all ;(G2) for all .
We will give a definition in order to introduce the next lemma. A set is weakly closed if for any sequence such that , there holds .
Lemma 3 (see [18, 19]). If is convex, then is weakly closed if and only if is closed.
Assume is weakly closed; a function is called weakly lower semicontinuous at if for any sequence such that ; then holds. Generally, we called weakly lower semi-continuous over if it is weakly lower
semi-continuous at each point in .
Lemma 4 (see [18, 19]). Let be a subset of a real Hilbert space and let be a real function; then is weakly lower semi-continuous over if and only if the set is weakly closed subset of , for any .
Lemma 5 (see [20]). Let be a closed convex subset of a real Hilbert space and let be a nonexpansive mapping such that . If a sequence in is such that and , then .
The following is a sufficient condition for a real sequence to converge to zero.
Lemma 6 (see [21, 22]). Let be a nonnegative real sequence satisfying If , and satisfy the conditions: (A1);(A2)either or ;(A3);then .
3. Iterative Algorithm
Let be a closed convex subset of a real Hilbert space . In order to give our main results, we first introduce a function by the following definition: Since is closed and convex, it is easy to see
that is well defined. Obviously, for all in the case where . In the case where , it is also easy to see that and for every (otherwise, ; we have ; this is a contradiction).
An important property of is given as follows.
Lemma 7. is weakly lower semi-continuous over .
Proof. If , then for all and the conclusion is clear. For the case , using Lemma 4, in order to show that is weakly lower semi-continuous, it suffices to verify that is a weakly closed subset of for
every ; that is, if such that , then (i.e., ). Without loss of generality, we assume that (otherwise, there hold for and for , resp., and the conclusion holds obviously). Noting is convex, we have
from the definition of that for each , holds for all . Clearly, . Using Lemma 3, then . This implies that Consequently, and this completes the proof.
Since the function will be important for us to construct the algorithm of this paper below, it is necessary to explain how to calculate for any given in actual computing programs. In fact, in
practical problem, is often a level set of a convex function ; that is, is of the form , where is a real constant. Without loss of generality, we assume that and . Then it is easy to see that, for a
given , we have Thus, in order to get the value , we only need to solve a algebraic equation with a single variable , which can be solved easily using many methods, for example, dichotomy method on
the interval . In general, solving a algebraic equation above is quite easier than calculating the metric projection . To illustrate this viewpoint, we give the following simple example.
Example 8. Let be a strongly positive linear bounded operator with coefficient ; that is, there is a constant with the property , for all . Define a convex function by where is a given point in and
is the only solution of the equation . (Notice that is a monogamy.) Setting , then it is easy to show that is a nonempty convex closed subset of such that . (Note that and .) For a given , we have .
In order to get , let , where is an unknown number. Thus we obtain an algebraic equation Consequently, we have that is,
Now we give a new modified Mann iteration by boundary point method.
Algorithm 9. Define in the following way: where and ,.
Since is closed and convex, we assert by the definition of that, for any given , holds for every , and then is guaranteed, where is generated by Algorithm 9. Obviously, for all if . If , calculating
the value implies determining , a boundary point of , and thus our algorithm is called boundary point method.
Theorem 10. Assume that and satisfy the following conditions:(D1);(D2);(D3). Then generated by (17) converges strongly to .
Proof. We first show that is bounded. Taking arbitrarily, we have By induction, Thus, is bounded and so are and . As a result, we obtain by condition (D1) that
We next show that It suffices to show that Using (17), it follows from direct calculating that Substituting (24) into (23), we obtain Note the fact that (since is monotone increasing) and conditions
(D1)–(D3); it concludes by using Lemma 6 that . Noting (20) and (25), we obtain Using Lemma 5, it derives that .
Then we show that Indeed take a subsequence of such that Without loss of generality, we may assume that . Noticing , we obtain from and Lemma 1 that
Finally, we show that . Using Lemma 2 and (17), it is easy to verify that Hence, where It is not hard to prove that , by conditions (D1) and (D2), and by (29). By Lemma 6, we concludes that , and the
proof is finished.
Finally, we point out that a more general algorithm can be given for calculating the fixed point for any given . In fact, it suffices to modify the definition of the function by the following form:
Algorithm 11. Define in the following way: where and , where is defined by (33).
By an argument similar to the proof of Theorem 10, it is easy to obtain the result below.
Theorem 12. Assume that , and and satisfy the same conditions as in Theorem 10; then generated by (34) converges strongly to .
This work was supported in part by the Fundamental Research Funds for the Central Universities (ZXH2012K001) and in part by the Foundation of Tianjin Key Lab for Advanced Signal Processing. | {"url":"http://www.hindawi.com/journals/aaa/2013/768595/","timestamp":"2014-04-20T01:16:51Z","content_type":null,"content_length":"467187","record_id":"<urn:uuid:d52adf34-c21f-4d68-8ea0-693a31759ef8>","cc-path":"CC-MAIN-2014-15/segments/1397609537804.4/warc/CC-MAIN-20140416005217-00089-ip-10-147-4-33.ec2.internal.warc.gz"} |
Multiaxial fatigue models the real world
Adarsh Pun
Senior product manager
Santa Ana, Calif.
An automotive steering knuckle is one component that frequently carries loads from several directions, making it a candidate for biaxial fatigue analysis.
The graph shows the history from first load case in the Y axis. The software identifies a few details to the right.
Angle versus principal stress is a plot of w[p] versus the maximum absolute principal stress for all time points at the critical node, 7977. Higher stress levels tend to line up vertically at a
particular angle, suggesting mobility is minimal and that uniaxial conditions exist. Mobility refers to a stress factor with a wide range of excursions -- it's not stationary along one plane. Stress
cycles that show mobility should be eliminated with peak-and-valley-slicing because they have no influence on damage to the component under test.
A plot of biaxial ratio versus maximum absolute principal stress is for all time points at the critical node 7977. The biaxial ratio, a[e], tends to line up vertically close to zero for this node,
indicating a uniaxial condition for higher stress values. The lower stress values should be gated out.
The plot displays the number of times each angle, w[p], appeared during loading. A spike indicates a predominant angle, in this case about -40°. Other angles appear occasionally but are due to lower
stress cycles.
Mohr's circles indicate three special biaxial conditions. When biaxiality analysis is negative (as indicated by the Mohr's circles of stress), the maximum shear plane (where cracks tend to initiate)
is oriented as shown in the left diagram. In early initiation stages, cracks grow mainly along the surface in shear (Mode 2) before becoming normal to the maximum stress (Mode 1).
The assumption that loads tug in one direction is a simplification that works well, to a point. In the real world, however, loads are simultaneously applied in several directions, producing stresses
with no bias to a particular direction. In 3D geometry, these stresses are called multiaxial. To accurately calculate fatigue damage, analyses must identify multiaxial stresses and use appropriate
But multiaxial fatigue analysis has been costly and time consuming because it required investigating stresses from loads applied at multiple locations and how they combine at particular or critical
locations. A recent feature found in more advanced analysis packages reduces the time for multiaxial fatigue simulations by considering multiple load cases simultaneously with time variations to
identify regions of multiaxial stress. In addition, the features examine only those regions that warrant multiaxial analyses.
A steering knuckle on a passenger car provides an example of multiaxial stress analysis. The steering knuckle has a strut mount at the top, ball joint at the bottom, and a steering arm on the side.
The wheel spindle fits through a hole in the center. Driving a vehicle over a cobblestone slalom applies loads to the steering knuckle through the strut mount, lower ball joint, steering tie rod, and
wheel axis -- a multiaxial load condition.
Modeling the real world
Fatigue calculations are postprocessing functions, so they must be preceded by a linear or nonlinear finite-element analysis. The fatigue solver imports analysis results involving multiaxial repeated
loads, fluctuating loads, and rapidly applied loads.
Cracks, the major sign of fatigue, usually begin on a component's surface unless there is a flaw in the material. Therefore, the fatigue solver uses only the surface elements and nodes from the
results. This eliminates internal elements and nodes, which reduces processing time without affecting result accuracy. The process next identifies multiaxial-stress regions to further limit the
solver's work. The software must calculate surface-resolved stresses (plane stresses on a structure's surface) to correctly calculate multiaxial relationships and assessments. For example, the two
principal stresses are in the plane of the surface. The third principal stress, normal to the surface, is zero, unless the part is subject to internal pressure.
Shell models produce surface stresses by default. However, many solid models produce stress results in coordinate systems that must be transformed into surface resolved stresses. And it takes surface
stresses to correctly calculate biaxiality ratios and perform multiaxial assessments.
The software easily finds surface stresses. It generates a vector file for coordinate transformations. A function calculates normals (the Z axis) and defines a local coordinate system at each surface
node. Stress results from the fatigue analysis are written in terms of this coordinate system, limiting multiaxial analysis to the X-Y planes. This puts all stresses into a consistent coordinate
Users assign fatigue properties in a material-information form for potentially hundreds of different groups and materials, including corrections for surface finish and treatments. Our
steering-knuckle analysis is based on properties from as-cast specimens with the same surface and finish.
To transfer loads to components as realistically as possible, they are applied using rigid elements at precise locations. The steering knuckle model was constrained at the wheel center and 12 load
cases were applied, including three forces (1,000 N in X, Y, and Z) at the lower ball joint, steering arm and strut mount, and three moments (1,000 N-mm) at the strut mount. Any real-world loading
condition from the test track can be replicated using a combination of these 12 load cases.
Results from the fatigue analysis are stored in a database. They show that greatest damage or shortest life appears to be near the loading devices at the end of the steering arm. A list function
shows node 7977 to be the critical one with a life of 330 loading cycles, while most remaining nodes fall below a critical damage cutoff. This is user defined but a frequent default value is 20% of
ultimate tensile strength (UTS).
A biaxiality analysis tells whether the fatigue analysis is appropriate to the stress states in the component, among other things. It also:
● Calculates surface-resolved stresses by transforming stress results to local coordinate systems at each location where the x-y plane is the plane of the surface.
● Reorders principal stresses from the conventional order. s[z] is the surface normal stress and should be 0. s[1] and s[2] are ordered in magnitude. s[1] is the largest in-plane principal in
absolute value, which makes s[2] the other in-plane stress.
● Determines the multiaxial ratio for every location at every time point: a[e] = s[2]/s[1]. The angle w[p] that s[1] makes with the local X axis is also retained for each location at every time
● Describes the surface stress state completely by s1, a[e] and w[p].
In addition, a[e] and the w[p] become a little unstable when stresses are small. A gate filter removes small stresses when calculating statistics with these parameters.
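These quantities are easy to compute once the stresses are surface resolved. The following minimal Python sketch (illustrative only, not MSC.Fatigue code; names are made up) takes the in-plane components of a surface-resolved stress state and returns the reordered principals, the biaxiality ratio, and the angle:

import math

def biaxiality(sx, sy, txy):
    # Principal stresses of a surface-resolved (plane) stress state,
    # the biaxiality ratio a_e = s2/s1 (with |s1| >= |s2|), and the angle
    # w_p that s1 makes with the local x axis, in degrees.
    c = 0.5 * (sx + sy)
    r = math.hypot(0.5 * (sx - sy), txy)
    p1, p2 = c + r, c - r                                     # conventional principals
    ang1 = 0.5 * math.degrees(math.atan2(2 * txy, sx - sy))   # direction of p1
    if abs(p1) >= abs(p2):
        s1, s2, w_p = p1, p2, ang1
    else:
        s1, s2, w_p = p2, p1, ang1 + 90.0
    a_e = s2 / s1 if s1 != 0 else 0.0          # small stresses would be gated in practice
    return s1, s2, a_e, w_p

print(biaxiality(200.0, 0.0, 0.0))      # uniaxial: a_e = 0
print(biaxiality(100.0, -100.0, 0.0))   # pure shear: a_e = -1
print(biaxiality(150.0, 150.0, 0.0))    # equibiaxial: a_e = +1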
When biaxiality analysis is positive, cracks tend to be down through the thickness. These are more damaging for the same levels of shear strain. Uniaxial loading, a special case, refers to local
stress-state variations, not the overall or global loading environment. Although the global loading imposes complex out-of-phase loads, variations to local stress are less complex at critical (most
likely for cracks to start) locations because geometry imposes simplicity. For instance, a stress state at the edge of a thin metal sheet will always be uniaxial.
In addition, biaxiality analyses calculate and plot three main indicators: a mean-biaxiality ratio, biaxiality-ratio standard deviation, and angle spread. The mean biaxiality ratio is the biaxiality
ratio average over the entire combined time signal for every location.
The average is used in calculations throughout the loading history. Values are ignored, however, when stress does not exceed a minimum value, by default 20% of UTS. Certain biaxiality ratios also
indicate special loading conditions. For example, a mean ratio of zero indicates uniaxial or below-minimum loading, -1 indicates pure shear or torsion, and +1 indicates equibiaxial stress; a ratio near 0.3 suggests
plane strain.
The biaxiality-ratio-standard deviation tells of the ratio's variability, such as, whether or not loading is proportional. Small ratio values indicate proportional loading, meaning that magnitudes of
s[1] and s[2] vary proportionally to one another.
Large standard deviations in the multiaxial ratio indicate nonproportionality between these two stresses. Nonproportional loading is more difficult to handle and results may be misleading.
Angle spread indicates the mobility of the absolute maximum principal stress (w[p] ranges from 0 to 180°). For example, 45° or so does not indicate a problem, but movement of 90° or more indicates
nonproportional loading, or the angle may occur when there is pure shear -- when stress flips through 90°.
Further postprocessing with biaxiality cross plots includes plotting all outputs, biaxiality versus principal stress, angle versus principal stress, and angle distribution.
In addition, the outputs display time variations of all parameters, such as biaxial ratio, a[e], and angle w[p], for critical locations. Time variations of these parameters are not as useful as
cross-plots against principal stress for all time points.
│ Setting up a fatigue analysis │
│ Category │ Possible setting │ Explanation │
│ 1. Analysis Method │ STW │ Smith-Topper-Watson is a variant on the standard strain-life method that considers the mean stress of each cycle. │
│ 2. Plasticity Correction │ Neuber │ Neuber is the default elastic-plastic correction method. │
│ 3. Run Biaxiality │ On │ │
│ Analysis │ │ │
│ 4. Biaxiality Correction │ None │ │
│ 5. Strain Combination │ Maximum absolute │ This is a default choice and is the principal strain having the largest magnitude. In a uniaxial test the selection would be Axial Strain. │
│ │ principal │ │
│ 6. Design Criterion │ 50.0 │ Design Criterion defaults to 50%, giving the component a 50% chance of surviving the calculated life. The probability is base on the scatter │
│ │ │ defined in the material parameters. │
│ 7. Factor of Safety │ Off │ Components intended for infinite life are best analyzed with Factor of Safety Analysis. This is not relevant to the steering knuckle. │
│ Analysis │ │ │
│ Setting up fatigue analysis in MSC.Fatigue is done with simple inputs. First, enter General Setup Parameters. Then fill out the Solution Parameters form. Solution parameters for the steering │
│ knuckle appear in the table. Notice, biaxiality correction is set to none because we must identify the stress state (uniaxial, biaxial or triaxial) in various regions in the model before a │
│ multiaxial analysis. │
A summary in depth
The steering-knuckle example demonstrated a procedure for multiaxial analysis. If the stress state at a particular location is other than uniaxial, advanced solvers in MSC.Fatigue can account for
proportional or nonproportional loading.
The proportional-loading approach is based on a[e] being nonzero but constant, and with minimal stress-tensor mobility. Material Parameter and Hoffman-Seeger are two methods for modifying uniaxial
material properties. The material-parameter method makes a new set of parameters for each stress state. But it is only valid for use with a maximum strain based combination that indicates the maximum
absolute principal.
The Hoffman-Seeger method makes the same assumptions, but it also makes what is called a Neuber correction in equivalent stress-strain space. Its advantage is that it predicts all principal stresses
and strains, so it can be used with any equivalent stress or strain combination parameter.
Accounting for nonproportional loading is still a major research topic. Generally, predicting fatigue life from a nonproportional loading system can only be done properly by doing a critical-plane
analysis using one of several critical plane algorithms. Critical-plane algorithms require multiple analyses at representative angles of w[p], as well as adoption of a new counting procedure, taking
into consideration that a cycle may begin on one plane and close in another. Notch correction for plasticity also becomes complicated and uses a kinematic hardening model (the equivalent of using
Neuber and Masing's hypothesis for a uniaxial state). However, one should not assume a nonproportional loading situation just because of complex external loading and geometry. | {"url":"http://machinedesign.com/print/archive/multiaxial-fatigue-models-real-world","timestamp":"2014-04-17T20:08:06Z","content_type":null,"content_length":"28978","record_id":"<urn:uuid:cc753095-7168-435f-8984-5bb20141c88b>","cc-path":"CC-MAIN-2014-15/segments/1397609530895.48/warc/CC-MAIN-20140416005210-00038-ip-10-147-4-33.ec2.internal.warc.gz"} |
What Is a Map Projection?
Human beings have known that the shape of the Earth resembles a sphere and not a flat surface since classical times, and possibly much earlier than that. If the world were indeed flat, cartography
would be much simpler because map projections would be unnecessary.
To represent a curved surface such as the Earth in two dimensions, you must geometrically transform (literally, and in the mathematical sense, "map") that surface to a plane. Such a transformation is
called a map projection. The term projection derives from the geometric methods that were traditionally used to construct maps, in the fashion of optical projections made with a device called camera
obscura that Renaissance artists relied on to render three-dimensional perspective views on paper and canvas.
While many map projections no longer rely on physical projections, it is useful to think of map projections in geometric terms. This is because map projection consists of constructing points on
geometric objects such as cylinders, cones, and circles that correspond to homologous points on the surface of the planet being mapped according to certain rules and formulas.
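For a concrete sense of such a rule, the forward Mercator projection of a sphere is one of the simplest classical examples. The short Python sketch below is generic textbook math rather than Mapping Toolbox code, and the radius value is only illustrative:

import math

def mercator_forward(lat_deg, lon_deg, radius=6371.0):
    # Forward Mercator projection of a point on a sphere (coordinates in km):
    # longitude maps linearly to x, latitude is stretched toward the poles.
    lam = math.radians(lon_deg)
    phi = math.radians(lat_deg)
    x = radius * lam
    y = radius * math.log(math.tan(math.pi / 4 + phi / 2))
    return x, y

print(mercator_forward(51.5, -0.13))   # roughly London on the projected plane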
The following sections describe the basic properties of map projections, the surfaces onto which projections are developed, the types of parameters associated with different classes of projections,
how projected data can be mapped back to the sphere or spheroid it represents, and details about one very widely used projection system, called Universal Transverse Mercator.
│ Note Most map projections in the toolbox are implemented as MATLAB^® functions; however, these are only used by certain calling functions (such as geoshow and axesm), and thus have no │
│ documented public API. │
For more detailed information on specific projections, browse the Supported Map Projections. For further reading, Bibliography provides references to books and papers on map projection. | {"url":"http://www.mathworks.co.uk/help/map/what-is-a-map-projection.html?nocookie=true","timestamp":"2014-04-24T18:53:22Z","content_type":null,"content_length":"34606","record_id":"<urn:uuid:afe8e7be-3735-40a8-af44-479235091b59>","cc-path":"CC-MAIN-2014-15/segments/1398223206672.15/warc/CC-MAIN-20140423032006-00292-ip-10-147-4-33.ec2.internal.warc.gz"} |
Core-Plus Mathematics Project
We examined the classroom practices of 20 teachers during the field test of CPMP Course 1. Ten of these teachers comprised the top quartile of field-test teachers and the other 10 the bottom quartile
with respect to their students' growth in mathematical achievement over the one-year course. Achievement was measured by a nationally standardized test called the Ability to Do Quantitative Thinking
which is the mathematics subtest of the Iowa Tests of Educational Development. The primary data sources were: trained observer's holistic rating of the alignment of the instructional practice and
classroom climate with CPMP's teaching for understanding model, self-perceptions of practice by the teachers, and expressed concerns of the teachers about the new curriculum.
The research results from this study, summarized below, are reported in a peer-reviewed article published in the Journal for Research in Mathematics Education:
Schoen, H. L., Finn, K. F., Cebulla, K. J., & Fi, C. (2003). Teacher variables that relate to student achievement when using a standards-based curriculum. Journal for Research in Mathematics
Education, 34(3) 228-259.
The description of the "effective" (i.e., first-quartile) teacher that emerged from analyzing the data from these sources follows. This teacher may be of either gender, but we will use female
pronouns for convenience.
• She would either have strong preparation in reform curriculum and teaching before her first CPMP class, or she would have completed a workshop to specifically prepare her to teach the curriculum.
That preparation appears to be very important. A year of teaching a pilot version of the same CPMP course does not appear to be a good substitute for a focused professional development
• She may teach in a wide variety of urban, suburban, or rural school settings. The beginning achievement level of her students may also vary widely. She would most likely be teaching classes of
students who have a wide range of mathematical interests and aptitudes, although that is equally true of teachers in the fourth quartile in this study.
• She would use the various parts of the CPMP lessons in ways that align well with the developers' expectations. For example, she would use mainly whole class discussion during the launch, spend
about two-thirds of her class time on student investigations in which students were mainly working in small groups or pairs, and only spend about 10% of class time working on or reviewing
• She would use the CPMP recommendations for homework for the most part, keeping in mind that in each lesson the recommendations involve several choices for teachers and students.
• She would assign "Extending" problems regularly - about one for homework and one in class per lesson.
• She would use a variety of assessment techniques including group observations, written and oral reports, and take-home exams. She would also use student journals but typically not for grading
• About 50% of her students' grades would be based on in-class exams and quizzes, another 20% on homework, about 10% on group work, and the remainder spread among written and oral reports,
notebooks, and attendance/class participation. Each semester, or at least each year, she would assign a group project entailing several days of student work selected from those provided in the
Assessment Resources.
• She would not be likely to supplement the curriculum materials, and if she did it would probably be to add more discovery material. She would also be unlikely to supplement or revise the
assessment materials except possibly to combine similar questions or mix forms of a test or quiz. In particular, she would not be inclined to make either the materials or the assessments more
structured or skill-oriented.
• A trained classroom observer would be likely to rate her class as "Excellent" or "Good" in terms of its alignment with CPMP's teaching for understanding model.
• By year's end, this teacher would have few concerns about the CPMP curriculum. She would feel well informed about the curriculum, its goals and the resources it provides. She would feel confident
of her ability to manage her class in group and pair investigations and comfortable with the changes required, including changes in her role as a teacher. Most likely, she would have little
concern about the impact that the curriculum has on her students' levels of understanding, algebraic skills, and excitement about mathematics. After one year of using CPMP, she would have little
concern about trying to improve upon the curriculum. | {"url":"http://www.wmich.edu/cpmp/1st/faq-pieces/localimp.html","timestamp":"2014-04-17T13:18:17Z","content_type":null,"content_length":"18977","record_id":"<urn:uuid:45577430-a7e1-480f-87f9-a9d51839b1cc>","cc-path":"CC-MAIN-2014-15/segments/1397609530131.27/warc/CC-MAIN-20140416005210-00194-ip-10-147-4-33.ec2.internal.warc.gz"} |
integral of (x^6ln(x))dx
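One way to evaluate it is integration by parts with \(u=\ln x\) and \(dv=x^6\,dx\), which gives
\[\int x^6\ln x\,dx=\frac{x^7}{7}\ln x-\int\frac{x^7}{7}\cdot\frac{1}{x}\,dx=\frac{x^7}{7}\ln x-\frac{x^7}{49}+C.\]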
| {"url":"http://openstudy.com/updates/51131656e4b0e554778abb9d","timestamp":"2014-04-19T02:20:56Z","content_type":null,"content_length":"42022","record_id":"<urn:uuid:a76c6a25-fc11-420a-beae-2be4781c764f>","cc-path":"CC-MAIN-2014-15/segments/1398223206120.9/warc/CC-MAIN-20140423032006-00615-ip-10-147-4-33.ec2.internal.warc.gz"}
Watson Lecture: Exploring Einstein's Legacy
November 15th, 2005 in Physics /
November 25 marks the 90th anniversary of Einstein's formulation of his theory of general relativity, which describes gravity as a consequence of the warping of space and time.
Since then, physicists have been trying to understand and test general relativity's predictions, including the existence of black holes (which are made not of matter but of whirling space and warped
time), gravitational waves, and the acceleration of the universe. "We don't understand the predictions very well because we are not clever enough to solve Einstein's equations when spacetime is
highly warped and dynamical," says Kip Thorne, the Richard P. Feynman Professor of Theoretical Physics at the California Institute of Technology.
In his November 16 talk, "Einstein's General Relativity, from 1905 to 2005: Warped Spacetime, Black Holes, Gravitational Waves, and the Accelerating Universe," Thorne will discuss the progress that
physicists have made in understanding warped spacetime, and he will discuss prospects for rapid future progress using gravitational wave detectors such as LIGO and supercomputer simulations.
"Einstein's predictions have turned out to reach into the domain of our every day technology. For example, time flows more slowly on the earth than it does in the Global Positioning System's
satellites high above the surface of the earth. The software that computes where we are from the GPS signals must correct for the warping of time from there to here, or the system would fail," Thorne
Cosmologists deal with the warping of space and time "all over the sky," Thorne says, because the whole universe is warped. In the Big Bang, the birth of the universe, "everything came out of a
singularity, a place where space and time were infinitely warped," he says. "My hope is that after this lecture the listener will understand what we mean by warped spacetime, and how Einstein came up
with such a crazy idea in the first place."
The talk is the third program of the 2005-2006 Earnest C. Watson Lecture Series, and the last of four special lectures in Caltech's Einstein Centennial Lecture Series. The series celebrates the
centennial of Einstein's annus mirabilis (miracle year) in 1905, when, at the ripe age of 26, he published three seminal papers proving the dual particle and wave nature of light and the existence
and size of molecules, and creating the special theory of relativity and his revolutionary E=mc^2 equation.
Thorne's lecture will take place at 8 p.m. in Beckman Auditorium, 332 S. Michigan Avenue south of Del Mar Boulevard, on the Caltech campus in Pasadena. Seating is available on a free,
no-ticket-required, first-come, first-served basis. Caltech has offered the Watson Lecture Series since 1922, when it was conceived by the late Caltech physicist Earnest Watson as a way to explain
science to the local community.
Source: Caltech
"Watson Lecture: Exploring Einstein's Legacy." November 15th, 2005. http://phys.org/news8194.html | {"url":"http://phys.org/print8194.html","timestamp":"2014-04-17T04:39:48Z","content_type":null,"content_length":"6968","record_id":"<urn:uuid:3d4cf247-6cb2-4f44-85f7-7009ab8a3d56>","cc-path":"CC-MAIN-2014-15/segments/1397609526252.40/warc/CC-MAIN-20140416005206-00134-ip-10-147-4-33.ec2.internal.warc.gz"} |
Prevalence Rate Formula
Crude Prevalence Rate Formula. Point Prevalence Formula. Formula for Prevalence. Formula for Incidence Rate. Calculating Incidence and Prevalence.
Prevalence Rate Formula, Formula for Incidence Rate, How to Calculate Point Prevalence, Prevalence Rates Definition,
1. Leave a Comment | {"url":"http://www.agalligani.com/photographyhrzu/Prevalence-Rate-Formula.html","timestamp":"2014-04-19T22:13:25Z","content_type":null,"content_length":"60376","record_id":"<urn:uuid:93b64fc8-7ed7-4b0c-a8d9-37adf27b6093>","cc-path":"CC-MAIN-2014-15/segments/1397609537754.12/warc/CC-MAIN-20140416005217-00274-ip-10-147-4-33.ec2.internal.warc.gz"} |
Matches for: Author/Editor=(Myung_Hyo_Chul)
Recent Progress in Algebra
Edited by:
Sang Geun Hahn
Korea Advanced Institute of Science & Technology, Taejon, Korea
Hyo Chul Myung
Korea Institute for Advanced Study, Seoul
, and
Efim Zelmanov
Yale University, New Haven, CT
             
This volume presents the proceedings of the international conference on "Recent Progress in Algebra" that was held at the Korea Advanced Institute of Science and Technology (KAIST) and Korea Institute for Advanced Study (KIAS). It brought together experts in the field to discuss progress in algebra, combinatorics, algebraic geometry and number theory. This book contains selected papers contributed by conference participants. The papers cover a wide range of topics and reflect the current state of research in modern algebra.
Graduate students and research mathematicians working in algebra and combinatorics.
Contemporary Mathematics, Volume: 224
1999; 243 pp; softcover
ISBN-10: 0-8218-0972-5
ISBN-13: 978-0-8218-0972-3
List Price: US$67
Member Price: US$53.60
Order Code: CONM/
• G. W. Anderson -- A double complex for computing the sign-cohomology of the universal ordinary distribution
• G. Benkart -- Down-up algebras and Witten's deformations of the universal enveloping algebra of \(\mathfrak{sl}_2\)
• T. Chinburg, B. Erez, G. Pappas, and M. Taylor -- Localizations of Grothendieck groups and Galois structure
• I. V. Dolgachev -- Invariant stable bundles over modular curves \(\mathbf{X(p)}\)
• A. Elduque -- Okubo algebras and twisted polynomials
• E.-U. Gekeler -- Some new results on modular forms for \(\mathbf{GL}(2,\mathbb F_q[T])\)
• H. C. Jung -- Counting jump optimal linear extensions of some posets
• M. Kosuda -- The irreducible representations of categories
• A. R. Magid -- Prounipotent prolongation of algebraic groups
• C. Martinez -- Graded simple Jordan algebras and superalgebras
• D. Moon -- The centralizer algebra of the Lie superalgebra \(\mathfrak p(n)\) and the decomposition of \(V^{\otimes k}\) as a \(\mathfrak p(n)\)-module
• I. Yu. Potemine -- Drinfeld-Anderson motives and multicomponent \(\mathbf{KP}\) hierarchy
• Yu. G. Zarhin and B. J. J. Moonen -- Weil classes and Rosati involutions on complex abelian varieties
• E. Zelmanov -- On some open problems related to the restricted Burnside problem | {"url":"http://www.ams.org/bookstore?arg9=Hyo_Chul_Myung&fn=100&l=20&pg1=CN&r=1&s1=Myung_Hyo_Chul","timestamp":"2014-04-21T12:28:42Z","content_type":null,"content_length":"16105","record_id":"<urn:uuid:cad6dc52-4a48-4e59-a8e2-b7209ef011d4>","cc-path":"CC-MAIN-2014-15/segments/1397609539776.45/warc/CC-MAIN-20140416005219-00027-ip-10-147-4-33.ec2.internal.warc.gz"}
E271 -- Theoremata arithmetica nova methodo demonstrata
(Demonstration of a new method in the Theory of Arithmetic)
Euler presents a third proof of the Fermat theorem, the one that lets us call it the Euler-Fermat theorem. This seems to be the proof that Euler likes best. He also proves that the smallest power x^n that, when divided by a number N prime to x, leaves a remainder of 1, is equal to the number of parts of N that are prime to n, that is to say, the number of distinct aliquot parts of N.
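In modern notation, the Euler–Fermat theorem referred to here states that $a^{\varphi(N)} \equiv 1 \pmod{N}$ whenever $\gcd(a,N)=1$, where $\varphi(N)$ is the number of integers between 1 and $N$ that are relatively prime to $N$. (This restatement is added for the reader's convenience; it is not part of the Euler Archive summary.)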
According to C. G. J. Jacobi, a treatise with this title was read to the Berlin Academy on June 8, 1758.
According to the records, it was presented to the St. Petersburg Academy on October 15, 1759.
• Originally published in Novi Commentarii academiae scientiarum Petropolitanae 8, 1763, pp. 74-104
• Opera Omnia: Series 1, Volume 2, pp. 531 - 555
• Reprinted in Commentat. arithm. 1, 1849, pp. 274-286 [E271a]
• A handwritten French translation of this treatise can be found in the library of the observatory in Uccle, near Brussels.
Documents Available:
• Original publication: E271
• Other works that cite this paper include:
□ Dickson
□ Rudio
□ Andre Weil, Number Theory
Return to the Euler Archive | {"url":"http://www.math.dartmouth.edu/~euler/pages/E271.html","timestamp":"2014-04-20T11:24:25Z","content_type":null,"content_length":"2088","record_id":"<urn:uuid:88aa4865-a800-4990-9770-95d1c6f889a6>","cc-path":"CC-MAIN-2014-15/segments/1398223202774.3/warc/CC-MAIN-20140423032002-00496-ip-10-147-4-33.ec2.internal.warc.gz"} |
When is a topological group Hausdorff (separated)?
Does someone know a good reference for the following result?
"A topological group is Hausdorff if and only if the identity is closed."
I have seen proofs in lecture notes of courses on the web, but I would like a reference in a book or an article, in order to refer to it.
topological-groups reference-request
3 I don't think this is the sort of fact you have to have a reference for in a paper. – Steven Gubkin Jun 14 '12 at 2:27
You could find a routine proof in the book "Topological Rings" written by Seth Warner. In this book, at page 21 in Theorem 3.4, you could see the following Proposition:
Theorem: Let $G$ be a topological group. The following statements are equivalent:
1. {$0$} is closed.
2. {$0$} is an intersection of the neighborhoods of zero.
3. $G$ is Hausdorff.
4. $G$ is regular.
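[Added note, not quoted from Warner's book: a minimal sketch of the implication 1 $\Rightarrow$ 3. Since $G$ is a topological group, the map $m : G \times G \to G$, $m(x,y) = x - y$, is continuous, and the diagonal $\Delta = \{(x,x) : x \in G\}$ equals $m^{-1}(\{0\})$. So if $\{0\}$ is closed, $\Delta$ is closed in $G \times G$, and a topological space is Hausdorff exactly when its diagonal is closed in the product.]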
You could also find an improvement of it in the book "Topology for Analysis" written by Albert Wilansky. In Section 12, at page 243, you could see the following theorem:
THEOREM: Every topological group is completely regular. The following conditions on a topological group $G$ are equivalent:
• $G$ is a $T_0$ space.
• $G$ is a Tychonoff space.
• $\bigcap\{U : U \text{ is a neighborhood of } e\} = \{e\}$
The proof of complete regularity has more details. I think the proof is at the level of the Urysohn Lemma.
But if you are interested in the general case, I suggest you look at the section "uniformity", which is discussed in the following books:
• Topology for Analysis: Chapter 11
• General topology: Stephen Willard: chapter 9
You can probably find this result in a million places, one of which is N. Bourbaki, General Topology, Part 1, Chapter 3, Section 1.2, p. 223, Proposition 2.
Bourbaki, General Topology, III.2.5, prop 13. This is from an answer to this question.
Not the answer you're looking for? Browse other questions tagged topological-groups reference-request or ask your own question. | {"url":"http://mathoverflow.net/questions/99478/when-is-a-topological-group-hausdorff-separated?sort=votes","timestamp":"2014-04-19T04:31:23Z","content_type":null,"content_length":"58715","record_id":"<urn:uuid:3f33ec1a-bc52-4e08-a78f-6be629bc704e>","cc-path":"CC-MAIN-2014-15/segments/1397609535775.35/warc/CC-MAIN-20140416005215-00662-ip-10-147-4-33.ec2.internal.warc.gz"} |
Modus ponens
From Encyclopedia of Mathematics
law of detachment, rule of detachment
A derivation rule in formal logical systems. The rule of modus ponens is written as a scheme
$$\frac{A,\quad A\supset B}{B}$$
where $\supset$ denotes implication. Modus ponens allows one to deduce $B$ from the premises $A$ and $A\supset B$.
Modus ponens can be considered as an operation on the derivations of a given formal system, allowing one to form the derivation of the formula $B$ from given derivations of the formulas $A$ and $A\supset B$.
The more precise Latin name of the law of detachment is modus ponendo ponens. In addition there is modus tollendo ponens, which is written as the scheme
$$\frac{A\vee B,\quad \neg A}{B}$$
[a1] P. Suppes, "Introduction to logic" , v. Nostrand (1957)
[a2] A. Grzegorczyk, "An outline of mathematical logic" , Reidel (1974)
How to Cite This Entry:
Modus ponens. V.N. Grishin (originator), Encyclopedia of Mathematics. URL: http://www.encyclopediaofmath.org/index.php?title=Modus_ponens&oldid=13025
This text originally appeared in Encyclopedia of Mathematics - ISBN 1402006098 | {"url":"http://www.encyclopediaofmath.org/index.php/Modus_ponens","timestamp":"2014-04-21T02:05:11Z","content_type":null,"content_length":"18412","record_id":"<urn:uuid:78c4670c-3ea8-4c20-a28f-d100ebfbcb1f>","cc-path":"CC-MAIN-2014-15/segments/1398223203841.5/warc/CC-MAIN-20140423032003-00559-ip-10-147-4-33.ec2.internal.warc.gz"} |
Is there a simplicial volume definition of Chern Simons invariants?
Suppose we have some compact hyperbolic 3-manifold $M=\Gamma\backslash\mathbb H^3$. Now we know that the hyperbolic volume of $M$ can be defined as (a constant times) the simplicial volume of the
fundamental class $[M]\in H_3(M,\mathbb Z)$, which is a homotopy invariant.
Now the hyperbolic volume and Chern-Simons invariant of $M$ are connected by the following definition: $$i(\operatorname{Vol}(M)+i\operatorname{CS}(M))=\frac 12\int_M\operatorname{tr}(A\wedge dA+\frac 23 A\wedge A\wedge A)\in\mathbb C/4\pi^2\mathbb Z$$ where $A$ is any flat connection on the trivial principal $\operatorname{SL}(2,\mathbb C)$-bundle over $M$ whose monodromy is the isomorphism $\pi_1
(M)=\Gamma$. This corresponds to a particularly natural homomorphism (based on a dilogarithm) in $H_3(\operatorname{SL}(2,\mathbb C),\mathbb Z)\to\mathbb C/4\pi^2\mathbb Z$ (see work of Neumann and
This close connection between the two invariants $\operatorname{Vol}(M)$ and $\operatorname{CS}(M)$ motivates the following question:
Is there a definition of $\operatorname{CS}(M)$ within the framework of simplicial volume?
If you use eta invariant in place of Chern-Simons invariant, there is almost such a definition, at least in a closely related context. If we restrict to surface bundles over the circle with
fiber of fixed genus, then the eta invariant of a fibered 3-manifold can be thought of as a certain kind of class function on the mapping class group. Such eta invariants exist for many
different kinds of (unitary) representations (of subgroups of mapping class groups), and under suitable circumstances (see e.g. http://arxiv.org/abs/1003.4977) the functions they define on
(subgroups of) mapping class groups are examples of what are known as homogeneous quasimorphisms.
Quasimorphisms arise in the theory of bounded cohomology; there is a duality theory relating them to a (relative) 2-dimensional Gromov norm, called stable commutator length. Gromov norm here
is just a synonym for simplicial volume, as in your question.
Not the answer you're looking for? Browse other questions tagged simplicial-volume chern-simons-theory chern-classes polylogarithms hyperbolic-geometry or ask your own question. | {"url":"http://mathoverflow.net/questions/84522/is-there-a-simplicial-volume-definition-of-chern-simons-invariants?sort=votes","timestamp":"2014-04-18T08:51:22Z","content_type":null,"content_length":"52393","record_id":"<urn:uuid:8fc3448d-e8d3-414e-a7ef-083005fccbe7>","cc-path":"CC-MAIN-2014-15/segments/1398223211700.16/warc/CC-MAIN-20140423032011-00139-ip-10-147-4-33.ec2.internal.warc.gz"} |
Figure 5.
Flowchart of the proposed method. Starting with experimental time series, the data are smoothed and balanced for mass conservation, if necessary. The slopes of the time series at each time point are
estimated. Combined with the knowledge of the system topology, substitution of the derivatives in the ODE with slope information yields a linear system of fluxes. If the system has full rank, solve
the system with techniques from linear algebra. If the system is underdetermined, use auxiliary steps, as proposed in this article, to solve a subset of the fluxes until the system is of full rank.
The results are the dynamic profiles of all extra- and intra-cellular fluxes in the system. If desired, make assumptions regarding the functional forms of the fluxes. These functions correspond to
symbolic flux representations that can be independently fitted to the respective dynamic flux profiles and result in a fully parameterized kinetic model. As an alternative each process may be
approximated as a piecewise function, for instance using spline methods.
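In symbols (this restatement is added for clarity; the symbols are generic labels rather than the paper's own notation): writing $x(t)$ for the measured concentrations, $v(t)$ for the unknown fluxes and $N$ for the stoichiometric matrix encoding the system topology, replacing the derivatives in $\dot{x}(t) = N\,v(t)$ by the estimated slopes $s(t_k)$ gives, at each sampled time point $t_k$, the linear system $N\,v(t_k) = s(t_k)$, which determines the flux profiles directly when $N$ has full rank and otherwise requires the auxiliary steps described above.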
Chou and Voit BMC Systems Biology 2012 6:84 doi:10.1186/1752-0509-6-84
Download authors' original image | {"url":"http://www.biomedcentral.com/1752-0509/6/84/figure/F5?highres=y","timestamp":"2014-04-24T14:12:10Z","content_type":null,"content_length":"12660","record_id":"<urn:uuid:1f9a504a-9c63-45c1-a346-4576ff4ff509>","cc-path":"CC-MAIN-2014-15/segments/1398223206147.1/warc/CC-MAIN-20140423032006-00533-ip-10-147-4-33.ec2.internal.warc.gz"} |
Using DOE method in trading
To define the parameter space, we choose minimums and maximums for the first four factors:
1. [50 – 250]
2. [80 – 100]
3. [15 – 45]
4. [35 – 65]
The fifth factor is a discrete integer that will be defined to take on the values of 2, 3, 4, 5 and 6. The first factor is also a discrete integer, but its large interval allows us to treat it as
continuous. Any value we find will be truncated to an integer without serious effect to the investigation.
Using the Gosset computer program described by Hardin and Sloane (see “Further reading”), we generate parameter settings for 56 trial runs. Daily data for the SPY and the Vix from Jan. 29, 1993,
through Oct. 17, 2003, were selected for the trial runs. A computer program written in the GAUSS programming language (www.aptech.com) was developed for the simulation of the trading system given the
To produce a more realistic result, each simulation started with an account of $500,000, paid a $20 transaction cost per trade, and a bid/ask spread was also included to simulate slippage.
For hill-climbing, the Sqpsolvemt program in the GAUSS Run-Time Module was used. Sqpsolvemt uses a nonlinear sequential quadratic programming method allowing us to constrain the search to the
parameter space defined by the minima and maxima. It doesn’t handle integers, however, and thus a separate hill-climb will be conducted within each value of Factor 5, and the sweet spot will be the
best result among those separate runs.
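In outline (this formalization is added for clarity; the article's exact model specification is not reproduced here), each trial's backtest profit $P$ is treated as a response to the four continuous factors $\theta_1,\dots,\theta_4$ and fitted with a cubic polynomial response surface,
$$\hat{P}(\theta)=\beta_0+\sum_i\beta_i\theta_i+\sum_{i\le j}\beta_{ij}\theta_i\theta_j+\sum_{i\le j\le k}\beta_{ijk}\theta_i\theta_j\theta_k,$$
estimated by least squares from the trial runs; the "sweet spot" for each value of Factor 5 is then the constrained maximizer of $\hat{P}$ over the box $\theta_i^{\min}\le\theta_i\le\theta_i^{\max}$, which is the job handed to the sequential quadratic programming solver.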
The 56 trials for the cubic polynomial model provide no degrees of freedom, and an initial run found sweet spots with large error deviations. To gain degrees of freedom and a better fit to the
observations, a center point was added, and the sweet spots from the initial run were also added for a total of 62 trials. The results for each value of Factor 5 are shown in “By a factor of 5."
Rather than rely on the backtesting for our choice of sweet spot, we will forward test each of them in another data set, the SPY and Vix from Oct. 17, 2003, through Oct. 12, 2009. The results for
these runs are shown in “Forward thinker”.
Opinions might reasonably differ here. On the one hand, the RSI period of four clearly wins the profit contest. On the other, the greater theoretical prediction for a period of three might suggest
that it would be superior in the long run. None of these sweet spots generates much of an annualized rate of return, but to be fair the forward test includes the 2008 market meltdown.
The forward test results for the original settings in Connors and Alvarez (see "Further reading") are:
Profit/Loss   Annualized Return   Sharpe Ratio   Maximum Drawdown
28329.22      0.93                0.5819         -20434.68
The DOE method has succeeded in finding settings that produce from 3.5-1 to 4.5-1 greater profit, and more than three times the annualized return, compared to the original input values recommended by
the trading system authors.
There aren’t enough degrees of freedom in the cubic polynomial regression model for very much statistical reliability. However, some tentative findings are possible from the t-statistics of the
coefficients. First, the results don’t seem to be very sensitive to the period of the SPY moving average. In fact, a period of 200 for the sweet spot for the RSI period of three produces results in
the forward test that are comparable:
Factor 5   Profit       Annualized Return   Sharpe Ratio   Maximum Drawdown
3          131987.39    4%                  1.6738         -25140.78
4          130783.56    3.96%               1.2546         -24001.11
While the DOE results suggest the longer the period the better, a 250-day moving average is a full year’s worth of data and longer periods may be impractical in some situations. It does appear that
the traditional period of 200, while still long, should do quite well in practice.
Connors and Alvarez strongly hold to an RSI period of two in contrast to the traditional period of 14. The DOE results first support their contention that a period of 14 is too long. On the other
hand, the results also suggest that a modification to a period of three or four could produce significantly greater profits.
The regression analysis also suggests that the RSI period (Factor 5) and the Vix RSI setting (Factor 2) are interrelated. A future study might produce better results that incorporated separate RSI
periods for the Vix vs. the SPY.
Further reading:
• Connors, L. and C. Alvarez, “Short Term Trading Strategies That Work,” TradingMarkets Publishing Group, 2009
• R. H. Hardin and N. J. A. Sloane, “A New Approach to the Construction of Optimal Designs,” Journal of Statistical Planning and Inference
Ronald Schoenberg is a partner and research manager at Trading Desk Strategies LLC. E-mail him at ronschoenberg@optionbots.com or see www.optionbots.com. | {"url":"http://www.futuresmag.com/2010/02/01/using-doe-method-in-trading?t=financials&page=2","timestamp":"2014-04-24T15:31:47Z","content_type":null,"content_length":"44236","record_id":"<urn:uuid:e6dd37bb-5528-4fb2-826c-8e450cae7b57>","cc-path":"CC-MAIN-2014-15/segments/1398223206147.1/warc/CC-MAIN-20140423032006-00055-ip-10-147-4-33.ec2.internal.warc.gz"} |
Spacecraft Navigation
SECTION II. SPACE FLIGHT PROJECTS (Cont'd)
Chapter 13. Spacecraft Navigation
Upon completion of this chapter you will be able to describe basic principles of spacecraft navigation, including spacecraft velocity and distance measurement, angular measurement, and orbit determination. You will be able to describe spacecraft trajectory correction maneuvers and orbit trim maneuvers.
Navigating a spacecraft involves measuring its radial distance and velocity, the angular direction to it, and its velocity in the plane-of-sky. From these data, a mathematical model may be constructed and maintained, describing the history of a spacecraft's location in three-dimensional space over time. Any necessary corrections to a spacecraft's trajectory or orbit may be identified based on the model. The navigation history of a spacecraft is incorporated in the reconstruction of its observations of the planet it encounters; it may be applied to the construction of SAR images. Some of the basic factors involved in acquiring navigation data are described below.
Data Types
The art of spacecraft navigation draws upon tracking data, which includes measurements of the Doppler shift of the downlink carrier and the pointing angles of DSN antennas. Navigation also uses data categorized as very long baseline interferometry (VLBI), explained below. These data types differ from the telemetry data, generated by science instruments and spacecraft health sensors, which is transmitted via modulated subcarrier.
Spacecraft Velocity Measurement
In two-way coherent mode, recall from Chapter 10 that a spacecraft determines its downlink frequency based upon a very highly stable uplink frequency. This permits the measurement of the induced Doppler shift to within 1 Hz, since the uplink frequency is known with great precision. The rates of movement of the Earth in its revolution about the sun and its rotation are known to a high degree of accuracy, and are removed. The resulting Doppler shift is directly proportional to the radial component of the spacecraft's velocity, and the velocity is thus computed.
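As a toy numerical illustration of that last step (this is not DSN software; the carrier frequency and the measured shift below are invented values, and the transponder turnaround ratio and relativistic corrections are ignored):

    #include <cstdio>

    int main() {
        const double c = 299792458.0;        // speed of light, m/s
        const double f_uplink = 7.2e9;       // assumed X-band uplink carrier, Hz
        const double doppler_shift = -48.0;  // assumed measured two-way shift, Hz

        // On a two-way coherent pass the shift accumulates on both the uplink
        // and the downlink leg, so it is roughly twice the one-way shift.
        // A lower received frequency (negative shift) means the range is growing.
        const double v_radial = -c * doppler_shift / (2.0 * f_uplink);

        std::printf("radial velocity ~ %.3f m/s (positive = receding)\n", v_radial);
        return 0;
    }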
Spacecraft Distance Measurement
A uniquely coded ranging pulse may be added to the uplink to a spacecraft, and its transmission time is recorded. When the spacecraft receives the ranging pulse, it returns the pulse on its downlink. The time it takes the spacecraft to turn the pulse around within its electronics is known from pre-launch testing. When the pulse is received at the DSN, its true elapsed time is determined, and the spacecraft's distance is then computed. Distance may also be determined, as well as its angular position, using triangulation. This is described below.
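The arithmetic of that computation is simple enough to show in a few lines (again purely illustrative, with invented numbers for the round-trip time and the onboard delay):

    #include <cstdio>

    int main() {
        const double c = 299792458.0;      // speed of light, m/s
        const double elapsed = 2400.0;     // s, assumed uplink-to-downlink elapsed time at the DSN
        const double turnaround = 1.5e-6;  // s, assumed delay inside the spacecraft electronics

        // Remove the known onboard turnaround delay, then halve the light time,
        // because the pulse covers the Earth-spacecraft distance twice.
        const double distance = c * (elapsed - turnaround) / 2.0;

        std::printf("one-way distance ~ %.3e km\n", distance / 1000.0);
        return 0;
    }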
Spacecraft Angular Measurement
The angles at which the DSN antennas point are recorded with an accuracy of thousandths of a degree. These data are useful, but even more precise angular measurements can be provided by VLBI, and by differenced Doppler. A VLBI observation of a spacecraft begins when two DSN stations on separate continents, separated by a very long baseline, track a single spacecraft simultaneously. High-rate recordings are made of the downlink's wave fronts by each station, together with precise timing data. DSN antenna pointing angles are also recorded. After a few minutes, and while still recording, both DSN antennas slew directly to the position of a quasar, which is an extragalactic object whose position is known with high accuracy. Then they slew back to the spacecraft, and end recording a few minutes later. Correlation and analysis of the recorded data yields a very precise triangulation from which both angular position and radial distance may be determined. This process requires knowledge of each station's location with respect to the location of Earth's axis with very high precision. Currently, these locations are known to within 3 cm. Their locations must be determined repeatedly, since the location of the Earth's axis varies several meters over a period of a decade.
Differenced Doppler can provide a measure of a spacecraft's changing three-dimensional position. To visualize this, consider a spacecraft orbiting a planet. If the orbit is in a vertical plane edge on to you, you would observe the downlink to take a higher frequency as it travels towards you. As it recedes away from you, and behind the planet, you notice a lower frequency. Now, imagine a second observer halfway across the Earth. Since the orbit plane is not exactly edge-on as that observer sees it, the other observer will record a slightly different Doppler signature. If you and the other observer were to compare notes and difference your data sets, you would have enough information to determine both the spacecraft's changing velocity and position in three-dimensional space. Two DSSs separated by a large baseline do exactly this. One DSS provides an uplink to the spacecraft so it can generate a stable downlink, and then it receives two-way. The other DSS receives a three-way downlink. The differenced data sets are frequently called "two-way minus three-way." High-precision knowledge of DSN Station positions, as well as a highly precise characterization of atmospheric refraction, makes it possible for DSN to measure spacecraft velocities accurate to within hundredths of a millimeter per second, and angular position to within 10 nano-radians.
Optical Navigation
Spacecraft which are equipped with imaging instruments can use them to observe the spacecraft's destination planet against a known background starfield. These images are called OPNAV images. Interpretation of them provides a very precise data set useful for refining knowledge of a spacecraft's trajectory.
Orbit Determination
The process of spacecraft orbit determination solves for a description of a spacecraft's orbit in terms of its Keplerian elements (described in Chapter 5) based upon the types of observations and measurements described above. If the spacecraft is en route to a planet, the orbit is heliocentric; if it is in orbit about a planet, the orbit determination is made in reference to that planet. Orbit determination is an iterative process, building upon the results of previous solutions. Many different data inputs are selected as appropriate for input to computer software which uses the laws of Newton and Kepler. The inputs include the various types of navigation data described above, as well as data such as the mass of the sun and planets, their ephemeris and barycentric movement, the effects of the solar wind, a detailed planetary gravity field model, attitude management thruster firings, atmospheric friction, and other factors.
The highly automated process of orbit determination is fairly taken for granted today. During the effort to launch America's first artificial Earth satellite, the JPL craft Explorer 1, a room-sized IBM computer was employed to figure a new satellite's trajectory using Doppler data acquired from Cape Canaveral and a few other tracking sites. The late Caltech physics professor Richard Feynman was asked to come to the Lab and assist with difficulties encountered in processing the data. He accomplished all of the calculations by hand, revealing the fact that Explorer 2 had failed to achieve orbit, and had come down in the Atlantic Ocean. The IBM mainframe was coaxed to reach the same result, hours after Professor Feynman had departed for the weekend.
Trajectory Correction Maneuvers
Once a spacecraft's solar or planetary orbital parameters are known, they may be compared to those desired by the project. To correct any discrepancy, a Trajectory Correction Maneuver (TCM) may be planned and executed. This involves computing the direction and magnitude of the vector required to correct to the desired trajectory. An opportune time is determined for making the change. For example, a smaller magnitude of change would be required immediately following a planetary flyby than would be required after the spacecraft had flown an undesirable trajectory for many weeks or months. The spacecraft is commanded to rotate to the attitude in three-dimensional space computed for implementing the change, and its thrusters are fired for a determined amount of time. TCMs generally involve a velocity change (delta-V) on the order of meters or tens of meters per second. The velocity magnitude is necessarily small due to the limited amount of propellant typically carried.
Orbit Trim Maneuvers
Small changes in a spacecraft's orbit around a planet may be desired for the purpose of adjusting an instrument's field-of-view footprint, improving sensitivity of a gravity field survey, or preventing too much orbital decay. Orbit Trim Maneuvers (OTMs) are carried out generally in the same manner as TCMs. To make a change increasing the altitude of periapsis, an OTM would be designed to increase the spacecraft's velocity when it is at apoapsis. To decrease the apoapsis altitude, an OTM would be executed at periapsis, reducing the spacecraft's velocity. Slight changes in the orbital plane's orientation may also be made with OTMs. Again, the magnitude is necessarily small due to the limited amount of propellant typically carried.
1. Spacecraft navigation draws upon __________________________ data, which includes measurements of the Doppler shift of the spacecraft's downlink carrier.
2. The resulting Doppler shift is directly proportional to the _______________ component of the spacecraft's velocity.
3. A VLBI observation of a spacecraft begins when two DSN stations on separate ____________________ track the spacecraft simultaneously.
4. If the spacecraft is en route to a planet, the orbit determined is ________________________ .
5. TCMs generally involve a velocity change (delta-V) on the order of _______________ or tens of _______________ per second.
1. Tracking
2. radial
3. continents
4. heliocentric
5. meters - or tens of - meters | {"url":"http://www.au.af.mil/au/awc/awcgate/jplbasic/bsf13-1.htm","timestamp":"2014-04-20T10:48:00Z","content_type":null,"content_length":"10984","record_id":"<urn:uuid:a95dc466-7b2d-435d-9a1a-55850434a8e4>","cc-path":"CC-MAIN-2014-15/segments/1397609538423.10/warc/CC-MAIN-20140416005218-00117-ip-10-147-4-33.ec2.internal.warc.gz"} |
Markl on Natural Differential Operators
Posted by Urs Schreiber
Just heard a talk by Martin Markl on
Natural Differential Operators and Graph Complexes.
He explained
• a way to make precise the idea that certain differential operators (like the Lie derivative, or the covariant derivative) are more natural than others,
• that all natural differential operators of a certain “type” arise as the 0th cohomology of a complex of graphs,
• where the graphs appearing here are like string diagrams representing the action of linear operators on tensor powers of vector spaces.
Given a space like $\mathbb{R}^n$ with coordinate functions $\{X^i\}$, we can form plenty of differential operators on the space of functions on this space by writing expressions in local coordinates
(1)$X^i X^j \frac{\partial}{\partial X^k}$
(2)$\left( f^i \frac{\partial g^j}{\partial X^i} - g^i \frac{\partial f^j}{\partial X^i} \right) \frac{\partial}{\partial X^j} \,.$
The second one is “natural” (it comes from the Lie derivative), the first one isn’t.
What exactly does this mean?
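(An informal aside added here, not part of Markl's talk: at the very least, "natural" should mean "independent of the coordinates used to write the formula down". Expression (2) has this property: computing it for two vector fields $f$ and $g$ in any other chart produces the same vector field. Expression (1) does not: already under the translation $X \mapsto X + a$ the coefficient $X^i X^j$ becomes $(X^i - a^i)(X^j - a^j)$, so which operator one obtains depends on the chosen coordinates.)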
Let $\mathrm{Man}_n$ be the category of $n$-dimensional smooth manifolds with morphisms being open embeddings. Let $\mathrm{Fib}_n$ be the category of fiber bundles over $\mathrm{Man}_n$.
We say a kind of bundle is natural if we can functorially associate it to manifolds:
Definition: A natural bundle is a functor $\mathbf{F} : \mathrm{Man}_n \to \mathrm{Fib}_n$ such that $F(M)$ is a bundle over $M$ and such that for any open submanifold $M' \subset M$ we have $F(M') = F(M)|_{M'}$, the restriction of $F(M)$ to $M'$.
In 1977 Palais and Terng proved a theorem which characterized natural bundles as precisely those fiber bundles that are associated by $\mathrm{GL}^k(n)$ to a $k$-frame bundle.
The bundle of $k$-frames over $M$
is defined to be the bundle of $k$-jets of local coordinate systems. This is like the ordinary bundle of frames plus higher derivatives of that. In particular, $F^1_n(M) = F_n(M)$ is the ordinary
frame bundle of $M$.
Similarly, $\mathrm{GL}^k(n)$ is the group of $k$-jets of local diffeomorphisms of $M$.
Theorem: For each natural bundle $\mathbf{F}$ there is a natural number $k \geq 1$ and a $\mathrm{GL}^k(n)$-space $F$ such that
(4)$\mathbf{F}(M) = F_n^k(M) \; \times_{\mathrm{GL}^k(n)} \; F \,.$
For instance, the tangent bundle is natural and we have
(5)$T(M) := T M = F_n^1(M) \times_{\mathrm{GL}(n)}\; \mathbb{R}^n \,.$
Definition: A natural differential operator is any morphism $D : \mathbf{F}^{(l)} \to \mathbf{G} \,,$ where $\mathbf{F}$ and $\mathbf{G}$ are natural bundles and $\mathbf{F}^{(l)}$ is the natural bundle obtained by taking $l$-jets of $\mathbf{F}$.
For instance, the Lie derivative takes two copies of the tangent bundle to the tangent bundle, depending on the first derivative of the vector fields involved. Hence it is a natural operator of the
(6)$T^{(1)} \oplus T^{(1)} \to T \,.$
The nice thing is that such natural differential operators can be understood in terms of equivariant maps:
Theorem: Let $\mathbf{F}$ and $\mathbf{G}$ be natural bundles with typical fibers $F$ and $G$, respectively, according to the above theorem. Then we have a bijection between natural differential
operators $D : \mathbf{F}^{(l)} \to \mathbf{G}$ and $\mathrm{GL}^{k+l}(n)$-equivariant maps $U : F^{(l)} \to G \,.$
$U$ is the local formula for the differential operator. For a reason unknown to the speaker and his audience, this theorem is known as IT-reduction.
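To see what this bijection does in the example above (the identification below is my gloss, not something stated in the talk): for the Lie bracket one has $k = l = 1$, $\mathbf{F} = T \oplus T$ and $\mathbf{G} = T$, and the corresponding $\mathrm{GL}^{2}(n)$-equivariant map sends the values and first derivatives of two vector fields $f$ and $g$ to
$U\big((f, \partial f), (g, \partial g)\big)^j = f^i \frac{\partial g^j}{\partial X^i} - g^i \frac{\partial f^j}{\partial X^i} \,,$
which is exactly the local formula (2).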
Next, we can decompose $F^{(l)}$ as well as $G$ into reps of the ordinary general linear group $\mathrm{GL}(n)$. The basic invariant theorem for such reps then tells us that all natural differential
operators must be obtainable by doing the familiar index contraction on linear maps, roughly.
But we don’t want to think in terms of index contraction, but instead in terms of plumbing. A linear map is represented by a graph with a single vertex, with a couple of incoming and a couple of
outgoing edges. Index contraction is connecting outgoing with incoming edges between maps.
The main message is that one can define a differential $\delta$ on graphs, which acts locally - in the sense that its action is completely specified by the action on graphs containing a single vertex
- such that the natural differential operators come from precisely those linear maps $U$ such that
(7)$\delta U = 0 \,.$
Markl indicated that there is something much more profound going on in the background, involving operads. But that’s where the talk ended.
Posted at October 31, 2006 8:48 PM UTC
Re: Markl on Natural Differential Operators
a way to make precise the idea that certain differential operators (like the Lie derivative, or the covariant derivative) are more natural than others
A morphism is natural if it commutes with diffeomorphisms. Why is that not precise enough?
Naturality depends on your symmetry group, though. If you consider symplectic or contact manifolds, say, a morphism is natural if it commutes with symplectomorphisms or contact transformations, respectively.
I think that natural morphisms wrt simple Lie algebras have been classified by Russian mathematicians, locally; perhaps some multi-linear cases remain. For natural morphisms wrt simple
infinite-dimensional Lie superalgebras see math.RT/0202193. Extra credit if you can actually decipher this paper - I failed, despite knowing the result in a different formalism.
Posted by: Thomas Larsson on November 1, 2006 5:32 PM | Permalink | Reply to this
Re: Markl on Natural Differential Operators
A morphism is natural if it commutes with diffeomorphisms. Why is that not precise enough?
It is indeed precise enough.
I think the point of the exercise was to find another (equivalent) precise definition, such that one obtains from it a constructive method of enumerating all natural differential operators.
This aspect is probably more pronounced in Markl’s abstract, than in my summary of his talk. (Partly because he ran over time before coming to his main points.)
The claim is apparently that studying the 0th cohomology of these graph complexes is helpful.
Posted by: urs on November 1, 2006 6:04 PM | Permalink | Reply to this
Re: Markl on Natural Differential Operators
Dear Friends
I am pleased by your interest in my talk. Right now, I am writing things down to spell them up, so I believe something more definite will be available in a month or so.
All started by my attempt to understand S. Merkulov’s idea of “PROP profiles” and see if and how one might apply his approach to differential geometry (whereas his methods live in formal geometry).
It turned out that classification of natural operators in a lot of interesting cases boils down to calculating the cohomology of graph complexes. These graph complexes are in fact isomorphic to subspaces of stable elements in certain Chevalley-Eilenberg complexes, so, formally speaking, the claim is that a certain Chevalley-Eilenberg cohomology is the cohomology of the corresponding graph
complex. Instances of this phenomenon were probably observed first by M. Kontsevich.
But what makes it EXCITING is that there are powerful methods developed during the "renaissance of operads" which give a deep understanding of these graph complexes. In fact, a miracle is already the
fact that the graph complexes arising in this context are of the type studied by operad people.
I have a couple of examples that are based on very difficult and apparently unknown calculations with graph complexes, which lead me to believe that also the results implied for natural operators are
unknown. This was also corroborated by some differential geometers I talked to.
And yes, I know Fuchs and the results of his school.
Another way to interpret the results is that they give an exact meaning to the "abstract tensor calculus." When we studied differential geometry in kindergarten, many of us, instead of writing hundreds of indices, drew simple pictures. My attempt then tries to put this kindergarten approach on solid footing. Which, by the way, means that all textbooks on differential geometry ought to be
rewritten to simplify the lives of readers.
As I said, I am working on a paper, so in a month or so, I will explain everything in detail.
Regards, Martin
Posted by: Martin Markl on November 3, 2006 10:07 AM | Permalink | Reply to this
Re: Markl on Natural Differential Operators
A preprint containing the above-mentioned results is now available as math.DG/0612183.
Posted by: Martin Markl on December 11, 2006 11:15 AM | Permalink | Reply to this | {"url":"http://golem.ph.utexas.edu/category/2006/10/markl_on_natural_differential.html","timestamp":"2014-04-18T01:27:04Z","content_type":null,"content_length":"35593","record_id":"<urn:uuid:8b1c285a-b168-4a34-9599-3f7021d3f0ed>","cc-path":"CC-MAIN-2014-15/segments/1397609532374.24/warc/CC-MAIN-20140416005212-00534-ip-10-147-4-33.ec2.internal.warc.gz"} |
Find a Saratoga, CA Tutor
...Let me help your child to build the confidence they need to be successful. I'm an Australian high school mathematics and science teacher, with seven years experience, who has recently moved to
the bay area because my husband found employment here. I'm an enthusiastic teacher, who loves helping students to succeed to the best of their ability, and loves all facets of mathematics and
11 Subjects: including chemistry, calculus, physics, geometry
...I'm a patient tutor with a positive, collaborative approach to building mathematical skills for algebra, pre-calculus, calculus (single variable) and advanced calculus (multi-variable).
Pre-calculus skills are very valuable for success on the mathematics section of the SAT exam and the SAT Math...
22 Subjects: including calculus, geometry, statistics, accounting
I am interested in tutoring elementary school aged children. I started working as a child care specialist at the age of 12. I have been a Nanny for three years for three girls, and I have taught
for over 12 years.
16 Subjects: including English, reading, writing, algebra 1
...I love tutoring math because I am extremely passionate for this subject, as I can genuinely admit that I would not mind solving math problems, as well as teaching it to students as frequently
as possible. I am extremely patient, understanding, and am well-paced when teaching. I can teach as slo...
8 Subjects: including geometry, algebra 1, algebra 2, trigonometry
...This approach has worked on students on both ends of the score spectrum. I have seen success with low scoring students by reaffirming fundamental concepts as well as directly addressing any
anxieties that they may have developed. I have also seen gains with high scoring students by fine tuning ...
10 Subjects: including ASVAB, SAT math, ACT Math, SAT reading
Related Saratoga, CA Tutors
Saratoga, CA Accounting Tutors
Saratoga, CA ACT Tutors
Saratoga, CA Algebra Tutors
Saratoga, CA Algebra 2 Tutors
Saratoga, CA Calculus Tutors
Saratoga, CA Geometry Tutors
Saratoga, CA Math Tutors
Saratoga, CA Prealgebra Tutors
Saratoga, CA Precalculus Tutors
Saratoga, CA SAT Tutors
Saratoga, CA SAT Math Tutors
Saratoga, CA Science Tutors
Saratoga, CA Statistics Tutors
Saratoga, CA Trigonometry Tutors | {"url":"http://www.purplemath.com/saratoga_ca_tutors.php","timestamp":"2014-04-20T06:40:18Z","content_type":null,"content_length":"23662","record_id":"<urn:uuid:9b8f68c5-4d97-4c20-842f-11407c89b65f>","cc-path":"CC-MAIN-2014-15/segments/1397609538022.19/warc/CC-MAIN-20140416005218-00091-ip-10-147-4-33.ec2.internal.warc.gz"} |
GMAT Test Preparing with Manhattan Review
The Advantage of a Group-Based GMAT Course
Manhattan Review's group-based test prep courses – both in-person and in our interactive online classroom – are taught by highly qualified instructors who have demonstrated mastery of the test by
scoring well on the GMAT themselves. We keep our class sizes small and have designed our program for students interested in working with instructors who can help them to set and achieve personal
Our instructors have strong resumes that include, in addition to high GMAT scores, degrees from elite business schools and in some cases real-world experience in the types of positions that students
will seek upon graduation from business school. More importantly though, our instructors are all good teachers, with strong communication skills and solid training in helping students tailor study
plans to their specific needs and goals.
An instructor-led classroom setting allows students to address obstacles as they arise, and spend at-home time reviewing and reinforcing lessons. For many of our students, this interactive classroom
format also helps ease anxieties associated with preparing for the GMAT. The classroom setting provides students with the advantages of direct interaction with a skilled instructor. Many of our
students also benefit from interactions with fellow students going through the same process that they are. The classroom setting exposes students to the varied perspectives and different approaches
that a room full of classmates provides.
To ensure students achieve their personal best, instructors not only help students set goals, but also help structure study time. Instructors assign tasks and hold students accountable for keeping up
with their individual study plans. Our students essentially form a partnership with instructors who have a first-hand, in-depth understanding of the GMAT. We have devoted significant resources to
ensuring that students who complete our program approach the GMAT with confidence that their preparation has given them the tools to succeed.
Why the GMAT is a Different Kind of Test
While many standardized tests are designed to enable students to demonstrate what they know, the GMAT is different. The GMAT is designed to assess the way students think and reason. As such,
preparing for the GMAT is less about memorizing and learning facts than about understanding how to approach problems. GMAT test-takers must be able to evaluate what type of information is needed and
what analytical or reasoning skills should be employed to find the right solution. Preparing for the GMAT is about understanding and becoming familiar with the test format, perspective and
What Sets Manhattan Review Apart?
Manhattan Review provides students with an in-depth understanding of the GMAT and teaches students how to think like the testmakers. We go beyond common tips and tricks to teach students how to solve
problems in the most efficient ways possible. We teach and demonstrate to our students the logical decision-making processes that the GMAT is looking to measure. These processes are reviewed until
they become second nature. Our students learn how to address each problem proactively and anticipate the challenges and traps.
Why Our Approach Leads to High Scores
As a standardized test, there are a finite number of concepts and methods that the GMAT assesses. Manhattan Review shows students how to master the full spectrum of quantitative concepts tested in
the GMAT. In addition, Manhattan Review teaches students how to approach the different question types and formats the test employs, including spotting the common traps or tricks. Students complete
our course with a mastery of the subject matter and the ability to think like the testmakers as they encounter different kinds of questions.
Manhattan Review GMAT Course Overview
Manhattan Review offers GMAT courses across the US and in several countries around the world. In the US, our courses are offered on various dates and times at our different locations to best fit the
preferences of our local students. Our course is continually updated to incorporate the GMAT's latest changes. Topics progress from easier to more advanced materials in each successive week of class.
Trust Manhattan Review for comprehensive and effective GMAT preparation, and a strong record of score improvement.
Maximize Study Time with Manhattan Review
The Manhattan Review curriculum is designed to reinforce skills through a combination of exercises, in-class instruction, online problem sets, and practice tests that closely mimic the actual test.
Exercises, both in-class and at home, cover the concepts that are employed in test questions. These exercises can also serve to identify areas where students need reinforcement. In class, instructors
review problem areas and introduce coursework with the next level of difficulty. Following each class session, students can test their mastery of the concepts they have studied through online
problems that incorporate covered concepts into actual test questions.
Each lesson contains:
• Review basic concepts through skill-building exercises.
• Attend class to reinforce concepts learned at home and receive instruction on more difficult coursework.
• Apply concepts to online sample test questions.
• Take any of the 5 simulated GMAT tests to gauge progress. | {"url":"http://www.manhattanreview.com/gmat-prep/","timestamp":"2014-04-17T01:03:47Z","content_type":null,"content_length":"24575","record_id":"<urn:uuid:cd2d2722-d9a1-4bed-8ea2-ddbd4e9db6c9>","cc-path":"CC-MAIN-2014-15/segments/1397609539493.17/warc/CC-MAIN-20140416005219-00434-ip-10-147-4-33.ec2.internal.warc.gz"} |
x^y = z. If x and y are irrational numbers and z is a rational number, is there a solution?
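One classical answer, added here as a worked example (it was not part of the original thread): yes. Either $\sqrt{2}^{\sqrt{2}}$ is rational, in which case $x = y = \sqrt{2}$ already works, or it is irrational, and then $x = \sqrt{2}^{\sqrt{2}}$, $y = \sqrt{2}$ works because $x^y = \sqrt{2}^{\,2} = 2$. A fully explicit example: $x = \sqrt{2}$ and $y = \log_2 9$ are both irrational, and $x^y = 2^{\frac{1}{2}\log_2 9} = \sqrt{9} = 3$.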
You haven't written a testimonial for Owlfred. | {"url":"http://openstudy.com/updates/50dd4a02e4b069916c862502","timestamp":"2014-04-18T20:44:42Z","content_type":null,"content_length":"81965","record_id":"<urn:uuid:dc0480ea-c034-4128-9603-742f25270821>","cc-path":"CC-MAIN-2014-15/segments/1398223206147.1/warc/CC-MAIN-20140423032006-00405-ip-10-147-4-33.ec2.internal.warc.gz"} |
Winnetka, IL Geometry Tutor
Find a Winnetka, IL Geometry Tutor
Since graduating summa cum laude in mathematics education, my passion for teaching math has only grown stronger. I have two and a half years experience teaching high school math and three and a
half years experience tutoring college math. Some of my favorite moments as a teacher have been when I w...
12 Subjects: including geometry, calculus, algebra 1, algebra 2
...For three semesters I taught C++ and Matlab to freshmen and sophomore mechanical engineering students. I have used Matlab extensively in all of my undergrad and master's coursework. In
addition, Matlab was the primary computational tool used for my master's thesis.
17 Subjects: including geometry, calculus, physics, GRE
Hello/Bonjour, potential student! I am a former high school teacher of 5 1/2 years that would like to help you with various subjects. I was a French teacher, but I also did official district
homebound instruction (for students that cannot attend regular high school for health related reasons) for 2 years as well.
16 Subjects: including geometry, English, chemistry, French
...I use math and science in my everyday life. My goal as a tutor is not only to help students learn math and science, but also to share my enthusiasm and passion for these subjects. When I work
with students, I make sure that that they understand the concepts before moving on.
21 Subjects: including geometry, reading, study skills, algebra 1
...Can you break it down and provide evidence at each step? Slow motion is breathtaking in the movies, and is magnificent in math. We'll get you comfortable with the pieces, with putting them all
together, and with doing it all in an impressive manner.
14 Subjects: including geometry, ASVAB, GRE, prealgebra
Related Winnetka, IL Tutors
Winnetka, IL Accounting Tutors
Winnetka, IL ACT Tutors
Winnetka, IL Algebra Tutors
Winnetka, IL Algebra 2 Tutors
Winnetka, IL Calculus Tutors
Winnetka, IL Geometry Tutors
Winnetka, IL Math Tutors
Winnetka, IL Prealgebra Tutors
Winnetka, IL Precalculus Tutors
Winnetka, IL SAT Tutors
Winnetka, IL SAT Math Tutors
Winnetka, IL Science Tutors
Winnetka, IL Statistics Tutors
Winnetka, IL Trigonometry Tutors
Nearby Cities With geometry Tutor
Bannockburn, IL geometry Tutors
Deerfield, IL geometry Tutors
Evanston, IL geometry Tutors
Fort Sheridan geometry Tutors
Glencoe, IL geometry Tutors
Glenview, IL geometry Tutors
Golf, IL geometry Tutors
Great Lakes geometry Tutors
Highwood, IL geometry Tutors
Kenilworth, IL geometry Tutors
Morton Grove geometry Tutors
North Riverside, IL geometry Tutors
Northfield, IL geometry Tutors
Rosemont, IL geometry Tutors
Wilmette geometry Tutors | {"url":"http://www.purplemath.com/Winnetka_IL_geometry_tutors.php","timestamp":"2014-04-18T08:46:25Z","content_type":null,"content_length":"24042","record_id":"<urn:uuid:8e9f1ef7-2f03-4502-a675-cb71e18faab2>","cc-path":"CC-MAIN-2014-15/segments/1397609533121.28/warc/CC-MAIN-20140416005213-00496-ip-10-147-4-33.ec2.internal.warc.gz"} |
My research interests lie at the interface of algebraic and geometric topology. I like to take recent theories developed in algebraic topology, in particular the calculus of functors and equivariant
stable homotopy theory, and apply them to answer concrete questions about manifolds, in particular about knots and group actions. The need to develop additional machinery has for example led to my
interest in completions of configuration spaces, Lie coalgebras, and rational homotopy theory. You can learn more about various research threads by following the links to the right.
I post my preprints on the Mathematics ArXiv. This work is partially supported by the Division of Mathematical Sciences of the National Science Foundation.
Operads and knot spaces.
Journal of the AMS, Vol 19 No 2 (2006) 461-486.
New perspectives on self linking (with R. Budney, K. Scannell, and J. Conant) Advances in Mathematics, Vol 191 No 1 (2005), 78-113.
Bordism of semi-free S^1-actions.
Mathematische Zeitschrift, Vol 249 No 2 (2005) 439-454.
Manifold-theoretic compactifications of configuration spaces.
Selecta Mathematica (new series) Vol 10, No 3 (2004) 391-428.
A one-dimensional embedding complex. (with Kevin Scannell)
Journal of Pure and Applied Algebra 170 (2002) 93-107
Computations of complex equivariant bordism rings.
American Journal of Mathematics 123 (2001) 577-605.
Real equivariant bordism and stable transversality obstructions for G=Z/2.
Proceedings of the AMS 130 (2002), No. 1, 271--281.
The geometry of the local cohomology filtration in equivariant bordism.
Homotopy, Homology and Applications, vol 3(2), (2001), pp 385-406.
Lie coalgebras and rational homotopy theory, I. (with Ben Walter)
The homology of the little disks operad.
A pairing between graphs and trees.
The topology of spaces of knots. | {"url":"http://pages.uoregon.edu/dps/respage.php","timestamp":"2014-04-18T10:43:58Z","content_type":null,"content_length":"10232","record_id":"<urn:uuid:46818b64-4fd1-4aeb-8703-366c4a0661a6>","cc-path":"CC-MAIN-2014-15/segments/1397609533308.11/warc/CC-MAIN-20140416005213-00312-ip-10-147-4-33.ec2.internal.warc.gz"} |
Teaching C++ Badly: How to Misuse Arrays
April 20, 2011
My last post listed some ways in which I've seen C++ taught badly, and promised to explain these teaching techniques in more detail. Here is the first of those explanations.
I have seen the claim, attributed to Trenchard More, that arrays are the most fundamental of all data structures — a fact that we know because there is probably not a programmer now living who has
not at one time or another asked management for arrays. If that is not enough evidence, consider the idea that arrays are an abstraction of the idea of linear addressing, which forms the basis for
all computing hardware in common use.
Linear addressing is the idea that a computer's memory consists of a collection of cells with consecutive numbers called addresses. By using ordinary integer arithmetic to compute the address of a
cell, we can easily go from the notion of "the nth element of an array" to the corresponding address.
Because of the connection between the idea of an array and the underlying idea of an address, arrays work similarly in most programming languages that support them. An array has some number of
elements, each element has an index or a subscript, and these indices are consecutive integers. Some languages start their indices from zero, others from one, and still others let them start
anywhere; but that's about the extent of the difference between languages.
Most languages that support arrays also prevent the size of an array from changing once the array has been created. C and C++ go even further: They require that the size of an array be known during
compilation; if it is not, then the program must allocate a dynamic array that, although it shares properties with a plain array, differs in some of its details. Even these dynamic arrays, however,
cannot change size once they've been created.
Now let's think about how programmers use arrays. Most programs that use arrays do three distinctly different things with these arrays. Sometimes these three things are interleaved, but often the
program does them one at a time:
1. Put data into the array
2. Access the data in the array
3. (optional) Take all of the data out of the array
Step 2 includes rearranging the data, perhaps by sorting it; step 3 includes printing the data.
For example, consider a program that computes prime numbers. Each time it tests a number to see whether it is prime, it might use the values already in the array as divisors (step 2). Each time it
finds a prime number, it might put that number into the array (step 1). Finally, it might print the numbers it has found (step 3). Here, steps 1 and 2 are interleaved; step 3 might be interleaved or
might be separate.
Programs that use arrays according to these steps have to cope with two problems. First, if the language requires its programmers to freeze the size of the array at the time it is created, the
programmer may not know how many elements to request. Second, even if the programmer does know how many elements, the fixed size requirement means that for most of the time the program is running,
the array will have elements that have no meaningful value.
These restrictions give rise to two common programming practices that, in my opinion, make it much harder than necessary to learn how to program:
• Guessing how many elements the array is apt to need
• Using an auxiliary variable to keep track of how many array elements are in use
For example, let's return to our hypothetical prime-number program. I'm sure you've seen many such programs that start out by doing something like this:
int primes[1000]; // We won't need more than 1000 primes
primes[0] = 2; // The first prime is 2
int n = 1; // We have 1 prime so far
Our program guesses how many primes we'll be wanting to compute (1000), and then uses an auxiliary variable (n) to keep track of how many primes we've computed. When we come up with a new prime, the
program will execute
primes[n++] = prime;
If the programmer is unusually thoughtful, this statement will be preceded by checking whether n >= 1000; otherwise the program will just crash when the array overflows.
From a teaching viewpoint, this program is bad for at least three reasons:
• It imposes an arbitrary restriction on its own capabilities — it cannot compute more than a fixed number of primes
• It uses two variables (primes and n) where one should do, thereby complicating the code beyond necessity
• It wastes resources by allocating array elements whether or not they're needed
The standard vector template avoids all three of these problems. Instead of the two examples above, we would write
vector<int> primes; // primes starts out without any elements
primes.push_back(2); // The first prime is 2
In the forthcoming C++ standard, we can collapse this example into one statement:
vector<int> primes = {2}; // primes starts out with one element
When we have computed a new prime, we can append it to primes by executing
primes.push_back(prime);
This statement increases the size of primes by one, by appending a copy of the variable prime to primes. In effect, it pushes prime onto the back of primes. In one stroke, we have eliminated two out
of our three problems: Using a vector instead of an array removes the arbitrary restriction on the number of elements; and it uses one variable instead of two.
It doesn't really eliminate the waste of resources, because behind the scenes it allocates memory for more array elements than it needs. However, now it's the implementation, not the programmer, that
is wasting memory. Moreover, this waste now provides the opportunity to show programmers how they can choose from among several space/time tradeoffs without having to rewrite their code drastically
in order to do so. Instead of teaching beginners that they have to write unnecessarily complicated, wasteful code, we can teach them how to write code that does exactly what they want, and then to
come back after it works to figure out how — or even whether — they should optimize it.
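To make the contrast concrete, here is a minimal sketch of the whole prime-finding program written around a vector. This is only an illustration, not code from the original article; the bound of 100 and the trial-division test are arbitrary choices.

#include <iostream>
#include <vector>

int main()
{
    const int limit = 100;                          // illustrative bound, not part of the article
    std::vector<int> primes;                        // starts empty and grows as primes are found
    for (int candidate = 2; candidate <= limit; ++candidate) {
        bool is_prime = true;
        for (int p : primes) {                      // step 2: use the primes found so far as trial divisors
            if (p * p > candidate) break;
            if (candidate % p == 0) { is_prime = false; break; }
        }
        if (is_prime) primes.push_back(candidate);  // step 1: grow the container by one element
    }
    for (int p : primes) std::cout << p << '\n';    // step 3: take the data out by printing it
    return 0;
}

There is no guessed capacity and no auxiliary counter; the vector's size is always exactly the number of primes found so far.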
In short, programmers tend to use arrays in a way that makes the arrays grow during the program's execution. Built-in arrays in C and C++ don't grow very well, but vectors do. This fact makes vectors
more useful for teaching purposes — especially if we are teaching beginners — than arrays are. Moreover, vectors do not force students into writing code that is intrinsically wasteful. Instead,
students can get into the useful habit of making the program working correctly first, and only then thinking about how to optimize it. | {"url":"http://www.drdobbs.com/cpp/teaching-c-badly-how-to-misuse-arrays/229401975","timestamp":"2014-04-19T07:29:21Z","content_type":null,"content_length":"97355","record_id":"<urn:uuid:23e3bb7c-362d-4ed8-ac5a-01b435b98e16>","cc-path":"CC-MAIN-2014-15/segments/1397609536300.49/warc/CC-MAIN-20140416005216-00517-ip-10-147-4-33.ec2.internal.warc.gz"} |
Palmetto, GA Accounting Tutor
Find a Palmetto, GA Accounting Tutor
...And I teach test-taking-strategies when we prepare for tests. I hold teaching certification in the fields of EMH/TMH/PMH, LD, Multihandicapped including Autism and High Functioning Autism
(Aspergers). I have several years successful classroom teaching experience with students categorized as auti...
23 Subjects: including accounting, reading, geometry, algebra 1
...I am currently tutoring grade school students in preparation of the CRCT in math, reading and science. I am currently a part time adjunct accounting instructor for a college. I have a Bachelor
of Science in Accounting and an MBA in International Business.
7 Subjects: including accounting, reading, grammar, business
...Scored in the 99th percentile on the exam. Helped two twins with the math portion of the PSAT. Taught similar topics as a GMAT instructor for three years.
28 Subjects: including accounting, physics, calculus, statistics
...I have played PeachTennis and Ultimate Tennis with great results in playoffs. I usually play in small tournaments around Atlanta. I am extremely active and always ready for a good run and
18 Subjects: including accounting, reading, writing, algebra 1
I am a certified French teacher from Georgia State University, and my native language is French. For my tutoring, I use different learning strategies to reach any type of learners; for example,
activities with Power Point help make visual learners stay focused. At the end of each tutoring session, I use the last five minutes for wrap up where I give a quiz, or I ask a question to summarize the
3 Subjects: including accounting, French, ESL/ESOL
Related Palmetto, GA Tutors
Palmetto, GA Accounting Tutors
Palmetto, GA ACT Tutors
Palmetto, GA Algebra Tutors
Palmetto, GA Algebra 2 Tutors
Palmetto, GA Calculus Tutors
Palmetto, GA Geometry Tutors
Palmetto, GA Math Tutors
Palmetto, GA Prealgebra Tutors
Palmetto, GA Precalculus Tutors
Palmetto, GA SAT Tutors
Palmetto, GA SAT Math Tutors
Palmetto, GA Science Tutors
Palmetto, GA Statistics Tutors
Palmetto, GA Trigonometry Tutors
Nearby Cities With accounting Tutor
Atlanta Ndc, GA accounting Tutors
Brooks, GA accounting Tutors
Ellenwood accounting Tutors
Fairburn, GA accounting Tutors
Grantville, GA accounting Tutors
Lovejoy, GA accounting Tutors
Moreland, GA accounting Tutors
Red Oak, GA accounting Tutors
Sargent, GA accounting Tutors
Sharpsburg, GA accounting Tutors
Sunny Side accounting Tutors
Turin, GA accounting Tutors
Tyrone, GA accounting Tutors
Whitesburg, GA accounting Tutors
Winston, GA accounting Tutors | {"url":"http://www.purplemath.com/Palmetto_GA_accounting_tutors.php","timestamp":"2014-04-17T08:05:25Z","content_type":null,"content_length":"23962","record_id":"<urn:uuid:5ddcba97-ceb4-44e4-9bb4-cb83809f73ac>","cc-path":"CC-MAIN-2014-15/segments/1397609526311.33/warc/CC-MAIN-20140416005206-00455-ip-10-147-4-33.ec2.internal.warc.gz"} |
Math Forum Discussions - Define function using lists or tables
Date: Jul 3, 2013 4:49 AM
Author: Joaquim Nogueira
Subject: Define function using lists or tables
Hello. Possibly this is a very simple question.
Is there a way to define a function over a (BIG) list of ordered pairs? I mean, suppose that I have a list, created using a Table command or something like that, of the form [{a,b},{c,d},etc.] such that, after naming it f, say, f = [{a,b},{c,d},etc.], then, afterwards, whenever there is a command such that one needs to compute f[a] (or f[c]), the program immediately replaces f[a] by b,and f[c] by d?
Thank you. | {"url":"http://mathforum.org/kb/plaintext.jspa?messageID=9152448","timestamp":"2014-04-19T07:28:46Z","content_type":null,"content_length":"1452","record_id":"<urn:uuid:0739d90f-4608-4251-ac03-f84235405982>","cc-path":"CC-MAIN-2014-15/segments/1398223211700.16/warc/CC-MAIN-20140423032011-00217-ip-10-147-4-33.ec2.internal.warc.gz"} |
Treating Mass as a perturbation
Hello again,
I also have another question, somewhat related to my previous, on the topic of the Klein-Gordon equation but treating the mass as a perturbation.
The feynman diagram shows the particular interaction:
I believe the cross is the point of interaction via the perturbation (the mass), we model the perturbation:
[tex]\delta V=m^2[/tex]
and subsequently use this in the calculation for the transition amplitude, we use plane-wave solutions of the ingoing and outgoing states.
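Concretely (and this is only my own sketch, so it may well be wrong): with an ingoing plane wave [tex]e^{-ip\cdot x}[/tex] and an outgoing one [tex]e^{-ip'\cdot x}[/tex], I think the first-order amplitude is something like
[tex]\mathcal{A}\sim -i\int d^4x\; e^{ip'\cdot x}\,(m^2)\,e^{-ip\cdot x} = -i\,m^2\,(2\pi)^4\,\delta^4(p'-p)[/tex]
so the cross would just insert a factor of [tex]-im^2[/tex] while conserving the four-momentum.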
As in my previous question, I'm not really sure what it means to model the mass as a potential. Can anyone explain why we do this, and exactly what is happening in this particular Feynman diagram?
I can try to embelish if needs be, please tell me if there are any inconsistencies or errors I have made in my explanation of my problem! | {"url":"http://www.physicsforums.com/showthread.php?p=4158689","timestamp":"2014-04-20T23:42:12Z","content_type":null,"content_length":"24852","record_id":"<urn:uuid:479eed5c-2414-4e52-bba4-c2a3ec5d6c4c>","cc-path":"CC-MAIN-2014-15/segments/1398223205375.6/warc/CC-MAIN-20140423032005-00247-ip-10-147-4-33.ec2.internal.warc.gz"} |
Multigroup Path Analysis with (very) numerous groups
I am trying to fit multigroup models with 15 groups. When I want to assess the fit of my models, I encounter problems:
(1) On the one hand, fit measures like chi-square or RMSEA are never good for any model other than the saturated model.
(2) On the other hand, if I compare the AICs of nested models I can see that the saturated model can be improved by removing many paths or setting them equal among groups. As far as I understand, the
saturated model is just the most heavily parameterized model that can be written, so I can compare its AIC with that of nested models with some paths removed or set equal among groups; is that right?
I suppose my problem comes from the fact that with 15 groups, every restriction on the model results in many df: if, for instance, I specify that the covariance between A and B is either the same
among groups, or null, I 'save' 14 or 15 df. As a result, the difference in df between the saturated model and the model of interest is high (464). Can that explain why the difference in AIC is very
significant (DeltaAIC = 68.29) whereas the RMSEA is quite bad (0.13)? Or is there anything that I didn't catch in how I must assess the model fit?
Thank you in advance | {"url":"http://openmx.psyc.virginia.edu/print/638","timestamp":"2014-04-20T11:25:39Z","content_type":null,"content_length":"8773","record_id":"<urn:uuid:0140d4b1-91db-4fe4-a63b-d9cde123c26a>","cc-path":"CC-MAIN-2014-15/segments/1397609538423.10/warc/CC-MAIN-20140416005218-00552-ip-10-147-4-33.ec2.internal.warc.gz"} |
How Do We Estimate Melt Density?
This activity was selected for the On the Cutting Edge Reviewed Teaching Collection
This activity has received positive reviews in a peer review process involving five review categories. The five categories included in the process are
• Scientific Accuracy
• Alignment of Learning Goals, Activities, and Assessments
• Pedagogic Effectiveness
• Robustness (usability and dependability of all components)
• Completeness of the ActivitySheet web page
For more information about the peer review process itself, please see http://serc.carleton.edu/NAGTWorkshops/review.html.
This page first made public: Oct 25, 2007
This material is replicated on a number of sites as part of the
SERC Pedagogic Service Project
In this Spreadsheets Across the Curriculum activity, students will use the thermodynamic properties of silicates to estimate melt density at high temperatures and pressures. Students will sum the
mole fractions of several different oxides, with different densities, to calculate the total density of the melt. The molar volume and compressibility of each oxide will also be factored in the total
melt density. This is a self-paced activity in which students follow a PowerPoint presentation to create spreadsheets and graphs using Excel.
Learning Goals
Students will:
• Use chemical analyses to understand melt properties and the nature of magma movement.
• Use mole fraction, molar mass, and fractional volume of several different oxides to determine overall melt density.
• Convert between molar and volumetric units.
• Use partial molar volume, the coefficient of thermal expansion, and the coefficient of isothermal compressibility to calculate the molar volume of each oxide.
• Consider the influence of dissolved water on melt density.
• Develop a spreadsheet to carry out a calculation.
In the process the students will:
• Learn to use weight percent and molar mass to calculate mole fraction of each oxide.
• Increase their skill at unit conversions.
• Create XY scatterplots showing the relationship between density and temperature, pressure, and weight percent of water.
• Use partial derivatives to calculate molar volume of each oxide.
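In outline, the calculation behind these goals (sketched here with generic symbols; the module's spreadsheet uses its own notation and reference values) is

\rho_{melt} = \frac{\sum_i X_i M_i}{\sum_i X_i \bar{V}_i(T,P)}, \qquad \bar{V}_i(T,P) \approx \bar{V}_{i,ref} + \frac{\partial \bar{V}_i}{\partial T}\,(T - T_{ref}) + \frac{\partial \bar{V}_i}{\partial P}\,(P - P_{ref}),

where X_i is the mole fraction of oxide i, M_i is its molar mass, and the two partial derivatives carry the thermal expansion and isothermal compressibility of each oxide; dissolved water enters as one more oxide component.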
Context for Use
Equipment: Each student or pair of students needs a computer with Excel and PowerPoint.
Classes: This module has been used in an Introductory Physical Volcanology course with upper level undergraduates.
In the class, the module was introduced during lab to be completed as homework due the following week. Students turned in hard-copies of the Excel spreadsheets and graphs, as well as their working
Excel files. This worked well for junior and senior level students with excellent quantitative skills.
Description and Teaching Materials
PowerPoint SSAC-pv2007.QE522.CC2.5-Student (PowerPoint 718kB Oct25 07)
If the embedded spreadsheets are not visible, save the PowerPoint file to disk and open it from there.
This PowerPoint file is the student version of the module. An instructor version is available by request. The instructor version includes the completed spreadsheet. Send your request to Len Vacher (
vacher@usf.edu) by filling out and submitting the Instructor Module Request Form.
Teaching Notes and Tips
This module, like the others in this collection, works best if coordinated with lecture and lab material.
If students have difficulty in getting their equations to produce the correct numbers in the orange cells -- especially if their results are off by orders of magnitude -- tell them to check their
unit conversions. You cannot ever emphasize unit conversions enough.
Some students jump ahead to the end-of-module assignments without working through the main part of the module carefully. Those students have trouble.
The end-of-module questions can be used for assessment.
The instructor version contains a pre-test
References and Resources
Spera, F., 2000, Physical properties of magma, In: Sigurdsson et al., eds., Encyclopedia of Volcanoes, Academic Press, 171-190. (a very useful paper that discusses the physical properties of magma in | {"url":"http://serc.carleton.edu/sp/ssac/volcanology/examples/magma_density.html","timestamp":"2014-04-17T06:55:47Z","content_type":null,"content_length":"26236","record_id":"<urn:uuid:3f9adcec-8560-4cb3-beb2-a83eb5f733ab>","cc-path":"CC-MAIN-2014-15/segments/1398223210034.18/warc/CC-MAIN-20140423032010-00029-ip-10-147-4-33.ec2.internal.warc.gz"} |
Dietterich and Ghulum Bakiri. Solving multiclass learning problems via error-correcting output codes
Results 1 - 10 of 44
, 1997
"... In the first part of the paper we consider the problem of dynamically apportioning resources among a set of options in a worst-case on-line framework. The model we study can be interpreted as a
broad, abstract extension of the well-studied on-line prediction model to a general decision-theoretic set ..."
Cited by 2307 (59 self)
In the first part of the paper we consider the problem of dynamically apportioning resources among a set of options in a worst-case on-line framework. The model we study can be interpreted as a
broad, abstract extension of the well-studied on-line prediction model to a general decision-theoretic setting. We show that the multiplicative weightupdate rule of Littlestone and Warmuth [20] can
be adapted to this model yielding bounds that are slightly weaker in some cases, but applicable to a considerably more general class of learning problems. We show how the resulting learning algorithm
can be applied to a variety of problems, including gambling, multiple-outcome prediction, repeated games and prediction of points in R n . In the second part of the paper we apply the multiplicative
weight-update technique to derive a new boosting algorithm. This boosting algorithm does not require any prior knowledge about the performance of the weak learning algorithm. We also study
generalizations of...
- In Proceedings International Conference on Machine Learning , 1997
"... Abstract. One of the surprising recurring phenomena observed in experiments with boosting is that the test error of the generated classifier usually does not increase as its size becomes very
large, and often is observed to decrease even after the training error reaches zero. In this paper, we show ..."
Cited by 721 (52 self)
Abstract. One of the surprising recurring phenomena observed in experiments with boosting is that the test error of the generated classifier usually does not increase as its size becomes very large,
and often is observed to decrease even after the training error reaches zero. In this paper, we show that this phenomenon is related to the distribution of margins of the training examples with
respect to the generated voting classification rule, where the margin of an example is simply the difference between the number of correct votes and the maximum number of votes received by any
incorrect label. We show that techniques used in the analysis of Vapnik’s support vector classifiers and of neural networks with small weights can be applied to voting methods to relate the margin
distribution to the test error. We also show theoretically and experimentally that boosting is especially effective at increasing the margins of the training examples. Finally, we compare our
explanation to those based on the bias-variance decomposition. 1
, 1999
"... The problem of combining preferences arises in several applications, such as combining the results of different search engines. This work describes an efficient algorithm for combining multiple
preferences. We first give a formal framework for the problem. We then describe and analyze a new boosting ..."
Cited by 515 (18 self)
The problem of combining preferences arises in several applications, such as combining the results of different search engines. This work describes an efficient algorithm for combining multiple
preferences. We first give a formal framework for the problem. We then describe and analyze a new boosting algorithm for combining preferences called RankBoost. We also describe an efficient
implementation of the algorithm for certain natural cases. We discuss two experiments we carried out to assess the performance of RankBoost. In the first experiment, we used the algorithm to combine
different WWW search strategies, each of which is a query expansion for a given domain. For this task, we compare the performance of RankBoost to the individual search strategies. The second
experiment is a collaborative-filtering task for making movie recommendations. Here, we present results comparing RankBoost to nearest-neighbor and regression algorithms.
- Journal of Machine Learning Research
"... In this paper we describe the algorithmic implementation of multiclass kernel-based vector machines. Our starting point is a generalized notion of the margin to multiclass problems. Using this
notion we cast multiclass categorization problems as a constrained optimization problem with a quadratic ob ..."
Cited by 363 (13 self)
In this paper we describe the algorithmic implementation of multiclass kernel-based vector machines. Our starting point is a generalized notion of the margin to multiclass problems. Using this notion
we cast multiclass categorization problems as a constrained optimization problem with a quadratic objective function. Unlike most of previous approaches which typically decompose a multiclass problem
into multiple independent binary classification tasks, our notion of margin yields a direct method for training multiclass predictors. By using the dual of the optimization problem we are able to
incorporate kernels with a compact set of constraints and decompose the dual problem into multiple optimization problems of reduced size. We describe an efficient fixed-point algorithm for solving
the reduced optimization problems and prove its convergence. We then discuss technical details that yield significant running time improvements for large datasets. Finally, we describe various
experiments with our approach comparing it to previously studied kernel-based methods. Our experiments indicate that for multiclass problems we attain state-of-the-art accuracy.
- In Proceedings of the Thirteenth Annual Conference on Computational Learning Theory , 2000
"... . Output coding is a general framework for solving multiclass categorization problems. Previous research on output codes has focused on building multiclass machines given predefined output
codes. In this paper we discuss for the first time the problem of designing output codes for multiclass problem ..."
Cited by 161 (5 self)
. Output coding is a general framework for solving multiclass categorization problems. Previous research on output codes has focused on building multiclass machines given predefined output codes. In
this paper we discuss for the first time the problem of designing output codes for multiclass problems. For the design problem of discrete codes, which have been used extensively in previous works,
we present mostly negative results. We then introduce the notion of continuous codes and cast the design problem of continuous codes as a constrained optimization problem. We describe three
optimization problems corresponding to three different norms of the code matrix. Interestingly, for the l 2 norm our formalism results in a quadratic program whose dual does not depend on the length
of the code. A special case of our formalism provides a multiclass scheme for building support vector machines which can be solved efficiently. We give a time and space efficient algorithm for
solving the quadratic program. We describe preliminary experiments with synthetic data show that our algorithm is often two orders of magnitude faster than standard quadratic programming packages. We
conclude with the generalization properties of the algorithm. Keywords: Multiclass categorization,output coding, SVM 1.
- MACHINE LEARNING: PROCEEDINGS OF THE FOURTEENTH INTERNATIONAL CONFERENCE, 1997 (ICML-97) , 1997
"... This paper describes a new technique for solving multiclass learning problems by combining Freund and Schapire's boosting algorithm with the main ideas of Dietterich and Bakiri's method of
error-correcting output codes (ECOC). Boosting is a general method of improving the accuracy of a given base or ..."
Cited by 90 (9 self)
This paper describes a new technique for solving multiclass learning problems by combining Freund and Schapire's boosting algorithm with the main ideas of Dietterich and Bakiri's method of
error-correcting output codes (ECOC). Boosting is a general method of improving the accuracy of a given base or "weak" learning algorithm. ECOC is a robust method of solving multiclass learning
problems by reducing to a sequence of two-class problems. We show that our new hybrid method has advantages of both: Like ECOC, our method only requires that the base learning algorithm work on
binary-labeled data. Like boosting, we prove that the method comes with strong theoretical guarantees on the training and generalization error of the final combined hypothesis assuming only that the
base learning algorithm perform slightly better than random guessing. Although previous methods were known for boosting multiclass problems, the new method may be significantly faster and require
less programming effort in creating the base learning algorithm. We also compare the new algorithm experimentally to other voting methods.
- in Machine Learning. PhD thesis, MIT , 2002
"... 2 Everything Old Is New Again: A Fresh Look at Historical ..."
, 1999
"... . Boosting is a general method for improving the accuracy of any given learning algorithm. Focusing primarily on the AdaBoost algorithm, we briefly survey theoretical work on boosting including
analyses of AdaBoost's training error and generalization error, connections between boosting and game theo ..."
Cited by 50 (2 self)
. Boosting is a general method for improving the accuracy of any given learning algorithm. Focusing primarily on the AdaBoost algorithm, we briefly survey theoretical work on boosting including
analyses of AdaBoost's training error and generalization error, connections between boosting and game theory, methods of estimating probabilities using boosting, and extensions of AdaBoost for
multiclass classification problems. Some empirical work and applications are also described. Background Boosting is a general method which attempts to "boost" the accuracy of any given learning
algorithm. Kearns and Valiant [29, 30] were the first to pose the question of whether a "weak" learning algorithm which performs just slightly better than random guessing in Valiant's PAC model [44]
can be "boosted" into an arbitrarily accurate "strong" learning algorithm. Schapire [36] came up with the first provable polynomial-time boosting algorithm in 1989. A year later, Freund [16]
developed a much more effici...
"... Preference learning is an emerging topic that appears in different guises in the recent literature. This work focuses on a particular learning scenario called label ranking, where the problem is
to learn a mapping from instances to rankings over a finite number of labels. Our approach for learning s ..."
Cited by 46 (16 self)
Preference learning is an emerging topic that appears in different guises in the recent literature. This work focuses on a particular learning scenario called label ranking, where the problem is to
learn a mapping from instances to rankings over a finite number of labels. Our approach for learning such a mapping, called ranking by pairwise comparison (RPC), first induces a binary preference
relation from suitable training data using a natural extension of pairwise classification. A ranking is then derived from the preference relation thus obtained by means of a ranking procedure,
whereby different ranking methods can be used for minimizing different loss functions. In particular, we show that a simple (weighted) voting strategy minimizes risk with respect to the well-known
Spearman rank correlation. We compare RPC to existing label ranking methods, which are based on scoring individual labels instead of comparing pairs of labels. Both empirically and theoretically, it
is shown that RPC is superior in terms of computational efficiency, and at least competitive in terms of accuracy. | {"url":"http://citeseerx.ist.psu.edu/showciting?cid=272546","timestamp":"2014-04-18T17:53:28Z","content_type":null,"content_length":"38998","record_id":"<urn:uuid:e0130d38-2c9c-4fb2-bc43-2f1f13c9d3a0>","cc-path":"CC-MAIN-2014-15/segments/1397609533957.14/warc/CC-MAIN-20140416005213-00009-ip-10-147-4-33.ec2.internal.warc.gz"} |
A hybrid SAT-based decision procedure for separation logic with uninterpreted functions
Results 1 - 10 of 36
, 2004
"... The logic of equality with uninterpreted functions (EUF) and its extensions have been widely applied to processor verification, by means of a large variety of progressively more sophisticated
(lazy or eager) translations into propositional SAT. Here we propose a new approach, namely a general DP ..."
Cited by 118 (14 self)
The logic of equality with uninterpreted functions (EUF) and its extensions have been widely applied to processor verification, by means of a large variety of progressively more sophisticated (lazy
or eager) translations into propositional SAT. Here we propose a new approach, namely a general DPLL(X) engine, whose parameter X can be instantiated with a specialized solver Solver T for a given
theory T , thus producing a system DPLL(T ). We describe this DPLL(T ) scheme, the interface between DPLL(X) and Solver T , the architecture of DPLL(X), and our solver for EUF, which includes
incremental and backtrackable congruence closure algorithms for dealing with the built-in equality and the integer successor and predecessor symbols. Experiments with a first implementation indicate
that our technique already outperforms the previous methods on most benchmarks, and scales up very well.
- Journal on Satisfiability, Boolean Modeling and Computation , 2007
"... Satisfiability Modulo Theories (SMT) is the problem of deciding the satisfiability of a first-order formula with respect to some decidable first-order theory T (SMT (T)). These problems are
typically not handled adequately by standard automated theorem provers. SMT is being recognized as increasingl ..."
Cited by 74 (32 self)
Satisfiability Modulo Theories (SMT) is the problem of deciding the satisfiability of a first-order formula with respect to some decidable first-order theory T (SMT (T)). These problems are typically
not handled adequately by standard automated theorem provers. SMT is being recognized as increasingly important due to its applications in many domains in different communities, in particular in
formal verification. An amount of papers with novel and very efficient techniques for SMT has been published in the last years, and some very efficient SMT tools are now available. Typical SMT (T)
problems require testing the satisfiability of formulas which are Boolean combinations of atomic propositions and atomic expressions in T, so that heavy Boolean reasoning must be efficiently combined
with expressive theory-specific reasoning. The dominating approach to SMT (T), called lazy approach, is based on the integration of a SAT solver and of a decision procedure able to handle sets of
atomic constraints in T (T-solver), handling respectively the Boolean and the theory-specific components of reasoning. Unfortunately, neither the problem of building an efficient SMT solver, nor even
that of acquiring a comprehensive background knowledge in lazy SMT, is of simple solution. In this paper we present an extensive survey of SMT, with particular focus on the lazy approach. We survey,
classify and analyze from a theory-independent perspective the most effective techniques and optimizations which are of interest for lazy SMT and which have been proposed in various communities; we
discuss their relative benefits and drawbacks; we provide some guidelines about their choice and usage; we also analyze the features for SAT solvers and T-solvers which make them more suitable for an
integration. The ultimate goals of this paper are to become a source of a common background knowledge and terminology for students and researchers in different areas, to provide a reference guide for
developers of SMT tools, and to stimulate the cross-fertilization of techniques and ideas among different communities.
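As a rough sketch of the "lazy approach" these abstracts describe (the interfaces below are invented for illustration and do not correspond to any of the cited tools):

#include <optional>
#include <vector>

// Hypothetical interfaces: a SAT solver working on the boolean abstraction of the formula,
// and a T-solver that decides conjunctions of theory literals.
struct SatSolver {
    std::optional<std::vector<int>> next_model();        // next boolean model, or none left
    void add_blocking_clause(const std::vector<int>&);   // rule out a set of literals
};
struct TheorySolver {
    bool consistent(const std::vector<int>& literals);   // is this conjunction T-satisfiable?
};

// Lazy SMT loop: enumerate boolean models, check each in the theory,
// and block theory-inconsistent assignments until one survives or none remain.
bool lazy_smt(SatSolver& sat, TheorySolver& theory) {
    while (auto model = sat.next_model()) {
        if (theory.consistent(*model)) return true;       // satisfiable modulo the theory
        std::vector<int> blocking;
        for (int lit : *model) blocking.push_back(-lit);  // negate the failed assignment
        sat.add_blocking_clause(blocking);
    }
    return false;                                          // no boolean model survives: unsat
}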
- In CAV’04 , 2004
"... UCLID is a tool for term-level modeling and verification of infinite-state systems expressible in the logic of counter arithmetic with lambda expressions and uninterpreted functions (CLU). In
this paper, we describe a key component of the tool, the decision procedure for CLU. ..."
Cited by 39 (1 self)
UCLID is a tool for term-level modeling and verification of infinite-state systems expressible in the logic of counter arithmetic with lambda expressions and uninterpreted functions (CLU). In this
paper, we describe a key component of the tool, the decision procedure for CLU.
- In Proc. CAV 2005, volume 3576 of LNCS , 2005
"... Abstract. The problem of deciding the satisfiability of a quantifier-free formula with respect to a background theory, also known as Satisfiability Modulo Theories (SMT), is gaining increasing
relevance in verification: representation capabilities beyond propositional logic allow for a natural model ..."
Cited by 34 (15 self)
Abstract. The problem of deciding the satisfiability of a quantifier-free formula with respect to a background theory, also known as Satisfiability Modulo Theories (SMT), is gaining increasing
relevance in verification: representation capabilities beyond propositional logic allow for a natural modeling of real-world problems (e.g., pipeline and RTL circuits verification, proof obligations
in software systems). In this paper, we focus on the case where the background theory is the combination T1 ∪ T2 of two simpler theories. Many SMT procedures combine a boolean model enumeration with
a decision procedure for T1 ∪ T2, where conjunctions of literals can be decided by an integration schema such as Nelson-Oppen, via a structured exchange of interface formulae (e.g., equalities in the
case of convex theories, disjunctions of equalities otherwise). We propose a new approach for SMT(T1 ∪ T2), called Delayed Theory Combination, which does not require a decision procedure for T1 ∪ T2,
but only individual decision procedures for T1 and T2, which are directly integrated into the boolean model enumerator. This approach is much simpler and natural, allows each of the solvers to be
implemented and optimized without taking into account the others, and it nicely encompasses the case of non-convex theories. We show the effectiveness of the approach by a thorough experimental
comparison. 1
- ACM Computing Surveys , 2006
"... Propositional Satisfiability (SAT) and Constraint Programming (CP) have developed as two relatively independent threads of research, cross-fertilising occasionally. These two approaches to
problem solving have a lot in common, as evidenced by similar ideas underlying the branch and prune algorithms ..."
Cited by 32 (4 self)
Propositional Satisfiability (SAT) and Constraint Programming (CP) have developed as two relatively independent threads of research, cross-fertilising occasionally. These two approaches to problem
solving have a lot in common, as evidenced by similar ideas underlying the branch and prune algorithms that are most successful at solving both kinds of problems. They also exhibit differences in the
way they are used to state and solve problems, since SAT’s approach is in general a black-box approach, while CP aims at being tunable and programmable. This survey overviews the two areas in a
comparative way, emphasising the similarities and differences between the two and the points where we feel that one technology can benefit from ideas or experience acquired
- JOURNAL ON SATISFIABILITY, BOOLEAN MODELING AND COMPUTATION 5 (2008) 193–215 , 2008
"... This paper introduces a propositional encoding for lexicographic path orders (LPOs) and the corresponding LPO termination property of term rewrite systems. Given this encoding, termination
analysis can be performed using a state-of-the-art Boolean satisfiability solver. Experimental results are uneq ..."
Cited by 25 (11 self)
This paper introduces a propositional encoding for lexicographic path orders (LPOs) and the corresponding LPO termination property of term rewrite systems. Given this encoding, termination analysis
can be performed using a state-of-the-art Boolean satisfiability solver. Experimental results are unequivocal, indicating orders of magnitude speedups in comparison with previous implementations for
LPO termination. The results of this paper have already had a direct impact on the design of several major termination analyzers for term rewrite systems. The contribution builds on a symbol-based
approach towards reasoning about partial orders. The symbols in an unspecified partial order are viewed as variables that take integer values and are interpreted as indices in the order. For a
partial order statement on n symbols, each index is represented in ⌈log 2 n ⌉ propositional variables and partial order constraints between symbols are modeled on the bit representations. The
proposed encoding is general and relevant to other applications which involve propositional reasoning about partial orders.
- Journal of Automated Reasoning , 2005
"... Abstract. Recent improvements in propositional satisfiability techniques (SAT) made it possible to tackle successfully some hard real-world problems (e.g. model-checking, circuit testing,
propositional planning) by encoding into SAT. However, a purely boolean representation is not expressive enough ..."
Cited by 22 (2 self)
Abstract. Recent improvements in propositional satisfiability techniques (SAT) made it possible to tackle successfully some hard real-world problems (e.g. model-checking, circuit testing,
propositional planning) by encoding into SAT. However, a purely boolean representation is not expressive enough for many other real-world applications, including the verification of timed and hybrid
systems, of proof obligations in software, and of circuit design at RTL level. These problems can be naturally modeled as satisfiability in Linear Arithmetic Logic (LAL), i.e., the boolean
combination of propositional variables and linear constraints over numerical variables. In this paper we present MATHSAT, a new, SAT-based decision procedure for LAL, based on the (known approach) of
integrating a state-of-the-art SAT solver with a dedicated mathematical solver for LAL. We improve MATHSAT in two different directions. First, the top level procedure is enhanced, and now features a
tighter integration between the boolean search and the mathematical solver. In particular, we allow for theory-driven backjumping and learning, and theory-driven deduction; we use static learning in
order to reduce the number of boolean models that are mathematically inconsistent; we exploit problem clustering in order to partition
- In Design Automation and Test in Europe, DATE’04 , 2003
"... We show how to automatically verify that a complex XScale-like pipelined machine model is a WEB-refinement of an instruction set architecture model, which implies that the machines satisfy the
same safety and liveness properties. Automation is achieved by reducing the WEB-refinement proof obligation ..."
Cited by 21 (10 self)
We show how to automatically verify that a complex XScale-like pipelined machine model is a WEB-refinement of an instruction set architecture model, which implies that the machines satisfy the same
safety and liveness properties. Automation is achieved by reducing the WEB-refinement proof obligation to a formula in the logic of Counter arithmetic with Lambda expressions and Uninterpreted
functions (CLU). We use UCLID to transform the resulting CLU formula into a CNF formula, which is then checked with a SAT solver. We define several XScale-like models with out of order completion,
including models with precise exceptions, branch prediction, and interrupts. We use two types of refinement maps. In one, flushing is used to map pipelined machine states to instruction set
architecture states; in the other, we use the commitment approach, which is the dual of flushing, since partially completed instructions are invalidated. We present experimental results for all the
machines modeled, including verification times. For our application, we found that the SAT solver Siege provides superior performance over Chaff and that the amount of time spent proving liveness
when using the commitment approach is less than 1% of the overall verification time, whereas when flushing is employed, the liveness proof accounts for about 10% of the verification time.
- In Computer-Aided Verification (CAV), LNCS 3114 , 2004
"... Abstract. We study the problem of formally verifying shared memory multiprocessor executions against memory consistency models—an important step during post-silicon verification of
multiprocessor machines. We employ our previously reported style of writing formal specifications for shared memory mod ..."
Cited by 17 (2 self)
Abstract. We study the problem of formally verifying shared memory multiprocessor executions against memory consistency models—an important step during post-silicon verification of multiprocessor
machines. We employ our previously reported style of writing formal specifications for shared memory models in higher order logic (HOL), obtaining intuitive as well as modular specifications. Our
specification consists of a conjunction of rules that constrain the global visibility order. Given an execution to be checked, our algorithm generates Boolean constraints that capture the conditions
under which the execution is legal under the visibility order. We initially took the approach of specializing the memory model HOL axioms into equivalent (for the execution to be checked) quantified
boolean formulae (QBF). As this technique proved inefficient, we took the alternative approach of converting the HOL axioms into a program that generates a SAT instance when run on an execution. In
effect, the quantifications in our memory model specification were realized as iterations in the program. The generated Boolean constraints are satisfiable if and only if the given execution is legal
under the memory model. We evaluate two different approaches to encode the Boolean constraints, and also incremental techniques to generate and solve Boolean constraints. Key results include a
demonstration that we can handle executions of realistic lengths for the modern Intel Itanium memory model. Further research into proper selection of Boolean encodings, incremental SAT checking,
efficient handling of transitivity, and the generation of unsatisfiable cores for locating errors are expected to make our technique practical. 1 | {"url":"http://citeseerx.ist.psu.edu/showciting?cid=108842","timestamp":"2014-04-18T12:42:54Z","content_type":null,"content_length":"42544","record_id":"<urn:uuid:7f92bf5c-6098-4de2-a9b7-f6cf19c5bb09>","cc-path":"CC-MAIN-2014-15/segments/1397609533308.11/warc/CC-MAIN-20140416005213-00292-ip-10-147-4-33.ec2.internal.warc.gz"} |
Math Forum Discussions
Topic: [ap-calculus] limit question
Replies: 0
cindy [ap-calculus] limit question
Posted: Aug 22, 2012 1:02 PM
Posts: 40
From: Kansas
Registered: 5/14/06
Greetings!
We are investigating graphically and numerically the following limit today:
Limit as x approaches 0 of
(1 + x)^(1/x)
The limit by table goes to 2.7183 (e). However, when a student evaluated 10^-50 power, the calculator said it was 1 - which makes sense as you will get 1 raised to a very large number.
So, what exactly IS the limit of this equation? Graphically, it appears there is a hole in the graph and not a value of 1 graphed at 0. I am puzzled on what is happening here. HELP?!
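(My own guess, for what it's worth: the true limit is e, and the 1 the calculator reports is a round-off artifact. In the machine's finite precision, 1 + 10^-50 is stored as exactly 1, so raising it to the 10^50 power gives 1; the hole in the graph at x = 0 is the honest picture, since the expression itself is undefined there.)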
Thank you!
Cindy Couchman
To change your subscription address or other settings click here: | {"url":"http://mathforum.org/kb/thread.jspa?threadID=2397489&messageID=7872761","timestamp":"2014-04-20T01:49:08Z","content_type":null,"content_length":"14921","record_id":"<urn:uuid:16054253-6c88-405a-9c3d-7fccdd881085>","cc-path":"CC-MAIN-2014-15/segments/1397609537804.4/warc/CC-MAIN-20140416005217-00283-ip-10-147-4-33.ec2.internal.warc.gz"} |
A recipe requires 3 ingredients A, B, C in the volume ratio 2:3:4. If 6 pints of ingredient B are required, how many pints of ingredients A and C are required?
2A = 3B = 4C
do you understand my hint?
2A = 3(2) = 4C
therefore A & C are 2 as well?
if a= 2 and c =2 then 4 = 6 =8 ?????
Ok they aren't equal
I only know B=6 and I can use 2(3) to make A but there is no way for C
4C = 6 solve for C
this is how I would do it: THe ratio for B is 3, but you need 6 pints. This required multiplying by 2. therefore you multiply the A and C portions by 2 as well. So A is 4 pints and C is ?
Or 2:3:4 = ? :6: ? look at the pattern.
I'm afraid Walleye has misled you a little. It is not true that 2A=3B=4C, though it is a common mistake.
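The ratio 2:3:4 means A/2 = B/3 = C/4 (each ingredient is the same multiple of its ratio number), so with B = 6 the multiplier is 2, giving A = 4 and C = 8; writing 2A = 3B = 4C turns that relationship upside down.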
So A =4 and C=8
You haven't written a testimonial for Owlfred. | {"url":"http://openstudy.com/updates/4ee19b5be4b0a50f5c55f03d","timestamp":"2014-04-21T02:41:21Z","content_type":null,"content_length":"62944","record_id":"<urn:uuid:2237fa3b-5cf6-4cf9-a42b-712f93ecaedb>","cc-path":"CC-MAIN-2014-15/segments/1398223206770.7/warc/CC-MAIN-20140423032006-00654-ip-10-147-4-33.ec2.internal.warc.gz"} |
Manning n for Corrugated Metal Culverts
In Reply Refer To: June 22, 1993
Mail Stop 415
OFFICE OF SURFACE WATER TECHNICAL MEMORANDUM NO. 93.17
Subject: Manning n for Corrugated Metal Culverts
Several types of corrugated metal now used for culvert pipe are
not discussed in Techniques of Water-Resources Investigations
(TWRI), Book 3, Chapter A3, Measurement of Peak Discharge at
Culverts by Indirect Methods. Laboratory studies conducted by
Utah State University for the National Corrugated Steel Pipe
Association provide n values for the new types of corrugation.
These studies have caused the Federal Highway Administration to
revise culvert roughness tables in the manual, Hydraulic Design of
Highway Culverts (Hydraulic Design Series No. 5), and provides
sufficient basis to revise n values for multiplate culverts as
given in TWRI, Book 3, Chapter A3, pages 10 and 11. The values
given herein should be used for all future culvert computations.
The Office of Surface Water (OSW) also recommends that previous
computations for flow through multiplate culverts be reviewed if
the following conditions are met:
1. The n value used in the computation differs by
0.003 or more from the value in this memorandum, and
2. discharges from types 2, 3, 4, or 6 computations using
n values from TWRI, Book 3, Chapter A3, or from ratings
based on such computations, have been published.
Ratings that were based on the old n values and are still in use
should be reviewed and revised if use of the revised n values
change any part of the rating by 5 percent or more. Published
discharges do not need to be revised unless they meet the criteria
for revisions given in Novak (1985, p. 103-104, WRD data reports
preparation guide) and the water-surface elevations and field
conditions on which the computation is based provide a high degree
of reliability to the computed discharge. The following material
supersedes the discussion in Standard riveted section and
Multiplate section in the part of the manual entitled "Corrugated
Metal" under Roughness Coefficients on pageJ10 of TWRI, Book 3,
Chapter A3.
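For perspective, in a computation where barrel friction controls, the Manning
equation V = (1.486/n) R^(2/3) S^(1/2) makes the computed velocity, and hence
the discharge, vary roughly in inverse proportion to n; a change from
n = 0.024 to n = 0.027, for example, shifts the computed discharge by about
11 percent, well beyond the 5-percent criterion given above.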
Corrugated Metal
Corrugated pipes and arches are made in riveted, spiral, and
structural-plate styles. The riveted and spiral styles are used
in small pipes of less than 9-foot diameter. Spiral corrugations
have the same pitch and depth as that used in riveted
construction, but the plates are wound to form a continuous pipe.
Because of its greater strength, structural-plate (also called
multiplate) commonly is used for pipes that are more than 6 feet
in diameter. Multiplate is made in sheets that are bolted
Standard Riveted Sections
The corrugated metal most commonly used in riveted pipes and
arches has a 2 2/3-inch pitch with a rise of 1/2 inch. This is
frequently referred to as standard corrugated metal. According to
laboratory tests, n values for full pipe flow vary from 0.0266 for
a 1-foot-diameter pipe to 0.0224 for an 8-foot-diameter pipe for
velocities normally encountered in culverts. The American Iron
and Steel Institute recommends that a single value of 0.024 be
used in design of both partly-full and full-pipe flow for any size
of pipe. This value may be satisfactory for many computations of
discharge. However, more precise values are given in the
accompanying table, which shows values derived from tables and
graphs published by the Federal Highway Administration for culvert
design and that apply to both annular and spiral corrugations, as
noted in the table. Values from this table should be used by
U.S. Geological Survey offices in computation of discharge through
Riveted pipes are also made from corrugated metal with a 1-inch
rise and 3-, 5-, and 6-inch pitch. Experimental data show a
slight lowering of the n value as pitch increases. The n values
for these three types of corrugation are also given in the table.
Structural Plate (Multiplate)
Structural-plate metal used in multiplate construction has much
larger corrugations than does that used in riveted pipes.
Multiplate construction is used with both steel and aluminum. The
steel has a 6-inch pitch and a 2-inch rise; aluminum has a 9-inch
pitch and a 2.5-inch rise. Tests show somewhat higher n values
for this metal and type of construction than for riveted
construction. Average n values range from 0.035 (steel) or 0.036
(aluminum) for 5-foot-diameter pipes to 0.033 for pipes of 18 feet
or greater diameter. The n values for various diameters of pipe
are given in the following table.
Revised Roughness Coefficients for Corrugated Metal (May 1993)
Pipe diameter,        n value for indicated corrugation size
ft                    (corrugation pitch x rise, inches)
                      Riveted construction                    Structural-plate construction
                      2-2/3 x 1/2   3 x 1   5 x 1   6 x 1     6 x 2     9 x 2-1/2
Annular Corrugations
1 0.027
2 0.025
3 0.024 0.028 0.025
4 0.024 0.028 0.026 0.024
5 0.024 0.028 0.026 0.024 0.035 0.036
6 0.023 0.028 0.026 0.024 0.035 0.035
7 0.023 0.028 0.026 0.023 0.035 0.034
8 0.023 0.028 0.025 0.023 0.034 0.034
9 0.023 0.028 0.025 0.023 0.034 0.034
10 0.022 0.027 0.025 0.023 0.034 0.034
11 0.022 0.027 0.025 0.022 0.034 0.033
12 0.027 0.024 0.022 0.033 0.033
16 (a)0.026(a)0.023(a)0.021
18 (a)0.033
21 (a)0.033
Spiral Corrugations
4 0.020 Use values for annular
5 0.022 corrugations for all other
6 0.023 corrugation sizes and pipe
7 0.023 diameters.
Range of pipe diameter in feet commonly encountered with the above
indicated corrugation size:
<9 3-13 5-13 3-13 5-25 5-25
(a)Extrapolated beyond Federal Highway Administration curves.1
Note: n values apply to pipes in good condition. Severe
deterioration of metal and misalignment of pipe sections
may cause slightly higher values.
1See page 16 HDS-5 for extrapolation.
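For offices that script these computations, the revised values and the review criteria above can be encoded directly. The short sketch below is only an illustration (the dictionary holds the standard 2-2/3 x 1/2 inch annular column from the table, and the helper names are hypothetical, not part of the memorandum):

```python
# Hypothetical helpers illustrating the memorandum's numbers and criteria.
# n values: annular 2-2/3 x 1/2 inch (standard riveted) column of the table.
N_STANDARD_RIVETED = {1: 0.027, 2: 0.025, 3: 0.024, 4: 0.024, 5: 0.024,
                      6: 0.023, 7: 0.023, 8: 0.023, 9: 0.023, 10: 0.022, 11: 0.022}

def review_needed(old_n, new_n, discharges_published=True):
    """Review a previous computation if the n value differs by 0.003 or more
    from the revised value and the discharges have been published."""
    return discharges_published and abs(old_n - new_n) >= 0.003

def rating_revision_needed(old_q, new_q):
    """Revise a rating if the revised n values change any part of it by 5 percent or more."""
    return abs(new_q - old_q) / old_q >= 0.05

# Example: a 6-foot standard riveted culvert originally computed with n = 0.027
print(review_needed(0.027, N_STANDARD_RIVETED[6]))    # True (difference is 0.004)
```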
Charles W. Boning, Chief
Office of Surface Water
WRD DISTRIBUTION: A, B, FO, PO | {"url":"http://water.usgs.gov/admin/memo/SW/sw93.17.html","timestamp":"2014-04-19T06:22:16Z","content_type":null,"content_length":"7680","record_id":"<urn:uuid:134b7376-eb30-4005-8b3b-f8d888d52518>","cc-path":"CC-MAIN-2014-15/segments/1398223202457.0/warc/CC-MAIN-20140423032002-00134-ip-10-147-4-33.ec2.internal.warc.gz"} |
November 26, 1996
This Week's Finds in Mathematical Physics (Week 95)
John Baez
Last week I talked about asymptotic freedom - how the "strong" force gets weak at high energies. Basically, I was trying to describe an aspect of "renormalization" without getting too technical about
it. By coincidence, I recently got my hands on a book I'd been meaning to read for quite a while:
1) Laurie M. Brown, ed., "Renormalization: From Lorentz to Landau (and Beyond)", Springer-Verlag, New York, 1993.
It's a nice survey of how attitudes to renormalization have changed over the years. It's probably the most fun to read if you know some quantum field theory, but it's not terribly technical, and it
includes a "Tutorial on infinities in QED", by Robert Mills, that might serve as an introduction to renormalization for folks who've never studied it.
Okay, on to some new stuff....
It's a bit funny how one of the most curious features of bosonic string theory in 26 dimensions was anticipated by the number theorist Edouard Lucas in 1875. I assume this is the same Lucas who is
famous for the Lucas numbers: 1,3,4,7,11,18,..., each one being the sum of the previous two, after starting off with 1 and 3. They are not quite as wonderful as the Fibonacci numbers, but in a study
of pine cones it was found that while most cones have consecutive Fibonacci numbers of spirals going around clockwise and counterclockwise, a small minority of deviant cones use Lucas numbers.
Anyway, Lucas must have liked playing around with numbers, because in one publication he challenged his readers to prove that: "A square pyramid of cannon balls contains a square number of cannon
balls only when it has 24 cannon balls along its base". In other words, the only integer solution of
1^2 + 2^2 + ... + n^2 = m^2,
is the solution n = 24, not counting silly solutions like n = 0 and n = 1.
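A brute-force numerical check (no substitute for a proof, of course) is easy to run with the closed form n(n+1)(2n+1)/6 for the sum; up to a million the only solutions it finds are the silly n = 1 and Lucas' n = 24:

```python
# Search for n such that 1^2 + 2^2 + ... + n^2 = n(n+1)(2n+1)/6 is a perfect square.
from math import isqrt

for n in range(1, 10**6):
    s = n * (n + 1) * (2 * n + 1) // 6
    m = isqrt(s)
    if m * m == s:
        print(n, m)        # prints "1 1" and "24 70", nothing else up to 10^6
```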
It seems that Lucas didn't have a proof of this; the first proof is due to G. N. Watson in 1918, using hyperelliptic functions. Apparently an elementary proof appears in the following ridiculously
overpriced book
2) W. S. Anglin, "The Queen of Mathematics: An Introduction to Number Theory", Kluwer, Dordrecht, 1995.
For more historical details, see the review in
3) Jet Wimp, Eight recent mathematical books, Math. Intelligencer 18 (1996), 72-79.
Unfortunately, I haven't seen these proofs of Lucas' claim, so I don't know why it's true. I do know a little about its relation to string theory, so I'll talk about that.
There are two main flavors of string theory, "bosonic" and "supersymmetric". The first is, true to its name, just the quantized, special-relativistic theory of little loops made of some abstract
string stuff that has a certain tension - the "string tension". Classically this theory would make sense in any dimension, but quantum-mechanically, for reasons that I want to explain someday but not
now, this theory works best in 26 dimensions. Different modes of vibration of the string correspond to different particles, but the theory is called "bosonic" because these particles are all bosons.
That's no good for a realistic theory of physics, because the real world has lots of fermions, too. (For a bit about bosons and fermions in particle physics, see "week93".)
For a more realistic theory people use "supersymmetric" string theory. The idea here is to let the string be a bit more abstract: it vibrates in "superspace", which has in addition to the usual
coordinates some extra "fermionic" coordinates. I don't want to get too technical here, but the basic idea is that while the usual coordinates commute as usual:
x[i] x[j] = x[j] x[i]
the fermionic coordinates "anticommute"
y[i] y[j] = - y[j] y[i]
while the bosonic coordinates commute with fermionic ones:
x[i] y[j] = y[j] x[i]
If you've studied bosons and fermions this will be sort of familiar; all the differences between them arise from the difference between commuting and anticommuting variables. For a little glimpse of
this subject try:
4) John Baez, Spin and the harmonic oscillator, http://math.ucr.edu/home/baez/harmonic.html
As it so happens, supersymmetric string theory - often abbreviated to "superstring theory" - works best in 10 dimensions. There are five main versions of superstring theory, which I described in "
week74". The type I string theory involves open strings - little segments rather than loops. The type IIA and type IIB theories involve closed strings, that is, loops. But the most popular sort of
superstring theories are the "heterotic strings". A nice introduction to these, written by one of their discoverers, is:
5) David J. Gross, The heterotic string, in "Workshop on Unified String Theories", eds. M. Green and D. Gross, World Scientific, Singapore, 1986, pp. 357-399.
These theories involve closed strings, but the odd thing about them, which accounts for the name "heterotic", is that vibrations of the string going around one way are supersymmetric and act as if
they were in 10 dimensions, while the vibrations going around the other way are bosonic and act as if they were in 26 dimensions!
To get this string with a split personality to make sense, people cleverly think of the 26 dimensional spacetime for the bosonic part as a 10-dimensional spacetime times a little 16-dimensional
curled-up space, or "compact manifold". To get the theory to work, it seems that this compact manifold needs to be flat, which means it has to be a torus - a 16-dimensional torus. We can think of any
such torus as 16-dimensional Euclidean space "modulo a lattice". Remember, a lattice in Euclidean space is something that looks sort of like this:
x x
x x
x x
x x
x x
x x
x x
x x
Mathematically, it's just a discrete subset L of R^n (n-dimensional Euclidean space, with its usual coordinates) with the property that if x and y lie in L, so does jx + ky for all integers j and k.
When we form n-dimensional Euclidean space "modulo a lattice", we decree two points x and y to be the same if x - y is in L. For example, all the points labelled x in the figure above count as the
same when we "mod out by the lattice"... so in this case, we get a 2-dimensional torus.
For more on 2-dimensional tori and their relation to complex analysis, you can read "week13." Here we are going to be macho and plunge right into talking about lattices and tori in arbitrary dimensions.
To get our 26-dimensional string theory to work out nicely when we curl up 16-dimensional space to a 16-dimensional torus, it turns out that we need the lattice L that we're modding out by to have
some nice properties. First of all, it needs to be an "integral" lattice, meaning that for any vectors x and y in L the dot product x.y must be an integer. This is no big deal - there are gadzillions
of integral lattices. In fact, sometimes when people say "lattice" they really mean "integral lattice". What's more of a big deal is that L must be "even", that is, for any x in L the inner product
x.x is even. This implies that L is integral, by the identity
(x + y).(x + y) = x.x + 2x.y + y.y
But what's really a big deal is that L must also be "unimodular". There are different ways to define this concept. Perhaps the easiest to grok is that the volume of each lattice cell - e.g., each
parallelogram in the picture above - is 1. Another way to say it is this. Take any basis of L, that is, a bunch of vectors in L such that any vector in L can be uniquely expressed as an integer
linear combination of these vectors. Then make a matrix with the components of these vectors as rows. Then take its determinant. That should equal plus or minus 1. Still another way to say it is
this. We can define the "dual" of L, say L*, to be all the vectors x such that x.y is an integer for all y in L. An integer lattice is one that's contained in its dual, but L is unimodular if and
only if L = L*. So people also call unimodular lattices "self-dual". It's a fun little exercise in linear algebra to show that all these definitions are equivalent.
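If you want to poke at these equivalent definitions numerically, a few lines of numpy will do; the basis below is just an arbitrary 2-dimensional example I picked of an even integral lattice that fails to be unimodular, so its dual is strictly bigger:

```python
import numpy as np

B = np.array([[1.0, 1.0],
              [0.0, 2.0]])        # rows form a basis of the lattice L
gram = B @ B.T                     # all entries integers, diagonal even: L is even and integral
dual_basis = np.linalg.inv(B).T    # rows form a basis of the dual lattice L*

print(gram)                        # [[2. 2.] [2. 4.]]
print(abs(np.linalg.det(B)))       # 2.0, so L is NOT unimodular ...
print(dual_basis)                  # ... and indeed L* contains (-0.5, 0.5), which is not in L
```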
Why does L have to be an even unimodular lattice? Well, one can begin to understand this a little by thinking about what a closed string vibrating in a torus is like. If you've ever studied the
quantum mechanics of a particle on a torus (e.g. a circle!) you may know that its momentum is quantized, and must be an element of L*. So the momentum of the center of mass of the string lies in L*.
On the other hand, the string can also wrap around the torus in various topologically different ways. Since two points in Euclidean space correspond to the same point in the torus if they differ by a
vector in L, if we imagine the string as living up in Euclidean space, and trace our finger all around it, we don't necessarily come back to the same point in Euclidean space: the same point plus any
vector in L will do. So the way the string wraps around the torus is described by a vector in L. If you've heard of the "winding number", this is just a generalization of that.
So both L and L* are really important here (which has to do with the fashionable subject of "string duality"), and a bunch more work shows that they both need to be even, which implies that L is even
and unimodular.
Now something cool happens: even unimodular lattices are only possible in certain dimensions - namely, dimensions divisible by 8. So we luck out, since we're in dimension 16.
In dimension 8 there is only one even unimodular lattice (up to isometry), namely the wonderful lattice E8! The easiest way to think about this lattice is as follows. Say you are packing spheres in n
dimensions in a checkerboard lattice - in other words, you color the cubes of an n-dimensional checkerboard alternately red and black, and you put spheres centered at the center of every red cube,
using the biggest spheres that will fit. There are some little holes left over where you could put smaller spheres if you wanted. And as you go up to higher dimensions, these little holes get bigger!
By the time you get up to dimension 8, there's enough room to put another sphere OF THE SAME SIZE AS THE REST in each hole! If you do that, you get the lattice E8. (I explained this and a bunch of
other lattices in "week65", so more info take a look at that.)
In dimension 16 there are only two even unimodular lattices. One is E8 + E8. A vector in this is just a pair of vectors in E8. The other is called D16+, which we get the same way as we got E8: we
take a checkerboard lattice in 16 dimensions and stick in extra spheres in all the holes. More mathematically, to get E8 or D16+, we take all vectors in R^8 or R^16, respectively, whose coordinates
are either all integers or all half-integers, for which the coordinates add up to an even integer. (A "half-integer" is an integer plus 1/2.)
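As a quick sanity check of this coordinate description, one can enumerate the shortest nonzero vectors of the 8-dimensional lattice by brute force; the snippet below (mine, not from the article) finds the familiar 240 roots of E8:

```python
# Count the norm-2 vectors of the lattice: all-integer or all-half-integer
# coordinates in R^8 with even coordinate sum.
from itertools import product

roots = []
# integer vectors of norm 2 have exactly two entries equal to +-1
for v in product((-1, 0, 1), repeat=8):
    if sum(x * x for x in v) == 2 and sum(v) % 2 == 0:
        roots.append(v)
# every (+-1/2)^8 vector has norm 8 * 1/4 = 2; keep those with even coordinate sum
for v in product((-0.5, 0.5), repeat=8):
    if sum(v) % 2 == 0:
        roots.append(v)

print(len(roots))    # 240 = 112 + 128
```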
So E8 + E8 and D16+ give us the two kinds of heterotic string theory! They are often called the E8 + E8 and SO(32) heterotic theories.
In "week63" and "week64" I explained a bit about lattices and Lie groups, and if you know about that stuff, I can explain why the second sort of string theory is called "SO(32)". Any compact Lie
group has a maximal torus, which we can think of as some Euclidean space modulo a lattice. There's a group called E8, described in "week90", which gives us the E8 lattice this way, and the product of
two copies of this group gives us E8 + E8. On the other hand, we can also get a lattice this way from the group SO(32) of rotations in 32 dimensions, and after a little finagling this gives us the
D16+ lattice (technically, we need to use the lattice generated by the weights of the adjoint representation and one of the spinor representations, according to Gross). In any event, it turns out
that these two versions of heterotic string theory act, at low energies, like gauge field theories with gauge group E8 x E8 and SO(32), respectively! People seem especially optimistic that the E8 x
E8 theory is relevant to real-world particle physics; see for example:
6) Edward Witten, Unification in ten dimensions, in "Workshop on Unified String Theories", eds. M. Green and D. Gross, World Scientific, Singapore, 1986, pp. 438-456.
Edward Witten, Topological tools in ten dimensional physics, with an appendix by R. E. Stong, in "Workshop on Unified String Theories", eds. M. Green and D. Gross, World Scientific, Singapore, 1986,
pp. 400-437.
The first paper listed here is about particle physics; I mention the second here just because E8 fans should enjoy it - it discusses the classification of bundles with E8 as gauge group.
Anyway, what does all this have to do with Lucas and his stack of cannon balls?
Well, in dimension 24, there are 24 even unimodular lattices, which were classified by Niemeier. A few of these are obvious, like E8 + E8 + E8 and E8 + D16+, but the coolest one is the "Leech
lattice", which is the only one having no vectors of length 2. This is related to a whole WORLD of bizarre and perversely fascinating mathematics, like the "Monster group", the largest sporadic
finite simple group - and also to string theory. I said a bit about this stuff in "week66", and I will say more in the future, but for now let me just describe how to get the Leech lattice.
First of all, let's think about Lorentzian lattices, that is, lattices in Minkowski spacetime instead of Euclidean space. The difference is just that now the dot product is defined by
(x[1],...,x[n]) . (y[1],...,y[n]) = - x[1] y[1] + x[2] y[2] + ... + x[n] y[n]
with the first coordinate representing time. It turns out that the only even unimodular Lorentzian lattices occur in dimensions of the form 8k + 2. There is only one in each of those dimensions, and
it is very easy to describe: it consists of all vectors whose coordinates are either all integers or all half-integers, and whose coordinates add up to an even number.
Note that the dimensions of this form: 2, 10, 18, 26, etc., are precisely the dimensions I said were specially important in "week93" for some other string-theoretic reason. Is this a "coincidence"?
Well, all I can say is that I don't understand it.
Anyway, the 10-dimensional even unimodular Lorentzian lattice is pretty neat and has attracted some attention in string theory:
7) Reinhold W. Gebert and Hermann Nicolai, E10 for beginners, preprint available as hep-th/9411188
but the 26-dimensional one is even more neat. In particular, thanks to the cannonball trick of Lucas, the vector
v = (70,0,1,2,3,4,...,24)
is "lightlike". In other words,
v.v = 0
What this implies is that if we let T be the set of all integer multiples of v, and let S be the set of all vectors x in our lattice with x.v = 0, then T is contained in S, and S/T is a
24-dimensional lattice - the Leech lattice!
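The arithmetic behind the "lightlike" claim is easy to check directly, using the Lorentzian dot product defined above:

```python
v = [70] + list(range(25))                     # (70, 0, 1, 2, ..., 24)

print(-v[0] ** 2 + sum(x * x for x in v[1:]))  # 0: the sum 0^2 + ... + 24^2 is 4900 = 70^2
print(sum(v) % 2)                              # 0: integer coordinates with even sum, so v lies in the lattice
```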
Now that has all sorts of ramifications that I'm just barely beginning to understand. For one, it means that if we do bosonic string theory in 26 dimensions on R^26 modulo the 26-dimensional even
unimodular lattice, we get a theory having lots of symmetries related to those of the Leech lattice. In some sense this is a "maximally symmetric" approach to 26-dimensional bosonic string theory:
8) Gregory Moore, Finite in all directions, preprint available as hep-th/9305139.
Indeed, the Monster group is lurking around as a symmetry group here! For a physics-flavored introduction to that aspect, try:
9) Reinhold W. Gebert, Introduction to vertex algebras, Borcherds algebras, and the Monster Lie algebra, preprint available as hep-th/9308151
and for a detailed mathematical tour see:
10) Igor Frenkel, James Lepowsky, and Arne Meurman, "Vertex Operator Algebras and the Monster," Academic Press, 1988.
Also try the very readable review articles by Richard Borcherds, who came up with a lot of this business:
11) Richard Borcherds, Automorphic forms and Lie algebras.
Richard Borcherds, Sporadic groups and string theory.
These and other papers available at http://www.pmms.cam.ac.uk/Staff/R.E.Borcherds.html; click on the personal home page.
Well, there is a lot more to say, but I need to go home and pack for my Thanksgiving travels. Let me conclude by answering a natural followup question: how many even unimodular lattices are there in
32 dimensions? Well, one can show that there are AT LEAST 80 MILLION!
Some of you may have wondered what's happened to the "tale of n-categories". I haven't forgotten that! In fact, earlier this fall I finished writing a big fat paper on 2-Hilbert spaces (which are to
Hilbert spaces as categories are to sets), and since then I have been struggling to finish another big fat paper with James Dolan, on the general definition of "weak n-categories". I want to talk
about this sort of thing, and other progress on n-categories and physics, but I've been so busy working on it that I haven't had time to chat about it on This Week's Finds. Maybe soon I'll find time.
Addenda: Robin Chapman pointed out that Anglin's proof also appears in the American Mathematical Monthly, February 1990, pp. 120-124, and that another elementary proof has subsequently appeared in
the Journal of Number Theory. David Morrison pointed out in email that since the sum
1^2 + 2^2 + ... + n^2
is n(n+1)(2n+1)/6, this problem can be solved by finding all the rational points (n,m) on the elliptic curve
(1/3) n^3 + (1/2) n^2 + (1/6) n = m^2
which is the sort of thing folks know how to do.
Also, here's something Michael Thayer wrote on one of the newsgroups, and my reply:
>John Baez wrote:
>> In particular, thanks to the cannonball trick of Lucas, the vector
>> v = (70,0,1,2,3,4,...,24)
>> is "lightlike". In other words,
>> v.v = 0
>I don't see what is so significant about the vector v. For instance,
>the 10 dimensional vector (3,1,1,1,1,1,1,1,1,1) is also light like, and
>you make no big deal about that. Is there some reason why the ascending
>values in v are important?
Yikes! Thanks for catching that massive hole in the exposition.
You're right that there's no shortage of lightlike vectors in the even unimodular Lorentzian lattices of other dimensions 8n+2; there are also lots of other lightlike vectors in the 26-dimensional
one. Any one of these gives us a lattice in 8n-dimensional Euclidean space. In fact, we can get all 24 even unimodular lattices in 24-dimensional Euclidean space by suitable choices of lightlike
vector. The lightlike vector you wrote down happens to give us the E8 lattice in 8 dimensions.
So what's so special about the one I wrote, which gives the Leech lattice? Of course the Leech lattice is itself special, but what does this have to do with the nicely ascending values of the components of v?
Alas, I don't know the real answer. I'm not an expert on this stuff; I'm just explaining it in order to try to learn it. Let me just say what I know, which all comes from Chap. 27 of Conway and
Sloane's book "Sphere Packings, Lattices, and Groups".
If we have a lattice, we say a vector r in it is a "root" if the reflection through r is a symmetry of the lattice. Corresponding to each root is a hyperplane consisting of all vectors perpendicular
to that root. These chop space into a bunch of "fundamental regions". If we pick a fundamental region, the roots corresponding to the hyperplanes that form the walls of this region are called
"fundamental roots". The nice thing about the fundamental roots is that the reflection through any root is a product of reflections through these fundamental roots.
[For more stuff on reflection groups and lattices see "week62" and the following weeks.]
In 1983 John Conway published a paper where he showed various amazing things; this is now Chapter 27 of the above book. First, he shows that the fundamental roots of the even unimodular Lorentzian
lattices in dimensions 10, 18, and 26 are the vectors r with r.r = 2 and r.v = -1, where the "Weyl vector" v is, in the 26-dimensional case, exactly the v = (70,0,1,2,...,24) above, and similar vectors in dimensions 10 and 18.
They all have this nice ascending form but only in 26 dimensions is the Weyl vector lightlike!
However, Conway doesn't seem to explain why the Weyl vectors have this ascending form. So I'm afraid I really don't understand how all the pieces fit together. All I can say is that for some reason
the Weyl vectors have this ascending form, and the fact that the Weyl vector is also lightlike makes a lot of magic happen in 26 dimensions. For example, it turns out that in 26 dimensions there are
*infinitely many* fundamental roots, unlike in the two lower dimensional cases.
Just to add mystery upon mystery, Conway notes that in higher dimensions there is no vector v for which all the fundamental roots r have r.v equal to some constant. So the pattern above does not continue.
I find this stuff fascinating, but it would drive me nuts to try to work on it. It's as if God had a day off and was seeing how many strange features he could build into mathematics without actually
making it inconsistent.
Yet another addendum (August 2001): now, with the rise of interest in 11-dimensional physics, there is even a paper on E11:
12) P. West, E[11] and M-theory, available as hep-th/0104081.
© 1996 John Baez | {"url":"http://math.ucr.edu/home/baez/week95.html","timestamp":"2014-04-21T09:37:17Z","content_type":null,"content_length":"25142","record_id":"<urn:uuid:7d483152-1d3b-41c6-9b4e-202360e3283b>","cc-path":"CC-MAIN-2014-15/segments/1398223201753.19/warc/CC-MAIN-20140423032001-00502-ip-10-147-4-33.ec2.internal.warc.gz"} |
How can I write math formula in a Stack Overflow question?
I want to ask something that contains a mathematical formula.
How can I write the equation? Is there a page that shows the syntax?
I thought there must be some syntax like in Wikipedia or something...
support asking-questions markdown math-se
See meta.stackoverflow.com/questions/73504/… - but I don't think the syntax is implemented on SO – ChrisF♦ Jan 28 '11 at 14:52
It's strange that stackoverflow does not support math formulas directly, like MathExchange or CrossValidate. – qed Sep 1 '13 at 14:30
3 Answers
I'm not sure why Simon deleted his answer, but it was right, you can use the Google Charts API. For example, this:
becomes:
Edit: Second formula:
2 +1 I (wrongly) thought they couldn't be added as images. – Gelatin Jan 28 '11 at 15:12
Just to make this complete, an online TEXT editor: codecogs.com/latex/eqneditor.php – Yochai Timmer Jan 28 '11 at 17:42
This doesn't work well when you have more than 1 formula. – Yochai Timmer Jan 28 '11 at 19:40
@Yochai I just tried it in an edit; what issue are you having? – Michael Mrozek Jan 28 '11 at 19:49
tried to put 4 formulas, only the first showed. – Yochai Timmer Jan 28 '11 at 20:03
I need to substitute ) with %29 to make it work in preview when I type the answer. But even if I do so, it doesn't work when I save it and view it. @YochaiTimmer 's answer
works for me. – Haozhun May 28 '13 at 7:26
Ok, best combination I found is doing something like what Michael suggested.
But it's easier to reference the link at the bottom.
So, go to this site: Online LaTex Equation Editor
Create your formula. Use this site and encode it for URL safety: URL Encoder/Decoder
Add the result to the google prefix https://chart.googleapis.com/chart?cht=tx&chl=
Then reference it to your post:
This is a formula ![formula][1]
Another formula: ![another][2]
[1]: https://chart.googleapis.com/chart?cht=tx&chl=%5Csum_%7B23%7D%5E%7B43%7D
[2]: https://chart.googleapis.com/chart?cht=tx&chl=%5Csqrt%7B%5Cfrac%7B%5Cpartial%7D%7B%5Cpartial%20x%7D%7D
It will show like this:
This is a formula
Another formula:
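If you want to script the encode-and-prefix steps above, something along these lines (Python standard library only; the helper name is made up) should reproduce the reference URLs used in this answer:

```python
from urllib.parse import quote

PREFIX = "https://chart.googleapis.com/chart?cht=tx&chl="

def formula_url(latex):
    return PREFIX + quote(latex, safe="")

print(formula_url(r"\sum_{23}^{43}"))
print(formula_url(r"\sqrt{\frac{\partial}{\partial x}}"))
```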
1 Unfortunately, the result is not as good as directly from LaTeX... – brimborium Aug 10 '12 at 10:26
+1 I was having trouble getting this to work using chart.googleapis.com although I was able to get this work using the URL encoded link from codecogs just fine. I was using it for
this answer, any idea why? – Shafik Yaghmour Nov 30 '13 at 4:35
LaTeX Phrases
Physics Stack Exchange uses MathJax to render LaTeX. You can use single dollar signs to delimit inline equations, and double dollars for blocks:
The Gamma function satisfying $\Gamma(n) = (n-1)!\quad\forall n\in\mathbb N$ is given via the Euler integral
$$ \Gamma(z) = \int_0^\infty t^{z-1}e^{-t}dt\,. $$
And you'll see the results as:
To create a LaTeX phrase go to the online LaTeX Equation Editor
This is topic oriented. It definitely works in Physics and Maths Exchange
Not the answer you're looking for? Browse other questions tagged support asking-questions markdown math-se . | {"url":"http://meta.stackoverflow.com/questions/76902/how-can-i-write-math-formula-in-a-stack-overflow-question/76905","timestamp":"2014-04-16T11:06:42Z","content_type":null,"content_length":"79415","record_id":"<urn:uuid:549ada72-755d-4084-9552-8e09e931293b>","cc-path":"CC-MAIN-2014-15/segments/1397609523265.25/warc/CC-MAIN-20140416005203-00121-ip-10-147-4-33.ec2.internal.warc.gz"} |
Suwanee Algebra 1 Tutor
Find a Suwanee Algebra 1 Tutor
...I have also dealt poker in Las Vegas, including the World Series of Poker. I have played against the best in the game. I have an MBA, and I have worked in the business world for 31 years.
29 Subjects: including algebra 1, reading, English, GED
I have loved math all my life and I took lots of it in school. I even attended math contests. Now I have the opportunity to help others with their math.
22 Subjects: including algebra 1, calculus, geometry, ASVAB
...Although I was the mathematics department co-chair for the last 10 years of my career, I continued to teach Algebra I for the major part of those years. Many students' difficulties begin in
Algebra I and I wanted to be sure my students knew the concepts of the first course so they could be succe...
5 Subjects: including algebra 1, geometry, algebra 2, precalculus
...I have experience tutoring family members in basic algebra. I use Excel regularly at work to generate reports on budgets. I also use it for data cleanup and list creation for our customer
14 Subjects: including algebra 1, reading, English, writing
...I have always had an ability to attack problems from multiple angles, allowing me to connect with students who learn in a variety of ways. In classes, I am often a student that others look to
for advice. My success as a student speaks for itself- I graduated as valedictorian of my high school class, and scored a perfect 36 on my ACT.
28 Subjects: including algebra 1, chemistry, calculus, physics
Nearby Cities With algebra 1 Tutor
Berkeley Lake, GA algebra 1 Tutors
Buford, GA algebra 1 Tutors
Canton, GA algebra 1 Tutors
Conyers algebra 1 Tutors
Cumming, GA algebra 1 Tutors
Doraville, GA algebra 1 Tutors
Duluth, GA algebra 1 Tutors
Johns Creek, GA algebra 1 Tutors
Lawrenceville, GA algebra 1 Tutors
Milton, GA algebra 1 Tutors
Norcross, GA algebra 1 Tutors
Rest Haven, GA algebra 1 Tutors
Snellville algebra 1 Tutors
Sugar Hill, GA algebra 1 Tutors
Tucker, GA algebra 1 Tutors | {"url":"http://www.purplemath.com/Suwanee_algebra_1_tutors.php","timestamp":"2014-04-16T16:30:00Z","content_type":null,"content_length":"23558","record_id":"<urn:uuid:44771227-e02b-4ccb-a143-2771eede1f72>","cc-path":"CC-MAIN-2014-15/segments/1398223206147.1/warc/CC-MAIN-20140423032006-00273-ip-10-147-4-33.ec2.internal.warc.gz"} |
locally discrete 2-category
A $2$-category is locally discrete if each hom-category is a discrete category.
Just as a discrete category is the same as a set, so a locally discrete $2$-category is the same as a category. The real value of the concept is to see how a category may be interpreted as a $2$-category.
Created on July 24, 2009 18:02:29 by
Toby Bartels | {"url":"http://ncatlab.org/nlab/show/locally+discrete+2-category","timestamp":"2014-04-17T22:01:27Z","content_type":null,"content_length":"11826","record_id":"<urn:uuid:abbae952-2abe-4b53-900b-5932ca907115>","cc-path":"CC-MAIN-2014-15/segments/1397609532128.44/warc/CC-MAIN-20140416005212-00089-ip-10-147-4-33.ec2.internal.warc.gz"} |
Post a reply
Assuming that you are programming this...
unless |X| means absolute value, which I am not sure will be recognized on a computer.
Absolute value is extremely easy to do on a computer. All it requires is to switch the very first bit of a signed number and then subtract that from the highest possible signed value. Most languages
support an abs() function, but it also works if you just do: x = (unsigned int) x or something similar.
But if you're on a computer, the above isn't needed.
if (X < 0) return -1; return 1;
x³ / ((x²)^9)^(1/6)
That will work but I don't know how you would do it in computer language.
Pretty easy actually:
(x*x*x) / pow(pow(x, 18), 1.0/6.0)   /* note 1.0/6.0 rather than 1/6, which is integer division and would give 0 */
If your language doesn't support a power function, you can write it yourself, although doing non integral powers is a bit tricky. But, as I said, you don't need to do all this. | {"url":"http://www.mathisfunforum.com/post.php?tid=2560&qid=25276","timestamp":"2014-04-19T01:52:41Z","content_type":null,"content_length":"20801","record_id":"<urn:uuid:b9425216-3b46-4bed-8364-c7948232c795>","cc-path":"CC-MAIN-2014-15/segments/1397609535745.0/warc/CC-MAIN-20140416005215-00231-ip-10-147-4-33.ec2.internal.warc.gz"} |
Re: Decidable polymorphic recursion?
Francois Pottier writes:
> I have had a quick look at your paper. I must be missing something
> obvious, but I fail to understand how it doesn't contradict
> Henglein's undecidability result. Henglein proves that
> semi-unification is reducible to typability in ML+FIX (i.e. given an
> expression, is there a valid typing judgement for it?). As far as as
> I understand, your paper claims:
> 1. typability in ML+FIX is equivalent to typability in ML'+FIX'.
> 2. typability in ML'+FIX' is decidable, since algorithm T0 computes principal
> typings.
> Taken together, these facts lead to a contradiction. What am I missing?
> I would very much like to understand where the subtlety lies.
Well... we believe that Henglein's proof [1] that semiunification is
reducible to type inference for ML+FIX contains an error, in theorem
3 (page 268), which states the equivalence of type systems MM' and
MM'' (defined in the paper). By rule (TAUT'') we can derive, for
example, A{x:a},a |- x:Int, since (a,a) <= (Int,a) (where (M1,M2) <=
(N1,N2) is defined in page 262, 1st paragraph). However A{x:a} |-
x:Int cannot be derived in MM'.
[1] F. Henglein, Type inference with polymorphic recursion,
ACM TOPLAS, 253--289, 15(2), 1993. | {"url":"http://www.seas.upenn.edu/~sweirich/types/archive/1999-2003/msg00918.html","timestamp":"2014-04-16T04:58:44Z","content_type":null,"content_length":"3535","record_id":"<urn:uuid:61341bbd-58f3-43b8-8683-c4ef4e40641a>","cc-path":"CC-MAIN-2014-15/segments/1397609521512.15/warc/CC-MAIN-20140416005201-00450-ip-10-147-4-33.ec2.internal.warc.gz"} |
Noncommutative geometry
I will try to describe in loose terms the steps that lead to the emergence of time from noncommutativity in operator algebras. This hopefully will answer the questions of Paul and Sirix (at least in
parts) and of Urs.
First I'll explain the basic formula due to Tomita that associates to a state L a one parameter group of automorphisms. The basic fact is that one can make sense of the map x --> L x L^{-1} as an (unbounded) map from the algebra to itself and then take its complex powers x --> L^{it} x L^{-it}.
To define this map one just compares the two bilinear forms on the algebra given by L(xy) and L(yx) . Under suitable non-degeneracy conditions on L both give an isomorphism of the algebra with its
dual linear space and thus one can find a linear map s from the algebra to itself such that
L(yx) = L(x s(y)) for all x and y.
One can check at this very formal level that s fulfills
s(ab) = s(a) s(b).
Thus still at this very formal level s is an automorphism of the algebra, and the best way to think about it is as x --> L xL^{-1} where one respects the cyclic ordering of terms in writing Lyx=LyL^
{-1}Lx=LxLyL^{-1}. Now all this is formal and to make it "real" one only needs the most basic structure of a noncommutative space, namely the measure theory. This means that the algebra one is
dealing with is a von-Neumann algebra, and that one needs very little structure to proceed since the von-Neumann algebra of an NC-space only embodies its measure theory, which is very little
structure. Thus the main result of Tomita (which was first met with lots of skepticism by the specialists of the subject, was then successfully expounded by Takesaki in his lecture notes and is known
as the Tomita-Takesaki theory) is that when L is a faithful normal state on a von-Neumann algebra M, the complex powers of the associated map
x --> L x L^{-1} make sense and define a one parameter group of automorphisms σ^L_t(x) = L^{it} x L^{-it} of M.
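To get a feeling for these formulas in a completely harmless setting one can test them on n x n matrices, where the state is L(x) = Tr(rho x) for a positive density matrix rho and everything is bounded. The sketch below is only a toy illustration with notation of my own (finite dimensional algebras are type I, so here the flow is inner and nothing deep happens), but it shows the identity L(yx) = L(x s(y)) and the one parameter group of automorphisms concretely:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 3
A = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
rho = A @ A.conj().T + np.eye(n)          # positive definite
rho /= np.trace(rho).real                 # faithful state L(x) = Tr(rho x)

L = lambda x: np.trace(rho @ x)
s = lambda y: rho @ y @ np.linalg.inv(rho)

x = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
y = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
print(np.allclose(L(y @ x), L(x @ s(y))))            # True: L(yx) = L(x s(y))

w, U = np.linalg.eigh(rho)                           # rho = U diag(w) U*
def sigma(t, z):
    rho_it = U @ np.diag(w ** (1j * t)) @ U.conj().T # the unitary rho^{it}
    return rho_it @ z @ rho_it.conj().T              # rho^{it} z rho^{-it}

t = 0.7
print(np.allclose(sigma(t, x @ y), sigma(t, x) @ sigma(t, y)))   # automorphism
print(np.allclose(sigma(t, sigma(-t, x)), x))                    # one parameter group
```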
There are many faithful normal states on a von-Neumann algebra and thus many corresponding one parameter groups of automorphisms σ^L_t. It is here that the two by two matrix trick (Groupe modulaire d'une algèbre de von Neumann, C. R. Acad. Sci. Paris, Sér. A-B, 274, 1972) enters the scene and shows that in fact the groups σ^L_t are all the same modulo inner automorphisms!
Thus if one lets Out(M) be the quotient of the group of automorphisms of M by the normal subgroup of inner automorphisms one gets a completely canonical group homomorphism from the additive group
of real numbers
--> Out(M)
and it is this group that I always viewed as a tantalizing candidate for "emerging time" in physics. Of course it immediately gives invariants of von-Neumann algebras such as the group T(M) of
"periods" of M which is the kernel of the above group morphism. It is at the basis of the classification of factors and reduction from type III to type II + automorphisms which I did in June 1972 and
published in my thesis
(with the missing III_1 case later completed by Takesaki).
This "emerging time" is non-trivial when the noncommutative space is far enough from "classical" spaces. This is the case for instance for the
leaf space
of foliations such as the Anosov foliations for Riemann surfaces and also for the space of Q-lattices modulo scaling in our joint work
with Matilde Marcolli.
The real issue then is to make the connection with time in quantum physics. By the computation of Bisognano-Wichmann one knows that the
σ^L_t for the restriction of the vacuum state to the local algebra in free quantum field theory associated to a Rindler wedge region (defined by x_1 > ± x_0) is in fact the evolution of that algebra
according to the "proper time" of the region. This relates to the thermodynamics of black holes and to the Unruh temperature. There is a whole literature on what happens for conformal field theory in
dimension two. I'll discuss the above real issue of the connection with time in quantum physics in another post.
When we started this blog we promised to gradually build a dictionary (see here and here ) of concepts in use in NCG (= noncommutative geometry). We started with the following list
Commutative .................................................Noncommutative
functions f: X \to C .................operators on Hilbert space; elements of an algebra
pointwise multiplication fg.....................................ab (composition)
range of a function................................spectrum of an operator
Complex variable................Operator on Hilbert space
Real variable..........................Self-adjoint operator
The very first entry of this dictionary however should be about the idea of a noncommutative space. So what is a `noncommutative space', really? Let me quote here an excerpt from Alain's interview
with George Skandalis to appear soon in
the EMS (European math society) journal:
"Question: What is noncommutative geometry? In your opinion, is "noncommutative geometry" simply a better name for operator algebras or is it a close but distinct field?
answer: Yes, it’s important to be more precise. First, noncommutative geometry for me is this duality between geometry and algebra, with a striking coincidence between the algebraic rules and the
linguistic ones. Ordinary language never uses parentheses inside the words. This means that associativity is taken into account, but not commutativity, which would permit permuting the letters
freely. With the commutative rules my name appears 4 times in the cryptic message a friend sent me recently: « Je suis alenconnais, et non alsacien. Si t’as besoin d’un conseil nana, je t’attends au
coin annales. Qui suis-je ? »
Somehow commutativity blurs things. In the noncommutative world, which shows up in physics at the level of microscopic systems, the simplifications coming from commutativity are no longer allowed.
This is the difference between noncommutative geometry and ordinary geometry, in which coordinates commute. There is something intriguing in the fact that the rules for writing words coincide with
the natural rules of algebraic manipulation, namely associativity but not commutativity. Secondly, for me, the passage to noncommutative is exactly the passage from a completely static space in which
points do not talk to each other, to a noncommutative space, in which points start being related to each other, as isomorphic objects of a category. When some points are related to each other, they
will be represented by matrices on the algebraic side, exactly in the same way as Heisenberg discovered the matrix mechanics of microscopic systems. One does not go very far if one remains at this
strictly algebraic level, with letter manipulations... and the real point of departure of noncommutative geometry is von Neumann algebras. What really convinced me that operator algebras is a very
fertile field is when I realized –because of the 2 by 2 matrix trick – that a noncommutative operator algebra evolves with time! It admits a canonical flow of outer automorphisms and in particular it
has “periods”! Once you understand this, you realize that the noncommutative world instead of being only a pale reflection, a meaningless generalization of the commutative case, admits totally new
and unexpected features, such as this generation of the flow of time from noncommutativity. However, I don’t identify noncommutative geometry with operator algebras; this field has a life of its own.
New phenomena are discovered and it is very important to study operator algebras per se -I have spent a large part of my life doing that. But on the other hand, operator algebras only capture certain
aspects of a noncommutative space, and the “only” commutative von Neumann algebra is L∞[0; 1]! To be more specific, von Neumann algebras only capture the measure theory, and Gelfand’s C*-algebras the
topology. And there are many more aspects in a geometric space: the differential structure and crucially the metric. Noncommutative geometry can be organized according to what qualitative feature you
look at when you analyze a space. But, of course, as a living body you cannot isolate any of these aspects from the others without destroying its integrity. One aspect on which I worked with greatest
intensity in recent times is a shift of paradigm which is almost forced on you by noncommutativity: it bears on the metric aspect, the measurement of distances. This is where the Dirac operator plays
a key role. Instead of measuring distances effectively by taking the shortest path from one point to another, you are led to a dual point of view, forced upon you when you are doing non-commutative
geometry: the only way of measuring distances in the noncommutative world is spectral. It simply consists of sending a wave from a point a to a point b and then measuring the phase shift of the wave.
Amusingly this shift of paradigm already took place in the metric system, when in the sixties the definition of the unit of length, which used to be a concrete metal bar, was replaced by the
wavelength of an atomic spectral line. So the shift which is forced upon you by noncommutative geometry already happened in physics. This is a typical example where the noncommutative generalization
corresponds to an abrupt change even in the commutative case."
I was planning to continue with a detailed analysis of the question, but I think it is important to stop right now and answer some questions. I would particularly encourage students and others to come
online and pose their questions, comments and remarks about issues discussed so far.
I guess one possible use of a blog, like this one, is as a space of freedom where one can tell things that would be out of place in a "serious" math paper. The finished technical stuff finds its
place in these papers and it is a good thing that mathematicians maintain a high standard in the writing style since otherwise one would quickly lose control of what is proved and what is just
wishful thinking. But somehow it leaves no room for the more profound source, of poetical nature, that sets things into motion at an early stage of the mental process leading to the discovery of new
"hard" facts. Grothendieck expressed this in a vivid manner in Récoltes et semailles :
"L'interdit qui frappe le rêve mathématique, et à travers lui, tout ce qui ne se présente pas sous les aspects habituels du produit fini, prêt à la consommation. Le peu que j'ai appris sur les autres
sciences naturelles suffit à me faire mesurer qu'un interdit d'une semblable rigueur les aurait condamnées à la stérilité, ou à une progression de tortue, un peu comme au Moyen Age où il n'était pas
question d'écornifler la lettre des Saintes Ecritures. Mais je sais bien aussi que la source profonde de la découverte, tout comme la démarche de la découverte dans tous ses aspects essentiels, est
la même en mathématique qu'en tout autre région ou chose de l'Univers que notre corps et notre esprit peuvent connaitre. Bannir le rêve, c'est bannir la source - la condamner à une existence occulte"
I shall try to follow up on the post of Masoud about tilings and give a heuristic description of a basic qualitative feature of noncommutative spaces which is perfectly illustrated by the space T of
Penrose tilings of the plane. Given the two basic tiles : the Penrose kites and darts (or those shown in the pictures), one can tile the plane with these two tiles (with a matching condition on the
colors of the vertices) but no such tiling is periodic. Two tilings are the same if they are carried into each other by an isometry of the plane. There are plenty of examples of tilings which are not
the same. The set T of all tilings of the plane by the above two tiles is a very strange set because of the following:
"Every finite pattern of tiles in a tiling by kites and darts does occur, and infinitely many times, in any other tiling by the same tiles''.
This means that it is impossible to decide locally with which tiling one is dealing. Any pair of tilings can be matched on arbitrarily large patches and there is no way to tell them apart by looking
only at finite portions of each of them. This is in sharp contrast with real numbers for instance since if two real numbers are distinct their decimal expansions will certainly be different far
enough. I remember attending quite long ago a talk by Roger Penrose in which he superposed two transparencies with a tiling on each and showed the strange visual impression one gets by matching large
patches of one of them with the other... he expressed the intuitive feeling one gets from the richness of these "variations on the same point" as being similar to "quantum fluctuations". A space like
the space T of Penrose tilings is indeed a prototype example of a noncommutative space. Since its points cannot be distinguished from each other locally one finds that there are no interesting real
(or complex) valued functions on such a space which stands apart from a set like the real line R and cannot be analyzed by means of ordinary real valued functions. But if one uses the dictionary one
finds out that the space T is perfectly encoded by a (non-commutative) algebra of q-numbers which accounts for its "quantum" aspect. See this book for more details.
In a comment to the post of Masoud on tilings the question was formulated of a relation between aperiodic tilings and primes. A geometric notion, analogous to that of aperiodic tiling, that indeed
corresponds to prime numbers is that of a Q-lattice. This notion was introduced in our joint work with Matilde Marcolli and is simply given by a pair of a lattice L in R together with an additive map
from Q/Z to QL/L. Two Q-lattices are commensurable when the lattices are commensurable (which means that their sum is still a lattice) and the maps agree (modulo the sum). The space X of Q-lattices
up to commensurability comes naturally with a scaling action (which rescales the lattice and the map) and an action of the group of automorphisms of Q/Z by composition. Again, as in the case of
tilings the space X is a typical noncommutative space with no interesting functions. It is however perfectly encoded by a noncommutative algebra and the natural cohomology (cyclic cohomology) of this
algebra can be computed in terms of a suitable space of distributions on X, as shown in our joint work with Consani and Marcolli.
There are two main points then, the first is that the zeros of the Riemann zeta function appear as an absorption spectrum (ie as a cokernel) from the representation of the scaling group in the above
cohomology, in the sector where the group of automorphisms of Q/Z is acting trivially (the other sectors are labeled by characters of this group and give the zeros the corresponding L-functions).
The second is that if one applies the Lefschetz formula as formulated in the distribution theoretic sense by Guillemin and Sternberg (after Atiyah and Bott) one obtains the Riemann-Weil explicit
formulas of number theory that relate the distribution of prime numbers with the zeros of zeta.
A first striking feature is that one does not even need to define the zeta function (or L-functions), let alone its analytic continuation, before getting at the zeros which appear as a spectrum. The
second is that the Riemann-Weil explicit formulas involve rather delicate principal values of divergent integrals whose formulation uses a combination of the Euler constant and the logarithm of 2 pi,
and that exactly this combination appears naturally when one computes the operator theoretic trace, thus the equality of the trace with the explicit formula can hardly be an accident.
After the initial paper an important advance was done by Ralf Meyer who showed how to prove the explicit formulas using the above functional analytic framework (instead of the Cauchy integral).
This hopefully will shed some light on the comment of Masoud which hinged on the tricky topic of the use of noncommutative geometry in an approach to RH. It is a delicate topic because as soon as one
begins to discuss anything related to RH it generates some irrational attitudes. For instance I was for some time blinded by the possibility to restrict to the critical zeros, by using a suitable
function space, instead of trying to follow the successful track of André Weil and develop noncommutative geometry to the point where his argument for the case of positive characteristic could be
successfully transplanted. We have now started walking on this track in our joint paper with Consani and Marcolli, and while the hope of reaching the goal is still quite far distant, it is a great
incentive to develop the missing noncommutative geometric tools. As a first goal, one should aim at translating Weil's proof in the function field case in terms of the noncommutative geometric
framework. In that respect both the paper of Benoit Jacob and the paper of Consani and Marcolli that David Goss mentionned in his recent post open the way.
I'll end up with a joke inspired by the European myth of Faust, about a mathematician trying to bargain with the devil for a proof of the Riemann hypothesis. This joke was told to me some time ago by
Ilan Vardi and I happily use it in some talks, here I'll tell it in French which is a bit easier from this side of the atlantic, but it is easy to translate....
La petite histoire veut qu'un mathématicien ayant passé sa vie à essayer de résoudre ce problème se décide à vendre son âme au diable pour enfin connaître la réponse. Lors d'une première rencontre
avec le diable, et après avoir signé les papiers de la vente, il pose la question "L'hypothèse de Riemann est-elle vraie ?" Ce à quoi le diable répond "Je ne sais pas ce qu'est l'hypothèse de
Riemann" et après les explications prodiguées par le mathématicien "hmm, il me faudra du temps pour trouver la réponse, rendez vous ici à minuit, dans un mois". Un mois plus tard le mathématicien
(qui a vendu son âme) attend à minuit au même endroit... minuit, minuit et demi... pas de diable... puis vers deux heures du matin alors que le mathématicien s'apprête à quitter les lieux, le diable
apparaît, trempé de sueur, échevelé et dit "Désolé, je n'ai pas la réponse, mais j'ai réussi à trouver une formulation équivalente qui sera peut-être plus accessible!"
The second issue of JNCG is out. Take a look.....
MOSAIC SOPHISTICATION A quasi-crystalline Penrose pattern at the Darb-i Imam shrine in Isfahan, Iran
A few days ago I noticed this article in NYT science section that reports on a recent paper by Lu and Steinhardt in Science (see here and here for the full article; thanks to `thomas1111'). Their
abstract says: ``The conventional view holds that girih (geometric star-and-polygon, or strapwork) patterns in medieval Islamic architecture were conceived by their designers as a network of
zigzagging lines, where the lines were drafted directly with a straightedge and a compass. We show that by 1200 C.E. a conceptual breakthrough occurred in which girih patterns were reconceived as
tessellations of a special set of equilateral polygons ("girih tiles") decorated with lines. These tiles enabled the creation of increasingly complex periodic girih patterns, and by the 15th century,
the tessellation approach was combined with self-similar transformations to construct nearly perfect quasi-crystalline Penrose patterns, five centuries before their discovery in the West''.
Interestingly enough the occurrence of quasi periodic tilings in old Persian art was also extensively commented on, last year, in Alain and Matilde's article ``A walk in the noncommutative garden"
(see Section 9 on tilings). The first four pics are from their article. (see also lieven le bruyn’s weblog where the NYT article is commented at). We look forward to comments by people in NCG,
operator algebras, and those working on quasi periodic crystals. | {"url":"http://noncommutativegeometry.blogspot.com/2007_03_01_archive.html","timestamp":"2014-04-17T18:31:12Z","content_type":null,"content_length":"124733","record_id":"<urn:uuid:796d95e1-d6cb-4900-9d67-634496c5c6b1>","cc-path":"CC-MAIN-2014-15/segments/1397609530895.48/warc/CC-MAIN-20140416005210-00025-ip-10-147-4-33.ec2.internal.warc.gz"} |
An $L^0$ Khintchine inequality
Suppose that $\epsilon_1,\epsilon_2,\ldots$ are IID random variables with the Bernoulli distribution $\mathbb{P}(\epsilon_n=\pm1)=1/2$, and $a_1,a_2,\ldots$ is a real sequence with $\sum_na_n^2=1$.
Letting $S=\sum_n\epsilon_na_n$, the question is whether there exists a constant $c > 0$, independent of the choice of $a$, with $$ \mathbb{P}(\vert S\vert\ge1)\ge c.\qquad\qquad{\rm(1)} $$ That is,
I am interested in finding a bound on the probability of the sum being within one standard deviation of its mean. If true, this represents a particularly sharp version of the $L^0$ Khintchine
inequality. Considering the example with $a_1=1$ and all other $a_i$ set to zero, for which $\mathbb{P}(\vert S\vert > 1)=0$, it is necessary that the inequality inside the probability in (1) is not
strict. Also, considering the example with $(a_1,a_2,a_3)=(1/\sqrt2,1/2,1/2)$, it can be seen that $c\le1/4$. I wonder if it is possible to construct further examples showing that $c$ must, in fact,
be zero?
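For what it's worth, the probabilities in the small examples above can be checked by exhaustive enumeration over sign patterns (this of course says nothing about the infimum over all $a$); a throwaway script:

```python
from itertools import product
from math import sqrt

def prob_ge_one(a):
    n = len(a)
    hits = sum(1 for eps in product((-1, 1), repeat=n)
               if abs(sum(e * x for e, x in zip(eps, a))) >= 1 - 1e-12)   # tolerance so |S| = 1 counts
    return hits / 2 ** n

print(prob_ge_one([1.0]))                      # 1.0
print(prob_ge_one([1 / sqrt(2), 0.5, 0.5]))    # 0.25, matching the c <= 1/4 example
print(prob_ge_one([1 / sqrt(6)] * 6))          # 0.21875 = 7/32
```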
For any $0 < u < 1$, it is easy to find a bound $$ \mathbb{P}(\vert S\vert > u)\ge c_u $$ for $c_u > 0$ a constant independent of $a$. Considering the case with $a_1=a_2=1/\sqrt{2}$ and all other
$a_i$ set to zero, it is clear that $c_u \le 1/2$. In fact, it can be shown that $c_u=(1-u^2)^2/3$ will suffice (see my answer to this other MO question), but $c_u$ decreases to zero as $u$ goes to
$1$, so this does not help with (1). Combining the Paley-Zygmund inequality with the optimal constants in the $L^p$-versions of the Khintchine inequality for $p > 0$ (see ref. 1 or 2) it is possible
to give improved values for $c_u$, but it still tends to zero as $u$ goes to 1.
My apologies if this is either obvious or some well-known fact that I have missed, but I could not find any reference for it. This question is something that I originally thought about while writing
up some notes on stochastic integration (posted on my blog), as the $L^0$-version of the Khintchine inequality can be used to prove the existence of the stochastic integral. However, it is not
necessary to have something as strong as (1) in that case. More recently, it came up again while answering this MO question.
1. Haagerup, The best constants in the Khintchine inequality, Studia Math., 70 (3) (1982), 231-283.
2. Nazarov & Podkorytov, Ball, Haagerup, and distribution functions, Preprint (1997). Available from Fedja Nazarov's homepage.
pr.probability inequalities
1 ${\rm P}(|S| \geq 1) = 2[1-{\rm P}(S \lt 1)]$; ${\bf Problem 10} \star$ in math.leidenuniv.nl/~naw/home/ps/pdf/2008-2.pdf (perhaps open) asks for the probability ${\rm P}(S \lt 1)$. – Shai Covo
Jan 31 '11 at 11:08
Considering $a=(1,1,1,1,1,1)/\sqrt{6}$ gives a probability of $7/2^5=0.21875$. Wonder how close that is to optimal? – George Lowther Jan 31 '11 at 12:52
1 Answer
OK. Here's a proof that $c > 0.002$. No doubt it can be substantially improved. We can assume the $a_i$ are arranged in decreasing order. Write $a$ for $a_1$.
If $a\ge 1/2$, let $X=a_1\epsilon_1$ and $Y=(1-a^2)^{-1/2}(a_2\epsilon_2+\ldots+a_n\epsilon_n)$ so that $S=X+\sqrt{1-a^2}Y$. Notice that $Y$ is of the form so that the inequality in the
question applies.
Now $\mathbb P(|\sqrt{1-a^2}Y|\ge 1-a)=\mathbb P(|Y|\ge \sqrt{\frac{1-a}{1+a}})\ge \left(1-\frac{1-a}{1+a}\right)^2/3=4a^2/(3(1+a)^2)$. Since $a\ge 1/2$, this exceeds 4/27, so that
provided $\epsilon_1$ has the same sign as $Y$, the sum is at least 1. This occurs with probability at least 2/27.
accepted If on the other hand $a<1/2$ then we have $a_i^2<1/4$ for each $i$. In particular there exists a partition of $\{1,\ldots,n\}$ into two sets $A$ and $B$ such that $3/8\le \sum_{i\in A}
a_i^2\le \sum_{i\in B}a_i^2\le 5/8$. Let $\alpha^2=\sum_{i\in A}a_i^2$ and $\beta^2=\sum_{i\in B}a_i^2$. Let $X=\sum_{i\in A}(a_i/\alpha)\epsilon_i$ and $Y=\sum_{i\in B}(a_i/\beta)\
epsilon_i$. Then $\mathbb P(|X|\ge 3/4)\ge 49/768$ by the given inequality. Similarly $\mathbb P(|Y|\ge 3/4)\ge 49/768$. The probability that they both exceed $3/4$ and have the same
sign is at least $1/2(49/768)^2$. If this is the case $|S|=\alpha |X|+\beta |Y|\ge (3/4)(\alpha+\beta)$. In the worst case $\alpha=\sqrt{3/8}$ and $\beta=\sqrt{5/8}$, but even in this
case the right side exceeds 1.
That's fantastic, thanks! Worked out easier than I expected. – George Lowther Jan 31 '11 at 12:20
Not the answer you're looking for? Browse other questions tagged pr.probability inequalities or ask your own question. | {"url":"http://mathoverflow.net/questions/53855/an-l0-khintchine-inequality?sort=votes","timestamp":"2014-04-18T03:40:55Z","content_type":null,"content_length":"57060","record_id":"<urn:uuid:e41b2447-644e-4251-84be-08664fe0a0da>","cc-path":"CC-MAIN-2014-15/segments/1398223201753.19/warc/CC-MAIN-20140423032001-00651-ip-10-147-4-33.ec2.internal.warc.gz"} |
Optimal Scheduling of Cooperative Tasks in a Distributed System Using an Enumerative Method
March 1993 (vol. 19 no. 3)
pp. 253-267
ASCII Text x
D.-T. Peng, K.G. Shin, "Optimal Scheduling of Cooperative Tasks in a Distributed System Using an Enumerative Method," IEEE Transactions on Software Engineering, vol. 19, no. 3, pp. 253-267, March 1993.
BibTex x
@article{ 10.1109/32.221134,
author = {D.-T. Peng and K.G. Shin},
title = {Optimal Scheduling of Cooperative Tasks in a Distributed System Using an Enumerative Method},
journal ={IEEE Transactions on Software Engineering},
volume = {19},
number = {3},
issn = {0098-5589},
year = {1993},
pages = {253-267},
doi = {http://doi.ieeecomputersociety.org/10.1109/32.221134},
publisher = {IEEE Computer Society},
address = {Los Alamitos, CA, USA},
}
RefWorks Procite/RefMan/Endnote x
TY - JOUR
JO - IEEE Transactions on Software Engineering
TI - Optimal Scheduling of Cooperative Tasks in a Distributed System Using an Enumerative Method
IS - 3
SN - 0098-5589
EPD - 253-267
A1 - D.-T. Peng,
A1 - K.G. Shin,
PY - 1993
KW - cooperative tasks; enumerative method; processing nodes; distributed system; common goal; intertask communications; precedence constraints; normalized task response time; system hazard; PERT/
CPM form; activity on arc; AOA; task graph; dominance relationship; simultaneously schedulable modules; active schedules; optimal schedule; task scheduling problem; computational experiences;
distributed processing; PERT; real-time systems; scheduling
VL - 19
JA - IEEE Transactions on Software Engineering
ER -
Preemptive (resume) scheduling of cooperative tasks that have been preassigned to a set of processing nodes in a distributed system is discussed, where each task is assumed to consist of several modules.
During the course of their execution, the tasks communicate with each other to collectively accomplish a common goal. Such intertask communications lead to precedence constraints between
the modules of different tasks. The objective of this scheduling is to minimize the maximum normalized task response time, called the system hazard. Real-time tasks and the precedence constraints
among them are expressed in a PERT/CPM form with activity on arc (AOA), called the task graph (TG), in which the dominance relationship between simultaneously schedulable modules is derived and used
to reduce the size of the set of active schedules to be searched for an optimal schedule. Lower-bound costs are estimated, and are used to bound the search. An example of the task scheduling problem
and some computational experiences are presented.
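To make the objective concrete, here is a small exhaustive-search sketch in Python (written for this page; it is not the authors' branch-and-bound algorithm, and the task graph, processing times and normalization constants are invented): it enumerates all single-node module orderings of a tiny task graph with one intertask precedence and reports the minimum achievable system hazard, i.e. the smallest possible value of the maximum normalized task response time.
from itertools import permutations

# invented example: module -> (processing time, predecessor modules); a1 -> b2 is an
# intertask precedence arising from an intertask communication
modules = {
    "a1": (2.0, set()), "a2": (3.0, {"a1"}),          # task A = a1 then a2
    "b1": (1.0, set()), "b2": (4.0, {"b1", "a1"}),    # task B = b1 then b2
}
tasks = {"A": ["a1", "a2"], "B": ["b1", "b2"]}
norm = {"A": 8.0, "B": 9.0}     # assumed per-task normalization constants

def hazard(order):
    # finish times when the modules run back-to-back in the given order on one node;
    # returns None if a module would start before one of its predecessors has finished
    t, finish = 0.0, {}
    for m in order:
        p, preds = modules[m]
        if any(q not in finish for q in preds):
            return None
        t += p
        finish[m] = t
    return max(finish[chain[-1]] / norm[name] for name, chain in tasks.items())

best = min(h for h in map(hazard, permutations(modules)) if h is not None)
print("minimum system hazard over all feasible orderings:", best)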
[1] K.G. Shin, C.M. Krishna, and Y.-H. Lee, "A Unified Method for Evaluating Real-Time Computer Controllers and Its Application,"IEEE Trans. Automatic Control, Vol. AC-30, No. 4, Apr. 1985, pp.
[2] D.-T. Peng and K. G. Shin, "Static allocation of periodic tasks with precedence constraints in distributed real-time systems,"IEEE Proc. 9th Int. Conf. Distrib. Computing Syst., 1989, pp.
[3] D. T. Peng, "Modeling, assignment and scheduling of tasks in distributed real-time systems," Ph.D. dissertation, Dept. of Electrical Engineering and Computer Science, The University of Michigan,
Ann Arbor, MI, Dec. 1989.
[4] H. A. Taha,Operations Research: An Introduction. New York: Macmillan, 1976.
[5] K. R. Baker,Introduction to Sequencing and Scheduling. New York: Wiley, 1974.
[6] R. Bellman, A. O. Esogbue, and I. Nabeshima,Mathematical Aspects of Scheduling and Applications. Elmsford, NY: Pergamon, 1982.
[7] T. Gonzalez and S. Sahni, "Flowshop and jobshop schedules: Complexity and approximation,"Operations Res., vol. 26, no. 1, Jan.-Feb. 1978, pp. 36-52.
[8] J. K. Lenstra, A. H. G. Rinnooy Kan, and P. Brucker, "Complexity of machine scheduling problems,"Ann. Discrete Math., vol. 1, pp. 343-362, 1977.
[9] K. R. Bakeret al., "Preemptive scheduling of a single machine to minimize maximum cost subject to release dates and precedence constraints,"Operations Res., vol. 31, no. 2, Mar.-Apr. 1983, pp.
[10] J. Blazewicz, J. K. Lenstra, and A. H. G. Rinnooy Kan, "Scheduling subject to resource constraints: Classification and complexity,"Discrete Applied Math., vol. 5, 1983, pp. 11-24.
[11] B. Giffler and G. L. Thompson, "Algorithms for solving production scheduling problems,"Operations Res., vol. 8, no. 4, pp. 487-503, 1960.
[12] G. H. Brooks and C. R. White, "An algorithm for finding optimal or near optimal solutions to the production scheduling problems,"J. Indust. Eng., vol. 16, 1965, pp. 34-40.
[13] L. Schrage, "Solving resource-constrained network problems by implicit enumeration--Nonpreemptive case,"Operations Res., vol. 18, pp. 263-278, 1970.
[14] L. Schrage, "Solving resource-constrained network problems by implicit enumeration--Preemptive case,"Operations Res., vol. 20, pp. 668-677, 1972.
[15] E. L. Lawleret al., "Recent developments in deterministic sequencing and scheduling: A survey," inDeterministic and Stochastic Scheduling, Dempsteret al., Eds. Dordrecht, The Netherlands:
Reidel, 1982, pp. 35-74.
[16] B. J. Lageweg, J. K. Lenstra, and A. H. G. Rinnooy Kan, "Job-shop scheduling by implicit enumeration,"Management Scie., vol. 24, no. 4, pp. 441-450, 1977.
[17] J. P. Tremblay and R. Manohar,Discrete Mathematical Structures with Applications to Computer Science, New York: McGraw-Hill, 1987.
[18] W. H. Kohler and K. Steiglitz, "Enumerative and interactive computational approach," inComputer and Job-Shop Scheduling Theory, Coffman Eds. New York: Wiley, 1976, pp. 229-287.
Index Terms:
cooperative tasks; enumerative method; processing nodes; distributed system; common goal; intertask communications; precedence constraints; normalized task response time; system hazard; PERT/CPM
form; activity on arc; AOA; task graph; dominance relationship; simultaneously schedulable modules; active schedules; optimal schedule; task scheduling problem; computational experiences; distributed
processing; PERT; real-time systems; scheduling
D.-T. Peng, K.G. Shin, "Optimal Scheduling of Cooperative Tasks in a Distributed System Using an Enumerative Method," IEEE Transactions on Software Engineering, vol. 19, no. 3, pp. 253-267, March
1993, doi:10.1109/32.221134
Usage of this product signifies your acceptance of the
Terms of Use | {"url":"http://www.computer.org/csdl/trans/ts/1993/03/e0253-abs.html","timestamp":"2014-04-19T03:18:01Z","content_type":null,"content_length":"55217","record_id":"<urn:uuid:4a93c6a5-bd0e-4707-8399-d122daaa6256>","cc-path":"CC-MAIN-2014-15/segments/1398223205375.6/warc/CC-MAIN-20140423032005-00550-ip-10-147-4-33.ec2.internal.warc.gz"} |
On a class of metrics related to graph layout problems
Letchford, A N and Reinelt, G and Seitz, H and Theis, D O (2010) On a class of metrics related to graph layout problems. Linear Algebra and its Applications, 433 (11-12). pp. 1760-1777.
We examine the metrics that arise when a finite set of points is embedded in the real line, in such a way that the distance between each pair of points is at least 1. These metrics are closely
related to some other known metrics in the literature, and also to a class of combinatorial optimization problems known as graph layout problems. We prove several results about the structure of these
metrics. In particular, it is shown that their convex hull is not closed in general. We then show that certain linear inequalities define facets of the closure of the convex hull. Finally, we
characterize the unbounded edges of the convex hull and of its closure.
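As a small illustration of the objects studied (this snippet is not from the paper), one can decide by brute force whether a given finite metric is realized by points on the real line with all pairwise distances at least 1: translate one point to the origin, try every sign pattern for the remaining points, and check both the exact distances and the unit-separation constraint.
from itertools import product

def line_realizable(d, tol=1e-9):
    # is the metric d realized by points on the real line with |y_i - y_j| = d[i][j]
    # and all pairwise distances >= 1?  Fix y_0 = 0 and try every sign pattern.
    n = len(d)
    for signs in product([1, -1], repeat=n - 1):
        y = [0.0] + [s * d[0][i + 1] for i, s in enumerate(signs)]
        exact = all(abs(abs(y[i] - y[j]) - d[i][j]) < tol
                    for i in range(n) for j in range(i + 1, n))
        if exact and all(abs(y[i] - y[j]) >= 1 - tol
                         for i in range(n) for j in range(i + 1, n)):
            return y
    return None

print(line_realizable([[0, 1, 3], [1, 0, 2], [3, 2, 0]]))   # realizable: [0, 1, 3]
print(line_realizable([[0, 1, 1], [1, 0, 1], [1, 1, 0]]))   # the equilateral metric: None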
Item Type: Article
Journal or Publication Title: Linear Algebra and its Applications
Uncontrolled Keywords: metric spaces ; graph layout problems ; convex analysis ; polyhedral combinatorics
Subjects: H Social Sciences > HB Economic Theory
Departments: Lancaster University Management School > Management Science
ID Code: 45623
Deposited By: ep_importer_pure
Deposited On: 11 Jul 2011 19:35
Refereed?: Yes
Published?: Published
Last Modified: 09 Apr 2014 22:35
Identification Number:
URI: http://eprints.lancs.ac.uk/id/eprint/45623
Actions (login required) | {"url":"http://eprints.lancs.ac.uk/45623/","timestamp":"2014-04-19T12:29:08Z","content_type":null,"content_length":"17243","record_id":"<urn:uuid:ea4a568f-7450-4f1c-8eb9-5d945eb89902>","cc-path":"CC-MAIN-2014-15/segments/1398223202548.14/warc/CC-MAIN-20140423032002-00207-ip-10-147-4-33.ec2.internal.warc.gz"} |
Portability portable
Stability experimental
Maintainer bos@serpentine.com
Safe Haskell None
Fourier-related transformations of mathematical functions.
These functions are written for simplicity and correctness, not speed. If you need a fast FFT implementation for your application, you should strongly consider using a library of FFTW bindings
Type synonyms
Discrete cosine transform
Fast Fourier transform | {"url":"http://hackage.haskell.org/package/statistics-0.10.5.2/docs/Statistics-Transform.html","timestamp":"2014-04-21T07:33:20Z","content_type":null,"content_length":"8463","record_id":"<urn:uuid:6c2443e2-cd2e-4e98-95aa-7ac70d3816fc>","cc-path":"CC-MAIN-2014-15/segments/1397609539665.16/warc/CC-MAIN-20140416005219-00519-ip-10-147-4-33.ec2.internal.warc.gz"} |
Crossover 102 - Electronic Crossovers - Page 5
However, by definition, a Linkwitz/Riley filter circuit can only be even order, where a Butterworth can be either even or odd. There is also a very important difference in how the turnover frequency
or crossover point is defined. Butterworth filter networks are defined by the half-power or -3 dB down points in frequency response, whereas the Linkwitz/Riley filters are determined by the -6 dB
down points. I'll repeat this for you: Butterworth filters measure -3 dB down at the crossover frequency, while Linkwitz/Riley filters are -6 dB down at the turnover frequency. This very important fact is often not
understood by the typical sound provider. Even though a Butterworth and a Linkwitz/Riley filter set of the same order may share the same crossover point, they are going to use different value
components to arrive at the common frequency.
I believe it is now time to discuss the summation of these two filters at the crossover point. First we must define coincident and non-coincident signals. If two different signals are at the same
level and are not coincident (or starting from the same exact moment in time), then the most that they can add when summed together is +3 dB. This is also true if they are the same common frequency
but exhibit a significant difference in degree of phase angle. If however you have two absolutely coincident signals in both frequency and level, then they will sum to +6 dB when added or mixed
Butterworth filters when combined are said to have a smooth power response through the crossover region. Since the crossover frequency is defined as the point at which the spectrum is attenuated or
down -3 dB, the actual summation of the transducers is essentially flat when the power is averaged. There is still a little dip around the crossover frequency because the filters are not in phase
with each other at this frequency. Now there are some analog electronic crossovers that introduce some signal delay in one or more outputs. However the steps can be quite broad depending on the
chosen crossover frequency.
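A quick numerical check of the -3 dB versus -6 dB distinction, and of how an in-phase Linkwitz/Riley pair sums back to flat at the crossover point, can be done with SciPy; this example is an added illustration of the ideas above (the 1 kHz crossover frequency is arbitrary), not material from the original article.
import numpy as np
from scipy import signal

fc = 1000.0                      # crossover frequency in Hz (arbitrary choice)
w = 2 * np.pi * fc

def mag(b, a):
    _, h = signal.freqs(b, a, worN=[w])
    return h[0]

# 4th-order Butterworth low-pass: half power, about -3 dB, at fc
b4, a4 = signal.butter(4, w, btype="low", analog=True)
print("Butterworth LP at fc: %.2f dB" % (20 * np.log10(abs(mag(b4, a4)))))

# 4th-order Linkwitz-Riley = two cascaded 2nd-order Butterworths: -6 dB at fc
b2, a2 = signal.butter(2, w, btype="low", analog=True)
h_lp = mag(np.polymul(b2, b2), np.polymul(a2, a2))
b2h, a2h = signal.butter(2, w, btype="high", analog=True)
h_hp = mag(np.polymul(b2h, b2h), np.polymul(a2h, a2h))
print("Linkwitz-Riley LP at fc: %.2f dB" % (20 * np.log10(abs(h_lp))))
print("LP + HP summed at fc:    %.2f dB" % (20 * np.log10(abs(h_lp + h_hp))))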
Okay, how do I set up a variable electronic crossover?
First of all there is no magic crossover frequency point. The raw frequency response of each transducer or driver must first be examined, to ensure that the intended drivers can indeed effectively
reproduce the chosen range of frequencies. The second and most important consideration is to know the actual sensitivities of each of the component drivers in the system. The sensitivity of a
loudspeaker is the internationally accepted standard of 1 Watt @ 1 Meter. With one Watt of power sent to the driver, a measurement is made on axis at a distance of one Meter to determine how loud is
the sound pressure level (SPL). Once you know the sensitivity of the drivers you can then set the gains of the crossover properly.
Let's say that we have a three-way system, and the low frequency device can handle 500 watts continuous and produce an SPL of 100 dB (1W, 1M) with a frequency response of 45 Hz to 2 kHz (+/- 3 dB).
The mid frequency driver can also handle 500 Watts, but it has a sensitivity of 103 dB (1W, 1M) from 70 Hz to 2.5 kHz. The compression driver on a constant directivity high frequency horn can handle
80 Watts continuous, and has a mid-band efficiency of 112 dB from 800 Hz to 3.5 kHz, with -6 dB per octave roll off above 3.5 kHz.
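The arithmetic needed to line those three sections up is plain decibel bookkeeping. The short Python script below is an added illustration using the figures just quoted: it computes the maximum SPL each section can produce at its rated power and the attenuation each crossover output would need so that all three sections match the low-frequency section's sensitivity.
from math import log10

# sensitivity (dB SPL at 1 W / 1 m) and rated power (W) for the three sections described above
drivers = {"low": (100.0, 500.0), "mid": (103.0, 500.0), "high": (112.0, 80.0)}
ref = drivers["low"][0]

for name, (sens, power) in drivers.items():
    max_spl = sens + 10 * log10(power)                 # SPL at rated power, 1 m
    pad = sens - ref                                   # attenuation to match the LF section
    print(f"{name}: {max_spl:.1f} dB SPL at rated power, pad crossover output by {pad:.0f} dB")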
Page 1 | Page 2 | Page 3 | Page 4 | Page 5 | Page 6 | Page 7 | Page 8 | {"url":"http://peavey.com/support/technotes/processors/crossover102_e.cfm","timestamp":"2014-04-16T22:08:01Z","content_type":null,"content_length":"10547","record_id":"<urn:uuid:37be1b2b-5214-4be3-987d-8238564cbc2f>","cc-path":"CC-MAIN-2014-15/segments/1398223203422.8/warc/CC-MAIN-20140423032003-00310-ip-10-147-4-33.ec2.internal.warc.gz"} |
Centroid of a irregular polygon???????
February 19th 2006, 03:31 AM #1
Feb 2006
Centroid of a irregular polygon???????
Knowing the vertices of an IRREGULAR polygon (x1,y1), (x2,y2), (x3,y3), ... and so on, how do I find its centroid?
I tried to divide the polygon into different triangles with a common vertex [say (x1,y1)] and finding their centroids, but I couldn't go further...
Last edited by ragsk; February 19th 2006 at 11:42 PM.
That irregular polygon is a triangle, and the centroid of a triangle is the point where the medians intersect.
Centroid of a polygon with n sides
What I meant was how to find the centroid of an irregular polygon with n sides... I should have mentioned the vertices as (x1,y1), (x2,y2), (x3,y3), ..., (xn,yn). Sorry for the confusion.
I do not know the theorem from geometry which tells how to find centroid of a polygon with a compass and straightedge, if you can tell me I might be able to solve this problem.
One thing I know is that this gets extremely messy; I once did it for a triangle because I was proving a theorem and the algebraic mess was colossal. But that is only for a triangle; I do not know if it will get even more extreme over here, I hope not.
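For what it's worth, the area centroid of a general simple polygon can be computed directly from the vertex list with the standard shoelace-type formula; note that for more than three vertices it is generally not the plain average of the vertices, which is why the triangle-decomposition idea needs area weights. A small illustrative Python implementation (not from this thread):
def polygon_centroid(pts):
    # area centroid of a simple polygon from its ordered vertex list (shoelace formula)
    a = cx = cy = 0.0
    n = len(pts)
    for i in range(n):
        x0, y0 = pts[i]
        x1, y1 = pts[(i + 1) % n]
        cross = x0 * y1 - x1 * y0
        a += cross
        cx += (x0 + x1) * cross
        cy += (y0 + y1) * cross
    a *= 0.5
    return cx / (6 * a), cy / (6 * a)

print(polygon_centroid([(0, 0), (6, 0), (0, 6)]))          # (2.0, 2.0), the vertex average, as expected for a triangle
print(polygon_centroid([(0, 0), (4, 0), (5, 3), (1, 2)]))  # an irregular quadrilateral
This is equivalent to the triangle-decomposition idea in the first post, provided each triangle's centroid is weighted by its signed area.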
New York City | {"url":"http://mathhelpforum.com/pre-calculus/1929-centroid-irregular-polygon.html","timestamp":"2014-04-18T08:13:30Z","content_type":null,"content_length":"39054","record_id":"<urn:uuid:2362763b-0bae-4b6d-b10a-33685276440b>","cc-path":"CC-MAIN-2014-15/segments/1397609533121.28/warc/CC-MAIN-20140416005213-00326-ip-10-147-4-33.ec2.internal.warc.gz"} |
Puy, WA Geometry Tutor
Find a Puy, WA Geometry Tutor
...I like to use real-life examples and games whenever possible.I taught middle school and high school for seven years. For six of those seven years, I taught at least one Algebra 1 class. I
taught four Algebra 1 classes during my last year of full-time teaching.
16 Subjects: including geometry, ASVAB, GRE, algebra 1
...I am happy to help with many different math classes, from Elementary math to Calculus. I have helped my former classmates and my younger brother many times with Physics. I have been learning
French for more than 6 years.
16 Subjects: including geometry, chemistry, French, calculus
...Literally all aspects of mathematics use algebraic rules to accomplish computations. Regardless what field of study you are interested in mastering, you will find algebraic ideas and rules
intertwined inside. I've been tutoring advanced engineering and science topics for decades to students of all ages successfully.
45 Subjects: including geometry, chemistry, calculus, physics
...I formally took programming courses as a part of my MSISE Masters of Science in Computer Information Systems Engineering and gained real world experience while working for the US Government. I
have roughly 3 years of direct professional experience coding and designing user applications using C# specifically in concert with SQL server. I have been using MS project since 1995.
53 Subjects: including geometry, reading, English, ASVAB
...Furthermore, knowing the vernacular of certain groups allows for better interaction with people of a different culture or generation within one's social circle. Sometimes when I speak to a
friend using a metaphor, I have to explain what it means before he understand; that is one reason why peopl...
50 Subjects: including geometry, reading, chemistry, English
Related Puy, WA Tutors
Puy, WA Accounting Tutors
Puy, WA ACT Tutors
Puy, WA Algebra Tutors
Puy, WA Algebra 2 Tutors
Puy, WA Calculus Tutors
Puy, WA Geometry Tutors
Puy, WA Math Tutors
Puy, WA Prealgebra Tutors
Puy, WA Precalculus Tutors
Puy, WA SAT Tutors
Puy, WA SAT Math Tutors
Puy, WA Science Tutors
Puy, WA Statistics Tutors
Puy, WA Trigonometry Tutors
Nearby Cities With geometry Tutor
Bonney Lake geometry Tutors
Edgewood, WA geometry Tutors
Fife, WA geometry Tutors
Fircrest, WA geometry Tutors
Firwood, WA geometry Tutors
Gig Harbor geometry Tutors
Graham, WA geometry Tutors
Meeker, WA geometry Tutors
Milton, WA geometry Tutors
Normandy Park, WA geometry Tutors
Pacific, WA geometry Tutors
Puyallup geometry Tutors
Spanaway geometry Tutors
Sumner, WA geometry Tutors
Tacoma geometry Tutors | {"url":"http://www.purplemath.com/Puy_WA_Geometry_tutors.php","timestamp":"2014-04-20T23:35:28Z","content_type":null,"content_length":"23735","record_id":"<urn:uuid:d256a45f-263e-422f-b02d-885d5f7e0582>","cc-path":"CC-MAIN-2014-15/segments/1397609539337.22/warc/CC-MAIN-20140416005219-00186-ip-10-147-4-33.ec2.internal.warc.gz"} |
Homework Help
Posted by ruby on Monday, July 21, 2008 at 8:52pm.
-Suppose that a person is standing on the edge of a cliff 50 m high, so that the ball in his/her hand can fall to the base of the cliff.
(a)-How long does it take to reach the base of the cliff?
(b)- What is the total distance traveled by the ball?
Attempted Answers
(a) I got 40.2 s because I divided the total distance the ball covered (50 m) by the pull of gravity on the object, -9.8 m/s. However I feel like my answer is wrong, can someone please correct this
for me.
(b) The total distance traveled by the ball would remain to be -50 m because the ball was dropped down a cliff 50 m high, and there was no going up or down involved. I- think- this one is right
*** Also, i attempted to solve these questions mentally, however i don't know how to solve them step by step using equations, so can some one please demonnstrate this for me? Thanks for all the help.
• physics - GK, Monday, July 21, 2008 at 9:13pm
The equation is:
50 m = (1/2)(9.8m/s^2)(t^2)
(It is less confusing to use absolute values here-No negative signs)
t = sqrt[(2*50m)/(9.8m/s^2)] = sqrt(10.2s^2)
(t = much shorter than 40.2 s)
• question? - ruby, Monday, July 21, 2008 at 9:45pm
so, then to calculate the distance, the formula would be 50 m = (1/2)(9.8m/s^2)(t^2) and you would neglect the negative signs, but then instead of dividing 50m by -9.8m/s^2, how would you use that
formula to find the value of t? Sorry, i am just really confused regarding all of this. =( you see for these questions we were to use this equation--
Y = Yo + Vot + 1/2gt^2
But how would you plug that in? Thank you for all the help though, i really appreciate it. =)
• physics - Damon, Monday, July 21, 2008 at 9:50pm
Yo = 50
Vo = 0 (does not throw it up or down, just drops it)
(1/2) g = -4.9
Y = 0 at the bottom
0 = 50 - 4.9 t^2
4.9 t^2 = 50
t^2 = 10.2
t = 3.2 seconds
□ physics - GK, Monday, July 21, 2008 at 10:45pm
Yours (Damon's) is a more elegant solution than mine, since you are using a more general formula for vertical motion and correct algebraic signs. I assume the relationship used is:
Y = Yo + (1/2)gt^2, using the values you assigned. The zero point is the bottom of the cliff.
An equivalent formula would be:
Y = (1/2)gt^2, with:
Y = -50m, g = -9.8m/s^2
Since the displacement is traversed in the direction of motion, Y is negative. This time the zero point is the starting point.
How we choose a reference point makes a difference but should not affect the final answer if the algebraic signs are consistent with our choice.
• physics - ruby, Tuesday, July 22, 2008 at 6:12am
thanx =) for all your help!
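For completeness, the same computation in a few lines of Python, using the equation Y = Yo + Vot + (1/2)gt^2 discussed above:
from math import sqrt

g, y0, v0 = -9.8, 50.0, 0.0            # m/s^2, m, m/s (the ball is simply dropped)
t = sqrt(-2 * y0 / g)                  # solve 0 = y0 + v0*t + (1/2)*g*t^2 for t
print(f"time to reach the base of the cliff: {t:.2f} s")   # about 3.19 s
print(f"distance travelled: {y0:.0f} m straight down")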
Related Questions
Physics - A person standing at the edge of a seaside cliff kicks a stone ...
Physics - A person standing at the edge of a seaside cliff kicks a stone ...
physics - A person standing at the edge of a seaside cliff kicks a stone over ...
physics - A person standing at the edge of a seaside cliff kicks a stone over ...
physics - a person standing at the edge of a seaside cliff kicks a stone over ...
physics - A person standing at the edge of a seaside cliff kicks a stone over ...
physics - A person standing at the edge of a seaside cliff kicks a stone over ...
Physics - A person standing at the edge of a seaside cliff kicks a stone over ...
physics - A person standing at the edge of a seaside cliff kicks a stone over ...
physics - A person standing at the edge of a seaside cliff kicks a stone over ... | {"url":"http://www.jiskha.com/display.cgi?id=1216687948","timestamp":"2014-04-16T23:03:10Z","content_type":null,"content_length":"11176","record_id":"<urn:uuid:3f8680f3-6eb9-4b43-b7f5-2aef6c0fe352>","cc-path":"CC-MAIN-2014-15/segments/1397609525991.2/warc/CC-MAIN-20140416005205-00142-ip-10-147-4-33.ec2.internal.warc.gz"} |
Span of a vector space
August 25th 2009, 12:55 AM #1
Mar 2008
Span of a vector space
Here is a question I got for one of my exercises and I'm not too sure how I would approach such a problem.
Is it possible to span the space of n×n matrices M(n,n) using the powers
of a single matrix A, i.e. I, A, A^2, . . ., A^n, . . .?
The first thing I thought about was whether there were matrices with the property that, as you keep multiplying the matrix by itself, you get another matrix independent of the previous ones, but that turned out to be confusing. I also tried thinking about the opposite: whether all matrices eventually become linearly dependent at some point, therefore disproving that there exist n*n linearly independent matrices and hence proving that it is not possible to span the space. It does seem a bit complex though doing it the way I was thinking about, so any insight into how I can solve this problem more simply would be great.
Taking complex matrices makes it easier to understand. Any complex matrix is similar to an upper triangular one, so WLOG you may assume that you started with an upper triangular one (why?).
Any power of an upper triangular matrix is also upper triangular... so all the powers will at most span only upper triangular matrices, leaving out the rest.
For real matrices you may argue the same way using Jordan forms. In this case too the answer seems to be 'no'.
looking at it another way ...
if powers of a matrix spanned the whole space, then the smallest degree of a polynomial which is satisfied by that matrix would be at least $n^2$ ... but you can see by the characteristic
eqn that there must be a polynomial of degree n which is satisfied by that matrix.
looking at it another way ...
if powers of a matrix spanned the whole space, then the smallest degree of a polynomial which is satisfied by that matrix would be at least $n^2$ ... but you can see by the characteristic
eqn that there must be a polynomial of degree n which is satisfied by that matrix.
can u elaborate on this please. i dont understand what u mean
Absolutely not. If it were true, then every matrix B would be a polynomial in A. That implies that matrix multiplication is commutative, which is obviously impossible.
consider $A, A^2, A^3, ..., A^{n^2}, A^{{n^2}+1}, ...$ if this set spans the full space then you must get ${n^2}$ lin independent ones among these. the highest power occuring in such a set must
be at least ${n^2}$ if you try to write an expression of the form $a_1.A + a_2.A^2 + a_3. A^3 + ... + a_i.A^{i}$, this sum would never be zero unless you take $i$ to be atleast ${n^2}$ by the
earlier argument of lin independence.
so the minimal polynomial of this matrix has degree at least ${n^2}$, contradicting the fact that minimal poly cannot be of degree higher than ${n}$.
which step you do not follow ??
Last edited by nirax; August 25th 2009 at 02:01 AM. Reason: clarifying
consider $A, A^2, A^3, ..., A^{n^2}, A^{{n^2}+1}, ...$ if this set spans the full space then you must get ${n^2}$ lin independent ones among these. the highest power occuring in such a set must
be at least ${n^2}$ if you try to write an expression of the form $a_1.A + a_2.A^2 + a_3. A^3 + ... + a_i.A^{i}$, this sum would never be zero unless you take $i$ to be atleast ${n^2}$ by the
earlier argument of lin independence.
so the minimal polynomial of this matrix has degree at least ${n^2}$, contradicting the fact that minimal poly cannot be of degree higher than ${n}$.
which step you do not follow ??
Why is it the case that the minimal polynomial cannot be of degree higher than n? If you look in my OP you would see that I wrote:
"Is it possible to span the space of n×n matrices M(n,n) using the powers
of a single matrix A, i.e. I, A, A^2, . . ., A^n, . . .?"
so it is not limited to n
By the Cayley-Hamilton Theorem, the characteristic polynomial of any matrix A is an annihilator of A. And any annihilator of A is a multiple of the minimal polynomial. But the degree of the characteristic polynomial of A is n, which implies that the degree of the minimal polynomial cannot be greater than n.
i see. thanks for that, never seen that theorem before.
Actually, you can refer #5, which is much simpler..
thanks.. figured it out
Last edited by ah-bee; August 25th 2009 at 03:32 AM.
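A quick numerical illustration of the answers above (my addition, not part of the thread): vectorize the powers of a random matrix and compute the rank of the subspace they span; by Cayley-Hamilton it can never exceed n, far short of the n^2 needed to span M(n,n).
import numpy as np

rng = np.random.default_rng(0)
n = 4
A = rng.standard_normal((n, n))

# flatten I, A, A^2, ..., A^(n^2 - 1) into vectors of length n^2 and measure their span
powers = [np.linalg.matrix_power(A, k).ravel() for k in range(n * n)]
print("rank of span{I, A, ..., A^(n^2-1)}:", np.linalg.matrix_rank(np.array(powers)))
# prints 4 (= n) for a generic matrix, far short of the n^2 = 16 needed to span M(n,n)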
Mar 2008 | {"url":"http://mathhelpforum.com/advanced-algebra/99157-span-vector-space.html","timestamp":"2014-04-17T14:58:03Z","content_type":null,"content_length":"59462","record_id":"<urn:uuid:a3ce6e42-e714-419f-9375-ee9b70d6f6cb>","cc-path":"CC-MAIN-2014-15/segments/1397609530131.27/warc/CC-MAIN-20140416005210-00201-ip-10-147-4-33.ec2.internal.warc.gz"} |
A. Confined homopolymer melts
B. Confined diblock copolymer melts
A. Choice of
B. Segmental distributions
A. Surface-induced compatabilization
B. Surface-induced entropy loss
C. Influence of on phase behavior
A. Hard-surface effects
B. Effectively neutral surface | {"url":"http://scitation.aip.org/content/aip/journal/jcp/126/23/10.1063/1.2740633","timestamp":"2014-04-17T07:43:03Z","content_type":null,"content_length":"119490","record_id":"<urn:uuid:3064672a-d2bb-49aa-a2cb-36bc701d5151>","cc-path":"CC-MAIN-2014-15/segments/1397609526311.33/warc/CC-MAIN-20140416005206-00499-ip-10-147-4-33.ec2.internal.warc.gz"} |
Hitting time for two out of three random walk particles
up vote 8 down vote favorite
I'm imagining a simple random walk on $\mathbb{Z}$ with three independent particles (maybe add laziness so they don't jump over each other). Suppose the particles are initially placed at, say, $-10$,
$0$ and $10$. The distance between any two particles evolves like a random walk, so the expected hitting time for those two specific particles is infinite. Now, if we require only that any two of the
three particles hit, is the expected hitting time still infinite?
pr.probability random-walk
3 Answers
The answer is the same for random walks and for Brownian motion.
If you project a $3$-dimensional Brownian motion perpendicular to $x=y=z$, you get a $2$-dimensional Brownian motion. The projection of the set where $x\lt y\lt z$ is a wedge of angle
$\frac{\pi}{3}$. Your question is whether the first exit time from a $\frac{\pi}{3}$ wedge has finite expected value. This question and more is answered in Spitzer (1958) Some theorems
concerning $2$-dimensional Brownian motion. Trans. Amer. Math. Society 87 pp. 187-197.
In that paper, theorem $2$ on page $192$ is that in a wedge of angle $\beta$, the $\delta$ power of the first exit time has finite expected value iff $2\delta \beta \lt \pi$. In your case, $\frac{2 \pi}{3} \lt \pi$ so the expected value is finite.
I believe you can also check this by the reflection principle, since the probability that you remain in a wedge of angle $\frac{\pi}{3}$ is a finite alternating sum of $6$ densities,
$3$ positive and $3$ negative. You should be able to estimate the value in terms of derivatives of the normal density. However, Spitzer's work is worth checking out.
That's perfect - thanks :) – Eric Foxall Feb 2 at 6:33
Suppose the particles start at integers $x,y,z$ with $x \leq y \leq z$ and $x \equiv y \equiv z \bmod 2$ (so they can actually hit rather than pass through each other). Then if I did this
right the expected hitting time $f(x,y,z)$ is given simply by $$ f(x,y,z) = (z-y) (y-x). $$ For example, for $(x,y,z) = (-10,0,10)$ the expected stopping time is exactly $100$. Here's some
gp code that tries this $10^4$ times and sums the lengths of the resulting random walks:
try(v0, v, n) =
{
  v = v0;
  n = 0;
  while((v[1]<v[2]) && (v[2]<v[3]), for(i=1,3, v[i]+=2*random(2)-1); n++);
  return(n)
}
It should take a few seconds to compute a sum that's within a few percent of $10^6$.
Note that the function $f(x,y,z) = (z-y)(y-x)$ satisfies the necessary conditions of being zero for $x=y$ or $y=z$, positive for $x<y<z$, invariant under $(x,y,z) \mapsto (x+t,y+t,z+t)$,
and equal to $1$ plus the average of the $8$ values $f(x \pm 1, y \pm 1, z \pm 1)$. But this does not quite determine $f$ uniquely, because $(z-y)(y-x) + c(z-x)(z-y)(y-x)$ also works for
any $c>0$; so one must do some work to verify that $(z-y)(y-x)$ is the right function.
Let $\tau$ be the first meeting time for random walks started at the even points $x<y<z$ and let $M_n=f(X_n,Y_n,Z_n)+n$. Then $M_{\tau \wedge n}$ is a Martingale so has mean $M_0=f(x,y,z)$ for any $n$. Therefore $E(\tau \wedge n) \le f(x,y,z)$ so also $E(\tau) \le f(x,y,z)$. Thus $g(x,y,z)=E(\tau)$ satisfies $g(x,y,z) \le f(x,y,z)$ in addition to the properties specified by Noam and this forces $g=f$. – Yuval Peres Feb 16 at 3:48
Expanding the comment above to a complete proof of Noam Elkies' formula:
Let $\tau$ be the first meeting time for independent simple random walks $X_t, Y_t$ and $Z_t$ started at the even points $x<y<z$. Write $f(x,y,z)= (z-y)(y-x)$. Claim: $ E(\tau)=f(x,y,z)$.
Proof: let $F_n=f(X_{\tau \wedge n},Y_{\tau \wedge n},Z_{\tau \wedge n})$ and $M_n:=F_n+{\tau \wedge n}$. It is easy to verify that $M_n$ is a Martingale so $f(x,y,z)=M_0=E(F_n+\tau\wedge n)$ for any $n$. Therefore $E(\tau) \le f(x,y,z)$. It remains to verify that $\lim_n E(F_n)=0$. Clearly $F_n \to 0$ a.s. so it suffices to verify that $\max_n F_n$ is integrable. For this we will use the linear martingale $L_n=(Z_n-X_n)/2$ and the quadratic martingale $Q_n=F_n+ 2 L_n^2$. The first is a lazy SRW on the integers. The AMGM inequality gives $4f(x,y,z)\le (z-x)^2$, so $E (\max_{n \le t} F_n) \le E(\max_{n \le t} L_n^2) \le 4 E(L_t^2)$ by Doob's $L^2$ maximal inequality. But $4E(L_t^2) \le 2E(Q_t)=2Q_0$ for every $t$. We conclude that $E (\max_n F_n) \le 2Q_0$.
Not the answer you're looking for? Browse other questions tagged pr.probability random-walk or ask your own question. | {"url":"http://mathoverflow.net/questions/156412/hitting-time-for-two-out-of-three-random-walk-particles","timestamp":"2014-04-21T12:39:43Z","content_type":null,"content_length":"61961","record_id":"<urn:uuid:9359a2b1-e8ee-454d-b21a-a5cb31ca10a3>","cc-path":"CC-MAIN-2014-15/segments/1397609539776.45/warc/CC-MAIN-20140416005219-00249-ip-10-147-4-33.ec2.internal.warc.gz"} |
Title: Three-level BDDC in Two Dimensions
Author: Xuemin Tu
BDDC methods are nonoverlapping iterative substructuring domain decomposition
methods for the solutions of large sparse linear algebraic systems arising from discretization
of elliptic boundary value problems. They are similar to the balancing Neumann-Neumann algorithm.
However, in BDDC methods, a small number of continuity constraints are enforced across the interface,
and these constraints form a new coarse, global component. An important advantage of using such
constraints is that the Schur complements that arise in the computation willa ll be strictly positive
definite. The coarse problem is generated and factored by a direct solver at the beginning of the
computation. However, this problem can ultimately become a bottleneck, if the number of subdomains
is very large. In this paper, two three-level BDDC methods are introduced for solving the coarse
problem approximately in two dimensional space, while still maintaining a good convergence rate.
Estimates of the condition numbers are provided for the two three-level BDDC methods and numerical
experiments are also discussed. | {"url":"http://cs.nyu.edu/web/Research/TechReports/TR2004-856/TR2004-856.html","timestamp":"2014-04-21T09:50:01Z","content_type":null,"content_length":"1416","record_id":"<urn:uuid:f0270656-1b9f-45e8-987c-cda39036e69e>","cc-path":"CC-MAIN-2014-15/segments/1398223205375.6/warc/CC-MAIN-20140423032005-00591-ip-10-147-4-33.ec2.internal.warc.gz"} |
Is $R=k[x_1,\ldots]\to k[[x_1,\ldots]]$ a flat morphism? What about $R\to\hat{R}$?
up vote 10 down vote favorite
Let $k$ be a field. For $R=k[x_1,\ldots]$ with countably infinite number of variables, [due to the discussion in the comments] we have to make the following distinction between $k[[x_1,\ldots]]$ and
the completion $\hat{R}$ of $R$ at the ideal $(x_1,\ldots)$: note that the former admits elements which can have infinitely many monomials of the same degree whereas the latter can not (e.g. $\sum_i
x_i\in k[[x_1,\ldots]]$ but $\notin\hat{R}$). There are two questions
1) Is the morphism $R\to\hat{R}$ flat? If $R$ were any Noetherian ring, the map $R\to \hat{R}$ to its completion is always flat (see e.g. Atiyah-MacDonald 10.14) but my intuition breaks down for
non-Noetherian rings.
2) Is the morphism $R\to k[[x_1,\ldots]]$ flat? This is answered positively by ayanta below.
3 Just a comment: the situation you are considering is very bad. Not only the ring is not noetherian, but the ideal $m$ that you complete with respect to is not finitely generated. In this case,
assuming $k$ is a field, the $m$-adic completion of the left hand side will not be $m$-adically complete! – Liran Shaul Jan 2 '13 at 19:08
3 The question should be written inside the message as well, not only in the title (and use quantifiers. what is $k$? finitely many indeterminates?). See mathoverflow.net/questions/ask. – Yves
Cornulier Jan 2 '13 at 20:18
3 I'm a bit confused about your second statement if you wouldn't mind elaborating. I am taking the completion of $k[x_1,\ldots]$ along the ideal $(x_1,\ldots)$ which is nothing other than $k[[x_1,\
ldots]]$. – Frank Jan 2 '13 at 22:11
6 Right, I was not aware of this difference between the definition of the formal power series in infinitely many variables (i.e. which contains $\sum x_i$) as opposed to the completion of $k[x_1,\
ldots]$ at $(x_1,\ldots)$ (which does not). Now that this is clear, I suppose it's not clear whether either of the two morphisms are flat! – Frank Jan 2 '13 at 22:28
@unknown (google): I'm not sure what "coherent regular ring" means (it cannot be "coherent ring that is regular", since regularity of a commutative ring includes a noetherian condition in my
1 experience), but valuation rings of complete rank-1 valued fields are coherent as rings, so for example the valuation ring of $\mathbf{C}_p$ is of the type you indicate. But such examples feel a
bit removed from the nature of the question that is posed. – user30180 Jan 3 '13 at 6:42
1 Answer
Let's suppose by $k[[x]]$ we mean the formal power series ring in variables $x_1, x_2, \dots$ which is literally the space of sequences of monomials that individually involve only finitely
many variables per monomial (so set-theoretically a direct product of copies of $k$ indexed by such monomials, with a "cofinite" topology). This is of course different from the $(x)$-adic
completion of $k[x] := k[x_1,x_2,\dots]$ as noted by Francois, since the latter has as a cofinal system of discrete quotients the rings $k[x]/(x)^m$ of infinite $k$-dimension whereas the
former has as a cofinal system of discrete quotients the artinian $k[x_1,\dots,x_r]/(x_1,\dots,x_r)^m$ of finite $k$-dimension.
I claim that $k[[x]]$ in the sense I have specified is flat over $k[x]$ (though I also think this is probably completely useless and so I don't claim this is interesting -- maybe just
amusing). The key input is buried near the end of volume 1 of SGA3. These methods have no relevance to the $(x)$-adic completion of $k[x]$ (which is an entirely different beast than $k[[x]]$
as defined above).
First, some preliminary reductions. We have to show that if $I$ is a finitely generated ideal of $k[x]$ then the injection $I \rightarrow k[x]$ remains injective after tensoring against $k
[[x]]$ over $k[x]$. By finite generation, $I$ "comes from" an ideal $I' \subset k[x_1,\dots,x_r]$ for some $r$, and more specifically the natural map $$I' \otimes_{k[x_1,\dots,x_r]} k[x] \
rightarrow k[x]$$ is injective since $k[x]$ is certainly flat (even free) over $k[x_1,\dots,x_r]$. So in fact $$I = I' \otimes_{k[x_1,\dots,x_r]} k[x],$$ and hence our problem is to show
that the injection $I' \rightarrow k[x_1,\dots,x_r]$ remains injective after tensoring over $k[x_1,\dots,x_r]$ against $k[[x]]$. More specifically, we claim this latter ring map is flat.
This final scalar extension process decomposes as a composition of two scalar extensions: $$k[x_1,\dots,x_r] \rightarrow k[[x_1,\dots,x_r]] \rightarrow k[[x]].$$ Since the first step is
known to be flat by usual commutative algebra with noetherian ring, we're reduced to proving flatness of the second map. But this is a special case of the Gabriel-Grothendieck theory of
pseudo-compact rings in SGA3, in which they systematically develop a good theory of "pseudo-compact modules" and "topological flatness" for "pseudo-compact rings", which are topological rings that are arbitrary inverse limits of artinian rings. This theory includes as a key ingredient a relationship between topological flatness and ordinary flatness when the base ring is noetherian (analogous to completions in the noetherian setting, but logically requiring more work).
More specifically, since $A := k[[x_1,\dots,x_r]]$ is noetherian, so any finitely generated $A$-module is finitely presented (and is pseudo-compact for its max-adic topology), for any
finitely generated $A$-module $M$ and pseudo-compact $A$-algebra $A'$ the natural map $$M \otimes_A A' \rightarrow M \widehat{\otimes}_A A'$$ is bijective (ultimately because the left side
is a cokernel of a map between finite free $A'$-modules and any such map automatically has closed image by a variant of Artin-Rees proved in SGA3). Thus, the preservation of injectivity of
the left as a functor in finitely generated $M$ (which is equivalent to $A$-flatness of $A'$) is reduced to topological flatness of $A'$ over $A$.
Note that one can "distribute" formal power series over other formal power series when extracting out a finite set of variables into the coefficients over infinitely many variables (think
for a minute, using our running definition of "formal power series" for a possibly infinite set of variables). Thus, in our case of interest $A' = k[[x]]$ is a formal power series ring over
$A = k[[x_1,\dots,x_r]]$ in infinitely many variables. Thus, we're finally reduced to the question: if $A$ is a pseudo-compact ring (such as $k[[x_1,\dots,x_r]]$) then is $A[[y_1,\dots]]$
topologically flat over $A$? The answer is "yes" because such formal power series rings (in the sense of our running definition) are "topologically free", and a basic fact in the theory is
that topological freeness (suitably defined...) implies topological flatness.
add comment
Not the answer you're looking for? Browse other questions tagged ac.commutative-algebra or ask your own question. | {"url":"http://mathoverflow.net/questions/117890/is-r-kx-1-ldots-to-kx-1-ldots-a-flat-morphism-what-about-r-to-hatr/117920","timestamp":"2014-04-20T06:22:25Z","content_type":null,"content_length":"62363","record_id":"<urn:uuid:43b0237f-0aeb-4f7e-b1b2-2ed04a0694b5>","cc-path":"CC-MAIN-2014-15/segments/1397609538022.19/warc/CC-MAIN-20140416005218-00386-ip-10-147-4-33.ec2.internal.warc.gz"} |
Revista mexicana de ingeniería química
Print version ISSN 1665-2738
Rev. Mex. Ing. Quím vol.10 no.2 México Aug. 2011
Simulación y control
Differential algebraic estimator for the monitoring of a class of partially known bioreactor models
Estimador algebraico diferencial para el monitoreo de una clase de biorreactores con modelos parcialmente conocidos
J. L. Mata Machuca^1, R. Martínez Guerra^1 and R. Aguilar López^2*
^1 Departamento de Control Automático CINVESTAV IPN.
^2 Departamento de Biotecnología y Bioingeniería CINVESTAV IPN . Av. Instituto Politécnico Nacional No. 2508, San Pedro Zacatenco, México, D.F. C.P 07360. *Corresponding author. E mail:
raguilar@cinvestav.mx Tel. + 52 55 5747 3800
Received March 10, 2010.
Accepted May 13, 2011.
The problem of monitoring in a common class of partially known bioreactor models is addressed. A reduced order observer, namely a differential algebraic estimator, is proposed. The biomass is estimated by means of substrate concentration measurements. The estimation methodology is based on a suitable change of variable which allows generating artificial variables to infer the remaining mass concentrations, constructing a differential algebraic structure. The proposed methodology is applied with success to a class of Haldane unstructured kinetic models. Stability analysis in a Lyapunov
sense for the estimation error is performed. Some remarks about the convergence characteristics of the proposed estimator are given and numerical simulations show its satisfactory performance.
Finally, for comparison purposes, a high gain observer is presented: the convergence is possible only when the model is perfectly known.
Keywords: differential algebraic estimator, state variable estimation, continuous bioreactor, Haldane kinetics.
En el presente trabajo se considera el problema del monitoreo de una clase de biorreactores con modelos parcialmente conocidos. Se propone un tipo de observador de orden reducido denominado estimador
diferencial algebraico. La metodología de estimación se basa en un cambio de variables que permite generar variables artificiales para inferir las concentraciones no medibles. La metodología
propuesta es aplicada a un modelo cinético no estructurado de Haldane con éxito. Se efectúa un análisis de Lyapunov para demostrar la estabilidad de la metodología considerada. Algunos comentarios
sobre las características de la convergencia del estimador son proporcionados y simulaciones numéricas muestran un desempeño satisfactorio. Finalmente, con propósitos de comparación, un observador de
alta ganancia se presenta en donde su convergencia se garantiza solo cuando se conoce perfectamente el modelo del sistema bajo estudio.
Palabras clave: estimador algebraico diferencial, estimación de variables de estado, biorreactor continuo, cinética de Haldane.
1 Introduction
Operating a bioreactor is not a simple task, as during a bioreacting process, variables such as concentrations are generally determined by off line laboratory analysis, making this set of variables
of limited use for control purposes and on line monitoring. However, these variables can be on line estimated using soft sensors.
Over the last few years, the importance of on line monitoring of biotechnological processes has increased. A first step to efficient bioreactor operation is the adequate implementation of online
measurements of essential variables such as substrate and biomass concentrations. Advantages of continuous monitoring of key variables include gaining knowledge about the state of the process and the
possibility of detecting and isolating abnormal process developments at early stages. This reduces process costs, contributes to process safety and helps in trouble shooting and process
accommodation. The main problem in fermentation monitoring and control is the fact that process variables usually cannot be measured on line. Monitoring and controlling these processes can therefore
be difficult because only indirect measurements are available online, while calculated values may be rather uncertain. This can be due to uncertainty with respect to the equations used, measurement
errors or both. For automatic control this may have serious consequences, especially as the actual variables of interest often cannot be directly controlled and related variables are controlled
instead. In fermentation processes, on line and offline measurements are the main source of information about the state of the process. In combination with model based calculations, they are used to
produce estimations for monitoring purposes as well as for automatic and manual process control (Bastin and Dochain, 1990), (Masoud, 1997).
Observation schemes are widely used for reconstructing states of dynamical systems (Aguilar López et al., 2006). Most of the contributions are related to asymptotic observers for monitoring, fault
detections and control issues whereas the real necessities of industrial plants are related to a fast response of the monitoring and regulation methodologies.
Special attention was given to filtering techniques, namely extended Kalman filter, adaptive observers, and artificial neural networks (ANN), (Dávila and Fridman, 2005), (Hu and Wang, 2002), (Levant,
2001), however for these techniques the right tuning of the estimators gains is difficult. It is shown that software based state estimation is a powerful technique that can be successfully used to
enhance automatic control performance of biological systems as well as in system monitoring and on line optimization.
In this paper we consider the growth rate partially known. Following this idea, the necessity to adapt an observation scheme to the available knowledge of the growth rate immediately arises. The main
contribution in this work is to show a state estimator which is a simplified version of the methodology given by (Lemesle and Gouzé, 2005) where a simple linear change of variable given in a natural
manner allows us to develop a differential algebraic state estimator. Results show an adequate performance of the considered methodology. The technique is not the same as (Alvarez Ramirez et al., 1999) since we do not use differentiators. The proposed estimation methodology is applied to a kind of unstructured kinetic model: the Haldane model, which is considered for biological processes with substrate
inhibition. The above mentioned kinetic model is applied to a class of continuous stirred bioreactors.
In what follows, the statement of the problem is presented; an observability condition is given in the differential algebraic setting. In section 3, the bounded error estimator is designed. Section 4
shows a high gain observer as a comparison with the proposed methodology. Finally, we give some concluding remarks.
2 Problem statement
2.1 The model
Consider the following nonlinear system
where x ∈ R^n, u ∈ R^m, m < n, y ∈ R^p.
Let us recall the classical observer definition. An observer for system (1) is a dynamical system, driven by u and y, whose task is state estimation. Usually it is required at least that the estimation error tends to 0 as t → ∞, although in some cases exponential convergence is also required (Gauthier et al., 1992).
Definition 1: an estimator is said to be bounded if the norm of the estimation error belongs to an open ball with radius proportional to some value that depends on its estimation error.
Throughout the paper, we will consider a class of bioreactor models. The simplified Haldane model taken from (Vargas et al., 2000) is described by
where μ(S) = μ[max]S / (δ + S + S^2/φ) is the specific growth rate and μ[max]is the maximum growth rate.
We assume that μ(S) is partially known, which is common in biology (Gouzé and Lemesle, 2001). Generally, μ(S) is between two bounds, meaning that we know a function μ̂(S) such that |μ(S) - μ̂(S)| < a, where a ∈ R^+, and μ(0) = μ̂(0) = 0. We introduce an important lemma about the lower bounded properties of μ(S).
Lemma 1 (Hadj Sadok, 1999): there exists a constant ε ∈ R such that S(0) > ε implies S(t) > ε for all t. Thus, for any smooth function μ(S), μ(S(t)) > μ(ε) for all t.
From Lemma 1, we can always choose ε such that μ(S(t)) > μ(ε) = r, where r ∈ R^+.
The state variables S, X are the substrate and biomass concentrations, respectively, D = q/V is the dilution rate with V the volume of the bioreactor and q the constant flow passing through the
bioreactor, S[in]is the input substrate concentration, Y[S/X]is the corresponding yield coefficient. Let us notice that the inputs D = u and S[in] are fixed. Moreover, we assume that the measured
output is the substrate concentration, y = S.
2.2 Algebraic Observability Condition (AOC)
Before proposing the bounded error estimator, a definition concerning on algebraic observability condition is given, for more details see (Diop and Martínez Guerra, 2001).
Definition 2: consider the system described by (1), where x = ( x[1] x[2] ... x[n] )^T. A state x[i], i = {1, 2, ... ,n}, is said to be algebraically observable with respect to {u,y} if it satisfies
a differential polynomial in terms of u, y and some of their time derivatives, i.e., P(x[i], u, ..., y, ...) = 0, i ∈ {1, 2, ..., n}.
Replacing y = S into Eq. (2a), the algebraic observability condition for Haldane model is calculated as follows,
From Eq. (4), it is clear that the state variable X satisfies the AOC thus, X is algebraically observable.
3 Bounded error estimator
3.1 Estimator design
In what follows, the corresponding estimated concentrations are denoted by a hat (^), and we assume that S is measured exactly, i.e., the measured value coincides with S.
Consider the Haldane's model given by system (2), and make the change of variable
where k is fixed.
The dynamics of z is
Proposition 1: if we choose the estimator's gain such that Y[S/X] < k ≤ 1 + D/k[d] and |μ(S) - μ̂(S)| < a, a ∈ R^+, then the system (7) is a bounded error estimator of (6).
For the proof, define the estimation error,
Then, using eqs. (6) and (7) the estimation error dynamic is obtained as
To analyze the stability of Eq. (9) we consider the following Lyapunov function candidate
The time derivative of Eq. (10) is
Replacing (9) into (11) yields
Equation (12) is written alternatively as
Now, from Lemma 1 and taking into account that Y[S/X] < k ≤ 1 + D/k[d] and |μ(S) - μ̂(S)| < a, and that X is bounded, Eq. (13) leads to,
The right hand side of the foregoing inequality is not negative since, near the origin, the positive linear term w|e| dominates the negative quadratic term λe^2; however, the derivative is negative outside the ball {|e| ≤ w/λ}. Let c, ε be some upper bounds for V(e). With c > w^2/(2λ^2), solutions starting in the set {V(e) ≤ c} will remain therein for all time because the derivative of V is negative on the boundary V = c. Hence, the solutions of Eq. (9) are uniformly bounded (Khalil, 2002). Moreover, if w^2/(2λ^2) < ε < c, then in the set {ε ≤ V ≤ c} the function V decreases monotonically until the solution enters the set {V ≤ ε}. From that time on, the solution cannot leave the set {V ≤ ε} since the derivative of V is negative on the boundary V = ε. According to (Khalil, 2002), the solution is uniformly ultimately bounded with ultimate bound |e| ≤ √(2ε). For instance, defining c and ε as follows
the ultimate bound is, |e| ≤
Corollary 1: if the growth rate is perfectly known, i.e., μ(S) = μ̂(S), and we choose the estimator's gain such that Y[S/X] < k ≤ 1 + D/k[d], then the system (14) is an asymptotic estimator of (6).
Indeed, the dynamics of the error in this case is
and the corresponding time derivative of Lyapunov function candidate (10) is
Moreover, X can be reconstructed considering
3.2 Numerical simulations
For all simulations in this paper we take S[in] = 50, D = 0.1, Y[S/X] = 0.9, k[d] = 0.01 and the initial conditions S(0) = 60, X(0) = 40 (Vargas et al., 2000). The estimator's gain is k = 1. The growth
rates are chosen as
when the model is well known for the asymptotic estimator and when the model is partially known for the bounded error estimator, respectively. The simulations results were carried out with the help
of Matlab 7.1 Software with Simulink 6.3 as the toolbox.
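Since Eqs. (2) and (5)-(7) appear only as images in this copy, the following Python sketch is offered with caveats: it assumes the standard chemostat mass balances dX/dt = (μ(S) - D - k[d])X and dS/dt = D(S[in] - S) - Y[S/X] μ(S)X, which is the usual form for this class of models, and the Haldane constants μ[max], δ, φ are placeholders rather than the authors' values. It reproduces only the plant simulation setup of this subsection plus a consistency check of the algebraic observability relation, not the estimator itself.
from scipy.integrate import solve_ivp

Sin, D, Y, kd = 50.0, 0.1, 0.9, 0.01          # parameters quoted in the text
mu_max, delta, phi = 0.5, 5.0, 20.0           # assumed Haldane constants (placeholders)

def mu(S):
    return mu_max * S / (delta + S + S**2 / phi)

def plant(t, x):                              # assumed mass balances, see note above
    S, X = x
    return [D * (Sin - S) - Y * mu(S) * X, (mu(S) - D - kd) * X]

sol = solve_ivp(plant, (0.0, 100.0), [60.0, 40.0])   # S(0) = 60, X(0) = 40 as in the text
print("final (S, X):", sol.y[:, -1])

# Consistency check of the algebraic observability idea of Section 2.2: with this assumed
# mass balance, X = (D*(Sin - S) - dS/dt) / (Y*mu(S)); here dS/dt is model-generated,
# in practice it would be estimated from the measured substrate signal.
S_end, X_end = sol.y[:, -1]
dS_end = plant(0.0, sol.y[:, -1])[0]
print("X recovered algebraically:", (D * (Sin - S_end) - dS_end) / (Y * mu(S_end)), "vs", X_end)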
The performance index of the corresponding estimation process is calculated as (Martínez Guerra et al., 2000)
where e(t) is the corresponding state estimation error (the difference between the actual observed signal and its estimate).
First, in Fig. 1 we show the simulation results for the bounded error estimator given by proposition 1, and the corresponding results for the asymptotic estimator given by corollary 1 (without any
noise in the system output). Furthermore, in Fig. 2 is shown the effect of noise in the estimation process. A white noise is added in the measurement (σ= 0.1, ±10% around the current value of the
measured output); this accounts for the corresponding measurement error of the sensor and/or experimental measurement technique. We can observe that the bounded error estimator is
robust against noisy measurement. Finally, in Fig. 3 is illustrated the performance index given by (16) for the corresponding estimation process. It should be noted that the quadratic estimation
error (performance index) is bounded on average and has a tendency to decrease.
4 A note on full order observers: The high gain observer
4.1 Observer design
Consider that system (1) satisfies the AOC. In this case, to estimate the state space vector x, we can suggest a nonlinear high gain observer (Gauthier et al., 1992), (Martínez Guerra et al., 2000) with
the following structure,
where the observer's gain matrix is given by,
and the positive parameter θ determines the desired convergence velocity. Moreover, S[θ] > 0, S[θ] =
As shown by (Gauthier et al., 1992), (Martínez Guerra and de Leon Morales, 1996), under certain technical assumptions (Lipschitz conditions for the nonlinear functions under consideration) this
nonlinear observer has an arbitrary exponential decay for any initial conditions. We obtain the following high order observer for the system (2) applying the observation scheme (17),
4.2 Simulations
In the same way, we show two simulations: when the model is well known and when the model is partially known. The initial conditions for the observer are 30, with appropriate units. The estimator's gain is θ = 2. The simulation results of the high gain observer are presented in figs. 4 and 5. In Fig. 4, without any noise in the system output, when the model is perfectly known the rate of convergence is fast; on the other hand, when the model is partially known the observer does not reconstruct the state variables. In Fig. 5, we studied the effect of noise in the measurement (white noise with σ = 0.1, +5% around the current value of the measured output); we can see that the high gain observer is very sensitive to the noise in the system output. Fig. 6 shows the performance index. It should be noted that this observer only reconstructs the state variables when the model is well known.
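Because the gain matrix of the observation scheme (17) is likewise not reproduced here, the sketch below shows only the textbook Gauthier-Hammouri-Othman construction for a generic two-state system in observability form with the first state measured (gain vector [2θ, θ^2]); it is intended to illustrate the role of θ and of the output-injection term, not to reproduce the authors' observer for the Haldane model.
import numpy as np
from scipy.integrate import solve_ivp

theta = 2.0                                  # same observer gain parameter as in the text
K = np.array([2 * theta, theta**2])          # S_theta^{-1} C^T for the 2-state canonical form

def f(x):
    # placeholder drift in observability form: x1' = x2, x2' = -x1 - x2
    return np.array([x[1], -x[0] - x[1]])

def coupled(t, z):
    x, xh = z[:2], z[2:]
    y = x[0]                                 # measured output
    return np.concatenate([f(x), f(xh) + K * (y - xh[0])])   # plant and observer copy

sol = solve_ivp(coupled, (0.0, 20.0), [1.0, 0.0, -2.0, 3.0], max_step=0.01)
print("final estimation error:", sol.y[2:, -1] - sol.y[:2, -1])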
In this paper we have presented a bounded error estimator for bioprocesses with unstructured growth models. We have proven the stability of the corresponding estimation error in a Lyapunov sense. By means of a linear change of variable given in a natural manner, and with some algebraic manipulations, the state estimator has been constructed; it converges to the current states of the given reference model. We have demonstrated that the bounded error estimator under consideration provides good enough state space estimates, which were bounded on average; besides, the proposed state estimator does not depend on a particular set of initial conditions or a specific model structure. Moreover, we have constructed a high gain observer for which the convergence is fast only if the model is well known, but there is no convergence if the model is partially known. Finally, we have presented some simulations to illustrate the effectiveness of the suggested approach, which shows some
robustness properties against noisy measurements.
References
Aguilar López, R., Martínez Guerra, R., Mendoza Camargo, J. and M. Neria González (2006). Monitoring of an industrial wastewater plant employing finite time convergence observer. Journal of Chemical Technology and Biotechnology 81, 851-857.
Alvarez Ramirez, J. (1999). Robust PI stabilization of a class of continuously stirred tank reactors. AIChE Journal 45, 1992-2000.
Bastin, G. and D. Dochain (1990). On-line Estimation and Adaptive Control of Bioreactors. Elsevier, Amsterdam.
Dávila, J., Fridman, L. and A. Levant (2005). Second order sliding mode observer for mechanical systems. IEEE Transactions on Automatic Control 50, 1785-1789.
Diop, S. and R. Martínez Guerra (2001). An algebraic and data derivative information approach to nonlinear system diagnosis. Proceedings of the European Control Conference (ECC), Porto, Portugal, 2334-2339.
Farza, M., Busawon, K. and H. Hammouri (1998). Simple nonlinear observers for online estimation of kinetic rates in bioreactors. Automatica 34, 301-318.
Gauthier, J., Hammouri, H. and S. Othman (1992). A simple observer for nonlinear systems. Applications to bioreactors. IEEE Transactions on Automatic Control 37, 875-880.
Gouzé, J. and V. Lemesle (2001). A bounded error observer for a class of bioreactor models. Proceedings of the European Control Conference (ECC), Porto, Portugal.
Hadj Sadok, Z. (1999). Modélisation et estimation dans les bioréacteurs; prise en compte des incertitudes: application au traitement de l'eau. PhD thesis, Nice Sophia Antipolis University, Nice.
Hu, S. and J. Wang (2002). Global asymptotic stability and global exponential stability of continuous-time recurrent neural networks. IEEE Transactions on Automatic Control 47, 802-807.
Keller, H. (1987). Non-linear observer design by transformation into a generalized observer canonical form. International Journal of Control 46, 1915-1930.
Khalil, H. (2002). Nonlinear Systems. Third edition. Prentice Hall, New Jersey.
Lemesle, V. and J. Gouzé (2005). Hybrid bounded error observers for uncertain bioreactor models. Bioprocess & Biosystems Engineering 27, 311-318.
Levant, A. (2001). Universal single-input single-output (SISO) sliding mode controllers with finite-time convergence. IEEE Transactions on Automatic Control 46, 1447-1451.
Luenberger, D. (1979). Introduction to Dynamic Systems: Theory, Models and Applications. Wiley, New York.
Martínez Guerra, R. and J. de Leon Morales (1996). Nonlinear estimators: a differential algebraic approach. Journal of Mathematics and Computer Modelling 20, 125-132.
Martínez Guerra, R., Poznyak, A. and V. Díaz (2000). Robustness of high gain observers for closed-loop nonlinear systems: theoretical study and robotics control application. International Journal of Systems Science 31, 1519-1529.
Masoud, S. (1997). Nonlinear state observer design with application to reactors. Chemical Engineering Science 52, 387-404.
Vargas, A., Soto, G., Moreno, J. and G. Buitrón (2000). Observer based time optimal control of an aerobic SBR for chemical and petrochemical wastewater treatment. Water Science and Technology 42, 163-170. | {"url":"http://www.scielo.org.mx/scielo.php?script=sci_arttext&pid=S1665-27382011000200015&lng=en&nrm=iso&tlng=en","timestamp":"2014-04-18T05:39:48Z","content_type":null,"content_length":"56046","record_id":"<urn:uuid:8b265858-85b4-4e86-bc3b-532d2b46d97a>","cc-path":"CC-MAIN-2014-15/segments/1397609532573.41/warc/CC-MAIN-20140416005212-00618-ip-10-147-4-33.ec2.internal.warc.gz"}
4.2: Exterior Angles Theorems
What if you knew that two of the exterior angles of a triangle each measured $130^\circ$? How could you find the measure of the third exterior angle?
Watch This
CK-12 Foundation: Chapter4ExteriorAnglesTheoremsA
James Sousa: Introduction to the Exterior Angles of a Triangle
James Sousa: Proof that the Sum of the Exterior Angles of a Triangle is 360 Degrees
James Sousa: Proof of the Exterior Angles Theorem
An exterior angle is the angle formed by one side of a polygon and the extension of the adjacent side. In every polygon there are two sets of exterior angles, one going around the polygon clockwise and the other going around the polygon counterclockwise. By this definition, an interior angle and its adjacent exterior angle form a linear pair.
The Exterior Angle Sum Theorem states that each set of exterior angles of a polygon adds up to $360^\circ$:
$m\angle 1 + m\angle 2 + m\angle 3 = 360^\circ$ and $m\angle 4 + m\angle 5 + m\angle 6 = 360^\circ$.
Remote interior angles are the two angles in a triangle that are not adjacent to the indicated exterior angle. In the picture below, $\angle A$ and $\angle B$ are the remote interior angles for the exterior angle $\angle ACD$.
The Exterior Angle Theorem states that the sum of the remote interior angles is equal to the non-adjacent exterior angle. From the picture above, this means that $m\angle A + m\angle B = m\angle ACD$.
Given: $\triangle ABC$ with exterior angle $\angle ACD$
Prove: $m \angle A+m \angle B=m \angle ACD$
Statement Reason
1. $\triangle ABC$ with exterior angle $\angle ACD$ Given
2. $m \angle A+m \angle B+m \angle ACB=180^\circ$ Triangle Sum Theorem
3. $m \angle ACB+m \angle ACD=180^\circ$ Linear Pair Postulate
4. $m \angle A+m \angle B+m \angle ACB=m \angle ACB+m \angle ACD$ Transitive PoE
5. $m \angle A+m \angle B=m \angle ACD$ Subtraction PoE
Example A
Find the measure of $\angle RQS$.
The $112^\circ$ angle is an exterior angle of $\triangle RQS$, so it forms a linear pair with $\angle RQS$:
$112^\circ + m\angle RQS = 180^\circ$, so $m\angle RQS = 68^\circ$.
If we draw both sets of exterior angles on the same triangle, we have the following figure:
Notice, at each vertex, the exterior angles are also vertical angles, therefore they are congruent.
$\angle 4 \cong \angle 7$, $\angle 5 \cong \angle 8$, and $\angle 6 \cong \angle 9$.
Example B
Find the measure of the numbered interior and exterior angles in the triangle.
$m\angle 1 + 92^\circ = 180^\circ$ by the Linear Pair Postulate, so $m\angle 1 = 88^\circ$.
$m\angle 2 + 123^\circ = 180^\circ$ by the Linear Pair Postulate, so $m\angle 2 = 57^\circ$.
$m\angle 1 + m\angle 2 + m\angle 3 = 180^\circ$ by the Triangle Sum Theorem, so $88^\circ + 57^\circ + m\angle 3 = 180^\circ$ and $m\angle 3 = 35^\circ$.
$m\angle 3 + m\angle 4 = 180^\circ$ by the Linear Pair Postulate, so $m\angle 4 = 145^\circ$.
Example C
What is the value of $p$?
First, we need to find the missing exterior angle; call it $x$.
By the Exterior Angle Sum Theorem, $130^\circ + 110^\circ + x = 360^\circ$, so $x = 360^\circ - 130^\circ - 110^\circ = 120^\circ$.
Since $x$ and $p$ form a linear pair, $x + p = 180^\circ$, which gives $120^\circ + p = 180^\circ$ and $p = 60^\circ$.
Watch this video for help with the Examples above.
CK-12 Foundation: Chapter4TriangleSumTheoremA
Concept Problem Revisited
The third exterior angle of the triangle below is $\angle 1$
By the Exterior Angle Sum Theorem:
$m\angle 1 + 130^\circ + 130^\circ = 360^\circ$, so $m\angle 1 = 100^\circ$.
Interior angles are the angles on the inside of a polygon, while exterior angles are the angles on the outside of a polygon. Remote interior angles are the two angles in a triangle that are not adjacent to the indicated exterior angle. Two angles that make a straight line form a linear pair and thus add up to $180^\circ$. The Triangle Sum Theorem states that the three interior angles of any triangle always add up to $180^\circ$. The Exterior Angle Sum Theorem states that each set of exterior angles of a polygon adds up to $360^\circ$.
Guided Practice
1. Find $m \angle A$
2. Find $m \angle C$
3. Find the value of $x$
1. Set up an equation using the Exterior Angle Theorem: $m\angle A + 79^\circ = 115^\circ$, so $m\angle A = 36^\circ$.
2. Using the Exterior Angle Theorem, $m\angle C + 16^\circ = 121^\circ$; subtracting $16^\circ$ from both sides gives $m\angle C = 105^\circ$.
3. Set up an equation using the Exterior Angle Theorem.
$(4x+2)^\circ + (2x-9)^\circ = (5x+13)^\circ$, where the two terms on the left are the remote interior angles and the right-hand side is the exterior angle. Combining like terms gives $(6x-7)^\circ = (5x+13)^\circ$, so $x = 20$.
Substituting $20$ for $x$ as a check: $(4(20)+2)^\circ = 82^\circ$, $(2(20)-9)^\circ = 31^\circ$, and $(5(20)+13)^\circ = 113^\circ$; indeed, $82^\circ + 31^\circ = 113^\circ$.
Determine $m\angle{1}$
Use the following picture for the next three problems:
7. What is $m\angle{1}+m\angle{2}+m\angle{3}$?
8. What is $m\angle{4}+m\angle{5}+m\angle{6}$?
9. What is $m\angle{7}+m\angle{8}+m\angle{9}$?
Solve for $x$
13. Suppose the measures of the three angles of a triangle are x, y, and z. Explain why $x+y+z=180$.
14. Suppose the measures of the three angles of a triangle are x, y, and z. Explain why the expression $(180-x)+(180-y)+(180-z)$ represents the sum of the measures of one set of exterior angles of the triangle.
15. Use your answers to the previous two problems to help justify why the sum of the exterior angles of a triangle is 360 degrees. Hint: Use algebra to show that $(180-x)+(180-y)+(180-z)=360$ whenever $x+y+z=180$.
Files can only be attached to the latest version of Modality | {"url":"http://www.ck12.org/book/CK-12-Geometry-Concepts/r2/section/4.2/","timestamp":"2014-04-21T15:42:48Z","content_type":null,"content_length":"167781","record_id":"<urn:uuid:382dd9d0-ebce-4342-81fb-c038ce2153b9>","cc-path":"CC-MAIN-2014-15/segments/1397609540626.47/warc/CC-MAIN-20140416005220-00186-ip-10-147-4-33.ec2.internal.warc.gz"} |
Graph Theory in LaTeX
... after the example in the
tikz gallery
% requires \usetikzlibrary{3d}; \xone, \yone, \xtwo, \ytwo are assumed to be defined elsewhere
\begin{tikzpicture}[x = {(\xone cm,\yone cm)}, y = {(\xtwo cm,\ytwo cm)}, z = {(0cm,1cm)}]
\begin{scope}[canvas is xy plane at z=-5]
% nodes and edges of the bottom layer go here
\end{scope}
\begin{scope}[canvas is xy plane at z=0]
% nodes and edges of the middle layer go here
\end{scope}
\begin{scope}[canvas is xy plane at z=5]
% nodes and edges of the top layer go here
\end{scope}
\end{tikzpicture}
1 comment:
Your blog keeps getting better and better! Your older articles are not as good as newer ones you have a lot more creativity and originality now keep it up! | {"url":"http://graphtheoryinlatex.blogspot.com/2009/09/3d-graph.html","timestamp":"2014-04-20T10:46:46Z","content_type":null,"content_length":"48239","record_id":"<urn:uuid:b9a674c6-b0e1-462f-a175-a537db4fe481>","cc-path":"CC-MAIN-2014-15/segments/1397609538423.10/warc/CC-MAIN-20140416005218-00204-ip-10-147-4-33.ec2.internal.warc.gz"} |
Two Powering Predicates
August 6, 2010
In our study of prime numbers, we have sometimes been lax in specifying the limitations of particular factoring methods; for instance, elliptic curve factorization only works where the number being
factored is co-prime to six. Two conditions that arise from time to time are that the number must not be a perfect square and that the number may not be an integer power of a prime number. In today’s
exercise we will write predicates to identify such numbers.
The usual test for whether a number is a perfect square is to find the integer square root by Newton’s method and then test if the square of that number is the original number. A better algorithm
exploits a theorem of number theory which states that a number is a square if and only if it is a quadratic residue modulo every prime not dividing it. Henri Cohen, in his book A Course in
Computational Algebraic Number Theory, describes the algorithm:
The following computations are to be done and stored once and for all.
1. [Fill 11] For k = 0 to 10 set q11[k] ← 0. Then for k = 0 to 5 set q11[k^2 mod 11] ← 1.
2. [Fill 63] For k = 0 to 62 set q63[k] ← 0. Then for k = 0 to 31 set q63[k^2 mod 63] ← 1.
3. [Fill 64] For k = 0 to 63 set q64[k] ← 0. Then for k = 0 to 31 set q64[k^2 mod 64] ← 1.
4. [Fill 65] For k = 0 to 64 set q65[k] ← 0. Then for k = 0 to 32 set q65[k^2 mod 65] ← 1.
Then the algorithm is:
Given a positive integer n, this algorithm determines whether n is a square or not, and if it is, outputs the square root of n.
1. [Test 64] Set t ← n mod 64 (using if possible only an and statement). If q64[t] = 0, n is not a square and terminate the algorithm. Otherwise, set r = n mod 45045.
2. [Test 63] If q63[r mod 63] = 0, n is not a square and terminate the algorithm.
3. [Test 65] If q65[r mod 65] = 0, n is not a square and terminate the algorithm.
4. [Test 11] If q11[r mod 11] = 0, n is not a square and terminate the algorithm.
5. [Compute square root] Compute q ← ⌊ √ n ⌋ using Newton’s method. If n ≠ q^2, n is not a square and terminate the algorithm. Otherwise, n is a square, output q and terminate the algorithm.
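For concreteness, a direct transcription of this test might look like the sketch below; math.isqrt stands in for the Newton iteration of step 5, and the table sizes follow the filling steps quoted above.
# Sketch of Cohen's square test: quadratic-residue filters mod 64, 63, 65 and
# 11, followed by an exact integer-square-root check (math.isqrt replaces the
# Newton iteration of step 5).
from math import isqrt
q64 = [0] * 64; q63 = [0] * 63; q65 = [0] * 65; q11 = [0] * 11
for k in range(32): q64[k * k % 64] = 1
for k in range(32): q63[k * k % 63] = 1
for k in range(33): q65[k * k % 65] = 1
for k in range(6):  q11[k * k % 11] = 1
def square(n):
    if q64[n & 63] == 0: return None        # n mod 64 via a bitwise and
    r = n % 45045                           # 45045 = 63 * 65 * 11
    if q63[r % 63] == 0: return None
    if q65[r % 65] == 0: return None
    if q11[r % 11] == 0: return None
    q = isqrt(n)
    return q if q * q == n else None        # the square root, or None if n is not a square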
Our second predicate is the prime-power test, which determines, for a given n, if there exist two numbers p and k such that p^k = n, with p prime. Stephen Wolfram’s Mathematica program implements the
prime-power test as PrimePowerQ, which returns either True or False. According to the manual,
The algorithm for PrimePowerQ involves first computing the least prime factor p of n and then attempting division by n until either 1 is obtained, in which case n is a prime power, or until
division is no longer possible, in which case n is not a prime power.
(Note: they probably meant “attempting division by p.”) Wolfram gives the example PrimePowerQ[12167], which is True, since 23^3 = 12167. That algorithm will take a while, as factoring is a
non-trivial problem.
Cohen determines if n is a prime power by first assuming that n = p^k, where p is prime. Then Fermat’s Little Theorem gives p | gcd(a^n − a, n). If that fails, n is not a prime power. Here is Cohen’s algorithm:
Given a positive integer n > 1, this algorithm tests whether or not n is of the form p^k with p prime, and if it is, outputs the prime p.
1. [Case n even] If n is even, set p ← 2 and go to Step 4. Otherwise, set q ← n.
2. [Apply Rabin-Miller] By using Algorithm 8.2.2 show that either q is a probable prime or exhibit a witness a to the compositeness of q. If q is a probable prime, set p ← q and go to Step 4.
    3. [Compute GCD] Set d ← gcd(a^q − a, q). If d = 1 or d = q, then n is not a prime power and terminate the algorithm. Otherwise set q ← d and go to Step 2.
4. [Final test] (Here p is a divisor of n which is almost certainly prime.) Using a primality test prove that p is prime. If it is not (an exceedingly rare occurrence), set q ← p and go to Step
2. Otherwise, by dividing n by p repeatedly, check whether n is a power of p or not. If it is not, n is not a prime power, otherwise output p. Terminate the algorithm.
We have been a little sloppy in this algorithm. For example in Step 4, instead of repeatedly dividing by p we could use a binary search analogous to the binary powering algorithm. We leave this
as an exercise for the reader.
Cohen’s Algorithm 8.2.2 refers to the search for a witness to the compositeness of a number which we used in the exercise on the Miller-Rabin primality checker.
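A rough Python rendering of the whole prime-power test is sketched below; the Miller-Rabin witness search plays the role of Algorithm 8.2.2, and the final repeated division is the simple (non-binary-search) variant Cohen mentions. Names and helper structure are mine, not Cohen's.
# Sketch of Cohen's prime-power test.  miller_rabin_witness returns None if q
# is a probable prime, otherwise a witness a to q's compositeness (it plays
# the role of Algorithm 8.2.2).
from math import gcd
from random import randrange
def miller_rabin_witness(q, trials=25):
    if q < 4:
        return None if q in (2, 3) else 1
    d, s = q - 1, 0
    while d % 2 == 0:
        d //= 2; s += 1
    for _ in range(trials):
        a = randrange(2, q - 1)
        x = pow(a, d, q)
        if x in (1, q - 1):
            continue
        for _ in range(s - 1):
            x = x * x % q
            if x == q - 1:
                break
        else:
            return a              # a witnesses that q is composite
    return None                   # q is a probable prime
def prime_power(n):
    if n % 2 == 0:
        p = 2
    else:
        q = n
        while True:
            a = miller_rabin_witness(q)
            if a is None:
                p = q             # q is (almost certainly) the prime divisor
                break
            d = gcd((pow(a, q, q) - a) % q, q)   # Fermat: p | gcd(a^q - a, q)
            if d == 1 or d == q:
                return None       # n is not a prime power
            q = d
    while n % p == 0:             # repeated division; a binary search would be faster
        n //= p
    return p if n == 1 else None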
These two beautiful algorithms show the power and elegance of number theory. Cohen’s book is a fine example of the blend of mathematics and programming, and does an excellent job of explaining
algorithms in a way that makes them easy to implement; most math textbooks aren’t so good.
Your task is to implement Cohen’s two powering predicates. When you are finished, you are welcome to read or run a suggested solution, or to post your own solution or discuss the exercise in the
comments below.
One Response to “Two Powering Predicates”
1. August 10, 2010 at 3:31 PM
Something went wrong during editing of the exercise, and the code given for the square? function was incorrect. It has now been fixed. | {"url":"http://programmingpraxis.com/2010/08/06/two-powering-predicates/?like=1&_wpnonce=13170383b7","timestamp":"2014-04-17T07:02:10Z","content_type":null,"content_length":"65634","record_id":"<urn:uuid:3e30d2c1-6c26-4bfd-a7f9-1336fa3ddac2>","cc-path":"CC-MAIN-2014-15/segments/1398223207046.13/warc/CC-MAIN-20140423032007-00022-ip-10-147-4-33.ec2.internal.warc.gz"} |
Summary: The Center for Control, Dynamical Systems, and Computation
University of California at Santa Barbara
Fall 2008 Seminar Series
Randomized optimization with an expected value
criterion: Finite sample bounds and applications
John Lygeros
ETH Zurich
Tuesday, October 14, 2008 4:00-5:00pm ESB 2001
Simulated annealing, Markov Chain Monte Carlo, and genetic algorithms are all randomized methods
that can be used in practice to solve (albeit approximately) complex optimization problems. They rely on randomly sampling the parameter space in a way that, over time, concentrates on its desirable parts (i.e. near the optimizers). Many of these methods come with asymptotic convergence
guarantees, that establish conditions under which the Markov chain converges to a globally optimal
solution in an appropriate probabilistic sense. An interesting question that is usually not covered by
asymptotic convergence results is the rate of convergence: How long should the randomized algorithm
be executed to obtain a near optimal solution with high probability? Answering this question allows
one to determine a level of accuracy and confidence with which approximate optimality claims can
be made, as a function of the amount of time available for computation. In this talk we present some | {"url":"http://www.osti.gov/eprints/topicpages/documents/record/132/4300805.html","timestamp":"2014-04-20T21:46:45Z","content_type":null,"content_length":"8446","record_id":"<urn:uuid:0d787fb9-e4cd-4bf2-8974-75bd1cf22f13>","cc-path":"CC-MAIN-2014-15/segments/1397609539230.18/warc/CC-MAIN-20140416005219-00489-ip-10-147-4-33.ec2.internal.warc.gz"} |
Determinant in Maple
December 11th 2011, 06:10 PM #1
Dec 2011
Determinant in Maple
Hi all !!
I'm trying to create a determinant code in Maple, or to find how the determinant is coded in the library.
I had a look on the internet but didn't find anything, could anyone help me ?
Re: Determinant in Maple
Maple has its own function you can call, but i'm guessing you want to write one yourself?
Is your matrix a fixed order?
Re: Determinant in Maple
Yes, you're right, I need a code, either mine or not, it even may be in Maple's library, but I need the explicit code so as to be able to explain it and how it works.
It must work for any matrix order (I could do it for n <= 4, but beyond that I think I'd quickly get lost, and for an arbitrary size I can't manage it ...)
Re: Determinant in Maple
If you only need to code it for 2x2, 3x3 or 4x4, then your function could take the matrix as the input. First it should find the order; if the order is not 2x2, 3x3 or 4x4 then return an error
message, otherwise find the determinant using hardcoded calculations on the matrix elements.
Psuedo Code could be:
Input Matrix
Find Order (count rows, count columns) for mxn matrix
If m not equal to n or n,m >4, return an error
Find deteminant
if m=2, then a11*a22-a12*a21
if m=3, then a11*(a22*a33-a32*a23)+...
if m=4, then .....
Re: Determinant in Maple
Yeah, but I need the code to be applicable for ANY order, and I will probably have to give an example with dim(M)=10.
There's no way I code all possibilities from 1 to 10 ... =S
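For what it's worth, the usual way to handle an arbitrary order is recursive cofactor (Laplace) expansion along the first row. Here is a rough sketch in Python (the names are mine); the same recursion carries over to a Maple proc using RowDimension and SubMatrix from the LinearAlgebra package, though that version is not spelled out here.
# Recursive cofactor (Laplace) expansion along the first row; works for a
# square matrix of any order given as a list of lists.  Purely illustrative:
# it runs in O(n!) time, so n = 10 is feasible only as a demonstration.
def det(A):
    n = len(A)
    if any(len(row) != n for row in A):
        raise ValueError("matrix must be square")
    if n == 1:
        return A[0][0]
    total = 0
    for j in range(n):
        minor = [row[:j] + row[j+1:] for row in A[1:]]   # delete row 1 and column j
        total += (-1) ** j * A[0][j] * det(minor)
    return total
print(det([[1, 2], [3, 4]]))   # prints -2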
Dec 2011 | {"url":"http://mathhelpforum.com/math-software/194051-determinant-maple.html","timestamp":"2014-04-18T08:46:53Z","content_type":null,"content_length":"41745","record_id":"<urn:uuid:81bb4b11-d0ec-4414-9020-adcbb8151776>","cc-path":"CC-MAIN-2014-15/segments/1397609533121.28/warc/CC-MAIN-20140416005213-00177-ip-10-147-4-33.ec2.internal.warc.gz"} |
Naperville Algebra 2 Tutor
...These methods include: * Taking notes, and having students follow along, copying the full notes for themselves. * Frequently stopping to have students explain what they understand in their own
words. * Breaking problems down into step-by-step plans - even students who struggle with the material c...
21 Subjects: including algebra 2, chemistry, calculus, statistics
...Talk to you soon! Kelly ...I have been using Apple products daily since my first purchase in 2009. I use an iMac, iPad, iPhone, and iPod and have tutored both adults and children in how to
use these devices.
26 Subjects: including algebra 2, Spanish, geometry, chemistry
...For help in pure statistics at the introductory level, I know my stuff. I can help students with basic financial or managerial accounting, but I am not a specialist in this subject. I use
accounting data a great deal in finance applications.
13 Subjects: including algebra 2, statistics, algebra 1, geometry
...I have tutored math to many grade school kids successfully. I hold a bachelor of science in electrical engineering with emphasis in mathematics. I teach with solid math acumen, coupled with
encouragement and positive reinforcement.
18 Subjects: including algebra 2, geometry, algebra 1, ASVAB
...I love all the visual aspects of it and, in turn, showing students how you can combine Geometry with Algebra to solve numerous real world problems - to make their study of Geometry more
interesting. I'm convinced that the most important life-long skill that anyone can possess and continue to dev...
20 Subjects: including algebra 2, reading, English, writing | {"url":"http://www.purplemath.com/naperville_il_algebra_2_tutors.php","timestamp":"2014-04-18T14:07:46Z","content_type":null,"content_length":"23893","record_id":"<urn:uuid:9b08f1d0-6035-4fa3-b3ec-899370aeb7d5>","cc-path":"CC-MAIN-2014-15/segments/1397609533689.29/warc/CC-MAIN-20140416005213-00152-ip-10-147-4-33.ec2.internal.warc.gz"} |
Hitting probabilities
April 17th 2008, 07:16 PM #1
Junior Member
Mar 2008
Hitting probabilities
Please help on hitting probabilities, I can't remember how to do this basic stuff.
A Markov chain with states 1-5 has one-step transition matrix
$\left(\begin{array}{ccccc} \frac{1}{8} & \frac{1}{8} & \frac{1}{8} & \frac{1}{8} & \frac{1}{8} \\ 0 & \frac{3}{4} & 0 & 0 & \frac{1}{4} \\ 0 & 0 & \frac{1}{2} & 0 & 0 \\ \frac{1}{2} & 0 & 0 & 0 & 0 \\ 0 & \frac{1}{2} & 0 & 0 & \frac{1}{2} \end{array}\right)$
For each state $i$, find
$a_i = P(\text{process ever reaches } K = \{2,5\} \mid X_0 = i)$
Thank you
That is the probability of going from state two to state five. It is in row two and column five. Matrices are always listed (r, c) for row and column. The probability is one in four.
Thank you, just one more question.
So no matter how big the square matrix is, I just need to look at the row two and column five for that question?
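For the original question about the $a_i$, the standard tool is first-step analysis: set $a_i = 1$ for $i \in K$ and $a_i = \sum_j p_{ij} a_j$ for $i \notin K$, then solve the resulting linear system. A small sketch (my own notation, states indexed from 0) is:
# First-step analysis for hitting probabilities: a_i = 1 on the target set K,
# a_i = sum_j P[i][j] * a_j off K.  States are indexed from 0.  This assumes
# the linear system is nonsingular (if some states can never reach K inside a
# closed class, the minimal non-negative solution is required instead).
import numpy as np
def hitting_probabilities(P, K):
    P = np.asarray(P, dtype=float)
    n = P.shape[0]
    K = set(K)
    A = np.eye(n)
    b = np.zeros(n)
    for i in range(n):
        if i in K:
            b[i] = 1.0              # already in K
        else:
            A[i, :] -= P[i, :]      # a_i - sum_j p_ij a_j = 0
    return np.linalg.solve(A, b)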